
career achievement – unlocked

I started working with VMware products (ESX 3.0) back in January 2007. Like many IT folks who spent years working with physical equipment in DCs, I still remember the excitement I felt when I performed my first vMotion. It was AWESOME! I remember showing my buddies/co-workers how we could move a virtual machine from one physical host to another, while I was RDP’d into the VM, and only losing one network ping. We are obviously “light years” ahead of those days, but that initial taste of Virtualization forever changed my career and made me the IT Professional I am today.

From that one moment, I’ve continued to grow my career over the years, and continued to grow with VMware and their myriad of products. I “resurrected” the Central Mississippi VMUG and really enjoyed learning from the vCommunity about what they were working with, issues they were having, etc. In fact, this became a regular session we would have at our VMUG meetings titled “All Things VMware.” Hearing the community talk and share knowledge with one another is an awesome thing, and one I’m very proud to be part of to this day.

My Virtualization and Engineering roles with various companies/industries kept my excitement for VMware’s products growing. I started having more and more friends/colleagues go to work for VMware; they would tell me what a great culture it has, and “nudge” me when VMware had a role they thought I would be good at.

I started to really think that I wanted to be on the inside…I wanted the opportunity to help others use VMware products and help them see the benefits of new technologies as they came out. So I finally applied for a role at VMware. I would like to tell you that I was immediately hired and everything has been candy and roses ever since, but that was not the case (yet).

I don’t recall the exact opportunity with VMware that I initially applied to, or exactly when it was, but my guess would be 5 or 6 years ago. I remember getting a canned email stating something to the effect that they “found other candidates that better fit the qualifications” of the role. Like most folks, I don’t like rejection, but I realize that it is a part of life. It is what you do after the rejection that defines you.

I have moved through different roles since that first rejection…going from an SVP, Manager at a Regional Bank, to a Sr Engineer position at an Oil and Gas Company, to a Sr Virtualization Engineer at a Software/Telco company. All along the way, I received hands-on experience with VMware products in larger and larger environments. The one constant I’ve learned throughout my years working with VMware products is that you have to be willing to learn and willing to change. You cannot stay stagnant…you have to continue to invest in yourself and teach yourself as new technologies emerge.

As I continued to grow in my skillset, I would apply for different roles at VMware from time to time. I started getting further along in the hiring process, but ultimately would see the same result: an email thanking me for applying, but no offer. Sometimes the role was canceled for various reasons, sometimes an internal staff member was moved into the role. Whatever the reason, it still felt like a rejection to me, but it also fueled me to keep pushing and pursuing my goal of working for VMware.

I don’t know the exact number, but I believe I’ve applied for at least 5 roles over the years, and each time, I learned something…perhaps something I should teach myself, perhaps something I should handle better in the process, etc. Ultimately, I took these things I learned and pushed myself to do better. My reasoning was, even if it didn’t help me get a job at VMware, at least it would help me at my current employer and be something I could share with the vCommunity at our VMUG meetings.

Fast forward to just a few months ago, and I saw a LinkedIn post by one of the VMware guys I met at our Orlando UserCon. It was for a role as a VMware TAM (Technical Account Manager) in Nashville, TN. My family and I are very familiar with the area and had recently thought about moving there when we sold our house in Mississippi. I thought, “why not give this one a shot?” As I read the job description, talked to existing TAMs to get their feedback, and learned more from close friends who work at VMware, I decided to apply. I went through the process, had several phone interviews, a Tech Screen, and a face-to-face interview, and ultimately received an offer to go to work for VMware as a TAM in Nashville. My start date is this coming Tuesday, February 19, 2019.

To say I’m excited would be an understatement…I can’t wait to get started and to see what all I can learn from others, and possibly share some of what I learn. I am very thankful for my vCommunity friends that have continued to push me throughout my career, and thankful for the companies I’ve worked for that allowed me to continue to grow in my Virtualization skillset. I look forward to working with VMware and continuing to be an advocate for the VMUG community!

To SSH or not to SSH — Either way, there is a script!

I’ve had scripts in the past for enabling SSH on all of my VMware Hosts, but recently had a PCI Audit come through requesting that I disable SSH on all hosts in my PCI environment.  Well, that was something I hadn’t done before, but I knew it wouldn’t take long to “reverse engineer” my “enable SSH script” and make a “disable SSH script.”

Below are the different scripts I used for my different environments, and I hope you find them useful.  The “enable SSH script” not only enables SSH, but also changes the default Startup Policy for SSH to “start and stop with the host”…additionally, it suppresses the shell warning you normally see when SSH is enabled on a Host.

The “disable SSH script” disables SSH and changes the default Startup Policy back to “start and stop manually.”  Each script is written to function at the Cluster Level in VMware, but you can easily modify it to focus on larger or smaller portions of your environment as needed.

Without further ado, here are the scripts….

Script for Enabling SSH
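A minimal PowerCLI sketch of the enable side looks something like this; the vCenter and cluster names are placeholders, and the shell warning is suppressed via the UserVars.SuppressShellWarning advanced setting:

# --- Enable SSH on all hosts in a cluster (sketch) ---
$vCenter     = "vcenter.example.local"   # placeholder vCenter name
$ClusterName = "PCI-Cluster"             # placeholder cluster name

Connect-VIServer -Server $vCenter

foreach ($esx in (Get-Cluster -Name $ClusterName | Get-VMHost)) {
    # Find the SSH service (key "TSM-SSH") on the host
    $ssh = Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "TSM-SSH" }

    # Start SSH and set the Startup Policy to "Start and stop with host"
    Start-VMHostService -HostService $ssh -Confirm:$false | Out-Null
    Set-VMHostService -HostService $ssh -Policy On | Out-Null

    # Suppress the shell warning that normally appears while SSH is enabled
    Get-AdvancedSetting -Entity $esx -Name "UserVars.SuppressShellWarning" |
        Set-AdvancedSetting -Value 1 -Confirm:$false | Out-Null
}

Disconnect-VIServer -Server $vCenter -Confirm:$false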

 

Script to Disable SSH
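And a matching sketch for the disable side, again with placeholder names, stopping the SSH service and setting the Startup Policy back to “start and stop manually”:

# --- Disable SSH on all hosts in a cluster (sketch) ---
$vCenter     = "vcenter.example.local"   # placeholder vCenter name
$ClusterName = "PCI-Cluster"             # placeholder cluster name

Connect-VIServer -Server $vCenter

foreach ($esx in (Get-Cluster -Name $ClusterName | Get-VMHost)) {
    $ssh = Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "TSM-SSH" }

    # Stop SSH and set the Startup Policy back to "Start and stop manually"
    Stop-VMHostService -HostService $ssh -Confirm:$false | Out-Null
    Set-VMHostService -HostService $ssh -Policy Off | Out-Null
}

Disconnect-VIServer -Server $vCenter -Confirm:$false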

 

PowerCLI – Identify VMs with RDM disks

SCRIPT SYNOPSIS / REASON CREATED – At the time of this script, we have over 800 VMs across multiple datacenters. In a few of those DCs, we have a small number of VMs that have RDMs attached (for use with Microsoft Clustering). Our current environment is VMware 5.5 hosts, so we are still VERY careful when doing anything (vMotion, etc.) on these VMs with RDMs. Per the article at https://blogs.vmware.com/apps/2015/02/say-hello-vmotion-compatible-shared-disks-windows-clustering-vsphere.html, this will be a non-issue for us once we get all hosts upgraded to vSphere 6.

 

OVERVIEW OF STEPS – This is a very simple script that connects to your vCenter with the supplied credentials and then gets all VMs in the environment, specifically looking for VMs with a disk type of “RawPhysical” or “RawVirtual.” Once the script identifies VMs with these types of disks, it outputs the Parent (VM) Name, Disk File Type, and SCSI Canonical Name. There is an additional line of code, currently commented out, that can output the results to a CSV file if desired.
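A minimal sketch of that flow is below; the vCenter name and CSV path are placeholders, and Get-HardDisk’s -DiskType parameter does the actual filtering:

# --- Identify VMs with RDM disks (sketch) ---
$vCenter = "vcenter.example.local"       # placeholder vCenter name
$cred    = Get-Credential                # the supplied credentials

Connect-VIServer -Server $vCenter -Credential $cred

# Get all VMs, keeping only disks in physical or virtual compatibility RDM mode
$rdmDisks = Get-VM |
    Get-HardDisk -DiskType "RawPhysical","RawVirtual" |
    Select-Object Parent, DiskType, ScsiCanonicalName

# Output Parent (VM) Name, Disk File Type, and SCSI Canonical Name to screen
$rdmDisks | Format-Table -AutoSize

# Optional: export the results to a CSV file instead (left commented out)
# $rdmDisks | Export-Csv -Path "C:\Temp\RDM-VMs.csv" -NoTypeInformation

Disconnect-VIServer -Server $vCenter -Confirm:$false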

 

Create Patch Baselines and Remediate Hosts in a Cluster

SCRIPT SYNOPSIS / REASON CREATED – With several hundred hosts in our environment, we are seemingly constantly applying host patches…many times, before we get all hosts updated to a current version, there are additional patches/fixes that need to be applied. The goal for this script is to make it easier to update all VMHosts in a designated cluster.

OVERVIEW OF STEPS – After manually putting the host(s) to be remediated into Maintenance Mode, the script below connects to vCenter, creates custom baselines (if not previously created), attaches the baselines to the designated cluster, scans for needed patches, and then remediates the host(s), disabling Power Management/FT/HA and running the remediation simultaneously on all hosts in the cluster.
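A condensed sketch of that flow using the Update Manager PowerCLI cmdlets is below; the vCenter, cluster, and baseline names are placeholders, and a single static baseline built from all available host patches stands in for the custom baselines described above:

# --- Create a patch baseline and remediate a cluster (sketch) ---
# Assumes the host(s) are already in Maintenance Mode
$vCenter      = "vcenter.example.local"   # placeholder vCenter name
$ClusterName  = "Production-Cluster"      # placeholder cluster name
$BaselineName = "Custom-HostPatches"      # placeholder baseline name

Connect-VIServer -Server $vCenter
$cluster = Get-Cluster -Name $ClusterName

# Create the custom baseline only if it doesn't already exist
$baseline = Get-Baseline -Name $BaselineName -ErrorAction SilentlyContinue
if (-not $baseline) {
    # Static baseline containing every host patch currently in the patch repository
    $baseline = New-PatchBaseline -Static -Name $BaselineName -IncludePatch (Get-Patch)
}

# Attach the baseline to the cluster and scan the hosts for missing patches
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster

# Remediate all hosts in the cluster at the same time, temporarily disabling
# DPM, FT, and HA admission control so the remediation can proceed
Remediate-Inventory -Entity $cluster -Baseline $baseline `
    -ClusterDisableDistributedPowerManagement $true `
    -ClusterDisableFaultTolerance $true `
    -ClusterDisableHighAvailability $true `
    -ClusterEnableParallelRemediation $true `
    -Confirm:$false

Disconnect-VIServer -Server $vCenter -Confirm:$false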

 

 

Complex Script – Rename vDS-attached hosts from IP to DNS Name

SCRIPT SYNOPSIS / REASON CREATED — We have several hundred hosts in various DCs around the world, and the majority of these hosts were connected to vCenter via the IP address rather than the DNS Name, which we wanted to change.  We are running vSphere ESXi 5.5 on all hosts, and each host is connected to a site vDS, which added complexity to this renaming process.  This “complex” script comprises seven separate steps and should be run on hosts that are already in Maintenance Mode.

OVERVIEW OF STEPS — After manually putting the host(s) into Maintenance Mode, the script below connects to vCenter, migrates the host(s) from the vDS to a vSS, removes the host(s) from the vDS (after the physical NICs are attached to the vSS), takes the IP address of the host(s) and performs a reverse-DNS lookup that is piped out to a variable, removes the host(s) from vCenter, adds the host(s) back to vCenter using the DNS variable previously created, adds the host(s) back to the existing vDS and migrates the vmk ports and physical NICs from the vSS back to the vDS, and then disconnects from vCenter.
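A condensed, single-host sketch of those seven steps is below; the vCenter/vDS/portgroup names, VLAN ID, and vmk/vmnic names are all placeholders, and a production version would need to loop over multiple hosts and may want to move uplinks one at a time:

# --- Rename a vDS-attached host from IP to DNS name (sketch) ---
# Assumes the host is already in Maintenance Mode
$vCenter  = "vcenter.example.local"    # placeholder vCenter name
$HostIP   = "10.10.10.21"              # host currently connected by IP
$vdsName  = "Site-vDS"                 # placeholder vDS name
$rootCred = Get-Credential             # ESXi root credentials for re-adding the host

Connect-VIServer -Server $vCenter

$esx     = Get-VMHost -Name $HostIP
$vds     = Get-VDSwitch -Name $vdsName
$cluster = $esx.Parent                 # remember where the host lives

# 1. Build a temporary standard switch and portgroup on the host
$vss = New-VirtualSwitch -VMHost $esx -Name "vSwitch-Temp"
$pg  = New-VirtualPortGroup -VirtualSwitch $vss -Name "Mgmt-Temp" -VLanId 10   # assumed mgmt VLAN

# 2. Migrate the uplinks and the management vmkernel port from the vDS to the vSS
#    (assumes the vDS uplinks are vmnic0/vmnic1 and management is vmk0)
$vmk   = Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk0"
$pnics = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic0","vmnic1"
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $pnics `
    -VMHostVirtualNic $vmk -VirtualNicPortgroup $pg -Confirm:$false

# 3. Remove the host from the vDS now that its NICs/vmk live on the vSS
Remove-VDSwitchVMHost -VDSwitch $vds -VMHost $esx -Confirm:$false

# 4. Reverse-DNS lookup of the host's IP, piped out to a variable
$dnsName = ([System.Net.Dns]::GetHostEntry($HostIP)).HostName

# 5. Remove the host from vCenter
Remove-VMHost -VMHost $esx -Confirm:$false

# 6. Add the host back to vCenter using the DNS name from step 4
$esx = Add-VMHost -Name $dnsName -Location $cluster -Credential $rootCred -Force

# 7. Re-join the vDS and migrate the vmk port and physical NICs back to it
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
$vmk   = Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk0"
$pnics = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic0","vmnic1"
$vdpg  = Get-VDPortgroup -VDSwitch $vds -Name "Mgmt-vDS"   # assumed vDS management portgroup
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnics `
    -VMHostVirtualNic $vmk -VirtualNicPortgroup $vdpg -Confirm:$false

Disconnect-VIServer -Server $vCenter -Confirm:$false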