Monday, December 26, 2022

What happens to packets with a VMware vSphere Distributed Switch?

Distributed Virtual Port-Groups (dvPGs) in vSphere are a powerful tool for controlling network traffic behavior. vSphere Distributed Switches (vDS) act as non-transitive Layer 2 proxies and give us the ability to modify packets in flight in a variety of complex ways.

Note: Cisco UCS implements something similar with its Fabric Interconnects, but software control of behavior is the key difference here.

Where do the packets go?

Let's start with a packet flow diagram:

ESXi evaluates a combination of source and destination dvPG/MAC address conditions and will ship the packet to one of the following "stages":

  • vDS Memory Bus: This is only an option if the source and destination VM are both on the same port-group and the same host
  • vDS Uplink Stage: This is where the vSphere Distributed Switch receives the traffic from the vNIC and applies any proxy settings
  • UCS FI: In Cisco UCS environments configured in end-host mode, traffic will depend on the vSphere Distributed Switch's uplink pinning, as Fabric Interconnects do not transit between redundant nodes. If they are configured in transitive mode, they function as external Layer 2 switches
  • External Switching: If the destination is in the same broadcast domain (determined by network/host bits) packets will flow via the access layer (or higher layers depending on the network design)
  • External Layer 3 Routing: Traffic outside of the broadcast domain is forwarded to the default gateway for handling
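The stage selection above can be sketched roughly in code. This is a simplified model of the decision, not ESXi's actual implementation; the stage names mirror the list above, and the hosts and port-group names are made up:

```python
# Rough model of how ESXi picks a forwarding stage for a frame.
# Real ESXi logic evaluates more conditions (uplink pinning, UCS FI mode, etc.).

def forwarding_stage(src, dst, same_broadcast_domain):
    """src/dst are dicts with 'host' and 'dvpg' keys."""
    if src["host"] == dst["host"] and src["dvpg"] == dst["dvpg"]:
        return "vDS memory bus"          # never leaves the host
    if same_broadcast_domain:
        return "external switching"      # out the uplink, via the access layer
    return "external L3 routing"         # handed to the default gateway

vm_a = {"host": "esx01", "dvpg": "dvPG-Web"}
vm_b = {"host": "esx01", "dvpg": "dvPG-Web"}
vm_c = {"host": "esx02", "dvpg": "dvPG-Web"}

print(forwarding_stage(vm_a, vm_b, same_broadcast_domain=True))  # vDS memory bus
print(forwarding_stage(vm_a, vm_c, same_broadcast_domain=True))  # external switching
```

The first check is the one this post is about: same host plus same dvPG short-circuits everything else.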

Testing The Hypothesis

vDS tries to optimize VM-to-VM traffic to the shortest possible path. If a Virtual Machine attempts to reach another Virtual Machine on the same host, and same dvPG, ESXi will open up a path via the host's local memory bus to transfer the Ethernet traffic. 

This hypothesis is verifiable by creating two virtual machines on the same port-group. If the machines in question are on the same host, traffic will flow even if the VLAN in question isn't trunked to the host, an important thing to keep in mind when troubleshooting.

An easy method to test the hypothesis is to start an iperf session between two VMs and change the VM placement between runs. The observed bandwidth will often differ between the memory-bus path and the provisioned network adapters.

For this example, we will execute an iperf TCP test with default settings between two VMs on the same port-group, then vMotion one of the servers to another host and repeat the test.

  • Output (Same Host, same dvPG):
    root@host:~# iperf -c IP
    ------------------------------------------------------------
    Client connecting to IP, TCP port 5001
    TCP window size:  357 KByte (default)
    ------------------------------------------------------------
    [  3] local IP port 33260 connected with IP port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3] 0.0000-10.0013 sec  6.98 GBytes  6.00 Gbits/sec
        
  • Output (Different host, same dvPG):
    root@host:~# iperf -c IP
    ------------------------------------------------------------
    Client connecting to IP, TCP port 5001
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [  3] local IP port 59478 connected with IP port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3] 0.0000-10.0009 sec  10.5 GBytes  9.06 Gbits/sec
      

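Comparing runs is easy to script. This sketch parses the summary line of classic iperf (v2) output; the exact line format is an assumption about your iperf version, and the sample lines are copied from the outputs above:

```python
import re

# Extract the Gbits/sec figure from an iperf (v2) summary line.
# Assumes the classic "[  3] 0.0-10.0 sec  X GBytes  Y Gbits/sec" format.
def parse_gbits(summary_line):
    match = re.search(r"([\d.]+)\s+Gbits/sec", summary_line)
    if match is None:
        raise ValueError("no Gbits/sec figure found")
    return float(match.group(1))

same_host = "[  3] 0.0000-10.0013 sec  6.98 GBytes  6.00 Gbits/sec"
diff_host = "[  3] 0.0000-10.0009 sec  10.5 GBytes  9.06 Gbits/sec"

# In this lab, the memory-bus path was ~3 Gbit/s slower than the NICs.
print(parse_gbits(diff_host) - parse_gbits(same_host))
```

Automating the parse makes it painless to repeat the test after every layout change.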
Troubleshooting

Understanding vSphere Distributed Switch packet flow is key when trying to assess networking issues. The shared memory bus provided by ESXi is a powerful tool, but it means short, less consistent paths are favored over longer, more consistent ones.

When constructing the VMware Validated Design (VVD) and the system defaults that ship with ESXi, the system architects chose a "one size fits most" strategy. This network behavior is desirable in Gigabit data centers or edge PoPs, or anywhere network speed is lower than the memory bus. In most server hardware, the system memory bus will exceed the backend network adapters' capacity, improving performance for small clusters. It's important to realize that VMware doesn't just sell software to large enterprises - cheaper, smaller deployments make up the majority of customers.
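A quick back-of-the-envelope check shows why the memory bus usually wins; the figures below are illustrative theoretical peaks, not measurements from any particular system:

```python
# Back-of-the-envelope comparison; both figures are theoretical peaks.
ddr4_channel_GBps = 2666e6 * 8 / 1e9  # DDR4-2666: MT/s * 8 bytes/transfer ~= 21.3 GB/s
ten_gbe_GBps = 10 / 8                 # 10 Gbit/s ~= 1.25 GB/s

# A single memory channel is roughly 17x wider than a 10GbE adapter.
print(ddr4_channel_GBps / ten_gbe_GBps)
```

Even multiple bonded NICs rarely close that gap, which is why the memory-bus default helps small deployments.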

Impacts on Design

Shorter paths are not always desirable. In my lab, hardware offloads like TCP Segmentation Offload (TSO) are available and make outbound traffic more performant. Newer hardware architectures, particularly 100GbE (802.3by), benefit from relying on the network adapter for encapsulation/decapsulation work instead of CPU resources better allocated to VMs.

This particular "feature" is straightforward to design around. The vSphere Distributed Switch provides us the requisite tools to achieve our aims, and we can follow several paths to control behavior to match design:

  • When engineering for high performance/network offload, creating multiple port-groups with tunable parameters is something a VI administrator should be comfortable doing. Automating port-group deployment and managing the configuration as code is even better.
  • If necessary, consider SR-IOV for truly performance intensive workloads.
  • The default is still good for 9 of 10 use cases. Design complexity doesn't make a VI administrator "leet"; consider any deviation from recommended carefully.
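Managing port-group configuration as code can start as simply as describing the groups as data and generating per-group settings from a baseline. The names and keys below are hypothetical; in practice you would feed the rendered output into PowerCLI or an API client:

```python
# Hypothetical port-group definitions as data; names and keys are illustrative.
base = {"vlan": 100, "load_balancing": "route_by_originating_port"}

port_groups = {
    "dvPG-Web-Default": {},              # inherits the baseline unchanged
    "dvPG-Web-Offload": {"vlan": 110},   # separate VLAN for NIC-offloaded traffic
}

def render(name, overrides):
    cfg = dict(base)        # start from the baseline settings
    cfg.update(overrides)   # apply per-group tuning
    return {"name": name, **cfg}

for name, overrides in port_groups.items():
    print(render(name, overrides))
```

Keeping the baseline in one place means a tuning change rolls out to every derived port-group consistently.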

As always, it's important to know your environment before making any design decision. Few localized Virtual Machines concentrate enough traffic to benefit from additional tuning. Real-world performance testing will indicate when these design practices are necessary.

Saturday, December 17, 2022

Security patches are available for VMware vCenter 8.0 - Let's try the new vCenter Lifecycle Manager!

Let's take a look at the new lifecycle management process for vCenter.

The old process via the VAMI was easy to execute, but the industry is upping the ante with automated pre- and post-testing. Cisco's NX-OS installer is another example: complex procedures (in Cisco's case, sequential PGA or microcode updates) invite problems and escalate a "simple" process into something only the senior-most engineer can safely operate.

vSphere 8 seeks to improve on vSphere 7's upgrade planner by including a "vCenter Lifecycle Manager" to administer package upgrades in an integrated, reliable fashion that includes available pre-checks and reduces "update anxiety".

To start, navigate to Updates -> vCenter Server -> Update:

Under Select Version, it's now possible to view eligible updates, along with the type and Release Notes:

For those of us who use NSX Data Center or other integrated products, interoperability checks are part of the wizard.

Unfortunately, the vCenter backup step is not included as part of the wizard at this time. (Note: You can back up directly to a filer with vSphere 7 or newer)

Looks like we're not quite ready to use this feature to its fullest potential yet. Some notable limitations still exist and should be compensated for.

Establishing processes and automatic patch notifications (RSS is a powerful tool for that) will go a long way toward making a New Year's resolution to keep our systems up to date!
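Automating the notification side is straightforward with any RSS feed of advisories. This sketch parses a feed with the standard library; the feed content below is a made-up stand-in (the advisory IDs are placeholders), and the real vendor feed URL is left for you to fill in:

```python
import xml.etree.ElementTree as ET

# Stand-in for a security-advisory RSS feed fetched from a vendor URL.
# The advisory IDs here are placeholders, not real advisories.
sample_feed = """<rss version="2.0"><channel>
  <title>Example Advisories</title>
  <item><title>VMSA-XXXX-0001</title><pubDate>Mon, 26 Dec 2022 00:00:00 GMT</pubDate></item>
  <item><title>VMSA-XXXX-0002</title><pubDate>Sat, 17 Dec 2022 00:00:00 GMT</pubDate></item>
</channel></rss>"""

def advisory_titles(feed_xml):
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

for title in advisory_titles(sample_feed):
    print(title)
```

Run something like this on a schedule and pipe new titles into chat or a ticket queue, and the "check for patches" habit takes care of itself.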

Saturday, December 3, 2022

Why Automate? Programmability is about solving new problems without fear of failure.

Have you ever heard someone say "I'm not a coder" at work?

The IT industry is changing again. Our humble origins began as polymaths moved from adjacent industries and created a new world from scratch. The pioneering phase led to unique opportunities, creating our transport protocols, programming languages, and ways of building.

The appetite for trying new things is fundamentally different now. We don't worry as much about functional quality with our IT products in this day and age. Even Windows, the butt of jokes in the 2000s, provides a consistent and reliable user experience.

Is this the denouement for IT innovation? Neal Stephenson predicted this issue in 2011, examining the creativity and breakneck pace with which the aerospace industry developed in the 1960s and 1970s.

More importantly, he brings to light a painful pattern that IT engineers often go through when trying to create new things for their company's goals - "done before" means something isn't worth doing. Don't we all buy products from (more or less) the same companies to achieve similar outcomes? Why should we care if an idea was executed before?

Liability for failure and high expectations, both in quality and in reduced risk, are prolific in today's market. I'd argue that we have a new problem: after a decade or so of easy-to-implement, highly reliable products, we've forgotten what it feels like to try something new. When infrastructure engineers want to attempt something novel, we're told it costs too much or might hurt the business, and removing something costly is too much of a problem.

The software development market has this figured out. The shift to artistic creativity has provided some growing pains, but we see a potential bridge to the future here. Infrastructure engineers may not "be coders", but uncertain outcomes are what engineers (pragmatic creatives, not artistic creatives) excel at. Our analogs in other industries, actual engineers, incrementally improve a physical resource, creating safer cars or city designs that promote creative growth. They don't need to worry about the low-level components functioning.

IT is maturing, and our goals are changing, but we can't forget where we came from. Software Development is bifurcating from IT infrastructure. The internal focus for infrastructure is shifting to providing tools and resources to developers as typical customers. We need to find strength in our pragmatic creativity.

Through rose-tinted glasses, "melting pot" innovation imparts a culture of "can-do" wherever success lives - but the transition to disposable electronic and mechanical products is removing opportunities to develop the required skills. Deep down we know this is bad, and the key themes are being set by those who can; "maker spaces" and "hackerspaces" are good examples of this trend. We need to teach new engineers not to fear failure or the practice of trying.

This doesn't mean that we can throw caution to the wind. While I admire a farmer's ability to innovate at work, creating some trailblazing (albeit somewhat unsafe) fixes in the field is not what we need in IT. (Check out FarmCraft101 for some of the stuff they do)

We need to change how the IT infrastructure industry operates. Educating new engineers will always involve at least a little trial and error, and the most important thing we can do is create an environment that balances the values of trailblazing and reliable delivery. Programmability does this for us, but we don't know how to use it fully yet - we can look at the other engineering industries to see what might work for us.

What Works

IT Engineers all seem to agree that setting up "labs" facilitates healthy innovation. Home labs offer an environment where an individual can break and rebuild, with an onus to fix it. Resources like VMware's Hands-on-Labs provide a zero-cost method to learn without consequence, albeit with product marketing.

Dyed-in-the-wool engineers love testing. New engineers learn by examining failure without negative context; engineers may work for years before actually building anything themselves. The search term "failure analysis" provides a wealth of information on the processes used by pragmatically creative individuals to steadily improve modular designs to the point where they achieve an artistically creative outcome.

Continuous Delivery practices (DevOps) supercharge all of this; we don't have to deal with physicality. If we manage to automate testing, it costs us virtually zero time, and we can pick up the failure modes as educational resources for new engineers.

How We Can Change

I'd like to see a practice that is two-thirds engineering and one-third "redneck farm repairs". With the FarmCraft101 example, we see an admirable attitude instead of apprehension towards trying new things, and we need to combine it with mature, reliable practices.

The combination of removing or reducing costs for failure and a drive to try new things is about to reach a critical point in the IT industry. We're seeing waves of new engineers born in the 2000s enter the industry, and they don't remember setting IRQs, flipping DIP switches with the right type of ballpoint pen, or living with 32-bit memory ceilings. Personal Computers have become throwaway devices that we don't have to understand well to use, and we need ways to preserve the resilience that comes with "I can solve any problem that comes my way". Raspberry Pi, Arduino, and their lookalikes revitalize this mindset and provide a quality of education we wish we had when we were young - let's make sure the younglings use it.

I'd also like to see some self-awareness. Most people who "can't code" are in the same boat as those who "can't write"; they just don't feel artistically creative. Pragmatic creativity is the backbone of modern engineering - a concept artist doesn't design a car or make it real; they supply the visual aesthetic and non-functional requirements. The inability to write creatively or "code" is fixed first by identifying a useful goal and achieving it. Infrastructure engineers already do this - just look at how network engineers make long-haul connectivity meet a business objective, or how HTTP forwarding rules make an application behave better.

Let's remove the belief that coding is impossible - most of the truly "propeller-hat" stuff has been done by vendors and community members already - so this leaves most actual software development as "exchanging text files over a network" or "making deterministic paths for behavior". Object-Oriented Programming and other best practices are emphasized to ensure you understand why a given structure is there. Understanding how an automatic transmission works and when it should be used helps a driver improve their skills, but the depth to which we learn varies from person to person, and a deep understanding isn't always necessary.

Know your strengths, and know that you can meaningfully contribute by using them. It might take a lifetime to figure out how, but it's worth it.
