Friday, September 23, 2022

Using cloud-init with vSphere and openSUSE 15.4

Rapidly deploying Linux servers on a whim is the essence of home lab activity, but we spend a great deal of time spinning up and configuring machines to meet our specs.

Worse, we lose a great deal of time keeping them properly configured and up to date, and none have the privilege of unlimited lab time.

Let's explore a way to get a base template implemented in vSphere 7 and enable the machine to boot with customizations like hostname, IP address, startup scripts, etc.

Constructing a VM Template

First, let's pick up a fresh operating system installer ISO from opensuse.org. Since this is a home lab / server-style deployment, I'd recommend using the network image - we'll add everything we want later.

Upload the ISO file to a datastore. This step allows the installation process to keep running unattended, even if you shut down the client:


Create a virtual machine, and name it accordingly. Attach the datastore ISO:

Boot the Linux machine. During the installation wizard, ensure that a logical volume manager (LVM2) is used for partitioning - LVM makes it easy to grow disks later. I've found that when you build a clone template, any disk size you choose will be wrong in the application owner's mind, so plan for the future.

After the installation is complete, disconnect the CD/DVD virtual drive! If you fail to do this on shared infrastructure, the VI admins will have a difficult time with the VM - and in a home lab, that's you. Establish good habits to make responsible tenancy easy.

Start up the machine, and use zypper to install any packages or keys that may be required to administer the device. In a home lab, CA certificates and SSH keys are OK - but an enterprise environment should have an automated, repeatable way to lifecycle trust in the event of a compromise.

Once that's done, let's install cloud-init. This software package is incredibly useful, but it isn't installed by default with openSUSE Leap.
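
The package is named cloud-init in the standard openSUSE repositories (verify with zypper search if your repository set differs):

zypper install cloud-init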

After installing the package, ensure it's enabled with:

systemctl enable cloud-init
cloud-init clean

Cloud-Init

Cloud-init is a project managed by Canonical to standardize VM customization on boot, making IaaS more "cloudy" regardless of where it's hosted. It receives configuration data from a datasource, abstracting each cloud's specific inputs into consistent instructions for the IaaS workload (VM). The customization software uses these datasources as "drop points" to transform the cloud-specific instructions (OVF, Azure, EC2) into a common configuration (metadata, userdata).

metadata should represent the workload's system configuration, like hostname, network configuration, and mounts.

userdata should represent the workload's user space configuration, like Ansible playbooks, SSH keys, and first-run scripts. With the current state, I would tend towards using automation to register a workload with Ansible and perform that configuration centrally. It's neat that this level of customization is offered, though - cloud-init can automatically register with centralized orchestrators like SaltStack and Puppet on startup.

cloud-init has a ton of goodness available as boot-time customization, and this will only scratch the surface of how it can be used. cloud-init accepts a YAML configuration that can include:

  • Users/Groups
  • CA certificates
  • SSH keys
  • Hostnames
  • Packages/Repositories
  • Ansible Playbooks
  • External mounts (NFS)
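
As a minimal illustration of the format (the hostname, user, key, and package names are placeholders), a user-data document looks like this:

#cloud-config
hostname: lab-vm01
users:
  - name: labadmin
    groups: users
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3... labadmin@workstation
packages:
  - vim
  - chrony
runcmd:
  - systemctl enable --now chronyd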

VMware offers two data sources for workloads provisioned on vSphere: the legacy OVF datasource, and the newer guestinfo-based "VMware" datasource recognized by cloud-init 21.3 and later:

VMware's new RESTful API has built-in documentation. From the vSphere GUI, select the ellipsis menu and choose "Developer Center":

Unfortunately, VMware's new metadata source does not appear to function with this distribution. According to Canonical's changelog, cloud-init version 21.3+ is required to recognize the new datasource. I tested with openSUSE 15.4 (which ships with cloud-init 21.4) and received the following error:

**************************************************************************
# A new feature in cloud-init identified possible datasources for        #
# this system as:                                                        #
#   []                                                                   #
# However, the datasource used was: OVF                                  #
#                                                                        #
# In the future, cloud-init will only attempt to use datasources that    #
# are identified or specifically configured.                             #
# For more information see                                               #
#   https://bugs.launchpad.net/bugs/1669675                              #
#                                                                        #
# If you are seeing this message, please file a bug against              #
# cloud-init at                                                          #
#    https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid      #
# Make sure to include the cloud provider your instance is               #
# running on.                                                            #
#                                                                        #
# After you have filed a bug, you can disable this warning by launching  #
# your instance with the cloud-config below, or putting that content     #
# into /etc/cloud/cloud.cfg.d/99-warnings.cfg                            #
#                                                                        #
# #cloud-config                                                          #
# warnings:                                                              #
#   dsid_missing_source: off                                             #
**************************************************************************

To view the provided and applied metadata for a system, cloud-init writes the following file:

/run/cloud-init/instance-data.json

To view the userdata for a system, use the following command:

cloud-init query userdata

This indicates that we probably have an upstream issue with the new datasource type. Reviewing the changelog, we see several fixes applied to this datasource.

Applying Workload Templates

Note: This feature is only available on vSphere 7 and up!

Here's how to leverage the OVF data source with vSphere and OpenSUSE.

The flag disable_vmware_customization functions as a switch to choose between the metadata source and the OVF datasource. Add the following to /etc/cloud/cloud.cfg:

disable_vmware_customization: false
datasource:
  OVF:
    allow_raw_data: true
vmware_cust_file_max_wait: 25
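
To confirm which datasource cloud-init selects on the next boot, the cloud-id utility (shipped with cloud-init) prints it directly:

cloud-id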

Once the configuration is in place, shut the virtual machine down. Right-click on the VM, and select Clone -> Clone as Template to Library:

This vCenter feature will orchestrate the conversion to a template object and publish it to a Content Library as one step.

Deploying a customized machine

The next process needs to be executed via vCenter's Content Library vSphere API:

  • Establish API Session Key (required authentication for the endpoints used to deploy)
  • Deploy Content Library Object (/api/vcenter/vm-template/library-items/)
    • Find the correct content library
    • Find the correct content library item
    • Find the content library item via the vSphere API (this ID is used in the deployment command)
    • Find vSphere Cluster
    • Find vSphere Folder
    • Find vSphere Datastore
    • Deploy Content Library Item
  • Wait for the deployment to complete
    • Normally, an API responds immediately that a command was accepted, and subsequent calls are required to poll for readiness. Instead, vSphere's RESTful API returns a 200 response only if and when the deployment is complete, which simplifies our code
  • Locate the Virtual Machine. The previous API call's 200 OK response contains the new VM's ID, and Postman conveniently times the operation for you as well!
  • Apply Guest Customization
  • Start VM
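
To make the first two steps concrete, here's a hedged curl sketch (the hostname, credentials, placement IDs, and library item ID are placeholders - vCenter's built-in API Explorer documents the exact payload for your version):

# Establish an API session; vCenter returns a token string
SESSION=$(curl -sk -X POST -u 'administrator@vsphere.local:password' \
  https://vcenter.lab.local/api/session | tr -d '"')

# Deploy the template library item; this call blocks until the deployment finishes
curl -sk -X POST \
  -H "vmware-api-session-id: ${SESSION}" \
  -H 'Content-Type: application/json' \
  -d '{"name": "cloudinit-vm01", "placement": {"cluster": "domain-c1008", "folder": "group-v1010"}}' \
  'https://vcenter.lab.local/api/vcenter/vm-template/library-items/<library-item-id>?action=deploy'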

To replicate this lab, the Postman Environment and Collection are provided at the bottom of this post. Postman is a powerful platform for educating engineers unfamiliar with a particular API, because it makes each step an HTTP client performs explicit. Automated processes are typically very terse, and do not effectively explain each step and behavior. To import this collection and environment, download the files, and import them:

Postman Environments will stage variables for consumption by Collections.

I have sorted the Postman Collection based on the order of execution. The final customization step will return a 204 if successful, with an empty body. To verify that the configuration was correctly applied, browse to the individual VM in vCenter, and look under Monitor -> Events for an event of the type "Reconfigure VM". If you see the task on the correct VM, start it, and you will see the following:

Soon after, look at the virtual machine to review its customized attributes!

Debugging/Troubleshooting Tips

This process is slightly opaque, and a little confusing at first. Here are some key points for troubleshooting, and methods to manage them:

  • The vSphere /vm/guest/customization URI will only respond with a 204 if working correctly.
    • If it returns a 400, the error will indicate what part of the JSON spec is having issues. Keep in mind that it may only give you the parent key - tools like JSONLint offer a method to quickly validate payloads as well
  • When locating resources, the Content Library and Templates are returned as a UUID with no description. GET the individual objects to match with names, or use the find API
  • All other resources (datastore, VM name) are listed with their MOB name, e.g. domain-c1008
  • Save the response from the deployment action, it has the VM ID when it finally completes
  • VM Customization can only be applied to a VM that is OFF, and doesn't customize until the VM starts.

Troubleshooting customization after boot can be done by viewing the metadata (/run/cloud-init/) or by reviewing the logs at the following locations:

/var/log/cloud-init.log
/var/log/cloud-init-output.log
/var/log/vmware/imc
journalctl -xe

The classic "wipe and restart" method is also quite valuable:

cloud-init clean -l -s
systemctl restart cloud-init

Finally, after a host is successfully configured, I'd recommend disabling cloud-init to prevent further customization. This is just as easily achieved with an Ansible playbook (sketched below the command):

systemctl disable cloud-init
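
For fleets, a hedged Ansible task sketch accomplishes the same thing (the surrounding play is yours to supply):

- name: Disable cloud-init to prevent re-customization
  ansible.builtin.systemd:
    name: cloud-init
    enabled: false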

Code

Saturday, August 13, 2022

Identity theft has gotten out of hand. Here are basic ways to protect yourself.

It's not a matter of if you will be the victim of a breach, but when.

Wired - a general tech publication - has started tracking breaches by the half-year, and security vendors are moving to monthly reporting due to the volume.

It's 2022, and it seems everyone loves to over-share on social media. This may feel good but introduces substantial risks. Let's talk about cyber hygiene.

Information security is a frame of mind, so the most effective way to protect yourself is by being smart. ISC2 has started an institution - The Center for Cyber Safety and Education - to provide effective education on how to comprehensively protect yourself online.

Here are some brief tips on when you shouldn't disclose information online. Always ask: "Can I dial this back? Do I need to provide this much information?"

  • Personally Identifiable Information (PII) can provide adversaries with methods to fake your identity
    • Birthdays: social media companies love to collect them, and they're used for ID verification everywhere. Facebook doesn't need your exact birth date, and storing it there increases your risk. Avoid storing your full birth date whenever feasible
    • Credit Card Numbers, Expiration Dates, CVVs
    • Any image of any ID card you own. Driver's License numbers are particularly popular.
    • Hometowns or birth locations are fun to socialize, but fit in this same category
    • Full Middle Name
    • "Mother's maiden name" and other names unlisted and typically used by financial institutions or security questions. Social media quizzes aggressively try to steal information like this!
    • Previous Employers
    • Home address / shipping address. These are typically used to validate credit card transactions, particularly large charges
  • Personal Health Information (PHI) is typically protected by HIPAA, with large exceptions for non-medical institutions. Don't share any of this information without full disclosure of how it will be used!
    • Medical history, surgeries, etc.
    • Ancestry information

It's worth reiterating: your children are much more likely to be targeted as well. Here are some guidelines on how to protect them from discovering they have a mortgage and a compromised credit score in junior high.

This is the most important step, but also the most difficult. We can use products or services to protect our identities and shore up any gaps.

Credit Locking / Credit Freezes

Now that we're done scaring you, the good news is that providing a basic level of protection against identity theft isn't particularly hard. Crime does pay, and the most effective way to break the pattern is to pursue every avenue to prevent new credit from being opened with your identity. Most banks, utilities, and other services won't open a credit account without a credit report, so the most effective counter to compromise is to disallow any and all credit report attempts. An added benefit: even people providing legitimate services to you can be sneaky and execute reports without your consent, dinging your credit score in the process - a freeze stops those, too.

If you don't do anything else I suggest, do this. It's going to take 5-10 minutes to do all three. Freeze your credit (prevent credit reports from being executed with your information) with each of the three major bureaus - Equifax, Experian, and TransUnion:

Note: You'll need to create a new account for each of these services! Don't lose this information!

Use a Password Manager

To quote Mel Brooks: "12345! That's amazing! I have the same combination on my luggage!". Cryptography isn't magic, and all the transport security and firewalls in the world can't protect you from weak identity material. 

The most effective way (for the least effort) to de-risk yourself is to set up a password manager. There are peripheral advantages beyond password storage, too, like storing confidential documents and sharing passwords between family members.

I'm not going to recommend a specific product here, because needs vary quite a bit from person to person. Here are some typical requirements I keep in mind when evaluating a password manager:

  • How strict is its MFA? Can you disable SMS-based codes? Is a hardware security token like Yubikey supported?
  • Does it support a family plan?
  • What are its breach response plans?
  • How securely does it store your data?
  • Is it compatible with my devices?

Personally, I use 1Password for the Yubikey support and family plan support. It gives me peace of mind, and has a feature where all passwords are released to my family if I fail to log in for a month. Here are some others, in no particular order:

Using one is better than not - so all of these would be an improvement over nothing at all. I've used Dashlane and LastPass and dropped them in favor of 1Password.

Multi-Factor Authentication

Multi-factor authentication can be broken out into the following major categories:

  • Something you know: Passwords are an example of this "authentication factor". If a credential is publicly exposed (e.g. used on the Internet) it should be unique to that service to ensure that your banks don't get compromised if your Twitter password leaks
  • Something you have: The most common MFA tools fit in this category. Yubikeys are fantastic (if supported), and Time-based One-Time Password (TOTP) apps are good options. I don't personally have any strong preference other than AVOID SMS / TEXT MESSAGE MFA!
  • Something you are: Murky waters abound here, because you must be completely fine with submitting your biometrics to a third party. I'm not keen on doing this, given its potential for misuse. Most consumer fingerprint scanners are "passable" at best, so I don't consider this a good standalone authentication factor.
  • Somewhere you are: Location-based services are usually somewhat iffy for private (non-enterprise, non-government) use, as they aren't particularly accurate. If you're consuming a service like Gmail, the company should provide this for you.
  • Something you do: This is a real propeller-hat scientific factor. Capturing behavior patterns can reveal whether you're behaving normally. Again, this is mostly the responsibility of the group providing you a service.
    • There's a low-tech way to provide this authentication factor in the real world - paying a security guard. They're good at this and don't need a Ph.D to do it.

Identity Theft Protection

Now, it's time to bring out the heavy hitters. We don't always have the time to keep an eye on the entire internet, or to research recommendations to reduce our online footprint.

Leaning on the experts in identity theft protection services is the way to go. The industry is awash with good options, and the providers of these services aggressively drive costs down to make it affordable.

Full disclosure, I am employed by Allstate, who provides ID theft services. These recommendations are my own and not my employer's.

Here are some guidelines when evaluating ID Theft Protection services:

  • Do they have a family plan? Children's ID theft is on the rise, mostly because it's easy to predict SSNs given a birth location, easily available information like birth date and addresses, etc. You'd think creditors would avoid opening up a credit card in a newborn's name, but you'd be wrong. Add them to your ID theft protection, freeze their credit!
  • What services do they monitor? At a minimum, they should track your credit score without affecting it!
  • What insurance do they provide?
  • What guidance and periodic advice do they offer to customers and the public?
  • What recommendations do they make to improve your online presence?

I'd avoid the ones provided by the credit industries - the Equifax breach impacted my confidence, and nothing brought it back.

As an aside, if you've been a victim of any of the recent wave of breaches, you're probably eligible for free ID theft protection services from multiple companies. Use this to shop around; if you like one, stick with it. If you don't find any you like, here are some popular ones:

Shop around! The worst thing you can do with your online presence is nothing, and there's a wide variety of good products to help you out. Most of these services provide a trial - use it to evaluate whether the product is a good fit.

Conclusion

Society has passed the "age of innocence" with identity theft, and cybersecurity will need to become a routine for anyone living in it. Pandora's box has been opened, and criminals are not going to forget how easy and low-risk cybercrime is. Protecting yourself is a rabbit-hole where all effort is valuable - but you don't need to be a security expert to get the basics in place.

Saturday, August 6, 2022

NSX Data Center 4.0.0.1 is now available!

NSX 4 is now available, and it was a surprisingly sparse release in terms of new capabilities.

NSX 4.0 appears to be a "clean house" initiative, so while it's missing "whizz-bang" new data plane features it does address a variety of issues I am happy to say are now closed:

  • Numerous documented API deprecations. Normally this wouldn't be that big of a deal, but NSX 3.x dropped several experiments (NSX ALB front-end, for example) that stayed available throughout the release train
  • Deprecating host-based N-VDS
  • Deprecating KVM and older Linux support (RHEL 7.8, 8.0, 8.3). The KVM deprecation was announced early in 3.0, and the affected RHEL releases have already passed their EOL dates. It is an odd choice for physical servers, though.
  • Lifecycle Management improvements (I can't test these until the next upgrade).
  • IPv6 Management Plane support. Unfortunately, VTEPs aren't part of this release, and vSphere is still behind the curve in terms of IPv6 support, limiting efficacy. It's unsurprising to see the Network teams be ahead of the Virtualization teams on network goals.
  • HSTS is implemented for the WebUI as well. New installs will need to run an override prior to installing a new certificate.
    • API endpoint to replace the cluster certificate: /api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=""
    • API endpoint to replace a node's API certificate: /api/v1/node/services/http?action=apply_certificate&certificate_id=
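
For example, replacing the cluster certificate looks roughly like this (the manager hostname and certificate UUID are placeholders, and the certificate must already be imported):

curl -k -u 'admin:password' -X POST \
  "https://nsx.lab.local/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<certificate-uuid>"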

Let's review how a new deployment may differ from previous installations:

IPv6 options have now been added to the OVA:

When deploying new workloads with IPv6 support - it's important to have a plan to access those addresses. The best strategy for enterprises and home labs is roughly the same, but with different products. Make your DNS dual-stack, and enter AAAA (IPv6 host records) for each service that supports IPv4 and IPv6. Let your client services do it seamlessly and transparently. End users shouldn't have to care about IPv6 being used. Configuring DNS as-code from a source repository makes this migration easy.
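
As an illustration, in a hypothetical zone file a dual-stacked service simply carries both record types, and clients select between them transparently:

nsx  IN  A     192.0.2.10
nsx  IN  AAAA  2001:db8::10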

The browser add-on IPvFoo can tell you if you're using native IPv4 or a fallback mode. It'll also tell you what IP addresses you're talking to for a given page to load, which is incredibly useful.

To access an IP address with IPv6 in a web browser, the notation is a little different:

{{protocol}}://[{{site}}]/

Example:

https://[2001:dead:beef::2]/

To fully leverage IPv6, you need to give vCenter the same treatment. VMware's documentation on it is here. I executed the change from the VAMI (https://vcenter:5480) under Networking using the supported wizard.

Note: This will incur brief downtime for vCenter, and interrupt services like VCHA! Execute a vCenter backup before executing this work!

And that's about it! We can see NSX Manager with an IPv6 address in the Appliance UI:

And, IPvFoo reports all IPv6 for the front-end:

NSX 4.0 was a mellow release by VMware standards - but according to the Semantic Versioning rules, breaking changes automatically increment a major version. The API deprecations justify the version increment on these terms.

Note: The most important parts (NSX control plane, VTEPs) are still to come.

Sunday, July 3, 2022

The Role of Trust and Failure in Information Security

The principles that define the information security field are decades older than computing, and we'd do well to learn from the lessons that precede our industry.

Early in our careers, we security professionals naively construct an "our stuff versus them" model when attempting to defend our networks. As we develop more of a salty patina, the realization that we shouldn't trust everything begins to set in, transforming previous revisions of our security model from "assume a cow is a sphere in a vacuum at absolute zero" levels of oversimplification to something more worthy. How do we accelerate that learning process?

Confronting Failure

IT professionals have a truly bizarre relationship with the concept of failure, producing some notably oppressive cultures. Burying failure makes our systems vulnerable. This deep-seated crisis of ego has proven to undermine companies a great deal.

When re-reading Cyberpunk: Outlaws and Hackers on the Computer Frontier, my experiential lens provided new insight - Kevin Mitnick probably would not have been as successful if DEC had been more transparent about how compromised their systems were. Reviewing the past through rose-tinted glasses, DEC was considered a company that provided full-service computing - all maintenance and loading was done by a DEC employee.

DEC needed total implicit trust from their customers to operate, and did not disclose their history of compromise to keep the ego-driven narrative ("we have no problems") going for a number of years. This choice empowered Kevin Mitnick and others to continue compromising DEC customers for years and evade capture.

The industry has learned quite a bit about its problems handling failure since 1991, but it could do much better. Vindictive behavior in the wake of a new breach is common nowadays, with language like "how could they be compromised?" bandied about as if we'd never heard of the thankless "somebody from Nebraska" dependency pattern - turning the conversation in a counter-productive direction. We need transparency from those who provide us paid software, but we punish them for providing it.

Google feels strongly about learning from failure, and so should we. Engineering professions (the truer, more long-lived ones) have long since begun to analyze failure as a method of teaching, proving that we waste a wealth of information every time we revert to blame in the event of a problem.

The entire industry needs to figure out how to constructively learn from failure, while simultaneously applying appropriate levels of pressure on all product vendors to ensure that vulnerabilities, breaches, and other problems are disclosed fairly and appropriately. Easy, right?

Building on a Foundation for Trust

Despite this cycle of abuse, the industry does want to see more from companies that provide tech products. Vulnerability disclosure programs are particularly successful and important due to significant pressure to improve. Locksmithing is of particular interest in this case, as is the Enigma story - technology doesn't passively improve over time, it requires conscious effort and does not progress until problems are acknowledged.

Vulnerability disclosure made big strides transitioning from the more negative past (see AT&T's stance here), where the courts would use the CFAA as a sledgehammer to cover up or mask problems, to the current day's model - "Heartbleed" and "Shellshock". Examine those websites - the vulnerability campaigns maintain blameless language, and focus consumers on how to resolve the issues and what questions to ask of their vendors. We complain about "vulnerability fatigue" often, forgetting that we only began to transform the industry toward a more secure future a mere 8 years ago.

Let's commit to some meaningful changes to help us get to the future - We aren't there yet!

  • Encourage and Promote Transparency: When a company provides you information on a security problem, push for more information. CloudFlare publishes their post-mortems here as an example.
  • Don't be Punitive: This part doesn't specifically apply to security, or even IT. The person nearest to you probably has nothing to do with your issue.
    • For bonus points, don't allow others to paint you this way
  • Focus on the Fix: Some people find this part easier than others - shift focus on solving problems and providing real results. Continually ask yourself the question "am I contributing to the objectives of this conversation" and ensure that emphasis stays on what to do next or how something will be prevented.
  • Persuade Others: Group-think begins working against you when building trust or establishing a culture of disclosure. Don't allow others to steer the conversation back into punitive territory:
    • Listen: If others paint your behavior as punitive, listen to what they have to say and examine it objectively.
      • This conversation also needs to remain constructive, so cascading tactics may apply. Operating in good faith is key.
    • Recognize Contribution: It takes courage to share information about a problem - sincerely thank those who disclose, directly and in person.
    • Restate Commitments: A business relationship, like any other human relationship, requires maintenance. In times of strain, it's important to be forward and remind participants of their commitment to each other.
    • Sympathize: Find common ground with those who failed. We've all done it, blur the factional lines by reflecting on other failures - but only bring your own to avoid creating adversarial tension.

Learning from failure is a crucial aspect to improving oneself, improving others, and building trust. Don't let a good failure go to waste by fighting over it.

Sunday, May 22, 2022

Scale datacenters past the number of VLAN IDs with NSX-T Tier-0 and Q-in-X

VMware introduced the ability to double-encapsulate Layer 2 frames via the "Access VLAN" option for VRF instances in NSX Data Center: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-4CB5796A-1CED-4F0E-ADE0-72BF7B3F762C.html

Q-in-VNI provides a capable infrastructure engineer the ability to construct straightforward multi-tenant designs. From the documentation and previous testing, we have demonstrated its capability outside of Layer 3 constructs. The objective of this post is to examine and test these capabilities with Tier-0 VRFs:

NSX Data Center provides the ability to pass a tag inside of a segment, which enables a few interesting design patterns:

  • Layer 3 VPN to customer's campus, with each 802.1q tag delineating a separate "tenant", e.g. PCI/Non-PCI
  • Inserting carrier workloads selectively to specific networks
  • Customer empowerment - let's enable the customer to use their cloud how they please

To validate this hypothesis, we will leverage the following isolated topology:

Note: VRF-Lite is required for this feature!

Q-in-VNI on NSX-T Routers

When configuring an interface on a VRF, the following option (Access VLAN ID) becomes available. Select the appropriate "inside" VLAN for each sub-interface:

We then configure the sub-interfaces - the tenant VM is unaware that it's being wrapped into an overlay:

Unsurprisingly, this feature just works. NSX-T is designed to provide a multi-tenant cloud-like environment, and VLAN caps are a huge problem in that space. In this example, we created 2 subinterfaces in the same VRF - normally tenants would not share a VLAN.

Q-in-VNI Design Patterns

Offering Q-in-VNI on a Tier-0 solves valuable use cases for multi-tenant platform services. The primary focus of these solutions is customer empowerment - VMware isn't taking sides on matters of "vi vs emacs", "Juniper vs Cisco", etc. Instead, we as CSPs can provide a few design patterns that enable a customer to leverage their own chosen methods, or even allow an ISP to integrate crisply and effectively with their telecommunications services.

NSX-T has some fairly small scalability limits for CSPs leveraging the default recommended design pattern (160 standalone Tier-0s), and the ultimate best solution is to leverage multiple NSX Data Center instances to accommodate. If the desired number of tenants is above, say, twice that, the VRF-Lite feature allows an infrastructure engineer to deploy 100 routing tables per Tier-0. 

VRF-Lite enables scaling to 4,000 Tier-1 gateways at this level, and a highly theoretical maximum of 160,000, but the primary advantage of this approach is that customers can bring their own networking easily and smoothly, front-ending NSX components with their preferred Network OS. Customers and infrastructure engineers extend the feature set while reducing strain on NSX, creating a scenario where both the customer and the infrastructure benefit cooperatively.

Note: VMware's current configuration maximums are provided here: https://configmax.esp.vmware.com/guest?vmwareproduct=VMware%20NSX

VRF-Lite can also be built to provide a solution where customers can "hair-pin" their tenant routing tables to a virtual firewall over the same VN-Segment. Enterprise teams leveraging NSX Data Center benefit the most from this approach, because common virtual firewall deployments are limited by the number of interfaces available on a VM. This design pattern empowers customers by permitting infrastructure engineers to construct thousands of macrosegmentation zones if desired.

Q-in-Q on NSX-T Routers

Time to test out the more complex option!

When I attempt to configure an internal tag with VRF-Lite subinterfaces, the following error is displayed:


Sadly, it appears that Q-in-Q is not supported yet, only Q-in-VNI. Perhaps this feature will be provided at a later date.

Here's the VyOS configuration to perform Q-in-Q:
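
A sketch of the relevant VyOS syntax (the interface name, tags, and addressing are placeholders) - vif-s carries the outer service tag and vif-c the inner customer tag:

set interfaces ethernet eth1 vif-s 100 vif-c 200 address '198.51.100.1/24'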

Retrospective

  • Learn, hypothesize, test is an important cycle for learning and design, and this is why we build home labs. NSX Data Center appeared to support Q-in-Q tagging - but the feature was ultimately for passing a trunk directly to a specific VLAN ID in a port-group.
  • vSphere vDS does not appear to allow Q-in-Q to trunk outwards to other port-groups that do not support VLAN trunking, either.
  • Make sure that the MTU can hold the inner and outer headers without loss. I set the MTU to 1700, but each 802.1q tag only needs 4 extra bytes of headroom.

Friday, May 6, 2022

Different Methods to carry 802.1q tags with VMware vDS and NSX-T

 VMware's vDS is a bit of a misnomer

In a previous post, I covered the concept of transitivity in networking - but in Layer 2 (Ethernet) land, transitivity is critically important to understanding how VMware's Virtual Distributed Switch (vDS) works.

The statement "VMware's Virtual Distributed Switch is not a switch" seems controversial, but let's take a moment to reflect - when you plug in the second uplink on an ESXi host, does the ESXi host participate in spanning tree?

Testing this concept at a basic level is straightforward. Enabling BPDU Guard on an ESXi host-facing port should take the host down immediately if it's actually a switch (it doesn't). This concept is actually quite useful to a capable infrastructure engineer.

A "Layer 2 Proxy"

VMware's vDS is quite a bit more useful than a simple Layer 2 transitive network device - each ESXi host accepts data from a virtual machine, and then leverages a "host proxy switch" to take each packet and re-write its Layer 2 header in a 3-stage process:

Note: For a more detailed explanation of VMware's vDS architecture and how it's implemented, VMware's documentation is here

Note: VMware's naming for network interfaces can be a little confusing, here's a cheat sheet:

  • vnic: A workload's network adapter
  • vmnic: A hypervisor's uplink
  • vmknic: A hypervisor's Layer 3 adapter

A common misinterpretation of vDS is that the VLAN ID assigned to a virtual machine is some form of stored variable in vSphere - it isn't. vDS was designed with applying network policy in mind - and an 802.1q tag is simply another policy.

vDS is designed with tenancy considerations, so traffic will not transit between different port-groups, even ones carrying the same VLAN ID. This non-transitive behavior achieves two goals at once - providing an infrastructure engineer total control of data egress on a vSphere host, and adequate segmentation to build a multi-tenant VMware Cloud.

Replacing the Layer 2 header on workload packets is extremely powerful - vDS essentially empowers an infrastructure engineer to write policy and change packet behavior. Here are some examples:

  • For a VM's attached vnic, apply an 802.1q tag (or don't!)
  • For a VM's attached vnic, limit traffic to 10 Megabits/s
  • For a VM's attached vnic, attach a DSCP tag
  • For a VM's attached vnic, deny promiscuous mode/MAC spoofing
  • For a VM's attached vnic, prefer specific vmnics
  • For a VM's attached vnic, export IPFix

NSX expands on this capability quite a bit by adding overlay network functions:

  • For a VM's attached vnic, publish the MAC to the global controller table (if it isn't already there) and send the data over a GENEVE or VXLAN tunnel
  • For a VM's attached vnic, only allow speakers with valid ARP and MAC entries (validated via VMware tools or Trust-on-First-Use) to speak on a given segment
  • For a VM's attached vnic, send traffic to the appropriate distributed or service router

NSX also enables a few things for NFV that are incredibly useful: NFV service chaining and Q-in-VNI encapsulation.

Q-in-VNI encapsulation is pretty neat - it allows an "inside" Virtual Network Function (VNF) to have total autonomy with inner 802.1q tags, empowering an infrastructure engineer to create a topology (with segments) and deliver complete control to the consumer of that app. Here's an example packet running inside a Q-in-VNI enabled segment (howto is here).

NSX Data Center is not just for virtualizing the data center anymore. This capability, combined with the other precursors (generating network configurations with a CI tool, automatically deploying changes, virtualization), is the future of reliable enterprise networking.

Friday, April 29, 2022

Network Experiments with VMware NSX-T and Cisco Modeling Labs

Cisco Modeling Labs (CML) has turned out to be a great tool for deploying virtual network resources, but the "only Cisco VNFs" limitation is a bit much.

Let's use this opportunity to really take advantage of the capabilities that NSX-T has for virtual network labs!

Overview

For the purpose of lab construction, I will use an existing Tier-0 router and uplinks to facilitate the "basics", e.g. internet connectivity && remote accessibility:

Constructing the NSX-T "Outside" Environment

NSX-T has a few super-powers when it comes to constructing network topologies. We need one in particular to execute this - Q-in-VNI encapsulation - which is enabled on a per-segment basis. Compared to NSX-V, this is simple and straightforward - a user simply applies the allowed VLAN range to the vn-segment, in this case, eng-lab-vn-cml-trunk: 

We'll need to disable some security features that protect the network by creating and applying the following segment profiles. The objective is to limit ARP snooping and force NSX to learn MAC addresses over the segment instead of from the controller.

I can't stress this enough: the Q-in-VNI feature gives us near unlimited power with other VNFs, circumventing the 10 vNIC limit in vSphere and reducing the number of segments that must be deployed to what would normally be "physical pipes".

Here's what it looks like via the API:
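
A rough sketch of the equivalent Policy API call (the manager hostname is a placeholder, and the exact trunk fields should be verified against your version's API documentation):

curl -k -u 'admin:password' -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "eng-lab-vn-cml-trunk", "vlan_ids": ["0-4094"]}' \
  https://nsx.lab.local/policy/api/v1/infra/segments/eng-lab-vn-cml-trunk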

Expose connectivity to CML

Generating the segments in NSX is complete; now we need to get into the weeds a bit with CML. CML is, in essence, a Linux machine running containers, and it has external interface support. Let's add the segments to the appropriate VM:

CML provides a Cockpit UI to make this easy, but the usual suspects exist to create a Linux bridge. We're going to create both segments as bridge adapters:
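
If you'd rather script it than click through Cockpit, the iproute2 equivalent is a few commands (the bridge and uplink names are placeholders):

ip link add br-cml-trunk type bridge
ip link set ens224 master br-cml-trunk
ip link set ens224 up
ip link set br-cml-trunk up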

A word of warning - this is a Linux Software Bridge, and will copy everything flowing through the port. NSX helps by squelching a significant chunk of the "network storms" that result, but I would recommend not putting it on the same NSX manager or hosts as your production environments.

Leveraging CML connectivity in a Lab

The hard part's done! To consume this new network, add a node of type External Connector, and configure it to leverage the bridge we just created:

The next step would be to deploy resources and configure them to connect to this central point - I'd recommend this approach to make nodes reachable via their management interface, i.e. deploy a management VRF and connect all nodes to it for stuff like Ansible testing. CML has an API, so the end goal here would be to spin up a test topology and test code against it, or to build a mini service provider.

Lessons Learned

This is a network lab, and we're taking some of the guardrails off when building something this free-form. NSX does allow an engineer to do so on a per-segment basis, and insulates the "outside" network from any weird stuff you intend to do. The inherent dangers to spamming BPDUs outwards from a host or packet flooding can be contained with the built-in feature "segment security profiles", indicating that VMware had clear intent to support similar use-cases in the future.

NSX also enables a few other functions when running in routed mode with a Tier-1. It's trivial to set up NAT policies and redistribute routes as Tier-1 statics, controlling the export of any internal connectivity you may or may not want touching the "real network".

I do think that this combination is a really impressive one-two punch for supporting enterprise deployments. If you ask anyone but Cisco, a typical enterprise network environment will involve many different network vendors, and having the flexibility to mix and match in a "testing environment" is a luxury that most infrastructure engineers don't have, but should.

CML Scenario

Here's the CML scenario I deployed to test this functionality:

Sunday, April 17, 2022

Vendor interoperability with multiple STP instances

Spanning Tree is the all-important loop prevention method for Layer 2 topologies and source of ire to network engineers worldwide.

Usually IT engineers cite the Dunning-Kruger Effect in a negative context, depicting an oblivious junior or an unaware manager, but I like to focus on the opposite end of the curve: meta-cognition. Striving to develop meta-cognition and self-awareness is difficult and competes with the ego, but it is an incredibly powerful tool for learning. I cannot stress enough how important getting comfortable with one's limitations is to a career.

Spanning Tree is a key topic that should be revisited frequently, building upon knowledge growth.

Let's examine some methods for plural instantiation with Spanning Tree.

The OSI Model and sublayers

Ethernet by itself is surprisingly limited in scope for the utility we glean from it. It provides:

  • Source/Destination forwarding
  • Variable length payloads
  • Segmentation with 802.1q (VLAN tagging) or 802.1ad (Q-in-Q tagging)

It also doesn't provide a Time-to-Live field at all, which is why switching loops are so critically dangerous in production networks.

Ethernet needs a supporting upper layer (the Logical Link Control sublayer) to relay instructions on how to handle packets. Internal to a switch ASIC, the hardware itself needs some form of indicator to select pipelines for forwarding or processing.  

802.2 LLC and Subnetwork Access Protocol (SNAP) are typically used in concert to bridge this gap and allow a forwarding plane to classify frames and send to the appropriate pipeline. Examples are provided with this article, where LLC and SNAP are used to say "this is a control plane packet, don't copy it, process it" or "this packet is for hosts, copy and forward it".

Multiple Ethernet Versions

Over the years, network vendors implemented multiple versions of the lower Ethernet sublayer, and in many cases did not update the control plane code on network equipment. These legacy frame formats managed to proliferate throughout computer networks, and they resemble ancient PDU types simply stitched together for compatibility. It's not entirely surprising that multiple Ethernet editions exist, given the fragmentation throughout the industry.

I'd strongly recommend reading this study by Muhammad Farooq-i-Azam from the COMSATS Institute of Information Technology. The author outlines methods of testing common forms of Ethernet in production formats, and provides a detailed overview of our progress to standardization. Spanning Tree is a major cause for the remaining consolidation work, as it turns out.

Generally, Ethernet II is what you want to see in a production network, and most host frames will follow this standard. Because its type field is an EtherType rather than a length, payloads larger than 1500 bytes are supported, which is a big advantage in data centers.

The original Ethernet standard, 802.3, did not support EtherTypes or frames larger than 1536 bytes, and it is typically used by legacy code in a switching control plane. Two major variants exist within this protocol, extended by LLC and (optionally) with SNAP as an extension.

So, why does this matter when talking about spanning tree? Network Equipment Providers (NEPs) haven't all updated their Layer 2 control plane protocols in a long time. Bridge Protocol Data Units (BPDUs) are transmitted inconsistently (consistently within a vendor's hardware, but inconsistently across vendors), causing a wide variety of interoperability issues.

Per-VLAN Spanning Tree

From a protocol standpoint, Per-VLAN STP and RSTP are probably the simplest method with the fewest design implications and most intuitive protocol layout - but some dangers are inherent when running multi-vendor networks. 

Let's examine a few captured packets containing the STP control plane:

Cisco PVRST+ Dissection

Cisco structures the per-VLAN control plane by wrapping instantiated BPDUs:

  • With an Ethernet II Layer 2 Lower
  • With an 802.1q tag encapsulating the VLAN ID
  • With a SNAP Layer 2 Upper, PID of PVSTP+ (0x010b)
  • Destination of 0100.0ccc.cccd

Arista and Cisco Vendor Interoperability

Interestingly enough, I discovered that Arista appears to implement compatible PVRST to Cisco (with some adjustments covered in Arista's whitepaper). To validate this, I executed another packet dissection with Arista's vEOS, which is available to the public. I have provided the results here, but the PDUs are nearly identical to the Cisco implementation.

MST Dissection

For the majority of vendor interoperable spanning-tree implementations, this will be a network engineer's best option. MST allows an engineer to specify up to 16 separate instances of Spanning Tree, either 802.1d or 802.1w. The primary hazards with leveraging MST have a great deal to do with trunking edge ports, as each topology must be accounted for and carefully planned. BPDU Guard, Loop Guard, and VLAN pruning are absolutely necessary when planning MST, in addition to diagramming each topology that will be instantiated.
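
For reference, that planning boils down to a small Cisco IOS configuration block (the region name, revision, and VLAN-to-instance mappings are illustrative - they must match exactly on every switch in the region):

spanning-tree mode mst
spanning-tree mst configuration
 name HOMELAB
 revision 1
 instance 1 vlan 10-19
 instance 2 vlan 20-29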

IEEE's MST standard is implemented per-instance, and relayed with one common BPDU. It's pleasingly efficient, but...

  • With an 802.3 Ethernet Layer 2 Lower
  • With no 802.1q tag 
  • With an LLC header, ID of BPDU (0x42)
  • With a destination MAC 0180.c200.0000
  • After the BPDU's typical attributes, MST attaches an extension indicating priorities, but no unique bridge IDs. If a topology differs, it would be separated onto its own control plane frame.

CST Dissection

For comparison, I also examined a Mikrotik Routerboard, which implements RSTP as a single instance (Common Spanning Tree) and optionally supports Multiple Spanning Tree (MST). I found the following attributes with default settings:

  • Destination of 0180.c200.0000 (STP All-Bridges Destination Address)
  •  802.3 Ethernet Frame
  • Spanning-tree BPDU Type

Conclusion

Spanning Tree is a foundation to all enterprise networks, but there really seems to be some legacy code and general weirdness in place here. The industry, starting with the data center, is moving to a more deterministic control plane to replace it, whether that be EVPN or a controller-based model like NSX. 

Campus enterprise deployments are beginning to do the same as businesses realize that a shared, multi-tenant campus network can increase the value extracted from the same equipment. With the downsizing of corporate offices, the only thing stopping office providers from also delivering a consolidated campus network is a general lack of expertise and an industry full of under-developed solutions. As platforms converge, the same pattern will emerge in campus deployments soon.

In terms of design implications, supplanting Layer 2 legacy control planes is no small feat. Even EVPN requires STP at the edge, but containment and exceptions management are both clear design decisions to make when building enterprise networks.

Saturday, March 26, 2022

Cisco Modeling Labs

Ever wonder what it would be like to have a platform dedicated to Continuous Improvement / Testing / Labbing?

Cisco's put a lot of thought into good ways to do that, and Cisco Modeling Labs(CML) is the latest iteration of their solutions to provide this as a service to enterprises and casual users alike.

CML is the next logical iteration of Virtual Internet Routing Labs (VIRL) and is officially backed by Cisco with legal VNF licensing. It's a KVM type-2 hypervisor, and automatically handles VNF wiring with an HTML5/REST interface.

Cisco's mission is to provide a NetDevOps platform to network engineers, upgrading the industry's skillset to provide an entirely new level of reliability to infrastructure. Hardware improves over time, and refresh cycles complete - transferring the "downtime" problem from hardware/transceiver failure to engineer mistakes. NetDevOps is the antidote for this problem: infrastructure engineers should leverage automation to make any operation done on production equipment absolutely safe.

Solution Overview

Cisco Modeling Labs provides you the ability to:

  • Provision up to 20/40 Cisco NOS nodes (personal licensing) as you see fit
  • Execute "What if?" scenarios without having to pre-provision or purchase network hardware, improving service reliability
  • Develop IaC tools, Ansible playbooks and other automation on systems other than the production network
  • Leverage TRex (Cisco's network traffic generator) to make simulations more real
  • Deploy workloads to the CML fabric to take a closer look or add capabilities inside
  • Save and share labbed topologies
  • Do everything via the API. Cisco DevNet even supplies a Python client for CML to make the API adoption easy!
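
As a taste of the API, here's a hedged curl sketch (the hostname and credentials are placeholders, and the endpoints reflect CML 2.x as I understand them - verify against your version's API docs):

TOKEN=$(curl -sk -X POST -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "password"}' \
  https://cml.lab.local/api/v0/authenticate | tr -d '"')

curl -sk -H "Authorization: Bearer ${TOKEN}" https://cml.lab.local/api/v0/labs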

Some important considerations for CML:

  • Set up a new segment for management interfaces, ensuring that external CI Tooling/Ansible/etc can reach it.
  • VNFs are hungry. NX-OSv images are at the high end (8GB of memory each), and IOSv/CSR1000v will monopolize CPU. Make sure that plenty of resources are allocated
  • Leverages Cisco Smart Licensing. CML uses legitimate VNF images with legitimate licensing, but will need internet access 
  • CML does not provide SD-WAN features, Wireless, or Firepower appliance licensing, but does support deploying them

Let's Install CML! Cisco provides an ESXi (vSphere) compatible OVA:

After it's deployed, I assigned 4 vCPUs and 24 GB of memory, and attached the platform ISO. Search under software.cisco.com for Modeling Labs:

Once mounted, the wizard continues from there. CML will ask you for passwords, IP configurations, the typical accoutrements:

The installer will take about 10 minutes to run, as it copies all of the base images into your virtual machine. Once it boots up, CML has two interfaces:

  • https://{{ ip }}/ : The CML "Lab" Interface
  • https://{{ ip }}:9090/ : The Ubuntu "cockpit" view, manage the appliance and its software updates from here. Cisco's integrated CML into this GUI as well.

CML's primary interface will not allow workloads to be built until a license is installed. Licenses are fetched from the Cisco Learning Network Store under settings:

CML is ready to go! Happy Labbing!

Saturday, March 19, 2022

Deploy Root Certificates to Debian-based Linux systems with Ansible

 There are numerous advantages to deploying an internal root CA to an enterprise:

  • Autonomy: Enterprises can control how their certificates are issued, structured, and revoked independently of a third party.
    • Slow or fast replacement cycles are permissible if you control the infrastructure, letting you customize the CA to the business needs
    • Want to set rules for what asymmetric cryptography to use? Don't like SHA1? You're in control!
  • Cost: Services like Let's Encrypt break this a bit, but require a publicly auditable service. Most paid CAs charge per-certificate, which can really add up
  • Better than self-signed: Training users to ignore certificate errors is extremely poor cyber hygiene, leaving your users vulnerable to all kinds of problems
  • Multi-purpose: Certificates can be used for users, services, email encryption, getting rid of passwords. They're not just to authenticate web servers.

The only major obstacle to internal CAs happens to be a pretty old one - finding a scalable way to deliver the root Certificate Authority to the appropriate "trust stores" (they do exactly what it sounds like they do) on all managed systems. Here are a few "hot-spots" that I've found over the years, ordered from high-value, low-effort to low-value, high-effort. They're all worthwhile, so please consider this an "order of operations" and not an elimination list:

  • Windows Certificate Store: With Microsoft Windows' SChannel library, just about everything on the system will install a certificate in one move. I'm not a Windows expert, but this delivery is always the most valuable up-front.
  • Linux Trust Store: Linux provides a trust store in different locations depending on distribution base.
  • Firefox: Mozilla's NSS will store independently from Windows or Linux, and will need to be automated independently.
  • Java Trust Stores are also independently held and specific to deployed version. This will require extensive deployment automation (do it on install, and do it once).
  • Python also has a self-deployed trust store when using libraries like requests, but Debian/Ubuntu-specific packages are tweaked to use the system. There are a ton of tweaks to make it use the system store, but the easiest is to set the REQUESTS_CA_BUNDLE environment variable to point at your system trust store, as shown below.
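
For example, on Debian-based systems the aggregated system store lives at a well-known path:

export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt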

Hopefully it's pretty clear that automation is about to become your new best friend when it comes to internal CA administration. Let's outline how we'd want to tackle the Linux aspects of this problem:

  • Pick up the root certificate, and deliver from the Controller to the managed node
    • Either Git or an Artifacts store would be adequate for publishing a root certificate for delivery. For simplicity's sake, I'll be adding it to the Git repository.
    • Ansible's copy module enables us to easily complete this task, and is idempotent.
  • Install any software packages necessary to import certificates into the trust store
    • Ansible's apt module enables us to easily complete this task, and is idempotent.
  • Install the certificate into a system's trust store
    • Locations differ based on distribution. Some handling to detect operating system and act accordingly will be worthwhile in mixed environment
    • Ansible's shell module can be used, but only as a fallback. It's not idempotent, and can be distribution-specific.
  • Restart any necessary services

Here's where the beauty of idempotency really starts to shine. With Ansible, it's possible to just set a schedule for the playbook to execute in a CI tool like Jenkins. CI tools add some neat features here, like only executing on a source control change, which may not apply when using an artifacts store to deploy the root certificate.

In this example, I will be adding the play to my nightly update playbook to illustrate how easy this is:
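
A sketch of what that play can look like on Debian-based systems (the group, file names, and paths are placeholders for your environment):

- name: Deploy internal root CA
  hosts: debian_family
  become: true
  tasks:
    - name: Ensure trust store tooling is present
      ansible.builtin.apt:
        name: ca-certificates
        state: present
    - name: Copy the root certificate into the trust store staging directory
      ansible.builtin.copy:
        src: files/internal-root-ca.crt
        dest: /usr/local/share/ca-certificates/internal-root-ca.crt
        mode: "0644"
      notify: Update trust store
  handlers:
    - name: Update trust store
      ansible.builtin.command: /usr/sbin/update-ca-certificates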

After completion, this action can be tested by a wide variety of means - my favorite would be cURLing a web service that leverages the root CA:

curl https://nsx.engyak.co/
