Tuesday, June 18, 2024

Nornir, Netmiko, TextFSM


Automation of the physical network remains an elusive goal for many. While cloud networking (both private and public clouds of various flavours) has provided accessible network automation through SDN (Software Defined Networking), the physical network is still largely handled manually using the CLI (Command Line Interface) or via APIs (Application Programming Interfaces).

Products such as Arista's CloudVision and Cisco's SD-Access offer 'simplified' network automation, but these aren't the same as SDN. They're more like orchestration tools that can create/read/update/delete configurations on network devices through APIs; they aren't directly managing the control plane like SDN. They may know the state of the network (eg, routing, spanning-tree etc) but they cannot directly influence it without applying configuration changes to the devices.

With modern network devices you will find APIs such as NETCONF (Network Configuration Protocol)/RESTCONF (Representational State Transfer Configuration Protocol, think HTTP NETCONF), and gRPC (Google Remote Procedure Call). However, there are still many older network devices in service that do not support these modern programming interfaces.

Therefore we're left with finding imaginative ways to automate the network using CLI-based tools - having to programmatically deal with unstructured data that's designed for humans, not machines.

In this brief example I will show how we can use Nornir to run tasks against an inventory of network devices, Netmiko to connect to each device and run commands, and TextFSM to parse the unstructured output of the commands.

The Issue

Many processes of a network are distributed and stateful in nature, eg routing protocols, spanning-trees, multicast, etc. While you can discern how they will behave based upon the configuration, you won't know for sure until you observe them in operation. In this example I will automate a check to see if the desired spanning-tree root bridge is selected.

In this scenario, MST (Multiple Spanning-Tree) is used. Each switch needs to be checked for its root bridge value, to see if it's the expected value (which in this case is the upstream building distribution switch). This will show up any issues such as a misconfigured switch taking over as the root bridge or some other unexpected behaviour. Note the script will also work with PVST (Per VLAN Spanning-Tree).

The Solution

Before doing anything, ensure the Python virtual environment is set up:

~$ mkdir check_stp_root
~$ cd check_stp_root
~/check_stp_root$ python3 -m venv .venv
~/check_stp_root$ source .venv/bin/activate
(.venv) ~/check_stp_root$ pip install nornir nornir-utils nornir-netmiko textfsm

TextFSM Template

First, I find the appropriate CLI command to use. For spanning tree on a Cisco device there are a few, but I settled on 'show spanning-tree root'.

Cisco3560-10#show spanning-tree root

                                        Root    Hello Max Fwd
MST Instance           Root ID          Cost    Time  Age Dly  Root Port
---------------- -------------------- --------- ----- --- ---  ------------
MST0                 0 0053.002c.740a         0    2   20  15  Fa0/8        

The interesting part is the Root ID of 0053.002c.740a which is correct but we'll test with a different value to see how the output changes.

Now that we have the expected output, we can create a TextFSM template (called show_spanning-tree_root.template) that identifies the interesting parts and assigns them to values we can use in our script. 

Here is what I came up with:

Value VLAN (\S+)
Value Priority (\d+)
Value RootID (\S+)
Value RootCost (\d+)
Value HelloTime (\d+)
Value MaxAge (\d+)
Value FwdDly (\d+)
Value RootPort (\S+)

Start
  ^${VLAN}\s+${Priority}\s+${RootID}\s+${RootCost}\s+${HelloTime}\s+${MaxAge}\s+${FwdDly}\s+${RootPort} -> Continue
  ^${VLAN}\s+${Priority}\s+${RootID}\s+${RootCost}\s+${HelloTime}\s+${MaxAge}\s+${FwdDly} -> Record

This makes generous use of regular expressions (aka regex) like \s+ and \d+, however it's still less regex than would be needed without TextFSM, ie if we tried to parse the output directly in the script. There are plenty of regular expression references around, such as https://www.pythoncheatsheet.org/cheatsheet/regular-expressions .

The desired values are listed at the top, each with its own regex, and then referenced where they appear in the CLI output. E.g. ${RootID} is placed where I expect it to appear in the output. The -> Continue tells TextFSM to capture the data and keep matching the same line against the remaining rules, while the -> Record tells TextFSM to store the gathered values as a row.

Use https://textfsm.nornir.tech/ to help develop your template to avoid trial and error while writing your script.

It's worth checking https://github.com/networktocode/ntc-templates to see if there is a pre-made template.


For this scenario I'll use Nornir's SimpleInventory to provide the details of each switch. I will also use groups to store the expected root bridge ID. To make things a little more interesting, I'll create separate groups called building1 and building2 to show how you could specify different switches and root bridge IDs for each building.

First, create the config.yaml file (Nornir uses YAML, "YAML Ain't Markup Language", formatting for its configuration files):

inventory:
  plugin: SimpleInventory
  options:
    host_file: "hosts.yaml"  # Path to your hosts file
    group_file: "groups.yaml"  # Path to your groups file (optional)
    defaults_file: "defaults.yaml"  # Path to your defaults file (optional)

runner:
  plugin: threaded  # Or "serial" for sequential execution
  options:
    num_workers: 10  # Number of threads to use (adjust as needed)

logging:  # Optional: Configure logging
  enabled: True
  level: WARNING  # Or "DEBUG", "INFO", etc.
  to_console: True
  log_file: "nornir.log"

This tells Nornir that we're using the SimpleInventory plugin and it can find the inventory and associated particulars in the hosts.yaml, groups.yaml, and defaults.yaml files.

It also specifies the threaded runner with 10 workers, meaning it will work on 10 switches simultaneously.

A few other settings control how verbosely it logs events and where.

Then create the hosts.yaml:

switch1:
  hostname: ''
  platform: 'ios'
  groups:
    - 'access_switches'
    - 'building1'
switch2:
  hostname: ''
  platform: 'ios'
  groups:
    - 'access_switches'
    - 'building2'

Here we specify our network switches: the names, the IP addresses (the hostname values), the platform (important for Netmiko, in this case Cisco's IOS), and what groups each device is a member of (see groups.yaml below).


Next, the groups.yaml:

access_switches:
  platform: 'ios'

building1:
  platform: 'ios'
  data:
    expected_root_id: '0053.002c.740a'

building2:
  platform: 'ios'
  data:
    expected_root_id: '0053.002c.740a'

Here we specify the groups. I've redundantly specified the platform again; you can set this at the host or group level. The interesting part is the 'expected_root_id' value: you can place anything under data: for reference in your scripts.


Finally, the defaults.yaml:

port: 22  # Or the SSH port for your devices

This file is optional, but here we set the port to 22 for SSH access, which is the default anyway. If you were using a non-default port you would need to specify it here, or in the group or host files.

Now we have a TextFSM template and the Nornir configuration completed, we're ready to write the script.


Create a new script called check_stp_root.py .

Import the Python modules (libraries); the names are self-explanatory:

import textfsm
from getpass import getpass
from nornir import InitNornir
from nornir.core.inventory import Inventory
from nornir.core.task import Result
from nornir_utils.plugins.functions import print_result  # For nice output
from nornir_netmiko import netmiko_send_command
from nornir.core.filter import F

In this example I'm doing something a little different: I prompt for the credentials used to access the switches. You can set these in the Nornir SimpleInventory files (hosts, groups, or defaults) but that isn't very secure, so for now I just prompt. To make this script fully automated I suggest using something like HashiCorp Vault to securely manage the credentials, or perhaps environment variables.

# Prompt for credentials
username = input("Enter your username: ")
password = getpass("Enter your password: ")
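As a non-interactive alternative, the prompts can fall back to environment variables. A minimal sketch; the variable names NET_USERNAME and NET_PASSWORD are my own invention, not a Nornir convention:

```python
import os
from getpass import getpass

def get_credentials():
    """Prefer credentials from environment variables (hypothetical names
    NET_USERNAME / NET_PASSWORD), falling back to interactive prompts."""
    username = os.environ.get("NET_USERNAME") or input("Enter your username: ")
    password = os.environ.get("NET_PASSWORD") or getpass("Enter your password: ")
    return username, password
```

This keeps the script usable both interactively and from a scheduler, where the variables would be injected by the job runner or a secrets manager.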

Create a function we can use to inject the newly received credentials into the inventory:

# Custom function to load inventory and add credentials
def load_inventory_with_credentials(inventory: Inventory):
    for host in inventory.hosts.values():
        host.username = username
        host.password = password
    return inventory

Create a function that will be run as a task by Nornir. It calls Netmiko to connect to the device, runs the command "show spanning-tree root", and passes the output through TextFSM, which returns structured data we can use.

def get_stp_root_info(task):
    """Fetches and parses 'show spanning-tree root' using Netmiko."""
    result = task.run(
        task=netmiko_send_command, command_string="show spanning-tree root"
    )
    if result.result:  # Check if the command was successful
        with open('show_spanning-tree_root.template') as f:
            table = textfsm.TextFSM(f)
            task.host["stp_data"] = table.ParseText(result.result)  # Store data
    else:
        task.host["stp_data"] = []  # Store empty list on error
    return Result(host=task.host, result=result.result)  # Return result object

Create a function that checks the resulting data from TextFSM for the expected Root Bridge ID.

def check_stp_root(task, expected_root_id):
    """Checks STP root bridge info against a single expected root ID."""
    discrepancies = []
    stp_data = task.host.get("stp_data", [])  # Retrieve parsed data

    for vlan_data in stp_data:
        vlan = vlan_data[0]
        root_id = vlan_data[2]

        if root_id != expected_root_id:
            discrepancies.append(
                f"VLAN/MST {vlan}: Unexpected root bridge: {root_id} (expected: {expected_root_id})"
            )

    return discrepancies
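The comparison itself can be exercised on its own, without Nornir, against the row layout the template produces (VLAN at index 0, RootID at index 2). This is a standalone sketch of the same logic, not part of the script:

```python
# TextFSM rows in template order: VLAN, Priority, RootID, RootCost,
# HelloTime, MaxAge, FwdDly, RootPort
rows = [["MST0", "0", "0053.002c.740a", "0", "2", "20", "15", "Fa0/8"]]

def find_discrepancies(stp_rows, expected_root_id):
    """Same comparison as check_stp_root, minus the Nornir task plumbing."""
    return [
        f"VLAN/MST {row[0]}: Unexpected root bridge: {row[2]} (expected: {expected_root_id})"
        for row in stp_rows
        if row[2] != expected_root_id
    ]

print(find_discrepancies(rows, "0053.002c.740a"))  # [] - root bridge matches
print(find_discrepancies(rows, "0053.002c.741a")[0])
# VLAN/MST MST0: Unexpected root bridge: 0053.002c.740a (expected: 0053.002c.741a)
```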

Create a function that generates a report:

def generate_report(agg_result):
    """Generates a consolidated report for all hosts."""
    print("Spanning Tree Root Bridge Check Report:\n")
    for host, result in agg_result.items():
        print(f"Host: {host}")

        discrepancies = result.result  # Get result from check_stp_root
        if discrepancies:
            print("  Discrepancies:")
            for discrepancy in discrepancies:
                print(f"    - {discrepancy}")
        else:
            print("  No discrepancies found.")

        print("-" * 30)  # Separator between hosts

Then create a Nornir instance, using the function we created earlier to inject the credentials into the inventory:

# Nornir Configuration:

nr = InitNornir(config_file="config.yaml")  # Adjust path if needed

# Update inventory with credentials
nr.inventory = load_inventory_with_credentials(nr.inventory)

Then we kick off the tasks that will check each "building" group of switches for any Root Bridge ID discrepancies. Note how we reference the 'expected_root_id' value from Nornir's groups:

# Check STP for each building
for building in ["building1", "building2"]:
    print(f"Checking Building: {building}")
    building_group = nr.filter(F(has_parent_group=building))
    building_group.run(task=get_stp_root_info)
    # Check results against the group's expected_root_id value
    results_checks = building_group.run(
        task=check_stp_root,
        expected_root_id=nr.inventory.groups[building]["expected_root_id"],
    )
    generate_report(results_checks)

Running the script:

(.venv) ~/check_stp_root$ python3 check_stp_root.py

If the Root Bridge ID is matched, the output will be:

Enter your username: <username>
Enter your password: 
Checking Building: building1
Spanning Tree Root Bridge Check Report:
Host: switch1
  No discrepancies found.
Checking Building: building2
Spanning Tree Root Bridge Check Report:
Host: switch2
  No discrepancies found.

If I change the expected Root Bridge ID for building2 from 0053.002c.740a to 0053.002c.741a in the groups.yaml file, the output becomes:

Enter your username: admin
Enter your password: 
Checking Building: building1
Spanning Tree Root Bridge Check Report:
Host: switch1
  No discrepancies found.
Checking Building: building2
Spanning Tree Root Bridge Check Report:
Host: switch2
    - VLAN/MST MST0: Unexpected root bridge: 0053.002c.740a (expected: 0053.002c.741a)

As you can see, the script highlights the unexpected root bridge value and shows the expected value.

It's easy to build upon this script - creating additional functions that combine these tools to check and report on the network. You can also make changes to the devices based upon the checks - eg if the root bridge is wrong, check whether the priority is incorrect, change it, then recheck. Perhaps the root bridge ID could even be derived automatically instead of relying upon a hard-coded value.
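For instance, deriving the expected root ID could follow the STP election rule itself: lowest bridge priority wins, with the lowest MAC address as tie-breaker. A sketch of my own (the input data is hypothetical; it could be gathered by parsing each switch's own bridge ID with another TextFSM template):

```python
# Hypothetical (priority, bridge MAC) pairs gathered from each switch.
bridges = [
    (32768, "0053.002c.aaaa"),
    (0, "0053.002c.740a"),      # the distribution switch, priority 0
    (4096, "0053.002c.bbbb"),
]

def derive_expected_root(bridge_ids):
    """STP elects the bridge with the lowest priority, using the lowest
    MAC as tie-breaker; Python tuple comparison gives exactly that."""
    return min(bridge_ids)[1]

print(derive_expected_root(bridges))  # 0053.002c.740a
```

The derived value could then replace the hard-coded expected_root_id in groups.yaml.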


I've demonstrated how combining TextFSM, Nornir, and Netmiko can effectively automate a network that doesn't support contemporary APIs. If it has a CLI and Netmiko supports it, then it can be automated. However, you need to keep an eye out for any changes to the CLI and its output formatting, as these will break your scripts - this is where using APIs is a far better approach, as any changes to an API, if they can't be avoided, are generally well documented and communicated.

An alternative to using TextFSM and Netmiko directly is NAPALM. NAPALM abstracts the above by handling the connection and the gathering of data for you. An added benefit is that it returns the same data structures no matter what device is being accessed - eg a Juniper switch port's details will be presented the same way as a Cisco switch port's details. Therefore, you can write code that works on any device that NAPALM supports. Although in this particular case NAPALM doesn't have a way to retrieve spanning-tree information - perhaps it will be supported later.

Monday, May 06, 2024

Example Campus VRF Configuration

Quick Summary of a Cisco VRF-Lite Configuration for a Campus Environment


Traditional 3 tier architecture - core, distribution, access.

  • Central firewalls (HA Pair, Active/Passive)
  • Core switches (VSS pair)
  • Distribution switches (VSS pairs)

Virtual Routing and Forwarding

Campus is divided up into 6 VRFs (VRF-Lite):

  • Building Management Systems (BMS) - HVAC, FIPS, CCTV, Emergency Lighting, ECPs etc
  • Information & Communications Technology (ICT) -  Things unique to IT but don't assume secure
  • General Staff (STAFF) - General business staff
  • Affiliates (AFFIL) - Guests, 3rd parties
  • Edge (EDGE) - Printers, Audio/Visual equipment, Voice
  • Network Management (NETINF) - Switch management

Each VRF has a Route Distinguisher (RD), and Route Targets (RT Import/Export)

In this case, the RD and RT import/export are all the same per VRF:
  • BMS 65535:100
  • ICT 65535:110
  • STAFF 65535:120
  • AFFIL 65535:130
  • EDGE 65535:140
  • NETINF 65535:150
Each VRF has its own router process and therefore its own route table; in the example below, OSPFv2 has been used.

Firewall interfaces (Core switch - Firewall)

These are VLAN interfaces trunked over a LAG between the core switches and the firewalls.

  • BMS VLAN 3000
  • ICT VLAN 3010
  • STAFF VLAN 3020
  • AFFIL VLAN 3030
  • EDGE VLAN 3040
  • NETINF VLAN 3050

Core switch interfaces (Per Building: Core switch - Distribution Switch)

These are P2P VLANs on a LAG between the core switches and the distribution switches. One per VRF, per building. So the first building gets VLANs 2010, 2100, 2200, 2300, 2400, 2500, the second building gets VLANs 2011, 2101, 2201, 2301, 2401, 2501 and so on.

  • BMS VLAN 2010 - 2099
  • ICT VLAN 2100 - 2199
  • STAFF VLAN 2200 - 2299
  • AFFIL VLAN 2300 - 2399
  • EDGE VLAN 2400 - 2499
  • NETINF VLAN 2500 - 2599
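The numbering scheme above is deterministic, so it can be generated rather than tracked by hand. A small sketch (my own helper, not part of any vendor tooling):

```python
# Base core-to-distribution P2P VLAN per VRF (first building);
# each subsequent building adds 1 to the base.
VRF_BASE_VLANS = {
    "BMS": 2010, "ICT": 2100, "STAFF": 2200,
    "AFFIL": 2300, "EDGE": 2400, "NETINF": 2500,
}

def p2p_vlans(building_index):
    """Return the P2P VLAN per VRF for the Nth building (0-based)."""
    return {vrf: base + building_index for vrf, base in VRF_BASE_VLANS.items()}

print(p2p_vlans(0)["BMS"])  # 2010 - first building
print(p2p_vlans(1)["BMS"])  # 2011 - second building
```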

Distribution switch interfaces (one pair per building ie VSS/VLT/MCLAG)

These are the access VLANs. They are what endpoints/clients will be using. I use the same VLANs per each building because the boundary is the distribution switches.

  • BMS VLAN 1000 - 1009
  • ICT VLAN 1010 - 1019
  • STAFF VLAN 1020 - 1029
  • AFFIL VLAN 1030 - 1039
  • EDGE VLAN 1040 - 1049
  • NETINF VLAN 1050 - 1059

IP Schema and Routing

In the examples below I have used a Class A RFC1918 address range and OSPFv2 routing.

Example Core and Distribution Switch VRF-Lite configuration

Using the AFFIL VRF as an example. To create the other VRFs, simply copy the configuration while changing the identifiers/numbers/addresses to suit.

Core Switch

A static default route to the firewall's AFFIL VLAN interface is used.

ip vrf AFFIL
 description Affiliates
 rd 65535:130
 route-target export 65535:130
 route-target import 65535:130

ip multicast-routing vrf AFFIL 

vlan 3030
 name AFFIL_P2P_FW

vlan 2300
 name AFFIL_P2P_BldA

interface Loopback130
 description Loop Back AFFIL
 ip vrf forwarding AFFIL
 ip address
 no ip proxy-arp
 ip pim sparse-mode
 ip ospf 130 area 0

interface Loopback131
 description Multicast RP AFFIL
 ip vrf forwarding AFFIL
 ip address
 no ip proxy-arp
 ip pim sparse-mode
 ip ospf 130 area 0

interface Vlan3030
 description Firewall AFFIL
 ip vrf forwarding AFFIL
 ip address
 no ip redirects
 no ip proxy-arp

interface Vlan2300
 description Building A AFFIL
 ip vrf forwarding AFFIL
 ip address
 no ip redirects
 no ip proxy-arp
 ip ospf 130 area 0

router ospf 130 vrf AFFIL
 capability vrf-lite
 passive-interface default
 no passive-interface Loopback130
 no passive-interface Loopback131
 no passive-interface Vlan2300
 default-information originate always

ip pim vrf AFFIL rp-address override

ip route vrf AFFIL

Associate the appropriate VLANs with the Firewall and the distribution switch interfaces.

Distribution Switch

Layer 3 between core and distribution. Layer 2 between distribution and access. VLANs 1030 - 1032 are the SVIs for the access networks for the building - these will be trunked/tagged to each access switch/stack and associated on each port as appropriate as an access/untagged VLAN.

ip vrf AFFIL
 rd 65535:130
 route-target export 65535:130
 route-target import 65535:130

ip multicast-routing vrf AFFIL 

interface Loopback130
 description General Management Loop Back AFFIL
 ip vrf forwarding AFFIL
 ip address
 no ip proxy-arp
 ip pim sparse-mode
 ip ospf 130 area 0

interface Vlan1030
 description AFFIL_VLAN1030
 ip vrf forwarding AFFIL
 ip address

interface Vlan1031
 description AFFIL_VLAN1031
 ip vrf forwarding AFFIL
 ip address

interface Vlan1032
 description AFFIL_VLAN1032
 ip vrf forwarding AFFIL
 ip address

interface Vlan2300
 description AFFIL_P2P_BldA
 ip vrf forwarding AFFIL
 ip address
 no ip redirects
 no ip proxy-arp
 ip pim sparse-mode
 ip ospf network point-to-point
 ip ospf 130 area 0

router ospf 130 vrf AFFIL
 redistribute connected subnets
 passive-interface default
 no passive-interface Loopback130
 no passive-interface Vlan1030

ip pim vrf AFFIL rp-address

Associate the appropriate VLANs with the core switch interfaces and downstream access switches.

Thursday, December 21, 2023

Migrating to VMware NSX Advanced Load Balancer (Avi)


Over the past couple of months we have been working with VMware to migrate from a pair of Citrix Netscaler Application Delivery Controllers (ADC) appliances to a VMware NSX Advanced Load Balancer (NSX ALB) solution. It has been a smooth transition with the product achieving what was said on the packaging and the professional service team providing great service and technical insight on both the proof of capability and migration.

Planning and Procurement

Last year we had identified in our planning that the Netscalers are due for their End of Life in January 2024. For the requirements, outside of typical ADC features and capabilities, I wanted a cloud ready solution, WAF capabilities, and certificate automation. Given this, we kicked off a market discovery effort to see what’s available. We reached out to F5, Citrix, Fortinet, and VMware, arriving at the following conclusions:

  • I have used F5 BigIP LTM in the distant past and found them to be solid and that appears to still be the case - however they are a substantial investment.
  • Citrix, being the incumbent, provides familiarity and a simplified migration. However I found their support challenging, substantial increases in prices a concern, and their management and analytics platform wasn’t performing for us.
  • We're very happy with Fortinet for our network security services. However, their strength lies in firewalls, SD-WAN, and many other areas but their FortiADC was a little behind its competitors in the functionality that I was looking for. Fortinet was very helpful in providing a VM license for me to try it out and in answering any questions I had.
  • VMware was considered because we, with RiOT Solutions, had deployed an NSX-T environment as part of a Data Centre refresh project, and at that time they had just acquired Avi Networks and were rolling the Avi Vantage load balancer into the NSX portfolio. The pending Broadcom merger was worrying, but given that we're heavily invested with VMware, any transition would be slow and beyond the lifespan of this deployment.

Given that we have yet to establish a cloud strategy I figured that a solution that can accommodate all combinations of on-premise, co-location, private, and public clouds would save effort and position us well for whatever hosting solution we settled upon. F5 and Citrix have capabilities that accommodate public clouds but NSX ALB could be considered a cloud native solution and excels in these environments while providing excellent support for traditional on-premise infrastructure.

We produced a position paper with the above options, including costs, with the recommendation to go with NSX ALB. It was an easy recommendation to make as it was substantially cheaper than F5 and Citrix for the same, if not better, capabilities and support.

Prior to purchasing NSX ALB we engaged VMware to run a facilitated proof of capability environment within our environment. At the time of writing this is a free service providing a number of professional service hours and various documents outlining what will be tested and the outcomes of the tests. I found the PoC valuable in that it allowed me to become intimate with the solution and how it applies to our needs. The PoC also allowed me to demonstrate the proposed solution to the team and other IT stakeholders which carries more weight when it is operating within our environment with our test applications. The information obtained from the PoC helped with the high level and detailed design stage of the actual implementation.

If you do not wish to run a formal PoC but would like to spin up a test environment, the controller image comes with a 30 day evaluation licence and a trial licence that, in my case, runs until January 1st 2035. The difference is the number of vCPUs available for the Service Engines (SEs): 20 for the evaluation, 2 for the trial. I'll explain licences later.

Note: I also looked at CloudFlare as we use it for public DNS. I have utilised its more advanced features for other organisations and they can provide similar capabilities to the above products - however the adoption would entail quite a mind shift in this case but I will certainly look at CloudFlare in future.


The Netscalers consist of a pair of physical appliances, MPX8015s, in an active/standby HA configuration. An MPX8015 is capable of up to 6Gbit/s TLS/SSL throughput. We use 'AppExpert', which is basically HTTP responder/rewrite rules, and 'Traffic Management', which makes up the Virtual Services and Pools. There are no security, automation, or scalability considerations to be had with the Netscalers.

Citrix Netscaler implementation

System requirements for the NSX ALB controllers vary depending on where you wish to deploy them, the number of SEs, the number of Virtual Services, and the desired amount of logging and analytics. In this case, deploying to vSphere, I opted for 16 vCPU, 32GB RAM, and 256GB disk per controller node. For the SEs we went with 2 vCPU, 4GB RAM, and 25GB disk each, in 2 active/active pairs (4 SEs total). This provides 16Gbit/s TLS/SSL throughput overall, 8Gbit/s per SE pair.

Active/active was chosen to best meet our application availability and performance needs. Active/active provides the least outage time in case of SE failure and the highest performance, as both SEs are serving traffic. The other options are active/standby, which provides the best recovery time but the least performance, and N+M, where N = the minimum number of SEs and M = the number of 'buffer' SEs available to take over should something happen to the N SEs.

5 virtual services placed on an active/active SE group consisting of 6 SEs. During a fault, all virtual services continue to operate, although some experience degraded performance.

Elastic HA N+M group with 20 virtual services, before and after a failure. Virtual Services per Service Engine = 8, N = 3, M = 1, compact placement = ON.
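The N+M numbers in that example can be sanity-checked with a quick calculation (my own sketch of the sizing arithmetic, not a VMware tool):

```python
import math

def n_plus_m_ses(num_virtual_services, vs_per_se, buffer_ses):
    """N+M sizing: N = SEs required to place all virtual services,
    M = spare buffer SEs; returns (N, total SEs)."""
    n = math.ceil(num_virtual_services / vs_per_se)
    return n, n + buffer_ses

# The example above: 20 virtual services, 8 per SE, M = 1
print(n_plus_m_ses(20, 8, 1))  # (3, 4) - N = 3, plus 1 buffer SE
```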

VMware recommended that a pair of SEs be used for each of our DMZ networks. This works out well because our services are evenly distributed across both DMZ networks and therefore the load is evenly shared. Additionally there is less ‘shared fate’ between the two DMZ networks, making them a little more distinct from each other.


NSX ALB implementation



Deploying NSX ALB into an on-premise vSphere environment is straightforward. Deploy three controller OVAs, set up the first one and then bring in the other two to automatically establish a cluster. Then go through and configure all the usual items such as DNS, NTP, Authentication, notifications, logging etc.

Next is the licensing and connecting the controller to the Avi Pulse Portal if you are after the advanced licence features such as proactive support, threat feeds, and central management of controllers (useful if say you have an on-premise controller, and a cloud based controller or SaaS and want a single management interface). The licensing is on a per vCPU basis. In our case we deployed 4 SEs with 2 vCPUs each, meaning we needed 8 licences. There are four tiers of licences - essential (for VMware Tanzu), basic, enterprise, and enterprise with cloud services. For most it will be a decision between the enterprise and enterprise with cloud services - the latter requiring a connection to Avi Pulse. Licences are ‘checked out’ by the controller from the portal, and then assigned to the SEs. This occurs on a regular basis and provides a reasonable grace period should it not be able to access the portal. If a SE becomes ‘unlicensed’ then it will keep working in the same way, but it will not come back should they be restarted or deleted.

After the controllers are established, you connect them to vSphere via its API. This is shown as a 'cloud' - the controllers can manage many environments, each shown as a separate cloud. Using this connection, the controllers will automatically deploy and provision the Service Engines (SEs) as defined by the Service Engine group configuration. The SEs are the workers; they handle the actual application delivery/load balancing.

You may download a web server image, perf-server-client.ova, to test NSX ALB. You can find it in the same location as the controller images (https://portal.avipulse.vmware.com/software/additional-tools). This image comes with various utilities for testing a load balancer such as Iperf, ApacheBench, and files of various sizes to download. I used a few of these to test out the controller and SEs prior to the migration. Note that Service Engines won’t be deployed unless a virtual service needs them so it’s a good idea to use a test virtual service to kick off the deployment.


perf-server-client.ova default web page



Once the controllers and SEs are set up and tested, the migration from the Netscalers can begin. In this case VMware provides a docker image that has the necessary scripts/tools to handle a migration from other load balancers to the NSX ALB. You can find it here: https://github.com/vmware/nsx-advanced-load-balancer-tools . In our case a live migration was recommended (connecting directly to the Netscalers API) as this will migrate the certificates, otherwise you can download the Netscaler configuration file and use that as the source.

The migration was a three step process - connect and download the configuration from the Netscalers, translate the configuration to NSX ALB, then upload the configuration to the NSX ALB controllers. It was relatively easy but does require attention and a systematic approach. Best to track each virtual service/application in a spreadsheet.

Run up the Avitools docker image with an interactive shell:

cd ~
mkdir migrationtool
docker pull avinetworks/avitools:latest
docker run -td --hostname avitools-22.1.4 --name avitools -w /opt/avi -v ~/migrationtool:/opt/avi --net=host avinetworks/avitools:latest bash

Netscaler configuration conversion command which downloads and converts the configuration (ensure the Netscaler user account has appropriate permissions):

user@avitools-22:/opt/avi# netscaler_converter.py --ns_host_ip --ns_ssh_user <username> --ns_ssh_password <password> --not_in_use --tenant admin --controller_version 22.1.5 --cloud_name vcenter01 --segroup sec-aa-dmz1 --ansible -o config_output --vs_level_status --vs_filter FirstVSName_lb,SecondVSName_lb,ThirdVSName_lb

The output will be something like:

133537.211: Log File Location: config_output_20231108
133537.221: Copying Files from Host...
133546.399: Parsing Input Configuration...
133609.875: Progress |##################################################| 100.0% \
133610.139: Converting Monitors...
133610.259: Progress |##################################################| 100.0%
133610.259: Converting Profiles..
134952.117: Progress |##################################################| 100.0%
134952.178: Converting Pools...
134953.193: Progress |##################################################| 100.0%
134953.193: Converting VirtualServices...
134953.335: Progress |#######-------------------------------------------| 15.3%
134953.335: /usr/local/lib/python3.8/dist-packages/avi/migrationtools/netscaler_converter/policy_converter.py:574: FutureWarning: Possible nested set at position 8
134953.335: matches = re.findall('[0-9]+.[[0-9]+.[0-9]+.[0-9]+', query)
135011.515: Progress |#################################################-| 99.8% \Generating Report For Converted Configuration...
135025.660: Progress |##################################################| 100.0%
135032.026: SKIPPED: 435
135032.026: SUCCESSFUL: 2761
135032.027: INDIRECT: 1355
135032.028: NOT APPLICABLE: 161
135032.029: PARTIAL: 214
135032.030: DATASCRIPT: 45
135032.030: EXTERNAL MONITOR: 109
135032.031: NOT SUPPORTED: 54
135032.033: MISSING FILE: 0
135032.033: Writing Excel Sheet For Converted Configuration...
135618.648: Progress |##################################################| 100.0% \
135634.108: Total Objects of ApplicationProfile : 4 (5/9 profile merged)
135634.108: Total Objects of NetworkProfile : 6 (2/8 profile merged)
135634.110: Total Objects of SSLProfile : 10 (186/196 profile merged)
135634.110: Total Objects of PKIProfile : 0
135634.110: Total Objects of ApplicationPersistenceProfile : 8 (90/98 profile merged)
135634.110: Total Objects of HealthMonitor : 56 (36/92 monitor merged)
135634.110: Total Objects of SSLKeyAndCertificate : 176
135634.110: Total Objects of PoolGroup : 446
135634.110: Total Objects of Pool : 575
135634.110: Total Objects of VirtualService : 421 (369 full conversions)
135634.110: Total Objects of HTTPPolicySet : 198
135634.110: Total Objects of StringGroup : 0
135634.110: Total Objects of VsVip : 231
135634.110: VServiceName-SSL_lb(VirtualService)
135634.110: |-
135634.110: |- enforce_STS_polXForwardFor_Add_pol-VServiceName-SSL_lb-clone(HTTPPolicySet)
135634.110: |- VServiceName-SSL_lb-poolgroup(PoolGroup)
135634.111: | |- VServiceName-SSL_lb(Pool)
135634.111: | | |- ping_mon(HealthMonitor)
135634.111: |- ns-migrate-http(ApplicationProfile)
135634.111: |- testcertificate(SSLKeyAndCertificate)
135634.111: |- Merged-ssl_profile-KOc-3(SSLProfile)

The output will list all the Virtual Servers converted, as specified by the -vs_filter parameter.

This will create yml files in /opt/avi within the container (~/migrationtool on the host); ‘avi_config_create_object.yml’ is the conversion output ready to be applied to NSX ALB.

Ansible playbook to apply configuration to NSX ALB (ensure the NSX ALB user account has appropriate permissions):

ansible-playbook avi_config_create_object.yml -e "controller= username=<username> password=<password>" --skip-tags SomeUnwanted-VS_lb

Once all the configuration was migrated onto the NSX ALB controllers it was necessary to go through and clean up the configuration - removing redundant items (such as HTTP-HTTPS redirect rules that are handled by the Application profile), renaming items to suit our conventions, and so forth. Then it was a case of disabling the virtual IP (VIP) on the Netscaler and enabling the virtual service (VS) on the NSX ALB. This was done in batches, starting with the development/test environments and then production; each batch was spread across a number of maintenance windows.

At the end there were only a handful of items that needed revisiting. These were primarily related to moving to an ‘active/active’ Service Engine configuration, which meant we couldn’t rely upon a single Source NAT address when talking to the backend hosts (a minimum of one per SE). I also took the opportunity to optimise the TLS/SSL profile to only allow TLS 1.2/1.3 and to enable various cross-site scripting and cookie protections; some applications didn’t take too well to these features, so I fixed those on a case-by-case basis. Also keep an eye on any HTTP request/response policies and make sure they’re migrated correctly.


Qualys SSL Labs Report with the new TLS/SSL and Application Profile changes


Certificate Automation

With the migration completed and everything testing okay I turned my focus onto the TLS/SSL certificate automation capabilities of the NSX ALB. Out of the box it provides a Let’s Encrypt automation that works as is. However, in our case we utilise a different CA that, while providing an ACME-compatible API, requires External Account Binding (EAB).

I adapted the existing Let’s Encrypt automation script to support EAB and have been testing it successfully. Many CAs require the use of EAB with ACME, so this should prove a useful automation for others; for example, I have tested it with ZeroSSL without issue. Certificate automation is going to save us approximately 600 hours a year and reduce the potential downtime and reputational damage caused by expired or incorrect certificates.
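For reference, EAB registration supplies a CA-issued key identifier and HMAC key alongside the ACME account key. A hypothetical illustration using acme.sh (a standalone ACME client, shown only as a familiar example rather than the NSX ALB ControlScript itself; the directory URL, kid and key values are all placeholders):

```shell
# Hypothetical: register an ACME account with a CA that requires
# External Account Binding. The CA issues the kid and HMAC key per
# account (ZeroSSL, for example, provides these in its dashboard).
acme.sh --register-account \
        --server https://acme.example-ca.com/directory \
        --eab-kid <key-identifier-from-CA> \
        --eab-hmac-key <base64url-encoded-hmac-key>
```

The adapted ControlScript passes the equivalent EAB values during its account registration step.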


ACME Certificate Request Workflow

It’s possible to automate certificates using tools such as Ansible; however, having this done ‘on box’ means less effort and fewer moving parts. For example, automatic certificate renewal is triggered by a local system event - there is no need to poll, use external events, or monitor certificate validity periods.

Security Capabilities

ADCs are well placed to apply security functionality, in that they can see the unencrypted data between clients and servers without resorting to the ‘man-in-the-middle’ techniques a firewall must use to snoop the traffic. Key security features offered by NSX ALB with the top-tier licence are IP Reputation and Geographic Location databases sourced from Webroot, a Web Application Firewall (WAF) Application Rules service sourced from Trustwave (along with signature lists), and a WAF auto-learn capability.

Additionally, NSX ALB enables Denial of Service (DoS) protection by default at various layers of the network stack.

My view on any kind of WAF functionality is that it must be simple to manage and update - not a matter of manually picking through lists of signatures or needing a deep understanding of the application. Therefore the application rules and auto-learn capabilities are what I will be looking to implement shortly.


As we migrate to the cloud we will be able to easily shift applications over, as the NSX ALB can integrate with our DNS and IPAM services to automate DNS records and IP re-addressing. The ability to scale out and in automatically will offer us cost savings in public clouds too. When we migrate it is likely we will utilise the SaaS controller and leverage the automation capabilities to ensure applications can be deployed in a seamless and timely manner. After that I will likely compare Cloudflare’s and NSX ALB’s Global Server Load Balancing capabilities, with the aim of improving services to our international students.

To summarise, we now have a modern, cloud ready, scalable, application delivery platform that has done away with physical appliances and has uplifted our automation and security capabilities. I can recommend VMware’s NSX Advanced Load Balancer and their professional services team.

Special thanks to the project team that made all this possible. 

Feel free to reach out if you have any questions about NSX ALB.

Saturday, September 03, 2011

F5 BIGIP LTM Reboot Script

In an effort to ensure the best performance and stability of our two BIGIP LTM 6400 Load Balancers I have created a script to synchronise and reboot the units regularly.

This script runs a series of checks before rebooting the unit.
  1. Check Active/Standby state based upon the output of bigpipe failover show 
  2. Check Peer status (up/down) - based upon the result of ping -c 1 -w 5 peer ('peer' is the hostname of the peer BIGIP) 
  3. Check the uptime to see when the last time the unit was started, if under a given period then don't reboot 
  4. Check configuration synchronisation status based upon the output of bigpipe config sync show 
If the configuration is not in sync then it will attempt to synchronise the configuration using bigpipe config sync all and check the synchronisation status again. If the configuration is still not in sync it will exit and not reboot the unit.
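The checks above can be sketched as follows. This is a minimal sketch, not the original script: the grep patterns, uptime threshold and overall reboot policy are assumptions.

```shell
#!/bin/bash
# Sketch of the pre-reboot checks described above. Command output
# patterns, the uptime threshold and the reboot policy are assumptions.

MIN_UPTIME_SECS=86400                 # assumed: skip reboot if up < 1 day
PEER=peer                             # hostname of the peer BIGIP
RESULT_FILE=/tmp/reboot-cron-job-result

log() {
    # Write to STDOUT, the result file and syslog, as the post describes
    echo "$*" | tee -a "$RESULT_FILE"
    logger -p local0.notice -t BIGIP-ADMIN-SCRIPT "$*"
}

peer_up() {
    ping -c 1 -w 5 "$PEER" > /dev/null 2>&1
}

uptime_ok() {
    # Takes uptime in seconds as $1, defaulting to this host's uptime
    local up=${1:-$(cut -d. -f1 /proc/uptime)}
    [ "$up" -ge "$MIN_UPTIME_SECS" ]
}

config_in_sync() {
    bigpipe config sync show | grep -qi "in sync"
}

main() {
    : > "$RESULT_FILE"
    log "Failover state: $(bigpipe failover show)"
    peer_up   || { log "Peer $PEER unreachable - aborting"; exit 1; }
    uptime_ok || { log "Uptime below threshold - aborting"; exit 1; }
    if ! config_in_sync; then
        log "Config not in sync - running: bigpipe config sync all"
        bigpipe config sync all
        config_in_sync || { log "Still not in sync - aborting"; exit 1; }
    fi
    log "All checks passed - rebooting"
    mail -s "BIGIP reboot: $(hostname)" user@domain.tld < "$RESULT_FILE"
    reboot
}

# Only act when invoked with --run, so the functions can be tested safely
if [ "${1:-}" = "--run" ]; then
    main
fi
```

Running it from cron would then be a matter of calling the script with --run at the start of the maintenance window.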

Each check/task outputs to STDOUT and to syslog (facility: local0.notice, tag: BIGIP-ADMIN-SCRIPT). A result file (/tmp/reboot-cron-job-result) is also written; it is left in place until the next run and e-mailed to 'user@domain.tld' (change this to suit your environment).

The same reboot.sh script is used on each unit.

Still some tidying up to do - such as using a lockfile, and better error handling with 'set -e', 'set -u' and traps.


I run this script using a cron job that occurs at the start of our weekly maintenance window. You can also use this script as a safe way to force a failover and reboot.

Monday, July 04, 2011

F5 BIGIP LTM Maintenance Page Update for v10

The folks at F5 devcentral have kindly provided a number of 'Maintenance Page' examples that allow you to host a page directly from the BIGIP LTM and display it automatically when all pool members go off-line. The example I used is http://devcentral.f5.com/wiki/default.aspx/iRules/LTMMaintenancePage.html (login required, registration is free).

However there are a few changes required to get it working with the latest version of TMOS (v10).

Follow the instructions provided in the aforementioned link and change them as follows:

Create iRule Data Groups with the following information:


General Properties
Name: maint_index_html_class
Partition: Common
Type: (External File)

Path/Filename: /var/class/maint.index.html.class
File Contents: String
Key/Value Pair Selector: :=
Access Mode: Read/Write

The file will need to look like the following (add "index.html" := to the beginning of the existing example):
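(The original example file isn't reproduced here; its general shape, with the base64-encoded page as the value, is along these lines - the value is a placeholder:)

```
"index.html" := "<base64-encoded maintenance page HTML>"
```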


General Properties
Name: maint_index_logo_class
Partition: Common
Type: (External File)

Path/Filename: /var/class/maint.logo.png.class
File Contents: String
Key/Value Pair Selector: :=
Access Mode: Read/Write

The file will need to look like the following (add "logo.png" := to the beginning of the existing example):
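(Again, the original example file isn't reproduced here; its general shape is along these lines - the value is a placeholder:)

```
"logo.png" := "<base64-encoded PNG data>"
```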

  • Replace [lindex $::maint_index_html_class 0] with [class element -value 0 maint_index_html_class] 
  • Replace [b64decode [lindex $::maint_logo_png_class 0]] with [b64decode [class element -value 0 maint_index_logo_png_class]] 

Tuesday, May 31, 2011

F5 BIGIP and Blackboard Collaboration Server

Blackboard Collaboration Server is a separate, optional, web server that provides virtual classroom and chat tools. As part of the university’s Blackboard application upgrade I have been asked to develop a way to add resilience to the collaboration server side of the application where possible.

The brief is to provide failover only, because the collaboration server is not “load balancing aware”: it assumes it will be hosted on a single host. To provide rudimentary failover capability I have set up a method that will switch all sessions to another host should the active host fail. Clients will then stay on the new host until it fails, and only then will all sessions switch to the other. The key word here is ‘all’, because it’s important to keep all sessions on the same host.

From a user’s perspective, in the event of an active host outage they will lose connectivity but will be able to log back in straight away and continue until such time as the alternative host fails. This prevents them from being switched over only to be kicked again when the prior host is restored, and also ensures that ALL sessions are sent to a single host rather than spread across multiple hosts, so everyone is in the same chatrooms.

My first idea was to adapt BIGIP’s Priority Group capability; however, this presented the same problem: I could not ‘stick’ the clients to a server. As soon as a server of the same or higher priority was restored, sessions would be sent to it, effectively splitting the chat rooms. Load balancing also takes place across member servers of the same priority.

So I did a bit of digging around and discovered a method of using an iRule to give me the capability to ‘stick’ sessions based upon an arbitrary number - in this case, the TCP port number.

The iRule is as follows:

CLIENT_ACCEPTED is an event that is triggered when a connection has been established between a client device and the BIGIP.

‘persist uie’ is where I am manipulating connection persistence using the Universal Inspection Engine. Here I am simply setting an integer key - it can be any number, but I have chosen the connecting TCP port number ([TCP::local_port]). This fixes session persistence to a single host, preventing load balancing.
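Putting the pieces above together, the iRule amounts to something like this (a reconstruction based on the description, not the original listing):

```tcl
when CLIENT_ACCEPTED {
    # Persist on a fixed, arbitrary key so every session lands on the
    # same pool member; the connecting TCP port is used as that key.
    persist uie [TCP::local_port]
}
```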

The following BIGIP configuration has been tested as working by a business systems analyst using a combination of application logs, BIGIP statistics and packet captures. He confirmed which traffic was being sent on which ports - port 8010 carries the majority of user-generated traffic that must be kept on a single host, while port 8443 transports application-specific information that is not user generated and therefore does not require persistence.

The aforementioned iRule is referenced by a ‘Universal Persistence’ profile as follows:

And then reference that Universal Persistence profile from a Performance (Layer 4) type Virtual Server like so:

Another Virtual Server is required for HTTPS traffic; however, this does not require any special configuration and is set up as a typical HTTP type Virtual Server, e.g.:

The above configuration refers to a six member/node pool. Each member runs both the general Blackboard application and the Collaboration Service. We have yet to load test the combination of the application and collaboration services and how they influence the way the BIGIP balances load across the members - I am considering using ‘Observed (node)’ as opposed to the current ‘Observed (member)’ method, since the same nodes are used in multiple pools. At some stage I would also like to look at using Dynamic Ratio, if it can play nicely with persistent connections.


Also take note of:

Saturday, May 28, 2011

Windows Wireless Clients and the X6148V-GE-TX Ethernet Switching Module

Burnt hard by a bug that exists in a place that makes plenty of sense when you find it but not so much when you’re looking at the symptoms.

I was tasked with establishing an EduRoam presence at a University. Since there was already a suitable wireless infrastructure in place all I needed to do was build a FreeRADIUS server, hook it into the EduRoam federated RADIUS and point the two Cisco 4404 controllers dressed as a WiSM (Wireless Services Module) at it so they authenticate EduRoam clients. Easy!

Getting FreeRADIUS communicating nicely with EduRoam was made more difficult than it needed to be. The configuration information provided from EduRoam was sketchy and inaccurate. It wasn’t until I decided to chuck it out and build the FreeRADIUS configuration from scratch that it worked. EduRoam have some strange ideas on what should be sent on the outer TLS tunnel... it’s the inner tunnel that’s important, the other is just establishing an anonymous TLS connection to the local RADIUS server which will then pass the inner-tunnel to their home campus RADIUS.

Okay, that was a bit tedious; however, that should be the hard part over with. Authentication was working nicely with the local LDAP directory (Novell eDirectory) and with other federated entities, tested with accounts from James Cook University, AARNet and the Australian Catholic University. All that remained was the simple task of setting up a WLAN on the WiSM and confirming that it works with EduRoam, as I had been using my trusty Mikrotik RouterBoard RB433 for testing until then. I associated a laptop with the new WLAN, went to open Google, and was presented with a rather slow web experience that would basically stall on the first image that tried to load. However pings were fine, so end-to-end connectivity was all there.

Odd. Maybe I left something out/in, or perhaps the RADIUS was setting some kind of QoS value on the controllers that I wasn’t aware of. Checked all that out - nope, all good. Maybe it’s the laptop? Try a little netbook running Jolicloud - works fine. Okay, let’s check with another laptop - Win7 - fail! MacBook - works! A Windows wireless client + WiSM + EduRoam problem?? Hang on, let’s try the Intranet - works! Let’s try a proxy server - works! This is getting annoying, so it’s a Windows wireless client + WiSM + EduRoam + FWSM/NAT + Internet problem??

The next 8 months consisted of running every conceivable check on the data path between a Windows wireless client and the Internet. The Cisco TAC had crawled over the WiSM - all good; the FWSM - hmm, old untrusted software, install another one! Test again - all good; even the ASR - nope, all good.

So I figured that it must be something I’m just not doing right. I blew away my test environment, which consisted of a C4402 wifi controller, C1131AG/C1142N LWAPs and the second FWSM running the latest software, and rebuilt it. However, when I did this I had physically relocated all the kit (except the FWSM, of course) from the data centre to the foyer just outside. In doing so I had disconnected the C4402 from the C6513 and plugged it into a C3750 I had set up for the link between the APs and the controller, with a trunk back into the general network. This configuration worked!

The test environment at this stage 
So what did introducing a C3750, or simply moving the kit elsewhere on the network, do to fix the issue? This made me think there was something suss going on with the chassis and/or the connecting switching modules.

By now the TAC had grown tired of my pokes and prods, so I gave our Cisco account manager a nudge. The SR was escalated, and an e-mail CC’d to ‘Cisco Australia’ popped into my inbox from the Cisco switching team asking for a WebEx session so they could waterboard the 6513 chassis that housed the WiSM and FWSM.

The phone call started at 10am Monday morning and didn’t end until 3pm.

We worked through each stage of the data path again. Luckily they had the history of all the other tests I had done, so I didn’t have to repeat many of the captures. We narrowed it down to the X6148V-GE-TX switching module. This was the one element that shared something in common with all the different combinations I had tried: the C4402 test controller was connected to it, along with the link to the ASR/Internet. So I connected the C4402 to a port on the module (issue present, not working) and ran a capture, then moved the C4402 to an X6724-SFP module (no issue present, working) and ran another capture. The TAC guys then ran a comparison between the two captures. It seems the X6148 was silently dropping packets - small ones, particularly ACKs from the client - on egress towards the ASR/Internet.

Seems we had hit Cisco bug CSCeb67650:

WS-X6548-GE-TX & WS-X6148-GE-TX may drop frames on egress 
Packets destined out the WS-X6548-GE-TX or the WS-X6148-GE-TX that are less than 64 bytes will be dropped. This can occur when a device forwards a packet that is 60 bytes and the 4 byte dot1q tag is added to create a valid 64 byte packet. When the tag is removed the packet is 60 bytes. If the destination is out a port on the WS-X6548-GE-TX or the WS-X6148-GE-TX it will be dropped by the linecard....

WLC drop TCP ack from wireless client to wired

Symptom: Wireless client has problems loading certain web pages. Conditions: Client connected to a wireless controller has problems loading web pages from certain web sites, specifically problems loading pictures. A wired packet capture shows the ACKs coming from the wireless client being dropped on the controller. Workaround: None

Since there was no workaround the only option was to shift the ASR/Internet link from the X6148 to a X6724. Fixed!

I plan to remove the X6148V-GE-TX from the chassis anyway along with a CSM. These are both ‘classic’ modules that don’t use “fabric switching” (2 x 20Gb dedicated) but instead use an older “bus” method (32Gb shared) thus causing the chassis as a whole to not run as well as it could. However if X61xx modules were all I had then I would be in a pickle.


Wondering why this only affected Windows clients? So am I.

ACKs aren't all the same 'size', going by comparisons between pcaps I've grabbed from public repos. However, ACK frames during an HTTP transfer all seem to be 60 bytes long no matter the OS - which makes sense, as a bare ACK is 54 bytes (14 Ethernet + 20 IP + 20 TCP) and gets padded to the 60-byte minimum seen in captures (64 bytes on the wire once the FCS is included).

I think it could be related to differences between the Slow Start/Congestion Avoidance algorithms. The ACKs are probably being dropped no matter which OS is sending them; some OSs might just be better at recovering. Something to test. Although this problem shows indiscriminate dropping of 60-byte frames, so how can they recover??

I haven't been able to find a decent comparison between *nix/BSD/MacOS and Win* TCP stacks. It would be an interesting test to get a Linux box running the same algorithms as a Windows box. When I pull the X6148 out I'll toss it into the test 6509 and hang a test webserver off of it.