Thursday, December 21, 2023

Migrating to VMware NSX Advanced Load Balancer (Avi)

Introduction

Over the past couple of months we have been working with VMware to migrate from a pair of Citrix NetScaler Application Delivery Controller (ADC) appliances to a VMware NSX Advanced Load Balancer (NSX ALB) solution. It has been a smooth transition, with the product delivering what it promised on the packaging and the professional services team providing great service and technical insight during both the proof of capability and the migration.


Planning and Procurement

Last year our planning identified that the Netscalers were due for End of Life in January 2024. For requirements, outside of the typical ADC features and capabilities, I wanted a cloud-ready solution, WAF capabilities, and certificate automation. Given this, we kicked off a market discovery effort to see what was available. We reached out to F5, Citrix, Fortinet, and VMware and arrived at the following conclusions:

  • I have used F5 BIG-IP LTMs in the distant past and found them to be solid, and that still appears to be the case - however, they are a substantial investment.
  • Citrix, being the incumbent, offered familiarity and a simpler migration. However, I found their support challenging, the substantial price increases a concern, and their management and analytics platform wasn’t performing for us.
  • We're very happy with Fortinet for our network security services, and their strength lies in firewalls, SD-WAN, and many other areas; however, FortiADC was a little behind its competitors in the functionality I was looking for. Fortinet was very helpful in providing a VM licence for me to try it out and in answering any questions I had.
  • VMware was considered because we, with RiOT Solutions, had deployed an NSX-T environment as part of a data centre refresh project, and at that time VMware had just acquired Avi Networks and was rolling the Avi load balancer into the NSX portfolio. The pending Broadcom merger was a worry, but given that we’re heavily invested in VMware, any transition away would be slow and beyond the lifespan of this deployment.

Given that we had yet to establish a cloud strategy, I figured that a solution which can accommodate any combination of on-premise, co-location, private, and public clouds would save effort and position us well for whatever hosting solution we settled upon. F5 and Citrix have capabilities that accommodate public clouds, but NSX ALB could be considered a cloud-native solution that excels in these environments while providing excellent support for traditional on-premise infrastructure.

We produced a position paper with the above options, including costs, with the recommendation to go with NSX ALB. It was an easy recommendation to make as it was substantially cheaper than F5 and Citrix for the same, if not better, capabilities and support.

Prior to purchasing NSX ALB we engaged VMware to run a facilitated proof of capability (PoC) within our environment. At the time of writing this is a free service that provides a number of professional services hours and various documents outlining what will be tested and the outcomes of those tests. I found the PoC valuable in that it allowed me to become intimately familiar with the solution and how it applies to our needs. The PoC also allowed me to demonstrate the proposed solution to the team and other IT stakeholders, which carries more weight when it is operating within our environment with our test applications. The information obtained from the PoC helped with the high-level and detailed design stages of the actual implementation.

If you do not wish to run a formal PoC but would like to spin up a test environment, the controller image comes with a 30-day evaluation licence and a trial licence that, in my case, runs until January 1st 2035. The difference is the number of vCPUs available for the Service Engines (SEs): 20 for the evaluation and 2 for the trial. I’ll explain licences later.

Note: I also looked at CloudFlare as we use it for public DNS. I have utilised its more advanced features for other organisations and it can provide similar capabilities to the above products - however, adoption would have entailed quite a mind shift in this case. I will certainly look at CloudFlare again in future.

Design

The Netscalers consist of a pair of physical appliances, MPX 8015s, in an active/standby HA configuration. An MPX 8015 is capable of up to 6Gbit/s of TLS/SSL throughput. We use ‘AppExpert’, which is basically HTTP responder/rewrite rules, and ‘Traffic Management’, which makes up the virtual services and pools. There were no security, automation, or scalability considerations on the Netscalers.


Citrix Netscaler implementation



System requirements for the NSX ALB controllers vary depending on where you wish to deploy them, the number of SEs, the number of virtual services, and the desired amount of logging and analytics. In this case, deploying to vSphere, I opted for 16 vCPU, 32GB RAM, and 256GB disk per controller node. For the SEs we went with 2 vCPU, 4GB RAM, and 25GB disk each, in 2 active/active pairs (4 SEs total). This provides us with 16Gbit/s of TLS/SSL throughput overall, or 8Gbit/s per SE pair.

Active/active was chosen to best meet our application availability and performance needs. Active/active provides the least outage time in the event of an SE failure and the highest performance, as both SEs are serving traffic. The other options are active/standby, which provides the fastest recovery time but the least performance, and N+M, where N is the minimum number of SEs and M is the number of ‘buffer’ SEs available to take the load should something happen to the N SEs.




5 virtual services placed on an active/active SE group consisting of 6 SEs. During a fault, all virtual services continue to operate, although some experience degraded performance.




Elastic HA N+M group with 20 virtual services, before and after a failure. Virtual Services per Service Engine = 8, N = 3, M = 1, compact placement = ON.

VMware recommended that a pair of SEs be used for each of our DMZ networks. This works out well because our services are evenly distributed across both DMZ networks and therefore the load is evenly shared. Additionally there is less ‘shared fate’ between the two DMZ networks, making them a little more distinct from each other.


 

NSX ALB implementation

 

Setup

Deploying NSX ALB into an on-premise vSphere environment is straightforward. Deploy three controller OVAs, set up the first one, and then bring in the other two to automatically establish a cluster. Then go through and configure all the usual items such as DNS, NTP, authentication, notifications, logging and so on.

Next is licensing and connecting the controller to the Avi Pulse portal, if you are after the advanced licence features such as proactive support, threat feeds, and central management of controllers (useful if, say, you have an on-premise controller and a cloud-based or SaaS controller and want a single management interface). Licensing is on a per-vCPU basis: in our case we deployed 4 SEs with 2 vCPUs each, meaning we needed 8 licences. There are four tiers of licences - essential (for VMware Tanzu), basic, enterprise, and enterprise with cloud services. For most it will be a decision between enterprise and enterprise with cloud services, the latter requiring a connection to Avi Pulse. Licences are ‘checked out’ from the portal by the controller and then assigned to the SEs. This happens on a regular basis and provides a reasonable grace period should the controller be unable to reach the portal. If an SE becomes ‘unlicensed’ it will keep working as before, but it will not come back should it be restarted or deleted.

After the controller cluster is established you then connect it to vSphere via its API; this is shown as a ‘cloud’, and the controllers can manage many environments, each shown as a separate cloud. Using this connection, the controllers will automatically deploy and provision the Service Engines (SEs) as defined by the Service Engine group configuration. The SEs are the workers - they handle the actual application delivery and load balancing.

You may download a web server image, perf-server-client.ova, to test NSX ALB. You can find it in the same location as the controller images (https://portal.avipulse.vmware.com/software/additional-tools). This image comes with various utilities for testing a load balancer, such as iperf and ApacheBench, plus files of various sizes to download. I used a few of these to test the controller and SEs prior to the migration. Note that Service Engines won’t be deployed until a virtual service needs them, so it’s a good idea to use a test virtual service to kick off the deployment.
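
For a quick smoke test, something like the following can be run against a test virtual service fronting the perf server - the VIP addresses and file name below are placeholders, not values from our environment:

# Quick load test against a test virtual service fronting the perf server.
# VIP address and file name are placeholders - the image serves several
# differently sized files for exactly this purpose.
ab -n 1000 -c 50 http://10.0.0.50/100KB.bin

# iperf is also included for raw throughput testing between two of the test VMs
# (assumes an iperf server is listening on the second VM).
iperf -c 10.0.0.51 -t 30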


 

perf-server-client.ova default web page

 

Migration

Once the controllers and SEs are set up and tested, the migration from the Netscalers can begin. VMware provides a Docker image with the necessary scripts/tools to handle a migration from other load balancers to NSX ALB. You can find it here: https://github.com/vmware/nsx-advanced-load-balancer-tools . In our case a live migration was recommended (connecting directly to the Netscaler API) as this also migrates the certificates; otherwise you can download the Netscaler configuration file and use that as the source.

The migration was a three-step process - connect to and download the configuration from the Netscalers, translate the configuration to NSX ALB, then upload the configuration to the NSX ALB controllers. It was relatively easy but it does require attention and a systematic approach - best to track each virtual service/application in a spreadsheet.

Run up the Avitools docker image with an interactive shell:

cd ~
mkdir migrationtool
docker pull avinetworks/avitools:latest
docker run -td --hostname avitools-22.1.4 --name avitools -w /opt/avi -v ~/migrationtool:/opt/avi --net=host avinetworks/avitools:latest bash
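
To get the interactive shell used in the next step, attach to the running container:

docker exec -it avitools bash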


Netscaler configuration conversion command which downloads and converts the configuration (ensure the Netscaler user account has appropriate permissions):

user@avitools-22:/opt/avi# netscaler_converter.py --ns_host_ip 172.10.10.22 --ns_ssh_user <username> --ns_ssh_password <password> --not_in_use --tenant admin --controller_version 22.1.5 --cloud_name vcenter01 --segroup sec-aa-dmz1 --ansible -o config_output --vs_level_status --vs_filter FirstVSName_lb,SecondVSName_lb,ThirdVSName_lb



The output will be something like:


133537.211: Log File Location: config_output_20231108
133537.221: Copying Files from Host...
133546.399: Parsing Input Configuration...
133609.875: Progress |##################################################| 100.0% \
133610.139: Converting Monitors...
133610.259: Progress |##################################################| 100.0%
133610.259: Converting Profiles..
134952.117: Progress |##################################################| 100.0%
134952.178: Converting Pools...
134953.193: Progress |##################################################| 100.0%
134953.193: Converting VirtualServices...
134953.335: Progress |#######-------------------------------------------| 15.3%
134953.335: /usr/local/lib/python3.8/dist-packages/avi/migrationtools/netscaler_converter/policy_converter.py:574: FutureWarning: Possible nested set at position 8
134953.335: matches = re.findall('[0-9]+.[[0-9]+.[0-9]+.[0-9]+', query)
135011.515: Progress |#################################################-| 99.8% \Generating Report For Converted Configuration...
135025.660: Progress |##################################################| 100.0%
135032.026: SKIPPED: 435
135032.026: SUCCESSFUL: 2761
135032.027: INDIRECT: 1355
135032.028: NOT APPLICABLE: 161
135032.029: PARTIAL: 214
135032.030: DATASCRIPT: 45
135032.030: EXTERNAL MONITOR: 109
135032.031: NOT SUPPORTED: 54
135032.032: INCOMPLETE CONFIGURATION: 588
135032.033: MISSING FILE: 0
135032.033: Writing Excel Sheet For Converted Configuration...
135618.648: Progress |##################################################| 100.0% \
135634.108: Total Objects of ApplicationProfile : 4 (5/9 profile merged)
135634.108: Total Objects of NetworkProfile : 6 (2/8 profile merged)
135634.110: Total Objects of SSLProfile : 10 (186/196 profile merged)
135634.110: Total Objects of PKIProfile : 0
135634.110: Total Objects of ApplicationPersistenceProfile : 8 (90/98 profile merged)
135634.110: Total Objects of HealthMonitor : 56 (36/92 monitor merged)
135634.110: Total Objects of SSLKeyAndCertificate : 176
135634.110: Total Objects of PoolGroup : 446
135634.110: Total Objects of Pool : 575
135634.110: Total Objects of VirtualService : 421 (369 full conversions)
135634.110: Total Objects of HTTPPolicySet : 198
135634.110: Total Objects of StringGroup : 0
135634.110: Total Objects of VsVip : 231
135634.110: VServiceName-SSL_lb(VirtualService)
135634.110: |- 192.80.10.23-vsvip(VsVip)
135634.110: |- enforce_STS_polXForwardFor_Add_pol-VServiceName-SSL_lb-clone(HTTPPolicySet)
135634.110: |- VServiceName-SSL_lb-poolgroup(PoolGroup)
135634.111: | |- VServiceName-SSL_lb(Pool)
135634.111: | | |- ping_mon(HealthMonitor)
135634.111: |- ns-migrate-http(ApplicationProfile)
135634.111: |- testcertificate(SSLKeyAndCertificate)
135634.111: |- Merged-ssl_profile-KOc-3(SSLProfile)



The output will list all of the virtual servers converted, as specified by the --vs_filter parameter.

This creates YAML files in /opt/avi within the container (~/migrationtool on the host); ‘avi_config_create_object.yml’ is the conversion output, ready to be applied to NSX ALB.

Ansible playbook to apply configuration to NSX ALB (ensure the NSX ALB user account has appropriate permissions):
 

ansible-playbook avi_config_create_object.yml -e "controller=172.10.10.61 username=<username> password=<password>" --skip-tags SomeUnwanted-VS_lb

Once all the configuration was migrated onto the NSX ALB controllers it was necessary to go through and clean it up - removing redundant items (such as HTTP-to-HTTPS redirect rules that are handled by the application profile), renaming items to suit our conventions, and so forth. Then it was a case of disabling the virtual IP (VIP) on the Netscaler and enabling the virtual service (VS) on the NSX ALB. This was done in batches, starting with the development/test environments and then production; each batch was spread across a number of maintenance windows.

At the end there were only a handful of items that needed revisiting. These were primarily related to moving to an active/active Service Engine configuration, which meant we could no longer rely on a single Source NAT address when talking to the backend hosts (there is at least one per SE). I also took the opportunity to tighten the TLS/SSL profile to only allow TLS 1.2/1.3 and to enable various cross-site scripting and cookie protections - some applications didn’t take too well to some of these features, so I fixed those on a case-by-case basis. Also keep an eye on any HTTP request/response policies and make sure they’re migrated correctly.
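
After tightening the profile, a quick way to confirm the old protocol versions are actually refused is something along these lines (the hostname is a placeholder, and the exact error text varies with the OpenSSL version on the client):

# TLS 1.1 should now be rejected by the virtual service...
openssl s_client -connect app.example.edu:443 -tls1_1 < /dev/null
# ...while TLS 1.2 and 1.3 should complete the handshake.
openssl s_client -connect app.example.edu:443 -tls1_2 < /dev/null
openssl s_client -connect app.example.edu:443 -tls1_3 < /dev/null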


 

Qualys SSL Labs Report with the new TLS/SSL and Application Profile changes

 

Certificate Automation

With the migration completed and everything testing okay, I turned my focus to the TLS/SSL certificate automation capabilities of NSX ALB. Out of the box it provides a Let’s Encrypt automation that works as is. However, we use a different CA that, while providing an ACME-compatible API, requires External Account Binding (EAB).

I adapted the existing Let’s Encrypt automation script to support EAB and have been testing it successfully. Many CAs require EAB when using ACME, so this should prove a useful automation for others - for example, I have tested it with ZeroSSL without issue. Certificate automation is going to save us approximately 600 hours a year and reduce the potential downtime and reputational damage caused by expired or incorrect certificates.
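
For a sense of what EAB adds to an ACME account registration, here is the equivalent using certbot rather than the NSX ALB control script - the directory URL, key ID and HMAC key are placeholders issued by the CA:

# Register an ACME account using External Account Binding (EAB).
# The directory URL, EAB key ID and HMAC key are placeholders from your CA.
certbot register \
  --server https://acme.example-ca.com/directory \
  --eab-kid "example-key-id" \
  --eab-hmac-key "example-hmac-key" \
  --email hostmaster@example.edu --agree-tos --no-eff-email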


 

ACME Certificate Request Workflow



It’s possible to automate certificates using tools such as Ansible; however, having this done ‘on box’ means less effort and fewer moving parts. For example, automatic certificate renewal is triggered by a local system event - there is no need to poll, use external events, or monitor certificate validity periods.

Security Capabilities


ADCs are well placed to apply security functionality in that they can see the unencrypted data between clients and servers without resorting to the ‘man-in-the-middle’ techniques a firewall must use to snoop the traffic. Key security features offered by NSX ALB with the top-tier licence are IP reputation and geolocation databases sourced from Webroot, a Web Application Firewall (WAF) application rules service and signature lists sourced from Trustwave, and a WAF auto-learn capability.

Additionally, NSX ALB enables Denial of Service (DoS) protection by default at various layers of the network stack.

My view on any kind of WAF functionality is that it must be simple to manage and update - not manually picking through lists of signatures and having to have a deep understanding of the application. Therefore the application rules and auto-learn capabilities are what I will be looking to implement shortly.


Conclusion

When we migrate to the cloud we will be able to shift applications over easily, as NSX ALB can integrate with our DNS and IPAM services to automate DNS records and IP re-addressing. The ability to scale out and in automatically will offer us cost savings in public clouds too. It is likely we will then utilise the SaaS controller and leverage the automation capabilities to ensure applications can be deployed in a seamless and timely manner. After that I will likely compare CloudFlare’s and NSX ALB’s Global Server Load Balancing capabilities, with the aim of improving services for our international students.

To summarise, we now have a modern, cloud-ready, scalable application delivery platform that has done away with physical appliances and uplifted our automation and security capabilities. I can recommend VMware’s NSX Advanced Load Balancer and their professional services team.

Special thanks to the project team that made all this possible. 


Feel free to reach out if you have any questions about NSX ALB.






Saturday, September 03, 2011

F5 BIGIP LTM Reboot Script

In an effort to ensure the best performance and stability of our two BIGIP LTM 6400 Load Balancers I have created a script to synchronise and reboot the units regularly.

This script runs a series of checks before rebooting the unit.
  1. Check Active/Standby state based upon the output of bigpipe failover show 
  2. Check Peer status (up/down) - based upon the result of ping -c 1 -w 5 peer ('peer' is the hostname of the peer BIGIP) 
  3. Check the uptime to see when the last time the unit was started, if under a given period then don't reboot 
  4. Check configuration synchronisation status based upon the output of bigpipe config sync show 
If the configuration is not in sync then it will attempt to synchronise the configuration using bigpipe config sync all and check the status of the synchronisation again. If the configuration is still not in sync it will exit and not reboot the unit.

Each check/task outputs to STDOUT and syslog (facility: local0.notice, tag: BIGIP-ADMIN-SCRIPT). A result file (/tmp/reboot-cron-job-result) is also written; it is left in place until the next run and is e-mailed to 'user@domain.tld' (change this to suit your environment).
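
To give a sense of the structure, here is a minimal sketch of the check logic described above - it is not the actual reboot.sh, and the strings it greps out of the bigpipe output are assumptions that need adjusting to match your units:

#!/bin/bash
# Minimal sketch of the pre-reboot checks - not the production script.
# The patterns matched against the bigpipe output are assumptions; verify
# them against what your BIGIP version actually prints.

PEER="peer"                            # hostname of the peer BIGIP
RESULT=/tmp/reboot-cron-job-result
MIN_UPTIME_SECS=86400                  # skip the reboot if we restarted within the last day

log() {
    echo "$1" | tee -a "$RESULT"
    logger -p local0.notice -t BIGIP-ADMIN-SCRIPT "$1"
}

config_in_sync() {
    # Placeholder pattern matching - confirm against 'bigpipe config sync show' output.
    local status
    status=$(bigpipe config sync show)
    echo "$status" | grep -qi "in sync" && ! echo "$status" | grep -qi "not in sync"
}

: > "$RESULT"

# 1. Record the active/standby state; act on it according to your own policy
#    (e.g. only reboot the standby, or reboot the active to force a failover).
log "Failover state: $(bigpipe failover show)"

# 2. Only proceed if the peer unit is reachable.
ping -c 1 -w 5 "$PEER" > /dev/null || { log "Peer $PEER unreachable - not rebooting"; exit 1; }

# 3. Skip the reboot if this unit was restarted recently.
UPTIME_SECS=$(awk '{print int($1)}' /proc/uptime)
[ "$UPTIME_SECS" -lt "$MIN_UPTIME_SECS" ] && { log "Rebooted recently - skipping"; exit 0; }

# 4. Make sure the configuration is in sync, attempting one sync if it is not.
if ! config_in_sync; then
    log "Configuration not in sync - running 'bigpipe config sync all'"
    bigpipe config sync all
    config_in_sync || { log "Configuration still not in sync - not rebooting"; exit 1; }
fi

log "All checks passed - rebooting"
reboot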

The same reboot.sh script is used on each unit.

Still some tidying up to do - such as using a lockfile, and better error handling with 'set -e', 'set -u' and traps.

/home/admin/reboot.sh


I run this script using a cron job that occurs at the start of our weekly maintenance window. You can also use this script as a safe way to force a failover and reboot.
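
For reference, the cron entry is along these lines (the schedule shown is just an example, not our actual window):

# admin crontab entry - run the checks/reboot at the start of the weekly
# maintenance window (example schedule: 02:00 every Sunday)
0 2 * * 0 /home/admin/reboot.sh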


Monday, July 04, 2011

F5 BIGIP LTM Maintenance Page Update for v10

The folks at F5 devcentral have kindly provided a number of 'Maintenance Page' examples that allow you to host a page directly from the BIGIP LTM and display it automatically when all pool members go off-line. The example I used is http://devcentral.f5.com/wiki/default.aspx/iRules/LTMMaintenancePage.html (login required, registration is free).

However there are a few changes required to get it working with the latest version of TMOS (v10).

Follow the instructions provided in the aforementioned link and change them as follows:

Create iRule Data Groups with the following information:

maint_index_html_class

General Properties
Name: maint_index_html_class
Partition: Common
Type: (External File)

Records
Path/Filename: /var/class/maint.index.html.class
File Contents: String
Key/Value Pair Selector: :=
Access Mode: Read/Write

The file will need to look like the following (add "index.html" := to the beginning of the existing example):



maint_index_logo_class

General Properties
Name: maint_index_logo_class
Partition: Common
Type: (External File)

Records
Path/Filename: /var/class/maint.logo.png.class
File Contents: String
Key/Value Pair Selector: :=
Access Mode: Read/Write

The file will need to look like the following (add "logo.png" := to the beginning of the existing example):



generic_irule_maintenance_page
  • Replace [lindex $::maint_index_html_class 0] with [class element -value 0 maint_index_html_class] 
  • Replace [b64decode [lindex $::maint_logo_png_class 0]] with [b64decode [class element -value 0 maint_index_logo_png_class]] 

Tuesday, May 31, 2011

F5 BIGIP and Blackboard Collaboration Server

Blackboard Collaboration Server is a separate, optional, web server that provides virtual classroom and chat tools. As part of the university’s Blackboard application upgrade I have been asked to develop a way to add resilience to the collaboration server side of the application where possible.

The brief is to provide failover only. The reason for this is that the collaboration server is not “load balancing aware”, in that it assumes it will be hosted on a single host. To provide rudimentary failover capability I have set up a method that will switch all sessions to another host should the active host fail. However, clients will stay on the new host until it fails, and only then will all sessions switch to the other. The key word here is ‘all’, because it’s important to keep all sessions on the same host.

From a user’s perspective, in the event of an active host outage they will lose connectivity but will be able to log back in straight away and continue until such time as the alternative host fails. This prevents them from being switched over only to be kicked again when the prior host is restored, and it also ensures that ALL sessions are sent to a single host and not spread across multiple hosts, so everyone is in the same chatrooms.

My first idea was to adapt BIGIP’s Priority Group capability; however, this presented the same problem in that I could not ‘stick’ the clients to a server. As soon as a server of the same or higher priority was restored, sessions would be sent to the newly restored host, effectively splitting the chat rooms. Load balancing also takes place across member servers of the same priority.

So I did a bit of digging around and discovered a method of using an iRule to ‘stick’ sessions based upon an arbitrary value - in this case I used the TCP port number.

The iRule is as follows:
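A minimal version of it, pieced together from the description below, would be:

when CLIENT_ACCEPTED {
    persist uie [TCP::local_port]
}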



CLIENT_ACCEPTED is an event that is triggered when a connection has been established between a client device and the BIGIP.

‘persist uie’ is where I am manipulating the connection persistence, in this case via the Universal Inspection Engine. Here I am simply setting an integer - it can be any number, but I have chosen to use the connecting TCP port number ([TCP::local_port]). This fixes session persistence to a single host, preventing load balancing.

The following BIGIP configuration has been tested as working by a business systems analyst using a combination of application logs, BIGIP statistics and packet captures. He confirmed which traffic was being sent on which ports - port 8010 is used for the majority of user-generated traffic, which must be kept on a single host, while port 8443 transports application-specific information, carries nothing user-generated, and therefore does not require persistence.

The aforementioned iRule is referenced by a ‘Universal Persistence’ profile as follows:



Then reference that Universal Persistence profile from a Performance (Layer 4) type Virtual Server like so:



Another Virtual Server is required for HTTPS traffic; however, this does not need any special configuration and is set up as a typical HTTP type Virtual Server, e.g.:


The above configuration refers to a six member/node pool. Each member runs both the general Blackboard application and the Collaboration Service. We have yet to load test the combination of the application and collaboration services and see how they influence the way the BIGIP balances load across the members - I am considering using ‘Observed (node)’ as opposed to the current ‘Observed (member)’ method, since the same nodes are used in multiple pools. At some stage I would also like to look at using Dynamic Ratio, if it can play nicely with persistent connections.

References:
http://devcentral.f5.com/wiki/default.aspx/iRules/CLIENT_ACCEPTED.html
http://devcentral.f5.com/wiki/default.aspx/iRules/persist.html
http://support.f5.com/kb/en-us/archived_products/big-ip/manuals/product/bigip4_5admin/BIGip_uie.html

Also take note of:
http://support.f5.com/kb/en-us/solutions/public/4000/100/sol4166.html

Saturday, May 28, 2011

Windows Wireless Clients and the X6148V-GE-TX Ethernet Switching Module

Burnt hard by a bug that exists in a place that makes plenty of sense when you find it but not so much when you’re looking at the symptoms.

I was tasked with establishing an EduRoam presence at a University. Since there was already a suitable wireless infrastructure in place all I needed to do was build a FreeRADIUS server, hook it into the EduRoam federated RADIUS and point the two Cisco 4404 controllers dressed as a WiSM (Wireless Services Module) at it so they authenticate EduRoam clients. Easy!

Getting FreeRADIUS communicating nicely with EduRoam was made more difficult than it needed to be. The configuration information provided by EduRoam was sketchy and inaccurate. It wasn’t until I decided to chuck it out and build the FreeRADIUS configuration from scratch that it worked. EduRoam have some strange ideas about what should be sent in the outer TLS tunnel... it’s the inner tunnel that’s important; the outer just establishes an anonymous TLS connection to the local RADIUS server, which then passes the inner tunnel on to the user’s home campus RADIUS.

Okay, that was a bit tedious, but that should have been the hard part over with. Authentication was working nicely with the local LDAP directory (Novell eDirectory) and with other federated entities, tested with accounts from James Cook University, AARNET and the Australian Catholic University. All that remained was the simple task of setting up a WLAN on the WiSM and confirming that it worked with EduRoam, as I had been using my trusty Mikrotik RouterBoard RB433 for testing up to that point. Associate a laptop with the new WLAN, open Google - and be presented with a rather slow web experience that would basically stall on the first image that tried to load. However, pings were fine, so end-to-end connectivity was all there.

Odd. Maybe I left something out/in, or perhaps the RADIUS was setting some kind of QoS value on the controllers that I wasn’t aware of. Checked all that out - nope, all good. Maybe it’s the laptop? Try a little netbook running Jolicloud - works fine. Okay, let’s check with another laptop - Win7 - fail! MacBook - works! A Windows wireless client + WiSM + EduRoam problem?? Hang on, let’s try the Intranet - works! Let’s try a proxy server - works! This is getting annoying, so it’s a Windows wireless client + WiSM + EduRoam + FWSM/NAT + Internet problem??

The next 8 months consisted of running every conceivable check on the data path between a Windows wireless client and the Internet. The Cisco TAC had crawled over the WiSM - all good, the FWSM, hmm old untrusted software, install another one! test again - all good, even the ASR - nope, all good.

So I figured that it must be something I’m just not doing right. I blew away my test environment which consisted of a C4402 wifi controller and C1131AG/C1142N LWAPs, and the second FWSM running the latest software and rebuilt it. However when I did this I had physically relocated all the kit (except FWSM of course) from the data centre to the foyer just outside. In doing this I had disconnected the C4402 from the C6513 and plugged it into a C3750 I had set up for the link between the APs and controller and the trunk back into the general network. This configuration worked!

The test environment at this stage 
So what did introducing a C3750 or simply moving it elsewhere on the network do to fix the issue? This made me think there was something suss going on with the chassis and/or connecting switching modules.

By now the TAC had grown tired of my pokes and prods so I gave our Cisco account manager a nudge and the SR was escalated and an e-mail that was CC’d to ‘Cisco Australia’ popped into my inbox from the Cisco Switching team asking for a webex session so they could waterboard the 6513 chassis that housed the WiSM and FWSM.

The phone call started at 10am Monday morning and didn’t end until 3pm.

We worked through each stage of the data path again. Luckily they had the history of all the other tests I had done, so I didn’t have to repeat many of the captures. We narrowed it down to the X6148V-GE-TX switching module. This was the one element common to all the different combinations I had tried: the C4402 test controller was connected to it, along with the link to the ASR/Internet. So I connected the C4402 to a port on the module (issue present, not working) and ran a capture, then moved the C4402 to an X6724-SFP module (no issue present, working) and ran another capture. The TAC guys then ran a comparison between the two captures. It turned out the X6148 was silently dropping small packets, particularly ACKs from the client, on egress towards the ASR/Internet.

Seems we had hit Cisco bug CSCeb67650:

WS-X6548-GE-TX & WS-X6148-GE-TX may drop frames on egress 
Packets destined out the WS-X6548-GE-TX or the WS-X6148-GE-TX that are less than 64 bytes will be dropped. This can occur when a device forwards a packet that is 60 bytes and the 4 byte dot1q tag is added to create a valid 64 byte packet. When the tag is removed the packet is 60 bytes. If the destination is out a port on the WS-X6548-GE-TX or the WS-X6148-GE-TX it will be dropped by the linecard....

WLC drop TCP ack from wireless client to wired

Symptom: Wireless client has problem loading certain web pages. Conditions: client connected to wireless controller, and has problem loading web pages from certain web sites. Specifically has problem loading pictures. A wired packet capture shows the ack coming from the wireless client are been drop on the controller. Workaround: None

Since there was no workaround the only option was to shift the ASR/Internet link from the X6148 to a X6724. Fixed!

I plan to remove the X6148V-GE-TX from the chassis anyway along with a CSM. These are both ‘classic’ modules that don’t use “fabric switching” (2 x 20Gb dedicated) but instead use an older “bus” method (32Gb shared) thus causing the chassis as a whole to not run as well as it could. However if X61xx modules were all I had then I would be in a pickle.


Note:

Wondering why this only affected Windows clients? So am I.

ACKs aren't all the same 'size', based on comparisons between pcaps I've grabbed from public repos. However, ACK frames during an HTTP transfer all seem to be 60 bytes long no matter the OS.

I think it could be related to differences between the Slow Start/Congestion Avoidance algorithms. The ACKs are probably being dropped no matter which OS is sending them; however, some OSs might be better at recovering. Something to test. Although this problem shows indiscriminate dropping of 60-byte frames, so how can they recover??

I haven't been able to find a decent comparison between *nix/BSD/MacOS and Win* TCP stacks. It would be an interesting test to get a Linux box running the same algorithms as a Windows box. When I pull the X6148 out I'll toss it into the test 6509 and hang a test webserver off of it.

Wednesday, August 13, 2008

Weather Station

Installed a Fine Offset Electronics WH1081 Weather Station on my roof the other day. Purchased the device from eBay for $85 plus $25 delivery.

The station consists of the following sensors:
  • Thermo-hydro transmitter
  • Wind speed
  • Wind direction
  • Rain gauge

The console is a touch screen LCD panel. Apart from the LCD being difficult to read due to being too light (needs a contrast setting) it works reasonably well. The best part about it is that it has a USB connection for plugging into a PC.

To go with the USB connection the station comes with a software package called 'EasyWeather', which is functional and maintains a log with various graphs showing historical data. I wasn't too fussed on it though, since it gave false readings and has really bad memory leaks.

I tried Cumulus, which is another weather station application that can upload results to an FTP site for Internet access. However, it doesn't recognise the WH1081 natively and instead relies upon EasyWeather to gather statistics - not good, for the above reasons of instability and memory leaks.

Currently I am using the Linux console version of Weather Display. I like it because it's a no-fuss application with no frills and it supports uploading results to Weather Underground natively. However, it doesn't support 64-bit at all, even when using IA32 libraries. This made me install 32-bit Ubuntu (CLI only) within VirtualBox - I had to use the closed source version for the USB support. This is on my Mythbuntu 8.04 HTPC too, so the weather console sits on top of the TV in the lounge room, which worked out well.

You can see my weather data on Weather Underground here.

Saturday, April 05, 2008

Network upgrade

And the upgrades continue.

The network here is expanding something chronic, so I needed something that could push the VLANs harder. It basically involved replacing the local Mikrotik/WRAP1-1 router and Asus GigaX2024 L2 switch with a single Asus GigaX3112 L3 switch. It certainly tidied up the rack by removing two switches and a stack of patch leads.

Now I have two Asus GigaX2024 L2 switches to stick in Admissions and the Crocosium. This will give me a gigabit VLAN trunk to those locations and allow me to create some more subnets to take some of this needless traffic off the main networks. I'm trying to have an L2 managed switch on the end of every fibre link to get some flexibility into the network and get things into this decade...

I'm not all that impressed with the Asus network kit so far. It's okay for the price but it's buggy as hell and the 3112 tends to crash due to kernel panics and reboot due to buffer overflows or memory errors. I'm hoping future firmware updates will come and fix things. Not that I had much more luck with Netgear and Linksys stuff. There's a reason why Cisco can charge so much.

The campus wireless network is almost completed. Awaiting a cable run from the Machinery shed to the Conference center to install the AP on the roof there. Also need to install the 12th AP at the Tiger Temple to complete the 'ring of coverage' - full wireless coverage of the safari shuttle track and nearby walkways and buildings. Will put off any more expansion until the Hotel is built.