Bridging networks on a VM

So, you’ve got your shiny new Mac and you’re in that ‘in-between’ time where you’re running a VM to support all of your Windows needs. You decide that your VM needs to be connected to the same Layer 3 network as your physical box, so you change your VM network settings from ‘NAT’ to ‘Bridged’. This seemingly simple configuration change has some pretty significant ramifications in the Cisco wireless world, however, and you may be shocked to find that when you take your beloved Mac back to work, your VM stops getting an IP address! As it turns out, there is a feature enabled by default on a Cisco lightweight wireless infrastructure that the documentation spells out thusly:

In the controller software Release 5.2 or later releases, the controller enforces strict IP address-to-MAC address binding in client packets. The controller checks the IP address and MAC address in a packet, compares them to the addresses that are registered with the controller, and forwards the packet only if they both match. 

Since your Mac(intosh) uses a single adapter (your WLAN adapter) for the connection to the network, the controller only sees a single MAC address. This means that it will only let a single IP address talk on the network since it’s expecting a 1:1 mapping of MAC address to IP address. The quickest way around this is the following global command on your WLC:

config network ip-mac-binding disable

This removes the 1:1 mapping expectation. Don’t forget to save your config, and you should be good to go with IP addresses issued via DHCP to both your real machine and the Virtual Machines living behind the bridged VM network!
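
If you want to verify the change (or check the current state before you make it), here’s a minimal sketch of the whole sequence on the WLC CLI – on the AireOS builds I’ve worked with, the binding check shows up in the network summary output:

show network summary
config network ip-mac-binding disable
save config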

It should also be noted that many ‘security appliances’ serving as your DHCP server will refuse to issue multiple IP addresses to a single MAC address, effectively recreating identical symptoms (a VM that gets no IP address). As far as I know, there is no workaround aside from not using a security appliance for your DHCP server. This is believed to afflict both Palo Alto and ASA firewalls, and it likely impacts anything else sold under the guise of a ‘security appliance’. Your best bet is to put DHCP services on a real server (Windows DHCP or Linux ISC-DHCPD) or try running it in IOS on your next-hop Catalyst switch. You *do* have a next-hop Catalyst switch, right? 🙂
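
For reference, here’s a minimal sketch of what a DHCP scope looks like in IOS on a Catalyst switch – the pool name and addressing below are made up for illustration, so substitute your own:

ip dhcp excluded-address 10.10.10.1 10.10.10.10
ip dhcp pool VM-BRIDGE-POOL
 network 10.10.10.0 255.255.255.0
 default-router 10.10.10.1
 dns-server 10.10.10.5
 lease 0 8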


The Unstoppable MetaGeek – now with CleanAir!

Rarely does an organization come around that expresses its agility and prowess with as much regularity as MetaGeek. The most recent example is their ability to use Chanalyzer Pro (their premium Spectrum Analyzer software) to talk to the Cognio chipset in a Cisco CleanAir Access Point. PC-based Spectrum Analyzers have had a sordid history, to say the least. Way back when, Cognio made what you would call ‘the best of the best’ PC-based Spectrum Analyzer. It took the place of many of the bulkier, more expensive Spectrum Analyzers and proved to the world that a) it was important to get Layer 1 visibility for enterprise WLANs and b) it could be made affordable for most services-based partners. Everyone OEM’d the Cognio analyzer: AirMagnet, Fluke, and WildPackets. Along came Cisco. They purchased Cognio, killed off all of the OEM agreements, rolled the hardware into their Access Points, and started selling the Cognio product with the Cisco name on it (Cisco Spectrum Expert). Unfortunately, they didn’t do much with the CardBus product and let the non-AP components go stale. The aging interface form factor left quite a few holes in the market, and along came a few people here and there to make it all shake out like this (generally):

  • Cisco Spectrum Expert: Highest resolution, CleanAir AP and CardBus form factor, Cognio-based
  • AirMagnet Spectrum XT: Middle resolution, USB form factor, Bandspeed-based
  • AP-based Spectrum Analyzers: Low resolution, integrated into many APs, Atheros-based
  • MetaGeek Wi-Spy: Low resolution, USB form factor, keyboard-controller based

Ryan and team over at MetaGeek did an excellent job of using very affordable components to give us an alternative to the aging CardBus adapter and the newer, more expensive AirMagnet adapter. The Wi-Spy was an awesome product for the money but never really achieved huge market penetration, due to the fact that the Cognio and Bandspeed products still offered higher resolution. With the Cognio hardware all locked up in the Cisco Access Points, it seemed inevitable that we’d never have a good way to access it. Imagine our surprise when, at this year’s Cisco Live event, MetaGeek was there – showing off their integration between Chanalyzer and the CleanAir Access Points! Ladies and Gentlemen, this is the *exact* same Cognio hardware, high-resolution Spectrum Analyzer goodness that we all know and love from the old days. When I first heard about this, there was much trepidation about MetaGeek perhaps not being able to address the ‘full power’ of the Cognio (ahem, CleanAir) chip in its rawest form, but I’m here to tell you, when compared side by side with a legacy CardBus-based Cognio adapter, the data is identical! The user interface is the updated Chanalyzer interface with all of the modern enhancements they’ve made over the years with the Wi-Spy products, but you’re using the high-fidelity data that Cognio gives us. Here’s how it works:

You can connect to a CleanAir AP that is autonomous or lightweight (registered to a WLC), and it can be either servicing clients or in dedicated ‘SE-Connect’ mode. You get the highest resolution, widest image when it’s in this last mode, so let’s start there. Log into your controller, select your AP from the Wireless tab, and change its mode from ‘local’ to ‘SE-Connect’. Click Apply and let the AP reboot and join back to the WLC.
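
If you prefer the CLI, the same mode change can be made there – a quick sketch (substitute your own AP name, and expect the same reboot warning you get in the GUI):

config ap mode se-connect AP-NAME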

[Screenshot: changing the AP mode to SE-Connect on the WLC]

Once it’s joined back, select the AP again and you’ll find both the IP address of the AP and something called the NSI key:

[Screenshot: AP detail page showing the AP’s IP address and NSI key]
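
If you’d rather pull these values from the CLI, the AP’s general configuration output is the place to look – a quick sketch below; the IP address is definitely in there, and on the releases I’ve used the NSI key shows up as well once the AP is in SE-Connect mode:

show ap config general AP-NAME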

Launch Chanalyzer Pro with CleanAir and go to the File menu. Select the intuitive ‘Connect to a CleanAir AP’ option:

[Screenshot: the ‘Connect to a CleanAir AP’ option in the Chanalyzer Pro File menu]

Once you do that, enter the values from the AP page that you saw previously, including the IP address, the NSI key, and a friendly name for this AP:

[Screenshot: the CleanAir AP connection dialog in Chanalyzer Pro]

Once you’ve done that, mash the Connect button and you’ll start to see the familiar Chanalyzer Pro interface with all of the wonderful resolution we all grew so fond of all those years ago! For reference, I ran Chanalyzer Pro with CleanAir on the same machine at the same time as a Cisco Spectrum Expert instance (using the CardBus adapter). Aside from the waterfall flowing up in the Cisco product, and down in the Chanalyzer product, you’ll see striking similarities in the respective waterfall views:

[Screenshots: the waterfall view in Chanalyzer Pro and in Cisco Spectrum Expert, side by side]

At the same time, you’re getting all of the other awesome details out of the Cognio SAgE, like interferer auto-classification and the Air Quality Index. This proves once again that MetaGeek is the top kid on the block when it comes to innovation and integration – but don’t take my word for it: head on over to MetaGeek, grab yourself a copy, and give it a spin!

Full Disclosure: As a delegate of the Wireless Field Day event, I was given a copy of Chanalyzer Pro with CleanAir to play with, without promise or commitment to write anything – much less something positive. 🙂 MetaGeek is a regular supporter of the Tech Field Day events, generally makes awesome products, and is regularly engaged in social media – you should go follow them at @metageek and catch up on the No Strings Attached Show, where Blake Krone and I also talk with MetaGeek about Chanalyzer with CleanAir!

Aruba wants you to stop buying the AP-134 and AP-135. Offers no alternative.

Every once in a while, I stumble across articles that make no sense, are poorly worded or constructed, or are flat-out wrong. Last week, I ran across one such article that was so far out of left field that I felt compelled to address it directly here in my own words. The article is over on the Aruba Networks official blog site (presuming it’s still up). Take a moment, head on over, and give it a read (article preserved here for posterity). I was so flabbergasted by the article and its combination of FUD and flat-out incorrect information that I used the ‘leave a comment’ link on the bottom. Once I did that, it dawned on me that my comments would likely never get posted – and then I realized that I have my own forum to respond in, so the next portion of this blog post is the comments I left (with a few typographical fixes and edits to make it flow):

Begin reply post

Wow – there is so much FUD in this article, it’s laughable.

Regarding the 1252 comment:

Remember the Cisco 1250 access point? This pre-standard AP offered future-proofing with an upgradable 802.11n radio meeting the ratified standard. It didn’t work out as it was costly and difficult to upgrade, and didn’t meet the promised performance benefits. 

This is flat-out untrue. It ‘didn’t work out’ because it didn’t *need* to work out. The draft 802.11n specification was rolled into the final 802.11n standard. The upgradability was only there to assure users that, in the event the specification was not implementable in the 1252 hardware, they had an option to field-upgrade the units. The performance was on par with other first-generation 802.11n products, and the 1252 was the Wi-Fi Alliance test bed for compatibility – it was basically *the* reference 802.11n platform for a very long time.

Difficult to deploy: The 3600 11ac module must plug into the base of the access point, exactly where the mounting brackets are located. This means users will need to remove a deployed AP from operation. This is not a simple plug-in but more akin to opening your laptop for a RAM upgrade. 

Have you actually *seen* the 802.11ac module or a 3600? There is a piece of tape on the back of the AP and two thumb screws. This is more like replacing the battery in your laptop than opening it up for a RAM upgrade. This upgrade also will not compromise the thermal venting that is required in lesser manufacturers’ Access Points, since the main unit remains sealed.

Lack of promised performance: The IEEE 802.11ac standard promises increased performance over 11n technologies, but the 3600 11ac module’s throughput is dependent on its two-year old processor and RAM, which only scales to 11n rates. This means that although you will be able to connect with newer 11ac clients, there will be questionable increase in performance by doing so. Why spend money for increased performance when you won’t notice it? 

Really? You’ve done performance testing to empirically validate your claims? No? I didn’t think so. Cisco knew well in advance that 802.11ac was coming, and the CPU and memory in the 3600 are significantly greater than in the 3500 – specifically for this reason. Until you can show us numbers to back up your vapor-stats, you have no evidence that the CPU/memory subsystems of the AP will hinder its performance.

Constrained RF: The 3600 11ac module has its own antennas, and since Wi-Fi rates depend a great deal on antenna design, shoe-horning antennas into the small space of the module will yield less than optimal performance to clients. The result will be your 11ac clients will connect to stronger RF signals from 11n radios. 

Have you discussed the RF design characteristics of this module? Do you know how it will integrate with, instead of replace or work against, the (integrated) 802.11n radio? You assume this will be a discrete radio operating independently of the 802.11n radio. Don’t assume – know. Until you can declare the design is somehow faulty and back it up with block diagrams from Cisco on how the module will (or won’t) interoperate with the host AP, you’re basically guessing and spreading FUD.

Inconsistent feature set: The 3600 11ac module will use a new, untried chipset that may be incompatible with existing Cisco WLAN controller code. So if you add the 11ac module, you have the same hardware, but different features. That will lead to a management challenges and increased operational expense. 

The mindset of ‘don’t move because it’s a new chipset’ or ‘it may require new code’ is a completely invalid argument. When Aruba releases its 802.11ac AP, don’t you expect it to a) use a new chipset and b) require new code? This is going to happen for every infrastructure manufacturer – Aruba included.

More upgrades coming: The 3600 AP itself requires you have the latest 5500 series or WiSM2 controllers as well as NCS management. So if you have older 2400, 4000, WiSM or WCS, it is that time to write your Cisco tax check again. Make it out to, “Cisco Catalog of Compromise”. And consider this- the 3600 11ac module is pre-standard and will not meet promised performance increases, so you will likely be replacing those 3600 APs at some point in the near future. 

You position the requirements for the 3600 as having a very narrow list of supported controllers (which is misleading) – it is also supported on the 7500 controller, the 2504 controller and the SRE controller. Are you telling me that every modern Aruba AP is supported on every past Aruba controller? At some point you have to lifecycle manage your gear – even Aruba. I don’t even know what a 2400 is.

All told, a Cisco 3600 AP + module will a) give you better performance today with 3 spatial streams and b) cost far less than purchasing an Aruba 3SS AP today and replacing it with an Aruba 802.11ac AP tomorrow. There is no upgrade assurance with Aruba. The message is loud and clear – if you’re an Aruba customer, do *not* purchase the AP-135. You will end up needing to forklift it out when you move to 802.11ac next year. Buy a Cisco 3600 + 802.11ac module and you’ll have spent far less money than buying two Aruba Access Points (one now, one later).

-Sam

End reply post

Now, I realize it’s laughable to infer that Aruba is advocating you not purchase their flagship Access Points, and it’s a leap to assume that, since Aruba offers no upgrade investment protection, you should stick with your old Aruba equipment. But that leap is a small step – more akin to jumping off the bottom step of your stairs to the ground floor. The leaps that Aruba makes regarding 802.11ac and the module from Cisco are more akin to Aruba’s entire executive team finding the tallest building in San Jose and jumping off it, all the while waving their fists in the general direction of Tasman Drive. Shame on Aruba for not fact-checking their article. Shame on Aruba for spreading FUD. Shame on Aruba for picking a fight with baseless claims and accusations – declaring facts about a product that they’ve not even laid hands on.

-Sam

Cisco WLC 7.2 FUS code release

Cisco recently released version 7.2 of their Wireless LAN Controller code. Along with this update came something new for several administrators in the form of an ‘FUS’ update. This update is available for the 5500, WiSM2, and Flex 7500 platforms and contains a variety of firmware-specific updates for each platform, including:

For the 5500 and WiSM2:

  • Field Recovery image update
  • Bootloader update to 1.0.16
  • Offline Field Diagnostics update to version 0.9.28
  • USB Console update to 2.2
  • MCU image update to 1.8 (5500 only)
  • FPGA update to 1.7 (5500 only)

For the Flex 7500 controllers, there is a RAID firmware update. There is no FUS update for the 2500 controller or any of the legacy platforms (which aren’t supported in release 7.2 anyway). Buried in the release notes are a variety of nuggets, but it is imperative that this update be installed by itself, with a reboot between it and the main 7.2 code release. The order is not important, just the fact that there is a reboot in between. Additionally, in order for the FUS image to actually update the various components, you need a serial attachment to the WLC during the reboot, and you must interact with each image upgrade in order for it to execute. This means that if you’re used to doing the ER updates that you just ‘apply and forget’, this is going to be a deviation from that process. To add to this, each update requires you to answer ‘yes’ in order to happen, and they’re not quick. You will end up burning somewhere south of a half hour to pull off a complete upgrade, and if you happen to miss one, you’ll have to re-upload the image and step through it again. Cisco is nice enough to tell us during the update approximately how long each component will take, and these numbers are fairly close to what I’ve experienced in the field. The tally on a 5500 is:

Upgrade Bootloader from 1.0.1 to 1.0.16

  • Erasing Flash (estimated 6 seconds)
  • Writing to Flash (estimated 41 seconds)
  • Checking Boot loader integrity (estimated 2 seconds)
  • Total: 49 seconds

Upgrading FPGA from rev 1.3 to rev 1.7

  • Upgrade takes about 75 seconds to complete

Upgrading Env from rev 1.6 to rev 1.8

  • Upgrade takes about 4 seconds to complete

Upgrading USB from rev 1.27 to rev 2.2

  • Upgrade takes about 11 seconds to complete

Upgrade OFD from version WLCNG OFD 0.8.1 to WLCNG OFD 0.9.28

  • Erasing Flash (estimated 24 seconds)
  • Writing to flash (estimated 111 seconds)
  • Total: 135 seconds

Upgrade Field Recovery Image from version 6.0.182.0 to 7.0.112.21

  • Erasing Flash (estimated 49 seconds)
  • Writing to flash (estimated 716 seconds)
  • Total: 765 seconds

Yes, you read that correctly – the Field Recovery Image takes a whopping 13 minutes to execute! Of interest to those of you who use the USB serial console built into the WLC: the USB update will flat-out break your session. Once you kick off that particular update, you should suspend your session and wait for it to complete. The kicker, of course, is that you won’t know when it’s done since you don’t have a console session. The lesson here is that while it is possible to perform these updates using the USB console, you’ll not regret preferring the good old-fashioned RJ-45 console cable method.
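
For completeness, the FUS image gets onto the controller the same way as any other AireOS image. Here’s a minimal sketch of the TFTP download from the WLC CLI – the server IP is made up, and the filename placeholder should be whatever FUS image matches your platform:

transfer download datatype code
transfer download mode tftp
transfer download serverip 192.0.2.10
transfer download path /
transfer download filename <your-FUS-image.aes>
transfer download start

Then issue ‘reset system’ while you’re watching from the serial console so you can answer the upgrade prompts.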

If you happen to miss an update and have to reapply the image, you’ll notice that the FUS image will proactively check to see if the updates have been applied already:

====================

Checking for Bootloader upgrade

Bootloader upgrade …

Bootloader 1.0.16 is up to date.

====================

Checking for FPGA upgrade

FPGA upgrade …

FPGA image is up to date

It will perform this check for all components, but when it gets to the Field Recovery Image, it will actually ask you if you want to re-apply it:

Field Recovery Image upgrade …

        Field recovery image Current version 7.0.112.21 is up-to-date.

        Answer “y” below will force upgrade to run again.

        Are you sure you want to proceed (y/N) ? n

Again, note that if you re-apply this particular update, you’re in for 13 minutes of ‘edge of your seat’ thrills while it completes. There is no way to cancel it, and you’re warned numerous times throughout the FUS process in bad English:

      * Lost POWER will completely kill this unit and not recoverable. *

      * There may be multiple reboot. Please let the program run.      *

Once you’ve completed your updates and you’re observing the production image boot, it will verbosely tell you the versions of all of these components so you can tell whether or not they’ve been successfully applied:

Cisco AireOS Version 7.2.103.0

Firmware Version FPGA 1.7, Env 1.6, USB console 2.2

Initializing OS Services: ok
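
If you missed the boot banner, the same firmware information should be visible once the controller is up – on the 5508s I’ve touched, it appears on the ‘Firmware Version’ line of the ‘show sysinfo’ output, so a quick sanity check is simply:

show sysinfo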

Applying these updates is important and does resolve a variety of issues, so it is recommended to schedule whatever outage window you’re going to require to apply them. Alternatively, you may want to consider pulling a spare (+1) controller out of service, upgrading it, and moving all of your Access Points over to free up your primary for upgrade. Either way, you should do this – just make sure the updates actually apply!

Resurrecting a bricked NM-AIR-WLC6

Cisco recently posted this addendum to their Software Downloads section for the Cisco Wireless LAN Controller Module:

Warning: the Wireless LAN Controller Network Module (NM-AIR-WLC6-K9) is not supported in any software release after 4.2.209.0. Attempting to install 5.0 or later software can permanently damage the module.

This is a pretty recent addition and appears to have been an oversight for the past year or so, while they’ve been happily releasing version after version of NM WLC code without this disclaimer. If you’re like me, you’ve been keeping up on your latest and greatest software releases, and you may find yourself in some murky waters if you happen to have this module. Where I landed was a module that would boot fine but would not establish any network connectivity (management, AP join, etc.). You should note that this article in its entirety does not apply to the NME module, just the NM. The NME module has more memory and a 1G internal interface to the ISR; the regular old NM has less memory and only a 10/100 interface. You can tell which module you have by looking at the silkscreen on the back of the module or by doing a ‘show sysinfo’ at the CLI of the controller.

This article is not supported by Cisco, TAC, or myself. You may further damage your NM if you proceed. This article is not for the faint of heart and will most certainly void any warranty you may have. If you have a bricked NM under SMARTnet, you should contact TAC for a replacement unit, not follow the directions in this post. I do not guarantee any work here, and you can severely damage your module, its flash, or your PC. Read and follow this article at your own risk!

Now that that’s out of the way: the specifics of my problem landed me in a situation where I could not roll back the version of code on my flash (not having network connectivity really limits you, I gotta admit). You may find yourself with a corrupt flash, unable to boot, or other general mayhem. Once Cisco released this notification, it dawned on me that the version of code on my flash was likely the culprit. Since the NMs are basically a Pentium III with some memory and a flash to boot off of (similar to the CUE or Content Engine modules), running Linux, I figured I should be able to copy the flash from a good NM and I’d be back in business. Having located a donor NM (thanks to Robert B. for his support here), I assembled the following items to move on:

  • Donor working NM to rob/copy the flash off of
  • A small screwdriver
  • Old laptop with Cardbus/PCMCIA slot
  • CF to PCMCIA adapter (like one of these)
  • USB flash drive larger than 256M, formatted with something that Linux can write to
  • A Linux distribution that I could boot off of CD, like DSL
  • A static bag to work off of

Getting a donor module

The first thing I did was to remove the CF module from the donor NM.

Step 1) Place the donor NM on a static-safe work surface. The bag it came in would be good.
Step 2) Confirm that the module you’re working on is an NM, not an NME.
Step 3) Locate the cover that hides the flash module.

Extracting the flash

You must then remove the protective cap around the flash module.

Step 1) Unscrew the CF housing.
Step 2) Lift up gently on the right edge of the cap and it should fall off the module.

Remove the flash from the NM

Gently grasp the Cisco flash module by both edges and pull it directly out of the NM.

Insert the flash module into your CF reader

This should be pretty straightforward.

Once you have the flash module in a CF reader, we’re going to be focused on getting a good block image off of it. The rest of this article will discuss how to take an image of the flash module and store it on a USB flash drive. Once you have your favorite LiveCD of Linux (or BSD if you prefer) downloaded, boot your old laptop off of it. We chose a LiveCD release so that we can do this on a laptop without having to do a full-blown installation of Linux just for this one project. Feel free to use any sort of Linux box you happen to have laying around. 🙂
Once you’ve successfully booted Linux, you’ll need to open a terminal window. In DSL, there is a link to the Terminal app in the bottom left corner. Attach your USB drive and insert your PCMCIA flash reader once your system is booted and your terminal is up.
Type:
sudo su


  -This puts us into super-user mode so we don’t run into any permissions issues
Then:
dmesg


  -This gives you a dump of system messages. In particular we’re looking for two things. The USB drive and the CF adapter. In my system, this looked like:
<6>hub.c: new USB device 00:1d.1-1, assigned address 3
<6>scsi2 : SCSI emulation for USB Mass Storage devices
<4>  Vendor: USB 2.0   Model: FLASH DISK        Rev: 1.0
<4>  Type:   Direct-Access                      ANSI SCSI revision: 02
<4>Attached scsi removable disk sdb at scsi2, channel 0, id 0, lun 0
<4>SCSI device sdb: 2033664 512-byte hdwr sectors (1041 MB)
<4>sdb: Write Protect is off
<6> sdb: sdb1
This tells us that our flash drive is at /dev/sdb1 (the last line above), so let’s create a mount point and mount it using:
mkdir -p /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
Next we look for our flash reader. In my system, this looked like:
<6>cs: memory probe 0xa0000000-0xa0ffffff: excluding 0xa0000000-0xa0ffffff
<6>cs: memory probe 0x60000000-0x60ffffff: clean.
<4>hde: STI Flash 8.0.0, ATA DISK drive
<4>ide2 at 0x100-0x107,0x10e on irq 11
<4>hde: attached ide-disk driver.
<6>hde: 501760 sectors (257 MB), CHS=980/16/32
<6> hde: hde1 hde2 hde3
<6>ide_cs: hde: Vcc = 3.3, Vpp = 0.0
For this one, we’re not interested in any partition information like we were on the USB device; we’re just interested in the device name. Here, we see that this device is hde (the beginning of the third line). Once we have both the USB drive mounted and the flash drive identified, we’re going to use dd to take a block image of the device by typing:


dd if=/dev/hde of=/mnt/sdb1/nm.image
This deconstructs like this:
  -dd is the name of the application we’re going to use to take the image.
  -if=/dev/hde tells dd that the input file is the device of /dev/hde (our CF).
  -of=/mnt/sdb1/nm.image tells dd that the output file is a file on our USB drive called nm.image.


This will take some time since we’re reading the CF block by block and writing it out to the USB flash drive. The resultant image will be the same size as the CF (257M in this case) since it’s copying everything – data, unused bits, partition info, etc.
Once the read of the flash is complete, you should be back at a command prompt. You can confirm that it’s there and the right size by typing:
ls -lh /mnt/sdb1
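
If you want a little extra assurance that the copy is faithful before the donor flash goes back in, a quick checksum comparison works too (both sums should match, since dd copied the device block for block):

md5sum /dev/hde /mnt/sdb1/nm.image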


Shut down your laptop by using:
halt


Eject the CF adapter and re-install it into your donor NM, following the instructions in reverse. Once you’ve put away your good hardware, shut down your ISR with the bad NM, remove the module from the ISR, and extract the flash from it as described above. Insert it into your CF reader as described above, boot your laptop as described above, insert your devices (USB and CF) into the laptop as described above, and open the terminal application as described above.


Once you’re at your terminal prompt, we’re going to do the following:
Type:
sudo su


  -This puts us into super-user mode so we don’t run into any permissions issues
Then:
dmesg


  -This gives you a dump of system messages. Look for your USB device and CF device like you did before and confirm they’re there.
Now we’re going to take our image that we created above and write it out to our bad flash:
dd if=/mnt/sdb1/nm.image of=/dev/hde



This deconstructs like this:
  -dd is the name of the application we’re going to use to take the image.
  -if=/mnt/sdb1/nm.image tells dd that the input file is a file on our USB drive called nm.image.
  -of=/dev/hde tells dd that the output file is the device of /dev/hde (our CF).


This will take some time as well, since we’re now reconstructing all of the data bits back onto our module. Once that completes, shut down your laptop by using:
halt


Once it’s powered off, you should have a complete copy of the donor module’s flash in your hands. Reassemble your module and re-insert it into your ISR. Power it all back on and you should be able to use:


service-module wlan-controller 1/0 session


to confirm that your module boots successfully. One of the more obvious side effects of this is that you’ll lose your NM configuration and you’ll now have your donor NM’s configuration on yours. You’ll want to watch the card boot and do a clear config first off to ensure you have a good starting point. If you don’t have a donor NM to get this process done, you may want to look around to see if anyone else in your situation has the data bits from the dd process above. Once you have an extracted image, this should work on any of the like platforms regardless of where it came from.
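
For what it’s worth, the wipe itself is quick. A minimal sketch of what that looks like from the module’s controller CLI once you’ve sessioned in (standard AireOS commands; answer the confirmation prompts and let it reboot):

clear config
reset system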