Analyzing analytic offerings

In case you’ve been living under a rock recently, the calm before the 802.11ax storm increasingly centers on Wi-Fi Assurance and/or Analytics – in particular, how is your Wi-Fi network performing and how happy are your clients (devices, not users)? Most solutions on the market lean on a healthy dose of buzzwords to answer this question – most notably Machine Learning (ML), Artificial Intelligence (AI), Big Data, and don’t forget Cloud – to make you, the consumer, feel like you’re genuinely on the bleeding edge of what a health-related system can give you. It struck me during the recent MFD3 event that each of these solutions approaches the Assurance/Analytics problem differently, and of course each touts its own way as ‘the best’ way to gather the data needed to give you actionable insight. Here is my take on the pros and cons of some of the leading/competing solutions:

1) Mist Systems

Mist Systems claims to be the First & Only AI-Driven WLAN – a bold statement indeed! Their primary source for statistics about user performance is directly inline from the AP. This ‘at the edge’ approach gives them deep insight into the radio and first-hop performance of applications on their network. With a healthy punting of metadata to the Cloud, they claim to achieve “Automation & Insight through AI”.

Pro: A great example of ‘Cloud enabled’ Analytics, and they genuinely seem to be hyper-focused on WLAN performance.

Con: Requiring Mist infrastructure means rip & replace for many organizations. Being hyper-focused on WLAN hardware leaves many organizations splitting their LAN infrastructure between vendors and that certainly diminishes the ‘one throat to choke’ troubleshooting. Visibility is at the AP layer only, ultimately leading to assumptive troubleshooting when issues outside of their visibility arise. Being a nascent company (and one of the last WLAN-only players) makes me wonder how long before they’ll be acquired.

Consumption: Cloud with a premium capex spend as well as ongoing required opex.

Bold claims!

2) Cisco Meraki

Since being acquired by Cisco in November 2012, Meraki has continued to bring features to market through their flagship product, the Meraki dashboard. The closest anyone comes to a ‘single pane of glass’ management portal, Meraki continues to shine for those Cloud-friendly organizations that place a premium on a single point of administration for their network. Generally, these tend to be the highly distributed organizations as opposed to the campus enterprise. Meraki’s ‘Wireless Health’ feature is in beta now and was ‘automagically’ delivered to existing customers.

Pro: Meraki’s agile product development targets the 80/20 rule pretty squarely. It’s ‘good enough’ for a lot of folks, and it’s ‘free’ to existing customers (if you don’t consider opex an expense, of course).

Con: Wireless Health is Wi-Fi only – there’s no end-to-end correlation with their switches or security appliances, which fragments the message around full-stack solutions. While focusing on making an ‘okay for most’ product, they certainly lose out on much of the deeper technical data commonly found in some of the larger platforms.

Consumption: Cloud with a premium capex spend as well as ongoing required opex (free to existing paying customers).

Slap a beta logo on it, call it good!

Wireless Health from Meraki

3) nyansa

Arguably *the* pioneer in Wi-Fi Assurance and Analytics, they were founded in 2013 and have a head start on most of the players in the market. Interestingly enough, nyansa is the only player in this space that doesn’t manufacture hardware to pitch at you; instead, they work with an ever-growing number of existing infrastructure providers (including most of the major ones!). An onsite ‘crawler’ gathers the data and punts metadata to the Cloud; the onsite components are generally lightweight, and assuming you’re already a VM-friendly organization, there are no real hardware requirements (and no ripping and replacing of APs).

Pro: They’ve been at it longer than anyone else and are clearly ahead of the game. They accept data from a variety of network sources, including your LAN infrastructure, so they’re likely to pinpoint issues more accurately than a Wi-Fi-only solution. Being able to ‘compare’ your data to peers of your own ilk is an interesting proposition and clearly one of the premier features they hang their hats on.

Con: Having an analytics-only platform that’s not tightly coupled with your infrastructure leads me to wonder about the long-term stickiness of the solution. The perceived high cost of the solution has led many to ‘deploy, diagnose, then remove’ – very much defeating the long-term goals of Analytics and Assurance platforms. Ongoing success when ‘all is good’ is very hard to demonstrate, and the vendor-neutral approach leaves them vulnerable.

Consumption: Primarily an opex play since there isn’t really a capex component to speak of (no APs or appliances to install).

That's not creepy at all.

nyansa

4) 7signal

7signal has been fairly quiet on the Assurance front as of late, but they’re worth a mention. They pioneered sensor-driven testing: hanging an ‘eye’ that connects to your network and measures various statistics about how well it’s performing has been their pitch from day one. Falling more on the ‘stats digestion’ side of the house than on the ML/AI side of the spectrum, 7signal is worth noting for their synthetic testing that closely mimics what a client sees on the network.

Pro: A client-first view is the best way to see the network, and a sensor (or something embedded in a client) is the only way to get this data.

Con: Having *only* client data means that correlation has to happen in a guesswork fashion. Couple that with a difficult install and a user interface that could stand a healthy dose of sprucing up, and the platform overall feels pretty stale.

Consumption: Capex spend for the sensors and ongoing support and maintenance. On premises deployment model with ‘lightweight-at-best’ analytics.

5) Aruba

Aruba acquired Rasa in May of 2016 and folded it into the Aruba Clarity team. They’ve since changed gears and are rolling the Rasa features into NetInsight. They’ve been relatively quiet on the productization front here, opting instead to show it off at events like Aruba Atmosphere and Mobility Field Day. They get some interesting insights out of the education campus use case they show, but I’ve not seen any readily actionable insights that don’t require Data Scientist-level queries. They have the potential to move the needle in the industry here, but making it easy to use is clearly something they’re struggling with.

Pro: Buying a ready-made analytics company reduces their time to market, and clearly Aruba is moving aggressively to get into the analytics game here. If you’re an Aruba Wi-Fi, AirWave, or Clarity/NetInsight customer, they have some big things in store.

Con: Today the data is clearly difficult to get at. Usability leaves a lot to be desired, and it’s pretty unclear where the platform is going between the legacy Clarity offering, the Rasa integration, NetInsight, and (don’t forget) the recent Niara acquisition on the security side. There are lots of moving pieces here, and Aruba will have to bring some quick clarity (hah!) to their consumption model.

Consumption: NetInsight productization is currently TBD, but I expect it will be Cloud-first, if not Cloud-only by the time you can get your hands on a production ready solution.

Doing thoughtful things.

Thoughtful people

6) Cisco Enterprise

Cisco has been focused on DNA Center, the successor to the APIC-EM platform. The platform runs ‘apps’ on top, and one of the flagship applications shipping today is DNA Assurance. This is the ‘all-in’ Cisco assurance platform that takes data from everywhere you can think of – NetFlow feeds from your WLC and/or switch, radio data from the AP, synthetic data from sensors, and feedback from actual clients. In short, they take the best of all worlds and attempt to lump it into one big platform without giving people the heebie-jeebies about their data being in the Cloud.

Pro: Cisco is ambitiously taking the ‘whatever you can feed me’ approach to Analytics and Assurance. The more feeds you can send it, the better. This allows organizations to deploy the solution components that make sense to them and add more later if they want improved fidelity. Deploying an Analytics platform that you can actually run onsite in a 1RU appliance is no small feat and will be an undoubted boon for the Cloud-averse.

Con: All of that horsepower isn’t cheap. Couple that with Cisco’s somewhat tarnished reputation of late around code quality, and some people get nervous about ‘one box to rule them all’ – though that concern should generally be mitigated for out-of-band analytics. Of course, this all works best if you’re Cisco end to end, which some will perceive as a negative.

Consumption: On premises hardware appliance fed by Cloud updates for the applications. Your Cisco ONE licensing consumption model and Smart Licenses will be key to getting this off of the ground, but so far there is no ‘break if you don’t pay’ approach.

I hope this roll-up was a useful overview of the Analytics and Assurance market as it sits today. Did I miss anyone? Let me know and I’ll try to get a summary up for you ASAP!

Management Frame Detection?

Nope! But MFD does stand for something even more exciting! Mobility Field Day (3!) is just around the corner! As a long-time delegate with a few minutes to burn on the family PTO trip, I thought I’d take a moment to reflect on the upcoming event. As you can see from the Tech Field Day page, there are tons of great sponsors lined up. Here is my take on the coming week, the sponsors’ strengths and weaknesses, and what I’d like to see. In order of presentation:

Arista (http://techfieldday.com/companies/arista-networks/, @AristaNetworks)

Arista has made a splash in the Wi-Fi space with their recent acquisition of Mojo Networks (nee: AirTight). I’m happy to see Mojo get scooped up in the ever-diminishing landscape of infrastructure providers, especially since they have a strong story about ‘hardware agnostic’ solutions. Their story since the AirTight days has been one of open platforms, and this strength has carried them to the success they’ve had so far. Arista’s has not. Admittedly I’m not a strong Data Center switch guy, but I don’t see how the open, commodity hardware platforms with custom ‘better than you’ software on top mesh well with Arista’s corporate messaging. I’d love to see some reconciliation on that front, and a clear vision for the Mojo team moving forward. Please spare me the ‘HP acquired Aruba, Cisco acquired Meraki, and those companies are fine’ story. Paint me a genuine story of market leadership backed by strong technical chops that promise to survive the acquisition.

Aruba (http://www.arubanetworks.com/, @ArubaNetworks)

Aruba (a Hewlett Packard Enterprise company) has been touting ‘industry leadership’ on several fronts recently, including WPA3 and some intriguing messaging around 802.11ax. Their strength is messaging – they do it well – but I fail to see how Aruba single-handedly ‘landed’ WPA3, or how their messaging around 802.11ax (buy when *we’re* ready, but not anyone else) is anything more than corporate marketing fluff. I’d love to see how they are helping the industry move forward *as a whole* on more than just ‘standards stuff coming down the road’. Help me understand why Aruba’s implementation of QCA radios is better than someone else’s. Help me understand why their switches bring more value to an enterprise beyond an ABC play. Help me understand why end-to-end networking with the Aruba logo on it is better.

Cisco (http://www.cisco.com/, @Cisco)

Cisco, the 800 lb. gorilla that everyone loves to hate. Cisco is a machine unlike any other. They have critical mass despite themselves and are painting some intriguing messaging around Assurance products that seems to resonate well with on-premises enterprises. All other networking aside (routing, switching, security, Data Center, etc.), Cisco Wi-Fi has seemingly lost its way as of late. Their adoption of QCA radios (CleanAir is awesome, unless they sell an AP without it!), their continued duality around the Meraki acquisition (it’s right when it will land a sale), and the feature gaps as new platforms come online have always stuck in my craw. The 802.11ac wave 2 APCOS change (the OS on the APs) debacle has soured many appetites for a monolithic beast of an assurance platform. I’d love to see how Cisco is involved in driving standards (WPA3, 802.11ax) while letting their ecosystem around CCX fall by the wayside despite not having standards-based equivalents for 100% of those components (DTPC, anyone?).

Fortinet (http://fortinet.com/, @Fortinet)

Fortinet (nee: Meru) has always been intriguing to me. If there is a dark horse in the Wi-Fi space, this is it. Out of left field, some strange security company acquired ‘those SCA guys’, which raised more than a few eyebrows in the industry. I’m not super passionate about firewalls, so when someone touts that their strong suit is plopping some security stuff onto an already delicate Wi-Fi ecosystem, I get nervous. I’d love to see what Fortinet is doing on the SCA front (other than the occasional corner-case deployment). How are you fostering the technology that made Meru, Meru? If you’re going to be the one exception in the CWNP curriculum, own that. Embrace it, get the delegates to see what makes it special. Get into the nuts and bolts of how it works, what makes it tick. Get your radio firmware developer into the room and nerd out with us for a bit. Don’t be afraid to put that unpolished guy on stage who only knows protocol. We love that kind of stuff.

Mist (http://mist.com, @MistSystems)

Mist is on the short list of Wi-Fi-only players that I suspect will be acquired soon. Between them and Aerohive, there aren’t many players left, and to be fair, Mist came out of nowhere when Cisco ‘spun out’ (my speculation) the previous owners of the AireOS legacy. They claimed virtual BLE was the next big thing; now it’s AI-driven Wi-Fi – what’s next? Do they realize that the ‘heritage’ they claim ownership of has turned off more people than it’s attracted? When someone claims to have been at the helm of Cisco Wi-Fi during the Meraki acquisition, or to have the father of controllers (and RRM) in the driver’s seat, how is that a compelling story when so many of today’s woes center on those two topics? I’d like to hear how Mist has those people at the helm, but how they’re not destined to repeat the past. Mist claims to have an AI-driven interface but fails to answer some pretty plain-English queries. Tell me how Mist is better. How the AI is not just a bunch of if statements. Burning Man Wi-Fi? I hope not!

NETSCOUT (http://www.netscout.com, @NETSCOUT)

NETSCOUT (or is it netscout or NetScout?) has long held the mantle of go-to wired insight products and only recently entered the Wi-Fi fray with the Fluke (nee: AirMagnet) acquisition. They inherited an impressive product in the AirCheck G2, but also a legacy of tools that are, quite frankly, stale. What is next for the G2? Many of us in the industry love our hulk-green Wi-Fi diagnostics tool, and the G2 v2 additions were welcome. Is there enough left in the AirCheck to hope for a v3? I’d love to see a cleaner picture about Link-Live and how it plays a role in the beloved AirCheck G2. I’d love to hear a definitive story on the likes of AirMagnet Survey Pro, Wi-Fi Analyzer, and Spectrum XT – all of which are *very* stale. Let’s put these to bed or make something of them that the industry can use.

nyansa (http://www.nyansa.com, @Nyansa)

nyansa has been that strange analytics company with the funny name that promises to fix all of our ills through machine learning and comparative analytics. They’re doing some neat things with ‘just a bunch of flows’, but is it enough? It seems like everyone is jumping on the analytics bandwagon nowadays, but with the hefty price tag for a point-in-time resolution product, it feels like a tough sell. Do you know what happens when your help desk has 9 dashboards, all with different data in them, and you try to aggregate and correlate it into a meaningful dashboard? Your help desk now has 10 dashboards. I’d love to see why your data is better (of course), but tell me how it gets rid of data I don’t use today, and tell me how it does it at a price point that makes it a no-brainer.

Dear reader, what do you want to see? Feel free to reach out to me by comment, or privately, or on twitter before or during the event and I’ll make sure you get a response. Till then, see you at MFD3 on September 12 through the 14th – make sure to tune in at: http://techfieldday.com/event/mfd3/

AirCheck G2 gets a v2

It’s no secret that I’m a fan of the Netscout AirCheck G2 and have been since before its release. I’m happy to see that today they announced version 2 of the firmware for the AirCheck G2, which brings some pretty neat features to the product. The official page goes into greater detail on the updates, but the two I’m most impressed with are the new interferers page and the integration of iPerf testing into the unit.

The interference detection is a nice-to-have feature for those field teams that need an initial look at the non-Wi-Fi devices in the air around them. It leverages the integrated WLAN radio for spectrum analysis, so it’s not perfect, but it readily enough identified several of the more common interference sources around me (Bluetooth and microwave ovens). In addition to identifying the interference, the ‘locate’ functionality that you’ve come to expect from the AirCheck also works with the source of interference. In my testing, I was able to demonstrate that moving away from an active microwave oven did indeed show a corresponding drop in detected signal strength. Let’s be fair, it’s not a full-blown FFT-based spectrum analyzer, but in a pinch, and for common items, it’s far more insight than we’ve ever had in a handheld tester.
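
A quick aside on why the ‘locate’ approach works at all: received signal strength falls off predictably with distance from the source. Here’s a minimal free-space path loss sketch in Python – the frequency and distances are illustrative assumptions of mine, not anything the AirCheck reports:

    import math

    def fspl_db(distance_m: float, freq_mhz: float) -> float:
        """Free-space path loss in dB for a distance in metres and a frequency in MHz."""
        # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
        return 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_mhz) + 32.44

    # Illustrative numbers only: a 2.4 GHz interferer (say, that microwave oven)
    # observed at 2 m and again at 10 m. The extra path loss is the drop in
    # detected signal strength you see as you walk away from the source.
    for d in (2, 10):
        print(f"{d:>2} m: {fspl_db(d, 2437):.1f} dB of free-space path loss")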

The iPerf server is another interesting new feature – not only on the AirCheck side of the equation, but on the far-side tester as well. The newly announced Test Accessory from Netscout is reminiscent of the LinkSprinter products – a handheld, portable, battery- or PoE-powered, cloud-enabled tester.

The Test Accessory

This integrates quite nicely with the v2 firmware via the new iPerf test option once you connect to your SSID. The nice bit is that this removes nearly all of the headache of doing iPerf testing – configuring the server and figuring out what its IP address is. You can plug the Test Accessory into your network anywhere and it will phone home to the Link-Live service. The AirCheck will query the service and automatically populate the tester’s IP address for your test. This makes for a very simple-to-use throughput tester that’s easy to carry and accessible to everyone. If you don’t have a Test Accessory, or if you’re comfortable with iPerf testing, you can also just use the AirCheck as a standard iPerf endpoint, so you’re covered either way you want to go here – with a stock, static internal iPerf tester or with a field-accessible, cloud-enabled tester that goes anywhere.
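
If you want to sanity-check the same kind of throughput measurement from a laptop, here’s a minimal sketch that drives the stock iperf3 CLI from Python. The server address is a placeholder of mine – in this setup it would be the Test Accessory (or any plain iperf3 server) on the wired side:

    import json
    import subprocess

    # Placeholder endpoint for illustration -- substitute the address that
    # Link-Live (or your own iperf3 server) hands you.
    SERVER = "192.0.2.10"

    # Run a 10-second test and ask iperf3 for JSON output (-J) so it's easy to parse.
    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )

    report = json.loads(result.stdout)
    rx_bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"Measured throughput: {rx_bps / 1e6:.1f} Mbps")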

These features take an already dead-useful testing tool and expand its role for the Wireless LAN deployer beyond just ‘is it up’ testing. With the ability to detect a number of interferers and do actual throughput testing of a Wi-Fi network, the field implication is that your existing installation teams (or other G2 users) can more deeply validate the functionality of a network – and when things go wrong, have a level of insight that they previously did not have. I’m quite happy that Netscout is clearly investing not only in the G2, but in additional products that augment and expand its functionality. The G2 is an overbuilt hardware platform, and it’s refreshing to see Netscout taking advantage of that extra horsepower. If you’re an AirCheck G2 user, you really need to go get the firmware update now. If you’re not an AirCheck G2 user, what are you waiting for?

All about DART

Yes, I’m writing a blog post on a connector. Just a connector. If you’re like me, you can appreciate the little things in life. This is one of those times that something little snuck past me, and it wasn’t until now that I’ve started to fully appreciate its impact and importance. When Cisco launched their 2800/3800 APs, dual 5GHz was certainly at the top of the list of the most talked-about features (see the #MFD session here!). This came with some caveats (as all new features do), and using a separate set of antennas for the second 5GHz radio was the biggest. This is handled on the internal antenna models with an in-built extra set of antennas, but on the external antenna models, this presented a bit of a challenge. In the wide world of antenna connectors, the Wi-Fi space commonly deals with RP-SMA, RP-TNC, and N-type connectors depending on your vendor and the deployment type. In the Cisco world, that’s RP-TNC for indoor APs. With a single 4-element antenna today, that’s four connectors (or four single-element antennas). With two antennas, you’re looking at a whopping 8 cables coming out of your AP! 8 cables, 8 connectors – it gets messy quick. Enter the DART connector:

All covered up!

DART revealed!

Inconspicuously located on the side of the AP, behind a little door, the new DART connector reveals itself as a complex-looking array of pins and contacts in a tight, external-facing form factor. Here’s the interesting part though: this isn’t a new connector! In fact, it’s been shipping to the public for a little while now in the form of the Cisco Hyperlocation module and antenna!

Hyperlocation with DART

DART on Hyperlocation Exposed!

So, that’s all great, but what’s really *in* the DART connector? DART stands for Digital Analog Radio Termination, and it does all of those wonderful things. Firstly, the analog antenna connections that we use (so we don’t have 8 RP-TNC ports on our AP) are the 4 larger pins across the bottom of the connector.

Look at all those pins!

When we use the DART to RP-TNC pigtail for backwards compatibility with shipping antennas, these are the pins that map directly to the 4 RP-TNC connectors. In short, these are the 4 analog ports that carry the actual analog signal through the connector.

DART to RP-TNC Cable!

Fully assembled!

For existing RP-TNC based antennas

On the cable end!

That leaves us with the extra 16 pins. Those are the ‘Digital’ piece of the DART connector and can serve a variety of purposes. Initially, they’re used to identify the type of cable attached to the DART connector. For example, with the Hyperlocation module, this shows up in the AP details:

Circular Antenna

For the DART to RP-TNC cable, this takes the form of a simple resistor that maps two of the pins back to each other:

DART disassembled

It’s easy to see that there’s quite a bit of leftover capacity that could be used in a connector of this type. Today, if we use very high-gain antennas, we have to have multiple models of APs (see the 3602p and 3702p). If we could identify the gain of the antenna through an automated mechanism, the AP could auto-adjust itself so as not to exceed EIRP limits. Another potential use case is DART-native versions of our existing antennas in a simple-to-use connector. Imagine not having to screw on connectors anymore! With a quick-connect antenna mechanism that auto-IDs the antenna capabilities to the AP, this could certainly be the connector of choice for external antennas in the future!
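
To make that EIRP idea concrete, here’s a minimal sketch of the arithmetic an AP could run if the antenna identified its own gain over the digital pins. The gain, cable loss, and regulatory limit values are illustrative assumptions, not anything Cisco has published for DART:

    def max_conducted_power_dbm(eirp_limit_dbm: float,
                                antenna_gain_dbi: float,
                                cable_loss_db: float = 0.0) -> float:
        """Highest transmit power that keeps EIRP within the regulatory limit.

        EIRP (dBm) = conducted power (dBm) + antenna gain (dBi) - cable loss (dB)
        """
        return eirp_limit_dbm - antenna_gain_dbi + cable_loss_db

    # Illustrative values only: a 36 dBm EIRP limit, 1 dB of cable loss, and two
    # antennas -- a stock 4 dBi omni and a 13 dBi high-gain directional.
    for gain in (4.0, 13.0):
        cap = max_conducted_power_dbm(36.0, gain, cable_loss_db=1.0)
        print(f"{gain:>4.1f} dBi antenna -> cap conducted power at {cap:.1f} dBm")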

With DART connector on edge.

Note the DART connector on the left.

10 reasons to take another look at 2015 Cisco Mobility

Let’s face it, Cisco is huge. They’re massive, and occasionally they get things wrong. If you’ve strayed away from Cisco in the past year (or longer) because of a specific issue or gap, it’s high time you took another look. The Cisco Mobility offerings today are a far cry from what they were just a year ago. Here are 10 great reasons to go get reacquainted with the 2015 Cisco Mobility offerings:

1) 5520/8540 WLCs

The introduction of a Converged Access 60G solution highlighted the gaps in the WLC portfolio in the 20-40G throughput range. Both of these new controllers (one 20G capable, one 40G capable) are based on the more mature AireOS codebase running 8.1 and later. While this doesn’t mark an EOS/EOL announcement for the 5508 (clocking in at 8G), it does give that 7-year-old platform some good alternatives for lifecycle management.

2) Prime Infrastructure 2.2 then 3.0

Ever since WCS was taken over and moulded into the NCS and then Prime Infrastructure products, it has always borne the scars of a legacy mired in Adobe Flash performance issues. Couple that with a dramatic uptick in features and you’ve got a recipe for disaster. The new versions of Prime Infrastructure actually perform as well as they should, starting at about version 2.2, and the new UI of Prime Infrastructure 3.0 completely moves away from Flash and demonstrates a significant re-think of the product – including ‘Make a wish!’.

3) 802.11ac wave 2

Let’s not forget the fun stuff – APs and radios. With competitively positioned 802.11ac wave 2 products, Cisco is staying at the forefront of the latest and greatest standards. With impressive throughput numbers, multiple gigabit uplinks, and fancy new features like MU-MIMO, the 1830/1850 APs are clearly paving the way for the next generation of some pretty obviously numbered future platforms. The only question is: what does Cisco have in store for us next?

4) HALO

No, not the game – the new Hyperlocation module and antenna array. Cisco is delivering on the promise the industry made to us oh so many years ago about leveraging your Wi-Fi network as a platform for tracking your enterprise’s assets. Touting accuracy down to 1 meter, this module for your AP3600/AP3700s will take your location fidelity ‘to the next level’.

5) Mobility Express

For those that don’t like having a bare-metal controller and don’t see the need for controller-based features (such as a centralized data plane), we now have a ‘controller on the AP’ option! This allows us to focus on the smaller deployments without the extra cost and complexity (such as it is) for those customers. This isn’t the ‘one size fits all’ approach that we’ve seen in the past, but rather an evolution of a well-thought-out strategy to bring enterprise features to every market segment.

6) UI improvements

Along with the Mobility Express product, the ‘metal WLCs’ are sporting a new user interface and out-of-the-box setup experience (Day 0 and Day 1 support). If you’ve felt the WLC interface was a bit dated in the past, go take a gander at the plethora of new graphs, charts, and actual usable data about your infrastructure – all without having to go to a larger NMS platform!

7) CMX Evolution

The MSE product is finally getting some legs under the advanced location pieces. This easy-to-deploy, ‘for everyone’ product starts to bring some pretty insightful analytics to any size of deployment – from ‘no maps required’ presence analytics all the way up to a Hyperlocation-enabled social media engagement platform. With both on-premises and cloud-based offerings available, it really is very easy to start getting very insightful data out of any size of network.

8) CCIE Wireless version 3

The dated CCIE (Cisco Certified Internetwork Expert, Wireless) exam has been updated to include software and hardware platforms from this year. You can now tackle one of the industry’s most challenging certifications on contemporary labs that are actually relevant to solutions you’re deploying today!

9) UX domain APs

See my previous blog on the topic for a more in-depth look at the UX products, but for those buying and deploying APs spanning multiple countries, this is a pretty good way to reduce a ton of deployment and ordering complexities. By standardizing on a single SKU globally, you can make quick work of some of the logistics nightmares of the past.

10) Cisco ONE licensing

Yes, licensing is boring, complicated, and expensive. Cisco ONE addresses all three of those pain points in one easy go. With a ‘count the AP’ approach to licensing, you can now start to take advantage of all of the above products in an easy to consume, deploy, and license fashion – without breaking the bank. For example, if you wanted to replace your old WLC with a new one, in the past, you would end up repurchasing your AP licenses. In this model, all products start at 0 APs and you pick the size that’s right for you – at any scale. Pick the solutions you want to deploy: ISE, Prime Infrastructure, advanced location analytics, etc – and go! A significant departure from the traditional licensing model in Cisco-land.

I know that a ‘recap overview’ blog sometimes seems too lofty, but there really is a ton to see if you’ve been unplugged from the Cisco world over the past year or so. Take a deep breath and plunge back in at any level and you’ll find something new that wasn’t there before. The Cisco ship sometimes turns slowly and sometimes it’s easy to forget that there is innovation happening all over the mobility space in San Jose.

Disclaimer: I was part of the Wireless Field Day 8 delegation to Cisco where we learned about several of the above topics. For more information on Cisco’s appearance at WFD8, go check out the video!

The evolution that will start the revolution

You’ve heard it all before: evolutionary technology versus revolutionary technology. Everyone wants their newest technology to be revolutionary – expecting it to be life changing and a wide-sweeping, compelling reason to spend tons of money. This is rarely the case and is more often than not marketing fluff to try and get you onto the next big thing. Occasionally, though, we get a technology announcement so unassuming – squarely ‘no big deal’ from a speeds-and-feeds perspective – that it’s easy to overlook. That is clearly the case with the recent multigigabit announcements from Cisco during Cisco Live, Milan. Multigigabit is a technology that allows your existing cabling to support speeds in excess of 1G without having to make the jump all the way to significantly more costly 10G. Since we already have technology that addresses speeds and feeds above what we’re talking about here (how many 10G uplinks have you deployed recently?), it’s easy to overlook the impact this will bring to our daily lives. The ability to move to 2.5G and 5G link speeds without making the jump all the way to 10G will let us get improved link speeds without paying a premium for them. The expected cost increase is anywhere from 0% to 15% according to the rumor mill, which makes the 250% to 500% speed bump quite attractive!
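
To put that in concrete terms, here’s a quick back-of-the-napkin comparison of cost per gigabit of link speed. The $100 baseline port price is an arbitrary assumption of mine; the 15% premium and the 2.5x/5x speeds are the rumor-mill figures above:

    # Assumed baseline: a 1G access switch port at an arbitrary $100 list price.
    # The 0-15% premium and the 2.5G/5G speeds come from the paragraph above.
    baseline_port_cost = 100.0

    options = [
        ("1G (today)",        baseline_port_cost,        1000),
        ("2.5G multigigabit", baseline_port_cost * 1.15, 2500),
        ("5G multigigabit",   baseline_port_cost * 1.15, 5000),
    ]

    for name, cost, speed_mbps in options:
        print(f"{name:>18}: ${cost:,.2f} per port, "
              f"${cost / (speed_mbps / 1000):.2f} per gigabit of link speed")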

802.11ac wave 2
The reason I’m talking about it is that it brings with it the promise of addressing the 1G bottleneck that people have been gnashing their teeth over in the wireless space for the past couple of years. While we’ve been able to reasonably deflect the speeds-and-feeds conversation with 802.11ac wave 1 (speeds approaching 1G wired requirements), there has been no good way to move past that without having a two-cable conversation. The assumption up till now has been that 2x 1G links will be the way forward, and many people have been running two copper runs out to their Access Points for the past several years in anticipation of this approach. 802.11ac wave 2 will undoubtedly break the 1G barrier in fairly short order, with promised speeds of (best case) a 6,930 Mbps PHY rate (about 4,900 Mbps on the wire). Multigigabit solutions will allow us to address these concerns without having to invest in 10G links. Better yet, they will allow us to address these concerns without having to consume two 1G ports on our switches. Regardless of the alternative you would otherwise choose (1x 10G or 2x 1G), the cost of deploying a single multigigabit link supporting up to 5G will be less at scale.
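
As a quick sanity check on that bottleneck claim, here’s the arithmetic using the rough best-case figures quoted above (numbers from the paragraph, not measurements):

    # Rough figure from the paragraph above: 802.11ac wave 2 tops out around a
    # 6,930 Mbps PHY rate, or very roughly 4,900 Mbps on the wire in the best case.
    wire_rate_mbps = 4900

    for link_name, link_mbps in [("1G", 1000), ("2x 1G", 2000),
                                 ("2.5G", 2500), ("5G", 5000), ("10G", 10000)]:
        headroom = link_mbps - wire_rate_mbps
        verdict = "fits" if headroom >= 0 else "bottleneck"
        print(f"{link_name:>6} uplink: {verdict} ({headroom:+d} Mbps of headroom)")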

Power
The other unassuming byproduct of this conversation is that Access Points require power to bring up all of those components. It would be nearly impossible to power a 10G Ethernet interface in an AP within the power budgets we have today. By reducing the link speed requirement to 5G, we can save power at the edge device and still fit within modern PoE power negotiation. Multigigabit solutions today will provide PoE, PoE+, and UPOE to ensure that the wave 2 APs we’re going to be hanging have ample power for whatever they bring.
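
Here’s a minimal sketch of the power side of that argument. The per-standard budgets are the nominal figures available at the powered device for PoE, PoE+, and UPOE; the AP draw numbers are purely illustrative assumptions:

    # Nominal power available at the powered device for each standard.
    POE_BUDGETS_W = {
        "PoE (802.3af)":  12.95,
        "PoE+ (802.3at)": 25.5,
        "UPOE":           51.0,
    }

    # Hypothetical draw estimates for a wave 2 AP with a 5G multigigabit uplink
    # versus the same AP burdened with a 10G interface.
    AP_DRAW_W = {"AP with 5G multigigabit uplink": 20.0, "AP with 10G uplink": 30.0}

    for ap, draw in AP_DRAW_W.items():
        ok = [name for name, budget in POE_BUDGETS_W.items() if budget >= draw]
        print(f"{ap}: {draw:.0f} W -> powered by {', '.join(ok) if ok else 'none of the above'}")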

The Revolution
I predict that the incremental cost and intermediate speeds will allow us to start having conversations about the jump to 10G. Multigigabit solutions on Access Points, switch uplinks, and desktop and server NICs will be the next big thing. Stackable solutions today promise backwards compatibility so you don’t have to rip and replace – just add a stack member and you’re good to go in that closet/IDF! Whether you’re future-proofing, enabling faster wireless, or just making sure you’re not spending money on (can you believe it?) now-legacy 1G infrastructure, make sure you’re having a conversation today about multigigabit Ethernet to bridge the gap to 10G.

For more information about the NBASE-T alliance, go here.
For the Cisco Live, Milan – Tech Field Day Extra event with Peter Jones, go here.

Avaya Wireless is all about SDN

After hearing about Avaya’s wireless portfolio recently, I kept coming back to a common thread that seems entrenched in the core of their solution – SDN. Admittedly I’m not a Data Center or Applications kind of guy, but Avaya has an interesting take on positioning their wireless portfolio. Instead of focusing heavily on a unique set of hardware-specific features in their Access Points, they focus on a ‘module enabled’ Software Defined Network strategy. Paul Unbehagen, Chief Architect at Avaya, accurately describes SDN as meaning something different to everyone.

At its core, regardless of vendor or implementation, SDN is meant to ease network administration and orchestration by way of software (the S in SDN). Avaya enables this by way of software running on their hardware to create Fabric Attach (FA) Elements. These elements use FA Signaling to communicate with one another. These modules running throughout your network (on Avaya hardware) automatically discover and become a part of your FA Core through the orchestration suite.

Avaya does this across their entire infrastructure portfolio, which includes their core products, edge switching, and Wireless Access Points. These components all orchestrate together to automatically configure and allocate resources in your infrastructure as needed. In one example, they showed an Access Point coming online and auto-registering using Fabric Attach, and magically the requisite VLANs for the wireless infrastructure were automatically provisioned on the uplink switch. It’s clear that Avaya has invested significant resources in enabling this FA functionality, including going as far as proposing Fabric Attach as a standard to the IEEE, but their messaging is clear – when you run an FA-enabled network end to end, it ‘just works’.

It was interesting to hear the Avaya story in their own words, including how they address some of the more interesting corner cases:

  • Running an FA network without FA-enabled devices attached – this is supported using standards-based LLDP TLVs (see the sketch after this list), but it will likely require more effort than having the FA ‘agent’ running natively on your device.
  • Running Avaya wireless on a non-FA infrastructure – this is supported, but Avaya doesn’t bring anything special to that story that someone else doesn’t already do. This is an interesting scenario that could be positioned for transition needs.
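
For the curious, here’s a minimal sketch of what an IEEE 802.1AB organizationally specific TLV looks like on the wire – the general vehicle a scheme like Fabric Attach can ride on. The OUI, subtype, and payload below are placeholders, not Avaya’s actual FA encoding:

    import struct

    def org_specific_tlv(oui: bytes, subtype: int, payload: bytes) -> bytes:
        """Build an LLDP organizationally specific TLV (TLV type 127).

        Header is a 7-bit type plus a 9-bit length, followed by a 3-byte OUI,
        a 1-byte subtype, and the organization-defined payload.
        """
        value = oui + bytes([subtype]) + payload
        header = (127 << 9) | len(value)  # type 127, length of the value field
        return struct.pack("!H", header) + value

    # Placeholder values only -- not the real Fabric Attach TLV contents.
    tlv = org_specific_tlv(oui=b"\x00\x00\x5e", subtype=1, payload=b"\x00\x64")
    print(tlv.hex())  # fe0600005e010064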

In short, Avaya has taken a link-layer protocol, customized it heavily, and allowed it to ask for network resources in an orchestrated fashion. It remains to be seen if this meets everyone’s definition of SDN, and it’s somewhat predicated on the ‘controller bottleneck myth’ that seems so pervasive in the wireless industry. I, for one, am very interested in seeing where this takes us over the next several years. Addressing distributed challenges at scale (such as provisioning resources) is a problem that has been solved in the wireless space for a long time – do it centralized and scale from the inside out. I look forward to seeing how (and if) Avaya can leverage this FA architecture across multiple platforms and vendors to create the foundational panacea that SDN promises.