Meraki in the Enterprise*

If two decades in the VAR space teach you anything, it’s that if you want to challenge yourself, you learn to say yes, and then qualify just how far that “yes” stretches for the question being asked. One of the most heavily asterisk’d answers I’ve had to give over the years is to the question, “Is Meraki right for the Enterprise?”, and I’m of the opinion that it’s high time we start leaving the asterisk off. Look, I’ve been a proponent of Meraki since before the Cisco acquisition, but let’s be honest, I’ve qualified that asterisk’d “Yes” in many ways over the years:

Is Meraki right for the Enterprise? Yes*

Over the years you may recall such greats as:

*But they don’t have external antennas, so challenging RF environments are out!

Shortly after the acquisition, Meraki did indeed roll out external antenna support for their indoor portfolio, and it’s been part of the lineup ever since!

*But they don’t have firmware update controls or release notes

Meraki now has one of the best software image management functions available. Or, if you’re like most folks, you just want the peace of mind of knowing you can refer to it after you’ve set it and forgotten it.
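If you prefer to keep an eye on that from code rather than the GUI, the public Dashboard API exposes the same information. Below is a minimal, read-only sketch using the open-source meraki Python SDK – the getNetworkFirmwareUpgrades call is from the public API, but the network ID and the exact response fields I pull out are assumptions for illustration:

```python
# Minimal read-only sketch (illustrative, not an official workflow):
# check current firmware and any scheduled upgrade for one network.
import os
import meraki

NETWORK_ID = "N_1234567890"  # hypothetical network ID

dashboard = meraki.DashboardAPI(os.environ["MERAKI_DASHBOARD_API_KEY"],
                                suppress_logging=True)

# Returns the maintenance window plus, per product family (wireless,
# switch, appliance, ...), the running version and any scheduled upgrade.
fw = dashboard.networks.getNetworkFirmwareUpgrades(NETWORK_ID)

print("Upgrade window:", fw.get("upgradeWindow"))
for product, details in fw.get("products", {}).items():
    current = details.get("currentVersion", {}).get("shortName")
    pending = details.get("nextUpgrade", {}).get("toVersion", {}).get("shortName")
    print(f"{product}: running {current}, scheduled upgrade: {pending or 'none'}")
```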

*But they don’t have a way to turn off 2.4GHz radios or have RF profiles so challenging RF environments are still out!

Adoption of RF Profiles in the dashboard with no on-premises hardware changes or updates? All delivered seamlessly as part of a dashboard update – and you didn’t have to do anything to get it.
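For the API-inclined, here’s roughly what that looks like in code. This is a sketch using the meraki Python SDK and the public createNetworkWirelessRfProfile endpoint; the network ID, profile name, and per-SSID settings shown are illustrative assumptions, not a recommendation:

```python
# Sketch: create an RF profile that steers a given SSID to 5 GHz only,
# effectively taking the 2.4 GHz radio out of play for that SSID.
import os
import meraki

NETWORK_ID = "N_1234567890"  # hypothetical network ID

dashboard = meraki.DashboardAPI(os.environ["MERAKI_DASHBOARD_API_KEY"],
                                suppress_logging=True)

profile = dashboard.wireless.createNetworkWirelessRfProfile(
    NETWORK_ID,
    name="5GHz-only-lab",          # hypothetical profile name
    bandSelectionType="ssid",      # make band decisions per SSID
    perSsidSettings={
        # SSID slot 0 serves clients on 5 GHz only
        "0": {"bandOperationMode": "5ghz", "bandSteeringEnabled": False}
    },
)
print("Created RF profile:", profile["id"])
```

Once created, the profile still has to be assigned to APs (or set as a default), just as it would be if you had built it in the GUI.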

*But they don’t have high gain antennas so challenging RF environments are still still out!

With Smart Antennas, Meraki uses the same RP-TNC connectors as other antennas, yet this hardware-level innovation never got much attention despite enabling more complex designs with regulatory compliance assurance built in.

*But the operating temperature of the AP is on the weak side and I’m concerned that they’ll melt in the rafters long term (or really any other AP centric “quality concern” that could be raised) so challenging RF environments are still still still out!

Meraki adopting the Cisco AP portfolio for a unified approach to the hardware handily assuaged *many* concerns throughout the industry. Meraki APs were fine before, but this makes it easy to reconcile for the field: if you can put a Cisco AP in a challenging RF environment, you can put a Meraki AP there now (since they’re the same thing)!

*But RRM! The Cloud simply can’t do as good of a job at challenging RF environments as my WLC’s RRM, so challenging RF environments are still still still still out!

At Mobility Field Day, Cisco showed off a unified RRM control plane: a standalone RRM function that either your on-premises controllers or your Meraki dashboard can consume. The goal is that regardless of the management platform (DNA Center or Meraki dashboard), you can adopt an RRM system that works consistently across both.

*But centralized Data Plane/VLANs/tunneling/<other WLC centric concern here> means I have to abandon the Meraki dashboard, so clearly campus enterprise is out!

Well, this is one that has been sticking in my craw for years, I’ll admit. It wasn’t until last year’s Catalyst portfolio transition for the Meraki wireless offering that I had any hope we’d see what I call “good” guidance on the topic. This has always left Campus Enterprise customers out in the wind to some degree or another. That was, until I saw with my very own eyes Meraki pulling a Catalyst 9800 WLC into the dashboard for monitoring. Now, there’s a lot to unpack here, and when I first saw this I had about a billion questions. Like:

1) What do you mean I can have my 9800 controllers in the Meraki cloud?

This one I just had to have clarified – it finally delivers on a huge part of the Meraki/Catalyst integration, and yes, you can simply add your WLCs to the dashboard. You then get AP insights (they look just like a Meraki AP!) and, of course, all of that delicious client info. In short, add your WLCs to the dashboard and you instantly have massive insight into your clients and infrastructure – all without making any changes on your WLC!
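To be clear, the onboarding itself happens in the dashboard UI, but once those APs and clients show up in your organization, the standard Dashboard API can read the same inventory and client data back out. A minimal sketch with the meraki Python SDK, using hypothetical org and network IDs:

```python
# Sketch: read back the device inventory and recent client list that the
# dashboard surfaces once your wireless estate is visible to it.
import os
import meraki

ORG_ID = "123456"            # hypothetical organization ID
NETWORK_ID = "N_1234567890"  # hypothetical network ID

dashboard = meraki.DashboardAPI(os.environ["MERAKI_DASHBOARD_API_KEY"],
                                suppress_logging=True)

# Every device the organization can see, APs included.
for device in dashboard.organizations.getOrganizationDevices(
        ORG_ID, total_pages="all"):
    print(device.get("model"), device.get("name"), device.get("serial"))

# Clients seen on the network over the last 24 hours, with usage counters.
for client in dashboard.networks.getNetworkClients(
        NETWORK_ID, timespan=86400, total_pages="all"):
    print(client.get("description"), client.get("ip"), client.get("usage"))
```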

2) I’ve been in the dashboard, and there’s no way I can configure all the same things as I need on my 9800 there. How limited am I going to find myself here?

Well, the good news on that front is that, for now, this is monitoring only – you retain all of your 9800 functionality and still do all of your configuration on the 9800 directly. There’s no loss of features, since there really isn’t much to do from the dashboard once you’ve added it other than bask in the glory of the insights!

3) Monitor only? What if I want to configure my 9800 from the dashboard and I can’t? 

Well, yes – that is the current state of the integration – but believe me when I say the trajectory Cisco laid out at MFD is clear: Meraki and Catalyst everywhere that DNA Center is “too much”. Based on the features already integrated into the dashboard across the EN portfolio (especially on the MS and MR platforms), I think it’s safe to assume Cisco isn’t done here.

Regardless of where you stand on your definition of Enterprise requirements (campus or otherwise), it’s clear that if you haven’t been watching the Meraki integrations happening over at Cisco, you really need to spend some time making sure that any of those pesky preconceived blockers you may have had in the past aren’t quite so show-stopping nowadays…

So long, and thanks for all the great times

Cisco posted the EOS notices for their stalwart Wireless LAN Controllers yesterday, covering the 5520 and the 8540 (and VM). This, coupled with the EOS notice for the 3504 model just the week prior, marks the end of all of the hardware/virtual AireOS controllers from Cisco. It’s worth noting that the embedded AireOS (called Mobility Express) is not included in this month’s announcements. Mobility Express aside, this marks the end of an era that began with the Aironet acquisition by Cisco in 1999. Twenty-two years of service out of an acquisition is a pretty good run if you ask me. As I reflect on the past two decades, we’ve seen a ton of changes – not only on the Cisco front, but industry wide. We saw 802.11 evolve from hotspot networks of convenience to mission-critical, redundancy-focused, pervasive solutions that our business-critical applications rely on. We’ve also seen an industry where every single enterprise-WLAN-only manufacturer has been absorbed by those looking to address “access layer” technologies all in, regardless of physical medium. We saw Cisco mature the Wi-Fi portfolio with some pretty significant milestones:

  • Migration of APs running VxWorks to Cisco IOS
  • Cisco acquiring Meraki for their Cloud infrastructure offering
  • Rolling some pretty awesome tech from Navini into the core product offering (beamforming)
  • Turning “real” spectrum analyzers from Cognio into everyday table stakes (CleanAir still can’t be beat!)
  • 26 major WLC releases in the AireOS family (more on this below)
  • Converged Access (although we largely gloss over this milestone)
  • Cisco APs migrating from IOS to AP-COS (with its heritage in ClickOS from the Meraki acquisition)
  • WCS to NCS to Prime Infrastructure to DNA Center management platforms
  • Merchant silicon from the 1242 days, to custom silicon in Marvell radios, and back again to QCA/BCM-based solutions intermixed with custom RF ASICs
  • Driving fixes back into 802.11 through custom Wi-Fi extensions in the CCX program (802.11r and others)
  • Countless forays into industrial and outdoor Wi-Fi solutions along with some pretty cool innovations (PRP over Wi-Fi, FSR, MRC, and on and on…)
  • Cisco transitioning their core WLC architecture over to IOS-XE and not screwing it up (frankly, like everyone expected them to do)

I’ll admit, it’s easy to beat up on Cisco – they’re a large target – but the fact remains that a very large percentage of the Wi-Fi in the world today is driven by AireOS networks, and it’s worth pausing for a moment to acknowledge that Cisco devoted well over two decades of development and maturity to the product. Since we’re looking at the close of a generation, I wanted to share a list I’ve been working on for some time now that marks each and every AireOS code name and the version/release it went with. It’s well known that the AireOS founder is enamored with wineries, so every major release has been named after a winery – and here they all are, in alphabetical order:

Release   Version   Code name
A         3.2       Amberhill
B         4.0       Beringer
C         4.1       Concannon
D         4.2       D-Cubed
E         5.0       Edgewood
F         5.1       Franciscan
G         5.2       Grgich Hill
H         6.0       Heitz
I         –         Never built
J         7.0       Jwine
–         7.1       Unnamed
K         7.2       Kenwood
L         7.3       LaReserve
M         7.4       Mosaic
N         7.5       NineHills
O         7.6       Oakcreek
P         8.0       Pineridge
Q         8.1       Quintessa
R         8.2       Riesling
S         8.3       Sherry
T         8.4       Testarossa
U         8.5       Uva
V         8.6       Veuve
W         8.7       Wente
X         8.8       Xurus
Y         8.9       Yara
Z         8.10      Zucca
Not sure why, but this fascinates me – 8.10 being the last release, did they run out of letters or wineries?

When Cisco launched the Catalyst 9800 almost two years ago, it was well acknowledged that they actually delayed the release more than once to allow time for the product to mature – integrating two decades of features into a new product takes time, and I must admit, Cisco has done a pretty fantastic job of keeping new features rolling over the past two years on both the AireOS and Catalyst 9800 platforms – something that’s difficult to do (especially as we reflect on Converged Access). With this weekend’s announcements, it’s safe to say that new APs from this point forward will require Catalyst 9800 WLCs. Consider yourself warned, especially as we look into 2021 and 2022 with everyone’s eyes forward on getting to 6 GHz (Wi-Fi 6E). If you’re still on AireOS, regardless of where you may be in its lifecycle (which has been significant), the not-so-new-anymore kid on the block is the Catalyst 9800 WLC. I won’t gush on endlessly about what others have already written, but suffice it to say, if you’re not getting on the 9800 bandwagon, you’re being left behind. Get up on the IOS-XE based 9800 sooner rather than later and start understanding what your migration looks like, especially around which AP models are supported. Check out the EOS notices for the 3504, 5520, 8540, and Virtual WLC at these links, and check out some of the CCIE preparedness videos I helped with here. Regardless of where you’re at on your journey, if you’ve got virtualization resources available to you, you really should be running a 9800 in a lab – or really anywhere you can.

Analyzing analytic offerings

In case you’ve been living under a rock recently, the calm before the 802.11ax storm increasingly seems to be filled with talk of Wi-Fi Assurance and/or Analytics – in particular, how is your Wi-Fi network performing and how happy are your clients (devices, not users)? Most solutions on the market leverage a healthy dose of buzzwords to answer this question – most notably Machine Learning (ML), Artificial Intelligence (AI), Big Data, and don’t forget Cloud – to make you, the consumer, feel like you’re genuinely on the bleeding edge of what a health-related system can give you. It struck me during the recent MFD3 event that each of these solutions has a different way of approaching the Assurance/Analytics problem, and of course each touts theirs as being ‘the best’ way to get all of the data needed to give you actionable insight. Here is my take on the pros and cons of some of the leading/competing solutions:

1) Mist Systems

Mist Systems claims to be the First & Only AI-Driven WLAN – a bold statement indeed! Their primary source for retrieving statistics about user performance is directly inline from the AP. This ‘at the edge’ approach gives them deep insight into the radio and first-hop performance of applications on their network. With a healthy punting of metadata to the Cloud, they claim to achieve “Automation & Insight through AI”.

Pro: A great example of ‘Cloud enabled’ Analytics, and they genuinely seem to be hyper-focused on WLAN performance.

Con: Requiring Mist infrastructure means rip & replace for many organizations. Being hyper-focused on WLAN hardware leaves many organizations splitting their LAN infrastructure between vendors and that certainly diminishes the ‘one throat to choke’ troubleshooting. Visibility is at the AP layer only, ultimately leading to assumptive troubleshooting when issues outside of their visibility arise. Being a nascent company (and one of the last WLAN-only players) makes me wonder how long before they’ll be acquired.

Consumption: Cloud with a premium capex spend as well as ongoing required opex.

Bold claims!

2) Cisco Meraki

Since being acquired by Cisco in November 2012, Meraki has continued to deliver on bringing features to market through their flagship product, the Meraki dashboard. The closest anyone comes to a ‘single pane of glass’ management portal, Meraki continues to shine for those Cloud-friendly organizations that place a premium on a single point of administration for their network. Generally, these tend to be the highly distributed organizations as opposed to the campus enterprise. Meraki’s ‘Wireless Health’ feature is in beta now and was ‘automagically’ delivered to existing customers.

Pro: Meraki’s agile product development targets the 80/20 rule pretty squarely. It’s ‘good enough’ for a lot of folks, and it’s ‘free’ to existing customers (if you don’t consider opex an expense, of course).

Con: Wireless Health is Wi-Fi only – there’s no end-to-end correlation with their switches or security appliances, which fragments the message around full-stack solutions. While focusing on making an ‘okay for most’ product, they certainly lose out on much of the deeper technical data commonly found in some of the larger platforms.

Consumption: Cloud with a premium capex spend as well as ongoing required opex (free to existing paying customers).

Slap a beta logo on it, call it good!

Wireless Health from Meraki

3) nyansa

Arguably *the* pioneer in Wi-Fi Assurance and Analytics, nyansa was founded in 2013 and has a head start on most of the players in the market. Interestingly enough, nyansa is the only player in this space that doesn’t manufacture hardware to pitch at you; instead, they work with an ever-growing number of existing infrastructure providers (including most of the major ones!). Leveraging an onsite ‘crawler’ to gather the data and punt metadata to the Cloud, the onsite components are generally lightweight, and assuming you’re already a VM-friendly organization, there are no real hardware requirements (and no ripping and replacing of APs).

Pro: They’ve been at it longer than anyone else and are clearly ahead of the game. They accept data from a variety of network sources, including your LAN infrastructure, so they’re likely to pinpoint issues more accurately than a Wi-Fi-only solution. Being able to ‘compare’ your data to peers of your own ilk is an interesting proposition and clearly one of the premier features they hang their hats on.

Con: Having an analytics-only platform that’s not tightly coupled with your infrastructure leads me to wonder about the long-term stickiness of the solution. The perceived high cost of the solution has led many to ‘deploy, diagnose, then remove’ – very much defeating the long-term goals of Analytics and Assurance platforms. Ongoing success when ‘all is good’ is very hard to demonstrate, and the vendor-neutral approach leaves them vulnerable.

Consumption: Primarily an opex play since there isn’t really a capex component to speak of (no APs or appliances to install).

That's not creepy at all.

nyansa

4) 7signal

7signal has been fairly quiet on the Assurance front as of late, but they’re worth a mention. The pioneer in sensor-driven testing, their pitch from day one has been hanging an ‘eye’ that connects to your network and measures/gathers various statistics about how well it’s performing. Falling more on the ‘stats digestion’ side of the house rather than the ML/AI side of the spectrum, 7signal is worth noting for synthetic testing that closely mimics what a client sees on the network.

Pro: Client-first is the best way to view the network, and a sensor (or an agent embedded in a client) is the only way to get this data.

Con: Having *only* client data means that correlation has to happen in a guesswork fashion. Couple that with a difficult install and a user interface that could stand a healthy dose of sprucing up, and the platform overall is feeling pretty stale.

Consumption: Capex spend for the sensors and ongoing support and maintenance. On premises deployment model with ‘lightweight-at-best’ analytics.

5) Aruba

Aruba acquired Rasa in May of 2016 to become part of the Aruba Clarity team. They’ve since changed gears and are rolling the Rasa features into NetInsight. They’ve been relatively quiet on the productization front, opting instead to show it off at events like Aruba Atmosphere and Mobility Field Day. They get some interesting insights out of the education campus use case they show, but I’ve not seen any readily actionable insights that don’t require Data Scientist-level queries. They have the potential to move the needle in the industry here, but making it easy to use is clearly something they’re struggling with.

Pro: Buying a ready-made analytics company reduces their time to market, and clearly Aruba is moving aggressively to get into the analytics game. If you’re an Aruba Wi-Fi, AirWave, or Clarity/NetInsight customer, they have some big things in store.

Con: Today the data is clearly difficult to get at, usability leaves a lot to be desired, and it’s unclear where the platform is going. Between the legacy Clarity offering, the Rasa integration, NetInsight, and (don’t forget) the recent Niara acquisition on the security side, there are lots of moving pieces here, and Aruba will have to bring some quick clarity (hah!) to their consumption model.

Consumption: NetInsight productization is currently TBD, but I expect it will be Cloud-first, if not Cloud-only by the time you can get your hands on a production ready solution.

Doing thoughtful things.

Thoughtful people

6) Cisco Enterprise

Cisco has been focused on DNA Center, the successor to the APIC-EM platform. The platform runs ‘apps’ on top, and one of the flagship applications shipping today is DNA Assurance. This is the ‘all-in’ Cisco assurance platform that takes data from everywhere you can think of – NetFlow feeds from your WLC and/or switch, radio data from the AP, synthetic data from sensors, and feedback from actual clients. In short, they take the best of all worlds and attempt to lump it into one big platform without giving people the heebie-jeebies about their data being in the Cloud.

Pro: Ambitiously, Cisco is taking the ‘whatever you can feed me’ approach to Analytics and Assurance. The more feeds you can send it, the better. This allows organizations to deploy the solution components that make sense to them and add more later if they want improved fidelity. Deploying an Analytics platform that you can actually run onsite in a 1RU appliance is no small feat and will be an undoubted boon for the Cloud-averse.

Con: All of that horsepower isn’t cheap. Couple that with Cisco’s somewhat tarnished reputation of late around code quality, and some people get nervous about ‘one box to rule them all’ – though this should generally be a mitigated concern for out-of-band analytics. Of course, this all works best if you’re Cisco end to end, and that could be perceived as a negative by some.

Consumption: On premises hardware appliance fed by Cloud updates for the applications. Your Cisco ONE licensing consumption model and Smart Licenses will be key to getting this off of the ground, but so far there is no ‘break if you don’t pay’ approach.

I hope this roll-up was a useful overview of the Analytics and Assurance market as it sits today. Did I miss anyone? Let me know and I’ll try to get a summary up for you ASAP!

Meraki gets smart

I’m a fan of antennas. They’re pretty awesome components of Wi-Fi networks and I think they’re one of the most under-appreciated and oft-overlooked components, so when someone introduces a new antenna related technology, I tend to sit up and take notice!

Recently, Meraki released their new external antenna model APs, the MR42E and MR53E. In the past, if you needed antenna flexibility in a Meraki solution, you had to use their outdoor-rated AP. This introduction, in addition to rounding out their AP portfolio, snuck a new innovation into the market that Meraki has dubbed ‘Smart Antennas’. With the promise of auto-identifying an antenna to the AP, I couldn’t not know more about it! One of the more notable aspects of using external antennas is the potential risk of exceeding regulatory compliance. While not terribly complex, getting it wrong could see the Feds breathing down your neck – and nobody wants that! In addition to self-identification for compliance reasons, the new models of APs include more connectors than one might otherwise expect – 5 connectors for the MR42E and 6 for the MR53E! This breaks down to 3 Wi-Fi antennas, 1 security/scan antenna, and 1 BLE/IoT antenna for the MR42E, and the same complement on the MR53E with one more Wi-Fi antenna to support that 4th spatial stream. Without delving into each individual component, I really wanted to get a feel for whether this thing did what it promised, so I hooked them all up to their respective ports:

That’s a lot of cables!

I fired up the AP, claimed the hardware in my dashboard account, and went poking around in the antenna settings. Sure enough, where you would normally define an antenna, the exact model number of the antenna array I had was shown!

The cloud got it right!

Hoping it wasn’t a fluke of some sort, I powered off the AP, disconnected them all, and tried again. Sure enough, this time the dashboard presented me with the expected drop-down list of available antennas.

The cloud still wants to help out.

I was impressed – it was magic, it worked automatically and wonderfully – and I had to know how. One screwdriver later (the tool, not the drink), I had done the unthinkable and performed the ill-advised dissection of the shiny new antenna, looking for something out of place:

No stranger to the inside of an antenna, the culprit jumped out at me pretty readily:

What appears to be a Maxim Integrated DS2431 1-Wire EEPROM was sitting inline just before an antenna element. I traced it back to the connector and found it belonged to the externally labeled IoT connector:

So, I dutifully connected just the IoT port to the AP, fired it up, and voilà! The dashboard indicated that the antenna was identified properly despite the fact that only 1 of the 6 connections was attached. This seems to reinforce that Meraki has indeed found a pretty intuitive way to integrate a digital component onto an analog line (as opposed to Cisco, which has actual digital connectors in the DART) for a one-time poll of the antenna ID. This was further reinforced by booting the AP without the IoT port connected (so it did not identify the antenna correctly) and then re-attaching it without powering down the AP. After a day of uptime, the AP never properly re-identified its antenna. (A conceptual sketch of how such a 1-Wire EEPROM presents itself to a host follows the checklist below.) This means that, if you’re using the Meraki smart antenna solution:

  1. Make sure the antenna cables are attached to the proper ports using the silkscreen indicators on the RP-TNC connectors
  2. If you change any antenna connections (especially the IoT port), reboot the AP so the antenna can properly identify itself to the AP, and subsequently the cloud
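For the curious, the sketch below shows roughly what a DS2431-class part looks like to any host with a 1-Wire master attached: the device enumerates under its family code (0x2d) and exposes a small EEPROM for a one-shot read. This is purely an illustration of the mechanism on a Linux box with the w1 kernel modules loaded – it is not Meraki code, and the AP obviously does its own equivalent internally:

```python
# Illustration only: enumerate DS2431-style 1-Wire EEPROMs via the Linux
# w1 subsystem (w1-gpio / w1_ds2431) and dump their contents. An antenna
# vendor could burn a model/ID string into exactly this kind of part.
from pathlib import Path

W1_DEVICES = Path("/sys/bus/w1/devices")

# The DS2431 family code is 0x2d, so slaves appear as "2d-<48-bit serial>".
for dev in sorted(W1_DEVICES.glob("2d-*")):
    eeprom = (dev / "eeprom").read_bytes()        # 128-byte user EEPROM
    ident = eeprom.split(b"\x00", 1)[0].decode("ascii", errors="replace")
    print(f"{dev.name}: {ident!r}")
```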

It remains to be seen what kind of ecosystem Meraki intends to develop with 3rd party antenna developers, but rest assured, if you want to use a 3rd party antenna today on these new Meraki APs, you certainly can – you just need to log into the dashboard and pick the Meraki antenna that most closely matches the gain of your 3rd party antenna.

The Cloud giveth, the Cloud taketh away

We all love ‘The Cloud’. It’s flexible, fast, always (mostly) available, and takes our business agility to heretofore unknown heights – but what happens when the service you’re using in the cloud goes a different direction than you need or want it to?

Meraki has been touting Cloud flexibility as *the* single most important reason to move to their infrastructure management platform. This brings with it a whole host of great things like access-anywhere management, rapid feature development, and a whole new paradigm of how to configure your infrastructure equipment. In one move, Cisco has rocketed past the CLI-based days of old, past ‘here’s a pretty GUI’, to 100% web-driven, ‘don’t worry your pretty little head about it’ dashboards for everything from configuration and monitoring to troubleshooting and deployment. It works and it works well.

Today marks the closing of Copy – a Cloud-based file sync service from Barracuda – and it got me thinking. When someone shutters their doors and it’s ‘just files’, you go to another Cloud-based service provider – in this case Dropbox or box.com. What happens when/if Meraki goes away? Okay, they’re under the wing of big-brother Cisco now, so the chances of that happening are basically nil, but what if you ratchet that concern back a notch? What if they make a change you don’t like? What about ‘perpetual beta’ features such as Remote Control, which has been in beta since prior to the Cisco acquisition? What happens if you don’t pay your bill? Those of us familiar with Cloud services like Office 365 know that when you stop paying, you stop playing, and for software-based services (like Copy today) that doesn’t seem to be as big a deal to most people. What happens when that service is your network?

Remote control

Perpetual Beta features

When Meraki adds a new feature to their product, the Cloud enables rapid deployment of that feature. This is good. What happens when they remove a feature you use, such as WAN Optimization? As you can see here, Meraki decided to retire what they perceived to be either a little-used feature or a feature that was too difficult to maintain to keep functioning properly.

WAN Opt

WAN Optimization, gone baby, gone!

What happens when Meraki decides to artificially cap the performance of your router (intentionally or unintentionally) to 50M?

Z1 Cap

Astute reddit users, always on the lookout.

While the WAN Optimization removal is clearly an intentional move and the Z1 cap is clearly unintentional, both raise very significant questions about allowing someone else to be the ultimate authority over the features deployed on hardware you’ve purchased. What is your recourse when this happens? Open a support ticket? Make a wish? Roll back the firmware (hah!)? With no fail-safe mode of operation by design, when you lock yourself into a Cloud-based infrastructure product, you are ultimately at the mercy of using features how and where the vendor determines they are best suited. Your only recourse is to scrap your gear if they make a decision to go in a direction you can’t support. What is the environmental impact of this business model? How many Cloud-only products end up in landfills because of expired licenses? How much eWaste is generated because the product has stopped functioning (not through MTBF, but through intentional crippling in code)? You used to have options like Cucumber Tony and OpenWRT, but apparently Meraki has closed the technical loophole that those folks used on the MR-12 and MR-16 Access Points by way of a Trusted Platform Module.

What is your take on Meraki and other Cloud-based services that you operate your business with? Cloud-based products are great and work as designed – but is loss of features something you consider prior to investing in a solution? Does your organization rely on perpetually beta features that never seem to make it into production? Has a feature been ‘pulled out from underneath you’? What are you doing with that old AP/switch/firewall that is perfectly good hardware but whose license you let lapse? Inquiring minds want to know – please leave me a comment and let me know how you and your organization handle this kind of quandary!

Meraki: The bolt on Cloud that wasn’t

When Cisco acquired Meraki last year, there was much confusion. Being ‘down in the trenches’, I struggled as much as the next guy trying to wrap my head around the acquisition, and I believe I now have a good handle on it. Others, not so much. I regularly consult with customers that are just as confused today as they were last year. Cloud is such an overused buzzword, and so many vendors are trying to jump on the buzzword bandwagon du jour, that it’s easy to get lost amidst the jargon and solutions, much less the technical merits or differences in the platforms. I’m here to offer some advice on the strategy and perhaps a perspective on the acquisition that you haven’t yet considered. First, some advice:

Don’t purchase Meraki Access Points. You read that right. Don’t do it. Also, don’t purchase Meraki switches. For that matter, don’t buy the Meraki firewall either. If you purchase a Meraki Access Point, a Meraki switch, or a Meraki firewall, you’re not buying an Access Point, you’re not buying a switch, you’re not buying a firewall. You’re buying ‘The Cloud’. When you consider purchasing infrastructure equipment that is ‘Cloud Enabled’, this should be a purchase that lines up with your organization’s Cloud Strategy first and foremost. Don’t have a Cloud Strategy? Don’t be so sure. There are a few questions to ask yourself before you jump to that conclusion. Does your organization use Dropbox? Salesforce.com? Office 365? Webex or GoToMeeting? Google Mail? All of these are examples of Cloud Applications. If you use these, someone, somewhere in your organization has made the determination to embrace services from ‘The Cloud’. Understand this strategy. Understand what it enables. Understand what it means for your data and where your data lives. Then (and only then) should you consider purchasing ‘Cloud Managed Infrastructure devices’.

Let’s be frank about it, there’s nothing special about the hardware in a Meraki Access Point. There’s nothing special about the hardware in a Meraki Switch, nothing special about the hardware in a Meraki firewall. When you purchase Meraki equipment, this gear is purpose built to be Cloud Managed with features driven by that Cloud Management. When you make a Meraki purchase, purchase an end-to-end Cloud-enabled infrastructure. If it’s right for one component, it’s right for all of them. If it’s not right for all of them, it’s not right for any of them.

Now some perspective. Everyone is talking about Cloud. Everyone wants in on the Cloud action. Everyone is ‘bolting on’ Cloud to their existing products in some fashion or another. When Cisco purchased Meraki, they made a decision to not ‘bolt on’. They decided to pick the one organization that understood Cloud from bottom to top and embrace that strategy despite the fact that there was some hardware overlap. The Meraki acquisition wasn’t about Access Points, switches, or firewalls. It was about finding the one organization that was never built for ‘on premises’ management and this shines through in every aspect of their products. Others tout ‘free protocols’, ‘cloud provisioning’, or a variety of other nonsense but at the end of the day, these are bolt-on solutions that are all afterthoughts. I would encourage you to revisit the Meraki product portfolio but when you do, ask yourself the following questions:

  • What are my existing Cloud Applications?
  • How do I rely on ‘the Cloud’ today?
  • Do I want to leverage that existing strategy in my infrastructure?
  • Do I want a solution that is built from the ground up around ‘the Cloud’ with a no-compromises featureset or do I want to deal with someone bolting on features to their existing ‘heavy gear’?

Then go buy a Meraki AP.