Bringing Wireless Back into the Fold

I'm sitting in the airport in Barcelona, having just had an amazing week of conversations ranging from potentially core-belief-shattering to crazy ideas for puppet shows. The best part of these events, for those of us who are social, is the ability to interact in meatspace with people we've already "known" for a period of time on Twitter. I had the pleasure this week of hanging out with such luminaries of the networking social scene as Tom Hollingsworth (@networkingnerd), Amy Arnold (@amyengineer), Jon Herbert (@mrtugs), Ethan Banks (@ecbanks) and, not to be left out of any conversation, Mr. Greg Ferro.

 

There were a lot of great conversations, and more than a couple of Packet Pushers shows recorded during the week, but the one that's sticking in my mind right now is a conversation we had around policy and wireless. This has been on my mind for a while now, and I think I've finally thought it through enough to put something down on paper.

Before we get started, I think it's important that everyone understand that I'm not a wireless engineer. I'm making some assumptions here, and I'm hoping they will be corrected in the comments if I'm headed in the wrong direction.

 

Wireless: The original SDN

So in many ways, wireless networking folks have been snickering at us wired lovers for our relatively recent fascination with SDN. Unsurprisingly, I've heard a lot of snark on this particular subject for quite a while. The two biggest examples being:

  • Controller-based networking? That's sooooooo 2006. Right?
  • Overlays? We've been creating our own topologies independent of the physical layout of the network for years!!!!

 

I honestly can't say I disagree with them in principle, but I love considering the implications of how SDN, combined with the move to 802.11ac, is going to really cause our worlds to crash back together.

 

I'm a consumer of wireless. I love the technology and have great respect for the witchdoctor network engineers who somehow manage to keep it working day-in and day-out. I'm pretty sure that where I have blue books on my bookshelf, they have a small altar to the wireless gods. I poke fun, but it's such a different discipline, requiring intense knowledge of the transmission medium, that I don't think a lot of wired engineers can really understand how complicated wireless can be, or how much of an art form creating a good, stable wireless design actually is.

On a side note, I heard this week that airplane manufacturers actually use sacks of potatoes in their planes when performing wireless surveys, to simulate the conditions of passengers in the seats. If that doesn't paint a picture of the differences with wireless, I don't know what does.

 

The first wireless controller I had a chance to work with was the Trapeze solution, back in approximately 2006. It was good stuff. It worked. It allowed for centralized monitoring, not to mention centralized application of policy. The APs were 802.11g and it was awesome. I could plug in an AP anywhere in the network and the traffic would magically tunnel back to the controller, where I could set my QoS and ACLs to apply the appropriate policies and ensure that users were granted access and priority, or not, to the resources that I wanted. Sounds just like an overlay, doesn't it?

In campus environments, this was great. An AP consumed a theoretical bandwidth of 54 Mbps and we had, typically, dual gig uplinks. If we do some really basic math here, we see something like the following equation:

$$\frac{2 \times 1000\ \text{Mbps (dual gig uplinks)}}{54\ \text{Mbps per AP}} \approx 37\ \text{APs to saturate the uplinks}$$

 

Granted, this is a napkin calculation to make a point, but you can see it would be REALLY hard to oversubscribe the uplinks in this kind of scenario. There weren't that many wireless clients at the time, AP density wasn't that bad, 2.4 GHz went pretty far, and there wasn't much interference.

Now run the same math with 802.11ac Wave 1 APs at roughly 1.3 Gbps apiece:

$$\frac{2 \times 1000\ \text{Mbps (dual gig uplinks)}}{1300\ \text{Mbps per AP}} \approx 1.5\ \text{APs to saturate the uplinks}$$

 

Hmmm… things look a little different here. Now, there are definitely networks out there that have gone to 10 Gb connections between their closets in the campus, but a substantial number are still running dual gig uplinks between their closets and their core switches. I've seen various estimates, but consensus seems to suggest that most end-stations connected to the wireless network consume, on average, about 10% of the actual bandwidth, although I would guess that's moving up as rich media (video) gets more and more widely used.

Distributed Wireless

We've had the ability to let wireless APs drop wireless client traffic directly onto the local switch for years, although vendors have implemented this feature at different points in their product life cycles; I think it's safe to say it's a me-too feature at this point. I don't see it implemented that much, though, because, in my opinion, having a centralized point in the network, a.k.a. the controller, where I can tunnel all of my traffic gives me a single point at which to apply policy. Because of the limited bandwidth, we could trade off the potential traffic trombone of wireless going back to the controller to access local resources for the simplicity of centralized policy.

Now that a couple of 802.11ac access points can potentially oversubscribe the uplinks on your switch, I think we’re quickly going to have to rethink that decision. Centralized policy not only won’t be worth the cost of the traffic trombone, but I would argue it’s just not going to be possible because of the bandwidth constraints.

 

I'm sure some people who made the decision to move to 10 Gb uplinks will continue to find centralized policy to be the winner of this trade-off, but for a large section of network designers, this just isn't going to be practical.

Distributed Policy

This is where things start to get really interesting for me. Policy is the new black. Everyone's talking about it: Promise Theory, declarative intent, Congress, etc. There are a lot of theories and ideas out there right now and it's a really exciting time to be in networking. I don't think this is a problem we'll solve overnight, but I think we're going to have to start laying the foundation now, with more consistent designs and configurations that give us a consistent, semi-homogeneous base to build on when the policy discussion starts resulting in real products.

What do I mean by this? Right now, there are really two big things that will help drive this forward.

Globally Significant, but not Unique VLANs

Dot1x, or more accurately the RADIUS protocol, allows us to send back tunnel attributes in the RADIUS response that correspond to a VLAN ID (name or dot1q tag are both valid). We all know the horrors of stretched VLANs, but there's no reason you can't reuse the same VLAN number in various places in the network, as long as there's a solid L3 boundary in between them and they're assigned different L3 address space. This means we're going to have to move back towards L3 designs and turn to configuration tools to ensure that VLAN IDs and naming conventions are standardized and enforced across the global network policy domain.
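As a concrete sketch of what that RADIUS response can look like: these are the standard RFC 3580 tunnel attributes, shown here in FreeRADIUS users-file syntax (the group name and VLAN name are placeholders of mine, not from any particular deployment):

```
# Minimal FreeRADIUS users-file sketch: these three reply attributes
# together tell the switch which VLAN to place the session in.
# "staff" and "STAFF-VLAN" are hypothetical names.
DEFAULT  Ldap-Group == "staff"
         Tunnel-Type = VLAN,
         Tunnel-Medium-Type = IEEE-802,
         Tunnel-Private-Group-Id = "STAFF-VLAN"
```

Because the attribute can carry either the VLAN name or the dot1q tag, keeping those names and IDs standardized across every closet (the configuration-tool job described above) is what makes the same reply valid anywhere in the network.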

Consistent Access Control Lists and QoS Policies

RADIUS can also send back an attribute in the response that tells the switch to apply a specific ACL or QoS policy to the authenticated connection for the lifetime of that session. Some vendors, but not all, allow for dynamic instantiation of the ACL/QoS policy, but most still require the ACL or QoS construct to already be present on the network device before the RADIUS policy can consume that object. This means we're going to be forced to turn to configuration management tools to make sure these policy enforcement objects are present in all of the network devices across the network, regardless of the medium.
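One standard attribute for this job is Filter-Id (RFC 2865), which hands back the name of a filter rather than its contents (vendors also have their own VSAs for the same purpose). A hedged sketch, again in FreeRADIUS users-file syntax with made-up names:

```
# FreeRADIUS users-file sketch: return the *name* of an ACL that must
# already exist on the edge switch; the switch enforces it for the
# duration of the session. "guests" and "GUEST-ACL" are hypothetical.
DEFAULT  Ldap-Group == "guests"
         Filter-Id = "GUEST-ACL"
```

This is exactly why the configuration management piece matters: if even one switch is missing an ACL named GUEST-ACL, authentication on that switch behaves differently from the rest of the policy domain.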

 

The future

I think we're swiftly arriving at a point where wireless cannot be designed in a vacuum as an overlay technology. The business needs policy to be consistently applied across the board, and bandwidth to be available and efficiently used. I don't see any way to do that without ensuring that we start to ignore the medium type a client connects on. On the bright side, this should result in more secure, more flexible, and more business-policy-driven connectivity in the coming years. I don't believe we'll be thinking about how the client connected anymore. We won't care.

 

Agree? Disagree? Did I miss something? Feel free to comment below!

@netmanchris

Adding Custom Device Fingerprints to the HPN BYOD Solution

So I was working with the new HP BYOD solution in my lab and I just didn't have enough wireless devices to really make it interesting. So I decided to look for other devices in my house that I could connect to the HPN BYOD-controlled, MSM controller-based wireless network.

I did find a Nintendo Wii, but we don't have fingerprints in IMC to properly identify it. I guess Nintendo didn't make the cut. (They don't even support WPA2 Enterprise!!!)

 

Anyways, the great thing about HP's new BYOD solution, based on IMC and UAM, is the ability for operators to extend the default fingerprints to devices beyond what shipped with the product. Although the process does require some knowledge of Wireshark, it's nothing that a little google-technician skill can't get you through. Adding the fingerprints was super easy.

Creating the foundation

So before we actually get to creating the fingerprints, we need to create the custom vendor, endpoint type, and OS type that we're going to assign to the DHCP and HTTP fingerprints we create. If you're doing this for a new smartphone, like the BlackBerry 10, you'll probably be able to skip this step, as RIM is already listed as a vendor. As you can imagine, Nintendo wasn't.

So let’s look at what the process looks like. 

Add Vendor

As you can imagine, there's no default vendor category for Nintendo, so I'm going to go into the Service >> User Access Manager >> Endpoint Identification Management >> Vendor screen and add a new vendor.

[Screenshot: adding Nintendo as a new vendor in IMC]

 

Add Endpoint Type

 

IMC ships with a bunch of endpoint types by default to cover all the normal devices you would see in a business environment. I don't see that many Wiis in offices these days, though, so we'll have to create this one too.

 

[Screenshot: adding the Wii endpoint type in IMC]

 

Add OS type

Again, no love for Nintendo in the OS department. Let's add that too.

 

[Screenshot: adding the Wii OS type in IMC]


Creating the fingerprints 

For those of you who don't know, IMC uses digital fingerprints to identify devices accessing the network. We use a combination of characteristics, mostly unique to one specific type of device, to make an educated decision on the model, operating system, and type of the endpoint accessing the network. The three types of fingerprints we can use are:

DHCP Fingerprint – In this option, IMC uses the options requested in the DHCP client option 55 field to identify the device requesting an address. The specific sequence and number of options are considered to be unique to that specific operating system; i.e., all Nintendo Wii consoles should request the same values, in the same order, in the option 55 field of the DHCP request packet. This is considered the most reliable of the fingerprinting techniques.

HTTP User-Agent – In this option, IMC uses the User-Agent portion of the HTTP request headers sent to the BYOD web server to identify the device requesting the webpage. As most browsers identify themselves through the HTTP User-Agent header, this is still a good method for making an educated decision.

 

MAC Address – In this option, IMC uses the MAC address, obtained through the RADIUS server, to identify the vendor based on the MAC address OUI. This is considered the weakest form of fingerprinting, but it's necessary, as some devices use neither a unique DHCP signature nor a web browser. An example of this might be an IP telephone or a printer.
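To make that last one concrete, here's a minimal sketch of an OUI lookup in Python; the table is a stand-in for the IEEE OUI registry, and both the OUI entry and the MAC address are made up for illustration:

```python
# Hedged sketch: vendor identification from the first three octets
# (the OUI) of a MAC address. OUI_TO_VENDOR stands in for the IEEE
# OUI registry; the entry here is for illustration only.
OUI_TO_VENDOR = {
    "00:1F:32": "Nintendo",  # assumed OUI, for illustration
}

def vendor_for(mac: str) -> str:
    """Return the vendor for a MAC address based on its OUI prefix."""
    return OUI_TO_VENDOR.get(mac.upper()[:8], "unknown")

print(vendor_for("00:1f:32:ab:cd:ef"))  # hypothetical Wii MAC
```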

 

So let's get started here and set up our first fingerprint.

Capture the DHCP fingerprint

This is where the nerdiness starts. I have a Windows Active Directory server serving up addresses for the network my Wii connects to, so I just installed Wireshark on the domain controller and started capturing packets.

Note: I use the display filter bootp.option.type == 53, which lets me see just the DHCP traffic. It cuts down on the packets I need to look through.

I turn on my Nintendo Wii and wait a few seconds for it to try to connect to the network.

 

[Screenshot: Wireshark capture of the Wii's DHCP request]

Now that I've got the packet, I need to look a little closer for the option 55 information. Option 55 is the DHCP Parameter Request List: an ordered list of the configuration options (subnet mask, router, DNS server, and so on) that the client is asking the server to supply.

You can see in the packet capture above that the option 55 parameter list has a length of 6, and the values are 1, 3, 6, 15, 28, and 33 (subnet mask, router, DNS server, domain name, broadcast address, and static route, respectively).
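If you'd rather script this step than scroll through Wireshark, here's a minimal sketch using the scapy library in Python (the pcap filename is a placeholder of mine):

```python
# Hedged sketch: extract the DHCP option 55 fingerprint from a capture.
# Requires scapy (pip install scapy); "wii-boot.pcap" is hypothetical.
from scapy.all import rdpcap
from scapy.layers.dhcp import DHCP

for pkt in rdpcap("wii-boot.pcap"):
    if DHCP in pkt:
        for opt in pkt[DHCP].options:
            # scapy decodes option 55 as ("param_req_list", [ints])
            if isinstance(opt, tuple) and opt[0] == "param_req_list":
                print(pkt.summary(), "->", opt[1])
```

For the Wii's request above, this should print the same 1, 3, 6, 15, 28, 33 sequence that Wireshark shows.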

 

Creating the DHCP Fingerprint

 

So now we go back to the IMC console and navigate to Service >> User Access Manager >> Endpoint Identification Management >> DHCP Character Identification Configuration.

Click the Add button and input the values from above.

 

[Screenshot: adding the DHCP fingerprint in IMC]

 

Now that we’ve got the DHCP Fingerprint, let’s go after the HTTP Fingerprint.

 

Capturing the HTTP User-Agent Fingerprint

This time, I'm running a packet trace from Wireshark loaded on my IMC machine (this is handy for a whole bunch of reasons). I use the Internet Channel on my Wii and attempt to log in to the IMC server. Now I check Wireshark again, this time using http as the filter. I could also add a filter for the specific host (ip.addr == 10.101.0.116), but in this case it's just as easy to re-sort by source and get to the right packet.

[Screenshot: Wireshark capture of the Wii's HTTP request, showing the User-Agent header]

There it is… “Nintendo Wii”.
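As an aside, if you don't have the BYOD portal handy to capture against, a few lines of Python make a perfectly good stand-in web server for harvesting User-Agent strings; this is a quick hack of my own, not part of the IMC solution:

```python
# Hedged sketch: a throwaway web server that logs each client's
# User-Agent, standing in for a capture against the real BYOD portal.
from http.server import BaseHTTPRequestHandler, HTTPServer

class UAHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Print the client IP and the User-Agent fingerprint material
        print(self.client_address[0], "->", self.headers.get("User-Agent"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# Point the device's browser at http://<server-ip>:8080/
HTTPServer(("0.0.0.0", 8080), UAHandler).serve_forever()
```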

 

Now that I've got the HTTP User-Agent signature, I can go back to IMC and add that in as well.


Creating the HTTP User-Agent Fingerprint

[Screenshot: adding the HTTP User-Agent fingerprint in IMC]

 

Putting it all together

So, we created the DHCP character identification as well as the HTTP User-Agent feature identification. Now we're going to connect the Wii to the BYOD-enabled wireless network, test this out, and see what our work has gotten us.

[Screenshot: the Wii identified and registered in UAM]

 

Fingerprinting successful. As you can see, the Nintendo Wii was identified by the DHCP client identifier and has been successfully registered in the endpoint MAC address management list in UAM.


The one other step I skipped here was adding a MAC address fingerprint, which would allow you to identify the device by its MAC address. To be honest, that doesn't require a packet trace, so I skipped it. What fun is something that doesn't require a packet trace?

 

@netmanchris

BYOD – The other implications

WARNING – MIDNIGHT POST.  I’ll come back and fix this in a couple of days, but it’s been banging around in my head and I needed to get it out.

 

So I'm going to get a little controversial here. I'm actually hoping to have my thought process attacked on this one. Hopefully not personally attacked, but I guess that's the danger of blogging.

 

Open disclosure: I don't work for Cisco. I guess that's why I can write this piece and think it through, as I've got nothing to lose here. I'm sure someone will point and say "Hey! HP GUY!", but I truly don't feel that whom I work for changes the power of this argument. Because some people get wrapped around those things, though, I wanted to state it loud and clear: I am an HP employee, but this blog is purely my own thoughts and musings and in no way represents those of my employer in any way, shape, or form. 🙂

 

So I was at HP Discover last week and had a chance to catch up with a TON of customers and partners, as well as have some great conversations with the independent bloggers. To be honest, those are my favorite, because they are the last people to drink the kool-aid. If you are trying to convince them of anything, you'd better have a well-constructed argument and proof to support it.

 

So the other topic on everyone's minds was, of course, BYOD: Bring Your Own Device. Other than OpenFlow and SDN, I think this is one of the most talked-about waves hitting our industry right now. Of course we had the usual discussions about access control, DHCP fingerprinting, user-agent fingerprinting, dot1x, web portals, etc., but we also got into some VERY interesting discussions about the greater implications of BYOD.

Now keep in mind, I'm an old voice guy too. My voice books are so old, they're actually blue, and not that snazzy purple color that you kids use to color-coordinate your bookshelves. I know what the SEP in the CallManagler stands for, and I remember when CCM shipped on CDs (yes, it actually did, kids).

 

So in some ways, I feel like I’m watching my past wash away when I type the following words.

Voice is dead.

Now it might be a few years before everyone realizes it, but there are a lot of forces going on in our industry right now and they seem to all be pointing to a place where handsets are obsolete.

The argument goes something like this

 

1) BYOD is here and it’s not going away.

2) If BYOD is here, then employees are probably teleworking and using their cell phones.

3) If employees are teleworking and using their cell phones, they don't need desk phones.

4) If employees don't need desk phones…. they don't need desk phones.

 

The implications of this really started to hit me, so I did a self-check and realized: I don't remember the last time I used a "normal" handset. I work out of a home office. I use a cell phone with unlimited calling.

Not to mention the fact that HP has hooked us up with Microsoft Lync, which means I plug in the headset and escalate that IM conversation to voice or video whenever I need to, with NO handset involved. Oh, and the Lync client for the iPhone was released too.

The last time I looked, this was approximately a $1-2B business for Cisco, so I'm fairly sure they don't want anyone to realize that investing in new handsets is probably not the wisest move right now. This is a billion-dollar market that they are going to have to replace with something else, or continue to milk for as long as they can.

Now, to be honest, there's always the call center argument, which I'll try to stop right now: call centers are not going away. There's always going to be a business need there. Voicemail systems? They might just become part of the cloud, I don't know. But traditional handset deployments? I think people just haven't realized they've been throwing money away.

 

On with the rambling midnight logic!

 

The extension to this logic is that if we're done with handsets, then why do we need all this PoE everywhere?

 

To be honest, I think the only phone that ever used anywhere close to the 15.4 watts of 802.3af was the Cisco 7970 series. Most other phones used 2-3 watts, maybe up to 7 with the speakerphone on. So the whole "I need all 24 ports running full 802.3af Class 3 devices at the same time" scenario is something that never actually happened (or at least I've never seen it).

Now we're seeing RFP disqualifiers requiring 740 watts per switch (full 15.4 watts on all 48 ports), and I'm sure we will soon be seeing new models coming out with 1,440 watts of PoE+ power!!! (30 watts per port on a 48-port switch).
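The napkin math behind those RFP numbers works out like this:

$$48 \times 15.4\,\text{W} = 739.2\,\text{W} \approx 740\,\text{W} \qquad 48 \times 30\,\text{W} = 1440\,\text{W}$$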

Now, PoE is an enabling tool, and we still need it for access points at the very least. But other than that? I can't name one practical business tool that runs on PoE right now that would not qualify as a corner case.

And I don't see anyone plugging 24 or 48 access points into the same switch.

 

I would love a sanity check here, guys. Is it just me? I'm making an informed prediction by gazing into a crystal ball. Feel free to let me know if my ball's broken. 🙂

 

@netmanchris