Bringing Wireless Back into the Fold

I’m sitting in the airport in Barcelona, having just had an amazing week of conversations ranging from potentially core-belief-shattering to crazy ideas for puppet shows. The best part of these events, for those of us who are social, is the ability to interact in meatspace with people we’ve already “known” for a period of time on Twitter. I had the pleasure this week of hanging out with such luminaries of the networking social scene as Tom Hollingsworth (@networkingnerd), Amy Arnold (@amyengineer), Jon Herbert (@mrtugs), Ethan Banks (@ecbanks) and, not to be left out of any conversation, Mr. Greg Ferro.

 

There were a lot of great conversations, and more than a couple of Packet Pushers shows recorded during the week, but the one that’s sticking in my mind right now is a conversation we had around policy and wireless. This has been on my mind for a while now, and I think I’ve finally thought it through enough to put something down on paper.

Before we get started, I think it’s important that everyone understand that I’m not a wireless engineer. I’m making some assumptions here that I’m hoping will be corrected in the comments if I’m headed in the wrong direction.

 

Wireless: The original SDN

So in many ways, wireless engineers have been snickering at us wired lovers for our relatively recent fascination with SDN. Unsurprisingly, I’ve heard a lot of snark on this particular subject for quite a while. The two biggest jabs being:

  • Controller Based networking? That’s sooooooo 2006. Right?
  • Overlays?  We’ve been creating our own topologies independent of the physical layout of the network for years!!!!

 

I honestly can’t say I disagree with them in principle, but I love considering the implications of how SDN, combined with the move to 802.11ac, is going to really cause our worlds to crash back together.

 

I’m a consumer of wireless. I love the technology and have great respect for the witchdoctor network engineers who somehow manage to keep it working day in and day out. I’m pretty sure that where I have blue books on my bookshelf, they have a small altar to the wireless gods. I poke fun, but it’s just such a different discipline, requiring intense knowledge of the transmission medium, that I don’t think a lot of wired engineers can really understand how complicated wireless can be, or how much of an art form creating a good, stable wireless design actually is.

On a side note, I heard this week that airlines actually use sacks of potatoes in their airplanes when performing wireless surveys, to simulate the conditions of passengers in the seats. If that doesn’t paint a picture of how different wireless is, I don’t know what does.

 

The first wireless controller I had a chance to work with was the Trapeze solution, back in approximately 2006. It was good stuff. It worked. It allowed for centralized monitoring, not to mention centralized application of policy. The APs were 802.11g and it was awesome. I could plug in an AP anywhere in the network and the traffic would magically tunnel back to the controller, where I could set my QoS and ACLs to apply the appropriate policies and ensure that users were granted access and priority, or not, to the resources that I wanted. Sounds just like an overlay, doesn’t it?

In campus environments, this was great. An AP consumed a theoretical bandwidth of 54 Mbps and we had, typically, dual gig uplinks. If we do some really basic math here, we see the following equation:

[Screenshot: napkin math comparing 802.11g AP bandwidth against dual-gigabit uplinks]

 

Granted, this is a napkin calculation to make a point, but you can see it would be REALLY hard to oversubscribe the uplinks in this kind of scenario. There weren’t that many wireless clients at the time, AP density wasn’t that bad, 2.4 GHz went pretty far, and there wasn’t much interference.

[Screenshot: the same napkin math, redone with 802.11ac access points]

 

Hmmm… things look a little different here. Now, there are definitely networks out there that have gone to 10Gb connections between their closets in the campus, but there’s still a substantial number running dual gig uplinks between their closets and their core switches. I’ve seen various estimates, but consensus seems to suggest that most end-stations connected to the wireless network consume, on average, about 10% of the actual bandwidth, although I would guess that’s moving up as rich media (video) gets more and more widely used.
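To put rough numbers on that napkin math, here’s a quick sketch. The figures are mine, not the original screenshots: 54 Mbps theoretical per 802.11g AP, and roughly 1300 Mbps theoretical PHY rate for an 802.11ac Wave 1 AP; the AP counts are illustrative.

```python
uplink_mbps = 2 * 1000          # dual gigabit uplinks out of the closet

# Circa 2006: 802.11g tops out at 54 Mbps of theoretical airtime per AP.
aps_11g = 20                    # a generous closet's worth of APs
demand_11g = aps_11g * 54       # aggregate theoretical demand in Mbps

# Today: a single 802.11ac Wave 1 AP offers ~1300 Mbps of theoretical PHY rate.
aps_11ac = 2
demand_11ac = aps_11ac * 1300

print(demand_11g, demand_11g <= uplink_mbps)    # 1080 True
print(demand_11ac, demand_11ac <= uplink_mbps)  # 2600 False
```

Twenty 802.11g APs couldn’t fill dual gig uplinks; two 802.11ac APs can oversubscribe them on their own.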

Distributed Wireless

We’ve had the ability to let wireless APs drop wireless client traffic directly onto the local switch for years, although vendors have implemented this feature at different points in their product life cycles; I think it’s safe to say it’s a me-too feature at this point. I don’t see it implemented much, though, because, in my opinion, having a centralized point in the network, a.k.a. the controller, where I can tunnel all of my traffic back to gives me a single point at which to apply policy. Because of the limited bandwidth, we could trade off the potential traffic trombone of wireless going back to the controller to access local resources for the simplicity of centralized policy.

Now that a couple of 802.11ac access points can potentially oversubscribe the uplinks on your switch, I think we’re quickly going to have to rethink that decision. Centralized policy not only won’t be worth the cost of the traffic trombone, but I would argue it’s just not going to be possible because of the bandwidth constraints.

 

I’m sure some people who made the decision to move to 10Gb uplinks will continue to find centralized policy the winner of this trade-off, but for a large section of network designers, this just isn’t going to be practical.

Distributed Policy

This is where things start to get really interesting for me. Policy is the new black. Everyone’s talking about it: promise theory, declarative intent, Congress, etc. There are a lot of theories and ideas out there right now and it’s a really exciting time to be in networking. I don’t think this is a problem we’ll solve overnight, but I think we’re going to have to start laying the foundation now, with more consistent designs and configurations that give us a consistent, semi-homogeneous base to build on when the policy discussion starts resulting in real products.

What do I mean by this? Right now, there are really two big things that will help drive this forward.

Globally Significant, but not Unique, VLANs

Dot1x, or more accurately the RADIUS protocol, allows us to send back a tunnel-group ID attribute in the RADIUS response that corresponds to a VLAN ID (name or dot1q tag are both valid). We all know the horrors of stretched VLANs, but there’s no reason you can’t reuse the same VLAN number in various places in the network, as long as there’s a solid L3 boundary in between them and they’re assigned different L3 address space. This means that we’re going to have to move back towards L3 designs and turn to configuration tools to ensure that VLAN IDs and naming conventions are standardized and enforced across the global network policy domain.
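As a sketch of what that RADIUS response actually carries, these are the three standard tunnel attributes from RFC 3580 used for dynamic VLAN assignment on 802.1X ports; the VLAN name is an invented example:

```python
# The three RADIUS attributes (RFC 3580) a server returns to drop an
# authenticated port into a VLAN. "STAFF" is an illustrative VLAN name;
# a dot1q tag like "10" is equally valid as the group ID.
vlan_assignment = {
    "Tunnel-Type": 13,                  # 13 = VLAN
    "Tunnel-Medium-Type": 6,            # 6 = IEEE-802
    "Tunnel-Private-Group-ID": "STAFF", # VLAN name or dot1q tag
}

for attr, value in vlan_assignment.items():
    print(f"{attr} = {value}")
```

If that VLAN name doesn’t resolve identically on every switch in the policy domain, the same RADIUS reply lands users in different places, which is exactly why the naming convention has to be enforced globally.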

Consistent Access Control Lists and QoS Policies

RADIUS can also send back a specific attribute in the response that tells the switch to apply a specific ACL or QoS policy to the authenticated connection for the duration of that session. Some vendors, but not all, allow for dynamic instantiation of the ACL/QoS policy; most still require the ACL or QoS construct to already be present on the network device before the RADIUS policy can consume that object. This means we’re going to be forced to turn to configuration management tools to make sure these policy enforcement objects are present in all of the network devices across the network, regardless of the medium.
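A minimal sketch of that pre-existence constraint, assuming the common Filter-Id (RADIUS attribute 11) approach, where the server returns an ACL by name. The ACL names and the missing-ACL behavior here are illustrative, since vendors differ on what actually happens when the referenced ACL isn’t on the box:

```python
# ACLs pre-provisioned on the switch, e.g. by a config management tool.
device_acls = {"STAFF_ACL": ["permit ip any any"]}

def apply_radius_filter_id(filter_id):
    # Mimics the device-side lookup: Filter-Id names a policy by reference,
    # so the ACL object must already exist locally for it to take effect.
    if filter_id not in device_acls:
        return "auth succeeds, but no policy applied (ACL missing)"
    return f"session bound to ACL {filter_id}"

print(apply_radius_filter_id("STAFF_ACL"))
print(apply_radius_filter_id("GUEST_ACL"))
```

The second call is the failure mode config management exists to prevent: the RADIUS server did its job, but the enforcement object wasn’t there to consume.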

 

The future

I think we’re swiftly arriving at a point where wireless cannot be designed in a vacuum as an overlay technology. The business needs policy to be consistently applied across the board, and bandwidth to be available and efficiently used. I don’t see any way to do that without starting to ignore the medium type that a client connects on. On the bright side, this should result in more secure, more flexible, and more business-policy-driven connectivity in the coming years. I don’t believe we’ll be thinking about how the client connected anymore. We won’t care.

 

Agree? Disagree? Did I miss something? Feel free to comment below!

@netmanchris


Working with PYSNMP cont’d – SNMP simple SETs

So, inspired by @KirkByers’ blog on SNMP, which I wrote about here using Python 3, I decided that I wanted to go past just reading SNMP OIDs and see how PYSNMP could be used to set SNMP OIDs as well. As with anything, it’s best to start with the docs and apply the KISS principle.

Why SNMP?

Although RESTful APIs are definitely an easier, more human readable, way to get data in and out of a network device, not all network devices support modern APIs. For a lot of the devices out there, you’re still stuck with good ol’ SNMP.

Tools

SNMP is horrible for humans to read. We have MIBs to translate those nasty strings of digits into something that our mushy brains can make sense of, and the tool that loads them is known as a MIB browser. There are a lot of MIB browsers out there, and which one you use doesn’t matter much, but I would HIGHLY recommend that you get one if you’re going to start playing with SNMP.

http://www.ireasoning.com has a free version that works great on Windows and supports up to 10 MIBs loaded at the same time.

The Goal

There are a lot of powerful actions available through the SNMP interface, but I wanted to keep things simple. For this project, I wanted to use SNMP to change the SYSCONTACT and SYSLOCATION fields.

For those of you who are used to a CLI, the section of the config I’m going after resembles this.

 snmp-agent sys-info contact contact

 snmp-agent sys-info location location

 

The Research

So I know what I want to change, but I need to know how to access those values through SNMP. For this, I’ll use the MIB browser.

 

Using the MIB browser, I was able to find the SYSLOCATION object, which is identified as .1.3.6.1.2.1.1.6.0

[Screenshot: MIB browser showing the SYSLOCATION OID]

I was also able to find the SYSCONTACT object, which is identified as .1.3.6.1.2.1.1.4.0

[Screenshot: MIB browser showing the SYSCONTACT OID]

 

So now I’ve got the two OIDs that I’m looking for.

The Code

Looking through the PYSNMP documentation, there’s an example of how to do an SNMP SET, but it includes a couple of options I didn’t want. Specifically, I didn’t want to use the lookupNames option, so I changed lookupNames to False, and then I was able to use the OIDs above directly without having to find the names of the MIB objects.

So looking through the code below, you can see that I’ve created a function which takes an input, syscontact, and uses it as the value to set the MIB object .1.3.6.1.2.1.1.4.0, which corresponds to the snmp-agent sys-info contact … line in the configuration of the device.

from pysnmp.entity.rfc3413.oneliner import cmdgen

cmdGen = cmdgen.CommandGenerator()

# using the PYSNMP library to set the network device's SYSCONTACT field via SNMP
def SNMP_SET_SYSCONTACT(syscontact):
    errorIndication, errorStatus, errorIndex, varBinds = cmdGen.setCmd(
        cmdgen.CommunityData('private'),
        cmdgen.UdpTransportTarget(('10.101.0.221', 161)),
        (cmdgen.MibVariable('.1.3.6.1.2.1.1.4.0'), syscontact),
        lookupNames=False, lookupValues=True)
    # check for errors and print out results
    if errorIndication:
        print(errorIndication)
    elif errorStatus:
        print('%s at %s' % (errorStatus.prettyPrint(),
                            errorIndex and varBinds[int(errorIndex)-1] or '?'))
    else:
        for name, val in varBinds:
            print('%s = %s' % (name.prettyPrint(), val.prettyPrint()))

 

 Running the Code

Now we run the code

>>>
>>> SNMP_SET_SYSCONTACT('test@lab.local')
1.3.6.1.2.1.1.4.0 = test@lab.local
>>>

Looking back at the MIB browser, we can see that the SYSCONTACT value has been changed to the new value above.

[Screenshot: MIB browser showing the updated SYSCONTACT value]

And when we log back into the network device

 snmp-agent sys-info contact test@lab.local

 snmp-agent sys-info location location

 

Wrap Up

This is just a small proof-of-concept that shows that, although RESTful APIs are definitely sweeter to work with, they are not the only way to programmatically interface with network devices.

Comments or Questions?  Feel free to comment below

 

@netmanchris

 

 

 

PYSNMP with HP 5500EI Comware Switch

Inspired by @kirkbyers’ post over here, I wanted to stretch my Python skills and play around with the PYSNMP library, as well as Kirk’s SNMP_HELPER.PY function, which is available here.

Clean up the SNMP_HELPER.PY function for Python 3.x

There are some differences between Python 2 and Python 3. One of those differences is that the print statement now requires you to actually have parens () around the content that you wish to print. This was about the only thing I had to do to get Kirk’s code working in Python 3; if you try to run the code in the Python IDLE software, it will come up with this error right away. I could also have run the 2to3 script, but since this was a small file, it was easy to just search for the four or so print statements and edit them manually as I was reading through the code to try to understand what Kirk was doing.
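For anyone who hasn’t hit it yet, the whole Python 2 vs. 3 print change looks like this (the message string is just an example):

```python
# Python 2 accepted:   print "snmp walk complete"
# Python 3 requires parentheses, which is the only edit SNMP_Helper.py needed:
message = "snmp walk complete"
print(message)   # valid in Python 3, and harmless in Python 2
```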

 

Easy Installation

So Kirk takes you through the normal PIP installation. I’m performing this on OS X Mavericks with Python 3, so for those not familiar with the differences yet: Python 2.x is natively installed on OS X. If you do a pip install … command, this will result in you downloading and making that specific library available to the Python 2.x version on your OS. Since I’m using Python 3.x, I instead need to use the pip3 install command, which will make the library you’re downloading available to Python 3.x on your system.

$pip3 install pysnmp

 

Note: Kirk has a couple of other ways to install the pysnmp library over on his blog, so I won’t repeat them here.

Testing Out SNMP

So it’s a good idea to ensure that SNMP is running and that you have the right community strings on the machine you’re going to access. For this, I’m going to use an SNMP MIB browser that I have installed on my MBA. You could also use the net-snmp utilities as shown on Kirk’s blog if you’d like to do this from the CLI. I highly recommend getting a MIB browser installed on your system; http://www.ireasoning.com has a nice free one available.

[Screenshot: MIB browser query confirming SNMP access to the switch]

 

So now that we’ve confirmed this all works, on to the code.

Setting the Stage

So I’m assuming that you’re able to run the SNMP_Helper.py file in IDLE. If you look at the code, one of the first things it does is import the cmdgen module from the pysnmp library:

"from pysnmp.entity.rfc3413.oneliner import cmdgen"

One of the ways that has really helped me learn is to go through other people’s code and try and understand exactly what they are doing. I don’t think I could have written SNMP_Helper.py on my own yet, but I can understand what it’s doing, and I can DEFINITELY use it. 🙂

Now we set up a few variables, using the exact same names that Kirk used over in his blog here

>>> COMMUNITY_STRING = 'public'
>>> SNMP_PORT = 161
>>> a_device = ('10.101.0.221', COMMUNITY_STRING, SNMP_PORT)

Running the Code

Now we’ll run the exact same SNMP query against the sysDescr OID that Kirk used and, amazingly enough, get a very similar output.

>>> snmp_data = snmp_get_oid(a_device, oid='.1.3.6.1.2.1.1.1.0', display_errors=True)
>>> snmp_data
[(MibVariable(ObjectName(1.3.6.1.2.1.1.1.0)), DisplayString(hexValue='485020436f6d7761726520506c6174666f726d20536f6674776172652c20536f6674776172652056657273696f6e20352e32302e39392052656c6561736520323232315030350d0a48502041353530302d3234472d506f452b204549205377697463682077697468203220496e7465726661636520536c6f74730d0a436f707972696768742028632920323031302d32303134204865776c6574742d5061636b61726420446576656c6f706d656e7420436f6d70616e792c204c2e502e'))]

 

It’s nice to see that we’ve gotten the same nasty output. SNMP is a standard, after all, and we should expect to see the same response from Cisco, HP, and other vendors’ devices when using standard SNMP functions, such as the MIB-II sysDescr OID.

So now, let’s use Kirk’s cleanup function to see what the data actually looks like. Again, remember that Python 3 needs those parens for the print statement to work properly.

>>> output = snmp_extract(snmp_data)
>>> print (output)
HP Comware Platform Software, Software Version 5.20.99 Release 2221P05
HP A5500-24G-PoE+ EI Switch with 2 Interface Slots
Copyright (c) 2010-2014 Hewlett-Packard Development Company, L.P.
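As an aside, if you don’t have Kirk’s extraction helper handy, the standard library can decode that hexValue directly. The string below is just the first few bytes of the sysDescr payload shown above, to keep the example short:

```python
# The DisplayString comes back as raw hex; bytes.fromhex() turns it back
# into text. hex_value here is only the leading bytes of the full payload.
hex_value = "485020436f6d77617265"
decoded = bytes.fromhex(hex_value).decode("ascii")
print(decoded)   # HP Comware
```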

Just for giggles, I also used this code against my Synology Diskstation

>>> print(output)
Linux DiskStation 2.6.32.12 #4482 Fri Apr 18 02:12:31 CST 2014 armv5tel

Then against my Server Technologies intelligent PDU

>>> print(output)
Sentry Switched CDU

Then against my DIGI console server.

>>> snmp_data = snmp_get_oid(a_device, oid='.1.3.6.1.2.1.1.1.0', display_errors=True)
ERROR DETECTED:
error_message No SNMP response received before timeout
error_status 0
error_index 0

The last one was working exactly as expected, as I have ACLs in place to only allow SNMP access from certain devices in my network. 🙂

Observations

It’s nice to see that standards like SNMP and widely available libraries like pysnmp can be used to access the devices regardless of the vendor they come from.

SNMP gets a bad rap in general, as there are newer, cooler technologies out there like NETCONF, OpenFlow, OVSDB, NetFlow, sFlow, and I’m sure a dozen others that I’m missing, that can do a better job of the functions SNMP was originally designed for.

But sometimes SNMP is what we have, and the reason it’s still around after all these years is that it’s “good enough”.

 

Questions or comments?  please post below!

@netmanchris

 

 

Surfing your NMS with Python

Python is my favourite programming language. But then again, it’s also the only one I know. 🙂

I made the choice to go with Python because, honestly, that’s what all the cool kids were doing at the time. But after spending the last year or so learning the basics of the language, I do find that it’s something I can easily consume, and I’m starting to get better with all the different resources out there. BTW, http://www.stackoverflow.com is your friend; you will learn to love it.

On with the show…

So in this post, I’m going to show how to use Python to build a quick script that calls the RealTimeLocate API on the HP IMC server. In theory, you can build this against any RESTful API, but I make no promises that it will work without some tinkering.

Planning the project.

I’ve written before how I’m a huge fan of OPML tools like Mind Node Pro.  The first step for me was planning out the pieces I needed to make this:

  • usable in the future
  • actually work in the present
In this case I’m far more concerned about the present, as I’m fairly sure that I will look back on this code a year from now and think some words that I won’t put in print.
Aside: I’ve actually found that using the troubleshooting skills I’ve honed over the years as a network engineer helps me immensely when trying to decompose what pieces will need to go in my code. I actually think that Network Engineers have a lot of skills that are extremely transportable to the programming domain. Especially because we tend to think of the individual components and the system at the same time, not to mention our love of planning out failure domains and forcing our failures into known scenarios as much as possible.

[Screenshot: mind map of the project plan]

Auth Handler

Assuming that the RESTful service you’re trying to access requires authentication, you will need an authentication handler to deal with the username/password stuff that a human being is usually required to enter. There are a few different options here. Python actually ships with urllib or some variant, depending on the version of Python you’re working with. For ease-of-use reasons, and because of a strong recommendation from one of my coding mentors, I chose to use the REQUESTS library. This is not shipped by default with the version of Python you download over at http://www.python.org, but it’s well worth the effort of PIP’ing it into your system.

The beautiful thing about REQUESTS is that the documentation is pretty good and easily readable.

In looking through the HP IMC eAPI documentation and the Requests library, I settled on DigestAuth.

[Screenshot: Requests documentation for HTTPDigestAuth]

So here’s how this looks for IMC.

Building the Authentication Info

>>> import requests   # imports the requests library; you may need to PIP this in if you don't have it already
>>> from requests.auth import HTTPDigestAuth    # this imports the HTTPDigestAuth method from the requests library
>>>
>>> imc_user = '''admin'''   # the username used to auth against the HP IMC server
>>> imc_pw = '''admin'''   # the password of the account used to auth against the HP IMC server
>>>
>>> auth = requests.auth.HTTPDigestAuth(imc_user, imc_pw)   # this puts the username and password together and stores them as a variable called auth

We’ve now built the auth handler to use the username “admin” with the password “admin”. For a real environment, you’ll probably want to set up an operator group with access only to the eAPI functions and lock this down to a secret username and password. The eAPI is powerful; make sure you protect it.

Building the URL

So for this to work, I need to assign a value to the host_ip variable above so that the URL will complete with a valid response. The other thing to watch for is types. Python can be quite forgiving at times, but if you try to add two objects of the wrong types together, it mostly won’t work. So we need to make sure host_ip is a string, and the easiest way to do that is to put three quotes around the value.

In a “real” program, I would probably use the input function to allow this variable to be input as part of the flow of the program, but we’re not quite there yet.

>>> host_ip = '''10.101.0.109'''   # variable that you can assign to a host you want to find on the network
>>> h_url = '''http://'''    # prefix for building URLs; use HTTP or HTTPS
>>> imc_server = '''10.3.10.220:8080'''   # match port number of IMC server; default 8080 or 8443
>>> url = h_url+imc_server    # combines the h_url and the IP address of the IMC box as a base URL to use later
>>> find_ip_host_url = ('''/imcrs/res/access/realtimeLocate?type=2&value='''+host_ip+'''&total=false''')   # this is the RealTimeLocate API URL with a variable set
>>>

Putting it all together.

This line puts together the URL that we’re going to send to the web server. You could ask, “Hey man, why didn’t you just drop the whole string into one variable to begin with?” That’s a great question. There’s a concept in programming called DRY (Don’t Repeat Yourself). The idea is that when you write code, you should never write the same thing twice; think in a modular fashion that allows you to reuse pieces of code again and again.

In this example, I can easily write another f_url variable and assign to it another RESTful API call that gets me something interesting from the HP IMC server. I don’t need to rewrite the h_url portion or the server IP address portion of the URL. Make sense?

>>> f_url = url + find_ip_host_url
>>>    # a very simple concatenation that puts together url and find_ip_host_url to produce the full URL for the HTTP call
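Pulling those same example values out of the interactive session into a plain script, you can sanity-check exactly what f_url ends up holding:

```python
host_ip = '10.101.0.109'          # host we want to locate on the network
h_url = 'http://'                 # URL scheme prefix
imc_server = '10.3.10.220:8080'   # IMC server IP and port
url = h_url + imc_server          # reusable base URL (the DRY piece)
find_ip_host_url = ('/imcrs/res/access/realtimeLocate?type=2&value='
                    + host_ip + '&total=false')
f_url = url + find_ip_host_url
print(f_url)
```

which prints http://10.3.10.220:8080/imcrs/res/access/realtimeLocate?type=2&value=10.101.0.109&total=false — the complete call, assembled from reusable pieces.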

Executing the code.

Now the last piece is where we actually execute the code. This will issue a GET request using the requests library. It will use f_url as the actual URL it’s going to request, and it will use the auth variable that we created in the Authentication Info step above to automatically populate the username and password.

The response will get returned in a variable called r.

>>> r = requests.get(f_url, auth=auth)    # using the requests library's get method, we pass f_url as the URL to access and auth as the auth argument to define how we authenticate. Pretty simple, actually.
>>>

The Results

So this is the coolest part. We can now see what’s in r. Did it work? Did we find our lost, scared little host? Let’s take a look.

>>> r
<Response [200]>

Really? That’s it?

The answer is “yes”. That’s what’s been assigned to the variable r. 200 OK may look familiar to you voice engineers who know SIP, and it means mostly the same thing here: it’s a response code to let you know that your request was successful. But that’s not what we’re looking for; I want that content, right? If I do a type(r), which tells me what Python knows about what kind of object r is, I get the following.

>>> type(r)

<class 'requests.models.Response'>

So this tells us that maybe we need to go back to the Requests documentation and look for info on responses. Now we know how to access the part of the response I wanted to see, which is the reply to my request about where the host with IP address 10.101.0.111 is actually located on the network.

So let’s try out one of the options and see what we get

>>> r.content
b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list><realtimeLocation><locateIp>10.101.0.111</locateIp><deviceId>4</deviceId><deviceIp>10.10.3.5</deviceIp><ifDesc>GigabitEthernet1/0/16</ifDesc><ifIndex>16</ifIndex></realtimeLocation></list>'

How cool is that? We put in an IP address and we actually learned four new things about it without touching a single GUI. And the awesome part? This works across any of the devices that HP IMC supports.
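Rather than eyeballing the raw bytes, that XML can be unpacked with the standard library. The payload below is copied from the r.content output above:

```python
import xml.etree.ElementTree as ET

# The realtimeLocation XML returned by IMC, copied from r.content above.
payload = (b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
           b'<list><realtimeLocation><locateIp>10.101.0.111</locateIp>'
           b'<deviceId>4</deviceId><deviceIp>10.10.3.5</deviceIp>'
           b'<ifDesc>GigabitEthernet1/0/16</ifDesc><ifIndex>16</ifIndex>'
           b'</realtimeLocation></list>')

root = ET.fromstring(payload)
# Flatten the single realtimeLocation element into a plain dict.
location = {child.tag: child.text for child in root.find('realtimeLocation')}
print(location['deviceIp'], location['ifDesc'])  # 10.10.3.5 GigabitEthernet1/0/16
```

Now the device IP and interface are real Python values you can feed into the next API call, instead of a blob of bytes.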

Where to from here?

So we’ve just started on our little journey here. Now that we have some hints to the identity of the network device and the specific interface that is currently harbouring this lost host, we need to use that data to continue filling in the picture.

But that’s in the next blog…

Comments or Questions?  Feel free to post below!