Using JSONSchema to Validate input

There are a lot of REST APIs out there. Quite a few of them use JSON as the data structure which allows us to get data in and out of these devices. There are a lot of network focused blogs that detail how to send and receive data in and out of these devices, but I wasn’t able to find anything that specifically talked about validating the input and output of the data to make sure we’re sending and receiving the expected information.

Testing is a crucial, and IMO too often overlooked, part of the Infrastructure as Code movement. Hopefully this post will help others start to think more about validating input and output of these APIs, or at the very least, spend just a little more time thinking about testing your API interactions before you decide to automate the massive explosion of your infrastructure with a poorly tested script. 🙂

What is JSONSchema

I’m assuming that you already know what JSON is, so let’s skip directly to talking about JSONSchema. This is a Python library which allows you to take your input/output and verify it against a known schema which defines the data types you’re expecting to see.

For example, consider this snippet of a schema which defines what a valid VLAN object looks like

"vlan_id":
{
    "description": "The unique ID of the VLAN. IEEE 802.1Q VLAN identifier (VID)", 
    "minimum": 1, 
    "maximum": 4094, 
    "type": "integer", 
    "sql.not_null": true
}

You can see that this is a small set of rules that defines what a valid entry for the vlan_id property of a VLAN looks like. As network professionals, it’s obvious to us that a valid VLAN ID must be between 1 and 4094. We know this because we deal with it every day. But our software doesn’t know this. We have to teach it what a valid vlan_id property looks like, and that’s what the schema does.

our software doesn’t know this. We have to teach it

Why do we care?

Testing is SUPER important. Being able to test the input/output of what you’re feeding into your automation/orchestration framework can help you avoid, at worst, a total meltdown, or, at best, a couple of hours spent trying to figure out why your code doesn’t work.

Using JSONSchema

So the two things you’re going to need to use JSONSchema are

  • The JSON Schema for a specific API endpoint
  • The input/output that you want to validate.

In this case, we’ll use a VLAN object that is coming out of an ArubaOS-Switch REST API.

You did know the ArubaOS-Switches have a REST API, right?

Step 1 – Loading the VLAN object

We’re going to gather the output from the VLANs API. Instead of writing custom code, we’ll just use the pyarubaoss library. I’ll leave you to check out the GitHub repo and simply paste the JSON output for a single VLAN here. I’m also going to create a second VLAN object with a vlan_id of 5000, just to show how this works. 5000, of course, is not valid and we’d like to prove that. Right?
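
Something like the two objects below is what we end up working with. This is a minimal sketch; the real pyarubaoss output has more fields than I’m showing here, and the names and values are just illustrative.

vlan_good = {"vlan_id": 10, "name": "USERS"}        # a perfectly legal VLAN
vlan_bad = {"vlan_id": 5000, "name": "NOT_A_VLAN"}  # 5000 is outside the legal range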

Step 2 – Loading the JSON Schema Definition

Now that we have the output, we want to make sure that it complies with the JSON Schema definition that we’ve been provided.

Loading the JSON schema

Here’s a sub-set of the JSON schema that defines what a valid VLAN looks like
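
Loaded into Python, a trimmed-down version of that schema looks something like this. I’ve built it around the vlan_id rules shown earlier; the real vendor-supplied schema covers many more properties.

vlan_schema = {
    "type": "object",
    "properties": {
        "vlan_id": {
            "description": "The unique ID of the VLAN. IEEE 802.1Q VLAN identifier (VID)",
            "type": "integer",
            "minimum": 1,
            "maximum": 4094
        },
        "name": {
            "description": "The name of the VLAN",
            "type": "string"
        }
    },
    "required": ["vlan_id"]
}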

Step 3 – Importing the JSON Schema Library and Validating

Now we’re going to load the JSON Schema library into our python session and use it to validate the VLAN object using the Schema we defined above.

First we’ll look at the vlan_good object and run it through the validate function
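
A minimal sketch of that call, using the objects and schema sketched above:

from jsonschema import validate

validate(vlan_good, vlan_schema)   # no output means the object passed validation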

As you can see, there’s nothing to see here. Basically, this means that the vlan_good object conforms to the provided JSON Schema. The VLAN ID is valid as it’s an integer value between 1 and 4094.

Now let’s take a look at the vlan_bad object and run it through the same validate function
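
Same one-liner, different input:

validate(vlan_bad, vlan_schema)   # this one raises a ValidationError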

We can see that the validate function now raises an exception and lets us know very specifically that the VLAN ID 5000 is not valid:

jsonschema.exceptions.ValidationError: 5000 is greater than the maximum of 4094

Pretty cool right? We can still definitely shoot ourselves in the foot, but at least we know the input/output data that we’re using for our API is valid. To me this is important for two reasons

  • I can validate that the data I’m trying to send to a given API conforms to what that API is expecting
  • I can validate that the vendor didn’t suddenly change their API which is going to break my code

Wrap Up

There are a lot of networking folk who have started to take on the new set of skills required to automate their infrastructure. One of the most crucial parts of this is testing and validating the code to ensure that you’re not just blowing up your network more efficiently. JSON Schema is one of the tools in your tool box that can help you do that.

Comments, questions? Let me know

@netmanchris


Shedding the Lights on Operations: REST, a NMS and a Lightbulb

It’s obvious I’ve caught the automation bug. Beyond just automating the network I’ve finally started to dip my toes in the home automation pool as well.

The latest addition to the home project was the Philips Hue light bulbs. Basically, I just wanted a new toy, but imagine my delight when I found that there’s a full REST API available.

I’ve got a REST API and a light bulb and suddenly I was inspired!

The Project

Network Management Systems have long suffered from information overload.

Notifications have to be tuned and if you’re really good you can eventually get the stream down to a dull roar. Unfortunately, the notification process is still broken in that the notifications are generally dumped into your email which if you are anything like me…

(Yes, my unread email count really is that bad as of this writing.)

One of the ways of dealing with the deluge is to use a different medium to deliver the message. Many NMS systems, including HPE IMC, have the capability of issuing audio alarms, but let’s be honest: that can get pretty annoying as well, and it’s pretty easy to mute them.

I decided that I would use the REST interfaces of the HPE IMC NMS and the Philips Hue light bulbs to provide a visual indication of the general state of the system. Yes, there’s a valid, business-justifiable reason for doing this. But c’mon, we’re friends, right? The real reason I worked on this was because they both have REST APIs and I was bored. So why not, right?

The other great thing about this is that you don’t need to spend your day looking at a NOC screen. You can login when the light goes to whatever color you decide is bad.

Getting Started with the Philips Hue API

The Philips SDK getting-started guide was actually really easy to work through. As well, there’s an embedded HTML interface that allows you to play around with the REST API directly on the Hue bridge.

Once you’ve set up your initial authentication to the bridge ( see the getting started guide ), you can log in to the bridge at http://ip_address/debug/clip.html

From there it’s all fun and games. For instance, if you wanted to see the state of light number 14, you would navigate to api/%app_name%/lights/14 and get back the light’s current state in nice, easy-to-read JSON.
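
Outside of the debug tool, the same call is a one-liner with the requests library. This is just a sketch; the bridge address and app name below are placeholders, and the exact fields you get back will depend on your bulbs.

import requests

bridge_ip = "192.168.1.10"        # placeholder: your bridge's IP address
app_name = "your_app_name"        # placeholder: the app name you registered with the bridge

r = requests.get("http://{}/api/{}/lights/14".format(bridge_ip, app_name))
light = r.json()
print(light["state"]["on"], light["state"]["bri"])   # e.g. True 254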


From here, it would be fairly easy to use an HTTP library like requests to start issuing HTTP commands at the bridge but, as I’m sure you’re aware by now, there’s very little uncharted territory in the land of Python.

PHUE library

Of course someone has been here before me and has written a nice library that works with both Python 2 and Python 3. You can see the library source code here, or you can simply run

pip install phue

from your terminal.
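
Once it’s installed, driving a bulb looks roughly like this. The bridge IP and light number are placeholders, and the first run requires pressing the link button on the bridge.

from phue import Bridge

b = Bridge("192.168.1.10")   # placeholder bridge IP; press the bridge's link button before the first connect
b.connect()

# turn light 14 on and push it towards green (hue runs 0-65535 on the Hue scale)
b.set_light(14, "on", True)
b.set_light(14, {"hue": 25500, "sat": 254, "bri": 200})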

The Proof of Concept

You can check out the code for the proof of concept here. Or you can watch the video below.

Breaking down the code

1) Grab Current Alarm List

2) Iterate over the Alarms and find the one with the most severe alarm state

3) Create a function to correlate the alarm state to the colour of the Philips Hue light bulb.

4) Wait for things to move away from green.
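
Stitched together, the logic looks something like the sketch below. The get_imc_alarms() helper is a stand-in for the pyhpeimc call I actually used, and the 'faultLevel' field name and the severity-to-colour mapping are assumptions from my notes on the eAPI, so verify both against your own system.

import time
from phue import Bridge

b = Bridge("192.168.1.10")      # placeholder bridge IP
LIGHT = 14                      # placeholder light number

# Hue colour values (0-65535) for each alarm severity; purely illustrative
SEVERITY_TO_HUE = {1: 0,        # critical -> red
                   2: 6000,     # major    -> orange
                   3: 12750,    # minor    -> yellow
                   4: 25500}    # info/ok  -> green

def get_imc_alarms():
    '''Stand-in for the real alarm call (I used the pyhpeimc library for this);
    replace the body with your own eAPI code.'''
    return []   # e.g. [{"faultLevel": "2"}, {"faultLevel": "4"}]

def worst_severity(alarms):
    '''Assuming lower faultLevel numbers are more severe, take the minimum.'''
    return min(int(a["faultLevel"]) for a in alarms) if alarms else 4

while True:
    alarms = get_imc_alarms()                       # 1) grab the current alarm list
    severity = worst_severity(alarms)               # 2) find the most severe alarm state
    b.set_light(LIGHT, {"on": True,                 # 3) map that severity to a colour
                        "hue": SEVERITY_TO_HUE.get(severity, 25500),
                        "sat": 254})
    time.sleep(60)                                  # 4) wait for things to move away from green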

Lessons Learned

The biggest lesson here was that colours on a screen and colours on a light bulb don’t translate very well. The green and the yellow lights weren’t far enough apart to be useful as a visual indicator of the health of the network, at least not IMHO.

The other thing I learned is that you can waste a lot of time working on aesthetics. Because I was leveraging the phue library and the pyhpeimc library, 99% of the code was already written. The project probably took me less than 10 minutes to get the logic together and more than a few hours playing around with different colour combinations to get something that I was at least somewhat OK with. I imagine the setting and the ambient light would very much affect whether or not this looks good in your place of business. If you use my code, you’ll want to tinker with it.

Where to Next

We see IoT devices all over in our personal lives, but it’s interesting to me that I could set up a visual indicator of network health for a NOC environment for less than $100. Just think about some of the possibilities here:

  • Connect each NOC agent’s ticket queue to the light colour. Once they are assigned a ticket, they go orange for DO-NOT-DISTURB.
  • Connect the app to a ClearPass authentication API and flash the bulbs blue when the boss walks in the building. Always good to know when you should be shutting down solitaire and looking like you’re doing something useful, right?
  • Connect the app to a Meridian location API and turn all the lights green when the boss walks on the floor.

Now I’m not advocating you should hide things from your boss, but imagine how much faster network outages would get fixed if we didn’t have to stop fixing them to explain to our boss what was happening and what we were going to be doing to fix them, right?

Hopefully, this will have inspired someone to take the leap and try something out.

Comments, questions?

@netmanchris

Automating your NMS build – Part 4 Adding Custom Views

This is part four in a series of using python and the RESTful API to automate the configuration of the HP IMC network management station.  Like with all things, it’s easier to learn something when you’re able to find a good reason to use those skills. I decided to extend my python skills by figuring out how to use python to configure my NMS using the RESTful API. 

If you’re interested, check out the other posts in this series

Creating Operators

Adding Devices

Changing Device Categories

Adding Custom Views

In this post, we’re going to use the RESTful API to programmatically add a custom view, and then add devices to that custom view.  For those of you who don’t know, a custom view is simply a logical grouping of devices.  In IMC, custom views also form the basis of the topology maps. In fact, a custom view and a topology map are essentially the same object internally. The difference is just whether you choose to look at it as a list of devices in the normal interface or in the typical topology/Visio format which we all know and might-not-love.

Why might we want to add a custom view programmatically, you ask? Well, the answer to that might be simply that we’re lazy and it’s easier and faster.  Or it might be that you don’t want to have to look through all the different devices, or, as in my case, that you simply want an excuse to extend your python skills.  Whatever your reasons are, custom views are something that just don’t get used enough, in my opinion.

Custom views are a great way to be able to zoom in on the status of a specific branch, a geographic area, maybe a logical grouping of devices that support a specific application?  It really doesn’t matter and the best part is that a single device can exist in multiple custom views at the same time, so there’s really no limit to how you put these views together. It all depends on what makes sense to you. 

 

The Code

This code is a little bit more complicated than some of the other examples we’ve looked at. We’re actually going to be using multiple functions together, but the logic should be pretty easy to follow.  I’m sure there are better ways to do this, but this seems to work for me.  I’m sure I’ll be back here in a year going “Why did I write it that way!?!?!?!?!?” but for now, I hope it’s simple enough for someone else to follow and possibly get inspired. 

The main function is really just calling the other functions, which we will break down below.
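
Roughly, it looks like the sketch below. The function names come from the walkthrough that follows; gather_dev_list() is just a placeholder name for the device-gathering code in step 3, and the exact signatures are my guess.

def main():
    view_name = create_new_view()                  # Step 1: create the view and keep its name
    view_list = get_custom_views()                 # Step 2a: pull back all known custom views
    view_id = get_view_id(view_name, view_list)    # Step 2b: find the symbolId of our new view
    dev_list = gather_dev_list()                   # Step 3: placeholder for the device-gathering code
    add_device_to_view(dev_list, view_id)          # Step 4: add those devices to the new view

if __name__ == "__main__":
    main()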

Step 1 – Create the Custom View

In this code, we’re going to use a small function that I created to gather the name of the custom view ( the view_name variable ) and then use that to create the JSON payload ( the payload variable ).  The other part of the JSON payload is the autoAddDevType variable, which is hard-coded to 0.  This could be used to have the system automatically add new devices of a given type, but I’m interested in doing the automation myself here. The last part of this code, the return of the view name, will be used as the input for another function.  You can see this in the main function in the view_name = create_new_view() line.
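
Here’s a sketch of what that function might look like. It reuses the url, auth and headers plumbing from the Creating Operators post, and the payload keys come straight from the description above; the endpoint path, however, is my assumption, so check it against the /imcrs docs on your own server.

import json
import requests

def create_new_view():
    '''Prompt for a view name, create the custom view, and return the name.'''
    if auth is None or url is None:   # reuse the credential helper from the earlier posts
        imc_creds()
    view_name = input("What would you like to name the new custom view? ")
    payload = json.dumps({"name": view_name, "autoAddDevType": "0"})
    f_url = url + "/imcrs/plat/res/view/custom"   # assumed endpoint; verify in the eAPI docs
    r = requests.post(f_url, data=payload, auth=auth, headers=headers)
    if r.status_code == 201:
        print("View " + view_name + " successfully created")
    return view_name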

Step 2 – Get Custom Views

For the next part, we’re going to need to figure out what ID was assigned to this new view. To do this, we’re going to go through a couple of steps. The first one is to ask the NMS to send us a list of all the known custom views. The following function will request the list, which will be returned as a JSON array, and then convert it into a python list of dictionaries so we can work with it natively as a python object. You can see this in the main function in the view_list = get_custom_views() line.

Now that we have the view_list, which is the list of all the views, we’re going to have to find the ID for the new view that we just created.  We do that by using the get_view_id() function with the view_name as the input.  Essentially, this will look through each of the views in the view_list that was returned above and let us know when the ‘name’ value is equal to the view_name value that we captured above. Once it’s equal, we return the ‘symbolId’, which is the internal unique numeric value assigned to this particular custom view. This is the number we’re going to use to identify the view that we want to add devices to.  Make sense? Now that we’ve got this number, we assign it to the object view_id for use later on.
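
Sketched out, those two functions might look like this. The endpoint path and the "customView" key in the response are my assumptions; the 'name' and 'symbolId' fields are the ones described above.

def get_custom_views():
    '''Ask the NMS for all known custom views and return them as a list of dicts.'''
    f_url = url + "/imcrs/plat/res/view/custom"   # assumed endpoint; verify in the eAPI docs
    r = requests.get(f_url, auth=auth, headers=headers)
    return json.loads(r.text)["customView"]       # assumed key name in the JSON response

def get_view_id(view_name, view_list):
    '''Walk the list of views and return the symbolId of the one we just created.'''
    for view in view_list:
        if view["name"] == view_name:
            return view["symbolId"]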

Note: I actually could have added the devices directly to the view in the original add_custom_view code above, but then I’d have to write the modify function later if I ever wanted to change or add new devices to the view. I’m trying to follow the DRY ( Don’t Repeat Yourself ) advice here, so I just write the modify function now and I can then leverage it later without having to re-write the code.

Step 3 – Generate the Device List

We’re not going to spend too much time on this part as I’m essentially re-using the code from the Changing Device Categories post. It’s pretty straightforward. I’m using some user input to gather a list of devices that we want to add to this specific view and then capturing it in the dev_list object.  What’s cool about doing it this way is that the search will look through all the IP addresses assigned to your devices, not just the managed address which might come up in the NMS interface where you would normally perform this step. There are ways around that as well, but that’s another blog.

Step 4 – Add Devices to Existing Custom View

This last step is where things come together. We’re going to run the add_device_to_view(dev_list, view_id) function, which uses the dev_list generated in step 3 and the view_id that we captured at the end of step 2 as its inputs.  Essentially, we’re just saying “add all the devices I want to the view I just created”.
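
A sketch of that last function is below. The PUT endpoint and the payload shape are my assumptions rather than something pulled from the docs, so confirm both against the eAPI documentation before trusting it.

def add_device_to_view(dev_list, view_id):
    '''Add every device in dev_list to the custom view identified by view_id.'''
    device_list = [{"id": dev["id"]} for dev in dev_list]
    payload = json.dumps({"device": device_list})   # assumed payload shape
    f_url = url + "/imcrs/plat/res/view/custom/" + str(view_id)   # assumed endpoint
    r = requests.put(f_url, data=payload, auth=auth, headers=headers)
    if r.status_code in (200, 201, 204):
        print("Devices successfully added to view " + str(view_id))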

Wrapping it Up

So this is just an example of how you can tie a few pieces of code together to help automate something that might otherwise take a lot of manual labour. In this case, that means a bunch of mouse clicks and counting on being able to manually identify all the devices you wanted to add to a specific view.  Personally, I’d rather leave the hard work to the computers and move on to something that requires my brain.

Automating your NMS build – Part 3 Changing Device Categories

In the first couple of posts in this series, we created some operators and then added some devices. In this post, we’re going to look at something a little more complicated: linking a couple of different python functions together to meet a requirement, which is to change a device from one category to another.

A bit about Device Categories

For those of you who haven’t used HP’s Intelligent Management Center before, the system automatically categorizes any discovered device, usually based upon the SNMP sysObjectID. What this means is that when you run an auto-discovery, the majority of your infrastructure will be properly classified right out of the box.


This works great for devices which are SNMP enabled, as well as some other devices like ESX machines ( Virtual Devices ) which use SOAP as the management protocol, but it fails pretty miserably when dealing with devices which don’t support anything more than PING.  IMC’s default behaviour is to put anything which doesn’t respond to SNMP in the Desktop Category.

IP Phones aren’t smart

I’ve had customers who decided to discover all of their expensive IP phones and suddenly found out that none of them were classified properly. The problem with IP phones is that they are usually pretty stupid devices. Low memory, weak CPUs. Most of the processing/thinking is done by the PBX. I’ve never seen an IP phone that supports SNMP.

In this case we have two choices

  1. Manually change each IP phone from the Desktop category into the Voice category one device at a time.
  2. Use the RESTful API and a bit of code to do it while we go have a coffee

Filtering First

So the first thing we need to do is to separate the IP phones from the rest of the devices in the Desktop category. Thankfully, this is where having a well-designed network can come in REALLY handy. Most voice networks are designed so that the IP phones are automatically put into a voice VLAN. This means that all phones SHOULD be in the same layer 3 network range.

The first piece of code we need to write is a simple function which will allow the user to identify which of the categories they want to filter by. Although this might sound strange, we actually need to make sure that we only grab the Desktop devices in a specific subnet range. Imagine if you accidentally moved the router or switch for this subnet into the Voice category too. Really sucks when you lose your router, right?

In this piece of code, we’re simply creating a dictionary which links the categoryId, which is the number IMC internally uses to identify the defined categories, to the labels we humans use to identify them.  In a nutshell, this simply prints out the available categories.  Why, you ask? Because the human running this script needs to know which categories they want to filter by.
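
Something like the snippet below gets the idea across. The category IDs shown are purely illustrative; pull the real numbers from your own IMC installation.

# mapping of IMC categoryId numbers to the labels we humans use (example values only)
categories = {"0": "Router",
              "1": "Switch",
              "4": "Desktop",
              "7": "Voice"}

def print_category():
    '''Show the operator which categories are available to filter by.'''
    for cat_id, label in categories.items():
        print(cat_id + " = " + label)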

This next piece of code performs two different functions:

  1. Filters all known devices by the categories listed above
  2. Filters all known devices by an IP range.

Combining the two of them, we’re able to easily find all of the devices that have been classified in the Desktop category in the L3 subnet of the IP phones, and return that as a dictionary which contains, among other things, the device IDs which we’ll need in the next step.
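
Here’s a rough sketch of that filtering, again reusing the url, auth and headers plumbing from the other posts in this series. The query parameter on the device endpoint and the key names in the response are my assumptions from the eAPI docs, and this simple version only checks the managed IP address and returns a list of device dictionaries.

import ipaddress
import json
import requests

def filter_devices(category_id, network):
    '''Return the devices in the chosen category that also live inside the given subnet.'''
    f_url = url + "/imcrs/plat/res/device?category=" + category_id + "&size=1000"   # assumed query syntax
    r = requests.get(f_url, auth=auth, headers=headers)
    devices = json.loads(r.text)["device"]            # assumed key name in the JSON response
    subnet = ipaddress.ip_network(network)
    return [dev for dev in devices
            if ipaddress.ip_address(dev["ip"]) in subnet]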

Putting it all together

So the last piece of code is where the magic actually happens.

First we assign the output of the filtering function above into the variable called dev_list.  Essentially, this is a list of devices which meet the search criteria of the filtering function.

In our case, this means the list will consist of all the IP phones in the specific subnet that we filtered.

From there, we use the items in dev_list as input to a for loop which changes them into the new category.  ( See how we use the same print_category function from above? )
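
Put together with the sketches above, the loop looks something like this. The category IDs and subnet are example values, and the PUT endpoint and payload key are assumptions; verify them in the eAPI docs before you point this at production.

dev_list = filter_devices("4", "10.10.50.0/24")    # example: Desktop category, voice subnet
print_category()                                    # show the operator the available categories
new_category = input("Which categoryId should these devices move to? ")   # e.g. "7" for Voice

for dev in dev_list:
    payload = json.dumps({"category": new_category})               # assumed payload key
    f_url = url + "/imcrs/plat/res/device/" + str(dev["id"])        # assumed endpoint
    r = requests.put(f_url, data=payload, auth=auth, headers=headers)
    if r.status_code in (200, 204):
        print("Device " + dev["ip"] + " recategorized")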

Wrapping it up

Hopefully this is a fairly useful example of how putting a few basic python functions together can help to substantially cut down on the amount of time it takes to perform what’s really a simple task. That’s the whole point of automation right?

As I continue learning, I can already see there are a bunch of ways to improve this code, but hopefully having a simple working example will help people who are a couple of steps behind me on this journey take another step forward.

If you’re ahead of me on this journey and have suggestions, please feel free to comment!

@netmanchris

Automating your NMS build using Python and Restful APIs Part 1 – Creating Operators

It’s a funny world we live in.  Unless you’re hiding under a rock, there’s been a substantial push in the industry over the last few years to move away from the CLI.  As someone right in the middle of this swirling vortex of inefficiency, I’d like to suggest that it’s not so much the CLI that’s the problem, but the fact that each box is handled on an individual basis and that human beings access the API through a keyboard. Not exactly next-generation technology.

 

I’ve been spending a lot of time learning python and trying to apply it to my daily tasks. I started looking at the HP IMC network management station a few months ago, mainly as a way to start learning about how I can use python to access RESTful APIs, as well as to gain some hands-on experience working with JSON and XML. As an observation, it’s interesting to me that I’m using a CLI ( python ) to configure an NMS ( IMC ) that I’m using to avoid the CLI ( network devices ).

I’ve got a project I’m working on to try to automate a bunch of the initial deployment functions within my NMS. There are a bunch of reasons to do this that are right for the business: being able to push information gathering onto the customer, being able to use lower-skilled ( and hence lower-paid! ) resources to do higher-level tasks, being able to be more efficient in your delivery, undercutting the competitors on price and over-delivering on quality. It’s a really good project to sink my teeth into and use some of my growing coding skills to make a difference to the business.

This is the first post in which I’ll discuss and document some of the simple functions I’m developing. I make no claims to be a programmer, or even a coder, but I’m hoping someone can find something here useful, and possibly get inspired to start sharing whatever small project they’re working on as well.

 

Without further ado, let’s jump in and look at some code. 

What’s an Operator

Not familiar with HP IMC?  You should be! It’s chock full of goodness and you can get a 60-day free trial here.   In IMC, an Operator is someone who has the right to log into the system and perform tasks in the NMS itself.  The reason they use the word operator vs. user is that there’s a fully integrated BYOD solution available as an add-on module which treats a user as a resource, which of course is not the same thing as an administrator on the system.

IMC’s got a full RBAC system as well, which allows you to assign different privilege levels to your operators, from view-only to root-equivalent access, as well as splitting up which devices you can perform actions on and segmenting which actions you’re allowed to perform. Pretty powerful stuff once you understand how the pieces go together.

Adding an Operator in the GUI

 This is a screen capture of the dialog used to add an operator into IMC.  It’s intuitive. You put the username in the username box, you put the password in the password box. Pretty easy right?

If you know what you’re doing and you’re a reasonably good typist, you can probably add an operator in a minute or less.


Where do Operators come from?

Don’t worry. This isn’t a birds-and-bees conversation.  One of the biggest mistakes that I see when people start into any network management system project, whether that’s SolarWinds, Cisco Prime, WhatsUp Gold, HP NNMi, or HP IMC, is that they don’t stop to think about what they want/need to do before they start the project.  They typically sit down, start an auto-discovery and then start cleaning up afterwards.  Not exactly the best way to ensure success in your project, is it?

When I get involved in a deployment project, I try to make sure I do as much of the information gathering up front. This means I have a bunch of excel spreadsheets that I ask them to fill in before I even arrive onsite. This ensures two things:

  1. I can deliver on what the customer actually wants
  2.  I know when I’m done the project and get to walk away and submit the invoice. 

 

I won’t make any judgement call on which one of those is more important. 

 

 

My Operator Template

My operator template looks like this

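Since the CSV columns map straight onto the JSON fields we’ll see a little further down, a one-row example looks something like this (the values are the same ones used in the JSON example later in this post):

fullName,name,password,defaultAcl,authType,operatorGroupId,sessionTimeout,desc
Christopher Young,cyoung,access4chris,0,0,1,10,admin account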

The values map to the screen shot above exactly as you would expect them to. 

Full name is the full name. Name is the login name, password is the password etc…  

The authType is a little less intuitive, although it is documented in the API docs. The authType maps to the authentication type above which allows you to choose how this specific operator is going to authenticate, through local auth, LDAP, or RADIUS. 

The operator group, which is “1” in my example, maps to the admin operator group which means that I have root-level access on the NMS and can do anything I want. Which is, of course, how it should be, right?

 

The Problem

So I’ve got a CSV file and I know it takes about one minute to create an operator because I can type and I know the system. Why am I automating this? Well, there are a couple of reasons for that.

  • Because I can and I want to gain more python experience
  • Because if I have to add ten operators, this just became ten minutes.
  • Because I already have the CSV file from the customer. Why would I type all this stuff again?
  • Because I can reuse this same format at every customer project I get involved in. 
  • Because I can blame any typos on the customer

Given time, I could add to this list, but let’s just get to the code. 

The Code

Authenticating to the Restful API

Although the auth examples in the eAPI documentation use the standard urllib HTTP library, I’ve found that the requests library is MUCH more user-friendly and easier to work with.

So I first create a couple of global variables called url and auth that I will use to store the credentials.

 

import csv
import json
import requests

#url header to prepend on all IMC eAPI calls
url = None

#auth handler for eAPI calls
auth = None

Now we get to the meat. I think this is pretty obvious, but this function gathers the username and password used to access the eAPI and then tests them out to make sure they’re valid. Once they’re verified as working ( the 200 OK check ), the credentials are stored in the url and auth global variables for use later on. I’m sure someone could argue that I shouldn’t be using global variables here, but it works for me. :)
 
def imc_creds():
    ''' This function prompts user for IMC server information and credentials and stores
    values in url and auth global variables'''
    global url, auth, r
    # note: relies on the headers variable defined further down, just before create_operator()
    imc_protocol = input("What protocol would you like to use to connect to the IMC server: \n Press 1 for HTTP: \n Press 2 for HTTPS:")
    if imc_protocol == "1":
        h_url = 'http://'
    else:
        h_url = 'https://'
    imc_server = input("What is the ip address of the IMC server?")
    imc_port = input("What is the port number of the IMC server?")
    imc_user = input("What is the username of the IMC eAPI user?")
    imc_pw = input('''What is the password of the IMC eAPI user?''')
    url = h_url+imc_server+":"+imc_port
    auth = requests.auth.HTTPDigestAuth(imc_user,imc_pw)
    test_url = '/imcrs'
    f_url = url+test_url
    try:
        r = requests.get(f_url, auth=auth, headers=headers)
    except requests.exceptions.RequestException as e: #checks for requests exceptions
        print ("Error:\n"+str(e))
        print ("\n\nThe IMC server address is invalid. Please try again\n\n")
        return imc_creds()
    if r.status_code != 200: #checks for valid IMC credentials
        print ("Error: \n Your credentials are invalid. Please try again\n\n")
        imc_creds()
    else:
        print ("You've successfully accessed the IMC eAPI")
 
 
I’m using this function to gather the credentials of the operator accessing the API. By default, when you first install HP IMC, these are admin/admin.    You could ask: Why don’t you just hardcode those into the script? Why bother with writing a function for this?
Answer: Because I want to reuse this as much as possible, and there are lots of things that you can do with the eAPI that you would NOT want just anyone doing. Plus, hardcoding the username and password of the NMS that controls your entire network is just a bad idea in my books.
 

Creating the Operators

I used the HP IMC eAPI /plat/operator POST call as the basis for this function.


 

After doing a bit of testing, I arrived at a JSON array which would allow me to create an operator using the “Try it now” button in the API docs.  ( http://IMC_SERVER:PORTNUMBER/imcrs to access the online docs BTW ).

{
    "password": "access4chris",
    "fullName": "Christopher Young",
    "defaultAcl": "0",
    "operatorGroupId": "1",
    "name": "cyoung",
    "authType": "0",
    "sessionTimeout": "10",
    "desc": "admin account"
}

Using the Try it now button, you can also see the exact URL that is used to call this API. 

A 201 response means that it was successfully executed. ( You might want to read up on HTTP status codes as it’s not quite THAT simple, but for our purposes, it will work. )


Now that I’ve got a working JSON array and the URL I need, I’ve got all the pieces I need to put this small function together. 

You can see that the first thing I do is check whether the auth and url variables are still set to None. If they are, I use the imc_creds function from above to gather the credentials and store them.

 

I create another variable called headers, which stores the headers for the HTTP call. By default, the HP IMC eAPI will respond with XML. After working with XML for a few months, I decided that I prefer JSON. It just seems easier for me to work with.

This piece of code takes the CSV file that we created above and decodes the CSV file into a python dictionary using the column headers as the key and any additional rows as the values. This is really cool in that I can have ten rows, 50 rows, or 100 rows and it doesn’t matter. This script will handle any reasonable number you throw at it. ( I’ve tested up to 20 ).

 

#headers forcing IMC to respond with JSON content. XML content return is the default

headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'Accept-encoding': 'application/json'}

def create_operator():
    if auth is None or url is None: #checks to see if the imc credentials are already available
        imc_creds()
    create_operator_url = '/imcrs/plat/operator'
    f_url = url+create_operator_url
    with open ('imc_operator_list.csv') as csvfile: #opens imc_operator_list.csv file
        reader = csv.DictReader(csvfile) #reads each row of the CSV as a python dictionary keyed by the column headers
        for operator in reader:
            payload = json.dumps(operator, indent=4) #dumps each row of the CSV to a JSON string
            r = requests.post(f_url, data=payload, auth=auth, headers=headers) #POSTs the payload to the operator URL
            if r.status_code == 409:
                print ("Operator Already Exists")
            elif r.status_code == 201:
                print ("Operator Successfully Created")

 Now you run this code and you’ve suddenly got all the operators in the CSV file imported into your system. 

Doing some non-scientific testing, meaning I counted in Mississippi’s, it took me about 3 seconds to create 10 operators using this method.  

Time isn’t Money

Contrary to the old saying, time isn’t actually money. We can always get more money. There’s lots of ways to do that. Time on the other hand can never be regained. It’s a finite resource and I’d like to spend as much of it as I can on things that I enjoy.  Creating Operators in an NMS doesn’t qualify.

Now, I hand off a CSV file to the customer, make them fill out all the usernames and passwords, and then just run the script. They have all the responsibility for the content, and all I have to do is a visual check of the CSV file to make sure that they didn’t screw anything up.

 

Questions or comments or better ways to do this?  Feel free to post below. I’m always looking to learn.

 

@netmanchris 

 

Surfing your NMS with Python

Python is my favourite programming language. But then again, it’s also the only one I know. 🙂

I made a choice to go with python because, honestly, that’s what all the cool kids were doing at the time. But after spending the last year or so learning the basics of the language, I do find that it’s something that I can easily consume, and I’m starting to get better with all the different resources out there. BTW http://www.stackoverflow.com is your friend.  you will learn to love it.

On with the show…

So in this post, I’m going to show how to use python to build a quick script that will allow you to issue the RealTimeLocate API to the HP IMC server. In theory, you can build this against any RESTful API, but I make no promises that it will work without some tinkering.

Planning the project.

I’ve written before how I’m a huge fan of OPML tools like Mind Node Pro.  The first step for me was planning out the pieces I needed to make this:

  • usable in the future
  • actually work in the present
In this case I’m far more concerned about the present as I’m fairly sure that I will look back on this code in a year from now and think some words that I won’t put in print.
Aside: I’ve actually found that using the troubleshooting skills I’ve honed over the years as a network engineer helps me immensely when trying to decompose what pieces will need to go in my code. I actually think that Network Engineers have a lot of skills that are extremely transportable to the programming domain. Especially because we tend to think of the individual components and the system at the same time, not to mention our love of planning out failure domains and forcing our failures into known scenarios as much as possible.


Auth Handler

Assuming that the RESTful service you’re trying to access requires you to authenticate, you will need an authentication handler to deal with the username/password stuff that a human being is usually required to enter. There are a few different options here. Python actually ships with urllib or some variant, depending on the version of python you’re working with.  For ease of use reasons, and because of a strong recommendation from one of my coding mentors, I chose to use the requests library.  This is not shipped by default with the version of python you download over at http://www.python.org, but it’s well worth the effort of PIP’ing it into your system.

The beautiful thing about requests is that the documentation is pretty good and easily readable.

In looking through the HP IMC eAPI documentation and the requests library, I settled on digest authentication ( HTTPDigestAuth ).


So here’s how this looks for IMC.

Building the Authentication Info

>>> import requests   #imports the requests library; you may need to PIP this in if you don't have it already

>>> from requests.auth import HTTPDigestAuth    # this imports the HTTPDigestAuth method from the requests library.
>>>
>>> imc_user = '''admin'''   #The username used to auth against the HP IMC server
>>> imc_pw = '''admin'''   #The password of the account used to auth against the HP IMC server.
>>>
>>> auth = requests.auth.HTTPDigestAuth(imc_user,imc_pw)     #This puts the username and password together and stores them as a variable called auth

We’ve now built the auth handler to use the username “admin” with the password “admin”. For a real environment, you’ll probably want to set up an Operator Group with access only to the eAPI functions and lock this down to a secret username and password. The eAPI is powerful; make sure you protect it.

Building the URL

So for this to work, I need to assign a value to the host_ip variable so that the URL will complete with a valid response. The other thing to watch for is types. Python can be quite forgiving at times, but if you try to add two objects of the wrong type together… it mostly won’t work.  So we need to make sure the host_ip is a string, and the easiest way to do that is to put three quotes around the value.

In a “real” program, I would probably use the input function to allow this variable to be input as part of the flow of the program, but we’re not quite there yet.

>>> host_ip = '''10.101.0.109'''   #variable that you can assign to a host you want to find on the network
>>> h_url = '''http://'''    #prefix for building URLs; use HTTP or HTTPS
>>> imc_server = '''10.3.10.220:8080'''   #match the port number of the IMC server, default 8080 or 8443
>>> url = h_url+imc_server    #combines the h_url and the IP address of the IMC box as a base URL to use later
>>> find_ip_host_url = ('''/imcrs/res/access/realtimeLocate?type=2&value='''+host_ip+'''&total=false''')   # This is the RealTimeLocate API URL with a variable set
>>>

Putting it all together.

This line puts together the url that we’re going to send to the web server. You could ask, “Hey man, why didn’t you just drop the whole string in one variable to begin with?”   That’s a great question.  There’s a concept in programming called DRY ( Don’t Repeat Yourself ).  The idea is that when you write code, you should never write the same thing twice. Think in a modular fashion, which allows you to reuse pieces of code again and again.

In this example, I can easily write another f_url variable and assign to it another RESTful API call that gets me something interesting from the HP IMC server. I don’t need to re-write the h_url portion or the server IP address portion of the URL.  Make sense?

>>> f_url = url + find_ip_host_url
>>>    #  This is a very simple operation that puts together the url and the find_ip_host_url, which produces the full URL for the HTTP call.

Executing the code.

Now the last piece is where we actually execute the code. This will issue a get request, using the requests library.  It will use the f_url as the actual URL it’s going to pass, and it will use the variable auth that we created in the Authentication Info step above to automatically populate the username and password.

The response will get returned in a variable called r.

>>> r = requests.get(f_url, auth=auth)    #  Using the requests library's get method, we pass f_url as the URL we're going to access and auth as the auth argument to define how we authenticate. Pretty simple actually.
>>>

The Results

So this is the coolest part. We can now see what’s in r.  Did it work? Did we find our lost, scared little host?  Let’s take a look.

>>> r
<Response [200]>

Really? That’s it?

The answer is “yes”.  That’s what’s been assigned to the variable r.  200 OK may look familiar to you voice engineers who know SIP, and it means mostly the same thing here. This is a response code to let you know that your request was successful – but not what we’re looking for. I want that content, right?  If I do a type(r), which will tell me what python knows about what kind of object r is, I get the following.

>>> type(r)
<class 'requests.models.Response'>

So this tells us that maybe we need to go back to the requests documentation and look for info on the response object. Now we know how to access the part of the response that I wanted to see, which is the reply to my request on where the host with IP address 10.101.0.111 is actually located on the network.

So let’s try out one of the options and see what we get

>>> r.content
b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list><realtimeLocation><locateIp>10.101.0.111</locateIp><deviceId>4</deviceId><deviceIp>10.10.3.5</deviceIp><ifDesc>GigabitEthernet1/0/16</ifDesc><ifIndex>16</ifIndex></realtimeLocation></list>'

How cool is that. We put in an IP address and we actually learned four new things about that IP address without touching a single GUI. And the awesome part of this?  This works across any of the devices that HP IMC supports.

Where to from here?

So we’ve just started on our little journey here.  Now that we have some hints to the identity of the network devices and specific interface that is currently harbouring this lost host, we need to use that data as hints to continue filling in the picture.

But that’s in the next blog…

Comments or Questions?  Feel free to post below!