Hey Alexa, Turn my lab on!

TL;DR: Put together a custom Alexa Skill so I can turn the switches and routers in my lab on, as shown in the video here. Feels pretty great.

 

As most of my twitter followers have noticed, I’ve been doing a lot of Home Automation, mostly with Apple #HomeKit. But I also picked up an Amazon Dot because… well, why not?

One of the great things about the digital voice assistants from Amazon is that they have created an extensible framework that enables those with a little bit of coding skill to add to Mrs. A’s already impressive array of abilities.

The Amazon Alexa developer page is pretty impressive. There’s a ton of information and tutorials there, as well as an SDK and code examples in Node.js. I’m almost exclusively a python coder at this point, so I decided to look for something a little more familiar and came upon this.

Flask-Ask

Flask-Ask is a Flask extension that makes building Alexa skills for the Amazon Echo easier and much more fun.

Essentially, John Wheeler took the Flask WSGI (web) framework and made it super easy to create Amazon Alexa skills using this familiar library. I’ve used Flask in the past for a few projects, so this was a no-brainer for me.

John also put together a set of tutorials here which can be used to jumpstart the Alexa skills development process. There’s also a Flask-Ask quickstart on the Amazon developer blog, which pointed me towards ngrok, and that came in really handy!

Ngrok allows you to create secure tunnels to a local host. You run ngrok with the port number you want to expose and it automatically exposes that port through a public URL on the ngrok service. It’s really, really cool.

The Project

Like many of us, I have a physical lab in my house from my CCIE studies. As well, specializing in network management over the years requires access to physical gear in a lot of instances. Powering on that gear full time is out of the question because of the cost and power drain. As I’m sure you can imagine, going back and forth to turn things on and off gets old real quick.

To address that problem, I picked up a couple of intelligent PDUs on eBay. There are many “smart” PDUs out there, and I happen to have a set of Server Technology units that let me control each socket on a 16-port power bar. Pretty cool, right?  No more walking to the garage, which is a good thing when you’re trying to focus on a problem.

So things are heading in the right direction; I can pop over to the local web interface of my PDU and turn my devices on and off. That’s nice… but all the home automation stuff I’ve been doing led me to wonder…

Why can’t I just ask for the device to be turned on?

I can ask Siri or Alexa to turn on the lights or adjust the temperature of my house. I can ask them about the weather or to check my calendar. There’s no reason why I shouldn’t be able to do the same with my lab gear.

So I decided to make that a reality.

What’s not covered in this blog

The one step which is not covered in this blog is writing the pyservertech library, which I built on top of the pysnmp library. Essentially, I walked the MIBs until I found how to gather the info I needed and figured out which specific MIB object I needed to set to turn an individual power socket on or off. I might do a blog on that specific piece too, but for now, I’m trying to focus on the Alexa piece.

If there’s interest, please let me know in comments or on twitter and I’ll prioritize the SNMP set blog. 🙂 

Building the Alexa Skill

Alexa skills are a combination of three components:

  • Flask-Ask – This is the actual code, and includes the templates file shown below
  • Intent Schema – Kinda obvious, but this defines the various intents that you're going to use in your skill
  • Sample Utterances – A collection of the various verbal phrases and how they are connected to the intents.

I’ll do my best to connect these in the code below, but I’d really recommend going through a couple of the tutorials above and playing around with the examples to build some intuition on how these components connect.

The code below lets a user do the following using Amazon Alexa’s voice assistant.

  1. Ask Alexa to open the Lab skill ( Lab is what I called it )
  2. Alexa asks the user “Welcome to the lab. I’m going to ask you which plug you want me to turn on. Ready?”
  3. User responds with “Yes” or “Sure”
  4. Alexa asks the user “Please tell me which power socket you would like to turn on?”
  5. User responds with a number which is the power socket they would like to turn on
  6. Alexa decodes the response and returns the number in a JSON array to the local Flask server
  7. Python code takes the number from the JSON array and uses that as input into the power_on() function.
  8. The power_on() function sends an SNMP SET command to the appropriate power socket.
  9. Device powers on. Alexa says “I’ve turned the power socket on.”
  10. I don’t walk to the garage.

Now that we understand how the code is supposed to work, let’s take a look at the individual pieces and how they fit together.

Alexa Skill

This is the python code that you’ll run on your local machine. It contains only a portion of the logic of the “program”, as Amazon is doing the majority of the heavy lifting on their side: the speech recognition and returning the appropriate data in a JSON array.
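A stripped-down sketch of the skill logic looks something like this. The intent and slot names are illustrative and have to line up with whatever you put in the Intent Schema and Sample Utterances below, and power_on() is the pyservertech helper mentioned earlier (the import path here is an assumption).

from flask import Flask, render_template
from flask_ask import Ask, question, statement

# Assumes the pyservertech library described above exposes power_on(socket)
from pyservertech import power_on

app = Flask(__name__)
ask = Ask(app, '/')


@ask.launch
def start_skill():
    # Spoken when the user says "Alexa, open the lab"
    return question(render_template('welcome'))


@ask.intent('YesIntent')
def ask_which_socket():
    # Ask the user which power socket they want turned on
    return question(render_template('which_socket'))


@ask.intent('SocketOnIntent', convert={'socket': int})
def turn_socket_on(socket):
    # Alexa hands the spoken number back in the JSON request;
    # Flask-Ask converts it to an int and we feed it to power_on()
    power_on(socket)
    return statement(render_template('powered_on'))


if __name__ == '__main__':
    app.run(debug=True)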

Templates file

This file contains the various phrases that Alexa is going to speak on behalf of your application. You can see we’ve only got a few different phrases here.
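Flask-Ask reads these out of a templates.yaml file sitting next to the skill code; the keys are whatever names you reference in render_template(). A minimal sketch using the phrases from the workflow above:

welcome: Welcome to the lab. I'm going to ask you which plug you want me to turn on. Ready?
which_socket: Please tell me which power socket you would like to turn on?
powered_on: I've turned the power socket on.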

Intent_Schema

This file gets loaded on the Amazon website.  Using the developer interface, you load the JSON which defines the Intent Schema directly into the intent schema location on the Interaction Model page.
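For this skill, the schema looks something like the following. The intent and slot names are the same illustrative ones used in the code sketch above; yours can be anything as long as they’re consistent, and the slot uses the built-in AMAZON.NUMBER type.

{
  "intents": [
    {"intent": "YesIntent"},
    {
      "intent": "SocketOnIntent",
      "slots": [
        {"name": "socket", "type": "AMAZON.NUMBER"}
      ]
    }
  ]
}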


Sample Utterances File

Just like the Intent Schema, the Sample Utterances file is also loaded directly into the Amazon developer portal, into the Interaction Model for this specific skill.
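The format is one utterance per line: the intent name, followed by a phrase, with slots in curly braces. An illustrative set matching the schema above might look like this:

YesIntent yes
YesIntent sure
SocketOnIntent {socket}
SocketOnIntent turn on socket {socket}
SocketOnIntent power on socket number {socket}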


What’s next

This is just the start of this skill. All it does right now is turn things on, which is cool, but I want more. Just off the top of my head, here are some of the things I’d like to do:

  • Turn individual devices on or off
  • Turn individual devices on or off by name “Alexa ask the lab to turn on the HPE 2920 switch!”
  • Turn groups of devices on or off “Alexa ask the lab to turn on the Juniper branch!”
  • Request data from the PDUs “Alexa ask the lab How many devices are currently turned on?”  Or “ask the lab how much power is currently being used”

As you can imagine, this would require a lot more code and logic to accomplish all these goals. Definitely something I’m going to pursue, but I’m hoping that the simple example above helps to inspire someone else in their journey down this path.

Questions? Comments? You know what to do…

@netmanchris


Using JSONSchema to Validate input

There are a lot of REST APIs out there. Quite a few of them use JSON as the data structure which allows us to get data in and out of these devices. There are a lot of network focused blogs that detail how to send and receive data in and out of these devices, but I wasn’t able to find anything that specifically talked about validating the input and output of the data to make sure we’re sending and receiving the expected information.

Testing is a crucial, and IMO too often overlooked, part of the Infrastructure as Code movement. Hopefully this post will help others start to think more about validating input and output of these APIs, or at the very least, spend just a little more time thinking about testing your API interactions before you decide to automate the massive explosion of your infrastructure with a poorly tested script. 🙂

What is JSONSchema

I’m assuming that you already know what JSON is, so let’s skip directly to talking about JSON Schema. jsonschema is a python library which allows you to take your input/output and verify it against a known schema which defines the data types you’re expecting to see.

For example, consider this snippet of a schema which defines what a valid VLAN object looks like

"vlan_id":
{
    "description": "The unique ID of the VLAN. IEEE 802.1Q VLAN identifier (VID)", 
    "minimum": 1, 
    "maximum": 4094, 
    "type": "integer", 
    "sql.not_null": true
}

You can see that this is a small set of rules that defines what is a valid entry for the vlan_id property of a VLAN.  As a network professional, it’s obvious to us that a valid VLAN ID  must be between 1 and 4094. We know this because we deal with it every day. But our software doesn’t know this. We have to teach it what a valid vlan_id property looks like and that’s what the schema does.


Why do we care?

Testing is SUPER important. By being able to test the input/output of what you’re feeding into your automation/orchestration framework, it can help you to avoid, at worst, a total meltdown, or, at best, a couple of hours trying to figure out why your code doesn’t work.

Using JSONSchema

So the two things you’re going to need to use JSONSchema are

  • The JSON Schema for a specific API endpoint
  • The input/output that you want to validate.

In this case, we’ll use a VLAN object that is coming out of an ArubaOS-Switch REST API.

You did know the ArubaOS-Switches have a REST API, right?

Step 1 – Loading the VLAN object

We’re going to gather the output from the VLANs API. Instead of writing custom code, we’ll just use the pyarubaoss library. I’ll leave you to check out the GitHub repo and just paste the JSON output for a single VLAN here. I’m also going to create a second VLAN object with a vlan_id of 5000, just to show how this works. 5000, of course, is not valid and we’d like to prove that. Right?
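A trimmed-down illustration of the two objects we end up with is below. The real output from the switch has more fields than this, but vlan_id is the property we care about here.

# Illustrative only: the real ArubaOS-Switch VLAN object has more fields
vlan_good = {"vlan_id": 10, "name": "TEST_VLAN"}
vlan_bad = {"vlan_id": 5000, "name": "BAD_VLAN"}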

Step 2 – Loading the JSON Schema Definition

Now that we have the output, we want to make sure that it complies with the JSON Schema definition that we’ve been provided.

Loading the JSON schema

Here’s a sub-set of the JSON schema that defines what a valid VLAN looks like
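Expressed as a python dict so it can be handed straight to the validate function, and trimmed down to just the properties we care about here, it looks roughly like this:

vlan_schema = {
    "type": "object",
    "properties": {
        "vlan_id": {
            "description": "The unique ID of the VLAN. IEEE 802.1Q VLAN identifier (VID)",
            "minimum": 1,
            "maximum": 4094,
            "type": "integer"
        },
        "name": {
            "type": "string"
        }
    },
    "required": ["vlan_id"]
}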

Step 3 – Importing the JSON Schema Library and Validating

Now we’re going to load the JSON Schema library into our python session and use it to validate the VLAN object using the Schema we defined above.

First we’ll look at the vlan_good object and run it through the validate function
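With the illustrative vlan_good object and the schema sketched above, that’s just:

import jsonschema

jsonschema.validate(vlan_good, vlan_schema)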

As you can see, there’s nothing to see here. Basically this means that the vlan_good object conforms to the provided JSON Schema. The VLAN ID is valid, as it’s an integer value between 1 and 4094.

Now let’s take a look at the vlan_bad object and run it through the same validate function
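Same call, different data (again using the illustrative vlan_bad object from above):

jsonschema.validate(vlan_bad, vlan_schema)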

We can see that the validate function now raises an exception and lets us know very specifically that the VLAN ID 5000 is not valid:

jsonschema.exceptions.ValidationError: 5000 is greater than the maximum of 4094

Pretty cool right? We can still definitely shoot ourselves in the foot, but at least we know the input/output data that we’re using for our API is valid. To me this is important for two reasons

  • I can validate that the data I’m trying to send to a given API conforms to what that API is expecting
  • I can validate that the vendor didn’t suddenly change their API which is going to break my code

Wrap Up

There are a lot of networking folk who have started to take on the new set of skills required to automate their infrastructure. One of the most crucial parts of this is testing and validating the code to ensure that you’re not just blowing up your network more efficiently. JSON Schema is one of the tools in your tool box that can help you do that.

Comments, questions? Let me know

@netmanchris

Shedding the Lights on Operations: REST, a NMS and a Lightbulb

It’s obvious I’ve caught the automation bug. Beyond just automating the network I’ve finally started to dip my toes in the home automation pool as well.

The latest addition to the home project was the Philips Hue light bulbs. Basically, I just wanted a new toy, but imagine my delight when I found that there’s a full REST API available.

I’ve got a REST API and a light bulb and suddenly I was inspired!

The Project

Network Management Systems have long suffered from information overload.

Notifications have to be tuned and if you’re really good you can eventually get the stream down to a dull roar. Unfortunately, the notification process is still broken in that the notifications are generally dumped into your email which if you are anything like me…


Yes, that really was my unread count as of this writing.

One of the ways of dealing with the deluge is to use a different medium to deliver the message. Many NMS systems, including HPE IMC, have the capability of issuing audio alarms, but let’s be honest: that can get pretty annoying as well, and it’s pretty easy to mute them.

I decided that I would use the REST interfaces of the HPE IMC NMS and the Philips Hue lightbulbs to provide a visual indication of the general state of the system. Yes, there’s a valid, business-justifiable reason for doing this. But c’mon, we’re friends, right?  The real reason I worked on this was because they both have REST APIs and I was bored. So why not, right?

The other great thing about this is that you don’t need to spend your day looking at a NOC screen. You can login when the light goes to whatever color you decide is bad.

Getting Started with the Philips Hue API

The Philips Hue SDK getting started guide was actually really easy to work through. As well, there’s an embedded HTML interface that allows you to play around with the REST API directly on the Hue bridge.

Once you’ve set up your initial authentication to the bridge (see the getting started guide), you can log in to the bridge at http://ip_address/debug/clip.html

From there it’s all fun and games. For instance, if you wanted to see the state of light number 14, you would navigate to /api/%app_name%/lights/14 and get the light’s full state back in nice, easy to read JSON.


From here, it would be fairly easy to use an HTTP library like requests to start issuing HTTP commands at the bridge but, as I’m sure you’re aware by now, there’s very little uncharted territory in the land of python.

PHUE library

Of course someone has been here before me and has written a nice library that works with both python 2 and python 3. You can see the library source code here, or you can simply run

>>> pip install phue

from your terminal.

The Proof of Concept

You can check out the code for the proof of concept here. Or you can watch the video below.

Breaking down the code

1) Grab Current Alarm List

2) Iterate over the Alarms and find the one with the most severe alarm state

3) Create a function to correlate the alarm state to the color of the Philips Hue lightbulb.

4) Wait for things to move away from green.
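Pulled together, a stripped-down sketch of those four steps looks something like this. The get_current_alarms() function is a stand-in for the pyhpeimc call that returns the live alarm list (the severity field name is an assumption for illustration), the bridge IP and light number are placeholders, and the hue values are roughly the standard Philips red/yellow/green numbers.

from phue import Bridge

BRIDGE_IP = '192.168.1.100'   # placeholder: your Hue bridge address
LIGHT_ID = 14                 # placeholder: the bulb acting as the status light

# Rough Hue colour values: 0 = red, ~12750 = yellow, ~25500 = green
SEVERITY_TO_HUE = {'critical': 0, 'major': 0, 'minor': 12750, 'warning': 12750, 'ok': 25500}


def get_current_alarms():
    # Stand-in: the real proof of concept pulls the live alarm list from
    # HPE IMC with pyhpeimc and returns a list of dicts with a severity field.
    return []


def worst_severity(alarms):
    # Pick the most severe level present; an empty alarm list means all clear
    order = ['critical', 'major', 'minor', 'warning', 'ok']
    severities = [alarm['severity'] for alarm in alarms] or ['ok']
    return min(severities, key=order.index)


def update_light():
    bridge = Bridge(BRIDGE_IP)
    bridge.connect()
    bridge.set_light(LIGHT_ID, 'on', True)
    bridge.set_light(LIGHT_ID, 'sat', 254)
    bridge.set_light(LIGHT_ID, 'hue', SEVERITY_TO_HUE[worst_severity(get_current_alarms())])


update_light()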

Lessons Learned

The biggest lesson here was that colours on a screen and colours on a light bulb don’t translate very well. The green and the yellow lights weren’t far enough apart to be useful as a visual indicator of the health of the network, at least not IMHO.

The other thing I learned is that you can waste a lot of time working on aesthetics. Because I was leveraging the PHUE library and the PYHPEIMC library, 99% of the code was already written. The project probably took me less than 10 minutes to get the logic together and more than a few hours playing around with different colour combinations to get something that I was at least somewhat ok with. I imagine the setting and the ambient light would very much affect whether or not this looks good in your place of business. If you use my code, you’ll want to tinker with it.

Where to Next

We see IoT devices all over in our personal lives, but it’s interesting to me that I could set up a visual indicator of network health for a NOC environment for less than $100. Just thinking about some of the possibilities here:

  • Connect each NOC agent’s ticket queue with the light color. Once they are assigned a ticket, they go orange for DO-NOT-DISTURB
  • Connect the APP to a ClearPass authentication API and flash the bulbs blue when the boss walks in the building. Always good to know when you should be shutting down solitaire and looking like you’re doing something useful, right?
  • Connect the APP to a Meridian location API and turn all the lights green when the boss walks on the floor.

Now I’m not advocating you should hide things from your boss, but imagine how much faster network outages would get fixed if we didn’t have to stop fixing them to explain to our boss what was happening and what we were going to be doing to fix them, right?

Hopefully, this will have inspired someone to take the leap and try something out.

Comments, questions?

@netmanchris

Auto Network Diagram with Graphviz

One of the most useful and least updated pieces of network documentation is the network diagram. We all know this, and yet we still don’t have/make time to update it until something catastrophic happens, and then we say to ourselves

Wow. I wish I had updated this sooner…

Graphviz

According to the website 

Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics,  software engineering, database and web design, machine learning, and in visual interfaces for other technical domains.

note: Lots of great examples and docs there BTW.  Definitely check it out.

Getting started

So you’re going to have to first install Graphviz from their website. Go ahead… I’ll wait here.

Install the graphviz python binding

This should be easy assuming you’ve already got python and pip installed. I’m assuming that you do.

>>> pip install graphviz

Getting LLDP Neighbors from Arista Devices

You can use the Arista pyeapi library, also installable through pip. There’s a blog which introduces you to the basics here which you can check out. Essentially I followed that blog and then substituted the “show lldp neighbors” command to get the output I was looking for.

Creating a Simple Network Diagram

The code for this is available here

Essentially, I’m just parsing the JSON output from the Arista eAPI and creating a DOT file which is used to generate the diagram.
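The gist of it, sketched out below. This assumes an eapi.conf entry named veos01 for the switch; the neighborDevice, neighborPort and port field names are what the eAPI JSON returns for “show lldp neighbors”, but check them against your own output.

import pyeapi
from graphviz import Graph

# Assumes a [connection:veos01] section in ~/.eapi.conf (see the pyeapi docs)
node = pyeapi.connect_to('veos01')
hostname = node.enable('show hostname')[0]['result']['hostname']
neighbors = node.enable('show lldp neighbors')[0]['result']['lldpNeighbors']

dot = Graph(comment='Simple LLDP Topology', format='png')
dot.node(hostname)
for neighbor in neighbors:
    dot.node(neighbor['neighborDevice'])
    # Label the link with the local and remote interface names
    dot.edge(hostname, neighbor['neighborDevice'],
             label=neighbor['port'] + ' -- ' + neighbor['neighborPort'])

# Writes SimpleTopo and renders SimpleTopo.png
dot.render('SimpleTopo')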

Pros: It’s automated

Cons: It’s not very pretty at all.

SimpleTopo.png

 

Prettying it up a Bit

Code for this is available here

So with a little bit of work using the .attr methods we can pretty this up a bit.  For example the

dot.attr('node', shape='box')

method turns the node shape from an ellipse into a box shape. The other transformations are pretty obvious as well.

Notice that we changed the shape of the node, the style of the arrows a bit, and also shaded in the box. There are lots of other modifications we can make, but I’ll leave you to check out the docs for that.

SimplePrettierTopo.png

 

 

Adding your own graphics

Code for this is available here

Getting a bit closer to what I want, but still I think we can do a bit better. For this example, I used mspaint to create a simple PNG file with a switch-ish image on it. From what I can tell, there’s no reason you couldn’t just use the vendor icons for whatever devices you’re using, but for now, just playing with something quick and dirty.

Once the file is created and placed somewhere in the path, you can use this method

dot.attr('node', image="./images/switch1.png")

to get the right image.  You’ll also notice I used

dot.attr('edge', arrowhead='none')

to remove the arrow heads. ( I actually removed another command, can you spot it? )

SimplePrettierGraphicTopo.png

 

Straighter Lines

Code for this is available here

So looking at this image, one thing I don’t like is the curved lines. This is where Graphviz beat me for the day. I did find that I was able to apply the

dot.graph_attr['splines'] = "ortho"

attribute to the dot object to get me the straight lines I wanted, but when I did that, I got a great message that told me that I would need to use xlabels instead of standard labels.

SimplePrettierGraphicOrthoTopo.png

Next Steps

Code for this is available here

For this next step, it was time to get the info live from the device, and also to attempt to stitch multiple devices together into a single topology. One thing I noticed is that the name of the node MUST match the hostname of the device, otherwise you end up with multiple nodes. You can see there’s still a lot of work to do to clean this up, but I think it’s worth sharing. Hopefully you do too.

MultiTopo.png

 

Thoughts

Pros: Graphviz is definitely cool. I can see a lot of time saved drawing network diagrams here. The fact that you could automatically run this every X period to ensure you have an up-to-date network diagram at all times is pretty awesome. It’s customizable, which is nice, and multi-vendor would be pretty easy to implement. Worst case scenario, you could just poll the LLDP MIB with SNMP and dump the data into the appropriate bucket. Not elegant, but definitely functional.

Cons: The link labels are a pain. In the short time I was playing with it, I wasn’t able to google or documentation my way into what I want, which is a label on each end of the link telling me which interface is on which device, not the glob of data in the middle that makes me wonder which end is which.

The other thing I don’t like is the curvy lines. I want straight lines. Whether that’s an issue with the graphviz python library that I’m using or it’s actually a problem with the whole graphviz framework isn’t clear to me yet. Considering the time saved, I could probably live with this as is, but I’d also like to do better.

If anyone has figured out how to get past these minor issues, please drop me a line!  @netmanchris on twitter or comment on the blog.

As always, comments and fixes are always appreciated!

@netmanchris

Pseudo-Math to Measure Network Fragility Risk

Some of you may have heard me ranting on Packet Pushers on stupid network tricks and why we continue to be forced to implement kluges as a result.  I made some comment about trying to come up with some metric to help measure the deviation of the network from the “golden” desired state to the dirty, dirty thing that it’s become over time due to kluges and just general lack of network hygiene.

So I decided that I would write a bit of code to get the conversation started. All code discussed is available on my github here

The Idea

What I wanted here was to create some pseudo-mathematical way of generating a measurement that can communicate to the management structure WHY the requested change is a really, really, bad idea.

Imagine these two conversations:

bad-conversation

good-conversation

Which conversation would you like to be part of?

Assumptions:

I’m making some assumptions here that I think it’s important to talk about.

  1. You have a source-of-truth defined for your network state. That is you have abstracted your network state into some YAML files or something like that.
  2. You have golden configurations defined in templates (ex Jinja2 ). These templates can be combined with your source-of-truth and used to generate your “golden” config for any network device at any time.
  3. You have “current-state” templates (jinja2) defined that include all your kluges, and that can be combined with your source-of-truth to generate your “current-state” config for any network device at any time.

The Fragility Metric

So how does one calculate the fragility of a network?

Wow! Thanks for asking!

My methodology is something like this.

  1. Generate the configurations for all network devices using the golden configuration templates.
  2. Generate the configurations for all network devices using the “current-state” configuration templates.

We should now be left with a directory full of pairs of configs.

We then use the python difflib SequenceMatcher class to calculate the difference between each pair of files. The difflib library allows us to take two text files, ignore the white space, and compare the contents of the two files. One of the cool things is that it can give us a ratio metric: a number between zero and one that measures how close the two files are.
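Roughly, the per-pair comparison and the roll-up look like the sketch below. The golden/ and current/ directory names are just how I’ve laid things out for illustration; whitespace is treated as junk so formatting differences don’t count against you.

import os
from difflib import SequenceMatcher


def config_ratio(golden_path, current_path):
    # ratio() returns a float between 0 and 1; 1.0 means the two configs are identical
    with open(golden_path) as golden, open(current_path) as current:
        matcher = SequenceMatcher(lambda ch: ch in ' \t\n', golden.read(), current.read())
    return matcher.ratio()


ratios = {}
for filename in os.listdir('golden'):
    ratios[filename] = config_ratio(os.path.join('golden', filename),
                                    os.path.join('current', filename))
    print(filename + ' stability metric: ' + str(ratios[filename]))

stability = sum(ratios.values()) / len(ratios)
print('Network Stability Metric: ' + str(stability))
print('Network Fragility Metric: ' + str(1 - stability))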

What this means is that you can get this as output.

5930-1.cfg stability metric: 1.0
5930-2.cfg stability metric: 0.9958677685950413
7904-1.cfg stability metric: 0.9428861101723556
7904-2.cfg stability metric: 0.9405405405405406

Now that we’ve got a ratio for how different each pair of files is, we can calculate the mean across all the files to get the network stability metric and network fragility metric.

Network Stability Metric: 0.9698236048269844
Network Fragility Metric: 0.030176395173015624

HINT: If you add the two numbers together…

You can also get a nice graph


Note: The pygal library produces a much cooler graphic which you can see here

The Approach

So the first thing I want to make clear is that I don’t intend this to REALLY measure the risk of a given configuration.

One idea I did have was to adjust the weighting of a specific configuration based on the role of that device.

Example – The core switch blowing up is PROBABLY a lot worse than an edge switch tanking because of some kludgey configuration.

This would be fairly easy to implement by placing some meta data on the configs to add their role.

It would be fairly easy to go down rat holes here on trying to identify every single line that’s changed and try to weight individual changes

Example – Look for [‘BGP’,’OSPF’,’ISIS’,’EIGRP’] in the dirty config and then weight those lines higher. Look for [‘RIP’] and rate that even higher.

’Cause… c’mon… friends don’t let friends run RIP, right?

Again, all the code is available here. Have a look. Think about it. Give me your feedback, I’d love to know if this is something you see value in.

 

@netmanchris

Jinja2 and… Powershell? Automation(ish) Microsoft DHCP

Most of us have home labs, right?

I’m in the middle of doing some zero touch provisioning testing, and I had the need to create a bunch of DHCP scopes and reservations, some with scope specific options, and some with client specific options. As often as I’ve had to create a Microsoft DHCP server in the lab and set up some custom scopes, I decided I was going to figure out how to automate this as much as I could, with as little effort as possible.

After taking a quick look around for a python library to help me out, python being my weapon of choice, I realized that I was going to have to get into some PowerShell scripting. I’ve dabbled before, but I’ve never really taken the time to learn much about PowerShell control structures (loops, conditionals, pipes, etc.). I really didn’t want to spend the time getting up to speed on a new language, so I instead decided I was going to use the python skills I had to auto-generate the scripts using a little jinja2 and some google-technician skills.

Figuring out the Powershell Syntax

This was the easy part, actually. Microsoft has some pretty great documentation for PowerShell cmdlets, and there were more than a couple of blogs out there with examples. Unfortunately, I didn’t take notes on all the posts I went through… yeah, I suck, but I offer my thanks to everyone.

Creating the Scopes

The Jinja Template for Creating Scopes

Once I figured out the specific syntax that I needed to generate the DHCP scopes with the proper scope options, I dropped the syntax into a Jinja template, using a for loop to run over the multiple scopes defined in the YAML file (sketched below).
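Something in this neighborhood. The variable names (scopes, name, network, start, end, mask, router, defaults) are just the ones used in the YAML sketch further down, and the Add-DhcpServerv4Scope / Set-DhcpServerv4OptionValue parameters are the documented ones, but double-check them against your Windows Server version.

{% for scope in scopes %}
Add-DhcpServerv4Scope -Name "{{ scope.name }}" -StartRange {{ scope.start }} -EndRange {{ scope.end }} -SubnetMask {{ scope.mask }} -State Active
Set-DhcpServerv4OptionValue -ScopeId {{ scope.network }} -Router {{ scope.router }} -DnsServer {{ defaults.dns_server }}
{% endfor %}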

The YAML file to define the Scopes

I chose to use YAML to define the inputs because, well, that’s what I felt like working in at the time, and it also allowed me to separate out the global values from those specific to each scope. As I move forward in my full home lab automation project, I’m thinking I might use a single global values YAML file to hold all the global values for everything in the entire infrastructure, but for now, I decided to keep things simple and just include it in the same YAML file.

If you take a look at the example below, you should be able to easily identify what each of the different elements is for.
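A trimmed-down illustration of the layout (the addresses and scope names here are made up; the split between the defaults block and the per-scope values is the important part):

defaults:
  dns_server: 10.101.0.10
  domain: lab.local

scopes:
  - name: ZTP_VLAN10
    network: 10.101.10.0
    start: 10.101.10.100
    end: 10.101.10.200
    mask: 255.255.255.0
    router: 10.101.10.1
  - name: ZTP_VLAN20
    network: 10.101.20.0
    start: 10.101.20.100
    end: 10.101.20.200
    mask: 255.255.255.0
    router: 10.101.20.1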

The Python Script to Generate the Powershell Script

Nothing too complicated here: I load the variables, pass them into the jinja2 library, and spit out a file with a .ps1 extension.
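A sketch of that glue, assuming the template and YAML files above are saved as dhcp_scopes.j2 and dhcp_scopes.yaml (the file names are placeholders):

import yaml
from jinja2 import Environment, FileSystemLoader

# Load the scope definitions from the YAML file
with open('dhcp_scopes.yaml') as f:
    data = yaml.safe_load(f)

# Render the Jinja2 template with those values
env = Environment(loader=FileSystemLoader('.'))
template = env.get_template('dhcp_scopes.j2')
powershell_script = template.render(defaults=data['defaults'], scopes=data['scopes'])

# Write the result out as a PowerShell script
with open('create_dhcp_scopes.ps1', 'w') as f:
    f.write(powershell_script)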

Creating the Reservations

For my specific project, I need to set different DHCP option 67s for some of my clients. Although I could have manually created these as well, I decided that I would just take the same approach and template the whole thing.

The Jinja Template for Creating DHCP Reservations

Very similar to the approach above, I figured out the syntax for one, and then I created a Jinja template using a For loop.

The CSV file to define the DHCP Reservations

In this case, since I didn’t have to deal with anything more than the reservations, I decided on using a CSV file as the input format. Although YAML is what all the cool kids are doing, using a CSV file allows me to edit this in Excel, which I found to be easier for this specific project. There are only a couple of reservations in here right now, but I’ve got another 30 or so devices which I will need to perform this same step for, so having the ability to quickly add reservations into a CSV file is a good thing in the long run.

The Python Script to Generate the Powershell Script

Wrap up

To be honest, it’s a bit lazy and I wish I had more time to learn more things, but sometimes, you just use what you know to address a problem in a quick and dirty way. Hopefully, someone else will find these useful as well.

At the beginning of the year, I wrote a blog that said my major goal was to be able to automate the configuration of my entire lab with as little effort as possible. Considering how many times I’ve had to manually create DHCP scopes and reservations over the years, I think this one will be something that will definitely come in handy. Hopefully someone else will think so too!

Questions, Comments? Feel free to post below!

@netmanchris

Serial numbers how I love thee…

No one really likes serial numbers, but keeping track of them is one of the “brushing your teeth” activities that everyone needs to take care of. It’s like eating your Brussels sprouts. Or listening to your mom. You’re just better off if you do it quickly, as it just gets more painful over time.

Not only is it just good hygiene, but you may be subject to regulations, like E-Rate in the United States, where you have to be able to report on the location of any device by serial number at any point in time.

Trust me, having to play hide-and-go seek with an SSH session is not something you want to do when government auditors are looking for answers.

I’m sure you’ve already guessed what I’m about to say, but I’ll say it anyway…

There’s an API for that!!!

The HPE IMC base platform has a great network assets function that automatically gathers all the details of your various devices, assuming of course they support RFC 4133, otherwise known as the Entity MIB. On the bright side, most vendors have chosen to support this standards-based MIB, so chances are you’re in good shape.

And if they don’t support it, they really should. You should ask them. Ok?

So without further ado, let’s get started.

 

Importing the required libraries

I’m sure you’re getting used to this part, but it’s important to know where to look for these different functions. In this case, we’re going to look at a new library that is specifically designed to deal with network assets, including serial numbers.

In [1]:
from pyhpeimc.auth import *
from pyhpeimc.plat.netassets import *
import csv
In [2]:
auth = IMCAuth("http://", "10.101.0.203", "8080", "admin", "admin")
In [3]:
ciscorouter = get_dev_asset_details('10.101.0.1', auth.creds, auth.url)
 

How many assets in a Cisco Router?

As some of you may have heard, HPE IMC is a multi-vendor tool and offers support for many of the common devices you’ll see in your daily travels.

In this example, we’re going to use a Cisco 2811 router to showcase the basic function.

Routers, like chassis switches, have multiple components. As anyone who’s ever been the victim owner of a SMARTnet contract knows, individual components have serial numbers as well, and all of them have to be reported for them to be covered. So let’s see if we managed to grab all of those by first checking how many individual items we got back in the asset list for this Cisco router.

In [4]:
len(ciscorouter)
Out[4]:
7
 

What’s in the box???

Now that we’ve got an idea of how many assets are in here, let’s take a look at exactly what’s in one of the asset records to see if there’s anything useful.

In [5]:
ciscorouter[0]
Out[5]:
{'alias': '',
 'asset': 'http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=1',
 'assetNumber': '',
 'boardNum': 'FHK1119F1DX',
 'bom': '',
 'buildInfo': '',
 'cleiCode': '',
 'containedIn': '0',
 'desc': '2811 chassis',
 'devId': '15',
 'deviceIp': '10.101.0.1',
 'deviceName': 'router.lab.local',
 'firmwareVersion': 'System Bootstrap, Version 12.4(13r)T11, RELEASE SOFTWARE (fc1)',
 'hardVersion': 'V04 ',
 'isFRU': '2',
 'mfgName': 'Cisco',
 'model': 'CISCO2811',
 'name': '2811 chassis',
 'phyClass': '3',
 'phyIndex': '1',
 'physicalFlag': '0',
 'relPos': '-1',
 'remark': '',
 'serialNum': 'FHK1119F1DX',
 'serverDate': '2016-01-26T15:20:40-05:00',
 'softVersion': '15.1(4)M, RELEASE SOFTWARE (fc1)',
 'vendorType': '1.3.6.1.4.1.9.12.3.1.3.436'}
 

What can we do with this?

With some basic python string manipulation, we could easily print out some of the attributes that we want into what could easily turn into a nicely formatted report.

Again realise that the example below is just a subset of what’s available in the JSON above. If you want more, just add it to the list.

In [7]:
for i in ciscorouter:
    print ("Device Name: " + i['deviceName'] + " Device Model: " + i['model'] +
           "\nAsset Name is: " + i['name'] + " Asset Serial Number is: " +
           i['serialNum']+ "\n")
 
Device Name: router.lab.local Device Model: CISCO2811
Asset Name is: 2811 chassis Asset Serial Number is: FHK1119F1DX

Device Name: router.lab.local Device Model: VIC2-2FXO
Asset Name is: 2nd generation two port FXO voice interface daughtercard on Slot 0 SubSlot 2 Asset Serial Number is: FOC11063NZ4

Device Name: router.lab.local Device Model:
Asset Name is: 40GB IDE Disc Daughter Card on Slot 1 SubSlot 0 Asset Serial Number is: FOC11163P04

Device Name: router.lab.local Device Model:
Asset Name is: AIM Container Slot 0 Asset Serial Number is:

Device Name: router.lab.local Device Model:
Asset Name is: AIM Container Slot 1 Asset Serial Number is:

Device Name: router.lab.local Device Model:
Asset Name is: C2811 Chassis Slot 0 Asset Serial Number is:

Device Name: router.lab.local Device Model:
Asset Name is: C2811 Chassis Slot 1 Asset Serial Number is:

 

Why not just write that to disk?

Although we could go directly to the formatted report without a lot of extra work, we would be losing a lot of data which we may have use for later. Instead, why don’t we export all the available data from the JSON above into a CSV file, which can later be opened in your favourite spreadsheet viewer and manipulated to your heart’s content.

Pretty cool, no?

In [9]:
keys = ciscorouter[0].keys()
with open('ciscorouter.csv', 'w') as file:
    dict_writer = csv.DictWriter(file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(ciscorouter)
 

Reading it back

Now we’ll read it back from disk to make sure it worked properly. When working with data like this, I find it useful to think about who’s going to be consuming the data. For example, remember this is a CSV file which can be easily opened in python, or in something like Microsoft Excel to manipulate further. It’s not really intended to be read by human beings in this particular format. You’ll need another program to consume and munge the data first to turn it into something human consumable.

In [12]:
with open('ciscorouter.csv') as file:
    print (file.read())
 
firmwareVersion,vendorType,phyIndex,relPos,boardNum,phyClass,softVersion,serverDate,isFRU,alias,bom,physicalFlag,deviceName,deviceIp,containedIn,cleiCode,mfgName,desc,name,hardVersion,remark,asset,model,assetNumber,serialNum,buildInfo,devId
"System Bootstrap, Version 12.4(13r)T11, RELEASE SOFTWARE (fc1)",1.3.6.1.4.1.9.12.3.1.3.436,1,-1,FHK1119F1DX,3,"15.1(4)M, RELEASE SOFTWARE (fc1)",2016-01-26T15:20:40-05:00,2,,,0,router.lab.local,10.101.0.1,0,,Cisco,2811 chassis,2811 chassis,V04 ,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=1,CISCO2811,,FHK1119F1DX,,15
,1.3.6.1.4.1.9.12.3.1.9.3.114,14,0,FOC11063NZ4,9,,2016-01-26T15:20:40-05:00,1,,,2,router.lab.local,10.101.0.1,13,,Cisco,2nd generation two port FXO voice interface daughtercard,2nd generation two port FXO voice interface daughtercard on Slot 0 SubSlot 2,V01 ,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=14,VIC2-2FXO,,FOC11063NZ4,,15
,1.3.6.1.4.1.9.12.3.1.9.15.25,30,0,FOC11163P04,9,,2016-01-26T15:20:40-05:00,1,,,2,router.lab.local,10.101.0.1,29,,Cisco,40GB IDE Disc Daughter Card,40GB IDE Disc Daughter Card on Slot 1 SubSlot 0,,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=30, ,,FOC11163P04,,15
,1.3.6.1.4.1.9.12.3.1.5.2,25,6,,5,,2016-01-26T15:20:40-05:00,2,,,0,router.lab.local,10.101.0.1,3,,Cisco,AIM Container Slot 0,AIM Container Slot 0,,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=25,,,,,15
,1.3.6.1.4.1.9.12.3.1.5.2,26,7,,5,,2016-01-26T15:20:40-05:00,2,,,0,router.lab.local,10.101.0.1,3,,Cisco,AIM Container Slot 1,AIM Container Slot 1,,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=26,,,,,15
,1.3.6.1.4.1.9.12.3.1.5.1,2,0,,5,,2016-01-26T15:20:40-05:00,2,,,0,router.lab.local,10.101.0.1,1,,Cisco,C2811 Chassis Slot,C2811 Chassis Slot 0,,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=2,,,,,15
,1.3.6.1.4.1.9.12.3.1.5.1,27,1,,5,,2016-01-26T15:20:40-05:00,2,,,0,router.lab.local,10.101.0.1,1,,Cisco,C2811 Chassis Slot,C2811 Chassis Slot 1,,,http://10.101.0.203:8080/imcrs/netasset/asset/detail?devId=15&phyIndex=27,,,,,15

 

What about all my serial numbers at once?

That’s a great question! I’m glad you asked. One of the most beautiful things about learning to automate things like asset gathering through an API is that it’s often not much more work to do something 1000 times than it is to do it a single time.

This time instead of using the get_dev_asset_details function that we used above which gets us all the assets associated with a single device, let’s grab ALL the devices at once.

In [13]:
all_assets = get_dev_asset_details_all(auth.creds, auth.url)
In [14]:
len (all_assets)
Out[14]:
1013
 

That’s a lot of assets!

Exactly why we automate things. Now let’s write the all_assets list to disk as well.

**Note: for reasons unknown to me at this time, although the majority of the assets have 27 different fields, a few of them actually have 28 different attributes. Something I’ll have to dig into later.

In [15]:
keys = all_assets[0].keys()
with open('all_assets.csv', 'w') as file:
    dict_writer = csv.DictWriter(file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(all_assets)
 
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-e4c553049911> in <module>()
      3     dict_writer = csv.DictWriter(file, keys)
      4     dict_writer.writeheader()
----> 5     dict_writer.writerows(all_assets)

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/csv.py in writerows(self, rowdicts)
    156         rows = []
    157         for rowdict in rowdicts:
--> 158             rows.append(self._dict_to_list(rowdict))
    159         return self.writer.writerows(rows)
    160 

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/csv.py in _dict_to_list(self, rowdict)
    147             if wrong_fields:
    148                 raise ValueError("dict contains fields not in fieldnames: "
--> 149                                  + ", ".join([repr(x) for x in wrong_fields]))
    150         return [rowdict.get(key, self.restval) for key in self.fieldnames]
    151 

ValueError: dict contains fields not in fieldnames: 'beginDate'
 

Well That’s not good….

So it looks like there are a few network assets that have a different number of attributes than the first one in the list. We’ll write some quick code to figure out how big of a problem this is.

In [16]:
print ("The length of the first items keys is " + str(len(keys)))
for i in all_assets:
    if len(i) != len(all_assets[0].keys()):
       print ("The length of index " + str(all_assets.index(i)) + " is " + str(len(i.keys())))
 
The length of the first items keys is 27
The length of index 39 is 28
The length of index 41 is 28
The length of index 42 is 28
The length of index 474 is 28
The length of index 497 is 28
The length of index 569 is 28
The length of index 570 is 28
The length of index 585 is 28
The length of index 604 is 28
The length of index 605 is 28
The length of index 879 is 28
The length of index 880 is 28
The length of index 881 is 28
The length of index 882 is 28
The length of index 883 is 28
The length of index 884 is 28
The length of index 885 is 28
The length of index 886 is 28
 

Well that’s not so bad

It looks like the items which don’t have exactly 27 attributes have exactly 28 attributes. So we’ll just pick one of the longer ones to use as the headers for our CSV file and then run the script again.

For this one, I’m going to ask you to trust me that the file is on disk and save us all the trouble of having to print out 1013 separate assets into this blog post.

In [18]:
keys = all_assets[879].keys()
with open ('all_assets.csv', 'w') as file:
    dict_writer = csv.DictWriter(file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(all_assets)
 

What’s next?

So now that we’ve got all of our assets into a CSV file which is easily consumable by something like Excel, you can now chose what to do with the data.

For me it’s interesting to see how vendors internally instrument their boxes. Some have serial numbers on power supplies and fans, some don’t. Some use the standard way of doing things. Some don’t.

From an operations perspective, not all gear is created equal and it’s nice to understand what’s supported when trying to make a purchasing choice for something you’re going to have to live with for the next few years.

If you’re looking at your annual SMARTnet renewal, at least you’ve now got a way to easily audit all of your discovered environment and figure out which line cards need to be tied to a particular contract.

Or you could just look at another vendor who makes your life easier. Entirely your choice.

@netmanchris