Hey Alexa, Turn my lab on!

TL;DR: Put together a custom Alexa Skill so I can turn switches and routers on and off in my lab, as shown in the video here. Feels pretty great.

 

As most of my twitter followers have noticed, I’ve been doing a lot of Home Automation, mostly with Apple #HomeKit. But I also picked up an Amazon Dot because… well, why not?

One of the great things about the digital voice assistant from Amazon is that they have created an extensible framework that enables those with a little bit of coding skill to add to Mrs. A’s already impressive array of abilities.

The Amazon Alexa developer page is pretty impressive. There’s a ton of information and tutorials there, as well as an SDK and code examples in Node.js. I’m almost exclusively a python coder at this point, so I decided to look for something a little more familiar and came upon this.

Flask-Ask

Flask-Ask is a Flask extension that makes building Alexa skills for the Amazon Echo easier and much more fun.

Essentially, John Wheeler took the Flask WSGI (web) framework and made it super easy to create Amazon Alexa skills using this familiar library. I’ve used Flask in the past for a few projects, so this was a no-brainer for me.

John also put together a set of tutorials here which can be used to jumpstart the Alexa skills development process. There’s also a Flask-Ask quickstart on the Amazon developer blog which pointed me towards ngrok, which came in really handy!

Ngrok allows you to create secure tunnels to a local host. You run ngrok with the port number you want to expose and it automatically exposes the host as a resource on the ngrok website. It’s really, really cool.
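For example, with the skill’s Flask app listening on Flask’s default port of 5000 (an assumption; use whatever port you run the skill on), a single command gives you a public HTTPS endpoint you can paste into the Alexa skill configuration:

ngrok http 5000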

The Project

Like many of us, I have a physical lab in my house from my CCIE studies. Specializing in network management over the years has also meant needing access to physical gear in a lot of instances. Powering on that gear full time is out of the question because of the cost and power drain. As I’m sure you can imagine, going back and forth to turn things on and off gets old real quick.

To address that problem, I picked up a couple of intelligent PDUs on eBay. There are many “smart” PDUs out there, and I happen to have a set of Server Technology units that allow me to control each socket on a 16-port power bar. Pretty cool, right?  No more walking to the garage, which is a good thing when you’re trying to focus on a problem.

So things are heading in the right direction; I can pop over to the local web interface of my PDU and turn my devices on and off. That’s nice… but all the home automation stuff I’ve been doing led me to wonder…

Why can’t I just ask for the device to be turned on?

I can ask Siri or Alexa to turn on the lights or adjust the temperature of my house. I can ask them about the weather or to check my calendar. There’s no reason why I shouldn’t be able to do the same with my lab gear.

So I decided to make that a reality.

What’s not covered in this blog

The one step which is not covered in this blog is writing the pyservertech library, which I built on top of the pysnmp library. Essentially, I walked the MIBs until I found how to gather the info I needed and figured out which specific OID I needed to set to turn an individual power socket on or off. I might do a blog on that specific piece too, but for now, I’m trying to focus on the Alexa piece.
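To give a flavour of what pyservertech does under the hood, here’s a minimal, hedged sketch of an SNMP SET with pysnmp. The OID is a placeholder for the per-outlet control object found by walking the vendor MIB, and the assumption that a value of 1 means “on” is mine for illustration; the real library obviously has more error handling.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity,
                          Integer, setCmd)

def power_on(pdu_ip, socket_number, community='private'):
    # Illustrative only: the OID below is a placeholder for the per-outlet
    # control object discovered by walking the Server Technology MIB.
    outlet_oid = '1.3.6.1.4.1.1718.3.2.3.1.11.1.1.' + str(socket_number)
    error_indication, error_status, error_index, var_binds = next(
        setCmd(SnmpEngine(),
               CommunityData(community),
               UdpTransportTarget((pdu_ip, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(outlet_oid), Integer(1))))  # 1 = on (assumed)
    if error_indication:
        print(error_indication)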

If there’s interest, please let me know in comments or on twitter and I’ll prioritize the SNMP set blog. 🙂 

Building the Alexa Skill

Alexa skills are a combination of three components:

  • Flask-Ask – This is the actual skill code and includes the templates file shown below
  • Intent_Schema – Kinda obvious, but this includes the various intents that you’re going to use in your skill
  • Sample Utterances – A collection of the various verbal phrases and how they are connected to the intents.

I’ll do my best to connect these in the code below, but I’d really recommend going through a couple of the tutorials above and playing around with the examples to build some intuition on how these components connect.

The code below lets a user do the following using Amazon Alexa’s voice assistant.

  1. Ask Alexa to open the Lab skill ( Lab is what I called it )
  2. Alexa asks the user “Welcome to the lab. I’m going to ask you which plug you want me to turn on. Ready?”
  3. User responds with “Yes”or “Sure”
  4. Alexa asks the user “Please tell me which power socket you would like to turn on?”
  5. User responds with a number which is the power socket they would like to turn on
  6. Alexa decodes the response and returns the number in a JSON array to the local Flask server
  7. Python code takes the number from the JSON array and uses that as input into the power_on() function.
  8. The power_on() function sends an SNMP SET command to the appropriate outlet.
  9. Device powers on. Alexa says “I’ve turned the power socket on.”
  10. I don’t walk to the garage.

Now that we understand how the code is supposed to work, let’s take a look at the individual pieces and how they fit together.

Alexa Skill

This is the python code that you’ll run on your local machine. It contains only a portion of the logic of the “program,” as Amazon is really doing the majority of the lifting on their side as far as speech recognition and returning the appropriate data in a JSON array.
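The embedded gist doesn’t survive the export here, so below is a minimal sketch of what a Flask-Ask skill along these lines looks like. The intent name (SocketOnIntent), the slot name (socket), the PDU address, and the pyservertech power_on() helper are all my illustrative assumptions; the template names map to the templates file in the next section.

from flask import Flask, render_template
from flask_ask import Ask, question, statement
from pyservertech import power_on   # the SNMP helper library described above

app = Flask(__name__)
ask = Ask(app, '/')

@ask.launch
def launch():
    # Spoken when the user says "Alexa, open the lab"
    return question(render_template('welcome'))

@ask.intent('AMAZON.YesIntent')
def ready():
    return question(render_template('which_socket'))

@ask.intent('SocketOnIntent', convert={'socket': int})
def socket_on(socket):
    power_on('10.101.0.50', socket)   # PDU address is illustrative
    return statement(render_template('socket_on'))

if __name__ == '__main__':
    app.run(debug=True)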

Templates file

This file contains the various phrases that Alexa is going to speak on behalf of your application. You can see we’ve only got a few different phrases.
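The embedded file doesn’t survive here either, but based on the dialogue above it would look something like this hedged reconstruction (Flask-Ask reads these from a templates.yaml file that sits next to the skill code):

welcome: Welcome to the lab. I'm going to ask you which plug you want me to turn on. Ready?
which_socket: Please tell me which power socket you would like to turn on?
socket_on: I've turned the power socket on.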

Intent_Schema

This file gets loaded on the Amazon website.  Using the developer interface, you load the JSON which defines the Intent Schema directly into the intent schema location on the Interaction Model page.
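The embedded JSON is missing from this export, but the schema for this skill would look roughly like the following. The intent and slot names match the sketch above and are my reconstruction, not the exact file:

{
  "intents": [
    {"intent": "AMAZON.YesIntent"},
    {"intent": "SocketOnIntent",
     "slots": [{"name": "socket", "type": "AMAZON.NUMBER"}]}
  ]
}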


Sample Utterances File

Just like the Intent Schema, the Sample Utterances file is also loaded directly into the Amazon developer portal, into the Interaction Model for this specific skill.
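Here’s a hedged reconstruction of what those utterances look like. Each line is an intent name followed by a phrase the user might say, with the slot value in curly braces (the built-in AMAZON.YesIntent doesn’t need utterances):

SocketOnIntent turn on power socket {socket}
SocketOnIntent power socket {socket}
SocketOnIntent {socket}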


What’s next

This is just the start of this skill. All it does right now is turn things on, which is cool, but I want more. Just off the top of my head, here are some of the things I’d like to do:

  • Turn individual devices on or off
  • Turn individual devices on or off by name: “Alexa, ask the lab to turn on the HPE 2920 switch!”
  • Turn groups of devices on or off: “Alexa, ask the lab to turn on the Juniper branch!”
  • Request data from the PDUs: “Alexa, ask the lab how many devices are currently turned on?” or “Ask the lab how much power is currently being used.”

As you can imagine, this would require a lot more code and logic to accomplish all these goals. Definitely something I’m going to pursue, but I’m hoping that the simple example above helps to inspire someone else in their journey down this path.

Questions? Comments? You know what to do…

@netmanchris


Amazon S3 Outage: Another Opinion Piece

So Amazon S3 had some “issues” last week, and it’s taken me a few days to put my thoughts together around this. Hopefully I’ve caught the tail end of the still-interested-enough-to-find-this-blog-valuable period.

Trying to make the best of a bad situation: the good news, in my opinion, is that this outage shows infrastructure people still have a place in the automated, cloudy world of the future. At least that’s something, right?

What happened:

You can read the detailed explanation on Amazon’s summary here.

In a nutshell

  • There was a small problem
  • They tried to fix it
  • Things went bad for a relatively short time
  • They fixed it

What happened during:

The internet lost its mind. Or more accurately, some parts of the internet went down. Some of them extremely ironic ones.


Initial thoughts

The reaction to this event is amusing and it drives home the point that infrastructure engineers are as critical as ever, if not even more important considering the complete lack of architecture that seems to have gone into the majority of these “applications”.

First, let’s talk about availability. Looking at the Amazon AWS S3 SLA, available here, it looks like they did fall below their 99.9% SLA for availability. A quick look at https://uptime.is/ shows that for the monthly period, they were aiming for no more than 43m 49.7s of outage. The outage ran about 6–8 hours, so clearly they failed. Looking at the S3 SLA page, it looks like customers might be eligible for 25% service credits. I’ll let you guys work that out with AWS.

Don’t “JUST CLICK NEXT”

One of the first things that struck me as funny here was the fact that it was the US-EAST-1 Region which was affected. US-EAST is the default region for most of the AWS services. You have to intentionally select another region if you want your service to be hosted somewhere else. But because it’s easier to just click next, it seems that the majority of people just clicked past that part and didn’t think about where they were actually hosting their services, or the implications of hosting everything in the same region and probably the same availability zone. For more on this topic, take a look here.

There’s been a lot of criticism of infrastructure people now that anyone with a credit card can go to Amazon, sign up for an AWS account, and start consuming their infrastructure. This has been thrown around like it’s actually a good thing, right?

Well, this is exactly what happens when “anyone” does that. You end up with all your eggs in one basket.

“Design your infrastructure for the four S’s. Stability, Scalability, Security, and Stupidity” — Jeff Kabel

Again, this is not an issue with AWS, or any cloud provider’s offerings. This is an issue with people who think that infrastructure and architecture don’t matter and can just be “automated” away. Automation is important, but it’s there so that your infrastructure people can free up some time from mind-numbing tasks to help you properly architect the infra components your applications rely upon.

Why o Why o Why

Why anyone would architect their revenue-generating system on an infrastructure that was only guaranteed to 99.9% is beyond me. The right answer, at least from an infrastructure engineer’s point of view, is obvious, right?

You would use redundant architecture to raise the overall resilience of the application, relying on the fact that it’s highly unlikely that you’re going to lose the different redundant pieces at the same time. Put simply, what are the chances that two different systems, both guaranteed to a 99.9% SLA, are going to go down at the exact same time?

Doing some really basic probability calculations, and assuming the outages are independent events, we multiply the probability of the non-SLA’d time happening (0.1%, or 0.001) in system 1 by the same metric in system 2 and we get:

0.001 * 0.001 = 0.000001 probability of both systems going down at the same time.

Or another way of saying that is 99.9999% uptime. Pretty great, right?
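If you want to sanity-check those numbers, the arithmetic is only a few lines of python. Note that uptime.is appears to use an average month of roughly 30.44 days, which is where the 43m 49.7s figure comes from:

sla = 0.999
month_minutes = 30.44 * 24 * 60           # average month, ~43,834 minutes
print(month_minutes * (1 - sla))          # ~43.8 minutes of allowed downtime
p_both_down = (1 - sla) ** 2              # assumes independent failures
print(1 - p_both_down)                    # 0.999999 combined availability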

Note: I’m not an availability calculation expert, so if I’ve messed up a basic assumption here, someone please feel free to correct me. Always looking to learn!

So application people made the mistake of just signing over responsibility for their application uptime to “the cloud,” and most of them probably didn’t even read the SLA for the S3 service or sit down to think it through.

Really? We had people armed with an IDE and a credit card move our apps to “the cloud” and wonder why things failed.

What could they have done?

There’s a million ways to answer this I’m sure, but let’s just look at what was available within the AWS list of service offerings.

CloudFront is AWS’s content delivery network. It’s extremely easy to use, easy to set up, and takes care of automatically moving your content to multiple AWS Regions and Availability Zones.

Route 53 is AWS’s DNS service that will allow you to perform health checks and only direct DNS queries to resources which are “healthy” or actively available.

There are probably a lot of other options as well, both within AWS and without, but my point is that the applications that went down most likely didn’t bother. Or they were denied the budget to properly architect resiliency into their system.

On the bright side, the latter just had a budget opening event.

Look who did it right

Unsurprisingly, there were companies who weathered the S3 storm like nothing happened. In fact, I was able to sit and binge-watch Netflix while the rest of the internet was melting down. Yes, it looks like it cost them 25% more, but then again, I had no problems with season 4 of The Big Bang Theory at all last week, so I’m a happy customer.

Companies still like happy customers, don’t they?

The Cloud is still a good thing

I’m hoping that no one reads this as an anti-cloud post. There’s enough anti-cloud rhetoric happening right now, which I suppose is inevitable considering last week’s highly visible outage, and I don’t want to add to that.

What I do want is for people who read this to spend a little bit of time thinking about their applications and the infrastructure that supports them. This type of thing happens in enterprise environments every day. Systems die. Hardware fails. Get over it and design your architecture to treat these failures as a foregone conclusion. It IS going to happen; it’s just a matter of when. So shouldn’t we design up front around that?

Alternately, we could also choose to take the risk for those services that don’t generate revenue for the business. If it’s not making you money, maybe you don’t want to pay for it to be resilient. That’s ok too. Just make an informed decision.

For the record, I’m a network engineer well versed in the arcane discipline of plumbing packets. Cloud and application architectures are pretty far away from the land of BGP peering and routing tables where I spend my days. But for the low low price of $15 and a bit of time on Udemy, I was able to dig into AWS and build some skills that let me look at last week’s outage with a much more informed perspective. To all my infrastructure engineer peeps: I highly encourage you to take the time, learn a bit, and get involved in these conversations at your companies. Hoping we can all raise the bar collectively together.

Comments, questions?

@netmanchris

Auto Network Diagram with Graphviz

One of the most useful and least updated pieces of network documentation is the network diagram. We all know this, and yet we still don’t make time to update it until something catastrophic happens, and then we say to ourselves:

Wow. I wish I had updated this sooner…

Graphviz

According to the website:

Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics,  software engineering, database and web design, machine learning, and in visual interfaces for other technical domains.

note: Lots of great examples and docs there BTW.  Definitely check it out.

Getting started

So you’re going to have to first install Graphviz from their website. Go ahead… I’ll wait here.

Install the graphviz python binding

This should be easy assuming you’ve already got python and pip installed. I’m assuming that you do.

pip install graphviz

Getting LLDP Neighbors from Arista Devices

You can use the Arista pyeapi library, also installable through pip. There’s a blog which introduces you to the basics here which you can check out. Essentially I followed that blog and then substituted the “show lldp neighbors” command to get the output I was looking for.
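As a rough sketch, assuming a switch already defined in ~/.eapi.conf under the name veos01 (the name is mine for illustration), pulling the neighbors looks something like this; the key names come from the eAPI JSON response:

import pyeapi

node = pyeapi.connect_to('veos01')             # connection name from ~/.eapi.conf
response = node.enable('show lldp neighbors')
neighbors = response[0]['result']['lldpNeighbors']
for n in neighbors:
    print(n['port'], '->', n['neighborDevice'], n['neighborPort'])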

Creating a Simple Network Diagram

The code for this is available here

Essentially, I’m just parsing the JSON output from the Arista eAPI and creating a DOT file which is used to generate the diagram.
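In sketch form, using the neighbor data gathered above, the loop is about this simple. The hostname and the sample neighbor entry are placeholders, not my actual lab:

from graphviz import Graph

local_hostname = 'veos01'                      # placeholder: the device we queried
neighbors = [{'neighborDevice': 'veos02',      # placeholder: parsed eAPI output
              'neighborPort': 'Ethernet1',
              'port': 'Ethernet1'}]

dot = Graph(comment='Simple Topology', format='png')
for n in neighbors:
    dot.node(local_hostname)                   # one node per device
    dot.node(n['neighborDevice'])
    dot.edge(local_hostname, n['neighborDevice'],
             label=n['port'] + ' - ' + n['neighborPort'])
dot.render('SimpleTopo')                       # writes the DOT source and SimpleTopo.png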

Pros: It’s automated

Cons: It’s not very pretty at all.

SimpleTopo.png

 

Prettying it up a Bit

Code for this is available here

So with a little bit of work using the .attr methods we can pretty this up a bit. For example, the

dot.attr('node', shape='box')

method turns the node shape from an ellipse into a box shape. The other transformations are pretty obvious as well.

Notice that we changed the shape of the node, the style of the arrows a bit, and also shaded in the box. There are lots of other modifications we can make, but I’ll leave you to check out the docs for that.

SimplePrettierTopo.png

 

 

Adding your own graphics

Code for this is available here

Getting a bit closer to what I want, but still I think we can do a bit better. For this example, I used mspaint to create a simple PNG file with a switch-ish image on it. From what I can tell, there’s no reason you couldn’t just use the vendor icons for whatever devices you’re using, but for now, I’m just playing with something quick and dirty.

Once the file is created and placed somewhere in the path, you can use this method

dot.attr('node', image="./images/switch1.png")

to get the right image.  You’ll also notice I used

dot.attr('edge', arrowhead='none')

to remove the arrow heads. ( I actually removed another command, can you spot it? )

SimplePrettierGraphicTopo.png

 

Straighter Lines

Code for this is available here

So looking at this image, one thing I don’t like is the curved lines. This is where Graphviz beat me for the day. I did find that I was able to apply the

dot.graph_attr['splines'] = "ortho"

attribute to the dot object to get me the straight lines I wanted, but when I did that, I got a great message that told me that I would need to use xlabels instead of standard labels.

SimplePrettierGraphicOrthoTopo.png

Next Steps

Code for this is available here

For this next step, it was time to get the info live from the device, and also to attempt to stitch multiple devices together into a single topology. One thing I noticed is that the name of the node MUST match the hostname of the device, otherwise you end up with multiple nodes. You can see there’s still a lot of work to do to clean this up, but I think it’s worth sharing. Hopefully you do too.

MultiTopo.png

 

Thoughts

Pros: Graphviz is definitely cool. I can see it saving a lot of the time spent drawing network diagrams. The fact that you could automatically run this every X period to ensure you have an up-to-date network diagram at all times is pretty awesome. It’s customizable, which is nice, and multi-vendor would be pretty easy to implement. Worst case scenario, you could just poll the LLDP MIB with SNMP and dump the data into the appropriate bucket. Not elegant, but definitely functional.

Cons: The link labels are a pain. In the short time I was playing with it, I wasn’t able to google or read-the-docs my way into what I want, which is a label on each end of the link telling me which interface on which device. Not the glob of data in the middle that makes me wonder which end is which.

The other thing I don’t like is the curvy lines. I want straight lines. Whether that’s an issue with the graphviz python library that I’m using or it’s actually a problem with the whole graphviz framework isn’t clear to me yet. Considering the time saved, I could probably live with this as is, but I’d also like to do better.

If anyone has figured out how to get past these minor issues, please drop me a line!  @netmanchris on twitter or comment on the blog.

As always, comments and fixes are always appreciated!

@netmanchris

Devops for Networking Forum in Santa Clara

Normally, I would have written this a few weeks ago, but sometimes the world just takes the luxury of time away from you. In this case, I couldn’t be happier though, as I’m about to be part of something that I believe is going to be really, really amazing. This event is really a testimony to Brent Salisbury and John Willis’s commitment to community and their relentless pursuit of trying to evolve the whole industry, bringing along as many of the friends they’ve made along the way as possible.

Given the speaker list, I don’t believe there’s been any event in recent (or long-term!) memory that has such an amazing list of speakers. The most amazing part is that this event was really put together in the last month!!!!

If you’re in the bay area, you should definitely be there. If you’re not in the area, you should buy a plane ticket as you might not ever get a chance like this again. 

 

DevOps Forum for Networking

From the website

 

previously known as DevOps4Networks is an event started in 2014 by John Willis and Brent Salisbury to begin a discussion on what Devops and Networking will look like over the next five years. The goal is to create a conversation for change similar to what CloudCamp did for Cloud adoption and DevopsDays for Devops.

 

When and Where

You can register here

DevOps Networking Forum 2016

Monday, March 14, 2016 9:00 AM – 5:00 PM (Pacific Time)

Santa Clara Convention Center
5001 Great America Pkwy
Santa Clara, California 95054
United States
Questions? Contact us at events@linuxfoundation.org

Who

You can hit the actual speakers page here, but here’s the short list

  • Kelsey Hightower, Google
  • Kenneth Duda, Arista
  • Dave Meyer, Brocade
  • Anees Shaikh, Google
  • Chris Young, HPE
  • Leslie Carr, SFMIX
  • Dinesh Dutt, Cumulus
  • Petr Lapukhov, Facebook
  • Matt Oswalt, keepingitclassless
  • Scott Lowe, VMware

I’ve also heard that a few other industry notables will be wandering the hallways as ONS starts to spin up for the week.

Yup. What an amazing list, and for the low low price of $100, you can join us as well!

OMG

I’m absolutely honoured and, to be honest, a little intimidated to be sharing a spot with some of the industry luminaries who have been guiding lights personally for me in the last five years. I’m hoping to be a little educational, a little entertaining, and other than that, I’ll be in the front row with a box of popcorn soaking up as much as I can from the rest of the speakers.

Hope to see you there!

 

@netmanchris

 

GIT and Jinja – Like Peanut butter and Pickles!

Thanks to @mierdin for pointing this out. It looks like the wordpress format is causing some strange word-wrap issues. For a better view please click here to see the full post without presentation issues.

 

Using GITHub to build our Network Configs

As I wrote in this post, one of my goals for this year is to be able to completely automate the build of my lab environment programmatically.

In the last couple of jinja posts, I wrote about the basics of Jinja2 templates and how they can be applied to building network configurations.

In this post, I’m going to take the next step and move those files from my local hard drive out to…

 

duh duh dahhhhhhhhhh

The cloud.


 

Before we get started…

We’re going to go over some basics on the tools we’re using to make sure everyone’s on the same page. Cool?

What’s GIT?

Git is a widely-used source code management system for software development. It is a distributed revision control system with an emphasis on speed, data integrity, and support for distributed, non-linear workflows. wikipedia

Huh?

GIT is a piece of software that allows you to track changes to files over time.

So what’s GITHub?

“Where software is built. Powerful collaboration, code review, and code management for open source and private projects. Public projects are always free.” – GitHub.com

GITHub is like facebook for developers. It’s a place where you can sync your local GIT repository to a central location, and then sync that central location to other local repositories.

Different people can connect to the same repository allowing multiple people to work on the same project.

What’s a repository?

A repository is essentially a collection of files that make up a project. You could think of it like a folder or directory. That analogy is not exact as it’s possible for a repository to have multiple sub-folders or directories, but it’s close enough for our purposes.

Is GIT only for Code?

GIT was definitely designed for software developers as a version control system while developing software, but you can use it for tracking changes to things other than code.

You could use it for any text format that you want to track changes to over time. For example:

  • grocery lists
  • contact list
  • tracking your weight

There are a lot of interesting uses for GIT; the one we’re going to use today is storing our Jinja2 templates on a public GIT repository and loading them directly into our python script as part of the code.

 

Import Required Libraries

Unless you’ve already got them, you’ll need to pip install jinja2 and pip install requests before loading these two libraries into your running environment.

In [1]:
import requests
import yaml
import githubuser
from jinja2 import Environment, FileSystemLoader, Template
 

Loading Templates from GITHub

Like with most things in python, if it’s useful enough, chances are there’s probably someone else who has already put a library together for it. In our case, we’re going to use the python requests library to handle loading files directly from our Github repository.

 

The first thing we’ll do is load the HPE Comware switch template that we used in this post. If you want to take a look at this directly on Github, it can be found here. All we have to do is copy and paste the URL from our browser directly into the first input of the requests.get function.

note: The requests.get function returns a Response object that has various attributes. The “.text” at the end tells the function to just give us the contents of the file, not the other information, like the HTTP status_code.

Simple, right?

In [75]:
comware_template = requests.get('https://github.com/netmanchris/Jinja2-Network-Configurations-Scripts/blob/master/simple_comware.j2').text
 

Looking at the output

So now we’ve loaded the contents of the simple_comware.j2 template directly from the Github site into the comware_template variable. Let’s take a look to make sure that we have what we need.

In [76]:
print (comware_template)
 
<!DOCTYPE html>
<html lang="en" class="">
  <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
    <meta charset='utf-8'>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta http-equiv="Content-Language" content="en">
    <meta name="viewport" content="width=1020">
    
    
    <title>Jinja2-Network-Configurations-Scripts/simple_comware.j2 at master · netmanchris/Jinja2-Network-Configurations-Scripts · GitHub</title>
    <link rel="search" type="application/opensearchdescription+xml" href="/opensearch.xml" title="GitHub">
    <link rel="fluid-icon" href="https://github.com/fluidicon.png" title="GitHub">
    <link rel="apple-touch-icon" href="/apple-touch-icon.png">
   
...
 

Hmmmmm. That’s not right?

The requests library is reaching out and grabbing whatever we put into that first variable. If we look at the printed contents, we can see the first line is <!DOCTYPE html>. So it looks like we’re grabbing the rendered webpage, not just the contents of the file. Thankfully, looking at the Github website, there’s an option to look at any of your files in raw mode. So let’s grab that URL and try this again, ok?

In [77]:
comware_template = requests.get('https://raw.githubusercontent.com/netmanchris/Jinja2-Network-Configurations-Scripts/master/simple_comware.j2').text
In [78]:
print (comware_template)
 
#sysname config
sysname {{ simple['hostname'] }}
#vlan config
{% for vlan in simple['vlans'] -%}
vlan {{ vlan['id'] }}
    name {{ vlan['name'] }}
    description {{ vlan['description'] }}
{% endfor %}#snmp_config
snmp-agent
snmp-agent community read {{ simple['snmp']['read'] }}
snmp-agent community write {{ simple['snmp']['write'] }}
snmp-agent sys-info contact {{ simple['snmp']['syscontact']  }}
snmp-agent sys-info location {{ simple['snmp']['syslocation'] }}
snmp-agent sys-info version all
 

Ahhhh… That’s better.

 

Loading Network Specific Values from GITHub

Now we’re going to load our network specific values which were stored in the YAML file in this post. But this time, we’re going to load them directly from a private github repository.

The free GITHub accounts allow you to have public repositories, which means everyone can see what you’re doing, but if you have a paid version, you can get private repositories for as little as five dollars a month.

The private repositories are secured and can only be accessed by someone with a GIThub username and password who has explicitly been given access to this repository.

I would say that it’s probably a bad idea to keep any secure information like usernames, passwords, or SNMP strings in an online repository. But for my purposes, I don’t have anything of value in this lab environment, so I’m not too worried about it.

note: Before you put any sensitive data into an online repository of any kind, be sure to check with your company’s data policies to see if you’re breaking any corporate rules.

 

Creating an Auth Object

First, I’m going to create an auth object, which is basically a single object that represents the username and password for my github account. In my case, I’ve got a file on my local hard drive that will automatically create the auth object for me.

In case you’re interested, the file is called githubuser.py and contains the following code. 

 

from requests.auth import HTTPBasicAuth

def gitcreds():
    auth = HTTPBasicAuth('netmanchris', 'my_secret_password')
    return auth

In [79]:
auth = githubuser.gitcreds() #you didn't think I was going to give you my password did you?
 

Loading simple.yaml

We’ll now load the simple.yaml file like we did in this post, but instead of opening it from a local file, we’re going to load it directly from the raw version of the file on Github. I’d give you the link, but it’s in a private repository, so you wouldn’t be able to access it anyway.

Things I want to point out

  • yaml.load: takes the response and processes the yaml content directly into a python data structure ( dictionary )
  • .text: takes the “.text” attribute from the requests object which is the content of the page.
  • auth = auth: takes the auth object we created above and passes it as the username and password during the HTTP request.

Make sense?

In [80]:
simple = yaml.load(requests.get('https://raw.githubusercontent.com/netmanchris/PrivateRepo/master/simple_config.yaml', auth=auth).text)
In [81]:
simple
Out[81]:
{'hostname': 'testswitch',
 'ip': '10.101.0.221',
 'snmp': {'read': 'supersecret',
  'syscontact': 'admin.lab.local',
  'syslocation': 'lab',
  'trap': [{'target': '10.101.0.200'},
   {'target': '10.101.0.201'},
   {'target': '10.101.0.202'}],
  'write': 'macdonald'},
 'vlans': [{'description': 'management vlan',
   'id': '10',
   'name': 'management'},
  {'description': 'users vlan', 'id': '15', 'name': 'users'},
  {'description': 'phones vlan', 'id': '16', 'name': 'phones'},
  {'description': 'servers vlan', 'id': '20', 'name': 'servers vlan'}]}
 

Putting it all together

So looking at our list

  • download simple_comware.j2 template from Github public repo: **Check!**
  • download simple.yaml values file from Github private repo: **Check!**
  • rendered templates: **Nope**

So I guess we know what comes next, right?

 

Rendering the final config

We use the Template function to create a jinja2 template object and then we use the simple variable we created during the yaml section as input into the cw_template object.

In [82]:
cw_template = Template(comware_template)
type(cw_template)
Out[82]:
jinja2.environment.Template
In [83]:
print (cw_template.render(simple=simple))
 
#sysname config
sysname testswitch
#vlan config
vlan 10
    name management
    description management vlan
vlan 15
    name users
    description users vlan
vlan 16
    name phones
    description phones vlan
vlan 20
    name servers vlan
    description servers vlan
#snmp_config
snmp-agent
snmp-agent community read supersecret
snmp-agent community write macdonald
snmp-agent sys-info contact admin.lab.local
snmp-agent sys-info location lab
snmp-agent sys-info version all
 

Writing the Config to Disk

So far we’ve only been rendering and printing configurations, but it would be kinda nice to have these on disk so that we can open them in our favorite editor before we cut and paste them into a telnet session to our network device.

The next two commands simply write the rendered template to disk with the filename comware.cfg and then we open and print the file to screen just to make sure it worked.

In [84]:
with open('comware.cfg', "w") as file:
    file.write(cw_template.render(simple=simple))
In [85]:
with open('comware.cfg') as file:
    print (file.read())
 
#sysname config
sysname testswitch
#vlan config
vlan 10
    name management
    description management vlan
vlan 15
    name users
    description users vlan
vlan 16
    name phones
    description phones vlan
vlan 20
    name servers vlan
    description servers vlan
#snmp_config
snmp-agent
snmp-agent community read supersecret
snmp-agent community write macdonald
snmp-agent sys-info contact admin.lab.local
snmp-agent sys-info location lab
snmp-agent sys-info version all
 

What’s next?

So far, we’ve come pretty far. We’ve written a couple of jinja templates, and we’ve figured out how to store those files in a centralized version control system, but we’re still cutting and pasting those configurations ourselves, which is not ideal.

In the next post, we’ll look at using APIs to push the configuration directly to a configuration management tool.

Questions or comments? Feel free to post below!

@netmanchris

2015 Recap and plans for 2016

About this time last year, I wrote this post.  It’s time to revisit again and plan for 2016. 

 

How did I do?

In 2015, I planned to work on skills in four major areas: python, data science, virtualization, and just keeping up on networking. In general, I think I did well in all areas, with the breakaway really being in the python area. I made a concerted effort this year to seek out project after project that would allow me to explore different aspects of python and force me to grow in as many areas as possible. Attending Interop sessions led by such trailblazers as Jason Edelman, Jeremy Schulman, and Matt Oswalt definitely gave me ideas and inspiration to explore new areas and push boundaries, and they have really helped me grow as a coder/developer, not to mention helped me cement some opinions on the future of networking in general.

I did manage to get through a bunch of the R courses on Coursera and they were great. I’d love to say that I’m going to return and finish the specialization (I’m two courses short), but if I’m honest, I’m probably going to move towards the data science aspects of Python and get into Anaconda and Pandas more in 2016. Nothing like combining two growth areas into one to really push the growth.

 

More so this year than others, having great conversations over beers, Moscow Mules, and many a sugary umbrella drink helped me expand my knowledge and firm up some of the thoughts I’ve been having for the last couple of years. No need to name drop, but you all know who you are, and I’d like to say thanks for all of the conversations and laughs. As much as I love the tech, it’s the people that always help to drive me forward.

If I’m keeping score, I think that 2015 was a good year. 

 

Plans Plans Plans

Now comes time to publicly declare what I want to accomplish in 2016. This is always the scary part as I know I’m now publicly accountable for any grandiose designs I throw into the ether. 🙂 

 

Practice to Application:

 

I’ve got a bit of a lab at home. If things go as planned, at the end of the year I will be able to factory reset *almost* the entire lab and have it come back from the dead in a completely automated fashion. The plan here is to use a combination of python, jinja2, IMC, Ansible, and whatever other pieces I need to duct-tape together to make this work by the end of the year. Just because I like to make my life harder than it has to be, I’m planning on building out the topology using vendor-independent methodologies, meaning that I want to be able to place a Cisco, HP, or Juniper box into any position in my lab access/distribution/core/dmz/wan/etc… and use a YAML file to dynamically build the required configurations on demand.

 

Yeah… I know….   But it’s good to set goals right?

 

OpenSwitch

OpenSwitch is another area which I will definitely be exploring during 2016. The project is still very new and definitely has some places, mostly in the documentation area, where there’s room to make a difference. I’ve been really lucky to work at a place where I have direct access to some of the project’s core developers, and I’m hoping I can share the fruits of that access in more blog posts, pull requests to enhance the documentation, and some interoperability testing with some of the usual-suspect network kit that I already have in my lab. Right now, I’m thinking OSPF, BGP, and Spanning-Tree as an unambitious start, and moving from there to using the declarative interface and REST interfaces to see how I can incorporate it into project one.

 

 

Thoughts on 2015

2015 was a good year. As an industry, I think we’ve made some great gains in general. The whole “Is SDN really a thing?” conversations seem to be over, and the “I don’t care what you call it, what does it do for me?” conversations are starting to get really interesting. The projects with value are starting to separate themselves from the science fair exhibits, and it looks like parts of the networking profession are finally past the outright denial and have reached the bargaining stage (“Can someone else write the scripts for me? Please?”).

I’ve been able to make forward momentum in all the areas I wanted to and I’m generally where I thought I was going to be at the start of the year.

 

Looking forward to 2016!

 

@netmanchris

XML, JSON, and YAML… Oh my!

I’m a network engineer who codes. Maybe even a network coder. Probably not a network programmer. Definitely not a programmer who knows networking. I’m in that weird zone where I’m enough of two things that don’t normally go together that it makes conversations I’m having with some of my peers awkward.

I had one such conversation today trying to explain the different data serialization formats in python and why, at the end of the day, they really don’t matter.

The conversation started with one of those “But they have an XML API!!!” comments thrown out as a criticism of someone’s product. My response was something like “And why does that matter?”

The person who made the comment certainly couldn’t answer that question. It was just something they had read in a competitive deck somewhere.

I’m all about competing and trying to make sure that customers have the BEST possible information to make the best decisions for their particular requirements, but this little criticism was definitely not, IMHO, the best information. In fact, it was totally irrelevant. This post is my way of trying to explain why. Hopefully, this will help clear up some of the confusion around data structures and APIs and why they really don’t matter so much, at least not their formatting.

XML

You can read more about XML here. In a nutshell, XML uses tags, similar to HTML, to represent different values in your data stream. The <item> opens up an item and the </item> closes it, and what lives between the two is the value for that item. Take a look at the following XML output from the HP IMC NMS. I just cut and pasted this straight out of the API interface, so you should be able to do the same if you want to follow along at home. In this code, I have created a string called x and pasted in the XML-formatted text, which is a bunch of information about a Cisco 2811 router that lives in my lab. Pay attention to the values as they will stay the same going through this exercise.

XML is the oldest of the bunch, being a W3C recommendation in 1998. It’s important to note though that XML is still relevant, being the native data format of Netconf and still used in a lot of places. It’s old, but that doesn’t mean devoid of value.

Ordered Dictionary

A dictionary is a way of storing data in python that uses keys instead of an index to access the content or value of a specific piece of information you want. For example, item[‘ip’] would return “10.101.0.1” with a dictionary.

One of the “issues” with dictionaries is that they are unordered. That means there’s no guarantee that when you print out a dictionary the values will be in the same order. (Pretty obvious when you read the word “unordered,” I know.) The OrderedDict is a “dict subclass that remembers the order entries were added”. So we’re going to use a great little library called xmltodict which takes the XML string (called x) above and transforms it into a python ordered dictionary. Now we can do interesting things to it in python. We can access the keys and get to the values directly. We can iterate over top of it because it’s one of python’s native data structures. It’s easy to use. People know it and understand it. It’s a good thing. Lists and dictionaries are the bread and butter of data structures in python. You need, need, need them.

In this code example, we’re going to take the XML string from above, run it through xmltodict to convert it to an ordered dictionary, and assign it to the variable y. Once I’ve got the ordered dict y, I could also use xmltodict to convert it back into XML with little to no effort. Cool?
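Since the embedded code doesn’t survive the export, here’s the shape of it. The payload is cut down to two illustrative fields rather than the full IMC response:

import xmltodict

x = '<device><ip>10.101.0.1</ip><sysName>Cisco2811</sysName></device>'
y = xmltodict.parse(x)                     # returns an OrderedDict
print(y['device']['ip'])                   # '10.101.0.1'
print(xmltodict.unparse(y, pretty=True))   # and straight back to XML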

JSON

JSON has become one of the standard ways to represent data between machines. It’s structured, well understood and it’s mostly human readable. A lot of “newer” systems now use JSON as the default data type. Most RESTful APIs for instance seem to have settled on JSON.

This is where things get interesting. Now that I’ve got the XML in an ordered dict, I can use the JSON library to convert it to a JSON-formatted string which I can then send along to any system that understands JSON. Or write it to a file, or just stare at those pretty, pretty braces.

Note: If I convert from JSON back to a python structure using the json.loads method, it will actually return a regular dictionary, not an Ordered Dictionary, so the values might appear out of order which COULD, in theory, cause issues with an upstream system, but I haven’t seen that in any of my work.
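Continuing the sketch, with the same cut-down payload standing in for the real IMC data:

import json
import xmltodict

y = xmltodict.parse('<device><ip>10.101.0.1</ip><sysName>Cisco2811</sysName></device>')
j = json.dumps(y, indent=4)                # OrderedDict out to a JSON string
z = json.loads(j)                          # back in as a plain dict; order not guaranteed
print(j)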

YAML

Although JSON is “more” readable than XML, it’s still got all those braces and apostrophes to worry about. And so YAML was born. YAML is easily the most human-readable of the formats I’ve worked with. It uses white space, dashes, and asterisks to denote different levels of the data structure. It’s what is commonly used with Jinja2 templating and Ansible and other cool buzzwords that we are all starting to play with.

Just like with the JSON example above, I can take the Ordered Dictionary and convert it to a YAML format (shown below ) and back again.  The yaml.load method does actually return an Ordered Dictionary.
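Same idea again in YAML, with the same stand-in payload. Plain pyyaml dumps OrderedDicts with ugly python-specific tags, so one simple (if inelegant) approach is a json round trip to plain dicts first:

import json
import xmltodict
import yaml

y = xmltodict.parse('<device><ip>10.101.0.1</ip><sysName>Cisco2811</sysName></device>')
y_yaml = yaml.dump(json.loads(json.dumps(y)), default_flow_style=False)
print(y_yaml)                              # dashes and indentation instead of braces
back = yaml.load(y_yaml)                   # and back to a python structure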

 

What’s my point?

So the original criticism was “But they have an XML API!!!” right? Well, in these little code snippets I just demonstrated how, using python and a couple of readily available libraries (pyyaml and xmltodict are not native python and must be installed), I was able to go from XML, to OrderedDict, to JSON, to YAML, with almost no effort. I could take any of these and convert it to something like a Python pickle, pull it back, and convert it to something else. It really doesn’t matter. I can go from one to another without much effort.

Personally, I don’t like working with XML. I can do it, but I would RATHER work with JSON. But that’s just my personal preference; there’s no technical reason why JSON is superior to XML that I can see. At least not in the implementations and the levels that I’m dealing with.

Just like Bilbo Baggins, I can go there and back again without worrying too much about the actual format in between, because when I’m doing something in python, I’m really looking to work with a native structure like a list or a dictionary anyway.

Anything that I get externally, I’m just going to convert into a native python data type, munge away, then convert back to whatever data format I need, be that JSON, XML, or YAML, and be on to the next task.

The actual data is what matters.

As long as it’s structured in a way that I can parse easily, I couldn’t care less how it comes in and how it goes out.

Don’t even get me started about simply wrapping CLI commands in XML…

Does that mean the format doesn’t matter at all?

No, I’m sure there are many more experienced programmers who can explain the horror stories of converting between different data formats, or that time when this thing happened that caused this other thing to blow up. But for me, I’d much rather you had a well-structured API that gives me data in a way that I can easily access, convert to a format I can work with, and move on.

Hopefully, if you’ve made it to the end of this blog, you’ll agree that the actual format is much less important than you might once have believed. Disagree? Let me know in the comments below. Always looking to learn something, and in the coding realm, I know I’ve got a LOT to learn!!!

@netmanchris