The First IoT Culling: My Devices are Dying.


Cull: to reduce the population of (a wild animal) by selective slaughter

As an early adopter of technology, I sometimes feel like I get to live in the future. Or, as William Gibson said, “The future is already here, it’s just not evenly distributed.” There are a lot of benefits to be gained from this, but there are also risks. One of the biggest risks is:

How long is the product you choose going to be around?


I was an early adopter in the first wave of IoT devices: from wearables to home convenience gadgets, I dipped my toes in the pool early. Most of these platforms started as Kickstarter projects, and I’ve been generally happy with them, at least the ones that were actually delivered. (That’s a story for another time…)

But in the last six months, the market seems to have decided that there are just too many of these small companies.

The Death Bells are Ringing

In the last year, I’ve noticed a trend: many of the early platforms I invested in seem to be disappearing. Some have been bought up and killed. Remember the Pebble watches, acquired by Fitbit? I’ve got an original Pebble and a Pebble Time that are now little more than traditional watches with short battery lives.


Some are just dying on the vine.

The latest victim? The Sense sleep monitor system by Hello. This was a Kickstarter project that really helped to define a new category. When the project launched in 2013, there was nothing else like it in the market, at least nothing that I’m aware of. Like most Kickstarter projects, it shipped later than the Aug 2014 estimate, but when it arrived it was definitely worth the wait.

This device had multiple sensors, including light, sound, humidity, VOC (air quality), and temperature. It also had remote Bluetooth motion sensors that attached to your pillows to track body movement while you sleep. The basic idea: you spend a third of your life sleeping, so shouldn’t we make sure we’re doing it right? Combining the sensor data with sleep research helps users understand why they feel good or bad in the morning, how to create the optimal conditions in the bedroom, and so on. Obviously I’m not a sleep expert, but I can say that Sense has improved the quality of my sleep since I started using it.

Just last week, we all received the sad news that Hello, the company behind the Sense product, is shutting down.


What’s happening?

Although there are some people who believe that Sense was just a faulty product, I think there is something deeper going on here. This shows a fundamental flaw with the business models that some of the early IoT and wearables players came to market with. There is a very simple business principle that somehow they seemed to completely miss.

If you want to survive as a business, your incoming cash must be more than your outgoing cash.


Pebble, Sense, and several other wearable and IoT products in the market right now were built on a single-purchase model. You buy the product and you get the unlimited right to use that product. This is the consumer’s preferred model. When I spend my money on something, I want to own it. I want to be able to use it, and I don’t want to have to pay for it again and again. At least, that’s the simple version of the thought process.

Spending some time watching various devices become little more than expensive bricks has made me re-examine that thought process though.

Why I’m looking for Subscription models now

Yup. That’s right. You heard me.

I want to find companies that are actively looking to provide value funded through a subscription model of some kind. Companies like Nest and Ring, which provide cloud storage for security cameras, are great examples of this in action.

Looking at the failing companies, the one thing I’m starting to see in common is that they tend to make a whole bunch of money up front ($10M+ for the initial Pebble Kickstarter project, one of the largest ever on that platform!). But they tend to be niche products with a limited target audience, and when that target market has been saturated… no more money comes in, and they’re left having to continue to pay for the “cloud” infrastructure required to keep their products going.

Looking at Sense and Pebble, both of these platforms sold on a hardware-only model. They have a product offering where your devices connect to cloud-based infrastructure; whether that’s AWS-based or something else is irrelevant. What most consumers don’t realize is that cloud-based infrastructure has a recurring monthly cost. That also doesn’t include the cost of ongoing platform development, whether that’s adding new features, creating a better user experience, or just upgrading to stay current with the newest versions of Apple iOS or Android shipping on current devices.

This is fine as long as you continue to sell new hardware, but as the number of new users starts to trend down and your costs stay the same… we start to see what’s happening in the market right now.

Are Subscription models the only way?

Absolutely not. There are other companies, like D-Link, iHome, or iDevices, that have a fairly broad portfolio of products and are continuously creating new ones. This helps ensure they have a healthy income stream as individual product segments become saturated. They can afford to continue to fund app development and the infrastructure required to host their services because they spread that cost over many devices.


More Deaths in the future

There have been some notable passings, such as Pebble and Sense, but I don’t think they’ll be the last by any stretch of the imagination. 2017 and 2018 are going to be hard years for early adopters as we watch our mirrors, watches, and gadgets blink eternally, with no home in the cloud to call back to. I’m hoping the new IoT players start to realize that having a good technology idea isn’t enough if you want to survive. It’s strange that I’m now looking at business models when making a consumer product purchasing decision. I guess this just goes to show how educated the consumer is truly becoming.

As I invest in my SmartHome products, I look for companies that are established with multiple streams of revenue, companies like Lutron or Philips. In some cases, like the Soma Smart Blinds, I really don’t have another option. I’ll probably buy them, but I’m not expecting these to last for the long term. I wish Soma the best of luck, but I don’t see a subscription model, and it’s not like shades are something you replace every year.

Bottom line: enjoy your first-generation wearables now. They might not be around for that much longer.


@netmanchris

Using JSONSchema to Validate input

There are a lot of REST APIs out there. Quite a few of them use JSON as the data structure that lets us get data in and out of these devices. There are a lot of network-focused blogs that detail how to send and receive data through these APIs, but I wasn’t able to find anything that specifically talked about validating the input and output to make sure we’re sending and receiving the expected information.

Testing is a crucial, and IMO too often overlooked, part of the Infrastructure as Code movement. Hopefully this post will help others start to think more about validating input and output of these APIs, or at the very least, spend just a little more time thinking about testing your API interactions before you decide to automate the massive explosion of your infrastructure with a poorly tested script. 🙂

What is JSONSchema

I’m assuming that you already know what JSON is, so let’s skip directly to talking about JSONSchema. jsonschema is a Python library that allows you to take your input/output and verify it against a known schema that defines the data types you’re expecting to see.

For example, consider this snippet of a schema which defines what a valid VLAN object looks like

"vlan_id":
{
    "description": "The unique ID of the VLAN. IEEE 802.1Q VLAN identifier (VID)", 
    "minimum": 1, 
    "maximum": 4094, 
    "type": "integer", 
    "sql.not_null": true
}

You can see that this is a small set of rules defining a valid entry for the vlan_id property of a VLAN. As network professionals, it’s obvious to us that a valid VLAN ID must be between 1 and 4094. We know this because we deal with it every day. But our software doesn’t know this. We have to teach it what a valid vlan_id property looks like, and that’s what the schema does.

our software doesn’t know this. We have to teach it

Why do we care?

Testing is SUPER important. Being able to test the input/output of what you’re feeding into your automation/orchestration framework can help you avoid, at worst, a total meltdown, or, at best, a couple of hours spent trying to figure out why your code doesn’t work.

Using JSONSchema

So the two things you’re going to need to use JSONSchema are:

  • The JSON Schema for a specific API endpoint
  • The input/output that you want to validate.

In this case, we’ll use a VLAN object that is coming out of an ArubaOS-Switch REST API.

You did know the ArubaOS-Switches have a REST API, right?

Step 1 – Loading the VLAN object

We’re going to gather the output from the VLANS API. Instead of writing custom code, we’ll just use the pyarubaoss library. I’ll leave you to check out the GitHub repo and will just paste the JSON output of a single VLAN here. I’m also going to create a second VLAN with a vlan_id of 5000, just to show how this works. 5000, of course, is not valid, and we’d like to prove that. Right?
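The gist has the full JSON, but a hedged stand-in for the two objects looks something like this (the real ArubaOS-Switch VLAN object carries more properties than I’m showing here):

# Stand-ins for the API output; the real VLAN JSON has more fields.
vlan_good = {"vlan_id": 10, "name": "TestVLAN"}
vlan_bad = {"vlan_id": 5000, "name": "BadVLAN"}  # 5000 is out of range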

Step 2 – Loading the JSON Schema Definition

Now that we have the output, we want to make sure it complies with the JSON Schema definition we’ve been provided.


Here’s a sub-set of the JSON schema that defines what a valid VLAN looks like
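Since the gist isn’t reproduced here, a trimmed sketch of that schema as a Python dictionary might look like this (the vendor’s full definition covers many more properties):

# Trimmed-down schema; the vendor's full definition is much larger.
vlan_schema = {
    "type": "object",
    "properties": {
        "vlan_id": {
            "description": "The unique ID of the VLAN. IEEE 802.1Q VLAN identifier (VID)",
            "minimum": 1,
            "maximum": 4094,
            "type": "integer"
        },
        "name": {"type": "string"}
    },
    "required": ["vlan_id"]
}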

Step 3 – Importing the JSON Schema Library and Validating

Now we’re going to load the jsonschema library into our Python session and use it to validate the VLAN object using the schema we defined above.

First we’ll look at the vlan_good object and run it through the validate function
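With the stand-in objects and schema sketched above, that looks like:

import jsonschema

# No exception (and a None return) means the object passed validation.
jsonschema.validate(vlan_good, vlan_schema)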

As you can see, there’s nothing to see here. Basically, this means that the vlan_good object conforms to the provided JSON Schema. The VLAN ID is valid, as it’s an integer value between 1 and 4094.

Now let’s take a look at the vlan_bad object and run it through the same validate function
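Same call, different object:

# This one blows up, which is exactly what we want.
jsonschema.validate(vlan_bad, vlan_schema)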

We can see that the validate function now raises an exception and lets us know very specifically that the VLAN ID 5000 is not valid:

jsonschema.exceptions.ValidationError: 5000 is greater than the maximum of 4094

Pretty cool, right? We can still definitely shoot ourselves in the foot, but at least we know the input/output data we’re using for our API is valid. To me this is important for two reasons:

  • I can validate that the data I’m trying to send to a given API conforms to what that API is expecting
  • I can validate that the vendor didn’t suddenly change their API in a way that will break my code

Wrap Up

There are a lot of networking folk who have started to take on the new set of skills required to automate their infrastructure. One of the most crucial parts of this is testing and validating the code to ensure that you’re not just blowing up your network more efficiently. JSON Schema is one of the tools in your toolbox that can help you do that.

Comments, questions? Let me know

@netmanchris

Amazon S3 Outage: Another Opinion Piece

So Amazon S3 had some “issues” last week, and it’s taken me a few days to put my thoughts together. Hopefully I’ve caught the tail end of the still-interested-enough-to-find-this-blog-valuable period.

Trying to make the best of a bad situation, the good news, in my opinion, is that this shows infrastructure people still have a place in the automated, cloudy world of the future. At least that’s something, right?

What happened:

You can read the detailed explanation on Amazon’s summary here.

In a nutshell:

  • There was a small problem
  • They tried to fix it
  • Things went bad for a relatively short time
  • They fixed it

What happened during:

The internet lost its mind. Or, more accurately, some parts of the internet went down. Some of them were extremely ironic.


Initial thoughts

The reaction to this event is amusing and it drives home the point that infrastructure engineers are as critical as ever, if not even more important considering the complete lack of architecture that seems to have gone into the majority of these “applications”.

First let’s talk about availability. Looking at the Amazon AWS S3 SLA, available here, it looks like they did fall below their 99.9% SLA for availability. A quick look at https://uptime.is/ shows that for the monthly period, they were aiming for no more than 43m 49.7s of outage. It seems they had roughly a 6-8 hour outage, so they clearly missed. Looking at the S3 SLA page, it looks like customers might be eligible for 25% service credits. I’ll let you work that out with AWS.

Don’t “JUST CLICK NEXT”

One of the first things that struck me as funny here was the fact that it was the US-EAST-1 region that was affected. US-EAST is the default region for most of the AWS services. You have to intentionally select another region if you want your service to be hosted somewhere else. But because it’s easier to just click Next, it seems that the majority of people clicked past that part and didn’t think about where they were actually hosting their services, or the implications of hosting everything in the same region and probably the same availability zone. For more on this topic, take a look here.

There’s been a lot of criticism of infrastructure people now that anyone with a credit card can go to Amazon, sign up for an AWS account, and start consuming infrastructure. This gets thrown around like it’s actually a good thing, right?

Well, this is exactly what happens when “anyone” does that. You end up with all your eggs in one basket.

“Design your infrastructure for the four S’s. Stability, Scalability, Security, and Stupidity” — Jeff Kabel

Again, this is not an issue with AWS or any cloud provider’s offerings. This is an issue with people who think that infrastructure and architecture don’t matter and can just be “automated” away. Automation is important, but it’s there so that your infrastructure people can free up some time from mind-numbing tasks to help you properly architect the infrastructure components your applications rely upon.

Why oh Why oh Why

Why anyone would architect their revenue-generating system on infrastructure that is only guaranteed to 99.9% availability is beyond me. The right answer, at least from an infrastructure engineer’s point of view, is obvious, right?

You would use redundant architecture to raise the overall resilience of the application, relying on the fact that it’s highly unlikely you’ll lose the different redundant pieces at the same time. Put simply, what are the chances that two different systems, each guaranteed to a 99.9% SLA, are going to go down at the exact same time?

Doing some really basic probability calculations, and assuming the outages are independent events, we multiply the probability of non-SLA’d time in system 1 (0.1%, or 0.001) by the same metric in system 2, and we get:

0.001 * 0.001 = 0.000001 probability of both systems going down at the same time.

Or, put another way, that’s 99.9999% uptime. Pretty great, right?

Note: I’m not an availability calculation expert, so if I’ve messed up a basic assumption here, someone please feel free to correct me. Always looking to learn!
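For anyone who wants to poke at the numbers, here’s that arithmetic as a quick Python sketch:

# Availability math for two independent systems, each with a 99.9% SLA.
sla = 0.999
p_down = 1 - sla              # 0.001 chance of being outside the SLA
p_both_down = p_down ** 2     # 0.000001, assuming independent failures
combined = 1 - p_both_down    # 0.999999
print(f"Combined availability: {combined:.4%}")  # 99.9999%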

So application people made the mistake of just signing over responsibility for their application uptime to “the cloud,” most of them probably without even reading the SLA for the S3 service or sitting down to think it through.

Really? We had people armed with an IDE and a credit card move our apps to “the cloud” and wonder why things failed.

What could they have done?

There are a million ways to answer this, I’m sure, but let’s just look at what was available within the AWS list of service offerings.

CloudFront is AWS’s content delivery network. It’s extremely easy to use, easy to set up, and it takes care of automatically moving your content to multiple AWS regions and availability zones.

Route 53 is AWS’s DNS service, which allows you to perform health checks and only direct DNS queries to resources that are “healthy,” or actively available.

There are probably a lot of other options as well, both within AWS and without, but my point is that the teams behind the applications that went down most likely didn’t bother. Or they were denied the budget to properly architect resiliency into their systems.

On the bright side, the latter just had a budget opening event.

Look who did it right

Unsurprisingly, there were companies that weathered the S3 storm like nothing happened. In fact, I was able to sit and binge-watch Netflix while the rest of the internet was melting down. Yes, it looks like it cost 25% more, but then again, I had no problems with season 4 of The Big Bang Theory at all last week, so I’m a happy customer.

Companies still like happy customers, don’t they?

The Cloud is still a good thing

I’m hoping that no one reads this as an anti-cloud post. There’s enough anti-cloud rhetoric happening right now, which I suppose is inevitable considering last week’s highly visible outage, and I don’t want to add to it.

What I do want is for people who read this to spend a little bit of time thinking about their applications and the infrastructure that supports them. This type of thing happens in enterprise environments every day. Systems die. Hardware fails. Get over it and design your architecture to treat these failures as a foregone conclusion. It IS going to happen; it’s just a matter of when. So shouldn’t we design up front around that?

Alternately, we could also choose to take the risk for those services that don’t generate revenue for the business. If it’s not making you money, maybe you don’t want to pay for it to be resilient. That’s OK too. Just make an informed decision.

For the record, I’m a network engineer well versed in the arcane discipline of plumbing packets. Cloud and application architectures are pretty far away from the land of BGP peering and routing tables where I spend my days. But for the low, low price of $15 and a bit of time on Udemy, I was able to dig into AWS and build some skills that let me look at last week’s outage with a much more informed perspective. To all my infrastructure engineer peeps: I highly encourage you to take the time, learn a bit, and get involved in these conversations at your companies. I’m hoping we can all raise the bar together.

Comments, questions?

@netmanchris

Shedding the Lights on Operations: REST, a NMS and a Lightbulb

It’s obvious I’ve caught the automation bug. Beyond just automating the network, I’ve finally started to dip my toes into the home automation pool as well.

The latest addition to the home project was the Philips Hue light bulbs. Basically, I just wanted a new toy, but imagine my delight when I found that there’s a full REST API available.

I had a REST API and a light bulb, and suddenly I was inspired!

The Project

Network Management Systems have long suffered from information overload.

Notifications have to be tuned, and if you’re really good, you can eventually get the stream down to a dull roar. Unfortunately, the notification process is still broken in that the notifications are generally dumped into your email, which, if you are anything like me…

[Screenshot: my unread email count]

Yes. That’s really my number as of this writing.

One of the ways of dealing with the deluge is to use a different medium to deliver the message. Many NMS systems, including HPE IMC, have the capability of issuing audio alarms, but let’s be honest: that can get pretty annoying as well, and it’s pretty easy to mute them.

I decided that I would use the REST interfaces of the HPE IMC NMS and the Philips Hue light bulbs to provide a visual indication of the general state of the system. Yes, there’s a valid, business-justifiable reason for doing this. But c’mon, we’re friends, right? The real reason I worked on this was that they both have REST APIs and I was bored. So why not?

The other great thing about this is that you don’t need to spend your day looking at a NOC screen. You can log in when the light goes to whatever color you decide is bad.

Getting Started with the Philips Hue API

The Philips SDK getting-started guide was actually really easy to work through. As well, there’s an embedded HTML interface that allows you to play around with the REST API directly on the Hue bridge.

Once you’ve set up your initial authentication to the bridge (see the getting-started guide), you can log in to the bridge at http://ip_address/debug/clip.html

From there it’s all fun and games. For instance, if you wanted to see the state of light number 14, you would navigate to api/%app_name%/lights/14 and you would get back the following in nice, easy-to-read JSON.

[Screenshot: JSON response from the bridge debug interface]

From here, it would be fairly easy to use an HTTP library like requests to start issuing commands at the bridge, but, as I’m sure you’re aware by now, there’s very little untrodden territory in the land of Python.

PHUE library

Of course someone has been here before me and has written a nice library that works with both Python 2 and Python 3. You can see the library source code here, or you can simply run

$ pip install phue

from your terminal.

The Proof of Concept

You can check out the code for the proof of concept here. Or you can watch the video below.

Breaking down the code

1) Grab Current Alarm List

2) Iterate over the Alarms and find the one with the most severe alarm state

3) Create a function to correlate the alarm state to the color of the Philips Hue light bulb.

4) Wait for things to move away from green.
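Stitched together, the logic looks roughly like the sketch below. get_current_alarms() is a stub standing in for the real pyhpeimc alarm call, and the hue numbers are ballpark values from my tinkering, not gospel:

import time
from phue import Bridge

# Hue colours live on a 0-65535 wheel; rough severity-to-colour mapping.
SEVERITY_TO_HUE = {1: 0,       # critical -> red
                   2: 6000,    # major -> orange
                   3: 12750,   # minor -> yellow
                   4: 25500}   # info/ok -> green

def get_current_alarms():
    # Stub standing in for the real pyhpeimc alarm call; it should
    # return a list of alarm dicts that include a severity field.
    return [{'faultLevel': '2'}, {'faultLevel': '4'}]

def worst_severity(alarms):
    # Lower faultLevel = more severe; default to "ok" if the list is empty.
    return min((int(a['faultLevel']) for a in alarms), default=4)

bridge = Bridge('192.168.1.10')          # your Hue bridge IP
while True:
    hue = SEVERITY_TO_HUE[worst_severity(get_current_alarms())]
    bridge.set_light(14, 'hue', hue)     # light 14 is the NOC indicator
    time.sleep(60)                       # check again in a minute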

Lessons Learned

The biggest lesson here was that colours on a screen and colours on a light bulb don’t translate very well. The green and the yellow lights weren’t far enough apart to be useful as a visual indicator of the health of the network, at least not IMHO.

The other thing I learned is that you can waste a lot of time working on aesthetics. Because I was leveraging the phue library and the pyhpeimc library, 99% of the code was already written. The project probably took me less than 10 minutes to get the logic together, and more than a few hours playing around with different colour combinations to get something that I was at least somewhat OK with. I imagine the setting and the ambient light would very much affect whether or not this looks good in your place of business. If you use my code, you’ll want to tinker with it.

Where to Next

We see IoT devices all over in our personal lives, but it’s interesting to me that I could set up a visual indicator of network health for a NOC environment for less than $100. Just thinking about some of the possibilities here:

  • Connect each NOC agent’s ticket queue to a light color. Once they are assigned a ticket, the light goes orange for DO-NOT-DISTURB.
  • Connect the app to a ClearPass authentication API and flash the bulbs blue when the boss walks in the building. Always good to know when you should be shutting down Solitaire and look like you’re doing something useful, right?
  • Connect the app to a Meridian location API and turn all the lights green when the boss walks on the floor.

Now, I’m not advocating that you hide things from your boss, but imagine how much faster network outages would get fixed if we didn’t have to stop fixing them to explain to the boss what was happening and what we were going to do to fix it, right?

Hopefully, this will have inspired someone to take the leap and try something out.

Comments, questions?

@netmanchris

Auto Network Diagram with Graphviz

One of the most useful and least updated pieces of network documentation is the network diagram. We all know this, and yet we still don’t have/make the time to update it until something catastrophic happens, and then we say to ourselves:

Wow. I wish I had updated this sooner…

Graphviz

According to the website 

Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics,  software engineering, database and web design, machine learning, and in visual interfaces for other technical domains.

Note: Lots of great examples and docs there, BTW. Definitely check it out.

Getting started

So you’re going to have to first install Graphviz from their website. Go ahead… I’ll wait here.

Install the Graphviz Python binding

This should be easy, assuming you’ve already got Python and pip installed. I’m assuming that you do.

$ pip install graphviz

Getting LLDP Neighbors from Arista Devices

You can use the Arista pyeapi library, which is also installable through pip. There’s a blog that introduces you to the basics here, which you can check out. Essentially, I followed that blog and then substituted the “show lldp neighbors” command to get the output I was looking for.
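A minimal sketch of that step, assuming a switch already defined in your ~/.eapi.conf (the connection name here is made up):

import pyeapi

# 'veos01' must match a [connection:veos01] entry in ~/.eapi.conf.
node = pyeapi.connect_to('veos01')
response = node.enable('show lldp neighbors')
for neighbor in response[0]['result']['lldpNeighbors']:
    print(neighbor['port'], '->',
          neighbor['neighborDevice'], neighbor['neighborPort'])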

Creating a Simple Network Diagram

The code for this is available here

Essentially, I’m just parsing the JSON output from the Arista eAPI and creating a DOT file, which is used to generate the diagram.
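A condensed sketch of the idea, with the parsed neighbor list hard-coded so it runs on its own:

from graphviz import Digraph

# (local device, remote device) pairs parsed out of the eAPI output.
neighbors = [('spine1', 'leaf1'), ('spine1', 'leaf2'), ('spine2', 'leaf1')]

dot = Digraph(name='SimpleTopo', format='png')
for local, remote in neighbors:
    dot.edge(local, remote)
dot.render('SimpleTopo')  # writes SimpleTopo and SimpleTopo.png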

Pros: It’s automated

Cons: It’s not very pretty at all.

[Diagram: SimpleTopo.png]

Prettying it up a Bit

Code for this is available here

So with a little bit of work using the .attr methods, we can pretty this up a bit. For example, the

dot.attr('node', shape='box')

method turns the node shape from an ellipse into a box shape. The other transformations are pretty obvious as well.

Notice that we changed the shape of the nodes, the style of the arrows a bit, and also shaded in the boxes. There are lots of other modifications we can make, but I’ll leave you to check out the docs for that.
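A sketch of that prettier pass, reusing the hard-coded neighbor list from above:

from graphviz import Digraph

neighbors = [('spine1', 'leaf1'), ('spine1', 'leaf2'), ('spine2', 'leaf1')]

dot = Digraph(name='SimplePrettierTopo', format='png')
dot.attr('node', shape='box', style='filled', fillcolor='lightgrey')
dot.attr('edge', arrowsize='0.5')
for local, remote in neighbors:
    dot.edge(local, remote)
dot.render('SimplePrettierTopo')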

[Diagram: SimplePrettierTopo.png]

Adding your own graphics

Code for this is available here

Getting a bit closer to what I want, but I still think we can do a bit better. For this example, I used MS Paint to create a simple PNG file with a switch-ish image on it. From what I can tell, there’s no reason you couldn’t just use the vendor icons for whatever devices you’re using, but for now, I’m just playing with something quick and dirty.

Once the file is created and placed somewhere in the path, you can use this method

dot.attr('node', image="./images/switch1.png")

to get the right image.  You’ll also notice I used

dot.attr('edge', arrowhead='none')

to remove the arrowheads. (I actually removed another command, can you spot it?)

[Diagram: SimplePrettierGraphicTopo.png]

Straighter Lines

Code for this is available here

So looking at this image, one thing I don’t like is the curved lines. This is where Graphviz beat me for the day. I did find that I was able to apply the

dot.graph_attr['splines'] = "ortho"

attribute to the dot object to get the straight lines I wanted, but when I did that, I got a great message telling me that I would need to use xlabels instead of standard labels.

[Diagram: SimplePrettierGraphicOrthoTopo.png]

Next Steps

Code for this is available here

For this next step, it was time to get the info live from the device, and also to attempt to stitch multiple devices together into a single topology. One thing I noticed is that the name of the node MUST match the hostname of the device; otherwise you end up with duplicate nodes. You can see there’s still a lot of work to do to clean this up, but I think it’s worth sharing. Hopefully you do too.

[Diagram: MultiTopo.png]

Thoughts

Pros: Graphviz is definitely cool. I can see it saving a lot of the time spent drawing network diagrams. The fact that you could automatically run this every X period to ensure you have an up-to-date network diagram at all times is pretty awesome. It’s customizable, which is nice, and multi-vendor support would be pretty easy to implement. Worst-case scenario, you could just poll the LLDP MIB with SNMP and dump the data into the appropriate bucket. Not elegant, but definitely functional.

Cons: The link labels are a pain. In the short time I was playing with it, I wasn’t able to google or documentation my way into what I want, which is a label on each end of the link telling me which interface is on which device, not the glob of data in the middle that makes me wonder which end is which.

The other thing I don’t like is the curvy lines. I want straight lines. Whether that’s an issue with the Graphviz Python library I’m using or a problem with the whole Graphviz framework isn’t clear to me yet. Considering the time saved, I could probably live with this as is, but I’d also like to do better.

If anyone has figured out how to get past these minor issues, please drop me a line! @netmanchris on Twitter, or comment on the blog.

As always, comments and fixes are always appreciated!

@netmanchris

Pseudo-Math to Measure Network Fragility Risk

Some of you may have heard me ranting on Packet Pushers about stupid network tricks and why we continue to be forced to implement kludges as a result. I made some comment about trying to come up with a metric to help measure the deviation of the network from the “golden” desired state to the dirty, dirty thing that it’s become over time due to kludges and just general lack of network hygiene.

So I decided that I would write a bit of code to get the conversation started. All the code discussed is available on my GitHub here.

The Idea

What I wanted here was to create some pseudo-mathematical way of generating a measurement that can communicate to the management structure WHY the requested change is a really, really bad idea.

Imagine these two conversations:

[Image: bad-conversation]

[Image: good-conversation]

Which conversation would you like to be part of?

Assumptions:

I’m making some assumptions here that are important to talk about.

  1. You have a source of truth defined for your network state. That is, you have abstracted your network state into some YAML files or something like that.
  2. You have golden configuration templates (e.g. Jinja2) defined. These templates can be combined with your source of truth to generate the “golden” config for any network device at any time.
  3. You have “current-state” templates (Jinja2) defined that include all your kludges. These can be combined with your source of truth to generate the current running config for any network device at any time.

The Fragility Metric

So how does one calculate the fragility of a network?

Wow! Thanks for asking!

My methodology is something like this.

  1. Generate the configurations for all network devices using the golden configuration templates.
  2. Generate the configurations for all network devices using the “current-state” configuration templates.

We should now be left with a directory full of pairs of configs.

We then use the Python difflib library’s SequenceMatcher class to calculate the difference between the pairs of files. It lets us take two text files, eliminate the whitespace, and compare their contents. One of the cool things is that it can give us a ratio metric: a number between zero and one that measures how close the two files are.
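A sketch of that comparison step; the configs/golden and configs/current directory layout is my assumption, not a requirement:

from difflib import SequenceMatcher
from pathlib import Path

ratios = {}
for golden in Path('configs/golden').glob('*.cfg'):
    current = Path('configs/current') / golden.name
    # split() tokenizes on whitespace, so formatting differences don't count
    a = golden.read_text().split()
    b = current.read_text().split()
    ratios[golden.name] = SequenceMatcher(None, a, b).ratio()

for name, ratio in ratios.items():
    print(f"{name} stability metric: {ratio}")

stability = sum(ratios.values()) / len(ratios)
print(f"Network Stability Metric: {stability}")
print(f"Network Fragility Metric: {1 - stability}")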

What this means is that you get output like this:

5930-1.cfg stability metric: 1.0
5930-2.cfg stability metric: 0.9958677685950413
7904-1.cfg stability metric: 0.9428861101723556
7904-2.cfg stability metric: 0.9405405405405406

Now that we’ve got a ratio for each pair of files, we can calculate the mean across all the files to produce the network stability metric and the network fragility metric:

Network Stability Metric: 0.9698236048269844
Network Fragility Metric: 0.030176395173015624

HINT: If you add the two numbers together…

You can also get a nice graph:

[Graph: blog_graphic]

Note: The pygal library produces a much cooler graphic which you can see here

The Approach

So the first thing I want to make clear is that I don’t intend this to REALLY measure the risk of a given configuration.

One idea I did have was to adjust the weighting of a specific configuration based on the role of that device.

Example – The core switch blowing up is PROBABLY a lot worse than an edge switch tanking because of some kludgey configuration.

This would be fairly easy to implement by placing some metadata on the configs to record their role.

It would be fairly easy to go down rat holes here, trying to identify every single line that’s changed and weight individual changes.

Example – Look for [‘BGP’,’OSPF’,’ISIS’,’EIGRP’] in the dirty config and then weight those lines higher. Look for [‘RIP’] and rate that even higher.

Cause… c’mon… friends don’t let friends run RIP, right?

Again, all the code is available here. Have a look. Think about it. Give me your feedback; I’d love to know if this is something you see value in.


@netmanchris

Jinja2 and… PowerShell? Automating(ish) Microsoft DHCP

Most of us have home labs, right?

I’m in the middle of doing some zero-touch provisioning testing, and I had the need to create a bunch of DHCP scopes and reservations, some with scope-specific options and some with client-specific options. As often as I’ve had to create a Microsoft DHCP server in the lab and set up some custom scopes, I decided I was going to figure out how to automate this as much as I could with as little effort as possible.

After taking a quick look around for a Python library to help me out, Python being my weapon of choice, I realized that I was going to have to get into some PowerShell scripting. I’ve dabbled before, but I’ve never really taken the time to learn much about PowerShell control structures (loops, conditionals, pipes, etc.). I really didn’t want to spend the time getting up to speed on a new language, so I instead decided to use the Python skills I had to auto-generate the scripts, using a little Jinja2 and some google-technician skills.

Figuring out the PowerShell Syntax

This was the easy part, actually. Microsoft has some pretty great documentation for PowerShell cmdlets, and there were more than a couple of blogs out there with examples. Unfortunately, I didn’t take notes on all the posts I went through… yeah, I suck, but I offer thanks to everyone whose work I leaned on.

Creating the Scopes

The Jinja Template for Creating Scopes

Once I figured out the specific syntax I needed to generate the DHCP scopes with the proper scope options, I dropped the syntax into a Jinja template, using a for loop to run over multiple scopes as defined in the YAML file (see the next gist).

The YAML file to define the Scopes

I chose to use YAML to define the inputs because, well, that’s what I felt like working in at the time, and it also allowed me to separate the global values from those specific to each scope. As I move forward in my full home-lab automation project, I’m thinking I might use a single global-values YAML file to hold all the global values for everything in the entire infrastructure, but for now, I decided to keep things simple and just include them in the same YAML file.

If you take a look at the gist below, you should be able to easily identify what each of the different elements is for.

The Python Script to Generate the PowerShell Script

Nothing too complicated here: I load the variables, pass them into the Jinja2 library, and spit out a file with a PS1 extension.
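Here’s a compressed sketch of the whole pipeline with the template and YAML inlined for brevity; every field name is my own invention, not the original gist’s:

import yaml
from jinja2 import Template

# Inlined Jinja2 template; in the real project this lives in its own file.
TEMPLATE = """\
{% for scope in scopes %}
Add-DhcpServerv4Scope -Name "{{ scope.name }}" -StartRange {{ scope.start }} -EndRange {{ scope.end }} -SubnetMask {{ subnet_mask }}
Set-DhcpServerv4OptionValue -ScopeId {{ scope.id }} -Router {{ scope.gateway }} -DnsServer {{ dns_server }}
{% endfor %}
"""

# Inlined YAML with global values plus per-scope values.
DATA = yaml.safe_load("""
subnet_mask: 255.255.255.0
dns_server: 10.101.0.20
scopes:
  - {name: mgmt, id: 10.101.10.0, start: 10.101.10.10, end: 10.101.10.200, gateway: 10.101.10.1}
  - {name: servers, id: 10.101.20.0, start: 10.101.20.10, end: 10.101.20.200, gateway: 10.101.20.1}
""")

with open('create_scopes.ps1', 'w') as out:
    out.write(Template(TEMPLATE).render(**DATA))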

Creating the Reservations

For my specific project, I need to set a different DHCP option 67 for some of my clients. Although I could have manually created these as well, I decided that I would just take the same approach and template the whole thing.

The Jinja Template for Creating DHCP Reservations

Very similar to the approach above: I figured out the syntax for one, and then I created a Jinja template using a for loop.

The CSV file to define the DHCP Reservations

In this case, since I didn’t have to deal with anything more than the reservations, I decided on using a CSV file as the input format. Although YAML is what all the cool kids are doing, a CSV file allows me to edit this in Excel, which I found easier for this specific project. There are only a couple of reservations in here right now, but I’ve got another 30 or so devices for which I will need to perform this same step, so being able to quickly add reservations to a CSV file is a good thing in the long run.

The Python Script to Generate the PowerShell Script
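Same pattern for the reservations, just with csv.DictReader as the loader; the column names (scope, ip, mac, name, option67) are my assumption:

import csv
from jinja2 import Template

TEMPLATE = """\
{% for r in reservations %}
Add-DhcpServerv4Reservation -ScopeId {{ r.scope }} -IPAddress {{ r.ip }} -ClientId {{ r.mac }} -Name "{{ r.name }}"
Set-DhcpServerv4OptionValue -ReservedIP {{ r.ip }} -OptionId 67 -Value "{{ r.option67 }}"
{% endfor %}
"""

# Each CSV row becomes a dict keyed by the header row.
with open('reservations.csv') as f:
    rows = list(csv.DictReader(f))

with open('create_reservations.ps1', 'w') as out:
    out.write(Template(TEMPLATE).render(reservations=rows))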

Wrap up

To be honest, it’s a bit lazy, and I wish I had more time to learn more things, but sometimes you just use what you know to address a problem in a quick and dirty way. Hopefully, someone else will find these useful as well.

At the beginning of the year, I wrote a blog post saying my major goal was to be able to automate the configuration of my entire lab with as little effort as possible. Considering how many times I’ve had to manually create DHCP scopes and reservations over the years, I think this one will definitely come in handy. Hopefully someone else will think so too!

Questions, Comments? Feel free to post below!

@netmanchris