DevOps for Networking Forum in Santa Clara

Normally, I would have written this a few weeks ago, but sometimes the world just takes the luxury of time away from you. In this case, I couldn’t be happier, though, as I’m about to be part of something that I believe is going to be truly amazing. This event is really a testament to Brent Salisbury and John Willis’s commitment to community and their relentless pursuit of evolving the whole industry, bringing along as many of the friends they’ve made along the way as possible.

Given the speaker list, I don’t believe any event in recent (or long-term!) memory has assembled such an amazing group of speakers. The most amazing part is that this event was put together in the last month!

If you’re in the Bay Area, you should definitely be there. If you’re not, you should buy a plane ticket, as you might not ever get a chance like this again.

 

DevOps Forum for Networking

From the website

 

previously known as DevOps4Networks is an event started in 2014 by John Willis and Brent Salisbury to begin a discussion on what Devops and Networking will look like over the next five years. The goal is to create a conversation for change similar to what CloudCamp did for Cloud adoption and DevopsDays for Devops.

 

When and Where

You can register here

DevOps Networking Forum 2016

Monday, March 14, 2016 9:00 AM – 5:00 PM (Pacific Time)

Santa Clara Convention Center
5001 Great America Pkwy
Santa Clara, California 95054
United States
Questions? Contact us at events@linuxfoundation.org

 Who

You can hit the actual speakers page here, but here’s the short list:

  • Kelsey Hightower, Google
  • Kenneth Duda, Arista
  • Dave Meyer, Brocade
  • Anees Shaikh, Google
  • Chris Young, HPE
  • Leslie Carr, SFMIX
  • Dinesh Dutt, Cumulus
  • Petr Lapukhov, Facebook
  • Matt Oswalt, keepingitclassless
  • Scott Lowe, VMware

I’ve also heard that a few other industry notables will be wandering the hallways as ONS starts to spin up for the week.

Yup. What an amazing list! And for the low, low price of $100, you can join us as well!

OMG

I’m absolutely honoured and, to be honest, a little intimidated to be sharing a spot with some of the industry luminaries who have been personal guiding lights for me over the last five years. I’m hoping to be a little educational, a little entertaining, and other than that, I’ll be in the front row with a box of popcorn soaking up as much as I can from the rest of the speakers.

Hope to see you there!

 

@netmanchris

 


OpenSwitch in an OVA

 

First, disclaimer: I’m an HPE employee. Hewlett Packard Enterprise is a major contributor to the OpenSwitch project. Just thought you should know in case you think that affects my opinion here.

If you need more info on the OpenSwitch project, you can check out the other posts in this series here and here.

Network Engineers Don’t Like Learning New Things

Got your attention, didn’t I? After the first couple of posts on OpenSwitch and a lot of discussions about this cool new project at some recent events, one piece of feedback came back fairly consistently from traditional engineers: OpenSwitch is hard to get running because there are so many new things to learn.

When it was released in November of last year, the initial demonstration environment was actually pretty simple and streamlined to get up and running, as long as you were a developer.

The process involved the standard set of dev tools:

  • VirtualBox
  • Vagrant
  • Docker Toolbox
  • Docker

For anyone involved in a development environment, these tools are like an old hoodie on a cold winter day: welcome and familiar.

But for the majority of network engineers who are far more comfortable with a console cable and a telnet session, it appears that the barrier to entry was just too high for people to start getting their hands dirty. 

 

I was able to bring this feedback to the OpenSwitch engineering team, and I’m happy to bring the news that OpenSwitch is now available in an OVA format that you can run natively on VirtualBox.

 

Read the Docs

I’m going to go the long way around to get this up and running, but I’ve heard that the OVA file may be prebuilt and available on the OpenSwitch website in the near future. *I’ll try and come back to edit this post with a direct link if that happens*

The build process for OpenSwitch is actually well documented here. Depending on the OS you’re running, there are some specific dependencies, which are also documented. I won’t cover those since they are already there, but make sure you do check the docs carefully when you’re creating your build system, as it won’t work unless you follow them.

Getting the Code

Since we’re simply creating an OVA image, we don’t need the entire OPS Git repo; we only need the ops-build portion. The first thing we’re going to do is get to a terminal window on your Ubuntu 14.04 Linux host, create a directory called opsova, and then clone the ops-build repository using the following command. This will copy the contents of the remote ops-build repository into a local directory called ops-build on your machine.

git clone https://git.openswitch.net/openswitch/ops-build
 

Selecting the Build

Now that we’ve cloned the necessary code to our local machine, we’re going to select the type of OpenSwitch build that we’d like to create. If you were pushing this to a supported white box switch, you would use the following command:

make configure genericx86-64

But since we’re creating an OVA that we can import directly into Oracle VirtualBox (because it’s free!), we’re going to configure the appliance build:

make configure appliance
 

Creating the OVA

Now for the final-ish step of the build. We’re going to run the make command to actually create the OVA file.

Warning: if you’re doing this in a VM, you want to give it lots of CPU for this step, or it could take quite a long time. Remember burning CDs on a 1x speed burner? Yeah… it feels like that.

make
 

Running the OVA

Now that we’ve successfully created the OVA, the next step is to move it out of the VM to the host machine where you have Oracle VirtualBox installed. This, of course, assumes that you followed my example and did this in an Ubuntu VM rather than on a bare metal machine. From here, we follow a typical deployment and import the OVA using the following steps.

Finding the OVA

Once the make process finishes (there may be a couple of warnings, but it should build successfully), navigate to the ./images folder, where you will find a symbolic link to the OVA file. Following the symbolic link, the actual OVA is located in ./ops-build/build/tmp/deploy/images/appliance.

Screen Shot 2016 01 19 at 1 14 15 PM

Now you need to get it off of your VM host and move it over to the machine where you are running VirtualBox. (I’m assuming you are comfortable with moving files between two machines, so I’m not covering that here. Please feel free to point out in the comments if I’ve made a false assumption.)


Importing the OVA into VirtualBox

 

Now that we’ve moved this over to the host machine where you’re running VirtualBox, you simply choose File > Import Appliance, navigate to the directory where you stored the OVA, click Next a couple of times, and you should be good to go.

Screen Shot 2016 01 19 at 1 18 25 PM

 

Logging into OpenSwitch

In the last part of this post, we’re going to log in to the OPS image. The default username for the appliance build is root with no password. Simply type in the username and you should be in the system.

If you want to jump ahead of the next post, you can now type vtysh at the command prompt to pop into the Quagga network shell, which is where network types like us will find ourselves most at home.

 

 

Screen Shot 2016 01 19 at 1 24 48 PM

What’s Next

In the next post, I’ll be looking at some basic configuration tasks, like adding an IP address and establishing basic network connectivity. If you have any issues getting this running, please feel free to post in the comments below, or even better, get involved in the OPS community by using the mailing list or the IRC channels. (You can find information on all the ways to participate in the OPS community here.)

 

@netmanchris

Installing OpenSwitch

 

First, disclaimer: I’m an HP employee. HP’s a major contributor to the OpenSwitch project. Just thought you should know in case you think that affects my opinion here.

If you need more info on the OpenSwitch project, you can check out the first post in this series here.

Getting our hands dirty

This section comes down to three steps; if you don’t follow these steps, you won’t succeed. I’m not going to go into detail on them, as I’m assuming you can figure this out if you found this blog. 🙂

– Install VirtualBox

– Install Vagrant

– Install Docker Toolbox

I’m not running OpenSwitch on a Windows box in this case, as the documentation covers the ’IX builds; I’m running this natively on OS X, which also means that I’ve had to install Docker Toolbox to get Docker containers to work. I’m also assuming that you’ve already installed VirtualBox and Vagrant for the following section.

Installing Vagrant Plugin

From a terminal window, run the vagrant plugin install vagrant-reload command from the CLI. This should show the following output.

Screen Shot 2015 10 20 at 8 28 45 PM

Installing the OpenSwitch Dev Environment

For this section, I’m assuming you have already downloaded the vagrant files from here into your working directory.

Running the Docker Toolbox Plugin

Run the Docker QuickStart Terminal application and wait for the VirtualBox image to come to a running state. You should be able to see the following:

Screen Shot 2015 10 20 at 8 30 50 PM

Vagrant up!

From the terminal, navigate to where you have unzipped the OpenSwitch Vagrant files that you downloaded from here. Run the vagrant up command from the CLI. At this point some magic happens. (Read more on Vagrant here if you’ve never worked with this tool before. The magic is obviously not magic, but I just don’t feel like explaining the whole process in this post.)

 

On an OS X box, you’re not running as root, so you may end up with the following window:

Screen Shot 2015 10 20 at 8 33 27 PM

If you hit this, don’t worry, just SUDO it! 

Screen Shot 2015 10 20 at 8 36 39 PM

 Accessing the OpenSwitch

From the same terminal window, issue the sudo vagrant ssh command to access the OpenSwitch shell (CLI).

 

If you are successful, you should see the following output. Notice the prompt has changed to vagrant@switch.

Screen Shot 2015 10 20 at 8 39 21 PM

Accessing the network interface

From the vagrant@switch prompt, issue the sudo vtysh command and you will now have access to an industry-standard hierarchical CLI like the ones we all know and love!

Screen Shot 2015 10 20 at 8 41 36 PM

 My thoughts so far

Getting this up and running has been relatively painless. There were a couple of small OS X-specific things needed to get it running that were not covered in the OpenSwitch quick start guide, but nothing that a little patience and Google didn’t help me solve in a few minutes. The install experience was pretty easy, the guides were pretty accurate, and getting this up and running should be something most of us can follow without much trouble.

OpenSwitch doesn’t have what I would call a robust network stack at this point in time, but we’re still really early in this world. Now that I’ve got it up and running, I’m looking forward to starting to look at the alternate interfaces such as OVSDB and REST, as described here.

Anyone else got this up and running yet? Thoughts? Let me know in the comments below!

 

@netmanchris

Sometimes Size Matters: I’m sorry, but you’re just not big enough.

So now that I’ve got your attention… I wanted to put together some thoughts around a design principle I call the acceptable unit of loss, or AUL.

Acceptable Unit of Loss: def. A unit to describe the amount of a specific resource that you’re willing to lose

 

Sounds pretty simple, doesn’t it? But what does it have to do with data networking?

White Boxes and Cattle

2015 is the year of the white box. For those of you who have been hiding under a router for the last year, a white box is basically a network infrastructure device, right now limited to switches, that ships with no operating system.

The idea is that you:

  1. Buy some hardware
  2. Buy an operating system license (or download an open-source version)
  3. Install the operating system on your network device
  4. Use DevOps tools to manage the whole thing

Add ice. Shake, and IT operational goodness ensues.

Where’s the beef?

So where do the cattle come in? Pets vs. cattle is something you can research elsewhere for a more thorough treatment, but in a nutshell, it’s the idea that pets are things you love and care for and let sleep on the bed and give special treats to at Christmas. Cattle, on the other hand, are things you give a number, feed from a trough, and kill off without remorse if a small group suddenly becomes ill. You replace them without a second thought.

Cattle vs. pets is a way to describe the operational model that’s been applied to server operations at scale. The metaphor looks a little like this:

The servers are cattle. They get managed by tools like Ansible, Puppet, Chef, SaltStack, Docker, Rocket, etc., which at a high level all allow a new server to be instantiated with a very specific configuration, with little to no human intervention. Fully orchestrated.

Your server starts acting up? Kill it. Rebuild it. Put it back in the herd.

Now, one thing a lot of enterprise engineers seem to miss is that this operational model is predicated on your application having been built with a well-thought-out scale-out architecture, one that allows the distributed application to continue operating when the “sick” servers are destroyed and that seamlessly integrates the new servers into the collective without a second thought. Pretty cool, no?

 

Are your switches Cattle?

So this brings me to the Acceptable Unit of Loss. I’ve had a lot of discussions with enterprise-focused engineers who seem to believe that white box switching and DevOps tools are going to drive down all their infrastructure costs and solve all their management issues.

“It’s broken? Just nuke it and rebuild it!”  “It’s broken? grab another one, they’re cheap!”

For me, this particular argument only holds up if the customer’s AUL metric is big enough.

To hopefully make this point I’ll use a picture and a little math:

 

Consider the following hardware:

  • HP C7000 Blade Server Chassis – 16 Blades per Chassis
  • HP 6125XLG Ethernet Interconnect – 4 x 40Gb Uplinks
  • HP 5930 Top of Rack Switch – 32 40G ports, but from the data sheet “ 40GbE ports may be split into four 10GbE ports each for a total of 96 10GbE ports with 8 40GbE Uplinks per switch.”

So let’s put this together

Screen Shot 2015 03 26 at 10 32 56 PM

So we’ll start with

  • 2 x HP 5930 ToR switches

For the math, I’m going to assume dual 5930s with dual 6125XLGs in each C7000 chassis, and we’ll assume all links are redundant, making the math a little bit easier. (We’ll only count this against one 5930, cool?)

  • 32 x 40Gb ports on the HP 5930, minus the 8 x 40Gb ports saved for spine uplinks, leaves 24 x 40Gb ports for connection to those HP 6125XLG interconnects in the C7000 blade chassis.
  • 24 x 40Gb ports from the HP 5930 will allow us to connect 6 x 6125XLGs using all four of their 40Gb uplinks.

Still with me? 

  • 6 x 6125XLGs means 6 x C7000 chassis, which translates into 6 x 16 physical servers.
Just so we’re all on the same page, if my math is right: we’ve got 96 physical servers in six blade chassis, connected through the interconnects at 320Gb each (4 x 40Gb x 2; remember the redundant links?) to the dual HP 5930 ToR switches, which have (8 x 40Gb from each HP 5930) 640Gb of bandwidth out to the spine.

If we go with a conservative VM to server ratio of 30:1,  that gets us to 2,880 VMs running on our little design. 
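If you want to sanity-check that arithmetic, the whole chain of assumptions fits in a few lines of Python. All the inputs come straight from the numbers above; nothing new is being claimed here:

```python
# Sanity check of the design math above; all inputs come from the text.
TOR_40G_PORTS = 32             # HP 5930 ToR 40Gb ports
SPINE_UPLINKS = 8              # 40Gb ports reserved per 5930 for the spine
UPLINKS_PER_INTERCONNECT = 4   # 40Gb uplinks per HP 6125XLG
BLADES_PER_CHASSIS = 16        # HP C7000 blade chassis
VMS_PER_SERVER = 30            # conservative VM:server consolidation ratio

downlinks = TOR_40G_PORTS - SPINE_UPLINKS        # ports left for the chassis
chassis = downlinks // UPLINKS_PER_INTERCONNECT  # C7000s we can attach
servers = chassis * BLADES_PER_CHASSIS           # physical servers
vms = servers * VMS_PER_SERVER                   # VMs at 30:1
spine_bw_gb = 2 * SPINE_UPLINKS * 40             # Gb to the spine, both ToRs

print(chassis, servers, vms, spine_bw_gb)  # 6 96 2880 640
```

Six chassis, 96 servers, 2,880 VMs, 640Gb to the spine — the same numbers as the hand-drawn version.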

How much can you lose?

So now is where you ask the question:  

Can you afford to lose 2,880 VMs? 

According to the cattle-and-pets analogy, cattle can be replaced with no impact to operations because the herd will move on without noticing. I.e., the Acceptable Unit of Loss is small enough that you’re still able to get the required value from the infrastructure assets.

The obvious first objection I’m going to get is

“But wait! There are two REDUNDANT switches right? No problem, right?”

The reality of most networks today is that they are designed to maximize network throughput and make efficient use of all available bandwidth. MLAG, in this case brought to you by HP’s IRF, allows you to bind interfaces from two different physical boxes into a single link aggregation pipe. (Think vPC, VSS, or whatever other MLAG technology you’re familiar with.)

So I ask you, what are the chances that you’re running the unit at below 50% of the available bandwidth? 

Yeah… I thought so.

So the reality is that when we lose that single ToR switch, we’re actually going to start dropping packets somewhere, since you’ve been running the system at 70-80% utilization to maximize the value of those infrastructure assets.

So what happens to TCP-based applications when we start to experience packet loss? For a full treatment of the subject, feel free to go check out Terry Slattery’s excellent blog on TCP Performance and the Mathis Equation. For those of you who didn’t follow the math, let me sum it up for you.

Really Bad Things.  

On a ten gig link, bad things start to happen at 0.0001% packet loss. 
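If you want to see where a number like that comes from, the Mathis equation (throughput is bounded by MSS / (RTT × √p)) is easy to plug values into. The sketch below assumes a 1460-byte MSS and a 1 ms round-trip time; those two inputs are my illustrative picks, not figures from Terry’s post:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on a single TCP flow's throughput per the Mathis
    equation: rate < (MSS / RTT) * (1 / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) / sqrt(loss_rate)

# 1460-byte MSS, an assumed 1 ms data-centre RTT, 0.0001% (1e-6) loss
bw = mathis_throughput_bps(1460, 0.001, 0.000001)
print(f"{bw / 1e9:.2f} Gbps")  # 11.68 Gbps
```

At 0.0001% loss the ceiling sits barely above a 10Gb link; nudge the loss rate or the RTT up even slightly and a single flow can no longer fill the pipe. Really bad things.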

Are your Switches Cattle or Pets?

So now that we’ve done a bit of math and metaphor, we get to the real question of the day: are your switches cattle, or are they pets? I would argue that if you’re measuring your AUL in less than 2,000 servers, then your switches are probably pets. You can’t afford to lose even one without bad things happening to your network and, more importantly, to the critical business applications being accessed by those pesky users. Did I mention they are the only reason the network exists?

Now, this doesn’t mean that you can’t afford to lose a device. It’s going to happen. Plan for it. Have spares, support contracts, whatever. But my point is that you probably won’t be able to go with a disposable infrastructure model like the one suggested by many of the engineers I’ve talked to in recent months about why they want white boxes in their environments.

Wrap up

So are white boxes a bad thing if I don’t have a ton of servers and a well-architected distributed application? Not at all! There are other reasons why white box could be a great choice for comparatively smaller environments. If you’ve got the right human resource pool internally with the right skill set, there are some REALLY interesting things you can do with a white box switch running an OS like Cumulus Linux. For some ideas, check out this Software Gone Wild podcast with Ivan Pepelnjak and Matthew Stone.

But in general, if your metric for Acceptable Unit of Loss is not measured in data centres, rows, pods, or entire racks, you’re probably just not big enough.

 

Agree? Disagree? Hate the hand drawn diagram? All comments welcome below.

 

@netmanchris

Network Management – How to get started

Network Management Skills

In the last few years, I’ve noticed that I’m a little different. It’s not just because I wear coloured socks or because my hair looks like it’s styled after Albert Einstein’s. I’ve developed a different skill set than the majority of my pre-sales or post-sales network professional peers. What skills, you ask? Network management and operations.

Why I chose to develop Network Management skills

About five years ago, I took a look at the market and thought, “This stuff is complicated.” Earth-shattering observation, right? It sounds simple, but then I started looking at some of the tools we had at the time, and I realized that NMS tools could really help to automate not only information gathering but also the configuration tasks in our networks. At the time, we had a cool little tool called 3Com Network Director. It ran on a single PC, had no web interface, and really only managed 3Com gear. But it was better than running CLI commands all day long. And the monitoring aspects really helped my customers identify and resolve problems quickly. This was a moment of inspiration for me. I chose to develop skills in network management and operations.

Let me say that again.

I chose to develop skills in network management and operations.

I didn’t choose to develop skills in 3ND, or IMC, or Solarwinds, or Cisco Prime, or any of the various other tools. Over time, I’ve gained experience on all of those products, but I would say my true value comes from having gone through the process of developing skills in the sub-disciplines of network management. Learning a product is only a very small part of the whole domain.

What does that mean? 

It’s easy to learn a product. They have bells and whistles: click this check box, fill in that field, etc. Those skills are important, but they don’t help us understand how to apply the product to resolve our customers’ business challenges. They don’t help us understand when not to click that box. And they don’t help us design a network management strategy, or consult with our customers on operational efficiencies and what can be done to increase their network’s stability, reduce MTTR, or mitigate pressures on the operations team. Learning the domain knowledge has helped me understand WHY we have developed the product features and what they are to be used for.

My Learning Roadmap

To put it simply, I consumed everything I could on the subject. It’s amazing how much free information is out there if you set your mind to finding it. If anyone’s looking to increase their skills in this area, I’ve put together the following list of resources that have really helped me in this domain. I’ve tried to keep this away from vendor-specific products, but I’m sure you’ll find that any product you choose will probably have training and learning resources around it as well. This list is in NO way all-inclusive; there are a lot of resources out there. I highly encourage everyone to read, watch, and listen to as many of them as you can and to think about them critically.

Free Resources

Solarwinds SCP training – The Solarwinds SCP training is online and free. What I really liked about this training is that it’s focused on network management, netman protocols, and the operational aspects of network management. There are, of course, some product-specific aspects to the training, but in general this is a really good primer on network management. Oh… did I mention there’s a bunch of videos as well? Great stuff to rip and put on your tablet for when you’re stuck on a plane and you’ve seen all the movies.

Solarwinds has also provided a bunch of whitepapers going further in depth on network management specific subjects which are a great reference.  If you’re interested there’s also the Solarwinds Certified Professional certification if you’re looking for a way to validate your knowledge.

The Information Technology Infrastructure Library (ITIL) is a compilation of IT service management practices compiled over the last 30 years. There’s a lot of great stuff in here, though the books are expensive. An entire industry has sprung up around ITSM. If you have some commute time to spare, I would highly suggest typing the word “ITSM” into your favourite podcast app and sitting back to listen.

If you’re interested, there’s also the ITILv3 Foundations certification if you’re looking for a way to validate your knowledge.

Blogs and Podcasts

Social media is a great way to learn how people apply ITIL concepts in the real world. I particularly like http://www.itskeptic.org, as it’s got a great following of smart people who disagree on a regular basis. You never know whether the customer you’re going into is operating in a traditional ITIL-based ops model; perhaps they are using the Microsoft Operations Framework, or perhaps they’ve moved on to Agile and DevOps. It’s good to have at least a cursory knowledge of all of these approaches to IT operations, not to mention traditional network management frameworks like FCAPS and eTOM.

Paid resources

Books are a great way to learn about network management and operations. Here’s my abbreviated reading list. These are the reference books that sit on my shelf within easy reach.

Network Maturity Model – This book is actually an academic thesis focused on trying to extend the CMMI models to network-specific capability maturity models. Of course, network operations is part of the capabilities of an organization, so there’s a lot of great content in here. The book is definitely academic, but it’s got a LOT of great content in it, assuming you can get through all of the required footnotes and pointers to other academic works.

Fundamentals of EMS, NMS, and OSS/BSS – This book is wonderful. It covers all aspects of traditional telecom management from FCAPS to eTOM, as well as looking at OSS/BSS architectures, which usually exist only in service provider networks. Great information in here. My biggest problem with this book is the font size: I have glasses and it’s tiny. Worth the effort to make it through, but plan on multiple reading sessions. This is not a book you’re going to get through in one sitting.

Network Management Fundamentals – Cisco Press book that’s a great read. A lot of information in here is covered in some of the other books. What I like about this is that it written as an introduction to network management for people already working in the field. This is not an academic text.

Network Management: Accounting and Performance Strategies – Cisco Press book again. This one focuses strictly on performance management, with a lot of attention on NetFlow and how it can be applied to accounting and performance in large networks.

Performance and Fault Management – Cisco Press Book again. This is an older book, so the technologies discussed may not be as relevant as they once were. The nice thing though is that we’re talking about operational models and processes here, so the principles still apply. 

VoIP Performance Management and Optimization – Last Cisco Press book. This one looks specifically at the operational aspects of VoIP/IP telephony/unified communications networks. There are a lot of very detailed recommendations in here that can be leveraged to give customers guidance on what they should be doing and what they should be monitoring. This book has helped me a few times when working with customers who have chosen to implement a dual-vendor strategy and want HP Intelligent Management Center managing and monitoring their Cisco CallManager environment in addition to their network.

The Phoenix Project – This book is written as a novel to teach people about the DevOps movement. This is a MUST read for anyone interested in IT operations and the current trends in the industry. It also gives a first-hand account of what many customers go through. Read it. Read it. Read it.

The Visible Ops – From the same authors as The Phoenix Project. This book tries to tie DevOps and ITIL together. Interesting read. Many people see DevOps and ITIL as opposite ends of the spectrum; most have had a bad ITIL experience, and now the pendulum swings in the other direction. Finding a happy middle is a good goal. I’m not sure they’ve hit the mark, but it’s a start.

Network Management: Principles and Practice – Expensive book. Good information, but the technology is also quite dated. The concepts and knowledge are great, and the diagrams are good, but it’s sometimes hard to get through the hubs and Token Ring.

Domain Related Knowledge

Network management is really about ensuring stability and helping the business meet its operational requirements with the greatest efficiency possible. In that light, it’s important to understand what some of those operational burdens are. In recent years, businesses have had a ton of GRC (governance, risk, and compliance) requirements put on their operations teams that threaten to break an already overloaded staff. On the bright side, I believe that legislation and governance like SOX, COSO, PCI-DSS, HIPAA, Gramm-Leach-Bliley, etc., although forced on operations teams from outside, have actually pushed them to get much tighter on their controls, moving us toward more stable and secure networks.

note: This list is US specific, if international readers can post some examples in the comments section, I would be happy to add them to the list of references.

In my experience, one of the issues with GRC requirements in general is that they very rarely describe what actually needs to be done. They make generic statements like “monitor network access” or “secure the IT assets.”

ISACA noticed this and put together the COBIT framework which is a very detailed list of over 30 high-level processes and over 200 specific IT control objectives. Most of the GRC requirements can be mapped to specific COBIT objectives. COBIT is a good thing to be familiar with. 

 Next Steps

As we move forward in IT, operations and orchestration skills are starting to become some of the hottest requirements in IT. 

Whether it’s products like HP’s CloudSystem, or industry-wide projects like OpenStack, CloudStack, or Eucalyptus, having solid operational knowledge and skills is going to be a requirement for anyone seeking the coveted trusted IT advisor role with any customer.

For anyone looking to gain, or just brush up on, network-management-specific skills, I would recommend:

  • Solarwinds videos as a place to get started with the basics of network management
  • Become familiar with the basics of COBIT and GRC in general. Doing some reading on the various GRC requirements that apply to your specific regions and customers is also a great way to change the conversation from speeds and feeds to the challenges of the business.
  • Read up on OpenStack
  • Learn about ITIL and DevOps

Social media is always a great way to stay current as well. One of the biggest challenges of operations is that the best way to learn it is to do it. Unfortunately, many really good network professionals, whether in pre-sales or professional services, don’t get the opportunity, as they are usually hands-off or turning over the keys to an ops team after the project has been delivered. Social media helps connect you to the daily challenges of the people living in the trenches.

Get some ITSM experience. If you don’t work in a company where you get to babysit the same environment, you can always do what I did and experiment with it at home.

Anyone else have any suggestions on how to get up to speed? Feel free to comment below!

@netmanchris

A Network Services Platform

So things are starting to get interesting with the HP IMC eAPI that was recently released. It’s really amazing to see the types of creative projects that emerge when technical people are presented with new toys. 🙂

So for those of you who didn’t read my last eAPI blog post, let me catch you up. The eAPI is a RESTful interface that allows programmers, or scripters, to leverage the various network services that HP IMC presents.

Thanks @ioshints for a quick look at SNMP vs. RESTful interfaces.

Basically it looks a little like this.

note: This is not a full list of the IMC modules or services. Check out the HP website for a complete list.

The RESTful interface presents the services in an XML format, which is consumable by any programming language that can parse XML. (I’m not a programmer, but from what I understand, that’s pretty much all of the current ones.)

Those services are then applied to specific devices. But what’s COOL about this, is the following.

Say you want to change a VLAN on a bunch of ports. Some of those happen to be HP Comware switches, some of them happen to be HP ProCurve switches, and some of them happen to be Cisco switches. The IMC device adapters at the bottom do all the work for you, providing a device abstraction layer so that you can just say “add VLAN” rather than having to worry about the syntax of all the individual devices.
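To give a feel for what consuming an XML interface like this looks like, here’s a minimal Python sketch. The endpoint path, the element names in the sample payload, and the absence of authentication are all illustrative assumptions on my part; check the eAPI documentation for the real service definitions and credentials:

```python
# Hypothetical sketch of pulling a device list from an XML REST service.
# The URL path and XML shape below are illustrative assumptions, not
# documented eAPI calls -- consult the eAPI docs for the real ones.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_devices(base_url):
    """Query the NMS for its device list (real calls also need auth)."""
    req = urllib.request.Request(
        base_url + "/plat/res/device",          # assumed path
        headers={"Accept": "application/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_devices(resp.read())

def parse_devices(xml_payload):
    """Pull each device's label and IP out of an XML payload."""
    root = ET.fromstring(xml_payload)
    return [(d.findtext("label"), d.findtext("ip")) for d in root.iter("device")]

# An illustrative payload in the shape sketched above
sample = """<deviceList>
  <device><label>core-sw1</label><ip>10.10.1.1</ip></device>
  <device><label>edge-sw7</label><ip>10.10.2.7</ip></device>
</deviceList>"""
print(parse_devices(sample))  # [('core-sw1', '10.10.1.1'), ('edge-sw7', '10.10.2.7')]
```

The point isn’t the specific calls; it’s that once the services are XML over HTTP, a dozen lines in any language can drive the network, and the device adapters handle the vendor syntax underneath.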

So what’s actually available in the HP IMC eAPI? Well, you can check out @neelixx’s blog for the documentation. This is the first release, but I’m told that the eAPI will continue to grow with each future release of the platform AND the modules.

But I think what’s a LOT more interesting is some of the projects that have started to creep up.

For example

1) Wouldn’t it be cool if, when you sent someone an Outlook invite for a meeting in your office, your network access control system automatically created guest accounts for the day of the meeting and sent them to your guests?

2) Wouldn’t it be cool if your support desk could simply click on a user in Microsoft Lync and automatically see where they are logged in on the network? Check out what access service is assigned to them. Maybe they are having trouble accessing some resources and you want to make sure they are in the right VLAN.

I’ve also started to see other apps pop up, such as an application that searches the entire network for the MAC addresses of lost laptops and locates the interfaces they are plugged into. Pretty handy for a hospital, where a lost laptop with patient data is a nightmare. Or something as simple as an app for a college that allows the teacher to shut down all the interfaces on the switches in their classroom, and then turn them all back on with the click of a button.

No login to the NMS.

No call to the help desk.

Just shutting down the ports when the students aren’t listening, and turning them back on when it’s time to work.

What about you guys? HP has given you some color. What are YOU going to paint?