Sometimes Size Matters: I’m sorry, but you’re just not big enough.

So now that I’ve got your attention…I wanted to put together some thoughts around a design principle I call the acceptable unit of loss, or AUL.

Acceptable Unit of Loss: def. A unit to describe the amount of a specific resource that you’re willing to lose

 

Sounds pretty simple doesn’t it?  But what does it have to do with data networking? 

White Boxes and Cattle

2015 is the year of the white box. For those of you who have been hiding under a router for the last year, a white box is basically a network infrastructure device, right now limited to switches, that ships with no operating system.

The idea is that you:

  1. Buy some hardware
  2. Buy an operating system license ( or download an open source version) 
  3. Install the operating system on your network device
  4. Use DevOps tools to manage the whole thing

Add ice, shake, and IT operational goodness ensues.

Where’s the beef?

So where do the cattle come in?  Pets vs. Cattle is something you can research elsewhere for a more thorough treatment, but in a nutshell, it’s the idea that pets are something that you love and care for and let sleep on the bed and give special treats to at Christmas.  Cattle, on the other hand, are things you give a number, feed from a trough, and kill off without remorse if a small group suddenly becomes ill. You replace them without a second thought.

Cattle vs. Pets is a way to describe the operational model that’s been applied to server operations at scale. The metaphor looks a little like this:

The servers are cattle. They get managed by tools like Ansible, Puppet, Chef, SaltStack, Docker, Rocket, etc., which at a high level all allow a new version of a server to be instantiated with a very specific configuration and little to no human intervention. Fully orchestrated.

Your server starts acting up? Kill it. Rebuild it. Put it back in the herd.

Now one thing that a lot of enterprise engineers seem to be missing is that this operational model is predicated on your application having been built with a well-thought-out scale-out architecture, one that allows the distributed application to continue to operate when the “sick” servers are destroyed and to seamlessly integrate the new servers into the collective without a second thought. Pretty cool, no?

 

Are your switches Cattle?

So this brings me to the Acceptable Unit of Loss. I’ve had a lot of discussions with enterprise-focused engineers who seem to believe that white box switches and DevOps tools are going to drive down all their infrastructure costs and solve all their management issues.

“It’s broken? Just nuke it and rebuild it!”  “It’s broken? grab another one, they’re cheap!”

For me, the only way this particular argument holds up is if their AUL metric is big enough.

To hopefully make this point I’ll use a picture and a little math:

 

Consider the following hardware:

  • HP C7000 Blade Server Chassis – 16 Blades per Chassis
  • HP 6125XLG Ethernet Interconnect – 4 x 40Gb Uplinks
  • HP 5930 Top of Rack Switch – 32 40G ports, but from the data sheet “ 40GbE ports may be split into four 10GbE ports each for a total of 96 10GbE ports with 8 40GbE Uplinks per switch.”

So let’s put this together

[Hand-drawn diagram of the example topology]

So we’ll start with

  • 2 x HP 5930 ToR switches

For the math, I’m going to assume dual 5930s with dual 6125XLGs in each C7000 chassis, and that all links are redundant, which makes the math a little bit easier. ( We’ll only count this against one of the 5930s, cool? )

  • 32 x 40Gb ports on the HP 5930 – 8 x 40Gb ports reserved for uplinks = 24 x 40Gb ports for connecting to those HP 6125XLG interconnects in the C7000 Blade Chassis.
  • 24 x 40Gb ports on the HP 5930 will allow us to connect 6 x 6125XLGs with all four of their 40Gb uplinks.

Still with me? 

  • 6 x 6125XLGs means 6 x C7000 chassis, which translates into 6 x 16 = 96 physical servers.
Just so we’re all on the same page, if my math is right: we’ve got 96 physical servers in six blade chassis, connected through the interconnects at 320Gb per chassis ( 4 x 40Gb x 2 – remember the redundant links? ) to the dual HP 5930 ToR switches, which have 640Gb of bandwidth out to the spine ( 8 x 40Gb uplinks from each of the two 5930s ).

If we go with a conservative VM to server ratio of 30:1,  that gets us to 2,880 VMs running on our little design. 
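If you want to sanity-check that arithmetic, here’s a quick back-of-the-napkin sketch of the same math in Python (every value comes straight from the bullets above):

# All values taken from the design described above
ports_per_5930 = 32                              # 40GbE ports per HP 5930
uplink_ports = 8                                 # 40GbE ports reserved for the spine
downlink_ports = ports_per_5930 - uplink_ports   # 24 ports left for the blade chassis
chassis = downlink_ports // 4                    # 4 x 40Gb uplinks per 6125XLG = 6 chassis
servers = chassis * 16                           # 16 blades per C7000 = 96 servers
vms = servers * 30                               # 30:1 VM ratio = 2,880 VMs
uplink_bw = uplink_ports * 40 * 2                # 8 x 40Gb uplinks from each of the two 5930s = 640Gb
print(chassis, servers, vms, uplink_bw)          # 6 96 2880 640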

How much can you lose?

So now is where you ask the question:  

Can you afford to lose 2,880 VMs? 

According to the cattle & pets analogy, cattle can be replaced with no impact to operations because the herd will move on without noticing. i.e., the Acceptable Unit of Loss is small enough that you’re still able to get the required value from the infrastructure assets.

The obvious first objection I’m going to get is

“But wait! There are two REDUNDANT switches right? No problem, right?”

The reality of most networks today is that they are designed to maximize throughput and make efficient use of all available bandwidth. MLAG, in this case brought to you by HP’s IRF, allows you to bind interfaces from two different physical boxes into a single link aggregation pipe. ( Think vPC, VSS, or whatever other MLAG technology you’re familiar with. )

So I ask you: what are the chances that you’re running those links at below 50% of the available bandwidth?

Yeah… I thought so.

So the reality is that when we lose that single ToR switch, we’re actually going to start dropping packets somewhere, because you’ve been running the system at 70-80% utilization to maximize the value of those infrastructure assets.

So what happens to TCP-based applications when we start to experience packet loss?  For a full treatment of the subject, feel free to go check out Terry Slattery’s excellent blog on TCP Performance and the Mathis Equation. For those of you who didn’t follow the math, let me sum it up for you.

Really Bad Things.  

On a ten gig link, bad things start to happen at 0.0001% packet loss. 
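To put a rough number on that, here’s a back-of-the-napkin sketch of the Mathis et al. estimate for a single TCP flow. The MSS and RTT values below are illustrative assumptions on my part, not measurements from any real network:

import math

# Mathis et al.: throughput <= (MSS / RTT) * (1 / sqrt(p))
mss_bits = 1460 * 8    # a typical 1460-byte MSS, expressed in bits (assumed)
rtt = 0.010            # 10 ms round-trip time (assumed)
p = 0.000001           # 0.0001% packet loss expressed as a probability

throughput = (mss_bits / rtt) * (1 / math.sqrt(p))
print('Estimated single-flow TCP throughput: %.2f Gbps' % (throughput / 1e9))
# roughly 1.17 Gbps -- nowhere near filling a 10Gb link

Even with fairly generous assumptions, a single flow can’t get anywhere near line rate once loss creeps in.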

Are your Switches Cattle or Pets?

So now that we’ve done a bit of math and metaphors, we get to the real question of the day: are your switches Cattle? Or are they Pets? I would argue that if you’re measuring your AUL in less than 2,000 servers, then your switches are probably Pets. You can’t afford to lose even one without bad things happening to your network, and more importantly to the critical business applications being accessed by those pesky users. Did I mention they are the only reason the network exists?

Now this doesn’t mean that you can’t afford to lose a device. It’s going to happen. Plan for it. Have spares, support contracts, whatever. But my point is that you probably won’t be able to go with a disposable infrastructure model like the one suggested by many of the engineers I’ve talked to in recent months about why they want white boxes in their environments.

Wrap up

So are white boxes a bad thing if I don’t have a ton of servers and a well-architected distributed application? Not at all! There are other reasons why white box could be a great choice for comparatively smaller environments. If you’ve got the right human resource pool internally with the right skill set, there are some REALLY interesting things that you can do with a white box switch running an OS like Cumulus Linux. For some ideas, check out this Software Gone Wild podcast with Ivan Pepelnjak and Matthew Stone.

But in general, if your metric for  Acceptable Unit of Loss is not measured in Data Centres, Rows, Pods, or Entire Racks, you’re probably just not big enough. 

 

Agree? Disagree? Hate the hand drawn diagram? All comments welcome below.

 

@netmanchris


Which HP Switches will run OpenFlow?

Every once in a while someone reaches out to me wondering if the older HP ProVision switch they have is OpenFlow capable or if they should just throw it in the garbage.

One of the easiest ways to decide on a blog post is “If you’ve been asked the same question more than three times…”.

EDIT:  These switches can all run OpenFlow 1.3.  Table sizes vary per model. Check the release notes and product documentation for specifics on which parts of the optional OF 1.3 spec are covered. (Hint: none of these switches support MPLS.)

Stackable Switches

HP 3500yl

HP 6600

HP 2920

HP 3800

Chassis Switches

HP 5400zl

HP 8200zl

 

Getting OpenFlow Code

So pretty much all of these switches carry the HP warranty, which means you should be able to get code for all of them from the HP support website. The easiest way is to go to http://www.hp.com/networking/support and you should be good to go. It may be that all of these switches are covered under the warranty, but I didn’t verify, so I’ll leave that to you. 🙂

 

Happy New Year!

@netmanchris

 

Working with PYSNMP con’t – SNMP simple SETs

So, inspired by @KirkByers blog on SNMP, which I wrote about here using Python 3, I decided that I wanted to go past just reading SNMP OIDs and see how PYSNMP could be used to set SNMP OIDs as well.  As with anything, it’s best to start with the docs and apply the KISS principle.

Why SNMP?

Although RESTful APIs are definitely an easier, more human-readable way to get data in and out of a network device, not all network devices support modern APIs. For a lot of the devices out there, you’re still stuck with good ol’ SNMP.

Tools

SNMP is not a human-readable language; we have MIBs to translate those nasty strings of digits into something that our mushy brains can make sense of, and the tool that does that translation is known as a MIB browser. There are a lot of MIB browsers out there, and which one you use doesn’t matter much, but I would HIGHLY recommend that you get one if you’re going to start playing with SNMP.

http://www.ireasoning.com has a free version that works great on Windows and supports up to 10 MIBs loaded at the same time.

The Goal

There are a lot of powerful actions available through the SNMP interface, but I wanted to keep it simple. For this project, I wanted to use SNMP to change the SYSCONTACT and SYSLOCATION fields.

For those of you who are used to a CLI, the section of the config I’m looking after resembles this.

 snmp-agent sys-info contact contact

 snmp-agent sys-info location location

 

The Research

So I know what I want to change, but I need to know how to access those values through SNMP. For this, I’ll use the MIB browser.

 

Using the MIB browser, I was able to find SYSLOCATION, which is identified by the OID .1.3.6.1.2.1.1.6.0


I was also able to find SYSCONTACT, which is identified by the OID .1.3.6.1.2.1.1.4.0


 

So now I’ve got the two OIDs that I’m looking for.
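Before changing anything, it can be handy to read the current values of those two OIDs. Here’s a minimal sketch using the same pysnmp oneliner interface as the SET code below; the IP address and community string are just the lab values used later in this post:

from pysnmp.entity.rfc3413.oneliner import cmdgen

cmdGen = cmdgen.CommandGenerator()

# Read the current sysContact and sysLocation values before we change them
errorIndication, errorStatus, errorIndex, varBinds = cmdGen.getCmd(
    cmdgen.CommunityData('private'),
    cmdgen.UdpTransportTarget(('10.101.0.221', 161)),
    cmdgen.MibVariable('.1.3.6.1.2.1.1.4.0'),   # SYSCONTACT
    cmdgen.MibVariable('.1.3.6.1.2.1.1.6.0'),   # SYSLOCATION
    lookupNames=False, lookupValues=True)

if errorIndication:
    print(errorIndication)
else:
    for name, val in varBinds:
        print('%s = %s' % (name.prettyPrint(), val.prettyPrint()))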

The Code

Looking through the PYSNMP documentation, there’s an example of how to do an SNMP SET, but it includes a couple of options that I didn’t want; specifically, I didn’t want to use the lookupNames option. So I changed lookupNames to False, and then I was able to use the OIDs above directly without having to find the names of the MIB objects.

So looking through the code below, you can see that I’ve created a function which takes an input, syscontact, and uses it as the value to set the MIB object  .1.3.6.1.2.1.1.4.0, which corresponds to the

snmp-agent sys-info contact …  line in the configuration of the device.

from pysnmp.entity.rfc3413.oneliner import cmdgen

cmdGen = cmdgen.CommandGenerator()

# Using the PYSNMP library to set the network device's SYSCONTACT field using SNMP
def SNMP_SET_SYSCONTACT(syscontact):
    errorIndication, errorStatus, errorIndex, varBinds = cmdGen.setCmd(
        cmdgen.CommunityData('private'),
        cmdgen.UdpTransportTarget(('10.101.0.221', 161)),
        (cmdgen.MibVariable('.1.3.6.1.2.1.1.4.0'), syscontact),
        lookupNames=False, lookupValues=True)
    # Check for errors and print out results
    if errorIndication:
        print(errorIndication)
    else:
        if errorStatus:
            print('%s at %s' % (errorStatus.prettyPrint(),
                                errorIndex and varBinds[int(errorIndex) - 1] or '?'))
        else:
            for name, val in varBinds:
                print('%s = %s' % (name.prettyPrint(), val.prettyPrint()))

 

 Running the Code

Now we run the code

>>>
>>> SNMP_SET_SYSCONTACT('test@lab.local')
1.3.6.1.2.1.1.4.0 = test@lab.local
>>>

Looking back at the MIB browser, we can see that the SYSCONTACT value has been changed to the new value above.


And when we log back into the network device

 snmp-agent sys-info contact test@lab.local

 snmp-agent sys-info location location

 

Wrap Up

This is just a small proof-of-concept that shows that, although RESTful APIs are definitely sweeter to work with, they are not the only way to programmatically interface with network devices.

Comments or Questions?  Feel free to comment below

 

@netmanchris

 

 

 

PYSNMP with HP 5500EI Comware Switch

Inspired by @kirkbyers post over here, I wanted to stretch my python skills and see about playing around with the PYSNMP libraries as well as Kirk’s SNMP_HELPER.PY function, which is available here.

Clean up the SNMP_HELPER.PY function for Python 3.x

There are some differences between Python 2 and Python 3. One of those differences is that print is now a function and requires parentheses () around the content that you wish to print.  This was about the only thing that I had to do to get Kirk’s code working in Python 3.  If you try to run the code in the Python IDLE software, it will come up with this error right away.  I could also have run the 2to3 script, but since this was a small file, it was easy to just search for the four or so print statements and edit them manually as I was reading through the code to try and understand what Kirk was doing.

 

Easy Installation

So Kirk takes you through the normal pip installation. I’m performing this on OS X Mavericks with Python 3, so for those not familiar with the differences yet: Python 2.x is natively installed on OS X. If you do a pip install … command, this will result in you downloading and making that specific library available to the Python 2.x version on your OS.  Since I’m using Python 3.x, I instead need to use the pip3 install command, which will make the library you’re downloading available to Python 3.x on your system.

$pip3 install pysnmp

 

Note: Kirk has a couple of other ways to install the pysnmp library over on his blog, so I won’t repeat them here.

Testing Out SNMP

So it’s a good idea to ensure that SNMP is running and you have the right community strings on the machine you’re going to access. For this, I’m going to use an SNMP MIB browser that I have installed on my MBA to test this out. You could also use the net-snmp utilities as shown on Kirk’s blog if you’d like to do this from the CLI. I highly recommend getting a MIB browser installed on your system. http://www.ireasoning.com has a nice free one available.


 

So now that we’ve confirmed this all works, on to the code.

Setting the Stage

So I’m assuming that you’re able to run the SNMP_Helper.py file in IDLE.  If you look at the code, one of the first things it does is import the cmdgen method from the pysnmp library

from pysnmp.entity.rfc3413.oneliner import cmdgen

One of the ways that has really helped me learn is to go through other people’s code and try and understand exactly what they are doing. I don’t think I could have written SNMP_Helper.py on my own yet, but I can understand what it’s doing, and I can DEFINITELY use it. 🙂

Now we set up a few variables, using the exact same names that Kirk used over in his blog here

>>> COMMUNITY_STRING = 'public'
>>> SNMP_PORT = 161
>>> a_device = ('10.101.0.221', COMMUNITY_STRING, SNMP_PORT)

Running the Code

Now we’ll run the exact same SNMP query against the sysDescr OID that Kirk used. And amazingly enough, we get a very similar output.

>>> snmp_data = snmp_get_oid(a_device, oid='.1.3.6.1.2.1.1.1.0', display_errors=True)
>>> snmp_data
[(MibVariable(ObjectName(1.3.6.1.2.1.1.1.0)), DisplayString(hexValue='485020436f6d7761726520506c6174666f726d20536f6674776172652c20536f6674776172652056657273696f6e20352e32302e39392052656c6561736520323232315030350d0a48502041353530302d3234472d506f452b204549205377697463682077697468203220496e7465726661636520536c6f74730d0a436f707972696768742028632920323031302d32303134204865776c6574742d5061636b61726420446576656c6f706d656e7420436f6d70616e792c204c2e502e'))]

 

It’s nice to see that we get the same nasty output. SNMP is a standard, after all, and we should expect to see the same response format from Cisco, HP, and other vendors’ devices when using standard SNMP functions, such as the MIB-II sysDescr OID.
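If you’re curious what that hexValue is actually hiding, you can decode it by hand before bringing in Kirk’s helper. A minimal interpreter sketch, using just the first two bytes of the string above for illustration:

>>> bytes.fromhex('4850').decode('ascii')
'HP'

The full hexValue decodes the same way; it’s simply the sysDescr text encoded as hex.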

So now, let’s use Kirk’s cleanup function to be able to see what the data actually looks like. Again, remember Python3 needs those parens for the print statement to work properly.

>>> output = snmp_extract(snmp_data)
>>> print (output)
HP Comware Platform Software, Software Version 5.20.99 Release 2221P05
HP A5500-24G-PoE+ EI Switch with 2 Interface Slots
Copyright (c) 2010-2014 Hewlett-Packard Development Company, L.P.

Just for giggles, I also used this code against my Synology Diskstation

>>> print(output)
Linux DiskStation 2.6.32.12 #4482 Fri Apr 18 02:12:31 CST 2014 armv5tel

Then against my Server Technologies intelligent PDU

>>> print(output)
Sentry Switched CDU

Then against my DIGI console server.

>>> snmp_data = snmp_get_oid(a_device, oid='.1.3.6.1.2.1.1.1.0', display_errors=True)
ERROR DETECTED:
error_message No SNMP response received before timeout
error_status 0
error_index 0

The last one was working exactly as expected, as I have ACLs in place to only allow SNMP access from certain devices in my network. 🙂

Observations

It’s nice to see that standards like SNMP and widely available libraries like pysnmp can be used to access the devices regardless of the vendor they come from.

SNMP gets a bad rap in general, as there are newer, cooler technologies out there like NETCONF, OpenFlow, OVSDB, NetFlow, sFlow, and I’m sure a dozen others that I’m missing, that can do a better job of the functions that SNMP was originally designed for.

But sometimes, SNMP is what we have, and the reason it’s still around after all these years is that it’s “good enough”.

 

Questions or comments?  please post below!

@netmanchris

 

 

Surfing your NMS with Python

Python is my favourite programming language. But then again, it’s also the only one I know. 🙂

I made a choice to go with python because, honestly, that’s what all the cool kids were doing at the time. But after spending the last year or so learning the basics of the language, I do find that it’s something I can easily consume, and I’m starting to get better with all the different resources out there. BTW, http://www.stackoverflow.com is your friend. You will learn to love it.

On with the show…

So in this post, I’m going to show how to use python to build a quick script that will allow you to issue the RealTimeLocate API call to the HP IMC server. In theory, you can build this against any RESTful API, but I make no promises that it will work without some tinkering.

Planning the project.

I’ve written before how I’m a huge fan of OPML tools like Mind Node Pro.  The first step for me was planning out the pieces I needed to make this:

  • usable in the future
  • actually work in the present
In this case I’m far more concerned about the present, as I’m fairly sure that I will look back on this code a year from now and think some words that I won’t put in print.
Aside: I’ve actually found that the troubleshooting skills I’ve honed over the years as a network engineer help me immensely when trying to decompose what pieces need to go into my code. I actually think that network engineers have a lot of skills that are extremely transportable to the programming domain, especially because we tend to think of the individual components and the system at the same time, not to mention our love of planning out failure domains and forcing our failures into known scenarios as much as possible.


Auth Handler

Assuming that the RESTful service you’re trying to access requires you to authenticate, you will need an authentication handler to deal with the username/password stuff that a human being is usually required to enter. There are a few different options here. Python actually ships with urllib or some variant depending on the version of python you’re working with.  For ease-of-use reasons, and because of a strong recommendation from one of my coding mentors, I chose to use the Requests library.  This is not shipped by default with the version of python you download over at http://www.python.org, but it’s well worth the effort of pip’ing it into your system.

The beautiful thing about Requests is that the documentation is pretty good and easily readable.

In looking through the HP IMC eAPI documentation and the Requests library, I settled on Digest authentication.


So here’s how this looks for IMC.

Building the Authentication Info

>>> import requests   # imports the requests library; you may need to pip this in if you don't have it already
>>> from requests.auth import HTTPDigestAuth    # imports the HTTPDigestAuth method from the requests library
>>>
>>> imc_user = '''admin'''   # the username used to auth against the HP IMC server
>>> imc_pw = '''admin'''   # the password of the account used to auth against the HP IMC server
>>>
>>> auth = requests.auth.HTTPDigestAuth(imc_user, imc_pw)   # puts the username and password together and stores them as a variable called auth

We’ve now built the auth handler to use the username “admin” with the password “admin”. For a real environment, you’ll probably want to set up an Operator Group with access only to the eAPI functions and lock this down to a secret username and password. The eAPI is powerful; make sure you protect it.

Building the URL

So for this to work, I need to assign a value to the host_ip variable so that the URL will complete with a valid response. The other thing to watch for is types. Python can be quite forgiving at times, but if you try to add two objects of the wrong type together… it mostly won’t work.  So we need to make sure that host_ip is a string, and the easiest way to do that is to put three quotes around the value.
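For example, here’s a quick interpreter sketch of what happens when you mix types (the exact error text varies between Python versions):

>>> host_ip = '10.101.0.109'
>>> port = 8080
>>> host_ip + port           # adding a string and an integer raises a TypeError
Traceback (most recent call last):
  ...
TypeError: ...
>>> host_ip + str(port)      # convert the integer to a string first and it works
'10.101.0.1098080'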

In a “real” program, I would probably use the input function to allow this variable to be input as part of the flow of the program, but we’re not quite there yet.

>>> host_ip = '''10.101.0.109'''   # variable that you can assign to a host you want to find on the network
>>> h_url = '''http://'''    # prefix for building URLs; use HTTP or HTTPS
>>> imc_server = '''10.3.10.220:8080'''   # match the port number of the IMC server; default is 8080 or 8443
>>> url = h_url+imc_server    # combines the h_url and the IP address of the IMC box as a base URL to use later
>>> find_ip_host_url = ('''/imcrs/res/access/realtimeLocate?type=2&value='''+host_ip+'''&total=false''')   # this is the RealTimeLocate API URL with a variable set
>>>

Putting it all together.

This line puts together the URL that we’re going to send to the web server. You could ask, “Hey man, why didn’t you just drop the whole string into one variable to begin with?”   That’s a great question.  There’s a concept in programming called DRY (Don’t Repeat Yourself).  The idea is that when you write code, you should never write the same thing twice. Think in a modular fashion, which allows you to reuse pieces of code again and again.

In this example, I can easily write another f_url variable and assign to it another RESTful API path that gets me something interesting from the HP IMC server. I don’t need to rewrite the h_url portion or the server IP address portion of the URL.  Make sense?

>>> f_url = url + find_ip_host_url
>>>    # a very simple string concatenation that joins the base url and the find_ip_host_url to produce the full URL for the HTTP call

Executing the code.

Now the last piece is where we actually execute the code. This will issue a GET request using the requests library.  It will use f_url as the actual URL it’s going to request, and it will use the auth variable that we created in the Building the Authentication Info step above to automatically populate the username and password.

The response will get returned in a variable called r.

>>> r = requests.get(f_url, auth=auth)    # using the requests library's get method, we pass f_url as the URL we're going to access and auth as the auth argument to define how we authenticate. Pretty simple actually.
>>>

The Results

So this is the coolest part. We can now see what’s in r.  Did it work? Did we find our lost, scared little host?  Let’s take a look.

>>> r
<Response [200]>

Really? That’s it?

The answer is “yes”.  That’s what’s been assigned to the variable r.  200 OK may look familiar to you voice engineers who know SIP, and it means mostly the same thing here: it’s a response code letting you know that your request was successful. But that’s not what we’re looking for; I want the content, right?  If I do a type(r), which tells me what python knows about what kind of object r is, I get the following.

>>> type(r)

<class 'requests.models.Response'>

So this tells us that maybe we need to go back to the Requests documentation and look for info on the Response object. Now we know how to access the part of the response that I wanted to see, which is the reply to my request about where the host with IP address 10.101.0.111 is actually located on the network.

So let’s try out one of the options and see what we get

>>> r.content
b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list><realtimeLocation><locateIp>10.101.0.111</locateIp><deviceId>4</deviceId><deviceIp>10.10.3.5</deviceIp><ifDesc>GigabitEthernet1/0/16</ifDesc><ifIndex>16</ifIndex></realtimeLocation></list>'

How cool is that? We put in an IP address and we actually learned four new things about that IP address without touching a single GUI. And the awesome part of this?  It works across any of the devices that HP IMC supports.
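If you want to turn that XML blob into something you can actually work with in Python, the standard library’s xml.etree.ElementTree will do it. A minimal sketch, assuming the r.content value from above:

import xml.etree.ElementTree as ET

# Parse the XML body returned by the realtimeLocate call
root = ET.fromstring(r.content)                 # the root element is <list>
for location in root.findall('realtimeLocation'):
    print(location.find('locateIp').text)      # 10.101.0.111
    print(location.find('deviceIp').text)      # 10.10.3.5
    print(location.find('ifDesc').text)        # GigabitEthernet1/0/16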

Where to from here?

So we’ve just started on our little journey here.  Now that we have some hints to the identity of the network device and the specific interface that is currently harbouring this lost host, we need to use that data to continue filling in the picture.

But that’s in the next blog…

Comments or Questions?  Feel free to post below!

Working with RESTFul APIs

There is a lot of talk about APIs right now.  Every vendor has an API, but not all are created equal. What does an API even mean?  I’m not going to get too wrapped around definitions, but I’ll provide you a link.

A more formal definition of REST may be found here.  For my purposes, I propose the following:

RESTful API

Something that I can work with using the HTTP protocol and probably returns data in XML or JSON.

 

Some examples

I’m working with HP’s Intelligent Management Center and its eAPI, which offers a RESTful interface to the network management system that will return both XML and JSON.

Here’s an example of a call using XML

[Screenshot: eAPI call returning XML]

Update: For those who noticed, the URL for both is the same, but the content of the HTTP request actually tells a slightly different story.

Here’s the Wireshark capture of the XML request. If you look at the trace below, you can see the Accept: application/xml\r\n header, which shows that the request is asking for XML.

[Screenshot: Wireshark capture of the XML request]

Here’s an example of a call using JSON.

[Screenshot: eAPI call returning JSON]

 

Here’s the JSON request, which is essentially the same except the Accept: portion shows application/json.

[Screenshot: Wireshark capture of the JSON request]

 

 

 

As you can see, they carry the exact same data, but the way the data is structured is slightly different.  From what I can tell, programmatically speaking, there’s no real difference; you can definitely work with either one easily.
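If you’re making the call from Python with the Requests library, switching between the two formats is just a matter of the Accept header. A minimal sketch; the server address, credentials, and host IP below are the lab values used elsewhere on this blog, so substitute your own:

import requests
from requests.auth import HTTPDigestAuth

# Placeholder values -- swap in your own IMC server and credentials
url = 'http://10.3.10.220:8080/imcrs/res/access/realtimeLocate?type=2&value=10.101.0.111&total=false'
auth = HTTPDigestAuth('admin', 'admin')

xml_reply = requests.get(url, auth=auth, headers={'Accept': 'application/xml'})
json_reply = requests.get(url, auth=auth, headers={'Accept': 'application/json'})

print(xml_reply.text)      # the same data, structured as XML
print(json_reply.json())   # the same data, parsed straight into Python dicts and lists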

For right now, I’ve chosen to focus on just XML, under the theory that if I figure out how to work with XML now, I can go back and learn how to work with JSON later, after my overall skills as a programmer have increased.

 

It’s a theory.

 

Note: For those of you who haven’t seen this interface, it’s a standard ( at least for HP ) RS-Docs interface which provides the documentation for the RESTful interface directly on the machine and allows you to test it. This is also available for the HP SDN controller.  

For IMC, it can be accessed at http://localhost:8080/imcrs and will require you to authenticate. BTW, the port numbers may also change if you installed with something other than the default settings.

 

RealTimeLocation API – Breaking it Down.

For those of you with keen eyes, you can probably guess that this particular API is used to locate a host on the network.  Let’s break it down a little so that we can see what the return is actually telling us here.

 

<list> # This lets us know this is the start of the list

<realtimeLocation> # This lets us know the data below is about realtimeLocation

<locateIp>10.101.0.111</locateIp> # This is the IP address we wanted to locate

<deviceId>4</deviceId> # This is the device ID of the switch where we found it.

<deviceIp>10.10.3.5</deviceIp> # This is the IP address of the switch where we found it.

<ifDesc>GigabitEthernet1/0/16</ifDesc> # This is the interface description where we found it.

<ifIndex>16</ifIndex> # This is the ifIndex value of the interface where we found it.

</realtimeLocation> # This lets us know that the realtimeLocation data has ended.

</list> # This lets us know that the list has ended.

 

So a couple of quick notes about this list.

  • Device ID is an internal numbering scheme that HP IMC uses to keep track of the devices. This has no practical relation to anything outside the IMC system.
  • ifDesc is the SNMP ifDesc.  You may be tempted to look at this and think “That must be the description on the interface!!!” You would be wrong. The description you configure on the interface when you type in the command “description This_Is_My_Interface” is actually held in ifAlias ( 1.3.6.1.2.1.31.1.1.1.18 ). Blame SNMP for this one.
  • ifIndex is the SNMP ifIndex value. This is, again, an easy way for computers to keep track of the number of the port. It’s also important to know that on some vendors’ devices these values can change with a reboot. Cisco used to have this issue, but they do allow you to make ifIndex persist across reboots.

 

Next Time

 

So this is a brief introduction to XML and an example of a RESTful API.  As you can see, it’s not that intimidating. It’s actually almost readable by a human being.

In the next post. I’m going to look at building some basic python code to use this API directly.

 

Questions or Comments?  Please post below and I’ll be happy to do my best. Again, I’m a student in progress here, so please take any answers I give with a grain of salt. If you’re further along on this journey than I am, please feel free to suggest improvements in the comments as well. I wouldn’t say I’ve got no ego, but I’ll check it at the door if it helps me improve.

HP Discover 2013

It’s that time of the year again.  HP Discover 2013 is happening this weekend in Las Vegas and I’m actually sitting in a hotel room wondering just how hot it’s going to get outside while I write this. 

 

I’ve actually been involved in HP Discover, previously known as TechForum, since 2010 in one way or another, but this is the first year where I’m officially involved as part of the HP Networking Technical Marketing organization.

Without letting any secrets out, it’s going to be a good show for us, but I’m really looking forward to checking out the HP Discover Zone, not to mention the various things that are going on at the HP Innovation Theatre

Some of the official blogs have covered a bit on session highlights as well as the who’s who roll call for the Social Media program.

 

What I love about Discover

Learn about HP

Hewlett Packard is a HUGE company. To put this in context, HP has more employees than the 26 smallest countries have people. Not cities. Countries.  As you can imagine, it’s tough to keep up to date on all the things that are happening across the entire company. HP Discover is a place where we all come together and the curious can learn about what the other parts of the company are doing, not to mention meet some great people.

Learn about HP’s Partners 

The fact that the largest IT companies in the world are represented there, as well as partners, is wonderful. We get up close and personal with VMware, Microsoft, Citrix, Brocade and many others, large and small, who have something to show. It’s pretty amazing to see all this technology in one place. It’s a phenomenal place to learn about what’s going on in the whole industry for someone who’s willing to brave the show floor and engage with the people who have come out to educate people about their products and services. It truly is a place to Discover. ( <- see what I did there? )

 

Talking with Customers

The one session I did want to mention is HOL 2809, HP Intelligent Management Center custom scripting, which I’m co-presenting with Aaron Paxon @neelix and Lindsay Hill @northlandboy.  I’m very excited about this presentation, not just because I’m presenting with some great guys, but because it really represents to me the beauty of the SoMe community.

Lindsay is an HP partner, Aaron is an HP customer, and I’m an HP employee.

We’ve spent the last few months getting the session abstract together, working over email, twitter, google+ and even the phone!  It’s amazing how technology can bring us all together. I’m hoping that everyone who shows up for our session gets a good show, but also walks away with something they can use.

 

What I hope to get from the week

 

My goals for this year are, in no order of importance, the following

  • Meet as many people as possible
  • Visit as many people on the HP Discover Zone as possible
  • Learn more about vCloud and Azure
  • Blog and Tweet about what’s going on.

As this is the first major event in my new role, it’s going to be great to see the show from an insider’s perspective. I’m not sure how much I’m going to be able to deliver on my goals, but at least it’s going to be a great learning experience and that, after all, is the thing I love the most. 

 

Hope to see you all there,

 

@netmanchris