Plans for 2015: Where to from here?

I know I'm a little late for New Year's resolutions, but it's been a tough decision-making process. There is so much going on right now in the networking industry and, to be honest, I'm not sure that networking is going to be a skill that will command the premium it has for the last 10-15 years. Don't get me wrong, I'm not saying that networking is dead. In fact, just the opposite: networking is going to flourish. There is going to be so much networking that needs to be done that the only way we will be able to deal with it is to dump all of our collective knowledge into code and start to automate what would previously have been the domain of the bit-plumbers that we are.

 

What skills to pick up in 2015:

So the question: what skills am I looking at picking up in 2015? I am a huge believer in the infrastructure-as-code movement. Looking at where leaders like Matt Oswald, Jason Edelman, Brent Salisbury, Dave Tucker, Colin McNamara, Jeremy Schulman, etc… are taking us, it's obvious that coding is becoming a mandatory skill for anyone in the networking field who wants to become, or remain, at the top of the field. That's not to say that core networking skills are not going to be important, but I'm definitely branching out this year, trying to pick up another language as well as improve my chops with what I already know.

Increase Python Skills

As anyone who's been here for the last year knows, I've been playing around with Python a lot. I'm hoping that 2015 will allow me to continue to increase my Python skills, specifically focused around networking, and I'm hoping that I will have enough time to go from just learning to actually contributing some code back to the community. I'm signed up for Kirk Byers' Python for Network Engineers course starting in January, as well as going through a few different books. Best of all, my 9-year-old son has also shown some interest in learning to code, so this might actually become a father-son project.

I'm also hoping to get more involved with things like Ansible and Schprokits, as well as possibly releasing some of my own small projects. Crossing my fingers on the stretch goals. 🙂

Gain Data Analysis Skills

Coursera is awesome. If you haven't checked it out, you need to. You would have to be living under a rock, buried in a lead can, stored in a Faraday cage at the bottom of the ocean not to have heard about SDN. I believe that there's an ENORMOUS opportunity within the networking space for applying data analysis techniques to the massive amounts of information that flow across our networks every day. There's a Coursera Data Science Specialization that I'm signed up for that I'm hoping will start me down the path of being able to execute on some of the ideas that have been bouncing around in my skull for more than half a decade. I'm sure I will be blogging on the course, but you might have to wait for some of the ideas.

Virtualization-Ho!

Docker, Rocket, NSX, ESX, KVM, OVS. They are all going to get a little love this year from this guy. I'm not sure how much I'm going to be able to consume, but I believe these are all technologies that are going to be relevant in the coming years. I believe that containers are going to get a lot of love in the industry, and companies like http://www.socketplane.io are going to be something I'm watching closely.

Networking Networking Networking

This is my core knowledge set and, I believe, what will continue to be the foundation of my value for the foreseeable future. I hit my CCIE Emeritus this year and also had a chance to attend a Narbik bootcamp. It was an incredibly humbling experience and reminded me of how much there is still to learn in this space that I love. If you get a chance to attend a Micronics CCIE bootcamp, I can't recommend it highly enough. There are very few people who understand and can TEACH this information at the level Narbik can. I'm actually planning on finding time to resit the bootcamp this year just to soak up more of the goodness.

 

Plans Plans Plans

2014 was a bit of a mess for me, but I think I still did fairly well in executing on gaining some of the programming skills that I wanted. 2015 is going to be a crazy time for the whole industry. I'm not sure which of these four areas is going to consume the most of my time. The way our industry has been going, it's entirely possible that I will fall in love with something else entirely. 🙂

If at the end of 2015 I have managed to move forward in these four areas by at least a few steps, I think I will consider the year a success. 

 

What about you?

 

@netmanchris

 


Bringing Wireless Back in to the Fold

I'm sitting in the airport in Barcelona, having just had an amazing week of conversations ranging from potentially core-belief-shattering to crazy ideas for puppet shows. The best part of these events, for those of us who are social, is the ability to interact in meatspace with people we've already "known" for a period of time on Twitter. I had the pleasure this week of hanging out with such luminaries of the networking social scene as Tom Hollingsworth (@networkingnerd), Amy Arnold (@amyengineer), Jon Herbert (@mrtugs), Ethan Banks (@ecbanks) and, not to be left out of any conversation, Mr. Greg Ferro.

 

There were a lot of great conversations, and more than a couple of Packet Pushers shows recorded during the week, but the one that's sticking in my mind right now is a conversation we had around policy and wireless. This has been on my mind for a while now, and I think I've finally thought it through enough to put something down on paper.

Before we get started, I think it's important that everyone understand that I'm not a wireless engineer. I'm making some assumptions here that I'm hoping will be corrected in the comments if I'm headed in the wrong direction.

 

Wireless: The original SDN

In many ways, the wireless networking folks have been snickering at us wired lovers for our relatively recent fascination with SDN. Unsurprisingly, I've heard a lot of snark on this particular subject for quite a while. The two biggest points being:

  • Controller Based networking? That’s sooooooo 2006. Right?
  • Overlays?  We’ve been creating our own topologies independent of the physical layout of the network for years!!!!

 

I honestly can't say I disagree with them in principle, but I love considering the implications of how SDN, combined with the move to 802.11ac, is really going to cause our worlds to crash back together.

 

I'm a consumer of wireless. I love the technology and have great respect for the witchdoctor network engineers who somehow manage to keep it working day-in and day-out. I'm pretty sure that where I have blue books on my bookshelf, they have a small altar to the wireless gods. I poke fun, but it's just such a different discipline, requiring intense knowledge of the transmission medium, that I don't think a lot of wired engineers can really understand how complicated wireless can be and how much of an art form creating a good, stable wireless design actually is.

On a side note, I heard this week that airplane manufacturers actually use sacks of potatoes in their planes when performing wireless surveys, to simulate the conditions of passengers in the seats. If that doesn't paint a picture of the differences with wireless, I don't know what does.

 

The first wireless controller I had a chance to work with was the Trapeze solution back in approximately 2006. It was good stuff. It worked. It allowed for centralized monitoring, not to mention centralized application of policy. The APs were 802.11g and it was awesome. I could plug in an AP anywhere in the network and the traffic would magically tunnel back to the controller, where I could set my QoS and ACLs to apply the appropriate policies and ensure that users were granted access and priority, or not, to the resources that I wanted. Sounds just like an overlay, doesn't it?

In campus environments, this was great. An AP consumed a theoretical maximum of 54 Mbps and we had, typically, dual gig uplinks. If we do some really basic math here, we see the following equation:

[Screenshot: napkin math comparing the theoretical throughput of a closet full of 802.11g APs against dual 1 Gbps uplinks]

 

Granted, this is a napkin calculation to make a point, but you can see it would be REALLY hard to oversubscribe the uplinks in this kind of scenario. There weren't that many wireless clients at the time. AP density wasn't that bad. 2.4 GHz went pretty far and there wasn't much interference.

[Screenshot: the same napkin math redone with 802.11ac data rates, where a handful of APs can exceed the uplink capacity]

 

Hmmm… things look a little different here. Now, there are definitely networks out there that have gone to 10 Gb connections between their closets in the campus, but there is still a substantial number running dual gig uplinks between their closets and their core switches. I've seen various estimates, but consensus seems to suggest that most end-stations connected to the wireless network consume, on average, about 10% of the available bandwidth, although I would guess that's moving up as rich media (video) gets more and more widely used.
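To make the napkin math concrete, here's a minimal Python sketch of the comparison the two screenshots were making. The per-AP rates are theoretical maximums (54 Mbps for 802.11g, roughly 1.3 Gbps for a wave-1 802.11ac AP), not real-world throughput, so treat the numbers as illustrative only.

```python
# Napkin math: how many APs does it take to oversubscribe
# a closet with dual 1 Gbps uplinks?

UPLINK_MBPS = 2 * 1000  # dual gig uplinks back to the core

def aps_to_oversubscribe(ap_rate_mbps, uplink_mbps=UPLINK_MBPS):
    """Smallest number of APs whose combined theoretical rate
    exceeds the uplink capacity."""
    aps = 1
    while aps * ap_rate_mbps <= uplink_mbps:
        aps += 1
    return aps

print(aps_to_oversubscribe(54))    # 802.11g: takes 38 APs
print(aps_to_oversubscribe(1300))  # 802.11ac wave 1: just 2 APs
```

Even with generous real-world discounting, a couple of 802.11ac APs can saturate what an entire closet of 802.11g APs never could.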

Distributed Wireless

We've had the ability to let wireless APs drop client traffic directly onto the local switch for years, although vendors have implemented this feature at different times in their product life cycles; I think it's safe to say it's a me-too feature at this point. I don't see it implemented that much, though, because, in my opinion, having a centralized point in the network, a.k.a. the controller, where I can tunnel all of my traffic gives me a single point at which to apply policy. Because of the limited bandwidth, we could trade off the potential traffic trombone of wireless going back to the controller to access local resources for the simplicity of centralized policy.

Now that a couple of 802.11ac access points can potentially oversubscribe the uplinks on your switch, I think we’re quickly going to have to rethink that decision. Centralized policy not only won’t be worth the cost of the traffic trombone, but I would argue it’s just not going to be possible because of the bandwidth constraints.

 

I'm sure some people who made the decision to move to 10 Gb uplinks will continue to find centralized policy to be the winner of this decision, but for a large section of network designers, this just isn't going to be practical.

Distributed Policy

This is where things start to get really interesting for me. Policy is the new black. Everyone's talking about it: promise theory, declarative intent, Congress, etc… There are a lot of theories and ideas out there right now and it's a really exciting time to be in networking. I don't think this is going to be a problem we solve overnight, but I think we're going to have to start laying the foundation now, with more consistent designs and configurations, so that we can provide a consistent, semi-homogeneous foundation when the policy discussion starts resulting in real products.

What do I mean by this? Right now, there are really two big things that will help drive this forward.

Globally Significant, but not Unique, VLANs

Dot1x, or more accurately the RADIUS protocol, allows us to send back a tunnel-group ID attribute in the RADIUS response that corresponds to a VLAN ID (name or dot1q tag are both valid). We all know the horrors of stretched VLANs, but there's no reason you can't reuse the same VLAN number in various places in the network, as long as there is a solid L3 boundary in between them and they are assigned different L3 address space. This means that we're going to have to move back towards L3 designs and turn to configuration tools to ensure that VLAN IDs and naming conventions are standardized and enforced across the global network policy domain.
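As a sketch of what that RADIUS response looks like, here's a hypothetical FreeRADIUS users entry (the username, password, and VLAN number are invented for illustration) using the standard tunnel attributes that carry the VLAN assignment back to the switch:

```text
# On successful 802.1X auth, drop this user's session into VLAN 20.
# VLAN 20 can be reused in every closet, as long as each instance
# sits behind its own L3 boundary with distinct address space.
alice   Cleartext-Password := "s3cret"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-Id = "20"
```

The switch maps the Tunnel-Private-Group-Id value to a locally configured VLAN name or dot1q tag, which is exactly why the naming and numbering conventions need to be enforced network-wide.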

Consistent Access Control Lists and QoS Policies

RADIUS can also send back a specific attribute in the response that tells the switch to apply a specific ACL or QoS policy to the authenticated connection for the duration of that session. Some vendors, but not all, allow for dynamic instantiation of the ACL/QoS policy, but most still require the ACL or QoS construct to already be present on the network device before the RADIUS policy can consume that object. This means we're going to be forced to turn to configuration management tools to make sure that these policy enforcement objects are present in all of the network devices across the network, regardless of the medium.
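Continuing the hypothetical FreeRADIUS example, a commonly used attribute for this is Filter-Id, which names an ACL that must already exist on the switch (the ACL name here is invented for illustration):

```text
# The switch applies its locally configured "GUEST-ACL" to the
# session. If no ACL by that name exists on the device, the
# attribute is ignored or the authentication is rejected,
# depending on the vendor.
guest   Cleartext-Password := "guest123"
        Filter-Id = "GUEST-ACL"
```

This is the crux of the configuration management problem: the RADIUS server only names the policy object; keeping an identical "GUEST-ACL" present on every access device is left to us.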

 

The future

I think we're swiftly arriving at a point where wireless cannot be designed in a vacuum as an overlay technology. The business needs policy to be consistently applied across the board, and bandwidth to be available and efficiently used. I don't see any way for this to be done without ensuring that we start to ignore the medium type that a client is connecting on. On the bright side, this should result in more secure, more flexible, and more business-policy-driven wired connectivity in the coming years. I don't believe we'll be thinking about how the client connected anymore. We won't care.

 

Agree? Disagree? Did I miss something? Feel free to comment below!

@netmanchris

Network Developer: A Network Engineer's Journey into Python

Like most other people in the networking industry, I've been struggling to answer the question of whether or not network engineers need to become programmers. It's not an easy question to answer, and a few years down this SDN journey, I'm still no closer to figuring out whether or not network engineers need to fall into one of the following categories:

Become Full-Time Software Developers


For those of you who don't know @dave_tucker, he was a talented networking engineer who chose to make the jump to becoming a full-time programmer: working on consumption libraries in Python for the HP VAN SDN Controller, contributing to the OpenDaylight controller, and now joining up with @networkstatic (another great example) and @MadhuVenugopal to form SocketPlane, focused on the networking stack in Docker.

Gain some level of proficiency in a modern programming language

One of the people that I think has started to lead in this category is @jedelman8. Jason is a CCIE who glimpsed what the future may hold for some in our profession and has done a great job sharing what he's been learning on his journey on his blog at http://www.jedelman.com/. Definitely check it out if you haven't already.

This is also where I've chosen to be for now. The more I code, the more I think it's possible that I could go full programmer, but I also love networking too much. I guess the future will tell on that one.

For this category, this will mean putting in extra time on nights and weekends to focus on learning the craft. As someone once told me, it takes about 10 years to become a really good network engineer; no one can be expected to become a good programmer in a year, especially not with a full-time day job.

On the bright side, there are a lot of resources out there, like:

Coursera.org – Just search for the keyword “python” and there are several good courses that can help you gain the basics of this language.

Codecademy.com – Codecademy has a focused Python track that will give you some guided hands-on labs, as long as you have an internet connection.

pynet.twb-tech.com – @kirkbyers has put together an email-led Python course specifically for network engineers over at pynet.twb-tech.com. He's also got some great blogs that discuss how to use Python for functions specifically related to a network engineer's day-to-day job. Having something relevant always helps to make your life easier.

Gain the ability to think programmatically and articulate this in terms software developers understand

I don't have any really good examples of this particular category. For some reason that has so far eluded me, there just aren't many network engineers in this category. If you know of any great examples, please comment below and I'll be happy to update the post!

This is where I was a couple of years ago. I knew logic. I could follow simplistic code if it was already written, and I could do a good enough job communicating with my programming friends to ensure that the bottle of tequila I was bribing them with would most likely result in something like what I had in my head.

 

Stay right where they are today. 

The starfish is one of the few creatures in the history of evolution that went "Hmmm. I think I'm good!" This isn't a judgement, but you need to decide where you want to be, and if starfish is it… you might find your future career prospects limited.


 

 

Journey Ahead

 

As I get back into actually posting, I'm planning on sharing some of the simplistic code that I've been able to cobble together. I make no claims as to how good this code is, but I hope that it will inspire someone else reading this to take some classes, find a project, and then write and share some small script or program that makes their life just a little bit easier. Guys like Jason have done this for me. I recently hit a place where I finally have enough skills to be able to accomplish some of the goals I had in mind. My code is crap, but it's so simplistic that it's easy to understand exactly what I'm doing. And that's where I think the value of sharing comes from right now.

 

Comments or thoughts? Please feel free to comment below!

 

 

Apple Watch: It’s all in the Ecosystem

Unless you were busy living under a rock, you probably saw the Apple announcements last week launching the iPhone 6 and the Apple Watch. I've been doing a lot of thinking lately about the intersection of big data and the Internet of Things, and specifically how they apply to the Quantified Self movement.

 

A little about me

 

Currently, I've got a lot of stuff going on in my life, specifically going through a separation from my wife of ten years. One of the ways that I'm choosing to deal with this is to try to focus on the moment and the daily activities: to set small goals for myself and to track them using various methods to see whether or not I've achieved them at the end of the day. Most days, I'm hitting the goal. Others I miss, and I come back that much more determined the next day to get my life back on track. Three amazing kids are an awfully powerful motivation.

 

My devices

 

I've been loosely tracking my stats for a few years now and I've had a bunch of different devices in that time. Here's the non-inclusive list that I can remember off the top of my head, more or less in chronological order from when I started using them, although I'm sure there's a couple I've missed.

 

Garmin Forerunner 305 (circa 2008)


This was one of my first entries into the QS movement. It was a good product. Stable. Built-in GPS, and it connected to an ANT+ heart-rate monitor. I used this a lot at the time. I'm sure if I go back you can see my tracks over Europe, Canada, the US, and even a few weeks in Vietnam.

Not a bad application to go with it, but the data was pretty much locked in. They did eventually kill the 32-bit app in favour of a web interface, but by the time that was released, I had moved on.

 

Wii Fit (circa 2008)

I first got into the Wii Fit with the original release. Lots of family fun. Getting my kids active is important to me, and that's not always easy. This was arguably the first gamification of fitness. It worked. The kids loved it.

I've upgraded to the latest Wii Fit U, which is currently a favourite of my kids. The balance board is an awesomely accurate scale as far as tracking balance goes. The biggest problem I have is there's no way to get all this awesome data out of the game. It's locked completely in Nintendo's hands. I can see the data over time, but there's no way to pull it out and do any data mashups to see if anything interesting comes out of the combined data.

 

Fitbit (original) (circa 2012)


I got a Fitbit when they were first released around 2012. I lost the first one while walking because of a bad design on the belt clip, lost the second on a trip coming back from Barcelona, and I'm onto my 3rd iteration, which I'm happy to say has a much better belt clip and is currently hanging from my belt. One of the things I like most about the Fitbit is that they have allowed other vendors, like Runkeeper, to access the data and use it in their own applications. I've tried a couple, but so far I always end up coming back to the Fitbit apps; whether the iOS app or the webapp, they are still the way I prefer to look at that data.

www.fitbit.com

Apple iPhone 5s


With the built-in motion sensor, the 5s has some interesting capabilities. I've not taken advantage of them, to be honest, but I'm aware the data is there, if only I looked.

Pebble Smartwatch


Pebble's got the tech to be able to do a lot of what the Fitbit does, but so far I haven't seen anyone taking advantage of it. There are the golf training apps, which show the potential of the hardware platform, but I just haven't had a chance to play with that yet. Not a golfer. 🙂

 

http://www.getpebble.com




Wahoo Bluetooth Heart-rate Reader

Self-explanatory, I think. The Wahoo sends heart-rate data to the iPhone. Apps like Runkeeper can then access the data as they track your pace, speed, GPS position, etc… and help provide you some "your heart rate goes this fast when you move this fast" style observations.

http://www.wahoofitness.com

Fitbit Aria Scale

This is an interesting Fitbit product that takes the pain out of tracking your weight. Sure, I could write it down and later manually input it into a system, but the Aria connects directly through my wi-fi network and auto-uploads the results of each weigh-in to the Fitbit online system. I can also weigh myself in the morning, afternoon and evening and have all that data, complete with the timestamp of each measurement, to be able to look at fluctuations.

http://www.fitbit.com

Muse personal EEG by InteraXon


The Muse is an interesting product that I wrote a bit about here. A personal EEG reader. Good SDK, but all manual: you can do anything with the data you want, as long as you can write the code yourself. The app is decent and they just updated it, taking into account user feedback and improving what their customers told them was important. I'm starting to expect good things from this company. So far, I'm impressed with the product, the packaging, and especially the willingness to engage with and listen to their customers and enhance the product based on their feedback.

http://www.choosemuse.com

Sense


This is a Kickstarter project which takes the QS movement and applies it to sleep. Sure, my Fitbit can give me an idea of how I slept last night (number of times awake, how restless, etc.) and quantify some measurement of how good my sleep was, but it doesn't give me any insight into WHY my sleep was the way it was. That's where Sense comes in. The Sense has a sleep sensor that attaches to your pillow, which looks like it will gather a lot of the same data as the Fitbit. Where the Sense differs greatly is the base station, which also gathers audio and air quality data, and potentially other pieces of data such as the level of light during the night, and then attempts to correlate those different pieces of data with the quality of your sleep. Did a car alarm go off at 3:17am? Was there an abnormally high amount of dust or pollen in the air? Was it too dark or too light? How did all these factors affect the measured quality of your sleep? Again, I don't actually have the product, so the final features may differ, but the concept is there at a price which caused me to jump in.

http://www.hello.is 

 

My Apps

The other side of the hardware equation is the apps. I've used quite a few of them over the last few years. Here's another non-inclusive list with some quick comments. Almost all of these devices have their own app (fat or web) and most of them also have an API that one could choose, if one had the inclination and ability, to mine for data.

Garmin Forerunner App

A fat 32-bit app. Locked-in data. It worked, but they stopped updating it and moved to a web interface. I think it might still be around, but I abandoned it long ago.

Fitbit Website

This is my return-to interface for my health data currently. My Aria scale and my Fitbit both publish nicely into this interface with no action on my part. The interface is clean, and they make an effort to improve both the iOS apps and the web on a regular basis.

Runkeeper

Runkeeper is something I use on and off. I tend to use Runkeeper when I am focused and going to the gym daily. When work gets too busy and I’m not able to make it to the gym, I stop using it and fall back on the fitbit apps to track my physical activity and make sure I get my 15,000 steps a day. 

Fitocracy

Interesting app. Abandoned it quickly. Social + workout may be good for some people, but it didn't motivate me.

Endomondo

Lots of people love it. I found it similar to Runkeeper, without enough difference to commit to trying something new. It's a good app, and I'm sure I would be there if I hadn't tried Runkeeper first. I'm sure there are differences that would make a person choose one over the other; I just don't know about those particular differences, so I stuck with the Runkeeper/Fitbit combo.

Nike+

I had an Apple iPod Nano with the Nike+ pedometer built into it, but I stopped using it when I got my Pebble. The historical data is in the Nike+ website and I have no idea how I might get it out.

So what does the Apple Watch do for me?

I've got no special access to any data around the Apple Watch (although I'd love to review it if someone sent me one! hint! hint!), so I'm basing my comments on the Apple Watch launch last week. This device appears to be a nice combination of some of the devices I already own. At first glance, it could replace the Wahoo heart-rate reader as well as potentially the Fitbit, although what information the Apple Watch is tracking, and its accuracy, is still mostly unknown. Steps, sure, but what about the ability to track flights of stairs like the Fitbit does? Or sleep patterns? I would guess no noise pollution or air quality like the Sense, but Apple has surprised us before. I suppose they could have hidden a smell sensor, and they could definitely leverage the microphone in the iPhone or iPad to gather noise data. The Watch certainly looks capable, but considering what else it's doing, I imagine the battery life may become a problem.

Data all over the place

“Data Data everywhere and not a drop to drink”

The main problem that I currently see with the QS movement, and my personal attempt to derive some data-inspired observations about my life, is the fact that the data is all tied into particular vendors' data structures and repositories. Of the different tracking devices that I've used over the years, the most accessible of these has been, by far, the Fitbit. Fitbit has put out a pretty decent API and has allowed other vendors, like Runkeeper or MyFitnessPal, to draw the Fitbit data out into their own webapps. There's also a custom watch face for the Pebble smartwatch which can draw out the Fitbit data through an Android phone or iPhone and display how you're doing on a given day, at-a-glance, right there on your wrist. You're keenly aware of where you sit against your movement goals every time you look at your wrist. But Fitbit does not allow access to all of the data they track through their API. There are some portions, like the sleep data, that they appear to be keeping to themselves, for either business or resourcing reasons. They seem to be a great company, so I'll just assume that they are too busy building out great new features to extend the APIs for sleep data right now.
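As an illustration of how accessible the Fitbit side is, here's a minimal Python sketch that parses a daily-activity response. The JSON shape shown is my assumption of what the activities endpoint (GET /1/user/-/activities/date/&lt;yyyy-MM-dd&gt;.json) returns; the real call also needs a registered app and an OAuth access token, which I've left out.

```python
import json

# A sample payload in the assumed shape of Fitbit's daily activity
# summary (a real response carries many more fields).
sample = '{"summary": {"steps": 15234, "floors": 12, "veryActiveMinutes": 41}}'

def daily_steps(payload):
    """Pull the step count out of a daily-activity response body."""
    return json.loads(payload)["summary"]["steps"]

print(daily_steps(sample))  # 15234
```

Once the data is out as plain JSON like this, mashing it up with other sources is a small-script problem rather than a vendor negotiation.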

 

The Garmin device? I had to abandon the data completely. The Wii is great, but a data black hole: anything that goes in does not come out. The Muse is new and extremely accessible, with a decent SDK, etc…

Long story short: All of these devices have left me with an extremely disjointed collection of data and data sources that are oozing with unconnected potential insights, if only I had the time and patience to sit down and create a framework to pull it all together. 

Ecosystem in the making?

Apple makes great products. Period. I own many of them, and I'm deeply entrenched in what I think is going to be the real value proposition of the Watch: the Apple ecosystem. Apple has done a phenomenal job of connecting the various iPods, iPhones, iPads, Apple TVs and OSX machines together through a common ecosystem, all accessed, primarily, through the iTunes and App Store interfaces. This brings a common interface, a common user experience, and a common expectation to the entire range of Apple devices. They have done what I consider to be an amazingly good job of connecting those devices and the applications running on them. What's most amazing to me is that they have actually extended this functionality to their developer ecosystem as well, allowing ISVs to take advantage of those same connection points to provide a more seamless user experience. And if reports are to be believed, this is only going to get tighter with the OSX Yosemite and iOS 8 releases.

I believe that the Apple Watch and the health sensors could pave the way for a framework which independent hardware and software vendors can plug into, very similar to what is done with the iPhone and iPad platforms today. Run an app? Sure! Have a custom peripheral that you want to use to send data to the device, like the Muse? Absolutely! Come one, come all!

I expect to see Apple create a health framework to receive all the Apple Watch and currently available iPhone health sensor data. In the first iteration, it will most likely be Apple-only and most likely limited in functionality. But as they iterate and extend, I think we're going to see the Apple health framework become the de facto standard to which health-related IoT devices send their data, and the primary access point ISVs look to for consuming that data.

Granted, there are a bunch of potential privacy concerns that may get in the way of this, but Apple managed to get the record industry to bend beneath their will. I think that if there’s a company out there that could possibly tackle the issues and come out with something useful, Apple is more likely than most to rise to the challenge.

 

What the Future holds

With the democratization of all of this data, I'm extremely excited about the possibilities and the insights into the nature of the human condition that could be derived from having such an abundance of data across such a huge proportion of the population. There's a lot of work to be done to figure out how to categorize the contributors into useful demographics that allow us to start grouping and sifting the data for interesting correlations.

Imagine if all of that data could be sanitized and drawn into a connected series of data sources, all uniformly accessible through a common set of Apple HealthNet (I'm making the name up!) APIs, which allow app developers to write to a common API and give hardware developers a common schema to which they can publish their data. If they need something else, let them extend it themselves, or have them work with Apple to extend the schema where necessary so all devices can take advantage of it.

Even better, have the medical community give input into the schema as well, allowing them to actively solicit different types of data from the collective Apple-bearing masses. Crowdsourcing huge amounts of data.

There are only a couple of ways to improve accuracy in statistical analysis: increase the number of samples in a given time period, or increase the number of time periods across which you sample. Either way, more data leads to more accurate estimates.
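That claim is easy to demonstrate numerically: the error of a sample mean shrinks roughly as one over the square root of the sample count. A quick simulation (synthetic Gaussian data, nothing health-specific):

```python
import math
import random

random.seed(7)

def mean_estimate_error(n, trials=2000, sigma=1.0):
    """Average absolute error of the sample mean over n Gaussian draws
    (true mean is 0, so abs(sample mean) is the error)."""
    total = 0.0
    for _ in range(trials):
        samples = [random.gauss(0.0, sigma) for _ in range(n)]
        total += abs(sum(samples) / n)
    return total / trials

err_small = mean_estimate_error(10)
err_large = mean_estimate_error(1000, trials=500)
# 100x the samples should shrink the error by roughly sqrt(100) = 10x.
```

With 100 times the contributors, the noise in any population-level estimate drops by about an order of magnitude, which is exactly why a huge donor pool matters.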

 

Are there privacy issues?  Sure there are. How do we allow medical researchers to mine that data pool while protecting the individual's right to privacy?  One way to do that is for a single organization to take on the burden of that responsibility and allow other entities to access the data through structured, secure methods.  Kinda sounds like Apple might be in that position soon.

Am I crazy?  What do you think? Looking forward to your comments below.

 

@netmanchris

 

 

 

 


Quantified Self Meets Big Data: A Meeting of the Minds

As I’ve written about before, I’m diagnosed ADHD. I’m not one of those “squirrel!”-joking guys who is “sure” he has ADHD but has never been tested. I’ve been on meds, and over the years I’ve done a ton of reading to develop coping strategies for the challenges presented by the different way my brain works, to mitigate the drawbacks and take full advantage of all the gifts that come with ADHD.

 
One of the coping strategies that I’ve always been very interested in is bio-feedback. Imagine if you could actually “see” what your brain is doing. Imagine that you could actually “watch” your attention lapse in near real-time! How amazing would that be? Imagine the insights that could be derived and the potential to identify triggers for attention deficits. (For the record, I’ve not struggled so much with an inability to focus as with an inability to SHIFT focus when I need to.)
 

Enter the year of the portable EEG.

 
2014 is the year of the portable EEG. In 2013, there were at least three different projects that I’m aware of focused on bringing brain science to the masses.
 
For the record, I’m not a brain scientist, and any assessments I make here are based purely on my own very limited ability to judge.
 

Emotiv Insight:


This seems to be the most technically advanced of the three projects. The Kickstarter campaign has been slow, to say the least, and they’ve had a few setbacks over the course of the project. But they have been fairly consistent with feedback, and the company seems to have more participation in the academic community.

I can’t make any judgement call on the actual device, as they are behind on delivery (April 2013 was the estimated delivery date). But I have high expectations for this one.
 
The SDK will probably be quite mature, as I’m pretty sure they will be leveraging tech from their earlier products.
 
In the latest update, they mentioned a company called Neurospire, who currently use EEG data for marketing purposes (very cool concept!). Turns out they are changing their game a bit to something closer to my heart: they just won their first round of funding to develop a biofeedback application aimed directly at aiding children with ADHD. I’m very excited to see what they come up with and whether it can help my kids as they learn to deal with the pros and cons of their differences.
 

Melon

 
Melon seems to be more of a fun project. The science and tech seem to be there, but the focus seems to be on bringing the fun. They have made some adjustments to their original design, based on Kickstarter backers’ feedback, to allow the headband to adjust from kid-sized to gargantuan-cranium-sized. The application is also more focused on fun, or so I’ve been led to believe. The app measures your focus, and IF you can stay focused, it will let you fold origami animals.  Sounds kinda funny, but I can tell you my kids are actually excited about this one.
 
Imagine… Folding. Paper. With. Your. Mind.    
 
Yeah. I know, right?
 
The SDK is also an unknown at this point, as it’s still listed as “available soon”.
 
Looking forward to this one, which is also on the late shipping train. The original ETA was November 2013, but according to the latest update, we should be seeing it in late September.
 

Muse:  

 
I actually got turned on to this one by @beaker.  They went the indiegogo.com way rather than kickstarter.com.  I didn’t end up getting in on the funding on this one, so no deal for me.  But…  they actually shipped.
 
Yup. I put in an order and it arrived 2 days later at my door. InterAxon, the company that makes the Muse, is actually out of Toronto, so this is one of the RARE occasions that I’ve not had to wait or pay extra for shipping to Canada! (Woo-hoo!)
 
This product just started shipping, but they already have an SDK in place, as well as an app, titled Calm, for both iOS and Android.  Being an Apple guy, I tried it out and was actually pretty impressed. Clean interface, simple for now, but the concept works. In a nutshell, the weather gets calmer when you get calmer.
 
The hardware seems solid. There’s one sensor that I have a little bit of trouble with, but I’m not sure if that just needs more practice or if something is actually wrong with the unit. Only time will tell, I guess.
 
The SDK seems not too bad either. I had some trouble getting the Muse to connect on OS X, but that’s MOST likely because I’m running a beta of a pre-release version of a certain fruity OS.
 
The Windows and OS X installs were pretty similar, to be honest. The SDK is Python-based and requires Python 2.7 (WHY NOT Python 3????) and a few typical libraries (NumPy and SciPy, from memory). It’s pretty well documented on the choosemuse.com website.
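To give a feel for the kind of processing those libraries are used for, here's a toy sketch of estimating alpha-band (8-12 Hz) power in a raw EEG window. This is NOT the Muse SDK API; it's a generic, stdlib-only illustration, and the sample rate and window length are assumptions.

```python
import math

FS = 220   # samples/sec -- an illustrative consumer-EEG sample rate
N = 220    # one-second analysis window

def band_power(samples, lo_hz, hi_hz, fs=FS):
    """Sum of squared DFT magnitudes for bins between lo_hz and hi_hz
    (naive DFT; a real pipeline would use scipy.signal instead)."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(samples))
            im = sum(-x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(samples))
            power += (re * re + im * im) / n
    return power

# Synthetic "relaxed" signal: a dominant 10 Hz (alpha) oscillation.
signal = [math.sin(2 * math.pi * 10 * i / FS) for i in range(N)]
alpha = band_power(signal, 8, 12)   # large: energy sits in the alpha band
beta = band_power(signal, 13, 30)   # tiny: nothing in the beta band
```

An app like Calm is, at heart, doing this sort of band-power comparison in a loop and mapping the result to the weather on screen.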
 

Big Data meeting of the minds.

 
One of the truly cool things the quantified self movement brings is the sudden influx of contributors to datasets.  The Calm application for the Muse allows the user to share their data in a non-identifiable way back to the InterAxon servers, along with the obvious demographic questions that get asked as part of the initial registration.
 
Imagine how Big Data algorithms can be applied once enough of us start to donate the output of our sessions, along with enough demographic information to allow data scientists to create K-plots, run Bayesian functions, and start pulling out some interesting observations.
 
Imagine how Bayesian algorithms could suddenly pull out astonishing insights when you combine the EEG readings from the Insight with the activity level and sleep patterns from the Fitbit, and throw in a little dash of air quality and noise pollution from the Sense. Mix it up in “the cloud”, start comparing our sanitized, non-personally-identifiable data with other people’s sanitized, non-personally-identifiable data from similar demographics, and we start to have enough data to push the envelope of our understanding of our behaviours.
 
The scariest thing for me is that we might actually be able to quantify what normal actually is. 🙂
 
Ok… so maybe that last one is a bit of a stretch, but it’s certainly going to be interesting watching what happens over the next few years as this data starts to coalesce. Data gravity starts to kick in, and suddenly we have a large enough data set for things to get REALLY interesting.
 
Anyone else out there donating data? Scared? Paranoid?  Anyone else looking forward to it?

Voice Engineers Will Rule the SDN World

Voice Engineers take a lot of crap.

 

There’s definitely a hierarchy to the world of network engineers, and sometimes it looks a little like the following:

 

[Image: the network engineer pecking order]

 

As a former Voice Engineer, I’ve definitely been the victim of this good-natured jockeying for position. I have to admit that I’ve even participated in the ribbing of my former voice peers.  Being a voice engineer is kinda like being the Rodney Dangerfield of the networking world.  (You know the tag line.)

I’ve even been told on more than one occasion that my CCIE “doesn’t count” because I did Voice. I’ve always taken it as the joke it is. I’ve got digits just like everyone else who’s ever passed a lab. But some people out there might not have the same thick skin I’ve developed over the years. (Thick skin is something else Voice Engineers get. I think it’s all the exposure to end-users.)

For the record, I have the utmost respect for voice engineers. It is one of the toughest gigs in networking. You have to have skills in so many areas, not to mention the constant human interaction that comes with the job. And we all know that not every network engineer handles constant human interaction well.

I’ve had some great Twitter conversations reminiscing about the past, and I realized that voice engineers just might be the best positioned to really make things happen in the new world. I think if we look at any past experience, there are skills which are transferable if you just look at them in the right way.

There are a lot of network engineers of all stripes and colours out there busy asking themselves the question “How do I make myself relevant in the new SDN world?”. I think the first part of answering this question is to identify which of your current skills are transferable. Once you’ve got that figured out, you identify what’s missing and then hit the books.

So here’s the list of why I think Voice engineers will have a great shot of making themselves quickly indispensable in the new world order.

Why Voice Engineers will rule the world

Experience with Controller-Based Networking

When explaining the concept of controller-based networking, people always jump to the obvious analogy of wireless controllers with FIT APs. But it occurred to me last night that a potentially better example is the voice PBX.  Especially in a world where VMware’s NSX is quickly becoming part of the conversation, the parallels become even stronger.  Voice is the original overlay.

Voice has its own addressing scheme (E.164), independent of the underlay infrastructure. There’s a centralized controller (CallManager) which has complete knowledge of the state of every endpoint (phone) in the network.  As voice engineers, we’re used to looking at symptoms of a problem with no direct linkage between the overlay and the underlay other than our own knowledge. I believe this experience is going to come in very handy for those of us who choose to make the jump.

System-Level Thinking

Voice Engineers think in terms of the entire PBX as a system. There’s something just different about the approach of voice engineers as opposed to traditional R&S engineers. R&S guys have a great ability to focus in on the individual device they are working on. They are so used to living in a distributed-state system that they can comfortably step into the shoes of any device on the network and quickly analyze that particular device’s perspective. As a voice engineer, having the ability to see things from the perspective of one particular voice gateway and analyze dial patterns is important, but being able to understand the entire system is what makes us fundamentally different.

Operating System Skills

I have no idea how this happens, but I’ve actually seen REALLY good network guys who struggled to statically define an IP address on a Windows box. As a network management advocate now, I’ve seen the fear that a GUI can place in the eyes of a packet head one too many times to dismiss how valuable all that exposure to operating systems was. As a voice engineer, I had to work with Windows NT4 (Callmanager 2.4), Windows 2000 (Callmanager 3+), Linux (Callmanager 5+), VxWorks (3Com NBX), Linux (3Com VCX), not to mention all the client machines where I had to install the original Cisco Softphone (oh! the pain!), the IP Communicator, or any of the other softphone clients that have come along the way.

I’m comfortable on OSes. When I’m asked to install the HP VAN SDN controller on Ubuntu, or want to play with OpenDaylight or NOX or Floodlight or any other controller out there right now, I’m not intimidated in the least. I wouldn’t call myself a Linux expert, but I’ll jump right in and figure it out.

Programming skills

 

The CCIE Voice is one of the only (THE only?) labs which has ever forced candidates to do Java programming. For those of you who’ve never done it, ACD programming in an IPCC environment is based on Java beans. Digit translations are based on RegEx. And troubleshooting complex number translations through an entire dial plan with AAR turned on will force you into a Boolean mindset, whether you actually understand that term or not.
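The RegEx point is easy to illustrate. Here's a hypothetical digit translation: mapping 4-digit internal extensions onto a full E.164 number. The +1-415-555 DID block is made up purely for illustration.

```python
import re

def translate(dialed: str) -> str:
    """Expand a 4-digit internal extension (1XXX) into a full E.164
    number on an invented +1-415-555 DID block; pass everything else
    through untouched."""
    # ^(1\d{3})$ matches extensions 1000-1999 and captures all 4 digits
    return re.sub(r"^(1\d{3})$", r"+1415555\1", dialed)

print(translate("1234"))  # internal extension -> full E.164
print(translate("911"))   # no match, passes through unchanged
```

Chain a few dozen of these together across gateways, with AAR rewriting numbers mid-call, and you can see where the Boolean mindset comes from.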

 

People Skills

Voice Engineers have to interact with people. A lot. In fact, we probably have to interact with people more than any other sub-genre in the networking profession. We have to learn various IT dialects to communicate with the legacy voice people, the hard-core network people, and the “real” programmers who are going to do the complex ACD scripts for us. Not to mention the actual end-users who are experiencing difficulties with one-way audio or making a fax go through. (Yes, people still use fax machines.)

Strong communication skills and the ability to quickly shift vocabulary to effectively communicate between the different stakeholders in an SDN environment is probably going to be one of the most important skills as the silos start to break down.

 

Strong Networking Skills

As much as people like to make fun of voice engineers, most of them have an unbelievable level of foundational networking knowledge. They may not be the strongest in BGP or MPLS, but in my experience they understand the basics of networking at a level that most of the other sub-genres don’t get to. You don’t ever want to get into an argument about QoS with a voice engineer. We understand spanning-tree like nobody’s business. In fact, because of RTP’s complete intolerance of packet loss and delay, we have had to become really, really good at performance-tuning the network to ensure that every packet arrives in order in less than 150ms (G.114 standard, people!).

 

Wrapping it up

To be honest, I believe every network person who wants to is going to make it into the new world. I think we spend so much time talking about whether or not SDN is going to force people to become programmers that we haven’t spent enough time identifying what skills we already have that are going to serve us in the new world.

 

Do you believe that one of the other network sub-professions above is better equipped to move into the SDN world?  I challenge you to write the blog post and make a better case.

 

Comments, as always, encouraged and welcome!

 

@netmanchris

Things I learned as a voice engineer

I have something to admit. I’m a recovering Voice Engineer. It’s been almost 10 years since my last installation. I occasionally experiment in the privacy of my home, but I’ve mostly broken the habit. It’s actually been so long that most of the people I work with have no idea that my CCIE is actually in Voice. I was actually one of the first 50 or so in the world, which means I’ve been out of that world longer than most people have been in it. I try to stay current, but to be honest, my passion has moved on to other things.

 

For some reason, Voice Engineers are viewed as the bottom feeders of the networking world. Yet I never seem to be surprised when some of the best networking people I know turn out to have a solid voice background. Just to call out a few of the Twitter personalities:

 

@networkingnerd – Yup. Tom used to be a voice engineer.

@amyengineer – Uh huh. Her too. She slips occasionally, but she’s getting better.

@treylayton – CTO over at that little VCE company?  Before vBlocks and NetApp, he was one too.

and then tonight I just found out

@colinmcnamara has a great voice background from the early days as well. Yup. The DevOps/OpenStack/automation guy? He’s one too.

 

I’m not talking about the kind of voice engineers we see today: the kind who reads one of the many Cisco Press purple books that are available (in my day they were teal, darn it!), logs into a website which automatically puts together a complete Cisco Communications Manager Express (#hurtstoevenwritethat) configuration based on input into a webpage, and thinks that SIP has always been here and that H.323 is a “legacy protocol”.

I spent some time reminiscing about AS5300s, MCS 3800s, MGCP inconsistencies, importing hundreds of MAC addresses with a barcode reader, and the various other shenanigans we had to pull at that point in the industry.  To be honest, it feels so good to know that we will never. ever. ever. have to do any of those things again.

But there were also some really great lessons I learned from that point in my career.

 

Be nice to everyone

I’m lucky that I’m Canadian and have my parents and country to thank for teaching me good manners. Unfortunately, I’ve seen this go very, very wrong when I was paired with a salesperson who didn’t have the same upbringing I did.

Situation: we walk into a potential customer’s office. He’s immediately rude to the receptionist and tells (not asks) her to let the CIO know we’re here and to go get him a coffee. Not even a please. I don’t think he called her honey or sweetie, but it was definitely implied. I was speechless, and young enough not to step up and defend her.

So the sale goes on. We get the buy-in of the CIO, the CFO, and the accountants (they eventually became known as the IT department), and basically nailed the sale. We had everyone’s buy-in, and the sales guy was looking forward to a big commission check.  Then we were marched back to the front of the office and sat down in front of the receptionist he had offended from the moment he walked in.  She was shown the receptionist station phone and asked what she thought.  Anyone want to guess how this turned out?

This just reinforced what mom and dad taught me. Be nice to everyone. It’s important. You never know when a lack of manners is going to come back to haunt you.

 

It’s all about getting from point A to point B. The rest is just details. 

 

For anyone who lived through the early days of Voice, you will remember the media frenzy of the time telling everyone that the network team was going to absorb the comms team and that all the legacy voice people were going to lose their jobs. In fact, Cisco at the time was using the headcount reduction as one of the selling features.  This set the stage for a huge war between the Voice and Networking teams. Never a good thing. The legacy comms teams were afraid of this new technology. They didn’t understand what IP, TCP, CME, CCM, SRST, H.323, SIP, etc… meant, and they didn’t even know where to start.

There was one particular account that I was dropped into where all of the information sat in the head of the legacy comms guy, who was running a multi-site Nortel network with some Option 11s and various Norstars (Nortel key switches) in a fairly complicated voice network. I was the 3rd or 4th person to get dropped in, and I was warned about him. He was feeling threatened and had been intentionally difficult.  And rightly so: he had been told that we were coming in to replace him and that he had no path forward.

I remember sitting down with Dave, and I spent about 2 hours explaining routing in voice engineer terms: routing in terms of dial plans, etc…  At the end of it he just looked at me, shook his head, and laughed.  Then he looked up and said “That’s it?”

Apparently, the guys before me had never taken the time to explain any of what we were doing in terms that he could understand. Turns out the guy is a routing genius and understands route summarization, advanced routing concepts, ACLs, and much, much more. He was actually running a complicated voice network spanning 6 separate LATAs and doing Tail-End Hop-Off (TEHO) while dealing with non-contiguous NXX codes.

The network ops teams came to me a couple of days later and wanted to know what kind of magic I had pulled with Dave because they had never seen him more cooperative or helpful. Plus, he suddenly was a lot more interested in what they were doing and seemed to have gained 10 years of network knowledge overnight.

This project taught me that when you have a good conceptual model and you really understand it, the rest is just implementation details.

 

Just because it’s a standard, doesn’t mean it’s a standard.

SOOOOOO many examples of this, but I think the first time I learned it was when SIP first came out. Wow. What a mess. Now we see non-standard implementations of protocols all the time. It’s sad that we’ve gotten to the point where we don’t even have outrage over it anymore. Which brings me to the next lesson.

 

Trust but verify.

I worked with a customer where we had ordered 2 PRIs (ISDN Primary Rate Interfaces) for voice service. We were clear that we HAD to have CallerID as part of the bundle. We received the sheet from the vendor (a Telco that doesn’t exist anymore) who was actually subleasing the lines from another Telco.  It had all the regular line information on there: protocol DMS 100, etc…  So we spun up the line, and we didn’t have CallerID.  Long story short, the customer didn’t pay their bill for almost 2 years because the Telco could never get CallerID to work, and every time we started getting somewhere, the account team changed and we had to start over from zero.

One night during an outage window, just for giggles, we changed the protocol on our end from DMS 100 to NI1 (or NI2). For those of you who don’t know, they are all pretty similar as ISDN protocols; the one difference I remember is that the place where the incoming CallerID is stored differs among the three implementations. Suddenly, CallerID worked. The customer didn’t say a word to the vendor for another few months. 🙂
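The change itself is a one-liner on a Cisco voice gateway. A sketch of what that outage-window tweak might have looked like, with the controller and interface numbering invented for illustration:

```
! Illustrative IOS snippet -- controller/interface numbers are made up
controller T1 1/0
 pri-group timeslots 1-24
!
interface Serial1/0:23
 ! switching the D-channel protocol variant:
 isdn switch-type primary-ni
 ! was: isdn switch-type primary-dms100
```

One keyword, and the gateway starts looking for the CallerID in the place the far-end switch was actually putting it.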

 

The application is what’s important. 

The most important thing I learned as a voice engineer is that the application is what’s important. Or, more accurately, the value that the business derives from the application. In those days, to say Callmanager had a minimal feature set would have been an understatement.  I remember Lucent (now Avaya) competing against us with “We have 250+ features! You might need them! You’ll want to make sure that you have them!”  The response was usually “Name 10 of those features.”  I’ve never had a customer get past 6 or 7.  Funny enough, I now see network vendors doing the same thing.

All that really matters is whether the product you’re pitching has the right features and whether you can align them to what the business is doing, or wants to be doing.

 

 

Funny enough, I find myself having a lot of the same conversations now as our industry goes through another paradigm shift into the world of SDN, APIs, and Orchestration. Hopefully, I learned the lessons well enough the first time not to repeat them this time around.

 

To all my other crusty old voice guys. Thanks for all the lessons you taught me.