The First IoT Culling: My Devices are Dying.

 

Cull: to reduce the population of (a wild animal) by selective slaughter

As an early adopter of technology, I sometimes feel like I get to live in the future. Or, as William Gibson said, “The future is already here, it’s just not evenly distributed.” There are a lot of benefits to be gained from this, but there are also risks. One of the biggest risks is simple:

How long is the product you choose going to be around?

 

I was an early adopter in the first wave of IoT devices. From wearables to home convenience devices, I dipped my toes in the pool early. Most of these platforms were Kickstarter projects, and I’ve been generally happy with most of them, at least the ones that actually shipped. (That’s a story for another time…)

But in the last six months, the market seems to have decided that there are just too many of these small companies.

The Death Bells are Ringing

In the last year, I’ve noticed a trend: many of the early platforms I invested in seem to be disappearing. Some have been bought up and killed. Remember the Pebble watches? Pebble was acquired by Fitbit, and my original Pebble and Pebble Time are now little more than traditional watches with short battery lives.

 

Some are just dying on the vine.

The latest victim? The Sense sleep-monitor system by Hello. This was a Kickstarter project that really helped to define a new category. When the project was launched in 2013, there was nothing else like it on the market, at least nothing that I’m aware of. Like most Kickstarter projects, they shipped later than their Aug 2014 estimate, but when it arrived it was definitely worth the wait.

This device had multiple sensors: light, sound, humidity, VOC (air quality), and temperature. It also had remote Bluetooth motion sensors that attached to your pillows to track body movement while you sleep. The basic idea: you sleep for a third of your life, so shouldn’t you make sure you’re doing it right? Combining the sensor data with sleep research helps users understand why they feel good or bad in the morning, how to create the optimal conditions in their bedroom, and so on. Obviously I’m not a sleep expert, but I can say that Sense has improved the quality of my sleep since I started using it.

Just last week, we all received the sad news that Hello, the company behind the Sense product, is shutting down.


What’s happening?

Although some people believe that Sense was just a faulty product, I think there is something deeper going on here. This shows a fundamental flaw in the business models that some of the early IoT and wearables players came to market with. There is a very simple business principle that they somehow managed to completely miss.

If you want to survive as a business, your incoming cash must be more than your outgoing cash.

 

Pebble, Sense, and several other wearable and IoT products on the market right now were built on a single-purchase model. You buy the product and you get the unlimited right to use it. This is the consumer’s preferred model. When I spend my money on something, I want to own it. I want to be able to use it, and I don’t want to have to pay for it again and again. At least, that’s the simple version of the thought process.

Spending some time watching various devices become little more than expensive bricks has made me re-examine that thought process though.

Why I’m looking for Subscription models now

Yup. That’s right. You heard me.

I want to find companies that are actively looking to provide value funded through a subscription model of some kind. Companies like Nest and Ring, which provide cloud storage for security cameras, are a great example of this in action.

Looking at the failing companies, the one thing I’m starting to see in common is that they tend to make a whole bunch of money up front ($10M+ for the initial Pebble Kickstarter project, one of the largest ever on that platform!). But they tend to be niche products with a limited target audience, and when that target market has been saturated… no more money comes in, and they’re left having to keep paying for the “cloud” infrastructure required to keep their products going.

Looking at Sense and Pebble, both of these platforms sold on a hardware model. They have a product offering where your devices connect to cloud-based infrastructure; whether that’s AWS-based or something else is irrelevant. What most consumers don’t realize is that cloud-based infrastructure has a recurring monthly cost. And that doesn’t include the cost of ongoing platform development, whether that’s adding new features, creating a better user experience, or just upgrading to stay current with the newest versions of Apple iOS or Android shipping on current devices.

This is fine as long as you continue to sell new hardware, but as the number of new users starts to trend down and your costs stay the same… we start to see what’s happening in the market right now.
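To make that concrete, here’s a toy cash-flow sketch. Every number in it is invented purely for illustration, but it shows the shape of the problem: one-time hardware revenue tapers off while the cloud bill and the payroll keep coming.

```python
# Toy runway model for a single-purchase IoT business.
# All numbers are invented for illustration only.
units_sold_per_month = [5000, 3000, 1500, 700, 300, 100]  # sales taper off
price_per_unit = 129.00        # one-time purchase price
margin = 0.35                  # fraction of the sale left after COGS
cloud_cost_per_device = 0.15   # recurring monthly infra cost per device sold
monthly_opex = 100_000         # ongoing development, support, salaries...

cash, fleet = 0.0, 0
for month, units in enumerate(units_sold_per_month, start=1):
    fleet += units
    cash += units * price_per_unit * margin  # one-time hardware revenue
    cash -= fleet * cloud_cost_per_device    # cloud bill for every device ever sold
    cash -= monthly_opex                     # fixed costs don't taper with sales
    print(f"Month {month}: {fleet:>6} devices, ${cash:>12,.2f} in the bank")
```

With these made-up numbers the company is underwater by month five, while still owing a monthly cloud bill for 10,500 devices that will keep phoning home indefinitely.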

Are subscription models the only way?

Absolutely not. There are other companies, like D-Link, iHome, or iDevices, that have a fairly broad portfolio of products and are continuously creating new ones. This helps ensure they have a healthy income stream as individual product segments become saturated. They can afford to keep funding app development and the infrastructure required to host their services because they’re spreading that cost over many devices.

 

More Deaths in the future

There have been some notable passings, such as Pebble and Sense, but I don’t think they’ll be the last by any stretch of the imagination. 2017 and 2018 are going to be hard years on early adopters as we watch our mirrors, watches, and gadgets blinking eternally, with no home in the cloud to call back to. I’m hoping that many of the new IoT players start to realize that having a good technology idea isn’t enough if you want to survive. It’s strange that I’m now looking at business models in a consumer product purchasing decision. I guess this just goes to show how educated the consumer is truly becoming.

As I invest in my SmartHome products, I look for companies that are established with multiple streams of revenue, companies like Lutron or Philips. In some cases, like the Soma smart blinds, I really don’t have another option. I’ll probably buy them, but I’m not expecting these to last for the long term. I wish Soma the best of luck, but I don’t see a subscription model, and it’s not like shades are something you replace every year.

Bottom line: enjoy your first-generation wearables now. They might not be around for that much longer.

 

@netmanchris

Amazon S3 Outage: Another Opinion Piece

So Amazon S3 had some “issues” last week, and it’s taken me a few days to put my thoughts together. Hopefully I’ve made the tail end of the still-interested-enough-to-find-this-blog-valuable period.

Trying to make the best of a bad situation: the good news, in my opinion, is that this shows infrastructure people still have a place in the automated, cloudy world of the future. At least that’s something, right?

What happened:

You can read the detailed explanation on Amazon’s summary here.

In a nutshell:

  • There was a small problem
  • They tried to fix it
  • Things went bad for a relatively short time
  • They fixed it

What happened during:

The internet lost its mind. Or, more accurately, some parts of the internet went down, some of them extremely ironic.


Initial thoughts

The reaction to this event was amusing, and it drives home the point that infrastructure engineers are as critical as ever, if not more so, considering the complete lack of architecture that seems to have gone into the majority of these “applications”.

First, let’s talk about availability. Looking at the Amazon AWS S3 SLA, available here, it looks like they did fall below their 99.9% availability SLA. A quick look at https://uptime.is/ shows that for a monthly period, they were aiming for no more than 43m 49.7s of outage. The outage ran somewhere around 6-8 hours, so they clearly blew through that. Looking at the S3 SLA page, it looks like customers might be eligible for 25% service credits. I’ll let you work that out with AWS.
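If you want to double-check numbers like that yourself, the arithmetic is trivial. A quick sketch (uptime.is appears to use the average Gregorian month of roughly 30.44 days, which is the assumption here):

```python
# How much downtime does an availability SLA actually allow per month?
AVG_MONTH_DAYS = 365.25 / 12  # ~30.44 days, the average Gregorian month

def allowed_downtime(sla_percent: float) -> str:
    total_seconds = AVG_MONTH_DAYS * 24 * 3600
    downtime = total_seconds * (1 - sla_percent / 100)
    minutes, seconds = divmod(downtime, 60)
    return f"{int(minutes)}m {seconds:.1f}s"

print(allowed_downtime(99.9))   # 43m 49.8s -- blown away by a 6-8 hour outage
print(allowed_downtime(99.99))  # 4m 23.0s
```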

Don’t “JUST CLICK NEXT”

One of the first things that struck me as funny was that it was the US-EAST-1 region that was affected. US-EAST-1 is the default region for most AWS services; you have to intentionally select another region if you want your service hosted somewhere else. But because it’s easier to just click Next, it seems the majority of people clicked past that part and didn’t think about where they were actually hosting their services, or the implications of hosting everything in the same region and probably the same availability zone. For more on this topic, take a look here.
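For what it’s worth, being explicit costs almost nothing. Here’s a quick boto3 sketch (the bucket name is made up); note that buckets outside us-east-1 need an explicit LocationConstraint:

```python
import boto3

# Pin the client to a region on purpose instead of inheriting the default.
s3 = boto3.client("s3", region_name="us-west-2")

# Hypothetical bucket; outside us-east-1 the location must be stated explicitly.
s3.create_bucket(
    Bucket="my-app-assets-usw2",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
```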

There’s been a lot of criticism of infrastructure people lately, now that anyone with a credit card can go to Amazon, sign up for an AWS account, and start consuming infrastructure. This gets thrown around like it’s actually a good thing, right?

Well, this is exactly what happens when “anyone” does that: you end up with all your eggs in one basket.

“Design your infrastructure for the four S’s: Stability, Scalability, Security, and Stupidity” — Jeff Kabel

Again, this is not an issue with AWS or any cloud provider’s offerings. This is an issue with people who think that infrastructure and architecture don’t matter and can just be “automated” away. Automation is important, but it’s there so that your infrastructure people can free up time from mind-numbing tasks to help you properly architect the infrastructure components your applications rely upon.

Why oh Why oh Why

Why anyone would architect their revenue-generating system on infrastructure that’s only guaranteed to 99.9% is beyond me. The right answer, at least from an infrastructure engineer’s point of view, is obvious, right?

You would use redundant architecture to raise the overall resilience of the application, relying on the fact that it’s highly unlikely you’ll lose the different redundant pieces at the same time. Put simply: what are the chances that two different systems, each guaranteed to a 99.9% SLA, go down at the exact same time?

Doing some really basic probability, and assuming the outages are independent events, we multiply the probability of system 1 being outside its SLA (0.1%, or 0.001 as a fraction) by the same probability for system 2, and we get:

0.001 * 0.001 = 0.000001 probability of both systems going down at the same time.

Or, put another way, 99.9999% uptime. Pretty great, right?

Note: I’m not an availability calculation expert, so if I’ve messed up a basic assumption here, someone please feel free to correct me. Always looking to learn!
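Here’s that same arithmetic as a three-line sanity check:

```python
# Two independent systems, each with a 99.9% availability SLA.
p_down = 1 - 0.999            # 0.001: fraction of time one system is down
p_both_down = p_down ** 2     # 0.000001: both down at the same moment
print(f"Combined availability: {(1 - p_both_down) * 100:.4f}%")  # 99.9999%
```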

So application people made the mistake of just signing over responsibility for their application’s uptime to “the cloud”, and most of them probably didn’t even read the SLA for the S3 service or sit down to think through the failure modes.

Really? We had people armed with an IDE and a credit card move our apps to “the cloud”, and now we wonder why things failed.

What could they have done?

There are a million ways to answer this, I’m sure, but let’s just look at what was available within the AWS list of service offerings.

CloudFront is AWS’s content delivery network. It’s extremely easy to use, easy to set up, and takes care of automatically caching your content at edge locations around the world.

Route 53 is AWS’s DNS service. It allows you to perform health checks and direct DNS queries only to resources that are “healthy”, i.e. actively available.
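As a rough illustration of the Route 53 piece, here’s a boto3 sketch of a primary/secondary failover setup. The zone ID, hostnames, and health-check settings are all placeholders, so treat this as a starting point rather than a recipe:

```python
import boto3

r53 = boto3.client("route53")

# Health check watching the primary endpoint (hostname/path are hypothetical).
check = r53.create_health_check(
    CallerReference="primary-health-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "ResourcePath": "/healthz",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY answers queries while healthy; SECONDARY takes over when it isn't.
r53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": check["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "primary.example.com"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "standby.example.com"}]}},
    ]},
)
```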

There are probably a lot of other options as well, both within AWS and without, but my point is that the applications that went down most likely didn’t bother. Or they were denied the budget to properly architect resiliency into their system.

On the bright side, the latter just had a budget opening event.

Look who did it right

Unsurprisingly, there were companies that weathered the S3 storm like nothing happened. In fact, I was able to sit and binge-watch Netflix while the rest of the internet was melting down. Yes, it looks like it costs them 25% more to run that way, but then again, I had no problems with season 4 of The Big Bang Theory at all last week, so I’m a happy customer.

Companies still like happy customers, don’t they?

The Cloud is still a good thing

I’m hoping that no one reads this as an anti-cloud post. There’s enough anti-cloud rhetoric happening right now, which I suppose is inevitable considering last week’s highly visible outage, and I don’t want to add to it.

What I do want is for people who read this to spend a little bit of time thinking about their applications and the infrastructure that supports them. This kind of thing happens in enterprise environments every day. Systems die. Hardware fails. Get over it, and design your architecture to treat these failures as a foregone conclusion. It IS going to happen; it’s just a matter of when. So shouldn’t we design up front around that?

Alternatively, we could also choose to take the risk for those services that don’t generate revenue for the business. If it’s not making you money, maybe you don’t want to pay for it to be resilient. That’s OK too. Just make an informed decision.

For the record, I’m a network engineer well versed in the arcane discipline of plumbing packets. Cloud and application architectures are pretty far from the land of BGP peering and routing tables where I spend my days. But for the low, low price of $15 and a bit of time on Udemy, I was able to dig into AWS and build some skills that let me look at last week’s outage with a much more informed perspective. To all my infrastructure engineer peeps: I highly encourage you to take the time, learn a bit, and get involved in these conversations at your companies. I’m hoping we can all raise the bar together.

Comments, questions?

@netmanchris

Sometimes Size Matters: I’m sorry, but you’re just not big enough.

So now that I’ve got your attention… I wanted to put together some thoughts around a design principle I call the Acceptable Unit of Loss, or AUL.

Acceptable Unit of Loss: def. A unit to describe the amount of a specific resource that you’re willing to lose

 

Sounds pretty simple, doesn’t it? But what does it have to do with data networking?

White Boxes and Cattle

2015 is the year of the white box. For those of you who have been hiding under a router for the last year, a white box is basically a network infrastructure device, right now limited to switches, that ships with no operating system.

The idea is that you:

  1. Buy some hardware
  2. Buy an operating system license (or download an open source version)
  3. Install the operating system on your network device
  4. Use DevOps tools to manage the whole thing

Add ice, shake, and IT operational goodness ensues.

Where’s the beef?

So where do the cattle come in? Pets vs. Cattle is something you can research elsewhere for a more thorough treatment, but in a nutshell, it’s the idea that pets are things you love and care for and let sleep on the bed and give special treats to at Christmas. Cattle, on the other hand, are things you give a number, feed from a trough, and kill off without remorse if a small group suddenly becomes ill. You replace them without a second thought.

Cattle vs. Pets is a way to describe the operational model that’s been applied to server operations at scale. The metaphor looks a little like this:

The servers are cattle. They get managed by tools like Ansible, Puppet, Chef, SaltStack, Docker, Rocket, etc., which at a high level all allow a new instance of a server to be stood up on a very specific configuration with little to no human intervention. Fully orchestrated.

Your server starts acting up? Kill it. Rebuild it. Put it back in the herd.

Now, one thing a lot of enterprise engineers seem to be missing is that this operational model is predicated on your application having a well-thought-out scale-out architecture, one that allows the distributed application to continue to operate when the “sick” servers are destroyed and to seamlessly integrate the new servers into the collective without a second thought. Pretty cool, no?

 

Are your switches Cattle?

So this brings me to the Acceptable Unit of Loss. I’ve had a lot of discussions with enterprise-focused engineers who seem to believe that white box and DevOps tools are going to drive down all their infrastructure costs and solve all their management issues.

“It’s broken? Just nuke it and rebuild it!” “It’s broken? Grab another one, they’re cheap!”

For me, this particular argument only holds up if your AUL metric is big enough.

To hopefully make this point I’ll use a picture and a little math:

 

Consider the following hardware:

  • HP C7000 Blade Server Chassis – 16 Blades per Chassis
  • HP 6125XLG Ethernet Interconnect – 4 x 40Gb Uplinks
  • HP 5930 Top of Rack Switch – 32 x 40G ports, but from the data sheet: “40GbE ports may be split into four 10GbE ports each for a total of 96 10GbE ports with 8 40GbE Uplinks per switch.”

So let’s put this together

[Hand-drawn diagram: dual HP 5930 ToR switches connecting six C7000 blade chassis]

So we’ll start with

  • 2 x HP 5930 ToR switches

For the math, I’m going to assume dual 5930s with dual 6125XLGs in each C7000 chassis, and we’ll assume all links are redundant, which makes the math a little easier. (We’ll only count this with 1 x 5930, cool?)

  • 32 x 40Gb ports on the HP 5930 – 8 x 40Gb ports reserved for uplinks = 24 x 40Gb ports for connecting to the HP 6125XLG interconnects in the C7000 blade chassis.
  • 24 x 40Gb ports from the HP 5930 allow us to connect 6 x 6125XLGs using all four 40Gb uplinks on each.

Still with me? 

  • 6 x 6125XLGs means 6 x C7000, which translates into 6 x 16 = 96 physical servers.

Just so we’re all on the same page: if my math is right, we’ve got 96 physical servers in six blade chassis, connected through the interconnects at 320Gb (4 x 40Gb x 2, remember the redundant links?) to the dual HP 5930 ToR switches, which in turn have 640Gb of bandwidth out to the spine (8 x 40Gb uplinks from each of the two HP 5930s).

If we go with a conservative VM-to-server ratio of 30:1, that gets us to 2,880 VMs running on our little design.
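Here’s the same back-of-the-napkin math in code form, so you can swap in your own port counts and consolidation ratios:

```python
# Back-of-the-napkin blast radius for one ToR pair (numbers from above).
ports_per_5930 = 32       # 40GbE ports on each HP 5930
spine_uplinks = 8         # 40GbE ports reserved for the spine
uplinks_per_6125xlg = 4   # 40GbE uplinks per blade interconnect
blades_per_c7000 = 16     # servers per blade chassis
vms_per_server = 30       # conservative consolidation ratio

server_ports = ports_per_5930 - spine_uplinks   # 24 ports left for chassis
chassis = server_ports // uplinks_per_6125xlg   # 6 chassis
servers = chassis * blades_per_c7000            # 96 physical servers
vms = servers * vms_per_server                  # 2,880 VMs

print(f"{chassis} chassis -> {servers} servers -> {vms} VMs behind one ToR pair")
```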

How much can you lose?

So now is where you ask the question:  

Can you afford to lose 2,880 VMs? 

According to the cattle & pets analogy, cattle can be replaced with no impact on operations because the herd will move on without noticing, i.e. the Acceptable Unit of Loss is small enough that you’re still able to get the required value from the infrastructure assets.

The obvious first objection I’m going to get is

“But wait! There are two REDUNDANT switches right? No problem, right?”

The reality of most networks today is that they’re designed to maximize network throughput and efficient usage of all available bandwidth. MLAG, in this case brought to you by HP’s IRF, allows you to bind interfaces from two different physical boxes into a single link aggregation pipe. (Think vPC, VSS, or whatever other MLAG technology you’re familiar with.)

So I ask you: what are the chances that you’re running the unit at below 50% of the available bandwidth?

Yeah… I thought so.

So the reality is that when we lose that single ToR switch, we’re actually going to start dropping packets somewhere, because you’ve been running the system at 70-80% utilization to maximize the value of those infrastructure assets.

So what happens to a TCP-based application when we start to experience packet loss? For a full treatment of the subject, feel free to go check out Terry Slattery’s excellent blog on TCP performance and the Mathis Equation. For those of you who didn’t follow the math, let me sum it up for you.

Really Bad Things.  

On a ten gig link, bad things start to happen at 0.0001% packet loss. 
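If you want to see where that number comes from, the Mathis et al. approximation says achievable TCP throughput is roughly MSS / (RTT * sqrt(p)), where p is the loss rate. A quick sanity check solving for p on a 10Gb link (the MSS and RTT values here are my assumptions):

```python
link_bps = 10e9    # 10GbE
mss_bytes = 1460   # standard Ethernet MSS (assumption)
rtt_s = 0.001      # 1 ms RTT, plausible intra-DC latency (assumption)

# Mathis et al.: throughput ~= MSS / (RTT * sqrt(p)).
# Set throughput equal to the link rate and solve for the loss rate p.
link_Bps = link_bps / 8
p = (mss_bytes / (rtt_s * link_Bps)) ** 2
print(f"TCP stops filling the pipe at ~{p * 100:.4f}% loss")  # ~0.0001%
```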

Are your Switches Cattle or Pets?

So now that we’ve done a bit of math and mixed some metaphors, we get to the real question of the day: are your switches cattle? Or are they pets? I would argue that if you’re measuring your AUL in anything less than 2,000 servers, then your switches are probably pets. You can’t afford to lose even one without bad things happening to your network, and more importantly to the critical business applications being accessed by those pesky users. Did I mention they’re the only reason the network exists?

Now, this doesn’t mean that you can’t afford to lose a device. It’s going to happen. Plan for it. Have spares, support contracts, whatever. But my point is that you probably won’t be able to go with the disposable infrastructure model that many of the engineers I’ve talked to in recent months have suggested as the reason they want white boxes in their environments.

Wrap up

So are white boxes a bad thing if I don’t have a ton of servers and a well-architected distributed application? Not at all! There are other reasons why white box could be a great choice for comparatively smaller environments. If you’ve got the right people internally with the right skill set, there are some REALLY interesting things you can do with a white box switch running an OS like Cumulus Linux. For some ideas, check out this Software Gone Wild podcast with Ivan Pepelnjak and Matthew Stone.

But in general, if your metric for Acceptable Unit of Loss is not measured in data centres, rows, pods, or entire racks, you’re probably just not big enough.

 

Agree? Disagree? Hate the hand drawn diagram? All comments welcome below.

 

@netmanchris

Quantified Self Meets Big Data: A Meeting of the Minds

As I’ve written about before, I’m diagnosed ADHD. I’m not one of those “squirrel!”-joking guys who is “sure” they have ADHD but has never been tested. I’ve been on meds and done a ton of reading over the years to develop coping strategies for the challenges presented by the different way my brain works, trying to mitigate the drawbacks and take full advantage of all the gifts that come with ADHD.

 
One of the coping strategies I’ve always been very interested in is biofeedback. Imagine if you could actually “see” what your brain is doing. Imagine that you could actually “watch” your attention lapse in near real time! How amazing would that be? Imagine the insights that could be derived, and the potential to identify triggers for attention deficits. (For the record, I’ve not struggled so much with an inability to focus as with an inability to SHIFT focus when I need to.)
 

Enter the year of the portable EEG.

 
2014 is the year of the portable EEG. In 2013, there were at least three different projects that I’m aware of focused on bringing brain science to the masses.
 
For the record, I’m not a brain scientist, and any assessments I make here are based PURELY on my own very limited ability to judge.
 

Emotiv Insight:


This seems to be the most technically advanced of the three projects. The Kickstarter project has been slow, to say the least; they’ve had a few setbacks over the course of the project. But they have been fairly consistent with feedback, and the company seems to have more participation in the academic community.

I can’t make any judgement call on the actual device, as they are behind on delivery (April 2013 was the estimated delivery date). But I have high expectations for this one.
 
The SDK will probably be quite mature, as I’m pretty sure they’ll be leveraging tech from their earlier products.
 
In the latest update, they mentioned a company called Neurospire, which currently uses EEG data for marketing purposes (very cool concept!). It turns out they’re changing their game a bit to something closer to my heart: they just won their first round of funding to develop a biofeedback application aimed directly at aiding children with ADHD. I’m very excited to see what they come up with, and whether it’s something that can help my kids as they learn to deal with the pros and cons of their differences.
 

Melon

 
Melon seems to be more of a fun project. The science and tech seem to be there, but the focus seems to be on bringing the fun. Based on Kickstarter backers’ feedback, they’ve made some adjustments to the original design to allow the headband to adjust from kid-sized to gargantuan-cranium-sized. The application is also more focused on fun, or so I’ve been led to believe: the app measures your focus, and IF you can stay focused, it will let you fold origami animals. Sounds kind of funny, but I can tell you my kids are actually excited about this one.
 
Imagine… Folding. Paper. With. Your. Mind.    
 
Yeah. I know, right?
 
The SDK is also an unknown at this point, as it’s still listed as “available soon”.
 
Looking forward to this one, which is also on the late-shipping train. The estimated ETA was November 2013, but according to the latest update, we should be seeing it in late September.
 

Muse:  

 
I actually got turned on to this one by @beaker. They went the indiegogo.com way rather than kickstarter.com. I didn’t end up getting in on the funding for this one, so no deal for me. But… they actually shipped.
 
Yup. I put in an order and it arrived at my door two days later. InteraXon, the company that makes Muse, is actually out of Toronto, so this is one of the RARE occasions where I haven’t had to wait or pay extra for shipping to Canada! (Woo-hoo!)
 
This product just started shipping, but they already have an SDK in place, as well as an app, titled Calm, for both iOS and Android. Being an Apple guy, I tried out the iOS version and was actually pretty impressed. Clean interface, simple for now, but the concept works. In a nutshell, the weather gets calmer as you get calmer.
 
The hardware seems solid. There’s one sensor that gives me a little bit of trouble, but I’m not sure if that just needs more practice or if something is actually wrong with the unit. Only time will tell, I guess.
 
The SDK seems not too bad either. I had some trouble getting the Muse to connect on OS X, but that’s MOST likely because I’m running a beta of a pre-release version of a certain fruity OS.
 
The Windows and OS X installs were pretty similar, to be honest. The SDK is Python-based and requires Python 2.7 (WHY NOT Python 3????) and a few typical libraries (numpy and SciPy, from memory). It’s pretty well documented on the choosemuse.com website.
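For the curious, here’s a rough sketch of what reading the raw stream can look like. The official SDK is Python 2.7, but the data arrives as plain OSC messages from the muse-io bridge, so a Python 3 listener using the third-party python-osc library works too. The port and the /muse/eeg path are from memory, so treat them as assumptions:

```python
# Minimal OSC listener for Muse EEG data. Assumes muse-io is bridging the
# headband and streaming to this port, e.g.:
#   muse-io --osc osc.udp://localhost:5000
from pythonosc import dispatcher, osc_server

def eeg_handler(address, *channels):
    # /muse/eeg carries one float per electrode (four on the original Muse)
    print(address, channels)

d = dispatcher.Dispatcher()
d.map("/muse/eeg", eeg_handler)

server = osc_server.ThreadingOSCUDPServer(("localhost", 5000), d)
print("Listening for Muse OSC data on port 5000...")
server.serve_forever()
```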
 

Big Data meeting of the minds.

 
One of the truly cool things the quantified-self movement brings is a sudden influx of contributors to datasets. The Calm application for the Muse allows the user to share their data, in a non-identifiable way, back to the InteraXon servers, along with the answers to the obvious demographic questions that get asked as part of the initial registration.
 
Imagine how Big Data algorithms can be applied once enough of us start to donate the output of our sessions, along with enough demographic information to allow data scientists to create K-plots, run Bayesian functions, and start pulling out some interesting observations.
 
Imagine the astonishing insights Bayesian algorithms could suddenly pull out when you combine the EEG readings from the Insight with the activity levels and sleep patterns from the Fitbit, and throw in a little dash of air quality and noise pollution from the Sense. Mix it up in “the cloud”, start comparing our sanitized, non-personally-identifiable data with other people’s sanitized, non-personally-identifiable data from similar demographics, and we start to have enough data to push the envelope of our understanding of our behaviours.
 
The scariest thing for me is that we might actually be able to quantify what normal actually is. 🙂
 
OK… so maybe that last one is a bit of a stretch, but it’s certainly going to be interesting watching what happens over the next few years as this data starts to coalesce. Data gravity starts to kick in, and suddenly we have a large enough data set for things to get REALLY interesting.
 
Anyone else out there donating data? Scared? Paranoid? Anyone else looking forward to it?