Dipping my toes in the IoT pool: Microsoft DevKit IoT Board

In my personal life, I’ve jumped into the SmartHome pool head first, but I’ve been really reluctant to blur those lines into my professional life. Recently, I saw something that changed all that: the Microsoft IoT DevKit board.


What is it?

This is a Microsoft hardware product that allows developers to jump into the IoT pool. Specifically, the Microsoft Azure IoT Hub pool. It’s a very capable board with multiple onboard sensors.

What sensors, you ask?

  • Humidity
  • Air Pressure
  • Temperature
  • Magnetometer
  • Motion
  • Microphone

Basically, it’s a really good sensor package that can grab the majority of the physical measurements we want to look at in the IoT world.

What do I do with it?

Right now, this is really just a tech toy for me. I have no specific projects that I’m trying to achieve. Rather, this is a device that I’m using to really understand HOW the IoT ecosystem works, and to help ensure my employability in the years to come. So I don’t have any specific goals, but that’s really OK, because Microsoft has been wonderful enough to supply the Microsoft IoT DevKit page over on GitHub, which has a few different projects that let me grow my skills across several IoT use cases until my own specific use case pops into my head.

What Projects are available?

The great thing about the DevKit project page is that there are a bunch of projects that are intended to take us from zero to hero in the IoT developer world. Or at least from noob to semi-competent. 

They range from basic projects like Connect to Azure IoT Hub and Remote Monitoring to more advanced projects like DevKit Translator, which leverages Microsoft’s Bing to create a speech translator, or Shake, Shake for a Tweet, which integrates the motion sensor with Twitter.

There are ten projects in all and I wouldn’t be surprised if more projects are released in the future.

What Next?

This is an interesting project for me. It’s not Python, it’s not really networking, and I’ve never used any Microsoft Azure services before. So this is going to be an interesting ride.

My plan is to start with the first tutorial, and see if I can bang my head against it until it works. 🙂  Look for that blog soon!

Interested in joining me on the journey? I picked up a board for $39 from DFRobot and it arrived in just a couple of days. If anyone wants to jump in and compare notes, please feel free to reach out. I’m more than happy to share what I’ve learned.

 

@netmanchris

 


Hey Alexa, Turn my lab on!

TL/DR: I put together a custom Alexa skill so I can turn the switches and routers in my lab on and off, as shown in the video here. Feels pretty great.

 

As most of my Twitter followers have noticed, I’ve been doing a lot of Home Automation, mostly with Apple #HomeKit. But I also picked up an Amazon Echo Dot because… well, why not?

One of the great things about the digital voice assistant from Amazon is that they have created an extensible framework that enables those with a little bit of coding skill to add to Mrs. A’s already impressive array of abilities.

The Amazon Alexa developer page is pretty impressive. There’s a ton of information and tutorials there, as well as an SDK and code examples in Node.js. I’m almost exclusively a python coder at this point, so I decided to look for something a little more familiar and came upon this.

Flask-Ask

Flask-Ask is a Flask extension that makes building Alexa skills for the Amazon Echo easier and much more fun.

Essentially, John Wheeler took the Flask WSGI (web) framework and made it super easy to create Amazon Alexa skills using this familiar library. I’ve used Flask in the past for a few projects, so this was a no-brainer for me.
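To give a sense of how little code is involved, here’s a minimal sketch along the lines of the Flask-Ask hello-world. The intent name and the firstname slot are just illustrative; the real names have to match whatever you define in the Amazon developer portal.

```python
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

# 'HelloIntent' and its 'firstname' slot are placeholder names for this
# example; they must match the intent schema defined on the Amazon side.
@ask.intent('HelloIntent')
def hello(firstname):
    return statement('Hello, {}'.format(firstname))

if __name__ == '__main__':
    app.run(debug=True)
```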

John also put together a set of tutorials here which can be used to jumpstart the Alexa skill development process. There’s also a Flask-Ask quickstart on the Amazon developer blog, which pointed me towards ngrok, and that came in really handy!

*Ngrok allows you to create secure tunnels to a local host. You run ngrok with the port number you want to expose, and it automatically exposes that local service through a public URL on the ngrok domain. It’s really, really cool.

The Project

Like many of us, I have a physical lab in my house from my CCIE studies. As well, specializing in network management over the years has required access to physical gear in a lot of instances. Keeping that gear powered on full time is out of the question because of the cost and power drain. As I’m sure you can imagine, going back and forth to turn things on and off gets old real quick.

To address that problem, I picked up a couple of intelligent PDUs on eBay. There are many “smart” PDUs out there, and I happen to have a set of Server Technology units that lets me control each socket on a 16-port power bar. Pretty cool, right? No more walking to the garage, which is a good thing when you’re trying to focus on a problem.

So things are heading in the right direction; I can pop over to the local web interface of my PDU and turn my devices on and off. That’s nice… But all the home automation stuff I’ve been doing led me to wonder…

Why can’t I just ask for the device to be turned on?

I can ask Siri or Alexa to turn on the lights or adjust the temperature of my house. I can ask them about the weather or to check my calendar. There’s no reason why I shouldn’t be able to do the same with my lab gear.

So I decided to make that a reality.

What’s not covered in this blog

The one step which is not covered in this blog is writing the pyservertech library, which I built on top of the pysnmp library. Essentially, I walked the MIBs until I found how to gather the info I needed and figured out which specific OID I needed to set to turn an individual power socket on or off. I might do a blog on that specific piece too, but for now, I’m trying to focus on the Alexa piece.
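For the curious, the heart of it is a single SNMP SET with pysnmp, something like the rough sketch below. The PDU address, community string, and the outlet-control OID are placeholders from my notes rather than the finished pyservertech library, so double-check them against your own PDU’s MIB before trusting them.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity,
                          Integer, setCmd)

PDU_HOST = '192.168.1.50'       # placeholder: address of the smart PDU
COMMUNITY = 'private'           # placeholder: SNMP write community
# Placeholder: outlet-control OID from the PDU's MIB; the real value
# depends on the tower/infeed/outlet indexing of your unit.
OUTLET_CONTROL_OID = '1.3.6.1.4.1.1718.3.2.3.1.11.1.1'

def power_on(socket_number):
    """Send an SNMP SET to switch a single power socket on (value 1 = on)."""
    oid = '{}.{}'.format(OUTLET_CONTROL_OID, socket_number)
    error_indication, error_status, _, _ = next(
        setCmd(SnmpEngine(),
               CommunityData(COMMUNITY),
               UdpTransportTarget((PDU_HOST, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(oid), Integer(1))))
    if error_indication or error_status:
        raise RuntimeError('SNMP SET failed: {}'.format(
            error_indication or error_status.prettyPrint()))
```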

If there’s interest, please let me know in comments or on twitter and I’ll prioritize the SNMP set blog. 🙂 

Building the Alexa Skill

Alexa skills are a combination of three components:

  • Flask-Ask – This is the actual skill code, and it includes the templates file shown below
  • Intent_Schema – Kinda obvious, but this includes the various intents that you’re going to use in your skill
  • Sample Utterances – A collection of the various verbal phrases and how they map to the intents.

I’ll do my best to connect these in the code below, but I’d really recommend going through a couple of the tutorials above and playing around with the examples to build some intuition for how these components connect.

The code below lets a user do the following using Amazon’s Alexa voice assistant.

  1. Ask Alexa to open the Lab skill (Lab is what I called it)
  2. Alexa asks the user “Welcome to the lab. I’m going to ask you which plug you want me to turn on. Ready?”
  3. User responds with “Yes” or “Sure”
  4. Alexa asks the user “Please tell me which power socket you would like to turn on?”
  5. User responds with a number which is the power socket they would like to turn on
  6. Alexa decodes the response and sends the number in a JSON payload to the local Flask server
  7. The Python code takes the number from the JSON payload and uses it as input to the power_on() function.
  8. The power_on() function sends an SNMP SET command to the appropriate power socket.
  9. Device powers on. Alexa says “I’ve turned the power socket on.”
  10. I don’t walk to the garage.

Now that we understand how the code is supposed to work, let’s take a look at the individual pieces and how they fit together.

Alexa Skill

This is the Python code that you’ll run on your local machine. It contains only a portion of the logic of the “program”, as Amazon is really doing the majority of the lifting on their side, handling the speech recognition and returning the appropriate data in a JSON payload.
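The skill itself boils down to a handful of Flask-Ask handlers. The sketch below is a simplified version for illustration: the intent names (“YesIntent”, “SocketOnIntent”), the “socket” slot, and the power_on() import from my pyservertech module are my own naming choices, not anything Amazon or Flask-Ask mandates.

```python
from flask import Flask, render_template
from flask_ask import Ask, question, statement

# power_on() comes from my pyservertech module (not covered in this post)
from pyservertech import power_on

app = Flask(__name__)
ask = Ask(app, '/')

@ask.launch
def start_skill():
    # "Welcome to the lab. I'm going to ask you which plug you want
    #  me to turn on. Ready?"
    return question(render_template('welcome'))

@ask.intent('YesIntent')
def ask_which_socket():
    # "Please tell me which power socket you would like to turn on?"
    return question(render_template('which_socket'))

# The 'socket' slot arrives as a string in the JSON payload; convert
# turns it into an int before it reaches the handler.
@ask.intent('SocketOnIntent', convert={'socket': int})
def turn_socket_on(socket):
    power_on(socket)
    # "I've turned the power socket on."
    return statement(render_template('powered_on'))

if __name__ == '__main__':
    app.run(debug=True)
```

Run it locally, point ngrok at the Flask port, and paste the resulting HTTPS URL into the skill’s endpoint configuration in the developer portal.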

Templates file

This file contains the various phrases that Alexa is going to speak on behalf of your application. You can see we’ve only got a few different phrases.
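Flask-Ask reads these from a templates.yaml file that sits next to the skill code. The keys below line up with the render_template() calls in the sketch above, and the phrases are the ones from the dialog flow described earlier:

```yaml
welcome: Welcome to the lab. I'm going to ask you which plug you want me to turn on. Ready?
which_socket: Please tell me which power socket you would like to turn on?
powered_on: I've turned the power socket on.
```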

Intent_Schema

This file gets loaded on the Amazon website. Using the developer interface, you paste the JSON that defines the intent schema directly into the intent schema field on the Interaction Model page.
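For the sketch above, the intent schema would look something along these lines, with the built-in AMAZON.NUMBER slot type handling the spoken socket number (again, the intent and slot names here are my own placeholders):

```json
{
  "intents": [
    { "intent": "YesIntent" },
    {
      "intent": "SocketOnIntent",
      "slots": [
        { "name": "socket", "type": "AMAZON.NUMBER" }
      ]
    }
  ]
}
```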


Sample Utterances File

Just like the intent schema, the sample utterances are also loaded directly into the Amazon developer portal, in the Interaction Model for this specific skill.
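The utterances map spoken phrases to those same intents, one phrase per line with slot names in curly braces. Something like this covers the short dialog above:

```
YesIntent yes
YesIntent sure
SocketOnIntent {socket}
SocketOnIntent turn on socket {socket}
SocketOnIntent power socket {socket}
```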


What’s next

This is just the start of this skill. All it does right now is turn things on, which is cool, but I want more. Just off the top of my head, here are some of the things I’d like to do:

  • Turn individual devices on or off
  • Turn individual devices on or off by name: “Alexa, ask the lab to turn on the HPE 2920 switch!”
  • Turn groups of devices on or off: “Alexa, ask the lab to turn on the Juniper branch!”
  • Request data from the PDUs: “Alexa, ask the lab how many devices are currently turned on?” or “Ask the lab how much power is currently being used.”

As you can imagine, this would require a lot more code and logic to accomplish all these goals. Definitely something I’m going to pursue, but I’m hoping that the simple example above helps to inspire someone else in their journey down this path.

Questions? Comments? You know what to do…

@netmanchris