Come quietly or there will be… trouble.

The day I decided to engage an agent for my house

Marcello Barile
Published in CloudBoost
7 min read · Dec 22, 2017

--

If that's what you're thinking: no no no, I didn't really ask RoboCop to secure my nest… nor do I want to talk about "Artificial Intelligence", so I will label the software I'm going to describe with a pretty old-fashioned term, an "Agent".

Yet another [Siri, Alexa, Google Home]?

Not really; I am a software developer and, since I was a young boy, I have had this kind of compulsion to build things on my own (within certain limits, of course); that's why, one day, I decided to start dedicating a (small) part of my spare time to this ambitious "always in beta" project.

HaaS — Home as a Service

Every day we all perform lots of very simple tasks and sub-tasks, so many that we often end up losing focus on more important things…

… remembering events, making appointments, setting timers and alarms, or just recalling which trash to bring out… these are all things that we can easily delegate to someone else; and if not to a human, why not to a piece of software?

Here comes the home assistant: in my case, the so-called HaaS (which only speaks Italian so far… sorry for that).

Talking about details, what can I use it for?

Calendar and time manager

I can ask it what the events or appointments are for a given date, past or future… or I can use it to set a timer and get a reminder back as soon as it expires.

Me: remind me of the events for tomorrow
Home: there are no events scheduled for tomorrow.
Me: set a timer of 2 minutes
Home: Ok, I’ll call you back as soon as it expires
Home: Time out. 2 minutes have passed.
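As a rough idea of how the timer part could be wired up, here is a minimal TypeScript sketch; the event name and the `speak` helper are hypothetical, not the actual HaaS code:

```typescript
import { EventEmitter } from 'events';

// Hypothetical event bus and text-to-speech helper, just for illustration.
const bus = new EventEmitter();
const speak = (text: string) => console.log(`Home: ${text}`);

// Start a timer and emit an event when it expires.
function setTimer(minutes: number): void {
  speak("Ok, I'll call you back as soon as it expires");
  setTimeout(() => bus.emit('timer:expired', { minutes }), minutes * 60 * 1000);
}

bus.on('timer:expired', ({ minutes }: { minutes: number }) => {
  speak(`Time out. ${minutes} minutes have passed.`);
});

setTimer(2);
```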

Morning wakeup

It wakes me up every morning, reminding me of the events for the day and giving me some information about the weather… it is also able to turn on the lights according to the sunrise, and it can optionally play some on-demand music or a radio station.
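A dependency-free sketch of how such a daily routine could be scheduled (the wake-up hour and the routine body are made up for the example):

```typescript
// Schedule a callback for the next occurrence of a given hour (local time).
function scheduleDaily(hour: number, task: () => void): void {
  const now = new Date();
  const next = new Date(now);
  next.setHours(hour, 0, 0, 0);
  if (next <= now) next.setDate(next.getDate() + 1); // already past today, go to tomorrow
  setTimeout(() => {
    task();
    scheduleDaily(hour, task); // re-arm for the following day
  }, next.getTime() - now.getTime());
}

// Hypothetical wake-up routine: events, weather, lights, music.
scheduleDaily(7, () => {
  console.log('Good morning! Here are your events and the weather…');
  // turnOnLights(); playRadio();  // delegated to other modules via events
});
```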

Weather notifier

If it’s going to rain or snow… well… I don’t need to look out the window anymore! ;-)

Home: today it might rain.
Home: it will definitely rain today.
Me: How is the weather?
Home: It is 17°C outside. Humidity is at 55% and the wind is blowing North at 42 km/h.
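A possible shape for the rain notifier, assuming a generic forecast provider; the `fetchForecast` function, its fields and the probability thresholds are placeholders, not a real API:

```typescript
// Placeholder forecast provider; a real implementation would call a weather API.
interface Forecast {
  rainProbability: number; // 0..1
  temperature: number;     // °C
}

async function fetchForecast(): Promise<Forecast> {
  // Stubbed values for the example.
  return { rainProbability: 0.85, temperature: 17 };
}

async function checkWeather(notify: (msg: string) => void): Promise<void> {
  const { rainProbability } = await fetchForecast();
  if (rainProbability > 0.8) {
    notify('it will definitely rain today.');
  } else if (rainProbability > 0.4) {
    notify('today it might rain.');
  }
}

checkWeather((msg) => console.log(`Home: ${msg}`));
```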

Music handler

It is pleased to turn on some music for me, to increase or decrease the volume, and to stop it when asked to.

Me: put on some jazz
Home: Ok.
Me: turn up the volume
Home: Ok.
Me: stop the music.
Home: Done. Tell me when I can resume it.

Garbage collection reminder

This one is very useful, since I kept forgetting which bin to bring out for the garbage collection. Now it's a no-brainer.

Home: Hi, I remind you that today you can throw out the compostables.
Me: what should I throw out today
Home: today you can throw out compostables, paper and cardboard.
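Under the hood this boils down to a small lookup table keyed by weekday; a minimal sketch (the collection schedule below is invented for the example):

```typescript
// Invented schedule: weekday (0 = Sunday) -> bins collected the next morning.
const schedule: Record<number, string[]> = {
  0: ['compostables'],
  2: ['compostables', 'paper and cardboard'],
  4: ['plastic and metal'],
};

function whatToThrowOut(date: Date = new Date()): string {
  const bins = schedule[date.getDay()] ?? [];
  return bins.length > 0
    ? `today you can throw out ${bins.join(', ')}.`
    : 'there is no collection scheduled for today.';
}

console.log(`Home: ${whatToThrowOut()}`);
```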

Security agent

I can ask it for some statistics about the state of the entire house, getting back the (total) percentage of movement, the noise level, the temperature and the humidity percentage.

Me: how is it going?
Home: I don't see much movement (0.00%) and the environment is silent (48.1).
Temperature is at 20.6 °C, humidity is at 51.1% and the perceived temperature is 20.05 °C.

If something happens while the system is armed and I'm not around, I'll get a warning message with a picture of the event (while a video is saved for later analysis).

Hey, an alarm is going off. A movement quantity of 17.2414% has been recorded in the kitchen.
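Conceptually the alarm path is simple: when the system is armed and the measured motion in a room exceeds a threshold, a warning event is emitted with the latest snapshot attached. A hedged sketch of that flow (event names, threshold and snapshot handling are illustrative, not the real implementation):

```typescript
import { EventEmitter } from 'events';

const bus = new EventEmitter();
let armed = true;
const MOTION_THRESHOLD = 10; // percent, illustrative value

interface MotionReading {
  room: string;
  motionPercent: number;
  snapshotPath: string; // picture grabbed from the IP camera
}

// A camera/CV module would emit these readings; here we just listen.
bus.on('motion:reading', (reading: MotionReading) => {
  if (armed && reading.motionPercent > MOTION_THRESHOLD) {
    bus.emit('alarm', reading);
  }
});

bus.on('alarm', ({ room, motionPercent, snapshotPath }: MotionReading) => {
  // In the real system this would go out as a message with the picture attached.
  console.log(
    `Hey, an alarm is going off. A movement quantity of ${motionPercent}% ` +
    `has been recorded in the ${room}. Snapshot: ${snapshotPath}`
  );
});

// Simulated reading for the example.
bus.emit('motion:reading', { room: 'kitchen', motionPercent: 17.2414, snapshotPath: '/tmp/kitchen.jpg' });
```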

It knows the difference between the rooms of the house, so I can ask it to show me a specific room or zone.

Me: tell me what’s happening in the veranda
Home: Ok.

Last but not least, and this is really an experiment, I can ask it to analyze the environment… in the example below it is telling me that there is a man (me) standing in a room.

Me: analyze the kitchen
Home: Ok, let me think about it… I see a man standing in a room

The last piece of the puzzle is the web app: it shows the video streams, the temperature of the system, the memory usage and some environmental data, such as whether the lights are on, whether it's time to sleep, whether an alarm is in progress, and the temperature and humidity of the environment.

Microclimate surveillance

Some parameters are constantly monitored and, if needed, alarms are dispatched; e.g. if it's too cold, too warm, or if some harmful substance is detected in the air.

Every hour a log is stored for further analysis; in the case below I was looking at the temperature and humidity readings of the last hour (the plots have to be improved quite a bit). Other logs are saved for light status, noise level, motion level and "sleep times" (basically when there is no noise and no light for a certain period of time).
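As an illustration of the threshold checks and the hourly logging, here is a sketch; the comfort limits, the stubbed reading and the in-memory log are placeholders (the real system persists its logs):

```typescript
interface ClimateReading {
  temperature: number; // °C
  humidity: number;    // %
  gasAlert: boolean;   // harmful substance detected by the gas sensor
}

// Illustrative comfort limits, not the real configuration.
const LIMITS = { minTemp: 16, maxTemp: 28 };

function checkClimate(reading: ClimateReading, alarm: (msg: string) => void): void {
  if (reading.temperature < LIMITS.minTemp) alarm('it is too cold inside.');
  if (reading.temperature > LIMITS.maxTemp) alarm('it is too warm inside.');
  if (reading.gasAlert) alarm('a harmful substance has been detected in the air.');
}

// Hourly log, kept in memory here; the real system persists it (e.g. to MongoDB).
const hourlyLog: Array<ClimateReading & { at: string }> = [];
setInterval(() => {
  const reading: ClimateReading = { temperature: 20.6, humidity: 51.1, gasAlert: false }; // stubbed
  hourlyLog.push({ ...reading, at: new Date().toISOString() });
  checkClimate(reading, (msg) => console.log(`Home: ${msg}`));
}, 60 * 60 * 1000);
```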

… and if I don’t want to check the web app, I can always interact with it via Telegram.
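For the Telegram side, a library such as node-telegram-bot-api is enough; a minimal sketch of wiring a bot to the agent (the command handling below is just a placeholder, not the real intent routing):

```typescript
import TelegramBot from 'node-telegram-bot-api';

// Bot token issued by BotFather; read from the environment.
const bot = new TelegramBot(process.env.TELEGRAM_TOKEN as string, { polling: true });

bot.on('message', async (msg) => {
  const text = msg.text ?? '';
  // Placeholder: the real system would route the text to the agent's intent handling.
  if (/what should I throw out/i.test(text)) {
    await bot.sendMessage(msg.chat.id, 'today you can throw out compostables, paper and cardboard.');
  } else {
    await bot.sendMessage(msg.chat.id, `I received: "${text}"`);
  }
});
```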

Human caring

I'm currently working on connecting the system to biosensors in order to track and prevent harmful events for family members. (More details to come…)

Let’s talk about techie things

The hardware

The key feature of the whole system is very low energy consumption combined with acceptable computational power; this is the main goal of every IoT system, together with stability and durability.

The heart is an NVIDIA Jetson TK1, a SoC (System on Chip) equipped with a 192-CUDA-core Kepler GPU and a quad-core ARM Cortex-A15 CPU (full specs here).

All sensors (Grove sensors, more details here) are connected to a Raspberry Pi 2 through a GrovePi module (made by Dexter Industries, for whom I contribute by writing JavaScript libraries; click here for further information).

Cameras are standard IP cameras (TP-Link actually).

Talking about performance, the RPi is enough for the tasks it has to do; the TK1, instead, is starting to run out of headroom, especially because NVIDIA has dropped the development of its customized operating system (Linux4Tegra) for this board, which means some interesting new features introduced by OpenCV 3.0 and by the latest versions of TensorFlow are missing.
Having a Jetson TX2 would be a boost for my system :-)

The software

The current configuration can be described with three tiers…

First level: data collector
Stack:
Node.js for sensors (RPi), C++ for Computer Vision (Jetson TK1).

Second level: orchestrator and data handler
Stack:
Node.js, MongoDB (the Node.js orchestrating app runs on the Jetson TK1 and exposes a REST API, the MongoDB instance is hosted on a VPS).

Third level: webapp and user interaction
Stack:
Node.js, Standard JS (the Node.js front-end app runs on the VPS and interacts with the Jetson TK1 through its API).
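To make the tiers concrete, here is a hedged sketch of the second level exposing a tiny REST API with Express and persisting readings to MongoDB; the route, database and collection names are made up for the example:

```typescript
import express from 'express';
import { MongoClient } from 'mongodb';

async function main() {
  // The MongoDB instance lives on the VPS; connection string comes from the environment.
  const mongo = await MongoClient.connect(process.env.MONGO_URL as string);
  const readings = mongo.db('haas').collection('readings');

  const app = express();
  app.use(express.json());

  // First level (RPi data collector) POSTs sensor readings here.
  app.post('/api/readings', async (req, res) => {
    await readings.insertOne({ ...req.body, receivedAt: new Date() });
    res.sendStatus(201);
  });

  // Third level (web app on the VPS) asks for the latest known state.
  app.get('/api/status', async (_req, res) => {
    const latest = await readings.find().sort({ receivedAt: -1 }).limit(1).toArray();
    res.json(latest[0] ?? {});
  });

  app.listen(3000, () => console.log('Orchestrator API listening on :3000'));
}

main().catch(console.error);
```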

The event-driven approach

The whole system is designed to be highly scalable and modular; code-wise, everything is handled by events and the whole state is stored inside a single JSON object (it's not the Redux paradigm, but more like its closest ancestor, since I started working on it in 2014).
Having an event for each action makes it really easy to plug modules in and out without affecting the entire system; it also allows separate components to listen to the same event and handle it with different behaviors.
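In practice this can be as simple as a shared EventEmitter plus one mutable state object that every module reads and updates; a simplified sketch of the idea (the state fields and event names are illustrative):

```typescript
import { EventEmitter } from 'events';

// Single JSON object holding the whole state of the house (simplified).
const state = {
  lightsOn: false,
  alarmInProgress: false,
  temperature: 0,
  humidity: 0,
};

const bus = new EventEmitter();

// Any module can update the state and broadcast what happened…
function dispatch(event: string, patch: Partial<typeof state>): void {
  Object.assign(state, patch);
  bus.emit(event, state);
}

// …and several independent modules can listen to the same event
// and react with completely different behaviors.
bus.on('sunrise', (s: typeof state) => console.log('Lights module: lights on =', s.lightsOn));
bus.on('sunrise', () => console.log('Music module: starting the morning radio'));

dispatch('sunrise', { lightsOn: true });
```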

What about the future

AI seems to be the must-have nowadays; in my case, having some kind of Deep Neural Network might help in tracing common behavioral patterns, which could be useful for optimizing energy consumption (heating and lights) or preventing harmful situations.

Unfortunately, it still costs a lot to have a dedicated desktop computer with a GPU (necessary for any kind of DNN or ML/DL) just for running this system. I'm waiting for the price of the Jetson TX2 to drop a little bit, then I'll replace my Jetson TK1 with it.

Code-wise, first things first: I've been working on this project during bad-weather weekends or when I was hanging around at home with nothing better to do… it's definitely far from being production ready :-) it works though!!

Secondly, it's old: I started the project in 2014 and it now needs a refresh; my plan for 2018 is to rewrite it from scratch using the latest stack of JavaScript technologies:
TypeScript, Redux, GraphQL, React.

If you want to stay in sync with this project (and be informed when the source code is released), just add me on one of these networks:
LinkedIn, Twitter or GitHub.

If you need further details (since I’ve omitted a lot) feel free to bug me in private.

Ciao!


A computer scientist who works on a daily basis with TypeScript and Node.js, passionate about Computer Vision and Robotics.