
Sunday 3 February 2019

Home Automation Multi-sensor

Overview

In this project I use a commercially available PIR (passive infrared) sensor, the same typically found in home alarm systems, to make a home automation multi-sensor by equipping it with an ESP8266.


Motivation

There are many project-quality PIR sensors available for use with Arduino-like devices, such as this one...
https://amzn.to/2GGhyPs

In fact, I used one of these in a previous Arduino project and it worked very well. They have configurable sensitivity levels and timeouts, so it was an obvious choice when making a home automation trigger.
However... after building a test device on a breadboard with an ESP8266 I found there were a lot of false alarms. After a bit of research I found others had similar issues; it appears this, and similar units, have poor RF interference protection, resulting in false triggers.

I have come across others who used this PIR after adding extra RF protection to guard against the false alarms.




I decided against using this for my home automation sensor for two reasons:
  1. It's still hobby grade. By this I mean the design is liable to change at any time, with no requirement to adhere to any standards.
  2. I'd still need to design/make/print a case.


Solution

Use a commercial sensor. This perfectly answers the two points above:
  1. Commercial grade. It has to conform to alarm-grade standards, so it's RF shielded and designed to operate 24/7 in home environments.
  2. It has a purpose-designed case, mount, IR window, etc.

The Build

I had no preference for PIRs so just went for an alarm sensor at the lower end of the price spectrum that looked large enough to accommodate an ESP8266. I went for the Honeywell IS312B as it is 'Pet Tolerant' and includes a 'Swivel Bracket', all for under £12.

That'll do.

The first task is to work out how to interface with the PIR. Looking at the green edge connector along the top of the PCB, from left to right we have: T2, T1, NC, C, V-, V+.


The obvious terminals are V- and V+, and the data sheet says it takes 9–15 VDC, a nice wide supply range. I powered it up with a 12 V bench supply to check it worked; it has about a 10 second warm-up before it's ready to start sensing. With the multimeter in continuity mode it was quick to establish that when the sensor is idle, NC and C are closed-circuit; when motion is detected, they go open-circuit.

I believe T1 and T2 are tamper-detection terminals; I'm not interested in that feature just now, so I didn't bother characterising their behaviour.

Now we have the connections mapped we can move to task two: ESP8266 integration.
I have a drawer full of ESP8266 devices in various guises: NodeMCU, Wemos D1 Mini, and bare modules in the ESP-01 and ESP-07 forms. The bare forms are great because they're tiny, but they lack a USB controller and power regulator, so for development I just used the Wemos D1 Mini.

It turned out the D1 fits perfectly under the PIR PCB. I mean perfectly, like it was designed for it. I didn't even bother fixing it in place; I just put some tiny sticky foam pads on top of the D1 and relied on the compression from the PIR PCB when it's clipped into its mounting points.


The alarm state is read with a simple digital read. C is held low by connecting it to V- (which is actually ground), and a GPIO on the D1 is connected to NC, configured as an input with the internal pull-up resistor enabled.

  pinMode(PIR_PIN, INPUT_PULLUP);

This means that when NC-C goes open-circuit we read the internal rail voltage thanks to the pull-up resistor; when NC-C is closed, the input pin is pulled low.
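Putting that together, a minimal sketch of the detection loop looks like this (the pin choice and serial output are illustrative, not necessarily what's in the final firmware linked below):

  // Minimal PIR read sketch; pin choice is an assumption.
  const int PIR_PIN = D5;  // GPIO wired to the PIR's NC terminal

  void setup() {
    Serial.begin(115200);
    pinMode(PIR_PIN, INPUT_PULLUP);  // idle: NC-C closed, pin reads LOW
  }

  void loop() {
    // On motion NC-C opens and the pull-up takes the pin HIGH
    if (digitalRead(PIR_PIN) == HIGH) {
      Serial.println("Motion detected");
    }
    delay(100);
  }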

I said at the start this was a multi-sensor. I simply added a DHT22 temperature/humidity sensor externally on the rear of the case; there are hundreds of articles and posts about using these, so I won't rehash them beyond the snippet below.
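Reading the DHT22 boils down to something like this with the common Adafruit DHT library (the data pin here is an assumption):

  #include <DHT.h>

  DHT dht(D4, DHT22);  // DHT22 on pin D4; pin choice is an assumption

  void setup() {
    Serial.begin(115200);
    dht.begin();
  }

  void loop() {
    float t = dht.readTemperature();  // Celsius
    float h = dht.readHumidity();     // % relative humidity
    if (!isnan(t) && !isnan(h)) {
      Serial.printf("%.1f C, %.1f %%RH\n", t, h);
    }
    delay(2000);  // the DHT22 only samples about every 2 seconds
  }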

All wired up!


Tip: Temporarily mount it while testing locations for best coverage.


That's pretty much it. I now have a PIR with a neatly integrated ESP8266. The D1's USB port is easily accessible should I want to flash new firmware onto the device, but as that's still a bit awkward I've also configured the D1 to accept OTA (over-the-air) firmware updates.
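The OTA side is just the stock ArduinoOTA pattern; a minimal sketch (WiFi credentials are placeholders):

  #include <ESP8266WiFi.h>
  #include <ArduinoOTA.h>

  void setup() {
    WiFi.begin("ssid", "password");  // placeholders
    while (WiFi.status() != WL_CONNECTED) delay(500);
    ArduinoOTA.begin();  // device now accepts uploads over WiFi
  }

  void loop() {
    ArduinoOTA.handle();  // call regularly to service update requests
  }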

Code can be found here https://github.com/oliver9523/IoT_Sensor

The Dollar Game

In this post I'll talk about how I created The Dollar Game app for Android using Unity 3D. This was inspired by the Numberphile video on the topic.


This ad-free game can be downloaded free from the Play store here.

As soon as I saw this video about the Dollar Game I knew I had to turn it into a game.
Where. To. Start.
There are three things I need to solve to create this game:

1 - Control logic
This is the mechanism for interacting with the nodes, sending/taking money, keeping track of which nodes are connected.

2 - Visuals
The graphical style, visual representation of the nodes, edges, balance and transactions.

3 - Game Logic
This is quite a biggy. It will be responsible for generating new random games, which in itself isn't difficult [create nodes, randomly connect nearest n nodes, distribute money]... the difficulty is ensuring that the level is solvable; nobody is going to be happy if the graph is impossible to solve. Luckily there's a neat way round this, sketched below.
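A handy result from the same video: if the total money on a connected graph is at least the genus (edges - nodes + 1), the game is guaranteed to be winnable. The app itself is C# in Unity, but the idea is language-agnostic; a rough C++ sketch of distributing money so the bound holds:

  #include <cstdlib>
  #include <vector>

  // Sketch: distribute money so the level is guaranteed solvable,
  // using the rule total >= genus (E - V + 1) => winnable.
  std::vector<int> distributeMoney(int numNodes, int numEdges) {
      int genus = numEdges - numNodes + 1;
      std::vector<int> balance(numNodes, 0);
      int total = 0;
      // Scatter random debt/credit across the nodes...
      for (int i = 0; i < numNodes; ++i) {
          balance[i] = rand() % 7 - 3;   // -3..3 dollars
          total += balance[i];
      }
      // ...then top up random nodes until the bound is met.
      while (total < genus) {
          ++balance[rand() % numNodes];
          ++total;
      }
      return balance;
  }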

Let's knock off task 2 first: the visuals. Seeing as Numberphile inspired the app, I thought I'd use their brown-paper look and a handwriting-style font. The rest of the elements are in keeping with that style. Done.

Now to build the control logic.
//TODO Explain the approach and publish code


Easy

Medium

Hard



Weighted node connections indicate how many dollars will be transferred for each transaction.




IoT Audio Source Switcher [Part 2 - Software]

Recap

In Part 1 I outlined the hardware aspects of creating an IoT audio source switcher. In Part 2 I'll cover the software side of things.

Problem
To be able to remotely select the input audio source from a variety of devices.

Solution
Use network (WiFi / LAN) connections to send messages to enable control.

Architecture
When it comes to network communications there are a number of different ways to accomplish a task, the most common being HTTP calls to some predefined API, e.g. http://server1.sky-map.org/search?star=polaris
This is great for requesting information: in this example we request information about the star Polaris and receive an XML-encoded response. XML and JSON are common formats for communicating this type of information as they can both be parsed quickly and easily.
The HTTP approach also requires that we know the IP address of the device we want to communicate with, which can cause unnecessary extra work.

HTTP Scenario
Here we have deployed the audio switcher with an HTTP endpoint that looks something like this...
http://192.168.0.123:8080/audio?source=1
This looks great: we can send commands from any device on the network. However, if several devices are set up to control the audio switcher and we then, for some reason, need to change the switcher's IP address, every control device has to be updated (or has to search the full IP range) in order to keep communicating. That might even need another endpoint so we can check which device we're interrogating, e.g. http://192.168.0.x:8080/info.
Another solution is configuring a local DNS server so we can address each device by name, but that sounds like even more overkill.
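For illustration, the HTTP version of the switcher would look something like this using the stock ESP8266WebServer library (entirely hypothetical, including the selectSource() helper; this isn't how I built it):

  #include <ESP8266WiFi.h>
  #include <ESP8266WebServer.h>

  ESP8266WebServer server(8080);

  void setup() {
    WiFi.begin("ssid", "password");  // placeholders
    while (WiFi.status() != WL_CONNECTED) delay(500);
    // e.g. GET http://192.168.0.123:8080/audio?source=1
    server.on("/audio", []() {
      int source = server.arg("source").toInt();
      // selectSource(source);  // hypothetical relay-driving function
      server.send(200, "text/plain", "OK");
    });
    server.begin();
  }

  void loop() {
    server.handleClient();
  }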

Multiple Device Scenario
Let's say we want to extend our setup: now we also want to turn some lights on/off when we switch source on the audio switcher. To extend the HTTP implementation we'd need to hit multiple URLs on multiple IPs, possibly sending multiple parameters to each endpoint.

Example:
Our Audio Switcher is on 192.168.0.123
We have a light controller on 192.168.0.124
And a power plug on 192.168.0.125

Say we want to switch to source 2, turn off two lights and turn on the device connected to the power plug.

We might have to hit the following endpoints, something like
192.168.0.123/audio?source=2
192.168.0.124/off
192.168.0.125/on

That's 3 different IPs to manage and 3 different HTTP endpoints. To make it reliable we'd need to make the IPs static or reserve them in the router's DHCP settings.

Now imagine trying to update this after deploying. Adding another device, changing IPs, adjusting the workflow. It's starting to get complicated and difficult to maintain.

Let me introduce something called MQTT.


MQTT

"MQTT is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport."

Sounds promising, doesn't it?

MQTT uses a publish/subscribe approach to delivering messages to devices. Rather than regurgitating the hundreds of other sites and YouTube videos that explain MQTT, I'll just get to the crux of the matter.

Unlike the HTTP example, which requires the controllers to send data directly to the audio switcher (at a known IP), we now tell the audio switcher to subscribe to a topic. A topic is a communication channel that any device can listen to and any other device can publish to.
In the case of the audio switcher, it subscribes to "0/kitchen/av/a", where I've defined the topic hierarchy format as "{floor}/{room}/{device}/{property}".

We do still need to communicate with one specific IP, but at least it's only one: the MQTT broker, the server that handles all the messages. That grunt work is performed by a Raspberry Pi.

Here's how MQTT would work in our multiple devices example.
The audio switcher subscribes to "0/kitchen/av/a" and receives a simple message of 1-4 that dictates which input to switch to.
Any controller device simply publishes to the same topic "0/kitchen/av/a" with a message of 1-4 to set the source.
Now, if I want to control a second device, say a power switch that turns on the TV when the audio switcher's input is '1', I simply subscribe the power switch to the same topic "0/kitchen/av/a". The controller knows nothing of the power switch; we haven't had to update it with another API endpoint or the IP address of another device to control.
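On the switcher, the subscriber side looks something like this. I'm using the popular PubSubClient library here as an illustration; the broker address, client ID, and selectSource() helper are placeholders:

  #include <ESP8266WiFi.h>
  #include <PubSubClient.h>

  WiFiClient wifi;
  PubSubClient mqtt(wifi);

  void onMessage(char* topic, byte* payload, unsigned int length) {
    if (length > 0) {
      int source = payload[0] - '0';  // messages are simply "1".."4"
      // selectSource(source);        // hypothetical relay-driving function
    }
  }

  void setup() {
    WiFi.begin("ssid", "password");       // placeholders
    while (WiFi.status() != WL_CONNECTED) delay(500);
    mqtt.setServer("192.168.0.2", 1883);  // the Pi running the broker
    mqtt.setCallback(onMessage);
  }

  void loop() {
    if (!mqtt.connected() && mqtt.connect("audio-switcher")) {
      mqtt.subscribe("0/kitchen/av/a");
    }
    mqtt.loop();
  }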




Summary

I built an IoT audio source switcher with 4 stereo inputs and 1 stereo output. This will be used to switch the input of an audio amp. Sources are switched using relays to avoid any cross-talk or other components leaking noise into the audio path.
I'm using an ESP8266 to control the unit and provide wifi access. Communication is handled using the MQTT protocol whereby clients can 'subscribe' and 'publish' messages to 'topics'.

The switcher subscribes to a topic, e.g. "0/kitchen/av". Other devices can publish to this topic with a message, which could be JSON-encoded, e.g. {"set": "source", "value": "2"}. Any device subscribed to the topic will receive the message and can process it accordingly.
I can publish to this topic from anything: an app, a console, or another ESP8266. In one example I show a simple button press on another ESP8266 publishing a message to select the next source.
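That button publisher is about as simple as an MQTT client gets; a sketch along the same lines (pin, broker address and client ID are assumptions):

  #include <ESP8266WiFi.h>
  #include <PubSubClient.h>

  const int BUTTON_PIN = D3;  // assumption: button to ground, internal pull-up
  WiFiClient wifi;
  PubSubClient mqtt(wifi);
  int source = 1;

  void setup() {
    pinMode(BUTTON_PIN, INPUT_PULLUP);
    WiFi.begin("ssid", "password");       // placeholders
    while (WiFi.status() != WL_CONNECTED) delay(500);
    mqtt.setServer("192.168.0.2", 1883);  // the Pi broker again
  }

  void loop() {
    if (!mqtt.connected()) mqtt.connect("wall-button");
    if (digitalRead(BUTTON_PIN) == LOW) {  // pressed
      source = source % 4 + 1;             // cycle 1..4
      char msg[2] = { char('0' + source), '\0' };
      mqtt.publish("0/kitchen/av/a", msg);
      delay(250);                          // crude debounce
    }
    mqtt.loop();
  }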

All the MQTT messages are handled by a local server or 'broker' running on a Raspberry Pi. The local nature of this 'IoT' means it doesn't rely on any external internet connection, third-party services or companies (see https://www.telegraph.co.uk/technology/2018/10/12/glitch-yales-smart-security-system-sees-brits-locked-homes/).

Thursday 27 December 2018

IoT Audio Source Switcher [Part 1 - Hardware]

Problem
In our kitchen we are currently unable to simply switch audio inputs to select between TV audio or Chromecast Audio for music playback.

Obvious Solution
Buy an AV amp with multiple inputs and a remote.


My Solution
Create a generic WiFi-enabled audio switcher using an ESP8266, controlled by MQTT messages.


Why...?
Why go through this effort to make my own IoT audio switcher? Well, as always, the first answer is because I can. But, talking strictly about tangible benefits of a custom solution over a bought one, these were my main driving factors:
- Lower Cost (minus my time)
- Better API (if any at all with a commercial product)
- No reliance on installing yet another app and granting access to my details
- No cloud - if my internet fails it'll still work
- It's a generic external device I can choose to pair with any future amp of my choice




Step 1: The Amp

At this proof-of-concept stage I didn't want to risk frying my main home cinema amp, so I went with a cheap stereo amp from Amazon, the LEPY LP 2024A+ Plus. Honestly, it's really not that bad: driving my Monitor Audio Bronze 2 bookshelf speakers it sounds great. Obviously it's not very loud, but it's enough for testing.


Step 2: Breadboard

The circuit is pretty simple: use relays to switch audio sources. This is actually my preferred method over solid-state components. Any electrical connection shared with the audio path is just asking for trouble; every shared connection is a point of ingress for electrical noise that will corrupt the audio signal. The switcher isn't going to be selecting between input sources at any great rate, so slow mechanical relays are absolutely fine.

I sketched out a rough circuit diagram (it's been about 15 years since I last drew one, so I expect there are mistakes), but at least I knew roughly what I wanted to do.

The Surface Pro is great for this sort of task (OK... so is pen and paper, but at least with the Surface it's already in picture format).

Once all the components arrived I set about prototyping on a breadboard. Always work in stages: there's no point building up the whole board only to discover you've made a mistake. Create the smallest viable circuit that will perform the task; in this case, one that controls a relay allowing audio signals through or not. It may seem obvious that this would work, but I wanted to test whether the making and breaking of the relay contacts would create cracks and pops on the audio line.

Baby steps. Does it work with 1 relay?

Success, the audio output is nice and clean. If the relay does create any audio artefacts they are filtered out by the amp.
Now to continue building up the circuit: scale it up to 4 stereo channels and check it still works. Then add a NodeMCU ESP8266 and some simple code to switch 4 outputs, to test each relay and switching between multiple active audio sources.
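The test code was nothing fancy; something along these lines (relay pin assignments are assumptions):

  // Cycle through the four relays to exercise each switch path.
  const int RELAY_PINS[4] = {D1, D2, D5, D6};  // pins are assumptions

  void setup() {
    for (int i = 0; i < 4; ++i) {
      pinMode(RELAY_PINS[i], OUTPUT);
      digitalWrite(RELAY_PINS[i], LOW);
    }
  }

  void loop() {
    for (int i = 0; i < 4; ++i) {
      digitalWrite(RELAY_PINS[i], HIGH);  // select source i+1
      delay(2000);                        // listen for pops and crackle
      digitalWrite(RELAY_PINS[i], LOW);
    }
  }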




Add some I/O. Check it still works.


Step 3: Strip-board / Veroboard

Now it's getting exciting. The design is valid and the prototype works. Time to step it up and migrate the breadboard components to copper strip-board (veroboard), taking my time to make sure I copied everything exactly and making best use of the copper tracks to minimise the number of wires I needed to cut and strip.

Following proof of concept on the breadboard, move the circuit over to veroboard.

Even with careful planning the veroboard wasn't wide enough to fit the Wemos D1 Mini, so I had to create a little daughter board for it to sit on.
Regardless of this mishap, it's starting to look and sound pretty good.


Check it all still works.


Step 4: Features and Functions

Now it's time to add the finishing touches. Putting 3 mm blue LEDs on flyouts means I can easily position them in the case. These indicate which source is the currently active input.


Add indicator LEDs to show which source is active.




Check it all still works again.


Sometimes it's less convenient to pull out the phone, unlock it, open an app and switch the audio source; sometimes you just want to press a button on the wall.

The software side of this project will be outlined in part 2. I'll talk about the ESP8266 and MQTT.


Step 5: Assemble

Putting it back together I noticed the amp has a really cheap look to it, with a bright blue glow all round the clear plastic of the volume knob. This won't do.
I drilled a 1 mm hole next to the power switch of the amp, desoldered the power LED and put it on a flyout wire so I could position it behind the new hole.


Drill a tiny 1mm hole.

The LED on its own is too bright.
Covering the hole with masking tape attenuates the brightness just enough.


Much better.


Reassemble the amp.

After much searching for an aluminium enclosure I found it better value to just buy another amp, remove the internals, and make custom face and back plates. Now I have nice, consistent-looking, stackable units... and I also have spare amp internals from the donor unit!

Top: Amp Front
Bottom: Switcher Front

Top: Switcher Rear
Bottom: Amp Rear


Step 6: Deploy

Fin.
The amp and switcher combo is now deployed in the kitchen, paired with some spare Creative speakers and connected up to the kitchen TV. Currently two inputs are used: TV and Chromecast Audio.


Summary

I built an IoT audio source switcher with 4 stereo inputs and 1 stereo output. This will be used to switch the input of an audio amp. Sources are switched using relays to avoid any cross-talk or other components leaking noise into the audio path.
I'm using an ESP8266 to control the unit and provide wifi access. Communication is handled using the MQTT protocol whereby clients can 'subscribe' and 'publish' messages to 'topics'.

The switcher subscribes to a topic, e.g. "0/kitchen/av". Other devices can publish to this topic with a message, which could be JSON-encoded, e.g. {"set": "source", "value": "2"}. Any device subscribed to the topic will receive the message and can process it accordingly.
I can publish to this topic from anything: an app, a console, or another ESP8266. In one example I show a simple button press on another ESP8266 publishing a message to select the next source.

All the MQTT messages are handled by a local server or 'broker' running on a Raspberry Pi. The local nature of this 'IoT' means it doesn't rely on any external internet connection, third-party services or companies (see https://www.telegraph.co.uk/technology/2018/10/12/glitch-yales-smart-security-system-sees-brits-locked-homes/).

Wednesday 26 December 2018

New year new effort to post

It's almost a new year and I'm going to put more effort into sharing my projects.

Some of the topics I'll be posting about will be around computer vision, deep learning, augmented and mixed reality (Hololens), IoT, Android app development, UWP app development, and other random electronics and computing projects.

Thursday 26 February 2015

Zoom and enhance...



We all know the score when it comes to surveillance in TV & Movies these days.

Scene: The killer has all but got away with it until some upstart agent spots a smudge in a reflection...
Agent: "Can you guys zoom in on that region"
Tech: "Sure"
Agent: "Now enhance it a little"
Tech: "How's that?"
Agent: "...a little bit more...bingo! We've got him!"

If you have no idea what I'm talking about pop over to zoomandenhance.tumblr.com for some entertaining examples.

To get to the point, I set myself the challenge of being able to reconstruct an image from the seemingly impossible. A task partly inspired by this work...
http://gizmodo.com/an-algorithm-found-these-images-hidden-in-reflections-f-1679242769
http://arxiv.org/abs/1412.7884
Instead of glitter I would use crumpled foil.

...yep, that's as clear as a bell.

Hardware Configuration
My setup used a laptop screen to display images, with the crumpled foil placed on the keyboard, the laptop screen tilted forwards, and the camera pointed at the foil reflecting the images on the screen.



Calibration
The first stage in reconstructing the images is to profile the foil in terms of a reflectance map. To do this I display a black screen with a white rectangle at different known x-y positions. Because I'm in control of the white test square, I can capture a frame of the crumpled foil from the camera with the test square at each grid position on the laptop screen. This gives me, in this case, a 20x20 matrix of matrices: each sub-matrix is a 1280x960 grayscale image captured from the camera pointed at the crumpled foil, and the 20x20 matrix indexes the position of the white rectangle on the laptop screen. This video shows the output from the camera as the white test square scans across the laptop screen.
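In outline, the calibration loop looks something like this (a sketch rather than my exact code; resolutions and timings are examples):

  #include <opencv2/opencv.hpp>
  #include <vector>

  int main() {
      const int GRID = 20;            // 20x20 calibration positions
      cv::VideoCapture cam(0);        // camera pointed at the foil
      cv::Size screen(1280, 800);     // laptop display resolution (example)
      int cw = screen.width / GRID, ch = screen.height / GRID;

      // calib[y][x]: the camera's view of the foil for each square position
      std::vector<std::vector<cv::Mat>> calib(GRID, std::vector<cv::Mat>(GRID));

      cv::namedWindow("cal", cv::WINDOW_NORMAL);
      cv::setWindowProperty("cal", cv::WND_PROP_FULLSCREEN, cv::WINDOW_FULLSCREEN);

      for (int y = 0; y < GRID; ++y) {
          for (int x = 0; x < GRID; ++x) {
              // black screen with a single white test square
              cv::Mat pattern = cv::Mat::zeros(screen, CV_8UC3);
              cv::rectangle(pattern, cv::Rect(x * cw, y * ch, cw, ch),
                            cv::Scalar(255, 255, 255), cv::FILLED);
              cv::imshow("cal", pattern);
              cv::waitKey(200);       // let the display and exposure settle

              cv::Mat frame, gray;
              cam >> frame;
              cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
              calib[y][x] = gray.clone();
          }
      }
      return 0;
  }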




After the calibration I now know which parts of the reflectance image from the foil correspond to which parts of the laptop screen. The theory is that if I display an image on the screen and capture the reflected image from the crumpled foil, I should be able to back-project the mess below to partially reconstruct the input image, but only to an accuracy of 20x20 pixels, as that's the resolution used for calibration. If I used a finer calibration matrix, say 100x100, the white square on the laptop screen would also shrink, so it wouldn't provide enough illumination to overcome the background light that leaks through the black regions of the screen. The black regions of an LCD are still illuminated from behind; they are not completely dark.
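In code, the back projection amounts to a weighted sum: each calibration frame records how strongly one screen cell shows up in the foil image, so each reconstructed pixel is effectively a correlation between the captured frame and that cell's calibration frame. A rough sketch, assuming the calib matrix from above plus a background frame captured with an all-black screen:

  #include <opencv2/opencv.hpp>
  #include <vector>

  // Reconstruct a GRID x GRID image from one captured foil frame.
  cv::Mat reconstruct(const cv::Mat& captured,
                      const std::vector<std::vector<cv::Mat>>& calib,
                      const cv::Mat& background) {
      int grid = (int)calib.size();
      cv::Mat out(grid, grid, CV_32F);
      cv::Mat cap32, bg32;
      captured.convertTo(cap32, CV_32F);
      background.convertTo(bg32, CV_32F);  // LCD backlight leakage to subtract

      for (int y = 0; y < grid; ++y) {
          for (int x = 0; x < grid; ++x) {
              cv::Mat w;
              calib[y][x].convertTo(w, CV_32F);
              w -= bg32;                    // keep only this cell's contribution
              w /= (cv::sum(w)[0] + 1e-6);  // normalise the weights
              out.at<float>(y, x) = (float)cap32.dot(w);
          }
      }
      cv::normalize(out, out, 0, 255, cv::NORM_MINMAX);
      out.convertTo(out, CV_8U);
      return out;
  }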

Processing
Once calibrated, we can process any image like this...

This foil-reflected image was created by displaying a grayscale image on the laptop screen. I used grayscale because it meant I was only dealing with single-channel images, which kept things simple while proving the concept. Below is the image I tested the processing with.

Note: This is not my input image, I'm not sure who owns it, I found it for free on a couple of different wallpaper sites so figured it was fair game.




Results
OK, so it's not going to let the upstart CSI agent see the killer... but you can clearly make out the pyramids... just. Processing takes a couple of seconds per image, but there is no parallel or optimised processing going on, so it could easily be made an order of magnitude faster by using the other 7 CPU threads plus a bit of optimisation.


It's actually a bit easier to see the results on smaller images.
 

The pyramids image is the first one I tested and it probably worked the best...


No bit of computer vision work is complete without a chessboard.


It's important to also compare input and output with the input image scaled to the calibration dimensions. As I could only achieve a good light level with a large white calibration 'pixel', the input image should be scaled down to the same size so we can appreciate the maximum reconstruction resolution that could be achieved. Here is the pyramid image scaled to 20x20 then upscaled again for viewing.



Conclusion
The image can definitely be reconstructed from fractured reflections captured off a crumpled foil sheet. Comparing against the best possible case for reconstruction, i.e. the input image resized to the calibration matrix size, the results are actually pretty impressive, if I do say so myself!

For an evening's worth of work and about 150 lines of C++ and OpenCV, the results are promising for reconstructing higher quality images, and even worth experimenting with colour.

There are limitations with the hardware configuration I was using:
- The low level background illumination from LCD black regions is going to impact the reflectance map accuracy
- The relatively large calibration 'pixel' greatly reduces the maximum output resolution
- Compensation for the viewing/brightness angle of the LCD has not been considered either
- Foil reflections are not perfect, a clean sheet of kitchen foil does not make a mirrored surface

Finally, a massive thanks to COSMONiO for the use of their hardware.

Sunday 29 September 2013

Blog usage stats

This isn't related to my work, just a random post with some interesting stats about blog visitors. Nothing else to say about it really, so enjoy the pretty plots.


Blog visitor locations

Browser stats for blog visitors


OS usage for blog visitors



The International Conference on Image Processing 2013

After attending my first international conference as a PhD student I thought I'd do a quick review of the event and the content.

Day 0 - Thursday 12th September
Australia is far away: 10,500 miles (16,898 km), which meant three 7-hour flights and two changes. I can't wait for sub-orbital intercontinental flights to become routine! The Emirates Boeing 777-300ER (Extended Range) was impressive nonetheless: equipped with a ~13" LCD touchscreen with a removable phone-sized touchscreen remote, flanked by a USB port for photo playback on the ICE (Information, Communication & Entertainment) system, and a universal (110 V/60 Hz) power socket, it provided plenty to keep me busy.
Emirates ICE system
A view that never gets old
Day 1 - Sunday 15th September
The event kicked off for me on the Sunday afternoon with a tutorial on RGBD Image Processing for 3D Modeling and Texturing. The main focus of the session was depth information from structured-light cameras, and it was interesting to see some of the techniques being used to improve depth information with colour cameras. Hopefully I'll be able to use some of this in my research.


Sunday evening was the welcome reception. This was a great chance to mingle, chat and network, facilitated very well by an open bar and good food. Another attraction that provided a good talking point was a mini petting zoo... This was pretty surreal; I found myself in the middle of a conference centre stroking a koala, a kangaroo and a wombat.

Day 2 - Monday 16th September
Now the conference kicked off properly and got into full swing pretty quickly. There were oral presentations, tech demos and poster presentations all running in parallel. If you have a broad area of research you'll be darting from room to room trying to catch talks of interest. I'd recommend having a good look through the technical program, highlighting the sessions of most interest, and focusing on attending those. It sounds obvious, but trying to get round everything is impossible.
This is where the poster sessions are possibly more useful, as they are literally hanging around longer and it's easier to discuss the work face-to-face.

Monday was topped off with a workshop on VP9 from Google, presented by Debargha Mukherjee. It wasn't really in my field, but it was an interesting insight into the performance of VP9, where it's going next, and the process of software development at the big G!

Day 3 - Tuesday 17th September
There's not much to say other than it was much the same as Monday: a busy schedule and plenty of things to attend. I'm not going to review all the work I saw; it would take far too long and wouldn't really provide anything of great use to anyone. If you are interested in the program it can be found here.

Tuesday evening was polished off with a banquet. It started with some pretty stereotypical Aboriginal music and dancing served with the first course, which was "a taste of Australia". The main course of steak followed a rather odd combination of dance and technology; I'm not really sure it was right for a geeky audience. Fillet steak is always a welcome meal in my books, but being honest it was overcooked for my liking: it's medium-rare or gtfo. Dessert was accompanied by some more stereotypical music, this time from a folk-like band.

Day 4 - Wednesday 18th September
Finally it was almost time to present my work. Unfortunately my slot was toward the end of the last day, so inevitably the numbers had started to dwindle a bit. That said, I did have some good discussions with quite a few people who I think were genuinely interested in the results presented, or very good at feigning it. Below is the poster presented.


Summary
With such a broad range of topics being presented, from medical imaging, radar and image enhancement to recognition and quality assessment, it was easy to get mentally fatigued trying to take it all in. My hint for anyone yet to attend one of these conferences: decide on what's of interest and really concentrate on those. However, don't ditch everything else entirely; the poster sessions are invaluable for gaining insights into other areas of research that may be of interest, and at least you can read and absorb them at your own rate.