A tide line of greys,
Heaped into trees.
Speckled with mistle,
Frozen with dew.
I regularly work from home, and because there has been a great deal of house renovation to be done there are plenty of deliveries to wait in for. I work at the back of the house, and because the house is so solidly built, with blockwork walls throughout – no plasterboard stud walls here – it’s hard to hear people knock at the front door. Also, the door bell doesn’t work.
Before you say “why don’t you fix the door bell?” – we are in the middle of doing that bit, the rewiring hasn’t happened yet, and the wireless battery-powered door bell has been as useful as a chocolate teapot.
So as I had the parts to hand here’s what happened…
- 1 USB webcam
- 1 Raspberry Pi
- 1 wireless USB dongle (or you can plug in to your network using a network cable)
- 1 powered USB hub
- a power brick to power the hub
- another to power the Pi
- another device to use as the door maid – I have used several…
- Plug the USB webcam and wireless dongle into the hub.
- Plug the hub into the Raspberry Pi.
- Plug the power leads into both and switch on.
I am going to make an assumption that you have already got your raspberry pi bootable with the Raspbian operating system, and that you have already got your wireless dongle connected to your wireless network. There are plenty of guides ‘out there’ to take a look at to get you to the point that you have a bootable and workable raspberry pi that is networked.
Put the webcam somewhere where it has a view of the area outside the front door (or, of course, point it at anything else you want to be able to see). You probably want to avoid getting too much sky in your shot – especially when the sun might pass through the webcam’s field of view. You might also want to avoid having a busy road in the field of view if you expect to use motion detection – otherwise motion will be detected every time a vehicle goes by!
I am running some software called motion, a program that monitors the video signal from cameras. It has a couple of useful features, like motion detection and the ability to take time-lapse videos. The main one I use is streaming the video feed to other devices.
Full details for installing motion and basic setup can be found here: http://pingbin.com/2012/12/raspberry-pi-web-cam-server-motion/
Consuming the feed:
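motion streams the camera as MJPEG (on port 8081 by default). As a rough sketch of consuming that feed from code – the hostname is a placeholder for whatever address your Pi has, and the frame-extraction helper is mine, not part of motion – you can pull a single JPEG frame out of the stream like this:

```python
from urllib.request import urlopen

def extract_jpeg(buf):
    """Return the first complete JPEG frame found in buf, or None.

    An MJPEG stream is just JPEG images back to back, so we look for
    the JPEG start-of-image (FFD8) and end-of-image (FFD9) markers.
    """
    start = buf.find(b'\xff\xd8')
    if start == -1:
        return None
    end = buf.find(b'\xff\xd9', start)
    if end == -1:
        return None
    return buf[start:end + 2]

def grab_frame(url='http://raspberrypi.local:8081'):
    """Read from motion's stream until one whole frame has arrived."""
    stream = urlopen(url)
    buf = b''
    while True:
        buf += stream.read(1024)
        frame = extract_jpeg(buf)
        if frame:
            return frame
```

The door maid device then just needs to point a browser (or a small script like this) at the stream.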
Over the summer and early autumn I had the opportunity to continue work on the Reading Lives project. The project has grown over the last few years from a discussion at a ‘hack-day’ about a column of data in a survey result-set that nobody knew what to do with, to a web based application that allows people to explore the answers given to the question “What role has reading played in your life?”. The fact that you are reading this now suggests that you too could answer that question, and share a short summary of the role that reading has played in your life.
About the project
The project has become what it is through some funding from the Arts and Humanities Research Council via the CATH (Collaborative Arts Triple Helix) project led by do.collaboration at the University of Birmingham in partnership with the University of Leicester. CATH consists of several teams, each made up of a developer, an arts organisation and researchers. Our team consists of researchers Danielle Fuller (University of Birmingham) and DeNel Rehberg Sedo (Mount Saint Vincent University, Canada); myself Tim Hodson (the developer); and Writing West Midlands (the arts organisation).
At the time of writing this, the project is not only allowing people to explore the existing survey answers, but to contribute their own answers. The app presents a user profile which people can fill out with their own survey answers. Recently the app featured at the Birmingham Literature Festival, and was seen on the big screen at several of the events.
About the Data
The Answers, as I call them, are the answers to the question “What role has reading played in your life?”. Each answer is analysed for its word content using a (relatively) simple algorithm called Term Frequency – Inverse Document Frequency (tf-idf). This algorithm allows me to decide how important a word is in a particular Answer based on its frequency in that Answer and its frequency in the corpus of all Answers. To quote Wikipedia: “The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.” This calculated importance of words is used to build a word cloud which is meant to act as an alternative way to explore the frequently occurring themes of people’s Answers. We also have demographic data for the Answers which was collected through the original survey. We plan to use this to allow further empathetic connections between the viewer of the app and the original Answers, but we haven’t got to build that bit yet :).
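As a toy illustration of that scoring (not the project’s actual code – the corpus here is invented), words that appear often in one Answer but rarely across all Answers come out on top:

```python
import math

def tf_idf(word, answer, corpus):
    """Score `word` for one answer (a list of words) against the corpus."""
    tf = answer.count(word) / float(len(answer))
    containing = sum(1 for a in corpus if word in a)
    idf = math.log(len(corpus) / float(containing))
    return tf * idf

corpus = [
    "reading shaped my whole life".split(),
    "my mother read to me".split(),
    "reading is my escape".split(),
]

scores = {w: tf_idf(w, corpus[2], corpus) for w in corpus[2]}
# "escape" (unique to one Answer) outscores "reading" (in two),
# while "my" (in every Answer) scores zero.
```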
About the app (warning – gets technical!)
In a previous post I talked about how I put together some temperature sensors to log temperature in the green house and lounge. The sensors use XRF radio modules to send the data back to a Raspberry Pi (also sporting an XRF module) and are running from a 3.3v button cell battery.
The XRF module on the Raspberry Pi is sending the messages from the sensors to the RPi’s serial port, and this is where we start to talk about code…
The plan was to build a realtime display of the data from the temperature sensors. You can now see the live temperature from our green house and lounge, along with a history of readings over the last 1, 12 and 24 hours.
The code I used is available in a public Github repo – but is considered ‘alpha’ code, in that it is not tested end to end or from build to deployment. So use it at your own risk, with the assumption that you’ll have to tinker with it to get things running for you.
The steps below give an overview of the architecture, and each of these steps is explained in more detail in the sections which follow. However, this is not intended to be an in-depth how-to guide.
- Sensor sends LLAP message to RPi serial port
- A Python script polls the serial port for new messages
- LLAP Messages from the sensors are parsed into a simple JSON document which is posted to a Firebase database
- Firebase is an event driven database where all additions and updates fire events.
- An AngularJS app (served from Amazon S3 backed storage) shows the latest readings in a live updating display by listening to those Firebase events.
Sensor Sends LLAP messages to RPi serial port.
This is the easy bit, as the hard work of programming the sensors is already done for you by the Ciseco people. The sensors send messages using a simple 12 character message system called Lightweight Logical Application Protocol. The actual sensor setup and connection to the RPi is covered in the previous article.
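For flavour, here is a sketch of unpacking one such message – the sensor id and reading are invented, but the shape (an ‘a’, a two-character device id, then a payload padded out to nine characters with ‘-’) is the 12-character format the sensors send:

```python
def parse_llap(msg):
    """Split a raw 12-character LLAP message into (device_id, payload)."""
    if len(msg) != 12 or not msg.startswith('a'):
        raise ValueError('not an LLAP message: %r' % msg)
    device_id = msg[1:3]           # two-character sensor id
    payload = msg[3:].rstrip('-')  # strip the '-' padding
    return device_id, payload

# e.g. a temperature reading from a sensor with id 'GH':
# parse_llap('aGHTMPA23.5-') -> ('GH', 'TMPA23.5')
```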
Python script listens to serial port
I wrote a small python module to parse LLAP messages into a simple JSON document which could then be stored or otherwise manipulated as necessary. The module is available as part of the Github repo. The LLAP python module treats every message received as an event, and if you have registered a callback to a particular type of LLAP message, every time that message is received your callback will be notified and whatever your callback does – it will do! The LLAP module simply deals with the parsing of a string of messages captured from a serial port, and passes those messages on to your callbacks. This means that you can react to each temperature reading or battery low message as and when that message arrives.
It is up to your callbacks to decide whether to make a note of the time the message was received, and what action to take based on the message. But using this method it would be simple to have a temperature change over or under some threshold trigger some action. For example, if it gets too warm in the green house, a motorised window opener could be triggered to let in some fresh air.
The code which listens to the serial port and registers the callback is up to you to write, but you can see the code in the Github repo which I am using to listen to the sensor messages.
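The callback idea can be sketched like this – the class and method names are illustrative, not the repo’s actual API:

```python
class LLAPDispatcher:
    """Toy dispatcher: register callbacks against a payload prefix."""

    def __init__(self):
        self.callbacks = {}

    def register(self, prefix, callback):
        self.callbacks.setdefault(prefix, []).append(callback)

    def dispatch(self, device_id, payload):
        # notify every callback registered for this kind of message
        for prefix, cbs in self.callbacks.items():
            if payload.startswith(prefix):
                for cb in cbs:
                    cb(device_id, payload)

# e.g. the green house window opener mentioned above:
opened = []

def on_temperature(device_id, payload):
    if float(payload[4:]) > 30.0:    # payload like 'TMPA32.1'
        opened.append(device_id)     # stand-in for triggering the opener

dispatcher = LLAPDispatcher()
dispatcher.register('TMPA', on_temperature)
dispatcher.dispatch('GH', 'TMPA32.1')  # too warm: triggers the callback
```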
LLAP messages sent as JSON documents to a Firebase database
This is where it starts to get fun!
Firebase is an awesome cloud based database which allows you to post and get data using simple HTTP requests. Because the database is event driven, and because it already has deep integrations with AngularJS, you can quickly build a responsive, data driven site which instantly responds to events happening in browsers all over the web. For our purposes – a live updating latest temperature displayed in a webpage – this is ideal.
The Python code mentioned in the previous section simply takes the parsed LLAP message and adds a timestamp, a priority (used for ordering result sets in Firebase) and a reading id, which is just the timestamp float with the period (.) replaced by the sensor id (you can’t have periods in a Firebase key!). The resulting JSON object is then posted as a reading to the Firebase database.
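Putting that document together looks roughly like this (a sketch – the field names and the Firebase URL are placeholders, not the repo’s exact code):

```python
import time

def build_reading(sensor_id, value):
    """Build the JSON-ready reading document described above."""
    ts = time.time()
    return {
        'sensor': sensor_id,
        'value': value,
        'timestamp': ts,
        'priority': ts,
        # Firebase keys can't contain '.', so the period in the
        # timestamp float is replaced with the sensor id:
        'reading_id': str(ts).replace('.', sensor_id),
    }

reading = build_reading('GH', 23.5)
# then POST it, e.g. with requests:
# requests.post('https://YOUR-APP.firebaseio.com/readings.json',
#               data=json.dumps(reading))
```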
Firebase is event based and fires events that you can monitor
Every time a new reading is added to the database by the python script, Firebase fires off some events to any other clients which are listening for those events.
This means that we can write a web app which listens for those events and updates its display with the new readings – essentially a realtime display of the sensor readings.
So the next step is to build that interface…
AngularJS app to show readings in near realtime
In the Github repo, you’ll find the code for an AngularJS app which shows the sensor readings for the sensors in my network. Now it has to be said that the app has not been written to be generic, and if you decide to fork the repo to build your own, I suspect you’ll have to do a fair bit of ‘hacking’ to get it to work.
The app was an opportunity for me to play with the following tools, and what you see here was built in a weekend – which just goes to show how useful these tools are.
- Yeoman – for building the initial angular and firebase app skeleton.
- Grunt – for automating some of the build and preview.
- AngularJS – for realtime binding of variables in the HTML to variables which are fed directly from Firebase data.
- angularFire – for the abstraction of the code needed to actually talk to the Firebase database.
- Bootstrap 3 – for responsive presentational elements to make it work on mobile and desktop.
I don’t pretend that this code is pretty – and there are no proper tests, but it works and it was fun to build!
Finally, apologies to all those whom I have bored with recounting the current temperature in the green house!
Reference managers like Endnote, Refworks or Zotero often allow you to export your bibliographic citations as a RIS file. You can import these into things like Talis Aspire Reading Lists.
The script below will look in the current directory for RIS files and analyse their contents. We are looking to see what types they have and how many of them have some sort of identifier that can be used to find better bibliographic data from some other source.
#!/bin/bash
while IFS= read -r -d '' file
do
    echo -n "#=== "
    printf '%q\n' "$file"
    egrep "^TY" "$file" | sort | uniq -c
    typeCount=$(egrep "^TY" "$file" | wc -l)
    snCount=$(egrep "^SN" "$file" | wc -l)
    echo $(($snCount*100/$typeCount))"% of records have an SN ("$snCount" of "$typeCount")"
    echo
done < <(find . \( -name "*.ris" -o -name "*.txt" \) -print0 )
#=== ./PMUP00DNMod3.txt
     17 TY - CHAP
      4 TY - JOUR
80% of records have an SN (17 of 21)

#=== ./PMUP00DNMod4.txt
     11 TY - CHAP
     10 TY - JOUR
95% of records have an SN (20 of 21)
I’ve been wanting to do some Raspberry Pi tinkering for some time. Having a little computer on hand to handle the logic processing and interfacing with the outside online world, while also having input/output pins directly controllable by code running on the pi is just too tempting.
A little while later, after following an Adafruit guide to making an LED-based new email notifier, I was hooked…
I am no electronics guru – very much in the category of hobbyist who can comfortably fill header sockets with too much solder without realising it! (yes that’s a bad thing!) Therefore I have been looking for something that combined ease of use with great versatility in order to start exploring how I could use sensors to begin on that hobbyist’s delight – home automation.
So what project would make a good first project?
In our new house we changed the boiler, and so I wanted to track temperature in at least one room to see when the temperature was comfortable and to allow us to re-programme the thermostat to be as efficient as we could get it.
The other thing I wanted to track was the green house temperature – although I must get some new glass to replace the missing panes, as it isn’t going to be that useful with a gale blowing through it.
Two immediate ideas which involved temperature measurement and tracking. Sounds like a good basis for a project.
I first looked at the 1-Wire network Dallas temperature sensors, but as these predominantly needed a wire to make the network, and I didn’t want to run wire out and down the garden, I dismissed them. Though I did find various ideas to make them wireless, this didn’t seem the simple solution I was looking for.
I then stumbled on this blog post which pointed me in the direction of small radio modules with a potential range of hundreds of metres, combined with a very simple text based serial port message system that would make getting the readings super easy. Not only that, but I could theoretically have around 676 devices in the same sensor network, and some of them could also do things like actuate switches… needless to say, this sounded terribly promising!
For the initial setup I decided to do exactly as described in the blog post above.
Parts ordered and delivered: within one evening and a Saturday morning I had a network set up and sending temperature data on a periodic basis.
Next step is to capture the data somewhere (possibly using Firebase) and render the results as a chart. For that you’ll have to see the next post (when I have written it!).
I get fed up with trying to quickly check line endings in files – especially when I am working with a file format that absolutely requires DOS line endings. The script below reports the line endings of each file given as an argument.
#!/usr/bin/python
import sys
import os

files = sys.argv[1:]  # skip the script's own name
print 'Number of files given as args: ', len(files)
padding = 20
for file in files:
    if not os.path.isfile(file):
        continue
    contents = open(file, "rb").read()  # read once rather than per check
    if "\r\n" in contents:
        print str(file).ljust(padding, " "), " : DOS line endings found"
    elif "\n" in contents:
        print str(file).ljust(padding, " "), " : UNIX line endings found"
    elif "\r" in contents:
        print str(file).ljust(padding, " "), " : MAC line endings found"
As I used to be a support analyst, and as I still work in a customer focused team, I get to see a lot of support tickets and how they are handled. This post summarises some learnings from over 10 years working in customer facing positions. Do these things, and you’ll have the support analyst on your side.
- Be polite
Too often people on support desks have to put up with people who are rude and impatient. It is too easy to take frustrations out on the person at the other end of the support line. You won’t win any favours by being rude.
- Be patient
Every new ticket from every customer is important to the customer who raised it. It is also likely to be in a queue, and if there is a problem affecting several people the queue can sometimes be large. Smart support analysts will spot patterns in the tickets coming in, and can alert systems teams to deal with potential issues. Systems teams will then need ten or more minutes to investigate thoroughly, so patience is a useful attitude.
- Raise a ticket for one issue at a time
If you raise a ticket which rambles on about umpteen issues, you will confuse yourself and the support analyst as you won’t know which issue you are being asked questions about.
- Don’t blame the computer system for your inadequacies
Systems are fallible, but so are humans. Over the years I have seen several examples of customers who feel such anger toward the system, based on the feeling that they are failing because the system isn’t helping them. They then lash out at every opportunity to say the system is unworkable and doesn’t do what they were told it would. Usually however, it is not because the system is not working. There is often clear evidence that other users of the very same system are being successful due to using the system as a tool, and not expecting it to replace the strategy and planning needed to make it work for them.
- Be helpful
Give the support analyst as much information as is relevant to the ticket, but don’t be offended if the analyst asks for something else.
- Demonstrate that you have used the knowledge base
Sometimes you need to get beyond the “have you looked at the article in the knowledge base” response from the support desk. Support analysts tend to assume that you didn’t bother to look, so show them that you tried and that you didn’t find anything that helped. Also tell them when you have followed some steps to fix an issue if that didn’t work. This will all help the analyst get to a resolution more quickly.
- Say thank you
Tell the support analyst working on your ticket that you appreciate the time they have taken out of their day to help you.
I could probably add to these, but these are probably the main things to get right. So go on – make a support analyst’s day, and tell them they’ve done a great job!
If you haven’t seen it already, you should have a look at this Digital Essay by Will Self. Not because you are a fan of Will Self or necessarily interested in Kafka’s Wound, but because you are interested in the way an essay can be brought to life through embedded references. I spent a good portion of a very interesting hackday at the National Archives in March talking to Helen Jeffrey from the London Review of Books. We talked about Linked Data and how these concepts, when applied to something like an essay, might make it a different experience.
As my hackday outcome, I used a graph to illustrate the connections between letter writers in the ancient correspondence of Henry III (and others) between 1175 and 1538. Connections between people and graphs are natural bedfellows.
In the digital essay a graph is used to illustrate connections between references to external sources. The references give a sense of flavour to the essay – illustrating and extrapolating rather than merely backing up the author’s intent. I wouldn’t ordinarily have the sticking power to read an essay, but the tempting insights from other media are enough to make me want to go back to the beginning and read through the essay properly.
The graph shaped table of contents invites interaction and – in a wheedling crone’s wheeze – says, “Let me tempt you in, with candy and bright colours to whet your appetite and draw you further in to this gingerbread essay”.
My one criticism is that I cannot quite work out why things in the table of contents are connected. Well, I can with a bit of thought, but the connections could be more meaningful. Who are the people this essay mentions? Where are the places against which the essay is set? How do those people and places link to the creative and archival works curated to illuminate the essay? As Will says in his opening paragraph:
“…I find I cannot prevent myself from linking one idea with another purely on the basis of their contiguity, in time, in place, in my own mind…”
Giving the author the power to link such concepts to sections of essay which in some way trigger emotive neural connections could be argued to be a true advantage of a digital essay. A way for the author to lay out their mind as set of associations interwoven with and adding to the meaning of the words.