29
Mar 14

RPi sensor network

I’ve been wanting to do some Raspberry Pi tinkering for some time. Having a little computer on hand to handle the logic and talk to the outside online world, while also having input/output pins directly controllable by code running on the Pi, is just too tempting.

A little while after following an Adafruit guide to making an LED-based new-email notifier, I was hooked…


The Gmail notifier – a simple true/false statement turns a pin voltage high or low depending on the presence of new mail in your inbox.
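For the curious, here is a minimal sketch of that true/false statement, assuming the standard imaplib module and the RPi.GPIO library; the pin number and credentials are placeholders, not the ones from the Adafruit guide:

```python
# Hypothetical sketch of an LED new-mail notifier; pin and login details
# are placeholders.
import time
import imaplib
import RPi.GPIO as GPIO

LED_PIN = 18  # assumed BCM pin wired to the LED

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

def unread_count():
    """Return the number of unseen messages in the inbox."""
    mail = imaplib.IMAP4_SSL("imap.gmail.com")
    mail.login("you@example.com", "app-password")  # placeholder credentials
    mail.select("inbox")
    _, data = mail.search(None, "UNSEEN")
    mail.logout()
    return len(data[0].split())

while True:
    # The whole notifier boils down to this one true/false statement:
    GPIO.output(LED_PIN, unread_count() > 0)
    time.sleep(60)
```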


I am no electronics guru – very much in the category of hobbyist who can comfortably fill header sockets with too much solder without realising it (yes, that’s a bad thing!). So I have been looking for something that combines ease of use with versatility, as a way in to exploring how I could use sensors to begin on that hobbyist’s delight – home automation.

So what would make a good first project?

The hub of the network: a Slice of Pi and an XRF radio module.


In our new house we had changed the boiler, so I wanted to track the temperature in at least one room: to see when it was comfortable, and to let us re-programme the thermostat to be as efficient as we could get it.

The other thing I wanted to track was the greenhouse temperature – although I must get some new glass to replace the missing panes, as it isn’t going to be that useful with a gale blowing through it.

Two immediate ideas, both involving temperature measurement and tracking. Sounds like a good basis for a project.

I first looked at the Dallas 1-Wire temperature sensors, but as these predominantly need a wire to form the network, and I didn’t want to run cable out and down the garden, I dismissed them. I did find various ideas for making them wireless, but none seemed like the simple solution I was looking for.


One of the sensors. You can see the thermistor and aerial.

I then stumbled on this blog post, which pointed me in the direction of small radio modules with a potential range of hundreds of metres, combined with a very simple text-based serial message system that would make getting the readings super easy. Not only that, but I could theoretically have around 676 devices in the same sensor network, and some of them could also do things like actuate switches… needless to say, this sounded terribly promising!

For the initial setup I decided to do exactly as described in the blog post above.

Parts ordered and delivered: within one evening and a Saturday morning I had a network set up and sending temperature data on a periodic basis.
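For anyone wondering what “super easy” means in practice, here is a rough sketch of reading those serial messages, assuming pyserial and the LLAP-style 12-character frames the XRF modules send (the port name, baud rate and TMPA message layout are typical defaults, not necessarily my exact setup):

```python
# Speculative sketch: read 12-character LLAP-style frames such as
# "aAATMPA21.50" (device "AA" reporting 21.50 degrees C).
import serial

port = serial.Serial("/dev/ttyAMA0", 9600, timeout=1)  # assumed defaults

while True:
    frame = port.read(12).decode("ascii", errors="ignore")
    if frame.startswith("a") and "TMPA" in frame:
        device_id = frame[1:3]                        # two-letter device ID
        reading = frame.split("TMPA")[1].rstrip("-")  # strip LLAP padding
        print(device_id, float(reading))
```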

Next step is to capture the data somewhere (possibly using Firebase – a speculative sketch below) and render the results as a chart. For that you’ll have to see the next post (when I have written it!).
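As a taster, here is what the capture step might look like, assuming the Firebase REST API and the requests library; the database URL is a placeholder:

```python
# Hypothetical sketch: append one reading to a Firebase list via REST.
import time
import requests

FIREBASE_URL = "https://example-sensors.firebaseio.com/readings.json"  # placeholder

def log_reading(device_id, temperature):
    # POST appends the reading under an auto-generated key.
    requests.post(FIREBASE_URL, json={
        "device": device_id,
        "temperature": temperature,
        "timestamp": int(time.time()),
    })
```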


21
Dec 12

How to get the most from any support desk

As a former support analyst still working in a customer-focused team, I see a lot of support tickets and how they are handled. This post summarises some lessons from over ten years working in customer-facing positions. Do these things, and you’ll have the support analyst on your side.

  • Be polite
    Too often, people on support desks have to put up with callers who are rude and impatient. It is all too easy to take frustrations out on the person at the other end of the support line, but you won’t win any favours by being rude.
  • Be patient
    Every new ticket is important to the customer who raised it. It is also likely to be in a queue, and if a problem is affecting several people the queue can be large. Smart support analysts will spot patterns in the tickets coming in and can alert systems teams to potential issues. Systems teams then need ten or more minutes to investigate thoroughly, so patience is a useful attitude.
  • Raise a ticket for one issue at a time
    If you raise a ticket that rambles across umpteen issues, you will confuse both yourself and the support analyst, as neither of you will know which issue any given question is about.
  • Don’t blame the computer system for your inadequacies
    Systems are fallible, but so are humans. Over the years I have seen several customers who direct real anger at a system, driven by the feeling that they are failing because the system isn’t helping them. They then take every opportunity to say the system is unworkable and doesn’t do what they were told it would. Usually, however, the system is working fine: there is often clear evidence that other users of the very same system are succeeding, because they treat the system as a tool rather than expecting it to replace the strategy and planning needed to make it work for them.
  • Be helpful
    Give the support analyst as much information as is relevant to the ticket, but don’t be offended if the analyst asks for something else.
  • Demonstrate that you have used the knowledge base
    Sometimes you need to get beyond the “have you looked at the article in the knowledge base?” response. Support analysts tend to assume that you didn’t bother to look, so show them that you tried and didn’t find anything that helped. Also tell them when you have already followed steps to fix an issue and they didn’t work. All of this helps the analyst get to a resolution more quickly.
  • Say thank you
    Tell the support analyst working on your ticket that you appreciate the time they have taken out of their day to help you.

I could add to these, but they are probably the main things to get right. So go on – make a support analyst’s day, and tell them they’ve done a great job!


04
Sep 12

The Digital Essay

If you haven’t seen it already, you should have a look at this Digital Essay by Will Self. Not because you are a fan of Will Self or necessarily interested in Kafka’s Wound, but because you are interested in the way an essay can be brought to life through embedded references. I spent a good portion of a very interesting hackday at the National Archives in March talking to Helen Jeffrey from the London Review of Books about Linked Data, and how those concepts, applied to something like an essay, might make it a different experience.

For my hackday entry, I used a graph to illustrate the connections between letter writers in the ancient correspondence of Henry III (and others) between 1175 and 1538. Connections between people and graphs are natural bedfellows.

In the digital essay, a graph is used to illustrate connections between references to external sources. The references give a flavour to the essay – illustrating and extrapolating rather than merely backing up the author’s intent. I wouldn’t ordinarily have the sticking power to read an essay, but the tempting insights from other media are enough to make me want to go back to the beginning and read it through properly.

The graph shaped table of contents invites interaction and – in a wheedling crone’s wheeze – says, “Let me tempt you in, with candy and bright colours to whet your appetite and draw you further in to this gingerbread essay”.

My one criticism is that I cannot quite work out why things in the table of contents are connected. Well, I can with a bit of thought, but the connections could be more meaningful. Who are the people this essay mentions? Where are the places that form the essay’s backdrop? How do those people and places link to the creative and archival works curated to illuminate the essay? As Will says in his opening paragraph:

“…I find I cannot prevent myself from linking one idea with another purely on the basis of their contiguity, in time, in place, in my own mind…”

Giving the author the power to link such concepts to sections of the essay, in a way that triggers emotive neural connections, could be argued to be a true advantage of a digital essay: a way for the author to lay out their mind as a set of associations interwoven with, and adding to, the meaning of the words.



01
Aug 12

View from my new desk

Today I can write this post with a view from my new desk. I will be Talis Education’s first Technical Consultant, and as such will be exploring ways in which you – as users of Talis Education’s learning resource management and delivery tools – can integrate these into your institutional systems and make the most of your connections with the Education Graph.

Launched two years ago, Talis Education’s first commercial application, Talis Aspire Campus Edition, has now been adopted by more than 25% of universities in the UK. With that successful validation of the approach, plans for the next phase include building out new products in both the enterprise and direct-to-end-user spaces.

My role here is to help people integrate, extend and explore Talis Education’s product and APIs.


13
Jul 12

Nobody has given up on Linked Data

In light of the news that Talis Systems is suspending its investment in a generic semantic web platform and its semantic web consulting business, I wanted to explore why I think you shouldn’t draw gloomy conclusions about Linked Data and the web of data.

But first a short aside to illustrate what I think is one of the core differences between a graph based approach and a relational database approach to building applications.

When you try to think about data as anything other than a description of the way the world looks and works, you have to make compromises in your view of the world. The biggest compromise trap you will likely fall into is assuming that anything you don’t know about doesn’t exist, and therefore cannot be true.

Faced with such a closed outlook, you are going to find it really difficult to react to challenges that force you to accept that your closed world is a bit bigger than you thought. One of those difficulties is deciding whether to live with the status quo, or to spend time and effort rewriting software to make use of this newly extended view of the world.

So don’t do that.

The core concepts of Linked Data and the web of data allow you to build a view of your world, described by your data, which your applications can then feed off. Your applications become either parasitic or symbiotic depending on whether they are purely consuming data or consuming the data to generate some new insight which is fed back into the system as new data.

Because one of the assumptions of Linked Data is that there is other stuff you don’t know about, you have to build your applications to assume the same. Your application can become more aware of the types of data it is dealing with, and recognise data patterns that it knows how to display or work with. This is a more organic way of designing applications that, to my mind, feels more natural.
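To make that concrete, here is a toy sketch of the symbiotic pattern, assuming rdflib (any RDF library would do): the application consumes only the patterns it recognises, ignores the rest, and feeds a new insight back into the graph as data.

```python
# Toy example: consume known patterns, ignore unknown ones, write back.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.room1, EX.temperature, Literal(21.5)))
g.add((EX.room1, EX.somethingThisAppIgnores, Literal("no problem")))

# Consume only the pattern this app understands...
readings = list(g.subject_objects(EX.temperature))
for room, temp in readings:
    # ...and feed a new insight back into the system as new data.
    g.add((room, EX.isComfortable, Literal(temp.toPython() > 18)))
```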

This is just one of the ways in which organisations like Talis have changed the way they build software. Just because the economics didn’t work out for Talis in building a generic semantic web solution, it doesn’t mean that the lessons we have learned over the last few years don’t apply to your specific problem area.


05
Jul 12

Too early, too slowly

You’ll have heard the news. Talis Systems is being wound down. The considerable investment that Talis has made in fostering the vision of the web of data has resulted in notable successes with organisations such as the BBC, the Ordnance Survey and the British Library. I’m proud to have been a part of the world-class consulting team that helped get these organisations to the point where they could see the benefits of, and join, the web of data vision. However, the commercial realities of a small organisation working in a market growing at too slow a rate meant that we could not sustain the required level of investment.

For the last year and a half I have been talking to organisations about how they use data, and how they want to take steps to make that data more openly available. This is worthy stuff, but for most organisations it is also experimental stuff. Some were more willing to go along for the ride than others, but even those organisations baulked at changing everything all at once.

And that’s not surprising.

I’ve written before about how graph thinking and open world assumptions make you approach a project in different ways. Some organisations are not ready to do that. They feel that a more open approach threatens their existing revenue streams, yet they don’t see that the people who currently pay for data in its existing form will continue to do so for some time, because those customers are also resistant to change. There will come a time, though, when those customers feel they need to change the way they do things. If you want to protect your current revenue stream, you have to explore new opportunities so that you are ready when your existing customers move to a new technology.

But still, as employees working in an uncertain climate, not everyone is willing to risk their own standing within an organisation. Especially when you look outside that organisation and you see unemployment, recruitment freezes and belt tightening.

Which brings me to the factor in winding down Talis Systems that couldn’t be foreseen: pure timing. A general attitude of wariness has emerged from the current economic climate; the Queen’s jubilee and a longer royal wedding holiday pushed spending decisions into the following months; August is typically apathetic, with everyone on holiday; and a decision freeze for the Olympic Games capped it off. All of this made people put off purchasing decisions, especially decisions relating to experimental projects. We couldn’t afford for everyone to wait until September to decide to do something.

This indecision is one of the indicators of a slow moving market that is not ready for commercial exploitation by a small business.

So we were too early. We had a vision of easy data flow into and out of organisations, where everyone can find what they need in the form they need it, through the use of linked data and APIs; where data streams could be monetised and data layers could add value to your datasets. But for many, the vision simply seemed to be a dream. The difference is that a dream is fantastical, while a vision is a practical goal. I think we had a vision; others saw a dream – something unobtainable, floating around within the cloud computing, big data, semantic web and linked data marketing spiel.

Other organisations sharing similar visions have also had to change the way they present themselves as they realise that the market is simply not ready for something so new.

I have had the privilege over the last few years of working with many very smart people, both within Talis and within the organisations who engaged us as consultants and providers of a service around their data. I wish we could have gone on longer, but sensible business decisions have been made and must be stuck to.

I look forward to sharing with you what I’m going to do next.

As a footnote: I should point out that Talis Education Ltd are still going strong with the Talis Aspire reading list management tool which is used by around 25% of UK universities and by similar organisations internationally. This is where Talis Group investment will be focused.


18
May 12

Why you should learn [code]

This was going to be a rather long biographical post about how I learnt to code. But I deleted all the boring stuff and I leave you with this reusable nugget…

Simply replace [code] with another subject.

Learning [code] was my way of solving a problem. If your problem can be solved by [code], then learn it. If not, don’t.  Don’t learn [code] just because you think you should learn [code].  Don’t let anyone make you learn [code] if it isn’t going to help you solve a problem.

Now go and reform the education system so that we teach our children how to choose the right tool to solve a problem, and not how to pass tests.


18
May 12

Working the internal data universe

When I first started writing code, I wrote in a typically haphazard way. (Actually I still do, but now with half an eye on well-formed code style.)

But what I definitely used to do was store the application’s persistent data in a relational database backend. I wrote a CMS from scratch to host this website (long since decommissioned), and a system for working out how much parents owed a nursery, based on the time their children had been there compared to contracted time (there were penalties for late collection). Both of these web apps required persistent data, and at the time my first recourse was a relational database.

Working with a relational database didn’t sit well with my style of fast iteration: get the problem solved, then tidy up afterwards to make sure the code is workable and readable.

Having to constantly add columns to a table as new pieces of information needed storing – for optimisation, or to support some new feature – was a pain. It broke code on more than one occasion, and created hard-to-pin-down bugs that were usually the result of a select statement calling for named columns that had since been renamed, moved or dropped.

I wasn’t agreeing my schema up front, because I didn’t know exactly what data I would need to solve the problems. I admit I could have used coding strategies to abstract the database layer from the application layer, but even when I tried to implement MVC, it seemed to just create more code and dependencies.

Contrast that to how I think about solving a code problem today.

Graph thinking, graph databases, the graph: these are all things I didn’t know about when I started writing code. Now, when I am thinking about solving a problem, I think about exactly two things:

  • What data does my application use?
  • What do I need to show to the user?

The answer to the first question is not a database schema, nor is it something that will break my code if changed. It is a description of the data my app will interact with, described as a graph so that I can simply add more description as appropriate.

The answer to the second question is that each bit of code knows about the small part of the graph it needs in order to function correctly. Each function knows exactly which bits of the graph it needs, and how to get them. If the shape of the graph changes, it will either flag an error if something critical is missing, or simply ignore the changes and continue working as before – as in the sketch below.
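Here is a small sketch of that in practice, assuming rdflib and a made-up fragment of the nursery-hours data from earlier: the query asks for exactly the shape of graph this function needs, and the OPTIONAL part is simply absent when the graph doesn’t have it, rather than breaking the code.

```python
# Sketch: each function asks for just the bit of graph it needs.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:child1 ex:hoursAttended 32 ; ex:hoursContracted 30 .
ex:child2 ex:hoursAttended 25 .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?child ?attended ?contracted WHERE {
    ?child ex:hoursAttended ?attended .
    OPTIONAL { ?child ex:hoursContracted ?contracted }
}
"""
for child, attended, contracted in g.query(query):
    # contracted is None when that part of the graph is missing.
    print(child, attended, contracted)
```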

I said earlier that I didn’t agree my schema up front because I didn’t know what data I would need to solve my problems. I realise now that if I had simply described the things my application would be interested in, I would have had more than enough data to get going with. I would also have been able to add new data back into the graph, thus persisting the solutions to the problems I was solving.

But I couldn’t do that with a relational database, because of the key difference between relational and graph databases: with a graph, I am no longer forced to assume that the edge of my database is the edge of the world, and that anything outside it does not exist.

There are challenges to this way of working too, but I find it far more natural. I even find that new opportunities for data display or analysis become apparent as the data morphs into shape.

In short: working with graph data is more like working with the data universe inside my head. And that view of the world is exactly that – a view of the real world.


30
Mar 12

A letter from the Middle Ages

This post originally appeared on the Talis Consulting Blog

Well actually, not just one letter, but over a thousand letters from the Middle Ages.

Last weekend, the National Archives held a hackathon in the reading room at Kew, where around 40 developers and interested people took data from the National Archives and played with it. There were new mobile interfaces for the NRA discovery API; collections of tweets mined for the data and PDFs they contained; and stats on historical participation in the Olympics pulled from the archives and shown on interactive maps. In all it was a fun weekend, with lots of smart people in the room and very quiet but rapid typing on keyboards to get something finished by the 4pm Sunday deadline.

Prizes were:

  • 1st – Jonathan Tweed and Kai En Ong (ably assisted by Michael Smethurst, Faith Mowbray and Paul Rissen). A hack that pulls out data surrounding people & places in documents tweeted by @ukwarcabinet (and which – for a hack – is beautifully presented!).
  • 2nd – Jamie Mahoney – Debtors & creditors dataset hack maps the most popular lenders & shows who’s borrowing from where – Show me the money.
  • 2nd – Tim Hodson – A hack showing who wrote to whom in the middle ages.
  • 3rd – Crystal and Steven Hirschorn – A hack showing participation in the Olympics on an interactive map.

You can read more about these entries on the National Archives blog.

I hope you’ll forgive me showing off my joint-second-prize-winning contribution to the pizza and jelly baby fuelled hack fest.

I took a suggestion from Paul Rissen as a personal challenge, and started pulling the data I wanted into a new CSV file. I then converted that CSV file to a rudimentary RDF-based model of the letters and the people the data described. I now had a graph dataset which captured – in the way only a graph can – the network of relationships between people who were corresponding. It was then a case of finding a suitable JavaScript library to render my graph visually, and to allow people to find out who wrote to whom without cluttering up the graph diagram.
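The CSV-to-RDF step was roughly of this shape – a sketch assuming rdflib and a hypothetical letters.csv with sender and recipient columns (not the real dataset’s column names):

```python
# Hypothetical sketch: turn rows of correspondents into graph edges.
import csv
from urllib.parse import quote
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/letters/")
g = Graph()

with open("letters.csv") as f:
    for row in csv.DictReader(f):
        sender = EX[quote(row["sender"])]
        recipient = EX[quote(row["recipient"])]
        # Each letter becomes an edge between two correspondents.
        g.add((sender, EX.wroteTo, recipient))

g.serialize(destination="letters.ttl", format="turtle")
```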


24
Mar 12

Riding London

I find myself increasingly frustrated by the idea of having to use public transport, especially when I have heavy bags to carry and know that I could make the journey much more easily by bike. So last night I put in extra effort to make sure that I could ride my bike across London today. My folder is a Birdy Blue, and as such has mudguards that are integral to the bike’s stability when folded. They are also vulnerable, and have got somewhat damaged over the last two years – so much so that the front is held together with black gaffer tape and the rear has split in half.

My new set of mudguards arrived last week, so I would normally have spent my Saturday at the Wolverhampton bike shed fixing other people’s bikes, and then in our yard fixing up my own. This week, however, I was bound for London to attend a hack day, so Saturday bike tinkering was out of the picture.

For a while I was weighing up the options for travelling from Marylebone to Kew, and thought I could probably do it via the Underground fairly easily. But then I thought of the bag of stuff I’d be carrying and how I would have to lug it around the stations. The wheeled bag is a drag (literally) and doesn’t fit on the front rack of a Boris bike, and a backpack is out of the question because of the weight.

I kept looking at the Birdy and realised that I wasn’t going to be happy on this journey without it. So I set to, and after removing two stubbornly steel-to-aluminium-bonded screws with the aid of a drill, I am now enjoying the prospect of cycling to Richmond on a gloriously sunny London day.

Cycling is my favourite transport.