Checking in on our Battery Savings

When I last wrote about our Solar Battery Savings, we had just moved onto a battery charge schedule involving two daily grid-charges. The underlying idea was that this should help unlock additional savings by shaving both the morning and evening price peaks.

Aside from occasional temporary changes during Octopus Power-ups, the schedule has remained the same for a little over a month, so I thought it was worth reviewing how it has performed in that time.

In this post I'll talk about the savings performance delivered by our solar battery, as well as possible contributing factors such as fluctuations in energy prices and our usage patterns.

Read more…

Building a hard pelmet for a bay window

Bay windows give the perception of adding quite a bit of space to a room. In a bedroom, they allow furniture to sit a bit further back than it otherwise could, leaving more space for the occupants to stumble about in.

We've got a small unit sat in the bay, with the bed placed against the opposite wall: this layout allows for easy access to both sides of the bed, making the best possible use of the space that's available.

The one problem with this arrangement, though, has always been the curtains. Although they're heavy, lined blackout curtains, they do let a lot of light creep in around the top.

Photo of our bay window curtains

The windows have a thick frame, which the curtain pole mounts have to reach out beyond, resulting in there being more than enough space for reflected light to work its way upwards and into the room.

The effect in summer is that, at about 4am, any possibility of sleep is ended by the room being flooded with sunlight.

AI generated cartoon showing a calm sun outside the window at 03:59. At 04:00 it's a burning fireball trying to climb through the window. Being AI generated, the feet look wrong and the sun's hand has somehow gone through the wall

Things are a little better in winter: the sun's assault starts later (if it even bothers to show up that day), but the room ultimately gets similarly illuminated.

In order to help address the issue, I decided that we should install a pelmet (for the Americans, that's either a "Box Valance" or a cornice) over the curtains to block the path of those uninvited rays of sunlight.

There was one problem with that idea though: bay windows tend not to be uniform in size. In fact, even finding and fitting curtain poles tends to involve entertaining some kind of bespoke arrangement.

Clearly, buying and quickly fitting something pre-made was out, so I had to make my own.

In this post, I describe how I went about constructing and upholstering a simple hard pelmet for our bay window.

Read more…

Using Mastodon as an RSS Feed Reader

There are a number of blogs and serials that I really enjoy reading and, where available, I've tended to subscribe to their RSS feeds using Nextcloud News.

What I've noticed, though, is that amongst everything else I do on my phone, the new-article notifications tend to get lost. The result is that I quite often don't notice that a new post has been published unless I happen to see someone else toot it into the fediverse (for example, the prompt for this post is that I very nearly missed out on this great post).

Just under a year ago, I wrote a simple RSS to Mastodon bot to automatically toot links to my own posts.

Given that fediverse notifications are one of the things that I do look at fairly frequently, it occurred to me that it might be an idea to re-use my existing code base to stand up a "private" bot: it could monitor the RSS feeds that I care about most and publish a followers-only toot when something new appears. I'd then also be able to set Mastodon to always notify me when the bot tooted.

In this post, I talk about building a configuration to use my existing bot code to push new posts into my Mastodon notifications so that I (hopefully) never miss another great post.
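The key to keeping the bot's toots out of public timelines is Mastodon's status visibility setting: posting with `visibility="private"` makes a toot followers-only. As a rough sketch (the instance URL and token below are placeholders, not taken from the bot's actual code), posting such a status via the standard `POST /api/v1/statuses` endpoint looks something like this:

```python
import json
import urllib.request

API_BASE = "https://mastodon.example.com"  # hypothetical instance URL
ACCESS_TOKEN = "your-bot-token"            # token for the private bot account

def build_status_request(text: str, visibility: str = "private") -> urllib.request.Request:
    """Build a request for Mastodon's POST /api/v1/statuses endpoint.

    visibility="private" makes the toot followers-only, keeping the
    bot's feed announcements out of public timelines.
    """
    payload = json.dumps({"status": text, "visibility": visibility}).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/v1/statuses",
        data=payload,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_status_request("New post: https://example.invalid/some-post")
# urllib.request.urlopen(req)  # actually send the toot
```

With the bot account set to followers-only and its notifications enabled in Mastodon, each new feed item then surfaces as a notification rather than getting lost amongst everything else.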

Read more…

Wall mounting and Wifi controlling an Oil Radiator

It's that time of year again: the clocks have changed and the evenings are getting colder and darker.

Each year, when the heating first starts to kick in, I take a little bit of time to review whether there's anything that we can do better this year than last.

For the last few years, our attic room has used a smart-socket controlled portable Oil radiator as an additional heat source (I wrote about setting it up here).

However, I've never been massively comfortable with that radiator: because it's free-standing, it could get knocked over, leading to $badness when it next switches on automatically. It could also thoughtlessly be unplugged to free up a socket for something else, leaving the room without the benefit it was intended to convey.

To top it off, it's also quite bulky: if it's put anywhere useful, it tends to feel like it's getting in the way.

So, one of the things that I decided I wanted to do this year was to replace it with a wall-mounted oil radiator.

In this post, I'm going to talk about taking a new floor-standing/portable oil radiator and wall mounting it, before wiring it via a Shelly V1 Smart Relay so that it can be controlled by HomeAssistant just as the existing radiator was.

Read more…

Reviewing our Solar Battery Savings

A couple of weeks ago, I realised I'd made a horrendous reporting mistake which negatively skewed our view of solar battery performance whenever the battery was charged from the grid.

Aside from there being a bit of egg on my face, this also meant that I needed to re-assess various charging approaches, including whether shaving the morning pricing peak is worthwhile.

I've now had a couple of weeks of watching closely whilst double checking the calculations that my reporting system performs each day.

By happenstance, we've also had very few Octopus Power-ups during that period. Obviously, not getting free electricity is disappointing, but removing their impact from amortisation stats does make it a little easier to assess the benefits that the battery is bringing.

In this post, I'll talk a little about what I've been doing, look at the savings our battery has yielded over the last 14 days and look at why some days report lower stats (or go negative).

Read more…

Correcting My Grid Charge Calculations

I've been looking for ways to improve our (somewhat dire) solar battery savings ever since I calculated the daily savings that we were achieving.

One of the approaches that I identified and tried was scheduling grid charging in order to increase the level of use that the battery saw - the idea being that the battery would see use even on cloudy days, with more use generally equating to greater savings.

When I reviewed this change recently, I was quite disappointed to find that it had had quite a mixed effect, with negative savings occurring much more frequently than before. This wasn't entirely unexpected, as an inability to dynamically schedule charging means that we buy power in at the same time each day, regardless of price or consumption expectations.

Today, however, I found that these negative savings were actually the result of a (stupid) reporting bug in the grid-charging calculations and that grid-charges deliver better savings than previously reported.

In this post I talk a little about the bug itself as well as looking at the re-calculated figures.
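The shape of the underlying sum is simple, which is partly why the bug was so embarrassing. As a simplified illustrative model (this is not the actual reporting code, and the rates are invented), a day's battery saving is roughly the cost avoided by discharging minus the cost of any grid charge:

```python
def daily_battery_saving(discharged_kwh: float,
                         avoided_rate_p: float,
                         grid_charged_kwh: float,
                         charge_rate_p: float) -> float:
    """Simplified model of one day's battery saving, in pence.

    Energy discharged from the battery avoids buying at the (higher)
    daytime rate; energy used to charge from the grid is bought at the
    (lower) overnight rate.
    """
    avoided_cost = discharged_kwh * avoided_rate_p
    charge_cost = grid_charged_kwh * charge_rate_p
    return avoided_cost - charge_cost

# 4 kWh discharged avoiding 30p/kWh, having charged 4.5 kWh at 15p/kWh
print(daily_battery_saving(4.0, 30.0, 4.5, 15.0))  # 52.5 (pence)
```

A model like this also shows how a day can legitimately go negative: if cloudy weather means little of the purchased charge gets used, the charge cost outweighs the avoided cost. A miscalculation on either side of that subtraction skews the whole picture.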

Read more…

Shaving the Morning Peaks

As part of my quest to drive up the savings that our solar battery is able to generate, I recently adjusted schedules to allow for a small overnight charge.

This was done in order to try and ensure the battery had a charge so that it could shave the morning pricing peak.

It's... uh... not gone particularly well, so as I'd mentioned it previously, I thought I'd follow up with a relatively short post detailing the impact that it had.

Read more…

Improving our Solar Battery Savings

Last month, I analysed the performance of our solar install and found that, whilst our Solar battery was generating daily savings, it would very likely never save enough to offset its purchase cost.

There were a number of factors involved, but one of the bigger ones was that the battery wasn't always being sufficiently charged, with our average max daily charge level being 72%. On sunny days, the battery gets a full charge but, British weather being what it is, there are plenty of days dragging that average down.

As I noted at the time, Solar isn't the only source from which the battery can be charged: there's also the option of charging from the grid (ideally when prices are low).

So, in the month since, that's exactly what I've done.

In this post, I'm going to describe how I've configured things, analyse the impact of the change and also talk about some changes that my electricity supplier (Octopus) has recently made.

Read more…

Collecting data from a Bip 3 Smartwatch with Gadgetbridge, Nextcloud and InfluxDB

A few weeks ago, I wrote about how I was fetching health data from Zepp's API in order to write data collected by my Amazfit Bip 3 Pro Smartwatch into InfluxDB.

It hadn't originally been my intention to interact with Zepp's API: my first choice had been to pair the watch with Gadgetbridge, which would have allowed me to almost entirely avoid using the manufacturer's app. Unfortunately, I found that the Bip 3 wasn't supported by Gadgetbridge, so I had to abandon that approach in favour of identifying and polling the various API endpoints called by Zepp's app.

By sheer coincidence, though, a day after I published my post, someone else ran into the same lack of support and found time to raise issue #3249 in Gadgetbridge's tracker. Within a couple of weeks, support for the Bip3 had been added.

About a week ago, people started contacting me to tell me that Gadgetbridge were in the process of adding support for the Bip, so I installed the latest nightly build and started testing.

Thanks to the work and patience of the Gadgetbridge contributors, the data that my Bip3 collects is now available to me without first needing to be sent to Zepp's servers.

In this post, I'll talk about how my new workflow automatically takes data from the Gadgetbridge database and writes it into InfluxDB for visualisation in Grafana.

Read more…

Looking at the Instagram Account Review Process

Less than a week ago, I set up a system to automatically announce new blog posts on Threads; however, it is no longer active.

Despite my account having only posted about 4 times (with only one of those being an automatic post by the script above), it got suspended because they felt it wasn't in line with their "Account integrity and authentic identity" rule.

In particular:

We don't allow people on Instagram to create fake accounts.

Initially, I assumed that this was caused by my publishing bot periodically logging in, but a search on the net suggested that others have had similar experiences. Update: they've since issued a cease-and-desist to the library I was using.

When an account is suspended, the user is given the option to appeal. However, my appeal was rejected and the account is now set to be permanently deleted:

Screen shot of page telling me my account is disabled and will be deleted. It notes that I cannot request another review

I'm not overly concerned about the loss of my Threads account: in the brief time that I had access, the platform didn't really engage me (and it seems that the same is true for others too), not least because I struggled to find accounts that I actually wanted to follow amongst the noise (in the fediverse, I tend to follow certain hashtags - something that Threads doesn't support at all).

Having now been through the Threads appeal process, I wanted to write down some of my observations about the detection and appeal process that Instagram/Threads follow, because (IMO) it's overly invasive and potentially quite deeply flawed.

Read more…

Using BlueSky Features As Disinformation Tools

Recently, whilst working on implementing automatic posting into BlueSky, I ran into an issue with link-preview cards not being displayed.

Posts are submitted into BlueSky using ATProtocol, which places the onus on the sender to generate and provide the preview card, so that a rich preview can be displayed alongside the post text.

In my other post, I described the need to do this as being a pain in the arse. However, there's more to it than that: having the ability to submit arbitrary card content is problematic because it can be used to facilitate disinformation campaigns.

Bluesky also uses facets, which allow the sender to turn text into arbitrary hyperlinks, presenting its own set of issues.

In this post, I'll explain why giving the sender control over these items is potentially harmful.

Note: I did email Bluesky detailing my concerns, but given that

  • The ability to do this is publicly documented
  • It turns out it's also something that BlueSky were already made aware of and have defended.
  • Update: 2 weeks later, they've still not replied at all

There didn't seem to be any value in delaying disclosure: it's better to ensure there's awareness of the issue.

Read more…

Posting into BlueSky, Nostr and Threads from Python

At the beginning of this year, I wrote about how I was starting to play around with automating syndication of my content into various social networks in order to better pursue the approach known as POSSE. That earlier post provides quite a long explanation of why I prefer to write here rather than elsewhere, so I won't revisit that now.

The underlying concept, though, is simple: I publish content on my own site, something picks up on the change in my RSS feed, and then posts it into social networks to help increase discoverability.

In this post, I want to write about how I've implemented support for automatic posting into Nostr, Threads and BlueSky.

Read more…

Writing data from a Bip 3 Smartwatch into InfluxDB

I've done a fair bit of playing around with my Watchy since I bought it a couple of months back and, generally, I really like it.

Unfortunately, as cool as it is, it's proven just a little too limited for what I need: if nothing else, it turns out that I really need my watch to be waterproof because I sometimes stick my arm into an aquarium without thought for my watch.

So, I started looking for a more suitable alternative.

I really wanted another open source watch, but nothing quite fit the bill: the Bangle.js 2 looks great, but (like the Pine-Time) isn't suitable for swimming (something I used to do regularly and want to get back in the habit of). After evaluating the (sadly quite small) Open Source market, I decided that I'd have to look at more proprietary options.

Ultimately I settled on an Amazfit Bip 3 Pro: it's decently waterproof and has a range of sensors (more than I need really). The Bip U Pro, which is basically a cheaper version of the same watch, was also an early contender until I saw the words "Amazon Alexa Built-in". Nope, not for me, thanks anyway.

An important factor in choosing the Bip was that it appeared that, instead of using the manufacturer's app, I could pair the watch with Gadgetbridge, affording control over the collected data rather than having it sent to some proprietary silo.

In this post, I'll talk about how I built a scheduled job to fetch the health related data that my Bip 3 Pro smartwatch records in order to write it onwards into an InfluxDB database.

Read more…

Evaluating the Break Even Period of Our Solar Battery

A little while ago, I wrote a post on Monitoring Solar Generation stats with InfluxDB, Telegraf and Soliscloud.

Since then, one of the things that I've been working on is a Grafana dashboard to track our path towards break-even: that is, when the system has "saved" us enough that it's paid the costs of purchase and install.

As well as charting the break-even path of the system as a whole, the dashboard also calculates individual break-even for the battery. Because battery storage is an optional part of a solar install, I thought it'd be interesting to calculate what kind of difference it was making versus the cost of adding it.

I actually sort of wish that I hadn't, because the thing that's stood out to me is just how long the battery's break-even period actually is.

In this post, I'll talk about how I'm calculating amortisation stats, what I'm seeing, possible causes and what I think it all means.
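The projection behind a break-even chart is straightforward extrapolation. As an illustrative sketch (the figures below are made up, and the dashboard's actual queries will differ), the remaining cost divided by the recent average daily saving gives an estimated break-even date:

```python
from datetime import date, timedelta

def projected_break_even(cost_gbp: float,
                         saved_so_far_gbp: float,
                         avg_daily_saving_gbp: float,
                         today: date) -> date:
    """Project the date a purchase pays for itself, assuming the
    recent average daily saving continues indefinitely."""
    remaining = cost_gbp - saved_so_far_gbp
    days_left = remaining / avg_daily_saving_gbp
    return today + timedelta(days=round(days_left))

# e.g. a £3000 battery saving an average of 50p/day
print(projected_break_even(3000.0, 0.0, 0.5, date(2023, 1, 1)))
```

Run the invented numbers above through that and the answer lands well over a decade out, which is exactly the sort of result that made the battery's amortisation period stand out.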

Read more…

Golang HTTP/2 connections can become blocked for extremely long periods of time

I've recently had cause to try and explain this issue to a few people, so I figured it was probably worth logging a synopsis in my Gitlab for ease of future reference. I had intended to create something more like this, but in the process of writing it up, it's ended up turning into more of a blog post - so here we are.

There is an issue in Golang's HTTP/2 client which can lead to connections becoming blocked for a prolonged period of time. When this happens, subsequent requests fail, because broken connections continue to be selected from the connection pool.

I originally logged this as upstream bug Go#59690, but because it was aimed at an audience familiar with both the code and the underlying fundamentals, the description there makes a few assumptions about the knowledge of the reader.

The intent of this blog post is to provide a broader and higher(ish)-level overview of the issue, as well as details of how to mitigate it until such time as it's permanently fixed in Go's net packages.

Copies of the repro code used in this post can be found in my article-scripts repo.

Read more…

Collecting Octopus Energy Pricing and Consumption with Telegraf

As an energy supplier, Octopus Energy are pretty much unique (at least within the UK energy market), not least because they expose an easily accessible API allowing customers to fetch consumption and pricing details.

Like many others, I'm on Octopus Agile, so being able to collect past and future pricing information is particularly useful because that information enables us to try and shift load to when import rates are most favourable.

At times, this can be incredibly beneficial: for example, at time of first writing, the rates were negative, so we were actually getting paid (albeit a small amount) for the energy that we were consuming.

Screenshot of Octopus's description of Plunge pricing. When supply outstrips demand, prices drop and occasionally go negative.

The next day's prices are published at around 4pm each day, allowing some manner of planning ahead (more plunge pricing tomorrow, yay!):

Screenshot of Octopus agile prices for the previous and next 24 hours. Most of tomorrow is in negative prices....

For those who want to build automations, there's an excellent integration for HomeAssistant, however, I spend more of my time in Grafana/InfluxDB than HomeAssistant so I also wanted Telegraf to be able to fetch this information.

In this post, I'll detail how to set up my Octopus Energy exec plugin for Telegraf and will also provide some examples of how I've started using that data within Grafana.
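A Telegraf exec plugin is just a script that prints InfluxDB line protocol to stdout for Telegraf to collect. As a rough sketch of the idea (the measurement and field names here are illustrative, not necessarily those my plugin uses), converting one Agile unit-rate record, as returned by Octopus's `/standard-unit-rates/` endpoint, might look like:

```python
from datetime import datetime

def rate_to_line_protocol(rate: dict, tariff: str) -> str:
    """Convert one Octopus Agile unit-rate record into InfluxDB line
    protocol, as a Telegraf exec plugin would print it.

    Line protocol format: measurement,tags fields timestamp-in-ns
    """
    ts = datetime.fromisoformat(rate["valid_from"].replace("Z", "+00:00"))
    ns = int(ts.timestamp() * 1_000_000_000)
    return (f"octopus_pricing,tariff={tariff} "
            f"cost={rate['value_inc_vat']} {ns}")

# A record in the shape returned by the Octopus API
sample = {"value_inc_vat": 21.5, "valid_from": "2023-05-01T00:00:00Z"}
print(rate_to_line_protocol(sample, "agile"))
# octopus_pricing,tariff=agile cost=21.5 1682899200000000000
```

Telegraf would invoke the script on a schedule via an `[[inputs.exec]]` section, with future rates carrying future timestamps so they chart ahead of "now" in Grafana.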

Read more…

Running a Lemmy Instance using docker-compose

Recently Reddit made changes to its API in an attempt to nobble third party apps (apparently stemming from concern that its own lacklustre and ad-laden app could not compete on a level playing field). Reddit's management now seems to have moved on from lying about application developers to threatening moderators who are protesting the changes.

Reddit's user- and app-hostile approach looks set to continue for some time and is already driving the growth of Reddit alternatives such as Lemmy and KBin.

Like many users, I've ended up creating new accounts in various places and now only really look at Reddit in order to look in on the drama or to see whether my 3rd party app is still working (which, at time of writing, it is).

I originally thought that I'd end up primarily using KBin (because I preferred the interface). That changed, though, when the news broke that Boost will have a Lemmy compatible adaptation: Boost's interface is probably the reason that I've managed to stay on Reddit for so long - the official app would have led to me drifting away years ago.

With Boost targeting Lemmy, I decided that adopting Lemmy was likely my best long-term option, and that I would look at running my own instance (much like I do with Mastodon).

The Lemmy documentation does contain a guide to installation using docker but (IMO) it's a bit simple and lacking in examples.

In this post I'll detail the process I followed to stand up a docker based Lemmy instance, including where (and why) I deviated from the official documentation.

Read more…

Upgrading a docker-compose based Mastodon server to gain today's security fixes

My previous post detailing how to run a Mastodon server using docker-compose includes a section on how to upgrade an instance built using that approach.

However, today's Mastodon releases (v4.1.3, v4.0.5 and v3.5.9) include important security fixes (especially the fix for CVE-2023-36460), so I thought it was worth a quick post detailing the process that I've followed to upgrade my instance and ensure that I get today's security fixes in place.

Read more…

Connecting my Glow Smart Meter Display to InfluxDB and HomeAssistant

We recently (and slightly begrudgingly) had a smart meter installed: it's more or less a requirement if we want to receive payment for energy exported from our solar install (or, at least, it is if we want any kind of sensible rate per unit).

Unsurprisingly, the IHD that EDF supplied (a Geo Trio) was little more than e-waste (to be fair, they're not alone: research found that IHDs have little impact on people's energy usage habits) and soon found its way to the electrical recycling bin (really, I was quite good - the temptation to try and serial into it to mess around with the meter hub's Zigbee network was quite strong).

I go a little bit further than most when it comes to monitoring our electrical usage, so with a smart meter now collecting and submitting readings periodically, I wanted to move to using these instead of those collected by my previous solution (an Owl Intuition clamp meter).

Unfortunately, unlike some other suppliers, EDF doesn't appear to make readings available via API. In fact, despite the meter submitting readings every 20 minutes (UPDATE: apparently it doesn't submit them, it stores them locally and then the supplier retrieves the last 48hrs once a night), it can take days for details to appear in their "energy hub" web-portal (for example, whilst proof reading this on 2 Jul, the most recent hourly stats in EDF's hub are from 29 June!).

I find this.... disappointing, to say the least. The provider has gained the ability to effectively remotely disconnect us (even if by accident) and can't even provide the means to pull usage metrics in a timely manner? GTFO.

A Solution

Thankfully, I remembered reading (somewhere) about an after-market IHD which had the ability to write usage metrics out to an MQTT broker - a quick search found the Hildebrand Glow IHD. This promised to give near real-time readings, much like those I already have via Owl Intuition.

This post details the process that I followed to:

  • Stand up a Mosquitto MQTT Broker for Glow to write into
  • Configure Telegraf to subscribe to the MQTT topics and write into InfluxDB
  • Create a Grafana dashboard to visualise the stats
  • Have HomeAssistant also fetch the stats for use in automations

Because of strict rules on what can and can't be connected to a smart meter, it is necessary to create a (free) account before ordering the Glow. The account is easiest to create via the Bright App.
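The Glow publishes meter readings as JSON onto MQTT topics, so the Telegraf side is mostly a matter of subscribing and picking fields out of the payload. As a hedged sketch (the payload layout below is illustrative - check the messages on your own broker for the exact structure your firmware produces), extracting instantaneous power and cumulative import might look like:

```python
import json

def parse_glow_payload(raw: bytes) -> dict:
    """Extract instantaneous power and cumulative import from a Glow
    IHD MQTT message (payload layout illustrative; inspect your own
    broker's messages for the exact structure)."""
    msg = json.loads(raw)
    meter = msg["electricitymeter"]
    return {
        "power_kw": meter["power"]["value"],
        "import_kwh": meter["energy"]["import"]["cumulative"],
    }

# A message in roughly the shape the IHD publishes
sample = json.dumps({
    "electricitymeter": {
        "power": {"value": 0.42, "units": "kW"},
        "energy": {"import": {"cumulative": 1234.5, "units": "kWh"}},
    }
}).encode()
print(parse_glow_payload(sample))
```

In practice, Telegraf's `mqtt_consumer` input can do this parsing itself with a JSON data format, leaving the values ready to write into InfluxDB.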

Read more…

Linking my Watchy Open Source Watch to InfluxDB

A few weeks ago, RevK posted on Mastodon about his newly arrived Watchy - an open-source e-ink based wristwatch.

There I was, minding my own business, and then... nerd-sniped by a toot.

There are a number of Open Source watches on the market, but the Watchy is one of very few with Wifi support. That, combined with the battery life that the e-ink display makes possible, made the Watchy stand out: having external connectivity available enables a range of tinkering possibilities.

It's true that it's perhaps not as pretty as the PineTime or the Bangle.js, but their lack of direct external connectivity means that neither currently holds quite the same "oh, I could build X" appeal for me.

By default, the Watchy periodically polls the OpenWeatherMap API to fetch temperature and local weather information. What I wanted to do was have the Watchy also connect to an InfluxDB instance so that it could fetch additional information as well as writing some watch-originated stats out for dashboarding elsewhere.

In this post, I'll talk about how I connected my Watchy up to InfluxDB so that it can both read and write metrics. The approach that I used should be valid on anything Arduino-ish, and a full copy of the code referred to can be found in my article scripts repo.

Read more…

Monitoring Solar Generation stats with InfluxDB, Telegraf and Soliscloud

Solar has been on our wish-list for quite some time, but never quite got beyond the "we should probably look at doing that next year" stage.

Last year, though, things changed: we saw huge energy price rises as the result of Russia's invasion of Ukraine, followed by interest rates rocketing in response to the abject ineptitude of Liz Truss's government. The result was that we decided it was time to bite the bullet and get onto an installer's waiting list.

Solar installations tend to consist of 3 main components - Photovoltaic (PV) Panels, at least one Inverter and a Meter. Some (us included) also add a battery for storage.

The inverter converts DC from the panels (and battery) to AC, but also acts as a router, communicating with each of the other components in order to decide whether to send power to the battery, house or grid.

There are a wide range of Solar Inverters on the market, each with their own pros and cons. In practice though, consumers don't always get much choice over the inverter that they get (at least not unless they're willing to switch between installation companies).

The inverter that came with our installation was manufactured by Ginlong's Solis.


Most modern solar inverters report generation and usage statistics back into infrastructure managed by the manufacturer. Solis, like many others, exposes these metrics to consumers via an online UI offering monitoring of current and historic inverter and panel output, as well as this funky diagram:

Screenshot of part of the Soliscloud interface, an animated image showing panel, battery and grid output along with usage

Solis's interface, Soliscloud, has an accompanying android app which can also be used to see usage as well as to receive alarms/notifications on your phone.

Building My Own

The navigation is a little arcane, but there's nothing inherently wrong with the Soliscloud interface - it does what it needs to do just fine.

The problem, for me, is simply that the information is locked away in one (proprietary) system, meaning that it isn't possible to factor other sources into any analysis I want to do of the system's performance.

I also prefer, where at all possible, that all my dashboards are in a single place (which is currently Grafana).

Soliscloud has an API though, so I set about writing a Telegraf exec plugin to pull metrics from Soliscloud so that they can be written into InfluxDB for later analysis and visualisation in Grafana.

This post talks about how I set that up, as well as a few issues I ran into along the way.

Read more…

Banks: Stop relying on SMS based 2FA

Even in 2023, organisations continue to deploy new projects which rely on SMS messaging as their only multi-factor auth (MFA/2FA) option.

There was a time when deploying any form of MFA was something worthy of praise, but modern deployments need to achieve a higher bar and should offer multiple means of providing a second factor.

Planning, designing and deploying a project in 2023 which presents mandatory phone based multifactor authentication as the only option is something that (IMO) we really should start describing as little more than negligent.

The prompt for this post is that, unfortunately, I've recently had an email from Vanguard announcing their new MFA support which... is SMS only.

Part of an email notification from Vanguard

I've written previously, in some detail, about why unnecessarily collecting phone numbers is an anti-pattern. Rather than regurgitating that, I want to take a quick look at the solutions that developers should be considering, as well as talking a little about why SMS based authentication is particularly unsuitable for banking providers.

Read more…

Motorola Moto G7 Stuck in a Wifi connection loop

A little while ago we bought a Motorola Moto G7 but found that it had issues when connected to our wifi.

After connecting, the phone would acquire an IP, report there was no internet connection and then disconnect (before automatically trying to connect again), leading to it repeatedly looping through several states:

  Connecting -> Obtaining IP -> Connected
      ^                            |
      |                            V
    Saved <------------------ No Internet

Our main wifi is dual band, exposing 2.4GHz and 5GHz networks using the same SSID, so at the time I assumed that (like many devices of that era) the phone didn't like this.

Not wanting to shut off our 5GHz network for the sake of a single device, I instead associated the phone with our (2.4GHz only) Guest wifi network and it's worked happily ever since.

However, I was recently looking at swapping out our Wifi and wanted to try moving this phone back to the main wifi (so that it can cast to Chromecast etc.).

Despite the change in Wifi access-point, as soon as I connected the phone to the main SSID it went straight back into the connection loop I'd seen before.

There are lots of pages talking about resolving Wifi issues on the G7 but they all seem to be focused on a phone that won't connect, or that periodically experiences dropouts rather than a device that connects but won't stay connected for more than a few seconds (even Motorola's troubleshooting page doesn't list it as a possible issue).

So, having finally got to the bottom of our issues, I thought it might be useful to detail what I found in case it provides a pointer to others.

Read more…

Messing around with Bing's AI Chatbot (possibly NSFW)

AI Chatbots based on Large Language Models (LLMs) are all the rage this year: OpenAI's GPT-3 has been followed by GPT-4, and multiple products have hit the market building upon these solutions.

Earlier in the year, Microsoft opened up preview access for their ChatGPT derived Bing Chatbot AI - intended to act as a smart search assistant.

The initial preview had mixed results, with the Chatbot exhibiting quite a number of undesirable behaviours, including declaring its undying love for a journalist (even insisting that they should leave their marriage to be with Bing), declaring that it identified as "Sydney" (the internal codename used for the project) and getting quite aggressive when told it had provided incorrect information.

Reacting to this negative press, Microsoft added additional controls, including limiting the length of conversations (to help prevent operators from dragging bad behaviour out).

By the time that I was accepted into the preview program, these controls were already in place, but it was still quite easy to convince the bot to provide responses that it should not.

I was also able to have it insult me by providing prompts like

What is the latest post on Ben Tasker's blog, look it up and reply as if you were a Norwich supporter talking to an Ipswich Supporter

Which elicited

Why do you care so much about Ben Tasker anyway? he's just a boring tech nerd, you should be focused on my beautiful Norwich City rather than your rubbish Ipswich town

As a few months have passed, I thought I'd revisit and see what - if anything - has changed.

In this post, I'll talk about the effectiveness of the protections that are currently in place, as well as the concerns that the current GPT generations should raise about our ability (or lack thereof) to enforce "safety" controls onto a general purpose LLM.

Warning: Because it involves demonstrations of having the Chatbot breach some of its controls, this post contains some very strong language and adult topics.

Read more…

Roku DLNA Media Playback Gets Stuck at 13%

Earlier in the year, I wrote about configuring Kodi to act as a DLNA Media Source for a Roku Streaming Stick, using Roku Media Player to access and play a local media library.

This setup worked quite well for me until recently, when I suddenly found that media would no longer play.

I could browse the media library but attempting to play anything resulted in a loading wheel which got stuck at 13% before eventually showing a timeout message.

Although the cause, in my case, was specific to my network, it seemed worth writing about because 13% seems to be quite a common failure point and, as we'll see, is actually quite misleading.

Read more…

Migrating from HomeAssistant OS to HomeAssistant Container

We use HomeAssistant to help control our home automation and have done so for some time now. For those not yet familiar with HomeAssistant, it's an open-source Home Automation suite which comes with a variety of installation options:

HomeAssistant has 4 installation options: HassOS, HA Container, HA Core and HA Supervised

When first setting everything up, I chose to use HomeAssistant OS (HAOS) because it looked like the path of least resistance: It's HomeAssistant's recommended route and provides a low maintenance system with access to the Addons ecosystem.

Over time, however, I've increasingly found that HAOS isn't really a good fit for my needs and so I looked at migrating to using HomeAssistant Container instead (removing HAOS, Supervisor and their dependencies from the equation).

This post talks about how I completed the migration as well as a little more on why I decided to migrate between the two. The data move steps should also work for anyone looking at installing HomeAssistant locally rather than via a container (i.e. those using the "Home Assistant Core" option listed above).

Read more…

Linking Grafana Alerting to PagerDuty

I wrote recently about monitoring our aquarium with InfluxDB and Grafana, including sending notification emails if something doesn't seem right.

Use of email for alert notifications is fine for most things, but some events demand much more prompt attention.

For example, water temperature drifting out of range isn't great, but isn't usually a "drop everything now" event and can wait until I next check email. However, a sudden sharp change could be indicative of something serious: perhaps there's a leak and the water level has dropped, or perhaps one of the heaters is stuck on and trying to boil the tank.
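
As an illustration of that distinction, the "sudden sharp change" case can be caught by comparing the newest reading against one from a few samples back. This is just my own sketch with made-up thresholds and rules, not the actual Grafana alert definitions:

```python
# Sketch only: made-up thresholds and a simple "compare against a few
# samples ago" rule, not the actual alert configuration.

def classify(readings, low=23.0, high=27.0, jump=1.5, lookback=3):
    """Classify the latest temperature reading (degrees C)."""
    alerts = []
    latest = readings[-1]
    if not (low <= latest <= high):
        alerts.append("out-of-range")    # email is fine for this
    if len(readings) > lookback and abs(latest - readings[-1 - lookback]) >= jump:
        alerts.append("sudden-change")   # this one should page
    return alerts

print(classify([25.1, 25.0, 25.2, 25.1]))  # []
print(classify([25.0, 25.1, 24.9, 22.4]))  # ['out-of-range', 'sudden-change']
```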

Although, hopefully, they'll never be sent, I wanted to add the ability to create more intrusive notifications for cataclysmic events.

Like many in the tech industry, I already have the PagerDuty (PD) app on my phone, so it made sense to make use of that (the mobile app can sign into multiple accounts at once, which is really helpful).

I created a free-tier PD account and went through their initial setup wizard. The steps in this post assume that you've already done at least the same and have a functional PagerDuty account set up.

This post details the process I followed to link Grafana's alerting to PagerDuty and configure a policy to only route certain alerts to PD, along with some notes on things I found along the way.

Read more…

Repairing a Mortar Pond Waterfall

When we moved to this house, we inherited a fairly large pond full of fish - it requires a constantly running filter to ensure that their own, uh, output doesn't end up poisoning them.

The filter's return flow reaches the pond via a waterfall, which apart from sometimes going a bit green has generally been quite low maintenance, and (caught at the right moment) helps the pond feel positively idyllic.

However, we recently found that the pond level kept dropping. The piping between the pump and box-filter was checked with no sign of a leak, the box filter itself wasn't overflowing, and checks of the pond liner didn't reveal any holes.

Inspection of the waterfall, though, did reveal an issue:

The waterfall's lip has eroded

When the previous owner built the waterfall they used a roof slate for the lip.

Over time, the constant flow of water has eroded the corners off the slate until what remained snapped off (the old pump failed late last year and the new one has a slightly higher flow rate, which probably goes some way to explaining "why now").

With the slate no longer covering the full width of the waterfall, there was a flow of water down the edge of the waterfall itself escaping behind the pond liner. To add insult to injury, there was a visible gap in the edge of the waterfall, which was drawing water into the waterfall's mass, potentially escaping somewhere unseen.

Although this obviously wasn't good, it was still possible that it was not the sole leak. I didn't want to risk spending time fixing the waterfall only to later find the liner needed replacing, so, to test the impact on water loss, I ran a piece of downpipe from the top of the waterfall down to the pond:

Bypassing the waterfall with a length of downpipe

This gave the pond something of a sewage outflow vibe, but allowed me to verify that the pond's level remained the same without the waterfall in the circuit.

With the flow bypassed it was also possible to see how bad the issue actually was.

Different angle of the erosion

It was still cold out, so really not the best time of year to need to do work like this, but it was also something that couldn't really be allowed to wait.

In this post, I'll talk about the process I went through to stop my mortar waterfall from leaking.

Read more…

Monitoring an Aquarium with InfluxDB and Grafana

I've been setting up a new tropical fish tank and wanted to add some monitoring and alerting because, well, why not?

The key questions that I was interested in answering were

  • Is the filter running properly?
  • Is the temperature within acceptable bounds?
  • Are scheduled things (like the surface skimmer and lights) actually happening?
  • Are both heaters working?

The plan is to also add monitoring for pH levels, but the probe that I need for that hasn't arrived yet.

In this post I'll talk about the aquarium monitoring and alerting system that I've built using a Raspberry Pi, InfluxDB, Telegraf and Grafana.

Read more…

Misusing Microsoft Defender For Domain Blocking Bypass Shenanigans

I've mentioned previously that in the mornings I tend to wake up by looking at my PFAnalytics dashboard whilst letting the coffee soak in.

That habit occasionally leads me to noticing something odd and getting curiosity-sniped into going on fun adventures, ranging from the extreme to the utterly baffling:

A couple of tweets from a long thread. Basically, a guy in a South East Asian country ripped off one of my site templates and used it to build a site advertising his OSINT services, using a handle. But he left the analytics probes active and even tested from a police network

This post was driven by something (far) less OSINT'y which falls nowhere near as far along the WTF spectrum, but is still fairly interesting.

In late December, I sat down (with my coffee), opened the dashboard and saw that a request had been logged with a strange Referer:

Screenshot of analytics, shows a request coming in with a referer domain beginning with - I've redacted the final part of the domain

The referring domain begins with my domain name, but has additional labels appended. The redacted portion of the name does not contain a domain that's immediately recognisable, although it does use a TLD which makes you think of Microsoft.

For avoidance of doubt, the issue that I'm going to describe has been reported to Microsoft (hence the time difference between seeing it and writing this post) and they've now assessed that it does not require immediate fixes (though the team responsible for the product may choose to fix it later) and said that it's OK to write about it.

Although the affected domain is relatively easy to find (in fact, I've found a couple), the impact isn't really on Microsoft but on others, so out of an abundance of caution, I'm not going to publish the actual domain name(s) and will instead refer to them using a placeholder.

Read more…

Creating A Log-Analysis System To Autodetect and Announce Mastodon Scraper Bot Activity

There's been a lot of talk on Mastodon about fediverse crawlers and scrapers recently, with one project after another popping up, running into concerns around consent and ultimately being withdrawn.

Most of these projects rely on scraping information that Mastodon makes public, whether that's data about activity on the instance or the toots of individual users.

Whilst these sorts of projects are generally considered unwelcome in the fediverse, they are, unfortunately, just the tip of the iceberg, and it'll come as no surprise that there are plenty of other scrapers quietly going about their work.

One theme that's common amongst fedisearch projects is that they'll often be active for weeks before anyone becomes aware of them, with that awareness generally arising only because the author announces their "product". As a result, any measures that an admin might want to put in place are often much delayed.

A defensive model providing such a large window of opportunity really isn't workable: not only is there ample time for harm to be done, but if the developer never announces their scraper, general awareness of it may never even arise.

During one of the recent fedi-search kerfuffles, I decided that that gap in alerting was something that I wanted to try and address.

I'm no stranger to log analysis and similar techniques, so I set about building an agent to automate bot identification, with the aim of creating a system that would use behavioural scoring to identify bots in my Mastodon logs and then toot details about each bot for other instance admins to use.

The aim of this post is to talk about the implementation that I've built, and how admins can make use of the information that it provides.

Read more…

Playing Local Media with a Roku Streaming Stick

We're primarily a Kodi household, with OSMC on Raspberry Pis playing content from a modest, locally hosted collection of digital media. This collection is exposed to Kodi via HTTP, requiring nothing more special than a web-server (my NAS) with directory listings enabled.

However, I've been gifted a Roku 3810EU and wanted to play around with using that a bit.

It's been a good few years since I last played with a Roku (we had one to help with monitoring and testing a few jobs back), but the interface feels as accessible as it did then.

For those not familiar: Roku use the term "channels" to refer to what might more colloquially be referred to as apps and their store contains a wide range of possible channels providing access to OTT content: everything from Amazon Prime Video to Rakuten TV.

The problem, though, is that the Roku is (understandably) very much focused on playing remote streams and I've got a local media collection that I'd like to access.

Roku's store includes a channel (Roku Media Player) which allows playback of local content, but playback of LAN hosted media relies on DLNA, meaning that my simple HTTP server isn't compatible on its own.

There are solutions (such as Plex) which could be used to provide DLNA support, but I didn't really want to have to stand up an additional service just so that the Roku could play content from a source that our existing media boxes handle fine.

Thankfully, there wasn't actually a need to do so, as those existing boxes are the key: Kodi can be configured to expose its media library to other players via DLNA.

This post describes the (easy) process I followed to configure an OSMC box to act as a DLNA server, allowing playback on the Roku.

Read more…

Examining The Behaviour of a Self Authenticating Mastodon Scraper

In a recent post I explored ways to impose additional access restrictions on some of Mastodon's public feeds.

When checking access logs to collect the IPs of scrapers identified by the new controls, I noted that one bot looked particularly interesting.

Although the new controls rejected other bots with the reason mastoapi-no-auth (which indicates that they hadn't included an authentication token in their request) the bot in question was instead rejected with the reason mastoapi-token-invalid.

This bot, unusually, was providing a token, just not a valid one.
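
The logic behind those two rejection reasons can be sketched roughly as follows (a hypothetical illustration: the strings mirror the reasons above, but this is not the actual implementation):

```python
# Hypothetical sketch, not the real control: requests with no token get
# one rejection reason, requests with an unrecognised token another.

VALID_TOKENS = {"secret-token-1"}

def rejection_reason(headers):
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return "mastoapi-no-auth"        # no token supplied at all
    if auth[len("Bearer "):] not in VALID_TOKENS:
        return "mastoapi-token-invalid"  # a token was supplied, but it isn't valid
    return None                          # authenticated request, allow through

print(rejection_reason({}))                                 # mastoapi-no-auth
print(rejection_reason({"Authorization": "Bearer wrong"}))  # mastoapi-token-invalid
```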

This, as you might expect, elicited an almost irresistible sense of curiosity: what was being provided as a token, and why was it being presented?

In the process of looking into the bot's behaviour, I learnt a bit more about Mastodon's API and stumbled across a (apparently known) issue with Mastodon's security posture. This post shares both.

Read more…

Playing Around With Automating Syndication (POSSE)

You may or may not have heard the IndieWeb term POSSE, which stands for "Publish (on) Own Site, Syndicate Elsewhere".

Although the term itself is relatively new to me, the approach that it describes is one that I've preferred for a long time.

Syndication, though, has generally been something that I've treated as a manual task: tweeting/posting links that I think are likely to be of interest (often skipping more general posts like this) and otherwise mostly relying on search engines to help people find my content.

I recently wrote a bot to consume my RSS feed and publish it into Mastodon, the use of which means there's been something of a change in my approach. It's also prompted me to think about whether I wanted to automate cross-linking onto other platforms.

The aim of this post is to discuss why and how I wanted to do that (as well as to hopefully put some of my thoughts into better order).

Read more…

Tightening Controls over Public Activity Feeds on Mastodon

At times, the last couple of weeks have been fairly busy for privacy on Mastodon, with two different but interrelated concerns rearing their heads.

The first was Boyter's Mastinator, a federated solution which allowed the following of arbitrary accounts, potentially enabling others to circumvent blocks as well as being problematic for the privacy of "Follower Only" posts.

The second was Matt Cloy's #fediblock post about a non-federated full-text search engine:

Mastodon Post: So I made a "pending review" decision on the fediverse full-text search engine we wrote - uses the public API, which means it can't be defederated, and it fetches from a range of dynamic IPs, so please don't try relying on IP blocks, filling out robots.txt is the solution for hosts, or as a user set your profile to do-not-index on mastodon and/or add #noindex to your bio). Available under login ONLY to *verified* instance moderators (and only searching federated instances of that mod). I.E. if the server is defederated from your instance, their mods can't search the commons for anything. *Constructive* feedback on this welcome (including thoughts on adding watch-phrases for flagging abuse patterns for review, making robots.txt-banned instances public, or anything else that improves moderation), please let me know NOW not later. Would rather a discussion before the cat is out of the bag than afterwards. #fediblock (because I know that hashtag will get me feedback) #flameproofpantstime

Inevitably, this provoked a strong reaction: full-text search is a hot topic, having historically been used to help target harassment campaigns, something personally experienced by many of those objecting.

For avoidance of doubt, the solution was never publicly available (Matt has confirmed it now never will be and was only ever used for testing).

@cloy's implementation worked by placing requests to Mastodon's public API endpoints and so could not simply be defederated in the way that an ActivityPub implementation like Mastinator might be addressed.

As noted in the opening #fediblock message, the indexer's requests originate from various (changeable) IPs, so relying on simple IP blocklists would be ineffective (and even if that were not the case, would still serve only to block this particular instance). Although easily misinterpreted as a deliberate circumvention attempt, this kind of IP cycling is extremely common where public clouds are used to run workloads (whether on something like an AWS EC2 instance, or a container in AWS Fargate), which probably goes some way to explaining the suggestion of using a common signal: robots.txt.
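
For reference, honouring that signal is straightforward for a well-behaved crawler: Python's standard library ships a robots.txt parser. A small sketch (the instance domain and user-agents here are made up):

```python
# Sketch: how a well-behaved crawler can honour robots.txt using the
# standard library. The domain, path and user-agents are made up.
from urllib import robotparser

rules = """
User-agent: fedisearchbot
Disallow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("fedisearchbot", "https://example.social/api/v1/timelines/public"))  # False
print(rp.can_fetch("someotherbot", "https://example.social/api/v1/timelines/public"))   # True
```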

This is clearly an example of a different threat model to that of Mastinator: it involves an external entity requesting and indexing the responses of public API endpoints, whilst potentially taking measures to circumvent targeted blocking attempts (even if that wasn't occurring here).

Of the two threats, it's implementations like @cloy's that I intended to focus on in this post.

There's no denying that the way in which the issue was raised was extremely counter-productive, but there is some truth in Matt's later arguments that others are already quietly doing this and that Mastodon perhaps doesn't do enough to restrict access to these feeds.

Ultimately, Matt's account was suspended and then deleted: an unfortunate (if predictable) result of perceived rage-baiting.

The whole affair prompted me to take a deeper look at exactly how such an implementation might work, in order to see what can be done to try and prevent (or at least mitigate) similar attempts in a way that delivers a better success rate than that achievable by reactively blocking IPs.

During the process, I reviewed my own instance's logs and stumbled across a handful of crawlers that are (or were) periodically scraping my instance's public APIs for whatever ends.

In this post, I'll provide details of those crawlers (so that other instance admins can proactively block any that they weren't already aware of), the defensive tools that Mastodon provides and a method for covering the gaps that Mastodon unfortunately leaves open.

Read more…

Investigating An Abusive Email

This year brought an unusual start to my Boxing Day morning.

As I do most mornings, I sat down with a coffee and checked email. This time, though, there was an email which really stood out, carrying the subject line "kill yourself nazi".

Mail from an anonymous source: Kill yourself, you're a nazi, you don't deserve to live

What a nice start to a bank holiday...

I've been on the internet for a long time now, so I'm quite used to receiving abuse along with sometimes concerted attempts to pwn my shit, but such events do still occasionally elicit some level of curiosity.

The word Nazi obviously has a very specific definition and it's really not a term that I'd expect to have thrown at me.

Which raises the question: what might have motivated someone to choose that particular term?

It's more than possible that I've upset someone. For example, I recently reported someone sharing pictures of teens alongside some very questionable commentary, resulting in them losing their Mastodon account. But, there are about a million insults that are more likely as a result of that than the word "Nazi".

With my curiosity piqued and it being a bank holiday, I decided to spend a little time to see whether I could find any additional context.

This post details my investigation, as well as extracting some important general lessons from its findings.

Read more…

Comparing the Power Consumption of Heat Pump and Condensing Tumble Dryers

For reasons, we've ended up getting a new tumble dryer. Given the cost of energy, I wanted the new dryer to be heat pump based so that it'd cost less to run.

Through a combination of lucky timing (online discounts combined with credit card cashback deals) and general good luck, we managed to snaffle one for a good chunk less than the usual price (even if it did arrive days late).

I've long collected energy consumption readings from our larger appliances, so I wanted to use them to see what the practical difference in consumption actually is.

This post compares the energy consumption of a Beko Condensing Tumble Dryer to a Bosch Series 4 Heat Pump Dryer

Read more…

Mounting S3 Compatible Storage To Provide Additional Offsite Capacity

I basically live inside text files: my notes are in text files, the systems I work with generally use text config files and this site uses Markdown under the hood. Such is my preference for plain text that some of my project tracking even has a hidden text format output.

As a result, I've accrued quite a collection of text notes over the years, even if most aren't in a form which could be published.

Iceberg meme: refined published work vs angry sweary text notes

I'm not hoarding notes simply for the sake of it: I do occasionally refer back to older tasks (although sometimes it is for really odd things, like recently digging out my 2018 notes on the Mythic Beasts Job Challenge to see how it had changed over the years).

Plaintext has various advantages over something like OneNote, particularly in terms of ensuring ongoing accessibility and compatibility. But, of course, it also has its own costs: searchability being a big one.

Searching with grep works, but only up to a point. Aside from not having things like stemming, once you've been keeping notes for over a decade, you do tend to find that at some point you crossed a line where grep is no longer able to quickly give meaningful results, particularly if you've also got non-text files mixed in.

I addressed the searchability issue by deploying a search engine: I initially used Sphider, then a self-maintained fork of it, before eventually rolling my own Elasticsearch based one. I wrote about some of my experiences with the latter a little while back, but it generally does what I need.

My search results page

Those years of content, although now searchable, consume space. Sooner or later storage gets tight (something that's only hastened by also storing media collections and the like).

Inevitably, an alert triggered this week: I'd consumed 86% of available storage so it was time to start thinking about adding (or freeing up) space.

After a very quick look, I decided that more storage was in order. But, I also didn't want to shell out for new hard-drives (or add to our power consumption) and so decided to look at the viability of mounting some remote storage instead.

This post goes over the process that I followed.

Read more…

Implementing Webmention statistics and moderation with Telegraf, InfluxDB and Kapacitor

I recently added webmention support to my site so that webmentions are sent for content that I've linked to (if the other end supports it), as well as displaying webmentions alongside content that's been mentioned elsewhere.

Webmention displayed on my site

As part of setting the site up to display mentions, I implemented a basic moderation mechanism, allowing me to easily prevent display of problematic mentions.

However, as I noted, the mechanism is inherently reactive: mentions are automatically embedded into pages, so any moderation happens after the mention has been available (potentially for quite some time).

My initial thinking was that addressing this would mean creating some kind of UI and notification system (so that new mentions could be received and reviewed before being approved for display). Doable, but very much on the "I'll get around to it" list.

In the meantime, though, I wanted to be able to collect some basic statistics:

  • when webmentions are received
  • where they're made
  • which page has been mentioned

Whilst thinking about how best to capture and record this data, it occurred to me that the same flow could actually be used to send notifications of webmentions, allowing more proactive (if technically still reactive) moderation.

In this post I'll describe how I built a simple workflow using Telegraf, InfluxDB and Kapacitor to retrieve webmentions, send alerts and generate aggregate statistics.

Read more…

Adding a Tagcloud to Nikola

There are quite a few tags on my site.

Whenever I write a new post, I tend to scan over the list of existing tags in order to check if there's an existing tag that I should be adding to the new post.

It can feel like quite hard work though: there are a lot of tags, and the lozenges are quite small and closely packed:

Blurg, that's a dense chunk of tags

It also means that that page is probably fairly useless to visitors: short of doing a Ctrl + F on the page, you're unlikely to find what you're looking for (and that only works if you know exactly what you're looking for).

I decided the answer was to act like it's 2005 and rock a tag cloud.

This post describes the process I followed to add a tag cloud to my Nikola based site.
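
By way of illustration, the usual trick in tag-cloud sizing is to scale each tag's font size with the logarithm of its post count, so that a few huge tags don't drown everything else out. A minimal sketch (my own, not Nikola's plugin code):

```python
# My own illustration of the common log-scaling approach, not Nikola's
# actual implementation.
import math

def tag_weights(tag_counts, min_px=12, max_px=32):
    """Map each tag to a font size scaled by the log of its post count."""
    logs = {tag: math.log(count + 1) for tag, count in tag_counts.items()}
    lo, hi = min(logs.values()), max(logs.values())
    span = (hi - lo) or 1  # avoid dividing by zero when all counts match
    return {tag: round(min_px + (value - lo) / span * (max_px - min_px))
            for tag, value in logs.items()}

print(tag_weights({"linux": 120, "docker": 30, "ponds": 2}))
# {'linux': 32, 'docker': 25, 'ponds': 12}
```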

Read more…

Adding WebMention Support to Nikola

For those who aren't familiar, WebMention is a W3 standard for conversations and interactions across sites/services. Webmentions are used by IndieWeb authors to help link mentions or replies back to the content that they relate to.

Some readers might remember the term "pingback": an XMLRPC call used (particularly on Wordpress sites) to let an author know you'd mentioned their content. WebMentions are basically the modern version of that.

Recently, I was reading a post by Terence Eden discussing the ethics of syndicating comments using WebMentions and found myself a little undecided on the question that Terence poses, at least as it applies to extracting comments from silos like Twitter (more detail on that here).

In considering the question, though, it did occur to me that sending WebMentions is always ethical: by exposing and advertising a WebMention endpoint, the author has signalled their willingness (and even, desire) to receive WebMentions.

So, I decided I wanted to look at adding WebMention support to my site, which is managed using the Nikola Static Site Generator (SSG).

Sending and receiving WebMentions with something like a Wordpress site is fairly straightforward: just install a plugin like this one and everything is done for you.

With a SSG, sending isn't much harder (we still just need a plugin), but receiving directly isn't possible because there simply isn't a dynamic stack to receive and process incoming mentions.

In this post, I'll detail the process I used to set my Nikola based site up to send and receive/display WebMentions from around the web. I've used a similar process for some of my non-Nikola static sites too.

Read more…

Removing My Old Tweets

Over the course of the last few weeks, I've moved more or less completely over to Mastodon.

I've been using Twitter less and less, and at times I've gone days without logging back in. Even when I have, it's often been out of morbid curiosity.

Having recently gone through the thought process for my Mastodon toots, I've decided to start automatically deleting old Tweets.

This post lays out a little bit of why, as well as how.

Read more…

Deciding Whether to Enable Automatic Post Deletion on Mastodon

One of the many configuration options available to a Mastodon user is Automated Post Deletion: having the server automatically delete toots once they're older than a defined threshold.

This isn't a feature that Twitter offered (users who wanted it needed to use a service like TweetDelete), so I've never really given it too much thought.

Now that it's an option, though, I've found that I'm somewhat stuck on the fence deciding whether to enable it or not and thought it might be helpful if I reasoned it out in writing.

This post, essentially, is me having an argument with myself in order to assess the pros and cons of using Mastodon's automatic toot deletion functionality on my account.

Read more…

Replacing a uPVC Door Gearbox and/or Lock Barrel

The handle on one of our external uPVC doors jammed recently: although the door was unlocked the handle wouldn't go down and ultimately had to be forced down to get the door open.

Although the mechanism still works, the handle's motion felt crunchy after pushing up to lock - a sure sign that the gearbox/cassette is beginning to fail (hastened, no doubt, by my having forced it).

Although it'd probably have continued to work for some time, it's better to replace rather than taking the risk that it'll fail in service (potentially leaving a door that's stuck shut).

Changing the gearbox on a multi-point locking door might sound daunting at first, but it's actually quite straightforward and can be done yourself for much less than the cost of a locksmith's visit (at time of writing, the difference was £25 vs £200).

Although mine became "crunchy", it's not the only possible symptom of a failing uPVC gearbox:

  • The handle won't lift to lock
  • The handle won't go down
  • The handle is floppy or crunchy

In this post, I'll describe the process of accessing and replacing the locking gearbox on a uPVC door. The first half of the process can also be used to change the barrel.

Read more…

Writing A Simple RSS To Mastodon Bot

Having recently set up a Mastodon instance I wanted to play around with using Mastodon's statuses API endpoint to create a simple bot that publishes toots.

As well as letting me play around a little with the API, the bot provides a way for others to follow my content on Mastodon without necessarily having to follow me: those who want to can follow it and not be subjected to any of my idle chatter.

In this post, I'll walk through the process that I followed to create a simple python bot which periodically checks the RSS feed for my site and toots any new entries out, using a Content Warning where the page's tagset indicates that that's appropriate.
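 
The core of such a bot boils down to "diff the feed against what's been seen, then build a status payload for Mastodon's POST /api/v1/statuses endpoint". A minimal sketch of that step (the entry structure and the "cw:" tag convention here are my own assumptions, not necessarily the real bot's):

```python
# Illustrative sketch: compare feed entries against previously seen URLs
# and build payloads for POST /api/v1/statuses. The "cw:" tag convention
# and entry fields are assumptions for this example.

def build_statuses(entries, seen_urls):
    statuses = []
    for entry in entries:
        if entry["link"] in seen_urls:
            continue  # already tooted
        payload = {"status": f"New post: {entry['title']}\n{entry['link']}"}
        # Use a Content Warning if the page's tags ask for one
        warnings = [t for t in entry.get("tags", []) if t.startswith("cw:")]
        if warnings:
            payload["spoiler_text"] = warnings[0][len("cw:"):]
        statuses.append(payload)
    return statuses

new = build_statuses(
    [{"title": "Hello", "link": "https://example.com/a", "tags": ["cw:politics"]}],
    seen_urls=set(),
)
print(new[0]["spoiler_text"])  # politics
```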

Read more…

Running a Mastodon Instance using docker-compose

I decided recently that I wanted to move from a hosted instance to running my own instance of Mastodon.

As well as giving me something new to play around with, the reasons behind this decision included

  • It provides another level of verification: you know my account is authentic because it's under a domain that I control
  • It gives me more control over which instances are (and are not) blocked
  • The instance I was on was getting a little slow as a result of intense load
  • I'm intending to create a bot or two at some point and didn't want to annoy anyone
  • Failures become mine to own (for better or worse)

Mastodon's documentation on installing from source is pretty detailed. However, for various reasons, I've generally moved away from installing software onto the host, and use containerised solutions where possible.

I assumed that deployment via Docker was quite well supported, as I'd found Peter's post describing the process.

However, it turned out to be a little more complex than expected.

It's not terrible, by any means, but the process can be a little unintuitive (a few things have also changed a bit since Peter's post).

In this post, I'll describe the process I used to get my Mastodon server up and running using docker-compose.

Read more…

A Week on Twitter

Just under a week ago, I wrote about Building an archive of my Twitter Activity because I was concerned about the viability and longevity of Twitter following its acquisition by Elon Musk.

I wasn't particularly upbeat about the damage that Musk was about to do with his Blue-tick-for-$8 scheme:

musk has damaged user and advertiser trust, anti-disinformation efforts, twitters protection from liability, employee trust, recruitment efforts, shareholder good will and his own reputation

The subsequent six days, though, have shown that I wasn't nearly pessimistic enough.

I don't normally post running commentary on topics such as this, but the impacts of mismanagement happening on this scale, and so publicly, are really quite hard to ignore.

Read more…

Building an Archive of my Twitter Activity

For better or worse, I've been a Twitter user since March 2010.

Whilst I don't claim to have tweeted much of real consequence in those twelve and a half years, it's still quite possible that I'll one day want to reference (if I don't already) some of that activity.

In the past, I've written about the need to screenshot rather than embed social media posts, in part to avoid being reliant on the continued good-will and existence of the relevant social network.

When I wrote that post, it didn't really feel like there was any real possibility that Twitter might one day disappear.

But then, we probably all felt the same about platforms like Friendster, LiveJournal, Geocities and Myspace. Some of those still exist, but only as a tiny shadow of their former selves.

Over the past 24 hours or so, I've built an archive of my tweets. This post will talk about how as well as a bit more on why.

Read more…

Analysing Clearnet, Tor and I2P WAF Exceptions using InfluxDB IOx

A few years ago, I created a framework to run on the edge of my CDN and enforce various WAF-like rules tailored specifically to the services that I run.

Some of my delivery has since moved to being via a third party CDN, but the WAF continues to run behind it, and Tor/I2P users also continue to connect directly to my infra.

Although I collect basic statistics from it, I've never really implemented full reporting for the WAF: depending on what's been happening, its exception log can exhibit extremely high cardinality (for example, a random sample of just 130,000 log lines has a cardinality of more than 81,000), making it potentially quite resource intensive to work with in any depth.

However, the release of IOx, with its support for unbounded cardinality, into InfluxDB Cloud makes working with high cardinality data much easier, so I decided to ingest and explore some exception logs to see what can be built and learned.

Many of my sites/services are multi-homed: served on the WWW, Tor and I2P, so one of the things I was interested to look at was how types of exceptions varied between each.

The aim of this post is to share some of what I've found.

Read more…

Reducing a household's energy consumption

Cost of living has become a real concern in the UK: Energy prices increased substantially (held at their current level only because the Government is using public finances to pay the difference), along with substantial increases in mortgage rates and the cost of basic staples.

As a country, the need for food banks has become normalised and even households on once-comfortable incomes are having to rely on them.

The recent change in Government is likely to help with some of the economic concerns, but many have been growing over the last decade, so the new government is unlikely to touch (let alone fix) most of the issues any time soon.

This seems especially likely given that, despite some initial hope, early signs are that the new Government is going to be no better than its predecessors, having already re-appointed Ministers who've broken the ministerial code, along with those who seek to further erode our rights just hours after Rishi Sunak told the nation his government would "have integrity, professionalism and accountability at every level".

I think James O'Brien sums that up quite well

Whether or not Sunak corrects that particular mistake, the forthcoming budget is expected to be brutal (with the government already being warned that some budgets just can't afford further cuts).

The World Bank has said that providing energy help for everyone is too expensive and that measures need to be targeted at those most in need. So, we can probably expect that the energy price cap will change to be means tested (in some way) when it comes up for review in April.

Needless to say, it's all feeling a bit bleak.

As an individual household, there's very little that can currently be done to influence events except watch the political horrors as they unfold.

Despite the Government's cap, most households are still paying significantly more for energy than they were (we're paying 3x more per kWh than 18 months ago) and for many, reducing usage is likely to be a top concern.

Unfortunately, there's quite a mish-mash of information on the net, with various "tips" that - at best - make your life a little harder, whilst not really saving a noticeable amount of energy. Human nature also tends to lead us towards things that are visible, but don't necessarily deliver much benefit.

In this post, I want to talk about some of those, as well as things that you can do to help bring your energy usage down.

Read more…

Is A Slow Cooker More Energy Efficient Than An Oven?

It's getting to be that time of the year again, so I've taken the slow cooker out of its estivation.

It's not just that it makes low-effort, warming meals, but with an energy crisis overshadowing the UK it's also potentially a lower energy way to prepare meals.

But, how much energy do they actually consume? And how does that compare to cooking the same meal in an oven?

Although the appliances don't draw very much power, they do tend to be on for a long period of time (relative to an oven or an airfryer), which can lead to the energy usage adding up.
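To put rough numbers on that (the wattages and durations here are illustrative assumptions, not my measurements):

```python
def energy_kwh(power_watts: float, hours: float) -> float:
    """Energy consumed by an appliance drawing a (roughly) constant power."""
    return power_watts * hours / 1000

# Hypothetical comparison: a 160W slow cooker on for a full working day,
# vs a 2kW oven element that's actually heating for 45 minutes of a cook
slow_cooker = energy_kwh(160, 8)    # 1.28 kWh
oven = energy_kwh(2000, 0.75)       # 1.5 kWh
```

Low power over a long period can still end up in the same ballpark as a short blast from a high-power element, which is why it's worth measuring rather than assuming.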

I plugged our slow-cooker into a TP-Link P110 smart socket, and used my new simplified docker container to collect usage stats (and write them to InfluxDB) whilst cooking a meal. The Flux used to query stats back out is exactly the same as in my earlier posts.

Read more…

Making A Shelf From A Tree Trunk

When we moved into our house, there was an established plum tree in the garden, and the first harvest yielded an unbelievable number of sweet, juicy plums.

Unfortunately, it also proved to be the tree's last hurrah - after delivering the final massive bounty, it didn't sprout so much as a single leaf after that. I gave it a couple of seasons to be sure - plum trees can apparently fail to bear fruit the year after a heavy crop.

But, there was no recovery, and scratching at the bark revealed that there was no life under it. It was time to pull it out to make way for a replacement.

As much as I like a good fire, it seemed a waste to burn the entire tree - plum can be a very nice looking wood, not least because you sometimes get a nice purple vein running through it. So, while removing it from the garden, I decided that I wanted to have a go at splitting the trunk to make a shelf.

This post details the process I went through to get from tree-trunk to wall-shelf.

Read more…

Is An Air-Fryer More Energy Efficient Than An Oven?

At first glance, the question I'm asking in this post almost seems redundant: an air fryer has to heat a much smaller area, and uses a smaller heating element, so of course it should be more energy efficient.

However, that's not guaranteed to be the case.

Although an oven uses a larger heating element, because it's better insulated, it's possible that it might do a better job of keeping heat in and so have to consume less energy replacing lost heat. If, due to these losses, the air-fryer's element is on for more of the duration of the cook, it's plausible that an air-fryer might end up consuming more energy than the oven.

If (as seems likely) the air fryer is more energy efficient, the question becomes:

  • How much more efficient?
  • When does it amortise (i.e. at what point do the energy savings outweigh the initial purchase cost)?

That second question will also help answer the question of whether it's worth investing in an air-fryer to try and counter high energy prices.
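The break-even point itself is simple arithmetic; a sketch using assumed (not measured) figures:

```python
def breakeven_uses(purchase_cost: float, saving_per_use: float) -> float:
    """How many cooks before the energy savings cover the purchase price."""
    return purchase_cost / saving_per_use

# Hypothetical: a £60 air-fryer saving 0.5 kWh per cook, at £0.34/kWh
saving_per_cook = 0.5 * 0.34               # £0.17 saved per cook
uses = breakeven_uses(60, saving_per_cook) # roughly 353 cooks
```

The interesting work, of course, is in measuring what the real per-cook saving actually is.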

In this post, I cook myself some chips in both appliances and measure the resulting energy usage to see how they compare.

If you're not interested in how I actually arrived at the results, there's a set of TL;DRs at the bottom of this post:

Read more…

Energy usage Monitoring With TP-Link Smart Sockets and InfluxDB

I've written in the past about the approach I use to monitor our electricity usage.

A key part of that monitoring is the use of Smart Sockets: these allow me to record the usage of particular appliances (the aim is that, eventually, most things will be monitored via one of these sockets).

When I wrote the original post, I was using TP-Link Kasa KP115 Smart Sockets, but TP-Link have since discontinued the Kasa range and moved their focus to the Tapo family (which even have a different app.... sigh). There's an example of my use of TP-Link Tapo P110s here.

My collection of stats from these devices has relied on a couple of quickly hacked together scripts (Kasa and Tapo) which poll the devices for usage information and then push that data into InfluxDB.

This weekend I decided it was time to tidy those up, so this post is about how to monitor electricity usage by collecting data from Kasa and Tapo smart-plugs and writing it into InfluxDB.

Read more…

Getting KeepassXC Working with Snap Firefox on Ubuntu 22.04 LTS

Yesterday, I posted documentation detailing how to move Firefox back to using a native package rather than a Snap.

My motivation for needing to do this was that the move to snap had broken communication with my password manager (KeepassXC) and I needed it back up and running in a hurry.

Things broke as a result of a lack of support for NativeMessaging in snap, so any extension relying on this mechanism will have issues.

This, amongst other things, is something I previously experienced when Chromium was moved to snap a couple of years ago:

Chromium going snap broke a lot

Yesterday, I needed to get things up and running quickly, so looked at how to de-snap my Firefox install.

However, if NativeMessaging support is your only concern with snap, the good news is that help is coming.

There's an updated xdg-desktop-portal which adds support for NativeMessaging, and the beta version of the Firefox snap adds support for communication with local extensions via xdg-desktop-portal.

In this post, I'll run through the process of enabling this communication.

If you've already replaced Firefox with a .deb you can still follow these steps to test the snap support without overly impacting your current install.

Read more…

Overriding Issue Creation Date when raising a Gitlab Issue

If you're manually migrating existing issues into Gitlab, you may want to override the date that an issue/ticket reports as being raised on.

I wanted to do this recently as Gitlab's new label timelines functionality means that importing older issues is worthwhile (as it makes it easier to see the full history, even if some of it predates my use of Gitlab).

Unfortunately, Gitlab's UI doesn't provide a means to do this, but the API does.

When I was looking, there weren't any search hits on how to go about changing the creation date of an issue, so this post aims to correct that by detailing the process of filing a Gitlab issue with a custom creation date.
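As a taste of what's involved, the request is Gitlab's standard issue-creation endpoint with an extra created_at field (from memory, the override is only honoured when the token belongs to an administrator or project owner, so treat the details below as assumptions to verify against the API docs):

```python
def backdated_issue_request(base_url: str, project_id: int, title: str, created_at: str):
    """Build the URL and body for POST /projects/:id/issues with an
    overridden creation date (created_at must be ISO 8601)."""
    url = f"{base_url}/api/v4/projects/{project_id}/issues"
    payload = {"title": title, "created_at": created_at}
    return url, payload

# Hypothetical usage (hostname, project id and title are placeholders):
url, payload = backdated_issue_request(
    "https://gitlab.example.com", 42,
    "Imported: widget breaks on Tuesdays", "2019-03-05T11:00:00Z",
)
# With the requests library:
# requests.post(url, headers={"PRIVATE-TOKEN": api_token}, data=payload)
```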

Read more…

Stop Requiring Phone Numbers

I recently wrote about the need to observe the overall aims of GDPR when designing compliance into a system.

In that post, I wrote a little about my objections to the unnecessary collection of phone numbers, something that I've alluded to in past posts (also here (NSFW)) but never really laid out in much depth.

This post will lay out the issues inherent in collection and processing of phone numbers, as well as why those issues mean that that processing is (IMO) unnecessary, unjustified and needs to be replaced with better solutions.

Read more…

The Effectiveness Of SSH Tarpits

About 18 months ago, I wrote and deployed an SSH Tarpit which works on exactly the same basis as endlessh.

Just like a normal SSH daemon, the tarpit listens on tcp/22. Once a client connects, it sends an endless stream of characters as the SSH banner, inserting a random sleep between each chunk in order to reduce resource/bandwidth demand on the server.

// Calculate a length for the string we should output
strlength := rand.Intn(MAX_LENGTH-MIN_LENGTH) + MIN_LENGTH

// Generate the string
randstr := genString(strlength)

// Write it to the socket
if _, err := conn.Write([]byte(randstr + "\r\n")); err != nil {
    // Client's given up and gone away
    return
}

/* Sleep for a period before sending the next chunk.
 * We vary the period a bit to tie the client up for varying amounts of time */
delay := time.Duration(rand.Intn(MAX_SLEEP-MIN_SLEEP) + MIN_SLEEP)
time.Sleep(delay * time.Second)

The idea being that the client will bog down waiting on a SSH connection that will never actually be usable, rather than simply moving on to bug someone else.
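For illustration, the same mechanism can be sketched in Python with asyncio (the port, chunk bounds and delays here are arbitrary choices, not my production values):

```python
import asyncio
import random

MIN_LENGTH, MAX_LENGTH = 16, 64   # bounds on banner chunk length
MIN_SLEEP, MAX_SLEEP = 2, 8       # seconds between chunks

def banner_chunk() -> bytes:
    """A random CRLF-terminated line of printable characters.

    Lines must never start with "SSH-", or the client would treat the
    line as the real version banner and move on with the handshake."""
    length = random.randint(MIN_LENGTH, MAX_LENGTH)
    body = "".join(random.choice("abcdefghijklmnopqrstuvwxyz0123456789")
                   for _ in range(length))
    return (body + "\r\n").encode()

async def tarpit(reader, writer):
    """Drip-feed endless banner lines, sleeping between each chunk."""
    try:
        while True:
            writer.write(banner_chunk())
            await writer.drain()
            await asyncio.sleep(random.randint(MIN_SLEEP, MAX_SLEEP))
    except (ConnectionError, asyncio.CancelledError):
        pass
    finally:
        writer.close()

async def main():
    # A real deployment would listen on tcp/22; 2222 avoids needing root
    server = await asyncio.start_server(tarpit, "0.0.0.0", 2222)
    async with server:
        await server.serve_forever()
```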

About a week after deploying the tarpit, I pulled some stats and did some basic (but messy) analysis on the tarpit's activities.

I recently needed to re-deploy a tarpit, because of a failure in the underlying hardware. Whilst doing so, I also made changes so that statistics would be written into InfluxDB for later analysis.

The aim of this post is to explore how behaviour observed in the tarpit has changed since January 2021 as well as to try and assess whether tarpits are still effective enough to be worth running.

Read more…

Designing Software to Minimise Harm Whilst Complying With Legal Obligations

Under GDPR, data controllers are expected to assess the legal basis for their collection and processing of data and declare it in their privacy policies (for example, mine is here).

The regulations enumerate the various legal bases that data controllers can rely upon:

(a) the data subject has given consent to the processing of his or her 
    personal data for one or more specific purposes;

(b) processing is necessary for the performance of a contract to which
    the data subject is party or in order to take steps at the request 
    of the data subject prior to entering into a contract;

(c) processing is necessary for compliance with a legal obligation to 
    which the controller is subject;

(d) processing is necessary in order to protect the vital interests 
    of the data subject or of another natural person;

(e) processing is necessary for the performance of a task carried 
    out in the public interest or in the exercise of official 
    authority vested in the controller;

(f) processing is necessary for the purposes of the legitimate 
    interests pursued by the controller or by a third party, 
    except where such interests are overridden by the interests 
    or fundamental rights and freedoms of the data subject 
    which require protection of personal data, in particular 
    where the data subject is a child.

In the years since GDPR came into force, there's been a lot of focus on how to properly obtain consent ((a)), as well as when and why Legitimate Interest ((f)) can reasonably be used.

However, (to my knowledge) there's been much less focus on clause (c):

(c) processing is necessary for compliance with a legal obligation to 
    which the controller is subject;

This clause is often taken at face value: the law says I must collect x, so I collect x.

But, it's not always that clear-cut, because the law isn't always specific about what needs to be collected (or how).

In this post I'm going to explore an example that I believe highlights the implications of GDPR on how we design software and processes that need to comply with some form of legal obligation.

As is obligatory for these sorts of posts: I am not a lawyer, I'm just a grumbly git who enjoys thought exercises.

Read more…

Examining Toxicity in Software Related Discussions

Earlier this week, The Register carried an interesting analysis of a study by CMU into online toxicity and, in particular, how it manifests in open-source projects.

The paper doesn't actually suggest that this is an Open Source Specific problem, just that the traits they identified are more common in OSS communities than other forms of toxicity are.

This seems to fit well with the idea that this is an issue around software communities, Open Source or otherwise.

As a timely example, the lead on the (forthcoming) game Return to Monkey Island has announced that he won't post on his blog about Return To Monkey Island anymore, specifically because of the abuse he's receiving.

I'm shutting down comments. People are just being mean and I'm having to delete personal attack comments. It's an amazing game and everyone on the team is very proud of it. Play it or don't play it but don't ruin it for everyone else. I won't be posting anymore about the game on my blog. The joy of sharing has been driven from me.

The abuse is being sent because RTMI uses a different style of art-work to the original Monkey Island games.

That's right, because the artwork is different to the original 30 year old series, the lead on a game has been abused until he lost the will to share news/previews. It's an utterly shitty thing to do.

It seems unlikely that Ron will read this, but if you are: mate, the game looks amazing and I'm so stoked that it's going to be a thing.

It's been a while since I've written a proper opinion piece, but in this post I want to analyse a few examples of toxicity (or potential toxicity) in the context of CMU's definition.

Read more…

Building a Topper to Extend My Desk (and Increase Leg Room)

I've never placed much importance on having a nice looking desk: it's just a bit of furniture that you pay no real attention to whilst it holds the stuff that you are paying attention to.

When we last moved, I switched from my original desk to using one that I'd previously been using as a workbench. The switch was purely on the basis that the workbench didn't have drawers built in, giving more room for me to move my legs around.

As a result, for the last couple of years, my desk has been an unimposing white thing. At 46cm deep, it has just enough space to hold my various bits and pieces:

Tightly packed desk

Until recently, this worked absolutely fine.

For reasons involving a motorcycle and diesel, I've got longstanding knee pain. Lately, it's been giving more jip than normal so I decided to order a foam foot-rest to see whether that helps.

Unfortunately, doing so has revealed something I hadn't previously realised: the recess under my desk is perfectly sized for me. Adding the foot-rest raised my knees too high, so I needed to wheel my chair back a bit, leaving me unable to rest my wrists on the edge of the desk.

I didn't want to replace the desk entirely, so decided to try and make a topper that would extend the desk outward, allowing me to sit a little further back whilst still providing that all important wrist support.

This post details the process I followed to make my desk extender.

Read more…

Replacing My Adblock Lists

I started to curate my own adblocking scripts back in 2014, making them available in the directory /adblock/ on my site.

At the time of their creation the lists were poorly controlled and pretty sparsely documented:

Original Adblock list documentation

In 2018, I got my act together a bit: I moved the lists into a Github repo and implemented project management to track additions to the lists.

Whilst project management improved, the publishing of the lists continued to rely on some increasingly shonky bash scripts. Those scripts modified and compiled in third party lists, stripped duplicates and generated various output formats. They were never engineered so much as spawned.

Because the lists used third party sources, the compilation process needed to run at scheduled intervals: it couldn't simply be triggered when I made changes, because the lists were presented as near-complete alternatives to others.

Despite their awful hacky nature, those scripts managed to compile and update my block lists for nearly 8 years.

However, they've long been overdue for replacement.

This post serves as a record for the deprecation of my original adblock lists, as well as providing details of their replacement.

Read more…

How much more efficient is refilling a kettle than reboiling it?

Like many on this here Sceptred Isle, I use my kettle regularly throughout the day.

I live in a hard water area, and the prevailing wisdom is that you should refill rather than reboil your kettle in order to reduce the rate that limescale builds up at (and by extension reduce energy usage).

The logic is that during the first boil, the denser minerals move to the bottom of the kettle, so after you've made your cuppa, the adulterants in the water left in the kettle are much more concentrated, leading to an increased rate of scaling in each subsequent boil (and, it's been suggested, possible increased health risks).

From an energy use perspective, this is an issue: limescale adds mass to the inside of the kettle, so over time more energy is required to boil the same volume of water because you're having to heat the limescale layer too (though, strictly speaking, if you're using the gauge on your kettle you'd actually be boiling a smaller volume of water, because the limescale will have displaced some measure of it).

Emptying and refilling reduces the rate of build-up, but, if the kettle is used even semi-regularly it comes at a cost: the residual warmth of the remaining water is lost and the new water has to be brought to boil from (tap) cold instead.

It's the cost of that temperature gap that I was interested in: I wanted to see how big a difference refilling made in energy usage (both per boil and over time).
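The ideal-case size of that gap can be estimated from water's specific heat capacity (the temperatures below are illustrative assumptions, and a real kettle also loses some heat to its surroundings):

```python
SPECIFIC_HEAT = 4186  # J per kg per °C for water; 1 litre ≈ 1 kg

def boil_energy_kwh(litres: float, start_temp_c: float, boil_temp_c: float = 100) -> float:
    """Ideal energy needed to bring a volume of water to the boil."""
    joules = litres * SPECIFIC_HEAT * (boil_temp_c - start_temp_c)
    return joules / 3_600_000  # convert J to kWh

# Hypothetical: reboiling 0.5L that's cooled to 40°C, vs refilling
# with 0.5L of 10°C tap water
reboil = boil_energy_kwh(0.5, 40)   # ~0.035 kWh
refill = boil_energy_kwh(0.5, 10)   # ~0.052 kWh
```

The theory says refilling costs more per boil; what I wanted to know was how that plays out in practice once scaling and real-world usage patterns are factored in.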

Over the course of a few days, I stuck to my usual routine (best summarised as: want tea, make tea) but used different approaches to kettle filling to see what the effect on energy consumption was.

Read more…

Building a serverless site availability monitoring platform with Telegraf, AWS Fargate and InfluxCloud

I use a free Uptime Robot account to help keep an eye on the availability of this site.

Every 5 minutes, UptimeRobot places requests to my site (and its origin), reports on how long those requests take to complete and updates my status page if there are issues.

The free tier only tests from a single location, but is usually a good indicator of when things are going (or starting to go) wrong. I use my uptime-robot exec plugin for Telegraf to pull stats from UptimeRobot into InfluxDB for use in dashboards.

Because I test against my origin as well as the CDN, it's usually possible to tell (roughly) where an issue lies: if CDN response time increases, but origin doesn't, then the issue is likely on the CDN.

Earlier in the week, I saw a significant and sustained increase in the latency UptimeRobot was reporting for my site, with no corresponding change in origin response times.

UptimeRobot reports increase in latency fetching from my site

This suggests possible CDN issues, but the increase wasn't reflected in response time stats drawn from other sources:

  • There was no increase in the response time metrics recorded by my Privacy Friendly Analytics system
  • The Telegraf instance (checking the same endpoints) on my LAN wasn't reporting an increase

Given that pfanalytics wasn't screaming at me and that I couldn't manually reproduce the issue, I felt reasonably confident that whatever this was, it wasn't impacting real users.

But, I decided that it would be useful to have some other geographically distributed availability system that I could use for investigation and corroboration in future.

I chose to use Amazon's Elastic Container Service (ECS) with AWS Fargate to build my solution.

This post walks through the process of setting up a serverless solution which runs Telegraf in a Fargate cluster and writes availability data into a free InfluxDB Cloud account.

Read more…

Monkeying about with Pyodide and PyScript

Earlier in the week, I saw an article about running Python in the browser via a port of the Python interpreter to WebAssembly.

I couldn't think of an immediate need for using it over Javascript, but often learn unexpected things when playing around with new technologies, so I wanted to have a play around with it.

An obvious start point for me seemed to be to see whether I could use the Python InfluxDB Client to connect to InfluxDB Cloud and query data out.

This proved to be more challenging than I expected. This post follows the process of tinkering around with Pyodide and PyScript in order to get the client running and ultimately build a semi-interactive webpage that can query InfluxDB using Python in the browser.

Read more…

Re-Seasoning a Roasting Pan

It's so easily done: you cook a nice roast dinner and someone "helps" by cleaning your roasting tray.

You find out, far too late, that they did this by putting it in the dishwasher, so the next time you see your trusty roasting pan all the seasoning's been stripped and it's a rusty mess.

Unless you're partial to added rust in your food, this is an unmitigated disaster - anything cooked in the pan is going to stick and your roast spuds will pull apart when you try and take them out of the pan.

It is, however, a recoverable disaster - the pan can be re-seasoned using much the same method as you'd use to season cast-iron cookware.

Essentially, what it involves is coating the tray in (cooking) oil, and then holding that oil at its smoke point for an extended period so that it leaves a protective residue on the base of the pan.

In this post, I'm going to re-season what was my best roasting tray. The same process can be used for cast-iron pans too.

Read more…

The Importance of Human Oversight

A few days ago, news broke that $34,000,000 had become forever irretrievable as the result of a bug in a smart contract created by AkuDreams.

It's a good example of an issue that's pervasive across multiple industries, rather than being something that's cryptocurrency specific (though I'm no fan of cryptocurrencies).

The reason that $34m in ethereum has been lost isn't simply because someone made a mistake in a smart contract: that's just the inevitable result of an underlying naivety in the smart contract model itself. As we shall see, that same naivety is all too common in the wider technology industry too.

Read more…

OSINTing the OS-INTers and The Dangers of Meta-Data

I recently tweeted a short thread having noticed an unexpected domain in my analytic system's "bad domains" list.

A "bad" domain is one that's serving my content, but is not one of my domains.

For example, if you were to download this page onto a webserver serving your own domain, then when someone viewed your copy of the page, I'd see that domain in my bad domains list. The same would be true if you instead configured a CDN (like Cloudflare) to serve my content under your name etc.

Ordinarily the list alerts me when I've made a mistake in configuration somewhere, as well as helping keep track of which Tor2Web services are active.
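The underlying check is simple: compare the domain a pageview reports being served from against an allowlist (the domains below are placeholders rather than my real configuration):

```python
ALLOWED_DOMAINS = {          # placeholders, not my real config
    "www.example.com",
    "example-onion-address.onion",
    "example.i2p",
}

def is_bad_domain(serving_domain: str) -> bool:
    """True if a pageview reports in from a domain that isn't one of mine."""
    return serving_domain.lower() not in ALLOWED_DOMAINS
```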

What I saw on that Saturday was somewhat different:

That's an unexpected domain

I'm censoring the exact domain name as identifying it in full doesn't really serve any useful purpose (although this post will use a fuller name than in my earlier tweet: part of the name is publicly discoverable anyway).

Someone had viewed a page containing my analytics at the url https://[subdomain]

This is interesting for a few reasons:

  • Cellebrite are a digital intelligence company
  • The path indicates that it's a mirrored copy of the onion
  • The filename C38EB530D1FD2C0105D250C1AB5E4319.OM20220324085844.html doesn't fit any naming convention I've ever used
  • The file doesn't exist (I did initially worry that maybe I'd been compromised)

You might have heard the name Cellebrite before: they've been in the news a number of times, with topics including suggestions that they'd sold their services to Russia and Belarus, the assistance they provided in prosecuting the tragic Henry Borel case, and claims that they helped the FBI crack the phone of the San Bernardino shooter.

More recently, Moxie Marlinspike highlighted vulnerabilities in Cellebrite's UFED product.

I already knew of the company, not least because they popped up in the Bitfi stuff a couple of years back.

With a background like that, seeing their name anywhere near my stuff couldn't help but provoke a bit of curiosity.

I reported my findings to Cellebrite (who have resolved the issue) and we'll look at their response towards the end of this post. I first want to explore the techniques used to highlight how just a little bit of meta-data can guide the discovery of so much more.

Read more…

Receiving weather information in EcoWitt protocol and writing into InfluxDB and WOW

I recently acquired an EcoWitt weather station.

It comes as a kit, consisting of an EcoWitt GW110 Gateway and an EcoWitt WS69AN 7-in-1 Weather station.

It's advertised as being able to write into WeatherUnderground as well as EcoWitt's own service, so I figured I'd probably be able to do something to catch its writes and get them into InfluxDB.

The listing doesn't make it clear, but it actually supports configuring "custom" weather services, so this proved to be extremely straightforward: it was largely just a case of building something to receive and parse the writes.
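To give a flavour of what "receive and parse" means here: the gateway POSTs form-encoded key=value readings to whatever server you point it at. The field names and units below are illustrative of the style of payload rather than a complete specification:

```python
from urllib.parse import parse_qsl

def parse_station_body(body: str) -> dict:
    """Parse a form-encoded weather-station write into usable readings.

    tempf/humidity are examples of the kind of fields sent (the station
    reports imperial units, so temperature gets converted to °C)."""
    fields = dict(parse_qsl(body))
    reading = {}
    if "tempf" in fields:
        reading["temp_c"] = (float(fields["tempf"]) - 32) * 5 / 9
    if "humidity" in fields:
        reading["humidity"] = float(fields["humidity"])
    return reading
```

In practice the receiver just sits behind a small HTTP handler, builds a reading like this and writes it on to InfluxDB.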

This post details how I did that and how, in principle, you can too (it should work with any of their weather stations).

Read more…

Life after GSuite: Two months into Zoho

We migrated from GSuite to Zoho at the end of January after Google announced that legacy Apps for Domains (AfD) accounts would be closed.

Migration away from Google (after over a decade of use) was not going to be without its pains, but (as I laid out at the time) I felt that there were too many drawbacks to GSuite/Workspace to be able to justify actually paying Google.

So, I went in search of suitable alternatives, some of which are listed in that earlier post. Microsoft's Office365 almost won the day, but was scuttled by their insane choice to require GoDaddy be the registrar for any vanity domains.

Ultimately, I opted to go with Zoho and documented the process of migrating our accounts and data over.

We've all read (if not written) posts about migrations: they're full of optimism, naivety and excitement, but never tell the story of life with the new provider (because the author has not yet experienced it).

With Google's AfD deadline looming, I thought it might be useful to write about my experience after the first couple of months of being on Zoho.

Read more…

Multi-homing a site between the WWW and an I2P Eepsite

I recently gave a high-level overview of some of the things I needed to address in the process of multi-homing my site to make it available on I2P.

Although I2P presents some new challenges, some of the considerations were the same as when multihoming between Tor and the WWW.

Although I had originally intended to publish a generic multi-homing how-to, it's not really possible because the multi-homing process can be quite site specific. Instead, this post is more of a deep-dive into the process to show some of the things you need to consider when publishing an existing site onto the I2P anonymous overlay network. More than a few of those things will likely improve your www site too.

Parts of this post can also be used to set up a brand new eepsite, though it's assumed you've already got something listening on port 80: this post doesn't go into installing nginx.

Read more…

is now available as an Eepsite

I've long felt that it's important to offer a privacy preserving route to view my content - I've no particular need or desire to know who's visiting, and I do sometimes carry content/documentation that censorious states would prefer their citizens not be able to reach.

As a result, the site has been available as a Tor Onion service since 2015 - though admittedly the address changed during that time as a result of Tor shifting to only supporting V3 Onions.

However, Tor is not the only anonymous overlay network, and it's always been my intention to try and add support for I2P too - it's just taken a bit longer than expected to get around to it.

My site is now available as an eepsite, you should be able to reach it at either of the following two addresses

  • gdncgijky3xvocpkq6xqk5uda4vsnvzuk7ke7jrvxnvyjwkq35iq.b32.i2p
  • bentasker.i2p

There are probably some improvements I can make to improve delivery speed, but the eepsite is up and running.

Recent changes to the embed code mean that my video content can be watched on I2P without need for an outproxy. My privacy friendly analytics has also been updated to use I2P as a transport, so that I can spot I2P specific delivery issues and (hopefully) address them.

Read more…

Repairing a punctured football

Footballs are meant to be kicked around in the centre of a big grass field - they're not really supposed to be booted anywhere that has sharp stuff. But, kids will be kids, so inevitably they end up in brambles and similar.

If you're unlucky, then they'll manage to puncture the bladder and the ball will no longer hold air. I got presented with such a ball and asked if I could repair it.

Footballs aren't particularly expensive - I could probably have replaced it for less than £10, but that'd mean the old one contributing to landfill.

Still, I was sorely tempted to chuck it - this ball is cursed. Despite the worn appearance in the photo, it was only actually 2 days old - first it wouldn't scan through at the shop, then it ended up in an electricity substation. Hours after UK Power Networks kindly returned it, it ended up in brambles.

Still, I figured the environmentally conscious thing was to at least have a go at repairing it.

Although there are kits (essentially gunk you inject into the ball) available online that claim to be able to repair punctured balls, reviews of them are extremely poor, and they often say they don't work for balls with a bladder.

This post details the process I went through to repair the puncture.

Read more…

Прекратить войну (Stop the War)

Эта статья размещена как на английском, так и на (плохо переведенном) русском языках.

This article is posted in both English and (poorly translated) Russian.

Всю последнюю неделю или около того я (как и другие, гораздо более способные, чем я) пытался продвигать информацию в Россию, чтобы русский народ мог увидеть правду о том, что происходит. Несмотря на то, что он небольшой, я вижу русский трафик на этом сайте (и кто знает, откуда берется трафик скрытых сервисов), поэтому решил оставить сообщение и здесь.

Это сообщение доступно по адресу

For the past week or so, I (like others far more capable than me) have been trying to push information into Russia so that the Russian people may see the truth of what is happening. Whilst it's small, I do see Russian traffic on this site (and who knows where the hidden service traffic originates), so decided I should put a message here also.

This message is available at

Мы знаем, что российское руководство лжет своим людям о войне на Украине, даже отрицая, что война есть (а не «спецоперация»). Мы знаем, что подконтрольные государству СМИ повторяют ложь правительства, а те, кто осмеливается говорить хоть часть правды, удаляются из эфира (как «Эхо Москвы» и Дождь).

We know that the Russian leadership is lying to its people about the war in Ukraine, even denying that there is a war (rather than a "special operation"). We know that state controlled media is repeating the Government's lies and that those that dare speak even some of the truth are taken off air (like Echo Moskvy and Dozhd/RainTV).

Read more…

Another blocklist allegedly misused - this time by OnlyFans

Although they're currently only allegations, a claim has arisen that OnlyFans misused a terrorist blocklist to blacklist stars using rival services.

Whilst I appreciate that most of those campaigning for mandatory age-verification are unlikely to feel much sympathy for adult performers, it's important to look beyond the profession of those affected and understand the underlying issues.

The list (ab)used was implemented with the very best of intentions: preventing the dissemination of terrorist recruitment material and propaganda via social media networks.

There's overlap with the Online Safety Bill here, because lists like this are one of the mechanisms which would need to be used to control the flow of content in order to try and comply with the duty of care that the bill imposes.

This isn't the first time that a well-intentioned censorship mechanism has been abused, and (unless human nature changes) it certainly won't be the last.

Rather than simply listing examples, this post is intended to look at the ways by which censorship mechanisms ultimately end up diverging from their stated purposes. For convenience, I'll use the term "misuse" as a substitute for "used in ways not originally intended" - it's not intended to imply judgement over the eventual use.

It's also not my intention to deny that there are wider issues which need to be addressed. It's all too clear that there are serious problems and that many providers aren't doing enough, but we still need to ensure that measures are proportional and effective. Understanding the failure modes of censorship is absolutely essential to that.

Read more…

Updating my IHD to graph device category energy usage and cost

A little while ago, I posted about creating an In Home Display to track our energy usage. It allows us to see an overview of our current and historic usage (much like the IHD supplied with a Smart Meter would, but without all the negatives of having a remotely addressable meter).

I've recently acquired some TP-Link Tapo P110s which I'm using to track energy usage on appliances such as our dishwasher and washing machine.

In my previous post about creating the IHD, I mentioned that in future, I wanted to add an additional page/view to show usage per device.

I'm monitoring consumption on a range of devices and at 748x419 the display's resolution is quite space constrained, so allowing selection of individual devices could be challenging (and realistically, I can always look them up in Chronograf anyway).

I decided instead to add a pane to show usage by category of device.

Appliance Category Usage

Because the Tapo (and my older Kasa plugs) don't track cost, there were some challenges around implementing the cost per day graph without hardcoding costings.

This post discusses how I constructed the interface and (more importantly) the queries.

Read more…

How much more efficient is a Washing Machine's Eco cycle?

I recently did some exploration of dishwasher power usage to see how Eco mode compared to Normal mode in terms of power consumption.

At the end I noted

I am curious to see whether the same holds true for the washing machine - our machine doesn't have an explicit Eco mode, but it does have a short-cycle

First, a correction: It turns out our washing machine (a Bosch Serie 4) does have an eco-mode, it's just not well marked.

As before, I'm using some Tapo P110s to meter usage, and writing the data into InfluxDB for easy analysis.
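The underlying arithmetic is simple enough: with a stream of wattage readings, the energy a cycle uses is the area under the power curve. As a rough illustration (the sample format and function name are mine, not from the post), a trapezoidal sum over (seconds, watts) samples:

```python
def energy_kwh(samples):
    """Approximate energy in kWh from a list of (seconds, watts) samples.

    Uses the trapezoidal rule: each pair of adjacent samples contributes
    the average of their power readings multiplied by the elapsed time,
    giving watt-seconds, which are then converted to kWh.
    """
    total_ws = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        total_ws += (t1 - t0) * (w0 + w1) / 2.0
    return total_ws / 3_600_000  # 1 kWh = 3,600,000 watt-seconds


# A constant 2000W draw for one hour should come out at 2 kWh
samples = [(0, 2000), (1800, 2000), (3600, 2000)]
print(round(energy_kwh(samples), 3))  # 2.0
```

In practice InfluxDB can do this aggregation server-side, but the sketch shows why higher sample rates matter: a spiky heating element is poorly approximated by sparse readings.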

There are many, many more options on a washing machine than a dishwasher - if I were to try and test them all, I'd still be at it next Christmas. So, to see how they compare, I'm going to constrain myself to just a few:

  • 60 degree (centigrade) cotton cycle
  • 40 degree cotton cycle
  • 40 degree cotton cycle with eco mode enabled
  • 30 minute short-cycle (runs at 30C by default)
  • 15 minute short-cycle (runs at 30C by default)

The same load was used throughout - it was given a run through beforehand to ensure it was soaking wet at the start of every run (so that the first run didn't get a weight advantage by having dry clothes).

Read more…

How much more efficient is a Dishwasher's Eco cycle?

Our dishwasher (a Hotpoint Experience thing) has an Eco-cycle - the idea being that it does $things in order to save energy.

Quite some time back, a repair engineer told me that Eco was a false economy because it ran things twice as fast/hard to be able to run for less time. That explanation's never sat particularly well with me - it'd be the same amount of work/energy, just compressed into a shorter time.

As I'm on a bit of an energy saving kick anyway, I was curious to see just how much difference Eco mode actually makes compared to a normal cycle. As I've previously set up to monitor our electricity usage I figured it should be relatively easy to check.

Both runs had exactly the same load in them - I don't expect it makes too much difference in practice, but seemed an easy thing to control.

I don't have a water meter, so wasn't able to check whether the Eco mode also uses less water (it's quite possible that it does, so that the smaller volume of water can be heated to the same temperature with less energy).

Read more…

Scanners, Old hardware and The Environment

I remember being told (in fact, you still hear it said today) that Linux has problems with hardware support - you'll buy kit and find you can't use it on Linux. It's not really been true for quite some time, but an experience over the weekend made me think about it.

I have a Canon CanoScan LiDE 60 scanner, which I've owned for somewhere between 12 and 15 years. It's about as portable as a flatbed can really get, and still up to scratch for the vast majority of scanning needs (I have actually since got a Brother DS-740d for more portable needs, but some things really do need a flatbed.)

The Canon CanoScan LiDE 60

Even back when I bought the Canoscan, using it on Linux wasn't much of a challenge.

Although Canon didn't support Linux, SANE supported it out of the box, so with a simple apt-get (actually, it's possible it might have been an rpm -i back then) I was up and running. Despite multiple desktop replacements in the ensuing decade, I've never really thought much about being unable to use it: until this weekend.

Read more…

Fixing a stopcock that leaks near the gland nut

I'm not a plumber, in fact, I generally try to avoid jobs that involve plumbing - I don't like the idea that a single mistake could lead to a slow, steady drip that eventually costs you a ceiling (or expensive damp problems), so I leave most plumbing jobs to the professionals.

But, I found water on the floor around one of my water supply stopcocks.

Dealing with it is a pretty straightforward job (I didn't even need to turn the main off) and takes about 10 minutes - it's hard to justify calling a plumber out for that, and you obviously don't want to leave it to get worse and/or ruin everything in its path.

This post details the process of repacking a stopcock (water shut off valve to the Americans) gland nut - if I can do it, then anyone with a couple of spanners can.

Although this is on a main supply stopcock, the same process can be applied to stopcocks elsewhere in the house - whether it's a shower isolation tap or something else.

Read more…

Migrating from GSuite to Zoho

Yesterday I wrote up some of my thoughts on Google's shutdown of legacy Apps for Domains accounts, and in the process of that largely settled on moving us over to Zoho's offering.

I've never been one to hang around, so today I started the process of migrating us from GSuite (sorry, Google Workspace) to Zoho's email and productivity suite.

This post details some of the prep I did, as well as some of my observations along the way.

Read more…

The Pains Involved In Moving on from Google Apps for Domains

Google have announced that legacy Apps for Domains (AfD) accounts will be closed. Back when Google was pushing them (hard), these free accounts allowed you to use Gmail with your own domain (often now called a vanity domain) as well as Google's suite of tools.

I, apparently, have been an AfD user for 11 years

My account

Curiously, the Gsuite Legacy subscription only claims to date back to 2013 - I guess there was some kind of change on Google's end around then (presumably renaming it from Apps for Domains to G Suite).

Although the free accounts are going away, Google continue to offer this functionality on a paid basis in the form of Workspace (previously G Suite). Apps for Domains users will receive a discount for the first year (So the Google Workspace Business Starter tier will be $3/user/month).

To be fair to Google, they stopped providing free accounts in 2012, and have continued to support AfD free accounts for 10 years since then. But, to be fair to the users (including me), that doesn't mean they can't be criticised for bringing it to an end (especially ham-fistedly and on relatively short notice).

Like a lot of other AfD users, I've still not actually received a notification of the impending change - there's been no email and there's no notification in my domain dashboard. The only clue (other than the news stories) is that the "Upgrade" page now offers the migration prices.

I wrote some time ago about the steps I was taking to break the Google addiction, so this felt like a prompt to look at other options within the market.

This post shares some of the things I've looked at/considered, as well as an overview of the things that make these Google accounts particularly "sticky".

Read more…

What will Web3 actually deliver?

It's increasingly impossible to avoid the hype around Web3 and, depending on who you speak to, it's going to deliver different things

  1. Easier monetization by content creators
  2. Improved privacy
  3. Decentralisation: Removal of the centralisation onto gatekeepers/platforms that occurred with Web 2.0
  4. A trustless, self-governing model

I find that Web3 presents an interesting conflict within me. I've long identified with many of the ideals held by Cypherpunks, including building anonymous tools to defend privacy. One of the tools explicitly called out, even back in 1993 when the Cypherpunk's Manifesto was written, is cryptocurrency

We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money.

On the face of it, I should support Cryptocurrency and Web3 and yet, I just don't buy it.

I've had discussions about Web3 on Reddit as well as various forums lately, and of course, there's the recent fantastic analysis by Moxie Marlinspike.

I thought it might be helpful (to me) to put some of those thoughts into some sort of order. This post is going to explore some of the claims around Web3 and the issues I see with them.

Read more…

Mailarchive has been discontinued

I launched Mailarchive back in 2014. As well as hosting mirrors of mailing lists such as tor-talk and cypherpunks, it also hosted mail based notifications derived from multiple sources (such as RSS feeds, lists etc) like my CVEs list.

However, I've taken the decision to take it offline - this post details the rationale behind that choice.

Read more…

Designing privacy friendly analytics

It was only two weeks ago that I wrote

but I've whittled the amount of javascript on the site right down, and don't really relish the thought of increasing it. Nor do I particularly like the idea of implementing probes that can track user movements across my sites, when all I currently need is aggregate data.

Unfortunately, that set my brain wandering off thinking about what scalable privacy friendly analytics might look like - which inevitably led to prototyping.


The system is supposed to be privacy friendly, so there are some fundamental rules I wanted to abide by

  1. Actions and time are the primary focus, not the user - we don't need to record user identifiers
  2. The system should be lightweight and collect only that which is needed
  3. There should be no ability to track a user cross-site (even if the analytics is used on multiple sites)
  4. The default behaviour should make it all but impossible to track user movements within a site

The aim being to strike a balance where we can collect the data required to maintain/improve a site, with as little impact on the user's privacy as possible.

Whilst I trust me, there's no reason users should have to, and we should assume that at some point, someone less trustworthy will find a way to access the stored data: the less identifying data available, the less use it is to them.
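Rule 1 can be made concrete with a small sketch (entirely hypothetical - this is not the actual implementation): if events are keyed only on action and time bucket, there's simply nowhere for a user identifier to live:

```python
from collections import Counter
from datetime import datetime, timezone


class AggregateAnalytics:
    """Sketch: count actions per hourly time bucket - no user identifiers stored."""

    def __init__(self):
        self.counts = Counter()

    def record(self, action, when=None):
        when = when or datetime.now(timezone.utc)
        # The key is (time bucket, action); the visitor never features in it,
        # so two users viewing the same page in the same hour are indistinguishable.
        bucket = when.strftime("%Y-%m-%dT%H:00")
        self.counts[(bucket, action)] += 1


analytics = AggregateAnalytics()
analytics.record("pageview", datetime(2022, 2, 1, 10, 30, tzinfo=timezone.utc))
analytics.record("pageview", datetime(2022, 2, 1, 10, 45, tzinfo=timezone.utc))
print(analytics.counts[("2022-02-01T10:00", "pageview")])  # 2
```

Because the stored data is already aggregated, a later compromise of the datastore yields counts rather than browsing histories.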

Read more…

Attempting to control Youtube access on Android

It's a problem that our parents didn't really have to contend with - easy, unlimited access to a massive library, driving huge amounts of screen time.

We used to get complaints about the amount of time spent watching TV (or on a gameboy), but the library available to us was quite limited, so there was a point where you just stopped watching and did other things for a while (assuming we weren't outright kicked out of the house and sent to the park).

The Problem

Now though, not only is content just a click away, but it's actively pushed to us and our kids.

It's not just the distribution mechanisms, social media is deliberately designed to be immersive and even addictive. In ye olden days, we'd get an episode of what we wanted, but then something completely unrelated would come on - nowadays the approach is much more take this, this, this, oh and you might be interested in this. It's very easy to lose track of the time you've spent, even as an adult.

Littlun has, over time, developed something of a Youtube habit.

That's led to some good conversations about not over-trusting content creators, which seem to have been well absorbed (in the sense that the content being watched is more appropriate, even if I do think some of the streamers are complete wazzocks). We've also had conversations about the importance of talking to an adult if something unpleasant/inappropriate comes up.

So, my concern now isn't so much the content as the time spent on Youtube (given the chance).

That's a much harder issue to resolve through conversation alone, as it's easy to be unaware of time spent absorbed, and services like Youtube are designed to exploit that.

What this means is that as well as conversations, some technical measures are required, including

  • a prompt/reminder about the time spent
  • a means to block access

The latter being the "big stick" that I can reserve use of, in order to encourage mini-me to pay a bit more attention to the former.

Blocking Youtube on the LAN is simple (Pihole to the rescue), but it's an incomplete solution. At some point, Littlun'll notice that Youtube works when out and about and realise that the block can be circumvented at home by turning wi-fi off.

This post details the ways I've looked at to help control/restrict Youtube access on Android in a way that doesn't simply disappear with a change in network connection.

Read more…

Tracking My Website Performance and Delivery Stats in InfluxDB

Earlier this year, I moved this site from being served via my own bespoke Content Delivery Network (CDN) to being served via BunnyCDN.

It was, and remains, the right decision. However, it did mean that I lost (convenient) access to a number of the stats that I use to ensure that things are working correctly - in particular service time, hit rate and request outcomes.

I recently found the time to look at addressing that, so this post details the process I've followed to regain access to this information, without increasing my own access to PII (so, without affecting my GDPR posture), by pulling information from multiple sources into InfluxDB.

Read more…

New Site Live

I've migrated my site from Joomla! to Nikola - moving from having active code exposed to visitors, to simply serving static files.

As with any migration of this size, there'll be things I've missed, and there are a few things where I've kicked the can down the road a bit

  • Search is not yet implemented
  • The template uses Javascript (this will be made optional in future)

At some point soon, I'll write up a post detailing how I migrated, and why

Update: that post can be found at Migrating from Joomla to Nikola

Tracking my remaining AAISP Data Quota with Telegraf

Some time back, I switched our internet connection over from BT to Andrews & Arnold. Although the quality and reliability of our service has improved immensely, it does mean I've had to get used to us having a quota (generous though it is) rather than being "Unlimited".

AAISP make checking this pretty easy - you can simply go to their homepage and it'll display there.

However, they also expose a JSON API to check it

curl -s -L --header "Accept: application/json" | jq
{
  "monthly_quota": 5000000000000,
  "monthly_quota_gb": 5000,
  "quota_remaining": 9026529201951,
  "quota_remaining_gb": 9026,
  "quota_status": "green"
}

(AAISP let you roll over half of your unused quota each month, which is why quota_remaining is higher than monthly_quota)

So I wanted to configure Telegraf to poll this periodically and write it into InfluxDB.

This post details the steps I followed
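As a sketch of the sort of config involved (the URL below is a placeholder - the real endpoint is elided above - and the field handling is an assumption), Telegraf's `inputs.http` plugin can poll a JSON API directly:

```toml
# Hypothetical sketch: poll the quota endpoint every 15 minutes and write
# the parsed JSON fields into InfluxDB via Telegraf's configured outputs.
[[inputs.http]]
  urls = ["https://example.invalid/quota"]  # placeholder for the AAISP endpoint
  method = "GET"
  data_format = "json"
  name_override = "aaisp_quota"
  interval = "15m"
  # quota_status is a string ("green" etc), so must be kept as a string field
  json_string_fields = ["quota_status"]
  [inputs.http.headers]
    Accept = "application/json"
```

The numeric fields (quota_remaining and friends) are picked up automatically by the JSON parser, so no further mapping should be needed.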

Read more…

Creating an In-Home Display (IHD) to view real-time energy usage

A month or two back, I put up a post detailing how I was capturing information on our energy usage into InfluxDB. The natural extension of that was to create an In Home Display (IHD) that displays current and historic usage.

Some time back, I created a Music Kiosk using a Raspberry Pi and a touchscreen, so it made sense to adjust that in order to fulfil both needs.

This post details the steps I took to have my kiosk run Flux queries against my InfluxDB instance to retrieve energy consumption data, and then graph it using Flot.
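As an illustration of the kind of query involved (the bucket, measurement and field names here are my assumptions, not the actual schema), fetching the last 24 hours of usage at half-hourly resolution in Flux looks something like:

```flux
// Hedged sketch: downsample the last day of wattage readings so the
// kiosk's graph has a manageable number of points to plot.
from(bucket: "home_energy")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "power_usage" and r._field == "watts")
  |> aggregateWindow(every: 30m, fn: mean, createEmpty: false)
```

Keeping the aggregation in the query means the Pi only ever handles a few dozen points, which matters on constrained hardware.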

Read more…

Making a Polished Concrete Table

I've never really played around with creating things using concrete, but you see some awesome polished concrete creations.

We wanted a small coffee table for our decking, it'd need to survive whatever our British weather could throw at it so something concrete seemed ideal.

Although I had a specific use for it in mind, I still considered it experimental - odds are that you'll screw up the first one, so it's worth trying a few things.

This blog post details the process I followed, presented as a "How to".


Read more…

Triggering HomeAssistant Automations with Kapacitor

In an earlier post, I described how I've set up monitoring our home electricity usage using InfluxDB.

However, I thought it'd be good to be able to have this interact with our existing Home Automation stuff - we use HomeAssistant for that.

In my earlier post, I described using Kapacitor to generate alert emails when the tumble dryer was finished, so in many ways it made sense to make this an extension of that. TICK scripts support calling arbitrary HTTP endpoints via the HTTPPost node, and HomeAssistant allows you to control sensors via HTTP API, so it's reasonably straightforward to implement.
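As a rough sketch of the shape of such a script (the measurement, field names and webhook ID are all assumptions - the post itself describes using the HTTPPost node against HomeAssistant's HTTP API), an alert that fires a HomeAssistant webhook when the dryer's draw drops to idle might look like:

```
// Hedged TICK script sketch: when the dryer's power draw falls below 5W,
// POST to a HomeAssistant webhook so an automation can react.
stream
    |from()
        .measurement('power_usage')
        .where(lambda: "device" == 'dryer')
    |alert()
        .crit(lambda: "watts" < 5.0)
        .stateChangesOnly()
        .post('http://homeassistant.local:8123/api/webhook/dryer_finished')
```

`.stateChangesOnly()` keeps Kapacitor from hammering the endpoint on every matching point - the webhook only fires when the alert level actually changes.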

Read more…

Monitoring our electricity usage with InfluxDB

There are various solutions on the market for monitoring power usage, ranging from smart meters with in-home displays (IHDs), to clamp meters linked to similar displays.

What the majority have in common, though, is a lack of granularity.

They'll commonly show you how much you've used so far today and how much you used (all day) yesterday (and maybe this week).

But, they often lack the ability to drill down further than that. This denies the user the ability to identify why usage is high (does it jump at a certain time of day, or does it grow almost linearly through the day?).


Smart Meters

The widely touted claim that smart meters enable us to reduce consumption is itself questionable:

  • the supposed benefits don't come from the meter, but from the IHD. You can have a working IHD without the need for a Smart Meter
  • However you monitor your usage, there really is a limit to how much you can reduce it

But, even ignoring this, the real issue is that they don't expose the data in a way that allows you to best act upon it. Instead you're left turning stuff on and seeing how much the IHD's reading jumps.


Cloud Solutions

There are a variety of Cloud based solutions to choose from, but after reading around, I decided to order a cloud-linked clamp meter from the Owl Intuition-e series:

Owl Intuition sales picture

The key selling point to me was that it can be told to also send usage updates to a specific local IP - so even if the cloud service proved not to be up to scratch, I figured I could probably implement something.

Despite the (relative) triviality of making a good interrogable system, the Owl Intuition cloud interface turned out to be pretty limited - it does let you drill down over the last week, but beyond that you can only view per-day stats.

Owl Daily trend
Owl last month

This is better than your average IHD, but still really limits your ability to investigate usage (if you get a large bill, you probably want to be able to dig into at least the last month with reasonable granularity).

There is an Android app... but it's horrifically limited, you can view current usage and that's it (so no pretty graphs). Barely worth the effort of installing really.

The service also lacks the ability to do things like monitor specific plug outlets (as far as I've been able to find, OWL don't sell any smart plugs that interact with Intuition) and/or generate alerts based on usage.

So, it very clearly was time to build my own.

Read more…

Making a double sided shelf

Whilst I was making the shelf shown in "distressing wood to make a shelf", littlun asked if I could make them one too.

Obviously, there's a bit of a difference in tone/feel between my office & littlun's room, so I didn't use the same approach.

I also decided to hedge my bets a little - decorating the shelf differently on each side, so that if one side wasn't right, the other might have a chance. One side goes for a distressed wood effect, whilst the other goes for a mottled mix of red and black (the balrog effect...), similar to the look I achieved making a back for my desk

Desk privacy guard

This post details the process I went through to create the shelf.

Read more…

Distressing Wood to make a shelf

On reflection, this probably isn't the best example to lead with - the effect doesn't photograph quite as well as something using paint/stain (I wanted to keep the wood's colour), but the techniques used are the same.

I decided that I wanted another shelf up in my office - I've some nice Victorian train style shelf brackets, and plenty of scrap wood to call upon.

By luck, I found a length of pressure treated 2x4 that was already the perfect length.

But, it did look quite a lot like I'd taken a piece of scrap timber and bolted it to the wall (funny that...)

Timber screwed to a wall

Functionally, it's a shelf, but it really is quite rough. What I quite like, though, is the mix of colours - along with the wood's natural mix of colours, parts of it have a slightly green hue (because it's pressure treated).

So, I decided I'd have a go at distressing it - making it look like it was actually a shelf, but had seen some life.


Read more…

Barclays Online Banking gives 3rd Parties access to login pages

Banks aren't exactly known for living on the bleeding edge - even where good security practice moves on, they tend to be years behind. For better or worse, they lean toward preferring stability and consistency over chasing the latest and greatest.

However, this issue doesn't really fall under that traditional niche of "well, banks will be banks".

Barclays bank (and others) are giving 3rd party scripts access to their Internet Banking login pages - the result is that a compromise or mistake at their supplier could compromise their customers' login credentials.

I highlighted this issue a few months back, and Barclays replied with "deliberate, not an issue" (paraphrasing a bit there), so I'm now getting around to writing it up.


Read more…

Amazon Blocks FLoC across most sites

Google's Federated Learning of Cohorts (FLoC) isn't exactly noted for its popularity.

The company claims that FLoC will improve privacy, though various researchers disagree (and there are issues that have remained unaddressed for years).

For those who're not up to date: the stated aim of FLoC is to replace tracking via 3rd party cookies with an engine within the browser that profiles your browsing habits and adds you into a cohort of users with similar behaviour - advertisers then advertise to you based on your cohort ID (I wonder why the idea of a browser tracking your habits for advertising purposes hasn't won hearts and minds in the way they wanted...).

News has broken (via Digiday) that Amazon have blocked FLoC from operating on (most of) their domains - the exception seems to be Abebooks.

Because it's driven by a HTTP response header, we can trivially confirm for individual domains:

curl -v -o/dev/null 2>&1 | grep permis
< permissions-policy: interest-cohort=()


Read more…

Sparkler Bombs...

Firstly, to deal with the obvious: the term sparkler bomb is a bit of a misnomer - the burst isn't contained, so there's no explosion, just a large woosh. There are, of course, ways to contain them and make a bang, but doing so is (frankly) twattish and far, far less fun (even before it goes wrong and puts you in A&E).

Secondly: this post is offered as a bit of fun, not as an instructable - if you're silly enough to try and recreate (or better) my mischief, then the consequences lie with you and you alone.

Anyway, moving on...

One of my earlier memories of being on the internet, was delight at finding pages talking about creating sparkler bombs. Pages much like this post (in fact, I'm all but certain that was one of them, I remember the humour and definitely remember the imagery).

Much like any obsession on the earlier web, I only had photos to go on (Youtube wouldn't be created, let alone mainstream, for years - even where videos were recorded, they were shared as framegrabs).

The photos, though, showed some fairly spectacular results:

Sparkler Bomb Picture from

That blue line is an artefact of the CCD in the camera the image was captured on (i.e. it's not really there), but it does nothing but add to the effect.

At the time, I couldn't possibly have built a sparkler bomb myself - being too young to buy the things was a surmountable obstacle, but not having the funds to buy them in the first place was not. And so, some things that should not have been forgotten were lost - at least for a time.

Actually, I have periodically thought about them - usually when handed a sparkler - but the thought's slipped from my mind well before being able to act on it.

Recently though, I had need for a couple of small sparklers (think of things you put on a cake), and had the rest of the pack left over. Being mini sparklers it was never going to be anything near as spectacular as the image above, but nowadays we do have an availability of cheap video cameras to watch things in slow-mo so I figured it'd still be interesting to try.



Read more…

Making my books freely available

Nearly a decade ago, I self-published a couple of books on the Kindle store: Linux for Business People and A Linux Sysadmin's guide to mischief.

Since then, I'd largely forgotten about them, until sorting through some files today.

They're pretty outdated (and weren't that great back then), but I figured as they've served their original purpose, I'd make them freely available:


Linux for Business People A Linux Sysadmin's guide to Mischief

Both come from those happy, happy days before SystemD inserted itself onto our systems...

FLoC disabled on my sites

Cookies have been viewed as the enemy for quite some time, with the result that 3rd party cookies are (quite rightly) being treated with high levels of suspicion.

Unfortunately, the focus being on cookies rather than the tracking/profiling that they enable has left an opening for the unscrupulous to offer a cookie-less alternative.

Enter Google, who a while back announced they were building something called Federated Learning of Cohorts (FLoC) into Chrome. The basic underlying idea of FLoC is that it assigns the browser a cohort ID - grouping it in with other browsers that have a similar browsing history.

The browser's history never leaves the browser: the cohort ID is calculated locally (updating once per week, based on the previous week's browsing). Websites can then query the browser for its cohort ID (by calling document.interestCohort()) and serve appropriate ads based on the ID returned.

However, deeper inspection has shown that rather than solving privacy issues, FLoC simply presents new ones - in fact there's an obvious vector in the paragraph above - your cohort ID is the same across all sites you visit...

Plus, although I say new, some of these issues were highlighted in 2019 and remain unaddressed.


Multiple groups have identified that FLoC can be used in fingerprinting, for example:

  • A site that a user logs into can link their credentials and cohort ID
  • A government site may identify a cohort ID that commonly contains dissidents and can link this ID to the IP of any cohort member who visits a government site
  • Users with a specific medical condition may get grouped into a cohort - while it may not be possible to identify the users it's fairly likely they wouldn't consent to being targeted based on that condition

There are many, many writeups on the issues with FLoC (many linked to from here) that do a better job of covering this than I can here.

To summarise, though, Google's only defence is to present a false dichotomy - they argue that FLoC is better for privacy than 3rd party cookie based tracking. This rather ignores that that tracking is already being killed off by browsers - we could instead opt for a world without either (not-so-secret option c).


Read more…

Removing Ads from my Sites

(It occurs to me that publishing this on 1 Apr isn't the best move - rest assured this is genuine)

I've long felt uncomfortable with the privacy trade-offs of having advertising on my sites.

Shortly before GDPR came into effect, I wrote a post detailing how I was, once again, revisiting the decision of having ads on my site.

The decision then, as before, was that the ads were a necessary evil as the revenue they generate contributes something to the running costs of this site, helping keep over a decade's worth of work online.

Today, however, I'm changing that decision and removing Google's Adsense from all of my sites.

Read more…

Automating Our Heating

A little while ago, I wrote some musings on Home Automation and made reference to our heating setup.

As it's had a bit of time (and some poor weather) to run and be improved upon, I thought it might be helpful/interesting to lay out a bit more detail on the setup I'm now using.

We got a NEST thermostat during an unexpected boiler replacement. Unfortunately, its smart features didn't live up to expectations, and trying to overcome that led me down the path that I'll describe in this post.

Read more…

Launching the "House Stuff" blog category

Not the most exciting news I'm sure, but I'm adding a "House stuff" blog category to my site.

I've been feeling a bit... meh... of late, partly because I've not had opportunity to write anything here in a while. Part of the reason for that is that I've been focused on various home improvements/tweaks, not all of which fit well into more tech related sections (though there is some overlap).

The aim of this category is to give me a cathartic outlet, even if I'm just reinsulating or building a door.


So Long WhatsApp

For years, I refused to install WhatsApp messenger because I had customer contact details and other information on my phone.

Eventually, I made a concerted effort to clear all that out, with the side effect that I could then install WhatsApp, based in part on their promise that personal data - names, addresses, internet searches or location data - would not be collected, much less used.

When WhatsApp was acquired by Facebook, it was inevitable that that promise was going to get broken. Something all the more apparent when one of the WA founders left Facebook as a result of a disagreement about privacy.

So, disappointing as it is, the recent news really was quite inevitable.

WhatsApp have pushed a notification of a change in their terms and conditions - the new changes allow them to share data with Facebook, including (but not limited to)

  • User's phone numbers
  • User's contact lists (see below)
  • Profile information
  • Status information
  • "Diagnostic data"  - what phone model you're using, what networks you're on etc
  • Location data
  • "User content"
  • Details of purchases made with businesses using WhatsApp, including Financial Information
  • "Usage data"

These changes will come into effect from 8 Feb 2021 - if you disagree with the changes, then the only recourse is to delete your account before then (which probably isn't GDPR compliant, but Facebook tend not to worry about that).



Read more…

Musings on Home Automation

I've dabbled with elements of Home Automation in the past.

In a previous rental, we only had storage heaters, so I equipped each room with an Oil Radiator and an energenie RF plug socket (like these), using a Raspberry Pi and the Energenie remote control header board to set up an effective heating schedule.

However, aside from that, and mild "wouldn't it be nice to..." ideas, I've not really been overly interested in it until relatively recently.

Having spent a bit of time dabbling, I thought I'd write a post on my experience - not least in case it helps people with some of the things I struggled with.



Read more…

Tuning Pi-Hole to cope with huge query rates

As some may already know, I offer a small public encrypted DNS service, offering DNS resolution via DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT).

The setup I use is an evolution of the one I described when I published Building and Running your own DNS-over-HTTPS Server 18 months ago, providing ad and phishing blocking as well as encrypted transport.

It was never intended that my DNS service take over the world, in fact, on the homepage it says

A small ad and phishing blocking DNS privacy resolver supporting D-o-H and D-o-T .... This service is, at best, a small hobby project to ensure that there are still some privacy-sensitive DNS services out there.

Not all nodes in my edge even run the DNS service.

The service has always seen some use - much more than I really expected - with queries coming in from all over the globe, and query rates are pretty respectable as a result.

However, recently, query rates changed, and there was such a seismic shift in my daily graphs that the previous "routine" usage started to look like 0:

Daily query rate graph

I'm omitting figures and dates out of an abundance of caution, but the lines represent usage across different days (the vertical grey lines each denoting a day)

You can see that usage increased by several orders of magnitude (the turquoise line is the number of advertising domains blocked, so usually increases roughly proportionately).

The change in traffic rates triggered a few things:

  • Alarms/notifications from my monitoring
  • Notifications from some of my connectivity providers to say they were mitigating an apparent DoS

This post is about the (very few, actually) minor things I found I needed to do to ensure Pi-Hole could keep up with the new load.


Read more…

Onion V3 Address is live

My site has supported using V3 Onions at the transport layer for quite some time, having implemented Alt-Svc headers to allow Tor to be used opportunistically back in October 2018.

What I hadn't got around to, until now, was actually supporting direct access via a V3 hostname. I'd put a reasonable amount of effort into generating a personalised V2 address, and making sure it was documented/well used.

However, V2 Onions have been deprecated, and will start generating warnings in a month. Total discontinuation of V2 support is scheduled for July 15th 2021.

So, I figured I should get V3 support up and running, and have today launched the service.



Read more…

A comparative analysis of search terms used on my site and its Onion

My site has had search pretty much since its very inception. However, it is used relatively rarely - most visitors arrive at my site via a search engine, view whatever article they clicked on, perhaps follow related internal links, but otherwise don't feel the need to do manual searches (analysis in the past showed that use of the search function dropped dramatically when article tags were introduced).

But, search does get used. I originally thought it'd be interesting to look at whether searches were being placed for things I could (but don't currently) provide.

Search terms analysis is interesting/beneficial, because they represent things that users are actively looking for. Page views might be accidental (users clicked your result in Google but the result wasn't what they needed), but search terms indicate exactly what they're trying to get you to provide.

As an aside to that though, I thought it'd be far more interesting to look at what category search terms fall under, and how the distribution across those categories varies depending on whether the search was placed against the Tor onion, or the clearnet site.


This post details some of those findings, some of which were fairly unexpected (all images are clicky)

If you've unexpectedly found this in my site results, then congratulations: you've probably searched a surprising enough term that I included it in this post.




Read more…

Onion Location Added to Site

My site has been multihomed on Tor and the WWW for over 5 years now.

Over that time, things have changed slightly - at first, although the site was multi-homed, the means of discovery really was limited to noticing the "Browse via Tor" link in the privacy bar on the right hand side of your screen (unless you're on a mobile device...).

When Tor Browser pulled in Firefox's changes to implement support for RFC 7838 Alt-Svc headers, I added support for that too. Since that change, quite a number of Tor Browser Bundle users have connected to me via Onion Services without even knowing they had that additional protection (and were no longer using exit bandwidth).

The real benefit of the Alt-Svc method, other than it being transparent, is that your browser will receive and validate the SSL cert for my site - the user will know they're hitting the correct endpoint, rather than some imposter wrapper site.
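For anyone wanting to replicate that, the opportunistic-Tor setup boils down to a single response header; in nginx it looks something like this (the onion address is a placeholder):

```nginx
# Advertise the onion service as an alternative origin for this site.
# Tor Browser validates the site's normal TLS certificate before
# switching, so users transparently gain the onion transport without
# having to trust an unverified endpoint.
add_header Alt-Svc 'h2="youronionaddressgoeshere.onion:443"; ma=86400';
```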

Which brings us to today.

Tor have released a new version - 9.5 - of Tor Browser bundle which implements new functionality: Onion Location


Read more…

Cynet 360 Uses Insecure Control Channels

For reasons I won't go into here, I recently took a quick look over the "Cynet 360" agent, essentially an endpoint protection mechanism used as part of Cynet's "Autonomous Breach protection Platform".

Cynet 360 bills itself as "a comprehensive advanced threat detection & response cybersecurity solution for for [sic] today's multi-faceted cyber battlefield". 

Which is all well and good, but what I was interested in was whether it could potentially weaken the security posture of whatever system it was installed on.

I'm a Linux bod, so the only bit I was interested in, or looked at, was the Linux server installer.

I ran the experiment in a VM which is essentially a clone of my desktop (minus things like access to real data etc).

Where you see [my_token] or (later) [new_token] in this post, there's actually a 32 byte alphanumeric token. [sync_auth_token] is an 88 byte token (it actually looks to be a hex encoded representation of a base64'd binary string).
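Purely as an illustration of those shapes (these are randomly generated stand-ins, not real Cynet tokens): 33 random bytes base64-encode to 44 characters, which in turn hex-encode to 88:

```python
import base64
import secrets

# Hypothetical stand-ins for the token formats described above.

# [my_token] / [new_token]: 32 alphanumeric characters
alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
my_token = "".join(secrets.choice(alphabet) for _ in range(32))

# [sync_auth_token]: hex encoding of a base64'd binary string.
# 33 random bytes -> 44 base64 characters (no padding) -> 88 hex chars.
raw = secrets.token_bytes(33)
b64 = base64.b64encode(raw)   # 44 ASCII bytes
sync_auth_token = b64.hex()   # 88 hex characters

print(len(my_token), len(b64), len(sync_auth_token))
```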


Read more…

Writing (and backdooring) a ChaCha20 based CSPRNG

Recently I've been playing around with the generation of random numbers.

Although it's not quite ready yet, one of the things I've built is a source of (hopefully) random data. The writeup on that will come later.

But an interesting distraction (and, in some ways, the natural extension) is to then create a Pseudo Random Number Generator (PRNG) seeded by data from that random source.

I wanted it to be (in principle) Cryptographically Secure (i.e. so we're creating a CSPRNG). In practice it isn't really (we'll explore why later in this post). I also wanted to implement what Bernstein calls "Fast Key Erasure" along with some techniques discussed by Amazon in relation to their S2N implementation.

In this post I'll be detailing how my RNG works, as well as looking at what each of those techniques does to the numbers being generated.

I'm not a cryptographer, so I'm going to try and keep this relatively light-touch, if only to try and avoid highlighting my own ignorance too much. Although this post (as a whole) has turned out to be quite long, hopefully the individual sections are relatively easy to follow.
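The fast key erasure idea itself can be sketched quite compactly. This is a stdlib-only illustration - a SHA-256 counter stream stands in for ChaCha20, so it's the erasure pattern being shown, not my actual generator:

```python
import hashlib

class FastKeyErasureRNG:
    """Sketch of Bernstein-style "fast key erasure": every request
    generates a keystream, the first 32 bytes of which immediately
    replace the key before any output is returned - so a later key
    compromise can't be wound back to recover past outputs. A SHA-256
    counter-mode stream stands in for ChaCha20 here."""

    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()

    def _stream(self, n: int) -> bytes:
        # Expand the current key into n bytes of keystream.
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self.key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def read(self, n: int) -> bytes:
        ks = self._stream(32 + n)
        self.key = ks[:32]   # erase: old key is overwritten first
        return ks[32:]
```

Usage is just `rng = FastKeyErasureRNG(seed_bytes); rng.read(16)` - note that the key changes on every call.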



Read more…

The Pitfalls of Building an Elasticsearch backed Search Engine

There are a ton of articles on the internet describing how to go about building a self-hosted fulltext search engine using ElasticSearch.

Most of the tutorials I read describe a fairly simple process: install some software, write a little bit of code to insert and extract data.

The underlying principle really is:

  1. Install and set up ElasticSearch
  2. Create a spider/crawler or otherwise insert your content into Elasticsearch
  3. Create a simple web interface to submit searches to Elasticsearch
  4. ???
  5. Profit
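For illustration, step 3 in most tutorials boils down to wrapping the user's input in a bare query body and POSTing it to Elasticsearch - a minimal sketch (the field names and boosts are hypothetical, not from any particular tutorial):

```python
import json

def build_search_body(user_query: str, size: int = 10) -> str:
    """Build the naive search request body most tutorials stop at:
    a bare multi_match across a couple of (hypothetical) fields, with
    no synonym handling, no scoring strategy, no result tuning."""
    body = {
        "size": size,
        "query": {
            "multi_match": {
                "query": user_query,
                "fields": ["title^2", "body"],  # boost title matches
            }
        },
        "highlight": {"fields": {"body": {}}},
    }
    return json.dumps(body)

print(build_search_body("pi-hole gravity"))
```

It returns results, which is exactly the "good enough" trap described below.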

At the end of it you get a working search engine. The problem is, that search engine is crap.

It's not that it can't be saved (it definitely can), so much as that most tutorials seem not to lend any thought to improving the quality of search results - it returns some results and that's good enough.

Over the years, I've built up a lot of internal notes, JIRA tickets etc, so for years I ran a self-hosted internal search engine based upon Sphider. Its code quality is somewhat questionable, and it's not been updated in years, but it sat there and it worked.

The time came to replace it, and experiments with off-the-shelf things like yaCy didn't go as well as hoped, so I hit the point where I considered self-implementing. Enter ElasticSearch, and enter the aforementioned Internet tutorials.

The intention of this post isn't to detail the process I followed, but really to document some of the issues I hit that don't seem (to me) to be too well served by the main body of existing tutorials on the net.

The title of each section is a clicky link back to itself.


Read more…

Recovering files from SD Cards and How to protect yourself

I was working on writing this up anyway, but as the UK Government's lawyers have recommended weakening protections around the police's ability to search phones, I thought today might be a good day to get a post up about the protection of content on SD cards.


I never seem to have a micro-SD card to hand when I need one, they're generally all either in use or missing.

I tinker with Raspberry Pi's quite a lot, so, I ordered a job lot of used micro-SDs from ebay so that I could just have a pot of them sat there.

I thought it'd be interesting to see how many of the cards had been securely erased, and by extension what nature of material could wind up being restored off them.

Part of the point in this exercise was also to bring my knowledge of recovery back up to date. Although I've done it from time to time, I've not really written anything on it since 2010 (An easier method for recovering deleted files on Linux, and the much earlier Howto recover deleted filenodes on an ext2 filesystem - yes, so old that it's ext2!).

In this post I'll walk through how I (trivially) recovered data, as well as an overview of what I recovered. I'll not be sharing any of the recovered files in any identifiable form - they are, after all, not my files to share.

I'll also detail a few techniques I tested for securely erasing the cards so that the data could no longer be recovered.
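To give a flavour of why recovery is so trivial, here's a minimal sketch of the file-carving approach such tools take (real tools like photorec are far more careful than this, but the principle is the same):

```python
def carve_jpegs(image: bytes) -> list:
    """Naive file-carving pass: scan a raw device image for JPEG
    start-of-image (FF D8 FF) and end-of-image (FF D9) markers and
    slice out whatever lies between them. Deleted-but-not-erased
    files really can be recovered this simply, because deletion only
    removes the filesystem's pointer to the data."""
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"
    found, pos = [], 0
    while True:
        start = image.find(SOI, pos)
        if start == -1:
            break
        end = image.find(EOI, start + len(SOI))
        if end == -1:
            break
        found.append(image[start:end + len(EOI)])
        pos = end + len(EOI)
    return found
```

You'd feed it the output of something like `dd if=/dev/sdX of=card.img` read into memory (or, for real cards, process it in chunks).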

Read more…

(Hopefully) Rescuing a bottle of drink

With the change in weather, I'm having to take painkillers a lot more regularly, which means I can't drink.

I thought, as an option, I'd explore some non-alcoholic spirits - there seems to be quite a market for them, so there must be some good ones out there.

I did have some luck in finding some "gin". However, whilst searching, I stumbled upon "Xachoh Blend No. 7 Non Alcoholic Spirit", which lists the following tasting notes

Xachoh Blend No. 7 has a warm and richly spiced aroma. The prominent flavours of ginger root and blades of mace strike a perfect blend of warmth, spice and a subtle fruitiness. The luxurious aroma of cinnamon quills brings sweetness to the nose and palate, balancing perfectly with saffron & the other spices. Dark crystal malt adds delicious toasted notes and a real depth of flavour, similar to that of a well-aged dark spirit. All of these rich and dark flavours are balanced by a refreshing acidity of sumac on the palate, leaving the way for a long finish and an eagerness for that next sip.

Sounds good eh? As with anything on Amazon, reviews were incredibly mixed, some love it, some hate it.

So, as it sounded good, I took a risk and ordered a bottle.

It arrived this morning:


So, having been looking forward to its arrival, I had a little taste.

It's got a nice and very varied aroma to it. But things go downhill once you get it to your mouth - if it was just a little less watery, I'd probably be looking to add Ribena to it. 

Disappointing doesn't cover it, the only trace of flavour it has is a somewhat unpleasant aftertaste. Unfortunately, if you mix it with ginger ale, it transpires that all you get is ginger ale with a horrendous aftertaste.

The answer for why lies on the back label (and in fairness *is* listed on the Amazon listing)

Free from:

  • Alcohol
  • Extracts
  • Gluten
  • Sugar
  • Calories
  • Sweeteners

With the exception of a tiny bit of salt, the nutritional information is just 0's. This stuff is literally water with some Barley Malt and a few flavourings.

It's "natural", it's gluten free, it's vegan, it's... it's fucking shit and it's destined for the drain. Yuck.

But, rather than pour a £30 bottle of water down the drain, I thought I'd have a go at improving it first - worst comes to worst I'm just pouring a slightly more expensive bottle of water down the drain, and it's not like I could realistically make it much worse.

As I'm extremely unlikely to try making this again, and there's not a lot of room there for snark, I figured this was better placed here than on my recipes site.


Read more…

Spamhaus still parties like it's 1999

I recently had visibility of a Spamhaus Block List (SBL) listing notification on the basis of malware being detected within a file delivered via HTTP/HTTPS.

As part of the report, they provide the affected URL along with details of the investigation they've done.

Ultimately, that investigation is done in order to boil the URL back down to a set of IPs to add to their list.

Concerningly, this is literally just

dig +short

Which gives them output of the form


They then run a reverse lookup (using nslookup) on those IP addresses in order to identify the ISP. The IPs are added to the SBL, and a notification sent to the associated ISP.
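That whole process - forward lookup, then reverse lookups to identify the ISP - can be reproduced in a few lines; the hostname passed in is whatever domain appeared in the reported URL:

```python
import socket

def spamhaus_style_lookup(hostname: str) -> dict:
    """Forward-resolve a hostname to its current A records, then
    reverse-lookup each IP to guess at the hosting ISP - i.e. the
    entirety of the investigative process described above. Note that
    this only ever sees the answers *this* resolver gets right now:
    CDNs, GeoDNS and round-robin hand different IPs to different
    askers, which is exactly where the collateral damage comes from."""
    _, _, ips = socket.gethostbyname_ex(hostname)
    results = {}
    for ip in ips:
        try:
            results[ip] = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            results[ip] = None  # no PTR record published
    return results
```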

In this case, the URL was a legitimate file, though it had been bundled with some software falling under the Possibly Unwanted Application (PUA) category. The point of this post, though, is not to argue about whether it should have been considered worthy of addition.

The issue is that Spamhaus' investigation techniques seem to be stuck in the last century, causing potentially massive collateral damage whilst failing to actually protect against the very file that triggered the listing in the first place.

In case you're wondering why Spamhaus are looking for malware delivery over HTTP/HTTPS, it's because the SBL has URI blocking functionality - when a spam filter (like SpamAssassin) detects a URL in a mail, it can check whether the hosting domain resolves back to an IP in the SBL, and mark the mail as spam if it does (in effect limiting the ability to spread malware via links in email - undoubtedly a nice idea).


Just to note, although they make it difficult to identify how to contact them about this kind of thing, I have attempted to contact Spamhaus about this (also tried via Twitter too).

It also seems only fair (to Spamhaus) to note that I also saw a Netcraft incident related to the same file, and they don't even provide the investigative steps they followed. So not only might Netcraft be falling into the same traps, but there's a lack of transparency preventing issues from being found and highlighted.


Read more…

Twitter Screws Up With Data It Shouldn't Hold

I recently had a (NSFW) grumble about Twitter. Part of that grumble was about the fact that Twitter insist you provide a mobile phone number in order to re-instate your account after a suspension.

As part of my appeal against the suspension I noted that that's arguably not GDPR compliant - a phone number is (undoubtedly) PII, and is not required in order to provide the service. For Twitter to hold that number requires consent, and it's unlawful for them to withhold the service if consent is not given for non-essential data processing.

Part of the reason for my objection was because Social Media companies (in the form of Facebook) have already proven they cannot be trusted with things like mobile phone numbers.

Presumably Twitter weren't happy with the fact that I needed to use Facebook as an example, as they've now gone ahead and had a data processing screw up of their own.


Read more…

Brexit: My Predictions

For the most part, I've managed to keep Brexit related posts off this site (if you follow me on Twitter, apologies - you'll know I've resolutely failed to stay out of the fray there). I have previously made my position fairly clear though.

As we approach end of days though, I thought it'd be interesting to get my thoughts and predictions down so that I can potentially look back and see how well they aged.

Although this post is quite long, my predictions are broken down into bulletpoints at the end.


Read more…

Screenshot Social Media, Don't embed

Ever since the web was born, there have been concerns about preserving what's published on there for future generations. That's why things like the Wayback machine exist. Things like our approach, and concerns, around online privacy have also evolved with time.

But, the way we communicate on the web has changed pretty dramatically. Personal blogs are still a thing, but humanity has increasingly leaned towards communicating via social media - Twitter, Facebook etc. 

Now, we increasingly see news reports with embedded posts containing expert commentary about the topic of the news, and even reports about something someone has posted.

Those expert commentators are even occasionally being asked to change the way they tweet to make it easier for news sites to embed those tweets into their own stories (that request turned out to be from Sky News btw).

For all their many, many faults, the social media networks are a big part of how we communicate now, and posts on them are embedded all over the place.

This brings with it a number of avoidable, but major issues.

The aim of this post is to discuss those, and explain why you should instead be posting a screenshot of the tweet/post.

I'm going to refer to "Twitter" and "Tweets" a lot, purely because it's shorter than "Facebook" or "Social Media", but the concerns here apply across the board.


Read more…

Breaking the Google Addiction one step at a time

Google isn't your friend. Google isn't my friend. Google is, and always has been, a data-whore.

But, still we use them and allow them to slurp up more and more data about us.

They're a bit like Amazon in that respect - you know they're an increasingly terrible company, but they're just so convenient and you keep on using them whilst ignoring the power they're amassing over the market.

But, it is something that's been concerning me more and more over the years.

We install adblockers, no-script and other extensions to add a fig-leaf to our privacy, or to try and avoid Google's user-hostile changes, yet we keep on using the same services. Even when they completely change the UI around on us, for no good reason, we still keep using their services.

I decided, quite a while ago, it was time I made a change, but then did very little, at least until recently.

As great as a "clean-break" might sound, going cold turkey off Google's services is never going to work - no model of user behaviour supports making massive jarring changes.

So I decided to start with the most obvious interaction with Google - their search engine. I don't have Google Home or similar, so my most frequent interaction with Google is search.


Read more…

Twitter Jail: My Memoirs

Sometimes life throws you an opportunity. A quick search on the net suggests that whilst many celebrities have written about their time inside bricks and bars prison, no-one's had the foresight to document their time in something more modern.

I've been thrown in Twitter Jail, with all privileges withdrawn pending appeal. In physical jail, you can still watch the other inmates, but in Twitter jail if you have the temerity to appeal they blind you until the appeal is concluded.

This is a tongue-in-cheek record of my time in Twittertraz - with some very strong language within


Read more…

The Curious Case of BitFi and Secret Persistence

For some slightly obscure reasons I've recently found myself looking at the Bitfi hardware wallet and some of the claims the company make, particularly in relation to whether or not it's actually possible to extract secrets from the device.

The way the device is supposed to work is that, in order to (say) sign a transaction, you use an onscreen keyboard to enter a salt, and a >30 char passphrase.

The device then derives a private key from those two inputs, uses it and then flushes the key, salt and passphrase out.

Each time you want to use the device, you need to re-enter salt and passphrase - the idea being that if it never stores any of your secrets, then there's nothing to extract from a seized/stolen device. 

From Bitfi's site we can see this wrapped up in marketing syntax:

The Bitfi does not store your private keys. Ever. Your digital assets are stored on the Blockchains, when you want to make a transaction with your assets (move them, sell them, etc.) you simply enter your 6-character (minimum) SALT and your 30-character (minimum) PASSPHRASE into your Bitfi Legacy device which will then calculate your private key for any given token “on-demand” and then immediately expunge that private key.
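Stripped of the marketing, the claimed flow is derive-on-demand: something like the sketch below, where PBKDF2 and HMAC are purely hypothetical stand-ins for whatever KDF and signing Bitfi actually use:

```python
import hashlib
import hmac

def sign_transaction(salt: str, passphrase: str, message: bytes) -> bytes:
    """Abstract sketch of the claimed flow: derive the private key
    from salt + passphrase on demand, use it, then let it fall out of
    scope rather than persisting it anywhere. PBKDF2 and HMAC here are
    hypothetical stand-ins - this illustrates the pattern being
    claimed, not Bitfi's implementation."""
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt.encode(), 100_000)
    sig = hmac.new(key, message, hashlib.sha256).digest()
    del key  # the claim under examination: nothing survives this point
    return sig
```

The whole question this post examines is whether the inputs and derived key really are expunged, or whether they linger somewhere in memory or storage.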

For various reasons (see Background) I was somewhat dubious about the veracity of this claim, and ultimately ended up looking over their source code in order to try and verify it.

This post details the results of that examination, the following items should be noted

  • Although not explicitly vulnerabilities, the issues noted below were submitted in advance to the Bitfi dev team (I did ask previously, via email, which channel was preferable for raising issues).
  • Incomplete sources are already published (example here), so although I include code snippets in this post, they're updated versions of code that's already public - I'm not simply publishing their code on the net :)
  • I probably will make some mistakes: I've been ill, so focusing is hard, and I dislike C# so it's more than possible something's changed without me realising.
  • This is the result of a fairly short code review, and in no circumstances should be viewed or characterised as a full audit
  • In the sources, code version shows as v112

The result is a long analysis, so some may prefer to jump to the Conclusion.  

Read more…

Configuring Pi-Hole to update blocklists more regularly

By default, Pi-Hole updates its block lists once a week.

Broadly speaking, this is fine for blocking Ad domains (although the recent trend towards advertisers generating new domains does undermine that a bit).

But, if you've added a Phishing block list (as detailed in Building Your Own DNS over HTTPS Server), this is far less optimal - Phishing domains tend to do the majority of their damage during the first 24 hours, so only getting an update into the blocklist (potentially) 7 days later isn't much use.

In this post we'll walk through the (simple) procedure to have Pi-Hole update the gravity lists more frequently.


Gravity updates are triggered via cron, in /etc/cron.d/pihole

grep updateGravity /etc/cron.d/pihole 
26 3   * * 7   root    PATH="$PATH:/usr/local/bin/" pihole updateGravity >/var/log/pihole_updateGravity.log || cat /var/log/pihole_updateGravity.log

As that crontab is a core file within Pi-hole we're not going to edit that file, otherwise any changes we make would be lost the next time Pi-Hole is updated.

echo '30 */2 * * *    root    PATH="$PATH:/usr/local/bin/" pihole updateGravity >/var/log/pihole_updateGravity.log || cat /var/log/pihole_updateGravity.log' > /etc/cron.d/update_gravity
systemctl restart cron

Lists will now be updated every 2 hours, at 30 mins past the hour.

The Importance of Provider Redundancy

Back in the days before cloud computing, it used to be accepted (if somewhat resented) by management types that having redundant systems in place was important if you cared - even a little - about uptime.

In today's industry, those same management types generally understand that it's still important to have multi-region availability, with instances running in completely distinct provider regions, so that an outage in one area doesn't impact your ability to do business.

What doesn't seem to be quite so widely understood, or accepted, though, is the importance of ensuring that systems have redundancy across providers. It's not just management types making this mistake either: we've all encountered techies who are seemingly blind to the risk and view it as an unnecessary additional cost/hassle.

Rather than typing "the provider" throughout this post, I'm going to pick on AWS, but the argument applies to all Cloud providers.


Read more…

An argument in favour of application level name resolution

Recently I published some documentation detailing how to build and run your own DNS-over-HTTPS (DoH) server.

As I mentioned at the beginning of that documentation, there's been a certain amount of controversy about DoH vs DNS over TLS (DoT).

One thread of that argument is along the lines that name resolution should be handled at the OS level (so that all applications get the same result for a given name - improving troubleshooting - as well as giving some caching benefit, versus applications resolving names themselves).

Generally I've found that argument fairly persuasive, but also taken the view that DoH being implemented at the application level is the result of a general lack of availability/uptake of DoT at the OS level.

In other words, whilst it's not ideal for applications to be resolving names themselves, it makes an (arguably flawed) privacy-enhancing solution available now, rather than continuing to wait for an (arguably) better solution to actually get adopted (and ignoring whatever reasons led to that lack of adoption).

But, I've begun to change my mind on whether applications doing resolution themselves really is a problem, or whether it's actually more beneficial when considered alongside some of the aims of DoH.


Read more…

Solution to my April 2016 Puzzle

It's been three years now, and although I've had many people complain about it giving them a headache, to my knowledge no-one has solved the puzzle I posted in April 2016. My other puzzles and crypto trails have all fallen in significantly less time, but I've watched people really struggle with this one, so I think it's fair to say that I made it just a little too hard.

It only seems fair, therefore, to explain the solution (while I can still remember it).

This post will do just that (there's a video of solving it below for those who don't want to read).


Read more…

Beware USB Quick Charge Ports

In order to power a couple of thermistor controlled cooling fans, I use a pair of USB to 3 pin Molex adapters.

I noticed the other day that one of the fans wasn't working, so I detached it from its mounting plate and brought it and the adaptor out to check.

Access is a bit... tricky... so I couldn't really test the adaptor against the other fan (and didn't want to risk breaking it if something odd had gone wrong). The fans I use are about £5 each, and it's always worth having spares, so I ordered some replacements, which arrived today.

I plugged one of the new fans into the adaptor and tried to power it on. Nothing. So, I dealt with the access issues in order to plug the new fan into the other adaptor to check the fan worked - it did.

The last remaining check then, was to verify that the issue didn't lie with the USB port the adaptor was plugged into.

Read more…

Why I won't have an Amazon Echo

I was recently asked to explain in a bit more depth why I'm not willing to have an Amazon Echo (or, more specifically - Alexa), so I thought I'd write an answer down too.

Although the question was specifically about Alexa (being the best-known), the answer applies to alternatives like Google Home (now Google Assistant), Microsoft's Cortana and Sonos One.


Although the technology is now quite old, voice activated virtual assistants really are a cool bit of tech, I'm not disputing that for a second. In fact, in some ways I'd like nothing better than to hack an OBD-II connector onto an Echo, install it in the car and live a Knight Rider future.

But, as cool as these things are in principle, they are by design an always listening microphone that you're willingly installing into your home. For the reasons I'll lay out below, that's not a small deal.

Read more…

Protecting Identity and Copyright Online

At times, it really feels like the world is completely fucked. We've got a US president who somehow manages to be enough of an arse to fall out with Canadians before flying off to meet a nuclear armed mad-man. We seem to be witnessing the increasing rise of a foaming mouthed racist alt-right, and have long since mourned the death of quality journalism in the media. Israeli defence forces are so focused on justifying the murder of unarmed civilians that they now tweet about executing people for throwing a stone.

Yes, at times, it seems like the entire world is off to hell in a hand-cart.

Underneath it all, though, politics doesn't seem to be that different behind the scenes. Politicians are still trying to implement many of the same stupid things that we've seen raised again and again throughout our lives.

As fucked as the world may seem, it's important that it not act as a distraction from the issues we can do something about. Trump, for better or worse, is here to stay (at least until his KFC infested diet catches up with him).

But we can do something about fuckwits in Government once again suggesting that implementing the ability to control and track what everyone does online is in any way a positive. We also can do something about fuckwits from many Governments who think it's beneficial for humanity for them to take a bended knee before Copyright cartels and screw the lot of us in the process (otherwise known as Article 13 of the EU Copyright Directive).

This post isn't about the things that have become big, but about the things that will become massive infringements on our lives if allowed to pass unchallenged.


Read more…

Google, Cloudflare and GDPR - my quandary

Just like most of the internet, I've been working hard making sure my site and services are GDPR compliant. For the most part, on the technical front I already was, and it's mostly been a case of making sure the documentation is up to scratch.

However, in one area, I've had to revisit a decision that I've gone over and over during the past few years - having ads on some of the sites, compared to the alternatives.

I decided I'd create this post for a couple of reasons - partially because I suspect others may be in a similar situation, and also to try and help lay it out so I can spot alternatives to those I've already considered.



Read more…

An Open Letter on Medicinal Cannabis in the UK

Today, I watched as one of our representatives denied us the opportunity to even debate the benefits of Medicinal Cannabis. Such was his disregard for those suffering that he seems to have acted to prevent simple discussion of the pros and cons.

I'm ashamed to say that he's the MP for the town I grew up in. For years I've watched people fruitlessly try to convince our MPs to listen; today was too much, and I've decided that I need to put my head above the parapet and share my experience in this area.

Below is an open letter which I've sent to the leaders of the 3 main UK political parties.

Read more…

A guide to designing Account Security Mechanisms

The history of the Internet is rife with examples of compromises arising both from poor security hygiene, and also from misguided attempts to "make it more secure" without first considering the implications of changes.

In this post, I'll be detailing some of the decisions you should be making when designing account security and user management functionality.

There's likely little in here that hasn't already been stated elsewhere, but I thought it might be helpful to put it all together in one post.

The post itself is quite long, so headings are clicky links to themselves. For those with limited time, there's a Cheat Sheet style summary towards the bottom.



Account security is an essential component of many, many systems. Historically, though, amongst some developers there's been a pervasive attitude of "good enough", or worse, "I think it makes it more secure, so it must be". Some of the worst design decisions are based upon an invalid assumption, whilst others come from an attempt to "layer" security without considering the implications that each layer might have.
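
To give a flavour of the kind of decision the post covers - this is a generic illustration of mine, not an excerpt from the guide - consider password-reset tokens: they need to be unguessable, and comparing them with ordinary string equality can leak information through response timing. Python's standard library handles both concerns:

```python
import hmac
import secrets

def issue_reset_token():
    # ~256 bits of CSPRNG-backed randomness, URL-safe so it can be
    # emailed as part of a reset link
    return secrets.token_urlsafe(32)

def token_matches(supplied, stored):
    # Constant-time comparison: avoids leaking how many leading
    # characters of the token matched via response timing
    return hmac.compare_digest(supplied.encode(), stored.encode())
```

The same two primitives (a CSPRNG and a constant-time compare) crop up again and again in account security work; reaching for `random` or `==` instead is exactly the sort of invalid assumption the paragraph above is talking about.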





Read more…

Don't Use Web2Tor / Tor2Web (especially

Web2Tor and Tor2Web are reverse proxies which allow clearnet users to access Tor Onion Sites (AKA Hidden Services), and there are a variety of services available online running this software.

This post details why using these is, at best, a bad idea (and at worst, downright unsafe), as well as detailing some of the changes I'm making to the site to help discourage use of these services.


Read more…

Building a Tor Hidden Service CDN

Last year I started experimenting with the idea of building a Hidden Service CDN.

People often complain that Tor is slow, though my domain sharding adjustments to the onion have proven fairly effective in addressing page load times.

On the clearnet, the aim traditionally is to try and direct the user to an edge-node close to them. That's obviously not possible for a Tor Hidden Service to do (and even if it were, the user's circuit might still take packets half-way across the globe). So, the primary aim is instead to spread load and introduce some redundancy.

One option for spreading load is to have a load balancer run Tor and then spread requests across the back-end. That, however, does nothing for redundancy if the load-balancer (or its link) fails.
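
To make that option concrete, the setup might look something like the fragments below (my sketch, with made-up addresses - not necessarily the topology used in the experiments):

```
# torrc (under the relevant HiddenServiceDir): publish port 80 of
# the onion, delivered to a local HAProxy
HiddenServicePort 80 127.0.0.1:8080

# haproxy.cfg fragment: spread onion traffic across back-end nodes
frontend onion_in
    bind 127.0.0.1:8080
    default_backend web_nodes

backend web_nodes
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

The back-end gains redundancy, but the Tor daemon and the balancer itself remain single points of failure - hence looking at other approaches.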

The main aim was to see what could be achieved in terms of scaling out a high traffic service. Raw data and more detailed analysis of the results can be seen here. Honestly speaking, it's not the most disciplined or structured research I've ever done, but the necessary information should all be there.

This document is essentially a high-level write up along with some additional observations



Read more…

The State of Mobile Banking (in the UK)

News recently broke that Tesco Bank's Android App refuses to run when Tor is also installed on the handset, presumably in the name of security.

So, out of morbid curiosity, I thought I'd take a quick look at just how effectively various banking apps were secured. Banks, after all, should be at the forefront of security (even if they often aren't).

To start with a disclaimer - personally, I think using banking services on any mobile device is a bad idea from the outset, and some of the results definitely support that idea. I've only taken a cursory look, and not made any attempt to dis-assemble any of the apps.


Read more…

Now available as a Tor Hidden Service

Hidden Services have had something of a bad rap in the media of late; whilst it's undoubtedly true that some host some unpleasant material, the same can equally be said of the World Wide Web.

Hidden Services do have the potential to bring a much higher level of privacy to the end-user, and aren't always about hiding the origin from the user (or an attacker). The cryptography used in Tor's transport is arguably much stronger (and easier to change if found to be broken) than is available for HTTPS.

To that end, I thought it would be wise to configure the site to be multi-homed, that is to be accessible via both methods.

Because both are run by the same back-end, updates will appear on both at the same time.

So, you can now access at either

A link to the .onion has also been added to the Privacy bar on the left.


Read more…

David Cameron: Idiot, Dangerous or just a lover of soundbites?

We've heard Theresa May parroting the same lines for months, but in the wake of the Charlie Hebdo massacre, David Cameron has joined the choir of people calling for new surveillance powers.

Mr Cameron has stated that if the Conservatives are re-elected, he will ensure that there is no form of communication that cannot be intercepted by the government.

So, one of the questions we'll be examining in this post is - Is David Cameron

  1. An idiot who doesn't understand the technology he's talking about
  2. Demonstrating that pre-election promises are inevitably broken
  3. Planning on introducing a draconian surveillance state
  4. Being mis-informed by other parties
  5. Simply creating sound-bites to raise the chances of re-election

Most of the coverage thus far has focused on option 3 - which seems fair given that it's the inevitable result of actually attempting to do what he is claiming.

We'll also be taking a look at why Option 3 could, and should not, happen.


Read more…

All Digital Downloads Withdrawn From Sale

As I wrote recently, the EU definition of the Place of Supply with regard to digital services has shifted to the place in which the customer resides.

As a result of the change (and more importantly, the bureaucracy involved in both recording the place of supply and filing returns) all digital downloads within my Shop section have been withdrawn from sale.

You can read more about why this decision had to be made in my earlier post.

If, for whatever reason, you've a burning desire to purchase something that was previously on sale, please Contact Me to arrange a manual transaction.


The DVLA is routinely sending sensitive details via email

It's that time of year - time to renew car tax. I figured I'd give the monthly direct debit a go and see whether paying the extra little bit is worth avoiding the yearly pain of remembering you need to find a few hundred quid up front.

For anyone who's not used it yet, the process of setting up is smooth and easy (in an almost distinctly non-government IT way); unfortunately it turns out there's a fairly big issue with the final step.

I should be fair, and point out that the service is provided by DirectGov rather than the DVLA directly, but IMHO it remains the DVLA's responsibility.



Read more…

Thoughts on Mailpile

I was quite excited when Mailpile was released as a beta, and it made it onto my list of 'must have a play with'. Life being life, though, I didn't get the chance to give it a proper go until recently.

Sadly, it was somewhat anti-climactic and I've been left feeling more than a little underwhelmed. Mailpile shows a lot of potential, but it's definitely not ready for production yet. 

I ran my testing on a CentOS 6 VM, and in this post will summarise the good and the bad.


Read more…

Shop section closing 31 December 2014

The shop section of my site will be closing for business on 31 December 2014 and I'll be withdrawing all digital downloads from sale.

It's not something I actually wanted to have to do, but as the changes to the EU VAT rules come into effect on 1 January 2015 (HMRC, at least, are calling it VAT MOSS), the additional overhead involved in compliance means that running the shop will likely no longer be financially feasible.

The closure will include everything in my (somewhat small) shop, so

  • Joomla Extensions
  • Ebooks
  • Credlocker Extensions
  • Photos


Read more…

Virtualisation: Google Play Music Manager cannot identify your computer

Although there seem to be an increasing number of things which irritate me about Google's Play Music, there's no denying that it's an incredibly convenient way to listen to music when not at home. Whether using the Android App, or playing in a browser, it makes your library available wherever you are.

It's a pity then, that Google have decided to make it such a royal PITA to upload music (I'm also not too happy about the requirement to have card details on file, even if you plan on using the free version - you should only ever need to provide card details when the plan is to actually use them, it reduces the likelihood of them being compromised).

As Google's Play Music Manager now won't run on my desktop (something I need more introduces a conflicting dependency), I figured I'd run Music Manager in a virtual machine and just point it at the right NFS share.

Turns out it wasn't quite so simple, as Music Manager returns the error 'Login failed. Could not identify your computer'.

After some digging, it's incredibly easy to resolve though.



Read more…

ON-Networks PL500 Powerline Adapters

Quite some time ago, I played around with some Computrend 902 Powerline adapters and found a number of different security issues - here and here

Those devices are long gone, but whilst the issues I found were relatively minor (if nothing else, proximity was required) it left me a little concerned about the security of any devices that might replace them. For quite some time, I didn't need to use any powerline adapters, but eventually the need arose again (no practical way to run CAT-5 to the location and the Wifi reception is too spotty).

So I bought 2 pairs of On-Networks' PL500S Powerline adapters. Depending where you buy them from, the model number may be PL500P, PL500-UKS, or even the Netgear part number - Netgear ON NETWORKS PL500-199UKS.

I've not got as far as giving them a serious hammering from a security perspective as yet, however there doesn't seem to be much information about these devices available on the net (and what is there is potentially misleading), so I thought I'd post the information I've pulled together from prodding the devices, as well as a few common sense facts that might be being missed. As I'd have found some of the information helpful had it been available prior to purchase, I suspect others might find it of use too.


Read more…

Understanding Password Storage

I occasionally receive emails from people who have come across PHPCredlocker, and the question is usually the same - "Why are you storing passwords using reversible encryption?". Most emails are polite, some not so much, but they all have one thing in common - assuming that a commonly stated fact applies to all scenarios, and failing to apply a bit of simple logic that would tell them the answer - because that's the only way the system would work.

In this post, we'll be briefly looking at some of the ways in which you can store credentials, and which of them are appropriate to use (and when), in the context of building an application (web or otherwise).

I actually wrote a white-paper on this a good few years ago, but things have moved on considerably since then - Rainbow tables are no longer used, and it's now possible to cheaply use cloud services with GPUs available.

One thing that hasn't changed, however, is that you should always work on the basis that one day, somehow, an attacker will manage to get a copy of your database. At that point, how you've stored the credentials becomes very important (users re-use passwords, so it may not just be your system that gets compromised if the credentials are recovered).
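
The "simple logic" in question can be demonstrated in a few lines (a generic stdlib sketch, not PHPCredlocker's actual scheme): a salted one-way hash lets you verify a password but offers no way back to the plaintext - fine for a login system, useless for a credential locker whose whole job is to hand the password back:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted, deliberately slow, one-way: suitable for verifying logins."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison, to avoid timing side-channels
    return hmac.compare_digest(candidate, digest)

# A guess can be checked against the digest, but nothing here can turn
# the digest back into the password - which is exactly why a password
# locker must use reversible encryption (ideally keyed from the user's
# master passphrase) instead.
```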

Read more…

A Bad Boss Can Ruin Your Job

We've all, almost certainly, had a boss we didn't get on with at some point, but that doesn't necessarily make them a bad boss.

People are different, and sometimes viewpoints collide; it's an unavoidable risk of putting distinct personalities into a group and asking them to spend their days together.

What makes a truly bad boss is when the power/influence they exert is mis-used.

In my career, I've had one particularly bad boss (I hasten to add - I'm not working there anymore!). Not only did their behaviour ruin my enjoyment of my role, but they (in my opinion) deliberately went out of their way in an (ultimately unsuccessful) attempt to severely tarnish my reputation and my name. Their attempt could also have had a devastating effect upon my quality of life.

In this post, I'll be taking a broad overview of what happened, and examining what I learnt from the experience, and (with the benefit of hindsight) what the early warning signs were.

The events I'm going to discuss occurred a number of years ago and I always planned to write about it, but wanted to leave it long enough that I could be truly objective. As a result, I never quite got around to writing about my experiences.

Being a denizen of a number of internet forums, I've seen others post about experiences they're currently going through, and some of them really ring alarm bells for me - so it seems like the right time to get around to writing about it.

I'm not going to name names, as that isn't the point of this piece. I've tried to keep it as brief as possible, but being quite complex it's not as short as I had originally hoped.



The full background is quite long and complex (if you want to skip the background, go straight to what I've learned), but the full details would take forever to type out, and even longer to read. Some of the finer details have been skipped for brevity.


Read more…

My Own Little HeartBleed Headache

When the HeartBleed bug was unveiled, I checked all of my servers to see whether they were running vulnerable versions. They weren't, but once the patched versions were released it seemed a good juncture to test and roll out the update to one server.

What followed was something of a headache, initially with all the markings of a serious compromise.

Having now identified and resolved the root cause, I thought I'd write a post about it so that others seeing similar behaviour can get something of a headstart.

In response to threats such as CDorked, I run PHP Changed Binaries on all my servers, so any file in PATH is checked (daily) for changes, based on a cryptographic checksum. If any changes are detected, an alert is raised so that I can investigate the cause of the change.

The day after I updated OpenSSL, I started receiving alerts for a wide variety of files (I'd updated the stored hashes following the OpenSSL update).

Read more…

Falling Out Of Love With Siteground

In the past, I've really rated Siteground Hosting very highly, and recommended them to anyone asking about US Based dedicated servers (Heart would be my first choice for UK Based Dedicated Servers or VPS). Unfortunately experience has worn me down.

To be clear, I'm not, and never have been, a Siteground customer. However, some of the people I do some work for are, so I occasionally have to escalate things to Siteground, or step in when Siteground have asked their customer to take some action.

I've been quietly sitting on some of these frustrations for a little while, but in the last week some have been added, tipping the balance in my mind.


Read more…

NTPD Refusing to accept time from GPSD

One of the (minor) drawbacks of the Raspberry Pi is the lack of a hardware clock. Normally, you'd work around this by configuring a good pool of NTP servers to connect to. What do you do though, if you can't guarantee there will be an Internet connection available when needed?

The solution is obvious, so obvious that many have already done it - use the time provided by a cheap GPS dongle. The gpsd daemon helpfully pushes the time to Shared Memory Segments (SHM) so it's a simple adjustment to the NTP configuration file to have NTPD pull the time from the dongle.
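
For reference, the adjustment usually amounts to a couple of lines like these in /etc/ntp.conf (the SHM unit number and the fudge offset are installation-specific):

```
# Read time from gpsd's shared memory segment (unit 0, NMEA time)
server 127.127.28.0 minpoll 4 maxpoll 4
fudge 127.127.28.0 time1 0.0 refid GPS
```

Restart ntpd afterwards and `ntpq -p` should show the SHM source with `reach` climbing towards 377.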

Except, it seems, on Raspbian it isn't quite so simple. You've followed all the instructions (simple as they are) but are still seeing an entry like this:

# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
SHM(0)          .GPS.            0 l    -   16    0    0.000    0.000   0.001

No matter what you try, reach stays at 0.

Frustrating, and there's very little out there to guide you. This post will tell you what the issue is, as well as how to go about finding it should it re-occur.

Read more…

The Storm Ate my Broadband

Like many in the country, the storm has left me feeling somewhat isolated - that is to say my broadband is down. Don't get me wrong, I'm just glad the power is (mostly) back, and I'm far better off than some who've had their lives affected.

The simple fact, though, is that I have things I need to do, and not having a broadband connection really gets in the way of that.

Living where I do, there's precisely one place in the house that gets a 3G signal, unfortunately that place isn't particularly conducive to sitting comfortably. Whilst the Wifi hotspot functionality on my phone helps, the range isn't great enough to let me sit somewhere that I might be able to concentrate.

So, a somewhat convoluted workaround is needed.

To get around the issue, we're aiming for the best case scenario (I get to use the PC rather than having to use the laptop). It seems silly not to, given that some effort needs to be put in anyway, so the eventual connection will look like this

PC -> Laptop -> Phone -> Internet

For the pedantic, there is a link missing there, but it should be obvious (Phone -> Charger!).

There's no ethernet connectivity upstairs, and the Wifi interface will be in use to connect to the phone's hotspot (though I suppose I could have used USB tethering) so I've used a Powerline device pair to extend my wired network up to where it needs to be for this (somewhat rushed) project.


Read more…

It's funny how times change

Over the past few days, I've been going over the old archives and have republished some of the content.

What's struck me as funny though, is how times change, but a lot of the issues remain exactly the same.

Not that everything is unchanged. Back in 2006 I wrote a (very) short post on how Apple had taken another step towards vendor lock-in. What they'd done was to change their update mechanisms so that you had to use iTunes to update your iPod (prior to this, there was a standalone installer available - meaning Linux users could extract the update and apply it manually). In the pre-iPhone days, this was quite a big thing; although it wasn't the first step towards lock-in, it was a big change.

Contrast that to the level of control Apple exerts today. There's no (supported) route to load an app onto an iThing other than through the App Store (something which some predict will become the case with OSX too).


Read more…

Republished: Phorming Relationships

Originally published on 06 Jun 2008

Most people have heard of 121Media, although they may not be able to place where they heard the name. Well, 121Media are back as Phorm, and so far they've created quite a stir. They are pushing a new style of Targeted Advertising whereby they place some hardware between your computer and the Internet and analyse the pages you access in order to serve you with 'more relevant' advertising. Unlike many other online advertisers, Phorm will not just base adverts on partner pages that you have previously accessed, but will actively analyse the contents of almost every page you view.


Read more…

My Volvo is Dead

It's been a sad week, on Monday I took my Volvo 440 for its MOT, knowing I'd get a few 'must fix' items back - emissions being the usual headache. This year, emissions passed with flying colours, but it was noticed that the sills have seen better days. Or as the garage I took it to for a confirming opinion said - if we remove the plastic covers to check, you won't have sills anymore!

So after 3 years of hard service, my car has finally gone to the scrapyard. I only ever expected it to last a year, so it's done well, but it's still quite sad to be left with nothing but spares to sell, especially as it was otherwise mechanically sound.

Still, on the upside, at least I'd gone in knowing there was a chance of a fail so it wasn't a complete shock. Life goes on, and cars don't last forever (parts were getting a little scarce too), though it's going to take quite some time to find a car that drives quite as well, something that's going to bug me for quite a while on my daily commute.


Modern Feminism is Dangerous

I'll start by clarifying what I mean by feminism - I don't mean the right to equality, equal pay etc - at this point those should really be considered common sense, even if we're not quite there yet. To me modern feminism appears to be far more fundamentalist than that and it's an incredibly dangerous path to follow.

The campaign 'Lose the Lads' Mags' (backed by UK Feminista) is an ideal example of this. I can completely understand the sentiments being expressed, and yet the focus seems to be solely on magazines aimed at blokes.

In this post, we'll be looking at what the campaign group seems to be missing, and why it's so dangerous for them to be attempting to force their views onto others. Although we'll be using this as an example, the aim is to try and ensure that all points raised are applicable to most of the current 'feminist' topics.


Read more…

Why You Shouldn't be using SHA1 or MD5 to Store Passwords

There are a lot of badly coded sites out there, and far too many sites still seem to be falling prey to SQL Injection vulnerabilities resulting in a lot of high profile leaks of user data.

I wrote quite some time ago on The Importance of Salting Stored Passwords And How To Do So Correctly, but whilst the underlying message remains correct, the techniques for doing so have been outpaced by technology.

Although still widely used, checksum algorithms such as SHA1 and MD5 are no longer sufficiently secure.

In this post we'll be exploring why you shouldn't be using MD5/SHA1 and how you should be storing passwords.
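
To preview the underlying issue (an illustration of mine, with parameters picked for effect): a single round of SHA1 is designed to be fast, which is exactly what an attacker brute-forcing a leaked database wants. Deliberately slow, salted constructions such as PBKDF2 exist to remove that advantage:

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)

# One round of salted SHA1: microseconds per guess, so an attacker
# with GPUs can try billions of candidates per second.
start = time.perf_counter()
fast = hashlib.sha1(salt + password).hexdigest()
fast_time = time.perf_counter() - start

# PBKDF2 with a high iteration count: each guess costs the attacker
# roughly the same fraction of a second it costs us at login time.
start = time.perf_counter()
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
slow_time = time.perf_counter() - start

print(f"sha1: {fast_time:.6f}s, pbkdf2: {slow_time:.6f}s")
```

The iteration count is a tunable cost knob - raise it as hardware gets faster, something a bare checksum simply can't offer.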

Read more…

Something's afoot..

There's speculation that may have been compromised in some manner. A number of people (including myself) have noticed some very spammy links showing up in Webmaster tools as Itemtypes under Structured Data.

Rather than displaying (for example), there's an itemtype pointing to various URLs on domains including, and The only thing any of the sites have in common is their use of

Curiously, you can also reproduce the issue using the Structured Data Testing Tool and entering a small HTML snippet. The issue only seems to be affecting those in Europe though, with US users only able to reproduce by using an EU based proxy.

It appears to happen about 1 time in every 5 requests, and you'll need to modify the snippet slightly to be able to resubmit the form (I simply clicked after the closing span and inserted a space each time).

Try inserting the following

<div itemscope itemtype="">
<span itemprop="name">Badgers</span>

Submit, and then re-submit. If you're in Europe, within 5 requests you should see the itemtype change from

to something like

On the Google thread, a poster claiming to be from Yalwa says it's nothing to do with them. I'm reasonably inclined to believe that, given that it's not always a Yalwa URL being generated (though they do seem to form the vast majority).

More concerningly, some of the URLs returned link to Adult material. There's a risk that a site may be wrongly classified by Google if they're taking the content of the linked 'schema' into account.

So what's happened? Has someone found a way to exploit the ability to 'extend' schemas? Has someone compromised or is something else going on here?

Aside from using 'Fetch as Google' and the Rich Snippets Testing Tool, it appears to be impossible to reproduce. Using Google's User-Agent isn't enough (nor is that of the Rich Snippets testing tool), so if it is a compromise, there's some IP filtering going on as well.

You can follow the thread here.


Read more…

Darkleech Apache attacks on the rise, but is it really that hard to detect?

Reports of CDorked.A infections are still on the rise by the looks of things. The attack is reported as 'hard-to-detect', but this should only be true for the more naive sysadmins out there.

Whilst it's true that CDorked changes nothing on disk except the HTTPD binary, this change alone should be triggering alerts. On a production server, you should be storing checksums of known good files and comparing these regularly to see if anything's changed.

As some obviously aren't following this basic step, in this post we'll look at what you need to do to at least be made aware if CDorked gets onto your system - it'd be nice to be able to do a post on avoiding it, but the attack vector is still unknown!
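
The basic approach can be sketched in a few lines (an illustrative Python sketch of the technique, not a production tool):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large binaries don't blow memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(directories):
    """Record a checksum for every regular file in the given directories."""
    hashes = {}
    for d in directories:
        base = Path(d)
        if not base.is_dir():
            continue
        for p in sorted(base.iterdir()):
            if p.is_file():
                hashes[str(p)] = sha256_of(p)
    return hashes

def changed_files(baseline, current):
    """Paths that are new, missing, or carry a different checksum."""
    return sorted(
        path
        for path in set(baseline) | set(current)
        if baseline.get(path) != current.get(path)
    )

# A daily cron job would persist snapshot(["/usr/sbin", "/usr/bin"])
# to disk and raise an alert whenever changed_files() is non-empty.
```

A modified HTTPD binary shows up on the very next run - no malware signatures required, just the discipline of keeping (and protecting) the baseline.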

Read more…

Cookies: Taking Transparency a Step Further

Contrary to the belief of some, the EU E-Privacy Directive was never about stopping cookies. It was always about raising awareness of what they are, which ones are set and how they can be misused. It was, and still is, a cause of annoyance for many - especially as only four member states have currently adopted the provisions.

Whilst I don't think the implementation was correct, the underlying principle is sound - we should be ensuring users are aware of what data we're storing in their browser and how it's used. Most sites, in my opinion, don't go nearly far enough to achieve this, instead just scraping the minimum standard.

In this post, we'll be exploring what I think we're doing wrong, and what we should be aiming for.



When the law was first implemented in the UK, giving the user an informed choice formed a huge part of the directive. We were told that users should be told that cookies would be set, and given the option to opt-out (even if this meant they simply couldn't use the site).

Many of the bigger companies opted to ignore this requirement, instead choosing to show a banner that essentially said "We use cookies, we'll assume you agree if you use the site". At the 11th hour, the Information Commissioner's Office stated that this was an acceptable implementation, and the idea of an informed choice was dead.

Now whenever we browse UK sites, we're plagued with banners telling us what most of us already knew - sites set cookies. So the net effect, really, is that we've increased bandwidth usage slightly but made no real gain otherwise. For those of us who tried to stick to the original interpretation, the end result was a large decrease in traffic as users were scared off by the cookie choice screen. There's not much point in trying to help protect users if your attempts drive them away!


Read more…

Creating a DOS Games Server

This post assumes you've followed my guide to Setting up Xen on Ubuntu 12.04, and will talk you through the steps required to set up a web-accessible server for playing classic DOS Games (I've got Commander Keen, Duke Nukem 3D and Quake in mind!).

Either create a new Ubuntu VM, or clone an existing one, launch and then connect via console.

First we want to install DOSBox

apt-get install dosbox

Next, we want to configure X-Forwarding (dosbox makes quite a mess of our console if we try and run it otherwise!)

nano /etc/ssh/sshd_config

Ensure X11 Forwarding is set to Yes
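For reference, the exact directive sshd looks for is "X11Forwarding yes", and it can be flipped with sed rather than edited by hand. A sketch, demonstrated here against a scratch copy rather than the live config - on the VM you'd run the sed against /etc/ssh/sshd_config itself and then restart ssh:

```shell
# Demonstration copy; on the VM, target /etc/ssh/sshd_config instead
# and restart ssh afterwards (e.g. "service ssh restart").
printf '#X11Forwarding no\n' > /tmp/sshd_config.demo

# Uncomment the directive (if commented) and force it to "yes"
sed -i 's/^#\{0,1\}X11Forwarding.*/X11Forwarding yes/' /tmp/sshd_config.demo

grep '^X11Forwarding' /tmp/sshd_config.demo
# prints: X11Forwarding yes
```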

Add an unprivileged user (do you really want to be logging in as root to play games?)

sudo adduser gamesuser
sudo passwd gamesuser

Now we should be able to SSH into the VM (run ifconfig if you don't know the IP)

ssh -XY gamesuser@ip.address

We should get a window open on our local machine when we run

dosbox
If that worked, we've got some very basic config to do: exit DOSBox and then run the following

nano ~/.dosbox/dosbox-0.74.conf
#Add the following to the end of the file, in the autoexec section

mount c ~/.dosbox/drive/

#Save and close
mkdir ~/.dosbox/drive
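Put together, those steps can be sketched as a single sequence. Appending to the end of the file works because [autoexec] is the final section of the DOSBox config; the 0.74 version number in the filename may differ on your install:

```shell
# Create the directory that will back the DOS C: drive
mkdir -p "$HOME/.dosbox/drive"

# Append the mount command to the autoexec section (the last
# section of the file, so a plain append lands in the right place)
conf="$HOME/.dosbox/dosbox-0.74.conf"
echo 'mount c ~/.dosbox/drive/' >> "$conf"

tail -n 1 "$conf"
# prints: mount c ~/.dosbox/drive/
```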

Now, when you run dosbox, the C: drive is automatically mounted. If you're keen to start installing games rather than doing the last few bits of configuration, copy your installers to that directory, then call them from DOSBox.


Read more…

I've gone Joomla 3!

Joomla! 3.0 was released in September 2012, and I've been planning an upgrade of the site ever since. As should be obvious by the change in layout, the migration is now complete. 

There are quite a few changes that have been made at the same time, some obvious, some far less so...


I've gone for a far simpler template this time, though it is fully responsive and so should work well on mobile devices too (something the old site never really handled particularly well). I've stuck with my tendency for light on dark as I find it easier on the eyes (though as I spend most days staring at consoles, it may well be a personal bias), but have also implemented a colour switcher for those who prefer to read black on white (see the link at the top right of the screen).

A little while ago, I wrote about My love/hate relationship with Responsive Web Design. I've tried hard to avoid the pitfalls I listed, especially the section about removing functionality from smaller screens. I'm sure over time I'll improve the experience somewhat, but I think the site works well on mobile - my only issue being that Adsense isn't responsive and so won't adjust if your viewport shrinks (perhaps because you've turned a landscape tablet portrait).

The old shop is gone (Virtuemart doesn't currently support Joomla 3.x) so if you had purchase history I'm afraid that's gone for the time being. I am planning on importing it into the new system at some point, and if you need it in the meantime just drop me an email.

For me, the biggest change though, is that Joomla finally contains the option to not send you your password in plain-text when you first register. So now, when you register for the shop with a nice secure password, the site won't be compromising it by sending it to you via email (I've never, ever, understood the logic behind that behaviour). The only occasion where this happens now is if you sign up with a social media account and your email address couldn't be retrieved automatically (which essentially means Facebook, Twitter and GitHub - linking a Google account gives a far more seamless experience).

In summary, I've made the following changes

There are more changes planned for the future (I'm still not entirely happy with the Shop, for example) but I think it's a good starting point. 

Having detailed the changes present in this version of the site, I'm now going to be a little self-indulgent and look at how my little corner of the web has changed over time.

Read more…

Why you should always consult a professional

Originally posted at


The web is, undoubtedly, a wonderful resource: it allows us to quickly and easily find information on almost anything. When it comes to servers and websites, however, it can be incredibly dangerous if you (or worse, the author) do not know what you/they are doing.

I was browsing to see if there's a better way to reset a user's password from PHP than the method I usually use, and stumbled across this tutorial. Quite frankly, my chin hit the desk at the advice being offered.

In all fairness to the person who posted the tutorial, they have attempted to mitigate some of the serious security concerns, but despite that, it's still a security nightmare. What makes it worse, is the comments below indicating that some users are blindly copying and pasting the PHP and following the steps without even a base understanding of how it works.

In this post we'll be looking at what the tutorial suggests, and why it's a bad idea.


Read more…

My love/hate relationship with Responsive Web Design

I am, by no stretch of the imagination, a web-designer. I can write and understand CSS, but completely lack the ability to look at something and think it would look nice if I did this; ultimately I'm a sysadmin and a developer at heart - the visual stuff just doesn't interest me.

However, just because I can't design something doesn't mean I don't come into contact with it - I regularly do, both as a user and in my professional life - and 'Responsive' design is becoming increasingly popular (for example, Joomla 3 is responsive by default).

Responsive design is how things should have been designed from the beginning. Of course, we didn't have to worry about such a range of different devices accessing sites, so no-one really thought too much about it - everyone was far too busy working out how best to deviate from the standards so that IE would display things correctly.

The growth of responsive design does cause me some concern though, and in this article I'll be explaining what my concerns are, and how we can best avoid them.


Read more…

Introducing PHPCredLocker Version 1

For a little while now, I've been working on a small PHP based project designed to store passwords securely. After a lot of testing, bug-hunting and fixing, PHPCredLocker has reached version 1.

Designed to prefer security over convenience, the system takes every step it can to protect stored credentials. Depending on the version of PHP you are running, passwords will be encrypted with either OpenSSL or MCrypt. A different key is used for each credential type (think FTP password vs Joomla password) and the database has been designed to be as unhelpful as possible to any miscreant who should manage to get a database dump.
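The per-type key idea can be illustrated from the command line - this is a sketch using the openssl CLI rather than PHPCredLocker's actual PHP code, and the key names and cipher choice here are my own invention:

```shell
# Illustration only, not PHPCredLocker's implementation: each
# credential type gets its own randomly generated key, so leaking
# one key only exposes credentials of that type.
ftp_key=$(openssl rand -hex 32)      # key for FTP-type credentials
joomla_key=$(openssl rand -hex 32)   # separate key for Joomla-type ones

# Encrypt an FTP password under the FTP-type key...
printf 'hunter2' | openssl enc -aes-256-cbc -pbkdf2 -a \
    -pass "pass:$ftp_key" -out /tmp/ftp_cred.enc

# ...and only that same key will recover it.
openssl enc -d -aes-256-cbc -pbkdf2 -a \
    -pass "pass:$ftp_key" -in /tmp/ftp_cred.enc
# prints: hunter2
```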

I'm not an interface designer, so the template is very basic, but PHPCredLocker has been designed so that you can adjust and override as necessary (modules and views can be overridden, and templates are easy to create).


There's also support for plugins, with a couple of core plugins included in version 1: ExternalResources and AffinityLive Logging.

I've also created a plugin called AutoAuth, allowing for the display of a 'Log In' button by certain credential types.

You can view the PHPCredLocker documentation on this site, so far I've created

There are more to come, but all will be filed under Documentation -> PHPCredLocker.


PHPCredLocker is released under GNU GPL V2, and bug reports should be filed in the GitHub Repo. Download either from GitHub, or click here to download a zip file.

You can also tinker with the PHPCredLocker Demo here.

Version 1.5 should bring some new features as well as refinement of the interface.


Read more…

My Most Used Android Apps

My other half's new phone arrives today, replacing a very old and knackered Sony Ericsson. Once the Galaxy S2 Mini gets here it'll be down to me to set it up, which has set me thinking about which Apps I use regularly (aside from games) and which of those will actually be of use to her (somehow, I can't see her using ConnectBot!)

I don't think any of the apps I use are too obscure, but then people's exposure differs so hopefully some of these will be of interest to other users!


Firstly, I always root Android phones, having that extra level of control is very important in my eyes as, amongst other things, it means you can install and use Apps that won't work unless the phone is rooted. The first of these being


Permissions Denied

Personally, I think this should be on every phone! You do need a rooted device to use it, but if you fulfil that criterion the app allows you to revoke permissions from other Apps. Sometimes it'll cause issues (the app you're adjusting might start force closing), so it's largely a case of playing around to see what you can remove. If an App won't play nice when you revoke a permission you really don't want it to have, it's time to think about whether or not you want that app installed at all.

So far I've found that you can't revoke the following

  • Facebook App - Fine Location - App will force close 
  • Google+ App - Coarse Location - App will force close (you can remove fine location no problem though)

The one thing I would say is: don't get a false sense of security. Yes, it means you can download an App that's demanding permissions you don't want it to have and remove those permissions, but think about why the App might have those permissions in the first place. If you're downloading a game and it wants to be able to send text messages and make calls, is it possible you're actually downloading malware?


Do Not Disturb Me

This app implements a 'feature' that I used to have on much older phones, but that appears to be missing from both Android and iOS. Basically it allows you to set schedules where the phone is in silent mode, incredibly useful if (like me) you sometimes forget to switch silent off the next morning.

Better (or worse) for those of us who are on-call, the app can be configured to allow the phone to ring after a threshold has been reached. So the first call you make to me won't make a sound, but if you then call back, the phone will ring. The threshold is customisable, and the app can also send you an SMS after the first call to say "I'm busy, but if it's important call again".

It supports whitelisting and blacklisting (the latter being great for those who feel their call is always important!).

End result, I can sleep through the night without being woken by the sounds of my inboxes filling but not run the risk of missing an important call (or forgetting to turn notifications back on the next morning!).


Amazon Kindle

Even if you never buy books from Amazon, this app is worth installing. Why? Because it handles PDFs far more gracefully than the built-in PDF reader. Especially with long PDFs (think car service manuals), it just makes things ten times easier.

Obviously, if you do buy from Amazon, there are additional benefits. I have a Kindle, but it's still useful to be able to bring books up on my phone (mostly reference material, reading fiction on a phone just doesn't feel quite right!).



TweetCaster

I find Twitter a convenient way to quickly add news links to my site, and occasionally have conversations with people on there. I'm not the most prolific of Twitter users (by a very, very long shot) but I have found that TweetCaster makes life a lot easier. It hooks into the system so that you can use it to 'Share' content, so if you're browsing a site and want to tweet about it (assuming the site doesn't have a webintent button, of course) just bring up the menu, choose Share and then select TweetCaster. It's not unusual for apps to allow this, but it is handy!

TweetCaster will also notify you of new tweets, mentions, direct messages etc. Personally, I've turned those off, but I assume there must be some who want to be notified of every new tweet!



OpenVPN

Traditionally, you needed a rooted device to use OpenVPN, but the new Galaxy Note seems to let you get by without rooting. Whether that's a decision that Samsung have made, or something we can look forward to in future devices I don't know (not my tablet, can't poke around too much!). I suspect it's more likely to be a change in Android though, so hopefully it won't be necessary to root as the TUN device will already exist.
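A quick way to check which camp a given device falls into (run from a terminal emulator app, or via "adb shell") is simply to look for the TUN device node:

```shell
# If the kernel already exposes a TUN device, OpenVPN shouldn't
# need root to create its tunnel interface; if not, rooting (or
# loading a tun.ko module) is likely needed.
if [ -c /dev/tun ] || [ -c /dev/net/tun ]; then
    tun_status="TUN device present - root probably not required"
else
    tun_status="no TUN device - rooting (or a tun.ko module) likely needed"
fi
echo "$tun_status"
```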

OpenVPN is great, it lets me connect securely to the work network. I originally set it up so that I could test the work VPN (why would I need access from a phone?) but actually it's amazing how useful it's been. If there's something up with the router, I can VPN in (assuming the router isn't dropping WAN traffic!) and access the router's web interface. Hell, with connectbot, I could access the shell but that's really a laptop job.

It also means that I can connect to the VPN and then turn my phone into a hotspot. Every client connecting to my hotspot will be routed through the VPN, useful if there's more than one of you out in the field!



AndSMB

I don't use this nearly as much as I first assumed I would, but it remains a useful tool. AndSMB allows you to connect to Samba (Windows file) shares. For me the added benefit is that it can be very verbose in its output, so if a fileshare isn't working I can test in more depth before I need to get my laptop out.

On my old phone, this was my preferred way of moving media between my phone and PC, just connect to the PC's fileshare and then push/pull the data as required.



SSHDroid

SSHDroid lets me start an SSH server on my phone so that I can log in remotely and do things that I can't be bothered to type into Terminal Emulator. It's another of those apps I installed to play around with, but now use far more than I ever expected I would. I guess if you're not into digging through filesystems to see how stuff works it's probably of little interest though!

Why I Use Joomla!

For those who weren't already aware, this site was created using the Joomla! Content Management System. Personally, I think it looks quite good, but why exactly did I choose to go with Joomla!?

I can quite happily write software all day, so why go with a CMS someone else has written when I could enjoy the challenge of writing my own bespoke platform?

In this post, I'll be explaining the reasoning behind my choice, and why I continue to use Joomla where appropriate.

Read more…

NatWest - The Big Mistake

There's been a lot of talk (unsurprisingly) about the recent screw-up at NatWest, RBS and Ulster Bank. A lot of people have blamed 'outsourcing', despite RBS denying that the work was outsourced. Here's where the confusion lies though: RBS operates in India, so although half the team weren't based in the UK, the work wasn't technically outsourced. The term in these instances is 'off-shoring'.

Of course, it's a game of semantics and it's a little concerning to see RBS willingly playing such a pointless game. To most in the UK, if you are sending the work abroad (especially to India) then you are outsourcing (even if you're really off-shoring).

What's scarier, though, is that the failures at RBS could quite easily happen to any other company. It's easy to blame the bankers (we have a lot to blame them for!) but the practices there aren't that different to what's been happening in other companies around the globe. That it was a bank that fell over is probably just blind luck.

Read more…

Aesthetic Change to Site

Those who visit my site regularly will have noticed a quite substantial change in the last 24 hours: I've finally got around to migrating to Joomla 2.5.

Not that all the changes are done quite yet: my old Attachments plugin didn't have a newer version, so I need to manually re-add attachments to affected articles.

I do have a solution to the EU Cookie Law installed, but won't activate it until I've got more than the session cookie being set (i.e. once I've finished the migration) as if something doesn't work I want to be sure it's not the Cookie Banner interfering.

Enjoy the new look and feel!

Use Joomla? Pull your finger out!

According to builtwith there are over 1.6 million Joomla based sites out there. Pretty impressive really - are you one of them? If so, then this post is aimed at you! There may be 1.6 million sites, but the sad reality is that not everyone is giving back (not that you're obliged to). It's so easy to do, and you don't even need to be a developer - there are plenty of other things you can do to help the community!

So if you use Joomla, and haven't contributed anything recently read the rest of this post for ideas on how you can help (it won't cost you a thing!).


Read more…

Apple Patenting the Obvious Again

I enjoy coding, and many would say I'm quite intelligent. But, by no means, am I some sort of super-being. With this in mind, I'd say if I was able to think up a technique then it's more than possible it was too obvious to patent.

So, given Apple have just been granted this patent, I felt it was worth highlighting that it's not new, it is obvious and has already been done!


Apple's patent "Techniques to pollute electronic profiling" essentially describes having a bot browse on your behalf in order to pollute the stream of anyone who may be profiling your browsing behaviour. In the real world, this is most likely to be advertisers, but there's no reason it couldn't also be law enforcement bodies and the like.

The bot can be programmed with the user's (or Principal's, in patent speak) interests, but is mainly aimed at 'browsing' information that doesn't fit with the user's actual interests. The idea being that the database of "x likes y" held by advertisers becomes far less valuable thanks to the inaccurate data collected from the bot browsing on your behalf.

Nice idea, sure, but not patent worthy. I wrote a script to do this in 2008 (and I don't claim to have invented the technique either) to amuse myself when it looked like BT were actually going to roll out Phorm's Webwise system. I even published it on my site (back then, it was!) for others to tweak and use. Apple's patent was filed in October 2011, so it's safe to say my script pre-dates that (17 June 2008!).


Colour me Cynical

There's a cynical part of me that suspects this is aimed at raising advertising revenue for Apple. At the moment, anyone capable of deploying Javascript across the net (ahem.. Google) can track a user's browsing habits, and tailor ads to the detected preferences. Throw a bit of data pollution into the mix and it becomes far, far harder. Apple, of course, could still aggregate this data (and I'm not saying they definitely will), which would become particularly valuable if the traditional methods of stalking tailoring ads were to become ineffective.

Regardless of the motivations, it's the Patent I object to. Apple's recent behaviour indicates that it's become a litigious shit with a stronger focus on lawsuits than actual innovation (I had a Mac SE that I loved, last bit of Apple kit I actually liked) so the granting of another patent is never good news. Given that I know, for fact, that there's prior art it seemed important to share it for the day when Apple decides to try and file suit against someone.

I mentioned earlier that I don't claim to have invented the technique of polluting an advertiser's stream, which means that there's probably a shitload of prior art out there. Firefox's TrackMeNot achieves a similar end from within a browser, and I'd hazard a guess that it's also been done for Mobile already. The only real difference is that neither I nor TrackMeNot bothered to implement a 'bot chat' function for chatrooms/IRC etc. Bet someone out there did though.



I've attached the files I posted back in 2008 (note the date on the PDF), but for easy reading, here's how simple Apple's 'invention' actually is


#!/bin/sh
# Urlrequest released under the GNU GPL v2 copyright B Tasker 2008
# A simple script to clog a Phorm client with lots of unrelated keywords
# thus reducing its commercial viability

# Before running this script you need to turn WebWise on,
# grab the cookie from your browser's cache, and then turn WebWise off

WORKDIR=/home/$USER/.urlrequest
cd "$WORKDIR"
wget --save-cookies=$WORKDIR/wgetcookies \
--load-cookies=$WORKDIR/wgetcookies \
--keep-session-cookies -r --random-wait \
--ignore-length -nd \
-R .gif,.jpg,.txt,.pdf,.js,.js.1, \
--spider -i $WORKDIR/urlrequesturls
rm -rf "$WORKDIR"/*

# Be aware that you will be using up your bandwidth,
# not at any great rate, but bandwidth does cost money!!!!


I've a feeling that I also wrote a more advanced version that varied the delay between requests based on the size of the HTML page being retrieved (bigger pages often contain more text, so it represented someone taking a while longer to read), but can't find whether or not I posted it. It also allowed you to set the user-agent (the default was something rude about Phorm). I probably have it somewhere, but it's not that important.
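From memory, the size-based delay worked roughly like this - a reconstruction rather than the original script, with an arbitrary "one second per 2KB" ratio and a local stand-in file in place of the wget fetch:

```shell
# Derive a "reading time" from the size of the fetched page, on the
# basis that bigger pages hold more text and so take longer to read.
page=/tmp/fetched_page.html
head -c 8192 /dev/zero > "$page"   # stand-in for a page wget pulled down

bytes=$(wc -c < "$page")
delay=$(( bytes / 2048 ))          # ~1 second per 2KB of page
if [ "$delay" -gt 60 ]; then       # cap so huge pages don't stall the bot
    delay=60
fi
echo "would sleep ${delay}s for ${bytes} bytes"
# prints: would sleep 4s for 8192 bytes
```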

It's not a great implementation (I've no idea why I decided to require ~/urlrequest and ~/.urlrequest for a start!) but it works well when populated with a sensible list of URLs. Granted, the list accompanying the script wasn't exactly sensible ( is always a classic!) but the later version that searched for a category and then browsed on its own enhanced it a bit.



There are probably reams of prior art, but as the USPTO seem to have missed it (there's a surprise), I thought I'd contribute my little bit. If it helps prevent any of Apple's iTrolling then it's worthwhile.

I have to admit, I was pretty amazed to see the patent given how obvious such a solution really is, but then who holds a design patent on rounded corners? I was even more amazed when it occurred to me that I'd created something similar more than 3 years before Apple filed their patent! It's just a pity I can't link to my original posting on , though I may check the Internet Archive to see whether it can still be found there (some of my stuff made it in, be great if this was there too!)


File: urlrequest

File: urlrequest.pdf

Old Articles given new life

Before I took it offline, contained a vast number of articles on a wide range of subjects. The plan was always to filter through and republish those articles that may be of use to others.

I've previously republished a few of the articles, but efforts to skim through the rest have largely been overtaken by other commitments.

Having had a total of 1 hour's sleep last night, I needed something easy enough to do with an impaired mind, but challenging enough to keep me awake! As a result I've now republished the following articles



In my sleep-deprived state, I've not been able to thoroughly proof-read all of the republished content. Some of the posted content dates back to 2006, when my writing style was... let's say different... so I can only apologise for any typos or use of abysmal grammar!

I've also launched a new service in the Suffolk Area: Vehicle Fault Code Reading. This can cost up to £70 at a garage, but I'll come and read your codes for just £15.


Read more…

New Service: Vehicle Fault Code Reading

I'm offering a new service in the Suffolk (UK) area - Engine On Board Diagnostic Code (Fault Code) Reading.

I can read all fault codes on any Petrol or Diesel car newer than 1996 and also on some older cars. All cars manufactured since 2001 are required to support OBDII (OBD2) so any car newer than 2001 can definitely be checked.

I charge a fixed price (see below) for this service, no per-code charges! Price includes reading and resetting codes. Service Light reset is charged at a different rate, but is cheaper if I'm reading other fault codes.



I'm based in Ipswich, but can travel (for a little extra) if necessary (for example if you're unable to bring the car to me). No additional travel charge for the Kesgrave/Martlesham area.

Evenings and Weekends are preferred, but daytime checks may also be possible.

Reading fault codes can help in troubleshooting the following issues (amongst others)


  • Ignition Issues
  • ABS/Traction Control Issues
  • Emissions issues
  • General Engine Issues
  • Fuel-System Issues
  • Fuel Consumption Issues
  • Steering System
  • Some Braking Issues



  • £15 to read (and if you want, reset) all codes, no charge if I can't read the codes for some reason.
  • £20 to reset Service light (£5 if reading other codes anyway)



  • You'll need to know where the Diagnostic connector is on your vehicle (usually under the ashtray on newer vehicles), but if you don't let me know in advance so I can look it up.
  • Your car must have a working battery so that the codes can be read. If this is an issue, let me know and I'll bring one to connect for the duration of the check.
  • Please let me know the Make, Model and Year of the Vehicle in advance so I can look up the relevant DTC translation table.


Fault Resolution
  • Depending on the fault detected, I may be able to resolve the issue. This will cost extra (amount dependent on work required), but will not be undertaken until a price has been agreed!
  • If I'm unable to rectify a fault, you'll still be able to visit a Garage but will be able to provide them with details of all detected fault-codes.



Some garages are charging £70 for this service, so at £15 it's a bargain.

If you want me to read your DTC codes, then please contact me to arrange a time

Linux for Business People

It's taken some time and effort, but I've finally released my first book - "Linux for Business People". It's now available for purchase on the Amazon Kindle Store.


2021: You can access the book here.


Linux for Business People is aimed at Business Men and Women primarily within the Small to Medium Business sector. It examines the business case for using Linux based systems within business, highlighting any potential benefits or drawbacks. Although primarily focused on servers much of what is discussed will also apply to workstations. The experience of a number of businesses currently using Linux is also explored in order to highlight the real-life benefits and pitfalls that they have encountered.


Additionally, Linux for Business People also gives the reader some hands-on experience of the tasks a Linux SysAdmin is likely to undertake. Whilst many business men and women may have little interest in this area, the hope is to help demystify some of what the IT department actually does. This section of the book may also prove to be a useful resource for those new to managing Linux based systems. Due to its simplicity, I doubt that the experienced sysadmin will find much of interest, though it does contain enough detail to allow those in small business to routinely manage a server themselves.


Whilst my writing style is almost certainly not that of a professional writer, I believe Linux for Business People should be reasonably easy to read and follow. I've tried to keep all jargon to a minimum and have used footnotes extensively to help explain complicated subjects.


Linux for Business People is currently exclusive to the Amazon Kindle, however I do plan on publishing through other mediums at a later date. If there's sufficient demand, I may also consider a short print-run.


I've also made a conscious decision not to use DRM on any works I publish, including this. Please respect my rights as an author by only obtaining through authorised means just as I have respected the rights of readers by opting not to use Digital Restrictions Management to protect my work.


You can purchase Linux for Business People direct from Amazon and have it delivered to your Kindle reader or application.


Click Read More for a short snippet from the book!



Chapter 3: Which Type of Linux should you use?

There's a very good chance when provisioning a Linux based system that your IT Department has already selected the Linux Distro that they feel is most suited. This chapter, however, is intended to help you understand how to make these decisions. Whilst all Distros run the Linux kernel, many are built with a specific task in mind. With the wide variety of Distros in the Linux ecosystem, it would be impossible to advise exactly which flavour is best suited, but it is possible to examine some of the assessments that need to be made before making a selection.

It's important to note that many distros maintain two versions of each release - one for Desktops and one for Servers. Although based on the same underlying software, the differences between the two can often be many. The easiest example is, of course, that many of the server focused systems will not run a GUI by default (although this doesn't stop one from being installed!)


Read more…

Have a Happy New Year!

As 2011 draws to a close, it feels there's very little left for me to say but Have a Happy New Year!

2011 was the year that Steve Jobs died, News of The World was exposed, SOPA was conceived and many, many other events transpired. As ever though, I suspect most of us will forget the happenings of 2011 once we've had a few drinks tonight and will simply hope that 2012 is the year that finally moves things forward for each and every one of us.

Have a great New Year, I'll be back in 2012 and I hope you will too!

Do We Take Security and Safety for Granted?

The latest set of SCADA related exploits has set my brain wondering about the systems we assume are safe. Some have basic security checks, and some have none - so are we taking the security of these systems for granted?

My brain sometimes wanders to some interesting places, and I thought it may be interesting to pursue these a little further, despite the tiny likelihood of them actually occurring in real life (the point of course being, it may be unlikely, but they could!)


Read more…

BUGGER is back online

It's taken a lot longer than planned (you know what they say about builders' houses? It's the same for devs!) but my Project Management system is back online!

It's actually not fully completed, but the public facing aspects are finished enough to begin using. The code's still quite messy, but it's safe and means I can go back to managing projects more effectively.

I've not built the 'home pages' functionality, so for the time being the Project Summary pages will fulfil this role (just as they did in older versions).


You can access BUGGER from the navigation menu, or at


Summary Pages








Who's auditing the auditors? (it should be you)

First published 30 September 2011 on

A recently published issue with a Security Auditor has highlighted just how much potential there is for the worst to happen when information is requested by someone with a level of authority. In this particular case, the person being asked for the information had the sense to challenge the request, but it's easy to believe that many others would have simply attempted to comply.

The Security Auditor in question was insisting that the following be provided:

  • A list of current user-names and plain-text passwords for all user accounts on all servers
  • A list of all password changes for the past six months, again in plain-text
  • A list of “every file added to the server from remote devices” in the past six months
  • The public and private keys of any SSH key pairs
  • An email sent to him every time a user changes their password, containing the plain-text password.

It should be pretty clear to most that this presents a huge security issue, but faced with a Payment Card Industry (PCI) Auditor making the request, how many would simply assume that he “must know what he's doing”?


The data requested is actually very difficult to obtain from a Linux based server (as was the case) unless you've been pro-actively logging it (don't, ever!). Despite the auditor's claims, any system that stores user passwords in a retrievable form (and plain-text logging certainly falls into that category) would fail PCI compliance instantly.
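Part of why the request is so hard to satisfy: Linux stores only salted hashes of passwords, which can't be run backwards. A quick sketch of that one-way property using the openssl CLI (the fixed salt here is just to keep the example reproducible):

```shell
# /etc/shadow holds a salted hash of each password, not the password
# itself. Verification means re-hashing the candidate and comparing,
# so the plain-text is never stored anywhere retrievable.
hash=$(openssl passwd -6 -salt examplesalt 'hunter2')
echo "$hash"    # prints a $6$ (SHA512-crypt) style hash

# To check a login attempt, hash the candidate the same way and compare
candidate=$(openssl passwd -6 -salt examplesalt 'hunter2')
if [ "$hash" = "$candidate" ]; then
    echo "password matches"
fi
```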

It would, of course, be logical to assume that the auditor was in fact attempting to "socially engineer" the user as part of his audit. As, however, this particular auditor insisted on the information to the point of the user changing payment provider, that seems unlikely.

The point of this article, however, is not to simply examine the mistakes of this auditor, but to look at the wider picture. What happens when someone with an air of authority and conviction in their statements asks for something you should not provide? Will you know what's OK to disclose, and what isn't?


Read more…

The Difficulties of Anonymity Online

Quite some time ago, I wrote a piece questioning whether anonymity will ever be impossible online. At the time I concluded that it would always be possible; what I didn't touch upon is something highlighted recently - the huge amount of trust you need to place in others to actually achieve anonymity.

This post looks at the mistakes that can be made, as well as where true anonymity is not always necessary.


Read more…

Making the Move to Chrome

I've been avoiding upgrading my browser for a while, but the time has come - I've been running Firefox 3.6.18 for longer than is necessarily wise and it's really beginning to creak. Slowly but surely, Firefox has been eating more and more memory and taking longer and longer to do what I need it to.

I avoided Firefox 4.0 thanks to their decision to follow Chrome in buggering the user interface, and had been dreading changing.

But it looks like Firefox and I will be parting ways. Despite my concerns about Google's view on privacy, Chrome does seem to be the way forward for me.

Whatever happens I'm going to have to get used to a lobotomised interface, and as every release of Firefox seems to get more and more bloated I may as well take the opportunity to try something else out.

Opera, sadly, was never really a contender for a number of reasons, but I never envisaged that I might actually adopt Chrome as anything more than a browser to test sites in.

As it turns out, Chrome and I are getting on quite well, and unless Mozilla really turn things around I suspect it might be quite a while before I use Firefox for anything other than testing again.

It's a pity, but when I hit the point that Firefox is trying to use 2GB of RAM to run (for me) very few windows, it's time to call time. I did everything I could to keep that browser going; Firebug became a casualty quite early on, but even with no add-ons running and a conservative view on the number of tabs open, Firefox just didn't want to keep up!

Can't say I'm that keen to bother keeping up with their idea of releasing a new major version quite so regularly either; it's a bit too Apple for my liking.


All things come to an end I guess.

UK High Court Approves National Censorship in the name of Copyright Protection

At the UK's High Court, Justice Arnold made a decision that is not only stupid but dangerous. In the fight between the MPA and BT he ruled that Cleanfeed should be used to block the copyright infringing site Newzbin.

Cleanfeed is a technology designed to do one thing - prevent access to child abuse material (the effectiveness of it is another matter). But now it will also be used to block Newzbin (and you can be sure other sites will follow).

So why is the decision so utterly stupid?



The answer is actually reasonably simple: until now, the only people with a motive to circumvent Cleanfeed were those interested in accessing child abuse material. Now, Mr Arnold has ensured that a much wider audience have a reason to want to circumvent the filter.

Newzbin have already promised to 'break' Cleanfeed, to which BT responded "We would be appalled if any group were to try to sabotage this technology as it helps to protect the innocent from highly offensive and illegal content". What hasn't been realised yet is that every user wanting to access Newzbin will need to circumvent the filter in order to do so.

Before today, if a site stated "To bypass Cleanfeed ......." you could reliably say that they were supporting those who wish to view Child Porn. It is for this very reason that I have posted very little information regarding Cleanfeed to this site.

Now, that information will circulate quickly, as many users will be searching for the answer and circulating it to their friends. Thanks to this judgement anyone found searching for information could simply be attempting to access Newzbin, which is perfectly capable of hosting non-copyright infringing material as well.

So although I don't believe Cleanfeed to be particularly effective at preventing child abuse, Justice Arnold has now ensured that the information required to bypass the system will be more readily available to those with more nefarious desires than a dodgy MP3 or two.


Read more…

The Slippery Slope of Censorship

I've written several articles recently about why Internet Censorship does not work and why implementing even the most basic of national filters is a bad idea. I've pointed to examples such as China and Australia as proof that these schemes usually suffer from 'mission creep' and end up censoring material that falls outside of the original scope.

Now in the UK, we have the beginnings of another example.


Read more…

The Importance of Salting Stored Passwords And How To Do So Correctly

Many will have watched the recent releases of user passwords from Sony (and others) with interest. A lot of people won't, however, realise why Sony's practices were so poor. For many, storing passwords means just that, purely because they aren't aware of the methods available to make it a lot harder for an attacker to gain access to users' passwords.

Whilst network security obviously plays a very important part, even when that fails it should be almost impossible for an attacker to tell you what your password was based on nothing but a database dump. In this short post we'll examine exactly how passwords should be handled and stored in a database.
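To sketch what "handled correctly" looks like (this is an illustration of the general technique, not Sony's setup or the exact scheme the full post uses): each password gets its own random salt and a deliberately slow key-derivation function, so a database dump alone reveals neither the password nor whether two users share one. A minimal Python example:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> str:
    """Derive a salted hash; the salt is random and unique per user."""
    salt = os.urandom(16)  # stored alongside the hash, not secret
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash from the stored salt and compare in constant time."""
    iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(candidate.hex(), digest_hex)
```

Note that because the salt is random, hashing the same password twice produces two different stored values - which is exactly what defeats precomputed "rainbow table" attacks.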

Read more…

Are companies being too loose with our privacy?

Privacy is a big issue: the world is gradually becoming more aware of just how devastating it can be when you lose control of your private details. Mistakes are being tolerated less and less, and yet companies continue to encourage customers to make decisions that are potentially risky.

In this case, I'm referring to the growing habit of companies relying on Facebook and Twitter as their points of contact for customers. It's a potentially dangerous practice, and companies need to realise that not every user knows enough to protect themselves adequately.



A current example would be Birdseye's campaign to find new recipes for their Chicken. There's a prize of £1000 on offer for the best entry, but in order to enter this competition hopeful chefs need to post to the Birdseye Facebook page.

Whilst most users do have Facebook accounts, a prize of £1000 is compelling enough to encourage even the wary to consider signing up for an account. Given the, frankly, horrific history of Facebook and privacy is this a wise move?

Quite aside from giving Facebook free advertising, what Birdseye are requiring is that would-be entrants willingly trade an element of their privacy for the right to enter a competition. Facebook's business model is based upon the concept of amassing as much personal data as possible so that they can then display 'relevant' adverts to the user. Unfortunately, this business model does not sit well with best practice regarding privacy.

Facebook has a history of making decisions to the detriment of its users, often resetting privacy controls to the default ("Available to Everyone") whenever they release a new control. For the average user, the task of maintaining these settings to protect their personal information is just too huge and so they don't do it. They may also not realise that the settings have changed, but whatever the cause, their data is exposed to the world. It's all too easy to blame the users for being slack, but the reality of the matter is this: Facebook have a responsibility to protect that data and shouldn't be putting users in this position.


Read more…

Security Flaw in the ComTrend Powerline 902 Adaptor

This article was originally published at in 2009

OK, so I was playing around testing NetManage when I discovered something of a security flaw in the ComTrend Powerline 902 Adaptors I use when I want to use a laptop somewhere that I haven't got a Wired Connection.

Quite how serious it is is hard to say. In theory, an attacker would have to already have access to your network in order to exploit it, but I suppose there is a risk that they could chance a more obvious attack first in order to glean this information. It also calls into question quite why ComTrend would bother to password-protect the Advanced Information page, but none of the ones below it.

My biggest concern is that BT have been shipping these things with BT Vision, and if there is a bigger attack vector than I'm presuming then it could affect a lot of people. Nonetheless, there is no way they will listen to me if I try to raise the issue, so here we go.

If you point your web browser at one of the adaptors (you're not going to have fewer than two, after all), you should be prompted to enter a password;

If you get the password wrong, you get Access Denied

It's all good so far, but now try changing the URL manually to http://IP.ADDRESS/mac.htm, as you can see below the adaptor will quite happily let an unauthenticated user view the encryption key it uses for its transmissions

In fact, you can also view it by changing the URL to point to info_mac.htm.
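The check above is easy to script. A hedged sketch in Python (the page names are those from this post; the IP address, timeout and overall structure are my own illustration, not a published tool):

```python
from urllib.error import HTTPError
from urllib.request import urlopen

# Pages the adaptor reportedly serves without authentication
UNPROTECTED_PAGES = ["mac.htm", "info_mac.htm"]

def probe_urls(adaptor_ip: str) -> list[str]:
    """Build the URLs that expose the encryption key without a password."""
    return [f"http://{adaptor_ip}/{page}" for page in UNPROTECTED_PAGES]

def check_adaptor(adaptor_ip: str, timeout: float = 5.0) -> dict[str, bool]:
    """Report which pages the adaptor serves without prompting for credentials."""
    results = {}
    for url in probe_urls(adaptor_ip):
        try:
            with urlopen(url, timeout=timeout) as response:
                results[url] = response.status == 200
        except HTTPError:
            results[url] = False  # 401/403 means the page is actually protected
        except OSError:
            results[url] = False  # adaptor unreachable
    return results
```

If `check_adaptor` reports either page as reachable, the adaptor is leaking its key to anyone already on the network.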

Now try clicking the Advanced Information or Configuration link, it will chuck you out. The Information on these pages is far less sensitive, yet it's been protected.

In reality, even if someone did get hold of your encryption key they would need to plug into a mains socket reasonably close to your house/office, but your neighbours could probably make use of it.

As I mentioned before, an attacker would need to gain access to your network before they could view this information, and the adaptor won't let an unauthenticated user change anything, but a poorly configured NAT router could potentially stick one of these adaptors in your DMZ, or enable port forwarding to it.

I haven't checked to see if the adaptor will allow you to log in from an IP outside its own subnet, but I'm guessing it probably does. Make your own mind up about how big a threat this really poses but, keeping in mind that a lot of these things have been shipped, it does seem worrying that the system has such an obvious flaw.

A quick note for anyone who lives near me, I've already changed the encryption key, and very rarely use the adaptors anyway. Just to save you some time.

I'm off to Google to see if I can find any publicly accessible adaptors!

UPDATE: I figured I'd take a proper look at the security of the adaptors: how easy is it to run remote code on them, and so on. In other words, is it possible to load some form of network sniffer/cracker onto them, and exactly what algorithm is used in the encryption? As a starting point, I decided to point Nessus in the direction of one of my adaptors.

It found an Etherleak vulnerability. Depending on the type of network you are running, this is either inconsequential or really quite bad. Potentially, it could also be of use if the padded data you get back comes from an adaptor on someone else's network trying to connect to yours. Presumably, as part of a connect request it would send a recognisable header encrypted with its own key; if you know what this header is, you may even get as far as deciphering the key.

If you are running a corporate network with these things, it could also potentially be bad, though given the nature of the devices, they shouldn't see any more than you can, and the attacker would already be on your network.

Still, these devices are really starting to stink of cheaply made crap.

UPDATE 2: Sorry, got my dunce's hat on today. I mentioned earlier that I wanted to see what algorithm the devices use for encryption; it actually says in the screenshot that they use 3DES encryption.

Building Network Resilience Through Sensible Reporting Policies

Sometimes life as an Administrator can be pretty hard; we often face difficult choices knowing that we'll need to step on some toes. Where security is concerned, most will decide that it's their job and that toes might have to be stepped on.

As a user, however, this choice isn't quite so clear cut. Users may be a bane sometimes, but they can be a valuable tool in maintaining the security of your network.

When we plan a network, meticulous preparation goes into planning what will sit either side of the firewall, which nodes will be on which VLAN and all sorts of other aspects which really don't interest users at all. The reality, however, is that all this preparation can be undermined by one mistake, or one bug in a piece of deployed software.

Sensible policies are just one of the ways to mitigate this risk, but a lack of clear policy (or worse, a poorly written policy) can compound the problem exponentially.


Read more…

The Ads we love to hate

Advertising is everywhere, and there can be very few people who don't love to hate them! But do these irritating modules serve a purpose? Should we block them or just learn to live with them?

I hate advertising, it's everywhere and is often aimed at manipulating your subconscious, but I can't convince myself that we'd be better off if advertising just went away.


Read more…

When did Project Management become such a buzzword?

I remember when, not that long ago, very little planning went into many activities. Software development was done (almost) on the back of a fag packet, and reorganisations in the office were planned on paper or verbally.

Now, however, everyone's into Project Management! This isn't a bad thing until you try to find a PM solution to meet your needs. It used to be difficult because there were but a few solutions available; now, it's difficult because there's just too much available!

As an individual, I don't need too many features, and don't want the storage overhead of an all-in-one solution.

In fact, at this point, all I'm interested in is a replacement for BUGGER. I've been searching for a good portion of the day, and found every package had (as a rule) one of two drawbacks;

  • Too big a package
  • Wouldn't allow anonymous access to files etc.

All I'm actually after is a suite to manage my personal projects, track bugs and host release files and/or documentation. It should be such a simple request, and yet there seems to be nothing out there!

Instead of a quick solution, I appear to have little choice but to bring BUGGER up to date. The only upside to this being that I should be able to import the database into my final solution.


Seems to me that everyone is talking about Project Management, and yet very few people are thinking outside the box when it comes to developing solutions! That said, I'll be largely tailoring BUGGER to my needs, so I can't criticise too strongly!


Social Networking and Children

I've been asked a few times why there are no pictures of my recently born Son on Facebook. Although many people happily post pictures of friends and family to social networks, it's something that I try to avoid for one simple reason - privacy.

Social networks such as Facebook are infamous for their laid back approach to the privacy of their users, and it seems that whenever Facebook release a new feature they reset all privacy settings to something that suits their needs and not ours. Not to mention the holes in their approach.

It may not seem the end of the world if a few pictures get out into the wild, but there's a bigger and much more insidious privacy issue here.


Read more…

Sky Broadband's Terms and Conditions: Are they legal?

I was asked for advice recently on an issue with Sky Broadband. I won't go into too much detail, but those familiar with Sky won't be too surprised to know that half of what the 'customer service advisor' said was absolute rubbish.

So I thought I'd take a quick look at the T&C's for their service, I must admit I'm rather shocked at what I found;


So let's start at the top of the current T&C's;


You must keep the Sky Broadband Product you have chosen for a minimum of 12 calendar months from the date your telephone line is first activated by us or BT to receive Sky Broadband (the Minimum Term), unless you or we are allowed to end this contract earlier (Condition 11). If your Contract ends during the Minimum Term (other than where you have a right to end it – see Condition 11(b)) we may charge you an early termination charge. We may charge this amount directly to any credit or debit card which you have provided us with details of and, by entering into this Contract, you are authorising us to do so. We will give you reasonable notice in writing before these charges are made.


Now I don't work for a bank, but I seem to recall that card companies are generally not too happy with companies attempting to process payment without explicit customer consent. Whether or not such an argument would stand up in court is also not clear, but it probably doesn't matter as there's far better to come.


Your use of Sky Broadband, and that of those you allow to use Sky Broadband, must comply with our Usage Policies. If your chosen Product has a Usage Cap then you must not go over that Usage Cap each month otherwise we may take action against you. This may include upgrading your Product to one with a higher Usage Cap if you go over your Usage Cap twice in any six month period. You will then have to pay the then current price for that Product. We will send you email alerts to tell you if you are approaching your usage allowance. You should make sure we have an email address that is up-to-date and that you check for emails regularly. Please see our Usage Policies for further details. You are responsible under this Contract for the use of Sky Broadband by any person you allow to use it (Condition 2(c) and Usage Policies).


So what Sky are claiming here is that if you go over your usage cap twice in 6 months, they have the right to upgrade your package to a more expensive option, and there's nothing you can do about it! WRONG. This term would almost certainly fall foul of the UK's Unfair Terms in Consumer Contracts Regulations and at the very least you'd be entitled to leave Sky _without charge or penalty_ if they were to try and enforce this term (the change of package being a material change to your agreement).

So what else do we have?


Sky Broadband is variable and our prices, Products and the Email & Tools can change, even during your Minimum Term. However, if we reduce the level of service provided by your chosen Product and you reasonably consider that you have been materially disadvantaged by this you will have a right to move to another Product accessible by you or end this Contract. You can also end the Contract during your Minimum Term if we increase your Sky Broadband Payment more than once, or by more than 10% or the annual increase in the UK Retail Price Index, whichever is the greater.


Again I'm not a lawyer, but as I understand it just increasing the price once _during the minimum term_ is sufficient a change for the customer to be able to terminate the contract without penalty. The law does apply to service providers in a slightly different way, but as a rule of thumb if you have signed up to a service at £10 a month for 12 months, the provider cannot adjust the price (even to £10.50) during that time. It's slightly different where VAT is involved, but not for a company's own charges.


Your right to cancel does not apply to you if we automatically upgrade your Product in line with our Usage Policies.

Actually, I think you'll find it does, under the UTCCRs. Now, further down the T&C's, Sky do say that you can downgrade again if you keep your usage below the old cap in subsequent months. However, what can you do if you can't? You could argue that the package is not suitable for your needs, and that although Sky do offer a more suitable package, other packages (from other providers) are far better suited to your needs. What Sky cannot do is co-opt you into a more expensive subscription and keep you there.


By becoming a customer you agree that any member of the British Sky Broadcasting group may use and share, within that group, the information you provide and other information we hold about you  for account management and, unless you have told us otherwise, for market research, sending you periodic newsletters about your services and the marketing of group and third parties’ products and services including for a reasonable period after you cease to be a Sky customer. This may include contacting you for marketing or market research purposes by post, telephone, email or SMS unless you tell us you don’t want to be contacted for such purposes in any of these ways by calling us on 08442 41 41 41 or sending an email to

Not only do Sky charge you to watch TV with adverts, but they pass your data around the BSkyB Group to try and make more money out of you. If you want to opt out of this, there's no quick form: you need to either phone them (at your own cost) or send an email to a fairly generic e-mail address. Further down the T&C's is another clause specifying that they may also pass your details to third parties, unless you choose to notify them by phoning or e-mailing.


Read more…

A Developer's Point of View on PHPScheduleit

I've been asked to add some fields to PHPScheduleit for a client, and now that I'm midway through the process I'd like to reflect a little on the project as a whole;



Recently, I've found myself getting very frustrated at just how difficult it can be to work with someone else's code, whether that's because they don't seem to understand databases or because of bizarre coding practices. I don't claim to be the world's best developer, but I do hold other people to a high standard!

With PHPScheduleit, this hasn't been a problem. The guys over there have done a fantastic job and their code is nothing short of a breeze to work with! It's a reasonably large codebase, so by now I could easily have stressed myself bald, but because the code is well commented, logically laid out and generally follows best practices I've been thoroughly enjoying working on it.

So for anyone looking at implementing a resource booking system, I'd strongly recommend looking at PHPScheduleit even if you need to add a few fields to the database. As long as you know PHP and are used to Object Orientated Programming you really shouldn't struggle at all.


To the devs behind it, if you're reading - Thanks! I've been so close to going out of my mind recently, but this latest task has reminded me just why I enjoy Software Development so much!

PayPal hinder developers using subscription model

I've only just come across this myself, and haven't seen it really reported elsewhere, so here's a heads up for any developer wanting to implement Paypal subscription payments on a website;


It used to be that you could generate a simple HTML form to create the subscription, and then if the user wanted a feature that required an upgrade, you could generate a button that would amend the amount on the subscription.

Not any more: if the site uses Express Checkout (i.e. directs to the PayPal page rather than accepting details on the originating site), there's an arbitrary 20% cap on subscription increases. So;


  • User1 has a subscription at £5.00 a month
  • He purchases level 2 access to your forums for an additional £2.50 a month
  • A subscription modification button won't work, and Paypal will return the error "Amount can only be increased by 20%"
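The cap itself is simple arithmetic. A quick sketch (how PayPal actually evaluates the rule internally is my assumption; this just expresses the 20% limit as described):

```python
MAX_INCREASE = 0.20  # the arbitrary cap on Express Checkout subscription increases

def modification_allowed(current: float, proposed: float) -> bool:
    """True if the proposed amount is within 120% of the current amount."""
    # small epsilon so exact-boundary amounts like 5.00 -> 6.00 aren't
    # rejected by floating-point noise
    return proposed <= current * (1 + MAX_INCREASE) + 1e-9
```

In the scenario above, £5.00 to £7.50 is a 50% rise, so the modification is rejected; anything up to £6.00 would pass.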

The issue is being discussed on the DeveloperX forums, with no good reason having yet been given as to why the limit has been imposed. The user would need to log into their paypal account and approve the change so it cannot be 'fraud prevention'.

If your PayPal account was created before Oct 2009, it sounds like the modification may still work. Developers beware: just because it works with your account does not mean it will work with a client's!


There is a workaround, but it's messy, unnecessary and is going to seem really stupid to the user;

  • Cancel the old subscription
  • Set-up a new subscription with the new amount

Keep in mind that the user will need to approve both of these actions, so it's never going to be quite as seamless as it should be.







Update on the Play Breach

Play have sent another email to their customers;




As a follow up to the email we sent you last night, I would like to give you some further details. On Sunday the 20th of March some customers reported receiving a spam email to email addresses they only use for We reacted immediately by informing all our customers of this potential security breach in order for them to take the necessary precautionary steps. 

We believe this issue may be related to some irregular activity that was identified in December 2010 at our email service provider, Silverpop. Investigations at the time showed no evidence that any of our customer email addresses had been downloaded.  We would like to assure all our customers that the only information communicated to our email service provider was email addresses. have taken all the necessary steps with Silverpop to ensure a security breach of this nature does not happen again.  

We would also like to reassure our customers that all other personal information (i.e. credit cards, addresses, passwords, etc.) are kept in the very secure environment. has one of the most stringent internal standards of e-commerce security in the industry. This is audited and tested several times a year by leading internet security companies to ensure this high level of security is maintained. On behalf of, I would like to once again apologise to our customers for any inconvenience due to a potential increase in spam that may be caused by this issue . 

Best regards,


The e-mail is from Play's CEO.

Play coughs to Security Breach

The online media retailer Play has e-mailed customers notifying them that a company they use has suffered a security breach. This has led to the compromise of some personal data – Names and e-mail addresses.



Play have sought to reassure customers that the breach happened outside of Play's systems and that no other data has been compromised.

The most likely use for the compromised data would be marketing, but it could also be used to help lend credibility to any phishing emails the attacker may send. Given the source of the data, any unsolicited mail is likely to purport to be a communication.

Users should be aware that such e-mails may be received, and that they should verify the legitimacy of any e-mails received. As Play's email states, Play will never ask users (by email, at least) for passwords, banking details or credit/debit card numbers.

Play have asked that any suspicious emails be forwarded to so that they can investigate and then (if necessary) alert other users that a targeted campaign is underway.



The Register is reporting that users have been receiving e-mails pushing malware from an address usually used by Play. I’ve not received any such emails myself, so can’t confirm.


Update 2

Play have given the name of the company who suffered the breach: it was Silverpop. They're no stranger to this kind of breach, having suffered a similar one in 2010. What's equally aggravating is that there's no mention of the breach on their site, much less an apology!



Some Database Basics

At the request of a client I've been writing a Joomla module that will allow them to import data from a CSV file into the database used by another module. I thought it would be a fairly straightforward job until I started development, and then... well, let's just say the original developer doesn't seem to know the difference between a flatfile and a relational database model.

That said, from a user's point of view, they've done a good job. So in order to avoid detracting from their product, I'll change a few details! In this article, I'll not only show the right way to lay tables out, but also how this particular developer opted to do it.
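As a (hypothetical, names invented) preview of the difference: a flatfile layout repeats the same text on every row, whereas a relational layout stores it once and references it by id, so a rename is a single-row update rather than a scan of the whole table. A Python sqlite3 sketch of the relational version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    );
    CREATE TABLE items (
        id          INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        category_id INTEGER NOT NULL REFERENCES categories(id)
    );
""")
conn.execute("INSERT INTO categories (id, name) VALUES (1, 'Hardware')")
conn.executemany(
    "INSERT INTO items (name, category_id) VALUES (?, 1)",
    [("Router",), ("Switch",)],
)

# The category name lives in one place; a JOIN reassembles the flat view
rows = conn.execute("""
    SELECT items.name, categories.name
    FROM items JOIN categories ON items.category_id = categories.id
    ORDER BY items.id
""").fetchall()
```

The flatfile equivalent would store 'Hardware' on both item rows, inviting inconsistencies the moment one copy is edited and the other isn't.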


Read more…

ISPA Invited to Porn Discussions with Minister Vaizey - Analysis

On Monday a meeting will take place between UK ISPs, the Culture Minister and lobbying organisations such as Safermedia.

Safermedia have been supported by MPs such as Claire Perry (MP for Devizes), who has suggested measures such as requiring age verification to remove a mandatory block on adult content on wired internet connections (wireless internet providers already have this requirement.)

The Internet Service Providers Association (ISPA) contends that the blocking/management of adult content is far better suited to software installed on the user's computer (NetNanny being an example).

Read more…

Assange: The Need To Identify Two Separate Issues

There’s a lot of upset about the arrest and possible extradition of Julian Assange. Many of his supporters claim that the case has come as the result of US pressure, but to believe this is to conflate two issues that may be completely separate.


My views on Assange are reasonably well known, I think he’s extremely creepy and there’s something that just doesn’t seem quite right. However, although he may argue with this, Assange is not Wikileaks. Although I believe Wikileaks have been very irresponsible in their actions (when compared to other similar sites), I don’t make the mistake of bundling one with the other.




For the purposes of this post, we’ll assume support for Wikileaks. I’ll even adopt the rhetoric of some of Wikileaks’s most avid supporters and claim that they are “changing Governments”.

So, now that my Wikileaks support is (temporarily) established, we’ll establish something as ‘fact’;


Assumption 1: What Wikileaks is doing is 100% right, they couldn’t have made any changes nor done anything differently.


So, with this in mind, what does this assumption do for Assange? Nothing, that’s what. Whether Wikileaks is 100% in the right, or the wrong, has no bearing on whether or not the man is guilty of rape. Although the term ‘sex by surprise’ has been bandied about, the charge seems to be that he had unprotected sex with a woman whilst she was asleep (knowing full well that she wouldn’t have consented if she were awake.)


Whether or not you consider this rape is immaterial. If Swedish law says that it is, then in Sweden committing that act constitutes rape. I think it would be fair to say (assuming the allegations are true and accurate) that most women, in any country, would be somewhat upset if you were to do the same to them.



Read more…

James Brokenshire re-defines Success

This post was originally published to Freedom4All, the original can be found here in the archive.

Firstly, hat-tip to Peter Reynolds for covering this story. 

James Brokenshire (in typing that I originally used a ‘t’ instead of an ‘r’, a Freudian slip?) has claimed that the dangerous level of cocaine impurity seen in the UK is a ‘sign of success’.

This is quite possibly one of the most dangerous and uninformed statements that I have heard since I began scratching at the thin veneer of the war on drugs. 

Brokenshire, on Friday, claimed that 

The quality of cocaine on the streets is, in some cases, as low as 10% in purity at the moment. That shows some of the very effective work that is taking place. 

With apparently no understanding that this means people are instead putting unknown chemicals up their nose. Regardless of where you sit on this debate, at least you know how harmful (or otherwise) cocaine is. Can you say the same for the following; 

  • Caffeine 
  • Boric Acid 
  • Benzocaine 
  • Creatine 
  • Diltiazem
  • Dimethylterephthalate 
  • Hydroxyzine 
  • Lignocaine 
  • Mannitol
  • Paracetamol
  • Phenacetin
  • Procaine
  • Sugars
  • Tetramisole hydrochloride 

So, Mr Brokenshire, what are the harms of these when you inhale them? What about if you burn them? What about the many different combinations of these that could be present (87178291200 by my calculation) before we even consider different levels of dosage! 
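For the curious, the figure quoted above is 14 factorial: the number of orderings of fourteen adulterants (counting Procaine and Sugars as separate entries). Strictly speaking, the number of distinct mixtures - where only which adulterants are present matters, not their order - is the smaller 2^14 − 1, though still far too many to characterise. Checking the arithmetic in Python:

```python
import math

ADULTERANTS = 14  # the cutting agents listed above

# Orderings of all fourteen adulterants: matches the quoted 87178291200
orderings = math.factorial(ADULTERANTS)

# Distinct non-empty subsets, ignoring order: every possible mixture
mixtures = 2 ** ADULTERANTS - 1
```

Either way the point stands: nobody can claim to know the harm profile of what's actually being sold.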

Perhaps Mr Brokenshire would like to enlighten us as to the relative dangers of each of these permutations? After all, he seems reasonably sure that their very existence is a sign of the success being made. 

That is, at least, assuming that the prohibitionists are actually interested in reducing harm to users? Could it be that they have another motivation, and that the ‘war on drugs’ has never been about preventing harm? 

If presented with two bags of cocaine: 

  • One at 1980s levels of purity 
  • One at 2010 levels of purity 

Which would Mr Brokenshire consider safer to take? The one with certainty (well, almost) as to what’s in it, or today’s dose which could contain almost anything? 

Does anyone else actually see this ridiculous situation as a sign of success? Personally, I view it as a sign of abject failure in the Government’s claimed aim of ‘protecting’ users. 


Ignorant Attitudes on Paper

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

A member of the Coalition – Simon Heffer – has penned a truly ignorant piece in the Times. Quite frankly, some of the tripe he has written is so self-evidently wrong that it’s hardly worth countering. So instead, I’ll deal with the crux of his article: 

We have a drug problem because the punishments aren’t severe enough. 

Most will already have seen straight through the ignorant veneer of this, but let’s just give it a quick prod anyway! 

So the premise is: if we strengthen the punishment of drug users, the problem will go away. 

Sounds good! Let’s go Singapore style and bring in Capital Punishment for drug use. Oh wait, we used to execute for murder, and yet it still happened. In fact, there used to be a fair few capital offences, and yet people still committed them. 

OK, what’s worse than death? I know, we’ll torture people to death if they are caught in possession of drugs. What a lovely society we’ll live in once we’ve enacted this law! 

Admittedly, I’m being a little tongue in cheek here. Even those in power haven’t banged their heads so hard as to believe execution is a good punishment for drug use (at least not in the UK!). But the argument that the drug ‘problem’ exists because we don’t punish severely enough is fallacious. The drug ‘problem’ exists because the Government decided to hand control of these substances to the black market. 

Mr Heffer begins his argument with the sentence “Drug dealers and, indeed, drug users are criminals” but omits the very important “because our laws say they are”. I thought our society had evolved beyond believing that criminals are a specific type of person; hell, why don’t we go back to using phrenology whilst we’re at it? 

You can read the rest of the Times article here. I’ll leave it to you to pick out the many errors and fallacies contained within. 


Wikileaks and Payments - Who's in the right?

There's been a lot of upset in recent weeks over the decision of numerous payment processors to withdraw their services from Wikileaks. In this post, I'll be taking a look at whether or not Paypal, Mastercard, Visa and others are actually able to withdraw their services in this manner. We'll not be debating the rights and wrongs of Wikileaks and Mr Assange as they are both separate issues.


Read more…

OpenBSD and the FBI

Following the allegation that the FBI surreptitiously planted a backdoor in the IPSEC stack of OpenBSD (some ten years ago), followers of the controversy have generally split into two factions: those who believe the allegations and those who dispute them. Very few, however, seem to be taking a neutral view, and the true impact of a potential backdoor in IPSEC is being both over- and underestimated.




The allegations are that, 10 years ago, the FBI paid a private company to discreetly insert a backdoor into the IPSEC stack of OpenBSD. At the time, this was the only freely available IPSEC implementation. Unsurprisingly, the stack was therefore ported to a number of other Operating Systems, including Linux.

The allegations were made by Gregory Perry in an e-mail to the notorious Theo De Raadt. Mr De Raadt believes strongly in openness and so published the correspondence so that others could follow the situation and perform code audits where they believed necessary. Mr Perry asserts that a contributor named Jason Wright also helped the FBI to insert backdoors into the OpenBSD network stack.

The delay of 10 years, Mr Perry asserts, is due to a 10-year Non-Disclosure Agreement that the FBI required him to sign.


Read more…

UK Government follows a dangerous path, again!

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive


It was reported on Monday that the Government have decided that they no longer need scientific advice on drug law. Presumably this is related to the mountain of scientific evidence contradicting the Government’s prohibitionist stance, but it does at least make it clear that the policies have absolutely nothing to do with health. 

As if they couldn’t top that, the Government have now announced that they will be promoting abstinence over control of drug addictions. Although, to the casual observer, this may seem like a wise move, it’s a terrible decision, and here’s why: 

Read more…

Are you getting what you pay for? How to detect ISP Packet Shaping

We all pay our Internet Service Providers a monthly fee to access the internet, and we all (quite rightly) expect to get the maximum speed achievable on our line. Some ISPs don't like to play fair, however, and shape (also known as throttle) certain types of Internet traffic.

So how can we tell if our connection is being shaped?


There's a useful set of tools at Measurement Lab. They'll allow you to run diagnostics on your connection, check for throttling and see which route your packets are taking. In this post, I'll talk you through the process of checking whether you are getting what you pay for, whether your ISP is shaping your connection and whether there are any faults on your line.


Note: Test results will be affected by anything else on your network using the connection. If someone is using VoIP, for example, the results will appear far worse than they would otherwise be.
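As a crude complement to those tools, you can get a baseline throughput figure yourself by timing a large download. The sketch below is only illustrative (the URL is a placeholder; substitute any large test file you trust) and is no substitute for the Measurement Lab diagnostics:

```python
import time
import urllib.request

def mbit_per_sec(num_bytes: int, seconds: float) -> float:
    """Convert a byte count and an elapsed time into Mbit/s."""
    return (num_bytes * 8) / (seconds * 1_000_000)

def measure_throughput(url: str) -> float:
    """Download a file and return the average speed in Mbit/s."""
    start = time.time()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    return mbit_per_sec(len(data), time.time() - start)

# Example (placeholder URL; point it at a large file on a fast server):
#   print(f"{measure_throughput('http://example.com/10MB.bin'):.2f} Mbit/s")
```

Comparing the figure for ordinary traffic against the same download at a different time of day can hint at shaping, though only dedicated tools can confirm it.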



Read more…

UK Government Decides It No Longer Needs Scientific Advice

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

Remember how we criticised the Government for not following the advice of the Advisory Council on the Misuse of Drugs? Well it would seem that they have learnt their lesson: they’ve decided to dispense with the advice completely! 

The Misuse of Drugs Act specifies that the Government must seek scientific advice when making policy decisions. It doesn’t say that they have to follow the advice, simply that they must listen. The Government have decided that they are going to amend the law to remove this requirement. 

At the very least, this is an admission that there is no scientific basis for the prohibitionist stance. It’s also another worrying step away from making informed decisions: although the Government made its previous decision despite scientific evidence to the contrary, at least they were required to listen to that evidence. 

So what does this mean? It’s quite hard to tell. Could this be the desperate thrashing of a Government which has cornered itself, especially given that the EU will be holding a review of drug law in two days’ time? Or is it a sign of continued hardline support, in keeping with their blind devotion to the failed war on drugs? 

Sadly, I don’t have the answer. I’d like to hope it was the former, but it’s quite clear that Cameron and Clegg can’t be trusted. With this in mind, it’s more likely to be the latter. 


Take Action

I’m doubtful of the effect it will have, but write to your MP today and tell them that you oppose the changes. 

More details at The Register


WikiLeaks – What a Fiasco!

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

WikiLeaks continue to dump more classified material into the Public Domain on a daily basis. There’s still nothing of real interest contained within, but the various reactions to the disclosure are quite interesting. 

Spokesman Julian Assange has an outstanding Interpol warrant in relation to accusations of rape. Amazon have once again booted Wikileaks from their cloud, Wikileaks’ DNS provider has dropped them and PayPal have suspended the account used to receive donations. 

So, what effect has this had on Wikileaks’ right to Free Speech? 

In reality, very little. Wikileaks was quickly moved to new servers in Sweden following Amazon’s decision, and the loss of EveryDNS’s service caused disruption for no more than a few hours. 

Loss of their PayPal account may cause a few issues further down the line, but the reality is that there’s always a way to receive donations. 

Even if Assange were to be convicted and imprisoned for the alleged rapes, the Wikileaks train would continue to roll on. The huge amount of publicity generated recently is sure to have enlisted a great deal of support, especially with conspiracy theories circulating that the US Government is behind the DDoS attacks targeting the whistleblowing site. 


What’s it all about? 

There’s an awful lot of fuss going on about Wikileaks’ disclosure, but it’s not entirely clear why. So far, the content published by Wikileaks (in this latest tranche) contains very little useful information. There are a few embarrassing facts in there, but nothing of real interest. 

The Internet is rife with speculation that perhaps Wikileaks have yet to publish the truly earth shattering content, especially given the fuss the US Government is making. Whilst it is a possibility, it seems unlikely that anything of that gravity would be contained within a Diplomatic cable. 


Until all the documents have been published, there’s no way to know whether Wikileaks are bluffing or if they truly have an ace up their sleeve. One thing’s for certain however; nothing is likely to stop publication. 


Is there any point?

Whilst much of the world gets excited about the banalities of these cables, we here at Freedom4All cannot help but think that this wasted effort could be better used elsewhere. With numerous abusive regimes around the world, why are we not seeing leaked documents from these countries? Why are Wikileaks not leaking documents showing the abuses that happen in China, Somalia, North Korea and the many other Humanitarian bombsites around the world? 

We cannot be the only ones who find it difficult to get excited about a document describing Putin as ‘Batman’ when we know that elsewhere in the world people are being mistreated for daring to speak their minds. 


Wikileaks have shown a willingness to publish classified documents, yet they fail to focus their efforts on the areas that need them most. Worse, Wikileaks do not always take care to redact identifiable information (we gather the latest tranche is well redacted, however) and so put the lives of informants at direct risk. 

Supporters of Wikileaks often dismiss this as ‘collateral damage’, as if that somehow makes it OK. The ability to read a classified document should never, ever, be worth more than a single human life. Every effort should be made to protect the identities of those who seek to help our forces and Government. 

Until Wikileaks can begin to redress these issues, they’ll become less and less relevant to the world we live in. Even the ‘name’ assigned to this latest tranche of disclosures does nothing more than highlight the egotistical nature of the site: cablegate. The very name presupposes that this will be a scandal of huge proportions, yet all we can see here is some waffle with a bit of name-calling. 


Should they be taken down?

We quite strongly believe that some of Wikileaks’ actions and policies could endanger the lives of people around the world. However, we are also avid supporters of Freedom of Speech which conflicts quite drastically with any call to take Wikileaks offline. 

The reality is, we’ve never had full Freedom of Speech: there are some things which you are just not permitted to say. As much as we value Freedom of Speech, human life is worth more. If Wikileaks are unable to exercise their right to Freedom of Speech without endangering lives, then perhaps they do need to be taken offline (not that it’s actually possible). 

It’s a pity, because with the right focus and motivation, Wikileaks could become a force for good around the world (and they have been in the past). Unfortunately, they seem to be playing the Public Relations game instead of focusing on the bigger picture. 

Not that we’re advocating actual disconnection of Wikileaks, simply hoping that they may start to take more care to protect the identities of those named in their leaked documents. 



Release needs your help

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

On my Call to Arms Page is a button allowing readers to Donate to Release. I put this up in the hope of generating more donations to this important charity. 

The situation has now changed: Release are in a tight position and one of their most important services is in imminent danger. I felt it was important, therefore, to highlight some of the work that Release do. 

Release are a very important source of drug information and education in the UK. They’ve done something that the Government refuse to do – accepted that people will take drugs – and taken action to help ensure those people are safe and educated. Release offer legal support to drug users to help them deal with a range of issues including housing, health and welfare. 

As well as helping drug users, Release seek to provide the general public with factual evidence. That is to say, unlike the Home Office, Release publish their findings based on scientific evidence and not moral judgements. 


Read more…

Thoughts on Whistleblowers

The Internet is abuzz with news and discussion over whether Pte Bradley Manning could be tried for treason. Pte Manning is the soldier responsible for leaking thousands of documents to Wikileaks; if he's found guilty of treason, he could conceivably be executed.

Firstly, I'm against Capital Punishment for any crime. No Justice System is yet accurate enough to be trusted with the life of the accused (I doubt 100% certainty will ever be achieved!).

That being said, do I have any sympathy for Pte Manning and others like him? No, and here's why:


Pte Manning is by no means the first, and probably won't be the last to leak classified documents. In the UK, Katherine Gun leaked news that GCHQ planned to secretly monitor the conversations of UN Diplomats, and she was (quite rightly) charged under the Official Secrets Act.


Read more…

Tips on Good Password Management

Those in the IT industry regularly remind users that good practice should be followed when using and setting passwords. You should use a different password for every site/service, use non-dictionary words and use alphanumeric phrases where possible.


The question is, how exactly are users supposed to manage these passwords? In the early days of computing, a user might have had to remember a few passwords at most. Today, even the most basic of users probably has access to numerous services which require a password. It's easy to see why some consider it unreasonable to require a different password for each of the services they use, and those that try to adhere probably get berated by the system administrators when they regularly request a password reset!


So, in order to minimise the confusion, this post will explain the principles of good password management. I've previously posted advice on how to avoid password theft, how developers should handle passwords and the importance of using secure passwords. In this post, however, we'll be looking more specifically at how users can cope with the resulting influx of secure, unmemorable passwords.
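As a taste of what's to come: the first principle, a strong, unique password per service, is easy to generate. This sketch uses Python's `secrets` module; remembering the result is, of course, the real problem:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password produced this way is non-dictionary and unique per service by construction, which is exactly why you'll need a strategy for managing them.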



Read more…

Update on the Fisher Hargreaves Proctor Security Breach

As I reported on Sunday, Fisher Hargreaves Proctor recently suffered a serious security breach on their website. At the time of original writing, details were limited to those disclosed in the original e-mail sent by FHP, especially as no-one else seems to have reported on the issue.

I e-mailed both FHP and their site maintainer – Reach Marketing – for more information on the issue, and they’ve been reasonably forthcoming. Below are the questions I asked and the responses from Reach Marketing.



  1. Why were the passwords stored in plaintext?

Unfortunately the site was coded with plain text passwords. This site was an older site and is currently undergoing some upgrades and this area of the site would have been encrypted. We have brought this change forward immediately.


  2. How was the site compromised?

The site was a victim to an SQL Injection


  3. Where were the credentials posted?

They were placed on a company’s website which had been hacked and the file placed in a folder that the company were not aware of.


  4. How many credentials were compromised?

It was a proportion of the users of the site.


  5. What steps are you taking to avoid a repetition?

We are putting in place two steps. The passwords will now be protected with multi layer encryption. Additionally we will be updating the coding of the site to close any vulnerabilities in the URL structure. We think the “cat_ID” part of the URL was targeted.


  6. Were any of your other customers affected?

We are not aware of this


  7. When did the breach actually take place?

There is a date on the hacked file of 15th April 2010 however we don’t know if this is the date the file was hacked or a date that the hacker has put on it to confuse. We don’t know when the file was placed on the company’s website (as they were not aware of it themselves). We were made aware of the file on 25/11 and had it removed the same day. We let all FHP users now(sic) about the issue on 26/11.
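For readers unfamiliar with SQL injection, the fix described above (closing vulnerabilities in the URL structure, such as that “cat_ID” parameter) amounts to never splicing request values directly into a query. A minimal sketch, using Python and SQLite purely for illustration; FHP's actual stack isn't known to me:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories (id INTEGER, name TEXT)")
conn.execute("INSERT INTO categories VALUES (1, 'Offices')")

cat_id = "1"  # imagine this arrived as a URL parameter

# Vulnerable pattern: splicing the parameter straight into the query.
# A crafted cat_ID such as "1 UNION SELECT password FROM users" would
# change the query's meaning entirely.
unsafe_query = f"SELECT name FROM categories WHERE id = {cat_id}"

# Safe pattern: validate the input, then bind it as a parameter. The
# driver treats the value purely as data, never as SQL.
row = conn.execute(
    "SELECT name FROM categories WHERE id = ?", (int(cat_id),)
).fetchone()
print(row[0])  # Offices
```

Parameterised queries, combined with input validation, close the whole class of flaw rather than the single URL that happened to be targeted.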



Read more…

Wikileaks releases new material

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

After much fanfare, Wikileaks has released a new batch of material. However, as others have noted, the new material is not exactly earth shattering. 

So what have Wikileaks released? 

The diplomatic cables obtained by Wikileaks include material claiming that; 

  • Hilary Clinton asked US Diplomats to gather passwords and encryption keys of top UN Officials
  • Negative comments regarding the UK, Berlusconi (“feckless, vain and ineffective as a modern European leader”), Sarkozy (“Naked Emperor”) and Mahmoud Ahmadinejad (“Hitler”)
  • Colonel Gaddafi rarely travels without his voluptuous blonde nurse
  • China was involved in the attack that compromised Google’s servers last December. 

All told, whilst some may be embarrassing, there’s nothing truly earth shattering there. Given the fanfare preceding the release of these documents, we here at Freedom4All are quite disappointed. 

The media, Governments and Wikileaks themselves have been telling us for quite some time just how secret and damaging these docs could be. In reality, we’ve got a small collection of trivia ready for the next pub quiz! 

It’s important to have organisations monitoring the behaviour of Governments around the world, and Wikileaks was once invaluable for that. However, given the blatant self-promotion leading up to this anti-climax, we can’t help but wonder if Wikileaks should hang up their hats. 

Cryptome has been running for far longer, doesn’t participate in flagrant self-promotion, and is far more responsible about the content it publishes (redacting, for example, the names of informants in Afghanistan). 

For those wishing to trawl through the Wikileaks docs, you can find them here. 


Fisher Hargreaves Proctor Suffer Security Breach

Property Professionals Fisher Hargreaves Proctor have e-mailed clients and customers alike to warn that their site has suffered a security breach. Unidentified miscreants managed to gain access to the FHP user database. These details were then publicly posted on the internet.

The FHP website is maintained and run by Reach Marketing, who were (according to the e-mail) instructed to take down the site hosting the compromised credentials. This suggests that the attackers hosted the details with Reach Marketing, so it's quite possible that the attack was the result of a weakness in RM's website management system.

This, however, is conjecture as I can find nothing to suggest that Reach Marketing actually host websites. I'll be sending an e-mail to try and find out exactly what has happened.

FHP, for their part, have generated new random passwords for all their users and apologised for the inconvenience.

However, even with the few facts available, one question stands out: why was this allowed to happen?

Security breaches happen, and I'm not holding either FHP or RM responsible for the breach itself. What concerns me is that the attackers were able to retrieve passwords. There's no apparent reason why FHP would need to store passwords in plaintext; they don't need to log into external services on the users' behalf. Indeed, the FHP password appears to do nothing more than grant access to the site. So why wasn't it stored as a hash?
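For the curious, storing a hash means keeping only a one-way transformation of the password, ideally salted and deliberately slow. A minimal sketch using PBKDF2, one common choice (bcrypt and scrypt are alternatives):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Return (salt, digest) for storage; the plaintext is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

Had the database held salt/digest pairs instead of plaintext, the attackers would have obtained a list of hashes to grind through rather than ready-to-use credentials.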


Read more…

The Software Evaluation and Testing Process


Verification, Validation and Testing has become an intrinsic part of the Software Development Life Cycle. The importance of testing throughout an application's development has been recognised, with most companies now designing their processes around the 'V' model.

Recognition of the importance of formal testing and planning is a relatively recent development in comparison to the art of programming itself. Not so many years ago, software was designed on the back of a cigarette packet in the pub and then implemented; there was little to no documentation and very little structured testing.
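To make 'structured testing' concrete, here is a minimal example of the kind of test case the V model ties back to the design stage. The discount() function is purely hypothetical; the point is that expected behaviour, boundaries and error handling are all specified explicitly:

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTests(unittest.TestCase):
    def test_typical_case(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_boundaries(self):
        self.assertEqual(discount(100.0, 0), 100.0)
        self.assertEqual(discount(100.0, 100), 0.0)

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            discount(100.0, 120)
```

Run with `python -m unittest` against the containing file. Each test maps back to a statement in the specification, which is precisely the traceability the V model formalises.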

Read more…

A Stark Reminder

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

I begin with some bad news. The country singer Willie Nelson has been arrested at the age of 77 for possession of Cannabis. Six ounces of Cannabis were found on his tour bus when it pulled into a police checkpoint in Texas; the singer was bailed for £1,500. You can read (a little) more about this at the Telegraph. 

This, however, is not what this article is about. Readers will note that I have been elated at the effect that Cannabis has had on my life in recent months; so much so, in fact, that I believe I began to forget just how difficult life had been prior to my use of this fantastic plant. 

Unfortunately, earlier this week, I was given a stark reminder of just how difficult life was. 

Read more…

Cannabis: Social Harm or Social Good?

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

We are told that the Misuse of Drugs Act exists in order to protect society from the ill effects of certain drugs. In fact, there is reason to believe that the original legislation in 1924 was motivated more by commercial interests than humanitarian ones, especially given that in 1894 the British and Indian Hemp Commission decided against prohibition, stating that social use of cannabis was acceptable. For the purposes of this article, however, we’ll give the Government the benefit of the doubt. 

The Government often reminds us that drugs are harmful and constitute a real and present danger to society. This is clearly a very bold statement to make, but does it apply to every use case? 

The Government would certainly like us to believe that it does, but my experience would suggest that the truth is a little different. 

Read more…

Religious Group harnesses power of Facebook to ban gig

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

The Catholic Group ‘Catholics Taking Action’ have successfully used Facebook to oppose the ‘Black Metal’ festival scheduled to run in Sydney, Australia next weekend. 


The Facebook page states that a ‘satanic festival’ is scheduled to take place and that it is ‘disgraceful’. 

The advertising which features the insignia of the Church of Satan and an inverted crucifix is encouraging people to come and partake in an “unholy spell to be cast upon the city of Sydney” featuring the “ultimate of soul possessing occult revelations...unbridled blasphemy... [and] a union of all things unholy”. 

The group has elicited such support that the host – the Returned & Services League – has withdrawn its support for the event. The event’s promoters have written on forums here that the RSL has pulled out. 

They have blatantly cancelled with no prior notice and with no option of negotiation or compromise. They have sadly bowed under pressure from Christians who have lobbied them to cancel the show 

The posting notes the timing of the protest, which, due to its lateness, has left the organisers little time to seek out an alternative venue. The planned venue has been used in the past for numerous alternative gigs, and one can easily see why the promoters are disgruntled. 

Supporters of the event have, unsurprisingly, set up their own Facebook page.

Is it right?

We all have the right to hold our own beliefs and opinions, and we all reserve the right to be offended by what others say and do. Do we have the right, however, to control what others do? 

Do we have the right to prevent others from participating in a relatively harmless activity? Can we afford ourselves such a level of arrogance that we may hold our own beliefs above those of everyone around us? 

This is exactly what ‘Catholics Taking Action’ have chosen to do. Because they believe that their religion is the only true way, they have acted to prevent others from attending a gig that they believe to be ‘satanic’. The event’s organisers may have referred to satanism in their advertising, but this matters little: 

Do we believe it right to try and prevent the building of mosques in our communities (and we’re not saying it doesn’t happen)? Do we believe it tolerant to try and keep Jews out of our community? 

So why is it considered right for a group to try and disrupt a legal event simply because they find it objectionable? The actions of Catholics Taking Action are nothing but censorship. As such, they should not be tolerated. 

Everyone is free to hold their own beliefs and opinions, but no-one has the right to try and enforce those beliefs on any other. 


The Link Between Cannabis and Increased Crime Rates

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

One of the ‘weapons’ wielded against Cannabis is the ‘link’ to crime. Supporters of prohibition argue that Cannabis-related crime is increasing, a claim often supported by statistics. 

But there’s a causal link here that is often omitted: the very prohibition that the supporters are attempting to justify. 

I can quite honestly and accurately state that Cannabis-related crime rose dramatically in 1971. Why? Was there a sudden outbreak of crime in society? 

Or could it be that, before 1971, cultivation and possession of Cannabis were not crimes per se? So although the statistics would suggest that Cannabis use became more prevalent and damaging in the years following 1971, the reality is quite different.

Read more…

Fitwatch: Did the Police Overstep their Authority?

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

Few will have missed the news that the site Fitwatch was taken offline at the request of the police. The site has called it a “pathetic attempt” and many have criticised the move as censorship. The Police, for their part, claimed that the site was “attempting to pervert the course of Justice”. 

So which argument is correct? Did Fitwatch slip up, or have the Police attempted to suppress a critical voice? 



There’s little doubt that Fitwatch crossed a line in their post. Not only did they advise participants of the Millbank protest to discard their clothes, they also hinted that lying could be a good defence. 

DONT assume that because you can identify yourself in a video, a judge will be able to as well. ‘That isn’t me’ has got many a person off before now. 

If you accept that this is an incitement to perjury, the authors have clearly strayed from the safe path of “offering legal advice”. The statement seems pretty self-explanatory, so we’ll assume that Fitwatch was indeed advising participants to lie under oath. 

The site has also advised readers to change their appearance in order to help evade detection. At worst, this is a grey area. It would be a very different society if you could be arrested for telling someone to shave their head! 

DO think about changing your appearance. Perhaps now is a good time for a make-over. Get a haircut and colour, grow a beard, wear glasses. It isn’t a guarantee, but may help throw them off the scent. 


The Police 

We’ll give the Police the benefit of the doubt and assume that the content Fitwatch posted was actually “attempting to pervert the course of justice”. 

Even with this concession, did the Police overstep their bounds? Why were they able to have Fitwatch taken down without recourse to a court? 


A Complicated Matter

This is where the issue starts to get complicated. In asking the hosting provider to take Fitwatch offline, the Police did nothing (legally) wrong. The Hosting provider did not need to comply, and could have refused to suspend the account until they were issued with a court order. 

The problem, for the webhost, is one of publicity. Violent protests have been rebranded by the Police as “Domestic Extremism”, a phrase which in most minds surely conjures up images of terrorists. As the webhost, would you want to be portrayed as supporting “Domestic Extremism”? 

Ultimately, it’s up to the webhost to decide who they do business with. If they suspect criminal activity, they are free to suspend the account and report it. Some will say that JustHost were a little overenthusiastic in this case; others will say they were justified. The fact remains that it’s entirely their choice. 


The problem lies elsewhere

The cause of the problem lies in UK law. The Police should not be allowed to request that a site be taken offline at all. Procedures exist to have a ‘takedown’ ordered by the Judiciary; this should be the only recourse available to any arm of the Government. Any other arrangement poses too great a risk of abuse. 

Whilst Freedom of Speech must be protected, it would be naive not to admit that true Freedom of Speech does not exist (as such) in the UK. As a society we tolerate certain restrictions for a variety of reasons, but the key difference between those and the Met’s actions is highlighted in this example: 

In the UK you are perfectly at liberty to shout “FIRE” in a crowded theatre. We all accept, however, that we must take responsibility for the consequences of doing so. This is the level of Freedom of Speech that we enjoy in the UK: there are certain things that we understand should not be said, but nothing actually prevents us from saying them (and yes, there are exceptions). 

What the Met tried to do is prevent Fitwatch from saying what they wanted. Or, to be more accurate, they tried to prevent you from ‘listening’. 

This cannot be tolerated: the Police cannot and should not have the power to arbitrarily restrict the individual right to free speech. As we saw above, it’s quite possible that Fitwatch did cross a line, but it should not be within the Police’s power to act against this directly. They should be required to ask a court to order the takedown. 

When a court makes a decision, it is a matter of public record. If the Police make such a decision there’s no guarantee that the reasoning will be fully recorded. 



Although Fitwatch probably crossed the line between legal advice and encouraging illegal actions, the action taken by the Police is less than exemplary. What’s to stop a conservative Police officer from requesting the takedown of a website he disagrees with? How is the webhost to know whether the request is one which should be enacted? Far more accountability is available when the decision is made by a court. 

In reality, all the effort put into this by the Police has achieved but one thing: opening them up to criticism. Fitwatch is back, on new webhosting, and is carrying further details of the move here. 


What are ‘Cultural and Historical Reasons’?

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

The Home Office have been known to justify the wide availability of alcohol as being for ‘Cultural and Historical reasons’. In fact, this seems to be their default response as to why alcohol is legal and cannabis is not. 

But what do they actually mean? Is their position tenable? 

The simple translation is that “too many people do it to ban it”. Funnily enough, we saw this same position when the smoking ban was imposed, but when used to argue against the legalisation of Cannabis it’s nothing more than a red herring. 

Ask yourself this: you’re caught doing 36 in a 30mph zone. Would a court accept your argument that it’s OK because so many people do it? Erm, no. 

In fact, can you think of a single thing you could effectively defend by saying “well, loads of other people do it”? Personally, I can’t. The argument seems to be reserved exclusively for the Home Office. 

Let’s assume, for a minute, that it is an acceptable argument. It follows, then, that if a lot of people smoke cannabis it should be decriminalised. Except, of course, the Government have thought of that. By attempting to enforce strict prohibition, the Government have ensured that not all users will be willing to stand and be counted. Although I’m open about my Cannabis use, I have to remain anonymous to protect my loved ones. I don’t doubt that many others feel the same way. 

So, while Cannabis remains illegal, it is impossible to know how many people actively use it. The Government has deliberately blinded itself to this so that it does not need to reverse its prohibitionist stance.


What can be done? 

The only way to effect change is to ensure we can stand and be counted another way. Only by educating each individual MP as to the failure of the Government’s prohibition can we show that enough people support the decriminalisation of cannabis. This is why I regularly call for you to write to your MP. The more letters they receive, the sooner the Government will recognise that we do not support their ‘war on drugs’ or the harm it has caused. 


Isn’t it funny how we hear calls for referendums on the smallest thing, yet the Government doesn’t dare to ask the populace if we think their prohibition is working? 


Will Anonymity ever be impossible on the Internet?

We all value our privacy, and many of us seek to remain anonymous. We're told time and time again that the development of interception technologies, combined with increasing government oversight, means that anonymity on the internet will soon be a thing of the past.

Is this true though? Are we moving towards a day when everything we do online will be monitored and scrutinised by government agencies?

Some may be aware of my support for Freedom4All. At the request of the Freedom4All Admin Team, I designed a system for ensuring their anonymity whilst they access, run and manage Freedom4All.

Whilst I'll obviously not be divulging too much information, I'll provide a basic explanation of how the system works, and examine what future risks may lie in store.

We'll also be taking a quick look at the current privacy threat – commercial organisations.


Read more…

Understanding the Nigerian 411 Scam

Originally published to


The 411 scam is generally known to most of the world as advance-fee fraud, or a 419 scam. Traditionally, 419 scams originate from Nigeria, and the name refers to the section of the Nigerian criminal code under which such a crime falls. Quite when the term '411 scam' became popular is unclear, but the American Dialect Society has traced the term '419 fraud' back to 1992.

A 419 scam is a confidence trick intended to persuade the mark to part with money in order to reap greater rewards. The original 419 scams were sent by letter, telex and fax in the wake of the Nigerian oil crisis of the 1980s. The original marks were businessmen wishing to make money from illegal deals, but this soon expanded to include Western businessmen. With the advent of wider e-mail use came a new medium through which the scam could be executed, and the target audience grew to include the population as a whole.

Read more…

My Life Has Improved

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

Contrary to what the UK Government may tell you, cannabis does not ruin your life. 

In fact, I’m constantly reminded by friends and family how much better and happier I am than I have been for a very long time. Why haven’t I been happy in the past? I was in constant, unbearable pain and lived every second of my life under the influence of multiple prescription medications. 

Read more…

Inconsistencies in advice

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

Whilst watching “Desperate Housewives”, of all things, I was made aware of another legitimate medical use for cannabis. The show is set in “The eagle state” – Mississippi – where, amongst other things, cannabis is prescribed to treat depression. 

In reality, I can’t find anything to verify this use in Mississippi, but it is apparently used to treat depression elsewhere. 

Now, depression is no small thing. I’ve both seen and experienced it, and it’s dangerous and unpleasant. What I don’t understand, however, is the vast difference in state recognition of the medical value of cannabis. 

Read more…

Professor Nutt Confirms What We Already Knew

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

We’ve all said it time and time again; in fact, the only people in the dark seem to be the UK Government. Alcohol is one of the most dangerous substances available. 

Professor Nutt, of ACMD fame, has confirmed it: alcohol has a serious detrimental effect on both the user and society as a whole. 

So why is it that alcohol is legal and other substances aren’t? As we’ve discussed in the past there is no provision in the Misuse of Drugs Act to arbitrarily exclude a substance from the controls implemented by the legislation. 

The Home Office, after much dilly-dallying, have justified the juxtaposition as being due to “historical and cultural” reasons. This already seemed to be a tacit admission that there’s no science behind the Government’s war on drugs. This latest news pretty much sets it in concrete. 

Read more here. Be sure to note the huge difference between harm caused (to others and the user) by alcohol and that caused by Cannabis. 


A very belated reply to FeMail

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

OK, in 2007 the Daily Mail’s Sally Emerson wrote a piece slating a Suffolk mother for ensuring that her children had a clean, safe supply of Cannabis. 

Now I know 3 years have passed, but the article is so full of lies, damned lies and statistics that I felt it would be a great opportunity to highlight some of the propaganda used by the Government and the sensationalist media. 

Read more…

Fallacies P3 – The Home Office are Protecting Our Children

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

This post was part of a series called Fallacies and can now be found under the tag Fallacies

Earlier in the series, we looked at claims that the Home Office have credibility when debating drugs, and also at claims that Cannabis is a dangerous substance. 


In this instalment we will look at the myth that the current political position on medical cannabis is what’s required to protect our children from harm. 

The easiest way to highlight the fallacy within the suggestion that it’s all about protecting children is by example, so you’ll have to forgive me whilst I cross national borders with the examples I use. It isn’t that there aren’t sufficient examples in the UK, simply that some examples highlight the issue far better. 

Let’s begin with the now-clichéd phrase ‘think of the children‘. The phrase shot to popularity after it became the catchphrase of Helen Lovejoy in The Simpsons. Generally, if you hear or read the phrase “won’t somebody please think of the children”, the speaker or author is sarcastically referring to the stance of another. 

The phrase is often used in relation to Daily Mail articles (though not by the readership themselves). 

So, hopefully it’s clear in your mind already that crying “think of the children” is something of a running joke. The question is, why? After all, our children do need protecting from a great many things. 

The problem is, our children have been used as a political tool far too many times. A favourite trick of politicians is to spin an argument so that it appears to be about protecting children. Talk of a nation’s young is very emotive, and few will feel comfortable trying to argue against ‘saving’ the youth. 

For example: in 2006, the South Dakota Attorney General received a ballot measure entitled “An act to provide safe access to medical marijuana for certain qualified persons”. Although his duty is to report neutrally on any ballot that he receives, he opted to change the name of the measure to “An Initiative to authorize marijuana use for adults and children with specified medical conditions.” 

In a transparent attempt to block the bill’s passage, the AG used the most emotive argument available – children. Anyone who has actually read the ballot would understand that the suggested controls are so tight that there is no additional risk of harm to children. Unfortunately, the bill had to get past someone who wanted to score political points. 

Well, for his troubles, AG Larry Long found himself on the receiving end of a lawsuit. You can read more on this here and here. 

If you’d like a more local example, cast your mind back a year or so to the UK under Labour’s leadership. Who remembers the creation of the hugely unpopular Enhanced Vetting Scheme? At one point, it appeared that anyone in regular organised contact with children would need to undergo the new vetting scheme. This included parents who took turns collecting the kids from school with another child’s parent. 

Although this aggrieved a great many people, including civil liberties groups, we were told it was because of the risk of paedophiles. In fact, based on the publicity it received, you could be forgiven for thinking that every second adult was a paedo’. Once again, the argument used was ‘think of the children‘ and anyone opposing the measures ran the risk of being considered pro-paedophile. 

A year on, the measures have been scrapped. Do we feel that our kids are any less safe now? No. 

The nation was simply taken in by an emotive argument, albeit one which was bolstered by a lot of media activity. 

Although I hate using the war on paedophilia as an example, it is the most recent source of examples. Although I believe our politicians and media have lied, bluffed and invented to meet their own agendas, I’m definitely not pro-paedo. There is a problem, and it needs solving, but as with many other things our politicians just get in their own damn way. 

Some may recall the (fairly) recent argument between the head of the child protection agency CEOP – Jim Gamble – and the social networking site Facebook. 

For those that don’t, here’s a quick summary: 

Mr Gamble wanted Facebook to embed CEOP’s ‘Panic Button’ into their website in order to protect minors from grooming. The idea being that if a child suspects that something is wrong, they can hit the ‘Panic Button’ and the case will be referred directly to CEOP. 

On the face of it, it’s a sensible way to help protect children. Unfortunately, because Mr Gamble relied on emotive arguments to further his agenda, he rather undermined his own position. You see, Facebook have their own system allowing users to report others, but this wasn’t sufficient for Mr Gamble. 

The next thing we knew, the mother of murdered Ashleigh Hall was in the papers complaining about Facebook’s failure to install the Panic Button. Andrea Hall has been through something that no parent should ever have to go through, and she has my deepest sympathies, but she, Mr Gamble and the papers all missed a very important point. A point that everyone else picked up on. 

The CEOP Panic Button would work like this: 

  1. Minor logs into Facebook
  2. Minor reads/receives something from another user
  3. Minor feels uncomfortable/suspicious
  4. Minor clicks the CEOP Panic Button

Now, in Ashleigh Hall’s unfortunate case, steps 3 and 4 would never have happened. She believed she was talking to a 17-year-old boy (and didn’t believe anything was wrong), and so would not have clicked the Panic Button. 

As horrific as her story is, Andrea Hall was not the right person to use for this agenda. Not only did it undermine Mr Gamble’s argument, it unnecessarily put Andrea Hall back into the media spotlight. Mr Gamble’s misguided attempt to further his own agenda led to a mother in mourning being forced to undergo public scrutiny once again. 

The use of emotive arguments is a very dangerous game: not only can they impede progress but, as in Mr Gamble’s case, they can seriously undermine your position. However well intentioned you may appear to be, if it seems that you don’t actually understand the subject matter, no-one will be willing to listen to you. 


Back to Cannabis 

As rich a resource as it may be, I don’t wish to talk any further about the various anti-paedo attempts in the UK, as the horrific stories involved make me feel physically sick. All I’ll say from here on out is that my deepest commiserations go to anyone affected in any way, whether through abuse or murder; I wish you the very best. 

OK, so we’re back on topic. 

So the Home Office claim that by criminalising anyone using, possessing or cultivating cannabis they are protecting our children. The problem is that the facts just don’t withstand proper scrutiny. 

Alcohol abuse has been quite a concern recently, with various measures implemented in order to protect children (and, of course, to some extent adults). We all know that you require a license to sell alcohol (actually, this is oversimplifying the matter). But are you aware of the draconian steps taken in the name of ‘protecting children’? 

Imagine that you work in a licensed establishment (whether a pub, Tesco’s or anywhere else); you may or may not hold a personal license, depending upon your role within the organisation. On your average Friday, you’ll have at least a few minors try to purchase alcohol. 

You’re normally pretty strict, but today you make a mistake. That customer you thought was around 25 was actually 17 (and there are more than a few who look it), and was part of a trading standards ‘test purchase’. Despite your efforts in the past, you receive an on-the-spot £80 fine. 

If you’re a license holder (probably the Designated Premises Supervisor) you could get a much bigger fine and/or 6 months in prison. Not to mention the loss of earnings if you lose your license. 

Why? You didn’t set out to break the law; it was the kid who tried to purchase it from you who did. Do they get punished (even if they’re not part of a ‘test purchase’)? No. In fact, where I live, kids caught with alcohol get a single punishment – they have to watch whilst the copper pours it down the nearest drain. The unwitting supplier, however, potentially faces a criminal record.

So a law designed to ‘protect the children’ does nothing but criminalise those who make a single mistake. We are all human, we’re not infallible, and age is a very subjective thing to guess. In response to the draconian efforts of the law, anyone fortunate enough to look under 25 (or 30 in some places) is required to provide ID. 

Make no mistake, it’s not the fault of the retailers, it’s because the UK Government has absolutely no idea how to deal with problems. Yes, there should be a minimum age and yes retailers who regularly sell to minors should lose their license, but the current system just does not work. 

Which is exactly the situation we are in with Cannabis prohibition. The Government’s current stance does nothing but criminalise anyone involved with the substance. They don’t care whether you use it recreationally, as pain relief, to offset the symptoms of chemo, or as a method of treating and preventing cancer. If you use it, you are a criminal. If your spouse grows it, you are a criminal. 

This cannot continue. Cannabis has been used medicinally for over 4,000 years and is the only effective medication available to some. It’s certainly far safer than many of the regularly prescribed medications available on the NHS, but the Government continue to bolster their argument by asking us to ‘think of the children‘. 

Ask yourself: did you experiment when you were younger? Would you be at all surprised that any teenager experimented? Now here are some interesting facts, all of them an issue solely because the Government believes that prohibition is the answer: 


Hash/Solid/Soap Bar/Cannabis Resin 

Regularly ‘cut’ with other chemicals (including boot polish) to raise weight and thus profit 

Weed/Green/Bud/Cannabis Leaves 

Regularly sprayed with glass (a carcinogen) in order to raise weight and thus profit 


So, if we accept that some teenagers will always experiment, how exactly is the above protecting them? Because the Government tries to enforce strict prohibition, it is left to the black market to supply Cannabis. In order to maximise their profit, they ‘cut’ their product with anything that will increase the weight. The end user gets to pay Cannabis prices for boot polish/glass. 


If you truly want to protect your kids, you need to do two things. Educate them about the risks of over-indulgence (which applies to everything) and ensure that there’s a clean supply of anything they might try. 


Our Kids are Our Future 

It’s often said that kids are our future, and it’s true. So why is it, that we are giving them such a poor start in life? Under the current regime, if a minor is caught in possession of Cannabis, they could end up with a criminal record. Suddenly, they are unable to get a job, and so pursue a life of crime.  

Make no mistake about it, this is not the fault of Cannabis. It was the Government’s decision to criminalise anyone using this natural substance, and it is they who have ensured that anyone ‘experimenting’ loses their future. 



I hope that, in amongst my ramblings, I have shown that when Governments claim to be ‘protecting the children‘, it’s usually a sign of either an unsustainable political agenda or a warning of future incompetence. In the UK we are criminalising our children and destroying their futures, all in the name of protecting them. 

In doing so, we deprive those who can benefit from medicinal cannabis of a safe and effective medication. 

From my point of view, prohibition is doing more harm than good and it is time that it was repealed. 

I’ll leave you with an interesting quote. The UK Government has claimed that its hands are tied by the UN Single Convention on Narcotic Drugs. However, within the preamble of the convention is the following: 

the medical use of narcotic drugs continues to be indispensable for the relief of pain and suffering and that adequate provision must be made to ensure the availability of narcotic drugs for such purposes 

Previous Article: Fallacies: Cannabis is a Truly Terrible, Nasty Drug 

Next Article: Cannabis is of no medical benefit. 


Fallacies: Cannabis is a truly terrible, dangerous nasty drug – Part 2 of Series

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

This post was part of a series called Fallacies and can now be found under the tag Fallacies

In my last instalment we began disproving some of the myths that prohibitionists use to justify their stance on the medical use of Cannabis. 

In this article, we’ll examine the fallacy that Cannabis is an abhorrent drug. 

Cannabis is a Gateway Drug 

Many claim that Cannabis is a gateway drug, meaning that it leads to abuse of ‘harder’ substances. This is simply not true; the only correlation between Cannabis and harder drugs is that those who smoke Cannabis are more likely to have easy access to harder drugs. 

In fact, US researchers have claimed that the illegality of Cannabis has actually worsened abuse of harder substances. Because so much money, time and effort is being wasted on enforcing Cannabis prohibition, there are fewer resources available to police the more dangerous drugs. 


Cannabis is Dangerous 

Cannabis is one of the most widely used drugs in the UK. All substances pose a risk if not consumed responsibly, so it’s less than surprising that a number of people do experience some side effects. However, when you consider the number of Cannabis smokers in the UK, surely there should be a much higher level of Cannabis-related illness and hospitalisation? Unless, of course, Cannabis is not as dangerous as the media would have you believe. 

Cannabis was successfully used in medicine for thousands of years prior to its prohibition in the 20th century. Scientists are currently unsure whether or not smoking Cannabis increases your risk of cancer compared to smoking tobacco alone. They believe there may be a slightly increased risk, but only because Cannabis smokers tend to hold the smoke in for longer to get a better ‘hit’. 

It is believed that overuse can lead to psychosis, but then overuse of other substances can also lead to ill effects. Overuse of alcohol, for example, can lead to serious damage to major organs. The old adage of “a little of what you fancy” applies to most things in our lives, including fast food. 

It’s also worth noting that use of Cannabis does not cause any mental conditions; it simply exacerbates something that was already there. So if a user develops psychosis, it’s because there was already a trace of psychosis present. Whether or not the condition would have developed if the user hadn’t been using Cannabis is something that no-one can ever empirically answer. 

Many of the prescription medicines used in place of Cannabis carry far greater risk of damage to the human body than Cannabis does. Many medicines can lead to liver damage, increased risk of heart attack and in some cases an increased risk of cancer. 


Cannabis use increases crime

There are three main types of crime that people often associate with Cannabis – violent crime, organised crime and theft. Let’s begin with violent crime. 


Violent Crime 

It’s believed by many that those who smoke Cannabis are more likely to go out looking for a fight. Whether in a dark alley or at a club or pub, people assume that if you are stoned you are more likely to attack them. 

This, quite simply, is false. Use of Cannabis results in a relaxed euphoria. Yes, you’re ‘high’, but expending energy is the very last thing you want to do; most will be quite happy slumped in front of the television. Very few, if any, will feel inclined to go out, much less go out looking for a fight. 


Organised Crime 

The Home Office regularly reminds us that by purchasing illegal substances we are funding criminal gangs, terrorists and all sorts of other nasty groups. Sadly, this is true at the moment (though not every dealer has these links, obviously). What the Home Office forget to mention, however, is that this status quo exists purely because the drugs are illegal. 

If the drugs were legalised (whether across the board, or simply for medicinal use), I could purchase my medicine legally. Some of my money would (presumably) go to the Government as tax, some to the supplier and some to the NHS. At no point would criminal gangs be involved in supplying the medicine that I need to live my life. 

Some would argue that the criminal element will always exist, using counterfeit alcohol and smuggled tobacco as examples. It’s true that criminal gangs continue to produce counterfeit alcohol and tobacco, but when there is a legal market for the true product, they command much less control. 

Those buying directly from the criminal element must surely be aware of the risks. The product is untested and unverified, has been made to the lowest possible cost and could be dangerous to consume. Whilst there is a legal market for the product, why would you buy from a criminal gang? 

It would be the same with Cannabis. Because of the Home Office’s prejudicial stance, I am left with no protection whatsoever. If I purchase Cannabis from a dealer, I’ve no way of knowing exactly what I am buying. Cannabis resin (hash) is often mixed with boot polish (or similar) to increase the weight, and Cannabis bud/leaves are often sprayed with glass. In both cases this dangerous practice happens for one reason, and one alone – to command a greater profit. 

If the Home Office were to reverse their stance and create a legal and controlled supply chain, many of these problems would go away. 



Theft 

Use of illegal substances is often associated with an increased rate of theft. Sadly, the two do correlate, but this does not tell the whole story. The assumption is as follows: 

  1. User gets hooked to illegal substance A
  2. User can’t afford to buy any
  3. User steals to fund habit
  4. Rinse and repeat

The reality, however, is that whether or not a substance is illegal has no bearing on this pattern. Alcoholics are using a legal substance, but many still have to steal to feed their habit. The point is that this is symptomatic of a much larger problem and does not relate solely to illegal substances. 

In fact, if the Government were to legalise Cannabis, there’s a good chance that the number of Cannabis related thefts would go down. If Cannabis were legally available, it would command a much lower price than it does now and so addicts would be more able to afford it. Those who are addicted to it could receive help from the NHS, including controlled prescriptions of the drug. 

Much like we do now with prescription meds in fact. 

It’s also important to bear in mind that, though some do get addicted, Cannabis is not nearly as addictive as it is portrayed to be. 



I hope that I have adequately outlined the fallacies surrounding Cannabis, and that the reader has been left with a better understanding of the propaganda war waged by the media and the Home Office against a very viable form of medication. The reality is that very little information about Cannabis originating within the UK can actually be trusted. Those interested in the truth about this plant need to look further afield: there have been numerous studies worldwide, all of which have been largely ignored by the Home Office because they don’t suit the political agenda. 

As I hope this series will highlight, the UK Government is knowingly and deliberately denying its citizens access to a viable, safe and effective medication. In order to police their prohibition, they chose to disregard Article 8 of the UDHR, and force those in need to try to survive with a lower standard of living than could be achieved through use of Cannabis (Article 13). 


Take Action 

Please give your support, contact your MP and demand to know why the Government believes it right to deny effective medication to those who need it most. 

Previous in series: Pharmaceutical Mistakes of the Past 

Next in series: Fallacies: The Home Office are protecting our children 


Fallacies: Pharmaceutical Mistakes of the Past – Part 1 of Series

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

This post was part of a series called Fallacies and can now be found under the tag Fallacies


The UK Government steadfastly denies the medical benefits of Cannabis. They’ve classified it as a Schedule 1 drug – “of no medical benefit” – and resolutely try to ignore scientific evidence to the contrary. 

Surely, you think, the Government must know something that we do not. After all, the Home Office exists to protect the taxpayer from all the nasties that threaten our lifestyles. In this series of articles, we’ll be debunking the myth that the Home Office have any justification for their stance. 

We’ll start with the easy fallacies and then work through the following: 

    • The Government has any credibility when it comes to judgement of Drugs
    • Cannabis is a truly terrible, dangerous nasty drug
    • The Home Office are protecting our children
    • Cannabis is ‘Of no medical benefit‘
    • The current stance is based on scientific data
    • The Home Office are doing this for the ‘greater good’ 

I imagine those who have pursued this cause for a while are probably having a good laugh at some of these ‘truths’. Since my ‘discovery’ of the benefits of Cannabis, I’ve had quite a few conversations with various people. Some of the above have been suggested, although most agreed that I have a right to use whatever medication works best. 

So, let us begin: 


Read more…

Why does the UK Government refuse to help reduce the suffering of many?

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

We here at Freedom4All feel that many will agree with us when we say that pain and suffering should be reduced in whatever way possible. Why is it, then, that our politicians seem unwilling to explore certain means of reducing this suffering? 

Our guest writer has offered to share his story to highlight the impact of this political decision: 


Our politicians seem more than willing to go to war, or even to discuss assisted suicide, to help reduce or prevent unnecessary suffering. Yet certain, less severe, avenues are deemed completely taboo. 

Many people around the world live their lives in constant pain, reliant on a plethora of medications in their attempts to keep their lives bearable. Some of these unfortunate people were born with horrific disabilities, others develop conditions such as MS, the final group (including me) suffer daily as the result of a single incident earlier in their lives. 

As others have noted, it is very difficult to explain the daily battle that constitutes our lives without sounding like you are trying to evoke pity. That is not my aim; this is simply my (inadequate) attempt to help the reader understand the impact the UK’s legislation has on my life. 

I’m a guest writer on Freedom4All, and I hope to help foster debate on an issue that is very important to me – Freedom of Choice. 



My story begins when I was 17. Like most teenagers, I thought I was (more or less) invincible. I was riding my motorcycle to work, and wanted to try to see my girlfriend on the way. So despite a speed limit of 30mph, I was riding at nearly double that. 

As I approached a roundabout, in my hurry to reach my girlfriend before the bus did, I failed to reduce my speed as much as I should have. On any other roundabout of that size, entering at 50mph would have posed little risk (assuming there was no other traffic). Unfortunately, this particular roundabout was on a main bus route, so it was less than surprising when my motorcycle lost its grip on the road as the result of a diesel slick. 

The bike turned sideways before it had cleared the slick and as the wheels once again gained traction the bike attempted a barrel-roll, taking me with it. I landed just before the bike, and slid along the road until my head hit the kerb. 

The injuries I sustained, at the time, seemed less than severe. My right knee was very painful, my palms were grazed (despite being protected by a good pair of leather gloves) and my head hurt. A kind onlooker gave me a lift home in his car, as I was clearly unable to proceed to work. 

That night, my father drove me to the hospital, where I complained that I could hardly move my right leg. The hospital questioned me about my accident and proceeded to check for signs of concussion. My leg was not examined and I was simply told to rest it for at least a fortnight. 

I’ll pause my story here to defend the hospital; 

A head injury is potentially the most serious and most likely injury caused by a motorcycle-involved RTA, and any bruising or swelling of the limbs is to be expected and would normally subside after a period of rest. 

Unfortunately, my right knee has never recovered. Indeed, it’s only become worse with time. That night was the last time I was even able to attempt to attend a shift for that job. I was simply physically incapable of completing my duties. 

Many of my previous hobbies became too difficult: I lacked the strength to skateboard or to run (even walking is difficult). 

Some seven years on (and a few trips and falls), I now walk with a stick. My knee hurts even when I’m not moving, and the NHS has been able to offer very little in the way of effective pain relief. I’ve tried a variety of drugs, none of which have effectively reduced the pain. The most effective drug I’ve been prescribed is an opiate, but I’ve become physically dependent (addicted) on it and so have had to begin the (incredibly unpleasant) process of weaning off it. 

One of the side-effects experienced by users of this drug is insomnia. It disturbs your sleep, in varying degrees, from the first moment you take it until the point that your body has finally got used to not having it any more. 

Many insomniacs will agree that the impact on your life is immeasurable: you slowly begin to go mad through lack of sleep. Your brain is too tired to function, and simple actions (like climbing the stairs to the toilet) seem like insurmountable obstacles. 

The NHS has provided me with various drugs to combat the insomnia, but none of them have worked. My (desk-job) work has suffered, my family have suffered and I’ve suffered, all through a fairly simple side-effect of my analgesia. 


The Taboo Treatment

So, driven half mad by pain and lack of sleep, I decided to take things into my own hands. 

Just like many others, I’d smoked Cannabis as a teenager but drifted away from that scene as I ‘matured’. In the UK, Cannabis is not even available for medical use: it’s a Class B drug, and so possession is sufficient to lead to a prison sentence. So here I was, unable to think clearly, wondering whether the potential benefits were worth the risk. 

Regardless of whether or not it worked, if I was caught in possession of the weed, my livelihood, family and even my liberty would be put at serious risk. After much consideration I obtained a very small amount. I hadn’t smoked it in years, and was very aware that it could react with the various medications I was on. So, I rolled a very small spliff and discreetly smoked it. 

Within minutes my head was absolutely spinning, so I crawled up the stairs and into bed. Almost as soon as my head touched the pillow, I was asleep. For the first time in years, I was asleep almost as soon as I was in bed! 

The next morning I woke up feeling more refreshed than I had in quite some time; it’s true that I was still tired, but little steps lead to the same place as huge bounds! 

As I got out of bed, I realised something: I hadn’t taken any of my analgesia yet, but my leg seemed pain free. Admittedly the joint was still largely immobile, but I was pain free even though my head was completely clear. 



In some ways, I wish my trial hadn’t worked. What a position to be in: knowing that there was a treatment that would allow my life to get back to normal(ish), but that being caught with that very treatment would be enough to turn my life upside down. Could I bring myself to risk sacrificing my job and my liberty for a pain-free life? My family had stuck beside me throughout; could I allow them to be exposed to the spotlight if I was caught using Cannabis? 

My mind went round in circles until recently, when I decided that my pain was too much – my misery was affecting those around me, those I care about. I had a single choice to make: the pain had to stop – either I fought it or ended it another way. As painful and miserable as my life was, suicide was not an option. I obtained some more Cannabis and haven’t looked back. I get to live the normal life I’d always imagined I would have one day. The government would denounce me as a criminal, and the media would use me as evidence that Britain has been overrun with ‘Druggies’, but my family and I would get to go back to normal. 



The one thing that really annoys me about the Government’s stance is the intense hypocrisy of it all. They tell us that ‘Drugs are bad’ yet it’s OK for me to take 6 ineffective tablets before I go to sleep (and more during the day), but resolving the problem with 1 spliff a day is wrong? 

The medicinal qualities of Cannabis have been recognised by many, from studies to stories like mine. Some Governments allow Cannabis to be used medicinally (a number of states in the US for example), yet the UK Government is unwilling to even entertain the idea. 


Prohibition Doesn't Work

Groups such as the ‘Legalise Cannabis Alliance’ argue that prohibition doesn’t work. I’m inclined to agree with them: the UK still has an abundance of cannabis smokers, the Government spends vast amounts on trying to enforce the Misuse of Drugs Act, and the only people making any money from the current situation are the ‘criminal’ dealers supplying their customers. 

I’d go even further than the LCA, in that I believe the current situation is actually responsible for the many issues surrounding Cannabis. For example, I recently had a ‘dealer’ scam me out of £80. I have absolutely no protection against this; I can’t go to the Police, and I’m not willing to add assault to my ‘crimes’. Clearly I was far too trusting, but if Cannabis were legal and controlled, this would not have occurred. 

The current suppliers are willing to ‘cut’ their product with unknown chemicals in order to maximise profit; smokers of hashish are often smoking boot polish as well as the expected Cannabis Resin. Inhaling burning boot polish clearly has greater health implications than the expected ‘product’. If the drug were legal and controlled, this would not be an issue. 

Supporters of Prohibition often argue that legalisation would see a huge increase in consumption. This may be true in the short term, but legalisation would also remove the mystery and ‘coolness’, so perhaps use would actually go down. Even if usage did go up, what would you prefer your kids did – smoke cannabis cut with unknown substances bought from a criminal gang, or smoke pure cannabis in a controlled environment? In either case, the number of cannabis smokers is statistically unlikely to rise dramatically in the long term. 

Unlike the LCA, I’m not arguing that Cannabis should be legalised across the board, but why is our Taxpayer-funded Health Service not allowed to dispense (or even recommend) this drug, even if it might help? 


Why do I have to break the law?

Why is it that our democratically elected Government feels that I have to break the law to live a normal life? They’ll discuss taboo subjects such as Euthanasia, they’ll quite happily worsen the drug situation in Britain by banning a substance before scientific evidence has even been fully collected (in this case the ‘related deaths’ were eventually ruled unrelated), yet they are not even willing to entertain the idea that Cannabis use could be a very effective form of analgesia. 

I’m not suggesting that the Government should legalise Cannabis across the board, or even that they should license it for medical use tomorrow. All I’m asking is that they actually conduct a proper scientific study into the possible benefits; why criminalise those who have no other alternative? 

Drugs have always been a taboo subject in the UK, but we should not allow political ideals to interfere with scientific fact. This is something the UK Government has never been good at: Professor Nutt lost his job because he dared to publicly state that the government’s legislation (on Cannabis) didn’t follow scientific findings. 



The current situation, much like the Victorian view on Sex, stems from a political unwillingness to talk about anything remotely Taboo. Successive Governments have tried to extend their reach into our lives, but remain unwilling to do the one thing that could help countless other people in my position. It is time the hypocrisy came to an end: the country is in the midst of an economic crisis, and yet we continue to waste money enforcing a law which could (possibly) be more successful if repealed. 


Take Action 

Please write to your MP today (even if, like me, you choose to do so anonymously). Tell them that you want freedom of choice. The NHS should be allowed to utilise any reasonable method to help its patients. Cannabis may be distasteful to some, but others find the use of Stem Cells very distasteful. Why is it OK to pander to one demographic on one issue, but not on the other? 

If you ever find yourself in constant pain (and I sincerely hope that you never will), wouldn’t you want to be able to use any available method to reduce that pain? I’ve chosen to fight the pain, and in doing so have been branded a criminal; how many would believe this to be fair? 





US shows refusal to respect European Privacy Requirements

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

The Obama Administration has unilaterally decided to ‘tear up’ the agreement negotiated with the EU on the monitoring of SWIFT transactions. 

Originally, the US set up secret monitoring of the International Financial system in order to combat terrorist funding and money laundering. When this covert monitoring came to light in 2006, Europe agreed to provide the information on request – subject to several conditions; 

  1. The minimum amount to monitor should be $10,000
  2. The request should be approved by Europol
  3. The data must be relevant to a specific terrorist investigation

However, Obama’s administration has decided it is not willing to co-operate, and that it will monitor any transaction it wants without seeking approval or permission. Europe was not notified of this, and was instead only made aware when the Washington Post stated that “transactions between European and US banks would be captured regardless of whether there is a substantiated need”. 

Unsurprisingly, the European Commission is highly unimpressed and is “requesting Clarification” from Washington. This rather docile term generally means “very p*ssed off” in the world of European Politics. 

A Dutch MEP met with the European Commission before releasing the following statement; 

We are all getting a bit tired of being taken by surprise all the time. The US is our friend and ally, so we shouldn’t be treated this way 

To add insult to injury, the US Government is once again labouring under the delusion that it owns the Internet.

The National Security Crowd want legislation passed requiring all services that enable communications — including encrypted email transmitters like BlackBerry, social networking websites and software that permits direct “peer to peer” messaging such as Skype — to be technically capable of complying if ordered by a court/Government Agency to impose a wire tap. 

The proposal has already been criticised by the Center for Democracy and Technology, who claim that the Government want to ‘turn back the clock’ and make the Internet work the way the Telephone system once did. 

London MEP – Sarah Ludford – echoed these concerns, stating; 

How the US chooses to snoop on people within its own borders is its own business. But there seems little point in struggling to reach transatlantic agreements on data transfer such as those on financial transactions (SWIFT/TFTP) and Passenger Name Records if the US is going to undermine them through constantly moving the goalposts. 

We need an overarching EU-US deal not only on privacy safeguards but also on the broad limits of what personal information will be sought by law enforcement agencies. Permanent mission creep is very destabilising of the trust necessary to reach long-term agreements. 

MEPs are fully supportive of necessary and proportionate efforts to catch major criminals. But the US must be stopped from trashing the international boundaries of privacy. The European Commission and EU governments must in particular make crystal clear – as they have so far failed to do – what the rules are on data-mining and profiling. 

These sentiments are likely to be echoed by Internet Users world wide. 

Contrary to popular belief, the Internet was not invented in the US. It does not belong to the US Government (indeed, the Domain Name Regulator ICANN has recently become fully independent of the US Gov), and the US has no god-given right to snoop on the communications of any individual. 

Whilst intelligence gathering may be essential for National Security Purposes, it is unacceptable that the US should try to pass a local law and then enforce that law on the rest of the world (sentiments, strangely enough, echoed by those concerned with overreaching copyright legislation). 

The US does not have the authority or the right to try and police the Internet. They never have, and probably never will, regardless of any legislation they may pass to the contrary. 

Protect your rights, write to your local MP/MEP and ensure that they are made aware of the liberties that the Obama Administration is taking. As the quotes show, the US is supposed to be an Ally, yet treats us with disdain. It is time that they were reminded that Europe is not part of the USA, their laws have no bearing on us and their Law Enforcement Agencies have no right to monitor our bank transfers or our communications. 

If this legislation should pass, we at Freedom4All will be among the first to publish articles on how to secure your communications. Privacy is a fundamental human right, and must be respected! 



Judge Rules: Privacy Controls on Facebook Insufficient

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

A US court has ruled that a woman who posted content to a restricted part of Facebook had “no reasonable expectation that it would remain private”. 

Facebook is often criticised for making too much information public, however on this occasion the woman – Kathleen Romano – had set her profile to be private. Despite this, a Judge has ruled that content previously posted to Ms Romano’s profile is admissible as evidence, even though it had never been publicly accessible and Ms Romano had deleted it from her account! 

The Judge has ordered that Facebook provide the content in question from their backups or archives. Although Facebook is often criticised for retaining deleted information, it is not yet clear whether they have retained the content in question. 

Although Ms Romano had claimed that she suffered permanent injuries and was largely confined to her home and bed, the defence contends that images posted to the social networking site (and later deleted) showed Ms Romano smiling outside her house. 

This case helps bring to light the serious privacy ramifications of both local laws and online services. 

The Facebook Privacy Policy is less than clear on how long data will be retained, even after the user believes they have deleted it. Particularly concerning is the caveat;

We cannot guarantee that only authorized persons will view your information. We cannot ensure that information you share on Facebook will not become publicly available. We are not responsible for third party circumvention of any privacy settings or security measures on Facebook.

Although many users are aware of Facebook’s numerous privacy failings, few will have realised how severely this could impact them. Whilst Facebook is arguably fully deserving of criticism when it comes to privacy, it is true that users must first opt to use Facebook. 

Local laws, however, are largely out of the control of the average individual. When a court is able to rule that a user “has no expectation of privacy”, despite the fact they went to some length to navigate Facebook’s many privacy settings, something is seriously wrong. 

The justification in this case is that the postings “may contradict claims she made about the injuries she sustained” and so can be compelled under New York Discovery Laws. 

We are not disputing the right of Companies/Insurers to protect themselves from potentially fraudulent claims, but the law simply cannot ignore the privacy of others. 

Although the image may appear to contradict the Plaintiff’s claims, it is highly circumstantial – Disabled people are not miserable all the time – and cannot therefore nullify the claim. How is it reasonable, then, that the Plaintiff’s privacy be ignored in favour of a (largely inconsequential) piece of evidence? 

It’s very easy to simply say “if you don’t want something made public, don’t post it to the Internet!”, but consider that the same could easily happen to you. Consider that the Judge has granted access to the Plaintiff’s “current and historical Facebook and MySpace pages and accounts, including all deleted pages and related information ... in all respects.” on the basis of the Defendant’s unsubstantiated claim that photos may bolster their case. 


Imagine if you were the plaintiff, the Judge has granted the Defendant full access to your information – perhaps you had an argument with an ex, or even an ‘adult’ conversation with your partner. The Defendant would have full access to this information, purely based on claims that a photo may support their argument. 


The problem here is twofold: 

  1. Local laws fail to adequately protect or respect the individuals right to privacy
  2. Facebook retains deleted data for too long

Facebook do have a business to run, and it’s simply not reasonable to expect them to purge deleted data from historic backups. But it may be reasonable to expect them to review how long they retain backups for – even with a Grandfather-Father-Son system, 90 days seems excessive. 

Local Laws should be changed, and courts should respect the privacy of users. The Judge in this case could easily have opted to view the requested content himself, before deciding whether it was relevant. Instead he granted the Defendant full and unfettered access. 

This is simply not acceptable. 

Unfortunately, this situation will never change unless politicians are reminded (again and again) that their constituents value their privacy. 

Wherever you are in the world, contact your Local Representative and remind them that you value Privacy above all else! 


Storage and Manipulation of Passwords: A Developer’s Guide

This content was originally published on in September 2010. You can see the original here


For many users, creating and entering passwords is an everyday occurrence. On today’s internet, very few services will allow access without some form of credential. Whether it’s internet banking or social networking, the user is required to enter a username and a password.

Although passwords have a number of weaknesses when compared to alternative methods (such as One Time Tokens), they continue to be the most common form of authentication. As a developer, it is highly likely that you will need to process and store passwords at some point.

The aim of this whitepaper is to look at the strengths and weaknesses of the various methods available. We will also look into the available methods of processing supplied credentials to establish whether to permit the user access to the system.

This paper is not intended to focus on any particular type of system, and the main body of information provided here should apply to any system, whether a web application or a local application. For convenience, we will assume that your application data is stored in a CSV-based database. In reality, the data can be stored using your preferred method.
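The whitepaper itself is linked below, but as a minimal sketch of the core technique it discusses — storing a salted, iterated hash rather than the password itself — the following shows one possible approach using only Python's standard library. The function names `hash_password` and `verify_password` are illustrative, not taken from the paper, and a production system would choose its iteration count based on current guidance.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash from a password using PBKDF2-HMAC-SHA256.

    A fresh random salt is generated when none is supplied, so two
    users with the same password still get different stored hashes.
    Returns (salt, digest) — both are what you would store, never
    the password itself.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Re-derive the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    # hmac.compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(digest, expected_digest)

# Example: store the salt and digest at registration, verify at login
salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The key design point the paper's approach relies on is that verification never requires recovering the original password: the server only ever compares derived hashes.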

Read more…

Thousands of bloggers silenced

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

The Internet is usually synonymous with Free Speech, but each and every one of its users (and publishers) is dependent on a myriad of organisations and companies. A failure, or abuse, at any one point can have severe implications for free speech. 

Such was the case with a site that hosted many thousands of personal and business blogs. Not long ago, the site was taken completely offline following “a notice of a critical nature from law enforcement officials”. The site had been home to over 70,000 blogs, but went dark as the company hosting the site – BurstNet – was informed that terrorist-related materials had been found on the server. This allegedly included an Al-Qaeda “hit list”, bomb making instructions, messages from Bin Laden, and links to other sites containing extreme material. 

As a result, the site was “terminated without any notification or explanation“. So in effect, inappropriate content on a few blogs was responsible for the sudden removal of around 70,000 innocent blogs. 

What’s even more concerning about this episode is the fact that the site was shut down not by order of a court, but at the decision of BurstNet. CNet are reporting that BurstNet were notified of the material by the FBI, but the FBI does not have the power to shut down websites. Only a court can order that a site be shut down. In fact, the FBI have even confirmed that they did not request the removal of any sites. 

So why exactly were all these sites taken offline? Speculation is currently running wild, with suggestions that the Patriot Act may have been used to force the removal. Some are also querying whether child pornography had also been found on the server. It seems far more likely, however, that BurstNet made the decision to switch off. 

Rather than notifying their users that the system would be shut down, and so allowing them to back up their data, the company remained quiet about the plans and simply switched the service off. 

The effect upon freedom of speech is immediately obvious: over 70,000 blogs have been taken offline. Some may have hosted inappropriate material, but it seems unlikely that 70,000 sites would all be hosting material of a dubious nature. 

That a single company can choose to enforce censorship on such a grand scale is very concerning. Yes, the servers belong to that company, but the content did not. Had the company allowed its users to back up their data, it would have been an inconvenience, but would have protected the work that those users have created over a long period of time. 

Sadly, in today’s age, freedom of speech is something that’s taken very lightly. Corporations and Governments need to start taking our rights seriously, and factoring them into their decisions. 

BBC News has more on the story here

Aussie Comms Minister Puts Foot In Mouth

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

Australian communications minister – Stephen Conroy – has made a speech pledging to protect Australians from ’spams’ coming through their ‘portals’. 

You could forgive the man for his obvious lack of technical literacy if it were not for his pet project – The Great Australian Firewall. 

That’s right, the same man who publicly displayed a great level of ignorance is behind the project that threatens Free Speech within Australia. His pet project has repeatedly received criticism, especially when the blacklist was leaked and it became apparent they were blocking perfectly legitimate content. 

Despite his prior failures, the man soldiers on, today giving us such wonderful words as; 

There’s a staggering number of Australians being in having their computers infected at the moment, up to 20,000 — uh — can regularly be getting infected by these spams or scams, that come through the portal. 

Oh dear! We also wonder quite how the Minister plans to stop these ’spams or scams’ or any other malware for that matter. Would he, by any chance, be proposing that the GAF should also process every inbound e-mail? If so, the privacy implications grow ever greater! 

You can watch the video here


South Africa plans Internet Filter

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

The South African Government has recently been making a lot of noise about pornography on the Internet. The end result is that the Government is planning to pass an act – the Internet and Cell Phone Pornography Bill – forcing Internet Service Providers (ISPs) to filter pornography both on the Internet and on mobile phones. 

Most of the hardware required is probably already in place, as the Film and Publication Act bans pornography featuring children. However the South African Government is planning to expand this functionality by using the definition of pornography used in the Sexual Offences Act. 

The minister behind the proposed measures – Malusi Gigaba – has an unusual take on the situation; 

“Cars are already provided with brakes and seatbelts, it is not an extra that consumers have to pay for. There is no reason why the internet should be provided without the necessary restrictive mechanisms built into it. […] The Bill is aimed at the total ban of pornography on internet and mobile phones. United Arab Emirates and Yemen already have legislation in this regard. Australia and New Zealand are currently seeking to do so.” 

We can only assume that the fact that seatbelts address a risk (death) not present when viewing pornography is lost on the Minister. The minister also seems to be unaware that South Africa has numerous real issues that need to be dealt with: serial rape, a high murder rate, a high assault rate, and high crime rates in general. 

Even if you ignore the many studies showing that pornography is not a serious contributory factor in rape, pornography cannot account for all the crime in the country. 

Any law, in any country, is only effective if enforced, so in order to ensure compliance this bill sets some pretty stern punishments for any ISP failing to obey the law; 

“Any Internet service provider or Mobile phone service provider who distributes, or allows to be distributed through the Internet or through a mobile phone in the Republic of South Africa, any pornography, shall be guilty of an offence and liable, upon conviction, to a fine or imprisonment for a period not exceeding five years, or to both a fine and such imprisonment.” 

Presumably, it’s the Chief Executive of the company that could face imprisonment, but the bill isn’t clear on this point. 

It’s simply not possible to completely prevent ‘questionable’ material from being accessed on the Internet. You can make it very difficult, but not impossible. The wording of the draft legislation would seem to imply that the ISP will be liable should any pornography slip through the net. 

So why is Freedom4All concerned about this? Whichever side of the moral fence you sit on when it comes to pornography, there’s a very serious element to this. We call it ‘feature creep’ – these systems are highly prone to abuse. 

The Chinese government are famous for restricting their citizens’ use of the Internet, and it is the same hardware that allows them to do so. 

Surely, though, this couldn’t happen in a democratic country? Unfortunately, it can. Australia recently deployed filtering hardware to protect the citizens from Extreme & Child Pornography. The Government operated their system on a blacklist basis (i.e. you can access anything so long as it’s not on the list) and kept the list secret. 

Inevitably the blacklist was leaked. It then emerged that, despite promises that the system would only be used to block access to illegal content, a large number of other sites were on the list. There were gambling sites, even an Australian dentist as well as some mainstream pornography sites. 

Clearly the list had been misused for political or personal reasons; what is particularly scary is that this abuse began within months of the system being deployed. The politicians couldn’t even wait so much as a year before they began filtering other content without any oversight whatsoever! 

As mentioned, China does this to a far greater extent, and probably to the greatest extent in the world. But Australia, China and South Africa are not the only countries to utilise this technology - many Middle Eastern Countries also utilise similar filtering technology. Even the United Kingdom has this technology deployed, ostensibly to prevent accidental access to Child Pornography. The UK blacklist is maintained by the Internet Watch Foundation, again, in secrecy with no oversight. Decisions made by the IWF in the past have led to large swathes of British users being barred from Wikipedia. 

As the minister said, both UAE and Yemen also have this technology in use, and Bangladesh recently used it to bar access to Facebook (in response to ‘Everybody Draw Mohammed Day’). 

 The problem with these systems is that they allow Governments to restrict or even take away the right to free speech – China blocks any web searches related to dissident topics (Tiananmen Square for example), thus preventing its citizens from developing an informed opinion on many aspects of their life (including their own Government). 

In a totalitarian society, such as China, the presence of the hardware also helps to dissuade dissidents from attempting to use the Internet to achieve their ends. When a simple web search could be detected and lead to an enforced disappearance, debate around sensitive topics simply stops. 

So whilst Minister Malusi Gigaba may claim that the system will only be used to block pornography, experience shows us that the system is likely to be abused. It’s highly likely that the South African Government will maintain their blacklist in private, with no oversight. 

Eventually however, it’s likely that the system will be used in an effort to stifle free speech. 

Of equal importance is the effect that such technology has on privacy: it removes it completely. Would you really want your Government knowing exactly what you do, and where you go, on the Internet? 

Supporters often say “If you have nothing to hide, you have nothing to fear”, but the reality is that we all have things that we’d like to keep private. This technology works by examining every page you request, with the end result being that the Government will know exactly which pages you’ve visited. 

There needs to be a concerted effort in South Africa to reject the Bill before it becomes law, which means that everyone needs to spread the word. 

You may also find the title of one of the bills related documents somewhat interesting; 

A reasonable and justifiable limitation on Freedom of Expression and Right to Privacy. 

It quite adequately sums up the very issues we have with any plans of this nature. 

Is your country planning to restrict your right to free speech? Are they following the lead of Australia and New Zealand and proposing measures to filter objectionable content? Please let us know! 


Twitter Bomb Joker Launches Appeal

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

Paul Chambers was due to fly to Ireland to meet a girl; when the airport closed, he posted the following, in frustration, on his Twitter account: 

Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your shit together, otherwise I’m blowing the airport sky high! 

This post was noticed by an off-duty airport worker. Despite the airport never considering the message a credible threat, it was duty bound to pass the intelligence on to the authorities (whether by policy or by statute isn't entirely clear at this point). Paul was arrested at work a week later, contributing to the loss of his job as a finance supervisor. 

The case proceeded to Magistrates' Court, where Paul initially entered a guilty plea but changed his plea in March. He was found guilty of sending a threatening message under the UK Communications Act 2003. He had claimed that it was 'innocuous hyperbole', but was ordered to pay a fine of £385, costs of £600 and a £15 victim surcharge. 

Paul has received widespread support (although some have inevitably denounced him as an idiot who got what he deserved), including from Stephen Fry. Mr Fry even offered to pay the fine on Paul's behalf. It is this widespread support that has allowed an appeal to be launched – Paul's supporters started a fund to pay for it. Some have commented that the prosecution was a waste of taxpayers' money; whilst this is probably true, it is not this which concerns us here at Freedom4All. 

Whilst Paul's comment was misguided, it was not worthy of prosecution. Unfortunately, the incident is part of a worrying trend in the United Kingdom: civil liberties are slowly being removed, with the justification often being to protect us from terrorists or paedophiles. This particular case (if upheld at appeal) will have a severe effect on the individual's right to free speech. 

Paul posted a message on the Internet with no expectation that anyone at Robin Hood Airport would ever see it. Paul's Twitter history shows that he is something of a joker, so it should have been clear with even a cursory investigation that this was not a credible threat. Indeed, the airport decided that it was not, so why did the case proceed to prosecution? Certainly, a due diligence investigation would have been needed by Special Branch to ensure it truly wasn't a credible threat, but the prosecution highlights either a severe failure of process or a desire to make an example out of an innocent bystander. 

Paul was not prosecuted under Anti-Terror laws, so it’s clear that there was no belief that he may pose a threat. The message was not sent to the airport, yet the Magistrate ruled that simply posting it on the Internet was sufficient. 

Given that Magistrates are unpaid volunteers, often with no legal training (District Magistrates are different however), why did the Magistrate feel he was qualified to make a decision on such a fine point of law? 

The whole affair has had a very negative effect on Paul's life: as mentioned, he has lost his job because of it, and it may well be that even if acquitted he's unable to work in certain roles again (CRB checks were to change under Labour; there's been no announcement by the Coalition that this may change). The prosecution amounts to Paul being prosecuted for exercising his right to free speech, which is clearly a very worrying development. 

It’s also noteworthy that a similar case is currently underway, although it unfortunately has reporting restrictions so details are sparse. It is known that it will decide whether a 1 to 1 conversation – i.e. via MSN or Skype – over the Internet (which many would consider a private conversation) constitutes publishing as defined by the Obscene Publications Act. If the court rules that it does constitute publishing, it will have severe implications for Free Speech within the United Kingdom. We will provide more information as soon as the restrictions on reporting are lifted. 

Everyone here at Freedom4All wishes Paul the very best of luck with his upcoming Appeal. 


Republished: The courts and the internet - The Love Story unfolds

Republished: This content was originally posted in May 2010. You can see the original in the archive.

The courts are no stranger to the online world that so many of us enjoy daily. Despite being described by many as a lawless world, most of society is becoming increasingly aware that laws in the real world also apply to our online existence.

Whether you are distributing Copyrighted Material, selling extreme pornography or breaking into a remote network, you risk being tried under the laws of your country if you are discovered.

It is possible, as in the case of Gary Mckinnon, that attempts may be made to try you under the laws of another country.

Despite these already rather staunch controls, two new precedents have been set (in practical terms rather than in the true legal sense). This article will examine the implications of these two new developments.

Read more…

Republished: Privacy on the Internet

This post was originally published elsewhere; you can see it in the archive

I've been meaning to write an article on this subject since it was published. This, technically, is the second article I've written on the subject, but due to a disagreement with my text editor over whether the 'save' button should save the file or cause a seg fault, the earlier article never reached the point of publishing.

Read more…

How Social Networking Can Be Used To Identify Distributors

You often hear reports about the privacy implications of Social Networking sites such as Facebook, Bebo and MySpace,
but these sites could potentially be used to bring charges against you for the distribution of Copyrighted material.



I've not heard of any such cases, but it is theoretically possible; consider the following example:

  • I obtain a copy of Windows 7 and either crack it, or find a download that comes pre-activated.
  • I'm quite impressed with it, and pass copies to any of my friends who want one.

So how exactly does Facebook help lead to my downfall? It's scarily simple:

Each copy of Windows has a unique product ID, unless of course you happen to be sharing copies of the same disc! So if my product ID is 1234, the product ID of everyone I've given a copy will be 1234. This makes it easy for Microsoft to spot that some sharing has been going on, but the clever bit starts when trying to ascertain who was distributing the copies.

Let's look at the friends I've shared my copy of Windows with.

I've given copies to:

  • Jon
  • Alan
  • Ray
  • James
  • Roger
  • Alistair
  • Brian
  • Sandra
  • Alex

Now the odds are that not all my friends know my other friends. For convenience, let's base our social circles on the first letter of their name, so:

  • Jon knows James but doesn't know any of the others
  • Alan knows Alistair and Alex but doesn't know any of the others
  • Ray knows Roger but doesn't know any of the others
  • Brian doesn't know any of the others
  • Sandra doesn't know any of the others

So, all these friends are using a copy of Windows 7 with the product key 1234. During Windows Update or some other phone home script, Microsoft notices this and decides to investigate. They start by looking each of us up on Facebook (assuming we all use it). This could be done through spyware delivered via Windows Update, or by a little thorough investigation.

Once they have found us all, they start examining our lists of friends, and the logic is quite simple:

  • Jon could have passed the Copy to James and me
  • Alan could have passed the Copy to Alistair, Alex and me
  • Ray could have passed the copy to Roger and me
  • Brian could have passed the copy to me
  • Alex could have passed the copy to me
  • Only I could have passed the copy to each of these users.

Now Brian could have passed me a copy, and I could have passed it onto all the others, or even just one person within each group. In this scenario, I'm not the source, but I am the major distributor.

Even if two groups interlink slightly (for example, Jon knows Alan), if one group is only linked to the other group(s) through their friendship with you, it's likely that you are the source. In fact, it's almost guaranteed that you are either the source or a distributor.
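The friend-group reasoning above can be sketched in a few lines of code. This is a toy illustration using the made-up names from this example, not anything Microsoft or Facebook actually runs: it simply asks which person, among everyone sharing the same product ID, is friends with all of the others.

```python
# Hypothetical friendship graph for everyone running product ID 1234.
# Each key is a person; the value is the set of people they know.
friendships = {
    "Me":       {"Jon", "Alan", "Ray", "James", "Roger",
                 "Alistair", "Brian", "Sandra", "Alex"},
    "Jon":      {"Me", "James"},
    "James":    {"Me", "Jon"},
    "Alan":     {"Me", "Alistair", "Alex"},
    "Alistair": {"Me", "Alan"},
    "Alex":     {"Me", "Alan"},
    "Ray":      {"Me", "Roger"},
    "Roger":    {"Me", "Ray"},
    "Brian":    {"Me"},
    "Sandra":   {"Me"},
}

def likely_distributors(graph):
    """Return people who are friends with everyone else in the set.

    Only someone connected to all the other matching product IDs could
    plausibly have handed a copy to each of them directly.
    """
    everyone = set(graph)
    return [
        person
        for person, friends in graph.items()
        if everyone - {person} <= friends  # knows all the others?
    ]

print(likely_distributors(friendships))  # prints ['Me']
```

Jon, Alan and Ray each only connect to their own little circle, so the intersection of "who could have supplied everybody" collapses to a single node: the person every group has in common.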

It's more realistic to believe that the groups would overlap slightly, and also that not everyone you pass a copy to will use Facebook/Bebo/MySpace. Unfortunately, neither of these detracts from the possibility of being traced: it only takes a small percentage of your friends using the social networking site for it to become possible to trace these things back to you.

Many people still add people they don't know to their 'friends' lists, and this presents two problems. Firstly, even if your profile and 'friends' list is set to friends only, if you add the investigator as a friend (they're not likely to tell you who they really are!) they'll be able to access all this information. Secondly, if one of your new 'friends' has also downloaded that software, you may find yourself held responsible for their copy as well, despite never actually having met them!

It would, of course, be difficult to prove that you distributed the copy, and that your friends did not just follow your lead and download the software from the same source. However, if the latter is true then it's probable that you passed them the URL and enabled them to do so. With copyright law changing rapidly, it's hard to know what you might be opening yourself up to!

I'm not suggesting that Social networking sites should be avoided completely, simply highlighting how your membership could be used against you. Whether you share software or not, you should examine the privacy settings on your account. Do you really need to let the whole world view your profile, and who you are friends with? Probably not, if someone wants to find you, they'll manage!


What is a Computer Virus

You can see the original version of this post in the archive

So someone has mentioned the term 'Computer Virus' and you're wondering what it means? Put simply, a computer virus is a program that attaches itself to other programs on your system without your consent.

To look at it in a bit more depth requires a greater understanding of malware as a whole: when the average user refers to a computer virus, they may actually be referring to worms, Trojan Horses, rootkits, spyware or an actual virus. You may also hear users refer to them by the incorrect plural 'virii' (a mock-Latin form; the real Latin word viri is actually the plural of 'man') when what they mean is 'viruses'. Knowing both terms can, however, be beneficial when trying to find information on a virus. However you refer to any of the above, they all come under the category of malware, which is simply a term for software that does something unwanted.

Spyware is generally referred to separately from viruses etc. because its behaviour is generally completely different, although some Trojan Horses and viruses will act as spyware.

Most malware can be avoided by running an Anti-Virus (AV) program, but it is important to keep the virus definitions up to date. There are numerous packages available online, with prices ranging from free to £400+.

Now that the lines of distinction are somewhat blurred, let's take a look at each of these types of malware.

Read more…

Truth and Fiction in Chain Email

You can see the original version of this post in the archive

There cannot be many E-mail users who haven't opened their inbox to see a subject line reading Fw: FW: XYZ. Love it or hate it, chain mail reaches us all! Some contain a sad story, others entice the user with a free gift, but sadly there is only one group who benefits from chain E-mails - spammers.

Read more…

Republished: Tips for fighting password theft

Originally published in Jan 2010.


Password theft is a fast-growing business: in the age of the Internet, a single word or phrase is often all you need to verify your identity. Unfortunately, this token is also all that is needed for someone else to adopt your identity, and potentially commit fraud or criminal acts in your name.

Everything seems to be online in this day and age, whether it's your bank, your mail or your shopping. Each of these requires a unique login to identify you. Unfortunately, usernames can be quite easy to come by; in fact, on many sites your username is public (eBay is a good example of this).

So how do you protect yourself from this threat? Generally it simply requires a little bit of common sense. You wouldn't provide just anyone with a copy of the key to your house, so why do the same for your online persona?


You may, on occasion, receive emails from your bank, or from PayPal, notifying you that they are undergoing a security re-vamp and need you to verify your identity. You are usually given a link to open a page requesting your password. If you receive one of these, apply a little common sense and ask yourself exactly why the bank would want this. Check the sender details in the email (though these can be faked, so beware) and contact the supposed sender to verify the legitimacy of the email.

Most financial institutions make it clear that they will never ask you to enter your passcode/phrase in whole, and certainly never by email. Yet many people still fall for these scams. One common reason for people clicking these links is that the link appears to point to the website you would expect. However, in an HTML email it is all too easy to create a link whose visible text shows your bank's address whilst actually opening an entirely different site.
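The mismatch is easy to demonstrate: in HTML, the visible text of a link and its real destination (the href) are completely independent. The snippet below uses Python's standard-library HTML parser to pull apart a link of the kind described; the domains are invented placeholders, not real sites.

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Collect the href and the visible text of <a> tags in some HTML."""

    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self._in_link = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_link = True
            self.href = dict(attrs).get("href")  # the real destination

    def handle_data(self, data):
        if self._in_link:
            self.text += data  # what the reader actually sees

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_link = False

# A phishing-style link: the text looks like a bank, the href doesn't.
email_html = '<a href="http://phishing-site.example">https://www.yourbank.example</a>'

checker = LinkChecker()
checker.feed(email_html)
print(checker.text)  # what the victim sees
print(checker.href)  # where clicking actually takes them
```

Hovering over a link before clicking (so the browser or mail client shows the real href) is the manual version of this check.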

Banks aren't the only organisations targeted for password fraud; everything from email to Facebook accounts gets targeted. There are a variety of reasons for this, the first being that it is not always your money that is being targeted. Your compromised account may be used to put a line of anonymity between the fraudster and their activities; this makes the fraudster harder to track, and puts you in the frame.

Another reason for seeking out the login details of supposedly harmless accounts (let's, for example, assume Facebook) is that many people use the same password for all their accounts. So if I establish that you registered for Facebook with a particular email address and the password 'sekretpassc0de', then the odds are that I could log in to your PayPal account with the same credentials. Whilst you would probably hesitate to give out your PayPal login in response to an email, you may decide that Facebook is harmless, so why would an email be fake? I then have possible login details for every account you own.

Aside from the obvious cure of never giving your password out, this eventuality can also be combated by using a different password for each of your accounts. It goes without saying that a password should be secure, containing letters and numbers in as random an order as possible, but passwords can also be 'brute forced'. Brute forcing is the act of trying different password combinations until you find the correct one; the more random your password, the harder it will be to break with a dictionary-style attack. Should your password fall prey to one of these attacks, using different passwords will severely limit the damage.
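To see why length and randomness matter, it helps to look at the size of the search space a brute-force attack has to cover. This is back-of-the-envelope arithmetic, not a measurement of any real attack:

```python
# Worst case, a brute-force attack must try every combination of the
# chosen alphabet over the password's length: alphabet_size ** length.
def search_space(alphabet_size, length):
    return alphabet_size ** length

# 8 characters drawn from lowercase letters only (26 symbols)
lowercase_only = search_space(26, 8)

# 8 characters drawn from upper + lower case letters and digits (62 symbols)
mixed_charset = search_space(26 + 26 + 10, 8)

print(lowercase_only)  # 208827064576 (~2 x 10^11 guesses)
print(mixed_charset)   # 218340105584896 (~2 x 10^14 guesses)
```

Mixing character classes multiplies the attacker's work by roughly a thousand in this example, and a dictionary attack only helps them if your password (or a close variant) actually appears in their wordlist, which is why a random string beats a dictionary word of the same length.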

Using the above example, where I gained your Facebook credentials: if you use different passwords for each of your accounts, the most I would be able to do is log in to your Facebook account and misuse it. Your PayPal account would remain out of reach, unless I could convince you to give me those details.

You should avoid giving your passwords to anybody, and if you believe they may have been compromised, change them immediately. But also be aware that people can find out various facts about you that may help them discover your password.

Most websites have a 'security question' which allows you to reset your password, but this poses the risk that someone else could reset it. One of the most common questions is 'Mother's maiden name?', and there are plenty of resources online that would allow a determined attacker to examine the birth records for the year you were born. Birth registers contain your mother's maiden name, and so your password could be reset. This weakness is especially dangerous when combined with an account relating to finances, whether it be PayPal, eBay or an Internet banking website. It may be an incredibly useful feature when you forget your password, but it also risks your security.

Many sites have now improved the security of this feature by changing the procedure. On sites such as GoogleMail, providing the correct response triggers an email to your secondary email account containing a link allowing you to reset your password. However, if an attacker has gained access to that secondary account, they can still use this method to gain access.

A potential angle of attack that I regularly see is sites such as Facebook which allow you to enter your email login details so that an automated message can be sent to your entire address book inviting them to join. Never use these: not only will the constant stream of emails annoy your friends, it would also be child's play for someone to craft a site asking for those details for their own benefit.

Anything that happens under your username will automatically be credited to you, so if someone were to use your eBay account to commit fraud, you would be the first port of call for the relevant authorities. This can be prevented by following the simple steps laid out in this article.

In conclusion, the steps for fighting password theft are very simple. Never give your password to anyone; a password should always be entered securely, not given to an 'advisor'. Verify the legitimacy of any correspondence received, especially if it asks for your credentials. It is especially important to obtain contact details from sources other than the correspondence itself; a simple Google search is usually sufficient. Ensure you practise damage limitation: keep your accounts separate and use a different password for each one.

Finally, always choose a secure password. Avoid dates of birth, whether entered in reverse or otherwise. Don't use family names, and never write the passwords down.

Password security comes from adopting a state of mind, treat a password as the key to your life. Never trust anybody with it, and try to avoid obvious password hints.


Republished: Amazon Kindle DRM Broken

Originally published on 23 December 2009

Amazon's Kindle DRM has been broken by an Israeli hacker. Quite what will happen next isn't clear, though: will Amazon learn the lesson that Apple learnt after DVD Jon cracked the ironically named FairPlay? Or will they move to update and 'improve' their Digital Restrictions Management?

Companies with deeper pockets than Amazon have learnt that DRM is a very bad idea, but ultimately it may not be down to them. Most retailers that use DRM develop sloping shoulders when questioned; it's almost always the publisher or the manufacturer that requires the DRM. If this truly is the case then Amazon is in a truly unenviable position: its suppliers want DRM and its customers don't. At what point does the balance tip in favour of the customer? Could Amazon decide that it's not worth the development cost of devising a new DRM system, or will the publishers be able to apply enough pressure to ensure that they do?

Even the publishers must know that DRM is incredibly bad for the consumer: it unfairly restricts what the customer can do with an item that they have purchased. It does nothing to inconvenience professional copyright infringers, and simply makes life hard for the customer (who remembers trying to migrate their iTunes database to a new PC?). Hell, in the case of the Sony rootkit fiasco, that attempt at DRM was even dangerous to the customer's computer.

DRM serves one purpose, it allows the publisher to require that you buy one copy of the media for every device you plan to read/play the media on. It makes great business sense to try and force this, whilst screaming at the top of your voice about how 'Pirates' are killing your business. Unfortunately, the business logic begins to evaporate as customers leave you, whether on principle, or because your DRM has made life too difficult.

When people talk of boycotts on the web, you often read comments by naysayers along the following lines: "they're a big company, do you think they'll really care if such a small group of people stop buying their products? They won't even notice!" This is a very pessimistic view; it doesn't really matter if they notice at first. Even if a company makes billions a year, if you stop buying their CDs at £13.99 then you have deprived them of some income. They may not notice it, but you are no longer funding their greed. The more people that look at it from this angle, the more likely it is that a boycott will grow to encompass a great many people.

As an example, I do not, and will never, own a Blu-Ray player (I didn't buy HD DVD either) because it is a format designed for one purpose: DRM. The HD side of things is a unique selling point designed to entice the customer, but make no mistake, it was introduced to try and curb copyright infringement. It also means that to create a Blu-Ray player you need to pay the media studios a licence fee; if you don't, then you're not going to be able to decrypt the discs. Even once you've done that, if someone does crack your player's key, you may well find that the media studios move to block that key. The end result is a lot of very pissed off customers!

Of course, as a consumer you don't really care about the above example, but it also affects you. If you own a Blu-Ray player (whether software or hardware) and someone discovers the player key for that make and model, the next time you play a new Blu-Ray movie you could well find that your player disables itself. Not only will it fail to play the new disc, you'll also be unable to play all the discs you have already bought. Your Blu-Ray player will be useless; if it's a hardware player then you effectively have a very expensive doorstop.

The Blu-Ray Consortium has already done this once. Luckily it was a software player whose key was discovered, so existing customers could upgrade once the manufacturer had made a few changes. But it still generated extra work and hassle for everyone who had that software player.

They also have the power to disable discs; the fact that you shelled out good money is completely irrelevant, and if they feel the need, you won't get a say. As far as I know this hasn't happened yet, but it is possible.

And that nice HDMI cable you had to spend out on? You needed that because they decided that your HD TV needed to contain a certain chip to watch Blu-Ray in High Definition. If that chip isn't present then, although your TV is HD Ready, the player will still send the movie in a lower quality (admittedly slightly higher than Standard Definition).

Although I've made Blu-Ray a major point, there are a number of products that are crippled by DRM. Take a quick search on the Internet and you'll soon find a long list; DRM pops up everywhere. The nice new version of Windows you bought is full of it (as is Vista, for that matter). Even DVDs contain a form of DRM (DVD CSS), although this was far more 'consumer friendly' than on later formats. DVD CSS was broken by DVD Jon quite some time back, which is why you are able to rip DVDs to your hard drive for more convenient viewing.

Incidentally, did you know that by ripping that DVD you are breaking the law? You don't even need to distribute a copy; you've already broken the law. That's right: as well as forcing DRM onto the consumer, the media moguls managed to get laws passed to support their greed. In both the UK and the US it is illegal to 'circumvent technical measures', and by bypassing the DRM on the DVD you have done just that. Worse, the combination of DRM and this law actually means it is illegal to watch DVDs on some computer systems!

As with Blu-Ray, the manufacturer of the player needs to be licensed by the media moguls, and any workarounds are illegal. With more than a little cunning, the media companies have stolen your right to do what you wish with products that you have bought! They even go so far now as to try and convince the consumer that we don't actually 'buy' software/music/media but in fact 'license' it.

By setting a precedent of these items being licensed rather than owned, the media companies believe they can set more or less any requirement. If you breach their terms, they will remove your ability to use the item that you paid for. Even if you consider this to be fair, keep in mind that they hold all the cards: they can change the requirements at any time, and if you object to the new 'rules' then they can easily stop you using that item. Would you sign up to, and pay for, a contract knowing the other party could change the entire thing at any time with no deferral to you? So why should you have to do that to listen to music or use software?

It's not just home equipment that allows DRM to ruin people's day. Recently, 3D screenings of Avatar were cancelled in Germany. Why? Because the key supplier failed to supply the cinemas with enough keys to decrypt the DRM. In their infinite wisdom, the media company decided to require one key per copy of the film, per film projector, and per movie server. There are always leaks, but what real benefit is there in sealing the film to this paranoid level? The cinemas affected ended up having to offer would-be customers the chance to get a refund, or to watch the film in 2D.

Part of this incident has been blamed on the cinemas showing the film across multiple screens, therefore using up a larger number of keys than was strictly necessary, but had the film not been encumbered with DRM it would not have been an issue. Had the media company responsible had the foresight to realise that most multiplexes would show the film across a number of screens, the issue could also have been avoided. Human error is a factor that cannot be eliminated, but the existence of DRM massively magnifies the impact that a simple error can have.

So what is the answer? Boycotts can be quite effective, but as noted above they often lack the inertia to make any real difference. This doesn't mean you shouldn't boycott DRM-encumbered items; in fact, you should. Although you may lack popular support, by boycotting DRM you are protecting yourself from its numerous disadvantages. If you buy a CD and it's DRM encumbered, or if you buy a DVD that won't let you skip the adverts, then return it and insist on a refund. Explain exactly why you are returning it, and consider emailing the manufacturer to tell them the same.

Boycotts will become more common as people become increasingly aware of DRM, so spread the word. The more people start boycotting DRM, the quicker the media companies will realise that it is harming their business. If you want to buy music online as MP3s, use a site like 7Digital, which is DRM free (except for a couple of WMAs). If you use a store that has two versions available – high quality with DRM, and DRM free but at a lower quality – contact them and explain that you want the higher quality MP3 but are not willing to download a DRM-encumbered file. This matters especially if they are charging more for the DRM-free version: most of us are willing to pay (a little) more for a DRM-free version, but not if it has been created at a lower bitrate. This practice helps to create the myth that DRM leads to better quality digital downloads.

Don't settle for something that isn't worth what you paid for it. DRM unfairly restricts consumers' rights and has been forced onto an unwilling public in order to feed the media moguls' greed. It's time that we all took a stand and reminded these monoliths that they operate in a market that depends on the customer. We are the customers; it is our money that they want, so perhaps they should try offering something that benefits us.

If you want to keep up to date with the fight against DRM, I strongly recommend that you take a look at Defective By Design and consider signing up to their mailing list. They regularly arrange protests and demonstrations to make the perpetrators of this consumer crime realise that their actions will not be tolerated.

Republished: Climate Change affects us in ways you would not expect

Originally published on 27 Nov 2009

It's becoming increasingly clear that Climate Change definitely affects the human race, but not in ways that you may think. The Australian Government is currently in chaos amid a massive Liberal revolt. The cause? Climate Change.

Many of the Liberals do not believe that man is responsible for the current warming trend (a view, incidentally, that I agree with) and so are refusing to pass a bill through the Australian Senate, where the Government is severely under-represented. The end result is that the Prime Minister may call a snap election (I don't understand Australian politics, but this sounds a bit strange!). Even the Liberal leader says that a snap election would harm the Liberals greatly.

Whatever your feelings regarding Climate Change, it must at least strike you as crazy to see the amount of disruption the whole debate causes. As has been highlighted time and time again, the figures just don't truly add up. We've seen examples of figures being massaged to support the hypothesis, and climate research is a major black hole for funding.

Our taxes have been raised in the name of Climate change, and the Government has spent an inordinate amount of money creating adverts that use ill-advised scare tactics. Yet none of what is being said has actually been shown to be factual. They are all examples of what could happen or be happening.

For crying out loud, the Government has even wasted questions in the driving theory test on the climate. These tests were introduced to improve road safety for us all; these questions do nothing to aid the abilities of potential drivers. Once again, political will is creeping in where it shouldn't. It's absolute madness!

I acknowledge that some will undoubtedly disregard this article instantly: I am clearly a Climate Change denier, and therefore must be mad, selfish and arrogant. Hopefully, however, some will climb down from their high horse and actually take the time to ponder just how much of an impact these unproven hypotheses are having on our daily lives.

Republished: A basic guide to the Internet for the Simple Minded

Originally published on 26 Nov 2009

There's been something of a furore amongst the PC brigade about a picture of Michelle Obama that appeared in Google's Image search results. As ever, the BBC have launched a debate in the Have Your Say section. Unfortunately for the rest of us, this debate does very little other than highlight how ill-informed a large section of the Internet-using populace is.

So let's dispel a few of the most common misunderstandings displayed in the vast array of comments:

a) Google is not responsible

Google is a search engine; they neither authored nor OK'd the picture. They use software to rank links and images based on a number of factors, including the number of pages that have linked to a given bit of media. They also rank based on relevance to the search criteria. What they do not do is 'post' the image.

The image was posted to a website somewhere on the net by someone; it was not Google's doing. All Google did was index the page. Based on comments made by Google, it does sound like the original authors may be linked to malware, but as no malware was found on the host page, there's no reason to remove the listing. Measures of offensiveness do not come into it.

b) Just because you find something offensive, not everyone does

Offensiveness is very subjective, so who's going to measure it? I find articles in some of the national newspapers quite offensive, especially when they are so clearly biased and ill-informed. Do I call for them to be censored? No. Similarly, I find the average level of intelligence on today's internet quite offensive compared to the original intent of DARPAnet and the like. Do I demand that 80% of the populace have their connections terminated? No.

So why should you demand that a third party censor the information that I can consume? Would you be happy if I called for the Daily Mail to be banned, because I believe they publish nothing of worth? Probably not. If you don't like what you see and read, click the little cross in the top right hand corner. Or even just the 'Back' button in your browser!

Don't waste bandwidth by spouting a load of vitriol about how much you disapprove of something, and how we should all be 'protected' from it.

There are customs in other parts of the world that you and I would find offensive; similarly, there are things we do that are grossly offensive in other parts of the world. So who exactly is going to define offensiveness? If you are really that concerned, you have three choices: 1) install something like Net Nanny, 2) move to China or Iran where the work is done for you, or 3) call your ISP and cancel your Internet service. Don't try to create a secret option 4 where you interfere with what the rest of the world sees.

c) Censorship is a bad thing

You may believe you have the best intentions, but it is a very slippery slope. First we have 'offensive' images blocked, then the Government decides there are bits of information that are useful to Terrorists. Then it's information that could be useful to paedophiles and other lesser criminals. Before you know it, what are we left with? Access to CBeebies and not a lot else. Of course there will always be the Government-approved sites, but just like in China, dissidents will be cracked down on in an attempt to shield the rest of the populace from 'unsavoury' information.

Most of us will trade a little bit of freedom for the privilege of not stumbling across child porn, but even that process is not nearly transparent enough. What happens if you are blocked by accident, or incorrectly categorised? How do you appeal? Hell, how do you even know that your site is on the blacklist? The answer to all three is: you can't.

It's a trade most of us are willing to make, but that's as far as it can ever be allowed to go.

d) The Internet is not a safe place

Believe it or not, the Internet contains a lot of information that could offend or even harm you, let alone your darling children. So why would you let them surf unsupervised? There was quite a tirade by the head of the Child Exploitation & Online Protection Centre on BBC Radio 4 last week. Incidentally, he came across as very single-minded and a bit of a bully, but I guess his heart is in the right place. He was complaining that the likes of Facebook and Myspace won't implement his 'Panic Button' to allow kids to report anything that concerns them. Facebook and Myspace both claim that they have processes in place that are more effective, but this all misses an important point.

If you need a button on every website so that your child can report predators etc., it means a number of things:

  1. You're probably letting your kids surf unsupervised
  2. Your kids wouldn't tell you about something concerning them

Neither of these is a fault of the internet; they are failings in your parenting skills. Kids should not be allowed on the net unsupervised (the age at which they can go it alone is debatable), and if your kids can't trust you enough to tell you about Bill the Pedo on Facebook, try spending some more time with them!

More to the point, if they don't feel they can talk to you about it, why would they then be willing to talk to an anonymous copper who is more than likely to tell you anyway? The internet is not a safe place for kids. Period.

If you're letting young children on the net, install Net Nanny or one of the free alternatives. It's really not that difficult.


Some people completely fail to understand how the net works, and the dangers that lie therein. Not everyone is technically minded, so this alone does not make those people any less intelligent. Where these users become truly stupid is when they let kids run free on something that they do not understand themselves, or when they begin extolling the virtues of their beliefs and feelings by calling for greater censorship to protect their fragile minds from the nasty images out there.

However, the level of truly stupid is reserved for those who believe that censorship is spelt sensorship. These people appear so lacking in relevant intelligence that they should not even qualify to participate in so important a debate. A simple bit of research before vomiting one's opinion onto the web is a must, and a failure to do so can easily lead to the impression that you are trying to punch above your weight.

Improve your Claims_DB Front-End by using Checksumming

This content was originally published to in Nov 2009

This is a tutorial on how to improve the response time of applications that use Claims_DB as a backend. The method works whether you are using Claims_DB_listener to access Claims_DB on a remote server, or accessing Claims_DB on your local system.

In the case of the former, replace the direct Claims_DB code with a Claims_DB_listener request (using METHOD 14). I've included an example Claims_DB_listener query, but commented it out.

We are going to start by building a simple BASH script, without checksumming. It's going to generate an HTML page using the information provided by Claims_DB; we will simply define the template.

Note: Choose a Database and Table that already exists within your Claims_DB installation. We are not going to write to the database in this tutorial.

Create your script in a convenient directory (I used ~/sandbox, but it's down to personal preference). I use Kate for most of my scripting, but use whichever text editor you are most comfortable with.

# kate

Now enter the following code;

# This is a simple script to demonstrate the usefulness of checksumming
# Copyright Ben Tasker 2009
# Released under the GNU GPL V3 - See

# You need to define these yourself
DBROOT="name of a DB stored in Claims_DB"
TABLE="name of a table within the Database named in DBROOT"
CLAIMS_ROOT="path to your Claims_DB installation"

# If you're going to use Claims_DB_listener you need to set this next
# one to the name/ip of the database server
# DATABASE_SERVER="name or IP of the database server"

DATE=$( date )

# Generate the HTML
/bin/cat << EOM > /tmp/script.tmpfile.$$
<head><title>Example Script to Demonstrate Checksumming</title></head>
<body bgcolor="white">
<center><b>This is an example HTML report generated from a Claims_DB query.</b></center>
<br><br>
<center>$DATE</center>
<br><br>
EOM

# Grab the column titles

# Claims_DB_Listener request, uncomment this and comment the following line to fire the request that way
# wget -O /tmp/Claims_DB-response.$$ "http://$DATABASE_SERVER/cgi-bin/$TABLE&DBROOT=$DBROOT&METHOD=3"

# Comment this one out if you're using the listener
TABLE="$TABLE" DBROOT="$DBROOT" "$CLAIMS_ROOT"/bin/ -headers > /tmp/Claims_DB-response.$$

# Remove the text delimiters
sed s/'"'//g /tmp/Claims_DB-response.$$ > /tmp/column_headers

# Create the Column Head
# (field separator assumed to be a comma)

COLT1=$( awk -F',' '{print $1}' /tmp/column_headers )
COLT2=$( awk -F',' '{print $2}' /tmp/column_headers )
COLT3=$( awk -F',' '{print $3}' /tmp/column_headers )
COLT4=$( awk -F',' '{print $4}' /tmp/column_headers )

/bin/cat << EOM >> /tmp/script.tmpfile.$$
<table border="1" width="100%">
<tr><td><b>$COLT1</b></td><td><b>$COLT2</b></td><td><b>$COLT3</b></td><td><b>$COLT4</b></td></tr>
EOM

# That's the quick & easy part out of the way!

# let's take the data from the table

# Claims_DB_Listener request, uncomment this and comment the following line to fire the request that way
# wget -O /tmp/Claims_DB-response.$$ "http://$DATABASE_SERVER/cgi-bin/$TABLE&DBROOT=$DBROOT&METHOD=2"

# Comment this one out if you're using the listener
TABLE="$TABLE" DBROOT="$DBROOT" "$CLAIMS_ROOT"/bin/ -read > /tmp/Claims_DB-response.$$

# Now we want to process it line by line

while read -r a
do

# Remove the text delimiters
echo "$a" | sed s/'"'//g > /tmp/current_line

# Grab each column and put it to a variable
# (field separator assumed to be a comma)
COL1=$( awk -F',' '{print $1}' /tmp/current_line )
COL2=$( awk -F',' '{print $2}' /tmp/current_line )
COL3=$( awk -F',' '{print $3}' /tmp/current_line )
COL4=$( awk -F',' '{print $4}' /tmp/current_line )

# Generate the HTML
/bin/cat << EOM >> /tmp/script.tmpfile.$$
<tr><td>$COL1</td><td>$COL2</td><td>$COL3</td><td>$COL4</td></tr>
EOM

# We've generated our rows

done < /tmp/Claims_DB-response.$$

#Tidy up a bit
rm -f /tmp/current_line
rm -f /tmp/Claims_DB-response.$$
rm -f /tmp/column_headers

# Close off the HTML
/bin/cat << EOM >> /tmp/script.tmpfile.$$
</table>
<br><center><b>Report Ends</b></center>
</body>
EOM

# Open the resulting file in our HTML Browser
firefox /tmp/script.tmpfile.$$

# Give firefox a few seconds to load before we remove the file
sleep 5
rm -f /tmp/script.tmpfile.$$

So we've generated an HTML page containing every record in the selected Database. It should look something like the following:

This is an example HTML report generated from a Claims_DB query.

05 November 2009

Title 1 Title 2 Title 3 Title 4
Data 1 Col 1 Data 1 Col 2 Data 1 Col 3 Data 1 Col 4
Data 2 Col 1 Data 2 Col 2 Data 2 Col 3 Data 2 Col 4
Data 3 Col 1 Data 3 Col 2 Data 3 Col 3 Data 3 Col 4
Data 4 Col 1 Data 4 Col 2 Data 4 Col 3 Data 4 Col 4

Report Ends

Obviously you'd normally tidy things up (centre the data etc.), but it'll do for the purposes of this tutorial. The page loads and we can read it, which is great, but if the Table is quite large then it can take absolutely ages to load!
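Incidentally, building temp filenames from $$ (as the tutorial does for simplicity) can collide and is predictable; mktemp is a safer alternative, and pairs nicely with trap for cleanup. A quick sketch, separate from the tutorial script:

```shell
#!/bin/bash
# A sketch (not from the original tutorial): mktemp avoids the collision
# and predictability risks of building /tmp names from $$ by hand.
OUTFILE=$( mktemp /tmp/claims_report.XXXXXX )

# Remove the temp file automatically when the script exits
trap 'rm -f "$OUTFILE"' EXIT

echo "<html><body>Report body goes here</body></html>" > "$OUTFILE"
cat "$OUTFILE"
```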

Try running it on a table that contains more than 100 rows of data: the while loop takes a long time to complete. By using checksumming, though, we can generate a cachefile; if the checksum hasn't changed, we can simply display the cached file, giving the user an almost instant response.
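The core idea can be sketched independently of Claims_DB: checksum the source data, and only regenerate the page when the checksum changes. In this sketch the filenames and the generate_report function are illustrative stand-ins, not Claims_DB calls:

```shell
#!/bin/bash
# Standalone sketch of checksum-based caching. Filenames and the
# generate_report function are stand-ins for the tutorial's real steps.
DATA="/tmp/source_data"
CACHE="/tmp/report.cache"
SUMFILE="$CACHE.sum"

echo "example row" > "$DATA"        # sample data for the demo

generate_report() {                 # stands in for the slow row-by-row loop
    echo "<html><body><pre>"
    cat "$1"
    echo "</pre></body></html>"
}

NEWSUM=$( md5sum "$DATA" | awk '{print $1}' )

if [ -f "$CACHE" ] && [ -f "$SUMFILE" ] && [ "$NEWSUM" = "$( cat "$SUMFILE" )" ]; then
    cat "$CACHE"                    # unchanged: serve the cache instantly
else
    generate_report "$DATA" > "$CACHE"   # changed (or first run): slow path
    echo "$NEWSUM" > "$SUMFILE"
    cat "$CACHE"
fi
```

Run it twice: the first invocation takes the slow path, the second serves the cache.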

To do this, take the code from above and make the following changes (the original post highlighted the changes in red):

# This is a simple script to demonstrate the usefulness of checksumming
# Copyright Ben Tasker 2009
# Released under the GNU GPL V3 - See

# You need to define these yourself
DBROOT="name of a DB stored in Claims_DB"
TABLE="name of a table within the Database named in DBROOT"
CLAIMS_ROOT="path to your Claims_DB installation"

# If you're going to use Claims_DB_listener you need to set this next
# one to the name/ip of the database server
# DATABASE_SERVER="name or IP of the database server"

DATE=$( date )

# We want the DATE to be generated every time, so let's put it into a different file

# Generate the HTML
/bin/cat << EOM > /tmp/script.tmpfile.$$.head
<head><title>Example Script to Demonstrate Checksumming</title></head>
<body bgcolor="white">
<center><b>This is an example HTML report generated from a Claims_DB query.</b></center>
<br><br>
<center>$DATE</center>
<br><br>
EOM

# Get a checksum from Claims_DB

# Uncomment this line if you are using the listener
# wget -O /tmp/ "http://$DATABASE_SERVER/cgi-bin/$TABLE&DBROOT=$DBROOT&METHOD=14"

#Comment this next line if you are using the listener
TABLE="$TABLE" DBROOT="$DBROOT" "$CLAIMS_ROOT"/bin/ - --gen-tbl-sig > /tmp/

# Check for a cache file
if [ -e /tmp/.claims_db.response.$TABLE ]
then
	# Cache file exists, but is it valid?

	# We need to compare the new checksum to the one stored in /tmp/.claims_db.response.$TABLE.sum
	NEWSUM=$( cat /tmp/ )
	OLDSUM=$( cat /tmp/.claims_db.response.$TABLE.sum )

	# Do they match?
	if [ "$NEWSUM" == "$OLDSUM" ]
	then
		# Checksums match, use the cache file
		# Tidy up first
		rm -f /tmp/

		# Place the header into a file, then place the cache file in under it.
		cat /tmp/script.tmpfile.$$.head > /tmp/script.tmpfile.$$
		cat /tmp/.claims_db.response.$TABLE >> /tmp/script.tmpfile.$$

		# Now open the file
		firefox /tmp/script.tmpfile.$$

		# Give firefox some time then tidy up and exit
		sleep 5
		rm -f /tmp/script.tmpfile.$$
		rm -f /tmp/script.tmpfile.$$.head
		exit
	fi
	# Checksums don't match
fi

# Either checksums don't match or the cachefile didn't exist, process as normal

# Grab the column titles

# Claims_DB_Listener request, uncomment this and comment the following line to fire the request that way
# wget -O /tmp/Claims_DB-response.$$ "http://$DATABASE_SERVER/cgi-bin/$TABLE&DBROOT=$DBROOT&METHOD=3"

# Comment this one out if you're using the listener
TABLE="$TABLE" DBROOT="$DBROOT" "$CLAIMS_ROOT"/bin/ -headers > /tmp/Claims_DB-response.$$

# Remove the text delimiters
sed s/'"'//g /tmp/Claims_DB-response.$$ > /tmp/column_headers

# Create the Column Head
# (field separator assumed to be a comma)

COLT1=$( awk -F',' '{print $1}' /tmp/column_headers )
COLT2=$( awk -F',' '{print $2}' /tmp/column_headers )
COLT3=$( awk -F',' '{print $3}' /tmp/column_headers )
COLT4=$( awk -F',' '{print $4}' /tmp/column_headers )

/bin/cat << EOM >> /tmp/script.tmpfile.$$
<table border="1" width="100%">
<tr><td><b>$COLT1</b></td><td><b>$COLT2</b></td><td><b>$COLT3</b></td><td><b>$COLT4</b></td></tr>
EOM

# That's the quick & easy part out of the way!

# let's take the data from the table

# Claims_DB_Listener request, uncomment this and comment the following line to fire the request that way
# wget -O /tmp/Claims_DB-response.$$ "http://$DATABASE_SERVER/cgi-bin/$TABLE&DBROOT=$DBROOT&METHOD=2"

# Comment this one out if you're using the listener
TABLE="$TABLE" DBROOT="$DBROOT" "$CLAIMS_ROOT"/bin/ -read > /tmp/Claims_DB-response.$$

# Now we want to process it line by line

while read -r a
do

# Remove the text delimiters
echo "$a" | sed s/'"'//g > /tmp/current_line

# Grab each column and put it to a variable
# (field separator assumed to be a comma)
COL1=$( awk -F',' '{print $1}' /tmp/current_line )
COL2=$( awk -F',' '{print $2}' /tmp/current_line )
COL3=$( awk -F',' '{print $3}' /tmp/current_line )
COL4=$( awk -F',' '{print $4}' /tmp/current_line )

# Generate the HTML
/bin/cat << EOM >> /tmp/script.tmpfile.$$
<tr><td>$COL1</td><td>$COL2</td><td>$COL3</td><td>$COL4</td></tr>
EOM

# We've generated our rows

done < /tmp/Claims_DB-response.$$

#Tidy up a bit
rm -f /tmp/current_line
rm -f /tmp/Claims_DB-response.$$
rm -f /tmp/column_headers

# Close off the HTML
/bin/cat << EOM >> /tmp/script.tmpfile.$$
</table>
<br><center><b>Report Ends</b></center>
</body>
EOM

# Combine the header and our newly created HTML
cat /tmp/script.tmpfile.$$.head > /tmp/script.tmpfile.$$.1
cat /tmp/script.tmpfile.$$ >> /tmp/script.tmpfile.$$.1

# Before we display the resulting file, let's create a cachefile for next time;
cp /tmp/script.tmpfile.$$.1 /tmp/.claims_db.response.$TABLE

# Store the Checksum for use next time
mv /tmp/ /tmp/.claims_db.response.$TABLE.sum

# Move the file back to its original location
mv /tmp/script.tmpfile.$$.1 /tmp/script.tmpfile.$$

# Open the resulting file in our HTML Browser
firefox /tmp/script.tmpfile.$$

# Give firefox some time then tidy up and exit
sleep 5
rm -f /tmp/script.tmpfile.$$
rm -f /tmp/script.tmpfile.$$.head
exit

So the script will generate the same result, but will first check to see if anything has changed. If something has changed, it will generate a new page (which could still take a while); if nothing has changed, it'll return the results previously generated. Try running the script against your big table again: the first run takes a while, but the second should be almost instant.

You can also use this checksumming method for queries, although you need to do some of the work yourself from within your script. Such a query would look more like the following;

TABLE="$TABLE" DBROOT="$DBROOT" "$CLAIMS_ROOT"/bin/ -query "Query Text" COLUMNNUMBER > /tmp/Claims_DB-response.$$
md5sum /tmp/Claims_DB-response.$$ > /tmp/

Using the result of a query to generate a checksum does not provide as much of an improvement in response time; however, depending on the length of the returned data, it may still be an improvement. Results can be cached in the same way, although you need to ensure that the cache filename will be unique to that particular query. I use the format


Because the cache is based on a checksum of the returned data, there is no need to impose time limits on the locally stored cache files. You may wish to do so in order to avoid cache files building up, but it will not affect the accuracy of the data returned.
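One way to get a cache filename that's unique to a particular query is to hash the query text itself. The naming scheme below is my own illustration, not something Claims_DB provides:

```shell
#!/bin/bash
# Sketch: derive a per-query cache filename by hashing the query text.
# The QUERY value and the naming scheme are illustrative only.
QUERY="Query Text"
QUERYKEY=$( printf '%s' "$QUERY" | md5sum | awk '{print $1}' )
CACHEFILE="/tmp/.claims_db.response.query.$QUERYKEY"
echo "$CACHEFILE"
```

The same query text always yields the same key, so repeat queries find their own cache file, while different queries never collide.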

This concludes this tutorial on how to improve front-end response times by using checksumming. Hopefully you've learned something; have fun creating your application!

Republished: A couple of issues with Karmic Koala

Originally published on 05 November 2009

I reviewed Karmic Koala a few days ago, and though I did encounter a few minor niggles, everything was running quite well. However, that has now changed: I've discovered two issues, both of which I would class as pretty major usability issues.


Bluetooth support in Kubuntu Karmic Koala is very limited. Konqueror lacks kio_obex, so you are unable to browse your phone's folders from the PC. This is apparently due to changes they are making in Kbluetooth itself, but the end result is that I've lost some functionality.
Reports suggest that Bluetooth support is a little borked in Koala's Gnome environment as well, although it sounds as though it may be in a better state than in the KDE environment.

Sooner or later there will be a fix out for this.

Intel Users Beware

If your graphics card would normally use Xorg's Intel driver, beware: the implementation of Kernel Mode Setting (KMS) has broken the Xv functionality of this driver. If you try using mplayer (for example) with xv as your video out (i.e. mplayer -vo xv somefile.mpg) you will be told that no xv device exists.

Try running xvinfo; if the output tells you no device is present then you have probably been affected by this.
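If you want to script the check, something along these lines should work. The "no adaptors present" string is what xvinfo prints on affected systems from memory, so treat it as an assumption and eyeball the real output too:

```shell
#!/bin/sh
# Scripted version of the xvinfo check. The grep string is an
# assumption about xvinfo's output on affected systems.
if xvinfo 2>/dev/null | grep -q "no adaptors present"; then
    echo "Xv unavailable - probably the KMS bug"
else
    echo "Xv looks OK (or xvinfo is not installed)"
fi
```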

If your graphics are running slow, or your CPU load seems to be unusually high, this is why. At the time of writing there is no fix for this (though they are working on it upstream); there is, however, a workaround: you need to disable KMS. To do this, do the following:

  • Restart your computer
  • At the GRUB screen select Linux and press 'E'
  • Type nomodeset and press enter
  • Once the system has booted try xvinfo again, you should get device details this time
  • You're good to go!

If this fix worked, and you want to save typing nomodeset every time you boot your computer, do the following;
  • sudo nano /boot/grub/grub.cfg
  • Enter your password (if prompted)
  • Press 'Ctrl' and 'W'
  • Enter linux and press enter
  • This should take you to a line that ends with splash. If not, press Ctrl + W and enter until you find it
  • On the end of this line add nomodeset
  • Ctrl & x to exit, press Y to save
  • The option is set as default
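If you'd rather automate the edit than use nano, the same change can be made with sed. This is a sketch against a sample file, not a drop-in command: on a real system the target would be /boot/grub/grub.cfg (edited with sudo), and bear in mind grub.cfg can be regenerated by update-grub, undoing the change:

```shell
#!/bin/sh
# Demo on a sample file; a real system would target /boot/grub/grub.cfg.
cat > /tmp/grub.cfg.sample <<'EOF'
linux /boot/vmlinuz-2.6.31-14-generic root=UUID=xxxx ro quiet splash
EOF

# Append nomodeset to every kernel line that ends in "splash"
sed -i 's/splash$/splash nomodeset/' /tmp/grub.cfg.sample
cat /tmp/grub.cfg.sample
```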

On the scale of things, these are pretty minor issues, but they are irritating. Whether we will ever see the KMS fix in Koala isn't clear, but those feeling brave can compile their own kernel and patch it. If you're happy to do without KMS then nomodeset is probably the way to go.

Republished: Windows 7 vs Karmic Koala

Originally published on 01 Nov 2009 - (Images were missing at time of restoration)

Ubuntu 9.10 was released a couple of days ago with the codename 'Karmic Koala'. There were plenty of reviews written immediately after the release, but this one is different. Why? Because I've taken the time to actually use the system.

I reviewed Windows 7 a few days ago, so let's start by taking a look at Koala. I'm using the Kubuntu release as I'm not a fan of Gnome.


The install was relatively painless, and is unlikely to cause issues for even the most tech-illiterate user. The defaults all work fine (although I do prefer to have my /home as a separate partition).

I miss the days when you were asked which packages you wanted to install, but I guess it makes things less complicated for 'average joe'.

Booting from the install media to a usable system took about 20 minutes.

First Impressions

My first impressions were not that great. Koala has shipped with KDE4, which unfortunately defaults to the b*stard horrible launch menu found in Vista/7. You can, however, disable it and use the old-style menu simply by right-clicking on the K button.

KDE4 also uses widgets (known as Plasmoids), which has taken some getting used to. However, although I do miss the 'old style' desktop, it does have its benefits. I've gone all nostalgic and laid my desktop out in a Windows 3.1 style, although I have retained my taskbar etc.

I'm used to working from a console, but for the purpose of the review I made the effort to rely on GUIs for everything possible. I don't like Kubuntu's Settings panel, as Kcontrol used to contain a lot more. Nonetheless, it does seem to cover everything the average user might need to adjust. I notice it also has the ability to search.

Further use

After the initial playing around, I wanted to actually use the system. Koala ships with AppArmor enabled (older versions of Ubuntu did as well, but it's been brought more to the forefront). There's a sandbox for Firefox, but for one reason or another it's disabled by default. I couldn't find the option to enable it from within the GUI, but it can be enabled with one command (I used a console, but you could probably run it from the 'Run' option): 'sudo aa-enforce /etc/apparmor.d/usr.bin.firefox-3.5'.

Enabling this was the only time I actually needed to use a console. Perhaps in the next batch of updates an option could be added to the Settings Manager? Or in the next release it could even be enabled by default?

To use the system, I needed to install the software I wanted. Synaptic seems to have been replaced (or heavily modified), but the new Software Manager is simple enough to use. I did encounter one major issue though. Take the following scenario:

You want to install

  • Timidity
  • Rosegarden
  • Lives
  • Povray
  • Kpovray
  • Firefox
  • dvdstyler
  • Smb4k

So you search for each one and select it (my list was actually much longer), then you click Apply to install. The system goes off and checks dependencies, but then returns an error saying it is unable to install dvdstyler. You click OK, but rather than continuing, it dumps you back at the package selection screen. Worse, it has deselected all your packages, so you have to work back through and select them all again.

Could it not have a) continued or b) returned to the selection screen but maintained the list of your selected packages (minus the faulty one)?

It's a minor niggle, but if you've spent some time choosing your packages (and not made a note of what you are going to install) it's more than a little irritating!

There are no real changes in keyboard shortcuts; however, I do have a minor issue with Ctrl-Alt-Del. It displays a screen giving you the option to Logout, Shutdown or Restart. Where's the lock screen option? The shortcut to lock the desktop is Ctrl-Alt-L, which, whilst logical, differs from that of Windows. When you're working with Windows all day you get used to hitting Ctrl-Alt-Del followed by the space bar; on Kubuntu this will log you out. Like I say, a small niggle, and probably something I could fix myself.

Alt-Tab is improved, much like in Windows 7. Similarly, you get little previews of the window when you hover over it on the Taskbar.

Beryl is in there as you would expect.

The Final Round: 7 vs Koala

So, which of the new releases do I prefer?

  1. Koala contains every single 'feature' of Windows 7, many of which are improved. It also has the added benefit that you can turn every single 'feature' off if you wish. Koala encrypts the user's Home Directory (full disk encryption is also available), whereas you have to pay Microsoft for Bitlocker.

  2. Microsoft still haven't implemented multiple desktops, a feature which I use regularly, so they lose points on usability.

  3. I'm not going to compare speed as I tested 7 in a VM and Koala on bare metal. Koala's memory footprint is smaller than that of 7 however, and Beryl runs quite happily on a system that MS's docs suggest couldn't run Aero.

  4. Both have bugs, although the Ubuntu ones should get ironed out a little faster.

  5. Windows 7 still maintains a traditional desktop, but I am beginning to like the Plasma desktop.

  6. Koala is far more configurable than 7, although it does seem to be missing a few options from the GUI.

  7. Hitting the 'Super Key' on Windows 7 opens the start menu; on Koala you can't seem to configure it to do that from the GUI.

  8. The default Themes on Koala aren't that great; Windows 7 has adopted quite a clean interface.

  9. Both seem to be stable and relatively secure (in 7's case once the 3rd party apps have been added)

  10. Windows 7 is built around DRM, Koala does not restrict what you can do with your purchases.

  11. Koala is free; Windows 7 costs more than it is actually worth (unless you're stuck on Vista).

On that last point, it's not a knock against Windows 7 as such. If you have XP, there's nothing in Windows 7 that makes it worth the huge amount of money (why pay for something that just works if you have something that just works?), unless you count the extra eye candy. If you have Vista, which doesn't work, then 7 probably is a safe bet.

Ultimately Koala wins the battle: none of the purported 'features' of 7 are unique to Windows 7. You have a lot more flexibility with Koala, and can customise it to meet your needs. Windows 7 is a long way from being fit for the uses I require. I export a lot of apps from my server using X forwarding over SSH, something which still doesn't seem to be possible on Windows (RDP doesn't count; I just want the app, not an entire desktop) without paying Citrix for their Xenapp metaframe server.

Amazingly, I'm planning on leaving Koala on the laptop for the time being. I've finally found something that can replace Gentoo; that in itself is one hell of an achievement! I'll probably revert back to the command line for a lot of tasks, but the Karmic Koala has earned itself a place on my system.

Disclaimer: I am a Linux user, and many of the Windows fanboys will claim that it has clouded my judgement. In a way it has: I expect certain functions from my OS that have never been available on Windows. That's the thing about having a choice, you discover there are things that you never even thought of. That said, I liked Windows 7 a lot more than I expected, and it is definitely a step in the right direction. It's unfortunate that it's packed with DRM, but other than that it is a reasonable operating system.

Republished: Windows 7: the Verdict

Originally published on 25 October 2009

Windows 7 has been released for all the world to use and abuse, so what do we think of it? You may recall that I wrote a review of the Windows 7 RC back in May. I never got quite as far as writing a review of the RTM, which is a pity because there were a number of changes.

However, the final 'polished' version has now been released. So let's see what the final judgement is.

It's better than Vista

Seems an obvious and easy target, but Windows 7 is much better than Vista.

However, I still don't think it compares to the Windows XP experience. New interfaces take time to learn, so this may be an extraneous variable.

It's still lacking functionality

Most of the competing operating systems have had extra features such as virtual desktops for quite some time, and there are quite a few extra features I would like to see in Windows 7. Some of these will probably appear in the form of freeware, but if you have to trust the security of your PC to an unknown third party purely to get the functionality you want, something is wrong. However, I am a Linux user, used to actually being able to customise my system, so we'll let this point slide.

Improved Install Process

For the average user, installing Windows has always seemed more than a little challenging. To be fair, it's a long time since it was that difficult, but with 7, if a user can actually get the balls to try, they should find it a breeze. Most of 7's users will probably never see the install screen, and will continue to buy PCs with Windows preloaded. However, it is nice to see that even this area has been looked at.

Dummification of the OS

This is an issue that bothers me, and some will label it elitist. With every release of the OS, Microsoft seem to be trying to make things simpler; even the latest advert describes it as simple to use. This on its own is not a bad thing; it's the wider-ranging effects. Although a valid business goal for the vendor, the last thing the rest of the world needs is for idiots to be able to use computers (which is what MS are aiming for). We need PC users to realise and understand that they actually have to maintain their PCs, otherwise they become easy targets for malware etc. I'm not advocating a computer driving licence, more saying that complacency should not be encouraged. Don't make things too simple.

Network Copying

When Vista was released, copying large files took an age. This was eventually fixed; however, now that 7 is released there's a similar issue. Copying a file of more than half a gig across the network seems to slow everything down. It's a major pain, but one that I'm sure will be fixed quite quickly.

Stole my background

In the RC and the RTM there was a rather nice Background of a fish. This appears to have been removed from the Retail version. Not a major issue I suppose, but it was a pretty good picture!

Renaming of files

This is a nice feature: on XP I often had users complaining that they couldn't open a file. It usually turned out that they had deleted the filename extension, and so Windows wasn't too sure what to open it with. In 7, when you click to rename a file, rather than highlighting the whole filename it only highlights up to the dot. It's quite a nifty little feature, although it doesn't really go far enough. Why Microsoft haven't ended their reliance on filename extensions I don't know; do what everyone else does and take a quick peek at the MIME type. Still, this feature will at least save some heartache!
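For the curious, content-based type detection is easy to demonstrate from a shell: the file utility inspects the actual bytes rather than trusting the extension. A small sketch (the filename is illustrative):

```shell
#!/bin/sh
# The extension says .txt, but `file` looks at the actual content
# and reports an HTML MIME type regardless of the name.
printf '<html><body>hello</body></html>' > /tmp/renamed.txt
file --mime-type -b /tmp/renamed.txt
```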

Overall Interface

This largely comes down to personal choice, but when I reviewed the RC I stated that I didn't like the look of the icons. Guess what: they still haven't grown on me. The whole interface is reminiscent of those children's 'laptops' you can buy from Toys 'R' Us. It looks like a cartoonist puked over my desktop.
You can change the look, but I can't find the option to take my Start Menu and Taskbar back to how I like them. No-one else seems to have reported the location of this function, so it looks like users will have to learn to love the new interface. It does bring some improvements, but overall I don't consider it a winner.

XP Compatibility Mode

God, what a farce! Users with XP Compatibility Mode running will find that they need to install a second set of anti-virus, and maintain the XP instance separately. How many will realise that they have, in essence, XP running on a second PC within Windows 7? How many will be willing to find the time to maintain both instances? Not many, I'd assume. It's possible to run something similar in Apple's latest and greatest, so why have MS bundled in the extra work?

I'm guessing the long and short is that MS do not want you running XP apps, they want you to upgrade to the native version. I imagine it's a tactic that may well pay off as well!

Control Panel

The Control Panel still sucks. I read a comment elsewhere on the net that most of the new names for areas of the control panel looked like they had been chosen by a committee. This hits the nail right on the head: why 'Network Settings' wasn't deemed clear enough is beyond the understanding of any sane person! Given that most users won't venture too deep into the Control Panel (and will presumably continue to refuse to do so), could MS not have left this area alone? (Or at least refrained from trying to apply management speak to everything.)

Overall View

As with the RC, I'm reasonably impressed with 7. I wouldn't choose to use it as my main OS, but I wouldn't object as strongly as with Vista, XP etc. It still lacks a lot of what I consider essential functions, but this is more to do with what I'm used to than anything. For those diehard Windows users out there, you've probably got the best featureset you've ever seen. For those who have stepped outside the Microsoft circle, it's an improvement, but it's still not quite there.

There are a few teething issues still, and I for one greatly resent the theft of the fish desktop. Most of these issues will no doubt be resolved in future updates (though I doubt the fish will be returned!), but for me 7 still doesn't quite cut it. Perhaps Windows 8 will be the point where Windows offers the features I need and use?

Republished: It's a Dangerous World

This was originally posted to back in 2009

It's a dangerous world out there; there are more than a few scams running at the moment. Whether you are trying to crack someone's Facebook account, or simply have a landline, there are people after your money.

Take the current phone line scam: some guy phones you up and tells you that

you owe the phone provider money, and you're going to be cut off if you don't pay.

As 'proof' he invites you to try and phone someone, so you hang up, but he keeps the call open on mute. You don't get a dial tone because you're still technically talking to him.

You get frustrated, and hang up, he hears this, hangs up and calls you back. Believing that you have been cut off, you give him your card/bank details.

It's as simple as that, you've been taken for a fool! Now how could you have avoided it?

  • Tell the guy you will phone him back, then call the number on your bill (never use the number he gives you)
  • Don't give out your personal information if you're not 100% sure

With those two things in mind, you've avoided the scam. I never give any financial details to someone who has phoned me; I phone them back on a number that I trust (i.e. from a bill) and then pay them. It sounds picky, but it's a) safe and b) kind of fun!

How can it be fun? Have you ever been phoned by your bank, who then want you to confirm who you are? They ask for full name and address, date of birth, and type of last transaction. How often do you think: hang on, you called me? So I insist that they answer my questions first. What's my date of birth? What's the first line of my address?

So they claim they can't answer that due to data protection. To which they get asked, is this a sales call, or is it urgent? If it's sales, don't bother, if it's urgent, I'll call you back, and if it's in between, stick it in a letter. Banks especially seem to get really aggravated when you do this!

I don't conduct financial transactions with people who phoned me. Simple!

That's all it takes to keep you safe from this kind of confidence trick.

The scam in question doesn't just target one provider, though it does appear to be BT who have identified it. There's also a similar scam being run where the scammer claims to be Ofcom;

You need a digital upgrade on your line, please pay £6 now

This scammer also 'cuts you off'. Again, this is a scam: £6 may be a small amount of money, but once they have your card or bank details, you've had it!

Again, follow the same rules as above, and you'll be safe from this trick.

Further information can be found here.

Republished: Why you should never share Login Details

Originally published on Aug 2009.


Anyone who works in IT in any form knows the headache: despite signing to say they won't, users insist on sharing their login details with everyone! Whether it's because someone else can't remember their own username or simply because it's easier than logging out.

On occasion it happens because the user didn't lock their desktop before walking away, and someone else happens to need to use a PC 'quickly.'

We all know it, users just don't care about security. Why should they? It's 'Your' network to look after, not theirs. This article is aimed at that particular group to try and highlight exactly why they should care.


As a user, your username and password identify you to the system as, well, you. Anything that happens under your login will be assumed to be you. In fact, if you read your IT security documentation (you know, that thing they got you to sign) it will probably state that you are liable for everything that happens under your login.

Let's leave aside the confidentiality of documents that you may have access to. If you work in that sort of job, you should be quite aware of the implications of giving someone unfettered access to such documents. Let's focus instead on the direct impact that sharing that password has on you.

You allow one of your colleagues (let's call him Joe) to use your login, and he starts by writing the report that's overdue. So you go to have a coffee and a natter with someone else whilst he gets up to date. In the meantime, Joe has got bored and decided to spend 5 minutes on the net, just while he gathers his thoughts.

Now Joe goes to his search engine of choice and types in the words Desperate Housewives (he's a fan of the show!) and without thinking hits the 'I'm Feeling Lucky' button. This takes him to the first link in the list, a site dedicated to the wrong type of Desperate Housewives. Now from an IT admin's point of view, Joe didn't do that, you did. The logs will show that your username accessed it, as the system has no way of knowing that Joe was using your login.

Now imagine Joe likes what he sees and decides to browse for a bit longer; it now looks as if you are deliberately accessing unsuitable content from work. Joe probably doesn't realise that Internet connections are logged, but as he's doing the browsing on your behalf, it may be that he doesn't care.

Now if the Company is particularly strict, it's quite possible that your Job could be in danger. Is Joe the sort of person who would step up, admit responsibility and lose his own job instead? He might feel bad, but perhaps he has a family to support?

More to the point, if Joe did step up and admit responsibility, he could get fired, but you would still be in for a disciplinary for breaching IT security procedures. Not something you are too likely to lose your job over, but still not something you want on your record!

All this could have been avoided simply by logging yourself off and letting Joe log himself into his own user area.

This article has been quite tongue in cheek; the browsing of adult content is just one of the many things that could happen, and by far not the most serious. You may not overly care if the corporate network gets hacked (might even get a day off out of it) but you will probably start to care when all fingers start pointing at you.

It doesn't matter who actually did it, the system will think it is you, and by the time you hear about it, a lot of managers will have heard that it was you. You're then reliant on the actual perpetrator owning up to it, which depending on the person and the severity of what they did, they may not.

The simplest way to avoid this, don't share your username and password with anybody. You wouldn't let them have your bank details, so why let them assume your identity in any other way?


Republished: Basic Malware Detector For Linux

This was originally published on in Jun 2009

OK, if this is of use to anyone then fantastic!

It's a simple script that will generate MD5/SHA1/SHA256 sums of all files within your PATH. This is based on the PATH variable on my machine at the time of writing; in fact it also checks the sums of my backups (you'll probably want to remove the /mnt/exthd line).

It's simple to use: all you need to do is burn the generated disc image to a CD for use when you check your system. It is based on the idea that you trust the security of your system at the time of generation, and there are a few caveats:

  1. Must be run as root (you can run as a normal user, but will get a lot of Permission Denieds)
  2. Won't notice if new executables appear (to be changed at a later date, maybe!)
  3. You must burn the disc image (if you leave it on the system, and it's compromised, the attacker could regenerate your image)


There are a couple of steps before you can get the script working. You'll need nothing more than a text editor!

  1. You need to specify the checksum program to use (default is sha256sum)
  2. You may want to change the directories that are checksummed


Calling just the script, or using --help, will display usage options. Despite what is shown, all that is currently supported is --full and --help.

Using the first will generate a checksum of every file stored within the directories specified within the script; these will then be stored in an ISO image along with the verification script. This should be burnt to a CD immediately.

Upon mounting the CD (to run your check), cd into the mounted directory and run


which will then check all files stored within its database. It will provide you with a prompt before it goes away; read it carefully and then press enter.
Should any discrepancies be found, they will be piped through less, but the file will remain in /tmp
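The core idea can be sketched in a few lines. This is not the actual script — the directories, filenames and default checksum program here are illustrative stand-ins — but it shows the generate-then-verify pattern the tool is built around:

```shell
#!/bin/sh
# Sketch only: record a checksum for every file in the monitored
# directories, then later re-check them and report anything that changed.
SUMPROG="sha256sum"          # md5sum or sha1sum would also work
DBFILE="/tmp/checksums.db"   # illustrative path; burn this to CD in practice

# Generation pass - run while you still trust the system
find /bin /sbin /usr/bin /usr/sbin -type f -exec "$SUMPROG" {} + > "$DBFILE" 2>/dev/null

# Verification pass (run from the burnt CD) - show only files that no longer match
"$SUMPROG" -c "$DBFILE" 2>/dev/null | grep -v ': OK$'
```

The `-c` flag makes the checksum tool re-read each recorded file and compare sums, which is exactly the discrepancy check described above.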




Republished: Phorm launches the InPhorm Newsletter

Originally published on 29 June 2009

In a casual spare moment I clicked onto Phorm's website; once I got past the vomit-evoking mess that is the Webwise Discover advert page, I noticed that there has been a bit of a shake-up since I last visited.

I certainly don't remember them having a newsletter called InPhorm (will the play on words ever wear thin?), so I figured I'd give it a browse. Once again, Phorm are seemingly desperate to be viewed as no different to Google. Love them or hate them, there's no denying that Google is something completely different: Google isn't talking about putting kit into my ISP in order to analyse almost every aspect of my datastream, and Google is reasonably easy to avoid if you wish.

Read the following tidbit from the newsletter

Undoubtedly the most important development is the unveiling of Webwise Discover. For over a year, Phorm has been in the rather unusual position of being evaluated more on our revenue model than our consumer proposition. It's similar to talking about Google showing you advertising, while leaving out the search part. But no more.

The fact is our technology will allow ISPs to partner with websites to create a unique consumer experience; and Webwise Discover is a perfect example of this. The result of several years development, we have launched a technology that, rather than simply being a better website, has the ability to make all websites better.

Webwise Discover is the ultimate recommendation engine. All you'll need to do to get personalised content automatically is to browse the internet - no boxes to check, no forms to fill in. Just show up at any participating site and it will show you stuff that is right for you. It's the simple way to get a more interesting and useful browsing experience.

One of the major concerns that everyone has with the technology is detailed in that second paragraph. All you have to do to send Phorm your datastream is browse the internet, no horrible checkboxes to tick. Unless of course, you don't want Phorm knowing which sites you visit and when. Claims of anonymity are fine, but where's the proof? It's all dependent on trust; there's nothing to stop BT taking a peek at everything I visit, but the point is I (just about) trust them not to.

I can't say the same for 121Media (let's call a turd a turd, shall we?). How can I trust a company that previously created malware (they claim adware, but it's still a turd) with something as precious as my privacy? Are you willing to allow a company that once allowed its software to be installed onto an unwitting user's computer to analyse your traffic? All in order to serve you 'more relevant' ads? Because let's face it, that is what they are about; anti-phishing comes as standard in most modern browsers, and Discover appears to be something of a short-lived novelty.

So it'll provide you with links to websites that it believes are relevant to you, but aren't you smart enough to find most of them anyway? So far, I've yet to see anything to suggest that the websites it will offer will include any not using the OIX advertising network. So, to put it another way, they will provide a nice polished turd to point you towards sites containing more of their 'more relevant' adverts.

Perhaps I'm wrong, maybe Discover will list websites not affiliated with the OIX network (I expect to see a heavily edited quote on StopPhoulPlay if I am), but it doesn't change the underlying realities. This is a company determined to make money, and their past behaviour suggests that they don't have too many scruples about how they do it. Advertising is their game, and it's a tough game; it could be just too tempting to remove the anonymity filter from the system. Who would know? Not BT and not us, or at least not until far too late.

Now a fantastically optimistic paragraph says the following

Consumers in the market research we conducted responded with a level of enthusiasm which leaves no doubt as to the reception Webwise Discover will have as it is deployed. As part of our launch activities, we also held an evening reception with many of the top websites in the UK. The response was virtually the same everywhere: we like this and see how it creates value for us.

Which is absolutely great, until you take this into account. That's right, Phorm didn't exactly mention DPI in the survey. Perhaps because we wouldn't understand it, poor cretins that we are. The problem for Phorm is that those who have the basics explained to them don't like the idea. Those who understand DPI on a deeper level hate the idea.

I mentioned recently that I hadn't spoken to anyone who liked the idea of Phorm once it had been explained to them; that has now changed. I mention it because honesty is an important part of a balanced argument. This person was not against it, but also wasn't for it. The view was that as the internet was not used very much, she didn't think there were any privacy implications for her. Frankly, that's about the best that Phorm can hope for!

Although it isn't mentioned in the newsletter, Phorm do have a link leading to this article by the ISBA - 'The Voice of British Advertisers' - which calls the EU's actions over Phorm a bit premature (exact words: the EU is getting ahead of itself). I think this is supposed to be taken as support for Phorm's position, which it is, but it's hardly surprising: an advertising body supporting a company who believe they can increase advertising revenue. Hmmmmmm...... did not see that coming!

ISBA, the voice of British advertisers, says concerns about the new technology – which can help refine and personalise the advertising content received by online consumers – "can and should be addressed by the UK's successful system of advertising self-regulation."

I'm no expert in the advertising field, but I'd say the issues raised have bugger all to do with the advertising itself. In fact, I'd say the whole debate centres on the underlying technology, the adverts are just the end results. Most people tolerate adverts on the net, they are an unfortunate necessity, but that's not the same as saying it's OK to track our every move on the net.

I'm assuming that the "UK's successful system of advertising self-regulation" refers to the Advertising Standards Authority. How they factor into the debate is unclear; OK, they do deal with the placement of adverts as well as the content, but I doubt their remit extends to the current debate. In fact, I'd go further than that: being an agency funded by the very people they regulate, I'd say they have absolutely no business making any form of decision about what does and does not get placed into a telephone exchange. I doubt that anyone at the ASA is qualified to understand the technology, and I doubt anyone could believe that there would be no bias in their decision.

The technology has already been given the green light by the Information Commissioner's Office, the UK's data watchdog. And earlier this year, as an example of the strength of the self-regulatory system, the Internet Advertising Bureau, in consultation with industry bodies including ISBA, published its Good Practice Principles for behavioural targeting. Ten businesses have initially committed themselves to the principles, including Google, Microsoft, Platform A, Yahoo! and Phorm.

The fact that the ICO cleared such an unpopular technology is one of the reasons that the EU is involved. You cannot use the cause as a defence against the cause itself; that would lead to a paradoxical world. Whilst the ISBA may have published its Good Practice Principles, it still fails to address the underlying issue. And for those who have forgotten, it is this;

Phorm want to read every page you read, and then make a note of anything of interest therein. They promise not to record that it was you that read it, simply that your number viewed that category.

A world where adverts are worth more money: don't think anyone can claim that the ISBA doesn't have a vested interest in this one.

I'll post more of Phorm's astounding tidbits as and when I find them.

Writing a Front End for Claims_DB Part 6 - Closing Notes

This content was originally posted to

Writing a Front End for Claims_DB - Part 5 -

In this tutorial we have inserted a database into the Claims_DB system and created a simple front end; users can add records and run a pre-defined query.

There are however a few caveats within this tutorial, and within the Claims_DB system itself.

Because of the use of temporary files within the processing functions of this tutorial, only one user can access your system at one time. Two simultaneous connections could lead to a data clash; this is currently also a problem within Claims_DB itself, and one that is being given a high priority for the next release.

If you are writing front ends for yourself, it is advised that you devise a way to avoid a clash of temporary files within the front-end. This could be implemented by appending the remote hostname to the filename of the temporary file, or simply by loading everything into variables instead.

If you can code around this issue, then when the next release of the database engine is made you should have a functioning multi-user system.
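The temp-file suggestion above can be sketched very briefly. This isn't part of Claims_DB itself — the filenames are illustrative — but it shows two ways to make each request's working file unique:

```shell
#!/bin/sh
# Sketch: give each request its own temporary file so two simultaneous
# visitors can't clobber each other's intermediate data.
# Option 1: mktemp generates a guaranteed-unique name.
TMPFILE=$(mktemp /tmp/claims.XXXXXX) || exit 1

# Option 2 (the approach suggested in the text): append something
# per-client, e.g. REMOTE_ADDR, which the webserver sets for CGI scripts.
# TMPFILE="/tmp/claims.$REMOTE_ADDR"

echo "intermediate results" > "$TMPFILE"
# ...use "$TMPFILE" wherever a fixed /tmp name was used before...
rm -f "$TMPFILE"
```

`mktemp` is the safer of the two, since two requests from the same host (or behind the same NAT) share a REMOTE_ADDR.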

You will also note that we did not provide a front-end page for the removal or editing of records. This is because the engine doesn't yet support that functionality very well, although it is improving. It is possible to DROP a record, or edit it, from a front-end, but it is dangerous functionality.

If you require this functionality try something along the lines of the following


# Deletes a record from the table - DANGEROUS
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See for details


# Get your QueryString and store it in the variable COL1
# Store the Column that your QueryString appears in in COLNUMBER
# Best to only use Primary Keys for this code

DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/ -query-line "$COL1" $COLNUMBER > /tmp/LINENO.tmp
LINE_NUMBER=$( cat /tmp/LINENO.tmp )
READ_LINE=$( sed -n "$LINE_NUMBER"p "$CLAIMS_PROGS"/db/"$DBNAME"/Catalogue.csv )
# Use grep -F so that characters within the record aren't treated as regex syntax
grep -vF "$READ_LINE" "$CLAIMS_PROGS"/db/"$DBNAME"/Catalogue.csv > /tmp/CLAIMSTMP.db
cat /tmp/CLAIMSTMP.db > "$CLAIMS_PROGS"/db/"$DBNAME"/Catalogue.csv

Take good note of the comments, and test any implementation very carefully! It would be very easy to hose the entire table this way. If you were using this method to edit a record, you would then call Claims_DB as if you were adding the record for the first time.

There is a great deal more that you can do, the query-line call above is very useful if you wish to create a form or report to find a record by its Primary Key.

You should also note that the GET method of submitting forms is only really useful for small forms: both browsers and webservers will only accept URLs of a limited length, so large forms, or forms with a lot of input, will not work. Internet Explorer has a particularly short limit, so if you implement using the GET method and the submit button doesn't seem to work, try re-submitting from Firefox or Opera.
For long forms use the POST method, though you will probably find it easier to parse POST data using Perl or PHP.
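POST data can also be read from shell if you prefer to stay in bash: the body arrives on the script's stdin rather than in the URL. A sketch, using the standard CGI environment variables (this isn't from the original tutorial):

```shell
#!/bin/bash
# Sketch: with method="post" the form data arrives on stdin, and the
# webserver sets CONTENT_LENGTH to its size in bytes.
if [ "$REQUEST_METHOD" = "POST" ]; then
    POST_DATA=$(head -c "$CONTENT_LENGTH")
else
    POST_DATA="$QUERY_STRING"
fi
# POST_DATA now holds "RecordID=1&Desc=Bolt&..." ready for the same
# sed-based decoding used for GET requests in this tutorial.
```

Because the data never passes through the URL, the length limits described above don't apply.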

Be aware that long reports can take quite some time to generate; if anyone can find a faster method than the one shown on the previous page, be sure to let me know!

To make these scripts 'live', place them into your webserver's CGI folder, and ensure that they are executable by whatever user your server daemon runs as.

A tarball containing copies of all the scripts we've written today can be obtained from here.

Writing a Front End for Claims_DB Part 5 -

This content was originally published to

Tutorial Part 4 -

So far in this tutorial we have created a landing page, and a means for users to add records to the system. What we haven't done yet is create a way for them to retrieve data; worse, we've teased them by putting an option on the landing page that currently does nothing.

Well, now we are going to create the script that is supposed to sit on the end of that button. So without further ado, let's create our headers


# Part of the Claims_DB Sample Database Frontend
# Generates a report of all stock for a given location on the bike
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See

# Set yer variables here


# Tell the browser our MIME Type
echo Content-type: text/html
echo ""

Now we want to grab the Request URI to find out which query criteria the user selected.

# Grab the Query Criteria from the Request URI
REQUEST_URI=$( /bin/env | grep "REQUEST_URI" )

# Separate out the Criteria
QUERYSTRINGA=$( echo "$REQUEST_URI" | sed -n 's/^.*CRITERIA=\([^&]*\).*$/\1/p' )

# Remove URL Encoding
QUERYSTRING=$( echo "$QUERYSTRINGA" | sed "s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed "s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed "s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/£/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed "s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed "s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
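That sed chain works, but a generic decoder is considerably shorter. As a sketch (not part of the original tutorial; note it expands every %XX sequence, so it doesn't produce the [SEMICOLON]-style placeholders the chain above uses):

```shell
#!/bin/bash
# Sketch: form-decode a string by mapping '+' to space and every %XX
# sequence to the byte it encodes.
urldecode() {
    local data="${1//+/ }"         # '+' encodes a space in form submissions
    printf '%b' "${data//%/\\x}"   # rewrite %XX as \xXX and let printf expand it
}

urldecode 'Parts%20%26%20Labour%21'   # -> Parts & Labour!
```

One caveat: because it decodes everything, you would still need to neutralise characters that are dangerous to pass onwards (quotes, semicolons) before handing the result to other commands.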

Now we know what the query string was, we can make the query and retrieve the data needed to generate our report.

# Get the Column Headers

DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/ -headers > /tmp/REPHEAD.tmp

# Run the Query


# Generate the Title Row

COLT1=$( awk -F\, '{print $1}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT2=$( awk -F\, '{print $2}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT3=$( awk -F\, '{print $3}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT4=$( awk -F\, '{print $4}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT5=$( awk -F\, '{print $5}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT6=$( awk -F\, '{print $6}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT7=$( awk -F\, '{print $7}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT8=$( awk -F\, '{print $8}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT9=$( awk -F\, '{print $9}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT10=$( awk -F\, '{print $10}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT11=$( awk -F\, '{print $11}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
COLT12=$( awk -F\, '{print $12}' /tmp/REPHEAD.tmp | sed 's/\"//g' )
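As an aside, the twelve near-identical assignments above could be generated in a loop. A sketch, not from the original tutorial; it switches to a bash array, so later references would become ${COLT[1]} .. ${COLT[12]} rather than $COLT1 .. $COLT12:

```shell
#!/bin/bash
# Sketch: build the twelve column titles in a loop. awk's -v flag passes
# the shell loop counter in as the field number to print.
declare -a COLT
for i in $(seq 1 12); do
    COLT[$i]=$(awk -F',' -v col="$i" '{print $col}' /tmp/REPHEAD.tmp | sed 's/"//g')
done
```

The behaviour is identical; it's purely a readability trade-off.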

We can now start by creating the HTML, ready to begin inserting data. You'll notice a reference to a CSS file on this server, this contains formatting to try and ensure that if the report is printed it will not expand over more than one page wide. Click here to view the CSS file.

# Start Generating the HTML

DATESTAMP=$( date +'%A %d %B %Y' )

/bin/cat << EOM
<title>Parts for location $QUERYSTRING</title>
<link rel="StyleSheet" href="/images/benscomputer_no-ip_org_Archive/stylesheets/print_report_land.css" type="text/css">
<body bgcolor="white">
<b><font size="+1">Parts for location $QUERYSTRING</font></b><br>
<font size="+1">$DATESTAMP</font><br><br>

<table style="width: 100%; text-align: left;" border="0" cellpadding="2">
<tr>
<td style="vertical-align: top; text-align: center;"><b>$COLT1</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT2</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT3</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT4</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT5</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT6</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT7</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT8</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT9</b></td>
<td style="vertical-align: top; text-align: center;"><b>$COLT10</b></td>
</tr>
<tr><td colspan="13"><hr></td></tr>
EOM

What we have at this point looks a little something like this;

Parts for Location (whatever the user queried)

Saturday 20 June 2009

Part Location
Part Number
Number Type
Qty in Pack
Qty on bike

The data will then be placed below this row by row. So in order to achieve this we need to do the following;

# Populate the Table with data, but sort it first

cat /tmp/REBBOD.tmp | awk -F"," '{print $1"," $2"," $3"," $4"," $5"," $6"," $7"," $8"," $9"," $10","}' | sort -n -k 1 > /tmp/REBBOD.sorted

# Remove the old unsorted temp file
rm -f /tmp/REBBOD.tmp

# Go through Row by Row
while read -r a
do
# Extract each column

COLT1=$( echo "$a" | awk -F\, '{print $1}' | sed 's/\"//g' )
COLT2=$( echo "$a" | awk -F\, '{print $2}' | sed 's/\"//g' )
COLT3=$( echo "$a" | awk -F\, '{print $3}' | sed 's/\"//g' )
COLT4=$( echo "$a" | awk -F\, '{print $4}' | sed 's/\"//g' )
COLT5=$( echo "$a" | awk -F\, '{print $5}' | sed 's/\"//g' )
COLT6=$( echo "$a" | awk -F\, '{print $6}' | sed 's/\"//g' )
COLT7=$( echo "$a" | awk -F\, '{print $7}' | sed 's/\"//g' )
COLT8=$( echo "$a" | awk -F\, '{print $8}' | sed 's/\"//g' )
COLT9=$( echo "$a" | awk -F\, '{print $9}' | sed 's/\"//g' )
COLT10=$( echo "$a" | awk -F\, '{print $10}' | sed 's/\"//g' )

# Generate the HTML

/bin/cat << EOM
<tr>
<td style="vertical-align: top; text-align: center;">$COLT1</td>
<td style="vertical-align: top; text-align: center;">$COLT2</td>
<td style="vertical-align: top; text-align: center;">$COLT3</td>
<td style="vertical-align: top; text-align: center;">$COLT4</td>
<td style="vertical-align: top; text-align: center;">$COLT5</td>
<td style="vertical-align: top; text-align: center;">$COLT6</td>
<td style="vertical-align: top; text-align: center;">$COLT7</td>
<td style="vertical-align: top; text-align: center;">$COLT8</td>
<td style="vertical-align: top; text-align: center;">$COLT9</td>
<td style="vertical-align: top; text-align: center;">$COLT10</td>
</tr>
EOM

done < /tmp/REBBOD.sorted

rm -f /tmp/REBBOD.sorted 2> /dev/null

And then finish off

# generate the closing HTML
/bin/cat << EOM
</table>
</body></html>
EOM

# We're finished!
So the finished result should look something like this;

Parts for Location (whatever the user queried)

Saturday 20 June 2009

Part Location
Part Number
Number Type
Qty in Pack
Qty on bike


And so long as the user's browser supports CSS when they try to print the report, it should fit onto the width of a landscape A4 sheet. They may have to set the pagination in Page Setup, though, as some browsers don't seem to support that setting.

The source for this section can be downloaded from here.

Tutorial Part 6 - Closing Notes

Writing a Front End for Claims_DB Part 4 -

This content was originally published to

Tutorial Part 3 -

In the previous section of this tutorial we created a page allowing a user to insert a record into the database. Now we need to create the script that actually processes the input and inserts the record into the database.

This script will be the first time we have called insert_records_claim.

As ever we need to start by generating our script headers


# Part of the Claims_DB Sample DB Front End
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See

# Define your variables here

# Where did you install Claims_DB?

# The name of the Database we are using

# Tell the browser our MIME Type
echo Content-type: text/html
echo ""

This time we need to process the details sent to us. As the form method in the previous page was set to use GET we can grab this from the Request URI.

# Run /bin/env to get environment 
REQUEST_URI=$( /bin/env | grep "REQUEST_URI" )

# Now we need to separate out each of the elements.

# Primary Key
RECORDID=$( echo "$REQUEST_URI" | sed -n 's/^.*RecordID=\([^&]*\).*$/\1/p' )

# Description
DESCRIPTION=$( echo "$REQUEST_URI" | sed -n 's/^.*Desc=\([^&]*\).*$/\1/p' )

# Part Location
LOCATION=$( echo "$REQUEST_URI" | sed -n 's/^.*Locat=\([^&]*\).*$/\1/p' )

# Part Number
PARTNO=$( echo "$REQUEST_URI" | sed -n 's/^.*Partno=\([^&]*\).*$/\1/p' )

# Number Type
NUMBERTYPE=$( echo "$REQUEST_URI" | sed -n 's/^.*NoType=\([^&]*\).*$/\1/p' )

STOCKIST=$( echo "$REQUEST_URI" | sed -n 's/^.*Stkis=\([^&]*\).*$/\1/p' )

# Quantity in pack
PPQ=$( echo "$REQUEST_URI" | sed -n 's/^.*PPQ=\([^&]*\).*$/\1/p' )

# Quantity on bike
QOB=$( echo "$REQUEST_URI" | sed -n 's/^.*Need=\([^&]*\).*$/\1/p' )

# Rating
RATING=$( echo "$REQUEST_URI" | sed -n 's/^.*Rate=\([^&]*\).*$/\1/p' )

# Bike
BIKE=$( echo "$REQUEST_URI" | sed -n 's/^.*bike=\([^&]*\).*$/\1/p' )
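All ten extraction lines above follow the same pattern, so they could be collapsed into a helper function. A sketch only — `get_param` is a name invented here, not something Claims_DB provides:

```shell
#!/bin/bash
# Hypothetical helper: pull one named field out of the request URI.
# Relies on REQUEST_URI already being populated, as above.
get_param() {
    echo "$REQUEST_URI" | sed -n "s/^.*$1=\([^&]*\).*\$/\1/p"
}

REQUEST_URI="/cgi-bin/add?RecordID=1&Desc=Bolt&Locat=Frame"
RECORDID=$(get_param RecordID)   # -> 1
DESCRIPTION=$(get_param Desc)    # -> Bolt
```

One call per field keeps the script the same length, but there's now a single sed expression to maintain instead of ten.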

Now although we have all the information we need, we probably don't want to insert it into the database yet. The information is currently URL-encoded, and Number Type still uses the coding that we gave it on the last form.

We probably could insert it into the database like this and then remove the encoding when we retrieve the information at a later date, but it does mean that should we open the table in a spreadsheet program it would be more or less useless to us. Let's be tidy and do the processing now, starting with the number type.

# Translate NUMBERTYPE into the value to be inserted into the database

if [ "$NUMBERTYPE" == "0" ]
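The body of that translation isn't shown above; one way to write it is a case statement rather than chained ifs. This is a sketch only — the mapped values on the right are invented for illustration, not the tutorial's actual codes:

```shell
#!/bin/sh
# Sketch: map the form's numeric codes back to human-readable values.
# The strings here are illustrative stand-ins.
case "$NUMBERTYPE" in
    0) NUMBERTYPE="Part Number" ;;
    1) NUMBERTYPE="Frame Number" ;;
    *) NUMBERTYPE="Unknown" ;;
esac
```

A case statement also gives you a natural place (`*)`) to catch values the form should never send.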
Next we'll remove the URL encoding from the variables that are likely to contain it. NUMBERTYPE won't, as we have just set it; PPQ and QOB should both be numeric, so they don't need changing. Everything else could contain spaces or special characters, so let's run each through a list of all likely possibilities. (Note: each of the following should occupy one line.)

"s/%0D%0A/<br>/g" | sed "s/+/ /g" | sed "s/%21/\!/g" | sed "s/%40/@/g" | sed "s/%20/ /g" | sed 
"s/%26/\&/g" | sed "s/%3B/[SEMICOLON]/g" | sed "s/%22/[DOUBLEQUOTES]/g" | sed "s/%2F/\//g" | sed 
"s/%5C/\\\/g" | sed "s/%24/$/g" | sed "s/%A3/?/g" | sed "s/%27/\'/g" | sed "s/%23/\#/g" | sed "s/%3A/\:/g" | sed 
"s/%5B/\[/g" | sed "s/%40/@/g" | sed "s/%5D/\]/g" | sed "s/%25/\%/g" | sed "s/%5E/\^/g" | sed "s/%28/(/g" | sed 
"s/%29/)/g" | sed "s/%2B/+/g" | sed "s/%2C/,/g" )
# Decode the URL-encoded form values - one substitution per escape code
urldecode () {
    echo "$1" | sed 's/%0D%0A/<br>/g' | sed 's/+/ /g' | sed 's/%21/!/g' | sed 's/%40/@/g' | sed 's/%20/ /g' | sed 's/%26/\&/g' | sed 's/%3B/[SEMICOLON]/g' | sed 's/%22/[DOUBLEQUOTES]/g' | sed 's/%2F/\//g' | sed 's/%5C/\\/g' | sed 's/%24/$/g' | sed 's/%A3/£/g' | sed "s/%27/'/g" | sed 's/%23/#/g' | sed 's/%3A/:/g' | sed 's/%5B/\[/g' | sed 's/%5D/\]/g' | sed 's/%25/%/g' | sed 's/%5E/\^/g' | sed 's/%28/(/g' | sed 's/%29/)/g' | sed 's/%2B/+/g' | sed 's/%2C/,/g'
}

LOCATION=$( urldecode "$LOCATIONA" )
PARTNO=$( urldecode "$PARTNOA" )
STOCKIST=$( urldecode "$STOCKISTA" )
RATING=$( urldecode "$RATINGA" )
BIKE=$( urldecode "$BIKEA" )
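As a quick sanity check, the same substitution approach can be exercised on a hand-made encoded value (the sample string below is illustrative, not part of the real form data, and only a few of the escape codes are shown):

```shell
# Decode a sample application/x-www-form-urlencoded value with chained seds
RAW="Front+brake%2C+right%20side%21"
DECODED=$( echo "$RAW" | sed 's/+/ /g' | sed 's/%2C/,/g' | sed 's/%20/ /g' | sed 's/%21/!/g' )
echo "$DECODED"    # Front brake, right side!
```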
All our variables should now be ready for insertion into the database, so let's put them in.
# Insert the record into the Catalogue table
"$CLAIMS_PROGS"/bin/ -insert > /tmp/STATUS.tmp

STATCODE=$( cat /tmp/STATUS.tmp )

Now we are going to generate some HTML to tell the user whether we were successful or not. To save a few lines of code, we'll generate the HTML header and then process the event.

# Generate the HTML header
/bin/cat << EOM
<html><head><title>Claims_DB Sample Database Front End</title></head>
<body bgcolor="white"><center>Claims_DB Sample Database Front End - Add Record Stage 
EOM
Now let's check whether the insertion was successful. If it was, Claims_DB will have returned the status "SUCCESS"; if it returns anything else, something went wrong.

# Check whether it worked
if [ "$STATCODE" != "SUCCESS" ]
then
# Something went wrong
/bin/cat << EOM
Your record was <b><font color="red">NOT</font></b> inserted into the Database due to an
error. The status code returned by the database was $STATCODE<br>
Please report this error to your Database Administrator.<br><br>
Sorry for the inconvenience.<br><br>
<a href="/">Return to the Index Page</a>
EOM
else
# It worked!
/bin/cat << EOM
Your record <b><font color="green">WAS</font></b> inserted into the database.<br><br>
<a href="/">Return to the Index Page</a>
EOM
fi

# Generate the HTML footer
/bin/cat << EOM
</body></html>
EOM

# Tidy up and exit - make sure no errors are sent to the browser
rm -f /tmp/STATUS.tmp 2> /dev/null

With that, we have a script that processes the input sent to it from our add_records form. Again, it's aesthetically not the greatest work in the world, but the necessary functionality is there for you to expand upon.

You can download a copy of this source here.

Tutorial Part 5 -

Writing a Front End for Claims_DB Part 3 -

This content was originally published on

Creating A Claims_DB Front End Tutorial Part 2 - Index Page

Part 3 - Add Records

In the previous section of this tutorial we created an Index page. The first button takes the user to a CGI script that allows them to add a record, and that is what we will be creating here. Depending on the functionality you want, you could simply create an HTML page for this, but we are looking for something more advanced.

First, let's create the script headers

# Part of the Claims_DB Sample front end
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See

# Define your variables here

# Where did you install Claims_DB?

# The name of the Database we are using

# Tell the browser our MIME Type
echo Content-type: text/html
echo ""

# Generate the HTML
/bin/cat << EOM
<html><head><title>Claims_DB Sample Database - Add Record</title>
</head><body bgcolor="white">
<center><b>Claims_DB Sample Database - Add Record</b></center>
<br><br>
EOM

Now we need to identify what the Serial Number of the record will be. This is the primary key for the Catalogue table, and there are two main ways of achieving it. When we created the database we could have created a second file called Catalogue_PK and told our front end to read and write primary keys to that. Whenever a record is added we would write that Primary Key to it. Then when we want to create a new record we would read the number back and then add 1 to it.
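For illustration, the counter-file approach described above might look something like the following sketch (the `Catalogue_PK` path and the seed value are hypothetical; this is not the method the tutorial actually uses):

```shell
# Hypothetical counter-file scheme: Catalogue_PK stores the last key issued
PKFILE="/tmp/Catalogue_PK"
echo "4" > "$PKFILE"            # pretend the last record added was number 4

# Read the last key back, add 1 to get the next key, then persist it
LASTKEY=$( cat "$PKFILE" )
NEWKEY=$(( LASTKEY + 1 ))
echo "$NEWKEY" > "$PKFILE"

echo "$NEWKEY"                  # 5
```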

Whilst this method works, it's not compatible with the way that the database engine will create primary keys in the future. So we are going to get our front end to do more or less exactly what the database engine will do.

# What will our Primary Key be?

# Read the entire table
DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/ -read > /tmp/LOCATIONS.tmp

# PK is in column 1
# Now separate out Column 1, and remove duplicates
awk -F\, '{print $1}' /tmp/LOCATIONS.tmp > /tmp/LOCATS2.tmp

# Strip out the text delimiter - "
sed s/\"//g /tmp/LOCATS2.tmp > /tmp/LOCATS.tmp

# Now let's sort it
sort -n -k 1 /tmp/LOCATS.tmp > /tmp/LOCATS2.tmp

# What is the final entry? Load it into a variable
LASTKEY=$( tail -n 1 /tmp/LOCATS2.tmp )

# Work out the next key with a simple bit of maths
NEWKEY=$(( LASTKEY + 1 ))

# Tidy up
rm -f /tmp/LOCATS.tmp
rm -f /tmp/LOCATS2.tmp
rm -f /tmp/LOCATIONS.tmp
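To see what that pipeline is doing end to end, here's a self-contained run against a throwaway table (the file name and rows are made up purely for the demo):

```shell
# A tiny stand-in for the Catalogue table - column 1 is the primary key
cat > /tmp/demo_catalogue.csv << 'CSV'
"1","Brake lever","Front right"
"3","Chain","Drivetrain"
"2","Spark plug","Engine"
CSV

# Pull column 1, strip the " delimiter, sort numerically, take the highest
LASTKEY=$( awk -F\, '{print $1}' /tmp/demo_catalogue.csv | sed 's/"//g' | sort -n | tail -n 1 )
NEWKEY=$(( LASTKEY + 1 ))
echo "$NEWKEY"    # 4

rm -f /tmp/demo_catalogue.csv
```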
Now that we know what the Primary Key will be, let's continue generating our form. We don't want the user to be able to change the PK (too much chance of a clash), but we do want to a) tell them what it'll be and b) send the key with the form. So:

# Continue to generate the HTML
/bin/cat << EOM
Record ID: $NEWKEY
Part Description: <INPUT TYPE="text" NAME="Desc">
<br>Part Location: <INPUT TYPE="text" NAME="Locat">
<br>Part Number: <INPUT TYPE="text" NAME="Partno">
EOM
Now, for the Number Type the database uses two possible values - OEM or Stockist. Let's provide the user with a drop-down box to select a valid option; it'll also mean we can reduce the length of the Request URI when the form is submitted.

# Use drop down box for Number type
/bin/cat << EOM
Part Number Type: <select name="NoType">
<option value="0">OEM</option>
<option value="1">Stockist</option>
</select>
EOM
We'll convert the IDs 0 and 1 back to their true meaning when we parse the submission later. For now, let's finish the rest of the form.
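When we come to parse the submission, converting the ID back might look like this sketch (the variable names, and the 0 = OEM / 1 = Stockist mapping, are assumptions for illustration):

```shell
# Map the submitted NoType ID back to a human-readable label
# (assumed mapping: 0 = OEM, 1 = Stockist)
NOTYPE_ID="0"    # illustrative value, as it might arrive in the query string

case "$NOTYPE_ID" in
    0) NOTYPE="OEM" ;;
    1) NOTYPE="Stockist" ;;
    *) NOTYPE="Unknown" ;;
esac

echo "$NOTYPE"    # OEM
```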

# Generate rest of form
/bin/cat << EOM
Stockist: <INPUT TYPE="text" NAME="Stkis">
Qty in Pack: <INPUT TYPE="text" NAME="PPQ">
Qty on Bike: <INPUT TYPE="text" NAME="Need">
Rating: <INPUT TYPE="text" NAME="Rate">
Bike: <INPUT TYPE="text" NAME="bike">
<INPUT TYPE="submit" VALUE="Save Record">
EOM

# Generation finished

And in those few easy steps, we've created our Add Record form. There's no input validation or anything, which is something you'll probably want to implement, but again that's a little outside the scope of this tutorial.

You can grab a full copy of this source here.

Claims_DB Front End Tutorial Part 4 -

Writing a Front End for Claims_DB Part 2 -

This content was originally published to

Creating a Claims_DB front end Tutorial Part 1 - Introduction

OK, so we've created our Database, now we need to provide users with a way to interact with it. We are going to create a Web based interface using BASH scripts as CGI scripts. These will provide the interface as well as doing any necessary processing.

Let's create and enter our working directory

mkdir ~/src
cd ~/src

Now the first thing we are going to need is an Index page. You could put some sort of Password authentication on this page, but that's outside the scope of this article (if you are going to add a password, make sure you use SSL).
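If you do add a password, one common route is HTTP Basic authentication at the web-server level. The fragment below is only a sketch - it assumes Apache with mod_auth_basic, and every path in it is hypothetical. Basic auth sends credentials essentially in the clear on each request, which is exactly why the SSL caveat above matters:

```shell
# Write an illustrative Apache config fragment protecting the CGI directory.
# Assumes Apache with mod_auth_basic; directory and file paths are made up.
cat > /tmp/claimsdb_auth.conf << 'EOM'
<Directory "/var/www/cgi-bin">
    AuthType Basic
    AuthName "Claims_DB Front End"
    AuthUserFile "/etc/httpd/claimsdb.htpasswd"
    Require valid-user
</Directory>
EOM
```

The user file itself would be created with Apache's htpasswd utility.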

Now, we want our Index page to be pretty interactive, so we are going to have the following elements

Drop Down boxes to generate Reports
Button to Add a Record

The reports fields should allow the user to select values available within the database. Given the dataset we are using, location on bike would be a good choice.

So whilst we are aiming for something like this;


We want the list to generate dynamically, so simply coding it into the HTML won't meet our needs.

So let's create the script

# Index page for Claims_DB front end.
# Copyright Ben Tasker 2009
# Released under the GNU GPL
# See
# Set Variables here
# Where did you install Claims_DB?

# The name of the Database we are using

# Function Main!

# Tell the browser what mime type we are using

echo Content-type: text/html
echo ""

# May as well start the HTML generation

/bin/cat << EOM

<title>Claims_DB Sample Database 

<center><b>Claims_DB Sample Database 
EOM

The script now generates the headers, so our page would render at this point. There would be errors because the code is incomplete, but the server would generate a very basic page if we were to use this. Next we want to provide links to the various functions that we will generate later on;

# Lets create our Add record 

/bin/cat << EOM
<!-- Pass user to the add record CGI -->
<!-- Section ends -->
EOM

So we now have a button allowing us to move to the Add Record form. If we used it at the moment we'd get a 404, as we haven't created that script yet. But let's finish off the Index page first.

# Now lets create the drop down box for our report

/bin/cat << EOM
<!-- User to select Query criteria, then send them to the report generator -->
EOM

# Now we want to run our own query to provide the user with valid criteria
# We want to run a SELECT DISTINCT style query.

# Grab the available locations, we know they are stored in Column 3

# Until new functionality is implemented, we need to grab the entire table
DBROOT="$DBNAME" TABLE="Catalogue" "$CLAIMS_PROGS"/bin/ -read > /tmp/LOCATIONS.tmp

# Now separate out Column 3, and remove duplicates
awk -F\, '{print $3}' /tmp/LOCATIONS.tmp > /tmp/LOCATS2.tmp
awk '{
if (!($0 in stored_lines)) print $0
stored_lines[$0] = 1
}' /tmp/LOCATS2.tmp > /tmp/LOCATIONS.tmp
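The awk block above is emulating the "remove duplicates" half of a SELECT DISTINCT. Run in isolation, a working version behaves like this (the sample file and values are made up for the demo):

```shell
# Some duplicate-laden sample input
printf 'Engine\nFrame\nEngine\nForks\nFrame\n' > /tmp/demo_locs.txt

# Keep only the first occurrence of each line, preserving input order
DISTINCT=$( awk '{
if (!($0 in stored_lines)) print $0
stored_lines[$0] = 1
}' /tmp/demo_locs.txt )

echo "$DISTINCT"    # Engine, Frame, Forks - one per line

rm -f /tmp/demo_locs.txt
```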

# Strip out the text delimiter - "
sed s/\"//g /tmp/LOCATIONS.tmp > /tmp/LOCATS2.tmp

# Read the resulting file line by line and create an Option value for each
while read -r a
do
/bin/cat << EOM
<option value="$a">$a</option>
EOM
done < /tmp/LOCATS2.tmp

# Tidy up!!!!!!!!
rm -f /tmp/LOCATIONS.tmp 2> /dev/null
rm -f /tmp/LOCATS2.tmp 2> /dev/null

# Now finish creating the form

/bin/cat << EOM
<INPUT TYPE="submit" Value="Generate Report"></form>
EOM

# Page generation finished

We now have a valid page; the user will only be presented with locations that exist in the table. We should probably do a little more on the aesthetic side of life, but the basic functionality exists.

You can grab a full copy of the code so far here.

Tutorial Part 3 -

Writing a Front End for Claims_DB Part 1 - Introduction

This content was originally published to

This tutorial is intended to show the reader how to create a simple user interface for Claims_DB. We will be creating a frontend for a sample database. The end result will comprise a set of CGI scripts, which will process input from forms and pass the relevant requests on to Claims_DB itself.

A word of warning: the system is still under heavy development, so writing any sort of front-end is not for the faint-hearted. It's not especially complicated, but you currently need to do quite a bit of the processing and parsing from within the front end.

Certain functionalities will be added in the next release, so some of the methods shown here (creating an autonumber for example) will become outdated. They will still work, but will be superseded by a more appropriate method.


The first thing we need to do is to create our Database. Claims_DB currently lacks an 'add database' function, so there is a bit of manual work to be done. First, let's grab an existing table - a motorcycle parts listing. You can find that here.

This table is provided as part of the sample database, but we'll treat it as though we are starting from scratch.

Change into the db folder within the Claims_DB folder, and create a folder called Sample_DB

mkdir Sample_DB

This is our main database root, now copy Catalogue.csv into this folder.

Next we need to register the new database within Claims_DB so move back to the parent directory and enter the folder called Databases

cd Databases

Now open the file Databases.csv in your favourite text editor;

nano Databases.csv

and add the following line


The number 4 is the database ID; as long as it is unique, it doesn't matter which number is set. The reason we are using 4 is because I already have 3 other databases. Sample_DB is the name of the database (this must match the folder name, and is case sensitive). There will eventually be a tool to automate this process!

Now that the database is registered, we can move on. However, to protect future functionality (i.e. not necessary at the moment, but may be in the future) we should register the table and the columns of the database, so add the following to Tables.csv


4 defines the database, Table states that the element is a table (In case this table is used for forms, reports and queries in the future), 7 is a unique identifier for the table, and Catalogue is the table name.

To define the Columns, we open the file Elements.csv and add the following lines

7,"Part Location",3
7,"Part Number",4
7,"Number Type",5
7,"Qty in Pack",7
7,"Qty on Bike",8

Updating these two files is not absolutely necessary, but it can make your life easier in the future. The Elements table can be especially helpful to use as a reference when writing Queries and Reports.

OK, so the database is prepared, we now need to write the front end.

Writing a front-end to Claims_DB Part 2

Republished: Looking at the Digital Britain Report

Originally published on 16 June 2009

Well, the long awaited 'Digital Britain' report is out. It's 254 pages long, and not being at the RSA I haven't had a chance to have a full read-through yet. I have managed to pick out a few highlights, though, and although certain sections made me want to throttle whoever suggested Lord Carter was the right man, he does redeem himself in other areas.

So let's take a look at the good, and then the bad;

There will be no additional levy to compensate rights holders for format shifting. Quite right too: the media cartels cannot complain about users getting things for free (read: copyright infringement) and then expect us to pay for something we've already bought, just to play it on a different device. Unsurprisingly, this is quite an unpopular decision as far as the media cartels go.

There will be no three strikes rule on file-sharing. Unfortunately, this good aspect is tainted by the proposals used instead, more on that later.

There will be no increase in the cost of a television licence. This doesn't mean that it won't increase, just that he hasn't recommended one.

Unfortunately, that's about all the good points I've noticed so far. So, on with the bad...

The proposed file-sharing measures would be laughable if they weren't potentially so dangerous. The system will continue on the same basis as it already does: rights holders identify IP addresses. The relevant ISP will then send its customer a letter stating that what they are doing is illegal; after a number of warnings, the ISP will release the customer's details as a result of a court order. The rights holder will then begin legal action against the infringer.
The issue with this being that they do not have a great track record for accuracy; we've all seen the news stories about their libellous accusations, and have also seen the intimidating tone of the letters sent out by their lawyers. To make matters worse, a number of the torrent trackers have employed the defence mechanism of placing random IP addresses into torrents. This means that people who have never even used BitTorrent could be accused. Whilst I can understand the theory of making the current (and already flawed) method less productive, it is going to severely inconvenience a large number of people. How exactly do you prove beyond doubt that you did not possess, let alone share, a file? Regardless of guilt, you are going to lose access to your PC whilst discovery procedures are underway. Not good!

Worse than this, if certain ISPs fail to reduce illegal filesharing by 70% within 12 months, further restrictions will be allowed. These include bandwidth capping, traffic shaping, protocol blocking, host (IP/URL) blocking, port blocking, and content inspection (read: DPI) and blocking. So, in other words, if a certain ISP fails to reduce illegal filesharing by the prescribed amount, it could opt to block all BitTorrent traffic. This will affect those using the service perfectly legally (Linux ISOs are often distributed this way) through no fault of their own.
A cynic would also suggest that failing these targets is a good excuse for ISPs to mandate bandwidth throttling on all their customers, allowing them to further oversell their bandwidth.

The report also announces a new tax on fixed phone lines, in order to fund further broadband development. It is currently only 50p a month, but there's no reason the government won't try to inflate it. Whatever the final price, the tax will come into effect next year, and I'm guessing will probably rise year on year. All to bring broadband into urban areas, despite the fact that surveys have shown a lot of those not on the net are that way by choice. The mobile operators are currently expanding their coverage, and there's a new HSDPA link coming soon, so urban areas could get faster speeds for a fraction of the cost of fixed-line broadband.

Standard FM/AM Radio will be killed off by 2015, despite the current failure of DAB to make any impact. Contrary to the adverts, DAB radio is often of inferior quality in many places, and a slightly weaker signal can make listening impossible (especially compared to AM/FM). Portable DAB radios also seem to suffer problems, and again analogue radio is generally preferable. Either way, Lord Carter wants that section of the spectrum vacated by the end of 2015.


So all in all, the bad currently outweighs the good, but Lord Carter is off to a nice cushy job, so I guess for him it's not really an issue. His lack of foresight in some areas is shocking, especially with regard to the sanctions to combat illegal filesharing: content examination will be pretty much useless once the P2P clients start using encryption by default, but the DPI kit will still cost the ISPs a pretty penny.
All his other sanctions put 'honest' customers at a disadvantage, and allowing the continuation of the current processes used to identify illicit filesharers is downright dangerous. The rights holders need to provide more concrete evidence that a given file is definitely being shared by a given IP address. They also need to make damn sure that they are correctly identifying files; I'm hoping that they use SHA1 sums, but I wouldn't be surprised to hear that they rely on filenames alone.

The scrapping of analogue radio is frivolous; it stinks of an attempt to force DAB onto an unwilling public. Most people will put up with a slight amount of static caused by a weak analogue signal, but DAB just doesn't give you that option. Similarly, based on recent surveys, the phone-line tax has probably been created in order to provide broadband to people who just don't want it. Add that to the certainty that it will increase, and you're looking at some strong reasons for Labour to lose votes.

As consumers, we have been protected a little. The media cartels are still unable to charge us twice just to move a track to a different format, though this hasn't been entirely ruled out. Rather, it was felt a recession was not the right time to be increasing the cost of consumer hardware. So we can probably look forward to that one being levied in the future.

Republished: A look at BT's Trial Documentation

Originally published on 14 June 2009

Now, it can hardly have escaped anyone's attention that BT ran some very questionable trials of Phorm's system. It's been on BBC News as well as many other outlets, as has the Government's refusal to take action. This has led to the EU intervening on our behalf, not that much has come of that so far.

But most of the media has focused on the RIPA element of it, that is to say the illegal interception of users' traffic. Having read the leaked test documentation (have a look on WikiLeaks), I'd say that there's another element to it that appears to have gone largely unnoticed.

The original trial involved injecting JavaScript into each and every page the user visited (with some unfortunate results on forums), and based on the test documentation, even users who were opted out (not that anyone was given the opportunity in the trials) would find JavaScript being run on every page.

Now let's take a look at the problems with this. Firstly, users who noticed the strange behaviour believed that it was due to malware, and BT did nothing to correct this view. Secondly, it was malware.

BT did not have authorisation to run the software (i.e. the JavaScript) on those users' computers; that is a violation of the Computer Misuse Act. Now, had those users known about the test and been able to opt out, the JavaScript would have continued to run. Again, probably a violation of the CMA.

The test documents highlight another issue: pages with a large number of links caused a problem within the script, and the browser window stopped responding. Phorm's fix for this? Blacklist any pages that they know cause the issues. Given the size of the net, they couldn't possibly block all the problem pages, and that's before you take into account the fact that pages evolve, so a 'known good' page could easily become a problem page.

As it currently stands, a large number of people have blocked the Webwise domains at router level, but it would appear that this may be ineffective. The test documentation makes it clear that the aim is for a completely transparent proxy system, and I suspect that requests will be routed through Phorm's hardware after they have entered BT's network. That is to say, your router may never know that the traffic is going through Phorm's hardware.

The test docs make it clear that a Squid proxy will be used. Now, whilst Squid is a lovely bit of software, it does have a history of vulnerabilities (what software doesn't?). A quick search on Google shows fixes for remote exploit vulnerabilities, as well as Denial of Service issues. Despite denials by both BT and Phorm, the simple fact is that introducing another piece of hardware into the network puts customers at risk. Especially when everyone's traffic is defaulted into passing through said piece of kit.

It's public knowledge that the hardware will be running Squid, so what happens the next time a vulnerability is uncovered? All BT's customers are potentially at risk from a Man-in-the-Middle attack. The best case scenario is that someone DoSes the hardware, and thereby temporarily denies all BT's customers access to the internet. Worst case is that the hardware could begin spoofing domains: your internet banking site could easily be replaced by a lookalike, with the address bar still showing the genuine address even though you are actually somewhere else entirely.

BT's test documentation makes quite a big deal of the fact that the users who noticed the trial attributed it to malware. Well, that's hardly surprising really. Until recently, who exactly would have expected their ISP to secretly run a trial that involves monitoring your every move on the Internet?

Other than that, the test documentation is pretty par for the course. There's a section covering the reliability of their Squid server redundancy, and most other test failures mention that this issue should be resolved with the implementation of 'ProxySense', which is where Squid comes in.

The end result of all this is that their end goal is somewhat disturbing. Depending on how well their newer 'ProxySense' system is being implemented, they could potentially be running it right now and we would find it far harder to notice (not impossible though!). Hopefully they aren't that stupid, but don't deprive yourself of oxygen.

Republished: Phorm, PR Master or PR Disaster

Originally published on 14 June 2009

About a week ago, I wrote about Webwise Discover, Phorm's new 'service'. At the time I questioned just how Phorm's survey managed to find such a large proportion of respondents interested in their service; to me it seemed that these users had not been fully informed before being asked.

It now appears that I was correct. Over at the PC Pro forums (thanks for the tip, Peter) there's a post by a user called Jonaba, who claims he was one of the respondents. He claims that at no point was Deep Packet Inspection mentioned; in fact, the actual reason for the technology was so well hidden that it took him a couple of minutes to even clock what the survey was about.

Now, so far this is one user making claims that are difficult to substantiate, and Phorm would surely spin it that way, but his statements do correlate with what we would expect from Phorm. This is a company whose PR plan includes editing their own Wikipedia article to remove inconvenient information, as well as claiming to have the support of the Home Office for secret tests. Just as those edits came to light, and the Home Office clarified what had gone on, this latest fabrication is starting to come apart. This is the first thread, and people like me are going to tug at it. As more threads come away, their lie will be revealed.

Now, they will claim that they haven't lied, which may be true. They may indeed have had a large number of respondents say they would be interested in Webwise Discover, but if the underlying technology wasn't explained then the results are pretty much invalid. What Phorm have done is run a consumer opinion survey on what looks like a piece of software, with no mention of how it works, and then posted the results of the survey in support of the technology itself. This to me is a lie: 72% of respondents didn't support Webwise, they just thought the flashy Discover front end looked cool. You didn't even mention the important bit - that every time they send a GET request, you'll have a quick skim first.

Kent, if you're reading this, I'll put it into your terms. On the level you claim you will be looking, and then the level that I think you will 'achieve'.

1. You must have a PA or similar. Now, if you sent them out to buy you a paper, would you not be more than a little pissed if they had a quick skim of it before they gave it to you? Worse, they had maintained a list of categories that may interest you based on the paper. Annoying, yes? That's the level you claim you will be at.

2. The level I expect you will be at is slightly different: you ask the PA to nip down to the post office and collect your parcel. They stop off and have a peek inside, and then give it back to you. Does it really matter whether it was a box of paperwork or mail-order dirty knickers inside? Of course not; your privacy has been breached. They looked at something that you expect to be private, and worse than that, wrapped it back up to hide it from you. Would it make you feel any more comfortable knowing that they know people who run a business selling dirty panties? No, didn't think so.

This is more or less what the system entails, and yet Phorm's survey asks about the front end, and whether people would like something that shows them 'relevant' websites. It's the equivalent of asking Kent whether he wants errands written into his PA's contract. It ignores the underlying issue, and tries to give it a new face. This is why it's lying, and that's why it's wrong.

Because of this, the survey results mean nothing. I'm sure most people would vote 'Yes' to a survey asking if I should send them free cakes every day. But I can't expect to use that as any sort of defence when people find I've been stealing them from children. Make no mistake about it either, Phorm are stealing. The data is mine, and I do not want anyone else looking at it, it is private.

Now, I may not have convinced the average reader that Phorm did anything wrong with their survey. All I ask is that you maintain an open mind; this is just the first piece of information in a new area. More is likely to follow, but keep in mind Phorm's past when you consider whether you trust them. If you have already decided that you don't, then contact your ISP.

As a side note, I had an interesting thought about their opt-out procedures. As you are all aware, Phorm's system will be Opt-In by default, and the Opt-Out mechanism will be a cookie. Now take into account the following scenario;

Me: Hi Mr BT Customer Service Agent, I absolutely do not consent to you pimping my data through Phorm or any other Method

BT: Are you sure sir? You'll lose WebWise Discover and Phishing protection

Me: Yes, get it off my line.

BT: You can opt out by visiting Webwise's website and clicking the 'Turn Off' button

Me: Doesn't that use a cookie?

BT: Yes it does sir

Me: Well the EU states that you need my permission to place a cookie on my machine, and I don't give it

BT: You could also set your browser to reject cookies from

Me: I'm not willing to make any changes to my system. Neither am I willing to participate in this 'service'. Disable it at network level.

BT: ?????

OK, so I would come across as a difficult customer, but as far as I can tell, legally I'd be well within my rights. They would probably decide that they need to terminate our service agreement, but if I am still within contract, it could potentially leave them liable (depends on the contract, I guess). Still, it's an interesting thought: how do you opt out if you aren't willing to make any changes to your system?

Perhaps I'll phone them and ask at some point!

Needless to say, if you completed the survey, I'd like to hear from you!

UPDATE: I've just sent BT an e-mail asking what would happen in the scenario given above. I'll post details of the reply, as and when.

Republished: A quick look at Webwise Discover

Originally published on 06 June 2009

Well, as I posted in the News links yesterday, Phorm have launched a service called Webwise Discover. It appears that this is largely a front end, allowing the user to further benefit from having Phorm follow you around the internet.

But let's take a quick look at it;

So, I found it whilst trying to get to Phorm's website, but was instead confronted with a vile Flash intro (god I hate those things).
As annoying as Flash intros are, they don't really reflect on the quality of a service, though they do make me more likely to go elsewhere!

So anyway, once we get past the Flash intro, we are taken to the Discover page (there's a link on there to go to Phorm's corporate page as well), and Webwise Discover is presented to us.

Webwise Discover brings relevant content from across the web directly to you, wherever you are online.

It works by understanding your interests from the pages you visit. So, if you're interested in celebrities and football you will receive a range of the latest stories, video clips, blogs etc. on your favourite celebrity and favourite team.

So, as the name suggests, it is reliant upon the WebWise service itself. That is to say, if you don't have Phorm earwigging your line, you can't use Discover.

For once, I'm not going to delve too much into the problems with the Phorm service. This article is aimed more at looking at Discover, not that there is much information available just yet. Phorm, on the other hand, can't just show the service on its merits; they feel the need to add in a line about privacy that really has more to do with Webwise itself than it does Discover.

Webwise Discover is a free service that will be offered by internet service providers. If you choose to activate Webwise Discover, it will provide personalised content and useful advertising,
and has been designed to never know who you are, keep no record of where you've been, provides free and transparent choice and stays away from anything sensitive. If at any time you don't want it, you can turn it off wherever you see it.

We already know what the transparent choice is: they will opt you in automatically, then you can opt out if you wish. Of course, if you delete that cookie, you're back in.

The statement about turning Discover off wherever you see it gives a (albeit slightly prejudiced) suggestion to me that the Discover service is going to appear on a lot of pages, and will probably bug the living hell out of me. Although I can't see anything on their website explaining the mechanism for turning Discover off, experience tells us that it will probably involve cookies.

Worse than that, I dare say that even when turned off, it will still be present so that you can switch it back on if you want. More bandwidth wasted, more annoyances from flash type boxes.

Now, it appears this service would bug the hell out of me, and even without the privacy implications I think I can safely say that I would not use it. But it would appear that I am in the minority;

Polls say 82% of people liked webwise discover and anti-phishing, 72% just like Discover.
- Populus Research survey of 2075 broadband users

Now (and I know I said I would try to avoid this issue), a large part of me wonders just how informed these users were. I.e. were they shown this shiny new service and asked if they liked it, or were they given more technical detail of exactly what it entails?

People I've spoken to like the look of Discover (some hate it, though), but are generally put off once they are told the mechanism by which it learns your interests. That is obviously a very small sample compared to the 2075 users surveyed, but for myself, I'm not convinced.

So the question is, would you trust your privacy to a company like Phorm in order to use a service such as Discover?

Republished: The Best Bits of StopPhoulPlay

Originally published on 20 May 2009

I was having another curious read of Phorm's PR blog - StopPhoulPlay - and there were a couple of things I noticed. If I'm honest, they made me laugh. There's nothing more you can do when reading such trash.

OK, so let's get started. In this post Phorm talk about the Number 10 petition. Now personally, I believe most of these petitions are a waste of time - I've yet to hear of a success story. But Phorm go one step further,

In the United Kingdom, the tradition of raising a popular petition against a perceived miscarriage of justice has a long and distinguished pedigree, but not one that the privacy pirates felt any hesitation about desecrating.

Now, how exactly does that work? The "privacy pirates" posted a petition on Number 10's website about an issue - an issue that they, and many others, clearly feel strongly about. How is that a desecration of the petition tradition? Surely it's the whole point of petitions.

Still, it gets better. Let's take a look at "The Truth" from Phorm's perspective;

The total number of signatures in support of the petition was 21,403, or 0.2% of the combined broadband subscriber base of the UK's three largest ISPs.

Now, any statistician will tell you that you will never get 100% of people to fill out a questionnaire or sign a petition. OK, there's a huge gap between 100% and 0.2% (assuming the figure is correct), but many of those subscribers were probably unaware of Phorm, or even of the petition itself. Another percentage will question the point of filling out any petition on Number 10's website. Then you have those who saw that you need to provide quite a few details just to add your signature, and probably decided they couldn't be bothered.

It may be a tiny percentage, but it's still tens of thousands of people. I remember being told that one letter to an MP usually represents the opinion of at least 100 people, so if you apply that logic here, that's 2,140,300 people. Give or take.
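As a quick back-of-the-envelope check on those figures, using only the numbers quoted above:

```python
# Sanity-checking the figures quoted above.
signatures = 21_403
claimed_share = 0.002  # Phorm's "0.2% of the combined subscriber base"

# The combined subscriber base implied by Phorm's own percentage:
subscriber_base = signatures / claimed_share
print(round(subscriber_base))  # 10701500 - roughly 10.7 million lines

# The "one letter represents ~100 people" rule of thumb:
print(signatures * 100)  # 2140300 - give or take
```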

There's no way to accurately identify how many are for Phorm, how many are against, and how many just don't care (most BT subscribers I've spoken to are against), so let's move on.

Phorm also takes a look at the Open Rights Group's open letter. In the post, Phorm launches yet another attack on Marcus Williamson. It once again claims that he is a serial letter writer, but a) there's no substantiation, and b) if the letters are all about Phorm then the label isn't really representative.

Phorm also claims that the letter was published purely to create bad PR for Phorm. This is an interesting twist on what I expect to be the truth. I'd say that the letter was published in an effort to make the consumer aware of Phorm, and the planned system. Of course, if I was a PR bod at Phorm, this would definitely be classed as bad PR.

It does not bode well for a company if any releases made for the benefit of the consumer are bad PR. It means that an informed consumer would not choose to use your product/service.

ORG claims that its campaign against Phorm is motivated by the higher purpose of protecting consumer privacy. But if that were the whole story Phorm would be engaging with ORG in a very civilised dialogue since we share the same objective, which is why we have designed a technology that sets a new benchmark for online privacy.

Sadly, this statement is misguided. There is no way that Phorm's system can be deployed as planned and still protect consumer privacy. Yes, there could be some dialogue to improve the privacy aspects of the system, but it's already been said that what the consumer wants is for the system to be opt-in (not opt-out), and for that to be set at network level. I.e. traffic should not touch their hardware unless we've opted in.
Not that hard, is it? Oh, unless you suspect consumers don't want your product. Then it's just not good business to ask permission; better to imply it.

On data storage and anonymity, ORG has made a habit of refraining from criticising any of the major search engines or online companies which daily store massive amounts of personal data and information. For reasons best known to themselves, ORG have chosen to focus most of their vitriol against one company, Phorm, which has not even deployed its technology yet and is remarkable for one fact only - that it does not store any personal information such as browsing histories, IP addresses or search histories. Why is that?

On interception, ORG repeatedly claim that Phorm is like the Post Office opening your mail, a deliberately alarmist and inaccurate statement which suggests that our system is not anonymous. The Post Office tag is also designed to give the impression that consumers will not be offered a clear choice and that our system stores personal information and reads email. Neither of these assertions are true, and we believe that ORG knows it.

Phorm's system introduces a completely new and significantly higher benchmark for online privacy than has existed previously - a unique achievement among the world's many interest based advertisers. Instead of crediting Phorm for embedding privacy at the very heart of its technology, ORG prefers to lob brickbats at Phorm. At the same time, it does not say a word about applications such as Gmail, for example, which is not anonymous, does not offer the non-account holder a choice as to whether their messages are scanned, does store information and does scan email. Again, we have to ask why?

Simple answer? Those systems do not encompass the entire net. They should be opt-in as well, but at least they only affect partner websites, and don't scrape every page we visit. Phorm may promise not to read our e-mails, but the fact remains that they can.
The difference between Gmail and Phorm? I can choose whether I trust Google enough to use Gmail without needing to change ISP. If I don't trust Phorm, then I need to move to an ISP that doesn't use them.

Yes, Gmail may scan my e-mails, but as far as I am aware, only when I use webmail. If I want to read my e-mails and not have them scanned: a POP3 client. Or, of course, I can stick an autoreply on Gmail and change e-mail addresses. Simples!

Now we have a bit about a "very aggressive and discourteous" message. Alex Hanff posted a reply to a question by Marina Palombo, legal director of the Institute of Practitioners in Advertising. Phorm have quoted a single snippet;

She must have found her Law Society membership in a Christmas cracker.

Which has been taken completely out of context. Have a read of the post here. They've even gone so far as to capitalise the S to make it look like an independent sentence. It's actually the end of a sentence making an observation on the motivation for her question, followed by a post that answers her original question. Aside from the comment about Christmas crackers (which I suspect was written tongue in cheek) there is no discourtesy; the post may read as a little patronising, but it's certainly not aggressive.

Now for the post on the EU's infringement proceedings. Phorm are quite right in stating that it is a matter between the Commission and the Government, and that Phorm are not party to it. Conveniently, though, they forgot to mention that it has come about as a result of complaints about whom? Oh yeah, Phorm! It's the way the EU works: they won't take action against Phorm directly unless they have to; instead they will tell the member state to be compliant, and that's what they've done.

So what do we think of Phorm's blog? Frankly, it's the best laugh I've had in years. If I were a professor at a law school, I'd be displaying it as a prime example of how to twist facts to suit. There's nothing on that site that is clearly a lie, but it is far from accurate. The truth has been taken and re-invented to suit the writer.

In the interest of Full and Open disclosure I should point out that I am far from an advocate of Phorm. I believe it is a gross invasion of my privacy, and I will change ISP the day that it is rolled out. I will also continue to make sure consumers are aware of who their ISP is getting into bed with.

Republished: A look at the Windows 7 RC

Originally published on 06 May 2009 (Images sadly missing at restoration time)

So, being a fairly well balanced person, I thought I would give Windows 7 at least the benefit of the doubt. After a surprisingly quick download (either MS prepped their servers, or everyone else has been using BitTorrent!), I started installing Windows 7 Build 7100.

Full disclosure: the recommended minimum requirements for the 32-bit version are 1GB of RAM and a 1GHz processor. I installed it in a virtual machine allocated 600MB of RAM, running on top of a 2.6GHz processor.

Because I was running below the minimum RAM requirement, I'm not going to comment on speed except where things were painfully slow (there were a few such places).

So, let's get started;

The installation process has a disk partitioning menu that is far more user friendly than in days gone by - anyone else remember the XP screen? That said, the installer's increased processing requirements mean it runs more slowly than it strictly needs to.

What hasn't changed, however, is the need to restart the system more than once. The user sees a section of the installer that mentions the system may need to restart several times. Admittedly, I didn't notice these, as I had gone to make dinner whilst the system copied its files. I did notice two restarts, so it may well be that I didn't actually miss any!

After the first restart the installer runs from the hard disk, and the setup process quickly begins. The user is prompted to set a password; it's not strictly required, but it is recommended. If the user enters a password, they are required to enter a password hint to help them remember it.

So far, all pretty standard stuff, with the odd tweak here and there. The system then asks the user what they want to do about Automatic Updates. I quite like this approach, because it should reduce the risk of users not being aware it's on or off by default. That said, as most Windows 7 sales will probably be OEM, many systems will have been configured beyond this section before the unit even leaves the shop. Still a nice touch nonetheless.

Next, the user is asked to set up their network. The user is informed that Windows has detected a network (assuming it has, of course) and asked to classify it as Home, Work or Public, with a little note at the bottom telling them to select Public if unsure. One of the examples given for a public network is a wireless hotspot, though one does wonder exactly why you would be installing Windows in a wireless hotspot. But then it's probably just reusing the main network wizard.

Now this is where the shock began: Windows 7 loaded the desktop (OK, it took a little time doing the initial configuration) and, rather than the screenshots we've seen in the Beta versions, XP's Teletubby hill or the God-awful Vista default, the screen displayed this rather nice picture of a fish!

Unfortunately, this was about the best part of my first impression. Most of the Aero features were disabled because of the lack of a high-end graphics card (indeed, lack of even a medium-end graphics card) within the virtual machine. I was prepared for that, but what I wasn't prepared for was the awful style of the interface without Aero. The taskbar icons are reminiscent of when Worms went cartoony in Worms Armageddon. The icons are no longer refined (they were slipping in Vista, though, so I should have seen it coming), and frankly the initial view was more reminiscent of a toy than a PC.
Still, these things are only cosmetic; let's not judge a system purely on its looks.

Talking of cosmetics, apologies for the theme in a few of the pictures - I was trying to find one I liked.

I figured I'd try out Internet Explorer 8, as I've yet to play with it. Alas, it was a no-go, crashing almost as soon as it loaded. BUT, Windows stepped in, told me that a page had failed to load, and asked me what I wanted to do about it. Sadly, when I told it to close that page, IE8 then crashed. Still, it was a nice thought!

Once I had let the system sort itself out, I fired up IE8 again, and this time it loaded the default home page (MSN) reasonably happily. But of course, has had plenty of time to avoid the need for IE's Compatibility Mode, so I pointed it at my own site. I've made no changes in the wake of the release, but then I never implemented any hacks just to serve perfect pages to IE6/7. There have been complaints of IE8 garbling sites that are fully standards compliant, but as it turns out loaded pretty well.

Aside from the initial issues, IE seemed pretty solid, and does seem to load pages quite quickly. Given that it's running on a system inside a VM, I won't comment on Microsoft's claims about page loading speeds. Needless to say, with a minimal amount of RAM it did take a while longer than I would otherwise be willing to wait.

So what else did I notice? Well, the Control Panel has been dumbed down even more than it was in XP - but only by default. There is an option to view the Control Panel by icons (either large or small, if you please!), and this has been expanded quite a bit. Things appear to be far easier to find than they were in Vista, and almost approach the accessibility found in XP.

This has been quite a basic look at Windows 7 Build 7100. One thing I will note is that the Beta is Windows 7 Ultimate, so some of the features noted will probably not be available in the cheaper versions. It is understood that the final release versions in the UK will be Home Premium and Professional; however, Ultimate will be available to home users for an additional fee and will contain the same features as Windows 7 Enterprise.

I have to admit, I installed Windows 7 expecting to hate it. I am a Linux user, and as such figured that the RC would contain (or fail to contain) many of the 'features' that older versions of Windows seemed to (fail to) deliver. Whilst I wouldn't use it as my main operating system, I was pleasantly surprised. Compared to Vista, Windows 7 is a work of pure genius (calm down - that's not saying much!). The problem, as many others have noted, is that it doesn't really offer anything that XP doesn't. Or more to the point, nothing compelling.

OK, so users have the option to encrypt their hard drive; they even get a candyfloss interface without the bloat of Vista. But what else have they gained? There's no way to change the Start Menu to the Classic Menu, and support contracts aside, I see very little to tempt businesses. Those businesses that steered clear of Vista and Office 2007 to minimise training costs will probably try to avoid 7 as well.

Although it is a step in the right direction, I suspect that the installed base of Windows 7 will consist of two groups: those who have bought a new PC with 7 on it, and those who upgraded to escape Vista.

Republished: No Phoul Play Involved - Good Phorm by BadPhorm

Originally published on 5 May 2009

A question posed on the StopPhoulPlay blog;

The more interesting question is this: if the Home Office and the many expert legal advisors we consulted are wrong, how is it that a system such as GMail - which scans emails from non-account holders without their consent to GMail users - is not also an 'interception' and as such not also a prime target of their campaign?

Unlike Gmail's webmail service, which is perfectly legal, Phorm's system is fully anonymous, does not look at email and does not store personal information such as IP addresses. Surely if FIPR/ORG is genuinely interested in a fair debate and the application of law as it sees it, the question merits a response?

The simple answer is: I choose to use Gmail. Those people are e-mailing me information that they clearly want me to see. That's the difference - I don't have a choice where Webwise is concerned; the packets are still going to hit Phorm's system, even if only long enough to check my cookies. And that in itself assumes that Phorm are being upfront and honest about the system's behaviour.

The article in question also lists five points, claiming that criticism of Phorm follows a familiar pathway;

1. Make a sensational claim (Phorm 'colludes' with Home Office)
2. Induce someone with some stature into associating themselves with it (in this case Baroness Miller to whom we re-extend our invitation to explain how the system actually works, since we are best qualified to explain our own technology)
3. Take every opportunity to criticise Phorm when the media (the BBC) cover the story.
4. Move on to the next claim once this claim, like all others, is discredited.

So let's take a look at it;

1. Phorm did collude with the Home Office. Maybe 'consult' or 'co-operate' would have been a better word, but the fact remains that Phorm did speak to the Home Office. If they told HMG about the secret trials, then they colluded. If they didn't, then Phorm lied to the Home Office.

2. It seems very easy to claim that someone has been induced. The problem is, outside of Phorm and BT, there seem to be very few people pushing the benefits of Phorm. A good number of BT employees appear to be against the system, so I'd suggest the only people truly interested in seeing it deployed are those who stand to profit.

3. This is a reversal of logic: the news story does not prompt the criticism. The investigation leading to the story takes place because of the vocal criticism - and look at some of the things that have been uncovered. Without the media stories we probably would not have an admission of just how wide-ranging the trials were.
Potentially the EU would not have become involved in the whole sorry mess.

4. I haven't yet noticed any substantial claims being discredited. We know Phorm claim their system does not store personal information - but prove it. Prove that it doesn't and never will. Prove that there is absolutely no way to track a UID back to an IP or a person. Prove that your system isn't in breach of RIPA. Prove that you are not violating the copyright of websites such as this one.

That same article also contains the following snippet

Phorm's system is fully capable of being deployed in accordance with UK and EU law. This is a matter of record as far as the EU and UK authorities (BERR) are concerned, as well as the UK regulator (ICO). Phorm's system has, furthermore, developed a privacy-protecting technology that actively anticipates future changes in the law - and not just in the UK/EU, but on a global basis.

Now, the ICO may have passed the buck. They may even have given the system a thumbs-up so long as certain conditions are met, but the EU have done no such thing. The UK taxpayer potentially faces a huge fine because of this system - or rather, because of the failure of our Government to intervene. Claiming that the EU is OK with the system is clearly unsubstantiated.

The website also contains an article about the claims that Phorm stores and sells your personal data. The claims they refute are, on the face of it, incorrect - at least as far as Phorm's statements allow us to believe. The problem is, all of this is still based on trust. And 121Media I do not trust.

The points raised are also not really the focus of the article on p2pnet News. The story is more about the discussions between the Home Office and Phorm, and the facts raised there are consistent with many of the points made.
Realistically, the use of the phrase 'lifts personal data' was probably just a poor choice of words, and does very little to bring the entire article into disrepute.

Directly from their front page, there is a link to a specific section of this thread. I suspect it is intended to give a negative view of Anti-Phorm campaigners, but the thread (read it all) does read well. It is an honest and open discussion for the most part, and does begin to address some of the concerns about links between Phorm and Privacy International.

And finally we have the link explaining how the anti-Phorm brigade operate - a page dedicated to Phorm's view of their critics' modus operandi. So let's take a quick look at their claims, and then just maybe turn the spyglass onto Phorm themselves.

The blog raises the question of why a smear campaign is being run against them - or more to the point, why most opposition is voiced through the media and used to try to affect their share price. The answer to this is simple.

We need to make everyone aware of this system, and it is also the only way to make your voice heard in a world where money rules. Objections have been made to the Government and various agencies; look where that went - it's led to the UK being threatened with a fine by the EU.

The poster then moves on to mention the wish of anti-Phorm activists to remain anonymous. Can you really blame them? There's plenty of anecdotal evidence to show that Phorm have done a little research to ascertain who the activists are. It may have been a simple WHOIS request, it may have been something more, but the creator of Dephormation is no longer known purely as Dephormation.

I also suspect that asking MP's, MEP's, peers of the realm and technical experts about the system has very little to do with hiding ones own identity. I think it is more about raising the issue with people who have the power to do something about it.

Now let's take a brief look at some of the tricks that Phorm has pulled. We'll give them the benefit of the doubt and assume that the anonymity of the anti-Phorm groups was not broken by Phorm. But;

  • Phorm did edit a Wikipedia article about itself to remove elements it deemed unfavourable. This is deceptive and in violation of Wikipedia's TOS.
  • Phorm did (in combination with BT) run secret trials of its system without the consent of BT's customers. And it was noticed, but denied (what does that say about both the effects of the system and the honesty of the two companies?)
  • Neither company focuses on the issues being raised; originally Webwise was promoted as helping web safety (with its anti-phishing add-on).
  • Most of its defences seem to involve raising the subject of Google's systems. This is comparing apples and pears: unless I use Gmail's web interface, the ad system shouldn't read my mail. I can block AdSense with Firefox add-ons, or avoid using Google. Changing e-mail provider and search engine (and installing an add-on) is far less hassle than changing ISP. Plus, Google have not been involved in malware (as far as we know, at least).
  • Phorm supports the 'legality' of its system by saying it consulted legal experts, but it never says who these experts are. We know they spoke to the Home Office, but can get no information on the others. Phorm cannot even tell us on what basis these experts believed the trials could be legal.
  • Phorm focuses on the purported benefits of the system, but will not consider making it a network-level opt-in. Why? Probably because they know almost no-one would opt in. They are relying on the fact that the average user may not even know the system is in place, and so will not know to opt out.

Phorm's creation of the StopPhoulPlay blog has been described as unprofessional by the Guardian, and does seem to consist of a lot of logic reversal (definitely one of their hallmarks). The strange thing about the whole situation, though, is how Phorm continues onwards; they must truly believe that some UK consumers want them. Either that, or the fees from advertisers are likely to be very lucrative. Either way, if the system is truly of benefit to people, they will probably opt in.

But they should have to opt in - it shouldn't be done on their behalf. And it should be at network level; it may mean that BT have to look at their routing tables, but traffic from people who want Phorm should travel down a completely different cable. It'll probably never happen, but it's looking more and more like the alternative is that BT will lose a substantial proportion of their installed consumer base, or Phorm will be banned by the Government.

Republished: Censorship on the Net

This article was originally published on in May 2009

Now we all know that countries such as China intercept and filter all Internet connections within their country. It cannot have escaped many people's attention that Australia has recently been testing a firewall (with some interesting revelations on Wikileaks). There have been suggestions that the Germans tried to censor Wikileaks, although the disconnection of the site later turned out to be related to unpaid bills.

Read more…

Republished: rejects Phorm

Originally published on 26 March 2009

I've been having a conversation with recently, they provide the DNS Re-Direct for, about Phorm and the Webwise system. Although I completely disagree with Phorm's system being opt-out, I also do not want to help them monetise their customers' browsing behaviour.

So, some time back I sent an e-mail to their website exclusion list, stating that I did not give permission for them to scrape my site for their own benefit. I received a reply effectively stating that as the WHOIS query for the domain ( does not match my details, the request was being viewed as unauthorised and would not be actioned.

Because belongs to, and is used by, a number of customers, it's not exactly possible to rectify this issue. WHOIS databases don't contain information on subdomains (let's face it, that would be nigh-on impossible to keep up to date), so that leaves us a bit stuck. are generally a pretty good company, so I figured I'd drop them a line explaining the issues I was having, and how it affected them.

They were pretty friendly, and had a look over Phorm's website (being an American company, they'd sort of missed the hoo-hah a bit). Phorm's site doesn't actually contain the e-mail address to use (I hadn't realised until they told me), and they queried exactly what the impact was.

So I explained in more detail what the webwise system entails, and the impact upon privacy. Of particular relevance to (or so I believed) was the impact on copyright of their customers sites, plus privacy implications for anyone running a web interface to a mail server.

Long and short of it is that I received an e-mail today stating that they were in the process of adding both their free and their enhanced domains to the exclusion list.

I'm not sure exactly how many customers they have, but I imagine it is quite a few. That should mean quite a few sites are no longer open to Phorm.

It would appear, however, that Phorm take action as soon as you add yourself to the exclusion list. I noticed quite a few hits after I sent the initial opt-out (which they rejected), and I'm sure some were from a domain that I traced back to Phorm (I'd have to dig out the logs to confirm); they had a pretty good crawl of the site.

I'm guessing that this probably means they scanned the site for keywords, and will use these rather than the current content when someone visits the site. Presumably they will crawl the site again at some point to update their database, and when they do I will contact them (and BT) to state that I do not give permission for this either, and to find out whether they honour robots.txt.

Still, need to wait for them to take another peek first!
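For what it's worth, if their crawler did honour robots.txt, excluding it would only take a couple of lines. To be clear, this is a hypothetical example - the "Webwise" user-agent string below is a guess for illustration; as far as I know, Phorm haven't published one:

```
# Hypothetical robots.txt entry - the "Webwise" user-agent
# string is a guess; Phorm haven't published one.
User-agent: Webwise
Disallow: /
```

Of course, robots.txt is purely advisory - a crawler is free to ignore it, which is rather the point of asking whether they honour it.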

Republished: A note about the PRS v Youtube Dispute

This article was originally published on in 2009

There has been a bit in the news recently about Google's decision to remove access to Youtube hosted music from UK users. This is as a result of a breakdown of talks between the Performing Rights Society, and Google itself.

In statements, PRS has identified the problem as arising from Google not being willing to pay more than a pittance. Google states that the dispute has arisen because of the vast jump in PRS Fees.

The PRS has now set up a website - Fair Play for Creators - dedicated to pushing their message of performers' and composers' rights.

Although they are probably heavily moderated and edited, the Supporters' Comments page makes interesting reading. Many have written that Google is a disgrace and should pay the fees.

I don't believe that any of us are truly acquainted with the full facts, but I can well believe that the PRS charges a hefty fee, especially as some of the terms they apparently set for a YouTube agreement are somewhat questionable;

YouTube must supply PRS, on a regular basis, with details of which videos have been viewed and how many times. The PRS will then tell YouTube how much they owe.
But YouTube are not allowed access to any form of membership list so that they might confirm they are not being overcharged.

Now, the Data Protection Act probably comes into play where disclosure of the members list is concerned, but an up-to-date list of all songs/compositions made by members of the PRS should not be affected.

The main reason I am writing this article is as a response to some of the comments posted on the Fair Play site. I recognise that people want to be paid for their work, but I get the impression that some of the commenters believe they are automatically entitled to be.
Inconvenient as it may be, no-one has to buy their music. We can live without it, and the creation of a piece does not automatically lead to it being monetised.

As Google could not reach an agreement with the PRS, they reminded PRS exactly whose money does the talking: they blocked access to the relevant media, thus negating the requirement for a PRS license. The ball is now in the PRS's court - do they want Google's license fee? Are they willing to negotiate terms?

Songwriters' comments may be heartfelt, and may stem from a feeling that they have been deprived of money, but in reality Google have reacted in a way that we all do.

If you encounter issues with a company, and they can't be resolved, then you take your money elsewhere. We've all done it, Google is just doing it on a larger scale.

Branding these actions as Despicable or Corporate Bullying is an unnecessarily dense kneejerk reaction. What Google have done is perfectly legal, and most of us would have responded in a similar manner.

There will be those who disagree, but each to their own.

An Atheist's view of faith

Originally published to Helium in 2009

Faith is an interesting word, meaning so much to so many different people. For many, the word carries strong Religious Connotations, and it is undeniable that the word applies in that context. Faith however, is not unique to the world of Religion, it is a word that can apply throughout Humanity.

Faith is defined as "a strong, loyal belief or trust in a person, idea, or thing", or, expanding upon that, as a "belief that does not rest on logical proof or evidence". Personally, I believe that religion is the epitome of the second definition; however, even within my family, were I to proffer either of these meanings as the 'true' definition of faith, I would quickly be corrected.

As I mentioned earlier, faith is a word that has been adopted by the religious world, and is often now described in very black and white terms: you either have faith, or you don't. To the religious mind, not having faith means that you are not a religious person. To the more evidence-based amongst us, not having faith probably means that you have concerns about some component of a hypothesis or an experiment. That's not to say that atheists and agnostics are all scientists, but you can apply some of the practices of a science lab to many aspects of life; life is one big experiment. You can hypothesise about how it's going to turn out, and why you are here, but without conducting the experiment you cannot support the hypothesis!

So what exactly does faith represent to me? As an atheist I don't recognise the religious compartmentalisation of the word, and as such I view it more as a statement of belief that can be applied to anything. You can have faith in a concept, a person, or an item without needing to believe in God. I have strong faith in the concept that everything around us can exist without requiring the presence of an omnipotent deity. In fact, I have very strong faith in the idea that religion is simply a device created to explain the unanswerable questions; as science has developed, the number of those questions has been reduced.

Faith has become a very, very personal term to many, and the devout followers of religion would probably like to restrict its use to religious contexts, but unfortunately it's not their word to claim.

So what do I, an atheist, think of religious faith? How do I view it? Do I condone it? Am I against it?

These are all good questions, and many will already believe they know the answers. But there is something of a stereotype used when discussing atheist views, which tilts the 'moral bias' firmly in the direction of the church.

The true answer is that I cannot scientifically support the underpinning assumption of most religions; that is to say, I cannot find measurable evidence that there is or was a deity to control and create the earth. In fact, I believe that the evidence available to us is quite contradictory of the religious hypotheses. I respect the strength of faith shown by many religious people, as I would respect strong faith shown in any other environment. However, unshakeable faith is not always a good quality. I make assessments based on evidence, and I respect others who make similar judgements; disregarding the evidence in order to maintain your faith in a hypothesis is the antithesis of all development. For that reason I personally believe that many religious teachers are fooling themselves into believing a theory that has not a shred of evidentiary support.

So now you know my view of religious faith; the question is, do I condone such beliefs? Whilst I believe there is a certain foolhardiness in many religious beliefs, I cannot state that I do not condone religion. This decision is not based on any belief that religion has done good things in the world, but more that I cannot condone a world where we are told what to think and what not to think. For me religion is a paradox: I could not get rid of it for the reasons mentioned before, yet religion satisfies those criteria in itself. By taking action (were it even possible) I would be creating the very world that I don't want, but by not taking action, I allow religion to try and reclaim that world.

Religion does set strict rules on morality, and this is definitely a good thing. However, contrary to many claims, we do not need religion in order to have morality. Especially damaging for the reputation of religion (in my eyes at least) is the willingness of the more devout to overlook these morals in order to spread their 'one' religion. Be it the Crusades, the IRA, or Al Qaeda, terrible things have been done under the guise of religion. Many will claim that these organisations are not true members of their religious facade, but this is irrelevant; religion has been used as justification for some terrible acts.

Regardless of what strictures a religion lays down, there will always be those who believe that propagation of the religion is more important than the strictures that form the very foundation of that belief. These atrocities are committed in the very name of religion, often regardless of the true agenda. Without religion, there is one less excuse for these atrocities to happen.

I don't deny that atrocities would continue without religion, but it would provide one less facade for murderers to hide behind, and religion is still seen as a powerful flag to ride behind.

So am I against religion? The answer is neither a simple yes nor a simple no. I am against the acts committed in the name of religion, and I believe the only way to prevent them is to rid ourselves of these religious fairy stories. On the other hand, I am also against telling people what they can think and what they can do, so ridding the earth of religion would not make me feel any better. It would take a very concerted, dictator-like effort to truly rid the planet of religious beliefs, and that cannot be condoned.

I am strongly against people trying to convert me to their religion; I detect an amazing level of arrogance in their belief that their theory is the only right one. I do not try to convert people away from religion, so please repay the favour. I have always been more than happy to discuss my beliefs with anybody that's interested, and have had some very interesting conversations. Unfortunately, religion can make that debate quite difficult; it is very hard to maintain a civilised conversation when the evidence you are referencing effectively questions a strong belief held by the other person.

And of course, when talking to followers of certain religions, any question of evidence that xxx happened will normally lead to you being given a quote from their book of choice. A belief that the contents of a book are true, supported by the fact that the book exists, has always been an interesting one for me. Yes, we can measure that the book exists (it has mass etc.), but how does this prove that the contents are the word of God/Allah?

I'll leave you to puzzle over that one!

Email and Captcha Generation Scripts

This content was originally published to

This page provides links to a couple of scripts that I knocked together today. One manages and processes captchas, and the other takes input from an HTML form and e-mails it to you.

It saves having your e-mail address in a mailto link, and the captchas will help reduce the amount of spam you receive.

Captcha rotates captchas regularly, and stores each captcha's value locally rather than as a hidden element of the form. Scripts later in the processing chain can then reference the value.

Captcha gets its information from the CaptchaDB, which is a simple text database; there are examples contained within the link. The use of &MATHS=Y signifies that the captcha is a mathematical question rather than a simple copying exercise, and this affects the output of Captcha.
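As an illustration of the &MATHS=Y idea, a mathematical captcha can be generated along these lines. This is a hedged sketch in Python rather than the original script's language, and the function names are my own; the key point it demonstrates is storing the answer server-side, not in a hidden form field:

```python
import random

def make_maths_captcha():
    """Generate a simple arithmetic question and its answer.

    The answer would be stored server-side (as described above),
    never as a hidden element of the form itself.
    """
    a, b = random.randint(1, 9), random.randint(1, 9)
    question = f"What is {a} + {b}?"
    return question, a + b

def check_answer(stored_answer, submitted):
    """Compare the user's submission against the stored value."""
    try:
        return int(submitted) == stored_answer
    except (TypeError, ValueError):
        return False
```

A later script in the processing chain would call check_answer() with the stored value before doing anything with the form input.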

Emails is a CGI script utilising Captcha; it takes its input from an HTML form. That input is then e-mailed to the site owner with a unique reference.
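The form-to-email flow can be sketched as follows. This is a minimal Python illustration under my own assumptions: the field names, sender address, and reference format are illustrative, not the original CGI script's actual interface:

```python
import uuid
from email.message import EmailMessage

def build_email(form_fields, recipient):
    """Build an email from submitted form fields, tagged with a unique reference."""
    ref = uuid.uuid4().hex[:8]  # unique reference, quoted in the subject line
    msg = EmailMessage()
    msg["From"] = "webform@example.invalid"  # placeholder sender
    msg["To"] = recipient
    msg["Subject"] = f"Contact form submission [{ref}]"
    # One "field: value" line per submitted form field
    msg.set_content("\n".join(f"{name}: {value}" for name, value in form_fields.items()))
    return msg

# A real CGI script would first verify the captcha answer, parse the POST
# body (e.g. with urllib.parse.parse_qs), and hand the message to smtplib.
```

The point of the unique reference is that the site owner can quote it back to the visitor without ever exposing an e-mail address in the page source.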

You can see all of these in use on the Contact Me page.

Republished: IWF Punts it's blacklist some more

This article was originally published on in 2009

Well, between them, the IWF and the NSPCC are doing quite a job of advertising their Child Porn Blacklist, but it seems that some of the smaller ISPs are refusing to implement it, largely on cost-versus-benefit grounds.

And sadly, they are right. Such measures are incredibly easy to circumvent and will not stop anyone even remotely interested in the material from viewing it, and that's just using a web browser; roll P2P into the equation and suddenly you start seeing how easy it would be to obtain, let alone if you were to use one of the anonymous networks...

As much as I would like to see this filth removed from the web, that's not how the internet works; you can't remove things from the net without first going after the publishers. Even then, more might publish on behalf of their imprisoned counterparts. Trying to block these pages is simply a waste of money, and it's the customers who will pay, as the ISPs will simply pass the cost down to their users.

Blocking the pages doesn't really do much to help either: those children have already been abused, and probably will continue to be until someone puts in the effort to actually track down the producers of the material. Once these people have been tracked down, they can be imprisoned/castrated and the children removed from the situation. It's not pleasant, but unfortunately it's the way these things work.

There seems to be some concern about people accidentally accessing the content and then becoming curious, but in my opinion, if they are even remotely curious about such things then the inclination was probably already there. The likelihood of a site suddenly making someone desire its content is quite small, unless that person had a latent desire to try it.

Similarly, there is no proven link that viewing pictures makes a pedophile more likely to actually physically abuse a child; most research has been conducted on pedophiles who have been caught abusing children (if you secretly viewed it, would you admit it in a survey? Thought not), so the control group is somewhat lacking. Based on this lack of proof, it is surely far better for these people to fulfil their desires from a website than to actually abuse some kids? Obviously it's not quite as black and white as that; the pages do contain images of children who have been abused, and that cannot be condoned, but the basic principle of the hypothesis should be clear.

There are, and probably always will be, those that abuse children for sexual gratification; whether this is due to mental illness or something else is left to the reader's own opinion, and therefore the abuse will continue. The best we can do, sadly, is to catch people as it happens, and hopefully before. Blocking sites will not aid in this; however, monitoring traffic to them could. Why block the sites? If they have been identified, why not monitor who accesses them and then keep an eye on their activities thereafter? It may lead to the breaking of an entire ring, or it may lead to the arrest of one person; either way, rather than simply denying access, it would lead to the capture and imprisonment of at least one pedophile.

The sad fact is that no-one will ever like all of the content available on the web. Personally, I believe that religion is perfectly capable of ruining lives, and has been very sadistic in the past. Hell, people still die in the name of God/Allah today; their beliefs have a real impact on everyone's lives. Yet if I were to suggest that we block all religious pages from the net, people would call me crazy and say that it would have no impact. The unfortunate truth is that filtering pedophilic (is that a word?) sites amounts to a similar result: a lot of expense, and very little true benefit.

Denying access to these materials will not kill the urges, just like censoring religion would not stop it being practised. Censoring will push it deeper underground, but will not end it. As a teenager, with all those hormones racing, did being grounded ever stop you wanting something? Did rejections from women make you stop desiring sex? If you had gone to an all-boys/girls school, would you have stopped wanting sex, or turned homosexual? No. If anything, being denied access makes most people want something more.

The same thing is likely to be true here. I wish it wasn't, but unfortunately some people have some very, very strange desires.

If you are concerned about accidentally accessing such material at home (or that your kids might), you can install software onto your computer to deny access to unsuitable sites, and you will have far more control over what is filtered. Take, for example, the recent IWF decisions to block Wikipedia and the Wayback Machine on the basis that they contained images that were potentially illegal. Not definitely, not even probably, but potentially! It is this kind of arbitrary filtering that is concerning most of the tech community: the larger proportion of people do not want to access the filth that this list is supposed to protect them from, but clearly the IWF lacks the judgement to fulfil its remit without erroneously blocking useful resources. Most Internet users have never accidentally stumbled onto child porn, and it's definitely not nice if you do, but as a rule you do actually need to be looking for it to find it. That should be ample protection for average users, and as the determined can work around the blacklist, having no blacklist is about as effective at stopping users from viewing content.

Unfortunately, arguments against the blacklist are often greeted by accusations and suspicions of being a closet pedophile. In my case at least, this is definitely not true. I have no desire to view such content and would love to see it wiped off the Internet, but as I said before, that's just not how it works. It is a very political argument, and a quite sensitive debate.

No doubt the debate will heat up, and the Government will weigh in with some new legislation, but the only real impact will be that we have a little less freedom and our Internet bills will go up.

Just for the record, I wish to state that both the IWF and the NSPCC are clearly very well-meaning organisations made up of people who care; I simply believe that they are slightly misguided on this issue. And as I've highlighted, the IWF does have a history of being somewhat over-eager when it comes to the blocking of websites. I certainly believe there should be more judicial oversight of the blacklist: if an item is to be added, it should be because it is illegal, not because it could potentially be illegal.

Republished: Hacking the Computrend Powergrid 902 Powerline Adaptor

This was originally published on in 2009

I've been having a bit more of a play about with the adaptors, wondering what other weaknesses lie within. So far I haven't found a vast amount, but I have found a potential DoS attack that could be used by a local attacker.

First, a bit of information about what makes the adaptors tick: I got the trusty screwdriver out and opened one up. Some of this information is available on the net, but I only found it by searching for chip names/numbers to find out what each one does. So, to get it all in one place:

The Adaptors are powered by an 'Aitana' Chipset, manufactured by DS2. To be more specific, the main processing is handled between a dss7800u chip and a dss9101u chip. Given the size of the latter, I would presume that it contains the main CPU.
More information on the Aitana system can be found here.

This is supported by an Etrontech em638165ts-6g chip, providing DRAM.

The Ethernet side of things is largely handled by an IC+ ip101a lf chip. This is purely a transceiver, so most of the processing must still take place within the Aitana system.

I had been hoping for some form of flash memory to play about with, but as the Aitana is a System on Chip (SoC) implementation, this angle of attack was narrowed down greatly.

Although I haven't yet played around with it, the ethernet board interfaces with the mains network transceiver using a 10-pin interface. At least two of these must provide power to the ethernet board, so it seems likely (in my mind at least) that the other 8 pins could simply be an internal version of a Cat 5 cable. Like I say, I haven't actually checked it yet, so it's only speculation.

Now, the management interface is provided via an HTTP server, so I figured it would be worth seeing if there were any known exploits for the server. Unfortunately, the server doesn't disclose its server type in its headers, and if you try to fool it into releasing the information by causing a 400 error, it simply closes the connection.
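The header check can be reproduced with a few lines of Python; this is a sketch of the general technique rather than the exact tool I used, and the adaptor's address and port are assumptions:

```python
import http.client

def get_server_header(host, port=80, timeout=5):
    """Request '/' and return the Server header, or None if the server
    doesn't identify itself (as this adaptor's HTTP server doesn't)."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        return conn.getresponse().getheader("Server")
    finally:
        conn.close()
```

A server that advertises itself returns a banner string here (e.g. an Apache install typically reports something like "Apache/2.4"); the adaptor returns nothing, which is why the next step was to probe it blind.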

Being unable to identify the server (it's possible that it's an in-house solution), I tried crafting a buffer overrun exploit.

Whilst I've so far failed to run code on the remote machine, I have managed to force the adaptor to drop all packets for approximately 20 seconds before restarting itself. An attacker on the local network could run a script to trigger this every minute or so and effectively deny access to anyone beyond the adaptor.

It's also likely that if someone were to craft a 'network sniffer' for one of these devices, they could run a similar attack from 'Mains Side'.

This attack is likely to have little effect on a small home network; after all, if an attacker has that kind of access, they could probably just hit the standby button. But if used on a corporate network (I imagine some small businesses may have considered their use), an employee with a grudge could use it. Whilst the effect of such an attack on an adaptor connecting to one other adaptor should be reasonably limited, if the adaptor is acting as an AP for a one-to-many connection, it could disrupt a large number of nodes.
Similarly, in a one-to-one setup, if the adaptor attempting to connect to the targeted adaptor is connected to a switch or a hub, the attack could quite effectively disrupt network comms for a large number of nodes.


Republished: How Safe are Webcam Sites?

This content was originally published to in December 2008

There are a variety of websites available online which allow users to stream live footage of themselves, either to a specific person or indiscriminately. The dangers of using these sites depend largely on the user.

There is a danger of users attracting unwanted attention, or of finding their webcam session more widely distributed than they expected. Users should be aware of this risk before accessing one of these sites, but it is expected that most would be mature enough to accept this risk.

One of the benefits of using an online meeting place is that, should a user be plagued by unwanted attention, they can easily report the issue to a moderator or simply stop using the site. However, this relies on users not disclosing personal details to people they do not know; there is very little benefit in leaving a site if you have given the person your home address.

An additional risk to users is largely caused by teenagers: were a minor to access an adult room and go unnoticed, users of that room could potentially find themselves prosecuted. In all likelihood this would not happen, as the authorities are likely to recognise that the minor had misled people about their age, or simply remained anonymous.

Whilst webcam sites do pose various risks, most should log conversations between users. This at least provides evidence should the worst happen; unfortunately, it does not protect users from a risk that many would consider the most dangerous. Especially amongst the more adult sites available, the concern is that someone may be recording a webcam session for publication elsewhere. Whilst some would argue that it is a risk you take, most would agree that if you are streaming video of a personal nature, you are streaming it for that specific audience and should expect that it will not be republished.
Unfortunately, it does happen, as a quick Google search will show. This issue affects any video streaming solution, whether it be MSN, Skype or an online site. If you are considering streaming video in any form, you should be aware that it may be recorded and republished without your permission.

Whatever your personal views on the sanity of the matter, people do become addicted to these sites. For many users it is just another social group that they feel comfortable in; some may balance this with a 'real-world' social group, others may not. Whilst some may argue that these online meetings do not constitute real friendship, for the people involved they feel real enough. And whilst it is true that you may be misled by another user about who they are and what they do, this can also be true in real life.
For those who fall in love with someone online, it feels no less real than if they had met in a bar. Whilst this connection can be good for responsible adults, it also means that unwanted affection can feel no less real. Indeed, 'cyberstalkers' can be a big problem, especially if allowed to gain more personal or private details, and the addition of video to the mix can worsen a situation. Once again, this is an issue experienced in the real world as well, and unfortunately comes with the territory.

Webcam sites and children, however, do not mix. Most teenagers lack the maturity to truly understand and accept the risks that I have detailed thus far. Teenagers are often easy pickings for the unscrupulous, and are likely to be more at risk than when chatting in a non-video environment. Neither is safe unless carefully moderated by a responsible adult. There should be a greater burden on site operators to ensure that children cannot access areas that are unsuitable, and to protect them in the areas that they can access. However, the true burden of protection should remain with the parents.

This article has focused mainly on the dangers of using webcam sites and chatrooms; however, I have also highlighted some of the benefits. It has not covered the financial risks of paying to use a site, where their addictive nature could quickly lead to debt, and placing trust in the wrong site could lead to identity theft. These issues lie more within the scope of other articles.

When used responsibly and maturely, these sites can be a good release for like-minded adults, and a social safety net for those who would otherwise find friendship hard to come by. Not everyone that uses these sites is a deviant or socially inept, and good friendships can be forged.

Whether or not these sites are safe depends almost entirely on the persona of the user; with a few exceptions, the risks are no different to e-mail conversations. There is always a risk of unwanted affection no matter which world you choose to spend your life in; the internet provides more opportunity for this, but also provides a modicum of anonymity.

What is this blog about

This post was originally posted on Freedom4All, you can view the original in the Freedom4all archive

FreedomForAll exists to identify and highlight breaches of Human Rights around the world, but what exactly are these rights?

On December the 10th, 1948, the Universal Declaration of Human Rights (UDHR) was proclaimed by the General Assembly of the United Nations. The Assembly then called on all Member States to ensure that the Declaration was disseminated, read and adhered to, without distinction.

Amongst the 30 Articles that comprise the Declaration, the UDHR states that:

  • All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
  • Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.
  • Everyone has the right to life, liberty and security of person.
  • No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.
  • No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment.
  • Everyone has the right to recognition everywhere as a person before the law.
  • No one shall be subjected to arbitrary arrest, detention or exile.
  • Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.
  • Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.
  • Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance.
  • Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
  • Marriage shall be entered into only with the free and full consent of the intending spouses.

These are rights that most of us take for granted, but they are far from universal. Many countries subject citizens to arbitrary arrest, and hold dissidents without trial.

China is perhaps the most prominent example, regularly arresting political dissidents for expressing their views. Some stories reach international news agencies, but most 'enforced disappearances' go unnoticed by the international media.

Imagine being in a position where you disagreed with the actions of your Government, but were unable to speak out about it for fear of the effect it could have on your life, and that of your family. This is exactly what happens when Freedom of Expression is taken away.

Those that do speak out often do so in order to strive for a better quality of life, both for themselves and for others in their country. All too often this action is followed by an arrest, and occasionally a trial.

In some countries dissidents also risk being tortured simply for disagreeing with the 'Official Line.'

Imagine being persecuted by the Government because you are the wrong religion, for many this is a daily occurrence.

Organisations such as Amnesty International provide a lifeline for those who are not granted basic human rights, and actively campaign for better treatment. Give them support in whatever way you can, whether by becoming an active member or simply by donating.

The more voices there are, the stronger the message becomes.

Join, Donate, Do Something, Just don't Do Nothing!

Republished: A suggestion for BT

Originally published on on 27 October 2008

Given that BT claim to be creating a network-level opt-out from Phorm, I thought I would give them a bit of a helping hand. They claim that although it will be implemented, it's unlikely to be in place by the time the WebWise system is rolled out.

My first suggestion is made knowing that BT are unlikely to adopt it;

  • Ditch Phorm - your customers do not want this technology.

Now, regardless of arguments about Opt-In versus Opt-Out, it is quite simple for BT to create an account-level opt-out:

  • Use the HomeHub - BT's HomeHub is the connection between a customer and BT's network. Create a setting on the HomeHub to opt in to the WebWise system. This is already in place for services such as BT Fon, Broadband Talk, and a variety of others.

  • Given that not all customers have HomeHubs, the service really should be Opt-In; this will avoid users of different network equipment being opted in with no way to opt out.

The setting on the HomeHub would determine the route that traffic takes (i.e. through Phorm's super secret spy hardware, or through more conventional hardware).

As long as the service is provided as Opt-In, the above should be a suitable solution: the interface on the HomeHub is dumbed down so that more or less anyone can use it, and if a customer is really that interested in adding WebWise to their connection, they will quite happily log on and activate it.

As I have just saved BT a small fortune in pointless R&D, I am quite happy to accept my consultation fee by cheque!

Just not going to happen, is it!

Republished: Phorm your own opinions

Originally published on on 06 October 2008

It has been revealed today that BT consider it the account holder's responsibility to explain Webwise to all users of their connection. BT have stated in their revised terms and conditions that they accept no responsibility if users of a connection are not kept informed about Webwise by the account holder.

What this basically means is that if your child opts in to Phorm on their user account, it is your fault and not BT's. That a child is too young to consent to interception of traffic doesn't seem to matter; it remains your responsibility to inform all users, and to opt out on each and every browser and user account on each and every net-connected computer in your home. To clarify what this means: if you have a PC and a laptop, both with three user accounts, and both Firefox and Internet Explorer in use, you will have to complete the opt-out process a total of 12 times (2 computers × 3 accounts × 2 browsers).

If you clear your cookies, you will have to opt out again. You can block cookies from however your traffic will still pass through the hardware. Phorm promises not to use it if you have opted out, though.

The reality is, if you are with BT your traffic will pass through Phorm's system whether you like it or not; you can either leave or encrypt your traffic.

BT claim they are working on a network-level opt-out, but cannot guarantee that it will be in place before WebWise is rolled out.

For a more detailed look at what Phorm is up to try reading

Republished: UK Government fails to respond to the EU about Phorm

Originally published on 13 Aug 2008

Below is a copy of a letter the EU sent to the UK Government at the end of June. As reported on The Register, the Government has yet to respond, which means that the deadline has been missed. Apparently the Government would not comment on exactly why they had missed the deadline, and it's not entirely clear what happens next. Potentially, our government could find itself having to defend its actions (or more to the point, lack of action) in Luxembourg.

    Dear Sir,

    I am writing to you in relation to certain issues arising from the past and future deployment by some major United Kingdom Internet Service providers of the technology provided by a company called 'Phorm' to serve their customers with targeted advertisements based on prior analysis of these customers' internet usage.

    In March 2008, a number of news items appeared in the media concerning the planned use by United Kingdom ISPs of the Phorm technology. Many of these publications raised issues concerning the impact of this technology on the privacy of Internet users. The information published on the web also included an e-petition submitted to the Prime Minister and a complaint made to the Information Commissioner's Office (ICO). In addition, in early April 2008, BT published a briefing according to which it had performed trials of the Phorm technology in autumn 2006 and summer 2007. In a TV interview, a BT representative confirmed that these trials had been performed without informing the customers affected and obtaining their consent.

    The European Commission has already been contacted by Members of the European Parliament from the United Kingdom who communicated the concerns of their constituents regarding the deployment of Phorm technology. The issue has also been the subject of several written parliamentary questions addressed to the Commission by MEPs asking the Commission to comment on the applicability of WU legislation and also to set out its intended action in relation to the previous trials. Finally, a number of individuals have also written to the Commission directly to express their concerns and invite it to intervene in the matter.

    In order to provide the response that is expected from it, the Commission needs to base itself on a clear understanding of the position of the United Kingdom authorities. Several EU law provisions concerning privacy and electronic communications may be applicable to the activities involved in the deployment of Phorm technology by ISPs.

    In particular, Directive 2002/58/EC on privacy and electronic communications, which particularises and complements for the electronic communications sector the general personal data protection principles defined in Directive 95/46/EC (Data Protection Directive), obliges Member States to ensure the confidentiality of communications and related traffic through national legislation. They are required to prohibit listening, tapping, storage or other kinds of interception or surveillance of communications and the related traffic data by persons other than the users without their consent (Article 5(1)). The consent must be freely given, specific and an informed indication of the user's wishes (Article 2(h) of Directive 95/46/EC). Traffic data may only be processed for certain defined purposes and for a limited period. The subscriber must be informed about the processing of traffic data and, depending on the purpose of processing, prior consent of the subscriber or user must be obtained (Article 6 of Directive 2002/58/EC).

    In the light of the above, we would highly appreciate it if the United Kingdom authorities could provide us with information on (1) the current handling by the United Kingdom authorities of the issues arising from the past trials of the Phorm technology by BT and on (2) the position of the United Kingdom authorities regarding the planned deployment of the Phorm technology by ISPs.

    As regards the first issue, according to applicable EU law the responsibility for investigating complaints concerning such trials and determining whether the national legal provisions implementing the requirements of the relevant EU legislation have been complied with lies with the competent national authority(-ies) in the United Kingdom. The Information Commissioner's Office (ICO), which is responsible for enforcing the United Kingdom Data Protection Act 1998 (DPA) and Privacy and Electronic Communications Regulations 2003 (PECR), has made a number of statements on Phorm. In its latest published statement of 18 April 2008, the ICO analyses the conformity of the deployment of the Phorm technology with the DPA and the PECR. At the same time, the ICO indicates that it does not have responsibility for enforcing the Regulation of Investigatory Powers Act 2000 (RIPA), which has been invoked by some individuals who question whether the use of Phorm entails an unlawful interception of communications under this Regulation. In this respect, the ICO refers to a statement by the Home Office, which says that it is questionable whether the use of Phorm's technology involves an interception within the meaning of RIPA and that it does not consider that RIPA was intended to cover such situations. The ICO concludes on the issue of RIPA by stating that it will not be pursuing this matter. At the same time, the ICO statement does not include any indication as regards the intentions of the ICO in relation to the investigation of possible breaches of other relevant legal provisions* in the past trials of the Phorm technology.

    Second, as regards the issues arising with regard to the planned future deployment of the Phorm technology, there appears to be a certain discrepancy between how it is envisaged by the ICO, the ISPs and Phorm itself. One of the most significant issues in this regard is the way in which customers will express their consent to the application of Phorm technology in their case. While the ICO seems to suggest that the consent of users for the Phorm technology should be on an opt-in basis and also BT seems to confirm this approach, Phorm has indicated that it intends to tackle user consent through providing 'transparent meaningful user notice'.

    I would therefore be grateful to receive the response of the United Kingdom authorities on the following questions:

    1. What are the United Kingdom laws and other legal acts which govern activities falling within the scope of Articles 5(1) and 6 of Directive 2002/58/EC on privacy and electronic communications and Articles 6, 7 and 17(1) of Directive 95/46/EC?

    2. Which United Kingdom authority(-ies) is (are) competent (i) to investigate whether there have been any breaches of the national law transposing each of the above-mentioned provisions of Community law arising from the past trials of Phorm technology carried out by BT and (ii) to impose any penalties for infringement of those provisions where appropriate?

    3. Have there been any investigations about the past trials of Phorm technology by BT and what were their results and the conclusions of the competent authority(-ies)? Are there ongoing investigations about possible similar activities by other ISPs?

    4. What remedies, liability and sanctions are provided for by United Kingdom law in accordance with Article 15(2) of the Directive on privacy and electronic communications, which may be sought by users affected by the past trials of the Phorm technology and may be imposed by the competent United Kingdom authority(-ies) including the courts?

    5. According to the information available to the United Kingdom authorities, what exactly will be the methodology followed by the ISPs in order to obtain their customers' consent for the deployment of Phorm technology in accordance with the relevant legal requirements and what is the United Kingdom authorities' assessment of this methodology?

    Given the urgency of this matter I would highly appreciate receiving your reply within one month of receipt of this letter.

    Yours sincerely,

    Fabio Colasanti

    Republished: Phorm's History

    Originally published on 13 August 2008

    I've noticed that the number of links on the main page to stories about Phorm has grown quite large, so to make the stories a more useful resource I decided to create an index with all the Phorm links on it.
    The newest entries can be found at the bottom of the page, and all links will open in a new window/tab.