A Practical Demonstration of what IPB will allow

There have been numerous write-ups of the threat that the Draft Investigatory Powers Bill poses to our privacy and security.

The intention of this post is not simply to repeat those, but to provide a practical demonstration of exactly the kind of information that the proposed powers would compel your Internet Service Provider (ISP) to record.

As well as demonstrating what an ISP would soon be collecting (and how simple it is to extract), we'll look at the issues the IPB presents in the context of the information we've extracted.

As the IPB isn't explicit about exactly what it allows, especially in terms of techniques, I've made some assumptions (though I believe they're fair and reasonable).

Most of the results were exactly what I expected, but I think describing them explicitly is probably more helpful than not. To that end, I've tried to keep the language as accessible as possible - those who understand how tech works at the network level are unlikely to find many surprises here.



Testing Methodology

I took a number of packet captures, all at the internet-facing interface of my firewall, so the data collected is what would be seen downstream at the ISP's recording system (or, in fact, what could be seen if Mass Equipment Interference led to my ISP-provided router being compromised).

Captures were taken when I was the only person home, so the traffic I captured was either generated by my activities, or from one of the systems I run.

Captures were started, and then I simply went about my business as usual (for example: running a security audit on a server, followed by a browse of the news).

Where there was something that struck me as likely to generate interesting traffic, I made a note of the time as a reminder to look at it later.

I've constrained the data in this post to that generated by using simple extraction techniques - by performing more advanced processing of the captures I was able to correlate information in order to reach additional conclusions, but to keep the technical barrier to understanding this post low, I've omitted that information.



  • I'm under no illusions that my home network is anything like the "average" internet user's. Some connections are automatically routed out via VPNs, others via Tor, but (at time of writing) I still allow other connections to go directly via my ISP.
  • I also block a lot of advertising domains, and most outgoing HTTP connections are routed via a caching proxy, so my request fingerprint will differ from the average. 
  • What that means is that the results below come from a network which should be reasonably well controlled in terms of what makes it onto the wire in cleartext. For the average user, the situation is probably notably worse. 
  • There is also, of course, always going to be some selection bias. I'm hardly likely to search for goat porn when I know a packet capture is running - nonetheless, I tried to keep my habits as normal as possible.
  • Finally, the captures were taken over the course of hours - whereas the IPB will be in force for months, if not years, until it's repealed or struck down - so those running it will be able to observe and infer patterns that my limited test cannot.
  • Remember that what's shown below will be performed, by default, against every Internet user in the UK.


Web Browsing

List of HTTP Hosts accessed

Let's start by listing every site accessed via HTTP - we'll simply be extracting the Host header (though in reality, the IPB will want the source and destination IPs recorded too)

ben@milleniumfalcon:/tmp$ tshark -q -r capture1.pcap -Y "http.host" -T fields -e http.host | sort | uniq -c
      3 10jp2ew.m.ns1p.net
      6 13vqq7w.m.ns1p.net
      1 3jsnc7xbng.r.ns1p.net
      2 40.media.tumblr.com
      5 a.deviantart.net
      2 apex.go.sonobi.com
    50 arstechnica.com
    35 arstechnica.co.uk
      1 beacon.errorception.com
      7 benscomputer.no-ip.org
      1 b.huffingtonpost.com
      2 b.ns1p.net
      1 capture.condenastdigital.com
      3 cdn.arstechnica.net


Just from the output above, during the capture period, we can see that I was actively browsing arstechnica (both the US and UK sites).  The first capture I took (primarily getting some work done, with a bit of browsing) returned 90 different domain names, and it's fairly easy to identify which of those I took a prolonged interest in and which I (maybe) read a single page and then closed.

Over time, that obviously allows us to ascertain not only which sites I frequent, but by categorising those sites, the type of information I actively seek out. 
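As a rough sketch of how that categorisation might work - the hostnames and category labels below are invented for illustration, and a real watcher would use a commercial classification feed rather than a hand-built table:

```shell
# Hypothetical host -> category lookup table
cat <<'EOF' > /tmp/categories.txt
arstechnica.co.uk tech-news
arstechnica.com tech-news
b.huffingtonpost.com general-news
EOF

# Hosts as extracted from the capture (sample values standing in
# for real tshark output)
cat <<'EOF' > /tmp/hosts.txt
arstechnica.com
arstechnica.co.uk
b.huffingtonpost.com
EOF

# Sort both files, join on hostname, then count visits per category
sort /tmp/hosts.txt > /tmp/hosts.sorted
sort /tmp/categories.txt > /tmp/cats.sorted
join /tmp/hosts.sorted /tmp/cats.sorted | awk '{print $2}' | sort | uniq -c
```

Run against a real capture, the per-category counts quickly sketch out the kind of content a user seeks.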


HTTPS offers little protection - SNI

Wherever you find discussion of the IPB, you'll find someone commenting that moving to HTTPS will save everyone. This is a common misconception, and has been since (at least) 2003.

Server Name Indication (SNI) sends the FQDN of the service you're trying to access as part of the SSL handshake, and extracting it from a PCAP is incredibly simple to do

ben@milleniumfalcon:/tmp$ tshark -q -r capture1.pcap -Y "ssl.handshake" -T fields -e ssl.handshake.extensions_server_name | sort | uniq -c

      1 0.client-channel.google.com
    14 abs.twimg.com
      2 accounts.google.com
      1 ajax.googleapis.com
      1 api.facebook.com
      3 apis.google.com
      5 a.thumbs.redditmedia.com
      3 benscomputer.no-ip.org
      1 blog.torproject.org
    .... snip ....

We've now got the same information as we extracted by pulling the Host header from HTTP connections, only this time for anything I connected to via HTTPS. This is particularly significant given the misconception I noted above - many people assume that using HTTPS will hide their destination, when in fact it only hides the URL path (we'll look at partially circumventing even that in a moment). We cannot see which article I read on the Tor Project's blog, but we can see that I visited the blog.

It's a long list, and I wasn't even browsing the net that much - I guess it's easy to forget just how media-rich the web is nowadays.

So, pulling the domain name you're accessing from HTTPS connections is trivial - expect it to happen.


HTTPS offers little protection - Referers

Although, by default, most modern browsers won't load an HTTP resource within an HTTPS page, some still do. There's also the question of where you go after visiting an HTTPS site - both of these can leak information about the page you were accessing.

As a practical exercise, let's take Reddit as an example.

Reddit is served over HTTPS, however I want to try and identify which subreddits you're subscribed to, and if possible your username.

Directly capturing Reddit traffic does me no good, I can see you're accessing the site (see SNI above) but that's it.

Unfortunately, a lot of the stories/images submitted to Reddit are served via HTTP (even Imgur, the de facto image hosting service for Reddit). So let's use the Referer header to identify where you've been on Reddit (the traffic for this example was ever so slightly staged)

ben@milleniumfalcon:/tmp$ tshark -q -r reddit-referer-example.pcap -Y "http.host" -T fields -e http.referer | sort | uniq -c | grep reddit.com

      1 https://www.reddit.com/r/AskNetsec/comments/3sjblq/my_usernamepassword_for_gamestop_was_online_in/
      1 https://www.reddit.com/r/aww/
      2 https://www.reddit.com/r/netsec
      3 https://www.reddit.com/r/nsfw

We can see I've been in two netsec-related subreddits, viewed NSFW (:D) and followed a single link in /r/aww.

It doesn't immediately tell us what I'm subscribed to, but repeat the capture over a period of time, and you'll soon start to get an idea. For those Reddit users (it happens) who get a bit, err, infatuated with someone in GoneWild - if you're clicking links to him/her from their profile page, that'll also be pulled out by the command above.

Similarly, if you've a habit of locating older news stories by looking at your own profile and then clicking the link to the item, we can extract the username. Over time, if that username appears frequently, we can begin to surmise that it's your handle.

Looking at slightly more average behaviour, even if I'm not entering the subreddits themselves, when I click links from the front page we'll see a referrer containing just the site root (https://www.reddit.com/) rather than a subreddit path.

Over time, we can identify the category of subreddit I've likely subscribed to (in order for the news to appear on my front page), especially if the content being viewed is unlikely to fall into the "default" subreddits. Where unique links have been posted to a sub, we should even be able to narrow down to specific subs. 
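Stripping the extracted Referer URLs down to subreddit names is a one-liner - the URLs below are invented samples standing in for real capture output:

```shell
# Sample Referer values, as pulled from a capture with tshark above
cat <<'EOF' > /tmp/referers.txt
https://www.reddit.com/r/netsec
https://www.reddit.com/r/netsec/comments/abc123/some_story/
https://www.reddit.com/r/aww/
https://www.reddit.com/
EOF

# Keep only URLs containing /r/<name>, reduce each to the subreddit
# name, then count how often each one appears
sed -n 's|^https://www.reddit.com/r/\([^/]*\).*|\1|p' /tmp/referers.txt \
    | sort | uniq -c
```

Front-page-only referers (the last line) simply drop out of the match, so what's left is a per-subreddit visit count.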

The caveat here is that the IPB isn't 100% clear on what is and isn't allowed. It reassuringly says that only connection data (rather than content) will be recorded, but the cynic in me can easily see the Referer header falling into the connection category (it shows where you came from), if not now then in the future.


Third Parties

LinkedIn is leaky

I don't think anyone genuinely associates LinkedIn with privacy, but I was caught slightly off guard by this one.  

I was reading a story on The Register, and the pages contain social media widgets, including LinkedIn buttons. So we, of course, expect to see a request go out to LinkedIn.

Looking at the request though, it's (oh so kindly) disclosing that I have a Google Mail account

Cookie: bcookie="...snip....utmcsr=mail.google.com|utmccn=(referral)|utmcmd=referral|utmcct=/mail/u/0/;...snip"

To my shame, I've obviously clicked a link in one of LinkedIn's emails at some point, and they've set the referrer details in their Google Analytics cookies.

We know I came from Google Mail's web interface, rather than it being a convenient coincidence, because they've got utmcct set to /mail/u/0 - which is the beginning of the URL path when viewing a mail.
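Splitting those Google Analytics fields out of a captured Cookie header is trivial. The cookie value below is a cut-down, invented sample shaped like the one above:

```shell
# A captured Cookie header value (invented sample; the real one was
# much longer)
cookie='bcookie="id=123&utmcsr=mail.google.com|utmccn=(referral)|utmcmd=referral|utmcct=/mail/u/0/"'

# Split on the | separators and keep just the utm* key=value pairs
echo "$cookie" | tr '|' '\n' | grep -o 'utm[a-z]*=[^";]*'
```

utmcsr is the referring site and utmcct the path on it - exactly the fields that gave the game away here.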

Twitter and the other social networks at least have the decency to serve all their widgets over HTTPS, so this issue is avoided (at least from the point of view of a watcher on the network).

The IPB should almost definitely prohibit collection of cookies, but it's still worth keeping in mind as an example of how the requests browsers make may reveal more about you than you realised.


Advertising becomes worse

We all already know that advertisers like to track us. Unfortunately, the IDs they use to do so are particularly useful to someone watching on the network, especially when considering mobile devices.

Take (for example) browsing a couple of news sites (picking up some tracking cookies in the process). You later go out, taking that laptop with you (for sake of example, you go to Starbucks). If whilst out, you access any site which uses the same advertising network, that ID will be sent over the network. Anyone watching both networks (as the Govt will be under the IPB) can now tie you to both locations - assuming, of course, that the advertising networks are using HTTP, which most seem to insist on doing.
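Correlating such an ID across two capture points is a one-liner once the IDs have been extracted - a sketch, with invented cookie IDs standing in for the values pulled from two captures:

```shell
# Tracking IDs extracted from captures taken on two different
# networks (values invented for illustration), sorted for comm
printf '%s\n' 'uid=a91f3' 'uid=77c02' | sort > /tmp/home.txt
printf '%s\n' 'uid=a91f3' 'uid=deadb' | sort > /tmp/cafe.txt

# Any ID seen on both networks ties the same device to both locations
comm -12 /tmp/home.txt /tmp/cafe.txt
# -> uid=a91f3
```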

The same, of course, also goes for Social Media widgets (which is part of the reason those icons are blocked by default on my site)



Of course, we don't just use the net for browsing, we also use it to communicate with others, so let's take a quick look at what we can pull out of that.


Instant Messaging

It's the work of a moment to see exactly which services you've been connecting to (the example below is Jabber) and, depending on the protocol, the name of the service (for Jabber, we can pull the service name from the XMLNS)

ben@milleniumfalcon:/tmp$ tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 5222" -T fields -e ip.dst | sort | uniq -c 


From the XMLNS, we can see that these are 

  • chat.facebook.com
  • talk.google.com

So, we've a good idea of what chat services I use, and know exactly which protocol I use to do so.

The same can also be achieved with Skype and other messaging services - as I don't really use them, XMPP was the easier option to find in a capture.
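For illustration, the service name sits in the clear in the opening stanza of the XMPP stream (strictly, in its `to` attribute), so pulling it out of reassembled traffic is a one-liner - the stanza below is an invented sample of what a capture would contain:

```shell
# An XMPP stream opener as it appears on the wire (sample value)
stanza='<stream:stream to="chat.facebook.com" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams">'

# Extract the destination service from the to= attribute
echo "$stanza" | sed -n 's/.*to="\([^"]*\)".*/\1/p'
# -> chat.facebook.com
```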



Email

Working out which mail services are in use is incredibly straightforward


# SMTP (plaintext and SMTPS)
tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 25" -T fields -e ip.dst | sort | uniq -c 
tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 465" -T fields -e ip.dst | sort | uniq -c 


# IMAP (plaintext and IMAPS)
tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 143" -T fields -e ip.dst | sort | uniq -c 
tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 993" -T fields -e ip.dst | sort | uniq -c 

# POP3 (plaintext and POP3S)
tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 110" -T fields -e ip.dst | sort | uniq -c
tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 995" -T fields -e ip.dst | sort | uniq -c 

It's obviously no harder to start extracting headers from any plaintext connections you find, to identify who you've emailed and when (or who you've received mail from).
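To illustrate: once a plaintext SMTP stream has been reassembled (tshark's "Follow TCP Stream" will do it), the envelope addresses fall straight out. The transcript below is an invented sample:

```shell
# A reassembled plaintext SMTP session (invented sample)
cat <<'EOF' > /tmp/smtp-session.txt
EHLO client.example.com
MAIL FROM:<alice@example.com>
RCPT TO:<bob@example.org>
DATA
Subject: hello
EOF

# The envelope sender and recipient are sitting in the clear
grep -E '^(MAIL FROM|RCPT TO):' /tmp/smtp-session.txt
```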

What mails I did capture leaving the network all transited over an encrypted connection, so I can't extract headers from them. We can, however, show which mailserver they went to

tshark -q -r LAN-64-sample.pcap -Y "tcp.dstport == 465" -T fields -e ip.dst | sort | uniq -c

A quick look shows that that's the mailserver for Suffolk County Council



DNS

To no-one's surprise, DNS requests give a lot away on their own. I don't need to know anything about the protocol you'll be using for you to disclose which server you're intending to communicate with - so long as you're performing a DNS lookup, that information lands straight in my lap

ben@milleniumfalcon:/tmp$ tshark -q -r LAN-64-sample.pcap -Y "dns" -T fields -e dns.qry.name | sort | uniq -c

I'd been expecting a lot of output, but the sheer number of lookups performed surprised me. If you're browsing anything which uses a CDN in anger, expect to see a lot of lookups, as the TTLs are traditionally on the order of seconds - in plain terms, if you're using HTTP Adaptive Streaming to, say, watch porn, expect there to be a lot of DNS queries identifying the service you're connected to.

The only positive about the amount of data DNS provides, is that it might prove not to be cost-effective to capture and record every DNS query placed by an ISP's subscribers. A single query isn't a lot of information to store, but the sheer volume of queries being placed should not be underestimated.

It'd also be incredibly cheap, in computational terms, to generate a high volume of bogus lookups to ensure that they couldn't effectively be stored.
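A sketch of what that might look like - this only prints the throwaway hostnames rather than actually resolving them, and example.com is a stand-in domain:

```shell
# Emit 5 random-looking hostnames; feeding a (much larger) batch of
# these to a resolver, e.g. via dig -f, would pad the ISP's DNS
# records with noise
tr -dc 'a-z0-9' < /dev/urandom | fold -w 12 | head -n 5 \
    | sed 's/$/.example.com/'
```

Scaled up, a few thousand junk names per hour costs the client almost nothing, whilst the recorder has to store every one.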


Device Identification

My firewall NATs, so we obviously can't look at the source IP to identify individual devices (even if my firewall didn't, the router would NAT for a downstream observer). That isn't enough, however, to prevent us from identifying at least some of the devices on my network.

If we're talking about the proposed Equipment Interference (EI) bits of the IPB, revealing the equipment you're running becomes more of a concern, as it potentially allows you to be targeted as a user of a certain bit of kit. In simple terms, EI is a touchy-feely name for compromising devices, and the IPB would permit the Government to do it en masse, in total secrecy.

So, what information can we extract and how?

An obvious starting point is to pull out User-agents from any HTTP requests

ben@milleniumfalcon:/tmp$ tshark -q -r LAN-64-sample.pcap -Y "http.host" -T fields -e http.user_agent | sort | uniq -c

      5 Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36
    757 Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/43.0.2357.130 Chrome/43.0.2357.130 Safari/537.36
      4 OSMC/Open Source Media Center

We can see clearly here that there were three devices making requests: two running Chrome, one likely on a Windows box, the other on a Linux box. The third is OSMC - XBMC on a Raspberry Pi.

Accuracy based on user-agent isn't always great - I know for a fact the first is a spoofed user-agent used by one of my crawlers. But, importantly, the third is an obscure enough user-agent that it's probably genuine.

There aren't any Android devices in there, though - I'd temporarily piped their connections over a VPN so that I could come back and specifically examine their footprint, so let's do that now.

With Android on your network, you'll see lots of requests to clients3.google.com, in particular some like the one below

GET /generate_204 HTTP/1.1
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.4.2; GT-I9505 Build/KOT49H)
Host: clients3.google.com
Accept-Encoding: gzip
Cache-Control: max-age=259200
Connection: keep-alive

Using the user-agent, we've now got a model number (GT-I9505 == Galaxy S4) as well as the version of Android being run. Even if we're not allowed to look at the path, accessing clients3.google.com means the device is likely running Android.
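Pulling those fields out of the captured User-Agent is, again, a one-liner (the sed pattern below assumes the Dalvik UA format shown above):

```shell
# The User-Agent as captured above
ua='Dalvik/1.6.0 (Linux; U; Android 4.4.2; GT-I9505 Build/KOT49H)'

# The model sits between the Android version and "Build/"
echo "$ua" | sed -n 's/.*Android \([^;]*\); \(.*\) Build.*/version \1 model \2/p'
# -> version 4.4.2 model GT-I9505
```

Mapping the model number to a retail name (GT-I9505 to Galaxy S4) is then just a lookup-table exercise.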

It's also possible to identify some of the apps installed

GET /dropsync/app-news?v=2.6.14&vc=5082601&i=com.android.vending&s=OZ46QQPQ2J74Z2KF2C&p=mrzg64m2wwia HTTP/1.1
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.4.2; GT-I9505 Build/KOT49H)
Host: android.metactrl.com
Accept-Encoding: gzip
Cache-Control: max-age=0
Connection: keep-alive

Even if we're not allowed to look at the path requested, the host header narrows the App down (at time of writing) to 5 apps. If we are allowed to use the path, we can see exactly what version of dropsync is installed.


Issues the bill presents

Much of what has been discussed above has always been possible, at a technical level. However, it generally requires that an attacker be on the route your packets take to the wider Internet. Traditionally, outside of specific warrants, that path has been guarded (however poorly) by our ISPs.

The Government now seeks to routinely occupy a very privileged position - that of a man in the middle - giving themselves incredibly invasive access.

So, in the context of what we've shown above, let's look at the issues the bill presents.


Is it like an Itemised Phone Bill?

The UN head of privacy has done a pretty good job of slamming the IPB, so instead, let's look at the claims in the context of the information above.

Taken at a very high level, the information we've just extracted shows who I contacted, and when, just like an itemised phone bill does. However, as with all things, context is very, very important.

How often do you make a phone call? How often do you browse the net? If something sensitive and personal is bothering you, are you more likely to call a helpline, or would you search the net first? If one of the search results takes you to http://thehaemorrhoidcentre.co.uk, it's going to appear in a list like the one above.

On average, according to Ofcom, we spend 31 hours a month browsing the net. Whilst some might spend close to that on the phone (I certainly don't), you don't tend to call someone for a few minutes, hang up and then call someone else throughout that period.

When you're seeking out information on a specific topic, you may well read multiple sites, whilst you'll probably make fewer than two calls (if any).

Simply put, taken over time, a history of what sites you've accessed is far, far more revealing than an itemised phone bill, even before it's taken in combination with the other communication records (who you've emailed, who you've Skyped etc).  


The Bill Has Protections

Like any draconian legislation, the bill has a number of "protections" to "ensure" that our rights are not infringed, so let's take a quick look at them

  • Communications data is defined as data describing who you communicated with, how and when - this will form the Internet Connection Record (ICR). Anything extra should be considered content.
  • ICRs will only be retained for 12 months, absent a retention order
  • Although the communications data will be recorded routinely by your ISP, those working for the Government will need to seek permission from various people depending on what they want to access.
  • If there's a need to access the information above, a member of the Police (for example) would need sign-off from a "Designated Superior" officer, just as they currently do under RIPA.
  • If there's a desire to intercept the content of your communications, then a Minister must sign off, then pass it to a "Judicial Commissioner" for a rubber stamp. Unless it's particularly urgent, in which case the Minister can effectively bypass that

All of the above have one very important thing in common - the protections are nothing but promises. Promises which can be ignored or repealed at any point in the future.

When the Snowden revelations started, there was a strong argument that what GCHQ were doing was illegal. The Government would now like to legalise those powers with promises that they'll handle the data "correctly" despite a history of ignoring what is and isn't legal.

There appears to be just one "technical" measure in place to control access to the records - the much vaunted "Request Filter".

Although the Home Office clearly doesn't like hearing it described as such, this appears to translate to nothing more than a big database (or several databases) containing the information, with an interface to query it, limited by assigned access level.

In other words, it's no real protection at all - the name is effectively just a nice sound bite for politicians to use.

But, not to dwell on kicking the current shower, there's actually more to consider than whether you believe this Government will abide by the protections.


It's not (just) about the Conservatives

When introducing powers that have a potential impact on civil liberties, Government has something of a tendency to trot out aged tropes such as "If you have nothing to hide, you have nothing to fear". Ignoring the somewhat problematic credit for that quote, the issue with this is it only focuses on the here and now.

Imagine that your teenage daughter has fallen pregnant, and doesn't want to keep the baby. She searches around to find an abortion clinic and eventually aborts the pregnancy. That's perfectly legal, and should present no issues.

9 months into the future, we have a new Government, comprised of die-hard anti-abortionists (I'm not going to attribute religion here).

One of the first things they do, is overturn abortion legislation, declare abortion a serious crime and start a witchhunt. During the course of which, they examine the ICR's to get a list of any users who accessed abortion related sites. You can see what happens next.

It sounds far fetched (and hopefully is), but the point is - we cannot simply trust the current Government, we also have to consider what successive Governments may be like. Power corrupts, and handing politicians the power to closely observe what the populace are doing is not, and has never been, a wise idea.

Allowing the current government to introduce the apparatus for mass privacy invasion, with or without the draconian secrecy measures the IPB provides, is a huge risk, and one that our descendants will likely pay the cost of.


Competence is assumed 

The timing of the Draft IPB being released was unfortunate, coming as it did after the revelation that TalkTalk had suffered a massive data breach. Note that it's not the first breach TalkTalk have suffered, even this year. They're also not the only ISP to have suffered a breach in recent history.

It's these same ISPs who'll be required to record and store the details we've discussed above. So not only is there a need to consider the motivations of future politicians, we've also got to consider whether those holding the data are actually capable of protecting it.

Given TalkTalk's breach, and more importantly, the terrible manner in which they've attempted to handle it, I suspect there are more than a few people in the UK who could say they've little faith TalkTalk could hold such data securely.

Make no mistake, the collective Internet Connection Records of a large ISP will be an incredibly valuable target. Gaining access to that collection of data would provide a criminal with a range of possibilities, with the obvious ones being

  • Blackmail/Extortion (based on browsing history)
  • Targeted Phishing
  • Sale of the data

The ISPs will, obviously, take "reasonable measures" to protect our data - but given that a major ISP's "reasonable measures" seemed to exclude checking for a SQL injection vulnerability (we're not talking about a sophisticated hack here), can anyone really feel confident that the measures will actually be sufficient?

Of course, there's also the possibility that a copy may be found on a laptop forgotten on a train at some point in the future.


Scope Creep

Once in place, systems and appliances with a very targeted, focused intention tend to drift in scope.

Some time ago, filters were introduced by the big ISPs in order to try and prevent access to Child Pornography. An aim that very few could criticise, though there were warnings voiced that introducing censorship apparatus onto the network risked it later being used for other purposes.

If we fast-forward a few years, we see the media industry taking BT to court in order to try and force them to use that same apparatus to block access to sites such as Newzbin and The Pirate Bay. The equipment introduced to filter child porn (criminal law) was now to be used to prevent users accessing a site allowing them to commit copyright infringement (civil law).

Whatever your feelings on copyright infringement, that's a huge deviation in scope, and more sites have since been added to that list. We also, of course, now have more comprehensive filtering enabled by default on our connections.

The means to potentially censor on a mass scale is already in place, and now politicians are moving to introduce widespread monitoring - if this were a novel I think we'd all be smelling a fish right now.

I should, at this point, confess to my own biases against the City of London Police. As a result of their public statements and actions, my perception of the CoLP is that they are technically illiterate, industry-paid henchmen. I'm sure not all their staff meet that description, but the publicly visible behaviour of that department is not what I'd expect from one empowered to arrest, and there's not much I'd personally put past them.

With that disclaimer in mind, there's no reason to believe the same won't happen here - we've already seen the CoLP travel to Leeds to arrest someone for leaking films, as a result of their close partnership with the Motion Picture Association of America (MPAA). 

Given the MPAA's history of "find a way", it doesn't feel too much of a stretch to suggest that at some point in the future the CoLP might start taking their own interest in ICRs. And given the MPAA's dislike of XBMC, things like the OSMC user-agent we extracted above could conceivably be used in concert with the Mass Equipment Interference provisions.

That it seems unlikely is undeniable; on the other hand, when the Websense filters were introduced, it seemed unlikely that they'd ever be used to filter content that is entirely legal to access, and yet here we are.



Conclusion

In this post, we've taken a look at some of the information that will be captured on a routine basis - but make no mistake, these are very simple examples.

There is nothing above which can be considered advanced in a technical sense; the additional information you could gain by correlating data captured over time would quickly draw behaviour patterns which aren't evident above.

Although there were a few surprises (another, technically focused article coming later), I've done a pretty good job of controlling the data that leaves my network - there are a number of outgoing connections which weren't identifiable in my PCAPs. Most of the packets in my capture were for VPN connections and similar, yet I've still been able to identify (and fix) significant failings.

There's no easy way to identify what the average user's connection will look like - I cannot ethically run a capture on someone else's network without their permission, and getting consent means their browsing habits will (even if unconsciously) change. But we can probably assume that for the average user things are far worse in terms of what may be observed on the network.


The powers the Investigatory Powers Bill would provide have the potential to negatively affect us all, whether through ISP incompetence or malicious behaviour by those in power.

With routine recording of the sites you visit, something you legally browse today could negatively impact you in the future.

In this post, we've only touched lightly on the issues presented by Mass Equipment Interference. Consider that your devices may be (legally) compromised, not because you're the target of an investigation, but because some arm of the Government is attempting to collect data.

We've not touched (at all) on the Technical Capability Notices (TCN) aspect of the bill - a TCN allows the Government to tell a supplier to make changes to their product, whether that's weakening encryption, inserting a backdoor or otherwise compromising the product.

What constitutes a supplier isn't defined, so will likely include makers of communications software such as WhatsApp and Snapchat (and almost certainly will, given how heavily public statements have focused on the End-to-End encryption used by WhatsApp) as well as manufacturers like Apple (who, hopefully will respond with two words).


Thanks to the heavy provisions the bill makes, the above will happen entirely in secret, with harsh penalties for those who disclose information of those activities. Secrecy to the extent that, if you're wrongly accused of a crime, exculpatory evidence gained as a result of use of the powers cannot be provided to your defence. There are no situations under which the bill considers it OK to release details of activities, capabilities or demands/requests.

The Government tell us that these powers are needed in order to combat terrorism, though those same politicians are unwilling to have an adult and open debate about the legislation they're trying to introduce. Instead we have Theresa May claiming the bill is simply the modern equivalent of an Itemised Phone Bill and Richard Graham trotting out "If you have nothing to hide, you have nothing to fear" (and inexplicably defending it by pointing out Goebbels would have said it in German??).

Neither claim is true, or constructive, especially given that even the bill's protections may not be being correctly represented. Incredibly invasive legislation is being pushed through by MPs who either do not understand the subject and the consequences, or worse, fully understand both but are motivated to try and pull the wool over our eyes. Some of the Prime Minister's comments on encryption have been laughable, but the IPB is way beyond a joke.

If the IPB is allowed to pass, not only will it destroy our civil liberties, but the possibility that a TCN might exist will mean that software and hardware built in the UK simply cannot be trusted by the rest of the world. You cannot ask a supplier to confirm there are no backdoors when they're legally compelled to lie about the presence of any which may have been added. The Government has talked in the past about wanting to turn Shoreditch into the next Silicon Valley, but is set to pass legislation which will end all possibility of that.


Thanks to the wonder that is party politics, if your MP is a Conservative there may be little hope of them (publicly) seeing sense. All the same, it only takes a few minutes to send them a message outlining your opposition to the bill, and if enough people do so, it at least means they cannot trot out the "we have a mandate from the public" line that they so love. So, head over to TheyWorkForYou, find the details of your MP and tell them what you think of the IPB.

Even if public support isn't enough to change MPs' minds, it may be sufficient to have the Lords do what they do best, and kick a festering piece of legislation to the curb.


In closing, consider that the following sites are all legal to access in the UK, but ask yourself whether you'd be comfortable with a record of you or any member of your family having accessed them being available to anyone who has access to the data (which as we've seen could be anyone in the future).

All perfectly legal things to view, but also all things I suspect most would like to keep private.