I recently saw a Spamhaus Block List (SBL) listing notification, issued on the basis of malware having been detected in a file delivered via HTTP/HTTPS.
As part of the report, they provide the affected URL (for the sake of this post we'll say it's
https://foo.example.com/app.exe) along with details of the investigation they've done.
Ultimately, that investigation is done in order to boil the URL back down to a set of IPs to add to their list.
Concerningly, this is literally just:

dig +short foo.example.com

which gives them output of the form:

CNAME1
CNAME2
220.127.116.11
18.104.22.168
They then run a reverse lookup (using nslookup) on those IP addresses to identify the ISP. The IPs are added to the SBL, and a notification is sent to the associated ISP.
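The entire investigation can be sketched in a few lines of Python - a hedged approximation of what dig +short followed by nslookup does, with function names of my own invention:

```python
import socket

def resolve_all(hostname):
    # Forward-resolve a hostname, returning (canonical name, alias
    # list, IP list) - roughly what "dig +short" surfaces: the CNAME
    # chain plus every A record the name currently points at.
    return socket.gethostbyname_ex(hostname)

def reverse_lookup(ip):
    # PTR lookup for an IP, as "nslookup <ip>" would do; used to
    # identify the hosting ISP. Returns None if no PTR record exists.
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None

canonical, aliases, ips = resolve_all("localhost")
isps = [reverse_lookup(ip) for ip in ips]
```

Note that every A record behind the name comes back in the one list - on shared hosting, or behind a CDN, that sweeps in IPs serving thousands of unrelated sites, which is exactly where the collateral damage comes from.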
In this case, the URL pointed to a legitimate file, though it had been bundled with some software falling under the Potentially Unwanted Application (PUA) category. The point of this post, though, is not to argue about whether it should have been considered worthy of addition.
The issue is that Spamhaus' investigation techniques seem to be stuck in the last century, causing potentially massive collateral damage whilst failing to actually protect against the very file that triggered the listing in the first place.
In case you're wondering why Spamhaus are looking for malware delivery over HTTP/HTTPS, it's because the SBL has URI-blocking functionality: when a spam filter (like SpamAssassin) detects a URL in a mail, it can check whether the hosting domain resolves back to an IP in the SBL, and mark the mail as spam if it does - in effect limiting the ability to spread malware via links in email, which is undoubtedly a nice idea.
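For completeness, the check a spam filter performs is itself just a DNS lookup: reverse the IP's octets, append the list's zone, and see whether the resulting name resolves. A minimal sketch (the helper names are my own; 127.0.0.2 is the conventional DNSBL test address):

```python
import socket

SBL_ZONE = "sbl.spamhaus.org"

def sbl_query_name(ip, zone=SBL_ZONE):
    # Build the DNSBL query name for an IPv4 address by reversing
    # the octets and appending the list's zone, e.g.
    # 127.0.0.2 -> 2.0.0.127.sbl.spamhaus.org
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip):
    # A listed IP resolves to a result code in 127.0.0.0/8; an
    # unlisted one returns NXDOMAIN, which raises gaierror.
    try:
        socket.gethostbyname(sbl_query_name(ip))
        return True
    except socket.gaierror:
        return False
```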
Just to note: although they make it difficult to identify how to contact them about this kind of thing, I have attempted to contact Spamhaus about it (including via Twitter).
It also seems only fair (to Spamhaus) to note that I saw a Netcraft incident related to the same file, and Netcraft don't even provide the investigative steps they followed. So not only might Netcraft be falling into the same traps, but there's a lack of transparency preventing such issues from being found and highlighted.
For some slightly obscure reasons I've recently found myself looking at the Bitfi hardware wallet and some of the claims the company make, particularly in relation to whether or not it's actually possible to extract secrets from the device.
The way the device is supposed to work is that, in order to (say) sign a transaction, you use an on-screen keyboard to enter a salt and a passphrase of more than 30 characters.
The device then derives a private key from those two inputs, uses it, and then flushes the key, salt and passphrase from memory.
Each time you want to use the device, you need to re-enter the salt and passphrase - the idea being that if it never stores any of your secrets, then there's nothing to extract from a seized or stolen device.
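That claimed model is easy to sketch: feed salt and passphrase through a deterministic key-derivation function, use the result, then throw it away - re-entering the same inputs reproduces the same key, so nothing need ever be stored. To be clear, the scrypt call and parameters below are my own illustrative stand-in, not Bitfi's actual algorithm:

```python
import hashlib

def derive_private_key(salt, passphrase):
    # Deterministically derive a 32-byte key from salt + passphrase.
    # Illustrative only: scrypt with these work factors is NOT
    # Bitfi's derivation, just a stand-in for "derive on demand,
    # store nothing".
    return hashlib.scrypt(
        passphrase.encode(),
        salt=salt.encode(),
        n=2**14, r=8, p=1,  # work factors: my assumption, not Bitfi's
        dklen=32,
    )

# Re-entering the same inputs reproduces the same key, so the
# device never needs to persist it.
k1 = derive_private_key("abc123", "a" * 30)
k2 = derive_private_key("abc123", "a" * 30)
```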
From Bitfi's site we can see this claim wrapped up in marketing copy:
The Bitfi does not store your private keys. Ever. Your digital assets are stored on the Blockchains, when you want to make a transaction with your assets (move them, sell them, etc.) you simply enter your 6-character (minimum) SALT and your 30-character (minimum) PASSPHRASE into your Bitfi Legacy device which will then calculate your private key for any given token “on-demand” and then immediately expunge that private key.
For various reasons (see Background) I was somewhat dubious about the veracity of this claim, and ultimately ended up looking over their source code in order to try and verify it.
This post details the results of that examination. The following items should be noted:
- Although not explicitly vulnerabilities, the issues noted below have been submitted in advance to the Bitfi dev team (I did ask previously via email whether email or Bitfi.dev was preferable for raising issues).
- Incomplete sources are published on Bitfi.dev - example here - so although I include code snippets in this post, they're updated versions of code that's already public; I'm not simply publishing their code on the net :)
- I probably will make some mistakes: I've been ill, so focusing is hard, and I dislike C# so it's more than possible something's changed without me realising.
- This is the result of a fairly short code review, and under no circumstances should it be viewed or characterised as a full audit
- In the sources, code version shows as v112
The result is a long analysis, so some may prefer to jump to the Conclusion.
It's that time of year - time to renew car tax. I figured I'd give the monthly direct debit a go and see whether paying the extra little bit is worth avoiding the yearly pain of remembering you need to find a few hundred quid up front.
For anyone who's not used it yet, the process of setting it up is smooth and easy (in an almost distinctly non-government-IT way); unfortunately, it turns out there's a fairly big issue with the final step.
I should be fair, and point out that the service is provided by DirectGov rather than the DVLA directly, but IMHO it remains the DVLA's responsibility.
In today’s connected world, passwords are absolutely everywhere. We are constantly asked to
create new passwords, whether for a Facebook account, a financial management system, or a new device.
Whilst new accounts seldom come with a default password, many devices do ship with a generic
username and password. Despite wide awareness of the importance of password control, many
people still fail to change these default passwords.
Many will have watched the recent releases of user passwords from Sony (and others) with interest. A lot of people won't, however, realise why Sony's practices were so poor. For many, storing passwords means just that, purely because they aren't aware of the methods available to make it a lot harder for an attacker to gain access to users' passwords.
Whilst network security obviously plays a very important part, even when that fails it should be almost impossible for an attacker to tell you what your password was based on nothing but a database dump. In this short post we'll examine exactly how passwords should be handled and stored in a database.
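As a concrete sketch of that handling, here's the minimal pattern using Python's standard library - PBKDF2 with a per-user random salt. bcrypt or scrypt would be equally valid choices, and the iteration count is an assumption to be tuned per deployment:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow; tune for your hardware

def hash_password(password):
    # A fresh random salt per user means two identical passwords
    # still produce different stored hashes, defeating rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
```

Store only the salt and the digest: given a database dump, an attacker has to brute-force each password individually at 600,000 hash rounds per guess, rather than reading it straight out of a column.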
News recently broke that Tesco Bank's Android App refuses to run when Tor is also installed on the handset, presumably in the name of security.
So, out of morbid curiosity, I thought I'd take a quick look at just how effectively various banking apps are secured. Banks, after all, should be at the forefront of security (even if they often aren't).
To start with a disclaimer: personally, I think using banking services on any mobile device is a bad idea from the outset, and some of the results definitely support that view. I've only taken a cursory look, and not made any attempt to disassemble any of the apps.
As part of my appeal against the suspension I noted that that's arguably not GDPR compliant - a phone number is (undoubtedly) PII, and is not required in order to provide the service. For Twitter to hold that number requires consent, and it's unlawful for them to withhold the service if consent is not given for non-essential data processing.
Part of the reason for my objection was because Social Media companies (in the form of Facebook) have already proven they cannot be trusted with things like mobile phone numbers.
Presumably Twitter weren't happy with the fact that I needed to use Facebook as an example, as they've now gone ahead and had a data processing screw up of their own.