The latest set of SCADA-related exploits has set my brain wondering about the systems we assume are safe. Some have basic security checks, and some have none at all, so are we taking the security of these systems for granted?
My brain sometimes wanders to some interesting places, and I thought it might be interesting to pursue these a little further, despite the tiny likelihood of them actually occurring in real life. The point, of course, being: it may be unlikely, but it could happen!
People have been tinkering with our vehicles' onboard computers for quite some time, but usually for some form of benefit. What if your attacker wanted to cause real harm?
This line of thought began when I was wondering what percentage improvement vented discs give over solid discs!
The ABS system on most cars uses a Hall effect sensor to detect the notches and ridges of an ABS ring on each wheel hub. Most send a square wave back to the onboard computer, which then calculates the speed of rotation and takes the appropriate action when a difference between wheels is noticed.
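To make the numbers concrete, here's a rough sketch of how a controller might turn that pulse frequency into a road speed. The tooth count and wheel radius are illustrative assumptions on my part, not values taken from any real ECU:

```python
import math

TEETH_PER_REV = 48     # notches on the ABS ring (assumed for illustration)
WHEEL_RADIUS_M = 0.31  # rolling radius in metres (assumed for illustration)

def wheel_speed_kmh(pulse_frequency_hz: float) -> float:
    """Convert the square wave's pulse frequency into road speed in km/h."""
    revs_per_second = pulse_frequency_hz / TEETH_PER_REV
    circumference = 2 * math.pi * WHEEL_RADIUS_M
    metres_per_second = revs_per_second * circumference
    return metres_per_second * 3.6

# With these assumed figures, ~800 Hz corresponds to motorway speed.
print(round(wheel_speed_kmh(800), 1))  # → 116.9
```

The key point is that the controller only ever sees a frequency: it has no independent way of knowing how fast the wheel is actually turning.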
So, what would happen if we performed a man-in-the-middle (MITM) attack on a target's ABS system?
It'd actually be reasonably trivial: by inserting a (small) piece of hardware between the ABS sensor and the ABS unit (most sensors have a connector after ~1 m of cable to allow easy replacement), we could control the frequency being reported to the ABS unit.
So, rather than the original topology:
ABS Sensor -> ABS Controller -> ABS Hardware
we'd now have the topology:
ABS Sensor -> MITM Attack Hardware -> ABS Controller -> ABS Hardware
Now imagine we placed a small unit on the rear bumper of the car, using Near Field Communication (NFC) to talk to the MITM attack hardware. Using ultrasonic detection, it could tell when the car was being closely followed by another (and the unit could be powered by a small photovoltaic cell).
When the bumper unit detected a tailgater, it could send a message to the MITM attack hardware, which would then dramatically increase the frequency of the square wave being sent to the ABS unit. The ABS unit would think that wheel had lost traction and, if it had built-in traction control (a lot do!), would apply the brakes to bring the wheel back under control, giving no warning to the driver of either car.
Depending on the car, the brakes would be applied either on just the wheel we attacked or, where the car uses a shared hydraulic circuit, on both wheels of that circuit.
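The comparison logic itself needn't be complicated, which is exactly why a spoofed frequency is so plausible as an input. A minimal sketch of the kind of check a traction control unit might perform (the 10% threshold is my own illustrative assumption, not from any real system):

```python
SLIP_THRESHOLD = 0.10  # assumed: intervene on a wheel >10% faster than reference

def wheels_to_brake(wheel_speeds: list) -> list:
    """Return indices of wheels spinning significantly faster than the slowest.

    The slowest wheel is taken as the best approximation of true road speed;
    anything much faster is treated as having lost traction.
    """
    reference = min(wheel_speeds)
    if reference == 0:
        return []
    return [i for i, speed in enumerate(wheel_speeds)
            if (speed - reference) / reference > SLIP_THRESHOLD]

# All wheels genuinely at 100 km/h, but MITM hardware reports 130 on wheel 2:
print(wheels_to_brake([100.0, 100.0, 130.0, 100.0]))  # → [2]
```

From the controller's point of view, the spoofed wheel is indistinguishable from one genuinely spinning on ice, so it brakes it.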
An intelligent attacker could quite easily design the hardware so it had a high probability of being destroyed in the crash, and could even connect a third piece of hardware to illuminate the brake lights once it was too late.
Of course, how hard the brakes would be applied is entirely dependent on the software in the ABS unit. There may be an upper limit explicitly to prevent the ABS from accidentally locking the wheels, but the point is: we don't know for sure that there is!
In reality, you'd have to really annoy someone for them to go to this level of effort, especially as it would require physical access to the car. If done correctly, however, it'd be possible to make the accident look like a case of driver error. Without specifications on how our ABS systems work, it's impossible to tell (without trying) whether such an attack would be effective. You'd hope that those who manufacture these systems have covered all the attack surfaces, but as the issue with SCADA in prisons illustrates, things do get forgotten!
This is an attack that, sadly, is probably already in use somewhere!
Most internet banking now requires the user to provide a one-time authentication (OTA) token, usually generated by a small calculator-like device (such as HSBC's SecureCode). By requiring these, the banks have shifted liability onto the user, as you'd need to provide both the device and its PIN to anyone wanting access to your account.
Of course, a good attacker only needs access to your account once to take all your money!
By performing a MITM attack, it's more than possible to do so, and it wouldn't necessarily need to be as targeted as the car scenario. Let's take a quick look at the login process before the attack:
- User visits banking website (usually over an SSL connection)
- User enters account details
- User generates OTA token using their device and enters it into the page
- User is logged into their account
To be successful, the attack cannot change any part of the process that the user can see. Unfortunately, we don't need to.
By installing software on the user's computer (or indeed a piece of dedicated hardware on the network) and reconfiguring the user's system to use the new proxy server, all communications will be passed through it (you could actually use a server on the other side of the world if necessary!). By acting as a transparent proxy, it's possible to intercept the encrypted data and decrypt it (it's a little more complicated than that, but it falls outside the scope of this post!).
So, now when the user enters their OTA token, we have a copy of it. We pass the login back to the bank as usual, but then completely hijack the session. We could disable the user's internet connection whilst we work (though some would deem this suspicious), but we only need a few minutes to send all the money in that account somewhere else, set up a standing order, etc.
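The reason hijacking works at all is that the one-time token typically only authenticates the login itself: once it's verified, the bank issues a session identifier, and subsequent requests are authorised by that identifier alone. A minimal sketch of that pattern (the function and field names here are illustrative, not any real bank's API):

```python
import secrets

sessions = {}  # session_id -> account name

def login(account, otp_verified):
    """Issue a session identifier, but only if the one-time token checked out."""
    if not otp_verified:
        return None
    session_id = secrets.token_hex(16)
    sessions[session_id] = account
    return session_id

def transfer(session_id, amount):
    # Note: no further one-time token is demanded here. Whoever holds a valid
    # session identifier can act on the account until the session ends.
    return session_id in sessions

sid = login("alice", otp_verified=True)
print(transfer(sid, 500.0))  # → True
```

So anyone who can ride along on an authenticated session inherits everything the token was supposed to protect, unless the bank demands a fresh token for each sensitive action.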
As the user logged in normally, the bank would remain largely unaware until the user complained and, based on the Chip and PIN issues experienced previously, may well blame the user themselves.
Ultimately, in our capitalist society, the decision largely comes down to cost vs. benefit. In theory, someone could attack your ABS system, but given the small probability of this happening, is it actually worth the cost of securing the system (by sending a signed digital signal instead of an analogue wave)? Given that the decisively low-tech option of cutting the brake lines would be just as effective, if more obvious, probably not.
Other attack vectors, though, do seem more plausible. There's a real financial incentive for performing MITM attacks against Internet Banking sessions, and it'd be trivial to implement (in fact current phishing schemes could easily be adapted).
These systems are often a closed book, and although they are reviewed by others, we as users have to trust that both the developer and the reviewer have made no mistakes and cut no corners. In the case of Chip and PIN, some users suffered financially as a result of holes in a system that the banks claimed was infallible.
Sadly, it's not clear that there's any real solution to the mess that currently exists. Companies will continue to develop their own protocols (which is a good thing) and tell users that they are completely secure (which is a very bad thing to do!). The real issues in many cases stem from companies not wanting to admit that they were wrong (Chip and PIN again being a prime example).
It does make you wonder just how secure the systems we use every day really are.