Building Network Resilience Through Sensible Reporting Policies

Life as an administrator can sometimes be pretty hard; we often face difficult choices knowing that we'll need to step on some toes. Where security is concerned, most will decide that it's their job and that toes might just have to be stepped on.

For a user, however, the choice isn't quite so clear cut. Users may be a bane at times, but they can also be a valuable asset in maintaining the security of your network.

When we plan a network, meticulous preparation goes into deciding what will sit on either side of the firewall, which nodes will be on which VLAN, and all sorts of other aspects that really don't interest users at all. The reality, however, is that all this preparation can be undermined by a single mistake, or a single bug in a piece of deployed software.

Sensible policies are just one of the ways to mitigate this risk, but a lack of clear policy (or worse, a poorly written policy) can compound the problem exponentially.


Reporting Policies

Although often neglected, clear reporting policies are important on any corporate network. Your users are often better placed to stumble across security risks than the network administrator; after all, it's the users who use the deployed software on a day-to-day basis. But how many users would know how to report an issue they discover? In a large business, they may not even know who to report it to.

Worse than that, they may opt not to report something they've found in case they're punished for their good deed. A major security hole can bruise the ego of a proud administrator, and what user wants to be reported for "tinkering" with the network?

Sadly, it's too common a mindset. I've been in situations where I've discovered a security issue and, on asking who I should report it to, been told "I wouldn't bother, they'll probably just accuse you of trying to break into the network". Nine times out of ten, the hole has been fixed and I've not been accused of deliberately tampering with the network.


Would You Report It?

As an administrator, you probably feel pretty confident that you would report a security hole if you found one in someone else's network or software. The problem is, that's an administrator's view and not a user's, so instead consider the following example.


You are innocently browsing the Internet when you stumble across a child abuse (porn) site.

Now, considering that possession of child pornography is a strict liability offence in the UK, do you:

a) Report the site to the police, or

b) Close the window and do nothing (except clear your cache) for fear of being branded a paedophile by the police?


It's a difficult decision to make, and though it may seem an extreme example, there is a clear parallel with the decision a user makes: being punished for doing the right thing can shatter lives in both cases (a particularly vindictive sysadmin could try to have the user dismissed, after all).

We'd all like to believe that we would report the site, but how many of us can say for certain that we wouldn't instead pursue option b)? I've reported every security issue I've ever discovered, and yet I still can't say for sure what I would do in the situation above. I'd hope I would take the responsible route, but with the police being a bit of a wildcard in this area, it's hard to be sure.


Compounding the Problem

To make things worse, a user may discover a security hole, fail to report it, but continue to exploit it. If and when they are caught, not only are they likely to be in a lot of trouble, but as the sysadmin you're not going to look great for failing to notice the hole. Having a hole spotted and reported by a user may hurt your ego, but it pales in comparison to having that fact aired to all in the user's disciplinary meeting.

Worse, imagine that a disgruntled employee (or even a third party) exploits this flaw. When word spreads that the network has been compromised, do you really want to add a chorus of users saying "Oh I've known you could do that for ages"?


Use Your Users Constructively

The aim of this article is not to get you to actively ask your users to break your network, but simply to ensure that they are encouraged to report any issues they do find. This is where a good, clear policy is important; not only must it define how to report an issue (and to whom), it also needs to make clear that users will not be punished for doing the right thing.

If your company runs a rewards-for-ideas scheme, you could consider putting valid reports forward under it. If so, be careful to avoid a situation where users actively try to break your network in order to earn a 'bonus'.

How users report issues needs to be based on the needs of your company, but for simplicity's sake it shouldn't differ too much from your standard problem-resolution procedures. If you run an electronic reporting system, for example, consider adding a category for security issues (which you'll obviously want to assign a high priority).
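As a purely illustrative sketch (the ticket structure and the create_security_ticket helper below are hypothetical, not tied to any particular ticketing product), the point is simply that a security report travels through the same queue as any other fault, just under its own category and at a raised priority:

    # Hypothetical sketch: file a user's security report into an existing
    # ticket queue under its own category, at a raised priority.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Ticket:
        reporter: str
        summary: str
        category: str = "General Fault"
        priority: int = 3               # 1 = highest, 5 = lowest
        raised_at: datetime = field(default_factory=datetime.now)

    def create_security_ticket(reporter: str, summary: str) -> Ticket:
        # Security reports go through the same system as any other fault,
        # but under a dedicated category and at a higher priority so they
        # can't sit unnoticed at the bottom of the queue.
        return Ticket(reporter=reporter, summary=summary,
                      category="Security Issue", priority=1)

    ticket = create_security_ticket("j.bloggs", "Shared drive is readable by all staff")
    print(ticket)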

Once an issue has been reported, you need to deal with it as transparently as is reasonably possible. A good policy will do little to help if users feel that reporting issues is simply a waste of their time. Don't make the mistake of telling a user that their find isn't important; they may respond by not reporting the next issue they stumble across, even if it's the most critical vulnerability you could imagine. To keep things transparent, send them a quick confirmation email once the vulnerability has been fixed; they'll probably be checking regularly, and may become disillusioned if you don't at least have the courtesy to show that you take security issues seriously.
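If you'd like to make that courtesy automatic, something along these lines would do. This is only a sketch; the mail host and addresses are placeholders for your own setup, and the wording is just a suggestion:

    # Hypothetical sketch: a one-off courtesy email to the reporter once
    # their reported vulnerability has been fixed.
    import smtplib
    from email.message import EmailMessage

    def notify_reporter(reporter_address: str, summary: str) -> None:
        # Build a short thank-you / resolution notice.
        msg = EmailMessage()
        msg["Subject"] = f"Security report resolved: {summary}"
        msg["From"] = "it-helpdesk@example.com"      # placeholder address
        msg["To"] = reporter_address
        msg.set_content(
            "Thanks for reporting this issue. It has now been fixed - "
            "we appreciate you taking the time to let us know."
        )
        # "mail.example.com" is a placeholder for your own mail relay.
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)

    # Example call (needs a reachable mail relay to actually send):
    # notify_reporter("j.bloggs@example.com", "Shared drive is readable by all staff")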

A user reporting an issue they've found will probably come across as overly egotistical, and may even imply that they could do your job (seeing as you missed it!). As annoying as that can be, take it on the chin and let the user have their moment; it'll only make them more proactive the next time they stumble across an issue.


Conclusion

Your users may occasionally discover vulnerabilities that you were unaware of, but without a good reporting policy in place they may never make you aware of them. Rewarding users for reporting a vulnerability can have unforeseen consequences, so it should only be introduced after careful consideration.

A good policy will contain:

  • Who to report to
  • How to report it (including what details you require)
  • Assurance that the user won't be punished for reporting the vulnerability

Users are often wary of sysadmins (and if we're honest, that's how we like it!), and this wariness could leave you unaware of vulnerabilities. Every network has a vulnerability of some sort, whether it's an error in configuration or an issue in the software stack. It's generally far better to learn about a vulnerability before it's exploited maliciously, whatever the cost to the sysadmin's ego; the ramifications of a malicious exploit could be severe for the entire company.

Users will often be very proud of the vulnerability they've discovered, even if it's an inconsequential issue. Don't undermine your reporting policies by ruining their moment; simply thank them for reporting the vulnerability and notify them when the issue has been resolved (they'll likely be checking regularly anyway).