This has been a topic of discussion for as long as I have been in security, and that is quite a while now. How and when should vulnerabilities be disclosed? Should finders be paid? Is it legal or illegal? Etc. I will try to put a few experiences and thoughts “on paper” here.
Before we go into details, a disclosure: this is one of the rare occasions where I can actually say that what follows is my private opinion, as I am currently between two jobs. I am no longer the CSO of Swisscom and I have not yet started my new role (details to follow on Monday). Therefore, please treat it as what it is: my private view on vulnerabilities, disclosure and a few other things in this area.
Let’s cover a few basics first. I am a strong believer in a few fundamental principles:
- A vendor has to do its best to ensure that a product does what it is supposed to do. This is true in the real world as well as in IT. Therefore a vendor has to do its best to ensure that there are no security vulnerabilities in the software (and hardware) it sells.
- If vulnerabilities are discovered, it is the vendor’s duty to fix them in a reasonable timeframe for free.
- If somebody discovers a vulnerability, they should disclose it privately to the vendor, and then point 2 kicks in.
- Going public is only the last resort.
- Vendors, and basically any company, need to have a sound process to handle such information and get the problem fixed.
But where does this typically fail? If we start with point 1, there are too many software developers who simply do not care about security, and I am convinced that it will get worse with the Internet of Things. Just stupid, very stupid mistakes are made: security mistakes we have known about for ages. People seem to neglect good practice or are, simply put, unprofessional (sorry to be so blunt). I will talk about this in a future post as well. It is sometimes scary to run a pentest on a new product and find things like plaintext passwords, SQL injection, etc. on a broad scale.
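Both of the mistakes named above have had textbook fixes for decades. As a minimal sketch (the table layout and function names here are illustrative, not from any specific product), this is what avoiding them looks like in Python's standard library, using parameterized queries instead of string concatenation and a salted password hash instead of plaintext storage:

```python
import hashlib
import hmac
import os
import sqlite3

# Throwaway in-memory database with a single user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

def store_user(name: str, password: str) -> None:
    # Never store the plaintext password: derive a salted hash instead.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Parameterized query: the driver escapes the values, so user input
    # can never change the structure of the SQL statement.
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, pw_hash, salt))

def check_user(name: str, password: str) -> bool:
    # Same pattern on lookup: placeholders, not string concatenation like
    # "SELECT ... WHERE name = '" + name + "'", which invites injection.
    row = conn.execute(
        "SELECT pw_hash, salt FROM users WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return False
    pw_hash, salt = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison to avoid leaking timing information.
    return hmac.compare_digest(candidate, pw_hash)

store_user("alice", "correct horse battery staple")
print(check_user("alice", "correct horse battery staple"))  # True
print(check_user("alice", "wrong"))                         # False
# A classic injection payload is treated as a literal string, nothing more:
print(check_user("alice' OR '1'='1", "anything"))           # False
```

None of this is exotic; it is exactly the kind of long-established good practice that keeps being neglected.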
When you then go back to certain vendors, some even want to charge you to fix the bug they introduced. That happened to me once, and I guess you know my answer… There are other responses as well; you all remember the famous blog post by the Oracle CSO. However, the key in statement 2 above is the “reasonable timeframe”. To take an example: you might have read about the SS7 vulnerability (e.g. “German researchers discover a flaw that could let anyone listen to your cell calls”). This is not a simple buffer overflow, which might take a few hours to fix. It is a fundamental problem with the signaling protocol itself, which takes quite some time to fix or even mitigate. Fixed disclosure deadlines of the kind Google pushes for are not a realistic scenario today: maybe 30 days is an easy hit, maybe 12 months is close to impossible. I am convinced that you need a strong process to handle vulnerability disclosures, where you work closely with the finder to agree on timelines and disclosure. It is a double-edged sword: there are pros and cons to everything, but you need close collaboration between finder and vendor, and it should not be driven by competitive considerations.
Then there is the question of bug bounties and payment for vulnerabilities in general. Here I am not completely sure what my opinion should be. On the one hand, I understand that the researcher had to invest significant effort to find the vulnerability; on the other hand, I never asked them to look for it, so why should I pay? The key challenge with the underground market is that, for severe vulnerabilities, the price a national intelligence agency or the underground can pay will be much higher than what we will ever be able to offer. How do we handle this as an industry, given that selling to the vendor is definitely not attractive (and the other path might be illegal)? I am unclear here, but it is clear that we need to find a way to keep potential researchers and finders motivated to work with us.
But let’s face it: compromising a product is a criminal act in most countries. This does not mean that a vendor should play this card, but I guess we all need to know where we stand and what we are doing.
If you want a few more thoughts along the same lines, go and read “I found a security vulnerability, how do I disclose it?”.
Now Wassenaar… An interesting beast, isn’t it? I am convinced that most of the people who cry “foul” now share the guilt for how we got where we are today. Think about it. A regulator, who is typically not a security specialist, learns that something bad is going on and that we have a kind of “Wild West” in the underground on the Internet. There are a few laws, but they are rarely enforced, and on top of that we have a serious problem with public 0-days as well as a market where vulnerabilities and exploits are sold. That is the context a regulator wants to act in. Now to the “guilty” part. For quite a while I have tried to push back when it comes to terminology (e.g. “Cybersecurity – Do we need to change the approach?”) because we create dangerous analogies. A lot of people love to talk about cyberwar, cyberterrorism, cyberweapons, etc. First of all, I hate the “cyber” term, but I have to adopt it because it makes people listen. However, back to our regulator: he hears “cyberweapon”, hears “weapon”, and knows how to regulate weapons. This is what is happening with the Wassenaar Arrangement at the moment. I am deeply convinced that the regulators want to do the right thing to protect their economy and their citizens, but they do it based on what they know, which is regulating a physical good: a weapon. We all know that our world is different and that we need the ability to freely exchange information on vulnerabilities, exploits and even attacks among the good guys, because the bad guys will do it with or without Wassenaar (I do not expect any national intelligence agency, terrorist or other criminal to really follow the Wassenaar Arrangement).
We need legal frameworks under which we can go after the criminals (and I count illegal espionage here as well), and we need fast and efficient processes to put violators behind bars. At the same time, the white hats need an open way to exchange information and to act flexibly. Maybe open collaboration between the industry and the regulators (and let’s not talk about cyberweapons, please) is the only way forward. Again, this information has to flow!
As I said at the beginning: just a few thoughts.