The bug bounty market is growing quickly. While an increasing number of organizations are embracing the concept, there is still confusion and ambiguity around paying hackers for vulnerabilities. Events like the recently disclosed Uber breach illustrate this confusion. I'll take this opportunity to clarify and define this rapidly evolving market.
Bugcrowd has managed programs and helped build relationships between organizations and hackers for the last five years. Not only are we the leading company in this space, but we also have a clear understanding of the distinction between a bug bounty and extortion.
A bug bounty is a reward offered for vulnerabilities discovered within a set scope. It's important to note that through these programs, companies authorize researchers not only to identify vulnerabilities but also to provide a proof of concept. Just as importantly, any data a researcher gleans from these proofs of concept is held and protected under the terms and conditions that the researcher and the company (and, if using a third-party platform, that platform) have set forth.
A bug bounty is not a ransom paid to hackers who find a vulnerability, exploit it, and then attempt to sell that information back to an organization. That is extortion. Even if a company paid such a ransom through a bounty program and applied terms and conditions to the hacker after the fact, it would still be extortion, and the law requires that it be reported to the authorities.
The Uber breach is a clear case of extortion. It appears that a hacker exploited a vulnerability and Uber responded by paying the ransom to keep the breach quiet. Is this best practice? No. Is it understandable? Yes. Paying the ransom is sometimes economically rational and overall less risky than not doing so.
A bug bounty program is defined by a clear scope, where the researcher says, "we'll tell you about the vulnerability, and then you'll pay us for it." In Uber's case it was, "pay us for this vulnerability, and then we'll tell you." That is extortion.
The real concern in this case was the failure to report the breach to the authorities for a year. An organization has a clear responsibility to disclose a breach in order to alert those impacted and reduce their risk. Paying malicious hackers not to cause damage may sometimes be rational, but breaking the law never is. Failing to disclose in accordance with federal and state law is not something we would recommend or support.
But the Uber breach isn't the only event that has caused confusion of late. Enter the controversy around DJI's bug bounty. In that case, the researcher identified an issue within the scope of the program, but there were no clear guidelines around payment and disclosure at the outset. This could have been avoided, which is why we work with customers to set expectations from the start.
It's not uncommon for organizations with self-managed programs to become quickly overwhelmed by the work involved. From setting the scope and defining expectations to accepting and triaging submissions and paying researchers, there is a lot that goes into implementing a successful vulnerability disclosure or bug bounty program. And this doesn't even take into account the actual remediation of the vulnerabilities found.
The term "responsible disclosure" carries a lot of weight. It has an ethical connotation, but in reality responsible disclosure is simply the ability to respond. That responsibility falls on both researchers and companies, and it doesn't work without mutual respect. This is a powerful model, but the "unlikely romance" is still in its early stages, and therefore fragile.
Having a trusted partner to help this relationship along is key. The right partner will help create a competitive program that draws the best researchers and provides the results organizations are looking for.
Any time you release code into the wild, you're inviting the unexpected. Responsible disclosure programs shift the balance. The Crowd will find vulnerabilities; a good partnership removes the uncertainty about what happens when they do.