
Risk and Liability Concerns - Your Questions Answered

Posted by Jason Pitzen on May 23, 2016 11:13:40 AM

I’ve worked in the security industry now for about seven years, and in the responsible disclosure space for the last two and a half years. In that time I’ve heard and answered just about every question regarding legal, compliance and regulatory controls around vulnerability disclosure and bug bounties.

Bugcrowd’s goal from the beginning has been to build a highly efficient platform that facilitates the responsible disclosure of security vulnerabilities between organizations and the researcher community. That said, we understand that working with an incredibly savvy workforce of independent talent may raise some concerns, and it may surprise you to learn that the same goes for the researcher community. So whether you yourself have concerns, or your internal legal department does, we want to arm you with both logical justification and legal safety nets to put your mind at ease.

Being on the front lines of receiving these questions, I’d group them into three categories: researcher behavior, general liability, and compliance concerns. While the specifics will vary from company to company, these answers will provide some basic guidance.


Researcher Regulations:

Q: What rules does the crowd follow?

All researchers agree to follow Bugcrowd’s Standard Disclosure Terms, which outline acceptable and unacceptable behavior. Failure to comply with these rules results in a warning and/or removal of access to elements of the Bugcrowd platform on a temporary or permanent basis, depending on the severity of the violation. In some instances, an offender will be removed from the Bugcrowd community and Bugcrowd bounties entirely. All policy enforcement and member consequence decisions are made entirely at Bugcrowd’s discretion.

Q: What happens if researchers go out of scope?

Researchers are disincentivized from testing out-of-scope targets with penalties to their Crowd Reputation, an important measure for them to earn access to private programs. Still, some out-of-scope reports do occur from time to time if a researcher does not read the bounty brief closely enough (each customer’s brief may have unique or unusual exclusions), or if testing of an in-scope target redirects to an out-of-scope one. For these sorts of non-malicious out-of-scope testing activities, we simply mark those submissions as ‘out of scope’ and remind the researcher of the terms outlined in the bounty brief. If a crowd member is actively and intentionally testing out-of-scope targets and having a negative impact on application performance, our Researcher Operations Team follows a standard incident management process to de-escalate the situation and work towards a win-win for both customer and researcher. In some cases the consequences for not following the Standard Disclosure Terms may also apply.

Liability Concerns:

Q: Do you have any safeguards or liability protections around public disclosure, data leakage, or vulnerability exploits? What happens if one occurs?

Unauthorized public disclosures are rare, but as soon as one is identified, Bugcrowd’s Researcher Operations Team reaches out to the crowd member to ask them to remove the public information and to remind them of the consequences of unauthorized disclosure.

More often than not, unapproved public disclosures are not of vulnerability details, but of the existence of a private program. A case like an excited new crowd member tweeting that they’re receiving a reward from a private customer is typically resolved quickly and without conflict.

Additionally, for all private programs, Bugcrowd retains full liability for any unintended violations or infractions and has a generous business insurance policy to cover any damages.

Compliance Questions:

Q: Is crowdsourced security testing riskier than pen testing?

Naturally, when working with a large workforce there are more uncontrolled variables, but that breadth is also what makes crowdsourcing so valuable: there is no other way to tap such diverse perspectives and skill sets. For organizations that want those benefits but simply cannot accept any unnecessary risk, we suggest running a private program.

Private programs provide not only a more controlled environment, but also access to our private, vetted crowd of global security researchers, which is effectively the same as engaging any pentesting firm. Bugcrowd’s founders come from a pentesting background, so we know the business, and our contracts are structured to look and feel very similar.

Q: Would a bug bounty program suffice for PCI requirements?

Absolutely! Many of our customers use our Flex Programs, which are time-boxed, private programs, to meet PCI requirements. The PCI DSS doesn’t dictate a specific methodology for the pen tests required in 6.5 (web app) and 11.3 (network), which means crowdsourced programs work! We’ve built the reports to look and feel just like traditional pen test reports (this is where we came from!), with all the appropriate sections to satisfy almost any auditor.


What’s most important to remember is that the crowdsourced testing model only works when researchers and organizations work together. We understand, however, that both parties may have reservations and fears about unfavorable outcomes, unnecessary legal action, and general negligence. The bug bounty space is relatively new, but we’re devoting more and more time and resources to addressing many of the legal concerns around the space.

That’s why tomorrow we’re teaming up with CipherLaw Founder and Attorney Jim Denaro to go into more depth on these issues, as well as to address any questions the community might have.

Please feel free to direct any questions to hello@bugcrowd.com, or submit them in the form below to register for the webinar:




Written by Jason Pitzen

Director of Sales, formerly @ Rapid7. Lover of heavy metal and IPAs.