In our recently released guide, 7 Bug Bounty Myths, Busted, we addressed some common misconceptions about the bug bounty model and bug bounty programs. We're spending some time each week to take a deeper dive into those myths one by one. Last week we talked about the misconception that bug bounties are all public and open to everyone. Today, we're addressing a related misconception regarding the types of companies engaging with the bug bounty model.
Myth #2: Only tech companies run bug bounty programs
This year, one of our favorite customers will be speaking at one of our favorite conferences, where they'll discuss why they implemented a bug bounty program and how the results and learnings have influenced their internal security culture and testing processes.
There are many key performance indicators (KPIs) of a successful bug bounty program: some matter more to program owners, and some matter more to researchers. At Bugcrowd, we aim to align these KPIs across all involved parties to better articulate what is most helpful and valuable to each.
In this post, we will explore the ever-important metric of response time. This value is a key factor both in maintaining a healthy, successful program and in keeping researchers engaged and involved. Swift, effective communication is key to staying on the same page throughout the vulnerability reporting and review process. Our recent post regarding proper escalation paths when communication falls through is proof of that.
Throughout this year, bug bounties have hit an all-time high in the news, and are well on their way to becoming non-negotiable parts of mature security organizations. Because of that buzz and the positive traction the bug bounty space is seeing, it's easy for us to forget that this is still a new and novel approach to security that not everyone fully understands. That's why we've put our ears to the ground to pick up on some commonly held misconceptions about how bug bounties work, why they work, and for whom they're ideal.
As a quick refresher on setting up a bug bounty program, we've already covered step zero, setting your scope, and the importance of focus areas, as well as some considerations to make around exclusions on your program.
Now that we’ve covered most of what goes into writing a bug bounty brief, including rewards and disclosure policies, let’s take a look at what environment you'll be providing for researchers to test against. Regardless of how you decide to set up your application(s), it's important to remember that our goal is to attract great talent from the crowd, sustain activity, and ultimately minimize the challenges of setting up and running a bug bounty for you and your internal teams.
A few weeks ago, MasterCard launched their public bug bounty program, joining many other financial services companies who are utilizing the crowd to strengthen their product security and protect consumer data. This launch follows our recently published ‘State of Bug Bounty’ report in which we recognized the speed and volume at which the financial services industry is adopting bug bounty programs. Our financial services spotlight takes a look at that trend and more.
Over the past 10+ years, Cross-Site Scripting has made its way into just about every ‘top-ten vulnerability’ list and has consistently starred in headlines and POCs. XSS vulnerabilities are also commonly submitted through bug bounty programs, and many write them off as ‘low hanging fruit.’ We’re here to tell you that not all XSS are created equal.
This episode of Big Bugs examines the reason we're experiencing XSS-Fatigue, some examples of high impact XSS bugs found in the wild, and resources for defenders and offenders.
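To illustrate why not all XSS are created equal, here is a minimal sketch (not from the episode itself; the function names and payloads are our own illustrative examples) showing how a defense that works in one HTML context can fail completely in another:

```python
import html

def render_comment(user_input: str) -> str:
    # In an HTML *body* context, escaping <, >, &, and quotes
    # neutralizes a classic script-injection payload.
    return f"<p>{html.escape(user_input)}</p>"

def render_link(user_input: str) -> str:
    # In a URL/attribute context, the same escaping can be useless:
    # a javascript: URL contains no characters that html.escape changes,
    # so the payload survives intact and still executes on click.
    return f'<a href="{html.escape(user_input)}">profile</a>'

body_payload = "<script>alert(1)</script>"
print(render_comment(body_payload))   # tags are neutralized here

url_payload = "javascript:alert(1)"
print(render_link(url_payload))       # escaping leaves this payload intact
```

The point mirrors the post: two reports both labeled "XSS" can carry very different impact depending on where the input lands and what defenses actually apply in that context.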
Why is it so important? Simply put, it's a matter of respecting researchers' time and effort. If we take a moment to look at this from a researcher's point of view, every issue that we clearly exclude on the bounty brief is something they won't need to waste time testing for or reporting. A brief that doesn't contain explicit exclusions runs the risk of receiving issues that the program owner may not care about, wasting the time and resources of both the researcher and the program owner. To clearly document these exclusions, we've identified five of the most common categories to consider when building your program: low impact issues, intended functionality, known issues, accepted risks, and issues resulting from pivoting.
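To make those five categories concrete, a hypothetical exclusions section on a brief might look like the following (the specific items are illustrative only, not drawn from any real program):

```
Out of scope / Exclusions:
- Low impact issues: missing security headers without a demonstrated exploit
- Intended functionality: username enumeration via the public signup form
- Known issues: rate limiting gaps on the legacy API (fix already in progress)
- Accepted risks: clickjacking on non-sensitive marketing pages
- Pivoting: findings that require access gained through another vulnerability
```

Each line tells researchers exactly what not to spend time on, which is the whole point of documenting exclusions up front.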