Everything you need to know about vulnerability disclosure programs in 2020.
Organizations need a way to identify vulnerabilities discovered outside the typical software development lifecycle, without compromising on cost, or coverage.
This article shares insight into why VDPs have quickly become a baseline security best practice for organizations of all sizes. It covers:
- Gaps in typical security testing lifecycles
- How VDPs can act as a “security safety net”
- Why a policy of coordinated disclosure can reduce risk and improve reputation
- How to develop a strong legal framework for disclosure terms
- How to determine which VDP management style is right for your organization
- Best practices for implementing and growing a VDP
- Why Bug Bounties and VDPs are the new dynamic duo
Table of contents
What are vulnerabilities?
How are vulnerabilities typically surfaced?
What is a vulnerability disclosure program (VDP)?
Who benefits from a VDP?
What is disclosure?
How to start a VDP
Best practice for managing a VDP
Combining VDPs with bug bounty or pen testing
The human genome has three billion base pairs of code—99.9% of which is identical for all humans. This means the differences we observe in personality, appearance, and aptitude can all be attributed to just 0.1% variance in how we’re “coded.” Software is similar, as slight variations can drastically alter outcomes. Given the range of skills and experience that make every developer unique, variation is all but guaranteed.
Unfortunately, when it comes to software, variation can mean vulnerabilities. Agile development cycles help get products to market faster than ever, but compounding code multiplies the incidence of error, to the tune of 10-20 “bugs” for every 1,000 lines. And while behemoths like Google tout more than two billion lines of code, even the average iPhone app has just under 50,000.
The odds of any organization finding all vulnerabilities before production are slim, and continue to fall given an already diminished pool of security professionals available to help. As a result, organizations now need a way to expand risk-reduction efforts beyond the typical software development lifecycle. They need the help of a global community.
This article explores the strategic, legal, and social nuances of vulnerability disclosure programs (VDPs). Drawing on industry expertise and the results of a recent Bugcrowd survey, it seeks to uproot misconceptions, clarify definitions, and unite organizations in a common system of best practices for encouraging, accepting, and acknowledging vulnerabilities discovered “in the wild.”
What are vulnerabilities?
Vulnerabilities are components of code that can be exploited to negatively impact the security of data, systems, people, or IP. According to ISO/IEC 29147:2018, a vulnerability is “a behaviour or set of conditions present in a system, product, component, or service that violates an implicit or explicit security policy.”
In focusing on results, these descriptions ignore the cause—both the ‘how,’ and ‘how often.’ Sometimes vulnerabilities are the result of erroneous scripting, but not always; they also arise from changes in the deployment environment, or from combining otherwise intentional commands in unintentional ways. In any event, vulnerabilities are now a pervasive byproduct of a market that demands ever-faster access to products and services. Speed is the enemy of security, but it’s also a friend of purchasing power.
6% of security leaders said their lack of a Vulnerability Disclosure Program was due to fear amongst executives that the search for vulnerabilities would result in a breach.
Most digitally literate internet users intuitively grasp this trade-off in a world that has been eaten by software for more than a decade. This expectation is now at odds with a more dated perspective held by organizations that still view vulnerabilities as a sign of weakness—something that should be avoided entirely. This is perhaps due in part to the conflation of vulnerabilities with their potential outcome—a security breach. These organizations mistakenly believe one to be as deleterious as the other.
“Embracing vulnerability disclosure creates a security-first mentality, builds your reputation within the security community and educates your board in the process.” – CHRISTIAN TOON, PINSENT MASONS
In reality, it isn’t the existence of the vulnerability itself that is newsworthy, nor even its exploitation; it’s how the organization responds. Compare the Equifax breach of 2017 to the MyFitnessPal (Under Armour) breach a year later. Under Armour responded by alerting affected users after just four days spent ascertaining the extent of the breach and implementing steps towards remediation and prevention. Equifax announced its breach two months after discovery, through poorly considered communication methods, compounding a bad situation. The results were clear: one month on from the breach, Under Armour shares were up 9%, while Equifax’s sank 11% over the equivalent period.
In a world where consumers now value trust over reputation and reliability when choosing brand allegiance, transparency must be central to everything an organization oversees, including security.
Forces like Moore’s Law cause software complexity to increase with time, making security more challenging. “Secure coding” is therefore less an end-state and more a perpetual journey of overlapping best practices, and sometimes, just a bit of luck.
How are vulnerabilities typically surfaced?
Most internally developed software progresses through similar development lifecycles, which include several phases of testing. This could include automated tooling like software composition analysis (SCA), static and dynamic application security testing (SAST and DAST), and vulnerability scanners, or more human-powered solutions like red teaming and pen testing.
While vulnerabilities are ideally surfaced pre-production, it’s impossible to simulate every possible use case, permutation, or potential interaction in such controlled settings. Software is always evolving—expanding and contracting like a living organism to adapt to new operating environments and an ever-growing list of connected tools and services. Forces like Moore’s Law cause software complexity to increase with time, making security more challenging. “Secure coding” is therefore less an end-state and more a perpetual journey of overlapping best practices, and sometimes, just a bit of luck.
In reality, some of the most egregious vulnerabilities are discovered by end-users themselves. In 2019, 14-year-old Grant Thompson was playing video games with his friends when he discovered FaceTime could turn iPhones into listening devices. While his mother tried for more than a week to alert Apple to the vulnerability, she eventually resigned herself to “tweeting” about it. While examples like these are rare, they are damaging not because of the vulnerability itself, but because of how it is socialized.
The third group responsible for vulnerability discovery is important to note, but often difficult to define. It includes pen testers, video game aficionados, and very often a combination of the two. Security researchers, otherwise known as ethical hackers, otherwise known simply as hackers, are individuals of varying experience, interest, and demographic, skilled in finding security vulnerabilities missed by other testing solutions. And whatever their motivation—be it educational, reputational, or even reward-based— hackers are united by a fundamental desire to find the unfindable, first.
Whatever their motivation—be it educational, reputational, or even reward-based— hackers are united by a fundamental desire to find the unfindable, first.
What is a vulnerability disclosure program (VDP)?
Vulnerability disclosure programs, or VDPs, may be best described as the Internet’s “neighbourhood watch”. Neighbourhood watches leverage a formal system run on voluntary effort to report suspicious activity. While the city does much to protect inhabitants through routine police patrols and emergency response services, neighbourhood watches help “fill in the gaps” for 24/7 community-led protection. These communications are incentivized by an altruistic desire to make the neighbourhood safer, as well as to build relationships that persist even when neighbours move away.
Just like neighbourhood watches, VDPs encourage anyone who uses your corner of the Internet to take care of it, for the benefit of all. VDPs provide a framework to encourage and facilitate the secure reporting of vulnerabilities discovered outside of typical testing cycles. As they usually cover all publicly accessible, internet-facing assets, anyone with an internet connection can participate. Additionally, just as the simple presence of neighbourhood watch signs tends to deter nefarious activity, publicly posted VDPs indicate that the organization is unlikely to be an easy target.
69% of organizations report receiving a critical vulnerability through their VDP.
The method for managing VDPs differs from organization to organization, and is often dependent on goals and resources. Some choose self-management, while others rely on third parties like Bugcrowd to monitor intake channels, triage findings, provide remediation advice, and communicate with the submitting party. Rewards for valid vulnerabilities also differ by program and management type, and while “kudos points” are the standard method for showing appreciation on hosted programs, some programs offer payments for findings with significant impact. This may seem similar to a bug bounty program, but there is an important distinction—bug bounty programs incentivize submissions, while VDPs selectively reward them.
A VDP allows companies to reduce risk, while publicly showcasing their commitment to security in a way that is both easily understood, and easily verified.
By allowing for the communication of vulnerabilities found in the routine use or testing of externally facing products and services, organizations can greatly expand their risk-reduction with minimal disruption to existing security and production lifecycles. Brian Adeloye, Principal Product Security Engineer at Atlassian, states that “a VDP is a reciprocation of the good faith shown by hackers who identify and share vulnerabilities of their own volition. This provides an opportunity for organizations to give and get respect within the security community.”
Vulnerability disclosure programs may be best described as the Internet’s “neighborhood watch”
Who benefits from a VDP?
Countless vulnerabilities are being written into new and existing software every day, and organizations need to maximize their catchment area for discovering them. Yet a large majority of security professionals claim that at some time or another, they have been unable to report a vulnerability that they discovered. Fewer than 9% of Fortune 500 companies have a VDP in place today. All organizations are well practised in paying lip service to “taking security seriously” whenever they make the news, but the actual prevalence and maturity of VDPs say otherwise.
As Bruce Schneier noted, vulnerabilities are an externality that affects end-users much more than owners. Ethan Dodge, a security engineer at Atlassian, agrees: “The primary goal of a VDP is to do right by your end-users. Companies should design a program that works to serve their customers and researchers, while simultaneously benefiting their legal and marketing teams.” This means organizations should not only prioritize the security of their users’ data for its own sake, but also for the reputational, and ultimately financial, damage the organization will incur if they fail to do so. A VDP allows companies to reduce risk, while publicly showcasing their commitment to security in a way that is both easily understood, and easily verified. No more lip service.
PARTNERS, INVESTORS, AND EMPLOYEES
The VDP halo extends to an organization’s overall security brand, acting as a strong indicator of security posture for external stakeholders like prospective investors, partners, and other collaborators. These programs are public evidence of an organization’s culture of remediation, recognition, respect, and commitment to rapid response. For potential security hires, the presence of a VDP often signifies the influence wielded by security leadership amongst executive peers. This may be best summarized by Dodge’s further observation, “A mature vulnerability disclosure program signifies a mature security culture, and may be a more accurate indicator than press coverage. I have always researched a company’s VDP when interviewing for jobs to assess the working environment.”
“A mature vulnerability disclosure program signifies a mature security culture… I have always researched a company’s VDP when interviewing for jobs to assess the working environment.” – ETHAN DODGE, ATLASSIAN
Any discussion on the impact of VDPs would be incomplete without due attention to the finders of vulnerabilities themselves. VDPs provide emerging security researchers with the opportunity to hone their skills, while established hackers can build and extend relationships with organizations that may also offer private, invite-only engagements like bug bounties. Both groups benefit from the knowledge that they are incrementally improving the organization’s security—something that 93% of hackers cite as their primary motivation, according to the 2020 “Inside the Mind of a Hacker” report.
Of course, that’s not all they’re motivated by. The report goes on to show that researchers are also incentivized by education, rewards, and recognition. But unfortunately, ‘recognition’ is all too often confused with ‘reward.’ Rewards and recognition are both gestures of appreciation, but each is rooted in a different measure of value. VDP rewards may come in the form of kudos points, store credits, or, on occasion, payments. Recognition in a VDP goes beyond the organization’s acknowledgement of the researcher’s contributions, and instead refers to the ability of the researcher to have their contributions recognized by the broader security community. It is global recognition, through disclosure.
What is disclosure?
Sharing security vulnerabilities with the world enables similar organizations to get ahead of threats before they become larger problems. Communicating how and when these vulnerabilities were uncovered can drastically reduce the frequency of their creation while improving the ability of security researchers to more readily spot related issues. According to recent Bugcrowd research, organizations that adopt disclosure terms see 30% more vulnerabilities than organizations that don’t.
Programs on the Bugcrowd platform that adopt disclosure terms see 30% more vulnerabilities on average, versus organizations without.
Coordinated, discretionary, and time-boxed disclosure terms are based on good faith… they encourage rapid remediation while demonstrating a commitment to, and appreciation of, the hacker community.
“Disclosure” has several meanings, referring both to the communication of a vulnerability to the organization within which it was discovered, and to its communication to external parties, usually in a public forum. While the first benefits the organization, and by extension its direct customers, partners, and other stakeholders, the second, when done right, benefits the entire digitally connected world.
However, the term “disclosure” does carry an unfortunate and misplaced stigma, which is holding back security standards globally. A quick exploration of the varying types can help to clarify terms and alleviate unfounded concerns.
The Spectrum of Disclosure
NON-DISCLOSURE
When programs are marked as “non-disclosure,” it is understood that the finder is not permitted to communicate any portion of a vulnerability beyond the confines of the organization itself, even after it has been resolved. For non-disclosure programs, no vulnerability, regardless of type or severity, can be shared. While these programs still receive submissions, they do not encourage them.
COORDINATED, OR DISCRETIONARY DISCLOSURE
When organizations opt to enable coordinated disclosure, they signal openness to consider public disclosure of remediated vulnerabilities, in full or in redacted form, on a case-by-case basis. Removing a vulnerability from consideration for coordinated disclosure is sometimes necessary when disclosing it creates a significant risk to customers, as is the case with pacemakers, vehicles, and other IoT devices that are difficult to quickly recall or update remotely.
TIME-BOXED DISCLOSURE
More mature organizations set a “timer” on disclosure for every vulnerability, essentially declaring their commitment to fixing fast. This approach is often taken by organizations that deem security to be a strategic priority and need to invest in building the best possible relationship with the security community.
Coordinated, discretionary, and time-boxed disclosure terms are based on good faith, and are considered best practice for all parties involved as they encourage rapid remediation while demonstrating a commitment to, and appreciation of, the hacker community. 77% of organizations with a VDP in place enable one of these methods of public disclosure.
FULL DISCLOSURE
Unlike the other approaches, full disclosure is not a program policy. It is an individual instance of public communication wherein the finder discloses a vulnerability before it has been fixed. Bruce Schneier defended the merits of full disclosure in 2007, suggesting that the threat of this act is sometimes necessary to force owners to fix vulnerabilities when they are unresponsive to hackers’ well-intended communications.
However, both parties often prefer to avoid this type of disclosure at all costs. Both non-disclosure and full disclosure are discouraged because they impose an asymmetric cost on one party. Disclosure should be undertaken in a way that protects the owner, rewards the finder, incentivizes further research, and enhances relationships between owners and the security community.
OBSTACLES TO DISCLOSURE
Coordinated disclosure policies help reduce risk to industry peers while strengthening the relationship with the researcher community. Security researchers’ reputations are their brands, and receiving acknowledgement for identifying an exceptionally complex vulnerability enhances the finder’s reputation and increases their market value. Organizations that clearly state their willingness to collaborate on disclosing vulnerabilities in advance can expect better relationships with the security community, and often greater program activity. Of course, it’s not quite that simple for many organizations.
Christian Toon, CISO at law firm Pinsent Masons, notes that self-preservation and fear of reputational damage can harm the outlook of certain owners when it comes to disclosure, “Many organizations see the disclosure of a vulnerability to be an admission of failure that harms their reputation, but this is a short-term outlook” he states. “Embracing vulnerability disclosure creates a security-first mentality, builds your reputation within the security community and educates your board in the process. Why would you not want the help of security researchers to strengthen your business?”
Organizations that clearly state their willingness to collaborate on disclosing vulnerabilities in advance can expect better relationships with the security community, and often greater program activity.
Aligning the interests, incentives, and expectations for both hackers and VDP owners primarily involves frequent and clear communications, but there is also a need to provide unambiguous legal clarity and assurance. Hacking involves testing, stressing, and sometimes even breaking software to rebuild and improve it. This creates problems in a legal system that presumes malice to be the motive for any party who handles software outside its intended functionality. As a result, the default legal status for vulnerability discovery and disclosure, unfortunately, excludes good faith research.
The Computer Fraud and Abuse Act (CFAA) prohibits accessing a computer without authorization or exceeding authorized access. This renders good faith testing of assets illegal where robust VDPs are not in place, and while the number of hackers convicted for related offences is low, it has a chilling effect on the community; 60% of hackers do not submit vulnerabilities due to fear of legal retribution.
The Digital Millennium Copyright Act (DMCA) makes it illegal to circumvent controls that prevent access to copyrighted material, defined to include software. This applies even to legal owners of the products in question.
These laws were passed during a time when hacking was mostly done maliciously, before the advent of bug bounties, good faith hacking and a thriving community of professional security researchers. While the DMCA was amended in 2016 to allow security researchers to work on owned consumer devices in good faith, there are still legal gaps that need to be resolved before organizations can fully benefit from VDPs.
Organizations must draft terms that allow and incentivize good-faith testing and submission of vulnerabilities, in a way that acknowledges the concerns of legal teams by ruling out backdoor entry points or loopholes for malicious actors. These agreements create a “safe harbour” for well-intentioned researchers, which considerably increases the number and quality of vulnerabilities submitted.
One starting point to consider is Disclose.io, an open-source standardization project that offers a boilerplate VDP framework instilling safe harbour and enabling good-faith security research. This provides an accessible legal agreement for the research and disclosure of vulnerabilities and uses standardized terms and policies to create a more welcoming space for hackers and researchers, many of whom do not speak English as a first language and have minimal legal knowledge.
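To make the idea concrete, a safe-harbour clause in a VDP policy typically reads along the following lines. This is an illustration only, not Disclose.io’s exact wording, and any final language should be reviewed by counsel:

```text
If you make a good faith effort to comply with this policy during your
security research, we will consider your research to be authorized.
We will work with you to understand and resolve the issue quickly, and
we will not recommend or pursue legal action related to your research.
Should legal action be initiated by a third party against you for
activity conducted in accordance with this policy, we will make this
authorization known.
```

The key elements are an explicit grant of authorization (addressing CFAA and DMCA concerns) and a commitment not to pursue legal action against good-faith research.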
Hacking involves testing, stressing, and sometimes even breaking software to rebuild and improve it.
60% of hackers do not submit vulnerabilities due to fear of legal retribution.
How to start a VDP
Having a VDP is quickly becoming industry standard, and is no longer optional for some. The Cybersecurity and Infrastructure Security Agency (CISA) issued a binding directive requiring all federal agencies to publish a VDP, and 28% of respondents in a recent Bugcrowd survey say that VDPs have been mandated for their organization. However, starting and managing one effectively can seem overwhelming. There are five key steps that every organization can follow to build a strong program:
Decide on Self-managed or Hosted
Organizations like Bugcrowd offer managed vulnerability disclosure programs to help alleviate the time and effort required to construct and run an effective disclosure program. Bugcrowd provides access to a cloud-hosted secure submission framework that enables individuals to submit security feedback from anywhere in the world.
The fully managed process includes the design and management of email and embedded submission forms, as well as validation, prioritization, and remediation advice for each vulnerability. Integration with the organization’s software development tools encourages a faster fix, while Bugcrowd handles researcher communication, points-based remuneration, and support. Leveraging Bugcrowd for program management with the option to have the program listed on Bugcrowd’s researcher homepage, also brings your program to the attention of registered hackers and researchers for the increased likelihood of additional activity and submission volumes.
Companies with few internet-facing assets, limited resources, or still-maturing processes may instead choose self-management. However, incoming submissions may outpace the ability of a thinly resourced team to respond in time, which can lead to tension between researcher and organization if communications are not prioritized. This tends to expedite the transition to a managed model, as evidence of urgency becomes easier to demonstrate to superiors.
Bugcrowd’s Managed VDP Process:
Anyone can submit security feedback -> Any asset that is Internet-connected -> Submissions triaged and results shared
Organizations must adopt a model of fluidity, transparency, collaboration, and action to maintain the trust of customers, partners, and the security community at large.
68% of security teams have, or would consider, awarding monetary payments for exceptional VDP submissions.
Set Ground Rules
Organizations initiating a VDP should adhere to principles that make the program scalable and robust. This should include broad indications regarding acceptable conduct, as well as techniques that could be considered out of scope—DDoS or social engineering, for example. Organizations with limited resources should also restrict the assets covered by the program during a ramp period, to ensure they have the resources and workflows to properly handle submissions.
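As an illustration, a ramp-period scope section might look something like the following. All asset names here are placeholders, and the exclusions mirror the out-of-scope techniques mentioned above:

```text
In scope:
  - *.example.com (production web applications)
  - Official Example Inc. mobile apps (iOS and Android)

Out of scope:
  - Denial of service (DoS/DDoS) testing
  - Social engineering of employees, contractors, or customers
  - Physical attacks against offices or data centers
  - Third-party services not operated by Example Inc.
```

Scope can then be widened incrementally as triage and remediation workflows prove they can absorb the submission volume.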
Plan Rewards and Recognition
Determining how you plan to reward and recognize contributions to your program can also help ensure a healthy relationship with the researcher community.
According to a recent Bugcrowd survey, 65% of security teams reward findings with points, swag, or store credit, though 68% have or would consider awarding monetary payments for exceptional findings.
Expect to Iterate
It’s important to build in time to review processes, gather data, and revisit workflows. A phased timeline can allow room for making adjustments, and revising scope on the fly. As VDPs are not tightly scoped, they act as a great barometer for areas in your attack surface that may need more attention. One interviewee stated that traffic to one site went up over 500% when their VDP was implemented. Unexpected influxes like this can help focus attention and reallocate resources.
No organization will start with their ideal, preferred disclosure policy, and most efficient communication process, so the best approach is to build iteratively. Toon says, “I advise those starting with VDPs to be prepared to fail fast and fix fast. Play around with parameters and approaches and gather plenty of data to inform yourself. As long as you don’t annoy or offend the security community or your board it will all be valuable. Also, check your scope. Once you’ve checked it, get it validated. Scope accuracy is vital.”
As VDPs are not tightly scoped, they act as a great barometer for areas in your attack surface that may need more attention.
It is also important to give clear guidance around communications, within dedicated channels. This could be a dedicated security@ email alias to begin with, but it is crucial to avoid single points of failure. Multiple channels, safeguards, and responsible parties can prevent an unchecked inbox or overactive spam filter from creating blind spots and associated risk.
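One widely adopted convention for advertising these channels is a security.txt file, served at /.well-known/security.txt on your domain. A minimal sketch, with placeholder addresses and URLs:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security/vdp
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```

Because the file lives at a predictable path, researchers who find an issue anywhere on your attack surface can locate your intake channel and policy without guesswork.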
Factor in Respect
Perhaps most importantly, a VDP should define clear disclosure standards based on good faith. These standards should ensure incentive alignment, so both parties benefit from every interaction. Researchers should provide as much detail of the vulnerability as possible while abiding by the agreed-upon method of disclosure. Program owners should reply to submissions promptly, and ensure appropriate recognition is offered for valid findings.
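To make “as much detail as possible” concrete, many programs publish a suggested report format. A hypothetical example, with an illustrative finding and placeholder asset names:

```text
Title:       Stored XSS in profile display name
Asset:       https://app.example.com (confirmed in scope)
Severity:    Suggested P2 (e.g. per Bugcrowd's Vulnerability Rating Taxonomy)
Steps to reproduce:
  1. Log in and open Settings > Profile
  2. Set the display name to a script payload
  3. Visit any page that renders the display name for another user
Impact:      Arbitrary JavaScript execution in other users' sessions
Attachments: Screenshot and raw HTTP request/response
```

A shared template reduces back-and-forth during triage and helps the program owner reproduce and prioritize findings quickly.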
Best practice for managing a VDP
Those willing to implement best practice in vulnerability disclosure can set a standard amongst peers while differentiating themselves from their competitors. Here are some steps that can make VDPs work best for organizations, partners, and the security community.
- ALIGN EXPECTATIONS — Researchers should feel legally protected and know exactly how to report a bug and what to expect throughout the process. Don’t be afraid to over-communicate.
- PROVIDE CLEAR LEGAL GUIDANCE — Use standardized terms and clear examples to encourage good-faith interaction, and authorize conduct under the CFAA by providing explicit consent to access systems.
- GROUND INTERACTIONS IN GOOD FAITH — Allow for accidental scope overreach by hackers acting in good faith. Ensure your policy prioritizes relationships and industry norms over strict interpretations of the guidelines.
- REMEDIATE EFFICIENTLY — Prioritize your end users and the vulnerability finder by getting to work resolving the bug and validating the fix quickly.
- START A DIALOGUE — VDPs are a two-way street, and there are long-term benefits to working on your end of the relationship with hackers through clear communication and appropriate incentives.
- TROUBLESHOOT THE PROCESS — Remove single points of failure in communications channels, seek feedback from researchers and commit to flexibility in your VDP philosophy and operations.
- TAKE AN INTEGRATED APPROACH — VDPs are just one in several overlapping tools and procedures that make up your security posture. Ensure all processes and products are configured to move in the same direction.
- KNOW YOUR LIMITS — Depending on your current security posture, VDPs can be overwhelming. Work with your team and/or VDP provider to configure a manageable solution.
Combining VDPs with bug bounty or pen testing
Bug bounties allow organizations to direct targeted, rigorous testing at business-critical assets. Similarly, pen test programs enable organizations to focus on compliance-related assets or those where a structured methodology would improve how security posture is communicated to partners, investors, and customers. Vulnerabilities found through these programs qualify for financial rewards, so most organizations limit the scope for budgetary reasons, and may also impose limited testing windows. While economical, this creates gaps in coverage and wrongfully assumes that all potential vulnerabilities can and will be surfaced through an exclusive (often private) crowd of researchers.
82% of organizations have, or would consider adding a pay-per-finding bug bounty program alongside their VDP.
The market has failed to create a standard linear maturity model for when and how to “progress” between a VDP, Bug Bounty, and/or Pen Test. This is because each should be viewed as providing complementary benefits, with adoption driven by individual goals and resources rather than maturity. Atlassian’s Adeloye considers a VDP to be the first building block in external testing—“a superset that can include a bug bounty program”—though it’s equally common for VDPs to be the final addition to a comprehensive crowdsourced approach. An agreed-upon sequence might make for tidier budgeting, but it also goes against the organic, adaptive, and sometimes unruly nature of security. Every organization is different.
VDP programs provide a much-needed catchment for vulnerabilities surfaced by anyone, anywhere. But they may require a cultural shift within each organization to ensure leadership and legal teams are aligned. As Toon at Pinsent Masons notes, “Pentesting has been recognized and accepted by the audit community, which makes it useful for assets where compliance is the goal. VDPs still have a long way to go for cultural acceptance in business. Good security and compliance don’t always sit alongside each other.”
The digital world is expanding, at a rate that far outpaces any one organization’s ability to keep up. Vulnerabilities will emerge. Software deployed across millions of devices will be recycled and reused, multiplying attack surface and risk. To prosper in this environment, organizations must adopt a model of fluidity, transparency, collaboration, and action to maintain the trust of customers, partners, and the security community at large.
Vulnerability Disclosure Programs provide a means to align varied business and security goals in a way that is efficient and economical, so that every organization can thrive in the digital era.
Expert Advice #1
“Many organizations see the disclosure of a vulnerability to be an admission of failure that harms their reputation, but this is a short-term outlook. Embracing vulnerability disclosure creates a security-first mentality, builds your reputation within the security community and educates your board in the process. Why would you not want the help of security researchers to strengthen your business?”
“I advise those starting with VDPs to be prepared to fail fast and fix fast. Play around with parameters and approaches and gather plenty of data to inform yourself. As long as you don’t annoy or offend the security community or your board it will all be valuable. Also, check your scope. Once you’ve checked it, get it validated. Scope accuracy is vital.”
“Pentesting has been recognized and accepted by the audit community, which makes it useful for assets where compliance is the goal; VDPs still have a long way to go for cultural acceptance in business. Good security and compliance don’t always sit alongside each other.” – CHRISTIAN TOON, CISO AT LAW FIRM PINSENT MASONS
Expert Advice #2
“A VDP is a reciprocation of the good faith shown by hackers who identify and share vulnerabilities of their own volition. This provides an opportunity for organizations to give and get respect within the security community. Owners should measure the effectiveness of their VDPs by asking ‘how easy does this make it for the good guys to do the right thing?’”
“A VDP is a superset that can include a bug bounty program.” – BRIAN ADELOYE, PRINCIPAL PRODUCT SECURITY ENGINEER AT ENTERPRISE SOFTWARE COMPANY ATLASSIAN
Expert Advice #3
“The primary goal of a VDP is to do right by your end-users. Companies should design a program that works to serve their customers and researchers, while simultaneously benefiting their legal and marketing teams.”
“Mature vulnerability disclosure signifies a mature security culture, and may be a more accurate indicator than press coverage. I have always researched a company’s VDP when interviewing for jobs to assess the working environment.”
“If the Department of Defense can have a VDP, anyone can.” – ETHAN DODGE, SECURITY ENGINEER AT ENTERPRISE SOFTWARE COMPANY ATLASSIAN