HackerOne 'updating' Ts&Cs after bug hunters question if they're training AI
Summary
HackerOne clarified it doesn't use researcher submissions to train its GenAI models, assuring hackers their data is confidential. Other platforms like Intigriti and Bugcrowd made similar statements.
HackerOne denies training AI on reports
HackerOne CEO Kara Sprague confirmed last week that the company does not use researcher bug reports or confidential customer data to train generative AI models. The statement follows a wave of criticism from the cybersecurity community regarding the platform's new Agentic Pentesting-as-a-Service (PTaaS). Sprague addressed the concerns directly on LinkedIn, stating that neither internal teams nor third-party providers use hunter submissions for model fine-tuning.
The controversy began after HackerOne launched its Hai AI system, which the company claims uses autonomous agents to perform continuous security validation. Marketing materials for the service initially stated that these agents were "trained and refined using proprietary exploit intelligence." This phrasing led researchers to believe the platform was ingesting their intellectual property to build a tool that could eventually replace them.
Sprague clarified that Hai exists to accelerate administrative tasks rather than automate the creative work of human hackers. The system aims to speed up report validation, confirm fixes, and trigger reward payments faster. Sprague told researchers they are "not inputs to our models" and that the AI is designed to complement their work.
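For a sense of what "accelerating administrative tasks" can mean in practice, here is a minimal sketch of a triage pipeline in which an AI pass handles summarizing and routing while validation and payout stay gated on a human. Every name and step here is hypothetical; HackerOne has not published Hai's internals.

```python
# Hypothetical sketch of the administrative flow described above: an AI
# pass summarizes and queues a report, while validation, fix confirmation,
# and payout remain gated on human sign-off. None of these names reflect
# HackerOne's actual internals.
from dataclasses import dataclass, field

@dataclass
class Report:
    report_id: str
    title: str
    body: str
    status: str = "new"
    events: list[str] = field(default_factory=list)

def ai_pre_triage(report: Report) -> Report:
    """Cheap administrative pass: summarize, label, and queue.

    Stands in for an LLM call; the creative judgment stays human."""
    report.events.append("ai: summarized and labeled report")
    report.status = "awaiting_human_triage"
    return report

def human_validate(report: Report, is_valid: bool) -> Report:
    """A human analyst confirms or rejects the finding."""
    report.status = "validated" if is_valid else "rejected"
    report.events.append(f"human: marked {report.status}")
    return report

def trigger_payout(report: Report) -> Report:
    """Payout fires only after the human-in-the-loop validation step."""
    if report.status != "validated":
        raise ValueError("payout requires human validation")
    report.events.append("platform: bounty payment triggered")
    return report

if __name__ == "__main__":
    r = Report("rpt-001", "IDOR in /api/v2/orders", "PoC steps...")
    r = ai_pre_triage(r)
    r = human_validate(r, is_valid=True)
    r = trigger_payout(r)
    print("\n".join(r.events))
```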
Researchers fear automated replacement
The backlash centered on the fear that bug bounty hunters are inadvertently training their own automated competitors. Security researcher @YShahinzadeh voiced these concerns on X, formerly Twitter, asking the platform to confirm it had not used his historical reports. Other hunters warned that using researcher data without consent would destroy the trust required for the bug bounty ecosystem to function.
One researcher, @AegisTrail, suggested that if white-hat hackers feel the legal and economic systems are rigged against them, they might move toward the "dark side" of illicit hacking. This sentiment reflects a broader anxiety in the tech industry regarding GenAI and the exploitation of user-generated content. Security researchers often spend hundreds of hours on a single vulnerability report, making the data highly valuable for training LLMs.
HackerOne maintains that its "proprietary exploit intelligence" does not include the specific, creative logic found in researcher submissions. The platform currently manages over 300,000 confirmed vulnerabilities, representing a massive dataset of exploit techniques. Sprague emphasized that the company preserves the integrity and confidentiality of every researcher contribution despite the move toward automation.
Competitors draw a hard line
The industry-wide debate prompted other major bug bounty platforms to clarify their own data usage policies. Intigriti founder and CEO Stijn Jans issued a statement to reassure the community that researchers own their work. Jans stated that Intigriti uses AI to amplify human creativity and help triage teams process reports with higher accuracy, but never to replace the hunter.
Bugcrowd also moved to distance itself from the controversy by citing its existing terms and conditions. The company explicitly prohibits third parties from training AI or LLM models on customer or researcher data. This hard-line stance aims to protect the multi-million-dollar economy of independent security research that powers these platforms.
The standard policies across the top three platforms now include:
- No training on private vulnerability reports or proof-of-concept code.
- Strict prohibition of third-party model providers retaining customer data.
- Human-in-the-loop requirements for all final vulnerability validations.
- Ownership rights that remain with the researcher or the customer depending on the program.
The rules for using AI tools
While platforms are restricting their own use of AI, they are also tightening the rules for how researchers use GenAI. Bugcrowd recently updated its guidelines to hold researchers accountable for any AI-generated content in their submissions. The platform does not accept automated or unverified outputs as valid vulnerability reports.
HackerOne and Intigriti have adopted similar stances, requiring researchers to verify every claim made in a report. Using ChatGPT or similar tools to draft a report does not exempt a hunter from the platform's rules or the specific scope of a program. Most platforms now run their own AI detection tools to flag submissions that appear to be machine-generated and lack human verification.
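As a rough illustration of what such screening might look like, the toy heuristic below flags submissions that contain LLM-style hedging but no concrete reproduction detail. The phrase list and signals are invented for demonstration; the detectors the platforms actually use are unpublished and far more sophisticated.

```python
# Toy illustration of the heuristic screening described above. The
# boilerplate phrases and thresholds here are invented for demonstration
# and do not reflect any platform's real detector.
import re

BOILERPLATE_PATTERNS = [
    r"as an ai language model",
    r"i cannot verify",
    r"this vulnerability (may|might|could) exist",
    r"it is recommended to consult",
]

def needs_manual_review(report_text: str) -> bool:
    """Flag reports that read like unverified LLM output.

    Two signals: hedging boilerplate, and the absence of anything
    that looks like a concrete reproduction step or payload."""
    text = report_text.lower()
    hedges = sum(bool(re.search(p, text)) for p in BOILERPLATE_PATTERNS)
    has_concrete_poc = bool(
        re.search(r"(curl |https?://|POST |payload)", report_text)
    )
    return hedges >= 1 and not has_concrete_poc

if __name__ == "__main__":
    vague = "As an AI language model, this vulnerability may exist in the login flow."
    concrete = "Reproduce with: curl -X POST https://example.com/api/reset -d 'uid=1'"
    print(needs_manual_review(vague))     # True: hedging, no PoC
    print(needs_manual_review(concrete))  # False: concrete repro step
```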
The tension between human intelligence and agentic AI remains a primary concern for the 1 million registered users on the HackerOne platform. Sprague's clarification aims to calm the community as HackerOne transitions into a service that blends human testing with continuous automated scanning. The company's goal is to reduce the time-to-remediation for enterprise customers, who currently face a growing volume of sophisticated threats.
Protecting the researcher economy
The bug bounty model relies on a steady stream of high-quality submissions from independent researchers who are paid only when they find a bug. If these researchers believe their data is being harvested to build autonomous agents, the supply of new vulnerability data could dry up. Platforms are now competing to prove they are the most "researcher-friendly" in the age of AI.
HackerOne's Agentic PTaaS represents a shift toward the "continuous" security model that many enterprise clients now demand. Traditional point-in-time pentests often miss vulnerabilities that emerge between tests. By using agents, HackerOne claims it can provide 24/7 coverage, but the company insists these agents are not capable of the complex, outside-the-box thinking that human hunters provide.
The company has not yet explained in technical detail how Hai was trained, given that researcher reports were excluded. Industry analysts suggest the models likely rely on public CVE data, open-source exploit databases, and synthetic data generated by security professionals. For now, the platform's leadership is betting that transparency will be enough to keep its community of elite hackers from migrating to rival services.
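To make the analysts' speculation concrete, the sketch below pulls recent records from the public NVD CVE API, one of the open data sources such models could plausibly draw on. Nothing here is confirmed about Hai's actual training pipeline; the sketch only shows that this class of data is freely available.

```python
# One plausible public data source for training security models: the NVD
# CVE API (2.0). This sketch just pulls recent CVE descriptions; whether
# Hai's training actually uses this feed is analyst speculation, not a
# confirmed fact. Requires the 'requests' package.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_recent_cves(results: int = 5) -> list[dict]:
    """Fetch a handful of CVE records from the public NVD 2.0 API."""
    resp = requests.get(NVD_API, params={"resultsPerPage": results}, timeout=30)
    resp.raise_for_status()
    records = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        # Pick the English description, if present.
        desc = next(
            (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"),
            "",
        )
        records.append({"id": cve["id"], "description": desc})
    return records

if __name__ == "__main__":
    for rec in fetch_recent_cves():
        print(rec["id"], "-", rec["description"][:80])
```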