Image: Mariia Shalabaie via Unsplash

ChatGPT’s maker has over 4,500 hackers looking for bugs

Bug bounties have become so pervasive in recent years that new programs often are noticed only by the security researchers who regularly participate in them. But when OpenAI announced its own contest in April, it drew headlines.

The much-buzzed-about maker of ChatGPT and other artificial intelligence applications hired the Bugcrowd platform to organize white-hat hackers to probe for vulnerabilities in its public-facing technology. The rules were specific, though: OpenAI only wanted help examining things like cloud resources, plugins and connections to third-party services.

Any issues or biases in the company’s large language models, the proprietary systems that power ChatGPT and other “generative” AI tools, weren’t part of the program’s scope. In short, OpenAI didn’t want participants to focus on its most interesting technology, just the infrastructure used to access and present it.

Still, the competition presented plenty of important opportunities for the ethical-hacking community and the two companies involved, said Casey Ellis, founder and CTO of Bugcrowd. The initial response was “noisy,” he said, drawing many participants who might not otherwise join a bug bounty competition. Bugcrowd took it as a chance to train less-skilled hackers, Ellis said.

“They've got a good appetite to learn more things, they've got an interest in the space, but they're not finding critical issues just yet, because they've got to mature in how they test, or whatever else,” he said. The company nudges people like that into its Bugcrowd University program to “actually teach them some stuff to try to get them more successful in the future.”

Representatives of OpenAI declined to comment for this story, but pointed to a blog post saying that the company views cybersecurity as a “collaborative effort.” Issues with the actual AI models “do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” the company said. “Addressing these issues often involves substantial research and a broader approach.”

OpenAI is paying up to $20,000 for individual bug disclosures, similar to the potential top payouts in recent Bugcrowd contests for Okta and Netflix, but below those for Tesla and Sophos. More than 4,500 researchers have signed up for the OpenAI program, more than for any of those except Tesla’s, which drew about 5,000.

Amid all the noise, the bug bounty competition had accepted 50 vulnerabilities as of mid-June, paying an average of about $786 apiece, or roughly $39,000 in total. Ellis said it makes sense that OpenAI’s contest might not issue a lot of prizes.

“OpenAI, from a technology footprint standpoint, they're not that large. They've obviously got a ton of stuff behind the scenes, on AI and the data side, right, but in terms of the actual addressable attack surface to the internet, it's not this sprawling set of real estate that's accrued over the years,” he said.

By contrast, he noted that Bugcrowd had done another program “with an automotive company that's been around forever, they kind of pre-date the internet in terms of existing as a company, and their internet real estate has kind of reflected going through acquisition and all these different things.” In a case like that, “You just end up with a lot of stuff,” he said.

Gathering feedback

OpenAI is not publicly disclosing information about any of the vulnerabilities filed through Bugcrowd’s platform. And discoveries related to one type of issue, misuse of the API keys that give customers access to services like ChatGPT, can’t be submitted through the Bugcrowd program itself; researchers are directed to a separate form. (The digital tokens are tied to paying customers’ accounts, and they are reportedly being sold on the black market. OpenAI says the credentials were stolen from users’ computers, not through any breach at the company.)
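To illustrate why stolen keys have resale value: an OpenAI API key is a bearer token, so whoever holds it can run billable requests against the key owner’s account. Below is a minimal sketch of that authentication pattern in Python, assuming the requests library, the standard chat-completions endpoint and a key stored in an OPENAI_API_KEY environment variable (the model name is just an example):

```python
import os
import requests

# The bearer token is the only credential the API checks; a key lifted
# from a user's machine works exactly as the legitimate one does, and
# usage is billed to the account the key belongs to.
API_KEY = os.environ["OPENAI_API_KEY"]

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",  # example model for illustration
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```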

Overall, though, OpenAI is a natural fit for a bug bounty program for several reasons, Ellis said.

“One was the fact they are a newer organization that's native to this idea of soliciting feedback from the outside world. A big part of their fundamental thesis relies on data that's generated off the internet. Like the large language model is literally the internet kind of hoovered-up and used to train AI,” he said. “And they're also learning from people interacting with their various kinds of interfaces. So this whole idea of receiving external information and actually becoming better because of what your users are teaching you — it's already kind of native to the way the organization thinks.”

The contest is also partially laying the groundwork for how AI companies will secure their technology as developers find new ways to integrate it with other software, Ellis said. OpenAI is considering an app store for AI products, The Information reported this week.

“In 12 months’ time, we’re going to be talking about how this stuff has been rolled out into other products — the convergence of generative AI into other things — and see the consequence of that,” he said. “Like, we’re not there right now because we’re still in the early stages in terms of an implementation standpoint ... in 12 months’ time it’s going to be how this integrates, not just ChatGPT itself.”


Joe Warminsky

is the news editor for Recorded Future News. He has more than 25 years of experience as an editor and writer in the Washington, D.C., area. Most recently he helped lead CyberScoop for more than five years. Prior to that, he was a digital editor at WAMU 88.5, the NPR affiliate in Washington, and he spent more than a decade editing coverage of Congress for CQ Roll Call.