
ARTIFICIAL INTELLIGENCE

OpenAI will pay people to report vulnerabilities in ChatGPT

The OpenAI logo on a laptop computer arranged in the Brooklyn borough of New York, US, on 12 January.

OpenAI will start paying people as much as $20,000 to help the company find bugs in its artificial intelligence (AI) systems, such as the massively popular ChatGPT chatbot.

The AI company wrote in a blog post on Tuesday that it has rolled out a bug bounty program through which people can report weaknesses, bugs or security problems they find while using its AI products. Such programs, which are common in the tech industry, entail companies paying users for reporting bugs or other security flaws. OpenAI is running the program in partnership with Bugcrowd Inc, a bug bounty platform.

The company will pay cash rewards depending on the size of the bugs uncovered, ranging from $200 for what it calls “low-severity findings” to $20,000 for “exceptional discoveries”.

The company said part of why it’s rolling out the program is because it believes “transparency and collaboration” are key to finding vulnerabilities in its technology.

“This initiative is an essential part of our commitment to developing safe and advanced AI,” said the blog post, written by Matthew Knight, OpenAI’s head of security. “As we create technology and services that are secure, reliable and trustworthy, we would like your help.”

The Bugcrowd page for OpenAI’s bounty program details a number of safety issues related to the models that aren’t eligible for rewards, including jailbreak prompts, questions that result in an AI model writing malicious code, or queries that result in a model saying bad things to a user.

The announcement doesn’t come as a complete surprise. Greg Brockman, president and co-founder of the San Francisco-based company, recently mentioned on Twitter that OpenAI had been “considering starting a bounty program” or network of “red-teamers” to detect weak spots.

He made the comment in response to a post written by Alex Albert, a 22-year-old jailbreak prompt enthusiast whose website compiles written prompts intended to get around the safeguards chatbots like ChatGPT have in place.

“Democratised red teaming is one reason we deploy these models,” Brockman wrote.


