khaskhabar.com : Wednesday, April 12, 2023 12:43 PM
New Delhi. OpenAI, the Microsoft-backed developer of ChatGPT, is now offering security researchers up to $20,000 to spot flaws and help the company distinguish good-faith hacking from malicious attacks, weeks after a security incident forced it to take ChatGPT offline. OpenAI has launched a bug bounty program for ChatGPT and its other products, stating that priority ratings for most findings will use the ‘Bugcrowd Vulnerability Rating Taxonomy’.
“Our rewards range from $200 for low-severity findings to $20,000 for extraordinary discoveries,” said the AI research company.
However, security researchers are not authorized to perform security testing on plugins created by other people.
OpenAI is also asking ethical hackers to protect confidential OpenAI corporate information that may be exposed through third parties.
Some examples of this category include Google Workspace, Asana, Trello, Jira, Monday.com, ZenDesk, Salesforce, and Stripe.
The company stated, “You are not authorized to conduct additional security testing against these companies. Testing is limited to discovering confidential OpenAI information while complying with all laws and the applicable Terms of Service. These companies are examples, and OpenAI does not necessarily do business with them.”
Last month, OpenAI acknowledged that some users’ payment information may have been exposed when it took ChatGPT offline due to a bug.
According to the company, a bug in an open-source library allowed some users to view titles from another active user’s chat history, prompting it to take ChatGPT offline. (IANS)