OpenAI whistleblowers ask SEC to investigate the company’s NDAs
By AP Staff
FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, Tuesday, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)
NEW YORK (AP) — OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission and asked the agency to investigate whether the ChatGPT maker illegally restricted workers from speaking out about the risks of its artificial intelligence technology.
A letter to SEC Chair Gary Gensler, sent on behalf of “one or more anonymous and confidential” whistleblowers, asks the agency to swiftly and aggressively enforce its rules against nondisclosure agreements that discourage employees or investors from raising concerns with regulators.
The July 1 letter references a formal whistleblower complaint recently filed with the SEC. The Washington Post was the first to report on the letter.
U.S. Sen. Chuck Grassley's office shared a copy of the letter with The Associated Press, noting it was provided to his office by legally protected whistleblowers.
“OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” Grassley, an Iowa Republican, said in a written statement. “In order for the federal government to stay one step ahead of artificial intelligence, OpenAI’s nondisclosure agreements must change.”
OpenAI and the SEC didn’t immediately respond to requests for comment Monday.