The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence start-up that makes ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information about individuals.
In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked OpenAI dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data, and said the company should provide the agency with documents and details.
The F.T.C. is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers,” the letter said.
The investigation was reported earlier by The Washington Post and confirmed by a person familiar with the matter.