
FTC Investigates OpenAI for Privacy and Cybersecurity Concerns

The U.S. Federal Trade Commission (FTC) is investigating OpenAI for potential cybersecurity and privacy violations related to the data it uses to train its generative artificial intelligence model, ChatGPT. In a 20-page civil investigative demand (essentially a subpoena), the FTC states that the investigation concerns whether OpenAI has engaged in unfair or deceptive privacy and data security practices, or has engaged more generally in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm. Either would violate Section 5 of the FTC Act. The demand poses 49 questions and requests 17 categories of documents relating to OpenAI's practices.

This investigation comes on the heels of complaints to the FTC from the public alleging algorithmic bias and privacy issues, as well as instances where ChatGPT has "hallucinated" inaccurate results in response to prompts.

Moreover, the investigation comes as various countries have moved to regulate the use of generative AI. Calls to regulate it in the United States have also ramped up, including the proposed framework from Senator Chuck Schumer.

Overall, businesses should continue to review their use of generative AI models, particularly the sources of their training data, to ensure that the data is handled in compliance with applicable privacy laws.

The Federal Trade Commission is investigating OpenAI for possible violations of consumer protection law, seeking extensive records from the maker of ChatGPT about its handling of personal data, its potential to give users inaccurate information and its “risks of harm to consumers, including reputational harm.”

Tags

privacy, cybersecurity, generative ai, ai, large learning models, innovative technology