OpenAI Broke Canadian Privacy Laws Training ChatGPT and Must Now Fix It Within Months
When ChatGPT launched in November 2022, privacy concerns were already known internally at OpenAI. Now, a joint investigation by Canada's federal privacy commissioner and counterparts from British Columbia, Alberta, and Quebec, launched after an April 2023 complaint, has confirmed those concerns were valid.

The investigation found that OpenAI unlawfully collected overly broad personal information, including health details, political views, and children's data scraped from social media and public forums, without clearly informing Canadians. The company also failed to give users a straightforward means of accessing, correcting, or deleting their personal information.

OpenAI has since retired the non-compliant earlier ChatGPT models and agreed to corrective measures. Within three months, it must notify users that chats may be used for model training. Within six months, it must improve data exports, clarify user rights, protect children of public figures, and confirm that retired datasets are no longer used for active development. OpenAI will submit quarterly compliance reports until all commitments are fulfilled.