Dutch privacy watchdog asks ChatGPT for “clarification” on data
The Dutch data protection authority AP says it is “concerned” about the way personal data is being handled by artificial intelligence companies and has asked the makers of ChatGPT for information.
ChatGPT, developed by OpenAI, is a chatbot that gives seemingly convincing answers to questions and has been used by 1.5 million people in the Netherlands since its launch four months ago.
ChatGPT is based on an advanced language model that is “trained” on data, including information already on the internet, as well as on the questions users ask, which are stored and reused.
“That data can contain sensitive and very personal information, for example, if someone asks for advice about a marital dispute or medical issues,” the AP said.
The privacy watchdog said it wants to know whether the questions people ask the system are used to train the algorithm and, if so, in what way, and it also wants information about how data is collected from the internet.
Furthermore, the AP said it has concerns about the information ChatGPT “generates” in its answers. “The generated content may be inaccurate, outdated, inappropriate or offensive and may take on a life of its own,” the AP said. “Whether and how OpenAI can rectify or delete that data is unclear.”
The system was banned by the Italian data protection authority at the start of April over privacy concerns but reinstated at the end of the month after OpenAI “addressed or clarified” the issues raised.
Privacy regulators in Europe have now set up a ChatGPT task force to coordinate their approach.
The company says ChatGPT users can now turn off chat history, allowing them to choose which conversations can be used to train the system.