Government use of AI unregulated and open to risk: audit office
The Dutch government is using or has tested 433 AI-based systems and around half of them have not been properly assessed for “dangers”, the national audit office said on Thursday.
Some 167 systems are currently being experimented with and 120 are in regular use. But just 5% have been listed in an official register of algorithms, which was introduced in the wake of the childcare benefit scandal, the audit office said. Algorithms played a central role in identifying people as potential fraudsters in that scandal, and thousands of people were wrongly accused as a result.
Audit office spokesman Ewout Irrgang told broadcaster NOS he is concerned that the risk attached to using AI has been underestimated. “There is a drive not to classify AI systems as risky,” he said. That way, he said, they do not have to comply with tough EU rules which come into effect in 2026.
The EU rules, for example, say using AI to assess whether people are at risk of committing crimes should be classed as high risk, and such systems should be checked regularly to ensure they are not discriminatory.
Most Dutch government systems are classed as “minimal risk” but this does not mean they are without risk, particularly when it comes to privacy, the audit office said.
AI-based systems are most popular at the infrastructure ministry (84) and the justice and asylum ministries (84). Of those in use, 124 relate to processing information and 82 have regulatory and compliance roles. In total, 70 government departments and agencies are using AI-based systems.
Examples given of successful AI use include taking notes during meetings and helping people fill in government forms or make a police report about online scams. The prison service has even experimented with robotic dogs to inspect cells.
Health minister Fleur Agema has pinned her hopes on AI to solve the growing shortage of healthcare workers, which she says is set to reach 200,000 by 2033. Experts, however, have warned that the minister is being too optimistic.
Privacy
In July, privacy watchdog AP warned that “ill-considered” algorithms that discriminate against citizens are still being widely used by government agencies.
These included the student grant body DUO’s use of an algorithm to detect fraud regarding student grants that the AP called “discriminatory in nature without any substantiation.”
The AP also criticised benefits agency UWV for illegally using algorithms to detect fraud with unemployment benefits, while the police use of facial recognition has also come under scrutiny.
Since last year, the independent AP has been tasked with coordinating the supervision of algorithms and AI, including those used by big tech companies.