Authorities worldwide are racing to rein in artificial intelligence, including in the European Union, where groundbreaking legislation is set to pass a key hurdle Wednesday.
Riskier applications, such as hiring tools or technology aimed at children, will face tougher requirements, including greater transparency and the use of accurate data. Also forbidden is AI that exploits vulnerable people, including children, or uses subliminal manipulation that can result in harm, for example, an interactive talking toy that encourages dangerous behaviour.

Lawmakers beefed up the original proposal from the European Commission, the EU's executive branch, by widening the ban on remote facial recognition and biometric identification in public.
AI systems used in categories like employment and education, which could affect the course of a person's life, face tough requirements such as being transparent with users and taking steps to assess and reduce the risk of algorithmic bias. Generative AI systems like ChatGPT would also have to disclose the copyrighted material used to train them. That would let content creators know whether their blog posts, digital books, scientific articles or songs have been used to train the algorithms powering such systems. Then they could decide whether their work has been copied and seek redress.

The European Union isn't a big player in cutting-edge AI development. That role is taken by the U.S. and China.
"The fact this is regulation that can be enforced and companies will be held liable is significant" because other places like the United States, Singapore and Britain have merely offered "guidance and recommendations," said Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties.

Others are playing catch-up. Britain, which left the EU in 2020, is jockeying for a position in AI leadership.