Bloomberg
The European Union (EU) is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behaviour, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.
The rules are part of legislation set to be proposed by the European Commission, the bloc’s executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected to be as soon as next week.
The EU proposal is expected to include the following rules:
• AI systems used to manipulate human behaviour, exploit information about individuals or groups, carry out social scoring or conduct indiscriminate surveillance would be banned in the EU. Some public security exceptions would apply.
• Remote biometric identification systems used in public places, like facial recognition, would need special authorisation from authorities.
• AI applications considered to be ‘high-risk’ would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.
• High-risk AI would cover systems that could endanger people’s safety, lives or fundamental rights, as well as the EU’s democratic processes; examples include self-driving cars and remote surgery.
• Some companies will be allowed to undertake assessments themselves, whereas others will be subject to checks by third parties. Compliance certificates issued by assessment bodies will be valid for up to five years.
• Rules would apply equally to firms based in the EU or abroad.
EU member states would be required to appoint assessment bodies to test, certify and inspect the systems, according to the document. Companies that develop prohibited AI services, supply incorrect information or fail to cooperate with national authorities could be fined up to 4% of global revenue.
The rules won’t apply to AI systems used exclusively for military purposes, according to the document.
A European Commission spokesman declined to comment on the proposed rules. Politico reported on the draft document earlier.
“It’s important for us at a European level to pass a very strong message and set the standards in terms of how far these technologies should be allowed to go,” Dragos Tudorache, a liberal member of the European Parliament and head of the committee on artificial intelligence, said in an interview. “Putting a regulatory framework around them is a must and it’s good that the European Commission takes this direction.”
As artificial intelligence has started to penetrate every part of society, from shopping suggestions and voice assistants to decisions around hiring, insurance and law enforcement, the EU wants to ensure technology deployed in Europe is transparent, has human oversight and meets its high standards for user privacy.
The proposed rules come as the EU tries to catch up to the US and China on the roll-out of artificial intelligence and other advanced technology. The new requirements could hinder tech firms in the region from competing with foreign rivals if they are delayed in unveiling products because they first have to be tested.
Once proposed by the commission, the rules could still change following input from the European Parliament and the bloc’s member states before becoming law.
Tudorache said it was critical that the final version of the law not stifle innovation and that it limit bureaucratic hurdles as much as possible.
“We have to be very, very clear in the way we regulate – when, where and in which conditions, engineers and businesses have to actually go to regulators to seek authorisation and to be very clear where it’s not,” he said.