AI gets another year's free pass as EU and US regulation stalls
The UK and EU governments are throwing themselves in front of the proverbial tram that is AI ethics standards. The European Commission (EC) has drafted laws governing the use of artificial intelligence, but reports say it could take up to a year to actually get them in place.
Right now, we're in something of an AI wild west. The law has been pushed to one side while new AI applications are being built everywhere, completely unregulated.
According to Reuters (via Artificial Intelligence News), two lawmakers involved in the EU legislation said the debate centres on whether facial recognition should be banned outright, and on who should have the power to enforce the rules and police artificial intelligence.
The situation is similar in the US, where there is still no federal regulation of AI, though reports suggest some US AI regulation is 'coming soon'. It will evidently take a different form, however, swapping the detailed framework proposed by the EC for an agency-by-agency approach.
A previous draft by the European Commission established a number of classifications for artificial intelligence, depending on the level of risk each system could pose to us as a species. These range from “limited risk systems” such as chatbots and spam filters to “unacceptable risk” – any system that is exploitative, manipulative or that could “perform real-time biometric authentication in public places for law enforcement”.
It all sounds Orwellian, but when we're already letting DeepMind train an AI to control nuclear fusion, you'd think facial recognition would be the least of our worries.
"High-risk" AI systems will need to undergo intense scrutiny and be heavily regulated to operate within the law. Regulation could include anything from human oversight to mandatory risk management systems or government registration. Any system deemed high risk will likely require rigorous record keeping and logging in case anything goes wrong, and may have to make those records fully transparent to users.
It seems video game AI, at least, is set to fall into the limited-risk category, though who knows whether that will be bumped up a notch once everyone lets go of reality and makes the exodus into the metaverse.
A decision has been made that any AI sitting at the very high end of the risk spectrum will be subject to a blanket ban on deployment. The difficulty, however, lies in classifying AI systems in the first place, and the spat looks set to continue for some time.
“Facial recognition is going to be the biggest ideological discussion between left and right,” Dragos Tudorache of the European Parliament told Reuters in an interview. “I don’t believe in outright bans. For me, the solution is to have the right rules.”
The decision may take some time, but Chris Philp, a UK minister at the Department for Digital, Culture, Media and Sport (DCMS), insisted: "We are laying the groundwork for growth over the next decade with a strategy that helps us capture the potential of artificial intelligence and play a leading role in shaping the way the world governs it."
The UK government is also working with the Alan Turing Institute to create an AI Standards Centre to oversee its involvement in guidance on AI ethics and safety.
Sure, these decisions shouldn't be taken lightly, but a whole year to put some rules around AI? It's a little worrying that it remains a free-for-all in the meantime. I mean, science fiction has been exploring these big ideas for years, folks, catch up. I'm sure Orwell, Philip K. Dick, and other speculators about the future are rolling in their graves.
Nonetheless, I'm at least glad these bigwigs are seeking advice from top scientists rather than making snap decisions just to move faster. I'd rather regulation be done right than be reactionary, but the lack of rules in the short term means we're relying on AI systems staying trustworthy all on their own.