Legislators across the European Union have come one step closer to enacting broad artificial intelligence regulations. The EU AI Act, a proposed law that would assess and restrict AI applications based on risk, won the European Parliament’s approval on Wednesday, pushing it toward a final vote later this year. If passed, the law is expected to have sweeping implications for technology companies both in and outside Europe.
The EU AI Act currently divides AI applications into four risk categories. Applications posing little or no risk, like spam filters and virtual video game opponents, can be used freely under the law. Limited-risk applications, like chatbots, face minor rules and guidelines; most notably, users must be told they’re “conversing” with a chatbot rather than a human. High-risk applications involve transportation, law enforcement, employment, financial services, and other areas that affect people’s safety; before deployment, these must be thoroughly assessed for security, dataset integrity, and transparency. Finally, “unacceptable risk” applications that actively threaten people’s rights, livelihoods, or safety are prohibited outright. This includes applications that “have a significant potential to manipulate persons through subliminal techniques…[or] exploit vulnerabilities of specific groups…in a manner that is likely to cause them or another person psychological or physical harm.” A newly established European Artificial Intelligence Board (EAIB) would enforce each category’s specific rules.
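To make the tiered structure concrete, here is a minimal sketch of the four categories as a simple lookup table. The `RiskTier` names, the obligation strings, and the `obligations_for` helper are illustrative assumptions paraphrased from the summary above, not the Act’s actual terminology or legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """Hypothetical labels for the Act's four risk categories."""
    MINIMAL = "minimal"            # e.g., spam filters, video game opponents
    LIMITED = "limited"            # e.g., chatbots
    HIGH = "high"                  # e.g., hiring, policing, financial services
    UNACCEPTABLE = "unacceptable"  # e.g., subliminal manipulation


# Illustrative summary of each tier's obligations, paraphrased from the
# article's description; not drawn from the Act's annexes.
OBLIGATIONS = {
    RiskTier.MINIMAL: "may be used freely",
    RiskTier.LIMITED: "must disclose to users that they are interacting with AI",
    RiskTier.HIGH: "must pass assessment of security, dataset integrity, "
                   "and transparency before deployment",
    RiskTier.UNACCEPTABLE: "prohibited outright",
}


def obligations_for(tier: RiskTier) -> str:
    """Look up the (illustrative) obligation attached to a risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
# must pass assessment of security, dataset integrity, and transparency before deployment
```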
The European Parliament’s vote marks a significant step in the EU AI Act’s journey. Since its introduction in April 2021, the proposed law has undergone a series of amendments, research studies, and discussions that have placed the draft under intense scrutiny. While individual members and political groups within the European Parliament have weighed in on the draft before, this is the first time the legislative body as a whole has expressed its position. The draft now moves toward negotiations among the Parliament, the European Commission, and the Council of the European Union, which hope to reach a final agreement by the end of the year.
Tech companies across the United States have been vocal about the consequences of the law’s potential passage. OpenAI, the company behind ChatGPT, said last month that it might leave Europe if it’s unable to comply with the EU AI Act’s rules regarding copyrighted material. (Whether OpenAI would be “unable” or merely unwilling to meet those requirements remains to be seen, though CEO Sam Altman did say he thought the draft was “over-regulating.”) Google and Microsoft, both of which have been investing heavily in AI, have also voiced their distaste for the law.
But while ChatGPT, Bard, and similar programs might be the first things people think of when they hear the term AI, it’s not lost on the EU that AI has plenty of other uses. One of the first pages of the EU AI Act states that “the use of artificial intelligence can support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy.” It will just take some risk mitigation to ensure that AI’s adverse effects don’t outweigh those benefits.
If the law passes later this year, it’s expected to have downstream effects on other legislative bodies, which could, in turn, enact their own AI regulations. While the United States has an “AI Bill of Rights,” it only serves as a suggestion; its guidelines aren’t enforceable. As for domestic AI regulations, Colorado Senator Michael F. Bennet told the Washington Post on Wednesday that he thinks “we’re behind where the EU is.” Senate Majority Leader Charles E. Schumer added that the US is several months away from proposing its own AI legislation, saying a bipartisan team would “start looking at specific stuff in the fall.”