European Parliament passes artificial intelligence bill: could it become the global standard?

The EU’s artificial intelligence law is now just one step away from becoming a reality.

On Wednesday local time, the European Parliament passed the draft Artificial Intelligence Act with 499 votes in favor and 28 against, clearing a key hurdle on the way to final adoption. The version approved by Parliament will now go into three-way negotiations between the member states, the European Parliament and the European Commission, during which the final text of the bill may see some changes in wording. Final approval of the bill is expected before the end of the year.

However, there will still be some time before these rules officially take effect. After the bill receives final approval, companies and organizations will be given a compliance period, usually about two years.

It is worth noting that, to fill the gap before the law officially takes effect, Europe and the United States are jointly drafting a voluntary code of conduct. Officials from both sides promised at the end of May to draft it “within weeks” and to extend it to other “like-minded countries”.

This is the world’s first comprehensive regulation of artificial intelligence. The bill has been in the works since 2021, but the rapid rise of generative artificial intelligence at the end of last year made the drafting process more urgent. In response to the new situation, the version passed on Wednesday adds provisions addressing generative AI.

Substance of the bill

The framework of the bill, first proposed in 2021, aims to govern any product or service that uses an artificial intelligence system. Depending on the level of risk, AI application scenarios are divided into four tiers: minimal risk, limited risk, high risk and unacceptable risk.

Riskier uses, such as hiring tools or technologies aimed at children, will face tougher requirements, including greater transparency and the use of accurate data. Determining exactly how compliance is enforced, and whether a non-compliant application must be pulled from the market, will largely be left to the 27 EU member states.

In the most serious cases, violations can result in fines of up to 40 million euros, or about 7 percent of a company’s annual global turnover. For tech giants such as Google and Microsoft, the fines could run into billions of euros.

According to the European Commission, most AI systems, such as video games or spam filters, fall into the low-risk or no-risk category.

The bill defines prohibited uses of artificial intelligence, such as systems that exploit vulnerable groups like children, AI that manipulates people subliminally in ways that can cause harm, or interactive talking toys that encourage dangerous behavior. Predictive policing, such as analyzing data to guess who might break the law, is also banned.

In a separate vote on Wednesday, an amendment that would have allowed law-enforcement exceptions, such as using artificial intelligence to find missing children or avert terrorist threats, was rejected.

The latest version of the bill tightens the restrictions on remote biometric identification. For example, real-time remote biometric identification was listed as “high risk” in the original proposal, but was reclassified as “unacceptable risk” in the latest vote, meaning the use of AI in this area is banned.

The bill originally did not cover chatbots at all; in response to the rapid developments of recent months, it has added provisions for them, such as a requirement that chatbots be clearly labeled so that users know they are interacting with a machine.

Another major addition is a requirement to fully document any copyrighted human works (text, images, video and music) used to train AI systems. This would let the original creators know whether their work has been used to train the algorithms, and then decide whether their rights have been infringed and whether to seek compensation.

AI applications in areas such as employment and education, which can affect the course of a person’s life, face stricter requirements, such as maintaining a high degree of transparency toward users and taking steps to assess and reduce the risk of algorithmic bias.

Becoming the global standard?

The EU is not a leading player at the frontier of cutting-edge AI development; that role belongs to the United States and China. However, some analysts believe that when it comes to regulation, the EU often sets the trend, and its rules often become the de facto global standard.

One reason is that the EU is a huge single market with 450 million consumers. For companies, it is easier to comply with a single set of EU rules than to build different products for different regions.

In addition, Europe is indeed at the forefront in terms of regulation, and the Artificial Intelligence Act, now largely finalized, is the first comprehensive law of its kind in the world. For products heading to market, this amounts to a “reassurance” that makes it easier for companies to expand into new markets.

Kris Shrishak, a technologist at the Irish Council for Civil Liberties (ICCL), says that the fact that the EU law can be enforced, with companies that break the rules facing prosecution, is “very important”, because other jurisdictions such as the United States and Britain have so far offered only guidance and initiatives, and “other countries will try to adapt and copy the EU’s rules”.

Companies and industry groups, for their part, warn that Europe needs to strike the right balance in regulation. For example, Sam Altman, chief executive of OpenAI, while supporting some rules for the development of artificial intelligence, has also made clear that “imposing strict rules on the field right now would be a mistake.”

Boniface de Champris, policy manager at the Computer and Communications Industry Association (CCIA), a technology industry group, said the EU would be a pioneer in regulating AI, but whether it can set the global standard remains to be seen, because it must manage risks effectively while leaving developers enough flexibility.

The UK, which has left the EU, is also vying for a leading role in AI. Prime Minister Rishi Sunak plans to host a global summit on AI safety this autumn. He told a technology gathering this week that Britain aspires to be not only a hub for the development of AI technology, but also a global hub for AI safety.