AI regulation is all the rage. Since the success of OpenAI's chatbot ChatGPT, the public's attention has been divided between surprise and concern about what these powerful AI tools can do. Generative AI has been touted as a potential game changer for productivity tools and creative assistants, but these models are already showing how they can be harmful: they have been used to generate misinformation, and they could become tools for spam and scams.
In recent weeks, everyone from technology-company CEOs to US senators and G7 leaders has called for stricter international regulations and limits on AI technology. The good news? Policymakers do not need to start from scratch.
At MIT Technology Review, we have analyzed six very different international efforts to regulate artificial intelligence. We set out the pros and cons of each, and gave each a score reflecting how influential we think it is.
A legally binding AI treaty
The Council of Europe (COE), a human rights organization made up of 46 countries, is finalizing a legally binding treaty on artificial intelligence. The treaty requires signatories to take steps to ensure that AI is designed, developed, and applied in a way that protects human rights, democracy, and the rule of law. It may also include moratoriums on technologies that endanger human rights, such as facial recognition.
If everything goes according to plan, the body could finish drafting the text in November, according to Nathalie Smuha, a lawyer and philosopher at the Faculty of Law of the Catholic University of Leuven (Belgium) and an adviser to the Council.
Advantages: The Council of Europe includes many non-European countries, including the United Kingdom and Ukraine. In addition, it has invited others to the negotiating table, such as the US, Canada, Israel, Mexico, and Japan. “It’s a good sign,” says Smuha.
Disadvantages: Each country must ratify the treaty individually and then incorporate it into its national law, a process that can take years. There is also the possibility that countries could opt out of provisions they don’t like, such as strict rules or moratoriums. The negotiating team is trying to strike a balance between strengthening protections and getting as many countries as possible to sign, Smuha explains.
Impact Rating: 3/5
The OECD AI principles
In 2019, the 38 member countries of the Organisation for Economic Co-operation and Development (OECD) agreed to adopt a set of non-binding principles laying out the values that should underpin AI development. Under these principles, AI systems should be transparent and explainable; they should function in a robust, safe, and secure manner; they should have accountability mechanisms; and they should be designed to respect the rule of law, human rights, democratic values, and diversity. The principles also state that AI should contribute to economic growth.
Advantages: A kind of manifesto for Western AI policy, these principles have since shaped policy initiatives around the world. For example, the OECD’s legal definition of AI is likely to be adopted in the EU’s AI Act. The OECD also tracks national AI regulations and investigates their economic impact. In addition, it has a global network of AI experts who conduct research and share best practices.
Disadvantages: The OECD’s mandate as an international organization is not to regulate, but to stimulate economic growth, says Smuha. And countries still face a lot of work to translate these high-level principles into actionable policies, says Phil Dawson, policy director at Armilla, a responsible-AI platform.
Impact Rating: 4/5
The Global Partnership on AI
In 2020, Justin Trudeau, prime minister of Canada, and Emmanuel Macron, president of France, created the Global Partnership on Artificial Intelligence (GPAI). This international body aims to share AI research and information, foster international research collaboration around responsible AI, and inform AI policy around the world. The organization has 29 member countries, including some from Africa, Latin America, and Asia.
Advantages: The GPAI’s value lies in its ability to foster research and international cooperation, says Smuha.
Disadvantages: Some AI experts have called for an international body, along the lines of the UN’s Intergovernmental Panel on Climate Change (IPCC), to share knowledge and research on AI, and GPAI had the potential to fit the bill. However, after an ambitious start, the organization has kept a low profile and has not published any work in 2023.
Impact Rating: 1/5
The EU AI Act
The European Union is finalizing the AI Act, a sweeping regulation that aims to rein in the riskiest uses of AI systems. First proposed in 2021, the law will govern AI in high-stakes sectors such as healthcare and education.
Advantages: The law could hold bad actors accountable and prevent the most harmful excesses of AI by imposing large fines and prohibiting the sale and use of noncompliant AI technology in the EU. The bill will also regulate generative AI and place restrictions on systems deemed to pose an “unacceptable” risk, such as facial recognition. As the only comprehensive AI regulation in the works, the EU is taking the lead. The EU regime is likely to become the de facto global AI regulation, because companies from non-EU countries seeking to do business in the trading bloc will have to adjust their practices to comply with the law.
Disadvantages: Many aspects of the bill, such as the ban on facial recognition and the approach to regulating generative AI, are highly controversial. In addition, the EU will face lobbying from technology companies seeking to water down the regulations. It will take at least a couple of years before the law works its way through the EU legislative system and comes into force.
Impact Rating: 5/5
Industry technical standards
Technical standards from standards bodies will play an increasingly important role in translating regulation into straightforward rules that companies can follow, according to Dawson. For example, once the EU’s AI Act is passed, companies that meet certain technical standards will automatically be in compliance with the law. Several AI standards already exist, and more are on the way. The International Organization for Standardization (ISO) has already developed standards for how companies should conduct risk management and impact assessments, as well as how they should manage the development of AI.
Advantages: These standards help companies implement complex regulation. As countries begin writing their own individual AI laws, the standards will help companies create products that work across multiple jurisdictions, Dawson says.
Disadvantages: Most standards are generic and apply across many sectors, so companies will have to interpret them for their specific industry. That could place a big burden on small businesses, according to Dawson. One of the most contentious points is whether technical experts and engineers should be the ones writing standards around ethical risks. “Many are concerned that policymakers are offloading difficult questions about best practice onto the development of industry standards,” says Dawson.
Impact Rating: 4/5
The United Nations
The United Nations, which has 193 member states, wants to be the international organization that supports and facilitates global coordination on AI. To that end, the UN established a new Technology Envoy in 2021. That same year, UNESCO, the UN’s cultural and scientific agency, and its member states adopted a voluntary AI ethics framework. In it, members pledge, for example, to introduce ethical impact assessments for AI, assess AI’s environmental impact, and ensure that the technology promotes gender equality and is not used for mass surveillance.
Advantages: The UN is the only meaningful forum on the international stage where countries in the Global South have been able to influence AI policy. While the West has committed to the OECD principles, the UNESCO AI ethics framework has had a huge impact in developing countries, which are newer to AI ethics. Notably, China and Russia, which have been largely excluded from Western debates on AI ethics, have also signed on to the principles.
Disadvantages: That raises the question of how seriously countries will follow voluntary ethical guidelines, given that many, including China and Russia, have used AI to surveil their citizens. The UN also has a spotty record on technology. Its first attempt at global tech coordination stumbled: the diplomat chosen as Technology Envoy was suspended after just five days amid a bullying scandal. And UN efforts to set standards for lethal autonomous weapons, also known as killer robots, have made no progress for years.
Impact Rating: 2/5