We created a site to keep you updated with the latest developments in AI regulations and policies: AI Regulation News.
Trustworthy Artificial Intelligence (AI) promises to provide innovative and effective solutions to meet the challenges facing the world today. In order to realise this potential, it is crucial that technology used globally is of the highest quality and developed in a way that earns the trust of all people.
To achieve this, world superpowers and leaders in AI development such as the European Union (EU), the USA, and China have begun rolling out their own approaches to AI regulation. But what exactly does AI regulation entail, and what are the future implications for global businesses that leverage these technologies?
AI regulation involves the creation of public-sector policies and laws for promoting and regulating AI. The aim is both to encourage AI adoption in the private and public sectors and to manage the associated risks of doing so. Many policy considerations address the technical and economic implications of widespread AI adoption, with the goal of ensuring trustworthy, human-centred systems.
Most regulations focus on the risks and biases of Machine Learning (ML) algorithms, AI’s core technology, examined from the perspective of input data, decision models, and algorithm testing.
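To make the idea of algorithm testing concrete, here is a minimal sketch of one widely used bias check: the demographic parity difference, i.e. the gap in positive-outcome rates between groups in a model’s decisions. The group labels, toy loan-approval data, and helper function below are illustrative assumptions, not part of any regulation or specific framework.

```python
def demographic_parity_difference(groups, decisions):
    """Return the absolute gap in positive-decision rates between groups.

    groups    -- a group label for each individual (e.g. "A" or "B")
    decisions -- the model's binary decision (1 = approved) for each individual
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for grp, d in zip(groups, decisions) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy loan-approval decisions for two hypothetical applicant groups.
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

gap = demographic_parity_difference(groups, decisions)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A regulator-style audit would typically flag a gap this large for further review; in practice such checks are run on real decision logs and combined with other fairness metrics.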
Basic AI regulation builds on the existing Asilomar AI Principles and Beijing AI Principles, and is divided into three topics: governance of autonomous intelligence systems, responsibility and accountability for those systems, and privacy and safety issues.
On a macro scale, AI regulation is seen as a way to approach the AI control problem — the question of how humans can remain in control of an artificial superintelligence.
The three key players in the drive toward responsible and accountable global AI regulation are the European Union, the United States, and China. Each has outlined its own approach to address the various challenges and implications of AI regulation compliance.
To address the rapid technological development of AI in a global policy context where more and more countries are investing heavily in the technology, the EU’s 2021 Coordinated Plan on Artificial Intelligence (AI) sets out to establish clear and consistent global leadership in trustworthy AI.
The EU plans to achieve this through four key policy objectives:
The EU’s approach to AI regulation focuses on identifying and mitigating the potential risks AI systems pose to fundamental rights and user safety. This risk-based approach is categorised into the following levels:

- Unacceptable risk: systems considered a clear threat to safety or fundamental rights (such as government-run social scoring) are prohibited outright.
- High risk: systems used in sensitive areas such as critical infrastructure, education, employment, or law enforcement, which must meet strict obligations before being placed on the market.
- Limited risk: systems such as chatbots, which carry transparency obligations so that users know they are interacting with AI.
- Minimal risk: the vast majority of AI applications, which may be used freely.
The legal framework accompanying the 2021 Coordinated Plan on Artificial Intelligence (AI) applies to public and private actors operating both inside and outside the EU, as long as the AI system is placed on the EU market or its use affects EU citizens. This includes both providers and users of high-risk AI systems. It does not, however, apply to private or non-professional uses.
Breaches of the rules stipulated in the legal framework will incur penalties.
The U.S.A.’s National Artificial Intelligence Initiative (NAII) was born out of the National AI Initiative Act of 2020 (DIVISION E, SEC. 5001) which became law in the United States on January 1, 2021.
The aim of the NAII is to develop a coordinated program across the U.S. Federal government to drive leadership in AI research and development, education, and use of trustworthy AI in public and private sectors. Additionally, the initiative aims to integrate AI systems across all sectors of the American economy and society.
The NAII plans to establish cooperation among all U.S. departments and agencies, together with academia, industry, non-profits, and civil society organisations, to achieve its goals via six key strategic pillars:
The USA’s proposed approach to AI risk assessment can be classified into three categories: assessment, independence and accountability, and continuous review.
The USA’s approach differs from the EU’s in that it embraces and enables opportunities for AI development in the public and private sectors through transparent collaboration with federal departments.
The EU, by contrast, focuses on risk avoidance and hazard minimisation, and is creating a strict framework within which players must operate. While the USA’s approach gives players broad autonomy over their processes to spur development, the EU warns of severe penalties if any of its rules are breached.
In 2017, China’s State Council released its plan for the Development of New Generation Artificial Intelligence (Guo Fa [2017] No. 35). The first of its kind in China, the plan is positioned as a response to AI quickly becoming the new focus of international competition, and proposes how China can become a leader in global AI development.
From this, the plan sets three milestone objectives:

- By 2020, keep pace with globally advanced levels of AI technology and applications.
- By 2025, achieve major breakthroughs in basic AI theory and reach a world-leading level in some technologies and applications.
- By 2030, become the world’s primary centre of AI innovation.
China’s Development of New Generation Artificial Intelligence (Guo Fa [2017] No. 35) plan sets out to establish an integrated collaboration of existing government and industry resources, centred on 3 pillars:
The plan aims to drive the development of AI technologies through a collaborative approach between private companies and local governments. This comes in the form of internationally established private companies labelled as ‘AI National Champions’ (e.g. Alibaba, Baidu), explained as follows:
“Being endorsed as a national champion involves a deal whereby private companies agree to focus on the government’s strategic aims. In return, these companies receive preferential contract bidding, easier access to finance, and sometimes market share protection”.
Local governments are given the freedom to incentivise new technology development while also accommodating “national government policy aims”. Private companies and local governments enjoy considerable flexibility in how to proceed, with only a few specific guidelines. This allows businesses to select the technologies they want to develop, while giving local governments a choice of partnerships in the private sector.
In these early days of global AI regulation, any business looking to leverage AI solutions must be aware of the pending rules in order to start off on the right foot. Businesses with existing AI systems, meanwhile, should set up their own AI teams to evaluate where changes can be made in order to remain within the proposed legal frameworks of their country.
Even though it is too early to tell precisely what laws will be put into effect regarding AI regulation, it is important for new businesses to be aware of the parameters and protocols likely to be in place when going to market. Business plans and models will likely need to be reimagined, and new opportunities will no doubt arise to combat the related challenges of regulation.
The best way to safeguard against uncertainty is to keep abreast of the latest information and updates. You can do this by visiting our AI regulation website here: AI Regulation News.
An executive’s guide to AI and Intelligent Automation. Working Machines takes a look at how the renewed vigour for the development of Artificial Intelligence and Intelligent Automation technology has begun to change how businesses operate.