AI regulation: EU, USA & China

Date
November 5, 2021
Contributor
Mario Grunitz

We created a site to keep you updated with the latest developments in AI regulations and policies: AI Regulation News.

Trustworthy Artificial Intelligence (AI) promises innovative and effective solutions to the challenges facing the world today. To realise this potential, it is crucial that the technology used globally is of the highest quality and is developed in a way that earns people's trust.

To achieve this, world superpowers and leaders in AI development such as the European Union (EU), the USA, and China have begun rolling out their own approaches to AI regulation. But what exactly does AI regulation entail, and what are the future implications for global businesses that leverage these technologies?

What is meant by AI regulation?

AI regulation refers to public-sector policies and laws that promote and govern AI. The aim is both to encourage AI adoption across the private and public sectors and to manage the associated risks of doing so. Many of the policy considerations deal with the technical and economic implications of widespread AI, with the goal of ensuring trustworthy, human-centred systems.

Most regulations focus on the risks and biases of Machine Learning (ML) algorithms, AI's core technology, examined from the perspective of input data, decision models, and algorithm testing.
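
To illustrate what testing an algorithm for bias can look like in practice, here is a brief sketch that computes a simple disparate impact ratio, comparing positive-outcome rates between two groups. The loan-approval data, the function names, and the 0.8 rule of thumb mentioned in the comments are illustrative assumptions, not requirements drawn from any of the regulations discussed here.

```python
# A minimal sketch of one common fairness check: comparing positive-outcome
# rates across groups (the "disparate impact" ratio). Illustrative only.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive decisions (1 = approved, 0 = rejected)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0


def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high > 0 else 1.0


if __name__ == "__main__":
    # Hypothetical loan-approval decisions for two demographic groups.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved
    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, below a 0.8 rule of thumb
```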

Basic AI regulation is based on the existing Asilomar AI Principles and the Beijing AI Principles. From these, it is divided into three topics: governance of autonomous intelligence systems, responsibility and accountability for those systems, and privacy and safety issues.

On a macro scale, AI regulation is seen as a way to approach the AI control problem — how humans remain in control of an artificial superintelligence.

Key players

The three key players in the drive toward responsible and accountable global AI regulation are the European Union, the United States, and China. Each has outlined its own approach to addressing the challenges and implications of regulating AI.

Approaches to AI regulation

European Union (EU)

To address the rapid technological development of AI within a global policy context in which more and more countries are investing heavily in the technology, the EU's 2021 Coordinated Plan on Artificial Intelligence sets out to establish clear and consistent global leadership in trustworthy AI.

The EU plans to achieve this through 4 key policy objectives:

  1. Set enabling conditions for AI’s development and uptake
  2. Build strategic leadership in high-impact sectors
  3. Make the EU the right place for AI to thrive
  4. Ensure AI technologies work for people

The EU's approach to AI regulation focuses on identifying and mitigating the potential risks AI systems pose, including risks to fundamental rights and user safety. The risk-based approach is categorised into the following levels (sketched briefly in code after the list):

  • Unacceptable risk: anything considered a clear threat to EU citizens is banned, from social scoring by governments to toys with voice assistants that encourage dangerous behaviour in children.
  • High risk: AI used in critical infrastructure, education and vocational training, safety components of products, essential private and public services, recruitment, law enforcement that may interfere with fundamental rights, migration and border control, and the administration of justice and democratic processes.
  • Limited risk: AI systems such as chatbots are subject to specific transparency obligations, so that people interacting with them know they are dealing with an AI system and can make informed decisions.
  • Minimal risk: free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, where the new rules do not intervene, as these systems represent minimal or no risk to citizens' rights or safety.
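
To make the tiered structure more concrete, below is a minimal, hypothetical sketch of how a business might tag its own AI systems against these four levels. The `EURiskTier` and `AISystem` names, the example attributes, and the classification rules are illustrative assumptions for this article only; they are simplifications, not part of the EU's legal text.

```python
from dataclasses import dataclass
from enum import Enum


class EURiskTier(Enum):
    """The four risk levels of the EU's proposed framework."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations


@dataclass
class AISystem:
    # Hypothetical attributes a compliance team might record per system.
    name: str
    performs_social_scoring: bool = False
    used_in_high_risk_domain: bool = False  # e.g. recruitment, infrastructure
    interacts_with_people: bool = False     # e.g. a chatbot


def classify(system: AISystem) -> EURiskTier:
    """Map a system description to a risk tier (simplified illustration)."""
    if system.performs_social_scoring:
        return EURiskTier.UNACCEPTABLE
    if system.used_in_high_risk_domain:
        return EURiskTier.HIGH
    if system.interacts_with_people:
        return EURiskTier.LIMITED
    return EURiskTier.MINIMAL


if __name__ == "__main__":
    chatbot = AISystem(name="support-chatbot", interacts_with_people=True)
    spam_filter = AISystem(name="spam-filter")
    print(classify(chatbot).value)      # limited
    print(classify(spam_filter).value)  # minimal
```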

The accompanying legal framework applies to public and private actors both inside and outside the EU, as long as the AI system is placed on the EU market or its use affects EU citizens. This includes both providers and users of high-risk AI systems. It does not, however, apply to private, non-professional use.

If any of the rules stipulated in the legal framework are breached, penalties will be handed down.

USA

The USA's National Artificial Intelligence Initiative (NAII) was born out of the National AI Initiative Act of 2020 (DIVISION E, SEC. 5001), which became law in the United States on January 1, 2021.

The aim of the NAII is to develop a coordinated program across the U.S. Federal government to drive leadership in AI research and development, education, and use of trustworthy AI in public and private sectors. Additionally, the initiative aims to integrate AI systems across all sectors of the American economy and society.

The NAII plans to establish cooperation among all U.S. departments and agencies, together with academia, industry, non-profits, and civil society organisations, to achieve its goals via 6 key strategic pillars:

  1. Innovation
  2. Advancing trustworthy AI
  3. Education and training
  4. Infrastructure
  5. Applications
  6. International cooperation 

The USA’s proposed approach to AI risk assessment can be classified into 3 categories: assessment, independence and accountability, and continuous review.

  • Assessment: businesses will be required to conduct algorithm impact assessments to identify the associated risks with clear approaches to resolve each risk.
  • Independence and accountability: impact assessments must be conducted by independent third-party players to ensure complete transparency and accountability.
  • Continuous review: as AI evolves at a rapid rate, continuous review and impact assessment of AI systems is crucial to keeping compliance in step with the pace of innovation.

The USA's approach differs from the EU's in that it embraces and enables opportunities for AI development in the public and private sectors through transparent collaboration with federal departments.

The EU, by contrast, focuses on risk avoidance and hazard minimisation and is creating a strict framework for players to operate within. While the USA's approach gives players full autonomy over their processes to galvanise innovation, the EU warns of severe penalties if any of its rules are breached.

China

In 2017, China's State Council released its plan for the Development of New Generation Artificial Intelligence (Guo Fa [2017] No. 35). A first of its kind in China, the plan is positioned as a response to AI quickly becoming the new focus of international competition and proposes how China can become a leader in global AI development.

The plan has 3 objectives:

  • Create a new international competitive advantage
  • Stimulate the development of new industries 
  • Enhance national security

The plan sets out to establish an integrated collaboration of existing government and industry resources, centred on 3 pillars:

  • A government-guided, market-driven funding and support mechanism for AI projects. This will also include public-private partnerships to encourage the development, application, and commercialisation of AI technologies across all sectors.
  • The formation of clusters of AI innovation bases, particularly centred on State Key Laboratories, Enterprise State Key Laboratories, and National Engineering Laboratories, as well as makerspaces and incubators.
  • The active collaboration of international resources with domestic ones, including facilitation and support for international M&As and for the establishment of overseas R&D centres. International actors will also be encouraged to establish AI R&D centres in China. Through this, China aims to establish international organisations in the field of AI to lead the formulation of AI-related international standards.

The plan aims to drive the development of AI technologies through a collaborative approach between private companies and local governments. This comes in the form of internationally established private companies labelled 'AI National Champions' (e.g. Alibaba, Baidu), explained as follows:

"Being endorsed as a national champion involves a deal whereby private companies agree to focus on the government's strategic aims. In return, these companies receive preferential contract bidding, easier access to finance, and sometimes market share protection."

Local governments are given the freedom to incentivise new technology development while also accommodating "national government policy aims". Both private companies and local governments have considerable flexibility in how to proceed, with only a few specific guidelines. This allows businesses to choose which technologies to develop and gives local governments a choice of private-sector partners.

What does this all mean for businesses?

As we enter the early days of global AI regulation, any business looking to leverage AI solutions must be aware of the pending rules in order to start off on the right foot. Businesses with existing AI systems, on the other hand, must set up their own AI teams to evaluate where changes can be made in order to remain within the proposed legal frameworks of their country.

Even though it is too early to tell precisely what laws will be put into effect regarding AI regulation, it is important for new businesses to be aware of the parameters and protocols likely to be in place when going to market. Business plans and models will likely need to be reimagined, and new opportunities will no doubt arise to combat the related challenges of regulation. 

The best way to safeguard against uncertainty is to keep abreast of the latest information and updates. You can do this by visiting our AI regulation website here: AI Regulation News.

Mario Grunitz

Mario is a Strategy Lead and Co-founder of WeAreBrain, bringing over 20 years of rich and diverse experience in the technology sector. His passion for creating meaningful change through technology has positioned him as a thought leader and trusted advisor in the tech community, pushing the boundaries of digital innovation and shaping the future of AI.
