The European Union finalized the world's most comprehensive artificial intelligence regulation in March 2024. By the time the law entered into force in August 2024, 47 countries had announced competing frameworks. None matched the EU's scope.
This is the global AI regulation landscape in 2025: fragmented, reactive, and shaped more by geopolitical competition than by coherent policy vision. The United States has no federal AI law. China regulates algorithms but not the technology itself. The EU created a risk-based system that most companies are still struggling to comply with.
What follows is a living tracker of who is regulating AI, how they're doing it, and where the gaps remain large enough to drive an autonomous vehicle through.
The global AI regulation landscape
Artificial intelligence regulation exists in three distinct models. The EU uses a risk-based framework that bans certain applications outright and imposes transparency requirements on others. China regulates AI through content control and algorithmic accountability rules tied to its existing censorship infrastructure. The United States relies on voluntary commitments, sector-specific rules, and state-level action.
No major economy has banned AI development. Every framework attempts to balance innovation with harm prevention. The difference is in who defines harm and who bears the cost of getting it wrong.
The EU AI Act, which entered into force in August 2024, categorizes AI systems by risk level. Unacceptable-risk systems—social scoring by governments, real-time biometric surveillance in public spaces, manipulative AI that exploits vulnerabilities—are banned. High-risk systems in areas like employment, education, law enforcement, and critical infrastructure face mandatory risk assessments, transparency requirements, and human oversight obligations.
China's approach centers on algorithmic governance. The Provisions on the Administration of Algorithmic Recommendations, effective since March 2022, require companies to disclose how algorithms make decisions, allow users to opt out of personalized recommendations, and prohibit algorithms that encourage addiction or excessive consumption. A separate generative AI regulation, implemented in August 2023, mandates content reviews and prohibits output that undermines state power.
The United States has executive orders, agency guidance, and a patchwork of state laws. No binding federal legislation. The October 2023 Executive Order on Safe, Secure, and Trustworthy AI directed agencies to develop standards and required AI developers to share safety test results with the government before public release. Compliance is voluntary. Enforcement mechanisms are unclear.
The pattern: Europe regulates the product. China regulates the output. America regulates through procurement power and hopes the market follows.
United States: federal and state actions
The federal government's primary AI policy tool was an executive order, one a successor administration could rescind with a signature. President Biden's October 2023 order established safety and security standards for AI development, directed the National Institute of Standards and Technology to create testing frameworks, and required federal agencies to assess AI's impact on workers and civil rights.
It did not create enforceable law. It created homework assignments for agencies.
Under the order, the Commerce Department required companies developing AI models with significant computing power to report their safety testing to the government. The requirement applied to models trained with more than 10^26 integer or floating-point operations, a threshold that captures frontier models from OpenAI, Anthropic, Google, and Meta. Companies had to notify the government 30 days before beginning training runs that exceeded the threshold.
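For a sense of the arithmetic, here is a minimal sketch of the threshold test. It uses the common rule of thumb that a training run costs roughly 6 × parameters × tokens in floating-point operations; the heuristic and the example model sizes are illustrative assumptions, not anything specified in the order.

```python
# Rough check: does a planned training run cross the executive
# order's 10^26-operation reporting threshold? Uses the common
# back-of-envelope estimate of ~6 * parameters * tokens total
# training FLOPs -- an approximation, not regulatory text.

REPORTING_THRESHOLD = 1e26  # operations, per the executive order

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total compute for one training run (6ND heuristic)."""
    return 6 * params * tokens

def must_report(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) > REPORTING_THRESHOLD

# A 1-trillion-parameter model on 15 trillion tokens:
# 6 * 1e12 * 1.5e13 = 9e25 operations, just under the line.
print(must_report(1e12, 1.5e13))  # False
# Double the data and the same model crosses it: 1.8e26 operations.
print(must_report(1e12, 3e13))    # True
```

The EU applies the same style of compute test to general-purpose models, as discussed below, at 10^25 operations, an order of magnitude lower.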
What happens if they don't comply? The executive order doesn't say. The Defense Production Act, which the order invokes as legal authority, allows the government to prioritize contracts and allocate materials during national emergencies. Whether it allows the government to compel private AI safety disclosures has never been tested in court.
Congress has introduced 47 AI-related bills since January 2024. None has passed. The closest attempt—the Algorithmic Accountability Act, which would require companies to assess their AI systems for bias and discrimination—has been reintroduced in three consecutive sessions and has never received a floor vote.
State governments are filling the vacuum. Colorado became the first state to pass comprehensive AI regulation in May 2024. The law, effective February 2026, requires developers of high-risk AI systems to prevent algorithmic discrimination, conduct impact assessments, and give consumers the right to appeal consequential decisions made by AI. It applies to systems used in employment, housing, education, healthcare, and legal services.
California, New York, and Illinois have narrower rules. California's AB 2013 requires developers of generative AI systems to publish summaries of the data used to train them. New York City's Local Law 144 requires employers to audit AI hiring tools for bias and disclose their use to candidates. Illinois' Artificial Intelligence Video Interview Act mandates that companies notify applicants when AI analyzes video interviews and obtain consent before use.
The federal-state split creates a compliance nightmare for companies operating nationally. A hiring algorithm legal in Texas may violate New York City law. A credit-scoring model compliant in Florida may trigger Colorado's impact assessment requirements. The result is regulatory arbitrage—companies choose where to deploy AI based on which states ask the fewest questions.
European Union: the AI Act
The EU AI Act is the most detailed artificial intelligence regulation in force anywhere. It runs 144 pages. It bans four categories of AI outright, regulates 27 types of high-risk systems, and imposes transparency requirements on general-purpose models.
The banned applications: social scoring systems that rank people based on behavior or personal characteristics; real-time biometric identification in public spaces by law enforcement, with narrow exceptions for serious crimes; AI that exploits vulnerabilities of specific groups, including children and people with disabilities; and subliminal manipulation that causes physical or psychological harm.
High-risk AI systems—those used in critical infrastructure, education, employment, law enforcement, migration and border control, and administration of justice—must meet mandatory requirements before deployment. Developers must conduct conformity assessments, maintain technical documentation, implement risk management systems, ensure human oversight, and achieve specified levels of accuracy and cybersecurity. They must register their systems in an EU database.
General-purpose AI models, including large language models like GPT-4 and Claude, face transparency obligations. Developers must publish training data summaries, disclose energy consumption, and implement copyright compliance measures. Models that pose systemic risk—defined as computational power exceeding 10^25 floating-point operations—face additional requirements including adversarial testing and serious incident reporting.
Penalties scale with revenue. Companies that deploy banned AI systems face fines up to €35 million or 7 percent of global annual turnover, whichever is higher. Violations of high-risk system requirements carry fines up to €15 million or 3 percent of turnover. Providing incorrect information to regulators costs up to €7.5 million or 1.5 percent of turnover.
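The "whichever is higher" structure matters more than the headline figures, because it scales the ceiling with company size. A minimal sketch of the arithmetic, with invented turnover numbers:

```python
# EU AI Act penalty ceilings: a fixed cap in euros or a percentage
# of global annual turnover, whichever is higher. The tier figures
# follow the Act; the turnover numbers below are invented.

def max_fine(fixed_cap_eur: float, turnover_share: float,
             global_turnover_eur: float) -> float:
    """Upper bound of a fine under the higher-of-two-caps rule."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Banned-practice tier: EUR 35M or 7% of turnover.
# At EUR 2B turnover, 7% = EUR 140M, so the percentage governs.
print(max_fine(35e6, 0.07, 2e9))    # 140000000.0
# At EUR 100M turnover, 7% = EUR 7M, so the fixed cap governs.
print(max_fine(35e6, 0.07, 100e6))  # 35000000.0
```

For a company the size of Google or Meta, the percentage cap runs into the billions. For a startup, the fixed cap is the binding number.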
Enforcement begins in phases. The ban on prohibited practices took effect in February 2025. Rules for general-purpose AI models apply from August 2025. High-risk system requirements become enforceable in August 2026. Companies have 18 months to comply with the most demanding provisions.
The question is whether the EU has the capacity to enforce what it wrote. The regulation relies on member states to designate national authorities, conduct market surveillance, and investigate violations. As of April 2025, 11 member states had not yet designated their enforcement agencies. The European AI Office, created to coordinate implementation, has 140 staff members to oversee a market of 450 million people.
China and other major players
China regulates AI through three overlapping frameworks: algorithmic recommendations, generative AI, and content security. The system is built to preserve state control of information, not to constrain the technology itself.
The algorithmic recommendation rules, administered by the Cyberspace Administration of China, require platforms to register algorithms with the government, conduct security assessments, and allow users to disable personalized recommendations. Companies must label algorithmically generated content and provide explanations for why specific content was recommended. The rules prohibit algorithms that create echo chambers, encourage addiction, or discriminate based on user characteristics.
Enforcement is selective. Douyin, the Chinese version of TikTok, faced a $14.5 million fine in March 2024 for algorithmic violations that the CAC did not specify. Alibaba's Taobao received a warning for recommendation algorithms that the regulator said "induced excessive consumption." ByteDance, Tencent, and Baidu have registered more than 80 algorithms with the government since the rules took effect.
Generative AI faces separate content controls. The Measures for the Management of Generative Artificial Intelligence Services, effective August 2023, require companies to train models on data that "embodies core socialist values" and to prevent output that contains content prohibited under Chinese law—including material that undermines state power, damages national unity, or spreads false information. Companies must register with the CAC before offering generative AI services to the public.
The registration process is opaque. Baidu's ERNIE Bot received approval in August 2023. SenseTime's SenseChat followed in September. More than 40 Chinese companies have submitted applications. The CAC has not published approval criteria or rejection rates.
Other major economies are watching rather than regulating. The United Kingdom published an AI white paper in March 2023 proposing a principles-based approach with no new legislation. Existing regulators—the Information Commissioner's Office for data protection, the Equality and Human Rights Commission for discrimination, the Competition and Markets Authority for market abuse—would apply existing law to AI within their domains. As of April 2025, the government has not introduced enabling legislation.
Canada's Artificial Intelligence and Data Act, part of a broader digital charter implementation bill introduced in 2022, sat in committee until Parliament was prorogued in January 2025, killing the bill. It would have prohibited AI systems that cause serious harm, required high-impact systems to undergo assessments, and created a new AI and Data Commissioner.
Japan released AI guidelines in May 2024 that encourage voluntary compliance with transparency and fairness principles. The guidelines carry no legal force. South Korea went further: its National Assembly passed an AI Basic Act in December 2024, with enforcement scheduled to begin in January 2026. India's approach consists of advisory frameworks issued by the Ministry of Electronics and Information Technology with no enforcement mechanisms.
Key areas being regulated
Facial recognition is the most restricted AI application globally. The EU AI Act bans real-time biometric identification in public spaces except for specific law enforcement purposes including locating missing children, preventing imminent terrorist attacks, and prosecuting serious crimes. Even these exceptions require judicial authorization.
Fourteen U.S. cities have banned or restricted government use of facial recognition. San Francisco, Oakland, and Berkeley prohibit municipal agencies from using the technology; Boston, Portland, and Minneapolis have passed bans of their own. No federal law restricts private-sector use. Clearview AI, which scraped billions of photos from social media to build a facial recognition database, operates legally in most U.S. jurisdictions while facing regulatory action in the EU, UK, Canada, and Australia for privacy violations.
Employment decisions are the second major regulatory focus. The EU classifies AI used in recruitment, worker performance evaluation, and task allocation as high-risk, triggering mandatory assessments. New York City requires bias audits of automated employment decision tools. Illinois prohibits AI analysis of video interviews without candidate consent. Maryland bans employers from using facial recognition during interviews without prior notice and consent.
The regulations share a common gap: they focus on disclosure and assessment, not on banning discriminatory outcomes. An employer can use a biased hiring algorithm in New York City as long as it conducts an annual audit and publishes summary statistics. The law does not require the employer to stop using the algorithm if the audit finds bias. It requires paperwork.
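What a Local Law 144-style audit actually produces is selection rates by group and the ratio of each group's rate to the most-selected group's rate. Here is a minimal sketch of that calculation, with invented numbers; the exact statutory methodology is more detailed.

```python
# Impact-ratio arithmetic behind a bias audit: each group's
# selection rate divided by the rate of the most-selected group.
# The data here is invented; the statutory method has more detail.

from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (group, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / total[g] for g in total}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# 100 applicants per group: group A selected 60% of the time,
# group B 30% of the time.
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 30 + [("B", False)] * 70)
print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.5}
```

An impact ratio of 0.5 for group B is precisely the kind of figure the employer must publish. Publishing it is where the obligation ends.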
Credit and insurance pricing face growing scrutiny. The EU's high-risk category includes AI used to evaluate creditworthiness and determine insurance premiums. The U.S. Equal Credit Opportunity Act and Fair Housing Act prohibit discrimination but do not specifically address algorithmic decision-making. The Consumer Financial Protection Bureau issued guidance in 2022 stating that lenders using AI must provide specific, accurate reasons for adverse credit decisions—a requirement that conflicts with the opacity of many machine learning models.
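The conflict is easiest to see in code. For a linear scorecard, "specific, accurate reasons" fall out naturally: rank the features by how far each one pulled the applicant below a baseline. The sketch below does exactly that, with invented feature names and weights; no comparably clean decomposition exists for an opaque deep model, which is the tension the CFPB guidance exposes.

```python
# One common way lenders derive adverse-action reasons from a
# linear scoring model: rank features by how much each one reduced
# this applicant's score relative to a baseline applicant. The
# feature names, weights, and baseline are invented for illustration.

def adverse_action_reasons(weights: dict, applicant: dict,
                           baseline: dict, top_n: int = 2) -> list:
    """Return the top_n features that most reduced the score."""
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    # Most negative contributions first: these dragged the score down.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return [name for name, value in ranked[:top_n] if value < 0]

weights   = {"utilization": -2.0, "history_years": 0.5, "inquiries": -1.0}
applicant = {"utilization": 0.9, "history_years": 2, "inquiries": 6}
baseline  = {"utilization": 0.3, "history_years": 8, "inquiries": 1}

# Contributions: inquiries -5.0, history_years -3.0, utilization -1.2.
print(adverse_action_reasons(weights, applicant, baseline))
# ['inquiries', 'history_years']
```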
Law enforcement and criminal justice applications are regulated inconsistently. The EU restricts but does not ban AI in policing. China integrates AI into surveillance infrastructure with no public accountability. The United States allows police departments to adopt predictive policing, risk assessment algorithms, and facial recognition with minimal oversight. A 2023 study by the Brennan Center for Justice found that 40 percent of U.S. law enforcement agencies use some form of AI, and fewer than 15 percent have policies governing its use.
Education is the least regulated high-stakes domain. AI tutoring systems, automated grading, student surveillance tools, and admissions algorithms operate with almost no binding rules. The EU AI Act classifies AI used to determine educational access or evaluate students as high-risk, but enforcement does not begin until 2026. No U.S. state has passed comprehensive regulation of AI in education. Proctoring software that uses AI to detect cheating—criticized for high false-positive rates and discriminatory performance—remains widely used in higher education with no federal standards for accuracy or bias testing.
What to watch in 2026
The EU's high-risk AI requirements take effect in August 2026. That is the enforcement deadline companies are designing around. Expect a wave of compliance theater: impact assessments conducted by the same teams that built the systems, risk management frameworks that document rather than mitigate harm, and technical documentation that satisfies regulatory checklists without changing how the technology works.
The real test is whether any EU member state actually fines a major company for non-compliance. The regulation allows penalties up to 7 percent of global revenue. Applying that to a U.S. tech company would trigger a transatlantic regulatory conflict. Applying it to a European company would require political will that may not survive lobbying pressure. The first enforcement action will reveal whether the AI Act has teeth or is a 144-page suggestion.
In the United States, the question was settled at the ballot box: the incoming administration rescinded the executive order in January 2025, on its first day. Congressional action remains unlikely absent a high-profile AI disaster that forces a legislative response. The pattern in U.S. tech regulation is that laws pass only after the harm is undeniable and the victims are sympathetic. Algorithmic discrimination has not yet produced that moment.
State-level action will accelerate. At least nine states are considering Colorado-style AI accountability laws in 2025 legislative sessions. The result will be a compliance patchwork that makes national AI deployment legally complex and creates pressure for federal preemption—which, if it comes, is more likely to favor industry than consumers.
China's regulatory trajectory is the hardest to predict. The current framework allows the government to approve or reject AI services with no transparency. That gives the state enormous power to shape which companies succeed and what kinds of AI the public can access. It also creates uncertainty that may slow Chinese AI development relative to the U.S., where companies operate under fewer constraints. Whether China prioritizes control or competitiveness will shape the global AI race.
The question is not whether AI will be regulated. It is whether regulation will be designed to protect people or to protect the companies building the systems. The EU chose the former, at least on paper. The U.S. has chosen the latter by choosing nothing. China chose state power. The gap between these models is wide enough that the same AI system can be banned in Brussels, legal in Beijing, and unregulated in Boston.
That is not a sustainable equilibrium. One of three things happens next: regulatory convergence around a common standard, fragmentation that creates incompatible AI markets, or a race to the bottom in which companies deploy in the least regulated jurisdiction and export the technology everywhere else. The months between now and the EU's August 2026 enforcement deadline will show which path the world is on.