More than 100 robotaxis stopped moving at roughly the same time in a Chinese city. Not one car. Not a handful with isolated sensor faults. A fleet — coordinated, networked, and simultaneously inert — blocking traffic while the company that built them said nothing. BBC News reported that Baidu, the Chinese tech giant behind the Apollo Go robotaxi service, did not respond to requests for comment about the outage.
That silence is the story. Not the malfunction itself — systems fail, software crashes, hardware breaks. The story is that a company operating more than 100 autonomous vehicles on public roads, in a live urban environment, with real passengers and real traffic, had no public response ready when all of them stopped working at once. The question the industry does not want asked is a simple one: who decided this was ready?
Autonomous vehicle deployment has followed a predictable commercial logic. Companies race to accumulate ride data, because ride data trains the algorithms, better algorithms justify the next funding round, and the next funding round justifies the next city launch. Baidu's Apollo Go has been expanding aggressively across Chinese cities — Wuhan, Chongqing, Shenzhen — with ambitions to reach 65 cities by 2025, according to the company's own investor materials. The business model requires scale, and in this framing scale is not the destination but the product. The passengers and the public roads are the testing environment.
This is not unique to Baidu. The autonomous vehicle industry globally has operated on a similar logic: deploy first, document failures, iterate. General Motors' Cruise unit lost its California operating license in 2023 after one of its robotaxis dragged a pedestrian 20 feet following a collision — and after the company initially withheld footage from regulators, according to the California Department of Motor Vehicles. Waymo, widely considered the industry's technical leader, has logged hundreds of minor incidents in San Francisco, though it has maintained a stronger safety record than its competitors. The pattern across the sector is consistent: safety claims are made prospectively, and the public absorbs the cost of disproving them. The broader conversation about AI in critical infrastructure — including growing pressure in the U.S. for federal guardrails on AI systems — has largely not caught up to the specific risks of autonomous fleets operating at speed on public roads.
The simultaneous nature of the Baidu failure deserves particular attention. A single vehicle malfunction is a hardware or software problem. A hundred vehicles malfunctioning together is an architecture problem — and a much more dangerous one. It means the fleet shares a dependency: a central server, a software update, a network connection, a mapping service. When that dependency fails, everything fails. This is not a theoretical vulnerability. It is what happened. And it raises a question that urban planners, transit officials, and regulators in every city that has invited robotaxi operators onto their streets should be asking: what is the contingency when the fleet goes down during a hospital shift change, or a school pickup window, or a storm?
China's regulatory environment for autonomous vehicles has been deliberately permissive. The government has treated AV deployment as a strategic national technology priority, with local governments competing to attract operators through licensing incentives and reduced oversight requirements. That competitive dynamic between cities — each wanting to be the home of the country's most advanced mobility infrastructure — has created pressure to approve deployments faster than safety frameworks can be built around them. Baidu has benefited directly from this environment. Apollo Go obtained permits to operate fully driverless rides in Wuhan in 2022 and has since expanded its driverless zones without a corresponding expansion of public incident reporting requirements.
The people most affected by a simultaneous fleet failure are not the investors or the engineers. They are the passengers mid-ride when the cars stop. They are the drivers of other vehicles caught behind a hundred stationary autonomous taxis. They are the emergency responders trying to navigate a city whose traffic patterns have been quietly reorganized around a private company's network uptime. None of these people were asked whether the infrastructure was ready. They were simply enrolled in the test. Who bears the cost when networked AI systems fail at scale, and who has the authority to pause deployment until that is settled? Regulators from Beijing to Brussels to Sacramento have so far declined to answer clearly, even as the AI industry expands into consequential public-facing roles faster than oversight frameworks can follow.
Baidu's non-response to the BBC is, in this context, more than a PR failure. It is a demonstration of the accountability gap at the center of autonomous vehicle deployment. When a bus company's vehicles break down, there is a transit authority, a union contract, a public service obligation, a regulator with jurisdiction. When a hundred robotaxis stop working simultaneously, there is a press inquiry that goes unanswered. That gap — between the public consequences of fleet-scale autonomous systems and the private, largely unaccountable companies operating them — will only widen as the industry scales. The Baidu outage did not happen because autonomous vehicles are inherently unsafe. It happened because the industry has been allowed to define the terms of its own accountability, and those terms do not include answering questions when a hundred cars stop moving at once. The relevant regulators should be asking, loudly, why that is still acceptable — and what it costs when industries are permitted to regulate themselves until something goes wrong badly enough to make the news.