Palantir and Anduril Sell Targeting Systems That Kill Civilians. Why Are We Still Calling Them Tech Companies?

AI companies like Palantir and Anduril sell targeting systems used in Gaza and in U.S. strikes on Iranian-backed forces, killing thousands of civilians. They are defense contractors hiding behind tech branding, and the regulatory failure is already lethal.

Image via The Guardian US

Between 2023 and 2025, Israeli forces used an AI-assisted targeting system called Lavender to generate kill lists in Gaza. The Guardian US reports that the system, built with technology from companies including Palantir, produced targets so rapidly that human operators spent an average of 20 seconds reviewing each name before approving lethal strikes. The result: thousands of civilian deaths, many of them children killed in their homes because a single family member was a suspected militant. This is not a hypothetical risk of AI warfare. This is AI warfare.

The companies building these systems—Palantir, Anduril, Scale AI, and others—do not call themselves defense contractors. They call themselves AI companies. They speak the language of innovation, disruption, and technological progress. Their executives give TED talks about the future of computing. Their marketing departments produce sleek videos about data analytics and machine learning. But the product they are selling is the same product Lockheed Martin and Raytheon sell: the capacity to kill human beings at scale.

The distinction matters because it determines how we regulate them. Defense contractors operate under export controls, congressional oversight, and international arms treaties. AI companies operate under venture capital funding rounds and terms of service agreements. One framework assumes the product is a weapon. The other assumes the product is software. The bodies in Gaza, Syria, and Iraq reveal which assumption is correct.

Consider the "fog procedure" that Israeli forces have used since the second intifada: soldiers fire into darkness on the theory that an invisible threat might be present. According to The Guardian, this logic of preemptive violence based on uncertainty has now been encoded into algorithmic systems. AI targeting software generates probabilities—a 70% likelihood this person is a militant, an 80% chance this building contains weapons—and militaries treat those probabilities as certainty. The machine cannot see into the building any more than the soldier could see into the darkness. But the machine's output carries the authority of data, and data feels like truth.

The human cost is not speculative. Between October 2023 and March 2024, more than 14,000 children were killed in Gaza, according to Palestinian health authorities—a figure that UN agencies have called credible. Many of those deaths occurred in so-called precision strikes on residential buildings where AI systems had flagged a single target. The technology did not fail. It performed exactly as designed: it identified a target, calculated acceptable collateral damage, and authorized the strike. The children died not because the system malfunctioned but because the system worked.

The same pattern is now emerging in U.S. operations against Iranian-backed forces. American drones equipped with AI-assisted targeting have struck facilities in Syria and Iraq, killing not only militants but also civilians in adjacent structures. The Pentagon describes these as "proportionate responses" and cites the precision of its technology. But precision is not the same as discrimination. A weapon can be accurate and still be indiscriminate if the decision to fire is based on incomplete information processed by an algorithm that cannot understand context, intent, or the value of a human life.

What makes this regulatory failure so stark is that we are not dealing with a technological inevitability. Congress has the authority to regulate AI weapons systems the same way it regulates other military technologies. International bodies including the United Nations have called for a ban on fully autonomous weapons. The European Union has proposed restrictions on AI systems used in law enforcement and border control. The legal and diplomatic infrastructure exists. What does not exist is political will.

Part of the problem is the branding. Palantir presents itself as a data analytics company. Anduril describes itself as a defense technology startup reimagining national security. Scale AI calls itself a platform for training machine learning models. These companies have successfully convinced policymakers that they are part of the innovation economy, not the defense industrial base. They lobby alongside Google and Meta, not alongside Northrop Grumman. They hire from Stanford and MIT, not from West Point. And so they are treated as tech companies subject to tech regulation—which is to say, barely regulated at all.

But the product reveals the truth. When Palantir software is used to generate kill lists, Palantir is a weapons manufacturer. When Anduril drones are used to surveil and strike targets, Anduril is a defense contractor. When Scale AI provides the training data that teaches an algorithm to distinguish a militant from a civilian, Scale AI is part of the weapons supply chain. The fact that the product is delivered as software rather than hardware does not change its function. A missile guided by AI is still a missile.

The companies themselves understand this, even if they will not say it publicly. Palantir's contracts with the Israel Defense Forces and the U.S. Department of Defense are not for consumer software. They are for battlefield systems. Anduril's promotional materials feature autonomous drones tracking targets in contested environments. These are not productivity tools. They are instruments of state violence, and they should be regulated as such.

What would that regulation look like? At minimum: the same export controls that apply to conventional weapons should apply to AI targeting systems. Companies selling these technologies to foreign militaries should be subject to the same congressional oversight and human rights vetting as companies selling fighter jets or missile systems. Algorithmic accountability frameworks already being debated in Europe—requiring transparency about how AI systems make decisions, mandating human review of high-stakes determinations, prohibiting fully autonomous lethal systems—should be adopted and expanded.

The broader question is whether any targeting system that reduces human judgment to a probability score can ever meet the legal and ethical standards of distinction and proportionality required by international humanitarian law. If an algorithm cannot understand that a child in a building is not an acceptable cost of killing a suspected militant in the same building, then the algorithm cannot be trusted with life-and-death decisions. And if the human operator is reviewing targets so quickly that they cannot exercise meaningful judgment, then the human is not really in control—the machine is.

The fog procedure was always a moral failure: violence justified by chosen blindness, shooting into the dark and calling it defense. AI warfare is the same failure at scale, encoded in software and deployed with the authority of data. The companies building these systems are not innovators. They are arms dealers. And the cost of pretending otherwise is already written in the names of thousands of dead children. We know what these systems do. We know who profits from them. The only question left is whether we will regulate them before the body count grows higher.

Tags: Ideas, AI warfare, military contractors, civilian casualties