Automating Life-or-Death Decisions: Why Autonomous Weapons Demand Action Now

Kris Szczepaniak

Estimated reading time: 7 minutes

This analysis asks provocative questions about the ethical and policy guardrails needed to establish responsibility for the use of lethal force.

On the 14th of May 2025, the United Nations Secretary-General, António Guterres, renewed his appeal for an international ban on Lethal Autonomous Weapon Systems (LAWS), branding them “politically unacceptable” and “morally repugnant”. He called on member states to agree on clear and binding regulations by 2026.

To date, there is no clear, internationally recognised definition of LAWS, though they can be broadly understood as devices capable of killing without direct human control. It is therefore no surprise that the UN is calling for a robust regulatory framework, and some experts argue that such regulation is long overdue. That assessment is hard to dispute when we zoom in on the conflicts in Ukraine and the Middle East, which have become active deployment sites for AI-enabled military systems.

In this unregulated environment, both the democratic West and autocratic regimes are developing LAWS. According to Bloomberg, in 2024 alone the US military maintained approximately 800 ongoing AI-related projects, with funding totalling USD 1.8 billion. Private AI defence technology has seen a substantial surge as well, with companies such as Palantir, Helsing and Anduril driving the boom. Meanwhile, comparable figures from Russia, China and Iran remain largely undisclosed.

Are We Ready to Ban “Killer Robots”?

The rise of AI in the defence sector prompts a question about public support for LAWS. Surveys of public attitudes conducted over the years have consistently shown that support for the use of LAWS depends strongly on context. Using LAWS in a defensive capacity is generally seen as more acceptable, especially if the adversary uses them as well. In any other context, however, the opposition is fierce. The largest study to date found that 61% of respondents strongly opposed using LAWS in an offensive capacity, with that figure increasing over time.

Autonomous Weapons Unleashed

There is broad agreement on the need to regulate lethal autonomous weapons. Yet, paradoxically, the development of these technologies has already crossed a point of no return. This contrast raises a question: can we reasonably entrust the defence industry with a mandate to self-regulate? Can we expect it to act responsibly?

The track record of adjacent technologies is not encouraging. Predictive analytics and facial recognition tools used in policing are built on biased data across multiple dimensions. First, these tools are trained on datasets that reflect the historical over-policing of Black, Brown and other marginalised communities. Because they disproportionately target minority groups, they risk producing what critics have called “supercharged institutional racism”. Examples include the expansion of surveillance technologies by US police during the Black Lives Matter protests and the use of predictive algorithms by British forces, as detailed in a recent Amnesty International report.

Second, the Israel Defense Forces indirectly revealed that, during the recent war in Gaza, they used the “Lavender” system, which allegedly identified up to 37,000 Palestinians, many of them civilians, as low-ranking combatants with supposed links to Hamas.

Automation brings dehumanisation, particularly in military contexts. In the future, might those identified by a “Lavender 7.0” be killed by what is effectively a fully integrated autonomous weapon at the push of a button (or even with no button at all)?

These and many other examples lead some experts to argue that the component systems of LAWS are inherently non-compliant and unfixable, and therefore incompatible with international law and human rights. Can we trust defence technology companies?

The Weaponised Choice

Imagine a future of warfare in which wars are fought not by humans but by “killer robots” making life-or-death decisions independently. Here’s the thing: it’s already happening. Are we heading towards the dystopian worlds known from the unsettling visions of Isaac Asimov and Stanislaw Lem, or of The Terminator and The Matrix?

The human relationship with machines will only become more important in the future. Automation bias, the tendency of humans to rely too heavily on decision-support systems, is one example of how that relationship can go wrong. How large a challenge will automation bias pose going forward? How many sectors beyond defence will be affected by placing too much trust in machines?

While the democratic West deliberates over the ethics and responsibility of military AI development, autocratic regimes, notably Iran, Russia and China, press ahead with few reservations. Can they be trusted to abide by an international convention, assuming they sign one in the first place?

As the Royal United Services Institute and numerous experts have noted, further development of LAWS is inevitable. LAWS are easier to produce than conventional weapons and, on measures such as efficiency, scalability and cost-per-effect, can exceed their capabilities. Could their relatively low production cost and accessibility fuel proliferation among non-state actors, from criminal gangs to rogue law-enforcement units? If so, should we even bother trying to outlaw autonomous weapons? Or should we regulate them, but manufacture them solely for proportionate defensive capabilities?

As the UN and the Red Cross have reported, the increased use of LAWS in warfare and the risk of their proliferation “poses serious humanitarian, legal, ethical and security risks, including increased propensity to engage in violent conflicts”. Some regulatory proposals within the UN favour mitigation strategies over outright prohibition, with preserving a human chain of command among the options, but how practical and proportionate would such measures actually be?

If LAWS become widespread, adversaries might deploy fully autonomous systems and flout any regulations. Should defending forces then disable safeguards and let their own LAWS operate autonomously when attacked? Would the public accept misidentified targets, friendly-fire mistakes or other horrors carried out without human oversight? And if civilians are killed, who would be held accountable: the manufacturer, the government, the military, or perhaps the scientist behind the code?

The “Charm” of Limited Liability

A similar dilemma was pondered in the past, when the modern company was first born. Originally, limited liability was mainly about protecting investors’ finances from a company’s creditors. Today, the debate is less about money and more about the dehumanisation involved in handing life-or-death decisions over to machines.

In simple modern terms, limited liability means that shareholders can lose no more than they have invested if anything goes wrong with the company itself. As a legal concept, limited liability has existed in various forms since at least 1855, when the Limited Liability Act was introduced in Britain. But it was not until 1897 that the House of Lords ruled, in Salomon v A Salomon & Co Ltd, that a company and its shareholders are separate legal persons. The concept later evolved into the bedrock of corporate law as we know it today, famously described by N.M. Butler as “the greatest single discovery of modern times”.

Obvious parallels can be drawn between the legal status of 19th-century corporations and the regulation of LAWS today. Yet the question of responsibility remains. Should the AI algorithms built into LAWS possess a separate legal personality? In other words, should AI be “personally” responsible for its actions?

Enticing as this concept might seem, since it would dilute ownership of LAWS’s dirty deeds, it raises another crucial question: if we grant an algorithm legal personality, do we also assign it obligations and punishments? What does it mean to “punish” an algorithm? Can it (or she, he, or they) “feel” the punishment?

Any acknowledgement of “feelings” could set a precedent, recognising an algorithm as a sentient, conscious being in the eyes of the law. Could an adversary’s army one day take an AI humanoid robot “hostage”? And might a mass power outage that renders an AI inoperable be compared to the Great Famine?

Conclusions

Mounting evidence suggests that the deployment of LAWS is all but inevitable. As a result, questions around containment, damage control, risk management, regulatory compliance, civil defence measures and proliferation have never been more urgent. Will we allow technocratic momentum to outpace our moral responsibility, or will we assert control to ensure that human judgement prevails in times of war and peace?

As we stand on the verge of an era where machines may autonomously make life-or-death decisions, we – the collective of human beings – are responsible for the legacy we leave to future generations. Let us not allow fear or greed to dictate our actions.
