AI on the battlefield
One of the major risks of AI development is its military application
Last week Ukraine’s president Volodymyr Zelenskyy stood before the UN General Assembly and gave a stark warning:
We are now living through the most destructive arms race in human history – because this time, it includes artificial intelligence. —Volodymyr Zelenskyy
In the limited time he had to address the world about the horrors his people have experienced at the hands of Russian aggression, he chose to spend some of it warning us all about the dangers of AI-powered military applications. He went on: “We need global rules – now – for how AI can be used in weapons. And this is just as urgent as preventing the spread of nuclear weapons.” The comparison with nuclear weapons is apt. Like nuclear technology, AI can be incredibly destructive. And like every other technology, it’s hard to put the genie back in the bottle once it’s out. Many solutions have been proposed. The purpose of this post is to review them and raise awareness of this important issue.
First, a bit of background. Zelenskyy warns of AI-controlled drones that can act and make decisions autonomously, at superhuman speeds. But drones are only one of many potential military uses of AI: cyberattacks, strategic planning, AI-fueled propaganda wars, and more. These aren’t hypothetical. Such systems are in active development; early versions have already been tested and, in some cases, deployed. There are reports that Israel has used AI assistance to target Gazans it believes are hostile. Both Ukraine and Russia are using AI to varying degrees on the battlefield. The US is a leader in AI military tech development. China is the country the US is most concerned about keeping pace with, but even Turkey and Iran are developing AI military tech.
So what are the answers? Several have been proposed, but none are great. I’ll run through what I know.
Solution 1. An AI pause. Halt AI development altogether until we have better regulations and effective approaches to AI safety of all kinds (military and non-military). This idea was proposed by several prominent tech leaders not long after ChatGPT was introduced to the world, when it became clear that AGI was a possibility in our lifetimes. Proponents agree that it’s impossible to predict what society will look like post-AGI, and many bad-to-catastrophic possibilities are easy to envision. Unfortunately, this idea has been steamrolled by the last three years of rapid AI development. The promise of unlimited AI-powered economic growth created too strong a market force to be counteracted by a grassroots AI-pause movement. At this point, even if we paused further AI development, the technology that already exists could be massively destructive when paired with military applications. Despite this, the AI-pause movement isn’t dead. Prominent researchers are still calling for a halt to the development of new models, such as Eliezer Yudkowsky in his recent book If Anyone Builds It, Everyone Dies.
Solution 2. AI treaties. This is the solution I’d be most hopeful about, but limited buy-in among political leaders and technical challenges make it difficult. I believe it’s possible, and there are historical precedents: nuclear non-proliferation treaties, for example, have largely succeeded. An AI treaty could take many different forms: agreements not to use AI in weapons development, not to use AI for specific purposes (e.g. targeting systems), and so on. Unfortunately, the reality is that an AI treaty may be hopeless, for a variety of reasons:
The range of potential AI applications makes it hard to clearly define the parameters of a treaty.
It’d be nearly impossible to verify that any state actor is in compliance. Creating a nuclear weapon requires a lot of physical infrastructure, which makes weapons inspections feasible. AI development can go undetected much more easily.
Since AI is cheap, there is always the possibility of a bad actor. It’d be hard to convince a major power not to develop AI military tech when a rogue state could gain a military advantage by doing so.
Despite these difficulties, many countries are taking the idea of an AI treaty seriously. There have been numerous UN resolutions in this direction, some proposed by the US. Last week the UN announced a new International Scientific Panel on AI. There is some cause for hope as leaders take this issue more seriously.
Solution 3. Mutually Assured Destruction. If treaties fail, this is where we’re headed. If there is no viable way to halt AI military tech development, the only option left is to ensure there is a high cost to ever using such technology. Again, nuclear weapons offer at least a glimmer of hope: since their development, the threat of mutually assured destruction seems to have been enough to prevent a full-scale nuclear war for over 70 years. It’s not a great solution, but so far it has worked. Perhaps as more countries get hold of sufficiently destructive AI military tech, they’ll be that much more hesitant to use it. Of course, this is far from an ideal situation. Many experts warn that autonomous systems acting without human oversight can compound errors in decision making and rapidly escalate conflict, making tense situations highly unstable. Military applications are yet another reason why it’s so important to ensure that AI systems are aligned with human values. The “alignment problem” is unquestionably the most important area of AI safety research.
Clearly none of these “solutions” are great, so why would anyone support the use of AI in military technology? I’ll close this post by running through some counter-arguments, so you can decide for yourself how much to worry.
Statistics on self-driving cars now show that they have the potential to be safer: in certain areas, you are far less likely to get into an accident if you are willing to hand control over to an AI system. While many find it disturbing to get into a car with no driver, I expect we’ll eventually get used to it, and we’ll all be better off. Something similar may be true for AI military applications. Escalation in conflict situations due to human errors in judgment has always been a major concern. Adding some degree of AI assistance and/or control to critical decision making in those situations has the potential to reduce those risks. Better ability to evaluate real-time battlefield information can lead to fewer civilian casualties. Better targeting systems can lead to less collateral damage. While you may worry about the many ways AI in the military could lead to bad outcomes, there is an argument that, when implemented carefully, as with self-driving cars, it can result in a safer world. How much you want to believe that argument is, of course, up to you!


