Mind and Iron: How AI is changing warfare
From a new phase of disinformation to the dawn of autonomous weapons, armed conflict will never be the same
Hi and welcome back to another issue of Mind and Iron. I’m Steve Zeitchik, longtime denizen of The Washington Post and Los Angeles Times, and overseer of this news platform.
[Listen to this intro]
Events in the Middle East have put me in a solemn mood. But we’re an outlet that looks at the human consequence of change, and what better time to do that than amid the dehumanization that’s unfolded in Israel and Gaza over the past six days.
So in this Thursday’s episode you’ll find some stories pertaining to war and conflict. As always, please remember to sign up if you haven’t already done so.
And please consider pledging your support. We know your dollars are hard-earned, and many outlets deserve them. But please think of sending a few our way so we can continue bringing the human-centric tech coverage that we believe is so central to a bright and peaceful future.
This week we look at the morphing nature of disinformation and a fresh attempt to beat it back.
At autonomous weapons, which are drawing dangerously close, along with the generally slippery role of tech in warfare. (This helps our weekly Apocalypse Score shatter a record — and not in a good way.)
And, while he's not saying that the new form of computer intelligence will one day try to rebel against us, he's not not saying it either. What Geoffrey Hinton, the so-called Godfather of AI, thinks about our ability to leave the gun and take the cannoli.
Our future-world quote of the week comes from him:
“I don’t know — I can’t see a path that guarantees safety.”
—Geoffrey Hinton, the researcher and ex-Googler whose work helped lay the foundations of modern AI, on how to ensure it won’t slip human control
Let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Disinformation of multiple stripes; where autonomous weapons are headed; can AI really slip away from us?
1. BY NOW WE'RE STARTING TO GET A GOOD SENSE OF THE DANGER that AI poses in spreading disinformation.
Back in May, a likely AI-generated image of an explosion at the Pentagon prompted panic and even caused the stock market to dip briefly. What used to be laughably fake pieces of media have become slick, almost undetectable pseudo-truth bombs in the age of AI. And they’ll only get more convincing.
[Listen to this story]
The disinformation that has spread since last Saturday, when the Hamas attacks on Israeli civilians kickstarted a war, has so far not been this. It's been of the old-school analog variety. There have been audio tracks replaced (like this attempt to make a CNN report about a Hamas rocket attack seem staged); images crucially miscaptioned (the photo of a Moroccan soccer player waving a Palestinian flag at the World Cup in December misrepresented as Ronaldo waving it now); and images entirely twisted (a supposed shooting down of an Israeli fighter jet by Hamas that was actually footage from a video game). Digital distortions, to be sure. But still, in the most elemental sense, human-created.
Yet it’s only a matter of time before AI becomes the favored tool of bad actors seeking to persuade us of something that didn’t happen. And with withering effectiveness. A few weeks ago we told you about the Chinese government’s distribution of AI images that sully democratic symbols like the Statue of Liberty; those images were shown to generate more engagement than ones that humans designed. If the recruitment of cutting-edge AI for truth-clouding purposes doesn’t happen in this war — and I'd strongly bet it will — it won’t be long before it does.*
One doesn’t need to take too many cognitive leaps before realizing what these AI fakes (they’re too new and persuasive to be called deepfakes) could do — stoke outrage, sway public opinion and eventually change real-world outcomes. In a way that goes well beyond some people dumping stocks.
What’s worse, the haze of uncertainty created by all these convincing forgeries will make us start to doubt genuine videos too, a phenomenon social scientists call “the liar’s dividend.”
So how can it be combated?
There has been some hope that AI itself can recognize AI-based disinformation. But most experts believe this won’t work; training such an AI would fall prey to a level of human subjectivity that would render it useless. Which media we tell the AI is objectively fake will determine which media it then goes out and identifies as objectively fake, and of course the very nature of disinformation is that it puts at issue what’s objectively fake in the first place. (If that’s more muddled than a DALL-E image, this Chicago Booth Review article does a good job of explaining the problem.)
One interesting new counter-offensive comes from the software company Adobe. Adobe is rightly scared that high-level doctoring will forever erase the line between fact and trollish invention. So the firm has led two connected initiatives. Together with Intel, Microsoft, the BBC et al., the company has formed the C2PA, a group that aims to define technical standards for certifying how a piece of media was created and what has been done to it since.
And with the New York Times et al. it has created the Content Authenticity Initiative, or CAI, which has gathered over 1,500 companies, organizations and citizens to sign on to a symbol — an “icon of transparency” — attesting to a piece of media’s authenticity.
This is it here.
Essentially what this kosher symbol does is establish a piece of media’s “provenance” — it tells you it hasn’t been doctored since it was created. If it has been changed, a little red x appears on it like so.
The icon, with or without the red x, will appear on the media of creators who’ve opted in; it includes AI images from Bing, materials from brands overseen by the marketer Publicis, photos from Nikon cameras, and plenty more.
A few months ago I met with Andy Parsons, the Adobe executive who serves as senior director of the CAI. Parsons laid out why he and the company are so passionate about this. As tech gets slicker and slicker at manipulating media, the line between real and fake blurs, and we get more and more confused. We need a system to come in and de-confuse things.
“The consequences of not doing this are too great,” he said. We can’t, he added, stand by as tech gets this good at distorting the truth. So the company is trying to use tech to clarify it.
Basically the CAI is an attempt to literalize the idea that we all get our own opinions but not our own facts. There is only one set of facts — and the symbol tells you whether what you’re looking at belongs to it. If you see the icon, you can trust the image is original.
Notice I said original, not truthful. Because the icon doesn’t really tell you how an image came into being in the first place. It will pick off some of the low-hanging fruit: if a bad actor took a State Dept. video of Secretary Blinken in the Middle East and molded it to say something he didn't say, the x would alert you to the fact that it had been altered. But if a bad actor uses AI to create a video from scratch of Blinken in the Middle East saying something he never said, the video would still bear the clean symbol; after all, no one changed the media after its initial bad-actor creation.
Even the video-game footage wouldn’t get the x, because the media itself hasn't been changed; it’s just the context that’s misleading. The problem with this method, in other words, is that it can’t identify inherent fakes or fake context, only whether potential fakers came along later to manipulate the media.
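If it helps to see that distinction in miniature, here's a rough Python sketch of the provenance idea (not the real C2PA machinery, which uses cryptographically signed manifests embedded in the file; the signing key and helper functions below are hypothetical stand-ins), just to show why tampering gets flagged while a born-fake file sails through:

```python
# A minimal conceptual sketch of provenance checking, NOT the real C2PA/CAI spec.
# A plain HMAC over the bytes stands in for the real cryptographic signature.

import hashlib
import hmac

CREATOR_KEY = b"demo-signing-key"  # hypothetical; real systems use certificate-based signing


def sign_at_creation(media_bytes: bytes) -> str:
    """Creator attaches a signature over the media at the moment it's made."""
    return hmac.new(CREATOR_KEY, media_bytes, hashlib.sha256).hexdigest()


def check_provenance(media_bytes: bytes, signature: str) -> str:
    """Viewer-side check: has the media changed since it was signed?"""
    expected = hmac.new(CREATOR_KEY, media_bytes, hashlib.sha256).hexdigest()
    return "clean icon" if hmac.compare_digest(expected, signature) else "red x"


# An authentic photo, signed at creation, then doctored afterwards: caught.
real_photo = b"pixels of an actual event"
sig = sign_at_creation(real_photo)
print(check_provenance(real_photo + b" (doctored)", sig))  # -> red x

# A wholly AI-generated fake, signed at creation and never touched again: passes.
fake_photo = b"pixels of an event that never happened"
fake_sig = sign_at_creation(fake_photo)
print(check_provenance(fake_photo, fake_sig))  # -> clean icon
```

The check only ever asks whether the bytes changed after signing, never whether they depicted reality in the first place.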
Also, content providers need to opt in, which means a lack of a symbol doesn’t tell you whether something is inauthentic or just from a good-faith actor who decided not to be a part of this.
Finally, would it shock us if disinformation-spreaders went out and threw smoke bombs at the legitimacy of the symbol in the first place? No, no it would not.
A “nutrition label” feature does allow us to hover over an image and see whether AI was used in creating it, but how many of us are checking the metadata history when we see something that stokes our fire? Plus legitimate news-gatherers will use AI too.
Efforts like these are noble, and I don’t want to rag on them too much. They will trip up a chunk of disinformers — never a bad thing — and deter a few others from even trying. But as a fundamental remedy I come away unconvinced.
What the issue boils down to, I think, is a key misconception in the fight against anti-factualism. Because tech is such a force in how we receive information, it’s reasonable to think that tech can provide a solution. But disinformation is fundamentally a human disease — the agenda of highly motivated people to cloud facts and our own veil of bias and outrage that prevents us from seeing through it.
Tech has little to say about such primal forces. As long as bad actors want to confuse us — and, more important, as long as we’re willing to be confused — we’ll find a way to go online and get misled. To date the best weapon against disinformation (when Elon Musk isn’t eliminating it) is still old-fashioned humans meticulously combing through and offering context. And that doesn’t always work so well either.
The problem here is not a flaw in a platform’s coding; it's a bug in our genes. And all the righteous tech deployment in the world may not be able to edit it out.
[The Quint, USA Today, Neuroscience News and The Chicago Booth Review]
(*AI of course already has a big role in clouding the truth on social media, with the algorithms that shape our feeds responding to our preferences in a way that continually tightens the noose until we hear nothing but our own voices.)
2. LAST THURSDAY, BARELY 24 HOURS BEFORE WAR BROKE OUT IN THE MIDDLE EAST, the United Nations issued a press release.
In it, the organization said that U.N. Secretary-General António Guterres had called for fast-tracking an agreement, to be concluded as soon as 2026, that would set global parameters for the use of autonomous weapons. (2026 is fast in U.N.-world.)
Autonomous weapons, or “slaughterbots,” are defined as machines — drones, tanks or anything else — that can make a decision to kill and then execute on that decision without any human intervention. Just program them, send them off and let ‘em decide what they should target (and when they should stop). And if that sounds horrifyingly ruthless, and makes you wonder how such a machine would make decisions in murky fog-of-war conditions that resist simple data, all I can say is…yes.
But such weapons are coming, and whether it’s war in the Middle East, Ukraine, or another conflict not long off, we hardly want to wait until after they’re used before formulating a policy.
When he issued the call last week, Guterres couldn’t have known the all-out hell that was about to break loose in Israel barely a day later. But his words couldn’t have been more relevant. So relevant, in fact, that he and other concerned parties have been calling for global guidelines on autonomous weapons — a Geneva Convention for the AI age — for five years.
The new call, which Guterres wrote with president of the International Committee of the Red Cross Mirjana Spoljaric, asked for “specific prohibitions and restrictions on autonomous weapon systems to shield present and future generations from the consequences of their use,” noting “serious humanitarian, legal, ethical and security concerns” and describing an “urgent humanitarian priority.” “We urge member states to take decisive action now to protect humanity,” it concluded.
Basically, this is 1944, and how lovely to get a second chance at avoiding another Hiroshima and Nagasaki.
And how tragic that it’s being squandered.
Because most of these efforts have fallen on deaf ears. The countries that most urgently should sign such a treaty, because they have the capability to develop these weapons — the U.S., Russia, China, India, Israel, Iran, Australia, South Korea and the U.K. — are generally the ones least likely to want such codification. So punting is the order of the day. (For a story on the frustrations faced by those seeking regulation of this arms race, check out a piece I wrote for The Post in early 2022, just after the Russian invasion of Ukraine.)
Meanwhile everybody else is speeding ahead. The U.S. Defense Department in late August made a big show of announcing “Replicator.” An initiative conceived to keep up with China, it aims in the next two years to create thousands of new AI weapons systems, usable in everything from planes to ground vehicles to drones. Thousands.
Deputy Defense Secretary Kathleen Hicks (that’s her on the right above) touted the project shortly after the announcement. As The Hill noted of her briefing, “she tasked the audience to imagine self-operating, AI-powered systems ‘flying at all sorts of altitudes doing a range of missions,’ with some of them potentially even solar-powered.” Well at least they’re environmentally friendly.
All this momentum doesn’t mean the objectors have slowed down. Groups like the Dutch Pax for Peace, a massive consortium named Stop Killer Robots and Human Rights Watch have all been mobilizing against the efforts. Earlier this year Human Rights Watch conducted a deep analysis of autonomous weapons and concluded that big trouble was brewing.
“The US pursuit of autonomous weapons systems without binding legal rules to explicitly address the dangers is a recipe for disaster,” said Mary Wareham, the group’s arms advocacy director. “National policy and legislation are urgently needed to address the risks and challenges raised by removing human control from the use of force.”
Proponents of AI weapons note the absence of human error and fatigue when machines are involved. The arguments against them — and there are many — include the fact that such weighty decisions shouldn’t be left to the hallucination-prone minds of AI; it’s one thing to get movie times wrong, it’s another to make a bad bombing decision. Also, wars are ruthlessly efficient enough without having machines doing the thinking in them.
And if you believe that AI fighting AI will reduce human casualties, think again. A whole slew of experts — including former Google chief Eric Schmidt and folks at the think-tank Center for a New American Security — believe it’s more likely to lead to “flash wars” in which countries suddenly find themselves in unwanted conflicts when machines start attacking each other for their own code-based reasons. Conflicts that will inevitably spill into human realms.
Not to mention the whole violation of Asimov’s First Law of Robotics.
Make no mistake, traditional weapons can impose stunning brutality, as the attacks in Israel showed last weekend. Ironically, it was Israel’s over-reliance on technology, via a remote border-surveillance system, that paved the attackers’ way. As four officials told the NY Times, Israel’s strong belief in the high-tech electronic system meant fewer physical assets on the ground — and a smaller speed bump for the terrorists once Hamas drones took out the system.
The lesson here for AI is pointed. Technology is dangerously fallible, and the illusion of infallibility makes it even riskier. And AI, with its implication that the human can be removed from the equation, is peak illusion-of-infallibility.
Yet the future of war remains deeply technological; the probability of weapons making in-the-moment kill decisions is increasing daily. Groups like Human Rights Watch, Stop Killer Robots and Pax for Peace may be sounding the alarm. But in this edgy environment, they seem no match for the world’s top militaries, armed with so many humans who believe in machines.
[The Hill, The Washington Post and The Nation]
3. IF AI SYSTEMS AREN’T ATTACKING EACH OTHER, COULD THEY BE ATTACKING US?
That’s the scenario dangled by Geoffrey Hinton in a “60 Minutes” interview Sunday.
Hinton, you may remember, is the University of Toronto scientist whose research on neural networks, beginning in the 1980s, set the stage for generative AI. In the past decade he put a lot of that insight into his work for Google and won computing’s prestigious Turing Award in 2018. But earlier this year, just as AI was achieving new technological and cultural heights, he up and walked away from Google. The risks, he said, were too great for him — or humans — to simply plow ahead.
In the segment Sunday he didn’t say much he hasn’t said before. But Hinton did lay out his greatest concerns. Perhaps the biggest is an AI starting to modify its own code to defy human instruction. (“I, Robot” has some clever scientists engaging with this too.) From there, of course, all hell-fury could be unleashed.
Hinton’s salient comment: “That’s something we need to seriously worry about.” (Video here.)
He also said, “I believe it definitely understands. And in five years time I think it may well be able to reason better than us.”
Hinton offered an instructive analogy on AI’s black-box issue, the idea that we don’t really know how it works. “It’s a bit like designing the principle of evolution,” he said — the organism then evolves in myriad ways the designer didn’t foresee and doesn’t understand.
He had no specific solution for many of these issues — “I can’t see a path that guarantees safety.”
And continuing to evince what might be called his low-key alarmism, he explained why he thinks turning it off won’t be an option.
“They will be able to manipulate people,” he said. “Because they will have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They’ll know all that stuff and they’ll know how to do it.” This will allow the system to keep modifying itself independently of humans, acting autonomously.
Computer scientists are heatedly split on this last point; Hinton’s Turing co-winner, Meta’s chief AI scientist Yann LeCun, doesn’t share his worry. LeCun thinks that if an AI goes rogue, we can just reprogram it, no problem.
This all is crucial — but academic. Whether machines trained by humans to think analytically could think analytically enough to realize they shouldn’t be trained by humans is an unknowable question, and possibly a self-contradictory one; it has shades of that ‘can God make a rock He can’t lift’ debate from that freshman-year dorm.
So no one is resolving it anytime soon. But all this does point to the idea that if there’s even a slight chance Hinton is right, we should probably not only go slow but build in automatic fail-safes. Something that shuts down AI entirely if code-modification/autonomy begins to develop. Such fail-safes could of course be out-thought by smart machines too, but at least we’ll have a glimmer of hope when they begin seizing control of nuclear storage facilities.
Admittedly it seems wildly far-fetched to contemplate something like this happening in the coming decades. But then, as Hinton could well tell you, it was crazy to imagine ChatGPT back in the ‘80s too.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
AI DISINFORMATION COULD CLOUD THE ISRAEL-HAMAS WAR: -5
AUTONOMOUS WEAPONS: At least they’re…not here yet? -3.5
AI’S DR. FRANKENSTEIN KEEPS TRYING TO TELL US HE CAN’T CONTROL THE MONSTER: -2.5