Mind and Iron: How AI is multiplying Hamas-Israel disinformation
Let's make a hazy situation cloudier! And The White House weighs in on the tech-y future.
Hi and welcome back to Mind and Iron. I’m Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and maitre d’ of this bold restaurant.
If you’re new here, welcome! The goal at Mind and Iron is to bring you a heaping portion of tech journalism seasoned with humanity every Thursday. Change is coming at us so fast — in our media, medicine, politics, education and elsewhere — that understanding how it will affect our lives can be a head-spin. So we’re here to help. To explain the promise and, equally important, to call out the dangers.
Mind and Iron was Isaac Asimov’s original name for “I, Robot” — reflecting how an AI society needs to balance warm humanity with cold machine efficiency — and we can’t think of a better way to capture what we’re trying to do. So please sign up a friend (or yourself, if you’re reading this on the Web).
And since journalism is sadly not free, please consider offering a pledge to support us with a monthly subscription of a few dollars when we move to a paid model. It will help us keep going and ensure you never lose access.
This week, we examine just how AI is messing with our Hamas-Israel heads. When it comes to the Middle East, we may think we're consuming all the important media and filtering out all the irrelevant stuff. But AI is getting really good at confusing us about the difference. And as you can see from the startling images below, the technology is miles ahead of where even the savviest of us sit — with devastating consequences.
Also, on Monday the Biden White House revealed its long-awaited AI executive order. We break down what you need to know and what you can skip in its dense thicket.
And on another, slightly more hopeful note, could gene-editing offer…a cure for HIV?
First, the future-world quote of the week:
“I’ve watched one of me. I said, ‘When the hell did I say that?’”
—President Joe Biden, describing his reaction upon seeing a cutting-edge AI deepfake of himself
Let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
The White House wraps its arms around the AI elephant; why Hamas-Israel disinformation is becoming impossible to sort out; did a San Francisco startup just discover a cure for HIV?
1. WHETHER YOU’RE LIKE ME, DOOMSCROLLING MIDDLE EAST NEWS ALL HOURS OF THE DAY, or simply a savvy news consumer who jumps on at actually healthy intervals, chances are you think you can tell the difference between a fabricated image and a real one. You’ve been doing this too long to be fooled. Even in the fog of war.
At least that’s what I believed — for years. But in barely its fourth week, the Hamas-Israel war is completely confounding me on what is real and what isn’t. And with it, removing any confidence that I know what I’m looking at. AI image-generators are that good. And getting better by the day.
We wrote about the rough places AI-generated disinformation could take us just a few weeks ago, right when the war started. I’m astounded by how quickly it’s become a problem since. These synthetic images are that convincing.
Here are a few pictures from the region that have surfaced in the past few weeks. Scroll through them quickly if you can — not scrutinizing them, just taking them in the way your brain would take in images as you thumb through Instagram. Can you tell what’s real and what’s a concoction?
Israeli refugees:
Hamas leaders on a private plane:
A rally for Israeli soldiers:
A rally for the state of Israel:
Boy in Gaza with dove:
Girl in Gaza with bear:
Apart from the last one, which is clearly labeled AI (more on that stickiness in a minute), these images all leave you in a state of uncertainty. Or at least they did me.
[For the record: the Israeli refugee camp (No. 1) is fake; the Hamas leaders on a plane (No. 2) is real but with AI “upscaling”; the first Israel rally (No. 3) is fake; the second Israel rally (No. 4) is real; the boy in Gaza with the dove (No. 5) is real; the girl in Gaza with the teddy bear (No. 6) is fake.]
To show how confusing it’s gotten, check out these two images from European football, of all places. The Scottish club Celtic recently played Spain’s Atletico Madrid in a Champions League match. Social media was flooded with these two images:
OK, so if you’ve seen enough AI images the second one is pretty visibly synthetic. But the first one is real.
Even as AI continues to gobble up new images so it can improve its mimicry, the blur is already in full effect. Sure, you can linger over the corners of the pictures and Photo Hunt your way to correct answers. But that’s not how most of us consume media. Which means that, thanks to AI, we’re subjected to heaps of disinfo we weren’t exposed to just a few months ago. In an age when it’s impossible to find consensus on what should be done in a crisis, imagine not even being able to trust the basic visual media on which to base that decision.
But two of these cases offer a further twist.
First, the Hamas leaders on a plane/upscaling. The image was shared by Hananya Naftali, a former social-media staffer in the Israeli PM’s office, which makes plenty of folks skeptical already.
Naftali in fact took the image from a legitimate news story that ran in Ynet years ago; he was just using an AI tool to try to increase the resolution. (See the eyes of the figure on the far right for why that wasn’t a good idea.) But the very act of using that enhancement tool makes the whole thing seem fake. When Forbes has to run an article with the headline “Viral Photos Of Hamas Leaders Accused Of Being AI Fakes Are Actually Just Poorly Upscaled Images,” you know clarity has left the building.
And this may be the biggest danger. Because lest you think something like this just empowers bad actors, it doesn’t. It also disempowers honest players. When media gets this confusing, many of us stop believing even totally factual images from legitimate news outlets like Ynet. Social scientists have a name for this — “the liar’s dividend.” And it’s been paying out like vintage AT&T lately.
Second, the fake Israeli rally above (No. 3). This case is particularly vexing. The image is synthetic, as you can see if you enlarge some of the patios on the right.
So someone wanted to make it seem like there was a pro-military parade in Israel that didn’t happen. Which is bad enough.
But it gets crazier. Because after the image was disseminated, a whole bunch of people accused the Israeli government of sharing it. Which it didn’t do. (The actual source is unknown.) But given how upset people got hearing that Israel was disseminating it, it’s not hard to imagine someone in the future conducting a kind of false-flag operation in this vein: instead of framing an antagonist by making them seem to have committed a real-world attack, they frame an antagonist by making them seem to have fabricated an AI image.
This Snopes post did come along and debunk the parade as an AI creation. But who's going to head over to Snopes every time they see an image on social media? And what's to stop someone from using AI to create a realistic-looking Snopes page?
Still, it would be wrong to blame all this on the tech. Also responsible are those who use it. I don’t just mean the bad actors — I mean the consumers of such media. Because the machinating types know what our brains are looking for — who we want to see as victims, what entities we’d like to support, what preconceived notions we come in with. As with that famous invisible gorilla test, we see what we’re primed to see, no matter what’s really there. This is why this stuff can be so effective.
In other words, like so many other cons, AI disinfo doesn’t pull off its deceptions in a vacuum. It has an accomplice — us.
Which brings me to the last image. Even when we know definitively that something’s fake, there’s a risk. How’s that, you ask? Well, researchers have long conducted experiments in exactly this vein. (The UC Irvine psychologist Elizabeth Loftus in particular has done some very cool work on this.)
Essentially, subjects are told off the bat that they’re being shown fake childhood photos along with real ones, and then a fabricated image is indeed slipped in, like a picture of the subject as a kid taking a hot-air balloon ride they never went on or meeting a Bugs Bunny character they never met. When they’re asked later if the event really happened, a large percentage — as many as half — say it did!
Yes, these fictions being mixed in with real photos actually tricks our brains into thinking they’re real memories. So in our current case it may not matter that social-media posters correctly brand an image as AI — our minds may, in the context of all these other news images, internalize it as real.
After all, if our psyche can’t even remember what happened to us in our own childhoods, how on Earth can it know what’s happening to strangers halfway around the world?
2. CAN SOMETHING BE ALL THAT COULD BE DONE WHILE STILL BEING LAUGHABLY NOT ENOUGH?
Like the time you decided not to eat a single bite of dessert for three days yet still didn’t shed those five unwanted pounds.
The White House this week issued its executive order on AI. By now you may have read a bunch about it (damn Monday news cycle). I think the basic takeaway should be, well, this diet analogy. The order went as far as anyone could reasonably conceive it going. Which isn’t very far at all.
Generative AI — machine-learning systems that ingest vast amounts of data and learn to produce convincing new text, images and more — is an incredibly powerful force whose potential even its makers don’t fully understand. So the leader of the free world should probably have some thoughts. And some attempts at regulation — something notoriously never tried for social media.
And so in their 100+ page executive order, the White House authors did exactly what they could in the limited areas where they could (like conditioning funding for scientific research on sufficient warnings).
They asked nicely in areas where they couldn’t do much (imploring biotech firms to be responsible when manipulating biological material, or advising that in the age of AI Congress should pass broad data-privacy protections, something the body hasn’t been willing to do for years).
And they tried to bridge some of the areas in-between (like delegating to the Commerce Dept. the watermarking of AI images in the hope of preventing the kind of disinfo noted in the previous item).
Two days later, British PM Rishi Sunak led an “AI Safety” summit of leaders, researchers and executives in England (at Bletchley Park, the site of Turing’s wartime codebreaking) with some of the same goals. The group came up with the “Bletchley Declaration,” in which 28 countries and the EU pledged to make AI “safe in such a way as to be human-centric, trustworthy and responsible.”
As a response to a moral imperative, the Biden executive order is extremely commendable. As a theoretical framework, it’s clearly well-informed. As something that will have real-world impact on reducing the negative consequences of AI it’s…highly questionable.
First, that’s because the executive order — while it invoked the Korean War-era Defense Production Act — can still be immediately overturned by any future president.
Second, that’s because it left entire swaths of what AI can do — command weapons, seize control of political campaigns, deploy insidiously on consumer apps — almost entirely out of the equation. The very concept of trying to impose a sweeping set of regulations over AI is a little weird given how much the tech is going to infiltrate every aspect of life — it's like trying to announce guidelines on the use of the letter 'e'. And these absences underscored the point.
(The document did require that AI companies screen their tech for potential weapons use, a somewhat squishy ask given that the U.S. government is itself developing a whole slew of AI weapons.)
Third, while the order did, through that Defense Production move, make it technically illegal for a company to violate its dictates, it’s entirely unclear what would happen if one did. As Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell, said, “What's missing is an enforcement and implementation mechanism.” (She also wryly noted, “It's calling for a lot of action that's not likely to receive a response.” Because really, why would an OpenAI or an Anthropic respond, let alone a Google or a Microsoft or a Meta?)
This challenge was best encapsulated by one of the biggest provisions of the order — that any company developing models trained with more computing power than the current generation of AI has to run safety tests and then let the government know the results. More transparency, always a good thing. But then what? There are no real obstacles here stopping companies from rushing ahead.
But I think the biggest issue comes with this: believing government can impose restrictions on a force that people so badly want to adopt in the first place. Because we already like and rely on AI telling us what products to buy, which people to date and how to get home. And I don’t see a huge call for taking down Amazon, Tinder or Google Maps. Soon enough we’ll rely on AI to decide where to invest, which candidates to vote for and what medical treatments to seek (not to mention probably a whole realm of AI assistants and companions). And I just can’t convince myself that bids to guardrail those new deployments will be much more robust than the efforts for the current ones. I realize this executive order is meant as a first step, but to where?
There are some who came away from this week more optimistic; my fellow Substacker Casey Newton wrote potently about how the order could achieve some tangible goals. But given how deeply AI will be embedded in our lives, reining in the money-churning companies that provide it — and by extension reining in their ability to sell it to the various corporations, governments and political campaigns who are also their clients — feels a lot more challenging than just clarifying some bureaucratic lines of command.
“The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section,” is the kind of line that pops up pretty commonly in the order, and it’s reasonable to wonder what all those Secretaries will add up to.
AI isn’t a toxic household chemical. Its harms aren’t plainly evident, and, given the ways it will also make life easier, it’s not something I think people will be in a rush to give up even if they did understand the harms. And if you have any doubt about this, just look at our inability to kick the social-media habit. Or smoking.
And maybe most crucially in all this, AI isn’t smoking — it actually in many cases will help save lives, with predictive diagnostics and elder-companionship and many other use cases. So in addition to the fact that AI covers such a wide swath of human existence, and that it’s being created by powerful tech megaliths with giant lobbies, and that a lot of American citizens and companies want what it’s producing, AI also gives us this giant matzah ball of a factor when it comes to reining it in: we shouldn’t always want to rein it in.
So while the order was as nuanced and ambitious as one of these things can be, reading through it I still kept feeling like it was applying government-bureaucracy thinking — with its requests for transparency and delineations of agency responsibility — to a trans-bureaucratic problem. AI is as powerful as gravity, with all the positives, negatives and immutabilities thereof, yet the fundamental approach here is the same as the one we use to clean up Superfund sites. Don’t get me wrong — there are many, many places a conventional regulatory approach works, and history has proven as much time and again. I’m just not sure this is one of them.
When I was at the Post I remember sitting in an editorial meeting one morning hearing an editor and a few reporters unironically talking about the ways Congress could pass a law to stop misinformation on Facebook.
The notion of Congress as the great lifeline here seemed funny to me, but only to me, and I spent some time wondering why. At least some of it had to do with the fact that many of the people in the room came to tech journalism from a D.C.-government perspective and I came to it from a West Coast culture perspective. It’s tempting to see Congress as capable of stepping in to save us from our worst problems. And technically it could. But then, we have to want to be saved. “But we want to be outraged by what we see on Facebook, that’s why such laws wouldn’t do anything, if they're even passed in the first place,” I said in the meeting. I may as well have been speaking Cantonese.
It’s good that world leaders are paying attention and trying to implement solutions; certainly it’s better than the opposite. And I could be wrong — government may well be the strong guiding hand that keeps AI and civilization on a path of responsible progress. Government does a lot of great things, and in many cases it’s hard to argue with the power of regulation. But somehow I feel that when the great history of AI’s effects is written, its story told of the most glorious heights and darkest lows, government’s role in how all this happened will be the deep-buried sixth paragraph in the ChatGPT response about it.
[ABC and The White House]
3. WELL NOW THAT WE BOMBARDED YOU WITH ALL THAT WORRY, HERE’S A BIT OF FUTURE NEWS THAT COULD BRING A SMILE TO YOUR FACE.
Maybe.
For forty years scientists have searched for a vaccine or permanent cure for HIV. Those efforts have been sadly unsuccessful, in part because the virus hides genetic copies of itself inside cells, allowing the infection to restart. Which means that, while antiretrovirals can stem the effects, they can’t permanently remove the virus. Which is why people with HIV need to stay on meds permanently or risk serious and even deadly flare-ups (a process known as a rebound).
But a San Francisco biotech startup named Excision BioTherapeutics has quietly been running a test to see if gene-editing might be the key to cracking this problem.
As Antonio Regalado of MIT Technology Review reported last week, the company has been using CRISPR, the gene-editing tool, in a bid to essentially remove HIV from an infected person’s genes and cure them permanently.
Many of the tests currently run with the CRISPR technique (which makes editing gene sequences relatively cheap and easy) aim to change a genetic code to prevent future infections, not try to cure an already-present virus. Five years ago this month Regalado reported that a Chinese scientist had used CRISPR to edit out a gene in the hope of making babies resistant to HIV; the move caused a global uproar and landed the scientist in jail.
But because HIV leaves behind those hidden copies of itself inside genes, it also functions in some ways like a genetic disease, allowing it potentially to be subjected to the CRISPR technique. (Excision “added the gene-editing tool to the bodies of three people living with HIV and commanded it to cut, and destroy, the virus wherever it is hiding,” Regalado wrote.)
At least that’s the hope. We don’t know if this works because — get this — Excision isn’t saying. The company has run the test on at least three people, and we don’t know if it’s successfully removed the virus, because researchers haven’t disclosed the results. Apparently only two of the three subjects have had enough time pass to show a disappearance of symptoms, so they're holding off.
The company at least has said the subjects suffered no adverse effects from the gene-edit. So there’s that. But my dude, are they cured??? No one is saying.
(A senior vice president for the company, William Kennedy, said the results won’t be revealed until sometime next year.)
Given that 39 million people globally are currently living with HIV and that it continues to affect so many people in places where access to regular medicines is limited, this is no small cliffhanger.
By the way the edit doesn’t even need to fully cure the disease to be worthwhile; as Kamel Khalili, a Temple University professor who co-founded Excision, said, the treatment could be useful even if it just caused “a significant delay in the rebound.”
So all of this provides hope. Or maybe it will one day, if we can actually find out what happened.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
AI DISINFORMATION IS TAKING OVER ISRAEL-HAMAS COVERAGE: Sheesh. -5
THE WHITE HOUSE WANTS GOVERNMENT TO GET IN THERE ON AI: +1
THOUGH FAR FROM PROVEN, A GENE-EDITING TECHNIQUE COULD PROVIDE THE KEY TO AN HIV CURE: It’s something, anyway. +3