Mind and Iron: "We are in trouble as a democracy" because of AI deepfakes
A leading expert has a warning. Also Donald Trump, now with a lot of AI Black supporters
Hi and welcome back to Mind and Iron. I'm Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and lead bellhop of this tech-news hotel.
If you're new here, welcome! And if you're old here, welcome too! Please sign up a friend. Our goal is to cast a lens on the AI-flavored future, making sure what Big Tech is cooking up for us has a strong dose of the human — aka, what didn't happen in the years before social media. Please consider pledging a few dollars to this crucial mission.
This week's issue is just the slightest bit shorter because I've been prepping for the SXSW tech conference, where I'll be hosting a panel Friday on the future of our hearts and brains in the age of AI. Should be a robust discussion, with some heavy hitters like the mind savant Gary Small. If you'll be in Austin please come say hi!
AI deepfakes have been everywhere this week. They were at the center of a very troubling story of fake explicit photos at a Beverly Hills middle school. They cropped up a few days later in an effort to make Donald Trump seem like he was surrounded by Black supporters. The way things are going, we’ll soon see Taylor Swift, Vladimir Putin and LeBron James doing shots at a college bar while arguing about their March Madness picks.
So we’ll catch you up on those stories and then talk to the man you most want to hear from when the subject of deepfakes comes up: Hany Farid. Farid is a Berkeley professor who studies and knows AI deepfakes better than pretty much anyone; he has literally been warning us about them for years. Collectively, he says, what we've seen add up to some pretty gloomy portents. But Farid hasn't entirely despaired yet. Scroll down for a very enlightening conversation.
Also, last week an autonomous air taxi made the first known flight in our Earthly world, between two cities in southern China. It conjures a glittering future — "The Jetsons" meets Waymo. But just how imminent?
First, the future-world quote of the week.
“It will get to the point where it will be exceedingly difficult to tell AI content without real interventions…. And if pretty much anybody can create content that is this deceptive we are in trouble, as a democracy and as a society.”
—Farid on the coming deepfake detonation
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Deepfakes, deepfakes everywhere; an expert weighs in; robo-flying around the world
1. IF YOU WERE DESIGNING A REAL-WORLD CASE STUDY OF AI DEEPFAKES, you couldn't do better (worse) than what's happened in the past week.
First came word that a Southern California school district was investigating a case at Beverly Vista Middle School in Beverly Hills. The student(s) there allegedly used AI to superimpose the faces of an undisclosed number of classmates onto explicit body images — the tech isn’t hard to operate — to disturbing and bullying effect.
Part of the shock here is that it happened in a middle school. But again, we’re not talking about sophisticated tech tools. All you need is the bad intention. There don’t seem to be any laws on the books to stop this either, even though the harm is pretty much identical to a case of real non-consensual photos being distributed (which is obviously a crime). At this point the question is not if these AI manipulations become the next front in the cyberbullying war — it's when. (The school district said it took disciplinary action, for whatever that's worth.)
The second deepfake incident came when several players were revealed to be spreading images of presumptive Republican nominee Donald Trump surrounded by Black voters who in fact never surrounded him. As the BBC reports, there are now “dozens of deepfakes portraying black people as supporting the former president” created by a slew of personalities not affiliated with the campaign, all in the hope of “encourag[ing] African Americans to vote Republican.”
One of the images, by a Florida talk-radio host named Mark Kaye, had “Mr. Trump smiling with his arms around a group of Black women at a party,” the outlet noted. Kaye shared it on Facebook with his one million followers.
Donald Trump’s Black support is actually up — but is still dismally low. (It’s risen from 4 percent in 2020 to 23 percent now.) But therein lies one of the interesting lessons in how these AI manipulations work. The most effective don’t tell an outright howler. They just play on something real to amplify a fiction.
Also notable is what Kaye told the BBC. "I'm not out there taking pictures of what's really happening. I'm a storyteller,” he said. This is emerging as a standard defense for deepfakers — “we’re just having some fun, don’t get your collar all wrinkled.” But of course in many cases the deepfaker doesn’t prominently label their images as “storytelling”; that would drain them of their power. So some people are left to think they’re legit.
Besides, the fake images could still have an effect on how we perceive a candidate no matter how they're labeled; some pretty serious research suggests even when we know an image is fake it can get integrated into our brains as real. I mean, you now have the above picture in your head along with hundreds of other legitimate ones of the former president. It’s logical they’d get jumbled up, and by the time November rolls around, your mind won’t separate out this one as fake.
The concerns over weaponized AI cleave in two. One type happens when bad actors make someone pure look problematic. The other comes when bad actors make someone problematic look pure. In these two instances of the past week we’ve had both — a compendium of how AI media could be weaponized both against and on behalf of its subjects, and of how AI media, for all the cool creative possibilities it offers, can be deployed to harass kids and misrepresent candidates.
So what can actually be done to stop it? We’ll have more in our talk with Farid in the next item. But two things do come to mind. On the election, the FEC is moving, if slowly, to stop this.
Yes, campaign ads can seem to enjoy…a wide berth in painting the truth. But there are limits. We don't allow outright libel in campaign ads, and we shouldn't allow outright visual misrepresentations of reality either. I think we'll get there, but painfully. Also, how do you enforce it? We still don't know who released some of the Trump images. And we couldn't punish the party that matters most — the candidate — even if we did.
The kids situation is even more complicated and book-worthy. The young live online, and increasingly online is AI. But that doesn't mean rules can’t be put in place and enforced — by tech platforms, by schools, by governments.
Requiring tech companies to age-verify those who use the tools hardly seems like a stretch, even if it will only be partly effective. Requiring consent for image-manipulation of minors is a bridge further, but hardly unconscionable either.
Nor does the First Amendment — a common Big Tech argument — really come into play here. We don’t allow children to buy a bottle of scotch, and nobody complains that’s a curtailment of rights. “Technology, including AI and social media, can be used incredibly positively, but much like cars and cigarettes at first, if unregulated, they are utterly destructive,” said Beverly Hills Unified School District Supt. Michael Bregy after the scandal broke. He called for a host of laws that “strictly regulate evolving AI technology to prevent misuse.”
This seems both reasonable and doable. None of these measures — election guardrails, child-protection laws — would stop the creativity the tech companies say they wish to foster. They would not even, I don’t think, take a big chunk out of their profits. And they just might save an election or a child’s mental health.
2. I FIRST INTERVIEWED HANY FARID IN MID-2022. Midjourney, Dall-E and Stable Diffusion were just starting to etch their way into our consciousness, and a video deepfake company had created a sensation on “America’s Got Talent.” He was concerned about how this tech could be misused, particularly as it spread to other media.
“We’re quickly entering a world where everything, even videos, can be manipulated by pretty much anyone who wants to,” the Berkeley professor said in my story. They would have the ability to “defraud, spread disinformation and disrupt society.”
A professor in both electrical engineering and computer science, Farid has been so concerned about deepfakes he’s keeping a running tally of how they’re being deployed in the 2024 presidential election — a useful if disturbing archive. (He includes the recent Trump photos — “our analysis has detected patterns in this image that are consistent with the generative-AI service Midjourney.”) Farid also created an AI version of Anderson Cooper that he demonstrated on-air with the host.
With disinformation spreading into all these realms — and with the 2024 presidential election officially kicking into gear this week — I thought it was a good time to catch up with Farid again. Just how fast is the tech expanding? What are the greatest dangers as he sees them? And do we have any hope of stopping them? The conversation has been lightly edited for brevity and clarity.
Mind and Iron: It’s been a very eventful 18 months! How much better is the tech getting, and how quickly?
Hany Farid: I remember when the first version of Dall-E came out [in 2021] and we thought it was magic. And now you look back at some of those images and it’s pathetic compared to the latest versions — it’s stunning the images we’re seeing. On the audio side they’re incredible at cloning a person’s voice — upload 30 or 60 seconds of anyone’s voice and it’s got it. A couple years ago when people talked about deepfakes I used to say, ‘If you don’t get the voice right, it’ll be fine; Joe Biden in my voice, who cares?’ But once the voice got over the uncanny valley, it’s a gamechanger. You don’t even need video. Audio and a single image, that’s really all you need.
M&I: I think a lot of people feel that the telltale signs will save us — the extra finger or the lips that don’t close or anything else that gives away an image or video as AI. Can we tell?
HF: It depends who ‘we’ is. There is a dangerous level that I hear from people of ‘oh I can tell.’ And the short answer to that is ‘no, you really can’t.’ We [my Cal team] can tell. But we have a lot of computational skills and tricks up our sleeve. And it’s getting harder for us too. I do this for a living and I struggle with images from Midjourney. Images are hard to tell. Audio is very hard to tell. With video if it’s long enough you’ll eventually see something. But most people are looking on a small device and moving pretty fast.
M&I: And it’s just a matter of time before that changes too.
HF: The endgame is that video will also pass through the uncanny valley. It will get to the point where it will be exceedingly difficult to tell AI content without real interventions.
M&I: And once you can’t tell, the dangers multiply. How serious are the risks in your view?
HF: It’s exactly what you expect — as the technology gets better and easier to use and cheaper to use, there are novel ways to do bad things with it. We’re seeing that everywhere, in terms of scams and fraud and non-consensual sexual imagery, disinformation campaigns and the [fake Joe Biden] New Hampshire robocalls. All of these are on the rise. When it comes to scams and frauds people are losing a lot of money. When it comes to non-consensual sexual imagery it doesn’t even matter if you can tell. The harm is the harm.
M&I: As scary and terrible as it all is the disinformation is what I get stuck on. It goes to the very underpinnings of our freedom.
HF: If pretty much anybody can create content that is this deceptive we are in trouble, as a democracy and a society. Because it will be very easy to manipulate people with disinformation.
M&I: And even then you don’t even need to convince that many people for an election to flip.
HF: Exactly. Penetration can be a few percentage points and you’ve been successful. If I can change tens of thousands of votes in a handful of counties I can change a presidential election. And if you don’t think that’s possible from a state-sponsored actor, an organization or even an individual, you’re not paying attention.
M&I: Law enforcement doesn’t really seem to stand a chance either.
HF: Well look at what happened in New Hampshire. Thousands of people get a call from someone they think is Joe Biden telling them not to bother voting. A New Orleans magician created it for $150. It took him 20 minutes. But here’s the thing — it took the New Hampshire Attorney General two weeks to figure out who it was. Meanwhile we’re off to Super Tuesday. When things break on Election Night you can come back later with civil and criminal penalties. But the election is over.
M&I: The sheer breadth of the potential damage is also kind of staggering.
HF: Yes. We have two billion people voting this year in 70 elections around the world, and we’ve already seen problems — in Slovakia, in Bangladesh, in India, in Mexico, in Brazil. In Slovakia last year there was fake audio of a pro-NATO candidate [saying he’d rigged the election] and two days later his pro-Putin opponent wins. That has scared the bejesus out of folks in Brussels, and rightly so.
M&I: I suppose it would be…naive to believe that this will be stopped by the tech giants that platform this media.
HF: If anyone thinks Meta is on top of this there’s a bridge I want to sell you.
M&I: How does the First Amendment come into play here? It seems both a hard climb and a slippery slope to start limiting how people express themselves with what will soon be basic digital tools.
HF: It’s a tricky issue. We have a very vigorous First Amendment, and we should, it’s good, we want that. We can’t just say ‘you can’t create deepfakes of Joe Biden or Donald Trump.’ That would be a real problem and I’d be opposed to that. On the other hand we’ve been seeing dangerous disinformation campaigns around candidates. And now you also have it as a defense. Roger Stone could be heard saying he wanted to kill sitting senators. But he just dismisses it as AI.
M&I: The liar’s dividend. [The idea that if there are enough fakes, people stop believing the real thing]
HF: Exactly. I don’t need to poison the whole dish of M&Ms. I just poison one and then you don’t eat any of them. Look, it’s not the deepfakes that are scaring me — or not only the deepfakes. It’s that we’ve lost so much trust as a society in media, in government, in scientists. I could do my analysis on a piece of AI, you could publish that analysis, but if one in four people believe that the FBI was behind January 6, then it doesn’t matter that I’ve done the math and you’ve done the reporting. And it’s actually worse in much of the rest of the world — at least the U.S. has a robust press.
M&I: Tell us something that won’t make me seek out the nearest cliff.
HF: The good news is when I have conversations with legislators it’s not like 20 years of social media where they said everything is fine — there’s a genuine sense of understanding what this means. There’s just not any sense of what to do about it.
M&I: I’m not sure that did it!
HF: I guess what I would say is it’s a double-edged sword. I’m not saying there aren’t creative and fun uses. I can create a digital version of Anderson Cooper and do interesting and cool things for my movie. But I could also spread disinformation about Anderson Cooper. It goes both ways. And it’s our job as scientists and media to figure out how to harness the power for good and encourage people to do the same while mitigating harm.
M&I: Labeling, too?
HF: Absolutely. Efforts like the CAI (Content Authenticity Initiative) and others that are working on the industry side to say we need to get serious about watermarking real media. We also have education, where we can teach about good digital safety like we teach about healthy eating. Because the tools will only get us to a certain point; the rest is on us to be better digital citizens and not be vulnerable or gullible. The bottom line is we can’t pretend like this is the year 2000 and we are naive about tech. We know the bad things are coming. So let’s stop pretending and start doing something.
3. FOR A SHEER CONTACT HIGH ABOUT THE FUTURE, watch this video of two electric planes, known as eVTOLs, take off for a 30-mile flight between Shenzhen and Zhuhai in Southern China, to a patriotic swell of music that would make Roland Emmerich blush.
Flying cars are the Dippin’ Dots of transit — always imminent, never happening. But now they seem to be happening. Slightly. Maybe.
eVTOLs are Electric Vertical Takeoff and Landing crafts, a kind of Tesla-meets-helicopter concept that engineers have been working on for years. Powered by battery and without a need for runways, they’re supposed to revolutionize flying and perhaps transit itself. (This video about the Shenzhen-Zhuhai flight is filled with all kinds of buzzwordy terms that may not exist, like “vertiports” and the China-favored “low-altitude economy strategy.”)
The idea of people whisked aloft across stretches that are too traffic-jammed or water-separated to drive but too short for large fossil-fuel jets is an appealing one. And some pretty hefty names have been applying their muscle.
The American eVTOL company Joby Aviation (they bought an Uber subsidiary a few years ago) and German counterpart Volocopter both conducted demo flights in New York last November (with pilots). Volocopter says it wants to begin commercial flights in this country as soon as next year. Joby has been ramping up production plans, this week announcing that it has bought an Ohio facility that will let it mass-produce its craft. Volocopter last week gained regulatory approval to manufacture them in Europe.
There's a host of stuff that they don't tell you, though.
For one thing, the companies are only midway through the FAA and EASA certification process that would allow them to fly in the U.S. and Europe. Engineering is not simple with these craft — it's hard to make a battery long-lasting enough to put in the sky but light enough to fly. (I wrote about a Vermont company a few years ago that was developing eVTOLs for transporting cargo instead of people. A more realistic use case — and even that’s not easy. They’re also ramping up production with an eye toward certification.)
And autonomous aircraft come with added safety stakes. It's one thing to get stuck at an intersection because the car erroneously thinks it's supposed to stop, as Waymo cars can. It's another to drop right out of the sky. Would you get on a plane that had no one in the pilot seat?
So if you're like me and want to imagine a sci-fi-ish future in which we swipe for planes like we swipe for Ubers, hop to another city like we're jumping in a cab to the supermarket, and generally live in a world where the sky is filled with little buzzing flying machines like gnats over a Minnesota lake, you can let your imagination run wild from all these developments. Just don't expect mass adoption to happen before the 10th birthday of your unborn grandkids.
[Axios and Aviation Today]
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. But it’s a new year(ish), so we’re starting fresh — a big, welcoming zero to kick off 2024. Let’s hope it gets into (and stays in) plus territory for a long while to come.
AI DEEPFAKES HAVE BEEN SEEN EVERYWHERE FROM MIDDLE SCHOOLS TO THE PRESIDENTIAL ELECTION: Gulp. -5
THE TECH AND BAD ACTORS ARE MULTIPLYING FAST ON THESE DEEPFAKES: But better tools and education are on the way too. -2
eVTOLs ARE STARTING TO TAKE OFF: Slowly. +1.5