Mind and Iron: The Taylor Swift deepfake train is only starting to bear down on us
Also, what a Winnie the Pooh wobble tells us about the future of creativity. And tech solves...wine counterfeiting?
Hi and welcome back to Mind and Iron. I'm Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and chief architect of this Beaux-Arts building.
Tech and the people peddling it are tearing through our world, knocking it down and reconstructing it in their image. So every Thursday we tell you what they’re up to and how we might view what they’re doing. Technological progress can save lives. It can also kill spirits and bodies. Mind and Iron is here to sort through it all. Please consider supporting our important and independent mission.
We should also mention that we’re working on some new channels and planning some branded M&I speaking engagements — exciting! More on those in the next few weeks. In the meantime we’ll keep giving you all the future-world news you crave.
In the past week Taylor Swift AI became a big story and then a bigger story and soon it will be a monster lurching toward your favorite city — because, really, we’re just at the beginning of this synthetic-media craziness. Details within.
Also, the strange tale of a Walmart Winnie the Pooh and how AI got mixed up in all of it. And finally, wine counterfeiting is a big deal, costing billions per year and making your oenophile brother-in-law really mad. But somehow technology can help via something called…digital olfaction? We sniff it out.
First, our future-quote of the week. It comes from a mental-health professional, speaking about the dangers of weaponized AI that the Taylor Swift story highlights.
“It's not just famous people. This is happening to your normal, everyday person too."
—Psychotherapist Stephanie Sarkis on why synthetic media poses such a threat
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Deepfakes killing us slow, out the window; Winnie Walmart Whac-a-mole; when technology gets a nose
1. TAYLOR SWIFT, ELON MUSK, DONALD TRUMP AND THE NFL COLLIDING IN A STORY OF DEEPFAKES, THE ELECTION, CONSPIRACY THEORIES AND THE CULTURE WAR — ARE WE SURE AI ISN’T ALREADY RUNNING THIS SIMULATION?
I was ready to put a bow on the Taylor Swift story after our item last week about the terrible deepfake attacks, in which bad actors of course created and spread explicit images of the star. 404 Media had unearthed a Telegram group where a Microsoft image-generation tool was used, and Swifties flooded social media to counter the attacks with legitimate content. That did what good human countermeasures to bad actors operating unchecked often do — a lot but not enough. And we would grudgingly move on.
It turns out we hadn’t even gotten past the prologue.
First X management went a little nuts. After leaving the images up for 17 hours, on Monday it didn't let us search for Taylor Swift at all — suppressing legitimate content to catch the illegitimate kind, seems like a good plan — before on Tuesday restoring access and saying it will be "vigilant" in removing future non-consensual content…after it appears. Keep swinging to extremes; you'll average out.
Meanwhile Microsoft frantically tried to tweak the tool, called Designer, so that it yields only generic faces and not celebrities. "Yes, we have to act...it behooves us to move fast on this," Satya Nadella told NBC News' Lester Holt, seemingly without irony; it's only been a year since people began warning about the ways large language models generate proprietary material. By the time his interview aired, the Swift images had been viewed at least 47 million times.
No one in charge — and I mean no one — comes out of this looking good.
Not Musk, who after years of engaging in libertarian-on-speed toxicity careened all the way over to free-speech-repression — and who after enabling all this damage now is hiring 100 people to moderate content...after previously gutting the same department in favor of crowdsourced “notes.” You can't make this stuff up (but if you did you probably could post it on X).
Not Microsoft, which actually rolled out this tool recently as part of Microsoft 365 with porous safeguards despite tens of thousands of non-consensual explicit deepfakes shared across platforms dating back to 2019 — and which finally woke up to blocking deepfakes only when it happened to the most famous person in America. And even then it may only really be blocking them for celebrities, leaving gaping holes for what happens in high schools, workplaces and any other venue where toxic people exist. But hey, sales.
Not the U.S. government, which amazingly still doesn't have a federal law banning non-consensual explicit imagery despite how familiar these attacks are; this happened to Scarlett Johansson more than five years ago. (She eventually gave up, saying it was a "useless pursuit, legally.") A bipartisan trio of senators this week introduced a bill allowing victims to sue creators. We'll see if they do better than the congressman who attempted to get a similar bill passed eight months ago. Even if they do, it's unclear how such a law would have teeth in the Internet’s dark corners, where it's hard enough to even know what tool is being used, let alone which villain is using it. Who, exactly, would Taylor Swift sue in this case?
And not AI itself, whose corporate overseers keep arguing for its life-saving utility but somehow keep getting stuck in situations like this. This is both an optics problem and a substantive problem. If your world-transforming technology can't even stop something as basic as content-swiping from the country's most prominent newspaper or appropriating the body of the country's most famous celebrity, you might want to take a look at just what kind of program you're coding.
And of course as massive a problem as explicit content is, it's not deepfakes' only hazard. Those include —
Dead celebrities getting revived without permission, prompting outrage and family lawsuits, as it did recently with George Carlin. Political misinformation on the edge of a precipice, just waiting to tip into the sea. The shift to video deepfakes, a whole other bowl of scary. And the very nature of truth — of distinguishing between the living and the dead, the real and the fake, the world as it exists and the world that vested parties conjure up in a puff of evil-magician smoke to pretend exists — now getting shot into an ocean of fictions so deep the craftiest submarine couldn’t touch it.
It's exhausting always rooting for the anti-hero.
This would all be bad enough no matter the personality. But the particular celebrity involved makes for a much more worrying symptom.
Because let's call the Taylor Swift deepfake what it is — the dehumanization of someone seen as a political threat.
The 47 million views and their attendant glee are happening in the same helix as Swift's potentially race-changing endorsement of Joe Biden; as her message of feminist empowerment to millions of devoted Eras-concertgoers last summer; and as her high-profile romance with Kansas City Chiefs tight end and famous Pfizer spokesman Travis Kelce. Democrats, feminism, science. A trifecta of a target, and those who fire the deepfake arrows, as creators or as sharers, are only too happy to hit it.
AI deepfakes are of course bad whether they hurt one person in a small town or the most famous woman on the planet. But the latter comes with an added dose of truth-subversion. The Swift deepfakery and attendant right-wing conspiracy theories of recent weeks are probably a bad sign for Democrats (though the Trump camp’s apparent decision this week to run against Taylor Swift might backfire given all those easily mobilized Swifties in Florida and Ohio). But really it's just a bad sign for democracy.
Because beneath the hype of the latest breathless headline, what’s really unfolding with these increasingly slick deepfakes is a society in the throes of trying to figure out how to keep all the bullies, bad actors, reality-manipulators, petty criminals and outright treason-seekers from detonating H-Bombs, after two decades of the slingshots of leaked pics and Facebook rumors. Thanks to AI, every scoundrel with the notion can now become a tyrant by flicking a few buttons; we have given keyboard warriors the nuclear codes. And there's no SALT treaty coming to stop them. We are building a system that laws can't fix, that social-media platforms don't know how to fix and that software giants don't want to fix.
In response, most of us ordinary humans can just cling to that timeless reed of the dispossessed: hope the weapon doesn't get turned on them.
[NBC News, 404 Media, The New York Times, The New York Post, Axios and AP]
2. THIS STORY SEEMED LIKE A SLIGHT ONE WHEN I FIRST HEARD ABOUT IT — AI CREATED A PIECE OF WINNIE THE POOH CERAMICS THAT DIDN’T EXIST? Not to go all Eeyore, but ohhhh-kayyy.
But as I got into it I realized something juicier was going on.
Here's the deal. Last week a few people browsing online noticed a super-cool Winnie the Pooh crockpot (I guess it’s super-cool? I don't really know Winnie the Pooh cookware). They went on Walmart’s Pooh corner to try to order it, but then found out it didn't exist — it was almost certainly an AI creation.
(The thing is pretty cute, kind of sumo-wrestler chic, with that nifty belly-button dial. Wait, did Winnie the Pooh even have a belly button? Or is it like Curious George and the whole tail thing? This feels like a question for ChatGPT.)
Anyway there was a little bit of confusion, a little bit of wistfulness (“That is so cute! If it was not fake it would be mine,” said a Reddit user), a little bit of Internet annoyance. And the world went back to its, um, corner.
But the story raises some interesting questions (beyond ‘who the heck has time to get annoyed about Winnie the Pooh crockpots?’). First, how is it that AI can dream up a product more desirable than anything a whole team of Disney merchandise specialists came up with? Second, given that it could do that, why are all those Disney merchandise specialists still employed?
Now, it's possible the dynamic at work here is just novelty. The image-generator came up with something humans never did and that attracted attention among a few Internet hobbyists. Doesn't mean it would actually sell or endure.
But the idea of an AI creation even temporarily becoming more popular than something people spent decades iterating does make you wonder what the humans are bringing to the table. It also makes you wonder if some executives will be asking the same question.
(One Reddit user in r/Winniethepooh: “Dear Disney merch executives, Get on this, please. Yours truly, All of us.”)
At the very least it conjures that dystopian future-employment scenario where AI will come up with all the principal ideas and we humans will just get woken from our torpor to fiddle on top of them every now and then. Either way, not a strong check mark in the human column. We should always remember we’re stronger than we seem and smarter than we think. Except when we’re not.
In fact, we may even need a term for this, the idea that AI will stir envy in humans by slowly doing something better than us. Creepfakes? AIspirationalism?
Not that we should get too carried away with the automation implications — I mean, it's a Winnie the Pooh crockpot. It's not like AI invented a personality that can melt the Internet. (You still need actual humans like Taylor Swift and Travis Kelce for that.) And certainly art would be a strong word here; Jeff Koons is not placing any emergency calls to a career coach.
What this incident does, though, is recontextualize the question away from one we've been asking a lot (what can AI do) to one we should maybe be asking more often (what are humans doing that AI can’t). For decades whole swaths of creative workers got paid to come up with precisely stuff like this Pooh crockpot. Presumably they did it well. But now AI, like a pack of woozles, is sneaking up to swipe their honey.
We tend to believe deep down that a lot of commercial-creative jobs will be spared AI’s sickle — a Pew Research survey this summer found that interior designers, for instance, had only “medium exposure.” Well, maybe creative jobs will be spared — but the goalposts seem to be moving back on how we define that creativity.
The AI age may not deprive us of high-level creative jobs. In fact, it may even put a bigger premium on them. But little moments like this show how merciless it might be on all of us who don’t level up.
3. TALK IN TECH CIRCLES HAS ABOUNDED LATELY ABOUT WHAT MAKES A MACHINE-INTELLIGENCE ELITE.
Is it its ability to trick a person into thinking it's human? Its ability to write a novel? Its ability to draft a business plan?
Allow a new thought to enter the chat: its ability to smell.
OK, no one has proposed that as a benchmark — yet. But seeing a new application, I was struck by how impressive and unique and, yes, human such a computer skill would be. Haptics can do touch, pattern-recognition is a form of sight, Shazam has been listening for more than two decades. We accept all of these skills as fundamentally machine-based. But a computer being able to smell? Remarkable. And unnerving during those early-morning Zoom meetings.
The application in question is wine fraud. I'm far from the land of oenophilia, but wine fraud is apparently a massive problem. Since labels are easy to counterfeit and taste can be subjective and elusive, bands of counterfeiters make the rounds, blending grapes with chemicals to fake a pricey vintage. And many buyers are none the wiser. A good counterfeit job requires experts to suss out, and how many of those can be running around sniffing and sipping wine every time someone buys a bottle?
Also, unlike physical art, a wine can't be photographed and sent to experts halfway around the world for inspection. To prove that a wine is legit, there’s only one method: getting an expert in the room.
Ah, but what if your computer can be that expert? A Forbes story this week notes technologies that have begun to do this — to smell. Known as “digital olfaction,” the tech essentially uses sensors to translate odors into unique codes, then runs those codes against a very large database to determine what it has encountered. Voila. A sense of smell.
This seems like a cheap parlor trick, until you realize that it’s also essentially how our sense of smell works — we process a whole bunch of data when we walk into the house and the delicious odors hit us, then subconsciously scan our mental database to match all those smells with a vegan stew.
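Mechanically, that matching step is just a nearest-neighbor lookup: encode the sample as a vector of sensor readings, then find the closest known fingerprint in the database. Here's a toy sketch in Python — every scent name, sensor value and vector length is invented for illustration, not drawn from any real digital-olfaction product:

```python
import math

# Toy odor "database": each known scent is a fingerprint vector of
# invented sensor responses. Real digital-olfaction systems use far
# more channels and proprietary encodings.
ODOR_DATABASE = {
    "authentic_bordeaux_2005": [0.82, 0.34, 0.11, 0.67],
    "counterfeit_blend":       [0.40, 0.71, 0.55, 0.20],
    "vinegar":                 [0.05, 0.95, 0.88, 0.02],
}

def distance(a, b):
    """Euclidean distance between two fingerprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(sample):
    """Return the closest known odor and how far off the match is."""
    name, fingerprint = min(
        ODOR_DATABASE.items(), key=lambda kv: distance(sample, kv[1])
    )
    return name, distance(sample, fingerprint)

# A sensor reading close to the genuine vintage matches it best.
label, err = identify([0.80, 0.36, 0.13, 0.65])
print(label, round(err, 3))
```

The real systems presumably add far more sophistication (noise handling, learned encodings, confidence thresholds), but the core move — reduce a smell to numbers, then compare against a reference library — is the same.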
Forbes notes a startup named Aryballe that is advancing this kind of odor recognition, potentially for deployment in a whole bunch of cases where recognizing scents would be beneficial, from unsolved crimes (don’t automate the dogs out of jobs!) to retail marketing and its get-more-people-to-buy-stuff pursuits.
The Forbes contributor, Bernard Marr, suggests pairing Aryballe’s tech with wine detection. If an AI system can scan thousands of data points of legit wine and thousands of data points of counterfeit wine, it can make a pretty good determination of what it’s smelling.
“In the hands of consumers, AI-powered devices could offer real-time verification of wine authenticity,” he writes. “This empowerment would disrupt the counterfeit wine market, often targeting unsuspecting buyers with sophisticated fakes. Moreover, it would foster a new culture of informed connoisseurs who can rely on technology to guide their choices.” Paul Giamatti in “Sideways” would be appalled.
The idea of using AI to stop wine counterfeits certainly holds plenty of appeal from a financial standpoint. Philosophically there is something sweet (dry?) about it too. The Taylor Swift episode shows how machines thinking and moving quickly can allow criminals to create material that hurts innocent people. It’s nice to know that, sometimes, innocent people can use machines that think and move quickly to stop the criminals.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. But it’s a new year, so we’re starting fresh — a big, welcoming zero to kick off 2024. Let’s hope it gets into (and stays in) plus territory for a long while to come.
THE WEAPONIZATION OF CELEBRITY AI TO EXPLOIT PEOPLE AND FURTHER POLITICAL AGENDAS IS UPON US: Fan-tastic. -6.5
AI CRAFTS SEEM BETTER THAN THE REAL THING: Cool for collectors, bad for jobs. -2
COMPUTERS THAT CAN SMELL: They can sell us stuff, but they can also sniff out the bad guys. +1.5