How the AI message Taylor Swift embedded in her Harris endorsement can change media
Also, the new Hollywood movie that could forever alter how we think about machines
Hi and welcome back to another spiky episode of Mind and Iron. I'm Steven Zeitchik, veteran of The Washington Post and Los Angeles Times and lead engineer on this newsy soundstage.
The world is changing fast — how we work, live, eat, socialize, vote, play, pray, buy and get informed are all evolving. Why not have a weekly guide to what's happening on all these fronts? Please consider supporting our mission so we can stay independent.
It's been an intensely newsy week here in Futureville, so we're going to backburner for a minute a couple of the big pullback topics we've been poking around on, and instead swing you through current events.
First, on Tuesday night Taylor Swift decided to take the most scrutinized moment of her career and turn it into an awareness-raising opportunity on the dangers of AI. This was not accidental, and its impact won't be small.
Also this week, Strawberry — OpenAI's platform that can allegedly reason — will now be coming to consumers...in two weeks???
Finally, also in two weeks, one of the most influential modern movies on AI — an animated film called "The Wild Robot" — will be released. It could forever alter the trajectory of how we think about machines and machine-intelligence companions — i.e., the defining digital dynamic of the coming decade. We've seen the movie and can give you the skinny.
First, the future-world quote of the week. It comes from — who else?
“It really conjured up my fears around AI, and the dangers of spreading misinformation...The simplest way to combat misinformation is with the truth.”
— Taylor Swift, breaking through the haze in her Kamala Harris endorsement
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Taytay’s AI crusade; Strawberry unexpectedly ripe for the picking?; the taming ‘Wild Robot’ could do
1. IT WAS THE SHOT HEARD 'ROUND THE TAYLORVERSE.
Or universe. Yes, the country's most famous celebrity endorsed Kamala Harris. But that was kind of inevitable, no? The real news, at least from this future-lensed angle, is how she dwelled on AI in order to do it.
The most famous person in America, who doesn't say a word without closely weighing its impact on the public, went out of her way to spend a good chunk of her most anticipated pronouncement in years on what machines generate.
The full comment, to refresh:
"Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth."
That she felt the need to take this opportunity to shine a light on AI is a logical strategic move. Swift has been deepfaked again and again and again. And really again this campaign season. (The whole ersatz "Swifties for Trump" movement, which of course isn't one.) These deepfakes clearly trouble Swift on a personal brand level, as they should. So she's using the moment to spread awareness.
What does such a moment mean, though — where will it take us? Can a light shined by the likes of Swift change our cliffside trajectory?
For that let's take a step back first to where we are on the subject of AI deepfakes, aka the no-exaggeration gravest threat to truth and democracy we’re currently facing. Let's start with the legislative front.
Activity there has been abundant — everyone from AOC, who has something acronymed the Defiance Act that makes it easier to sue explicit-image deepfakers, to Ted Cruz, whose Take It Down Act targets platforms that air revenge porn. States are moving somewhat faster — at least 23 have already passed one kind of legislation or another that attempts to curb or hold liable those who would facilitate deepfake images (Wired has a good map).
All of this is to the good, of course, and what lawmakers should be doing. But for all the rosy P.R. this generates, these efforts mean...less than they purport to mean. First, the laws are not fast enough, not punitive enough and not broad enough. Notice that so many of these are about porn. That's mainly because those laws are easier to pass, especially culturally. The public (not to mention certain lawmakers) is instinctively more likely to balk at unclothing someone digitally than at other equally insidious dangers (like pretending a person endorsed hate speech).
And when they do set their sights on broader targets, the laws tend to become the kind of mealy-mouthed 'let's establish a task force to determine if there should be more disclosures about Gen AI in marketing content' sort of deal. (Literally, that's the scope of a current Congressional bill. And it hasn't even advanced.) Big Tech laces up its shoes a lot faster than that.
Second, even when the laws really do try to get at the fuller range of what deepfakes can do — by targeting political ads or by focusing on scamsters, say — their teeth can be dull and only snapped down after the fact. At least 19 states have passed acts pertaining to the use of AI deepfakes in campaigning, which seems nice until you realize their go-to mechanism is usually just to allow candidates to sue opponents who use deepfakes. Such plaintiffs may not even win — First Amendment protections are (rightly) strong, and many of the laws are not skillfully constructed enough to catch the bad actors. But even if they do, the election is long over.
As a deterrent, sure, maybe this will stop a few people from creating deepfakes. But as a tool to protect democracy you might as well try to use a plastic straw to fight off a home invader.
But the biggest issue is that the real problem isn't legal: it's social. You can try to target every single bad actor who is trying to impersonate a politician or fake a movement. But what you can't do is change the social conditions that allow their work to be effective. The manipulation of images and video isn't landing in a vacuum; it's not descending onto a 1970s media landscape in which, for all the vehement disagreement about national policy and direction, no one is much disputing the basic facts.
Three decades of college-campus relativism on one hand and right-wing cable-news propaganda on the other (among other factors) have seeded the ground for everyone having their own truth.
Two decades of algorithmic outrage-driven social media have then segmented the land into places where no one is exposed to or much wants to hear from a neighboring farm.
And a decade of trolls from foreign entities and other hostiles has created so much smoke in the lower atmosphere that no one can much see facts even if they wanted to. (For an up-to-the-minute example of the dangers of such a climate in real time, look no further than the Springfield, Ohio pet-eating insanity, where a clear hoax is still causing a real-world panic.)
It is onto THIS biosphere that the ability to fabricate images, voice and soon video with startlingly life-like qualities is descending. And the idea that a few laws will disincentivize people from dropping them down is as hilarious as it is ineffective. It's like looking at the charred remains of an apocalyptic movie earth and wondering if sprinkling a little water might blanket the planet in vegetable gardens.
So now that I've depressed the hell out of you by sketching the utterly hopeless and irredeemable nature of what we're facing, here’s the most encouraging statement humankind can hear: "Taylor Swift is on the case."
OK, so what can Taylor do? She's the most powerful media figure around, with more ride-or-die followers than any American personality today — maybe than anyone in the modern age. Certainly that counts for something?
On one hand it's tempting to answer the question by saying...it doesn't. Swift is an anomaly. Pretty much no one else is going to have 283 million Instagram followers or the level of trust that comes with them. When she says "it's me, hi, don't listen to the machines" she is speaking into a megaphone that no one else can vocalize through. What happens when everyone else is deepfaked? How can they correct the record? Who will listen?
And it's true. Pretty much anyone else — from a city-council candidate fighting for a seat to a high-school student just wanting to be accepted — won't be able to sway the deepfaked masses off their illusion with anything close to this level of influence.
But, BUT — and here's the good part — Taylor Swift's singularity is also a major advantage. Her megaphone helps people listen and internalize in a way no one else's does. She can make a wide swath of the public understand the dangers of AI — she can be an instrument in the fight for media literacy like no one before her. When fans see how her image has been co-opted by bad actors using these tools, it brings home the hurt in a new way — and, presumably, a desire to understand and fight against it.
One of the most fundamental challenges to fixing this ruined world is that we can’t really see how ruined it is — we by definition can't see there’s a problem. That’s the power of outrage and tribalism (the toxic byproducts of that chemical assault on facts): no one thinks they're doing anything but seeing the truth.
And that's where Taylor Swift — that's, really, where only Taylor Swift — can come to help, can make us realize what's happening. Harris supporters will say that the biggest news this week is that Taylor has chosen a side to help that side win an election. But the greatest gift she offered Tuesday is the opposite of partisanship — it's prodding us to realize we should look at facts and eschew that tribalism. That when it comes to a digital world that can be increasingly manipulated by those with an incentive to divide us, we should be stopping to take a (literal) harder look at that image, and a (figurative) harder look at who might benefit from generating it.
That's a message that's been getting lost because no one knows they need to listen to it. But the tens of millions of people who revere Taylor Swift — many of whom will have to really deal with the consequences of these encroaching AI manipulations when they age into positions of power in the coming decade — just heard from the person they most trust that they must do this. And that's good for all of us.
The best way to make these AI-disinformation dangers better is not by sprinkling water from above, Congress-style. It's by trying to clear the smoke from below. And the world's most trusted firefighter just arrived on the scene.
2. SUMMER IS ENDING BUT IS STRAWBERRY SEASON JUST BEGINNING?
The fruit puns, they never get old (yet).
About eight weeks ago we updated you on Strawberry, the alleged reasoning project at OpenAI that would, if true, change how machines think, and how we think with them. The Information, which has been slicing out these scoops (sorry) at regular intervals, reported that Strawberry was coming. And now this week the outlet says it’s coming in two weeks.
To emphasize, a product of OpenAI that could actually deduce and do math problems and ultimately reason out a situation will be released commercially by the end of this month. At the very least it could do what The Information previously described: "not just generate answers to queries but to plan ahead enough to navigate the internet autonomously."
The outlet says it's not clear how Strawberry will reach us — one option is for it to be a mode within ChatGPT. (So you can choose to use the old large language model or this new higher-level reasoning approach for your query.) But the rollout specifics are in a way second to the philosophical significance. If these drib-drab reports are right, we could soon be using AI to think about problems instead of just looking up and synthesizing information.
That latter modality, after all, has been the hallmark and limitation of all of these Large Language Models over the past few years. They're just really casseroling all that's been said before to create an appearance of coming up with something original; it's a sophisticated mirror. It can't do basic math, or make meaningful inferences about whether to take that job, or whether to reconnect with that estranged relative, or do anything even a doltish human friend with reasoning skills can tell you to do (yes, including search the Internet). That's why, if you've ever pressed ChatGPT to give you some real-world outcome or just grok the contributing factors, it returns something like a tinny on-one-hand-on-the-other-hand laundry list; it's not actually thinking about the question the way we humans would.
But if OpenAI has licked the reasoning problem, potentially by combining a Large Language Model with another method, then it will have changed the game, both in what we use AI for and in what AI is at heart about.
Of course this is a big if. There have been rumors of OpenAI's next LLM, known as GPT-5, for years, and nothing. Murmurings of AGI, or artificial general intelligence (the idea that a computer can completely function like a human), and the calendar goalpost keeps getting moved back. So the idea of a product that can reason like a human, while intriguing, falls decidedly into the show-me category. It's entirely possible Strawberry comes out and is just an incremental iteration on the chatbot we already know, and as you were.
Still, don't sleep on big change here. When ChatGPT came out in late 2022 many tech industry folk were surprised. They didn't think OpenAI would unleash on the market a tech that seemed buggy and not ready for primetime. But the company did anyway, and it rocked the world. Ordinary citizens could now generate writing they'd never been able to generate before. And every other company scrambled to keep up, investing in or releasing their own products and setting us on the arms-race course we're still on today.
Where will Strawberry fit on this spectrum? Could it take consumers by storm? Change how we partner with machines? And jolt the industry anew? These are just some of the juicy questions.
UPDATE: As we were putting this issue to bed OpenAI announced it was indeed releasing a reasoning model called “OpenAI o1” (so no more fruit) that “can reason through complex tasks and solve harder problems than previous models in science, coding, and math.”
Just a preview, but so far in tests it “performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology,” the company said. “We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%.”
The results come because of training in which the models “learn to refine their thinking process, try different strategies, and recognize their mistakes.” We’ll dive into this more next week as the thing unspools. But it’s here.
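For the technically curious, here's what that "choose your mode" idea could look like in practice. This is a minimal sketch using OpenAI's Python client; the model names ("gpt-4o", "o1-preview") and their availability are assumptions based on the announcement, not confirmed rollout details.

# A sketch, not official docs: model names here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Should I refinance a 6.5% mortgage if rates drop to 5.9%?"

# The familiar LLM mode
standard = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# The reasoning mode (assuming the announced o1 family ships this way)
reasoning = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": question}],
)

print(standard.choices[0].message.content)
print(reasoning.choices[0].message.content)

Same question, two different kinds of answers, at least in theory: one synthesized from what's been said before, one worked through step by step.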
3. IT'S WEIRD TO TALK ABOUT A KIDS' MOVIE AS THE MOST IMPORTANT PIECE OF AI CONTENT THIS YEAR. But that's probably what it will become in a few weeks when Universal puts out the DreamWorks Animation film "The Wild Robot."
"The Wild Robot" is based on a Peter Brown book (and eventually series) that first came out about eight years ago. In it, and the movie (which we've seen), a helpful robot nicknamed Roz crash-lands on a remote wilderness island, where she is first treated warily by the (human-like) animals and eventually, through her own desire to improve, starts demonstrating very human-like sympathy and affection.
"To survive we must sometimes become more than our programming," one of the human-like animals tells Roz, and she obliges, overcoming her circuitry to do the right thing.
Here's the trailer for the movie, which is pretty great and an early frontrunner for the best animated feature Oscar and possibly even a best picture nomination.
I won't say all of this is a perfect parable of the coming age of AI companions. There is a mobility here that our companions won't have, for one thing, and also a sentience. In truth no AI companion can "overcome" its programming; that's serious science fiction. Also, the film doesn't really deal with the paradoxes. The idea that a machine intelligence can be sentient but also only our friend is a logical impossibility; once machines attain the ability to make independent choices, those choices may not always be in our favor.
But the fundamental message here is nonetheless one that will influence us: Machines are not the enemies of humanity. In fact they can demonstrate humanity, sometimes even better than the cold-blooded humans. (There are evil-acting machines here but they are programmed by bad humans.)
Youth-oriented entertainment shapes how we think of machine intelligence in surprising ways. A whole generation grew up thinking of machines as potential friends thanks to C-3PO from "Star Wars" and Johnny 5 from "Short Circuit," which eventually helped make a lot of adults feel OK opening the door of their lives to the likes of Alexa and Hinge and Google Maps.
That whole generation also sometimes thought of machine intelligence as a threat thanks to movies like "The Terminator" (and "2001's" HAL that preceded it). It's a message that's found expression of late with (justifiable) worry about whether we're giving up too much power to tech companies to let machines take over our lives. We worry about the effective accelerationists in Silicon Valley who want to replace as much human thinking with machine intelligence as possible.
"Wild Robot,” I believe, sets the clock back to the former example. DreamWorks and director Chris Sanders (more from him in a second) are surely not trying to put forth any Big Tech messaging. They’re just trying to make a cool and heartwarming movie, and they succeed in spades. But the message gets through just the same: Machines are just doing their jobs. They’ll never take anything from you, and they’ll only enhance you. Which, while is sometimes true in real life, is obviously not always true.
The movie in a way brushes back a lot of the skepticism that's kicked in since Google's AI Overviews started telling us to make pizza with glue and OpenAI board members resigned fearing a lack of safety mechanisms.
I don't want to be the guy saying don't love the movie. Do love it; it's great. Go out and see it and then see it again. But it's worth trying to separate its greatness from a message that can seep in about the fundamental harmlessness of machine intelligence. AI companions will serve a lot of lovely purposes. But they might also lead us astray by prompting overreliance and making us forget or neglect actual human relationships. Or complicating those relationships when we suddenly realize we can't program them the same way.
Here’s what the director, Chris Sanders, feels about the topic.
"Whenever I’m in Santa Monica I see those little delivery robots traveling around and they always give me this vibe of some little kid running an errand for the first time,” he said when I asked him in an interview how he thinks about machine intelligences. “My heart breaks and I want to stop my car and walk with it and make sure it gets home.” (N.B.: I feel the same way. Something about seeing a human-like entity helpless in the face of humans and all their potentially bad intentions.)
He continued: “I read an article about a Roomba and how people attach themselves to it emotionally because they move around and seem to be kind of alive. People don’t replace them as often as you would replace another thing….It’s like your cat needing to go to the vet. When your Roomba breaks you want to fix that Roomba.”
A fascinating response, and one that actually puts its finger on what will fuel AI companions. They will be, in a sense, our pets, our kids, or even friend/partner equals. But in the end, that can also mask what it means to have programmable machines as social companions. Because they’re not pets, they’re not kids, they’re not humans, and they don’t have feelings. They can affect ours, though. And we should probably be keeping that in the back of our mind even as we enjoy getting to know them as cuddly film friends.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. Can 2024 do better? The summer wasn’t great. September so far? Pretty solid.
TAYLOR SWIFT IS RAISING HER VOICE TO FIGHT AI-ENABLED DISINFORMATION: +4.0
OPENAI’S NEW REASONING PROGRAM: Hard to know what this will amount to. +1.0
THE WILD ROBOT COULD LULL US INTO NOT WORRYING ABOUT AI DANGERS: -2.0