Mind and Iron: Here's what AI could do to our hearts and brains
Will it make us happier or more miserable? Biden wants to ban AI voices (maybe). And what the Kate Middleton muddle says about the future of media.
Hi and welcome back to Mind and Iron. I’m Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and pitmaster of this vegan barbecue.
As many of you know (and you newbies will soon find out) we’re here to help make sense of this crazy, shining, dark, hopeful, apocalyptic future, and all that will bring us there. Mind and Iron views the world through a human lens, and evaluates tech by this criterion above all. Our reporting is always — always — done with this as a priority. Please consider pledging a few dollars to our crucial mission.
I’m recently back from the SXSW conference in Austin, where the state of tech is…pretty strong? Slightly scary? Stealing our humanity even as it restores it? All of the above?
If you’ve never been to SXSW, it’s a scene to behold. Thousands of engineers, entrepreneurs, investors, thinkers, marketers and other types gather to work out how the future should and will look. Amid the running around, Mind and Iron hosted a discussion as part of the conference's 2050 track on what AI will do to us psychologically — on the long-range picture of people using machines as full intellectual and emotional partners. (Shoutout to SXSW senior conference programmer Hayden Bagot, who coordinates panels like they're skittering pucks and he's Connor McDavid.)
I’ll offer some quick bits from around the conference and highlights from our own panel — along with my personal take, which is a mix of hope and caution.
Also this week, AI as a presidential issue? It’s finally here, thanks to President Biden's jolting comments at the SOTU.
And finally, what the Kate Middleton mystery tells us about the future of media consumption. You won't learn the whereabouts of the missing princess. But we'll clear up a mystery or two of our own.
First, the future-world quote of the week.
“We should not be looking to bots for replacing our emotional connections….all these feelings and emotions we have as humans is not something we can replace with a bot. At least, not anytime soon.”
—Paige Bailey, lead product manager at Google DeepMind, speaking at SXSW about what she thinks AI is and isn’t good for
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
What will AI do to us emotionally?; the leader of the free world talks deepfake bans; what the Kate Middleton photo dust-up says about where media is going
1. WHEN THE HAZARDS OF AI ARE DISCUSSED, the conversation tends to drift toward well-known issues. Bias is one, and it's a biggie. Data security is another; ditto.
But the existence of two problems doesn't preclude the presence of another — our endangered happiness.
Despite their centrality to human lives, our emotional and cognitive states are not something most of us dwell on in the hurlyburly of the everyday. And they're certainly not something pundits dwell on. Pundits want to tell us who'll win an election or pull off a tournament upset, not whether our happiness is trending up. And tech pundits? Even less so.
What's weird about this is that tech and tech platforms have of course had a profound impact on said states. Over the past 30 years technology has led to greater worldliness, more relationships with strangers and even increased neural activity among prolific Googlers, according to a signature study.
It has also led to higher anxiety (a Pew Research poll this week found that roughly three-quarters of teenagers feel more at peace without their phones) and shallower thinking. Extensive social-media use has also been linked in academic studies to higher levels of dissociation.
Meanwhile, mobile tech and cheap wifi have reduced the social serendipity that can make life fulfilling. As Trevor Noah said in a SXSW discussion with psychotherapist Esther Perel, referring to our compulsive phone use in public places, "The fact that we can contact the people we already know makes it a lot less likely we connect with strangers and share experiences with them."
My essential question at the Mind and Iron panel centered on what happens next. That is, what happens when the technology is not simply a tool, as it was for the first 15 years of the Internet, or a platform, as it's been for the past 15 years of social media. What happens when it's actually another intelligence?
Because AI will elevate technology into something qualitatively different than just new ways to listen to (or shout at) other humans — it will allow us to interact directly with the code itself. So what are we staring down the barrel of over the next 15 years, when technology is no longer just a conduit for human relationships but an alternative to them?
[I hear what some of you might be saying: “Will it actually do that?” And granted, how exactly this will all play out is hard to say. Will we be consulting AI to make decisions? Will we call on a smart companion when we’re going through struggles with a friend, or simply to keep us company after a long day? Will we engage in outright romantic involvements with AI, since these machine intelligences can know us better than any human, and without their own baggage?
Maybe and maybe and who knows. The form all this will take is hard to predict. But one fact seems certain to me from studying where these models have been and where they’re going — AI will increasingly have the technical capability to do all of this. And if human nature (and capitalism) have their way — and let’s be honest, they always do — these use cases will become ever-more popular. No matter the specifics, AI is likely to fulfill a partnership function in some way.
As it turns out, the person who conducted that signature study 15 years ago about Googling and our brains is Gary Small, then at UCLA, now chair of psychiatry at Hackensack University Medical Center. He was gracious enough to sit on our panel. So was Cristine Legare, a professor of psychology at The University of Texas at Austin and director of the university's Center for Applied Cognitive Science. And so was Dor Skuler, who co-founded Intuition Robotics, the elder-tech company that has given us ElliQ, a character-based AI that is increasingly keeping Grandma company.
Here are a few highlights of what they said on these existential questions — on how deploying AI as decision-making, emotional and even romantic partners will change us. Or, the true theme of conversations like these: what makes us human?
I’ll hold off on commenting for a second — I’m curious about your reaction as you read them.
(You can listen to the hourlong conversation here as a podcast — hey, we give you options.)
On how AI can redeem us emotionally and psychologically:
Skuler: The worst punishment we have in modern society is sending people to solitary confinement. And yet that’s where we send our [elderly] moms and dads….For the time that they’re alone, ElliQ is like this motivational chirpy, very very funny delightful sidekick in their lives…We think that it’s better than the alternative of just being alone in front of their TV.
Small: We can’t train enough mental-health practitioners, and the more we try we still can’t get them there. I think we can definitely use technology — create AI — that helps the human condition [in therapeutic settings].
On whether we’re sufficiently aware of the dangers:
Legare: There needs to be major investment in scientific research on the effects of these technologies. The pace of these technologies is breakneck speed. So the science on the psychological, cultural and social effects is already behind…The dream I think would be to use these technologies to amplify the better, more moral, more pro-social aspects of our nature and inhibit some of the more negative, darker sides of our nature.
On why we shouldn’t be so quick to believe humans are superior to AI:
Small: Yes, the machine is just a bunch of zeros and ones. But what are we? We’re just a bunch of molecules and hydrogen and oxygen and electrical impulses. We’re just more complex and have a longer history of development. So one way of thinking of this — we’re human beings but we have another species that is developing very rapidly: techno-beings.
On avoiding AI anthropomorphism:
Skuler: As designers of these technologies we have a lot of responsibility. On one side we want to have a strong bond. That means a relationship. We want that relationship to be helpful and to continue. But we don’t want people to confuse it with a relationship with a human…We have relationships with our pets and they’re meaningful and we feel love and emotion. But we don’t confuse them for a human… I think that a lot of people in the AI industry are actually doing a disservice to humanity by trying to portray AI as human. I mean it’s literally the goal. The definition of the Turing Test is to successfully fool us into believing AI is a human. Like, what?? Why is that a good thing for humanity? I don’t think it is. What we’re trying to do [at Intuition] is create an AI that is authentically an AI.
Legare: [But] for AI to be psychologically impactful it needs to trigger enough cognitive bias for us to feel something for it — for us to feel an attachment…There’s a fine line there. It [does] need to be human-like enough to be imaginable to us and for us to feel a closeness and a bond.
Small: AI may get to be so good, we won't be able to determine a human from a robot. And as long as we realize that, that's OK I think. All of us have relationships with people who don't have a lot of empathy, that are very narcissistic….[Is AI] any different?
On whether it indeed matters that our new good friend is not an organic life form:
Skuler: ElliQ does have knowledge about you and interest about you. But it is programmed. And she is empathetic but is programmed to do so. The efficacy kind of speaks for itself. She is projecting empathy — or if you want to call it ‘we have created a formula on how to be perceived as empathetic’ — because that’s what the human counterpart needs.
Legare: A lot of the way placebo effects work is how you think something impacts you and if you believe there's a potential for an effect. There are tons of scientific studies that show it does have a positive effect on you….[And so] I wonder if we're holding this tech to a higher standard tha[n] we hold ourselves….How often in your daily life do you communicate messages that are maybe well-intentioned but not true?…[The point is] it's subjectively helpful. We do this all the time [without AI].
On avoiding the worst outcomes:
Skuler: The question is what do you do with that agency of AI; what are the goals you’re programming it to achieve?…How can we use this agency to animate and strengthen intra-human relationships? You can do that if it’s a goal you’re setting for yourself and measuring it against. And if your goal is to try to monopolize a person’s time and create a more intimate relationship between humans and AI, then conceivably you’re able to optimize for that as well….If we wanted to, we [at Intuition] can fool people. Two years from now we can definitely fool people. That’s why I think the ethics are super-important. I don’t think it’s discussed enough. And I think we need to approach this from a position of right-and-wrong.
--
I came away from my discussions with these experts optimistic but concerned (a familiar vibe).
Therapeutic use cases certainly seem magnificent; someone who has trouble interacting with humans or can’t find the right therapist now suddenly has a way to feel social or talk to someone or just practice their skills so they can engage in the real thing. These are folks who truly need more people in their lives but for reasons beyond their control can’t find them in the moment. AI helps.
I’m on Skuler’s side on the elder-companion use case too. Sure, it might encourage some seniors to stay home even more often, as some of ElliQ’s critics have argued. But given how massive a plague elder loneliness already is in our world, that seems like a gamble worth taking.
Also, tech entrepreneurs aren’t all out to ruin us for profit. (See that Paige Bailey quote above.) And even if they were, the human capacity to adapt is high. The fast-food industrial complex tries to ruin our health by relentlessly marketing us garbage. And yet millions of Americans see through that and enjoy healthy meals every day.
Maybe most important in all this, as a society we’re coming into the AI age with a lot more awareness than we did the last time around. Ensuring that AI is used healthily will be easier for a simple reason: we’ve had 15 years of social media making us collectively unhealthy. As Legare said on the panel, “I think there’s enough concern about the pitfalls…[that] there’s a huge appetite to do better going forward.”
That’s the optimistic side. But I have a lot of worries too. Because some of the “Her” scenarios we’re looking at can feel as dystopic as that movie turned out to be. AI companions might seem like they’re filling a void in our lives, and sometimes they will. But in the end they are not human, and I can’t help feeling like that gap will make itself known to us in surprising and discomfiting ways.
Programming our friends might seem appealing. But it’s also a problem — an order of magnitude worse than our current echo-chamber challenges. It’s bad enough when we listen only to voices we agree with, as so many of us do on curated social feeds. But shaping those voices (via algorithm) to say only what we want to hear in the first place? On social media we at least have to seek out other voices, and they might occasionally challenge us. That disappears when the only voices we hear are our own.
Sure, some people might actively program the algorithm to challenge them. But life is hard, lonely, alienating. How many of us are really going to take that option, especially when a warm, cozy, validate-everything-we-say voice is waiting right there? About the same percentage of teenagers that currently put down their phones, I suspect.
(I disagree with the idea, btw, that talking to an AI is the same as talking to a narcissistic friend. There seems to me a fundamental difference between a human who’s not often thinking about other people and a machine that by definition can’t even know what thinking about other people is. I suspect some of you might nod strongly and some of you would offer counter-arguments.)
Another thing: will any of us ever get over a bad breakup when we can just program an AI to preserve the illusion that we’re still in the relationship? And how will we grieve properly over a death? Replika AI, one of the leading AI-companion companies, was created after its founder was beset by grief over a recently departed good friend (she went on to build a program that could converse the way he did). These programs are by their very design meant to ease the pain of death. But in doing so they might remove the healthy kernel in that husk of pain.
Psychologists have no roadmaps yet, because we haven’t seen this deployed in the wild. All they know is that for thousands of years, humans who lost someone were forced to grieve fully; humans who fell out with a friend had to go about the messy business of repairing the friendship; humans who wanted a fulfilling relationship had to go out and communicate with another human. And now, suddenly, we don’t. What that means for us as a species is impossible to know at this point, and anyone who claims to know is lying or selling you something.
All we can really say definitively is that the questions are many. What happens to our hearts when we no longer need other people to have meaningful relationships? What happens to our brains when we no longer need to process information as we once did? What happens to us as humans when so much that has long defined that humanity — the ability to independently think and feel — is outsourced to machines?
There are no answers yet. But assuming unvarnished good would be naive and ahistorical. So would thinking the opposite: that no psychological benefit whatsoever will come out of this.
But maybe most naive and ahistorical would be not to ask the questions at all.
2. LAST THURSDAY PRESIDENT BIDEN MADE HISTORY WHEN he mentioned AI in a State of the Union address, the first of what will no doubt be many such mentions in presidential speeches to come.
Even more notable, he packed both the promise and the risk of the technology into two compact sentences. "Harness the promise of AI to protect us from peril," he said of his legislative agenda (for the year ahead, or a second term). Then, the doozy: "Ban AI voice impersonations and more."
The first part was general enough to not really mean that much — it could refer to anything from a large federal program to build climate-crisis models to small pieces of funding for deepfake watchdog bots.
But the second part was a juicy statement for anyone who believes that AI has to be carefully regulated before it spins out of disinformational control. Biden himself was the victim of madness-sowing robocalls in New Hampshire, and that's of course just the tip of the spear when it comes to frauds, electoral and otherwise. So the idea of a ban is timely and bold. Except…what does it mean?
Right now there are no laws on the federal books explicitly regulating AI, which is shocking considering how much it's already a part of our lives. (Some states are mobilizing…slowly.) The White House’s own executive order from October doesn’t really get into the impersonation question.
There’s a Senate bill known as the No Fakes Act, introduced back in October, that would effectively federalize the current spotty patchwork of state “likeness laws,” making it a federal crime to create a digital replica of someone without their permission.
This is…sort of what Biden seemed to be talking about, but also not really: the act goes well beyond voice impersonations on the one hand, and on the other isn’t really an election-centric proposal — it’s aimed more at stopping pop singers from getting deepfaked.
The closest piece of voice-ban legislation involving politics currently in play is the Adriano Espaillat-sponsored Candidate Voice Fraud Prohibition Act from last summer. And while it’s not nothing, it’s narrow, amending a 50-year-old voting law to “prohibit the distribution, with actual malice, of certain political communications that contain materially deceptive audio generated by artificial intelligence which impersonate a candidate’s voice.” Not exactly “and more.” Also, the legislation hasn’t gone very far in the last eight months.
An outright ban on impersonations would no doubt face First Amendment challenges (which Big Tech would be happy to back). We’ll tackle the legal specifics in a later issue; they’re really interesting. (There’s a possibility that federal identity-theft laws could come into play too.)
For now the question is what lawmakers could try. Perhaps Congress, at the prodding of the White House, could prioritize a version of the Espaillat bill, banning voice impersonations specifically in ads ahead of the next eight months’ barrage of campaigning. Or it could try to require disclosure labels for impersonations, which would neatly avoid First Amendment challenges but stop a long way short of a ban. The White House could also put the screws to the FEC, which has been notoriously slow on this, to finally impose its own regulations.
By the way, if you’re wondering what Donald Trump as president would do about AI impersonations: first, who the heck knows. But it would be interesting to see how his anti-regulation bent squares with, well, let’s just say displeasure that his voice and likeness are being used willy-nilly. Of course, AI deepfakes are also potentially a low-key ally of some politicians, who can point to them when they don’t want to acknowledge uncomfortable factual images; Trump has already done this at least once.
The bottom line is that while the SOTU line got a lot of applause from those rightly worried about where disinformation is taking us, there’s hardly a clear legislative path in the U.S. to stopping someone from using AI to sound like someone they’re not.
If we’re looking for a model, we might want to gaze east. On Wednesday the European Parliament voted overwhelmingly to basically greenlight its AI Act, which has some legitimate teeth (you can read the full text here). Such a sweeping act would have the effect Biden imagines; it heavily regulates AI in “high-risk” realms, which include not just elections but money solicitations and other areas where impersonators have a field day. And even low-risk uses require transparency.
(European politicians summed up just how urgently they saw all of this. “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology,” said Dragos Tudorache, a Romanian lawmaker who helped lead negotiations on the AI Act.)
One encouraging thought on where Biden’s head sits comes from Fei-Fei Li, one of the earliest innovators of AI, co-director of Stanford’s Human-Centered AI Institute and a general humanist on these topics. Li, who has actually met with the president to discuss AI in the past, was at the SOTU and went up to him afterward, noting how nice it was to hear him mention AI. He smiled and told her, “Yes! And keep it safe.”
Now we just have to hear how he’ll do that.
[AP, Euractiv, The Verge and NPR]
3. ANDY WARHOL WAS WRONG — IN THE FUTURE EVERYONE WON’T BE FAMOUS FOR 15 MINUTES. IN THE FUTURE EVERYONE WILL BE PHOTO SLEUTHS.
Or so we’re left to think after the feverish scrutiny of the Kate Middleton photo, in which an official palace social account this week first sent out, then retracted (after wire services objected), an admittedly doctored photo of the absent princess.
And all the Detective Sergeant Ellie Millers descended. Every freckle, limb, gesture and lighting detail came under the magnifying glass.
AI was not used in said doctoring — or at least, no one at the palace copped to it. But the nature of the technological tools hardly matters. The whole episode is an object lesson in the precarious state of media circa early 2024.
What even is the point of images when they can be so easily manipulated, so many of us have found ourselves asking in recent months? The Middleton mystery photo illustrates the problem to a tee: there’s little point in using photos to demonstrate anything when there are so many reasons to question them. As John Oliver told Andy Cohen this week — after the photo was released — “There is a non-zero chance she died 18 months ago; they might be ‘Weekend at Bernie’s’-ing this situation.” He’ll believe it “until proved otherwise, until we see her with a copy of today’s newspaper.” (To which I’d say: and even then.)
The Kate kerfuffle feels both prescient and naive. Prescient because for the coming months we’re going to have more and more debates over whether a piece of media is real, as the manipulation tools get better and subjects (or bad actors) try to blur the lines. Naive because it won’t be long before the tools get so good we won’t be able to tell at all, and then the idea that we can scrutinize our way to the truth will be laughable.
What happens then, I don’t know. Chaos? The end of time? Into this void will likely rush every manner of conspiracy theory, basic media facts sucked up and obliterated like New York under the spell of the Stay Puft Marshmallow Man.
Anyone who wants to prove they did something will really only have one choice: appear live in front of a lot of people. Because the real world can’t be digitally doctored. Beyond that, it’s hard to know what proof we will accept. (Biometrics come to mind, but how that would work on any practical or philosophical level is beyond me.) A public figure won’t be able to convincingly say where they are, or who they are, or even that they’re still on this mortal coil.
So as chaotic as this Kate stuff is, let’s enjoy this moment while we can. At least there’s still reason to believe we can sleuth our way out of it.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, running from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6. Last year ended with a cumulative score of -21.5 — gulp. But it’s a new year, so we’re starting fresh: a big, welcoming zero to kick off 2024. Let’s hope the score gets into (and stays in) plus territory for a long while to come.
AI WILL BECOME OUR EMOTIONAL AND INTELLECTUAL COMPANIONS: Holy Scarlett Johansson, Batman. Some hope, a lot of concerns: -1.5
PRESIDENT BIDEN IS TAKING VOICE DEEPFAKES SERIOUSLY: Great, even if the path forward is still unclear. +2
THE BEGINNING OF THE END OF VISUAL MEDIA: Gulp. -3.5
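(For the spreadsheet-minded: the TSAS is just a running sum of the weekly item scores. Below is a minimal, purely illustrative Python sketch of the bookkeeping; the function and variable names are mine, not any official methodology.)

```python
# Totally Scientific Apocalypse Score (TSAS): an illustrative sketch.
# Each weekly item gets a score from roughly -6 (doom) to +6 (delight);
# the running total carries across weeks. 2023 closed at -21.5 and the
# counter was reset to 0 for 2024.

def weekly_tsas(item_scores):
    """Sum a week's individual item scores into one weekly TSAS."""
    return sum(item_scores)

running_total = 0.0  # the fresh 2024 start

this_week = {
    "AI as emotional and intellectual companions": -1.5,
    "Biden taking voice deepfakes seriously": +2.0,
    "The beginning of the end of visual media": -3.5,
}

week_score = weekly_tsas(this_week.values())
running_total += week_score
print(f"This week: {week_score:+.1f} | Running 2024 total: {running_total:+.1f}")
# This week: -3.0 | Running 2024 total: -3.0
```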