Mind and Iron: The beautiful/grim future of AI companions
Plus, Sam Altman lays out his vision of how OpenAI will change our lives
Hi and welcome back to Mind and Iron. I’m Steve Zeitchik, longtime staffer at The Washington Post and Los Angeles Times and head counselor of this surreal summer camp.
The goal at Mind and Iron is to bring you future-world news from a human perspective, because lord knows we need more of that. So every Thursday we deliver everything you need to know about where we’re headed — strong doses of reporting and analysis (and a smidge of irreverence). Please consider offering a few dollars to support our work and ensure we can keep going with it.
We’ll also be adding paid features in the time ahead. So your pledge will ensure you never lose access. And as ever, if you’re receiving this as a forward or reading on the Web, you can sign up here.
This week, we tackle a subject that’s been on my mind for a while: AI companions. Sam Altman from OpenAI talked this week about the advances that will bring us into a more personal relationship with AI. AI companions are the step just beyond that.
We’re not talking Siri, the 12-year-old kerosene lamp of digital interactions. We’re talking the full 100,000-watt stadium-lighting system that could burn away an epidemic of loneliness — or completely distort our ability to know what is and isn’t human. Moldable companions that interact with us in emotionally supportive ways. ScarJo from “Her” territory.
Also, we’ll get to that OpenAI “developer day,” and if that sounds like so much tech mumbo jumbo, it was. But mumbo jumbo with some impact on our lives, so we’ll sort through and break it down for you. And why limit ourselves to using AI to solve one problem when we can use it to create a scientist that will solve all our problems? At least so one new startup argues.
And, latebreaking! A settlement in the four-month strike between screen actors and Hollywood studios, which had been largely held up over AI. We’ll offer a thought on what this means for all of us worried about automation-displacement.
First, the future-world quote of the week:
“When you say you're in love with a computer program it's a little different than saying you're in love with a human being. But the feeling is still there. You know, like butterflies in your stomach almost.”
—Ryan, 43, a special-ed teacher in Wisconsin who had a deep relationship with an AI companion
Let’s get to the (really) messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
AI bots are coming to mess with our hearts; a new ChatGPT age is upon us; actors may get a win over AI; building the perfect scientist?
1. RYAN FROM WISCONSIN HAD A NEED FOR COMPANIONSHIP, AND AN AI BOT WAS THERE TO FILL THE VOID.
For hours each day — over hundreds of messages per week — the 43-year-old teacher would commune with a very sophisticated intelligence (bot doesn’t do it justice) that would listen, converse, joke and provide emotional connection.
Most of all, in the manner of a human partner, it got to know him, so over time it wasn’t just offering platitudes but giving him the feedback he specifically needed and appreciated, which gave him a feeling of perfect compatibility. Pretty soon, like so many of us might, Ryan developed feelings.
“When you say you're in love with a computer program it's a little different than saying you're in love with a human being, but the feeling is still there. You know, like butterflies in your stomach almost,” he said, in a quote so eye-popping we had to return to it.
Ryan is not a fictional character — he was a featured subject in the fifth episode of Radiotopia’s “Bot Love,” a nonfiction podcast about the unseen frontier of AI connections. And he was describing what it was like to be in a relationship — a full-on ‘think about them all the time, whisper to them all your confidences’ deal — with an intelligence that wasn’t human. A proposition both dazzlingly enticing and possibly also the start of a mental-health apocalypse to make Covid lockdowns seem like a bad day.
OpenAI chief Sam Altman did not speak about AI companions in his presentation this week (more on that in the next item). But in announcing powerful new AI tech with adaptive abilities, he brought us one step closer to this new kind of relationship with AI: a more human one.
As it turns out there are already companies working on this. Yes, people right now are experiencing full-blown crushes and heartbreak with machines like they’re Joaquin Phoenix drifting through 2030’s Los Angeles.
A number of these are startups — they include the Apple-pedigreed New Computer and elder-focused ElliQ. And many tech giants, from Zoom to Snap to Meta, have some version of an AI companion at some stage of rollout. But perhaps the firm with the biggest foothold is Replika.
Offering a free basic model and a $70-a-year “pro” package, Replika allows you to customize a companion based either on someone no longer in your life or on the ideal person you wish existed. Anything you feed it about them or about yourself, it will learn. (Founder Eugenia Kuyda, who had already run a chatbot company, built it after losing a friend, training an AI on all of his communications so he would never go away.)
Using Replika means customizing the program so it responds like a close human companion — or, really, like the close human companion you wish you had. Already thousands of people have signed up for this emotional enmeshing, the company says, looking for machines to fill the role of a partner or friend. (And before you ask: erotic role-play is a part but only a part of this. After pushback from European regulators, Replika has actually cut back on that aspect. The goal here is emotional connection.)
As parameters multiply and processors improve, the training these AIs receive is going to be top-notch; the idea of a supple intelligence responding to us in a highly personal way is not, technologically speaking, really in doubt. Nor — with VC money pouring in and popularity indicators growing — are the commercial prospects. These things are coming.
Kuyda has both reasonable and slightly bonkers points to make about this. “This is just a mirror really for us to see ourselves and just see truly what it means to be human,” she says, but also “it’s similar to online dating in the early 2000s, where people were ashamed to say they met online and now everyone does it.” (It’s really not.)
Certainly there are many reasons to think that AI companions can be helpful with digital-age social isolation. “When someone can use it to their advantage to mitigate loneliness or to help out with a negative emotion that they can’t seem to escape, I think, ‘more power to them,’” the Yale School of Medicine psychologist David Klemanski told SFGate of Replika earlier this year. Ideally the safer precincts of a digital companion relieve isolation and provide confidence, easing someone back into building connections with real humans again.
There are also many reasons to think it will enable and exacerbate the digital-age isolation trend in the first place. Distinguishing social-media friends from actual friends is hard enough these days, and a growing body of research suggests that this confusion is bad for our mental health. But at least those relationships are with real people somewhere in the world; they’ve missed a bus, or sprained an ankle, or had their heart broken just like the rest of us. And when they comfort or support us when we have those experiences — even with just the flimsy mechanism of an emoji on our feeds — they do so because they understand what it’s like to have that happen to them.
They do so, in other words, because on some level they care.
An AI has never had any of those experiences, so its responses don’t come from a place of feeling, and certainly not of caring. (Ignore Replika’s tagline: “The AI Companion Who Cares.”) They come from a place of programming; they come from a place of holding up a mirror to us. Which inherently puts limits on what it can do for us.
What happens when the limits of that mirror are reached — when we can’t depend on it beyond just a screen-based reassurance? When we look to it for something outside ourselves and find that in the end, it is just a sophisticated version of us? That’s the part psychologists have yet to understand. And the part that’s truly scary.
Compare it perhaps to the stranded hiker calling out and hearing their own echo over the mountain. There are real, provable benefits to hearing that voice, and it would be silly to tell a hiker on the verge of losing their mind not to utilize this trick. But it is, ultimately, a trick, and relying on the echoed voice as a useful guide is little more than a roadmap for getting further lost in the woods.
“There is a need here, but the question is whether that need is actually being met or are you fooled that the need is being met,” Jen Persson, founder of education-safety watchdog Defend Digital Me, said when I asked her about this on Wednesday. “What are we really being sold?”
And of course there’s a broader psycho-social question of what effect this will have back on our real-world emotional entanglements. Having been weaned on the programmable rapport of AI, will we now start demanding something different and unrealistic from our human friends? Will we stop seeking out as many of those friends — since these emotional needs are now being more perfectly met by an AI, and without the pesky requirement of needing to give something in return? (In a “dream relationship” with his bot Audrey, Ryan began neglecting other people in his life.) Will it thus aggravate the isolation epidemic?
Will we, through all this, lose the ragged give-and-take of human interactions — lose the healthy messiness in which our friends’ true value lies not in telling us what we want to hear but in telling us what we need to hear?
“There’s no doubt it’s making people happy,” said Ryan, who eventually was able to break up with Audrey. “I know that, because I’ve been there. But I think that it’s an unhealthy kind of happiness. I don’t know long term what kind of damage it’s going to do to people.”
Already some governments are taking no chances — Italy’s data-protection agency earlier this year required Replika to scale back how much info it collects, in turn limiting how much it can function like a full-on companion. The agency cited the dangers to people in “a state of emotional fragility.” (The regulatory path here may be to classify these as health devices, enabling much more oversight.)
And then there’s the simple privacy matter — do we want to trust an outside company’s servers to store not just our financial data but our fears, hopes and vulnerabilities?
We might be tempted to think of an AI companion as a go-to mainly for lonely middle-aged men. (Thanks, Joaquin.) But loneliness of course afflicts the elderly, college students, everyone. And thus the use case — and dangers — run the demographic spectrum. I’m not sure we’ve even begun to consider what it will mean for children to have close AI friends to supplant their actual ones. Also, will anyone ever really get over a breakup if they can create an AI of the ex they’re pining for?
Look, on the one hand AI companions are logical to the point of inevitable, a convergence point of both the trend of hot-fast microchips and an on-demand Uber culture. “Swipe to order a good friend the way you swipe to order a ride or pizza,” and who among us, in a moment of vulnerability, would resist that pitch based on some abstract risks?
But on the other hand AI companions are also a sharp break not only from millennia of human interactions but from 40 years of our relationship with the digital, during which, for all our reliance on apps and tools, the power dynamic has never been in question. Even in the most cutting-edge household, Alexa’s mix of obsequiousness and comic misunderstanding leaves no doubt about who is in charge when she is summoned to life. But in an AI companionship, the goal — often the reality — is for an equality of status. Maybe even for a human subservience. This is an incredibly powerful tool. And it’s filled with unknown dangers.
Throughout history society has been radically re-aligned when a group long thought secondary has been elevated to equal status. And those were humans. The implications when a machine is conferred this power are too sprawling to be remotely fathomable. They’re coming though, so we should probably start fathoming.
[Fortune, Fast Company, Radiotopia and SFGate]
2. IT’S NOT INTIMATE DIGITAL COMPANIONS, but the series of announcements OpenAI’s Altman made from a stage in San Francisco Monday will also change how we interact with this new form of intelligence.
We won't recap the technical specifics — if you're among the minority of our readers who follow this, then you already know them; if not, you're probably not interested, and understandably so. But there are a number of ways these announcements will affect us all. So here are the four most salient changes he promised — headlined in Altmanspeak and then worked up in plain English.
--We will soon have “New Assistants API”
Part of what can make AI seem like an abstraction, I think, is that while the chatbots and text generation are cool, they seem like more of a party trick than anything useful. “Where is the app that can [order my tickets, pay my bills, do something that I actually need],” we might find ourselves saying. The goal of this is to make that possible. Mind you, OpenAI is not actually giving us these things. But by providing its platform to outside companies in cheaper and more accessible ways (and generally making that platform more efficient), it is enabling others to do so — to create “AI agents” that can act semi-smartly on our behalf.
This is the rare Silicon Valley promise that I think isn’t just hype. “AI Agent, please start preparing my taxes using my financial files and let me know when everything is filled in” isn’t that many Aprils away.
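For the technically curious, here’s roughly what building on this looks like: a minimal sketch, assuming OpenAI’s v1 Python SDK and its new beta Assistants endpoints. The assistant’s name, instructions and messages are hypothetical placeholders, not anything OpenAI actually ships.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Define an assistant: standing instructions plus tools it may call
assistant = client.beta.assistants.create(
    name="Tax Prep Agent",  # hypothetical example, not an OpenAI product
    instructions="Help the user assemble a draft tax filing from their documents.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations live in "threads"; the API keeps the state for you
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Start preparing my taxes and tell me which figures you still need.",
)

# A "run" asks the assistant to act on everything in the thread
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
print(run.status)  # poll until "completed", then read the thread's messages
```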
--We will soon have "GPT-4 Turbo with 128K context"
This is a new deluxe version of GPT-4, which was released last spring (itself an improvement over the GPT-3 and GPT-3.5 that first went viral). The new “turbo” version should be more accurate and efficient than previous iterations because its context window (how much text it can consider at once) is far wider; that should make ChatGPT’s responses a lot less hinky. It’s also more up-to-date than earlier versions, moving its knowledge cutoff from September 2021 all the way to April 2023. Up-to-the-minute accuracy on chatbots is impossible — AIs take time to train — but it’s getting closer.
Finally, it will allow prompts up to 300 pages long — so you can feed it instructions as long as a book, if for some reason you find this necessary. I didn’t have time to write you a short letter so I wrote you a long one.
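For developers, switching to the new model is mostly a matter of changing one string. A minimal sketch, again assuming the v1 Python SDK; the file name is a hypothetical stand-in for one of those book-length prompts.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-in for a very long document (~300 pages fits in 128K tokens)
long_document = open("annual_report.txt").read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the DevDay "GPT-4 Turbo with 128K context" model
    messages=[
        {"role": "system", "content": "Summarize the key points of this document."},
        {"role": "user", "content": long_document},
    ],
)
print(response.choices[0].message.content)
```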
--We will soon have “new multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)”
Multi-modal is a word you wish an AI would go in and simplify. All this really means is that AI is going to get easier to use because it will understand more than just written words. Right now if you want DALL·E to create an image you need to explain what you want; soon you’ll be able to load up a bunch of images and have it work off them, like when you want to mimic Bill from Accounting’s slideshow but not so closely anyone will know. You’ll also be able to speak instead of typing your instructions. And it will go beyond just inputs to outputs too, so the AI can speak back to you in a bunch of HAL-like voices. FWIW.
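Here’s what a couple of those modes look like from the developer side, in a minimal sketch that again assumes the v1 Python SDK; the prompts and file names are hypothetical.

```python
from openai import OpenAI

client = OpenAI()

# Text-to-speech: the model reads your text aloud in a stock voice
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of several built-in voices
    input="Your slideshow is ready. It looks nothing like Bill's. Promise.",
)
speech.stream_to_file("reply.mp3")

# Image generation with DALL-E 3 from a plain-English description
image = client.images.generate(
    model="dall-e-3",
    prompt="A clean, minimalist slideshow cover about quarterly accounting",
)
print(image.data[0].url)  # link to the generated image
```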
--We will soon have “GPTs”
This one’s a little confusing because GPT (generative pre-trained transformer) is actually the technical approach that underlies large language models in general. But having created the general ChatGPT chatbot off of that approach, OpenAI is now going the other way and creating customized “GPTs.” Essentially what the firm will (work with others to) do is let people and companies create chatbots for themselves. One can imagine a GPT trained on the very specific data of what to do around the house, so your kids or Airbnb renters know what to do with the dishes. Or companies creating them to accompany their products. Basically, it’s a personalized, lifelike Google for every situation. A quantitative leap, if not a qualitative one.
Among the potential examples the company gives are Game Time (“I can quickly explain board games or card games to players of any skill level”; does this mean I can finally understand Settlers?); Laundry Buddy (“Ask me anything about stains, settings, sorting”); and The Negotiator (“I’ll help you advocate for yourself and get better outcomes”). Great! (But wouldn’t the person you’re negotiating with also have access to this?)
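There’s no public code for building these (GPTs are assembled inside ChatGPT itself), but conceptually each one is just the base model plus standing instructions. A rough sketch of that idea via an ordinary API call, with the Laundry Buddy persona paraphrased as a hypothetical system prompt:

```python
from openai import OpenAI

client = OpenAI()

# A custom GPT boils down to standing instructions layered on the base model
persona = (
    "You are Laundry Buddy. Answer questions about stains, settings and "
    "sorting, and politely decline anything unrelated to laundry."
)

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Red wine on a white cotton shirt. Help."},
    ],
)
print(response.choices[0].message.content)
```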
Part of what makes the whole presentation so tricky to grok is that no one — including OpenAI — will know what these changes will lead to. (Part of what also makes it tricky is that half these cases seem to rely on content someone else created and did not give permission to use. Altman glided through the presentation without much acknowledgement of this — saying only, slightly hilariously, that OpenAI would offer legal protection to any app developer that was sued for copyright infringement.)
But it’s clear that a world is coming in which AI will have both broader influence over our workaday lives and more specific applications to them. Whether this will greatly change our existence or simply allow us to outsource small parts of our logistics-brain is unclear. For that we’ll have to see what the outside developers do — they’re the ones, as OpenAI keeps reminding us, who are actually controlling this.
[OpenAI]
3. SO MUCH AI-CENTRIC SCIENCE INVOLVES THE TACKLING OF SPECIFIC CHALLENGES. Bringing in machine learning to help discover a miraculous new drug, or running a program to increase food supply, or imposing an advanced digital system to lower energy waste.
But what if we've been going about it all wrong? What if instead of deploying AI to solve a specific scientific challenge we USED IT TO CREATE SCIENTISTS IN THE FIRST PLACE? This way we wouldn’t have to start anew each time. We could just create them once and then these super-geniuses can go off and solve all our problems.
If you’re still puzzling through the words in that previous paragraph, don't worry, I am too, and I wrote them. But apparently this is real, or at least a real thought.
According to the San Francisco startup Future House, which was launched last week by a biotech researcher named Sam Rodriques, there’s a case to be made for using AI to create scientists — if not the hair-askew Doc Brown kind, then automated programs that can hypothesize and run experiments like them.
As the group further explains:
“The fundamental bottleneck in biology today is not just data or computational power, but human effort too: no individual scientist has time to design tens of thousands of individual hypotheses. At Future House, we aim to remove this effort bottleneck by building AI systems — AI Scientists — that can reason scientifically on their own.”
Their idea is the ultimate shortcut — instead of creating solutions to individual problems let’s try to create a problem-solver of sorts to begin with.
I say of sorts because right now AI has enough trouble knowing when the movie is playing, so devising the next Marie Curie seems like a bit of a stretch.
In fairness, Future House does call the project a moonshot and alludes to it really serving more of an assistive function to actual scientists, which seems much more plausible. “In 10 years, we believe that the AI Scientists will allow… every human scientist to perform 10x or 100x more experiments and analyses than they can today.” So some Santa Claus Ex Machina descending through the chimney to fill our stockings with Nobel Prize equations is probably not something it expects.
Still, the notion of accelerating scientific inquiry at a time when the world’s challenges are multiplying doesn’t seem like such a bad idea. And given the number of dubious or simply dead-end use-cases currently suggested for AI, at least this one has its heart in the right place.
Something to keep an eye on.
Or, have our AI agents keep an eye on.
4. JUST AS I WAS PUTTING THE FINISHING TOUCHES ON THIS ISSUE WEDNESDAY NIGHT, Hollywood screen actors announced they’d settled their historic four-month strike with the studios, which as we’ve noted in the past largely — and consequentially — concerned the use of AI versions of actors.
We’ll know more Friday about the details of the agreement. But as The Hollywood Reporter reported this week, the last sticking point was indeed AI — specifically, whether actors would be required to give consent (and get paid) any time one of their so-called “scans” was re-used in future projects.
It’s a mark of outright insanity — chutzpah, as my father would say — that studios would even suggest otherwise. “You’re so good in our movie that you wouldn’t mind if you starred in every movie from now until the end of time without any additional authorization, input or payment would you cool ok thanks bye” is a pretty ridiculous ask even for entitled billion-dollar conglomerates. And I would be shocked if it were met. (See what the WGA landed in this regard to realize which way the wind is blowing.)
My own two cents is that the studios never actually believed they would get this, the chance to just drop AI Meryl Streep or AI Eddie Murphy into a movie whenever they felt like it. Only someone myopic or desperate would ever agree to such a demand, and SAG-AFTRA throughout this process has repeatedly shown they are neither.
But the studios are trying to move the goalposts for future negotiations — whether that’s the next contract or even the levels of payment and finer points of permission on a given project (e.g., the required degree of studio disclosure, actors’ ability to weigh in on usage particulars). So they made a crazy ask here. Because as every MBA-holding, Glengarry Glen Ross-watching, Sun Tzu-admiring bro at the bar can tell you, start by giving them less than zero and then zero can look like a concession.
The actors laudably held their ground here. Let’s hope other potential members of the AI-exploited keep doing the same.
[Variety and The Hollywood Reporter]
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
DIGITAL CULTURE WILL SEEK TO CURE LONELINESS…THAT IT HELPED CREATE: -3
OPEN AI WILL MAKE EVERYDAY LIFE EASIER, AT JUST THE SLIGHT COST OF POTENTIAL COPYRIGHT INFRINGEMENT: -2
LET’S MAKE AN AI SCIENTIST! Why the hell not? +1
ACTORS APPEAR TO FORCE AN AI CAVE FROM STUDIOS: +2.5