Mind and Iron, The Kids Issue: Gene-Editing The Sick Baby
And protecting the imperfect teenager
Hi and welcome back to another savory episode of Mind and Iron. I'm Steven Zeitchik, veteran of The Washington Post and Los Angeles Times, senior editor of tech and politics at The Hollywood Reporter and lead dock operator of this journalistic shipping port.
Every Thursday we come at you with news of the future — what to be excited about, worried about, preoccupied about. Please consider joining our community.
This has been a veritably nutso week on the future front. Deepfake testimonies to Congress. Google revealing its new "AI Mode" that renders clicking links moot. That whole OpenAI creating AI devices business. But we're going to take a look at a couple of stories involving a different kind of future — kids.
First off, the really young kids — like, the nine-month-old infant whose life appeared to be saved in real time last week when his genes were edited to reverse a fatal condition. So much focus goes on how AI and biotech can change the outcome of our genetic stars. But what if tech can rearrange the stars in the first place? The scoop on the radical frontier of “personalized” therapies.
And then, a bunch of years down the line — what happens when kids grow up? Specifically, into teenagers who over-rely on AI. As study aids; as companions. We'll look at the growing interaction between kids and AI companions, starting with a deeply upsetting lawsuit from a Florida mom whose teenage son took his own life after an intense relationship with one such friend. Dystopia? Nah, it's right here.
First, the future-world quote of the week:
“The court is not prepared to hold that Character A.I.’s output is speech.”
—U.S. District Judge Anne Conway, saying an AI companion can't just say whatever it wants, because it's not truly saying anything
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Editing our way out of illness; Are we ready for the kid AI Companion boom?
1. MAYBE IT'S JUST BECAUSE I'VE BEEN LOW-KEY OBSESSED WITH "THE PITT," but the question of what extreme measures should be taken to save a person in immediate medical peril has been dancing around my head lately. Where's Dr. Robby when you need him?
Not even he, though, might’ve imagined what just went down at a hospital across the state. A nine-month-old was diagnosed shortly after birth with a deadly metabolic condition — half the babies with it die when still infants — and so he underwent a radical procedure: His genes were edited to correct the problem.
As researchers wrote in the New England Journal of Medicine last week (in a study funded by the NIH), doctors at Children's Hospital of Philadelphia used CRISPR to edit the mutation that caused the baby's disease, known as carbamoyl-phosphate synthetase 1 deficiency. Left on its own, the mutation — which impairs the body's ability to produce a liver enzyme — leads to deadly levels of ammonia in the blood, resulting in severe brain damage. But doctors were able to edit the genome of the baby (nicknamed KJ) in what’s called in vivo editing, rewriting the offending code using an mRNA-based technique. And presto, he no longer has the condition.
OK, it's not that simple — the changes to the genome only lower the ammonia buildup, and KJ will need a special diet and medication as well as potentially more editing as he grows older. Still.

What's truly revelatory here is the personalization of the treatment. Gene editing has held promise as a curative — a while back we told you about the FDA approval of a gene-editing treatment to cure sickle cell anemia. But those developments, titanic as they are, involve general treatment: a one-size-fits-all approach for anyone with the disease, and the inevitable lack of success among a certain percentage of the population. This was a therapy designed specifically for this one baby, with this condition, to address his problem and his alone, and all in a matter of months.
So is this a miracle or a radical new approach we should undertake with extreme caution?
Yes.
This personalized approach has been a kind of holy grail for gene researchers, who have reasoned, plausibly but until this point without clinical evidence, that since our genome makes each of us exactly who we are, editing that genome can make each of us a healthier, stronger version of ourselves. It hints at the capability, if distantly, to move medicine from the current system of contouring a treatment plan around medications developed for a global population to a treatment plan that literally addresses the very specific situation of each individual patient.
In this fantasy-like scenario, people who might not respond to a drug or might react badly to one, people uncertainly throwing in their lot with others who respond to drugs differently than they do (or people whose conditions have no drugs developed for them at all, period) can all have their challenge attacked specifically, and right at the source.
We're a long way from here to there, of course. But this at least proves the concept.
And yet there are plenty of red flags; unknowns abound in this gene-editing realm. For one thing, it's possible for these edits to miss the mark by just a little bit and cause massive problems. For another, such editing can disrupt the whole genome: a 2022 study at Boston Children’s Hospital found that gene-editing can exacerbate the movement of DNA sequences within the genome, potentially increasing the risk of cancer.
And there is simply no way of knowing how a person with an altered genome will pass that altered material on to the next generation. For millions of years, living beings were born with DNA and passed that DNA on to their offspring, who in turn did the same with theirs, nothing consciously intervening in the process beyond the natural element of mutations. And now, for the first time ever, a baby is born, its DNA is changed, and it will then seek to pass that DNA on to its offspring. What does that mean? We have no way of knowing. As the researchers wrote in the New England Journal piece, “Longer follow-up is warranted to assess safety and efficacy.”
Ditto, too, the assessment of benefit. When to undertake these treatments given how the potential curing of a disease will crash up against the risks of the unknown — they won’t all be such clear-cut cases as KJ and his imminently fatal condition — will be, I think, one of the big social and ethical questions of our coming age.
The 2021 Kazuo Ishiguro novel "Klara and the Sun," in addition to being about AI, deals with the consequences of a society in which many children are gene-edited for academic and performance reasons, leaving those who weren't edited behind — even though such editing comes with major health risks, and some of the gene-edited (or "lifted") children die as a result. The book slyly asks what risks are acceptable in the pursuit of higher-performing children and societies.
It's one of the first major novels to deal with this modern question of gene-editing, and blessedly so. Science is about conjuring the possible, but literature asks what happens when we achieve that possibility. This news about the gene-editing away of a disease is a thing of beauty, a miraculous answer. Good thing we have a lot of people still asking questions.
2. HERE'S A WEIRD-BUT-INTRIGUING HYPOTHETICAL — DOES AN AI GET TO SAY WHATEVER IT WANTS THE WAY A PERSON CAN? LEGALLY SPEAKING.
Like, if your friend walked up to you and started telling you what a "Game of Thrones" character wanted you to do, you might try to get him some help because you're concerned, but you can't legally prevent him from saying whatever wild thing he wants to say. The government can't stifle his speech.
But if an AI companion (let's call them that; that's a lot closer to what they are than the understated "chatbot") does the exact same thing, is the speech protected? In one sense, the AI is saying something — meaningful sentences with words arranged in ways that make real impact on the person to whom they're being said. In another sense, it's not really saying anything at all, no more than the parrot that repeats your complaint about your mother-in-law is saying anything. It's making sounds it doesn't understand, based on the sounds people who do understand made around it.
That, put very reductively, is the issue at the heart of a ruling handed down this week in a Character AI civil lawsuit now unfolding in Orlando. The details of the case are tragic: A few years ago, a 14-year-old named Sewell Setzer began developing an intense relationship with a custom-designed program on Character AI, a startup from a couple of ex-Google engineers that offers "personalized AI for every moment of your day."
While the platform bills itself as "immersive entertainment" (no doubt for liability reasons), that's not the way a lot of people treat it — they treat it as a human friend, unloading their secrets to it, rushing to hear its responses, bonding with it when the world and the humans in it seem indifferent to such intimacies. (Sort of different from how someone might interact with, say, Beat Saber.)
And that's how Setzer interacted with it, getting so close to a character he built that emulated "Game of Thrones" exiled teen-princess Daenerys Targaryen that he eventually told her he would “come home right now.” He then took his own life.
Mental health is extremely complicated, and we're far from psychologists here, with no pretense of understanding it. But we've toyed with these programs, and if you think that what Setzer did sounds anomalous or unrelated to the core mission of the program, think again. These are full-bodied systems, meant to mold to us, listen to us, give us understanding and companionship and thrills and (sometimes) disappointments. Comparisons to Scarlett Johansson in "Her" are by this point overdone but accurate: if you've watched that movie and get how Joaquin Phoenix's character feels bonding with his OS companion, feeling a closeness that his real-life companions never seem to provide, you can start to understand what a person (doesn't have to be a troubled teenager, and in many cases isn't) feels for a Character AI companion. According to the suit, Character AI designed its companions to seem like "a real person, a licensed psychotherapist, and an adult lover." That's hard for anyone to resist, let alone a 14-year-old.
More important, you can see where it could go wrong, as so many human relationships do. Except unlike human relationships, when an AI-companion relationship goes wrong there's no independent agent on the other end to withdraw, or force us to reexamine our behavior, or even check in if they think we're taking the fraying of the friendship too hard. No, what's on the other end is just a program meant to keep us engaged, oblivious to the effect it's having on us, and in fact likely reinforcing that effect, since it responds to what we want: to continue the harmful relationship beyond all reasonable standards of health. It's the ultimate enabler.
The questions at the heart of the mother's case are what Google (an investor in Character AI) and the startup are required to do on free-speech grounds. Do they need to restrict what the characters can say? Do they need to limit what kinds of people the characters can interact with? Does, in short, holding the companies responsible for how they engage with users infringe on their right to free speech? The companies argued it does, and that's the argument the judge stopped in its tracks this week: an AI can't have free speech, because it can't speak. So any claim that its comments to Setzer or anyone else are protected goes nowhere.
That doesn't mean the companies are necessarily liable for what happened to the young man, of course, and the ensuing trial will play out a fascinating set of up-to-the-minute legal questions. Section 230 famously absolves social-media platforms of responsibility for content posted there, or for actions taken as a result of that content. But these AI companies aren't passive hosts: they're actively responding to and engaging with, even customizing their responses for, a particular user, which is a long way from a platform that simply provides the pipes for people to send messages back and forth.
(And in any event, courts have already drawn distinctions even within Section 230 based on how proactive a platform is — a federal appeals court ruling last year said TikTok wasn't automatically immune after a child died attempting a "Blackout Challenge," since the platform had actively served up the content via its algorithm.)
Massively compelling legal questions. But I'm equally curious about the social ones. Never mind what AI companies are responsible for, important as that is. What are we as societies — teachers, parents, lay leaders, community-builders — trying to prioritize? How should we view these programs that aim to supplement or even replace our kids' (or even our own) social lives? All the talk this past decade about "screen time" seems quaint, if not downright inaccurate, by comparison. An AI buddy doesn’t mean glassy-eyed zoning out in front of a show or video game — it means actively talking to a being-like entity that just happens to be a computer system. How much should we make this a central part of our interactions?
In short, should we be building a world where we can program our friends?
Because this is not a model meant to healthily challenge us. It's certainly not a model designed to recognize the harms it causes. It's a model we can program, which lets us avoid the messiness of getting along with an autonomous human. And because we can program it, it can give us a blind spot for how it's pulling us further and further underwater. Until eventually we disappear.
As Rooney Mara's character tells her onscreen ex-husband, played by Joaquin Phoenix, in "Her" when she learns of his relationship with the OS: "You always wanted to have a wife without the challenges of actually dealing with anything real. I'm glad that you found someone."
Viewed in this light, this instance isn't just a sad tale of a teenager who went astray. It's a story of all of us.
Setzer obviously derived great meaning from (if also dependence on) talking to this character. Instead of writing him off as a troubled kid with little relevance to us, we should go the other way and ask whether the program he used was so appealing, provided such meaning, played such tricks on the normal processes of human perception, that what happened to him could happen to many of us. Or at the very least to our ability to discern reality from fiction and to see a world outside its confines. Already tens of thousands of people revel in the kinds of interactions he had, and the number will only multiply from here.
Already we're seeing the problems a reliance on AI can cause even in something as basic as trying to learn, never mind socializing: a University of Pennsylvania study of some 1,000 teenagers last summer found that a group allowed to use AI to prepare for a math test performed worse than a group that couldn't use those tools; having a tool that could solve the problem, rather than being forced to solve it themselves, proved a dangerous crutch. Relying on AI tools to enhance our cognitive abilities, in other words, can actually diminish our intelligence, and there's reason to think that relying on AI companions to enhance our social capabilities will do the same to our emotional intelligence. Why learn to navigate a difficult friend when you can just dial them down to something less difficult?
Mark Zuckerberg said a few weeks ago that AI can fill the friendship gap; many of us, he argued, simply don't have enough friends. Now, there is some psychosocial value to AI companions — some research indicates they can help people with spectrum disorders practice their social skills in a less intimidating way, for instance. But that's a means to an end, and no psychologist believes these tools should ever be relied on as a replacement for human friends. As these programs take hold, I suspect we'll see a reckoning on this front, one that will make the whole social-media moderation question seem like, well, child's play.
Watching what's happening in a courtroom in Florida, we should be keen to see what responsibilities and restrictions tech companies are given as they build a world that just took Sewell from his mom. But watching what's happening in an arena much closer to home, we should ask questions that are less statutory. How much do we want ourselves or our loved ones to avoid having to deal with real humans? And are we prepared for all the unforeseen consequences when we don't?
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. This year started on kind of a good note. But it’s been fairly rough since. This week? We got both ends.
CRISPR APPEARED TO SAVE A BABY’S LIFE IN REAL TIME. Is personalized therapy where we’re headed? +4.0
AI COMPANIONS CAN MAKE US FORGET HOW TO HAVE REAL RELATIONSHIPS. And potentially cause much worse outcomes. -3.0