Mind and Iron: Why AI Copyright is all of our fight
Also, the dead come back with GPT. And shopping for new arteries?
Hi and welcome back to Mind and Iron, a tech ice-cream parlor in exclusively human flavors. I’m Steve Zeitchik, veteran of the Washington Post and LA Times, here every Thursday doling out future scoops.
We’re heading into the melancholy final days of summer — August, truly the Sunday of months — so the ditty below will reflect said soulfulness. Also, since August is indeed winding down, we won’t be publishing next Thursday. So after reading today’s issue, take in those last boardwalk bites of saltwater taffy and chafe at the calendar flip, and we’ll be back right after Labor Day with our regular serving of substance and analysis.
Requisite reminder to sign up here if you’re reading this as a forward/on the Web.
Requisite reminder that you can browse our archive of all past posts here.
And a friendly reminder to consider pledging some spare loonies here if you haven’t yet done so — it will get you access to everything we’ll be doing in the fall and beyond, from the full complement of paywalled stories to podcasts, guest columns and more. Also, the money supports this newsletter’s important (we think) mission of shedding light on where tech is taking our society.
First, the future-world quote of the week:
“Does this make it easier to get over someone? Or just prolong the grieving process?”
—Daniel Smalley, a professor at Brigham Young University, commenting on AI efforts to bring back the personalities of the dead
This week: The healing powers of a controversial biotech company. The rising trend of tech resurrection. And the future of transportation just can’t get out of the past.
Also, very few people — least of all anyone in the legal system — seem to have any idea how to stop AI Big Tech from outright burgling creators’ work. I don’t want to say the Napster word…but we seem to be heading to a Napster world. And outside of a few valiant authors, no one seems to be doing anything about it.
Let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Mall-store arteries. Tesla cover-ups. Technologically bringing back the dead.
1. YOU MAY NEVER HAVE HEARD OF HUMACYTE BEFORE THIS WEEK. You may not have heard of Humacyte even after this week. But you probably should soon hear a little about Humacyte.
The biotech company made some quick headlines in recent days because of a pair of political mini-scandals. First it was revealed that, thanks to his early investment, an alleged Russian crime boss owns 9% of the company. (Eek.) Then it turned out Alabama senator and Armed Services Committee member Tommy Tuberville bought a bunch of Humacyte stock right before its price climbed, in what may or may not be a coincidence.
Notable. But not why you should know them.
Based in North Carolina’s Research Triangle and working in the field of “regenerative medicine,” Humacyte makes “human acellular vessels.” Which is not a very snappy name for an eye-popping product. HAVs are a cutting-edge technology in which arteries and veins are fashioned out of human cells, and then washed of those cell traits that would make another person reject them. They can then be placed inside anyone who needs them at a moment’s notice, allowing a badly wounded or diseased body to get an on-the-spot life-/limb-saving transplant. (Good explanation here.)
Ukraine used Humacyte’s tech during the early days of the war to save the lives of critically wounded soldiers on the battlefield; the moves generated optimistic findings from the medical journal The Lancet. The Journal of the American Medical Association’s Surgery publication ran a study last year examining a whole bunch of Humacyte patients that was similarly encouraging. (Obvious caution: the product is still in clinical trials with no FDA approval; a lot can go awry between here and there.)
These so-called “off-the-shelf” blood vessels could, in the long utopic term, vastly improve health prospects compared to the current standard of bypass surgeries. Nearly half a million people undergo that procedure each year because of clogged arteries, arterial damage or another condition. And it comes with rejection and other surgical risks. Synthetic blood vessels are also a thing, but they carry lots of infection potential. So far, at least, HAVs seem to sidestep those risks.
More philosophically, the new tech taps into a longtime promise of science fiction — that one day technology can simply be popped into our bodies to create instant healing. We’re talking Bones-from-Star-Trek-arriving-from-the-future-to-give-a-patient-a-pill-to-grow-a-new-kidney type of science fiction. (“What is this, the Dark Ages?”) Not for nothing has Humacyte attracted government attention — former HHS secretary Kathleen Sebelius chairs its board, while the Pentagon has handed the firm a $3.4 million contract.
Of course, the military use case is one reason this is all so fraught. Instantly healing battlefield wounds is terrific — but it also mitigates the perceived cost of war, which could lead to more of it. And one shudders to think about how someone with an unhealthy lifestyle could now be encouraged to lean in — “I mean, I can just buy a new artery anyway.”
But progress is progress. And a company that is increasingly showing that natural blood vessels can safely be grabbed and dropped inside a human body is certainly that. We're not in the Dark Ages, indeed.
[Newsweek and The Lancet]
2. MAYBE IT’S MY PAROCHIAL-SCHOOL EDUCATION, but the idea of a deity that revives the dead — a concept in so many of the world’s religions — has long fascinated me. The rest of this sentence may tell you something about your dear author’s childhood, but thoughts about both the finality of death and the potential for a supernatural force to somehow undo it occupied far too many middle-school reveries. (Probably why my grades wobbled. Probably also why the guidance counselor wrote that concerned note to my parents.)
Alas, there’s no scientific evidence for a deity being able to bring back either body or spirit. But maybe the prophets of old were on to something after all. Because the idea of tech-enabled resurrection has been percolating a lot lately. If new forms of GPT can replicate a person who does exist, then why can’t it replicate a person who once existed?
The idea is for an AI system to draw from a database of what a person once said, did and thought to re-enact those actions for the living. Earlier this year Vice wrote about Somnium Space, a company whose virtual world is perfecting a “live forever” mode that will “allow people to store the way they talk, move, and sound until after they die, when they can come back from the dead as an online avatar to speak with their relatives.”
In May, a number of outlets picked up the announcement of Seance AI, a company whose GPT-based program allows a short conversation with the recently deceased — enabling a form of closure that might have eluded them in real life.
And this week Wired published a story that listed some of the latest possibilities in tech-resurrection, including LifeNaut, which describes itself as “a web-based research project that allows anyone to create a digital back-up of their mind and genetic code” with a mission of “explor[ing] the transfer of human consciousness to computers/robots and beyond.”
Wired quickly pivots to the costs of implementing resurrection tech on a mass scale. And they ain’t low. “ChatGPT purportedly costs $700,000 a day to maintain, and will bankrupt OpenAI by 2024. This is not a sustainable model for immortality.”
But there is also another cost: the psychological one. Because we haven’t even begun to grapple with what this all means.
In the fall, I wrote a story about a Sam Walton AI hologram at Walmart’s Bentonville headquarters. It involved a company called StoryFile, which is creating AI-based interactive holograms that allow you to watch and ask questions of late relatives. More on them (they’re a fascinating outfit that grew out of a Shoah project) in another issue. Anyway, for the piece I interviewed Brigham Young professor Daniel Smalley, an expert in AI and holograms.
And he raised the question that I think should be on all of our minds with these resurrectionist AI programs. “Does this make it easier to get over someone?” he said. “Or just prolong the grieving process?”
Yep. Because temporarily tricking our brains into thinking a person is alive and then having to return to a world in which they are not is the kind of whiplash-y trauma that could do Meta-knows-what to our psyches. Plus the paleness of the AI copy could truly drive home how not here they are. Would you do it for a loved one? I’m not sure I would.
Advocates say all of this resurrection tech is really no different from current archival materials of the dead — cell-phone videos on steroids. But when it comes to remembering those we've lost it seems to me there’s a big difference between static media and actual interactivity — between a cold medium and a hot one. And I’m not sure either we or our therapists are ready to handle that leap.
The need to heal the ache of a lost one is real; in that regard these companies are doing a service. It’s what they’re doing to us afterward that should worry us.
3. FOR NO GOOD REASON, PLEASE ENJOY this futuristic rendering of a nuclear sky hotel that went viral last year — aka the “cloud cruise-ship.” It’s an incredibly fun transportation concept that almost certainly won’t be possible for many decades.
And yet it still seems a safer bet than Tesla, which according to Ronan Farrow’s rip-roaring Elon Musk New Yorker story this week asked the National Highway Traffic Safety Administration to redact important crash data, and which, as the story makes vividly clear, is also run by a man on whom the government has come to rely to a scary and possibly democracy-threatening degree.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
BIOTECH COMPANIES THAT CAN SELL US A NEW ARTERY? Hard not to be impressed. +4.5
RESURRECTING LOST ONES VIA AI: Amazing, but amazingly unknown psychological effects. -1
AI COPYRIGHT IS HEADED FOR THE SHOALS (below): Can the country’s creators do the improbable and face down tech companies? -2
The Mind and Iron Totally Scientific Apocalypse Score for this week:
+1.5
The Mind and Iron Totally Scientific Apocalypse Score for this month:
-4.5
MindandIrony
A possibly penetrating, perhaps droll comment on current tech developments
AI Copyright: How tech companies are pillaging all of us (with courts’ passive complicity)
It’s been a dizzying summer for AI and copyright.
First Sarah Silverman and two other authors sued OpenAI and Meta for allegedly training AI systems on their books without the writers’ permission.
Their contention: that work to which they hold the copyright should not become part of the corpus of AI without their sign-off. One big concern is piracy — “unjust enrichment” is among their legal claims. You can also toss in the broader spiritual fear that the work they create is feeding a hive-mind that can be distorted, or used to distort.
(An investigation from the Atlantic last weekend found that Silverman is just the beginning; everyone from Stephen King to Zadie Smith is getting fed into the AI thresher by giant tech companies. The prominent writing blogger Jane Friedman has also been drawing attention to the AI-generated ripoff books that somehow persist on Amazon and Goodreads.)
Then last week a court ruled on the opposite situation — whether the work of AI itself can be copyrighted. Its answer: it can’t. Words or images generated by AI can’t become copyrightable; there’s no person behind them. Copyright law can’t “protect works generated by new forms of technology operating absent any guiding human hand,” wrote U.S. district judge Beryl Howell, upholding a finding from the U.S. Copyright Office. (Of course Silverman would say there are humans behind it, just not any that granted permission, but distinctions, differences, etc.)
This second development feels perhaps like a victory for those of us who believe the tech companies training their AI models are getting away with murder. But I don’t think that’s quite the right read. Because taken together these two developments help form an eerie, law-agnostic reality. AI, at least for the moment, is both free to swipe the copyrighted material of others and unable to create a new copyright of its own.
Or put another way, the collective legal message from these two situations is: do whatever the heck you want with AI because copyright law isn’t going to enter the picture. If there was any doubt about this message, check out Google/YouTube’s brazen announcement this week that it will cut AI deals with some big music labels, while everyone else it swipes from can just go eat dirt.
This all seems like an untenable situation for a society that believes in copyright law as both a fundamental issue of fairness and as key to incentivizing people to create in the first place. If AI exists outside the realm of copyright law, with anything involved with the tech part of an all’s-fair Wild West, we’ll end up with a situation that will benefit tech companies and pirates — and completely disadvantage legit creators. We'll end up with…Napster.
A more reasonable legal approach would first be to obviously stop copyrightable material like the works of Sarah Silverman and Stephen King from being taken — I mean how is this even debatable? — and then to allow AI to attain copyright in a situation where it’s being fed legally permissible material. Because AI is indeed creating something new, and it’s silly and outdated to pretend otherwise. Worse, not granting it copyright actually undermines Silverman’s claim: if what’s being created isn’t a new copyrightable work, Meta’s and OpenAI’s lawyers could argue, then what are Silverman and all the other authors so worked up about?
(Btw just wait until the AI gets even better and all this goes from books and still images to TV shows and video games — then the theft starts totaling serious money.)
Tl; dr, if it seems too good to be legal — and both Napster and AI image-creation tools absolutely are — it probably is. But that doesn’t mean that the legal system will actually take care of it.
And that’s what’s most concerning of all: despite the weight of both legality and morality pitched against them, the big tech companies that want to cheaply produce all their irresistible potato chips — combined with too many users who either can’t or don’t know to resist their greasy appeal — will overcome that weight. In the courts and, if eventually not there, in the marketplace.
Web historians/anyone who was alive will recall the “information wants to be free” days of the late-1990s and early-2000s internet, in which a combination of then-powerful new technologies, petrified media companies and kid-in-a-candy-store consumers led to a Web awash in songs and articles no one had to pay for.
And as much as the media business tried to stuff the toothpaste back in the tube with a set of Johnny-come-lately paywalls — and the music business won some hollow victories with the Napster lawsuit and its ilk — the grooves in the record were set. Consumers got so used to all this free stuff that when they had to pay they basically started pirating media or stopped consuming it altogether. Much of the content business was eventually reborn as a subscription-streaming model — at a pennies-on-the-dollar rate for creators.
AI is now just offering a more complex 21st-century version of this same phenomenon — of theft hidden behind a newfangled filter. And it is just a filter. Because as much as tech companies like to project the message that AI is a powerful intelligence that should not be stopped (ah, the ol’ crypto “innovation” argument), how intelligent can something be that can’t do anything without the help of a lot of intelligent writers and artists? AI is autonomously powerful the way Professor Quirrell is autonomously powerful.
This tragic history no doubt is emboldening today’s creators to fight harder, and smarter. But to what end? Tech companies whose business relies on swiping this material are far more powerful and well-funded now than they were in 2000, and they hardly need an advanced intelligence to tell them that consumers able to quickly grab or create facsimiles of their favorite books, images and video games will make for even more useful unwitting allies than they did two decades ago.
As UC Berkeley’s business innovation expert Olaf Groth recently told CBS’ affiliate in San Francisco, this is “a bit of a Napster moment here…remember Napster 2001 when people thought music didn’t have to be paid for?”
The tech companies remember it all too well.
Great points; I don't think we have the vocabulary let alone the conceptual fluency to fully understand this yet.
I agree that a human writer consuming large amounts of Hemingway isn't that different from an AI trained on him. But I think that only covers half the question. The reading/training isn't what violates the copyright -- it's the spitting out of what's read into the marketplace. The Hemingway human who turns around and writes The Sun Keeps Rising or what-have-you is adding something ineffable to the mix that a machine by definition can't. Call it originality, experience, creativity, soul -- even a mediocre human writer is bringing an element that transforms the work from a simple synthesis. Ask a human on a desert island who's never read a book in their life to tell you a story and you'll get _something_, however wobbly. Do that with a machine that's never read anything and you'll get computer circuits and sand.
That's why I think in the end LLMs are closer to Napster than a human -- more elaborate and disguised than a traditional copy, sure. But fundamentally borrowed and not original. And thus an infringement of that which is.
Idk, I think the problem with copyright and AI is deeper and more complicated than most of the current conversation recognizes. Your point about whether or not something that produces content that cannot be copyrighted can simultaneously somehow violate copyright hints at the issues. Napster ultimately was just making copies. But that's not how LLMs work -- they're not really copying anything, just "learning" how words are associated across a trillion parameters. If I read dozens of novels by Stephen King and Hemingway and then write my own book, people might notice that the writing style is in the tradition of those writers -- but unless I copy them word for word, it's not copyright infringement -- if it were, art (which is always in conversation with what came before) wouldn't be possible. Certainly it would make no sense to say I'm violating copyright just by *reading* (training myself) on their works, would it?
On the other hand, LLMs aren't people, and I think intuitively we have a justifiable aversion to the idea that a supercomputer could "imitate" the writing style of Margaret Atwood and churn out a hundred books in her "style" over the weekend. But I'm not sure it's "stealing" in the same way copyright infringement is stealing.
Ultimately the whole concept of copyright itself is a human creation that would probably never have arisen without the invention of mechanical means of reproduction. If we want to address the problem we intuit here, I think we're going to have to think harder about what it is we're really objecting to and develop a legal framework that addresses that.