Mind and Iron: "AI means sharing our planet with a new form of intelligence"
An interview with the futurist who never misses. Also, Vegas goes robot.
Hey, the summer’s all gone, but here we are still standing.
A big hello-again from inside the ironic mind of Mind and Iron. I’m Steve Zeitchik, formerly of the Washington Post and LA Times, and your gallery owner for all paintings futurist. Every Thursday, we’ve got a glitzy new opening. (You can catch up on past installations here.)
We're back at it after a week away to close out the summer. The summer: a distant gauzy notion that must have happened — there are trinkets to prove it — and yet never quite seemed to happen in this timeline.
No matter — the fall can be pretty great too. The NFL is starting, Google’s on trial, life is interesting.
You know how we do things around here — if you’re reading this as a forward/on the Web, please hit the subscribe button.
And please consider dropping a few coins in the collection plate so we can keep at our brand of human-centric journalism. Other sites occasionally dip into the social and psychological consequences of all this tech change. We full-on submerge. Your pledge helps us do that. Also, it will allow you total access once the paywall drops.
First, the future-world quote of the week:
“AI means we need to get used to having other kinds of intelligences sharing our planet. No longer will it be the case that we’re the only ones who can do a lot of the things we’re used to doing.”
—Thomas Malone, MIT professor and future-seer extraordinaire
This week: The chorus to stop AI training theft gets louder. What activist musicians tell us about the future of labor. And can Vegas automation be a thing?
Also, the aforesaid Prof. Malone is one of the most interesting cats out there when it comes to future forecasts — he anticipated remote work and gamified finance years before Zoom and the crypto bubble. The notable author and MIT Sloan professor sits down with Mind and Iron to tell us everything our next AI decade holds.
Let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
AI Copyright Caterwauling; Vegas Automation Fears; NY Musicians Strike Back
1. AFTER OUR STORY LAST ISSUE ABOUT AI COPYRIGHT, I heard from a number of readers about how an AI training on massive amounts of books definitely is or isn't similar to the Napster theft of a few decades ago, and should absolutely/should absolutely not be legislated accordingly.
It’s an important discussion we’ll keep having, and this week a Wired story throws its voice into the room. The piece tells of the efforts of The Rights Alliance — a Danish anti-piracy group — to get Books3 taken down. Books3 is a rogue data set built on the 196,000 (!) books of an online shadow library, which is a fancy way of saying someone stole a whole bunch of books for an AI training tool. Meta, Bloomberg and others then went and used that tool. (It’s not that different from what OpenAI sometimes did to build GPT, n.b.)
Like the Sarah Silverman lawsuit against OpenAI over her book “The Bedwetter,” Rights Alliance’s move is part of the growing effort to stop the madness of creative stuff being fed unlicensed into the AI maw for eventual public consumption. And while some experts are skeptical we currently have the legal structures to halt the practice, the Wired piece noted some qualified victories already, at least against Books3. (OpenAI and its GPTs are still out there.) Bloomberg has said it won’t use Books3 anymore, and some data sites that were hosting it, including one called Academic Torrents, said they’d take it down. (Meta didn’t respond.)
One of Silverman’s lawyers, Matthew Butterick, put a fine point on why companies can’t just be feeding any book they like into their system to regurgitate out for us. “Open source doesn’t mean you took a bunch of people’s shit and gave it away for free,” he says. “That's theft.” Tough to debate that.
Ah, but people will. Two points made by the pro-ripping Big Tech crowd in the story show just how deep the misconceptions go.
The first came from the author of the piece, who suggested that even if the training data did come from a pirate library, maybe it didn’t matter. “To draw a parallel, if Sarah Silverman was suing a human writer for infringing on the copyright for her memoir ‘The Bedwetter’ — say, someone who wrote a suspiciously similar book called ‘The Bedwetter, Too’ — how said writer had originally read her work might not factor into the verdict. Whether the defendant had purchased a signed copy or flagrantly shoplifted a dog-eared paperback wouldn’t matter.”
The other comes from Academic Torrents director Joseph Paul Cohen, who said that “The greatest authors have read the books that came before them, so it seems weird that we would expect an AI author to only have read openly licensed works.”
First, I’m not sure I agree with the assumption that it doesn’t matter where the work originated; if an author held up a Barnes & Noble every day before heading over to their writing space, we might care about that too. But the bigger point that both of these arguments miss is this: the problem is not the reading.
Let’s say it again, just for emphasis: The problem is not the reading.
No, the problem is the producing — that is, the AI companies then taking everything they’ve read and, without fundamentally changing it (more on that in a second), making it available to us. They’re peddling material to which they do not own the rights.
See, the human who reads Silverman’s book and then writes their own comic childhood memoir adds something to the mix that a machine by definition can't. Call it originality, experience, humanity, creativity — even the most mediocre human writer brings an element that elevates their work beyond a simple synthesis of past influences. And that’s what makes what they’re doing potentially not problematic — and what Big Tech is doing inherently so. Because the machine is nothing without what it reads. Ask a human on a desert island who's never read a book in their life to tell you a story and you'll get something. Do that with a machine that's never read anything and you'll get sandy computer circuits.
The telltale sign that Cohen doesn’t get this is that he uses the phrase “AI author.” But AI is not an author. AI by definition cannot be an author, because an AI isn’t sentient. Sentient beings can be authors. What an AI can do is be a very, very slick synthesizer, combining what it has read before very, very slickly. But since it has never experienced any human emotion and possesses no consciousness, it cannot actually create the way an author would.
And so what an AI is peddling to the public royalty-free isn’t its own work influenced by what it’s read. It’s just… what it’s read. Or what others wrote. And we generally don’t like it when someone peddles hundreds of thousands of books other people wrote.
If current copyright law does not protect these rights (the laws were, after all, designed a long time ago), it should. And I suspect new laws ultimately will indeed address the issue, though I remain skeptical whether this will happen in time to help creators.
“Aren’t writers themselves reading a lot of other books?” the Big Tech-rationalizers like to argue. They are. And then they go out and do something an AI can never do: become an author.
[Wired]
2. IMAGINE VEGAS, BUT WITH EVEN LESS SOUL.
That's the scenario painted by a recent NPR piece, which took to the land of windowless neon caverns of metronomic beeping to ask how many of the people who work there could soon be AI-automated out of a job (and, by extension, how much of our experience there will soon involve machines over people).
The reporter found text bots making restaurant recommendations, automated hotel check-in kiosks and a "Tipsy Robot," an automated bar inside Planet Hollywood.
"Wherever the resort industry can replace their workers and not affect productivity, profits or the customer experience — wherever they can do that with artificial intelligence... they will," said the economic consultant John Restrepo.
Eh. I don't think our Vegas experience is changing so fast. First, the powerful Culinary Union's been on this like gin on ice; its last contract includes protective language such as "Advanced notification (of up to 6 months) of new technology implementation that could lead to layoffs and/or reduction of hours" and "mandatory free re-training to use new technology for current jobs." The union is currently negotiating a new contract and is prepared to go to the mat on this issue — just today it authorized a strike vote.
Second, it's really hard to program machines to move around even minimally unpredictable public spaces without all sorts of mishaps. Chaotic casinos? C-3PO himself would have a meltdown.
Also, many of the studies cited are old or not Vegas-specific.
But the biggest reason Vegas will still Vegas is that recent advances in AI are about advances in thinking. They're not about physical automation (that happened decades ago) and they're not about emotional connection (that's not happening for a while). And casino-resort jobs require a lot more of those last two elements.
The ideal employee at the craps table is a carnival barker meant to stir excitement in the group, something a person can do a lot better than a machine. Ditto the server bringing you that drink at the slots and encouraging you to part with that next fiver.
The best way to know that Vegas techification is marginal or gimmicky is that even the Tipsy Robot has a person on hand to correct its errant pours.
If anything, the heel-nip that machines can put on some of these jobs will increase those gigs’ emphasis on emotional connection, exemplifying a paradoxical truth about the AI age: it could actually bring more humanity to our public spaces, not less.
A rash of automated check-in kiosks and some glorified beverage vending? Sure. And those will be mildly interesting, socially and economically. Your croupier telling you about their latest teraflop upgrade? Don't bet the house.
3. THIS ISN’T A FUTURE STORY PER SE, BUT THIS WEEK I took a quick break from my usual tech-humanism mode to write a ditty on the labor battle being waged by musicians at a Lincoln Center venue.
There's nothing conspicuously tech-y about this story — these are performers from the august New York City Ballet who stuck with the company during the pandemic, and now feel like they're being given short shrift by the check-signers, who are proposing stagnant wages and hiked health-care premiums. For its part, management has been saying times are tough and these people get paid just fine. And so now the musicians are threatening to strike.
I bring this up only to highlight that fears of exploitation around AI sometimes miss the point. There are plenty of old-fashioned ways that worker neglect can and does happen, and focusing on how companies will use AI against workers is like a lost hiker focusing on how a bear will swipe their sandwich. Maybe it will. But shouldn't you be worried about everything else the bear can do?
That's the tough news. The good news is that, at least if this story is any indication, the future looks bright when it comes to workers’ rights. While it's unclear which way this Lincoln Center situation will go, it's entirely possible that management will be forced to make a fair deal. Because more than ever, disenfranchised workers can tap into a media-enabled consciousness; into social-media-enabled promotion (look at how many of those WGA strike signs went viral); and soon enough — and this is where things will really get fun — into a degree of activism and strategy that AI can actually help devise.
Maybe above all that, what's really changed is not just tools but a vibe — a tech-enabled transparency zeitgeist that, for all its worries about privacy, makes it ever-harder for big organizations to bury their abuses.
My old boss and mentor Marty Baron talks (and lives) a new ethos of accountability that has been permeating journalism for several decades now, scoring numerous highs like the #MeToo movement. The future is even more visible, and thrillingly justice-filled. The bear can try to hide in the woods. But the trees are increasingly being cut down.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
A RISING CONSCIOUSNESS ABOUT AI COPYRIGHT? Woohoo. +1.5
A RISING CONSCIOUSNESS ABOUT WORKERS’ RIGHTS: Aw yeah. +3.5
VEGAS POTENTIALLY FILLED WITH EVEN MORE TECH THAN TECH BROS?: -2
The Mind and Iron Totally Scientific Apocalypse Score for this week:
+3.0
The Mind and Iron Totally Scientific Apocalypse Score for this year:
-1.5
IronClad
Conversations with people living the deep side of tech
Paging Professor Prophet
For the better part of the last four decades, Thomas Malone has been opining in ways just a little more prescient than the rest of us can manage.
A former scientist at the Palo Alto Research Center and now the founding director of MIT’s Center for Collective Intelligence, Malone has been ahead of the curve on So. Much. Future. Stuff. He published a paper on the gamification of finance back in the 1980s; was talking about remote work back in 2004; and was writing five years ago about how much AI would change our lives.
I last talked to him in the fall of 2021, when he was forecasting with insightful clarity how the work world would evolve post-pandemic.
I was curious what he made of the current moment, with so much change besetting us, especially on AI. So I caught up with him by Zoom from his home in Massachusetts. The conversation was edited for brevity and clarity.
Mind and Iron: It’s been a pretty heady time since we last talked — AI is not only capable of quantum leaps, but people know about and actively use it. What do you make of this crazy pace of change?
Thomas Malone: It’s interesting — I had been working with GPT-3 for about six months when we last spoke in 2021, and one thing we noticed even then was that it could create software about 30 percent better than professional programmers, while making non-programmers able to do more basic tasks as well, as fast — and as cheap — as the experts. So it’s been clear for a while what this can do. But even with that, something I think we need to keep in mind is that it’s not the computer replacing the human — it’s the computer stimulating the human.
M&I: What do you mean by that?
TM: One idea is to let designers or creators type in a phrase describing what they want, and the system uses generative AI tools to generate semantically different versions of the prompt. And then each of those phrases generates images. It’s not the computer replacing the human, it’s the computer stimulating human creativity by generating possibilities the human might not have thought of, and taking all the time and effort out of creating the first images. But it still requires human involvement in specifying the problem in the first place and then judging which of the outputs are best.
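(Ed. note: for the technically inclined, here's a minimal sketch of the brainstorming loop Malone describes — the machine proposes, the human disposes. The function names and toy "variation" logic are hypothetical stand-ins for whatever language and image models you'd actually plug in, not any particular product's API.)

```python
# A minimal sketch of the prompt-variation workflow Malone describes.
# NOTE: expand_prompt() and render_image() are hypothetical placeholders;
# a real version would call a language model and an image model here.

def expand_prompt(prompt: str, n: int) -> list[str]:
    """Produce n semantically different rephrasings of the designer's prompt."""
    return [f"{prompt} (variation {i + 1})" for i in range(n)]  # placeholder logic

def render_image(phrase: str) -> str:
    """Render one candidate image per phrase; returns a file path in a real system."""
    return f"image_for::{phrase}"  # placeholder logic

def brainstorm(prompt: str, n_variants: int = 5) -> list[tuple[str, str]]:
    """The loop: the human specifies the problem, the machine fans out options."""
    variants = expand_prompt(prompt, n_variants)
    return [(v, render_image(v)) for v in variants]

if __name__ == "__main__":
    # The final, judging step stays human: the designer reviews and picks the best.
    for phrase, image in brainstorm("a logo mixing gears and neurons"):
        print(phrase, "->", image)
```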
M&I: Couldn’t an AI do that second part too?
TM: Yes, but that doesn’t mean there isn’t creativity left for the humans. Photographers are genuine artists because of how they use their cameras. They’re not painting and drawing the way artists once had to; they’re pointing and clicking and adjusting parameters. I think we need to think of generative AI like that.
M&I: But a camera still feels like an implement to me. It’s not being proactive the way AI is.
TM: Maybe another way to think of it is that every good artist needs a manager. Or a creative consultant. Or a mentor.
M&I: Interesting. So AI is going to be our sort of protégé, and we as humans its guiding hand.
TM: I was at a conference recently, and on one panel an artist who designed the logo for the conference walked us step by step through how he did it. He used an AI image-generation program — I think it was DALL-E 2 — to help him. Mixing ideas and fooling around and eventually getting something that really worked. His artistic insight and judgment were critical to the product even though the computer actually created the final result.
M&I: Still marginalizes humans a little bit.
TM: I think of it as a partnership. Part of what I think we need to do here is get used to having other kinds of intelligences sharing our planet. We already share our planet with dogs and cats and forests and other kinds of things you can say are intelligent, and in a sense AIs are kind of like alien intelligences. They’re new on the planet; they’re alien.
But they’re also not really alien, because we created them and we still have quite a bit of control over them. I think we’re going to need to get comfortable, to get wise, about how we manage these new kinds of intelligences. No longer will it be the case that we’re the only ones who can do a lot of the things we’re used to doing. I think this is going to be an important transition for humanity, and I don’t think it’s guaranteed to be a good one.
M&I: Certainly our track record in how we treat other intelligences does not suggest it would be…
TM: I like to say ‘if we choose wisely it will turn out well.’ In the last chapter of [my most recent book] “Superminds” I talk about this. One thing that I think leads people to be worried is that they’re influenced by science fiction, which is of course influenced more by human nature than AI nature. Many of the other human groups we come across are competitive and want to kill us. But there’s no reason that I know that computers will feel this way. They may be more like dogs or horses in terms of another form of intelligence that can do a lot of very useful things.
Again, I think this could go very badly. But the choices we make as a society about this will be at least as important as the capabilities of the technology.
M&I: That’s actually a founding principle of this platform, the Asimovian idea that these semi-autonomous intelligences can head us either to utopia or the abyss, it just depends on how we use them — that just lamenting the apocalypse is a kind of lazy and dangerous offloading of responsibilities. So what do we particularly need to be on guard for in your view?
TM: I’d say AI weapons are probably a bad idea. They’re already out there and they’re getting smarter and I think using these things should be considered a war crime. Maybe it’s not too late to do that.
M&I: You and several other thinkers and researchers recently proposed a global monitoring group for this stuff, a “Global AI Observatory” that would monitor and warn us of dangers. How much do you think that could help?
TM: I think something like that is very important if we are going to make wise choices. To some degree it will happen on its own but we’re at serious risk of it not happening fast or broadly or wisely enough. So I think a global AI observatory is really important.
M&I: I also wanted to ask about a more immediate hazard: AI copyright. Some of the arguments by Big Tech seem egregious to me — that somehow they can just swipe anything anyone wrote and put it in the AI blender and they have no financial or other responsibility for what smoothie it stirs up.
TM: I’ve read your coverage of this and think it’s serious too. I wrote an op-ed recently for Bloomberg in which I propose something called “learnright.” Basically it’s laws that don’t restrict copies but restrict what an AI can learn. Now, it could be opt-in or opt-out — either you’d actively choose to have your work be part of the training, or you’d actively choose to keep it out. Either way, there would be some protection.
See, the issue is that the laws we have today are designed for a world in which the only kind of intelligence that’s relevant is human intelligence. We now have a different kind of intelligence on the planet, and we need to make sure our laws are written with those possibilities in mind.
M&I: On the subject of sharing the planet with new intelligences, I’ve been thinking a lot about how we might use AI in everyday professional and personal settings. You like to use the phrase ‘computers in the group’ — that they’re going to be an important piece, but just a piece, of decisions we undertake.
TM: Precisely. We’ll still be in charge. We’ll just want to be consulting the computer.
M&I: One thought I had in this regard is using it at meetings. It might seem a little crazy to our 2023 eyes, but I don’t think it’s really that farfetched to think that in business settings we’ll one day be going around the room saying ‘Ok Bob, what do you think of this new campaign? What about you, Sally? Ok AI, what do you think?’
TM: I think AI is going to be used in all kinds of settings, and that is certainly one of them. And then if its comment is helpful we say ‘good comment’ and we train it, so it keeps getting better, and then it becomes an even bigger part of our decision-making.
M&I: That’s interesting. A true example of a computer in the group — just sitting at the table, as one of us, becoming smarter and smarter.
TM: Exactly.
M&I: I also wonder about dating applications. I mean, Tinder and the like all use algorithms already — why couldn’t AI become an essential part of deciding who we’re romantically involved with?
TM: I hadn’t thought of that. Like, it suggests who you should go out with and then you can veto it or not.
M&I: Yep, and who you should break up with too.
TM: I think you’d call an application like that AI Yenta.
M&I: There are worse use cases.
TM: There certainly are.
M&I: Like weapons.
TM: That’s definitely a bad one.
M&I: Thank you, as always, for at once scaring and exciting me.
TM: You’re welcome.