Mind and Iron: The inventors of AI just won the Nobel Prize. It could soon look very cringey.
A Swedish shocker. Also, revisiting Deepfake Patient Zero.
Hi and welcome back to another briny episode of Mind and Iron. I'm Steven Zeitchik, veteran of The Washington Post and Los Angeles Times and lead pilot of this island puddle-jumper.
Every Thursday we aim to give you the best of tech, science, progress and the future to determine where we're headed and how we should feel about it.
Before we dive in, I wanted to give a shout-out to a story I did last week for my employer The Hollywood Reporter on the state of OpenAI. The company is somehow both at the precipice of huge cultural impact and the cusp of a great intellectual unraveling, and my reporting (I hope) captured this dialectic. You can check out the piece here.
So do you recall the story out of Baltimore last year in which a high-school principal was caught on tape making horrible racist and antisemitic comments, only for him to turn out to be the victim of an AI slander? Or maybe you just remember the first part, which is exactly where the danger lies in this new world — as a visit to him months later reveals.
Also, Amazon has made a very small AI tweak to how packages get to us — with some potentially big implications.
And in the biggest news of the week, the two godfathers of AI, Geoffrey Hinton and John Hopfield, just won the Nobel Prize in Physics (!).
“This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning," the Nobel Committee said, in our future-world quote of the week.
Only one problem — Hinton doesn't think all that machine learning is a good thing.
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
AI will be to the “benefit of all mankind?”; how a deepfake victim is trying to claw back from the dead; Amazon’s deceptively big AI step
1. SO LET’S START WITH THE NOBEL.
No one can deny the impact that John Hopfield and Geoffrey Hinton, who this week won physics’ most prestigious honor, have had on the world. Or the universe. In fact, in some ways it's astounding the pair — at ages 91 and 76 — weren't so honored before.
Hopfield in the early 1980s created the "Hopfield Network," in which a network "is trained by finding values for the connections between the nodes so that the saved images have low energy,” as the Nobel Committee put it. Hinton three years later devised the "Boltzmann Machine," which "can learn to recognize characteristic elements in a given type of data."
Yeah, so those sound like "Big Bang Theory" episodes. In plain terms: Hopfield figured out how a machine can store and recall images (kind of a crazy thing for a piece of silicon to do, when you stop and think about it), and Hinton then enabled it to recognize how similar a recalled image is to other images. The latter has been iterated upon time and again, and Hinton and others in recent years have advanced these comparative abilities into the full-on machine learning (and text and image generation) we know today.
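If you're curious what Hopfield's idea actually looks like, here is a toy sketch in Python: a tiny network stores one six-pixel "image" and then recalls it from a corrupted copy. This is a minimal illustration of the general technique, not the laureates' code; the pattern and numbers are made up.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: store the patterns in a symmetric weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Repeatedly update the units; the state settles into the low-energy
    stored pattern closest to the input. The network 'remembers' it."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties toward +1
    return state

# Store one tiny "image" (pixels are +1/-1), then recall it from a copy
# with one pixel flipped. The pattern is invented for illustration.
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])
print(recall(W, noisy))  # recovers [ 1. -1.  1. -1.  1. -1.]
```

That settling-into-low-energy behavior is where the "energy" language in the committee's citation comes from.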
[If you want to know what any of this has to do with physics, you're not alone; the Nobel Committee slightly strains a muscle trying to convince us, using phrases like the "network as a whole is described in a manner equivalent to the energy in the spin system found in physics" and he "used tools from statistical physics, the science of systems built from many similar components." Well, where were you going to put a prize for ChatGPT — literature?]
Anyway, the $1.1 million that the American Hopfield and Canadian Hinton will split is unquestionably deserved, their discoveries up there with past winners’ work, like the research that made X-rays and holograms and quantum mechanics possible (all of which I learned, in a turn that Hinton and Hopfield would greet with an I-rest-my-case smile, via Google’s AI Overview. And then checked against human sources.)
But unlike nearly all of the theoretical discoveries and many of the practical ones, H&H's work isn't an unassailable event, a solid circle of positivity or even neutrality. There exists a giant hole in the middle of their research into which God knows what might fall — it's a scientific bagel, if you will. The AI that was constructed on Hinton and Hopfield’s foundation will yield powerful new applications. But what those applications are is unknown and potentially scary.
How do we know this? Because Hinton himself has told us. In what has become a famous set of pronouncements, roughly akin to Prometheus ruing the whole fire thing, or the Oppenheimer sitting-in-Truman's-office-at-the-end-of-the-movie thing, Hinton has said repeatedly over the past 18 months that he doesn't know what he's unleashed on the world and that perhaps we should all take a step back so we can find out.
"We have no experience of what it’s like to have things smarter than us," Hinton told CNN, even right after winning (!). "We also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.” Faulkner accepted his Nobel by saying that "man will not merely endure: he will prevail." This is a…different vibe.
Whatever one's personal level of worry about the so-called Skynet scenario (and my own pales compared to more imminent dangers), one has to take this warning seriously. The person behind the technology is urging us not to uncritically accept what he has created. Seems worth listening to.
(Some accelerationists, of course, juke this in a particularly Lamar Jacksonian way, embracing the changes that Hinton made possible while ignoring so many of his forecasts about where they could lead.)
Hearing the news Tuesday, I found myself wondering how this Nobel would play fifteen, ten or even five years out. Will it seem like a cringey choice, honoring people for doing something that brought us so much harm? Will it be prescient, a clear salute to men who made the world a better place?
Or will it live in the mushy in-between, making us happy for the lives it's improved and wincing for those it’s ruined?
What's perhaps most worrisome here is that the win could actually change the experiment (the Heisenberg uncertainty principle — another physics Nobel winner!). Because when an impartial, august group like the Nobel Committee declares that the godfathers of AI did something "for the greatest benefit of mankind," as the Nobel's mission statement has it, it offers a kind of unintentional shield for what comes next.
A Nobel for AI allows executives to espouse, consumers to embrace and companies to invest in a realm that might otherwise be viewed with more apt caution. "Why slow us down with pesky annoyances like regulation?” they might say. “Even the Nobel Committee endorses what we do!” This isn't Time's Person of the Year, marking someone who "for better or for worse" has influenced the year. This is the Nobel Prize, honoring the greatest benefit to mankind. (No, like, literally it’s right in the mission statement.)
The AI Safety people, of course, have plenty of counterargument fodder for those who would use it as a fig leaf — including the warnings of the winner himself. Technology is powerful, they can say, citing Hinton — but should it override the people it exists to serve? AI can do a lot — but should we shove aside the humanity it can never begin to replicate?
Or, put another way, "[Man] is immortal, not because he alone among creatures has an inexhaustible voice, but because he has a soul, a spirit capable of compassion and sacrifice and endurance...It is... [a] privilege to help man endure by lifting his heart, by reminding him of the courage and honor and hope and pride and compassion and pity and sacrifice which have been the glory of his past."
That quote? It’s from Faulkner’s Nobel speech too.
2. YOU MAY REMEMBER THE STORY OF THE HIGH-SCHOOL PRINCIPAL IN SUBURBAN BALTIMORE who went viral for making offensive comments about Black people and Jewish people last January.
According to audio of the incident, Pikesville High School principal Eric Eiswert said some truly terrible and offensive things about a number of groups. Millions of social-media views followed, along with outrage, death threats and calls for him to be fired. Eiswert was put on paid administrative leave.
The rest of the story is heartbreaking. Eiswert denied right off the bat that he had said any of these things, and immediately suspected an AI deepfake. A forensic analysis proved (after much scrutiny — the kind social-media posters won’t apply) that it WAS a fake, and police arrested and charged the school’s athletic director, a 31-year-old named Dazhon Darien, with theft, retaliating against a witness and stalking. (We really don’t have the criminal categories for this yet, do we.)
Darien allegedly had an ax to grind against Eiswert over an accusation that the athletic director had stolen money. His contract wasn’t going to be renewed, prosecutors allege, and they say he was trying to get Eiswert discredited to save his own skin. (Seriously, this is a Sundance movie waiting to happen.)
Whatever the motive, law enforcement finally deemed the audio a fake. Eiswert never said any of those horrible things.
You’d think such exoneration would, after too much unnecessary pain, have finally closed the chapter for the principal. You’d be wrong. The BBC just spent a whole chunk of time in Pikesville. It turns out Eiswert has had to be transferred to another school, still faces threats and still encounters a lot of people in town who think he made those comments, facts be damned.
“I honestly believe that a lot of people here in this city don't really know that that's not true,” one Pikesville resident told the BBC, in what was the investigation’s big bright flashing money-quote. A tech-enabled scam has ruined a person’s standing. And there’s nothing anyone can or will do about it. I mean, this story had a happy legal ending. Law enforcement got to the bottom of the matter and unearthed the deepfake. And it still won’t make a bit of difference.
This isn’t to say, of course, that AI shouldn’t be welcomed for what it can do. Deepfakes, or synthetic media, are and will increasingly be a part of our landscape. And they can do some very cool things. But they can also do some very nefarious things. And if we’re going to let out of its cage a beast that can manage all of that, we should probably have the requisite legal and social tools to bring it back when it runs amok.
‘Course the world seems to be going the other way. After a Silicon Valley lobbying blitz, California governor Gavin Newsom two weeks ago vetoed a bill that would have made the developers of AI models liable for major harm their tools cause. The veto prompted the bill’s sponsor, Sen. Scott Wiener (D), to decry this as “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”
And anyone who, like Eiswert, was just doing their job when an aggrieved person with a laptop came along.
3. AS FUTURISTIC INVENTIONS GO, THIS ONE LOOKS LIKE THE SMALLEST OF THE SMALL. YET IT’S SURPRISINGLY TELLING.
Amazon this week announced that, starting early next year, a new AI-powered system will be implemented in many of its vehicles. Some 1,000 electric delivery vans will be equipped with “Vision-Assisted Package Retrieval,” or VAPR, which will shine green and red lights on a package for delivery drivers when they pull up to a given location.
No more sifting through packages or coming up with a system to organize them by geography. The driver pulls up, the system “reads” the labels of a whole vanful of packages and cross-references them with the location, then greenlights which package the driver should grab. A few minutes are saved, and the driver is off to your front door and the next destination.
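For the mechanically curious, the cross-referencing step the company describes might look something like this toy Python sketch. To be clear, every name and structure here is hypothetical; Amazon hasn't published VAPR's internals.

```python
from dataclasses import dataclass

# Toy model of the matching step; the names are invented, not Amazon's API.

@dataclass
class Package:
    label_id: str
    address: str

def packages_for_stop(van: list[Package], stop_address: str) -> list[Package]:
    """Cross-reference every scanned label in the van against the current
    stop, so the right boxes get the green light and the rest stay red."""
    return [p for p in van if p.address == stop_address]

van = [
    Package("A1", "12 Oak St"),
    Package("B2", "98 Elm Ave"),
    Package("C3", "12 Oak St"),
]
for p in packages_for_stop(van, "12 Oak St"):
    print(f"GREEN: {p.label_id}")  # the driver grabs A1 and C3
```

The hard AI part, presumably, is reliably reading the labels in the first place; the lookup itself is trivial, which is exactly why it's so easy to roll out at scale.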
This would seem to be the most basic/benign AI labor tool you can imagine — the system ain’t replacing anyone. It’s simply helping a person at work. When Silicon Valley evangelists suggest that AI isn’t coming for anyone’s jobs but instead just helping them do those jobs, this is solidly what they have in mind.
In fact, one imagines these little use cases creeping into so many areas of our lives — an AI that better knows what we want and where we are and can thus relieve us of the sundries of daily existence. (Imagine, e.g., pulling up to a supermarket and AI alerting you to exactly which items on your list are and aren’t in stock before you even walk through the door.)
And yet when you spin this harmless little delivery helper further into the future, what do you have? Not a job that’s easier. That would be true if the delivery person worked for themselves. They don’t. They work for a system that makes money according to the time it takes people to do their jobs — by having them do more work in the same amount of time. At heart, what this tool achieves isn’t letting the delivery driver exit the truck faster; what it achieves is persuading the system that its driver can deliver more packages.
So an Amazon that is already known for the pressure it puts on drivers to fit in a large number of deliveries in a day could now, with the required time decreasing, demand even more such deliveries.
An AI improvement, in other words, may not be an improvement at all — the only thing it may raise is management demands.
Labor-economy types call this “The Efficiency Paradox” (a spinoff of something called the “Jevons Paradox”). Basically it says that when you make a task easier or cheaper, you invite so much more demand for it that you just end up back where you started, or worse. (Kind of like how a workplace tool such as Slack is supposed to make us communicate more efficiently, but too often the ease of use just makes us communicate more.)
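To make the dynamic concrete, here's a back-of-the-envelope sketch in Python. Every number is hypothetical, chosen only to show the shape of the math: shave a minute off each stop and the shift doesn't get shorter; the expected stop count goes up.

```python
# Back-of-the-envelope math; all numbers are made up for illustration.
shift_minutes = 9 * 60      # a 9-hour route
drive_per_stop = 5          # minutes driving between stops
sort_before = 2             # minutes digging through packages, pre-VAPR
sort_after = 1              # minutes with a VAPR-style assist

stops_before = shift_minutes // (drive_per_stop + sort_before)  # 77 stops
stops_after = shift_minutes // (drive_per_stop + sort_after)    # 90 stops
print(stops_before, stops_after)
```

The saved minute per stop doesn't come back to the driver as slack; it comes back to the system as roughly thirteen more deliverable stops per shift.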
The labor economy is complex and its AI days are early. On its face, VAPR is the kind of tool any company would be happy to use. But such tools also demonstrate how even incremental advances should be scrutinized for their unintended consequences and responded to accordingly. Labor unions, for example, might want to ensure contractually that the little tech tools that ostensibly make life easier for human workers don’t simply make a company expect more as a result; that’s not really an improvement for workers.
When it comes to AI and the labor market, increased productivity is good. But we should be decidedly on guard for how that can bleed into the increased demands that go along with it.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. Can 2024 do better? The year started off strong but the summer wasn’t great. And this week isn’t faring that well either.
A NOBEL FOR GEOFFREY HINTON: But will it spotlight his warnings or Nobelwash them? -1.0
DEEPFAKE VICTIMHOOD LASTS A LOT LONGER THAN WE THINK: -4.5
AI WORKPLACE PRODUCTIVITY TOOLS A MANY-SIDED COIN: -1.5