Mind and Iron: What the Hezbollah pager attack says about tech, war and the future of security
Also, has OpenAI ushered in the era of the reasoning machine?
Hi and welcome back to another bubbly episode of Mind and Iron. I'm Steven Zeitchik, veteran of The Washington Post and Los Angeles Times and pastry chef of this gluten-free bakery.
Every Thursday at M&I we bring you developments from the fast-changing world of science, technology and business, offering guidance on what to get excited about and what to be wary of. We're owned by and owe debts to no one. Our only obligation is to all our fellow beautiful humans — and to maintaining our humanity in the evolving days ahead.
The past week has been a crazy one in the world of tech. First the most cutting-edge AI tool our civilization has ever known was released into the wild, as OpenAI put out o1, a model that can purportedly reason like a human and be deployed (possibly) to solve knotty math and science problems.
And then a few days later brought perhaps the most low-tech wartime attack of the modern era, involving pagers, powder and Hungarian shell companies in Israel’s strike on Hezbollah. We'll sort through both ends of that spectrum.
And we’ll close with a look at a new California law signed this week that seeks to limit what studios can do with AI actors. Will it work or be as vaporous as a hologram?
First, the future-world quote of the week:
“Turn off your phone. Take out the battery!”
— A public call in Lebanon after hundreds of devices sold to Hezbollah operatives were detonated simultaneously.
Let's get to the messy business of building the future.
Iron Supplement
Everything you do — and don’t — need to know in future-world this week
The techno-shift that the Hezbollah attack points to; has OpenAI created a genuine reasoning machine?; Hollywood actors now can’t be AI-replicated without consent
1. WE ALL SAW THE NEWS WHEN IT HAPPENED. BUT DID WE REALIZE HOW CRAZY IT WOULD TURN OUT TO BE?
At the same moment Tuesday, hundreds of devices belonging to Hezbollah operatives detonated, causing numerous deaths and injuries both to the suspected operatives and to some bystanders, all part of a likely mission by the Israeli military/Mossad. Our rule at M&I is not to wade into geopolitics, and we won't break that credo here. But what this event and the reaction to it say about technology, security and our changing attitudes to both is our domain. And plenty fascinating. So worth some delving.
The first assumption — before much news was out — was that this was a sophisticated high-tech attack. Somehow Israeli operatives had managed to deduce where the devices were (at the time they seemed like smartphones) and then blow them up. Sort of a crazy proposition. Spying is one thing (indeed some of the early rumors had this as the handiwork of Pegasus, the Israeli hacking software). But causing a smartphone hundreds of miles away to explode is the stuff of Bondian fantasy. How could this be? Could AI have somehow picked out these devices and melted them down in a way that caused this much damage?
The next wave of the foamy news cycle washed in the opposite direction. This wasn't AI. In fact the detonated devices weren't even cell phones. They were pagers, the famously clunky (though cutting edge in their time) one-way communication devices that let you know someone was trying to reach you without any way to do something about it, the carrier pigeon of the 20th century. The truth, it turned out, was analogue.
And how did these pagers blow up? With the most old-school of on-the-ground operations. As the news cycle soon made clear, Israeli operatives had inserted the plastic explosive PETN in with the battery, enabling a detonation. And they didn't just insert it by infiltrating the manufacturing process; they manufactured the pagers in the first place. The entire production line came from B.A.C., a Hungary-based company that seemed longstanding and legit — they had a licensing deal with the Taiwanese electronics company Gold Apollo — but was really a front for the Israeli government. The whole company was created and run by Israeli intelligence officers for exactly this sort of operation.
As the riveting NY Times account has it, "B.A.C. did take on ordinary clients, for which it produced a range of ordinary pagers. But the only client that really mattered was Hezbollah, and its pagers were far from ordinary." The whole scenario was so fantastical that reportedly when Ben Affleck was pitched it this week as a sequel to "Argo," he took one look at the treatment, smiled, and tossed it aside saying "Nah, that would never happen." (How B.A.C. won a contract with Hezbollah is one detail that remains unclear; we'll have to wait for the actual movie on that one.)
There's a small lesson and a big one here when it comes to technology. The small lesson is that what seems like a major innovation is often anything but. Sure, this could have been some sophisticated operation. But it also could be (and was) the opposite — a 1980s communication device filled with explosives first deployed in World War I, planted by people conducting on-the-ground spycraft out of the 1950s. For all the hype over the latest tech, analogue methods still have their place.
But it goes beyond that. Because it's not just that analogue methods were used in the attack. The analogue methods were only possible because of skepticism about sophisticated technology.
You might be wondering why Hezbollah was using pagers in the first place. And the answer is as eye-opening as it is ironic: they were afraid Israel would target their devices if they didn’t.
In February Hezbollah leader Hassan Nasrallah told his followers to stop carrying smartphones and switch to the low-tech modality of pagers.
“You ask me where is the agent," he said. "I tell you that the [smart]phone in your hands, in your wife’s hands, and in your children’s hands is the agent.”
He asked his operatives to shun all their high-tech iPhones and Androids. “Bury it. Put it in an iron box and lock it.”
The Israeli government leveraged this fear of technology — leveraged, in a sense, Luddism — to execute precisely the tech operation Nasrallah was afraid of. The attack, after all, couldn't have been executed with smartphones — those are available everywhere and could hardly be custom-shipped en bloc to Hezbollah with any assurance. But it could be carried out with the specialized production of pagers, which required ordering from a niche company, thus allowing B.A.C./the Israeli operatives in the door.
The obvious, almost literary conclusion here would be that which we do to avoid our fate actually seals it, and so we might as well plunge headlong into the new. I don't think matters are as simple as that — unquestioningly embracing unknown new technology isn’t a great idea either. But the incident does complicate the idea that the best way to preserve our humanity is to eye warily the latest innovations. The new isn't always what will get us, and the old isn't always a refuge. Reflexively fearing the fresh technology can be as counter-productive as reflexively embracing it.
But there’s one last turn of the screw.
Because as much as this attack was carried out low-tech style, the high-tech devices aren’t going anywhere. We all carry them, and their many exposures to the broader world, around with us, inviting bad actors to target them. And now a smartphone, as casually unthreatening as our own fingers, seems capable of packing a threat. And just wait until VR glasses and other wearables become more common; then we’ll really know how intimate the threat can get.
I don’t know what effect this will ultimately have on our attitude toward technology. I didn’t see anyone walking the streets of New York today scared of their phones (nor of the poles they crash into while staring at them). But only a naif wouldn’t detect a shift. This tech we carry around, for so many decades feared only by the paranoid and the CIA officer, just became a little more threatening; the risk that an interested party can use our own banal accouterments against us just became a little more real. Tally it up as one more everyday activity this century has made fraught, from air travel after 9/11 to going inside crowded places after the onset of covid.
As with all of those, life, I suspect, will go on. But new safeguards will be put in place, new anxieties will descend and new disasters will unfortunately come to pass. Some may lament the passing of a simpler world; others may simply chalk up all these ills as the costs of living in modernity. Whatever our reaction, it’s clear that an attack aimed at Hezbollah in Lebanon didn’t just happen to other people living different lives far away. It reverberated all around the world, right into our pockets.
2. JUST AS WE WERE PUBLISHING LAST WEEK’S EDITION CAME NEWS THAT OPENAI HAD RELEASED WHAT IT CLAIMED WAS ITS MOST CUTTING-EDGE MODEL TO DATE, o1.
Unlike all the GPTs that came before, this program is meant to “reason.” Which is another way of saying it doesn't just search all the text out there and synthesize a logical-sounding response but actually (allegedly) proceeds through a progression more akin to human thought. It can thus not just spit back facts (or, as ChatGPT often does, a wan synthesis of various human writings) but actually solve problems, at least of the technical sort, like math equations and coding challenges.
OpenAI touted the extra buffering time as inherently additive. "We trained these models to spend more time thinking through problems before they respond, much like a person would," the company said, implying these added seconds would add up to a better response.
That’s of course not automatically true — I know more than a few humans who take a long time to answer a question; it doesn’t mean they’re smarter. But at least in theory the added time should provide quantitative if not qualitative enhancement to what’s being spit out. That’s because o1 is using the added time to approach a query differently than previous models did — the system is using “reinforcement learning,” a points-and-demerits system that over time guides a machine to improve its accuracy.
Such a system has actually long been used in training AI models but seems (OpenAI has been very vague about this) to now be used more directly for actual problem-solving. This still hardly seems like the full story, as even the best computer carrot-and-stick system still can’t really lead to reasoning as we think of it. But at least that’s OpenAI’s explanation.
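To make the points-and-demerits idea concrete, here's a minimal sketch (my own toy illustration, not anything from OpenAI's actual training setup): two made-up answering strategies earn a point when they get the right answer and a demerit when they don't, and the learner gradually drifts toward the one that keeps scoring.

```python
import random

# A toy "points and demerits" loop: a hypothetical illustration of
# reinforcement learning, not OpenAI's actual method. Two made-up
# strategies try to answer "what is 12 * 12?"; correct answers earn
# a reward (+1), wrong ones a penalty (-1), and the running scores
# steer which strategy gets picked next time.
strategies = {
    "guess_quickly": lambda: random.choice([124, 144, 154]),  # often wrong
    "work_it_out": lambda: 12 * 12,                           # always right
}

scores = {name: 0.0 for name in strategies}
learning_rate = 0.1

for _ in range(1000):
    # Mostly exploit the best-scoring strategy, but explore occasionally.
    if random.random() < 0.1:
        name = random.choice(list(strategies))
    else:
        name = max(scores, key=scores.get)

    answer = strategies[name]()
    reward = 1.0 if answer == 144 else -1.0          # point or demerit
    scores[name] += learning_rate * (reward - scores[name])

print(scores)  # "work_it_out" ends up with the far higher score
```

Swap in harder questions and far fancier strategies and you have, very roughly, the accuracy-nudging dynamic the company is describing.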
So, in a phrase, does it work? Is this new approach actually capable of solving problems in a way all previous AI systems cannot?
Some users were out there doin’ the testin’.
A Verge reporter watched a demo in which o1 was asked the following question: “A princess is as old as the prince will be when the princess is twice as old as the prince was when the princess’s age was half the sum of their present age. What is the age of prince and princess?” It solved the riddle in 30 seconds.
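For the curious, here's a quick way to check the riddle's algebra yourself (my own working, not the demo's, and assuming "the sum of their present age" means the sum of both current ages):

```python
from sympy import symbols, solve

P, Q = symbols("P Q", positive=True)  # P = princess's age now, Q = prince's

# "When the princess's age was half the sum of their present age":
# that was P - (P + Q)/2 years ago, so the prince was then...
prince_then = Q - (P - (P + Q) / 2)

# "When the princess is twice as old as the prince was [then]":
# that moment is 2*prince_then - P years from now, so the prince will be...
prince_later = Q + (2 * prince_then - P)

# "A princess is as old as the prince will be [at that moment]":
print(solve(P - prince_later, P))  # -> [4*Q/3], i.e. ages in a 4:3 ratio
```

Any 4:3 pair satisfies the statement (8 and 6 is the tidiest), which is part of why the puzzle reads as so slippery in plain English.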
Impressive, as I puzzled over the riddle for twice as long and still felt half as smart. Is this reasoning, though? Or a really, really advanced calculator, the way ChatGPT isn’t literature so much as a really, really advanced summarizer?
In a less controlled environment, a UCLA math professor named Terence Tao gave the system a whirl and found it good if not great on advanced math.
Tao said he gave o1 a “complex analysis problem” and concluded that it could “work its way to a correct (and well-written) solution *if* provided a lot of hints and prodding, but did not generate the key conceptual ideas on its own, and did make some non-trivial mistakes.”
He added, "The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student."
Well, the idea that it can cosplay as a graduate student at all is kind of striking given that the writing quality on most GPTs seems more akin to a college freshman’s.
Yet this still feels like a magic trick more than thinking. After all, solving the problem is nice. But if it were really capable of thought it would, as Tao says, generate the key conceptual ideas on its own.
In this way I think OpenAI has both done something cool and is obscuring the truth with its “reasoning” branding. The cool part comes with the simple fact that AI can now solve a problem and “think” in the Kahneman System 2 sense of the word — it’s actually working through a problem that takes effort instead of just skimming people’s answers for essentially the most upvoted response. That’s not nothing. No machine intelligence before could really do something like this.
But this type of problem-solving is hardly the human brain’s most distinguishing trait — originality is. The really mindblowing result — the real gamechanging application that would lead to AGI and all that the Altmanian prophets predict — would be an AI generating something new, even if just a fresh critique of the problem. The real determinant of its human-like intelligence wouldn’t be working through an existing theory — it would be writing a new one.
Now, it’s true, plenty of humans can’t do that. But at least some can. And if no machine can, then we’ve still not made the quantum leap OpenAI claims — we still haven’t, for better or worse, bridged the fundamental chasm between humanity and AI. In fact even if a machine and a person get to the same answer, they’re kind of using different routes to get there. I don’t want to say the machine way is inferior. But it is…illusory.
We are very early in seeing how this will roll out. Will coders and other humans who need to think through a problem be able to rely on o1 to do it for them, and if so what does that mean for their jobs and AI’s larger societal role? Will this ability to think through a problem infiltrate other modes of our existence? Does everything from climate-change modeling to dating-app algorithms (all of which now deploy AI essentially as suggestions for humans to take or discard) become reliable enough to generate meaningful solutions without human intervention?
Does this kind of reasoning lead to another word that you hear thrown around a lot by OpenAI evangelists — “autonomy”? Because that would not only make o1 philosophically significant. It would make it useful.
Time and the testers will tell. And of course the model can evolve, both in the lab and in the problem-solving world. What we know right now though is that o1 isn’t snake-oil — it’s something. But it’s also not everything, or even a lot. Basic reasoning would tell you that.
3. FINALLY THIS WEEK, WE LAST TOLD YOU ABOUT CALIFORNIA GOVERNOR GAVIN NEWSOM WHEN HE SIGNED A JOURNALISM BILL LAST MONTH THAT SEALED THE DOOM OF TOO MANY REPORTERS AT THE HANDS OF BIG TECH. But credit where credit is due — this week he put pen to paper on a bill that might just save a few creative/media jobs.
Newsom this week signed into law two bills that make it harder for studios to use performance replicas — that is, AI-generated versions of an actor built from their body of work — to generate video for a new movie or show. This was a major issue during the Screen Actors Guild strike of last year, with actors rightly fearful their past roles would be mixed up and blendered to ensure they never get any new ones.
Under the bills, a dead celebrity's estate must always give permission for a replica (there would have been legal challenges even without this codification, but this toughens the protections); a studio that uses replicas must offer a specific description of how it will do so in any contract with an actor (hopefully preventing it from just unleashing replicas all over the movie); and the bills generally bolster the idea that a performer must give consent before a studio can start churning out AI performances from their work (say, using outtakes for a sequel).
Consent was the big issue in the Screen Actors Guild strike last year. Now with these bills — in fact sponsored by SAG — actors ostensibly have the power to agree, to refuse, or to refuse unless they're paid.
Problems still abound. Consent is hardly the cure-all, not in a cutthroat industry where free will is relative and studio pressure tactics are legion. Newer or younger actors without leverage could be pushed into giving consent. Acting is a hyper-competitive business for all but the most vaunted — someone new is always waiting to take your role — and I fear a reality in which only the actors willing to sign away their replica rights are given the choice roles. A law on the books doesn't always speak to the reality on the ground.
Fortunately there are guilds to negotiate these deals — the next SAG contract could find ways to prohibit a studio from turning down an actor on this basis or even require that a certain percentage of roles on a project be allocated to actors who don’t give consent. Studios will still try to grab, but actors increasingly have tools to grab back.
I still feel recycled performances from actors who are nowhere near a set (/this earth) are where our entertainment world is going — the tech will be too good, the appeal too shiny and the bean-counters too dollar-conscious. This will bring massive cultural implications. (For a fictional look at where this could all lead, check out our story “Reboot” from our Summer Fiction Series).
But the bills are a good sign. The biggest state in the country — and the one where so many entertainment companies are based — now has some serious guardrails against simply taking an actor’s work and mashing it up into something new. Hopefully these barriers keep getting built.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. Can 2024 do better? The summer wasn’t great. September so far? Pretty solid.
THE HEZBOLLAH ATTACK SHOWS THAT TECH IN OUR POCKET IS NOW A WORRY: -2.0
OPENAI’S NEW REASONING PROGRAM CAN…REASON (MAYBE): +1.0
ACTORS GET SOME FRESH LEGAL PROTECTION FROM AI RIPOFFS: +3.0