Mind and Iron: Doomers, Boomers and the Glory of Q*
Making sense of the OpenAI craziness -- and why we should all be concerned
Hi and welcome back to Mind and Iron. I hope you all had a very nice holiday weekend and tried not to think about how AI is going to dominate our lives (too much).
I’m Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and chaplain of this ecumenical gathering.
We normally come at your inbox Thursday with all the news about how tech is shaping our future. But with so much happening since we last communed, I thought I'd send out the newsletter a little earlier this week. We're going to spend this issue parsing what the heck happened with OpenAI last week, as CEO Sam Altman was pushed out and then pushed back in, and where this now leaves us — parsing, really, the state of the future. And why you should care about all of this.
Because beyond the business news, there is A LOT to say about the competing AI worldviews that underlie this drama; this wasn't just a personality clash. It can be hard to know what each of these ideologies stands for — let alone who we should stand with. So this issue will break it all down. Which way of thinking might prevail — and which way of thinking those of us who care about a human-centric future might want to prevail.
As always, please sign up here if you’re reading this as a forward or on the Web.
And please consider pledging a subscription ahead of our paywall dropping in the next couple months. If you value a voice that both makes sense of the AI craziness and also isn't carrying any water for Big Tech companies, those will be dollars well-spent.
First, the future-world quote of the week:
“I really hate how ‘AI safety’ is used to mean the entire paranoid fantasy around ‘preventing AGI that is hostile to humanity’ instead of ‘addressing the real harms being done by AI algorithms in use today.’”
—Software engineer Hana Lee, cutting to the heart of the AI-risk debate
Let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
How we got here; where we’re going; what the heck is Q*?
1. WHEN ALL WAS SAID AND DONE AT THE END OF LAST WEEK, SAM ALTMAN WAS still in charge of OpenAI, his deputy Greg Brockman was still in the No. 2 spot, all the employees remained in place and the company still had a close but nominally arms-length relationship with Microsoft.
Altman was fired by the board the previous Friday, took to social media Saturday, and said Sunday he was decamping to Satya Nadella's Microsoft. And yet nothing really changed.
After all that made-for-TV drama — in which Altman briefly entered the headquarters of his company with a guest badge — everything was the same.
Ah, but like a T-shirt vendor in Bangkok could tell you, everything was the same but also different. Because in the coup to remove Altman, and the counter-coup orchestrated by investors and Nadella (with a boost from loyal employees), the whole power dynamic had changed. As had the board.
Ilya Sutskever, the chief scientist who, we noted last week, embodied a more cautious approach to AI, was booted. So was Helen Toner, the director of strategy at Georgetown's prestigious Center for Security and Emerging Technology, who is similarly concerned about safety and recently published a paper about it. (A paper Altman, um, did not like.)
Also gone from the board: Tasha McCauley, a robotics engineer and RAND Corporation researcher who has expressed concern about the power of AI and about which people get to wield it. (Props to any reporter who tried to reach out to Joseph Gordon-Levitt, her husband, to elicit a response; you do get points for trying.)
The only member left from those black-and-white American Frontier brake-pumping days of early November 2023 was Adam D'Angelo, co-founder of the question-and-answer site Quora.
In the others' place was a group unlikely to challenge Altman, both because they're not philosophically predisposed to do so and because they lack the incentive; if your appointment is explicitly the result of Sam Altman liking you better, how willing are you to repeat your guillotined predecessors' mistake and try to stop him? These folks include former Salesforce co-CEO Bret Taylor and former Treasury Secretary Larry Summers — not exactly people known for their nuanced warnings about the dangers of tech.
While not on the board himself right away, Altman will likely regain his spot soon enough. And a few more people might yet be named to puff the board up to as many as nine members and generally tie its hands from speed-bumping Altman (I can tell you from years as a business reporter — the bigger the board, the less effectual it is).
Whether Microsoft will get a seat, btw, is an open question. You'd think the company with a $13 billion investment and a 49 percent stake would want that, if only to prevent another dastardly attempt to slow down the freight train of capitalism. But Nadella et al may want to puppeteer from afar since, if everything goes skis-up, who really wants to be facing Congressional subcommittees? "Mr. Chairman, we let them run as an autonomous company and aren't involved day-to-day," is a line that Nadella no doubt has at the ready but would also no doubt prefer never to have to utter on C-SPAN.
A little more on the dynamics of Altman's push-out: Toner's paper was critical of OpenAI, especially compared with breakaway rival Anthropic. She argued that OpenAI's release of ChatGPT (a year ago this week!) created "race-to-the-bottom dynamics" that pushed the wider tech world to cut corners on safety. She also noted "criticism for many other safety and ethics issues…copyright issues, labor conditions for data annotators, and the susceptibility of their products to 'jailbreaks' that allow users to bypass safety controls." She and others may also be worried about something called Q* (pronounced Q-Star). More on that in a minute.
Toner is right about all of her fears, of course. One hardly needs to channel Nostradamus to see how the headlong rush into new tech platforms can do untold damage to our psyches and democracy — one simply needs to have lived through the past ten years.
But a CEO who’s part of that race doesn’t want to hear that. And a CEO who’s part of that race really doesn’t want to hear that from his board member. So the battle lines were drawn. (How much Sutskever, who by all indications is sympathetic to Toner’s view, was an active driver versus a passive cooperator is up for debate.)
Anyway: tick off Microsoft, Sam Altman and the biggest venture capitalists in the world, and you're unlikely to win, no matter how much rightness is on your side. And so here we are.
(For more on Toner’s work and her concerns, check out this video from a couple years ago.)
None of this should really surprise us. I mean, OpenAI is a company with a valuation of up to $90 billion and the backing of one of the biggest tech firms in Christendom — were they really going to let a few conscientious engineers and researchers get in their way? In fact, it's a testament to OpenAI's roots as a nonprofit that someone like Helen Toner was even on the board in the first place!
Maybe she, McCauley and Sutskever extracted some concessions from Altman behind closed doors — perhaps about the race ahead to GPT-5, or obligatory disclosures, or some other due diligence. But given the forces they were up against, I wouldn't count on it.
So how much power does this leave Altman with now? Anytime a board fails to remove you and instead gets removed itself, you're probably pretty bulletproof. Especially if so many employees — and the people who pay the bills — rally to your side.
A few observers wondered if Altman had to give up something to come back, particularly with the retention of D'Angelo, who may not share his full accelerationist view. The sharp AI expert Gary Marcus suggested that, given Altman's at-least-temporary loss of his board seat and other factors, this was "not winning, it's compromising, and yielding some power." There's also an investigation forthcoming that could plausibly turn up some Altman misdeeds.
But the consensus is that Altman is back and stronger than ever — with a friendlier board and, thanks to how easily he drew so many tech kingpins to him in this, a warning shot for anyone who might even think about attempting this in the future.
So OpenAI is here and obstacle-free.
Which should make us feel…
2. THAT’S THE BIG QUESTION: WHAT DOES AN UNFETTERED ALTMAN MEAN FOR THOSE OF US WHO EMBRACE NEW TECH BUT ALSO WANT HUMANS AT ITS CENTER?
To hear the principals tell it, this was a victory not just for OpenAI, not just for the good people working in tech, but for nothing less than humanity as a whole. “We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners,” Satya Nadella posted on X after the coup was defeated. The week before he noted the company just wanted to “rapidly innovate for this era of AI…[to] continue to deliver the meaningful benefits of this technology to the world.”
Well, the head of a massive tech corporation just wants to deliver meaningful benefits, what could go wrong?
A whole bunch of stuff, of course. Just not exactly what some of the objectors are worried about. (Or, more accurately, are painted by Altman as worried about.)
A quick step back: You may have heard this whole fracas described as a battle between "accelerationists" like Altman and "decelerationists" like Toner, or "utopians" vs. "doomers." According to this framing, there are people who want to go fast with AI and people who want to go slow; there are people who see the tech as inherently good and people who see the tech as inherently concerning. This is superficially true — but only superficially.
The main bone of contention has been (or is said to be) the speed at which the current AI tech will reach Artificial General Intelligence, or AGI. AGI is basically the point at which a machine can reason like a human. When will that happen? It could be a few years, it could be a decade, it could be never. And it's this that has (supposedly) split the camps, with the Toner types (allegedly) worried about what could be unleashed with AGI — worried in a "doomer" way about giving machines so much power over humanity.
And Altman et al have come in and essentially argued that — please muster as much false naivete as you can when hearing this spoken — “this idea you’re espousing of machines taking over is completely silly; we should push ahead and feel confident we’re able to control it.”
Now, let's leave aside whether statements like this are actually true. It may or may not be the case that a machine that reaches AGI will be controllable; science fiction may be just that, or it may have been onto something. But again, let's leave that entirely aside. Because here's the thing that Altman and the accelerationists never tell you: as a society we could never reach AGI and still wind up with plenty of problems from outsourcing so much of our thinking and de facto decisionmaking to machines.
Problems that we’ve sometimes mentioned in this space. Concerns that Toner and other decelerationists have raised that don’t require going into “Terminator” territory, no matter Altman’s attempt to caricature them as such. Worries whose seeds are in fact already present and will only grow in the coming years.
Here is a very short list of those concerns off the top of my head.
1. Bias and racism
2. Hallucinations and persistent general ignorance
3. Copyright infringement and the undoing of creative protections as we know them
4. Cognitive declines
5. Ruthlessly efficient product marketing
6. Political disinformation on steroids
7. Widespread hacking of weapons and the electrical grid
8. New forms of digital addiction
9. Job displacement
10. A whole new level of persuasive scams
11. Social challenges and/or dysmorphia
12. Mass surveillance
If you give me another five minutes I can come up with another dozen.
All of this is already possible even with AI at its current primitive levels — just think of the fresh risks the past decade of online life has posed to us in those last two departments alone — never mind in a time when we upload so much of our brains and lives to ultra-smart machines.
Now, it would be silly to tether any of these concerns to AGI. None of this has anything to do with AGI; a computer may never be able to reason like a human yet all of these pitfalls can lurk in the brush and trapdoor us just the same. To hear Altman and the accelerationists talk, though, you’d never think any of this is something to be worried about.
Literally the day before Altman was fired, he said this about AI at the Asia-Pacific Economic Cooperation summit in San Francisco: "I think this will be the most transformative and beneficial technology yet invented." If Altman thinks the doomers make sweeping statements with little empirical basis, he may want to take a look at his own utterances. Back in February he also said, "I think AI is going to be the greatest force for economic empowerment and a lot of people getting rich we have ever seen." Airtight argument, who could possibly challenge it.
(We won't get too deep into it, but Altman is actually articulating a once-marginal Web ideology known as e/acc, short for "effective accelerationism," which in its simplest formulation argues that AI will in and of itself create utopia. It stands in contrast to — and offers no more evidence than — the school so many of the doomers come from, known as Effective Altruism, or EA, which you may have heard a little bit about and which basically argues that the wealthy should spend their money to stave off harms and do good. None of this will be on the final.)
Simply put, the whole “Terminator” fear is a gift to the Altmanians because it allows them to write off a whole group en masse. “What, you’re going to let a bunch of Luddites win?” is the subtext of pretty much every one of their arguments, allowing them in one fell swoop to wipe out the legitimate concerns from some very tech-savvy people who are anything but Luddites. (China, incidentally, is another one of these dismissals. Never follow a hippie to a second location, and beware anyone who says “but otherwise China wins” as a way to score points in an AI argument. If you have any doubt about the squishiness of their reasoning, a small movie named “Oppenheimer” would like a word.)
Anyway, the straw-manning the Altmanians do in characterizing the Tonerians is pretty stark. As the software engineer Hana Lee so brilliantly put it on Bluesky:
“I really hate how ‘AI safety’ is used to mean the entire paranoid fantasy around ‘preventing AGI that is hostile to humanity’ instead of ‘addressing the real harms being done by AI algorithms in use today.’”
So this whole framing, pushed largely by Altman and Big Tech, is off. In fact the issue goes down to the level of language. "Accelerationists" vs. "decelerationists," or "utopians" vs. "doomers," really doesn't paint an accurate picture of what each side is arguing. This isn't at heart simply about going fast or slow, and certainly not about rosy-eyed problem-solvers versus a bunch of schleppy Eeyores. This is about people who, in their rush to release new products and make money, aren't stopping to consider the consequences, and people who are cautious. Let's call them the corporatists and the advocates instead.
And the corporatists, by pretending the advocates are only worried about a far-off improbable risk, can avoid the actual ones.
It's upsetting, if hardly surprising, how many of the big outlets barely even noticed the stakes here, let alone lamented the result. The NY Times, for instance, did point out that "the battle between these two views appears to be over," that "Team Capitalism won" and that this represented a "triumph of corporate interests over worries about the future," per a column by Kevin Roose. But he seemed oddly unbothered by the implications. He ended his column on an almost triumphant note: "Now, the utopians are in the driver's seat. Full speed ahead."
Among the few outlets to air some skepticism was The Guardian, which ran an op-ed from the UCLA tech and law scholar Courtney Radsch. Radsch noted that “amid the flurry of efforts in the US and Europe to ensure the development of ‘responsible’ and ‘safe’ AI, the harms and risks of massive concentration in the generative AI ecosystem have been largely ignored or sidelined” and pointed out potential challenges ranging from “rampant disinformation and manipulation to addiction and surveillance capitalism.” Many other big outlets have stayed silent.
And you can forget about the investors. Among the few I heard offering some restraint was a wealth manager out of D.C., Malcolm Etheridge. “We have to marry our greedy ambitions as investors with the idea that maybe there really is something to be concerned about if the people who are raising concerns have something to say,” he refreshingly told the Big Technology podcast. His voice did not rise above the din.
There are reasons to listen to the corporatists; falling into the lazy cultural construct of Team Altman vs. Team Toner (or in the nerdy parlance of tech people, Team e/acc vs Team EA) seems sillier than forcing a choice between Taylor Swift and Katy Perry.
Because of course moving fast in some AI realms is a good idea, and of course we should be happy in these limited regards to have so much money and momentum at our backs. As we noted last week, there are use cases in which a full-blown rush into AI could do nothing less than save scores of lives. The deployment of the tech for drug discovery and medical diagnostics, for instance, will undoubtedly allow for crucial interventions earlier, on everything from cancer to depression.
With an aging Boomer population, computers that can act more like humans could also stave off a worsening of the loneliness epidemic. And I'm extremely optimistic AI can help with the climate crisis, on both micro- and macro-levels. These are massive benefits, and they shouldn't be denied.
But the consequences of too many other use cases — AI assistants and digital companions and large-scale decisionmaking algorithms — are far more mixed and at the very least seem to merit a lot more research and oversight before they are unleashed on everyone from 8 to 80.
It’s theoretically possible — notice I do not say probable — that some combination of Big Tech self-regulation and patchwork government regulation will step in to mitigate some of these risks. The Jets can also theoretically win the Super Bowl.
Also, while each side has something to recommend it, they’re not the same. Literally, as in one has all the power and force of capitalism behind it. And the other side isn’t saying NOT to develop all this AI, just to do it carefully, with a little safety research and oversight before express-shipping it to the nearest digital store.
Alas, we now have an OpenAI leadership unhindered by those pesky obstacles — and, worse, armed with the perfect rhetorical weapon to use against anyone who might put one up. "What are you, some kind of doomer?" seems for the moment able to stop all cautionary voices in their tracks, instantly reducing anyone who raises a concern to an AGI-obsessed paranoiac who's spent too much time with "The Matrix" and "The Terminator."
In the end we should beware any corporatist who wants us to focus on a hazy threat potentially forming over the distant horizon — it makes it all too easy to take our eye off the dangers materializing right in front of us.
And we should beware anyone who says something new will make us all rich. They might only be speaking of themselves.
3. OK THIS Q* THING.
Speaking of machines reasoning like humans, that brings us to Q*. In past eras, a Q-name meant something cuddly, like a funky '80s video game, or something nefarious, like a ridiculous conspiracy theory.
But now Q* is at the center of the AI debate.
It came to light, via Reuters, that one of the flashpoints between Altman and the board ahead of the whole failed coup was that researchers at OpenAI had apparently found a way for machines to potentially solve math problems on their own. The board read a letter from researchers about it, Altman didn't agree with it, and you know the rest.
This may not seem like a big deal — your third-grader just aced their long-division test, and no one is ousting executives over it.
But in the case of computers this is a big deal, because it means the potential for independent reasoning. Large language models — the systems on which AI products like ChatGPT are based — essentially just regurgitate patterns from everything fed to them. The output may sound like Ernest Hemingway, but the model has no independent notion of who Ernest Hemingway is, nor any way of truly deducing what novelists do, any more than the family cat does. It just approximates what Ernest Hemingway would write based on everything he actually wrote.
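To make that concrete, here's a toy sketch in Python: a bigram model, vastly cruder than anything under ChatGPT's hood but animated by the same statistical idea. The miniature corpus and the word-counting scheme are my own illustrative choices, not anything OpenAI has published.

```python
import random
from collections import defaultdict, Counter

# A miniature "corpus" -- a real model trains on trillions of words.
corpus = "the old man and the sea . the sun also rises .".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`
    # in the training text: pure pattern-matching, no understanding.
    options = follows[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# "Write like Hemingway" by chaining statistically likely next words.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Swap the lookup table for a neural network and the dozen words for trillions, and you have, very loosely, the modern chatbot: fluent mimicry, no comprehension.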
But if AI can start solving math problems on its own — well now it’s starting to think for itself. And that’s a whole other ball of yarn. After all, unlike language, there’s only one answer to most math problems. And it takes reasoning to get to it.
So how does Q* do this? What the hell IS Q*? Well, no one really knows, because this was all under cover of OpenAI's secret projects. But it appears to combine an age-old "reinforcement learning" algorithm known as Q-learning with something called A*.
A reinforcement-learning algorithm rewards a machine for getting something right and "punishes" it for getting something wrong, so that when confronted with multiple choices over time, it continually becomes more right — it's basically a refinement tool. A*, meanwhile, is a classic search technique that uses a heuristic (an educated estimate of how close a given option is to the goal) to steer a machine toward the most promising paths instead of blindly trying everything.
So some people believe what OpenAI did here was combine the two (hence, Q*) so that the model can look ahead across its potential choices, estimate which ones lead toward the right answer, and commit only to the best one. I.e., so it could reason like a third-grader doing math.
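For the curious, here's a minimal sketch of what such a combination might look like. The update rule inside the loop is textbook Q-learning; the toy counting task and the A*-style heuristic bolted onto action selection are purely illustrative assumptions on my part, since nobody outside OpenAI has published what Q* actually is.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9   # learning rate, discount factor
GOAL = 10                 # toy task: count from 0 up to 10
ACTIONS = [1, 2]          # each "step" adds 1 or 2

Q = defaultdict(float)    # Q[(state, action)] -> learned value of a choice

def heuristic(state):
    # An A*-style educated guess at the distance left to the goal.
    # (An assumption for illustration; nothing published says Q* does this.)
    return GOAL - state

def choose_action(state):
    # Prefer the action whose learned value, net of the heuristic's
    # estimate of the distance remaining after taking it, looks best.
    return max(ACTIONS, key=lambda a: Q[(state, a)] - heuristic(min(state + a, GOAL)))

for _ in range(200):      # training episodes
    state = 0
    while state < GOAL:
        # Mostly exploit what's been learned; occasionally explore at random.
        action = random.choice(ACTIONS) if random.random() < 0.2 else choose_action(state)
        next_state = min(state + action, GOAL)
        reward = 1.0 if next_state == GOAL else 0.0
        # The textbook Q-learning update: nudge the estimate toward the
        # reward plus the discounted value of the best follow-up move.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

The point of the pairing, if the speculation is right, is that the heuristic lets the system evaluate where each choice is likely to lead before committing to one: closer to deliberate reasoning than to autocomplete.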
This post explains it (relatively) accessibly.
It’s not clear what a commercially deployed Q* could actually do, or what board members saw as a reasonable possibility from this discovery. It’s also not clear how much OpenAI will use Q* in its products anytime soon. But it clearly made Toner and others worried. We’ll see how much OpenAI remains worried now that they’re gone.
Finally, I wanted to end this whole discussion on a demographic note.
Nearly all the key players from the OpenAI board — Altman, McCauley, Toner, Sutskever, Brockman, D'Angelo — were born in the 1980s. This is true whether they were on the corporatist or the advocate side. What we have, in other words, is a split between two strands of Millennial thinking — the side that wants to daringly create and own, and the side that's conscientious and worries what a lack of caution will mean for the future. The two sides of the Millennial personality, writ (very) generally.
But the people who mediated the conflict from the outside were almost entirely Boomers, born in the 1950s and 1960s, as many of the lead investors at Sequoia and other VC and tech firms are, and I can't help feeling a generational drama vibrating underneath all of this. (As a 56-year-old, Nadella isn't technically a Boomer, but he's close enough.)
After all, the people who came of age in the middle-to-latter part of the 20th century were pretty much exposed to nothing but technological possibility and even euphoria. The idea that digital technology could pose dangers really only came into the social and business mainstream decades past their formative years. Tech was going to make life easier — it was going to save us from “1984,” as Steve Jobs so persuasively argued.
So when they had to choose which people born after 1984 to side with, there was only one choice: the ones who believe technology will be our savior.
Bold move by the Boomers. Let’s just hope they don’t screw this one up for the next generation too.
4. LAST NIGHT AT AN ENTERTAINMENT AWARDS SHOW I ATTENDED IN NEW YORK, the “Killers of the Flower Moon” star Robert De Niro got up and, as he began launching into an anti-Trump political speech, said the speech had been edited on the teleprompter in a way he hadn’t approved. He seemed to believe that Apple, which financed and released the Martin Scorsese movie, was responsible.
“I don’t feel like thanking them at all for what they did,” he said. “How dare they do that, actually.”
Per Variety, it was in fact Apple that didn’t want him speaking about politics. The paper reports that “a revised version of the speech was delivered to the teleprompter less than ten minutes before the event started, according to sources with knowledge of the show. A woman who told the teleprompter operator to upload a new speech was overheard identifying herself as an Apple employee. At 6:54 p.m., the teleprompter company was sent an email from two Apple employees with the new text, which omitted explicit references to Trump….A spokesperson for De Niro said that the actor did not see the changes.”
(Scorsese and producers had gone to Apple in the first place because it was one of the few companies with deep enough pockets to finance the $200 million movie.)
With everything else going on, we won’t get too deeply into this. But if De Niro’s allegations are true, it eye-openingly recalls our item from a few weeks ago in which Apple allegedly axed Jon Stewart’s show because executives didn’t like all the hard questions he was asking about AI and China. Censorship doesn’t need to come in the form of evil bureaucrats twirling their mustaches in darkened rooms — all you need is a ballooning Big Tech that controls way too many media platforms and doesn’t like what you have to say.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week (hoo boy):
SAM ALTMAN IS BACK, STRONGER THAN EVER: A powerful CEO removes the internal checks, the very people who were concerned about safety? -3
Q* COULD GIVE US A COMPUTER THAT REASONS: Kinda impossible right now to know what this will mean. Could be everything, could be nothing. 0
ROBERT DE NIRO SAYS APPLE CENSORED HIS SPEECH: -2.5