Mind and Iron, Special Edition: Here’s what you need to know about the Sam Altman craziness — and why it matters to all of us
How the evolution of AI and humanity as a whole could change based on what's currently playing out in Silicon Valley
Well so much for the silence.
You thought you wouldn’t hear from me this week. Heck, I thought I wouldn’t hear from me this week. But the Sam Altman craziness of the past few days is the biggest news to happen to the worlds of tech and the future in forever. So here we are at Mind and Iron, interrupting our pre-Thanksgiving quiet (and yours) with a quick hit on what’s happening as Altman is out, nearly back in, out again and now, possibly, on his way back as OpenAI CEO.
Because as you’ve noticed if you’re even casually following this fast-breaking story, there’s a lot of noise, much of it focused on the personalities and corporate drama. And of course that stuff is interesting. But such a focus does…not always foreground the relevance to our lives.
So what’s the impact this saga could have on our world — how might it actually change how humans live, work and interact in the years ahead? (And I think it will.) Read on…
FIRST, A RECAP OF THE NEWS:
Sam Altman has been a massive driving force in AI, co-founding and leading OpenAI, maker of the very popular ChatGPT. While an elite group of researchers has paved the way to creating this new kind of digital intelligence, the 38-year-old St. Louis native is the person most responsible for operationalizing it — for creating AI products that will greatly transform humanity (and pushing rival companies to do the same). If any single person is shaping the new world in which machines think for humans, it’s Altman.
The tech industry creates too many cults of personality, elevating one figure over the engineers, researchers, marketers and shrewd-but-faceless folks who actually forge change. (See under: Elon Musk, who was himself an early investor in OpenAI circa late 2015 but split with the organization a few years later after differences with Altman.) But sometimes individuals do make this much of a difference, and Altman is one of them, providing both behind-the-scenes vision and front-facing ambassadorial work. Think Steve Jobs, but for machine learning instead of gizmo-making.
Altman emceed OpenAI’s major “developer day” just two weeks ago, which we told you a bit about. In that presentation he outlined some of the ambitious innovations OpenAI was working on, tools that a whole range of app creators could soon have at their disposal. Nothing less than the future of society as OpenAI sees it (for better or worse) was revealed that day.
OpenAI was once a nonprofit but now has a robust for-profit arm. Thanks to $13 billion in investment, said arm is 49 percent owned by Microsoft — strong input, no control. Actual control rests with a small board that oversees the whole shebang: besides Altman it has included OpenAI chief scientist Ilya Sutskever, Quora CEO Adam D’Angelo, entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology honcha Helen Toner. Greg Brockman, OpenAI’s president and co-founder with Altman, has been chairman.
Altman had been in conflict with some members of the board since at least the end of last week (though likely much earlier), and in a surprise move Friday they fired him. Brockman was removed from the board too but permitted to remain in his executive role; he quit in solidarity with Altman shortly after.
The board didn’t really explain its motive for the change, saying only in a memo that “Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.” But it’s been a pretty open secret for a while that Altman has been pushing hard for “commercialization” — Silicon Valley-speak for “run fast and get these things out there” — while some on the board crave a more deliberate approach, with more research before rushing ahead. Remember, the company began as a nonprofit before spinning out its for-profit arm; we’re seeing exactly that spiritual tension play out in real time.
Anyway, Altman and the board haggled all weekend, trying to figure out a way for him to come back (a lot of OpenAI employees threatened to quit if he didn’t). An interim CEO even came and went, replaced by another interim CEO.
Microsoft CEO Satya Nadella — understandably a little worried at the sight of his company’s $13 billion spiraling down an Altman-less drain — feverishly ran interference to try to get everyone to come to an agreement. But they couldn’t, and late Sunday night, Altman and Brockman were out. A few hours later Microsoft announced it was hiring the pair to run a new lab in-house at the company.
Whew.
Now, this would all be a lot to digest even if it were settled. But as of this writing Monday afternoon, Altman may still be on his way back to OpenAI; negotiations are apparently ongoing, per this report in The Verge.
You can’t blame either side for trying. The reality is OpenAI badly needs Altman, but Altman also needs OpenAI: running a startup funded by Microsoft from the outside gives you a helluva lot more freedom than being a Microsoft employee, no matter how much you couch your role there as running a “lab.” And launching a new startup with new financiers and new teams is work. And so we wait.
WHY THIS MATTERS
OK now this part is key. Because this isn’t only or even primarily a disagreement about personality styles. It’s a disagreement about vision. And not just any vision, but a vision for the very core issue facing us: the ways powerful computers should figure into our lives in the coming years.
One of the central players in all this is Sutskever, who seemed to be driving the Altman ouster (though he now seems to be trying to walk it back). It’s no secret that Altman and Sutskever — a Google vet whom Musk helped lure over to OpenAI in the early days and who is now the firm’s chief scientist — have had major philosophical differences. Remember, when Sutskever came over, OpenAI was a nonprofit. And besides, he’s fundamentally a researcher: he studied at the University of Toronto under so-called “AI godfather” Geoffrey Hinton, the man who has since taken to issuing grave warnings about the power of AI. Altman, for all his technical knowledge, is an entrepreneur and an investor.
So this is an ideological clash first and foremost: Between the change-the-world-and-make-money entrepreneur mindset of Altman and the go-slow-and-figure-out-what-the-hell-is-happening scientist mindset of Sutskever. (Also see under: America.)
In July Sutskever even helped form a new OpenAI group to grapple with “superintelligence” — the potentially earth-shaking change that could come when AI gets so smart it builds on itself and leaves humanity in the dust. Sutskever voiced those worries to MIT Technology Review recently. “It’s obviously important that any superintelligence anyone builds does not go rogue,” he said. He’s sounding the responsibility horn again and again.
Now, for those of us who care about innovating responsibly, it’s logical to side with the board and Sutskever and hope that Altman’s departure at least partly slows him down. (And working within a large corporate structure like Microsoft will slow him down, no matter how many hypey press releases they send out.) My own initial impulse runs in this direction. Move-fast-and-break-things didn’t work out so well when Facebook tried it last generation — it damn near pulled down our democracy with electoral misinformation and has given us such lovely modern developments as cyberbullying, screen addiction and data harvesting.
So maybe next time around — with a force about 1,000x more powerful, one that could literally rob us of our agency, not to mention create ruthless robot soldiers and make half our workforce redundant — let’s go a little slower, as Sutskever the Scientist is arguing.
But it’s not as simple as that. Because at least in some realms, an AI that moves too slowly is also not good. It could impede the arrival of medical breakthroughs, costing millions of lives, not to mention delay mental-health tools, environmental solutions and other applications that address challenges to our survival. The truth is, when it comes to AI we don’t really know what it will unleash. And while that’s kind of the argument a caution-preaching Sutskever is making, it doesn’t mean one needs to practice caution across the board. What we should be practicing is selective caution.
Of course, when you’re creating general models that can be used by both the very best and the very worst actors, as OpenAI is, such selectivity is very hard to impose. Part of what’s so challenging about the realm of AI research is getting our heads around the idea that something can be our doom and our salvation at the same time.
Altman and his ilk are also right in saying that pushing ahead does not necessarily mean being irresponsible. And while the nature and history of American capitalism does not, ahem, strongly argue the case, our world is also littered with life-saving fixtures that grew from unholy origins. The Internet and the accountability it enabled began as a military project, and Big Pharma, for all its ethical stains, saved millions of lives with Covid vaccines, for starters.
The reality is that if you gave me a choice between putting my life in the hands of a scientist and putting it in the hands of an investor, I’d choose the scientist 10 times out of 10. But the other reality is that so many of the great scientific discoveries of our time would never have happened without investment.
I suspect both Sutskever and Altman would agree with this and say simply that they are each best positioned to strike this balance. Sutskever would say he is not opposed to commercial imperatives that can drive innovation, while Altman would say making money does not mean you can’t be responsible.
(It’s worth noting that the big Future of Life Institute letter last March calling for a six-month pause on training the most powerful AI systems was signed neither by Altman nor by anyone on the OpenAI board, though Altman and Sutskever both signed a more modest warning.)
In the end, really, all we can hope for is that this tension — between two nerdy men in Silicon Valley but really between humanity’s twin forces — can ultimately make our world better not despite their conflict but because of it, because of their yin-and-yang equilibrium.
That the very fact that money and creativity are coming to such loggerheads is what will help give us the best result.
That far from having to choose between binaries of acceleration and deceleration, the two forces can balance each other out to move at exactly the right speed.
But first we have to figure out who’s going to run OpenAI.