Mind and Iron: Google Will Now Be Developing AI Weapons
From cheesy Super Bowl ads to going all in on those autonomous killing programs, the tech giant's terrible, horrible, no good, very bad week
Hi and welcome back to another tangy episode of Mind and Iron. I’m Steven Zeitchik, veteran of The Washington Post and Los Angeles Times and lead baker and quality-control manager of this newsy Cinnabon.
Every Thursday we bring you the best of what’s around, futurewise. All the news about the technology and science that is changing our lives — and how we should and shouldn’t want it to change our lives — without any of the corporate ingredients or preservatives you find elsewhere. Please consider supporting our independent mission here.
After last Thursday’s deep dive into DeepSeek, we’re back on the AI front again this issue. Because AI has been having a week, influencing everything from cheese facts to weapon deployment.
In fact Google alone has been having a week, as it seems to be caught up in all of those controversies.
We’ll be seeing plenty of AI ads at the Super Bowl on Sunday (the industry is on tenterhooks over what OpenAI will do), so we’ll circle back to the game and all that went down tech-wise in next Thursday's issue. For a look at how machine intelligence is being used in the front offices of the NFL (and how coaching is changing drastically thanks to data and tech), check out our previous dispatch on the topic.
First, the future-world quote of the week:
“Google’s pivot from refusing to build AI for weapons to stating an intent to create AI that supports national security ventures is stark…This appears destined to result in a race to the bottom.”
— Human Rights Watch’s senior researcher Anna Bacciarelli, denouncing Google’s new stance on AI weapons
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
What a cheese ad tells us about the new world of information; time to turn killing decisions over to the machines?
1. YOU LIKELY (HOPEFULLY) HAVE MORE IMPORTANT THINGS TO DO WITH YOUR DAY THAN FOLLOW THE MISSTATEMENT OF CHEESE FACTS IN UPCOMING SUPER BOWL COMMERCIALS.
But if you did happen to be paying attention to the mini-scandal henceforth known as Gouda-gate, you might have been amused to see the way events played out with the Google Gemini ad this week.
In a nutshell (which Gemini just told me contains three different types of cheddar), Google dropped a Big Game spot this week in which a Wisconsin cheesemaker leaned on the company’s AI program to tell him some relevant business facts, namely that 50-60 percent of cheese consumed in the world is gouda. (You’d think a cheesemaker would already know that number off the top of his head, but no matter.)
Internet brietectives quickly jumped on the spot to say that this figure is way too high, and whatever data the model was synthesizing to come up with this answer was, clearly, wrong. (You’d also think Google would have checked this Gemini output with an old-fashioned Google search before putting it into an $8 million ad, but at least give executives credit for believing in their own product.)
This would not seem like a big deal; anyone who’s used AI for search has come upon five wrong answers just since this morning. At least it wasn’t suggesting using glue on pizza, that quintessential example of AI Slop.
But then Google made things weird — and passive-aggressive.
Responding to a blogger’s correction on the matter, Google's president of cloud applications Jerry Dischler dug in. The 60 percent fact “was not a hallucination," he wrote. "Gemini is grounded in the Web — and users can always check the results and references. In this case, multiple sites across the web include the 50-60% stat.” (Google edited the ad to take out the fact anyway.)
Two things stand out about this:
The first, and most obvious, is Dischler's breeziness about what the ad got wrong. Saying that "multiple sites" include this incorrect information is a really strange and deflective way to defend the value of your product. "Well, sure, the brakes don't work, but I'm just the dealer, I didn't build the car" is not the kind of response a business should be giving a dissatisfied customer.
But the second and more significant point is how his answer undermines the very argument for AI summaries in the first place. The whole reason for using these chatbots in searching the human-authored Web is to take out the middle step of finding and vetting the Web site in question — of zeroing in ourselves. But if we need to dig deeper to check the fact that has just been handed to us, well, then, AI has not eliminated a step at all. In fact it’s added one because now instead of just gravitating to the result that seems right we have to question the tool...and then gravitate to the Web site anyway.
Perhaps you've had some type of this experience already. You Google something, see the AI Summary, against your better judgement stop to read it instead of scrolling down, feel skeptical about what you've read, click through the link the summary says it pulled the info from, find your skepticism justified as the source is incomplete or not what the AI model thought it was...and then go down to the normal search results to do your own research anyway, wondering why you've wasted the past three minutes of your life on an AI wild goose chase.
And that's in a best-case scenario. In a worst case you lack the instinct to realize it got something wrong, or the time to do anything about it, and then get burned later when the info it pulled turns out to be false.
The only audience I can really imagine for these Gemini-esque search tools as they’re currently constructed is people who don’t know how to vet links themselves: the people for whom the AI's error rate is actually lower than their own. But at this stage of high Internet literacy, how many people is that, really? And they're probably not the ones who're gonna be firing up Gemini.
A philosophical admission sits at the heart of both Dischler's statement and Google's walkback of the ad: that AI needs to be grounded in the human Web. Because for a good while after ChatGPT came on the scene (in fact Sam Altman and OpenAI are still kind of insisting on this), the spin from tech companies was that you didn't need to do any human work to find what you needed; the machine could do it for you. They even underplayed how much of the info they led you to was originally written by humans (though actions like the NY Times lawsuit reminded them of that pretty fast).
But it turns out you need humans on both ends — not just on what's being found but on what's doing the finding. Dischler's "Always check your sources" isn't just an annoying response from an executive not willing to take ownership of a mistake — it's a self-own that the machine intelligence has very little of value to add here. Of course that doesn't stop them from advertising that it does.
I'm all for having a machine intelligence trawl the Web and find answers for us. I mean, I don't think it's the most transformative use of AI, and I'm a little baffled by why so many companies have devoted so many resources to so narrow an offering when the old system works pretty well. But if you're going to make a big show of how a computer can find what we need online better than our own fingers and minds — and spend millions telling us that — you might want to first demonstrate that it's actually true.
AI will have many uses in research and heavy data-lifting. Finding quick factual nuggets may not be what it's best at. Yet Google and others keep trying to make fetch happen. Which, as everyone who uses Gemini knows, is one of the most quoted lines from the 2006 teen comedy "She’s The Man."
2. AN OLD EDITOR OF MINE USED TO SAY THAT YOU CAN ALWAYS TELL A COMPANY HAS SOMETHING TO HIDE by how much of a word salad they use in their press release.
Well, there sure are a lot of vegetables getting tossed into the bowl in the Google Blog post this week about its newly revised AI mission.
After a whole lot of sentences like “Collaborative Progress, Together: We learn from others, and build technology that empowers others to harness AI positively” (literally, that’s an actual sentence a human (we think) wrote), it gets to the nub of the matter.
The company says that its principles will “continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.” Sounds good, right? Well, what it didn’t say (for the first time since 2018, when it started publishing these mission statements) is that it vows not to develop AI “for use in weapons” or surveillance. Which is a pretty telling omission.

This was astutely noted by a senior researcher at Human Rights Watch named Anna Bacciarelli, who went on to say this small shift in policy could mean the literal difference between life and death. “Those red lines are no longer applicable,” she said.
“Google’s pivot from refusing to build AI for weapons to stating an intent to create AI that supports national security ventures is stark. Militaries are increasingly using AI in war, where their reliance on incomplete or faulty data and flawed calculations increases the risk of civilian harm. Such digital tools complicate accountability for battlefield decisions that may have life-or-death consequences.”
Google itself gives the game away with a phrase so tucked into its utopian La Scala Chopped that you almost have to read it twice to realize what it’s telling us. The company, it says, aims to “create AI that protects people, promotes global growth, and supports national security.” (Italics mine.)
Well, I guess it sounds better than “we dig the idea of machines telling us who to kill.”
We’ve told you about AI weapons before: the moral hazards of allowing computer programs to take life-or-death action (faulty and biased data is one big concern; the ease of going to war is another) and even the advocates who say it will lead to lives saved by eliminating human error and emotion. None of this is new in the world; an autonomous drone reportedly killed someone in Libya back in 2020 without any human control.
In fact AI military applications are not even new for Google, which, despite the above pledge made by AI unit DeepMind, was still selling DeepMind projects to militaries via its cloud division, the kind of arms-lengthism you can only applaud for its creativity.
But something has shifted now: the company’s official position is that all use cases, even military use cases, are on the table, for everything Google does. That there isn’t even a need for a roadblock or workaround on developing/licensing out AI for weapons systems.
That, really, a Rubicon has been crossed. We can only assume the imminent removal of its name from atop this six-year-old “no-AI-weapons” industry pledge organized by the Future of Life Institute. And how long before other companies follow suit?
As Bacciarelli says, “That a global industry leader like Google can suddenly abandon self-proclaimed forbidden practices underscores why voluntary guidelines are not a substitute for regulation and enforceable law. Existing international human rights law and standards do apply in the use of AI, and regulation can be crucial in translating norms into practice.” (Don’t count on that happening in the U.S. under Trump, where even basic, relaxed AI guidelines on commercial uses have been scuttled.)
A secret weapon, as it were, does exist in the battle to stop this: Google employees themselves. Some 200 people who work for DeepMind signed a letter last summer demanding Google drop military contracts even via the cloud-division workaround, citing the company’s own pledge not to use AI for military purposes.
The irony is that now that the mission has been rewritten, Google’s not technically in violation of it. See how easy that is! But these DeepMind employees are the ones designing the things, and they have some sway over determining what they’re used for.
Still, all of this, I fear, will end up with AI being deployed in war and war-like situations (or even peacetime situations to putatively prevent us from being at war, in the case of surveillance). The monorail is rolling down the track. Whether it’s a machine using facial recognition to identify a national security threat or a weapon firing at a target it had selected, vetted and decided to move in on with no human control, it’s all happening, thanks to a military that argues for its necessity (“I mean, the other guy is doing it”) and a Big Tech complex that sees plenty of money and just enough moral daylight to argue for its plausibility.
Terminator soldiers on the ground are still many years away (the tech isn’t close). But machines making kill decisions and then carrying them out by air and sea? It’s coming. The best and brightest minds in Silicon Valley are working on it as we speak.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. This year started on kind of a good note. But it’s been rough since.
GEMINI KEEPS PEDDLING THE SEARCH MISTAKES: -2.0
GOOGLE’S DONE WITH THE NO AI WEAPONS THING: -7.0