Mind and Iron: Why Jon Stewart's clash with Apple is even more worrying than you think
Also, an AI weapons advocate talks. And AI speaks your language.
Hi and welcome back to Mind and Iron. I’m Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and referee of this here game of the week.
Every Thursday, Mind and Iron brings you a wide range of tech news through a human lens, and what better piece of glass to filter it through.
War continues to rage in the Middle East, so we’ll keep following those developments; in fact this week we bring you a fresh item in our ongoing AI weapons coverage — an interview with a retired Army colonel at the fore of the movement. He has a provocative perspective on where this is all headed.
But plenty of other news is multiplying on the future front, and we’ll continue to cover how it’s affecting all of us humans. So please consider pledging your support — it will keep us humming in this loud news environment and ensure your access to all the rich, dystopia-avoiding content we offer.
This week, we also look at how Apple’s clipping of Jon Stewart’s wings poses a threat to all of us.
And AI can now enable people — and by people I mean self-interested politicians — to make it seem like they speak your language. Inside a new development that’s either a potent tool of unification or a shameless vehicle for appropriation. (With our AI weapons chat running this week, we’ll postpone to next week the second half of our interview with U.S. High Speed Rail Association honcho Andy Kunz.)
First, the future-world quote of the week, which comes from said military expert:
“The goal of autonomous weapons is to be more discriminate, not less.”
—Dr. C. Anthony Pfaff, retired U.S. Army colonel and AI-weapons expert, making the case
Thanks to all of you for caring about these issues, and please feel encouraged to like this post or toss in a comment as needed.
Now let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
The problem with Jon Stewart’s Apple dismissal; AI makes us all polyglots?
1. APPLE IS GOING TO SPEND A BILLION DOLLARS ON AI. AND IT REALLY DOESN’T WANT JON STEWART TELLING THE WORLD THE PROBLEMS WITH THAT EXPENDITURE.
That’s not exactly the headline one should write for the company’s abrupt cancellation of “The Problem with Jon Stewart,” the star’s serious-minded Apple series. But it’s not far from it either.
The show was just a few weeks away from shooting its third season — interviews lined up, production plans in place — when Stewart and Apple abruptly announced last Friday that they would not be moving forward…ever. The show was kaput. A source of mine says even the staff was surprised by the news; they knew of tension but never imagined it would mean the end.
Apparently the plug-pull came because Stewart and Apple couldn't agree on how he should handle a few topics, particularly AI and China/Middle East geopolitics.
In announcing the split the parties filled out the "creative differences" space on the press-release Mad Libs. But the more accurate term is "journalistic independence." Apple is about to spend a billion dollars on AI, and executives don't really want to fund and amplify a much-loved muckraker asking tough questions about it.
Or, for that matter, for him to air critical segments about China. Apple’s fastest-growing market is lately in peril as Huawei makes serious inroads there, and it may be further imperiled by Apple’s reliance on a Taiwanese supplier. Tim Cook just flew to Beijing for damage control.
Right off the bat, the show’s cancellation is a tragedy and a red flag. Stewart lately has functioned a lot like a journalist, confronting public officials in a series of well-prepared interviews. (See his informed grilling of an Oklahoma state senator on gun safety and the Arkansas attorney general on gender-affirming care.)
That a platform was shutting him down because it held interests in the topics he covered should be enough to send palpitations through anyone who cares about media independence. Honestly, the only explanation I can come up with for why this hasn't caused more of a backlash is that people a) are too consumed by the Middle East and b) have such low expectations of Apple in the first place.
I mean, imagine if CBS News did this — up and scrapped all of “60 Minutes” because Bill Whitaker and Lesley Stahl might do some stories that complicated Paramount's ability to make money. A barking snap-to would be heard from media watchdogs, and rightly so. But no one thinks of Apple as a news organization in the first place, so many of the watchdogs just raise their eyes and curl back up in the corner.
Ah, but that's the rub, isn't it? Apple isn't actually regarded as a journalism outlet, even though it is happy to wield the influence of one until things get inconvenient.
Oddly, it was Stewart's own "The Daily Show" that was once branded with this iron — that he could comment and interview like a journalist but hide behind the comedian shield if his reporting didn't hold up. Now he's victimized by the same behavior he once allegedly perpetrated. Apple can function like a news platform when it suits it — offering up Apple News or providing a home for Stewart-esque pieces — then innocently throw up its hands, claim it's just a tech company and walk away when journalism's basic standards become too much.
This is concerning for all of us worried about a diverse media. But the withdrawal of another outlet is the least of our problems. Consolidation has gripped the industry at regular intervals over the past 30 years, and somehow we’ve managed. But something more insidious is at work here: The very players now consolidating the media are the ones who stand to lose the most from its reporting.
As AI becomes a greater force in (and in some cases a greater threat to) our humanity in the coming years, journalism’s role in parsing it will be more important than ever. But those reporters will increasingly be owned by the very people who profit from these threats. It would be like Chevron announcing it was buying The Wall Street Journal in the middle of the paper’s drilling investigation. The fox guards the hen house and owns the property deed to boot.
The future of Big Media is increasingly Big Tech. The latter has the money to swallow up media companies whole — money that will pile higher still in the AI age. And that means trouble, because giant tech companies are increasingly showing themselves uninterested in doing anything but running roughshod over whoever they perceive is opposed to their interests.
A few years ago Netflix and other global streamers declined to buy an acclaimed documentary about alleged Saudi Arabian complicity in the death of Jamal Khashoggi, likely because doing so would have jeopardized their business ties with the Kingdom. That forced the film’s producers to go with a much smaller distributor, undercutting their chances of getting out their important free-speech message. The movie’s own fate demonstrated the tragic phenomenon it was documenting.
(To get a cross-industry sense of where this could go see under Spotify devouring any hope for democracy in the music business. Not to mention the whole social-media algorithm issue.)
So far this has mainly struck video journalism, in part because that needs more of Big Tech’s money than the text-based kind. But that also reflects the already-troubling reality that text-based journalism’s reach is smaller, especially with young people. And even some of the text-based outlets are — ahem — owned by tech barons.
Stewart can — and likely will — go to a friendlier large platform. But he, and only he, can. Because if the trend of Big Tech devouring Big Media continues, the people who aren't yet Jon Stewart won’t be able to grow into the next Jon Stewart, because to grow into the next Jon Stewart you need the kind of platform that the companies who can provide said platform would never give the next Jon Stewart! You see how rigged this is.
Big Tech controlling the world is a bad symptom. Big Tech controlling how we learn about the world may be the disease that kills us.
[NY Times, Bloomberg, The Hollywood Reporter and CNN]
2. LANGUAGE FLUENCY IS HOW OUR SOCIAL BRAINS SORT AND IDENTIFY PEOPLE, a fact borne out by dozens of linguistics papers and anyone who's ever had their dad try to order a burrito at Chipotle.
Which is what makes it disturbing that AI can now allow people to fluently speak languages they couldn’t break their teeth on at an airport.
Two years ago I wrote a story about a slew of new services that can turn your favorite non-English Netflix show into a near-perfect dub in English, or vice versa. These programs use a combination of machine learning and synthetic media to make a voice sound like the original, but in a new language.
The services have only gotten better since, which means you can now record a message in your own language and have it rendered, in your own voice, in one you’ve never studied, convincingly enough that the listener believes you’re actually speaking it.
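For the technically curious, here is a minimal sketch of how a pipeline like this is often stitched together from open-source parts: transcribe the original recording, machine-translate the transcript, then re-synthesize the translation in a clone of the speaker’s voice. (The file names are hypothetical, and this illustrates the general technique, not any particular company’s actual method.)

```python
# A rough sketch of a voice-preserving translation pipeline, assembled from
# open-source components: Whisper (transcription), MarianMT (translation)
# and Coqui XTTS (voice-cloned speech synthesis). Illustrative only; real
# commercial services are far more polished.
import whisper
from transformers import pipeline
from TTS.api import TTS

SOURCE_AUDIO = "mayor_message_english.wav"  # hypothetical input recording

# 1. Transcribe the speaker's original English audio to text.
asr_model = whisper.load_model("base")
english_text = asr_model.transcribe(SOURCE_AUDIO)["text"]

# 2. Machine-translate the transcript into the target language (Mandarin here).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")
mandarin_text = translator(english_text)[0]["translation_text"]

# 3. Speak the translation in a clone of the original voice, using the
#    source recording itself as the voice reference.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text=mandarin_text,
    speaker_wav=SOURCE_AUDIO,  # the voice to imitate
    language="zh-cn",
    file_path="mayor_message_mandarin.wav",
)
```

The unsettling part is how little of this requires the speaker to understand a word of the output; the voice is theirs, but the comprehension never happens.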
It was only a matter of time before a politician weaponized this. And it was only a matter of minutes before New York mayor Eric Adams became that politician.
The New York Times reports that Adams has been using one such program. New Yorkers with the misfortune of answering the phone when such a call came in heard Hizzoner tell them to, like, remember recycling rules in Mandarin or note the alternate-side parking calendar in Yiddish.
“I walk around sometimes and people turn around and say, ‘I just know that voice. That voice is so comforting. I enjoy hearing your voice,’” Adams told reporters. (Is he sure he understands what they’re saying?)
“Now they’re able to hear my voice in their language,” he finished.
The program Adams uses comes from ElevenLabs, one of those aforementioned dubbing companies; it was founded by a pair of Polish entrepreneurs who hated hearing Hollywood stars sound like the local grocer.
The immediate consequence of tools like this as they become more commonplace is disorientation — “why is my friend Horton Westchester III trolling me in Tagalog?” It’s also destructive of some of our great humor vehicles. East Coast readers might recall when former New York mayor Mike Bloomberg so badly butchered Spanish that it led to the creation of "El Bloombito," one of the all-time-great politician parody accounts.
There’s also the cultural-respect issue — the idea that a person who has never gotten past freshman Spanish acting like they can throw around irregular verbs. It’s a fine line between unification and appropriation.
But the long-term consequences may be the most troubling. Experts say that any person, from a doctor to a neighbor, who actually learns a language is undertaking important work; such proficiency shows the listener they care and enables the speaker to understand what’s said back. If a machine is simply faking that act, then the person is doing none of that — and certainly nothing that will allow them to understand the reply. It’s worse than a trick — it’s the epitome of a one-way conversation.
And when used by an authority figure, it can project an unearned authority or worse. As the anti-surveillance activist Albert Fox Cahn posted on X, “Using AI to convince New Yorkers that he speaks languages that he doesn’t is deeply Orwellian.”
We are, it seems, headed to a world in which everyone can sound like they know a language but will have no idea what they’re talking about.
[NY Times]
IronClad
Conversations with people living the deep side of tech
Could autonomous weapons save lives?
For the past few weeks we’ve been telling you about the dangers of autonomous weapons, or “slaughterbots.” We’ve channeled the activists and researchers who warn about the many dangers of weapons that can make kill decisions without human intervention — dangers getting more imminent with war in Ukraine and the Middle East.
To these activists we are living in nothing less than an Oppenheimer reboot — a moment where the thirst for technological edge is blinding people to long-term risks.
But we’re all about multiple voices here at Mind and Iron. So this week I talked to a man in the middle of the AI-weapons world — the retired U.S. Army colonel Dr. C. Anthony “Tony” Pfaff.
A prominent author and a teacher at the Army War College, Pfaff knows much of the latest tech being developed across the U.S. Armed Forces, has worked on various AI-centric defense projects and has advised the military on autonomous weapons. He has thought through all of this from a host of moral and strategic angles.
Pfaff believes AI weapons are not only inevitable but preferable — that they’re a more humane way to fight wars and will save countless lives. If deployed properly, he argues, AI weapons could lead to fewer civilians killed, fewer mistakes made and the tragic toll of war blunted in ways it’s never been before.
This isn’t the view many activists hold. So we put some of their biggest questions to him to hear what he’d say. Here’s a condensed, lightly edited version of our conversation.
M&I: A lot of people are worried that AI weapons will usher in a new era of bloodiness. But you believe the tech will help save lives. Why?
Pfaff: I think when we’re talking about autonomous weapons we’re talking about more precise weapons that will limit civilian casualties, because you won’t have as many of the errors as you do with humans. It will also hit targets faster, ending wars quicker.
M&I: You have a pithy principle for how we should determine the standard for their use.
Pfaff: Yes. It’s ‘if through the use of this technology no one is worse off and someone is better off, then you have a case for it.’ It might still make mistakes under this principle. But overall it will be better than a human-only process.
M&I: How soon is all this happening? The U.S. Defense Department talks about fielding literally thousands of autonomous systems in just the next two years.
Pfaff: Because robotics is so hard, in the Army I think it will be a while; we’re a long way from Terminator soldiers. There are just too many variables for a robot to even move up or down a hill. But the Navy and Air Force have fewer of these issues [with drones et al]. So we’re going to start seeing them there first.
M&I: What do you think of the growing activist movement opposing these weapons? They would say they’re on the right side of history, standing at the gates of Los Alamos to ensure we don’t open another Pandora’s box. How do you see them?
Pfaff: I think protesting voices are good; I think the military should hear the people raising legitimate objections. But to me it’s helpful when these voices know the technology and respond to what is being done. If it’s just people being abjectly cynical and saying that because there’s no human involved in the targeting decision all of these weapons should be banned — that I have a harder time with. It’s not realistic given the fact that militaries will be using these weapons and it also doesn’t help shape responsible policy for when they are used.
M&I: Why don’t you believe the tech will be exploited? It’s not an unreasonable concern, made worse by the fact that many of the world’s strongest militaries seem to be very reluctant to sign any kind of Geneva Convention-type agreement about them.
Pfaff: I don’t believe there is an inherent issue with this technology any more than with any other military technology. We had this issue with submarines, where people in the 1910s and early 1920s said they needed to be banned because they could just sneak up and knock out a cruise ship. So in the 1920s we developed rules about when they can and can’t engage merchant vessels. Like with many new systems, it’s certainly possible to establish policies and rules that help you use autonomous weapons responsibly. The question is: do you?
M&I: Yeah, and I think one of the things that worries people is the sheer height of the stakes if you don’t. That when a bad actor violated rules in the past they were endangering one ship. But someone violating the rules with AI weapons will be able to do a lot more damage because these machines are so much more efficient.
Pfaff: But an actor that wants to do that doesn’t need autonomous weapons. They can do that with nuclear weapons. They don’t even need nuclear weapons — they can use thermobaric weapons. The goal of autonomous weapons is to be more discriminate, not less — the whole point of these is to get more precise. Because that helps you win. So that’s why a combatant would be using them. If not, why would they use them in the first place?
M&I: More discriminate, many activists would argue back, isn’t truly possible without discriminating humans capable of making human judgments about what to target and what to pass over. What would you say to them?
Pfaff: If the systems are trained and deployed under responsible rules, then a lot of the cases you’re talking about come down to the ‘naked soldier problem’ — the ‘can you kill a Nazi if you stumble upon him while he’s taking a shower?’ I think that’s a good question. A human may not, while an autonomous weapon might because it’s not trained to make that kind of distinction. But I’m not sure there’s an ethical concern there — many people say you’re not obligated to save the Nazi in the shower. And again, if it’s reducing the number of civilians targeted, well.
M&I: There is also the worry that if countries do lose fewer people, it may make them more willing to go to war in the first place.
Pfaff: Yes, the risk-reduction argument is a real concern and I think it’s a question we need to be discussing. We’ll have to adjust to it.
M&I: And then we have the huge issue that AI is more prone to errors with certain groups of people — particularly people of color. We’ve already seen that with facial recognition elsewhere. And the stakes here are so much higher.
Pfaff: That’s something we also really need to make sure is addressed. If the overall error rate is lower but it’s targeting a specific group disproportionately, then we need to solve that. Algorithmic bias is a big issue.
M&I: The question of how we’re programming these also touches on how hackers could be re-programming these.
Pfaff: I think that’s a real strategic danger. In other words, I’m reasonably confident we can set the right rules that will work. But one problem is the poisoning of the data set — that a bad actor can come in and make the AI basically think and do all the wrong things.
M&I: Does the U.S. military recognize this and understand that it needs to be hiring a whole bunch of highly specialized IT people? It’s a real shift compared to the engineer-centric approach currently in use.
Pfaff: It is absolutely something you have to be concerned about. We’re still working on educating the workforce. Every new technology has vulnerabilities that can outpace our ability to guard against them. We have to work to keep up. We need to make our average soldier more literate, and we have projects underway (Convergence and Ridgway) to do that.
M&I: I think generally readers would also want to know that the U.S. military is asking the right questions and being as deliberate as you are. I don’t want to make you speak for the entire military. But from your observations, is that a reassurance people should have?
Pfaff: I believe they should. There are a lot of people working on this, first to figure out what can and cannot be done technologically — and then, once we’ve figured that out, how it can be done most responsibly.
M&I: A rich web of issues. Thank you for your time.
Pfaff: Thank you.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
BIG TECH FLEXES ITS ANTI-JOURNALISM MUSCLES: Yikes. -4
AUTO-TRANSLATORS LET LEADERS PRETEND THEY UNDERSTAND: -2
AI WEAPONS HOLD THE POWER TO BE MORE HUMANE, NOT LESS: Only semi-convinced. +1.5