Mind and Iron: Is more political violence in our future?
The modellers get a shocking data point. And can predictive policing ever help?
Hi and welcome back to another glittering episode of Mind and Iron. I'm Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and ring-toss operator at this newsy carnival.
Every Thursday we bring you the best of what's around in tech and future news — what awaits and how we can change it if we don't like it. You can support our mission here.
This week has been filled with extraordinary political developments. In fact I'm not sure that in the time it takes to publish this newsletter another couple of them won't go down. So we're partly pushing off our regular slate of coverage to focus on politics.
First, we'll revisit some predictions from two months ago about the likelihood of political violence. As you may recall from that May issue, we surveyed a number of thinkers who build models forecasting potential conflict. Now that a concrete event has occurred, we'll circle back for fresh assessments.
Also this week, with all the questions about whether law enforcement should/could have known about Thomas Matthew Crooks, we'll take a look at "predictive policing" — the idea of using data to forecast threats before they've materialized. Such an approach has been among the more controversial of AI applications. Should it be?
Finally we'll get to some more OpenAI juiciness, as the kimono has been opened on the firm's secretive "Strawberry" project, formerly the "Q*" project (seriously who comes up with these names). Strawberry is the reasoning-based innovation Sam Altman's company (may have) made a breakthrough on — the kind that could transform AI into a true thinking machine.
First, the future-world quote of the week:
“Predictive policing used to be the future. Now it is the present.”
—Former LAPD and NYPD chief William Bratton, in a quote that hasn’t aged well since 2016
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
More political violence could await; the perils of predictive policing; have you shopped Strawberry today?
1. OUR POLITICAL FUTURE THIS WEEK WENT FROM A MATTER OF SOME URGENCY TO STRAIGHT-UP-WTF-CODE-RED EMERGENCY
The specter of political violence, which we dove into in a May issue, became a frightening reality with the attempted assassination of presumptive Republican nominee Donald Trump at a rally in Butler, Pa., and the killing of rallygoer Corey Comperatore.
Now the inevitable question: could more be in store? Jon Stewart on “The Daily Show” this week offered the bleak forecast many of us unconsciously agree with. “There will be another tragedy in this country, self-inflicted by us, to us,” he said. “And then we’ll have this feeling again.”
Is this, in fact, a given? One of the leading figures in assessing the likelihood of political violence is Philip Schrodt, a retired professor who has built data models to predict unrest and consulted for the U.S. government and military on his findings.
This is what Schrodt said when I checked in with him two months ago about where our country was headed in that regard.
"A lot of what we saw from 1965-1975 we haven't seen yet. We might. But we haven't... I think we're seeing a lot more plots foiled before they can do damage or before we even hear of them." Alas as of last weekend that's no longer true.
So what does Schrodt say now? The prospect of contagion — other attempted killings of high-profile candidates or officials, regardless of party — would seem high.
"My original inclination was 'of course they cluster!' but then I remembered that one of the fundamental 'laws' of this stuff from a data analytic perspective is that events that are completely random in space and time are perceived to be clustering," he wrote when I messaged him this week. And while "there are going to be more assassination attempts this year globally because there are an unusual number of elections (which brings more contact with the public)," he also said that there were plenty of reasons to think there wouldn't be a repeat. "Security will be tighter now because they assume...that there is contagion," he wrote, citing one reason.
I also asked Schrodt about escalation — how heated rhetoric in the wake of the shooting could encourage more political violence. Republican lawmakers such as Reps. Mike Collins and Marjorie Taylor Greene have already pointed the finger at President Biden.
Nor is the threat limited to the campaign. Trump has said that if elected he would seek to deport millions of immigrants, potentially by force, a move that could prompt mass protests from Democrats, then even harsher crackdowns or reactions from law enforcement…you see where it goes.
Schrodt had a more sanguine view. He said he believed that "Trump will be enjoying the adulation of victory so much he won't want to mess it up — e.g. his greatest fear, and I think he has the social smarts to know this — is he gives a draconian order (most likely on mass deportations) and half the country ignores it: then the emperor is very naked."
On the other side, "Democrats, meanwhile, can't even organize enough to get rid of a single 81-year-old candidate who is almost certain to lose, so barring anything truly outrageous (again, could happen, e.g. on immigration/deportation) I don't think we'll see" anything like that.
Tl;dr: Schrodt's forecasts say this kind of broader violence isn’t too likely.
One of the bright young minds in the space is Clayton Besaw, a University of Central Florida researcher who is closely involved with a model called CoupCast. His opinion in May was bleaker than Schrodt's. "We're seeing the structural factors for political violence that are similar to the time right before the Civil War," he said then, noting so-called “democratic backsliding” and "ideological outbidding."
Besaw said back in May that he wasn't especially worried about massive outbreaks of political violence because the best bulwark against it, according to scores of models, is "strong state capacity" (a catch-all term for empowered law enforcement that is widely respected).
But even that seems in doubt in the post-shooting discourse. Basic questions of competency are one thing. Insinuations of a conspiracy are another. At the RNC Wednesday night Ted Cruz wondered to Fox News if alleged Secret Service failures were "political." That kind of rhetoric, if it seeps into enough minds, could weaken precisely the state capacity that Besaw says is so crucial to pushing back against threats.
One of the hallmarks of past assassination attempts is how they brought the country together politically, at least for a short period. The attempt on Ronald Reagan's life in 1981 not only moved Democrats — it prompted them to pass large sections of Reagan’s agenda. And JFK's killing at least partly smoothed the way for a consensus around Lyndon Johnson's Great Society programs in the 24 months that followed (though it was far from the only factor).
The difference now, as it so often is, is social media. Without it, conspiracy theories can't flourish, dividers can't jump into the breach and the general temperature and prospects for violence can't be raised. With it? Well I guess we’ll find out. Political futurist Barbara Walter says social-media disinfo is the great risk factor when it comes to political violence. In fact, she says regulating it is the only meaningful way to stem the tide.
Of course moderation isn’t likely anytime soon — not when the person who runs one of the most prominent social platforms both accommodates conspiracy theories and promulgates them himself.
But maybe, just maybe, tech tools aren’t as essential here; maybe leaders and the American people can turn a corner all on their own.
Online wag (and '90s one-hit wonder) Eve 6 had this quip Saturday night.
OK so that’s unlikely. But on Monday, Newt Gingrich, in a speech to the Wisconsin delegation at the RNC, suggested a cooling-off was a realistic possibility and that the assassination attempt would actually have a broad harmonizing effect.
"We were right at the brink of falling apart as a country and potentially drifting towards a civil war. And I think this shocked everybody and I think you will see a very different approach," he said. A prediction hardly based on the data. But let's hope the human intelligence is right just the same.
2. "MINORITY REPORT” CAME OUT 22 SUMMERS AGO, OFFERING US THE TEMPTING TABLEAUX OF CRIMES BOTH FORECAST AND PREVENTED.
The Spielberg film — inspired by a Philip K. Dick novella — laid out the idea of "precogs," a science-meets-oracle invention that can predict with uncanny accuracy where a serious crime is about to occur. Needless to say, this makes keeping the peace much easier. Even more needlessly said, this poses all kinds of free-will and fairness issues.
Thoughts of that movie danced, as they so often do after a shocking crime out of the blue, in the wake of a 20-year-old Pennsylvania man attempting to kill a former president — a shooter who was not in the least bit on law enforcement’s radar prior to Saturday. Some of us may yet again wonder if technology could have helped (because the human intelligence didn’t seem great).
In the years since the movie came out, tech companies and law-enforcement agencies have indeed tried with an intensity as furious as it is futile to emulate the world of the film. Unraveling this history may tell us just a little something about where we’re headed on AI and crime-solving.
Let’s leave aside the surveillance and privacy issues for a moment, pretty big matzah balls as they are. Theoretically these systems should be making strides. The amount of data available — on suspects, on geographic trouble spots — is more voluminous than ever. And large language models were designed exactly for functions like this — ingesting everything every human knows and then seeing patterns in that knowledge that an individual human brain with more limited processing power can't see. If criminology is closer to a data science than a right-brained creative pursuit, machine intelligence should be able to help.
And in fact that’s the bang with which all this began. One of the leading predictive-policing software programs is known as PredPol, which offers a daily map with dots noting where crime is most likely to occur. PredPol was founded nearly a decade ago in conjunction with the author of a 2015 UCLA study. (Fascinating research — it had human analysts predicting the block where a crime might occur each day for four months and pitted them against the model; the model got it right twice as often.) “Not only did the model predict twice as much crime as trained crime analysts predicted, but it also prevented twice as much crime,” Jeffrey Brantingham, the UCLA researcher who led the study, said at the time.
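To give a sense of the core mechanic, here is a minimal, hypothetical sketch of a grid-based "hotspot" scorer. To be clear, this is my own illustration and not PredPol's actual algorithm (the real systems rest on more sophisticated statistical models), but it captures the basic move: weight recent incidents more heavily and flag the blocks that score highest for the day's map.

```python
from collections import Counter

# Hypothetical sketch of a grid-based "hotspot" scorer (not PredPol's
# actual method). Each city block gets a score based on its recent
# incident history, with newer incidents weighted more heavily.
def score_blocks(incidents, half_life_days=30.0, top_k=5):
    """incidents: list of (block_id, days_ago) tuples."""
    scores = Counter()
    for block_id, days_ago in incidents:
        # Exponential decay: today's incident counts 1.0, one from
        # `half_life_days` ago counts 0.5, and so on.
        scores[block_id] += 0.5 ** (days_ago / half_life_days)
    return scores.most_common(top_k)

# Toy usage with made-up data: block "B7" has several recent incidents,
# so it tops the list that would feed a daily patrol map.
example = [("B7", 1), ("B7", 3), ("B2", 45), ("B7", 10), ("B9", 2), ("B2", 60)]
print(score_blocks(example, top_k=3))
```

The real systems layer far more onto this (spatial spillover between neighboring blocks, crime-type weighting, and so on), but every critique that follows about data quality and bias applies just as much to this toy as to the full versions: the scores can only reflect the incidents that got recorded in the first place.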
City police departments rushed to sign up for PredPol — LA and Santa Cruz and Atlanta and dozens of others. A competitor, HunchLab, also soon proved popular. Think tanks proclaimed that these outfits were “a return, albeit a high-tech one, to the days of police on the beat who knew their constituents.” Another company, ShotSpotter, offered a gunshot-detection system that could supposedly “hear” where shots were fired anywhere in a city the minute they happened, allowing law enforcement to get there faster than if it waited for a call. “Predictive policing used to be the future,” said former LA and NY police chief William Bratton amid all this fervor. “And now it is the present.”
Yet the trajectory soon shifted. Predictive policing not only turned out to be biased and riddled with social problems — it was straight-up ineffective.
“We didn’t get any value out of it,” Palo Alto police spokeswoman Janine De la Vega said in 2019 as the city ended its program after three years of use. “It didn’t help us solve crime.”
Soon many other municipalities were cutting back. Los Angeles stopped using PredPol in 2020 with no decisive evidence it worked. Santa Cruz did the same. Oakland followed the next year. New Jersey towns questioned why they ever used it. Utah felt it had been hoodwinked by another predictive startup. The Chicago inspector general investigated in 2021 and concluded ShotSpotter “rarely produced evidence of a gun-related crime.”
To say nothing of the massive issue of bias. Predictive policing “creates the illusion that police departments…are being proactive about tackling crime. The truth is predictive policing just perpetuates centuries of inequalities in policing and exacerbates racial violence against Black, Latin, and other communities of color,” the nonprofit Electronic Frontier Foundation said in October. And that violence is of course already really bad.
A study of a New Jersey town last year by Wired magazine and the nonprofit investigative site The Markup found that the system predicted only about one percent of the 23,000 crimes that wound up being reported. Soon after, PredPol owner Geolitica said it would shut down large parts of the program.
The hits keep coming. Citing bias in particular, a group of seven federal lawmakers — including Ron Wyden, Alex Padilla, Yvette Clarke and John Fetterman — in January wrote a letter to DOJ asking it to stop funding predictive-policing tools.
“Mounting evidence indicates that predictive policing technologies do not reduce crime. Instead, they worsen the unequal treatment of Americans of color by law enforcement,” the letter said.
A PredPol-like system also would have been deeply unlikely to identify a threat in Butler, beyond the basic fact that it was a rally with a presidential candidate. And it almost certainly couldn’t have flagged Crooks, who had zero criminal or disciplinary record. Being bullied or suffering from mental health issues, while sometimes a contributing factor in the decision to commit murder, is hardly dispositive. Crooks also lived in an area where crime per square mile is 40 percent lower than the Pennsylvania average, which would have registered to any model as low-risk.
And yet there are caveats to all the naysaying. The Markup study, with that horrible success rate, only counted the particular category of crimes predicted; the number would have been higher with others. And the study used data from 2018 — hardly a period of the latest AI models. It’s entirely possible new models will be more accurate and less biased (the two go together). No company that I know of has emerged yet to widely take advantage of GPT-4 and other AI models that have arisen within the past 18 months, and it’s hard to comment on the systems’ worth until they do.
Also as anyone who has worked with these models knows, they’re only as good as their human interlocutors. Predictive-policing companies have often argued that low accuracy could be a function of improper usage. And while that’s a convenient excuse, it may not be entirely wrong either; veteran local police officers are not historically accustomed to working with machine intelligence. Training the officers could help as much as training the data.
Plus there’s the matter of staying current. With criminal schemes becoming more novel and pervasive, standing still with the same human skills is moving backward. For those who say the system wouldn’t catch someone like Crooks, the question is not whether a model would have caught one high-profile alleged perpetrator — it's whether an approach broadly deployed would make the world safer (and, of course, whether it did so by lowering the amount of bias inherent in policing instead of replicating it). Trying to invalidate predictive policing because it wouldn't have caught a would-be assassin is like writing off self-driving cars because of an individual accident. The model doesn't have to be perfect — it just has to be better than a world in which it didn't exist.
The reality is we simply don’t know if these systems can be helpful in predicting crime because nearly all the research comes from older, far inferior models.
The other reality, whether we like it or not, is the tide of history tends to favor new tools for the police, no matter their danger. Guns themselves were still a provocative idea for officers in the first half of the 19th century and only started to gain currency after the Civil War. A Trump administration that favors law enforcement over civil liberties could accelerate the trend.
It’s far from clear that you could get any meaningful data without constitutionally questionable surveillance of ordinary citizens — LLMs, after all, work on large amounts of data, and the U.S. justice system relies on collecting only targeted information. But anyone who thinks predictive policing is a thing of the past may want to look again. The tools will get better. And ready or not, they just may become commonplace.
3. FINALLY, IN NON-POLITICAL NEWS, I’D BE REMISS IF I DIDN’T MENTION STRAWBERRY.
Yes, Strawberry.
OpenAI has been rumored for months to be working on a high-level reasoning project, codenamed Q* (pronounced Q-Star).
We told you about Q* back in November. This is the application that can solve math problems — i.e., can do independent thinking in a way the current elegant parrots of Large Language Models can’t. So powerful is it (maybe) that it played a key role in all that crazy drama of board members trying to oust Sam Altman to prevent the company from going down that rabbit hole.
Well, they lost, and OpenAI researchers have been down there the last eight months poking around the dirt with Alice. Reuters reported last Friday that there’s plenty of potion.
The news service (which also broke the original Q* story) said its reporters saw an internal OpenAI document that “describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms ‘deep research.’”
Now, that’s not exactly math problems/independent thinking. But it’s still a lot more independent than the current models allow. If Strawberry can indeed do this, it could have all sorts of implications, not just for how AI is used in our daily lives but for reaching AGI — the holy grail of Artificial General Intelligence, i.e., a computer that can think like a human.
We’ll have more to say on this in the coming months, pending what we learn. But one thing is clear: OpenAI knows it needs to find something. There are now fundamental limits to what the company can do with the current models of LLMs. “My guess is [the next generation of LLMs] still won’t do hard math and planning problems,” Stanford’s Ray Perrault told Mind and Iron in April. “How do you get to the next stage?” Perhaps by rooting around the strawberry patch.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. Can 2024 do better? The first six months have been OK enough — or at least a lot better than last year. This week, we’re taking a step back.
MORE POLITICAL VIOLENCE ISN’T INEVITABLE, BUT IT’S NOT LOOKING GOOD: -3.5
PREDICTIVE POLICING MAY HOLD SOME PROMISE, BUT WE’RE A LONG WAY FROM HERE TO THERE: -1
STRAWBERRY, THE COMPUTER THAT THINKS LIKE A HUMAN? -1.5