Mind and Iron: Google is getting rid of Googling and we should be very worried
The Overview is an undermine. And the NFL discovers machine intelligence.
Hi and welcome back to Mind and Iron. I'm Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and lead farmer at this tech-news homestead.
There’s some news going on this evening. Or so we’ve heard. But hey, other news doesn’t stop. And this AI and tech stuff is pretty important to our democracy too.
Remember, we're a human-centric news site — not only by putting humans first in the coming machine-brain revolution but also as a site run entirely by humans. Everyday scruffy individuals, with no dependence money-wise or access-wise on big corporations. (Which really comes to the fore with this week’s stories.) Please consider supporting us so we can keep at that.
This week we press ahead with the next phase of our coverage of Google's slow (but not slow enough) shift to AI answers, which sounds inside-baseball until you realize we query Google 99,000 times every second and rely on it to tell us, well, everything.
Also, the NFL is considering a more automated system to determine first downs — a portent of a new and fraught era of machine exactitude in our sports.
And finally, a survey about how much we really use AI shows some surprising results. (Short version: don’t believe everything you read.)
First, the future-world quote of the week.
“Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”
—Google's now-infamous AI answer about how to ensure that cheese sticks to pizza, which in one absurd statement encapsulates the entire often-wrong-never-in-doubt attitude of the machine age
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
The revolution will be overviewed; here come the NFL machine refs; just how much do we actually use ChatGPT?
1. THE SCANDAL THAT IS GOOGLE’S OVERVIEWGATE CONTINUES APACE THIS WEEK, SENDING EVER-GREATER SHUDDERS DOWN OUR SPINES. Which, as the AI search-reply feature could tell you, are the glued backs of books that can be found inside every human being.
“Search-engine dynamics” sounds wonky. But it's a massively serious issue. At stake is nothing less than how we're informed and who profits when we do.
As you may know, Google’s Overviewgate is the scandal that arose — who could have anticipated it! — when the search giant a few weeks ago began surfacing AI answers plucked from its own internal index of sites rather than letting us see those sites for ourselves.
The screenshotting of all the funny/unsurprising hallucinations as a result of this transition reached apex silliness this past week. (A rock diet! Glue on pizza! Maybe no more culinary suggestions from LLMs.) Over the weekend the company acknowledged the mistakes but said that they were mostly confined to "very uncommon queries." All pretty amusing. Humans are safe; machines are fools.
Except…
The models will get better and better and people will use these results more and more. And the Internet, as we and others have been warning, will get more walled off and less human. Instead of being steered to an array of helpful (human-written) sites, we'll all be hearing only the most narrow (machine-written) answers without even an inkling as to what we're missing.
At worst this could lead to rampant misinformation (the kind that's less obvious than rocks for breakfast) and at the very least to a reduction of all the beautiful serendipity the World Wide Web has given us ever since the moment we still called it the World Wide Web. "Ask a question, get the hyper-specific answer you were looking for and nothing else" sounds like the opposite of how, and why, many of us have been using the Internet for the past 25 years. More's the pity. Which, as everyone knows, was a line first uttered by the character Touchstone Pictures in the great volleyball comedy As You Spike It.
Google also confirmed it will be selling ads off Overview responses, which scuttles the hope that the company might slow down the Overview rollout because rolling it out would shut off its advertising spigot. It will; they'll just open a new spigot.
A cynical, corrosive ethos bubbles beneath what Google is doing: picking off information it had nothing to do with creating and selling it as its own. I mean, the old bargain was already a little shaky, since Google didn't really have anything to do with producing the information it was providing a window on (and collecting ad revenue from) all these years either. But at least it wasn't hiding the source — at least it wasn't pretending to be anything more than a locational mechanism.
The new model is much more misleading: information on which Google expended neither sweat to create nor time to publish is now presented entirely as its own. In doing so Google’s Overview not only obscures the actual creators but deprives them of the traffic. (That its name sounds slightly like Overlord doesn't help its case.)
Then again, there's a tradition of ripoff-ery here that Google didn't invent. The individual writers of so much of the content we search for on the Web are not compensated when someone views what they created (or if they were, it was with a long-forgotten salary). Publishers for years engaged in this sort of low-key, copyright-sanctioned exploitation of individual work. Now Google is continuing the tradition, with the publishers themselves the ones being boarded and wrung. All roads lead to Big Tech. So what if they didn't buy the materials, lay the asphalt or maintain the driving conditions?
The fact that so many of these wrong answers stem from jokes — you know, the stuff humans do — does feel like a bit of a victory. (The rock diet answer appears to originate with a piece in The Onion and the pizza-glue from a Reddit user's quip.) The human brain is not a force easily tamed. Then again, just because it can’t be tamed doesn’t mean it will run the Internet.
What we can do about this Google move is anyone's guess. (I would suggest asking the AI, but, well.) The reality, I suspect, is nothing much, not with the tech iterating quickly, competitors coming on fast and a general corporate imperative toward automation and de-emphasizing the stuff derived from people.
Sure, human-generated sites will still exist in the years ahead — largely via Wikipedia-esque nonprofit entities — but they will be harder and harder to reach, requiring ever greater workarounds. I've increasingly found myself seeking out such sites in recent months anyway, Wikipedia and Reddit and Quora, to ensure I get the lived-in experience and valuable crowdsourcing of a bunch of humans instead of the human-adjacent bots and their corporate spin that have been popping up with increasing frequency. Now we'll have to dodge the Overview answers too.
So RIP, democratic, messy and actually informative Internet, 1995-2024. You were gone too soon. Then again, better to burn out than to fade away. Which, as the Overview could tell you, was a lyric in a rock anthem first written and performed by Thomas Edison.
2. ON THE SPORTS TALK SHOWS THAT I CONSUME A TOTALLY HEALTHY AMOUNT OF, a debate has been raging this past week over a new piece of sports-officiating automation.
It involves the NFL, which is considering an optical tracking system to replace the charmingly retro “chain gang” — that stripe-clad team that dutifully huffs onto the field to measure whether sufficient inches were achieved for a first down.
Now, this is not the “spot” — the completely arbitrary placement of the ball by the official in the place they think it was when the player’s knee hit the ground. That, it seems, will go on being adjudicated by human eyeballs. No, this is the measurement process — i.e., the process that ensures ten yards have been gained from the spot of the previous first down.
The NFL will test this new optical system (it operates on cameras and infrared) this preseason. If it’s deemed effective, it will be used in all NFL stadiums during the regular season. The chain gang will sit down (or be kept around as an anachronistic backup for a little while longer).
Even if this system technically works I have some practical questions — for one thing, how does the tech triangulate where the ball actually was when a knee or other body part hit the ground, since said ball often continues to move after this happens?
But the bigger questions are spiritual, as argued by pundits on said talk shows. On a recent segment of ESPN’s “Around the Horn,” host Tony Reali and several panelists put forth that it’s about getting the call right and so, if this system works, bring it on. Panelists Clinton Yates and Kevin Clark volleyed back that this would take the romance, the drama, out of the game.
This is a question already roiling all of sports — anyone watching the French Open right now will be struck by that charmingly dusty (literally) practice of the umpire coming down to check the mark of the ball on the out line of the red clay. Roland Garros is the only one of the four tennis majors not to have made its peace with full digital automation on these line calls, but that will change, at least on the men’s side, by next year.
Baseball, in contrast, is still hanging on. The tech of a robot eye calling balls and strikes — an elaborate system known as ABS that uses machine learning to tailor itself to each player — is here and viable, and has been working its way through the minors. Such a system would almost certainly have prevented one of the most egregious MLB playoff calls of the past few years. But the Majors have resisted it. (Commissioner Rob Manfred said last week he supported ABS as part of a secondary challenge system. Which just seems weird: if you trust a machine on the second go-round, why not trust it on the first?)
And yet I understand the philosophical resistance. A football nosing over a yard line is an empirical fact no matter who’s carrying it; a baseball strike zone varies tremendously by batter, and so perhaps requires a nuance-seeing human to pull it off.
Then again, pitches are getting both faster and spinnier, and that makes balls and strikes harder than ever for the human eye to spot, as this column astutely points out. And so back and forth we go. (The retirement this week of notorious call-misser Ángel Hernández may be the best argument for the techies; I’m at least 60 percent sure he’s been a bot sent by Silicon Valley to make the case for AI.)
The question in fact is not just roiling sports but all of society — how much should we use technology to increase efficiency when to do so means displacing not just human jobs but human unpredictability and drama?
These complexities are why so many investors still say they’d rather make their own choices than listen to an AI; it’s why so many creators eschew AI tools even when those tools might give them a boost.
Quant types argue that machine intelligences make life more accurate. Right-brained sorts counter that a life filled only with accuracy is a life empty indeed.
In this regard sports sit at an uncomfortable nexus. They are, on the one hand, a massive business with not just player salaries but gamblers’ wallets on the line, and thus would seem to militate for maximum accuracy. On the other hand a sport is, in the end, still a game, one meant for our entertainment, there to harness human emotion both on the field and back home.
No one watching these NBA playoffs, with their endless video reviews and dozens of slow-motion angles, would say the product is enhanced by technology. For every call that’s right, the experience feels wrong; all this forensics makes us forget this is supposed to be fun. The finale of NBA playoff games — long the sine qua non of athletic finishes — can now feel like an interminable judicial-review hearing occasionally interrupted by some foul shots. More technology doesn’t make the game better. Often it makes it worse.
I totally get why tennis would want to impose consistency on line calls. And yet with that consistency so much is lost — we don’t get fans facepalming in beautiful human disbelief, we don’t get the strategy of when to use a challenge, and we never, ever would have gotten John McEnroe. The uniquely 21st-century obsession with making life more efficient may be a worthy goal. But we probably should stop pretending it comes without costs.
3. A NEW POLL ABOUT AI USAGE GETS ONE THING VERY RIGHT AND ONE THING VERY WRONG.
The Reuters Institute for the Study of Journalism and Oxford University recently hired YouGov to ask 12,000 people in six countries (across four continents!) about their AI usage. What they found was that the number of people who use ChatGPT daily was small — really small. The highest percentage was seven percent, in the U.S.; Japan, France and the U.K. all failed even to top two percent. The number who use Microsoft’s and Google’s AI products was even smaller.
The study found that some experimentation is common — about 40 percent of people under the age of 55 have used ChatGPT at least once. (The most willing demographic is 18-24-year-olds, 56 percent of whom have tried it.) But regular usage, the study found, is rare.
"Large parts of the public are not particularly interested in generative AI,” the lead author of the report, Richard Fletcher, concluded to the BBC.
Fletcher and his cohort are on the money in one regard: adoption is almost always slower than tech enthusiasts and journalists believe. Whether because of snow-blindness or something else, those of us incentivized to take an interest in this stuff tend to overestimate the interest (and time) of all the good but otherwise distracted people everywhere else. I still remember the excitement among the tech and entertainment press in the early days of TiVo, when most of America still thought it was a sandal brand.
But the study also gets something big wrong — a misconception that honestly drives me, and many of us who cover this stuff, crazy. Namely, the idea that machine intelligence intervening in our lives is a tool we opt into, as opposed to a foundational way of existing. Asking people whether they’re interested in AI is like asking someone if they’re interested in using vowels. I mean, hypothetically I guess you could opt out — you could move to an Eastern European country and be very taciturn. But for everyone else vowel usage is built into our lives whether we’re “interested” in it or not.
AI is already an unseen fixture in our lives if you use a map app to get somewhere; search for a mate on a dating app; listen to a recommendation from a streamer; or rely on auto-complete on your phone. Soon it will become a regular part of searching for information online; translating conversations with people from another culture; determining who to vote for (whether to suss out our own preferences or because of campaigns targeting us); and making life decisions small and large. To name a few.
(Btw I asked an AI image-generator to “create an image of people who don't want to use ai” and this is what it gave me. They sure look like the kind of people who could be using AI!)
Now I don’t know if this opt-in tool idea came about because of the way AI has been commercially packaged (Microsoft trying to tempt us into trying Copilot the way it did Excel or Word) or because the whole notion is so vast we’d rather just boil it down to something manageable. But it’s a misconception we’d be wise to let go of. Because the sooner we do the sooner we can get to discussing how to steer it in the best possible/least bad direction.
A few months ago I asked someone in the fundraising space about whether they imagine AI changing their jobs in the coming years. “Oh, not really,” they said. “Maybe the people who do our marketing brochures; it could help those people. But those of us who raise money for a living don’t have much use for it.” Somehow the person couldn’t imagine using a machine intelligence and its data to spit out the potential donors most worth spending time on or the pitches most likely to resonate with them — two pretty straight-ahead use cases.
TiVo (and DVRs and streaming) stuck both because it was an innovation with a clear value proposition and because the forces that sell us our entertainment so thoroughly coupled it with every aspect of their products that you have to go painfully out of your way to avoid it. Ditto for AI — no matter what we tell pollsters about ChatGPT.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. Can 2024 do better? So far it’s been pretty good. This week it’s… pretty bleak again.
IT’S THE END OF GOOGLING AS WE KNOW IT: I don’t feel fine. -3.5
SPORTS OFFICIATING GETS YET MORE AUTOMATED: -1.5
PEOPLE WHO USE AI ALL THE TIME INSIST THEY’RE NOT MUCH INTERESTED IN AI: -0.5