Mind and Iron: We're entering a world where digital doctors follow us everywhere
Also, how Putin is shrewdly exploiting AI and Kate Middleton
Hi and welcome aboard another thrilling ride on the Mind and Iron Express. I'm Steve Zeitchik, veteran of The Washington Post and Los Angeles Times and lead conductor on said scenic passage.
Every Thursday we come at your inbox with news of what's happening in the world of AI and the future. What our lives will look like, what we should do if we don’t want them to look that way. It's a human-driven mission, supported by humans, so please consider being one who commits a few dollars to our cause.
Also of course please hit like on the post if you, well, like it.
This week we're taking a turn to health — to whether a coming age of relentless monitoring and feedback will be good for it. These imminent AI years will see detailed aspects of our day-to-day well-being overseen by technology, providing many of us with doctors (or mothers) as we simply walk around the house. How should we feel about a world in which we're peppered with real-time information about our bodies — empowered or oppressed?
Also, a look at how two big news events featuring some of the world's most famous people — Kate Middleton's announcement of her cancer diagnosis and Vladimir Putin's blaming-not-blaming Ukraine for the horrific Crocus City Hall shooting — are not only connected but offer a peek at the dark future of media. With some coverage of the Francis Scott Key Bridge tragedy tossed in.
And finally, we take the first of a few trips to March Madness to see how AI fared against da humans in forecasting da brackets. Were the first two rounds of buzzer-beaters and top-seed escapes secretly known to a machine programmed to understand? Or are such claims full of more air than a Cam Spencer jumper?
First, the future-world quote of the week.
“You don’t drive a car around without a dashboard. I would argue it’s just as crazy to go around without a health monitor.”
—Michael Snyder, a Stanford University professor who is constructing a universe of doctors we wear
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
The omnipresent physician; Putin’s AI-flavored gambit; what March Madness machines tell us about intelligence
1. ONE OF THE MOST VISIBLE WAYS OUR MACHINE-ANALYZING, DATA-HAPPY WORLD OF THE LATE 2020S WILL CHANGE is with regard to our daily health. Specifically, the AI tools accompanying us through it.
The concept is straightforward: Under this system, we get information about the hundreds of processes our body engages in every minute from computers that conduct billions of processes every second. And live longer and healthier lives as a result.
The notion briefly took a satiric turn Sunday when Marc Benioff posted a cheeky item to X about an AI Oral-B toothbrush. The Salesforce leader was clearly out doing some weekend shopping when he came across the below product and wondered why large language models that are out there transforming education and solving the climate crisis were keeping track of the piece of garlic stuck between our teeth.
The post brought out the comedians — like incubator Bill Gross' retort of equal, um, cheekiness.
But an AI toothbrush is actually not so crazy (apart from the price — $200 for something your dentist usually hands you with sample-size floss?). Leaving aside that and the liberal use of the word “revolutionary” (I don't recall much talk of oral hygiene in "The Motorcycle Diaries"), there's actually a meaningful proposal here.
Far from just attaching buzzy initials to move some product, Oral-B does utilize technology to prevent your next root canal. As the company describes, the toothbrush's AI "enables it to recognize your brushing style and coaches you for your best results every day. Using Bluetooth and your smartphone, it allows you to see and improve your daily brushing habits, and provide you with real-time feedback."
The firm’s pitch is basically this: If at least some percentage of the absurd pain and money we expend on dental care every year can be reduced by brushing better, why not let technology help us do that?
This is also a perfect application of AI evangelists' idea that machines can do what humans can't, and vice versa. An AI toothbrush isn't replacing dental hygienists — it's extending them to your home between visits. A system in which the G.I. or cardiologist or pulmonologist treats you at regular intervals before passing off daily oversight to AI-enabled wearables and apps seems like a welcome leap in our medical care, both improving health maintenance and potentially reducing emergencies. In extreme cases it could even provide an early-warning system for diseases.
Or maybe you feel differently. Maybe you feel like the chance for ambiguously better outcomes isn't worth the soft-core surveillance and general nudginess this will add up to across our health world. Because it will add up to a lot.
Here's just a sample of tools that are being developed in this vein.
—In January the Brooklyn biotech firm Nanowear landed FDA pre-market approval to sell a wearable garment that uses sensors to monitor blood pressure day and night — good for anyone at risk of cardiac events or who watches too many March Madness games. (That’s it pictured above.) Nanowear has a platform called SimpleSense that it ultimately aims to feed with 85 biomarkers, from oxygen saturation to blood circulation. If something is out of whack, it alerts you and your doctors. "The platform provides high quality, continuous and time synchronous biometric data, aggregating millisecond by millisecond cardiopulmonary assessments to assist medical professionals in remote patient management," the company says.
—In October researchers presented a study at a European MS convention showing that people with the disease who wore a smartwatch tracking their daily movements generated enough feedback via the platform that the progression of their disease — and potential treatment adjustments — could be tracked accordingly. Such a tool represents another category in the trend: persistent monitoring of chronic conditions, made a lot easier by more advanced gadgets.
—On-the-go EKGs have been a feature of the Apple Watch for some time, part of its suite of health apps. But Apple’s is not the only such app, nor the best. The most recent release from specialist KardiaMobile has been getting strong reviews, in part because its test offers six “leads” to Apple’s one. (A clinic-based EKG usually has 12.) The app tells cardiac patients whether they’ve entered AFib and need to get to the hospital, among other info. The use case for anyone with stents or a history of cardiac problems is obvious. But I think the shift in this new era will be an expansion beyond folks with known issues; scores of people don’t even realize they have blockages until major symptoms develop. No doubt some doctors will say there’s no need for an EKG in people without symptoms. No doubt there will be plenty of health consumers who say they’d rather get ahead of the issue — especially as testing gets this easy.
—A University of Cincinnati researcher named Jason Heikenfeld has spent several years investigating how sweat might offer clues to the body's health, including whether drug dosages should be upped or lowered throughout a regimen — a much more flexible and adaptive approach. “Take this fixed dosage no matter what and come back in two weeks” might soon be as outdated as leeches and bloodletting.
—Then there’s what we might dub the Oral-B genre, the kind of tech that gathers all sorts of specific info about your behavior, fills you in on what it’s found and advises on how to do better. How would you feel if tech could put you on an email chain filled with detailed data about how your cells are reacting to every aerobic movement or consumed calorie — basically turn you into an object of scrutiny like an elite athlete? Deeply annoyed, I suspect. But over the long run probably feeling and living better.
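The feedback loop these devices share (sample a reading, compare it to a normal range, alert when something is out of whack) can be sketched in a few lines. A toy illustration only: the biomarkers and threshold values below are invented for the example, not drawn from any vendor's documentation.

```python
# Toy sketch of the threshold-alert pattern common to these wearables.
# The biomarkers and "normal" ranges here are invented for illustration.
NORMAL_RANGES = {
    "spo2_pct": (94.0, 100.0),       # oxygen saturation, percent
    "heart_rate_bpm": (50.0, 100.0), # resting heart rate
    "systolic_mmhg": (90.0, 130.0),  # systolic blood pressure
}

def check_reading(biomarker, value):
    """Return an alert string if a reading falls outside its normal range,
    or None if the reading looks fine."""
    low, high = NORMAL_RANGES[biomarker]
    if not (low <= value <= high):
        return f"ALERT: {biomarker}={value} outside normal range [{low}, {high}]"
    return None

print(check_reading("spo2_pct", 91.0))
# → ALERT: spo2_pct=91.0 outside normal range [94.0, 100.0]
print(check_reading("heart_rate_bpm", 72.0))  # → None
```

Real platforms layer on continuous streams, personalized baselines and clinician routing, but the core of "it alerts you/your doctors" is this comparison running all day.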
Anyway there are more examples but you get the idea. What we're talking about with all of these innovations is a new level of transparency and mobility — a Fitbit for every part of our body in every waking moment of our lives.
And yet also not Fitbit, which offers rudimentary info to an elite group of the mainly healthy. (Less than 1% of the world's population actively uses one.) The new trend, I think, will be for the tech to offer far more granular detail to young and old, sick and healthy, up and down the spectrum — both people who have conditions and people who’d like to avoid them. People who’d rather health be a little more a game of chess they control and a little less a game of pin-the-tail-on-the-donkey.
An all-encompassing health system, really, in which you don't so much breathe without technology keeping an eye.
Propelling this revolution of course is the fact that tech can now gather, transmit and interpret information like never before. But more than that, the movement is fueled by an ideology: that the human body, for reasons of complexity or capitalism (unhealth after all means more visits into the system), has fallen way behind relative to other stuff we monitor.
As Michael Snyder, a Stanford University professor who is among the pioneers of this tech-as-constant-consultant movement, has said, “You don’t drive a car around without a dashboard. I would argue it’s just as crazy to go around without a health monitor.”
Now, I don’t know if that sounds helpful or horrifying, just like I don’t know if a toothbrush correcting you every time you slather on the Colgate sounds helpful or horrifying. I can certainly see the horrifying side of the argument.
Let’s not forget the human-psychology aspect too (something both scientists and techbros tend to forget). People may not want to be reminded of their health 24-7. Our car dashboard alerts us when there’s a problem, yes, but it doesn’t tell us the movement of every piston or the fate of every gas molecule. Also, cars aren’t people, and people get stressed when they constantly hear what’s going on with them. A lot of these aforementioned products would find an addressable audience among the illness-anxious — probably the exact population that shouldn’t be using them. There’s a reason we take time off between doctor’s visits; mental health is also health.
And yet I can’t help feeling like this is not only the world we are heading to, but the world we should be heading to.
Much of this 24-7 AI-enhanced care sounds weird to Benioff (and us). But then, a device measuring steps would have sounded strange to our grandparents. It’s just that now these step monitors — a nice tap on the shoulder but hardly focused information — can get supersized. And make us live longer and more enjoyably.
Even more important, they can democratize health information. The hospital portals of the last couple decades have been a step in this direction, allowing patients to see all their info instead of relying on a doctor to pull a paper chart and tell them about it. Feedback like this takes things to another level.
Sure, we need to do all of this right — make sure these devices and systems are regulated properly (something that decidedly did not happen with an earlier generation of medical devices).
Make sure they do what they say they will, since the potential for misleading or simply extraneous information is high. One doctor I know told me that since this amount of data has never been generated outside of individual patients in a lab, what it actually will be telling us about our health is an open question.
And data security should be a massive priority; it’s going to be a lot harder to keep a lid on records when they’re provided to a network of private companies than it is with a hospital database.
Yet all these caveats said, if this class of innovations works, medicine will work better. And the benefits will accrue not just individually but collectively. Better health for more people means a stronger health-care system — which in turn means better health, because the system is now weighed down by fewer unhealthy people. Getting more people healthier helps everyone become healthier.
Yes, the specter of all this body data being sorted constantly sounds like it belongs in the world of elite athletes (if not a satire) — the digital equivalent of teams of doctors poring over equipment to get every last health data point not for Michael Phelps but for Johnny Budweiser, competing in zero Olympics. But the history of personal health is one in which care once reserved for athletes eventually trickles down to us plebes. AI (which can partly remove the need for that team of doctors) and wearables (which can remove the need for the expensive machines) could accelerate this cascade. Let it flow.
2. I HAD BARELY FINISHED WATCHING THE STUNNING FOOTAGE OF THE FRANCIS SCOTT KEY BRIDGE COLLAPSE LATE MONDAY NIGHT before my mind went to a cynical place: how do we know it's not AI?
Or should I say to a reasonable place. Because the vantage point I was seeing — of lights twinkling on a distant bridge, of a grainy ship moving toward one of its columns, of a wholesale collapse of the bridge into the water like so much flimsy Tinker Toy — could plausibly have been AI.
This might sound outlandish, but the usual progression one goes through to ensure a piece of media is real didn't do much here. I scrolled through X accounts, which all seemed to be reacting to the same media I couldn’t personally verify. I went over to reputable news outlets, but they seemed to be reacting to the social-media accounts.
Eventually of course enough evidence accreted — emergency services reporting the event, public-official statements, eyewitness accounts from people who did not seem to be bots — to show it was real. But at that initial moment of revelation there was no credible way to know — the video provided no assurance whatsoever. (Including, I should add, to journalists charged with covering the event. The age-old tactic we reporters have of relying on video from the scene felt suddenly useless — someone we trust had to physically go down there to verify that the pictures were real.)
Check out the below image. I created it with OpenAI's tool in about five seconds, needing just two prompts. It's of course not what the actual Francis Scott Key bridge collapse looks like. But if we came upon this image would we know right away it wasn't real? While assuming the actual shot of a collapse was? The line between Dall-E and Dali is short indeed.
The same media skepticism had been front of mind when I watched the Kate Middleton video a few days earlier.
The sight of her sitting outside talking frankly about her diagnosis instantly quieted the skeptics — well, most of them — who had earlier gone off the rails questioning and theorizing about a photo the palace had released. But the video didn’t actually give us any more reason to believe than the photo did; it came from the same palace source averring its legitimacy (this was before the palace itself conceded it had manipulated the photo). The only difference was that, thanks to the historical difficulty of manipulating video, we were more likely to take it at face value than we were a photo.
But that's fast becoming less true — see what OpenAI released this week to show off Sora. Or this dime-a-dozen Obama deepfake. Both look for all the world like they happened with real people on a real set. Yet each was just a person typing prompts into a laptop. The tide is getting higher, and the fact that someone like me, who pretty decidedly shuns conspiracy theories (I hope), had the thought that I might need to see more than a video to know a bridge collapsed is Proof No. 2538 of the point.
Into this giant cosmic truth-dumpster comes Vladimir Putin. After the tragic Crocus City Hall attack last weekend claimed the lives of at least 133 people and hurt scores more, Putin, keen to blame Ukraine, did not outright contradict all the news sources that said it was indisputably an ISIS-K attack. He’s too smart for that. No, he did something more insidious: he said there was no way to know what happened.
As the NY Times put it, Putin “did not definitively pin the attack on Ukraine; nor did he refer to the assessment by American officials that a branch of the Islamic State was behind it."
This is an age-old disinformation trick. The disinfo expert Barbara McQuade calls it the “destroy-truth” phase of operations. Essentially it has the disinfo-spreader invoking relativism to chip away at the very idea of facts, so that even when a person comes along with one, it loses its force. If all truth is squishily relative, then there’s suddenly no power to or need for actual evidence. Facts are what we want them to be.
(It’s why, for instance, when Donald Trump was running for office in 2016 some Russian media actually reported that “some say he’s from Queens, some say he’s from Brooklyn, it’s a disagreement.” Well there’s no disagreement — he’s obviously from Queens. And this is easily verified. But when you make even a fact as knowable as this seemingly unknowable, then facts themselves lose meaning.)
That’s what Putin did here. As a bad actor you can’t just contradict the truthful explanation. First you need to clear out the hole. Only then you can put whatever you want in it. Which is what happened. Having suggested there was no way to know the truth, he said that the attackers "were trying to hide and were moving toward Ukraine," then began to sound more explicit notes about Ukraine and the West. A few days later, he asked, “Who benefited from it?…This atrocity can be just an element in a series of attempts of those who have been at war with our country since 2014 [Ukraine].”
(Notice, by the way, that even this isn’t a hard assertion of facts, more of an intimation. If you assert something that’s blatantly suspect, you’ll lose people. If you’re “just raising some food for thought,” your vulnerability disappears. How can we criticize someone just offering up a little truthy snack?)
Now in any normal news environment, all of this might not work. Or might not work very well. People aren’t idiots, no matter the mind-tricks. Ah, but we don’t live in a normal news environment. We live in a news environment where AI has made even reasonable people reflexively wonder about video of bridge collapses; we operate in a media fabric torn wide by tech. It’s in this environment that disinformation spreaders thrive.
As AI-disinfo expert Hany Farid of Cal-Berkeley told Mind and Iron last month, given the current climate “I don’t need to poison the whole dish of M&Ms. I just poison one and then you don’t eat any of them.”
(Btw, far from theoretical, disinfo has huge real-world effects, as has been shown time and again. If enough people believe the Kremlin that Ukraine was involved with the Moscow attacks, Putin doesn't need to straight-up defeat the country in battle. He can just erode the global financial and diplomatic support that helps them fight it.)
These attempted exploitations have gotten so bad that Sweden created a unit of its defense ministry known as the Psychological Defense Agency to counter disinfo from Russia and elsewhere. Far too soon to say whether it will work. But at least they’re doing something to stave off the threat. Unlike the U.S., which with its large language models at the moment is mostly increasing it. And ignoring how bad actors could be doing the same thing on our shores.
None of this will get easier as the tech gets better — the next Sora demo will doubtless be even sharper. And soon, of course, we’ll all have Sora, so these videos will be everywhere. We’ll know firsthand how non-factual a video can be because we just created one ourselves.
The Northwestern computer-science professor Kristian Hammond captured the current state-of-play in a recent interview with ABC News. "The clarity of truth we thought we had with recorded photography and video is gone. We've inadvertently built a world of propaganda engines."
And a lot of Vladimir Putins out there wanting to take them for a spin.
[NY Times]
3. LAST WEEK WE DELVED INTO THE QUESTION OF WHETHER THE ARCANE SKILL OF CHOOSING MARCH MADNESS WINNERS — and by arcane I mean impossible — could be done better by a computer properly trained.
Even the humans who organize the men’s basketball tournament seemed to get it wrong lately, as evidenced by all those bottom-half teams making deep runs the past few years.
The lack of major upsets last weekend suggests that human committee wasn’t doing so badly after all. After a semi-chaotic first round in which more than a half-dozen double-digit seeds won, the second round pretty much followed the seedings — there was only one victory by a team seeded more than two places lower (No. 6 Clemson taking down No. 3 Baylor). The organizers seemed to know which teams were better and seeded them accordingly.
But how about us outsiders? With the seeds in place, could we decode the eternal mystery of what happens when two groups of humans, with all their data points and x-factors, clash in a matchup to determine supremacy? And most important — could AI decode it better?
To determine the answer I decided to run a little experiment.
I first took the AI bracket of the betting site Sportsbook Review, which had fed a massive trove of data into GPT-4; its resultant “ChatGPT bracket” is the most comprehensive out there, incorporating all kinds of analytics and history in its training data. If any bracket circa 2024 is going to represent the machines, this one is it.
Then I sought a representative set of human brackets. As it happens, I have run a small informal March Madness pool for many years, which provides a pretty good frame of reference. About 30-35 people usually join — this year it’s 31 — from across the knowledge spectrum. Some may know a lot of what the AI was fed, but most don’t. So it’s a pretty good test of whether a machine designed for forecasting could outpace humans of not particularly exceptional knowledge.
My point system awards underdog wins, incentivizing slightly out-of-the-box picks. Sportsbook programmed its AI bracket that way too, prompting it to make “plausible but entertaining” picks. It was a fair fight.
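For the curious, an underdog-bonus scoring rule of the general kind my pool uses can be sketched simply. To be clear, the function and the point values below are invented for illustration; they are not my pool's actual numbers, nor Sportsbook Review's.

```python
# Toy sketch of an underdog-friendly scoring rule. The base points and
# bonus formula are hypothetical, invented for this illustration.
def score_pick(round_points, winner_seed, loser_seed):
    """Base points for a correct pick, plus a bonus when a worse-seeded
    team (higher seed number) knocks off a better-seeded one."""
    upset_bonus = max(0, winner_seed - loser_seed)
    return round_points + upset_bonus

# A chalk pick earns only the base points...
print(score_pick(2, 1, 8))   # → 2
# ...while correctly calling No. 11 over No. 3 earns a healthy bonus.
print(score_pick(2, 11, 3))  # → 10
```

The design point is the incentive: a bonus scaled to the seed gap rewards out-of-the-box picks without making pure chaos the winning strategy.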
Here’s where we are after two rounds. The humans in the pool ran a pretty wide point gamut. The top two scores currently are 100 and 98 points; the bottom two are 50 and 53.
And the AI bracket? Through the first two rounds it’s scored…72 points.
Yep, right in the middle of the pack.
In fact the middle-of-the-pack scores among the 31 human entrants — places No. 15 and 16 — are 72 and 70. The AI, had it entered my pool of 31 humans, would be sitting in a tie for 15th.
A similar finding revealed itself in the number of Sweet Sixteen teams predicted: the four people right in the heart of the standings — the 14th- to 17th-ranked entrants — averaged 9.75 correct teams. The AI bracket got…10. You pretty much could not devise a more average bracket if you tried.
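To make the middle-of-the-pack arithmetic concrete, here is the ranking logic run on a hypothetical score list. Only the top two (100, 98), bottom two (53, 50) and middle-of-pack scores (72, 70) match the pool's real figures; the rest are filler invented for the example.

```python
# Hypothetical pool standings; most values are filler, but the top two,
# bottom two and middle scores match the figures cited in the text.
human_scores = sorted(
    [100, 98, 95, 93, 91, 89, 87, 85, 83, 81, 79, 77, 75, 74, 72,
     70, 68, 67, 66, 65, 63, 62, 61, 60, 59, 58, 57, 56, 55, 53, 50],
    reverse=True,
)
ai_score = 72

# Standard competition ranking: 1 plus the number of humans strictly ahead.
ai_rank = 1 + sum(s > ai_score for s in human_scores)
print(ai_rank)  # → 15: tied with the 15th-place human, dead middle of 31
```

With 14 humans ahead of it and 16 at or below it, the AI lands as close to the center of a 31-entrant field as a score can.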
So what do these results mean? Well, they’re kind of thrilling, but for two opposite reasons.
First off, the AI did actually understand the basics enough to be competitive. Half the pool is trailing behind it, which means that if you were entering a competition of this sort and knew nothing but wanted to seem respectable, an AI could help you get the job done. The main goal of AI for many decades has been the Turing Test — the idea that a computer can blend in with humans undetected. And landing between the 15th and 16th humans in a 31-person pool is as blended-in as it gets.
Pretty cool. When it comes to forecasting the results of a messy system with many x-factors, AI can do as well as the average, slightly tapped-in human.
But the results are also thrilling for the opposite reason: that it’s being bested by the other half of the humans. For many years but especially over the last year, we’ve been sold the idea of AI as some kind of mega-intelligence that can think in ways humans can’t and know what humans don’t and in turn make us all irrelevant. Well, it’s getting its best shot here. And so far, fourteen out of 31 humans are saying, nope, not good enough. An old-fashioned human brain, with all its methods of reasoning and reliance on intuition, performed better on a test than an AI armed with all the data a system could ever want.
For those of us who dwell on the question of whether AI can make such a compelling case for lapping the humans that companies, governments and even individuals will have no choice but to call it in, our experiment offers a small but telling rejoinder: not really. (Or at least not yet.)
Anyway we’ll see where all this stands when we know the Final Four teams after this weekend. But a picture is coming into focus. The AI is out on the court and hanging with the competition. But it is far, far from running away with the game.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. Last year ended with a score of -21.5 — gulp. Can 2024 do better? It started off strong but now we’re sliding.
A NEW ECOSYSTEM OF AI HEALTH MAINTENANCE IS EMERGING: Could actually make us a lot healthier and more empowered. +4.5
PUTIN IS LEVERAGING AI-ENABLED DISTRUST TO FURTHER HIS AGENDA: It keeps steamrolling. -3.5
GOOD OLE HUMANS ARE SWATTING AWAY AI ON MARCH MADNESS PREDICTIONS: +2.5