Mind and Iron: A new publication for our humanist future
Or, a veteran Washington Post reporter helps you make sense of where the hell we're headed
Welcome to Mind and Iron, a new newsletter about how this crazily changing tech world is transforming how we live, work, heal, become informed, get entertained and do pretty much everything else. (Weird name. More on that shortly.)
I’m Steve Zeitchik, a longtime reporter with the Washington Post and Los Angeles Times who in 2023, the Year of Our Logan Roy, left behind more than 15 years of working for Big Media to launch a direct newsletter.
I loved my time in the trenches of some of the country’s most influential newspapers, where I was surrounded by scores of talented writers, story editors, copyeditors and designers. I’m feeling slightly…less regret at leaving behind a Big Media system that can feel too-many-cooks bureaucratic (I know, you’re shocked) or panicky social-media-chasing (I know, you’re even more shocked!).
I’ve spent much of my career covering both culture and tech, and that’s what Mind and Iron will do — locate that spot where the vectors of business and science crash into each other and land right onto our lives. Many of my stories in recent years have looked at the human consequences of tech developments (the AI researchers changing how we’re screened for cancer; the Whitney Houston hologram that freaked out everyone in Vegas; the algorithm that could predict the next Jan 6; the robot umpire that would transform sports as we know it).
And that’s what you’ll see in this newsletter — simply, pieces that make vivid how society and humanity could change (and not change) in this new tech era. Stories about how these new developments will affect our lives in the most profound ways. Stories that resist the pull many legacy outlets don’t always resist in their tech coverage — toward heavily covering the next product release or Big Corporate maneuver.
No, we’ll do the human stuff. And fundamentally we’ll see things through a humanity-affirming lens — by covering either changes that could make our lives better or changes we need to avoid if we don’t want our lives to get worse.
Mind and Iron will serve up everything from deeply reported stories to quick-hit interviews, from boppy thoughts to rich analyses — all based on conversations with actual sources, and on years of covering tech, culture and humans.
In our first issue, for instance, you’ll get a fresh longread in our BigMind section about the victim of a crypto scam who died by suicide, and my attempt to re-create what happened by tracking down the man’s friends. You’ll get peeks at novel uses of AI in schools, and less impressive ones in the art world. Also, remember that whole craze this winter/50 years ago when people thought ChatGPT was coming to life and going after the New York Times’ Kevin Roose? We’ll have a ditty on that, because it still offers a surprising lesson and because, well, we weren’t in existence when it happened.
In the coming weeks we’ll continue to run with these themes — AI as it’s affecting so many parts of our lives, from politics to marketing to education to health.
But it won’t all be AI. Cutting-edge forms of transit, media, biotech, finance, weapons, religious worship, even space exploration — they lie within the Mind and Iron purview too. Basically, if it’s going to make tomorrow look and feel different than today, we’ll cover it. And for those who know me from my entertainment reporting days, don’t worry, plenty of pop culture will get infused throughout. You never forget your roots.
We’ll do all this in a series of sections, from our news roundup (“IronSupplement”) to a commentary space (“MindandIrony”) to even what’s happening on the more extreme end of the future space (“MindF#ck”). (Yes, we are going to run the wordplay into the ground.)
So that’s the gig. A handy, accessible, fun but still thoughtful guide to all the insanity coming at us — everything you need to process our rapidly evolving existence.
The world is shapeshifting. It is shapeshifting at a rate that will make the previous era of tech-driven change — of social media and digital payments, of streaming and laser surgery — seem downright Conestogan by comparison. And wouldn’t it be nice to have a few words every week to help make sense of it?
Of course, if you have your own ideas on something I should be covering — a tip, a notion, a client, a vision that came to you while thinking ‘there has to be a better way to spend my time than watching these talking heads on cable news’ — please reach out. Steve@mindandiron.com. I’m eager to hear all of it. Administrative issue? Hit up admin@mindandiron.com and one of the many Oompa-Loompas who work for this large-scale operation will help you out. There will be some special guests writing soon too, so keep an eye out for that.
One housekeeping note: You’re currently receiving this entire newsletter free as part of an introductory offer — new platforms! we’re giving it away! — though in the very near future there’s going to be a premium tier where, if you want the full complement of content, it will cost a few bucks each month. Hey, I gotta eat too. But I also think that what’s on offer will be rich, fun and important enough to be worth those couple spare shekels. But you tell me.
OK, now let’s get to it, shall we?
Oh wait, the name! So here’s the deal. Isaac Asimov originally wanted to call his “I, Robot” collection of AI-themed stories “Mind and Iron.” But his publisher prevailed on him to call it “I, Robot,” a name he disliked. Which frankly makes sense, because if you could come up with an advanced intelligence that could save/ruin civilization it would probably say something more impressive than a pronoun-y grunt that sounds like me when I’m hungry and see ballpark fries. “Mind and Iron” describes much better the tension between intuitive human and analytical machine. Which is what he was contemplating. And what we’ll be exploring.
If you’ve read these Asimov stories (forget that dunderheaded Will Smith movie), you’ll know ol’ Isaac is fundamentally an optimist — technology can serve humanity and even achieve utopia. BUT he doesn’t think this happens by accident. It happens with the greatest minds putting in max effort and pulling in the same direction. If they don’t? All hell breaks loose.
Asimov says, let’s think about this stuff in the right way. Otherwise machines + profit principle = off the cliff. Otherwise iron overcomes mind. Otherwise machine dominates humanity. And as every screenwriter who’s ever sat in a café on Melrose has tried to tell us, that doesn’t turn out well.
And that’s the underlying philosophy of this space too. After (despite) all my reporting, I’m fundamentally optimistic about what these innovations can bring. But only if they’re undertaken thoughtfully and with a larger social good in mind. Otherwise, all hell. Like with so much else in the world, the outcome is only as good as the people.
Ok, enough blather. To the good stuff!
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Mulder was right; Monet Monet; AI education that doesn’t suck?
1. Someone with a laptop and too much time on their hands recently decided that it would be a good idea — one that humanity could definitely not live without — to use AI to generate expanded backgrounds of Monet and Da Vinci paintings beyond what the artist/the world/God Herself intended.
The logic seems to run along the lines of “well, it can’t yet create an enduring masterpiece, but maybe AI might be able to ruin one?” And so this is what we get — what’s really happening in the outer limits of the Mona Lisa’s field of vision, e.g. The mind cogitates on all the truly rich ways this could be applied, in various art forms. Carton after the guillotine fails. “A Love Supreme” had Coltrane been given that second hour. And by rich I mean horrifying. At least some art-savvy Twitter users put a stop to it — for now.
2. The concept that alien craft already are present on this planet has long been the stuff of conspiracy theories, like that of self-appointed ’80s physicist Bob Lazar, who has been just slightly discredited in the time since. But a whistleblower named David Charles Grusch is suggesting it might be time to start taking this notion seriously. Dude has bona fides — a former combat officer in Afghanistan as well as an intelligence officer who worked closely with the U.S. Navy’s project to identify unidentified craft.
Grusch recently told the science-tech site The Debrief that the U.S. already has “intact vehicles” of “exotic origin…[with] the possession of unique atomic arrangements and radiological signatures.” Stops just short of calling it alien but basically sounds like that’s what it is. The truth is out there — but what if marooned craft are already in here?
While it’s possible that some government (or prankster humans) attempted to design said craft specifically to seem non-human, no one involved in the research cited by The Debrief appears to think that’s what’s happening here. These are found craft of apparently genuinely non-human origin. The implications are chilling/exciting/instructive/pick your sci-fi tableau.
One funny footnote: Grusch says he’s blowing the whistle because he’d previously been intimidated by intelligence services into not revealing the findings to Congress. (I guess they’re worried some oversight committee could tighten the vise.) The found craft of intelligent aliens could radically transform the future. Congress and intelligence agencies infighting? That’ll never change.
[From NY Mag and The Debrief]
3. Speaking of intelligence-agency/military confusion, what to make of the classic “misspeak” moment from Col. Tucker Hamilton, a U.S. Air Force officer who runs AI testing for the military branch. According to an account from a summit in London last month, Hamilton said that a simulation saw an AI shrewdly circumvent a human operator — who was telling it not to kill a perceived threat — by instead knocking out a communications tower so the operator could no longer message with it. The move suggests that an AI can autonomously find a way around its programming — scary because it brings into full relief how AI can outsmart humans, and in the most lethal realm possible.
No sooner did this get reported than the walkback started, culminating in USAF spokesperson Ann Stefanek saying that no such simulation had taken place. But this is a distinction without a difference, because as Hamilton, a known cautionary voice on AI ethics in the military, said, it almost certainly is what would have happened had the test been run.
"We've never run that experiment,” Hamilton told the Royal Aeronautical Society — then added, seemingly rather importantly, “nor would we need to in order to realize that this is a plausible outcome." (Not sure why the USAF is keen to say it didn’t run a simulation that would be, you know, kind of smart to get done, but another matter.)
Basically, what Hamilton is describing is what many of us fear: A world in which AI can outsmart humans, or at least think of possibilities we didn’t imagine until it’s too late. Then again, that’s what simulations are for, so at least we have time to start game-planning for this. It won’t lap us with awful consequences — yet. (More on AI-driven autonomous weapons in a future episode.)
[From The Guardian, Insider and Aerosociety.com]
4. The threats that AI poses to education have been a pretty heavy focus in the past few months, in part because anything that can even remotely harm our kids’ learning is worth taking seriously, in part because we all have nightmarish memories of pulling all-nighters on term papers and did machines just magically make all of that obsolete?
But The Deseret News finds some pretty cool examples of teachers using AI in the classroom in productive ways. A few of them come from a history professor at Utah State, Chris Babits.
Babits actually instructed students to use ChatGPT to complete an assignment and then asked them to do a separate assignment breaking down what it got wrong in their view. Good for critical thinking, and a pretty nifty safeguard against cheating: to find out what ChatGPT got wrong, you can’t exactly turn to ChatGPT.
But what Babits did that I really like is abandon outdated modes of teaching for something more creative and forward-thinking. The professor actually eschewed term papers in his class because he knew ChatGPT could pretty easily recycle old information and churn out papers as well as most humans. Instead, he had his students tackle trickier tasks, like creating a museum exhibit and then a social-media campaign to market it — challenging college kids in a way those of us who went to school in non-GPT eras were never challenged.
“[It's] probably more meaningful than sitting down and writing three essays on three different books over the course of a semester,” Babits said. It sure is.
[From The Deseret News]
MindandIrony
A possibly penetrating, perhaps droll comment on current tech developments
When AI Comes Home to Roose
The story has by now passed into legend: as reporters began trying out the beta version of the new Bing this winter, the New York Times' Kevin Roose found himself in an…interesting conversation with the search engine. What began as a straight-ahead request for information with the ChatGPT-enabled program (codenamed Sydney by Microsoft) had turned, over the course of two hours, into a progressively weirder exchange.
Until finally the AI, sounding angry and needy, suggested it was in love with Roose and that he in turn should, or did, love it more than his wife. The AI also tossed in its not-so-hidden desire to hack, misinform and sow chaos across the Internet.
The transcript went viral online and, in the ultimate if ironic sign of digital infamy, ended up on the print cover of the NYT. Social media was filled with talk of the impending computer takeover, with some even calling for Congressional action. This was “2001’s” HAL, only real.
"Many will think this is a parody but it isn’t. It’s the nightmarish sci-fi future that has actually arrived in 2023. It is amusing, electrifying, terrifying,” wrote the veteran journalist Jonathan Alter. “To harden the target — i.e. humanity—Congress must pass a bill holding Big Tech liable if their Chat bot rules fail.”
The reaction from the people who've been following this stuff was a bit more…low-key. As AI veteran/my old colleague/refreshing voice of sanity Will Oremus eyerolled, "friendly reminder that Bing does not have emotions no matter what name you give it!" A certain amused ennui hovered over the future-insider set, whose response I'd translate roughly as "what Roose basically did with Bing was make doves fly out of a hat. Which is cool. But it don’t make him a bird breeder."
It would take a certain-sized blind spot, this camp held, to conflate Sydney's act of imitation with something real; channeling sentience wasn't the same as *becoming* sentient. That all this went down as a social-media #tbt surfaced Katie Couric's 1994-era "what is Internet" clip with Bryant Gumbel (is it something you write to?) only underscored the point: We might at some point look back at fears that a computer will steal a New York Times reporter's wife with the same how-naive-were-we chuckle that we now give a “Today” host trying to figure out if the Web is a pen-pal. [Ron Howard voice-over: that point is now.]
After all, this was little more than an elaborate sleight-of-hand — of our own design. The point of an AI like this — the whole way it *works* — is that it’s a so-called large language model, trained on everything humans had written and said before. And so, so much of what humans had written and said about AI before was...well, how we might love it or how it might threaten our global safety.
This AI was sounding like HAL or “Her” not because it was actually coming to life, but because we had spent so much time creating fictions like HAL and “Her” for it to learn from in the first place. This wasn't a realization of our fears — it was a reflection of them. Look no further, truth-seeker, the enemy is you.
(The idea that we need immediate legislation to stop what is essentially a kind of rehashed storytelling is especially funny. If we're really asking Congress to stop a mechanistic approach to fiction then half of Hollywood would be in trouble.)
If there was an apt movie comparison it might be to the Lumière brothers’ 1896 short of an arriving train, which reportedly sent audiences running from the theater, afraid it was hurtling toward them. New technology always appears to be coming for us first.
The irony in all this teapot-tempesting is that the Roose incident does highlight a danger — just not the one people flipped out about. Our collective reaction to his conversation suggests, more strongly than has ever been suggested before, that many of us are willing to suspend our disbelief and treat bots almost as humans, as worthy equals in the matter of complex emotional states. This has huge implications. A raft of startups is, as we speak, creating "virtual humans" — AI-powered screen beings — for a host of functions. To guide the lost through airports. To serve as companions to the elderly. And even, eventually, to become friends with our children.
The notion that a recent version of GPT can now imitate humans so well that we more or less believe it has emotions like sadness, anger and loneliness suggests these are not idle efforts. AI bots are a powerful force we may soon want to be spending a lot more time with.
For if we are so ready to give in to the illusion of marriage-destroying Sydney, then how quickly might we embrace those AIs programmed to love us? How quickly will some of these startups succeed? And how much of a problem would we soon then have on our hands? The last bunch of years has brought the reasonable fear that too many of us are spending too much time online with social-media friends who are not really our friends. That worry will look positively quaint once we don’t even need humans to have a satisfying digital interaction.
Plenty of research remains to be done, but some psychologists are already sounding the alarm about AI-powered virtual humans. It's easy to see why. If the fear is that children spend too much time communicating on Instagram or TikTok with people they never met, what happens when their digital interactions stop involving people at all?
Also, how will all this impair our ability to distinguish real from fake, which — let's face it — isn't exactly stellar as it is?
And maybe most of all, what effect will there be on our social skills when we no longer need to deal with the messy business of unpredictable humans for friendship but can instead opt for an AI, one we can program and train to respond as we’d like? We could talk to it all day long instead of to real people. (Kazuo Ishiguro wrote about this in “Klara and the Sun” in 2021, and man, it’s seeming a lot less science-fiction-y by the day.)
In short, what happens both to our psyches and the social fabric when AI becomes a viable alternative to human companionship?
In this regard the Roose episode does offer reason for concern. But it’s not because an AI will come along and forcibly steal things like our relationships and our humanity. It is because we might, slowly but surely, decide to hand them over to it.
BigMind
A longread on something kinda serious and/or important
Alright, so we’re going to try to do a magazine-style deep-dive with at least some semi-regularity. This first one’s close to my heart because it’s a story The Post refused to publish. My immediate editor, bless his heart, wanted it, but the situation up the chain was…another matter. I tried, for weeks on end, and was getting nowhere, not with anything remotely resembling what you’ll read. Were they uncomfortable with the human implications? Didn’t think it important enough? I tried in vain to figure it out. Anyway, here’s the story. All I can say is I hope I finally did right by you, Hutch.
Crypto’s Tragic Figure
The email stares at me, daring me to open it.
“We had a victim in our group commit suicide back in May,” the subject line reads.
My stomach sinks. I switch tabs. I don’t want it to sink any further.
The “group” is a Facebook group, and it includes hundreds of victims of a romance-based cryptocurrency scam known as pig-butchering that had claimed at least $66 million in people’s crypto with the help of a woman who called herself Alice. It is the summer of 2022, and a few months earlier my colleague Jeremy Merrill and I had broken the scam story, focusing on a victimized former Atlantic City cop named PJ. PJ’s story was like so many others', in which a single man between the ages of 30 and 60 is cajoled by a woman he meets online (herself likely conscripted by a crime ring) into slowly giving over his trust, and eventually his money, on Coinbase, from where it is stolen.
The email on this afternoon has come from another of Alice’s victims, a 48-year-old Ohio actor named Troy. His subject line suggests the worst possible outcome.
I gulp and click ‘open.’ The message and a quick Google search of local news coverage reveal what happened.
On May 25, 2022, in the very early hours of the morning in the western English town of Chippenham, a 51-year-old man named James Hutcheson — Hutch, to his friends — stepped onto the tracks at a station near his home. He was killed instantly. Authorities had ruled it a suicide. Hutch had been a British Transport police officer for nearly 28 years.
What the coverage didn't say (but the Facebook group did): Several months earlier Hutch had lost $115,000 to Alice.
It’s of course impossible to know exactly what Hutch was feeling. But conversations with other Alice victims had returned an almost identical set of emotions. Resentment toward the scammer. Anger at Coinbase. Frustration with law enforcement (little of the victims’ money has been recovered).
And, ultimately, the inexpressible regret that comes from knowing you’ve thrown away, with one lapse, so much of what you’ve worked for.
I, on the other hand, feel nothing but anger. All the people in a position to do something about this instead just succumbed to inaction — the law-enforcement officers, the Coinbase executives, everyone else who Hutch and so many other victims reached out to. They all just whistled through their day, too unbothered to follow up. And now a man was dead.
Then it hits me. I am one of these people.
HUTCH OBSCURED
The email looks right back at me, and I am scared to open it.
Hi Steven
I lost 115k usdt to the same scam as PJ.
I’m in the UK but the story sounds the same.
I can be called on XXXX.
James
The message had come on April 5, 2022, a day after our Alice story was published. In the flurry of emails from other victims after the story ran — and using the journalist’s clinical calculus in which new scams go to the top and more-of-the-sames get backburnered — I never replied. As I look at it now I notice I had even forwarded the note to my personal account, the usual trick of putting it on my digital to-do list. But that somehow makes it worse: I'd wanted to do something but didn't.
Sure, I could rationalize it — mental health is complicated, my response may not have done much to give him hope, hadn’t I already done my part by writing about the scam in the first place?
But there’s no way to really know how accurate that thought is. Maybe my consolation or continued investigation would have done nothing to change Hutch’s fate. Or maybe it would have given him just a little bit more hope to carry on. Maybe I could have written something to him that day — to this man seeking hope wherever he could find it — that wouldn’t have led to him taking his own life six weeks later.
Because the reality is that when it comes to scams — particularly scams in the murky world of crypto — complicity is everywhere. When confronted with the constant barrage of money lost, many of us say caveat emptor and move on with our day. But opportunities abound to help. I had a juicy one. And didn’t jump at it.
I begin reaching out to people who were close to him. A part of me obviously wants to assuage my own guilt — wants to learn something, anything, that alleviates my responsibility. But I also want to honor Hutch, if entirely too late. Who was this man? Who was the human behind what a bunch of criminals saw simply as one more mark?
An examination of his Facebook page offers some clues. In addition to happy photos with his twin sons, Tom and Cameron, now about 21, athletic images of Hutch abound — in workout clothes rounding hilly terrain, or beaming over handlebars. Hutch was a frequent participant in “Ironman,” the epically challenging triathlon in which competitors swim 2.4 miles, bike 112 miles and finish it off by running a full 26.2-mile marathon.
Photos show a goofy side too. In one such image, he poses in his official British Transport vest with pink bunny ears. In another, he has photoshopped his face onto a Shaun the Sheep character. He has also posted New Yorker-style cartoons about life’s foibles that reveal a wry outlook. And when he had put on a spiffy suit for a mirror selfie, one friend ribbed him “court case due Hutch?”
It’s a start. I need to push further, reach beneath this social-media surface. But weeks of messages to various family members go unanswered. And I am left in limbo, unable to find out who he was but equally incapable of shaking the feeling that I badly need to.
A hail of DM attempts finally yields a piece of fruit. Well, a hail of DM attempts followed by increasingly bizarre methods of proving I am who I say I am. (Let’s just say holding the day’s newspaper seems quaint by comparison.) But after running the gantlet I finally get an appointment to talk by Zoom with Dave Ashcroft, Hutch’s good friend from Ironman and — would you believe it — a noted anti-fraud investigator.
And he has a lot to say.
“If James — a lot of people called him Hutch; I called him James — would want to be remembered for anything, it’s that he loved the hell out of his kids, that he cared about people, that he had a good sense of humor and what the best finishing time of his Ironman was," he tells me when we start our chat.
“Honestly,” he adds with a laugh, “that’s probably the detail he’d really want to make sure people know about him."
Popular perception has crypto investors as aggressive and troll-happy bros. But that wasn’t Hutch, says Ashcroft. The most dragging he ever did was over Ashcroft’s beloved Everton. (Hutch supported Liverpool.) More often he was showing conviviality, offering affectionate nicknames like "Big Man" to the 6′ 5″ Ashcroft.
Hutch revealed to Ashcroft that the $115,000 was the sum of his savings. “He told me ‘Dave, I lost everything.’ ” (At the end of April 2022, Hutch had also posted to the Facebook group. “All of what I worked for has been taken from me and I feel truly awful,” he wrote. “It has been over a month for me and the gravity of it has hit me really hard.”)
Ashcroft had counseled Hutch that the best chance of recouping the money came not with the scammers or Coinbase but the two English banks from which Hutch had transferred the money, since the banks’ safeguard systems had clearly failed.
Initial letters from the banks rejected the claim, dispiriting Hutch. But Ashcroft believed that within a month or two they could claw it back by appealing their way up to the ombudsmen. He had been through situations like this with other victims and it often worked. “Just give it time,” he told his friend.
Ashcroft, though, wondered if Hutch was really hearing the message. “At some point I’m not sure how much [it] gets through,” he says. “You’re just focused on the fact that this happened to you.” He pauses. “It was the shame, really. Shame killed James.”
As a policeman, Ashcroft says, Hutch was far from gullible. But he did have a sweetness about him, and a naivete about digital matters. Suicide is complicated, its causes fraught. But Ashcroft has little doubt the crypto scam marked a breaking point.
“He started using phrases like ‘severe depression,’ and that was unusual for him,” Ashcroft recalls.
To many of us, crypto’s effects barely register. The terms are arcane; the huge sums abstract. What is $5,000,000-worth of bitcoin, anyway, or a trillion-dollar market cap? Mumbo-jumbo on a CNBC crawl.
But reduce the number and the impact paradoxically goes up. The $115,000 taken from Hutch won’t mean much to authorities or the markets. But it meant everything to a dad and retired cop.
The scammers behind Alice had seen Hutch as just one more piece of a blockchain to fatten their digital wallets, technology’s depersonalizing effect. But his vibrancy and humanity were coming through strong.
“For them it’s just a game. And for him he was giving part of himself,” says Sarah Whibley, another close Ironman friend. Ashcroft put me in touch with her. Whibley dated Hutch a number of years ago and remained close with him, the two talking as often as several times a week.
Hutch had a kind of deep-seated tenderness, Whibley said. She found a particular tragedy in this, his greatest asset becoming the tool of his undoing. “He was the kind of person who was very trusting and wore his heart on his sleeve. And of course the scammers can spot that.”
Hutch’s was not a simple existence, Whibley says, owing to a difficult divorce, struggles faced by one of his sons and a stressful job at the transport police — a grueling place involving much grimness and little of the prestige of Scotland Yard.
Ironman competitions, in contrast, were his salvation. They gave Hutch both a community and a chance for control, Whibley says. He and others were part of a subset of the extreme-athlete community known as the “Pirates,” and for nearly a decade they would gather across England to support and jibe each other. Hutch, who sometimes went by the oxymoronic online handle “Tough Guy Wabby” (“he was definitely not a tough guy,” says Ashcroft) would often foster the camaraderie among the group.
Whibley says Hutch had left the force a number of months earlier, set to retire, but the wipeout from the scam meant he had to start taking odd jobs like housepainting. She says he was wracked by the thought that he now couldn’t provide for his sons. “It kept him up nights in unimaginable ways.”
Cath Hartwell, another Ironman friend, responded to a direct message on Twitter by saying she was always struck by her friend’s level of sensitivity (“he often phoned for a catch up to see how I was doing after struggling with illness that kept me away from competing”) but also his doggedness. “His ability to never give up during Ironman events was unreal.”
The irony in his ability to stave off his own growing sense of resignation was hard to miss. At one time, Hutch posted to Facebook several times per week. But by the winter of 2022, when he would have been enmeshed in the Alice scam, the posts had trickled to less than one a month.
The last post Hutch offered was March 8. It was simple but effective: a picture of Zelenskyy sitting at his desk in military fatigues. No words, just the image.
It feels like a symbolic choice, a man who’d just been taken advantage of posting the symbol of the man who wouldn’t be exploited. As his own hope for fairness and justice waned, Hutch appeared to be taking heart in the man who was valiantly fighting for it.
And then the account went silent.
THAT OWNERSHIP THING
I am staring at an email draft that I can’t look away from.
Unable to sleep, I have picked up my phone and begun composing a message — the message I would have written to Hutch had I known he was thinking about ending his life. The message that I wish he could have seen just before he stepped out onto those train tracks.
I start writing, telling him the things that he might need to hear from me, a man he doesn’t know but who knows crypto, who has heard many stories from people like him.
“Hi James. Thanks for your note. I just want to say that you are not alone. I don’t mean that in some cliched, generically psychological way. I mean you are literally not alone. The smartest people in the world are losing money on crypto, whether by official scams or otherwise. Doctors. Lawyers. Academics. Wall Street people. It’s too volatile; it’s too unregulated; it’s too subject to manipulations; nobody knows anything. There is no need for shame, or self-loathing. I don’t know if you feel that. But if you do, it totally makes sense. This is at best a mercurial system and at worst a rigged system. Anyway, I don’t know if any of this helps. But you are in extremely good company.”
I think about sending it to his address as some kind of karmic gesture. But the pain of knowing that I didn’t send a message like this to him when I had the chance — when it could have still done something — is too much to bear, and I put down my phone.
Journalists are trained on the holy orthodoxy of accountability. If reporters write about bad people enough, and call out enough government officials, the badness could be reduced. And regulation and government intervention are certainly crucial in helping society function more effectively; I believe that strongly.
But a harsh reality also exists — in a world of scammers, some who trust will get hurt, no matter how many laws are in place.
Right under Troy’s email notifying me of Hutch’s death is a pitch from a security company called ActiveFence. The firm, pitching itself to those who’d written about pig-butchering, promises tips to help readers avoid losing money to scams.
The email is filled with truisms like “Check the privacy policy, and make sure it leads to the official app website.” There is something amusingly naive about it, and not just in its implicit assumption that scammers only operate on rogue platforms. The message presupposes that we could simply do away with a victim’s pain if only we were more vigilant — that vulnerability is something that can be checklisted away. But the thousands of pig-butchering victims like Hutch didn’t forget to read a disclaimer. They simply wanted to believe in love and partnership a little too much.
In the last week of May, Ashcroft and Hutch had spoken by phone, ribbing each other a little and then moving on to discussions of their latest strategy to get Hutch’s claim to the bank ombudsman. The conversation went on for a while, and then it was time to go.
“Thank you, Big Man,” Hutch said to Ashcroft before he hung up. The next night, he stepped onto the tracks.
Ashcroft says even months later he finds it difficult to think about. “There was something about the ‘thank you’ that told me something was wrong.”
He grows quiet. “I feel — I don’t know if responsibility is the word — I feel like I knew more than most people but hadn’t done enough. To me, this was completely avoidable.”
His voice catches. “I thought I’d have at least another week.”
My mind also runs through the possibilities of what I could have said during that critical springtime moment when he reached out to tell me something was wrong. I, too, can’t stop wondering if it would have made a difference.
I call Dr. Dan Reidenberg, executive director of the suicide-prevention nonprofit SAVE and a longtime good source on mental-health matters, and explain what happened and what I’m grappling with. “It certainly makes sense that you would feel a sense of responsibility here because you didn’t write back to him,” Reidenberg says. “But unless you were actually able to replenish the money he lost, there’s no reason to believe you would have been able to help him with a note.”
He adds, “The truth is, as much as we tell people to reach out and ask how someone is doing — and we should — suicide is very complex and not something an outside person can heal.” I want to believe him. But I’m not sure I fully can.
One restless night, I decide to take a look at Hutch’s Facebook page. There is an image I hadn’t seen before, from a few years earlier. It’s a reproduction of a flyer of some kind, and it has nothing but text on it. At first it feels like a cruel irony, a message brutally undone by life events. But maybe that's the wrong way to think about it. This isn’t an old message that hindsight proved inaccurate. It's a message being sent now, perhaps even in some spiritual sense by Hutch himself. The lesson is one he had to learn in the most painful way.
But it is one that he, nonetheless, is eager to teach anyone who can still hear it.
It reads:
“Just a reminder in case your mind is playing tricks on you today.
“You matter. You’re important. You’re loved. And your presence on this earth makes a difference whether you see it or not.” The words were posted by the man who once completed an Ironman in an extremely brisk 10 hours and 57 minutes.
If you or someone you know needs help, please call the National Suicide Prevention Lifeline at 988. Crisis Text Line also provides free 24/7 confidential support via text message to people in crisis when they text 741741.