Mind and Iron: Can the Trump Administration's AI Rules Stop Mass Persuasion?
Also the bike that just may change living in a city
Hi and welcome back to another salty issue of Mind and Iron. I'm Steven Zeitchik, veteran of The Washington Post and Los Angeles Times, senior editor of tech and politics at The Hollywood Reporter and chief welder at this journalistic sheet-metal plant.
Every Thursday we come at you with all the tech, science and business news about the future you could want, wiped clean of unwanted spin like a fresh countertop. Please consider supporting our independent mission.
This week brought some OpenAI news, as the company released more "reasoning" models — i.e., systems that take a little longer to come up with answers and (allegedly) actually puzzle through a problem. Among their purported skills: "think[ing] with images — meaning they don't just see an image, they can integrate visual information directly into their reasoning chain." So if you're considering training to be an interior designer or art critic, don't; OpenAI has you beat.
But we're not going to get into all that this week. Instead we'll platform two voices worth listening to when it comes to building this whole viable humanist future.
First up is Jason Van Beek, chief government affairs officer for the Future of Life Institute, the global powerhouse organization whose mission is to "steer transformative technology towards benefitting life and away from extreme large-scale risks." A longtime advisor to current Senate Majority Leader John Thune, Van Beek has just authored a slew of recommendations his organization thinks the Trump administration might adopt in its AI plan, and he dropped by to talk to us about them.
Also, U.S. cities have been irrevocably transformed in recent years by bikesharing apps. But those are only good for a quick hop — sans groceries, package, pup, child or anything else you could normally transport in a car. Enter the "electric cargo bike," and a nifty little Boston company that is helping to transform how we get around when we have stuff to move.
A housekeeping note: we won't be publishing next Thursday as we take a reporting trip to D.C.; back at you the week following with sweet dollops of newsy cream.
First, the future-world quote of the week:
“Brainwashing is always a concern.”
— Jason Van Beek, chief government affairs officer at The Future of Life Institute, on the dangers of AI's mass persuasion
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Tipping Trump; The bike that moves you (and everything else)
1. AS THE TRUMP ADMINISTRATION PREPARES FOR THE RELEASE OF ITS "AI ACTION PLAN" THIS SUMMER, a lot of big tech companies are trying to get their interests nestled into whatever executive order the White House is going to release to replace the Biden executive order, now moon-bound. (To wit: OpenAI and its "freedom to learn" agenda, a piece of reasoning so good its own new systems couldn't come up with it.)
But some pretty stout nonprofits are trying to bend the White House guidance in their direction too. And some of their ideas are very strong.
One of these policy papers comes from The Future of Life Institute, the heavy-hitting nonprofit co-founded by the MIT professor Max Tegmark and a slew of other people imagining a better, safer future, including Stephen Hawking, George Church, Sandra Faber and a pre-X/pre-DOGE Elon Musk.
The aforementioned Van Beek — he was Senator Thune's staff designee to the Armed Services Committee and also conducted investigations of large technology companies — has come up with a document that includes some pretty sharp ideas that have now been sent on to the White House’s Office of Science and Technology Policy.
Among them: "Establish an AI industry whistleblower program to incentivize AI development that is free from ideological bias or engineered social agendas," "Task the Secretary of Labor with tracking AI's potential to replace workers, including a breakdown of the impact across different states" and "Issue a moratorium on developing future AI systems with the potential to escape human control, including those with self-improvement and self-replication capabilities.” (Also: “Mandate installation of an off-switch for all advanced AI systems.")
How many of these are reasonable expectations of a Trump administration that has so far shown little interest in AI safety is a fair question, though Musk has been one of Big Tech's more vocal proponents of safety, and we know the sway he has. But whatever their feasibility, the Future of Life blueprint is a useful document, a sketch of what we should keep in mind as we set the rules of this twisty road. Here's Van Beek.
Mind and Iron: So off the bat I have to ask: This administration is not known for regulation. How much did that play in your mind as you drafted these recommendations?
Jason Van Beek: We did want to make this a PR document that all folks can take a look at and glean some things from. Some of these debates aren’t happening and we’re trying to get more debate going given how quickly some of these things are coming together. But I also think if you dig deeper there's some folks within this administration open to scrutinizing this kind of technology — these platforms, these companies and partnerships. It's a complicated picture, but we're writing this document to speak to these concerns.
M&I: What's interesting is that even as Big Tech has cozied up to the president some of the populist elements of his base really would oppose a lot of what they're selling.
JVB: There is this coalition that President Trump has built with the tech side of things and the MAGA base, some fine lines that have to be walked. I think they'll be able to navigate that pretty well and I'm hopeful they'll have a thoughtful product come July.
M&I: Who else can you appeal to at the White House that might lend an ear?
JVB: One recommendation I'm passionate about that I think will resonate with the administration is ensuring the White House Faith Office will have a role to play with the AI Action Plan. That's what could bring the MAGA base into this discussion. Religious communities have not had a large role to play in this strategic discussion about AI. We think having the office play a role could resonate with the president, that the White House could take into account concerns raised by these communities.
M&I: Certainly it seems like religious leaders might have some skepticism here — so much of AI is about the drive to superintelligence, and religious communities might say 'well, we sorta already have that.'
JVB: These companies are pretty forthright about superintelligence and we think religious communities should be concerned about that — that if they heard about it they would want to have some say about it. So we just want to highlight that this is fast becoming a reality, that companies are developing superintelligence and seem less interested in loss of control. Religious communities might apply the brakes and steer things in a more fruitful direction.
M&I: The awareness and advocacy aspects are obviously very important. But it seems to me they can only go so far; it’s laws that really would get us where we need to go. But do laws work here? Does Capitol Hill have a role to play in actually legislating something like, say, superintelligence?
JVB: The analogue is we regulate pharmaceutical companies that make products that are healthy and useful but we want to make sure they’re safe. The aviation industry, the same thing. There’s certainly analogues to this. If you’re talking about superintelligence that can get out of human control that’s something there’s a lot of analogues to developing legislation for.
M&I: The whistleblower provision is fascinating in that regard. What would you hope they would give us? Obviously we think of them as alerting us from the inside when laws are being broken. But what happens in an area like this where the law itself is spotty and struggling to catch up?
JVB: [To give us] things in the public interest. These are huge issues that are changing society. Some of the people on the inside are very aware of the capabilities of these systems and technologies that are being built and probably have more insight into the dangers and harms that are potentially happening than even people in Congress.
M&I: You have a bunch of stuff in here about protecting the shipment of the technology to other countries. It's funny — we talk a lot about not "losing" the AI race in terms of development, but not so much in terms of what we're sharing after we’ve developed it.
JVB: The idea of export controls, of establishing rules in terms of chip technology and what could potentially be exported, is one where there's bipartisan consensus. Making sure foreign adversaries don't have access to some of these chips and some of this technology. There is an AI executive order of Biden's from toward the end of his presidency about this that has not been rescinded. I think this is an issue of non-partisan concern.
M&I: You also talk about a grand "off switch" — something that allows the entire AI system to be turned off, like turning off the Internet, but much bigger. Is that possible technologically? Politically?
JVB: One of the big questions on the switch is who decides to turn it off. We didn't touch that. And that's really the hard part. And it might happen so quickly we won't be able to control it. So we need a machine that would control the switch too.
M&I: Head-spinning.
JVB: That aspect of things could be difficult.
M&I: Finally, you speak in the document about mass persuasion — the idea that one day an AI can know how to press our buttons, emotionally or otherwise, so well that we're basically under some kind of hypnotic spell. Given how good tech has already gotten at this with its algorithms without all these new models, this seems like a massive concern.
JVB: It's a really important issue. Sam Altman developing a system that sells you something is one thing. But having Sam Altman develop a system that persuades you how to vote or how you should think about an issue? That's a really different thing and something we should be concerned about and government should be concerned about. There’s not a real good answer about who in government. The FTC seems most logical. But superhuman persuasion is something Sam Altman has said will happen before superintelligence. That's why we want these systems to be free from ideological agendas.
M&I: Even the marketing part seems worrisome — manipulating us into buying something like automatons.
JVB: Brainwashing is always a concern.
2. A FEW WEEKS AGO I WAS LUCKY ENOUGH TO ATTEND THE SMARTCITY EXPO USA CONVENTION in Manhattan. There were A LOT of interesting innovations, from small doodads like portable air purifiers as our air gets more compromised to sprawling approaches like a system of smart traffic lights to help our cities' emergency vehicles respond faster.
But I was struck by a quirky idea that seemed small until you realize just how great a niche it fills. It comes courtesy of a Boston transportation entrepreneur named Dorothy Fennell, who came up with the notion of launching a bike-rideshare company — except stocked with front-loading electric cargo bikes. Or, in lay terms, ebikes with a sizable storage area up front.
Though humble, the cargo-bike idea holds plenty of promise for anyone who wants to take a bike around the neighborhood but wonders just where they'll put the stuff. But such bikes are expensive — like $8,000 a pop, hardly affordable for someone who just wants to run occasional errands. That's why Fennell had the idea to set it all up as a share system.
Fennell is the founder of CargoB. With her company you just pay a few bucks per use and get the dog to the park, the kid to choir practice, the package home from Target. With pedal-assist you're not working very hard but still going about 12 miles per hour, perfect for the trips of two or three miles the bikes are designed for. (This video explains all.)
Right now CargoB has a handful of bikes you can rent by the hour in Boston and Camberville — they just signed a deal with Harvard, so you may see them there, if administrators still have any money to run their campus after this week.
CargoB basically seeks to fill the space between bikes, which don't really let you carry anything, and cars, which do let you carry things but take up space, money and gas while befouling the environment and generally making us scream to the heavens while sitting in traffic.
"We need to move things in a dense area — we can figure out this totally solvable problem," she says, noting that 50 percent of all trips in cities are under three miles.
And a lot more of us are living in cities in the first place. The urban U.S. population — just 60 percent in the middle of the last century — rose six percent over the last decade and now sits at 80 percent. And it is forecast to reach nearly 90 percent by 2050. Not to mention the nearly one million people who already commute to work by bike.
CargoB is part of a movement of what Fennell and others call "utilitarian shared tools" — basically, let's make our world easier by collectively renting what we once individually needed to own.
The cool part is this doesn't just make one person's life easier. The compound effect of these bikes, if they catch on, is fewer cars, which means less traffic; fewer parking lots, which means more space for something else; and a lower barrier to entry for living in dense areas, since you now have an easier way to get your stuff around. A whole bunch of European cities already make regular use of cargo bikes.
"This is the next frontier in bikeshare," Fennell says. "We're happy for Amazon delivery people to have cargo bikes, but what can it do for everyday consumers?"
Of course, some limitations remain. Right now there's simply not enough of a market to make these cargo pedalfests ubiquitous — i.e., common enough to let you drop them off pretty much anywhere, à la New York's Citi Bikes. You have to return them to where you got them, which means you need to plan well or live near a rental site. Also, Fennell notes she needs to find the land on which to park these bikes between uses — since she's just an operator, she needs either private businesses or city councils to grant her space to store them without running up her (and our) costs.
So while she hopes to get to a couple dozen CargoB bikes in Boston by the end of the year, don't expect them at every street corner in every city right away.
But the idea of adding a new environmental form of transport, getting some cars off the road, and ensuring everyone gets where they need to go with the stuff they need to get there with? We’re down to buy what Fennell is pedaling.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6. This year started on kind of a good note. But it’s been pretty rough since. This week? Looking good.
TRUMP’S OPAQUE AI PLANS? THE FUTURE OF LIFE INSTITUTE IS ON IT AND HAS SOME REALLY GOOD IDEAS: +3.0
A BIKE THAT MAKES LIVING IN CITIES EASIER AND CLEANER? Sign us up. +2.5