Mind and Iron: How DOGE Is Using AI to Rewrite Housing Laws
Some Trump-tech craziness. Also, a 'smarter' social media? And Natasha Lyonne's AI foray.
Hi and welcome back to another savory episode of Mind and Iron. I'm Steven Zeitchik, veteran of The Washington Post and Los Angeles Times, senior editor of tech and politics at The Hollywood Reporter and chief line-painter on this journalistic highway project.
Every Thursday we bring you the crucial news on AI, tech, business and science, spin-free. Please consider supporting our mission.
Our reporting trip to DC last week was fruitful. But we’re back to giving you the full future scoop this week. Three tight items, all relevant, none belaboring the point.
As I reported in THR this week, an AI movie from big Hollywood filmmakers is coming, and depending on how it turns out it could prove the enthusiasts very right or very wrong.
Also, is AI making policy decisions in the Trump administration?
And finally, AI, now infused with social media. Let's take the tech trend that devoured the 2010s and the tech trend that will devour the 2020s and put them together to see what hydra emerges!
First, the future-world quote of the week:
“It all sounds crazy — having AI recommend revisions to regulations.”
— An anonymous source speaking to Wired, referring to news that DOGE has come into HUD and asked AI to rewrite our country’s housing laws
Let's get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Building an AI House (of cards); the Russian Doll of AI movies; Social media, now awash in AI
1. YOU MAY REMEMBER HOW A FEW WEEKS AGO A KERFUFFLE emerged when the economics writer James Surowiecki appeared to uncover that the Trump administration's tariff rates came from the same formula that ChatGPT dunderheadedly suggested.
Now there's a new government-policy use case. Apparently the DOGE people who've come into the Department of Housing and Urban Development to do their DOGE thing have tasked a staffer with using AI to determine how we should be regulating housing in the Trump age. As Wired reported this week — well, let's just let the piece's lede tell us, because it says everything:
"A young man with no government experience who has yet to even complete his undergraduate degree is working for Elon Musk’s so-called Department of Government Efficiency (DOGE) at the Department of Housing and Urban Development (HUD) and has been tasked with using artificial intelligence to rewrite the agency’s rules and regulations."
The piece keeps the hits coming:
“‘I'd like to share with you that Chris Sweet has joined the HUD DOGE team with the title of special assistant, although a better title might be “AI computer programming quant analyst,”’ Scott Langmack, a DOGE staffer and chief operating officer of an AI real estate company, wrote in an email widely shared within the agency.”
Basically, what this college student is doing is deploying an AI to look for regulatory flaws in our housing policy (flaws according to whom? Don’t ask any smart questions) and then flagging them for humans.
Now, you could also wonder if the AI might recommend stricter regulations; shouldn’t an AI model, hypothetically, be objective if it’s looking for flaws? Maybe these deregulators will be hoisted by their own petard!
Of course this assumes the myth of objectivity. In reality an AI adapts to us; an AI tells us what we want to hear. That’s true of the data we put in; that’s true of the prompts that get things out. If someone comes in wanting an AI to deregulate the housing market, as DOGE does, then deregulation we will get. Indeed, Mr. Sweet’s AI has already filled an Excel spreadsheet, one thousand (!) rows long, with places where it deems the laws as written an “overreach.” The model has suggested more underreaches.
The DOGE folks will claim the machine is smarter than the regulators propping up the law, that the machine is ruthlessly seeing what partisans don’t want to see. But that would be nonsense. If a progressive were programming and running the model, it would find a thousand places to recommend that the legal language be strengthened. An AI model isn’t a God-like arbiter of objectivity. It’s a telltale sign of its programmers’ subjectivity.
Is it even a good subjective tool, though? That is, even assuming deregulation is the goal, is it pursuing that goal in the best possible way? Well, that’s the problem. It’s not even doing that. Because AI models, in their existential need to hoover up all the data out there, aren’t hand-selecting the smartest of the smart training material. DOGE’s housing AI likely isn’t trained exclusively on thoughtful analyses of deregulation, on carefully considered, well-researched academic papers. It’s at least partly a mashup of all the half-baked and partisan stuff that came before. That, after all, is what almost all AI is; it’s what it has to be, given the paucity of good data.
We could all laugh a little at this, horrifying as the experiment is, because we recognize the absurdity. When it comes to the important questions of governing our country, we expect human experts to be in there doing human expert-y things. Policy as conducted by AI would be like the Western Conference finals as conducted by an NBA 2K champion.
But I'm not so sure we'll be laughing forever. AI will increasingly be making big organizational decisions. Partly that's because the models will be getting better. But mainly it's because the stigma against using them in business settings will fade; or, more specifically, the situations in which they are used will become more common and normalized. We'll go from letting an AI pick the catering at the all-staff, to the order size on the new shipment, to, yes, the licensing deals and mergers that can move markets. Consulting an AI, like the ancient Greeks consulting the oracle, will become a go-to move when the battlefield calls. It will be easy; it will even be noble. Chances are, knowing the memeified Internet, it will even be cute.
While we all like to think decisions at the highest levels of corporations and government are made with a fine level of thought and artistry, example after example tells us otherwise. That should make this coming AI decision-making trend less worrisome than it appears. Isn’t this just asking the elders for a decision? Is that worse than whatever counsel Time Warner execs got when they decided to merge with AOL, or whatever counsel Excite execs got when they decided not to buy Google?
Except, of course, consulting an AI is nothing like consulting the elders, who could take their vast experience and apply it to a custom-made situation rather than simply putting all past examples into the blender to try to sound intelligent, which is what the models do.
So let's all have a good laugh at AI making policy decisions. It will soon get a little smarter. Unfortunately, it will also soon become a lot more relied upon.
2. AS THE DEBATE OVER AI IN FILM HAS SWIRLED, one of the big questions has been which mainstream Hollywood figures would finally make a movie that puts the technology front and center.
Now we know. Natasha Lyonne, Brit Marling and — a decidedly not Hollywood figure — the tech innovator/skeptic Jaron Lanier.
As I reported in The Hollywood Reporter on Tuesday, the trio are collaborating on a film that Lyonne will direct, backed by an AI studio called Asteria. Shooting will begin next year.
As you might expect of a movie from an AI studio that uses AI technology, the models won’t just be a tool for its creation but a part of the plot line, as an immersive video game begins to make people lose control. Which means a movie about a generative AI video game (a fancy way of saying content you can generate/personalize on the spot) will use the tech of a generative AI video game. It’s a book about coffee tables that’s also a coffee table.
Called, ironically, “Uncanny Valley” (after the term for the creepiness of tech’s almost-but-not-quite-human renderings), the film will test the theory that AI can be a creative force on its own. Until now, we’ve heard about AI becoming a tool for various VFX purposes or, of course, helping with those pesky Hungarian accents in “The Brutalist.” Or even, as some fear, personalizing existing originality.
But can it actually be used to make something original itself? That’s the feat being attempted here.
Lyonne is the raspy-voiced creative at the center of bold stuff like “Russian Doll” and the current hit “Poker Face”; Marling is one of the most original creators around (go back and check out “Sound of My Voice,” her indie film from the early 2010s); and Lanier is all about human creativity, having authored the 2010 manifesto “You Are Not a Gadget,” in which he argued that social-media and iPhone users need to reclaim their humanity. He may have helped create VR, but he’s hardly some Big Tech stooge.
And if you doubt that human creativity is what they're after here, just listen to the statement Lanier gave me:
“There is a story here about technology, but it is really about people, and the unpredictable thread of connection that joins us across generations, technologies and divergent weirdness.”
So in a way it’s perfect that these three minds will try to make an original movie with the technology. If it works, it will silence a lot of the critics who say the tech is essentially a regurgitative shortcut; if it doesn’t, it will demonstrate pretty strongly that they’re right. I can’t imagine a better test case.
3. FINALLY THIS WEEK, I WANTED TO CALL YOUR ATTENTION TO A NEW SOCIAL-MEDIA AI PLAN.
It comes from Meta, which said a few days ago that it was adding a “Discover” feature to Meta AI, its ChatGPT/Gemini competitor. Now when you ask the chatbot a question, it will show how others, including your friends/followers, are interacting with the chatbot too. It’s AI as brought to you by the makers of Instagram.
On one level this seems kind of silly or even counterintuitive — isn’t the point of the chatbot to customize its responses to you? What do you care what your friends are asking about?
But the feature gets at something very savvy about how we will interact with AI, at least at first: skeptically. And what will reduce the skepticism? Knowing others are using it too, and for the same queries.
Of course it’s obvious what dangers this social proof poses. The models already threaten to flatten our digital interactions into a homogenized blend of what came before, dehydrating answers from the personal and quirky stuff we might get on Reddit to a freeze-dried packet as generic as packaged soup. Now we’ll be more likely to homogenize from the question end too. It basically flattens from both sides. Why ask something unique when we can see how ten other people have framed the question?
Meta doing this is questionable enough. But it is a social-media company; you expect that. Ditto Grok, which grew out of X. What’s worse is that we’re seeing it from other kinds of companies too. OpenAI, which never had anything to do with social media, is reportedly building a social network of its own, so that when you go to ChatGPT to ask a question, you’ll see simultaneous questions being asked and answered across the system. Whichever way we’re coming at it, we’ll get the same dilutive result.
If these companies have their way, interacting with AI won’t even be a set of solo conversations with a digital pasteurizer. No, it will mean seeing all the groupthink/pasteurized queries that get fed into the digital pasteurizer too.
Maybe seeing what everyone else is doing will make us want to be more unique; maybe social media will allow for differentiation. Maybe everything we know about how people interact with technology is wrong. I wouldn’t bet on it though. Just ask any chatbot.
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way. This year started on kind of a good note. But it’s been pretty rough since. This week? Not much better.
HOUSING LAWS, SCALED BACK BECAUSE A MACHINE TELLS US TO: Blech. -3.5
AN AI MOVIE FROM TOP CREATORS? Could be interesting. +1.5
AI SOCIAL-MEDIA TAKEOVER: What could go wrong? -2.0