Mind and Iron: ChatGPT enters the job market
AI starts designing buildings. And will we soon trade our passwords for faces?
Hello and welcome to Mind and Iron, a place of great wonderment and dazzle. Or at least where you can learn how the latest tech innovations will silly-putty our lives into new shapes.
I’m longtime WaPo whisperer Steve Zeitchik. Every Thursday here at Mind and Iron we aim to deliver a collection of news, perspective and general thoughtfulness on all things future-related. If you’re interested in knowing where we’re headed — where AI and the machines are taking our culture, politics, health-care and media — then hop aboard the M&I train. Ditto if you just want to sound smart at your next cocktail party; that’s a good use case too.
So if you haven’t signed up yet, please do so here. And if you have, please consider pledging your support. The paywall will soon be up — hammer, brick, hammer, brick — and pledging now will ensure you never miss a moment of access.
Last week we introduced a new feature, the Totally Scientific Apocalypse Score. In a nutshell, it aims to tell us whether we should be climbing into a nutshell — whether the latest developments in future-world should make us feel good, bad, or semi-permanently stressed out. A barometer of the future, really. More functionality coming in the weeks ahead, but in the meantime you can follow how well we’re doing just below our Iron Supplement roundup.
As we begin to emerge from our summer beta of sorts there will also be more Substack-y features — more, more, always more. So keep an eye out for that.
First, the all-important future-world quote of the week:
“AI is already way beyond what human architects are capable of. This could be the final nail in the coffin of a struggling profession.”
—Neil Leach, author of Architecture in the Age of Artificial Intelligence, getting real about how our buildings could soon come into being
This week: What AI is indeed doing to our buildings and the designers who design them. How the doctor of the future will be consulting algorithms more than charts. And could passwords soon be going ByeBye86?
Also, would you ever use ChatGPT for a job application? What would you do if a prospective employee used it for a job application? An influential public official just ran this gantlet, and as he told Mind and Iron, it really spun his head.
Let’s get to the messy business of building the future.
IronSupplement
Everything you do — and don’t — need to know in future-world this week
Passwords get 86ed#; doctors and algorithms, a love story; will AI come for architects?
1. CLIMATE DISASTER, AI BIAS, POLITICAL DIVISION — JUST WHEN YOU THOUGHT THERE WAS NOTHING NEW TO WORRY ABOUT, along come the acoustic side-channel attacks.
The what now?
Well, a new study posted to Cornell’s arXiv finds that AI can predict passwords based only on the sound of keystrokes. With scary accuracy.
The researchers trained an AI by letting it listen to thousands of keystrokes. Then they unleashed it on newly entered passwords and had it report back on what it heard. The machine got the password right an insane 95 percent of the time.
Yes, it could literally replicate entire passwords, from letters to numbers to that pound sign you always forget about, 95 percent of the time, just from the sound of people’s hands moving around a keyboard.
I don’t think I remember my passwords 95 percent of the time. And I came up with them.
(It’s testament to the prescience or maybe just the paranoia of your dear author that he’s long been worried about a related threat — a snooping mic recording the keytones of a password he just entered in an automated phone system.)
What’s most startling here is that no great equipment or processing power was needed. “Our results prove the practicality of these side channel attacks via off-the-shelf equipment and algorithms,” the authors wrote. The only weak spot for the machine was the switch between upper- and lower-case characters; that release of the shift key is as stealthy as a cat in a rainstorm.
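To make the mechanics a little less abstract, here is a minimal sketch of how an acoustic side-channel attack works in principle: slice audio into per-keystroke clips, turn each clip into spectrogram features, and train an ordinary classifier to map sound to key. Everything below (the synthetic “recordings,” the tiny eight-key alphabet, the feature sizes) is invented for illustration; it is not the study’s actual pipeline.

```python
# A toy acoustic side-channel "attack": classify which key was pressed
# from the sound alone. Synthetic audio stands in for real recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def spectrogram_features(clip, n_fft=256, n_frames=8):
    """Fixed-length magnitude-spectrogram features for one keystroke clip."""
    hop = max(1, len(clip) // n_frames)
    frames = [clip[i * hop : i * hop + n_fft] for i in range(n_frames)]
    return np.concatenate([np.abs(np.fft.rfft(f, n=n_fft)) for f in frames])

rng = np.random.default_rng(0)
KEYS = list("abcdefgh")  # a tiny hypothetical keyboard

def fake_clip(key):
    """Stand-in for a recorded keystroke: noise plus a key-specific 'thunk'."""
    clip = rng.standard_normal(2048) * 0.05
    freq = 20 + KEYS.index(key) * 7  # each key gets a distinct resonance
    clip[100:160] += np.sin(np.linspace(0, freq, 60))
    return clip

X = np.array([spectrogram_features(fake_clip(k)) for k in KEYS for _ in range(40)])
y = np.array([k for k in KEYS for _ in range(40)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("per-keystroke accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Swap the synthetic clips for real microphone audio and the toy classifier for a deep network, and you have the shape, if not the sophistication, of the published attack.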
This is all sufficiently scary — AI is becoming more eagle-eared than Jessica Fletcher at a remote country inn. But it’s the implications that really detonate. Because I don’t think the efficacy of attacks like this just means the number of hacks will go up. I think it means that security itself will be transformed.
Because let’s face it, none of us are defeating an AI that can listen to keystrokes. No, what we’ll do is get rid of the password system entirely. And in its place we’ll go biometric. The new age of AI efficiency shows, once again, how outdated some of our old methods of locking down daily life are.
Faces opening iPhones? Just the beginning. Iris scans, fingerprints and other biological information not easily copied? Those will be the ticket for everything from checking our email to paying our electric bill. Sure, there’ll be privacy pushback. But how much sway will that have in the face of hacks galore? Already thieves can run a large number of stolen passwords against an endless number of accounts to get in. And soon we’ll be needing to enter a soundproof room just to create a password.
So we’ll jettison the password. And the idea of Web security being about recalling your grandmother’s birthday will be as quaint as Barney Fife tipping his hat to Aunt Bee.
Remember when we were worried that our digital lives could too easily be linked to our real-life identities? Not anymore — our faces will be our digital life.
2. YOU DON’T NEED TO BE A FRANK LLOYD WRIGHT ACOLYTE TO KNOW how much buildings shape the aesthetic and functional character of a community.
But maybe computers should be doing more of the shaping?
So asked the Guardian’s architecture critic last week.
The paper’s Oliver Wainwright focused on XKool, a China-based firm that’s been at this for a while, letting AI play a key role in designing buildings. Proponents’ argument: that architecture’s real work lies not in the final image but in the hundreds of small design decisions that go into it. And that’s where AI can really excel, running through far more possibilities in far less time than a human ever could.
“AI enable[s] the kind of calculations and predictive modelling that was impossibly time-consuming before,” Wainwright concludes. Among other consequences, this could give designers “instant feedback on the implications of moving a wall or piece of furniture.”
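For a sense of what that feedback loop looks like under the hood, here is a deliberately tiny sketch: one movable wall, an invented scoring function standing in for real structural and code constraints, and a brute-force sweep of candidate positions. This is the principle, not XKool’s actual software.

```python
# A toy design-space search: move one wall, score every position instantly.
# The room size, constraint and scoring function are all invented.
ROOM_WIDTH = 10.0  # meters; a hypothetical floor plate split into two rooms

def layout_score(wall_x):
    """Reward balanced room sizes; penalize any room narrower than 3 m
    (a stand-in for a building-code or comfort constraint)."""
    left, right = wall_x, ROOM_WIDTH - wall_x
    balance = -abs(left - right)
    penalty = -100.0 if min(left, right) < 3.0 else 0.0
    return balance + penalty

# "Instant feedback": evaluate 99 candidate wall positions in microseconds.
candidates = [i * ROOM_WIDTH / 100 for i in range(1, 100)]
best = max(candidates, key=layout_score)
print(f"best wall position: {best:.2f} m (score {layout_score(best):.2f})")
```

Scale that loop up to hundreds of interacting decisions and you can see the appeal: the machine grinds through options, and a human judges the results.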
None of that sounds like it leaves a lot of room for humans. Indeed, it’s why Leach says “this could be the final nail in the coffin of a struggling profession.”
So certainly we’ll get a lot more design possibilities, and we’ll get them because of machines. But does that mean architects go away? I can’t help thinking of a writing analogy. In the 1980s the word processor started to catch on. Suddenly you didn’t need to get prose right the first time like you did with a typewriter or pen; you could move and cut, write and rewrite, at will.
The word processor didn’t mean the end of writing — it meant the proliferation of a lot more writing. Including, if I and my colleagues are any indication, a lot better writing. (Trust me, you do not want to see the first drafts of anything we’ve written.)
Like word processing, AI in architecture isn’t just an efficiency tool — it’s a creativity-expanding gamechanger. Because suddenly art can be rearranged at will, expanding the possibilities. But it probably should still be rearranged by humans. As Carl Christensen, a Norwegian software engineer who created an AI tool, notes of his software, “I call it ‘AI on the shoulder’ to emphasize that you’re still in control.”
Because in the final analysis, architecture, like writing, involves vision and originality, something machines notoriously do badly. It’s why GPT can write a pop-cultural “in the style” pastiche of a Bob Dylan or Taylor Swift or Nina Simone lyric but not come close to the original. By definition it’s just synthesizing everything that came before. Why should buildings be any different?
None of the boosters offered a good answer to that in the Guardian. Indeed, as Wainwright said of one building designed with humans out of the loop, “so far, the results are clunky….[it] looks very much like it was designed by robots for an army of robot guests.”
No doubt some firms will use AI to design buildings from scratch. They’d just be better off letting humans take control of the word processor.
3. IF YOU’VE BEEN PAYING ATTENTION IN THE COUPLE OF MONTHS I’ve been at this newsletter, you know how interested I am in health-care in the AI age. The smart-digital revolution will upend few professions more than medicine. And medicine matters more than most professions.
Especially intriguing is this question: what will change when we head to the doctor? The answers range from “nothing except maybe the machines read a few radiology scans” to “everything, from testing to diagnosis to disease-prediction.” (Our interview with the head of the Alliance of AI in Health Care last month yielded some shiny insights on the topic.)
My own view is that the human aspect of medicine won’t go away, but it will be dramatically transformed, far more than some doctors seem to believe.
The New England Journal of Medicine last week took on the topic in a bracingly original way. The authors’ main point: doctors will need to become a lot better at algorithmic operations and probability theory than they currently are. From the earliest days of medical school. These algorithms (known as Clinical Decision Support, or CDS, systems) will after all be essential in assessing and treating patients.
Doctors won’t become obsolete. But those who don’t know how to integrate these methods will be badly left behind while the others zoom ahead. It’ll be the difference between kindergarten and Harvard Medical School.
“Physicians do not need to become experts in math or computer science to use CDS algorithms effectively,” said the story. “Rather, clinicians need to understand where in the decision-making pathway individual CDS algorithms are operating and how various clinical and institutional factors will change the interpretation of the resulting predictions.”
Translation: It’s a helluva tool. Doctors need to know how to use it.
As for what this means for patients…. well, a lot more weapons to help with diagnosis and treatment, for starters. “I hear great things about that doctor’s bedside manner” could be replaced by “I hear that doctor really knows how to apply a P(A ∩ B) formula.”
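For a flavor of what that probability work involves, here is a minimal worked example, with made-up numbers that are not from the NEJM piece: the classic trap of confusing a test’s accuracy with the chance that a patient who tests positive is actually sick.

```python
# Bayes' rule on a diagnostic test. All numbers are illustrative.
prevalence = 0.01    # P(disease) in the tested population
sensitivity = 0.90   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

p_pos_and_sick = sensitivity * prevalence               # P(+ ∩ D)
p_pos_and_well = (1 - specificity) * (1 - prevalence)   # P(+ ∩ not-D)
ppv = p_pos_and_sick / (p_pos_and_sick + p_pos_and_well)
print(f"P(disease | positive test) = {ppv:.1%}")  # about 15%, not 90%
```

A “90 percent accurate” test for a rare condition leaves a positive patient with only about a 15 percent chance of actually having it. That is exactly the kind of counterintuitive arithmetic a CDS system bakes in, and the kind a physician needs to be able to sanity-check.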
Of course that’s only for the patients who can afford such care. Will the algorithmic age widen health-care inequities that are already pretty damn wide, enabling some patients to get these AI-equipped super-doctors while disadvantaged patients are left with scraps? Sadly you don’t need a math Ph.D. to answer that one.
[New England Journal of Medicine]
The Mind and Iron Totally Scientific Apocalypse Score
Every week we bring you the TSAS — the TOTALLY SCIENTIFIC APOCALYPSE SCORE (tm). It’s a barometer of the biggest future-world news of the week, from a sink-to-our-doom -5 or -6 to a life-is-great +5 or +6 the other way.
Here’s how the future looks this week:
AI THREATENS ARCHITECTS: Obviously any mass-scale automation will take a human toll, enough to knock the score down a few points. But a tool that could also give us a lot more design possibilities and make our buildings more interesting? And do so with human involvement? +0.5
DOCTORS NEED TO ADAPT OR DIE: Smarter health-care — a big plus! But only for those who can afford it! An even bigger minus. -2
AI-ENABLED HACKING MEANS NO MORE ANNOYING PASSWORDS: But a lot more invasive biometrics. -3
The Mind and Iron Totally Scientific Apocalypse Score for this week:
-4.5
The Mind and Iron Totally Scientific Apocalypse Score for this month:
-6.0
MindandIrony
A possibly penetrating, perhaps droll comment on current tech developments
A new look at AI Ethics
I was a big fan of the New York Times Magazine’s “The Ethicist” column back in the day. I’d while away many hours reading, thinking on and, OK, occasionally mocking Randy Cohen’s takes on the issues posed to him.
You may remember Cohen’s column — people would write in with their ethical dilemmas, and the expert would write back (often predictably) weighing the issues, providing a seduction of the obvious.
But the questions were frequently interesting, and would occasion what a friend of mine calls what-if-you-weres — as in, what if you were in that situation. (Apparently the column still exists at the NYT.)
I’ve been thinking that we need a new version of this. That in an era when computers can take on tasks they never took on before, we need a guiding light for all the questions this raises — what the limits are, what our responsibilities might be. We’re currently shaping the rules of the road in an area with no rules (and, until recently, no pavement).
The idea crystallized when I had lunch recently with Isaac Pollack, an associate superintendent at Saint Louis Public Schools. Pollack’s portfolio is school innovation, turnaround, and charter partnerships — so he’s right at the cutting edge of these issues in a large public bureaucracy.
Here’s what went down: Pollack was recently hiring for a charter-schools job. A key part of said job involved communication. The field had been narrowed to six candidates. As part of their applications, all of them had to write a memo describing their approach to the job.
Of the top three candidates, two wrote memos that seemed to come from their own minds, with specific references, and a warmth and even some shagginess that Pollack recognized as essentially human. (As good as large-language models have gotten, they still can’t replicate the messiness of being human.)
The memo of the third candidate, however, had a different vibe. It felt hollow, impersonal. Everything about it was neat — too neat — such that Pollack, without even using any detection software, suspected it was partly written by an AI. (He actually put some of the parameters for the memo into ChatGPT, and what came back confirmed his suspicion. Caught red-handed.)
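For the curious, a crude, hypothetical version of the comparison Pollack made by eye might look like this: regenerate a memo from the same prompt, then measure the textual overlap. The memo snippets below are invented, and real AI detection is a far harder and shakier business than a similarity score.

```python
# A naive overlap check between a submitted memo and a freshly generated one.
# Illustrative only: high similarity is a red flag, not proof of AI use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidate_memo = ("My approach to communication is rooted in transparency, "
                  "stakeholder engagement, and data-driven storytelling.")
regenerated_memo = ("My approach to communication is grounded in transparency, "
                    "stakeholder engagement, and data-driven storytelling.")

vectors = TfidfVectorizer().fit_transform([candidate_memo, regenerated_memo])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"cosine similarity: {similarity:.2f}")  # near 1.0 would raise eyebrows
```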
This was concerning — one of the main responsibilities of the job was communication. And this person seemed incapable of doing that without serious machine help. “Did I want to hire someone who didn't feel confident enough to write their own memo?” he asked. Even more complicated: this person was previously his top choice.
Pollack presented this dilemma to me mid-stream — he hadn’t responded to the candidate yet. As he and I talked, it became clear there were four basic paths he could take.
A. Say nothing about the GPT issue and just proceed with the next stage of the interview process as though it never came up.
B. Say nothing about the GPT issue but ask the candidate to resubmit the application and see if they do it again.
C. Confront the candidate and see if their response indicates contrition or at least a creative explanation for why they did it.
D. Reject their application out of hand.
What would you do?
(I was also a fan of the Choose-Your-Own-Adventure Books.)
Now, the rationale for Options C and D is straightforward — the person is applying for a job, and the job requires communication skills. If they used AI in a communication test to get the job, what would they do once they had the job? (And if the answer is ‘use AI there too,’ then the question is why you need to hire a person in the first place.) So either pass on them or, if that’s too harsh, confront them.
The Option B logic makes sense too — it doesn’t ignore what the applicant did but it gives them another chance. Maybe they had a bad day, or maybe they got so caught up in other parts of the application that they ran out of time and had to use a shortcut. Given another shot, they might write the memo themselves. So hand them that opportunity.
Option A is the most unexpected — it’s actively closing your eyes to the fact that they used AI to write their memo. What would the rationale for that be?
Well, maybe AI isn’t cheating but a tool that a job candidate could indeed use even after they got to the chair. “If an employee was doing a great job and they were secretly asking a friend for help all the time, would I care?” Pollack said he found himself thinking as he weighed his choices. “Their friend may get mad at them or want some money. But I don’t think it would matter to me as long as they’re doing a good job. So how is this different?”
A good point — and a complicating insight. Maybe there’s nothing wrong with using AI after all, and Pollack should just evaluate the memo as he would any other.
I asked Rachel Gordon, a shrewd Boston-based HR consultant who has been involved in scores of hiring decisions, what she would do.
"What's right for one employer doesn't necessarily fit for another," she said. "But speaking generally, I don’t think I would confront the candidate. I would probably just ask politely and seek to understand if they had a good explanation, like maybe they just didn’t have enough time [to complete the application]. They shouldn’t necessarily be penalized for that.”
And if they didn’t have that explanation? “I think I’d still see if AI was something they could use while in the job.” (Like, if the job’s communication requirements were more about letters and memos than real-time speaking.) “If it was, I’d treat AI more like a tool, just like a calculator or Excel.”
Which is an interesting approach: AI isn't cheating. It's just new.
Or put another way: If generative AI is going to be a part of our lives and jobs, why not let it be part of how we get those jobs?
What this situation underscores (besides the fact that we haven't figured any of this out yet) is the tension between fresh digital tools that can legitimately help us and the idea, for the moment still common, that relying on those tools is underhanded. It could be. But will it always be? Does it have to be?
That part is less clear, and the reflexive reaction that these tools constitute cheating seems a little misaligned with how work itself will be done in the not-so-distant future. Sure, a memo written with ChatGPT doesn’t test communication skills. But maybe the onus should be on employers to find a better way to run this test — and not punish a candidate who just made use of something legal at their disposal. Maybe the goal should be not to ban AI but to help employees use it optimally.
Ultimately what employers want from a candidate — always, but especially in the cold-machine age — is humanity. A humanity that the people they’ll work with (like the school board) can relate to. And if the candidate has that, AI on an application seems like small cashews indeed.
In the end, Pollack in fact went with Option A — he didn’t say anything to the applicant. “Yep, just ignored it,” he said with a laugh. “Looked at other things.” He began moving through the other parts of the recruitment process and will make his hiring decision based solely on those, discounting this GPT-gate entirely.
This won’t always be the right move; I can see some employers disqualifying a candidate out of hand for using AI. (You probably wouldn’t hire a journalist who did this, for instance.) And I don’t think that from an employee’s standpoint withholding the fact that AI helped you is the best way to start off a relationship with a new employer.
But AI played out here as I think it will eventually come to play out — as a tool a person will deploy both before and after getting a job. Some more, some less. Some more effectively, some not so effectively. But deploy it just the same. Employers might soon be asking job candidates not ‘did you use AI on the job application?’ but ‘how will you use AI in the job?’