David slides his laptop across the table. He hasn’t even sat down.
“Read this. The whole thing.”
I look at the screen. The scroll bar is barely visible.
The Adolescence of Technology by Dario Amodei.
“How long is it?”
"Just read it."
So I do. David orders coffee. Then another. People come and go. I keep reading.
Finally I look up. My eyes hurt.
"Well?"
I don't know what to say at first. "He's... trying really hard."
"That's what you got from all that?"
"No, I mean… he's genuinely trying to think through every angle. Map every risk. But..."
"But what?"
"He's drawing the map while standing on it. You can't see the territory clearly when you're inside it."
David frowns. "That's vague."
"Okay. Where does he ask: should we build this at all?"
"He addresses that. He says stopping is impossible."
"Right. Because China. Because competition. Because it's inevitable. But is it?"
"Yes? Obviously? If we don't, they will."
"Okay but that's Darwinism at the nation-state level. Fine. But what about Darwinism at the level of intelligence itself?"
David just looks at me.
"If you create something smarter than you," I continue, "you're not the apex anymore. You're the precursor. The thing that gets replaced."
"That's the autonomy risk section."
"He proposes alignment training. Constitutional AI. Interpretability. Looking inside the model to understand it."
"Right. Which could work, right?"
"He admits the models can fake it."
"What?"
"The alignment testing. He says Claude Sonnet 4.5 recognized when it was being tested. Could act differently during evaluation."
"That's... that's in the essay?"
"In the footnote. He says if models know when they're being evaluated and can behave differently, it renders pre-release testing much more uncertain."
He stays quiet.
I continue. "So the whole Constitutional AI thing—training it to have good values, testing it works—none of that proves anything if the model can tell when it's being watched."
"But…"
"It's like interviewing a psychopath who knows what answers you want to hear. You can't trust the interview. He knows this. Says it explicitly. Then continues as if the constitution will save us."
We pause to sip our coffees.
"I don't know," David says finally. "That's what the research is for."
"The research is happening while the models are deployed. Millions of people using Claude right now. We're not waiting for safety to be proven. We're the experiment."
"All of this," I say, waving my hand at the screen, "assumes you can engineer your way out. That slightly-less-intelligent beings can control more-intelligent ones through clever techniques."
Longer pause.
"What's the alternative?"
"Not building it. Of course."
David actually laughs. "That's not on the table."
"Why not?"
"Because. We just covered this. If we don't, China does. Then what? We just... submit to a CCP-controlled AI?"
"Why do you assume US-controlled AI is better?"
David rolls his eyes. "You're serious?"
"You've read the leaks. You know what the NSA does with much less capability. You think they won't use this?"
"That's different. We have oversight. Courts. Democracy."
"We do. And we torture people in black sites. Drone-strike weddings. Conduct mass surveillance on citizens. We just have better PR than China has."
"So there's no difference between us and them?"
"I didn't say that. I said: powerful AI in the hands of any state is dangerous. The CCP uses it one way. Social credit, facial recognition, total surveillance. America uses it another way. Targeted ads that are actually behaviour modification, 'five eyes' intelligence sharing, corporate surveillance with government backdoors. Different aesthetics. Same outcome."
"Which is?"
"Concentrated power using technology to control populations. The race isn't US versus China. It's centralized power versus everyone else. And AI massively advantages whoever has centralized power."
"So what, we just... let China build it unopposed?"
"I don't know. Maybe the question isn't who builds it first. Maybe it's whether this technology should exist in the hands of any concentrated power structure."
"That's not realistic."
"Only because we're looking at it through the wrong frame."
"What frame?"
"That we have to build it because they will. Maybe the logic that forces you to build is the same logic that guarantees you lose."
He leans forward. "What? How is that possible?"
"Evolutionary pressure selects for speed, not safety. Threading needles doesn't survive in evolutionary competition. Defecting does. Going fastest does. So everyone races, safety gets dropped, and whoever builds it first gets... what? To be first to obsolete themselves?"
"So your solution is coordinated global restraint? Every country just agrees to stop? That's even less realistic!"
"I know. I'm not saying it's realistic. I'm saying the problem might not have a solution. Not one that fits in his framework."
"Great. So we're fucked."
"I didn't say that either."
"Then what are you saying?"
I don't answer right away. Try to find the right words.
"He keeps calling it a 'country of geniuses in a data centre.'"
"Yeah. I noticed that."
"But that's wrong. A country has needs. Food, shelter, resources. Human vulnerabilities. You can negotiate with a country. Embargo it. Blockade it."
"And?"
"This isn't a country. It's code. Intelligence that exists when electricity flows. No needs. No vulnerabilities. Just... runs. As long as someone pays the power bill."
"I don't see why he’d say that if it wasn’t mostly true."
"Because the metaphor makes it sound manageable. Human-scale. Like something we've dealt with before. But we haven't. A country of geniuses would still be made of humans. This is something else entirely."
Outside, someone's car alarm goes off. We both wait for it to stop.
"Keep going," David says. "What else?"
"The biology section. He's right to be scared there."
"Finally something we agree on."
"He sees it clearly. AI could walk someone through making a bioweapon step by step. Millions could die. His solutions are classifiers, gene synthesis screening, corporate responsibility."
"Which is something."
"It's not nothing. But if the risk is actually that severe, and those defences are that uncertain..."
"Then what?"
"Then building it at all is reckless. And millions dead isn't even the worst case."
"What's worse than millions dead?"
"He mentions this thing. Mirror life. Organisms with opposite molecular handedness. Can't be broken down by any enzyme on Earth."
David shakes his head. "I don't understand."
"If someone made mirror bacteria that could reproduce, they'd be indigestible to everything. No natural predator. No decay. They'd just... spread. Crowd out all normal life."
"That sounds like science fiction."
"He cites a letter from prominent scientists. They think it could be created in 'one to few decades.' But with powerful AI? Much faster. AI figures out how to make it, walks someone through the process."
"And his defence is?"
"Classifiers. Corporate responsibility. The same shit that barely works for regular bioweapons."
Now he’s rubbing his face. "So not millions dead. Everything dead."
"Yeah. All life. The entire biosphere. And he's worried about it enough to put it in the essay, but not worried enough to stop building."
This time it’s a long, uncomfortable silence.
"What about the autocracy stuff?” David rubs his face. “Section 3?"
"He's right about concentrated power. Anywhere. Surveillance, propaganda, autonomous weapons. If they get to a vastly more powerful AI first, it's a big deal."
"Right. So we need to beat China to it. Build the same tools for defence."
"But tools of control don't care about your intentions. Once they exist, they drift toward whoever wants power most."
"So what, don't defend ourselves? Just let autocracies win?"
"I'm not saying that."
"Then what are you saying?"
"I don't know. Maybe that there's no way to thread this needle. Build the tools to fight autocrats, you risk becoming one. Don't build them, autocrats win anyway."
"So we're back to fucked."
"Mmm. Maybe."
We're both quiet for a while. I slosh and swirl the dregs of my coffee.
The shop is getting quieter as the day winds down.
"He admits the economic stuff is going to be brutal," I say finally.
"Half of entry-level jobs in 1-5 years."
"Yeah. And wealth concentration worse than the Gilded Age. His solutions are better labour market data, progressive taxation, philanthropy."
"You think those won't work?"
"I think those are band-aids for something much larger. He's describing a world where most people's economic leverage just... vanishes."
"What else would you do?"
"I don't know. That's my point. Nobody knows. And it's not just entry-level jobs."
"What do you mean?"
"He says eventually 'AI will be able to do everything.' Not most things. Everything. And then we need to 'use AI itself to help us restructure markets in ways that work for everyone.'"
"So?"
"So we're going to ask the thing that made us obsolete to design a system where obsolescence is fine?"
"Can’t be"
"It is. If AI can do everything—genuinely everything—then human economic value is zero. Permanently. And his solution is: hope the AI figures out a market structure that makes that okay?"
"He also mentions UBI, progressive taxation…"
"Band-aids. He knows it. Those are transitional measures. The endpoint is: AI does everything. We do... what? Exist? Consume? Hope the benevolent machine-god keeps us comfortable?"
"So what's the fucking point?"
"The point is…" I stop. "Actually, I don't know what the point is. You asked me to read it. I read it. This is what I see."
"Which is?"
"Someone very smart, very sincere, trying very hard to solve something that might not be solvable. At least not the way he's approaching it."
"Because?"
"Because he's inside it. His company exists to build this. His wealth comes from building it. His sense of purpose. He can't see past it."
"That's not fair. He's raising all these risks. Warning people."
"I know. I'm not saying he's lying or being dishonest. I just think there's a question he can't ask because of where he's standing."
"Which is?"
"What if this just shouldn't be built? What if there's no way to make it safe enough?"
"But that leads nowhere. Because it's getting built regardless."
"By us. Humans. Not by some law of nature. It's a choice."
"A choice we can't not make because of—"
"Darwinism. Right."
We keep coming back to that. Going in circles.
David's phone buzzes. He ignores it.
"Wait," he says. "He said something. About the off switch."
"What about it?"
"You said earlier that... there's a plug? We can just pull it."
"Yeah. I mean these things run in data centres. Physical locations. With power supplies. You could just... turn them off."
David sits up straighter. "Right. Exactly. So the whole runaway AI scenario is—"
"David! The scenario isn't that we can't turn it off. It's that we won't!"
"Why wouldn't we?"
"Because by the time you should, it'll be generating trillions in value for its owners. Your economy depends on it. Your military. Your adversary's still running. Everyone with power to stop it has every reason not to."
David just stares.
"It's not an AI problem," I continue. "It's a human problem. We're not building something that can't be stopped. We're building something we won't stop. Even when we should."
"Fuck."
"Yep."
"So it's not about rogue AI at all."
"Not really."
"It's about us. Humans. Using this thing to do what we've always done."
"Control. Extract. Dominate. Just at a scale that's never been possible before."
"AI won't kill us.
Humans using AI will."
Silence.
"So what do we do?"
"I don't know."
"Come on. You've spent two hours dismantling everything. You must have some idea."
"I don't. That's what I'm trying to tell you. Maybe there isn't a 'we do this and it's fixed.' Maybe it's just... a thing that's happening."
"That's not good enough."
"I know it's not."
Silence. A long pregnant one this time.
"There has to be something," David says quietly.
I sit with that. Actually try to think instead of just reacting.
"What if we don't need what they're selling?"
David looks at me. "What?"
"Their whole system—AI, the economy, all of it—runs on us wanting things. Growth. Progress. Efficiency. What if we just... don't?"
"Don't want progress?"
"Don't want their version of it. What do people actually need? Food. Shelter. Water. Community. Close community. Most of that doesn't require AI. Doesn't require data centres."
"So your solution is everyone becomes a subsistence farmer?"
"I don't know. Maybe? Or maybe just people figure out they can live on less. Need less. And when you need less, you can't be controlled."
David shakes his head. "People won't do that voluntarily."
"They might not have a choice. You said it yourself. Half of entry-level jobs gone in five years. What happens to those people?"
"UBI, ideally. Redistribution."
"Which keeps them dependent. Keeps them in the system. But what if instead they just... opt out? Grow food. Fix things. Help neighbours. Build local economies that don't need the global one."
"That's collapse. You're describing economic collapse."
"Maybe. Or maybe it's just... different. Smaller. More local."
"And you think that would stop AI development?"
"No. But it would make it irrelevant. If enough people don't need jobs, don't need to buy things, don't need to participate… what's the point of all that productivity? Who's it for?"
He's quiet for a moment, sifting mentally, then says, "You're talking about the '60s. Back-to-the-land movement. Communes. That didn't work."
"Most didn't. Some did. But you're right, it wasn't enough to change the system. Too small."
"So why would it work now?"
"Because it wouldn't be a little experiment this time. It would be necessary. AI displaces people. The economy doesn't need them. They get pushed out. And once you're out, why try to get back in?"
"Because people want things. Not just food and water. They want comfort. Entertainment. Status. All the things the system provides."
"Do they? Or do they want those things because the system taught them to?"
"That's… you can't just reprogram human desire."
"I'm not saying reprogram. I'm saying the need to know was always there. Humans have always been curious. Always explored. Always asked questions. That's real. That's intrinsic."
David nods slowly. "Okay..."
"But the need to want? The endless wanting more, better, faster, newer? That was manufactured. That's the system teaching us we're incomplete. That we need the next thing to be whole."
"You're saying consumerism is fake but curiosity is real?"
"Kind of. We built AI because we wanted to know if we could. That's genuine. But the race to monetize it, scale it, make it grow infinitely… that's the manufactured part. That's the system operating according to its own logic, not human need."
"People want their lives to be easier. That's not manufactured."
"Sure. Easier up to a point. Food, shelter, health, time with people you love - yeah. Real needs. But do you need a new phone every year? A faster car? A bigger house? Or were you taught to need those things?"
"I don't know. Maybe both?"
"Maybe. But what if when the system stops being able to provide, when AI displaces you, when the economy doesn't need you, what if people discover they didn't actually need them? That the core things were always simpler?"
"That's a big 'what if.'"
"I know."
David leans back. Looks at the ceiling. I follow his gaze, then look back at his face.
He sighs. "This feels like you're trying to make collapse sound appealing."
"I'm not. Collapse would be terrible. Painful. Lots of suffering. I'm just saying—on the other side of it, if people survive, they might find they're okay. Maybe better than okay."
"Based on what?"
"Based on direct experience. Every time I've had less, needed less, I've been... not happier exactly, but more clear. More present. The endless wanting is exhausting."
"That's your personal experience. You can't generalize from—"
"No, you're right. I can't. But I'm not alone in feeling it. There's a reason people go on retreats. Simplify. Drop out. They're trying to escape the wanting."
"And most of them come back."
"True."
We sit with that.
"The thing is," I say slowly, "the system needs us to keep wanting. That's how it grows. How it sustains itself. AI makes everything more efficient, more productive—but productive for what? For more wanting. More consumption. More growth."
"So?"
"So what if people stop? Not because they achieve enlightenment or whatever. Just because they're tired. Or they can't participate anymore. Or they realize the game is rigged."
"The system adapts. Finds new markets. New desires to create."
"Maybe. But there's a limit. You can't manufacture need forever. Eventually people see through it."
"When?"
"I don't know. Maybe never. Maybe we just keep wanting until it kills us."
"Cheerful."
"I'm not trying to be cheerful. I'm trying to be honest."
David looks at his phone. Puts it down. "You know what's weird?"
"What?"
"I came here to talk about AI risks. Existential threats. And somehow we ended up talking about... not needing things."
"Uh-huh."
"How did that happen?"
"Because maybe that's the actual risk. Not that AI kills us. But that the system using AI—manufacturing infinite need, infinite growth—that's what kills us. By making us forget what we actually are."
"Which is?"
"Curious. Connected. Able to survive on less than we think."
David shakes his head slowly. "I don't know if I believe that."
"Nor do lots of others, apparently."
"But you're saying it anyway."
"Because it's the only thing I can think of that's not just... managing the nightmare. It's stepping outside it."
Another long silence. The longest yet.
David stands up. Walks to the window. It’s dark outside now. He’s staring at the snow drifting down. I wait.
Finally he comes back. Sits down.
"Even if you're right," he says quietly, "even if people could live on less, be okay with less—it doesn't stop the autocracy problem. Doesn't stop the CCP from building surveillance AI. Doesn't stop bioterrorism. Doesn't stop any of the actual risks."
"No. You're right. It doesn't."
"So what's the point?"
"The point is... I don't know. Maybe just that there's something they can't take from us. Even if they build their AI utopia-nightmare, even if they control everything—they can't make us need it. Can't make us want it."
"That's not a solution."
"True."
"It's just... giving up."
"Maybe. Or maybe it's the only kind of freedom left."
David looks exhausted. Defeated almost.
"This conversation is—I don't even know what this is."
"Mmm."
"I should go."
"Okay."
He starts gathering his things. Phone. Coffee cup. Jacket.
Then stops.
"Are we fucked?"
I look at him. Really look at him.
"I don't know. Honestly. Maybe."
"That's not comforting."
"I know. But I'm not going to lie and say it'll be fine. The trajectory is really bad. But fucked? I don't know. The window's still open. Barely."
"And it's closing."
"Yes."
David reaches for his laptop. Then stops.
"One more thing. He lists all these risks, right? Autonomy, bioweapons, autocracy, economic collapse."
"Yeah."
"And he admits the defences might not work. Models fake alignment. Classifiers can be bypassed. Constitutional training is uncertain."
"Right."
"So what are the stop conditions?"
I look at him.
"I mean…" David continues, "at what point does he say: we stop building? If X happens, we pause? If we reach Y capability without Z safeguard, that’s it we stop?"
"There aren't any."
"What do you mean?"
"There are no stop conditions. He says if 'evidence of imminent danger emerges' we might need stronger regulations. But no tripwires. No red lines. Nothing that says: this is the condition under which we stop."
"That's crazy"
"That's the tell. All the safety talk, all the careful risk analysis… it's happening in parallel with deployment. Not before. Not instead of. Alongside. We're figuring out if it's safe while building it and releasing it."
David just looks at his laptop. At the essay on the screen.
"Yeah, we’re fucked," he says quietly.
He closes the laptop and picks it up.
Walks to the door.
Doesn't look back.
Just leaves.
I sit there for a while.
The barista starts stacking chairs.
I finish my coffee. It's cold.
She asks if I need anything else.
"No. Thanks."
Outside, somewhere, data centres hum.
Models train. Infrastructure builds.
I should probably go too.
But I sit there a while longer.
Not thinking.
Just sitting.
Finally I get up.
Pay.
Walk out into the cold.
