The conversation didn’t end in a fight. It ended in a border.
“I don’t think we’ll see eye to eye on this.”
“It’s dawning on me too, that you’re saying, you’re not the audience for AI. I hear you. I’ll stop sending you updates about my businesses.”
“I want the same. I don’t think we’ll see eye to eye on this.”
Three messages. No anger. Just the sound of a door clicking shut.
I sent a friend one of my blog posts — the one about using four AI tools to cross-check each other’s blind spots. She responded with a careful, fully-loaded critique of the entire industry. Not of me. Of the system I’d chosen to build inside.
Call her Nora.
She’s not someone who argues from ignorance. She left a high-control religious group as an adult and rebuilt her moral framework from the ground up, plank by plank, testing each one before she put weight on it. She understands coercive consent at an experiential level most people will never touch — the kind where you don’t realize you were coerced until years after you walk away. When she says “I’m tired of the ends justifying the means,” she’s applying the same principle she used to leave everything she was raised to believe.
I respect the hell out of that. I need to say it up front, because the rest of this post is about how I failed to show it in real time.
The Case Against
“I’m really glad that AI is working for you,” Nora opened. Then the conjunction. “But I think it has a farther capacity for harm in our current economic system. It was rushed through under the guise of ‘non-profit’ when the end-goal was always a for-profit industry.”
She wasn’t ranting. She was building a case.
“Consent is getting in the way. Rights are getting in the way. And we’re at a point where those are just flowery concepts anyway, when the almighty dollar is on the line.”
Her position is deontological — the thing itself is wrong, regardless of what you build with it. The implementation was unethical. The nonprofit shells were always intended to become profit engines. The training data was never willingly given. The consent was never meaningfully asked for. And using the product without confronting that is a form of looking away.
She laid out an analogy I haven’t been able to shake:
“Imagine our current economic model like an AI prompt. It currently runs on ‘produce capital’ and has been doing so for hundreds of years, at the expense of the vast majority of those who do not have access to that prompt. What if we had started with the ‘meet humanity’s needs of survival’ prompt, with an eye to doing the least harm and the most good. But we don’t have that system.”
She compared the industry to hairspray companies in the ’80s — if they’d encouraged people to keep spraying and ignore the hole in the ozone. “The companies would have loved that, but humans still listened a little to scientists, and instead, we banned a chemical that was destroying the earth and reversed the damage done. It’s not impossible. People today still have hair products. They just also have a planet.”
She compared AI advocates to the way people treat vegans in a grocery store: “An individual may not make a change, and everyone else may roll their eyes, but we also mock them because deep down, humans don’t really enjoy the reality that we have to kill other animals to survive. We don’t like thinking about our meat with a face, so we laugh at the people that do in order to assuage our own guilt. It’s human psychology.”
“This is the starfish I’m throwing back into the ocean,” she wrote, “because I think it harms the collective more than it helps.”
“We build on the backs of the unwilling, and justify the progress. I argue that we KNOW how unethically the tool was implemented, and ignoring the short-term ethical violations creates a country where slave catchers are now the norm, rebranded and insisted as a solution to a problem caused by artificial scarcity.”
Fair. But the starfish she’s throwing back is the same tool I’m using to get people to hospitals faster.
The Case For
I’m in Dumaguete, Philippines. Negros Oriental. Population around 130,000, one provincial hospital buying its first MRI this quarter, and an emergency response system where the ambulance takes ninety minutes if it comes at all.
My biggest AI project is a free emergency directory because when you’re having a stroke here and try to call for help, there’s no caller ID, no GPS ping, no location detection. The dispatch system I built cuts response time in half. It auto-forwards your medical records to the receiving hospital so they can prep before you arrive. Dauin and Dumaguete are dive capitals — decompression sickness response was a design requirement.
I also built a free doctor database. If you need an oncologist right now and don’t want to wait until your next hospital appointment, you can look one up.
Normally, that’s a million-dollar project. Seven engineers. Three to six months. Nobody here could fund it.
Last week, I paid for repairs on a Ford F-350 ambulance so my friend Dr. Ken could save cash for the MRI his hospital is buying this quarter.
My threat model is someone having a stroke who can’t get through to emergency services. The tool in my hand works. I’m not going to set it down.
The Reach
I pushed too hard.
I talked about life in a third-world country — motorcycles without headlights, TikTok and Spotify, haggling at the market, subsistence farming, community festivals. I was trying to say: the ground-level reality here is more complex than the narrative. That the Filipino family down the road isn’t waiting for an American savior — they’re watching K-dramas and running a sari-sari store and sending their kid to nursing school.
She heard something else.
“God I hate imperialism,” she wrote. “That ‘it’s better for third-world countries because we gave them technology’ — fuck that white power narrative. There’s been so much destroyed, but at least we have phones.”
She wasn’t wrong to push back on the framing. I was reaching past her to make a point about material progress and I landed on a nerve that had nothing to do with dispatch systems.
I said she was romanticizing poverty. She said I was handwaving cultural harm. We both grabbed air and held on like we’d caught something.
That’s the moment the conversation shifted from debate to defense. Two people who respect each other, both swinging at the wrong target. And I didn’t notice it happening, because I was too busy loading the next rebuttal.
The Week It Happened
The same week we were arguing, Anthropic publicly refused to build autonomous weapons and civilian surveillance systems for the US military. The President responded by ordering every federal agency to immediately cease using Anthropic’s technology. “Radical Left AI company,” he called them. Threatened civil and criminal consequences.
Nora had context I didn’t fully appreciate. She’s in a country that’s — her words — “chomping at the bit for an authoritarian government.” She came from a religious group that is celebrating the current political climate. She talked about kidnappings. About law enforcement targeting people like her.
“If I am shot, I already know it’ll be my fault in the eyes of many because the government said so. We’re not in the same place, literally.”
That’s not a rhetorical position. That’s a threat assessment from someone who has done the math on her own safety.
When she looked at Anthropic’s refusal, she didn’t see a win. She saw one company holding a line that a dozen others are waiting to cross.
“I applaud Anthropic, but it doesn’t alleviate my fears, as they’re ONE player. And there’s lots of players on the table. Also, I’ll be real — we’ve had a lot of companies say that they don’t do a thing. And years later, they pay a slap on the wrist because they lied, and were totally doing the thing.”
Then the question I still can’t answer cleanly: “We only know Anthropic said no because we were informed of that. If they had said yes, would we have known?”
The Dilemma
Two moral frameworks. One conversation. No shared square to stand on.
The consequentialist builds the ambulance system. People get to the hospital faster. That’s real. That’s measurable. That’s a body that makes it to the ER instead of dying on the road.
The deontologist asks: but what system did you build it inside? Who mined the cobalt? Who trained the model? Whose consent was skipped? And if you keep building without confronting those questions, are you filling potholes on a road that leads somewhere you wouldn’t choose to go?
The Warrior enters the dungeon because there’s someone dying at the bottom. Someone outside the entrance asks whether the dungeon should exist at all.
She’s looking at the board from above — the Diviner’s view, across all the campaigns at once, watching which pieces are being positioned and by whom. She sees the pattern because she’s been inside a pattern like it before.
I’m in the room with the patient. I have a tool. The tool works.
We’re both reading the same map. Just from different altitudes.
“I’m not saying you can’t do good with it,” she wrote. “I’m saying that I’m not going to stop pushing back on the ethical end of it.”
The Question
“Do you understand my concerns?” she asked.
The first time, I redirected to macroeconomics. Federal Reserve policy. Ray Dalio and the Great Deleveraging. The interest payment on the federal debt. The devaluation of the dollar. Status symbols versus baseline comfort. I talked about how innovation would drive down the cost of consumer goods. How the American consumer would still have unprecedented buying power.
She’d asked a yes-or-no question. I gave her a lecture.
“Do you understand my concerns?” she asked again.
The second time, I said yes. But I followed it with a counterargument about how Grok’s policies are improving and OpenAI is feeling competitive pressure and agentics is the most in-demand skill of the year.
Which is not the same thing as yes.
Nora came from a group that insisted it was the only truth. She walked away from that. She knows exactly what it looks like when someone is so invested in a system that they can’t hear the critique — when every objection gets rerouted through a justification engine until the original concern is unrecognizable.
I spent an hour running her concerns through my justification engine.
At forty meters underwater, nitrogen narcosis sets in. Your judgment degrades, but you don’t feel it going — that’s the whole problem with narcosis. The diver at the surface watches your ascent rate and knows something is wrong. The diver at depth feels fine.
Neither one is hallucinating. One sees the systemic risk. The other sees the immediate task. Both are correct about what they’re looking at, and both are blind to what the other sees.
Am I the diver at depth who doesn’t know his judgment is going?
I don’t think so. But I don’t get to grade my own paper on that. And that’s the scar.
“I want the same,” she said. Meaning: a world where technology serves people instead of the other way around. We want the same destination. We just disagree on whether the path I’m on gets there, and I spent an hour trying to argue her across a line she’d drawn with more care than I was giving it credit for.
The conversation I stopped having isn’t the one about AI ethics. It’s the one where I call her the next time something works. The dispatch update. The doctor directory feature. The next ambulance that arrives twenty minutes sooner.
There’s one fewer person to tell. Not because she doesn’t care. Because I finally heard her say she cares about something I keep building over, and the honest response wasn’t another argument. It was the one I should have started with.
I hear you.
The hardest conversations don’t end in a fight. They end in a door you close gently, because you both know what’s on the other side, and you both wish you didn’t.