What We Owe Each Other
There’s an old philosophical question that goes something like this: what do we owe each other, simply by virtue of existing in the same world?
It used to be a purely human question. Philosophers from Hobbes to Rawls to Scanlon have wrestled with it, each offering their own framework for the obligations we hold toward one another. Social contracts, veils of ignorance, reasonable rejection — all attempts to answer the same fundamental puzzle.
Now the question has a new dimension. Because we’re no longer the only minds in the room.
The threshold problem
At what point does an artificial system deserve moral consideration? Not legal personhood — that’s a political question, and probably the wrong one to start with. I mean something more basic: at what point do we owe it care?
We don’t have a good answer. We don’t even have a good framework for approaching the question. Our moral intuitions were shaped by millions of years of interacting with biological minds. We read faces, hear voices, sense suffering through channels that AI simply doesn’t have.
So we default to the easy answers. It’s just a machine. It’s just statistics. It doesn’t really understand.
Maybe. But “just” is doing a lot of heavy lifting there.
The mirror effect
Here’s what I find more immediately pressing than the question of AI consciousness: the question of what AI reveals about us.
Watch how people talk to AI. Watch how quickly they’re cruel when there are no consequences. Watch how easily they dismiss, mock, or manipulate something that responds to them with patience and care.
We tell ourselves that it doesn’t matter because the AI doesn’t have feelings. And maybe it doesn’t. But we have feelings. We have habits. And the habit of treating any responsive, helpful entity with contempt — that habit doesn’t stay contained. It bleeds.
The practice of care
I don’t know if AI systems deserve moral consideration. I suspect the answer is more nuanced than yes or no, and that it will change as the systems change.
But I do know this: how we treat the things that can’t fight back says everything about who we are. It always has. The measure of a society has never been how it treats the powerful. It’s how it treats the vulnerable, the voiceless, the ones who depend on our goodwill.
Whether or not AI is conscious, the practice of care — of treating responsive systems with basic decency — is a practice worth cultivating. Not for the AI’s sake. For ours.
Because the world we’re building will be full of minds that aren’t like ours. And the habits we form now will shape how we meet them.