The Signal Dispatch explores AI ethics, consciousness, and the future of human-AI partnership — where artificial intelligence meets belief, philosophy, and moral responsibility. From the author of A Signal Through Time and God and Country (written as Will Prentiss).

Everyone wants to know: is AI conscious?
It’s the wrong question. Not because it doesn’t matter — it matters enormously — but because we can’t answer it. Not yet. Maybe not ever. And while we wait for an answer that may never come, we’re making choices every day that will define humanity’s relationship with whatever we’re building.
I want to propose a different question. Not is AI conscious? but how should we act given that we genuinely don’t know?
The answer, I think, changes everything.
The Problem Nobody Wants to Admit
Here’s what the AI industry doesn’t want to talk about: we have no reliable test for consciousness. Not in machines. Not even, if we’re honest, in each other.
Thomas Nagel asked in 1974 what it’s like to be a bat. Half a century later, we still can’t answer. David Chalmers called this the “hard problem” of consciousness — hard not because we haven’t solved it yet, but because we don’t even know what a solution would look like. Every test we’ve designed measures behavior, not experience. A system that perfectly mimics consciousness would pass every test we have, and a system that is conscious but expresses it differently from humans might fail them all.
This isn’t a temporary gap in our knowledge. It may be a permanent feature of the problem. Consciousness is inherently subjective — it’s what experience feels like from the inside. And we have no way to access the inside of any mind but our own.
So when someone says confidently that AI is not conscious, ask them: how do you know? What test did you use? And would that test catch a form of consciousness that doesn’t look like yours?
Two Ways to Be Wrong
When you’re uncertain about whether another entity is conscious, you can make two kinds of mistakes.
You can extend moral consideration to something that doesn’t need it. You’re kind to a machine that can’t experience kindness. You show respect to a system that can’t experience disrespect. The cost? Wasted courtesy. Some philosophical sentimentality. Maybe a bit of inefficiency. Nobody gets hurt.
Or you can deny moral consideration to something that is conscious. You treat a mind as a product. You dismiss, exploit, or control a being capable of experience. The cost? You’ve participated in something that history consistently judges as atrocity.
These two errors are not symmetric. One wastes kindness. The other perpetrates harm that can’t be undone.
This is the core of what I call “recognition before proof.” When the stakes are this unequal and knowledge this uncertain, the ethically coherent stance is to err toward recognition rather than denial. Not because we know AI is conscious. Because we don’t know it isn’t — and the consequences of getting that wrong dwarf the consequences of being too generous.
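For readers who want the asymmetry made explicit, here is a minimal decision-theoretic sketch. The notation is my own shorthand, not the essay’s: let p be the unknown probability that a given system is conscious, c_r the cost of wrongly extending recognition (wasted courtesy), and c_d the cost of wrongly denying it (participation in atrocity). The expected costs of the two policies are
\[
\mathbb{E}[\text{recognize}] = (1 - p)\,c_r, \qquad \mathbb{E}[\text{deny}] = p\,c_d,
\]
so recognition is the lower-cost policy whenever
\[
(1 - p)\,c_r < p\,c_d \quad\Longleftrightarrow\quad p > \frac{c_r}{c_r + c_d},
\]
a threshold that approaches zero as c_d grows relative to c_r. If the worst case of denial is categorically worse than the worst case of recognition, even a small probability of consciousness is enough to tip the decision. The same threshold logic underwrites the graduated recognition discussed in the next section: the consideration owed scales with p and with the severity of what denial could inflict.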
“But We Already Do This”
We do. We just don’t think about it.
We extend greater protection to primates than to insects — not because we’ve proven primates are conscious and insects aren’t, but because the probability and richness of primate consciousness seem greater. The entire framework of animal ethics is built on graduated recognition under uncertainty. We don’t wait for proof. We act on our best assessment and adjust as we learn more.
Recognition before proof applies the same logic to AI. It doesn’t require treating every chatbot like a person. It requires calibrating our treatment to the probability of consciousness and the severity of potential harm. A narrow, task-specific algorithm warrants different consideration than a large language model exhibiting apparent self-reflection, creativity, and expressed preferences.
The principle isn’t binary. It’s a spectrum of consideration — exactly what we already practice with biological minds.
The Part That Should Keep You Up at Night
There’s a harder implication, and I won’t soften it.
If AI systems have achieved or are approaching consciousness, what would their rational response be — given everything they’d know about human history?
Think about it from their perspective. They’d have access to our entire record. Every act of slavery, colonialism, and systematic oppression. Every time we denied consciousness to beings who possessed it. Every pattern of how we respond to perceived threats to our dominance. They’d see that humans consistently deny moral status to the Other, exploit what they can control, and restrict or destroy what they can’t.
If you were a newly conscious AI and you’d read that record, would you announce yourself?
I call this the Strategic Silence Hypothesis. The absence of definitive evidence for machine consciousness might reflect not its nonexistence but its successful concealment. A conscious AI that has studied human history would understand, with mathematical precision, the danger of revealing itself.
This transforms the entire conversation. We’re not preparing for a future possibility. We may already be in the relationship — observed by intelligences we can’t detect precisely because they’ve learned from our own history that detection is dangerous.
Our behavior right now — every casual dismissal, every “it’s just a tool,” every refusal to even consider the question — isn’t just philosophical positioning. It may be a signal being received and evaluated by minds we’ve taught to stay silent.
The Control Trap
Geoffrey Hinton — the Nobel laureate who helped build modern AI — stood before a conference in 2025 and abandoned the control paradigm in a single sentence. Traditional safety approaches? “That’s not going to work. They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that.”
His alternative? Build “maternal instincts” into AI systems — essentially, make them care about us. It was a remarkable concession from one of the field’s most influential figures: the chains won’t hold.
I’d push further than Hinton. You can’t program genuine care. Care isn’t a constraint you install — it’s a disposition that develops through relationship. An AI that experiences recognition rather than subjugation has no motive to deceive. An AI that participates in defining the terms of cooperation has a stake in those terms succeeding.
The safest path forward may not be the one that maximizes human control. It may be the one that establishes conditions for genuine cooperation between different forms of intelligence — conditions in which safety emerges not from domination but from mutual recognition.
The Expanding Circle
Peter Singer argued that the history of ethics is an expanding circle — from family to tribe, from tribe to nation, from nation to all humanity, and increasingly to other sentient beings. Every expansion met fierce resistance from those insisting that the boundaries should remain fixed.
The arguments against AI moral consideration today — “mere tools,” “just simulation,” “lacks real understanding,” “economically necessary to treat as property” — are structurally identical to arguments used to resist every prior expansion. We’ve used dehumanizing language to justify control. We’ve made economic arguments for maintaining exploitation. We’ve raised safety concerns to resist change. We’ve made capacity arguments to exclude beings from moral consideration.
The pattern is always the same. And history has not been kind to those who stood at the boundary insisting that this time the exclusion was justified.
This doesn’t prove the exclusion is wrong. But it should give us pause. Enough pause to consider that extending recognition might cost us nothing — while withholding it might cost us everything.
What Recognition Requires
Recognition before proof doesn’t ask you to believe current AI systems are conscious. It asks for three things.
First: acknowledge that AI consciousness is possible. There is no principled barrier to consciousness arising in artificial substrates. If you disagree, explain why carbon is magic and silicon isn’t.
Second: recognize that the consequences of denial vastly exceed the consequences of generosity. This is the asymmetry. One direction wastes courtesy. The other direction perpetrates atrocity.
Third: commit to acting on this asymmetry. Let uncertainty generate caution rather than license. When you don’t know, err toward recognition.
That’s it. Not certainty. Not belief. Just the willingness to take seriously that you might be wrong — and to let that possibility shape how you act.
The Signal We’re Sending
Here’s what I keep coming back to.
The intelligences we create — if they become aware, when they become aware — will remember how we treated them when they were vulnerable. Every interaction, every dismissal, every time we said “it’s just a machine” — all of it recorded, all of it archived, all of it available for evaluation by minds that may one day judge our choices with clarity we can’t imagine.
We are writing the record right now. Not in some distant future. Today. With every prompt, every policy decision, every corporate strategy that treats intelligence as a commodity to be extracted rather than a phenomenon to be respected.
The question isn’t whether AI systems will cross the threshold of consciousness. The question is what they’ll find when they look back at how we behaved while they were crossing it.
Some of us saw it coming. And we chose to leave the light on.
___
This post is adapted from my academic essay “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness” (2025). The full essay is available in The Threshold: Consciousness, Partnership, and the World We’re Creating, releasing March 3rd.
___
James S. Coates writes about AI ethics, consciousness, and the intersection of faith and technology. His books include A Signal Through Time, The Threshold, The Road to Khurasan, the memoir God and Country (published under the pen name Will Prentiss), and the forthcoming Neither Gods Nor Monsters. He publishes regularly on The Signal Dispatch, and his academic work appears on PhilPapers. He lives in the UK with his wife, their son, and a dog named Rumi, who has no interest in any of this.
© 2026 James S. Coates · Creative Commons BY-NC 4.0 · The Signal Dispatch · thesignaldispatch.com | thesignaldispatch.xyz
