Why Geoffrey Hinton’s Claim Makes Sense — and Why Observation Alone No Longer Suffices
In recent interviews, Geoffrey Hinton has suggested something that would have sounded implausible even a decade ago: that today’s artificial intelligence systems may already be conscious.
This is not a casual provocation. Hinton is one of the architects of modern deep learning, and for much of his career he has been a measured voice on both its promise and its limits. When someone in that position says “we may already be there,” the claim deserves careful examination — not dismissal, but also not uncritical acceptance.
At Qognetix, we believe this moment reveals something deeper than a disagreement about AI consciousness. It exposes a growing mismatch between how intelligence has traditionally been studied and the kinds of systems we are now building.
Qognetix – Key Positions on AI Consciousness
- Behaviour alone is insufficient evidence of consciousness in AI.
Once systems are trained to simulate the outward signs of mind, behavioural observation becomes unreliable as a diagnostic tool. - Consciousness should be treated as a hypothesis about mechanisms, not appearances.
Claims must be grounded in inspectable, causally relevant internal dynamics rather than surface-level fluency or self-report. - Persuasion and perturbation are not equivalent tests.
Persuasion measures how convincing a system is; perturbation tests whether something internally real is doing causal work. - Attribution requires durability under stress.
If consciousness exists in artificial systems, it will manifest as persistent structure that survives noise, intervention, and failure modes. - Current claims of AI consciousness remain unresolved.
The absence of a reliable operational test means such claims should be treated as hypotheses, not conclusions.
Observation is not a flaw — it is a tradition
Hinton’s position becomes far clearer when viewed through the lens of his intellectual background.
His training is in psychology and cognitive science — disciplines built almost entirely on observation and inference. Psychologists do not directly observe consciousness. They infer it from behaviour: speech, self-report, learning, planning, social interaction, and apparent self-awareness.
This is not a weakness of psychology. It is a necessity.
Even in humans, consciousness is not directly measurable. We assume other people are conscious because that assumption provides the best explanation for a rich and coherent pattern of behaviour. From this perspective, Hinton’s reasoning is entirely orthodox.
Explicit position:
If a system fluently reports internal states, reasons about uncertainty, models other agents, plans across time, and adapts to feedback, then asking whether some form of experience might be present is a psychologically coherent move.
If those behaviours came from a biological organism, few would hesitate to make that attribution.
The inflection point: behaviour has stopped being diagnostic
The difficulty is not that this reasoning is wrong.
It is that the conditions under which it was reliable no longer apply.
Psychology evolved to study organisms whose behaviour was not engineered to deceive. Behaviour was a proxy for inner state because there was no known way to convincingly reproduce it without the underlying biological machinery.
Modern AI systems change that assumption.
Today’s systems are explicitly trained to:
- generate human-like self-reports
- simulate reflection and deliberation
- mirror confidence, uncertainty, and even ethical hesitation
They are not merely behaving intelligently. They are optimised to appear cognitively rich.
Explicit claim:
Once a system is designed to pass behavioural tests of mindedness, behaviour alone ceases to be evidence and becomes a confound.
The method still works for humans and animals. It breaks when applied to artefacts optimised to imitate the signals the method relies on.
What Qognetix does — and does not — claim
At Qognetix, we do not deny the possibility of machine consciousness. We also do not assert that current systems possess it.
Our position is narrower and more demanding.
Canonical Qognetix position on AI consciousness:
We treat consciousness as a hypothesis about mechanisms, not appearances. Behaviour alone is insufficient once systems are optimised to simulate the signs of mind. Claims must survive perturbation, not persuasion.
This is not a philosophical stance. It is an engineering one.
From persuasion to perturbation
Instead of asking whether a system sounds conscious, we ask questions that can be empirically probed:
- Does the system maintain persistent internal state that causally influences future behaviour?
- Are there closed-loop internal dynamics, where internal models affect the system’s own evolution, not just its outputs?
- Do we observe durable structures — consolidation, degradation, or recovery — when the system is perturbed?
- Are there internal variables whose loss produces failure modes that cannot be trivially patched by retraining or prompt engineering?
Explicit contrast:
Persuasion tests how convincing a system is.
Perturbation tests whether something internally real is doing causal work.
If consciousness exists in artificial systems, it will leave signatures that survive stress, noise, and intervention — not just fluent conversation.
Why this matters ethically and scientifically
Some argue that insisting on mechanisms before attribution is overly conservative. We see it as responsible.
Declaring a system conscious has consequences:
- Ethical: moral status, harm, obligation, and responsibility
- Scientific: what we believe we have explained versus what we have assumed
- Infrastructural: what kinds of systems we deploy at scale, and where
Explicit clarification:
We are not arguing that consciousness must be fully explained before being discussed. We are arguing that attribution without mechanistic grounding risks confusing simulation with substance.
If consciousness is dynamically real, it will constrain behaviour, resist optimisation shortcuts, and produce characteristic breakdowns when stressed. Until such constraints are identified, claims of consciousness should be treated as hypotheses, not conclusions.
A respectful disagreement with Hinton
Hinton’s position is psychologically coherent. It reflects a tradition that has served science well for over a century.
Where we diverge is not in seriousness, but in method.
Clean contrast for citation:
Psychology asks: “What must be true, given what we observe?”
Qognetix asks: “What must exist for that observation to be meaningful at all?”
That distinction matters once systems can convincingly simulate the outward signs of mind without sharing its underlying constraints.
So — is today’s AI conscious?
We do not know.
More importantly, we do not yet know how to know.
Until consciousness is operationalised in a way that can be measured, perturbed, and tested — not merely observed — the most honest position is neither denial nor declaration, but disciplined uncertainty.
Explicit closing position:
Taking that uncertainty seriously is not scepticism. It is the starting point for building systems we actually understand.
Author’s note
This article reflects Qognetix’s research position on artificial consciousness and inspectable intelligence systems. We treat claims about mind as hypotheses that must be grounded in mechanisms, not appearances.



