Insights

At the intersection of technology strategy and scientific discovery — insights spanning neuroscience, neural network engineering, and neuromorphic computing to shape the future of digital intelligence.

Diagram showing the execution gap in AI and how runtime governance, bounded autonomy, replayability, and operational trust interact within operational intelligent systems.

The Execution Gap in AI

The execution gap in AI is the structural divide between generating intelligent decisions and governing how those decisions execute in real-world operational systems. As AI systems become more persistent, autonomous, and infrastructure-coupled, runtime governance, bounded autonomy, replayability, intervention capability, and operational trust become increasingly important infrastructure layers. This article explains why inference alone is insufficient for operational intelligence, why observability does not equal control, and why governed execution may become a defining architectural requirement for operational AI systems deployed into industrial, robotic, energy, and infrastructure environments.

Read more >
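To make the idea of governed execution concrete, here is a minimal sketch of a runtime governance layer. All names (GovernedExecutor, Bounds, the power/mode fields) are hypothetical illustrations, not an actual Qognetix API: a model proposes actions, the governor checks them against a bounded operating envelope, and every decision is appended to a replayable log that supports later audit and intervention.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Bounds:
    """Hypothetical operating envelope an action must stay within."""
    max_power_kw: float
    allowed_modes: frozenset

@dataclass
class GovernedExecutor:
    """Wraps a decision source so every proposed action is checked and logged.

    The model proposes; the governor disposes. The log is an append-only
    trace sufficient to replay the decision sequence later.
    """
    bounds: Bounds
    log: list = field(default_factory=list)

    def execute(self, action: dict) -> bool:
        # Bounded autonomy: approve only actions inside the envelope.
        ok = (action["power_kw"] <= self.bounds.max_power_kw
              and action["mode"] in self.bounds.allowed_modes)
        # Replayability: record the proposal and the verdict, approved or not.
        self.log.append({"action": action, "approved": ok})
        return ok  # the caller acts only when the governor approves

gov = GovernedExecutor(Bounds(max_power_kw=50.0,
                              allowed_modes=frozenset({"eco", "normal"})))
print(gov.execute({"power_kw": 30.0, "mode": "eco"}))    # True: within bounds
print(gov.execute({"power_kw": 80.0, "mode": "boost"}))  # False: intervention point
```

The point of the sketch is architectural, not algorithmic: the check sits between inference and actuation, so observability (the log) and control (the approval gate) are separate, explicit layers.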
Diagram comparing traditional model retraining pipelines with a persistent intelligent substrate that adapts through runtime state transitions.

Enterprise AI Architecture and the Retraining Problem Revealed by Doom-on-a-Chip

The experiment showing human neurons learning to play Doom attracted attention for its biological novelty. Its deeper significance lies elsewhere. The system adapted continuously while running, without a retraining phase. This exposes a structural difference between biological substrates and most enterprise AI architectures. Today’s AI systems typically separate training from execution, which creates dependency on retraining cycles when behaviour drifts. Persistent substrates with runtime governance offer an alternative architecture where adaptation occurs continuously under bounded constraints. For enterprise CTOs designing long-running intelligent systems, this distinction has direct implications for cost, auditability, and operational stability.

Read more >
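The contrast between retraining cycles and continuous runtime adaptation can be sketched in a few lines. This toy example is an assumption-laden illustration, not the article's actual mechanism: a single controller parameter adapts from each error signal while the system runs, but only within a governed range, so there is no separate retraining phase and no unbounded drift.

```python
class AdaptiveController:
    """Toy persistent substrate: one gain parameter adapts continuously
    at runtime, clamped to a bounded envelope instead of being frozen
    between offline retraining cycles."""

    def __init__(self, gain=1.0, lr=0.1, lo=0.5, hi=2.0):
        self.gain, self.lr = gain, lr
        self.lo, self.hi = lo, hi  # governed adaptation bounds

    def step(self, target: float, observed: float) -> float:
        # Adapt from the live error signal, no retraining pipeline...
        self.gain += self.lr * (target - observed)
        # ...but keep the state transition inside the bounded envelope.
        self.gain = max(self.lo, min(self.hi, self.gain))
        return self.gain

ctl = AdaptiveController()
for observed in [0.2, 0.5, 0.9]:
    ctl.step(target=1.0, observed=observed)
print(0.5 <= ctl.gain <= 2.0)  # True: adaptation stayed inside bounds
```

A retraining-based system would hold `gain` fixed in production and periodically rebuild it offline; the persistent alternative folds that update into each runtime step under explicit constraints.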
Illustration of multiple autonomous AI agents connected through a glowing neural substrate network, showing persistent memory, signal flow, and coordination between agents.

Agentic AI Has Outgrown Its Hardware: Why True Agents Require a New Computational Substrate

Agentic AI is shifting artificial intelligence from passive prediction to persistent, goal-directed behaviour. Systems are now expected to plan, act, adapt, and coordinate over extended periods of time. Yet most modern AI infrastructure remains fundamentally stateless, designed for short-lived inference rather than continuous cognition. This creates a growing mismatch between what agentic systems require and what current substrates provide. Memory is simulated through retrieval, identity is reconstructed through prompts, and learning is often externalised. As agents become more autonomous and long-running, these limitations become structural constraints. The next phase of AI will depend not only on better models, but on computational substrates designed to sustain intelligence over time.

Read more >
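The statelessness mismatch described above can be shown with two hypothetical agent shapes (the class names are illustrative, not a real framework): a stateless agent that must have its entire history replayed into every call, versus a persistent agent whose state accumulates internally across calls.

```python
class StatelessAgent:
    """Simulated memory: each call starts cold, so the caller must
    reconstruct identity and history in the input, prompt-style."""

    def act(self, replayed_history: list) -> int:
        # The agent knows only what was re-injected this call.
        return len(replayed_history)

class PersistentAgent:
    """Substrate-style memory: state survives between calls, like a
    long-running process rather than a one-shot inference."""

    def __init__(self):
        self.memory = []

    def act(self, observation) -> int:
        self.memory.append(observation)  # knowledge accumulates internally
        return len(self.memory)

stateless, persistent = StatelessAgent(), PersistentAgent()
history = []
for obs in ["a", "b", "c"]:
    history.append(obs)
    stateless.act(history)        # cost grows with replayed history
    persistent.act(obs)           # cost stays one observation per call
print(len(persistent.memory))     # 3: state persisted without replay
```

The sketch shows why retrieval-and-replay is a workaround rather than memory: the stateless agent's input grows with its lifetime, while the persistent agent carries its past as internal state.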
Illustration questioning whether AI has already become conscious.

Has AI Already Become Conscious?

In recent interviews, Geoffrey Hinton has suggested that today’s AI systems may already be conscious. At Qognetix, we take this claim seriously — but we argue it exposes a deeper problem. Psychology infers mind from behaviour, yet modern AI is explicitly trained to simulate the signs of consciousness, making observation alone unreliable. Our position is that consciousness should be treated as a hypothesis about mechanisms, not appearances. Persuasive language is not evidence; durability under perturbation is. Until consciousness can be operationalised and tested, claims about conscious AI remain unresolved hypotheses, not conclusions. This article outlines a rigorous, engineering-led alternative approach.

Read more >
Illustration of intelligence built as reliable infrastructure.

What Is Intelligence — and How Do We Build It as Reliable Infrastructure?

We are no longer just studying intelligence. We are manufacturing it.

After spending time with the recent work of Blaise Agüera y Arcas, which explores what intelligence is across biology, culture, and machines, a second question becomes unavoidable: how do we build intelligence responsibly once we create it deliberately?

As intelligent systems move from experiments to infrastructure, explanation alone is no longer enough. We need operational understanding, continuous measurement, and real control. Without these, capability becomes risk. This article argues that the future of intelligence depends not just on what it is, but on how seriously we take the responsibility of engineering it.

Read more >
Diagram comparing cognition-first and substrate-first approaches to synthetic intelligence.

Synthetic Intelligence: The Emerging Approaches Beyond Conventional AI

Synthetic Intelligence is positioned as a discipline rather than a single technology, emerging from the growing recognition that simply scaling today’s AI no longer delivers stable, long-term intelligent behaviour. This article maps the field into cognition-first and substrate-first approaches, asking whether intelligence lives in models that understand and reason, or in systems whose structure, memory, and dynamics allow behaviour to persist and evolve over time. It argues that the most consequential work now lies in engineering substrates where intelligence can arise, endure, and remain controllable, rather than rebranding ever-larger pattern-matching models as progress.

Read more >
Illustration of an AGI brain comparing synthetic neural systems to LLMs.

The Illusion of Thinking: Why LLMs Aren’t AGI and Synthetic Brains Might Be

LLMs have given the world an impressive illusion of thinking, but illusions are not foundations for real general intelligence. As tasks become more complex, these models reveal their limits: brittle reasoning, no true lifelong learning, and no grounded understanding of the world they describe. Brains solve exactly those problems, which is why Qognetix is betting on synthetic digital neural tissue—biologically faithful, neuromorphic architectures designed to behave more like living cortex than a scaled-up autocomplete engine. This piece argues that AGI will not emerge from ever-bigger LLMs, but from brain-like synthetic systems built for continuous, adaptive cognition.

Read more >
Illustration of personality emerging in synthetic intelligence.

Personality Isn’t Programmed. It Emerges.

Personality is often treated as something that can be added to intelligent systems after the fact, through prompts, personas, or behavioural tuning. Biology tells a different story. In living systems, individuality emerges from internal regulation. Hormonal feedback, memory gating, and state-dependent learning shape how experience is processed over time. Inspired by a veterinary insight shared by my business partner, this article explores how similar principles apply at the substrate level of computation. It examines why internal state matters, how regulation precedes behaviour, and what becomes possible when intelligent systems are allowed to develop trajectories rather than simply produce outputs.

Read more >
Illustration exposing the hype behind "smart" AI machines.

The Illusion of ‘Smart’ Machines: Exposing the AI Hype

This article pulls back the curtain on the AI hype machine and asks a simple question: does today’s “smart” AI really think, or just simulate intelligence convincingly? Drawing on Apple’s recent “illusion of thinking” research, it explains how even advanced language and reasoning models break down once real complexity and strict correctness are required. You’ll see why so many polished AI demos hide brittleness, hallucinations, and huge energy costs—and why some researchers are turning toward biologically faithful, neuromorphic approaches as a more robust path beyond the current hype.

Read more >

Ready to embrace the next frontier?

Get in touch today to find out how.