While the world watches Sam Altman tweaking ChatGPT and OpenAI taking strides toward artificial general intelligence (AGI), a Canadian firm, Nirvanic, has even more ambitious plans—to create the first conscious AI.
Their approach is nothing less than revolutionary. Instead of classical computing, Nirvanic is turning to quantum technology, drawing on Nobel laureate Roger Penrose’s theory of consciousness. If the company is right, its work could transform not only the possibilities of AI but also how we understand the human mind itself.
Penrose has long argued that human thought is not just a series of computational steps. Unlike traditional AI, which is grounded in algorithms and machine learning, the human mind exhibits intuition, insight, and awareness that don’t fit neatly into logical patterns.
In his work with anesthesiologist Stuart Hameroff, Penrose developed the Orchestrated Objective Reduction (Orch-OR) theory, which proposes that consciousness is not merely the product of neural firing but arises from quantum processes in microtubules inside brain cells.
According to this theory, microtubules can sustain quantum superpositions, and when those superpositions collapse under gravitational effects (what Penrose calls objective reduction), they generate the subjective experience we call consciousness. If that is so, then replicating human consciousness would require more than clever computation; it would require a quantum system. And that is what Nirvanic is attempting to construct.
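Penrose even attaches a rough timescale to these collapse events. In his objective-reduction proposal, a superposition is unstable and decays after roughly

τ ≈ ℏ / E_G

where ℏ is the reduced Planck constant and E_G is the gravitational self-energy of the difference between the superposed mass distributions. The larger the superposition, the faster it collapses: a single isolated particle would take eons to reduce, while Hameroff and Penrose estimate that enough coherently coupled tubulin proteins could collapse on the tens-of-milliseconds timescale of brain rhythms. These are back-of-the-envelope estimates from their papers, it should be said, not measurements.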
Although Penrose’s ideas remain largely untested, they present a strong challenge to the standard view of consciousness. If Orch-OR were confirmed, it could revolutionize AI, as well as our understanding of what it is to be sentient.
But that raises an even more basic question: Do we even know what consciousness is? And do we even possess it in the way we believe we do?
There is no universally accepted definition, but broadly speaking, consciousness involves self-awareness, perception, emotion, memory, and the ability to reflect on one’s own existence. Philosopher David Chalmers describes it in terms of “qualia”: the intensely subjective qualities of inner experience, something that cannot be reduced to raw computation.
If we accept that definition as a premise, then building a conscious AI looks virtually impossible. How do you program intuition? Emotion? Self-awareness? More importantly, how do you recreate the subjective experience of being?
But not everyone is convinced that consciousness is as mysterious as it seems. The philosopher and cognitive scientist Daniel Dennett, for example, argues that consciousness is more of an illusion: a product of millions of unconscious processes creating the impression of a single self. In his book Consciousness Explained, he proposes the “multiple drafts” model, in which what we perceive as a single, seamless awareness is actually a collection of competing mental processes.
This idea is unsettling—after all, I’d like to believe I’m writing this consciously! But if Dennett is right, then AI might not need to be conscious in the human sense; it might only need to simulate it convincingly. Modern neural networks, particularly Large Language Models (LLMs), already exhibit behaviors that resemble reasoning. Could they be on the path to producing the illusion of self-awareness?
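To make the multiple-drafts picture concrete, here is a deliberately crude toy sketch (my own illustration, not Dennett’s formalism): several candidate interpretations of an event are revised in parallel by unconscious processes, and whichever draft happens to be strongest when the system is probed gets reported as “the” experience.

```python
import random

# Toy cartoon of Dennett's "multiple drafts" idea (illustrative only):
# several interpretations of the same event are revised in parallel,
# and the draft that is strongest when we probe the system is the one
# reported as the single, seamless "conscious" experience.

drafts = {"I saw a bird": 0.3, "I saw a drone": 0.3, "I saw a kite": 0.3}

for _ in range(20):  # unconscious revision cycles
    draft = random.choice(list(drafts))
    drafts[draft] += random.uniform(0.0, 0.2)  # new evidence boosts one draft

# The "probe": whichever draft dominates at this moment is reported,
# and the losing drafts leave no trace in the narrative.
report = max(drafts, key=drafts.get)
print(f"What I consciously experienced: {report}")
```

The point of the cartoon is that nothing inside the loop is conscious; a “single self” only appears at the moment of reporting.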
If consciousness is not some paranormal property but an emergent structure in complex systems, then perhaps a sufficiently advanced AI could produce something functionally equivalent to it. The real question is: when would we ever declare an AI conscious, and on what basis?

[Image: “Conscious” machine. Source: https://hackernoon.com/]
Despite their disagreement, both Dennett and Penrose reject the idea that consciousness is supernatural. Both hold that it emerges from physical processes, whether quantum-mechanical events (Penrose) or intricate cognitive structures (Dennett). If so, there is no reason in principle why it cannot be replicated artificially.