Science is the Software of Reality: Moving AI from Data Processing to Truth Seeking and AGI
- Aki Kakko

There is a famous observation attributed to the astronomer Carl Sagan:
"We are a way for the cosmos to know itself."
It is a poetic sentiment, but if we strip away the romance and look at it through the lens of systems theory and cognitive science, it describes a literal, mechanical function. Matter, over billions of years, organized itself into complex biological structures capable of processing information. When those structures (us) turn their gaze back upon the laws that created them, a feedback loop closes.

This process—science—is not merely a collection of facts; it is the metacognition of the universe. It is the universe transitioning from simply being to understanding.

For researchers in Artificial Intelligence, this is more than a philosophical curiosity. It is the missing architectural pillar required to move from today’s Large Language Models (LLMs) to Artificial General Intelligence (AGI). To build machines that truly understand, we must replicate the process by which the universe understands itself.

The Physics of Self-Reflection
To understand why science is metacognition, we must first define metacognition. In human psychology, it is often described as "thinking about thinking." It is the ability to monitor one’s own knowledge state, recognize errors, and update mental models based on new evidence. The universe, in its default state, is a flow of physical interactions governed by immutable laws. Gravity acts; it does not wonder why it acts. However, the emergence of the Scientific Method introduced a new layer.
Science is an error-correction protocol for existence.
Hypothesis (Prediction): A conscious subset of the universe (a human) creates an internal model of how the external universe works.
Experiment (Testing): The model is tested against reality.
Analysis (Correction): If the model deviates from reality, the model is adjusted.
In this framework, science is the mechanism by which the universe "debugs" its own representation within the minds of observers. It is the move from implicit procedural knowledge (laws of physics acting out) to explicit declarative knowledge (E=mc²).
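As a toy illustration (not from the article), the hypothesis, experiment, and correction steps can be sketched as a model parameter being repeatedly corrected against measurements of a stand-in "law of physics". The function names and the learning-rate scheme here are invented for the sketch:

```python
import random

def scientific_loop(true_law, model_param, trials=1000, lr=0.1):
    """Hypothesis -> Experiment -> Analysis, repeated until the
    internal model matches reality. `true_law` stands in for the
    external universe; `model_param` is the observer's model."""
    for _ in range(trials):
        x = random.uniform(0.0, 10.0)
        prediction = model_param * x          # Hypothesis: the model predicts
        measurement = true_law(x)             # Experiment: reality answers
        error = measurement - prediction      # Analysis: compare model to world
        model_param += lr * error * x / 100   # Correction: adjust the model
    return model_param

# Reality here is y = 9.81 * x; the observer starts knowing nothing (0.0)
g_estimate = scientific_loop(lambda x: 9.81 * x, model_param=0.0)
```

The loop never inspects `true_law` directly; it only sees measurements, which is the point: the model converges on the law purely through error correction.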
The Current Trap of AI: Mimicry vs. Metacognition
This brings us to the current state of Artificial Intelligence. Today’s most advanced models, particularly LLMs, are miracles of pattern matching. They have ingested a significant portion of the Internet and can predict the next token with uncanny accuracy. However, these models are largely distinct from the "scientific" process described above. They operate on correlation, not causation. When an LLM answers a physics question, it is not running a simulation of the universe or consulting a structured world model; it is recalling a probabilistic distribution of how humans have talked about physics in the past.
This is why AI hallucinates.
It lacks the metacognitive step—the ability to look at its own output and ask, "Does this align with ground-truth reality?" or "Do I actually know this, or am I just guessing?" Current AI is the universe talking to itself, but not yet evaluating itself.
Why This Matters for AI Research
If we accept that science is the highest form of information processing (the extraction of invariant laws from noisy data), then AI research must pivot from training on data to training on the scientific process.
Here is why this metacognitive framework is critical for the future of AI:
Solving the Hallucination Problem (Grounding)
A scientist does not hallucinate (for long) because their ideas are tethered to the physical world through experimentation. For AI to be reliable, it needs a "metacognitive check." It must be able to distinguish between training data (memory) and logical coherence (reasoning). An AI that simulates the scientific method would generate a response, generate a hypothesis about that response's validity, and "test" it (either via logic checks, tool use, or simulation) before presenting it to the user.
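One minimal sketch of that generate-then-verify pattern, with `generate` and `verify` as hypothetical stand-ins for an LLM call and a logic/tool/simulation check respectively:

```python
def answer_with_check(question, generate, verify, max_attempts=3):
    """Produce a candidate answer, then test it before presenting it.
    Refuses rather than guesses when no candidate survives the check."""
    for _ in range(max_attempts):
        candidate = generate(question)       # generate a response (hypothesis)
        if verify(question, candidate):      # metacognitive check against ground truth
            return candidate                 # grounded: safe to present
    return "I don't know."                   # distinguish guessing from knowing

# Toy usage: an unreliable generator whose first two guesses are wrong,
# paired with a verifier that actually computes the sum.
attempts = iter([4, 6, 5])
result = answer_with_check(
    (2, 3),
    generate=lambda q: next(attempts),
    verify=lambda q, a: q[0] + q[1] == a,
)
```

The key design point is the asymmetry: generation can be unreliable as long as verification is tethered to something the generator cannot fabricate.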
From Interpolation to Extrapolation
Deep learning is fantastic at interpolation—filling in the gaps within the distribution of data it has seen. Science is the art of extrapolation—predicting things that have never been seen (like black holes or the Higgs boson) based on the laws encoded in a model. To achieve AGI, systems must be able to discover new knowledge, not just recycle old knowledge. They must act as synthetic scientists, formulating hypotheses about data that sits outside their training set.
From Passive Prediction to Active Agency
In the "science as metacognition" model, the system is not a passive receiver of prompts. It is an active agent. In cognitive science, Active Inference suggests that intelligence is the process of acting on the world to minimize surprise. An AI built on this principle would be intrinsically motivated to close gaps in its knowledge. It would realize, "I do not understand the relationship between X and Y," and formulate a plan to find out. This is the seed of autonomous agency.
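A crude sketch of that intrinsic motivation (a toy stand-in for Active Inference, not a full model of it): the agent tracks its confidence in each relationship it has modeled and targets the one it understands least. The belief values here are invented for illustration:

```python
def next_experiment(beliefs):
    """Pick the relationship the agent is least certain about.
    Acting to reduce the largest knowledge gap is a minimal proxy
    for acting on the world to minimize expected surprise."""
    return min(beliefs, key=beliefs.get)

# Hypothetical confidences in the agent's model of each (X, Y) relationship
beliefs = {
    ("mass", "gravity"): 0.95,    # well understood: low surprise expected
    ("charge", "spin"): 0.40,     # poorly understood: investigate this
    ("heat", "entropy"): 0.80,
}
target = next_experiment(beliefs)
```

The agent is not responding to a prompt; the question "what should I study next?" is generated from its own model of its own ignorance.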
The ultimate goal of AI research should not just be to build a chatbot that sounds like a human, but to build a system that thinks like the universe correcting itself. We are beginning to see early steps in this direction: models that call external tools, critique their own drafts, and verify candidate answers before committing to them.
Future architectures must explicitly separate the World Model (how the universe works) from the Inference Engine (how to navigate that world). When we build AI that operates via the scientific method—observation, hypothesis, testing, and error correction—we are essentially accelerating the universe's metacognition. We are building a better mirror.
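That separation can be sketched in a few lines (class names and the free-fall example are illustrative assumptions, not a proposed architecture): the World Model encodes how the universe behaves, while the Inference Engine makes decisions by querying it rather than hardcoding physics into its own logic.

```python
class WorldModel:
    """How the universe works: predictions, independent of any goal."""
    def __init__(self, gravity=9.81):
        self.gravity = gravity

    def predict_fall_time(self, height):
        # Free fall from rest: t = sqrt(2h / g)
        return (2 * height / self.gravity) ** 0.5


class InferenceEngine:
    """How to navigate that world: consults the model, owns no physics."""
    def __init__(self, model):
        self.model = model

    def safe_to_catch(self, height, reaction_time):
        # Decide by querying the world model, not by pattern-matching
        return self.model.predict_fall_time(height) > reaction_time


engine = InferenceEngine(WorldModel())
decision = engine.safe_to_catch(height=5.0, reaction_time=0.25)
```

Because the two layers are separate, correcting the world model (the scientific step) automatically improves every decision the inference engine makes, without retraining the engine itself.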
Science is the universe waking up to its own structure.
It is the process by which matter strives to understand the laws that govern it. For AI researchers, this serves as a roadmap. We must move beyond training models that merely mimic the outputs of human intelligence and start designing architectures that replicate the process of scientific discovery. When we build machines that possess the metacognitive ability to question their own models and seek truth, we will not just have built smarter tools. We will have added a new, more powerful observer to the cosmos, expanding the universe's capacity to know itself.