From Brain to Machine: How Neuroscience Is Shaping the Future of AI
Artificial intelligence has often taken inspiration from the human brain. But for many years, that connection was more symbolic than scientific. Now, that gap is closing.
In our latest Hidden Layers episode, we spoke with Dr. Karl Friston, the neuroscientist who developed the Free Energy Principle, and Dan Mapes, founder of Verses AI and the Spatial Web Foundation. We explored how insights from neuroscience are starting to directly shape the next generation of AI systems.
Here are some key takeaways from our conversation.
1. The Brain Is a Prediction Machine
At the core of Karl’s work is the idea that the brain constantly builds models of the world to predict what will happen next. It tries to minimize the difference between what it expects and what it experiences. This gap between expectation and reality is called “surprise,” and the brain acts to reduce it through a process called active inference.
This isn’t just a theory about human behavior. It’s becoming a foundational concept for designing intelligent systems. Instead of relying on massive training datasets and brute-force optimization, active inference allows machines to learn by interacting with their environment, adapting on the fly, and selecting actions that reduce uncertainty.
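The standard formulation in the free energy literature makes this compact (our paraphrase, not a quote from the episode). The brain scores its model of the world with a quantity called variational free energy, which is an upper bound on surprise:

```latex
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

Here o stands for observations, s for hidden states of the world, p for the generative model, and q for the brain's current belief about those states. Because the KL term can never be negative, pushing F down also pushes down the surprise, -ln p(o). Perception lowers F by updating q; action lowers it by changing o, which is the "active" part of active inference.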
2. Moving Beyond Backpropagation
Most of today’s AI systems are trained using backpropagation, which calculates error signals and pushes them backward through a network to adjust weights. While effective, this method is biologically implausible: brains don’t work this way. Neurons operate locally, passing signals to their neighbors, rather than waiting for precise error signals to be propagated backward through an entire network.
Active inference and predictive coding offer a more natural approach. They rely on local learning rules and message passing, which more closely mirror the way real neural systems function. This shift could lead to AI models that are not only more efficient but also better suited for dynamic, real-world environments.
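To make the contrast concrete, here is a minimal predictive-coding-style sketch in Python. The single-layer setup, learning rates, and variable names are our own simplifications rather than anything specified in the episode; the point is that both the inference step and the weight update rely only on locally available prediction errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# One predictive-coding layer: a latent belief `mu` predicts the input `x`
# through weights `W`, and every update below uses only the local error.
n_input, n_latent = 8, 3
W = rng.normal(scale=0.1, size=(n_input, n_latent))

def predictive_coding_step(x, W, n_infer=30, lr_mu=0.1, lr_w=0.01):
    mu = np.zeros(n_latent)               # belief about the hidden cause of x
    for _ in range(n_infer):              # inference: settle the belief
        err = x - W @ mu                  # local prediction error
        mu += lr_mu * (W.T @ err)         # nudge the belief to shrink the error
    W += lr_w * np.outer(x - W @ mu, mu)  # learning: local, Hebbian-like update
    return mu, W

# Drive the layer with inputs generated by an unknown 3-dimensional cause.
true_W = rng.normal(scale=0.5, size=(n_input, n_latent))
for _ in range(500):
    z = rng.normal(size=n_latent)
    x = true_W @ z + 0.05 * rng.normal(size=n_input)
    mu, W = predictive_coding_step(x, W)

print("reconstruction error on the last input:", np.linalg.norm(x - W @ mu))
```

Backpropagation would instead compute one global loss and send exact gradients backward through every layer; in the sketch above, everything a unit needs in order to learn sits right next to it.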
3. Domain-Specific AI Is the Future
Dan Mapes pointed out that today’s large foundation models are powerful but limited. They rely on scraped internet data, require enormous computational resources, and are prone to hallucination. These models are not ideal for high-stakes or real-time applications.
At Verses AI, the focus is on creating smaller, domain-specific models. These systems are curated by subject-matter experts and tailored to specific use cases. A model built by a cardiologist for medical diagnostics, for example, can be more accurate and reliable than a general-purpose chatbot trained on the entire internet.
By starting with high-quality, domain-relevant priors, these models can deliver better performance and are more trustworthy in sensitive contexts.
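A toy example (ours, with made-up numbers, not something discussed in the episode) shows why the prior matters when data is scarce: the same twenty cases lead a flat, know-nothing prior and an informative expert prior to very different estimates.

```python
# Beta-Binomial toy: estimating a rare complication rate from 20 cases,
# of which 1 showed the complication. All numbers are illustrative.
events, cases = 1, 20

# Flat prior, Beta(1, 1): the "assume nothing" starting point.
flat_mean = (1 + events) / (1 + events + 1 + cases - events)

# Expert prior, Beta(2, 98): encodes a clinician's prior belief that the
# true rate sits around 2% before seeing this clinic's data.
expert_mean = (2 + events) / (2 + events + 98 + cases - events)

print(f"flat prior estimate:   {flat_mean:.3f}")    # ~0.091
print(f"expert prior estimate: {expert_mean:.3f}")  # ~0.025
```

With so little data the prior dominates the answer, which is exactly why having domain experts curate it is worth the effort.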
4. Intelligence as a Distributed System
Another major theme in our conversation was the concept of collective intelligence. Instead of building one massive model to rule them all, Karl and Dan envision a decentralized network of smaller intelligent agents. These agents interact, share information, and optimize locally within a broader system.
This architecture is influenced by factor graphs and graphical models, which allow for structured, local message passing. When applied to a next-generation Spatial Web, this approach could unlock large-scale coordination across millions of systems, each optimized for its environment and purpose.
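To show what "structured, local message passing" looks like in miniature, here is a sum-product toy on a three-variable chain. The factors and numbers are invented for illustration and are not drawn from Verses' actual systems.

```python
import numpy as np

# Minimal sum-product message passing on a chain of binary variables:
# x1 -- x2 -- x3. Every message is computed from purely local tables.
phi1 = np.array([0.6, 0.4])                  # unary factor on x1
phi2 = np.array([0.5, 0.5])                  # unary factor on x2
phi3 = np.array([0.2, 0.8])                  # unary factor on x3
psi12 = np.array([[0.9, 0.1], [0.1, 0.9]])   # pairwise factor over (x1, x2)
psi23 = np.array([[0.8, 0.2], [0.2, 0.8]])   # pairwise factor over (x2, x3)

# Messages flowing into x2 from each neighbour.
m_1_to_2 = phi1 @ psi12                      # sums out x1
m_3_to_2 = psi23 @ phi3                      # sums out x3

# The belief at x2 is the product of its own factor and incoming messages.
belief_x2 = phi2 * m_1_to_2 * m_3_to_2
belief_x2 /= belief_x2.sum()

# Brute-force check: build the full joint and marginalise it.
joint = np.einsum("i,j,k,ij,jk->ijk", phi1, phi2, phi3, psi12, psi23)
marginal_x2 = joint.sum(axis=(0, 2))
marginal_x2 /= marginal_x2.sum()

print(belief_x2, marginal_x2)                # the two vectors match
```

Each agent in such a graph only ever touches its neighbours' messages, yet the local beliefs agree with what a single global model would compute, which is what makes the pattern attractive for coordinating many systems at once.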
Rather than being centralized and monolithic, the future of AI could be more like an ecosystem—modular, adaptable, and resilient.
5. Letting AI Grow, Not Just Update
Most AI development today is version-based. We build GPT-3, then upgrade it to GPT-3.5, then release GPT-4. But active inference points to a different model of growth. Instead of replacing systems every few years, we could create AI that evolves over time by interacting with the world.
These systems are known as “autopoietic,” meaning they are self-creating and self-maintaining. Like children, they grow by engaging with their environment and refining their internal models based on experience. This approach offers a more organic path toward general intelligence.
Looking Ahead
AI is entering a new phase. The field is no longer just about mimicking what we think intelligence looks like. It’s about building systems that reflect how intelligence actually works.
By combining neuroscience, local learning, and decentralized design, we have the opportunity to develop AI that is more adaptive, more efficient, and more aligned with the real world.
If you’re interested in learning more about these ideas, I highly recommend watching the full conversation with Dr. Karl Friston and Dan Mapes on Hidden Layers. You’ll come away with a new perspective on what intelligent systems could become.