
Episode 10: How Active Inference is Bridging Neuroscience and AI with Dr. Sanjeev Namjoshi

In episode 10, Ron dives into the fascinating world of neuroscience and artificial intelligence with our special guest, Dr. Sanjeev Namjoshi, a Machine Learning Engineer at VERSES.

In this episode, we unravel the connection between neuroscience and artificial intelligence, exploring Dr. Namjoshi's upcoming books on active inference and the free energy principle. We get into the unique advantages of active inference over other cognitive frameworks for modeling human behavior and cognition.

Ron Green: Welcome to Hidden Layers, where we explore the people and the tech behind artificial intelligence. I'm your host, Ron Green, and I am very happy to be joined today by Dr. Sanjeev Namjoshi. Together, we're going to talk about topics at the intersection of neuroscience and artificial intelligence and get a sneak peek into his upcoming books on active inference and the free energy principle. Sanjeev is a machine learning engineer at VERSES AI. He holds a PhD in neuroscience from the University of Texas at Austin. His theoretical research interests lie at the intersection of neuroscience and machine learning and their application to the computational modeling and simulation of living systems. Sanjeev has extensive research experience in machine learning, computer vision, computational neuroscience, bioinformatics, and molecular biology. He's currently writing two books, to be published by MIT Press, on interrelated approaches to modeling self-organized and adaptive systems based on active inference. Welcome, Sanjeev.

Dr. Sanjeev Namjoshi: Great, thank you so much Ron for inviting me.

Ron Green: Let's kick off by talking a little bit about how you got interested in pursuing a career at the intersection of neuroscience and artificial intelligence.

Dr. Sanjeev Namjoshi: Yeah, so that's a great question. I've had a very meandering path. I actually didn't discover active inference until I was in my first postdoc, even though I was already in the neuroscience field. If we really go back to undergrad, when I was starting to get into science and thinking about what I wanted to do, I originally started in molecular biology. But I already knew that I wanted to study something where I could look at and understand some of the deeper philosophical questions around human nature and the human lived experience. I didn't know what that was yet. I ended up in molecular biology, and I was actually studying yeast genetics for three years in graduate school before I switched into neuroscience. It was at that point that I started figuring out what I really wanted to do, but I also had a deep interest in mathematics. All of these things didn't really come together until I was ready to leave academia. I was writing a grant, and it was something I didn't want to do. I didn't know exactly where I wanted to go, but I thought it was machine learning. Then I found this paper by Dr. Karl Friston, who we'll talk about soon, called "Life as We Know It." At this point the field has taken that paper and moved further along in the evolution of those ideas, but it's a beautiful and wonderfully written paper, and in that moment I saw all of my interests coming together. Then, going into machine learning, I've kind of moved through it all until now, at VERSES, where those interests have collided.

Ron Green: I am lucky enough to have had an early peek at your book, at the first volume of your book, which is coming out later this year, on active inference and the free energy principle. Let's start there. What is active inference? What is the free energy principle, and how do they apply to neuroscience and artificial intelligence?

Dr. Sanjeev Namjoshi: Great. Yeah, so starting with active inference: active inference is a field of computational neuroscience that was created by Dr. Karl Friston, a neuroscientist working at University College London in the UK. He basically took some of the ideas that already existed in computational neuroscience. It really goes back to around the 1880s with Helmholtz and this idea called perception as inference. Later, as it evolved, Richard Gregory was most famous in the 1980s for this idea of a hypothesis-testing brain. Active inference takes a lot of that lineage, plus a number of other fields that came together around this Bayesian brain hypothesis, which I'll describe in a moment. So active inference comes out of this framework; it's a way of describing human and animal behavior from the perspective of Bayesian inference. But it is not purely computational. It also includes a lot of the neurobiology and scientific research that has gone into all these fields.
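For reference, the perception-as-inference idea is usually written with Bayes' rule (a standard formulation, supplied here for readers; not quoted from the episode). The brain inverts a model of how hidden states of the world $s$ cause sensations $o$:

```latex
p(s \mid o) = \frac{p(o \mid s)\, p(s)}{p(o)}
```

The prior $p(s)$ encodes what the brain already expects, the likelihood $p(o \mid s)$ says what sensations each world state would produce, and the posterior $p(s \mid o)$ is the updated belief after seeing the data.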

Ron Green: Meaning it's not just a theoretical construct; it is actually grounded in some of our understanding of the biology of the brain.

Dr. Sanjeev Namjoshi: Absolutely. And that means there are a couple of ways to look at it. You can look at it from the biological plausibility angle, but also from an ML angle, where it doesn't really matter whether it's biologically plausible; does it do the things we want it to do, right?

Ron Green: Right, is it useful?

Dr. Sanjeev Namjoshi: Yeah, so there are different camps. Depending on where you're coming from, you can interpret it in a bit of a different way.

Ron Green: Okay. I'm really fascinated that it goes all the way back to the 1880s. I had not heard that before. I realize it's not the current conception, but I didn't think the seeds of it would go back that far. That's really fascinating. So I wanted to ask you: the idea in active inference is that the brain is constantly making predictions and updating its internal model based on how sensory inputs differ from those predictions. I'm kind of curious how that is related to the idea that mammals, humans and other types of mammals, really enjoy being surprised, the idea that subverted expectations are maybe at the heart of humor.

Dr. Sanjeev Namjoshi: Yeah, so that's a really important and interesting question. When I think about human behavior, that aspect of it is, to me, the most fascinating. We haven't really gone into all the details of the mechanism behind active inference, but as a prelude to that: when you talk about surprise minimization, surprise actually has a technical meaning in information theory, though you can also think of it in the general psychological sense. Minimizing surprise is kind of the name of the game in active inference. You want to be predicting really well, which means you're not surprised by the sensory information you're receiving. So then you think about exactly what you asked: what about humor? There are so many other examples, like magic tricks, things we engage in where our expectations are subverted. Right, I'm going to speculate here; this is piecing together some things that I think are really interesting to give a response. The first thing that comes to mind is that curiosity is a natural part of human behavior, and it's built into the core of active inference: this intrinsic motivation where you go after things just for the sake of exploring the unknown and unobserved things in the environment. And that also applies to emotion. Dopamine will spike in response to unexpected emotional events; not all emotions have this utilitarian survival value. We have things like fascination, insight, and curiosity, where we want to learn more, and that's an exciting feeling. So we have that on one side. Then, when you look at the other end of it, you start thinking about the ability to experience surprise in a comforting setting, because not all surprise is good. I mean, there's...
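As a brief aside, the technical sense of "surprise" here is the information-theoretic surprisal of an observation under the agent's generative model $m$ (standard notation, supplied for reference):

```latex
\text{surprisal}(o) = -\ln p(o \mid m)
```

Improbable sensations carry high surprisal; "minimizing surprise" means keeping your sensory stream in regions your model assigns high probability.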

Ron Green: Absolutely.

Dr. Sanjeev Namjoshi: Pranks can go wrong. There's fraud and people conning you. That's never a good surprise, right? But when we simulate in our brain, we simulate these extra possibilities that we've never seen before: what would happen if I explored this unknown thing? I think the idea of subverting expectations is a way of exploring those unknown possibilities that are out there in a way that is safe and simulated. In the same way, we have art and storytelling, these other ways of experiencing others' emotions and things we can't do ourselves. Some people are risk takers; they'll go out and want to do the crazy things. Others will read about it in a book and get the thrill out of that.

Ron Green: Right, right. I remember the first time I stumbled upon active inference, and it felt so right to me, because it explained in a real unifying way a bunch of disparate things about human beings that felt very odd. You named some of them: the idea that we enjoy being surprised and might find it humorous. Even when you get surprised, think about a jump cut in a scary movie, it's surprising in a different way, but it can still be thrilling, right, in a controlled environment. And with active inference, the classic example I always think of is a snake flicking its tongue, sensing the air around it to provide sensory input. To me, it's a really compelling argument that the reason we love to learn, why we enjoy surprises, and why exploration is sort of baked into human nature, is that it's a very positive, evolutionarily selected behavior, and our brains as prediction engines are at the heart of that.

Dr. Sanjeev Namjoshi: Absolutely. We haven't actually gotten into the free energy principle yet, but you can motivate these ideas from the perspective of evolution as well.

Ron Green: Okay, let's do that now. Let's jump into the free energy principle. I'd love that. And then tie that back into the evolutionary thread you were going with.

Dr. Sanjeev Namjoshi: Okay. Yeah. And I'll also tie it back to active inference too, so we can bring all of this together. So the free energy principle, I like to think of it as the overarching background assumptions, grounded in statistical physics, out of which the ideas of active inference come. Active inference is really focused mainly on animal and human behavior, particularly things that have brains. When you talk about the free energy principle, you're talking more broadly about the survivability of living systems. So that will key into evolution in a moment. Before I can really talk about that, though, I have to go into a little bit about the mechanisms involved in active inference. I mentioned it was Bayesian inference, and Bayesian inference in this context means that we have some prior information about what the world is like. That prior information makes sense: reconstructing the world from scratch every time would be inefficient. If you want to make quick decisions, and the world has so much structure to it and the laws of physics don't change, you can leverage that. The interesting part is that everything becomes an expectation. It's all about a prediction of what you think the world is going to be like. You have sensory data that you combine with that prediction, and you can use it to infer, or guess, the most likely state of something unknown out there that you don't yet know. That's the process of Bayesian inference. But in the exact case, it's usually computationally intractable, especially if you want to make quick decisions. There's no way to do it in a world that changes like the one we live in. So that means you need to propose some approximate way to make that inference work. Variational free energy is an objective function that's introduced as a way to make that approximation happen. When it's minimized, the minimum tells you your best guess about what is going on out there. So when we talk about the free energy principle, we're taking that idea of variational free energy minimization, and we're saying that if we minimize variational free energy, we get everything else out of it. We can get perception, learning, attention, planning, and decision making. All these things come back together under this one umbrella. So that's one aspect of the free energy principle: active inference encompasses all those things, and the mechanism is to minimize free energy. It's a form of approximate inference. Going back to what you were asking earlier, the value of the free energy principle is that it lets us ask: what do we know about living systems that are minimizing variational free energy? Under the free energy principle, we would make a kind of reductio ad absurdum argument: things that exist and are alive must be minimizing variational free energy, because if they weren't, they would be dead. That's the heuristic argument, the general version of it. Now the field has evolved into a whole new field, which is the subject of the second book, called Bayesian mechanics, which tries to formalize that concept in statistical physics: we are constituted of particles organized in a particular way, and we want to avoid thermal equilibrium. So what are we doing, as living systems, to avoid that? Well, the argument is that we're minimizing variational free energy. We're predicting what's going to come next, and that helps us survive. But at the population level, populations have their own free energy minima as well. Different groups in their eco-niches, the different ecologically adapted environments they're in, all have separate free energy minima local to those areas. So you can create a kind of evolutionary variational free energy principle as well.
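For readers who want the quantity spelled out, variational free energy is standardly defined against an approximate posterior $q(s)$ (a textbook formulation, supplied for reference):

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

Because the KL divergence is non-negative, $F$ is an upper bound on surprisal $-\ln p(o)$; minimizing it simultaneously makes $q(s)$ a good approximation of the posterior and keeps the organism unsurprised.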

Ron Green: Okay. So I'm not sure if you can explain this without using a whiteboard, but maybe take a stab at explaining: when you say minimizing free energy, what is the physical process behind that, to the extent that we understand the process so far?

Dr. Sanjeev Namjoshi: So the formal name in the field is recognition dynamics. It's a way of specifying how neural activity in the brain changes in response to information. The change in neural activity, which can be encoded in various ways with different types of signals we can measure, is a minimization of a calculated quantity that we call variational free energy. It's literally saying that the brain is calculating that quantity and trying to minimize its actual value, and the states encoded or represented by the different neurons in the brain are changing in a particular way such that variational free energy, which can be computed by populations of neurons, is actually minimized.
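A common pedagogical way to write these recognition dynamics (notation supplied for reference, not from the episode) is as a gradient flow of internal states $\mu$, the sufficient statistics of $q(s)$, on the free energy landscape:

```latex
\dot{\mu} = -\frac{\partial F}{\partial \mu}
```

Neural activity changes in whatever direction most steeply reduces $F$, settling at the brain's current best guess about the causes of its input.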

Ron Green: Okay, and just to dig into that a little more to make sure I understand: you are measuring the signal on individual neurons or collections of neurons, and is the idea that there is a delta between that signal and some sensory input, and that difference is what's being minimized?

Dr. Sanjeev Namjoshi: That's correct. I think the one important thing is to take a step back and talk about the fact that your brain, as we think of it in this field, is a generative model. That means something very specific here.

Dr. Sanjeev Namjoshi: We mean a joint distribution over observed and unobserved variables. It's a probability distribution over your beliefs about states of the world and sensory data. That means you can generate predictions about what you expect the world to be like. And that's what you actually experience. You don't experience the world itself in the sensory signal, which is crazy, because it means that what you expect the world to be like, you make your own reality. And we all have different generative models in our minds that create different realities of what we think the world is like.
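Concretely, the generative model as used here is the joint distribution over sensations and hidden states, usually factored into a likelihood and a prior (standard notation, supplied for reference):

```latex
p(o, s) = p(o \mid s)\, p(s)
```

Sampling $s \sim p(s)$ and then $o \sim p(o \mid s)$ literally generates predicted sensations, which is what makes the model "generative."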

Ron Green: On that, really quickly, just to make sure I understand that as well: are you familiar with the video where people are passing a ball around, and someone in a gorilla suit walks through the group, and you don't see it the first time you watch the video? Is that an example of what you're talking about, where you don't really perceive the world raw, you're generating a construction of the world?

Dr. Sanjeev Namjoshi: Yeah, so I think that's a good example of something like that. I think that would also fall under attention mechanisms, which involve synaptic gain: how much gain you put on signal versus noise. You're focusing your attention on certain areas of a visual stream, and you may miss other information because you are so highly focused. You can still motivate that in this language of free energy minimization as well, if you wanted to. So, getting back to your question: the field of predictive coding predates active inference, and it has its origins in signal processing, information theory, and things like video compression. The name of the game here is that the different layers of your cortex are constantly trying to predict the layer below. Each layer is saying, as a top-down prior: what do I think is going to come in next as input? The layer below does the same, and when you get to the very bottom, that's the actual sensory stream from your senses themselves. So you have a top-down guess about what you're expecting, and it meets the actual sensory information. This is encoded in populations of neurons in the cortex, so it's not just one neuron; it's a whole system of neurons doing this. The mismatch you get is a prediction error, in this case a sensory prediction error. You thought this was the sensation you were going to receive because of your prediction; now you're seeing what it actually is. That difference is encoded in other neurons, error neurons. And this speaks to something really important in information theory, when we talk about compression: what information is, in the formal sense, is what's unexplained, right? You don't care about what you already know, because it's already there in your model. What haven't you explained yet? You try to reduce that uncertainty. All that's passed up is an error signal, and it then updates all the layers going up, so your model now conforms more closely to the actual world. So the world is your training set, is one way to think of it. You're constantly updating your predictions based on the sensory experiences you're getting. There's more complication than that, because we know we don't always believe what we see. Data can be noisy, there are reasons to ignore it, people don't change their minds. Those are all layers on top. But the fundamental mechanism is essentially that comparison of differences.
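As a toy illustration of the mechanism just described, here is a minimal single-layer predictive coding loop in Python. It is a sketch under simplifying assumptions (one linear layer, a Gaussian prior, fixed generative weights), not code from the episode or from any particular library:

```python
import numpy as np

# One rung of a predictive coding hierarchy, reduced to a toy:
# a belief mu about hidden causes predicts the sensory layer through
# fixed generative weights W; the prediction error is passed back up
# and nudges mu downhill on the free energy.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))      # generative mapping: hidden causes -> sensations
o = rng.normal(size=4)           # the incoming sensory observation
mu = np.zeros(2)                 # top-down belief (starts uninformative)
lr = 0.05                        # step size of the recognition dynamics

for _ in range(500):
    prediction = W @ mu          # top-down prediction of the input
    error = o - prediction       # sensory prediction error (what's unexplained)
    # Gradient step on F = 0.5*||o - W mu||^2 + 0.5*||mu||^2:
    # the error is projected back up, and -mu acts as a simple Gaussian prior.
    mu += lr * (W.T @ error - mu)

print("residual prediction error:", np.linalg.norm(o - W @ mu))
```

Note that only the error, the unexplained part of the signal, drives the update, which is the compression intuition above: what the model already predicts carries no new information.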

Ron Green: I can't help but think about the idea that we think of young children as just being amazed by the world, and then you lose that wonder as you become an adult. Maybe that's partly because your internal model is getting pretty good, right? So it's sort of a natural progression. I wanted to ask you about the free energy principle. Traditional cognitive science sort of emphasizes the brain's role as processing information; in contrast, the free energy principle positions the brain more as a prediction generator. How does that shift our understanding of brain function? Because those are pretty radically different perspectives.

Dr. Sanjeev Namjoshi: Yes, absolutely. And to be clear, this isn't a completely accepted view in neuroscience yet. There are a lot of people studying this particular view, but it's one among many, and it is becoming more and more popular now as more data comes out and people test it in specific situations. But the potential that's really interesting here is, as we were talking about earlier, that your expectation is what drives your experience of the world. The one thing about this that is really subtle is that we are beings that go out in search of sensory evidence that confirms our own models of the world. It is a hypothesis-testing brain, but we're not like scientists; we aren't doing it in a blind, disinterested way. We're looking for things that tell us we're right, essentially. So there are all kinds of psychological implications here. But there are two ways to look at it. One we already talked about, which is that you use the sensory data to make your brain conform to the world, make it a closer and closer approximation of the real world.

Ron Green: Your internal model.

Dr. Sanjeev Namjoshi: Internal model matches the real world, your representation of the world, meaning the brain. We talk about the brain as being a model, or at least behaving as if it is, depending on where you stand philosophically. The other way, when you take actions, and we haven't really talked about the action part of this yet, which is really important here, that's active inference: you make the world conform to your brain. You expect the world to be a certain way, so you go out and change the world in a particular way so that it conforms to your expectations. A really simple example is eating food. From your brain's perspective, your body is part of the world; it's separate from the brain, and your brain has certain set points or bounds. Blood glucose needs to be in a certain range. Suddenly the brain starts getting sensory inputs from the body that say: hey, blood glucose has fallen. Now you have a prediction error. Your blood glucose is much lower than your brain expects it to be, because your brain is making a prediction: I predict it to be in this range. And you know from prior experience that if you went out into the world and ate food, if you took those actions, you would get rid of that error. So you make your prediction a reality by going out and eating the food. Your blood glucose goes back up, and your brain says: great, now we've eliminated that prediction error.

Ron Green: You close the loop.

Dr. Sanjeev Namjoshi: Close that loop. And you can imagine, for things like trauma, for example, people who are traumatized or experience PTSD will repeatedly relive these things, because they're reliving the expectation that the world is now a scary place. The sound of that car horn: you're not in a war zone anymore, you're just on the street, but you are primed to expect to see the world in this really dangerous way. So all the signals you receive may still trigger and elicit certain responses in you. There's a very wide literature in active inference on how different aspects of mental health, and other aspects of brain function, fit into this paradigm.

Ron Green: It's fascinating. On that topic: active inference is this model where decision making means selecting actions that minimize expected free energy. How does that play against traditional models of the brain as this deliberative process, where you're thinking rationally and making decisions, and free will comes into it? I don't know if you want to touch on that as well, but all of these intersections are at play here.

Dr. Sanjeev Namjoshi: Yeah, so traditionally, when we talk about rationality, it comes from the literature in economics on decision-making under uncertainty, von Neumann and Morgenstern's classic work. There we talk about it in the sense of expected utility theory, and it turns out that active inference, under certain specific assumptions, is compatible with expected utility theory. One major difference is that a lot of work in economics describes rational behavior in a monetary sense, as maximizing reward, and that is still compatible with active inference. But then there are all the other behavioral effects: when we think about the types of rewards we maximize, they're not always monetary. Sometimes it's social cohesion; there are other things you might want to maximize that active inference allows us to incorporate. And deliberation maps onto expected free energy calculations: what is free energy like in the future? You're predicting, because you don't know yet; these are unobserved states. In the actual computations, you're creating little what-if, counterfactual paths. You're saying: if I did this, this is the sensory input I would get; and if I did this after that, and so on. Then you can compare all those branches by averaging the expected free energy for each one. That's essentially the theory of what planning in the brain is, out to a really large temporal horizon. So we are doing this sort of deliberative planning. The real key comes in when you consider that the brain is technically Bayes optimal, but the problem is that we cannot be fully rational agents, because we don't have enough time. We are boundedly rational agents.
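One common way to write the quantity being compared across those counterfactual branches is the expected free energy of a policy $\pi$ (a standard decomposition from the active inference literature, supplied for reference):

```latex
G(\pi) = -\underbrace{\mathbb{E}_{q}\Big[D_{\mathrm{KL}}\big[q(s_\tau \mid o_\tau, \pi)\,\|\,q(s_\tau \mid \pi)\big]\Big]}_{\text{epistemic value (information gain)}}
\;-\; \underbrace{\mathbb{E}_{q}\big[\ln p(o_\tau \mid C)\big]}_{\text{pragmatic value (preferred outcomes)}}
```

Policies are scored by summing $G$ over future time steps $\tau$; low expected free energy favors actions that both resolve uncertainty and lead to preferred observations $C$, which is how curiosity and goal-seeking end up in a single objective.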

Ron Green: Meaning we can't think through the perfect next action constantly.

Dr. Sanjeev Namjoshi: Not unless we create a society for ourselves where that's possible, because all of our basic needs are taken care of, right? But our brains haven't changed in any evolutionary sense for 40,000 years. Imagine you're out in a forest, and you look into a field and see 25 rabbits; the next day you see five. Well, you could build some theory about random sampling and probability, but it's much safer to just say: well, there's a predator out there, I'm going to escape and go. So a lot of our heuristics and biases come from a survival instinct, and that underscores the fact that the rational part of our brain is not trying to model the world perfectly. We're modeling it insofar as it helps us survive. That's the practical nature of it.

Ron Green: Absolutely. If you think you might have seen a tiger, it's better to have been wrong about that, right, and have an overly heightened sense of fear than to guess wrong once.

Dr. Sanjeev Namjoshi: Right, because then you're dead. And if you want to be more reductive here: then you are not minimizing variational free energy, right? You would not be alive to say that you did. That's how it all ties back.

Ron Green: Okay, great. All right. Well, as a machine learning practitioner working in the AI field, principally with deep learning techniques, the idea that you're trying to make a prediction, that you're going to measure the difference between that prediction and the correct outcome, the actual reality, and then minimize that difference: well, you're speaking my language. That makes perfect sense. Within the deep learning world, within artificial intelligence, we're using backpropagation to minimize that error. Do we understand how the brain is minimizing that prediction-reality difference?

Dr. Sanjeev Namjoshi: Yeah, so there's no evidence so far that the brain uses backpropagation. What's usually proposed is called predictive coding; that's the actual term for what's going on, the recognition dynamics I referred to earlier, this gradient change happening across all the different layers of the brain. That is essentially a predictive coding type of architecture. Active inference adds action to that story, plus a couple of other bells and whistles that make it a more complicated, more universal kind of model beyond what predictive coding is saying. But the core updating rule, learning is not quite the right word, is that error minimization, where you pass the errors up the hierarchy to try to minimize prediction error. The thing that's confusing about it is that you have two-way messages: top-down messages coming down and bottom-up signals going up. You have things moving in two directions, with gradients being updated simultaneously. So you can't use classic backpropagation, for example, to do that; you end up with these state dependencies and other things that come into play.

Ron Green: Really quickly: I understand how the top-down signal might be corrected, because that's where the prediction meets the incoming sensory input. How are the signals going up being changed? That's interesting.

Dr. Sanjeev Namjoshi: Yeah, so the way it works: they're usually called autonomous states in active inference; that's the name for the types of states you're talking about here. Essentially, when you go up one layer, information derived from the sensory input is sent up to the layer above and becomes like a sensory input to that next layer. It's treated as if it were sensory input, when really it's a signal the brain is generating itself. It's like saying: on this layer of the hierarchy, using my model, I'm going to simulate what the sensory signal would be. Then you get that same kind of error, and you go up a level, because you have these layers of abstraction as you go up. The brain higher up doesn't have the sensory data anymore; it only has the next layer's representation of what that data would look like, in this hierarchical chain.

Ron Green: Okay, to stay on that point for just a second more: is part of that trying to fine-tune the sensory inputs in any way, or is it just about interpreting the sensory inputs?

Dr. Sanjeev Namjoshi: So by fine-tuning, you mean actually changing what sensory data you get?

Ron Green: Yes, on the signal coming up.

Dr. Sanjeev Namjoshi: That's more about action, which is all built into this as well. It's hard to really talk about without going into action, the active part of active inference. When you compute expected free energy, there are different ways to formulate it, but one way to look at it is that you're specifying a preferred set of states and observations, a target distribution you want to get to. There's no actual cost function per se; it's more a distribution you want to end up in, and there are trajectories, sequences of actions, you need to take to get there. So there is a sense in which you optimize the actual sensory data itself, but that's on the action side.

Ron Green: Okay. That makes perfect sense. Okay, great.

Dr. Sanjeev Namjoshi: Just to close that up: with predictive coding, you can see that backpropagation is a special case of predictive coding. So it's a more general theory in terms of gradient-based learning methods.

Ron Green: Okay. Deep learning is by far the dominant technique within artificial intelligence right now. And I'm kind of curious: is it possible that the brain uses active inference and the minimization of free energy, but that deep learning might be a better approach for developing AI or AGI, in the same way that jets don't flap their wings to fly? We were able to develop flying vehicles without being constrained by the evolutionary processes that birds and other flying animals were. So is it possible that it's different, but maybe a more streamlined way to achieve intelligence? Or are there aspects of active inference and the way the brain works that are more powerful, more generalizable?

Dr. Sanjeev Namjoshi: Yeah, that's a really great question, because I think it's always good to ask why we're using the methods we're using instead of just blindly applying the most exciting thing we can see. There are many different interpretations of this question, and the way I like to look at it is: what is the right tool for the job? What are you trying to do? If you're trying to achieve human-like intelligence, we usually call that natural intelligence, in contrast to artificial, meaning inspired by human and animal behavior. You would want to look at the brain, because it has already solved that problem. It is the most complicated system we know of, and it successfully solved it. And we don't have to take every single thing it does when we make our models. We can ask: what is it doing well? And we can take those pieces. Even when we talk about active inference, it's a relatively self-contained kind of model. It isn't a cognitive architecture where you have a speech center and so on; it's a general theory about certain types of human-like intelligent characteristics and how we can model them. So you can even bring in deep learning: there are ways, using amortized inference, to do active inference while learning the parameters with deep neural networks as universal function approximators.

Ron Green: Oh, fascinating.

Dr. Sanjeev Namjoshi: You learn the parameters. That's actually the direction the earliest attempts at scaling active inference went. So they aren't necessarily incompatible with one another. But I will say that active inference looks like it's going to be far more efficient and less compute-intensive than deep learning. So when you think about it from a sustainability perspective, I would put my bet on active inference in the long term. It's not there yet, because it has to be better; we have to prove that it's better than deep learning, right?
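To make the amortized-inference point concrete, here is a minimal sketch in Python with PyTorch. All names and sizes are illustrative assumptions, not from the episode: an encoder network learns to map observations straight to the parameters of the approximate posterior q(s | o), so inference becomes a single forward pass rather than an iterative free energy descent.

```python
import torch
import torch.nn as nn

obs_dim, state_dim = 8, 2

# Amortized inference: the encoder outputs the mean and log-variance of
# q(s | o); the decoder is the generative model p(o | s).
encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                        nn.Linear(32, 2 * state_dim))
decoder = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                        nn.Linear(32, obs_dim))

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(1000):
    o = torch.randn(64, obs_dim)                          # stand-in sensory batch
    mu, logvar = encoder(o).chunk(2, dim=-1)              # amortized q(s | o)
    s = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
    recon = decoder(s)                                    # predicted observation
    # Variational free energy = reconstruction error + KL[q(s|o) || p(s)]
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    loss = (recon - o).pow(2).sum(-1).mean() + kl
    opt.zero_grad(); loss.backward(); opt.step()
```

The loss here is a variational free energy (reconstruction error plus a KL term), which is the bridge between this style of deep learning and the active inference story; the structure is essentially that of a variational autoencoder.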

Ron Green: I've got to dig into that a little bit. Can you maybe briefly explain what makes it computationally less expensive?

Dr. Sanjeev Namjoshi: I think that's definitely an open area of research, and not completely well understood. But if I'm going to speculate, it has something to do with the very nature of specifying a generative model to begin with. It depends what you mean by deep learning, but deep neural networks, at least, are usually discriminative models; they're just figuring things out as a kind of supervised learning problem. What we're talking about with active inference is self-supervised, kind of an unsupervised sort of system, and the generative model is essentially able to represent or capture the data-generating process. That's the statistical phrasing, but what it means is: whatever is going on outside our heads is constantly generating data that we sense. If you can learn the structure of that process, you can recapitulate it. And that means you're looking at causal relationships between variables; you're learning the actual causal structure of the world. You have more context; it's not just pure pattern recognition. We are very good at figuring things out quickly because we know how cause and effect work, so we can jump past all these other chains of logic. We don't need to calculate all of these variables to come to some conclusion; we just know how cause and effect work. When I throw a ball, I know it's going to hit the ground. I don't have to calculate the exact equations of motion; I know that's just the causal way the world is set up. I think that's the biggest power and strength of it, and then, as a result of being data-efficient, you also need less compute to actually run it. In theory. There is still more work to be done in that area.
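The distinction being drawn can be stated compactly (standard textbook notation, supplied for reference):

```latex
\text{discriminative: learn } p(y \mid x)
\qquad \text{vs.} \qquad
\text{generative: learn } p(x, y) = p(x \mid y)\,p(y)
```

A discriminative model only maps inputs to outputs; a generative model captures the process that produced the data, so it can simulate new data and, with additional structure, support reasoning about how the data came to be.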

Ron Green: Okay. Yeah, my mind goes to, as we're recording this, OpenAI recently released Sora, its generative video model. One of the videos I found most fascinating showed two pirate ships fighting in a coffee cup. And what's amazing about that is the emergent capabilities of these generative video systems: it was doing ray tracing fantastically well, the lights and the reflections were amazingly plausible, and fluid dynamics, which is famously very difficult to model physically. As you mentioned with active inference, these generative models have powerful underlying representational systems, so you're not having to learn these things separately; they're kind of baked in. And my mind goes to the Sora model because none of those capabilities were explicitly delineated.

Dr. Sanjeev Namjoshi: Programmed, right?

Ron Green: Yeah, they just fell out emergently from having to learn to generate realistic video across all these different environments and circumstances. Are there some common misconceptions out there about active inference and the free energy principle, whether in neuroscience or within the AI field?

Dr. Sanjeev Namjoshi: Yeah, I would say there are a couple that come to mind, and the first two are kind of two sides of the same coin. One is that it's extremely complicated, too difficult and hard to understand. Related to that is the mystique that has sort of grown up around it. It certainly is a fascinating field, and it does combine so many different areas, and that has been both its strength and its weakness. If you want to explain a complex idea, the more fields you pack in there, the harder it's going to be. It's also amazing, because that's what has attracted so much attention: its explanatory power is in how much it unifies. But every time you propose anything grand and unifying, you're naturally going to attract people who want to critique it. That's the process of science, right? So the first thing I'd point at is that at its core, when you strip away the biology, when you strip away all the other fields that go into it, you're left with a really quite simple and elegant machine learning type of mechanism that any machine learning engineer would be familiar with. In some sense, they'd be dealing with time series data or partially observable Markov decision processes. It's all related to things that are very well known and understood. So that's the first misconception: it's not as complicated as it appears. It's still complicated, because these things are, but...

Ron Green: It still is complicated. I remember reading the Wikipedia page the first time and, yeah, it was complicated. To stick on that point, it's such a powerful predictive model. To me, it's very akin to evolutionary theory: none of modern biology makes sense without that foundational theory of natural selection, and it explains so much, but it took quite a bit of time to be generally accepted. So as we kind of wrap up here, I want to ask you about your book. Do you know when the first volume is going to come out yet?

Dr. Sanjeev Namjoshi: Well, I know that I have to deliver the first draft in June, and there will be some revisions. So I would say, maybe optimistically, by the early part of next year. Not quite sure yet; it'll depend on other work that needs to be done.

Ron Green: And that's the first volume, and then you'll immediately jump on the second volume.

Dr. Sanjeev Namjoshi: Right. So the first volume is on active inference itself, which is what we were mainly talking about here. We haven't really even touched on Bayesian mechanics, which is the second volume. That field, just to be clear, is a lot more theoretical and is still under development. Whereas with active inference the core ideas are probably fairly set, and there is just going to be a lot more work done to expand its scope and scale it, Bayesian mechanics is very theoretical. If you're a person who loves theoretical research, it's very exciting: it's all based in this idea of the physics of living systems, and it incorporates a lot from non-equilibrium thermodynamics and other fields. It's really cool, but it is a separate book that will be written in the future, and it will definitely have different editions come out as the field evolves.

Ron Green: Oh, I can't wait. As I said earlier, you were kind enough to share some early copies with me, and it's going to be an unbelievable book. OK, well, we love to wrap up here by asking the same question: if you could have AI automate something in your daily life, what would you pick?

Dr. Sanjeev Namjoshi: Yeah, you know, one thing that constantly gets in the way of me trying to just live my life, spend time with the people I love, and write more is that things are constantly breaking down. Suddenly the washing machine breaks down, suddenly you have an appointment you have to make; there's always something going on. And I don't like context switching. I like to focus on a task for three hours without being disturbed. So I guess I'm looking for something like a personal-assistant kind of AI that could set up all my appointments for me, look at my calendar, and, if I have a contractor coming over, take care of all of that, so I can spend time with people and do a lot of writing.

Ron Green: I would love one of those, too. I cannot tell you how much I appreciated this. It's great seeing you again, and thank you so much for coming on, Sanjeev.

Dr. Sanjeev Namjoshi: Absolutely. Thank you for having me.