
That bloke

Is Karl Friston that bloke? You know who I mean. That really clever bloke, master of some academic field. He’s used to understanding a few important things far better than the rest of us. But when people in the pub start discussing philosophy of mind, he feels wrong-footed in this ill-defined territory.

“You see,” he says, “the philosophers make such a meal of this, with all their vague, mystical talk of intentions and experiences. But I’m a simple man, and I can’t help looking at it as a scientist. To me it just seems obvious that it’s basically nothing more than a straightforward matter of polyvalent symmetry relations between vectors in high-order Minkowski topologies, isn’t it?”

Aha! Now you have to meet him on his own turf and defeat him there: otherwise you can’t prove he hasn’t solved the problem of consciousness!

I’m sure that isn’t really Friston at all; but his piece in Aeon, perhaps due to its unavoidable brevity, seems to invoke some relatively esoteric apparatus while ultimately saying things that turn out to be either madly optimistic or relatively banal. Let’s quickly canter through it. Friston starts by complaining of philosophers and cognitive scientists who insist that the mind is a thing. “As a physicist and psychiatrist,” he says, “I find it difficult to engage with conversations about consciousness.” In physics, he says, it’s dangerous to assume that things ‘exist’; the real question is what processes give rise to the notion that something exists. (An unexpected view: physics, then, seeks to explain the notions in our minds, not external reality? Poor old Isaac Newton wasn’t really doing proper physics at all.) Friston, instead, wants to brush all that ‘thing’ nonsense aside and argue that consciousness, in reality, is a natural process.

I’ve spent some time trying to think of anyone who would deny that, and really I’ve come up empty. Panpsychists probably think that at the most fundamental level consciousness can be a very simple form of awareness, too simple to go through complex changes: but even they, I think, would not deny that human consciousness is a natural process. Perhaps all Friston means is that he doesn’t want to spend any time on definitions of consciousness; that territory is certainly a treacherous swamp where people get lost, although setting out to understand something you haven’t defined can be kinda bold, too.

To illustrate the idea of consciousness as a process, Friston (inexplicably, to me anyway: this is one of the places where I feel something might have got lost in editing) suggests we swap the word and talk about whether evolution is a process. Scientifically, he says, we know that evolution isn’t for anything – it’s just a process that happens. Since consciousness is a product of evolution, it isn’t for anything either. I don’t know about that; it’s true it can’t borrow its purpose from evolution if evolution doesn’t have one; the thing is, putting aside all the difficult issues of ultimate purpose and the nature of teleology, there is a well-established approach within evolutionary theory of asking what, say, horns or eyes are for (defeating rivals, seeing food, etc). This is just a handy way to ask about the survival value of particular adaptations. So within evolution things like consciousness can be for something in a relatively clear sense without evolution itself having to be for anything. Actually Friston understands this perfectly well; he immediately goes on to speak approvingly of Dennett’s free-floating rationales, just the kind of intra-evolutionary purpose I mean. (He says Dennett has spent his career trying to understand the origin of the mind – what, is he one of those pesky guys who treat the mind as a thing?)

Anyway, now we’re getting nearer to the real point: inference. Inference, claims Friston, is quite close to a theory of everything (maybe, but so is ‘stuff happens’). First, though, it seems we need to talk about complex systems, and we’re going to do it by talking about their state spaces. I wish we weren’t. Really, this business of state spaces is like a fashion – or a disease – sweeping through certain parts of the intellectual community. Is there an emerging belief, comparable to the doctrine of the Universality of Computation, that everything must be usefully capable of analysis by state space? I might be completely up the creek, but it seems to me it’s all too easy to use hypothetical state spaces to give yourself a false assurance that something is tractable when it really isn’t. Of course properly defined state spaces are perfectly legitimate in certain cases. Friston mentions quantum systems; yes, but elementary particles have a relatively small number of properties that provide a manageable number of regular axes from which our space can be constructed. At least you’ve got to be able to say what variables you’re mapping in your space, haven’t you? Here it seems Friston wants to talk about, for example, the space of electrochemical states of the brain. I mean, he’s a professor of neurology, so he knows whereof he speaks – but what on earth are the unimaginable axes that define that one, using cleanly separated independent variables? He’s very hand-wavy about this; he half-suggests we might be dealing with a state space detailing the position of every particle, but surely that’s radically below the level of description we need even for neurology, never mind inferences about the world. It’s highly likely that mental states are uniquely resistant to simple analysis; in the state space of human thoughts every one of the infinite possible thoughts may be adjacent to every other along one of the infinite number of salient axes.
I doubt whether God could construct that state space meaningfully, not even if we gave Him a pencil and paper.
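
Just to make the contrast concrete, here is a minimal sketch (my own toy, nothing from Friston’s article) of what a well-defined state space asks of you: a finite list of named, independent axes, each with a known range. For a simple particle the list is easy to write down; for ‘the electrochemical state of the brain’ the exercise stalls at the first entry.

```python
# A state space is only well-defined once you can enumerate its axes:
# each axis is an independent variable with a known range of values.
# (Toy illustration only; the entries are schematic, not real physics.)

spin_half_particle = {
    "spin_z": [-0.5, +0.5],           # discrete axis: two eigenvalues
    "position_x": "any real number",  # one continuous coordinate
    "momentum_x": "any real number",
}

# Easy questions, easy answers: how many axes, and what does each measure?
assert len(spin_half_particle) == 3

# The analogous dictionary for the brain is the hard part: what are the
# cleanly separated, independent variables? We can't even start the list.
brain_electrochemical_state = {}
assert len(brain_electrochemical_state) == 0
```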

Anyway, Friston wants us to consider a Lyapunov function applied to some suitable state space of – I think – a complex system such as one of us, i.e. a living organism. He describes the function in general terms and how it governs movement through the space, although without too much detail. In fact after a bit of a whistle-stop tour of attractors and homeostasis all he seems to want from the discussion is the point that adaptive behaviour can be read as embodying inferences about the world the organism inhabits. We could get there just by talking about hibernation or bees building hives, so the unkind suspicion crossed my mind that he brings Lyapunov into it partly in order to have a scary dead Russian on the team. I’m sure that’s unfair, and most likely there is illuminating detail about the Lyapunov function that didn’t survive into this account because of limitations on space (or probable reader comprehension). In any case it seems that all we really need to take away is that this function in complex systems like us can be interpreted as making inferences about the future, or minimising ‘surprise’.
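
For what it’s worth, here is a minimal sketch of the general idea, on my own toy assumptions rather than anything in the article: a one-variable ‘organism’ whose dynamics pull a temperature-like state towards a set point. The squared distance from the set point serves as a Lyapunov function – it never increases along the trajectory – and it is exactly this quantity that gets read, metaphorically, as the ‘surprise’ the system is minimising.

```python
# Toy homeostatic system (my sketch, not Friston's formalism).
# The state x drifts toward a preferred set point; V(x) = (x - setpoint)^2
# is a Lyapunov function for these dynamics: it decreases at every step,
# which is what licenses the metaphor of 'minimising surprise'.

def step(x, setpoint=37.0, rate=0.2):
    """One homeostatic update: move a fraction of the way to the set point."""
    return x + rate * (setpoint - x)

def surprise(x, setpoint=37.0):
    """Lyapunov candidate: squared distance from the preferred state."""
    return (x - setpoint) ** 2

x = 30.0
trajectory = []
for _ in range(10):
    trajectory.append(surprise(x))
    x = step(x)

# The Lyapunov property: V is strictly decreasing along the trajectory.
assert all(a > b for a, b in zip(trajectory, trajectory[1:]))
```

Note that nothing in this sketch infers anything in the ordinary sense; the ‘inference’ is entirely in the eye of the interpreter.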

It’s important to be clear here that we’re not actually talking literally about actual surprise or about actual inference, the process of some conscious entity inferring something. We’re using the word metaphorically to talk about the free-floating explanations, in a Dennettian sense, that complex, self-maintaining systems implicitly display. By acting in ways that keep them going, such systems can sort of be said to embody guesses about the future. Friston says these sorts of behaviour in complex systems ‘effectively’ make inferences about the world; this is one of those cases where we should remember that ‘effectively’ should strictly be read as ‘don’t’. It’s absolutely OK to talk in these metaphorical terms – I’d go so far as to say it’s almost unavoidable – but to talk of metaphorical inference in an account of consciousness, where we’re trying to explain the real thing, raises the obvious risk of losing track and kidding ourselves that we’ve solved the problem of cognition when all we’ve done is invoke a metaphor. So let’s call this kind of metaphorical inference ‘minference’. If we want to claim later that inference is minference, or that we can build inference out of minference, well and good: but we’ll be sure of noticing what we’re doing.

So complex, self-sustaining systems like us do minference; but that tells us nothing about consciousness because it’s true of all such systems. It’s true of plants and bacteria just as much as it’s true of us, and it’s even true of some non-living systems. Of course, says Friston, but for similar reasons consciousness must also be a process of inference (minference). It’s just the (m)inference done by the brain. That’s fine, but it just tells us consciousness must tend to produce behaviour that helps us stay alive, without telling us anything at all about the distinctive nature of the process; digestion is also a process that does minference, isn’t it? But we don’t usually attribute consciousness to the gut. Brain minference is entirely different from conscious, explicit inference.

Friston does realise that he needs to explain the difference between conscious and non-conscious minferring creatures, but I don’t think he’s clear enough that the earlier talk of (m)inference is no real use to him. He suggests that in order to infer (really infer) the consequences of its actions, a creature needs an internal model. This seems quite problematic to me, though I’m led to believe he has a more extensive argument for it which doesn’t appear here. While we may use models for some purposes, I honestly don’t see that inference requires one (in fact, building a model and then making your inferences about that would be asking for trouble). I plan to go and catch a train in a minute, having inferred that there will be one at the station; does that mean I must have a small train set or a miniature timetable simulated in my brain? Nope. Friston wants to say that the difference between conscious and unconscious behaviour is that the former derives from a ‘thick’ model of time, which here seems to mean no more than one that takes account of a relatively extended period. The idea that the duration is crucial makes no great sense to me: the behaviour patterns of ants reflect hundreds of thousands or even millions of years of minference: my conscious decisions may be the work of a moment; but I think in the end what Friston means to say is that conscious thought detaches us from the immediate moment by modelling the world and so allows us to entertain plans motivated by long-term considerations. That’s fine, but it has nothing much to do with the state spaces, attractors and Lyapunov functions discussed earlier; it looks as if we can junk all that and just start afresh with the claim about consciousness being a matter of a model that helps us plan. And once that idea is shorn of all the earlier apparatus it becomes clear that it’s not an especially new insight. 
In fact, it’s pretty much the sort of thing a lot of those pesky mind-as-thing fellows have been saying all along.

Alas, it’s worse than that. Because of the confusion between inference and minference Friston seems to be saddled with the idea that actual consciousness is about minimising actual surprise. Is the animating purpose of human thought to avoid surprise? Do our explicit mental processes seek above all to attain a steady equilibrium, always thinking about the same few things in a tight circle and getting away from new ideas as quickly as possible? It doesn’t seem plausible to me.
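
The worry can be made concrete with a toy example (my own sketch, with invented numbers, not anything from the article): an agent that scores actions purely by how unsurprising their outcomes are will always prefer the most predictable option available.

```python
# Toy sketch of the worry: an agent choosing purely to minimise surprise
# prefers the most predictable action on offer: here, total stasis.
# (Invented illustration; 'surprise' is crudely proxied by outcome variance.)
import random

random.seed(0)

def surprise(samples):
    """Crude surprise proxy: variance of the observed outcomes."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

actions = {
    "sit_in_dark_room": lambda: 0.0,            # perfectly predictable
    "explore": lambda: random.gauss(0.0, 1.0),  # novel, hence surprising
}

scores = {name: surprise([act() for _ in range(100)])
          for name, act in actions.items()}

chosen = min(scores, key=scores.get)
assert chosen == "sit_in_dark_room"  # pure surprise-minimisation picks stasis
```

No doubt the full theory has more sophisticated answers, but the bare slogan does seem to point towards exactly the tight circle described above.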

Friston concludes with these two sentences.

There’s no real reason for minds to exist; they appear to do so simply because existence itself is the end-point of a process of reasoning. Consciousness, I’d contend, is nothing grander than inference about my future.

Frankly, I don’t understand the first one; the second is fine, but I’m left with the nagging feeling that I missed something more exciting along the way.


[The picture is Lyapunov, by the way, not Karl Friston]


Posted in Conscious Entities.
