What about simple mind-likeness for a change?

I recently received a link to an article by David Deutsch about why Artificial General Intelligence (allegedly) is going nowhere even though, by all indications, it should be possible. One thing about that article and the discussion it generated is fairly uncontroversial: the problem of what intelligence means is so ill-specified as to make any answer pointless. I don't think AGI has been standing still, but for every new and genuinely cool development, the usual response is that it's just a small task within a larger, unknown whole. If we have a clearly defined problem (like, say, making suggestions for other items of potential interest on amazon.com), then we can make progress. If the problem is ill-defined, one can always shift the goal-posts.

This results in discussions about consciousness, intelligence and the meaning of it all that are actually rather sterile: merely occasions for the posters to sound smart, at least in their own eyes. I can certainly understand that temptation, as I'm falling into it right now, but I'd venture that insisting on 'intelligence' and 'consciousness' is a bit premature. There may be a different notion which, while not as impressive, is more accessible, easier to define, and ultimately more fruitful.


I'll call it 'mind-likeness' for now. I'm sure Dennett has a better name for it (his 'intentional stance' is closely related) but I've yet to read the relevant text in the original English. A first, very fuzzy attempt at a definition is:

the quality of something to be such that it is useful to regard it as if it had a mind

Notice that the definition so far is really vague, but I'll make it more precise in a moment. What may be noticed already is that it's not concerned with ontology (whether the mind is real) but with practicality. Full disclosure: I subscribe to the kind of ontology that considers minds, consciousness, and all manner of emergent phenomena as 'real'. But even if one subscribes to the allegedly Buddhist interpretation that mind and self are 'illusions', what really matters is that they're useful ones.

I've heard it said that the mind is an illusion because it's the result of many small, simple things interacting with each other. It's not a 'substance' of some kind that exists by itself and that you can take from one place and move to another. That's fine. But you don't court someone, nor respond to courtship, nor attempt to persuade, nor deceive, nor educate, nor socialize, based on calculations made on the inner state of all the particles that make up that person.

What people do instead, when dealing with other people, is ascribe intentions, desires, fears, attitudes, states of knowing or not knowing something, of believing or not believing something else, to those other people. One also assumes that some procedure exists (for example rationality, but certainly not limited to it) to convert perceptions into updates to those internal states, and that those internal states are then translated into behaviour.
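
As a toy illustration of that loop (entirely my own sketch, with invented state names, not anything from the literature), here is a hypothetical agent whose internal state is just a handful of beliefs and desires; perceptions update the beliefs, and behaviour is read off from beliefs plus desires:

```python
# A minimal, hypothetical sketch of the perceive -> update -> act loop
# described above. The propositions and rules are invented for illustration.

class FolkAgent:
    def __init__(self):
        self.beliefs = {"it_is_raining": False}   # states of knowing/believing
        self.desires = {"stay_dry": True}         # intentions/desires

    def perceive(self, observation):
        # Some procedure (here, a trivial one) converts perceptions
        # into updates to the internal state.
        if observation == "drops on the window":
            self.beliefs["it_is_raining"] = True

    def act(self):
        # Internal states are translated into behaviour.
        if self.beliefs["it_is_raining"] and self.desires["stay_dry"]:
            return "take an umbrella"
        return "go out as usual"

agent = FolkAgent()
agent.perceive("drops on the window")
print(agent.act())  # -> "take an umbrella"
```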

And this folk psychology that we all do, all the time, works well enough to keep society going.

So one can now see how the original definition of mind-likeness can be made more precise. 'Useful' refers to being able to understand, predict and influence behaviour. 'As if it had a mind' refers to constructing a web of assumed intentions, knowledge and so on for the thing one supposes has a mind. The philosophical name for this is 'theory of mind', and it appears to me that the literature about it would allow formulating a modal logic that would make 'as if it had a mind' a mathematically rigorous notion.

There already are modal logics that concern themselves with the knowledge states of agents. And nothing stops you from adding varying degrees of belief to a hypothetical system, for example by using a Bayesian approach (EDIT: other methods have also been suggested to explain how humans actually make sense of new evidence, because full-on Bayesian updates appear a bit too computationally intensive).
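
To make that a little more concrete, here is a small sketch (my own toy example, with made-up worlds and propositions, not any particular system from the literature) of an epistemic-logic-style check over a Kripke model: an agent 'knows' a proposition when it holds in every world the agent cannot distinguish from the actual one, and a graded belief is just a proportion over those worlds:

```python
# Toy Kripke-style model: worlds, the propositions true in each world,
# and an indistinguishability relation for one agent. All names invented.

worlds = {
    "w1": {"box_contains_coin": True},
    "w2": {"box_contains_coin": True},
    "w3": {"box_contains_coin": False},
}

# Worlds the agent cannot tell apart from each actual world.
accessible = {
    "w1": ["w1", "w2"],   # the agent can't distinguish w1 from w2
    "w2": ["w1", "w2"],
    "w3": ["w3"],
}

def knows(agent_access, world, prop):
    """K(prop): prop holds in every world accessible from `world`."""
    return all(worlds[w].get(prop, False) for w in agent_access[world])

def degree_of_belief(agent_access, world, prop):
    """A crude graded belief: fraction of accessible worlds where prop holds."""
    ws = agent_access[world]
    return sum(worlds[w].get(prop, False) for w in ws) / len(ws)

print(knows(accessible, "w1", "box_contains_coin"))             # True
print(degree_of_belief(accessible, "w1", "box_contains_coin"))  # 1.0
```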

It's also the case that the idea explained above sounds a lot like 'dimensionality reduction', a topic which is already fairly rich and well established in fields like machine learning and computer vision, and therefore a promising base. What dimensionality reduction means is the ability to find, in a 'jumble' of data with many degrees of freedom, a few salient features that are enough to approximately describe the data. One example is recognizing a simple shape in a noisy bitmap image: one goes from many degrees of freedom (each pixel's colour) to relatively few parameters for that shape.
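
A hedged numerical sketch of that example (the numbers and noise level are invented): the noisy 'image' is really just many point coordinates, but if we assume the underlying shape is a straight line, two parameters recovered by least squares describe it well enough:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "bitmap": 200 noisy (x, y) points that really lie near a line.
# That is 400 raw degrees of freedom.
true_slope, true_intercept = 0.7, 3.0
x = rng.uniform(0, 100, size=200)
y = true_slope * x + true_intercept + rng.normal(0, 2.0, size=200)

# Dimensionality reduction in miniature: summarise all those points
# by just two shape parameters, fitted by least squares.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"recovered slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
# Close to 0.7 and 3.0: a few salient features approximately describe the data.
```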

So I believe that one should instead look at a research program (and I'm sure such programs exist) which starts with a modal logic describing some theory of mind, then attempts to find what kind of feature definitions are needed so that the features correspond to notions in that modal logic, and then defines some procedure to detect and reason about those features. Another question is what kind of organization a dynamic system must have, what kind of coherence properties, for it to produce such mind-like features. I'd say that the existence of such organization/coherence properties (and they must exist; otherwise, why the differences between a rock, a hurricane, and you?) justifies thinking of the mind as real, but that's another story.
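
One very small version of 'detect those features', again a hypothetical sketch of my own rather than an existing program: observe an agent's moves and update, Bayesian-style, a belief about which goal it 'wants':

```python
# Hypothetical toy: an agent walks along a line; we infer which of two goals
# it "wants" from its observed steps, using a simple Bayesian update.

def likelihood(step, position, goal, noise=0.1):
    """Probability of a step (+1 or -1) if the agent wants to reach `goal`."""
    towards = 1 if goal > position else -1
    return 1 - noise if step == towards else noise

goals = {"left_door": 0, "right_door": 10}
belief = {g: 0.5 for g in goals}     # prior over the agent's desires

position = 5
observed_steps = [+1, +1, +1]        # the agent keeps moving right

for step in observed_steps:
    for g, target in goals.items():
        belief[g] *= likelihood(step, position, target)
    total = sum(belief.values())
    belief = {g: p / total for g, p in belief.items()}
    position += step

print(belief)   # almost all the probability mass ends up on "right_door"
```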

Anyway, I'm sure such research programs are under way right now. The problem with them, from a PR perspective, is that they are very unglamorous. Consider an example given by Dennett in his book 'Kinds of Minds': a chess-playing computer program. A human player wouldn't be ill served by pretending the machine 'wants to win', 'knows what a good or bad move looks like' and therefore 'avoids bad moves'. So the player ascribes mental states to the chess-playing program, despite the fact that a chess-playing program is nowhere near being a person.
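
The mechanical side of that ascription is easy to see in a sketch: a few lines of game-tree search over a made-up two-ply game (a generic minimax toy, not any real chess engine) already produce behaviour that is well described as 'wants to win' and 'avoids bad moves':

```python
# Minimax over an invented two-ply game tree: our moves, the opponent's
# replies, and a score from our point of view. Maximising that score under
# the assumption that the opponent minimises it is all "wanting to win" is.

game_tree = {
    "a": {"a1": +1, "a2": -5},   # "a" looks tempting but can be punished
    "b": {"b1": -1, "b2": -2},   # "b" keeps the damage small
}

def value_of(move):
    # The opponent is assumed to pick the reply that is worst for us.
    return min(game_tree[move].values())

def best_move():
    # "Avoids bad moves": pick the move whose worst case is best.
    return max(game_tree, key=value_of)

print(best_move())   # -> "b"
```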

Based on my understanding of Dennett's ideas, I'd argue that this is not a bug, but a feature. Minds don't need to be especially bright to be minds, nor do they need to be self-aware. Mind-likeness is thus a more general notion, weaker and simpler.

The advantages of this are that it's clearer what one is looking for, and that one can ramp up the complexity gradually. Yes, a chess-playing computer behaves as if it had some simple, one-task mind. One could then imagine a mind a bit more complicated and build that, and so on. It's no longer the case that one needs to tackle, right from the get-go, the daunting and divisive problems of consciousness, creativity and all the other topics of pot philosophy since pot.

One apparent problem with the definition of mind-likeness given here, if one accepts that it was made precise, is that its commitment to practicality at the expense of ontology implies that, given enough computing power, one could do away with assuming a mind exists and just operate on the underlying, full dynamics of the system.

The practical objection to that is that it's practically impossible to model a physical system in the full detail of its constituent elementary particles. There are loads of them, and their interactions are not that simple to compute either. One always makes simplifying assumptions and attempts to reduce the number of degrees of freedom that one needs to track. Rather than doing away with the mind-model, then, one would be better served by refining it if more computational power is available.

It's certainly the case that a system will not have the capacity to simulate itself in full detail. When reasoning about itself, it pretty much has to rely on thinking of itself as if it had a mind.

And if it thinks, then it is ;)
