The ELIZA effect

Back in the 1960s, Joseph Weizenbaum made some of the first chatterbots. Some excerpts from conversations between those bots and human beings can be found here. What was rather peculiar was that, despite knowing that the bots were bots, many users felt an emotional connection of some kind developing. Even though there was nothing on the bot's part to support such a thing. Even though the human participant knew that.

That turn of events is particularly interesting to me and my 'mind-likeness' program, as it shows a few things ...

1. Human beings are very keen on applying (or easily tricked into applying) mind-likeness/the intentional stance. However, just as the appearance of spurious correlations between unrelated phenomena does not make correlation a useless statistical tool, the fact that mind-likeness/the intentional stance can be misapplied does not make it useless; it is often worth deploying. All the more reason to understand how to deploy it correctly.

2. The nature of the trickery appears misunderstood in the accounts of the experiment. It doesn't/shouldn't matter that the chatterbot's output basically just reflected the user's input back at the user (a minimal sketch of that reflection loop appears after this list). Human beings often do that to each other, in similar situations. What matters, I think, is that the chatterbot didn't remember answers and made no attempt to model the mind of the human - that would have required curiosity. Imagine if the chatterbot were programmed to model the user's mind, at least in some simplified form, probe for dissonances in it, and actively attempt to manipulate it so as to ease those dissonances. Indeed, one possible action would be just to let the user talk, like a therapist would allow their patient to get stuff off their chest. Hm. This needs another post. Soon.

3. Apparently, psychologists of the couch-listener variety could have been replaced by machines as far back as the 1960s. I tell you, AGI failed to materialize because of a global conspiracy of psychotherapists.
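
For the curious, here is a minimal sketch of the kind of pattern-and-reflection loop described in point 2. The pronoun table and the patterns are illustrative stand-ins I made up, not Weizenbaum's actual DOCTOR script. Note what is missing: no memory of earlier turns, and no model of the user at all.

```python
import random
import re

# Pronoun swaps used to "reflect" the user's own words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# Illustrative pattern -> response templates (not the real DOCTOR script).
PATTERNS = [
    (r"i need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*)",         ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the bot can echo the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Pick the first matching pattern and fill its template with the reflected capture.
    No state is kept between calls: the bot remembers nothing and models nothing."""
    cleaned = user_input.lower().strip(" .!?")
    for pattern, templates in PATTERNS:
        match = re.match(pattern, cleaned)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I need a holiday"))    # e.g. "Why do you need a holiday?"
    print(respond("I am tired of this"))  # e.g. "How long have you been tired of this?"
```

A few dozen lines like these are enough to trigger the effect; the emotional connection users reported clearly comes from the human side of the conversation, not from anything happening inside the program.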
