Do Animals and Computers have Minds? Probably Not

In one of my recent university lectures on human consciousness, some debate arose about whether other animals have conscious thought or sustained self-awareness; in short, whether animals have minds. My position: I have yet to hear a convincing argument suggesting that they do. Some of the counterarguments to my position involve reference to chimpanzees that seem capable of everything from sign language to perceiving the mental states of others, and talking parrots that appear to understand symbolic mental concepts. Take the late Alex, for example, a talking parrot that was able to vocalize hundreds of words, identify 50 different objects, recognize quantities up to six, and appeared capable of differentiating between two objects based on color, shape, and texture (matter), thus suggesting that he understood the concept of ‘difference.’ Take a look at the video below. I do not deny that it is quite impressive.

But is Alex using symbolic/semantic language? Does Alex truly know what these words mean? Alex’s trainer, Dr. Irene Pepperberg, was quite enthusiastic about Alex’s mental abilities, including her belief that he did understand concepts. However, she is more careful with the question of language. In an interview, she reportedly said:

“I would not call it language. … I don’t believe you could interview him … what little syntax he has is very simplistic. Language is what you and I are doing, an incredibly complex form of communication.”

We might likewise wonder whether Alex is capable of thinking and knowing. Dr. Herbert Terrace, an expert in animal cognition, suggests only “minimally. … in every situation, there is an external stimulus that guides his response. Thought involves the ability to process information that is not right in front of you.” As an aside, I should mention that Dr. Terrace was the main researcher involved in “Project Nim,” the long-term study designed to explore the limits of chimpanzee cognition and communication. Initial reports were hopeful and boasted about Nim’s ability to communicate with humans through sign language. However, the study was considered an overall failure. Though Nim learned many signs, the researchers concluded that his sign use was better explained by imitation of trainer cues and by operant conditioning. While Nim was at times capable of manipulating human signs, there was little evidence to suggest that he actually understood them in the way people do.

The difference is hard for the layperson (and indeed for many scientists) to understand, in part due to a natural tendency toward anthropomorphism. This involves the assumption that if other animals are capable of performing human-like behaviors, they do so for the same reasons as humans. The word ‘reasons’ should be a red flag, since in many cases we end up smuggling our conclusion (e.g. that an animal is capable of abstract reasoning) into the premise. In comparative animal psychology, there is a succinct warning against anthropomorphism known as ‘Morgan’s Canon.’ It goes as follows:

“In no case may we interpret an action as the outcome of the exercise of a higher faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale (Morgan, 1894).”

Morgan’s Canon is a useful guide. If we fail to heed this warning, we might be inclined to believe that a squirrel stores nuts because it knows that winter is coming and realizes that it will be a long time before spring, rather than simply following a series of pre-programmed or reflexive behaviors in response to a drop in temperature. We might also be inclined to believe that my dog understands the meaning of the word ‘come,’ rather than the more likely scenario of the word being operantly reinforced. The word has meaning to us, but not necessarily to the dog.

But let us take a brief sidestep into a comparison between human minds and computers. Note, for instance, that you could program a computer to do what the parrot and chimpanzee are doing through mere syntax or algorithmic programming. A computer (or a parrot) can still come up with the conceptually ‘correct’ response without knowing what the concept really means. Meaning involves not just syntax, but semantics. John Searle came up with an interesting thought experiment (the ‘Chinese Room’) to illustrate the difference. It goes something like this: imagine that there is an English-speaking man (John Searle) sitting inside a large box, while a man who speaks a different language (Chinese, in Searle’s version) is standing just outside. The Chinese-speaking man is unaware that there is a man inside the box; to him, it is just a box. Nevertheless, he is capable of inserting a series of Chinese symbols into the box. While the man in the box (John Searle) does not know what the symbols mean, he is able to look them up in a reference book in order to determine a complementary response, which he writes down and pushes out of a window. For our purposes we might consider the Chinese symbols coming in to be the equivalent of a question, while the symbols coming out might be something equivalent to an answer. To the Chinese-speaking man on the outside, it appears as if the box is intelligently responding to his questions, as though the computer-like box is conscious and thinking. But this is only an illusion based on his particular point of reference. While John Searle is manipulating the symbols, he has no idea what any of it means. He knows that these symbols ‘go together,’ but he does not know why. It means something to the Chinese-speaking man on the outside, but it does not mean anything to him.
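The reference-book version of the room can be sketched in a few lines of code. This is only a minimal illustration of the point about syntax: the symbol strings below are invented placeholders, not real Chinese question-and-answer pairs.

```python
# A minimal sketch of the Chinese Room: the "room" is nothing but a
# lookup table pairing incoming symbols with outgoing symbols. The
# entries are invented placeholders for whatever the book pairs up.
RULE_BOOK = {
    "symbol-A": "symbol-X",
    "symbol-B": "symbol-Y",
}

def room(symbols: str) -> str:
    # Pure symbol manipulation: no step here involves knowing what the
    # symbols mean, only that they 'go together' in the reference book.
    return RULE_BOOK.get(symbols, "")

print(room("symbol-A"))  # prints "symbol-X"
```

From the outside, the function looks responsive; on the inside, there is nothing but pattern matching.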

Computers work in a similar way, which is why it is not entirely correct for us to say that they process information. Information contains meaning. But we are the ones that tell the computer what the clusters of ones and zeros (inherent within a programming language) will represent. Computers can manipulate algorithms and syntax, but they cannot get any closer to meaning; you can program more and more syntax into the computer, but it is like a dog chasing its tail. Computers do not process information, people do.

However, animals are more complicated than even the most sophisticated computers. Animals are capable of adapting to the environment in some very complex ways. So let’s look at a modified version of Searle’s Chinese Room experiment to try to get a better handle on what is happening with Alex the parrot and with animal cognition more generally. Suppose that the command entering the modified box is “Li-24,” alongside the contextual features of a greenish oval and a bluish triangle. Now you might say, “What does that mean?” That is my point – it should not mean anything to us. But let’s say it means something to the person standing outside of the box. And let’s say that instead of having a reference list to refer to, on the inside of the box John Searle has nothing but three possible responses, in this case: H3U, N73, and M8L. Now suppose John tries the response N73, but finds that it is incorrect (nothing happens). The command is repeated, so this time he tries M8L, and finds that for whatever reason, a beer and sandwich are slid through the window (the equivalent of a cracker for a parrot).

The response is thus reinforced. The command ‘Li-24’ is repeatedly paired with certain contextual features, and over time, John Searle learns that they ‘go together’ with certain responses. There is an association, but John Searle does not know why. He does not know what it means. To the person on the outside, it looks like John Searle is demonstrating awareness of the concept, but in reality, it is only the result of associational (operant) learning, not semantic understanding.
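The trial-and-error learning in the modified box can be sketched as a tiny simulation. The reward rule and the weighting scheme below are my own assumptions, meant only to illustrate operant reinforcement; they use the symbols from the example above.

```python
import random

# Toy simulation of the modified box: the command 'Li-24' admits three
# possible responses, and only one (M8L) is ever rewarded. The 'agent'
# strengthens whichever response is reinforced, without ever knowing
# what any of the symbols mean.
RESPONSES = ["H3U", "N73", "M8L"]
REWARDED = {"Li-24": "M8L"}  # the environment's hidden rule

def trial(command: str, weights: dict) -> None:
    # Pick a response in proportion to its learned strength.
    choice = random.choices(RESPONSES, weights=[weights[r] for r in RESPONSES])[0]
    if REWARDED[command] == choice:
        weights[choice] += 1.0  # the beer-and-sandwich moment: reinforcement

weights = {r: 1.0 for r in RESPONSES}
for _ in range(200):
    trial("Li-24", weights)

# After enough trials, M8L dominates: an association, not an understanding.
print(max(weights, key=weights.get))
```

Nothing in the simulation represents what ‘Li-24’ or ‘M8L’ means; the ‘correct’ behavior emerges purely from which responses happen to get rewarded.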

I believe that both Alex the parrot and Nim the chimp were manipulating language in a similar way, that is, non-symbolically. Note that humans can sometimes do this as well, as evidenced by an example I will borrow from Kenan Malik (Man, Beast, and Zombie). Imagine, for example, that you are in a foreign country whose language you do not speak. You pick up a few words, but have no idea what they mean. You observe contextual features involving the people (e.g. smiling, proximity, the offering of food, etc.) and learn that under certain conditions, you can use one of the three words you know to get a desired response. You learn that these things effectively ‘go together.’ You learn to use the word correctly a great deal of the time, but still have no idea what it means. We might complicate things by suggesting that given enough time, you might start thinking and reasoning (e.g. in your own language) and eventually figure out or translate its meaning. But note that we must necessarily invoke language in order to have the mental activity of thinking. It only works if we assume that language (and a human-like mind) is already there, which is true in this case, but not so for parrots and chimpanzees.

Terrence Deacon (The Symbolic Species) suggests that we tend to get confused in these matters because we fail to differentiate between kinds of communication or reference. Deacon describes them in order of complexity as: iconic, indexical, and symbolic. The first two are common among almost all animals, while the last is likely unique to humans.

The iconic mode of reference is mediated by a similarity between sign and object. We act toward the sign ‘as if’ it was the object of reference. An example might be a shark mistaking a human surfer for a seal. In the human world, we might use the example of male sexual arousal in response to a pornographic image. The image is not real, but if it is close enough, the brain overgeneralizes and responds as if it was. A stick-figure drawing of a naked woman is not likely to get a man aroused, but a more realistic drawing or a photograph is close enough that it might, even though it has little to do with a flesh and blood woman or real sex.

The second level of reference, the index, is mediated by physical or temporal connection between sign and object; it involves contiguity or correlation. This level of reference could be innate or learned. Deacon uses the alarm calls of vervet monkeys as an example of an innate indexical mode of reference. Researchers have observed that vervet monkeys have a specialized call for warning others of a predatory threat. They appear to have unique calls signifying the presence of a leopard, an eagle, or a snake. Each call has an instinctive response.

When the leopard call resounds, the monkeys run for the trees; when the eagle call is made, they run to the ground and try to seek cover under some dense bush; when the snake call is made, they all stand up and scan through the tall grass. It does not take much thinking to understand the evolutionarily adaptive function of these responses. If a monkey were to stand up in the tall grass in response to an eagle call, it would make itself an easier target; this particular response would not be selected because it is maladapted to the situation.

Researchers have been fascinated by these complex responses, and they have been quick to suggest that this mode of communication is an early form of symbolic language; that the monkeys appear to understand what these calls mean and are able to make an appropriate response. The calls were thought to be the equivalent of specific names for each animal, which could then be used to decide an appropriate course of action. But Deacon disagrees. Firstly, the monkey calls are innately preassigned – that is, the ‘eagle call’ is the same in every troop and cannot be substituted for one of the other calls. This is unlike the case in humans, where the vocal utterance for an object varies across native languages and cultures, having been more or less arbitrarily settled upon by groups of people.

Deacon also points out that when one monkey starts an alarm call, the others join in, regardless of whether it helps the situation or not. It appears to be more like a reflexive impulse or behavior that seemingly pulls for another reflex-like response; in short, they just go together. The parallel in human beings might be laughter or crying. We say that laughter is contagious, and it simultaneously points to a situation that is generally safe and unthreatening. When someone we care about is upset, it is hard not to become upset yourself, and hard to resist providing comfort. These are innate indexical modes of reference or communication, though we can also have learned modes of indexical reference, as in the case of operant conditioning. A dog learns, for example, that a certain utterance or command sets the conditions for a behavioral response that will likely be rewarded; the response ‘goes together’ with the dog treat. However, as we talked about, this does not mean that the dog understands what the request means.

The last level of referential communication involves the symbolic, which is mediated by some formal or merely agreed-upon link, irrespective of any physical characteristics of either sign or object. This level of communication is not a linear continuation of pre-existing communication, but rather a system that evolved alongside those other forms. It involves human-made facts, concepts, ideas, reasons, meanings, languages, cultures, and so on. It involves more than syntax. It does more than just ‘refer’ to something… it carries symbolic meaning. We need to understand that there is a difference here. It is not just a difference of degree: it is another kind of communication altogether. This beautiful quote by Becker should serve to illustrate the point:

“Nature provided all of life with water, but only man could create the symbol H2O, which gave him some command over water, and the word ‘holy,’ which gave water special powers that even nature could not give.”

Other animals, even chimpanzees, do not seem to reach this level of abstract thought or this symbolic mode of communication. See for yourself in this video:

In sum, humans are a truly symbolic species. This is something that many scientists fail to understand. Unlike other animals, part of our environment is non-physical; it involves the environment of abstract language and symbolic meanings. While it requires a fully functioning brain in order to access it, a brain alone is insufficient. The mind is in this sense extended… we belong to what Raymond Tallis calls the ‘community of minds.’
