Since this blog’s inception I have taken the unpopular position that psychology is not an empirical science. Indeed, it is my contention that psychologists are only able to regard their work as empirically scientific by virtue of their Kuhnian faith in a preferred conceptual community, a dogmatic adherence to an empirical epistemology, and a tacit neglect of theoretical philosophy. As a result, it seems to me that the non-technical or mainstream social sciences remain largely oblivious to their own theoretical assumptions, and largely unaware of the rigorous arguments and debates that may seriously contest the validity of their preferred concepts, analogies, and metaphors. Nowhere is this theoretical confusion and philosophical ignorance more obvious than in discussions concerning the relationship between mind and brain.
Scientific empiricism is committed to studying objects; but in what way are the subjective qualities of psychological experiences to be objectively explained? How does one objectively quantify a thought, feeling, or motive driving a behavior? Furthermore, how are we to objectify secondary qualia, phenomenology, intentionality, and so on?
Colleagues and students of psychology typically regard the neurosciences as a primary example of how we can objectively study the mind. But to take this position is either to presuppose how mind and brain relate, or else to have a masterful grasp of the many philosophical problems involved in describing this relationship, along with a series of valid and plausibly sound arguments for getting around them. It would be a wondrous thing if the latter were often the case; but when I engage in Socratic follow-up with individuals holding these positions, assumptions abound, and it turns out that they do not appear to know what it was they thought they knew.
I have already argued that the mind is not reducible to the brain, so I will not revisit the same material here. Rather, what follows is a no-nonsense ‘cheat-sheet’ to help non-philosophers begin to think about some of these conceptual issues and their problems. I will outline three main approaches to thinking about the mind-brain relationship, mainly drawing from Raymond Tallis’ excellent book, The Explicit Animal (1999), along with a few other sources noted throughout.
The term ‘Perception’ (P) is used here as a stand-in for the mental event or activity of the mind. Eo stands for the event(s) in the object causing it to be perceived, and Ep for the event(s) in the brain that are in some way responsible for perception. The aim is then to describe the relationship between Ep and perception (P) or other mental activity. Tallis (1999) describes three typical answers (p. 57):
- Substance Dualism: Ep causes P (neural events cause perception)
- Property Dualism: Ep and P are two aspects of the same thing
- Material Monism: Ep and P are identical
Substance Dualism: Physical events cause mental events
This is the view that neural firing causes perception and conscious experience; this position claims that the activity of the nervous system is fully capable of creating the subjective experiences of the mind. It is likely close to the view taken by the British neurobiologist Colin Blakemore:
“The human brain is a machine which alone accounts for all of our actions, our most private thoughts, our beliefs… all our actions are products of the activity of our brains. …we feel ourselves, usually, to be in control of our actions, but that feeling is itself a product of our brain, whose machinery has been designed, on the basis of its functional utility, by means of natural selection (The Mind Machine, 1988, pp. 269-271).”
The historical problem with Cartesian dualism involves explaining how a non-material mind can influence the physical body. The modern version of this approach is simply to say that it cannot. On this view, conscious perception is regarded as a powerless side-effect or ‘epiphenomenal dangler,’ without any causal force of its own.
Epiphenomenalism (see Robinson, 2008) is the consequence of accepting two main premises: 1) the Knowledge Argument, and 2) Physical Closure. The Knowledge Argument (KA) simply states that even if we had complete physical knowledge about another conscious being, we would still lack knowledge about what it’s like to have the experiences of that being. The second premise, Physical Closure (PC), states that there is no feature of a purely physical effect that is not contributed by a purely physical cause. In short, the physical and mental are treated as separate entities, but only the physical has causal power. The direction of causality is one-way – that is, the brain creates the subjective experiences of the mind, but the non-material mind is powerless to influence the body (i.e. mental states are said to be epiphenomenal). Behavior is regarded as an effect of neural processes (Ep), and not, as folk psychology would have us believe, of our mental thoughts, intentions, and so on (P). Conscious perceptions, decisions, choices, reasons, and even our intuitive sense of free will, have no causal force of their own. Gantt & Williams (2013) explain this view as follows:
“What we might normally consider to be uniquely human and intrinsically meaningful aspects of our behaviors, such as the social and moral purposes of our acts in the interpersonal space of human relationships, are regarded as epiphenomenal, and thus, relegated to secondary status – a by-product of asocial and amoral mechanically determinative motivational processes operating on us as if we were objects (p. 7).”
There are several ways to challenge this position, but in my opinion the strongest is the self-stultification argument (see Robinson, 2008). It contends that if epiphenomenalism were true, then we should not be capable of having knowledge about our own minds. The argument begins by stating that reference to any x will necessarily involve causal influence from x to the referential act. But if x is epiphenomenal, then it is something to which we cannot refer. Therefore, if human thoughts and qualia are indeed epiphenomenal, they cannot be objects of reference. In other words, if we acknowledge the existence of mental states, thoughts, and qualia (via the KA), and commit to a physical determinism (via PC), we would appear to render thoughts and qualia devoid of the causal force necessary to refer to and influence our own thoughts.
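The bones of the argument can be set out schematically. This is my own reconstruction of Robinson’s reasoning, and the premise labels are mine, not his:

```latex
% Self-stultification argument against epiphenomenalism
% (schematic reconstruction; labels P1, P2, C1, C2 are mine)
\begin{enumerate}
  \item[P1.] Referring to any item $x$ requires a causal influence running
             from $x$ to the act of reference.
  \item[P2.] (Epiphenomenalism) Mental items -- thoughts, qualia --
             exert no causal influence on anything.
  \item[C1.] Therefore, no mental item can be an object of reference.
  \item[C2.] But the epiphenomenalist refers to thoughts and qualia in the
             very act of stating the thesis; the position thus undermines
             (stultifies) itself.
\end{enumerate}
```

Note that the argument does not deny epiphenomenalism directly; it shows that asserting it is self-defeating, since the assertion itself presupposes reference to the mental.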
“Ironically, this very psychological understanding of human action, wherein reasons, creativity, and meaning are merely epiphenomenal, obviates the very types of creative, rational, and analytical scientific thinking that most experimental psychologists … extol as necessary for the fruitful pursuit of psychological understanding of human phenomena (Gantt & Williams, 2013; p. 7).”
In sum, substance dualism leads to epiphenomenalism, but epiphenomenalism seems to be incompatible with knowing our own thoughts or mind – including knowing whether epiphenomenalism is even true, which seems absurd. Substance dualism thus appears to be a dead end.
Property Dualism: Brain events and mental events are Two Aspects of the Same Thing
This view contends that while there is nothing beyond the physical brain, certain kinds of brain processes can nonetheless have non-physical properties, including mental phenomena and subjective experiences. So neural events (Ep) and mental phenomena (P) are not treated as two different substances, but rather two sides of the same coin (Tallis, 1999).
The most common versions of property dualism suggest that subjective inner experience is an emergent phenomenon… that these experiences emerge from complex neural events in the brain. Property dualists claim, for example, that the relationship between mind and brain is similar to the relationship between the liquidity or blueness of water and the microphysics of H2O molecules; to heat and molecular kinetics; to light and electromagnetic energy; and so on. Mental phenomena and neural events are not of a different ontological kind, but rather the same thing described at different levels. The mental level thus emerges from the level of neural networks as a secondary property.
But what exactly is meant by emergence? And in what way can the activity of neural networks have properties different from those of the physical material on which they are based? There is an ontological gap that must be explained – how something of one ontological status (physical processes) can give rise to another aspect of a different ontological status (mental processes).
The analogies used by the property dualist are enticing at first glance. It seems obvious that the appearance of water and H2O ought to be the same thing described at different levels. But Tallis points out that in order to even have two apparent viewpoints, levels of observation, or modes of description, we must necessarily presuppose a conscious perceiver, which is to assume precisely what was supposed to be explained, explained away, or reduced (Tallis, 1999; p. 62).
“The general difficulty with dual aspect theories is that while it is easy to imagine the two aspects – or levels of descriptions – it is equally easy to forget that those aspects – levels and descriptions – require the existence of two viewpoints (the objective and the subjective) and so of consciousness (p. 63).”
In other words, H2O molecules are the objective material property (available to scientific instruments). But we must implicitly invoke consciousness to get the second property – the phenomenal ‘appearance’ of what we call water (e.g. ‘blueness’). The real ‘trick’ comes when we again invoke the conscious perceiver to deny an ontological difference, and to assert that they are indeed a single entity with two properties.
“But these differences are differences in the appearance of the object and such differences must presuppose a consciousness to which it appears. The difference between one aspect or appearance of an object and another cannot be used as a model of the difference between the object and its appearance to consciousness, between the physical object and its viewpoint-dependent appearings. Aspects are established within consciousness; they cannot be used to establish the relationship of consciousness to that which is not conscious (p. 63).”
The final irony is when we use consciousness in this way to undermine its own ontological status; that is, to say that everything mental emerges entirely from the physical. So it seems that this conception of property dualism, in which objective brain events are singular in substance though believed to have emergent mental properties, is yet another dead end. But why are we so easily seduced by it? Gantt and Williams (2013) claim that it has a lot to do with the language we use, including words like emergence:
“The strategy essentially names a process, structure, or entity that ostensibly bridges the gap between the material and the psychological, and then endows (linguistically) that process, structure, or entity with the precise power that makes it capable of doing the conceptual task required – that is, to manufacture meaningful psychological phenomena out of the raw materials of the meat and chemical of the body. Such is, however, really just a sort of magic trick, a feat of linguistic legerdemain in which a difficult conceptual problem is simply disappeared with a word (p. 13).”
I should mention that Tallis gives a stinging and lengthy critical treatment of the computer metaphor as a way of understanding human minds. I was particularly impressed with his debunking of the multiple meanings and heavy-lifting done by the term ‘information,’ such as when we talk of nervous systems being information processing systems. I strongly encourage interested readers to check out what he has to say on these topics.
Material Monism: Brain Processes and Mental Processes are Identical
Identity theory might be the most popular theory of the mind-brain relation. On this view, mental phenomena or perceptions literally are physical events in, or states of, the brain. Pinker (1997), for example, claims that “the mind is what the brain does” (p. 21), and argues that there is “overwhelming evidence that the mind is the activity of the brain” (p. 64). Daniel Dennett has likewise said: “There is only one sort of stuff, namely matter – the physical stuff of physics, chemistry, and physiology – and the mind is somehow nothing but a physical phenomenon. In short, the mind is the brain (1991, p. 33).”
Tallis (1999) points out one obvious problem with these positions: mental phenomena seem nothing like the neural events they are supposed to be identical with. This kind of strict identity seems to violate Leibniz’s law, which states that a = b only if a and b have every property in common.
“For one can say of a brain process that it occupies a particular point in space or that it can be displayed on an oscilloscope screen; whereas neither of these things could be said of the subjective sensation of the colour blue or the thought that I hate Monday mornings. … it is difficult to comprehend an object that is utterly unlike itself (p. 65).”
As we saw above, the dual-aspect theories try to deal with this obvious difference in ‘appearance’ by suggesting that there is a single substance with two aspects or properties. The attempted resolution for identity theorists is to suggest that mental phenomena and brain processes are exactly the same thing viewed within different descriptive levels or theoretical frameworks. Thus, mind is the brain in the same way that lightning is the motion of electric charges; water is H2O; heat is molecular motion; and so on.
Note that in the above analogies – involving lightning, water, and heat – the relationship described is supposedly one of identity, though there also appears to be an element of contingency. For example, the flash of lightning and the motion of electric charges are claimed to be exactly the same thing, even though we might imagine perceiving a flash that turned out not to be the motion of electric charges. The claimed relationship is identical in a strict Leibnizian sense, yet it seems simultaneously contingent. Could there be a similar kind of relationship between mental phenomena and physical brains?
Saul Kripke (1980) was especially critical of this idea of ‘contingent identity,’ and in setting up his argument he carefully distinguished between what he terms rigid versus non-rigid designators. A rigid designator is an expression that refers to the same object in every possible circumstance in which that object exists. In contrast, a non-rigid designator is a description or expression whose reference is contingent – had things been different, the description might have referred to something else. ‘Leo Tolstoy’ is an example of a rigid designator, since the label picks out a specific man, and it makes no sense to say ‘Leo Tolstoy might not have been Leo Tolstoy.’ In contrast, ‘the man who wrote War and Peace’ is an example of a non-rigid designator, since it refers to an author who might not have been Leo Tolstoy.
Applying these terms to the above analogies, we can see that ‘water’ is a rigid designator, because the thing it refers to is the molecular composition H2O. If we were to call something water but later discovered that it was not H2O, we would cease to call it water. Water and H2O share a necessary identity; they do indeed refer to the same thing.
In short, water and H2O share a necessary identity, even though our descriptive labels and subjective experience make the relationship appear contingent. But can we say the same about the relationship between mental events and neural processes?
Recall again the identity theorists’ claim that mental phenomena are identical to neural processes. It seems obvious that neural events (e.g. C-fiber stimulation) are rigidly designated. Mental events (e.g. the perception of pain) are also rigidly designated, since what we refer to when we talk about pain is the subjective experience of pain. If both mental events and neural processes are rigidly designated, it follows that the identity statement ‘mental events are physical events in the brain’ must, if true, be true by necessity. But ‘pain is identical to C-fiber stimulation’ is not necessarily true (we can imagine a world where one exists without the other). And if it is not necessarily true, then it cannot be a genuine identity at all, and the identity claim as a whole fails.
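Kripke’s argument can be compressed into a short modal schema. This is a reconstruction; the notation is mine, not Kripke’s or Tallis’s:

```latex
% Kripke's necessity-of-identity argument against identity theory
% (schematic reconstruction; notation mine)
% Let a = `pain' and b = `C-fiber stimulation', both rigid designators.
\begin{align*}
\text{P1: } & (a = b) \rightarrow \Box(a = b)
  && \text{(identities between rigid designators hold necessarily)} \\
\text{P2: } & \neg\Box(a = b)
  && \text{(a world with pain but no C-fiber stimulation is conceivable)} \\
\text{C: }  & \neg(a = b)
  && \text{(so the identity claim is false, not merely contingent)}
\end{align*}
```

The force of the argument lies in P1: for rigid designators there is no room for a ‘contingent identity,’ so the apparent contingency of ‘pain = C-fiber stimulation’ counts against the identity itself.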
While the relationship between water and H2O may involve both necessity and an appearance of contingency, the same cannot be said of the relationship between mental events and brain processes. If we were to discover that pain was not caused by neural firing, or were to discover an alien race with subjective pain states not produced by neural firing, we would still refer to it as pain; the mental state (i.e. pain) does not refer to anything other than itself – the perceptual experience of pain. It cannot be theoretically reduced in the way that water is reduced to H2O without losing its referent.
We can also see why the favored analogies of identity theorists are flawed – namely, they point to referents that can be physically substantiated on both ends. Take, for example, the claim that ‘heat is molecular motion.’ With objective instruments we can detect and measure molecular motion; with another set of instruments we can likewise detect and measure heat; we compare the two in order to verify that they are the same thing. There is no such comparison for mental phenomena, since the only instruments capable of measuring said phenomena are subjective human beings.
Another way of stating the problem is to point out that in the cases of water and H2O, heat and molecular motion, lightning and electric charges, we are talking about physical properties objectively measured at different levels, and so there is no ontological gap to be crossed. In these situations, talk of ‘emergence’ is more acceptable because we are talking about complex physical properties arising from some assemblage of lower-level physical properties. They all share the ontology of physical things (Gantt & Williams, 2013). However, it becomes problematic when we try to get from one ontological category (e.g. physical) to another (e.g. mental or phenomenological).
As stated earlier, one of the common ways of bridging this gap is by unconscious linguistic fiat. In short, we take terms that have one meaning and then apply them to situations that are inappropriate but that appear to solve the problem. Tallis (1999) calls this ‘thinking by transferred epithet.’ Examples include our use of terms such as emergence, representation, and information, by which we conveniently get the nervous system – a system of chemical-electrical energy – to do what we want it to do: namely, to explain phenomenological consciousness.
“The metaphysical duties of the nervous system are thereby greatly lightened; for the energy/information barrier – or the body-mind gap – may be crossed simply by referring to, for example, light not only as ‘energy’ but also as ‘information’, so that at the very least, it possesses intrinsically the subjective qualities of the experience of it, such as brightness (p. 84).”
This concludes a brief survey of some common mind-brain problems for non-philosophers. I would encourage empirically-minded persons to be aware of the theoretical assumptions underlying their favored positions, and to be capable of backing up their claims and responding to the arguments against them.
At this point readers may be inclined to ask about my own position regarding the mind-brain relationship. To be honest, I am still trying to figure it out. But here is an extract from an upcoming publication, where I partially address the issue (Peters, in press):
“While the brain is a necessary condition for the existence of a human mind, it is a dubious claim to suggest that it might alone account for it (Gergen, 2010). Unlike other animals, humans exist in a symbolic and pluralistic community of other subjective minds where meaning is collectively constructed (Malik, 2002; Tallis, 2011). Within this symbolic matrix, semantic labels allow for the validation of first-person subjective experiences and provide a mental arena for human reasoning, where values can be constructed, upheld, defended, or discarded. All of these things are created not by a mechanistic machine, but by a pluralistic community of rational agents, who are capable of acting on those symbolic systems in ways that machines and other animals cannot:
‘The reasons that circulate in culture and society acquire action-motivating force only once they find a point of entry from this ‘objective mind’ … into the subjective mind, that is, into the consciousness of persons who, for their part, are prepared for this by processes of socialization. However much human infants may also be ‘pre-programmed’ for this by their genetic endowment, they do not develop into persons until they get ‘hooked up’ with the intersubjectively shared meanings of the cultural program. Personhood stands out as the early ontogenetic socialization of cognition that then also shapes the structure of action and the formation of motives. (Habermas, 2007; p. 17)’
The human mind might thus be conceived as comprising both causal mechanisms that are largely biologically-dictated, and those that are for the most part symbolically-dictated, though still biologically dependent. Insofar as neurophysiology is concerned, we might envision the former mechanisms to involve lower-level systems such as those outlined by Panksepp, whereas the latter might engage a comparatively flexible associational cortex. In the latter case, causal force is governed not by biological mechanism, but by symbolic meaning; exigent biological processes are in other words co-opted within the external matrix of symbolic (i.e. non-material) contingencies that arguably hold an explanatory power greater than the biological structures upon which they rely. … any theory claiming to fully comprehend the human animal, must be able to account for, or at least acknowledge the unique symbolic capacity of human minds. … while mechanisms may indeed serve as the neural vehicle for representations, their meaning and function are in many cases dictated by a community of symbolically embodied agents and the human world of meanings, values, and justified reasons that have their own causal force. …
the mechanisms behind our higher-order cognitive capacities are in many cases part of the extended and symbolically embedded world of man-made values, ideas, and reasons. … From this perspective, we can also challenge the rather dubious assertion that we could fully understand such mechanisms by assuming a causal relationship attributed to their ‘neurobiological underpinnings’ …
… [we] must be capable of theoretically distinguishing between varieties of psychological processes and neurobiological mechanisms – between cases when we might indeed be operating as evolved biological animal, versus those when we act as a partially transcendent (though still biologically dependent) symbolic creature – an embodied subject and self-conscious rational agent (Tallis, 2003; 2004a; 2004b). It is important to note that this difference between humans and non-human animals is more than quantitative (e.g. more specialized adaptations); it represents an enormous qualitative divergence, given that we are the only symbolic species.”
Dennett, D. C., & Weiner, P. (1991). Consciousness explained. New York, NY: Little, Brown and Co.
Gantt, E. E., & Williams, R. N. (2013). Psychology and the legacy of Newtonianism: Motivation, intentionality, and the ontological gap. Journal of Theoretical and Philosophical Psychology. doi:10.1037/a0031587
Kripke, S. (1980). Naming and necessity. Cambridge, MA: Harvard University Press.
Pinker, S. (1997). How the mind works. New York, NY: W. W. Norton.
Robinson, H. (2008). Why Frank should not have jilted Mary. In E. L. Wright (Ed.), The case for qualia (pp. 223-247). Cambridge, MA: MIT Press.
Tallis, R. (1999). The explicit animal: A defense of human consciousness. New York, NY: St. Martin’s Press.