
Phenomenology and Natural Science

Phenomenology provides an excellent framework for a comprehensive understanding of the natural sciences. It treats inquiry first and foremost as a process of looking and discovering rather than assuming and deducing. In looking and discovering, an object always appears to a someone, either an individual or a community; and the ways an object appears and the state of the individual or community to which it appears are correlated.

To use the simplest of examples involving ordinary perception, when I see a cup, I see it only through a single profile. Yet to perceive it as real rather than a hallucination or prop is to apprehend it as having other profiles that will show themselves as I walk around it, pick it up, and so forth. No act of perception – not even a God’s – can grasp all of a thing’s profiles at once. The real is always more than what we can perceive.

Phenomenology of science treats discovery as an instrumentally mediated form of perception. When researchers detect the existence of a new particle or asteroid, they assume that it will appear in other ways in other circumstances – and this can be confirmed or disconfirmed only by looking, in some suitably broad sense. It is obvious to scientists that electrons appear differently when addressed by different instrumentation (for example, wave-particle duality), and therefore that any conceptual grasp of the phenomenon involves instrumental mediation and anticipation. Not only is there no “view from nowhere” on such phenomena, but there is also no position from which we can zoom in on every available profile. There is no one privileged perception, and the instrumentally mediated “positions” from which we perceive constantly change.

Phenomenology looks at science from various “focal lengths.” Close up, it looks at laboratory life; at attitudes, practices, and objects in the laboratory. It also pulls back the focus and looks at forms of mediation – how things like instruments, theories, laboratories, and various other practices mediate scientific perception. It can pull the focus back still further and look at how scientific research itself is contextualized, in an environment full of ethical and political motivations and power relations. Phenomenology has also made specific contributions to understanding relativity, quantum mechanics, and evolution.

Table of Contents

  1. Introduction
  2. Historical Overview
  3. Science and Perception
  4. General Implications
    1. The Priority of Meaning over Technique
    2. The Priority of the Practical over the Theoretical
    3. The Priority of Situation over Abstract Formalization
  5. Layers of Experience
    1. First Phase: Laboratory Life
    2. Second Phase: Forms of Mediation
    3. Third Phase: Contextualization of Research
  6. Phenomenology and Specific Sciences
    1. Relativity
    2. Quantum Mechanics
    3. Evolution
  7. Conclusion
  8. References and Further Reading

1. Introduction

Phenomenology provides an excellent starting point, perhaps the only adequate starting point, for a comprehensive understanding of the natural sciences: their existence, practices, methods, products, and cultural niches. The reason is that, for a phenomenologist, inquiry is first and foremost a question of looking and discovering rather than assuming and deducing. In looking and discovering, an object is always given to a someone – be it an individual or a community – and the object and its manners of givenness are correlated. In the special terminology of phenomenology, this is the doctrine of intentionality (for example, see Cairns 1999). This doctrine has nothing to do with the distinction between “inner” and “outer” experiences, but is a simple fact of perception. To use the time-honored phenomenological example, even when I see an ordinary object such as a cup, I apprehend it only through a single appearance or profile. Yet for me to perceive it as a real object – rather than a hallucination or prop – I must apprehend it as having other profiles that will show themselves as I walk around it, pick it up, and so forth, each profile flowing into the next in an orderly, systematic way. I do more than expect or deduce these profiles; the act of perceiving a cup contains anticipations of other acts in which the same object will be experienced in other ways. That is what gives my experience of the world its depth and density. Perhaps I will discover that my original perception was mistaken, and my anticipations were mere assumptions; still, I discover this only through looking and discovering – through sampling other profiles. In science, too, when researchers propose the existence of a new particle or asteroid, such a proposal involves anticipations of that entity appearing in other ways in other circumstances, anticipations that can be confirmed or disconfirmed only by looking, in some suitably broad sense (Crease 1993). In ordinary perception, each appearance and profile (noema) is correlated with a particular position of the one who apprehends it (noesis); a change in either one (the cup turning, the person moving) affects the profile apprehended. This is called the noetic-noematic correlation. In science, the positioning of the observer is technologically mediated; what a particle or cell looks like depends in part on the state of instrumentation that mediates the observation.

Another core doctrine of phenomenology is the lifeworld (Crease 2011). Human beings, that is, engage the world in different ways. For instance, they seek wealth, fame, pleasure, companionship, happiness, or “the good”. They do this as children, adolescents, parents, merchants, athletes, teachers, and administrators. All these ways of being are modifications of a matrix of practical attachments that human beings have to the world that precedes any cognitive understanding. The lifeworld is the technical term phenomenologists have for this matrix. The lifeworld is the soil out of which grow various ways of being, including science. Understanding photosynthesis or quantum field theory, for instance, is only one – and very rare – way that human beings interact with plants or matter, and not the default setting. Humans have to be trained to see the world that way; they have to pay a special kind of attention and pursue a special kind of inquiry. Thus the subject-inquirer (again, whether individual or community) is always bound up with what is being inquired into by practical engagements that precede the inquiry, engagements that can be altered by and in the wake of the inquiry. It is terribly tempting for metaphysicians to “kick away the ladder of lived experience” from scientific ontology as a means to gain some sort of privileged access to the world that bypasses lifeworld experience, but this condemns science to being “empty fictions” (Vallor 2009).

The aim of phenomenology is to unearth invariants in noetic-noematic correlations, to make forms or structures of experience appear that are hidden in ordinary, unreflective life, or the natural attitude. Again, the parallel with scientific methodology is uncanny; scientific inquiry aims to find hidden forms or structures of the world by varying, repeating, or otherwise changing interventions into nature to see what remains the same throughout. Phenomenologists seek invariant structures at several different phases or levels – including that of the investigator, the laboratory, and the lifeworld – and can examine not only each phase or level, but the relation of each to the others. Over the last hundred years, this has generated a vast and diverse body of literature (Ginev 2006; Kockelmans & Kisiel 1970; Chasan 1992; Hardy and Embree 1992; McGuire and Tuchanska 2001; Gutting 2005).

2. Historical Overview

Phenomenology started out, in Husserl’s hands, well-positioned to develop an account of science. After all, Husserl was at the University of Göttingen during the years when David Hilbert, Felix Klein, and Emmy Noether were developing and extending the notion of invariance and group theory. Husserl not only had a deep appreciation for mathematics and natural science, but his approach was allied in many key respects with theirs, for he extended the notion of invariance to perception by viewing the experience of an object as of something that remains the same in the flux of changing sensory conditions produced by changing physical conditions. This may seem far removed from the domain of mathematics, but it is not. Klein’s Erlangen program viewed mathematical objects as not representable geometrically all at once but rather in definite and particular ways, depending on the planes on which they were projected; the mathematical object remained the same throughout different projections. In an analogous way, Husserl’s phenomenological program viewed a sensuously apprehended object as not given to an experiencing subject all at once but rather via a series of adumbrations or profiles, one at a time, that depend on the respective positioning of subject and object. The “same” object – even light of a certain wavelength – can look very different to human observers in different conditions. What is different about Husserl’s program, and may make it seem removed from the mathematical context, is that these profiles are not mathematical projections but lifeworld experiences. What remained to be added to the phenomenological approach to create a fuller framework for a natural philosophy of science was a notion of perceptual fulfillment under laboratory conditions, and of the theoretical planning and instrumental mediation leading to the observation of a scientific object. The “same” structure – for example, a cell – will look very different using microscopes of different magnification and quality, and phenomenology easily provides an account for this (Crease 2009).
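
To make the analogy concrete, consider a minimal illustration of the Erlangen idea (a schematic gloss in modern notation, not Klein’s own formulation). Euclidean plane geometry can be characterized as the study of those properties of figures that remain invariant under the group of rigid motions, and the distance between two points is such an invariant:

\[ d(p,q) = \lVert p - q \rVert = \lVert (Rp + t) - (Rq + t) \rVert \quad \text{for any rotation } R \text{ and translation } t. \]

Each transformed image \(x \mapsto Rx + t\) presents the “same” figure in a different way, just as, for Husserl, each new perceptual position presents the same object through a different profile.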

Despite this promising beginning, many phenomenologists after Husserl turned away from the sciences, sometimes even displaying a certain paternalistic and superior attitude towards them as impoverished forms of revealing. This is unwarranted. Husserl’s objection to rationalistic science in the Crisis of the European Sciences was after all not to science but to the Galilean assumption that the ontology of nature could be provided by mathematics alone, bypassing the lifeworld (Gurwitsch 1966, Heelan 1987). And Heidegger’s objection, in Being and Time, most charitably considered, was not to theoretical knowledge, but to the forgetting of the fact that it is a founded mode in the lifeworld, to be interpreted not merely as an aid to disclosure but as a special and specialized mode of access to the real itself. Others who followed, including Gadamer and Merleau-Ponty, for various reasons did not pursue the significance of phenomenology for natural science.

Science also lagged behind other areas of phenomenological inquiry for historical reasons. The dramatic success of Einstein’s theory of general relativity in 1919 marked “a watershed for subsequent philosophy of science,” one that proved detrimental to the prospects of a phenomenology of science (Ryckman 2005). Kant’s puzzling and ambiguous doctrine of the schematism – according to which intuitions, which are a product of sensibility, and categories, which are a product of understanding, are synthesized by rules or schemata to produce experience – had nurtured two very different approaches to the philosophy of science. One, taken by logical empiricists, rejected the schematism and treated sensibility and the understanding as independent, and the line between the intuitive and the conceptual as that between experienced physical objects and abstract mathematical frameworks. The empiricists saw these two as linked by rules of coordination that applied the latter to the former. Such coordination – the subjective contribution of mind to knowledge – produced objective knowledge. The other, more phenomenological route was to pursue the insight that experience is possible only thanks to the simultaneous co-working of intuitions and concepts. While some forms and categories are subject to replacement, producing a “relativized a priori” (my conception of things like electrons, cells, and simultaneity may change), such forms and categories make experience possible. Objective knowledge arises not by an arbitrary application of concepts to intuitions – it is not just a decision of consciousness – but is a function of the fulfillment of physical conditions of possible conscious experience; scientists look at photographic plates or information collected by detectors in laboriously prepared conditions that assure them that such information is meaningful and not noise. Husserl’s phenomenological approach to transcendental structures, though, must be contrasted with Kant’s, for while Kant’s transcendental concepts are deduced, Husserl’s are reflectively observed and described.

However, following the stunning announcement of the success of general relativity in 1919, which seemed to destroy transcendental assumptions about at least the Euclidean form of space and about absolute time, logical empiricists were quick to claim that it vindicated their approach and refuted not only Kant but all transcendental philosophy. “Through Einstein … the Kantian position is untenable,” Schlick declared, “and empiricist philosophy has gained one of its most brilliant triumphs.” But the alleged vanquishing of transcendental philosophy and triumph of logical empiricism’s claims to understand science was due to “rhetoric and successful propaganda” rather than argument (Ryckman 2005). For as other transcendental philosophers such as Ernst Cassirer, and philosophically sophisticated scientists such as Hermann Weyl, realized, in making claims about the forms of possible phenomena, general relativity called for what amounted to a revision, rather than a refutation, of Kant’s doctrine; how we may experience spatiality in ordinary life remains unaffected by Einstein’s theory. But the careers of both Cassirer and Weyl took them away from such questions, and nobody else took their place.

3. Science and Perception

One way of exhibiting the deep link between phenomenology and science is to note that phenomenology is concerned with the difference between local effects and global structures in perception. To use the time-honored example of perceiving a cup through a profile again: Grasping it under that particular adumbration or profile is a local effect, though what I intend is a global structure – the phenomenon – with multiple horizons of profiles. Phenomenology aims to exhibit how the phenomenon is constituted in describing these horizons of profiles. But this of course is closely related to the aim of science, which seeks to describe how phenomena (for example, electrons) appear differently in different contexts – and even, in the case of general relativity, incorporates a notion of invariance into the very notion of objectivity itself (Ryckman 2005). An objective state of affairs, that is, is one that has the same description regardless of whether the frame of reference from which it is observed is accelerating or not.

In science, however, perceiving (observing) is mediated by theory and instruments. Thanks to theories, the lawlike behavior of scientific phenomena (for example, how electrons behave in different conditions) is represented or “programmed” and then correlated with instrumental techniques and practices so that a phenomenon appears. The theory (for example, electromagnetism) thus structures both the performance process thanks to which the phenomenon appears, and the phenomenon itself. Read noetically, with respect to production, the theory is something to be performed; read noematically, with respect to the product, it describes the object appearing in performance. A theory does not correspond to a scientific phenomenon; rather, the phenomenon fulfills or does not fulfill the expectations of its observation raised by the theory. Is this an electron beam or not? To decide that, its behavior has to be evaluated. Theory provides a language that the experimenter can use for describing or recognizing or identifying the profiles. For the theorist, the semantics of the language is mathematical; for the experimenter, the semantics is descriptive and the objects described are not mathematical objects but phenomena – bodily presences in the world. Thus the dual semantics of science (Heelan 1988): a scientific word (such as ‘electron’) can refer both to an abstract term in a theory and to a physical phenomenon in a laboratory. The difference is akin to that between a ‘C’ in a musical score and a ‘C’ heard in a concert hall. Conflating these two usages has confused many a philosopher of science. But our perception of the physical phenomenon in the laboratory has been mediated by the instruments used to produce and measure it (Ihde 1990).

By adding theoretical and experimental mediation to Husserl’s account of what is “constitutive” of perceptual horizons (Husserl 2001, from which the following quotations are taken except where noted), one generates a framework for a phenomenological account of science. To grasp a scientific object, like a perceptual object, as a presence in the world, as “objective,” means, strangely enough, to grasp it as never totally given, but as having an unbounded number of profiles that are not simultaneously grasped. Such an object is embedded in a system of “referential implications” available to us to explore over time. And it is rarely grasped with Cartesian clarity the first time around, but “calls out to us” and “pushes us” towards appearances not simultaneously given. A new property, for example parity violation, is detected in one area of particle physics – but if it shows up here it should also show up there even more intensely and dramatically. Entities, that is, show themselves as having further sides to be explored, and as amenable to better and better instrumentation. Phenomena even, as it were, call attention to their special features – strangeness in elementary particles, DNA in cells, gamma-ray bursters amongst astronomical bodies – and recommend these features to us for further exploration. “There is a constant process of anticipation, of preunderstanding.” With sufficient apprehension of sampled profiles, “The unfamiliar object is … transformed … into a familiar object.” This involves the development both of an inner horizon of profiles already apprehended, already sampled, and of an external horizon of not-yet-apprehended profiles. But the object is never fully grasped in its complete presence, horizons remain, and the most one can hope for is for a thing to be given optimally in terms of the interests for which it is approached. And because theory and instruments are always changing, the same object will always be grasped with new profiles. Thus, Husserl’s phenomenological account readily handles the often vexing question in traditional philosophy of science of how “the same” experiment can be “repeated.” It equally readily handles the even more troublesome puzzle in traditional approaches of how successive theories or practices can refer to the same object. For just as the same object can be apprehended “horizontally” in different instrumental contexts at the same time, it can also be apprehended “vertically” by successively more developed instrumentation. Husserl, for instance, refers to the “open horizon of conceivable improvement to be further pursued” (Husserl 1970, §9a). Newer, more advanced instruments will pick out the same entity (for example, an electron), yield new values for measurements of the same quantities, and open up new domains in which new phenomena will appear amid the ones that now appear on the threshold. Today’s discovery is tomorrow’s background.

The basic account of perception given above has been further elaborated in the context of group theory by Ernst Cassirer in a remarkable article (Cassirer 1944). Cassirer extends the attempts of Helmholtz, Poincaré and others to apply the mathematical concept of group to perception in a way that makes it suitable to the philosophy of science. Group theory may seem far from the perceptual world, Cassirer says. But the perceptual world, like the mathematical world, is structured; it possesses perceptual constancy in a way that cannot be reduced to “a mere mosaic, an aggregate of scattered sensations” but involves a certain kind of invariance. Perception is integrated into a total experience in which keeping track of “dissimilarity rather than similarity” is a hallmark of the same object. The cup is going to look different as the light changes and as I move about it. “As the particular changes its position in the context, it changes its ‘aspect.’” Thus, Cassirer writes, “the ‘possibility of the object’ depends upon the formation of certain invariants in the flux of sense-impressions, no matter whether these be invariants of perception or of geometrical thought, or of physical theory. The positing of something endowed with objective existence and nature depends on the formation of constants of the kinds mentioned …. The truth is that the search for constancy, the tendency toward certain invariants, constitutes a characteristic feature and immanent function of perception. This function is as much a condition of perception of objective existence as it is a condition of objective knowledge.” The constitutive factor of objective knowledge, Cassirer concludes, “manifests itself in the possibility of forming invariants.” Again, one needs to flesh out such an approach with an account of fulfillment as mediated both theoretically and practically.
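
Cassirer’s claim can be given a toy formalization (a schematic illustration in modern notation, not Cassirer’s own). If \(G\) is the group of changes in viewing conditions acting on appearances \(x\), an objective feature is a function \(f\) that is invariant under that action:

\[ f(g \cdot x) = f(x) \quad \text{for all } g \in G. \]

Size constancy fits this schema. Under the small-angle approximation, an object of size \(S\) at distance \(d\) subtends a visual angle \(\theta \approx S/d\); the sensed angle varies continuously as the perceiver approaches or retreats, while the recovered quantity \(S \approx \theta d\) stays constant – and it is this invariant, not the flux of angles, that is perceived as the thing’s size.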

4. General Implications

The above, it will be seen, has three general implications for philosophy of science:

a. The Priority of Meaning over Technique

In contrast to positivist-inspired and much mainstream philosophy of science, a phenomenological approach does not view science as pieced together at the outset from praxes, techniques, and methods. Praxes, techniques, and methods – as well as data and results – come into being by interpretation. The generation of meaning does not move from part to whole, but proceeds via a back-and-forth (hermeneutical) process in which phenomena are projected upon an already-existing framework of meaning, the assumptions of which are at least partially brought into question and thereby further reviewed and refined within the ongoing process of interpretation. This process is amply illustrated by episode after episode in the history of science. Relativity theory, for example, evolved as a response to problems and developments experienced by scientists working within Newtonian theory.

b. The Priority of the Practical over the Theoretical

The framework of meaning mentioned above, in terms of which phenomena are interpreted, is not composed merely of tools, texts, and ideas, but involves a culturally and historically determined engagement with the world that is prior to the separation of subject and object. On the one hand, this means that the meanings generated by science are not ahistorical forms or natural kinds that have a transcendent origin. On the other hand, it means that these meanings are also not arbitrary or mere artifacts of discourse; science has a “historical space” in which meanings are realized or not realized. Results are right or wrong; theories are adjudicated as true or false. Later, as the historical space changes, the “same” theory (or more fully developed versions thereof) may be confirmed by different results inconsistent with previous confirmations of the earlier version. What a “cell” is may look very different depending on the techniques and instruments used to apprehend it, but what is happening is not a wholesale replacement of one picture or theory by another, but expanding and evolving knowledge (Crease 2009).

c. The Priority of Situation over Abstract Formalization

Truth always involves a disclosure of something to someone in a particular cultural and historical context. Even scientific knowledge can never completely transcend these culturally and historically determined involvements, leaving them behind as if scientific knowledge consisted in abstractions viewed from nowhere in particular. The particularity of the phenomena disclosed by science is often disguised by the fact that they can show themselves in many different cultural and historical contexts if the laboratory conditions are right, giving rise to the illusion of disembodied knowledge.

5. Layers of Experience

These three implications suggest a way of ordering the kinds of contributions that a phenomenology can make to the philosophy of science. For there are several different phases – focal lengths, one might say – at which to set one’s phenomenology, and it is important to distinguish between them. The focus can be trained within the laboratory on laboratory life, to investigate the attitudes, practices, and objects encountered there. These, however, are nested in the laboratory environment and in the structure of scientific knowledge, which is their exterior expression. Another phase concerns the forms of mediation, both theoretical and instrumental: how they contextualize the just-mentioned attitudes, practices, and objects, and how these are related to their exterior. This phase is nested in turn in another kind of environment, the lifeworld itself, with its ethical and political motivations and power relations. The contribution of phenomenology to the philosophy of science is first of all to describe these phases and how they are nested in each other, and then to describe and characterize each. A philosophical account of science cannot begin, nor is it complete, without a description of these phases.

a. First Phase: Laboratory Life

One phase has to do with specific attitudes, practices, or objects encountered by a researcher doing research in the laboratory environment – with the phenomenology of laboratory perception. Inquiry is one issue here. Conventional textbooks often treat the history of science as a sequence of beliefs about the state of the world, as if it were like a series of snapshots. This creates problems having to do with accounting for how these beliefs change, how they connect up, and what such change implies about the continuity of science. It also rings artificial from the standpoint of laboratory practice. A phenomenological approach, by contrast, considers the path of science as rather like an evolving perception, as a continual process that cannot be neatly dissected into what is in question and what is not, what you believe and what you do not.

The affects of research are another issue. The moment of experience involves more than knowledge, global or local, more than iterations and reiterations. Affects like wonder, astonishment, surprise, incredulity, fascination, and puzzlement are important to inquiry, in mobilizing the transformation of the discourse and our basic way of being related to a field of inquiry. They indicate to us the presence of something more than what is formulated, yet also not arbitrary. When something unexpected and puzzling happens in the lab, it is not a matter of drawing a conceptual blank; it involves a discomfort from running into something that you think you should understand and do not. Taking that discomfort with you is essential to what transformations ensue.

Other key issues of the phenomenology of laboratory experience include trust, communication, data, measurement, and experiment (Crease 1993). Experiment is an especially important topic. For there is nothing automatic about experimentation; experiments are first and foremost material events in the world. Events do not produce numbers – they do not measure themselves – but do so only when an action is planned, prepared, and witnessed. An experiment, therefore, has the character of a performance, and like all performances is a historically and culturally situated hermeneutical process. Scientific objects that appear in laboratory performances may have to be brought into focus, somewhat like the ship Merleau-Ponty describes that has run aground on the shore, whose pieces are at first mixed confusingly with the background, filling us with a vague tension and unease, until our sight is abruptly recast and we see a ship, accompanied by a release of the tension and unease (Crease 1998). In the laboratory, however, what is at first latent in the background and then recognized as an entity belongs to an actively structured process. We are staging what we are trying to recognize, and the way we are staging it may interfere with our recognition, and the experiment may have to be restaged to bring the object into better focus.

b. Second Phase: Forms of Mediation

Second-order features have to do with understanding the contextualization of the laboratory itself. For the laboratory is a special kind of environment. The laboratory is like a garden, walled off to a large extent from the wider and wilder surrounding environment outside. Special things are grown in it that may not appear in the outside world, yet are related to it and help us understand it. To some extent, the laboratory can be examined as the product or embodiment of discursive formations imposing power and unconditioned knowledge claims (Rouse 1987). But only to a limited extent. For the laboratory is not like an institution in which all practices are supposed to work in the same way without changing. It thus cannot be understood by studying discursive formations of power and knowledge exclusively; it is unlike a prison or military camp. A laboratory is a place designed to make it possible to stage performances that show themselves at times as disruptive of discourse, to explore such performances and make sure there really is a disruption, and then to foster the creation of a new discourse.

c. Third Phase: Contextualization of Research

A third phase has to do with the contextualization of research itself, with approaches to the whole of the world, and with understanding why human beings have come to privilege certain kinds of inquiry over others. The lifeworld – a kind of horizon or atmosphere in which we think, pre-loaded with powerful metaphors and images and deeply embedded habits of thought – has its own character and changes over time. This character affects everyone in it, scientists as well as the philosophers who think about science. The conditions of the lifeworld can, for instance, seduce us into thinking that only the measurable is the real. This is the kind of layer addressed by Husserl’s Crisis (Husserl 1970), Heidegger’s “The Question Concerning Technology” (Heidegger 1977), and so forth. The distinction between the second and third phases thus parallels the distinction in sociology of science between micro-sociology and macro-sociology.

6. Phenomenology and Specific Sciences

Phenomenology has also been shown to contribute to understanding certain features or developments in contemporary theories which seem of particular significance for science itself, including relativity, quantum mechanics, and evolution.

a. Relativity

Ryckman (2005) highlights the role of phenomenology in understanding the structure and implications of general relativity and of certain other developments in contemporary physics. The key has to do with the role of general covariance, or the requirement that objects must be specified without any reference to a dynamical background space-time setting. Fields, that is, are not properties of space-time points or regions; they are those points and regions. The result of the requirement of general covariance is thus to remove the physical objectivity of space and time as independent of the mass and energy distribution that shapes the geometry of physical space and time. This, Ryckman writes, is arguably its “most philosophically significant aspect,” for it specifies “what is a possible object of fundamental physical theory.” The point was digested by transcendental philosophers who could understand relativity. One was Cassirer, who saw that covariance could not be treated as a principle of coordination between intuitions and formalisms – which is how Schlick and his follower Hans Reichenbach were treating it – and thus was not part of the “subjective” contribution to science. Rather, it amounted to a restriction on what was allowed as a possible object of field theory to begin with. The requirement of general covariance meant that relativity was about a universe in which objects did not flit about on a space-time stage, but were that stage. Ryckman’s book also demonstrates the role of phenomenology in Weyl’s classic treatment of relativity, and in his formulation of the gauge principle governing the identity of units of measurement. Phenomenology thus played an important role in the articulation of general relativity, and of certain concepts central to modern physics.
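
In schematic terms (a standard textbook formulation rather than Ryckman’s own), general covariance requires that the field equations keep exactly the same form under any smooth change of coordinates \(x \mapsto x'(x)\), with each tensor transforming as \(T'_{\mu\nu} = \frac{\partial x^{\alpha}}{\partial x'^{\mu}} \frac{\partial x^{\beta}}{\partial x'^{\nu}} T_{\alpha\beta}\). Einstein’s equations, for instance, read

\[ G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} \]

in every coordinate system, and no fixed background geometry appears on either side: the metric that defines spatiotemporal relations is itself one of the dynamical fields being solved for. That is the formal counterpart of the point that fields do not sit on space-time points but, in effect, are them.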

b. Quantum Mechanics

Phenomenology may also contribute to explaining the famous disparity between the clarity and correctness of quantum theory and the obscurity and inaccuracy of the language used to speak about its meaning. In Quantum Mechanics and Objectivity (Heelan 1965) and other writings (Heelan 1975), Heelan applies phenomenological tools to this issue. His approach is partly Heideggerian and partly Husserlian. What is Heideggerian is the insistence on the moment prior to object-constitution, the self-aware context or horizon or world or open space in which something appears. The actual appearing (or phenomenon) to the self is a second moment. This Heelan analyses in a Husserlian way by studying the intentionality structure of object constitution and insisting on the duality therein of its (embodied subjective) noetic and (embodied objective) noematic poles. “The noetic aspect is an open field of connected scientific questions addressed by a self-aware situated researcher to empirical experience; the noematic aspect is the response obtained from the situated scientific experiment by the experiencing researcher. The totality of actual and possible answers constitutes a horizon of actual and possible objects of human knowledge and this we call a World.” (Heelan 1965, x; also 3-4). The world then becomes the source of meaning of the word “real,” which is defined as what can appear as an object in the world. The ever-changing and always historical laboratory environment with all its ever-to-be-updated instrumentation and technologies belongs to the noetic pole; it is what makes the objects of science real by bringing them into the world in the act of measurement. Measurement involves “an interaction with a measuring instrument capable of yielding macroscopic sensible data, and a theory capable of explaining what it is that is measured and why the sensible data are observable symbols of it” (Heelan 1965, 30-1). The difference between quantum and classical physics does not lie in the intervention of the observer’s subjectivity but in the nature of the quantum object: “[W]hile in classical physics this is an idealised normative (and hence abstract) object, in quantum physics the object is an individual instance of its idealised norm” (Heelan 1965, xii). For while in classical physics deviations of variables from their ideal norms are treated independently in a statistically based theory of errors, the variations (statistical distribution) of quantum measurements are systematically linked in one formalism.

The apparent puzzle raised by the “reduction of the wave packet” is thus explained via an account of measurement. In the “orthodox” interpretation, the wave function is taken to be the “true” reality, and the act of measurement is seen as changing the incoming wave packet into one of its component eigenfunctions by an anonymous random choice. The sensible outcome of this change is the eigenvalue of the outgoing wave function, which is read from the measuring instrument. (An eigenfunction, very simply, is a function which has the property that, when an operation is performed on it, the result is that function multiplied by a constant, which is called the eigenvalue.) The agent of this transformation is the human spirit or mind as a doer of mathematics. Heelan also sees this process as depending on the conscious choice and participation of the scientist-subject, but through a much different process.
The formulae relate, not to the ideal object in an absolute sense, apart from all human history, culture, and language, but to the physical situation in which the real object is placed, yielding a particular instance of an ensemble or system that admits of numerous potential experimental realizations. The reduction of the wave packet then “is nothing more than the expression of the scientist’s choice and implementation of a measuring process; the post-measurement outcome is different from the means used to prepare the pure state” prior to the implementation of the measurement (Heelan 1965, 184). The wave function describes a situation which is imperfectly described as a fact of the real world; it describes a field of possibilities. That does not mean there is more to be discovered (“hidden variables”) which will make it a part of the real world, nor that only human participation is able to bring it into the real world, but that what becomes a fact of the real world does so by being fleshed out by an instrumental environment into one or another complementary presentation. Heelan’s work therefore shows the value of Continental approaches to the philosophy of science, and exposes the shortcomings of approaches to the philosophy of science that relegate such themes to “somewhere between mysticism and crossword puzzles” (Heelan 1965, x).
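
The eigenfunction language used above can be illustrated compactly (a textbook sketch, independent of Heelan’s own formalism). An observable is represented by an operator \(\hat{A}\), whose eigenfunctions \(\psi_n\) satisfy

\[ \hat{A}\,\psi_n = a_n\,\psi_n, \]

where the constant \(a_n\) is the eigenvalue. A prepared state is in general a superposition \(\psi = \sum_n c_n \psi_n\); on the orthodox account, a measurement of \(\hat{A}\) yields one eigenvalue \(a_n\) with probability \(\lvert c_n \rvert^{2}\) and leaves the system in the corresponding \(\psi_n\). This discontinuous replacement of \(\psi\) by one of its components is the “reduction of the wave packet”; on Heelan’s reading, the formula describes a field of possibilities that a chosen instrumental setup fleshes out into one actual, complementary presentation.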

c. Evolution

One of the most significant discoveries of 20th century phenomenology was of what is variously called embodiment, lived body, flesh, or animate form, the experiences of which are those of a unified, self-aware being, and which cannot be understood apart from reflection on concrete human experience. The body is not a bridge that connects subject and world, but rather a primordial and unsurpassable unity productive of there being persons and worlds at all. Husserl was aware even of the significance of evolution and movement. His use of the expression “animate organism” betrays a recognition that he was discussing “something not exclusive to humans, that is, something broader and more fundamental than human animate organism” (Sheets-Johnstone 1999, 132); thus, a need to discuss matters across the evolutionary spectrum. Failing to examine our evolutionary heritage, in fact, means misconceiving the wellsprings of our humanity (Sheets-Johnstone 1999). Biologists who developed phenomenological treatments of animal behavior include von Uexküll, to whom Heidegger refers in the section on animals in Fundamental Concepts of Metaphysics, and Adolf Portmann, both of whom discussed the animal’s Umwelt. And Sheets-Johnstone has emphasized that phenomenology needs to examine not only the ontogenetic dimension – infant behavior – but also the phylogenetic one. If we treat human animate form as unique, we shirk our phenomenological duties and end up with incomplete and distorted accounts containing implicit and unexamined notions. “[G]enuine understandings of consciousness demand close and serious study of evolution as a history of animate form” (Sheets-Johnstone 1999, 42).

7. Conclusion

Developing a phenomenological account of science is important for the philosophy of science insofar as it has the potential to move us beyond a dead-end in which that discipline has entrapped itself. The dead-end involves having to choose between: on the one hand, assuming that a fixed, stable order pre-exists human beings and is uncovered or approximated in scientific activity; and on the other hand, assuming that the order is imposed from outside, by the inquirer. Each approach is threatened, though in different ways, by the prospect of having to incorporate a role for history and culture. Phenomenology is not as threatened, for its core doctrine of intentionality implies that parts are only understood against the background of wholes and objects against the background of their horizons, and that while we discover objects as invariants within horizons, we also discover ourselves as those worldly embodied presences to whom the objects appear. It thus provides an adequate philosophical foundation for reintroducing history and culture into the philosophy of the natural sciences.

8. References and Further Reading

  • Babich, Babette, 2010. “Early Continental Philosophy of Science,” in A. Schrift, ed., The History of Continental Philosophy, V. 3, Durham: Acumen, 263-286.
  • Cairns, Dorion, 1999. “The Theory of Intentionality in Husserl,” ed. by L. Embree, F. Kersten, and R. Zaner, Journal of the British Society for Phenomenology 32: 116-124.
  • Cassirer, E. 1944. “The Concept of Group and the Theory of Perception,” tr. A. Gurwitsch, Philosophy and Phenomenological Research 5: 1-35.
  • Chasan, S. 1992. “Bibliography of Phenomenological Philosophy of Natural Science,” in Hardy & Embree, 1992, pp. 265-290.
  • Crease, R. 1993. The Play of Nature. Bloomington, IN: Indiana University Press.
  • Crease, R. ed. 1997. Hermeneutics and the Natural Sciences. Dordrecht: Kluwer.
  • Crease, R. 1998. “What is an Artifact?”  Philosophy Today, SPEP Supplement, 160-168.
  • Crease, R. 2009. “Covariant Realism.”  Human Affairs 19, 223-232.
  • Crease, R. 2011. “Philosophy Rules.”  Physics World, August 2011.
  • Ginev, D. 2006. The Context of Constitution: Beyond the Edge of Epistemological Justification (Boston Studies in the Philosophy of Science). Dordrecht: Springer.
  • Gurwitsch, A. 1966. “The Last Work of Husserl.” In Studies in Phenomenology and Psychology. Evanston: Northwestern University Press, 397-447.
  • Gutting, G. (ed.). 2005. Continental Philosophy of Science. Oxford: Blackwell.
  • Hardy, L., and Embree, L., 1992. Phenomenology of Natural Science. Dordrecht: Kluwer.
  • Heelan, P. 1965. Quantum Mechanics and Objectivity. The Hague: Nijhoff.
  • Heelan, P. 1967. “Horizon, Objectivity and Reality in the Physical Sciences.” International Philosophical Quarterly 7: 375-412.
  • Heelan, P. 1969. “The Role of Subjectivity in Natural Science.” Proceedings of the American Catholic Philosophical Association. Washington, D.C.
  • Heelan, P. 1970a. “Quantum Logic and Classical Logic: Their Respective Roles.” Synthese 21: 2-33.
  • Heelan, P. 1970b. “Complementarity, Context-Dependence and Quantum Logic.” Foundations of Physics 1: 95-110.
  • Heelan, P. 1972. “Toward a New Analysis of the Pictorial Space of Vincent van Gogh.” Art Bulletin 54: 478-492.
  • Heelan, P. 1975. “Heisenberg and Radical Theoretical Change.” Zeitschrift für allgemeine Wissenschaftstheorie 6: 113-136.
  • Heelan, P. 1983. Space-Perception and the Philosophy of Science. Berkeley: University of California Press.
  • Heelan, P. 1987. “Husserl’s Later Philosophy of Science,” Philosophy of Science 54: 368-90.
  • Heelan, P. 1988. “Experiment and Theory: Constitution and Reality,” Journal of Philosophy 85, 515-24.
  • Heelan, P. 1991. “Hermeneutical Philosophy and the History of Science.” In Nature and Scientific Method: William A. Wallace Festschrift, ed. Daniel O. Dahlstrom. Washington, D.C.: Catholic University of America Press, 23-36.
  • Heelan, P. 1995. “An Anti-epistemological or Ontological Interpretation of the Quantum Theory and Theories Like It.” In Continental and Postmodern Perspectives in the Philosophy of Science, ed. B. Babich, D. Bergoffen, and S. Glynn. Aldershot/Brookfield, VT: Avebury Press, 55-68.
  • Heelan, P. 1997. “Why a Hermeneutical Philosophy of the Natural Sciences?” In Crease 1997, 13-40.
  • Heidegger, M. 1977. The Question Concerning Technology, tr. W. Lovitt. New York: Garland.
  • Husserl, E. 1970. The Crisis of European Sciences and Transcendental Phenomenology, tr. D. Carr. Evanston: Northwestern University Press.
  • Husserl, E. 2001. Analyses Concerning Passive and Active Synthesis: Lectures on Transcendental Logic, tr. A. Steinbock. Boston: Springer.
  • Ihde, D. 1990. Technology and the Lifeworld. Bloomington: Indiana University Press.
  • Kockelmans, J. and Kisiel, T., 1970. Phenomenology and the Natural Sciences. Evanston: Northwestern University Press.
  • McGuire, J. and Tuchanska, B. 2001. Science Unfettered: A Philosophical Study in Sociohistorical Ontology. Athens: Ohio University Press.
  • Michl, M. 2005. Towards a Critical Gadamerian Philosophy of Science. MA thesis, University of Auckland.
  • Rouse, J. 1987. Knowledge and Power: Toward a Political Philosophy of Science. Ithaca: Cornell University Press.
  • Ryckman, T. 2005. The Reign of Relativity: Philosophy in Physics 1915-1925. New York: Oxford University Press.
  • Seebohm, T. 2004. Hermeneutics: Method and Methodology. Springer.
  • Sheets-Johnstone, M. 1999. The Primacy of Movement. Amsterdam: John Benjamins.
  • Ströker, Elisabeth 1997. The Husserlian Foundations of Science. Boston: Kluwer.
  • Vallor, Shannon, 2009. “The fantasy of third-person science: Phenomenology, ontology, and evidence.” Phenomenology and the Cognitive Sciences 8: 1-15.


Author Information

Robert P. Crease
Email: rcrease@notes.cc.sunysb.edu
Stony Brook University
U. S. A.

Philosophy of Technology

Like many domain-specific subfields of philosophy, such as philosophy of physics or philosophy of biology, philosophy of technology is a comparatively young field of investigation. It is generally thought to have emerged as a recognizable philosophical specialization in the second half of the 19th century, its origins often being located in the publication of Ernst Kapp’s book, Grundlinien einer Philosophie der Technik (Kapp, 1877). Philosophy of technology continues to be a field in the making and as such is characterized by the coexistence of a number of different approaches to (or, perhaps, styles of) doing philosophy. This presents a problem for anyone aiming to give a brief overview of the field, because “philosophy of technology” does not name a clearly delimited academic domain of investigation characterized by a general agreement among investigators on what the central topics, questions and aims are, and on who the principal authors and positions are. Instead, “philosophy of technology” denotes a considerable variety of philosophical endeavors that all in some way reflect on technology.

There is, then, an ongoing discussion among philosophers, scholars in science and technology studies, as well as engineers about what philosophy of technology is, what it is not, and what it could and should be. These questions will form the background against which the present article presents the field. Section 1 begins by sketching a brief history of philosophical reflection on technology from Greek Antiquity to the rise of contemporary philosophy of technology in the mid-19th to mid-20th century. This is followed by a discussion of the present state of affairs in the field (Section 2). In Section 3, the main approaches to philosophy of technology and the principal kinds of questions which philosophers of technology address are mapped out. Section 4 concludes by presenting two examples of current central discussions in the field.

Table of Contents

  1. A Brief History of Thinking about Technology
    1. Greek Antiquity: Plato and Aristotle
    2. From the Middle Ages to the Nineteenth Century: Francis Bacon
    3. The Twentieth Century: Martin Heidegger
  2. Philosophy of Technology: The State of the Field in the Early Twenty-First Century
  3. How Philosophy of Technology Can Be Done: The Principal Kinds of Questions That Philosophers of Technology Ask
  4. Two Exemplary Discussions
    1. What Is (the Nature of) Technology?
    2. Questions Regarding Biotechnology
  5. References and Further Reading

1. A Brief History of Thinking about Technology

The origin of philosophy of technology can be placed in the second half of the 19th century. But this does not mean that philosophers before the mid-19th century did not address questions that would today be thought of as belonging in the domain of philosophy of technology. This section sketches that history of thinking about technology, focusing on a few key figures: Plato, Aristotle, Francis Bacon and Martin Heidegger.

a. Greek Antiquity: Plato and Aristotle

Philosophers in Greek antiquity already addressed questions related to the making of things. The terms “technique” and “technology” have their roots in the ancient Greek notion of “techne” (art, or craft-knowledge), that is, the body of knowledge associated with a particular practice of making (cf. Parry, 2008). Originally the term referred to a carpenter’s craft-knowledge about how to make objects from wood (Fischer, 2004: 11; Zoglauer, 2002: 11), but later it was extended to include all sorts of craftsmanship, such as the ship’s captain’s techne of piloting a ship, the musician’s techne of playing a particular kind of instrument, the farmer’s techne of working the land, the statesman’s techne of governing a state or polis, or the physician’s techne of healing patients (Nye, 2006: 7; Parry, 2008).

In classical Greek philosophy, reflection on the art of making involved both reflection on human action and metaphysical speculation about what the world was like. In the Timaeus, for example, Plato unfolded a cosmology in which the natural world was understood as having been made by a divine Demiurge, a creator who made the various things in the world by giving form to formless matter in accordance with the eternal Ideas. In this picture, the Demiurge’s work is similar to that of a craftsman who makes artifacts in accordance with design plans. (Indeed, the Greek word “Demiourgos” originally meant “public worker” in the sense of a skilled craftsman.) Conversely, according to Plato (Laws, Book X) what craftsmen do when making artifacts is to imitate nature’s craftsmanship – a view that was widely endorsed in ancient Greek philosophy and continued to play an important role in later stages of thinking about technology. On Plato’s view, then, natural objects and man-made objects come into being in similar ways, both being made by an agent according to pre-determined plans.

This connection between human action and the state of affairs in the world is also found in Aristotle’s works. For Aristotle, however, this connection did not consist in a metaphysical similarity in the ways in which natural and man-made objects come into being. Instead of drawing a metaphysical similarity between the two domains of objects, Aristotle pointed to a fundamental metaphysical difference between them, while at the same time making epistemological connections between different modes of knowing, on the one hand, and the different domains of the world about which knowledge can be achieved, on the other. In the Physics (Book II, Chapter 1), Aristotle made a fundamental distinction between the domains of physis (the domain of natural things) and poiesis (the domain of non-natural things). The fundamental distinction between the two domains consisted in the kinds of principles of existence underlying the entities that exist in them. The natural realm for Aristotle consisted of things that have the principles by which they come into being, remain in existence and “move” (in the senses of movement in space, of performing actions and of change) within themselves. A plant, for instance, comes into being and remains in existence by means of growth, metabolism and photosynthesis, processes that operate by themselves without the interference of an external agent. The realm of poiesis, in contrast, encompasses things of which the principles of existence and movement are external to them and can be attributed to an external agent – a wooden bed, for example, exists as a consequence of a carpenter’s action of making it and an owner’s action of maintaining it.

Here it needs to be kept in mind that on Aristotle’s worldview every entity by its nature was inclined to strive toward its proper place in the world. For example, unsupported material objects move downward, because that is the natural location for material objects. The movement of a falling stone could thus be interpreted as a consequence of the stone’s internal principles of existence, rather than as a result of the operation of a gravitational force external to the stone. On Aristotle’s worldview, contrary to our present-day worldview, it thus made perfect sense to think of all natural objects as being subject to their own internal principles of existence and in this respect being fundamentally distinct from artifacts, which are subject to externally operating principles of existence (to be found in the agents that make and maintain them).

In the Nicomachean Ethics (Book VI, Chapters 3-7), Aristotle distinguished between five modes of knowing, or of achieving truth, that human beings are capable of. He began with two distinctions that apply to the human soul. First, the human soul possesses a rational part and a part that does not operate rationally. The non-rational part is shared with other animals (it encompasses the appetites, instincts, etc.), whereas the rational part is what makes us human – it is what makes man the animal rationale. The rational part of the soul in turn can be subdivided further into a scientific part and a deductive or ratiocinative part. The scientific part can achieve knowledge of those entities of which the principles of existence could not have been different from what they are; these are the entities in the natural domain of which the principles of existence are internal to them and thus could not have been different. The deductive or ratiocinative part can achieve knowledge of those entities of which the principles of existence could have been different; the external principles of existence of artifacts and other things in the non-natural domain could have been different in that, for example, the silver smith who made a particular silver bowl could have had a different purpose in mind than the purpose for which the bowl was actually made. The five modes of knowledge that humans are capable of – often denoted as virtues of thought – are faculties of the rational part of the soul and in part map onto the scientific part / deductive part dichotomy. They are what we today would call science or scientific knowledge (episteme), art or craft knowledge (techne), prudence or practical knowledge (phronesis), intellect or intuitive apprehension (nous) and wisdom (sophia). While episteme applies to the natural domain, techne and phronesis apply to the non-natural domain, phronesis applying to actions in general life and techne to the crafts. Nous and sophia, however, do not map onto these two domains: while nous yields knowledge of unproven (and not provable) first principles and hence forms the foundation of all knowledge, sophia is a state of perfection that can be reached with respect to knowledge in general, including techne.

Both Plato and Aristotle thus distinguished between techne and episteme as pertaining to different domains of the world, but also drew connections between the two. The reconstruction of the actual views of Plato and Aristotle, however, remains a matter of interpretation (see Parry, 2008). For example, while many authors interpret Aristotle as endorsing the widespread view of technology as consisting in the imitation of nature (for example, Zoglauer, 2002: 12), Schummer (2001) recently argued that for Aristotle this was not a characterization of technology or an explication of the nature of technology, but merely a description of how technological activities often (but not necessarily) take place. And indeed, it seems that in Aristotle’s account of the making of things the idea of man imitating nature is much less central than it is for Plato, who draws a metaphysical similarity between the Demiurge’s work and the work of craftsmen.

b. From the Middle Ages to the Nineteenth Century: Francis Bacon

In the Middle Ages, the ancient dichotomy between the natural and artificial realms and the conception of craftsmanship as the imitation of nature continued to play a central role in understanding the world. On the one hand, the conception of craftsmanship as the imitation of nature came to be thought of as applying not only to what we would now call “technology” (that is, the mechanical arts), but also to art. Both were thought of as the same sort of endeavor. On the other hand, however, some authors began to consider craftsmanship as being more than merely the imitation of nature’s works, holding that in their craftsmanship humans were also capable of improving upon nature’s designs. This conception of technology led to an elevated appreciation of technical craftsmanship, which, as the mere imitation of nature, used to be thought of as inferior to the higher arts in the Scholastic canon that was taught at medieval colleges. The philosopher and theologian Hugh of St. Victor (1096-1141), for example, in his Didascalicon compared the seven mechanical arts (weaving, instrument and armament making, nautical art and commerce, hunting, agriculture, healing, dramatic art) with the seven liberal arts (the trivium of grammar, rhetoric, and dialectic, and the quadrivium of astronomy, geometry, arithmetic, and music) and incorporated the mechanical arts together with the liberal arts into the corpus of knowledge that was to be taught (Whitney, 1990: 82ff.; Zoglauer, 2002: 13-16).

While the Middle Ages thus can be characterized by an elevated appreciation of the mechanical arts, with the transition into the Renaissance thinking about technology gained new momentum due to the many technical advances that were being made. A key figure at the end of the Renaissance is Francis Bacon (1561-1626), who was both an influential natural philosopher and an important English statesman (among other things, Bacon held the offices of Lord Keeper of the Great Seal and later Lord Chancellor). In his Novum Organum (1620), Bacon proposed a new, experiment-based method for the investigation of nature and emphasized the intrinsic connectedness of the investigation of nature and the construction of technical “works”. In his New Atlantis (written in 1623 and published posthumously in 1627), he presented a vision of a society in which natural philosophy and technology occupied a central position. In this context it should be noted that before the advent of science in its modern form the investigation of nature was conceived of as a philosophical project, that is, natural philosophy. Accordingly, Bacon did not distinguish between science and technology, as we do today, but saw technology as an integral part of natural philosophy and treated the carrying out of experiments and the construction of technological “works” on an equal footing. On his view, technical “works” were of the utmost practical importance for the improvement of the living conditions of people, but even more so as indications of the truth or falsity of our theories about the fundamental principles and causes in nature (see Novum Organum, Book I, aphorism 124).

New Atlantis is the fictional report of a traveler who arrives at an as yet unknown island state called Bensalem and informs the reader about the structure of its society. Rather than constituting a utopian vision of an ideal society, Bensalem’s society was modeled on the English society of Bacon’s own time, which had become increasingly industrialized and in which the need for technical innovations – new instruments and devices to help with the production of goods and the improvement of human life – was clearly felt (compare Kogan-Bernstein, 1959). The utopian vision in New Atlantis pertained only to the organization of the practice of natural philosophy. Accordingly, Bacon spent much of New Atlantis describing the most important institution in the society of Bensalem, Salomon’s House, an institution devoted entirely to inquiry and technological innovation.

Bacon provided a long list of the various areas of knowledge, techniques, instruments and devices that Salomon’s House possesses, as well as descriptions of the way in which the House is organized and the different functions that its members fulfill. In his account of Salomon’s House, Bacon’s unbridled optimism about technology can be seen: Salomon’s House appears to be in possession of every possible (and impossible) technology that one could think of, including several that were only realized much later (such as flying machines and submarines) and some that are impossible to realize. (Salomon’s House even possesses several working perpetuum mobile machines, that is, machines that once started up will remain in motion forever and are able to do work without consuming energy; contemporary thermodynamics shows that such machines are impossible.) It is repeatedly stated that Salomon’s House works for the benefit of Bensalem’s people and society: the members of the House, for example, regularly travel through the country to inform the people about new inventions, to warn them about upcoming catastrophic events, such as earthquakes and droughts, whose occurrence Salomon’s House has been able to forecast, and to advise them on how to prepare for these events.

While Bacon is often associated with the slogan “knowledge is power”, contrary to how the slogan is often understood today (where “power” is often taken to mean political power or power within society), what is meant is that knowledge of natural causes gives us power over nature that can be used for the benefit of mankind. This can be seen, for instance, from the way Bacon described the Bensalemians’ reasons for founding Salomon’s House: “The end of our foundation is the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire to the effecting of all things possible.” Here, inquiry into “the knowledge of causes, and secret motions of things” and technological innovation by producing what is possible (“enlarging of the bounds of human empire to the effecting of all things possible”) are explicitly mentioned as the two principal goals of the most important institution in society. (It should also be noted that Bacon himself never formulated the slogan “knowledge is power”. Rather, in the section “Plan of the Work” in the Instauratio Magna he speaks of the twin aims of knowledge – Bacon’s term is “scientia” – and power – “potentia” – as coinciding in the devising of new works, because one can only have power over nature when one knows and follows nature’s causes. The connection between knowledge and power here is the same as in the description of the purpose of Salomon’s House.)

The improvement of life by means of natural philosophy and technology is a theme that pervades much of Bacon’s work, including the New Atlantis and his unfinished opus magnum, the Instauratio Magna. Bacon saw the Instauratio Magna, the “Great Renewal of the Sciences”, as the culmination of his life’s work on natural philosophy. It was to encompass six parts, presenting an overview and critical assessment of the knowledge about nature available at the time, a presentation of Bacon’s new method for investigating nature, a mapping of the blank spots in the corpus of available knowledge, and numerous examples of how natural philosophy would progress when using Bacon’s new method. It was clear to Bacon that his work could only be the beginning of a new natural philosophy, to be pursued by later generations of natural philosophers, and that he himself would not be able to finish the project he had started in the Instauratio. In fact, even the writing of the Instauratio proved much too ambitious a project for one man: Bacon finished only the second part, the Novum Organum, in which he presented his new method for the investigation of nature.

With respect to this new method, Bacon argued against the medieval tradition of relying on the Aristotelian/Scholastic canon and other written sources as the sources of knowledge, proposing instead a view of knowledge as gained from systematic empirical discovery. For Bacon, craftsmanship and technology played a threefold role in this context. First, knowledge was to be gained by means of observation and experimentation, so inquiry in natural philosophy relied heavily on the construction of instruments, devices and other works of craftsmanship to make empirical investigations possible. Second, as discussed above, natural philosophy should not be limited to the study of nature for knowledge’s sake but should also always inquire into how newly gained knowledge could be used in practice to extend man’s power over nature for the benefit of society and its inhabitants (Kogan-Bernstein, 1959; Fischer, 1996: 284-287). And third, technological “works” served as the empirical foundations of knowledge about nature in that a successful “work” could count as an indication of the truth of the underlying theories about the fundamental principles and causes in nature (see above).

While in many places in his writings Bacon suggests that the “pure” investigation of nature and the construction of new “works” are of equal importance, he did prioritize technology. To be sure, from the description that Bacon gives of how Salomon’s House is organized it is clear that the members of Salomon’s House also practice the “pure” investigation of nature without much regard for its practical use; the “pure” investigation of nature seems to have its own place within the House and to be able to operate autonomously. Still, as a whole, the institution of Salomon’s House is decidedly practice-oriented, such that the relative freedom of inquiry in the end manifests itself within the confines of an environment in which practical applicability is what counts. Bacon draws the same picture in the Instauratio Magna, where he explicitly acknowledges the value of “pure” investigation while at the same time emphasizing that the true aims of natural philosophy (“scientiae veros fines” – see towards the end of the Preface of the Instauratio Magna) concern its benefits and usefulness for human life.

c. The Twentieth Century: Martin Heidegger

Notwithstanding the fact that philosophers have been reflecting on technology-related matters ever since the beginning of Western philosophy, those pre-19th-century philosophers who looked at aspects of technology did not do so with the aim of understanding technology as such. Rather, they examined technology in the context of more general philosophical projects aimed at clarifying traditional philosophical issues other than technology (Fischer, 1996: 309). It is probably safe to say that before the mid to late 19th century no philosopher considered himself a specialized philosopher of technology, or even a general philosopher with an explicit concern for understanding the phenomenon of technology as such, and that no full-fledged philosophies of technology had yet been elaborated.

No doubt one reason for this is that before the mid to late 19th century technology had not yet become the tremendously powerful and ubiquitously manifest phenomenon that it would later become. The same holds with respect to science, for that matter: only after the investigation of nature stopped being thought of as a branch of philosophy – natural philosophy – and the contemporary notion of science had emerged could philosophy of science arise as a field of investigation. (Note that the term “scientist”, as the name for a particular profession, was coined in the first half of the 19th century by the polymath and philosopher William Whewell – see Snyder, 2009.) Thus, by the end of the 19th century natural science in its present form had emerged from natural philosophy and technology had manifested itself as a phenomenon distinct from science. Accordingly, “until the twentieth century the phenomenon of technology remained a background phenomenon” (Ihde, 1991: 26) and the philosophy of technology “is primarily a twentieth-century development” (Ihde, 2009: 55).

While one reason for the emergence of the philosophy of technology in the 20th century is the rapid development of technology at the time, according to the German philosopher Martin Heidegger an important additional reason should be pointed out. According to Heidegger, not only did technology in the 20th century develop more rapidly than in previous times and consequently become a more visible factor in everyday life, but the nature of technology itself also underwent a profound change at the same time. The argument is found in a famous lecture that Heidegger gave in 1955, titled The Question Concerning Technology (Heidegger, 1962), in which he inquired into the nature of technology. Note that although Heidegger actually talked about “Technik” (and his inquiry was into “das Wesen der Technik”; Heidegger, 1962: 5), the question he addressed is about technology. In German, “Technologie” (technology) is often used to denote modern “high-tech” technologies (such as biotechnology, nanotechnology, etc.), while “Technik” is used to denote both the older mechanical crafts and the modern established fields of engineering. (“Elektrotechnik”, for example, is electrical engineering.) As will be discussed in Section 2, philosophy of technology as an academic field arose in Germany in the form of philosophical reflection on “Technik”, not “Technologie”. While the difference between the two terms remains important in contemporary German philosophy of technology (see Section 4.a below), both “Technologie” and “Technik” are commonly translated as “technology”, and what in German is called “Technikphilosophie” goes in English by the name of “philosophy of technology”.

On Heidegger’s view, one aspect of the nature of both older and contemporary technology is that technology is instrumental: technological objects (tools, windmills, machines, etc.) are means by which we can achieve particular ends. However, Heidegger argued, it is often overlooked that technology is more than just the devising of instruments for particular practical purposes. Technology, he argued, is also a way of knowing, a way of uncovering the hidden natures of things. In his often idiosyncratic terminology, he wrote that “Technology is a way of uncovering” (“Technik ist eine Weise des Entbergens”; Heidegger, 1962: 13), where “Entbergen” means “to uncover” in the sense of uncovering a hidden truth. (For example, Heidegger (1962: 11-12) connects his term “Entbergen” with the Greek term “aletheia”, the Latin “veritas” and the German “Wahrheit”.) Heidegger thus adopted a view of the nature of technology close to that of Aristotle, who conceived of techne as one of five modes of knowing, as well as to that of Francis Bacon, who considered technical works as indications of the truth or falsity of our theories about the fundamental principles and causes in nature.

The difference between older and contemporary technology, Heidegger went on to argue, consists in how this uncovering of truth takes place. According to Heidegger, older technology consisted in “Hervorbringen” (Heidegger, 1962: 11). Heidegger here plays with the dual meaning of the term: the German “Hervorbringen” means both “to make” (the making or production of things, material objects, sound effects, etc.) and “to bring to the fore”. Thus the German term can be used to characterize both the “making” aspect of technology and its aspect of being a way of knowing. While contemporary technology retains the “making” aspect of older technology, Heidegger argued that as a way of knowing it no longer can be understood as Hervorbringen (Heidegger, 1962: 14). In contrast to older technology, contemporary technology as a way of knowing consists in the challenging (“Herausfordern” in German) of both nature (by man) and man (by technology). The difference is that while older technologies had to submit to the standards set by nature (e.g., the work that an old windmill can do depends on how strongly the wind blows), contemporary technologies can themselves set the standards (for example, in modern river dams a steady supply of energy can be guaranteed by actively regulating the water flow). Contemporary technology can thus be used to challenge nature: “Heidegger understands technology as a particular manner of approaching reality, a dominating and controlling one in which reality can only appear as raw material to be manipulated” (Verbeek, 2005: 10). In addition, on Heidegger’s view contemporary technology challenges man to challenge nature in the sense that we are constantly being challenged to realize some of the hitherto unrealized potential offered by nature – that is, to devise new technologies that force nature in novel ways and in so doing uncover new truths about nature.

Thus, in the 20th century, according to Heidegger, technology as a way of knowing assumed a new nature. Older technology can be thought of as imitating nature, where the process of imitation is inseparably connected to the uncovering of the hidden nature of the natural entities that are being imitated. Contemporary technology, in contrast, places nature in the position of a supplier of resources and thereby places man in an epistemic position with respect to nature that differs from the epistemic relation of imitating nature. When we imitate nature, we examine entities and phenomena that already exist. But products of contemporary technology, such as the Hoover Dam or a nuclear power plant, are not like already existing natural objects. On Heidegger’s view, they force nature to deliver energy (or another kind of resource) whenever we ask for it and therefore cannot be understood as objects made by man in a mode of imitating nature – nature, after all, does not herself produce things that force her to deliver resources in the way that man-made things do. This means that there is a fundamental divide between older and contemporary technology, making the rise of philosophy of technology in the late 19th and the 20th century an event that occurred in parallel to a profound change in the nature of technology itself.

2. Philosophy of Technology: The State of the Field in the Early Twenty-First Century

In accordance with the preceding historical sketch, the history of philosophy of technology – as the history of philosophical thinking about issues concerned with the making of things, the use of techne, the challenging of nature and so forth – can be (very) roughly divided into three major periods.

The first period runs from Greek antiquity through the Middle Ages. In this period techne was conceived of as one among several kinds of human knowledge, namely the craft-knowledge that features in the domain of man-made objects and phenomena. Accordingly, philosophical attention to technology was part of the philosophical examination of human knowledge more generally. The second period runs roughly from the Renaissance through the Industrial Revolution and is characterized by an elevated appreciation of technology as an increasingly manifest but not yet all-pervasive phenomenon. Here we see a general interest in technology not only as a domain of knowledge but also as a domain of construction, that is, of the making of artifacts with a view to the improvement of human life (for instance, in Francis Bacon’s vision of natural philosophy). However, there was as yet no particular philosophical interest in technology per se beyond the issues that earlier philosophers had also considered. The third period is the contemporary period (from the mid 19th century to the present), in which technology has become such a ubiquitous and important factor in human lives and societies that it has begun to manifest itself as a subject sui generis of philosophical reflection. Of course, this is only a very rough periodization, and different ways of periodizing the history of philosophy of technology can be found in the literature – e.g., Wartofsky (1979), Feenberg (2003: 2-3) or Franssen and others (2009: Sec. 1). Moreover, this periodization applies only to Western philosophy. To be sure, there is much to be said about technology and thinking about technology in technologically advanced ancient civilizations in China, Persia, Egypt and elsewhere, but this cannot be done within the confines of the present article. Still, the periodization proposed above is a useful first-order subdivision of the history of thinking about technology, as it highlights important changes in how technology was and is understood.

The first monograph on philosophy of technology appeared in Germany in the second half of the 19th century in the form of Ernst Kapp’s book, Grundlinien einer Philosophie der Technik (“Foundations of a Philosophy of Engineering”) (Kapp, 1877). This book is commonly seen as the origin of the field (Rapp, 1981: 4; Ferré, 1988: 10; Fischer, 1996: 309; Zoglauer, 2002: 9; De Vries, 2005: 68; Ropohl, 2009: 13), because the term “philosophy of technology” (or rather, “philosophy of technics”) was first introduced there. Kapp used it to denote the philosophical inquiry into the effects of the use of technology on human society. (Mitcham (1994: 20), however, mentions the Scottish chemical engineer Andrew Ure as a precursor to Kapp in this context. Apparently, in 1835 Ure coined the phrase “philosophy of manufactures” in a treatise on philosophical issues concerning technology.) For several decades after the publication of Kapp’s work not much philosophical work focusing on technology appeared in print, and the field did not really gain momentum until well into the 20th century. Again, the main publications appeared in Germany (for example, Dessauer, 1927; Jaspers, 1931; Diesel, 1939).

It should be noted that if philosophy of technology as an academic field indeed started here, the field’s origins lie outside professionalized philosophy. Jaspers was a philosopher, but Kapp and most of the other early authors on the topic were not professional philosophers. Kapp, for example, had earned a doctorate in classical philology and spent much of his life as a schoolteacher of geography and history and as an independent writer and untenured university lecturer (a German “Privatdozent”). Dessauer was an engineer (and an advocate of an unconditionally optimistic view of technology), Ure a chemical engineer, and Diesel (son of the inventor of the Diesel engine, Rudolf Diesel) an independent writer.

In his book, Kapp argued that technological artifacts should be thought of as man-made imitations and improvements of human organs (see Brey, 2000; De Vries, 2005). The underlying idea is that human beings have limited capacities: we have limited visual powers, limited muscular strength, limited resources for storing information, and so on. These limitations have led human beings to attempt to improve their natural capacities by means of artifacts such as cranes, lenses, etc. On Kapp’s view, such improvements should not so much be thought of as extensions or supplements of natural human organs, but rather as their replacements (Brey, 2000: 62). Because technological artifacts are supposed to serve as replacements of natural organs, on Kapp’s view they must be devised as imitations of these organs – after all, they are intended to perform the same function – or at least as being modeled on natural organs: “since the organ whose utility and power is to be increased is the standard, the appropriate form of a tool can only be derived from that organ” (Kapp, quoted and translated by Brey, 2000: 62). This way of understanding technology echoes the view of technology as the imitation of nature by man that was already found in Plato and Aristotle, was dominant throughout the Middle Ages, and continued to be endorsed later.

The period after World War II saw a sharp increase in the number of published reflections on technology which, for obvious reasons given the role of technology in both World Wars, often expressed a deeply critical and pessimistic view of the influence of technology on human societies, human values and the human life-world in general. Because of this increase in reflection on technology after World War II, some authors locate the emergence of the field in that period rather than in the late 19th century (for example, Ihde, 1993: 14-15, 32-33; Dusek, 2006: 1-2; Kroes and others, 2008: 1). Ihde (1993: 32) points to an additional reason to locate the beginning of the field in the period following World War II: historians of technology rate World War II as the technologically most innovative period in human history until then, as during that war many new technologies were introduced that continued to drive technological innovation, as well as the associated reflection on such innovation, for several decades to follow. From this perspective, then, it was in World War II and the following period that technology reached the level of prominence it retains in the early 21st century and, accordingly, became a focal topic for philosophy. It became “a force too important to overlook”, as Ihde (1993: 32) writes.

A still different picture is obtained if one takes the existence of specialized professional societies, dedicated academic journals and topic-specific textbooks, as well as a specific name identifying the field, as typical signs that a particular field of investigation has become established as a branch of academia. (Note that in his influential The Structure of Scientific Revolutions, historian and philosopher of science Thomas Kuhn mentions these as signs of the establishment of a new paradigm, albeit not of a new field or discipline – see Kuhn, 1970: 19.) By these indications, the process of establishing philosophy of technology as an academic field began only in the late 1970s and early 1980s – as Ihde (1993: 45) writes, “from the 1970s on, philosophy of technology began to take its place alongside the other ‘philosophies of …’” – and continued into the early 21st century.

As Mitcham (1994: 33) remarks, the term “philosophy of technology” was not widely used outside Germany until the 1980s (in Germany the field goes by “Technikphilosophie” or “Philosophie der Technik” rather than “philosophy of technology”). In 1976, the Society for Philosophy and Technology was founded as the first professional society in the field. In the 1980s introductory textbooks on philosophy of technology began to appear. One of the very first (Ferré, 1988) appeared in the famous Prentice Hall Foundations of Philosophy series, which included several hallmark introductory texts in philosophy (such as Carl Hempel’s Philosophy of Natural Science, David Hull’s Philosophy of Biological Science, William Frankena’s Ethics and Wesley Salmon’s Logic). In recent years numerous introductory texts have become available, including Ihde (1993), Mitcham (1994), Pitt (2000), Bucciarelli (2003), Fischer (2004), De Vries (2005), Dusek (2006), Irrgang (2008) and Nordmann (2008). Anthologies of classic texts in the field and encyclopedias of philosophy of technology have only recently begun to appear (e.g., Scharff & Dusek, 2003; Kaplan, 2004; Meijers, 2009; Olsen, Pedersen & Hendricks, 2009; Olsen, Selinger & Riis, 2009). However, in the early 21st century there were still few academic journals dedicated specifically to philosophy of technology and covering the entire range of themes in the field.

“Philosophy of technology” denotes a considerable variety of philosophical endeavors. There is an ongoing discussion among philosophers of technology and scholars in related fields (e.g., science and technology studies, and engineering) on how philosophy of technology should be conceived. One would expect to find a clear answer to this question in the available introductory texts, along with general agreement on the central themes and questions of the field, on who its most important authors are, and on what its fundamental positions, theories, theses and approaches are. In the case of philosophy of technology, however, comparing recent textbooks reveals a striking lack of consensus about what kind of endeavor philosophy of technology is. According to some authors, the sole commonality of the various endeavors called “philosophy of technology” is that they all in some way or other reflect on technology (cf. Rapp, 1981: 19-22; 1989: ix; Ihde, 1993: 97-98; Nordmann, 2008: 10).

For example, Nordmann characterized philosophy of technology as follows: “Not only is it a field of work without a tradition, it is foremost a field without its own guiding questions. In the end, philosophy of technology is the whole of philosophy done over again from the start – only this time with consideration for technology” (2008: 10; Reydon’s translation). Nordmann (2008: 14) added that the job of philosophy of technology is not to deal philosophically with a particular subject domain called “technology” (or “Technik” in German). Rather, its job is to deal with all the traditional questions of philosophy, relating them to technology. Such a characterization of the field, however, seems impracticably broad because it causes the name “philosophy of technology” to lose much of its meaning. On Nordmann’s broad characterization it seems meaningless to talk of “philosophy of technology”, as there is no clearly recognizable subfield of philosophy for the name to refer to. All of philosophy would be philosophy of technology, as long as some attention is paid to technology.

A similar, albeit apparently somewhat stricter, characterization of the field was given by Ferré (1988: ix, 9), who suggested that philosophy of technology is “simply philosophy dealing with a special area of interest”, namely technology. According to Ferré, the various “philosophies of” (of science, of biology, of physics, of language, of technology, etc.) should be conceived of as philosophy in the broad sense, with all its traditional questions and methods, but now “turned with a special interest toward discovering how those fundamental questions and methods relate to a particular segment of human concern” (Ferré, 1988: 9). The question arises as to what this “particular segment of human concern” called “technology” is. But first, the kinds of questions philosophers of technology ask with respect to technology must be explicated.

3. How Philosophy of Technology Can Be Done: The Principal Kinds of Questions That Philosophers of Technology Ask

Philosopher of technology Don Ihde defines philosophy of technology as philosophy that examines the phenomenon of technology per se, rather than merely considering technology in the context of reflections aimed at philosophical issues other than technology. (Note the opposition to Nordmann’s view, mentioned above.) That is, philosophy of technology “must make technology a foreground phenomenon and be able to reflectively analyze it in such a way as to illuminate features of the phenomenon of technology itself” (Ihde, 1993: 38; original emphasis).

However, there are a number of different ways in which one can approach the project of illuminating characteristic features of the phenomenon of technology. While different authors have presented different views of what philosophy of technology is about, there is no generally agreed upon taxonomy of the various approaches to (or traditions in, or styles of doing) philosophy of technology. In this section, a number of approaches that have been distinguished in the recent literature are discussed with the aim of providing an overview of the various kinds of questions that philosophers ask with respect to technology.

In an early review of the state of the field, philosopher of science Marx W. Wartofsky distinguished four main approaches to philosophy of technology (Wartofsky, 1979: 177-178). First, there is the holistic approach, which sees technology as one of the phenomena generally found in human societies (on a par with phenomena such as art, war, politics, etc.) and attempts to characterize the nature of this phenomenon. The philosophical question in focus here is: What is technology? Second, Wartofsky distinguished the particularistic approach, which addresses specific philosophical questions that arise with respect to particular episodes in the history of technology. Relevant questions are: Why did a particular technology gain or lose prominence in a particular period? Why did the general attitude towards technology change at a particular time? And so forth. Third is the developmental approach, which aims at explaining the general process of technological change and as such has a historical focus too. And fourth, there is the social-critical approach, which conceives of technology as a social and cultural phenomenon, that is, as a product of social conventions, ideologies, etc. In this approach, technology is seen as a product of human actions that should be critically assessed (rather than characterized, as in the holistic approach). Besides critical reflection on technology, a central question here is how technology has come to be what it is today and which social factors have been important in shaping it. The four approaches distinguished by Wartofsky are clearly not mutually exclusive: the different approaches address similar and related questions, and the difference between them is largely a matter of emphasis.

A similar taxonomy of approaches is found in the work of Friedrich Rapp, an early proponent of analytic philosophy of technology (see also below). For Rapp, the principal dichotomy is between holistic and particularistic approaches, that is, between approaches that conceive of technology as a single phenomenon whose nature philosophers should clarify and approaches that see “technology” as an umbrella term for a number of distinct historical and social phenomena that are related to one another in complex ways and accordingly should each be examined in relation to the other relevant phenomena (Rapp, 1989: xi-xii). Rapp’s own philosophy of technology stands in the latter line of work. Within this dichotomy, Rapp (1981: 4-19) distinguished four main approaches, each reflecting on a different aspect of technology: the practice of invention and engineering, technology as a cultural phenomenon, the social impact of technology, and the impact of technology on the physical and biological system of planet Earth. While it is not entirely clear how Rapp conceives of the relation between these four approaches and his holistic/particularistic dichotomy, it seems that holism and particularism can generally be understood as modes of doing philosophy that can be realized within each of the four approaches.

Gernot Böhme (2008: 23-32) likewise distinguished four main paradigms of contemporary philosophy of technology: the ontological paradigm, the anthropological paradigm, the historical-philosophical paradigm and the epistemological paradigm. The ontological paradigm, according to Böhme, inquires into the nature of artifacts and other technical entities. It basically consists in a philosophy of technology that parallels philosophy of nature but focuses on the Aristotelian domain of poiesis instead of the domain of physis (see Section 1.a above). The anthropological paradigm asks one of the traditional questions of philosophy – What is man? – and approaches this question by way of an examination of technology as a product of human action. The historical-philosophical paradigm examines the various manifestations of technology throughout human history and aims to clarify what characterizes the nature of technology in different periods. In this respect it is closely related to the anthropological paradigm, and individual philosophers can work in both paradigms simultaneously. Böhme (2008: 26), for example, lists Ernst Kapp as a representative of both the anthropological and the historical-philosophical paradigms. Finally, the epistemological paradigm inquires into technology as a form of knowledge in the sense in which Aristotle did (see Section 1.a above). Böhme (2008: 23) observed that despite the factual existence of philosophy of technology as an academic field, as yet no single paradigm dominates the field.

Carl Mitcham (1994) made a fundamental distinction between two principal subdomains of philosophy of technology, which he called “engineering philosophy of technology” and “humanities philosophy of technology”. Engineering philosophy of technology is the philosophical project aimed at understanding the phenomenon of technology as instantiated in the practices of engineers and others working in technological professions. It analyzes “technology from within, and [is] oriented toward an understanding of the technological way of being-in-the-world” (Mitcham, 1994: 39). As representatives of engineering philosophy of technology Mitcham lists, among others, Ernst Kapp and Friedrich Dessauer. Humanities philosophy of technology, on the other hand, consists of more general philosophical projects in which technology per se is not the principal subject of concern. Rather, technology is taken as a case study that might lead to new insights into a variety of philosophical questions by examining how technology affects human life.

The above discussion shows how different philosophers have quite different views of how the field of philosophy of technology is structured and what kinds of questions are in focus in the field. Still, on the basis of the preceding discussion a taxonomy can be constructed of three principal ways of conceiving of philosophy of technology:

(1) philosophy of technology as the systematic clarification of the nature of technology as an element and product of human culture (Wartofsky’s holistic and developmental approaches; Rapp’s cultural approach; Böhme’s ontological, anthropological and historical paradigms; and Mitcham’s engineering approach);

(2) philosophy of technology as the systematic reflection on the consequences of technology for human life (Wartofsky’s particularistic and social/critical approaches; Rapp’s social impact and physical impact approaches; and Mitcham’s humanities approach);

(3) philosophy of technology as the systematic investigation of the practices of engineering, invention, designing and making of things (Wartofsky’s particularistic approach; Rapp’s invention approach; Böhme’s epistemological paradigm; and Mitcham’s engineering approach).

All three approaches are represented in present-day thinking about technology and are illustrated below.

(1) The systematic clarification of the nature of technology. Perhaps most philosophy of technology has been done – and continues to be done – in the form of reflection on the nature of technology as a cultural phenomenon. As clarifying the nature of things is a traditional philosophical endeavor, many prominent representatives of this approach are philosophers who do not consider themselves philosophers of technology in the first place. Rather, they are general philosophers who look at technology as one among the many products of human culture, such as the German philosophers Karl Jaspers (e.g., his book Die geistige Situation der Zeit; Jaspers, 1931), Oswald Spengler (Der Mensch und die Technik; Spengler, 1931), Ernst Cassirer (e.g., his Symbol, Technik, Sprache; Cassirer, 1985), Martin Heidegger (Heidegger, 1962; discussed above), Jürgen Habermas (for example with his Technik und Wissenschaft als “Ideologie”; Habermas, 1968) and Bernhard Irrgang (2008). The Spanish philosopher José Ortega y Gasset is also often counted among the prominent representatives of this line of work.

(2) Systematic reflection on the consequences of technology for human life. Related to the conception of technology as a human cultural product is the approach to philosophy of technology that reflects on and criticizes the social and environmental impact of technology. As an examination of how technology affects society, this approach lies at the intersection of philosophy and sociology rather than squarely within philosophy itself. Prominent representatives thus include the German philosopher-sociologists of the Frankfurt School (Herbert Marcuse, Theodor W. Adorno and Max Horkheimer), Jürgen Habermas, the French sociologist Jacques Ellul (1954) and the American political theorist Langdon Winner (1977).

A central question in contemporary versions of this approach is whether technology controls us or we are able to control technology (Feenberg, 2003: 6; Dusek, 2006: 84-111; Nye, 2006: Chapter 2). Langdon Winner, for example, thought of technology as an autonomously developing phenomenon fundamentally out of human control. As Dusek (2006: 84) points out, this issue is in fact a constellation of two separate questions: Are the societies that we live in, and we ourselves in our everyday lives, determined by technology? And are we able to control or guide the development of technology and the application of technological inventions, or does technology have a life of its own? Since it might be that our lives are not determined by technology and yet we are still unable to control its development and application, these are separate, albeit intimately related, issues. The challenge for philosophy of technology, then, is to assess the effects of technology on our societies and our lives, to explore possibilities for us to exert influence on the current applications and future development of technology, and to devise concepts and institutions that might enable democratic control over the role of technology in our lives and societies.

(3) The systematic investigation of the practices of engineering, invention, designing and making of things. The third principal approach to philosophy of technology examines concrete technological practices, such as invention, design and engineering. Early representatives of this approach include Ernst Kapp (1877), Friedrich Dessauer (1927; 1956) and Eugen Diesel (1939). The practical orientation of this approach, as well as its comparative distance from traditional issues in philosophy, is reflected in the fact that none of these three early philosophers of technology were professional philosophers (see Section 2).

A guiding idea in this approach to philosophy of technology is that the design process constitutes the core of technology (Franssen and others, 2009: Sec. 2.3), such that studying the design process is crucial to any project that attempts to understand technology. Thus, philosophers working in this approach often examine design practices, both in the strict context of engineering and in wider contexts such as architecture and industrial design (for example, Vermaas and others, 2008). In focus are epistemological and methodological questions, such as: What kinds of knowledge do engineers have? (for example, Vincenti, 1990; Pitt, 2000; Bucciarelli, 2003; Auyang, 2009; Houkes, 2009). Is there a kind of knowledge that is specific to engineering? What is the nature of the engineering process and the design process? (for example, Vermaas and others, 2008). What is design? (for example, Houkes, 2008). Is there a specific design/engineering methodology? How do reasoning and decision processes in engineering function? How do engineers deal with uncertainty, failure and error margins? (for example, Bucciarelli, 2003: Chapter 3). Is there any such thing as a technological explanation? If so, what is the structure of technological explanations? (for example, Pitt, 2000: Chapter 4; Pitt, 2009). What is the relation between science and technology and in what way are design processes similar to and different from investigative processes in natural science? (for example, Bunge, 1966).

This approach to philosophy of technology is closely related to philosophy of science, where much attention is also given to methodology and epistemology. This can be seen from the fact that central questions from philosophy of science parallel some of the aforementioned questions: What is scientific knowledge? Is there a specific scientific method, or perhaps a clearly delimited set of such methods? How does scientific reasoning work? What is the structure of scientific explanations? And so on. However, there still seems to be comparatively little attention to such questions among philosophers of technology. Philosopher of technology Joseph Pitt, for example, observed that notwithstanding the parallel with respect to the questions that can be asked, “there is a startling lack of symmetry with respect to the kinds of questions that have been asked about science and the kinds of questions that have been asked about technology” (2000: 26; emphasis added). According to Pitt, philosophers of technology have largely ignored epistemological and methodological questions about technology and have instead focused overly on issues related to technology and society. But, Pitt pointed out, social criticism “can come only after we have a deeper understanding of the epistemological dimension of technology” (Pitt, 2000: 27) and “policy decisions require prior assessment of the knowledge claims, which require good theories of what knowledge is and how to assess it” (ibid.). Thus, philosophers of technology should orient themselves anew with respect to the questions they ask.

But there are more parallels between the philosophies of technology and science. An important endeavor in philosophy of science that is also seen as central in philosophy of technology is conceptual analysis. In the case of philosophy of technology, this involves both concepts related to technology and engineering in general (concepts such as “technology”, “technics”, “technique”, “machine”, “mechanism”, “artifact”, “artifact kind”, “information”, “system”, “efficiency”, “risk”, etc.; see also Wartofsky, 1979: 179) and concepts that are specific to the various engineering disciplines. In addition, in both philosophy of science and philosophy of technology a renewed interest in metaphysical issues can currently be seen. For example, while philosophers of science inquire into the nature of the natural kinds that the sciences study, philosophers of technology are developing a parallel interest in the metaphysics of artifacts and kinds of artifacts (e.g., Houkes & Vermaas, 2004; Margolis & Laurence, 2007; Franssen, 2008). And lastly, philosophers of technology and philosophers of particular special sciences are increasingly beginning to cooperate on questions that are of crucial interest to both fields; a recent example is Krohs & Kroes (2009) on the notion of function in biology and technology.

A difference between the states of affairs in philosophy of science and in philosophy of technology, however, lies in the relative dominance of continental and analytic approaches. While there is some continental philosophy of science (e.g., Gutting, 2005), it constitutes a small minority in the field in comparison to analytic philosophy of science. In contrast, continental-style philosophy of technology is a domain of considerable size, while analytic-style philosophy of technology is comparatively small. Analytic philosophy of technology has existed since the 1960s but only began to become the dominant form of philosophy of technology in the early 21st century (Franssen and others, 2009: Sec. 1.3). Kroes and others (2008: 2) even speak of a “recent analytic turn in the philosophy of technology”. Overviews of analytic philosophy of technology can be found in Mitcham (1994: Part 2), Franssen (2009) and Franssen and others (2009: Sec. 2).

4. Two Exemplary Discussions

Having mapped out three principal ways in which one can conceive of philosophy of technology, two discussions from contemporary philosophy of technology will now be presented to illustrate what philosophers of technology do. The first example demonstrates philosophy of technology as the systematic clarification of the nature of technology. The second shows philosophy of technology as systematic reflection on the consequences of technology for human life and is concerned with biotechnology. (Illustrations of philosophy of technology as the systematic investigation of the practices of engineering, invention, designing and making of things will not be presented; examples of this approach can be found in Vermaas and others (2008) or Franssen and others (2009).)

a. What Is (the Nature of) Technology?

The question, What is technology? or What is the nature of technology?, is both a central question that philosophers of technology aim to answer and a question whose answer determines the subject matter of philosophy of technology. One can think of philosophy of technology as the philosophical examination of technology, in the same way as the philosophy of science is the philosophical examination of science and the philosophy of biology the philosophical study of a particular subdomain of science. In this respect, however, the philosophy of technology finds itself in a situation similar to that of the philosophy of science.

Central questions in the philosophy of science have long been what science is, what characterizes science and what distinguishes science from non-science (the demarcation problem). These questions have recently somewhat moved out of focus, however, due to the lack of acceptable answers. Philosophers of science have not been able to satisfactorily explicate the nature of science (for a recent suggestion, see Hoyningen-Huene, 2008) or to specify any clear-cut criterion by which science could be demarcated from non-science or pseudo-science. As philosopher of science Paul Hoyningen-Huene (2008: 168) wrote: “fact is that at the beginning of the 21st century there is no consensus among philosophers or historians or scientists about the nature of science.”

The nature of technology, however, is even less clear than the nature of science. As philosopher of science Marx Wartofsky put it, “‘Technology’ is unfortunately too vague a term to define a domain; or else, so broad in its scope that what it does define includes too much. For example, one may talk about technology as including all artifacts, that is, all things made by human beings. Since we ‘make’ language, literature, art, social organizations, beliefs, laws and theories as well as tools and machines, and their products, such an approach covers too much” (Wartofsky, 1979: 176). More clarity on this issue can be achieved by looking at the history of the term (for example, Nye, 2006: Chapter 1; Misa, 2009; Mitcham & Schatzberg, 2009) as well as at recent suggestions to define it.

Jacob Bigelow, an early author on technology, conceived of it as a specific domain of knowledge: technology was “an account […] of the principles, processes, and nomenclatures of the more conspicuous arts” (Bigelow, 1829, quoted in Misa, 2009: 9; Mitcham & Schatzberg, 2009: 37). In a similar manner, Günter Ropohl (1990: 112; 2009: 31) defined “technology” as the “science of technics” (“Wissenschaft von der Technik”, where “Technik” denotes the domain of crafts and other areas of manufacturing, making, etc.). The important aspect of Bigelow’s and Ropohl’s definitions is that “technology” does not denote a domain of human activity (such as making or designing) or a domain of objects (technological innovations, such as solar panels), but a domain of knowledge. In this respect, their usage of the term is continuous with the meaning of the Greek “techne” (Section 1.a).

A review of a number of definitions of “technology” (Li-Hua, 2009) shows that there is not much overlap between the various definitions that can be found in the literature. Many definitions conceive of technology in Bigelow’s and Ropohl’s sense as a particular body of knowledge (thus making the philosophy of technology a branch of epistemology), but do not agree on what kind of knowledge it is supposed to be. On some definitions it is seen as firm-specific knowledge about design and production processes, while others conceive of it as knowledge about natural phenomena and laws of nature that can be used to satisfy human needs and solve human problems (a view which closely resembles Francis Bacon’s).

Philosopher of science Mario Bunge presented a view of the nature of technology along the latter lines (Bunge, 1966). According to Bunge, technology should be understood as constituting a particular subdomain of the sciences, namely “applied science”, as he called it. Note that Bunge’s thesis is not that technology is applied science in the sense of the application of scientific theories, models, etc. for practical purposes. Although a view of technology as being “just the totality of means for applying science” (Scharff, 2009: 160) remains present among the general public, most engineers and philosophers of technology agree that technology cannot be conceived of as the application of science in this sense. Bunge’s view is that technology is the subdomain of science characterized by a particular aim, namely application. According to Bunge, natural science and applied science stand side by side as two distinct modes of doing science: while natural science is scientific investigation aimed at the production of reliable knowledge about the world, technology is scientific investigation aimed at application. Both are full-blown domains of science, in which investigations are carried out and knowledge is produced (knowledge about the world and how it can be applied to concrete problems, respectively). The difference between the two domains lies in the nature of the knowledge that is produced and the aims that are in focus. Bunge’s statement that “technology is applied science” should thus be read as “technology is science for the purpose of application” and not as “technology is the application of science.”

Other definitions reflect still different conceptions of technology. In the definition accepted by the United Nations Conference on Trade and Development (UNCTAD), technology includes not only specific knowledge but also machinery, production systems and a skilled human labor force. Li-Hua (2009) follows the UNCTAD definition by proposing a four-element definition of “technology” as encompassing technique (that is, a specific technique for making a particular product), specific knowledge (required for making that product; he calls this technology in the strict sense), the organization of production and the end product itself. Friedrich Rapp, in contrast, defined “technology” even more broadly as a domain of human activity: “in simplest terms, technology is the reshaping of the physical world for human purposes” (Rapp, 1989: xxiii).

Thus, attempts to define “technology” in such a way that the definition expresses the nature of technology, or at least some of its principal characteristics, have not led to any generally accepted view of what technology is. In this context, historian of science and technology Thomas J. Misa observed that historians of technology have so far resisted defining “technology”, in the same way as “no scholarly historian of art would feel the least temptation to define ‘art’, as if that complex expression of human creativity could be pinned down by a few well-chosen words” (Misa, 2009: 8). The suggestion clearly is that technology is far too complex and diverse a domain to be defined, or for its nature to be meaningfully talked about. Nordmann (2008: 14) went even further by arguing that not only can the term “technology” not be defined, but it also should not be defined. According to Nordmann, we should accept that technology is too diverse a domain to be caught in a compact definition. Accordingly, instead of conceiving of “technology” as the name of a particular fixed collection of phenomena that can be investigated, Nordmann held that “technology” is best understood as what Grunwald & Julliard (2005) called a “reflective concept”. According to the latter authors, “technology” (or rather, “Technik” – see Section 1.c) should simply be taken to mean whatever we mean when we use the term. While this clearly cannot be an adequate definition of the term, it can still serve as a basis for reflections on technology in that it gives us at least some sense of what it is that we are reflecting on. Using “technology” in this extremely loose manner allows us to connect reflections on very different issues and phenomena as being about – in the broadest sense – the same thing. In this way, “technology” can serve as the core concept of the field of philosophy of technology.

Philosophy of technology faces the challenge of clarifying the nature of a particular domain of phenomena without being able to determine the boundaries of that domain. Perhaps the best way out of this situation is to approach the question on a case-by-case basis, where the various cases are connected by the fact that they all involve technology in the broadest possible sense of the term. Rather than asking what technology is, and how the nature of technology is to be characterized, it might be better to examine the natures of particular instances of technology and in so doing achieve more clarity about a number of local phenomena. In the end, the results from various case studies might to some extent converge – or they might not.

b. Questions Regarding Biotechnology

The question of how to define “technology” is not merely an academic issue. Consider the case of biotechnology, the technological domain that features most prominently in systematic reflections on the consequences of technology for human life. When thinking about what the application of biotechnologies might mean for our lives, it is important to define what we mean by “biotechnology” such that the subject matter under consideration is delimited in a way that is useful for the discussion.

On one definition, given in 1984 by the United States Office of Technology Assessment, biotechnology comprises “[a]ny technique using organisms and their components to make products, modify plants and animals to carry desired traits, or develop micro-organisms for specific uses” (Office of Technology Assessment, 1984; Van den Belt, 2009: 1302). On such a conception of biotechnology, however, traditional farming, breeding and production of foodstuffs, as well as modern large-scale agriculture and industrialized food production, would all count as biotechnology. The domain of biotechnology would thus encompass an extremely heterogeneous collection of practices and techniques, many of which would not be particularly interesting subjects for philosophical or ethical reflection (although all of them affect human life: consider, for example, the enormous effect that the development of traditional farming had on the rise of human societies). Accordingly, many definitions are much narrower and focus on “new” or “modern” biotechnologies, that is, technologies that involve the manipulation of genetic material. These are, after all, the technologies that are widely perceived by the general public as ethically problematic and thus as constituting the proper subject matter of philosophical reflection on biotechnology. Thus, the authors of a 2007 report on the possible consequences, opportunities and challenges of biotechnology for Europe made a distinction between traditional and modern biotechnology, writing about modern biotechnology that it “can be defined as the use of cellular, molecular and genetic processes in production of goods and services. Its beginnings date back to the early 1970s when recombinant DNA technology was first developed” (quoted in Van den Belt, 2009: 1302).

Such narrow definitions, however, tend to cover too little. As Van den Beld (2009: 1306) pointed out in this context, “There are no definitions that are simply correct or incorrect, only definitions that are more or less pragmatically adequate in view of the aims one pursues.” When it comes to systematic reflection on how the use of technologies affects human life, the question is thus whether any particular area of technology can be meaningfully singled out as constituting “biotechnology”. The spectrum of technological applications in the biological domain, however, is simply too diverse for that.

In overviews of the technologies that are commonly discussed under the name of “biotechnology”, a common distinction is that between “white biotechnology” (biotechnology in industrial contexts), “green biotechnology” (biotechnology involving plants) and “red biotechnology” (biotechnology involving humans and non-human animals, in particular in medical and biomedical contexts). White biotechnology involves, among other things, the use of enzymes in detergents or in the production of cheeses; the use of micro-organisms for the production of medicinal substances; the production of biofuels and bioplastics and so forth. Green biotechnology typically involves genetic technology and is therefore also often called “green genetic technology”. It mainly encompasses the genetic modification of cultivated crops. Philosophical and ethical issues discussed under this label include the risk of outcrossing between genetically modified plants and their wild types; the use of genetically modified crops in the production of foodstuffs, either directly or indirectly as feed for animals intended for human consumption (for example, soy beans, corn, potatoes and tomatoes); the labeling of foodstuffs produced on the basis of genetically modified organisms; issues related to the patenting of genetically modified crops and so forth.

Not surprisingly, red biotechnology is the most hotly discussed area of biotechnology, as red biotechnologies directly involve human beings and non-human animals, both of which are categories that feature prominently throughout ethical discussions. Red biotechnology involves such things as the transplantation of human organs and tissues; xenotransplantation (the transplantation of non-human animal organs and tissues to humans); the use of cloning techniques for reproductive and therapeutic purposes; the use of embryos for stem cell research; and artificial reproduction, in vitro fertilization, the genetic testing of embryos and pre-implantation diagnostics. In addition, human enhancement technologies constitute an increasingly discussed area of red biotechnology. These encompass such diverse technologies as the use of psycho-pharmaceutical substances for the improvement of one’s mental capacities, the genetic modification of human embryos to prevent possible genetic diseases and so forth.

A further area of biotechnology is synthetic biology, which involves the creation of synthetic genetic systems and synthetic metabolic systems, as well as attempts at creating synthetic life forms from scratch. Synthetic biology does not fit into the distinction between white, green and red biotechnology, and it receives attention from philosophers not only because projects in synthetic biology may raise ethical questions (for example, Douglas & Savulescu, 2010) but also because of questions from epistemology and philosophy of science (for example, O’Malley, 2009; Van den Beld, 2009: 1314-1316).

Given this diversity of technologies covered by the label “biotechnology”, philosophical reflection on biotechnology as such, and on its possible consequences for human life, will not be a very fruitful enterprise, as there will not be much to say about biotechnology in general. Instead, philosophical reflection on biotechnology will need to be conducted locally rather than globally, taking the form of close examination of particular technologies in particular contexts. Philosophers concerned with biotechnology reflect on such specific issues as the genetic modification of plants for agricultural purposes, or the use of psycho-pharmaceutical substances for the improvement of the mental capacities of healthy subjects – not on biotechnology as such. In the same way as “technology” can be thought of as a “reflective concept” (Grunwald & Julliard, 2005) that brings together a variety of phenomena under a common denominator for the purposes of enabling philosophical work, so “biotechnology” too can be understood as a “reflective concept” that is useful for locating particular considerations within the wide domain of philosophical reflection.

This is not to say, however, that nothing can be said about biotechnology on more general levels. Bioethicist Bernard Rollin, for example, considered genetic engineering in general and addressed the question of whether genetic engineering could be considered intrinsically wrong – that is, wrong in any and all contexts and hence independently of the particular context of application under consideration (Rollin, 2006: 129-154). According to Rollin, the alleged intrinsic wrongness of genetic engineering constituted one of three aspects of wrongness that members of the general public often associate with genetic engineering. These three aspects, which Rollin illustrated as three aspects of the Frankenstein myth (see Rollin, 2006: 135), are: the intrinsic wrongness of a particular practice, its possibly dangerous consequences, and the possibility of causing harm to sentient beings. While the latter two aspects of wrongness might be avoided by means of appropriate measures, the intrinsic wrongness of a particular practice (in cases where it obtains) is unavoidable. Thus, if it could be argued that genetic engineering is intrinsically wrong – that is, something that we simply ought not to do, irrespective of whatever positive or negative consequences are to be expected – this would constitute a strong argument against large domains of white, green and red biotechnology. On the basis of an assessment of the motivations people have for judging genetic engineering to be intrinsically wrong, however, Rollin concluded that no such argument could be made, because in the various cases in which people judged genetic engineering to be intrinsically wrong, the premises of their arguments were not well-founded.

But in this case, too, the need for local rather than global analyses can be seen. Assessing the tenability of the value judgment that genetic engineering is intrinsically wrong requires examining concrete arguments and motivations on a local level. This, I want to suggest by way of conclusion, is a general characteristic of the philosophy of technology: the relevant philosophical analyses will have to take place on the more local levels, examining particular technologies in particular contexts, rather than on more global levels, at which large domains of technology such as biotechnology or even the technological domain as a whole are in focus. Philosophy of technology, then, is a matter of piecemeal engineering, in much the same way as William Wimsatt has suggested that philosophy of science should be done (Wimsatt, 2007).

5. References and Further Reading

  • Auyang, S.Y. (2009): “Knowledge in science and engineering”, Synthese 168: 319-331.
  • Brey, P. (2000): “Theories of technology as extension of human faculties”, in: Mitcham, C. (Ed.): Metaphysics, Epistemology, and Technology (Research in Philosophy and Technology, Vol. 19), Amsterdam: JAI, pp. 59-78.
  • Böhme, G. (2008): Invasive Technologie: Technikphilosophie und Technikkritik, Kusterdingen: Die Graue Edition.
  • Bucciarelli, L.L. (1994): Designing Engineers, Cambridge (MA): MIT Press.
  • Bucciarelli, L.L. (2003): Engineering Philosophy, Delft: Delft University Press.
  • Bunge, M. (1966): “Technology as applied science”, Technology and Culture 7: 329-347.
  • Cassirer, E. (1985): Symbol, Technik, Sprache: Aufsätze aus den Jahren 1927-1933 (edited by E.W. Orth & J. M. Krois), Hamburg: Meiner.
  • De Vries, M.J. (2005): Teaching About Technology: An Introduction to the Philosophy of Technology for Non-Philosophers, Dordrecht: Springer.
  • Dessauer, F. (1927): Philosophie der Technik: Das Problem der Realisierung, Bonn: Friedrich Cohen.
  • Dessauer, F. (1956): Der Streit um die Technik, Frankfurt am Main: Verlag Josef Knecht.
  • Diesel, E. (1939): Das Phänomen der Technik: Zeugnisse, Deutung und Wirklichkeit, Leipzig: Reclam & Berlin: VDI-Verlag.
  • Douglas, T. & Savulescu, J. (2010): “Synthetic biology and the ethics of knowledge”, Journal of Medical Ethics 36: 687-693.
  • Dusek, V. (2006): Philosophy of Technology: An Introduction, Malden (MA): Blackwell.
  • Ellul, J. (1954): La Technique ou l’Enjeu du Siècle, Paris: Armand Colin.
  • Feenberg, A. (2003): “What is philosophy of technology?”, lecture at the University of Tokyo (Komaba campus), June 2003.
  • Ferré, F. (1988): Philosophy of Technology, Englewood Cliffs (NJ): Prentice Hall; unchanged reprint (1995): Philosophy of Technology, Athens (GA) & London, University of Georgia Press.
  • Fischer, P. (1996): “Zur Genealogie der Technikphilosophie”, in: Fischer, P. (Ed.): Technikphilosophie, Leipzig: Reclam, pp. 255-335.
  • Fischer, P. (2004): Philosophie der Technik, München: Wilhelm Fink (UTB).
  • Franssen, M.P.M. (2008): “Design, use, and the physical and intentional aspects of technical artifacts”, in: Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer, pp. 21-35.
  • Franssen, M.P.M. (2009): “Analytic philosophy of technology”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 184-188.
  • Franssen, M.P.M., Lokhorst, G.-J. & Van de Poel, I. (2009): “Philosophy of technology”, in: Zalta, E. (Ed.): Stanford Encyclopedia of Philosophy (Fall 2009 Edition).
  • Grunwald, A. & Julliard, Y. (2005): “Technik als Reflexionsbegriff: Zur semantischen Struktur des Redens über Technik”, Philosophia Naturalis 42: 127-157.
  • Gutting, G. (Ed.) (2005): Continental Philosophy of Science, Malden (MA): Blackwell.
  • Habermas, J. (1968): Technik und Wissenschaft als “Ideologie”, Frankfurt am Main: Suhrkamp.
  • Heidegger, M. (1962): Die Technik und die Kehre, Pfullingen: Neske.
  • Houkes, W. (2008): “Designing is the construction of use plans”, in: Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer, pp. 37-49.
  • Houkes, W. (2009): “The nature of technological knowledge”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 310-350.
  • Houkes, W. & Vermaas, P.E. (2004): “Actions versus functions: A plea for an alternative metaphysics of artefacts”, The Monist 87: 52-71.
  • Hoyningen-Huene, P. (2008): “Systematicity: The nature of science”, Philosophia 36: 167-180.
  • Ihde, D. (1993): Philosophy of Technology: An Introduction, New York: Paragon House.
  • Ihde, D. (2009): “Technology and science”, in: Olsen, J.K.B., Pedersen, S.A. & Hendricks, V.F. (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 51-60.
  • Irrgang, B. (2008): Philosophie der Technik, Darmstadt: Wissenschaftliche Buchgesellschaft.
  • Jaspers, K. (1931): Die geistige Situation der Zeit, Berlin & Leipzig: Walter de Gruyter & Co.
  • Kaplan, D.M. (Ed.) (2004): Readings in the Philosophy of Technology, Lanham (Md.): Rowman & Littlefield.
  • Kapp, E. (1877): Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Cultur aus neuen Gesichtspunkten, Braunschweig: G. Westermann.
  • Kogan-Bernstein, F.A. (1959): “Einleitung”, in: Kogan-Bernstein, F.A. (Ed): Francis Bacon: Neu-Atlantis, Berlin: Akademie-Verlag, pp. 1-46.
  • Kroes, P.E., Light, A., Moore, S.A. & Vermaas, P.E. (2008): “Design in engineering and architecture: Towards an integrated philosophical understanding”, in: Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer, pp. 1-17.
  • Krohs, U. & Kroes, P. (Eds) (2009): Functions in Biological and Artificial Worlds: Comparative Philosophical Perspectives, Cambridge (MA): MIT Press.
  • Kuhn, T.S. (1970): The Structure of Scientific Revolutions (Second Edition, Enlarged), Chicago: University of Chicago Press.
  • Li-Hua, R. (2009): “Definitions of technology”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 18-22.
  • Margolis, E. & Laurence, S. (Eds) (2007): Creations of the Mind: Theories of Artifacts and Their Representation, Oxford: Oxford University Press.
  • Meijers, A.W.M. (Ed.) (2009): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland.
  • Misa, T.J. (2009): “History of technology”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 7-17.
  • Mitcham, C. (1994): Thinking Through Technology: The Path Between Engineering and Philosophy, Chicago & London: University of Chicago Press.
  • Mitcham, C. & Schatzberg, E. (2009): “Defining technology and the engineering sciences”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 27-63.
  • Nordmann, A. (2008): Technikphilosophie: Zur Einführung, Hamburg: Junius.
  • Nye, D.E. (2006): Technology Matters: Questions to Live With, Cambridge (MA): MIT Press.
  • O’Malley, M.A. (2009): “Making knowledge in synthetic biology: Design meets kludge”, Biological Theory 4: 378-389.
  • Parry, R. (2008): “Episteme and techne”, in: Zalta, E. (Ed.): Stanford Encyclopedia of Philosophy (Fall 2008 Edition).
  • Pitt, J.C. (2000): Thinking About Technology: Foundations of the Philosophy of Technology, New York & London: Seven Bridges Press.
  • Pitt, J.C. (2009): “Technological explanation”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 861-879.
  • Olsen, J.K.B., Pedersen, S.A. & Hendricks, V.F. (Eds) (2009): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell.
  • Olsen, J.K.B., Selinger, E. & Riis, S. (Eds) (2009): New Waves in Philosophy of Technology, Houndmills: Palgrave Macmillan.
  • Office of Technology Assessment (1984): Commercial Biotechnology: An International Analysis, Washington (DC): U.S. Government Printing Office.
  • Rapp, F. (1981): Analytical Philosophy of Technology (Boston Studies in the Philosophy of Science, Vol. 63), Dordrecht: D. Reidel.
  • Rapp, F. (1989): “Introduction: General perspectives on the complexity of philosophy of technology”, in: Durbin, P.T. (Ed.): Philosophy of Technology: Practical, Historical and Other Dimensions, Dordrecht: Kluwer, pp. ix-xxiv.
  • Rollin, B.E. (2006): Science and Ethics, Cambridge: Cambridge University Press.
  • Ropohl, G. (1990): “Technisches Problemlösen und soziales Umfeld”, in: Rapp, F. (Ed.): Technik und Philosophie, Düsseldorf: VDI, pp. 111-167.
  • Ropohl, G. (2009): Allgemeine Technologie: Eine Systemtheorie der Technik (3., überarbeitete Auflage), Karlsruhe: Universitätsverlag Karlsruhe.
  • Scharff, R.C. (2009): “Technology as ‘applied science’”, in: J.K.B. Olsen, S.A. Pedersen & V.F. Hendricks (Eds): A Companion to the Philosophy of Technology, Chichester: Wiley-Blackwell, pp. 160-164.
  • Scharff, R.C. & Dusek, V. (Eds.) (2003): Philosophy of Technology: The Technological Condition – An Anthology, Malden (MA): Blackwell.
  • Schummer, J. (2001): “Aristotle on technology and nature”, Philosophia Naturalis 38: 105-120.
  • Snyder, L.J. (2009): “William Whewell”, in: Zalta, E. (Ed.): Stanford Encyclopedia of Philosophy (Winter 2009 Edition).
  • Spengler, O. (1931): Der Mensch und die Technik: Beitrag zu einer Philosophie des Lebens, München: C.H. Beck.
  • Van den Beld, H. (2009): “Philosophy of biotechnology”, in: Meijers, A.W.M. (Ed.): Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, Volume 9), Amsterdam: North Holland, pp. 1302-1340.
  • Verbeek, P.-P. (2005): What Things Do: Philosophical Reflections on Technology, Agency, and Design, University Park (PA): Pennsylvania State University Press.
  • Vermaas, P.E., Kroes, P., Light, A. & Moore, S.A. (Eds) (2008): Philosophy and Design: From Engineering to Architecture, Dordrecht: Springer.
  • Vincenti, W.G. (1990): What Engineers Know and How They Know It: Analytical Studies from Aeronautical History, Baltimore (MD): Johns Hopkins University Press.
  • Wartofsky, M.W. (1979): “Philosophy of technology”, in: Asquith, P.D. & Kyburg, H.E. (eds): Current Research in Philosophy of Science, East Lansing (MI): Philosophy of Science Association, pp. 171-184.
  • Whitney, E. (1990): Paradise Restored: The Mechanical Arts From Antiquity Through the Thirteenth Century (Transactions of the American Philosophical Society, Vol. 80), Philadelphia: The American Philosophical Society.
  • Wimsatt, W.C. (2007): Re-engineering Philosophy for Limited Beings: Piecewise Approximations to Reality, Cambridge (MA): Cambridge University Press.
  • Winner, L. (1977): Autonomous Technology: Technics-out-of-control as a Theme in Political Thought, Cambridge (MA): MIT Press.
  • Zoglauer, T. (2002): “Einleitung”, in: Zoglauer, T. (Ed.): Technikphilosophie, Freiburg & München: Karl Alber.

Author Information

Thomas A.C. Reydon
Email: reydon@ww.uni-hannover.de
Leibniz University of Hannover
Germany

John McTaggart Ellis McTaggart (1866—1925)

J. M. E. McTaggart was a British idealist, best known for his argument for the unreality of time and for his system of metaphysics advocating personal idealism. By the early twentieth century, the philosophical movement known as British Idealism was waning, while the ‘new realism’ (later dubbed ‘analytic philosophy’) was gaining momentum. Although McTaggart’s commitment to idealism never faltered, he enjoyed an unusually close relationship with several of the new realists. McTaggart spent almost his entire career at Trinity College, Cambridge, and there he taught Bertrand Russell, G. E. Moore and C. D. Broad. McTaggart influenced all of these figures to some degree, and all of them spoke particularly highly of his careful and clear philosophical method.

McTaggart studied Hegel from the very beginning of his philosophical career and produced a large body of Hegel scholarship, including the mammoth Studies in Hegelian Cosmology (1901). Towards the end of his career he produced his two-volume magnum opus The Nature of Existence (1921 and, posthumously, 1927), a highly original metaphysical system developing what McTaggart took to be Hegel’s ontology. This personal idealism holds that the universe is composed solely of minds and their perceptions, bound into a tight unity by love. However, McTaggart is best known for his influential paper “The Unreality of Time”, in which he argues that change and time are contradictory and unreal. This argument, and the metaphysical groundwork it lays down, especially its contrast between the A series and the B series of time, is still widely discussed.

Table of Contents

  1. Biography
  2. Philosophical Influences
    1. The British Idealists
    2. The British New Realists
  3. Philosophical Writings
    1. Hegel
    2. Some Dogmas of Religion
    3. The Unreality of Time
    4. The Nature of Existence
  4. References and Further Reading
    1. Primary Sources
    2. Selected Secondary Sources

1. Biography

McTaggart was born in London on 3 September 1866, the son of Francis Ellis, a county court judge, and his wife Caroline Ellis. McTaggart was born ‘John McTaggart Ellis’ and acquired the name ‘McTaggart’ as a surname when his father adopted it on condition of inheriting an uncle’s wealth. As a boy McTaggart attended a preparatory school in Weybridge, from which he was expelled for his frequent avowal of atheism. He subsequently attended school in Caterham and Clifton College, Bristol. He began studying philosophy at Trinity College, Cambridge in 1885. Once McTaggart began at Trinity, he hardly left: he graduated in 1888 with a first class degree, became a Prize Fellow in 1891, became a lecturer in Moral Sciences in 1897 and stayed until his retirement in 1923. In a letter to a friend, he writes of Cambridge: ‘Unless I am physically or spiritually at Cambridge or Oxford, I have no religion, no keenness (I do not identify them) except by snatches. I must have been made for a don… I learn a good many things there, the chief one being that I am a damned sight happier than I deserve to be’. In addition to being an academic, McTaggart was a mystic. He reported having visions (not imaginary, but literal perceptions of the senses) conveying the spiritual nature of the world; this may have played a part in his unswerving devotion to idealism. McTaggart investigates the nature of mysticism in “Mysticism” (reprinted in his Philosophical Studies, 1934), where he takes it to involve an awareness of the unity of the universe.

Beginning in 1891, McTaggart took a number of trips to New Zealand to visit his mother, and it was there that he met his future wife. He married Margaret Elizabeth Bird in New Zealand on 5 August 1899, and the couple subsequently settled in Cambridge. They had no children. During the First World War, McTaggart worked as a special constable and helped in a munitions factory. McTaggart’s friend Dickinson writes of him, ‘it is essential to remember that, if he was a philosopher by nature and choice he was also a lover and a husband… and a whole-hearted British patriot’ (Dickinson, 1931, 47).

Towards the end of his life McTaggart produced the first volume of his magnum opus The Nature of Existence (1921). He retired shortly afterwards in 1923, and died unexpectedly two years later on 18 January 1925. In his introduction to the second edition of Some Dogmas of Religion, McTaggart’s friend and former student Broad describes McTaggart’s funeral and mentions how one of McTaggart’s favourite Spinozistic passages was read out. It is worth mentioning here that, although McTaggart never contributed to Spinoza scholarship, he admired him greatly, perhaps even more than Hegel. McTaggart describes Spinoza as a great religious teacher, ‘in whom philosophical insight and religious devotion were blended as in no other man before or since’ (McTaggart, 1906, 299). The passage from Spinoza was consequently engraved on McTaggart’s memorial brass in Trinity College. McTaggart did not live to see the second volume of The Nature of Existence in print, but fortunately the manuscript was largely complete and it was finally published in 1927, under Broad’s careful editorship. Broad describes McTaggart as follows:

‘Take an eighteenth-century English Whig. Let him be a mystic. Endow him with the logical subtlety of the great schoolmen and their belief in the powers of human reason, with the business capacity of a successful lawyer, and with the lucidity of the best type of French mathematician. Inspire him (Heaven knows how) in early youth with a passion for Hegel. Then subject him to the teaching of Sidgwick and the continual influence of Moore and Russell. Set him to expound Hegel. What will be the result? Hegel himself could not have answered this question a priori, but the course of world history has solved it ambulando by producing McTaggart.’

For further biographical information (and anecdotes) see Dickinson’s (1931) biographical sketch of McTaggart, and Broad’s (1927) notice on McTaggart.

2. Philosophical Influences

McTaggart was active in British philosophy at a time when it was caught between two opposing philosophical currents, British Idealism and the New Realism, and he was involved with figures within both of these movements.

a. The British Idealists

McTaggart began his career in British philosophy when it was firmly under the sway of British Idealism, a movement which holds that the world is inherently unified, intelligible and ideal. Due to the influence of Hegel on these philosophers, the movement is also sometimes known as British Hegelianism. The movement began in the latter half of the nineteenth century; J. H. Stirling is generally credited with introducing Hegel’s work to Britain via his book The Secret of Hegel (1865). Aside from McTaggart himself, important figures in British Idealism include T. H. Green, F. H. Bradley, Harold Joachim, Bernard Bosanquet and Edward Caird. Early on, a schism appeared in the movement as to how idealism should be understood. Absolute idealists, such as Bradley, argued that reality is underpinned by a single partless spiritual unity known as the Absolute. In contrast, personal idealists, such as G. F. Stout and Andrew Seth, argued that reality consists of many individual spirits or persons. McTaggart firmly endorses personal idealism, the doctrine that he took to be Hegel’s own. In addition to his idealism, McTaggart shared other neo-Hegelian principles: among these are his convictions that the universe is as tightly unified as it is possible for a plurality of existents to be and that the universe is fundamentally rational and open to a priori investigation, as well as his disregard for naturalism. On this last point, McTaggart goes so far as to say that, while science may investigate the nature of the universe, only philosophy investigates its ‘ultimate nature’ (McTaggart, 1906, 273).

Nearly all of McTaggart’s early work concerns Hegel, or Hegelian doctrines, and this work forms the basis of the metaphysical system he would later develop in so much detail. A good example of this is his earliest publication, a pamphlet printed for private circulation entitled “The Further Determination of the Absolute” (1893); it is reprinted in McTaggart’s Philosophical Studies. In this defence of idealism, McTaggart’s Hegelian credentials are well established: he repeatedly references Hegel, Green, and Bradley, whom he later describes as ‘the greatest of all living philosophers’. McTaggart apparently cared greatly about this paper. In its introduction, he apologises for its ‘extreme crudeness… and of its absolute inadequacy to its subject’. In private correspondence (see Dickinson, 1931) McTaggart describes the experience of writing it: ‘It has been shown to one or two people who are rather authorities (Caird of Glasgow and Bradley of Oxford) and they have been very kind and encouraging about it… [writing] it was almost like turning one’s heart out’.

b. The British New Realists

Despite his close philosophical ties to British Idealism, McTaggart bucked the trends of the movement in a number of ways. (In fact, Broad (1927) goes so far as to say that English Hegelianism filled McTaggart with an ‘amused annoyance’.) To begin with, McTaggart spent his entire career at Cambridge. Not only was Oxford, rather than Cambridge, the spiritual home of British Idealism, but Cambridge became the home of the new realism. While at Trinity College, McTaggart taught a number of the new realists, including Moore, Russell and Broad, and held great influence over them. Moore read and gave notes on a number of McTaggart’s works prior to publication, including Some Dogmas of Religion (1906) and the first volume of The Nature of Existence. In his obituary note on McTaggart, Moore describes him as a philosopher ‘of the very first rank’ (Moore, 1925, 271). For more on McTaggart’s influence on Moore, see Baldwin (1990). McTaggart was also involved with some of the realist debates of the time; for example, see his discussion note on Wittgenstein, “Propositions Applicable to Themselves”, reprinted in his Philosophical Studies (1934).

As a young philosopher, Russell was so impressed by McTaggart’s paper “The Further Determination of the Absolute” and its doctrine of philosophical love that he used it to woo his future wife. In his autobiography, Russell writes that he remembers wondering as a student ‘as an almost unattainable ideal, whether I should ever do work as good as McTaggart’s’ (Russell, 1998, 129). Later, their relationship soured; McTaggart took a leading role in the expulsion of Russell from his fellowship following Russell’s controversial pacifist wartime writings. For more on this, and on McTaggart’s more general influence on Russell, see Dickinson (1931) and Griffin (1984). McTaggart, Russell and Moore were described at one point as ‘The Mad Tea Party of Trinity’, with McTaggart painted as the Dormouse.

As for Broad, McTaggart describes him as his ‘most brilliant’ pupil. Broad edited the two volumes of McTaggart’s The Nature of Existence, and produced extensive studies of both. Both Moore and Broad heap praise upon McTaggart for his exceptional clarity and philosophic rigour; the lack of these qualities in other idealists played a part in driving both of these new realists away from British Idealism. For example, Broad writes: ‘The writings of too many eminent Absolutists seem to start from no discoverable premises; to proceed by means of puns, metaphors, and ambiguities; and to resemble in their literary style glue thickened with sawdust’ (Broad, 1933, ii). In contrast, Broad says of McTaggart that he ‘was an extremely careful and conscientious writer… [to] be ranked with Hobbes, Berkeley and Hume among the masters of English philosophical prose… [his] style is pellucidly clear’ (Broad, 1927, 308).

Not only did McTaggart enjoy close relationships with the new realists, but he also shared with them some basic philosophical tenets. For example, McTaggart and the new realists reject the Bradleian claim that reality and truth come in degrees. McTaggart argues that there is a ‘confusion’ which leads philosophers to move from the one to the other (McTaggart, 1921, 4). McTaggart also rejects the coherence theory of truth espoused by British idealists such as Joachim (and, arguably, Bradley) in favour of the correspondence theory of truth (McTaggart, 1921, 10).

3. Philosophical Writings

a. Hegel

While many of the British idealists studied Hegel, few entered into the murky waters of Hegel scholarship. McTaggart is an exception: Hegel scholarship occupied him for most of his career. Hegel was a German idealist whose work is notoriously difficult. While some of the British idealists understood Hegel to be arguing that reality consists of a single partless spiritual being known as the Absolute, McTaggart took Hegel to be arguing for personal idealism.

Hegel is discussed in McTaggart’s very first publication, “The Further Determination of the Absolute” (1893). McTaggart argues that the progress of any idealistic philosophy may be divided into three stages: the proof that reality is not exclusively matter, the proof that reality is exclusively spirit, and the determination of the fundamental nature of that spirit. McTaggart describes Hegel’s understanding of the fundamental nature of spirit as follows: ‘Spirit is ultimately made up of various finite individuals, each of which finds his character and individuality by relating himself to the rest, and by perceiving that they are of the same nature as himself’. The individuals that make up spirit are interdependent, united by a pattern or design akin to an organic unity. McTaggart adds that justifying this ‘would be a task beyond the limits of this paper… it could only be done by going over the whole course of Hegel’s Logic’. One way of understanding the rest of McTaggart’s career is to see him as making good on his promise to justify Hegel’s understanding of spirit.

McTaggart’s works on Hegel include Studies in the Hegelian Dialectic (1896), Studies in Hegelian Cosmology (1901) and A Commentary on Hegel’s Logic (1910). A central theme in these books is the question of how the universe, as unified spirit, is differentiated into finite spirits: how can a unity also be a plurality? McTaggart takes Hegel to have solved this problem by postulating a unity which is not only in the individuals, but also for the individuals, in that reality is a system of conscious individuals wherein each individual reflects the whole: ‘If we take all reality, for the sake of convenience, as limited to three individuals, A, B, and C, and suppose them to be conscious, then the whole will be reproduced in each of them… [A will] be aware of himself, of B, and of C, and of the unity which joins them in a system’ (McTaggart, 1901, 14). This is exactly the position that McTaggart himself would later advance. McTaggart also discusses Hegel’s dialectic method at length; this is the process whereby the opposition between a thesis and an antithesis is resolved into a synthesis. For example, ‘being’ and ‘not being’ are resolved into ‘becoming’. Despite his admiration for this method, McTaggart does not use it in The Nature of Existence; instead of proceeding by dialectic, his argument proceeds via the more familiar method of principles and premises.

There is disagreement within contemporary Hegel scholarship as to how correct McTaggart’s reading of Hegel is. Stern argues that McTaggart’s reading of Hegel bears close similarities to contemporary readings, and that it should be seen as an important precursor (Stern, 2009, 121). In contrast, in his introduction to Some Dogmas of Religion, Broad argues that ‘if McTaggart’s account of Hegelianism be taken as a whole and compared with Hegel’s writings as a whole, the impression produced is one of profound unlikeness’. Similarly, Geach compares McTaggart’s acquaintance with Hegel’s writings to the chapter-and-verse knowledge of the Bible that out-of-the-way Protestant sectarians often have; he adds that the ‘unanimous judgement’ of Hegel scholars appears to be that McTaggart’s interpretations of Hegel were as perverse as these sectarians’ interpretations of the Bible (Geach, 1979, 17).

b. Some Dogmas of Religion

Some Dogmas of Religion (1906) is an exception to McTaggart’s main body of work, in that it assumes no knowledge of philosophy and is intended for a general audience. The book covers a large number of topics, from the compatibility of God’s attributes to human free will. This section picks out three of the book’s central themes: the role of metaphysics, McTaggart’s brand of atheism, and the immortality of the soul.

McTaggart defines metaphysics as ‘the systematic study of the ultimate nature of reality’. A dogma is ‘any proposition which has a metaphysical significance’, such as belief in God (McTaggart, 1906, 1). McTaggart argues that dogmas can only be produced by reason, by engaging in metaphysics. Science does not produce dogmas, for scientific claims do not aim to express the fundamental nature of reality. For example, science tells us that the laws governing the part of the universe known as ‘matter’ are mechanical. Science does not go on to tell us whether these laws are manifestations of deeper laws, or of the will of God (McTaggart, 1906, 13-4). In fact, McTaggart argues that the consistency of science would be unaffected if its object of study, matter, turned out to be immaterial. To learn about the ultimate nature of the world, we must look to metaphysics, not science.

McTaggart embodies two apparently contradictory characteristics: he is religious and an atheist. The apparent contradiction is resolved by McTaggart’s definition of religion: ‘Religion is clearly a state of mind… an emotion resting on a conviction of a harmony between ourselves and the universe at large’ (McTaggart, 1906, 3). McTaggart aims to define religion as broadly as possible, so as to include both the traditional systems (such as those of the Greeks, Roman Christians, Judaism and Buddhism) and the idiosyncratic ones espoused by philosophers like Spinoza and Hegel. Given this very broad definition of religion, McTaggart’s own system of personal idealism qualifies as religious. However, McTaggart is an atheist, for he denies the existence of God. In Some Dogmas of Religion McTaggart does not argue for atheism; he merely rejects some of the traditional arguments for theism. He defines God as ‘a being that is personal, supreme and good’ (McTaggart, 1906, 186) and argues that theistic arguments do not prove the existence of such a being. For example, the cosmological ‘first cause’ argument claims that if every event must have a cause, including the universe’s very first event, then there must be a first cause which is itself uncaused: God. McTaggart argues that even if this argument is valid, it does not prove the existence of God, for it does not prove that the first existing being is either personal or good (McTaggart, 1906, 190-1). In The Nature of Existence, McTaggart goes even further than this and argues directly for atheism (McTaggart, 1927, 176-89).

Given that McTaggart denies the reality of time and the existence of God, it may seem strange that he also affirms the immortality of the human soul. However, McTaggart held all three of these claims throughout his life. In Some Dogmas of Religion, McTaggart takes the immortality of the soul as a postulate, and considers objections to it, such as the claim that the soul or self is an activity of the finite human body, or that it cannot exist without it. McTaggart argues that none of these objections is successful. For example, concerning the claim that the self is of such a nature that it cannot exist outside of its present body, McTaggart argues that while we have no evidence of disembodied selves, this shows at most that the self needs some body, not that it needs the body it currently has (McTaggart, 1906, 104-5). McTaggart concludes that the immortality of the soul is at least a real possibility, for souls can move from body to body. He argues that souls are immortal, not in the sense of existing at every time (for time does not exist) but in the sense that we enjoy a succession of lives, before and after this one. McTaggart calls this the doctrine of the ‘plurality of lives’ (McTaggart, 1906, 116). He goes on to argue that our journey through these lives is not guided by chance or mechanical necessity, but rather by the interests of spirit: love, which ‘would have its way’. For example, our proximity to our loved ones is not the product of chance or mechanical arrangement, but is rather caused by the fact that our spirits are more closely connected to these selves than to others. This accounts for phenomena such as ‘love at first sight’: we have loved these people before, in previous lives (McTaggart, 1906, 134-5). In The Nature of Existence, McTaggart puts forward a positive argument for the immortality of the soul and continues to emphasise that love is of the utmost importance. By affirming the immortality of the soul, McTaggart seems to take himself to be following Spinoza in making death ‘the least of all things’ (McTaggart, 1906, 299).

c. The Unreality of Time

McTaggart’s paper “The Unreality of Time” (1908) presents the argument he is best known for. (The argument of this paper is also included in the second volume of The Nature of Existence.) McTaggart argues that the belief in the unreality of time has proved ‘singularly attractive’ throughout the ages, and attributes such belief to Spinoza, Kant, Hegel and Bradley. (In the case of Spinoza, this attribution is arguable; Spinoza describes time as a general character of existents, albeit one conceived using the imagination.) McTaggart offers us a wholly new argument in favour of this belief, and here is its outline.

McTaggart distinguishes two ways of ordering events or ‘positions’ in time: the A series takes some position as present, and orders other positions as running from the past to the present and from the present to the future; the B series, meanwhile, orders events according to whether they are earlier or later than other events. The argument itself has two steps. In the first step, McTaggart aims to show that there is no time without the A series, because only the A series can account for change. On the B series nothing changes: any event N has, and will always have, the same position in the time series: ‘If N is ever earlier than O and later than M, it will always be, and has always been… since the relations of earlier and later are permanent’. In contrast, change does occur on the A series. For example, an event such as the death of Queen Anne began by being a future event, became present and then became past. Real change occurs only on the A series, as events move from being in the future, to being in the present, to being in the past.

In the second step, McTaggart argues that the A series cannot exist, and hence that time cannot exist. He does so by attempting to show that the existence of the A series would generate a contradiction, because past, present and future are incompatible attributes; if an event M has the attribute of being present, it cannot also be past or future. However, McTaggart maintains that ‘every event has them all’ (for example, if M is past, then it has been present and future), which contradicts their incompatibility. As the application of the A series to reality involves a contradiction, the A series cannot be true of reality. This does not entail that our perceptions are false; on the contrary, McTaggart maintains that it is possible that the realities which we perceive as events in a time series really do form a non-temporal C series. Although this C series would not admit of time or change, it does admit of order. For example, if we perceive two events M and N as occurring at the same time, it may be that, while time does not exist, M and N have the same position in the ordering of the C series. McTaggart attributes this view of time to Hegel, claiming that Hegel regards the time series as a distorted reflexion of something in the real nature of the timeless reality. In “The Unreality of Time”, McTaggart does not consider at length what the C series is; he merely suggests that the positions within it may be ultimate facts or that they are determined by varying quantities within objects. In “The Relation of Time and Eternity” (1909), reprinted in his Philosophical Studies, McTaggart goes further than this. He compares our perception of time to viewing reality through a tinted glass, and suggests that the C series is an ordering of representations of reality according to how accurate they are. Our ersatz perception that we are moving through time reflects our movement towards the end point of this series, which is the correct perception of reality. This end point will involve the fact that reality is really timeless, so time is understood as the process by which we reach the timeless. Later still, in the second volume of The Nature of Existence, McTaggart reconsiders this position and argues that while the objects of the C series are representations of reality, they are not ordered by veracity. Instead, McTaggart argues that the ‘fundamental sense’ of the C series is that it is ordered according to the ‘amount of content of the whole that is included in it’: it runs from the less inclusive to the more inclusive (McTaggart, 1927, 362). However, McTaggart does not give up his claim that the C series will reach a timeless end point. For more on this, see The Nature of Existence (1927), chapters 59-61.
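
The structure of the second step can be set out schematically. What follows is a reconstruction in modern notation, not McTaggart’s own, where P, N and F abbreviate the attributes past, present and future:

    (1) Incompatibility: for any event M, at most one of PM, NM and FM holds.
    (2) Universality: every event has all three determinations, PM, NM and FM.

Together, (1) and (2) yield a contradiction. The obvious reply, that M merely has the three attributes successively (M is present, has been future and will be past), is one McTaggart anticipates in the original paper: the tensed expressions ‘has been’ and ‘will be’ themselves presuppose a further A series, and the same contradiction re-arises for it, generating a vicious regress.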

The reception of “The Unreality of Time” among McTaggart’s contemporaries was mixed. Ewing describes its implausible conclusion as ‘heroic’, while Broad describes it as ‘an absolute howler’. The argument is probably the most influential piece of philosophy that McTaggart ever produced. Although the paper’s conclusions are rarely endorsed in full, it is credited with providing the framework for a debate, between the A and B series of time, which is still alive today. For discussion, see Dummett “A Defence of McTaggart’s Proof of the Unreality of Time” (1960), Lowe “The Indexical Fallacy in McTaggart’s Proof of the Unreality of Time” (1987) and Le Poidevin & Mellor “Time, Change, and the ‘Indexical Fallacy’” (1987). For an extended, more recent discussion, see Dainton (2001).

d. The Nature of Existence

McTaggart’s magnum opus aims to provide a comprehensive, systematic a priori description of the world; the conclusion of this system is personal idealism. Broad claims that The Nature of Existence may quite fairly be ranked with the Enneads of Plotinus, the Ethics of Spinoza, and the Encyclopaedia of Hegel (Broad, 1927). The central argument of The Nature of Existence is based on the nature of substance, and it is extremely complex. The bare bones of the argument consist of three steps, but along the way McTaggart makes use of a number of subsidiary arguments.

In the first step, McTaggart argues that the universe contains multiple substances. McTaggart defines a substance as whatever exists and has qualities, or stands in relations, but is not itself a quality or relation, entailing that the following are all substances: sneezes, parties and red-haired archdeacons (McTaggart, 1921, 73). Given this broad definition, McTaggart argues that at least one substance exists; this is true given the evidence of our senses, and given that there is anything around to consider the statement at all. All substances have qualities (today, we would say ‘properties’) such as redness and squareness. If there are multiple substances, then relations hold between them. Although the claim that relations are real is familiar to contemporary philosophers, in the context of British Idealism it is a significant departure from Bradley’s claim that relations are unreal. The qualities and relations possessed by a substance are jointly called its characteristics. McTaggart puts forward two kinds of argument for the claim that there are multiple substances. Firstly, there are empirical proofs, such as the claim that if I and the things I perceive exist, then there are at least two substances (McTaggart, 1921, 75). Secondly, as we will see below, McTaggart argues that all substances can be differentiated into further substances. If this is true, then it follows that if at least one substance exists, many exist.

In the second step, McTaggart places two necessary ontological conditions on the nature of substances (they must admit of sufficient descriptions, and they must be differentiated into further substances), which results in his theory of determining correspondence.

The first ontological condition McTaggart places on substances is that they must admit of sufficient descriptions. This grows out of McTaggart’s extended discussion of the ‘Dissimilarity of the Diverse’ (see Chapter 10 of the first volume of The Nature of Existence), which argues that diverse (that is, non-identical) things are dissimilar: two things cannot have the same nature. The relevant similarity involves the properties and relations a substance has. For example, McTaggart argues that if space is absolute then two things will occupy different spatial positions and stand in dissimilar spatial relations. McTaggart discusses the relationship between his principle, the ‘Dissimilarity of the Diverse’, and Leibniz’s principle, the ‘Identity of Indiscernibles’, which states that two things are identical if they are indiscernible. McTaggart prefers the name of his principle, for it does not suggest that there are indiscernibles which are identical, but rather that there is nothing which is indiscernible from anything else. McTaggart goes on to argue that all substances admit of an ‘exclusive description’ which applies only to them via a description of their qualities. For example, the description ‘The continent lying over the South Pole’ applies to just one substance. All substances admit of exclusive descriptions because, given the Dissimilarity of the Diverse, no substance can have exactly the same nature as any other (McTaggart, 1921, 106). There are two kinds of exclusive descriptions: firstly, the kind that introduces another substance into the description, such as ‘The father of Henry VIII’; secondly, the kind known as ‘sufficient descriptions’, which describe a substance purely in terms of its qualities, without introducing another substance into the description, such as ‘The father of a monarch’. McTaggart argues that all substances must admit of sufficient descriptions: all substances are dissimilar to all other substances and as a result they admit of exclusive descriptions. If a substance could not be described without making reference to other substances, then we would arrive at an infinite regress (because, as we will see, all substances are differentiated to infinity) and the description would correspondingly be infinite (McTaggart, 1921, 108). Such a regress would be vicious because it would never be completed. As substances do exist, they must admit of sufficient descriptions.
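
The relationship between the two principles can be made precise. The following schematic rendering is in modern notation, not McTaggart’s own, with F ranging over characteristics:

    Dissimilarity of the Diverse: ∀x∀y (x ≠ y → ¬∀F (Fx ↔ Fy))
    Identity of Indiscernibles: ∀x∀y (∀F (Fx ↔ Fy) → x = y)

So formulated, the two principles are logically equivalent, each being the contrapositive of the other; McTaggart’s stated preference concerns only what the name suggests, not the content of the principle.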

The second ontological condition placed on substances is that they are infinitely differentiated into proper parts which are also substances. By ‘differentiated’, McTaggart means that they are divisible, and divisible into parts unlike their wholes. To illustrate: a homogeneous (that is, uniform) liquid such as milk might be infinitely divisible, but all of its parts would be like their wholes; they would merely be smaller portions of milk. In contrast, a heterogeneous (that is, non-uniform) liquid such as a fruit smoothie would be infinitely divisible into parts that are unlike their whole: the whole might contain cherry and orange, while its parts contain pieces of cherry and orange respectively. McTaggart argues that all substances are infinitely differentiated by denying a priori that ‘simple’ partless substances are possible; he does so in two ways. The first way is based on divisibility. Simples would have to be indivisible in every dimension (in length, breadth and time), and this is impossible because even a substance like ‘pleasure’ has two dimensions if it lasts for at least two moments of time (McTaggart, 1921, 175). The second way is based on the notion of content. A simple substance would be a substance without ‘content’, in that it would lack properties and would not stand in relations. McTaggart argues that it is part of our notion of a substance that it must have a ‘filling’ of some sort, an ‘internal structure’, and this could only be understood to mean that it must have parts (McTaggart, 1921, 181). Both of these arguments are somewhat hazy; see Broad (1933) for an extensive discussion and critique.

McTaggart’s full account of parts and wholes, which discusses divisibility, simples and composition, can be found in the first volume of The Nature of Existence, chapters 15-22. McTaggart endorses the doctrine of unrestricted composition, whereby any two substances compose a further compound substance. It follows from this that the universe or ‘all that exists’ is a single substance composed of all other substances (McTaggart, 1921, 78). While we might doubt the existence of simples (that is, partless atoms), we cannot doubt the existence of the universe, because it includes all content (McTaggart, 1921, 172). Given McTaggart’s claim that all substances are differentiated and that unrestricted composition occurs, it follows that all parts and all collections of substances are themselves substances. These dual claims have made their way into an argument within contemporary metaphysics by Jonathan Schaffer. In contemporary parlance, anything that is infinitely divisible into proper parts which also have proper parts is ‘gunky’. One way of understanding McTaggart is to see him as claiming that, while all substances lack a ‘lowest’ level (because they are gunky, infinitely divisible into further parts), all substances have a ‘highest’ level in the form of the universe, a substance which includes all content. Schaffer makes use of this asymmetry of existence (the fact that one can seriously doubt the existence of simples but not the existence of the universe as a whole) to argue for priority monism (Schaffer, 2010, 61).
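
These two structural claims can also be stated in the notation of contemporary mereology. This is a modern gloss rather than McTaggart’s own vocabulary, with ‘<’ standing for proper parthood:

    Unrestricted composition: for any substances x and y, there is a substance z that is their sum.
    Gunk: ∀x ∃y (y < x), that is, every substance has a proper part.

On this picture the part-whole ordering has a maximal element, the universe, but no minimal elements, since there are no simples.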

With these two ontological conditions in place (that substances must admit of sufficient descriptions and be differentiated), McTaggart sets out to combine them into his theory of determining correspondence. This theory is extremely difficult and rather obscure; see Wisdom (1928) and Broad (1933). Essentially, McTaggart argues that the two ontological conditions result in contradiction unless substances fulfil a certain requirement. The worry is that a substance A cannot be given a sufficient description in virtue of sufficient descriptions of its set of parts M, for these can only be described in virtue of sufficient descriptions of their parts, and so on to infinity. This is a vicious series, because the sufficient descriptions of the members of M can only be made sufficient by means of the last stage of an unending series; in other words, they cannot be made sufficient at all (McTaggart, 1921, 199). Of course, as there are substances, they must admit of sufficient descriptions. McTaggart’s way out of this apparent contradiction seems to be to reverse the direction of epistemological priority: we have been considering deducing a sufficient description of a substance in virtue of its parts; instead, we should be deducing sufficient descriptions of the parts in virtue of the substance of which they are parts. ‘[If] the contradiction is to be avoided, there must be some description of every substance which does imply sufficient descriptions of every part through all its infinite series of sets of parts’ (McTaggart, 1921, 204). The only way to provide such a description is via the law of determining correspondence, which asserts that each part of A is in a one-to-one correspondence with each term of its infinite series, the nature of the correspondence being such that, in the fact that a part of A corresponded in this way to a reality with a given nature, there would be implied a sufficient description of that part of A. The theory of determining correspondence involves a classification of the contents of the universe: the universe is a primary whole and it divides into primary parts, whose sufficient descriptions determine, by virtue of the relation of determining correspondence, the sufficient descriptions of all further, secondary parts.

In the third step of his argument, McTaggart shows that the only way the nature of substance could comply with the theory of determining correspondence is if substance is spirit. He does this by eliminating the other candidates for the nature of substance: matter and sense data. His objections to both of these rival candidates are similar; we will focus on his rejection of matter. McTaggart argues that while there ‘might’ be no difficulty in the claim that matter is infinitely divisible, there certainly is difficulty in the claim that matter can allow for determining correspondence (McTaggart, 1927, 33). This is impossible because, in a material object, the sufficient description of the parts determines the sufficient description of the whole, not the other way around: ‘If we know the shape and size of each one of a set of parts of A, and their position relatively to each other, we know the size and shape of A… we shall thus have an infinite series of terms, in which the subsequent terms imply the precedent’ (McTaggart, 1927, 36). As we have already seen above, such a series will involve a contradiction, for the description will never ‘bottom out’. One way out of this contradiction might be to postulate that, at each level of composition, the parts bear a ‘new’ property, such as a new colour or taste, which would be sufficient to describe them. McTaggart swiftly dispenses with this reply by arguing that it would require matter to possess an infinite number of sorts of qualities, ‘one sort for each of the infinite series of grades of secondary parts’, and there is no reason to suppose that matter possesses more than the very limited number of qualities that are currently known to us (McTaggart, 1927, 35). McTaggart briefly considers dividing matter to infinity in time, but dismisses the idea because, of course, for McTaggart time is unreal. McTaggart concludes that matter cannot exist. Interestingly, he does not take this conclusion to imply anti-realism about science or common sense, for when those disciplines use terms which assume the existence of matter, what is meant by those terms ‘remains just as true’ if we take the view that matter does not exist (McTaggart, 1927, 53).

Having dispensed with its rivals, McTaggart turns to idealism. Spiritual substances include selves, their parts, and compounds of multiple selves. Idealism is compatible with the theory of determining correspondence when the primary parts of the universe are understood to be selves, and the secondary parts their perceptions, which are differentiated to infinity (McTaggart, 1927, 89). While this does not amount to a positive proof of idealism, it gives us good reason to believe that nothing but spirit exists, for there is no other option on the table (McTaggart, 1927, 115). McTaggart also describes how the universe is a ‘self-reflecting unity’, in that each part of the universe reflects every other part (McTaggart, 1921, 299). As we saw above, this is the view that McTaggart attributed to Hegel. McTaggart’s system also bears some similarity to the monadism advanced in Leibniz’s Monadology, wherein each monad is a spirit that reflects every other monad. Furthermore, in Leibniz’s system the highest ranks of monads are capable of entering into a community of pure love with God. Similarly, in McTaggart’s system (although there is no divine monarch) the souls are bound together by the purest form of love, which results in the purest form of happiness (McTaggart, 1927, 156). These arguments are but developments of principles that McTaggart had espoused his entire life.

This section merely describes the main thread of argument in The Nature of Existence; the work itself covers many more topics. These include the notion of organic unity, the nature of cogitation, volition, emotion, good and evil, and error. Further topics are covered in McTaggart’s Philosophical Studies, such as the nature of causality and the role of philosophy as opposed to science.

4. References and Further Reading

a. Primary Sources

  • (1893) “The Further Determination of the Absolute”. Pamphlet designed for private distribution only; reprinted in McTaggart’s Philosophical Studies.
  • (1893) “Time and the Hegelian Dialectic”. Mind Vol. 2, 490–504.
  • (1896) Studies in the Hegelian Dialectic. CUP: Cambridge.
  • (1897) “Hegel’s Treatment of the Categories of the Subjective Notion”. Mind Vol. 6, 342–358.
  • (1899) “Hegel’s Treatment of the Categories of the Objective Notion”. Mind Vol. 8, 35–62.
  • (1900) “Hegel’s Treatment of the Categories of the Idea”. Mind Vol. 9, 145–183.
  • (1901) Studies in Hegelian Cosmology. CUP: Cambridge.
  • (1906) Some Dogmas of Religion. Edward Arnold: London.
  • (1908) “The Unreality of Time”. Mind Vol. 17, 457–474.
  • (1909) “The Relation of Time to Eternity”. Mind Vol. 18, 343–362.
  • (1910) A Commentary on Hegel’s Logic. CUP: Cambridge.
  • (1916) Human Immortality and Pre-Existence. Edward Arnold: London.
  • (1921) The Nature of Existence I. CUP: Cambridge.
  • (1927) The Nature of Existence II. Edited by C. D. Broad. CUP: Cambridge.
  • (1934) Philosophical Studies. Edited by S. V. Keeling. Thoemmes Press: Bristol.
    • [A large collection of McTaggart’s papers]

b. Selected Secondary Sources

  • Baldwin, Thomas (1990). G. E. Moore. Routledge: UK.
    • [Describes the relationship between Moore and McTaggart]
  • Bradley, F. H. (1920). Appearance and Reality. George Allen & Unwin Ltd: GB.
    • [Bradley is the arch British idealist]
  • Broad, C. D. (1927). “John McTaggart Ellis McTaggart 1866–1925”, Proceedings of the British Academy Vol. XIII, 307–334.
  • Broad, C. D. (1933). An Examination of McTaggart’s Philosophy. CUP: GB.
  • Dainton, Barry (2001). Time and Space. Acumen Publishing Ltd: GB.
    • [Provides an excellent discussion of McTaggart’s argument on the unreality of time]
  • Dickinson, G. Lowes (1931). J. M. E. McTaggart. CUP: GB.
  • Geach, Peter (1979). Truth, Love and Immortality: An Introduction to McTaggart’s Philosophy. University of California Press: Berkeley.
  • Moore, G. E. (1925). “Death of Dr. McTaggart”, Mind Vol. 34, 269–271.
  • Moore, G. E. (1942). “An Autobiography”, in The Philosophy of G. E. Moore. Tudor Publishing Company: GB.
  • Russell, Bertrand (1998). The Autobiography of Bertrand Russell. Routledge: GB.
  • Schaffer, Jonathan (2010). “Monism: The Priority of the Whole”, Philosophical Review Vol. 119, 31–76.
    • [Utilises McTaggart’s asymmetry of existence – between the non-existence of simples and the existence of the universe as a whole – in a new way]
  • Stern, Robert (2009). Hegelian Metaphysics. OUP: GB.
    • [Gives an excellent history of the movement, and discusses how close McTaggart’s interpretation of Hegel is to Hegel himself]
  • Wisdom, John (1928). “McTaggart’s Determining Correspondence of Substance: A Refutation”, Mind Vol. 37, 414–438.


Author Information

Emily Thomas
Email: aeet2@cam.ac.uk
University of Cambridge
United Kingdom

The Lucas-Penrose Argument about Gödel’s Theorem

In 1961, J.R. Lucas published “Minds, Machines and Gödel,” in which he formulated a controversial anti-mechanism argument.  The argument claims that Gödel’s first incompleteness theorem shows that the human mind is not a Turing machine, that is, a computer.  The argument has generated a great deal of discussion since then.  The influential Computational Theory of Mind, which claims that the human mind is a computer, is false if Lucas’s argument succeeds.  Furthermore, if Lucas’s argument is correct, then “strong artificial intelligence,” the view that it is possible at least in principle to construct a machine that has the same cognitive abilities as humans, is false.  However, numerous objections to Lucas’s argument have been presented.  Some of these objections involve the consistency or inconsistency of the human mind; if we cannot establish that human minds are consistent, or if we can establish that they are in fact inconsistent, then Lucas’s argument fails (for reasons made clear below).  Others object to various idealizations that Lucas’s argument makes.  Still others find some other fault with the argument.  Lucas’s argument was rejuvenated when the physicist R. Penrose formulated and defended a version of it in two books, 1989’s The Emperor’s New Mind and 1994’s Shadows of the Mind. Although there are similarities between Lucas’s and Penrose’s arguments, there are also some important differences.  Penrose argues that the Gödelian argument implies a number of claims concerning consciousness and quantum physics; for example, consciousness must arise from quantum processes and it might take a revolution in physics for us to obtain a scientific explanation of consciousness.  There have also been objections raised to Penrose’s argument and the various claims he infers from it: some question the anti-mechanism argument itself, some question whether it entails the claims about consciousness and physics that he thinks it does, while others question his claims about consciousness and physics, apart from his anti-mechanism argument.

Section one discusses Lucas’s version of the argument.  Numerous objections to the argument – along with Lucas’s responses to these objections – are discussed in section two. Penrose’s version of the argument, his claims about consciousness and quantum physics, and various objections that are specific to Penrose’s claims are discussed in section three. Section four briefly addresses the question, “What did Gödel himself think that his theorem implied about the human mind?”  Finally, section five mentions two other anti-mechanism arguments.

Table of Contents

  1. Lucas’s Original Version of the Argument
  2. Some Possible Objections to Lucas
    1. Consistency
    2. Benacerraf’s Criticism
    3. The Whiteley Sentence
    4. Issues Involving “Idealizations”
    5. Lewis’s Objection
  3. Penrose’s New Version of the Argument
    1. Penrose’s Gödelian Argument
    2. Consciousness and Physics
  4. Gödel’s Own View
  5. Other Anti-Mechanism Arguments
  6. References and Further Reading

1. Lucas’s Original Version of the Argument

Gödel’s (1931) first incompleteness theorem proves that any consistent formal system in which a “moderate amount of number theory” can be proven will be incomplete, that is, there will be at least one true mathematical claim that cannot be proven within the system (Wang 1981: 19).  The claim in question is often referred to as the “Gödel sentence.”  The Gödel sentence asserts of itself: “I am not provable in S,” where “S” is the relevant formal system.  Suppose that the Gödel sentence could be proven in S.  Then, since S is sound, the sentence would be true; but the sentence says of itself that it is not provable in S, so it would be both provable and unprovable, which is a contradiction.  Hence, if S is consistent, the Gödel sentence is unprovable in S, and therefore true, because it asserts precisely that it is unprovable.  In other words, if consistent, S is incomplete, as there is a true mathematical claim that cannot be proven in S. For an introduction to Gödel’s theorem, see Nagel and Newman (1958).
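Set out schematically, the reasoning runs as follows (an editorial sketch in standard textbook notation, not the article’s own; Prov_S is S’s provability predicate, ⌜G⌝ the Gödel number of G, and the derivability conditions needed for a fully rigorous proof are suppressed):

```latex
% Schematic form of the first incompleteness argument (a sketch).
\begin{align*}
&\text{Diagonalization: } S \vdash G \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G \urcorner)\\
&\text{If } S \vdash G\text{, then } S \vdash \mathrm{Prov}_S(\ulcorner G \urcorner)
  \text{ and } S \vdash \neg\mathrm{Prov}_S(\ulcorner G \urcorner)\text{, so } S \text{ is inconsistent.}\\
&\text{Hence, if } S \text{ is consistent, } S \nvdash G\text{; and since } G
  \text{ asserts exactly this, } G \text{ is true.}
\end{align*}
```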

Gödel’s proof is at the core of Lucas’s (1961) argument, which is roughly the following.  Consider a machine constructed to produce theorems of arithmetic.  Lucas argues that the operations of this machine are analogous to a formal system.  To explain, “if there are only a definite number of types of operation and initial assumptions built into the [machine], we can represent them all by suitable symbols written down on paper” (Lucas 1961: 115).  That is, we can associate specific symbols with specific states of the machine, and we can associate “rules of inference” with the operations of the machine that cause it to go from one state to another.  In effect, “given enough time, paper, and patience, [we could] write down an analogue of the machine’s operations,” and “this analogue would in fact be a formal proof” (ibid).  So essentially, the arithmetical claims that the machine will produce as output, that is, the claims the machine proves to be true, will “correspond to the theorems that can be proved in the corresponding formal system” (ibid).  Now suppose that we construct the Gödel sentence for this formal system.  Since the Gödel sentence cannot be proven in the system, the machine will be unable to produce this sentence as a truth of arithmetic.  However, a human can look and see that the Gödel sentence is true.  In other words, there is at least one thing that a human mind can do that no machine can.  Therefore, “a machine cannot be a complete and adequate model of the mind” (Lucas 1961: 113).  In short, the human mind is not a machine.

Here is how Lucas (1990: paragraph 3) describes the argument:

I do not offer a simple knock-down proof that minds are inherently better than machines, but a schema for constructing a disproof of any plausible mechanist thesis that might be proposed.  The disproof depends on the particular mechanist thesis being maintained, and does not claim to show that the mind is uniformly better than the purported mechanist representation of it, but only that it is [in] one respect better and therefore different.  That is enough to refute that particular mechanist thesis.

Further, Lucas (ibid) believes that a variant of his argument can be formulated to refute any future mechanist thesis.  To explain, Lucas seems to envision the following scenario:  a mechanist formulates a particular mechanistic thesis by claiming, for example, that the human mind is a Turing machine with a given formal specification S.  Lucas then refutes this thesis by producing S’s Gödel sentence, which we can see is true, but the Turing machine cannot.  Then, a mechanist puts forth a different thesis by claiming, for example, that the human mind is a Turing machine with formal specification S’.  But then Lucas produces the Gödel sentence for S’, and so on, until, presumably, the mechanist simply gives up.

One who has not studied Gödel’s theorem in detail might be wondering: why can’t we simply add the Gödel sentence to the list of theorems a given machine “knows,” thereby giving the machine the ability Lucas claims it does not have?  In Lucas’s argument, we consider some particular Turing machine specification S, and then we note that “S-machines” (that is, those machines that have formal specification S) cannot see the truth of the Gödel sentence while we can, so human minds cannot be S-machines, at least.  But why can’t we simply add the Gödel sentence to the list of theorems that S-machines can produce?  Doing so would presumably give the machines in question the ability that allegedly separates them from human minds, and Lucas’s argument would falter.  The problem with this response is that even if we add the Gödel sentence to S-machines, thereby producing Turing machines that can produce the initial Gödel sentence as a truth of arithmetic, Lucas can simply produce a new Gödel sentence for these updated machines, one which allegedly we can see is true but the new machines cannot, and so on ad infinitum.  In sum, as Lucas (1990: paragraph 9) states, “It is very natural…to respond by including the Gödelian sentence in the machine, but of course that makes the machine a different machine with a different Gödelian sentence all of its own.”  This issue is discussed further below.
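The regress can be displayed explicitly. In editorial notation (not Lucas’s own), write G(T) for the Gödel sentence of a consistent, effectively axiomatized theory T; each “repaired” machine then generates a fresh Gödel sentence:

```latex
% Adding a Goedel sentence yields a new system with a new Goedel sentence.
\begin{align*}
S_0 &= S, \qquad S_{n+1} = S_n + G(S_n).\\
&\text{For every } n\text{: if } S_n \text{ is consistent, then } S_n \nvdash G(S_n),\\
&\text{while } S_{n+1} \vdash G(S_n) \text{ but } S_{n+1} \nvdash G(S_{n+1}).
\end{align*}
```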

One reason Lucas’s argument has received so much attention is that if the argument succeeds, the widely influential Computational Theory of Mind is false.  Likewise, if the argument succeeds, then “strong artificial intelligence” is false; it is impossible to construct a machine that can perfectly mimic our cognitive abilities.  But there are further implications; for example, a view in philosophy of mind known as Turing machine functionalism claims that the human mind is a Turing machine, and of course, if Lucas is right, this form of functionalism is false. (For more on Turing machine functionalism, see Putnam (1960)).  So clearly there is much at stake.

2. Some Possible Objections to Lucas

Lucas’s argument has been, and still is, very controversial.  Some objections to the argument involve consistency; if we cannot establish our own consistency, or if we are in fact inconsistent, then Lucas’s argument fails (for reasons made clear below).  Furthermore, some have objected that the algorithm the human mind follows is so complex we might be forever unable to formulate our own Gödel sentence; if so, then maybe we cannot see the truth of our own Gödel sentence and therefore we might not be different from machines after all.  Others object to various idealizations that Lucas’s argument makes.  Still others find some other fault with the argument.  In this section, some of the more notable objections to Lucas’s argument are discussed.

a. Consistency

Lucas’s argument faces a number of objections involving the issue of consistency; there are two related though distinct lines of argument on this issue.  First, some claim that we cannot establish our own consistency, whether we are consistent or not.  Second, some claim that we are in fact inconsistent.  The success of either of these objections would be sufficient to defeat Lucas’s argument.  But first, to see why these objections (if successful) would defeat Lucas’s argument, recall that Gödel’s first incompleteness theorem states that if a formal system (in which we can prove a suitable amount of number theory) is consistent, the Gödel sentence is true but unprovable in the system.  That is, the Gödel sentence will be true and unprovable only in consistent systems.  In an inconsistent system, one can prove any claim whatsoever because in classical logic, any and all claims follow from a contradiction; that is, an inconsistent system will not be incomplete.  Now, suppose that a mechanist claims that we are Turing machines with formal specification S, and this formal specification is inconsistent (so the mechanist is essentially claiming that we are inconsistent).  Lucas’s argument simply does not apply in such a situation; his argument cannot defeat this mechanist.  Lucas claims that any machine will be such that there is a claim that is true but unprovable for the machine, and since we can see the truth of the claim but the machine cannot, we are not machines.  But if the machine in question is inconsistent, the machine will be able to prove the Gödel sentence, and so will not suffer from the deficiency that Lucas uses to separate machines from us.  In short, for Lucas’s argument to succeed, human minds must be consistent.

Consequently, if one claims that we cannot establish our own consistency, this is tantamount to claiming that we cannot establish the truth of Lucas’s conclusion.  Furthermore, there are some good reasons for thinking that even if we are consistent, we cannot establish this.  For example, Gödel’s second incompleteness theorem, which quickly follows from his first theorem, shows that a consistent formal system S (of the relevant kind) cannot prove its own consistency from within itself; so, if we are formal systems, we cannot establish our own consistency.  In other words, a mechanist can avoid Lucas’s argument by simply claiming that we are formal systems and therefore, in accordance with Gödel’s second theorem, cannot establish our own consistency.  Many have made this objection over the years; Lucas discusses it in his original (1961) article and attributes it to Rogers (1957) and to Putnam, who had raised it in conversation with Lucas even before the article appeared (see also Putnam (1960)).  Likewise, Hutton (1976) argues from various considerations drawn from Probability Theory to the conclusion that we cannot assert our own consistency.  For example, Hutton claims that the probability that we are inconsistent is above zero, and that if we claim that we are consistent, this “is a claim to infallibility which is insensitive to counter-arguments to the point of irrationality” (Lucas 1976: 145).  In sum, for Lucas’s argument to succeed, we must be assured that humans are consistent, but various considerations, including Gödel’s second theorem, imply that we can never establish our own consistency, even if we are consistent.
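In schematic form (editorial notation, with Con(S) the arithmetized statement that S is consistent):

```latex
% Goedel's second incompleteness theorem and its bearing on Lucas's argument.
\begin{align*}
&\text{If } S \text{ is consistent and proves enough arithmetic, then } S \nvdash \mathrm{Con}(S).\\
&\text{So if the mind just is some such } S\text{, the mind cannot establish } \mathrm{Con}(S)\\
&\text{by its own resources; yet Lucas's argument requires exactly that assurance.}
\end{align*}
```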

Another possible response to Lucas is simply to claim that humans are in fact inconsistent Turing machines.  Whereas the objection above claimed that we can never establish our own consistency (and so cannot apply Gödel’s first theorem to our own minds with complete confidence), this new response simply outright denies that we are consistent.  If humans are inconsistent, then we might be equivalent to inconsistent Turing machines, that is, we might be Turing machines.  In short, Lucas concludes that since we can see the truth of the Gödel sentence, we cannot be Turing machines, but perhaps the most we can conclude from Lucas’s argument is that either we are not Turing machines or we are inconsistent Turing machines.  This objection has also been made many times over the years; Lucas (1961) considers this objection too in his original article and claims that Putnam also made this objection to him in conversation.

So, we see two possible responses to Lucas: (1) we cannot establish our own consistency, whether we are consistent or not, and (2) we are in fact inconsistent.  However, Lucas has offered numerous responses to these objections.  For example, Lucas thinks it is unlikely that an inconsistent machine could be an adequate representation of a mind.  He (1961: 121) grants that humans are sometimes inconsistent, but claims that “it does not follow that we are tantamount to inconsistent systems,” as “our inconsistencies are mistakes rather than set policies.”  When we notice an inconsistency within ourselves, we generally “eschew” it, whereas “if we really were inconsistent machines, we should remain content with our inconsistencies, and would happily affirm both halves of a contradiction” (ibid).  In effect, we are not inconsistent machines even though we are sometimes inconsistent; we are fallible but not systematically inconsistent.   Furthermore, if we were inconsistent machines, we would potentially endorse any proposition whatsoever (ibid).  As mentioned above, one can prove any claim whatsoever from a contradiction, so if we are inconsistent Turing machines, we would potentially believe anything.  But we do not generally believe any claim whatsoever (for example, we do not believe that we live on Mars), so it appears we are not inconsistent Turing machines.  One possible counter to Lucas is to claim that we are inconsistent Turing machines that reason in accordance with some form of paraconsistent logic (in paraconsistent logic, the inference from a contradiction to any claim whatsoever is blocked); if so, this explains why we do not endorse any claim whatsoever given our inconsistency (see Priest (2003) for more on paraconsistent logic).  One could also argue that perhaps the inconsistency in question is hidden, buried deep within our belief system; if we are not aware of the inconsistency, then perhaps we cannot use the inconsistency to infer anything at all (Lucas himself mentions this possibility in his (1990)).

Lucas also argues that even if we cannot prove the consistency of a system from within the system itself, as Gödel’s second theorem demonstrates, there might be other ways to determine whether a given system is consistent.  Lucas (1990) points out that there are finitary consistency proofs for both the propositional calculus and the first-order predicate calculus, and there is also Gentzen’s proof of the consistency of Elementary Number Theory.  Discussing Gentzen’s proof in more detail, Lucas (1996) argues that while Gödel’s second theorem demonstrated that we cannot prove the consistency of a system from within the system itself, it might be that we can prove that a system is consistent with considerations drawn from outside the system; Gentzen’s proof, for instance, appeals to transfinite induction up to the ordinal ε0, a principle not available within Elementary Number Theory itself.  One very serious problem with Lucas’s response here, as Lucas (ibid) himself notes, is that the wider considerations that such a proof uses must be consistent too, and this can be questioned.  Another possible response is the following: maybe we can “step outside” of, say, Peano arithmetic and argue that Peano arithmetic is consistent by appealing to considerations that are outside of Peano arithmetic; however, it isn’t clear that we can “step outside” of ourselves to show that we are consistent.

Lucas (1976: 147) also makes the following “Kantian” point:

[perhaps] we must assume our own consistency, if thought is to be possible at all.  It is, perhaps like the uniformity of nature, not something to be established at the end of a careful chain of argument, but rather a necessary assumption we must make if we are to start on any thinking at all.

A possible reply is that assuming we are consistent (because this assumption is a necessary precondition for thought) and our actually being consistent are two different things, and even if we must assume that we are consistent to get thought off of the ground, we might be inconsistent nevertheless.  Finally, Wright (1995) has argued that an intuitionist, at least, who advances Lucas’s argument, can overcome the worry over our consistency.

b. Benacerraf’s Criticism

Benacerraf (1967) makes a well-known criticism of Lucas’s argument.  He points out that it is not easy to construct a Gödel sentence and that in order to construct a Gödel sentence for any given formal system one must have a solid understanding of the algorithm at work in the system.  Further, the formal system the human mind might implement is likely to be extremely complex, so complex, in fact, that we might never obtain the insight into its character needed to construct our version of the Gödel sentence.  In other words, we understand some formal systems, such as the one used in Russell and Whitehead’s (1910) Principia, well enough to construct and see the truth of the Gödel sentence for these systems, but this does not entail that we can construct and see the truth of our own Gödel sentence.  If we cannot, then perhaps we are not different from machines after all; we might be very complicated Turing machines, but Turing machines nevertheless.  To rephrase this objection, suppose that a mechanist produces a complex formal system S and claims that human minds are S.  Of course, Lucas will then try to produce the Gödel sentence for S to show that we are not S.  But S is extremely complicated, so complicated that Lucas cannot produce S’s Gödel sentence, and so cannot disprove this particular mechanistic thesis.  In sum, according to Benacerraf, the most we can infer from Lucas’s argument is a disjunction: “either no (formal system) encodes all human arithmetical capacity – the Lucas-Penrose thought – or any system which does has no axiomatic specification which human beings can comprehend” (Wright, 1995, 87).  One response Lucas (1996) makes is that he [Lucas] could be helped in the effort to produce the Gödel sentence for any given formal system/machine.  Other mathematicians could help and so could computers.  In short, at least according to Lucas, it might be difficult, but it seems that we could, at least in principle, determine what the Gödelian formula is for any given system.

c. The Whiteley Sentence

Whiteley (1962) responded to Lucas by arguing that humans have limitations similar to the one that Lucas’s argument attributes to machines; if so, then perhaps we are not different from machines after all.  Consider, for example, the “Whiteley sentence”: “Lucas cannot consistently assert this formula.”  Lucas cannot consistently assert this sentence, though anyone else can.  So either Lucas is inconsistent, or he cannot assert the sentence on pain of inconsistency, in which case the sentence is true and Lucas is incomplete: there is a truth he cannot produce, just as there is a truth (its Gödel sentence) that a machine cannot produce.  Hofstadter (1981) also argues against Lucas along these lines, claiming that we would not even believe the Whiteley sentence, while Martin and Engleman (1990) defend Lucas on this point by arguing against Hofstadter (1981).
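The dilemma can be put as a two-case analysis (an editorial paraphrase of Whiteley’s point, with W abbreviating the Whiteley sentence):

```latex
% W = "Lucas cannot consistently assert this formula."
\begin{align*}
&\text{Case 1: Lucas asserts } W.\ \text{Either his assertion is inconsistent,}\\
&\quad\text{or it is consistent, in which case } W \text{ is false and he asserts a falsehood.}\\
&\text{Case 2: Lucas does not assert } W.\ \text{Then } W \text{ is a truth Lucas cannot}\\
&\quad\text{consistently produce: he is incomplete, just as the machine is for its Gödel sentence.}
\end{align*}
```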

d. Issues Involving “Idealizations”

A number of objections to Lucas’s argument involve various “idealizations” that the argument makes (or at least allegedly makes).  Lucas’s argument sets up a hypothetical scenario involving a mind and a machine, “but it is an idealized mind and an idealized machine,” neither of which is subject to limitations arising from, say, human mortality or the inability of some humans to understand Gödel’s theorem, and some believe that once these idealizations are rejected, Lucas’s argument falters (Lucas 1990: paragraph 6).  Several specific instances of this line of argument are considered in successive paragraphs.

Boyer (1983) notes that the output of any human mind is finite.  Since it is finite, it could be programmed into and therefore simulated by a machine.  In other words, once we stop ignoring human finitude, that is, once we reject one of the idealizations in Lucas’s argument, we are not different from machines after all.  Lucas (1990: paragraph 8) thinks this objection misses the point: “What is in issue is whether a computer can copy a living me, when I have not yet done all that I shall do, and can do many different things.  It is a question of potentiality rather than actuality that is in issue.”  Lucas’s point seems to be that what is really at issue is what can be done by a human and a machine in principle; if, in principle, the human mind can do something that a machine cannot, then the human mind is not a machine, even if it just so happens that any particular human mind could be modeled by a machine as a result of human finitude.

Lucas (1990: paragraph 9) remarks, “although some degree of idealization seems allowable in considering a mind untrammeled by mortality…, doubts remain about how far into the infinite it is permissible to stray.”    Recall the possible objection discussed above (in section 1) in which the mechanist, when faced with Lucas’s argument, responds by simply producing a new machine that is just like the last except it contains the Gödel sentence as a theorem.  As Lucas points out, this will simply produce a new machine that has a different Gödel sentence, and this can go on forever.  Some might dispute this point though.  For example, some mechanists might try “adding a Gödelizing operator, which gives, in effect a whole denumerable infinity of Gödelian sentences” (Lucas 1990: paragraph 9).  That is, some might try to give a machine a method to construct an infinite number of Gödel sentences; if this can be done, then perhaps any Gödel sentence whatsoever can be produced by the machine.  Lucas (1990) argues that this is not the case, however; a machine with such an operator will have its own Gödel sentence, one that is not on the initial list produced by the operator.  This might appear impossible: how, if the initial list is infinite, can there be an additional Gödel sentence that is not on the list?  It is not impossible though: the move from the initial infinite list of Gödel sentences to the additional Gödel sentence will simply be a move into the “transfinite,” a higher level of infinity than that of the initial list.  It is widely accepted in mathematics, and has been for quite some time, that there are different levels of infinity.
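In outline (editorial notation, continuing the sequence of systems S_n from above): even a machine closed under all the finite-stage Gödel sentences has a Gödel sentence of its own, which is where the move into the transfinite occurs:

```latex
% A "Goedelizing operator" only pushes the problem into the transfinite.
\begin{align*}
S_\omega &= \bigcup_{n<\omega} S_n \qquad \text{(the machine plus all the } G(S_n)\text{)}.\\
&\text{If } S_\omega \text{ is consistent and still effectively axiomatized, then } G(S_\omega)\\
&\text{exists and } S_\omega \nvdash G(S_\omega)\text{; the series continues through } S_{\omega+1}, S_{\omega+2}, \dots
\end{align*}
```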

Coder (1969) argues that Lucas has an overly idealized view of the mathematical abilities of many people; to be specific, Coder thinks that Lucas overestimates the degree to which many people can understand Gödel’s theorem and this somehow creates a problem for Lucas’s argument.  Coder holds that since many people cannot understand Gödel’s theorem, all Lucas has shown is that a handful of competent mathematical logicians are not machines (the idea is that Lucas’s argument only shows that those who can produce and see the truth of the Gödel sentence are not machines, but not everyone can do this).  Lucas (1970a) responds by claiming, for example, that the only difference between those who can understand Gödel’s theorem and those who cannot is that, in the case of the former, it is more obvious that they are not machines; it isn’t, say, that some people are machines and others are not.

Dennett (1972) has claimed there is something odd about Lucas’s argument insofar as it seems to treat humans as creatures that simply wander around asserting truths of first-order arithmetic.  Dennett (1972: 530) remarks,

Men do not sit around uttering theorems in a uniform vocabulary, but say things in earnest and jest, make slips of the tongue, speak several languages…, and – most troublesome for this account – utter all kinds of nonsense and contradictions….

Lucas’s (1990: paragraph 7) response is that the differences between humans and machines that Dennett points to are sufficient for some philosophers to reject mechanism, and that he [Lucas] is simply giving mechanism the benefit of the doubt by assuming that the mechanist can explain these differences.  Furthermore, humans can, and some actually do, produce theorems of elementary number theory as output, so any machine that cannot produce all of these theorems cannot be an adequate model of the human mind.

e. Lewis’s Objection

Lewis (1969) has also formulated an objection to Lucas’s argument:

Lewis argues that I [that is, Lucas] have established that there is a certain Lucas arithmetic which is clearly true and cannot be the output of some Turing machine. If I could produce the whole of Lucas arithmetic, then I would certainly not be a Turing machine. But there is no reason to suppose that I am able in general to verify theoremhood in Lucas arithmetic (Lucas 1970: 149).

To clarify, “Peano arithmetic” is the arithmetic that machines can produce, while “Lucas arithmetic” is the arithmetic that humans can produce; Lucas arithmetic will contain Gödel sentences while Peano arithmetic will not, so, according to Lucas’s argument, humans are not machines.  But Lewis (1969) claims that Lucas has not shown us that he (or anyone else, for that matter) can in fact produce Lucas arithmetic in its entirety, which he must do if his argument is to succeed; so Lucas’s argument is incomplete.  Lucas responds that he does not need to produce Lucas arithmetic in its entirety for his argument to succeed.  All he needs to do to disprove mechanism is produce a single theorem that a human can see is true but a machine cannot; this is sufficient.  Lucas (1970: 149) holds that “what I have to do is to show that a mind can produce not the whole of Lucas arithmetic, but only a small, relevant part.  And this I think I can show, thanks to Gödel’s theorem.”

3. Penrose’s New Version of the Argument

Penrose has formulated and defended versions of the Gödelian argument in two books, 1989’s The Emperor’s New Mind and 1994’s Shadows of the Mind. Since the latter is at least in part an attempt to improve upon the former, this discussion will focus on the latter.  Penrose’s (1994) consists of two main parts: (a) a Gödelian argument to show that human minds are non-computable and (b) an attempt to infer a number of claims involving consciousness and physics from (a).  (a) and (b) are discussed in successive sections.

a. Penrose’s Gödelian Argument

Penrose has defended different versions of the Gödelian argument.  In his earlier work, he defended a version of the argument that was relatively similar to Lucas’s, although there were some minor differences; for example, Penrose used Turing’s theorem, which is closely related to Gödel’s first incompleteness theorem.  Insofar as this version of the argument overlaps with Lucas’s, it faces many of the same objections as Lucas’s argument.  In his (1994) though, Penrose formulates a version of the argument that differs more significantly from Lucas’s.  Penrose regards this version “as the central (new) core argument against the computational modelling of mathematical understanding” offered in his (1994) and notes that some commentators seem to have completely missed the argument (Penrose 1996: 1.3).

Here is a summary of the new argument (this summary closely follows that given in Chalmers (1995: 3.2), as this is the clearest and most succinct formulation of the argument I know of): (1) suppose that “my reasoning powers are captured by some formal system F,” and, given this assumption, “consider the class of statements I can know to be true.”  (2) Since I know that I am sound, F is sound, and so is F’, which is simply F plus the assumption (made in (1)) that I am F (a sound formal system is one that proves only true statements).  But then (3) “I know that G(F’) is true, where this is the Gödel sentence of the system F’” (ibid).  However, (4) Gödel’s first incompleteness theorem shows that F’ cannot see that its Gödel sentence is true.  Further, (5) I am F’ (since F’ is merely F plus the assumption made in (1) that I am F), and I can see the truth of the Gödel sentence; therefore F’ can see the truth of the Gödel sentence.  That is, (6) we have reached a contradiction: F’ both can and cannot see the truth of the Gödel sentence.  Therefore, (7) our initial assumption must be false; F, or any formal system whatsoever, fails to capture my reasoning powers.
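Compressed into a reductio (an editorial rendering of the steps just listed, following Chalmers’s reconstruction):

```latex
% Penrose's new argument as a reductio.
\begin{align*}
&(1)\ \text{Assume my reasoning powers are captured by formal system } F.\\
&(2)\ \text{I know I am sound, so } F' = F + \text{assumption (1)} \text{ is sound.}\\
&(3)\ \text{Hence I know that } G(F') \text{ is true.}\\
&(4)\ \text{By Gödel's theorem, } F' \nvdash G(F').\\
&(5)\ \text{By (1), I am } F'\text{, so } F' \text{ does see that } G(F') \text{ is true.}\\
&(6)\ \text{(4) and (5) contradict one another.}\\
&(7)\ \text{Therefore (1) is false: no formal system captures my reasoning powers.}
\end{align*}
```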

Chalmers (1995: 3.6) thinks the “greatest vulnerability” of this version of the argument is step (2); specifically, he thinks the claim that we know that we are sound is problematic (he attempts to show that it leads to a contradiction (see Chalmers 1995: section 3)).  Others aside from Chalmers also reject the claim that we know that we are sound, or else they reject the claim that we are sound to begin with (in which case we do not know that we are sound either, since one cannot know a falsehood).  For example, McCullough (1995: 3.2) claims that for Penrose’s argument to succeed, two claims must be true: (1) “Human mathematical reasoning is sound.  That is, every statement that a competent human mathematician considers to be ‘unassailably true’ actually is true,” and (2) “The fact that human mathematical reasoning is sound is itself considered to be ‘unassailably true.’”  These claims seem implausible to McCullough (1995: 3.4) though, who remarks, “For people (such as me) who have a more relaxed attitude towards the possibility that their reasoning might be unsound, Penrose’s argument doesn’t carry as much weight.”  In short, McCullough (1995) thinks it is at least possible that mathematicians are unsound, so we do not definitively know that mathematicians are sound.  McDermott (1995) also questions this aspect (among others) of Penrose’s argument.  Looking at the way that mathematicians actually work, he (1995: 3.4) claims, “it is difficult to see how thinkers like these could even be remotely approximated by an inference system that chugs to a certifiably sound conclusion, prints it out, then turns itself off.”  For example, McDermott points out that in 1879 Kempe published a proof of the four-color theorem which was not disproved until 1890 by Heawood; that is, it appears there was an 11-year period during which many competent mathematicians were unsound.

Penrose attempts to overcome such difficulties by distinguishing between individual, correctable mistakes that mathematicians sometimes make and things they know are “unassailably” true.  He (1994: 157) claims “If [a] robot is…like a genuine mathematician, although it will still make mistakes from time to time, these mistakes will be correctable…according to its own internal criteria of “unassailable truth.””  In other words, while mathematicians are fallible, they are still sound because their mistakes can be distinguished from things they know are unassailably true and can also be corrected (and any machine, if it is to mimic mathematical reasoning, must be the same way).  The basic idea is that mathematicians can make mistakes and still be sound because only the unassailable truths are what matter; these truths are the output of a sound system, and we need not worry about the rest of the output of mathematicians.  McDermott (1995) remains unconvinced; for example, he wonders what “unassailability” means in this context and thinks Penrose is far too vague on the subject.  For more on these issues, including further responses to these objections from Penrose, see Penrose (1996).

b. Consciousness and Physics

One significant difference between Lucas’s and Penrose’s discussions of the Gödelian argument is that, as alluded to above, Penrose infers a number of further claims from the argument concerning consciousness and physics.  Penrose thinks the Gödelian argument implies, for example, that consciousness must somehow arise from the quantum realm (specifically, from the quantum properties of “microtubules”) and that we “will have no chance…[of understanding consciousness]… until we have a much more profound appreciation of the very nature of time, space, and the laws that govern them” (Penrose 1994: 395).  Many critics focus their attention on defeating Penrose’s Gödelian argument, thinking that if it fails, we have little or no reason to endorse Penrose’s claims about consciousness and physics.  McDermott (1995: 2.2) remarks, “all the plausibility of Penrose’s theory of “quantum consciousness” in Part II of the book depends on the Gödel argument being sound,” so, if we can refute the Gödelian argument, we can easily reject the rest.  Likewise, Chalmers (1995: 4.1) claims that the “reader who is not convinced by Penrose’s Gödelian arguments is left with little reason to accept his claims that physics is non-computable and that quantum processes are essential to cognition…”  While there is little doubt that Penrose’s claims about consciousness and physics are largely motivated by the Gödelian argument, Penrose thinks that one might be led to such views in the absence of the Gödelian argument (for example, Penrose (1994) appeals to Libet’s (1992) work in an effort to show that consciousness cannot be explained by classical physics).  Some (such as Maudlin (1995)) doubt that there even is a link between the Gödelian argument and Penrose’s claims about consciousness and physics; therefore, even if the Gödelian argument is sound, this might not imply that Penrose’s views about consciousness and physics are true.  Still others have offered objections that directly and specifically attack Penrose’s claims about consciousness and physics, apart from his Gödelian argument; some of these objections are now briefly discussed.

Some have expressed doubts over whether quantum effects can influence neural processes.  Klein (1995: 3.4) states “it will be difficult to find quantum effects in pre-firing neural activity” because the brain operates at too high a temperature and “is made of floppy material (the neural proteins can undergo an enormously large number of different types of vibration).”  Furthermore, Penrose “discusses how microtubules can alter synaptic strengths…but nowhere is there any discussion of the nature of synaptic modulations that can be achieved quantum-mechanically but not classically” (Klein 1995: 3.6).  Also, “the quantum nature of neural activity across the brain must be severely restricted, since Penrose concedes that neural firing is occurring classically” (Klein 1995: 3.6).  In sum, at least given what we know at present, it is far from clear that events occurring at the quantum level can have any effect, or at least much of an effect, on events occurring at the neural level.  Penrose (1994) hopes that the specific properties of microtubules can help overcome such issues.

As mentioned above, the Gödelian argument, if successful, would show that strong artificial intelligence is false, and of course Penrose thinks strong A.I. is false.   However, Chalmers (1995: 4.2) argues that Penrose’s skepticism about artificial intelligence is driven largely by the fact that “it is so hard to see how the mere enaction of a computation should give rise to an inner subjective life.”  But it isn’t clear how locating the origin of consciousness in quantum processes that occur in microtubules is supposed to help: “Why should quantum processes in microtubules give rise to consciousness, any more than computational processes should?  Neither suggestion seems appreciably better off than the other” (ibid).  According to Chalmers, Penrose has simply replaced one mystery with another.  Chalmers (1995: 4.3) feels that “by the end of the book the “Missing Science of Consciousness” seems as far off as it ever was.”

Baars (1995) has doubts that consciousness is even a problem in or for physics (of course, some philosophers have had similar doubts).  Baars (1995: 1.3) writes,

The…beings we see around us are the products of billions of years of biological evolution. We interact with them – with each other – at a level that is best described as psychological. All of our evidence regarding consciousness …would seem to be exclusively psychobiological.

Furthermore, Baars cites much promising current scientific work on consciousness, points out that some of these current theories have not yet been disproven and that, relatively speaking, our attempt to explain consciousness scientifically is still in its infancy, and concludes that “Penrose’s call for a scientific revolution seems premature at best” (Baars 1995: 2.3).  Baars is also skeptical of the claim that the solution to the problem of consciousness will come from quantum mechanics specifically.  He claims “there is no precedent for physicists deriving from [quantum mechanics] any macro-level phenomenon such as a chair or a flower…much less a nervous system with 100 billion neurons” (Baars 1995: 4.2) and remarks that it seems to be a leap of faith to think that quantum mechanics can unravel the mystery of consciousness.

4. Gödel’s Own View

One interesting question that has not yet been addressed is: what did Gödel think his first incompleteness theorem implied about mechanism and the mind in general?  Gödel, who discussed his views on this issue in his famous “Gibbs lecture” in 1951, stated,

So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified . . . (Gödel 1995: 310).

That is, his result shows that either (i) the human mind is not a Turing machine or (ii) there are certain unsolvable mathematical problems.  However, Lucas (1998: paragraph 1) goes even further and argues “it is clear that Gödel thought the second disjunct false,” that is, Gödel “was implicitly denying that any Turing machine could emulate the powers of the human mind.”  So, perhaps the first thinker to endorse a version of the Lucas-Penrose argument was Gödel himself.

5. Other Anti-Mechanism Arguments

Finally, there are some alternative anti-mechanism arguments to Lucas-Penrose.  Two are briefly mentioned.  McCall (1999) has formulated an interesting argument.  A Turing machine can only know what it can prove, and to a Turing machine, provability would be tantamount to truth.  But Gödel’s theorem seems to imply that truth is not always provability.  The human mind can handle cases in which truth and provability diverge.  A Turing machine, however, cannot.  But then we cannot be Turing machines.  A second alternative anti-mechanism argument is formulated in Cogburn and Megill (2010).  They argue that, given certain central tenets of Intuitionism, the human mind cannot be a Turing machine.

6. References and Further Reading

  • Benacerraf, P. (1967). “God, the Devil, and Gödel,” Monist 51:9-32.
    • Makes a number of objections to Lucas’s argument; for example, the complexity of the human mind implies that we might be unable to formulate our own Gödel sentence.
  • Boyer, D. (1983). “J. R. Lucas, Kurt Godel, and Fred Astaire,” Philosophical Quarterly 33:147-59.
    • Argues, among other things, that human output is finite and so can be simulated by a Turing machine.
  • Chalmers, D. J. (1996). “Minds, Machines, and Mathematics,” Psyche 2:11-20.
    • Contra Penrose, we cannot know that we are sound.
  • Coder, D. (1969). “Gödel’s Theorem and Mechanism,” Philosophy 44:234-7.
    • Not everyone can understand Gödel, so Lucas’s argument does not apply to everyone.
  • Cogburn, J. and Megill, J. (2010).  “Are Turing machines Platonists?  Inferentialism and the Philosophy of Mind,” Minds and Machines 20(3): 423-40.
    • Intuitionism and Inferentialism entail the falsity of the Computational Theory of Mind.
  • Dennett, D.C. (1972). “Review of The Freedom of the Will,” The Journal of Philosophy 69: 527-31.
    • Discusses Lucas’s The Freedom of the Will, and specifically his Gödelian argument.
  • Feferman, S. (1996). “Penrose’s Gödelian Argument,” Psyche 2(7).
    • Points out some technical mistakes in Penrose’s discussion of Gödel’s first theorem.  Penrose responds in his (1996).
  • Gödel, K. (1931). “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatsh. Math. Phys. 38: 173-198.
    • Gödel’s first incompleteness theorem.
  • Gödel, K. (1995). Collected Works III (ed. S. Feferman). New York: Oxford University Press.
    • Gödel discusses his first theorem and the human mind.
  • Dennett, D.C. and Hofstadter, D. R. (1981).  The Mind’s I: Fantasies and Reflections on Self and Soul.  New York: Basic Books.
    • Contains Hofstadter’s discussion of the Whiteley sentence.
  • Hutton, A. (1976). “This Gödel is Killing Me,” Philosophia 3:135-44.
    • Probabilistic arguments that show that we can’t know we are consistent.
  • Klein, S.A. (1995). “Is Quantum Mechanics Relevant to Understanding Consciousness?” Psyche 2(3).
    • Questions Penrose’s claims about consciousness arising from the quantum mechanical realm.
  • Lewis, D. (1969). “Lucas against Mechanism,” Philosophy 44:231-3.
    • Lucas cannot produce all of “Lucas Arithmetic.”
  • Libet, B. (1992). “The Neural Time-Factor in Perception, Volition and Free Will,” Revue de Métaphysique et de Morale 2:255-72.
    • Penrose appeals to Libet to show that classical physics cannot account for consciousness.
  • Lucas, J. R. (1961). “Minds, Machines and Gödel,” Philosophy 36:112-127.
    • Lucas’s first article on the Gödelian argument.
  • Lucas, J. R. (1968). “Satan Stultified: A Rejoinder to Paul Benacerraf,” Monist 52:145-58.
    • A response to Benacerraf’s (1967).
  • Lucas, J. R. (1970a). “Mechanism: A Rejoinder,” Philosophy 45:149-51.
    • Lucas’s response to Coder (1969) and Lewis (1969).
  • Lucas, J. R. (1970b). The Freedom of the Will. Oxford: Oxford University Press.
    • Discusses and defends the Gödelian argument.
  • Lucas, J. R. (1976). “This Gödel is killing me: A rejoinder,” Philosophia 6:145-8.
    • Lucas’s reply to Hutton (1976).
  • Lucas, J. R. (1990). “Mind, machines and Gödel: A retrospect.”  A paper read to the Turing Conference at Brighton on April 6th.
    • Overview of the debate; Lucas considers numerous objections to his argument.
  • Lucas, J. R. (1996).  “The Godelian Argument: Turn Over the Page.”  A paper read at a BSPS conference in Oxford.
    • Another overview of the debate.
  • Lucas, J. R. (1998).  “The Implications of Gödel’s Theorem.”  A paper read to the Sigma Club.
    • Another overview.
  • Nagel, E. and Newman J.R. (1958).  Gödel’s Proof.  New York: New York University Press.
    • Short and clear introduction to Gödel’s first incompleteness theorem.
  • Martin, J. and Engleman, K. (1990). “The Mind’s I Has Two Eyes,” Philosophy 65:510-16.
    • More on the Whiteley sentence.
  • Maudlin, T. (1996).  “Between the Motion and the Act…” Psyche 2:40-51.
    • There is no connection between Penrose’s Gödelian argument and his views on consciousness and physics.
  • McCall, S. (1999).  “Can a Turing Machine Know that the Gödel Sentence is True?”  Journal of Philosophy 96(10): 525-32.
    • An anti-mechanism argument.
  • McCullough, D. (1996). “Can Humans Escape Gödel?” Psyche 2:57-65.
    • Among other things, doubts that we know we are sound.
  • McDermott, D. (1996). “Penrose is Wrong,” Psyche 2:66-82.
    • Criticizes Penrose on a number of issues, including the soundness of mathematicians.
  • Penrose, R. (1989). The Emperor’s New Mind. Oxford: Oxford University Press.
    • Penrose’s first book on the Gödelian argument and consciousness.
  • Penrose, R. (1994).  Shadows of the Mind.  Oxford: Oxford University Press.
    • Human reasoning cannot be captured by a formal system; consciousness arises from the quantum realm; we need a revolution in physics to fully understand consciousness.
  • Penrose, R. (1996). “Beyond the Doubting of a Shadow,” Psyche 2(23).
    • Responds to various criticisms of his (1994).
  • Priest, G. (2003). “Inconsistent Arithmetic: Issues Technical and Philosophical,” in Trends in Logic: 50 Years of Studia Logica (eds. V. F. Hendricks and J. Malinowski), Dordrecht: Kluwer Academic Publishers.
    • Discusses paraconsistent logic.
  • Putnam, H. (1960). “Minds and Machines,” Dimensions of Mind. A Symposium (ed. S. Hook). London: Collier-Macmillan.
    • Raises the consistency issue for Lucas.
  • Rogers, H. (1957). Theory of Recursive Functions and Effective Computability (mimeographed).
    • Early mention of the issue of consistency for Gödelian arguments.
  • Whitehead, A. N. and Russell, B. (1910, 1912, 1913). Principia Mathematica, 3 vols, Cambridge: Cambridge University Press.
    • An attempt to base mathematics on logic.
  • Wang, H. (1981).  Popular Lectures on Mathematical Logic. Mineola, NY: Dover.
    • Textbook on formal logic.
  • Whiteley, C. (1962). “Minds, Machines and Gödel: A Reply to Mr. Lucas,” Philosophy 37:61-62.
    • Humans are limited in ways similar to machines.
  • Wright, C. (1995).  “Intuitionists are Not Turing Machines,” Philosophia Mathematica 3(1):86-102.
    • An intuitionist who advances the Lucas-Penrose argument can overcome the worry over our consistency.

Author Information

Jason Megill
Email: jmegill@carroll.edu
Carroll College
U. S. A.

Philosophy of Medicine

While philosophy and medicine, beginning with the ancient Greeks, enjoyed a long history of mutually beneficial interactions, the professionalization of “philosophy of medicine” is a nineteenth-century event.  One of the first academic books on the philosophy of medicine in modern terms was Elisha Bartlett’s Essay on the Philosophy of Medical Science, published in 1844.  In the mid to late twentieth century, philosophers and physicians contentiously debated whether philosophy of medicine was a separate discipline distinct from the disciplines of either philosophy or medicine.  The twenty-first century consensus, however, is that it is a distinct discipline with its own set of problems and questions. Professional journals, book series, individual monographs, as well as professional societies and meetings are all devoted to discussing and answering that set of problems and questions.  This article examines—by utilizing a traditional approach to philosophical investigation—all aspects of the philosophy of medicine and the attempts of philosophers to address its unique set of problems and questions.  To that end, the article begins with metaphysical problems and questions facing modern medicine such as reductionism vs. holism, realism vs. antirealism, causation in terms of disease etiology, and notions of disease and health.  The article then proceeds to epistemological problems and questions, especially rationalism vs. empiricism, sound medical thinking and judgments, robust medical explanations, and valid diagnostic and therapeutic knowledge.  Next, it addresses the vast array of ethical problems and questions, particularly with respect to principlism and the patient-physician relationship.  The article concludes with a discussion of what constitutes the nature of medical knowledge and practice, in light of recent trends towards both evidence-based and patient-centered medicine.  Finally, even though a vibrant literature on nonwestern traditions is available, this article is concerned only with the western tradition of philosophy of medicine (Kaptchuk, 2000; Lad, 2002; Pole, 2006; Unschuld, 2010).

Table of Contents

  1. Metaphysics
    1. Reductionism vs. Holism
    2. Realism vs. Antirealism
    3. Causation
    4. Disease and Health
  2. Epistemology
    1. Rationalism vs. Empiricism
    2. Medical Thinking
    3. Explanation
    4. Diagnostic and Therapeutic Knowledge
  3. Ethics
    1. Principlism
    2. Patient-Physician Relationship
  4. What is Medicine?
  5. References and Further Reading

1. Metaphysics

Traditionally, metaphysics pertains to the analysis of objects or events and the forces or factors causing or impinging upon them.  One branch of metaphysics, denoted ontology, investigates problems and questions concerning the nature and existence of objects or events and their associated forces or factors.  For philosophy of medicine in the twenty-first century, the two chief objects are the patient’s disease and health, along with the forces or factors responsible for causing them.  “What is/causes health?” or “What is/causes disease?” are contentious questions for philosophers of medicine.  Another branch of metaphysics involves the examination of presuppositions that inform a given ontology.  For philosophy of medicine, the most controversial debate centers around the presuppositions of reductionism and holism.  Questions like “Can a disease be sufficiently reduced to its elemental components?” or “Is the patient more than simply the sum of physical parts?” drive discussion among philosophers of medicine.  In addition, the debate between realism and antirealism has important traction within the field.  This debate centers on questions like, “Are disease-causing entities real?” or “Are these entities socially constructed?”   This section first explores the reductionism-holism and realism-antirealism debates, along with the notion of causation, before turning attention to the notions of disease and health.

a. Reductionism vs. Holism

The reductionism-holism debate enjoys a lively history, especially from the middle to the latter part of the twentieth century.  Reductionism, broadly construed, is the diminution of complex objects or events to their component parts.  In other words, the properties of the whole are simply the addition or summation of the properties of the individual parts.  Such reductionism is often called metaphysical or ontological reductionism to distinguish it from methodological or epistemological reductionism.  Methodological reductionism refers to the investigation of complex objects and events and their associated forces or factors by using technology that isolates and analyzes individual components only.  Epistemological reductionism involves the explanation of complex objects and events and their associated forces or factors in terms of their individual components only.  For the life sciences vis-à-vis reductionism, an organism is made of component parts like bio-macromolecules and cells, whose properties are sufficient for investigating and explaining the organism, if not life itself.  Life scientists often sort these parts into a descending hierarchy. Beginning with the organism, they proceed downward through organ systems, individual organs, tissues, cells, and macromolecules until reaching the atomic and subatomic levels.  Albert Szent-Gyorgyi once remarked that, as he descended this hierarchy in his quest to understand living organisms, life slipped between his fingers.  Holism, however, is the position that the properties of the whole are not reducible to the properties of its individual components.  Jan Smuts (1926) introduced the term in the early part of the twentieth century, especially with respect to biological evolution, to account for living processes—without the need for immaterial components.

The relevance of the reductionism-holism debate pertains to both medical knowledge and practice.  Reductionism influences not only how a biomedical scientist investigates and explains disease, but also how a clinician diagnoses and treats it.  For example, if a biomedical researcher believes that the underlying cause of a mental disease is dysfunction in brain processes or mechanisms, especially at the molecular level, then that disease is often investigated exclusively at that level.  In turn, a clinician classifies mental diseases in terms of brain processes or mechanisms at the molecular level, such as depletion in levels of the neurotransmitter serotonin.  Subsequently, the disease is treated pharmacologically by prescribing drugs to elevate the low levels of the neurotransmitter in the depressed brain to levels considered normal within the non-depressed brain.  Although the assumption of reductionism produces a detailed understanding of diseases in molecular or mechanistic terms, many clinicians and patients are dissatisfied with it.  Both feel that the assumption excludes important information concerning the nature of the disease as it shapes the patient’s illness and life experience.  Such information is vital for treating patients with chronic conditions, rather than simply treating the disease.  In other words, patients often feel as if physicians reduce them to their disease or diseased body part rather than attending to the overall illness experience.  The assumption of holism better suits an approach to medical knowledge and practice that includes the patient’s illness experience.  Rather than striving exclusively to restore the patient to a pre-diseased state, the clinician assists the patient in redefining what the illness means for his or her life.  The outcome is not necessarily a physical cure so much as a healing of the fragmentation in the patient’s life caused by the illness.

b. Realism vs. Antirealism

Realism is the philosophical notion that observable objects and events are actual objects and events, independent of the person observing them.  In other words, since it exists outside the minds of those observing it, reality does not depend on conceptual structures or linguistic formulations.  Proponents of realism also hold that even unobservable objects and events, like subatomic particles, exist.  Historically, realists believe that universals—abstractions of objects and events—exist separately from the mind cognizing them.  For example, terms like bacteria and cell denote real objects in the natural world, which exist apart from the human minds trying to examine and understand them.  Furthermore, scientific investigations into natural objects like bacteria and cells yield true accounts of these objects.  Antirealism, on the other hand, is the philosophical notion that observable objects and events are not actual objects and events independent of the observer but are instead dependent upon the person observing them.  In other words, these objects and events are mind-dependent, not mind-independent.  Antirealists deny the existence of objects and events apart from the mind cognizing them; human minds construct these objects and events based on social or cultural values.  Historically, antirealists subscribe to nominalism, according to which universals do not exist, though predicates of particular objects do.  Finally, they question the truth of scientific accounts of the world, since there is no framework-independent standpoint from which an account could be absolutely correct.  Antirealists hold that such truth is framework-dependent, so that when one changes the framework, truth itself changes.

The debate between realists and antirealists has important implications for philosophers of medicine, as well as for the practice of clinical medicine.  For example, a contentious issue is whether disease entities, or the conditions for the expression of a disease, are real.  Realists argue that such entities or conditions are real and exist independently of the medical researchers investigating them, while antirealists deny their reality and existence.  Take the example of depression.  According to realists, the neurotransmitter serotonin is a real entity that exists in a real brain, apart from clinical investigations or investigators.  A low level of that neurotransmitter is a real condition for the disease’s expression.  For antirealists, however, serotonin is a laboratory or clinical construct based on experimental or clinical conditions.  Changes in that construct lead to changes in understanding the disease.  The debate is not simply academic but has traction for the clinic.  Clinical realists believe that restoring serotonin levels cures depression.  Clinical antirealists are less confident that restoring levels of the neurotransmitter will effect a cure.  Rather, they believe that neither diagnosis nor treatment of depression reduces to simple measurements of serotonin levels.  Importantly, antirealists do not harbor the hope that additional information is likely to remedy the clinical problems associated with serotonin replacement therapy.  The problems are ontological to their core: the neurotransmitter is a mental construct entirely dependent on efforts to investigate it and to translate laboratory investigations into clinical practice.

c. Causation

Causation has a long philosophical history, beginning with the ancient Greek philosophers.  Aristotle in particular provided a robust account of causation in terms of material cause, what something is made of; formal cause, how something is made; efficient cause, the forces responsible for making something; and final cause, the purpose for or end to which something is made.  In the modern period, Francis Bacon pruned the four Aristotelian causes to material and efficient causation.  With the rise of British empiricism, especially David Hume’s philosophical analysis, causation came under critical scrutiny.  For Hume in particular, causation is simply the constant conjunction of objects and events, with nothing of ontological significance hooking the cause to the effect.  According to Hume, custom and habit lead us to assume that something real connects a cause to its effect.  Although Hume’s skepticism about causal forces prevailed in his personal study, it did not prevail in the laboratory.  During the nineteenth century, with the maturation of the scientific revolution, only one cause survived to account for natural entities and phenomena—the material cause, which is not strictly Aristotle’s original notion of material causation.  The modern notion involves mechanisms and processes and thereby eliminates efficient causation.  The material cause became the engine driving mechanistic ontology.  During the twentieth century, after the Einsteinian and quantum revolutions, mechanistic ontology gave way to a physical ontology that included forces such as gravity as causal entities.  A century later, efficient causation remains the purview of philosophers, who argue endlessly about it, while scientists take physical causation as unproblematic in constructing cause-and-effect models of natural phenomena.

For philosophers of medicine, causation is an important notion for analyzing both disease etiology and therapeutic efficacy (Carter, 2003).  At the molecular level, causation is framed physico-chemically in order to investigate and explain disease mechanisms.  In the example of depression, serotonin is a neurotransmitter that binds specific receptors within certain brain locations, which in turn causes a cascade of molecular events that maintain mental health.  This underlying causal or physical mechanism is critical not only for understanding the disease, but also for treating it.  Medical causation also operates at other levels.  For infectious diseases, the Henle-Koch postulates are important in determining the causal relationship between an infectious microorganism and an infected host (Evans, 1993).  To secure that relationship, the microorganism must be associated with every occurrence of the disease, be isolated from the infected host, be grown under in vitro conditions, and be shown to elicit the disease upon infection of a healthy host.  Finally, medical causation operates at the epidemiological or population level.  Here, Austin Bradford Hill’s nine criteria are critical for establishing a causal relationship between environmental factors and disease incidence (Hill, 1965).  For example, the causal claim connecting cigarette smoking and lung cancer rests on the strength of the association between smoking and lung cancer, the consistency of that association, and its plausibility in terms of biological mechanisms.  These examples establish the importance of causal mechanisms in disease etiology and treatment, especially for diseases with an organic basis; however, some diseases, particularly mental disorders, do not reduce to such readily apparent causality.  Instead, they represent rich areas of investigation for philosophers of medicine.

d. Disease and Health

“What is disease?” is a contentious question among philosophers of medicine.  These philosophers distinguish among four different notions of disease.  The first is an ontological notion.  According to its proponents, disease is a palpable object or entity whose existence is distinct from that of the diseased patient.  For example, disease may be the condition brought on by the infection of a microorganism, such as a virus.  Critics, who champion a physiological notion of disease, argue that advocates of the ontological notion confuse the disease condition, which is an abstract notion, with a concrete entity like a virus.  In other words, proponents of the first notion often conflate the disease’s condition with its cause.  Supporters of the second notion argue that disease represents a deviation from normal physiological functioning.  The best-known defender of this notion is Christopher Boorse (1987), who defines disease as a value-free deviation from a statistical norm with respect to “species design.”  Critics of this notion, however, cite the ambiguity of the term “norm” absent a well-defined reference class.  Instead of a statistical norm, evolutionary biologists propose a notion of disease as a maladaptive mechanism, which factors in the organism’s biological history.  Critics of this third notion claim that disease manifests itself, especially clinically, in the individual patient and not in a population.  A population may be important to epidemiologists but not to clinicians, who must treat individual patients whose manifestations of a disease, and responses to therapy for it, may differ from one another significantly.  The final notion of disease addresses this criticism.  The genetic notion claims that disease is the mutation in or absence of a gene.  Its champions assert that each patient’s genomic constitution is unique; by knowing that constitution, clinicians are able both to diagnose the patient’s disease and to tailor a specific therapeutic protocol.  Critics of the genetic notion claim that disease, especially its experience, cannot be reduced to nucleotide sequences.  Instead, it requires a larger notion that includes social and cultural factors.
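
Boorse’s statistical approach can be made concrete with a small sketch.  The following Python fragment, using purely hypothetical reference values and an assumed two-standard-deviation cutoff, flags a physiological reading as pathological when it falls well below the norm for a reference class; it illustrates the bare idea of a statistical norm, not any clinical standard.

    from statistics import mean, stdev

    # Hypothetical reference-class readings for some physiological function
    reference_class = [92, 105, 98, 110, 101, 95, 99, 103, 97, 100]
    patient_reading = 72

    mu, sigma = mean(reference_class), stdev(reference_class)
    z_score = (patient_reading - mu) / sigma

    # Assumed cutoff: more than two standard deviations below the norm
    is_pathological = z_score < -2
    print(f"z = {z_score:.1f}; pathological: {is_pathological}")

The critics’ point about reference classes shows up directly in the sketch: everything depends on which population supplies the reference values.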

“What is health?” is an equally contentious question among philosophers of medicine.  The most common notion of health is simply the absence of disease.  Health, according to proponents of this notion, represents a default state as opposed to pathology.  In other words, if an organism is not sick, then it must be healthy.  Unfortunately, this notion does not distinguish between various grades of health or preconditions toward illness.  For example, as the cells responsible for serotonin stop producing the neurotransmitter, a person becomes prone to depression.  Such a person is not as healthy as a person who is making sufficient amounts of serotonin.  An adequate understanding of health should account for such preconditions.  Moreover, health as the absence of disease often depends upon personal and social values concerning what health is.  Again, ambiguity enters into defining health given these values: one person’s health might be very different from another’s.  The second notion of health does permit distinctions between grades of health, in terms of quantifying it, and does not depend upon personal or social values.  Proponents of this notion, such as Boorse, define health in terms of normal functioning, where the normal reflects a statistical norm with respect to species design.  For example, a person with low levels of serotonin who is not clinically symptomatic in terms of depression is not as healthy as a person with statistically normal neurotransmitter levels.  Criticisms of the second notion revolve around its failure to incorporate the social dimension of health; some critics jettison the notion altogether, opting instead for the notion of wellbeing.  Wellbeing is a normative notion that combines both a person’s values, especially in terms of his or her life goals, and objective physiological states.  Because normative notions incorporate a person’s value system, they are often difficult to define and defend, since values vary from person to person and culture to culture.  Proponents of this notion include Lennart Nordenfelt (1995), Carol Ryff and Burton Singer (1998), and Carolyn Whitbeck (1981).

2. Epistemology

Epistemology is the branch of philosophy concerned with the analysis of knowledge, in terms of both its origins and its justification.  The overarching question is, “What is knowing or knowledge?”  Subsidiary questions include, “How do we know that we know?”; “Are we certain or confident in our knowing or knowledge?”; and “What is it we know when we claim we know?”  Philosophers generally distinguish three kinds or theories of knowledge.  The first pertains to knowledge by acquaintance, in which a knowing or epistemic agent is familiar with an object or event.  It is descriptive in nature, that is, a knowing-about knowledge.  For example, a surgeon is well acquainted with the body’s anatomy before performing an operation.  The second is competence knowledge, the species of knowledge useful for performing a task skillfully.  It is performative or procedural in nature, that is, a knowing-how knowledge.  Again, by way of example, the surgeon must know how to perform a specific surgical procedure before executing it.  The third, which interests philosophers most, is propositional knowledge.  It pertains to certain truths or facts, and philosophers traditionally call this species of knowledge “justified true belief.”  Rather than descriptive or performative in nature, it is explanatory, a knowing-that knowledge.  Again, by way of illustration, the surgeon must know certain facts or truths about the body’s anatomy, such as the physiological function of the heart, before performing open-heart surgery.  This section begins with the debate between rationalists and empiricists over the origins of knowledge, before turning to medical thinking and explanation and then concluding with the nature of diagnostic and therapeutic knowledge.

a. Rationalism vs. Empiricism

The rationalism-empiricism debate has a long history, beginning with the ancient Greeks, and focuses on the origins of knowledge and its justification.  “Is that origin rational or empirical in nature?”  “Is knowledge deduced or inferred from first principles or premises?”  “Or is it the result of careful observation and experience?”  These are just a few of the questions fueling the debate, along with similar questions concerning epistemic justification.  Rationalists, such as Socrates, Plato, Descartes, and Kant, appeal to reason as both the origin and the justification of knowledge.  On this view, knowledge is intuitive in nature and, in contrast to the senses or perception, exclusively the product of the mind.  Given the corruptibility of the senses, argue the rationalists, no one can guarantee or warrant knowledge except through the mind’s capacity to reason.  In other words, rationalism provides a firm foundation not only for the origin of knowledge but also for warranting its truth.  Empiricists, such as Aristotle, Avicenna, Bacon, Locke, Hume, and Mill, dismiss the rationalists’ fears and exalt observation and experience with respect to the origin and justification of knowledge.  According to empiricists, the mind is a blank slate (Locke’s tabula rasa) upon which observations and experiences inscribe knowledge.  Here, empiricists champion the role of experimentation in the origin and justification of knowledge.

The rationalism-empiricism debate originates specifically with ancient Greek and Roman medicine.  The Dogmatic school of medicine, founded by Hippocrates’ son and son-in-law in the fourth century BCE, claimed that reason is sufficient for understanding the underlying causes of diseases and thereby for treating them.  Dogmatics relied on theory, especially the humoral theory of health and disease, to practice medicine.  The Empiric school of medicine, on the other hand, asserted that only observation and experience, not theory, provide a sufficient foundation for medical knowledge and practice.  Theory is an outcome of medical observation and experience, not their foundation.  Empirics relied on palpable, not underlying, causes to explain health and disease and to practice medicine.  Philinus of Cos and his successor Serapion of Alexandria, both third-century BCE Greek physicians, are credited with founding the Empiric school, which included the influential Alexandrian school.  A third school, the Methodic school of medicine, arose in response to the debate between the Dogmatics and Empirics.  In agreement with the Empirics, and in contrast to the Dogmatics, Methodics argued that underlying causes are superfluous to the practice of medicine.  Rather, the patient’s immediate symptoms, along with common sense, are sufficient and provide the necessary information to treat the patient.  Thus, in contrast to the Empirics, Methodics argued that accumulated experience is unnecessary to treat disease and that the disease’s symptoms provide all the knowledge needed to practice medicine.

The Dogmatism-Empiricism debate, with Methodism representing a minority position, raged on and was still lively in the seventeenth and eighteenth centuries.  For example, Giorgio Baglivi (1723), an Armenian-born, seventeenth-century Italian physician, decried the polarization of physicians along dogmatic and empiric boundaries and recommended resolving the debate by combining the two.  Contemporary philosophical commentators on medicine recognize the importance of both epistemic positions, and several propose a synthesis of them.  For example, Jan van Gijn (2005) advocates an “empirical cycle” in which experiments drive hypothetical thinking, which in turn results in additional experimentation.  Although no clear resolution of the rationalism-empiricism debate in medicine appears on the immediate horizon, the debate does emphasize the importance of and the need for additional philosophical analysis of epistemic issues surrounding medical knowledge.

b. Medical Thinking

“How doctors think” is the title of two twenty-first-century books on medical thinking.  The first is by a medical humanities scholar, Kathryn Montgomery (2006).  Montgomery addresses vital questions about how physicians go about making clinical decisions when often faced with tangible uncertainty.  She argues for medical thinking based not on science but on Aristotelian phronesis, or practical or intuitive reasoning.  The second book is by a practicing clinician, Jerome Groopman (2007).  Groopman also addresses questions about medical thinking, and he too pleads for clinical reasoning built on practical or intuitive foundations.  Both books call for reintroducing the art of medical thinking to offset an overdependence on the science of medical thinking.  In general, medical thinking reflects the cognitive faculties of clinicians for making rational decisions about what ails patients and how best to treat them both safely and effectively.  During the twentieth century, that thinking mimicked the technical thinking of natural scientists—and for good reason: as Paul Meehl (1954) convincingly demonstrated, statistical reasoning in the clinical setting outperforms intuitive clinical thinking.  Despite Montgomery’s and Groopman’s attempts to swing the pendulum back toward the art of medical thinking, the risk of medical errors often associated with such thinking demands clearer analysis of the science of medical thinking.  That analysis centers traditionally on the logical and algorithmic methods of clinical judgment and decision-making, to which twenty-first-century medicine has turned.

Georg Stahl’s De logico medica, published in 1702, is one of the first modern treatises on medical logic.  However, not until the nineteenth century did the logic of medicine become an area of sustained analysis or have an impact on medical knowledge and practice.  For example, Friedrich Oesterlen’s Medical Logic, published in English translation in 1855, promoted medical logic not only as a tool for assessing the formal relationship between propositional statements, and thereby avoiding clinical error, but also for analyzing the relationships among medical facts and evidence in the generation of medical knowledge.  Oesterlen’s logic of medicine was indebted to the Paris school of clinical medicine, especially Pierre Louis’ numerical method (Morabia, 1996).  Contemporary logic of medicine continues this tradition, especially in terms of the statistical analysis of experimental and clinical data.  For example, Edmond Murphy’s The Logic of Medicine (1997) analyzes the logical and statistical methods used to evaluate both experimental and clinical evidence.  Specifically, Murphy identifies several “rules of evidence” critical for interpreting such evidence as medical knowledge.  A particularly vigorous debate concerns the role of frequentist versus Bayesian statistics in determining the statistical significance of data from clinical trials.  The logic of medicine, then, represents an important and fruitful discipline in which medical scientists and clinical practitioners can detect and avoid errors in the generation and substantiation of medical knowledge and in its application or translation to the clinic.
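
The frequentist-Bayesian contrast just mentioned can be illustrated with a small simulation.  A frequentist asks how improbable the observed trial data would be if treatment and control did not differ (a p-value); a Bayesian instead computes a posterior probability that the treatment is superior, given the data and a prior.  The following Python sketch takes the Bayesian route for wholly invented trial counts, using uniform Beta(1,1) priors; the numbers are illustrative assumptions, not data from any actual trial.

    import random

    # Invented trial outcomes (illustrative assumptions only)
    treated_successes, treated_total = 30, 50
    control_successes, control_total = 20, 50

    # With uniform Beta(1,1) priors, each arm's posterior success rate is a
    # Beta distribution; estimate by simulation the probability that the
    # treatment arm's true rate exceeds the control arm's.
    draws = 100_000
    wins = sum(
        random.betavariate(1 + treated_successes, 1 + treated_total - treated_successes)
        > random.betavariate(1 + control_successes, 1 + control_total - control_successes)
        for _ in range(draws)
    )
    print(f"Posterior probability the treatment is superior: {wins / draws:.2f}")

The frequentist would instead report how often counts at least this lopsided would arise by chance under the hypothesis of no difference; which of the two quantities clinicians should care about is precisely what the debate disputes.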

Philosophers of medicine actively debate the best courses of action for making clinical decisions.  Clinical judgment is an informal process in which a clinician assesses a patient’s clinical signs and symptoms to come to an accurate judgment about what is ailing the patient.  To make such a judgment requires insight into the intelligibility of the clinical evidence.  The issue for philosophers of medicine is what role intuition should play in clinical judgment when set against the ideals of objective scientific reasoning and judgment.  Meehl’s work on clinical judgment, as noted earlier, cast suspicion on the effectiveness of intuition in clinical judgment; and yet some philosophers of medicine champion the tacit dimension in such decision-making.  The debate often reduces to whether clinical judgment is an art or a science; however, some, like Alvan Feinstein (1994), argue for a reconciliatory position between the two.  Once a physician comes to a judgment, he or she must then decide how to proceed clinically.  Although clinical decision-making, with its algorithm-like decision trees, is a formal procedure compared to clinical judgment, philosophers of medicine actively argue about the structure of these trees and the procedures for generating and manipulating them.  The main issue is how best to define the utility grounding the trees, as the sketch below illustrates.
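
To make the role of utilities concrete, here is a minimal two-branch decision-tree calculation in Python.  The probability and utility values are purely hypothetical assumptions chosen for illustration; the point is only that the recommended branch is the one with the higher expected utility, and that everything turns on how those utilities are defined.

    # A two-branch clinical decision tree: treat now or wait.
    # All numbers are hypothetical assumptions for illustration.
    p_disease = 0.4           # clinician's probability that the patient has the disease

    u_treat_disease = 0.9     # treated and diseased: drug works, minor side effects
    u_treat_healthy = 0.7     # treated but healthy: needless side effects
    u_wait_disease = 0.2      # untreated disease runs its course
    u_wait_healthy = 1.0      # healthy and left alone

    eu_treat = p_disease * u_treat_disease + (1 - p_disease) * u_treat_healthy
    eu_wait = p_disease * u_wait_disease + (1 - p_disease) * u_wait_healthy

    # eu_treat = 0.78 and eu_wait = 0.68, so this tree recommends treating
    print("treat" if eu_treat > eu_wait else "wait")

Change the utilities slightly, say by weighting side effects more heavily, and the recommendation flips; this is why defining utility is the philosophically contested step.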

c. Explanation

Epistemologists are generally interested in the nature of propositions, especially the explanatory power of those justified true beliefs.  To know something truly is to understand and explain the hidden causes behind it.  Explanations operate at a variety of levels.  For example, neuroscientific explanations account for human behavior in terms of neurological activity, while astrological explanations account for such behavior with respect to astronomical activity.  Philosophers, especially philosophers of science, distinguish several kinds of explanation, including covering law explanation, causal explanation, and inference to the best explanation.  In twenty-first century medicine, explanations are important for understanding disease mechanisms and, through understanding those mechanisms, for developing therapeutic modalities to treat the patient’s disease.  This line of reasoning runs deep in medical history, beginning, as we have seen, with the Dogmatics.  Twenty-first century philosophers of medicine utilize the explanatory schemes developed by philosophers of science to account for medical phenomena.  The following discussion briefly examines each of these explanatory schemes and their relevance for medical explanations.

Carl Hempel and Paul Oppenheim introduced covering law explanation in the late 1940s.  According to Hempel and Oppenheim (1948), explanations function as arguments, with the conclusion or explanandum—that which is explained—deduced or induced from premises or explanans—that which does the explaining.  At least one of the explanans must be a scientific law, which can be a mechanistic or a statistical law.  Although covering law explanations are useful for medical phenomena that reduce to mechanistic or statistical laws, such as explaining cardiac output in terms of heart rate and stroke volume (see the sketch below), not all such phenomena lend themselves to such reductive explanations.  The next explanatory scheme, causal explanation, attempts to rectify that problem.  Causal explanation relies on the temporal or spatial regularity of phenomena and events and utilizes antecedent causes to explain phenomena and events.  The explanations can be simple in nature, with only a few antecedent causes arranged linearly, or very complex, with multiple antecedent causes operating in a matrix of interrelated and integrated interactions.  For example, causal explanations of cancer involve at least six distinct sets of genetic factors controlling cellular phenomena such as cell growth and death, immunological response, and angiogenesis.  Finally, Gilbert Harman articulated the contemporary form of inference to the best explanation, or IBE, in the 1960s.  Harman (1965) proposed that, based on the totality of evidence, one must choose the explanation that best accounts for that evidence and reject its competitors.  The criteria for “bestness” range from the explanation’s simplicity to its generality or consilience in accounting for analogous phenomena.  Peter Lipton (2004) offers Ignaz Semmelweis’ explanation of the increased mortality among women giving birth in one ward compared to another as an example of IBE.  Donald Gillies (2005) provides an analysis of the same case in terms of Kuhnian paradigms.
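
The cardiac output case can be written out as a miniature deductive-nomological argument.  In the Python sketch below, the law (cardiac output equals heart rate times stroke volume) and two particular conditions serve as explanans, and the explanandum follows deductively; the physiological values are hypothetical, round numbers chosen for illustration.

    # Explanans 1 (law): cardiac output = heart rate x stroke volume
    # Explanans 2 (particular conditions): hypothetical patient values
    heart_rate = 70          # beats per minute
    stroke_volume = 0.07     # liters per beat

    # Explanandum, deduced from the law plus the particular conditions
    cardiac_output = heart_rate * stroke_volume
    print(f"Cardiac output: {cardiac_output:.1f} L/min")  # 4.9 L/min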

d. Diagnostic and Therapeutic Knowledge

Diagnostic knowledge pertains to the clinical judgments and decisions made about what ails a patient.  Epistemologically, the issues concerning such knowledge are its accuracy and certainty.  Central to both concerns are clinical symptoms and signs.  Clinical symptoms are subjective manifestations of the disease that the patient articulates during the medical interview, while clinical signs are objective manifestations that the physician discovers during the physical examination.  What is important for the clinician is how best to quantify those signs and symptoms, and then to classify them in a robust nosology, or disease taxonomy.  The clinical strategy is to collect empirical data through the physical examination and laboratory tests, to deliberate on that data, and then to draw a conclusion as to what the data mean in terms of the patient’s disease condition.  The strategy is fraught with questions for philosophers of medicine, from “What constitutes symptoms and signs, and how do they differ?” to “How best to measure and quantify the signs and to classify the diseases?”  Philosophers of medicine debate the answers to these questions, but the discussion among philosophers of science over the strategy by which natural scientists investigate the natural world guides much of the debate.  Thus, a clinician generates hypotheses about a patient’s disease condition, which he or she then assesses by conducting further medical tests.  The result of this process is a differential diagnosis, which represents a set of hypothetical explanations for the patient’s disease condition.  The clinician then narrows this set to the one diagnostic hypothesis that best explains most, and hopefully all, of the relevant clinical evidence.  The epistemic mechanism that accounts for this process, and the factors involved in it, remain unclear.  Philosophers of medicine especially dispute the role of tacit factors in the process.  Finally, the heuristics of the process are an active area of philosophical investigation, in terms of identifying rules for interpreting clinical evidence and observations.
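
One formal reconstruction of this narrowing, though by no means the only one philosophers of medicine discuss, is Bayesian: each test result reweights the hypotheses on the differential.  The sketch below uses three invented diagnoses, invented prior probabilities, and invented likelihoods for a single test result, purely to illustrate the updating step.

    # Hypothetical differential diagnosis with invented numbers.
    priors = {"diagnosis_A": 0.6, "diagnosis_B": 0.3, "diagnosis_C": 0.1}
    # Probability of the observed test result under each hypothesis (invented)
    likelihoods = {"diagnosis_A": 0.1, "diagnosis_B": 0.7, "diagnosis_C": 0.4}

    # Bayes' theorem: posterior is proportional to prior times likelihood
    unnormalized = {d: priors[d] * likelihoods[d] for d in priors}
    total = sum(unnormalized.values())
    posteriors = {d: p / total for d, p in unnormalized.items()}

    best = max(posteriors, key=posteriors.get)
    print(posteriors)   # diagnosis_B now dominates despite its lower prior
    print(f"Leading hypothesis: {best}")

Whatever tacit factors contribute to actual clinical judgment, they are precisely what this tidy calculation leaves out, which is one reason the formal reconstruction remains disputed.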

Therapeutic knowledge refers to the procedures and modalities used to treat patients.  Epistemologically, the issues concerning such knowledge are its efficacy and safety.  Efficacy refers to how well the pharmacological drug or surgical procedure treats or cures the disease, while safety refers to possible patient harm caused by side effects.  The questions animating discussion among philosophers of medicine range from “What is a cure?” to “How does one establish or justify the efficacy of a drug or procedure?”  The latter question occupies a considerable amount of the philosophy of medicine literature, especially concerning the nature and role of clinical trials.  Although basic medical research into the etiology of disease mechanisms is important, the translation of that research, and the philosophical problems that arise from it, are foremost on the agenda for philosophers of medicine.  The origin of clinical trials dates at least to the eighteenth century, but not until the twentieth century was consensus reached over the structure of these trials.  Today, four phases define a clinical trial.  During the first phase, clinical investigators establish the maximum dose of a drug that healthy volunteers can tolerate.  The next phase involves a small patient population to determine the drug’s efficacy and safety.  In the third phase, which is the final phase required to obtain FDA approval, clinical investigators utilize a large and relatively diverse patient population to establish the drug’s efficacy and safety.  A fourth phase is possible, in which clinical investigators chart the course of the drug’s use and effectiveness in a diverse patient population over a longer period.  Topics of active discussion among philosophers of medicine include several features of clinical trials: randomization, in which test subjects are arbitrarily assigned to either experimental or control groups (see the sketch below); blinding of patients and physicians to that assignment, to remove assessment bias; concurrent control, in which the control group does not receive the experimental treatment that the test group receives; and the placebo effect, or the benefit patients anticipate simply from receiving treatment.  The most pressing problem, however, is the type of statistics utilized for analyzing clinical trial evidence: some philosophers of medicine champion frequentist statistics, while others champion Bayesian statistics.
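
Randomization and blinding, unlike the statistical dispute, are mechanical procedures, and a few lines of Python can make them concrete.  The subject identifiers and group sizes below are invented for illustration; the point is that chance, not the investigator, assigns groups, and that assessors see only coded labels.

    import random

    # Invented subject pool (illustrative identifiers)
    subjects = [f"subject_{i:02d}" for i in range(1, 21)]

    # Randomization: chance, not the investigator, assigns groups
    random.shuffle(subjects)
    treatment_group, control_group = subjects[:10], subjects[10:]

    # Blinding: outcome assessors work only with coded labels, so group
    # membership cannot bias their assessments
    blinded_codes = {s: f"code_{i:02d}" for i, s in enumerate(subjects, start=1)}
    print(treatment_group[0], "->", blinded_codes[treatment_group[0]])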

3. Ethics

Ethics is the branch of philosophy concerned with the right or moral conduct or behavior of a community and its members.  Traditionally, philosophers divide ethics into descriptive, normative, and applied ethics.  Descriptive ethics involves detailing ethical conduct without evaluating it in terms of moral codes of conduct, whereas normative ethics pertains to how a community and its members should act in given situations, generally in terms of an ethical code.  This code is often a product of certain values held in common within a community.  For example, ethical codes against murder reflect the value community members place on human life and their condemnation of taking it without just cause.  Besides values, ethicists base normative ethics on a particular theoretical perspective.  Within western culture, three such perspectives predominate.  The first and historically oldest ethical theory—although it experienced a renaissance in the late twentieth century—is virtue ethics.  Virtue ethics claims that ethical conduct is the product of a moral agent who possesses certain virtues, such as prudence, courage, temperance, or justice—the traditional cardinal virtues.  The second ethical theory, deontology, bases moral conduct on adherence to ethical precepts and rules reflecting moral duties and obligations.  The third ethical theory is consequentialism, which founds moral conduct on the outcome or consequence of an action.  The chief example of this theory is utilitarianism, or the maximization of an action’s utility, which claims that an action is moral if it realizes the greatest amount of happiness for the greatest number of community members.  Finally, applied ethics is the practical use of ethics within a profession such as business or medicine.  Medical or biomedical ethics is a species of applied ethics and a major feature of the landscape of twenty-first century medicine.  Historically, ethical issues have been a conspicuous component of medicine since Hippocrates.  Throughout medical history, several important treatises on medical ethics have been published.  Probably the best-known example is Thomas Percival’s Medical Ethics, published in 1803, which influenced the development of the American Medical Association’s ethical code.  Today, medical ethics is founded not on any particular ethical theory but on four ethical principles.

a. Principlism

The origins of the predominant system for contemporary medical or biomedical ethics lie in 1932.  In that year, the Public Health Service, in conjunction with the Tuskegee Institute in Macon County, Alabama, undertook a clinical study to document the course of syphilis in untreated test subjects.  The subjects were African-American men.  Over the next forty years, healthcare professionals observed the course of the disease, even after the introduction of antibiotics.  Not until 1972 did the study end, and only after public outcry from newspaper articles—especially an article in the New York Times—reporting the study’s atrocities.  What made the study so atrocious was that the healthcare professionals misinformed the subjects about treatment or failed to treat them with antibiotics.  To ensure that such flagrant abuse of test subjects did not happen again, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research met from February 13-16, 1976, at the Smithsonian Institution’s Belmont Conference Center in Maryland, where it drafted guidelines for the treatment of research subjects.  The outcome was a report entitled Ethical Principles and Guidelines for the Protection of Human Subjects of Research, known simply as the Belmont Report, published in 1979.  The report lists and discusses several ethical principles necessary for protecting human test subjects and patients from unethical treatment at the hands of healthcare researchers and providers.  The first is respect for persons: researchers must respect the test subject’s autonomy to make informed decisions based on accurate and truthful information concerning the study’s procedures and risks.  The next principle is beneficence, or maximizing the benefit-to-risk ratio for the test subject.  The final ethical principle is justice, which ensures that costs and benefits are equitably distributed across the general population and that no one segment of it bears an unreasonable burden.

One of the framers of the Belmont Report was a young philosopher named Tom Beauchamp.  While working on the report, Beauchamp, in collaboration with a colleague, James Childress, was also writing a book on the role of ethical principles in guiding medical practice.  Rather than ground biomedical ethics in any particular ethical theory, such as deontology or utilitarianism, Beauchamp and Childress looked to ethical principles for guiding and evaluating moral decisions and judgments in healthcare.  The fruit of their collaboration was Principles of Biomedical Ethics, first published in the same year as the Belmont Report, 1979.  In the book, Beauchamp and Childress apply the ethical principles approach of the report both to regulate the activities of biomedical researchers and to assist physicians in deliberating over the ethical issues associated with the practice of clinical medicine.  However, besides the three guiding principles of the report, they added a fourth—nonmaleficence.  Moreover, the first principle became patient autonomy, rather than respect for persons as denoted in the report.  For the autonomy principle, Beauchamp and Childress stress the patient’s liberty to make critical decisions concerning treatment options, which have a direct impact on the patient’s own values and life plans.  The authors’ second principle, nonmaleficence, instructs the healthcare provider to avoid doing harm to the patient, while the next principle, beneficence, emphasizes removing harm from and doing good to the patient.  Beauchamp and Childress articulate the final principle, justice, in terms reminiscent of the Belmont Report, with respect to the equitable distribution of risks and benefits, as well as healthcare resources, among both the general and patient populations.  The bioethical community quickly dubbed the Beauchamp-Childress approach to biomedical ethics principlism.

Principlism’s impact on the bioethical discipline is unparalleled.  Beauchamp and Childress’ book is now in its fifth edition and is a standard textbook for teaching biomedical ethics at medical schools and in graduate programs in medical ethics.  One of the chief advocates of principlism is Raanan Gillon (1986), who expanded the scope of the principles to aid in their application to difficult bioethical issues, especially where the principles might conflict with one another.  However, principlism is not without its critics.  A fundamental complaint is the lack of theoretical support for the four principles, especially when the principles collide with one another in their application to a bioethical problem.  In practice, ethicists and clinicians can apply the principles in an algorithmic manner to justify practically any ethical position on a biomedical problem.  What critics want is a unified theoretical basis for grounding the principles, in order to avoid or adjudicate conflicts among them.  Moreover, Beauchamp and Childress’ emphasis on patient autonomy has had serious ramifications for the physician’s role in medical care, with that role at times marginalized relative to the patient’s.  Alfred Tauber (2005), for instance, charges that such autonomy is itself “sick” and often results in patients being abandoned to their own resources, with detrimental outcomes for them.  In response to their critics, Beauchamp and Childress argue that no single ethical theory is available to unite the four principles and thereby avoid or adjudicate conflicts among them.  However, in the fourth edition of Principles they did introduce a notion of common morality—a set of shared moral standards—to provide theoretical support for the principles.  Unfortunately, their notion of common morality lacks the theoretical robustness necessary to unify the principles effectively.  Although principlism still serves a useful function in biomedical ethics, particularly in the clinic, early twenty-first century trends toward healthcare ethics and global bioethics have made its future unclear.

b. Patient-Physician Relationship

According to many philosophers of medicine, medicine is more than simply a natural or social science; it is a moral enterprise.  What makes medicine moral is the patient-physician, or therapeutic, relationship.  Although some philosophers of medicine criticize efforts to model the relationship, given the sheer number of contemporary models proposed to account for it, modeling the relationship has important ramifications for understanding and framing the moral demands of medicine and healthcare.  The traditional medical model within the industrialized West, especially in mid-twentieth-century America, was paternalism, or “doctor knows best.”  The paternalistic model is doctor-centered in terms of power distribution, with the patient representing a passive agent who simply follows the doctor’s orders.  The patient is not to question those orders, except to clarify them.  The source of this power distribution is the doctor’s extensive medical education and training relative to the patient’s lack of medical knowledge.  In this model, the doctor represents a parent, generally a father figure, and the patient a child—especially a sick child.  The motivation of this model is to relieve a patient burdened with suffering from a disease, to benefit the patient through the doctor’s medical knowledge, and to effect a cure while returning the patient to health.  In other words, the model’s guiding principle is beneficence.  Besides the paternalistic model, other doctor-centered models include the priestly and mechanic models.  However, the paternalistic model, like the other doctor-centered models, came under severe criticism with the abuses associated with these models and the rise of patient advocacy groups to correct those abuses.

With the latter part of the twentieth century and the rise of patient autonomy as a guiding principle for medical practice, alternative patient-physician models challenged traditional medical paternalism.  Instead of being doctor-centered, one set of models is patient-centered, with patients as the locus of power.  The most prominent patient-centered model is the business model, in which the physician is a healthcare provider and the patient a consumer of healthcare goods and services.  The business model is an exchange relationship and relies heavily on a free market system.  Thus, the patient possesses the power to pick and choose among physicians until a suitable healthcare provider is found.  The legal model is another patient-centered model, in which the patient is a client and the guiding forces are patient autonomy and justice.  Patient and physician enter into a contract for healthcare services.  Another set of models, in which patients and physicians share significant power in the therapeutic relationship, is the mutual models.  In these models, neither patients nor physicians have the upper hand in terms of power; they share it.  The most prominent is the partnership model, in which patient and physician are associates in the therapeutic relationship.  The guiding force of this model is informed consent, in which the physician apprises the patient of the available therapeutic options and the patient then chooses which is best.  Both patient and physician share decision-making over the best means for effecting a cure.  Finally, other examples of mutual models include the covenant model, which stresses promise instead of contract; the friendship model, which involves a familial-like relationship; and the neighbor model, which maintains a “safe” distance and yet familiarity between patient and physician.

4. What is Medicine?

The nature of medicine is certainly an important question facing twenty-first century philosophers of medicine.  One reason for its importance is that the question addresses the vital topic of how physicians should practice medicine.  Around the turn of the twenty-first century, clinicians and other medical pundits began to accept evidence-based medicine, or EBM, as the best way to practice medicine.  Proponents of EBM claim that physicians should engage in medical practices based on the best scientific and clinical evidence available, especially from randomized controlled clinical trials, systematic observations, and meta-analyses of that evidence, rather than on pathophysiology or an individual physician’s clinical experience.  Proponents also claim that EBM represents a paradigmatic shift away from traditional medicine.  Traditional practitioners doubt the radical claims of EBM proponents.  One specific objection is that applying evidence from population-based clinical trials to the individual patient in the clinic is not as easy as EBM proponents realize.  In response, some clinicians propose patient-centered medicine (PCM).  Patient-centered advocates include the patient’s personal information in order to apply the best available scientific and clinical evidence in treatment.  The focus then shifts from the patient’s disease to the patient’s illness experience.  The key to the practice of patient-centered medicine is the physician’s effective communication with the patient.  While some commentators present EBM and PCM as competitors, others propose a combination or integration of the two medicines.  The debate between advocates of EBM and PCM is reminiscent of an earlier debate between the science and the art of medicine, and it betrays a deep anxiety over the nature of medicine.  Certainly, philosophers of medicine can play a strategic role in the debate and assist in its satisfactory resolution.

Besides questions about its nature, twenty-first century medicine also faces a number of crises, including economic, malpractice, healthcare insurance, healthcare policy, professionalism, public or global health, quality-of-care, primary or general care, and critical care crises—to name a few (Daschle, 2008; Relman, 2007).  Philosophers of medicine can certainly contribute to the resolution of these crises by carefully and insightfully analyzing the issues associated with them.  For example, considerable attention has been paid in the literature to the crisis over the nature of medical professionalism (Project of the ABIM Foundation, et al., 2002; Tallis, 2006).  The question fueling this crisis is what type of physician best meets the patient’s healthcare needs and satisfies medicine’s social contract.  The answer involves the physician’s professional demeanor or character.  However, there is little consensus in the literature as to how best to define professionalism.  Philosophers of medicine can aid by furnishing guidance toward a consensus on the nature of medical professionalism.

Philosophy of medicine is a vibrant field of exploration into the world of medicine in particular, and of healthcare in general.  Along the traditional lines of metaphysics, epistemology, and ethics, a host of questions and problems face philosophers of medicine and cry out for attention and resolution.  In addition, many competing forces are vying for the soul of medicine today.  Philosophy of medicine is an important resource for reflecting on those forces in order to forge a medicine that meets both the physical and the existential needs of patients and society.

5. References and Further Reading

  • Achinstein, P. 1983. The nature of explanation. Oxford: Oxford University Press.
  • Andersen, H. 2001. The history of reductionism versus holism approaches to scientific research. Endeavor 25:153-156.
  • Aristotle. 1966. Metaphysics. H.G. Apostle, trans. Bloomington: Indiana University Press.
  • Baglivi, G. 1723. Practice of physick, 2nd edition. London: Midwinter.
  • Bartlett, E. 1844. Essay on the philosophy of medical science. Philadelphia: Lea & Blanchard.
  • Beauchamp, T., and Childress, J.F. 2001. Principles of biomedical ethics, 5th edition. Oxford: Oxford University Press.
  • Black, D.A.K. 1968. The logic of medicine. Edinburgh: Oliver & Boyd.
  • Bock, G.R., and Goode, J.A., eds. 1998. The limits of reductionism in biology. London: John Wiley.
  • Boorse, C. 1975. On the distinction between disease and illness. Philosophy and Public Affairs 5:49-68.
  • Boorse, C. 1987. Concepts of health. In Health care ethics: an introduction, D. VanDeVeer and T. Regan, eds.  Philadelphia: Temple University Press, pp. 359-393.
  • Boorse, C. 1997. A rebuttal on health. In What is disease?, J.M. Humber and R.F.  Almeder, eds. Totowa, N.J.: Humana Press, pp. 1-134.
  • Brody, H. 1992. The healer’s power. New Haven, CT: Yale University Press.
  • Caplan, A.L. 1986. Exemplary reasoning? A comment on theory structure in biomedicine. Journal of Medicine and Philosophy 11:93-105.
  • Caplan, A.L. 1992. Does the philosophy of medicine exist? Theoretical Medicine 13:67-77.
  • Carter, K.C. 2003. The rise of causal concepts of disease: case histories. Burlington, VT: Ashgate.
  • Cassell, E.J. 2004. The nature of suffering and the goals of medicine, 2nd edition. New York: Oxford University Press.
  • Clouser, K.D., and Gert, B. 1990. A critique of principlism. Journal of Medicine and Philosophy 15:219-236.
  • Collingwood, R.G. 1940. An essay on metaphysics. Oxford: Clarendon Press.
  • Coulter, A. 1999. Paternalism or partnership? British Medical Journal 319:719-720.
  • Culver, C.M., and Gert, B. 1982. Philosophy in medicine: conceptual and ethical issues in medicine and psychiatry. New York: Oxford University Press.
  • Daschle, T. 2008. Critical: what we can do about the health-care crisis. New York: Thomas Dunne Books.
  • Davis, R.B. 1995. The principlism debate: a critical overview. Journal of Medicine and Philosophy 20:85-105.
  • Davis-Floyd, R., and St. John, G. 1998. From doctor to healer: the transformative journey. New Brunswick, NJ: Rutgers University Press.
  • Dummett, M.A.E. 1991. The logical basis of metaphysics. Cambridge: Harvard University Press.
  • Elsasser, W.M. 1998. Reflections on a theory of organisms: holism in biology. Baltimore: Johns Hopkins University Press.
  • Emanuel, E.J., and Emanuel, L.L. 1992. Four models of the physician-patient relationship. Journal of the American Medical Association 267:2221-2226.
  • Engel, G.L. 1977. The need for a new medical model: a challenge for biomedicine. Science 196:129-136.
  • Engelhardt, Jr., H.T. 1996. The foundations of bioethics, 2nd edition. New York: Oxford University Press.
  • Engelhardt, Jr., H.T., ed., 2000. Philosophy of medicine: framing the field. Dordrecht: Kluwer.
  • Engelhardt, Jr., H.T., and Erde, E.L. 1980. Philosophy of medicine. In A guide to culture of science, technology, and medicine, P.T. Durbin, ed. New York: Free Press, pp. 364-461.
  • Engelhardt, Jr., H.T., and Wildes, K.W. 2004. Philosophy of medicine. In Encyclopedia of bioethics, 3rd edition, S.G. Post, ed. New York: Macmillan, pp. 1738-1742.
  • Evans, A.S. 1993. Causation and disease: a chronological journey. New York: Plenum.
  • Evans, M., Louhiala, P. and Puustinen, P., eds. 2004. Philosophy for medicine: applications in a clinical context. Oxon, UK: Radcliffe Medical Press.
  • Evidence-Based Medicine Working Group. 1992. Evidence-based medicine: a new approach to teaching the practice of medicine. Journal of the American Medical Association 268:2420-2425.
  • Feinstein, A.R. 1967. Clinical judgment. Huntington, NY: Krieger.
  • Feinstein, A.R. 1994. Clinical judgment revisited: the distraction of quantitative models. Annals of Internal Medicine 120:799-805.
  • Fulford, K.W.M. 1989. Moral theory and medical practice. Cambridge: Cambridge University Press.
  • Gardiner, P. 2003. A virtue ethics approach to moral dilemmas in medicine. Journal of Medical Ethics 29:297-302.
  • Gert, B., Culver, C.M., and Clouser, K.D. 1997. Bioethics: a return to fundamentals. Oxford: Oxford University Press.
  • Gillies, D.A. 2005. Hempelian and Kuhnian approaches in the philosophy of medicine: the Semmelweis case. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Science 36:159-181.
  • Gillon, R. 1986. Philosophical medical ethics. New York: John Wiley and Sons.
  • Goldman, G.M. 1990. The tacit dimension of clinical judgment. Yale Journal of Biology and Medicine 63:47-61.
  • Golub, E.S. 1997. The limits of medicine: how science shapes our hope for the cure. Chicago: University of Chicago Press.
  • Goodyear-Smith, F., and Buetow, S. 2001. Power issues in the doctor-patient relationship. Health Care Analysis 9:449-462.
  • Groopman, J. 2007. How doctors think. New York: Houghton Mifflin.
  • Halpern, J. 2001. From detached concern to empathy: humanizing medical practice. New York: Oxford University Press.
  • Hampton, J.R. 2002. Evidence-based medicine, opinion-based medicine, and real-world medicine. Perspectives in Biology and Medicine 45:549-568.
  • Harman, G.H. 1965. The inference to the best explanation. Philosophical Review 74:88-95.
  • Haug, M.R., and Lavin, B. 1983. Consumerism in medicine: challenging physician authority. Beverly Hills, CA: Sage Publications.
  • Häyry, H. 1991. The limits of medical paternalism. London: Routledge.
  • Hempel, C.G. 1965. Aspects of scientific explanation and other essays in the philosophy of science. New York: Free Press.
  • Hempel, C.G., and Oppenheim, P. 1948. Studies in the logic of explanation. Philosophy of Science 15:135-175.
  • Hill, A.B. 1965. The environment and disease: association or causation? Proceedings of the Royal Society of Medicine 58:295-300.
  • Howick, J.H. 2011. The philosophy of evidence-based medicine. Hoboken, NJ: Wiley-Blackwell.
  • Illari, P.M., Russo, F., and Williamson, J., eds. 2011. Causality in the sciences. New York: Oxford University Press.
  • Illingworth, P.M.L. 1988. The friendship model of physician/patient relationship and patient autonomy. Bioethics 2:22-36.
  • James, D.N. 1989. The friendship model: a reply to Illingworth. Bioethics 3:142-146.
  • Johansson, I., and Lynøe, N. 2008. Medicine and philosophy: a twenty-first century introduction. Frankfurt: Ontos Verlag.
  • Jonsen, A.R. 2000. A short history of medical ethics. New York: Oxford University Press.
  • Kaptchuk, T.J. 2000. The web that has no weaver: understanding Chinese medicine. Chicago, IL: Contemporary Books.
  • Kadane, J.B. 2005. Bayesian methods for health-related decision making. Statistics in Medicine 24:563-567.
  • Katz, J. 2002. The silent world of doctor and patient. Baltimore: Johns Hopkins University Press.
  • King, L.S. 1978. The philosophy of medicine. Cambridge: Harvard University Press.
  • Kleinman, A. 1988. The illness narratives: suffering, healing and the human condition. New York: Basic Books.
  • Knight, J.A. 1982. The minister as healer, the healer as minister. Journal of Religion and Health 21:100-114.
  • Konner, M. 1993. Medicine at the crossroads: the crisis in health care. New York: Pantheon Books.
  • Kovács, J. 1998. The concept of health and disease. Medicine, Health Care and Philosophy 1:31-39.
  • Kulkarni, A.V. 2005. The challenges of evidence-based medicine: a philosophical perspective. Medicine, Health Care and Philosophy 8:255-260.
  • Lad, V. D. 2002. Textbook of Ayurveda: fundamental principles of Ayurveda, volume 1. Albuquerque, NM: Ayurvedic Press.
  • Larson, J.S. 1991. The measurement of health: concepts and indicators. New York: Greenwood Press.
  • Le Fanu, J. 2002. The rise and fall of modern medicine. New York: Carroll & Graf.
  • Levi, B.H. 1996. Four approaches to doing ethics. Journal of Medicine and Philosophy 21:7-39.
  • Liberati, A., and Vineis, P. 2004. Introduction to the symposium: what evidence based medicine is and what it is not. Journal of Medical Ethics 30:120-121.
  • Lipton, P. 2004. Inference to the best explanation, 2nd edition. New York: Routledge.
  • Little, M. 1995. Humane medicine. Cambridge: Cambridge University Press.
  • Loewy, E.H. 2002. Bioethics: past, present, and an open future. Cambridge Quarterly of Healthcare Ethics 11:388-397.
  • Looijen, R.C. 2000. Holism and reductionism in biology and ecology: the mutual dependence of higher and lower level research programmes. Dordrecht: Kluwer.
  • Maier, B., and Shibles, W.A. 2010. The philosophy and practice of medicine and bioethics: a naturalistic-humanistic approach. New York: Springer.
  • Marcum, J.A. 2005. Metaphysical presuppositions and scientific practices: reductionism and organicism in cancer research. International Studies in the Philosophy of Science 19:31-45.
  • Marcum, J.A. 2008. An introductory philosophy of medicine: humanizing modern medicine. New York: Springer.
  • Marcum, J.A. 2009. The conceptual foundations of systems biology: an introduction. Hauppauge, NY: Nova Scientific Publishers.
  • Marcum, J.A., and Verschuuren, G.M.N. 1986. Hemostatic regulation and Whitehead’s philosophy of organism. Acta Biotheoretica 35:123-133.
  • Matthews, J.N.S. 2000. An introduction to randomized controlled clinical trials. London: Arnold.
  • May, W.F. 2000. The physician’s covenant: images of the healer in medical ethics, 2nd edition. Louisville: Westminster John Knox Press.
  • Meehl, P.E. 1954. Clinical versus statistical prediction: a theoretical analysis and a review of the literature. Minneapolis: University of Minnesota Press.
  • Montgomery, K. 2006. How doctors think: clinical judgment and the practice of medicine. New York: Oxford University Press.
  • Morabia, A. 1996. P.C.A. Louis and the birth of clinical epidemiology. Journal of Clinical Epidemiology 49:1327-1333.
  • Murphy, E.A. 1997. The logic of medicine, 2nd edition. Baltimore: The Johns Hopkins University Press.
  • Nesse, R.M. 2001. On the difficulty of defining disease: a Darwinian perspective. Medicine, Health Care and Philosophy 4:37-46.
  • Nordenfelt, L. 1995. On the nature of health: an action-theory approach, 2nd edition. Dordrecht: Kluwer.
  • Overby, P. 2005. The moral education of doctors. New Atlantis 10:17-26.
  • Papakostas, Y.G., and Daras, M.D. 2001. Placebos, placebo effects, and the response to the healing situation: the evolution of a concept. Epilepsia 42:1614-1625.
  • Parker, M. 2002. Whither our art? Clinical wisdom and evidence-based medicine. Medicine, Health Care and Philosophy 5:273-280.
  • Pellegrino, E.D., and Thomasma, D.C. 1981. A philosophical basis of medical practice: toward a philosophy and ethic of the healing professions. New York: Oxford University Press.
  • Pellegrino, E.D., and Thomasma, D.C. 1988. For the patient’s good: the restoration of beneficence in health care. New York: Oxford University Press.
  • Pellegrino, E.D., and Thomasma, D.C. 1993. The virtues in medical practice. New York: Oxford University Press.
  • Pole, S. 2006. Ayurvedic medicine: the principles of traditional practice. Philadelphia, PA: Elsevier.
  • Post, S.G. 1994. Beyond adversity: physician and patient as friends? Journal of Medical Humanities 15:23-29.
  • Project of the ABIM Foundation, ACP-ASIM Foundation, and European Federation of Internal Medicine. 2002. Medical professionalism in the new millennium: a physician charter. Annals of Internal Medicine 136:243-246.
  • Quante, M., and Vieth, A. 2002. Defending principlism well understood. The Journal of Medicine and Philosophy 27:621-649.
  • Reeder, L.G. 1972. The patient-client as a consumer: some observations on the changing professional-client relationship. Journal of Health and Social Behavior 13:406-412.
  • Reiser, S.J. 1978. Medicine and the reign of technology. Cambridge: Cambridge University Press.
  • Relman, A.S. 2007. A second opinion: rescuing America’s healthcare. New York: Perseus Books.
  • Reznek, L. 1987. The nature of disease. London: Routledge & Kegan Paul.
  • Rizzi, D.A., and Pedersen, S.A. 1992. Causality in medicine: towards a theory and terminology. Theoretical Medicine 13:233-254.
  • Roter, D. 2000. The enduring and evolving nature of the patient-physician relationship. Patient Education and Counseling 39:5-15.
  • Rothman, K.J. 1976. Causes. American Journal of Epidemiology 104:587-592.
  • Ryff, C.D., and Singer, B. 1998. Human health: new directions for the next millennium. Psychological Inquiry 9:69-85.
  • Sackett, D.L., Richardson, W.S., Rosenberg, W., and Haynes, R.B. 1998. Evidence-based medicine: how to practice and teach EBM. London: Churchill Livingstone.
  • Salmon, W. 1984. Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
  • Samaniego, F.J. 2010. A comparison of the Bayesian and frequentist approaches to estimation. New York: Springer.
  • Schaffner, K.F. 1993. Discovery and explanation in biology and medicine. Chicago: University of Chicago Press.
  • Schaffner, K.F., and Engelhardt, Jr., H.T. 1998. Medicine, philosophy of. In Routledge Encyclopedia of Philosophy, E. Craig, ed. London: Routledge, pp. 264-269.
  • Schwartz, W.B., Gorry, G.A., Kassirer, J.P., and Essig, A. 1973. Decision analysis and clinical judgment. American Journal of Medicine 55:459-472.
  • Seifert, J. 2004. The philosophical diseases of medicine and their cures: philosophy and ethics of medicine, vol. 1: foundations. New York: Springer.
  • Senn, S. 2007. Statistical issues in drug development, 2nd edition. Hoboken, NJ: John Wiley & Sons.
  • Simon, J.R. 2010. Advertisement for the ontology of medicine. Theoretical Medicine and Bioethics 31:333-346.
  • Smart, J.J.C. 1963. Philosophy and scientific realism. London: Routledge & Kegan Paul.
  • Smuts, J. 1926. Holism and evolution. New York: Macmillan.
  • Solomon, M.J., and McLeod, R.S. 1998. Surgery and the randomized controlled trial: past, present and future. Medical Journal of Australia 169:380-383.
  • Spodick, D.H. 1982. The controlled clinical trial: medicine’s most powerful tool. The Humanist 42:12-21, 48.
  • Stempsey, W.E. 2000. Disease and diagnosis: value-dependent realism. Dordrecht: Kluwer.
  • Stempsey, W.E. 2004. The philosophy of medicine: development of a discipline. Medicine, Health Care and Philosophy 7:243-251.
  • Stempsey, W.E. 2008. Philosophy of medicine is what philosophers of medicine do. Perspectives in Biology and Medicine 51:379-391.
  • Stewart, M., Brown, J.B., Weston, W.W., McWhinney, I.R., McWilliam, C.L., and Freeman, T.R. 2003. Patient-centered medicine: transforming the clinical method, 2nd edition. Oxon, UK: Radcliffe Medical Press.
  • Straus, S.E., and McAlister, F.A. 2000. Evidence-based medicine: a commentary on common criticisms. Canadian Medical Association Journal 163:837-840.
  • Svenaeus, F. 2000. The hermeneutics of medicine and the phenomenology of health: steps towards a philosophy of medical practice. Dordrecht: Kluwer.
  • Tallis, R.C. 2006. Doctors in society: medical professionalism in a changing world. Clinical Medicine 6:7-12.
  • Tauber, A.I. 1999. Confessions of a medicine man: an essay in popular philosophy. Cambridge: MIT Press.
  • Tauber, A.I. 2005. Patient autonomy and the ethics of responsibility. Cambridge: MIT Press.
  • Thagard, P. 1999. How scientists explain disease. Princeton: Princeton University Press.
  • Tonelli, M.R. 1998. The philosophical limits of evidence-based medicine. Academic Medicine 73:1234-1240.
  • Tong, R. 2007. New perspectives in health care ethics: an interdisciplinary and cross-cultural approach. Upper Saddle River, NJ: Pearson Prentice Hall.
  • Toombs, S.K. 1993. The meaning of illness: a phenomenological account of the different perspectives of physician and patient. Dordrecht: Kluwer.
  • Toombs, S. K., ed. 2001. Handbook of phenomenology and medicine. Dordrecht: Kluwer.
  • Unschuld, P.U. 2010. Medicine in China: a history of ideas, 2nd edition. Berkeley, CA: University of California Press.
  • van der Steen, W.J., and Thung, P.J. 1988. Faces of medicine: a philosophical study. Dordrecht: Kluwer.
  • van Gijn, J. 2005. From randomized trials to rational practice. Cerebrovascular Diseases 19:69-76.
  • Veatch, R.M. 1981. A theory of medical ethics. New York: Basic Books.
  • Veatch, R.M. 1991. The patient-physician relation: the patient as partner, part 2. Bloomington, IN: Indiana University Press.
  • Velanovich, V. 1994. Does philosophy of medicine exist? A commentary on Caplan. Theoretical Medicine 15:88-91.
  • Weatherall, D. 1996. Science and the quiet art: the role of medical research in health care. New York: Norton.
  • Westen, D., and Weinberger, J. 2005. In praise of clinical judgment: Meehl’s forgotten legacy. Journal of Clinical Psychology 61:1257-1276.
  • Whitbeck, C. 1981. A theory of health. In Concepts of health and disease: interdisciplinary perspectives, A.L. Caplan, H.T. Engelhardt, Jr., and J.J. McCartney, eds. London: Addison- Wesley, pp. 611-626.
  • Wildes, K.W. 2001. The crisis of medicine: philosophy and the social construction of medicine. Kennedy Institute of Ethics Journal 11:71-86.
  • Woodward, J. 2003. Making things happen: a theory of causal explanation. Oxford: Oxford University Press.
  • Worrall, J. 2002. What evidence in evidence-based medicine? Philosophy of Science 69:S316-S330.
  • Worrall, J. 2007. Why there’s no cause to randomize. British Journal for the Philosophy of Science 58:451-488.
  • Wulff, H.R., Pedersen, S.A., and Rosenberg, R. 1990. Philosophy of medicine: an introduction, 2nd edition. Oxford: Blackwell.
  • Zaner, R.M. 1981. The context of self: a phenomenological inquiry using medicine as a clue. Athens, OH: Ohio University Press.


Author Information

James A. Marcum
Email: James_Marcum@baylor.edu
Baylor University
U. S. A.

Synesthesia

The word “synesthesia,” or “synaesthesia,” has its origin in the Greek roots syn, meaning union, and aesthesis, meaning sensation: a union of the senses.  Many researchers use the term “synesthesia” to refer to a perceptual anomaly in which a sensory stimulus associated with one perceptual modality automatically triggers a second, insuppressible sensory experience usually, but not always, associated with a different perceptual modality, as when musical tones elicit the visual experience of colors (“colored-hearing”).  Other researchers consider additional unusual correspondences under the category of synesthesias, including the automatic association of specific objects with genders, the ascription of unique personalities to numbers, and the involuntary assignment of spatial locations to months or days of the week.  Many synesthetes experience more than one cross-modal correspondence, and others who have unusual cross-modal sensory experiences also have some non-sensory correspondences such as those mentioned above.

Researchers from fields as varied as neurology, neuroscience, psychology and aesthetics have taken an interest in the phenomenon of synesthesia.  Consideration of synesthesia has also shed light on important subjects in philosophy of mind and cognitive science.  For instance, one of the most widely discussed problems in recent philosophy of mind has been to determine how consciousness fits with respect to physical descriptions of the world.  Consciousness refers to the seemingly irreducible subjective feel of ongoing experience, or the character of what it is like.  Philosophers have attempted to reduce consciousness to properties that will ultimately be more amenable to physical characterizations such as representational or functional properties of the mind.  Some philosophers have argued that reductive theories such as representationalism and functionalism cannot account for synesthetic experience.

Another metaphysical project is to provide an account of the nature of color.  There are two main types of views on the nature of color.  Color objectivists take color to be a real feature of the external world.  Color subjectivists take color to be a mind-dependent feature of the subject (or the subject’s experience).  Synesthesia has been used as a counter-example to color objectivism.  Not everyone agrees, however, that synesthesia can be employed to this end.  Synesthesia has also been discussed in regards to the issue of what properties perceptual experiences can represent objects as having (for example, colors).  The standard view is that color experiences represent objects as having color properties, but a special kind of grapheme-color synesthesia may show that color experience can signify numerical value.  If this is right, it shows that perceptual experiences can represent so-called “high-level” properties.

Synesthesia may also be useful in arbitrating the question of how mental processing can be so efficient given the abundance of mentally stored information and the wide variety of problems that we encounter, which must each require highly specific, albeit different, processing solutions.  The modular theory of mind is a theory about mental architecture and processing aimed at solving these problems.  On the modular theory, at least some processing is performed in informationally encapsulated sub-units that evolved to perform unique processing tasks.  Synesthesia has been used as support for mental modularity in several different ways.  While some argue that synesthesia is due to an extra module, others argue that synesthesia is better explained as a breakdown in the barrier that keeps information from being shared between modules.

This article begins with an overview of synesthesia followed by a discussion of synesthesia as it has been relevant to philosophers and cognitive scientists in their discussions of the nature of consciousness, color, mental architecture, and perceptual representation, as well as several other topics.

Table of Contents

  1. Synesthesia
  2. Consciousness
    1. Representationalism
    2. Functionalism
  3. Modularity
  4. Theories of Color
  5. An Extraordinary Feature of Color-Grapheme Synesthesia
  6. Wittgenstein’s Philosophical Psychology
  7. Individuating the Senses
  8. Aesthetics and “Literary Synesthesia”
  9. Synesthesia and Creativity
  10. References and Further Reading

1. Synesthesia

Most take synesthesia to be a relatively rare perceptual phenomenon. Reports of prevalence vary, however, from 1 in 25,000 (Cytowic, 1997) to 1 in 200 (Galton, 1880), to even 1 in 20 (Simner et al., 2006).  It typically involves inter-modal experiences such as when a sound triggers a concurrent color experience (a photism), but it can also occur within modalities.  For example, in grapheme-color synesthesia the visual experience of alpha-numeric graphemes such as a “4” or a “g” induces color photisms.  These color photisms may appear to the synesthete as located inside the mind, in the peri-personal space surrounding the synesthete’s body (Grossenbacher & Lovelace, 2001), or as being projected right where the inducing grapheme is situated, perhaps as if a transparency were placed on top of the grapheme (Dixon, et al., 2004).  Reported cross-modal synesthesias also include olfactory-tactile (where a scent induces a tactile experience such as of smoothness), tactile-olfactory, taste-color, taste-tactile and visual-olfactory, among others.  It is not clear which of these is most common.  Some researchers report that colored-hearing is the most commonly occurring form of synesthesia (Cytowic, 1989; Harrison & Baron-Cohen, 1997), and others report that approximately 68% of synesthetes have the grapheme-color variety (Day, 2005).  Less common forms include sound-olfactory, taste-tactile and touch-olfactory.  In recent years, synesthesia researchers have increasingly been attending to associations that don’t fit the typical synesthesia profile of cross-activations between sensory modalities, such as associations of specific objects with genders, ascriptions of unique personalities to particular numbers, and the involuntary assignment of spatial locations to months or days of the week.  Many synesthetes report having these unusual correspondences in addition to cross-modal associations.

Most studied synesthesias are assumed to have genetic origins (Asher et al., 2009).  It has long been noted that synesthesia tends to run in families (Galton, 1883), and the higher proportion of female synesthetes has led some to speculate that it is carried by the X chromosome (Cytowic, 1997; Ward & Simner, 2005).  However, there are also reports of acquired synesthesias induced by drugs such as LSD or mescaline (Rang & Dale, 1987) or resulting from neurologic conditions such as epilepsy, trauma or other lesion (Cytowic, 1997; Harrison & Baron-Cohen, 1997; Critchley, 1997).  Recent studies suggest it may even be brought on through training (Meier & Rothen, 2009; Proulx, 2010) or post-hypnotic suggestion (Kadosh et al., 2009).  Another hypothesis is that synesthesia may have both genetic and developmental origins.  Additionally, some researchers propose that synesthesia may arise in genetically predisposed children in response to demanding learning tasks such as the development of literacy.

Up until very recently, the primary evidence for synesthesia has come from introspectively based verbal reports.  According to Harrison and Baron-Cohen (1997), synesthesia was late to become a subject of scientific interest because the previously prevailing behaviorists rejected the importance of subjective phenomena and introspective report.  Some other researchers continue to downplay the reality of synesthesia, claiming that triggered concurrents are likely ideational in character rather than perceptual (for discussion and criticism of this view see Cytowic, 1989; Harrison, 2001; Ramachandran & Hubbard, 2001a).  One hypothesis is that synesthetic ideas result from learned associations that are so vivid in the minds of synesthetes that subjects mistakenly construe them to be perceptual phenomena.  As psychology swung from behaviorism back to mentalism, however, subjective experience became more accepted as an area of scientific inquiry.  In recent years, scientists have begun to study aspects of subjectivity, such as the photisms of synesthetes, using third person methods of science.

Recent empirical work on synesthesia suggests its perceptual reality.  For example, synesthesia is thought to influence attention (Smilek et al., 2003).  Moreover, synesthetes have long reported that photisms can aid with memory (Luria, 1968).  And indeed, standard memory tests show that synesthetes have better recall where photisms are involved (Cytowic 1997; Smilek et al., 2002).

Other studies aimed at confirming the legitimacy of synesthesia have demonstrated that genuine synesthesia can be distinguished from other common types of learned associations in that it is remarkably consistent; over time synesthetes’ sensation pairings (for example, the grapheme 4 with the color blue) remain stable across multiple testings whereas most learned associations do not.  Synesthetes tested and retested to confirm consistency of pairings on multiple occasions, at an interval of years and without warning, exhibit consistency as high as 90% (Baron-Cohen, et al., 1987).  Non-synesthete associators are not nearly as consistent.
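To make the consistency measure concrete, here is a minimal Python sketch; all pairings below are invented for illustration, not data from the studies cited above.  The pairings elicited in two sessions are compared and the percentage of stable pairings is reported.

    # Toy test-retest consistency score for grapheme-color pairings.
    # All pairings below are hypothetical.
    def consistency(session_a, session_b):
        """Percentage of graphemes assigned the same color in both sessions."""
        shared = [g for g in session_a if g in session_b]
        if not shared:
            return 0.0
        stable = sum(1 for g in shared if session_a[g] == session_b[g])
        return 100.0 * stable / len(shared)

    first_test = {"4": "blue", "g": "green", "7": "red", "A": "crimson"}
    retest = {"4": "blue", "g": "green", "7": "red", "A": "scarlet"}
    print(f"{consistency(first_test, retest):.0f}% consistent")  # 75% here

A genuine synesthete would be expected to score near the 90% figure reported by Baron-Cohen et al. (1987); a non-synesthete associator would score considerably lower.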

Grouping experiments are used to distinguish between perceptual and non-perceptual features of experience (Beck, 1966; Treisman, 1982).  In common grouping experiments, subjects view a scene composed of vertical and tilted lines.  In perception, the tilted and vertical lines appear grouped separately.  Studies seem to show some grapheme-color synesthetes to be subject to pop-out and grouping effects based on colored photisms (Ramachandran & Hubbard, 2001a, b; Edquist et al., 2006).  If an array of 2’s in the form of a triangle is hidden within a field of distracter graphemes such as 5’s, the 2’s may “pop out,” appearing immediately and saliently in experience as forming a triangle, so long as the color ascribed to the 2’s is incongruent with the color of the 5’s (Ramachandran and Hubbard, 2001b).
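The sort of display used in these studies can be pictured with a small sketch.  The following fragment, with invented coordinates, prints a field of distracter “5”s containing target “2”s that roughly form a triangle; for a grapheme-color synesthete whose photisms for the two digits differ, the targets are reported to segregate at a glance.

    # Build a field of distracter 5's with embedded 2's forming a
    # (rough, invented) triangle, as in the pop-out displays described above.
    SIZE = 9
    triangle = {(2, 4), (4, 3), (4, 5), (6, 2), (6, 4), (6, 6)}
    grid = [["2" if (r, c) in triangle else "5" for c in range(SIZE)]
            for r in range(SIZE)]
    for row in grid:
        print(" ".join(row))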


Some take these studies to show that, for at least some synesthetes, the concurrent colors are genuinely perceptual phenomena arising at a relatively early pre-conscious stage of visual processing, rather than associated ideas, which would arise later in processing.

Another study often cited as substantiating the perceptual reality of synesthetic photisms shows that synesthetes are subject to Stroop effects on account of color photisms.  Synesthetes were shown a hand displaying several fingers.  When the fingers were colored to match the photism that the synesthete typically projected onto things signifying that quantity, subjects were quicker at identifying the actual number of fingers displayed than when the fingers were painted an incongruent color (Ward and Sagiv, 2007).
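As a hedged illustration of how such Stroop interference is quantified, the sketch below compares mean response times on congruent trials (display color matches the photism) and incongruent trials; the reaction times are fabricated placeholders, not Ward and Sagiv’s data.

    # Compare mean response times for congruent vs. incongruent trials.
    # All reaction times (in milliseconds) are invented for illustration.
    from statistics import mean

    congruent_ms = [512, 498, 530, 505, 521]
    incongruent_ms = [601, 588, 615, 597, 609]
    effect = mean(incongruent_ms) - mean(congruent_ms)
    print(f"Stroop interference: {effect:.0f} ms slower when incongruent")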

Finally, Smilek et al. (2001) have conducted a study with a synesthete they refer to as “C” that suggests the perceptual reality of synesthesia.  In the study, photism-inducing graphemes are presented individually against backgrounds that are either congruent or incongruent with the photism associated with the grapheme.  If graphemes really are experienced as colored, then synesthetes should find them more difficult to discern when they are presented against congruent backgrounds.  C did indeed have difficulty discerning the grapheme on congruent but not incongruent trials.  In a similar study, C was shown a digit “2” or “4” hidden in a field of other digits.  Again, the background was either congruent or incongruent with the photism C associated with the target digit.  C had difficulty locating the target digit when the background was congruent with the target’s photism color, but not when it was incongruent.
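A toy simulation conveys the logic of these detection studies; the two hit rates below are invented placeholders, not Smilek et al.’s figures.

    # Simulate target detection against congruent vs. incongruent
    # backgrounds; a congruent background "camouflages" the photism.
    import random

    random.seed(42)
    N = 1000
    hit_congruent = sum(random.random() < 0.60 for _ in range(N)) / N
    hit_incongruent = sum(random.random() < 0.92 for _ in range(N)) / N
    print(f"congruent: {hit_congruent:.1%}, incongruent: {hit_incongruent:.1%}")

If graphemes were merely associated with colors rather than experienced as colored, background congruence should make no difference to detection.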

Nevertheless, another set of recent studies could be seen as calling into question whether some of the above studies really demonstrate the perceptual reality of synesthesia.  Meier and Rothen (2009) have shown that non-synesthetes trained over several weeks to associate specific numbers and colors behave similarly to synesthetes on synesthetic Stroop studies.  The colors that the non-synesthetes were taught to associate with certain graphemes interfered with their ability to identify target graphemes.  Moreover, Kadosh et al. (2009) have shown that highly suggestible non-synesthetes report abnormal cross-modal experiences similar to congenital synesthetes and behave similarly to Smilek’s synesthete C on target identification after receiving post-hypnotic suggestions aimed to trigger grapheme-color pairings.  Some researchers conclude from these studies that genuine synesthetic experiences can be induced through training or hypnosis.  But it isn’t clear that the evidence warrants this conclusion as the results are consistent with the presence of merely strong non-perceptual associations.  In the cases of post-hypnotic suggestion, participants may simply be behaving as if they experienced genuine synesthesia.  An alternative conclusion to draw from these studies might be that Stroop and the identification studies conducted with C do not demonstrate the perceptual reality of synesthesia.  Nonetheless, it has not been established that training and hypnotism can replicate all the effects, such as the longevity of associations in “natural” synesthetes, and few doubt that synesthetes experience genuine color photisms in the presence of inducing stimuli.

For most grapheme-color synesthetes, color photisms are induced by the formal properties of the grapheme (lower synesthesia).  In some, however, color photisms can be correlated with high-level cognitive representations specifying what the grapheme is taken to represent (higher synesthesia).  Higher synesthesia can be distinguished from lower synesthesia by several testable behaviors.

First, individuals with higher synesthesia frequently have the same synesthetic experiences (for example, see the same colors) in response to multiple inducers that share meaning—for instance, 5, V, and an array of five dots may all induce a green photism (Ramachandran & Hubbard, 2001b; Ward & Sagiv, 2007).  Second, some higher grapheme-color synesthetes will experience color photisms both when they are veridically perceiving an external numeral and when they are merely imagining or thinking about the numerical concept.  Dixon et al. (2000) showed one synesthete the equation “4 + 3” followed by a color patch.  If thinking about the numerical concept alone induces a photism, then the photism should interfere with identifying the patch color.  And indeed, their participant was slower at naming the color of the patch when it was incongruent with the photism normally associated with the number that solves the equation.

Moreover, when an individual with higher synesthesia sees a grapheme that is ambiguous, for example a shape that resembles both a 13 and a B, he or she may mark it with different colors when it is presented in different contexts.  For instance, when the grapheme is presented in the series “12, 13, 14,” it may induce one photism, but it may induce a different photism when it is presented in the series “A, 13, C.”  This suggests that it isn’t merely the shape of the grapheme that induces the photism, but also the ascribed semantic value (Dixon et al., 2006).  Similarly, if an array of smaller “3”s is arranged in the form of a larger “5,” an individual with higher grapheme-color synesthesia may mark the figure with one color photism when attending to it as an array of “3”s, but with a different color photism when attending to it as a single number “5” (Ramachandran & Hubbard, 2000).
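The context effect lends itself to a small sketch: under the invented mapping below, the same ambiguous shape receives a different photism depending on whether its neighbors suggest a numeral or a letter.

    # Context-dependent photism for an ambiguous "B/13" shape.
    # The mapping of interpretations to colors is invented.
    PHOTISMS = {"13": "green", "B": "orange"}

    def photism_in_context(context):
        # Read the ambiguous shape by the semantic type of its neighbors.
        interpretation = "13" if all(tok.isdigit() for tok in context) else "B"
        return PHOTISMS[interpretation]

    print(photism_in_context(["12", "14"]))  # 'green': read as the number 13
    print(photism_in_context(["A", "C"]))    # 'orange': read as the letter B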

2. Consciousness

Some contend that synesthesia presents difficulties for certain theories of mind when it comes to conscious experience, such as representationalism (Wager, 1999, 2001; Rosenberg, 2004) and functionalism (J.A. Gray, 1998, 2003, 2004, J.A. Gray et al.; 1997, 2002, 2006).  These claims are controversial and discussed in some depth in the following two sections.

a. Representationalism

Representationalism is the view that the phenomenal character of experience (or the properties responsible for “what it is like” to undergo an experience) is exhausted by, or at least supervenes on, its representational content (Chalmers, 2004).  This means that there can be no phenomenal difference in the absence of a representational difference, and, if two experiential states are indiscernible with respect to representational content, then they must have the same phenomenal character.  Reductive brands of representationalism say that the qualitative aspects of consciousness are just the properties represented in perceptual experience (that is, the representational contents).  For instance, perhaps the conscious visual sensation of a faraway aircraft travelling across the sky is just the representation of a silver object moving across a blue background (Tye, 1995, p.93).
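The supervenience claim can be put schematically (the notation is introduced here for convenience, not drawn from Chalmers): writing R(e) for the representational content of an experience e and P(e) for its phenomenal character,

    % Supervenience of phenomenal character on representational content:
    % no phenomenal difference without a representational difference.
    \forall e_1 \forall e_2 \, \bigl( R(e_1) = R(e_2) \rightarrow P(e_1) = P(e_2) \bigr)

Contraposed: if P(e_1) differs from P(e_2), then R(e_1) must differ from R(e_2).  The synesthesia objections below attempt to exhibit a pair of experiences violating exactly this conditional.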

According to Wager (1999, 2001) and Rosenberg (2004), synesthesia shows that phenomenal character does not always depend on representational content because mental states can be the same representationally but differ in experiential character.  Wager dubs this the “extra qualia” problem (1999, p.268), noting that his objection specifically targets externalist versions of representationalism (p.276), which contend that phenomenal content depends on what the world is like (such that perfect physical duplicates could differ in experiential character given that their environments differ).  Meanwhile, Rosenberg (2004, p.101) employs examples of synesthetes who see colors when feeling pain or hearing loud noises.  According to Rosenberg, there is no difference between the representational content of the synesthete and the ordinary person: in the case of pain, they could both be representing damage to the body of, let us suppose, a certain intensity, location and duration.  Again, the examples are claimed to show that mental states with the same representational content can differ experientially.  However, others reject this sort of argument.

Alter (2006, p.4) argues that Rosenberg’s analysis overlooks plausible differences between the representational contents in question.  A synesthete who is consciously representing bodily damage as, say, orange, is representing pain differently than an ordinary person.  The nature of this representational difference might be understood in more than one way: perhaps the manner in which they represent their intentional objects differs, or, perhaps their intentional objects differ (or both).  In short, it is suggested that the synesthete and the ordinary person are not representationally the same, and it is no threat to representationalism that different kinds of experience represent differently.  To take a trivial case, the conscious difference between touching and seeing a snowball is accounted for in that they represent differently (only one represents the snowball as cold).

Turning to Wager, he considers three cases, which all concern a synesthete named Cynthia who experiences extra visual qualia in the form of a red rectangle when she hears the note Middle C.  The cases vary according to the version of externalism in question.  Case 1 examines a simple causal co-variation theory of phenomenal content, case 2 a theory that mixes co-variation and teleology (such as Tye’s, 1995), while case 3 concerns a purely teleological account (such as Dretske’s, 1995).  These cases purportedly show that synesthetic and ordinary experience can share the same contents despite the differences in qualitative character.  R. Gray’s (2001a, 2004, pp.68-9) general reply is that synesthetic experience does indeed differ representationally in that it misrepresents.

For example, instead of attributing the redness and rectangularity to Middle C, why not attribute these to a misrepresentation of a red rectangle triggered by the auditory stimulus?  Whether representationalism can supply a plausible account of misrepresentation is an open question; perhaps, however, its problems with synesthesia can be resolved by discharging this explanatory debt.

Regarding case 1, perhaps there is no extra representational content had by Cynthia.  If content is determined by the co-variation of the representation and the content it tracks, then since there is no red rectangle in the external world, perhaps her experience only represents Middle C, just as it does in the case of an ordinary person (Wager, 1999, p.269).  If so, then there would be a qualitative difference in the absence of a representational difference, and this version of representationalism would be refuted.  On the other hand, Wager concedes that the objection might fail if Cynthia has visually experienced red bars in the past, for then her synesthetic experience is arguably not representationally the same as that of an ordinary person hearing Middle C.  This is because it would be open to the externalist to reply that Cynthia’s experience represents the disjunction “red bar or Middle C” (p.270), thus differing from an ordinary person’s.  However, Wager then argues that a synesthete who has never seen red bars because she is congenitally blind (Blind Cynthia) would have the same representational contents as an ordinary person (they would both just represent Middle C) and yet since she would also experience extra qualia, the objection goes through after all.

In reply, R. Gray (2001a, p.342) points out that this begs the question against the externalist, since it assumes that synesthetic color experience does not depend on a background of ordinary color experience.  If this is so, there could not be a congenitally blind synesthete, since whatever internal states Blind Cynthia had would not be representing colors.  Wager has in turn acknowledged this point (2001, p.349), though he maintains that it is more natural to suppose that Blind Cynthia’s experience would nevertheless be very different.  Support for Wager’s view might be found in such examples as color-blind synesthetes who report “Martian” colors inaccessible to ordinary visual perception (Ramachandran and Hubbard, 2003a).

Wager also acknowledges that case 1 overlooks theories allowing representational contents to depend on evolutionary functions, and so the possibility that the blind synesthete functions differently when processing Middle C needs to be examined.  This leads to the second and third cases.

Case 2 is designed around Tye’s hybrid theory according to which phenomenal character depends on evolutionary functions for beings that evolved, and causal co-variation for beings that did not–such as Swampman (your perfect physical duplicate who just popped into existence as a result of lightning striking in swamp material).  Wager argues that on Tye’s view Middle C triggers an internal state with the teleological function of tracking red in the congenitally blind synesthete.  Hence Tye can account for the idea that Blind Cynthia would be representing differently than an ordinary person.

However, now the problem is that it seems the externalist must, implausibly, distinguish between the phenomenal contents of the hypothetical blind synesthete and a blind Swampsynesthete (Blind Swamp Cynthia) when they each experience Middle C.  Recall that Tye’s theory does not allow teleology to be used to account for representational contents in Swampperson cases.  But if Tye falls back on causal co-variation, the problem discussed in the first case returns.  Since the blind Swampsynesthete’s causal tracking of Middle C does not differ from that of an ordinary person, externalism seems committed to saying that their contents and experiences do not differ—that is, since Blind Swamp Cynthia’s state reliably co-varies with Middle C, not red, it cannot be a phenomenal experience of red.

This, however, is not the end of the matter.  R. Gray could try to recycle his reply that there could not be a blind synesthete (whether of swampy origins or not) since synesthesia is parasitic on ordinary color experience.  Still another response offered on behalf of Tye (Gray, 2001a, p.343) is that Wager fails to take note of the role played by “optimal” conditions in Tye’s theory.  Where optimal conditions fail to obtain, co-variation is mere misrepresentation.  But what counts as optimal and how do we know it?  Perhaps optimal conditions would fail to obtain if the co-varying relationships are one-many (that is, if an internal state co-varies with many stimuli, or, a stimulus co-varies with many internal states, Gray, 2001a, p.343).  Such may be the case for synesthetes, and if so, then synesthetic experience would misrepresent and so differ in content.  On the other hand, Wager disputes Gray’s conception of optimal conditions (2001, p.349) arguing that Tye himself accepts they can obtain in situations where co-variation is one-many.  In addition, Wager (2001, p.349) contends Blind Swamp Cynthia’s co-varying relationship is not one-many since her synesthetic state co-varies only with Middle C.  As for Gray’s claim that optimal conditions fail for the Blind Swamp Cynthia because Middle-C co-varies with too many internal states, Wager (2001, p.349) responds that optimal conditions should indeed obtain—for it is plausible that a creature with a backup visual system could have multiple independent states co-varying with, and bearing content about, a given stimulus.  To this, however, it can be replied that having primary and backup states with content says nothing about whether the content of the backup state is auditory or visual; in other words, does Blind Swamp Cynthia both hear and synesthetically see Middle C, or, does she just hear it by way of multiple brain states (cf. Gray, 2001a, pp.343-344)?  While this summary does not exhaust the debate between Wager and Gray, the upshot for case 2 seems to turn on contentious questions about optimal conditions: what are they, and how do we know when they obtain or fail to obtain?

Finally, Case 3 considers the view that phenomenal content always depends on the state’s content tracking function as determined by natural selection.  Hence, an externalist such as Dretske could maintain that the blind synesthete undergoes a misfiring of a state that is supposed to indicate the presence of red, not Middle C.  Wager’s criticism here concerns a hypothetical case whereby synesthesia comes to acquire the evolutionary function of representing Middle C while visual perception has faded from the species though audition remains normal.  This time the problem is that it seems plausible that two individuals with diverging evolutionary histories could undergo the same synesthetic experience, but according to the externalist their contents would differ (Wager, 1999, p.273).  Perhaps worse, it follows from externalism that a member of this new synesthetic species listening to Middle C would have the very same content and experience as an ordinary member of our own species.

R. Gray replies that he does not see why the externalist must agree that synesthesia has acquired an evolutionary function just because it is adaptive (2001a, p.344).  Returning to his point about cases 1 & 2, synesthesia might well result from a breakdown in the visual system, and saying that it has no function is compatible with saying that it is fitness-enhancing.  If synesthesia does not have a teleological function, then a case 3 externalist can deny that the mutated synesthete’s contents are indiscernible with respect to those of an ordinary person.

And yet even if R. Gray is right that the case for counting synesthesia as functional is inconclusive, it seems at least possible that some being could evolve so as to have states with the function of representing Middle C synesthetically.  Whether synesthesia is a bug or a feature depends, as Gray acknowledges, on evolutionary considerations (p.345; see also Gray, 2001b), so Wager need only appeal to the possible world in which those considerations favor his interpretation, and he can have his counterexample to externalist representationalism (cf. Wager, 2001, p.348).

On the other hand, and as R. Gray notices, Wager’s strongest cases are not drawn from the real world – and so his objections likewise turn on the very sort of controversial “thought experiments and intuitions about possibility” from which he aims to distance his own arguments (Wager, 1999, p.264).  Consider that for case 3 externalists, since Swamppeople don’t have evolutionary functions, they are unconscious zombies.  Anybody who is willing to accept that outcome will probably not be troubled by Wager’s imagined examples about synesthetes.  After all, someone who thinks having no history makes one a zombie already believes that differing evolutionary histories can have a dramatic impact on the qualitative character of experience.  In short, a lot rides on whether synesthesia is in fact the result of a malfunction or the workings of a separate teleofunctional module.

Finally, the suggestion that representational properties can explain the “extra qualia” in synesthesia courts controversy given worries about whether this is consilient with synesthetes’ self-reports (that is, would further scrutiny of the self-reports strongly support claims about additional representational content?).  There is also general uncertainty as to what evidential weight these reports ought to be granted.  Despite Ramachandran and Hubbard’s enthusiasm for the method of “probing the introspective phenomenological reports of these subjects” (2001b, p.7, n.3), they acknowledge skepticism on the part of many psychologists about this approach.

b. Functionalism

Synesthesia might present difficulties for the functionalist theory of mind’s account of conscious experience.  Functionalism defines mental states in terms of their functions or causal roles within cognitive systems, as opposed to their intrinsic character (that is, regardless of how they are physically realized or implemented).  Here, mental states are characterized in terms of their mediation of causal relationships obtaining between sensory input, behavioral output, and each other.  For example, an itch is a state caused by, inter alia, mosquito bites, and which results in, among other things, a tendency to scratch the affected area.  As a theory of consciousness, functionalism claims that the qualitative aspects of experience are constituted by (or at least determined by) functional roles (for example, Lycan, 1987).
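A minimal sketch may help fix ideas; the class names and roles are invented, and this is an illustration of the functionalist thesis rather than anyone’s published formalism.  Two states realized quite differently count as the same mental state type so long as they occupy the same causal role.

    # Two differently realized states occupying the same causal role
    # (caused by mosquito bites, tending to cause scratching) count as
    # the same state type for the functionalist. Names are invented.
    class CarbonItch:
        def caused_by(self, stimulus):
            return stimulus == "mosquito bite"
        def tends_to_cause(self):
            return "scratching"

    class SiliconItch:  # a different physical realizer, same role
        def caused_by(self, stimulus):
            return stimulus == "mosquito bite"
        def tends_to_cause(self):
            return "scratching"

    def same_functional_state(a, b, stimulus="mosquito bite"):
        return (a.caused_by(stimulus) == b.caused_by(stimulus)
                and a.tends_to_cause() == b.tends_to_cause())

    print(same_functional_state(CarbonItch(), SiliconItch()))  # True

On this picture, sameness of role suffices for sameness of state type; the synesthesia cases discussed below are meant to break that link in both directions.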

In a series of articles, J.A. Gray has argued that synesthesia serves as a counter-example to functionalism, as well as to Hurley and Noë’s (2003a) specific hypothesis that sensorimotor patterns best explain variations in phenomenal experience.

Hurley and Noë’s theory employs a distinction between what they call “deference” and “dominance.”  Sensory deference occurs when experiential character conforms to cortical role rather than sensory input, and dominance the reverse.  Sometimes, nonstandard sensory inputs “defer” to cortical activity, as when the stimulation of a patient’s cheek is felt as a touch on a missing arm.  Here cortex “dominates,” in the sense that it produces the feel of the missing limb, despite the unusual input.  One explanation is that nerve impulses arriving at the cortical region designated for producing the feel of a touch on the cheek “spill over,” triggering a neighboring cortical region assigned to producing sensation of the arm.  But the cortex can also “defer” to nonstandard input, as in the case of tactile qualia experienced by Braille readers corresponding to activity in the visual cortex.  J.A. Gray (2003, p.193) observes that cortical deference, not dominance, is expected given functionalism, since the character of a mental state is supposed to depend on its role in mediating inputs and outputs.  If that efferent-afferent mediating role changes, then the sensory character of the state should change with it.

Hurley and Noë (2003a) propose that cortical regions implicated in one sensory modality can shift to another (and, thus be dominated by input) if there are novel sensorimotor relationships available for exploitation.  For support they point out that the mere illusion of new sensorimotor relationships can trigger cortical deference.  Such is the case with phantom limb patients who can experience the illusion of seeing and moving a missing limb with the help of an appropriately placed mirror.  In time, the phantom often disappears, leading to the conjecture that the restored sensory-motor feedback loop dominates the cortex, forcing it to give up its old role of producing sensation of the missing limb.

Hurley and Noë (2003a, p.160) next raise a worry for their theory concerning synesthesia.  Perceptual inputs are “routed differently” in synesthetes, as in the case of an auditory input fed to both auditory and visual cortex in colored hearing (p.137).  This is a case of intermodal cortical dominance, since the nonstandard auditory input “defers” to the visual cortex’s ordinary production of color experience.  But theirs is a theory assuming intermodal deference; that is, qualia are supposed to be determined by sensory inputs, not cortex (pp.140, 160).  It would appear that the visual cortex should not be stuck in the role of producing extra color qualia if their account is correct.

Hurley & Noë believe synesthesia raises a puzzle for any account of color experience, namely, why color experience defers to the colors of the world in some cases but not others.  For example, subjects wearing specially tinted goggles devised by Kohler at first see one side of the world as yellow, the other, blue.  However, color experience adapts and the subjects eventually report that the world looks normal once more (so a white object would still look white even as it passes through the visual field from yellow to blue).  On the other hand, synesthetic colors differ in that they “persist instead of adapting away.”

J.A. Gray points out that since colored hearing emerges early in life, there should be many opportunities for synesthetes to explore novel sensorimotor contingencies, such as conflicts between heard color names and the elicited “alien” qualia – a phenomenon reminiscent of the Stroop effect, in which naming the ink color of a printed color word takes longer when the word spells a different color (for example, the word “blue” printed in red ink) (Gray, et al., 2006; see also Hurley and Noë, 2003a, p.164, n.27).  Once again, why isn’t the visual cortex dominated by these sensory-motor loops and forced to cease producing the alien colors?  Gray (2003, p.193) calls this a “major obstacle” to Hurley and Noë’s theory since the visual cortex stubbornly refuses to yield to sensorimotor dominance.

In reply, Hurley and Noë have suggested that synesthetes are relatively impoverished with respect to their sensorimotor contingencies (2003a, pp.160, 165, n.27).  For example, unlike the case of normal subjects, where unconsciously processed stimuli can influence subsequent judgment, synesthetic colors need to be consciously perceived for there to be priming effects.  In short, the input-output relationships might not be robust enough to trigger cortical deference.  Elsewhere, Noë and Hurley (2003, p.195) propose that deference might fail to occur because the synesthetic function of the visual cortex is inextricably dependent on normal cortex functioning.  Whether sensorimotor accounts of experience can accommodate synesthesia is a matter of ongoing debate and cannot be decided here.

J.A. Gray, as mentioned earlier, also thinks synesthesia (specifically, colored hearing) poses a broader challenge to functionalism, since it shows that function and qualia come apart in two ways (2003, p.194).  His first argument contends that a single quale is compatible with different functions: seeing and hearing are functionally different, and yet either modality can result in exactly the same color experience (see also Gray, et al., 2002, 2006).  A second argument claims that different qualia are compatible with the same function.  Hearing is governed by only one set of input-output relationships, but gives rise to both auditory and visual qualia in the colored-hearing synesthete (Gray, 2003, p.194).

Functionalist replies to J.A. Gray et al.’s first argument (that is, that there can be functional differences in the absence of qualia differences) are canvassed by Macpherson (2007) and R. Gray (2004).  Macpherson points out (p.71) that a single quale associated with multiple functions is no threat to a “weak” functionalism not committed to the claim that functional differences necessarily imply qualia differences—qualia might be “multiply realizable” at the functional, as well as the implementational, level (note that qualia differences could still imply functional differences).  She continues by arguing that even for “strong” functionalisms, which do assert that the same type of qualitative state cannot be implemented by different functions, the counter-example still fails.  Token mental states of the same type will inevitably differ in terms of some fine-grained causes and effects (for example, two persons can each have the same green visual experience even though the associated functional roles will tend to be somewhat different: green might lead to thoughts of Islam in one person, Ireland in another, ecology in still another, or envy, and so on).  In light of this, a natural way to interpret claims about functional role indiscernibility is to restrict the experience-type-individuating function to a “core” or perhaps “typical” or even “normal” role.  Perhaps a core role operates at a particular explanatory level—somewhat as a Mac and a PC can be functionally indiscernible at the user level while running a web browser, despite differing in terms of their underlying operating systems.  An alternative is to argue that the synesthetic “role” is really a malfunction, and so no threat to the claim that qualia differences imply normal role differences (R. Gray 2004, pp.67-8 offers a broadly similar response).

As for the other side of J.A. Gray’s challenge, namely that synesthesia shows functional indiscernibility does not imply qualia indiscernibility, Macpherson questions whether there really is qualia indiscernibility between normal and synesthetic experience (2007, p.77).  Perhaps synesthetes only imagine, rather than perceptually experience, colors (Macpherson, 2007, pp.73ff.).  She also expresses doubts about experimental tests utilizing pop-out, and questions the interpretation of brain imaging studies (p.75)—for example, is an active “visual” cortex in colored hearing evidence of visual experience, or evidence that this part of the brain has a non-visual role in synesthetes (cf. Hardcastle, 1997, p.387)?  In short, she contends there are grounds for questioning whether there is a clear case in which the experience of a synesthetic color is just like some non-synesthetic color.

Finally, although Macpherson does not make the point, J.A. Gray’s second argument is vulnerable to a response fashioned from her reply to his first argument.  Perhaps the qualia differences aren’t functionally indiscernible because core roles are not duplicated, or because the synesthetic “role” is really just a malfunction.  To make this more concrete, consider Gray’s example in which hearing the word “train” results in both hearing sound and seeing color (2003, p.194).  He claims that this shows that one-and-the-same function can have divergent qualia.  But this is a hasty inference, and conflates the local auditory uptake of a signal with divergent processing further downstream.  Perhaps there are really two quite different input-output sets involved–the auditory signal is fed to both auditory and visual cortexes, after all, and so perhaps a single signal is fed into functionally distinct subsystems, one of which is malfunctioning.  Malfunction or not, the functionalist could thus argue that Gray has not offered an example of a single function resulting in divergent qualia.

3. Modularity

The modular theory of mind, most notably advanced by Jerry Fodor (1983), holds that the mind is composed of multiple sub-units or modules within which representations are processed in a manner akin to the processing of a classical computer.  Processing begins with input to a module, which is transformed into a representational output by inductive or deductive inferences called “computations.”  Modules are individuated by the functions they perform.  The mental processing underlying visual perception, auditory perception, and the like, takes place in individual modules that are specially suited to performing the unique processing tasks relevant to each.  One of the main benefits of modularity is thought to be processing efficiency.  The time-cost involved if computations were to have access to all of the information stored in the mind would be considerable.  Moreover, since an organism encounters a wide variety of problems, it would have been economical for independent systems to have evolved for performing different tasks.  Some argue that synesthesia supports the modular theory.  Before discussing how synesthesia is taken as evidence for modularity, it will help to understand, a bit more precisely, the important role that the concept of modularity plays in psychology.

Many, including Fodor, believe that scientific disciplines reveal the nature of natural kinds.  Natural kinds are thought to be mind-independent natural classes of phenomena that “have many scientifically interesting properties in common over and above whatever properties define the class” (Fodor, 1983, p.46).  Those who believe that there are natural kinds commonly take things such as water, gold, zebras and penicillin to be instances of natural kinds.  If scientific disciplines reveal the nature of natural kinds, then for psychology to be a bona fide science, the mental phenomena that it takes as its objects of study would also have to be natural kinds.  For those like Fodor, who are interested in categorically delineating special sciences like psychology from more basic sciences, it must be that the laws of the special science cannot be reduced to those of the basic science.  This means that the natural kind terms used in a particular science to articulate that science’s laws cannot be replaced with terms for other more fundamental natural phenomena.  From this perspective, it is highly desirable to see whether modules meet the criteria for natural kinds.

According to Fodor, in addition to the properties that define specific types of modules, all modules share most, if not all, of the following nine scientifically interesting characteristics:  1. They are subserved by a dedicated neural architecture; that is, specific brain regions and neural structures uniquely perform each module’s task.  2. Their operations are mandatory: once a module receives a relevant input, the subject cannot override or stop its processing.  3. Modules are informationally encapsulated: their processing cannot utilize information from outside of that module.  4. The information from inside the module cannot be accessed by external processing areas.  5. The processing in modules is very quick.  6. Outputs of modules are shallow and conceptually impoverished, requiring only limited expenditure of computational resources.  7. Modules have a fixed pattern of development that, like physical attributes, may most naturally be attributed to a genetic property.  8. The processing in modules is domain specific: it only responds to certain types of inputs.  9. When modules break down, they tend to do so in characteristic ways.

It counts in favor of a theory if it is able to accommodate, predict and explain some natural phenomena, including anomalous phenomena.  In this vein, some argue that the modular theory is particularly useful for explaining the perceptual anomaly of synesthesia.  But there are competing accounts for how modularity is implicated in synesthesia.  Some think that insofar as synesthesia has all the hallmarks of modularity, it likely results from the presence of an extra cognitive module (Segal, 1997).  According to the extra-module thesis, synesthetes possess an extra module whose function is the mapping of, for example, sounds or graphemes (input) to color representations (output).  This grapheme-color module would, according to Segal, possess at least most of the nine scientifically interesting characteristics of modules identified by Fodor:

1. There seems to be a dedicated neural architecture, as lexical-color synesthesia appears uniquely associated with multimodal areas of the brain including the posterior inferior temporal cortex and parieto-occipital junctions (Paulesu et al., 1995).  2. Processing is mandatory: once synesthetes are presented with a lexical or grapheme stimulus, the induction of a color photism is automatic and insuppressible.  3. Processing in synesthesia seems encapsulated: information that is available to the subject which might negate the effect has no influence on processing in the color-grapheme module.  4. The information and processing in the module is not made available outside of the module; for example, the synesthete does not know how the system effects the mapping.  5. Since the processing in synesthesia happens pre-consciously, it meets the rapid-speed requirement.  6. The outputs are shallow: they don’t involve any higher-order theoretically inferred features, just color.  7. Since synesthesia runs in families, is dominant in females, and subjects report having had it for as long as they can remember, synesthesia seems to be heritable, and this suggests that it would have a fixed pattern of development.  Features 8 and 9, domain specificity and a characteristic pattern of breakdown, are the only two that Segal cannot easily attribute to the grapheme-color module.  Segal doesn’t doubt that a grapheme-color module could be found to have domain-specific processing.  But on account of the rarity of synesthesia, he suspects that it may be too hard to find cases where the lexical or grapheme-color module breaks down.  Harrison and Baron-Cohen (1997) and Cytowic (1997), among others, however, note that for some, synesthesia fades with age and has been reported to disappear with stroke or trauma.
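A hedged sketch of the extra-module idea may make the encapsulation and domain-specificity claims vivid; the class design and the color mapping are invented for illustration, not Segal’s formalism.

    # Toy Fodor-style module: domain-specific, mandatory, encapsulated.
    # The grapheme-to-color mapping is invented for illustration.
    class Module:
        def __init__(self, name, domain, compute):
            self.name = name
            self._domain = domain      # domain specificity (feature 8)
            self._compute = compute    # fixed, mandatory computation

        def process(self, stimulus_type, stimulus):
            if stimulus_type != self._domain:
                return None            # ignores out-of-domain input
            # No access to central knowledge: encapsulation (feature 3).
            return self._compute(stimulus)

    grapheme_color = Module("grapheme-color", "grapheme",
                            lambda g: {"4": "blue", "7": "red"}.get(g))
    print(grapheme_color.process("grapheme", "4"))  # 'blue' photism
    print(grapheme_color.process("sound", "C"))     # None: out of domain

On the breakdown theory, by contrast, the anomaly would be modeled not as an extra such unit but as a leak between two otherwise separate ones.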

Another explanation for synesthesia that draws on the modular framework is that synesthesia is caused by a breakdown in the barriers that ordinarily keep modules and their information and processing separate (Baron-Cohen et al., 1993; Paulesu et al., 1995).  This failure of encapsulation would allow information from one module to be shared with others.  Perhaps in lexical or grapheme-color synesthesia, information is shared between the speech or text processing module and the color-processing module.  There are two hypotheses for how this might occur.  One hypothesis is that the failure of encapsulation originates with a faulty inhibitory mechanism that normally prevents information from leaking out of a module (Grossenbacher & Lovelace, 2001; Harrison & Baron-Cohen, 1997).  Alternatively, some propose that we are born without modules but sensory processes are pre-programmed to become modularized.  On this view infants are natural synesthetes, but during the course of normal development extra dendritic connections are pruned away, resulting in the modular encapsulation typical of adult cognition (Maurer, 1993; Maurer and Mondloch 2004; see Baron-Cohen 1996 for discussion).  In synesthetes, the normal pruning of extra dendritic connections fails to occur.  Kadosh et al. (2009) claim that the fact that synesthesia can be induced in non-synesthetes post-hypnotically demonstrates that a faulty inhibitory mechanism is responsible for synesthesia rather than excessive dendritic connections; given the time frame of their study, new cortical connections could not have been established.

The modular breakdown theory may also be able to explain why synesthesia has the appearance of the nine scientifically interesting characteristics that Fodor identifies with mental modules (R. Gray, 2001b).  If this is right, then what reason is there to prefer either the breakdown theory or the extra-module theory over the other?  Gray (2001b) situates this problem within the larger debate between computational and biological frameworks in psychology; he argues that the concept of function is central to settling which account of synesthesia we should prefer.  His strategy is first to determine what the most desirable view of function is.  On that basis, we can then use empirical means to arbitrate between the extra-module theory and the modular breakdown theory.

On the classical view of modularity developed by Fodor, function is elaborated in purely computational terms.  Computers are closed symbol-manipulating devices that perform tasks merely on account of the dispositions of their physical components.  We can describe a module’s performance of a task by appealing to just the local causal properties of the underlying physical mechanisms.  R. Gray thinks it is desirable for a functional description to allow for the possibility of breakdown.  To describe something as having broken down is to understand it as having failed to achieve its proper goal, and the purely computational/causal view of function does not easily accommodate the possibility of a breakdown in processing.

R. Gray promotes an alternative conception of function that he feels better allows for the possibility of breakdown.  Gray’s alternative understanding is compatible with traditional local causal explanations.  But it also considers the role that a trait such as synesthesia would have in facilitating the organism’s ability to thrive in its particular external environment: its fitness utility.  Crucially, Gray finds the elaboration of modules using this theory of function to be compatible with Fodor’s requirement that a science’s kind predicates “are ones whose terms are the bound variables of proper laws” (1974, p. 506).  Assuming such an account, whether synesthesia is the result of an extra module or a breakdown in modularity will ultimately depend on how it contributes to the fitness of individuals.  According to Baron-Cohen, in order to establish that synesthesia results from a breakdown in modularity, it would have to be shown that it detracts from overall fitness.  The problem is that synesthesia has not been shown to compromise the fitness of those who bear the trait.  In contrast, Gray claims that the burden of proof lies with those who propose that synesthesia results from the presence of an extra module: they must show that synesthesia is useful in a particular environment.  But at present, according to Gray, we have no reason to think that it is.  For instance, one indicator that a trait has some positive fitness benefit for the organisms possessing it is its proliferation in a population; but synesthesia is remarkably rare (Gray, 2001b).  Gray admits, however, that whether or not synesthesia has such a utility is an open empirical question.

4. Theories of Color

Visual perception seems, at the very least, to provide us with information about colored shapes existing at various spatial locations.  An account of the visual perception of objects should therefore include some account of the nature of color.  Some theorists working on the nature of color and color experience draw on evidence from synesthesia.

Theories about the nature of color fall broadly into two categories.  On the one hand, color objectivism is the view that colors are mind-independent properties residing out in the world, for example, in objects, surfaces or the ambient light.  Typically, objectivists identify color with a physical property.  The view that color is a mind-independent physical property of the perceived world is motivated both by commonsense considerations and the phenomenology of color experience.  It is part of our commonsense or folk understanding of color, as reflected in ordinary language, that color is a property of objects.  Moreover, the experience of color is transparent, which is to say that colors appear to the subject as belonging to external perceptual objects; one doesn’t just see red, one sees a red fire hydrant or a yellow umbrella.  Color objectivism vindicates both the commonsense view of color and the phenomenology of color experience.  But some take it to be an unfortunate implication of the theory that colors are physical properties of objects, since it seems to entail that each color will be identical to a very long disjunctive chain of physical properties.  Multiple external physical conditions can all cause the same color experience both within and across individuals.  This means that popular versions of objectivism cannot identify a single unifying property behind all instances of a single color.

Subjectivist views, on the other hand, take colors to be mind-dependent properties of the subject or of his or her experience, rather than properties of the distal causal stimulus.  Subjectivist theories of color include the sense-data theory, adverbialism and certain varieties of representationalism.  The primary motivation for color subjectivism is to accommodate various types of non-veridical color experience in which perceivers have the subjective experience of color in the absence of an external distal stimulus to which the color could properly be attributed.  One commonly cited example is the after-image.  Some claim that the photisms of synesthetes provide another example of non-veridical, non-referring color experiences (Fish, 2010; Lycan, 2006; Revonsuo, 2001).  But others argue that the door is open to regarding at least some cases of synesthesia as veridical perceptual experiences rather than hallucinations, since photisms are often (i) perceptually and cognitively beneficial, (ii) subjectively like non-synesthetic experiences, and (iii) fitness-enhancing.

Still, synesthesia may pose additional difficulties for objectivism.  Consider the implications for objectivism if color synesthesias were to become the rule rather than the exception.  How would objectivism then account for color photisms caused by externally produced sounds?  Revonsuo (2001) suggests that objectivists who identify colors with the disjunctive collections of physical properties that cause color experiences would have to add to that disjunction the changes in air pressure that produce sounds.  If synesthesia became the rule, then, even though nothing else about the world would have changed, physical properties that were not previously colored would suddenly become colored.  Revonsuo takes this to be an undesirable consequence for a theory of color.

Enactivism is a theory of perception that takes active engagement with perceptual objects along with other contextual relations to be highly relevant to perception.  Typically, enactivists take perception to consist in a direct relation between perceivers and objective properties.  Ward uses synesthesia in an argument for enactivism about color, proposing that the enactivist theory of color actually combines elements of both objectivism and subjectivism, and is therefore the only theory of color that can account for various facts about anomalous color experiences like synesthesia.

For instance, Kohler fitted normal perceivers with goggles whose lenses were vertically bisected, with yellow tinting on one side and blue on the other (Kohler, 1964).  When perceivers first donned the goggles, they reported anomalous color experiences consistent with the lens colors; the world appeared to be tinted yellow and blue.  But after a few weeks of wear, subjects reported that the abnormal tint adapted away.  Ward proposes that synesthetic photisms are somewhat similar to the tinted experiences of Kohler’s goggle wearers.  In both cases, the subject is aware that their anomalous color experiences are not a reliable guide to the actual colors of the things around them.  The two cases are not alike, however, in one important respect.  Whereas goggle wearers’ color experiences adapt to fall in line with what they know to be true about their color experiences, synesthetes’ experiences do not.  This asymmetry calls for explanation, and Ward argues that the enactive theory of color provides an elegant explanation of it.

According to Ward’s enactive view of color, “An object’s color is its property of modifying incident reflected light in a certain way.”  This is an objective property.  But “we perceive this [objective] property by understanding the way [subjective] color appearances systematically vary with lighting conditions.”  This view explains the asymmetry noted above in the following way.  Kohler’s goggles interfere with regular color perception: according to the enactive view of color, the tinted goggles introduce “a complex new set of relationships between apparent colors, viewing conditions and objective color properties.”  So it is necessary for the anomalous appearances to adapt away: as perceivers acclimate to the fact that their color appearances no longer refer to the colors they had previously indicated, their ability to perceive color normally returns.  Ward assumes that synesthetes do not experience their color photisms as attributed to perceived objects, so the photisms do not impair the synesthetes’ ability to veridically perceive color.  Synesthetes’ photisms fail to adapt away because they do not need to.

Another philosophical problem having to do with the nature of color concerns whether or not phenomenal color experiences are intentional.  If they are, we might wonder what sorts of properties they are capable of representing.  A popular view is that color experiences can only represent objects as having specific color or spectral reflectance properties.  Matey draws on synesthesia to support the view that perceptual experiences can represent objects as having high-level properties, such as having a specific semantic value (roughly, as representing some property, thing or concept).  This argument for high-level representational contents from synesthesia, it is argued, withstands several objections that can be lodged against other popular arguments, such as arguments from phenomenal contrast.  The basic idea is that a special category of grapheme-color synesthesia depends on high-level properties.  In higher-grapheme-color synesthesia, perceivers mark with a particular color graphemes that share a conceptual significance, such as the property of representing a number.  Matey argues that these high-level properties penetrate color experiences and infect their contents, so that the color experiences of these synesthetes represent the objects they are projected onto as being representative of certain numbers or letters.  Matey argues that the conclusions of the argument from synesthesia may generalize to the common perceptual experiences of ordinary perceivers as well.

5. An Extraordinary Feature of Color-Grapheme Synesthesia

What subjects say about their own phenomenal experience usually carries great weight.  However, in the case of color-grapheme synesthesia, Macpherson urges caution (2007, p.76).  A striking and odd aspect of color-grapheme synesthesia is that it may seem to involve the simultaneous experience of different colors in exactly the same place at exactly the same time.  Consider synesthetes who claim to see both colors simultaneously: what could it be like for someone to see the grapheme 5 printed in black ink, but see it as red as well?  How are we to characterize their experience?  To Macpherson, this “extraordinary feature” suggests that synesthetic color experiences are either radically unlike ordinary experiences or, perhaps more likely, not experiences at all.  A third possibility would be to find an interpretation compatible with ordinary color experience.  For example, perhaps the synesthetic colors are analogous to a colored transparency laid over the ink (as suggested by Kim et al., 2006, p.196; see also Cytowic, 1989, pp.41, 51 and Cytowic & Eagleman, 2009, p.72).  However, this analogy is unsatisfying and gives rise to further puzzlement.

One might expect the colors to interfere with each other; for example, synesthetes should see a darker red when the 5 is printed in black ink and a lighter red when it is printed in white.  And yet synesthetes tend to insist that the colors do not blend (Ramachandran & Hubbard, 2001b, p.7, n.3), although if the ink is the “wrong” color this can result in task performance delays analogous to Stroop-test effects and even induce discomfort (Ramachandran & Hubbard, 2003b, p.50).  Another possibility is that the overlap is imperfect, despite the denials; for example, perhaps splotches of black ink can be distinguished from the red (as proposed by Ramachandran & Hubbard, 2001b, p.7, n.3).  Or maybe there can be a “halo” or edge where the synesthetic and ordinary colors do not overlap; this might make sense of the claims of some synesthetes that the synesthetic color is not “on” the number but, as it were, “floating” somewhere between the shape and the subject.  But against these suggestions are other reports that the synesthetic and regular colors match up perfectly (Macpherson, 2007, p.76).

A second analogy from everyday experience is simultaneously seeing what is both ahead of and behind oneself by observing a room’s reflection in a window.  This, however, only recycles the problem.  In seeing a white lamp reflected in a window facing a blue expanse of water, the colors mix (for example, the reflected lamp looks to be a pale blue).  Moreover, one does not undergo distinct impressions of the lamp and of the region occupied by the waves overlapping with the reflected image (though of course one can alter the presentation by focusing either on the lamp or on the waves).

A third explanation draws on the claim mentioned earlier that the extra qualia can depend on top-down processing, appearing only when the shape is recognized as a letter, or as a number (as in seeing an ambiguous shape in FA5T versus 3456).  There is some reason to think that the synesthetic color can “toggle” on and off depending on whether it is recognized and attended to, as opposed to appearing as a meaningless shape in the subject’s peripheral vision (Ramachandran & Hubbard 2001a, 2001b).  Toggling might also explain reports that emphasize seeing the red, as opposed to (merely?) knowing the ink is black (cf. Ramachandran & Hubbard, 2001b, p.7, n.3).  Along these lines, Kim et al. tentatively suggest that the “dual experience” phenomenon might be explained by rapid switching modulated by changes in attention (2006, p.202).

Cytowic and Eagleman (2009, p.73), in contrast to these ruminations, deny there is anything mysterious or conceptually difficult about the dual presentation of imagined and real objects sharing exactly the same location in physical space.  They contend that the dual experience phenomenon is comparable to visualizing an imaginary apple in the same place as a real coffee cup: “you’ll see there is nothing impossible, or even particularly confusing about two objects, one real and one imagined, sharing the same coordinates.”  This dismissal, however, fails to come to terms with the conundrum.  Instead of an apple, try visualizing a perfect duplicate of the actual coffee cup in precisely the same location (for those who believe they can do this, continue visualizing additional coffee cups until the point becomes obvious).  If Cytowic and Eagleman are to be taken literally, this ought to be easy.  The visualization of a contrasting color also meets a conceptual obstacle: what does it even mean to visualize a red surface in exactly the same place as a real black surface, in the absence of alternating presentations (as in binocular rivalry) or blending?

Another perplexing feature of synesthetic color experience is the report of strange “alien” colors somehow different from ordinary color experience.  These “Martian” colors may or may not indicate a special kind of color qualia inaccessible to non-synesthetes.  Given that they apparently differ from ordinary colors in their causal roles with respect to such things as “lighting, viewing geometry and chromatic context” (Noë & Hurley, 2003, p.195), however, such differences are unsurprising, and even expected by broadly functionalist theories of phenomenal experience.  Ramachandran and Hubbard (2001b, pp.5, 26, 30) offer some discussion and conjectures about the underlying neural processes.

Whether the more bizarre testimony can be explained away along the lines of one (or more) of the above suggestions, or instead has deep implications for synesthesia, self-report, and the nature of color experience, demands further investigation by philosophers and scientists.

6. Wittgenstein’s Philosophical Psychology

Ter Hark (2009) offers a Wittgensteinian analysis of color-grapheme synesthesia, arguing that it fails to fit the contrast between perception and mental imagery, and so calls for a third category bearing only some of the logical marks of experience.  He contends that it is somewhat like a percept in that it depends on looking, has a definite beginning and end, and is affected by shifts in attention.  On the other hand, it is also somewhat like mental imagery in that it is voluntary and non-informative about the external world.

Although ter Hark cites Rich et al. (2005) for support, only 15% of their informants claimed to have full control over synesthetic experience (that is, induced by thought independent of sensory stimulation) and most (76%) characterized it as involuntary.  It would therefore seem that ter Hark’s analysis applies to only a fraction of synesthetes.  The claim that synesthetic percepts seem non-experiential because they fail to represent the world is also contestable.  Visual experience need not always be informative (for example, hallucinations, “seeing stars,” and so forth) and failing to inform us about the world is compatible with aiming to do so but misrepresenting.

7. Individuating the Senses

Synesthesia might be important when it comes to questions about the nature of the senses, how they interact, and how many of them there are.  For example, Keeley (2002) proposes that synesthesia may challenge the assumption that the various senses are “significantly separate and independent” (p.25, n.37) and so complicate discussions about what distinguishes one sense from another.  A similar point is made by Ross, who notes that synesthesia undermines his “modified property condition” (2001, p.502).  The modified property condition is supposed to be necessary for individuating the senses, and states that each sense modality specializes in detecting certain properties (2001, p.500).  As discussed in the section on representationalism, synesthesia might seem to indicate that properties usually deemed proprietary to one sense can be detected by others after all.  Meanwhile, Ross’ proposal that synesthesia be explained away as a memory association seems unpersuasive in light of the preponderance of considerations suggesting it is a genuine sensory phenomenon (see Ramachandran & Hubbard, 2001a, 2001b, 2003b; for further discussion of Ross, see Gatzia, 2008).  At present, little seems to have been written by philosophers on the significance of synesthesia for the individuation and interaction of the senses (though see Macpherson, 2007; O’Callaghan, 2008, p.325; and R. Gray, 2011, p.253, n.17).

8. Aesthetics and “Literary Synesthesia”

The use of “intersense analogy” or sense-related metaphor as a literary technique has long been familiar to authors and critics (for example, a sharp taste, a loud shirt), perhaps starting with Aristotle, who noticed a “sort of parallelism between what is acute or grave to hearing and what is sharp or blunt to touch” (quoted in O’Malley, 1957, p.391).  Intersense metaphors such as “the sun is silent” (Dante, quoted in O’Malley, 1957, p.409) and, more recently, “sound that makes the headphones edible” (from the lyrics of a popular rock band) may be “a basic feature of language” that it is natural for literature to incorporate (O’Malley, 1957, p.397), and to some “an essential component in the poetic sensibility” (Götlind, 1957, p.329).  Such “literary” synesthesia is therefore an important part of aesthetic criticism, as in Hellman’s (1977, p.287) discussion of musical styles, Masson’s analysis of acoustic associations (1953, p.222) and Ueda’s evaluation of cross-modal analogies in haiku poetry, which draw attention to “strange yet harmonious” combinations (1963, p.428).

Importantly, “the writer’s use of the ‘metaphor of the senses’” (O’Malley, 1957, p.391) is not to be confused with synesthesia as a sensory phenomenon, as repeatedly noted over the years by several philosophical works on poetry and aesthetics including Downey (1912, p.490), Götlind (1957, p.328) and O’Malley (1958, p.178).  Nevertheless, there are speculations about the connection between the two (for example, Smith, 1972, p.28; O’Malley, 1957, pp.395-396) and sensory synesthesia has been put forward as an important creative source in poetry (Downey, 1912, pp.490-491; Rayan, 1969), music and film (Brougher et al., 2005), painting (Tomas, 1969; Cazeaux, 1999; Ione, 2004) and artistic development generally (Donnell & Duignan, 1977).

That not all sensory matches work aesthetically—it seems awkward to speak of a loud smell or a salty color—might be significant in suggesting ties to perceptual synesthesia.  Perhaps they have more in common than is usually suspected (Marks, 1982; Day 1996).

Synesthetic metaphor is a “human universal” found in every culture and may be an expression of our shared nature (Pinker, 2002, p.439).  Maurer and Mondloch (2004) suggest that the fact that the cross-modal pairings in synesthesia tend to be the same as the sensory matches manifest in common metaphors may reveal that non-synesthete adults share cross-modal activations with synesthetes, and that synesthesia is a normal feature of early development.  Matey suggests that this lends credibility to the view that the cross-wiring present in synesthetes and non-synesthetes differs only in degree, and so we may draw conclusions about the types of representational contents possible for normal perceivers’ experiences based on the perceptual contents of synesthetes.

9. Synesthesia and Creativity

Ramachandran and Hubbard, among others, have been developing a number of hypotheses about the explanatory value of synesthesia for creativity, the nature of metaphor, and even the origins of language (2001b, 2003a; see also Mulvenna, 2007; Hunt, 2005).  Like synesthesia, creativity seems to consist in “linking two seemingly unrelated realms in order to highlight a hidden deep similarity” (Ramachandran & Hubbard, 2001b, p.17).  Ramachandran and Hubbard (2001b) conjecture that greater connectivity (or perhaps the absence of inhibitory processes) between functionally discrete brain regions might facilitate creative mappings between concepts, experiences, and behaviors in both artists and synesthetes.  These ideas are controversial, and although there is some evidence that synesthetes are more likely to be artists (for example, Ward et al., 2008; Rothen & Meier, 2010), the links between synesthesia and creativity remain tentative and conjectural.

10. References and Further Reading

  • Alter, T. (2006). Does synesthesia undermine representationalism? Psyche, 12(5).
  • Asher, J.E., Lamb, J., Brocklebank, D., Cazier, J., Maestrini, E., Addis, L., … Monaco, A. (2009). A whole-genome scan and fine-mapping linkage study of auditory-visual synesthesia reveals evidence of linkage to chromosomes. American Journal of Human Genetics, 84, 279-285.
  • Baron-Cohen, S. (1996). Is there a normal phase of synaesthesia in development? Psyche, 2(27).
  • Baron-Cohen, S., Wyke, M.A., & Binnie, C. (1987). Hearing words and seeing colours: An experimental investigation of a case of synaesthesia. Perception, 16(6), 761-767.
  • Baron-Cohen, S., Harrison, J., Goldstein, L., & Wyke, M.A. (1993). Coloured speech perception: Is synaesthesia what happens when modularity breaks down? Perception, 22, 419-426.
  • Beck, J. (1966). Effect of orientation and of shape similarity on perceptual grouping. Perception and Psychophysics, 1, 300-302.
  • Brougher, K., Mattis, O., Strick, J., Wiseman, A., & Zilczer, J. (2005). Visual music: Synaesthesia in art and music since 1900. London: Thames and Hudson.
  • Cazeaux, C. (1999). Synaesthesia and epistemology in abstract painting. British Journal of Aesthetics, 39(3), 241-251.
  • Chalmers, D. (2004). The representational character of experience. In B. Leiter (Ed.), The Future for Philosophy (pp.153-181). Oxford: Clarendon Press.
  • Critchley, E.M.R. (1997). Synaesthesia: Possible mechanisms. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.259-268). Cambridge, MA: Blackwell.
  • Cytowic, R.E. (1989). Synesthesia: A union of the senses. New York: Springer-Verlag.
  • Cytowic, R.E. (1997). Synesthesia: Phenomenology and neuropsychology: A review of current knowledge. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.17-39). Cambridge, MA: Blackwell.
  • Cytowic, R.E., & Eagleman, D. (2009). Wednesday is indigo blue: Discovering the brain of synesthesia. Cambridge: The MIT Press.
  • Day, S.A. (1996). Synaesthesia and synaesthetic metaphor. Psyche, 2(32).
  • Day, S.A. (2005). Some demographic and socio-cultural aspects of synesthesia. In L. Robertson & N. Sagiv (Eds.), Synesthesia: Perspectives from cognitive neuroscience (pp.11-33). Oxford: Oxford University Press.
  • Dixon, M.J., Smilek, D., Cudahy, C., & Merikle, P.M. (2000). Five plus two equals yellow. Nature, 406, 365.
  • Dixon, M.J., Smilek, D., & Merikle, P.M. (2004). Not all synaesthetes are created equal: Projector versus associator synaesthetes. Cognitive, Affective & Behavioral Neuroscience, 4(3), 335-343.
  • Dixon, M.J., Smilek, D., Duffy, P.L., Zanna, M.P., & Merikle, P.M. (2006). The role of meaning in grapheme-colour synaesthesia. Cortex, 42(2), 243-252.
  • Donnell, C.A., & Duignan, W. (1977). Synaesthesia and aesthetic education. Journal of Aesthetic Education, 11, 69-85.
  • Downey, J.E. (1912). Literary Synesthesia. The Journal of Philosophy, Psychology and Scientific Methods, 9(18), 490-498.
  • Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: The MIT Press.
  • Edquist, J., Rich, A.N., Brinkman, C., & Mattingley, J.B. (2006). Do synaesthetic colours act as unique features in visual search? Cortex, 42(2), 222-231.
  • Fish, W. (2010). Philosophy of perception: A contemporary introduction. New York: Routledge.
  • Fodor, J. (1974). Special sciences, or the disunity of science as a working hypothesis. Synthese, 28, 97-115.
  • Fodor, J. (1983). Modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
  • Galton, F. (1880). Visualized numerals. Nature, 22, 494-495.
  • Galton, F. (1883). Inquiries into human faculty and its development. Dent & Sons: London.
  • Gatzia, D.E. (2008). Martian colours. Philosophical Writings, 37, 3-16.
  • Götlind, E. (1957). The appreciation of poetry: A proposal of certain empirical inquiries. The Journal of Aesthetics and Art Criticism, 15(3), 322-330.
  • Gray, J.A. (1998). Creeping up on the hard question of consciousness. In S. Hameroff, A. Kaszniak & A. Scott (Eds.), Toward a science of consciousness II: The second Tucson discussions and debates (pp.279-291). Cambridge, MA: The MIT Press.
  • Gray, J.A. (2003). How are qualia coupled to functions? Trends in Cognitive Sciences, 7(5), 192-194.
  • Gray, J.A. (2004). Consciousness: Creeping up on the hard problem. Oxford: Oxford University Press.
  • Gray, J.A., Williams, S.C.R., Nunn, J., & Baron-Cohen, S. (1997). Possible implications of synaesthesia for the question of consciousness. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.173-181). Cambridge, MA: Blackwell.
  • Gray, J.A., Nunn, J., & Chopping, S. (2002). Implications of synaesthesia for functionalism: Theory and experiments. Journal of Consciousness Studies, 9(12), 5-31.
  • Gray, J.A., Parslow, D.M., Brammer, M.J., Chopping, S.M., Vythelingum, G.N., & Ffytche, D.H. (2006). Evidence against functionalism from neuroimaging of the alien colour effect in synaesthesia. Cortex, 42(2), 309-318.
  • Gray, R. (2001a). Synaesthesia and misrepresentation: A reply to Wager. Philosophical Psychology, 14(3), 339-346.
  • Gray, R. (2001b). Cognitive modules, synaesthesia and the constitution of psychological natural kinds. Philosophical Psychology, 14(1), 65-82.
  • Gray, R. (2004). What synaesthesia really tells us about functionalism. Journal of Consciousness Studies, 11(9), 64-69.
  • Gray, R. (2011). On the nature of the senses. In F. Macpherson (Ed.), The senses: Classic and contemporary philosophical perspectives (pp.243-260). New York: Oxford University Press.
  • Grossenbacher, P.G., & Lovelace, C.T. (2001). Mechanisms of synesthesia: Cognitive and physiological constraints. Trends in Cognitive Sciences, 5(1), 36-42.
  • Hardcastle, V.G. (1997). When a pain is not. The Journal of Philosophy, 94(8), 381-409.
  • Harrison, J.E. (2001). Synaesthesia: The strangest thing. New York: Oxford University Press.
  • Harrison, J.E., & Baron-Cohen, S. (1997). Synaesthesia: A review of psychological theories. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.109-122). Cambridge, MA: Blackwell.
  • Hellman, G. (1977). Symbol systems and artistic styles. The Journal of Aesthetics and Art Criticism, 35(3), 279-292.
  • Hunt, H. (2005). Synaesthesia, metaphor, and consciousness: A cognitive-developmental perspective. Journal of Consciousness Studies, 12(12), 26-45.
  • Hurley, S., & Noë, A. (2003a). Neural plasticity and consciousness. Biology and Philosophy, 18, 131-168.
  • Hurley, S., & Noë, A. (2003b). Neural plasticity and consciousness: Reply to Block. Trends in Cognitive Sciences, 7(1), 342.
  • Ione, A. (2004). Klee and Kandinsky: Polyphonic painting, chromatic chords and synaesthesia. Journal of Consciousness Studies, 11(3-4), 148-158.
  • Kadosh, R.C., Henik, A., Catena, A., Walsh, V., & Fuentes, L.J. (2009). Induced cross-modal synaesthetic experiences without abnormal neuronal connections. Psychological Science, 20(2), 258-265.
  • Keeley, B.L. (2002). Making sense of the senses: Individuating modalities in humans and other animals. The Journal of Philosophy, 99(1), 5-28.
  • Kim, C-Y., Blake, R., & Palmeri, T.J. (2006). Perceptual interaction between real and synesthetic colors. Cortex, 42, 195-203.
  • Kohler, I. (1964). Formation and transformation of the perceptual world. Psychological Issues 3(4, Monogr. No. 12), 1-173.
  • Luria, A.R. (1968). The mind of a mnemonist. New York: Basic Books.
  • Lycan, W. (1987). Consciousness. Cambridge, MA: The MIT Press.
  • Lycan, W. (2006). Representational theories of consciousness. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
  • Macpherson, F. (2007). Synaesthesia, functionalism and phenomenology. In M. de Caro, F. Ferretti & M. Marraffa (Eds.), Cartographies of the mind: Philosophy and psychology in intersection series: Studies in brain and mind (Vol.4, pp.65-80). Dordrecht, The Netherlands: Springer.
  • Marks, L.E. (1982). Synesthetic perception and poetic metaphor. Journal of Experimental Psychology: Human Perception and Performance, 8(1), 15-23.
  • Masson, D.I. (1953). Vowel and consonant patterns in poetry. The Journal of Aesthetics and Art Criticism, 12(2), 213-227.
  • Maurer, D. (1993). Neonatal synesthesia: Implications for the processing of speech and faces. In B. de Boysson-Bardies, S. de Schonen, P. Jusczyk, P. Mcneilage & J. Morton (Eds.), Developmental neurocognition: Speech and face processing in the first year of life (pp.109-124). Dordrecht: Kluwer.
  • Maurer, D., & Mondloch, C. (2004). Neonatal synesthesia: A re-evaluation. In L. Robertson & N. Sagiv (Eds.), Attention on Synesthesia: Cognition, Development and Neuroscience, (pp. 193-213). Oxford: Oxford University Press.
  • Meier, B., & Rothen, N. (2009). Training grapheme-colour associations produces a synaesthetic Stroop effect, but not a conditioned synaesthetic response. Neuropsychologia, 47(4), 1208-1211.
  • Mulvenna, C.M. (2007). Synaesthesia, the arts and creativity: A neurological connection. Frontiers of Neurology and Neuroscience, 22, 206-222.
  • Noë, A., & Hurley, S. (2003). The deferential brain in action. Trends in Cognitive Sciences, 7(5), 195-196.
  • O’Callaghan, C. (2008). Seeing what you hear: Cross-modal illusions and perception. Philosophical Issues, 18(1), 316-338.
  • O’Malley, G. (1957). Literary synesthesia. The Journal of Aesthetics and Art Criticism, 15(4), 391-411.
  • O’Malley, G. (1958). Shelley’s “air-prism”: The synesthetic scheme of “Alastor.” Modern Philology, 55(3), 178-187.
  • Paulesu, E., Harrison, J., Baron-Cohen, S., Watson, J.D.G., Goldstein, L., Heather, J., … Frith, C.D. (1995). The physiology of coloured hearing: A PET activation study of colour-word synaesthesia. Brain, 118, 661-676.
  • Pettit, P. (2003). Looks red. Philosophical Issues, 13(1), 221-252.
  • Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Viking.
  • Proulx, M.J. (2010). Synthetic synaesthesia and sensory substitution. Consciousness and Cognition, 19(1), 501-503.
  • Ramachandran, V.S., & Hubbard, E.M. (2000). Number-color synaesthesia arises from cross-wiring in the fusiform gyrus. Society for Neuroscience Abstracts, 30, 1222.
  • Ramachandran, V.S., & Hubbard, E.M. (2001a). Psychophysical investigations into the neural basis of synaesthesia. Proceedings of the Royal Society of London B, 268, 979-983.
  • Ramachandran, V.S., & Hubbard, E.M. (2001b). Synaesthesia: A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34.
  • Ramachandran, V.S., & Hubbard, E.M. (2003a). Hearing colors, tasting shapes. Scientific American, April, 52-59.
  • Ramachandran, V.S., & Hubbard, E.M. (2003b). The phenomenology of synaesthesia. Journal of Consciousness Studies, 10(8), 49-57.
  • Rang, H.P., & Dale, M.M. (1987). Pharmacology. Edinburgh: Churchill Livingstone.
  • Rayan, K. (1969). Edgar Allan Poe and suggestiveness. The British Journal of Aesthetics, 9, 73-79.
  • Revonsuo, A. (2001). Putting color back where it belongs. Consciousness and Cognition, 10(1), 78-84.
  • Rich, A.N., Bradshaw, J.L., & Mattingley, J.B. (2005). A systematic, large-scale study of synaesthesia: Implications for the role of early experience in lexical-colour associations. Cognition, 98, 53-84.
  • Rosenberg, G. (2004). A place for consciousness: Probing the deep structure of the natural world. Oxford: Oxford University Press.
  • Ross, P.W. (2001). Qualia and the senses. The Philosophical Quarterly, 51(205), 495-511.
  • Rothen, N., & Meier, B. (2010). Higher prevalence of synaesthesia in art students. Perception, 39, 718-720.
  • Segal, G.M.A. (1997). Synaesthesia: Implications for modularity of mind. In S. Baron-Cohen & J. Harrison (Eds.), Synaesthesia: Classic and contemporary readings (pp.211-223). Cambridge, MA: Blackwell.
  • Simner, J., Sagiv, N., Mulvenna, C., Tsakanikos, E., Witherby, S., Fraser, C., … Ward, J. (2006). Synaesthesia: The prevalence of atypical cross-modal experiences. Perception, 35, 1024-1033.
  • Smilek, D., Dixon, M.J., Cudahy, C., & Merikle, P.M. (2001). Synaesthetic photisms influence visual perception. Journal of Cognitive Neuroscience, 13, 930-936.
  • Smilek, D., Dixon, M.J., Cudahy, C., & Merikle, P.M. (2002). Synesthetic color experiences influence memory. Psychological Science, 13(6), 548-552.
  • Smilek, D., Dixon M.J., & Merikle P.M. (2003). Synaesthetic photisms guide attention. Brain & Cognition, 53, 364-367.
  • Ter Hark, M. (2009). Coloured vowels: Wittgenstein on synaesthesia and secondary meaning. Philosophia: Philosophical Quarterly of Israel, 37(4), 589-604.
  • Tomas, V. (1969). Kandinsky’s theory of painting. British Journal of Aesthetics, 9, 19-38.
  • Treisman, A. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8(2), 194-214.
  • Tye, M. (1995). Ten problems of consciousness: A representational theory of the phenomenal mind. Cambridge, MA: The MIT Press.
  • Ueda, M. (1963). Basho and the poetics of “Haiku.” The Journal of Aesthetics and Art Criticism, 21(4), 423-431.
  • Wager, A. (1999). The extra qualia problem: Synaesthesia and Representationalism. Philosophical Psychology, 12(3), 263-281.
  • Wager, A. (2001). Synaesthesia misrepresented. Philosophical Psychology, 14(3), 347-351.
  • Ward, J., & Simner, J. (2005). Is synaesthesia an X-linked dominant trait with lethality in males? Perception, 34(5), 611-623.
  • Ward, J., & Sagiv, N. (2007). Synaesthesia for finger counting and dice patterns: A case of higher synaesthesia? Neurocase, 13(2), 86-93.
  • Ward, J., Thompson-Lake, D., Ely, R., & Kaminski, F. (2008). Synaesthesia, creativity and art: What is the link? British Journal of Psychology, 99, 127-141.
  • Wittgenstein, L. (1958/1994). Philosophical investigations. Oxford: Blackwell.

Author Information

Sean Allen-Hermanson
Email: hermanso@fiu.edu
Florida International University
U. S. A.

and

Jennifer Matey
Email: jmatey@fiu.edu
Florida International University
U. S. A.

René Girard (1923—2015)

René Girard’s thought defies classification. He wrote from the perspective of a wide variety of disciplines: literary criticism, psychology, anthropology, sociology, history, biblical hermeneutics and theology. Although he rarely called himself a philosopher, many philosophical implications can be derived from his work. Girard’s work is above all concerned with philosophical anthropology (that is, with the question ‘What is it to be human?’), and draws from many disciplinary perspectives. Over the years he developed a mimetic theory. According to this theory, human beings imitate each other, and this eventually gives rise to rivalries and violent conflicts. Such conflicts are partially solved by a scapegoat mechanism, but ultimately, Christianity is the best antidote to violence.

Perhaps Girard’s lack of a specific disciplinary affiliation has contributed to a slight marginalization of his work among contemporary philosophers. Girard is not as widely read as better-known contemporary French philosophers (for example, Derrida, Foucault, Deleuze and Lyotard), but his work is becoming increasingly recognized in the humanities, and his commitment as a Christian thinker has given him prominence among theologians.

Table of Contents

  1. Life
  2. Mimetic Desire
    1. External Mediation
    2. Internal Mediation
    3. Metaphysical Desire
    4. The Oedipus Complex
  3. The Scapegoat Mechanism
    1. The Origins of Culture
    2. Religion
    3. Ritual
    4. Myth
    5. Prohibitions
  4. The Uniqueness of the Bible and Christianity
    1. The Hebrew Bible
    2. The New Testament
    3. Nietzsche’s Criticism of Christianity
    4. Apocalypse and Contemporary Culture
  5. Theological Implications
    1. God
    2. The Incarnation
    3. Satan
    4. Original Sin
    5. Atonement
  6. Criticisms
    1. Mimetic Theory Claims Too Much
    2. The Origins of Culture are Not Verifiable
    3. Girard Exaggerates the Contrast Between Myths and the Bible
    4. Christian Uniqueness Does Not Imply a Divine Origin
    5. Lack of a Precise Scientific Language
  7. References and Further Reading
    1. Primary
    2. Secondary

1. Life

René Girard was born on December 25, 1923, in Avignon, France. He was the son of a local archivist and went on to follow in his father’s footsteps: he studied at the École Nationale des Chartes in Paris and specialized in medieval studies. In 1947, Girard took the opportunity to emigrate to America and pursued a doctorate at Indiana University. His dissertation was on American opinions of France. Although his later work had little to do with his doctoral dissertation, Girard kept a lively interest in French affairs.

After the completion of his doctorate, Girard began to take an interest in Jean-Paul Sartre’s work. Although on a personal level Girard remained very much interested in Sartre’s philosophy, it had little influence on his thought. Girard settled in America and taught at various institutions (Indiana University, the State University of New York at Buffalo, Duke, Johns Hopkins, Bryn Mawr and Stanford) until his retirement in 1995. He died in 2015.

At the beginning of his career as a lecturer, Girard was assigned to teach courses on European literature, although, as he admitted, he was not at all familiar with the great works of the European novelists. As Girard read the great European novels in preparation for the course, he became especially engaged with the work of five novelists: Cervantes, Stendhal, Flaubert, Dostoyevsky and Proust.

His first book, Mensonge Romantique et Vérité Romanesque (1961), is a literary commentary on the works of these great novelists. Until that time, Girard had been a self-declared agnostic. As he researched the religious conversions of some of Dostoyevsky’s characters, he felt he had lived through a similar experience, and he converted to Christianity. From then on, Girard was a committed and practicing Roman Catholic.

After the publication of his first book, Girard turned his attention to ancient and contemporary sacrificial rituals, as well as to Greek myth and tragedy. This led to another important book, La Violence et le Sacré (1972), for which he gained much recognition. On a personal level he was a committed Christian, but his Christian views were not publicly expressed until the publication of Des Choses Cachées Depuis la Fondation du Monde (1978), his magnum opus and the best systematization of his thought. Thereafter, Girard wrote books that expand upon various aspects of his work. In 2005, Girard was elected to the Académie Française, a very important distinction among French intellectuals.

2. Mimetic Desire

Girard’s fundamental concept is ‘mimetic desire’. Ever since Plato, students of human nature have highlighted the great mimetic capacity of human beings; that is, we are the species most adept at imitation. Indeed, imitation is the basic mechanism of learning (we learn inasmuch as we imitate what our teachers do), and neuroscientists increasingly report that our neural structure is well suited to imitation (consider, for example, ‘mirror neurons’).

However, according to Girard, most thinking devoted to imitation pays little attention to the fact that we also imitate other people’s desires and that, depending on how this happens, imitation may lead to conflicts and rivalries. If people imitate each other’s desires, they may wind up desiring the very same things; and if they desire the same things, they may easily become rivals, as they reach for the same objects. Girard usually distinguishes ‘imitation’ from ‘mimesis’: the former is understood as the positive reproduction of someone else’s behavior, whereas the latter implies the negative dimension of rivalry. And because ‘imitation’ tends to suggest mere mimicry, Girard prefers ‘mimesis’ to refer to the deeper, instinctive response that human beings have to each other.

a. External Mediation

Girard calls ‘mediation’ the process by which a person influences the desires and preferences of another person. Thus, whenever a person’s desire is imitated by someone else, she becomes a ‘mediator’ or ‘model’. Girard points out that this is especially evident in advertising and marketing techniques: whenever a product is promoted, some celebrity is used to ‘mediate’ consumers’ desires; in a sense, the celebrity is inviting people to imitate him in his desire for the product. The product is not promoted on the basis of its inherent qualities, but simply because some celebrity desires it.

Girard highlights this type of relationship in his literary studies, as for example in his reading of Don Quixote. Don Quixote’s desire is mediated by Amadis de Gaula: Don Quixote becomes a knight-errant, not really because he autonomously desires to, but in order to imitate Amadis. Nevertheless, Amadis and Don Quixote are characters on different planes. They will never meet, and so they never become rivals.

The same can be said of the relation between Sancho and Don Quixote. Sancho desires to be governor of an island mostly because Don Quixote has suggested to him that that is what he should desire. Again, although they interact continuously, Sancho and Don Quixote belong to two different worlds: Don Quixote is a very complex man, whereas Sancho is simple in the extreme. Girard calls ‘external mediation’ the situation in which the mediator and the person mediated are on different planes. Don Quixote is an ‘external mediator’ to Sancho inasmuch as he mediates his desires ‘from the outside’; that is, Don Quixote never becomes an obstacle in Sancho’s attempts to satisfy his desires.

External mediation does not carry the risk of rivalry between subjects, because they belong to different worlds. Although the source of Sancho’s desire to be governor of an island is in fact Don Quixote, they never desire the same object. Don Quixote desires things Sancho does not desire, and vice versa. Hence, they never become rivals. Girard believes ‘external mediation’ is a frequent feature of the psychology of desire: from our earliest phase as infants, we look up in imitation to our elders, and eventually, most of what we desire is mediated by them.

b. Internal Mediation

In ‘internal mediation’, the ‘mediator’ and the person mediated are no longer separated by an abyss, and hence no longer belong to different worlds. In fact, they come to resemble each other to the point that they end up desiring the same things. But, precisely because they are no longer in different worlds and now reach for the same objects of desire, they become rivals. We are all aware that competition is fiercer when competitors resemble each other.

Thus, in internal mediation the subject imitates the model’s desires but ultimately, unlike in external mediation, falls into rivalry with the model/mediator. Consider this example: a toddler imitates his father in his occupations and desires to pursue his father’s career when he grows up. This will hardly cause any rivalry (although it may account for Freud’s Oedipus Complex; see section 2.d). This is, as we have seen, a case of external mediation. But now consider a PhD candidate who learns a great deal from his supervisor and seeks to imitate every aspect of his work, and even his life. Eventually, they may become rivals, especially if both are looking for scholarly recognition. Or consider the case of a toddler who is playing with a toy, and another toddler who, out of imitation, desires that very same toy: they will eventually become rivals for control of the toy. This is ‘internal mediation’; that is, the person is mediated from ‘inside’ his world, and therefore may easily become his mediator’s rival. This rivalry often has tragic consequences, and Girard considers it a major theme in modern novels. In Girard’s view, this literary theme is in fact a portrait of human nature: very often, people will desire something as a result of imitating other people, but eventually this imitation will lead to rivalry with the very person imitated in the first place.

c. Metaphysical Desire

In Girard’s view, mimetic desire may grow to such a degree that a person eventually desires to be her mediator. Again, advertising is illustrative: sometimes consumers do not just desire a product for its inherent qualities but rather desire to be the celebrity who promotes it. Girard considers that a person may desire an object only as part of a larger desire, namely the desire to be her mediator. Girard calls this desire to be another person ‘metaphysical desire’. Furthermore, acquisitive desire leads to metaphysical desire, and the original object of desire becomes a token representing the “larger” desire of having the being of the model/rival.

Whereas external mediation does not lead to rivalries, internal mediation does. But metaphysical desire leads a person not just to rivalry with her mediator; it leads to total obsession with and resentment of the mediator. For the mediator becomes the main obstacle to the satisfaction of the person’s metaphysical desire. Inasmuch as the person desires to be her mediator, such a desire can never be satisfied: nobody can be someone else. Eventually, the person in the grip of metaphysical desire comes to appreciate that the main obstacle to being the mediator is the mediator herself.

According to Girard, metaphysical desire can be a very destructive force, as it promotes resentment against others. Girard considers the anti-hero of Dostoyevsky’s Notes From the Underground the quintessential victim of metaphysical desire: the unnamed character eventually goes on a crusade against the world, disillusioned with everything around him. Girard believes that the origin of his alienation is his dissatisfaction with himself and his obsession with being someone else, an impossible task.

d. The Oedipus Complex

Girard’s career has been mostly devoted to literary criticism and the analysis of fictional characters. Girard believes that the great modern novelists (such as Stendhal, Flaubert, Proust and Dostoyevsky) understood human psychology better than the modern field of psychology does. And, as a complement to his literary criticism, he has developed a psychology in which the concept of ‘mimetic desire’ is central. Inasmuch as human beings constantly seek to imitate others, and most desires are in fact borrowed from other people, Girard believes that it is crucial to study how personality relates to others.

Starting from the main premise of mimetic desire, Girard has sought to reformulate some of psychology’s long-held assumptions. In particular, Girard seeks to reconsider some of Freud’s concepts. Although Girard has been careful to warn that Freud’s thought may be highly misleading in many ways, he has engaged with Freud’s work on a number of fronts. Girard admits that Freud and his followers had some good initial intuitions, but he criticizes Freudian psychoanalytic theory on the grounds that it tends to overlook the role that other individuals play in the development of personality. In other words, psychoanalysis tends to assume that human beings are largely autonomous and hence do not desire in imitation of others.

Girard grants that Freud was a superb observer, but was not a good interpreter. And, in a sense, Girard accepts that there is such a thing as the Oedipus Complex: the child will eventually come to unconsciously have a sexual desire for his mother, and a desire to kill his father; and indeed, perhaps this complex will endure throughout adulthood. But, Girard considers that the Oedipus Complex is the result of a mechanism very different from the one outlined by Freud.

According to Freud, the child has an innate sexual desire toward the mother and eventually discovers that the father is an obstacle to the satisfaction of that desire. Girard, on the other hand, reinterprets the Oedipus Complex in terms of mimetic desire: the child becomes identified with his father and imitates him; and inasmuch as he imitates his father, the child comes to imitate his father’s sexual desire for his mother. The father thus becomes the child’s model and rival, and that explains the ambivalent feelings so characteristic of the Oedipus Complex.

3. The Scapegoat Mechanism

In Girard’s psychology, internal mediation and metaphysical desire eventually lead to rivalry and violence. Imitation eventually erases the differences among human beings, and inasmuch as people become similar to each other, they desire the same things, which leads to rivalries and a Hobbesian war of all against all. These rivalries soon bear the potential to threaten the very existence of communities. Thus, Girard asks: how is it possible for communities to overcome their internal strife?

Whereas the philosophers of the 18th century would have agreed that communal violence comes to an end through a social contract, Girard believes that, paradoxically, the problem of violence is frequently solved with a lesser dose of violence. As mimetic rivalries accumulate, tensions grow ever greater, until they reach a paroxysm. When violence is at the point of threatening the existence of the community, very frequently a bizarre psychosocial mechanism arises: communal violence is all of a sudden projected upon a single individual. Thus, people who were formerly struggling against one another now unite their efforts against someone chosen as a scapegoat. Former enemies become friends as they communally participate in the execution of violence against a common enemy.

Girard calls this process ‘scapegoating’, an allusion to the ancient religious ritual in which communal sins were metaphorically imposed upon a he-goat, and the beast was then abandoned in the desert or sacrificed to the gods (in the Hebrew Bible, this is prescribed in Leviticus 16). The person who receives the communal violence is a ‘scapegoat’ in this sense: her death or expulsion serves to regenerate communal peace and restore relationships.

However, Girard considers it crucial that this process be unconscious in order to work. The victim must never be recognized as an innocent scapegoat (indeed, Girard considers that, prior to the rise of Christianity, ‘innocent scapegoat’ was virtually an oxymoron; see section 4.b below); rather, the victim must be thought of as a monstrous creature that transgressed some prohibition and deserved to be punished. In such a manner, the community deceives itself into believing that the victim is the culprit of the communal crisis, and that the elimination of the victim will eventually restore peace.

a. The Origins of Culture

Girard believes that the scapegoat mechanism is the very foundation of cultural life. Natural man became civilized not through some sort of rational deliberation embodied in a social contract (as it was fashionable to think among 18th-century philosophers), but rather through the repetition of the scapegoat mechanism. And, very much as many philosophers of the 18th century believed that their descriptions of the state of nature were in fact historical, Girard believes that Paleolithic men did continually use the scapegoat mechanism, and that it was precisely this that allowed them to lay the foundations of culture and civilization.

In fact, Girard believes that this process goes farther back in the evolution of Homo sapiens: hominids were probably already engaged in scapegoating, and it was precisely scapegoating that allowed a minimum of communal peace among early hominid groups. Hominids could eventually develop their main cultural traits thanks to the efficiency of the scapegoat mechanism. The murder of a victim brought forth communal peace, and this peace promoted the flourishing of the most basic cultural institutions.

Once again, Girard takes deep inspiration from Freud but reinterprets his observations. Freud’s Totem and Taboo presents the thesis that the origins of culture are founded upon the original murder of a father figure by his sons. Girard considers that Freud’s observations were only partially correct. Freud is right to point out that culture is indeed founded upon a murder. But this murder is not due to the oedipal themes Freud was so fond of. Instead, the founding murder is due to the scapegoat mechanism: the horde murdered a victim (not necessarily a father figure) in order to project upon her all the violence that was threatening the very existence of the community.

However, as mimetic desire has been a constant among human beings, scapegoating has never been entirely effective. Human communities therefore need to resort periodically to the scapegoat mechanism in order to maintain social peace.

b. Religion

According to Girard, the scapegoat mechanism brings about unexpected peace. But this moment is so marvelous that it soon acquires a religious overtone: the victim is immediately consecrated. Girard stands in the French sociological tradition of Durkheim, who considered that religion essentially accomplishes the function of social integration. In Girard’s view, inasmuch as the deceased victim brings forth communal peace and restores social order and integration, she becomes sacred.

At first, while living, victims are considered monstrous transgressors who deserve to be punished. But once they die, they bring peace to the community. Then they are no longer monsters, but gods. Girard highlights that, in most primitive societies, there is a deep ambivalence towards deities: they hold high virtues, but they are also capable of performing some very monstrous deeds. Thus, according to Girard, primitive gods are sanctified victims.

In such a manner, all cultures are founded upon a religious basis. The function of the sacred is to protect the stability of communal peace; to do this, it ensures that the effects of the scapegoat mechanism are perpetuated through the main religious institutions.

c. Ritual

Girard considers ritual the earliest cultural and religious institution. In his view, ritual is a reenactment of the original scapegoating murder. Although, as anthropologists are quick to assert, rituals are very diverse, Girard considers sacrifice the most widespread form of ritual. When a victim is ritually killed, Girard believes, the community is commemorating the original event that promoted peace.

The original victim was most likely a member of the community, and Girard considers it probable that the earliest sacrificial rituals employed human victims. Aztec human sacrifice may have shocked Western conquistadors and missionaries upon its discovery, but it was a cultural remnant of a once-widespread ancient practice. Eventually, rituals promoted sacrificial substitution, and animals were employed. In fact, Girard considers that hunting and the domestication of animals arose out of the need to continually reenact the original murder with substitute animal victims.

d. Myth

Following the old school of European anthropologists, Girard believes that myths are the narrative corollary of ritual. And, inasmuch as rituals are mainly a reenactment of the original murder, myths also recapitulate the scapegoating themes.

Now, Girard’s crucial point about the necessary unconsciousness of scapegoating must be kept in mind: in order for this mechanism to work, its participants must not recognize it as such. That is to say, the victim must never appear as what it really is: a scapegoat no guiltier of the disturbance than other members of the community.

The way to ensure that scapegoats are not recognized as what they really are is by distorting the story of the events that led to their death. This is accomplished by telling the story from the perspective of the scapegoaters. Myths will usually tell the story of someone doing a terrible thing and thus deserving to be punished. The victim’s perspective will never be incorporated into the myth, precisely because this would spoil the psychological effect of the scapegoat mechanism. The victim will always be portrayed as a culprit whose deeds brought about social chaos, but whose death or expulsion brought about social peace.

Girard’s most recurrent example of a myth is that of Oedipus. According to the myth, Oedipus was expelled from Thebes because he murdered his father and married his mother. But, according to Girard, the myth should be read as a chronicle written by a community that chose a scapegoat, blamed him for some crime, and punished him; once he was expelled, peace returned. Under Girard’s interpretation, the plague in Thebes is suggestive of a social crisis. To solve the crisis, Oedipus is selected as a scapegoat. But he is never presented as such: quite the contrary, he is accused of parricide and incest, and this justifies his persecution. Thus, Oedipus’ perspective as a victim is suppressed from the myth.

Furthermore, Girard believes that, as myths evolve, later versions tend to dissimulate the scapegoating violence (for example, instead of presenting a victim who dies by drowning, the myth will simply claim that the victim went to live at the bottom of the sea), in order to avoid arousing compassion for the victim. Indeed, Girard considers that the evolution of myths may even reach a point where no violence is present. But, Girard insists, all myths are founded upon violence, and if no violence is found in a myth, it must be because the community made it disappear.

Myths are typical of archaic societies, but Girard thinks that modern societies have an equivalent of myths: persecution texts. Especially during the medieval witch-hunts and persecutions of Jews, there were plenty of chronicles written from the perspective of the mobs and witch-hunters. These texts told the story of a crisis that appeared as the consequence of some crime committed by a person or a minority. The author of the chronicle is part of the persecuting mob: he projects upon the victim all the typical accusations and justifies the mob’s actions. Modern lynching accounts are another prominent example of such persecutory dynamics.

e. Prohibitions

Inasmuch as, under the scapegoaters’ view, there are no innocent scapegoats, an accusation must be made. Oedipus was accused of parricide and incest, and these are recurrent accusations used to justify persecution (as in the case of Marie Antoinette), but many others are found (blood libels, witchcraft, and so forth). After the victim is executed, Girard claims, a prohibition falls upon the action allegedly perpetrated by the scapegoat. By establishing it, the scapegoaters believe they restore social order. Thus, along with ritual and myth, prohibitions derive from the scapegoat mechanism.

Girard also considers that, prior to the scapegoat mechanism, communities go through a process he calls a ‘crisis of differences’. Mimetic desire eventually makes every member resemble every other, and this lack of differentiation generates chaos. Traditionally, this indifferentiation is represented through symbols typically associated with chaos and disorder (plagues, monstrous animals, and so forth). The death of the scapegoat restores order and, by extension, differentiation, so that everything returns to its place. In such a manner, social differentiation, and order in general, also derive from the scapegoat mechanism.

4. The Uniqueness of the Bible and Christianity

Girard’s Christian apologetics begins with a comparison of myths and the Bible. According to Girard, whereas myths remain caught in the dynamics of the scapegoat mechanism by telling foundational stories from the perspective of the scapegoaters, the Bible contains many stories told from the perspective of the victims.

In myths, those who are collectively executed are presented as monstrous culprits who deserve to be punished. In the Bible, those who are collectively executed are presented as innocent victims who are unfairly accused and persecuted. Thus, Girard recapitulates the old Christian apologetic tradition of insisting upon the Bible’s singularity. But, instead of emphasizing the Bible’s popularity, its fulfillment of prophecies, or its consistency, Girard thinks the Bible is unique in its defense of victims.

However, according to Girard, this is not merely a shift in narrative perspective. It is in fact something much more profound. Inasmuch as the Bible presents stories from the perspective of the victims, the Biblical authors reveal something not understood by previous mythological traditions. And, by doing so, they make scapegoating inoperative. Once scapegoats are recognized for what they truly are, the scapegoating mechanism no longer works. Thus, the Bible is a remarkably subversive text, inasmuch as it shatters the scapegoating foundations of culture.

a. The Hebrew Bible

Girard thinks that the Hebrew Bible is a text in travail. Plenty of its stories are still told from the perspective of the scapegoaters. And, more importantly, it continuously presents a wrathful God who sanctions violence. However, Girard notes some important shifts in certain of its narratives, especially when they are compared to myths with similar structures.

For example, Girard contrasts the story of Cain and Abel with the myth of Remus and Romulus. In both stories, there is rivalry between the brothers. In both stories, there is a murder. But, in the Roman myth, Romulus is justified in killing Remus, as the latter transgressed the territorial limits they had earlier agreed upon. In the Biblical story, Cain is never justified in killing Abel. And, indeed, the blood of Abel is evoked as the blood of the innocent victims that have been murdered throughout history, and that God will vindicate.

Girard is also fond of comparing the story of Oedipus with the story of Joseph. Oedipus is accused of incest, and the myth accepts this accusation, thereby justifying his expulsion from Thebes. Joseph is also accused of incest (he allegedly attempted to rape Potiphar’s wife, his de facto stepmother). But the Bible never accepts the accusation.

In Girard’s view, the Hebrew Bible is also crucial in its rejection of ritual sacrifice. Some prophets vehemently denounced the grotesque ritual killing of sacrificial victims, although, of course, the requirement of ritual sacrifice permeates much of the Old Testament. Girard understands this prophetic critique as a complementary approach to the defense of victims. The prophets promote a new concept of the divinity: God is no longer pleased with ritual violence. This is evocative of God’s plea in Hosea: “I want mercy, not sacrifices”. Thus, the Hebrew Bible carries out a twofold reversal of culture’s violent foundation: on the one hand, it begins to present the foundational stories from the perspective of the victims; on the other hand, it begins to present a God who is not satisfied with violence and, therefore, begins to dissociate the sacred from the violent.

b. The New Testament

Under Girard’s interpretation, the New Testament is the completion of the process that the Hebrew Bible had begun. The New Testament fully endorses the victims’ perspective, and satisfactorily dissociates the sacred from the violent.

The Passion story is central to the New Testament, and it is the complete reversal of the traditional myth’s structure. Amidst a huge social crisis, a victim (Jesus) is persecuted, blamed for some fault, and executed. Even the apostles succumb to the collective pressure and abandon Jesus, tacitly becoming part of the scapegoating crowd. This is emblematic in the story of Peter’s denial of Jesus.

Nevertheless, the evangelists never succumb to the collective pressure of the scapegoating mob. The evangelists maintain Jesus’ innocence throughout the whole story. At last, Jesus is recognized as what he really is: an innocent scapegoat, the Lamb of God led to the slaughter although no fault was in him. According to Girard, this is the completion of a slow process begun in the Hebrew Bible. Once and for all, the New Testament reverses the violent psychosocial mechanism upon which human culture has been founded.

Jesus’ ethical message is complementary to this. Under Girard’s interpretation, humanity has achieved social peace by performing violent acts of scapegoating. Jesus’ solution is much more radical and efficient: to turn the other cheek, to abstain from violent retribution. Scapegoating is not an efficient means of bringing about peace, as it always depends on the periodic repetition of the mechanism. The real solution is the total withdrawal from violence, and that is the core of Jesus’ message.

c. Nietzsche’s Criticism of Christianity

Girard is bothered by the possibility that his readers may fail to appreciate the uniqueness of the Bible and Christianity. In this sense, Girard is very critical of classical anthropologists such as Sir James Frazer, who saw the relevance of scapegoating in primitive rituals and myths, but, according to Girard, failed to see that the Christian story is fundamentally different from other scapegoating myths.

Indeed, Girard resents the fact that Christianity is usually considered to be merely one among many other religions. However, ironically, Girard seeks help from a powerful opponent of Christianity: Friedrich Nietzsche. Nietzsche criticized Christianity for its ‘slave morality’; that is, its tendency to side with the weak. Nietzsche recognized that, above other religions, Christianity promoted mercy as a virtue. Nietzsche interpreted this as a corruption of the human vital spirit, and advocated a return to the pre-Christian values of power and strength.

Girard is, of course, opposed to the Nietzschean disdain for mercy and antipathy towards the weak and victims. But, Girard considers Nietzsche a genius, inasmuch as the German philosopher saw what, according to Girard, most people (including the majority of Christians) fail to see: Christianity is unique in its defense of victims. Thus, in a sense, Girard claims that his Christian apologetics is for the most part a reversal of Nietzsche: they both agree that Christianity is singular, but whereas Nietzsche believed this singularity corrupted humanity, Girard believes this singularity is the manifestation of a power that reverses the violent foundations of culture.

d. Apocalypse and Contemporary Culture

Girard acknowledges that, on the surface, not everything in the New Testament is about peace and love. Indeed, there are some frightening passages in Jesus’ preaching, perhaps the most emblematic being “I come not to bring peace, but a sword”. This is part of the apocalyptic worldview prevalent in Jesus’ days. But, much more than that, Girard believes that the apocalyptic teachings found in the New Testament are a warning about future human violence.

Girard considers that, inasmuch as the New Testament overturns the old scapegoating practices, humanity no longer has the capacity to return to the scapegoat mechanism in order to restore peace. Once victims are revealed as innocent, scapegoating can no longer be relied upon to restore peace. In this sense, there is now an even greater threat of violence. According to Girard, Jesus brings a sword not in the sense that he himself is going to execute violence, but in the sense that, through his work and the influence of the Bible, humanity will no longer have the traditional violent means to put an end to violence. The inefficacy of the scapegoat mechanism will bring even more violence. The way to restore peace is not through the scapegoat mechanism, but rather through the total withdrawal from violence.

Thus, Girard believes that, ironically, Christianity has brought about even more violence. But, once again, this violence is not attributable to Christianity itself, but rather, to the stubbornness of human beings who do not want to follow the Christian admonition and insist on putting an end to violence through traditional scapegoating.

Girard believes that the 20th and 21st centuries are more than ever an apocalyptic age. And, once again, he acknowledges a 19th-century German figure as a precursor of these views: Carl von Clausewitz. According to Girard, the great Prussian war strategist realized that modern war would no longer be an honorable enterprise, but rather a brutal activity with the potential to destroy all of humanity. Indeed, Girard believes the 20th and 21st centuries are apocalyptic, but not in the fundamentalist sense. The ‘signs’ of apocalypse are not numerical clues such as 666, but rather signs that humanity has not found an efficient way to put an end to violence, and unless the Christian message of repentance and withdrawal from violence is taken up, we are headed towards doomsday: not a Final Judgment brought forth by a punishing God, but rather a doomsday brought about by our own human violence.

5. Theological Implications

Girard claims not to be a theologian, but rather a philosophical anthropologist. Yet, echoing Simone Weil, he believes that the gospels, inasmuch as they reveal the nature of human beings, also indirectly reveal the nature of God. Thus, Girard’s work has great implications for theologians and has generated new ways to interpret the traditional Christian doctrines.

a. God

Girard is little concerned with the classical theistic attempts to prove the existence of God (for example, those of Aquinas, Plantinga, Craig, and Swinburne). But he does seem to assume that the only way to explain the Bible’s uniqueness, its rejection of scapegoating distortion and its refusal to succumb to the mob’s influence, is to propose the intervention of a higher celestial power. So, in a weak sense, Girard’s apologetic works try to prove that the Bible is divinely inspired and, therefore, that God exists.

More importantly, Girard believes that the Bible reveals that the true God is far removed from violence, whereas gods that sanction violence are false gods, that is, idols. By revealing how human violence works, Girard claims, the Bible reveals that this violence does not come from God; rather, God sympathizes with victims and wants nothing to do with victimizers.

b. The Incarnation

Furthermore, the doctrine of the Incarnation becomes especially important under Girard’s interpretation, for God himself incarnates in the person of Jesus in order to become a victim. God is so far removed from aggressors and scapegoaters that He himself becomes a victim in order to show humanity that He sides with innocent victims. Thus, the way to overturn the scapegoat mechanism is not only to tell the stories from the perspective of the victim, but also to tell the story in which the victim itself is God incarnate.

c. Satan

Most liberal contemporary Christians pay little attention to Satan, but Girard wishes to retain the concept’s relevance. Girard has little patience for the literal mythological interpretation of Satan as a red, horned creature. According to Girard, the concept of Satan and the Devil most frequently invoked in the gospels is what the name etymologically expresses: the opponent, the accuser. In this sense, Satan is the scapegoat mechanism itself (or, perhaps more precisely, the accusing process); that is, the psychological process in which human beings are caught up by the lynching mob, eventually succumbing to its influence and participating in the collective violence against the scapegoat.

Likewise, the Holy Spirit in Girard’s interpretation is the reverse of Satan. Again, Girard appeals to etymology: ‘Paraclete’ etymologically refers to the spirit of defense, the advocate. Whereas Satan accuses victims, the Paraclete mercifully defends them. The Holy Spirit is thus understood by Girard as the overturning of the old scapegoating practices.

d. Original Sin

In the old Pelagian-Augustinian debate over original sin, Girard’s work clearly sides with Augustine. Under Girard’s interpretation, there is a twofold sense of original sin: 1) human beings are born with the propensity to imitate each other and, eventually, be led to violence; 2) human culture was laid upon the foundations of violence. Thus, human nature is tainted by an original sin, but it can be saved through repentance materialized in the withdrawal from violence.

The complementary aspect of the original sin debate, that is, free will, has not been tackled by Girard. But, since Girard is a Roman Catholic, it is presumable that he would not accept the Calvinist concepts of total depravity, irresistible grace, and predestination. He seems to believe that human beings are born with sin, but have the capacity to do something about it through repentance.

e. Atonement

Girard’s vision of Christianity also brings forth a new interpretation of the doctrine of atonement, that is, the doctrine that Christ died for our sins. Anselm’s traditional account (God’s honor was offended by the sins of mankind, and His honor was reestablished with the death of His own son) and other traditional interpretations (mankind was kidnapped by the Devil, and God offered Christ as a ransom; Jesus died so God could show humanity what He is capable of doing if we do not repent; and so forth) are deemed inadequate by Girard. Under Girard’s interpretation, Jesus saved us by becoming a victim and overturning once and for all the scapegoat mechanism. Thanks to Jesus’ salvific mission, human beings now have the capacity to understand what scapegoats really are, and have the golden opportunity to achieve enduring social peace.

6. Criticisms

An important source of criticisms against Girard is his apologetic commitment to Christianity. Most critics argue that he has a tendency to twist interpretations of classical texts and myths in order to favor Christian doctrine. Girard has frequently asserted that he was not a Christian for the early part of his life, but that his work as a humanist eventually drove him to Christianity. Also, Girard has been viewed with contempt by postmodernist critics who, on the whole, are suspicious of claims to objective truth.

a. Mimetic Theory Claims Too Much

The first point of criticism directed at Girard is that he is too ambitious. His initially plausible interpretations of mimetic psychology and anthropology are eventually transformed into a grandiose theoretical system that attempts to explain every aspect of human nature.

Consequently, his methods have been questioned. His theories regarding mimetic desire are derived not from a careful study of subjects and the implementation of tests, but rather from the reading of works of fiction. On this view, the fact that his theory seems to coincide with what many neuroscientists are reporting about mirror neurons is immaterial: his was just a lucky guess.

The same critique may be extended to his work on the origins of culture. Again, his scapegoating thesis may be plausible, inasmuch as it is easy to find many examples of scapegoating processes in human culture. But to claim that all human culture ultimately relies on scapegoating, and that the fundamental cultural institutions (myths, rituals, hunting, the domestication of animals, and so forth) are ultimately derived from an original murder, is perhaps too much.

Thus, in a sense, Girard’s work is subject to the same criticism leveled against many of the great theoretical systems of the human sciences in the 19th century (Hegel, Freud, Marx, and so forth): his exclusive concentration on his favorite themes makes him overlook equally plausible alternative explanations for the phenomena he highlights.

b. The Origins of Culture are Not Verifiable

As a corollary of the previous objection, empirically-minded philosophers would object that Girard’s theses are not verifiable in any meaningful way. There is little possibility of knowing what may have happened during Paleolithic times, apart from what paleontology and archaeology might tell us.

In some instances, Girard claims that his theses have indeed been verified. There are plenty of archaeological remains that suggest ritual human sacrifice, and plenty of contemporary rituals and myths that suggest scapegoating violence. But, then again, the number of rituals and myths that do not display violence is even greater. Girard does not see this as a great obstacle to his theses because, according to him, cultures have a tendency to erase the tracks of their original violence.

But, in such a case, the empirically-minded philosopher may argue that Girard’s work is not falsifiable in Popper’s sense. There seems to be no possible counter-example that would refute Girard’s thesis. If a violent myth or ritual is considered, Girard will argue that this piece of evidence confirms his hypotheses. If, on the other hand, a non-violent myth or ritual is considered, Girard will once again argue that this piece of evidence also confirms his hypotheses, because it proves that cultures erase the tracks of violence in myths and rituals. Thus, Girard is open to the same Popperian objection leveled against Freud: both sexual and non-sexual dreams confirm psychoanalytic theory; therefore, there is no possible way to refute it, and in such a manner it becomes a meaningless theory.

c. Girard Exaggerates the Contrast Between Myths and the Bible

Girard is also open to criticism inasmuch as his Christian apologetics seems to rely on an already biased comparison of myths and the Bible. It has been objected that he does not apply his standards fairly when contrasting the Bible and myths. Girard’s hermeneutic goes to great lengths to highlight violence in rituals where, in fact, it is not all that evident. He may be accused of being predisposed to find sanctioned violence in myths and, on the basis of that predisposition, of interpreting as sanctioned violence mythical elements that, under another interpretative lens, would not be violent at all. Metaphorically speaking, when studying many myths, Girard is just seeing faces in the clouds, projecting upon the myths elements that are far from clear.

In the same manner, one may object that Girard’s treatment of the Bible, and especially the New Testament, is too benevolent. Most secular historians would agree that there are some hints of persecution against the Jews in the gospels (for example, an exaggeration of Jewish guilt in the arrest and execution of Jesus), and that the historical Jesus’ apocalyptic preaching is not just a warning of future human violence, but rather, an announcement of imminent divine wrath.

d. Christian Uniqueness Does Not Imply a Divine Origin

Even if Girard’s thesis about the uniqueness of Christianity were accepted, it would not prove a divine origin. Perhaps Christianity is unique due to a set of historical and sociological circumstances that drove the biblical authors to sympathize with victims (indeed, Max Weber’s explanation runs along these lines: the Bible’s authors sympathize with victims because they were themselves victims, as subjects of the great empires of the Near East). In such a manner, Girard may be accused of committing an ad ignorantiam fallacy. The fact that we cannot currently explain a given phenomenon does not imply that the phenomenon’s origins are supernatural.

e. Lack of a Precise Scientific Language

Even if one were to accept that the Bible reveals something profound about human nature, scientifically-minded philosophers would object that Girard’s language is too obscure and too religiously based for scientific purposes. Perhaps the Bible does reveal some interesting insights about the nature of scapegoating. But to name such a process ‘Satan’, or to name the human tendency to engage in rivalries ‘sin’, carries a great potential for confusion. Whenever most readers encounter the word ‘Satan’, they are prone to imagine the nasty horned, tailed creature, not some abstract psychological mechanism that gives rise to scapegoating violence. So, even if Girard’s use of those terms is metaphorical, they are easily open to confusion, and perhaps should be abandoned.

7. References and Further Reading

a. Primary

  • Deceit, Desire, and the Novel: Self and Other in Literary Structure. Baltimore: The Johns Hopkins University Press, 1965.
  • Resurrection from the Underground: Feodor Dostoevsky. New York: Crossroad, 1997.
  • Violence and the Sacred. Baltimore: The Johns Hopkins University Press, 1977.
  • Things Hidden since the Foundation of the World. Research undertaken in collaboration with Jean-Michel Oughourlian and Guy Lefort. Stanford, CA: Stanford University Press, 1987.
  • “To Double Business Bound”: Essays on Literature, Mimesis, and Anthropology. Baltimore: The Johns Hopkins University Press, 1978.
  • The Scapegoat. Baltimore: The Johns Hopkins University Press, 1986.
  • Job: The Victim of His People. Stanford, CA: Stanford University Press, 1987.
  • A Theater of Envy: William Shakespeare. St. Augustine’s Press, 2004.
  • Quand ces choses commenceront… Entretiens avec Michel Treguer. Paris: Arléa, 1994.
  • The Girard Reader. Edited by James G. Williams. New York: Crossroad, 1996.
  • I See Satan Fall like Lightning. Maryknoll, NY: Orbis Books, 2001.
  • Celui par qui le scandale arrive: Entretiens avec Maria Stella Barberi. Paris: Brouwer, 2001.
  • Oedipus Unbound: Selected Writings on Rivalry and Desire. Edited by Mark Anspach. Stanford, CA: Stanford University Press, 2004.
  • Evolution and Conversion: Dialogues on the Origins of Culture. With Pierpaolo Antonello and Joao Cezar de Castro Rocha. London: T&T Clark/Continuum, 2007.
  • Christianity, Truth, and Weakening Faith: A Dialogue. René Girard and Gianni Vattimo. Edited by Pierpaolo Antonello and translated by William McCuaig. New York: Columbia University Press, 2010.
  • Battling to the End: Conversations with Benoît Chantre. East Lansing, MI: Michigan State University Press, 2010.
  • Anorexie et désir mimétique. Paris: L’Herne, 2008.

b. Secondary

  • ALBERG, Jeremiah. A Reinterpretation of Rousseau: A Religious System. Foreword by René Girard. Palgrave Macmillan, 2007.
  • ALISON, James. Broken Hearts and New Creations: Intimations of a Great Reversal. New York: Continuum, 2010.
  • ALISON, James. Faith Beyond Resentment: Fragments Catholic and Gay. New York: Crossroad, 2001.
  • ALISON, James. The Joy of Being Wrong: Original Sin Through Easter Eyes. New York: Crossroad, 1998.
  • ANDRADE, Gabriel. René Girard: Um retrato intelectual. É Realizações, 2011.
  • ASTELL, Ann W. Joan of Arc and Sacrificial Authorship. South Bend, IN: University of Notre Dame Press, 2003.
  • BAILIE, Gil. Violence Unveiled: Humanity at the Crossroads. New York: Crossroad, 1995.
  • BANDERA, Cesáreo. The Humble Story of Don Quixote: Reflections on the Birth of the Modern Novel. Catholic University of America Press, 2006.
  • BANDERA, Cesáreo. The Sacred Game. Penn State Press. 2004.
  • BARTLETT, Anthony. Cross Purposes: The Violent Grammar of Christian Atonement. Valley Forge, PA: Trinity Press International, 2001.
  • BELLINGER, Charles K. The Genealogy of Violence: Reflections on Creation, Freedom, and Evil. Oxford University Press, 2001.
  • DALY, Robert J., S. J. Sacrifice Unveiled: The True Meaning of Christian Sacrifice. London: T&T Clark / New York: Continuum, 2009.
  • DUMOUCHEL, Paul, ed. Violence and Truth: On the Work of René Girard. Stanford, CA: Stanford University Press, 1988.
  • FINAMORE, Stephen. God, Order, and Chaos: René Girard and the Apocalypse. Eugene, OR: Wipf & Stock, 2009.
  • FLEMING, Chris. René Girard: Violence and Mimesis. Cambridge, Eng.: Polity Press, 2004.
  • FREUD, Sigmund. Totem and Taboo. CreateSpace, 2011.
  • GOLSAN, Richard J. René Girard and Myth: An Introduction. New York: Routledge, 2001.
  • GOODHART, Sandor; Jorgensen, Jorgen; Ryba, Thomas; Williams, James G., eds. For René Girard: Essays in Friendship and in Truth. East Lansing, MI: Michigan State University Press, 2009.
  • GOODHART, Sandor. Sacrificing Commentary: Reading the End of Literature. Baltimore: The Johns Hopkins University Press, 1996.
  • GRANDE, Per Bjørnar. Mimesis and Desire: An Analysis of the Religious Nature of Mimesis and Desire in the Work of René Girard. LAP Lambert Academic Publishing, 2009.
  • GROTE, Jim and McGeeney, John. Clever as Serpents: Business Ethics and Office Politics. Collegeville, MN: The Liturgical Press, 1997.
  • HAMERTON-KELLY, Robert G, ed. Politics & Apocalypse. East Lansing, MI: Michigan State University Press, 2007.
  • HAMERTON-KELLY, Robert G. Sacred Violence: Paul’s Hermeneutic of the Cross. Minneapolis: Fortress Press, 1992.
  • HAMERTON-KELLY, Robert G. The Gospel and the Sacred: Poetics of Violence in Mark. Minneapolis: Fortress Press, 1994.
  • HOBBES, Thomas. Leviathan. Oxford UP, 2009.
  • KIRK-DUGGAN, Cheryl A. Refiner’s Fire: A Religious Engagement with Violence. Minneapolis: Fortress Press, 2001.
  • KIRWAN, Michael. Discovering Girard. Cambridge, MA: Cowley Publications, 2005.
  • LEFEBURE, Leo D. Revelation, the Religions, and Violence. Maryknoll, NY: Orbis Books, 2000.
  • MCKENNA, Andrew J. Violence and Difference: Girard, Derrida, and Deconstruction. Chicago: University of Illinois Press, 1992.
  • OUGHOURLIAN, Jean-Michel. The Genesis of Desire. E. Lansing, MI: Michigan State University Press, 2010.
  • OUGHOURLIAN, Jean-Michel. The Puppet of Desire: The Psychology of Hysteria, Possession, and Hypnosis. Stanford, CA: Stanford University Press, 1991.
  • SCHWAGER, Raymund, S.J. Banished from Eden: Original Sin and Evolutionary Theory in the Drama of Salvation. Gracewing, 2006.
  • SWARTLEY, Willard M., ed. Violence Renounced: René Girard, Biblical Studies, and Peacemaking. Response by René Girard and Foreword by Diana M. Culbertson. Telford, PA: Cascadia Publishing House, 2000.
  • WILLIAMS, James G. The Bible, Violence, and the Sacred: Liberation from the Myth of Sanctioned Violence. Foreword by René Girard. Eugene, OR: Wipf & Stock, 2007.

 

Author Information

Gabriel Andrade
Email: gabrielernesto2000@yahoo.com
University of Zulia
Venezuela

Platonism and Theism

This article explores the compatibility of, and relationship between, the Platonic and Theistic metaphysical visions. According to Platonism, there is a realm of necessarily existing abstract objects comprising a framework of reality beyond the material world. Platonism holds that these abstract objects do not originate with creative divine activity. Traditional Theism contends that God is primarily the creator and that God is the source of existence for all realities beyond himself, including the realm of abstract objects.

A primary obstacle between these two perspectives centers upon the origin, nature, and existence of abstract objects. The Platonist contends that abstract objects exist as part of the framework of reality and are, by nature, necessary, eternal, and uncreated. These qualities stand as challenges for the Traditional Theist attempting to reconcile his or her metaphysic with that of Platonism, since Traditional Theism contends that God is uniquely necessary, eternal, and uncaused, and is the cause of everything that exists. The question, therefore, emerges as to whether these two metaphysical visions are reconcilable; if not, why not; and if so, how this might be accomplished.

Despite the differences, some Traditional Theists have found Platonism to be a helpful framework by which to convey their conclusions regarding the nature of God and of ultimate reality. Others pursue reconciliation between Theism and Platonism through what has been called a modalized Platonism, which concludes that necessarily existing abstract objects nevertheless have their origin in the creative activity of God. Still others refuse any consideration of Theism in relationship to Platonism.

Table of Contents

  1. The Problem
  2. Platonism and Abstract Objects
    1. Abstract Objects and Necessary Existence
    2. Abstract Objects as Uncreated
    3. Abstract Objects as Eternal
  3. Traditional Theism
    1. God as Creator
    2. Creatio ex Nihilo
    3. Divine Freedom
  4. Emerging Tensions
    1. God as the Origin of Abstract Objects
    2. Abstract Objects as Uncreated
  5. Selected Proposals
    1. James Ross: A Critical Rejection of Platonism
    2. Nicholas Wolterstorff: A Restrictive Idea of Creation
    3. Morris and Menzel: Theistic Activism
    4. Bergman and Brower: Truthmaker Theory
    5. Plantinga: Christian Platonism
  6. References and Further Reading
    1. Books
    2. Articles

1. The Problem

Is the Platonic metaphysical vision compatible with that of Traditional Theism? Some would contend that the two are compatible, while others would argue to the contrary. Platonists argue that at least some, if not all, abstract objects are uncreated, and exist necessarily and eternally, whereas Traditional Theism asserts that God exists as the uncreated creator of all reality existing beyond himself.

But can this central conclusion of Traditional Theism be reconciled with the Platonic understanding of abstract objects as uncreated, necessarily extant, and eternal? Furthermore, if it is possible to reconcile these worldviews, how might one do so?  Put differently, is there anything, other than himself, that God has not created? Or are we to understand the conclusion that God has created everything in a qualified or restricted sense? Are there things which are not to be included in the Theistic tenet of faith that God is the creator of all things? If so, what things do not result from God’s creative activity?

2. Platonism and Abstract Objects

Contemporary Platonism argues for the existence of abstract objects. Abstract objects do not exist in space or time and are entirely non-physical and non-mental. Contemporary Platonism, while deriving from the teachings of Plato, is not a direct reflection of them. Abstract objects are non-physical entities: they do not exist in the physical world and are not compositionally material. Abstract objects are non-mental, meaning that they are not minds or ideas in minds, nor disembodied souls or gods. Further, abstract objects are said to be causally inert. In short, Platonism contends that abstract objects exist necessarily, are eternal, and cannot be involved in cause and effect relationships with other objects.

Platonists argue for the existence of abstract objects since it makes sense to believe, for instance, that numbers exist and that the only legitimate view of such things is that they are abstract objects. For Platonists, there are several categories of things, including physical things, mental things, spiritual things, and the problematic fourth category that includes things such as universals (the wisdom of Socrates, the redness of an apple), relationships (for example, loving, betweenness), propositions (such as 7 + 5 = 12, God is just), and mathematical objects such as numbers and sets. (Menzel, 2001, 3)

As we shall see below, the existence of abstract objects represents a significant challenge for the Christian in particular and for Traditional Theists in general, since it is central to these worldviews that God is the creator of everything other than God himself. Generally, however, abstract objects are considered to be like God in that they are said to have always existed and to exist always in the future. Consequently, there is no point at which God could be considered to have brought them into being. (Menzel, 2001, 1-5)

But why would the Platonist conclude that God has not created abstract objects, or has created only certain of them? The response to this question moves us to a consideration of the nature of abstract objects as necessarily extant, uncreated, and eternal, and to a brief account of why God’s creation of abstract objects is questionable.

a. Abstract Objects and Necessary Existence

What is meant by the phrase necessary existence? A thing is said to possess necessary existence if it would have existed no matter what or if it would have existed under any possible circumstances. A thing has necessary existence if its non-existence is impossible. For instance, if x is a necessary being, then the non-existence of x is as impossible as a round square or a liquid wine bottle. Human beings are said not to exist necessarily since we would never have existed if our parents had never met and this is a possible circumstance. (Van Inwagen, 1993, 118)
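Van Inwagen’s definition can be stated compactly in the notation of modal logic. The schema below is an illustrative sketch added for clarity, not Van Inwagen’s own notation; the box and diamond are the standard operators for necessity and possibility, and E(x) abbreviates “x exists”:

\Box E(x) \equiv \neg\Diamond\neg E(x)  (x exists necessarily if and only if it is not possible that x fails to exist)

E(x) \wedge \Diamond\neg E(x)  (x exists, but only contingently, as human beings do)

On this reading, the question pursued below is whether \Box E(x) entails that x is uncreated.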

For the Platonist, God’s creation of abstract objects is questionable since they are understood to exist necessarily; as such, abstract objects cannot have failed to exist. Consequently, we must consider whether God can create something that exists necessarily. Put differently, does the assertion “x exists necessarily” entail that “x is uncreated”? If this entailment holds, the Platonic understanding of the nature of abstract objects as necessarily extant excludes the creation of these objects by God or any other external source.

b. Abstract Objects as Uncreated

Second, for the Platonist, God’s creation of abstract objects is questionable since the creative event in Traditional Theism is understood to be a causal event, while Platonism understands abstract objects as uncreated and as incapable of entering into causal relations. If, therefore, abstract objects are uncreated, then it seems that God is just one more extant entity in the universe, and God cannot be the maker of all things, both visible and invisible. (Menzel, 1986)

c. Abstract Objects as Eternal

Third, for the Platonist, God’s creation of abstract objects is questionable due to their being eternal. There is no point at which God could be said to have brought abstract objects into being and, therefore, it is difficult to think of them as creatures, since they are not created. If an abstract object has no beginning in time, there could not have been a time at which God first created it. (Menzel, 2001, 4-6) If abstract objects are eternal, then they possess a character which parallels that of God, since according to Traditional Theism God is eternal.

These Platonic affirmations regarding the nature of abstract objects as eternal, necessary, and uncreated pose significant challenges to any effort to merge the worldviews of Platonism and Traditional Theism. With this understanding of abstract objects in hand, we now turn to the definition of Traditional Theism.

3. Traditional Theism

What are the central tenets of Traditional Theism? First, Traditional Theism and Classical Theism (hereafter referred to as Traditional Theism) are regarded as synonymous. Traditional Theism is supported in the writings of authors such as the Jewish philosopher Moses Maimonides (1135-1204), the Islamic author Avicenna (980-1037), and the Christian author Thomas Aquinas (1224-74). Traditional Theism constitutes what all Jews, Christians, and Muslims officially endorsed for many centuries. In addition, Traditional Theists strongly endorse the aseity-sovereignty doctrine, according to which God is the uncreated Creator of all things and all things other than God depend upon God, while God depends on nothing whatsoever. (Davies, 2004, 1) Numerous philosophers have assumed that God is, as defenders of Traditional Theism consider him to be, the source of all reality external to himself. From the period of St. Augustine of Hippo (354-430) to the time of G. W. Leibniz (1646-1716), philosophers carried on with the assumption that belief in God is belief in Traditional Theism. This understanding has been endorsed by many theologians, and is represented in the tenets of the Roman Catholic Church. These beliefs were also endorsed and propagated by many of the major Protestant reformers, and later by figures such as the eighteenth-century American Puritan Jonathan Edwards.

It is to these central tenets of Traditional Theism that we now turn, since it is these tenets of faith that represent the primary obstacles in the effort to reconcile the Theistic and Platonic metaphysical perspectives. They include: God as creator, creatio ex nihilo, and divine freedom.

a. God as Creator

Traditional Theism understands God to be the creative source for his own existence, as well as for the existence of all reality existing outside of himself. First, as regards God’s being the creative source for his own existence, if something else created God, and then God created the universe, it would seem to most that this other thing was the real and ultimate source of the universe and that God is nothing more than an intermediary. (Leftow, 1990, 584) Therefore, according to Traditional Theism, there can be no regress of explanations for what exists past the explanations for God’s existence.

Second, Traditional Theism not only endorses the belief that God is responsible for his own existence, but also that God is the Creator of all extant reality beyond himself. Consequently, God is what accounts for the existence of anything beyond God; that is, God is responsible for the existence of something rather than nothing. For Traditional Theism, this notion entails not only that God is responsible for the fact that the universe began to exist, but that God’s work is also responsible for the continued existence of the cosmos. (Davies, 2004, 3)

b. Creatio ex Nihilo

Is there anything that can pre-exist the creative activity of God? Traditional Theists respond to this question with a resounding, “No.”  Aquinas writes,

We must consider not only the emanation of a particular being from a particular agent, but also the emanation of all being from the universal cause, which is God; and this emanation we designate by the name of creation. Now what proceeds by particular emanation is not presupposed to that emanation; as when a man is generated, he was not before, but man is made from not-man, and white from not-white. Hence, if the emanation of the whole universal being from the first principle be considered, it is impossible that any being should be presupposed before this emanation. For nothing is the same as no being. Therefore, as the generation of a man is from the not-being which is not-man, so creation, which is the emanation of all being, is from the not-being which is nothing. (Thomas Aquinas, 1948, Ia, 45, 1.)

Traditional Theism, therefore, understands God as the one who creates ex nihilo, or from nothing. The phrase denotes not that God, in the creative act, worked with something called “nothing”, but that God creates that which is external to himself without there being anything prior to his creative act, with the exception of himself. The challenging implication of this tenet of Traditional Theism for the Platonic notion of abstract objects is obvious. Traditional Theists counter the Platonic notion that abstract objects are uncreated, contending that if God did not create non-substance items such as abstract objects, creation would not truly be ex nihilo, since these entities would have accompanied God from all eternity and would become aspects of God’s creation, for example, by being instantiated in it. (Leftow, 1990, 583-84)

c. Divine Freedom

Traditional Theists also argue that God’s choices to act are always carried out in the context of divine freedom, signifying that God is not constrained by anything beyond the laws of logic and His own nature. This is regarded as true by the Traditional Theist since God has established these laws and can alter them if he chooses to do so. Further, God cannot be compelled to choose. If God makes choices in response to human action, so says the Traditional Theist, it is always in his power to prevent actions by any method he chooses.

In short, God always responds to the actions he permits. Consequently, God is always ultimately in control, even in the context of actions that we initiate. Therefore, if God carried out his creative activity in complete divine freedom, and if God is not and cannot be compelled to act creatively by any external source, how can God’s freedom be reconciled with the Platonic notion of abstract objects as existing necessarily? For if abstract objects exist necessarily and exist by God’s creative act, then it seems God was compelled to create them. Again, the tension between the two worldviews of Traditional Theism and Platonism becomes apparent.

As this examination of the central tenets of Traditional Theism demonstrates, a challenge exists in the effort to integrate the worldviews of Traditional Theism and Platonism. In summary, Platonists contend that abstract objects are uncreated, whereas Traditional Theists argue that God created all reality; Platonists believe that abstract objects exist necessarily, whereas Traditional Theists contend that God alone is necessarily extant; Platonists propose that abstract objects are eternal, whereas Traditional Theists believe that God alone is eternal. With these contrasts in mind, we turn now to consider specific problems said to emerge from them.

4. Emerging Tensions

As has been observed in this article, the apparent conflict between Platonism and Traditional Theism emerges from the central notion of Traditional Theism, that God is the absolute creator of everything existing distinct from himself; and the central claim of contemporary Platonism, that there exists a realm of necessarily existent abstract objects that could not fail to exist. In considering the tension between abstract objects and Traditional Theism, Gould writes,

To see what the problem is, consider the following three jointly inconsistent claims: (a) there is an infinite realm of abstract objects which are (i) necessary independent beings and are thus (ii) uncreated; (b) only God exists as a necessary independent being; (c) God creates all of reality distinct from him, i.e. only God is uncreated. Statement (a) represents a common understanding of Platonism. Statements (b) and (c) follow from the common theistic claim that to qualify for the title “God,” someone must exist entirely from himself (a se), whereas everything else must be somehow dependent upon him. (Gould, 2010, 2)

A contradiction emerges in consideration of the first and third claims. Proposal (a) posits the existence of abstract objects that are necessary, independent, and uncreated. Proposal (c) posits that all reality existing separately from God has its origin in divine creative activity. These two proposals appear to be mutually exclusive. As a result, an impasse appears to exist between Platonism and Traditional Theism. Traditional Theism asserts that the existence of all things outside of God is rooted in divine activity, whereas Platonism argues that there are strong reasons for recognizing in our ontology the existence of a realm of necessarily existent abstract objects. For the Traditional Theist, the realm of necessity, as well as that of contingency, is within the province of divine creation; God is, in some fashion, responsible for the existence of all necessarily existent entities, as well as for contingent objects such as stars, planets, electrons, and so forth. (Morris and Menzel, 1986, 153)
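The inconsistency can be made explicit with a simple schematic formalization. The notation below is introduced here for illustration and is not Gould’s own: let A(x) mean “x is an abstract object”, N(x) mean “x is a necessary independent being”, C(x) mean “x is created by God”, and let g name God.

(a) \exists x\,(A(x) \wedge N(x) \wedge \neg C(x))

(b) \forall x\,(N(x) \rightarrow x = g)

(c) \forall x\,(x \neq g \rightarrow C(x))

On the Platonic assumption that no abstract object is identical with God, any witness to (a) is distinct from g; claim (c) then demands C(x), which (a) denies, and (b) is likewise violated, since the witness satisfies N(x) without being g. At most two of the three claims can be retained, which is just to say that the triad is jointly inconsistent.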

But what are the specific problems associated with the effort to merge Platonism and Traditional Theism? Menzel clarifies,

On the [P]latonist conception, most, if not all, abstract objects are thought to exist necessarily. One can either locate these entities outside the scope of God’s creative activity or not. If the former, then it seems the believer must compromise his view of God: rather than the sovereign creator and lord of all things visible and invisible, God turns out to be just one more entity among many in a vast constellation of necessary beings existing independently of his creative power. If the latter, the believer is faced with the problem of what it could possibly mean for God to create an object that is both necessary and abstract. (Menzel, 1987, 1)

Therefore, both horns of this dilemma lead to inevitable challenges. To contend that God created abstract objects has been said to lead to a problem of coherence and a questioning of divine freedom. To contend that God did not create abstract objects has been understood to lead to a problem regarding the sovereignty of God, as well as the uniqueness of God. It is to these matters that we now turn.

a. God as the Origin of Abstract Objects

Consider the conclusion that God created abstract objects. Two objections arise from this proposal.

First, the coherence problem contends that it makes no sense to discuss the origin of things considered to exist necessarily, that is, things that could not have failed to exist, such as abstract objects. (Leftow, 1990, 584) Supposing that at least some abstract objects exist necessarily, does the truth of this supposition entail that God has not created those abstract objects?

Second, the freedom problem has its origin in the contention of Traditional Theism that God always acts in total freedom. However, if abstract objects exist necessarily and are created, then God had no choice in the matter of their creation. Therefore, God is constrained by something other than himself, a conclusion leading to questions regarding God’s omnipotence and complete freedom. Traditional Theists are quick to affirm that God’s intentions and choices are not constrained by any entity other than God, and that no chain of true explanations goes beyond a divine intention or choice – or else beyond God’s having his nature and whatever beliefs he has logically prior to creating, which may explain certain of God’s intentions and choices. For if nothing other than God forces God to act as he does, the real explanation of God’s actions always lies within God himself. (Leftow, 1990, 584-585)
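The freedom problem, too, can be put schematically. The following reconstruction is an illustrative sketch rather than Leftow’s own formalization; E(a) abbreviates “the abstract object a exists” and C(a) “God creates a”:

(1) \Box E(a)  (Platonism: a exists necessarily)

(2) \Box(E(a) \rightarrow C(a))  (wide-scope creation: necessarily, a exists only if God creates it)

(3) \Diamond\neg C(a)  (God’s creative act is free, so it could have been omitted)

From (2) and (3) it follows that \Diamond\neg E(a), which contradicts (1). The Traditional Theist must therefore deny that God’s creation of abstract objects is free in this sense, or give up one of the other premises.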

b. Abstract Objects as Uncreated

Suppose, on the other hand, that God did not create abstract objects. Problems still emerge. First, if abstract objects are eternal, necessary, and uncreated, then these realities are as sovereign as God, who, according to the Traditional Theist, is likewise eternal, necessary, and uncreated. God therefore is merely one more object in the vast array of items in the universe, an array which also includes abstract objects. This dilemma has been designated the sovereignty problem. (Leftow, 1990, 584)

Further, a necessary object is said to constitute its own reason for existence; it is said to exist of and from itself. There is therefore no need for a further explanation of the necessary object’s existence, a feature known as aseity. Aseity, however, has traditionally been ascribed uniquely to God. Therefore, if abstract objects exist a se, then God is not unique: he exists alongside abstract objects as one being among many existing by their own nature. This problem has been designated the uniqueness problem.

In consideration of the relationship of Platonism and Traditional Theism, these problems force the Theist to revise, in some fashion, his understanding of the nature of God, to reject Platonism altogether, or to seek a manner in which to reconcile the two. We now turn to a consideration of some of the efforts made by Traditional Theists to merge or reconcile these two major metaphysical perspectives.

5. Selected Proposals

Can the worldviews of Traditional Theism and Platonism be merged in a manner that does not compromise the core tenets of these seemingly divergent metaphysical perspectives? Proposals range from those which reject altogether the notion of compatibility to those that use the Augustinian notion of abstract ideas as products of the intellectual activity of God. The present section considers five prominent proposals.

a. James Ross: A Critical Rejection of Platonism

Ross’ approach represents a rejection of the integration of the Platonic and Theistic metaphysical perspectives. Ross presents a highly critical analysis of Platonism. He denies the Platonic notion of a world of eternal forms, opting instead for a thoroughgoing Aristotelianism that posits the existence of inherent explanatory structures throughout reality, which he understands as “forms”. According to Ross, if the independent necessary beings of Platonic Theism are other than God, both the simplicity and the independence of God are compromised. Ross further posits that by directing our attention to the Platonic abstractions, which all existing things are supposed to exemplify, we are distracted from the things or objects themselves. (Ross, 1989, 3)

Ross presents a further set of objections to Platonic metaphysics. He points out that the whole set of abstract entities, which all physical objects are supposed to instantiate, is held to comprise eternal and changeless realities. Within a Theistic point of view, according to Ross, two options exist regarding these abstract entities: first, some Theists propose that abstract entities are co-eternal with God because they are in fact one with God; second, abstract objects are in some other sense ideas in the mind of God and therefore co-eternal with him.

Ross objects that the first possibility is incompatible with an attribute traditionally ascribed to God, namely, divine simplicity. He further objects that the second contention compromises the Traditional Theist’s understanding of God as the source of all extant realities beyond himself. Third, Ross counters that divine creation seems not to involve much creativity or choice if it consists entirely of God instantiating beings that had already existed from all eternity, thereby compromising God’s freedom. Further, the whole sense of creatio ex nihilo is eliminated if we are to conceive of God not as making things up, but only as granting physical existence to that which already shared abstract existence co-eternally with him. (Ross, 1989, 3-5)

Ross concludes that Platonism and Traditional Theism are inherently incompatible: incorporating the Platonic worldview, with its abstract objects that exist eternally, necessarily, and without cause, forces the Traditional Theist to compromise in some fashion his understanding of the nature of God, and thereby to depart from what is regarded as an orthodox understanding of God.

b. Nicholas Wolterstorff: A Restrictive Idea of Creation

Nicholas Wolterstorff finds a mediating position between the Platonic and Theistic worldviews. He does so, however, by adopting a non-Traditional Theistic perspective, which according to some is an unavoidable consequence of endorsing Platonism. Wolterstorff proposes that necessarily existing abstract objects are in fact not dependent upon God (Wolterstorff, 1970), and he promotes the view that some properties, specifically those exemplified by God, are to be excluded from God’s creative activity. (Gould, 2010, 134) Wolterstorff goes so far as to propose that God in his nature has properties that he did not bring about. (Wolterstorff, 1970, 292) He writes:

[Consider] the fact that propositions have the property of being either true or false. This property is not a property of God. . . . For the propositions “God exists” and “God is able to create” exemplify being true or false wholly apart from any creative activity on God’s part; in fact, creative ability on his part presupposes that these propositions are true, and thus presupposes that there exists such a property as being either true or false. (Wolterstorff, 1970, 292; Gould, 2010, 135)

As such, Wolterstorff presents what may be termed a restrictive understanding of the creative activity of God. (Wolterstorff, 1970, 292) Wolterstorff, a Christian, argues that the biblical writers simply did not endorse a wide-scope reading of the doctrine of creation. He posits that it cannot legitimately be maintained that the biblical writers actually had universals in view when speaking of God’s being the Creator of all things. In addition, he points out that the creator/creature distinction is invoked in Scripture for religious, not theoretical or metaphysical, reasons.

Again we see in Wolterstorff’s approach what those who reject Traditional Theism altogether take to be an inevitable result of endorsing Platonism. Because he endorses Platonism, Wolterstorff is said to have compromised the Traditional Theistic understanding of God, in that God ceases to be the creator of various dimensions of his own identity, as well as of certain objects existing beyond himself.

c. Morris and Menzel: Theistic Activism

Christopher Menzel and Thomas Morris acknowledge a tension between Theism and Platonism but seek to reconcile the divergent metaphysical perspectives by means of the concept of Theistic Activism. Morris and Menzel ask whether God can be responsible not only for the creation of all contingent reality but also, intelligibly and coherently, for necessary existence and necessary truth. They proceed to demonstrate what they term the extraordinary compatibility of core elements of the Platonic and Theistic metaphysical visions. (Morris and Menzel, 1986, 361) Menzel writes,

The model that will be adopted . . . is simply an updated and refined version of Augustine’s doctrine of divine ideas, a view I will call theistic activism, or just activism, for short. Very briefly, the idea is this. On this model, abstract objects are taken to be contents of a certain kind of divine intellective activity in which God is essentially engaged; roughly, they are God’s thoughts, concepts, and perhaps certain other products of God’s mental life. This divine activity is. . . causally efficacious: the abstract objects that exist at any given moment, as products of God’s mental life, exist because God is thinking them; which is just to say that he creates them. (Menzel, 1986)

The authors therefore attempt to provide a Theistic ontology that places God at the center and views everything else as standing in a relation of creaturely dependence on God. They agree that Platonism has historically been viewed as incompatible with Western Theism, but they propose that this perceived incompatibility is not insurmountable and that the notion of Theistic Activism can overcome it. Menzel and Morris have two consequent objectives. First, they strive to eliminate the apparent inconsistency between Platonism and Theism. Second, they strive to preserve the Platonic conception of abstract objects, such as properties, as necessary, eternal, and uncreated beings.

Morris and Menzel resolve the tension between God and co-eternally existing abstract objects by concluding that God, in some fashion, must be creatively responsible for abstract objects. The authors therefore advance Theistic Activism, according to which the framework of reality that includes abstract objects has its source in divine intellectual activity.

First, they argue that a Theistic Activist will hold God creatively responsible for the entire modal economy: for what is possible, what is necessary, and even what is impossible. As stated above, the authors appeal to the Augustinian divine ideas tradition, according to which the Platonic framework of reality arises out of the creatively efficacious intellective activity of God. The entire Platonic realm is, therefore, to be understood as deriving from God. (Morris and Menzel, 1986, 356)

Second, Morris and Menzel propose a continuous model of creation, according to which God always plays a direct causal role in the existence of his creatures, his creative activity being essential to a creature’s being at all times throughout its spatio-temporal existence. This is true regardless of whether God initially causes the created entity to exist. This conclusion is essential to Morris and Menzel’s proposal in that it provides a framework in which it can coherently be argued that God creates absolutely all objects, be they necessary or contingent. (Menzel, 1982, 2)

Third, for the Theistic Activist, God necessarily creates the framework of reality. Morris and Menzel acknowledge that this contention is potentially problematic with regard to God’s activity as a free creator. To resolve the dilemma posed by the notions of God necessarily creating and God’s freedom, the authors argue that divine freedom must be understood in a radically different fashion from human freedom. Divine freedom is shaped by God’s moral nature; therefore, God could not have done otherwise, morally, than he did in the act of creation.

Fourth, Morris and Menzel address the problem of God’s own nature in relationship to this creative activity, considering whether the varied dimensions of God’s own nature are part of the created framework. The authors have two responses. They reject the proposal of some that God is to be understood as pure being and therefore devoid of determinate attributes such as omnipotence or omniscience. Instead, they opt for the solution that God has a nature and that God creates his own nature. (Morris, 1989)

The writers conclude:

On the view of absolute creation, God is indeed a determinate, existent individual, but one whose status is clearly not just that of one more item in the inventory of reality. He is rather the source of absolutely everything there is: to use Tillich’s own characterization, he is in the deepest sense possible the ground of all-being. (Morris and Menzel, 1986, 360)

d. Bergman and Brower: Truthmaker Theory

Bergman and Brower conclude that Platonism is inconsistent with the central thesis of Traditional Theism, the aseity-dependence doctrine, which holds that God is an absolutely independent being who exists entirely from himself, or a se. This central thesis led both philosophers and theologians of the Middle Ages to endorse the doctrine of divine simplicity, by which God is understood to be an absolutely simple being, completely devoid of any metaphysical complexity. According to the doctrine, God lacks the complexity associated with material or temporal composition, as well as the minimal form of complexity associated with the exemplification of properties.

The inconsistency is most apparent in the tension between Platonism and divine simplicity. Platonism requires all true predications to be explained in terms of properties, while divine simplicity requires God to be identical with each of the things that can be predicated of him. If both are true, then God is identical with each of his properties and is therefore himself a property. This conclusion conflicts with the Traditional Theist’s understanding of God as a person and with the conclusion that persons cannot be exemplified. Bergman and Brower therefore advance the claim that Platonism is inconsistent with the aseity-dependence doctrine itself, and they argue that rejecting divine simplicity fails to remove the tension. In their view, contemporary philosophers of religion have lost sight of a significant tension between Traditional Theism and Platonism, wrongly concluding that the two are perfectly compatible.

Bergman and Brower describe Platonism as characterized by two components. First, Platonism entails the view that a unified account of predication can be provided in terms of properties or exemplifiables. Second, it entails the view that exemplifiables are best conceived of as abstract objects. Bergman and Brower note that Traditional Theism has typically addressed the second of these views, and they propose that the distinctive aspect of their own argument targets the first. For them this distinction is all-important, since it is often concluded that the inconsistency of Platonism and Traditional Theism can be avoided merely by rejecting the Platonic view of properties in favor of another, such as the Augustinian view that properties are ideas in the mind of God. They write,

Traditional Theists who are Platonists, therefore, cannot avoid the inconsistency merely by dropping the Platonic conception of properties and replacing it with another – whether it be an Aristotelian conception (according to which there are no unexemplified universals), some form of immanent realism (according to which universals are concrete constituents of things that exemplify them), a nominalistic theory of tropes (according to which properties are concrete individuals), or even the Augustinian account (according to which all exemplifiables are divine concepts). (Bergman and Brower, 2006, 3-4)

Bergman and Brower contend, however, that the inconsistency between the two metaphysical perspectives remains as long as the Traditional Theist continues to endorse the first of the two components of Platonism cited above. They further argue that the inconsistency can be resolved in only one of two ways: either one rejects Traditional Theism, thereby becoming a non-Theist or a non-Traditional Theist, or one rejects any unified account of predication in terms of exemplifiables. Those who desire to maintain the perspective of Traditional Theism are naturally inclined to adopt a unified account of predication, and it is at this point that Bergman and Brower propose the alternative of Truthmaker Theory. (Bergman and Brower, 2006, 4)

But what is meant by the designation “Truthmaker”? The authors point out that it is not to be understood in causal terms, or literally in terms of a “maker”; rather, it is to be understood in terms of what they regard as a broadly logical entailment. Bergman and Brower begin their defense of Truthmaker Theory with a defense of the Truthmaker theory of predication. Twenty-first century philosophers typically speak of Truthmakers as entities whose existence entails the truth of certain statements or predications, that is, the truths expressed by them. For instance:

TM: If an entity E is a Truthmaker for a predication P, then “E exists” entails the truth expressed by P.

As a result, Socrates may be regarded as the Truthmaker for the statement “Socrates is human,” and God may be regarded as the Truthmaker for the statement “God is divine.” If Traditional Theists desire to explain the truth of this predication in terms of something other than properties or exemplifiables, they can do so in terms of Truthmakers: given that “God is divine” is a case of essential predication and that God necessitates its truth, God is a plausible candidate for its Truthmaker. (Bergman and Brower, 2006, 25-27)
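Read as a broadly logical entailment rather than a causal relation, TM admits of a compact schematic rendering. The following is an illustrative formalization only, not the authors’ own notation; the box expresses broadly logical necessity, and p is the truth expressed by the predication P:

```latex
% TM read as a broadly logical entailment (illustrative notation only):
% if E is a Truthmaker for the predication P, then it is broadly
% logically necessary that if E exists, the truth p expressed by P holds.
\[
\mathrm{TM}\colon\quad \mathit{Truthmaker}(E, P) \;\rightarrow\; \Box\,\big(\text{$E$ exists} \rightarrow p\big)
\]
% Instances: Socrates for "Socrates is human"; God for the essential
% predication "God is divine".
```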

Not only do Bergman and Brower defend a Truthmaker theory of predication, but they also attempt to demonstrate that Truthmaker Theory yields an understanding of the doctrine of divine simplicity that rescues the doctrine from the standard contemporary objection leveled against it, namely its alleged incoherence. From the fact that God is simple, the medievals infer that God lacks any accidental or contingent properties, and therefore that all true predications of the form “God is F” are cases of essential predication. Thus, from the truth “God is divine” it can be inferred that God is identical with his nature or divinity, a conclusion that redeems the doctrine of divine simplicity. From the truth “God is good,” it can be inferred that he is identical with his goodness, the essence of the doctrine of divine simplicity; and the same holds for every other predication of this form. Further, just as God is identical with each of these qualities, so each of these qualities is identical with each of the others, a further component of the doctrine of divine simplicity.

e. Plantinga: Christian Platonism

Alvin Plantinga has been described as a Platonist par excellence. (Gould, 2010, 108) If Platonism is defined as the metaphysical perspective that there are innumerably many necessarily existing abstract entities, then Plantinga’s Does God Have A Nature? represents a thorough defense of Christian Platonism. (Freddoso, 145-53) Plantinga acknowledges that most Christians believe that God is the uncreated creator of all things, that all things depend on him, and that he depends upon nothing at all. The created universe presents no problem for this doctrine: God’s creation depends on him in a variety of ways, and God is in no way dependent upon it. What does present a problem for this doctrine is the entire realm of Platonic universals: properties, kinds, propositions, numbers, sets, states of affairs, and possible worlds. These things are everlasting, having no beginning or end, and they are also said to exist necessarily; their non-existence is impossible. But how then are these abstract objects related to God? Plantinga frames the problem:

According to Augustine, God created everything distinct from him; did he then create these things? Presumably not; they have no beginnings. Are they dependent on him? But how could a thing whose non-existence is impossible . . . depend upon anything for its existence? And what about the characteristics and properties these things display? Does God just find them constituted the way they are? Must he simply put up with their being thus constituted? Are these things, their existence and their character, outside his control?  (Plantinga, 1980, 3-4)

Plantinga acknowledges two conflicting intuitions regarding God and attempts to reconcile them. On the one hand, it is held that God has control over all things (sovereignty) and that God is uncreated, existing a se. On the other hand, it is held that certain abstract objects and necessary truths are independent of God, and that certain of these, such as omniscience, omnipotence, and omnibenevolence, constitute God’s nature. These two conclusions, however, are logically contradictory: how can God have sovereign control over all things if abstract objects exist independently of him?

Either the first or the second of these intuitions must be false, and the entirety of Does God Have A Nature? is dedicated to an attempt to resolve this dilemma. Plantinga first discusses the proposal of Kant, who resolved the conflict by denying that God has a nature, a conclusion that Plantinga rejects. Plantinga then considers the proposed solution of Thomas Aquinas. Aquinas argues on behalf of the doctrine of divine simplicity, which holds that God has a nature but is identical with it. Plantinga concludes that this proposal is also inadequate, since the doctrine of divine simplicity seems to lead to the denial of the personhood of God, reducing him to an abstract object. Plantinga then turns to nominalism, according to which abstract objects, such as properties, do not exist in any real sense; they are nothing more than designations and do not refer to any objects. Nominalism fails, in Plantinga’s opinion, because it is irrelevant to the real issue, the preservation of God’s absolute control. Finally, in light of the failure of these approaches, one might deny the intuition that abstract objects are necessary or eternal. This position is designated universal possibilism, since it implies that everything is possible for God. Plantinga rejects it as well, since, in his opinion, it simply seems absurd.

However, for Plantinga the bifurcation between the Theistic notion of God as the uncreated creator of all that exists outside himself and the Platonic notion of abstract objects that exist necessarily and eternally is not insurmountable. Plantinga endorses a form of Platonic realism, espousing a conception of properties according to which these abstract objects are a specific type of abstract entity, namely, universals. Plantinga proposes the following solution to the dilemma:

Augustine saw in Plato a vir sapientissimus et eruditissimus (Contra Academicos III, 17); yet he felt obliged to transform Plato’s theory of ideas in such a way that these abstract objects become . . . part of God, perhaps identical with his intellect. It is easy to see why Augustine took such a course, and easy to see why most later medieval thinkers adopted similar views. For the alternative seems to limit God in an important way; the existence and necessity of these things distinct from him seems incompatible with his sovereignty. (Plantinga, 1980, 5)

Plantinga, therefore, concludes that there may be some sense of dependence between God and abstract objects, that these abstract objects depend on God asymmetrically, and that they are the result of God’s intellective activity.

From the preceding overview we see that there exists a tension between the central notion of Traditional Theism, that God exists as the uncreated creator and that all objects existing beyond God have the source of their being in his creative activity, and the central notion of Platonism, that there exists a realm of abstract objects that are uncreated and exist necessarily and eternally. Furthermore, we have seen that proposals range from those that reject altogether the notion that these two distinctive worldviews are reconcilable to those that argue on behalf of their compatibility. (Freddoso, 1983)

6. References and Further Reading

a. Books

  • Aquinas, T. (1948). Summa Theologiae, trans. Fathers of the English Dominican Province. U.S.A.: Christian Classics.
  • Brown, C. (1968). Philosophy and the Christian Faith. Illinois: Intervarsity Press.
    • Provides an examination of the historical interaction of philosophical thought and Christian theology.
  • Campbell, K. (1990). Abstract Particulars. Basil Blackwell Ltd.
    • Provides an in-depth analysis of Abstract Particulars.
  • Davies, B. (2004). An Introduction to the Philosophy of Religion (3rd ed.). New York: Oxford University Press.
    • An excellent introduction to the basic issues in Philosophy of Religion.
  • Gerson, L. P. (1990). Plotinus: The Arguments of the Philosophers. New York: Routledge.
    • Provides an analysis of the development of Platonic philosophy and its incorporation into Christian Theology.
  • Morris, T. (1989). Anselmian Explorations: Essays in Philosophical Theology. Notre Dame: University of Notre Dame Press.
  • Plantinga, A. (1980). Does God Have a Nature? Milwaukee, Wisconsin: Marquette University Press.
    • Discusses the relationship of God to abstract objects.
  • Plantinga, A. (2000). Warranted Christian Belief. New York: Oxford University Press.
    • Explores the intellectual validity of Christian faith.
  • Van Inwagen, P. (1993). Metaphysics. Westview Press.
    • An in-depth exploration of the dimensions of metaphysics.
  • Wolterstorff, N. (1970). On Universals: An Essay in Ontology. Chicago: University of Chicago Press.
    • Explores the nature of Platonic thought and the tenets of Traditional Theism.

b. Articles

  • Bergman, M., Brower, J. E. (2006). “A Theistic Argument against Platonism.” Oxford Studies in Metaphysics, 2, 357-386.
    • Discusses the logical inconsistency between Theism and Platonism by virtue of the aseity dependence doctrine.
  • Brower, J. E. “Making Sense of Divine Simplicity.” Unpublished.
    • Presents an in-depth analysis of the nature of divine simplicity.
  • Freddoso, A. (1983). “Review of Plantinga’s ‘Does God Have a Nature?’.” Christian Scholars Review, 12, 78-83.
    • An excellent and helpful review of Plantinga’s most significant work.
  • Gould, P. (2010). “A Defense of Platonic Theism.” Doctoral dissertation, Purdue University.
    • A defense of Platonic Theism, which seeks to remain faithful to the Theistic tradition.
  • Leftow, B. (1990). “Is God an Abstract Object?” Noûs, 24, 581-598.
    • Strives to demonstrate that the Identity Thesis follows from a basic Theistic belief.
  • Menzel, C. (2001). “God and Mathematical Objects.” In Bradley, J., Howell, R. (Eds.), Mathematics in a Postmodern Age: A Christian Perspective. Eerdmans.
  • Menzel, C. (1987). “Theism, Platonism, and the Metaphysics of Mathematics.” Faith and Philosophy, 4(4), 1-22.
  • Morris, T., Menzel, C. (1986). “Absolute Creation.” American Philosophical Quarterly, 23, 353-362.
    • Seeks to reconcile the divergent metaphysical perspectives utilizing the concept of Theistic Activism
  • Plantinga, A. (1982). “How to be an Anti-Realist.” Proceedings and Addresses of the American Philosophical Association, 56 (1), 47-70.
    • An insightful and helpful discussion of Plantinga’s rejection of contemporary anti-realism and unbridled realism.
  • Ross, J. (1989). “The Crash of Modal Metaphysics.” Review of Metaphysics, 43, 251-79.
    • Addresses quantified modal logic, regarded at one time as promising for metaphysics.
  • Ross, J. (1983). “Creation II.” In A. J. Freddoso (Ed.), The Existence and Nature of God. Notre Dame: University of Notre Dame Press.
  • Van Inwagen, P. (2009). “God and Other Uncreated Things.” In Timpe, K. (Ed.), Metaphysics and God: Essays in Honor of Eleonore Stump, 3-20.
    • Addresses the question regarding whether there is anything other than himself that God has not created.
  • Van Inwagen, P. (2004). “A Theory of Properties.” Oxford Studies in Metaphysics, 1, 107-138.
    • Explores the rationality of belief in abstract objects in general and properties in particular.


Author Information

Eddy Carder
Email: efcarder@pvamu.edu
Prairie View A & M University
U. S. A.

Donald Davidson: Philosophy of Language

Donald Davidson (1917-2003) was one of the most influential analytic philosophers of language during the second half of the twentieth century and the first decade of the twenty-first century. An attraction of Davidson’s philosophy of language is the set of conceptual connections he draws between traditional questions about language and issues that arise in other fields of philosophy, including especially the philosophy of mind, action theory, epistemology, and metaphysics. This article addresses only his work on the philosophy of language, but one should bear in mind that this work is properly understood as part of a larger philosophical endeavor.

It is useful to think of Davidson’s project in the philosophy of language as cleaving into two parts. The first, which commences with his earliest publications in the field (Davidson 1965 and 1967), explores and defends his claim that a Tarski-style theory of truth for a language L, modified and supplemented in important ways, suffices to explain how the meanings of the sentences of a language L  depend upon the meanings of words of L, and thus models a significant part of the knowledge someone possesses when she understands L. In other words, Davidson claims that we can adapt a Tarski-style theory of truth to do duty for a theory of meaning. This claim, which is stronger and more complex than it appears at first reading, is examined in section 1.

The second part of Davidson’s work on language (in articles beginning with Davidson 1973 and 1974) addresses issues associated with constructing the sort of meaning theory he proposes in the first part of his project. A Davidsonian theory of meaning is an empirical theory that one constructs to interpret – that is, to describe, systematize, and explain – the linguistic behavior of speakers one encounters in the field or, simply, in line at the supermarket. Again, this problem turns out to be more complex and more interesting than it first appears. This set of issues is examined in section 2.

Table of Contents

  1. Davidson’s Theory of Meaning
    1. Constraints on a Theory of Meaning
      1. Compositionality
      2. No Meaning Entities
    2. Theories of Truth as Theories of Meaning
    3. Meaning and Truth
    4. Formal and Natural Languages
      1. Indexicals
      2. Indirect Discourse
  2. Davidson’s Theory of Interpretation
    1. Radical Translation
    2. Radical Interpretation
      1. Principles of Charity: Coherence
      2. Principles of Charity: Correspondence
    3. Language without Conventions
    4. Indeterminacy of Interpretation
    5. Meaning and Interpretation
  3. References and Further Reading
    1. Anthologies of Davidson’s Writings
    2. Individual Articles by Davidson
    3. Primary Works by other Authors
    4. Secondary Sources
      1. Anthologies
      2. Critical Discussions of Davidson’s Philosophy

1. Davidson’s Theory of Meaning

Davidson takes the notion of a theory of meaning as central, so it is important to be clear at the outset what he means by the term. Starting with what he does not mean, it is no part of his project to define the concept of meaning in the sense in which Socrates asks Euthyphro to define piety. Davidson writes that it is folly to try to define the concept of truth (Davidson, 1996), and the same holds for the closely related concept of meaning: both belong to a cluster of concepts so elementary that we should not expect there to be simpler or more basic concepts in terms of which they could be definitionally reduced. Nor does Davidson ask about meaning in such a way that we would expect his answer to take the form,

the meanings of a speaker’s words are such-and-suches.

Locke, who says that meanings of a speaker’s words are ideas in her mind, has a theory of meaning in this sense, as do contemporary philosophers of language who identify meanings with the contents of certain beliefs or intentions of the speaker.

Davidson, therefore, pursues neither a theory of what meaning is nor a theory of what meanings are. Rather, for Davidson a theory of meaning is a descriptive semantics that shows how to pair a speaker’s statements with their meanings, and it does this by displaying how semantical properties or values are distributed systematically over the expressions of her language; in short, it shows how to construct the meanings of a speaker’s sentences out of the meanings of their parts and how those parts are assembled. As a first approximation, one can think of a Davidsonian theory of meaning for the language L as a set of axioms that assign meanings to the lexical elements of the language and which, together with rules for constructing complex expressions of L, imply theorems of the form,

(M)  S means m,

for each sentence S of the language and m its meaning. If an observer of A’s linguistic behavior has such an “M-theorem” for each of his sentences, then she can explain and even make predictions about S‘s behavior; conversely, we can think of the M-theorems as expressing a body of linguistic knowledge that A possesses and which underwrites his linguistic competence.

a. Constraints on a Theory of Meaning

Much of the interest and originality of Davidson’s work on theories of meaning comes from his choice of Tarski-style theories of truth to serve as the model for theories of meaning. This choice is not obvious, though as early as 1935 Quine remarks that “in point of meaning… a word may be said to be determined to whatever extent the truth or falsehood of its contexts is determined” (Quine 1935, p. 89). It is not obvious because meaning is a richer concept than truth: for example, “snow is white” and “grass is green” agree in both being true, but they differ in meaning. As Davidson sees the matter, though, only theories of truth satisfy certain reasonable constraints on an adequate theory of meaning.

i. Compositionality

The first of these constraints is that a theory of meaning should be compositional. The motivation here is the observation that speakers are finitely endowed creatures, yet they can understand indefinitely many sentences; for example, you never before heard or read the first sentence of this article, but, presumably, you had no difficulty understanding it. To explain this phenomenon, Davidson reasons that language must possess some sort of recursive structure. (A structure is recursive if it is built up by repeatedly applying one of a set of procedures to a result of having applied one of those procedures, starting from one or more base elements.) For unless we can treat the meaning of every sentence of a language L  as the result of a speaker’s or interpreter’s performing a finite number of operations on a finite (though extendable) semantical base, L  will be unlearnable and uninterpretable: no matter how many sentences I master, there will always be others I do not understand. Conversely, if the meaning of each sentence is a product of the meanings of its parts together with the ways those parts are combined, then we can see “how an infinite aptitude can be encompassed by finite accomplishments” (Davidson 1965, p. 8). If every simple sentence of English results from applying a rule to a collection of lexical elements, for example, Combine a noun phrase and an intransitive verb (“Socrates” + “sits” ⇒ “Socrates sits”); and if every complex sentence results from applying a rule to sentences of English, such as Combine two sentences with a conjunction (“Socrates sits” + “Plato stands” ⇒ “Socrates sits and Plato stands”), then although human beings have finite cognitive capacities they can understand indefinitely many sentences. (“Socrates sits,” “Socrates sits and Plato stands,” “Socrates sits and Plato stands and Aristotle swims,” and so forth.)

This, then, gives us the requirement that a theory of meaning be compositional in the sense that it shows how the meanings of complex expressions are systematically “composed” from the meanings of simpler expressions together with a list of their modes of significant combination.
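To make the recursive picture concrete, here is a minimal sketch in Python (an illustration only, not Davidson’s own formalism), using the two rules just mentioned: a finite lexicon and two combination rules assign truth values to unboundedly many sentences. The toy “world” and vocabulary are invented for the example.

```python
# A toy compositional semantics: finitely many lexical facts plus two
# recursive combination rules interpret indefinitely many sentences.

# The "world": which individuals satisfy which intransitive verbs.
WORLD = {
    ("Socrates", "sits"): True,
    ("Plato", "stands"): True,
    ("Socrates", "stands"): False,
}

def interpret(tree):
    """Evaluate a parse tree recursively, mirroring the two rules:
    Rule 1 combines a noun phrase with an intransitive verb;
    Rule 2 conjoins two sentences with 'and'."""
    if tree[0] == "pred":                # Rule 1: NP + intransitive verb
        _, np, verb = tree
        return WORLD[(np, verb)]
    if tree[0] == "and":                 # Rule 2: sentence 'and' sentence
        _, left, right = tree
        return interpret(left) and interpret(right)
    raise ValueError(f"unknown construction: {tree[0]!r}")

# "Socrates sits and Plato stands" -- and, by reapplying Rule 2,
# arbitrarily longer conjunctions, all from the same finite base.
sentence = ("and", ("pred", "Socrates", "sits"), ("pred", "Plato", "stands"))
print(interpret(sentence))               # True
```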

ii. No Meaning Entities

Davidson’s second adequacy constraint on a theory of meaning is that it avoid assigning objects (for example, ideas, universals, or intensions) to linguistic expressions as their meanings. In making this demand, Davidson does not stray into a theory of what meanings are; his point, rather, is that “the one thing meanings do not seem to do is oil the wheels of a theory of meaning… My objection to meanings in the theory of meaning is that… they have no demonstrated use” (Davidson 1967, p. 20).

To see this, consider that traditional logicians and grammarians divided a sentence into a subject term and a predicate term, for example, “Socrates sits” into the subject term “Socrates” and the predicate term “sits,” and assigned to the former as its meaning a certain object, the man Socrates, and to the latter a different sort of object, the universal Sitting, as its meaning. This leaves obscure, however, how the terms “Socrates” and “sits,” or the things Socrates and Sitting, combine to form a proposition, as opposed to, say, the terms “Socrates” and “Plato” (or the objects Socrates and Plato) which cannot combine to form a proposition. It also leaves obscure what role the copula “is” plays in sentences such as “Socrates is wise.” Does “is” refer to a third object that somehow “binds” Socrates to Wisdom? But how does this work? Or does “is” represent some relation? But what relation?

One might solve these difficulties faced by traditional accounts by assigning to different types of expressions different types of entities as their meanings, where these types differ in ways that make the entities amenable to combining in patterns that mirror the ways their corresponding expressions combine. If we read Frege as a Platonist, then his mature semantics is such a theory, since it divides expressions and their meanings, or Bedeutungen, into two types: “saturated” or “complete” expressions and meanings, and “unsaturated” or “incomplete” expressions and their meanings (see, for example, Frege, 1891). The proper noun “Annette” is an expression of the first type, and it means a particular object of the first type, the woman Annette; while the function expression “the father of ( )” belongs to the second type and means a certain nonspatiotemporal entity of the second type, namely, the function that maps objects to their fathers. (The open parentheses mark the argument place of the function expression, which is to be filled with a saturated expression such as “Annette,” and they line up with a corresponding empty position in the function itself.) There is also the semantical rule that filling the parentheses of the expression, “the father of ( ),” yields a complete expression that means the father of whoever is meant by the saturated expression that fills the parentheses: Annette’s father if “Annette” fills the parentheses, Annette’s father’s father if “the father of Annette” fills the parentheses, and so forth. But now one has to ask, what is the point of our having said that the expression, “the father of ( )” means a certain entity? All the work is being done by the rule we have formulated, and none by the ontology.

There are other methodological considerations that lie behind Davidson’s hostility toward doing semantics by assigning objects and other sorts of entities to words as their meanings. People acquire a language by observing the situated behavior of other people, that is, by observing other people speaking about objects and occurrences in their shared environment; in turn, when they speak, what they mean by their words generally reflects the causes that prompt them to utter those words. These causes are usually mundane sorts of natural things and events, such as other people, grass, people mowing the grass, and the like. This picture of meaning is vague, but it suggests that the psychological achievement of understanding or being able to produce a sentence like “grass is green” rests on the same (or very nearly the same) natural abilities as knowing that grass is green; and it suggests to Davidson that theories of meaning should eschew the esoteric objects and relations that many contemporary philosophies of language presuppose, such as intensions, possible worlds, transworld identity relations, and so forth. By avoiding such things, Davidson positions theories of meaning more closely to the epistemology of linguistic understanding, in the sense of an account of the way that a speaker’s actions and other events are evidence for an interpreter’s attributing meaning to the speaker’s words.

b. Theories of Truth as Theories of Meaning

To begin to see what a Davidsonian theory of meaning looks like, recall schema M,

(M)  S means m,

where sentence S belongs to language L and m is its meaning. Recasting this in a more instructive version,

(M′)  S means that p,

we replace “m” in schema M by the schematic variable “p” in schema M′. In the latter, the schema is filled out by replacing “p” with a sentence in the interpreter’s background or metalanguage that translates the target or object language sentence S. For example, a theory of meaning for German constructed by an English-speaking interpreter might include as an instance of schema M′ the theorem,

“Schnee ist weiss” means that snow is white,

where “Schnee ist weiss” replaces “S” and “snow is white” replaces “p.”

Now, schema M′ is more instructive than its predecessor because while the “m” in schema M names an object that S means – in violation of Davidson’s second constraint – the expression “p” holds the place for a sentence (for example, “snow is white”) that the interpreter uses to “track” the meaning of S (“Schnee ist weiss”) without reifying that meaning, that is, without treating that meaning as an object. The sentence that replaces “p” tracks the meaning of S in the sense that schema M′ correlates S (again, “Schnee ist weiss”) with the extra-linguistic condition that p (that snow is white) which the interpreter describes using her own sentence (“snow is white.”)

Schema M′ points the way forward, but we are not there yet. Davidson is not really interested in constructing theories of meaning in the sense of filling out schema M′ for every sentence of German or Urdu; rather, he theorizes about constructing theories of meaning to gain insight into the concept of meaning. And in this regard, schema M′ comes up short: it relies on the relation “means that,” which is essentially synonymy across languages and thus as much in need of explication as meaning itself. What Davidson is really interested in is giving an explication, in Carnap’s sense (Carnap 1947, pp. 7-8), of an obscure explicandum, meaning, by means of a clear and exact explicatum, and he finds his explicatum in Tarski’s semantic theory of truth.

The semantic theory of truth is not a metaphysical theory of truth in the way that the correspondence theory of truth is. That is, the semantic theory of truth does not tell us what truth is; rather, it defines a predicate that applies to all and only the true sentences of a specified language (technically, true-in-L) by showing how the truth-conditions of a sentence of the language depend on the sentence’s internal structure and certain properties of its parts. This should sound familiar: roughly, the semantic theory of truth does for truth what Davidson wishes to do for meaning. Davidson therefore replaces schema M′ with Tarski’s schema T:

(T)  S is true if and only if p.

Schema T sits at the center of Tarski’s project. A formally adequate (that is, internally consistent) definition of truth for a language L is, in addition, materially adequate if it applies to all and only the true sentences of L; Tarski shows that an axiomatic theory θ meets this condition if it satisfies what he calls Convention T, which requires that θ entail for each sentence S of L  an instance of schema T. The idea is that the axioms of θ supply both interpretations for the parts of S, for example,

(A.i) “Schnee” means snow,

and

(A.ii)  an object a satisfies the German expression “x ist weiss” if and only if a is white,

and rules for forming complex German expressions from simpler ones, such as that

(A.iii)  “Schnee” + “x ist weiss” ⇒ “Schnee ist weiss.”

Together these axioms imply instances of schema T, for example,

“Schnee ist weiss” is true if and only if snow is white.

More precisely, an internally consistent theory of truth θ for a language L meets Convention T if it implies for each S of L an instance of schema T in which “p” is replaced by a sentence from the metalanguage that translates S. Clearly, such a theory will “get it right” in the sense that the T-sentences (that is, the instances of schema T) that θ implies do state truth conditions for the sentences of the object language.
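As a rough illustration of how such axioms work together, consider the following sketch in Python. It is a toy under invented vocabulary, not Tarski’s or Davidson’s formal apparatus: lexical axioms in the style of (A.i) and (A.ii) plus one formation rule in the style of (A.iii) mechanically generate T-sentence pairings for a tiny fragment of German.

```python
# Toy axioms from which T-sentence pairings are derived mechanically.

REFERENCE = {"Schnee": "snow", "Gras": "grass"}        # cf. axiom (A.i)
SATISFACTION = {"ist weiss": "is white",               # cf. axiom (A.ii)
                "ist gruen": "is green"}

def t_sentence(subject, predicate):
    """cf. axiom (A.iii): concatenate subject and predicate to form the
    object-language sentence, and pair it with the metalanguage
    condition that the lexical axioms assign to its parts."""
    object_sentence = f"{subject} {predicate}"
    condition = f"{REFERENCE[subject]} {SATISFACTION[predicate]}"
    return f'"{object_sentence}" is true if and only if {condition}.'

print(t_sentence("Schnee", "ist weiss"))
# "Schnee ist weiss" is true if and only if snow is white.
print(t_sentence("Gras", "ist gruen"))
# "Gras ist gruen" is true if and only if grass is green.
```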

Now, Davidson’s claim is not that a Tarski-style theory of truth in itself is a theory of meaning; in particular, he remarks that a T-sentence cannot be equated with a statement of a sentence’s meaning. At best, a Tarski-style theory of truth is a part of a theory of meaning, with additional resources being brought into play.

c. Meaning and Truth

Notice that Tarski’s Convention T employs the notion of translation, or synonymy across languages, and so a Tarski-style theory of truth cannot, as it stands, supply the explanans Davidson seeks. The underlying point, which Davidson acknowledges “only gradually dawned on me” (1984, p. xiv), is that Tarski analyzes the concept of truth in terms of the concept of meaning (or synonymy), while Davidson’s project depends on making the opposite move: he explains the notion of meaning in terms of truth.

Davidson, therefore, dispenses with translation and rewrites Convention T to require that

an acceptable theory of truth must entail, for every sentence s of the object language, a sentence of the form: s is true if and only if p, where “p” is replaced by any sentence that is true if and only if s is. Given this formulation, the theory is tested by the evidence that T-sentences are simply true; we have given up the idea that we must also tell whether what replaces ‘p’ translates s. (Davidson 1973, p. 134)

Thus, where Tarski requires that “p” translate S, Davidson substitutes the much weaker criterion that the T-sentences “are simply true.”

But Davidson’s weakened Convention T is open to the following objection. Suppose there is a theory of truth for German, θ1, that entails the T-sentence,

(T1)  “Schnee ist weiss” is true if and only if snow is white.

Suppose, further, that there is a second theory of truth for German, θ2, that is just like θ1 except that in place of (T1) it entails the T-sentence,

(T2)  “Schnee ist weiss” is true if and only if grass is green.

A theory of truth that entails (T2) is clearly false, but θ1 satisfies Davidson’s revised Convention T if and only if θ2 also satisfies it.

Here is why. The sentences “snow is white” and “grass is green” both happen to be true, and hence the two sentences are materially equivalent, that is,

Snow is white if and only if grass is green.

(Sentences are materially equivalent if they contingently have the same truth-value; sentences are logically equivalent if they necessarily have the same truth-value.) But since they are materially equivalent, it turns out that:

(T1) is true if and only if (T2) is true.

Therefore, all the T-sentences of θ1 are true if and only if all the T-sentences of θ2 are true, and thus θ1 satisfies Davidson’s revised Convention T if and only if θ2 does. The root of the problem is that truth is too coarse a filter: it cannot distinguish between materially equivalent sentences that differ in meaning.
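The force of the objection can be seen in miniature in the following sketch (stipulated truth values, for illustration only): because “snow is white” and “grass is green” happen to have the same truth value, the two rival T-sentences come out true together, so truth alone cannot separate θ1 from θ2.

```python
# Stipulated actual-world truth values for the two metalanguage sentences.
snow_is_white, grass_is_green = True, True

# The object-language sentence is true exactly when snow is white.
schnee_ist_weiss = snow_is_white

# The two rival T-sentences, read as material biconditionals:
t1 = (schnee_ist_weiss == snow_is_white)    # theta-1's T-sentence
t2 = (schnee_ist_weiss == grass_is_green)   # theta-2's T-sentence

print(t1, t2)       # True True: both T-sentences are "simply true"
print(t1 == t2)     # True: the weakened Convention T cannot tell them apart
```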

Davidson has a number of responses to this objection (in Davidson 1975). He points out that someone who knows that θ is a materially adequate theory of truth for a language L knows more than that its T-sentences are true. She knows the axioms of θ, which assign meaning to the lexical elements of L, the words and simple expressions out of which complex expressions and whole sentences are composed; and she knows that these axioms imply the T-sentence correlations between object language sentences (“Schnee ist weiss”) and their interpreting conditions (that snow is white). Thus, someone who knows that θ is a materially adequate theory of truth for a language L knows a systematic procedure for assigning to the sentences of L their truth-conditions, making one’s grasp of a theory of truth-cum-meaning a holistic affair: knowing the T-sentence for any one object language sentence is tied to knowing the T-sentences for many object language sentences. (For example, knowing that “Schnee ist die Farbe der Wolken” is true if and only if snow is the color of clouds, and that “Schnee ist weiss” is true if and only if snow is white, is tied to knowing that “Wolken sind weiss” is true if and only if clouds are white.) In this way, although Davidson’s version of Convention T – stated in terms of truth rather than translation – does not prima facie filter out theories like θ2, such theories will raise red flags as deviant assignments (such as grass to “Schnee”) ramify through the language and interpreters consider the evidence of speakers pointing to snow and uttering, “Das ist Schnee!”

It matters, too, that the T-sentences of a Davidsonian theory of truth-cum-meaning are laws of an empirical theory and not mere accidental generalizations. The important difference here is that as empirical laws and not simple statements of chance correlations, T-sentences support counterfactual inferences: just as it is true that a certain rock would have fallen at 32 ft/sec² if it had been dropped, even if it was not, it is also true that a German speaker’s utterance of “Schnee ist weiss” would be true if and only if snow is white, even in a world where snow is not white. (But in a world where grass is green, and snow is not white, it is not the case that a German speaker’s utterance of “Schnee ist weiss” would be true if and only if grass is green.)

This means that there is a logically “tighter” connection between the left- and right-hand sides of the T-sentences of materially adequate theories. This logically “tighter” connection underwrites the role that T-sentences have in constructing explanations of speakers’ behavior and, in turn, is a product of the nature of the evidence interpreters employ in constructing Davidsonian theories of truth-cum-meaning. An interpreter witnesses German speakers uttering “Schnee ist weiss!” while indicating freshly fallen snow; the interpreter singles out snow’s being white as the salient feature of the speaker’s environment; and she infers that snow’s being white causes him to hold the sentence, ‘Schnee ist weiss!,” true. Thus, the connection between snow’s being white and the T-sentence is more than a chance correlation, and this gets expressed by there being something stronger than an extensional relation between a statement of the evidence and the theory.

This has often been taken to be a fatal concession, inasmuch as Davidson is understood to be committed to giving an extensional account of the knowledge someone possesses when she understands a language. However, Davidson denies that he is committed to giving an extensional account of an interpreter’s knowledge; all he is after is formulating the theory of truth-cum-meaning itself in extensional terms, and he allows that ancillary knowledge about that theory may involve concepts or relations that cannot be expressed in extensionalist terms. Thus, it is not an objection to his project that an interpreter’s background logic, for example, in her understanding of her own theory, should involve appeal to intensional notions.

d. Formal and Natural Languages

Tarski restricts his attention to investigating the semantical properties of formal languages, whereas Davidson’s interest lies in the investigation of natural languages. Formal languages are well-behaved mathematical objects whose structures can be exactly and exhaustively described in purely syntactical terms, while natural languages are anything but well-behaved. They are plastic and subject to ambiguity, and they contain myriad linguistic forms that resist, to one degree or another, incorporation into a theory of truth via the methods available to the logical semanticist. Davidson has written on the problems posed by several of these linguistic forms (in Davidson 1967a, 1968, 1978, and 1979) including indexicals, adverbial modifiers, indirect discourse, metaphor, mood, and the propositional attitudes.

i. Indexicals

It is instructive to see how Davidson handles indexicals. The key insight here is that truth is properly a property of the situated production of a sentence token by a speaker at a certain time; that is, it is a property of an utterance, not of a sentence. We therefore define an utterance to be an ordered triple consisting of a sentence token, a time, and a speaker. Truth is thus a property of such a triple, and in constructing a Tarski-style theory of truth for a language L the goal is to have it entail T-theorems such as:

“Das ist weiss” is true when spoken by x at t if and only if the object indicated by x at t is white.

This T-theorem captures two distinct indexical elements. First, the German pronoun “das” refers to the object the speaker indicates when she makes her utterance; we model its contribution to the utterance’s truth-condition by explicitly referring on the right side of the T-theorem to that object. Second, the German verb “ist” is conjugated in the present indicative tense and refers to the time the speaker performs her utterance. We represent this indexical feature by repeating the time variable “t” on both sides of the T-theorem. Not all sentences contain explicit indexicals (“that,” “she,” “he,” “it,” “I,” “here,” “now,” “today,” and so forth), but unless a sentence is formulated in the so-called “eternal present” (for example, “5 plus 7 is 12”), it contains an indexical element in the tense of its main verb.
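The relativization of truth to utterances can be sketched as follows (a minimal illustration with invented names and contexts, not Davidson’s formalism): an utterance is modeled as the ordered triple of a sentence token, a time, and a speaker, and the T-theorem for “Das ist weiss” consults what the speaker indicates at the time.

```python
from dataclasses import dataclass

# An utterance as the ordered triple of a sentence token, a time,
# and a speaker; truth is a property of the triple, not the sentence.
@dataclass(frozen=True)
class Utterance:
    sentence: str
    time: int
    speaker: str

# Stand-in for the demonstrative context: what each speaker indicates when.
INDICATED = {("Anna", 1): "snow", ("Anna", 2): "grass"}
WHITE_THINGS = {"snow", "milk"}

def das_ist_weiss_true(u):
    """T-theorem: "Das ist weiss" is true when spoken by x at t iff
    the object indicated by x at t is white."""
    assert u.sentence == "Das ist weiss"
    return INDICATED[(u.speaker, u.time)] in WHITE_THINGS

print(das_ist_weiss_true(Utterance("Das ist weiss", 1, "Anna")))  # True
print(das_ist_weiss_true(Utterance("Das ist weiss", 2, "Anna")))  # False
```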

ii. Indirect Discourse

The philosophy of language is thick with proposals for treating the anomalous behavior of linguistic contexts involving intensional idioms, including especially indirect discourse and propositional attitude constructions. In such contexts, familiar substitution patterns fail; for example, it is true that

(1)  The Earth moves,

and that

(2)  The Earth = the planet on which D.D. was born in 1917.

By the Principle of Extensionality,

Co-referring terms can be exchanged without affecting the truth-value of contexts in which those terms occur,

we can infer that

The planet on which D.D. was born in 1917 moves.

However, if we report that Galileo said that (1), that is,

(3)  Galileo said that the Earth moves,

we are blocked from making the substitution,

(4)  Galileo said that the planet on which D.D. was born in 1917 moves,

for surely Galileo did not say that, since he died nearly three hundred years before D.D. was born. (2) and (3) are true, while (4) is false; hence (2) and (3) do not entail (4), and the Principle of Extensionality fails for “says that” contexts.

Davidson’s solution to this problem is as ingenious as it is controversial, for it comes at the price of some grammatical novelty. He argues that the word “that” that occurs in (3) is a demonstrative pronoun and not, as grammar books tell us, a relative pronoun; the direct object of “said” is this demonstrative, and not the subordinate noun clause “that the Earth moves.” In fact, under analysis this noun clause disappears and becomes two separate expressions: the demonstrative “that,” which completes the open sentence “Galileo said x,” and the grammatically independent sentence “The Earth moves.” This new sentence is the demonstrative’s referent; or, rather, its referent is the speaker’s utterance of the sentence, “The Earth moves,” which follows her utterance of the sentence “Galileo said that.” Thus Davidson proposes that from a logical point of view, (3) is composed of two separate utterances:

(5)  Galileo said that. The Earth moves.

In other words, the grammatical connection between “The Earth moves” and “Galileo said that” is severed and replaced by the same relationship that connects snow and my pointing to snow and saying “That is white.”

More properly, (5) should be:

(6)  Galileo said something that meant the same as my next utterance. The Earth moves.

This qualification is needed, since the utterance to which “that” refers in (5) is my utterance of a sentence in my language, which I use to report an utterance Galileo made in his language. As Davidson sometimes puts it, Galileo and I are samesayers: what he and I mean, when he performs his utterance and I perform mine, is the same. Finally, a careful semantical analysis of (6) should look something like this:

(7)  There exists some utterance x performed by Galileo, and x means the same in Galileo’s idiolect as my next utterance means in mine. The Earth moves.

Now in my utterance, “the Earth” can be exchanged for “the planet on which D.D. was born in 1917” because as I use them both expressions refer to the same object, namely, the Earth. Thus, the Principle of Extensionality is preserved.

Davidson proposes that this account can be extended to treat other opaque constructions in the object language, such as the propositional attitudes (Davidson 1975) and entailment relations (Davidson 1976). Looking at the former, the idea is that by analogy with (3), (5), and (6),

(8)  Galileo believed that the Earth moves,

should be glossed as

(9)  Galileo believed that. The Earth moves,

or, better,

(10)  Galileo believed something that had the same content as my next utterance. The Earth moves.

A question, then, is what is this something that Galileo believed? In the analysis of indirect discourse, my sentence (“The Earth moves”) tracks an actual utterance of Galileo’s (“Si muove”), but Galileo had many beliefs he never expressed verbally; so it cannot be an utterance of Galileo’s. Alternatively, one might treat thoughts as inner mental representations and belief as a relation between thinkers and thoughts so conceived; then what has the same content as my utterance of my sentence, “The Earth moves,” is Galileo’s mental representation in his language of thought. However, Davidson argues elsewhere (Davidson 1989) that believing is not a relation between thinker and mental objects; this point is important to the position he stakes out in the internalism/externalism debate in the philosophy of mind.

Instead, Davidson proposes (in Davidson 1975) that (3) is to (6) as (8) is to:

(11)  Galileo would be honestly speaking his mind were he to say something that had the same content as my next utterance. The Earth moves.

Galileo never actually said something that means the same as my sentence, “The Earth moves,” but had he spoken his mind about the matter, he would have. (Historically, of course, Galileo did say such a thing, but let us suppose that he did not.) This analysis, however, imports a counterfactual condition into the T-sentences of an interpreter’s theory for Galileo’s words and thoughts, which Davidson wants to avoid. Finally, in the same article Davidson seems to suggest that we treat Galileo’s thought more directly as a “belief state,” which might be glossed as:

(12)  Galileo was in some belief state that had the same content as my next utterance. The Earth moves.

Intuitively, this seems right: what I track with my utterance is precisely the content of Galileo’s belief. This leaves open, however, what “belief states” are such that they can be quantified over (as in (12)) and have contents that can be tracked by utterances. This, though, is a problem for the philosophy of mind rather than the philosophy of language, and there is no reason to suppose that it affects Davidson’s proposal more than other accounts of the semantics of the propositional attitudes.

2. Davidson’s Theory of Interpretation

Consideration of the exigencies of interpreting a person’s speech behavior yields additional constraints on theories of truth-cum-meaning, and it also provides deep insights into the nature of language and meaning. Davidson examines interpretation and the construction of theories of meaning by drawing extensively on the work of his mentor, W. V. Quine.

a. Radical Translation

In Quine’s famous thought experiment of radical translation, we imagine a “field linguist” who observes the verbal behavior of speakers of a foreign language, and we reflect on her task of constructing a translation manual that maps the speakers’ language onto her own. The translation task is radical in the sense that Quine assumes she has no prior knowledge whatsoever of the speakers’ language or its relation to her home language. Hence her only evidence for constructing and testing her translation manual consists of her observations of the speakers’ behavior and its relation to their environment.

The linguist’s entering wedge into a foreign language is provided by those of the speakers’ utterances that seem to bear directly on conspicuous features of the situation she shares with her subject. To take Quine’s well-known example, suppose a rabbit scurries within the field of view of both the linguist and an alien speaker, who then utters, “Gavagai!” With this as her initial evidence, the linguist sifts through the features of the complex situation that embeds his speech behavior; she reasons that were she in the subject’s position of seeing a rabbit, she would be disposed to assert, “Lo, a rabbit!” Supposing, then, that the alien speaker’s verbal dispositions are related to his environment as her verbal dispositions are related to her own, she tentatively translates “Gavagai!” with her own sentence, “Lo, a rabbit!”

b. Radical Interpretation

Taking his inspiration from Quine, Davidson holds that a radical interpreter begins with observations such as:

(13)  A belongs to a community of speakers of a common language, call it K, and he holds “Gavagai!” true on Saturday at noon, and there is a rabbit visible to A on Saturday at noon,

and, eliciting additional evidence from her observations of K-speakers’ situated verbal behavior, she infers that

(14)  If x is a K-speaker, then x holds “Gavagai!” true at t if and only if there is a rabbit visible to x at t.

This inference is subject to the vagaries that attend empirical research, but having gathered an adequate sample of instances of K-speakers holding “Gavagai!” true when and only when rabbits cross their paths, she takes (14) to be confirmed. In turn, she takes (14) as evidence that (partly) confirms the following T-sentence of a Tarski-style truth theory for K:

(15)  “Gavagai!” is true when spoken by x at t if and only if there is a rabbit visible to x at t.
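
The general pattern behind (15) is Tarski’s Convention T, which may be stated schematically as follows (this is the standard textbook formulation of the schema, not a quotation from Davidson):

\[
s \text{ is true in } L \leftrightarrow p
\]

where “s” is replaced by a structural description of an object-language sentence and “p” by a metalanguage sentence giving its truth conditions. (15) instantiates this schema, relativized to speakers and times, with “Gavagai!” as the described sentence and “there is a rabbit visible to x at t” supplying the truth conditions.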

Note that in reconstructing the language K, Davidson’s linguist does not mention sentences of her home language. Of course, she uses her own sentences in making these assignments, but her sentences are directed upon extra-linguistic reality. Thus, unlike a Quinean radical translator, who does mention sentences of his home language, a Davidsonian radical interpreter adopts a semantical stance: she relates speakers’ sentences to the world by assigning them objective truth conditions describing extra-linguistic situations and objects. It is in this sense that a Davidsonian linguist is an interpreter, and Davidson calls the project undertaken by his linguist the construction of a theory of interpretation.

i. Principles of Charity: Coherence

Like any empirical scientist, a Davidsonian radical interpreter relies on methodological assumptions to move from her observations (13) to her intermediate conclusions (14) and to the final form of her theory (15). Davidson identifies as the radical interpreter’s two most important methodological assumptions the Principle of (Logical) Coherence and the Principle of Correspondence. Taken together these canons of interpretation are known, somewhat misleadingly, as the Principle(s) of Charity.

Since a Davidsonian theory of interpretation is modeled on a Tarski-style theory of truth, one of the first steps an interpreter takes is to look for a coherent structure in the sentences of alien speakers. She does this by assuming that a speaker’s behavior satisfies strong, normative constraints, namely, that he reasons in accordance with logical laws. Making this assumption, she can diagram the logical patterns in a speaker’s verbal behavior and leverage the evidence she gleans from her observations into a detailed picture of the internal structure of his language.

Assuming that a speaker reasons in accordance with logical laws is neither an empirical hypothesis about a subject’s intellectual capacities nor an expression of the interpreter’s goodwill toward her subject. Satisfying the norms of rationality is a condition on speaking a language and having thoughts, and hence failing to locate sufficient consistency in someone’s behavior means there is nothing to interpret. The assumption that someone is rational is a foundation on which the project of interpreting his utterances rests.

ii. Principles of Charity: Correspondence

The problem the radical interpreter faces is that by hypothesis she does not know what a speaker’s sentences mean, and neither does she have direct access to the contents of his propositional attitudes, such as his beliefs or desires. Both of these factors bear on making sense of his verbal behavior, however, for which sentences a speaker puts forward as true depends simultaneously on the meanings of those sentences and on his beliefs. For example, a K-speaker utters “Gavagai!” only if (α) the sentence is true if and only if a rabbit presents itself to him, and (β) he believes that a rabbit presents itself to him.

A speaker’s holding a sentence true is thus (as Davidson put it) a “vector of two forces” (Davidson 1974a, p. 196): what meanings his words have and what he believes about the world. The interpreter thus faces the problem of too many unknowns, which she solves by performing her own thought experiment: she projects herself into her subject’s shoes and assumes that he does or would believe what she, were she in his position, would believe. This solves the problem of her not knowing what the speaker believes, since she knows what she would believe were she in his situation, and hence she knows what her subject does believe if he believes what she thinks he ought to believe. The Principle of Correspondence is the methodological injunction to affirm the antecedent of that conditional: to assume that the speaker believes what, by the interpreter’s lights, he ought to believe.
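
The evidential situation can be put schematically (a gloss on the preceding paragraph, not Davidson’s own formalism):

\[
x \text{ holds } s \text{ true} \;\Leftarrow\; \big( s \text{ means, in } x\text{'s language, that } p \big) \wedge \big( x \text{ believes that } p \big)
\]

Holding-true is observable, while meaning and belief are both unknown; the interpreter thus has, so to speak, one equation and two unknowns. The Principle of Correspondence fixes the belief component, allowing her to solve for meaning.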

The Principle of Correspondence applies especially to speakers’ observation sentences, for example, “There goes a rabbit!” These are the points of immediate causal contact between the world shared by speakers and interpreters, on the one hand, and the utterances and attitudes of speakers, on the other. Where there is greater distance between cause (features of the speaker’s situation) and effect (which sentences the speaker puts forward as true), there are extra degrees of freedom in explaining how the speaker might reasonably hold true something that the interpreter believes is false.

Davidson sometimes formulates the Principle of Correspondence in terms of the interpreter’s maximizing agreement between herself and the speakers she interprets, but this is misleading. An interpreter needs to fill out the contents of the speaker’s attitudes if her project is to move forward, and she does this by attributing to him those beliefs that allow her to tell the most coherent story about his behavior. She thus routes attributions of belief through whatever she already knows about his attitudes and values. An interpreter will still export to her subject a great deal of her own world view, but if there are grounds for attributing to him certain beliefs that she takes to be false, then she does so when what she knows about him makes that more reasonable than not. In this she makes use of whatever she knows about the speaker’s personal history and psychology.

c. Language without Conventions

Davidson typically presents radical interpretation as targeting a community’s language, but in his more careful statements he argues that the focus of interpretation is the speech behavior of a single individual over a given stretch of time (Davidson 1986). One reason for this is that Davidson denies that conventions shared by members of a linguistic community play any philosophically interesting role in an account of meaning. Shared conventions facilitate communication, but they are in principle dispensable. So long as an audience discerns the intention behind a speaker’s utterance (for example, that he intends his utterance of “Schnee ist weiss” to mean that snow is white), his utterance means that snow is white, regardless of whether speaker and audience share the practice of using “Schnee ist weiss” to mean that snow is white. This point is implicit in the project of radical interpretation.

This implies, according to Davidson, that what we ordinarily think of as a single natural language, such as German or Urdu, is like a smooth curve drawn through the idiolects of different speakers. It also underwrites Davidson’s claim that “interpretation is domestic as well as foreign” (Davidson 1973, p. 125), that is, there is no essential difference between understanding the words spoken by radically alien speakers and those spoken by our familiars; there is only the practical difference that one has more and better information about the linguistic behavior and propositional attitudes of those with whom one has more contact.

d. Indeterminacy of Interpretation

Davidson, following Quine, argues that although the methodology of radical interpretation (or translation, for Quine) winnows the field of admissible candidates, it does not identify a unique theory that best satisfies its criteria. At the end of the day there will be competing theories that are mutually inconsistent but do equally well in making sense of a speaker’s utterances, and in this sense interpretation (and translation) is indeterminate.

Quine draws from this the skeptical conclusion that there is “no fact of the matter” when it comes to saying what speakers or their words mean. Davidson stops short of Quine’s skepticism, and he draws a different moral from the indeterminacy arguments. (In this section we emphasize Davidson’s agreements with Quine; in the following section, his disagreement.)

Here is how indeterminacy infects the task of the radical translator. She begins with a speaker’s situated observation sentences, and she finds her first success in correlating a sentence SH of her home language with a sentence SO of her subject’s language. She hypothesizes that SH and SO are synonymous, and this is her wedge into the speaker’s language. Next, the translator makes hypotheses about how to segment the speaker’s observation sentences into words (or morphemes) and about how to line these up with words of her own language. For example, she may identify the initial vocable of “Gavagai!,” “ga,” with the demonstrative “there” in her home language, and “gai” with the common noun “rabbit.” This permits her to puzzle out the translation of nonobservation sentences that share some of their parts with observation sentences. (In the Davidsonian version, these correlations take the form of interpretations rather than translations, but the point is the same.)

These additional hypotheses are essential to her project, but they are not backed by any direct behavioral evidence. They are confirmed just so long as the translations or interpretations they warrant are consistent with the linguist’s evidence for her evolving theory; however, that evidence has the form of information about the translation or interpretation of complete sentences. Indeterminacy arrives on the scene, then, because different sets of hypotheses and the translations or interpretations they imply do equally well in making sense of a speaker’s sentences, even though they assign different translations or interpretations to the parts of those sentences.

Indeterminacy, however, also infects the translation and interpretation of complete sentences. This is because the evidence for a translation manual or theory of interpretation does not, in fact, come at the level of sentences. The radical translator or interpreter does not test her translations or T-sentences one-by-one; rather, what goes before the tribunal of evidence is a complete translation manual or theory of interpretation for the entire language (Quine 1953). This means that in the case of sentences, too, there is slack between evidence and a translation or interpretation as the linguist may vary the translation or interpretation of a given sentence by making complementary changes elsewhere in her translation manual or theory of interpretation. Thus the interpretation of sentences as well as terms is indeterminate.

e. Meaning and Interpretation

Davidson’s response to the indeterminacy arguments is at the same time more modest and more ambitious than Quine’s. It is more modest because Davidson does not endorse the skeptical conclusion Quine draws from the arguments, namely, that since there are no determinate meanings, there are no meanings. This reasoning is congenial to Quine’s parsimony and his behaviorism: all there is, according to Quine, is speakers’ behavior and dispositions to behave, together with whatever can be constructed from or explained in terms of that behavior and those dispositions.

It is more ambitious than Quine’s response insofar as Davidson offers, in place of Quine’s skepticism, what Hume would call a skeptical solution to the indeterminacy problem. That is, while acknowledging the validity of Quine’s reasoning, he undertakes to reconceive the concept of meaning that figures as a premise in it (as Hume reconceives the concept of causation that figures in his skeptical arguments): from the premise that there are no determinate meanings, Davidson concludes not that there are no meanings but that meaning is not determinate. In place of the traditional picture of meanings as semantical quanta that speakers associate with their verbal expressions, Davidson argues that meaning is the invariant structure that is common to the competing interpretations of speakers’ behavior (Davidson 1999, p. 81). That there is such a structure is implied by holism: in assigning a certain meaning to a single utterance, an interpreter has already chosen one of a number of competing theories to interpret a speaker’s overall language. Choosing that theory, in turn, presupposes that she has identified in the speaker’s utterances a pattern or structure she takes that theory to describe at least as well as any other. Herein lies the Indeterminacy of Interpretation, for that theory does only at least as well as any other. There is, therefore, no more an objective basis for choosing one theory of meaning over another than there is for preferring the Fahrenheit to the Celsius scale for temperature ascriptions.
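
To make the temperature analogy concrete (an illustration, not part of Davidson’s text): Fahrenheit and Celsius readings are interconvertible by a fixed linear transformation,

\[
F = \tfrac{9}{5}\,C + 32
\]

so the two scales assign different numbers to the same thermal states while preserving the same structure of order and interval. Analogously, competing theories of interpretation assign different semantic descriptions to the same utterances while describing one and the same pattern in the speaker’s behavior.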

This conclusion, however, has no skeptical implications, for by assumption each theory does equally well at describing the same structure. Whether there is a “fact of the matter” when it comes to saying what speakers or their sentences mean, therefore, becomes the question whether there are objective grounds for saying that that structure exists. That structure is a property of a system of events, and hence the grounds for saying that it exists are the criteria for attributing those properties to those events; the skeptical conclusion would follow, therefore, only if there were no such criteria. The argument for the Indeterminacy of Interpretation does not prove that, however. On the contrary, the methodology of radical interpretation provides a framework for attributing patterns of properties to speakers and their utterances.

As Davidson reconceives it, therefore, understanding a person’s sentences involves discerning patterns in his situated actions, but no discrete “meanings.” An interpreter makes sense of her interlocutor by treating him as a rational agent and reflecting on the contents of her own propositional attitudes, and she tracks what his sentences mean with her own sentences. This project may fail in practice, especially where the interpretation is genuinely radical and there is moral as well as linguistic distance separating an interpreter and a speaker; but in principle there is no linguistic behavior that cannot be interpreted, that is, understood, by another. If an interpreter can discern a pattern in a creature’s situated linguistic behavior, then she can make sense of his words; alternatively, if she cannot interpret his utterances, then she has no grounds for attributing meaning to the sounds he produces and no evidence to support the hypothesis that he is a rational agent. These observations are not a statement of linguistic imperialism; rather, they are implications of the methodology of interpretation and the role that Tarski-style theories of truth-cum-meaning play in the enterprise. Meaning is essentially intersubjective.

Further, meaning is objective in the sense that most of what speakers say about the world is true of the world. Some critics object that this statement rests on an optimistic assessment of human capacities for judgment; however, Davidson’s point is not an empirical claim that could turn out to be mistaken. Rather, it is a statement of the methodology of radical interpretation, an assumption an interpreter makes to gain access to her subject’s language. Her only path into his language is by way of the world they share, since she makes sense of his sentences by discerning patterns in the relations between those sentences and the objects and events in the world that cause him to hold those sentences true. If too many of his utterances are false, then the link between what he says and thinks, on the one hand, and the world, on the other, is severed; and the enterprise of interpretation halts. Finding too much inexplicable error in his statements about the world, therefore, is not an option if she is going to interpret him.

3. References and Further Reading

a. Anthologies of Davidson’s Writings

  • Davidson, Donald. 1984. Inquiries into Truth and Interpretation. New York: Oxford University Press. [Cited as ITI]
  • Davidson, Donald. 2001. Essays on Actions and Events. Oxford University Press. [Cited as EAE]
  • Davidson, Donald. 2001. Subjective, Intersubjective, Objective. New York: Oxford University Press. [Cited as SIO]
  • Davidson, Donald. 2005. Truth, Language, and History. New York: Oxford University Press. [Cited as TLH]
  • Davidson, Donald. 2005a. Truth and Predication. Cambridge, MA: Harvard University Press.
    • Contains the texts of Davidson’s 1989 Dewey Lectures (given at Columbia University) on the concept of truth together with his 2001 Hermes Lectures (given at the University of Perugia). The first half is useful in understanding the role truth plays in his systematic philosophy, and the second half contains Davidson’s interesting criticisms of his predecessors, ranging from Plato to Quine.

b. Individual Articles by Davidson

  • Davidson, Donald. 1965. “Theories of Meaning and Learnable Languages,” reprinted in ITI.
  • Davidson, Donald. 1967. “Truth and Meaning,” reprinted in ITI.
  • Davidson, Donald. 1967a. “The Logical Form of Action Sentences,” reprinted in EAE.
  • Davidson, Donald. 1968. “On Saying That,” reprinted in ITI.
  • Davidson, Donald. 1973. “Radical Interpretation,” reprinted in ITI.
  • Davidson, Donald. 1974. “Belief and the Basis of Meaning,” reprinted in ITI.
  • Davidson, Donald. 1974a. “On the Very Idea of a Conceptual Scheme,” reprinted in ITI.
  • Davidson, Donald. 1975. “Thought and Talk,” reprinted in ITI.
  • Davidson, Donald. 1976. “Reply to Foster,” reprinted in ITI.
  • Davidson, Donald. 1978. “What Metaphors Mean,” reprinted in ITI.
  • Davidson, Donald. 1986. “A Nice Derangement of Epitaphs,” reprinted in TLH.
  • Davidson, Donald. 1989. “What is Present to the Mind?”, reprinted in SIO.
  • Davidson, Donald. 1996. “The Folly of Trying to Define Truth,” reprinted in TLH.
  • Davidson, Donald. 1999. “Reply to W.V. Quine,” printed in Hahn 1999.

c. Primary Works by other Authors

  • Carnap, Rudolf. 1947. Meaning and Necessity, 2nd ed. Chicago: University of Chicago Press.
  • Frege, Gottlob. 1891. “Funktion und Begriff,” translated as “Function and Concept” and reprinted in Brian McGuinness et al. (eds.), Collected Papers on Mathematics, Logic, and Philosophy, 1984. New York: Basil Blackwell.
  • Quine, W.V. 1935. “Truth by Convention,” reprinted in The Ways of Paradox, 1976. Cambridge, MA: Harvard University Press.
  • Quine, W.V. 1953. From a Logical Point of View. Cambridge, MA: Harvard University Press.

d. Secondary Sources

i. Anthologies

  • De Caro, Mario. 1999. Interpretations and Causes: New Perspectives on Donald Davidson’s Philosophy. Dordrecht: Kluwer Academic Publishers.
    • Articles by an internationally diverse range of authors focusing on the interplay between the notions of interpretation and causation in Davidson’s philosophy.
  • Dasenbrock, Reed Way. 1993. Literary Theory After Davidson. University Park: Penn State Press.
    • Articles addressing the significance of Davidson’s philosophy of language for literary theory.
  • Hahn, Lewis Edwin. 1999. The Philosophy of Donald Davidson. The Library of Living Philosophers, volume XXVII. Peru, IL: Open Court Publishing Company.
    • A useful collection of articles, including Davidson’s intellectual autobiography and his replies to authors.
  • Kotatko, Petr, Pagin, Peter, and Segal, Gabriel. 2001. Interpreting Davidson. Stanford, CA: Center for the Study of Language and Information Publications.
  • Lepore, Ernest. 1986. Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson. Oxford: Basil Blackwell.
    • An excellent collection of articles addressing a range of topics in Davidson’s philosophy of language.
  • Stoecker, Ralf. 1993. Reflecting Davidson: Donald Davidson Responding to an International Forum of Philosophers. Berlin: de Gruyter.

ii. Critical Discussions of Davidson’s Philosophy

  • Evnine, Simon. 1991. Donald Davidson. Stanford, CA: Stanford University Press.
  • Joseph, Marc. 2004. Donald Davidson. Montreal: McGill-Queen’s University Press.
  • Lepore, Ernest, and Ludwig, Kirk. 2005. Davidson: Meaning, Truth, Language, and Reality. New York: Oxford University Press.
  • Lepore, Ernest, and Ludwig, Kirk. 2009. Donald Davidson’s Truth-Theoretic Semantics. New York: Oxford University Press.
    • A detailed and technical examination of Davidson’s use of Tarski-style theories of truth in his semantical project.
  • Ramberg, Bjørn. 1989. Donald Davidson’s Philosophy of Language. Oxford: Basil Blackwell.

Author Information

Marc A. Joseph
Email: majoseph@mills.edu
Mills College
U. S. A.

Immanuel Kant: Philosophy of Religion

Immanuel Kant (1724-1804) focused on elements of the philosophy of religion for about half a century, from the mid-1750s, when he started teaching philosophy, until after his retirement from academia. Having been reared in a distinctively religious environment, he remained concerned about the place of religious belief in human thought and action. From his pre-critical period, through the years in which he was writing each Critique and subsequent works, all the way to the incomplete, fragmentary Opus Postumum of his old age, his attention to religious faith was an enduring theme in the development of his own original philosophical system. His discussions of God and religion measure the evolution of his philosophical worldview: he began with a pre-critical advocacy of the rationalism in which he was educated, then subjected that rationalism to the systematic critique that would open the doors to his own unique critical treatment, and finally, at the end of his life, seemed to experiment with a more radical approach. As we follow the trajectory of this development, we see Kant moving from confidently advocating a demonstrative argument for the God of metaphysics, to denying all theoretical knowledge of a theological sort, to affirming a moral argument establishing religious belief as rational, to suspicions regarding religion divorced from morality, and finally to hints of an idea of God so identified with moral duty as to be immanent rather than transcendent. The key text representing the revolutionary move from his pre-critical, rationalistic Christian orthodoxy to his critical position (which could later lead to those suggestions of heterodox religious belief) is his seminal Critique of Pure Reason. In the preface to its second edition, in one of the most famous sentences he ever wrote, he sets the theme for this radical transition: “I have therefore found it necessary to deny knowledge, in order to make room for faith” (Critique, B). Though never a skeptic (for example, he was always committed to scientific knowledge), Kant came to limit knowledge to objects of possible experience and to regard ideas of metaphysics (including theology) as matters of rational faith.

Table of Contents

  1. Kant and Religion
  2. God in Some Pre-critical Writings
  3. Each Critique as Pivotal
    1. The First Critique
    2. The Second Critique
    3. The Third Critique
  4. The Prolegomena and Kant’s Lectures
    1. The Prolegomena
    2. Kant’s Lectures
  5. Other Important Works
  6. His Religion within the Limits of Reason Alone
  7. Some Tantalizing Suggestions from the Opus Postumum
  8. References and Further Readings
    1. Primary Sources
    2. Secondary Sources

1. Kant and Religion

This article does not present a full biography of Kant. A more general account of his life can be found in the article Kant’s Aesthetics. But five matters should be briefly addressed as background for discussing his philosophical theology: (1) his association with Pietism; (2) his wish to strike a reasonable balance between (the Christian) religion and (Newtonian physical) science; (3) his attempt to steer a middle path between the excesses of dogmatic modern rationalism and skeptical modern empiricism; (4) his commitment to Enlightenment ideals; and (5) his unpleasant encounter with the Prussian censor over his religious writings.

Kant was born, raised, and educated in Königsberg (now Kaliningrad, part of Russia), the capital city of East Prussia, where he also lived, worked, and died. His parents followed the Pietist movement in German Lutheranism, as he was brought up to do. Pietism stressed studying the scriptures as a basis for personal devotion, lay governance of the church, the conscientious practice of Christian ethics, religious education emphasizing the development and exercise of values, and preaching designed to inculcate and promote piety in its adherents. At the age of eight, the boy was sent to a Pietist school directed by his family’s pastor. Eight years later, he enrolled in the University of Königsberg, where he came under the influence of a Pietist professor of logic and metaphysics. Even during later decades of his life, when he ceased to practice religion publicly (see letter to Lavater in Correspondence, pp. 152-154) and found external displays of pious devotion distasteful, his thought and values continued to be influenced by the Pietism of his earlier years.

Second, as a university student, Kant became a follower of Newtonian science.  The dissertation for his graduate degree was more what we would consider physics than philosophy, although in those days it was called “natural philosophy.”  Many of his earliest writings were in Newtonian science, including his Universal Natural History and Theory of the Heavens of 1755 (in Cosmogony), dedicated to his king, Frederick the Great, and propounding a nebular hypothesis to explain the formation of our solar system.  He had reason to worry that his thoroughly mechanistic explanation might run afoul of Biblical fundamentalists who advocated the traditional doctrine of strict creationism.  This is illustrative of a tension with which he had to deal all of his adult life—regarding how to reconcile Christian faith and scientific knowledge—which his philosophy of religion would address.

Third, although this is a bit of an oversimplification, before Kant, modern European philosophy was generally split into two rival camps: the Continental Rationalists, following Descartes, subscribed to a theory of a priori innate ideas that provide a basis for universal and necessary knowledge, while the British Empiricists, following Locke, subscribed to a tabula rasa theory, denying innate ideas and maintaining that our knowledge must ultimately be based on sense experience. This split vitally affected views regarding knowledge of God. Descartes and his followers were convinced that a priori knowledge of the existence of God, as an infinitely perfect Being, was possible and favored (what Kant would later call) the Ontological Argument as a way to establish it. By contrast, Locke and his followers spurned such a priori reasoning and resorted to empirical approaches, such as the Cosmological Argument and the Teleological or Design Argument. An important Continental Rationalist was the German Leibniz, whose philosophy was systematized by Christian Wolff; in the eighteenth century, the Leibnizian-Wolffian philosophy was replacing scholasticism in German universities. Kant’s family pastor and the professor who was so important in his education were both significantly influenced by Wolff’s philosophy, so that their young student was easily drawn into that orbit. But he also came to study the British Empiricists and was particularly disturbed by the challenges posed by the skeptical David Hume, which would gradually undermine his attachment to rationalism. A vital feature of Kant’s mature philosophy is his attempt to work out a synthesis of these two great rival approaches.

Fourth, the eighteenth century was the heyday of the intellectual movement of the Enlightenment in Europe (as well as in North America), which was committed to ideals that Kant would appropriate as his own—including those of reason, experience, science, liberty, and progress. Frederick II, who was the Prussian king for most of Kant’s adult life (from 1740 to 1786), was called both “Frederick the Great” and “the Enlightenment King.” Hume and Wolff were both Enlightenment philosophers, as was Kant himself, who published a sort of manifesto for the movement, called “What Is Enlightenment?” (1784). There he characterizes his own time as an age of developing enlightenment, though not yet a fully enlightened age. He champions the cause of the free use of reason in public discussion, including freedom from censorship regarding publishing on religion (Essays, pp. 41-46).

Fifth, Kant himself faced a personal crisis when the Prussian government condemned his published book, Religion within the Limits of Reason Alone. As long as Frederick the Great, “the Enlightenment King,” ruled, Kant and other Prussian scholars had broad latitude to publish controversial religious ideas in an intellectual atmosphere of general tolerance. But Frederick was succeeded by his illiberal nephew, Frederick William II, who appointed a former preacher named Wöllner as his reactionary minister of spiritual affairs. The anti-Enlightenment Wöllner issued edicts forbidding any deviations from orthodox Biblical doctrines and requiring approval by official state censors, prior to publication, for all works dealing with religion. Kant managed to get the first book of his Religion cleared by one of Wöllner’s censors in Berlin. But he was denied permission to publish Book II, which was seen as violating orthodox Biblical doctrines. Having publicly espoused the right of scholars to publish even controversial ideas, Kant sought and got permission from the philosophical faculty at Jena (which also had that authority) to publish the second, third, and fourth books of his Religion and proceeded to do so. When Wöllner found out about it, he was furious and sent Kant a letter, which he had written and signed on behalf of the king, censuring Kant and threatening him with harsh consequences, should he ever repeat the offense. Kant wrote a reply to the king, promising, “as your Majesty’s most loyal subject,” to refrain from all further public discussion of religion. Until that king died (in 1797), Kant kept his promise. But, as he later explained (Theology, pp. 239-243), that carefully worded qualifying phrase meant that the commitment would pass with that king, after whose death Kant, in fact, did resume publishing on religion.

2. God in Some Pre-critical Writings

Kant’s pre-critical writings are those that precede his Inaugural Dissertation of 1770, which marked his assumption of the chair in logic and metaphysics at the university.  These writings reflect a general commitment to the Leibnizian-Wolffian rationalist tradition.  Near the beginning of his Universal Natural History and Theory of the Heavens of 1755, Kant observes that the harmonious order of the universe points to its divinely governing first Cause; near the end of it, he writes that even now the universe is permeated by the divine energy of an omnipotent Deity (Cosmogony, pp. 14 and 153).  In his New Exposition of the First Principles of Metaphysical Knowledge (of the same year), he points to God’s existence as the necessary condition of all possibility (Exposition, pp. 224-225).

In The One Possible Basis for a Demonstration of the Existence of God of 1763, after warning his readers that any attempt at proving divine reality will plunge us into the “dark ocean without coasts and without lighthouses” that is metaphysics, he develops that line of argumentation towards God as the unconditioned condition of all possibility. He denies the Cartesian thesis that existence is a predicate, thus undermining modern versions of the Ontological Argument. The absolutely necessary Being that is the ground of all possibility must be one, simple, immutable, eternal, the highest reality, and a spirit, he argues. He analyzes theoretical proofs of God into four possible sorts. Two of these—the Ontological, which he rejects, and his own—are based on possibility; the other two—the Cosmological and the Teleological (Design), both of which he deems inconclusive—are empirical. The final sentence of the book maintains that, though we must be convinced of God’s existence, logically demonstrating it is not required (Basis, pp. 43, 45, 57, 69, 71, 79, 81, 83, 87, 223, 225, 229, 231, and 239). That same year, Kant also published his Enquiry concerning the Clarity of the Principles of Natural Theology and Ethics. Here, while still expressing doubts that any metaphysical system of knowledge has yet been achieved, he nevertheless maintains his confidence that rational argumentation can lead to metaphysical knowledge, including that of God, as the absolutely necessary Being (Writings, pp. 14, 25, and 29-30). What we see in these pre-critical writings is the stamp of Leibnizian-Wolffian rationalism, but also the developing influence of Hume, whom Kant was surely studying during this period.

3. Each Critique as Pivotal

The heart of Kant’s philosophical system is the triad of books constituting his great critiques:  his Critique of Pure Reason, published in 1781 (the A edition), with a significantly revised second edition appearing in 1787 (the B edition); his Critique of Practical Reason, published in 1788; and his Critique of Judgment, published in 1790.

a. The First Critique

Though some key ideas of the Critique of Pure Reason were adumbrated in Kant’s Inaugural Dissertation of 1770 (in Writings), this first Critique is revolutionary in the sense that, because of it, the history of philosophy became radically different from what it had been before its publication.  We cannot adequately explore all of the game-changing details of the epistemology (theory of knowledge) he develops there, which has been discussed elsewhere in the IEP (see “Immanuel Kant: Metaphysics”), but will only consider the elements that have a direct bearing on his philosophy of religion.

The monumental breakthrough of this book is Kant’s invention of the transcendental method in philosophy, which allows him to discover a middle path between modern rationalism, which attributes intellectual intuition (for example, innate ideas) to humans, enabling them to have universal and necessary factual knowledge, and modern empiricism, which maintains that we only have sensible intuition, making it difficult to see how we can ever achieve universal and necessary factual knowledge through reason.  Kant argues that both sides are partly correct and partly mistaken.  He agrees with the empiricists that all human factual knowledge begins with sensible intuition (which is the only sort we have), but avoids the skeptical conclusions to which this leads them by agreeing with the rationalists that we bring something a priori to the knowing process, while rejecting their dogmatic assumption that it must be the innate ideas of intellectual intuition.  According to Kant, universal and necessary factual knowledge requires both sensible experience, providing its content, and a priori structures of the mind, providing its form.  Either without the other is insufficient.  He famously writes, “Thoughts without content are empty, intuitions without concepts are blind” (Critique, A51/B75).  Without empirical, sensible content, there is nothing for us to know; but without those a priori structures, we have no way of giving intelligible form to whatever content we may have.

The transcendental method seeks the necessary a priori conditions of experience, of knowledge, and of metaphysical speculation. The two a priori forms of sensibility are time and space: that is, if we are to make sense of them, all objects of sensation, whether external or internal, must be temporally organizable, and all objects of external sensation must also be spatially organizable. But time and space are only forms of experience and not objects of experience, and they can only be known to apply to objects of sensible intuition. When sensory inputs are received by us and spatio-temporally organized, the a priori necessary condition of our having objective knowledge is that one or more of twelve concepts of the understanding, also called “categories,” must be applied to our spatio-temporal representations. These twelve categories include reality, unity, substance, causality, and existence. Again, none of them is an object of experience; rather, they are all categories of the human mind, necessary for our knowing any objects of experience. And, again, they can only be known to apply to objects of sensible intuition. Now, by its very nature, metaphysics (including theology) necessarily speculates about ultimate reality that is not given to sensible intuition and therefore transcends any and all human perceptual experience. It is a fact of human experience that we do engage in metaphysical speculation. So what are the transcendental conditions of our capacity to do so? Kant’s answer is that they are the three a priori ideas of pure reason—the self or soul, the cosmos or universe as an orderly whole, and God, the one of direct concern to us here. But, as we never can have sensible experience of objects corresponding to such transcendent ideas and as the concepts of the understanding, without which human knowledge is impossible, can only be known to apply to objects of possible experience, knowledge of the soul, of the cosmos, and of God is impossible, in principle.

So what are we to make of ideas that can never yield knowledge?  Here Kant makes another innovative contribution to epistemology.  He says that ideas can have two possible functions in human thinking.  Some (for example, empirical) ideas have a “constitutive” function, in that they can be used to constitute knowledge, while others have only a “regulative” function (Critique, A180/B222), in that, while they can never constitute knowledge, they do serve the heuristic purpose of regulating our thought and action.  This is related to Kant’s dualistic distinction between the aspect of reality that comprises all phenomenal appearances and that which involves our noumenal ideas of things-in-themselves.  (Although it is important, we cannot here explore this distinction in the depth it deserves.)  Because metaphysical ideas are unknowable, they cannot serve any “constitutive” function.  Still, they have great “regulative” value for both our thinking and our voluntary choices.  They are relevant to our value-commitments, including those of a religious sort.  Three such regulative ideas are Kant’s postulates of practical reason, which are “God, freedom, and immortality” (Critique, A3/B7).  Although none of them refers to an object of empirical knowledge, he maintains that it is reasonable for us to postulate them as matters of rational faith.  This sort of belief, which is subjectively, but not objectively, justifiable, is a middle ground between certain knowledge, which is objectively, as well as subjectively, justified, and mere arbitrary opinion, which is not even subjectively justified (Critique, A822/B850).  Such rational belief can be religious—namely, faith in God.

Kant presents four logical puzzles that he calls “antinomies” to establish the natural  dialectical illusions that our reason inevitably encounters when it engages metaphysical questions about cosmology in an open-minded fashion.  The fourth of these particularly concerns us here, as reason purports to be able to prove both that there must be an absolutely necessary Being and that no such Being can exist.  His dualism can expose this apparent contradiction as bogus, maintaining that in the realm of phenomenal appearances, everything exists contingently, with no necessary Being, but that in the realm of noumenal things-in-themselves there can be such a necessary Being.

But, we might wonder, what about the traditional arguments for God?  If even one of them proves logically conclusive, would not that constitute some sort of knowledge of God?  Here we encounter yet another great passage in the first Critique, where Kant’s epistemology leads him to a trenchant undermining of all such arguments.  He maintains that there is a trichotomy of types of speculative arguments for God:  the “physico-theological” Argument from Design, various Cosmological Arguments, and the non-empirical “Ontological” Argument.  He cleverly shows that the first of these, even if it worked, would only establish a relatively intelligent and powerful architect of the world and not a necessarily existing Creator.  In order to establish it as a necessary Being, some version of the second approach is needed.  But, if that worked, it would still fail to show that the necessary creator is an infinitely perfect Being, worthy of religious devotion.  Only the Ontological Argument will suffice to establish that.  But here the problems accumulate.  The Ontological Argument fails because it tries to attribute infinite, necessary existence to God; but existence, far from being a real predicate of anything, is merely a concept of the human understanding.  Then the cosmological arguments also fail, in trying to establish that God is the necessary ultimate cause of the world, for both causality and necessity are merely categories of human understanding.  Although Kant exhibits considerable respect for the teleological argument from design, in addition to its conclusion being so disappointingly limited, it also fails as a logical demonstration, in trying to show that an intelligent Designer must exist to account for the alleged intelligent design of the world. The problem is that we do not and cannot ever experience the world as a coherent whole, so that the argument’s premise is merely assumed without foundation.  Thus Kant undermines the entire project of any philosophical theology that pretends to establish any knowledge of God (Critique, A592/B620-A614/B642 and A620/B648-A636/B664).  Yet he remains a champion of religious faith as rationally justifiable.  So how can he make such a position philosophically credible?

b. The Second Critique

Here we must turn to his ingenious Critique of Practical Reason.  Although it is essentially a work of ethics, a significant part of it is devoted to establishing belief in God (as well as in the immortality of the soul) as a rationally justifiable postulate of practical reason, by means of what has come to be called his “moral argument.”  The argument hinges on his claim that we have a moral duty to help bring about, not just the supreme good of moral virtue, which we can achieve by our own efforts in this life, but also “the highest good,” which is  the “perfect” correlation of “happiness in exact proportion to morality.”  Since there cannot be any moral obligation that it is impossible to meet (“ought” implies “can”), achieving this highest good must be possible.  However, there is no reason to believe that it can ever be achieved by us alone, acting either individually or collectively, in this life.  So it would seem that all our efforts in this life cannot suffice to achieve the highest good.  Yet there must be such a sufficient condition, supernatural and with attributes far exceeding ours, identifiable with God, with whom we can collaborate in the achievement of the highest good, not merely here and now but in the hereafter.  Thus he establishes God and human immortality as “morally necessary” hypotheses, matters of “rational faith.”  This is also the basis of Kant’s idea of moral religion, which we shall discuss in more detail below.  But, for now, we can observe his definition of “religion” as “the recognition of all duties as divine commands.”  Thus the moral argument is not purely speculative but has a practical orientation.  Kant does not pretend that the moral argument is constitutive of any knowledge.  If he did, it could be easily refuted by denying that we have any obligation to achieve the highest good, because it is, for us, an impossible ideal.  The moral argument rather deals with God as a regulative idea that can be shown to be a matter of rational belief.  The famous sentence near the end of the second Critique provides a convenient bridge between it and the third:  “Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them:  the starry heavens above me and the moral law within me” (Reason, pp. 114-115/AA V: 110-111, 126-130/AA V: 121-126, 134/AA V: 129-130, and 166/AA V: 161).  As morality leads Kant to God and religion, so does the awesome teleological order of the universe.

c. The Third Critique

Although Kant’s Critique of Judgment is also not essentially a work in the philosophy of religion, its long appendix contains an important section that is germane for our purposes. We recall that, while criticizing the teleological argument from design, Kant exhibited a high regard for it. Such physical teleology points to a somewhat intelligent and powerful designing cause of the world. But now Kant pursues moral teleology, which will connect such a deity to our own practical purposes—not only to our natural desire for happiness, but to our moral worthiness to achieve it, which is a function of our own virtuous good will. He gives us another version of his moral argument for God, conceived not as the amoral, impersonal metaphysical principle indicated by the teleological argument from design, but rather as a personal deity who is the moral legislator and governor of the world. Again, all this points to God as a regulative matter of “moral faith,” without any pretense of establishing any theological knowledge (which would violate Kant’s own epistemology). Such faith is inescapably doubtful, in that it remains reasonable to maintain some doubt regarding it, and a matter of trust in teleological ends towards which we should be striving. Nor should we be so presumptuous as to suppose that we can ever comprehend God’s nature or purposes. It is only by analogy that we can contemplate such matters at all (Judgment, pp. 295-338/AA V: 442-485), a point which Kant more carefully develops in his Prolegomena.

4. The Prolegomena and Kant’s Lectures

a. The Prolegomena

Most—but not all—of the religious epistemology that is of note in Kant’s Prolegomena to Any Future Metaphysics is already contained in his more philosophically impressive first Critique and will not be repeated here. But a few pages of its “Conclusion” add something that we have not yet considered. One of the abiding problems of the philosophy of religion is how we can speak (and even think) about God except in anthropomorphic human terms without resorting to an indeterminate fog of ineffable mysticism. The great rationalists are particularly challenged here, and Hume, whom Kant credits with awakening him from his dogmatic slumbers, mercilessly exploits their dilemma. Kant’s project continues to be to navigate a perilous middle path between the equally problematic approaches of anthropomorphism and mysticism. Kant appreciates the dilemma as acutely as Hume, but wants to solve it rather than merely highlighting it. Hume means to replace theism with an indeterminate deism. Kant, himself a theist, admits that Hume’s objections against theism are devastating but holds that his arguments undermine only attempted deistic proofs and not deistic beliefs. Remembering that the concepts of the understanding cannot be known to apply to anything that transcends all possible experience, we can see that it will be a challenge for Kant to evade Hume’s dilemma. His approach is to distinguish between a malignant “dogmatic anthropomorphism,” which tries literally to attribute to God natural qualities, such as those attributable to humans, and a more benign “symbolic anthropomorphism,” which merely draws an analogy between God’s relation to our world and relations among things in our world, while avoiding thinking of them as identical. Kant’s example is helpful here: while we have no possible natural knowledge of God’s love for us and should acknowledge that it cannot be identical to any (necessarily limited) human love, we can use analogical language to think and talk about God’s love for us—as the love of human parents is directed to the welfare of their children, so God’s love for us is directed to human well-being. Thus, Kant maintains, we can avoid the vicious sort of dogmatic anthropomorphism which Hume rightly attacks and, for example, attribute to God a rational relationship to our world without pretending that divine reason is exactly the same as ours, for example, discursive and, thus, limited (Prolegomena, pp. 5, 19, and 96-99). Thinking and speaking of God with analogous language can facilitate a theology that neither is anthropomorphic in a bad way nor succumbs to the dialectical illusions from which Kant’s epistemology would save us.

b. Kant’s Lectures

A somewhat neglected, but still important, dimension of Kant’s philosophy of religion can be found in his Lectures on Philosophical Theology, which comprises an introduction, a first part on transcendental theology, and a second part on moral theology.  After maintaining that rational theology’s essential value is practical rather than speculative, he defines religion as “the application of theology to morality,” which is a bit broader than the definition of the second Critique but is in line with it.  He conceives of the God of rational theology as the causal author and moral ruler of the world.  He considers himself a theist rather than a deist because he is committed to a free and moral “living God,” holy and just, as well as omniscient and omnipotent, as a postulate of practical reason (Lectures, pp. 24, 26, 30, and 41-42).  In the first part of the Lectures, Kant considers the speculative proofs of God, as well as the use of analogous language as a hedge against gross anthropomorphism.  But, as we have already discussed the more famous treatments of these topics (in the first Critique and the Prolegomena, respectively), we can pass over these here.

The second part of the Lectures starts with a version of the moral argument, which we have already considered (in connection with its more famous treatment in the second Critique).  This line of reasoning leads to the moral attributes of “God as a holy lawgiver, a benevolent sustainer of the world, and a just judge.”  A major problem of the philosophy of religion we have yet to consider is the problem of evil.  If, indeed, an infinitely perfect and supremely moral God governs the world with divine providence, how can there be so much evil, in all its multiple forms, in that world?  More specifically, for Kant, how can moral evil be consistent with divine holiness, pain and suffering with divine benevolence, and morally undeserved well-being and the lack of it with divine justice?  Despite God’s holiness, moral evil is a function of our  free will as rational creatures and our responsibility for our own development.  Despite God’s benevolence towards personal creatures, the physical evils of pain and suffering provide incentives for our progressing towards fulfillment.  And, despite God’s justice, the disproportion between virtue and well-being in this life must be temporary, to be rectified hereafter (Lectures, pp. 112 and 115-121).  This earlier (from the 1780s) attempt at theodicy on Kant’s part was neither particularly original nor particularly convincing.

5. Other Important Works

Kant deals with the problem of evil more impressively in his “On the Miscarriage of All Philosophical Trials in Theodicy” (1791). He analyzes possible attempts at theodicy into three approaches: (a) it can argue that what we consider evil actually is not, so that there is really no conflict; (b) it can argue that the conflict between evil and God is naturally necessitated; and (c) it can argue that evil, though contingent, is the result of someone other than God. Kant’s own earlier work attempted to combine the second and third strategies; but here he concludes that all of these approaches must fail. More specifically, attempts to show that there is no pernicious conflict between moral evil and God’s holiness, between the physical evils of pain and suffering and God’s goodness, and, finally, between the disproportion of happiness and misery to virtue and vice and God’s justice, all fail, whichever of the three approaches is used. Thus Kant’s considered conclusion is negative: the doubts that are legitimately raised by the evil in our world cannot be conclusively answered, nor can they conclusively refute God’s infinite moral wisdom. Thus, theodicy, like matters of religion more generally, turns out to be a matter of faith and not one of knowledge (Theology, pp. 24-34; see also “What Does It Mean to Orient Oneself in Thinking?” in Theology pp. 12-15, and “Speculative Beginning of Human History,” in Essays). In a work published the year he died, Kant analyzes the core of his theological doctrine into three articles of faith: (1) he believes in one God, who is the causal source of all good in the world; (2) he believes in the possibility of harmonizing God’s purposes with our greatest good; and (3) he believes in human immortality as the necessary condition of our continued approach to the highest good possible (Metaphysics, p. 131). All of these doctrines of faith can be rationally supported. This leaves open the issue of whether further religious beliefs, drawn from revelation, can be added to this core. As Kant makes clear in The Conflict of the Faculties, he does not deny that divinely revealed truths are possible, but only that they are knowable. So, we might wonder, of what practical use is revelation if it cannot be an object of knowledge? His answer is that, even if it can never constitute knowledge, it can serve the regulative function of edification—contributing to our moral improvement and adding motivation to our moral purposes (Theology, pp. 283 and 287-288).

6. His Religion within the Limits of Reason Alone

Kant’s Religion within the Limits of Reason Alone of 1793 is considered by some to be the most underrated book in the entire history of the philosophy of religion.  In a letter to a theologian, he subsequently repeats the questions with which he thinks any philosophical system should deal (three of them in his first Critique, A 805/B 833; see also his Logic, pp. 28-30, where he adds a fourth).  The first one, regarding human knowledge, had been covered in the first Critique and the Prolegomena; the second, regarding practical values, was considered in his various writings on ethics and socio-political philosophy; the fourth, regarding human nature, had been covered in his philosophical anthropology. Now,  with Religion, Kant addresses the third question of what we can reasonably hope for, and moves towards completing his system (Correspondence, pp. 458-459).  Thus we can conclude that Kant himself sees this book, the publication of which got him into trouble with the Prussian government, as crucial to his philosophical purposes.  Hence we should take it seriously here as representative of his own rational theology.  In his Preface to the first edition, he again points out that reflection on moral obligation should lead us to religion (Religion, pp. 3-6; see also Education, pp. 111-114, for his analysis of how religion should be taught to children).  In his Preface to the second edition, he offers an illuminating metaphor of two concentric circles—the inner one representing the core of the one religion of pure moral reason and the outer one representing many revealed historical religions, all of which should include and build on that core (Religion, p. 11).

In the first book, Kant considers our innate natural predisposition to good (in being animals, humans, and persons) and our equally innate propensity to evil (in our frailty, impurity, and wickedness).  Whether we end up being praiseworthy or blameworthy depends, not on our sensuous nature or our theoretical reason, but on the use we make of our free will, which is naturally oriented towards both good and evil.  There are two dimensions of what we call “will,” both of which are important in grasping Kant’s view here.  On the one hand, there is our capacity for free choice (his word is “Willkür”); on the other hand, there is practical reason as rationally legislating moral choice and action (“Wille”).  Thus a “good will” chooses in accordance with the rational demands of the moral law.  At any rate, we are born with a propensity to evil; but whether we become evil depends on our own free acts of will.  Thus Kant demythologizes the Christian doctrine of original sin.  He then distinguishes between the phony religion of mere worship designed to win favor for ourselves and the authentic moral religion of virtuous behavior.  Although it is legitimate to hope for God’s grace as helping us to lead morally good lives, it is mere fanaticism to imagine that we can become good by soliciting grace rather than freely choosing virtuous conduct (Religion, pp. 21-26, 30, 32, 35, and 47-49).

In the second book, Jesus of Nazareth is presented as an archetype symbolizing our ability to resist our propensity to evil and to approach the virtuous ideal of moral perfection.  What Kant does not say is whether or not, in addition to being a moral model whose example we should try to follow, Jesus is also of divine origin in some unique manner attested to by miracles.  Just as he neither denies nor affirms the divinity of Christ, so Kant avoids committing himself regarding belief in miracles, which can lead us into superstition (Religion, pp. 51, 54, 57, 74, 77, and 79-82; for more on the mystery of the Incarnation, see Theology, pp. 264-265).

In the third book, Kant expresses his rational hope for the ultimate supremacy of good over evil and the establishment of an ethical commonwealth of persons under a personal God, who is the divine law-giver and moral ruler—the ideal of the invisible church, as opposed to the actual realities of visible churches.  Whereas statutory religion focuses on obedient external behavior, true religion concerns internal commitment (or good will).  Mere worship is a worthless substitute for good choices and virtuous conduct.  Here Kant makes a particularly provocative claim:  that, ultimately, there is only “one (true) religion,” the religion of morality, while there can be various historical “faiths” promoting it.  From this perspective, Judaism, Islam, and the various denominations of Christianity are all legitimate faiths, to be located in Kant’s metaphorical outer circle, so long as they include and build on the true religion of morality, his metaphorical inner circle.  However, some faiths can be relatively more adequate expressions of the religion of moral reason than others (Religion, pp. 86, 89-92, 95, and 97-98; see also Theology, pp. 262-265).

In his particularly inflammatory fourth book, Kant probes the distinction between legitimate religious service and the pseudo-service of religious clericalism.  From our human perspective, religion—both revealed and natural—should be regarded as “the recognition of all duties as divine commands.”  Kant embraces the position of “pure rationalist,” rather than naturalism (which denies divine revelation) or pure supernaturalism (which considers it necessary), in that he accepts the possibility of revelation but does not dogmatically regard it as necessary.  He acknowledges scripture scholars’ valuable role in helping to disseminate religious truth so long as they respect “universal human reason as the supremely commanding principle.”  Christianity is both a natural and a revealed religion, and Kant shows how the gospel of Matthew expresses Kantian ethics, with Jesus as its wise moral teacher.  Following his moral teachings is the means to true religious service, whereas substituting for moral behavior an attachment to allegedly required external worship is mere “pseudo-service.”  Superstition and fanaticism are typical aspects of such illusions, and substituting superstitious rituals for morally virtuous conduct is mere “fetishism.”  Kant denounces clericalism as promoting such misguided pseudo-service.  The ideal of genuine godliness comprises a combination of fear of God and love of God, which should converge to help render us persons of morally good will.  So what about such religious practices as prayer, church attendance, and participation in sacraments?  They can be either good expressions of devotion, if they bind us together in moral community (occupying Kant’s inner circle), or bad expressions of mere pseudo-service, if designed to ingratiate us with God (an accretion to the outer circle not rooted in the inner circle of genuine moral commitment).  Mere external shows of piety must never be substituted for authentic inner virtue (Religion, pp. 142-143, 147-153, 156-158, 162, 165, 167-168, 170, and 181-189; cf. Ethics, pp. 78-116).  Kant’s Religion within the Limits of Reason Alone provides a capstone for the revolutionary treatment of religion associated with his critical philosophy.

7. Some Tantalizing Suggestions from the Opus Postumum

Yet it is quite admirable that, in the last few years of his life, despite struggling with the onset of dementia that made any such task increasingly challenging, he kept trying to explore new dimensions of the philosophy of religion.  As has already been admitted, the results, located in his fragmentary Opus Postumum, are more provocative than satisfying; yet they are nevertheless worthy of brief consideration here.  The work comprises a vast quantity of scattered remarks, many of which are familiar to readers of his earlier writings, but some of which represent acute, fresh insights, albeit none of them adequately developed.  Here again Kant writes that reflection on moral duty, determinable by means of the categorical imperative, can reasonably lead us to the idea of God, as a rational moral agent with unlimited power over the realms of nature and of freedom, who prescribes our duties as divine commands.  He then adds a bold idea, which breaks with his own previous orthodox theological concept of a transcendent God.  Developing his old notion of God as “an ideal of human reason,” he identifies God with our concept of moral duty rather than with any independent substance.  This notion of an immanent God (that is, one internal to our world rather than transcendently separate from it), while not carefully worked out by Kant himself, would be developed by later German Idealists (most significantly, Hegel).  While conceding that we think of God as an omnipotent, omniscient, and omnibenevolent personal Being, Kant now denies that personality can be legitimately attributed to God—again stepping out of mainstream Judeo-Christian doctrine.  Also, rather than still postulating God as an independent reality, he here says that “God and the world are correlates,” interdependent and mutually implicating one another.  Unfortunately, we can only conjecture as to what, exactly, he means by this claim.  Referring to Spinoza (the most important pre-Kantian panentheist in modern philosophy), he pushes the point even more radically, writing, “I am in the highest being.”  But then Kant goes on to condemn Spinoza’s panentheistic conception of God (that is, the view also found in Hegel, that God contains our world rather than transcending it) as outlandish “enthusiastic” fanaticism.  In fact, he suggests the inverse—instead of holding that we are in God, Kant now indicates that God is in us, though different from us, in that God’s reality is ideal rather than substantial.  He proceeds to maintain that not only is God infinite, but so are the world and rational freedom, identifying God with “the inner vital spirit of man in the world.”  Kant makes one final controversial claim when he denies that a concept of God is even essential to religion (Opus, pp. 200-204, 210-211, 213-214, 225, 229, 231, 234-236, 239-240, and 248).  This denial is clearly not an aspect of Kant’s thought that is familiar and famous, and we should beware of presuming that we understand precisely what should be made of it.  But what is undeniable is what a long and soaring intellectual journey Kant made as he developed his ideas on God and religion from his pre-critical writings through the central, revolutionary works of his philosophical maturity and into the puzzling but tantalizing thought-experiments of his old age.

8. References and Further Readings

a. Primary Sources

  • Immanuel Kant, “An Answer to the Question:  What is Enlightenment?” trans. Ted Humphrey, in Essays.
  • Immanuel Kant, The Conflict of the Faculties, trans. Mary J. Gregor and Robert Anchor, in Theology.
  • Immanuel Kant, Correspondence, trans. and edited by Arnulf Zweig.  New York:  Cambridge University Press, 1999.
  • Immanuel Kant, Critique of Judgment, trans. J. H. Bernard (called “Judgment”).  New York: Hafner, 1968. References to this translation are accompanied by references to the Akademie Ausgabe Volume V.
  • Immanuel Kant, Critique of Practical Reason, trans. Lewis White Beck (called “Reason”).  Indianapolis:  Bobbs-Merrill, 1956. References to this translation are accompanied by references to the Akademie Ausgabe Volume V.
  • Immanuel Kant, Critique of Pure Reason, trans. Norman Kemp Smith (called “Critique”).  New York: St. Martin’s Press, 1965. References are to the A and B German editions.
  • Immanuel Kant, Education, trans. Annette Churton.  Ann Arbor:  University of Michigan Press, 1960.
  • Immanuel Kant, “The End of All Things,” trans. Allen W. Wood, in Theology.
  • Immanuel Kant, Enquiry concerning the Clarity of the Principles of Natural Theology and Ethics, trans. G. B. Kerferd and D. E. Walford, in Writings.
  • Immanuel Kant, Kant’s Cosmogony, trans. W. Hastie (called “Cosmogony”).  New York:  Garland, 1968.
  • Immanuel Kant, Lectures on Ethics, trans. Louis Infield (called “Ethics”).  New York:  Harper & Row, 1963.
  • Immanuel Kant, Lectures on Philosophical Theology, trans. Allen W. Wood and Gertrude M. Clark (called “Lectures”).  Ithaca, NY:  Cornell University Press, 1978.
  • Immanuel Kant, Logic, trans. Robert Hartman and Wolfgang Schwarz.  Indianapolis:  Bobbs-Merrill, 1974.
  • Immanuel Kant, New Exposition of the First Principles of Metaphysical Knowledge, trans. F. E. England (called “Exposition”), in England (below).
  • Immanuel Kant, On the Form and Principles of the Sensible and Intelligible World (Inaugural Dissertation), trans. G. B. Kerferd and D. E. Walford, in Writings.
  • Immanuel Kant, “On the Miscarriage of All Philosophical Trials in Theodicy,” trans. George di Giovanni, in Theology.
  • Immanuel Kant, The One Possible Basis for a Demonstration of the Existence of God, trans. Gordon Treash (called “Basis”).  Lincoln:  University of Nebraska Press, 1994.
  • Immanuel Kant, Opus Postumum, edited by Eckart Förster and trans. Eckart Förster and Michael Rosen (called “Opus”).  New York:  Cambridge University Press, 1993.
  • Immanuel Kant, Perpetual Peace and Other Essays, trans. Ted Humphrey (called “Essays”).  Indianapolis:  Hackett, 1983.
  • Immanuel Kant, Prolegomena to Any Future Metaphysics, trans. Paul Carus and revised by James W. Ellington (called “Prolegomena”).  Indianapolis:  Hackett, 1977.
  • Immanuel Kant, Religion and Rational Theology, trans. and edited by Allen W. Wood and George di Giovanni (called “Theology”).  New York:  Cambridge University Press, 2001.
  • Immanuel Kant, Religion within the Limits of Reason Alone, trans. Theodore M. Greene and Hoyt H. Hudson (called “Religion”).  New York:  Harper & Row, 1960.
  • Immanuel Kant, Selected Pre-Critical Writings and Correspondence with Beck, trans. G. B. Kerferd and D. E. Walford (called “Writings”).  Manchester:  Manchester University Press, 1968.
  • Immanuel Kant, “Speculative Beginning of Human History,” trans. Ted Humphrey, in Essays.
  • Immanuel Kant, Universal Natural History and Theory of the Heavens, trans. W. Hastie, in Cosmogony.
  • Immanuel Kant, “What Does It Mean to Orient Oneself in Thinking?”, trans. Allen W. Wood, in Theology.
  • Immanuel Kant, What Real Progress Has Metaphysics Made in Germany since the Time of Leibniz and Wolff?, trans. Ted Humphrey (called “Metaphysics”).  New York:  Abaris Books, 1983.

b. Secondary Sources

  • James Collins, The Emergence of Philosophy of Religion.  New Haven:  Yale University Press, 1967.
    • Chapters 3 through 5 deal with Kant’s philosophy of religion in a meticulous manner.
  • Frederick Copleston, S. J., A History of Philosophy, Volume 6.  Garden City:  Image Books, 1964.
    • Though old, this volume still represents exemplary Kant scholarship.
  • A. Hazard Dakin, “Kant and Religion,” in The Heritage of Kant, edited by George Tapley Whitney and David F. Bowers.   New York:  Russell & Russell, 1962.
    • This is a non-technical critical analysis of Kant’s views on religion.
  • Michel Despland, Kant on History and Religion.  Montreal:  McGill-Queen’s University Press, 1973.
    • The second part of this book offers a detailed coverage of Kant’s philosophy of religion.
  • George di Giovanni, “Translator’s Introduction” to Religion within the Boundaries of Mere Reason, in Theology, pp. 41-54.
    • This is an informative account of the history of Kant’s Religion.
  • S. Morris Engel, “Kant’s ‘Refutation’ of the Ontological Argument,” in Kant:  A Collection of Critical Essays, edited by Robert Paul Wolff.  Garden City:  Anchor Books, 1967.
    • This remains a provocative critical analysis of Kant’s critique of this argument.
  • F. E. England, Kant’s Conception of God.  New York:  Humanities Press, 1968.
    • This is a very good study of Kant’s development of a philosophy of religion.
  • Chris L. Firestone and Nathan Jacobs, In Defense of Kant’s Religion.  Bloomington:  Indiana University Press, 2008.
    • This book cleverly presents criticisms of Kant’s views answered by defenses.
  • Chris L. Firestone and Stephen R. Palmquist, editors, Kant and the New Philosophy of Religion.  Bloomington:  Indiana University Press, 2006.
    • This is a good anthology of recent essays from both philosophical and theological perspectives.
  • Chris L. Firestone, “Making Sense Out of Tradition:  Theology and Conflict in Kant’s Philosophy of Religion,” in Kant and the New Philosophy of Religion, pp. 141-156.
    • This article does a good job of explaining Kant’s views on the proper roles of philosophers and theologians in dealing with religion.
  • Eckart Förster, Kant’s Final Synthesis:  An Essay on the Opus Postumum.  Cambridge, MA:  Harvard University Press, 2000.
    • This is a close study of Kant’s final work.
  • Theodore M. Greene, “The Historical Context and Religious Significance of Kant’s Religion,” translator’s introduction to Religion.
    • This offers a long and still valuable perspective on Kant’s major work in the philosophy of religion.
  • Manfred Kuehn, Kant:  A Biography.  New York:  Cambridge University Press, 2001.
    • This is arguably the best intellectual biography of Kant in English.
  • G. E. Michalson, Jr., The Historical Dimensions of a Rational Faith:  The Role of History in Kant’s Religious Thought.  Washington, DC:  University Press of America, 1979.
    • This book relates Kant’s views on religion to his conception of history.
  • Stephen R. Palmquist, “Introduction” to Religion within the Bounds of Bare Reason, trans. Werner S. Pluhar.  Indianapolis:  Hackett, 2009.
    • This is a long and careful introduction to yet another translation of Kant’s most important book in the philosophy of religion.
  • Stephen R. Palmquist, Kant’s Critical Religion.  Aldershot, UK:  Ashgate, 2000.
    • This book explores its subject in astonishing detail.
  • Wayne P. Pomerleau, Western Philosophies of Religion.  New York:  Ardsley House, 1998.
    • The sixth chapter of this book is a detailed study of Kant’s philosophy of religion.
  • Bernard M. G. Reardon, Kant as Philosophical Theologian.  Totowa, NJ:  Barnes & Noble Books, 1988.
    • This fairly short book nevertheless develops a penetrating analysis of the subject.
  • Philip J. Rossi and Michael Wreen, editors, Kant’s Philosophy of Religion Reconsidered.  Bloomington:  Indiana University Press, 1991.
    • This anthology contains some valuable essays on Kant’s theory.
  • Clement C. J. Webb, Kant’s Philosophy of Religion.  Oxford:  Oxford University Press, 1926.
    • This classic general treatment of this topic is still valuable.
  • Allen W. Wood, “General Introduction” to Theology, pp. xi-xxiv.
    • This is brief but, like all of Wood’s work on this subject, well done.
  • Allen W. Wood, “Kant’s Deism,” in Kant’s Philosophy of Religion Reconsidered, pp. 1-21.
    • This is a provocative article considering the pros and cons of regarding Kant as a deist.
  • Allen W. Wood, Kant’s Moral Religion.  Ithaca, NY:  Cornell University Press, 1970.
    • This is an excellent treatment of Kant’s view of morality as the core of true religion.
  • Allen W. Wood, Kant’s Rational Theology.  Ithaca, NY:  Cornell University Press, 1978.
    • This book is more focused on Kant’s critique of speculative theology.
  • Allen W. Wood, “Rational Theology, Moral Faith, and Religion,” in The Cambridge Companion to Kant, edited by Paul Guyer.  New York:  Cambridge University Press, 1992.
    • This essay offers an illuminating connection of important strands of Kant’s philosophy of religion.


Author Information

Wayne P. Pomerleau
Email: Pomerleau@calvin.gonzaga.edu
Gonzaga University
U. S. A.

Metaethics

Metaethics is a branch of analytic philosophy that explores the status, foundations, and scope of moral values, properties, and words. Whereas the fields of applied ethics and normative theory focus on what is moral, metaethics focuses on what morality itself is. Just as two people may disagree about the ethics of, for example, physician-assisted suicide, while nonetheless agreeing at the more abstract level of a general normative theory such as Utilitarianism, so too may people who disagree at the level of a general normative theory nonetheless agree about the fundamental existence and status of morality itself, or vice versa. In this way, metaethics may be thought of as a highly abstract way of thinking philosophically about morality. For this reason, metaethics is also occasionally referred to as “second-order” moral theorizing, to distinguish it from the “first-order” level of normative theory.

Metaethical positions may be divided according to how they respond to questions such as the following:

  • What exactly are people doing when they use moral words such as “good” and “right”?
  • What precisely is a moral value in the first place, and are such values similar to other familiar sorts of entities, such as objects and properties?
  • Where do moral values come from—what is their source and foundation?
  • Are some things morally right or wrong for all people at all times, or does morality instead vary from person to person, context to context, or culture to culture?

Metaethical positions respond to such questions by examining the semantics of moral discourse, the ontology of moral properties, the significance of anthropological disagreement about moral values and practices, the psychology of how morality affects us as embodied human agents, and the epistemology of how we come to know moral values. The sections below consider these different aspects of metaethics.

Table of Contents

  1. History of Metaethics
    1. Metaethics before Moore
    2. Metaethics in the Twentieth Century
  2. The Normative Relevance of Metaethics
  3. Semantic Issues in Metaethics
    1. Cognitivism versus Non-Cognitivism
    2. Theories of Moral Truth
  4. Ontological Issues in Metaethics
    1. Moral Realisms
    2. Moral Relativisms
  5. Psychology and Metaethics
    1. Motivation and Moral Reasons
    2. Experimental Metaethics
    3. Moral Emotions
  6. Epistemological Issues in Metaethics
    1. Thick and Thin Moral Concepts
    2. Moral Justification and Explanation
  7. Anthropological Considerations
    1. Cross-Cultural Differences
    2. Cross-Cultural Similarities
  8. Political Implications of Metaethics
  9. References and Further Reading
    1. Textual Citations
    2. Anthologies and Introductions

1. History of Metaethics

a. Metaethics before Moore

Although the word “metaethics” (more commonly “meta-ethics” among British and Australian philosophers) was coined in the early part of the twentieth century, the basic philosophical concern regarding the status and foundations of moral language, properties, and judgments goes back to the very beginnings of philosophy. Several characters in Plato’s dialogues, for instance, arguably represent metaethical stances familiar to philosophers today: Callicles in Plato’s Gorgias (482c-486d) advances the thesis that Nature does not recognize moral distinctions, and that such distinctions are solely constructions of human convention; and Thrasymachus in Plato’s Republic (336b-354c) advocates a type of metaethical nihilism by defending the view that justice is nothing above and beyond whatever the strong say that it is. Socrates’ defense of the separation of divine commands from moral values in Plato’s Euthyphro (10c-12e) is also a forerunner of modern metaethical debates regarding the secular foundation of moral values. Aristotle’s grounding of virtue and happiness in the biological and political nature of humans (in Book One of his Nicomachean Ethics) has also been examined from the perspective of contemporary metaethics (compare, MacIntyre 1984; Heinaman 1995). In the classical Chinese tradition, early Daoist thinkers such as Zhuangzi have also been interpreted as weighing in on metaethical issues by critiquing the apparent inadequacy and conventionality of human attempts to reify moral concepts and terms (compare, Kjellberg & Ivanhoe 1996). Many Medieval accounts of morality that ground values in religious texts, commands, or emulation may also be understood as defending certain metaethical positions (see Divine Command Theory). In contrast, during the European Enlightenment, Immanuel Kant sought a foundation for ethics that was less prone to religious sectarian differences, by looking to what he believed to be universal capacities and requirements of human reason. In particular, Kant’s discussions in his Groundwork of the Metaphysics of Morals of a universal “moral law” necessitated by reason have been fertile ground for the articulation of many contemporary neo-Kantian defenses of moral objectivity (for example, Gewirth 1977; Boylan 2004).

Since metaethics is the study of the foundations, if any, of morality, it has flourished especially during historical periods of cultural diversity and flux. For example, responding to the cross-cultural contact engendered by the Greco-Persian Wars, the ancient Greek historian Herodotus reflected on the apparent challenge to cultural superiority posed by the fact that different cultures have seemingly divergent moral practices. A comparable interest in metaethics dominated seventeenth- and eighteenth-century moral discourse in Western Europe, as theorists struggled to respond to the destabilization of traditional symbols of authority—for example, scientific revolutions, religious fragmentation, civil wars—and the grim pictures of human egoism that thinkers such as Bernard Mandeville and Thomas Hobbes were presenting (compare, Stephen 1947). Most famously, the eighteenth-century Scottish philosopher David Hume may be understood as a forerunner of contemporary metaethics when he questioned the extent to which moral judgments might ultimately rest on human passions rather than reason, and whether certain virtues are ultimately natural or artificial (compare, Darwall 1995).

b. Metaethics in the Twentieth Century

Analytic metaethics in its modern form, however, is generally recognized as beginning with the moral writings of G.E. Moore. (Though see Hurka 2003 for an argument that Moore’s innovations must be contextualized by reference to the preceding thought of Henry Sidgwick.) In his groundbreaking Principia Ethica (1903), Moore urged a distinction between merely theorizing about moral goods, on the one hand, and theorizing about the very concept of “good” itself, on the other. (Moore’s specific metaethical views are considered in more detail in the sections below.) Following Moore, analytic moral philosophy became focused almost exclusively on metaethical questions for the next few decades, as ethicists debated whether or not moral language describes facts and whether or not moral properties can be scientifically or “naturalistically” analyzed. (See below for a more specific description of these different metaethical trends.) Then, in the 1970s, largely inspired by the work of philosophers such as John Rawls and Peter Singer, analytic moral philosophy began to refocus on questions of applied ethics and normative theories. Today, metaethics remains a thriving branch of moral philosophy and contemporary metaethicists frequently adopt an interdisciplinary approach to the study of moral values, drawing on disciplines as diverse as social psychology, cultural anthropology, and comparative politics, as well as other fields within philosophy itself, such as metaphysics, epistemology, action theory, and the philosophy of science.

2. The Normative Relevance of Metaethics

Since philosophical ethics is often conceived of as a practical branch of philosophy—aiming at providing concrete moral guidance and justifications—metaethics sits awkwardly as a largely abstract enterprise that says little or nothing about real-life moral issues. Indeed, the pressing nature of such issues was part of the general migration back to applied and normative ethics in the politically-galvanized intellectual climate of the 1970s (described above). And yet, moral experience seems to furnish myriad examples of disagreement concerning not merely specific applied issues, or even the interpretations or applications of particular theories, but sometimes about the very place of morality in general within multicultural, secular, and scientific accounts of the world. Thus, one of the issues inherent in metaethics concerns its status vis-à-vis other levels of moral philosophizing.

As a historical fact, metaethical positions have been combined with a variety of first-order moral positions, and vice versa: George Berkeley, John Stuart Mill, G.E. Moore, and R.M. Hare, for instance, were all committed to some form of Utilitarianism as a first-order moral framework, despite advocating radically different metaethical positions. Likewise, in his influential book Ethics: Inventing Right and Wrong, J.L. Mackie (1977) defends a form of (second-order) metaethical skepticism or relativism in the first chapter, only to devote the rest of the book to the articulation of a substantive theory of (first-order) Utilitarianism. Metaethical positions would appear then to underdetermine normative theories, perhaps in the same way that normative theories themselves underdetermine applied ethical stances (for example, two equally committed Utilitarians can nonetheless disagree about the moral permissibility of eating meat). Yet, despite the logically possible combinations of second and first-order moral positions, Stephen Darwall (2006: 25) notes that, nevertheless, “there do seem to be affinities between metaethical and roughly corresponding ethical theories,” for example, metaethical naturalists have almost universally tended to be Utilitarians at the first-order level, though not vice versa. Notable exceptions to this tendency—that is, metaethical naturalists who are also first-order deontologists—include Alan Gewirth (1977) and Michael Boylan (1999; 2004). For critical responses to these positions, see Beyleveld (1992), Steigleder (1999), Spence (2006), and Gordon (2009).

Other philosophers envision the connection between metaethics and more concrete moral theorizing in much more intimate ways. For example, Matthew Kramer (2009: 2) has argued that metaethical realism (see section four below) is itself actually a first-order moral view as well, noting that “most of the reasons for insisting on the objectivity of ethics are ethical reasons.” (For a similar view about the first-order “need” to believe in the second-order thesis that moral values are “objective,” see also Ronald Dworkin 1996.) Torbjörn Tännsjö (1990), by contrast, argues that, although metaethics is irrelevant to normative theorizing, it may still be significant in other psychological or pragmatic ways, for example, by constraining other beliefs. Nicholas Sturgeon (1986) has claimed that the first-order belief in moral fallibility must be grounded in some second-order metaethical view. And David Wiggins (1976) has suggested that metaethical questions about the ultimate foundation and justification of basic moral beliefs may have deep existential implications for how humans view the question of the meaning of life.

The metaethical question of whether or not moral values are cross-culturally universal would seem to have important implications for how foreign practices are morally evaluated at the first-order level. In particular, metaethical relativism (the view that there are no universal or objective moral values) has been viewed as highly loaded politically and psychologically. Proponents of such relativism often appeal to the alleged open-mindedness and tolerance about first-order moral differences that their second-order metaethical view would seem to support. Conversely, opponents of relativism often appeal to what Thomas Scanlon (1995) has called a “fear of relativism,” citing an anxiety about the first-order effects on our moral convictions and motivations if we become too morally tolerant. (See sections five and eight below for a more detailed discussion of the psychological and political dimensions of metaethics, respectively.) Russ Shafer-Landau (2004) further draws attention to the first-order rhetorical uses of metaethics, for example, Rudolph Giuliani’s evocation of the dangers of metaethical relativism following the terrorist events in the United States on September 11, 2001.

3. Semantic Issues in Metaethics

a. Cognitivism versus Non-Cognitivism

One of the central debates within analytic metaethics concerns the semantics of what is actually going on when people make moral statements such as “Abortion is morally wrong” or “Going to war is never morally justified.” The metaethical question is not necessarily whether such statements themselves are true or false, but whether they are even the sort of sentences that are capable of being true or false in the first place (that is, whether such sentences are “truth-apt”) and, if they are, what it is that makes them “true.”  On the surface, such sentences would appear to possess descriptive content—that is, they seem to have the syntactical structure of describing facts in the world—in the same way that the sentence “The cat is on the mat” seems to be making a descriptive claim about a cat on a mat, which, in turn, is true or false depending on whether or not there really is a cat on the mat. To put it differently, the sentence “The cat is on the mat” seems to be expressing a belief about the way the world actually is. The metaethical view that moral statements similarly express truth-apt beliefs about the world is known as cognitivism. Cognitivism would seem to be the default view of our moral discourse given the structure that such discourse appears to have. Indeed, if cognitivism were not true—such that moral sentences were expressing something other than truth-apt propositions—then it would seem to be difficult to account for why we nonetheless are able to make logical inferences from one moral sentence to another. For instance, consider the following argument:

1. It is wrong to lie.

2. If it is wrong to lie, then it is wrong to get one’s sibling to lie.

3. Therefore, it is wrong to get one’s sibling to lie.

This argument seems to be a valid application of the logical rule known as modus ponens. Yet, logical rules such as modus ponens operate only on truth-apt propositions. Thus, because we seem to be able to legitimately apply such a rule in the example above, such moral sentences must be truth-apt. This argument in favor of metaethical cognitivism by appeal to the apparent logical structure of moral discourse is known as the Frege-Geach Problem in honor of the philosophers credited with its articulation (compare, Geach 1960; Geach 1965 credits Frege as an ancestor of this problem; see also Schueler 1988 for an influential analysis of this problem vis-à-vis moral realism). According to proponents of the Frege-Geach Problem, rejecting cognitivism would force us to treat the separate occurrences of the sentence “it is wrong to lie” in the above argument as homonymous: according to such non-cognitivists, the occurrence in sentence (1) is an expression of a non-truth-apt sentiment about lying, whereas the occurrence in sentence (2) is not, since there the clause is merely entertained as the antecedent of a conditional rather than asserted. Since this homonymy would seem to threaten to undermine the grammatical structure of moral discourse, non-cognitivism must be rejected.
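Schematically (an illustrative formalization, not notation drawn from Geach or Frege themselves), letting $L$ abbreviate “it is wrong to lie” and $S$ abbreviate “it is wrong to get one’s sibling to lie,” the argument has the form of modus ponens:

\[
L,\;\; L \rightarrow S \;\;\vdash\;\; S
\]

The rule licenses the inference only if $L$ and $S$ are the kinds of items that can bear truth-values—which is just what the cognitivist asserts and the non-cognitivist denies.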

Despite this argument about the surface appearance of cognitivism, however, numerous metaethicists have rejected the view that moral sentences ultimately express beliefs about the world. A historically influential forerunner of the alternate theory of non-cognitivism can be found in the moral writings of David Hume, who famously argued that moral distinctions are not derived from reason, but instead represent emotional responses. As such, moral sentences express not beliefs which may be true or false, but desires or feelings which are neither true nor false. This Humean position was renewed in twentieth-century metaethics by the observation that not only are moral disputes often heavily affect-laden in a way many other factual disputes are not, but also that the kind of facts which would apparently be necessary to accommodate true moral beliefs would have to be very strange sorts of entities. Specifically, the worry is that, whereas we can appeal to standards of empirical verification or falsification to adjudicate when our non-moral beliefs are true or false, no such standards seem applicable in the moral sphere, since we cannot literally point to moral goodness in the way we can literally point to cats on mats.

In response to this apparent disanalogy between moral and non-moral statements, many metaethicists embraced a sort of neo-Humean non-cognitivism, according to which moral statements express non-truth-apt desires or feelings. The Logical Positivism of the Vienna Circle adopted this metaethical position, finding anything not empirically verifiable to be semantically “meaningless.” Thus, A.J. Ayer (1936) defended what he called metaethical emotivism, according to which moral expressions are indexed always to the speaker’s own affective state. So, the moral utterance “Abortion is morally wrong” would ultimately mean only that “I do not approve of abortion,” or, more accurately (to avoid even the appearance of having descriptive content), “Abortion—boo!” C.L. Stevenson (1944) further developed metaethical non-cognitivism as involving not merely an expression of the speaker’s personal attitude, but also an implicit endorsement of what the speaker thinks the audience ought to feel. R.M. Hare (1982) similarly analyzed moral utterances as containing both descriptive (truth-apt) as well as ineliminably prescriptive elements, such that genuinely asserting, for instance, that murder is wrong involves a concomitant emotional endorsement of not murdering. Drawing on the work of ordinary-language philosophers such as J.L. Austin, Hare distinguished the act of making a statement (that is, the statement’s “illocutionary force”) from other acts that may be performed concomitantly (that is, the statement’s “perlocutionary force”)— as when, for example, stating “I do” in the context of a marriage ceremony thereby effects an actual legal reality. Similarly, Hare argued that in the case of moral language, the illocutionary act of describing a war as “unjust” may, as part and parcel of the description itself, also involve the perlocutionary force of recommending a negative attitude or action with respect to that war. For Hare, the prescriptive dimension of such an assertion must be constrained by the requirements of universalizability—hence, Hare’s metaethical position is referred to as “universal prescriptivism.”

More recently, sophisticated versions of non-cognitivism have flourished that build into moral expression not only the individual speaker’s normative endorsement, but also an appeal to a socially-shared norm that helps contextualize the endorsement. Thus, Alan Gibbard (1990) defends norm-expressivism, according to which moral statements express commitments not to idiosyncratic personal feelings, but instead to the particular (and, for Gibbard, evolutionarily adaptive) cultural mores that enable communication and social coordination.

Non-cognitivists have also attempted to address the Frege-Geach Problem discussed above, by specifying how the expression of attitudes functions in moral discourse. Simon Blackburn (1984), for instance, has famously argued that non-cognitivism is a claim only about the moral, not the logical parts of discourse. Thus, according to Blackburn, the statement “If it is wrong to lie, then it is wrong to get one’s sibling to lie” can be understood as expressing not an attitude toward lying itself (which is couched in merely hypothetical terms), but rather an attitude toward the disposition to express an attitude toward lying (that is, a kind of second-order sentiment). Since this still essentially involves the expression of attitudes rather than truth-apt assertions, it is still properly a type of non-cognitivism; yet, by distinguishing expressing an attitude directly from expressing an attitude about another (hypothetical) attitude, Blackburn thinks the logical and grammatical structure of our discourse is preserved. Since this view combines the expressive thesis of non-cognitivism with the logical appearance of moral realism, Blackburn dubs it “quasi-realism.”  For a critical response to Blackburn’s attempted solution to the Frege-Geach Problem, see Wright (1988). For an accessible survey of the history of the debate surrounding the Frege-Geach Problem, see Schroeder (2008), and for attempts to articulate new hybrid theories that combine elements of both cognitivism as well as non-cognitivism, see Ridge (2006) and Boisvert (2008).

One complication in the ongoing debate between cognitivist versus non-cognitivist accounts of moral language is the growing realization of the difficulty in conceptually distinguishing beliefs from desires in the first place. Recognition of the mingled nature of cognitive and non-cognitive states can arguably be found in Aristotle’s view that how we perceive and conceptualize a situation fundamentally affects how we respond to it emotionally; not to mention Sigmund Freud’s commitment to the idea that our emotions themselves stem ultimately from (perhaps unconscious) beliefs (compare, Neu 2000). Much contemporary metaethical debate between cognitivists and non-cognitivists thus concerns the extent to which beliefs alone, desires alone, or some compound of the two—what J.E.J. Altham (1986) has dubbed “besires”—are capable of capturing the prescriptive and affective dimension that moral discourse seems to evidence (see Theories of Emotions).

b. Theories of Moral Truth

A related issue regarding the semantics of metaethics concerns what it would even mean to say that a moral statement is “true” if some form of cognitivism were correct. The traditional philosophical account of truth (called the correspondence theory of truth) regards a proposition as true just in case it accurately describes the way the world really is independent of the proposition. Thus, the sentence “The cat is on the mat” would be true if and only if there really is a cat that is really on a mat. According to this understanding, moral expressions would similarly have to correspond to external features about the world in order to be true: the sentence “Murder is wrong” would be true in virtue of its correspondence to some “fact” in the world about murder being wrong. And indeed, several metaethical positions (often grouped under the title of “realism” or “objectivism”—see section four below) embrace precisely this view; although exactly what the features of the world are to which allegedly true moral propositions correspond remains a matter of serious debate. However, there are several obvious challenges to this traditional correspondence account of moral truth. For one thing, moral properties such as “wrongness” do not seem to be the sort of entities that can literally be pointed to or picked out by propositions in the same way that cats and mats can be, since moral properties are not spatio-temporal objects. As David Hume famously put it,

Take any action allow’d to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In which-ever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. (Hume 1740: 468)

Other possible ontological models for what moral “facts” might look like are considered in section four below. In later years, however, several alternative philosophical understandings of truth have proliferated which might allow moral expressions to be “true” without requiring any correspondence to external facts per se. Many of these new theories of moral truth hearken to a suggestion by Ludwig Wittgenstein in the early twentieth century that the meaning of any term is determined by how that term is actually used in discourse. Building on this insight about meaning, Frank Ramsey (1927) extended the account to truth itself. Thus, according to Ramsey, the predicate “is true” does not stand for a property per se, but rather functions as a kind of abbreviation for the indirect assertion of other propositions. For instance, Ramsey suggested that to utter the proposition “The cat is on the mat” is to say the same thing as “The sentence ‘the cat is on the mat’ is true.” The phrase “is true” in the latter utterance adds nothing semantically to what is expressed in the former, since in uttering the former, the speaker is already affirming that the cat is on the mat. This is an instance of the so-called disquotational schema, that is, the view that truth is already implicit in a sentence without the addition of the phrase “is true.” Ramsey wielded this principle to defend a deflationary theory of truth, wherein truth predicates are stripped of any metaphysically substantial property, and reduced instead merely to the ability to be formally represented in a language. Saying that truth is thus stripped of metaphysics is not to say that it is determined by usage in an arbitrary or unprincipled way. This is because, while the deflationary theory defines “truth” merely as the ability to be represented in a language, there are always syntactic rules that a language must follow. The grammar of a language thus constrains what can be properly expressed in that language, and therefore (on the deflationary theory) what can be true. Deflationary truth is in this way constrained by what may be called “warranted assertibility,” and since deflationary truth just is what can be expressed by the grammar of a language, we can say more strongly that truth is warranted assertibility.
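The disquotational schema at work here can be displayed explicitly (a standard schematic rendering, with $\varphi$ standing in for any declarative sentence):

\[
\text{“}\varphi\text{” is true} \;\leftrightarrow\; \varphi
\]

For instance: “The cat is on the mat” is true if and only if the cat is on the mat. On the deflationary reading, the left-hand side adds nothing to the right-hand side beyond the formal device of the truth predicate itself.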

Hilary Putnam (1981) has articulated an influential challenge to the deflationary account. He argues that deflationary truth is unable to accommodate the fact that we normally think of truth as eternal and stable. But if truth just is warranted assertibility (or what Putnam calls “rational acceptability”), then it becomes mutable since warranted assertibility varies depending on what information is available. For instance, the proposition “the Earth is flat” could have been asserted with warrant (that is, accepted rationally) a thousand years ago in a way that it could not be today because we now have more information available about the Earth. But, though warranted assertibility changed in this case, we wouldn’t want to say that the truth of the proposition “the Earth is flat” changed. Based on these problems, philosophers like Putnam refine the deflationary theory by substituting a condition of ideal warrant or justification, that is, where warranted assertibility is not relative to what specific information a speaker may have at a specific moment, but to what information would be accessible to an ideal epistemic agent. What kind of information would such an ideal speaker have? Putnam characterizes the ideal epistemic situation as involving information that is both complete (that is, involving everything relevant) and consistent (that is, not logically contradictory). These two conditions combine to effect a convergence of information for the ideal agent—a view Putnam calls “internal realism.”

This tradition of deflating truth—of what Jamie Dreier has described as “sucking the substance out of heavy-duty metaphysical concepts” (Dreier 2004: 26)—has received careful exposition in recent years by Crispin Wright. Wright (1992) defends a theory of truth he calls “minimalism.” Though indebted in fundamental ways to the tradition—from Wittgenstein to Ramsey to Putnam—discussed above, Wright’s position differs importantly from these accounts. Wright agrees with Putnam’s criticism of traditional deflationary theories of truth, namely that they make truth too variable by identifying it with something as mutable as warranted assertibility. However, Wright disagrees with Putnam that truth is constrained by the convergence of information that would be available to an epistemically ideal agent. This is because Wright thinks that it is apparent to speakers of a language that something may be true even if it is not justified in ideal epistemic conditions. Wright calls this piece of common understanding a “platitude.” Platitudes, says Wright, are what ordinary language users pre-theoretically mean, and Wright identifies several specific platitudes we have concerning truth, for example, that a statement can be true without being justified, that truth-apt propositions have negations that are also thereby truth-apt, and so forth. Such platitudes serve the same purpose of checking and balancing truth that warranted assertibility or ideal convergence served in the theories of Ramsey and Putnam (Wright calls this check and balance “superassertibility”). As Wright puts it, “If an interpretation of “true” satisfies these platitudes, there is, for minimalism, no further, metaphysical question whether it captures a concept worth regarding as truth” (1992: 34). Wright’s theory of minimalist truth has been extraordinarily influential in metaethics, particularly among non-cognitivists eager to accommodate some of the logical structure that moral discourse apparently evidences, but without viewing moral utterances as expressing beliefs that must literally correspond to facts. Such a non-cognitivist theory of minimalist moral truth is defended by Simon Blackburn (1993), who characterizes the resultant view as “quasi-realism” (as discussed in section 3a above). For a critical discussion of the extent to which non-cognitivist views such as Blackburn’s quasi-realism can leverage Wright’s theory of minimalism, see the debate between Michael Smith (1994) and John Divers and Alexander Miller (1994).

4. Ontological Issues in Metaethics

a. Moral Realisms

If moral truth is understood in the traditional sense of corresponding to reality, what sort of features of reality could suffice to accommodate this correspondence? What sort of entity is “wrongness” or “goodness” in the first place? The branch of philosophy that deals with the way in which things exist is called “ontology,” and metaethical positions may also be divided according to how they envision the ontological status of moral values. Perhaps the biggest schism within metaethics is between those who claim that there are moral facts that are “real” or “objective” in the sense that they exist independently of any beliefs or evidence about them, and those who think that moral values are not belief-independent “facts” at all, but are instead created by individuals or cultures in sometimes radically different ways. Proponents of the former view are called realists or objectivists; proponents of the latter view are called relativists or subjectivists.

Realism / objectivism is often defended by appeal to the normative or political implications of believing that there are universal moral truths that transcend what any individual or even an entire culture might think about them (see sections two and eight). Realist positions, however, disagree about what precisely moral values are if they are causally independent of human belief or culture. According to some realists, moral values are abstract properties that are “objective” in the same sense that geometrical or mathematical properties might be thought to be objective. For example, it might be thought that the sentence “Dogs are canines” is true in a way that is independent of what humans think about it, without thereby believing that there is a literal, physical thing called “dogs”—for dogs-in-general (rather than a particular dog, say, Fido) is an abstract concept. Some moral realists envision moral values as real without being physical in precisely this way; and because of the similarity between this view and Plato’s famous Theory of Forms, such moral realists are also sometimes called moral Platonists. According to such realists, moral values are real without being reducible to any other kinds of properties or facts: moral values are instead ontologically unique (or sui generis) and irreducible to other kinds of properties. Proponents of this type of Platonist or sui generis version of moral realism include G.E. Moore (1903), W.D. Ross (1930), W.D. Hudson (1967), Iris Murdoch (1970, arguably), and Russ Shafer-Landau (2003). Tom Regan (1986) also discusses the effect of this metaethical position on the general intellectual climate of the fin de siècle movement known as the Bloomsbury Group.

Other moral realists, though, conceive of the ontology of moral properties in much more concrete terms. According to these realists, moral properties such as “goodness” are not purely abstract entities, but are always instead realized and embodied in particular physical states of affairs. These moral realists often draw analogies between moral properties and scientific properties such as gravity, velocity, mass, and so forth. These scientific concepts are commonly thought to exist independent of what we think about them, and yet they are not part of an ontologically distinct world of pure, abstract ideas in the way that Plato envisioned. So too might moral properties ultimately be reducible to scientific features of the world in a way that preserves their objectivity. An early proponent of such a naturalistic view is arguably Aristotle himself, who anchored his ethics to an understanding of what biologically makes human life flourish. For a later Aristotelian moral realism, see Paul Bloomfield (2001). However, for questions about the extent to which Aristotelianism can truly pair with moral realism, see Robert Heinaman (1995). Note also that several other metaethicists who share broadly Aristotelian conceptions of human needs and human flourishing nonetheless reject realism, arguing that even a shared human nature still essentially locates moral values in human sensibility rather than in some trans-human moral reality. For examples of such naturalistic moral relativism, see Philippa Foot (2001) and David B. Wong (2006). Similar claims about the ineliminable roles that human sensibility and language play in constituting moral reality have looked less to Aristotle and more to Wittgenstein; although, as with the former, there may be some discomfort allowing views that closely link morality with human sensibilities to be called genuinely “realist.” For examples, see in particular David Wiggins (1976) and Sabina Lovibond (1983). Other notable theorists who have advanced Wittgensteinian accounts of the constitutive role that language and context play in our understanding of morality include G.E.M. Anscombe (1958) and Alasdair MacIntyre (1981), although both are explicitly agnostic about whether this commits them to moral realism or relativism.

The naturalistic tradition of moral realism is continued by contemporary theorists such as Alan Gewirth (1980), Deryck Beyleveld (1992), and Michael Boylan (2004), who similarly seek to ground moral objectivity in certain universal features of humans. Unlike Aristotelian appeals to our biological and social nature, however, these theorists adopt a Kantian stance, which appeals to the capacities and requirements of rational agency—for example, what Gewirth has called “the principle of generic consistency.” While these neo-Kantian theories are focused more on questions about the justification of moral beliefs than on the existence of belief-independent values or properties, they may nonetheless be classed as moral realisms in light of their commitment to the objective and universal nature of rationality. For commentary and discussion of such theories, see in particular Steigleder (1999), Boylan (1999), Spence (2006), and Gordon (2009).

Other naturalistic theories have looked to scientific models of property reductionism as a way of understanding moral realism. In the same way that, for instance, our commonsense understanding of “water” refers to a property that, at the scientific level, just is H2O, so too might moral values be reduced to non-moral properties. And, since these non-moral properties are real entities, the resultant view about the values that reduce to them can be considered a form of moral realism—without any need to posit trans-scientific, other-worldly Platonic entities. This general approach to naturalistic realism is often referred to as “Cornell Realism” in light of the fact that several of its prominent advocates studied or taught at Cornell University. Geoff Sayre-McCord (1988) has also famously dubbed it “New Wave Moral Realism.” Individual proponents of such a view may have divergent views concerning how the alleged “reduction” of the moral to the non-moral works precisely. Richard Boyd (1988), for instance, defends the view that the reductive relationship between moral and non-moral properties is a posteriori and necessary, but not thereby singular; moral properties might instead reduce to a “homeostatic cluster” of different overlapping non-moral properties.

Several other notable examples of scientifically-minded naturalistic moral realism have been defended. Nicholas Sturgeon (1988) has similarly argued in favor of a reduction of moral to non-moral properties, while emphasizing that a reduction at the level of the denotation or extension of our moral terms need not entail a corresponding reduction at the level of the connotation or intension of how we talk about morality. In other words, we can affirm that values just are (sets of) natural properties without thereby thinking we can or should abandon our moral language or explanatory/justificatory processes. David Brink (1989) has articulated a similar type of naturalistic moral realism which emphasizes the epistemological and motivational aspects of Cornell Realism by defending a coherentist account of justification and an externalist theory of motivation, respectively. Peter Railton (1986) has also offered a version of naturalistic moral realism according to which moral properties are reduced to non-moral properties; however, the non-moral properties in question are not so much scientific properties (or clusters of such properties), but are instead constituted by the “objective interests” of ideal epistemic agents or “impartial spectators.” Yet another variety of naturalistic moral realism has been put forward by Frank Jackson and Philip Pettit (1995). According to their view of “analytic moral functionalism,” moral properties are reducible to “whatever plays their role in mature folk morality.” Jackson’s (1998) refinement of this position—which he calls “analytic descriptivism”—elaborates that the “mature folk” properties to which moral properties are reducible will be “descriptive predicates” (although Jackson allows for the possibility that these descriptive predicates need not be physical or even scientific).

A helpful way to understand the differences between all these varieties of moral realism—namely, the Platonic versus the naturalistic versions—is by appeal to a famous argument advanced by G.E. Moore at the beginning of twentieth-century metaethics. Moore—himself an advocate of the Platonic view of morality—argued that moral properties such as “good” cannot be solely defined by scientific, natural properties such as “biological flourishing” or “social coordination” for the simple reason that, given such an alleged definition, we could still always sensibly ask whether such scientific properties were themselves truly good or not. The apparent ability to always keep the moral status of any scientific or natural thing an “open question” led Moore to reject any analysis of morality that defined moral values as anything other than simply “moral,” period. Any attempt to violate this ban must result, Moore believed, in committing the “naturalistic fallacy.” Moral Platonists or non-naturalistic realists tend to view Moore’s Open Question Argument as persuasive. Naturalistic realists, by contrast, argue that Moore’s argument is unconvincing on the grounds that not all truths—moral or otherwise—need to be true solely by definition. After all, such realists will argue, a scientific statement such as “Water is H2O” is true even though it is not true by definition; people could (and for a long time did) sensibly question it.

Michael Smith (1994) has referred to this realist strategy of treating moral properties as natural properties which humans discover, rather than as matters settled simply by definition, as “synthetic ethical naturalism.” One argument against this form of moral realism has been developed by Terry Horgan and Mark Timmons (1991), on the basis of a thought-experiment called Moral Twin Earth. This thought-experiment asks us to imagine two different worlds: the actual Earth as we know it, and an alternate-reality Earth whose inhabitants use the very same moral terms as those on the actual Earth, and whose moral terms likewise refer to natural/scientific properties (just as the naturalistic moral realist wants to say). However, Horgan and Timmons point out that we can at the same time imagine that the moral terms on our actual Earth track properties that maximize overall happiness (as Utilitarianism maintains), while the moral terms used on hypothetical Moral Twin Earth track properties of universal rationality (as Kantian normative theorists maintain). If, as the naturalistic realist maintains, moral terms simply refer to whatever natural properties regulate their use, then the moral terms used on actual Earth and those used on Moral Twin Earth would have different meanings, since they track different natural properties, even though the terms function identically within each community. In other words, if the naturalistic realists were correct about the reduction of moral to non-moral predicates, then the Earthlings and Twin Earthlings would have to be interpreted not as genuinely disagreeing about morality, but as instead talking past one another altogether; and, according to Horgan and Timmons, this would be highly counter-intuitive, since it seems on the surface that the two parties are truly disagreeing.

Centrally at issue in the Moral Twin Earth argument is the question of how precisely naturalistic realists envision moral properties being “reduced” to natural, scientific properties in the first place. Such realists frequently invoke the metaphysical relationship of supervenience to account for the way that moral properties might connect to scientific properties. For one property or set of properties to supervene on another means that there can be no difference in the first without some corresponding difference in the second. For instance, to say that the color property of greenness supervenes on the physical properties of grass is to say that if two plots of grass are identical in all biological, scientific ways, then they will be green in exactly the same way too. Simon Blackburn (1993: 111-129), however, has raised a serious objection to using this notion to explain moral realism. Since the realist admits no analytic entailment from natural facts to moral facts, Blackburn claims, we should be able to imagine two different worlds (akin to Horgan and Timmons’ Moral Twin Earth) that are identical in their natural, scientific facts, yet in which killing is morally wrong in one world but not wrong in the other. And if we can coherently imagine these two worlds, then there is no reason why we should not also be able to imagine a third “mixed” world in which otherwise identical killings are sometimes wrong and sometimes not. But Blackburn does not think we can in fact imagine such a strange morally mixed world, for he believes that it is part of our conception of morality that moral wrongness or rightness does not just change haphazardly from case to case, all things being equal. As Blackburn says, “While I cannot see an inconsistency in holding this belief [namely, the view that moral propositions report factual states of affairs upon which the moral properties supervene in an irreducible way], it is not philosophically very inviting. Supervenience becomes, for the realist, an opaque, isolated, logical fact for which no explanation can be proffered” (1993: 119). In this way, Blackburn is not objecting to the supervenience relation per se, but rather to attempts to leverage this relation in favor of moral realism. For a critical examination of supervenience in principle, see Kim (1990); Blackburn attempts to refurbish his notion of supervenience in response to Kim’s critique in Blackburn (1993: 130-148).
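To make the relation more explicit, the supervenience claim can be given a standard schematic statement; what follows is an illustrative regimentation of “weak” supervenience, of the sort examined critically by Kim (1990), not any particular author’s own formulation. Letting ℳ stand for the set of moral properties and 𝒩 for the set of natural properties, to say that the moral supervenes on the natural is to say:

□ ∀x ∀y [ (∀N ∈ 𝒩)(Nx ↔ Ny) → (∀M ∈ ℳ)(Mx ↔ My) ]

That is, necessarily, any two items (acts, persons, situations) that are indiscernible with respect to their natural properties are indiscernible with respect to their moral properties. Framed this way, Blackburn’s worry becomes vivid: the formula bans morally “mixed” assignments within any given world, yet nothing in the realist’s picture explains why this intra-world ban should hold once the realist concedes that naturally identical worlds may nonetheless differ morally from one another.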

Apart from the debate between naturalistic and non-naturalistic moral realists, some metaethicists have explored the possibility that moral properties might be “real” without needing to be fully independent of human sensibility. According to these theories of moral realism, moral values might be akin to so-called “dispositional properties.” A dispositional property (sometimes understood as a “secondary quality”) is envisioned as a sort of latent potential or disposition, inherent in some external object or state of affairs, that becomes activated or actualized through interaction with some other object or state of affairs. Thus, for example, the properties of being fragile or looking red are thought to involve a latent disposition to break under certain conditions or to appear red in a certain light. The suggestion that moral values might be similarly dispositional was made famous by John McDowell (1985). According to this view, moral properties such as “goodness” can still be real at the level of dispositional possibility (in the same way that glass is still fragile even when it is not breaking, or that blood is still red in the darkness), while only being expressible by reference to the responses (those of suitably sensitive moral agents, in the case of morality) that would actualize those dispositions. For similar metaethical positions that seek to articulate a model of moral values which are objective, yet relational to aspects of human sensibility, see David Wiggins (1976), Sabina Lovibond (1983), David McNaughton (1988), Mark Platts (1991), Jonathan Dancy (2000), and DeLapp (2009). Arguments against this form of dispositional moral realism typically attempt to leverage alleged disanalogies between moral properties and other, non-moral dispositional properties (see especially Blackburn 1993).

b. Moral Relativisms

Other metaethical positions reject altogether the idea that moral values—whether naturalistic, non-naturalistic, or dispositional—are real or objective in the sense of being independent from human belief or culture in the first place. Such positions instead insist on the fundamentally anthropocentric nature of morality. According to such views, moral values are not “out there” in the world (whether as scientific properties, dispositional properties, or Platonic Forms) at all, but are created by human perspectives and needs. Since these perspectives and needs can vary from person to person or from culture to culture, these metaethical theories are usually referred to as either “subjectivism” or “relativism” (sometimes also “moral nihilism,” although this is a more normatively loaded term). Many of the reasons offered in favor of metaethical relativism stem either from a rejection of the realist ontological models discussed above, or from psychological, epistemological, or anthropological considerations (see sections 5, 6, and 7 below).

Most forms of metaethical relativism envision moral values as constructed for different, and sometimes incommensurable, human purposes, such as social coordination. This view is explicitly endorsed by Gilbert Harman (1975), but may also be implicitly associated in different ways with any position that conceives of moral value as constructed by divine commands (Adams 1987; see also Divine Command Theory), idealized human rationality (Korsgaard 1996) or perspective (Firth 1952), or a social contract between competing interests (Scanlon 1982; Copp 2007). For this reason, the view is also sometimes known as moral constructivism (compare, Shafer-Landau 2003: 39-52). Furthermore, metaethical relativism must be distinguished from the non-cognitivist metaethical views considered above in section three. Non-cognitivism is a semantic thesis about what moral utterances mean—namely, that moral utterances are neither true nor false at all, but instead express prescriptive endorsements or norms. Metaethical subjectivism/relativism/constructivism, by contrast, acknowledges the semantic accuracy of cognitivism—according to which moral utterances are either true or false—but denies that there is any belief-independent moral reality for such utterances to correspond to. That is, metaethical subjectivism/relativism/constructivism is a thesis about the (lack of) objective moral facts in the world, not a thesis about what we humans are doing when we try to talk about such facts. In its starkest form, on which our cognitivist moral language turns out to be systematically false, the view is known as moral error theory (Mackie 1977) or moral fictionalism (Kalderon 2005).

Although metaethical relativism is often depicted as embracing a valueless world of moral free-for-all, more sophisticated versions of the theory have attempted to place certain boundaries on morality in a way that still affirms the fundamental human-centeredness of values. Thus, David B. Wong (1984; 2006) has defended a view he calls pluralistic moral relativism, according to which moral values are constructed differently by different social groups for different purposes, but in such a way that the degree of relativity is nonetheless constrained by a generally uniform biological account of human nature and flourishing. A similar conception of metaethical relativism that is nonetheless grounded in some notion of universal human biological characteristics may be found in Philippa Foot (2001).

5. Psychology and Metaethics

One of the most pressing questions within analytic metaethics concerns how morality engages our embodied human psychologies. Specifically, how (if at all) do moral judgments move us to act in accordance with them? Is there any reason to be moral for its own sake, and can we give any psychologically persuasive reasons to others to act morally if they do not already acknowledge such reasons? Is it part of the definition of moral concepts such as “right” and “wrong” that they should or should not be pursued, or is it possible to know that, say, murder is morally wrong, but nonetheless not recognize any reason not to murder?

a. Motivation and Moral Reasons

Those who argue that the psychological motivation to act morally is already implicit in the judgment that something is morally good are commonly called motivational internalists. Motivational internalists may further be divided into weak and strong motivational internalists, according to the strength of the motivation that they think true moral judgments come pre-packaged with. Thus, the Socratic view that evil is always performed out of ignorance (for no one, goes the argument, would knowingly do something that would morally damage their own character or soul) may be seen as a type of strong motivational internalism. Weaker versions of motivational internalism may insist only that moral judgments supply their own impetus to act accordingly, but that this impetus can (and perhaps often does) get overruled by countervailing motivational forces. Thus, Aristotle’s famous account of “weakness of the will” has been interpreted as a weaker sort of motivational internalism, according to which a person may recognize that something is morally right, and may even want at some level to do what is right, but is nonetheless lured away from such action, perhaps by stronger temptations.
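The difference between the two versions can be stated schematically; what follows is an illustrative regimentation in the spirit of what Michael Smith (1994) calls the “practicality requirement,” not a quotation of any particular author. Where J(a, φ) abbreviates “agent a judges that φ-ing is morally right” and M(a, φ) abbreviates “a is motivated, at least to some degree, to φ”:

Strong internalism: □ ∀a ∀φ [ J(a, φ) → M(a, φ) ]

Weak internalism: □ ∀a ∀φ [ J(a, φ) → (M(a, φ) ∨ a is practically irrational) ]

On the strong reading, motivation can never fail to accompany a sincere moral judgment; on the weak reading, it accompanies the judgment by default but may be absent or overridden under conditions of practical irrationality, such as weakness of will.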

Apart from what actually motivates people to act in accordance with their moral judgments, however, there is the somewhat different question about whether such judgments also supply their own intrinsic reasons to act in accordance with them. Reasons-externalists assert that sincerely judging that something is morally wrong, for instance, automatically supplies a reason for the judger that would justify her acting on the basis of that judgment, that is, a reason that is external to or independent of what the judger herself feels or wants. This need not mean that such a justification is an objectively adequate justification (that would hinge on whether one was a realist or relativist about metaethics), only that it would make sense as a response to the question “Why did you do that?” to say “Because I judged that it was morally right” (compare, McDowell 1978; Shafer-Landau 2003). According to reasons-internalists, however, judging and justifying are two conceptually different matters, such that someone could make a legitimate judgment that an action was morally wrong and still fail to have any reason that would justify their not performing it. On this view, sufficiently justifying moral reasons must instead be grounded internally in a person’s psychological makeup, that is, in her own desires and motivations (compare, Foot 1972; Williams 1979).

Closely related to the debates between internalism and externalism is the question of the metaethical status of alleged psychopaths or sociopaths. According to some moral psychologists, such individuals are characterized by a failure to distinguish moral values from merely conventional values (compare, Blair 1995). Several metaethicists have pointed to the apparent existence of psychopaths as support for the truth of either motivational externalism or reasons-internalism, since psychopaths seem to be able to judge that, for instance, murder or lying are morally wrong, but either feel little or no motivation to refrain from these things, or else do not recognize any reason that should justify refraining from these things. Motivational internalists and reasons-externalists, however, have also sought to accommodate the challenge presented by the psychopath, for example, by arguing that the psychopath does not truly, robustly know that what she is doing is wrong, but only knows how to use the word “wrong” in roughly the way that the rest of society does.

A separate issue related to the internalist/externalist debate concerns the apparent psychological uniqueness of moral judgments. Specifically, at least according to the motivational internalist and reasons-externalist, moral judgments are supposed to supply, respectively, their own inherent motivations or justifying reasons, that is, their own intrinsic quality of “to-be-pursuedness.” Yet, this would seem to render morality suspiciously unique—or what J.L. Mackie (1977) calls “metaphysically queer”—since all other, non-moral judgments (for example, scientific, factual, or perceptual judgments) do not seem to provide any inherent motivations or justifications. The objection is not that non-moral judgments (for example, “This coffee is decaffeinated”) supply no motivational or justificatory force, but merely that any such motivation or justificatory force hinges on other psychological factors independent of the judgment itself (that is, the judgment about the coffee being decaffeinated will only motivate or provide a reason for you to drink it if you already have the desire to avoid caffeine). Unlike the factual judgment about the coffee, though, the moral judgment that an action is wrong is supposed to be motivating or reasons-giving regardless of the judger’s personal desires or interests. Motivational internalists or reasons-externalists have responded to this alleged “queerness” by either embracing the uniqueness of moral judgments, or else by attempting to articulate other examples of non-moral judgments which might also inherently supply motivation or reasons.

b. Experimental Metaethics

Not only has psychology been of interest to metaethicists, but metaethics has also been of interest to psychologists. The movement known as experimental philosophy (compare, Appiah 2008; Knobe and Nichols 2008)—which seeks to supplement theoretical philosophical claims with empirical attention to how people actually think and act—has yielded numerous suggestive findings about a variety of metaethical positions. For example, drawing on empirical research in social psychology, several philosophers have suggested that moral judgments, motivations, and evaluations are highly sensitive to situational variables in a way that might challenge the universality or autonomy of morality (Flanagan 1991; Doris 2002). Other moral psychologists have explored the possibilities of divergences in moral reasoning and valuation with respect to gender (Gilligan 1982), ethnicity (Markus and Kitayama 1991; Miller and Bersoff 1992), and political affiliation (McCrae and John 1992; Haidt and Graham 2007).

The specific debate between metaethical realism and relativism has also recently been examined from experimental perspectives. It has been argued that an empirically-informed analysis of people’s actual metaethical commitments (such as they are) is needed as a check and balance on the frequent appeals to “commonsense morality” or “ordinary moral experience.” Realists as well as relativists have often used such appeals as a means of locating a burden of proof for or against their theories, but the actual experimental findings about lay-people’s metaethical intuitions remain mixed. For examples of realists assuming folk realism, see Brink (1989: 25), Smith (1994: 5), and Shafer-Landau (2003: 23); for an example of a relativist assuming folk relativism, see Harman (1985); and for examples of relativists assuming folk realism, see Mackie (1977) and Joyce (2001: 70). William James (1896: 14) offered an early psychological description of humans as “absolutists by instinct,” although James’ specific metaethical commitments remain unclear (compare, Suckiel 1982). On the one hand, Shaun Nichols (2004) has argued that metaethical relativism is particularly pronounced among college undergraduates. On the other hand, William Rottschaefer (1999) has argued instead that moral realism is empirically supported by attention to effective child-rearing practices.

c. Moral Emotions

Another psychological topic that has been of interest to metaethicists is the nature and significance of moral emotions. One aspect of this debate has been the perennial question of whether it is fundamentally rationality which supplies our moral distinctions and motivations, or whether these are instead generated or conditioned by passions and sentiments which are separate from reason. (See section 5a above for more on this debate.) In particular, this debate was one of the dividing issues in eighteenth-century ethics between the so-called Intellectualist School (for example, Ralph Cudworth, William Wollaston, and so forth), which stressed the rational grasp of certain “moral fitnesses” on the one hand, and the Sentimentalist School (for example, Shaftesbury, David Hume, and so forth), which stressed the role played by our non-cognitive “moral sense” on the other hand (compare, Selby-Bigge 1897; see also Darwall 1995 for an application of these views to contemporary metaethical debates about moral motivation and knowledge).

Aside from motivational and epistemological issues, however, moral emotions have been of interest to metaethicists in terms of the apparent phenomenology they furnish. In particular, attention has been given to which metaethical theory, if any, better accommodates the existence of self-regarding “retributive emotions,” such as guilt, regret, shame, and remorse. Martha Nussbaum (1986) and Bernard Williams (1993), for example, have drawn compelling attention to the powerful emotional responses characteristic of Greek tragedy, and the so-called moral luck that such experiences seem to involve. According to Williams (1965), sensitivity to moral dilemmas will reveal a picture of the moral sphere according to which even the best-intentioned actions may leave moral “stains” or “remainders” on our character. Michael Stocker (1990) extends this analysis of moral emotions to more general scenarios of ineliminable conflicts between values, and Kevin DeLapp (2009) explores the specific implications of tragic emotions for theories of moral realism. By contrast, Gilbert Harman (2009) has argued against the moral (let alone metaethical) significance of guilt feelings. Patricia Greenspan (1995), however, has leveraged the phenomenology of guilt (particularly as she identifies it in cases of unavoidable wrong-doing) as a defense of moral realism. For more perspectives on the nature and significance of moral dilemmas, see Gowans (1987). For more on the philosophy of emotions in general, see Calhoun & Solomon (1984).

6. Epistemological Issues in Metaethics

Analytic metaethics also explores questions of how we make moral judgments in the first place, and how (if at all) we are able to know moral truths. The field of moral epistemology can be divided into questions about what moral knowledge is, how moral beliefs can be justified, and where moral knowledge comes from.

a. Thick and Thin Moral Concepts

Moral epistemology explores the contours of moral knowledge itself—not the specific content of individual moral beliefs, but the conceptual characteristics of moral beliefs as a general epistemic category. Here, one of the biggest questions concerns whether moral knowledge involves claims about generic moral values such as “goodness” or “wrongness” (so-called “thin” moral concepts) or whether moral knowledge may be obtained at the somewhat more concrete level of concepts such as “courage,” “intemperance,” or “compassion” (which seem to have a “thicker” descriptive content). The general methodology of the thick-thin distinction was popularized by Clifford Geertz (1973) following the introduction of the terminology by Gilbert Ryle (1968). Its specific application to metaethics, however, is due largely to Bernard Williams’ (1985) famous argument that genuine (that is, action-guiding) moral knowledge can only exist at the thicker level of concrete moral concepts. This represents what Williams called the “limits of philosophy,” since philosophical theorizing aims instead at more abstract, thin moral principles. Furthermore, according to Williams, this epistemological point about the thickness of moral knowledge has important implications for the ontology of moral values; namely, Williams defends a kind of metaethical relativism on the grounds that, even if thin moral concepts such as “goodness” are universal across different societies, the more specific thick concepts that he thinks really matter to us morally are specified in often divergent ways; for example, two societies that both praise “goodness” may nonetheless have quite different understandings of what counts as “bravery.”

Emphasis on thick moral concepts has been prevalent in virtue ethics in general. For example, Alasdair MacIntyre (1984) has famously defended the neo-Aristotelian view that ethics must be grounded in a “tradition” that is coherent and stable enough to thickly specify virtues and virtuous role-models. Indeed, part of the challenge that MacIntyre sees facing contemporary societies is that increased cross-cultural interconnectedness has fomented a fragmentation of traditional virtue frameworks, engendering a moral cacophony that threatens to undermine moral motivation, knowledge, and even our confidence in what counts as “rational” (MacIntyre 1988). More recently, David B. Wong (2000) has offered a contemporary Confucian response to MacIntyre-style worries about moral fragmentation in democratic societies, arguing that pluralistic societies may still retain a coherent tradition in the form of civic “rituals” such as voting.

A related metaethical issue concerns the scope of moral judgments: may such judgments ever legitimately be made universally, or ought they instead to be indexed to particular situations or contexts? The latter view is commonly known as moral particularism (compare, Hooker and Little 2000; Dancy 2006).

b. Moral Justification and Explanation

Metaethical positions may also be divided according to how they envision the requirements of justifying moral beliefs. Traditional philosophical accounts of epistemological justification are here adapted and modified specifically to accommodate moral knowledge. One popular theory of moral-epistemic justification may be called metaethical foundationalism—the view that moral beliefs are epistemically justified by appeal to other moral beliefs, until this justificatory process terminates at some bedrock beliefs whose own justifications are “self-evident.” By contrast, metaethical coherentism requires for the epistemic justification of a moral belief only that it be part of a network of other beliefs, all of which are jointly consistent (compare, Sayre-McCord 1985; Brink 1989). Mark Timmons (1996) also defends a form of metaethical contextualism, according to which justification is determined either by reference to some relevant set of epistemic practices and norms (a view Timmons calls “normative contextualism,” which also bears strong similarity to the movement known as virtue epistemology), or else by reference to some more basic beliefs (a view Timmons calls “structural contextualism,” which seems very similar to foundationalism). Kai Nielsen (1997) has offered another account of contextualist ethical justification with reference to internal systems of religious belief and explanation (see Religious Epistemology).

Early twenty-first-century work in metaethics has explored precisely what is involved in the “self-evidence” envisioned by foundationalist accounts of moral justification. Roger Crisp (2002) notes that most historical deployments of “self-evidence” in moral epistemology tended to associate it with obviousness or certainty. For instance, the ethical intuitionism of much of the early part of the twentieth century (particularly following Moore’s Open Question Argument, as discussed above) tended to adopt this stance toward moral truths (compare, Stratton-Lake 2002). It was this understanding of metaethical foundationalism which led J.L. Mackie (1977) to object to what he saw as the “epistemological queerness” of realist or objectivist ontology. In later years, though, more sophisticated versions of metaethical foundationalism have sought interpretations of the “self-evidence” of basic, justifying moral beliefs that need not involve dogmatic or naive assumptions of obviousness, but might instead require only that such basic moral beliefs be epistemically justified non-inferentially (Audi 1999; Shafer-Landau 2003). One candidate for what it might mean for a moral belief to be epistemically justified non-inferentially has involved an appeal to the model of perceptual beliefs (Blum 1991; DeLapp 2007). Non-moral perceptual beliefs are typically viewed as decisive vis-à-vis justification, provided the perceiver is in appropriate, reliable perceptual conditions. In other words, according to this view, the belief “There is a coffee mug in front of me” is epistemically justified just in case one takes oneself to be perceiving a coffee mug and provided that one is not suffering from hallucinations, merely using one’s peripheral vision, or in a dark room. (See also epistemology of perception.)

Although not addressing this issue of moral perception, Russ Shafer-Landau (2003) has argued on a related note that, ultimately, the difference between metaethical naturalism and non-naturalism (as described in section 4a) may be not so much ontological or metaphysical as epistemological. Specifically, according to Shafer-Landau, metaethical naturalists are those who require that the epistemic justification of moral beliefs be inferred on the basis of other non-moral beliefs about the natural world; whereas metaethical non-naturalists allow for the epistemic justification of moral beliefs to terminate in some brute moral beliefs that are themselves sui generis.

Aside from the questions of the scope, source, and justification of moral beliefs, another epistemological facet of metaethics concerns the explanatory role that putative moral properties play with respect to moral beliefs. A useful way to frame this issue is by reference to Roderick Chisholm’s (1981) influential point about direct attribution. Chisholm noted that we refer to external things by attributing properties to them directly. Using this language, we may frame the metaethical question as whether or not our attribution of moral properties to actions, characters, and so forth, is “direct” (that is, external). Gilbert Harman (1977) has famously argued that our attribution of moral properties is not direct in this way. According to Harman, objective moral properties, if they existed, would be explanatorily impotent, in the sense that our specific, first-order moral beliefs can already be sufficiently accounted for by appealing to naturalistic, psychological, or perceptual factors. For example, if we were to witness people gleefully torturing a defenseless animal, we would likely form the belief that their action is morally wrong; but, according to Harman, we could adequately explain this moral evaluation solely by citing various sociological, emotional, behavioral, and perceptual causal factors, without needing to posit any mysterious additional properties that our evaluation is also channeling. This explanatory impotence, Harman believes, constitutes a serious disanalogy between, on the one hand, the role that abstract metaethical properties play in actual (first-order) moral judgments and, on the other hand, the role that theoretical scientific entities play in actual (first-order) perceptual judgments. For example, imagine that we were witnessing the screen-representation of a particle accelerator, instead of people torturing an animal. Although we do not literally see a subatomic particle on the screen (rather, we see a bunch of pixels which we interpret as referring to a subatomic particle) any more than we literally see “wrongness” floating around the animal-torturers, the essential difference between the two cases is that the additional abstract belief that there really are subatomic particles is necessary to explain why we infer them on the basis of screen-pixels; whereas, according to Harman, the alleged property of objective “wrongness” is unnecessary to explain why we disapprove of torture. Nicholas Sturgeon (1988), however, has argued contrary to Harman that second-order metaethical properties do play legitimate explanatory roles, for the simple reason that they are cited in people’s justification of why they find the torturing of animals morally wrong. Thus, for Sturgeon, what will count as the “best explanation” of a phenomenon—namely, the phenomenon of morally condemning the torturing of an animal—must be understood in the broader context of our overall explanatory goals, one of which will be to make sense of why we think that torturing animals is objectively wrong in the first place.

7. Anthropological Considerations

Although much of analytic metaethics concerns rarefied debates that can often be highly abstracted from actual, applied moral concerns, several metaethical positions have also drawn heavily on cultural anthropological considerations to motivate or flesh out their views. After all, as discussed above in section one, it has often been actual, historical moments of cultural instability or diversity that have stimulated metaethical reflection on the nature and status of moral values.

a. Cross-Cultural Differences

One of the most influential anthropological aspects of metaethics concerns the apparent challenge that pervasive and persistent cross-cultural moral disagreement would seem to present for moral realists or objectivists. If, as the realist envisions, moral values were truly universal and objective, then why do so many different people seem to have such drastically different convictions about what is right and wrong? The more plausible explanation of the fact that people persistently disagree about moral matters, so the argument goes, is simply that there are no objective moral truths capable of settling their dispute. As opposed to the apparent convergence in other, non-moral realms of dispute (for example, scientific, perceptual, and so forth), moral disagreement seems both ubiquitous and largely resistant to rational adjudication. J.L. Mackie (1977) leverages these features of moral disagreement to motivate what he calls The Argument from Relativity. This argument begins with the descriptive, anthropological observation that different cultures endorse different moral values and practices, and then argues, as an inference to the most likely explanation of this fact, that metaethical relativism best accounts for such cross-cultural discrepancies.

Mackie refers to such cross-cultural moral differences as “well-known” and, indeed, it seems prima facie obvious that different cultures have different practices. Mackie’s argument, however, requires a diversity of practices that is not merely descriptively different on the surface, but that is deeply morally different, if not ultimately incommensurable. James Rachels (1986) describes the difference between surface, descriptive difference and deep, moral difference by reference to the well-worn example of the traditional Inuit practice of leaving elders to die from exposure. Although at the surface level of description, this practice seems radically different from contemporary Western attitudes toward the ethical treatment of the elderly (pervasive elder-abuse notwithstanding), the underlying moral justification for the practice—namely, that material resources are limited, the elders themselves choose this fate, the practice is a way for elders to die with dignity, and so forth—sounds remarkably similar in spirit to the familiar sorts of moral values contemporary Westerners invoke.

Cultural anthropology itself has generated controversy regarding the extent as well as the metaethical significance of moral differences at the deep level of fundamental justifications and values. Responding both to the assumption of cultural superiority and to the Romantic attraction of viewing exotic cultures as Noble Savages, early twentieth-century anthropologists frequently adopted a methodology of relativism, on the grounds that accurate empirical information would be ignored if a cultural difference were examined with any a priori moral bias. An early exponent of this anthropological relativism was William Graham Sumner (1906) who, reflecting on what he referred to as different cultural folkways (that is, traditions or practices), claimed provocatively that “the folkways are their own warrant.” Numerous anthropologists who were influenced by Franz Boas (1911) adopted a similar refusal to morally evaluate cross-cultural differences, culminating in an explicit embrace of metaethical relativism by anthropologists such as Ruth Benedict (1934) and Melville Herskovits (1952).

Several notable philosophers in the Continental tradition have also affirmed the sociological and anthropological relativism mentioned above. Specifically, the deconstruction of Jacques Derrida, with its suspicion regarding “logocentric” biases, might be understood as a warning against metaethical objectivism. Instead, a deconstructionist might argue that ethical meaning (like all meaning) is characterized by what Derrida called différance, that is, an intractable un-decidability. (See Derrida (1996), however, for the possibility of a less relativistic deconstructionist ethics.) Other contemporary Continental approaches have similarly eschewed realism. For example, Mary Daly (1978) has defended a radical feminist critique of the sexual biases inherent in how we talk about values. For other perspectives on the possible tensions between feminism and the metaethics of cultural diversity, see Okin (1999) and Nussbaum (1999: 29-54). Michel Foucault (1984) is also well-known for his general criticism of the uses and abuses of power in the construction and expression of moral valuations pertaining to mental health, sexuality, and criminality. Similar critiques concerning the transplantation of a particular set of cultural values to other cultural contexts have been expressed by a number of post-colonialists and literary theorists, who have theorized about the imperialism, silencing (Spivak 1988), Orientalism (Said 1978), and cultural hybridity (Bhabha 1994) that such moral universalism may involve.

b. Cross-Cultural Similarities

For all the apparent cross-cultural moral diversity, however, there have also been several suggestions against extending anthropological relativism to the metaethical level. First, a variety of empirical studies seem to suggest that the degree of moral similarity at the deep level of fundamental justifications and values may be greater than Boas and his students anticipated. Thus, for example, Jonathan Haidt (2004) has argued that cross-cultural differences show strong evidence of converging on a finite number of basic moral values (what Haidt calls “modules”). From a somewhat more abstract perspective, Thomas Kasulis (2002) has also defended the view that cross-cultural differences can be sorted into two fundamental “orientations.” However, the congealing of cross-cultural differences around a small, finite number of basic values need not prove moral realism, for those basic values may themselves still be ultimately relative to human needs and perspectives (compare, Wong 2006).

There are also several theoretical challenges to inferring metaethical relativism from anthropological differences. For one thing, as Michele Moody-Adams (1997) has argued, metaethical assessments about the degree or depth of moral differences are “empirically underdetermined” by the anthropological description of the practices themselves. For example, anthropological data about the moral content of a culturally different practice may be biased on behalf of the cultural informant who supplies the data or characterization. Similar critiques of cross-cultural moral relativism have leveraged what is known as The Principle of Charity—the hermeneutic insight that differences must at least be commensurable enough to even be framed as “different” from one another in the first place. Thus, goes the argument, if cross-cultural moral outlooks were so radically different as to be incomparable with one another, we could never truly morally disagree at all; we would instead be simply “talking past” one another (compare Davidson 2001). Much of our ability to translate between the moral practices of one culture and another—an ability central to the very enterprise of comparative philosophy—presupposes that even moral differences are still recognizably moral differences at root.

8. Political Implications of Metaethics

In addition to accommodating or accounting for the existence of moral disagreements, metaethics has also been thought to provide some insight concerning how we should respond to such differences at the normative or political level. Most often, debates concerning the morally appropriate response to moral differences have been framed in terms of the relationship between metaethics and toleration. On the one hand, tolerating practices and values with which one might disagree has been a hallmark of liberal democratic societies. Should this permissive attitude, however, be extended indiscriminately to all values and practices with which one disagrees? Are some moral differences simply intolerable, such that it would undermine one’s own moral convictions to even attempt to tolerate them? More vexingly, is it conceptually possible or desirable to tolerate the intolerance of others (a paradox sometimes referred to as the Liberal’s Dilemma)? Karl Popper (1945) famously argued against the toleration of intolerance, which he saw as an overly indulgent extension of the concept, and one which would undermine the “open society” he believed to be a prerequisite for toleration in the first place. By contrast, John Rawls (1971) has argued that toleration—even of intolerance—is a constitutive part of justice (derivable from what Rawls calls the “liberty principle” of justice), such that failure to be tolerant would entail failure to satisfy one of the requirements of justice. Rawls emphasizes, however, that genuine toleration need not lead to utopia or agreement, and that it is substantially different from a mere modus vivendi, that is, simply putting up with one another because we are powerless to do otherwise. According to Rawls, true toleration requires that we seek to bring our differences into an “overlapping consensus,” which he claims will be possible due to an inherent incompleteness and “looseness in our comprehensive views” (2001: 193).

The value of toleration is often claimed as an exclusive asset of individual metaethical theories. For example, metaethical relativists frequently argue that only by acknowledging the ultimately subjective and conventional nature of morality can we make sense of why we should not morally judge others’ values or practices—after all, according to relativism, there would be no culture-transcendent standard against which to make such judgments. For this reason, Neil Levy claims that, “The perception that relativism promotes, or is the expression of, tolerance of difference is almost certainly the single most important factor in explaining its attraction” (2002: 56). Indeed, even metaethical realists (Shafer-Landau 2004: 30-31) often observe that undergraduate endorsements of relativism seem to be motivated by an anxiety about condemning foreign practices. Despite the apparent leeway with respect to moral differences that metaethical relativism would appear to allow, several realists have argued, by contrast, that relativism is equally compatible with intolerance. After all, goes the argument, if nothing is objectively or universally morally wrong, then a fortiori intolerant practices cannot be said to be universally or objectively wrong either. People or cultures who do not approve of an intolerant practice would only be reflecting their own culture’s commitment to toleration (compare Graham 1996). For this reason, several metaethicists have argued that realism alone can support the commitment to toleration as a universal value—such that intolerance can be morally condemned—because only realism allows for the existence of universal, objective moral values (compare, Shafer-Landau 2004: 30-33). Nicholas Rescher (1993) expresses a related worry about what he calls “indifferentism”—a nihilistic nonchalance regarding specific ethical commitments that might be occasioned by an embrace of metaethical relativism. Rescher’s own solution to the potential problem of indifferentism (he calls his view “contextualism” or “perspectival rationalism”) involves the recognition of the reasons-giving nature of circumstances, such that different situations may supply their own “local” justifications for particular political or moral commitments.

The question of which metaethical theory—realism or relativism—can lay better claim to toleration, however, has been complicated by reflection on what “toleration” truly involves and whether it is always, in fact, a moral value. Andrew Cohen (2004), for instance, has argued that “toleration” by definition must involve some negative evaluation of the practice or value that is tolerated. Thus, on this analysis, it would seem that one may only tolerate that which one finds intolerable. This has led philosophers such as Bernard Williams (1996) to question whether toleration—understood as requiring moral disapproval—is even possible, let alone whether it is truly a moral value itself. (For more discussion on toleration, see Heyd 1996.) In a related vein, Richard Rorty (1989) has argued that what a society finds intolerable is itself morally constitutive of that society’s identity, and that recognition of the metaethical contingency of one’s particular social tolerance might itself provide an important sense of political “solidarity.” For these reasons, other philosophers have considered alternative understandings of toleration that might be more amenable to particular metaethical theories. David B. Wong (2006: 228-272), for example, has developed an account of what he calls accommodation, according to which even relativists may still share a higher-order commitment to the need for different practices and values to be arranged in such a way as to minimize social and political friction.

9. References and Further Reading

a. Textual Citations

  • Adams, Robert. (1987). The Virtue of Faith and Other Essays in Philosophical Theology. Oxford University Press.
  • Altham, J.E.J. (1986). “The Legacy of Emotivism,” in Macdonald & Wright, eds. Fact, Science, and Morality. Oxford University Press.
  • Appiah, Kwame Anthony. (2008). Experiments in Ethics. Harvard University Press.
  • Audi, Robert. (1999). “Moral Knowledge and Ethical Pluralism,” in Greco and Sosa, eds. Blackwell Guide to Epistemology, 1999, ch. 6.
  • Ayer, A.J. (1936). Language, Truth and Logic. Gollancz Press.
  • Benedict, Ruth. (1934). “Anthropology and the Abnormal,” Journal of General Psychology 10: 59-79.
  • Beyleveld, Deryck. (1992). The Dialectical Necessity of Morality. University of Chicago Press.
  • Bhabha, Homi. (1994). The Location of Culture. Routledge Press.
  • Blackburn, Simon. (1984). Spreading the Word. Oxford University Press.
  • Blackburn, Simon. (1993). Essays in Quasi-Realism. Oxford University Press.
  • Blair, Richard. (1995). “A Cognitive Developmental Approach to Morality: Investigating the Psychopath,” Cognition 57: 1-29.
  • Bloomfield, Paul. (2001). Moral Reality. Oxford University Press.
  • Blum, Lawrence. (1991). “Moral Perception and Particularity,” Ethics 101 (4): 701-725.
  • Boas, Franz. (1911). The Mind of Primitive Man. Free Press.
  • Boisvert, Daniel. (2008). “Expressive-Assertivism,” Pacific Philosophical Quarterly 89 (2): 169-203.
  • Boyd, Richard. (1988). “How to be a Moral Realist,” in Essays on Moral Realism, ed. Geoffrey Sayre-McCord. Cornell University Press 1988, ch. 9.
  • Boylan, Michael. (2004). A Just Society. Rowman & Littlefield Publishers.
  • Boylan, Michael, ed. (1999). Gewirth: Critical Essays on Action, Rationality, and Community. Rowman & Littlefield Publishers.
  • Brink, David. (1989). Moral Realism and the Foundations of Ethics. Cambridge University Press.
  • Calhoun, Cheshire and Solomon, Robert, eds. (1984). What Is An Emotion? Oxford University Press.
  • Chisholm, Roderick. (1981). The First Person: An Essay on Reference and Intentionality. University of Minnesota Press.
  • Cohen, Andrew. (2004). “What Toleration Is,” Ethics 115: 68-95.
  • Copp, David. (2007). Morality in a Natural World. Cambridge University Press.
  • Daly, Mary. (1978). Gyn/Ecology: The Metaethics of Radical Feminism. Beacon Press.
  • Dancy, Jonathan. (2006). Ethics without Principles. Oxford University Press.
  • Dancy, Jonathan. (2000). Practical Reality. Oxford University Press.
  • Darwall, Stephen. (2006). “How Should Ethics Relate to Philosophy?” in Metaethics after Moore, eds. Terry Horgan & Mark Timmons. Oxford University Press 2006, ch.1.
  • Darwall, Stephen. (1995). The British Moralists and the Internal ‘Ought’. Cambridge University Press.
  • Davidson, Donald. (2001). Inquiries into Truth and Interpretation. Clarendon Press.
  • DeLapp, Kevin. (2009). “Les Mains Sales Versus Le Sale Monde: A Metaethical Look at Dirty Hands,” Essays in Philosophy 10 (1).
  • DeLapp, Kevin. (2009). “The Merits of Dispositional Moral Realism,” Journal of Value Inquiry 43 (1): 1-18.
  • DeLapp, Kevin. (2007). “Moral Perception and Moral Realism: An ‘Intuitive’ Account of Epistemic Justification,” Review Journal of Political Philosophy 5: 43-64.
  • Derrida, Jacques. (1996). The Gift of Death. University of Chicago Press.
  • Divers, John and Miller, Alexander. (1994). “Why Expressivists about Value Should Not Love Minimalism about Truth,” Analysis 54 (1): 12-19.
  • Dreier, James. (2004). “Meta-ethics and the Problem of Creeping Minimalism,” Philosophical Perspectives 18: 23-44.
  • Doris, John. (2002). Lack of Character. Cambridge University Press.
  • Dworkin, Ronald. (1996). “Objectivity and Truth: You’d Better Believe It,” Philosophy and Public Affairs 25 (2): 87-139.
  • Firth, Roderick. (1952). “Ethical Absolutism and the Ideal Observer,” Philosophy and Phenomenological Research 12: 317-345.
  • Flanagan, Owen. (1991). Varieties of Moral Personality. Harvard University Press.
  • Foot, Philippa. (2001). Natural Goodness. Clarendon Press.
  • Foot, Philippa. (1972). “Morality as a System of Hypothetical Imperatives,” Philosophical Review 81 (3): 305-316.
  • Foucault, Michel. (1984). The Foucault Reader, ed. Paul Rabinow. Pantheon Books.
  • Geach, Peter. (1960). “Ascriptivism”, Philosophical Review 69: 221-225.
  • Geach, Peter. (1965). “Assertion”, Philosophical Review 74: 449-465.
  • Geertz, Clifford. (1973). “Thick Description: Toward an Interpretative Theory of Culture,” in The Interpretation of Cultures: Selected Essays. Basic Books, 1973: 3-30.
  • Gewirth, Alan. (1980). Reason and Morality. University of Chicago Press.
  • Gibbard, Alan. (1990). Wise Choices, Apt Feelings. Harvard University Press.
  • Gilligan, Carol. (1982). In a Different Voice. Harvard University Press.
  • Gordon, John-Stewart, ed. (2009). Morality and Justice: Reading Boylan’s A Just Society. Lexington Books.
  • Gowans, Christopher, ed. (1987). Moral Dilemmas. Oxford University Press.
  • Graham, Gordon. (1996). “Tolerance, Pluralism, and Relativism,” in David Heyd, ed. Toleration: An Elusive Virtue. Princeton University Press, 1996: 44-59.
  • Greenspan, Patricia. (1995). Practical Guilt: Moral Dilemmas, Emotions, and Social Norms. Oxford University Press.
  • Haidt, Jonathan and Graham, Jesse. (2007). “When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals May Not Recognize,” Social Justice Research 20 (1): 98-116.
  • Haidt, Jonathan and Joseph, Craig. (2004). “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues,” Daedalus: 55-66.
  • Hare, R.M. (1982). Moral Thinking. Oxford University Press.
  • Harman, Gilbert. (2009). “Guilt-Free Morality,” Oxford Studies in Metaethics 4: 203-214.
  • Harman, Gilbert. (1985). “Is There A Single True Morality?” in David Copp and David Zimmerman, eds. Morality, Reason and Truth. Rowman & Littlefield, 1985: 27-48.
  • Harman, Gilbert. (1977). The Nature of Morality. Oxford University Press.
  • Harman, Gilbert. (1975). “Moral Relativism Defended,” Philosophical Review 84 (1): 3-22.
  • Heinaman, Robert, ed. (1995). Aristotle and Moral Realism. Westview Press.
  • Herskovits, Melville. (1952). Man and His Works. A.A. Knopf.
  • Heyd, David, ed. (1996). Toleration: An Elusive Virtue. Princeton University Press.
  • Hooker, Brad and Little, Margaret, eds. (2000). Moral Particularism. Oxford University Press.
  • Horgan, Terence and Timmons, Mark. (1991). “New Wave Moral Realism Meets Moral Twin Earth,” Journal of Philosophical Research 16: 447-465.
  • Hudson, W.D. (1967). Ethical Intuitionism. St. Martin’s Press.
  • Hume, David. (1740). A Treatise of Human Nature. L.A. Selby-Bigge, ed. Oxford University Press, 2e (1978).
  • Hurka, Thomas. (2003). “Moore in the Middle,” Ethics 113 (3): 599-628.
  • Jackson, Frank. (1998). From Metaphysics to Ethics: A Defence of Conceptual Analysis. Oxford University Press.
  • Jackson, Frank and Pettit, Philip. (1995). “Moral Functionalism and Moral Motivation,” Philosophical Quarterly 45: 20-40.
  • James, William. (1896). “The Will to Believe,” in The Will to Believe and Other Essays in Popular Philosophy. Dover Publishers, 1956.
  • Joyce, Richard. (2001). The Myth of Morality. Cambridge University Press.
  • Kalderon, Mark. (2005). Moral Fictionalism. Clarendon Press.
  • Kasulis, Thomas. (2002). Intimacy or Integrity: Philosophy and Cultural Difference. University of Hawai’i Press.
  • Kim, Jaegwon. (1990). “Supervenience as a Philosophical Concept,” Metaphilosophy 21 (1-2): 1-27.
  • Kjellberg, Paul and Ivanhoe, Philip, eds. (1996). Essays on Skepticism, Relativism, and Ethics in the Zhuangzi. SUNY Press.
  • Knobe, Joshua and Nichols, Shaun, eds. (2008). Experimental Philosophy. Oxford University Press.
  • Korsgaard, Christine. (1996). The Sources of Normativity. Cambridge University Press.
  • Kramer, Matthew. (2009). Moral Realism as a Moral Doctrine. Wiley-Blackwell Publishers.
  • Levy, Neil. (2002). Moral Relativism: A Short Introduction. Oneworld Publications.
  • Lovibond, Sabina. (1983). Realism and Imagination in Ethics. University of Minnesota Press.
  • MacIntyre, Alasdair. (1988). Whose Justice? Which Rationality? Notre Dame Press.
  • MacIntyre, Alasdair. (1984). After Virtue, 2e. Notre Dame Press.
  • Mackie, J.L. (1977). Ethics: Inventing Right and Wrong. Penguin Books.
  • Markus, H.R. and Kitayama, S. (1991). “Culture and the Self: Implications for Cognition, Emotion, and Motivation,” Psychological Review 98: 224-253.
  • McCrae, R.R. and John, O.P. (1992). “An Introduction to the Five-Factor Model and Its Applications,” Journal of Personality 60: 175-215.
  • McDowell, John. (1985). “Values and Secondary Qualities,” in Morality and Objectivity, ed. Ted Honderich. Routledge (1985): 110-129.
  • McDowell, John. (1978). “Are Moral Requirements Hypothetical Imperatives?” Proceedings of the Aristotelian Society, supp. Vol. 52: 13-29.
  • McNaughton, David. (1988). Moral Vision. Blackwell Publishing.
  • Miller, J.G. and Bersoff, D.M. (1992). “Culture and Moral Judgment: How Are Conflicts between Justice and Interpersonal Relationships Resolved?” Journal of Personality and Social Psychology 62: 541-554.
  • Moody-Adams, Michele. (1997). Fieldwork in Familiar Places. Harvard University Press.
  • Moore, G.E. (1903). Principia Ethica. Cambridge University Press.
  • Murdoch, Iris. (1970). The Sovereignty of Good. Routledge and Kegan Paul Press.
  • Neu, Jerome. (2000). A Tear is an Intellectual Thing. Oxford University Press.
  • Nichols, Shaun. (2004). “After Objectivity: An Empirical Study of Moral Judgment,” Philosophical Psychology 17: 5-28.
  • Nielsen, Kai. (1997). Why Be Moral? Prometheus Books.
  • Nussbaum, Martha. (1999). Sex and Social Justice. Oxford University Press.
  • Nussbaum, Martha. (1986). The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy. Cambridge University Press.
  • Okin, Susan Moller. (1999). Is Multiculturalism Bad for Women? Princeton University Press.
  • Plato. Republic, trans. G.M.A. Grube, in The Complete Works of Plato, ed. John Cooper. Hackett 1997.
  • Plato. Gorgias, trans. Donald Zeyl, in The Complete Works of Plato, ed. John Cooper. Hackett 1997.
  • Platts, Mark. (1991). Moral Realities: An Essay in Philosophical Psychology. Routledge Press.
  • Popper, Karl. (1945). The Open Society and Its Enemies. Routledge Press.
  • Putnam, Hilary. (1981). Reason, Truth, and History. Cambridge University Press.
  • Rachels, James. (1986). “The Challenge of Cultural Relativism,” in Rachels, The Elements of Moral Philosophy. Random House (1999): 20-36.
  • Railton, Peter. (1986). “Moral Realism,” Philosophical Review 95: 163-207.
  • Ramsey, Frank. (1927). “Facts and Propositions,” Aristotelian Society Supplementary Vol. 7: 153-170.
  • Rawls, John. (2001). Justice As Fairness: A Restatement. Belknap Press.
  • Rawls, John. (1971). A Theory of Justice. Belknap Press.
  • Regan, Tom. (1986). Bloomsbury’s Prophet. Temple University Press.
  • Rescher, Nicholas. (1993). Pluralism: Against the Demand for Consensus. Clarendon Press.
  • Ridge, Michael. (2006). “Ecumenical Expressivism: Finessing Frege,” Ethics 116 (2): 302-336.
  • Rorty, Richard. (1989). Contingency, Irony, and Solidarity. Cambridge University Press.
  • Ross, W.D. (1930). The Right and the Good. Oxford University Press.
  • Rottshaefer, William. (1999). “Moral Learning and Moral Realism: How Empirical Psychology Illuminates Issues in Moral Ontology,” Behavior and Philosophy 27: 19-49.
  • Ryle, Gilbert. (1968). “What is Le Penseur Doing?” in Collected Papers 2 (1971): 480-496.
  • Said, Edward. (1978). Orientalism. Vintage Books.
  • Sayre-McCord, Geoffrey. (1985). “Coherence and Models for Moral Theorizing,” Pacific Philosophical Quarterly 66:
  • Scanlon, Thomas. (1995) “Fear of Relativism,” in Virtues and Reasons, eds. Hursthouse, Lawrence, Quinn. Oxford University Press (1995): 219-245.
  • Schroeder, Mark. (2008). “What is the Frege-Geach Problem?” Philosophy Compass 3 (4): 703-720.
  • Schueler, G.F. (1988). “Modus Ponens and Moral Realism,” Ethics 98: 492-500.
  • Selby-Bigge, L.A., ed. (1897). The British Moralists of the Eighteenth Century. Clarendon Press.
  • Shafer-Landau, Russ. (2004). Whatever Happened to Good and Evil? Oxford University Press.
  • Shafer-Landau, Russ. (2003). Moral Realism: A Defense. Oxford University Press.
  • Smith, Michael. (1994). The Moral Problem. Blackwell Publishers.
  • Smith, Michael. (1994). “Why Expressivists about Value Should Love Minimalism about Truth,” Analysis 54 (1): 1-11.
  • Spence, Edward. (2006). Ethics within Reason: A Neo-Gewirthian Approach. Lexington Books.
  • Spivak, Gayatri Chakravorty. (1988). “Can the Subaltern Speak?” in Marxism and the Interpretation of Culture, eds. C. Nelson and L. Grossberg. Macmillan Books, 1988: 271-313.
  • Steigleder, Klaus. (1999). Grundlegung der normativen Ethik: Der Ansatz von Alan Gewirth. Alber Publishers.
  • Stephen, Leslie. (1947). English Literature and Society in the Eighteenth Century. Reprinted by University Press of the Pacific, 2003.
  • Stevenson, C.L. (1944). Ethics and Language. Yale University Press.
  • Stocker, Michael. (1990). Plural and Conflicting Values. Oxford University Press.
  • Stratton-Lake, Philip, ed. (2002). Ethical Intuitionism: Re-Evaluations. Oxford University Press.
  • Sturgeon, Nicholas. (1988). “Moral Explanations,” in Essays on Moral Realism, ed. Geoffrey Sayre-McCord. Cornell University Press, 1988, ch. 10.
  • Sturgeon, Nicholas. (1986). “Harman on Moral Explanations of Natural Facts,” Southern Journal of Philosophy 24: 69-78.
  • Suckiel, Ellen Kappy. (1982). The Pragmatic Philosophy of William James. Notre Dame Press.
  • Sumner, William Graham. (1906). Folkways. Ginn Publishers.
  • Tännsjö, Torbjörn. (1990). Moral Realism. Rowman & Littlefield Publishers.
  • Timmons, Mark. (1996). “A Contextualist Moral Epistemology,” in Sinnott-Armstrong, ed. Moral Knowledge? Oxford University Press, 1996.
  • Wiggins, David. (1976). “Truth, Invention, and the Meaning of Life,” in Wiggins, Needs, Values, Truth, 3e. Oxford University Press, 2002: 87-138.
  • Williams, Bernard. (1996). “Toleration: An Impossible Virtue?” in David Heyd, ed. Toleration: An Elusive Virtue. Princeton University Press, 1996: 28-43.
  • Williams, Bernard. (1993). Shame and Necessity. University of California Press.
  • Williams, Bernard. (1985). Ethics and the Limits of Philosophy. Harvard University Press.
  • Williams, Bernard. (1979). “Internal and External Reasons,” in Rational Action, ed. Ross Harrison. Cambridge University Press, 1979: 17-28.
  • Williams, Bernard. (1965). “Ethical Consistency,” Proceedings of the Aristotelian Society, suppl. Vol. 39: 103-124.
  • Wong, David B. (2006). Natural Moralities: A Defense of Pluralistic Relativism. Oxford University Press.
  • Wong, David B. (2000). “Harmony, Fragmentation, and Democratic Ritual,” in Civility, ed. Leroy S. Rouner. University of Notre Dame Press, 2000: 200-222.
  • Wong, David B. (1984). Moral Relativity. University of California Press.
  • Wright, Crispin. (1992). Truth and Objectivity. Harvard University Press.

b. Anthologies and Introductions

  • Fisher, Andrew and Kirchin, Simon, eds. (2006). Arguing about Metaethics. Routledge Press.
  • Harman, Gilbert and Thomson, J.J. (1996). Moral Relativism and Moral Objectivity. Blackwell Publishers.
  • Miller, Alexander. (2003). An Introduction to Contemporary Metaethics. Polity Press.
  • Moser, Paul and Carson, Thomas, eds. (2001). Moral Relativism: A Reader. Oxford University Press.
  • Sayre-McCord, Geoffrey, ed. (1988). Essays on Moral Realism. Cornell University Press.
  • Shafer-Landau, Russ, ed. (2001-2010). Oxford Studies in Metaethics, Vol. 1-5. Oxford University Press.


Author Information

Kevin M. DeLapp
Email: kevin.delapp@converse.edu
Converse College
U. S. A.

Theory of Mind

Theory of Mind is the branch of cognitive science that investigates how we ascribe mental states to other persons and how we use those states to explain and predict the actions of those other persons. More accurately, it is the branch that investigates mindreading, mentalizing, or mentalistic abilities. These skills are shared by almost all human beings beyond early childhood. They are used to treat other agents as the bearers of unobservable psychological states and processes, and to anticipate and explain the agents’ behavior in terms of such states and processes. These mentalistic abilities are also called “folk psychology” by philosophers, and “naïve psychology” and “intuitive psychology” by cognitive scientists.

It is important to note that “Theory of Mind” is not an entirely appropriate term for this research area (nor for our mentalistic abilities themselves), since it seems to assume from the start the validity of one specific account of the nature and development of mindreading: the view that mindreading depends on the deployment of a theory of the mental realm, analogous to our theories of the physical world (“naïve physics”). But this view—known as theory-theory—is only one of the accounts offered to explain our mentalistic abilities. In contrast, theorists of mental simulation have suggested that what lies at the root of mindreading is not any sort of folk-psychological conceptual scheme, but rather a kind of mental modeling in which the simulator uses her own mind as an analog model of the mind of the simulated agent.

Both theory-theory and simulation-theory are actually families of theories. Some theory-theorists maintain that our naïve theory of mind is the product of the scientific-like exercise of a domain-general theorizing capacity. Other theory-theorists defend a quite different hypothesis, according to which mindreading rests on the maturation of a mental organ dedicated to the domain of psychology. Simulation-theory likewise comes in different versions. According to the “moderate” version of simulationism, mental concepts are not completely excluded from simulation. Simulation can be seen as a process through which we first generate and self-attribute pretend mental states that are intended to correspond to those of the simulated agent, and then project them onto the target. By contrast, the “radical” version of simulationism rejects the primacy of first-person mindreading and contends that we imaginatively transform ourselves into the simulated agent, interpreting the target’s behavior without using any kind of mental concept, not even concepts referring to ourselves.

Finally, the claim, common to both theorists of theory and theorists of simulation, that mindreading plays a primary role in human social understanding was challenged in the early 21st century, mainly by phenomenology-oriented philosophers and cognitive scientists.

Table of Contents

  1. Theory-Theory
    1. The Child-Scientist Theory
    2. The Modularist Theory-Theory
    3. First-Person Mindreading and Theory-Theory
  2. Simulation-Theory
    1. Simulation with and without Introspection
    2. Simulation in Low-Level Mindreading
  3. Social Cognition without Mindreading
  4. References and Further Reading
    1. Suggested Further Reading
    2. References

1. Theory-Theory

Social psychologists have investigated mindreading since at least the 1940s. In Heider and Simmel’s (1944) classic studies, participants were presented with animated events involving interacting geometric shapes. When asked to report what they saw, the participants almost invariably treated these shapes as intentional agents with motives and purposes, suggesting the existence of an automatic capacity for mentalistic attribution. Pursuing this line of research led to Heider’s The Psychology of Interpersonal Relations (1958), a seminal book that remains one of the main historical reference points for the scientific inquiry into our mentalistic practice. In this book Heider characterizes “commonsense psychology” as a sophisticated conceptual scheme that has an influence on human perception and action in the social world comparable to that which Kant’s categorical framework has on human perception and action in the physical world (see Malle & Ickes 2000: 201).

Heider’s visionary work played a central role in the origination and definition of attribution theory, that is, the field of social psychology that investigates the mechanisms underlying ordinary explanations of our own and other people’s behavior. Attribution theory, however, approaches our mentalistic practice quite differently. Heider took commonsense psychology seriously as a genuine body of knowledge, arguing that scientific psychology has a good deal to learn from it. In contrast, most research on causal attribution has been faithful to behaviorism’s methodological lesson and has focused on the epistemic inaccuracy of commonsense psychology.

Two years before Heider’s book, Wilfrid Sellars’ (1956) Empiricism and the Philosophy of Mind had suggested that our grasp of mental phenomena does not originate from direct access to our inner life, but is the result of a “folk” theory of mind, which we acquire through some form or other of enculturation. Sellars’ speculation turned out to be philosophically very productive and in agreement with social-psychological research on self-attribution, coming to be known as “Theory-Theory” (a term coined by Morton 1980—henceforth “TT”).

During the 1970s one or other form of TT was seen as a very effective antidote to Cartesianism and philosophical behaviorism. In particular, TT was coupled with Nagel’s (1961) classic account of intertheoretic reduction as deduction of the reduced from the reducing theory via bridge principles, in order to turn the ontological problem of the relationship between the mental and the physical into a more tractable epistemological problem concerning the relations between theories. Thus it became possible to take a notion—intertheoretic reduction—rigorously studied by philosophers of science, and to examine the relations between folk psychology, understood as a theory that includes the commonsense mentalistic ontology, and its scientific successors (scientific psychology, neuroscience, or some other form of science of the mental). Ontological/metaphysical questions could then be answered by (i) focusing first and foremost on questions about explanation and theory reduction, and then (ii), depending on how those first questions were answered, drawing the appropriate ontological/metaphysical conclusions by comparison with how similar questions about explanation and reduction were answered in other scientific episodes, and with the ontological conclusions philosophers and scientists drew in those cases (this strategy is labelled “the intertheoretic-reduction reformulation of the mind-body problem” in Bickle 2003).
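
Nagel’s model can be stated compactly. The following is a schematic rendering, not Nagel’s own notation: let T1 be the reducing theory, T2 the reduced theory, and B a set of bridge principles connecting their vocabularies, with M a predicate of the reduced (here, mentalistic) vocabulary and P a predicate of the reducing vocabulary. Reduction then amounts to a derivability claim:

    T_1 \wedge B \vdash T_2, \qquad \text{where } B \text{ contains principles of the form } \forall x\,(M(x) \leftrightarrow P(x))

Eliminativists, as discussed just below, deny that suitable bridge principles exist for folk psychology, which is why on their view its ontology goes the way of phlogiston rather than being reduced.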

In this context, TT was taken as the major premise in the standard argument for eliminative materialism (see Ramsey 2011: §2.1). In its strongest form, eliminativism predicts that part or all of our folk-psychological theory will vanish into thin air, just as happened in the past when scientific progress led to the abandonment of the folk theory of witchcraft or the protoscientific theories of phlogiston and caloric fluid. This prediction rests on an argument which moves from the premise that folk psychology is a massively defective theory to the conclusion that—just as with witches, phlogiston, and caloric fluid—folk-psychological entities do not exist. Thus philosophy of mind joined attribution theory in adopting a critical attitude toward the explanatory adequacy of folk psychology (see, for example, Stich’s 1983 eliminativistic doubts about the folk concept of belief, motivated inter alia by the experimental social psychology literature on dissonance and self-attribution).

Notice, however, that TT can be differently construed depending on whether we adopt a personal or subpersonal perspective (see Stich & Ravenscroft 1994: §4). The debate between intentional realists and eliminativists favored David Lewis’ personal-level formulation of TT. According to Lewis, the folk theory of mind is implicit in our everyday talk about mental states. We entertain “platitudes” regarding the causal relations of mental states, sensory stimuli, and motor responses that can be systematized (or “Ramsified”). The result is a functionalist theory that gives the terms of mentalistic vocabulary their meaning in the same way as scientific theories define their theoretical terms, namely “as the occupants of the causal roles specified by the theory…; as the entities, whatever those may be, that bear certain causal relations to one another and to the referents of the O[bservational]-terms” (Lewis 1972: 211). In this perspective, mindreading can be described as an exercise in reflective reasoning, which involves the application of general reasoning abilities to premises including ceteris paribus folk-psychological generalizations. A good example of this conception of mindreading is Grice’s schema for the derivation of conversational implicatures:

He said that P; he could not have done this unless he thought that Q; he knows (and knows that I know that he knows) that I will realize that it is necessary to suppose that Q; he has done nothing to stop me thinking that Q; so he intends me to think, or is at least willing for me to think, that Q (Grice 1989: 30-1; cit. in Wilson 2005: 1133).
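
Returning to the Ramsification step mentioned above: for readers who want its formal shape, here is a schematic sketch in the spirit of Lewis (1972), not his exact notation. Let T[t_1, ..., t_n] be the conjunction of the folk platitudes, with t_1, ..., t_n the mentalistic terms they contain. Then:

    \exists x_1 \ldots \exists x_n\, T[x_1, \ldots, x_n] \qquad \text{(the Ramsey sentence of folk psychology)}
    t_i = \text{the } i\text{th member of the unique } n\text{-tuple satisfying } T[x_1, \ldots, x_n] \qquad \text{(definition of each mental term)}

Each mentalistic term is thus defined by its causal role: whatever occupies the t_i role is t_i.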

Since the end of the 1970s, however, primatology, developmental psychology, cognitive neuropsychiatry and empirically-informed philosophy have been contributing to a collaborative inquiry into TT. In the context of this literature the term “theory” refers to a “tacit” or “sub-doxastic” structure of knowledge, a corpus of internally represented information that guides the execution of mentalistic capacities. But then the functionalist theory that fixes the meaning of mentalistic terms is not the theory implicit in our everyday, mentalistic talk, but the tacit theory (in Chomsky’s sense) subserving our thought and talk about the mental realm (see Stich & Nichols 2003: 241). On this perspective, the inferential processes that depend on the theory have an automatic and unconscious character that distinguishes them from reflective reasoning processes.

In developmental psychology part of the basis for the study of mindreading skills in children was already in Jean Piaget’s seminal work on egocentrism in the 1930s to 50s, and the work on metacognition (especially metamemory) in the 1970s. But the developmental research on mindreading took off only under the thrust of three discoveries in the 1980s (see Leslie 1998). First, normally developing 2-year-olds are able to engage in pretend play. Second, normally developing children undergo a deep change in their understanding of the psychological states of other people somewhere between the ages of 3 and 4, as indicated especially by the appearance of their ability to solve a variety of “false-belief” problems (see immediately below). Lastly, children diagnosed with autism spectrum disorders are especially impaired in attributing mental states to other people.

In particular, Wimmer & Perner (1983) provided the theory-of-mind research with a seminal experimental paradigm: the “false-belief task.” In the most well-known version of this task, a child watches two puppets interacting in a room. One puppet (“Sally”) puts a toy in location A and then leaves the room. While Sally is out of the room, the other puppet (“Anne”) moves the toy from location A to location B. Sally returns to the room, and the child onlooker is asked where she will look for her toy, in location A or in location B. Now, 4- and 5-year-olds have little difficulty passing this test, judging that Sally will look for her toy in location A although it really is in location B. These correct answers provide evidence that the child realizes that Sally does not know that the toy has been moved, and so will act upon a false belief. Many younger children, typically 3-year-olds, fail such a task, often asserting that Sally will look for the toy in the place where it was moved. Dozens of versions of this task have now been used, and while the precise age of success varies between children and between task versions, in general we can confidently say that children begin to successfully perform the (“verbal”) false-belief tasks at around 4 years (see the meta-analysis in Wellman et al. 2001; see also below, the reference to “non-verbal” false-belief tasks).
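
The bookkeeping behind the task can be made vivid with a minimal sketch (illustrative Python, not a model of the child’s mind): an agent’s belief about the toy’s location is updated only by events she actually witnesses.

    # Minimal sketch of the Sally-Anne scenario: beliefs track only
    # witnessed events, so Sally's belief comes apart from reality.
    reality = "A"
    sally_belief = "A"      # Sally sees the toy placed in location A
    sally_present = False   # Sally leaves the room

    reality = "B"           # Anne moves the toy while Sally is away
    if sally_present:       # Sally did not witness the move,
        sally_belief = reality  # so her belief is not updated

    print(f"Toy is in {reality}; Sally will look in {sally_belief}")
    # Toy is in B; Sally will look in A

Passing the task requires tracking sally_belief as a variable distinct from reality; on the standard interpretation, 3-year-olds answer with reality instead.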

Wimmer and Perner’s false-belief task set off a flood of experiments concerning children’s understanding of the mind. In this context, the first hypotheses about the process of acquisition of the naïve theory of mind were put forward. The finding that mentalistic skills emerge very early, in the first 3-4 years, and relatively independently of the development of other cognitive abilities, led some scholars (for example, Simon Baron-Cohen, Jerry Fodor, Alan Leslie) to conceive of them as the end-state of the endogenous maturation of an innate theory-of-mind module (or system of modules). This contrasted with the view of other researchers (for example, Alison Gopnik, Josef Perner, Henry Wellman), who maintained that the intuitive theory of mind develops in childhood in a manner comparable to the development of scientific theories.

a. The Child-Scientist Theory

According to a first version of TT, the “child as (little) scientist” theory, the body of internally represented knowledge that drives the exercise of mentalistic abilities has much the same structure as a scientific theory, and it is acquired, stored, and used in much the same way that scientific theories are: by formulating explanations, making predictions, and then revising the theory or modifying auxiliary hypotheses when the predictions fail. Gopnik & Meltzoff (1997) put forward this idea in its most radical form. They argue that the body of knowledge underlying mindreading has all the structural, functional, and dynamic features that, on their view, characterize most scientific theories. One of the most important features is defeasibility. As happens in scientific practice, the child’s naïve theory of mind can be revised or replaced when counterevidence to it accumulates. The child-scientist theory is, therefore, akin to Piaget’s constructivism insofar as it depicts cognitive development in childhood and early adolescence as a succession of increasingly sophisticated naïve theories. For instance, Wellman (1990) has argued that around age 4 children become able to pass the false-belief tests because they move from an elementary “copy” theory of mind to a fully “representational” theory of mind, which allows them to acknowledge the explanatory role of false beliefs.

The child-scientist theory inherits from Piaget not only the constructivist framework but also the idea that cognitive development depends on a domain-general learning mechanism. A domain-general (or general-purpose) psychological structure is one that can be used for problem solving across many different content domains; it contrasts with a domain-specific psychological structure, which is dedicated to solving a restricted class of problems in a restricted content domain (see Samuels 2000). Now, Piaget’s model of cognitive development posits an innate endowment of reflexes and domain-general learning mechanisms, which enable the child to set up sensorimotor interactions with the environment that unfold a steady improvement in the capacity for problem-solving in any cognitive domain—physical, biological, psychological, and so forth. Analogously, Gopnik & Schulz (2004, 2007) have argued that the learning mechanism that supports all of cognitive development is a domain-general Bayesian mechanism that allows children to extract causal structure from patterns of data.
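
The flavor of such Bayesian causal learning can be conveyed with a toy sketch. Everything here (the two hypotheses, the detector setup, the noise rate of 0.1) is an illustrative assumption, not Gopnik and Schulz’s actual model.

    # A minimal sketch of domain-general Bayesian causal learning: update
    # beliefs over two causal hypotheses from a pattern of observations.
    #   H1: block A activates the detector; H2: block B activates it.
    priors = {"H1": 0.5, "H2": 0.5}

    def likelihood(hypothesis, blocks, activated):
        cause = "A" if hypothesis == "H1" else "B"
        predicted = cause in blocks
        return 0.9 if predicted == activated else 0.1  # assumed 0.1 noise

    # Each observation: (blocks placed on the detector, did it activate?)
    observations = [({"A"}, True), ({"B"}, False), ({"A", "B"}, True)]

    posterior = dict(priors)
    for blocks, activated in observations:
        for h in posterior:                       # Bayes: posterior
            posterior[h] *= likelihood(h, blocks, activated)  # ∝ likelihood × prior
        total = sum(posterior.values())
        posterior = {h: p / total for h, p in posterior.items()}

    print(posterior)  # H1 ends up far more probable than H2

After the three observations the posterior strongly favors H1: the learner has extracted a causal structure from a pattern of data.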

Another theory-theorist who endorses a domain-general conception of cognitive development is Josef Perner (1991). On his view, it is the appearance of the ability to metarepresent that enables 4-year-olds to shift from a “situation theory” to a “representation theory,” and thus pass false-belief tests. Children are situation theorists by the age of around 2 years. At 3 they possess a concept, “prelief” (or “betence”), in which the concepts of pretence and belief coexist undifferentiated. The concept of prelief allows the child to understand that a person can “act as if” something was such and such (for example, as if “this banana is a telephone”) when it is not. At 4, children acquire a representational concept of belief, which enables them to understand that, like public representations, inner representations can also misrepresent states of affairs (see Perner, Baker & Hutton 1994). Thus Perner suggests that children first learn to understand the properties of public (pictorial and linguistic) representations; only later do they extend, through a process of analogical reasoning, these characteristics to mental representations. On this perspective, then, the concept of belief is the product of a domain-general metarepresentational capacity that includes but is not limited to metarepresentation of mental states. (But for criticism, see Harris 2000, who argues that pretence and belief are very different and are readily distinguished by context by 3-year-olds.)

b. The Modularist Theory-Theory

According to the child-scientist theory, children learn the naïve theory of mind in much the same way that adults learn about scientific theories. By contrast, the modularist version of TT holds that the body of knowledge underlying mindreading lacks the structure of a scientific theory, being stored in one or more innate modules, which gradually become functional (“mature”) during infant development. Inside the module the body of information can be stored as a suite of domain-specific computational mechanisms; or as a system of domain-specific representations; or in both ways (see Simpson et al. 2005: 13).

The notion of modularity as domain-specificity, whose paradigm is Noam Chomsky’s language module, informs the so-called “core knowledge” hypothesis, according to which human cognition builds on a repertoire of domain-specific systems of knowledge. Studies of children and adults in diverse cultures, human infants, and non-human primates provide evidence for at least four systems of knowledge that serve to represent significant aspects of the environment: inanimate objects and their motions; agents and their goal-directed actions; places and their geometric relations; sets and their approximate numerical relations. These are systems of domain-specific, task-specific representations, which are shared by other animals, persist in adults, and show little variation across culture, language, or sex (see Carey & Spelke 1996; Spelke & Kinzler 2007).

And yet a domain-specific body of knowledge is an “inert” psychological structure, which gives rise to behavior only if it is manipulated by some cognitive mechanism. The question arises, then, whether the domain-specific body of information that subserves mentalistic abilities is the database of a domain-specific or of a domain-general computational system. In some domains, a domain-specific computational mechanism and a domain-specific body of information can form a single mechanism (for example, a parser is very likely a domain-specific computational mechanism that manipulates a domain-specific data structure). But in other domains, as Samuels (1998, 2000) has noted, domain-specific systems of knowledge might be computed by domain-general rather than domain-specific algorithms (but for criticism, see Carruthers 2006, §4.3).

The existence of a domain-specific algorithm that exploits a body of information specific to the domain of naïve psychology has been proposed by Alan Leslie (1994, 2000). He postulated a specialized component of social intelligence, the “Theory-of-Mind Mechanism” (ToMM), which receives as input information about the past and present behavior of other people and utilizes this information to compute their probable psychological states. The outputs of ToMM are descriptions of psychological states in the form of metarepresentations or M-representations, that is, agent-centered descriptions of behavior built around a triadic relation which, together with its three arguments, specifies four kinds of information: (i) an agent; (ii) an informational relation that specifies the agent’s attitude (pretending, believing, desiring, and so forth); (iii) an aspect of reality that grounds the agent’s attitude; (iv) the content of the agent’s attitude. Therefore, in order to pretend and understand others’ pretending, the child’s ToMM is supposed to output the M-representation <Mother PRETENDS (of) this banana (that) “it is a telephone”>. Analogously, in order to predict Sally’s behavior in the false-belief test, ToMM is supposed to output the M-representation <Sally BELIEVES (of) her marble (that) “it is in the basket”>. (Note that Leslie coined the term “M-representation” to distinguish his own concept of metarepresentation from Perner’s (1991). Perner uses the term at a personal level to refer to the child’s conscious theory of representation, whereas Leslie uses it at a subpersonal level to designate an unconscious data structure computed by an information-processing mechanism. See Leslie & Thaiss 1992: 231, note 2.)
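
As a data structure, an M-representation can be sketched as a simple four-slot record. The field names below are illustrative glosses, not Leslie’s own notation.

    # A minimal sketch of Leslie's M-representation as a four-slot record.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MRepresentation:
        agent: str     # (i) the agent
        attitude: str  # (ii) the informational relation (PRETENDS, BELIEVES, ...)
        anchor: str    # (iii) the aspect of reality grounding the attitude
        content: str   # (iv) the content of the attitude

    # The two examples from the text:
    pretence = MRepresentation("Mother", "PRETENDS", "this banana", "it is a telephone")
    belief = MRepresentation("Sally", "BELIEVES", "her marble", "it is in the basket")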

In the 1980s, Leslie’s ToMM hypothesis was the basis for the development of a neuropsychological perspective on autism. Children suffering from this neurodevelopmental disorder exhibit a triad of impairments: social incompetence, poor verbal and nonverbal communicative skills, and a lack of pretend play. Because social competence, communication, and pretending all rest on mentalistic abilities, Baron-Cohen, Leslie & Frith (1985) speculated that the autistic triad might be the result of an impaired ToMM. This hypothesis was investigated in an experiment in which typically developing 4-year-olds, children with autism (12 years; IQ 82), and children with Down syndrome (10 years; IQ 64) were tested on the Sally and Anne false-belief task. Eighty-five percent of the normally developing children and 86% of the children with Down syndrome passed the test; but only 20% of the autistic children predicted that Sally would look in the basket. This is one of the first examples of psychiatry driven by cognitive neuropsychology (followed by Christopher Frith’s 1992 theory of schizophrenia as late-onset autism).

According to Leslie, the ToMM is the specific innate basis of basic mentalistic abilities, and it matures during the infant’s second year. In support of this hypothesis, he cites inter alia his analysis of pretend play, which purports to show that 18-month-old children are able to metarepresent the propositional attitude of pretending. This analysis, however, raises an immediate empirical problem. If the ToMM is fully functional at 18 months, why are children unable to successfully perform false-belief tasks until they are around 4 years old? Leslie’s hypothesis is that although the concept of belief is already in place in children younger than 4, in the false-belief tasks this concept is masked by immaturity in another capacity that is necessary for good performance on the task—namely, inhibitory control. Since, by default, the ToMM attributes a belief with content that reflects current reality, to succeed in a false-belief task this default attribution must be inhibited and an alternative nonfactual content for the belief selected instead. This is the task of an executive control mechanism that Leslie calls the “Selection Processor” (SP). Thus 3-year-olds fail standard false-belief tasks because they possess the ToMM but not yet the inhibitory SP (see Leslie & Thaiss 1992; Leslie & Polizzi 1998).
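
The default-and-inhibit logic can be put in a few lines of illustrative code. The function and the sp_mature flag are assumptions made for this sketch, not part of Leslie’s formal model.

    # A minimal sketch of the ToMM/SP story about the false-belief task.
    def attribute_belief(current_reality, nonfactual_candidate, sp_mature):
        # By default, ToMM attributes a belief whose content mirrors
        # current reality; a mature Selection Processor (SP) can inhibit
        # that default and select the nonfactual content instead.
        if sp_mature:
            return nonfactual_candidate
        return current_reality

    # Sally-Anne: the toy is really at B; Sally last saw it at A.
    print(attribute_belief("location B", "location A", sp_mature=True))
    # location A  (the 4-year-old's answer)
    print(attribute_belief("location B", "location A", sp_mature=False))
    # location B  (the typical 3-year-old's error)

The same inputs yield the 4-year-old’s answer or the 3-year-old’s error depending solely on whether the inhibitory step is available.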

The ToMM/SP model seems to find support in a series of experiments that test understanding of false mental and public representations in normal and autistic children. Leslie & Thaiss (1992) found that normal 3-year-olds fail both the standard false-belief tasks and two non-mental metarepresentational tests: the false-map task and Zaitchik’s (1990) outdated-photograph task. In contrast, autistic children are at or near ceiling on the non-mental metarepresentational tests but fail false-belief tasks. Normal 4-year-olds succeed in all these tasks. According to Leslie and Thaiss, the ToMM/SP model can account for these findings: normal 3-year-olds possess the ToMM but not yet SP; autistic children are impaired in ToMM but not in SP; normal 4-year-olds possess both the ToMM and an adequate SP. By contrast, these results appear to be counterevidence to Perner’s idea that children first understand public representations and only then apply that understanding to mental states. If this were right, then autistic children should have difficulty with both kinds of representation. And in fact Perner (1993) suggests that the autistic deficit is due to a genetic impairment of the mechanisms that subserve attention shifting, a damage that interferes with the formation of the database required for the development of a theory of representation in general. But what autistics’ performance in mental and non-mental metarepresentational tasks seems to show is a dissociation between understanding false maps and outdated photographs, on the one hand, and understanding false beliefs, on the other. This finding is easily explained in the context of Leslie’s domain-specific approach to mindreading, according to which children with autism have a specific deficit in understanding mental representation but not representation in general. In support of this interpretation, fMRI studies showed that activity in the right temporo-parietal junction is high while participants are thinking about false beliefs, but no different from resting levels while participants are thinking about outdated photographs or false maps or signs. This suggests a neural substrate for the behavioral dissociation between pictorial and mental metarepresentational abilities (see Saxe & Kanwisher 2003; for a critical discussion of the domain-specificity interpretation of these behavioral and neuroimaging data, see Gerrans & Stone 2008; Perner & Aichhorn 2008; Perner & Leekam 2008).

Leslie (2005) recruits new data to support his claim that mental metarepresentational abilities emerge from a specialized neurocognitive mechanism that matures during the second year of life. Standard false-belief tasks are “elicited-response” tasks in which children are asked a direct question about an agent’s false belief. But investigations using “spontaneous-response” tasks (Onishi & Baillargeon 2005) seem to suggest that the ability to attribute false beliefs is present much earlier, at the age of 15 months (even at 13 months in Surian, Caldi & Sperber 2007). However, Leslie’s mentalistic interpretation of these data has been challenged by Ruffman & Perner (2005), who have proposed an explanation of Onishi and Baillargeon’s results that assumes that the infants might be employing a non-mentalistic behavior-rule such as, “People look for objects where last seen” (for replies, see Baillargeon et al. 2010).

The ToMM has been considered, contra Fodor, one of the strongest candidates for central modularity (see, for example, Botterill & Carruthers 1999: 67-8). However, Samuels (2006: 47) has objected that it is difficult to establish whether or not the ToMM’s domain of application really is central cognition. He suggests that the question is still more controversial in light of Leslie’s proposal to model the ToMM as a relatively low-level mechanism of selective attention, whose functioning depends on SP, which is a non-modular mechanism, penetrable by knowledge and instruction (see Leslie, Friedman & German 2004).

c. First-Person Mindreading and Theory-Theory

During the 1980s and 1990s most of the work in Theory of Mind was concerned with the mechanisms that subserve the attribution of psychological states to others (third-person mindreading). In the last decade, however, an increasing number of psychologists and philosophers have also proposed accounts of the mechanisms underlying the attribution of psychological states to oneself (first-person mindreading).

For most theory-theorists, first-person mindreading is an interpretative activity that depends on mechanisms that capitalize on the same theory of mind used to attribute mental states to other agents. Such mechanisms are triggered by information about mind-external states of affairs, essentially the target’s behavior and/or the situation in which it occurs/occurred. The claim is, then, that there is a functional symmetry between first-person and third-person mentalistic attribution—the “outside access” view of introspection in Robbins (2006: 619); the “symmetrical” or “self/other parity” account of self-knowledge in Schwitzgebel (2010, §2.1).

The first example of a symmetrical account of self-knowledge is Bem’s (1972) “self-perception theory.” Taking methodological guidance from Skinner, but from a position that reveals affinities with symbolic interactionism, Bem holds that one knows one’s own inner states (for example, attitudes and emotions) through a process completely analogous to the one by which one knows other people’s inner states, that is, by inferring them from the observation/recollection of one’s own behavior and/or the circumstances in which it occurs/occurred. The TT version of the symmetrical account of self-knowledge develops Bem’s approach by claiming that observations and recollections of one’s own behavior and the circumstances in which it occurs/occurred are the input to mechanisms that exploit theories that apply to the same extent to ourselves and to others.

In the well-known social-psychology experiments reviewed by Nisbett & Wilson (1977), the participants’ attitudes and behavior were caused by motivational factors inaccessible to consciousness—such factors as cognitive dissonance, numbers of bystanders in a public crisis, positional and “halo” effects and subliminal cues in problem solving and semantic disambiguation, and so on. However, when explicitly asked about the motivations (causes) of their actions, the subjects did not hesitate to state, sometimes with great eloquence, their very reasonable motives. Nisbett and Wilson explained this pattern of results by arguing that the subjects did not have any direct access to the real causes of their attitudes and behavior; rather, they engaged in an activity of confabulation, that is, they exploited a priori causal theories to develop reasonable but imaginary explanations of the motivational factors of their attitudes and behavior (see also Johansson et al. 2006, where Nisbett and Wilson’s legacy is developed through a new experimental paradigm to study introspection, the “choice blindness” paradigm).

Evidence for the symmetrical account of self-knowledge comes from Nisbett & Bellows’ (1977) use of the so-called “actor-observer paradigm.” In one experiment they compared the introspective reports of participants (“actors”) to the reports of a control group of “observers” who were given a general description of the situation and asked to predict how the actors would react. Observers’ predictions were found to be statistically identical to—and as inaccurate as—the reports by the actors. This finding suggests that “both groups produced these reports via the same route, namely by applying or generating similar causal theories” (Nisbett & Wilson 1977: 250-1; see also Schwitzgebel 2010: §§2.1.2 and 4.2.1).

In developmental psychology Alison Gopnik (1993) has defended a symmetrical account of self-knowledge by arguing that there is good evidence of developmental synchronies: children’s understanding of themselves proceeds in lockstep with their understanding of others. For example, since TT assumes that first-person and third-person mentalistic attributions are both subserved by the same theory of mind, it predicts that if the theory is not yet equipped to solve certain third-person false-belief problems, then the child should also be unable to perform the parallel first-person task. A much discussed instance of parallel performance on tasks for self and other is in Gopnik & Astington (1988). In the “Smarties Box” experiment, children were shown the candy container for the British confection “Smarties” and were asked what they thought was in the container. Naturally they answered “Smarties.” The container was then opened to reveal not Smarties, but a pencil. Children were then asked a series of questions, including “What will [your friend] say is in the box?”, and then “When you first saw the box, before we opened it, what did you think was inside it?”. It turned out that the children’s ability to answer the question about themselves was significantly correlated with their ability to answer the question about another. (See also the above-cited Wellman et al. 2001, which offers meta-analytic findings to the effect that performance on false-belief tasks for self and for others is virtually identical at all ages.)

Data from autism have also been used to motivate the claim that first-person and third-person mentalistic attribution have a common basis. An intensely debated piece of evidence comes from a study by Hurlburt, Happé & Frith (1994), in which three people suffering from Asperger syndrome were tested with the descriptive experience sampling method. In this experimental paradigm, subjects are instructed to carry a random beeper, pay attention to the experience that was ongoing at the moment of the beep, and jot down notes about that now-immediately-past experience (see Hurlburt & Schwitzgebel 2007). The study showed marked qualitative differences in introspection in the autistic subjects: unlike normal subjects, who report several different phenomenal state types—including inner verbalization, visual images, unsymbolized thinking, and emotional feelings—the first two autistic subjects reported visual images only; the third subject could report no inner experience at all. According to Frith & Happé (1999: 14), this evidence strengthens the hypothesis that self-awareness, like other-awareness, is dependent on the same theory of mind.

Thus, evidence from social psychology, developmental psychology, and cognitive neuropsychiatry makes a case for a symmetrical account of self-knowledge. As Schwitzgebel (2010: §2.1.3) rightly notes, however, no one advocates a thoroughly symmetrical conception, because some margin is always left for some sort of direct self-knowledge. Nisbett & Wilson (1977: 255), for example, draw a sharp distinction between “cognitive processes” (the causal processes underlying judgments, decisions, emotions, sensations) and mental “content” (those judgments, decisions, emotions, sensations themselves). Subjects have “direct access” to this mental content, and this allows them to know it “with near certainty.” In contrast, they have no access to the processes that cause behavior. However, insofar as Nisbett and Wilson do not propose any hypothesis about this alleged direct self-knowledge, their theory is incomplete.

In order to offer an account of this supposedly direct self-knowledge, some philosophers have made a more or less radical return to various forms of Cartesianism, construing first-person mindreading as a process that permits access to at least some mental phenomena in a relatively direct and non-interpretative way. On this perspective, introspective access does not appeal to theories that serve to interpret “external” information, but rather exploits mechanisms that can receive information about inner life through a relatively direct channel—the “inside access” view of introspection in Robbins (2006: 618); the “self-detection” account of self-knowledge in Schwitzgebel (2010: §2.2).

The inside access view comes in various forms. Mentalistic self-attribution may be realized by a mechanism that processes information about the functional profile of mental states, or about their representational content, or about both (see Robbins 2006: 618; for a “neural” version of the inside access view, see below, §2a). A representationalist-functionalist version of the inside access view is Nichols & Stich’s (2003) account of first-person mindreading in terms of “monitoring mechanisms.” The authors begin by drawing a distinction between detection and inference. It is one thing to detect mental states; it is another to reason about mental states, that is, to use information about mental states to predict and explain one’s own or other people’s mental states and behavior. Moreover, both the attribution of a mental state and the inferences that one can make about it can concern oneself or other people. Thus, we get four possible operations: first- and third-person detection, and first- and third-person reasoning. Now, Nichols and Stich’s hypothesis is that whereas third-person detecting and first- and third-person reasoning are all subserved by the same theory of mind, the mechanism for detecting one’s own mental states is quite independent of the mechanism that deals with the mental states of other people. More precisely, the Monitoring Mechanism (MM) theory assumes the existence of a suite of distinct self-monitoring computational mechanisms, including one for monitoring and providing self-knowledge of one’s own experiential states, and one for monitoring and providing self-knowledge of one’s own propositional attitudes. Thus, for example, if X believes that p, and the proper MM is activated, it copies the representation p from X’s “Belief Box”, embeds the copy in a representation schema of the form “I believe that ___”, and then places this second-order representation back in X’s Belief Box.
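
The copy-and-embed operation just described is mechanical enough to sketch directly. The code below is an illustrative rendering of that single step, not Nichols and Stich’s implementation.

    # A minimal sketch of a Monitoring Mechanism for beliefs: copy each
    # first-order representation, embed it in "I believe that ___", and
    # place the result back in the Belief Box.
    belief_box = {"it is raining"}

    def monitor_beliefs(box):
        second_order = {f"I believe that {p}" for p in box}
        return box | second_order

    belief_box = monitor_beliefs(belief_box)
    print(belief_box)
    # {'it is raining', 'I believe that it is raining'}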

Since the MM theory assumes that first-person mindreading does not involve mechanisms of the sort that figure in third-person mindreading, it implies that the first capacity should be dissociable, both diachronically and synchronically, from the second. In support of this prediction Nichols & Stich (2003) cite developmental data to the effect that, on a wide range of tasks, instead of the parallel performance predicted by TT, children exhibit developmental asynchronies. For example, children are capable of attributing knowledge and ignorance to themselves before they are capable of attributing those states to others (Wimmer et al. 1988). Moreover, they suggest—on the basis, inter alia, of a reinterpretation of the aforementioned data from Hurlburt, Happé & Frith (1994)—that there is some evidence of a double dissociation between schizophrenic and autistic subjects: the MMs might be intact in autistics despite their impairment in third-person mindreading; in schizophrenics the pattern might be reversed.

The MM theory provides a neo-Cartesian reply to TT—and especially to its eliminativist implications inasmuch as the mentalistic self-attributions based on MMs are immune to the potentially distorting influence of our intuitive theory of psychology. However, the MM theory faces at least two difficulties. To start with, the theory must tell us how MM establishes which attitude type (or percept type) a given mental state belongs to (Goldman 2006: 238-9). A possibility is that there is a separate MM for each propositional attitude type and for each perceptual modality. But then, as Engelbert and Carruthers (2010: 246) remark, since any MM can be selectively impaired, the MM theory predicts a multitude of dissociations—for example, subjects who can self-attribute beliefs but not desires, or visual experiences but not auditory ones, and so on. However, the hypothesis of such a massive dissociability has little empirical plausibility.

Moreover, Carruthers (2011) has offered a book-length argument against the idea of direct access to propositional attitudes. His neurocognitive framework is Bernard Baars’ Global Workspace Theory model of consciousness (see Gennaro 2005: §4c), in which a range of perceptual systems “broadcast” their outputs (for example, sensory data from the environment, imagery, somatosensory and proprioceptive data) to a complex of conceptual systems (judgment-forming, memory-forming, desire-forming, decision-making systems, and so forth). Among the conceptual systems there is also a multi-componential “mindreading system,” which generates higher-order judgments about the mental states of others and of oneself. By virtue of receiving globally broadcast perceptual states as input, the mindreading system can easily recognize those percepts, generating self-attributions of the form “I see something red,” “It hurts,” and so on. But the system receives no input from the systems that generate propositional attitude events (like judging and deciding). Consequently, the mindreading system cannot directly self-attribute propositional attitude events; it must infer them by exploiting the perceptual input (together with the outputs of various memory systems). Thus, Carruthers (2009: 124) concludes, “self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self-interpretation.” On this perspective, therefore, we do not introspect our own propositional attitude events. Our only form of access to those events is via self-interpretation: turning our mindreading faculty upon ourselves and engaging in unconscious interpretation of our own behavior, physical circumstances, and sensory events like visual imagery and inner speech. Carruthers bases his proposal on considerations to do with the evolution of mindreading and metacognition, on the rejection of the above-cited data that, according to Nichols & Stich (2003), suggest developmental asynchronies and dissociations between self-attribution and other-attribution, and on evidence about the confabulation of attitudes. Thus, Carruthers develops a very sophisticated version of the symmetrical account of self-knowledge, in which the theory-driven mechanisms underlying first- and third-person mindreading can draw not only on observations and recollections of one’s own behavior and the circumstances in which it occurs/occurred, but also on the recognition of a multitude of perceptual and quasi-perceptual events.
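
The asymmetry at the heart of this architecture (percepts are broadcast to the mindreading system; attitude events are not) can be sketched as follows. The class and method names are illustrative assumptions, not Carruthers’ or Baars’ terminology.

    # A minimal sketch of the global-broadcast asymmetry: the mindreading
    # system receives percepts directly but would have to infer attitudes.
    class MindreadingSystem:
        def __init__(self):
            self.broadcast = []  # only globally broadcast percepts arrive here

        def receive(self, percept):
            self.broadcast.append(percept)

        def self_attribute(self):
            # Percepts can be recognized and self-attributed directly...
            direct = [f"I see {p}" for p in self.broadcast]
            # ...but no judgment or decision ever arrives as input, so
            # attitudes must be inferred from percepts, imagery, inner
            # speech, and memory (self-interpretation).
            return direct

    ms = MindreadingSystem()
    ms.receive("something red")
    print(ms.self_attribute())  # ['I see something red']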

2. Simulation-Theory

Until the mid-1980s the debate on the nature of mindreading was a debate between different variants of TT. But in 1986, TT as a whole was impugned by Robert Gordon and, independently, by Jane Heal, who originated an alternative that came to be termed “simulation-theory” (ST). In 1989 Alvin Goldman and Paul Harris began to contribute to this new approach to mindreading. In 2006, Goldman provided the most thoroughly developed, empirically supported defense of a simulationist account of our mentalistic abilities.

According to ST, our third-person mindreading ability does not consist in implicit theorizing but rather in representing the psychological states and processes of others by mentally simulating them, that is, by attempting to generate similar states and processes in ourselves. Thus, the same resources that are used in our own psychological states and processes are recycled—usually, but not only, in imagination—to provide an understanding of the psychological states and processes of the simulated target. This has often been compared to the method of Einfühlung championed by the theorists of Verstehen (see Stueber 2006: 5-19).

In order for a mindreader to engage in this process of imaginative recycling, various information-processing mechanisms are needed. The mindreader simulates the psychological etiology of the target’s actions in essentially two steps. First, the simulator generates pretend or imaginary mental states in her own mind which are intended to (at least partly) correspond to those of the target. Second, the simulator feeds the imaginary states into a suitable cognitive mechanism (for example, the decision-making system) that is taken “offline,” that is, disengaged from the motor control systems. If the simulator’s decision-making system is similar to the target’s, and the pretend mental states that the simulator introduces into the decision-making system (at least partly) match the target’s, then the output of the simulator’s decision-making system can reliably be attributed to the target. On this perspective, there is no need for an internally represented knowledge base and no need for a naïve theory of psychology. The simulator exploits a part of her cognitive apparatus as a model for a part of the simulated agent’s cognitive apparatus.
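
The two-step procedure can be sketched in a few lines. The decision rule below is an illustrative assumption; the point is only that the simulator’s own machinery is run offline on pretend inputs and its output is assigned to the target.

    # A minimal sketch of offline simulation for behavior prediction.
    def my_decision_system(beliefs, desires):
        # The simulator's own decision-making, run offline: its output
        # is reported, not routed to motor control.
        for desire in desires:
            for belief in beliefs:
                if belief["achieves"] == desire:
                    return belief["action"]
        return "do nothing"

    # Step 1: generate pretend states intended to match the target's.
    pretend_beliefs = [{"action": "search location A", "achieves": "have toy"}]
    pretend_desires = ["have toy"]

    # Step 2: feed them into one's own (offline) decision system and
    # attribute the output to the target.
    print(my_decision_system(pretend_beliefs, pretend_desires))
    # search location A

No internally represented folk theory figures anywhere in the sketch; the simulator’s own decision procedure does the predictive work.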

Hence follows one of the main advantages ST is supposed to have over TT—namely, its computational parsimony. According to advocates of ST, the body of tacit folk-psychological knowledge which TT attributes to mindreaders imposes too heavy a burden on mental computation. That load diminishes radically if, instead of computing the body of knowledge posited by TT, mindreaders need only co-opt mechanisms that are primarily used online, when they themselves experience a kind of mental state, to run offline simulations of similar states in the target (the argument is suggested by Gordon 1986 and Goldman 1995, and challenged by Stich & Nichols 1992, 1995).

In the early years of the debate over ST, a main focus was on its implications for the controversy between intentional realism and eliminative materialism. Gordon (1986) and Goldman (1989) suggested that by rejecting the assumption that folk psychology is a theory, ST undercuts eliminativism. Stich & Ravenscroft (1994: §5), however, objected that ST undermines eliminativism only if the latter adopts the subpersonal version of TT. For ST does not deny the evident fact that human beings have intuitions about the mental, nor does it rule out that such intuitions might be systematized by building, as David Lewis suggests, a theory that implies them. Consequently, ST does not refute eliminativism; it instead forces the eliminativist to include among the premises of her argument Lewis’ personal-level formulation of TT, together with the observation/prediction that the theory implicit in our everyday talk about mental states is or will turn out to be seriously defective.

One of the main objections that theory-theorists raise against ST is the argument from systematic errors in prediction. According to ST, errors in prediction can arise either (i) because the predictor’s executive system is different from that of the target, or (ii) because the pretend mental states that the predictor has introduced into the executive system do not match the ones that actually motivate the target. However, Stich & Nichols (1992, 1995; see also Nichols et al. 1996) describe experimental situations in which the participants systematically fail to predict the behavior of targets, and in which it is unlikely that (i) or (ii) is the source of the problem. Now, TT can easily explain such systematic errors in prediction: it is sufficient to assume that our naïve theory of psychology lacks the resources required to account for such situations. It is no surprise that a folk theory that is incomplete, partial, and in many cases seriously defective often causes predictive failures. But this option is obviously not available to ST: simulation-driven predictions are “cognitively impenetrable,” that is, they are not affected by the predictor’s knowledge or ignorance about psychological processes (see also Saxe 2005; and the replies by Gordon 2005 and Goldman 2006: 173-4).

More recently, however, a consensus seems to be emerging to the effect that mindreading involves both TT and ST. For example, Goldman (2006) grants a variety of possible roles for theorizing in the context of what he calls “high-level mindreading.” This is the imaginative simulation discussed so far, which is subject to voluntary control, is accessible to consciousness, and involves the ascription of complex mental states such as propositional attitudes. High-level simulation is a species of what Goldman terms “enactment imagination” (a notion that builds on Currie & Ravenscroft’s 2002 concept of “recreative imagination”). Goldman contrasts high-level mindreading with “low-level mindreading,” which is unconscious, hard-wired, involves the attribution of structurally simple mental states such as face-based emotions (for example, joy, fear, disgust), and relies on simple imitative or mirroring processes (see, for example, Goldman & Sripada 2005). Now, theory definitely plays a role in high-level mindreading. In a prediction task, for example, theory may be involved in the selection of the imaginary inputs that will be introduced into the executive system. In this case, Goldman (2006: 44) admits, mindreading depends on the cooperation of simulation and theorizing mechanisms.

Goldman’s blend of ST and TT (albeit with a strong emphasis on the simulative component) is not the only “hybrid” account of mindreading: for other hybrid approaches, see Botterill & Carruthers (1999), Nichols & Stich (2003), and Perner & Kühberger (2006). It is fair to say that the debate now aims above all to establish to what extent, and in which processes, theory or simulation prevails.

a. Simulation with and without Introspection

There is an aspect, however, that makes Goldman’s (2006) account of ST different from other hybrid theories of mindreading, namely the neo-Cartesian priority that he assigns to introspection. On his view, first-person mindreading both ontogenetically precedes and grounds third-person mindreading. Mindreaders need to introspectively access their offline products of simulation before they can project them onto the target. And this, Goldman claims, is a form of “direct access.”

In 1993 Goldman put forward a phenomenological version of the inside access view (see above, §1c), arguing that introspection is a process of detection and classification of one’s (current) psychological states that does not depend at all on theoretical knowledge, but rather operates on information about the phenomenological properties of such states. In light of criticism (Carruthers 1996; Nichols & Stich 2003), however, in his 2006 book Goldman substantially reappraised the relevance of the qualitative component for the detection of psychological states, pointing instead to the centrality of neural properties. Building on Craig’s (2002) account of interoception, as well as Marr’s and Biederman’s computational models of visual object recognition, Goldman now maintains that introspection is a perception-like process involving a transduction mechanism that takes neural properties of mental states as input and outputs representations in a proprietary code (the introspective code, or “I-code”). The I-code represents types of mental categories and classifies mental-state tokens in terms of those categories. Goldman also suggests some possible primitives of the I-code. For example, our coding of the concept of pain might combine a “bodily feeling” parameter (a certain raw feeling) with a “preference” or “valence” parameter (a negative valence toward the feeling). Thus, the neural version of the inside access view is an attempt to solve the problem of the recognition of the attitude type, which proved problematic for Nichols and Stich’s representationalist-functionalist approach (see above, §1c). However, since different percept and attitude types are presumably realized in different cerebral areas, each percept or attitude type will depend on a specific informational channel to feed the introspective mechanism. Consequently, Goldman’s theory also seems to be open to the objection of massive dissociability raised against the MM theory (see Engelbert and Carruthers 2010: 247).
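
Goldman’s pain example can be rendered as a toy classifier over I-code primitives. The two parameters come from the example above; the rule and everything else are illustrative assumptions, not Goldman’s model.

    # A minimal sketch of I-code classification: a transducer is assumed to
    # have mapped a state's neural properties onto two parameter values, and
    # the classifier assigns a mental category on that basis.
    def i_code_classify(bodily_feeling, valence):
        if bodily_feeling is not None and valence == "negative":
            return "pain"
        return "unclassified"

    print(i_code_classify(bodily_feeling="raw feeling", valence="negative"))
    # pain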

Goldman’s primacy of first-person mindreading is, however, rejected by other simulationists. According to Gordon’s (1995, 1996) “radical” version of ST, simulation can occur without introspective access to one’s own mental states. The simulative process begins not with my pretending to be the target, but rather with my becoming the target. As Gordon (1995: 54) puts it, simulation is not “a transfer but a transformation.” “I” changes its referent and the equivalence “I=target” is established. In virtue of this de-rigidification of the personal pronoun, any introspective step is ruled out: one does not first assign a psychological state to oneself to transfer it to the target. Since the simulator becomes the target, no analogical inference from oneself to the other is needed. Still more radically, simulation can occur without having any mentalistic concepts. Our basic competence in the use of utterances of the form “I <propositional attitude> that p” involves not direct access to the propositional attitudes, but only an “ascent routine” through which we express our propositional attitudes in this new linguistic form (see Gordon 2007).

Carruthers has raised two objections to Gordon’s radical ST. First, it is a “step back” to a form of “quasi-behaviorism” (Carruthers 1996: 38). Second, Gordon problematically assumes that our mentalistic abilities are constituted by language (Carruthers 2011: 225-27). In developmental psychology, de Villiers & de Villiers (2003) have put forward a constitution-thesis similar to Gordon’s: thinking about mental states comes from internalizing the language with which these states are expressed in the child’s linguistic environment. More specifically, mastery of the grammatical rules for embedding tensed complement clauses under verbs of speech or cognition provides children with a necessary representational format for dealing with false beliefs. However, the correlation between linguistic exposure and mindreading does not depend on the use of specific grammatical structures. In a training study, Lohmann & Tomasello (2003) found that performance on a false-belief task is enhanced by simply using perspective-shifting discourse, without any use of sentential complement syntax. Moreover, syntax is not constitutive of the mentalistic capacities of adults: Varley et al. (2001) and Apperly et al. (2006) provided clear evidence that adults with profound grammatical impairment show no impairments on non-verbal tests of mindreading. Finally, mastery of sentence complements is not even a necessary condition of the development of mindreading in children. Perner et al. (2005) have shown that such mastery may be required for statements about beliefs but not about desires (as in English), for both beliefs and desires (as in German), or for neither beliefs nor desires (as in Chinese); and yet children who learn each of these three languages all understand and talk about desire significantly earlier than belief.

b. Simulation in Low-Level Mindreading

Another argument for a (predominantly) simulationist approach to mindreading consists in pointing out that TT is limited to high-level mindreading (essentially the attribution of propositional attitudes), whereas ST is also well equipped to account for forms of low-level mindreading such as the perception of emotions or the recognition of facial expressions and motor intentions (see Slors & Macdonald 2008: 155).

This claim finds its main support in the interplay between ST and neuroscience. In the early 1990s mirror neurons were first described in the ventral premotor cortex and inferior parietal lobe of macaque monkeys. These visuomotor neurons activate not only when the monkey executes motor acts (such as grasping, manipulating, holding, and tearing objects), but also when it observes the same, or similar, acts performed by the experimenter or a conspecific. Although there is only one study that seems to offer direct evidence for the existence of mirror neurons in humans (Mukamel et al. 2010), many neurophysiological and brain imaging investigations support the existence of a human action mirroring system. For example, fMRI studies using action observation or imitation tasks demonstrated activation in areas in the human ventral premotor and parietal cortices assumed to be homologous to the areas in the monkey cortex containing mirror neurons (see Rizzolatti et al. 2002). It should be emphasized that most of the mirror neurons that discharge when a certain type of motor act is performed also activate when the same act is perceived, even though it is not performed with the same physical movement—for example, many mirror neurons that discharge when the monkey grasps food with the hand also activate when it sees a conspecific who grasps food with the mouth. This seems to suggest that mirror neurons code or represent an action at a high level of abstraction, that is, they are receptive not only to a mere movement but indeed to an action.

In 1998, Vittorio Gallese and Goldman published a very influential article in which mirror neurons were proposed as the basis of the simulative process. When the mirror neurons in the simulator’s brain are externally activated in observation mode, their activity matches (simulates or resonates with) that of the mirror neurons in the target’s brain, and this resonance process retrodictively outputs a representation of the target’s intention from a perception of her movement.

More recently a number of objections have been raised against the “resonance” version of ST advocated by researchers who have built on Gallese and Goldman’s hypothesis. Some critics, while admitting the presence of mirror neurons in both non-human and human primates, have drastically reappraised their role in mindreading. For example, Saxe (2009) has argued that there is no evidence that mirror neurons represent the internal states of the target rather than some relatively abstract properties of observed actions (see also Jacob & Jeannerod 2005; Jacob 2008). Goldman himself has moderated his original position: unlike Gallese, Keysers & Rizzolatti (2004), who propose mirror systems as the unifying basis of all social cognition, Goldman (2006) now considers mirror neuron activity, or motor resonance in general, as merely a possible component of low-level mindreading. Nonetheless, it is fair to say that resonance phenomena remain at the forefront of the field of social neuroscience (see Slors & Macdonald 2008: 156).

3. Social Cognition without Mindreading

By the early 21st century, the primacy that both TT and ST assign to mindreading in social cognition had been challenged. One line of attack has come from philosophers working in the phenomenological tradition, such as Shaun Gallagher, Matthew Ratcliffe, and Dan Zahavi (see Gallagher & Zahavi 2008). Others working more from within the analytic tradition, such as José Luis Bermúdez (2005, 2006b), Dan Hutto (2008), and Heidi Maibom (2003, 2007), have made similar points. Let us focus on Bermúdez’s contribution, because he offers a very clear account of the kind of cognitive mechanisms that might subserve forms of social understanding and coordination without mindreading (for a brief overview of this literature, see Slors & Macdonald 2008; for an exhaustive examination, see Herschbach 2010).

Bermúdez (2005) argues that the role of high-level mindreading in social cognition needs to be drastically re-evaluated. We must rethink the traditional nexus between intelligent behavior and propositional attitudes, realizing that much social understanding and social coordination are subserved by mechanisms that do not capitalize on the machinery of intentional psychology. For example, a mechanism of emotional sensitivity such as “social referencing” is a form of low-level mindreading that subserves social understanding and social coordination without involving the attribution of propositional attitudes (see Bermúdez 2006a: 55).

To this point Bermúdez is on the same wavelength as simulationists and social neuroscientists in drawing our attention to forms of low-level mindreading that have been largely neglected by philosophers. However, Bermúdez goes a step beyond them and explores cases of social interactions that point in a different direction, that is, situations that involve mechanisms that can no longer be described as mindreading mechanisms. He offers two examples.

(1) In game theory there are social interactions that are modeled without assuming that the agents involved are engaged in explaining or predicting each other’s behavior. In social situations that have the structure of the iterated prisoner’s dilemma, the so-called “tit-for-tat” heuristic simply says: “start out cooperating and then mirror your partner’s move for each successive move” (Axelrod 1984). Applying this heuristic simply requires understanding the moves available to each player (cooperation or defection), and remembering what happened in the last round. So we have here a case of social interaction that is conducted on the basis of a heuristic strategy that looks backward to the results of previous interactions rather than to their psychological etiology. We do not need to infer other players’ reasons; we only have to coordinate our behavior with theirs.
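Because the heuristic is purely behavioral, it can be written down without any mentalistic vocabulary at all. Here is a minimal sketch in Python of the tit-for-tat rule as just described (an illustration, not code from Axelrod or Bermúdez; the move labels and the rival strategy are invented for the example):

def tit_for_tat(opponent_history):
    # Cooperate ("C") in the first round; afterwards, mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=5):
    # Each strategy sees only the other player's past moves, never their reasons.
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

# Example: tit-for-tat meets an unconditional defector ("D" on every round).
print(play(tit_for_tat, lambda history: "D"))
# (['C', 'D', 'D', 'D', 'D'], ['D', 'D', 'D', 'D', 'D'])

Nothing in tit_for_tat represents the other player’s beliefs, desires, or intentions; it consults only the last observed move.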

(2) There is another important class of social interactions that involve our predicting and/or explaining the actions of other participants, but in which the relevant predictions and explanations seem to proceed without us having to attribute propositional attitudes. These social interactions rest on what social psychologists call “scripts” (“frames” in artificial intelligence), that is, complex information structures that allow predictions to be made on the basis of the specification of the purpose of some social practice (for example, eating a meal at a restaurant), the various individual roles, and the appropriate sequence of moves.
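A script can be modeled as a simple data structure, and script-based prediction as lookup within it. The following toy sketch (an illustration only; the contents of the restaurant script are invented for the example) shows how a prediction can be generated from the specification of a purpose, roles, and a sequence of moves alone, without attributing any propositional attitudes:

restaurant_script = {
    "purpose": "eating a meal",
    "roles": ["customer", "server", "cook"],
    "sequence": ["enter", "order", "eat", "pay", "leave"],
}

def predict_next_move(script, moves_observed_so_far):
    # Predict by position in the script's canonical sequence of moves.
    return script["sequence"][len(moves_observed_so_far)]

print(predict_next_move(restaurant_script, ["enter", "order"]))  # -> 'eat'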

According to Bermúdez, then, much social interaction is enabled by a suite of relatively simple mechanisms that exploit purely behavioral regularities. It is important to notice that these mechanisms subserve central social cognition (in Fodor’s sense). Nevertheless, they implement relatively simple processes of template matching and pattern recognition, that is, processes that are paradigmatic cases of perceptual processing. For example, when a player A applies the tit-for-tat rule, A must determine what the other player B did in the preceding round. This can be implemented by a template-matching process in which A verifies whether B’s behavioral pattern matches A’s prototype of cooperation or of defection. Detecting the social roles implicated in a script-based interaction is likewise a case of template matching: one verifies whether the perceived behavior matches one of the templates associated with the script (or the prototype represented in the “frame”).
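To make the template-matching idea concrete, here is a toy sketch (again an illustration; the feature encoding and the prototype vectors are invented for the example) of classifying an observed behavioral pattern by its nearest prototype:

def nearest_template(observed, prototypes):
    # Return the label of the prototype vector closest to the observation.
    def squared_distance(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(prototypes, key=lambda label: squared_distance(observed, prototypes[label]))

# Hypothetical two-dimensional encoding of a round of play,
# e.g. (resources shared, resources withheld).
prototypes = {"cooperation": (1.0, 0.0), "defection": (0.0, 1.0)}
print(nearest_template((0.9, 0.2), prototypes))  # -> 'cooperation'

The classification runs entirely over perceptible behavioral features; at no point is a belief or desire represented.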

Bermúdez (2005: 223) notes that the idea that much of what we intuitively identify as central processing is actually implemented by mechanisms of template matching and pattern recognition has been repeatedly put forward by advocates of connectionist computationalism, especially by Paul M. Churchland. Unlike Churchland, however, Bermúdez does not carry the reappraisal of the role of propositional attitudes in social cognition to the point of their elimination; he argues that social cognition does not involve high-level mindreading when the social world is “transparent” or “ready-to-hand” (in Heidegger’s sense of zuhanden). However, when we find ourselves in social situations that are “opaque,” that is, situations in which all the standard mechanisms of social understanding and interpersonal negotiation break down, it seems that we cannot help but appeal to the type of metarepresentational thinking characteristic of intentional psychology (2005: 205-6).

4. References and Further Reading

a. Suggested Further Reading

  • Apperly, I. (2010). Mindreaders: The Cognitive Basis of “Theory of Mind.” Hove, East Sussex, Psychology Press.
  • Carruthers, P. and Smith, P. K. (eds.) (1996). Theories of Theories of Mind. Cambridge, Cambridge University Press.
  • Churchland, P. M. (1994). “Folk Psychology (2).” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford, Blackwell, pp. 308–316.
  • Cundall, M. (2008). “Autism.” In The Internet Encyclopedia of Philosophy.
  • Davies, M. and Stone, T. (eds.) (1995a). Folk Psychology: The Theory of Mind Debate. Oxford, Blackwell.
  • Davies, M. and Stone, T. (eds.) (1995b). Mental Simulation: Evaluations and Applications. Oxford, Blackwell.
  • Decety, J. and Cacioppo, J. T. (2011). The Oxford Handbook of Social Neuroscience. Oxford, Oxford University Press.
  • Doherty, M. J. (2009). Theory of Mind. How Children Understand Others’ Thoughts and Feelings. Hove, East Sussex, Psychology Press.
  • Dokic, J. and Proust, J. (eds.) (2002). Simulation and Knowledge of Action. Amsterdam, John Benjamins.
  • Gerrans, P. (2009). “Imitation and Theory of Mind.” In G. Berntson and J. T. Cacioppo (eds.), Handbook of Neuroscience for the Behavioral Sciences. Chicago, University of Chicago Press, vol. 2, pp. 905–922.
  • Gordon, R. M. (2009). “Folk Psychology as Mental Simulation.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2009 Edition).
  • Hutto, D., Herschbach, M. and Southgate, V. (eds.) (2011). Special Issue “Social Cognition: Mindreading and Alternatives.” Review of Philosophy and Psychology 2(3).
  • Kind, A. (2005). “Introspection.” In The Internet Encyclopedia of Philosophy.
  • Meini, C. (2007). “Naïve psychology and simulations.” In M. Marraffa, M. De Caro and F. Ferretti (eds.), Cartographies of the Mind. Dordrecht, Kluwer, pp. 283–294.
  • Nichols, S. (2002). “Folk Psychology.” In Encyclopedia of Cognitive Science. London, Nature Publishing Group, pp. 134–140.
  • Ravenscroft, I. (2010). “Folk Psychology as a Theory.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition).
  • Rizzolatti, G., Sinigaglia, C. and Anderson, F. (2007). Mirrors in the Brain. How Our Minds Share Actions, Emotions, and Experience. Oxford, Oxford University Press.
  • Saxe, R. (2009). “The happiness of the fish: Evidence for a common theory of one’s own and others’ actions.” In K. D. Markman, W. M. P. Klein and J. A. Suhr (eds.), The Handbook of Imagination and Mental Simulation. New York, Psychology Press, pp. 257–266.
  • Shanton, K. and Goldman, A. (2010). “Simulation theory.” Wiley Interdisciplinary Reviews: Cognitive Science 1(4): 527–538.
  • Stich, S. and Rey, G. (1998). “Folk psychology.” In E. Craig (ed.), Routledge Encyclopedia of Philosophy. London, Routledge.
  • Von Eckardt, B. (1994). “Folk Psychology (1).” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford, Blackwell, pp. 300–307.
  • Weiskopf, D. A. (2011). “The Theory-Theory of Concepts.” In The Internet Encyclopedia of Philosophy.

b. References

  • Apperly, I. A., Samson, D., Carroll, N., Hussain, S. and Humphreys, G. (2006). “Intact first- and second-order false belief reasoning in a patient with severely impaired grammar.” Social Neuroscience 1(3-4): 334–348.
  • Axelrod, R. (1984). The Evolution of Cooperation. New York, Basic Books.
  • Baillargeon, R., Scott, R. M. and He, Z. (2010). “False-belief understanding in infants.” Trends in Cognitive Sciences 14(3): 110–118.
  • Bem, D. J. (1972). “Self-Perception Theory.” In L. Berkowitz (ed.), Advances in Experimental Social Psychology. New York, Academic Press, vol. 6, pp. 1–62.
  • Bermúdez, J. L. (2005). Philosophy of Psychology: A Contemporary Introduction. London, Routledge.
  • Bermúdez, J. L. (2006a). “Commonsense psychology and the interface problem: Reply to Botterill.” SWIF Philosophy of Mind Review 5(3): 54–57.
  • Bermúdez, J. L. (2006b), “Arguing for eliminativism.” In B. L. Keeley (ed.), Paul Churchland. Cambridge, Cambridge University Press, pp. 32–65.
  • Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht, Kluwer.
  • Botterill, G. and Carruthers, P. (1999). The Philosophy of Psychology. Cambridge, Cambridge University Press.
  • Carey, S. and Spelke, E. (1996). “Science and core knowledge.” Philosophy of Science 63: 515–533.
  • Carruthers, P. (1996). “Simulation and self-knowledge.” In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 22–38.
  • Carruthers, P. (2006). The Architecture of the Mind. Oxford, Oxford University Press.
  • Carruthers, P. (2009). “How we know our own minds: The relationship between mindreading and metacognition.” Behavioral and Brain Sciences 32: 121–138.
  • Carruthers, P. (2011). The Opacity of Mind: The Cognitive Science of Self-Knowledge. Oxford, Oxford University Press.
  • Craig, A. D. (2002). “How do you feel? Interoception: The sense of the physiological condition of the body.” Nature Reviews Neuroscience 3: 655–666.
  • Currie, G. and Ravenscroft, I. (2002). Recreative Minds: Imagination in Philosophy and Psychology. Oxford, Oxford University Press.
  • de Villiers, J. G. and de Villiers, P. A. (2003). “Language for thought: Coming to understand false beliefs.” In D. Gentner and S. Goldin-Meadow (eds.), Language in Mind. Cambridge, MA, MIT Press, pp. 335–384.
  • Engelbert, M. and Carruthers, P. (2010). “Introspection.” Wiley Interdisciplinary Reviews: Cognitive Science 1: 245–253.
  • Fogassi, L. and Ferrari P. F. (2010). “Mirror systems.” Wiley Interdisciplinary Reviews: Cognitive Science 2(1): 22–38.
  • Frith, C. (1992). Cognitive Neuropsychology of Schizophrenia. Hove, Erlbaum.
  • Frith, U. and Happé, F. (1999). “Theory of mind and self-consciousness: What is it like to be autistic?” Mind & Language 14(1): 1–22.
  • Gallagher, S. and Zahavi, D. (2008). The Phenomenological Mind. London, Routledge.
  • Gallese, V. and Goldman, A. (1998). “Mirror neurons and the simulation theory of mind-reading.” Trends in Cognitive Sciences 2(12): 493–501.
  • Gallese, V., Keysers, C. and Rizzolatti, G. (2004). “A unifying view of the basis of social cognition.” Trends in Cognitive Sciences 8: 396–403.
  • Gennaro, R. J. (2005). “Consciousness.” In The Internet Encyclopedia of Philosophy.
  • Gerrans, P. and Stone, V. E. (2008). “Generous or parsimonious cognitive architecture? Cognitive neuroscience and Theory of Mind.” British Journal for the Philosophy of Science 59: 121–141.
  • Goldman, A. I. (1989). “Interpretation psychologized.” Mind and Language 4: 161–185; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 74–99.
  • Goldman, A. I. (1993). “The psychology of folk psychology.” Behavioral and Brain Sciences 16: 15–28.
  • Goldman, A. I. (1995). “In defense of the simulation theory.” In M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, pp. 191–206.
  • Goldman, A. I. (2006). Simulating Minds. Oxford, Oxford University Press.
  • Goldman, A. I. and Sripada, C. (2005). “Simulationist models of face-based emotion recognition.” Cognition 94: 193–213.
  • Gopnik, A. (1993). “How we read our own minds: The illusion of first-person knowledge of intentionality.” Behavioral and Brain Sciences 16: 1–14.
  • Gopnik, A. and Astington, J. W. (1988). “Children’s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction.” Child Development 59: 26–37.
  • Gopnik, A. and Meltzoff, A. (1997). Words, Thoughts, and Theories. Cambridge, MA, MIT Press.
  • Gopnik, A. and Schulz, L. (2004). “Mechanisms of theory-formation in young children.” Trends in Cognitive Sciences 8(8): 371–377.
  • Gopnik, A. and Schulz, L. (eds.) (2007). Causal Learning: Psychology, Philosophy, and Computation. New York, Oxford University Press.
  • Gordon, R. M. (1986). “Folk psychology as simulation.” Mind and Language, 1: 158–171; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 60–73.
  • Gordon, R. M. (1995). “Simulation without introspection or inference from me to you.” In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford, Blackwell, pp. 53–67.
  • Gordon, R. M. (1996). “Radical simulationism.” In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 11–21.
  • Gordon, R. M. (2005). “Simulation and systematic errors in prediction.” Trends in Cognitive Sciences 9: 361–362.
  • Gordon, R. M. (2007). “Ascent routines for propositional attitudes.” Synthese 159: 151–165.
  • Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA, Harvard University Press.
  • Harris, P. L. (1989). Children and Emotion: The Development of Psychological Understanding. Oxford, Blackwell.
  • Harris, P. L. (2000). The Work of the Imagination. Oxford: Blackwell.
  • Heider, F. (1958). The Psychology of Interpersonal Relations, New York, Wiley.
  • Heider, F. and Simmel, M. (1944). “An experimental study of apparent behavior.” American Journal of Psychology 57: 243–259.
  • Herschbach, M. (2010). Beyond Folk Psychology? Toward an Enriched Account of Social Understanding. PhD dissertation, University of California, San Diego.
  • Hurlburt, R., Happé, F. and Frith, U. (1994). “Sampling the form of inner experience in three adults with Asperger syndrome.” Psychological Medicine 24: 385–395.
  • Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA, MIT Press.
  • Hutto, D. D. (2008). Folk Psychological Narratives: The Sociocultural Basis of Understanding Reasons. Cambridge, MA, MIT Press.
  • Jacob, P. (2008). “What do mirror neurons contribute to human social cognition?” Mind and Language 23: 190–223.
  • Jacob, P. and Jeannerod, M. (2005). “The motor theory of social cognition: A critique.” Trends in Cognitive Science 9: 21–25.
  • Johansson, P., Hall, L., Sikström, S., Tärning, B. and Lind, A. (2006). “How something can be said about telling more than we can know: On choice blindness and introspection.” Consciousness and Cognition 15: 673–692.
  • Leslie, A. M. (1994). “ToMM, ToBy, and agency: Core architecture and domain specificity.” In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge, Cambridge University Press, pp. 119–148.
  • Leslie, A. M. (1998). “Mind, child’s theory of.” In E. Craig (ed.), Routledge Encyclopedia of Philosophy. London, Routledge.
  • Leslie, A. M. (2000). “‘Theory of mind’ as a mechanism of selective attention.” In M. Gazzaniga (ed.), The New Cognitive Neurosciences. Cambridge, MA, MIT Press, 2nd edition, pp. 1235–1247.
  • Leslie, A. M. (2005). “Developmental parallels in understanding minds and bodies.” Trends in Cognitive Sciences 9(10): 459–462.
  • Leslie, A. M., Friedman, O. and German, T. P. (2004). “Core mechanisms in ‘theory of mind’.” Trends in Cognitive Sciences 8(12): 528–533.
  • Leslie, A. M. and Polizzi, P. (1998). “Inhibitory processing in the false belief task: Two conjectures.” Developmental Science 1: 247–254.
  • Leslie, A. M. and Thaiss, L. (1992). “Domain specificity in conceptual development: Neuropsychological evidence from autism.” Cognition 43: 225–251.
  • Lewis, D. (1972). “Psychophysical and theoretical identifications.” Australasian Journal of Philosophy, 50: 249–258.
  • Lohmann, H. and Tomasello, M. (2003). “The role of language in the development of false belief understanding: A training study.” Child Development 74: 1130–1144.
  • Maibom, H. L. (2003). “The mindreader and the scientist.” Mind & Language 18(3): 296–315.
  • Maibom, H. L. (2007). “Social systems.” Philosophical Psychology 20(5): 557–578.
  • Malle, B. F. and Ickes, W. (2000). “Fritz Heider: Philosopher and psychologist.” In G. A. Kimble and M. Wertheimer (eds.), Portraits of Pioneers in Psychology. Washington (DC), American Psychological Association, vol. IV, pp. 195–214.
  • Morton, A. (1980). Frames of Mind. Oxford, Oxford University Press.
  • Mukamel, R., Ekstrom, A.D., Kaplan, J., Iacoboni, M. and Fried, I. (2010). “Single-Neuron Responses in Humans during Execution and Observation of Actions.” Current Biology 20: 750–756.
  • Nagel, E. (1961). The Structure of Science. New York, Harcourt, Brace, and World.
  • Nichols, S. and Stich, S. (2003). Mindreading. Oxford, Oxford University Press.
  • Nichols, S., Stich, S., Leslie, A. and Klein, D. (1996). “Varieties of Off-Line Simulation.” In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 39–74.
  • Nisbett, R. E. and Bellows, N. (1977). “Verbal reports about causal influences on social judgments: Private access versus public theories.” Journal of Personality and Social Psychology, 35: 613–624.
  • Nisbett, R. and Wilson, T. (1977). “Telling more than we can know: Verbal reports on mental processes.” Psychological Review 84: 231–259.
  • Onishi, K. H. and Baillargeon, R. (2005). “Do 15-month-old infants understand false beliefs?” Science 308: 255–258.
  • Perner, J. (1991). Understanding the Representational Mind. Cambridge, MA, MIT Press.
  • Perner, J. and Aichhorn, M. (2008). “Theory of Mind, language, and the temporo-parietal junction mystery.” Trends in Cognitive Sciences 12(4): 123–126.
  • Perner, J., Baker, S. and Hutton, D. (1994). “Prelief: The conceptual origins of belief and pretence.” In C. Lewis and P. Mitchell (eds.), Children’s Early Understanding of Mind. Hillsdale, NJ, Erlbaum, pp. 261–286.
  • Perner, J. and Kühberger, A. (2005). “Mental simulation: Royal road to other minds?” In B. F. Malle and S. D. Hodges (eds.), Other Minds. New York, Guilford Press, pp. 166–181.
  • Perner, J. and Leekam, S. (2008). “The curious incident of the photo that was accused of being false: Issues of domain specificity in development, autism, and brain imaging.” The Quarterly Journal of Experimental Psychology 61(1): 76–89.
  • Perner, J., Zauner, P. and Sprung, M. (2005). “What does ‘that’ have to do with point of view? Conflicting desires and ‘want’ in German.” In J. W. Astington and J. A. Baird (eds.), Why Language Matters for Theory of Mind. Oxford, Oxford University Press, pp. 220–244.
  • Ramsey, W. (2011). “Eliminative Materialism.” In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2011 Edition).
  • Rizzolatti, G., Fogassi, L. and Gallese V. (2002). “Motor and cognitive functions of the ventral premotor cortex.” Current Opinion in Neurobiology 12:149–154.
  • Robbins, P. (2006). “The ins and outs of introspection.” Philosophy Compass 1(6): 617–630.
  • Ruffman, T. and Perner, J. (2005). “Do infants really understand false belief?” Trends in Cognitive Sciences 9(10): 462–463.
  • Samuels, R. (1998). “Evolutionary psychology and the massive modularity hypothesis.” The British Journal for the Philosophy of Science 49: 575–602.
  • Samuels, R. (2000). “Massively modular minds: Evolutionary psychology and cognitive architecture.” In P. Carruthers and A. Chamberlain (eds.). Evolution and the Human Mind. Cambridge, Cambridge University Press, pp. 13–46.
  • Samuels, R. (2006). “Is the mind massively modular?” In R. J. Stainton (ed.), Contemporary Debates in Cognitive Science. Oxford, Blackwell, pp. 37–56.
  • Saxe, R. (2005). “Against simulation: The argument from error.” Trends in Cognitive Science 9: 174–179.
  • Saxe, R. (2009). “The neural evidence for simulation is weaker than I think you think it is.” Philosophical Studies 144: 447–456.
  • Saxe, R. and Kanwisher, N. (2003). “People thinking about thinking people: The role of the temporo-parietal junction in ‘theory of mind’.” NeuroImage 19: 1835–1842.
  • Schwitzgebel, E. (2010). “Introspection.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition).
  • Sellars, W. (1956). “Empiricism and the philosophy of mind.” In Science, Perception and Reality. London and New York, Routledge & Kegan Paul, 1963, pp. 127–196.
  • Simpson, T., Carruthers, P., Laurence, S. and Stich, S. (2005). “Introduction: Nativism past and present.” In P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind: Structure and Contents. Oxford, Oxford University Press, pp. 3–19.
  • Slors, M. and Macdonald, C. (2008). “Rethinking folk-psychology: Alternatives to theories of mind.” Philosophical Explorations 11(3): 153–161.
  • Spelke, E.S. and Kinzler, K.D. (2007). “Core knowledge.” Developmental Science 10: 89–96.
  • Stich, S. (1983). From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA, MIT Press.
  • Stich, S. and Nichols, S. (1992). “Folk Psychology: Simulation or Tacit Theory?” Mind & Language 7(1): 35–71; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 123–158.
  • Stich, S. and Nichols, S. (1995). “Second Thoughts on Simulation.” In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford, Blackwell, pp. 87–108.
  • Stich, S. and Nichols, S. (2003). “Folk Psychology.” In S. Stich and T. A. Warfield (eds.), The Blackwell Guide to Philosophy of Mind. Oxford, Blackwell, pp. 235–255.
  • Stich, S. and Ravenscroft, I. (1994). “What is folk psychology?” Cognition 50: 447–468.
  • Stueber, K. R. (2006). Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences. Cambridge, MA, MIT Press.
  • Surian, L., Caldi, S. and Sperber, D. (2007). “Attribution of beliefs by 13-month-old infants.” Psychological Science 18(7): 580–586.
  • Varley, R., Siegal, M. and Want, S. C. (2001). “Severe impairment in grammar does not preclude theory of mind.” Neurocase 7: 489–493.
  • Wellman, H. M. (1990). The Child’s Theory of Mind, Cambridge, MA, MIT Press.
  • Wellman, H. M., Cross, D. and Watson, J. (2001). “Meta-analysis of theory-of-mind development: The truth about false belief.” Child Development 72: 655–684.
  • Wilson, D. (2005). “New directions for research on pragmatics and modularity.” Lingua 115: 1129–1146.
  • Wimmer, H., Hogrefe, G. and Perner, J. (1988). “Children’s understanding of informational access as a source of knowledge.” Child Development 59: 386–396.
  • Wimmer, H. and Perner, J. (1983). “Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception.” Cognition 13: 103–128.
  • Zaitchik, D. (1990). “When representations conflict with reality: The preschooler’s problem with false beliefs and ‘false’ photographs.” Cognition 35: 41–68.


Author Information

Massimo Marraffa
Email: marraffa@uniroma3.it
University Roma Tre
Italy

Omnipotence

Omnipotence is the property of being all-powerful; it is one of the traditional divine attributes in Western conceptions of God. This notion of an all-powerful being is often claimed to be incoherent because a being who has the power to do anything would, for instance, have the power to draw a round square. However, it is absurd to suppose that any being, no matter how powerful, could draw a round square.  A common response to this objection is to assert that defenders of divine omnipotence never intended to claim that God could bring about logical absurdities. This observation about what is not meant by omnipotence does little, however, to clarify just what is meant by that term. Philosophers have therefore attempted to state necessary and sufficient conditions for omnipotence.

These proposed analyses are evaluated by several criteria. First, it must be determined whether the property described by the analysis captures what theologians and ordinary religious believers mean when they describe God as omnipotent, almighty, or all-powerful. Omnipotence is thought to be a quite impressive property. Indeed, the traditional God’s omnipotence is one of the attributes that make Him worthy of worship. If, therefore, an analysis implies that certain conceivable beings who are not impressive with respect to their power count as omnipotent, then the analysis is inadequate.

Second, when a particular analysis does seem to be in line with the ordinary use of the term, the next question is whether the property described is self-consistent. For instance, many proposed analyses of omnipotence give inconsistent answers to the question of whether an omnipotent being could create a stone too heavy for it to lift. Third, it is necessary to determine whether omnipotence, so understood, could form part of a coherent total religious view. Some analyses of omnipotence require that an omnipotent being be able to do evil, or to break promises, but God has traditionally been regarded as unable to do these things. It has also been argued that the existence of an omnipotent being would be inconsistent with human freedom. Finally, divine omnipotence is one of the premises leading to the alleged contradiction in traditional religious belief known as the Logical Problem of Evil.

A successful analysis of omnipotence is one which captures the ordinary notion, is free from internal contradiction, and is compatible with the other elements of the religious view in which it is embedded.

Table of Contents

  1. The Self-Consistency of Omnipotence
    1. The Stone Paradox
    2. Voluntarism
    3. Act Theories
    4. Result Theories
    5. Omnipotence and Time
  2. Omnipotence and Necessary Moral Perfection
  3. Omnipotence and Human Freedom
  4. Omnipotence and the Problem of Evil
  5. References and Further Reading

1. The Self-Consistency of Omnipotence

a. The Stone Paradox

Could an omnipotent being create a stone too heavy for it to lift? More generally, could an omnipotent being make something it could not control (Mackie 1955: 210)? This question is known as the Paradox of the Stone, or the Paradox of Omnipotence. It appears that answering either “yes” or “no” will mean that the being in question is not omnipotent after all. For suppose that the being cannot create the stone. Then it seems that it is not omnipotent, for there is something that it cannot do. But suppose the being can create the stone. Then, again, there is something it cannot do, namely, lift the stone it has created.

Although the argument is usually initially stated in this form, as it stands it is not quite valid. From the fact that a particular being is able to create a stone it cannot lift, it does not follow that there is in fact something that that being cannot do. It only follows that if the being were to create the stone, then there would be something it could not do. As a result, the paradox is a problem only for necessary omnitemporal omnipotence, that is, for the view that there is a being who exists necessarily and is necessarily omnipotent at every time (Swinburne 1973; Meierding 1980). There is no problem for a being who is only omnipotent at certain times, because the being in question might very well be omnipotent prior to creating the stone (but not after). Furthermore, the stone paradox provides no reason to suppose there could not be a contingently omnitemporally omnipotent being; all the being in question would need to do is to decide not to create the stone, and then it would be omnipotent at every time. Nevertheless, the Stone Paradox is of interest because necessary omnitemporal omnipotence has traditionally been attributed to God.
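The logical situation can be made explicit. In notation not used by the article (so the following is only a gloss), let $O_t(S)$ say that $S$ is omnipotent at time $t$, let $C$ be the task of creating a stone its creator cannot lift, and let $\mathrm{Can}_t(S, C)$ say that $S$ is able at $t$ to perform $C$:

\begin{align*}
&\text{(i)} \quad \mathrm{Can}_t(S, C) \not\vdash \exists A\, \neg\mathrm{Can}_t(S, A)\\
&\text{(ii)} \quad \mathrm{Can}_t(S, C) \vdash \Diamond\big(S \text{ performs } C \text{ at } t \;\wedge\; \neg O_{t'}(S) \text{ for some } t' > t\big)\\
&\text{(iii)} \quad \Box\, \forall t\, O_t(S) \vdash \neg\mathrm{Can}_t(S, C)
\end{align*}

Line (i) records why the initial statement of the paradox is invalid: ability does not entail actual inability elsewhere. Line (ii), which assumes that an ability entails the possibility of its exercise, states what does follow. Line (iii) then shows why only necessary omnitemporal omnipotence is threatened: a contingently omnipotent being may possess the ability and simply never exercise it.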

The Stone Paradox has been the main focus of those attempting to specify exactly what an omnipotent being could, and could not, do. However, even for those who do not wish to insist on necessary omnitemporal omnipotence, a number of questions arise. Could an omnipotent being draw a square circle? Descartes notoriously answered “yes.” However, the Western philosophical and theological traditions have, at least since Aquinas, almost universally given the opposite answer. The view that an omnipotent being could do absolutely anything, even the logically absurd, is known as voluntarism.

Simply rejecting voluntarism does not give an answer to the Stone Paradox. Creating a stone too heavy for its creator to lift is a possible task. Another possible task which an omnipotent being apparently cannot perform is coming to know that one has never been omnipotent. For human beings, this is a fairly simple task, but for an omnipotent being it would seem to be impossible. The general problem is this: the fact that it is logically possible that some being perform a specified task (that is, the task itself does not contain a contradiction) does not guarantee that it is logically possible for an omnipotent being to perform that task. Coming to know that one has never been omnipotent is an example of a single task that is logically possible for some being to perform, but which is logically impossible for an omnipotent being to perform. The Stone Paradox provides an example of two tasks (creating a stone its creator cannot lift and lifting the stone one has just created) such that each task is logically possible, but it is logically impossible for one task to be performed immediately after the other.

In order to meet these challenges, it is necessary to say something more precise than to simply affirm that an omnipotent being would be able to do whatever is possible. These more precise theories can be divided into two classes: act theories, which say that an omnipotent being would be able to perform any action; and result theories, which say that an omnipotent being would be able to bring about any result.

b. Voluntarism

René Descartes, almost alone in the tradition of Western theology, held that God could do anything, even affirming that “God could have brought it about … that it was not true that twice four make eight” (Descartes 1984-1991: 2:294). If this doctrine is adopted, then the Stone Paradox is dissolved: If an omnipotent being could make contradictions true, then an omnipotent being could make a stone too heavy for it to lift and still lift it (Frankfurt 1964). However, this doctrine is of questionable coherence. To cite just one difficulty, it would seem to follow from the claim that God could make 2 x 4 = 9 that possibly God makes 2 x 4 = 9. However, it is a necessary truth that if God makes 2 x 4 = 9, then 2 x 4 = 9. In standard modal logics, possibly p and necessarily if p then q together entail possibly q, so it seems to follow that possibly 2 x 4 = 9.
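Spelled out in standard modal notation (this formalization is mine, but it tracks the argument in the text), with $P$ abbreviating “God makes $2 \times 4 = 9$” and $Q$ abbreviating “$2 \times 4 = 9$”:

\begin{align*}
&1. \quad \Diamond P && \text{(from Descartes’ doctrine)}\\
&2. \quad \Box(P \rightarrow Q) && \text{(necessarily, if God makes it so, it is so)}\\
&3. \quad \Diamond Q && \text{(from 1 and 2; valid even in the weakest normal modal logic, K)}
\end{align*}

So anyone who accepts the standard modal principles must conclude that it is possible that $2 \times 4 = 9$, which is absurd.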

Descartes does not accept this consequence, but it is not clear how he can avoid it. It has been suggested that he may be implicitly committed to the rejection of one or more widely accepted modal axioms (Curley 1984). These sorts of absurdities have led to the nearly universal rejection of voluntarism by philosophers and theologians.

c. Act Theories

Once voluntarism is rejected, it is necessary to specify more precisely what is meant by saying that an omnipotent being could do anything. One natural way of doing this is to give a definition of the form:

S is omnipotent =df S can perform any action A such that C

where C specifies some conditions A must satisfy. Such theories of omnipotence may be conveniently referred to as act theories. The simplest (non-voluntarist) act theory is:

(1) S is omnipotent =df S can perform any action A such that A is possible

This act theory deals with the problem of drawing a round square and making 2 x 4 = 9: these are not possible actions. There is some difficulty in saying exactly which acts should count as possible, and this threatens to make the condition too weak. For instance, a being who could perform only physically possible actions would not be omnipotent. The usual response, dating back at least to Aquinas, is to say that an action is possible, in the relevant sense, if and only if it is consistent, that is, if it is not self-contradictory.

The Stone Paradox is most effective against act theories. Making a stone one cannot lift is a possible action, so, in order to count as omnipotent according to (1), a being must be able to perform it. However, if any being performs this task then there is a possible task which that being cannot perform immediately afterward, namely, lifting the stone one has just made. It might be objected that this task is not possible for the being in question, but this qualification is not permitted by (1). Definition (1) requires that an omnipotent being should be able to perform any logically possible action, that is, any action which could possibly be performed by any being at all, in any circumstances at all. It is clearly possible that some being perform the action lifting the stone one has just made, so, according to (1), a being who had just performed the action making a stone one cannot lift could not possibly be omnipotent.

This is not a problem for a being who is only contingently omnipotent: such a being might perform the first task, thereby ceasing to be omnipotent, and so be unable to perform the second task, or the being might refrain from performing the first task, and so continue to be omnipotent. However, the Paradox does show that on the contemplated theory no being could be necessarily omnitemporally omnipotent.

It has sometimes been thought that this problem could be solved simply by recognizing that creating a stone an omnipotent being cannot lift is an impossible action, and therefore an omnipotent being need not be able to perform it (Mavrodes 1963). However, this line of objection fails to recognize that, in addition to the impossible action creating a stone an omnipotent being cannot lift, there are also such possible actions as creating a stone one cannot lift and creating a stone its creator cannot lift.

There are further problems. Possible actions also include coming to know that one has never been omnipotent, which, since no one can know falsehoods, no omnipotent being could do. Additionally, this kind of view causes problems for various traditional religious views, such as the assertion by the author of the Epistle to the Hebrews that it is “impossible for God to lie” (Hebrews 6:18), since lying is a possible action.

Medieval philosophers prior to Aquinas often attempted to deal with this problem by claiming that an omnipotent being could perform any action which does not require a defect or infirmity. However, there was very little success in spelling out the meaning of this assertion (Ross 1969: 196-202). Here is a definition which captures the basic idea of these Medieval analyses:

(2)   S is omnipotent =df S can perform any action A such that it is logically possible that S does A.

This is similar to the Medieval suggestion since, according to classical theology, God is necessarily without defect or infirmity, so that, if the action A requires a defect or infirmity, (2) does not require that God, in order to count as omnipotent, should be able to do it. However, (2) runs into the famous ‘McEar’ counter-example (Plantinga 1967: 170; La Croix 1977: 183). Suppose that it is a necessary truth about a certain being, known as McEar, that the only action he performs is scratching his ear. It follows that, if McEar can scratch his ear, he is omnipotent, despite his inability to do anything else. This result is clearly unacceptable.

One response, considered by Alvin Plantinga and advocated by Richard La Croix, is to claim merely that an otherwise God-like being who satisfied this definition would be omnipotent. If the concept of God is otherwise coherent, then this claim is probably true. It also has the benefit of being guaranteed not to create any inconsistencies, for it is built into the definition that God has power only to perform those actions such that it is possible that he perform them. However, to adopt this strategy is to give up on the project of providing a general analysis of omnipotence. Furthermore, this claim, on its own, does not answer the question of the Stone Paradox: is it possible for God to create a stone he cannot lift?

Although not everyone agrees that La Croix’s response is satisfactory, it is widely held that the prospects are not good for a consistent general definition or analysis of omnipotence in terms of acts (Ross 1969: 202-210; Geach 1973; Swinburne 1973; Sobel 2004: ch. 9).

d. Result Theories

The main alternatives to act theories of omnipotence are result theories, theories which analyze omnipotence in terms of the results an omnipotent being would be able to bring about. These results are usually thought of as states of affairs or possible worlds. A possible state of affairs is a way the world could be. Philosophers also sometimes recognize impossible states of affairs, that is, ways the world could not be. For instance, the sky’s being blue is a possible state of affairs, and John’s being a married bachelor is an impossible state of affairs. A possible world is a maximal consistent state of affairs, a complete way the world could be.

Equivalent, or approximately equivalent, result theories can be stated in terms either of states of affairs or of possible worlds. The simplest (non-voluntarist) result theory can be stated, in terms of possible worlds, as follows:

(3) S is omnipotent =df S can bring about any possible world

In other words, for any comprehensive way the world could be, an omnipotent being could bring it about that the world was that way. This account of omnipotence was first clearly laid out and endorsed by Leibniz, who pioneered the philosophical use of the notion of a possible world (Leibniz 1985: sects. 7-8, 52, 416). More recently, James Ross has advocated a similar account, though Ross prefers a formulation in terms of states of affairs (Ross 1969: 210-213):

(4) S is omnipotent =df for every contingent state of affairs p, whether p is the case is logically equivalent to the effective choice, by S, that p

Since every state of affairs must either obtain or not, and since two contradictory states of affairs cannot both obtain, an omnipotent being would have to will some maximal consistent set of contingent states of affairs (Ross 1980: 614), that is, some one possible world. Ross’s definition therefore entails Leibniz’s.

The Leibniz-Ross theory neatly handles all of the objections raised against act theories. First, the Stone Paradox depends on the existence of reflexive actions, that is, actions whose descriptions refer back to the actor. Although states of affairs can refer to agents, a state of affairs does not have an actor. Thus, the phrase ‘there being a stone one cannot lift’ fails to specify a state of affairs, since there is no actor for “one” to refer to. In order to specify a state of affairs, it is necessary to replace “one” with some expression that defines which agent or agents cannot lift the stone. However, there being a stone an omnipotent being cannot lift is clearly not a possible state of affairs. An omnipotent being could therefore not bring it about. On the other hand, there being a stone its creator cannot lift is a possible state of affairs, and could be brought about by an omnipotent being, under the Leibniz-Ross theory, for an omnipotent being could bring it about that some other being created a stone which that being could not lift. Therefore, the Stone Paradox is not a problem for the Leibniz-Ross theory.

The Leibniz-Ross theory is likewise invulnerable to the objection regarding coming to know that one is not omnipotent, for, in this theory, an omnipotent being must be essentially omnipotent, and it is not possible that an essentially omnipotent being should come to know that it is not omnipotent. Therefore, as in the stone case, the omnipotent being could bring about someone’s coming to know that she is not omnipotent, but not an omnipotent being’s coming to know that it is not omnipotent. Finally, no analog to the McEar objection arises for the Leibniz-Ross theory.

While there are no obvious contradictions involved in the Leibniz-Ross theory, there are a number of metaphysical consequences which some have thought odd and, indeed, absurd. First, the Leibniz-Ross theory implies that an omnipotent being exists necessarily. According to Leibniz’s formulation, an omnipotent being would be able to actualize any possible world, but it is absurd to suppose that an omnipotent being should actualize a world in which it never existed. It follows that no such world is possible. On Ross’s formulation, the obtaining of any state of affairs is logically equivalent to its being chosen by an omnipotent being. Therefore, the obtaining of the state of affairs of no omnipotent being ever existing is logically equivalent to an omnipotent being effectively choosing that no omnipotent being should ever exist, but if no omnipotent being ever exists, then no omnipotent being ever chooses. As a result, the state of affairs of no omnipotent being ever existing cannot possibly obtain (Ross 1969: 213-214). Leibniz and Ross are both proponents of the ontological argument for the existence of God, so they both regard this as a benefit of this theory of omnipotence. Others have, however, found it implausible.

Although many people find it intuitive to suppose that there are possible worlds in which there is no omnipotent being, the Leibniz-Ross theory of omnipotence rules out this possibility. The Leibniz-Ross theory may narrow the space of possible worlds even further, for God, the being Leibniz and Ross believe to be omnipotent, is also supposed to be necessarily morally perfect, and there are worlds which intuitively seem possible which a necessarily morally perfect being could not, it seems, create: for instance, worlds in which the only sentient creatures suffer excruciating pain throughout every moment of their existence. On the Leibniz-Ross theory, if the omnipotent being could not create these worlds, then these worlds are not possible.

Furthermore, the Leibniz-Ross theory entails that an omnipotent being not only cannot create beings it cannot control, but cannot create beings it does not control (Mann 1977). In the Leibniz-Ross theory, an omnipotent being must choose every state of affairs which is to obtain, including all of the choices of its creatures. This is often thought to be a serious threat to human freedom.

All of these concerns with the Leibniz-Ross theory point in the same direction: the suggestion that there are logically possible states of affairs which it is nevertheless logically impossible that an omnipotent being, or an omnipotent being who also has the other traditional divine attributes, should actualize. This line of reasoning has led Plantinga to dub the view that God can actualize any possible world “Leibniz’s Lapse” (Plantinga 1974: 180-184).

There is disagreement about exactly which, or how many, possible states of affairs cannot possibly be brought about by an omnipotent being. For instance, philosophers disagree about whether the claim that an omnipotent being exists is necessarily true, necessarily false, or contingent. If it is a contingent matter whether an omnipotent being exists, then the state of affairs of no omnipotent being ever existing is possible, but nevertheless cannot possibly be brought about by an omnipotent being. Perhaps the most widely accepted examples, and those Plantinga focuses on, are statements about the free choices of creatures. Plantinga believes that it is logically impossible that any being other than Caesar should bring about a possible state of affairs such as Caesar’s freely choosing not to cross the Rubicon, for if Caesar’s not crossing the Rubicon had been brought about by some other being (for example, God), then Caesar would not have freely chosen.

If it is accepted that there are some possible states of affairs which it is impossible that an omnipotent being should bring about, a more complicated analysis of omnipotence is needed. An obvious candidate is:

(5)   S is omnipotent =df  S can bring about any state of affairs p such that it is logically possible that S brings about p

However, this brings back the McEar objection, which the Leibniz-Ross theory had escaped. It is essential to McEar that he never bring about anything other than his own scratching of his ear. It is therefore impossible that McEar bring about some other state of affairs. As a result, this definition, once again, wrongly counts McEar as omnipotent, provided only that he is able to scratch his ear. Some philosophers have responded by arguing that there could not possibly be such a being as McEar (Wierenga 1983: 374-375). Others have given up on the project of giving a general analysis of omnipotence (La Croix 1977). Still others have advocated theories of omnipotence which make special accommodation to creaturely freedom (Flint and Freddoso 1983).

An entirely different approach to the problem is advocated by Erik J. Wielenberg (2000). According to Wielenberg, omnipotence cannot be analyzed simply by consideration of which states of affairs an omnipotent being could or could not bring about. Instead, it is necessary to consider why the being could or could not bring them about. Wielenberg proposes the following analysis:

(6) S is omnipotent =df there is no state of affairs p such that S is unable to bring about p at least partially due to lack of power

This analysis avoids attributing omnipotence to McEar since McEar’s limitation seems to be at least in part due to lack of power. It also solves the problem of the consistency of God’s inability to do evil with omnipotence, since God’s inability to do evil is not due to lack of power. Finally, according to Wielenberg, if it is really true that even an omnipotent being could not bring about Caesar’s freely choosing not to cross the Rubicon, then this must be due not to lack of power, but to the logic of the situation. The chief limitation of Wielenberg’s account is that it makes use of some unanalyzed notions whose analysis philosophers have found quite difficult. These are the notion of lack of power and the notion of one state of affairs obtaining partially due to another state of affairs obtaining. Without analyses of these notions, it is hard to tell whether Wielenberg’s analysis is self-consistent and whether it is consistent with other traditional divine attributes.

e. Omnipotence and Time

The Leibniz-Ross theory entails that the exercise of omnipotent power cannot occur within time. This is because, in this view, to exercise omnipotent power is to choose some particular possible world to be actual. To think of such a choice as occurring in time would be to imagine that some possible world could, at some particular time, become actual, having previously been merely possible. This, however, is absurd (Ross 1980: 621). Therefore, on the Leibniz-Ross theory, an omnipotent being can act only atemporally.

The notion of an atemporal action has, however, been found difficult. To give just one example of such a difficulty, it is widely held that acting requires one to be the cause of certain effects. However, many philosophers have also held that it is part of the concept of a cause that it must occur before its effects. Since something atemporal is neither before nor after anything else, there cannot be an atemporal cause, and, therefore, there cannot be an atemporal action.

On the other hand, even apart from the Leibniz-Ross theory, there are difficulties with the notion of being omnipotent at a time. This is because there are contingent states of affairs about the past, but the notion of changing the past is generally agreed to be incoherent (see Time Travel). Thus, omnipotence at a point in time cannot be defined as, for instance, the ability to bring about any contingent state of affairs because, although many past states of affairs are contingent, nothing done in the present, even by an omnipotent being, could possibly bring about a past state of affairs.

Richard Swinburne has proposed an analysis of omnipotence at a point in time based on definition (5) above (Swinburne 1973):

(7) S is omnipotent at time t =df  S is able at t to bring about any state of affairs p such that it is consistent with the facts about what happened before t that, after t, S should bring about p

If the notion of changing the past is incoherent, then (7) does not require that an omnipotent being be able to change the past. However, (7) inherits (5)’s flaw when it comes to McEar: since it is inconsistent to suppose that McEar (who, by hypothesis, is necessarily such that he only scratches his ear) does something other than scratch his ear, he need not have the power to do anything else in order to count as omnipotent. Additionally, there are well-known problems with specifying which facts are about the past. For instance, consider the fact that the U.S. Declaration of Independence was issued 232 years before Barack Obama was elected President. It is difficult to say whether this is a fact about 1776 or about 2008. (Intuitively, it is about both.) In order for (7) to succeed in dealing with the difficulties of temporal omnipotence, there must be a distinction between those facts which are, and those which are not, about the past. However, relational facts like the one under discussion show that it is quite difficult to draw this distinction.

Some philosophers have attempted to meet this difficulty head-on by adopting particular theories of temporal facts (Flint and Freddoso 1983), while others have tried to sidestep the concern by formulating theories of temporal omnipotence which do not require a distinction between past and non-past facts. For instance, Gary Rosenkrantz and Joshua Hoffman advocate the following analysis (Rosenkrantz and Hoffman 1980):

(8) S is omnipotent at t =df  S is able at t to bring about any state of affairs p such that possibly some agent brings about p, and p is unrestrictedly repeatable

Rosenkrantz and Hoffman introduce a number of further qualifications, but the central point of their account is the notion of unrestricted repeatability. Intuitively, an unrestrictedly repeatable state of affairs is one that can obtain, cease to obtain, and then obtain again indefinitely many times, throughout all of history. Mt. Vesuvius’s erupting is unrestrictedly repeatable, but Mt. Vesuvius’s erupting prior to 1900 is not, since the latter cannot obtain at any time after 1900. Rosenkrantz and Hoffman hold that an omnipotent being could, before 1900, have brought about Mt. Vesuvius’s erupting prior to 1900 by, at that time, bringing about Mt. Vesuvius’s erupting. After 1900, an omnipotent being could still bring about the latter state of affairs, though not the former. Since the former state of affairs is not unrestrictedly repeatable, the inability to bring it about after 1900 is no bar to a being’s counting as omnipotent.
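On one rough symbolic gloss (again not the authors’ own formalism; the predicate names are illustrative), the core of (8) runs:

```latex
% A rough gloss of (8); notation is illustrative, not Rosenkrantz and Hoffman's.
% UR(p): p is unrestrictedly repeatable, i.e. for any finite sequence of times
% it is possible that p alternately obtains and fails to obtain at those times.
\[
  \mathrm{UR}(p) \;=_{\mathrm{df}}\;
  \forall n\;\forall t_1 < t_2 < \cdots < t_n\;
  \Diamond\bigl(p \text{ at } t_1 \,\wedge\, \neg p \text{ at } t_2
  \,\wedge\, p \text{ at } t_3 \,\wedge\, \cdots\bigr)
\]
\[
  \mathrm{Omnipotent}(S,t) \;=_{\mathrm{df}}\;
  \forall p\,\bigl[\bigl(\Diamond\,\exists x\, B(x,p) \,\wedge\, \mathrm{UR}(p)\bigr)
  \;\rightarrow\; \mathrm{Able}_t(S,p)\bigr]
\]
```

On this gloss, Mt. Vesuvius’s erupting prior to 1900 fails the UR clause, so no inability with respect to it counts against a being’s omnipotence.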

2. Omnipotence and Necessary Moral Perfection

According to the New Testament, “God cannot be tempted with evil” (James 1:13) and it is “impossible for God to lie” (Hebrews 6:18). Traditionally, these divine inabilities are taken quite seriously, and are said to follow from God’s attribute of impeccability or necessary moral perfection. According to this view, it is impossible for God to do evil. It seems, however, that no being could be both omnipotent and necessarily morally perfect, since an omnipotent being could do anything, but there are many things a necessarily morally perfect being could not do.

The argument can be formulated as follows (Morriston 2001: 144). Consider some particularly evil state of affairs, E, such as every sentient being suffering excruciating pain throughout its entire existence. Then:

(1)   If any being is necessarily morally perfect, then there is no possible world at which that being brings about E

(2)   If any being is omnipotent, then that being has the power to bring about E

(3)   If any being has the power to bring about E, then there is some possible world at which that being brings about E

Therefore,

(4)   No being is both necessarily morally perfect and omnipotent
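The argument is formally valid, as the following machine-checked sketch confirms (a propositional abstraction written for this article, not from the source; the predicate names NMP, Omni, Power, and BringsAboutE are illustrative stand-ins for the notions in premises (1) through (3)):

```lean
-- A Lean 4 abstraction of the argument; all names are illustrative.
-- h1 encodes premise (1), h2 premise (2), h3 premise (3); the theorem
-- itself is conclusion (4): no being is both necessarily morally
-- perfect and omnipotent.
theorem no_nmp_and_omnipotent
    {Being World : Type}
    (NMP Omni Power : Being → Prop)
    (BringsAboutE : Being → World → Prop)
    (h1 : ∀ b, NMP b → ¬ ∃ w, BringsAboutE b w)
    (h2 : ∀ b, Omni b → Power b)
    (h3 : ∀ b, Power b → ∃ w, BringsAboutE b w) :
    ¬ ∃ b, NMP b ∧ Omni b :=
  fun ⟨b, hnmp, homni⟩ => h1 b hnmp (h3 b (h2 b homni))
```

Since the argument is valid, the philosophical action lies entirely in the premises.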

Some theists have simply accepted the conclusion, replacing either necessary moral perfection or omnipotence with some weaker property. For instance, Nelson Pike famously argued that, although no being would deserve the title “God” unless that being were morally perfect, there are nevertheless possible worlds in which the being who is in fact God is not morally perfect, and therefore is not God (Pike 1969). Pike’s view is, in essence, a rather complicated version of the claim that God is only contingently morally perfect, a view which some have regarded as extremely objectionable from a theological standpoint (Geach 1977).

A number of philosophers who have accepted the incompatibility of omnipotence with necessary moral perfection have regarded the latter as more central to religious notions of God, and have argued that divine omnipotence should therefore be rejected (Geach 1977; Morriston 2001; Funkhouser 2006).

Defenders of the compatibility of omnipotence and necessary moral perfection must deny at least one of the premises of the argument, and, indeed, each of them has been denied. Premise (1) is perhaps the most difficult to reject. To be necessarily morally perfect is to be morally perfect in every possible world, but there seem to be some states of affairs such that bringing them about is inconsistent with moral perfection, and so it seems that if any being is necessarily morally perfect, then there are some states of affairs which that being does not bring about in any possible world. However, defenders of certain sorts of divine command theories of ethics are committed to the claim that God is morally perfect only in a trivial sense, and these views will have the result that (1) is false. If what is morally good depends on God’s choice, then, if God chose something else, that something else would be morally good. If this is right, then (1) is false: God could bring about E, but if he did bring about E, then E would be morally good. However, most philosophers regard this line of thought as tending to show the absurdity of these versions of divine command theory, rather than the falsity of (1).

Premise (2) can be rejected by those philosophers who regard omnipotence as the ability to perform any action or bring about any result which is consistent with the actor’s nature, as in definitions (2), (5), and (7).  However, these definitions fall prey to the McEar objection and, more generally, open the door to all kinds of limitations on what an omnipotent being can do.

Many philosophers of action take it as an axiom that there are no necessarily unexercised powers (or abilities, or capacities), and (3) is merely an instance of this general principle. Nevertheless, the rejection of (3) is defended by Wielenberg (2000), who argues by means of the following analogy. Suppose that Hercules is “omni-strong”; that is, he has sufficient strength to lift stones of any weight. Suppose, however, that a certain stone is too slippery for him to get a grip on. He therefore cannot lift it. Hercules’ inability to lift the slippery stone does not count against his omni-strength, since the stone is not too heavy for him, but only too slippery.

In the same way, Wielenberg argues, there are many things which it is not possible for God to do. However, God is omnipotent, since it is not for lack of power that God is unable to do these things, but for other reasons, such as his necessary moral perfection. The aptness of Wielenberg’s analogy is still open to dispute, and the principle that there are no necessarily unexercised powers continues to be widely accepted.

3. Omnipotence and Human Freedom

It is sometimes argued that if the existence of an omnipotent agent is possible, then the existence of a non-omnipotent free agent is impossible. According to this line of thought, if Caesar was free, then Caesar, and only Caesar, could have brought about Caesar’s freely refraining from crossing the Rubicon. However, if Caesar could have brought about that state of affairs, then it must be a possible state of affairs, and an omnipotent being could therefore bring it about. This, however, cannot be correct, for if someone other than Caesar brought about Caesar’s refraining, then Caesar would not have refrained freely. Therefore, an omnipotent being could not bring about this state of affairs. But if even an omnipotent being could not bring it about, then surely Caesar, who is not omnipotent, could not bring it about either. Therefore, Caesar was not free and, by parity of reasoning, neither is any other non-omnipotent agent.

The Leibniz-Ross theory renders the problem even more acute. According to Leibniz, God chooses precisely which possible world will obtain. God, therefore, chooses whether Caesar will cross the Rubicon. However, if someone else chooses what Caesar will do, then Caesar is not free. Similarly, for Ross, Caesar’s crossing the Rubicon is logically equivalent to God’s effectively choosing that Caesar cross the Rubicon. The choice is up to God. It is therefore not up to Caesar, at least not in the sense which (according to some philosophers) is required for free will.

Neither Leibniz nor Ross finds this objection particularly troubling. According to Leibniz, since it is possible that Caesar freely refrain from crossing the Rubicon, there must be a possible world which represents him as doing so. In making a world actual, God does not in any way change the intrinsic character of that world (Leibniz 1985: sect. 52). As a result, had God brought about that world, Caesar would still have been free. Similarly, Ross suggests that whatever sort of independence from external determination freedom requires, it certainly does not require that the agent’s choice be independent of its own logical entailments. On his view, God’s effectively choosing that the agent so choose is logically equivalent to the agent’s so choosing, and so cannot be inconsistent with freedom (Ross 1980: sect. 2).

Compatibilists about free will may be satisfied with the responses given by Leibniz and Ross. Libertarians, however, have generally not been satisfied, and have argued that an omnipotent being need not have the power to bring about such states of affairs as Caesar’s freely refraining from crossing the Rubicon. Most of those who have been so concerned have followed an approach developed by Plantinga (1974: ch. 9). This approach hinges on the existence of a class of propositions known as counterfactuals of freedom. A counterfactual of freedom is a statement about what an individual would freely choose if faced with a certain hypothetical circumstance. For instance, the claim, “If Caesar were offered a bribe of fifty talents, he would freely refrain from crossing the Rubicon,” is a counterfactual of freedom. Now, suppose that Brutus wants Caesar to freely refrain. If he uses force to prevent Caesar from crossing the Rubicon, then he has not succeeded in bringing it about that Caesar freely refrains, for in this case, Caesar’s refraining has been brought about by Brutus and not by Caesar, and so Caesar did not do it freely. This sort of bringing about is known as strongly actualizing. Only Caesar can strongly actualize Caesar’s freely refraining from crossing the Rubicon. However, if Brutus knows that if Caesar were offered the bribe, he would freely refrain, then there is a sense in which Brutus can bring it about that Caesar freely refrains: Brutus can strongly actualize the state of affairs Caesar’s being offered the bribe, and he knows that if he does this then Caesar will freely refrain. In such a case, Brutus would be said to have weakly actualized Caesar’s freely refraining.
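The strong/weak distinction can be summarized in a schematic definition (a simplified gloss of Plantinga’s apparatus, not his own notation), where the box-arrow is the counterfactual conditional “if q were actual, p would obtain”:

```latex
% A simplified gloss of weak actualization; notation is illustrative.
% q \Box\!\!\rightarrow p: the counterfactual "if q were actual, p would obtain".
\[
  \mathrm{WeaklyActualizes}(S,p) \;=_{\mathrm{df}}\;
  \exists q\,\bigl[\,\mathrm{StronglyActualizes}(S,q)
  \;\wedge\; (q \mathrel{\Box\!\!\rightarrow} p)\,\bigr]
\]
```

On this gloss, Brutus weakly actualizes Caesar’s freely refraining by strongly actualizing the bribe offer, given that the counterfactual “if Caesar were offered the bribe, he would freely refrain” is true.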

According to Plantinga, in order for creatures to be free, it must not be up to anyone else which counterfactuals of freedom are true of them, so even an omnipotent being could not bring it about that particular counterfactuals of freedom are true. However, an omnipotent being could presumably bring it about that it knows the true counterfactuals of freedom (or, if the omnipotent being were also essentially omniscient, it would already know them), and it could presumably strongly actualize many of their antecedents, and so weakly actualize a variety of states of affairs in which non-omnipotent beings acted freely. An omnipotent being could not, however, weakly actualize just any possible state of affairs. For instance, if there were no possible circumstance such that, if Caesar were in that circumstance, he would freely refrain from crossing the Rubicon, then even an omnipotent being could not weakly actualize Caesar’s freely refraining.

Among those who accept Plantinga’s arguments, some have attempted to analyze omnipotence in terms of what an omnipotent being could strongly actualize, making appropriate qualifications for free actions. It is typically pointed out that it is logically impossible for any being to strongly actualize a state of affairs in which another being makes a free choice, and it suffices for omnipotence that a being be able to strongly actualize those states of affairs which it is logically possible that that being should strongly actualize (Wierenga 1983). This approach, however, runs into McEar-style counterexamples. Others have attempted to analyze omnipotence in terms of what an omnipotent being could weakly actualize. Flint and Freddoso (1983) require that an omnipotent being S be able to weakly actualize any possibly actualized state of affairs which is consistent with the counterfactuals of freedom about beings other than S. However, as Graham Oppy has pointed out, Flint and Freddoso’s analysis also seems to make omnipotence too easy, since on their account a being who could not strongly actualize such mundane states of affairs as a five-pound stone’s being lifted or a barn’s being painted red could turn out to be omnipotent if it was able to weakly actualize them (Oppy 2005: 74-75).

4. Omnipotence and the Problem of Evil

Divine omnipotence is typically used as a key premise in the famous argument against the existence of God known as the Logical Problem of Evil. The argument can be formulated as follows:

(1)   An omnipotent being would be able to bring about any possible world

(2)   Given the opportunity to bring about some world, a morally perfect being would only bring about the best world available to it

(3)   The actual world is not the best possible world

Therefore,

(4)   The actual world was not brought about by a being who is both omnipotent and morally perfect

The argument is here formulated in Leibnizian terms, and Leibniz notoriously rejected premise (3). Premise (2) has also been rejected: some philosophers have denied that there is a unique best possible world and others, most notably Robert Adams, have argued that even if there is such a world, creating it might not be the best course of action (Adams 1972). However, the premise that is of present concern is (1). Although (1) is accepted by Leibniz and Ross, considerations related to necessary moral perfection and human freedom have led many philosophers to reject it. The rejection of (1) is the central move of Plantinga’s Free Will Defense against the Logical Problem of Evil (Plantinga 1974, ch. 9): If there are worlds that God, though omnipotent, cannot bring about, then the best possible world might be one of these. If this is so, then, despite being both omnipotent and morally perfect, God would bring about a world which was less than the best, such as, perhaps, the actual world.

5. References and Further Reading

  • Adams, Robert Merrihew. 1972. Must God create the best? Philosophical Review 81 (3): 317-332.
  • Aquinas, St. Thomas. 1921 [1274]. The summa theologica of St. Thomas Aquinas. 2nd ed. Trans. Fathers of the English Dominican Province. London: Burns Oates & Washbourne.
    • Part 1, Qu. 25, Art. 3 argues that omnipotence should be understood as the ability to do anything that is absolutely possible, that is, that does not imply a contradiction.
  • Cowan, J. L. 1965. The paradox of omnipotence. Analysis 25: 102-108.
    • Argues, against Mavrodes 1963, that the Stone Paradox cannot be solved by claiming that God can perform only logically possible tasks.
  • Curley, E. M. 1984. Descartes on the creation of the eternal truths. The Philosophical Review 93: 569-597.
  • Descartes, Rene. 1984-1991 [1619-1649]. The philosophical writings of Descartes. Trans. John Cottingham, Robert Stoothoff, Dugald Murdoch, and Anthony Kenny. 3 vols. Cambridge: Cambridge University Press.
    • Defends voluntarism, the thesis that God can do literally anything, even draw a round square. See 2:294 (Sixth Replies) and 3:23-26 (letters to Mersenne).
  • Flint, Thomas P., and Alfred J. Freddoso. 1983. Maximal power. In The existence and nature of God, ed. Alfred J. Freddoso. Notre Dame, IN: University of Notre Dame Press.
    • Combines the apparatus of Plantinga 1974 with an Ockhamist account of foreknowledge to develop a result theory sensitive to issues about time and freedom.
  • Frankfurt, Harry G. 1964. The logic of omnipotence. Philosophical Review 73 (2): 262-263.
    • Points out that if, as Descartes supposed, God can do the logically impossible, then God can create a stone too heavy for him to lift and still lift it.
  • Funkhouser, Eric. 2006. On privileging God’s moral goodness. Faith and Philosophy 23 (4): 409-422.
    • Argues that omnipotence is incompatible with necessary moral perfection, and that omnipotence is not a perfection, and therefore should not be attributed to God.
  • Geach, P. T. 1973. Omnipotence. Philosophy 48 (183): 7-20.
    • Considers four theories of omnipotence and argues that they are all unacceptable.
  • La Croix, Richard R. 1977. The impossibility of defining ‘omnipotence’. Philosophical Studies 32 (2): 181-190.
    • Argues that every possible definition of omnipotence either renders omnipotence inconsistent with traditional divine attributes or falls prey to McEar-style counterexamples. This article is responsible for introducing the name ‘McEar.’
  • Leibniz, G. W. 1985 [1710]. Theodicy. Ed. Austin Farrer. Trans. E. M. Huggard. La Salle, Ill.: Open Court.
    • Argues that God’s omnipotence consists in his ability to actualize any possible world, but God is impelled by a ‘moral necessity’ to choose the best.
  • Mackie, J. L. 1955. Evil and omnipotence. Mind 64 (254): 200-212.
    • Argues that it is incoherent to suppose that a world containing evil was created by an omnipotent and perfectly good being.
  • Mann, William E. 1977. Ross on omnipotence. International Journal for Philosophy of Religion 8 (2): 142-147.
    • Shows that, given Ross’s theory of omnipotence (Ross 1969), an omnipotent being cannot freely decide to leave it up to others whether a certain state of affairs should obtain.
  • Mavrodes, George I. 1963. Some puzzles concerning omnipotence. Philosophical Review 72 (2): 221-223.
    • Argues that an omnipotent being could not create a stone so heavy he could not lift it, since the notion of a stone too heavy to be lifted by an omnipotent being is incoherent.
  • Meierding, Loren. 1980. The impossibility of necessary omnitemporal omnipotence. International Journal for Philosophy of Religion 11 (1): 21-26.
    • Formalizes Swinburne’s argument that only necessary omnitemporal omnipotence is incoherent (Swinburne 1973).
  • Morriston, Wes. 2001. Omnipotence and necessary moral perfection: are they compatible? Religious Studies 37 (2): 143-160.
    • Argues that no being could be both omnipotent and necessarily morally perfect.
  • Oppy, Graham. 2005. Omnipotence. Philosophy and Phenomenological Research 71 (1): 58-84.
    • Criticizes several recent theories of omnipotence (Rosenkrantz and Hoffman 1980; Flint and Freddoso 1983; Wierenga 1983) and argues that the God of ‘orthodox monotheism’ should not be regarded as omnipotent at all.
  • Pike, Nelson. 1969. Omnipotence and God’s ability to sin. American Philosophical Quarterly 6 (3): 208-216.
    • Argues that the individual who is in fact God is able to sin, but that the sentence ‘God sins’ is nevertheless necessarily false.
  • Plantinga, Alvin. 1967. God and other minds: a study of the rational justification of belief in God. Ithaca, NY: Cornell University Press.
    • Ch. 7, sect. 2 introduced the ‘McEar’ counterexample to certain definitions of omnipotence (p. 170).
  • Plantinga, Alvin. 1974. The nature of necessity. Oxford: Clarendon Press.
    • Chapter 9 argues that there are possible worlds which God, though omnipotent, cannot actualize.
  • Rosenkrantz, Gary, and Joshua Hoffman. 1980. What an omnipotent agent can do. International Journal for Philosophy of Religion 11 (1): 1-19.
    • Defends a result theory according to which an omnipotent agent can actualize any unrestrictedly repeatable state of affairs.
  • Ross, James F. 1969. Philosophical theology. Indianapolis: Bobbs Merrill.
    • Omnipotence is the topic of chapter 5. After a survey of Scholastic theories of omnipotence, Ross argues that no act theory of omnipotence can succeed. Ross then presents his own theory according to which a being is omnipotent if for any contingent state of affairs p, it is up to that being to choose whether p obtains.
  • Ross, James F. 1980. Creation. Journal of Philosophy 77 (10): 614-629.
    • Further develops, and defends from objections, the account of omnipotence given in Ross 1969. Section 2 answers the objection that Ross’s theory leaves no room for human freedom (Mann 1977).
  • Swinburne, Richard. 1973. Omnipotence. American Philosophical Quarterly 10: 231-237.
    • Argues that a result theory can, and an act theory cannot, defeat the Stone Paradox. However, it is conceded that the Paradox shows that no temporal being could be essentially omnipotent.
  • Wielenberg, Erik J. 2000. Omnipotence again. Faith and Philosophy 17 (1): 26-47.
    • Criticizes Wierenga 1983 and Flint and Freddoso 1983 and argues for a result theory according to which there is no state of affairs such that lack of power prevents an omnipotent being from actualizing it.
  • Wierenga, Edward R. 1983. Omnipotence defined. Philosophy and Phenomenological Research 43 (3): 363-375.
    • Defends a result theory, and argues that a being like McEar is impossible.

Author Information

Kenneth L. Pearce
Email: kpearce@usc.edu
University of Southern California
U. S. A.

Edmund Husserl: Intentionality and Intentional Content

Edmund Husserl (1859—1938) was an influential thinker of the first half of the twentieth century. His philosophy was heavily influenced by the works of Franz Brentano and Bernard Bolzano, and was also influenced in various ways by interaction with contemporaries such as Alexius Meinong, Kasimir Twardowski, and Gottlob Frege. In his own right, Husserl is considered the founder of twentieth century Phenomenology with influence extending to thinkers such as Martin Heidegger, Jean-Paul Sartre, Maurice Merleau-Ponty, and to contemporary continental philosophy generally. Husserl’s philosophy is also being discussed in connection with contemporary research in the cognitive sciences, logic, the philosophy of language, and the philosophy of mind, as well as in discussions of collective intentionality. At the center of Husserl’s philosophical investigations is the notion of the intentionality of consciousness and the related notion of intentional content (what Husserl first called ‘act-matter’ and then the intentional ‘noema’). To say that thought is “intentional” is to say that it is of the nature of thought to be directed toward or about objects. To speak of the “intentional content” of a thought is to speak of the mode or way in which a thought is about an object. Different thoughts present objects in different ways (from different perspectives or under different descriptions) and one way of doing justice to this fact is to speak of these thoughts as having different intentional contents. For Husserl, intentionality includes a wide range of phenomena, from perceptions, judgments, and memories to the experience of other conscious subjects as subjects (inter-subjective experience) and aesthetic experience, just to name a few. Given the pervasive role he takes intentionality to play in all thought and experience, Husserl believes that a systematic theory of intentionality has a role to play in clarifying and founding most other areas of philosophical concern, such as the theory of consciousness, the philosophy of language, the philosophy of logic, epistemology, and the philosophies of action and value. This article presents the key elements of Husserl’s understanding of intentionality and intentional content, specifically as these are developed in his works Logical Investigations and Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy.

Table of Contents

  1. Intentionality: Background and General Considerations
    1. Intentional Content
  2. Logical Investigations
    1. Intentionality in Logical Investigations
      1. Act-Character
      2. Act-Matter
    2. Intentionality, Meaning and Expression in Logical Investigations
      1. Meaning and Expression
      2. Essentially Occasional Expressions: Indexicals
  3. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: The Perceptual Noema
    1. Noesis and Noema: Terminology and Ontology
    2. Structural Features of the Noema
    3. Systems of Noemata and Explication
    4. Additional Considerations
  4. References and Further Reading
    1. Works by Husserl
    2. Secondary Sources

1. Intentionality: Background and General Considerations

Franz Brentano (1838—1917) is generally credited with having inspired renewed interest in the idea of intentionality, especially in his lectures and in his 1874 book Psychology from an Empirical Standpoint. In this work Brentano is, among other things, concerned to identify the proper sphere or subject matter of psychology. Influenced in various ways by Aristotle’s psychology, by the medieval notion of the intentio of a thought, and by modern philosophical views such as those of Descartes and the empiricists, he identifies intentionality as the mark or distinctive characteristic of the mental. For Brentano this means that every mental phenomenon involves the “intentional inexistence” of an object toward which the mental phenomenon is directed. While every such mental phenomenon has an object, different mental phenomena relate to their objects in different ways depending on whether they are mental acts of presenting something, of judging about something, or of evaluating something as good or bad. Identifying intentionality as the mark of the mental in this way opens up the possibility of studying the mind in terms of its relatedness to objects, the different modes or forms that this relatedness takes (perceiving, imagining, hallucinating, and so forth), and in terms of the relationships that these different modes of intentionality bear to one another (the relationships between presentations, judgments, and evaluations; for example, that every judgment fundamentally depends on a presentation the object of which it is a judgment about). Husserl studied with Brentano from 1884 to 1886 and, along with others such as Alexius Meinong, Kasimir Twardowski, and Carl Stumpf, took away from this experience an abiding interest in the analysis of the intentionality of mind as a key to the clarification of other issues in philosophy.

It is important to note the distinction between intentionality in the sense under discussion here on the one hand and the idea of an intention in the sense of an intelligent agent’s goal or purpose in taking a specific action on the other. The intentionality under consideration here includes the idea of an agent’s intentions to do things, but is also much broader, applying to any sort of object-directed thought or experience whatsoever. Thus, while it would be normal to say that “Jack intended to score a point when he kicked the ball toward the goal”, in the sense of ‘intention’ pertinent to Husserl it is equally correct to say that “Jack intended the bird as a blue jay”, the latter being a way of saying that Jack directed his mind toward the bird by thinking of it or perceiving it as a blue jay.

Husserl himself analyzes intentionality in terms of three central ideas: intentional act, intentional object, and intentional content. It is arguably in Husserl’s Logical Investigations that these ideas receive their first systematic treatment as distinct but correlative elements in the structure of thought and experience. This section clarifies these three notions based on Husserl’s main commitments, though not always using his exact terminology.

The intentional act or psychological mode of a thought is the particular kind of mental event that it is, whether this be perceiving, believing, evaluating, remembering, or something else. The intentional act can be distinguished from its object, which is the topic, thing, or state of affairs that the act is about. So the intentional state of seeing a white dog can be analyzed in terms of its intentional act, visually perceiving, and in terms of its intentional object, a white dog. Intentional act and intentional object are distinct since it is possible for the same kind of intentional act to be directed at different objects (perceiving a tree vs. perceiving a pond vs. perceiving a house) and for different intentional acts to be directed at the same object (merely thinking about the Eiffel Tower vs. perceiving the Eiffel Tower vs. remembering the Eiffel Tower). At the same time the two notions are correlative. For any intentional mental event it would make no sense to speak of it as involving an act without an intentional object any more than it would to say that the event involved an intentional object but no act or way of attending to that object (no intentional act). The notion of intentionality as a correlation between subject and object is a prominent theme in Husserl’s Phenomenology.

a. Intentional Content

The third element of the structure of intentionality identified by Husserl is the intentional content. It is a matter of some controversy to what extent and in what way intentional content is truly distinct from the intentional object in Husserl’s writings. The basic idea, however, can be stated without too much difficulty.

The intentional content of an intentional event is the way in which the subject thinks about or presents to herself the intentional object. The idea here is that a subject does not just think about an intentional object simpliciter; rather the subject always thinks of the object or experiences it from a certain perspective and as being a certain way or as being a certain kind of thing. Thus one does not just perceive the moon; one perceives it “as bright”, “as half full” or “as particularly close to the horizon”. For that matter, one perceives it “as the moon” rather than as some other heavenly body. Intentional content can be thought of along the lines of a description or set of information that the subject takes to characterize or be applicable to the intentional objects of her thought. Thus, in thinking that there is a red apple in the kitchen the subject entertains a certain presentation of her kitchen and of the apple that she takes to be in it, and it is in virtue of this that she succeeds in directing her thought towards these things rather than something else or nothing at all. It is important to note, however, that for Husserl intentional content is not essentially linguistic. While intentional content always involves presenting an object in one way rather than another, Husserl maintained that the most basic kinds of intentionality, including perceptual intentionality, are not essentially linguistic. Indeed, for Husserl, meaningful use of language is itself to be analyzed in terms of more fundamental underlying intentional states (this can be seen, for example, throughout LI, I). For this reason characterizations of intentional content in terms of “descriptive content” have their limits in the context of Husserl’s thought.

The distinction between intentional object and intentional content can be clarified based on consideration of puzzles from the philosophy of language, such as the puzzle of informative identity statements. It is quite trivial to be told that Mark Twain is Mark Twain. However, for some people it can be informative and cognitively significant to learn that Mark Twain is Samuel Clemens. The notion of intentional content can be used to explain this. When a subject thinks about the identity statement asserting that Mark Twain is Mark Twain, the subject thinks about Mark Twain in the same way (using the same intentional content; perhaps “the author of Huckleberry Finn”) in association with the name on both the left and right sides of the identity, whereas when a subject thinks about the identity statement asserting that Mark Twain is Samuel Clemens what he learns is that different intentional contents (those associated with the names ‘Mark Twain’ and ‘Samuel Clemens’ respectively) are true of the same intentional object. Cases such as this both motivate the distinction between intentional content and intentional object and can be explained in terms of it.
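Schematically (an illustrative gloss, not Husserl’s own notation), writing c1 and c2 for the intentional contents associated with ‘Mark Twain’ and ‘Samuel Clemens’ and o(c) for the object a content presents:

```latex
% An illustrative gloss, not Husserl's notation.
% o(c): the intentional object presented by intentional content c.
\[
  \text{trivial identity:}\quad o(c_1) = o(c_1)
  \qquad
  \text{informative identity:}\quad c_1 \neq c_2 \ \text{ yet }\ o(c_1) = o(c_2)
\]
```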

The notion of intentional content as distinct from intentional object is also important in relation to the issue of thought about and reference to non-existent objects. Examples of this include perceptual illusions, thought about fictional objects such as Hamlet or Lilliput, thought about impossible objects such as round-squares, and thought about scientific kinds that turn out not to exist such as phlogiston. What is common to each of these cases is that it seems possible to have meaningful experiences, thoughts and beliefs about these things even though the corresponding objects do not exist, at least not in any ordinary sense of ‘exist’. Identifying intentional content as a distinct and meaningful element of the structure of intentionality makes it possible for Husserl to explain such cases of meaningful thought about the non-existent in a way similar to that of Gottlob Frege and different from the strategy of his fellow student of Brentano, Alexius Meinong. Approaching issues of intentionality from the perspective of logic and the philosophy of language, Frege handled such cases by drawing a distinction between the sense or meaning and the referent (object denoted) of a term, and then saying that non-referring terms such as ‘Ulysses’ have senses, but no referents (Frege 1948). Meinong, on the other hand, was driven by his commitment to the thesis of intentionality to posit a special category of objects, the non-existing objects or objects that have Nichtsein, as the intentional objects of such thoughts (Meinong 1960). For Husserl, such cases involve an intentional act and intentional content where the intentional content does present an intentional object, but there is no real object at all corresponding to the intentional appearance. Given this, one way of reading the distinction between intentional content and intentional object is as a generalization to all mental acts of Frege’s primarily linguistic distinction between the senses and the referents of terms and sentences (for a defense of this interpretation see Føllesdal 1982, while for discussion and resistance to the view, see Drummond 1998). Husserl’s exact understanding of the ontological situation regarding intentional objects is quite involved and undergoes some changes between Logical Investigations and his later phenomenology, beginning with Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy. However, throughout his work Husserl is able to make use of the distinction between intentional content and intentional object to handle cases of meaningful thought about the non-existent without having to posit, in Meinongian fashion, special categories of non-existent objects.

The basic structure of Husserl’s account of intentionality thus involves three elements: intentional act, intentional content and intentional object. For Husserl, the systematic analysis of these elements of intentionality lies at the heart of the theory of consciousness, as well as, in varying ways, of logic, language and epistemology.

2. Logical Investigations

Logical Investigations (hereafter ‘Investigations’), which came out in two volumes in the years 1900 and 1901, represents Husserl’s first definitive treatment of intentionality and is the source of the main ideas that would drive much of his later philosophical thinking. The primary project of the Investigations is to criticize a view in the philosophy of logic called “psychologism”, according to which the laws of logic are in some sense natural laws or rules governing the human mind and can thus be studied empirically by psychology. Husserl, notably in agreement with Frege, believed that this view had the undesirable consequences of treating the laws of logic as contingent rather than necessarily true and as being empirically discoverable rather than as known and validated a priori. In the first part of the Investigations, the “Prolegomena to Pure Logic”, Husserl systematically criticizes the psychologistic view and proposes to replace it with his own conception of “pure logic” as the a priori framework for organizing, understanding and validating the results of the formal, natural and social sciences (Husserl used the term ‘Wissenschaftslehre’ for this “theory of scientific theory in general”, which pure logic was to ground). For Husserl, pure logic is an a priori system of necessary truths governing entailment and explanatory relationships among propositions that does not in any way depend on the existence of human minds for its truth or validity. However, Husserl maintains that the task of developing a human understanding of pure logic requires investigations into the nature of meaning and language, and into the way in which conscious intentional thought is able to comprehend meanings and come to know logical (and other) truths. Thus the bulk of a work that is intended to lay the foundations for a theory of logic as a priori, necessary, and completely independent of the composition or activities of the mind is devoted precisely to systematic investigations into the way in which language, meaning, thought, and knowledge are intentionally structured by the mind. While this tension is more apparent than real, it was a major source of criticism directed against the first edition of Logical Investigations, one which Husserl was concerned to clarify and defend himself against in his subsequent writings and in the second edition of the Investigations in 1913. Pertinent here is what Husserl had to say about language and expression (LI, I) and about intentionality itself (LI, V & VI).

a. Intentionality in Logical Investigations

In Logical Investigations Husserl developed a view according to which conscious acts are primarily intentional, and a mental act is intentional only if it has an act-quality and an act-matter. Introducing this key distinction, Husserl writes:

The two assertions ‘2 x 2 = 4’ and ‘Ibsen is the principal founder of modern dramatic realism’, are both, qua assertions, of one kind; each is qualified as an assertion, and their common feature is their judgment-quality. The one, however, judges one content and the other another content. To distinguish such ‘contents’ from other notions of ‘content’ we shall speak here of the matter (material) of judgment. We shall draw similar distinctions between quality and matter in the case of all acts (LI, V § 20, p. 586).

An additional notion in the Investigations, which grows in importance in Husserl’s later work and will be discussed here, is the act-character. Husserl views act-quality, act-matter and act-character as mutually dependent constituents of a concrete particular thought. Just as there cannot be color without saturation, brightness and hue, so for Husserl there cannot be an intentional act without quality, matter and character. The quality of an act (called ‘intentional act’ above) is the kind of act that it is, whether perceiving, imagining, judging, wishing, and so forth. The matter of an act is what has been called above its intentional content: it is the mode or way in which an object is thought about, for example a house intended from one perspective rather than another, or Napoleon thought of first as “the victor at Jena”, then as “the vanquished at Waterloo”. The character of an act can be thought of as a contribution of the act-quality that is reflected in the act-matter. Act-character has to do with whether the content of the act, the act-matter, is posited as existing or as merely thought about, and with whether the act-matter is taken as given with evidence (fulfillment) or without evidence (emptily intended). The next two sub-sections deal with act-character and act-matter respectively.

i. Act-Character

In the Investigations and in his later work, Husserl sometimes writes of an additional dimension in the analysis of intentionality, which he first calls the “act-character” and then in later writings the “doxic and ontic modalities” (For the former, see for example LI, VI § 7; for the latter, see Ideas, Chapter 4 particularly §§ 103—10). In the Investigations, act-character includes such things as whether the intentional act is merely one of reflecting on a possibility (a “non-positing act”) or one of judging or asserting that something is the case (a “positing act”), as well as the degree of evidence that is available to support the intention of the act as fulfilled or unfulfilled (as genuinely presenting some object in just the way that the act-matter suggests, or not). It seems clear that the character of an act is ultimately traceable to the act-quality, since it has to do with the way in which an act-matter is thought about rather than with what that act-matter itself presents. However, it is a contribution of the act-quality that casts a shadow or a halo around the matter, giving the content of the act a distinctive character. This becomes clearer through consideration of particular cases.

Consider first positing and non-positing acts. When a subject wonders whether or not the train will be on time, the content or act-matter of her intention is that of the train being on time. However, in this case the subject is not positing that the train will be on time, but merely reflecting on this in a non-committal (“non-positing”) way as a possibility. The same difference is present in the case of merely wondering whether Bob is the murderer on the one hand (non-positing act), and forming the firm judgment that he is on the other (positing act) (on positing and non-positing acts, see LI, V §§ 38—42).

The character of an intentional act also has to do with whether it is an “empty” merely signitive intention or whether it is a “non-empty” or fulfilled intention. Here what is at issue is the extent to which a subject has evidence of some sort for accepting the content of their intention. For example, a subject could contemplate, imagine or even believe that “the sunset today will be beautiful with few clouds and lots of orange and red colors” already at eleven in the morning. At this point the intention is an empty one because it merely contemplates a possible state of affairs for which there is no intuitive (experiential) evidence. When the same subject witnesses the sunset later in the day, her intention will either be fulfilled (if the sunset matches what she thought it would be like) or unfulfilled (if the sunset does not match her earlier intention). For Husserl, the difference here too does not have to do with the content or act-matter itself, but rather with the evidential character of the intention (LI VI, §§ 1—12).

Importantly, the distinctions between positing and non-positing acts on the one hand and between empty and fulfilled intentions on the other are separate. It would be possible for a subject to posit the existence of something for which she had no evidence or fulfillment (perhaps the belief that her favorite candidate will win next year’s election), just as it would be possible for a subject to not posit or affirm something for which she did have fulfillment or evidence (such as refraining from believing that water causes sticks immersed in it to bend, in spite of immediate perceptual information supporting this).

ii. Act-Matter

As noted above, the matter of an intentional act is its content: the way in which it presents the intentional object as being. The act-matter is:

that element in an act which first gives it reference to an object, and reference so wholly definite that it not merely fixes the object meant in a general way, but also the precise way in which it is meant. (LI, V § 20, p. 589, italics Husserl’s)

So the act-matter both determines to what object, if any, a thought refers, and determines how the thought presents that object as being. For Husserl, the matter of an intentional act does not consist of only linguistic descriptive content. The notion of act-matter is simply that of the significant object-directed mode of an act, and can be perceptual, imaginative, or memorial, linguistic or non-linguistic, particular and indexical, or general, context-neutral and universal. This makes intentionality and intentional content (act-matter) the fundamental targets of analysis, with the theory of language and expression to be analyzed in terms of these notions rather than the other way around. Husserl is thus committed to the notion that intentionality is primary and language secondary, and so also to the view that meaningful non-linguistic intentional thought and experience are both possible and common (LI, I §§ 9—11, 19, & 20).

Husserl’s understanding of the metaphysics of act-matter is also important. Motivated by his anti-psychologism he wants to treat meanings as objective and independent of the minds of particular subjects. Because of this Husserl views meanings in the Investigations as “ideal species”, a kind of abstract entity akin to a universal. However, having done this Husserl also needs to explain how it is that these abstract meanings can play a role in the intentional thought of actual subjects. Husserl’s solution to this is to say that meanings are ideal species or kinds of act-matter that are then instantiated in the actual act-matter of particular intentional subjects when they think the relevant thoughts. Thus, just as there is an ideal species or universal for shape, which gets instantiated in particular instances of shaped objects in the world, so there is an ideal species or universal of the act-matter “2+2=4”, which gets instantiated in the act-matter of a particular subject when he thinks this thought. Whereas Fregean accounts deal with the fact that one individual can have the same thought at different times and different individuals can think about the same thing at any time by positing a single abstract sense that is the numerically identical content of all of their thoughts, Husserl views particular act-matters or contents as instances of ideal act-matter species. Thus, on Husserl’s view, two subjects are able to think about the same thing in the same way when both of them instantiate exactly similar instances of a single kind of content or act-matter. So if John and Sarah are both thinking about how they would like to see the Twins win the 2008 World Series in baseball, they are having the same thought and thinking about the same objects in virtue of instantiating exactly similar act-matter instances of the single act-matter species “the Twins win the 2008 World Series in baseball” (LI, I §§ 30—4, V §§ 21 & 45).

b. Intentionality, Meaning and Expression in Logical Investigations

Largely motivated by his concern with developing a pure logic, Husserl devotes the entire first Logical Investigation, “Meaning and Expression”, to an analysis of issues of language, linguistic meaning and linguistic reference. Husserl’s discussion here is systematic and wide ranging, covering many issues that are also of concern to Frege in his analysis of language and that have continued to spur discussion in the philosophy of language up to the present. These include the distinction between linguistic types and tokens, the distinction between words and sentences and the meanings that these express, the distinction between sentence meaning and speaker meaning, the meaning and reference of proper names and the function of indexicals and demonstratives. As noted above, Husserl takes the intentionality of thought to be fundamental and the meaning-expressing and reference fixing capabilities of language to be parasitic on more basic features of intentionality. Here the main features of Husserl’s intentionality-based view of language are discussed.

i. Meaning and Expression

Husserl is interested in analyzing the meaning and reference of language as part of his project of developing a pure logic. This leads him to focus primarily on declarative sentences from ordinary language, rather than on other kinds of potentially meaningful signs (such as the way in which smoke normally indicates or is a sign of fire) and gestures (such as the way in which a grimace might indicate or convey that someone feels pain or is uncomfortable). Husserl thus uses ‘expression’ to refer to declarative sentences in natural language and to parts thereof, such as names, general nouns, indexicals, and so forth (LI, I §§ 1—5).

Husserl maintains that the meaning of an expression cannot be identical to the expression for two reasons. First, expressions in different languages, such as ‘the cat is friendly’ and ‘il gatto è simpatico’ are linguistically different, but have the same meaning. Additionally, the same linguistic expression, such as ‘I am going to the bank’ can have different meanings on different occasions (due in this case to the ambiguity of the word ‘bank’). Thus sameness of word or linguistic expression is neither necessary nor sufficient for sameness of meaning (LI, I §§ 11 & 12).

Husserl also maintains that the meaning of a linguistic expression cannot be identical with its referent or referents. In support of this Husserl appeals to phenomena such as informative identity statements and meaningful linguistic expressions that have no referent, among others. An example of the first sort of case would be Frege’s famous ‘Hesperus is Phosphorus’, where ‘Hesperus’ means “the evening star” and ‘Phosphorus’ means “the morning star”. Both ‘Hesperus’ and ‘Phosphorus’ refer to the planet Venus and so if the meaning of a term just is the object that it refers to, then anyone who knows that Hesperus is Hesperus should also know that Hesperus is Phosphorus, yet clearly this is not the case. Husserl’s own explanation for this would be that a subject who found ‘Hesperus is Phosphorus’ informative would do so because he associated different act-matters or intentional contents with each of these names. Thus Husserl, like Frege, distinguishes the meaning of a term or expression both from that term itself and from the object or objects to which the term refers. Husserl identifies these distinctive linguistic meanings as kinds of intentional act-matter (LI, I §§ 13 & 14).

In the Investigations Husserl describes the normal use of an expression, such as ‘the weather is cool today’, in the following way. A subject who utters this expression to a companion is in an intentional state, which includes an act-matter or intentional content that presents the weather as being cool today. This act-matter instantiates an ideal species or act-matter type “the weather is cool today” and in virtue of doing so directs the utterer’s attention to the actual state of affairs regarding the weather. It is in virtue of these facts about the utterer’s intentional states that the words express, for him, the meaning that they do (which is not, of course, to rule out the possibility of miscommunication; for Husserl the description here is just the standard case). The subject performing the utterance does, in principle, three things for his interlocutor. First, the subject’s utterance “expresses” the ideal meaning “the weather is cool today”. Second, assuming the interlocutor grasps that this is what is being expressed, her attention will itself be directed to the referent of this ideal sense, namely the state of affairs involving the weather today (her act-matter will then also instantiate the relevant ideal act-matter species). Third, the subject will, in making his utterance, “intimate” to his interlocutor that he has certain beliefs or is undergoing certain mental states or experiences. This last point is very important for Husserl. He maintains that in normal cases what a subject intimates in uttering an expression (that he believes that the weather is cool today or that he fears that his country will intervene) is not part of the meaning of that expression, even though it is something that the interlocutor will be able to understand on the basis of the subject’s utterance. It is only in cases where a subject is making an assertion about his experiences, attitudes or mental states (such as ‘I doubt that things will improve this year’) that expressed meaning and intimated meaning coincide (on intimation, see LI, I §§ 7 & 8; the majority of the points summarized here are in the first chapter of LI, I, which is §§ 1—16).

ii. Essentially Occasional Expressions: Indexicals

Husserl recognized clearly the need for a distinction between what he called “objective” expressions on the one hand, and those that are “essentially occasional” on the other. An example of an objective expression would be a statement concerning logic, mathematics or the sciences whose meaning is fixed regardless of the context in which it is used (for example ‘The Pythagorean Theorem is a theorem of geometry’ or ‘7+5=12’). An example of an essentially occasional expression would be a sentence such as ‘I am hungry’, which seems in some sense to change its meaning on different occasions of utterance, depending on who is speaking. According to Husserl, essentially occasional expressions include both indexicals (‘I’, ‘you’, ‘here’, ‘now’, and so forth) and demonstratives (‘this’, ‘that’, and so forth). Such expressions have two facets of meaning. The first is what Husserl calls a constant “semantic function” associated with particular indexical expressions. For example, “It is the universal semantic function of the word ‘I’ to designate whoever is speaking…” (LI, I §26, p. 315). Husserl recognizes, however, that the sentences expressing these semantic functions cannot simply be substituted for indexicals without affecting the meaning of sentences containing them. A subject who believes “whoever is now speaking is hungry” effectively has an existentially quantified belief to the effect that the person, whoever he or she is, who is now speaking is hungry. In order to capture what such a subject would mean when he says ‘I am hungry’ it is necessary to somehow make it clear that the individual quantified over is indeed the person now speaking, but there seems to be no way to do this other than to re-insert the indexical ‘I’ itself in the sentence. This makes it necessary to identify a second facet or component of indexical content.

To deal with this, Husserl proposes a distinction between the semantic function or “indicating meaning” of indexicals, which remains constant from use to use, and the “indicated” meaning of indexicals, which is fundamentally cued to certain features of the speaker and context of utterance. Thus the “indicating meaning” of ‘I’ is always “whoever is now speaking”, but the indicated meaning of its use on a given occasion is keyed to the “self-awareness” or “self-presentation” of the speaker on that occasion. In general, the indicating meaning of an indexical will specify some general relationship between the utterance of a sentence and some feature of the speaker’s conscious awareness or perceptually given environment, while the indicated meaning will be determined by what the speaker is actually aware of in the context in which the sentence is uttered. In the case of many indexicals, such as ‘you’ and ‘here’ their indicating meaning may be supplied in part by demonstrative pointing to features of the immediate perceptual environment. Thus, Husserl writes, “The meaning of ‘here’ is in part universal and conceptual [semantic function/indicating meaning], inasmuch as it always names a place as such, but to this universal element the direct place-presentation [indicated meaning] attaches, varying from case to case” (LI I § 26, pp. 317—18). Husserl thus has a relatively clear understanding of some of the key issues surrounding indexical thought and reference that have been recently discussed in the work of philosophers of language such as John Perry (1977, 1979), as well as an account of how indexical thought and reference works. The question of whether or not this account is adequate to resolve all of the issues raised by contemporary discussions of indexicals and demonstratives, however, is one that goes beyond the scope of this article (for discussion of this issue in Husserl’s philosophy see Smith and McIntyre 1982, pp. 194—226).

3. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: The Perceptual Noema

In the year 1913 Husserl published both a revised edition of Logical Investigations and the Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy (hereafter, Ideas). Between the first publication of the Investigations and the works of 1913 the main transition in Husserl’s thought is a change in emphasis from the primary project of laying the foundations of a pure a priori logic to the primary project of developing a systematic phenomenology of consciousness with the theory of intentionality at its core. In the Ideas, Husserl proposes the systematic description and analysis of first person consciousness, focusing on the intentionality of this consciousness, as the fundamental first step in both the theory of consciousness itself and, by extension, in all other areas of philosophy as well. With hints of the idea already present in the first edition of Logical Investigations, by 1913 Husserl has come to see first person consciousness as epistemologically and so logically prior to other forms of knowledge and inquiry. Whereas Descartes took his own conscious awareness to be epistemically basic and then immediately tried to infer, based on his knowledge of this awareness, the existence of God, an external world, and much else, Husserl takes first-person conscious awareness as epistemically basic and then proposes the systematic study of this consciousness itself as a fundamental philosophical task. In order to lay the foundations for this project Husserl proposes a methodology known as the phenomenological reduction.

The phenomenological reduction involves performing what Husserl calls the epoché, which is carried out by “bracketing”, setting in abeyance, or “neutralizing” the existential thesis of the “natural attitude”. The idea behind this is that most people most of the time do not focus their attention on the structure of their experience itself but rather look past this experience and focus their attention and interests on objects and events in the world, which they take to be unproblematically real or existent. This assumption about the unproblematic existence of the objects of experience is the “existential thesis” of the natural attitude. The purpose of the epoché is not to doubt or reject this thesis, but simply to set it aside or put it out of play so that the subject engaging in phenomenological investigation can reorient the focus of her attention to her experiences qua experiences and just as they are experienced. This amounts to reorienting the subject’s intentional focus from the natural to the phenomenological attitude. A subject who has performed the epoché and adopted the phenomenological attitude is in a position to objectively describe the features of her experience as she experiences them, the phenomena. Questions of the real existence of particular objects of experience, and even of the world or universe itself, are thus set aside in order to make way for the systematic study of first person conscious experience (Ideas, §§ 27—32; Natanson 1973, chapters 2 & 3).

Distinct from the phenomenological reduction, but important for the project of Husserl’s Phenomenology as a whole, is what is sometimes called the “eidetic reduction”. The eidetic reduction involves not just describing the idiosyncratic features of how things appear to one, as might occur in introspective psychology, but focusing on the essential characteristics of the appearances and their structural relationships and correlations with one another. Husserl calls insights into essential features of kinds of things “eidetic intuitions”. Such eidetic intuitions, or intuitions into essence, are the result of a process Husserl calls ‘eidetic’ or ‘free’ variation in imagination. It involves focusing on a kind of object, such as a triangle, and systematically varying features of that object, reflecting at each step on whether the object being reflected upon remains, in spite of its altered feature(s), an instance of the kind under consideration. Each time the object survives imaginative alteration of a feature, that feature is revealed as inessential, while any feature whose alteration results in the object intuitively ceasing to instantiate the kind (such as the addition of a fourth side to a triangle) is revealed as a necessary feature of that kind. Husserl maintained that this procedure can incrementally reveal elements of the essence of a kind of thing, the ideal case being one in which intuition of the full essence of a kind occurs. The eidetic reduction complements the phenomenological reduction insofar as it is directed specifically at the task of analyzing essential features of conscious experience and intentionality. The considerations leading to the initial positing of the distinction between intentional act, intentional object and intentional content would, according to Husserl, be examples of this method at work and of some of its results in the domain of the mental. Whereas the purpose of the phenomenological reduction is to disclose and thematize first person consciousness so that it can be described and analyzed, the purpose of the eidetic reduction is to focus phenomenological investigations more precisely on the essential or invariant features of conscious intentional experience (Ideas, §§ 34 & 69—71; Natanson 1973, chapter 4).

There is much debate about the exact significance, especially metaphysical and epistemological, of Husserl’s shift in focus and introduction of the methodology of the phenomenological reduction in the Ideas. Important here is that the notions of intentionality and intentional content remain central to Husserl’s project and so many of the descriptions and results of the Investigations remain relevant for the Ideas. However, Husserl does both modify and expand his views about intentionality, as well as the kinds of analyses of it that he pursues. Whereas in the Investigations Husserl was interested in intentionality specifically in relation to the project of laying the foundations for pure logic, in the Ideas he is interested in giving a systematic account of the ways in which intentionality structures, “constitutes”, and so makes possible all types of cognition, including the awareness of self, time, physical objects, mathematical objects, an intersubjective social world and many other things besides. The sections that follow concentrate on the core ideas concerning intentionality and intentional content from the Ideas, leaving many of these other areas out of consideration.

a. Noesis and Noema: Terminology and Ontology

One change between the Investigations and the Ideas is that Husserl began using the term ‘noesis’ to refer to intentional acts or “act-quality” and ‘noema’ (plural ‘noemata’) to refer to what, in the Investigations, had been referred to as “act-matter”. Husserl does not simply change his terminology, however. This change in terminology coincides with an apparent change in his metaphysical understanding of the relationship between the noema as an ideal meaning and the particular mental activities of actual subjects, and also with a much more intense interest in analyzing the different elements of the noema, as well as in understanding its relationships, both temporal and semantic, to other noemata.

Metaphysically, the main change is that Husserl seems to abandon the model of meanings as ideal species that get instantiated in the act-matters of particular subjects in favor of a more direct correlative relationship between the noeses (intentional acts) and the noemata (their objects). In the Ideas it is noemata themselves that are the objects of intentional thought, that are graspable and repeatable and that, according to Husserl, are not parts of the intentional acts of conscious subjects. It is a point of interpretative and philosophical contention whether the noema, as Husserl understood it, is better viewed as a sort of abstract Fregean sense that mediates between the subjective noetic acts of individual thinkers and the objective referents of their thoughts (Føllesdal 1982, Smith and McIntyre 1982), or whether the noema is better seen as the object of intentional thought itself as viewed from a particular perspective (Drummond 1990). While the difference between these two interpretations may seem rather small, they are actually quite different in terms of their metaphysical commitments and in terms of the particular issues of meaning, reference, and epistemology that they are able to resolve or be challenged by. For a general introduction and overview see the introduction to Smith and Smith (1995), and for more detailed discussion of some of the main differences see Dreyfus and Hall (1982), Zahavi (1994) and Drummond (2003). No attempt will be made to resolve this interpretative dispute here, though it is worth noting that the question of the metaphysical status of the noesis, the noema, and the intentional object (if indeed this is to be viewed as a distinct entity in Husserl’s ontology) is in part complicated by Husserl’s methodological procedure of bracketing questions of existence.

b. Structural Features of the Noema

In the Ideas Husserl identifies three central features of the noema, focusing especially on the case of perception. Husserl first distinguishes between a component of sense or descriptive content on the one hand (accounting for the mode of presentation or description under which the object is intended), and, on the other, a core component standing for or presenting the very identity of the object intended, a sort of pure “X” as Husserl calls it, underlying the various contents or noemata that are correlated with a single object of thought. What Husserl is focusing on here is the idea that to be conscious of an object is not just to be conscious of something under one description or way of viewing it, but also to be conscious of the object as an identity of its own, one that is simultaneously given through discrete noematic perspectives or experiences, but is also more than what any one of these experiences presents it as being. When Husserl says that there is a noematic “core” or underlying “X” in the noema, what he means is that when we think of an object we always think of it as an entity with its own identity as well as an object as it appears to us or is thought of by us. Related to this point, Husserl maintains that the intention of an object via a certain noema at one moment involves not only intending the object as it is currently experienced, but also a third element consisting of pointing references to a “horizon” of further possible determinations of the object, to further noemata or ways of being directed to one and the same object that are either motivated by or consistent with the way in which the current intention presents that object. The structure of the noema is thus quite complex, consisting of a noematic core, some descriptive or presentational content, and a horizon containing pointing references to other possible ways (noemata) of experiencing one and the same identical object (some of the most definitive sections on noesis and noema are Ideas, §§ 128—35; however, the concepts are first introduced over two chapters spanning §§ 76—96).

Consider the perceptual experience of a red barn in a field in southeastern Wisconsin. The intentional content or noema of this experience will provide immediate awareness of one side or profile of the barn, perhaps intended as a barn, or perhaps just intended as a structure of some sort. This will be the descriptive sense or content of the intention. However, in this very perception the barn is not experienced as merely a facet or a two-dimensional stretch of color in space. Rather, it is experienced as a three-dimensional object possessing other sides, parts and properties, and capable of being explored, investigated and determined, in short, intended with regard to each of these further features. The barn, as an object of perception, transcends the information that can be given regarding it, the intention of it that can be made via any given noema, and this fact is itself already intended in the very first thought a subject has about the barn. This is what is meant by the term ‘horizon’ or ‘noematic horizon’. From the first experience, the subject already has a sense of how to go about further determining, further intending and experiencing the object of thought, in this case, the barn. Perhaps the current experience is of the front side of the barn as being red; then this very experience includes as part of its “noematic horizon” the intention that the barn must also have a back side of some sort, and that this side of the barn, along with its color (perhaps it also is red, or perhaps grey, but at any rate it must have some color) can be experienced if the subject walks around to it and looks. In each further experience of the barn, in each further determination of it in thought, it is one and the same barn that is itself given, one and the same definite identity or object “X” that underlies all of the particular presentations of the same object, and that unites them in a “synthesis of identity” to provide a continuous and, ideally, unbroken series of further determinations of the same object, of further intentional experiences in which more is “filled in” or determined about the way the object actually is. Regarding such a system of experiences of the same object, Husserl says,

…There is inherent in each noema a pure object-something as a point of unity and, at the same time, we see how in a noematic respect two sorts of object-concepts are to be distinguished: this pure point of unity, this noematic “object simpliciter,” and the “object in the How of its determinations”—including undeterminednesses which for the time being “remain open” and, in this mode, are co-meant. (Ideas, § 131, p. 314)

Here, the “point of unity” is the underlying core of intended object identity “X”, the “object in the How of its determinations” is the descriptive content or sense, and the “undeterminednesses” constitute the horizon of the current content. Thus, it is possible to distinguish, phenomenologically speaking, between the way in which the object is intended via a particular noema or sense, and the seemingly transcendent self-identical object that is intended, and which is the ultimate determinant of the accuracy or inaccuracy, truth or falsity of the intentions that are directed toward it. While this distinction between the descriptive content and the identical X in a noema is phenomenologically real, this does not mean that these are “really separable” parts of the content in such a way that it would be possible to experience the one in the absence of the other. Indeed, Husserl explicitly denies this possibility.

c. Systems of Noemata and Explication

This conception of the noema, as divided into a descriptive sense and the pure X or identity of the object intended via the sense, leads Husserl to the view that, phenomenologically speaking, it is possible to view an object (the underlying X) as determining a system of possible senses (noemata) or intentions of it, each of which is both (a) about that very same object and (b) able to be consciously recognized as about the same determinable X as the others when they are experienced in a sequence. Thus, in the example of the barn already discussed, a subject might begin by looking at it from the front and focusing on its color. This would be the first noema intending the very object X, the barn perceptually before one, as red. The subject could then go on to have further perceptual intentions of the barn by walking around it. Each time the subject shifts her perspective on or reconceptualizes the object of her thought, she entertains a new content or noema, a new possible way in which the barn can be experienced as being. If the barn is indeed the way she conceptualizes and experiences it, then that thought, that possibility is fulfilled by her ongoing experience. At each step the subject integrates her current experience with the previous one, identifying the X at the core of the current experience with the X at the core of the previous ones, and is at the same time directed toward new possible ways of filling out her experience of the barn in the horizon of the noema (for example by walking around it some more, or by going inside); Husserl refers to this process as a “synthesis of identity”. During the course of this “explication” of the horizon of the noema, it is always possible that some future experience will reveal the ones that have come before to have been in some fundamental way incorrect. For example, if the subject upon walking around to the back side of the barn discovers that it is really not a barn at all, but only a cleverly positioned façade, the original system of intentional experiences she had regarding it will be frustrated and a new system of intentions will begin.

Nevertheless, the idea that a single numerically identical object can be conceived, phenomenologically speaking, as the correlate of systems of contents or noemata all experienceable as directed towards one and the same object X gives rise, for Husserl, to the idea of an object as the correlate of a complete set of such experiences. As Husserl puts it, using ‘perfect givenness’ to suggest the ideally possible experience of having gone through all of the possible correct intentions with regard to a given object:

But perfect givenness is nevertheless predesignated as “Idea” (in the Kantian sense)—as a system which, in its eidetic type, is an absolutely determined system of endless processes of continuous appearings, or as a field of these processes, an a priori determined continuum of appearances with different, but determined, dimensions, and governed throughout by a fixed set of eidetic laws…This continuum is determined more precisely as infinite on all sides, consisting of appearances in all its phases of the same determinable X so ordered in its concatenations and so determined with respect to the essential contents that any of its lines yields, in its continuous course, a harmonious concatenation (which itself is to be designated as a unity of mobile appearances) in which the X, given always as one and the same, is more precisely and never “otherwise” continuously-harmoniously determined. (Ideas, § 143, p. 342)

Here, then, we have what amounts to an analysis of the object of an intention considered from a phenomenological perspective. To be an object, phenomenologically speaking, is to be the correlate of a complete, maximally consistent system of noematic senses, all synthesizable as directed towards one and the same underlying substrate or object X. This idea arises from the three crucial features of the structure of definite intentional content that have been discussed here: the descriptive sense, the core content “X”, and the horizon of possible future experiences of one and the same object.

David W. Smith and Ronald McIntyre have further developed Husserl’s account of the horizon of a noema at some length, proposing a distinction among the kinds of possible further determinations of the object of a given thought that are predelineated in the horizon of a given noema (1982, pp. 246—56). It is possible to distinguish between (i) possible determinations that are motivated by the current noema or intentional content, (ii) possible determinations that are consistent with but not motivated by the current noema, and (iii) possible determinations that are neither motivated by nor consistent with the current noema. If a subject is intending a given object perceived from a particular side as a barn, then the motivated further determinations in the horizon will include further experiences of that same object as a barn: walking around it will reveal more barn-like sides, going inside will reveal that it is or has been used for certain purposes, more closely examining the material the walls are made of will reveal that they are not papier-mâché, and so forth. There will still be divergent motivated possibilities. For example, barns can be made of wood, or aluminum, or some combination of these with stone, or of other materials entirely, and they can also have many different colors, designs and particular interior layouts. Nevertheless, what makes each of these possibilities motivated is the fact that it is consistent with the object intended being exactly the kind of thing that it is currently intended as.

By contrast, a possible determination that is consistent with but unmotivated by the current perception of a barn as a barn is that the subject walks around to the back and discovers that the barn is really just a wooden barn façade erected to stimulate tourism in the area. This possible further experience is not totally inconsistent with a current experience of something as a barn, though it is not a motivated possibility relative to such an experience either. Finally, an experience that is neither motivated by nor consistent with the intention of an object as a barn would be the discovery that the current object is merely a complicated video image, or that it is some kind of new and heretofore undiscovered life form that just happens to look exactly like a barn when it is resting. A discovery such as this is, arguably, not even present in the horizon of the original noema to begin with. Husserl referred to experiences where the previously intended identity of an experienced object is entirely cancelled by some current experience as cases where the object intended “explodes”, and where it is unclear that the subject was really thinking about the object actually before her at all even if she was succeeding in referring to it in some minimal sense of the term (Ideas, §§ 138 & 151).

d. Additional Considerations

Husserl’s understanding of the noema in the Ideas retains the explanatory features of the Logical Investigations account (in terms of the theory of language and its ability to resolve puzzles about meaningful reference to the non-existent, informative identity statements, and so forth), while also incorporating a more nuanced analysis of the structure of intentional content itself and a more holistic understanding of how the intentional content (noema) that a subject is thinking at a given moment is interconnected with other features of that subject’s actual and possible experience (the systems of noemata).

In the Ideas Husserl retains the Investigations’ understanding of the “act-character” of an intentional event as being its quality of positing or not positing the existence of its object and of being evidentially empty or fulfilled. Referring to these characters as “modalities” of belief (“doxic” modalities) and experience, Husserl recognizes both the already identified modalities pertaining to beliefs and also additional “ontic” modalities pertaining to whether a subject takes the content of her intention to be necessary or merely possible, valuable or worthless, beautiful or ugly. The key feature of these noematic characters or modalities is that they are characteristics of thought and experience that affect an intention’s overall meaning for the subject but that are not, strictly speaking, represented in the content of the intention (the noema) itself.

The notions of empty and fulfilled intentions in conjunction with Husserl’s understanding of the noematic horizon and of systems of possible interrelated object-experiences allow him to continue the epistemological investigations begun earlier in the Sixth Logical Investigation along two major lines.

The first is the idea that the mere unfulfilled intention of an object or state of affairs, by its nature, dictates certain conditions of fulfillment or conditions under which the thought merely entertained in the current intention would be given with full and complete evidence or intuition. For example, the emptily intended thought of a beautiful sunset with lots of red and gold today has as its primary fulfillment conditions the direct perceptual intuition of a sunset matching in all relevant ways the content that it currently intends emptily. Husserl maintains that intentional beliefs and thoughts involving many different kinds of objects (physical objects, other minds, mathematical objects or proofs, abstract objects, scientific theories) all have fulfillment conditions that dictate what kinds of experiences and thought processes are necessary to bring them to evidential groundedness. Already in Logical Investigations Husserl saw this task as an essential contribution that phenomenology could make to epistemology and the theory of evidence and he continues to carry it out in the final chapters of the Ideas and in his later works.

The second idea that comes into its own with Husserl’s Phenomenology and understanding of the structure of intentionality is the idea of “constitution analysis” (Ideas, §§ 149—53). Husserl’s basic idea here is that consciousness of each kind of object of thought and experience, and each noetic mode of being aware of the objects of experience (perception, introspection, reflection, imagination, reasoning, and so forth), is the result of a complex interworking of other intentional acts. However, some ways of thinking and experiencing are more basic or fundamental, while others depend or are founded on these basic intentions in very specific ways. As a simple example, the act of judging that something is the case presupposes some other act in which the idea or possibility of this thing’s being the case has been made available. It would be impossible to judge that something is (or is not) the case without a prior act familiarizing one with its existence or possibility in the first place. Husserl views awareness of complex intentional objects as the result of those objects having been “constituted” out of or on the basis of a series of more basic intentional states (Husserl usually identifies the most basic intentional experiences with various aspects of perception and introspection). Thus, a full phenomenological analysis of the cognition of a given kind of complex object, mathematical cognition, for example, will involve an analysis of the different kinds of intentional experiences and operations that underlie and so constitute the complex intentionality in question.

Of particular importance for Husserl in this connection is the notion of “categorial intuition”. In categorial intuition a subject becomes conscious of an articulated state of affairs as the object of her intention. Categorial intuition involves, for example, not just passive awareness of a ship, or just paying attention to particular parts or features of the ship, but rather intending the articulated complex state of affairs that is “the ship’s having two smokestacks” or “the ship’s being about to enter port”. It is intentional awareness of such facts that forms the basis of categorial judgments, and the intentional contents of categorial acts can be understood along the lines of propositions, the relations among and analysis of which are the subject matter of logic. In the present context, what is important is that the intentionality involved in categorial intuition is a complex intentionality built up out of more basic kinds of intentions and intentional transformations, and thus another key example of a phenomenon requiring constitution analysis (LI VI, §§ 40—58). To the extent that understanding the factors that go into forming a belief or intention is relevant to evaluating the epistemic status of that belief, constitution analysis functions together with the analysis of evidence and fulfillment conditions and so comprises a part of Phenomenology’s contribution to epistemology.

It must also be noted, however, that constitution analysis within Phenomenology has an interest entirely independent of the role it plays in epistemology. This interest is that of providing a comprehensive analysis of the essential kinds of intentionality and relationships among them that are involved in making possible different kinds of complex intentional thoughts and experiences. As mentioned already, such constitution analyses include analysis of the constitution of time-consciousness, the constitution of mathematical object awareness, the constitution of bodily awareness, the constitution (subjective and inter-subjective) of the social world, and so forth.

The foregoing considerations go beyond the scope of what would normally be considered a discussion of Husserl’s views specifically on intentionality and intentional content. They should serve, however, to provide some sense of the interconnection between Husserl’s views concerning intentionality and the other parts of his philosophy.

4. References and Further Reading

a. Works by Husserl

The collected works of Husserl have been published since 1950 in Husserliana: Edmund Husserl — Gesammelte Werke, The Hague/Dordrecht: Nijhoff/Kluwer. The following are works by Husserl listed in the chronological order of their German publication (the German publication date is in brackets).

  • [LI] Logical Investigations, trans. J. N. Findlay, London: Routledge [1900/01; 2nd, revised edition 1913], 1973.
    • [Cited in the text as: LI, Investigation # (I, II, etc.) section # (§), and, where quotes are used, page #].
  • “Philosophy as Rigorous Science,” trans. in Q. Lauer (ed.), Phenomenology and the Crisis of Philosophy, New York: Harper [1910], 1965.
  • [Ideas] Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy — First Book: General Introduction to a Pure Phenomenology, trans. F. Kersten. The Hague: Nijhoff [1913], 1982.
    • [Cited in the text as: Ideas, section # (§), and, where quotes are used, page #].
  • On the Phenomenology of the Consciousness of Internal Time (1893-1917), trans. J. B. Brough, Dordrecht: Kluwer [1928], 1990.
  • Formal and Transcendental Logic, trans. D. Cairns. The Hague: Nijhoff [1929], 1969.
  • Cartesian Meditations, trans. D. Cairns, Dordrecht: Kluwer [1931], 1988.
  • The Crisis of European Sciences and Transcendental Philosophy, trans. D. Carr. Evanston: Northwestern University Press [1936/54], 1970.
  • Experience and Judgment, trans. J. S. Churchill and K. Ameriks, London: Routledge [1939], 1973.

b. Secondary Sources

The following works are secondary sources pertinent to Husserl’s views on intentionality and the role that it plays in his phenomenology.

  • Brentano, Franz. Psychology from an Empirical Standpoint, ed. and trans. by L. L. McAlister. London: Routledge, 1973.
    • Brentano’s classic work on intentionality as the mark of the mental. A central influence on Husserl.
  • Dreyfus, Hubert L., and Harrison Hall. Husserl, Intentionality, and Cognitive Science. Cambridge, Mass.: MIT Press, 1982.
    • A classic anthology collecting essays on the relationship of Husserl’s philosophy to cognitive science. This text also includes a number of contributions concerning the correct interpretation of the noema.
  • Drummond, John. “The Structure of Intentionality.” In Donn Welton (ed.), The New Husserl: A Critical Reader. Bloomington: Indiana University Press, 2003.
    • A comprehensive overview of the main features of Husserl’s conception of intentionality.
  • Drummond, John. “From Intentionality to Intensionality and Back.” Etudes Phenomenologiques 27—28 (1998): 89—126.
    • An analysis of Husserl’s views on intentionality that situates them in their historical context with other members of the Brentano School and attempts to shed some light on the motivations for different interpretations of the noema or intentional content.
  • Drummond, John. Husserlian Intentionality and Non-Foundational Realism. Dordrecht: Kluwer, 1990.
    • A thorough discussion of Husserl’s views including a lengthy exposition and defense of the view that sees the intentional noema as an abstract aspect of the intentional object rather than as a distinct sense.
  • Føllesdal, Dagfinn. “Noema and Meaning in Husserl.” Philosophy and Phenomenological Research 50 (1990): 263—271.
  • Føllesdal, Dagfinn. “Husserl’s Notion of Noema” in Dreyfus (ed.) 1982.
    • Føllesdal’s articles are considered the classic statement of the “Fregean” interpretation of the noema.
  • Frege, Gottlob. “On Sense and Reference.” In Translations from the Philosophical Writings of Gottlob Frege, P. Geach and M. Black (eds. and trans.), Oxford: Blackwell, third edition, 1980.
    • The classic source for the distinction between sense and reference and its application to issues of language and, by extension, intentionality.
  • Gurwitsch, Aron. “Husserl’s Theory of the Intentionality of Consciousness.” In Dreyfus (ed.) 1982.
    • A distinctive interpretation of the intentional object as consisting of systems of noemata.
  • Kern, Iso. “The Three Ways to the Transcendental Phenomenological Reduction.” Husserl: Expositions and Appraisals. Eds. Frederick Elliston and Peter McCormick. Notre Dame: University of Notre Dame Press, 1977.
    • A discussion of the phenomenological reduction and of different motivations that lead Husserl to it.
  • Meinong, Alexius. “On the Theory of Objects.” In Roderick Chisholm (ed.), Realism and the Background of Phenomenology. Glencoe, Ill.: Free Press, 1960, pp. 76—117.
    • A detailed account of the different kinds of existent and non-existent objects that Meinong recognized as categories in his ontology, as well as some discussion of the relationship of these to the intentionality of mind.
  • Mohanty, Jitendranath & McKenna, William (eds). Husserl’s Phenomenology: A Textbook. Lanham: University Press of America, 1989.
    • A collection of essays covering numerous aspects of Husserl’s thought, including his views on intentionality.
  • Mohanty, Jitendranath. Husserl and Frege. Studies in Phenomenology and Existential Philosophy. Bloomington: Indiana University Press, 1982.
    • A comparison of Husserl and Frege’s views, including their views on psychologism and on the distinction between sense and referent.
  • Natanson, Maurice Alexander. Edmund Husserl: Philosopher of Infinite Tasks. Evanston, Ill.: Northwestern University Press, 1973.
    • A very accessible introduction to Husserl’s Phenomenology, including helpful discussion of the phenomenological reduction and the natural attitude in the early chapters.
  • Perry, John. “The Problem of the Essential Indexical.” Nous 13.1 (1979): 3—21.
  • Perry, John. “Frege on Demonstratives.” The Philosophical Review, 86.4 (1977): 474—97.
    • Classic articles on the semantics of indexicals and demonstratives.
  • Pietersma, Henry. Phenomenological Epistemology. New York: Oxford University Press, 2000.
    • A thorough discussion of the epistemological views of Husserl, Heidegger, and Merleau-Ponty.
  • Smith, Barry, and David Woodruff Smith (eds.). The Cambridge Companion to Husserl. Cambridge; New York: Cambridge University Press, 1995.
    • A collection of essays on various aspects of Husserl’s philosophy. The introduction includes a helpful discussion of divergent interpretations of the noema.
  • Smith, Barry. Austrian Philosophy: The Legacy of Franz Brentano. Chicago and LaSalle, Illinois: Open Court Press, 1994.
    • Includes discussion of the background and broader context against which Husserl developed his views of intentionality, including the views of Brentano, Meinong, Stumpf, Twardowski and others.
  • Smith, David Woodruff, and Ronald McIntyre. Husserl and Intentionality: A Study of Mind, Meaning, and Language. Synthese Library; V. 154. Dordrecht, Holland: D. Reidel Pub. Co., 1982.
    • A very thorough study. The early parts of the text are a clear introduction to Husserl on language and intentionality, while the rest defends a version of the “Fregean” interpretation of the noema and develops a possible-worlds understanding of intentionality based on this.
  • Sokolowski, Robert (ed.). Edmund Husserl and the Phenomenological Tradition. Washington: Catholic University of America Press, 1988.
    • Essays on various aspects of Husserl’s philosophy, including intentionality.
  • Zahavi, Dan, ed. Internalism and Externalism in Phenomenological Perspective. Special Issue: Synthese, 160. 2008.
    • A special issue containing essays by six philosophers addressing various aspects of the relationship between Husserl’s Phenomenology and contemporary discussions of semantic internalism and externalism.
  • Gallagher, Shaun, and Zahavi, Dan. The Phenomenological Mind: an Introduction to Phenomenology and Cognitive Science. London: Routledge, 2008.
    • An introduction to Phenomenology and intentionality, including intersections of these ideas with contemporary cognitive science.
  • Zahavi, Dan. “Husserl’s Noema and the Internalism-Externalism Debate.” Inquiry 47 (2004): 42-66.
    • A discussion of the relationship between Husserl’s Phenomenology and the semantic internalism-externalism debate, the article also includes discussion of the main differences between competing interpretations of the noema within Husserl scholarship.
  • Zahavi, Dan. Husserl’s Phenomenology. Stanford: Stanford University Press, 2003.
    • A comprehensive discussion of Husserl’s Phenomenology, including issues of intentionality and intentional content.

Author Information

Andrew D. Spear
Email: speara@gvsu.edu
Grand Valley State University
U. S. A.

American Enlightenment Thought

Although there is no consensus about the exact span of time that corresponds to the American Enlightenment, it is safe to say that it occurred during the eighteenth century among thinkers in British North America and the early United States and was inspired by the ideas of the British and French Enlightenments. Based on the metaphor of bringing light to the Dark Ages, the Age of the Enlightenment (Siècle des lumières in French and Aufklärung in German) shifted allegiances away from absolute authority, whether religious or political, to more skeptical and optimistic attitudes about human nature, religion and politics. In the American context, thinkers such as Thomas Paine, James Madison, Thomas Jefferson, John Adams and Benjamin Franklin invented and adopted revolutionary ideas about scientific rationality, religious toleration and experimental political organization—ideas that would have far-reaching effects on the development of the fledgling nation. Some coupled science and religion in the notion of deism; others asserted the natural rights of man in the anti-authoritarian doctrine of liberalism; and still others touted the importance of cultivating virtue, enlightened leadership and community in early forms of republican thinking. At least six ideas came to punctuate American Enlightenment thinking: deism, liberalism, republicanism, conservatism, toleration and scientific progress. Many of these were shared with European Enlightenment thinkers, but in some instances took a uniquely American form.

Table of Contents

  1. Enlightenment Age Thinking
    1. Moderate and Radical
    2. Chronology
    3. Democracy and the Social Contract
  2. Six Key Ideas
    1. Deism
    2. Liberalism
    3. Republicanism
    4. Conservatism
    5. Toleration
    6. Scientific Progress
  3. Four American Enlightenment Thinkers
    1. Franklin
    2. Jefferson
    3. Madison
    4. Adams
  4. Contemporary Work
  5. References and Further Reading

1. Enlightenment Age Thinking

The pre- and post-revolutionary era in American history generated propitious conditions for Enlightenment thought to thrive on an order comparable to that witnessed in the European Enlightenments. In the pre-revolutionary years, Americans reacted to the misrule of King George III, the unfairness of Parliament (“taxation without representation”) and exploitative treatment at the hands of a colonial power: the British Empire. The Englishman-cum-revolutionary Thomas Paine wrote the famous pamphlet Common Sense, decrying the abuses of the North American colonies by their English masters. In the post-revolutionary years, a whole generation of American thinkers would found a new system of government on liberal and republican principles, articulating their enduring ideas in documents such as the Declaration of Independence, the Federalist Papers and the United States Constitution.

Although distinctive features arose in the eighteenth-century American context, much of the American Enlightenment was continuous with parallel experiences in British and French society. Four themes recur in both European and American Enlightenment texts: modernization, skepticism, reason and liberty. Modernization means that beliefs and institutions based on absolute moral, religious and political authority (such as the divine right of kings and the Ancien Régime) will become increasingly eclipsed by those based on science, rationality and religious pluralism. Many Enlightenment thinkers—especially the French philosophes, such as Voltaire, Rousseau and Diderot—subscribed to some form of skepticism, doubting appeals to miraculous, transcendent and supernatural forces that potentially limit the scope of individual choice and reason. Reason that is universally shared and definitive of human nature also became a dominant theme in Enlightenment thinkers’ writings, particularly Immanuel Kant’s “What is Enlightenment?” and his Groundwork of the Metaphysics of Morals. The fourth theme, liberty and rights, assumed a central place in theories of political association, specifically as limits on state authority that originate prior to the advent of states (that is, in a state of nature) and manifest in social contracts, especially in John Locke’s Second Treatise on Civil Government and Thomas Jefferson’s drafts of the Declaration of Independence.

a. Moderate and Radical

Besides identifying dominant themes running throughout the Enlightenment period, some historians, such as Henry May and Jonathan Israel, understand Enlightenment thought as divisible into two broad categories, each reflecting the content and intensity of ideas prevalent at the time. The moderate Enlightenment signifies commitments to economic liberalism, religious toleration and constitutional politics. In contrast to its moderate incarnation, the radical Enlightenment conceives enlightened thought through the prism of revolutionary rhetoric and classical republicanism. Some commentators argue that the British Enlightenment (especially figures such as James Hutton, Adam Ferguson and Adam Smith) was essentially moderate, while the French (represented by Denis Diderot, Claude Adrien Helvétius and François Marie Arouet) was decidedly more radical. Influenced as it was by the British and French, American Enlightenment thought integrates both moderate and radical elements.

b. Chronology

American Enlightenment thought can also be appreciated chronologically, or in terms of three temporal stages in the development of Enlightenment Age thinking. The early stage stretches from the time of the Glorious Revolution of 1688 to 1750, when members of Europe’s middle class began to break free from monarchical and aristocratic regimes—whether through scientific discovery, social and political change or emigration outside of Europe, including to America. The middle stage extends from 1751 to 1779, a few years after the start of the American Revolution; it is characterized by a burgeoning fascination with science, religious revivalism and experimental forms of government, especially in the United States. The late stage begins in 1780 and ends in 1815 with the fall of Napoléon Bonaparte in the aftermath of the French Revolution—a period in which the European Enlightenment was in decline, while the American Enlightenment reclaimed and institutionalized many of its seminal ideas. However, American Enlightenment thinkers were not always of a single mind with their European counterparts. For instance, several American Enlightenment thinkers—particularly James Madison and John Adams, though not Benjamin Franklin—judged the French philosophes to be morally degenerate intellectuals of the era.

c. Democracy and the Social Contract

Many European and American Enlightenment figures were critical of democracy. Skepticism about the value of democratic institutions was likely a legacy of Plato’s belief that democracy led to tyranny and Aristotle’s view that democracy was the best of the worst forms of government. John Adams and James Madison perpetuated the elitist and anti-democratic idea that to invest too much political power in the hands of uneducated and property-less people was to put society at constant risk of social and political upheaval. Although several of America’s Enlightenment thinkers condemned democracy, others were more receptive to the idea of popular rule as expressed in European social contract theories. Thomas Jefferson was strongly influenced by John Locke’s social contract theory, while Thomas Paine found inspiration in Jean-Jacques Rousseau’s. In the Two Treatises of Government (1689), Locke argued against the divine right of kings and in favor of government grounded in the consent of the governed: people agree to hand over some of the liberties enjoyed in a pre-political society or state of nature in exchange for the protection of basic rights to life, liberty and property. However, if the state reneged on the social contract by failing to protect those natural rights, then the people had a right to revolt and form a new government. Perhaps more of a democrat than Locke, Rousseau insisted in The Social Contract (1762) that citizens have a right of self-government, choosing the rules by which they live and the judges who shall enforce those rules. If the relationship between the will of the state and the will of the people (the “general will”) is to be democratic, it should be mediated by as few institutions as possible.

2. Six Key Ideas

At least six ideas came to punctuate American Enlightenment thinking: deism, liberalism, republicanism, conservatism, toleration and scientific progress. Many of these were shared with European Enlightenment thinkers, but in some instances took a uniquely American form.

a. Deism

European Enlightenment thinkers conceived tradition, custom and prejudice (Vorurteil) as barriers to gaining true knowledge of the universal laws of nature. The solution was deism: understanding God’s existence as divorced from holy books, divine providence, revealed religion, prophecy and miracles, and instead basing religious belief on reason and observation of the natural world. Deists conceived of God as a reasonable Deity. A reasonable God endowed humans with rationality in order that they might discover the moral instructions of the universe in the natural law. God created the universal laws that govern nature, and afterwards humans realize God’s will through sound judgment and wise action. Deists were typically (though not always) Protestants, sharing a disdain for the religious dogmatism and blind obedience to tradition exemplified by the Catholic Church. Rather than fight members of the Catholic faith with violence and intolerance, most deists resorted to tamer weapons such as humor and mockery.

Both moderate and radical American Enlightenment thinkers, such as James Madison, Benjamin Franklin, Alexander Hamilton, John Adams and George Washington, were deists. Some struggled with the tensions between Calvinist orthodoxy and deist beliefs, while others subscribed to the populist version of deism advanced by Thomas Paine in The Age of Reason. Franklin was remembered for stating at the Constitutional Convention that “the longer I live, the more convincing proof I see of this truth—that God governs in the affairs of men.” In what would become known as the Jefferson Bible (originally The Life and Morals of Jesus of Nazareth), Jefferson chronicles the life and times of Jesus Christ from a deist perspective, eliminating all mention of miracles or divine intervention. God, for deists such as Jefferson, never loomed large in humans’ day-to-day life beyond offering a moral or humanistic outlook and the resource of reason to discover the content of God’s laws. Despite the near absence of God in human life, American deists did not deny His existence, largely because the majority of the populace still remained strongly religious, traditionally pious and supportive of the good works (for example, monasteries, religious schools and community service) that the clergy did.

b. Liberalism

Another idea central to American Enlightenment thinking is liberalism, that is, the notion that humans have natural rights and that government authority is not absolute, but based on the will and consent of the governed. Rather than a radical or revolutionary doctrine, liberalism was rooted in the commercial harmony and tolerant Protestantism embraced by merchants in Northern Europe, particularly Holland and England. Liberals favored the interests of the middle class over those of the high-born aristocracy, an outlook of tolerant pluralism that did not discriminate between consumers or citizens based on their race or creed, a legal system devoted to the protection of private property rights, and an ethos of strong individualism over the passive collectivism associated with feudal arrangements. Liberals also preferred rational argumentation and the free exchange of ideas to the uncritical acceptance of religious doctrine or governmental mandates. In this way, liberal thinking was anti-authoritarian. Although later liberalism became associated with grassroots democracy and a sharp separation of the public and private domains, early liberalism favored a parliamentarian form of government that protected liberty of expression and movement, the right to petition the government, separation of church and state and the confluence of public and private interests in philanthropic and entrepreneurial endeavors.

The claim that private individuals have fundamental God-given rights, such as to property, life, liberty and the pursuit of their conception of the good, begins with the English philosopher John Locke, but also finds expression in Thomas Jefferson’s drafting of the Declaration of Independence. The U.S. Bill of Rights, the first ten amendments to the Constitution, guarantees a schedule of individual rights based on the liberal ideal. In the first federal Congress, James Madison responded to the anti-Federalists’ demand for a bill of rights as a condition of ratification by reviewing over two hundred proposals and distilling them into an initial list of twelve suggested amendments to the Constitution, covering the rights of free speech, religious liberty and trial by jury, and the right to bear arms, among others. While ten of those suggested were ratified in 1791, one missing amendment (preventing laws that increase the salaries of members of Congress from taking effect until the next legislative term) would have to wait until 1992 to be ratified as the Twenty-seventh Amendment. Madison’s concern that the Bill of Rights should apply not only to the federal government but also to the states would eventually be accommodated with the passage of the Fourteenth Amendment (especially its due process clause) in 1868 and a series of Supreme Court cases throughout the twentieth century interpreting each of the ten amendments as “incorporated” and thus protecting citizens against state governments as well.

c. Republicanism

Classical republicanism is a commitment to the notion that a nation ought to be ruled as a republic, in which the selection of the state’s highest public official is determined by a general election, rather than through a claim to hereditary right. Republican values include civic patriotism, virtuous citizenship and property-based personality. Developed during late antiquity and the early Renaissance, classical republicanism differed from early liberalism insofar as rights were not thought to be granted by God in a pre-social state of nature, but were the products of living in political society. On the classical republican view of liberty, citizens exercise freedom within the context of existing social relations, historical associations and traditional communities, not as autonomous individuals set apart from their social and political ties. In this way, liberty for the classical republican is positively defined by the political society instead of negatively defined in terms of the pre-social individual’s natural rights.

While prefigured by the European Enlightenment, the American Enlightenment also promoted the idea that a nation should be governed as a republic, whereby the state’s head is popularly elected, not appointed through a hereditary bloodline. As North American colonists became increasingly convinced that British rule was corrupt and inimical to republican values, they joined militias and eventually formed the Continental Army under George Washington’s command. The Jeffersonian ideal of the yeoman farmer, which had its roots in the similar Roman ideal, represented the eighteenth-century American as both a hard-working agrarian and a citizen-soldier devoted to the republic. When elected to the highest office of the land, George Washington famously demurred when offered a royal title, preferring instead the more republican title of President. Though scholarly debate persists over the relative importance of liberalism and republicanism during the American Revolution and Founding (see the Contemporary Work section below), the view that republican ideas were a formative influence on American Enlightenment thinking has gained widespread acceptance.

d. Conservatism

Though the Enlightenment is more often associated with liberalism and republicanism, an undeniable strain of conservatism emerged in the last stage of the Enlightenment, mainly as a reaction to the excesses of the French Revolution. In his Reflections on the Revolution in France (1790), Edmund Burke anticipated the dissipation of order and decency in French society that followed the revolution and culminated in the period known as “the Terror.” Though it is argued that Burkean conservatism was a reaction to the Enlightenment (or anti-Enlightenment), conservatives were also operating within the framework of Enlightenment ideas. Some Enlightenment claims about human nature are turned back upon themselves and shown to break down when applied more generally to human culture. For instance, Enlightenment faith in universal declarations of human rights does more harm than good when it contravenes the conventions and traditions of specific nations, regions and localities. Similar to the classical republicans, Burke believed that human personality was the product of living in a political society, not a set of natural rights that predetermined our social and political relations. Conservatives attacked the notion of a social contract (prominent in the work of Hobbes, Locke and Rousseau) as a mythical construction that overlooked the plurality of groups and perspectives in society, a fact which made brokering compromises inevitable and universal consent impossible. Burke insisted only on a tempered version of Enlightenment values, not a wholesale rejection of them.

Conservatism featured strongly in American Enlightenment thinking.  While Burke was critical of the French Revolution, he supported the American Revolution for disposing of English colonial misrule while creatively readapting British traditions and institutions to the American temperament.  American Enlightenment thinkers such as James Madison and John Adams held views that echoed and in some cases anticipated Burkean conservatism, leading them to criticize the rise of revolutionary France and the popular pro-French Jacobin clubs during and after the French Revolution.  In the forty-ninth Federalist Paper, James Madison deployed a conservative argument against frequent appeals to democratic publics on constitutional questions because they threatened to undermine political stability and substitute popular passion for the “enlightened reason” of elected representatives. Madison’s conservative view was opposed to Jefferson’s liberal view that a constitutional convention should be convened every twenty years, for “[t]he earth belongs to the living generation,” and so each new generation should be empowered to reconsider its constitutional norms.

e. Toleration

Toleration or tolerant pluralism was also a major theme in American Enlightenment thought. Tolerance of difference developed in parallel with the early liberalism prevalent among Northern Europe’s merchant class. It reflected their belief that hatred or fear of other races and creeds interfered with economic trade, extinguished freedom of thought and expression, eroded the basis for friendship among nations and led to persecution and war. Tiring of religious wars (particularly the sixteenth-century French wars of religion and the seventeenth-century Thirty Years’ War), European Enlightenment thinkers imagined an age in which enlightened reason, not religious dogmatism, governed relations between diverse peoples with loyalties to different faiths. The Protestant Reformation and the Treaty of Westphalia significantly weakened the Catholic Papacy, empowered secular political institutions and provided the conditions for independent nation-states to flourish.

American thinkers inherited this principle of tolerant pluralism from their European Enlightenment forebears. Inspired by the Scottish reformers John Knox and George Buchanan, American Calvinists created open, friendly and tolerant institutions such as the secular public school and democratically organized religion (which became the Presbyterian Church). Many American Enlightenment thinkers, including Benjamin Franklin, Thomas Jefferson and James Madison, read and agreed with John Locke’s A Letter Concerning Toleration. In it, Locke argued that government is ill-equipped to judge the rightness or wrongness of opposing religious doctrines, that faith cannot be coerced, and that attempting to coerce it yields only greater religious and political discord. So, civil government ought to protect liberty of conscience and the right to worship as one chooses (or not to worship at all), and refrain from establishing an official state-sanctioned church. For America’s founders, the fledgling nation was to be a land where persons of every faith or no faith could settle and thrive peacefully and cooperatively without fear of persecution by government or fellow citizens. Benjamin Franklin’s belief that religion was an aid to cultivating virtue led him to donate funds to every church in Philadelphia. Defending freedom of conscience, James Madison would write that “[c]onscience is the most sacred of all property.” In 1777 Thomas Jefferson drafted a religious liberty bill for Virginia to disestablish the government-sponsored Anglican Church; the bill, often referred to as “the precursor to the Religion Clauses of the First Amendment,” eventually passed with James Madison’s help.

f. Scientific Progress

The Enlightenment enthusiasm for scientific discovery was directly related to the growth of deism and skepticism about received religious doctrine. Deists engaged in scientific inquiry not only to satisfy their intellectual curiosity, but to respond to a divine calling to expose God’s natural laws. Advances in scientific knowledge—whether the rejection of the geocentric model of the universe owing to the work of Copernicus, Kepler and Galileo, or the discovery of natural laws such as Newton’s mathematical explanation of gravity—removed the need for a constantly intervening God. With the publication of Sir Isaac Newton’s Principia in 1687, faith in scientific progress took institutional form in the Royal Society of England, the Académie des Sciences in France and later the Academy of Sciences in Germany. In pre-revolutionary America, scientists or natural philosophers belonged to the Royal Society until 1768, when Benjamin Franklin helped create and then served as the first president of the American Philosophical Society. Franklin became one of the most famous American scientists during the Enlightenment period because of his many practical inventions and his theoretical work on the properties of electricity.

3. Four American Enlightenment Thinkers

What follows are brief accounts of how four significant thinkers contributed to the eighteenth-century American Enlightenment: Benjamin Franklin, Thomas Jefferson, James Madison and John Adams.

a. Franklin

Benjamin Franklin was the author, printer, scientist and statesman who led America through a tumultuous period of colonial politics, a revolutionary war and its momentous, though no less precarious, founding as a nation. In his Autobiography, he extolled the virtues of thrift, industry and money-making (or acquisitiveness). For Franklin, the self-interested pursuit of material wealth is only virtuous when it coincides with the promotion of the public good through philanthropy and voluntarism—what is often called “enlightened self-interest.” He believed that reason, free trade and a cosmopolitan spirit serve as faithful guides for nation-states to cultivate peaceful relations. Within nation-states, Franklin thought that “independent entrepreneurs make good citizens” because they pursue “attainable goals” and are “capable of living a useful and dignified life.” In the Autobiography, Franklin claims that the way to “moral perfection” is to cultivate thirteen virtues (temperance, silence, order, resolution, frugality, industry, sincerity, justice, moderation, cleanliness, tranquility, chastity, and humility) as well as a healthy dose of “cheerful prudence.” Franklin favored voluntary associations over governmental institutions as mechanisms to channel citizens’ extreme individualism and isolated pursuit of private ends into productive social outlets. Not only did Franklin advise his fellow citizens to create and join these associations, but he also founded and participated in many himself. Franklin was a staunch defender of federalism, a critic of narrow parochialism, a visionary leader in world politics and a strong advocate of religious liberty.

b. Jefferson

A Virginian statesman, scientist and diplomat, Jefferson is probably best known for drafting the Declaration of Independence.  Agreeing with Benjamin Franklin, he substituted “pursuit of happiness” for “property” in Locke’s schedule of natural rights, so that liberty to pursue the widest possible human ends would be accommodated.  Jefferson also exercised immense influence over the creation of the United States’ Constitution through his extended correspondence with James Madison during the 1787 Constitutional Convention (since Jefferson was absent, serving as a diplomat in Paris).  Just as Jefferson saw the Declaration as a test of the colonists’ will to revolt and separate from Britain, he also saw the Convention in Philadelphia, almost eleven years later, as a grand experiment in creating a new constitutional order.  Panel four of the Jefferson Memorial records how Thomas Jefferson viewed constitutions: “I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times.”  Jefferson’s words capture the spirit of organic constitutionalism, the idea that constitutions are living documents that transform over time in pace with popular thought, imagination and opinion.

c. Madison

Heralded as the “Father of the Constitution,” James Madison was not only one of the most influential architects of the U.S. Constitution, but also a man of letters, a politician, a scientist and a diplomat who left an enduring legacy on American philosophical thought. As a tireless advocate for the ratification of the Constitution, Madison advanced his most groundbreaking ideas in The Federalist Papers, which he jointly authored with John Jay and Alexander Hamilton. Indeed, two of his most enduring ideas—the large republic thesis and the argument for separation-of-powers and checks-and-balances—are contained there. In the tenth Federalist Paper, Madison explains the problem of factions, namely, that the development of groups with shared interests (advocates or interest groups) is inevitable and dangerous to republican government. If we try to vanquish factions, then we will in turn destroy the liberty upon which their existence and activities are founded. Baron de Montesquieu, the eighteenth-century French philosopher, believed that the only way to have a functioning republic, one that was sufficiently democratic, was for it to be small, both in population and land mass (on the order of Ancient Athens or Sparta). Madison argues, by contrast, that a large and diverse republic will stop the formation of a majority faction; if small groups cannot communicate over long distances and coordinate effectively, the threat will be negated and liberty will be preserved (“you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens”). When factions form inside the government, a clever institutional design of checks and balances (an idea first advanced by John Adams, whereby each branch has a hand in the others’ domains) averts excessive harm, so that “ambition must be made to counteract ambition” and, consequently, government will effectively “control itself.”

d. Adams

John Adams was also a founder, statesman, diplomat and eventual President who contributed to American Enlightenment thought. Among his political writings, three stand out: Dissertation on the Canon and Feudal Law (1765), A Defense of the Constitutions of Government of the United States of America, Against the Attack of M. Turgot (1787-8), and Discourses on Davila (1791). In the Dissertation, Adams faults Great Britain for deciding to introduce canon and feudal law, “the two greatest systems of tyranny,” to the North American colonies. Once they were introduced, elections ceased, British subjects felt enslaved and revolution became inevitable. In the Defense, Adams offers an uncompromising defense of republicanism. He disputes Turgot’s apology for unified and centralized government, arguing that insurance against consolidated state power and support for individual liberty require separating government powers between branches and installing careful checks and balances. Nevertheless, a strong executive branch is needed to defend the people against “aristocrats” who will attempt to deprive the mass of the people of their liberty. Revealing the Enlightenment theme of conservatism, Adams criticized the notion of unrestricted popular rule or pure democracy in the Discourses. Since humans are always desirous of increasing their personal power and reputation, all the while making invidious comparisons, government must be designed to constrain the effects of these passionate tendencies. Adams writes: “Consider that government is intended to set bounds to passions which nature has not limited; and to assist reason, conscience, justice, and truth in controlling interests which, without it, would be as unjust as uncontrollable.”

4. Contemporary Work

Invocations of universal freedom draw their inspiration from Enlightenment thinkers such as John Locke, Immanuel Kant, and Thomas Jefferson, but come into conflict with contemporary liberal appeals to multiculturalism and pluralism.  Each of these Enlightenment thinkers sought to ground the legitimacy of the state on a theory of rational-moral political order reflecting universal truths about human nature—for instance, that humans are carriers of inalienable rights (Locke), autonomous agents (Kant), or fundamentally equal creations (Jefferson).  However, many contemporary liberals—for instance, Graeme Garrard, John Gray and Richard Rorty—fault Enlightenment liberalism for its failure to acknowledge and accommodate the differences among citizens’ incompatible and equally reasonable religious, moral and philosophical doctrines, especially in multicultural societies.  According to these critics, Enlightenment liberalism, rather than offering a neutral framework, discloses a full-blooded doctrine that competes with alternative views of truth, the good life, and human nature.  This pluralist critique of Enlightenment liberalism’s universalism makes it difficult to harmonize the American Founders’ appeal to universal human rights with their insistence on religious tolerance.  However, as previously noted, evidence of Burkean conservatism offers an alternative to the strong universalism that these recent commentators criticize in American Enlightenment thought.

What in recent times has been characterized as the ‘Enlightenment project’ is the general idea that human rationality can and should be made to serve ethical and humanistic ends. If human societies are to achieve genuine moral progress, parochialism, dogma and prejudice ought to give way to science and reason in efforts to solve pressing problems. The American Enlightenment project signifies America’s leading role in promoting Enlightenment ideals during that period of human history commonly referred to as ‘modernity.’ Still, there is no consensus about the exact legacy of American Enlightenment thinkers—for instance, whether republican or liberal ideas are predominant. Until the publication of J. G. A. Pocock’s The Machiavellian Moment (1975), most scholars agreed that liberal (especially Lockean) ideas were more dominant than republican ones. Pocock’s work initiated a sea change towards what is now the widely accepted view that liberal and republican ideas had relatively equal sway during the eighteenth-century Enlightenment, both in America and Europe. Gordon Wood and Bernard Bailyn contend that republicanism was dominant and liberalism recessive in American Enlightenment thought. Isaac Kramnick still defends the orthodox position that American Enlightenment thinking was exclusively Lockean and liberal, thus explaining the strongly individualistic character of modern American culture.

5. References and Further Reading

  • Bailyn, Bernard. The Ideological Origins of the American Revolution. Cambridge: Harvard University Press, 1967.
  • Ferguson, Robert A. The American Enlightenment. Cambridge: Harvard University Press, 1997.
  • Hampson, Norman. The Enlightenment: An Evaluation of its Assumptions. London: Penguin, 1968.
  • Himmelfarb, Gertrude. The Roads to Modernity: The British, French and American Enlightenments. London: Vintage, 2008.
  • Israel, Jonathan. A Revolution of the Mind: Radical Enlightenment and the Intellectual Origins of Modern Democracy. Princeton: Princeton University Press, 2009.
  • Kramnick, Isaac. Age of Ideology: Political Thought, 1750 to the Present. New York: Prentice Hall, 1979.
  • May, Henry F. The Enlightenment in America. Oxford: Oxford University Press, 1978.
  • O’Brien, Conor Cruise. The Long Affair: Thomas Jefferson and the French Revolution, 1785-1800. London: Pimlico, 1998.
  • O’Hara, Kieron. The Enlightenment: A Beginner’s Guide. Oxford: OneWorld, 2010.
  • Pocock, J. G. A. The Machiavellian Moment: Florentine Political Thought and the American Republican Tradition. Princeton: Princeton University Press, 1975.
  • Wilson, Ellen J. and Peter H. Reill. Encyclopedia of the Enlightenment. New York: Book Builders Inc., 2004.
  • Wood, Gordon. The Creation of the American Republic. Chapel Hill: University of North Carolina Press, 1969.

Author Information

Shane J. Ralston
Email: sjr21@psu.edu
Pennsylvania State University
U. S. A.

Occasionalism

In the minds of most philosophers with a passing familiarity with early-modern philosophy, occasionalism is typically regarded as a laughable ad hoc or ‘for want of anything better’ solution to the mind-body problem, first opened up in Descartes’ Meditations. As typically presented in philosophy textbooks, the doctrine (usually identified exclusively with Nicolas Malebranche) certainly seems laughable: beginning from the assumption that the actual transmission of anything between body and mind is impossible, occasionalism holds that, for example, when my finger is pricked by a needle, no physical effect—neither the puncture of the needle nor the activity of my nerves—reaches my mind, but rather God directly produces the sensation of the prick within my mind on the occasion of the needle’s contact with my finger. Similarly, when I will to retract my finger away from the needle, my incorporeal will is utterly impotent to produce any such corporeal movement, so God again intercedes and directly produces the movement of the finger on the occasion of my willing.

Such supposedly was the doctrine of occasionalism, which, when presented in such a manner, occasions little more than an eye-roll from modern readers. Yet this “textbook view” of occasionalism (much like the contemporary fixation on Descartes’ Meditations over his Principles of Philosophy) has everything to do with the interests, problems, and concerns of philosophy in the late-modern and post-modern periods, and almost nothing to do with the actual doctrine of occasionalism in its own historical context. Indeed, occasionalism is not peculiar to early-modern philosophy or Cartesianism at all, but was an influential school in both Latin and Islamic medieval philosophy extending back to the tenth century. Moreover, for a strange and systematically theological system of metaphysics, occasionalism is the progenitor of a number of remarkable developments in Western philosophy, some of which laid the foundation for the development of modern science itself.

Table of Contents

  1. Introduction
  2. Motivations for Occasionalism
    1. Islamic and Latin Medieval Occasionalism
    2. Cartesian Occasionalism
  3. Primary Arguments for Occasionalism
    1. Causation is Not a Phenomenon
    2. No Forces or Powers
    3. No Necessary Connection
    4. Continual Creation
  4. The Place of Occasionalism in the History of Philosophy
  5. References and Further Reading
    1. Primary Sources in English
    2. Secondary Sources

1. Introduction

In spite of its historical deficiencies, the aforementioned “textbook view” of occasionalism was not entirely off the mark. The Cartesian occasionalists generally—but not exclusively—made appeal to the doctrine as a solution to the problem of mind-body interaction. Moreover, this interpretation actually has its origins in the period itself. Both G. W. Leibniz and Bernard le Bovier de Fontenelle notably described occasionalism as primarily a reaction to Descartes’ failure to explain the mind-body union (See Leibniz, “to Arnauld, 9 Oct. 1687,” Philosophical Papers, 522; Fontenelle, Doutes, 1:529-30). Nonetheless, Leibniz and Fontenelle were mistaken in their interpretations. As Louis de La Forge, the first true Cartesian occasionalist, argues:

I think most people would not believe me if I said that it is no more difficult to conceive how the human mind, without being extended, can move the body and how the body without being a spiritual thing can act on the mind, than to conceive how a body has the power to move itself and to communicate motion to another body. Yet there is nothing more true. (Traité, 143)

While the commitments of individual philosophers varied, in its pure form, occasionalism was a global denial of causality outside the direct and immediate volitional activity of God—both between bodies and between minds and bodies.

This is important to note as it forms the locus of the distinction between three classic metaphysical models of the causal relationship between God and his Creation: occasionalism, concurrentism, and conservationism. Conservationism can best be described as the common view among the lay followers of the Abrahamic faiths, as Malebranche himself notes (Recherche, 677). It holds that God created the world in the beginning, but that since that moment and with the exception of miracles, the world runs causally of its own accord and on the basis of its own powers and principles, without the need for God to be continually and perpetually involved. In spite of its mass appeal, conservationism was almost never taken seriously by Christian or Islamic theologians and was denounced as heretical for a variety of reasons that need not concern us here, for the much more important historical distinction was between concurrentism and occasionalism. Owing its origins to Augustine, concurrentism became the causal metaphysic of St. Thomas Aquinas and his legions of followers. It holds that both God and finite created causes contribute to the production of particular effects, namely that God “concurs” or assents to the natural activity of the cause and thereby contributes his potency to the production of its effects, without which such a cause would be impotent and incapable of producing its customary effect. Occasionalism, by contrast, holds that finite creatures are utterly impotent by themselves and contribute nothing metaphysically to the production of any effects with which they may be associated, but instead serve only as merely nominal indicators or occasions for the one sole cause in the universe: God. Thus, while Aquinas’ account of the regular operations of nature is grounded in a grand system of agent causes and their patients, for the occasionalist, the regular operations of nature are governed by a system of occasional causes that cohere only on the basis of the regularity of God’s will concerning them.

This raises the question: What exactly is an occasional cause? One example would be a placebo, a designation that could be applied to almost anything, but is understood as such insofar as it serves as the cause of the “placebo effect.” Yet, as has been noted in clinical analyses of the placebo effect, this causal conception is clearly mistaken insofar as a placebo is typically an inert compound or pointless “therapy” that does not actually cause anything in particular, much less its salutary effect. Nonetheless, without the presence and administration of the placebo, the effect would not follow, or not follow as often as it does, and thus a placebo may be understood as an indispensable cause that serves as the occasion for whatever psycho-physical causality takes place in the body to produce the placebo effect.

So then, what does an occasionalist metaphysic and account of causality look like? To begin with the classic example of mind-body interaction described in the summary: when I look out the window of my office, there is no real causal connection between the clouds and sky as physical objects and the representative idea I have of them in my mind; rather, God immediately and directly produces such a correspondent image in my mind on the occasion of my turning my head and looking out the window at them. Similarly, there is no real causal connection between the activity of my will to turn my head to the right and look out my window and the physical action of my head turning; for my head moves on the basis of the physical contraction of opposing muscle groups in my neck, which pull on and rotate my cervical vertebrae, thereby effecting the turn. Moreover, for reasons that will be seen, there is no real causal connection between the contraction of these muscles and the movement of my head; rather, God immediately and directly produces the movement of my head on the occasion of the contraction of the muscles in my neck, a contraction that he similarly produces on the occasion of my will to turn my head to the right.

This elaborate metaphysical and theological description of such a simple action raises the question: Why would any philosopher advance such a bizarre and counter-intuitive theory to explain such basic phenomena?

2. Motivations for Occasionalism

Given the customary prejudice of philosophers towards occasionalism (supposing they’ve heard of it at all), it is necessary to consider the motivation(s) underlying such a strange doctrine, which nonetheless attracted many of the greatest minds of medieval and early-modern philosophy.

The main figures behind the development of occasionalist thought in the Middle Ages were, as might be expected, concerned predominantly with theological issues. Numerous passages in the Old and New Testament are ambiguously suggestive of an occasionalist reading, such as Job 38:12-41, 1 Corinthians 12:6, and Isaiah 26:12. To quote one passage, cited by Malebranche in favor of occasionalism: “This is what the Lord, your protector, says, the one who formed you in the womb: ‘I am the Lord, who made everything, who alone stretched out the sky, who fashioned the earth all by myself’” (Isaiah 44:24). The important part of this quote is not the claim of God (even the conservationists accepted that God acted alone in the moment of creation), but rather Isaiah’s claim that, as Malebranche puts it, “only God acts and forms children in their mother’s womb” (Recherche, 677).

However, such Scriptural testimony was far too ambiguous to inspire or justify occasionalism on its own terms. Rather, occasionalism was born of a dispute centered on the deeply problematic relationship between Greek rationalist philosophy and those dogmas of the Abrahamic religions that seemed incommensurable with this tradition, namely the doctrine of creation ex nihilo and the possibility of miracles. There was a pervasive tendency in later antiquity among those educated in Greek philosophy to be embarrassed by the “abominations of reason” latent in their religious creeds, an embarrassment which impelled them to attempt a synthesis. These attempts to harmonize Abrahamic monotheism with the philosophy of the pagans invariably provoked a reaction from their less philosophically inclined co-religionists who sought to uphold the dogmas of the Faith without intellectual rationalizations or prevarications. These reactions divide into two almost diametrically opposed camps corresponding to the two great bursts of occasionalist thought in the history of philosophy.

a. Islamic and Latin Medieval Occasionalism

In the Islamic tradition, the thought of the Arab polymath and father of Islamic philosophy, al-Kindi (801-873), marks the tentative beginning of a syncretism of Islam and Greek philosophy. This syncretism was further developed in the 10th and 11th centuries by a school of philosophers known as the Mu’tazalites, the premier representatives of whom were al-Farabi (c. 872-950) and Avicenna (c. 980-1037). The metaphysical system of the Mu’tazalites was a hybrid of Aristotelianism and Neoplatonism typical of late antiquity. Though al-Farabi and Avicenna remained nominal Muslims, their rationalist philosophical beliefs stood at considerable odds with the depiction of God and his relation to the world in the Qur’an: most notably, their critics accused them of denying the Abrahamic doctrine of creation ex nihilo and of being incapable, on account of their necessitarian conception of causality, of explaining the existence or possibility of miracles.

This latter issue over miracles in particular attracted the ire of certain Islamic theologians who were followers of a fundamentalist school begun in the early 10th century by al-Ash’ari (874-936), the most illustrious member of which was al-Ghazali (1058-1111). The Mu’tazalites held, in customary rationalist manner, that causes are logically sufficient for the production of their effects and thus entail their existence in an essentially logical and syllogistic manner. While any particular cause (for example, fire) may not be in itself sufficient for the production of its effect (namely, burning), given the presence of certain necessary conditions (for example, air and a combustible substrate), the effect would follow necessarily from the presence and existence of the cause. That is to say, for fire and a combustible material to be brought together in the presence of oxygen, yet fail to produce burning, was regarded as a logical impossibility tantamount to a formal contradiction.

The objection of the Ash’arites to this principle is not difficult to understand: a natural order that operates on the basis of causes that logically necessitate their effects cannot be reconciled with the existence of miracles, which, as attested to in Holy Scripture, often depend on just such an “impossible” disjunction between cause and effect. There is the famous example of the “Burning Bush” from Exodus 3:1-21, which describes a combustible material that is on fire but is not consumed by the flames. Another example is the story from the Book of Daniel of the three youths (Abednego, Meshach, and Shadrach) who were thrown into Nebuchadnezzar’s “Fiery Furnace,” yet miraculously escaped burning through the intervention of an angel of God. Miracles such as these were interpreted literally by Ash’arite theologians and regarded as involving the presence of a natural cause but the absence of its customary effect due to a supernatural intervention by God.

This disjunction of causes and effects in instances of miracles was not itself problematic as long as Jews, Christians, and Muslims believed that God could do the impossible. Yet, such an interpretation of the divine omnipotence was strongly resisted by almost every important theologian of the Abrahamic religions and the orthodox conception of the limits of God’s power was identified as coextensive with the logically possible. To quote the Islamic theologian, al-Ghazali: “No one has power over the Impossible. What the Impossible means is the affirmation of something together with its denial…that which is not impossible is within [God’s] power” (Tahafut, 194). This is a very important point for it requires that, if miracles such as the above did indeed happen, they must have been—pace the assertion of ancient philosophers—logically possible on their own terms. Thus, the concession that God cannot do the impossible puts the onus on the believer in miracles to explain how such causal syncopations are possible. That is to say, it requires the believer to do philosophy—critical analytic philosophy—and thereby defeat the ancient philosophers at their own game.

This Islamic dispute was transferred essentially wholesale to the West through Averroës and Maimonides in the 12th century and formed the basis of the nominalist reaction against Thomistic scholasticism, which the nominalists regarded as similarly necessitarian and incompatible with the divine omnipotence.

b. Cartesian Occasionalism

By the time of Descartes, the nature of the occasionalist impulse had changed dramatically. Nowhere among the Cartesian occasionalists does one encounter the deep concern over the divine omnipotence or for reconciling philosophy with the testimony of Scripture typical of the Medievals. Even Malebranche, who—alone among his cohort—offered a few (weak) theological arguments in favor of occasionalism, never seemed bothered by the particular theological concerns of his medieval predecessors, even though—again, alone among his cohort—he demonstrated familiarity with them (See LO, 680). Instead, Cartesian occasionalism was a tendency and development organic to Cartesianism itself, which the successors of Descartes were driven to pursue exclusively under the pressure of severe problems in the Cartesian systems of physics and metaphysics and not from any particular religious motivation. These pressures included:

The Mind-Body Problem

This problem, while hardly unique to Descartes, was nonetheless forced by his substance dualism into a more radical and metaphysical framework than had been the case otherwise. Now, as noted in the introduction, the classic textbook view of occasionalism as an ad hoc solution to Descartes’ mind-body problem is almost entirely without warrant. Nonetheless, the mind-body problem was a particular area of concern for Descartes’ successors, and occasionalism provided such a convenient solution that this “textbook” view took hold with considerable facility. Still, Steven Nadler argues that the mind-body problem was not a “specific” problem engendering Cartesian occasionalism and moreover “was not even recognized as a special case of some more general causal problem” (Nadler, 1997, 76). For the Cartesians, the nature of efficient causality was a metaphysical problem in itself.

The Rejection of Scholastic Forms and Causal Powers

Descartes describes the substantial forms of the Scholastics as having been “introduced by philosophers solely to account for the proper actions of natural things, of which they were supposed to be the principles and bases” (CSMK III, 208). Yet, Descartes is adamant that “no natural action at all can be explained by these substantial forms,” insofar as they “account” for the “proper actions of natural things” by metaphysical reification rather than epistemological explanation. They are thus “occult” and inscrutable (CSMK III, 208-9), and moreover otiose and redundant as explanations of phenomena, which, as Descartes insists, may be entirely accounted for in terms of local movements (CSM I, 83).

This mechanistic account of causal interaction allowed for a novel argument against the possibility of corporeal efficacy, which follows from Descartes’ rejection of substantial forms combined with his insistence that the qualities of body are exhausted by their mere geometric extension and whatever minimal features may be directly derived from as much. The point is, nowhere contained in the purely quantitative idea of extension is any notion of qualitative powers, forms, dispositions, potentialities, and the like, from which it may be concluded that matter is essentially passive and inert.

Cartesian Nominalism

Unlike the Scholastics, who regarded motion as an accident, the Cartesians regarded motion as a mode of body—thereby denying the Scholastic presumption of a metaphysically real distinction between a thing and its qualities, and instead insisting that there was no ontological difference between the “modes of being [façons d’être]” of a thing and the thing itself (Lennon, 1974, 34). Given this, it would be as impossible to conceive a body transferring its motion to another body as to conceive a body transferring its shape or divisibility to another body.

Continual Creation

Lastly, there is Descartes’ acceptance and advancement of the doctrine that God preserves the world via continual creation (See CSM II, 33; CSM I, 200). This was a customary supposition of occasionalism since al-Ghazali and the Ash’arite occasionalists. While Descartes’ commitment to this doctrine is insufficiently distinct from what might be maintained by a Thomistic concurrentist to qualify incontrovertibly as occasionalism, his successors would interpret the matter more forcefully and in a manner that rendered the concurrence of secondary causes otiose.

3. Primary Arguments for Occasionalism

Throughout the seven centuries of its history, occasionalist philosophy has been advanced and defended through a plethora of different arguments. Remarkably, there does not seem to be any particular “master argument” that appears across all the figures in this tradition. Certain arguments were more common or carried greater cachet than others, but occasionalism was never an axiomatic system of metaphysics, and thus the principles and arguments behind it are more of a liquid coacervate than a structured edifice. Some of the strongest and most common arguments made against the efficacy of secondary causes and in favor of the system of occasional causes shall be examined here.

a. Causation is Not a Phenomenon

In observing a particular causal interaction, one does not see the actual causality underlying the interaction, but only a succession of events. This claim is most commonly identified with Hume, but it is actually of considerable antiquity and has often stood as the opening gambit of occasionalism since its very beginning. It was first advanced by al-Baqillani in the 10th century and reiterated by al-Ghazali, who argues:

Fire, which is an inanimate thing, has no action. How can one prove that it is an agent? The only argument is from the observation of the fact of burning at the time of contact with fire. But observation only shows that one is with the other, not that it is by it and has no other cause than it. (Tahafut, 186)

Virtually every philosopher associated with occasionalism would repeat this argument in some form or another. Even after the disappearance of medieval occasionalism in the 15th and 16th centuries, the argument would resurface among the earliest of the Cartesian occasionalists, Louis de La Forge (1632-1666) and Géraud de Cordemoy (1624-1684). La Forge notes:

I will be told, is it not clear and evident that heavy things move downwards, that light things rise upwards, and that bodies communicate their motion to one another? I agree, but there is a big difference between the obviousness of the effect and that of the cause. The effect is very clear here, for what do our senses show us more clearly than the various movements of bodies? But do they show us the force which carries heavy things downwards, light things upwards, and how one body has the power to make another body move? (Traité, 143; emphasis added)

Cordemoy concurs and reformulates the argument in more classically Cartesian terms, namely concerning colliding bodies:

When we say, for example, that body B drives body C away from its place, if we examine well what is acknowledged for certain in this case, we will only see that body B was moved, that it encountered C, which was at rest, and that since this encounter, the first ceased to be moved [and] the second commenced to be. (Discernement, 137; trans. Ablondi, 59)

This is the formula for which Hume is typically given credit.

b. No Forces or Powers

The rejection of ‘forces’ or ‘powers’ internal to a particular piece of matter follows empirically from the above denial that we can actually see causation, as well as rationally from the argument, made in antiquity by Sextus Empiricus: “since…so much divergency is shown to exist in objects, we shall not be able to state what character belongs to the object in respect of its real essence, but only what belongs to it in respect of this particular rule of conduct, or law, or habit, and so on” (Outlines of Pyrrhonism, I. XIV, 163). Avicenna attempted to respond to this point by developing a claim made by Aristotle (See Physics 196b) that postulates an inductive “hidden syllogism” [qiyas khafiyy] tacit within causal judgments that allows for the inference of causal powers:

A tested experience is exemplified by our judgment that scammony purges bile. For when this [observed association] is repeated many times, it no longer belongs to the category of what occurs coincidentally. The mind then judges that it is of the nature of scammony to purge bile, and it acquiesces in it. Thus, purging bile is a necessary accident of scammony…and [scammony] necessitates it [the effect of purging bile] by some proximate power within it, or property in it, or a relation connected with it. It becomes correct [to conclude] through this kind of demonstration that there is a cause in scammony by nature and associated with it, which purges bile. (al-Burhan, 95; trans. Kogan, 87-88)

Avicenna’s ambiguity regarding the correct conclusion of this “demonstration” and the source of necessity between scammony and its purgative power is revealing, particularly in his indecisive conflation of “a cause in scammony by nature” with one merely “associated with it.”

Al-Ghazali seizes on this ambiguity and declares that Avicenna’s “kind of demonstration” underlying causal judgments is not a demonstration at all for it lacks any entailment: “existence with a thing does not prove being by it” (Tahafut, 186). To prove this point, al-Ghazali provides an example:

Suppose there is a blind man whose eyes are diseased, and who has not heard from anyone of the difference between night and day. If one day his disease is cured, and he can consequently see colours, he will guess that the agent of the perception of the forms of colours which has now been acquired by his eyes is the opening of the eyes. (Tahafut, 186)

This particular argument is essentially identical to Hume’s famous example in the Enquiry concerning the causal expectations of Adam when encountering fire and water for the first time (See Enquiry, VI.2, 27).

The Cartesians regarded suppositions of ‘force’ or ‘power’ inhering in bodies as occult properties incapable of being clearly and distinctly understood. Following Descartes, they regarded material bodies as effectively hypostatizations of Euclidean geometry, the qualities of which are exhausted by their mere geometric extension and whatever minimal features may be directly derived from as much. The point is, for the Cartesians, we have a clear and distinct idea of the essence of body as res extensa. Nowhere contained in this purely quantitative idea is any notion of qualitative powers, forms, dispositions, potentialities, and the like. As Malebranche asks the reader:

Consult the idea of extension and judge by that idea, which represents bodies if anything does, whether they can have some property other than the passive faculty of receiving various shapes and various motions. Is it not evident to the last degree that properties of extension can consist only in relations of distance? (Dialogues, VII.2, 147)

From this minimalist and quantitative conception of matter, the Cartesians concluded that matter was essentially passive and inert, and they derided the Scholastic-Aristotelian epistemology of causal explanation as fundamentally animistic—a point that seems evident in Aquinas’ claim:

[Real relations exist in] those things which by their own very nature are ordered to each other, and have a mutual inclination…as in a heavy body is found an inclination and order to the centre; and hence there exists in the heavy body a certain respect in regard to the centre and the same applies to other things. (Summa theologica, 1, q. 28, a. 1)