Critical Thinking

Critical Thinking is the process of using and assessing reasons to evaluate statements, assumptions, and arguments in ordinary situations. The goal of this process is to help us have good beliefs, where “good” means that our beliefs meet certain goals of thought, such as truth, usefulness, or rationality. Critical thinking is widely regarded as a species of informal logic, although critical thinking makes use of some formal methods. In contrast with formal reasoning processes that are largely restricted to deductive methods—decision theory, logic, statistics—the process of critical thinking allows a wide range of reasoning methods, including formal and informal logic, linguistic analysis, experimental methods of the sciences, historical and textual methods, and philosophical methods, such as Socratic questioning and reasoning by counterexample.

The goals of critical thinking are also more diverse than those of formal reasoning systems. While formal methods focus on deductive validity and truth, critical thinkers may evaluate a statement’s truth, its usefulness, its religious value, its aesthetic value, or its rhetorical value. Because critical thinking arose primarily from the Anglo-American philosophical tradition (also known as “analytic philosophy”), contemporary critical thinking is largely concerned with a statement’s truth. But some thinkers, such as Aristotle (in Rhetoric), give substantial attention to rhetorical value.

The primary subject matter of critical thinking is the proper use and goals of a range of reasoning methods, how they are applied in a variety of social contexts, and errors in reasoning. This article also discusses the scope and virtues of critical thinking.

Critical thinking should not be confused with Critical Theory. Critical Theory refers to a way of doing philosophy that involves a moral critique of culture. A “critical” theory, in this sense, is a theory that attempts to disprove or discredit a widely held or influential idea or way of thinking in society. Thus, critical race theorists and critical gender theorists offer critiques of traditional views and latent assumptions about race and gender. Critical theorists may use critical thinking methodology, but their subject matter is distinct, and they also may offer critical analyses of critical thinking itself.

Table of Contents

  1. Clarity
  2. Argument and Evaluation
  3. Formal Reasoning
    1. Categorical Logic
    2. Propositional Logic
    3. Modal Logic
    4. Predicate Logic
    5. Other Formal Systems
  4. Informal Reasoning
    1. Generalization
    2. Analogy
    3. Causal Reasoning
    4. Abduction
  5. Detecting Poor Reasoning
    1. Formal Fallacies
    2. Informal Fallacies
    3. Heuristics and Biases
  6. The Scope and Virtues of Good Reasoning
    1. Context
    2. The Principle of Charity/Humility
    3. The Principle of Caution
    4. The Expansiveness of Critical Thinking
    5. Productivity and the Limits of Rationality
  7. Approaches to Improving Reasoning through Critical Thinking
    1. Classical Approaches
    2. The Paul/Elder Model
    3. Other Approaches
  8. References and Further Reading

1. Clarity

The process of evaluating a statement traditionally begins with making sure we understand it; that is, a statement must express a clear meaning. A statement is generally regarded as clear if it expresses a proposition, which is the meaning the author of that statement intends to express, including definitions, referents of terms, and indexicals, such as subject, context, and time. There is significant controversy over what sort of “entity” propositions are, whether abstract objects or linguistic constructions or something else entirely. Whatever their metaphysical status, “proposition” is used here simply to refer to whatever meaning a speaker intends to convey in a statement.

The difficulty with identifying intended propositions is that we typically speak and think in natural languages (English, Swedish, French), and natural languages can be misleading. For instance, two different sentences in the same natural language may express the same proposition, as in these two English sentences:

Jamie is taller than his father.
Jamie’s father is shorter than he.

Further, the same sentence in a natural language can express more than one proposition depending on who utters it at a time:

I am shorter than my father right now.

The pronoun “I” is an indexical; it picks out, or “indexes,” whoever utters the sentence and, therefore, expresses a different proposition for each new speaker who utters it. Similarly, “right now” is a temporal indexical; it indexes the time the sentence is uttered. The proposition it is used to express changes each new time the sentence is uttered and, therefore, may have a different truth value at different times (as, say, the speaker grows taller: “I am now five feet tall” may be true today, but false a year from now). Other indexical terms that can affect the meaning of the sentence include other pronouns (he, she, it) and demonstratives and the definite article (that, the).

Further still, different sentences in different natural languages may express the same proposition. For example, all of the following express the proposition “Snow is white”:

Snow is white. (English)

Der Schnee ist weiss. (German)

La neige est blanche. (French)

La neve è bianca. (Italian)

Finally, statements in natural languages are often vague or ambiguous, either of which can obscure the propositions actually intended by their authors. And even in cases where they are not vague or ambiguous, statements’ truth values sometimes vary from context to context. Consider the following example.

The English statement, “It is heavy,” includes the pronoun “it,” which (when used without contextual clues) is ambiguous because it can index any impersonal subject. If, in this case, “it” refers to the computer on which you are reading this right now, its author intends to express the proposition, “The computer on which you are reading this right now is heavy.” Further, the term “heavy” reflects an unspecified standard of heaviness (again, if contextual clues are absent). Assuming we are talking about the computer, it may be heavy relative to other computer models but not to automobiles. Further still, even if we identify or invoke a standard of heaviness by which to evaluate the appropriateness of its use in this context, there may be no weight at which an object is rightly regarded as heavy according to that standard. (For instance, is an object heavy because it weighs 5.3 pounds but not if it weighs 5.2 pounds? Or is it heavy when it is heavier than a mouse but lighter than an anvil?) This means “heavy” is a vague term. In order to construct a precise statement, vague terms (heavy, cold, tall) must often be replaced with terms expressing an objective standard (pounds, temperature, feet).

Part of the challenge of critical thinking is to clearly identify the propositions (meanings) intended by those making statements so we can effectively reason about them. The rules of language help us identify when a term or statement is ambiguous or vague, but they cannot, by themselves, help us resolve ambiguity or vagueness. In many cases, this requires assessing the context in which the statement is made or asking the author what she intends by the terms. If we cannot discern the meaning from the context and we cannot ask the author, we may stipulate a meaning, but this requires charity, to stipulate a plausible meaning, and humility, to admit when we discover that our stipulation is likely mistaken.

2. Argument and Evaluation

Once we are satisfied that a statement is clear, we can begin evaluating it. A statement can be evaluated according to a variety of standards. Commonly, statements are evaluated for truth, usefulness, or rationality. The most common of these goals is truth, so that is the focus of this article.

The truth of a statement is most commonly evaluated in terms of its relation to other statements and direct experiences. If a statement follows from or can be inferred from other statements that we already have good reasons to believe, then we have a reason to believe that statement. For instance, the statement “The ball is blue” can be derived from “The ball is blue and round.” Similarly, if a statement seems true in light of, or is implied by, an experience, then we have a reason to believe that statement. For instance, the experience of seeing a red car is a reason to believe, “The car is red.” (Whether these reasons are good enough for us to believe is a further question about justification, which is beyond the scope of this article, but see “Epistemic Justification.”) Any statement we derive in these ways is called a conclusion. Though we regularly form conclusions from other statements and experiences—often without thinking about it—there is still a question of whether these conclusions are true: Did we draw those conclusions well? A common way to evaluate the truth of a statement is to identify those statements and experiences that support our conclusions and organize them into structures called arguments. (See also, “Argument.”)

An argument is one or more statements (called premises) intended to support the truth of another statement (the conclusion). Premises comprise the evidence offered in favor of the truth of a conclusion. It is important to entertain any premises that are intended to support a conclusion, even if the attempt is unsuccessful. Unsuccessful attempts at supporting a proposition constitute bad arguments, but they are still arguments. The support intended for the conclusion may be formal or informal. In a formal, or deductive, argument, an arguer intends to construct an argument such that, if the premises are true, the conclusion must be true. This strong relationship between premises and conclusion is called validity. This relationship between the premises and conclusion is called “formal” because it is determined by the form (that is, the structure) of the argument (see §3). In an informal, or inductive, argument, the conclusion may be false even if the premises are true. In other words, whether an inductive argument is good depends on something more than the form of the argument. Therefore, all inductive arguments are invalid, but this does not mean they are bad arguments. Even if an argument is invalid, its premises can increase the probability that its conclusion is true. So, the form of inductive arguments is evaluated in terms of the strength the premises confer on the conclusion, and stronger inductive arguments are preferred to weaker ones (see §4). (See also, “Deductive and Inductive Arguments.”)

Psychological states, such as sensations, memories, introspections, and intuitions often constitute evidence for statements. Although these states are not themselves statements, they can be expressed as statements. And when they are, they can be used in and evaluated by arguments. For instance, my seeing a red wall is evidence for me that, “There is a red wall,” but the physiological process of seeing is not a statement. Nevertheless, the experience of seeing a red wall can be expressed as the proposition, “I see a red wall” and can be included in an argument such as the following:

    1. I see a red wall in front of me.
    2. Therefore, there is a red wall in front of me.

This is an inductive argument, though not a strong one. We do not yet know whether seeing something (under these circumstances) is reliable evidence for the existence of what I am seeing. Perhaps I am “seeing” in a dream, in which case my seeing is not good evidence that there is a wall. For similar reasons, there is also reason to doubt whether I am actually seeing. To be cautious, we might say we seem to see a red wall.

To be good, an argument must meet two conditions: the conclusion must follow from the premises—either validly or with a high degree of likelihood—and the premises must be true. If the premises are true and the conclusion follows validly, the argument is sound. If the premises are true and the premises make the conclusion probable (either objectively or relative to alternative conclusions), the argument is cogent.

Here are two examples:

Example 1:

    1. Earth is larger than its moon.
    2. Our sun is larger than Earth.
    3. Therefore, our sun is larger than Earth’s moon.

In example 1, the premises are true. And since “larger than” is a transitive relation, the structure of the argument guarantees that, if the premises are true, the conclusion must be true. This means the argument is also valid. Since it is both valid and has true premises, this deductive argument is sound.

 Example 2:

    1. It is sunny in Montana about 205 days per year.
    2. I will be in Montana in February.
    3. Hence, it will probably be sunny when I am in Montana.

In example 2, premise 1 is true, and let us assume premise 2 is true. Premise 1 indicates that a majority of days in Montana—roughly 205 of 365—are sunny, so that, for any day you choose, it will probably be a sunny day. Premise 2 says I am choosing days in February to visit. Together, these premises strongly support (though they do not guarantee) the conclusion that it will be sunny when I am there, and so this inductive argument is cogent.

In some cases, arguments will be missing some important piece, whether a premise or a conclusion. For instance, imagine someone says, “Well, she asked you to go, so you have to go.” The idea that you have to go does not follow logically from the fact that she asked you to go without more information. What is it about her asking you to go that implies you have to go? Arguments missing important information are called enthymemes. A crucial part of critical thinking is identifying missing or assumed information in order to effectively evaluate an argument. In this example, the missing premise might be that, “She is your boss, and you have to do what she asks you to do.” Or it might be that, “She is the woman you are interested in dating, and if you want a real chance at dating her, you must do what she asks.” Before we can evaluate whether her asking implies that you have to go, we need to know this missing bit of information. And without that missing bit of information, we can simply reply, “That conclusion doesn’t follow from that premise.”

The two categories of reasoning associated with soundness and cogency—formal and informal, respectively—are considered, by some, to be the only two types of argument. Others add a third category, called abductive reasoning, according to which one reasons according to the rules of explanation rather than the rules of inference. Those who do not regard abductive reasoning as a third, distinct category typically regard it as a species of informal reasoning. Although abductive reasoning has unique features, here it is treated, for reasons explained in §4d, as a species of informal reasoning, but little hangs on this characterization for the purposes of this article.

3. Formal Reasoning

Although critical thinking is widely regarded as a type of informal reasoning, it nevertheless makes substantial use of formal reasoning strategies. Formal reasoning is deductive, which means an arguer intends to infer or derive a proposition from one or more propositions on the basis of the form or structure exhibited by the premises. Valid argument forms guarantee that particular propositions can be derived from them. Some forms look like they make such guarantees but fail to do so (we identify these as formal fallacies in §5a). If an arguer intends or supposes that a premise or set of premises guarantees a particular conclusion, we may evaluate that argument form as deductive even if the form fails to guarantee the conclusion and is thus discovered to be invalid.

Before continuing in this section, it is important to note that, while formal reasoning provides a set of strict rules for drawing valid inferences, it cannot help us determine the truth of many of our original premises or our starting assumptions. And in fact, very little critical thinking that occurs in our daily lives (unless you are a philosopher, engineer, computer programmer, or statistician) involves formal reasoning. When we make decisions about whether to board an airplane, whether to move in with our significant others, whether to vote for a particular candidate, whether it is worth it to drive ten miles per hour faster than the speed limit even if I am fairly sure I will not get a ticket, whether it is worth it to cheat on a diet, or whether we should take a job overseas, we are reasoning informally. We are reasoning with imperfect information (I do not know much about my flight crew or the airplane’s history), with incomplete information (no one knows what the future is like), and with a number of built-in biases, some conscious (I really like my significant other right now), others unconscious (I have never gotten a ticket before, so I probably will not get one this time). Readers who are more interested in these informal contexts may want to skip to §4.

An argument form is a template that includes variables that can be replaced with sentences. Consider the following form (found within the formal system known as sentential logic):

    1. If p, then q.
    2. p.
    3. Therefore, q.

This form was named modus ponens (Latin, “method of putting”) by medieval philosophers. p and q are variables that can be replaced with any proposition, however simple or complex. And as long as the variables are replaced consistently (that is, each instance of p is replaced with the same sentence and the same for q), the conclusion (line 3), q, follows from these premises. To be more precise, the inference from the premises to the conclusion is valid. “Validity” describes a particular relationship between the premises and the conclusion, namely: in all cases, the conclusion follows necessarily from the premises, or, to use more technical language, the premises logically guarantee an instance of the conclusion.

Notice we have said nothing yet about truth. As critical thinkers, we are interested, primarily, in evaluating the truth of sentences that express propositions, but all we have discussed so far is a type of relationship between premises and conclusion (validity). This formal relationship is analogous to grammar in natural languages and is known in both fields as syntax. A sentence is grammatically correct if its syntax is appropriate for that language (in English, for example, a grammatically correct simple sentence has a subject and a predicate—“He runs.” “Laura is Chairperson.”—and it is grammatically correct regardless of what subject or predicate is used—“Jupiter sings.”—and regardless of whether the terms are meaningful—“Geflorble rowdies.”). Whether a sentence is meaningful, and therefore, whether it can be true or false, depends on its semantics, which refers to the meaning of individual terms (subjects and predicates) and the meaning that emerges from particular orderings of terms. Some terms are meaningless—geflorble; rowdies—and some orderings are meaningless even though their terms are meaningful—“Quadruplicity drinks procrastination” and “Colorless green ideas sleep furiously.”

Despite the ways that syntax and semantics come apart, if sentences are meaningful, then syntactic relationships between premises and conclusions allow reasoners to infer truth values for conclusions. Because of this, a more common definition of validity is this: it is not possible for all the premises to be true and the conclusion false. Formal logical systems in which syntax allows us to infer semantic values are called truth-functional or truth-preserving—proper syntax preserves truth throughout inferences.

The point of this is to note that formal reasoning only tells us what is true if we already know our premises are true. It cannot tell us whether our experiences are reliable or whether scientific experiments tell us what they seem to tell us. Logic can be used to help us determine whether a statement is true, but only if we already know some true things. This is why a broad conception of critical thinking is so important: we need many different tools to evaluate whether our beliefs are any good.

Consider, again, the form modus ponens, and replace p with “It is a cat” and q with “It is a mammal”:

    1. If it is a cat, then it is a mammal.
    2. It is a cat.
    3. Therefore, it is a mammal.

In this case, we seem to “see” (in a metaphorical sense of see) that the premises guarantee the truth of the conclusion. On reflection, it is also clear that the premises might not be true; for instance, if “it” picks out a rock instead of a cat, premise 1 is still true, but premise 2 is false. It is also possible for the conclusion to be true when the premises are false. For instance, if the “it” picks out a dog instead of a cat, the conclusion “It is a mammal” is true. But in that case, the premises do not guarantee that conclusion; they do not constitute a reason to believe the conclusion is true.

Summing up, an argument is valid if its premises logically guarantee an instance of its conclusion (syntactically), or if it is not possible for its premises to be true and its conclusion false (semantically). Logic is truth-preserving but not truth-detecting; we still need evidence that our premises are true to use logic effectively.
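
Because validity is a purely formal matter, it can be checked mechanically for simple argument forms. The following Python sketch (purely illustrative, and no part of the traditional presentation of this material) enumerates every truth-value assignment and searches for a counterexample: an assignment that makes all the premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion, num_vars):
    # A form is valid if no assignment of truth values makes
    # every premise true while the conclusion is false.
    for values in product([True, False], repeat=num_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False  # counterexample found
    return True

# Modus ponens: "If p, then q; p; therefore, q" -- valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q, 2))   # True

# Affirming the consequent: "If p, then q; q; therefore, p" -- invalid (see §5a).
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p, 2))   # False
```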

A Brief Technical Point

Some readers might find it worth noting that the semantic definition of validity has two counterintuitive consequences. First, it implies that any argument with a necessarily true conclusion is valid. Notice that the condition is phrased hypothetically: if the premises are true, then the conclusion cannot be false. This condition is met if the conclusion cannot be false:

        1. If it is a cat, then it is a mammal.
        2. It is a cat.
        3. Two added to two equals four.

This is because the hypothetical (or “conditional”) statement would still be true even if the premises were false:

        1. If it is blue, then it flies.
        2. It is an airplane.
        3. Two added to two equals four.

It is true of this argument that if the premises were true, the conclusion would be true, since the conclusion is true no matter what.

Second, the semantic formulation also implies that any argument with necessarily false premises is valid. The semantic condition for validity is met if the premises cannot be true:

        1. Some bachelors are married.
        2. Therefore, Earth’s moon is heavier than Jupiter.

In this case, if the premise were true, the conclusion could not be false (this is because anything follows syntactically from a contradiction), and therefore, the argument is valid. There is nothing particularly problematic about these two consequences. But they highlight unexpected implications of our standard formulations of validity, and they show why there is more to good arguments than validity.

Despite these counterintuitive implications, valid reasoning is essential to thinking critically because it is a truth-preserving strategy: if deductive reasoning is applied to true premises, true conclusions will result.

There are a number of types of formal reasoning, but here we review only some of the most common: categorical logic, propositional logic, modal logic, and predicate logic.

a. Categorical Logic

Categorical logic is formal reasoning about categories or collections of subjects, where subjects refers to anything that can be regarded as a member of a class, whether objects, properties, or events or even a single object, property, or event. Categorical logic employs the quantifiers “all,” “some,” and “none” to refer to the members of categories, and categorical propositions are formulated in four ways:

A claims: All As are Bs (where the capitals “A” and “B” represent categories of subjects).

E claims: No As are Bs.

I claims: Some As are Bs.

O claims: Some As are not Bs.

Categorical syllogisms are syllogisms (two-premised formal arguments) that employ categorical propositions. Here are two examples:

Example 1:

    1. All cats are mammals. (A claim)
    2. Some cats are furry. (I claim)
    3. Therefore, some mammals are furry. (I claim)

Example 2:

    1. No bachelors are married. (E claim)
    2. All the people in this building are bachelors. (A claim)
    3. Thus, no people in this building are married. (E claim)

There are interesting limitations on what categorical logic can do. For instance, if one premise says that, “Some As are not Bs,” may we infer that some As are Bs, in what is known as an “existential assumption”? Aristotle seemed to think so (De Interpretatione), but this cannot be decided within the rules of the system. Further, and counterintuitively, it would mean that a proposition such as, “Some bachelors are not married,” is false since it implies that some bachelors are married.

Another limitation on categorical logic is that arguments with more than three categories cannot be easily evaluated for validity. The standard method for evaluating the validity of categorical syllogisms is the Venn diagram (named after John Venn, who introduced it in 1881), which expresses categorical propositions in terms of two overlapping circles and categorical arguments in terms of three overlapping circles, each circle representing a category of subjects.

[Venn diagrams: two overlapping circles representing a categorical claim, and three overlapping circles representing a categorical argument]

A, B, and C represent categories of objects, properties, or events. The symbol “∩” comes from mathematical set theory to indicate “intersects with.” “A∩B” means all those As that are also Bs and vice versa.

Though there are ways of constructing Venn diagrams with more than three categories, determining the validity of these arguments using Venn diagrams is very difficult (and often requires computers). These limitations led to the development of more powerful systems of formal reasoning.
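
The semantic idea behind Venn diagrams—checking whether any way of populating the categories makes the premises true and the conclusion false—can also be sketched in code. The following Python fragment is a minimal illustration, not a general decision procedure; it relies on the assumption, safe in practice for standard syllogistic forms, that small domains suffice to expose counterexamples.

```python
from itertools import product

# A "model" assigns each individual in a small domain membership
# (True/False) in each of three categories.
def models(domain_size, num_cats=3):
    cells = domain_size * num_cats
    for bits in product([True, False], repeat=cells):
        yield [bits[i * num_cats:(i + 1) * num_cats] for i in range(domain_size)]

def all_are(m, x, y):   # "All Xs are Ys"
    return all(ind[y] for ind in m if ind[x])

def some_are(m, x, y):  # "Some Xs are Ys"
    return any(ind[x] and ind[y] for ind in m)

A, B, C = 0, 1, 2  # e.g., A = cats, B = mammals, C = furry things

# "All As are Bs; some As are Cs; therefore, some Bs are Cs."
def valid_syllogism():
    for size in (1, 2, 3):          # small domains suffice here
        for m in models(size):
            if all_are(m, A, B) and some_are(m, A, C) and not some_are(m, B, C):
                return False        # a counterexample model exists
    return True

print(valid_syllogism())  # True: the form from Example 1 above is valid
```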

b. Propositional Logic

Propositional, or sentential, logic has advantages and disadvantages relative to categorical logic. It is more powerful than categorical logic in that it is not restricted in the number of terms it can evaluate, and therefore, it is not restricted to the syllogistic form. But it is weaker than categorical logic in that it has no operators for quantifying over subjects, such as “all” or “some.” For those, we must appeal to predicate logic (see §3c below).

Basic propositional logic involves formal reasoning about propositions (as opposed to categories), and its most basic unit of evaluation is the atomic proposition. “Atom” means the smallest indivisible unit of something, and simple English statements (subject + predicate) are atomic wholes because if either part is missing, the word or words cease to be a statement and, therefore, cease to be capable of expressing a proposition. Atomic propositions are simple subject-predicate combinations, for instance, “It is a cat” and “I am a mammal.” Variable letters such as p and q in argument forms are replaced with semantically rich constants, indicated by capital letters, such as A and B. Consider modus ponens again:

Argument Form | English Argument | Semantic Replacement
1. If p, then q. | 1. If it is a cat, then it is a mammal. | 1. If C, then M
2. p. | 2. It is a cat. | 2. C
3. Therefore, q. | 3. Therefore, it is a mammal. | 3. M

As you can see from premise 1 of the Semantic Replacement, atomic propositions can be combined into more complex propositions using symbols that represent their logical relationships (such as “If…, then…”). These symbols are called “operators” or “connectives.” The five standard operators in basic propositional logic are:

Operator/Connective | Symbol | Example | Translation
“not” | ~ or ¬ | It is not the case that p. | ~p
“and” | & or • | Both p and q. | p & q
“or” | v | Either p or q. | p v q
“If…, then…” | → or ⊃ | If p, then q. | p ⊃ q
“if and only if” | ≡ or ↔ | p if and only if q. | p ≡ q

These operations allow us to identify valid relations among propositions: that is, they allow us to formulate a set of rules by which we can validly infer propositions from and validly replace them with others. These rules of inference (such as modus ponens; modus tollens; disjunctive syllogism) and rules of replacement (such as double negation; contraposition; DeMorgan’s Law) comprise the syntax of propositional logic, guaranteeing the validity of the arguments employing them.

Two Rules of Inference:

Conjunction | Argument Form | Propositional Translation
1. It is raining. | 1. p | 1. R
2. It is windy. | 2. q | 2. W
3. Therefore, it is raining and it is windy. | 3. /.: (p & q) | 3. /.: (R & W)

Disjunctive Syllogism | Argument Form | Propositional Translation
1. Either it is raining or my car is dirty. | 1. (p v q) | 1. (R v C)
2. My car is not dirty. | 2. ~q | 2. ~C
3. Therefore, it is raining. | 3. /.: p | 3. /.: R

 

Two Rules of Replacement:

Material Implication | Replacement Form | Propositional Translation
If it is raining, then the sidewalk is wet if and only if either it is not raining or the sidewalk is wet. | (p ⊃ q) ≡ (~p v q) | (R ⊃ W) ≡ (~R v W)

 

DeMorgan’s Laws | Replacement Form | Propositional Translation
It is not the case that both the job is a good fit for you and you hate it if and only if either it is not a good fit for you or you do not hate it. | ~(p & q) ≡ (~p v ~q) | ~(F & H) ≡ (~F v ~H)
It is not the case that he is either a lawyer or a nice guy if and only if he is neither a lawyer nor a nice guy. | ~(p v q) ≡ (~p & ~q) | ~(L v N) ≡ (~L & ~N)
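
Whether a candidate rule of replacement is legitimate can itself be checked mechanically: a replacement is truth-preserving just in case the biconditional expressing it is a tautology (true under every assignment). Here is a minimal Python sketch of that check, offered purely as an illustration:

```python
from itertools import product

def tautology(f, num_vars):
    # True if the formula f holds under every truth-value assignment.
    return all(f(*vals) for vals in product([True, False], repeat=num_vars))

impl = lambda p, q: (not p) or q   # material conditional
iff  = lambda p, q: p == q         # biconditional

# Material implication: (p > q) is equivalent to (~p v q).
print(tautology(lambda p, q: iff(impl(p, q), (not p) or q), 2))          # True

# DeMorgan's Laws.
print(tautology(lambda p, q: iff(not (p and q), (not p) or (not q)), 2)) # True
print(tautology(lambda p, q: iff(not (p or q), (not p) and (not q)), 2)) # True
```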

For more, see “Propositional Logic.”

c. Modal Logic

Standard propositional logic does not capture every type of proposition we wish to express (recall that it does not allow us to evaluate categorical quantifiers such as “all” or “some”). It also does not allow us to evaluate propositions expressed as possibly true or necessarily true, modifications that are called modal operators or modal quantifiers.

Modal logic refers to a family of formal propositional systems, the most prominent of which includes operators for necessity (□) and possibility (◊) (see §3d below for examples of other modal systems). If a proposition, p, is possibly true, ◊p, it may or may not be true. If p is necessarily true, □p, it must be true; it cannot be false. If p is necessarily false, either ~◊p or □~p, it must be false; it cannot be true.

There is a variety of modal systems, the weakest of which is called K (after Saul Kripke, who exerted important influence on the development of modal logic), and it involves only two additional rules:

Necessitation Rule:   If A is a theorem of K, then so is □A.

Distribution Axiom:  □(A ⊃ B) ⊃ (□A ⊃ □B).  [If it is necessarily the case that if A, then B, then if it is necessarily the case that A, it is necessarily the case that B.]

Other systems maintain these rules and add others for increasing strength. For instance, the (S4) modal system includes axiom (4):

(4)  □A ⊃ □□A  [If it is necessarily the case that A, then it is necessarily necessary that A.]

An influential and intuitive way of thinking about modal concepts is the idea of “possible worlds” (see Plantinga, 1974; Lewis 1986). A world is just the set of all true propositions. The actual world is the set of all actually true propositions—everything that was true, is true, and (depending on what you believe about the future) will be true. A possible world is a way the actual world might have been. Imagine you wore green underwear today. The actual world might have been different in that way: you might have worn blue underwear. In this interpretation of modal quantifiers, there is a possible world in which you wore blue underwear instead of green underwear. And for every possibility like this, and every combination of those possibilities, there is a distinct possible world.

If a proposition is not possible, then there is no possible world in which that proposition is true. The statement, “That object is red all over and blue all over at the same time,” is not true in any possible world. Therefore, it is not possible (~◊P), or, in other words, necessarily false (□~P). If a proposition is true in all possible worlds, it is necessarily true. For instance, the proposition, “Two plus two equals four,” is true in all possible worlds, so it is necessarily true (□P) or not possibly false (~◊~P).
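
The possible-worlds interpretation can be made concrete with a toy model. In the Python sketch below, everything is invented for illustration: a handful of worlds, an accessibility relation saying which worlds are possible relative to which, and a valuation saying where each proposition holds. Necessity at a world is truth at every accessible world; possibility is truth at some accessible world.

```python
# A toy Kripke-style model: worlds, accessibility, and a valuation.
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w3"}}
true_at = {"p": {"w1", "w2"}, "q": {"w2"}}

def holds(prop, w):
    return w in true_at[prop]

def box(prop, w):      # necessarily: true at every world accessible from w
    return all(holds(prop, v) for v in access[w])

def diamond(prop, w):  # possibly: true at some world accessible from w
    return any(holds(prop, v) for v in access[w])

print(box("p", "w1"))      # True:  p holds at w1 and w2
print(box("q", "w1"))      # False: q fails at w1
print(diamond("q", "w1"))  # True:  q holds at w2, accessible from w1
```

If every world is accessible from every world, box and diamond match the informal gloss above: necessity is truth in all possible worlds, and possibility is truth in at least one.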

All modal systems have a number of controversial implications, and there is not space to review them here. Here we need only note that modal logic is a type of formal reasoning that increases the power of propositional logic to capture more of what we attempt to express in natural languages. (For more, see “Modal Logic: A Contemporary View.”)

d. Predicate Logic

Predicate logic, in particular, first-order predicate logic, is even more powerful than propositional logic. Whereas propositional logic treats propositions as atomic wholes, predicate logic allows reasoners to identify and refer to subjects of propositions, independently of their predicates. For instance, whereas the proposition, “Susan is witty,” would be replaced with a single upper-case letter, say “S,” in propositional logic, predicate logic would assign the subject “Susan” a lower-case letter, s, and the predicate “is witty” an upper-case letter, W, and the translation (or formula) would be: Ws.

In addition to distinguishing subjects and predicates, first-order predicate logic allows reasoners to quantify over subjects. The quantifiers in predicate logic are “All…,” which is comparable to the “All” quantifier in categorical logic and is sometimes symbolized with an upside-down A: ∀ (though it may not be symbolized at all), and “There is at least one…,” which is comparable to the “Some” quantifier in categorical logic and is symbolized with a backward E: ∃. E and O claims are formed by employing the negation operator from propositional logic. In this formal system, the proposition, “Someone is witty,” for example, has the form: There is an x, such that x has the property of being witty, which is symbolized: (∃x)(Wx). Similarly, the proposition, “Everyone is witty,” has the form: For all x, x has the property of being witty, which is symbolized (∀x)(Wx) or, without the ∀: (x)(Wx).
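
Over a finite domain, the two quantifiers reduce to familiar checks: (∃x)(Wx) asks whether at least one member of the domain satisfies W, and (x)(Wx) asks whether every member does. A minimal Python illustration, with an invented domain and predicate:

```python
# An invented domain of people and a predicate W ("is witty").
domain = ["susan", "tom", "ada"]
witty = {"susan", "ada"}

def W(x):
    return x in witty

# (Ex)(Wx): "Someone is witty."
print(any(W(x) for x in domain))   # True: susan (and ada) satisfy W

# (x)(Wx): "Everyone is witty."
print(all(W(x) for x in domain))   # False: tom does not satisfy W
```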

Predicate derivations are conducted according to the same rules of inference and replacement as propositional logic, with the addition of four rules to accommodate adding and eliminating quantifiers.

Second-order predicate logic extends first-order predicate logic to allow critical thinkers to quantify over and draw inferences about subjects and predicates, including relations among subjects and predicates. In both first- and second-order logic, predicates typically take the form of properties (one-place predicates) or relations (two-place predicates), though there is no upper limit on place numbers. Second-order logic allows us to treat predicates themselves as falling under quantifiers, as in quantifying over everything that is (specifically, that has the property of being) a tea cup, or in the claim that everything that is a bachelor is unmarried.

e. Other Formal Systems

It is worth noting here that the formal reasoning systems we have seen thus far (categorical, propositional, and predicate) all presuppose that truth is bivalent, that is, two-valued. The two values critical thinkers are most often concerned with are true and false, but any bivalent system is subject to the rules of inference and replacement of propositional logic. The most common alternative to the values true and false is the binary code of 1s and 0s used in computer programming. All logics that presuppose bivalence are called classical logics. In the next section, we see that not all formal systems are bivalent; there are non-classical logics. The existence of non-classical systems raises interesting philosophical questions about the nature of truth and the legitimacy of our basic rules of reasoning, but these questions are too far afield for this context. Many philosophers regard bivalent systems as legitimate for all but the most abstract and purely formal contexts. Included below is a brief description of three of the most common non-classical logics.

Tense logic, or temporal logic, is a formal modal system developed by Arthur Prior (1957, 1967, 1968) to accommodate propositional language about time. For example, in addition to standard propositional operators, tense logic includes four operators for indexing times: P “It has at some time been the case that…”; F “It will at some time be the case that…”; H “It has always been the case that…”; and G “It will always be the case that….”

Many-valued logic, or n-valued logic, is a family of formal logical systems that attempts to accommodate intuitions that suggest some propositions have values in addition to true and false. These are often motivated by intuitions that some propositions have neither of the classic truth values; their truth value is indeterminate (not just undeterminable, but neither true nor false), for example, propositions about the future such as, “There will be a sea battle tomorrow.” If the future does not yet exist, there is no fact about the future, and therefore, nothing for a proposition to express.

Fuzzy logic is a type of many-valued logic developed out of Lotfi Zadeh’s (1965) work on mathematical sets. Fuzzy logic attempts to accommodate intuitions that suggest some propositions have truth value in degrees, that is, some degree of truth between true and false. It is motivated by concerns about vagueness in reality, for example whether a certain color is red or some degree of red, or whether some temperature is hot or some degree of hotness.

Formal reasoning plays an important role in critical thinking, but that role arises only infrequently: there are significant limits to how we might use formal tools in our daily lives. How, then, do critical thinkers reason well when formal reasoning cannot help? That brings us to informal reasoning.

4. Informal Reasoning

Informal reasoning is inductive, which means that a proposition is inferred (but not derived) from one or more propositions on the basis of the strength provided by the premises (where “strength” means some degree of likelihood less than certainty or some degree of probability less than 1 but greater than 0; a proposition with 0% probability is necessarily false).

Particular premises grant strength to conclusions to the degree that they reflect certain relationships or structures in the world. For instance, if a particular type of event, p, is known to cause or indicate another type of event, q, then upon encountering an event of type p, we may infer that an event of type q is likely to occur. We may express this relationship among events propositionally as follows:

    1. Events of type p typically cause or indicate events of type q.
    2. An event of type p occurred.
    3. Therefore, an event of type q probably occurred.

If the structure of the world (for instance, natural laws) makes premise 1 true, then, if premise 2 is true, we can reasonably (though not certainly) infer the conclusion.

Unlike formal reasoning, the adequacy of informal reasoning depends on how well the premises reflect relationships or structures in the world. And since we have not experienced every relationship among objects or events or every structure, we cannot infer with certainty that a particular conclusion follows from a true set of premises about these relationships or structures. We can only infer them to some degree of likelihood by determining to the best of our ability either their objective probability or their probability relative to alternative conclusions.

The objective probability of a conclusion refers to how likely, given the way the world is regardless of whether we know it, that conclusion is to be true. The epistemic probability of a conclusion refers to how likely that conclusion is to be true given what we know about the world, or more precisely, given our evidence for its objective likelihood.

Objective probabilities are determined by facts about the world and they are not truths of logic, so we often need evidence for objective probabilities. For instance, imagine you are about to draw a card from a standard playing deck of 52 cards. Given particular assumptions about the world (that this deck contains 52 cards and that one of them is the Ace of Spades), the objective likelihood that you will draw an Ace of Spades is 1/52. These assumptions allow us to calculate the objective probability of drawing an Ace of Spades regardless of whether we have ever drawn a card before. But these are assumptions about the world that are not guaranteed by logic: we have to actually count the cards, to be sure we count accurately and are not dreaming or hallucinating, and that our memory (once we have finished counting) reliably maintains our conclusions. None of these processes logically guarantees true beliefs. So, if our assumptions are correct, we know the objective probability of actually drawing an Ace of Spades in the real world. But since there is no logical guarantee that our assumptions are right, we are left only with the epistemic probability (the probability based on our evidence) of drawing that card. If our assumptions are right, then the objective probability is the same as our epistemic probability: 1/52. But even if we are right, objective and epistemic probabilities can come apart under some circumstances.

Imagine you draw a card without looking at it and lay it face down. What is the objective probability that that card is an Ace of Spades? The structure of the world has now settled the question, though you do not know the outcome. If it is an Ace of Spades, the objective probability is 1 (100%); it is the Ace of Spades. If it is not the Ace of Spades, the objective probability is 0 (0%); it is not the Ace of Spades. But what is the epistemic probability? Since you do not know any more about the world than you did before you drew the card, the epistemic probability is the same as before you drew it: 1/52.
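
This difference can be made vivid with a short simulation. In the Python sketch below (illustrative only), the program “knows” which card was drawn, so it can report the objective probability, while the player, who has gained no new evidence, is left with the epistemic probability of 1/52.

```python
import random

# Build a standard 52-card deck (a worldly assumption, not a truth of logic).
ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(r, s) for r in ranks for s in suits]

drawn = random.choice(deck)   # the face-down card: the world has settled it

# Objective probability: 1 or 0, depending on what the card in fact is.
objective = 1.0 if drawn == ("A", "spades") else 0.0

# Epistemic probability: with no new evidence, still 1 in 52.
epistemic = 1 / len(deck)

print(objective, round(epistemic, 4))
```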

Since much of the way the world is is hidden from us (like the card laid face down), and since it is not obvious that we perceive reality as it actually is (we do not know whether the actual coins we flip are evenly weighted or whether the actual dice we roll are unbiased), our conclusions about probabilities in the actual world are inevitably epistemic probabilities. We can certainly calculate objective probabilities about abstract objects (for instance, hypothetically fair coins and dice—and these calculations can be evaluated formally using probability theory and statistics), but as soon as we apply these calculations to the real world, we must accommodate the fact that our evidence is incomplete.

There are four well-established categories of informal reasoning: generalization, analogy, causal reasoning, and abduction.

a. Generalization

Generalization is a way of reasoning informally from instances of a type to a conclusion about the type. This commonly takes two forms: reasoning from a sample of a population to the whole population, and reasoning from past instances of an object or event to future instances of that object or event. The latter is sometimes called “enumerative induction” because it involves enumerating past instances of a type in order to draw an inference about a future instance. But this distinction is weak; both forms of generalization use past or current data to infer statements about future instances and whole current populations.

A popular instance of inductive generalization is the opinion poll: a sample of a population of people is polled with respect to some statement or belief. For instance, if we poll 57 sophomores enrolled at a particular college about their experiences of living in dorms, these 57 comprise our sample of the population of sophomores at that particular college. We want to be careful how we define our population given who is part of our sample. Not all college students are like sophomores, so it is not prudent to draw inferences about all college students from these sophomores. Similarly, sophomores at other colleges are not necessarily like sophomores at this college (it could be the difference between a liberal arts college and a research university), so it is prudent not to draw inferences about all sophomores from this sample at a particular college.

Let us say that 90% of the 57 sophomores we polled hate the showers in their dorms. From this information, we might generalize in the following way:

  1. We polled 57 sophomores at Plato’s Academy. (the sample)
  2. 90% of our sample hates the showers in their dorms. (the polling data)
  3. Therefore, probably 90% of all sophomores at Plato’s Academy hate the showers in their dorms. (a generalization from our sample to the whole population of sophomores at Plato’s Academy)

Is this good evidence that 90% of all sophomores at that college hate the showers in their dorms?

A generalization is typically regarded as a good argument if its sample is representative of its population. A sample is representative if it is similar in the relevant respects to its population. A perfectly representative sample would include the whole population: the sample would be identical with the population, and thus, perfectly representative. In that case, no generalization is necessary. But we rarely have the time or resources to evaluate whole populations. And so, a sample is generally regarded as representative if it is large relative to its population and unbiased.

In our example, whether our inference is good depends, in part, on how many sophomores there are. Are there 100? 2,000? If there are only 100, then our sample size seems adequate—we have polled over half the population. Is our sample unbiased? That depends on the composition of the sample. Is it composed only of women or only of men? If this college is not co-ed, that is not a problem. But if the college is co-ed and we have sampled only women, our sample is biased against men. We have information only about female sophomores’ dorm experiences, and therefore, we cannot generalize about male sophomores’ dorm experiences.

How large is large enough? This is a difficult question to answer. A poll of 1% of your high school does not seem large enough to be representative. You should probably gather more data. Yet a poll of 1% of your whole country is practically impossible (you are not likely to ever have enough grant money to conduct that poll). But could a poll of less than 1% be acceptable? This question is not easily answered, even by experts in the field. The simple answer is: the more, the better. The more complicated answer is: it depends on how many other factors you can control for, such as bias and hidden variables (see §4c for more on experimental controls).

Similarly, we might ask what counts as an unbiased sample. An overly simple answer is: the sample is taken randomly, that is, by using a procedure that prevents consciously or unconsciously favoring one segment of the population over another (flipping a coin, drawing lottery balls). But reality is not simple. In political polls, it is important not to use a selection procedure that results in a sample with a larger number of members of one political party than another relative to their distribution in the population, even if the resulting sample is random. For example, the two most prominent parties in the U.S. are the Democratic Party and the Republican Party. If 47% of the U.S. is Republican and 53% is Democrat, an unbiased sample would have approximately 47% Republicans and 53% Democrats. But notice that simply choosing at random may not guarantee that result; it could easily occur, just by choosing randomly, that our sample has 70% Democrats and 30% Republicans (suppose our computer chose, albeit randomly, from a highly Democratic neighborhood). Therefore, we want to control for representativeness in some criteria, such as gender, age, and education. And we explicitly want to avoid controlling for the results we are interested in; if we controlled for particular answers to the questions on our poll, we would not learn anything—we would get all and only the answers we controlled for.
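
A small simulation illustrates the point. In the Python sketch below (the population numbers are invented), repeated simple random samples of 50 people from a 53/47 population stray noticeably from the true proportion, while a stratified sample, which draws from each party in proportion to its share of the population, matches it by construction.

```python
import random

# A hypothetical population: 53% Democrats, 47% Republicans.
population = ["D"] * 5300 + ["R"] * 4700

def dem_share(sample):
    return sample.count("D") / len(sample)

random.seed(1)  # fixed seed so the run is repeatable

# Simple random samples of 50 vary widely around the true 0.53.
shares = [dem_share(random.sample(population, 50)) for _ in range(1000)]
print(min(shares), max(shares))   # extremes stray well away from 0.53

# Stratified sampling controls for party membership explicitly.
dems = [p for p in population if p == "D"]
reps = [p for p in population if p == "R"]
stratified = random.sample(dems, 53) + random.sample(reps, 47)
print(dem_share(stratified))      # exactly 0.53 by construction
```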

Difficulties determining representativeness suggest that reliable generalizations are not easy to construct. If we generalize on the basis of samples that are too small or if we cannot control for bias, we commit the informal fallacy of hasty generalization (see §5b). In order to generalize well, it seems we need a bit of machinery to guarantee representativeness. In fact, it seems we need an experiment, one of the primary tools in causal reasoning (see §4c below).

b. Analogy

Argument from Analogy, also called analogical reasoning, is a way of reasoning informally about events or objects based on their similarities. A classic instance of reasoning by analogy occurs in archaeology, when researchers attempt to determine whether a stone object is an artifact (a human-made item) or simply a rock. By comparing the features of an unknown stone with well-known artifacts, archaeologists can infer whether a particular stone is an artifact. Other examples include identifying animals’ tracks by their similarities with pictures in a guidebook and consumer reports on the reliability of products.

To see how arguments from analogy work in detail, imagine two people who, independently of one another, want to buy a new pickup truck. Each chooses a make and model he or she likes, and let us say they decide on the same truck. They then visit a number of consumer reporting websites to read reports on trucks matching the features of the make and model they chose, for instance, the year it was built, the size of the engine (6 cyl. or 8 cyl.), the type of transmission (2WD or 4WD), the fuel mileage, and the cab size (standard, extended, crew). Now, let us say one of our prospective buyers is interested in safety—he or she wants a tough, safe vehicle that will protect against injuries in case of a crash. The other potential buyer is interested in mechanical reliability—he or she does not want to spend a lot of time and money fixing mechanical problems.

With this in mind, here is how our two buyers might reason analogically about whether to purchase the truck (with some fake report data included):

Buyer 1

  1. The truck I have in mind was built in 2012, has a 6-cylinder engine, a 2WD transmission, and a king cab.
  2. 62 people who bought trucks like this one posted consumer reports and have driven it for more than a year.
  3. 88% of those 62 people report that the truck feels very safe.
  4. Therefore, the truck I am looking at will likely be very safe.

Buyer 2

  1. The truck I have in mind was built in 2012, has a 6-cylinder engine, a 2WD transmission, and a king cab.
  2. 62 people who bought trucks like this one posted consumer reports and have driven it for more than a year.
  3. 88% of those 62 people report that the truck has had no mechanical problems.
  4. Therefore, the truck I am looking at will likely have no mechanical problems.

Are the features of these analogous vehicles (the ones reported on) sufficiently numerous and relevant for helping our prospective truck buyers decide whether to purchase the truck in question (the one on the lot)? Since we have some idea that the type of engine and transmission in a vehicle contribute to its mechanical reliability, Buyer 2 may have some relevant features on which to draw a reliable analogy. Fuel mileage and cab size are not obviously relevant, but engine specifications seem to be. Are these specifications numerous enough? That depends on whether anything else that we are not aware of contributes to overall reliability. Of course, if the trucks having the features we know also have all other relevant features we do not know (if there are any), then Buyer 2 may still be able to draw a reliable inference from analogy. Of course, we do not currently know this.

Alternatively, Buyer 1 seems to have very few relevant features on which to draw a reliable analogy. The features listed are not obviously related to safety. Are there safety options a buyer may choose but that are not included in the list? For example, can a buyer choose side-curtain airbags, or do such airbags come standard in this model? Does cab size contribute to overall safety? Although there are a number of similarities between the trucks, it is not obvious that we have identified features relevant to safety or whether there are enough of them. Further, reports of “feeling safe” are not equivalent to a truck actually being safe. Better evidence would be crash test data or data from actual accidents involving this truck. This information is not likely to be on a consumer reports website.

A further difficulty is that, in many cases, it is difficult to know how many similarities are necessary, since what matters most is whether the similarities are relevant. For instance, if having lots of room for passengers is your primary concern, then any other features are relevant only insofar as they affect cab size, and the features that affect cab size may be relatively few.

This example shows that arguments from analogy are difficult to formulate well. Arguments from analogy can be good arguments when critical thinkers identify a sufficient number of features of known objects that are also relevant to the feature inferred to be shared by the object in question. If a rock is shaped like a cutting tool, has marks consistent with shaping and sharpening, and has wear marks consistent with being held in a human hand, it is likely that rock is an artifact. But not all cases are as clear.
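
One crude way to make the structure of such comparisons explicit is to score how many features two objects share, as in the Python sketch below. The features and the measure (Jaccard similarity: shared features divided by total features) are invented for illustration; as the discussion above makes clear, the hard work lies in choosing features that are genuinely relevant, which no overlap score can do for us.

```python
# Invented feature sets for the artifact example.
known_artifact = {"tool_shape", "shaping_marks", "sharpening_marks", "hand_wear"}
unknown_stone  = {"tool_shape", "shaping_marks", "hand_wear"}
plain_rock     = {"tool_shape"}

def overlap(a, b):
    # Jaccard similarity: shared features / all features considered.
    return len(a & b) / len(a | b)

print(overlap(known_artifact, unknown_stone))  # 0.75: many relevant features shared
print(overlap(known_artifact, plain_rock))     # 0.25: shape alone is weak evidence
```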

It is often difficult to determine whether the features we have identified are sufficiently numerous or relevant to our interests. To determine whether an argument from analogy is good, a person may need to identify a causal relationship between those features and the one in which she is interested (as in the case with a vehicle’s mechanical reliability). This usually takes the form of an experiment, which we explore below (§4c).

Difficulties with constructing reliable generalizations and analogies have led critical thinkers to develop sophisticated methods for controlling for the ways these arguments can go wrong. The most common way to avoid the pitfalls of these arguments is to identify the causal structures in the world that account for or underwrite successful generalizations and analogies. Causal arguments are the primary method of controlling for extraneous causal influences and identifying relevant causes. Their development and complexity warrant regarding them as a distinct form of informal reasoning.

c. Causal Reasoning

Causal arguments attempt to draw causal conclusions (that is, statements that express propositions about causes: x causes y) from premises about relationships among events or objects. Though it is not always possible to construct a causal argument, when available, they have an advantage over other types of inductive arguments in that they can employ mechanisms (experiments) that reduce the risks involved in generalizations and analogies.

The interest in identifying causal relationships often begins with the desire to explain correlations among events (as pollen levels increase, so do allergy symptoms) or with the desire to replicate an event (building muscle, starting a fire) or to eliminate an event (polio, head trauma in football).

Correlations among events may be positive (where each event increases at roughly the same rate) or negative (where one event decreases in proportion to another’s increase). Correlations suggest a causal relationship among the events correlated.

[Graphs: a positive correlation and a negative correlation between two variables]

But we must be careful; correlations are merely suggestive—other forces may be at work. Let us say the y-axis in the charts above represents the number of millionaires in the U.S. and the x-axis represents the amount of money U.S. citizens pay for healthcare each year. Without further analysis, a positive correlation between these two may lead someone to conclude that increasing wealth causes people to be more health conscious and to seek medical treatment more often. A negative correlation may lead someone to conclude that wealth makes people healthier and, therefore, that they need to seek medical care less frequently.

Unfortunately, correlations can occur without any causal structures (mere coincidence) or because of a third, as-yet-unidentified event (a cause common to both events, or “common cause”), or the causal relationship may flow in an unexpected direction (what seems like the cause is really the effect). In order to determine precisely which event (if any) is responsible for the correlation, reasoners must eliminate possible influences on the correlation by “controlling” for possible influences on the relationship (variables).
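
A short simulation shows how a common cause can produce a strong correlation without any direct causal link between the correlated variables. In the Python sketch below (all data invented), x and y are each driven by a third, hidden variable, yet they correlate strongly with one another.

```python
import random

random.seed(0)

# A hidden common cause drives both series; neither causes the other.
common = [random.gauss(0, 1) for _ in range(500)]
x = [c + random.gauss(0, 0.3) for c in common]   # e.g., wealth
y = [c + random.gauss(0, 0.3) for c in common]   # e.g., healthcare spending

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((u - mean_a) * (v - mean_b) for u, v in zip(a, b))
    sd_a = sum((u - mean_a) ** 2 for u in a) ** 0.5
    sd_b = sum((v - mean_b) ** 2 for v in b) ** 0.5
    return cov / (sd_a * sd_b)

print(round(pearson(x, y), 2))   # high (around 0.9), despite no direct causal link
```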

Critical thinking about causes begins by constructing hypotheses about the origins of particular events. A hypothesis is an explanation or event that would account for the event in question. For example, if the question is how to account for increased acne during adolescence, and we are not aware of the existence of hormones, we might formulate a number of hypotheses about why this happens: during adolescence, people’s diets change (parents no longer dictate their meals), so perhaps some types of food cause acne; during adolescence, people become increasingly anxious about how they appear to others, so perhaps anxiety or stress causes acne; and so on.

After we have formulated a hypothesis, we identify a test implication that will help us determine whether our hypothesis is correct. For instance, if some types of food cause acne, we might choose a particular food, say, chocolate, and say: if chocolate causes acne (hypothesis), then decreasing chocolate will decrease acne (test implication). We then conduct an experiment to see whether our test implication occurs.

Reasoning about our experiment would then look like one of the following arguments:

Confirming Experiment

    1. If H, then TI.
    2. TI.
    3. Therefore, probably H.

Disconfirming Experiment

    1. If H, then TI.
    2. Not-TI.
    3. Therefore, probably Not-H.

There are a couple of important things to note about these arguments. First, despite appearances, both are inductive arguments. The one on the left commits the formal fallacy of affirming the consequent, so, at best, the premises confer only some degree of probability on the conclusion. The argument on the right looks to be deductive (on the face of it, it has the valid form modus tollens), but it would be inappropriate to regard it deductively. This is because we are not evaluating a logical connection between H and TI; we are evaluating a causal connection—TI might be true or false regardless of H (we might have chosen an inappropriate test implication or simply gotten lucky), and therefore, we cannot conclude with certainty that H does not causally influence TI. Therefore, “If…, then…” statements in experiments must be read as causal conditionals and not material conditionals (the term for how we used conditionals above).

Second, experiments can go wrong in many ways, so no single experiment will grant a high degree of probability to its causal conclusion. Experiments may be biased by hidden variables (causes we did not consider or detect, such as age, diet, medical history, or lifestyle), auxiliary assumptions (the theoretical assumptions by which we evaluate the results may be faulty), or underdetermination (there may be a number of hypotheses consistent with those results; for example, if it is actually sugar that causes acne, then chocolate bars, ice cream, candy, and sodas would yield the same test results). Because of this, experiments either confirm or disconfirm a hypothesis; that is, they give us some reason (but not a particularly strong reason) to believe our hypothesized causes are or are not the causes of our test implications, and therefore, of our observations (see Quine and Ullian, 1978). For this reason, experiments must be conducted many times, and only after we have a number of confirming or disconfirming results can we draw a strong inductive conclusion. (For more, see “Confirmation and Induction.”)
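
One way to see why a single experiment confers only weak support is to model confirmation probabilistically. The following Python sketch is a gloss added for illustration, not part of the traditional presentation; the probabilities are invented, and the third reflects underdetermination (rival hypotheses can produce the same test result):

    # A Bayesian gloss on repeated confirmation (all numbers invented).
    p_h = 0.5                # prior probability of the hypothesis H
    p_ti_given_h = 0.9       # chance of the test implication TI if H is true
    p_ti_given_not_h = 0.4   # TI can occur without H (underdetermination)

    for trial in range(1, 6):
        # Bayes' theorem: update P(H) after one more confirming result.
        p_ti = p_ti_given_h * p_h + p_ti_given_not_h * (1 - p_h)
        p_h = (p_ti_given_h * p_h) / p_ti
        print(f"After confirming experiment {trial}: P(H) = {p_h:.3f}")

On these numbers, five confirming experiments raise P(H) from 0.5 to roughly 0.98; no single confirmation settles the matter, which is the point of the paragraph above.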

Experiments may be formal or informal. In formal experiments, critical thinkers exert explicit control over experimental conditions: experimenters choose participants, include or exclude certain variables, and identify or introduce hypothesized events. Test subjects are selected according to control criteria (factors that may affect the results and that we therefore want to mitigate, such as age, diet, and lifestyle) and divided into control groups (groups where the hypothesized cause is absent) and experimental groups (groups where the hypothesized cause is present, either because it is introduced or selected for).

Subjects are then placed in experimental conditions. For instance, in a randomized study, the control group receives a placebo (an inert medium) whereas the experimental group receives the hypothesized cause—the putative cause is introduced, the groups are observed, and the results are recorded and compared. When a hypothesized cause is dangerous (such as smoking) or its effects potentially irreversible (for instance, post-traumatic stress disorder), the experimental design must be restricted to selecting for the hypothesized cause already present in subjects, for example, in retrospective (backward-looking) and prospective (forward-looking) studies. In all types of formal experiments, subjects are observed under exposure to the test or placebo conditions for a specified time, and results are recorded and compared.
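
As a toy version of the randomized design just described, consider the following Python simulation. The effect size and group sizes are invented; it sketches the logic of comparing a control group with an experimental group rather than modeling any real trial:

    import random

    random.seed(1)

    def outcome(received_treatment):
        # Invented effect: the hypothesized cause lowers the measured symptom.
        base = random.gauss(50, 5)                      # individual variation
        return base - (8 if received_treatment else 0)

    subjects = list(range(100))
    random.shuffle(subjects)        # randomization mitigates hidden variables
    control = subjects[:50]         # placebo group
    experimental = subjects[50:]    # receives the hypothesized cause

    def mean(xs):
        return sum(xs) / len(xs)

    control_results = [outcome(False) for _ in control]
    experimental_results = [outcome(True) for _ in experimental]

    print("Control mean:     ", round(mean(control_results), 1))
    print("Experimental mean:", round(mean(experimental_results), 1))

A persistent gap between the group means, replicated across many such experiments, is what confirms the causal hypothesis; a single run remains subject to the problems of hidden variables and underdetermination noted above.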

In informal experiments, critical thinkers do not have access to sophisticated equipment or facilities and, therefore, cannot exert explicit control over experimental conditions. They are left to make considered judgments about variables. The most common informal experiments are John Stuart Mill’s five methods of inductive reasoning, called Mill’s Methods, which he first formulated in A System of Logic (1843). Here is a very brief summary of Mill’s five methods:

(1) The Method of Agreement

If all conditions containing the event y also contain x, x is probably the cause of y.

For example:

“I’ve eaten from the same box of cereal every day this week, but all the times I got sick after eating cereal were times when I added strawberries. Therefore, the strawberries must be bad.”

(2) The Method of Difference

If all conditions lacking y also lack x, x is probably the cause of y.

For example:

“The organization turned all its tax forms in on time for years, that is, until our comptroller, George, left; after that, we were always late. Only after George left were we late. Therefore, George was probably responsible for getting our tax forms in on time.”

(3) The Joint Method of Agreement and Difference

If all conditions containing event y also contain event x, and all events lacking y also lack x, x is probably the cause of y.

For example:

“The conditions at the animal shelter have been pretty regular, except we had a string of about four months last year when the dogs barked all night, every night. But at the beginning of those four months we sheltered a redbone coonhound, and the barking stopped right after a family adopted her. All the times the redbone hound wasn’t present, there was no barking. Only the time she was present was there barking. Therefore, she probably incited all the other dogs to bark.”

(4) The Method of Concomitant Variation

If the frequency of event y increases and decreases as event x increases and decreases, respectively, x is probably the cause of y.

For example:

“We can predict the amount of alcohol sales by the rate of unemployment. As unemployment rises, so do alcohol sales. As unemployment drops, so do alcohol sales. Last quarter marked the highest unemployment in three years, and our sales last quarter were the highest they had been in those three years. Therefore, unemployment probably causes people to buy alcohol.”

(5) The Method of Residues

If a number of factors x, y, and z may be responsible for a set of events A, B, and C, and if we discover reasons for thinking that x is the cause of A and y is the cause of B, then we have reason to believe z is the cause of C.

For example:

“The people who come through this medical facility are usually starving and have malaria, and a few have polio. We are particularly interested in treating the polio. Take this patient here: she is emaciated, which is caused by starvation; and she has a fever, which is caused by malaria. But notice that her muscles are deteriorating, and her bones are sore. This suggests she also has polio.”
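
Mill's first two methods admit of a simple computational reading. In the Python sketch below, a toy with invented data modeled on the cereal example, each observation records the set of factors present on one occasion, and combining the methods (as in the joint method) narrows the candidate causes:

    # A toy reading of Mill's Methods of Agreement and Difference.
    # Each observation pairs the factors present on one occasion with
    # whether the effect (getting sick) occurred. All data are invented.
    observations = [
        ({"cereal", "strawberries", "coffee"}, True),
        ({"cereal", "strawberries"}, True),
        ({"cereal", "coffee"}, False),
        ({"cereal"}, False),
    ]

    effect_cases = [factors for factors, sick in observations if sick]
    non_effect_cases = [factors for factors, sick in observations if not sick]

    # Method of Agreement: factors present in every case with the effect.
    agreement = set.intersection(*effect_cases)

    # Adding the Method of Difference: keep only the factors that are also
    # absent whenever the effect is absent.
    joint = {f for f in agreement
             if all(f not in factors for factors in non_effect_cases)}

    print("Agreement candidates:", agreement)  # cereal and strawberries
    print("Joint method singles out:", joint)  # strawberries only

In this idiom, the Method of Concomitant Variation corresponds to the correlations discussed earlier in this section, and the Method of Residues subtracts already-explained effects from the total.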

d. Abduction

Not all inductive reasoning is inferential. In some cases, an explanation is needed before we can even begin drawing inferences. Consider Darwin’s idea of natural selection. Natural selection is not an object, like a blood vessel or a cellular wall, and it is not, strictly speaking, a single event. It cannot be detected in individual organisms or observed in a generation of offspring. Natural selection is an explanation of biodiversity that combines the process of heritable variation and environmental pressures to account for biomorphic change over long periods of time. With this explanation in hand, we can begin to draw some inferences. For instance, we can separate members of a single species of fruit flies, allow them to reproduce for several generations, and then observe whether the offspring of the two groups can reproduce. If we discover they cannot reproduce, this is likely due to certain mutations in their body types that prevent them from procreating. And since this is something we would expect if natural selection were true, we have one piece of confirming evidence for natural selection. But how do we know the explanations we come up with are worth our time?

The term “abduction,” coined by C. S. Peirce (1839-1914) and also called retroduction or inference to the best explanation, refers to a way of reasoning informally that provides guidelines for evaluating explanations. Rather than turning on a type of argument (generalization, analogy, causation), the value of an explanation depends on the theoretical virtues it exemplifies. A theoretical virtue is a quality that renders an explanation more or less fitting as an account of some event. What constitutes fittingness (or “loveliness,” as Peter Lipton (2004) calls it) is controversial, but many of the virtues are intuitively compelling, and abduction is a widely accepted tool of critical thinking.

The most widely recognized theoretical virtue is probably simplicity, historically associated with William of Ockham (1288-1347) and known as Ockham’s Razor. A legend has it that Ockham was asked whether his arguments for God’s existence prove that only one God exists or whether they allow for the possibility that many gods exist. He supposedly responded, “Do not multiply entities beyond necessity.” Though this claim is not found in his writings, Ockham is now famous for advocating that we restrict our beliefs about what is true to only what is absolutely necessary for explaining what we observe.

In contemporary theoretical use, the virtue of simplicity is invoked to encourage caution in how many mechanisms we introduce to explain an event. For example, if natural selection can explain the origin of biological diversity by itself, there is no need to hypothesize both natural selection and a divine designer. But if natural selection cannot explain the origin of, say, the duck-billed platypus, then some other mechanism must be introduced. Of course, not just any mechanism will do. It would not suffice to say the duck-billed platypus is explained by natural selection plus gremlins. Just why this is the case depends on other theoretical virtues; ideally, the virtues work together to help critical thinkers decide among competing hypotheses to test. Here is a brief sketch of some other theoretical virtues or ideals:

Conservatism – a good explanation does not contradict well-established views in a field.

Independent Testability – a good explanation is successful on different occasions under similar circumstances.

Fecundity – a good explanation leads to results that make even more research possible.

Explanatory Depth – a good explanation provides details of how an event occurs.

Explanatory Breadth – a good explanation also explains other, similar events.
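
There is no agreed-upon algorithm for weighing these virtues against one another; abductive judgment is not arithmetic. Still, a purely illustrative Python toy, with invented scores and equal weights, can show the shape of the comparison the virtues make possible:

    # Purely illustrative: rank rival explanations by invented virtue scores
    # between 0 and 1. Real abductive judgment is not a calculation.
    virtues = ["simplicity", "conservatism", "testability",
               "fecundity", "depth", "breadth"]

    candidates = {
        "natural selection alone": [0.9, 0.9, 0.9, 0.8, 0.8, 0.9],
        "natural selection plus gremlins": [0.2, 0.1, 0.2, 0.3, 0.4, 0.5],
    }

    for name, scores in candidates.items():
        average = sum(scores) / len(scores)   # equal weights, for simplicity
        detail = ", ".join(f"{v}={s}" for v, s in zip(virtues, scores))
        print(f"{name}: average {average:.2f} ({detail})")

On any sensible scoring, the gremlin hypothesis does badly on simplicity and conservatism in particular, which is why it was ruled out above.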

Though abduction is structurally distinct from other inductive arguments, it functions similarly in practice: a good explanation provides a probabilistic reason to believe a proposition. This is why it is included here as a species of inductive reasoning. It might be thought that explanations only function to help critical thinkers formulate hypotheses, and do not, strictly speaking, support propositions. But there are intuitive examples of explanations that support propositions independently of however else they may be used. For example, a critical thinker may argue that the hypothesis that material objects exist outside our minds is a better explanation of why we perceive what we do (and, therefore, a reason to believe it) than the hypothesis that an evil demon is deceiving us, even if there is no inductive or deductive argument sufficient for believing that the latter hypothesis is false. (For more, see “Charles Sanders Peirce: Logic.”)

5. Detecting Poor Reasoning

Our attempts at thinking critically often go wrong, whether we are formulating our own arguments or evaluating the arguments of others. Sometimes it is in our interests for our reasoning to go wrong, such as when we would prefer someone to agree with us than to discover the truth value of a proposition. Other times it is not in our interests; we are genuinely interested in the truth, but we have unwittingly made a mistake in inferring one proposition from others. Whether our errors in reasoning are intentional or unintentional, such errors are called fallacies (from the Latin, fallax, which means “deceptive”). Recognizing and avoiding fallacies helps prevent critical thinkers from forming or maintaining defective beliefs.

Fallacies occur in a number of ways. An argument’s form may seem to us valid when it is not, resulting in a formal fallacy. Alternatively, an argument’s premises may seem to support its conclusion strongly but, due to some subtlety of meaning, do not, resulting in an informal fallacy. Additionally, some of our errors may be due to unconscious reasoning processes that may have been helpful in our evolutionary history, but do not function reliably in higher order reasoning. These unconscious reasoning processes are now widely known as heuristics and biases. Each type is briefly explained below.

a. Formal Fallacies

Formal fallacies occur when the form of an argument is presumed or seems to be valid (whether intentionally or unintentionally) when it is not. Formal fallacies are usually invalid variations of valid argument forms. Consider, for example, the valid argument form modus ponens (this is one of the rules of inference mentioned in §3b):

modus ponens (valid argument form)

    1. p → q
    2. p
    3. /.: q

For example:

    1. If it is a cat, then it is a mammal.
    2. It is a cat.
    3. Therefore, it is a mammal.

In modus ponens, we assume or “affirm” both the conditional and the left half of the conditional (called the antecedent): (p → q) and p. From these, we can infer that q, the second half or consequent, is true. This is a valid argument form: if the premises are true, the conclusion cannot be false.

Sometimes, however, we invert the conclusion and the second premise, affirming that the conditional, (p → q), and the right half of the conditional, q (the consequent), are true, and then inferring that the left half, p (the antecedent), is true. Note in the example below how the conclusion and second premise are switched. Switching them in this way creates a problem.

modus ponens (valid argument form)

    1. p → q
    2. p
    3. /.: q

affirming the consequent (formal fallacy)

    1. p → q
    2. q (q, the consequent of the conditional in premise 1, has been “affirmed” in premise 2)
    3. /.: p (?)

To get an intuitive sense of why “affirming the consequent” is a problem, consider this simple example:

affirming the consequent

    1. If it is a cat, then it is a mammal.
    2. It is a mammal.
    3. Therefore, it is a cat.(?)

From the fact that something is a mammal, we cannot conclude that it is a cat. It may be a dog or a mouse or a whale. The premises can be true and yet the conclusion can still be false. Therefore, this is not a valid argument form. But since it is an easy mistake to make, it is included in the set of common formal fallacies.

Here is a second example with the rule of inference called modus tollens. Modus tollens involves affirming a conditional, (p → q), and denying that conditional’s consequent: ~q. From these two premises, we can validly infer the denial of the antecedent: ~p. But if we switch the conclusion and the second premise, we get another fallacy, called denying the antecedent.

modus tollens (valid argument form)

    1. p → q
    2. ~q
    3. /.: ~p

    1. If it is a cat, then it is a mammal.
    2. It is not a mammal.
    3. Therefore, it is not a cat.

denying the antecedent (formal fallacy)

    1. p → q
    2. ~p (p, the antecedent of the conditional in premise 1, has been “denied” in premise 2)
    3. /.: ~q (?)

    1. If it is a cat, then it is a mammal.
    2. It is not a cat.
    3. Therefore, it is not a mammal. (?)
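
Validity for these propositional forms can be checked mechanically by enumerating every assignment of truth values to p and q. The following Python sketch, added here for illustration, confirms that modus ponens and modus tollens are valid while the two fallacies are not:

    from itertools import product

    def implies(p, q):
        # Material conditional: p → q is false only when p is true and q false.
        return (not p) or q

    def valid(premises, conclusion):
        # Valid iff no assignment makes all premises true and the conclusion false.
        for p, q in product([True, False], repeat=2):
            if all(prem(p, q) for prem in premises) and not conclusion(p, q):
                return False
        return True

    # modus ponens: p → q, p, therefore q
    print(valid([implies, lambda p, q: p], lambda p, q: q))          # True

    # affirming the consequent: p → q, q, therefore p
    print(valid([implies, lambda p, q: q], lambda p, q: p))          # False

    # modus tollens: p → q, ~q, therefore ~p
    print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True

    # denying the antecedent: p → q, ~p, therefore ~q
    print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # False

The counterexample the checker finds for each fallacy is the intuitive one: something that is a mammal but not a cat.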

Technically, all informal reasoning is formally fallacious—all informal arguments are deductively invalid. Nevertheless, since those who offer inductive arguments rarely presume they are valid, we do not regard them as reasoning fallaciously.

b. Informal Fallacies

Informal fallacies occur when the meaning of the terms used in the premises of an argument suggest a conclusion that does not actually follow from them (the conclusion either follows weakly or with no strength at all). Consider an example of the informal fallacy of equivocation, in which a word with two distinct meanings is used in both of its meanings:

    1. Any law can be repealed by Congress.
    2. Gravity is a law.
    3. Therefore, gravity can be repealed by Congress.

In this case, the argument’s premises are true when the word “law” is rightly interpreted in each, but the conclusion does not follow because the word “law” has a different referent in premise 1 (political laws) than in premise 2 (a law of nature). This argument equivocates on the meaning of “law” and is, therefore, fallacious.

Consider, also, the informal fallacy of ad hominem (abusive), which occurs when an arguer appeals to a person’s character as a reason to reject her proposition:

“Elizabeth argues that humans do not have souls; they are simply material beings. But Elizabeth is a terrible person and often talks down to children and the elderly. Therefore, she could not be right that humans do not have souls.”

The argument might look like this:

    1. Elizabeth is a terrible person and often talks down to children and the elderly.
    2. Therefore, Elizabeth is not right that humans do not have souls.

The conclusion does not follow because whether Elizabeth is a terrible person is irrelevant to the truth of the proposition that humans do not have souls. Elizabeth’s argument for this statement is relevant, but her character is not.

Another way to evaluate this fallacy is to note that, as the argument stands, it is an enthymeme (see §2); it is missing a crucial premise, namely: If anyone is a terrible person, that person makes false statements. But this premise is clearly false. There are many ways in which one can be a terrible person, and not all of them imply that someone makes false statements. (In fact, someone could be terrible precisely because they are viciously honest.) Once we fill in the missing premise, we see the argument is not cogent because at least one premise is false.

Importantly, we face a number of informal fallacies on a daily basis, and without the ability to recognize them, their regularity can make them seem legitimate. Here are three others that only scratch the surface:

Appeal to the People: We are often encouraged to believe or do something just because everyone else does. We are encouraged to believe what our political party believes, what the people in our churches or synagogues or mosques believe, what people in our family believe, and so on. We are encouraged to buy things because they are “bestsellers” (lots of people buy them). But the fact that lots of people believe or do something is not, on its own, a reason to believe or do what they do.

Tu Quoque (You, too!): We are often discouraged from pursuing a conclusion or action if our own beliefs or actions are inconsistent with them. For instance, if someone attempts to argue that everyone should stop smoking, but that person smokes, their argument is often given less weight: “Well, you smoke! Why should everyone else quit?” But the fact that someone believes or does something inconsistent with what they advocate does not, by itself, discredit the argument. Hypocrites may have very strong arguments despite their personal inconsistencies.

Base Rate Neglect: It is easy to look at what happens after we do something or enact a policy and conclude that the act or policy caused those effects. Consider a law reducing speed limits from 75 mph to 55 mph in order to reduce highway accidents. And, in fact, in the three years after the reduction, highway accidents dropped 30%! This seems like a direct effect of the reduction. However, this is not the whole story. Imagine you looked back at the three years prior to the law and discovered that accidents had dropped 30% over that time, too. If that happened, it might not actually be the law that caused the reduction in accidents. The law did not change the trend in accident reduction. If we only look at the evidence after the law, we are neglecting the rate at which the event occurred without the law. The base rate of an event is the rate that the event occurs without the potential cause under consideration. To take another example, imagine you start taking cold medicine, and your cold goes away in a week. Did the cold medicine cause your cold to go away? That depends on how long colds normally last and when you took the medicine. In order to determine whether a potential cause had the effect you suspect, do not neglect to compare its putative effects with the effects observed without that cause.
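
The speed-limit example reduces to a small calculation. The Python sketch below uses invented absolute numbers consistent with the 30% drops described in the example; the point is simply that the trend before the law matches the trend after it:

    # Base rate check on the speed-limit example (absolute figures invented).
    accidents_6_years_ago = 1000
    accidents_3_years_ago = 700   # a 30% drop with NO change in the speed limit
    accidents_now = 490           # a 30% drop after the 55 mph law

    drop_before = (accidents_6_years_ago - accidents_3_years_ago) / accidents_6_years_ago
    drop_after = (accidents_3_years_ago - accidents_now) / accidents_3_years_ago

    print(f"Drop before the law: {drop_before:.0%}")  # 30%
    print(f"Drop after the law:  {drop_after:.0%}")   # 30%

Because the law did not change the trend, the post-law drop, taken alone, is no evidence that the law caused it.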

For more on formal and informal fallacies and over 200 different types with examples, see “Fallacies.”

c. Heuristics and Biases

In the 1960s, psychologists began to suspect there is more to human reasoning than conscious inference. Daniel Kahneman and Amos Tversky confirmed these suspicions with their discoveries that many of the standard assumptions about how humans reason in practice are unjustified. In fact, humans regularly violate these standard assumptions, the most significant for philosophers and economists being that humans are fairly good at calculating the costs and benefits of their behavior; that is, they naturally reason according to the dictates of Expected Utility Theory. Kahneman and Tversky showed that, in practice, reasoning is affected by many non-rational influences, such as the wording used to frame scenarios (framing bias) and the information most vividly available to reasoners (the availability heuristic).

Consider the difference in your belief about the likelihood of getting robbed before and after seeing a news report about a recent robbery, or the difference in your belief about whether you will be bitten by a shark the week before and the week after Discovery Channel’s “Shark Week.” Most of us will regard these events as more likely after we have seen them on television than before. Objectively, they are no more or less likely to happen regardless of our seeing them on television, but we perceive them as more likely because their possibility is more vivid to us. These are examples of the availability heuristic.

Since the 1960s, experimental psychologists and economists have conducted extensive research revealing dozens of these unconscious reasoning processes, including ordering bias, the representativeness heuristic, confirmation bias, attentional bias, and the anchoring effect. The field of behavioral economics, made popular by Dan Ariely (2008; 2010; 2012) and Richard Thaler and Cass Sunstein (2009), emerged from and contributes to heuristics and biases research and applies its insights to social and economic behaviors.

Ideally, recognizing and understanding these unconscious, non-rational reasoning processes will help us mitigate their undermining influence on our reasoning abilities (Gigerenzer, 2003). However, it is unclear whether we can simply choose to overcome them or whether we have to construct mechanisms that mitigate their influence (for instance, using double-blind experiments to prevent confirmation bias).

6. The Scope and Virtues of Good Reasoning

Whether the process of critical thinking is productive for reasoners—that is, whether it actually answers the questions they are interested in answering—often depends on a number of linguistic, psychological, and social factors. We encountered some of the linguistic factors in §1. In closing, let us consider some of the psychological and social factors that affect the success of applying the tools of critical thinking.

a. Context

Not all psychological and social contexts are conducive for effective critical thinking. When reasoners are depressed or sad or otherwise emotionally overwhelmed, critical thinking can often be unproductive or counterproductive. For instance, if someone’s child has just died, it would be unproductive (not to mention cruel) to press the philosophical question of why a good God would permit innocents to suffer or whether the child might possibly have a soul that could persist beyond death. Other instances need not be so extreme to make the same point: your company’s holiday party (where most people would rather remain cordial and superficial) is probably not the most productive context in which to debate the president’s domestic policy or the morality of abortion.

The process of critical thinking is primarily about detecting truth, and truth may not always be of paramount value. In some cases, comfort or usefulness may take precedence over truth. The case of the loss of a child is a case where comfort seems to take precedence over truth. Similarly, consider the case of determining what the speed limit should be on interstate highways. Imagine we are trying to decide whether it is better to allow drivers to travel at 75 mph or to restrict them to 65. To be sure, there may be no fact of the matter as to which is morally better, and there may not be any difference in the rate of interstate deaths between states that set the limit at 65 and those that set it at 75. But given the nature of the law, a decision about which speed limit to set must be made. If there is no relevant difference between setting the limit at 65 and setting it at 75, critical thinking can only tell us that, not which speed limit to set. This shows that, in some cases, concern with truth gives way to practical or preferential concerns (for example, Should I make this decision on the basis of what will make citizens happy? Should I base it on whether I will receive more campaign contributions from the business community?). All of this suggests that critical thinking is most productive in contexts where participants are already interested in truth.

b. The Principle of Charity/Humility

Critical thinking is also most productive when people in the conversation regard themselves as fallible, subject to error, misinformation, and deception. The desire to be “right” has a powerful influence on our reasoning behavior. It is so strong that our minds bias us in favor of the beliefs we already hold even in the face of disconfirming evidence (a phenomenon known as “confirmation bias”). In his famous article, “The Ethics of Belief” (1877), W. K. Clifford notes that, “We feel much happier and more secure when we think we know precisely what to do, no matter what happens, than when we have lost our way and do not know where to turn. … It is the sense of power attached to a sense of knowing that makes men desirous of believing, and afraid of doubting” (2010: 354).

Nevertheless, when we are open to the possibility that we are wrong, that is, if we are humble about our conclusions and we interpret others charitably, we have a better chance at having rational beliefs in two senses. First, if we are genuinely willing to consider evidence that we are wrong—and we demonstrate that humility—then we are more likely to listen to others when they raise arguments against our beliefs. If we are certain we are right, there would be little reason to consider contrary evidence. But if we are willing to hear it, we may discover that we really are wrong and give up faulty beliefs for more reasonable ones.

Second, if we are willing to be charitable to arguments against our beliefs, then if our beliefs are unreasonable, we have an opportunity to see the ways in which they are unreasonable. On the other hand, if our beliefs are reasonable, then we can explain more effectively just how well they stand against the criticism. This is weakly analogous to competition in certain types of sporting events, such as basketball. If you only play teams that are far inferior to your own, you do not know how good your team really is. But if you can beat a well-respected team on fair terms, any confidence you have is justified.

c. The Principle of Caution

In our excitement over good arguments, it is easy to overextend our conclusions, that is, to infer statements that are not really warranted by our evidence. From an argument for a first, uncaused cause of the universe, it is tempting to infer the existence of a sophisticated deity such as that of the Judeo-Christian tradition. From an argument for the compatibilism of the free will necessary for moral responsibility and determinism, it is tempting to infer that we are actually morally responsible for our behaviors. From an argument for negative natural rights, it is tempting to infer that no violation of a natural right is justifiable. Therefore, it is prudent to continually check our conclusions to be sure they do not include more content than our premises allow us to infer.

Of course, the principle of caution must itself be used with caution. If applied too strictly, it may lead reasoners to suspend all belief, and refrain from interacting with one another and their world. This is not, strictly speaking, problematic; ancient skeptics, such as the Pyrrhonians, advocated suspending all judgments except those about appearances in hopes of experiencing tranquility. However, at least some judgments about the long-term benefits and harms seem indispensable even for tranquility, for instance, whether we should retaliate in self-defense against an attacker or whether we should try to help a loved one who is addicted to drugs or alcohol.

d. The Expansiveness of Critical Thinking

The importance of critical thinking cannot be overstated because its relevance extends into every area of life, from politics, to science, to religion, to ethics. Not only does critical thinking help us draw inferences for ourselves, it helps us identify and evaluate the assumptions behind statements, the moral implications of statements, and the ideologies to which some statements commit us. This can be a disquieting and difficult process because it forces us to wrestle with preconceptions that might not be accurate. Nevertheless, if the process is conducted well, it can open new opportunities for dialogue, sometimes called “critical spaces,” that allow people who might otherwise disagree to find beliefs in common from which to engage in a more productive conversation.

It is this possibility of creating critical spaces that allows philosophical approaches like Critical Theory to effectively challenge the way social, political, and philosophical debates are framed. For example, if a discussion about race or gender or sexuality is framed in terms that, because of the origins of those terms or the way they have functioned socially, alienate or disproportionately exclude certain members of the population, then critical space is necessary for being able to evaluate that framing so that a more productive dialogue can occur (see Foresman, Fosl, and Watson, 2017, ch. 10 for more on how critical thinking and Critical Theory can be mutually supportive).

e. Productivity and the Limits of Rationality

Despite the fact that critical thinking extends into every area of life, not every important aspect of our lives is easily or productively subjected to the tools of language and logic. Thinkers who are tempted to subject everything to the cold light of reason may discover they miss some of what is deeply enjoyable about living. The psychologist Abraham Maslow writes, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (1966: 16). But it is helpful to remember that language and logic are tools, not the projects themselves. Even formal reasoning systems depend on axioms that are not provable within their own systems (consider Euclidean geometry or Peano arithmetic). We must make some decisions about what beliefs to accept and how to live our lives on the basis of considerations outside of critical thinking.

Borrowing an example from William James (1896), consider the statement, “Religion X is true.” James says that, while some people find this statement interesting, and therefore, worth thinking critically about, others may not be able to consider the truth of the statement. For any particular religious tradition, we might not know enough about it to form a belief one way or the other, and even suspending judgment may be difficult, since it is not obvious what we are suspending judgment about.

If I say to you: ‘Be a theosophist or be a Mohammedan,’ it is probably a dead option, because for you neither hypothesis is likely to be alive. But if I say: ‘Be an agnostic or be a Christian,’ it is otherwise: trained as you are, each hypothesis makes some appeal, however small, to your belief (2010: 357).

Ignoring the circularity in his definition of “dead option,” James’s point seems to be that if you know nothing about a view or what statements it entails, no amount of logic or evidence could help you form a reasonable belief about that position.

We might criticize James at this point because his conclusion seems to imply that we have no duty to investigate dead options, that is, to discover if there is anything worth considering in them. If we are concerned with truth, the simple fact that we are not familiar with a proposition does not mean it is not true or potentially significant for us. But James’s argument is subtler than this criticism suggests. Even if you came to learn about a particularly foreign religious tradition, its tenets may be so contrary to your understanding of the world that you could not entertain them as possible beliefs of yours. For instance, you know perfectly well that, if some events had been different, Hitler would not have existed: his parents might have had no children, or his parents’ parents might have had no children. You know roughly what it would mean for Hitler not to have existed and the sort of events that could have made it true that he did not exist. But how much evidence would it take to convince you that, in fact, Hitler did not exist, that is, that your belief that Hitler did exist is false? Could there be an argument strong enough? Not obviously. Since all the information we have about Hitler unequivocally points to his existence, any arguments against that belief would have to affect a very broad range of statements; they would have to be strong enough to make us skeptical of large parts of reality.

7. Approaches to Improving Reasoning through Critical Thinking

Recall that the goal of critical thinking is not just to study what makes reasons and statements good, but to help us improve our ability to reason, that is, to improve our ability to form, hold, and discard beliefs according to whether they meet the standards of good thinking. Some ways of approaching this latter goal are more effective than others. While the classical approach focuses on technical reasoning skills, the Paul/Elder model encourages us to think in terms of critical concepts, and irrationality approaches use empirical research on instances of poor reasoning to help us improve reasoning where it is least obvious we need it and where we need it most. Which approach or combination of approaches is most effective depends, as noted above, on the context and limits of critical thinking, but also on scientific evidence of their effectiveness. Those who teach critical thinking, of all people, should be engaged with the evidence relevant to determining which approaches are most effective.

a. Classical Approaches

The classic approach to critical thinking follows roughly the structure of this article: critical thinkers attempt to interpret statements or arguments clearly and charitably, and then they apply the tools of formal and informal logic and science, while carefully attempting to avoid fallacious inferences (see Weston, 2008; Walton, 2008; Watson and Arp, 2015). This approach requires spending extensive time learning and practicing technical reasoning strategies. It presupposes that reasoning is primarily a conscious activity, and that enhancing our skills in these areas will improve our ability to reason well in ordinary situations.

There are at least two concerns about this approach. First, it is highly time intensive relative to its payoff. Learning the terminology of systems like propositional and categorical logic and the names of the fallacies, and practicing applying these tools to hypothetical cases requires significant time and energy. And it is not obvious, given the problems with heuristics and biases, whether this practice alone makes us better reasoners in ordinary contexts. Second, many of the ways we reason poorly are not consciously accessible (recall the heuristics and biases discussion in §5c). Our biases, combined with the heuristics we rely on in ordinary situations, can only be detected in experimental settings, and addressing them requires restructuring the ways in which we engage with evidence (see Thaler and Sunstein, 2009).

b. The Paul/Elder Model

Richard Paul and Linda Elder (Paul and Elder, 2006; Paul, 2012) developed an alternative to the classical approach on the assumption that critical thinking is not something that is limited to academic study or to the discipline of philosophy. On their account, critical thinking is a broad set of conceptual skills and habits aimed at a set of standards that are widely regarded as virtues of thinking: clarity, accuracy, depth, fairness, and others. They define it simply as “the art of analyzing and evaluating thinking with a view to improving it” (2006: 4). Their approach, then, is to focus on the elements of thought and intellectual virtues that help us form beliefs that meet these standards.

The Paul/Elder model is made up of three sets of concepts: elements of thought, intellectual standards, and intellectual traits. In this model, we begin by identifying the features present in every act of thought. They use “thought” to mean critical thought aimed at forming beliefs, not just any act of thinking, such as musing, wishing, hoping, or remembering. According to the model, every act of thought involves:

    point of view
    purpose
    implications and consequences
    assumptions
    concepts
    interpretation and inference
    information
    question at issue

These comprise the subject matter of critical thinking; that is, they are what we are evaluating when we are thinking critically. We then engage with this subject matter by subjecting these elements to what Paul and Elder call universal intellectual standards. These are evaluative goals we should be aiming at with our thinking:

    clarity
    accuracy
    precision
    relevance
    depth
    breadth
    logic
    significance
    fairness

While in classical approaches logic is the predominant means of thinking critically, in the Paul/Elder model it is put on equal footing with eight other standards. Finally, Paul and Elder argue that it is helpful to approach the critical thinking process with a set of intellectual traits or virtues that dispose us to use the elements and standards well.

    intellectual humility
    intellectual autonomy
    intellectual integrity
    intellectual courage
    intellectual perseverance
    confidence in reason
    intellectual empathy
    fairmindedness

To remind us that these are virtues of thought relevant to critical thinking, they use “intellectual” to distinguish these traits from their moral counterparts (moral integrity, moral courage, and so on).

The aim is that, as we become familiar with these three sets of concepts and apply them in everyday contexts, we become better at analyzing and evaluating statements and arguments in ordinary situations.

Like the classical approach, this approach presupposes that reasoning is primarily a conscious activity, and that enhancing our skills will improve our reasoning. This means that it still lacks the ability to address the empirical evidence that many of our reasoning errors cannot be consciously detected or corrected. It differs from the classical approach in that it gives the technical tools of logic a much less prominent role and places emphasis on a broader, and perhaps more intuitive, set of conceptual tools. Learning these concepts and learning to apply them still require a great deal of time and energy, though perhaps less than learning formal and informal logic. And these concepts are easier to translate into disciplines outside philosophy. Students of history, psychology, and economics can more easily recognize the relevance of asking questions about an author’s point of view and assumptions than of determining whether the author is making a deductive or an inductive argument. The question, then, is whether this approach improves our ability to think better than the classical approach does.

c. Other Approaches

A third approach that is becoming popular is to focus on the ways we commonly reason poorly and then attempt to correct them. This can be called the Rationality Approach, and it takes seriously the empirical evidence (§5c) that many of our errors in reasoning are not due to a lack of conscious competence with technical skills or misusing those skills, but are due to subconscious dispositions to ignore or dismiss relevant information or to rely on irrelevant information.

One way to pursue this approach is to focus on beliefs that are statistically rare or “weird.” These include beliefs of fringe groups, such as conspiracy theorists, religious extremists, paranormal psychologists, and proponents of New Age metaphysics (see Gilovich, 1992; Vaughn and Schick, 2010; Coady, 2012). If we recognize the sorts of tendencies that lead to these controversial beliefs, we might be able to recognize and avoid similar tendencies in our own reasoning about less extreme beliefs, such as beliefs about financial investing, how statistics are used to justify business decisions, and beliefs about which public policies to vote for.

Another way to pursue this approach is to focus directly on the research on error, that is, on the ordinary beliefs about which psychologists and behavioral economists have discovered we reason poorly, and to explore ways of changing how we frame decisions about what to believe (see Nisbett and Ross, 1980; Gilovich, 1992; Ariely, 2008; Kahneman, 2011). For example, in one study, psychologists found that judges issue more convictions just before lunch and the end of the day than in the morning or just after lunch (Danziger et al., 2011). Given that dockets do not typically organize cases from less significant crimes to more significant crimes, this evidence suggests that something as irrelevant as hunger can bias judicial decisions. Even though hunger has nothing to do with the truth of a belief, knowing that it can affect how we evaluate a belief can help us avoid that effect. This study might suggest something as simple as that we should avoid being hungry when making important decisions. The more we learn about the ways our brains use irrelevant information, the better we can organize our reasoning to avoid these mistakes. For more on how decisions can be improved by restructuring them, see Thaler and Sunstein, 2009.

A fourth approach is to take more seriously the role that language plays in our reasoning. Arguments involve complex patterns of expression, and we have already seen how vagueness and ambiguity can undermine good reasoning (§1). The pragma-dialectics approach (or pragma-dialectical theory) is the view that the quality of an argument is not solely or even primarily a matter of its logical structure, but is more fundamentally a matter of whether it is a form of reasonable discourse (Van Eemeren and Grootendorst, 1992). The proponents of this view contend that, “The study of argumentation should … be construed as a special branch of linguistic pragmatics in which descriptive and normative perspectives on argumentative discourse are methodically integrated” (Van Eemeren and Grootendorst, 1995: 130).

The pragma-dialectics approach is a highly technical approach that uses insights from speech act theory, H. P. Grice’s philosophy of language, and the study of discourse analysis. Its use, therefore, requires a great deal of background in philosophy and linguistics. It has an advantage over other approaches in that it highlights social and practical dimensions of arguments that other approaches largely ignore. For example, argument is often public (external), in that it creates an opportunity for opposition, which influences people’s motives and psychological attitudes toward their arguments. Argument is also social in that it is part of a discourse in which two or more people try to arrive at an agreement. Argument is also functional; it aims at a resolution that can only be accommodated by addressing all the aspects of disagreement or anticipated disagreement, which can include public and social elements. Argument also has a rhetorical role (dialectical) in that it is aimed at actually convincing others, which may have different requirements than simply identifying the conditions under which they should be convinced.

These four approaches are not mutually exclusive. All of them presuppose, for example, the importance of inductive reasoning and scientific evidence. Their distinctions turn largely on which aspects of statements and arguments should take precedence in the critical thinking process and on what information will help us have better beliefs.

8. References and Further Reading

  • Ariely, Dan. 2008. Predictably Irrational: The Hidden Forces that Shape Our Decisions. New York: Harper Perennial.
  • Ariely, Dan. 2010. The Upside of Irrationality. New York: Harper Perennial.
  • Ariely, Dan. 2012. The (Honest) Truth about Dishonesty. New York: Harper Perennial.
  • Aristotle. 2002. Categories and De Interpretatione, J. L. Ackrill, ed. Oxford: Oxford University Press.
  • Clifford, W. K. 2010. “The Ethics of Belief.” In Nils Ch. Rauhut and Robert Bass, eds., Readings on the Ultimate Questions: An Introduction to Philosophy, 3rd ed. Boston: Prentice Hall, 351-356.
  • Chomsky, Noam. 1957/2002. Syntactic Structures. Berlin: Mouton de Gruyter.
  • Coady, David. 2012. What To Believe Now: Applying Epistemology to Contemporary Issues. Malden, MA: Wiley-Blackwell.
  • Danziger, Shai, Jonathan Levav, and Liora Avnaim-Pesso. 2011. “Extraneous Factors in Judicial Decisions.” Proceedings of the National Academy of Sciences of the United States of America. Vol. 108, No. 17, 6889-6892. doi: 10.1073/pnas.1018033108.
  • Foresman, Galen, Peter Fosl, and Jamie Carlin Watson. 2017. The Critical Thinking Toolkit. Malden, MA: Wiley-Blackwell.
  • Fogelin, Robert J. and Walter Sinnott-Armstrong. 2009. Understanding Arguments: An Introduction to Informal Logic, 8th ed. Belmont, CA: Wadsworth Cengage Learning.
  • Gigerenzer, Gerd. 2003. Calculated Risks: How To Know When Numbers Deceive You. New York: Simon and Schuster.
  • Gigerenzer, Gerd, Peter Todd, and the ABC Research Group. 2000. Simple Heuristics that Make Us Smart. Oxford University Press.
  • Gilovich, Thomas. 1992. How We Know What Isn’t So. New York: Free Press.
  • James, William. 2010. “The Will to Believe.” In Nils Ch. Rauhut and Robert Bass, eds., Readings on the Ultimate Questions: An Introduction to Philosophy, 3rd ed. Boston: Prentice Hall, 356-364.
  • Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
  • Lewis, David. 1986. On the Plurality of Worlds. Oxford: Blackwell.
  • Lipton, Peter. 2004. Inference to the Best Explanation, 2nd ed. London: Routledge.
  • Maslow, Abraham. 1966. The Psychology of Science: A Reconnaissance. New York: Harper & Row.
  • Mill, John Stuart. 2011. A System of Logic, Ratiocinative and Inductive. New York: Cambridge University Press.
  • Nisbett, Richard and Lee Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice Hall.
  • Paul, Richard. 2012. Critical Thinking: What Every Person Needs to Survive in a Rapidly Changing World. Tomales, CA: The Foundation for Critical Thinking.
  • Paul, Richard and Linda Elder. 2006. The Miniature Guide to Critical Thinking Concepts and Tools, 4th ed. Tomales, CA: The Foundation for Critical Thinking.
  • Plantinga, Alvin. 1974. The Nature of Necessity. Oxford: Clarendon Press.
  • Prior, Arthur. 1957. Time and Modality. Oxford, UK: Oxford University Press.
  • Prior, Arthur. 1967. Past, Present and Future. Oxford, UK: Oxford University Press.
  • Prior, Arthur. 1968. Papers on Time and Tense. Oxford, UK: Oxford University Press.
  • Quine, W. V. O. and J. S. Ullian. 1978. The Web of Belief, 2nd ed. McGraw-Hill.
  • Russell, Bertrand. 1940/1996. An Inquiry into Meaning and Truth, 2nd ed. London: Routledge.
  • Thaler, Richard and Cass Sunstein. 2009. Nudge: Improving Decisions about Health, Wealth, and Happiness. New York: Penguin Books.
  • van Eemeren, Frans H. and Rob Grootendorst. 1992. Argumentation, Communication, and Fallacies: A Pragma-Dialectical Perspective. London: Routledge.
  • van Eemeren, Frans H. and Rob Grootendorst. 1995. “The Pragma-Dialectical Approach to Fallacies.” In Hans V. Hansen and Robert C. Pinto, eds. Fallacies: Classical and Contemporary Readings. Penn State University Press, 130-144.
  • Vaughn, Lewis and Theodore Schick. 2010. How To Think About Weird Things: Critical Thinking for a New Age, 6th ed. McGraw-Hill.
  • Walton, Douglas. 2008. Informal Logic: A Pragmatic Approach, 2nd ed. New York: Cambridge University Press.
  • Watson, Jamie Carlin and Robert Arp. 2015. Critical Thinking: An Introduction to Reasoning Well, 2nd ed. London: Bloomsbury Academic.
  • Weston, Anthony. 2008. A Rulebook for Arguments, 4th ed. Indianapolis: Hackett.
  • Zadeh, Lotfi. 1965. “Fuzzy Sets and Systems.” In J. Fox, ed., System Theory. Brooklyn, NY: Polytechnic Press, 29-39.


Author Information

Jamie Carlin Watson
Email: jamie.c.watson@gmail.com
University of Arkansas for Medical Sciences
U. S. A.