Abstractionism

Abstractionism is a philosophical account of the ontology of mathematics according to which abstract objects are grounded in a process of abstraction (although not every view that places abstraction front and center is a version of abstractionism, as we shall see). Abstraction involves arranging a domain of underlying objects into classes and then identifying an object corresponding to each class—the abstract of that class. While the idea that the ontology of mathematics is obtained, in some sense, via abstraction has its origin in ancient Greek thought, the idea found new life, and a new technical foundation, in the late 19th century due to pioneering work by Gottlob Frege. Although Frege’s project ultimately failed, his central ideas were reborn in the late 20th century as a view known as neo-logicism.

This article surveys abstractionism in five stages. §1 looks at the pre-19th century history of abstraction and its role in the philosophy of mathematics. §2 takes some time to carefully articulate what, exactly, abstractionism is, and to provide a detailed description of the way that abstraction is formalized, within abstractionist philosophy of mathematics, using logical formulas known as abstraction principles. §3 looks at the first fully worked out version of abstractionism—Frege’s logicist reconstruction of mathematics—and explores the various challenges that such a view faces. The section also examines the fatal flaw in Frege’s development of this view: Russell’s paradox. §4 presents a survey of the 20th century neo-logicist revival of Frege’s abstractionist program, due to Crispin Wright and Bob Hale, and carefully explicates the way in which this new version of an old idea deals with various puzzles and problems. Finally, §5 takes a brief tour of a re-development of Frege’s central ideas: Øystein Linnebo’s dynamic abstractionist account.

Table of Contents

  1. A Brief History of Abstractionism
  2. Defining Abstractionism
  3. Frege’s Logicism
    1. Hume’s Principle and Frege’s Theorem
    2. Hume’s Principle and the Caesar Problem
    3. Hume’s Principle and Basic Law V
    4. Basic Law V and Russell’s Paradox
  4. Neo-Logicism
    1. Neo-Logicism and Comprehension
    2. Neo-Logicism and the Bad Company Problem
    3. Extending Neo-Logicism Beyond Arithmetic
    4. Neo-Logicism and the Caesar Problem
  5. Dynamic Abstraction
  6. References and Further Reading

1. A Brief History of Abstractionism

Abstractionism, very broadly put, is a philosophical account of the epistemology and metaphysics of mathematics (or of abstract objects more generally) according to which the nature of, and our knowledge of, the subject matter of mathematics is grounded in abstraction. More is said about the sort of abstraction that is at issue in abstractionist accounts of the foundations of mathematics below (and, in particular, more about why not every view that involves abstraction is an instance of abstractionism in the sense of the term used here), but first, something needs to be said about what, exactly, abstraction is.

Before doing so, a bit of mathematical machinery is required. Given a domain of entities \Delta (these could be objects, or properties, or some other sort of “thing”), one says that a relation R is an equivalence relation on \Delta if and only if the following three conditions are met:

  1. R is reflexive (on \Delta):

For any \alpha in \Delta, R(\alpha, \alpha).

  2. R is symmetric (on \Delta):

For any \alpha, \beta in \Delta, if R(\alpha, \beta) then R(\beta, \alpha).

  3. R is transitive (on \Delta):

For any \alpha, \beta, \delta in \Delta, if R(\alpha, \beta) and R(\beta, \delta), then R(\alpha, \delta).

Intuitively, an equivalence relation R partitions a collection of entities \Delta into sub-collections X_1, X_2, \dots, where each X_i is a subset of \Delta; the X_is are exclusive (no entity in \Delta is a member of more than one of the classes X_1, X_2, \dots); the X_is are exhaustive (every entity in \Delta is in one of the classes X_1, X_2, \dots); and an object x in one of the sub-collections X_i is related by R to every other object in that same sub-collection, and is related by R to no other objects in \Delta. The classes X_i are known as the equivalence classes generated by R on \Delta.
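The clauses above can be made concrete in a short program. The following sketch (our own illustration, not from the text; the sample relation, sameness of remainder mod 3, is an arbitrary choice) partitions a finite domain into the equivalence classes generated by a given relation:

```python
# Illustrative sketch: partitioning a domain into the equivalence classes
# generated by a relation R. The sample relation ("same remainder mod 3")
# is an arbitrary choice for demonstration.
def equivalence_classes(domain, related):
    """Return the partition of `domain` induced by the equivalence
    relation `related` (assumed reflexive, symmetric, and transitive)."""
    classes = []
    for x in domain:
        for cls in classes:
            if related(x, cls[0]):   # R relates x to this class's representative
                cls.append(x)
                break
        else:
            classes.append([x])      # x begins a new equivalence class
    return classes

same_mod_3 = lambda a, b: a % 3 == b % 3
partition = equivalence_classes(range(7), same_mod_3)
# The classes are exclusive and exhaustive: each element of the domain
# appears in exactly one of them.
```

Because `related` is an equivalence relation, comparing each element against a single representative of each class suffices.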

Abstraction is a process that begins via the identification of an equivalence relation on a class of entities—that is, a class of objects (or properties, or other sorts of “thing”) is partitioned into equivalence classes based on some shared trait. To make things concrete, let us assume that the class with which we begin is a collection of medium-sized physical objects, and let us divide this class into sub-classes of objects based on whether they are the same size (that is, the equivalence relation in question is sameness of size). We then (in some sense) abstract away the particular features of each object that distinguish it from the other objects in the same equivalence class, identifying (or creating?) an object (the abstract) corresponding to each equivalence class (and hence corresponding to or codifying the trait had in common by all and only the members of that equivalence class). Thus, in our example, we abstract away all properties, such as color, weight, or surface texture, that vary amongst objects in the same equivalence class. The novel objects arrived at by abstraction—sizes—capture what members of each equivalence class have in common, and thus we obtain a distinct size corresponding to each equivalence class of same-sized physical objects.

Discussions of abstraction, and of the nature of the abstracts so obtained, can be found throughout the history of Western philosophy, going back to Aristotle’s Prior Analytics (Aristotle 1975). Another well-discussed ancient example is provided by (one way of interpreting) Definition 5 of Book V of Euclid’s Elements. In that definition, Euclid introduces the notion of ratio as follows:

Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth, when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order. (Euclid 2012, V.5)

Simply put, Euclid is introducing a complicated equivalence relation:

being in the same ratio

that holds (or not) between pairs of magnitudes. Two pairs of magnitudes (a, b) and (c, d) stand in the being in the same ratio relation if and only if, for any numbers e and f we have:

a \times e > b \times f if and only if c \times e > d \times f;

a \times e = b \times f if and only if c \times e = d \times f;

a \times e < b \times f if and only if c \times e < d \times f.

Taken literally, it is not clear that Euclid’s Definition 5 is a genuine instance of the process of abstraction, since Euclid does not seem to explicitly take the final step: introducing individual objects—that is, ratios—to “stand for” the relationship that holds between pairs of magnitudes that instantiate the being in the same ratio relation. But, to take that final step, we need merely introduce the following (somewhat more modern) notation:

a : b = c : d

where a : b = c : d if and only if a and b stand in the same ratio to one another as c and d. If we take the logical form of this equation at face value—that is, as asserting the identity of the ratio a : b and the ratio c : d—then we now have our new objects, ratios, and the process of abstraction is complete.

We can (somewhat anachronistically, but nevertheless helpfully) reformulate this reconstruction of the abstraction involved in the introduction of ratios as the following Ratio Principle:

    \begin{align*}{\sf RP}: (\forall a)(\forall b)(\forall c)(\forall d)[a : b = c : d \leftrightarrow (\forall e)(\forall f)(&(a \times e > b \times f \leftrightarrow c \times e > d \times f) \\ \land \ &(a \times e = b \times f \leftrightarrow c \times e = d \times f) \\ \land \ &(a \times e < b \times f \leftrightarrow c \times e < d \times f))] \end{align*}

The new objects, ratios, are introduced in the identity statement on the left-hand side of the biconditional, and their behavior (in particular, identity conditions for ratios) is governed by the equivalence relation occurring on the right-hand side of the biconditional.
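The right-hand side of RP can be checked mechanically for small positive integer magnitudes. The sketch below (our illustration, not part of the historical material) tests the Eudoxus-style condition over a finite range of multipliers, which suffices for small inputs; for positive integers the full condition is equivalent to cross-multiplication:

```python
# Illustrative sketch (not from the text): RP's right-hand side for positive
# integer magnitudes, checked over a finite range of multipliers e and f.
# For small inputs this bounded check agrees with the full condition.
def same_ratio_eudoxus(a, b, c, d, bound=50):
    """Test the Eudoxus-style condition defining 'being in the same ratio'."""
    for e in range(1, bound):
        for f in range(1, bound):
            if (a * e > b * f) != (c * e > d * f):
                return False
            if (a * e == b * f) != (c * e == d * f):
                return False
            if (a * e < b * f) != (c * e < d * f):
                return False
    return True

def same_ratio(a, b, c, d):
    """For positive integers the condition reduces to cross-multiplication."""
    return a * d == b * c
```

For instance, the pairs (1, 2) and (2, 4) pass every comparison, while (1, 2) and (2, 3) are separated by a witnessing choice of e and f.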

As this discussion of Euclid illustrates, it is often unclear (especially prior to the late 19th century, see below) whether a particular definition or discussion is meant to be an application of abstraction, since it is unclear which of the following is intended:

  1. The definition or discussion merely introduces a new relation that holds between various sorts of object (for example, it introduces the relation being in the same ratio), but does nothing more.
  2. The definition or discussion is meant to explicate the relationships that hold between previously identified and understood objects (for example, it precisely explains when two ratios are identical, where it is assumed that we already know, in some sense, what ratios are).
  3. The definition or discussion is meant to introduce a new sort of object defined in terms of a relation that holds between objects of a distinct, and previously understood, sort (for example, it introduces ratios as novel objects obtained via application of the process of abstraction to the relation being in the same ratio).

Only the last of these counts as abstraction, properly understood (at least, in terms of the understanding of abstraction mobilized in the family of views known as abstractionism).

With regard to those cases that are explicit applications of abstraction—that is, cases where an equivalence relation on a previously understood class of entities is used to introduce new objects (abstracts) corresponding to the resulting equivalence classes—there are three distinct ways that the objects so introduced can be understood:

  1. The abstract corresponding to each equivalence class is identified with a canonical representative member of that equivalence class (for example, we identify the ratio 1 : 2 with the particular pair of magnitudes ⟨1 meter, 2 meters⟩).
  2. The abstract corresponding to each equivalence class is identified with that equivalence class (for example, we identify the ratio 1 : 2 with the equivalence class of pairs of magnitudes that are in the same ratio as ⟨1 meter, 2 meters⟩).
  3. The abstract corresponding to each equivalence class is taken to be a novel abstract.

Historically, uses of abstraction within number theory have taken the first route, since the abstract corresponding to an equivalence class of natural numbers (or of any sub-collection of a collection of mathematical objects with a distinguished well-ordering) can always be taken to be the least number in that equivalence class. Somewhat surprisingly, perhaps, the second option—the identification of abstracts with the corresponding equivalence classes themselves—was somewhat unusual before Frege’s work. The fact that it remains unusual after Frege’s work, however, is less surprising, since the dangers inherent in this method were made clear by the set-theoretic paradoxes that plagued his work. The third option—taking the abstracts to be novel abstract objects—was relatively common within geometry by the 19th century, and it is this method that is central to the philosophical view called neo-logicism, discussed in §4 below.

This brief summary of the role of abstraction in the history of mathematics barely scratches the surface, of course, and the reader interested in a more detailed presentation of the history of abstraction prior to Frege’s work is encouraged to consult the early chapters of the excellent (Mancosu 2016). But it is enough for our purposes, since our primary target is not abstraction in general, but its use in abstractionist approaches to the philosophy of mathematics (and, as noted earlier, of abstract objects more generally).

2. Defining Abstractionism

Abstractionism, as we will understand the term here, is an account of the foundations of mathematics that involves the use of abstraction principles (or of principles equivalent to, or derived from, abstraction principles, see the discussion of dynamic abstraction in §5 below). An abstraction principle is a formula of the form:

{\sf A}_E : (\forall \alpha)(\forall \beta)[@(\alpha) = @(\beta) \leftrightarrow E(\alpha, \beta)]

where \alpha and \beta range over the same type (typically objects, concepts, n-ary relations, or sequences of such), E is an equivalence relation on entities of that type, and @ is a function from that type to objects. “@” is the abstraction operator, and terms of the form “@(\alpha)” are abstraction terms. The central idea underlying all forms of abstractionism is that abstraction principles serve to introduce mathematical concepts by providing identity conditions for the abstract objects falling under those concepts (that is, objects in the range of @) in terms of the equivalence relation E(\alpha, \beta).

Since this all might seem a bit esoteric at first glance, a few examples will be useful. One of the most well-discussed abstraction principles—one that we will return to when discussing the Caesar problem in §3 below—is the Directions Principle:

{\sf DP}: (\forall l_1)(\forall l_2)[d(l_1) = d(l_2) \leftrightarrow l_1 \parallel l_2]

where l_1 and l_2 are variables ranging over (straight) lines, x \parallel y is the parallelism relation, and d(\xi) is an abstraction operator mapping lines to their directions. Thus, a bit more informally, this principle says something like:

For any two lines l_1 and l_2, the direction of l_1 is identical to the direction of l_2 if and only if l_1 is parallel to l_2.

On an abstractionist reading, the Directions Principle introduces the concept direction, and it provides access to new objects falling under this concept—that is, directions—via abstraction. We partition the class of straight lines into equivalence classes, where each equivalence class is a collection of parallel lines (and any line parallel to a line in one of these classes is itself in that class), and then we obtain new objects—directions—by applying the abstraction operator d(x) to a line, resulting in the direction of that line (which will be the same object as the direction of any other line in the same equivalence class of parallel lines).
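A toy model may help fix ideas. In the sketch below (our own illustration; lines are represented as slope–intercept pairs, and vertical lines are omitted for brevity), the abstraction operator d is modeled by sending each line to its slope, so that the Directions Principle holds by construction:

```python
# Toy model (our own, for illustration): lines represented as
# (slope, intercept) pairs; vertical lines are omitted for brevity.
def parallel(l1, l2):
    return l1[0] == l2[0]      # parallel lines share a slope

def direction(line):
    """A model of the abstraction operator d: each line is sent to an
    object (here, its slope) shared by exactly the lines parallel to it."""
    return line[0]

l1, l2, l3 = (2.0, 1.0), (2.0, -5.0), (0.5, 1.0)
# DP holds in this model: d(l1) = d(l2) if and only if l1 is parallel to l2.
assert (direction(l1) == direction(l2)) == parallel(l1, l2)
assert (direction(l1) == direction(l3)) == parallel(l1, l3)
```

Note that modeling a direction by a slope is an artifact of the sketch: it amounts to choosing a canonical label for each equivalence class, whereas the abstractionist takes directions to be genuinely new objects.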

It should now be apparent that the Directions Principle is not the first abstraction principle that we have seen in this essay: the Ratio Principle is also an abstraction principle which serves, on an abstractionist reading, to introduce the concept ratio and whose abstraction operator x : y provides us with new objects falling under this concept.

The Directions Principle involves a unary objectual abstraction operator d(x): that is, the abstraction operator in the Directions Principle maps individual objects (that is, individual lines) to their abstracts (that is, their directions). The Ratio Principle is a bit more complicated. It involves a binary objectual abstraction operator: the abstraction operator maps pairs of objects (that is, pairs of magnitudes) to their abstracts (that is, the ratio of that pair). But the Directions Principle and the Ratio Principle have this much in common: the argument or arguments of the abstraction operator are objectual—they are objects.

It turns out, however, that much of the philosophical discussion of abstraction principles has focused on a different, and much more powerful, kind of abstraction principle—conceptual abstraction principles. In a conceptual abstraction principle, the abstraction operator takes, not an object or sequence of objects, but a concept (or relation, or a sequence of concepts and relations, and so forth) as its argument. Here, we will be using the term “concept” in the Fregean sense, where concepts are akin to properties and are whatever it is that second-order unary variables range over, since, amongst other reasons, this is the terminology used by most of the philosophical literature on abstractionism. The reader uncomfortable with this usage can uniformly substitute “property” for “concept” throughout the remainder of this article.

Thus, a conceptual abstraction principle requires higher-order logic for its formulation—for a comprehensive treatment of second- and higher-order logic, see (Shapiro 1991). The simplest kind of conceptual abstraction principle, and the kind to which we will restrict our attention in the remainder of this article, is the unary conceptual abstraction principle, of the form:

{\sf A}_E : (\forall X)(\forall Y)[@(X) = @(Y) \leftrightarrow E(X, Y)]

where X and Y are second-order variables ranging over unary concepts, and E(X, Y) is an equivalence relation on concepts.

The two most well-known and well-studied conceptual abstraction principles are Hume’s Principle and Basic Law V. Hume’s Principle is:

{\sf HP}: (\forall X)(\forall Y)[\#(X) = \#(Y) \leftrightarrow X \approx Y]

where X \approx Y abbreviates the purely logical second-order claim that there is a one-to-one onto mapping from X to Y, that is:

    \begin{align*}F \approx G =_{df} (\exists R)[&(\forall x)(F(x) \rightarrow (\exists ! y)(R(x,y) \land G(y))) \\ \land \ &(\forall x)(G(x) \rightarrow (\exists ! y)(F(y) \land R(y, x)))]\end{align*}
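For concepts with finite extensions, the defined relation can be exhibited concretely: a witnessing one-to-one onto relation exists exactly when the cardinalities match. A brief sketch (our illustration, modeling concepts by finite sets of the objects falling under them):

```python
# Illustrative sketch: for finite extensions, X is equinumerous with Y
# exactly when the cardinalities match, and a witnessing relation R can be
# exhibited by pairing the elements off in some fixed order.
def bijection_witness(X, Y):
    """Return an explicit one-to-one onto mapping from X to Y, or None."""
    xs, ys = sorted(X), sorted(Y)
    if len(xs) != len(ys):
        return None                 # no one-to-one onto mapping can exist
    return dict(zip(xs, ys))        # a witnessing relation R

def equinumerous(X, Y):
    return bijection_witness(X, Y) is not None
```

The second-order definition asserts only that some such relation R exists; for finite sets, any pairing-off in a fixed order supplies a witness.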

Hume’s Principle introduces the concept cardinal number and the cardinal numbers that fall under that concept. Basic Law V is:

{\sf BLV}: (\forall X)(\forall Y)[\S(X) = \S(Y) \leftrightarrow (\forall z)(X(z) \leftrightarrow Y(z))]

which (purports to) introduce the concept set or extension. As we shall see in the next section (and as is hinted in the parenthetical comment in the previous sentence), one of these abstraction principles does a decidedly better job than the other.

As already noted, although the process of abstraction has been a central philosophical concern since philosophers began thinking about mathematics, abstractionism only arose once abstraction principles were introduced. And, although he was not the first to use them—again, see (Mancosu 2016)—it is in the work of Gottlob Frege in the late 19th century that abstraction principles first became a central concern in the philosophy of mathematics, and Frege’s logicism is the first defense of a full-blown version of abstractionism. Thus, we now turn to Frege.

3. Frege’s Logicism

Frege’s version of abstractionism is (appropriately enough, as we shall see) known as logicism. The primary motivation behind the project was to defend arithmetic and real and complex analysis (but interestingly, not geometry) from Kant’s charge that these areas of mathematics were a priori yet synthetic (Kant 1787/1999). The bulk of Frege’s defense of logicism occurs in his three great books, which can be summarized as follows:

  • Begriffsschrift, or Concept Script (Frege 1879/1972): Frege invents modern higher-order logic.
  • Die Grundlagen der Arithmetik, or The Foundations of Arithmetic (Frege 1884/1980): Frege criticizes popular accounts of the nature of mathematics, and provides an informal exposition of his logicism.
  • Grundgesetze der Arithmetik, or Basic Laws of Arithmetic (Frege 1893/1903/2013): Frege further develops the philosophical details of his logicism, and carries out the formal derivations of the laws of arithmetic in an extension of the logic of Begriffsschrift.

Here we will examine a reconstruction of Frege’s logicism based on both the Grundlagen and Grundgesetze. It should be noted, however, that there are subtle differences between the project informally described in the Grundlagen and the project carried out formally in Grundgesetze, differences we will for the most part ignore here. For discussion of some of these differences, see (Heck 2013) and (Cook & Ebert 2016). We will also carry out this reconstruction in contemporary logical formalism, but it should also be noted that Frege’s logical system differs from contemporary higher-order logic in a number of crucial respects. For discussion of some of these differences, see (Heck 2013) and (Cook 2013).

As noted, Frege’s main goal was to argue that arithmetic was analytic. Frege’s understanding of the analytic/synthetic distinction, much like his account of the a priori/a posteriori distinction, has a decidedly epistemic flavor:

Now these distinctions between a priori and a posteriori, synthetic and analytic, concern not the content of the judgement but the justification for making the judgement. Where there is no such justification, the possibility of drawing the distinctions vanishes. An a priori error is thus as complete a nonsense as, say, a blue concept. When a proposition is called a posteriori or analytic in my sense, this is not a judgement about the conditions, psychological, physiological and physical, which have made it possible to form the content of the proposition in our consciousness; nor is it a judgement about the way in which some other man has come, perhaps erroneously, to believe it true; rather, it is a judgement about the ultimate ground upon which rests the justification for holding it to be true. (Frege 1884/1980, §3)

In short, on Frege’s view, whether or not a claim is analytic or synthetic, a priori or a posteriori, depends on the kind of justification that it would be appropriate to give for that judgment (or judgments of that kind). Frege fills in the details regarding exactly what sorts of justification are required for analyticity and aprioricity later in the same section:

The problem becomes, in fact, that of finding the proof of the proposition, and of following it up right back to the primitive truths. If, in carrying out this process, we come only on general logical laws and on definitions, then the truth is an analytic one, bearing in mind that we must take account also of all propositions upon which the admissibility of any of the definitions depends. If, however, it is impossible to give the proof without making use of truths which are not of a general logical nature, but belong to some special science, then the proposition is a synthetic one. For a truth to be a posteriori, it must be impossible to construct a proof of it without including an appeal to facts, that is, to truths which cannot be proved and are not general, since they contain assertions about particular objects. But if, on the contrary, its proof can be derived exclusively from general laws, which themselves neither need nor admit of proof, then the truth is a priori. (Frege 1884/1980, §3)

Thus, for Frege, a judgment is analytic if and only if it has a proof that depends solely upon logical laws and definitions, and a judgment is a priori if and only if it has a proof that depends only upon self-evident, general truths. All logical laws and definitions are self-evident general truths, but not vice versa. This explains the fact mentioned earlier, that Frege did not think his logicism applicable to geometry. For Frege, geometry relied on self-evident general truths about the nature of space, but these truths were neither logical truths nor definitions—hence geometry was a priori, but not analytic.

Thus, Frege’s strategy for refuting Kant’s claim that arithmetic was synthetic was simple: logic (and anything derivable from logic plus definitions) is analytic, hence, if we reduce arithmetic to logic, then we will have shown that arithmetic is analytic after all (and similarly for real and complex analysis, and so forth).

Before digging into the details of Frege’s attempt to achieve this reduction of arithmetic to logic, however, a few points of clarification are worth making. First, as we shall see below, not all versions of abstractionism are versions of logicism, since not all versions of abstractionism will take abstraction principles to be truths of logic. The converse fails as well: not all versions of logicism are versions of abstractionism. (Tennant 1987) contains a fascinating constructivist, proof-theoretically oriented attempt to reduce arithmetic to logic that, although it involves operators that are typeset similarly to our abstraction operator \#(X), nevertheless involves no abstraction principles. Second, Frege’s actual primary target was neither to show that arithmetic was logical nor to show that it could be provided a foundation via abstraction generally or via abstraction principles in particular. His primary goal was to show that arithmetic was, contra Kant, analytic, and both the use of abstraction principles and the defense of these principles as logical truths were merely parts of this project. These distinctions are important to note, not only because they are, after all, important, but also because the terminology for the various views falling under the umbrella of abstractionism is not always straightforwardly accurate (for example, neo-logicism is not a “new” version of logicism).

The first half of Grundlagen is devoted to Frege’s unsparing refutation of a number of then-current views regarding the nature of mathematical entities and the means by which we obtain mathematical knowledge, including the views put forth by Leibniz, Mill, and Kant. While these criticisms are both entertaining and, for the most part, compelling, it is Frege’s brief comments on Hume that are most relevant for our purposes. In his discussion of Hume, Frege misattributes a principle to him that becomes central both to his own project and to the later neo-logicist programs discussed below—the abstraction principle known (rather misleadingly) as Hume’s Principle.

a. Hume’s Principle and Frege’s Theorem

Frege begins by noting that Hume’s Principle looks rather promising, in many ways, as a potential definition of the concept cardinal number. First, despite the fact that this abstraction principle is likely not what Hume had in mind when he wrote that:

When two numbers are so combined as that the one has always an unit answering to every unit of the other we pronounce them equal; and it is for want of such a standard of equality in extension that geometry can scarce be esteemed a perfect and infallible science. (Hume 1888, I.iii.1)

Hume’s Principle nevertheless seems to codify a plausible idea regarding the nature of cardinal number: two numbers n and m are the same if and only if, for any two concepts X and Y where the number of Xs is n and the number of Ys is m, there is a one-one onto mapping from the Xs to the Ys. Second, and much more importantly for our purposes, Hume’s Principle, plus some explicit definitions formulated in terms of higher-order logic plus the abstraction operator \#, allows us to prove all of the second-order axioms of Peano Arithmetic:

Dedekind-Peano Axioms:

  1. \mathbb{N}(0)
  2. \neg(\exists x)(P(x, 0))
  3. (\forall x)(\mathbb{N}(x) \rightarrow (\exists y)(\mathbb{N}(y) \land P(x, y)))
  4. (\forall x)(\forall y)(\forall z)((P(x, z) \land P(y, z)) \rightarrow x = y)
  5. (\forall F)[(F(0) \land (\forall x)(\forall y)((F(x) \land P(x, y)) \rightarrow F(y))) \rightarrow (\forall x)(\mathbb{N}(x) \rightarrow F(x))]

We can express the Peano Axioms a bit more informally as:

  1. Zero is a natural number.
  2. No natural number is the predecessor of zero.
  3. Every natural number is the predecessor of some natural number.
  4. If two natural numbers are the predecessor of the same natural number, then they are identical.
  5. Any property that holds of zero, and holds of a natural number if it holds of the predecessor of that natural number, holds of all natural numbers.

The definitions of zero, the predecessor relation, and the natural number predicate are of critical importance to Frege’s reconstruction of arithmetic. The definitions of zero and of the predecessor relation P(x, y) are relatively simple. Zero is just the cardinal number of the empty concept:

0 =_{df} \#(x \neq x)

The predecessor relation is defined as:

P(a, b) =_{df} (\exists F)(\exists y)[b = \#(F(x)) \land F(y) \land a = \#(F(x) \land x \neq y)]

Thus, P holds between two objects a and b (that is, a is the predecessor of b) just in case there is some concept F and object y falling under F such that b is the cardinal number of F (that is, it is the number of Fs) and a is the cardinal number of the concept that holds of exactly the objects that F holds of, except for y (that is, it is the number of the Fs that are not y).
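The definition can be instantiated concretely for concepts with finite extensions. In the sketch below (our illustration; the cardinal number of a finite extension is modeled simply by its size, an assumption of the sketch, since in the text \# is primitive), P(2, 3) holds because removing any one element from a three-element extension leaves a two-element one:

```python
# Illustrative sketch, finite case only: P(a, b) holds when some concept F
# and some y falling under F make b the number of F and a the number of
# "F minus y". Cardinal numbers of finite extensions are modeled by len()
# (an assumption of the sketch; in the text, # is primitive).
def predecessor(a, b, F):
    return any(len(F) == b and len(F - {y}) == a for y in F)

# Removing any one element of a three-element extension leaves two:
assert predecessor(2, 3, {'x', 'y', 'z'})
assert not predecessor(1, 3, {'x', 'y', 'z'})
```

The existential quantifiers in the official definition range over all concepts and objects; here a single witnessing extension F is supplied by hand.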

Constructing the definition of the natural number concept \mathbb{N} is somewhat more complicated, however. First, we need to define the notion of a concept F(x) being hereditary on a relation R(x, y):

{\sf Her}[F(x), R(x, y)] =_{df} (\forall x)(\forall y)((F(x) \land R(x, y)) \rightarrow F(y))

Intuitively, F(x) is hereditary on R(x, y) if and only if, whenever we have two objects a and b, if a falls under the concept F(x), and a is related by R to b, then b must fall under F(x) as well.

Next, Frege uses hereditariness to define the strong ancestral of a relation R(x, y):

R^*(a, b) =_{df} (\forall F)[({\sf Her}[F(x), R(x, y)] \land (\forall x)(R(a, x) \rightarrow F(x))) \rightarrow F(b)]

The definition of the ancestral is imposing, but the idea is straightforward: given a relation R, the strong ancestral of R is a second relation R^* such that R^* holds between two objects a and b if and only if there is a sequence of objects:

a, c_1, c_2, \dots, c_n, b

such that:

R(a, c_1), R(c_1, c_2), R(c_2, c_3), \dots, R(c_{n-1}, c_n), R(c_n, b)

This operation is called the ancestral for a reason: the relation that holds between oneself and one’s ancestors is the ancestral of the parenthood relation.

For Frege’s purposes, a slightly weaker notion—the weak ancestral—turns out to be a bit more convenient:

R^{*=}(a, b) =_{df} R^*(a, b) \lor (a = b)

The weak ancestral of a relation R holds between two objects a and b  just in case either the strong ancestral does, or a and b are identical. Returning to our intuitive genealogical example, the difference between the weak ancestral and the strong ancestral of the parenthood relation is that the weak ancestral holds between any person and themselves. Thus, it is the strong ancestral that most closely corresponds to the everyday notion of ancestor, since we do not usually say that someone is their own ancestor.
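For finite relations, the strong ancestral is just the transitive closure, and the weak ancestral adds the identity pairs over the domain. A short sketch (our illustration, with relations modeled as sets of ordered pairs):

```python
# Illustrative sketch: for a finite relation R (a set of ordered pairs),
# the strong ancestral R* is the transitive closure, computed by iterating
# composition; the weak ancestral adds the identity pairs over the domain.
def strong_ancestral(R):
    closure = set(R)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

def weak_ancestral(R, domain):
    return strong_ancestral(R) | {(x, x) for x in domain}

# The genealogical example: the ancestral of parenthood is ancestorhood.
parent = {('grandma', 'mum'), ('mum', 'me')}
assert ('grandma', 'me') in strong_ancestral(parent)     # R* chains parenthood
assert ('me', 'me') not in strong_ancestral(parent)      # no one is their own ancestor
assert ('me', 'me') in weak_ancestral(parent, {'grandma', 'mum', 'me'})
```

Frege's second-order definition quantifies over all hereditary concepts rather than computing a closure, but over a finite domain the two characterizations pick out the same relation.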

Finally, we can define the natural numbers as those objects a such that the weak ancestral of the predecessor relation holds between zero and a:

\mathbb{N}(a) =_{df} P^{*=}(0, a)

In other words, an object is a natural number if and only if either it is 0, or 0 is its predecessor (that is, it is 1), or zero is the predecessor of its predecessor (that is, it is 2), or 0 is the predecessor of the predecessor of its predecessor (that is, it is 3), and so forth.

It is worth noting that all of this work defining the concept of natural number is, in fact, necessary. One might think at first glance that we could just take the following notion of cardinal number:

C(a) =_{df} (\exists Y)(a = \#(Y(x)))

and use that instead of the much more complicated \mathbb{N}(x). This, however, won’t work: Since Hume’s Principle entails all of the Peano Axioms for arithmetic, it thereby entails that there are infinitely many objects (since there are infinitely many natural numbers). Hence there is a cardinal number—that is, an object falling under C(x)—that is not a finite natural number, namely anti-zero, the number of the universal concept (the term “anti-zero” is due to (Boolos 1997)):

\Omega =_{df} \#(x=x)

Infinite cardinal numbers like anti-zero do not satisfy the Peano Axioms (anti-zero is its own predecessor, for example); thus, if we are to do arithmetic based on Hume’s Principle, we need to restrict our attention to those numbers falling under \mathbb{N}(x).

In the Grundlagen, Frege sketches a proof that, given these definitions, the Peano Axioms follow from Hume’s Principle, and he carries the proof out in full formal detail in Grundgesetze. This result, which is a significant mathematical result independently of its importance to abstractionist accounts of the foundations of mathematics, has come to be known as Frege’s Theorem. The derivation of the Peano Axioms from Hume’s Principle plus these definitions is long and complicated, and we will not present it here. The reader interested in reconstructions of, and discussions of, the proof of Frege’s Theorem should consult (Wright 1983), (Boolos 1990a), (Heck 1993), and (Boolos & Heck 1998).

b. Hume’s Principle and the Caesar Problem

This all looks quite promising so far. We have an abstraction principle that introduces the concept cardinal number (and, as our definitions above demonstrate, the sub-concept natural number), and this abstraction principle entails a quite strong (second-order) version of the standard axioms for arithmetic. In addition, although Frege did not prove this, Hume’s Principle is consistent. We can build a simple model as follows. Let the domain be the natural numbers \mathbb{N}, and then interpret the abstraction operator \# as follows:

\#(P) =\begin{cases} n + 1, & \text{if $P$ holds of $n$-many objects in $\mathbb{N}$} \\ 0, & \text{otherwise (that is, if $P$ holds of infinitely many objects in $\mathbb{N}$).} \end{cases}

This simple construction can be extended to show that Hume’s Principle has models whose domains are of size \kappa for any infinite cardinal \kappa (Boolos 1987). Thus, Hume’s Principle seems like a good candidate for an abstractionist definition of the concept cardinal number.
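The model just described can be sanity-checked mechanically. The Python sketch below is our own illustration: concepts over \mathbb{N} are represented either as finite frozensets or by the tag "inf" (standing in for any infinite subset of \mathbb{N}), \#(P) is interpreted as above, and the Hume's Principle biconditional is verified on every pair of sample concepts.

```python
# A mechanical sanity check of the model sketched above (our own code).
# Concepts over N are represented either as finite frozensets or by the
# tag "inf" (standing in for any infinite subset of N).  Per the
# interpretation above, #(P) is n+1 for an n-element concept and 0 for
# an infinite one; two subsets of N are equinumerous iff they have the
# same finite size or are both infinite.

def num(concept):
    return 0 if concept == "inf" else len(concept) + 1

def equinumerous(p, q):
    if p == "inf" or q == "inf":
        return p == q == "inf"
    return len(p) == len(q)

concepts = [frozenset(), frozenset({1}), frozenset({3, 5}),
            frozenset({0, 2}), "inf"]

# Hume's Principle: #(P) = #(Q) iff P is equinumerous to Q, for every
# pair in the sample.
assert all((num(p) == num(q)) == equinumerous(p, q)
           for p in concepts for q in concepts)
print("Hume's Principle verified on the sample concepts")
```

The check passes because distinct finite sizes get distinct numbers, equal finite sizes share a number, and all infinite concepts collapse to 0, exactly mirroring equinumerosity over \mathbb{N}.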

Frege, however, rejected the idea that Hume’s Principle could serve as a definition of cardinal number. This was not because he was worried that Hume’s Principle failed to be true, or even that it failed to be analytic. On the contrary, as we shall see below, Frege eventually proves a version of Hume’s Principle from other principles that he takes to be logical truths, and hence analytic. Thus, the proved version of Hume’s Principle (were Frege’s project successful) would inherit the analyticity of the principles used to prove it.

Frege instead rejects Hume’s Principle as a definition of the concept cardinal number because it does not settle questions regarding which particular objects the numbers are—questions that, on Frege’s view, an adequate definition should settle. In particular, although abstraction principles provide us with a criterion for determining whether or not two abstracts of the same kind—that is, two abstracts introduced by the same abstraction principle—are identical, they are silent with regard to whether, or when, an abstract introduced by an abstraction principle might be identical to an object introduced by some other means. Frege raises this problem with respect to Hume’s Principle as follows:

. . . but we can never—to take a crude example—decide by means of our definitions whether any concept has the number Julius Caesar belonging to it, or whether that conqueror of Gaul is a number or is not. (Frege 1884/1980, §55)

and he returns to the problem again, pointing out that the Directions Principle fares no better:

It will not, for instance, decide for us whether England is the same as the direction of the Earth’s axis—if I may be forgiven an example which looks nonsensical. Naturally no one is going to confuse England with the direction of the Earth’s axis; but that is no thanks to our definition of direction. (Frege 1884/1980, §66)

The former passage has led to this problem being known as the Caesar Problem.

The root of the Caesar Problem is this. Although abstraction principles provide criteria for settling identities between pairs of abstraction terms of the same type—hence Hume’s Principle provides a criterion for settling identities of the form:

\#(F) = \#(G)

for any concepts F and G—abstraction principles do not provide any guidance for settling identities where one of the terms is not an abstraction term. In short, and using our favorite example, Hume’s Principle provides no guidance for settling any identities of the form:

t = \#(F)

where t is not an abstraction term (hence t might be an everyday name like “England” or “Julius Caesar”). Both:

t = \#(F)

and:

t \neq \#(F)

can be consistently added to Hume’s Principle (although obviously not both at once).

Frege’s worry here is not that, as a result of this, we are left wondering whether the number seven really is identical to Julius Caesar. As he notes, we know that it is not. The problem is that an adequate definition of the concept natural number should tell us this, and Hume’s Principle fails to weigh in on the matter.

That being said, Frege’s worry does not stem from thinking that a definition of a mathematical concept should answer all questions about that concept (after all, the definition of cardinal number should not be expected to tell us what Frege’s favorite cardinal number was). Rather, Frege is concerned here with the idea that a proper definition of a concept should, amongst other things, draw a sharp line between those things that fall under the concept and those that do not—that is, a definition of a mathematical concept should determine the kinds of objects that fall under that concept. Hume’s Principle does not accomplish this, and thus it cannot serve as a proper definition of the concept in question. We will return to the Caesar Problem briefly in our discussion of neo-logicism below. But first, we need to look at Frege’s response.

c. Hume’s Principle and Basic Law V

Since Frege rejected the idea that Hume’s Principle could serve as a definition of cardinal number, but appreciated the power and simplicity that the reconstruction of Peano Arithmetic based on Hume’s Principle provided, he devised a clever strategy: to provide an explicit definition of cardinal number that depended on previously accepted and understood principles, and then derive Hume’s Principle using those principles and the explicit definition in question.

As a result, there are two main ingredients in Frege’s final account of the concept cardinal number. The first is the following explicit definition of the concept in question (noting that “equal” here indicates equinumerosity, not identity):

My definition is therefore as follows:

The number which belongs to the concept F is the extension of the concept “equal to the concept F”. (Frege 1884/1980, §68)

Thus, Frege’s definition of cardinal numbers specifies that the cardinal numbers are a particular type of extension. But of course, this isn’t very helpful until we know something about extensions. Thus, the second central ingredient in the account is a principle that governs extensions of concepts generally—a principle we have already seen: Basic Law V.

We should pause here to note that the version of Basic Law V that Frege utilized in Grundgesetze did not assign extensions to concepts, but instead assigned value-ranges to functions. Thus, a better (but still slightly anachronistic) way to represent Frege’s version of this principle would be something like:

(\forall f)(\forall g)[\S(f) = \S(g) \leftrightarrow (\forall z)(f(z) = g(z))]

where f and g range over unary functions from objects to objects. Since Frege thought that concepts were a special case of functions (in particular, a concept is a function that maps each object to either the true or the false), the conceptual version of Basic Law V given in §1 above is a special case of Frege’s basic law. Hence, we will work with the conceptual version here and below, since (i) this allows our discussion of Frege to align more neatly with our discussion of neo-logicism in the next section, and (ii) any derivation of a contradiction from a special case of a general principle is likewise a derivation of a contradiction from the general principle itself.

Given Basic Law V, we can formalize Frege’s definition of cardinal number as follows:

\#(F) =_{df} \S((\exists Y)(x = \S(Y) \land Y \approx F))

where \S is the abstraction operator found in Basic Law V, which maps each concept to its extension. In other words, on Frege’s account the cardinal number corresponding to a concept F is the extension (or “value-range”, in Frege’s terminology) of the concept which holds of an object x just in case it is the extension of a concept that is equinumerous to F.

Frege informally sketches a proof that Hume’s Principle follows from Basic Law V plus this definition in Grundlagen, and he provides complete formal proofs in Grundgesetze. For a careful discussion of this result, see (Heck 2013). Thus, Basic Law V plus this definition of cardinal number entails Hume’s Principle, which then (with a few more explicit definitions) entails full second-order Peano Arithmetic. So what went wrong? Why aren’t we all Fregean logicists?

d. Basic Law V and Russell’s Paradox

Before looking at what actually did go wrong, it is worth heading off a potential worry that one might have at this point. As already noted, Frege rejected Hume’s Principle as a definition of cardinal number because of the Caesar Problem. But Basic Law V, like Hume’s Principle, is an abstraction principle. And, given any abstraction principle:

{\sf A}_E : (\forall X)(\forall Y)[@(X) = @(Y) \leftrightarrow E(X, Y)]

if {\sf A}_E is consistent, then {\sf A}_E will entail neither:

t = @(F)

nor:

t \neq @(F)

(where t is not an abstraction term). Since Frege obviously believed that Basic Law V was consistent, he should have also realized that it fails to settle the very sorts of identity claims that led to his rejection of Hume’s Principle. Thus, shouldn’t Frege have rejected Basic Law V for the same reasons?

The answer is “no”, and the reason is simple: Frege did not take Basic Law V to be a definition of the concept extension. As just noted, he could not, due to the Caesar Problem. Instead, Frege merely claims that Basic Law V is exactly that—a basic law, or a basic axiom of the logic that he develops in Grundgesetze. Frege never provides a definition of extension, and he seems to think that a definition of this concept is not required. For example, at the end of a footnote in Grundlagen suggesting that, in the definition of cardinal number given above, we could replace “extension of a concept” with just “concept”, he says that:

I assume that it is known what the extension of a concept is. (Frege 1884/1980, §69)

Thus, this is not the reason that Frege’s project failed.

The reason that Frege’s logicism did ultimately fail, however, is already hinted at in our discussion of Basic Law V and the Caesar Problem. Note that we took a slight detour through an arbitrary (consistent) abstraction principle in order to state that (non-)worry. The reason for this complication is simple: Basic Law V does prove one or the other of:

t = \S(F)

and:

t \neq \S(F)

In fact, it proves both (and any other formula, for that matter), because it is inconsistent.

In 1902, just as the second volume of Grundgesetze was going to press, Frege received a letter from a young British logician by the name of Bertrand Russell. In the letter Russell sketched a derivation of a contradiction within the logical system of Grundgesetze—one which showed the inconsistency of Basic Law V in particular. We can reconstruct the reasoning as follows. First, consider the (“Russell”) concept expressed by the following predicate:

R(x) =_{df} (\exists Y)(x = \S(Y) \land \neg Y(x))

Simply put, the Russell concept R holds of an object a just in case that object is the extension of a concept that does not hold of a. Now, clearly, if extensions are coherent at all, then the extension of this concept should be self-identical—that is:

\S(R(x)) = \S(R(x))

which, by the definition of R, gives us:

\S(R(x)) = \S((\exists Y)(x = \S(Y) \land \neg Y(x)))

We then apply Basic Law V to obtain:

(\forall x)[R(x) \leftrightarrow (\exists Y)(x = \S(Y) \land \neg Y(x))]

An application of universal instantiation, replacing the variable x with \S(R(x)), provides:

R(\S(R(x))) \leftrightarrow (\exists Y)(\S(R(x)) = \S(Y) \land \neg Y(\S(R(x))))

The following is a truth of higher-order logic:

\neg R(\S(R(x))) \leftrightarrow (\exists Y)((\forall z)(R(z) \leftrightarrow Y(z)) \land \neg Y(\S(R(x))))

Given Basic Law V, however, the preceding claim is equivalent to:

\neg R(\S(R(x))) \leftrightarrow (\exists Y)(\S(R(x)) = \S(Y) \land \neg Y(\S(R(x))))

But now we combine this with the formula three lines up, to get:

R(\S(R(x))) \leftrightarrow \neg R(\S(R(x)))

an obvious contradiction.
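The cardinality conflict underlying the paradox can be seen in miniature. The following Python sketch is our own illustration (the particular assignment `ext` is an arbitrary stand-in): over any finite domain of n objects there are 2^n concepts (subsets), so no assignment of extensions to concepts can be injective, which is what Basic Law V demands.

```python
# The cardinality conflict behind the paradox, in miniature (our own
# illustration; the assignment `ext` below is an arbitrary stand-in).
# Basic Law V demands a distinct extension for each distinct concept,
# but a domain of n objects supports 2^n concepts (subsets), so no
# assignment of extensions to concepts can be injective.

from itertools import chain, combinations

def all_concepts(domain):
    """All subsets of the domain, playing the role of concepts."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(domain, r)
                                for r in range(len(domain) + 1))]

domain = [0, 1, 2]
concepts = all_concepts(domain)  # 2^3 = 8 concepts, only 3 objects

# Any extension assignment must reuse objects (pigeonhole), so some
# distinct concepts are conflated, contradicting Basic Law V.
ext = {c: (min(c) if c else 0) for c in concepts}  # arbitrary choice
assert len(set(ext.values())) < len(concepts)

# The Russell concept R: objects that are the extension of some concept
# not holding of them.  Its diagonal behavior is what makes the failure
# unavoidable rather than an artifact of one bad assignment.
R = frozenset(a for a in domain
              if any(ext[Y] == a and a not in Y for Y in concepts))
print("concepts:", len(concepts), "objects:", len(domain))
```

The finite pigeonhole argument is of course weaker than Russell's derivation, which needs no cardinality assumptions at all, but it shows why Basic Law V's demand for distinct extensions cannot be met.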

This paradox is known as Russell’s Paradox, and is often presented in a somewhat different context—naïve set theory—where it involves, not Frege’s abstraction-principle based extension operator, but consideration of the set of all sets that are not members of themselves.

After receiving Russell’s letter, Frege added an Afterword to the second volume of Grundgesetze, where he proposed an amended version of Basic Law V that stated, roughly put, that two concepts receive the same extension if and only if they hold of exactly the same objects except possibly disagreeing on their (shared) extension. This version turned out to have similar problems. For a good discussion, see (Cook 2019).

Eventually, however, Frege abandoned logicism. Other efforts to reduce all of mathematics to logic were attempted, the most notable of which was Bertrand Russell and Alfred North Whitehead’s attempted reduction of arithmetic to a complicated logical theory known as ramified type theory in their three-volume Principia Mathematica (Russell & Whitehead 1910/1912/1913). But while the system of Principia Mathematica adopted Frege’s original idea of reducing mathematics to logic, it did not do so via the mobilization of abstraction principles, and hence is somewhat orthogonal to our concerns. The next major chapter in abstractionist approaches to mathematics would not occur for almost a century.

4. Neo-Logicism

The revival of abstractionism in the second half of the 20th century is due in no small part to the publication of Crispin Wright’s Frege’s Conception of Numbers as Objects (Wright 1983), although other publications from around this time, such as (Hodes 1984), explored some of the same ideas. In this work Wright notes that Hume’s Principle, unlike Basic Law V, is consistent. Thus, given Frege’s Theorem, which ensures that full second-order Peano Arithmetic follows from Hume’s Principle plus the definitions covered in the last section, we can arrive at something like Frege’s original logicist project if we can defend Hume’s Principle as (or as something much like) an implicit definition of the concept cardinal number. In a later essay Wright makes the point as follows:

Frege’s Theorem will ensure . . . that the fundamental laws of arithmetic can be derived within a system of second order logic augmented by a principle whose role is to explain, if not exactly to define, the general notion of identity of cardinal number. . . If such an explanatory principle . . . can be regarded as analytic, then that should suffice . . . to demonstrate the analyticity of arithmetic. Even if that term is found troubling, as for instance by George Boolos, it will remain that Hume’s Principle—like any principle serving to implicitly define a certain concept—will be available without significant epistemological presupposition . . . Such an epistemological route would be an outcome still worth describing as logicism. (Wright 1997, 210—211)

Subsequent work on neo-logicism has focused on a number of challenges.

The first, and perhaps most obvious, is to fully develop the story whereby abstraction principles are implicit definitions of mathematical concepts that not only provide us with terminology for talking about the abstract objects in question, but somehow guarantee that those objects exist. The account in question has been developed for the most part in individual and joint essays by Crispin Wright and Bob Hale—many of these essays are contained in the excellent collection (Hale & Wright 2001a). The central idea underlying the approach is a principle called the syntactic priority thesis, which, although it has its roots in Frege’s work, finds perhaps its earliest explicit statement in Wright’s Frege’s Conception of Numbers as Objects (but see also (Dummett 1956)):

When it has been established . . . that a given class of terms are functioning as singular terms, and when it has been verified that certain appropriate sentences containing them are, by ordinary criteria, true, then it follows that those terms do genuinely refer. (Wright 1983, 14)

This principle turns the intuitive account of the connection between singular terms and the objects to which they purport to refer on its head. Instead of explaining when a singular term refers, and to what it refers, in terms of (in some sense) prior facts regarding the existence of certain objects (in particular, the objects to which the terms in question purport to refer), the syntactic priority thesis instead explains what it is for certain sorts of object to exist in terms of (in some sense) prior facts regarding whether or not appropriate singular terms appear in true (atomic) sentences.

Wright and Hale then argue that, first, the apparent singular terms (that is, abstraction terms) appearing on the left-hand side of abstraction principles such as Hume’s Principle are genuine singular terms, and, second, that Hume’s Principle serves as a genuine definition of these terms, guaranteeing that there are true atomic sentences that contain those terms. In particular, since for any concept P:

(\forall z)(P(z) \leftrightarrow P(z))

is a logical truth, Hume’s Principle entails that any identity claim of the form:

\#(P) = \#(P)

is true. As a result, terms of the form \#(P) refer (and refer to the abstract objects known as cardinal numbers). Hence, both the existence of the abstract objects that serve as the subject matter of arithmetic and our ability to obtain knowledge of such objects are guaranteed.

a. Neo-Logicism and Comprehension

Another problem that the neo-logicist faces involves responding to Russell’s Paradox. Neo-logicism involves the claim that abstraction principles are implicit definitions of mathematical concepts. But, as Russell’s Paradox makes clear, it would appear that not every abstraction principle can play this role. Thus, the neo-logicist owes us an account of the line that divides the acceptable abstraction principles—that is, the ones that serve as genuine definitions of mathematical concepts—from those that are unacceptable.

Before looking at ways we might draw such a line between acceptable and unacceptable abstraction principles, it is worth noting that proceeding in this fashion is not forced upon the neo-logicist. In our presentation of Russell’s Paradox in the previous section, a crucial ingredient of the argument was left implicit. The second-order quantifiers in an abstraction principle such as Basic Law V range over concepts, and hence Basic Law V tells us, in effect, that each distinct concept receives a distinct extension. But, in order to get the Russell’s Paradox argument going, we need to know that there is a concept corresponding to the Russell predicate R(x).

Standard accounts of second-order logic ensure that there is a concept corresponding to each predicate by including the comprehension scheme:

Comprehension: For any formula \Phi(y) in which X does not occur free:

(\exists X)(\forall y)(X(y) \leftrightarrow \Phi(y))

Frege did not have an explicit comprehension principle in his logic, but instead had inference rules that amounted to the same thing. If we substitute R(x) in for \Phi(y) in the comprehension scheme, it follows that there is a concept corresponding to R(x), and hence we can run the Russell’s Paradox reasoning.
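For instance, spelling out the relevant instance (our own rendering), substituting the Russell predicate for \Phi(y) yields:

(\exists X)(\forall y)(X(y) \leftrightarrow (\exists Y)(y = \S(Y) \land \neg Y(y)))

which asserts that there is a concept holding of exactly those objects satisfying the Russell predicate, and this is precisely the concept that the paradoxical reasoning requires.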

But now that we have made the role of comprehension explicit, another response to Russell’s Paradox becomes apparent. Why not reject comprehension, rather than rejecting Basic Law V? In other words, maybe it is the comprehension scheme that is the problem, and Basic Law V (and in fact any abstraction principle) is acceptable.

Of course, we don’t want to just drop comprehension altogether, since then we have no guarantee that any concepts exist, and as a result there is little point to the second-order portion of our second-order logic. Instead, the move being suggested is to replace comprehension with some restricted version that entails the existence of enough concepts that abstraction principles such as Hume’s Principle and Basic Law V can do significant mathematical work for us, but does not entail the existence of concepts, like the one corresponding to the Russell predicate, that lead to contradictions. A good bit of work has been done exploring such approaches. For example, we might consider reformulating the comprehension scheme so that it only applies to predicates \Phi(x) that are predicative (that is, contain no bound second-order variables) or are \Delta^1_1 (that is, are equivalent both to a formula all of whose second-order quantifiers are universal and appear at the beginning of the formula, and to a formula all of whose second-order quantifiers are existential and appear at the beginning of the formula). (Heck 1996) shows that Basic Law V is consistent with the former version of comprehension, and (Wehmeier 1999) and (Ferreira & Wehmeier 2002) show that Basic Law V is consistent with the latter (considerably stronger) version.

One problem with this approach is that if we restrict the comprehension principles used in our neo-logicist reconstruction of mathematical theories, then the quantifiers that occur in the theories so reconstructed are weakened as well. Thus, if we adopt comprehension restricted to some particular class of predicates, then even if we can prove the induction axiom for arithmetic:

(\forall F)[(F(0) \land (\forall x)(\forall y)((F(x) \land P(x, y)) \rightarrow F(y))) \rightarrow (\forall x)(\mathbb{N}(x) \rightarrow F(x))]

it is not clear that we have what we want. The problem is that, in this situation, we have no guarantee that induction will hold of all predicates that can be formulated in our (second-order) language, but instead are merely guaranteed that induction will hold for those predicates that are in the restricted class to which our favored version of comprehension applies. It is not clear that this should count as a genuine reconstruction of arithmetic, since induction is clearly meant to hold for any meaningful condition whatsoever (and presumably any condition that can be formulated within second-order logic is meaningful). As a result, most work on neo-logicism has favored the other approach: retain full comprehension, accept that Basic Law V is inconsistent, and then search for philosophically well-motivated criteria that separate the good abstraction principles from the bad.

b. Neo-Logicism and the Bad Company Problem

At first glance, one might think that the solution to this problem is obvious: Can’t we just restrict our attention to the consistent abstraction principles? After all, isn’t that the difference between Hume’s Principle and Basic Law V—the former is consistent, while the latter is not? Why not just rule out the inconsistent abstraction principles, and be done with it?

Unfortunately, things are not so simple. First off, it turns out that there is no decision procedure for determining which abstraction principles are consistent and which are not. In other words, there is no procedure or algorithm that will tell us, of an arbitrary abstraction principle, whether that abstraction principle implies a contradiction (like Basic Law V) or not (like Hume’s Principle). See (Heck 1992) for a simple proof.

Second, and even more worrisome, is the fact that the class of individually consistent abstraction principles is not itself consistent. In other words, there are pairs of abstraction principles such that each of them is consistent, but they are incompatible with each other. A simple example is provided by the Nuisance Principle:

{\sf NP}: (\forall X)(\forall Y)[\mathcal{N}(X) = \mathcal{N}(Y) \leftrightarrow {\sf Fin}((X \setminus Y) \cup (Y \setminus X))]

where {\sf Fin}(X) abbreviates the purely logical second-order claim that there are only finitely many Xs. This abstraction principle, first discussed in (Wright 1997), is a simplification of a similar example given in (Boolos 1990a). Informally, this principle says that the nuisance of X is identical to the nuisance of Y if and only if the collection of things that either fall under X but not Y, or fall under Y but not X, is finite. Even more simply, the nuisance of X is identical to the nuisance of Y if and only if X and Y differ on at most finitely many objects.

Now, the Nuisance Principle is consistent—in fact, it has models of size n for any finite number n. The problem, however, is that it has no infinite models. Since, as we saw in our discussion of Frege’s Theorem, Hume’s Principle entails the existence of infinitely many cardinal numbers, and thus all of its models have infinite domains, there is no model that makes both the Nuisance Principle and Hume’s Principle true. Thus, restricting our attention to consistent abstraction principles won’t do the job.
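The finite satisfiability of the Nuisance Principle is easy to see computationally. The Python sketch below is our own illustration: on a finite domain every symmetric difference is finite, so the equivalence relation behind NP relates all concepts and a single nuisance abstract suffices. The conflict with Hume's Principle arises only on infinite domains, which no finite computation can survey.

```python
# A finite check (our own illustration) of why NP is so easily satisfied
# on finite domains: there, every symmetric difference is finite, so the
# equivalence relation behind NP relates ALL concepts, and a single
# nuisance abstract suffices.  The conflict with Hume's Principle only
# appears on infinite domains, which no finite computation can survey.

from itertools import chain, combinations

domain = frozenset(range(4))
concepts = [frozenset(c) for c in
            chain.from_iterable(combinations(sorted(domain), r)
                                for r in range(len(domain) + 1))]

def same_nuisance(x, y):
    # Fin((X \ Y) union (Y \ X)): automatically true on a finite domain,
    # since every symmetric difference there is a finite set.
    return len(x ^ y) < float("inf")

# Collect the equivalence classes of the same-nuisance relation.
classes = {frozenset(y for y in concepts if same_nuisance(x, y))
           for x in concepts}
print(len(classes))  # 1: all 16 concepts share one nuisance abstract
```

On an infinite domain, by contrast, the finiteness condition genuinely partitions the concepts, and counting the resulting classes is what shows NP can have no infinite model.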

Unsurprisingly, Wright did not leave things there, and in the same essay in which he presents the Nuisance Principle he proposes a solution to the problem:

A legitimate abstraction, in short, ought to do no more than introduce a concept by fixing truth conditions for statements concerning instances of that concept . . . How many sometime, someplace zebras there are is a matter between that concept and the world. No principle which merely assigns truth conditions to statements concerning objects of a quite unrelated, abstract kind—and no legitimate second-order abstraction can do any more than that—can possibly have any bearing on the matter. What is at stake . . . is, in effect, conservativeness in (something close to) the sense of that notion deployed in Hartry Field’s exposition of his nominalism. (Wright 1997, 296)

The reason that Wright invokes the version of conservativeness mobilized in (Field 2016) is that the standard notion of conservativeness found in textbooks on model theory won’t do the job. That notion is formulated as follows:

A formula \Phi in a language \mathcal{L}_1 is conservative over a theory \mathcal{T} in a language \mathcal{L}_2, where \mathcal{L}_2 \subseteq \mathcal{L}_1, if and only if, for any formula \Psi \in \mathcal{L}_2, if:

\Phi, \mathcal{T} \models \Psi

then:

\mathcal{T} \models \Psi

In other words, given a theory \mathcal{T}, a formula \Phi (usually involving new vocabulary not included in \mathcal{T}) is conservative over \mathcal{T} if and only if any formula in the language of \mathcal{T} that follows from the conjunction of \Phi and \mathcal{T} already follows from \mathcal{T} alone. Thus, although \Phi may entail new things not entailed by \mathcal{T}, it entails nothing new that is expressible in the language of \mathcal{T}.

Now, while this notion of conservativeness is extremely important in model theory, it is, as Wright realized, too strong to be of use here, since even Hume’s Principle is not conservative in this sense. Take any theory \mathcal{T} that is compatible with the existence of only finitely many things (that is, has finite models), and let {\sf Inf} abbreviate the purely logical second-order claim expressing that the universe contains infinitely many objects. Then:

{\sf HP}, \mathcal{T} \models {\sf Inf}

but:

\mathcal{T} \not\models {\sf Inf}

This example makes the problem easy to spot: Acceptable abstraction principles, when combined with our favorite theories, may well entail new claims not entailed by those theories. For example, Hume’s Principle entails that there are infinitely many objects. What we want to exclude are abstraction principles that entail new claims about the subject matter of our favorite (non-abstractionist) theories. Thus, Hume’s Principle should not entail that the subject matter of \mathcal{T} involves infinitely many objects unless \mathcal{T} already entails that claim. Hence, what we want is something like the following: An abstraction principle {\sf A}_E is conservative in the relevant sense if and only if, given any theory \mathcal{T} and formula \Phi about some domain of objects \Delta, if {\sf A}_E combined with \mathcal{T} restricted to its intended, non-abstract domain entails \Phi restricted to its intended, non-abstract domain, then \mathcal{T} (unrestricted) should entail \Phi (unrestricted). This will block the example above, since, if \mathcal{T} is our theory of zebras (to stick with Wright’s example), then although Hume’s Principle plus \mathcal{T} entails the existence of infinitely many objects, it does not entail the existence of infinitely many zebras (unless our zebra theory does).

We can capture this idea more precisely via the following straightforward adaptation of Field’s criterion to the present context:

{\sf A}_E is Field-conservative if and only if, for any theory \mathcal{T} and formula \Phi not containing @_E, if:

{\sf A}_E, \mathcal{T}^{\neg (\exists Y)(x = @_E(Y))} \models \Phi^{\neg (\exists Y)(x = @_E(Y))}

then:

\mathcal{T} \models \Phi

The superscripts indicate that we are restricting each quantifier in the formula (or set of formulas) in question to the superscripted predicate. Thus, given a formula \Phi and a predicate \Psi(x), we obtain \Phi^{\Psi(x)} by replacing each quantifier in \Phi with a new quantifier whose range is restricted to \Psi(x) along the following pattern:

(\forall x) \dots becomes (\forall x)(\Psi(x) \rightarrow \dots

(\exists x) \dots becomes (\exists x)(\Psi(x) \land \dots

(\forall X) \dots becomes (\forall X)((\forall x)(X(x) \rightarrow \Psi(x)) \rightarrow \dots

(\exists X) \dots becomes (\exists X)((\forall x)(X(x) \rightarrow \Psi(x)) \land \dots
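As a concrete instance (our own worked example), consider the purely logical claim that there are at least two objects:

(\forall x)(\exists y)(x \neq y)

Relativizing it to \Psi(x) by the patterns above yields:

(\forall x)(\Psi(x) \rightarrow (\exists y)(\Psi(y) \land x \neq y))

which says only that there are at least two objects among those satisfying \Psi(x), and is silent about the rest of the domain.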

Thus, according to this variant of conservativeness, an abstraction principle {\sf A}_E is conservative if, whenever that abstraction principle plus a theory \mathcal{T} whose quantifiers have been restricted to those objects that are not abstracts governed by {\sf A}_E entails a formula \Phi whose quantifiers have been similarly restricted, the theory \mathcal{T} (without such restriction) entails \Phi (also without such restriction).

Hume’s Principle (like many other abstraction principles) is conservative in this sense. Further, the idea that Field-conservativeness is a necessary condition on acceptable abstraction principles has been widely accepted in the neo-logicist literature. But Field-conservativeness, even combined with consistency, cannot be sufficient for acceptability, for a very simple (and now familiar-seeming) reason: there are pairs of abstraction principles that are each both consistent and Field-conservative, but which are incompatible with each other.

The first such pair of abstraction principles is presented in (Weir 2003). Here is a slight variation on his construction. First, we define a new equivalence relation:

    \begin{align*}X \Leftrightarrow Y =_{df} (\forall z)(X(z) \leftrightarrow Y(z)) \lor (&(\exists z)(\exists w)(X(z) \land X(w) \land z \neq w)\\ \land \ &(\exists z)(\exists w)(Y(z) \land Y(w) \land z \neq w))\end{align*}

In other words, ⇔ holds between two concepts X and Y if and only if either they both hold of no more than one object, and they hold of the same objects, or they both hold of more than one object. Next, let {\sf Limit} abbreviate the purely logical second-order formula expressing the claim that the size of the universe is a limit cardinal, and {\sf Succ} abbreviate the purely logical second-order formula expressing the claim that the size of the universe is a successor cardinal. (Limit cardinals and successor cardinals are types of infinite cardinal numbers. The following facts are all that one needs for the example to work: Every infinite cardinal number is either a limit cardinal or a successor cardinal (but not both); given any limit cardinal, there is a larger successor cardinal; and given any successor cardinal, there is a larger limit cardinal. For proofs of these results, and much more information on infinite cardinal numbers, the reader is encouraged to consult (Kunen 1980)). Now consider:

    \begin{align*} {\sf A}_{E_1} : \ & (\forall X)(\forall Y)[@_1(X) = @_1(Y) \leftrightarrow ({\sf Limit} \land X \Leftrightarrow Y) \lor (\forall z)(X(z) \leftrightarrow Y(z))]\\ {\sf A}_{E_2} : \ & (\forall X)(\forall Y)[@_2(X) = @_2(Y) \leftrightarrow ({\sf Succ} \land X \Leftrightarrow Y) \lor (\forall z)(X(z) \leftrightarrow Y(z))] \end{align*}

Both {\sf A}_{E_1} and {\sf A}_{E_2} are consistent: {\sf A}_{E_1} is satisfiable on domains whose cardinality is an infinite limit cardinal and is not satisfiable on domains whose cardinality is finite or an infinite successor cardinal (on the latter sort of domains it behaves analogously to Basic Law V). Things stand similarly for {\sf A}_{E_2}, except with the roles of limit cardinals and successor cardinals reversed. Further, both principles are Field-conservative. The proof of this fact is complex, but depends essentially on the fact that ⇔ generates equivalence classes in such a way that, on any infinite domain, the number of equivalence classes of concepts is identical to the number of concepts. See (Cook & Linnebo 2018) for more discussion. But, since no cardinal number is both a limit cardinal and a successor cardinal, there is no domain that makes both principles true simultaneously. Thus, Field-conservativeness is not enough to guarantee that an abstraction principle is an acceptable neo-logicist definition of a mathematical concept.
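The cardinality facts driving this example are easy to check on finite domains. The following sketch is our own illustration (not part of Weir’s construction): concepts over a finite domain are modeled as subsets, and we count the ⇔-equivalence classes. On a domain of n objects there are always n + 2 of them (the empty concept, each singleton, and one class collecting every concept with two or more instances), which is why no finite domain can supply a distinct abstract for each class.

```python
from itertools import chain, combinations

def powerset(domain):
    """All concepts-as-extensions over a finite domain."""
    s = list(domain)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def equiv(X, Y):
    """X <=> Y: coextensive, or both hold of more than one object."""
    return X == Y or (len(X) > 1 and len(Y) > 1)

domain = {0, 1, 2}
concepts = powerset(domain)
classes = {frozenset(Y for Y in concepts if equiv(X, Y)) for X in concepts}

# The classes: the empty concept, each of the 3 singletons, and one
# class collecting every concept with two or more instances.
print(len(classes))  # 5, i.e. n + 2 on a domain of n = 3 objects
```

Since n + 2 always exceeds n, an abstraction principle based on ⇔ alone already has no finite models; the {\sf Limit} and {\sf Succ} clauses then rule out the offending infinite domains as well.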

The literature on Bad Company has focused on developing more nuanced criteria that we might impose on acceptable abstraction principles, and most of these have focused on three kinds of consideration:

  • Satisfiability: On what sizes of domain is the principle satisfiable?
  • Fullness: On what sizes of domain does the principle in question generate as many abstracts as there are objects in the domain?
  • Monotonicity: Is it the case that, if we move from one domain to a larger domain, the principle generates at least as many abstracts on the latter as it did on the former?

The reader is encouraged to consult (Cook & Linnebo 2018) for a good overview of the current state of the art with regard to proposals for dealing with the Bad Company problems that fall under one of (or a combination of) these three types of approach.

c. Extending Neo-Logicism Beyond Arithmetic

The next issue facing neo-logicism is extending the account to other branches of mathematics. The reconstruction of arithmetic from Hume’s Principle is (at least in a technical sense) the big success story of neo-logicism, but if this is as far as it goes, then the view is merely an account of the nature of arithmetic, not an account of the nature of mathematics more generally. Thus, if the neo-logicist is to be successful, then they need to show that the approach can be extended to all (or at least much of) mathematics.

The majority of work done in this regard has focused on the two areas of mathematics that tend, in addition to arithmetic, to receive the most attention in the foundations of mathematics: set theory and real analysis. Although this might seem at first glance to be somewhat limited, it is well-motivated. The neo-logicist has already reconstructed arithmetic using Hume’s Principle, which shows that neo-logicism can handle (countably) infinite structures. If the neo-logicist can reconstruct real analysis, then this would show that the account can deal with continuous mathematical structures. And if the neo-logicist can reconstruct set theory as well, then this would show that the account can handle arbitrarily large transfinite structures. These three claims combined would make a convincing case for the claim that most if not all of modern mathematics could be so reconstructed. Neo-logicist reconstructions of real analysis have followed the pattern of Dedekind-cut-style set-theoretic treatments of the real numbers. They begin with the natural numbers as given to us by Hume’s Principle. We then use the (ordered) Pairing Principle:

{\sf Pair} : (\forall x)(\forall y)(\forall z)(\forall w)[\langle x, y \rangle = \langle z, w \rangle \leftrightarrow (x = z \land y = w)]

to obtain pairs of natural numbers, and then apply another principle that provides us with a copy of the integers as equivalence classes of pairs of natural numbers. We then use the Pairing Principle again, to obtain ordered pairs of these integers, and then apply another principle to obtain a copy of the rational numbers as equivalence classes of ordered pairs of our copy of the integers. Finally, we use another principle to obtain an abstract corresponding to each “cut” on the natural ordering of the rational numbers, obtaining a collection of abstracts isomorphic to the standard real numbers.
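The successive abstraction steps can be sketched in miniature. The following is purely our illustration (the choice of canonical representatives, in particular, is ours): integers are abstracted from pairs of naturals, where (a, b) and (c, d) are equivalent just in case a + d = b + c, and rationals are abstracted from pairs of integers, equivalent just in case their cross-products agree.

```python
from math import gcd

# Integers as abstracts of pairs of naturals: (a, b) goes proxy for
# a - b, with (a, b) ~ (c, d) iff a + d = b + c.  We take a canonical
# representative of each equivalence class as its abstract.
def int_abstract(a, b):
    return (a - b, 0) if a >= b else (0, b - a)

# Rationals as abstracts of pairs of integers (q assumed nonzero):
# (p, q) goes proxy for p/q, with (p, q) ~ (r, s) iff p*s = r*q.
def rat_abstract(p, q):
    g = gcd(p, q)
    p, q = p // g, q // g
    return (-p, -q) if q < 0 else (p, q)

assert int_abstract(2, 5) == int_abstract(7, 10)   # both abstract "-3"
assert rat_abstract(2, 4) == rat_abstract(3, 6)    # both abstract "1/2"
assert rat_abstract(1, 2) != rat_abstract(2, 1)    # but 1/2 is not 2
```

The final, cut-forming step resists such a finite sketch, since there are uncountably many cuts on the rationals; the point here is only the pattern of equivalence-then-abstraction repeated at each stage.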

Examples of this sort of reconstruction of the real numbers can be found in (Hale 2000) and (Shapiro 2000). There is, however, a significant difference between the two approaches found in these two papers. Shapiro’s construction halts when he has applied the abstraction principle that provides an abstract for each “cut” on the copy of the rationals, since at this point we have obtained a collection of abstracts whose structure is isomorphic to the standard real numbers. Hale’s construction, however, involves one more step: he applies a version of the Ratio Principle discussed earlier to this initial copy of the reals, and claims that the structure that results consists of the genuine real numbers (and the abstracts from the prior step, while having the same structure, were merely a copy—not the genuine article).

The difference between the two approaches stems from a deeper disagreement with regard to what, exactly, is required for a reconstruction of a mathematical theory to be successful. The disagreement traces back directly to Frege, who writes in Grundgesetze that:

So the path to be taken here steers between the old approach, still preferred by H. Hankel, of a geometrical foundation for the theory of irrational numbers and the approaches pursued in recent times. From the former we retain the conception of a real number as a magnitude-ratio, or measuring number, but separate it from geometry and indeed from all specific kinds of magnitudes, thereby coming closer to the more recent efforts. At the same time, however, we avoid the emerging problems of the latter approaches, that either measurement does not feature at all, or that it features without any internal connection grounded in the nature of the number itself, but is merely tacked on externally, from which it follows that we would, strictly speaking, have to state specifically for each kind of magnitude how it should be measured, and how a number is thereby obtained. Any general criteria for where the numbers can be used as measuring numbers and what shape their application will then take, are here entirely lacking.

So we can hope, on the one hand, not to let slip away from us the ways in which arithmetic is applied in specific areas of knowledge, without, on the other hand, contaminating arithmetic with the objects, concepts, relations borrowed from these sciences and endangering its special nature and autonomy. One may surely expect arithmetic to present the ways in which arithmetic is applied, even though the application itself is not its subject matter. (Frege 1893/1903/2013, §159)

Wright sums up Frege’s idea here nicely:

This is one of the clearest passages in which Frege gives expression to something that I propose we call Frege’s Constraint: that a satisfactory foundation for a mathematical theory must somehow build its applications, actual and potential, into its core—into the content it ascribes to the statements of the theory—rather than merely “patch them on from the outside.” (Wright 2000, 324)

The reason for Hale’s extra step should now be apparent. Hale accepts Frege’s constraint, and further, he agrees with Frege that a central part of the explanation for the wide-ranging applicability of the real numbers within science is the fact that they are ratios of magnitudes. At the penultimate step of his construction (the one corresponding to Shapiro’s final step) we have obtained a manifold of magnitudes, but the final step is required in order to move from the magnitudes themselves to the required ratios. Shapiro, on the other hand, is not committed to Frege’s Constraint, and as a result is satisfied with merely obtaining a collection of abstract objects whose structure is isomorphic to the structure of the real numbers. As a result, he halts a step earlier than Hale does. This disagreement with regard to the role that Frege’s constraint should play within neo-logicism remains an important point of contention amongst various theorists working on the view.

The other mathematical theory that has been a central concern for neo-logicism is set theory. The initially most obvious approach to obtaining a powerful neo-logicist theory of sets—Basic Law V—is of course inconsistent, but the approach is nevertheless attractive, and as a result the bulk of work on neo-logicist set theory has focused on various ways that we might restrict Basic Law V so that the resulting principle is both powerful enough to reconstruct much or all of contemporary work in set theory yet also, of course, consistent. The principle along these lines that has received by far the most attention is the following principle proposed in (Boolos 1989):

{{\sf NewV} : (\forall X)(\forall Y)[\S_{\sf NewV}(X) = \S_{\sf NewV}(Y) \leftrightarrow ({\sf Big}(X) \land {\sf Big}(Y)) \lor (\forall z)(X(z) \leftrightarrow Y(z))]}

where {\sf Big}(X) is an abbreviation for the purely logical second-order claim that there is a mapping from the Xs onto the universe—that is:

{{\sf Big}(X) =_{df} (\exists R)[(\forall y)(\exists z)(X(z) \land R(z, y)) \land (\forall y)(\forall z)(\forall w)((R(y, z) \land R(y, w)) \rightarrow z = w)]}

{\sf NewV} behaves like Basic Law V on concepts that hold of fewer objects than are contained in the domain as a whole, providing each such concept with its own unique extension, but it maps all concepts that hold of as many objects as there are in the domain as a whole to a single, “dummy” object. This principle is meant to capture the spirit of Georg Cantor’s analysis of the set-theoretic paradoxes. According to Cantor, those concepts that do not correspond to a set (for example, the concept corresponding to the Russell predicate) fail to do so because they are in some sense “too big” (Hallett 1986).
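The intended behavior of the {\sf NewV} abstraction map is easy to mimic on a finite domain. {\sf NewV} itself has no finite models (the small concepts alone already outnumber the objects), but the following sketch, which is our illustration only, shows how the map is meant to act. For finite X and domain D, a map from the Xs onto D exists just in case X covers D, since an onto map needs at least as many sources as targets.

```python
def big(X, domain):
    """Big(X): some relation maps the Xs onto the whole domain.  For a
    finite X drawn from a finite domain, this holds just in case X
    covers the domain."""
    return set(X) == set(domain)

def new_v(X, domain):
    """The NewV abstract of X: its own extension if X is small,
    the single shared dummy object if X is big."""
    return "BIG" if big(X, domain) else frozenset(X)

D = {0, 1, 2}
assert new_v({0, 1}, D) == frozenset({0, 1})  # small: unique extension
assert new_v({0}, D) != new_v({1}, D)         # distinct small concepts differ
assert new_v(D, D) == "BIG"                   # big: one dummy for all
```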

{\sf NewV} is consistent, and, given the following definitions:

    \begin{align*}{\sf Set}(x) &=_{df} (\exists Y)(x = \S_{\sf NewV}(Y) \land \neg{\sf Big}(Y))\\ x \in y &=_{df} (\exists Z)(Z(x) \land y = \S_{\sf NewV}(Z))\end{align*}

it entails the extensionality, empty set, pairing, separation, and replacement axioms familiar from Zermelo-Fraenkel set theory (ZFC), and it also entails a slightly reformulated version of the union axiom. It does not entail the axioms of infinity, powerset, or foundation.

{\sf NewV} is not Field-conservative, however, since it implies that there is a well-ordering on the entire domain—see (Shapiro & Weir 1999) for details. Since, as we saw earlier, there is wide agreement that acceptable abstraction principles ought to be conservative in exactly this sense, neo-logicists will likely need to look elsewhere for their reconstruction of set theory.

Thus, while current debates regarding the reconstruction of the real numbers concern primarily philosophical issues, such as which of various technical reconstructions is to be preferred in light of considerations like Frege’s Constraint, there remains a very real question regarding whether anything like contemporary set theory can be given a mathematically adequate reconstruction on the neo-logicist approach.

d. Neo-Logicism and the Caesar Problem

The final problem that the neo-logicist is faced with is one that is already familiar: the Caesar Problem. Frege, of course, side-stepped the Caesar Problem by denying, in the end, that abstraction principles such as Hume’s Principle or Basic Law V were definitions. But the neo-logicist accepts that these abstraction principles are (implicit) definitions of the mathematical concepts in question. An adequate definition of a mathematical concept should meet the following two desiderata:

  • Identity Conditions: An adequate definition should explicate the conditions under which two entities falling under that definition are identical or distinct.
  • Demarcation Conditions: An adequate definition should explicate the conditions under which an entity falls under that definition or not.

In short, if Hume’s Principle is to serve as a definition of the concept cardinal number, then it should tell us when two cardinal numbers are the same, and when they are different, and it should tell us when an object is a cardinal number, and when it is not. As we have already seen, Hume’s Principle (like other abstraction principles) does a good job on the first task, but falls decidedly short on the second.

Neo-logicist solutions to the Caesar Problem typically take one of three forms. The first approach is to deny the problem, arguing that it does not matter if the object picked out by the relevant abstraction term of the form \#(P) really is the number two, so long as that object plays the role of two in the domain of objects that makes Hume’s Principle true (that is, so long as it is appropriately related to the other objects referred to by other abstraction terms of the form \#(Q)). Although this is not the target of the essay, the discussion of the connections between logicism and structuralism about mathematics in (Wright 2000) touches on something like this idea. The second approach is to argue that, although abstraction principles as we have understood them here do not settle identity claims of the form t = @(P) (where t is not an abstraction term), we merely need to reformulate them appropriately. Again, although the Caesar Problem is not the main target of the essay, this sort of approach is pursued in (Cook 2016), where versions of abstraction principles involving modal operators are explored. Finally, the third approach involves admitting that abstraction principles alone are susceptible to the Caesar Problem, but arguing that abstraction principles alone need not solve it. Instead, identities of the form t = @(P) (where t is not an abstraction term) are settled via a combination of the relevant abstraction principle plus additional metaphysical or semantic principles. This is the approach taken in (Hale & Wright 2001b), where the Caesar Problem is solved by mobilizing additional theoretical constraints regarding categories—that is, maximal sortal concepts with uniform identity conditions—arguing that objects from different categories cannot be identical.

Before moving on to other versions of abstractionism, it is worth mentioning a special case of the Caesar Problem. Traditionally, the Caesar Problem is cast as a puzzle about determining the truth conditions of claims of the form:

t = @(P)

where t is not an abstraction term. But there is a second sort of worry that arises along these lines, one that involves identities where each term is an abstraction term, but they are abstraction terms governed by distinct abstraction principles. For concreteness, consider two distinct (consistent) conceptual abstraction principles:

    \begin{align*} {\sf A}_{E_1} : \ &(\forall X)(\forall Y)[@_1(X) = @_1(Y) \leftrightarrow E_1(X, Y)]\\ {\sf A}_{E_2} : \ &(\forall X)(\forall Y)[@_2(X) = @_2(Y) \leftrightarrow E_2(X, Y)] \end{align*}

For reasons similar to those that underlie the original Caesar Problem, the conjunction of these two principles fails to settle any identities of the form:

@_1(P) = @_2(Q)

This problem, which has come to be called the \mathbb{C}\text{-}\mathbb{R} Problem (since one particular case would be when {\sf A}_{E_1} introduces the complex numbers, and {\sf A}_{E_2} introduces the real numbers), is discussed in (Fine 2002) and (Cook & Ebert 2005). The former suggests (more for reasons of technical convenience than for reasons of philosophical principle) that we settle such identities by requiring that identical abstracts correspond to identical equivalence classes. Thus, given the two abstraction principles above, we would adopt the following additional Identity Principle:

{\sf IP}: (\forall X)(\forall Y)[@_1(X) = @_2(Y) \leftrightarrow (\forall Z)(E_1(Z, X) \leftrightarrow E_2(Z, Y))]

If, for example, we apply the Identity Principle to the abstracts governed by {\sf NewV} and those governed by Hume’s Principle, then we can conclude that:

\S_{\sf NewV}(x \neq x) = \#(x \neq x)

That is, \varnothing = 0. After all, the equivalence class of concepts containing the empty concept according to the equivalence relation mobilized in {\sf NewV} is identical to the equivalence class of concepts containing the empty concept according to the equivalence relation mobilized in Hume’s Principle (both contain the empty concept and no other concepts). But the following claim would turn out to be false (where a is any term):

\S_{\sf NewV}(x = a) = \#(x = a)

That is, for any object \alpha, \{\alpha\} \neq 1. Again, the reason is simple. The equivalence class given by the equivalence relation from {\sf NewV}, applied to the concept that holds of a and a alone, gives us an equivalence class that contains only that concept, while the equivalence class given by the equivalence relation from Hume’s Principle, applied to the concept that holds of a and a alone, gives us an equivalence class that contains any concept that holds of exactly one object.

While this solution is technically simple and elegant, (Cook & Ebert 2005) raises some objections, the most striking of which is a generalization of the examples above: Cook and Ebert suggest that any account that makes some numbers (in particular, zero) identical to some sets (in particular, the empty set), but does not either entail that all numbers are sets, or that all sets are numbers, is metaphysically suspect at best.

5. Dynamic Abstraction

Now that we’ve looked closely at both Frege’s logicist version of abstractionism and contemporary neo-logicism, we’ll finish up this essay by taking a brief look at another variation on the abstractionist theme.

Øystein Linnebo has formulated a version of abstractionism—dynamic abstraction—that involves modal notions, but in a way very different from the way in which these notions are mobilized in more traditional work on neo-logicism (Linnebo 2018). Before summarizing this view, however, we need to note that this account presupposes a rather different reading of the second-order variables involved in conceptual abstraction principles—the plural reading. Thus, a formula of the form:

(\exists X) \Phi(X)

should not be read as:

There is a concept X such that \Phi holds of X.

but rather as:

There are objects—the Xs—such that those objects are \Phi.

We will continue to use the same notation as before, but the reader should keep this difference in mind.

Linnebo begins the development of his novel version of abstractionism by pointing out that Basic Law V can be recast as a pair of principles. The first:

(\forall X)(\exists y)(y = \S(X))

says that every plurality of objects has an extension, and the second:

{(\forall X)(\forall Y)(\forall z)(\forall w)[(z = \S(X) \land w = \S(Y)) \rightarrow (z = w \leftrightarrow (\forall v)(X(v) \leftrightarrow Y(v)))]}

says that given any two pluralities and their extensions, the latter are identical just in case the former are co-extensive.

Linnebo then reformulates these principles, replacing identities of the form x = \S(Y) with a relational claim of the form {\sf Set}(Y, x) (this is mostly for technical reasons, involving the desire to avoid the need to mobilize free logic within the framework). {\sf Set}(X, y) should be read as:

y is the set of Xs.

We then obtain what he calls the principle of Collapse:

{\sf Coll}: (\forall X)(\exists y)({\sf Set}(X, y))

and the principle of Extensionality:

{{\sf Ext}: (\forall X)(\forall Y)(\forall z)(\forall w)[({\sf Set}(X, z) \land {\sf Set}(Y, w)) \rightarrow (z = w \leftrightarrow (\forall v)(X(v) \leftrightarrow Y(v)))]}

which says that given any two pluralities and the corresponding sets, the latter are identical just in case the former are co-extensive. Now, these principles are jointly just as inconsistent as the original formulation of Basic Law V was. But Linnebo suggests a new way of conceiving of the process of abstraction: We understand the universal quantifiers in these principles to range over a given class of entities, and the existential quantifiers then give us new entities that are abstracted off of this prior ontology. As a result, one gets a dynamic picture of abstraction: instead of an abstraction principle describing the abstracts that arise as a result of consideration of all objects—including all abstracts—in a static, unchanging universal domain, we instead conceive of our ontology in terms of an ever-expanding series of domains, obtained via application of the extension-forming abstraction operation on each domain to obtain a new, more encompassing domain.

Linnebo suggests that we can formalize these ideas precisely by adopting a somewhat non-standard application of the modal operators \Box and \Diamond. Loosely put, we read \Box \Phi as saying “on any domain, \Phi” and \Diamond \Phi as saying “the domain can be expanded such that \Phi”. Using these operators, we can formulate new, dynamic versions of Collapse and Extensionality. The modalized version of Collapse

{\sf Coll^\Diamond}: \Box(\forall X)\Diamond(\exists y)({\sf Set}(X, y))

says that, given any domain and any plurality of objects from that domain, there is a (possibly expanded) domain where the set containing the members of that plurality exists, and the modalized version of Extensionality:

{{\sf Ext^\Diamond}: (\forall X)(\forall Y)(\forall z)(\forall w)[({\sf Set}(X, z) \land {\sf Set}(Y, w)) \rightarrow (z = w \leftrightarrow \Box(\forall v)(X(v) \leftrightarrow Y(v)))]}

says that, given any pluralities and the sets corresponding to them, the latter are identical if and only if the former are necessarily coextensive (note that a plurality, unlike a concept, has the same instances in every world). This version of Basic Law V, which entails many of the standard set-theoretic axioms, is consistent. In fact, it is consistent with a very strong, modal version of comprehension for pluralities (Linnebo 2018, 68). Thus, the dynamic abstraction approach, unlike the neo-logicism of Wright and Hale, allows for a particularly elegant abstractionist reconstruction of set theory.
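The dynamic picture can be simulated in miniature. The sketch below is our own, and only a finite analogue (the real account iterates into the transfinite): starting from an empty domain, each round keeps the old objects and adds a set for every plurality of objects already available, so each stage is strictly larger than the last.

```python
from itertools import chain, combinations

def pluralities(domain):
    """The extensions of all pluralities of objects from a finite
    domain (the empty plurality included, for simplicity)."""
    s = list(domain)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def expand(domain):
    """One round of dynamic abstraction: keep the old objects and add
    a set for each plurality of them, as the modalized Collapse
    principle demands."""
    return frozenset(domain) | frozenset(pluralities(domain))

# Starting from nothing, the stages grow like the finite levels of
# the cumulative hierarchy of sets -- and no stage is ever the last.
stage = frozenset()
sizes = []
for _ in range(4):
    stage = expand(stage)
    sizes.append(len(stage))
print(sizes)  # [1, 2, 4, 16]
```

The inconsistency of the static principles never arises, because no single stage is ever required to contain a set for each of its own pluralities; those sets appear only at the next stage.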

Of course, if the dynamic version of Basic Law V is consistent on this approach, then the dynamic version of any abstraction principle is. As a result, given any neo-logicist abstraction principle:

{\sf A}_E : (\forall X)(\forall Y)[@_E(X) = @_E(Y) \leftrightarrow E(X, Y)]

there will be a corresponding pair of dynamic principles:

{\sf Coll}^\Diamond_E : \Box(\forall X)\Diamond(\exists y)({\sf Abs}_E(X, y))

and:

{{\sf Ext}^\Diamond_E: (\forall X)(\forall Y)(\forall z)(\forall w)[({\sf Abs}_E(X, z) \land {\sf Abs}_E(Y, w)) \rightarrow (z = w \leftrightarrow \Box E(X, Y))]}

where {\sf Abs}_E(X, y) says something like:

y is the E-abstract of the Xs.

And {\sf Coll}^\Diamond_E and {\sf Ext}^\Diamond_E, unlike {\sf A}_E, are guaranteed to be (jointly) consistent.

Thus, although Linnebo must still grapple with the Caesar Problem and many of the other issues that plague neo-logicism—and the reader is encouraged to consult the relevant chapters of (Linnebo 2018) to see what he says in this regard—his dynamic abstraction account does not suffer from the Bad Company Problem: all forms of abstraction, once they are re-construed dynamically, are in Good Company.

6. References and Further Reading

  • Aristotle (1975), Posterior Analytics, J. Barnes (trans.), Oxford: Oxford University Press.
  • Bueno, O. & Ø. Linnebo (eds.) (2009), New Waves in Philosophy of Mathematics, Basingstoke UK: Palgrave.
  • Boolos, G. (1987), “The Consistency of Frege’s Foundations of Arithmetic”, in (Thompson 1987): 211—233.
  • Boolos, G. (1989), “Iteration Again”, Philosophical Topics 17(2): 5—21.
  • Boolos, G. (1990a), “The Standard of Equality of Numbers”, in (Boolos 1990b): 3—20.
  • Boolos, G. (ed.) (1990b), Meaning and Method: Essays in Honor of Hilary Putnam, Cambridge: Cambridge University Press.
  • Boolos, G. (1997), “Is Hume’s Principle Analytic?”, in (Heck 1997): 245—261.
  • Boolos, G. (1998), Logic, Logic, and Logic, Cambridge MA: Harvard University Press.
  • Boolos, G. & R. Heck (1998), “Die Grundlagen der Arithmetik §82—83”, in (Boolos 1998): 315—338.
  • Cook, R. (ed.) (2007), The Arché Papers on the Mathematics of Abstraction, Dordrecht: Springer.
  • Cook, R. (2009), “New Waves on an Old Beach: Fregean Philosophy of Mathematics Today”, in (Bueno & Linnebo 2009): 13—34.
  • Cook, R. (2013), “How to Read Frege’s Grundgesetze”, Appendix to (Frege 1893/1903/2013): A1—A41.
  • Cook, R. (2016), “Necessity, Necessitism, and Numbers”, Philosophical Forum 47: 385—414.
  • Cook, R. (2019), “Frege’s Little Theorem and Frege’s Way Out”, in (Ebert & Rossberg 2019): 384—410.
  • Cook, R. & P. Ebert (2005), “Abstraction and Identity”, Dialectica 59(2): 121—139.
  • Cook, R. & P. Ebert (2016), “Frege’s Recipe”, The Journal of Philosophy 113(7): 309—345.
  • Cook, R. & Ø. Linnebo (2018), “Cardinality and Acceptable Abstraction”, Notre Dame Journal of Formal Logic 59(1): 61—74.
  • Dummett, M. (1956), “Nominalism”, Philosophical Review 65(4): 491—505.
  • Dummett, M. (1991), Frege: Philosophy of Mathematics. Cambridge MA: Harvard University Press.
  • Ebert, P. & M. Rossberg (eds.) (2016), Abstractionism: Essays in Philosophy of Mathematics, Oxford: Oxford University Press.
  • Ebert, P. & M. Rossberg (eds.) (2019), Essays on Frege’s Basic Laws of Arithmetic, Oxford: Oxford University Press.
  • Euclid (2012), The Elements, T. Heath (trans.), Mineola, New York: Dover.
  • Ferreira, F. & K. Wehmeier (2002), “On the Consistency of the \Delta^1_1CA Fragment of Frege’s Grundgesetze”, Journal of Philosophical Logic 31: 301—311.
  • Field, H (2016), Science Without Numbers, Oxford: Oxford University Press.
  • Fine, K. (2002), The Limits of Abstraction, Oxford: Oxford University Press.
  • Frege, G. (1879/1972), Conceptual Notation and Related Articles, T. Bynum (trans.), Oxford: Oxford University Press.
  • Frege, G. (1884/1980), Die Grundlagen der Arithmetik (The Foundations of Arithmetic), 2nd Ed., J. Austin (trans.), Evanston: Northwestern University Press.
  • Frege, G. (1893/1903/2013), Grundgesetze der Arithmetik Band I & II (The Basic Laws of Arithmetic Vols. I & II), P. Ebert & M. Rossberg (trans.), Oxford: Oxford University Press.
  • Hale, B. (2000), “Reals by Abstraction”, Philosophia Mathematica 8(3): 100—123.
  • Hale, B. & C. Wright (2001a), The Reason’s Proper Study, Oxford: Oxford University Press.
  • Hale, B. & C. Wright (2001b), “To Bury Caesar. . . ”, in (Hale & Wright 2001a): 335—396.
  • Hallett, M. (1986), Cantorian Set Theory and Limitation of Size, Oxford: Oxford University Press.
  • Heck, R. (1992), “On the Consistency of Second-Order Contextual Definitions”, Nous 26: 491—494.
  • Heck, R. (1993), “The Development of Arithmetic in Frege’s Grundgesetze der Arithmetik”, Journal of Symbolic Logic 58: 579—601.
  • Heck, R. (1996), “The Consistency of Predicative Fragments of Frege’s Grundgesetze der Arithmetik”, History and Philosophy of Logic 17: 209—220.
  • Heck, R. (ed.) (1997), Language, Thought, and Logic: Essays in Honour of Michael Dummett, Oxford: Oxford University Press.
  • Heck, R. (2013), Reading Frege’s Grundgesetze, Oxford: Oxford University Press.
  • Hodes, H. (1984), “Logicism and the Ontological Commitments of Arithmetic”, The Journal of Philosophy 81: 123—149.
  • Hume, D. (1888), A Treatise of Human Nature, Oxford: Clarendon Press.
  • Kant, I. (1787/1999), Critique of Pure Reason, P. Guyer & A. Wood (trans.), Cambridge: Cambridge University Press.
  • Kunen, K. (1980), Set Theory: An Introduction to Independence Proofs, Amsterdam: North Holland.
  • Linnebo, Ø. (2018), Thin Objects: An Abstractionist Account, Oxford: Oxford University Press.
  • Mancosu, P. (2016), Abstraction and Infinity, Oxford: Oxford University Press.
  • Russell, B. & A. Whitehead (1910/1912/1913), Principia Mathematica, Volumes 1—3, Cambridge: Cambridge University Press.
  • Shapiro, S. (1991), Foundations without Foundationalism: The Case for Second-Order Logic, Oxford: Oxford University Press.
  • Shapiro, S. (2000), “Frege Meets Dedekind: A Neo-Logicist Treatment of Real Analysis”, Notre Dame Journal of Formal Logic 41(4): 335—364.
  • Shapiro, S. & A. Weir (1999), “New V, ZF, and Abstraction”, Philosophia Mathematica 7(3): 293—321.
  • Tennant, N. (1987), Anti-Realism and Logic: Truth as Eternal, Oxford: Oxford University Press.
  • Thompson, J. (ed.) (1987), On Being and Saying: Essays in Honor of Richard Cartwright, Cambridge MA: MIT Press.
  • Wehmeier, K. (1999), “Consistent Fragments of Grundgesetze and the Existence of Non-Logical Objects”, Synthese 121: 309—328.
  • Weir, A. (2003), “Neo-Fregeanism: An Embarrassment of Riches?”, Notre Dame Journal of Formal Logic 44(1): 13—48.
  • Wright, C. (1983), Frege’s Conception of Numbers as Objects, Aberdeen: Aberdeen University Press.
  • Wright, C. (1997), “On the Philosophical Significance of Frege’s Theorem”, in (Heck 1997): 201—244.
  • Wright, C. (2000), “Neo-Fregean Foundations for Real Analysis: Some Reflections on Frege’s Constraint”, Notre Dame Journal of Formal Logic 41(4): 317—344.

 

Author Information

Roy T. Cook
Email: cookx432@umn.edu
University of Minnesota
U. S. A.