What Else Science Requires of Time

This article is one of the two supplements of the main Time article.

Table of Contents

  1. What are Theories of Physics?
  2. Relativity Theory
  3. Quantum Theory
    1. The Standard Model
  4. Big Bang
    1. Cosmic Inflation
    2. Eternal Inflation and the Multiverse
  5. Infinite Time

1. What are Theories of Physics?

Scientists propose a theory both because they believe it might be true, or at least helpful in understanding or predicting phenomena, and because they believe the theory is worth further investigation. They also desire to unify their theories.

Physicists usually think of physical theories as special groups of laws of nature, and they think of a law as a specification of patterns over time of possible events.  The laws as a whole describe all the physically allowed situations and processes.

In the philosophy of science literature there are two central treatments of the term law. One says all laws are human products and are not approximations to any underlying laws by which the world works. The laws are mere regularities in how things do behave; this is the Humean Theory or Regularity Theory. The second treatment says there are these underlying laws that fix how nature necessarily must behave, and the laws written in the science books are designed to specify those underlying laws or to approximate them or to simplify them. This second philosophical theory of laws is called Non-Humean or Necessitarian. It is more popular than the former theory, and this article uses its terminology.

Due to the influence of Isaac Newton, physicists normally have assumed that the laws of physics are time-translation invariant. This invariance implies that the laws of physics holding now are the same laws that held in the past and will hold in the future. That assumption has been challenged by a few persons in the 21st century, but it is still retained by the vast majority of physicists.

And physicists have normally assumed that the laws have no exceptions. So, to give a frivolous example, if E = mc² has been supposed to be a law of nature, but then it is discovered that under condition C we have E = mc³, then physicists would say that the old law was not really a law of nature after all and that the exceptionless law of nature is: E = mc² if not-C, and E = mc³ if C.

Physicists commonly call their theories “models.” They think of theories as consistent sets of laws of nature, perhaps a list of equations; but scientists and philosophers of science require the achievement of more goals than just the acquisition of theories. The philosopher David Wallace says that what else is required is “some account of what there is in the world according to the theory, how [the world] behaves, what causes what, and what the explanations are.” (Wallace 2021, p. 18).

All physicists and almost all philosophers believe there can be both brute events and brute laws, so Leibniz’s Principle of Sufficient Reason is not strictly correct. And nearly all scientists and philosophers of science recognize both the underdetermination of scientific theories by the available evidence and also the dependence of explanation not only on theories but also on auxiliary hypotheses or background assumptions. They also recognize that science is inherently provisional, so scientists need to be on the lookout for potentially required revisions and refinements to their theories.

Some physical theories are fundamental, and some are not. Fundamental theories are foundational in the sense that their laws cannot be derived from the laws of other physical theories even in principle. For example, the second law of thermodynamics is not fundamental, nor are the laws of plate tectonics in geophysics. The following three theories are fundamental: (i) the general theory of relativity, (ii) quantum mechanics, including the standard model of particle physics, and (iii) the big bang theory (the standard cosmological model). Their amalgamation is what Nobel Prize winner Frank Wilczek called the Core Theory, the theory of everything physical. Wilczek claimed:

[T]he Core has such a proven record of success over an enormous range of applications that I can’t imagine people will ever want to junk it. I’ll go further: I think the Core provides a complete foundation for biology, chemistry, and stellar astrophysics that will never require modification. (Well, “never” is a long time. Let’s say for a few billion years.)

The Core gives us all the fundamental laws we will ever need in order to explain ordinary phenomena. All other physical laws supervene on the laws of the Core, even though as a practical matter one rarely would or should try to explain the laws of these other fields by reducing them to true statements within the Core. The Core theory does not contain the concepts of potato, planet, or person; these are emergent concepts needed in good explanations.

The Core has been tested in many extreme circumstances and with great sensitivity, so physicists have high confidence in it. Physicists are motivated to find where it fails because such a discovery can lead to great praise from the rest of the physics community. Wilczek says the Core will never need modification for understanding biology, chemistry, stellar astrophysics, computer science and engineering, but he would agree that the Core will need revision for more esoteric phenomena such as neutrinos changing their identity over time, or the imbalance between matter and antimatter, or the incompatibility of general relativity and quantum mechanics.

The Core theory rests upon an assumption, the well-accepted Laplacian Paradigm that the correct way to do physics is to find laws which describe how a state of a system at one time turns into a different state at another time. David Deutsch, Chiara Marletto, and their collaborators (Deutsch 2013) have challenged that paradigm and proposed their alternative, Constructor Theory, which requires time to be an emergent characteristic of nature.

2. Relativity Theory

Time is fundamental in current relativity theory. When the term relativity theory is used, it usually means the general theory of relativity of 1915, but sometimes it means the special theory of relativity of 1905; hopefully the context will disambiguate. The general theory of relativity was co-discovered independently by the physicist Albert Einstein and the mathematician David Hilbert. Because Einstein published slightly sooner, it is called Einstein’s general theory of relativity. Both theories have been well-tested; and they continue to be tested. They are almost universally accepted, and today’s physicists understand them better than Einstein did.

The relationship between the special and general theories is slightly complicated. Both theories are about the motion of objects, and both approach agreement with Newton’s theory as the speeds, gravitational forces, and energies of those objects decrease. Special relativity implies the laws of physics are the same for all inertial observers, that is, observers who are moving at a constant velocity relative to each other. General relativity implies the laws are the same even for observers accelerating relative to each other, such as observers changing their velocity due to the influence of gravitation. General relativity holds in all reference frames, but special relativity holds only for inertial reference frames, namely non-accelerating frames.

Regarding the difference between special and general relativity, special relativity allows objects to have mass but not gravity. It always requires a flat geometry—that is, a Euclidean geometry for space and a Minkowski geometry for spacetime. General relativity does not have those restrictions. General relativity is a specific theory of gravity, assuming the theory is supplemented by a specification of the distribution of matter-energy at some time. Special relativity is not a specific theory but rather a general framework for theories, and it is not a specific version of general relativity. Nor is general relativity a generalization of special relativity. The main difference between the two is that, in general relativity, spacetime does not simply exist passively as a background arena for events. Instead, spacetime is dynamical in the sense that changes in the distribution of matter and energy are changes in the curvature of spacetime (though not necessarily vice versa).

In the Core theories, the word time is a theoretical term, and the dimension of time is treated somewhat like a dimension of space. Space is a set of all possible point-locations. Time is a set of all possible point-times. Spacetime is a set of all possible point-events. Spacetime is presumed to be four-dimensional and also smooth, with time being a distinguished, one-dimensional sub-space of spacetime. Because the time dimension is so different from a space dimension, physicists very often speak of (3+1)-dimensional spacetime rather than 4-dimensional spacetime. Technically, any spacetime, no matter how many dimensions it has, is required to be a differentiable manifold with a metric tensor field defined on it that tells what geometry it has at each point.
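To make the role of the metric concrete, here is the simplest case, offered only as an illustration: in the flat spacetime of special relativity, the metric is the Minkowski metric, whose line element in standard coordinates is

ds² = -c²dt² + dx² + dy² + dz².

The minus sign attached to the time coordinate is what marks time off as the distinguished dimension; in general relativity this fixed metric is replaced by one whose values vary from point to point.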

The special and general theories of relativity imply that to place a reference frame upon spacetime is to make a choice about which part of spacetime is the space part and which is the time part. No choice is objectively correct, although some choices are more convenient for some purposes. This relativity of time, namely the dependency of time upon a choice of reference frame, is one of the most significant implications of both the special and general theories of relativity.

Since the discovery of relativity theory, scientists have come to believe that any objective description of the world can be made only with statements that are invariant under changes in the reference frame. Saying, “It is 8:00” does not have a truth value unless a specific reference frame is implied, such as one fixed to Earth with time being the time that is measured by our civilization’s standard clock. This relativity of time to reference frames is behind the remark that Einstein’s theories of relativity imply time itself is not objectively real but spacetime is real.

In regard to the idea of relativity to frame, Newton would say that if you are seated in a vehicle moving along a road, then your speed relative to the vehicle is zero, but your speed relative to the road is not zero. Einstein would agree. However, he would surprise Newton by saying the length of your vehicle is slightly different in the two reference frames, the one in which the vehicle is stationary and the one in which the road is stationary. Equally surprising to Newton, the duration of the event of your drinking a cup of coffee while in the vehicle is slightly different in those two reference frames. These relativistic effects are called space contraction and time dilation, respectively. So, both length and duration are frame dependent and, for that reason, say physicists, they are not objectively real characteristics of objects. Speeds also are relative to reference frame, with one exception. The speed of light in a vacuum has the same value c in all frames.
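As a rough numerical sketch of these two effects (not part of the article’s text), the following Python snippet computes the Lorentz factor and applies it to a vehicle; the speed, length, and duration used are arbitrary illustrative values.

import math

def lorentz_factor(v, c=299_792_458.0):
    """Return gamma = 1/sqrt(1 - v^2/c^2) for a speed v in meters per second."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 30.0                     # an ordinary highway speed in m/s (illustrative)
gamma = lorentz_factor(v)

proper_length = 4.5          # vehicle length in its own rest frame, meters (illustrative)
proper_duration = 120.0      # coffee-drinking time in the vehicle's frame, seconds (illustrative)

length_in_road_frame = proper_length / gamma      # space (length) contraction
duration_in_road_frame = proper_duration * gamma  # time dilation

print(gamma, length_in_road_frame, duration_in_road_frame)

At everyday speeds, gamma differs from 1 only in roughly the fifteenth decimal place, which is why the differences are so slight that they go unnoticed in the manifest image.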

Relativity theory allows great latitude in selecting the classes of simultaneous events. Because there is no single objectively correct frame to use for specifying which events are present and which are past, but only more or less convenient ones, one philosophical implication of the relativity of time is that it seems to be more difficult to defend McTaggart’s A-theory, which implies that temporal properties of events such as “is happening now” or “happened in the past” are intrinsic to the events and are objective, frame-free properties of those events.

Relativity theory challenges other ingredients of the manifest image of time. For two events A and B occurring at the same place but at different times, relativity theory implies their temporal order is absolute in the sense of being independent of the frame of reference, and this agrees with common sense and thus with the manifest image of time. But if the two events are distant from each other and occur close enough in time to be within each other’s absolute elsewhere, then event A can occur before event B in one reference frame, after B in another frame, and simultaneously with B in yet another frame. No person before Einstein ever imagined time has such a strange feature.

The special and general theories of relativity provide accurate descriptions of the world when their assumptions are satisfied. Both have been carefully tested. The special theory does not mention gravity, and it assumes there is no curvature to spacetime, but the general theory requires curvature in the presence of mass and energy, and it requires the curvature to change as their distribution changes. The presence of gravity in the general theory has enabled it to explain phenomena that cannot be explained by combining special relativity with Newton’s theory of gravity and Maxwell’s theory of electromagnetism.

Because of the relationship between spacetime and gravity, the equations of general relativity are much more complicated than are those of special relativity. But general relativity assumes the equations of special relativity hold at least in all infinitesimal regions of spacetime.

To give one example of the complexity just mentioned, the special theory clearly implies there is no time travel to events in one’s own past.  Experts do not agree on whether the general theory has this same implication because the equations involving the phenomena are too complex to solve directly. Approximate solutions have to be used, yet still there is disagreement about time travel.

Regarding curvature of time and of space, the presence of mass at a point implies intrinsic spacetime curvature at that point, but not all spacetime curvature implies the presence of mass. Empty spacetime can still have curvature, according to relativity theory. This point has been interpreted by many philosophers as a good reason to reject Leibniz’s classical relationism. The point was first mentioned by Arthur Eddington.

Two accurate, synchronized clocks do not stay synchronized if they undergo different gravitational forces. This is a second kind of time dilation, in addition to dilation due to speed. So, a correct clock’s time depends on the clock’s history of both speed and gravitational influence. Gravitational time dilation would be especially apparent if a clock were to approach a black hole. The rate of ticking of a clock approaching the black hole slows radically upon approach to the horizon of the hole as judged by the rate of a clock that remains safely back on Earth. This slowing is sometimes misleadingly described as time slowing down. After a clock falls through the event horizon, it can no longer report its values to Earth, and when it reaches the center of the hole not only does it stop ticking, but it also reaches the end of time, the end of its proper time.
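The slowing near a black hole can be given a rough formula. For a clock hovering at a distance r from the center of a simple non-rotating (Schwarzschild) black hole, the clock’s proper time advances at the fraction sqrt(1 - r_s/r) of the rate of a faraway clock, where r_s is the Schwarzschild radius. The sketch below uses a 10-solar-mass black hole as an illustrative choice; none of these numbers come from the article.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0        # speed of light, m/s
M_sun = 1.989e30         # one solar mass, kg

def tick_rate_factor(r, M):
    """dtau/dt for a clock hovering at radius r outside a non-rotating mass M."""
    r_s = 2 * G * M / c**2           # Schwarzschild radius (the event horizon)
    return math.sqrt(1 - r_s / r)

M = 10 * M_sun
r_s = 2 * G * M / c**2
for r in (100 * r_s, 10 * r_s, 1.1 * r_s, 1.001 * r_s):
    print(round(r / r_s, 3), tick_rate_factor(r, M))   # the factor approaches 0 at the horizon

The factor goes to zero as the clock approaches the horizon, which is the precise sense in which its ticking slows radically as judged from Earth.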

The general theory of relativity has additional implications for time. In 1948-9, the logician Kurt Gödel discovered radical solutions to Einstein’s equations, solutions in which there are closed time-like curves in graphical representations of spacetime. The unusual curvature is due to the rotation of all the matter in Gödel’s possible universe. As one progresses forward in time along one of these curves, one arrives back at one’s starting point. Fortunately, there is no empirical evidence that our own universe has this rotation. Here is Einstein’s reaction to Gödel’s work on time travel:

Kurt Gödel’s essay constitutes, in my opinion, an important contribution to the general theory of relativity, especially to the analysis of the concept of time. The problem involved here disturbed me already at the time of the building of the general theory of relativity, without my having succeeded in clarifying it.

Time, as described by all the Core theories, is smooth and not quantized. What does that mean? In mathematical physics, the ordering of instants by the happens-before relation of temporal precedence is complete in the sense that there are no gaps in the sequence of instants. Any interval of time is smooth, so the points of time form a linear continuum. Unlike physical objects, physical time is believed to be infinitely divisible, that is, divisible in the sense of the actually infinite, not merely in Aristotle’s sense of potentially infinite. Regarding the density of instants, the ordered instants are so densely packed that between any two there is a third, so that no instant has a next instant. Regarding continuity, time’s being a linear continuum implies that there is a nondenumerable infinity of instants between any two non-simultaneous instants. The rational number line does not have so many points between any pair of different points; it is not continuous the way the real number line is, but rather contains many gaps. Real numbers such as pi, which is not a rational number, fill the gaps.
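The claim that no instant has a next instant can be illustrated with a small sketch (added here as an illustration, not an argument from the article): between any two distinct instants, represented as numbers, there is always a third, so the search for an immediate successor never terminates. Note that this illustrates only density; continuity, the absence of gaps, is the further claim discussed above.

from fractions import Fraction

def strictly_between(a, b):
    """Return an instant strictly between two distinct instants a and b."""
    return (a + b) / 2

t1, t2 = Fraction(0), Fraction(1)      # two illustrative instants
for _ in range(5):
    t2 = strictly_between(t1, t2)      # always lands strictly between t1 and the previous t2
    print(t1, "<", t2)                 # t2 keeps shrinking toward t1 but never reaches it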

The actual temporal structure of events can be embedded in the real numbers, at least locally, but how about the converse? That is, to what extent is it known that the real numbers can be adequately embedded into the structure of the instants, at least locally? This question is asking for the justification of saying time is not discrete or atomistic. The problem here is that the shortest duration ever measured is about 250 zeptoseconds. A zeptosecond is 10⁻²¹ second. For times shorter than about 10⁻⁴³ second, which is the physicists’ favored candidate for the duration of an atom of time, science has no experimental grounds for the claim that between any two events there is a third. Instead, the justification of saying the reals can be embedded into an interval of instants is that (i) the assumption of continuity is very useful because it allows the mathematical methods of calculus to be used in the physics of time; (ii) there are no known inconsistencies due to making this assumption; and (iii) there are no better theories available. The qualification earlier in this paragraph about “at least locally” is there in case there is time travel to the past so that the total duration of the time loop is finite. A circle is continuous, and one-dimensional, but it is finite, and it is like the real numbers only locally.

One can imagine two empirical tests that would reveal time’s discreteness if it were discrete—(1) being unable to measure a duration shorter than some experimental minimum despite repeated tries, yet expecting that a smaller duration should be detectable with current equipment if there really is a smaller duration, and (2) detecting a small breakdown of Lorentz invariance. But if any experimental result that purportedly shows discreteness is going to resist being treated as a mere anomaly, perhaps due to error in the measurement apparatus, then it should be backed up with a confirmed theory that implies the value for the duration of the atom of time. This situation is an instance of the kernel of truth in the physics joke that no observation is to be trusted until it is backed up by theory.

The speed of light in a vacuum has the same value in all reference frames. The value is customarily called “c.” It is commonly remarked that, according to relativity theory, nothing can go faster than light. The remark needs some clarification, else it is incorrect. Here are three ways to go faster than light. (1) First, the medium needs to be specified. The speed of light in certain crystals can be much less than c, say 40 miles per hour, and a horse outside the crystal could outrun the light beam. (2) Second, the limit c applies only locally to objects within space relative to other nearby objects within space, and it requires that no object pass another object locally at faster than c. However, globally the general theory of relativity places no restrictions on how fast space itself can expand. So, two distant galaxies can drift apart from each other at faster than the speed of light simply because the intervening space expands. (3) Imagine standing still outside on the flat ground and aiming your laser pointer forward toward an extremely distant galaxy. Now aim the pointer down at your feet. When that happens, the point of intersection of the pointer and the plane of the ground will move toward your feet faster than the speed c. The point of intersection is merely a geometrical object, not a physical object, so its speed is not restricted by relativity theory.
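Case (3) can be backed with a quick order-of-magnitude calculation. The sketch below uses the simpler “lighthouse” version of the same point: a spot swept across a surface at distance r by a pointer turning at angular rate ω moves at roughly ω·r. The sweep rate and distance are illustrative choices, not figures from the article.

import math

c = 299_792_458.0                       # speed of light, m/s
light_year = 9.4607e15                  # meters in one light-year

sweep_rate = math.radians(90)           # sweep the pointer through 90 degrees in one second
r = 1e6 * light_year                    # a target about a million light-years away

spot_speed = sweep_rate * r             # approximate speed of the geometrical spot
print(spot_speed / c)                   # about 5e13: the spot "moves" vastly faster than light

Nothing physical travels at that speed; the moving spot is a geometrical point, so relativity theory is untouched.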

A photon created at the big bang has the same age when it blasts into Earth as it had at its creation, because no proper time elapses for a photon. If you wish to stop aging, turn yourself into a group of photons.

3. Quantum Theory

In quantum theory, time is not quantized, and it is fundamental rather than emergent, just as it is in relativity theory. The broad term quantum mechanics, or equivalently, quantum theory, includes both quantum field theory, which is quantum mechanics applied to fields, and the standard model of particle physics. Quantum theory is mostly useful at the scale of atomic and subatomic phenomena, although strictly speaking it applies everywhere and at all scales, including to all of chemistry, biology, and psychology. Quantum theory describes the behavior of all the forces other than gravity. Like general relativity, quantum theory has never been shown to disagree with a careful observation, even though its implications do not agree with those of relativity theory at the Planck scale. Surprisingly, physicists still do not agree on the exact formulation of quantum theory. Its many competing interpretations are really competing versions of the theory. That is why there is no agreement on what the axioms of quantum theory are.

Quantum theory describes objects by using probabilities and waves, so quantum objects are unlike objects that are described by Newtonian and relativistic physics as having definite positions and velocities. Philosophers of physics do not agree on the nature of an object’s location according to quantum theory, nor on how an object persists.

Hilbert space is the space of all possible quantum states. A quantum state that is localized in position is not localized in momentum, and vice versa. This limitation is one version of the Heisenberg Uncertainty Principle. Restrictions follow regarding what can be simultaneously observed or measured.
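The trade-off can be made quantitative with the relation Δx·Δp ≥ ħ/2. The following sketch computes the minimum momentum spread for an electron confined to a region about the width of an atom; the confinement width is an illustrative choice, not a figure from the article.

hbar = 1.054_571_817e-34      # reduced Planck constant, J*s
m_e = 9.109e-31               # electron mass, kg

delta_x = 1e-10               # confine the electron to about an atom's width, in meters
delta_p_min = hbar / (2 * delta_x)    # smallest momentum spread the uncertainty principle allows
delta_v_min = delta_p_min / m_e       # the corresponding spread in velocity

print(delta_p_min)            # about 5e-25 kg*m/s
print(delta_v_min)            # about 6e5 m/s: localizing the electron forces a large velocity spread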

Despite quantum theory being the most successful theory in the history of physics, philosophers of physics do not agree on whether quantum theory is a theory about reality or instead merely a tool for making measurements. Nor do they agree on whether the quantum wave function is a representation of reality or instead a representation of our knowledge of reality. Also, physicists do not agree on whether we currently possess the fundamental laws of quantum mechanics, as Everett suggested, or instead only an incomplete version of the laws, as Einstein suggested.

One of the earliest versions of quantum theory is due to Niels Bohr in the 1920s. It is called the Copenhagen Interpretation. Bohr’s complementarity idea for interpreting quantum mechanics is that all particles have both wave and particle aspects. A full description of the particle requires specifying both its wave character and its particle character. Consider a proton. Examined as a particle, a proton has a definite width. Examined as a wave, the proton is a relatively stable “bump” in a proton field, and the bump has no definite width.

The Copenhagen Interpretation implies that when a single nucleus of radioactive uranium decays at a specific time, there is no determining cause for the decay; the best our quantum theory can say is that there was a certain probability of the decay occurring at that time and that there were certain probabilities for other possible experimental outcomes. The statistical veil of quantum theory cannot be penetrated. So, quantum mechanics is indeterministic. And assuming that reasons are causes, the Copenhagen Interpretation also is inconsistent with Leibniz’s Principle of Sufficient Reason.

However, there are also deterministic interpretations of quantum theory without that statistical veil. The many-worlds interpretation or Everettian interpretation of quantum theory is deterministic over the totality of worlds, though not within a single world such as the actual world. The theory does not violate Leibniz’s Principle  of Sufficient Reason, but it implies we cannot know which world is determined to happen next—for example, a world in which the nucleus of a specific uranium atom decayed or a world in which it did not decay. What we can know in advance is, at best, only the probability of those worlds. When measurements are made, the Everettian interpretation, like the Copenhagen interpretation, treats the actual universe indeterministically. Both interpretations agree that a closed physical system evolves over time deterministically.

Schrödinger’s wave function describes how states of a quantum system evolve over time. This quantum wave function at one time determines the wave function at all other times. So, if Laplace’s Demon knew the wave function, it could compute the function at all later and all earlier times, and predict the future and know the past. However, Heisenberg’s Uncertainty Principle implies that, if there exists more precise information about the time when an event occurs, then there exists only less precise information about the energy involved. Because of this lack of precision in principle, it follows that probability is ineliminable. But philosophical debate continues about whether the existence of this probability is an epistemic constraint or a sign of indeterminism.
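A minimal numerical sketch of this deterministic evolution, using an arbitrary two-level system and natural units chosen here only for illustration, is the following.

import numpy as np

hbar = 1.0                              # natural units, for illustration
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # a simple two-level Hamiltonian

def evolve(psi0, t):
    """Return psi(t) = exp(-i H t / hbar) applied to psi0, via diagonalization of H."""
    energies, states = np.linalg.eigh(H)
    coeffs = states.conj().T @ psi0     # expand psi0 in energy eigenstates
    return states @ (np.exp(-1j * energies * t / hbar) * coeffs)

psi0 = np.array([1.0, 0.0], dtype=complex)      # start entirely in the first state
for t in (0.0, np.pi / 4, np.pi / 2, -np.pi / 2):
    print(t, np.abs(evolve(psi0, t)) ** 2)      # probabilities at each time

Given the state at t = 0, the state at any later or earlier time (note the negative t) is fixed, which is the sense in which a Laplacean intelligence could compute both the future and the past of the wave function; the probabilities enter only when measurements are made.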

In addition to the previous remark about the relationship between time and energy, quantum theory implies another deep relationship between the two. Quantum theory does contain a law of conservation of energy, but the law frequently is explained as requiring that, in an isolated region of space, the total amount of energy cannot change no matter what happens in the region. The energy can only change its form. As a description of the quantum version of the law of conservation of energy, this explanation is not strictly correct. The law is violated for very short time intervals and is less likely to be violated as the time interval increases. The reason why is that, in a closed system, quantum theory allows so-called “virtual particles” to be created out of the vacuum. These particles borrow energy from the vacuum and pay it back very quickly. What happens is that, when a pair of energetic virtual particles—say, an electron and anti-electron—are created from the vacuum, the two exist for only a very short time before being annihilated or reabsorbed and thereby giving back their energy. The greater the energy of the virtual pair, the shorter the time interval that the two virtual particles exist before being reabsorbed. So, strictly speaking, quantum theory does allow energy to be created from nothing. Some theologians have been outraged by this conclusion, suggesting that only God has the power to create something from nothing. In opposition, the physicist Alan Guth was quoted as saying: “I have often heard it said that there is no such thing as a free lunch. It now appears that the universe itself is a free lunch.”
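The trade-off between borrowed energy and lifetime can be estimated with the time-energy uncertainty relation ΔE·Δt ≳ ħ/2. The sketch below applies it to an electron/anti-electron pair; it is an order-of-magnitude estimate only, and conventions differ on the exact numerical factor.

hbar = 1.054_571_817e-34     # reduced Planck constant, J*s
c = 299_792_458.0            # speed of light, m/s
m_e = 9.109e-31              # electron mass, kg

delta_E = 2 * m_e * c**2     # energy "borrowed" to create an electron/anti-electron pair
delta_t = hbar / (2 * delta_E)   # rough maximum lifetime of the virtual pair

print(delta_E)               # about 1.6e-13 joules
print(delta_t)               # a few times 1e-22 seconds: the more energy borrowed, the briefer the loan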

Virtual particles cause space-time to warp around them, and then to un-warp as the particles disappear very quickly. This coming in and out of existence creates all sorts of ultra-microscopic fluctuations known collectively as the “quantum foam” or “space-time foam.”  The existence of this foam is why “quantum mechanics makes things jittery and turbulent.”

The effect of all these particles wiggling into and out of being is a thrumming “vacuum energy” that fills the cosmos and pushes outward on space itself. This activity is the most likely explanation for dark energy—the reason the universe, rather than staying static or even expanding at a steady rate, is accelerating outward faster and faster every moment (Moskowitz 2021, p. 26).

Dark energy exists everywhere, but dark matter does not. Dark matter is a source of gravity, and it resides primarily within clusters of galaxies, unlike dark energy. Dark energy is a source of negative pressure that creates spatial expansion. It might have a constant value everywhere in any given amount of space, and that is why it is often called the Cosmological Constant. Observations in the first quarter of the 21st century indicate the total matter-energy of the universe is about 71% dark energy, 25% dark matter, and 4% normal matter (such as everything in the periodic table). It is suspected that ten percent of our Milky Way’s dark matter is composed of 100 million dark black holes that were formed early in our universe’s history. None of these black holes has yet been observationally detected.

Regarding the quantum foam, John Wheeler suggested that the ultramicroscopic structure of spacetime for periods on the order of the Planck time (about 5.4 x 10⁻⁴⁴ seconds) in regions about the size of the Planck length (about 1.6 x 10⁻³⁵ meters) probably is a quantum foam of rapidly changing curvature of spacetime, with black holes and virtual particle-pairs and perhaps wormholes rapidly forming and dissolving.

The Planck time is the time it takes light to travel a Planck length. The terms Planck length and Planck time were inventions of Max Planck in the early twentieth century during his quest to find basic units of length and time that could be expressed in terms only of universal constants. He defined the Planck unit of time algebraically as

√(ħG/c⁵).

√ is the square root symbol. ħ is Planck’s constant in quantum theory divided by 2π; G is the gravitational constant from Newtonian mechanics; c is the speed of light in a vacuum, and it comes from relativity theory. Three different theories of physics are tied together in this one expression. The Planck time is a theoretically interesting unit of time, but not a practical one. No known experimental procedure can detect events that are this brief.
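Plugging the constants into Planck’s formula is a short calculation; here is a sketch that reproduces the figures quoted elsewhere in this article.

import math

hbar = 1.054_571_817e-34     # reduced Planck constant, J*s (quantum theory)
G = 6.674_30e-11             # gravitational constant, m^3 kg^-1 s^-2 (Newtonian gravity)
c = 299_792_458.0            # speed of light in a vacuum, m/s (relativity)

planck_time = math.sqrt(hbar * G / c**5)
planck_length = c * planck_time          # light travels one Planck length per Planck time

print(planck_time)       # roughly 5.4e-44 seconds
print(planck_length)     # roughly 1.6e-35 meters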

There are no isolated particles according to quantum mechanics. Every particle is surrounded by many other particles, mostly virtual particles. So far, this article has spoken of virtual particles as if they are ordinary but short-lived particles. This is not quite correct. Virtual particles are not exactly particles like the other particles of the quantum fields. Both are excitations of these fields, and both have gravitational effects and thus effects on time, but virtual particles are not equivalent to ordinary quantum particles, although the longer-lived ones are more like ordinary particle excitations than the short-lived ones.

Virtual particles are just a way to calculate the behavior of quantum fields, by pretending that ordinary particles are changing into weird particles with impossible energies, and tossing such particles back and forth between themselves. A real photon has exactly zero mass, but the mass of a virtual photon can be absolutely anything. What we mean by “virtual particles” are subtle distortions in the wave function of a collection of quantum fields…but everyone calls them particles [in order to keep their names simple] (Carroll 2019, p. 316).

There is another unusual feature of quantum theory that involves time. It is entanglement. Ontologically, the key idea is that if a particle becomes entangled with one or more other particles, then it loses some of its individuality. Even though both the special and general theory of relativity place the speed limit c on how fast a causal influence can propagate through space, classical quantum mechanics does not have this limit. A quantum measurement of one member of an entangled pair of particles will instantaneously determine the value of any similar measurement to be made on the other member of the pair. This is philosophically significant because, in 1935, E. Schrödinger had said:

Measurements on (spatially) separated systems cannot directly influence each other—that would be magic.

Yet the magic seems to exist. With entangled pairs, there is instantaneous, coordinated behavior across great distances.  Here is an example. Let two electrons interact so that they become entangled, and their spins become entangled. What is exciting and special is that, although the two entangled electrons were created so that they will give, say, the same values when their spins are measured, it can be shown that they were not created with the same specific spin. It isn’t that both started out with spin up or both started out with spin down but only that they later will be found to have the same spin. To appreciate this point, separate them by a great distance from each other. Now perform a measurement of spin on one of the electrons. Suppose this electron’s spin is measured to be up. If a similar measurement were to be made on the very distant electron, its spin would be found to be up also. And this second measurement can be made before a particle moving at the speed of light has time to carry information to the very distant, second particle about what happened back at the first particle. The transmission of coordinated behavior happens in zero time.  It is hard for us who are influenced by the manifest image to believe that the two electrons did not start out with the spins that they were later found to have.
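A small numerical sketch can display both features the example turns on: the measured spins always agree, yet neither electron has a definite spin before measurement. The particular entangled state used below is an illustrative choice, not taken from the article.

import numpy as np

# The entangled state (|up,up> + |down,down>)/sqrt(2),
# written in the basis |up,up>, |up,down>, |down,up>, |down,down>.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

print(np.abs(psi) ** 2)      # [0.5, 0, 0, 0.5]: the two spin measurements always agree

# Yet each electron on its own is in a 50/50 mixture, with no definite spin of its own:
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_first = np.trace(rho, axis1=1, axis2=3)   # discard (trace out) the second electron
print(rho_first)             # 0.5 times the identity matrix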

This coordinated behavior cannot be used to send data faster than c, but it is probably the most shocking implication of quantum theory for our manifest image. There has been serious speculation that the two particles in this system must be connected somehow, say by a wormhole. The two definitely have lost some of their individuality that they had before they became entangled.

Some physicists and philosophers of physics, including David Albert, suggest that the explanation of non-local phenomena such as entanglement requires some notion of absolute simultaneity, and therefore a revision in the general theory of relativity.

All physicists believe that relativity and quantum theory are logically contrary for very high energy phenomena occurring in very small regions, such as near the big bang. So, the two theories need to be replaced with a theory customarily called quantum gravity that is “more fundamental.” It is usually not made clear what it is that makes a fundamental theory be fundamental, but the overall, vague idea is that a fundamental theory should not leave anything clearly in need of explanation. For more discussion of what is meant or should be meant by the terms fundamental theory, more fundamental theory, and final theory, see (Crowther 2019). Regardless of this fine philosophical point, a successful theory of quantum gravity may have radical implications for our understanding of time. Two prominent suggestions are that (1) time will be understood to be quantized, that is, to be discrete rather than smooth, and that (2) spacetime emerges from more fundamental entities. Another prominent suggestion is that time is fundamental and not emergent, but space emerges. Because there is no well-accepted theory of quantum gravity, so far the best game in town is that spacetime is not emergent, time and space both emerge from spacetime, and both are smooth and not quantized.

But there have been serious proposals from the physicist Leonard Susskind and his colleagues that the holographic principle holds and implies that space itself is only two-dimensional, not three-dimensional, or at least that space can have a completely accurate two-dimensional representation. The inspiration comes from applying quantum theory and relativity theory to the study of black holes. If one thinks of a black hole as a gigantic message, one would naively believe the maximum amount of information within its event horizon is proportional to the volume of the hole, but in fact it has been learned that it is proportional to the area of the horizon. The implication of this for our entire universe is that the maximum amount of information it contains is proportional to the two-dimensional area of a distant boundary enclosing the universe. There has been much speculation that there is a gigantic two-dimensional boundary, and our supposed three-dimensional reality is in fact only a holographic projection of this more fundamental two-dimensional reality. (See Susskind, The Black Hole War, 2008.)

a. The Standard Model

The standard model of particle physics was proposed in the 1970s, and it has subsequently been very well-tested. It is our civilization’s most precise and powerful theory of physics. For example, it explains why the periodic table has the structure it has, and it explains why glass is solid and transparent, but grapes are soft and not transparent.

The standard model of particle physics is really a loose collection of theories about different particle fields. It describes three of the four fundamental forces (the strong force, the weak force, and the electromagnetic force, but not the gravitational force), and it describes the known fundamental particles: six quarks, six leptons, the Higgs boson, and the force-carrying bosons of those three interactions, but not the hypothetical graviton.

The theory sets limits on what exists and what can happen. It implies, for example, that a photon cannot decay into two photons. It also implies that every proton consists in part of two up quarks and one down quark that interact with each other by exchanging gluons. The gluons bind the quarks together through the strong nuclear force. In addition to these three quarks, most of the mass of the proton consists of other virtual quarks, antiquarks, and gluons that exist over only very short time scales. Because of the time scale, they are called “virtual particles.”
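A small arithmetic check of this quark picture: the standard charge assignments are +2/3 for an up quark and -1/3 for a down quark, in units of the proton charge, so the three quarks account exactly for the proton’s total charge of +1. (The neutron line below is added only for comparison.)

from fractions import Fraction

up_charge = Fraction(2, 3)       # electric charge of an up quark, in units of e
down_charge = Fraction(-1, 3)    # electric charge of a down quark, in units of e

print(2 * up_charge + 1 * down_charge)   # 1: a proton is two ups and one down
print(1 * up_charge + 2 * down_charge)   # 0: a neutron is one up and two downs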

The properties of spacetime points that serve to distinguish any particle from any other are a spacetime point’s values for mass, spin, and charge at that point. There are no other differences among points, so in that sense fundamental physics is simple. Charge, though, is not simply electric charge. There are three kinds of color charge, and two weak charges.

Except for gravity, the standard model describes all the universe’s forces and interactions and particles and fields, and it contains the theories of quantum electrodynamics, quantum chromodynamics, and the electroweak theory. Strictly speaking, these theories are about interactions rather than forces. A force is just one kind of interaction. Some interactions do not involve forces but rather they change one kind of particle into another kind. The weak interaction, for example, can transform a neutron into a proton.

Almost every kind of event and process in the universe is produced by one or more of the four interactions. When any particle interacts, say with another particle, the two particles exchange other particles, the so-called carriers of the interactions. So, when milk is spilled onto the floor, what is going on at a fundamental level is that the particles of the milk and the particles in the floor and the particles in the surrounding air exchange a great many carrier particles with each other, and the exchange is what is called “spilling milk onto the floor.” Yet all these varied particles are just tiny fluctuations of fields. The scientific image here has moved very far away from the manifest image.

According to the standard model, but not according to relativity theory, all particles must move at the speed c unless they interact with other fields. All the particles in your body, such as its protons and electrons, would move at the speed c if they were not continually interacting with the Higgs field. The Higgs field can be thought of as being like a sea of molasses that slows down all protons and electrons and gives them the mass and inertia they have. Ultimately that is why, when you flip a switch to turn on an electric light, the free electrons constituting the wire’s current cannot move as fast as the massless photons of the electric signal. Neutrinos are not affected by the Higgs field, but they move at slightly less than c because they are slightly affected by the weak interaction.

There has been an intense search for a possible fifth force beyond the standard model. So far, the search has been unsuccessful. However, ultimately the standard model may need revision, if only because, among all the matter described by the standard model, there is no candidate for dark matter.

4. Big Bang

The classical big bang theory of cosmology implies that the universe once was in a very small, very dense, and very hot state. Then the entire universe suddenly exploded in a “big bang,” and it has been expanding and cooling ever since.

The explosion was an explosion of space, not merely an explosion of material within an already existing space. Judging primarily from today’s rate of expansion of the universe plus the assumption that gravity has been the main force affecting the change of the universe’s size, it is estimated the explosion began about 13.8 billion years ago. At that time, the universe would have had an ultramicroscopic volume. The explosive process created new space, and it is still creating new space. In fact, in the late 20th century the classical theory was revised to say the expansion rate is accelerating due to a mysterious dark energy.
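A crude first estimate of that age can be extracted from the current expansion rate alone, ignoring how the rate has changed over time; the result, often called the Hubble time, lands close to the quoted 13.8 billion years. The value of the Hubble constant used below is one recent estimate, chosen here for illustration.

H0 = 67.7                        # Hubble constant, km/s per megaparsec (illustrative value)
km_per_Mpc = 3.0857e19           # kilometers in one megaparsec
seconds_per_year = 3.156e7

H0_per_second = H0 / km_per_Mpc             # expansion rate in 1/seconds
hubble_time_years = 1 / H0_per_second / seconds_per_year

print(hubble_time_years / 1e9)   # roughly 14 billion years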

Here is a radial diagram of how the universe looks to an observer at the sun. Distances away from the sun are on a logarithmic scale back to the beginning of the big bang, which is represented as the outer circle. The diagram shows how much the universe has expanded since it was of ultramicroscopic size:
[Radial diagram of the observable universe on a logarithmic scale, centered on the sun.]

Attribution: Unmismoobjetivo, CC BY-SA 3.0, via Wikimedia Commons

It is assumed that a radial diagram centered on any other star or any place in the universe would be very much like the above diagram, except near its center. Looking out from the center of the diagram you see back in time—the farther out you look the farther back in time. Looking farther and farther out from our solar system is looking into times when the universe had lower and lower entropy.

The big bang theory in some form or other (with or without inflation) is accepted by nearly all astronomers, astrophysicists, and philosophers of physics, but it is not as firmly accepted as is the theory of relativity. The theory originated with several people, although Edwin Hubble’s observations were the most influential. In 1922, the Russian physicist Alexander Friedmann discovered that the theory of general relativity allows an expanding universe. Unfortunately, Einstein reacted by saying this is a mere physical possibility but surely not a feature of the actual universe. Then the Belgian physicist Georges Lemaître independently suggested in 1927 that there is some evidence the universe is expanding, and he defended his claim using previously published measurements showing a pattern: the greater the distance of a galaxy from Earth, the greater the galaxy’s speed away from Earth. He calculated these speeds from the Doppler shifts in their light frequency. In 1929, the American astronomer Edwin Hubble observed clusters of galaxies moving away from each other, with the more distant clusters moving away at greater speeds, and these observations were crucially influential in causing scientists to accept what is now called the big bang theory of the universe. Both Lemaître’s calculations and Hubble’s observations suggest that, if time were reversed, all the galaxies would meet in a very small volume. Also, on a large scale most galaxies are flying away from each other. In this sense, space is expanding, even though molecules, planets, and galaxies themselves are not expanding.

As clusters get farther apart, the electromagnetic radiation they emit arrives at Earth more and more red-shifted. The best explanation of this red-shift is that the universe is expanding.

The acceptance of the theory of relativity has established that space curves near all masses. However, the theory has no implications about curvature at the cosmic level. Regarding this curvature, the above radial picture of the universe can be misinterpreted by not distinguishing the universe from the observable universe. The diagram shows that the volume of the observable universe is spherical. The sphere with its contents is called “our Hubble Bubble.” However, the picture should not be interpreted as showing that the universe itself at the largest scale has spherical cosmic curvature. The big bang theory presupposes that the ultramicroscopic universe at the time of the big bang did have an extremely large curvature, but it is supposed that now the universe has very little or no curvature and so is nearly “flat.”

Here is another picture that displays the information differently, with time increasing to the right and space increasing both up and down and into and out of the picture:

[Diagram of the expansion history of the universe, with time increasing to the right.]

Attribution: NASA/WMAP Science Team


The term big bang does not have a precise definition. It does not always refer to a single, first event; rather, it more often refers to a brief duration of early events as the universe underwent a rapid expansion. Actually, the big bang theory itself is not a specific theory, but rather a framework for more specific big bang theories.

Astronomers on Earth detect microwave radiation arriving in all directions from the light produced about 380,000 years after the big bang. At that time, the universe suddenly turned transparent, allowing photons for the first time to move freely without being immediately reabsorbed by other particles. This primordial electromagnetic radiation has now reached Earth with its frequency changed; it has become microwave radiation because its wavelength was continually stretched (red-shifted) as the universe expanded. Measuring this incoming Cosmic Microwave Background (CMB) radiation reveals it to be extremely uniform in all directions in the sky. The energy or temperature of the radiation is much less than when it was created. It varies with angle, but only by a ten-thousandth of a degree of temperature. This almost uniform temperature implies the earliest times of the big bang had even greater uniformity, and it implies the entropy of the big bang was very low. The minuscule microwave temperature fluctuations in different directions are traces of ultramicroscopic fluctuations in the density of primordial material very early during the big bang. These early, small fluctuations probably are quantum fluctuations, and they may be the origin of what later became the first galaxies. Perhaps all the large-scale structure in today’s universe was triggered by quantum uncertainty.
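The stretching of the radiation can be made quantitative. The wavelength, and hence the temperature, scales with the expansion: T_then = T_now × (1 + z), where z is the redshift. The redshift figure of about 1100 used below is the standard approximate value for the light released 380,000 years after the big bang; it is supplied here for illustration and is not stated in the article.

T_now = 2.725      # current temperature of the CMB, kelvin
z = 1100           # approximate redshift of the light released when the universe turned transparent

T_then = T_now * (1 + z)
print(T_then)      # roughly 3000 K: visible and infrared light then, microwaves now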

The initial expansion rate was spectacularly large, but that burst ended quickly, and the universe then expanded at a much more sedate rate. Since then, the expansion rate has not been uniform because there is a second source of expansion, the repulsion of dark energy. The influence of dark energy was initially insignificant, but its key feature is that it does not dilute as the space it is within undergoes expansion. So, finally, after about seven billion years of space’s expanding, the dark energy became an influential factor and started to accelerate the expansion, and it is becoming more and more significant. This influence is shown in the above diagram as the curvature that occurs just below and before the word “etc.” Dark energy is also called vacuum energy and the cosmological constant. It is probably the energy of space itself and not of a particle within space.

The initial evidence for this dark energy came from observations in 1998 of Doppler shifts of supernovas. These observations are best explained by the assumption that distances between supernovas are increasing at an accelerating rate. Because of this rate increase, it is estimated that the volume of the universe will double every 10¹⁰ years. A galaxy cluster that is now a given distance from our Milky Way will, in another 13.8 billion years, be more than twice that distance away and will be moving away from us much faster. Eventually, it will be moving away from us so fast that it will become invisible. In enough time, all galaxies other than the Milky Way will become invisible. After that, all the stars in the Milky Way will become invisible. In that sense, astronomers are never going to see more than they can see now.

Regarding the universe’s expansion, atoms are not currently expanding. They are held together tightly by the electromagnetic force and strong force (with a little help from the weak force and gravity) which overpower the current value of the repulsive force of dark energy or whatever it is that is causing the expansion of space. What is expanding now is the average distances between clusters of galaxies. It is as if the clusters are exploding away from each other, and, in the future, they will be very much farther away from each other. According to the cosmologist Sean Carroll, currently, the “idea that the universe is overall expanding is only true on the largest scales. It’s an approximation that gets better and better as you consider galaxies that are farther and farther away.”

Eventually, though, as the rate of expansion of space escalates, all clusters of galaxies will become torn apart. Then galaxies themselves, then all solar systems, and ultimately even molecules and atoms and all other configurations of elementary particles will be torn apart.

Why does the big bang theory say space exploded instead of saying matter-energy exploded into a pre-existing space? This is a subtle issue. If it had said matter-energy exploded but space did not, then there would be uncomfortable questions: Where is the point in space that it exploded from, and why that point? Picking one would be arbitrary. And there would be these additional uncomfortable questions: How large is this pre-existing space? When was it created? Experimental observations clearly indicate that some clusters of galaxies must be separating from each other faster than the speed of light, but adding that they do this because they are moving that fast within a pre-existing space would require an ad hoc revision of the theory of relativity to make exceptions to Einstein’s speed limit. So, it is much more “comfortable” to say the big bang is an explosion of space or spacetime, not an explosion of matter-energy within spacetime.

The term “our observable universe” and the synonymous term “our Hubble bubble” refer to everything that a person on Earth could in principle observe. However, there are distant places in the universe where an astronomer could see things that Earth astronomers cannot observe, because that astronomer is closer to those things than Earth is. Physicists agree that, because of this reasoning, there exist objects that are in the universe but not in our observable universe. Because those unobservable objects are also the product of our big bang, cosmologists assume that the unobservable objects are similar to the objects we on Earth observe: they form atoms and galaxies, and time behaves there as it does here. But there is no guarantee that this convenient assumption is correct.

Because the big bang happened about 14 billion years ago, you might think that no visible object can be more than 14 billion light-years from Earth, but this would be a mistake. The increasing separation of clusters of galaxies over the last 14 billion years is why astronomers can see about 45 billion light-years in any direction and not just 14 billion light-years.

Some distant galaxies are moving so fast away from us that they are invisible. Their speed of recession is greater than c. Nevertheless, assuming general relativity is correct, in our universe nothing is passing, or ever has passed, or will pass anything at faster than c; so, in that sense, c is still our cosmic speed limit.
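Under Hubble’s law, the recession speed of a galaxy grows in proportion to its distance, v = H0 × d, so beyond a certain distance the recession speed exceeds c. Here is a sketch of that distance, using an illustrative recent value of the Hubble constant rather than a figure from the article.

c = 299_792_458.0        # m/s
H0 = 67.7                # Hubble constant, km/s per megaparsec (illustrative value)
km_per_Mpc = 3.0857e19   # kilometers in one megaparsec
light_year = 9.4607e15   # meters in one light-year

H0_per_second = H0 / km_per_Mpc
hubble_distance = c / H0_per_second            # distance at which recession speed equals c

print(hubble_distance / light_year / 1e9)      # roughly 14 billion light-years

Galaxies beyond roughly that distance are receding faster than c, yet, as the paragraph above says, nothing is locally passing anything else at faster than c.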

When contemporary physicists speak of the age of our universe and of the time since our big bang, they are implicitly referring to cosmic time measured in the cosmological rest frame. This is time measured in a unique reference frame in which the average motion of all the galaxies is stationary and the Cosmic Microwave Background radiation is as close as possible to being the same in all directions. This frame is not one in which the Earth is stationary. Cosmic time is time measured by a clock that would be sitting as still as possible while the universe expands around it. In cosmic time, t = 0 years is when the big bang occurred, and t = 13.8 billion years is our present. If you were at rest at the spatial origin in this frame, then the Cosmic Microwave Background radiation on a very large scale would have the same temperature in any direction. That radiation is shown at the ring in the radial diagram above, marking when the universe first became transparent to light. Back when the universe was much smaller than it is now, about 100 million light-years across, its matter was roughly uniformly distributed. At that scale, it is as if all the galaxies are dust particles floating in a large room: at the center of the room the distribution of dust in one direction is the same as in any other direction, and in any region of the room there is as much dust as in any other region. On a finer scale, the matter in the universe is unevenly distributed.

The cosmic rest frame is a unique, privileged reference frame for astronomical convenience, but there is no reason to suppose it is otherwise privileged. It is not the frame sought by the A-theorist who believes in a unique present, nor by Isaac Newton who believed in absolute rest, nor by Maxwell who believed in his nineteenth century aether.

The cosmic rest frame and its cosmic time are described as follows:

In fact, it isn’t quite true that the cosmic background heat radiation is completely uniform across the sky. It is very slightly hotter (i.e., more intense) in the direction of the constellation of Leo than at right angles to it…. Although the view from Earth is of a slightly skewed cosmic heat bath, there must exist a motion, a frame of reference, which would make the bath appear exactly the same in every direction. It would in fact seem perfectly uniform from an imaginary spacecraft traveling at 350 km per second in a direction away from Leo (towards Pisces, as it happens)…. We can use this special clock to define a cosmic time…. Fortunately, the Earth is moving at only 350 km per second relative to this hypothetical special clock. This is about 0.1 percent of the speed of light, and the time-dilation factor is only about one part in a million. Thus to an excellent approximation, Earth’s historical time coincides with cosmic time, so we can recount the history of the universe contemporaneously with the history of the Earth, in spite of the relativity of time.

Similar hypothetical clocks could be located everywhere in the universe, in each case in a reference frame where the cosmic background heat radiation looks uniform. Notice I say “hypothetical”; we can imagine the clocks out there, and legions of sentient beings dutifully inspecting them. This set of imaginary observers will agree on a common time scale and a common set of dates for major events in the universe, even though they are moving relative to each other as a result of the general expansion of the universe…. So, cosmic time as measured by this special set of observers constitutes a type of universal time… (Davies 1995, pp. 128-9).

It is a convention that cosmologists agree to use the cosmic time of this special reference frame, but it is an interesting fact and not a convention that our universe is so organized that there is such a useful cosmic time available to be adopted by the cosmologists. Not all physically possible spacetimes obeying the laws of general relativity can have such a cosmic time.
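The figures in the Davies passage can be checked with a two-line calculation: for the Earth’s 350 km per second motion relative to the cosmic rest frame, the time-dilation factor is indeed about one part in a million.

import math

c = 299_792_458.0        # m/s
v = 350_000.0            # Earth's speed relative to the cosmic rest frame, m/s (the figure quoted above)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
print(v / c)             # about 0.0012, i.e. roughly 0.1 percent of the speed of light
print(gamma - 1)         # about 7e-7, i.e. roughly one part in a million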

a. Cosmic Inflation

According to one of the more popular big bang theories, the cosmic inflation theory, the universe underwent an inflationary expansion soon after t = 0. It was a sudden expansion with an exponentially increasing rate for a very short time. Nobody knows whether it expanded uniformly in all directions. It began for some unknown reason, and, again for some unknown reason, stopped inflating very soon. This inflation theory cannot be inferred from the Core Theory.

The theory of inflation implies the universe has no overall curvature today, 13.8 billion years after the big bang. Assuming there is no curvature, our measurements of the energy present in the universe indicate that some energy is missing, energy that would be needed to ensure this lack of curvature. The energy that is missing is called dark energy. It is also called the cosmological constant because it appears to have the same value everywhere. It is the energy of otherwise empty space. After the universe’s initial inflation stopped, the universe’s expansion continued, but its rate of expansion slowed down until about seven billion years ago, when its expansion rate began speeding up due to the influence of the dark energy. The expansion rate will continue to increase.

The theory of inflation is a version of the big bang theory that offers the most popular solution to the big bang’s horizon problem. This is the problem of accounting for the fact that, looking in any direction, we see light reaching us with almost exactly the same temperature. All these regions, regardless of direction, are very nearly in thermal equilibrium. This is a remarkable feature because distant regions at different angles were not in any causal contact with each other when their light was generated 380,000 years after the big bang. The nearly identical temperature is taken as a sign that these regions had been in causal contact very early in the history of the big bang, when the universe had a very nearly uniform temperature and a very low entropy. It was very homogeneous. The low entropy is needed to account for the subsequent entropy increase, and thus for the second law of thermodynamics.

If the cosmic inflation did occur as some wiggle in a primordial inflaton field, then it is likely that primordial gravitational waves were created. They would by now have been stretched to extremely long wavelengths. These waves might be detected by a future gravitational wave detector.

The big bang theory is considered to be confirmed within the field of astronomy, but the theory of inflation is still unconfirmed. One popular alternative theory that does not require an initial inflation is the ekpyrotic theory. It uses branes, which are three-dimensional membranes within a higher-dimensional space, and collisions between branes can create big bangs.

Here is part of the argument in favor of an initial inflation. The cosmic microwave background (CMB) radiation reaching Earth from all directions has on average the same cold temperature everywhere, namely about 2.7 degrees Kelvin or about negative 455 degrees Fahrenheit, but with small temperature differences in different directions on the order of a hundred-thousandth of a degree. Room temperature, by comparison, is about 300 degrees Kelvin or 80 degrees Fahrenheit. The classical big bang theory can account for the number 2.7, but not for the temperature being uniform in all directions at the largest scale, nor for the very slight deviations from uniformity on the order of a hundred-thousandth of a degree. The inflationary version of the big bang theory can account for these cosmological features, and in the early 21st century the majority of cosmologists say there are no better accounts. Nevertheless, the claim that there are no better accounts is challenged in (Ijjas et al. 2017).
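
As a quick arithmetic check of these temperature figures (a routine unit conversion, not a claim taken from the cited sources):

\[
T_{^\circ\mathrm{F}} \;=\; \left(T_{\mathrm{K}} - 273.15\right)\times\frac{9}{5} + 32, \qquad
2.7\ \mathrm{K} \;\mapsto\; -454.8\ ^{\circ}\mathrm{F}, \qquad
300\ \mathrm{K} \;\mapsto\; 80.3\ ^{\circ}\mathrm{F}.
\]

Both conversions agree with the rounded figures above.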

The theory of inflation postulates that extremely early in the big bang process there was exponential inflation of space, or perhaps of a small patch of space, due to the presence of a small amount of very dense, repulsive, primordial material having negative pressure, and negative pressure produces repulsive gravity. In other words, the material was very explosive. Newton-style gravity cannot be repulsive, but Einstein’s theory does not rule out repulsive gravity. The addition by Einstein of the so-called cosmological constant term to his equations allows for this repulsive gravity, although Einstein himself never considered the possibility of cosmic inflation.
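
The link between negative pressure and repulsive gravity can be made precise with the acceleration equation of general relativity’s standard cosmological models; this is a textbook result, included here only as a sketch:

\[
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),
\]

where a is the scale factor of space, ρ is the density, and p is the pressure. Whenever p is more negative than −ρc²/3, the right-hand side becomes positive and the expansion accelerates; for a vacuum-like material with p = −ρc², the expansion is exponential.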

Assuming the big bang began at time t = 0, the epoch of inflation (the epoch of repulsive gravity, or anti-gravity) began at about t = 10⁻³⁶ seconds and lasted until about t = 10⁻³⁴ seconds, during which time the volume of space increased by a factor of at least 10²⁶, and almost all of any initial unevenness in the distribution of energy was smoothed out from the large-scale perspective, just as blowing up a balloon removes its initial folds and creases. The speed of this inflationary expansion was much faster than light speed, but that does not violate Einstein’s general theory of relativity, because his theory is a local theory, and locally during inflation no entity need pass by any other faster than the speed of light.
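
Cosmologists often quantify this stretching in “e-folds” of the linear scale factor a(t). Assuming the commonly cited figure of roughly sixty e-folds (an illustrative number, not one given in this article):

\[
N \;=\; \ln\frac{a_{\text{end}}}{a_{\text{start}}} \;\approx\; 60
\quad\Longrightarrow\quad
\frac{a_{\text{end}}}{a_{\text{start}}} \;=\; e^{60} \;\approx\; 10^{26},
\]

a linear stretch of about 10²⁶, so the volume of the inflated region grows by at least that factor.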

At the end of that inflationary epoch, at about t = 10⁻³³ seconds, the explosive material decayed for some unknown reason, leaving only normal matter with attractive gravity. This decay began the period of the so-called quark soup. At this time, our universe continued to expand, although now at a nearly constant rate. It went into its “coasting” phase. Regardless of any previous curvature in our universe, by the time the inflationary period ended the overall structure of space had very little curvature, and it was extremely homogeneous, as the universe we observe today is at its largest scale. But at the very beginning of the inflationary period there were some very tiny imperfections due to quantum fluctuations. The regions of densest quantum fluctuations attracted more material than the less dense regions, and these dense regions would eventually become galaxies. The quantum fluctuations themselves have left their traces in the very slight, hundred-thousandth-of-a-degree differences in the temperature of the CMB radiation at different angles.

Before inflation began, for some unknown reason the universe contained an unstable inflaton field or false vacuum field. This field underwent a spontaneous phase transition (analogous to superheated liquid water suddenly and spontaneously expanding into steam). That phase transition caused the highly repulsive primordial material to hyper-inflate exponentially in volume for a very short time. During this primeval inflationary epoch, the gravitational field’s stored, negative gravitational energy was rapidly released, and all space wildly expanded. At the end of this early inflationary epoch, the highly repulsive material decayed for some as yet unknown reason into ordinary matter and energy, and the universe’s expansion rate settled down to just below the rate of expansion observed in the universe today. During the inflationary epoch, the entropy continually increased, so the second law of thermodynamics was not violated.

Guth described the inflationary period this way:

There was a period of inflation driven by the repulsive gravity of a peculiar kind of material that filled the early universe. Sometimes I call this material a “false vacuum,” but, in any case, it was a material which in fact had a negative pressure, which is what allows it to behave this way. Negative pressure causes repulsive gravity. Our particle physics tells us that we expect states of negative pressure to exist at very high energies, so we hypothesize that at least a small patch of the early universe contained this peculiar repulsive gravity material which then drove exponential expansion. Eventually, at least locally where we live, that expansion stopped because this peculiar repulsive gravity material is unstable; and it decayed, becoming normal matter with normal attractive gravity. At that time, the dark energy was there, the experts think. It has always been there, but it’s not dominant. It’s a tiny, tiny fraction of the total energy density, so at that stage at the end of inflation the universe just starts coasting outward. It has a tremendous outward thrust from the inflation, which carries it on. So, the expansion continues, and as the expansion happens the ordinary matter thins out. The dark energy, we think, remains approximately constant. If it’s vacuum energy, it remains exactly constant. So, there comes a time later where the energy density of everything else drops to the level of the dark energy, and we think that happened about five or six billion years ago. After that, as the energy density of normal matter continues to thin out, the dark energy [density] remains constant [and] the dark energy starts to dominate; and that’s the phase we are in now. We think about seventy percent or so of the total energy of our universe is dark energy, and that number will continue to increase with time as the normal matter continues to thin out. (World Science U Live Session: Alan Guth, published November 30, 2016 at https://www.youtube.com/watch?v=IWL-sd6PVtM.)

Before about t = 10⁻⁴⁶ seconds, there was a single basic force rather than the four we have now. The four basic forces are the force of gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force. At about t = 10⁻⁴⁶ seconds, the energy of the primordial field was down to about 10¹⁵ GeV, which allowed spontaneous symmetry breaking (analogous to the phase change in which steam cools enough to spontaneously become liquid water); this phase change created the gravitational force as a separate basic force. The other three forces had not yet appeared as separate forces.

During the period of inflation, the universe (our bubble) expanded from the size of a proton to the size of a marble. Later, at t = 10⁻¹² seconds, there was more spontaneous symmetry breaking. First the strong nuclear force separated, and then the weak nuclear force and the electromagnetic force became separate forces. For the first time, the universe had exactly four separate forces. At t = 10⁻¹⁰ seconds, the Higgs field turned on (that is, came into existence). This slowed down many kinds of particles by giving them mass, so they no longer moved at light speed.

Much of the considerable energy left over at the end of the inflationary period was converted into matter, antimatter, and radiation, such as quarks, antiquarks, and photons. The universe’s temperature rose with this new radiation, and this period is called the period of cosmic reheating. Matter-antimatter pairs of particles combined and annihilated, removing the antimatter from the universe and leaving a small amount of matter and even more radiation. At t = 10⁻⁶ seconds, quarks combined to form protons and neutrons. After t = 3 minutes, the universe had cooled sufficiently to allow these protons and neutrons to start combining via the strong force to produce hydrogen, deuterium, and helium nuclei. At about t = 379,000 years, the temperature was low enough for these nuclei to capture electrons and to form the universe’s initial hydrogen, deuterium, and helium atoms. With these first atoms coming into existence, the universe became transparent in the sense that this primarily orange-red light was now able to travel freely without soon being absorbed by surrounding particles. Due to the expansion of the universe since then, this early light is today invisible and much lower in frequency than it was 379,000 years ago. That radiation is now detected on Earth as having a wavelength of 1.9 millimeters, and it is called the cosmic microwave background radiation, or CMB. This radiation continually arrives at the Earth’s surface from all directions. It is almost homogeneous and almost isotropic.
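
One way to see where the 1.9-millimeter figure comes from is the frequency form of Wien’s displacement law applied to today’s CMB temperature (a standard back-of-the-envelope calculation, included only as a sketch):

\[
\nu_{\text{peak}} \;\approx\; (58.8\ \mathrm{GHz/K}) \times 2.7\ \mathrm{K} \;\approx\; 160\ \mathrm{GHz},
\qquad
\lambda \;=\; \frac{c}{\nu_{\text{peak}}} \;\approx\; \frac{3\times10^{8}\ \mathrm{m/s}}{1.6\times10^{11}\ \mathrm{Hz}} \;\approx\; 1.9\ \mathrm{mm}.
\]

(Quoting the peak in wavelength form instead gives a somewhat shorter figure, about 1 millimeter, which is why different sources report slightly different numbers.)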

As the universe expands, the CMB radiation loses energy; but this energy is not lost from the universe, nor is the law of conservation of energy violated, because the energy the radiation loses is matched by the work that goes into expanding spacetime.
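
Quantitatively, the energy of each CMB photon, and hence the radiation’s temperature, falls in inverse proportion to the stretching of space. Using the commonly cited stretch factor of about 1,100 since the light was released (an illustrative value, not one given in this article):

\[
T_{\text{then}} \;\approx\; (1+z)\,T_{\text{now}} \;\approx\; 1100 \times 2.7\ \mathrm{K} \;\approx\; 3000\ \mathrm{K},
\]

which is the temperature of an orange-red glow, in agreement with the description of the universe’s first freely traveling light given above.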

In the literature of both physics and philosophy, descriptions of the big bang often speak of it as if it were the first event, but the big bang theory does not require there to be a first event. Describing the big bang as the first event is a philosophical position, not something demanded by the scientific evidence. Physicists James Hartle and Stephen Hawking once suggested that looking back toward the big bang is like following the positive real numbers back through ever-smaller positive numbers without ever reaching a smallest one, because there is no smallest positive number. If Hartle and Hawking are correct that time is strictly analogous to this, then the big bang had no beginning point event, no initial time.

The classical big bang theory is based on the assumption that the universal expansion of clusters of galaxies can be projected all the way back to a singularity, a state of zero volume, at t = 0. Physicists agree that the projection becomes untrustworthy for times less than the Planck time, because it is in this short period that the inconsistency between relativity theory and quantum theory becomes significant. If a theory of quantum gravity does get confirmed, it is expected to provide more reliable information about the Planck epoch from t = 0 to the Planck time, and it may even allow physicists to answer the questions, “What caused the big bang?” and “Did anything happen before then?”

b. Eternal Inflation and the Multiverse

Most of the big bang inflationary theories are theories of eternal inflation, of the eternal creation of more and more separate big bangs. The inflaton field is the fuel of our big bang. Presumably, say advocates of eternal inflation, not all the inflaton fuel is used up in producing just one big bang, so the remaining fuel is available to create other big bangs, which themselves inflate. And presumably there is no reason why this process should ever end. Because any single big bang has the potential to produce a spacetime that is infinite in both space and time, the process could yield an infinity of potentially infinite spacetimes.

After any single big bang, the initial hyper-inflation eventually stops, but expansion does not, and the result is what cosmologists call a bubble universe. An eternally inflating inflaton field produces an infinity of bubbles. Our own bubble, the one produced by our big bang, is called the Hubble Bubble. Sometimes cosmologists use the term for our visible universe rather than for the entire universe.

The original theory of inflation was created by Guth in 1981, and the theory of eternal inflation was created by Gott, Linde, and Vilenkin in the early 1980s. There is no consensus among physicists about whether there is more than one universe. The multiplicity of universes is also called parallel worlds, many worlds, alternative universes, and alternate worlds. Each universe of the multiverse normally is required to have the same physics and the same mathematics, a restriction not imposed on logically possible universes of the sort proposed by the philosopher David Lewis.

One version of eternal inflation, called the multiverse theory, implies that there is not simply a multiplicity of big bangs but rather that, at each event at which various subsequent events are physically possible, the universe splits into various new universes, one for each of those possibilities. Each of these new universes will have had the same big bang. So, a person who can choose different breakfasts splits into various persons, each having chosen a different breakfast. No one of these copies of the person is the one real person, so identity over time is not a fundamental feature of reality, according to multiverse theory.

New energy is not required to create these inflationary universes, so there are no implications about whether energy is or is not conserved in the multiverse. Some of the universes occur far away in our space, but most do not occur within our space at all; they are spatially disconnected from us. Also, in some of these universes there may be no time dimension. A great many universes can occur if the multiverse theory is correct, but there are limits to the variety: not all logically possible universes are allowed, only the physically possible ones. And some multiverse theories do not allow very much variety; for example, the mass of the electron might have to be the same in all universes. There is much uncertainty among competing theories of the multiverse because of the lack of experimental tests.

Could the expansion of our universe eventually slow down? Yes. Could the expansion of the multiverse eventually slow down? No. The primordial explosive material in any single universe decays quickly, but as it decays the part that has not yet decayed becomes much larger, and so the expansion of the multiverse continues. The rate of creation of new bubble universes increases exponentially. Unfortunately, physicists cannot make sense of the remark that one universe came into existence before some other. The happens-before relation is not well defined on the set of universes, although it is difficult to speak informally of universe creation without using that relation.

Normally, philosophers of science say that what makes a theory scientific is not that it can be falsified, as the philosopher Karl Popper proposed, but rather that there can be experimental evidence for it or against it. Because it is so difficult to design experiments that would provide evidence for or against the multiverse theories, many physicists complain that their fellow physicists who are developing these theories are doing technical metaphysical speculation, not physics. However, the response from defenders of multiverse theories is usually that they can imagine someday, perhaps in future centuries, running crucial experiments, and that, besides, the term physics is best defined as whatever it is that physicists do.

5. Infinite Time

Is time infinitely divisible? Yes, because general relativity theory and quantum theory require time to be a continuum. But this answer will change to “no” if these theories are eventually replaced by a Core Theory that quantizes time. “Although there have been suggestions that spacetime may have a discrete structure,” Stephen Hawking said in 1996, “I see no reason to abandon the continuum theories that have been so successful.” Two decades later, he and other physicists were much less sure.

Stephen Hawking, James Hartle, and others said the difficulty of knowing whether the past and future are infinite in duration turns on our ignorance of whether the universe’s positive energy is exactly canceled out by its negative energy. All the energy of gravitation and spacetime curvature is negative. If the total of the universe’s energy is non-zero, and if quantum mechanics, including the law of conservation of energy, is to be trusted, then time is infinite in the past and future. Here is the argument for this conclusion. The law of conservation of energy implies that energy can change forms, but if the total were ever non-zero, then it could never become zero, nor could it ever have been zero, because any change of the total to or from zero would violate the law of conservation of energy. So, there always have been states whose total energy is non-zero, and there always will be such states. That implies there can be no first instant or last instant and thus that time is eternal.
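
The argument can be compressed into a single line. Let E(t) be the universe’s total energy at time t; conservation says that E never changes:

\[
\frac{dE}{dt} \;=\; 0 \quad\Longrightarrow\quad E(t) \;=\; E(t_{0}) \ \text{for every time } t \text{ at which the universe exists.}
\]

So if the total is non-zero at any one time, there is no time at which it is zero, and in particular no transition to or from a state of nothing; hence there is no first or last instant.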

There is no solid evidence that the total is non-zero, but a slim majority of the experts favor a non-zero total, although their confidence in this is not strong. Assuming there is a non-zero total, the favored theory of the future of the universe is the big chill theory. The big chill theory implies that the future never ends and that the universe just keeps getting chillier as space expands and its contents become more dilute. The creation of dark energy causes this expansion of space. So, there always will be new events produced from old events, regardless of whether the universe does or does not reach thermodynamic equilibrium.

Here are more details of the big chill theory. The last star will burn out in about 10¹⁵ years. Then all the stars and dust within each galaxy will fall into black holes. Then the material between galaxies will fall into black holes as well, and finally, in about 10¹⁰⁰ years, all the black holes will evaporate, leaving only a soup of elementary particles that gets less dense, and therefore “chillier,” as the universe’s expansion continues. The microwave background radiation will redshift more and more into longer-wavelength radio waves. Future space will look much like a vacuum. But because of vacuum energy, the temperature will only approach, and never quite reach, zero on the Kelvin scale. Thus the universe descends into a “big chill,” having the same amount of total energy it always has had.
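
The 10¹⁰⁰-year figure is roughly the Hawking evaporation time of the most massive black holes. A black hole’s evaporation time grows as the cube of its mass; with the standard order-of-magnitude estimate and an illustrative mass of 10¹¹ solar masses (numbers not taken from this article):

\[
t_{\text{evap}} \;\sim\; 10^{67}\ \text{years} \times \left(\frac{M}{M_{\odot}}\right)^{3},
\qquad
M \sim 10^{11}\,M_{\odot} \;\Longrightarrow\; t_{\text{evap}} \sim 10^{100}\ \text{years}.
\]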

The situation is very different from that of the big chill theory if the total energy of the universe is now zero. In this case, time is not fundamental (nor is spacetime), but rather is likely to be emergent, arising from a finite collection of moments, as suggested by the timeless Wheeler-DeWitt equation of quantum mechanics.

Here is more commentary about this from Carroll (2016, pp. 197-8):

There are two possibilities: one where the universe is eternal, one where it had a beginning. That’s because the Schrödinger equation of quantum mechanics turns out to have two very different kinds of solutions, corresponding to two different kinds of universe.

One possibility is that time is fundamental, and the universe changes as time passes. In that case, the Schrödinger equation is unequivocal: time is infinite. If the universe truly evolves, it always has been evolving and always will evolve. There is no starting and stopping. There may have been a moment that looks like our Big Bang, but it would have only been a temporary phase, and there would be more universe that was there even before the event.

The other possibility is that time is not truly fundamental, but rather emergent. Then, the universe can have a beginning. The Schrödinger equation has solutions describing universes that don’t evolve at all: they just sit there, unchanging.

…And if that’s true, then there’s no problem at all with there being a first moment in time. The whole idea of “time” is just an approximation anyway.

Back to the main “Time” article for references and citations.

Author Information

Bradley Dowden
Email: dowden@csus.edu
California State University, Sacramento
U. S. A.