What Else Science Requires of Time

This article is one of the three supplements of the main Time article. Another is “Frequently Asked Questions about Time.”

Table of Contents

  1. What are Theories of Physics?
    1. The Core Theory
  2. Relativity Theory
  3. Quantum Theory
    1. The Standard Model
  4. Big Bang
    1. Cosmic Inflation
    2. Eternal Inflation and the Multiverse
  5. Infinite Time

1. What are Theories of Physics?

The answer to this question is philosophically controversial, and there is a vast literature on the topic. Here are some brief remarks.

The confirmed theories of physics are our civilization’s most valuable tools for explaining, predicting, and understanding the natural phenomena that physicists study. One of the best features of a good theory in physics is that it allows us to calculate the results of many observations from few assumptions. We humans are lucky that we happen to live in a universe that is so explainable, predictable and understandable, and that is governed by so few laws.

Theories of physics are, among other things, a set of laws. The laws are the main, general claims of a theory. The claim that Mars is farther from the Sun than is the Earth is true, but it does not qualify as being a law because it is not general enough. In our fundamental theories of physics, the standard philosophical presupposition is that a state of a physical system describes what there is at some time, and a law of the theory—an “evolution law” or “dynamical law”—describes how the system evolves from a state at one time into a state at another time, perhaps with a probability attached. All evolution laws in our fundamental theories are differential equations. Nearly all the laws are time-reversible, which means that the evolution can be into either an earlier time or a later time. The most important proposed exception to time-reversibility is the treatment in quantum theory of the measurement process. It is discussed below.
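
To make the idea of a time-reversible evolution law concrete, here is a standard textbook illustration, not tied to any one of the theories discussed below: Newton’s second law for a force that depends only on position keeps its form when t is replaced by −t, because time enters only through a second derivative.

    \[
      m\,\frac{d^{2}x}{dt^{2}} = F(x)
    \]
    % Under the substitution t -> -t, the factor dt^2 is unchanged,
    % since (-dt)^2 = dt^2, so any solution run backward in time is
    % again a solution of the law. That is time-reversibility.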

In Medieval Christian theology, the laws of nature were considered to be God’s commands, but today saying nature ‘obeys’ scientific laws is considered by scientists to be a harmless metaphor.

All laws were once assumed to be local in the sense that they need to mention only the here and now, not the there and then. Also, presumably these laws are the same at all times. We have no a priori reason to think physical theories must be time-reversible, local, and time-translation invariant, but these assumptions have been very fruitful throughout much of the history of physics—until new problems arose, such as in the description of quantum measurement and entanglement given below.

The term theory in this article is used in a technical sense, not in the sense of an explanation as in the remark, “My theory is that the mouse stole the cheese,” nor in the sense of a prediction as in the remark, “My theory is that the mouse will steal the cheese.” The general theory of relativity is an example of our intended sense of the term “theory.” Theories in science are designed for producing interesting explanations, not for encompassing all the specific facts. That is why there is no scientific theory that specifies your age nor one that specifies when you woke up last Tuesday. Some theories are expressed fairly precisely, and some are expressed less precisely. The fairly precise ones are often called models of nature, and in physics the laws in those models are expressed in the language of mathematics as mathematical equations.

Most researchers would say the model should tell us how the system being modeled would behave if certain conditions were to be changed in a specified way, for example, if the density were doubled or those three moons orbiting the planet were not present. Doing this is telling us about the causal structure of the system being modeled.

Due to the influence of Isaac Newton, subsequent physicists have assumed that the laws of physics are time-translation invariant. This invariance over time implies the laws of physics we have now are the same laws that held in the past and will hold in the future. It does not imply that if you bought an ice cream cone yesterday, you will buy one tomorrow. Also, the “law” that the laws of physical science do not change from one time to another is not itself time translation invariant, so it is considered to be a meta-law rather than a law.

The laws of our fundamental theories contain many constants such as the fine-structure constant, the value for the speed of light, Planck’s constant, and the value of the mass of an electron. We cannot calculate those constants. Instead, we measure a constant as precisely as possible, and then select a specific value for the constant and place this value in the theories containing the constant. It is a virtue of a theory not to have too many constants. If there were too many, then the theory could never be disproved by data and so would explain nothing and could be labeled as “pseudoscience.” Regarding the divide between science and pseudoscience, the leading answer is that:

what is really essential in order for a theory to be scientific is that some future information, such as observations or measurements, could plausibly cause a reasonable person to become either more or less confident of its validity. This is similar to Popper’s criteria of falsifiability, while being less restrictive and more flexible (Dan Hooper).

One implication is that the hypothesis that some fact holds because God made it so is unscientific.

a. The Core Theory

Some physical theories are fundamental, and some are not. Fundamental theories are foundational in the sense that their laws cannot be derived from the laws of other physical theories even in principle. For example, the second law of thermodynamics is not fundamental, nor are the laws of plate tectonics in geophysics despite their being critically important to their sciences. The following two theories are fundamental: (i) the theory of relativity, and (ii) quantum theory. Their amalgamation is what Nobel Prize winner Frank Wilczek called the Core Theory, the theory of almost everything physical. (For the experts: More technically, this amalgamated theory is the effective quantum field theory that includes both the weak field limit of Einstein’s General Theory of Relativity and the Standard Model of Particle Physics, and no assumption is made about the existence of space and time below the Planck length and Planck time.) Scientists believe this Core Theory holds not just in our solar system, but all across the universe. Wilczek claimed:

[T]he Core has such a proven record of success over an enormous range of applications that I can’t imagine people will ever want to junk it. I’ll go further: I think the Core provides a complete foundation for biology, chemistry, and stellar astrophysics that will never require modification. (Well, “never” is a long time. Let’s say for a few billion years.)

This claim that the Core gives us all the fundamental laws we will ever need in order to explain the phenomena of our ordinary lives implies that it is all that is needed to explain the cause of your future great grandchild’s death and why that particular leaf is now lying in the street. The Core Theory does not include the big bang theory, and it does not use the terms time’s arrow or now. The concept of time in the Core Theory is primitive or “brute.” It is not definable, but rather it is used to define and explain other concepts.

It is believed by most physicists that the Core Theory can be used in principle to adequately explain the behavior of a potato, a galaxy, and a brain. The hedge phrase “in principle” is important. One cannot replace it with “in practice” or “practically.” Practically there are many limitations on the use of the Core Theory. Here are some of the limitations. There is a margin of error in any measurement, so a user of the Core Theory does not have access to all the needed data for a prediction such as the position of every particle in a system; and, even if this were available, the complexity of the needed calculations would be prohibitive. There is quantum uncertainty that Heisenberg expressed with his Uncertainty Principle (see below for more on this). And there is a limit to predictability in a chaotic system due to the butterfly effect, which magnifies small errors in an initial measurement into very large errors later in the time evolution of the system. In addition, the Core Theory does not explicitly contain the concepts of a potato, a galaxy, or a brain. They are emergent concepts that are needed in good explanations at a higher scale, the macroscopic scale. Commenting on these practical limitations for the study of galaxies, the cosmologist Andrew Pontzen said “Ultimately, galaxies are less like machines and more like animals—loosely understandable, rewarding to study, but only partially predictable.”

Regarding the effect of quantum mechanics on ontology, potatoes, galaxies and brains have been considered by a number of twentieth-century philosophers to be just different mereological sums of particles, but the majority viewpoint among philosophers of physics in the twenty-first century is that potatoes, galaxies and brains are, instead, fairly stable patterns over time of the relevant quantum fields. All those ordinary objects have wave-like properties.

For a great many investigations, it is helpful to treat objects as being composed of particles rather than fields. A proton or even a planet might be treated as a particle for certain purposes. Electrons, quarks, and neutrinos are fundamental particles, and they are considered to be structureless, having no inside. String theory disagrees and treats all these particles as being composed of very tiny one-dimensional objects called “strings” that move in a higher-dimensional space, but due to lack of experimental support, string theory is considered to be as yet unconfirmed.

The Core has been tested in many extreme circumstances and with great sensitivity, so physicists have high confidence in it. There is no doubt that for the purposes of doing physics the Core theory provides a demonstrably superior representation of reality to that provided by its alternatives. But all physicists know the Core is not strictly true and complete, and they know that some features will need revision—revision in the sense of being modified or extended. Physicists are motivated to discover how to revise it because such a discovery can lead to great praise from the rest of the physics community. Wilczek says the Core will never need modification for understanding (in principle) the special sciences of biology, chemistry, stellar astrophysics, computer science and engineering, but he would agree that the Core needs revision in order to adequately explain why 95 percent of the universe consists of dark energy and dark matter, why the universe has more matter than antimatter, why neutrinos change their identity over time, and why the energy of empty space is as small as it is. One metaphysical presupposition here is that the new theory will be logically consistent and will have eliminated the present inconsistencies between relativity theory and quantum theory.

The Core Theory presupposes that time exists, that it emerges from spacetime, and that spacetime is fundamental and not emergent. Within the Core Theory, relativity theory allows space to curve, ripple, and expand; and this curving, rippling, and expanding can vary over time. Quantum Theory does not allow any of this, although a future revision of Quantum Theory within the Core Theory is expected to allow it.

The Core Theory presupposes the well-accepted Laplacian Paradigm, which implies that physicists should search for laws describing how a state of a system at one time turns into a different state at another time. These are the evolution laws or dynamical laws. David Deutsch, Chiara Marletto, and their collaborators (Deutsch 2013) have challenged that paradigm and proposed Constructor Theory, which requires time to emerge from a non-temporal substrate, so that time is not a fundamental feature of nature. Constructor Theory also turns the tables on classical reductionism by claiming that the small-scale, microscopic laws of nature are all emergent properties of the larger-scale laws, not vice versa.

2. Relativity Theory

Time is fundamental in relativity theory, and the theory has a great impact upon our understanding of the nature of time. When the term relativity theory is used, it usually means the general theory of relativity of 1915, but sometimes it means the special theory of relativity of 1905. The special theory is the theory of space and time when you do not pay attention to gravity, and the general theory is when you do. Both the special and general theories have been well-tested; and they are almost universally accepted. Today’s physicists understand them better than Einstein did.

Although the Einstein field equations in his general theory:

are exceedingly difficult to manipulate, they are conceptually fairly simple. At their heart, they relate two things: the distribution of energy in space, and the geometry of space and time. From either one of these two things, you can—at least in principle—work out what the other has to be. So, from the way that mass and other energy is distributed in space, one can use Einstein’s equations to determine the geometry of that space. And from that geometry, we can calculate how objects will move through it (Dan Hooper).
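
In standard notation, the relationship the quotation describes is expressed by the Einstein field equations, which equate a purely geometrical quantity on the left to the distribution of energy and momentum on the right:

    \[
      G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
    \]
    % G_{\mu\nu} encodes the curvature of spacetime; T_{\mu\nu} encodes the
    % distribution of energy and momentum; \Lambda is the cosmological
    % constant; G is Newton's gravitational constant; and c is the speed
    % of light in a vacuum.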

The relationship between the special and general theories is slightly complicated. Both theories are about motion of objects and both approach agreement with Newton’s theory the slower the speed of objects, the weaker the gravitational forces, and the lower the energy of those objects. Special relativity implies the laws of physics are the same for all inertial observers, that is, observers who are moving at a constant velocity relative to each other will find that all phenomena obey the same laws. Observers are frames of reference, or persons of negligible mass and volume making measurements from a stationary position in a frame of reference. General relativity implies the laws are the same even for observers accelerating relative to each other, such as changing their velocity due to the influence of gravitation. General relativity holds in all reference frames, but special relativity holds only for inertial reference frames, namely non-accelerating frames.

Special relativity allows objects to have mass but not gravity. It always requires a flat geometry—that is, a Euclidean geometry for space and a Minkowskian geometry for spacetime. General relativity does not have those restrictions. General relativity is a specific theory of gravity, assuming the theory is supplemented by a specification of the distribution of matter-energy at some time. Newton’s main laws of F = ma and F = GmM/r² hold only in special situations. Special relativity is not a specific theory but rather a general framework for theories, and it is not a specific version of general relativity. Nor is general relativity a generalization of special relativity. The main difference between the two is that, in general relativity, spacetime does not simply exist passively as a background arena for events. Instead, spacetime is dynamical in the sense that changes in the distribution of matter and energy are changes in the curvature of spacetime (though not necessarily vice versa).

The theory of relativity is generally considered to be a theory based on causality:

One can take general relativity, and if you ask what in that sophisticated mathematics is it really asserting about the nature of space and time, what it is asserting about space and time is that the most fundamental relationships are relationships of causality. This is the modern way of understanding Einstein’s theory of general relativity….If you write down a list of all the causal relations between all the events in the universe, you describe the geometry of spacetime almost completely. There is still a little bit of information that you have to put in, which is counting, which is how many events take place…. Causality is the fundamental aspect of time. (Lee Smolin).

In the Core theories, the word time is a theoretical term, and the dimension of time is treated somewhat like a single dimension of space. Space is a set of all possible point-locations. Time is a set of all possible point-times. Spacetime is a set of all possible point-events. Spacetime is presumed to be four-dimensional and also a continuum of points, with time being a distinguished, one-dimensional sub-space of spacetime. Because the time dimension is so different from a space dimension, physicists very often speak of (3+1)-dimensional spacetime rather than 4-dimensional spacetime. Technically, any spacetime, no matter how many dimensions it has, is required to be a differentiable manifold with a metric tensor field defined on it that tells what geometry it has at each point. Both relativity theory and quantum theory assume that three-dimensional space is isotropic (rotation symmetric) and homogeneous (translation symmetric) and that there is translation symmetry in time. Regarding all these symmetries, the physical laws need to obey them, but specific physical systems within space-time need not.
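
For the simplest concrete case of such a metric, the flat spacetime of special relativity has the following line element, in which the time dimension enters with the opposite sign from the three space dimensions; that sign difference is the formal sense in which time is a “distinguished” dimension:

    \[
      ds^{2} = -c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2}
    \]
    % The minus sign on the time term is what makes the geometry
    % Minkowskian rather than Euclidean; proper time along a path
    % is computed from this line element.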

(For the experts: General relativistic spacetimes are manifolds built from charts involving open subsets of R4. General relativity does not consider a time to be a set of simultaneous events that do or could occur at that time; that is a Leibnizian conception. Instead General relativity specifies time in terms of the light cone structures at each place. The theory requires spacetime to have at least four dimensions, not exactly four dimensions.)

Relativity theory implies time is a continuum of instantaneous times that is free of gaps just like a mathematical line. This continuity of time was first emphasized by the philosopher John Locke in the late seventeenth century, but it is meant here in a more detailed, technical sense that was developed only toward the end of the 19th century for calculus.

continuous vs discrete

According to both relativity theory and quantum mechanics, time is not discrete or quantized or atomistic. Instead, the structure of point-times is a linear continuum with the same structure as the mathematical line or as the real numbers in their natural order. For any point of time, there is no next time because the times are packed together so tightly. Time’s being a continuum implies that there is a non-denumerably infinite number of point-times between any two non-simultaneous point-times. Some philosophers of science have objected that this number is too large, and we should use Aristotle’s notion of potential infinity and not the late 19th century notion of a completed infinity. Nevertheless, accepting the notion of an actual nondenumerable infinity is the key idea used to solve Zeno’s Paradoxes and to remove inconsistencies in calculus.

The fundamental laws of physics assume the universe is a collection of point events that form a four-dimensional continuum, and the laws tell us what happens after something else happens or because it happens. These laws describe change but do not themselves change. At least that is what laws are in the first quarter of the twenty-first century, but one cannot know a priori that this is always how laws must be. The continuum assumption is not absolutely necessary to describe what we observe, but so far it has proved too difficult to revise our theories in order to remove the assumption while retaining consistency with all our experimental data. Calculus has proven its worth.

No experiment is so fine-grained that it could show times to be infinitesimally close together, although there are possible experiments that could show the assumption to be false if the graininess of time were to be large enough to be detectable.

Not only is there some uncertainty or worry about the correctness of relativity in the tiniest realms, there is also uncertainty about whether it works differently on cosmological scales than it does at the scale of atoms, spaceships, and solar systems. But so far no rival theories have been confirmed.

In the twenty-first century, one of the most important goals in physics is to discover/invent a theory of quantum gravity that unites the best parts of quantum theory and of the theory of relativity. Einstein claimed in 1916 that his general theory of relativity needed to be replaced by a theory of quantum gravity. A great many physicists of the 21st century believe a successful theory of quantum gravity will require quantizing time so that there are atoms of time. But so far this is just an opinion.

If there is such a thing as an atom of time and thus such a thing as an actual next instant and a previous instant, then time cannot be like the real number line, because no real number has a next number. It is speculated that if time were discrete, a good estimate for the duration of an atom of time is 10⁻⁴⁴ seconds, the so-called Planck time. No physicist can yet suggest a practical experiment that is sensitive to this tiny scale of phenomena. For more discussion, see (Tegmark 2017).

The special and general theories of relativity imply that to place a reference frame upon spacetime is to make a choice about which part of spacetime is the space part and which is the time part. No choice is objectively correct, although some choices are very much more convenient for some purposes. This relativity of time, namely the dependency of time upon a choice of reference frame, is one of the most significant philosophical implications of both the special and general theories of relativity.

Since the discovery of relativity theory, scientists have come to believe that any objective description of the world can be made only with statements that are invariant under changes in the reference frame. Saying, “It is 8:00” does not have a truth value unless a specific reference frame is implied, such as one fixed to Earth with time being the time that is measured by our civilization’s standard clock. This relativity of time to reference frames is behind the remark that Einstein’s theories of relativity imply time itself is not objectively real but spacetime is.

Regarding relativity to frame, Newton would say that if you are seated in a vehicle moving along a road, then your speed relative to the vehicle is zero, but your speed relative to the road is not zero. Einstein would agree. However, he would surprise Newton by saying the length of your vehicle is slightly different in the two reference frames, the one in which the vehicle is stationary and the one in which the road is stationary. Equally surprising to Newton, the duration of the event of your drinking a cup of coffee while in the vehicle is slightly different in those two reference frames. These relativistic effects are called space contraction and time dilation, respectively. So, both length and duration are frame dependent and, for that reason, say physicists, they are not objectively real characteristics of objects. Speeds also are relative to reference frame, with one exception. The speed of light in a vacuum has the same value c in all frames that are allowed by relativity theory. Space contraction and time dilation change in tandem so that the speed of light in a vacuum is always the same number.
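
Both effects can be computed from the standard Lorentz factor. Here is a minimal sketch in Python; the speeds are made-up values chosen only for illustration:

    import math

    C = 299_792_458.0  # speed of light in a vacuum, meters per second

    def lorentz_factor(v: float) -> float:
        # gamma = 1 / sqrt(1 - v^2/c^2); between the two frames, durations
        # are longer by this factor and lengths are shorter by this factor.
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    print(lorentz_factor(30.0))      # highway speed: ~1.000000000000005
    print(lorentz_factor(0.8 * C))   # 80% of light speed: ~1.67

At everyday speeds the factor differs from 1 only in about the fifteenth decimal place, which is why the vehicle example above could never be noticed without extraordinarily precise clocks.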

Relativity theory allows great latitude in selecting the classes of simultaneous events. Because there is no single objectively-correct frame to use for specifying which events are present and which are past—but only more or less convenient ones—one philosophical implication of the relativity of time is that it seems to be easier to defend McTaggart’s B-theory of time and more difficult to defend McTaggart’s A-theory, which implies the temporal properties of events such as “is happening now” or “happened in the past” are intrinsic to the events and are objective, frame-free properties of those events. In brief, the relativity to frame makes it difficult to defend absolute time.

Relativity theory challenges other ingredients of the manifest image of time. For two point-events A and B occurring at the same place but at different times, relativity theory implies their temporal order is absolute in the sense of being independent of the frame of reference. This agrees with common sense and thus with the manifest image of time. But if A and B are distant from each other and occur close enough in time to be within each other’s absolute elsewhere, then relativity theory implies event A can occur before event B in one reference frame, after B in another frame, and simultaneously with B in yet another frame. No person before Einstein ever imagined time has such a strange feature.

The special and general theories of relativity provide accurate descriptions of the world when their assumptions are satisfied. Both have been carefully tested. The special theory does not mention gravity, and it assumes there is no curvature to spacetime, but the general theory requires curvature in the presence of mass and energy, and it requires the curvature to change as their distribution changes. The presence of gravity in the general theory has enabled the theory to be used to explain phenomena that cannot be explained with either special relativity or Newton’s theory of gravity or Maxwell’s theory of electromagnetism.

Because of the relationship between spacetime and gravity, the equations of general relativity are much more complicated than are those of special relativity. But general relativity assumes the equations of special relativity hold at least in all infinitesimal regions of spacetime.

To give one example of the complexity just mentioned, the special theory clearly implies there is no time travel to events in one’s own past. Experts do not agree on whether the general theory has this same implication because the equations involving the phenomena are too complex for them to solve directly. Approximate solutions have to be used, yet still there is disagreement about this kind of time travel.

Because of the complexity of Einstein’s equations, all kinds of tricks of simplification and approximation are needed in order to use the laws of the theory on a computer for all but the simplest situations.

Regarding curvature of time and of space, the presence of mass at a point implies intrinsic spacetime curvature at that point, but not all spacetime curvature implies the presence of mass. Empty spacetime can still have curvature, according to relativity theory. This point has been interpreted by many philosophers as a good reason to reject Leibniz’s classical relationism. The point was first mentioned by Arthur Eddington.

Two accurate, synchronized clocks do not stay synchronized if they are subjected to different gravitational potentials. This is a second kind of time dilation, in addition to the dilation due to speed. So, a correct clock’s time depends on the clock’s history of both speed and gravitational influence. Gravitational time dilation would be especially apparent if a clock were to approach a black hole. The rate of ticking of a clock approaching the black hole slows radically upon approach to the horizon of the hole, as judged by the rate of a clock that remains safely back on Earth. This slowing is sometimes misleadingly described as time slowing down. After a clock falls through the event horizon, it can no longer report its values to Earth, and when it reaches the center of the hole, not only does it stop ticking, but it also reaches the end of time, the end of its proper time.
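
A minimal worked formula for the gravitational effect, using the textbook idealization of a clock held at rest at distance r from a non-rotating mass M (a sketch, not a treatment of a clock actually falling in):

    \[
      \Delta\tau = \Delta t\,\sqrt{1 - \frac{2GM}{r c^{2}}}
    \]
    % \Delta\tau is the interval ticked off by the clock near the mass;
    % \Delta t is the corresponding interval judged from far away. As r
    % approaches the Schwarzschild radius 2GM/c^2 (the event horizon),
    % the square root approaches zero, which is why the distant observer
    % judges the approaching clock to slow radically.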

The general theory of relativity has additional implications for time. In 1948-9, the logician Kurt Gödel discovered radical solutions to Einstein’s equations, solutions in which there are what are called “closed time-like curves” in graphical representations of spacetime. The unusual curvature is due to the rotation of all the matter in Gödel’s possible universe. As one progresses forward in time along one of these curves, one arrives back at one’s starting point. Fortunately, there is no empirical evidence that our own universe has this rotation. Here is Einstein’s reaction to Gödel’s work on time travel:

Kurt Gödel’s essay constitutes, in my opinion, an important contribution to the general theory of relativity, especially to the analysis of the concept of time. The problem involved here disturbed me already at the time of the building of the general theory of relativity, without my having succeeded in clarifying it.

Several remarks above have been made about the microstructure of time, but let’s review these in more detail. In mathematical physics, the ordering of instants by the happens-before relation of temporal precedence is complete in the sense that there are no gaps in the sequence of instants. Any interval of time is a continuum, so the points of time form a linear continuum. Unlike physical objects, physical time is believed to be infinitely divisible—that is, divisible in the sense of the actually infinite, not merely in Aristotle’s sense of potentially infinite. Regarding the density of instants, the ordered instants are so densely packed that between any two there is a third so that no instant has a next instant. Regarding continuity, time’s being a linear continuum implies that there is a nondenumerable infinity of instants between any two non-simultaneous instants. The rational number line does not have so many points between any pair of different points; it is not continuous the way the real number line is, but rather contains many gaps. The real numbers such as pi, which is not a rational number, fill the gaps.

The actual temporal structure of events can be embedded in the real numbers, at least locally, but how about the converse? That is, to what extent is it known that the real numbers can be adequately embedded into the structure of the instants, at least locally? This question is asking for the justification of saying time is not discrete or atomistic. The problem here is that the shortest duration ever measured is about 250 zeptoseconds. A zeptosecond is 10⁻²¹ second. For times shorter than about 10⁻⁴³ second, which is the physicists’ favored candidate for the duration of an atom of time, science has no experimental grounds for the claim that between any two events there is a third. Instead, the justification of saying the reals can be embedded into an interval of instants is that (i) the assumption of continuity is very useful because it allows the mathematical methods of calculus to be used in the physics of time; (ii) there are no known inconsistencies due to making this assumption; and (iii) there are no better theories available. The qualification earlier in this paragraph about “at least locally” is there in case there is time travel to the past so that the total duration of the time loop is finite. A circle is continuous and one-dimensional, but it is like the real numbers only locally.

One can imagine two empirical tests that would reveal time’s discreteness if it were discrete—(1) being unable to measure a duration shorter than some experimental minimum despite repeated tries, yet expecting that a smaller duration should be detectable with current equipment if there really is a smaller duration, and (2) detecting a small breakdown of Lorentz invariance. But if any experimental result that purportedly shows discreteness is going to resist being treated as a mere anomaly, perhaps due to error in the measurement apparatus, then it should be backed up with a confirmed theory that implies the value for the duration of the atom of time. This situation is an instance of the kernel of truth in the physics joke that no observation is to be trusted until it is backed up by theory.

It is commonly remarked that, according to relativity theory, nothing can go faster than light, not even the influence of gravity. The remark needs some clarification, else it is incorrect. Here are three ways to go faster than light. (1) First, the medium needs to be specified. The speed of light in certain crystals can be much less than c, say 40 miles per hour, and if so, then a horse outside the crystal could outrun the light beam. (2) Second, the limit c applies only locally to objects within space relative to other nearby objects within space, and it requires that no object pass another object locally at faster than c. However, globally the general theory of relativity places no restrictions on how fast space itself can expand. So, two galaxies can drift apart from each other at faster than the speed of light if the intervening space expands sufficiently rapidly. (3) Imagine standing still outside on the flat ground and aiming your laser pointer forward and parallel to the ground. Now change the angle in order to aim the pointer down at your feet. During that process of changing the angle, the point of intersection of the beam and the ground will move toward your feet faster than the speed c. This does not violate relativity theory because the point of intersection is merely a geometrical object, not a physical object, so its speed is not restricted by relativity theory.
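
Here is a minimal numerical sketch of the third case in Python. The height and rotation rate are made-up values; the point is only that a geometrical point of intersection can sweep across the ground faster than c while nothing physical moves that fast:

    import math

    C = 299_792_458.0   # speed of light in a vacuum, m/s
    h = 1.5             # illustrative height of the pointer above the ground, m
    omega = math.pi     # illustrative rotation rate of the pointer, rad/s

    def spot_speed(theta: float) -> float:
        # The beam makes angle theta with the straight-down direction, so the
        # spot sits at distance h*tan(theta) from your feet; differentiating
        # gives the spot's speed h*omega/cos(theta)^2, which grows without
        # bound as the beam approaches the horizontal.
        return h * omega / math.cos(theta) ** 2

    print(spot_speed(math.radians(45.0)))          # a modest ~9.4 m/s
    print(spot_speed(math.radians(89.99999)) > C)  # True: the spot outruns light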

For more about special relativity, see Special Relativity: Proper Times, Coordinate Systems, and Lorentz Transformations.

3. Quantum Theory

Time is a continuum in quantum theory, just as it is in the theory of relativity and Newton’s mechanics, but change over time is treated in quantum theory very differently than in classical theories. Quantum theory is a special-relativistic theory of quantum mechanics. It also includes the Standard Model of particle physics, which is a theory of all the known forces of nature except for the gravitational force and all the known fundamental particles of nature except the graviton. Quantum theory has its name because it implies that various quantities, such as energy and charge, are quantized in the sense that they do not change continuously but only in multiples of minimum discrete steps—so-called quantum steps.

Quantum theory is our most successful theory in all of science, and it is very well understood mathematically despite its not being well understood intuitively or informally or philosophically. The variety of phenomena it can be used to successfully explain is remarkable. For four examples, it explains (i) why you can see through a glass window but not a potato, (ii) why the Sun has lived so long without burning out, (iii) why atoms are stable so that the negatively-charged electrons do not crash into the positively-charged nucleus, and (iv) why the periodic table of elements has the structure and values it has. Without quantum theory, all these must be taken to be brute facts of nature.

Surprisingly, physicists still do not agree on the exact formulation of quantum theory. Its many so-called “interpretations” are really competing versions of the theory. That is why there is no agreement on what the axioms of quantum theory are. Also, there is a disagreement among philosophers of physics regarding whether the competing interpretations are (1) empirically equivalent and underdetermined by (all possible) experimental evidence and so must be decided upon by such features as their degree of mathematical elegance and simplicity, or (2) are not empirically equivalent theories but, instead, are theories that may in the future be confirmed or refuted by experimental evidence.

Indeterminism

Determinism implies predictability in principle: knowing the way things are now, one should in principle be able to predict how things will be. The world’s best scientists cannot predict precisely the weather tomorrow for England or for Saturn, but this is not a sufficient reason to conclude that the world is indeterministic. Maybe future meteorologists will be able to do better.

Classical physicists envisioned the world to be deterministic in the sense that, given a precise specification of the way things are at some initial time, called the “initial state,” any later state, the “final state,” is fixed.

But consider nuclear fission. The nucleus of a uranium atom can explode, sending out particles from the nucleus. If we had enough information about the initial situation before the fission, then a determinist about quantum theory would expect that applying the laws of quantum theory to the initial state would tell us when the atom will fission and what its state is at any later time. Unfortunately, quantum theory does not tell us this, at least as it is usually interpreted. At best, quantum theory can provide information about the probability that the atom will fission between now and some later time. So, the key principle of causal determinism, “same cause, same effect,” fails. This is unlike classical physics, where probability is always a sign of human ignorance and measurement error regarding the initial conditions, and once the data are available, the state at other times can be predicted precisely using the laws of nature, at least in principle, assuming there are no limitations on available computing power.
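
A minimal sketch of the kind of information quantum theory does provide, assuming the standard exponential-decay law with a made-up half-life chosen purely for illustration:

    import math

    HALF_LIFE = 10.0                       # illustrative half-life, seconds
    DECAY_RATE = math.log(2) / HALF_LIFE   # lambda in P(t) = 1 - exp(-lambda * t)

    def prob_decayed_by(t: float) -> float:
        # Probability that the nucleus has decayed by time t. The theory
        # yields only this probability; it does not say when the decay occurs.
        return 1.0 - math.exp(-DECAY_RATE * t)

    print(prob_decayed_by(10.0))   # 0.5  -- one half-life
    print(prob_decayed_by(20.0))   # 0.75 -- two half-lives

The theory fixes these probabilities exactly, but nothing in the formalism picks out the moment of the decay.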

Einstein reacted to this apparent indeterminism by proposing that there would be a future discovery of laws about as-yet-unknown variables that, when taken into account, would make quantum theory deterministic. David Bohm agreed with Einstein and went some way in this direction by building a revision of quantum theory, but his interpretation has not succeeded in moving the needle of scientific opinion.

Physicists normally wish to assume that our universe’s total information is conserved over time. At no time is information created, and at no time is any information destroyed. All the universe’s information was present at the Big Bang, and it persists today. According to George Musser:

Information preservation is a synonym for determinism…. This comes with the important caveat that the information we’re talking about is the global quantum state, which evolves according to the Schrödinger equation. …Any subsystem of the universe will see information generation or destruction. (Scientific American, Jan. 2023, p. 6)

This wish for information conservation cannot be satisfied if the Copenhagen Interpretation of quantum theory must be accepted.

The Copenhagen Interpretation

The earliest widely accepted interpretation of quantum theory was the product of Niels Bohr and his colleagues in the 1920s. It is called the Copenhagen Interpretation because Bohr lived in Copenhagen. According to its advocates, it has implications about time reversibility, determinism, the conservation of information, locality, the principle that causes affect the future and not the past, and the reality of the world independently of its being observed—namely, that they all fail.

In the famous two-slit experiment, an electron shot toward an otherwise impenetrable plate might pass through it by entering through the plate’s left slit or a parallel right slit. Unlike macroscopic objects such as bullets entering through the slit in a steel wall, the electron is understood in the Copenhagen Interpretation as going through both slits at the same time, then interfering with itself on the other side and contributing to a distinctive pattern of dots on the optical screen behind the plate. So, a particle can be in multiple places at once. The optical screen is similar to a computer monitor that displays a pixel-dot when and where an electron collides with it. The interference pattern occurs even if the electrons are shot at the optical screen only once per second. Their collective behavior over time looks as if electrons interfere with other electrons in the manner called diffraction, a clear indicator of behaving like a wave.

But the interference does not occur if the electrons are actively observed during the experiment by, say, a light being shined on each slit to see which slit each electron went through. Then the electrons behave as tiny bullets, and the screen shows two simple bands of dots rather than an interference pattern.

Comparison of the two situations has led a great many researchers to conclude that, when an electron is not observed at the moment of passing through the slits, it passes through both slits; when observed, it passes through only one. This interference vs. non-interference on the screen has been repeatedly confirmed experimentally.
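
The contrast between the two cases can be conveyed with a toy calculation: when no which-slit information exists, the two slits’ amplitudes add before squaring; when the path is observed, the probabilities add instead. The equal unit amplitudes below are an illustrative simplification:

    import cmath

    def intensity(phase_difference: float, observed: bool) -> float:
        # Toy two-slit model with equal unit amplitudes from the two slits.
        a1 = 1.0 + 0j
        a2 = cmath.exp(1j * phase_difference)
        if observed:
            # Which-slit information present: probabilities add; no interference.
            return abs(a1) ** 2 + abs(a2) ** 2   # always 2
        # No which-slit information: amplitudes add first, then square.
        return abs(a1 + a2) ** 2                 # varies between 0 and 4

    print(intensity(0.0, observed=False))       # 4.0 -- a bright fringe
    print(intensity(cmath.pi, observed=False))  # ~0.0 -- a dark fringe
    print(intensity(cmath.pi, observed=True))   # 2.0 -- no fringes anywhere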

To explain the experiment, Bohr proposed an anti-realist interpretation of the world by saying there is no way the world is when it is not being observed. Eugene Wigner, a Nobel Prize winning physicist, stressed that according to quantum theory there is a definite reality only when a conscious being is observing it. This prompted Einstein to ask a supporter of Bohr’s interpretation whether he really believed that the moon exists only when it is being looked at.

The two-slit experiment has caused philosophers of physics to disagree about what quantum theory implies about what an object is, what it means for an object to have a location, how an object maintains its identity over time, and whether consciousness of the measurer is required in order to make reality become determinate and not “fuzzy” or “blurry.”

In regard to the principle that causes affect the future and not the past, Princeton physicist John Wheeler famously remarked in his 1983 book Quantum Theory and Measurement: “Equipment operating in the here and now has an undeniable part in bringing about that which appears to have happened.”

Measurement

According to the Copenhagen Interpretation, during the measurement process the wave function describing the state “collapses” instantaneously and so discontinuously. Confirming this claim via an experiment faces the obstacle that no measurement can detect such a short interval of time:

Yet what we do already know from experiments is that the apparent speed at which the collapse process sweeps through space, cleaning the fuzz away, is faster than light. This cuts against the grain of relativity in which light sets an absolute limit for speed (Andrew Pontzen).

According to the Copenhagen Interpretation, during any measurement, the initial state of the system changes so abruptly after the measurement that, from the new state, the prior state cannot be deduced. Different initial states may transition into the same final state. So, time reversibility fails.

When a measurement occurs, it is almost correct to explain this as follows: At the beginning of the measurement, the system “could be in any one of various possibilities, we’re not sure which.” But not quite. Strictly speaking, before the measurement is made the system is in a superposition of multiple states, one for each possible outcome of the measurement, with each outcome having a fixed probability of occurring; and the measurement itself is a procedure that removes the superposition and realizes just one of those states. Informally, this is sometimes summarized in the remark that measurement turns the situation from fuzzy to definite.

For an instant, a measurement on an electron can say it is there at this specific place, but immediately afterward it becomes fuzzy again, and once again there is no single truth about where an electron is precisely, but only a single truth about the probability for finding the electron in some region, if a sufficiently precise measurement were made.
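
A minimal sketch of this transition from fuzzy to definite for a two-outcome measurement, using the standard Born rule (squared amplitudes give the outcome probabilities); the amplitudes below are made-up illustrative values:

    import numpy as np

    state = np.array([0.6, 0.8j])   # a superposition: amplitudes for outcomes 0 and 1
    probs = np.abs(state) ** 2      # Born rule: [0.36, 0.64], summing to 1

    rng = np.random.default_rng(0)
    outcome = rng.choice([0, 1], p=probs)   # the measurement realizes one outcome
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0                # afterward the state is definite, not fuzzy
    print(probs, outcome, collapsed)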

Many opponents of the Copenhagen Interpretation have reacted this way:

In the wake of the Solvay Conference (in 1927), popular opinion within the physics community swung Bohr’s way, and the Copenhagen approach to quantum mechanics settled in as entrenched dogma. It’s proven to be an amazingly successful tool at making predictions for experiments and designing new technologies. But as a fundamental theory of the world, it falls woefully short (Sean Carroll).

George Ellis, co-author with Stephen Hawking of the definitive book The Large-Scale Structure of Space-Time, identifies a key difficulty with our understanding of quantum measurement in interpretations that imply the wave function collapses during measurement: “Usually, it is assumed that the measurement apparatus does not obey the rules of quantum theory, but this [assumption] contradicts the presupposition that all matter is at its foundation quantum mechanical in nature.”

Those who want to avoid having to bring consciousness of the measurer into quantum physics and who want to restore time-reversibility and determinism typically recommend adopting a different interpretation of quantum mechanics that changes how measurement is understood. Einstein had a proposal, the Hidden Variable Interpretation. He hoped that by adding new laws specifying the behavior of so-called “hidden variables” affecting the system, determinism, time-reversibility, and information conservation would be restored, and there would be no need to speak of a discontinuous collapse of the wave function during measurement. Einstein’s proposal never gathered much support.

The Many-Worlds Interpretation, also called the Everettian Interpretation, is one of the most popular replacements for the Copenhagen Interpretation. This proposal removes the radical distinction between the measurer and what is measured and replaces it with a continuously evolving wave function for the combined system of measurement process plus measurer. The laws of the Many-Worlds Interpretation are time-reversible and deterministic, and there is no need for the anti-realist stance. The interpretation implies that, during any measurement having some integer number n of possible outcomes, the universe splits instantaneously into n copies of itself, each with a different outcome. If our own measuring apparatus shows 8 as the outcome, then the counterparts of us living in the other universes see outcomes other than 8. Clearly, the weirdness of the Copenhagen Interpretation has been traded for a new kind of weirdness.

In the Many-Worlds Interpretation, there is no access from one universe to another. They exist “in parallel.” Information is conserved in the multiverse as a whole, but not within any single universe. If we had access to all information about all the many worlds (the multiverse’s wave function) and had unlimited computational capacity, then we could see that the multiverse of many worlds evolves deterministically and time-reversibly and that the wave function for the multiverse never collapses discontinuously. A measurement need not involve a conscious measurer, nor even a measurement apparatus. A measurement does not produce a discontinuous transition of the wave function, a discontinuous change from one state of the universe to another. In a single universe, the ideally best available information provides only the probability of a measurement outcome, a probability that must be less than 1. So, in this sense, probability remains at the heart of our world. (By the way, this multiverse theory requires a different multiverse from the multiverse theory of chaotic inflation for the Big Bang, which is described in the section on the Big Bang below.)

What the Copenhagen Interpretation calls fuzziness or a superposition of states, Everett calls a superposition of alternate universes.

Some experts say quantum theory is not directly about reality but rather is merely a tool for making measurements. This is an instrumentalist proposal. Experts do not agree on whether the quantum wave function is a representation of reality or instead a representation of our possible knowledge of reality. Wave functions

might simply characterize our knowledge; in particular, the incomplete knowledge we have about the outcome of future quantum measurements. This is known as the ‘epistemic’ approach to quantum mechanics as it thinks of wave functions as capturing something about what we know, as opposed to ‘ontological’ approaches that treat the wave function as describing objective reality (Carroll 2019, 197-8).

And there is no consensus on whether we currently possess the fundamental laws of quantum theory, as Everett believed, or instead only an incomplete version of the laws, as Einstein believed.

Heisenberg’s Uncertainty Principle 

The Heisenberg uncertainty principle for time and energy implies that the uncertainties in the simultaneous measurements of time and energy in either energy emission or energy absorption must obey the inequality ΔE Δt ≥ h/4π. Here ΔE is the (standard deviation of the) uncertainty in the energy, Δt is the uncertainty in the duration of the measurement of energy, and h is Planck’s constant. These uncertainties are produced over a collection of measurements because any single measurement has (in principle and not counting practical measurement error) a precise value and is not “fuzzy” or uncertain. Repeated measurements necessarily produce a spread in values that reveal the fuzzy, wavelike characteristics of the phenomenon being measured, and these measurements collectively obey the Heisenberg inequality. Philosophers of physics do not agree on whether Δt is a lack of precision in nature herself, as the Copenhagen Interpretation implies, or is a lack of knowledge of precise results in measurements or some inevitable disturbance during measuring. Heisenberg himself thought of his uncertainty principle as being about how the measurer necessarily disturbs the measurement.
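
A worked instance of the inequality, with the energy uncertainty ΔE set, purely for illustration, to about the rest energy of an electron-positron pair (roughly 1.6 × 10⁻¹³ J):

    \[
      \Delta t \,\ge\, \frac{h}{4\pi\,\Delta E}
      = \frac{6.63\times 10^{-34}\ \mathrm{J\,s}}{4\pi \times 1.6\times 10^{-13}\ \mathrm{J}}
      \approx 3\times 10^{-22}\ \mathrm{s}
    \]

On the heuristic reading used in the next sub-section, this figure of about 10⁻²² seconds is roughly the time scale for which an energy fluctuation of that size can persist.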

One very significant implication of these remarks about the uncertainty principle for time and energy is that there can be violations of the classical law of the conservation of energy. That law says the total energy of a closed and isolated system is always conserved and can only change its form but not disappear or increase. A falling rock has energy of motion during its fall to the ground, but when it collides with the ground, the energy changes its form by heating the ground and the rock and creating the sound energy of the collision. This classical law can be violated by an amount ΔE for a time Δt, as described by Heisenberg’s Uncertainty Principle. Quantum theory does contain a more sophisticated law of conservation of energy than this. The classical law is often violated for very short time intervals and is less likely to be violated as the time interval increases. Some philosophers of physics have described this as something coming from nothing. The quantum vacuum, however, is not really nothing, as is explained in the following sub-section.

Quantum Foam

Quantum theory appears to allow so-called “virtual particles” to be created out of the quantum vacuum. These particles are real, but they borrow energy from the vacuum and pay it back very quickly. What happens is that, when a pair of energetic virtual particles—say, an electron and anti-electron—are created from the vacuum, the two exist for only a very short time before being annihilated or reabsorbed, thereby giving back their borrowed energy. The greater the energy of the virtual pair, the shorter the time interval that the two exist before being reabsorbed, as described by Heisenberg’s Uncertainty Principle. The physicist John Wheeler first suggested that the ultramicroscopic structure of spacetime for periods on the order of the Planck time (about 5.4 × 10⁻⁴⁴ seconds) in regions about the size of the Planck length (about 1.6 × 10⁻³⁵ meters) probably is a quantum foam of rapidly changing curvature of spacetime, with black holes and virtual particle-pairs and perhaps wormholes rapidly forming and dissolving.

The Planck time is the time it takes light to travel a Planck length. The terms Planck length and Planck time were inventions of Max Planck in the early twentieth century during his quest to find basic units of length and time that could be expressed in terms only of universal constants. He defined the Planck unit of time algebraically as √(ħG/c⁵), where √ is the square root symbol; ħ is Planck’s constant in quantum theory divided by 2π; G is the gravitational constant in Newtonian mechanics; and c is the speed of light in a vacuum in relativity theory. Three different theories of physics are tied together in this one expression. The Planck time is a theoretically interesting unit of time, but not a practical one. No known experimental procedure can detect events that are this brief.
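
A minimal computation of the Planck time from the three constants the expression ties together, using standard rounded values:

    import math

    hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
    G = 6.674_30e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
    c = 2.997_924_58e8        # speed of light in a vacuum, m/s

    planck_time = math.sqrt(hbar * G / c**5)
    planck_length = c * planck_time   # light crosses one Planck length per Planck time
    print(planck_time)     # ~5.39e-44 seconds
    print(planck_length)   # ~1.62e-35 meters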

There are no isolated particles according to quantum mechanics. Every ordinary elementary particle is surrounded by a cloud of virtual particles. Many precise experiments can be explained only by assuming there is this cloud.

So far, this article has spoken of virtual particles as if they are ordinary, but short-lived, particles. This is not quite correct. Virtual particles are not exactly particles like the other particles of the quantum fields. Both are excitations of these fields, and both have gravitational effects and thus effects on time, but virtual particles are not equivalent to ordinary quantum particles, although the longer-lived ones are more like ordinary particle excitations than the shorter-lived ones.

Virtual particles are just a way to calculate the behavior of quantum fields, by pretending that ordinary particles are changing into weird particles with impossible energies, and tossing such particles back and forth between themselves. A real photon has exactly zero mass, but the mass of a virtual photon can be absolutely anything. What we mean by “virtual particles” are subtle distortions in the wave function of a collection of quantum fields…but everyone calls them particles [in order to keep their names simple] (Carroll 2019, p. 316).

Entanglement

Classical theories imply locality. An object is directly influenced only by its immediate surroundings. Quantum theory seems to imply the universe is not local. One particle can affect a distant particle instantly. Einstein called this “spooky action-at-a-distance”. It is due to quantum entanglement.

If two particles are entangled, this does not mean that if you move one of them then the other one moves, too. It is not that kind of entanglement.

Ontologically, the key idea about quantum entanglement is that if a particle becomes entangled with one or more other particles within the system, then it loses some of its individuality. The whole system is more than the sum of its sub-parts. A quantum measurement of one member of an entangled pair of particles will instantaneously determine the value of any similar measurement that will eventually be made on the other member of the pair, no matter how far away it is in space and time. The entanglement is produced locally, but the entanglement is not a local feature; it persists as the two particles fly off in different directions. Even though it is correct to describe the situation as action at a distance, the entanglement is only about correlation, and it cannot be used to cause information to be transferred instantaneously from one place to another. That feature of special relativity is preserved in quantum theory.

Speaking about entanglement in 1935, Erwin Schrödinger said:

Measurements on (spatially) separated systems cannot directly influence each other—that would be magic.

Einstein agreed. Yet the magic seems to exist. With entangled pairs, there is instantaneous, coordinated behavior across great distances.

To explore this “magical” feature of the quantum world, let’s separate the two entangled particles by a great distance and measure their spins at close to the same time. This way, the first measurement outcome cannot have directly affected the second measurement outcome via some ordinary signal sent between them, because the signal would have had to move faster than light to arrive by the time the second measurement is made. The transmission of coordinated behavior happens in zero time. It is hard for those of us influenced by the manifest image to believe that the two particles did not start out with the spins that they were later measured to have, but careful observations have repeatedly confirmed this nonlocality. It has been shown repeatedly that the assumption that the two entangled particles started out with the same spins is inconsistent with the data produced in the experiment. Some researchers have concluded that, because quantum theory implies that non-locality occurs almost everywhere, non-locality is the default, and what needs to be explained is any occurrence of locality.
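
The failure of the started-out-with-those-spins assumption can be sketched numerically. For an entangled singlet pair, quantum theory predicts the spin-measurement correlation E(a, b) = −cos(a − b) for detector angles a and b, while any account on which the outcomes were fixed in advance by local properties must keep the CHSH combination of correlations at or below 2. A minimal sketch with the standard CHSH angle choices:

    import math

    def correlation(a: float, b: float) -> float:
        # Quantum prediction for the spin correlation of a singlet pair
        # measured at detector angles a and b (in radians).
        return -math.cos(a - b)

    # Standard CHSH angle choices.
    a1, a2 = 0.0, math.pi / 2
    b1, b2 = math.pi / 4, 3 * math.pi / 4

    S = abs(correlation(a1, b1) - correlation(a1, b2)
            + correlation(a2, b1) + correlation(a2, b2))
    print(S)        # ~2.83, that is, 2*sqrt(2)
    print(S <= 2)   # False: exceeds the bound that pre-set spins must satisfy

Experiments agree with the quantum value, which is why the assumption of pre-set spins is inconsistent with the data.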

Approximate Solutions

Like the equations of the theory of relativity, the equations of quantum theory are very difficult to solve and use except in very simple situations. The equations cannot be used directly in today’s computers. There have been many Nobel-Prize-winning advances in chemistry from finding methods of approximating quantum theory in order to simulate the results of chemical activity with a computer. For one example, Martin Karplus won the Nobel Prize for chemistry in 2013 for creating approximation methods for computer programs that describe the behavior of the retinal molecule in our eye’s retina. It has almost 160 electrons, but he showed that, for describing how light strikes the molecule and begins the chain reaction that produces the electrical signals that our brain interprets during vision, chemists need to pay attention only to the molecule’s outer electrons, that is, to the electron clouds that are farthest out from the nucleus.

a. The Standard Model

The Standard Model of particle physics was proposed in the 1970s, and subsequently it has been revised and well tested. The Model is designed to describe elementary particles and the physical laws that govern them. The Standard Model is really a loose collection of theories about different particle fields, and it describes all known non-gravitational fields. It is our civilization’s most precise and powerful theory of physics.

The theory sets limits on what exists and on what can happen. It implies that a particle can be affected by some forces but not others. It implies that a photon cannot decay into two photons. It implies that protons attract electrons and never repel them. It also implies that every proton consists in part of two up quarks and one down quark that interact with each other by exchanging gluons. The gluons “glue” the quarks together via the strong nuclear force, just as photons glue electrons to protons via the electromagnetic force. Gravitons, the hypothesized carrier particles for gravity, would glue a moon to a planet and a planet to a star. Unlike how Isaac Newton envisioned forces, all forces are transmitted by particles; that is, all forces have carrier particles that “carry” the force from one place to another. The gluons are massless and transmit the strong force at nearly light speed; this force “glues” the quarks together inside a proton. More than 90% of the mass of the proton consists of a combination of virtual quarks, virtual antiquarks, and virtual gluons. Because these particles exist over only very short time scales, they are too difficult to detect by any practical experiment, which is why they are called “virtual” particles. The word “virtual,” however, does not imply “not real.”

The only properties of a spacetime point that serve to distinguish one particle from another are the point’s values for mass, spin, and charge. Nothing else. There are no other differences among what exists at a point, so in that sense fundamental physics is very simple. Charge, though, is not simply electromagnetic charge: there are also three kinds of color charge for the strong nuclear force, and two kinds of charge for the weak nuclear force.

Except for gravity, the Standard Model describes all the universe’s forces. Strictly speaking, its theories are about interactions rather than forces; a force is just one kind of interaction. Another kind of interaction involves no forces but instead changes one kind of particle into another kind. The neutron, for example, changes its appearance depending on how it is probed, and the weak interaction can transform a neutron into a proton. It is because of transformations like this that the concepts of something being made of something else, and of one thing being a part of a whole, become imprecise over very short durations and distances. So, classical mereology—the formal study of parts and the wholes they form—fails.

Interaction in the field of physics is very exotic. When a particle interacts with another particle, the two particles exchange other particles, the so-called carriers of the interactions. So, when milk is spilled onto the floor, what is going on is that the particles of the milk and the particles in the floor and the particles in the surrounding air exchange a great many carrier particles with each other, and the exchange is what is called “spilling milk onto the floor.” Yet all these varied particles are just tiny fluctuations of fields. This scenario indicates one important way in which the scientific image has moved very far away from the manifest image.

According to the Standard Model, though not according to general relativity theory, all particles must move at light speed c unless they interact with other fields. All the particles in your body, such as its protons and electrons, would move at speed c if they were not continually interacting with the Higgs Field, a fundamental field. The Higgs Field can be thought of as being like a sea of molasses that slows down all protons and electrons and gives them the mass and inertia they have. Neutrinos are not affected by the Higgs Field, but they move slightly slower than c because they are slightly affected by the weak interaction.

As of the first quarter of the twenty-first century, the Standard Model is incomplete because it cannot account for gravity, dark matter, dark energy, or the fact that there is more matter than antimatter. If a new version of the Standard Model ever does all this, it will perhaps become the long-sought “theory of everything.”


4. Big Bang

The classical big bang theory implies that the universe once was extremely small, dense, hot, nearly uniform, and expanding; and it had extremely high energy density and severe curvature of its spacetime. Now the universe has lost all these properties except one: it is still expanding. Some cosmologists believe time began with the big bang, at the famous cosmic time t = 0, but the big bang theory itself does not imply anything about when time began, nor whether anything was happening before the big bang, although those features could be added into a revised theory of the big bang.

The big bang explosion was a rapid expansion of space itself, not an expansion of something into a pre-existing void. However, the big bang theory is a theory of the observable universe, not of the whole universe. The observable universe is the part of the universe that is in principle observable by creatures on Earth. Scientists have no direct evidence about the universe as a whole; it might or might not resemble the observable universe. So, it is more accurate to say the classical big bang theory implies that the observable universe once was extremely small, dense, hot, and so forth.

The big bang theory was controversial when it was proposed in the 1920s, but it was finally considered confirmed by the 1970s. Its primary competitor during this period was the steady state theory, which allows space to expand in volume but compensates by positing the spontaneous creation of matter so that the universe’s density stays constant. This spontaneous creation violates the increasingly well-confirmed principle of the conservation of energy.

Before the 1960s, physicists were unsure whether proposals about cosmic origins were pseudoscientific and so should not be discussed in a well-respected physics journal.

The explosion began 13.8 billion years ago. At that time, the observable universe would have had an ultramicroscopic volume. The explosion created new space, and this process continues to create new space. In fact, in 1998, the classical theory of the big bang was revised to say the expansion rate is not constant but has been accelerating slightly for the last five billion years due to the pervasive presence of dark energy. Dark energy has this name because so little is known about it other than that its amount per unit volume stays constant as space expands. That is, it does not dilute.

The big bang theory in some form or other (with or without inflation) is accepted by nearly all cosmologists, astronomers, astrophysicists, and philosophers of physics, but it is not as firmly accepted as is the theory of relativity. The big bang theory originated with several people, although Edwin Hubble’s 1929 observations of galaxies receding from us were the most influential in its gaining recognition among cosmologists. In 1922, the Russian physicist Alexandr Friedmann discovered that the general theory of relativity allows an expanding universe. Unfortunately, Einstein reacted to the discovery by saying this was a mere physical possibility and not a feature of the actual universe. The Belgian physicist Georges Lemaître suggested in 1927 that there is some evidence the universe is expanding, and he defended his claim using previously published measurements showing a pattern: the greater a galaxy’s distance from Earth, the greater its speed away from Earth. He calculated these speeds from the Doppler shifts in the galaxies’ light frequencies. In 1929, the American astronomer Edwin Hubble carefully recorded clusters of galaxies moving away from each other in a fairly regular manner, with the farther galaxies receding at faster speeds. These observations were crucial in persuading scientists to accept what is now called the big bang theory of the universe.

Currently, space is expanding because most clusters of galaxies are flying away from each other, even though molecules, planets, and galaxies themselves are not now expanding. Eventually, according to the most popular version of the big bang theory, in the very distant future, even these objects will expand away from each other and all structures of particles will be annihilated, leaving only an expanding soup of elementary particles as the universe approaches thermodynamic equilibrium.

The acceptance of the theory of relativity has established that space curves near all masses. However, the theory of relativity has no implications about the curvature of space at the cosmic level. The universe presumably has no edge, but the observable universe does. The observable universe is a sphere containing about 350 billion large galaxies; it is called “our Hubble Bubble” and also “our pocket universe.” Its diameter is about 93 billion light-years, and it grows larger every day.

The big bang theory presupposes that the ultramicroscopic-sized observable universe at a very early time had an extremely large curvature, but most cosmologists believe that the universe has straightened out and now no longer has any spatial curvature on the largest scale of billions of light years. Also, astronomical observations reveal that the current distribution of matter in the universe tends towards uniformity as the scale increases. At very large scales it is homogeneous and isotropic. The version of the big bang theory called “inflation theory” is a popular method of explaining these features of our universe at the cosmic level.

Here is a picture that displays the evolution of the observable universe since the big bang. Time increases to the right, while space increases up, down, and into and out of the picture. (The picture shows only two of the three spatial dimensions of our universe.)

[big bang graphic]

Attribution: NASA/WMAP Science Team

The term big bang does not have a precise definition. It does not always refer to a single, first event; rather, it more often refers to a brief duration of early events as the universe underwent a rapid expansion. In fact, the idea of a first event is primarily a product of accepting the theory of relativity, which is known to fail in the limit as the universe’s volume approaches zero, the so-called singularity. Actually, the big bang theory itself is not a specific theory, but rather a framework for more specific big bang theories.

Astronomers on Earth detect microwave radiation arriving in all directions. It is the light produced about 380,000 years after the big bang, when the universe turned transparent for the first time, so it gives us a picture of the universe in its infancy. The radiation began its journey toward Earth then because the universe had cooled to about 3,000 degrees Kelvin, which was cool enough to form atoms and to allow photons, for the first time, to move freely without being immediately reabsorbed by neighboring particles. This primordial electromagnetic radiation has now reached Earth as the universe’s most ancient light, though it is no longer bright, nor is it light of its original frequency. Because of space’s expansion during the light’s travel to Earth, it has dimmed, and its wavelength has stretched into microwave radiation with a corresponding temperature of 2.728 kelvins, that is, 2.728 degrees above absolute zero (the coldest possible temperature). The microwaves’ wavelength is about two millimeters, small compared to the roughly 120-millimeter wavelength of the microwaves in our kitchen ovens. Measuring this incoming Cosmic Microwave Background (CMB) radiation reveals it to be extremely uniform in all directions in the sky.
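A quick back-of-the-envelope check of these figures, using only the two temperatures just quoted (a Python sketch added for illustration):

```python
# The universe has stretched, linearly, by roughly the ratio of the
# temperature when it turned transparent to the CMB temperature today.
T_then = 3000.0   # kelvins, when the first atoms formed
T_now = 2.728     # kelvins, measured CMB temperature today

stretch = T_then / T_now
print(f"Expansion factor since the universe became transparent: about {stretch:.0f}")  # ~1100

# Wavelengths stretch by the same factor, so near-infrared light of
# about 2 micrometers then arrives as millimeter microwaves now.
print(f"2 micrometers then -> {2e-6 * stretch * 1e3:.1f} millimeters now")
```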

Uniform, but not perfectly uniform: the CMB’s temperature varies very slightly with the direction from which it is viewed, by about a ten-thousandth of a degree. These temperature fluctuations in different directions are traces of ultramicroscopic fluctuations in the density of material very early in the big bang process. Those early, small fluctuations probably began as quantum fluctuations (perhaps in the inflaton field), and they are probably the origin of what later became the galaxies and the voids between galaxies. Probably all the large-scale structure in today’s universe was triggered by primeval quantum uncertainty.

After inflation ended, the universe’s expansion rate did not drop to zero. It became comparatively low, but it has not been constant. The rate accelerates slightly because there is another source of expansion: the repulsion of dark energy. The influence of dark energy was insignificant for billions of years, but its key feature is that it does not dilute as space expands. So, after about seven billion years of the universe’s expansion following the big bang, dark energy became an influential factor and started to significantly accelerate the expansion. Today its influence keeps growing: the diameter of the observable universe now doubles about every 10 billion years. This influence from dark energy is shown in the above diagram by the curvature that occurs just below and before the abbreviation “etc.” Future curvature will be much greater. Most cosmologists believe this dark energy is the energy of space itself.
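That doubling figure can be checked from the measured expansion rate. Here is a small sketch, assuming a Hubble constant of about 70 km/s per megaparsec (an illustrative value): for exponential, dark-energy-driven expansion, distances double every ln 2/H.

```python
import math

H0_km_s_Mpc = 70.0        # assumed Hubble constant, km/s per megaparsec
km_per_Mpc = 3.086e19     # kilometers in one megaparsec
seconds_per_year = 3.156e7

H0 = H0_km_s_Mpc / km_per_Mpc            # expansion rate in 1/seconds
doubling_years = math.log(2) / H0 / seconds_per_year
print(f"Doubling time: about {doubling_years / 1e9:.0f} billion years")  # ~10
```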

The initial evidence for dark energy came from 1998 observations of the Doppler shifts of supernovas. These observations are best explained by the assumption that distances between supernovas are increasing at an accelerating rate. Because of this acceleration, any receding galaxy cluster that is now 100 million light-years away from our Milky Way will be more than 200 million light-years away in another 13.8 billion years, and it will be moving away from us much faster than it is now. One day, it will be moving away so fast that it becomes invisible, its recession speed exceeding the speed of light. In enough time, every galaxy other than the Milky Way will become invisible. After that, all the stars in the Milky Way will gradually become invisible, with the more distant ones disappearing first. We will lose sight of all our neighbors. In that sense, future astronomers will never see more than astronomers can see now.

Regarding the universe’s expansion, atoms are not currently expanding. They are held together tightly by the electromagnetic force and the strong force (with a little help from the weak force and gravity), which overpower the current value of the repulsive force of dark energy, or whatever it is that is causing the expansion of space. What is expanding now is the average distance between clusters of galaxies. The clusters are exploding away from each other, and, in the future, they will be very much farther apart. As Sean Carroll puts it, the “idea that the universe is overall expanding is only true on the largest scales. It’s an approximation that gets better and better as you consider galaxies that are farther and farther away.”

Eventually, though, as the rate of expansion escalates, all clusters of galaxies will become torn apart. Then galaxies themselves will become torn apart, then all solar systems, and ultimately even molecules and atoms and all other configurations of elementary particles. We approach the heat death of the universe.

Why does the big bang theory say space exploded instead of saying matter-energy exploded into a pre-existing space? This is a subtle issue. If it had said matter-energy exploded but space did not, then there would be uncomfortable questions: Where is the point in space that it exploded from, and why that point? Picking one would be arbitrary. And there would be these additional uncomfortable questions: How large is this pre-existing space? When was it created? Experimental observations clearly indicate that some clusters of galaxies must be separating from each other faster than the speed of light, but adding that they do this because they are moving that fast within a pre-existing space would require an ad hoc revision of the theory of relativity to make exceptions to Einstein’s speed limit. So, it is much more “comfortable” to say the big bang is an explosion of space, not an explosion of matter-energy within space.

The term “our observable universe” and the synonymous term “our Hubble Bubble” refer to everything that a person on Earth could in principle observe. Cosmologists presume that there are distant places in the universe from which an astronomer could see things that are not visible from Earth. Physicists agree that, because of this reasoning, there exist objects that are in the universe but not in our observable universe. Because those unobservable objects are also products of our big bang, cosmologists assume that they are similar to the objects we on Earth can observe—that those objects form atoms and galaxies, and that time behaves there as it does here. But there is no guarantee that this convenient assumption is correct.

Because the big bang happened about 14 billion years ago, you might think that no visible object can be more than 14 billion light-years from Earth, but that would be a mistake; it fails to take into account that the universe has been expanding all that time. The increasing separation of clusters of galaxies over the last 14 billion years is why astronomers can see about 45 billion light-years in any direction, not merely 14 billion light-years.
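That 45-billion-light-year figure can be estimated by integrating the universe’s expansion history. Here is a rough Python sketch, assuming standard present-day parameter values (a matter fraction of 0.3, a dark-energy fraction of 0.7, and a Hubble constant of 70 km/s per megaparsec, all illustrative) and neglecting radiation:

```python
import numpy as np
from scipy.integrate import quad

# Assumed cosmological parameters, for illustration only.
H0 = 70.0              # Hubble constant, km/s per megaparsec
Om, OL = 0.3, 0.7      # matter and dark-energy fractions
c = 299792.458         # speed of light, km/s

def E(z):
    # Dimensionless expansion rate H(z)/H0, neglecting radiation.
    return np.sqrt(Om * (1 + z)**3 + OL)

# Comoving distance to the oldest light we can receive,
# integrating back to very high redshift.
integral, _ = quad(lambda z: 1.0 / E(z), 0.0, 1.0e4)
distance_Mpc = (c / H0) * integral
distance_Gly = distance_Mpc * 3.262e6 / 1e9  # 1 Mpc is about 3.262 million light-years
print(f"Radius of the observable universe: about {distance_Gly:.0f} billion light-years")
# Prints about 44; the commonly quoted figure is roughly 45.
```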

When contemporary physicists speak of the age of our universe and of the time since our big bang, they are implicitly referring to cosmic time measured in the cosmological rest frame. This is time measured in a unique reference frame in which the average motion of all the galaxies is stationary and the Cosmic Microwave Background radiation is as close as possible to being the same in all directions. This frame is not one in which the Earth is stationary. Cosmic time is time measured by a clock that would be sitting as still as possible while the universe expands around it. In cosmic time, t = 0 years is when the big bang began, and t = 13.8 billion years is our present. If you were at rest at the spatial origin of this frame, then the Cosmic Microwave Background radiation on a very large scale would have about the same average temperature in any direction; the moment the universe first became transparent to light is marked by the ring in the radial diagram above. Back when the universe was smaller than it is now, about 100 million light-years across, its matter was distributed about uniformly. At that scale, it is as if all the galaxies were dust particles floating in a large room: at the center of the room, the distribution of dust in one direction is the same as in any other direction, and any region of the room contains as much dust as any other region. On a finer scale, the matter in the universe is, of course, unevenly distributed.

The cosmic rest frame is a unique reference frame that is privileged for astronomical convenience, but there is no reason to suppose it is otherwise privileged. It is not the frame sought by the A-theorist who believes in a unique present, nor by Isaac Newton, who believed in absolute rest, nor by James Clerk Maxwell, who believed in an aether that is at rest and that waves whenever a light wave passes through it.

The cosmic frame’s spatial origin point is described as follows:

In fact, it isn’t quite true that the cosmic background heat radiation is completely uniform across the sky. It is very slightly hotter (i.e., more intense) in the direction of the constellation of Leo than at right angles to it…. Although the view from Earth is of a slightly skewed cosmic heat bath, there must exist a motion, a frame of reference, which would make the bath appear exactly the same in every direction. It would in fact seem perfectly uniform from an imaginary spacecraft traveling at 350 km per second in a direction away from Leo (towards Pisces, as it happens)…. We can use this special clock to define a cosmic time…. Fortunately, the Earth is moving at only 350 km per second relative to this hypothetical special clock. This is about 0.1 percent of the speed of light, and the time-dilation factor is only about one part in a million. Thus to an excellent approximation, Earth’s historical time coincides with cosmic time, so we can recount the history of the universe contemporaneously with the history of the Earth, in spite of the relativity of time.

Similar hypothetical clocks could be located everywhere in the universe, in each case in a reference frame where the cosmic background heat radiation looks uniform. Notice I say “hypothetical”; we can imagine the clocks out there, and legions of sentient beings dutifully inspecting them. This set of imaginary observers will agree on a common time scale and a common set of dates for major events in the universe, even though they are moving relative to each other as a result of the general expansion of the universe…. So, cosmic time as measured by this special set of observers constitutes a type of universal time… (Davies 1995, pp. 128-9).

It is a convention that cosmologists agree to use the cosmic time of this special reference frame, but it is an interesting fact and not a convention that our universe is so organized that there is such a useful cosmic time available to be adopted by the cosmologists. Not all physically possible spacetimes obeying the laws of general relativity can have such a cosmic time.

a. Cosmic Inflation

According to one somewhat popular revision of the classical big bang theory, the cosmic inflation theory, the universe was created from quantum fluctuations of an inflaton field. For some unknown reason, the field underwent a cosmological phase transition, and, again for some unknown reason, the inflation stopped very soon after it began, although it did not stop everywhere at the same time. Afterwards, the universe continued expanding at a more or less constant rate.

By the time that inflation was over, every particle was left in isolation, surrounded by a vast expanse of empty space extending in every direction. And then—only a fraction of a fraction of an instant later—space was once again filled with matter and energy. Our universe got a new start and a second beginning. After a trillionth of a second, all four of the known forces were in place, and behaving much as they do in our world today. And although the temperature and density of our universe were both dropping rapidly during this era, they remained mind-bogglingly high—all of space was at a temperature of 10¹⁵ degrees. Exotic particles like Higgs bosons and top quarks were as common as electrons and photons. Every last corner of space teemed with a dense plasma of quarks and gluons, alongside many other forms of matter and energy. After expanding for another millionth of a second, our universe had cooled down enough to enable quarks and gluons to bind together forming the first protons and neutrons (Dan Hooper, At the Edge of Time, p. 2).

About half of cosmologists do not believe in cosmic inflation; they hope there is another explanation of the phenomena that inflation theory explains. The theory provides an explanation for why there are not point-like magnetic monopoles almost everywhere (the monopole problem), why the microwave radiation that arrives on Earth from all directions is so uniform (the cosmic horizon problem), why we have been unable to detect proton decay (the proton decay problem), and why there is currently so little curvature of space (the flatness problem). It is difficult to solve these problems in any way other than by assuming inflation.

The theory of cosmic strings has been the major competitor to the theory of cosmic inflation, but the above problems are more difficult to solve with strings and without inflation. Crudely, we can say the big bang theory is considered to be confirmed, but the theory of inflation is still unconfirmed. Princeton cosmologist Paul Steinhardt and Nobel Prize winner Roger Penrose are two of inflation’s noteworthy opponents.

According to the theory of inflation, assuming the big bang began at time t = 0, the epoch of inflation (the epoch of radically repulsive gravity) began at about t = 10⁻³⁶ seconds and lasted until about t = 10⁻³³ seconds, during which time the volume of space increased by a factor of 10²⁶, and almost all of any initial unevenness in the distribution of energy was smoothed out, at least from the large-scale perspective, in analogy to how blowing up a balloon removes its initial folds and creases.
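For a feel for these numbers, here is a small illustrative calculation (assuming, for the sketch, that the growth was exponential): a growth factor of 10²⁶ corresponds to ln(10²⁶), or about 60, “e-folds” of expansion.

```python
import math

# If some measure of the universe's size grew by a factor of 1e26 during
# inflation (the figure quoted above), exponential growth exp(N) = 1e26
# corresponds to N "e-folds":
N = math.log(1e26)
print(f"Number of e-folds: about {N:.0f}")  # about 60

# Spread over the stated epoch, from t = 1e-36 s to t = 1e-33 s:
duration = 1e-33 - 1e-36
H = N / duration  # implied expansion rate during inflation
print(f"Implied expansion rate: about {H:.0e} per second")
```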

To appreciate just how fast the initial inflation was, consider this analogy. Although the universe at the beginning of inflation was actually much smaller than the size of a proton, think of it instead as being the size of a marble. If so, then during the inflation period the marble expanded to a gigantic sphere whose radius was the distance that now would reach from Earth to the nearest supercluster of galaxies.

The speed of this inflationary expansion was much faster than light speed. That speed does not violate Einstein’s general theory of relativity because his theory is a local theory that places no limits on the speed of expansion of space itself.

At the end of that inflationary epoch, at roughly t = 10⁻³³ seconds, the explosive material decayed for some unknown reason, leaving only normal matter with attractive gravity. Our universe then continued to expand, although at a slow, nearly constant rate; it went into its “coasting” phase. Regardless of any previous curvature in our universe, by the time the inflationary period ended, the overall structure of space had very little spatial curvature and was extremely homogeneous. Today, we see that the universe is homogeneous on its largest scale.

But at the very beginning of the inflationary period, there surely were some very tiny imperfections due to quantum fluctuations. The densest regions attracted more material than the less dense regions, and these dense regions turned into what would eventually become galaxies. Those quantum fluctuations have left their traces in the very slight hundred-thousandth of a degree differences in the temperature of the cosmic microwave background radiation at different angles as one looks out into space from Earth.

Before inflation began, for some unknown reason the universe contained an unstable inflaton field, or false vacuum field. This energetic field underwent a spontaneous phase transition (analogous to superheated liquid water suddenly and spontaneously expanding into steam). That phase transition caused the highly repulsive primordial material to hyper-inflate exponentially in volume for a very short time. During this primeval inflationary epoch, the gravitational field’s stored negative energy was rapidly released, and all of space wildly expanded. At the end of this epoch, the highly repulsive material decayed, for some as-yet-unknown reason, into ordinary matter and energy; the universe’s expansion rate stopped increasing exponentially, dropping precipitously to a nearly constant rate. Throughout the inflationary epoch, the entropy continually increased, so the second law of thermodynamics was not violated.

Alan Guth described the inflationary period this way:

There was a period of inflation driven by the repulsive gravity of a peculiar kind of material that filled the early universe. Sometimes I call this material a “false vacuum,” but, in any case, it was a material which in fact had a negative pressure, which is what allows it to behave this way. Negative pressure causes repulsive gravity. Our particle physics tells us that we expect states of negative pressure to exist at very high energies, so we hypothesize that at least a small patch of the early universe contained this peculiar repulsive gravity material which then drove exponential expansion. Eventually, at least locally where we live, that expansion stopped because this peculiar repulsive gravity material is unstable; and it decayed, becoming normal matter with normal attractive gravity. At that time, the dark energy was there, the experts think. It has always been there, but it’s not dominant. It’s a tiny, tiny fraction of the total energy density, so at that stage at the end of inflation the universe just starts coasting outward. It has a tremendous outward thrust from the inflation, which carries it on. So, the expansion continues, and as the expansion happens the ordinary matter thins out. The dark energy, we think, remains approximately constant. If it’s vacuum energy, it remains exactly constant. So, there comes a time later where the energy density of everything else drops to the level of the dark energy, and we think that happened about five or six billion years ago. After that, as the energy density of normal matter continues to thin out, the dark energy [density] remains constant [and] the dark energy starts to dominate; and that’s the phase we are in now. We think about seventy percent or so of the total energy of our universe is dark energy, and that number will continue to increase with time as the normal matter continues to thin out. (World Science U Live Session: Alan Guth, published November 30, 2016 at https://www.youtube.com/watch?v=IWL-sd6PVtM.)

Before about t = 10⁻⁴⁶ seconds, there was a single basic force rather than the four we have now: the force of gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force. At about t = 10⁻⁴⁶ seconds, the energy density of the primordial field dropped to about 10¹⁵ GeV, which allowed spontaneous symmetry breaking (analogous to the spontaneous phase change in which steam cools enough to spontaneously change to liquid water); this phase change made the gravitational force a separate basic force. The other three forces had not yet appeared as separate forces.

Later, after inflation began and then ended, at t = 10⁻¹² seconds, there was more spontaneous symmetry breaking. First the strong nuclear force, then the weak nuclear force, and finally the electromagnetic force became separate forces. For the first time, the universe had exactly four separate forces. At t = 10⁻¹⁰ seconds, the Higgs field turned on, slowing down many kinds of particles by giving them mass so that they no longer moved at light speed.

Much of the considerable energy left over at the end of the inflationary period was converted into matter, antimatter, and radiation, such as quarks, antiquarks, and photons. The universe’s temperature escalated with this new radiation; this is called the period of cosmic reheating. Matter-antimatter pairs of particles combined and annihilated, removing all the antimatter and almost all the matter from the universe, and leaving a small amount of matter and even more radiation. At t = 10⁻⁶ seconds, the universe had cooled enough for quarks to combine into protons and neutrons. After t = 3 minutes, the universe had cooled sufficiently for these protons and neutrons to start combining, via the strong force, into hydrogen, deuterium, and helium nuclei. At about t = 379,000 years, the temperature was low enough (around 2,700 degrees Celsius, roughly the 3,000 kelvins mentioned earlier) for these nuclei to capture electrons and form the universe’s first hydrogen, deuterium, and helium atoms. With these first atoms coming into existence, the universe became transparent in the sense that short-wavelength light (about a millionth of a meter) was now able to travel freely without soon being absorbed by surrounding particles. Because of the expansion of the universe since then, this early light’s wavelength has stretched; it arrives on Earth today with a wavelength of 1.9 millimeters, invisible to the eye, and it is called the cosmic microwave background radiation, or CMB. That energy continually arrives at the Earth’s surface from all directions. It is almost homogeneous and almost isotropic.

As the universe expands, the CMB radiation loses energy, but this energy is not lost from the universe, nor is the law of conservation of energy violated: the energy lost by the radiation goes into expanding the space.

In the literature in both physics and philosophy, descriptions of the big bang often speak of it as if it were the first event, but the big bang theory does not require there to be a first event, an event that had no prior event. This description mentioning the first event is a philosophical position, not something demanded by the scientific evidence. Physicists James Hartle and Stephen Hawking once suggested that looking back to the big bang is just like following the positive real numbers back to ever-smaller positive numbers without ever reaching the smallest positive one. There isn’t a smallest one. If Hartle and Hawking are correct that time is strictly analogous to this, then the big bang had no beginning point event, no initial time.

The classical big bang theory is based on the assumption that the universal expansion of clusters of galaxies can be projected all the way back to a singularity, a zero volume, at t = 0. Physicists agree that the projection must become untrustworthy for any times less than the Planck time. If a theory of quantum gravity ever gets confirmed, it is expected to provide more reliable information about the Planck epoch from t = 0 to the Planck time, and it may even allow physicists to answer the questions, “What caused the big bang?” and “Did anything happen before then?”

For a short lecture by Guth on these topics aimed at students, see https://www.youtube.com/watch?v=ANCN7vr9FVk.

b. Eternal Inflation and the Multiverse

Although there is no consensus among physicists about whether there is more than one universe, many of the big bang inflationary theories are theories of eternal inflation, of the eternal creation of more big bangs or multiple universes. The theory is also called chaotic inflation and the inflationary multiverse. The key idea is that once inflation gets started, it cannot easily be turned off. The inflaton field (note the spelling) is the fuel of our big bang and of all the other big bangs. Advocates of eternal inflation say that not all the inflaton fuel is used up in producing just one big bang, so the remaining fuel is available to create other big bangs, at an exponentially increasing rate, because the inflaton fuel increases much faster than it gets used. Presumably, there is no reason why this process should ever end, so there will be a potentially infinite number of universes in the multiverse. Also, there is no good reason to suppose our actual universe was the first one. Technically, whether one big bang occurred before or after another is not well defined; one cannot make sense of time across the multiverse. But many popular writers plunge ahead with the idea of time from the manifest image, saying, for instance, that although they did not drop their napkin at lunch yesterday, a counterpart of themselves living in another universe did drop it at the same time.

A helpful mental image here is to think of a large space filled with bubbles of all sizes, all of which are growing. Each bubble is its own universe, and each might have its own physical constants, its own number of dimensions, even its own laws of physics. In some of these universes, there may be no time at all. Regardless of whether a single bubble universe is inflating or no longer inflating, the space between the bubbles is inflating and more bubbles are being born. Because the space between bubbles is inflating, nearby bubbles are quickly hurled apart. That implies there is a low probability that our bubble universe contains any empirical evidence of having interacted with a nearby bubble. The rate of creation of new bubble universes increases exponentially.

After any single big bang, the hyper-inflation within that universe eventually ends; we say that bit of inflaton fuel has been used up. However, the expansion within that universe does not end. Our own expanding bubble was produced by our big bang 13.8 billion years ago. It is called the Hubble Bubble, though that term is ambiguous because cosmologists often use it to denote only the visible universe, the detectable part of our universe, rather than our entire universe.

The inflationary multiverse is not the quantum multiverse predicted by the many-worlds interpretation of quantum theory. The many-worlds interpretation says every possible outcome of a quantum measurement persists in a newly created world, a parallel universe. If you turn left when you could have turned right, then two universes are instantly created, one in which you turned left, and a different one in which you turned right. A key feature of the many-worlds interpretation is that the wave function does not collapse when a measurement occurs. Unfortunately, both theories are called the multiverse theory as well as the many-worlds theory, so a reader needs to be alert to how the term is being used. The Everettian Theory is the theory of the quantum multiverse, not of the inflationary multiverse.

The original theory of inflation was created by Guth and Linde in the 1980s. The theory of eternal inflation with a multiverse was created by Linde in 1983, building on influential work by Gott and Vilenkin. The universes of the inflationary multiverse are also called parallel worlds, many worlds, alternative universes, alternate worlds, and branching universes; these are many names for the same thing. Each universe of the multiverse normally is required to obey some of the same physics (there is no agreement on how much) and all the same mathematics. This restriction is not placed on the logically possible universes proposed by the philosopher David Lewis.

New energy is not required to create these inflationary universes, so there are no implications about whether energy is or is not conserved in the multiverse.

Normally, philosophers of science say that what makes a theory scientific is not that it can be falsified (as the philosopher Karl Popper proposed), but rather that there can be experimental evidence for it or against it. Because it is so difficult to design experiments that would provide evidence for or against the multiverse theories, many physicists complain that their fellow physicists who are developing these theories are doing technical metaphysical speculation, not physics. The usual response from defenders of multiverse research is that they can imagine someday, perhaps in future centuries, running crucial experiments, and, besides, the term physics is best defined as whatever physicists do.

For one clear explanation of the multiverse, see episode 200 of Sean Carroll’s Mindscape podcast called “Solo: The Philosophy of the Multiverse.”

5. Infinite Time

Is time infinitely divisible? Yes, because general relativity theory and quantum theory require time to be a continuum. But this answer will change to “no” if these theories are eventually replaced by a Core Theory that quantizes time. “Although there have been suggestions by some of the best physicists that spacetime may have a discrete structure,” Stephen Hawking said in 1996, “I see no reason to abandon the continuum theories that have been so successful.” Twenty-five years later, the physics community had become much less sure that Hawking was correct.

Did time begin at the big bang, or was there a finite or infinite time period before our big bang? The answer is unknown. There are many theories that imply an answer to the question, but the major obstacle in choosing among them is that the theories cannot be tested practically.

Stephen Hawking and James Hartle said that the difficulty of knowing whether the past and future are infinite in duration turns on our ignorance of whether the universe’s positive energy is exactly canceled out by its negative energy. All the energy of gravitation and spacetime curvature is negative. If the total of the universe’s energy is non-zero, and if quantum mechanics with its law of conservation of energy is to be trusted, then time is infinite in the past and future. Here is the argument for this conclusion. The law of conservation of energy implies that energy can change forms but the total can never change; so, if the total were ever non-zero, it could never have been zero in the past and can never become zero in the future, because any change between a zero and a non-zero total would violate the conservation law. Hence there always have been, and always will be, states whose total energy is non-zero. That suggests there can be no first instant or last instant and thus that time is eternal.

There is no solid evidence that the total is non-zero, but a slim majority of the experts favor a non-zero total, though without strong confidence. Assuming the total is non-zero, the favored theory of the future of the universe is the big chill theory, which implies the universe just keeps getting chillier forever as space expands and its contents become more dilute, so there always will be changes, with new events produced from old events.

Here are more details of the big chill theory. The last star will burn out in about 10¹⁵ years. Then all the stars and dust within each galaxy will fall into black holes. Then the material between galaxies will fall into black holes as well, and finally, in about 10¹⁰⁰ years, all the black holes will evaporate, leaving only a soup of elementary particles that gets less dense, and therefore “chillier,” as the universe’s expansion continues. The microwave background radiation will redshift more and more into longer-wavelength radio waves. Space will keep expanding toward thermodynamic equilibrium, but because of vacuum energy the temperature will only approach, never quite reach, zero on the Kelvin scale. Thus the universe descends into a “big chill,” having the same amount of total energy it always has had.
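The 10¹⁰⁰-year figure can be checked against Hawking’s formula for black hole evaporation, t ≈ 5120πG²M³/(ħc⁴). Here is a sketch using standard physical constants; treating the largest black holes as having about 10¹¹ solar masses is an assumption made for illustration:

```python
import math

# Standard physical constants.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
year = 3.156e7     # seconds per year

def evaporation_years(M):
    # Hawking's evaporation time, t ~ 5120*pi*G^2*M^3/(hbar*c^4).
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / year

print(f"Solar-mass black hole: {evaporation_years(M_sun):.0e} years")         # ~1e67
print(f"10^11-solar-mass hole: {evaporation_years(1e11 * M_sun):.0e} years")  # ~1e100
```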

The situation is very different if the total energy of the universe is zero. In that case, time is not fundamental (nor is spacetime). Perhaps time is then emergent from a finite collection of moments, as described by the timeless Wheeler-DeWitt equation of quantum mechanics (namely, the Schrödinger wave equation for a total state that undergoes no change).
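Schematically, in standard textbook notation (not notation taken from this article), the contrast is between a state that evolves in time and a total state that does not:

```latex
% Ordinary quantum mechanics: the state evolves in time.
i\hbar \, \frac{\partial}{\partial t}\,\Psi = \hat{H}\,\Psi

% Wheeler-DeWitt equation: the Hamiltonian constraint on the total
% state of the universe; no time parameter appears at all.
\hat{H}\,\Psi = 0
```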

Here is more commentary about the end of time:

In classical general relativity, the Big Bang is the beginning of spacetime; in quantum general relativity—whatever that may be, since nobody has a complete formulation of such a theory as yet—we don’t know whether the universe has a beginning or not.

There are two possibilities: one where the universe is eternal, one where it had a beginning. That’s because the Schrödinger equation of quantum mechanics turns out to have two very different kinds of solutions, corresponding to two different kinds of universe.

One possibility is that time is fundamental, and the universe changes as time passes. In that case, the Schrödinger equation is unequivocal: time is infinite. If the universe truly evolves, it always has been evolving and always will evolve. There is no starting and stopping. There may have been a moment that looks like our Big Bang, but it would have only been a temporary phase, and there would be more universe that was there even before the event.

The other possibility is that time is not truly fundamental, but rather emergent. Then, the universe can have a beginning. …And if that’s true, then there’s no problem at all with there being a first moment in time. The whole idea of “time” is just an approximation anyway (Carroll 2016, 197-8).

Back to the main “Time” article for references and citations.

Author Information

Bradley Dowden
Email: dowden@csus.edu
California State University, Sacramento
U. S. A.