What Else Science Requires of Time

This article is one of the three supplements of the main Time article. The two others are “Frequently Asked Questions about Time” and “Special Relativity: Proper Times, Coordinate Systems, and Lorentz Transformations (by Andrew Holster).”

Table of Contents

  1. What are Theories of Physics?
    1. The Core Theory
  2. Relativity Theory
  3. Quantum Theory
    1. The Standard Model
  4. Big Bang
    1. Cosmic Inflation
    2. Eternal Inflation and Many Worlds
  5. Infinite Time

1. What are Theories of Physics?

The answer to this question is philosophically controversial, and there is a vast literature on the topic. Here are some brief remarks.

The confirmed theories of physics are our civilization’s most valuable tools for explaining, predicting, and understanding the natural phenomena that physicists study. One of the best features of a good theory in physics is that it allows us to calculate the results of many observations from few assumptions. We humans are lucky that we happen to live in a universe that is so explainable, predictable and understandable, and that is governed by so few laws.

The term theory in this article is used in a technical sense, not in the sense of an explanation as in the remark, “My theory is that the mouse stole the cheese,” nor in the sense of a prediction as in the remark, “My theory is that the mouse will steal the cheese.” The general theory of relativity is an example of our intended sense of the term “theory.” In physics it is usually not helpful to try to explain some phenomena by appealing to something’s purpose.

Because theories in science are designed for producing interesting explanations, not for encompassing all the specific facts, there is no scientific theory that specifies your age, nor one that specifies when you woke up last Tuesday. Some theories are expressed fairly precisely, and some are expressed less precisely. The fairly precise ones that have simplifying assumptions are often called models of nature or models of the world. In physics, the fundamental laws in those models are expressed in the language of mathematics as mathematical equations.

Most researchers would say the model should tell us how the system being modeled would behave if certain conditions were to be changed in a specified way, for example, if the density were doubled or those three moons orbiting the planet were not present. Knowing how the system would behave under different conditions helps us understand the causal structure of the system being modeled.

Theories of physics are, among other things, a set of laws and a set of ways to link their statements to the real, physical world. Do these laws actually govern us? In Medieval Christian theology, the laws of nature were considered to be God’s commands, but today saying nature ‘obeys’ scientific laws or we are ‘governed’ by laws is considered by scientists to be a harmless metaphor. Scientific laws are called laws because they constrain what can happen; they imply this can happen and that cannot. It was Pierre Laplace who first declared that fundamental scientific laws are hard and fast rules with no exceptions.

The philosopher David Lewis claimed that a scientific law is whatever provides a lot of information in a compact and simple expression. This is a justification for saying a law must be a general claim.  The claim that Mars is farther from the Sun than is the Earth is true, but it does not qualify as being a law because it is not general enough. The Second Law of Thermodynamics is general enough.

In our fundamental theories of physics, the standard philosophical presupposition is that a state of a physical system describes what there is at some time, and a law of the theory—an “evolution law” or “dynamical law”—describes how the system evolves from a state at one time into a state at another time. All evolution laws in our fundamental theories are differential equations. Nearly all the fundamental laws are time-reversible, which means that the evolution can be into either an earlier time or a later time. The most important proposed exception to time-reversibility is the treatment in quantum theory of the measurement process. It is discussed below. The second law of thermodynamics says entropy tends to increase, so it is not time-reversible, but it is not a fundamental law.

All laws were once assumed to be local in the sense that they need to mention only the here and now and not the there and then. Also, presumably these laws are the same at all times. We have no a priori reason to think physical theories must be time-reversible, local, and time-translation invariant, but these assumptions have been very fruitful throughout much of the history of physics.

Due to the influence of Isaac Newton, subsequent physicists have assumed that the laws of physics are time-translation invariant. This invariance over time implies the laws of physics we have now are the same laws that held in the past and will hold in the future. This is not implying that if you bought an ice cream cone yesterday, you will buy one tomorrow. Also, the principle that the laws of physical science do not change from one time to another and thus are time translation invariant is not itself time translation invariant, so it is considered to be a meta-law rather than a law.

The laws and principles of physics are not accepted absolutely, like dogmas. Any currently-accepted law  or principle might need to be revised in the future to account for some unusual observations or experiments. However, some laws are believed more strongly than others, and so are more likely to be changed than others if future observations indicate a change is needed.

The laws of our fundamental theories contain many constants such as the fine-structure constant, the value for the speed of light in a vacuum, Planck’s constant, and the value of the rest mass of an electron and proton. For some of these constants, such as the mass of a proton, the Standard Model indicates that we should be able to compute the value exactly, but practical considerations of solving the equations to obtain a value even to two decimal places have been insurmountable, so we make do with a good measurement. That is, we measure the constant as precisely as possible, and then select a best, specific value for the constant to place into the theories containing the constant. A virtue of a theory is to not have too many constants. If there were too many, then the theory could never be disproved by data because the constants could be changed to account for any data, and so the theory would explain nothing and would be pseudoscience. Regarding the divide between science and pseudoscience, the leading answer is that:

what is really essential in order for a theory to be scientific is that some future information, such as observations or measurements, could plausibly cause a reasonable person to become either more or less confident of its validity. This is similar to Popper’s criteria of falsifiability, while being less restrictive and more flexible (Dan Hooper).
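
To illustrate how a measured constant is placed into a theory “by hand,” here is a minimal Python sketch, not drawn from the article’s sources, that evaluates the dimensionless fine-structure constant mentioned above from approximate published values of the other constants; the numerical values and variable names are purely illustrative.

```python
import math

# Approximate measured/defined values placed into the theory "by hand":
e        = 1.602176634e-19    # elementary charge, C (exact by definition since 2019)
epsilon0 = 8.8541878128e-12   # vacuum permittivity, F/m (measured)
hbar     = 1.054571817e-34    # reduced Planck constant, J*s
c        = 299_792_458        # speed of light in vacuum, m/s (exact by definition)

# The dimensionless fine-structure constant built from the others:
alpha = e**2 / (4 * math.pi * epsilon0 * hbar * c)

print(f"alpha ≈ {alpha:.10f}  (≈ 1/{1/alpha:.3f})")   # ≈ 1/137.036
```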

a. The Core Theory

Some physical theories are fundamental, and some are not. Fundamental theories are foundational in the sense that their laws cannot be derived from the laws of other physical theories even in principle. For example, the second law of thermodynamics is not fundamental, nor are the laws of plate tectonics in geophysics, despite their being critically important to their sciences. The following two theories are fundamental: (i) the general theory of relativity, and (ii) quantum theory. Their amalgamation is what Nobel Prize winner Frank Wilczek called the Core Theory, the theory of almost everything physical. The hedge “almost” is there because it is not a theory of quantum gravity, or dark matter, or dark energy, for example. If it were, it would be called a “theory of everything.” (For the experts: More technically, this amalgamated theory is the effective quantum field theory that includes both the weak field limit of Einstein’s General Theory of Relativity and the Standard Model of Particle Physics, and no assumption is made about the existence of space and time below the Planck length and Planck time.) Almost all scientists believe this Core Theory holds not just in our solar system, but all across the universe, and it held yesterday and will hold tomorrow. Wilczek claimed:

[T]he Core has such a proven record of success over an enormous range of applications that I can’t imagine people will ever want to junk it. I’ll go further: I think the Core provides a complete foundation for biology, chemistry, and stellar astrophysics that will never require modification. (Well, “never” is a long time. Let’s say for a few billion years.)

This implies one could think of chemistry as applied quantum theory. The Core Theory does not include the Big Bang Theory, and it does not use the terms time’s arrow or now. The concept of time in the Core Theory is primitive or “brute.” It is not definable, but rather it is used to define and explain other concepts.

It is believed by most physicists that the Core Theory can be used in principle to adequately explain the behavior of a potato, a galaxy, and a brain. The hedge phrase “in principle” is important. One cannot replace it with “in practice” or “practically.” Practically, there are many limitations on the use of the Core Theory. Here are some of the limitations. There is a margin of error in any measurement, so a user of the Core Theory does not have access to all the needed data for a prediction such as the position of every particle in a system; and, even if this were available, the complexity of the needed calculations would be prohibitive. There is a limit of predictability in a chaotic system due to the butterfly effect that magnifies small errors in an initial measurement into very large errors later in the time evolution of the system. There is quantum uncertainty that Heisenberg expressed with his Uncertainty Principle (see below for more on this). In addition, the Core Theory does not explicitly contain the concepts of a potato, galaxy, and brain. They are emergent concepts that are needed in good explanations at a higher scale, the macroscopic scale. Commenting on these various practical limitations for the study of galaxies, the cosmologist Andrew Pontzen said, “Ultimately, galaxies are less like machines and more like animals—loosely understandable, rewarding to study, but only partially predictable.”

Regarding the effect of quantum theory on ontology, the world’s potatoes, galaxies and brains have been considered by a number of twentieth-century philosophers to be just different mereological sums of particles, but the majority viewpoint among philosophers of physics in the twenty-first century is that potatoes, galaxies and brains are, instead, fairly stable patterns over time of interacting quantized fields.

For a great many investigations, it is helpful to treat objects as being composed of particles rather than fields. A proton or even a planet might be usefully treated as a particle for certain purposes. Electrons, quarks, and neutrinos are fundamental particles, and they are considered to be structureless, having no inside. Superstring theory disagrees and treats all these particles as being composed of very tiny one-dimensional objects called “strings” that move in a higher-dimensional space, but due to lack of experimental support, string theory is considered to be as yet unconfirmed. String theory in some form or other is the leading candidate for a theory of quantum gravity that resolves the contradictions between quantum theory and relativity theory.

The Core has been tested in many extreme circumstances and with great sensitivity, so physicists have high confidence in it. There is no doubt that for the purposes of doing physics the Core theory provides a demonstrably superior representation of reality to that provided by its alternatives. But all physicists know the Core is not strictly true and complete, and they know that some features will need revision—revision in the sense of being modified or extended. Physicists are motivated to discover how to revise it because such a discovery can lead to great praise from the rest of the physics community. Wilczek says the Core will never need modification for understanding (in principle) the special sciences of biology, chemistry, stellar astrophysics, computer science and engineering, but he would agree that the Core needs revision in order to adequately explain why 95 percent of the universe consists of dark energy and dark matter, why the universe has more matter than antimatter, why neutrinos change their identity over time, and why the energy of empty space is as small as it is. One metaphysical presupposition here is that the new theory will be logically consistent and will have eliminated the present inconsistencies between relativity theory and quantum theory.

The Core Theory presupposes that time exists, that it emerges from spacetime, and that spacetime is fundamental and not emergent. Within the Core Theory, relativity theory allows space to curve, ripple, and expand; and this curving, rippling, and expanding can vary over time. Quantum Theory does not allow any of this, although a future revision of Quantum Theory within the Core Theory is expected to allow this.

The Core Theory also presupposes reductionism in the sense that large-scale laws are nearly all based on the small-scale laws, for example, that the laws of geology are based on the fundamental laws of physics. The only exception seems to be with quantum coherence in which the behavior of a group of particles is not fully describable by complete knowledge of the behavior of the individual particles.

The Core Theory also presupposes an idea Laplace had in 1800 that is now called the Laplacian Paradigm—that laws should have the form of describing how a state of a system at one time turns into a different state at another time. These are the evolution laws or dynamical laws. David Deutsch, Chiara Marletto, and their collaborators (Deutsch 2013) have challenged that paradigm and proposed Constructor Theory, which requires time to emerge from a non-temporal substrate. So, time is not a fundamental feature of nature. Also, it turns the tables on classical reductionism by claiming that the small-scale, microscopic laws of nature are all emergent properties of the larger-scale laws, not vice versa.

2. Relativity Theory

Time is fundamental in relativity theory, and the theory has had a great impact upon our understanding of the nature of time. When the term relativity theory is used, it usually means the general theory of relativity of 1915, but sometimes it means the special theory of relativity of 1905. The special theory is the theory of space and time when you do not pay attention to gravity, and the general theory is when you do. Both the special and general theories have been well-tested; and they are almost universally accepted. Today’s physicists understand them better than Einstein did.

Although the Einstein field equations in his general theory:

are exceedingly difficult to manipulate, they are conceptually fairly simple. At their heart, they relate two things: the distribution of energy in space, and the geometry of space and time. From either one of these two things, you can—at least in principle—work out what the other has to be. So, from the way that mass and other energy is distributed in space, one can use Einstein’s equations to determine the geometry of that space. And from that geometry, we can calculate how objects will move through it (Dan Hooper).

The theory of relativity implies the fundamental laws of nature are the same for a physical system regardless of what time it is.

The relationship between the special and general theories is slightly complicated. Both theories are about motion of objects and both approach agreement with Newton’s theory the slower the speed of objects, the weaker the gravitational forces, and the lower the energy of those objects. Special relativity implies the laws of physics are the same for all inertial observers, that is, observers who are moving at a constant velocity relative to each other will find that all phenomena obey the same laws. Observers are frames of reference, or persons of negligible mass and volume making measurements from a stationary position in a frame of reference. General relativity implies the laws are the same even for observers accelerating relative to each other, such as changing their velocity due to the influence of gravitation. And acceleration is absolute, not relative to a frame. General relativity holds in all reference frames, but special relativity holds only for inertial reference frames, namely non-accelerating frames.

Special relativity allows objects to have mass but not gravity. It always requires a flat geometry—that is, a Euclidean geometry for space and a Minkowskian geometry for spacetime. General relativity does not have those restrictions. General relativity is a specific theory of gravity, assuming the theory is supplemented by a specification of the distribution of matter-energy at some time. Newton’s main laws of F = ma and F = GmM/r² hold only in special situations. Special relativity is not a specific theory but rather a general framework for theories, and it is not a specific version of general relativity. Nor is general relativity a generalization of special relativity. The main difference between the two is that, in general relativity, spacetime does not simply exist passively as a background arena for events. Instead, spacetime is dynamical in the sense that changes in the distribution of matter and energy are changes in the curvature of spacetime (though not necessarily vice versa).
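
As a small illustration of the Newtonian formula just mentioned, the sketch below evaluates F = GmM/r² for rough Earth–Moon figures. This is only a hedged example: the numbers are approximate, the variable names are invented, and the formula is assumed to hold because the system is in the weak-gravity, low-speed regime where Newton’s laws approximate general relativity.

```python
# Newton's F = G*m*M/r**2, adequate in the weak-gravity, low-speed regime.
# Rough figures for the Earth-Moon system, used only for illustration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
m_moon  = 7.342e22   # kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_moon * M_earth / r**2
print(f"Gravitational force ≈ {F:.2e} N")   # roughly 2e20 N
```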

The theory of relativity is generally considered to be a theory based on causality:

One can take general relativity, and if you ask what in that sophisticated mathematics is it really asserting about the nature of space and time, what it is asserting about space and time is that the most fundamental relationships are relationships of causality. This is the modern way of understanding Einstein’s theory of general relativity….If you write down a list of all the causal relations between all the events in the universe, you describe the geometry of spacetime almost completely. There is still a little bit of information that you have to put in, which is counting, which is how many events take place…. Causality is the fundamental aspect of time. (Lee Smolin).

In the Core theories, the word time is a theoretical term, and the dimension of time is treated somewhat like a single dimension of space. Space is a set of all possible point-locations. Time is a set of all possible point-times. Spacetime is a set of all possible point-events. Spacetime is presumed to be four-dimensional and also a continuum of points, with time being a distinguished, one-dimensional sub-space of spacetime. Because the time dimension is so different from a space dimension, physicists very often speak of (3+1)-dimensional spacetime rather than 4-dimensional spacetime. Both relativity theory and quantum theory assume that three-dimensional space is isotropic (rotation symmetric) and homogeneous (translation symmetric) and that there is translation symmetry in time. Regarding all these symmetries, the physical laws need to obey them, but specific physical systems within space-time need not; your body could become very different if you walk across the road instead of along the road.

(For the experts: Technically, any spacetime, no matter how many dimensions it has, is required to be a differentiable manifold with a metric tensor field defined on it that tells what geometry it has at each point. General relativistic spacetimes are manifolds built from charts involving open subsets of R⁴. General relativity does not consider a time to be a set of simultaneous events that do or could occur at that time; that is a Leibnizian conception. Instead, general relativity specifies time in terms of the light cone structures at each place. The theory requires spacetime to have at least four dimensions, not exactly four dimensions.)

Relativity theory implies time is a continuum of instantaneous times that is free of gaps just like a mathematical line. This continuity of time was first emphasized by the philosopher John Locke in the late seventeenth century, but it is meant here in a more detailed, technical sense that was developed only toward the end of the 19th century for calculus.

Continuous vs. Discrete

According to both relativity theory and quantum theory, time is not discrete or quantized or atomistic. Instead, the structure of point-times is a linear continuum with the same structure as the mathematical line or as the real numbers in their natural order. For any point of time, there is no next time because the times are packed together so tightly. Time’s being a continuum implies that there is a non-denumerably infinite number of point-times between any two non-simultaneous point-times. Some philosophers of science have objected that this number is too large, and we should use Aristotle’s notion of potential infinity and not the late 19th century notion of a completed infinity. Nevertheless, accepting the notion of an actual nondenumerable infinity is the key idea used to solve Zeno’s Paradoxes and to remove inconsistencies in calculus.
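
The claim that no instant has a next instant can be partially illustrated with rational numbers, which are dense even though they fall short of a full continuum: between any two distinct instants there is always a third, so the halving in the toy sketch below never terminates. This is only an illustration under that simplifying assumption; it does not capture the nondenumerable infinity of the real line.

```python
from fractions import Fraction

def between(t1, t2):
    """Return an instant strictly between two distinct instants."""
    return (t1 + t2) / 2

t1 = Fraction(0)
t2 = Fraction(1, 10**44)      # two instants roughly a Planck time apart (illustrative)
for _ in range(5):
    t2 = between(t1, t2)      # the halving never ends, so t1 never acquires a "next" instant
    print(t2)
```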

The fundamental laws of physics assume the universe is a collection of point events that form a four-dimensional continuum, and the laws tell us what happens after something else happens or because it happens. These laws describe change but do not themselves change. At least that is what laws are in the first quarter of the twenty-first century, but one cannot know a priori that this is always how laws must be. Even though the continuum assumption is not absolutely necessary to describe what we observe, so far it has proved to be too difficult to revise our theories in order to remove the assumption and retain consistency with all our experimental data. Calculus has proven its worth.

No experiment is so fine-grained that it could show times to be infinitesimally close together, although there are possible experiments that could show the assumption to be false if the graininess of time were to be large enough to be detectable.

Not only is there some uncertainty or worry about the correctness of relativity in the tiniest realms, there is also uncertainty about whether it works differently on cosmological scales than it does at the scale of atoms, houses, and solar systems, but so far there are no rival theories that have been confirmed.

In the twenty-first century, one of the most important goals in physics is to discover/invent a theory of quantum gravity that unites the best parts of quantum theory with the theory of relativity. Einstein claimed in 1916 that his general theory of relativity needed to be replaced by a theory of quantum gravity. Subsequent physicists generally agree with him, but that theory has not been found so far. A great many physicists of the 21st century believe a successful theory of quantum gravity will require quantizing time so that there are atoms of time. But this is just an opinion, not a fact.

If there is such a thing as an atom of time and thus such a thing as an actual next instant and a previous instant, then time cannot be like the real number line, because no real number has a next number. It is speculated that if time were discrete, a good estimate for the duration of an atom of time is 10⁻⁴⁴ seconds, the so-called Planck time. No physicist can yet suggest a practical experiment that is sensitive to this tiny scale of phenomena. For more discussion, see (Tegmark 2017).
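
The Planck time quoted above is conventionally estimated from the standard combination √(ħG/c⁵) of fundamental constants. The short sketch below, an illustration rather than anything drawn from the article, evaluates it with approximate published values.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c    = 299_792_458       # speed of light in vacuum, m/s

planck_time = math.sqrt(hbar * G / c**5)
print(f"Planck time ≈ {planck_time:.2e} s")   # ≈ 5.4e-44 s
```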

The special and general theories of relativity imply that to place a reference frame upon spacetime is to make a choice about which part of spacetime is the space part and which is the time part. No choice is objectively correct, although some choices are very much more convenient for some purposes. This relativity of time, namely the dependency of time upon a choice of reference frame, is one of the most significant philosophical implications of both the special and general theories of relativity.

Since the discovery of relativity theory, scientists have come to believe that any objective description of the world can be made only with statements that are invariant under changes in the reference frame. Saying, “It is 8:00” does not have a truth value unless a specific reference frame is implied, such as one fixed to Earth with time being the time that is measured by our civilization’s standard clock. This relativity of time to reference frames is behind the remark that Einstein’s theories of relativity imply time itself is not objectively real but spacetime is.

Regarding relativity to frame, Newton would say that if you are seated in a vehicle moving along a road, then your speed relative to the vehicle is zero, but your speed relative to the road is not zero. Einstein would agree. However, he would surprise Newton by saying the length of your vehicle is slightly different in the two reference frames, the one in which the vehicle is stationary and the one in which the road is stationary. Equally surprising to Newton, the duration of the event of your drinking a cup of coffee while in the vehicle is slightly different in those two reference frames. These relativistic effects are called space contraction and time dilation, respectively. So, both length and duration are frame dependent and, for that reason, say physicists, they are not objectively real characteristics of objects. Speeds also are relative to reference frame, with one exception. The speed of light in a vacuum has the same value c in all frames that are allowed by relativity theory. Space contraction and time dilation change in tandem so that the speed of light in a vacuum is always the same number.
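
The time dilation just described is usually quantified with the Lorentz factor γ = 1/√(1 − v²/c²), a standard formula not spelled out in the passage above. Here is a hedged sketch; the coffee-drinking duration and the speeds are invented purely for illustration.

```python
import math

c = 299_792_458  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor for relative speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

proper_duration = 120.0      # seconds to drink the coffee, as measured in the vehicle
for v in (30.0, 0.8 * c):    # a highway speed and a hypothetical relativistic speed
    dilated = gamma(v) * proper_duration
    print(f"v = {v:.3g} m/s: duration in the road frame ≈ {dilated:.12f} s")
    # At 30 m/s the difference is a fraction of a picosecond; at 0.8c it is 80 extra seconds.
```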

Relativity theory allows great latitude in selecting the classes of simultaneous events, as shown in this diagram. Because there is no single objectively-correct frame to use for specifying which events are present and which are past—but only more or less convenient ones—one philosophical implication of the relativity of time is that it seems to be easier to defend McTaggart’s B-theory of time and more difficult to defend McTaggart’s A-theory that implies the temporal properties of events such as “is happening now” or “happened in the past” are intrinsic to the events and are objective, frame-free properties of those events. In brief, the relativity to frame makes it difficult to defend absolute time.

Relativity theory challenges other ingredients of the manifest image of time. For two point-events A and B occurring at the same place but at different times, relativity theory implies their temporal order is absolute in the sense of being independent of the frame of reference. This agrees with common sense and thus the manifest image of time, but if A and B are distant from each other and occur close enough in time to be within each other’s absolute elsewhere, then relativity theory implies event A can occur before event B in one reference frame, but after B in another frame, and simultaneously with B in yet another frame. No person before Einstein ever imagined time has such a strange feature.
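
Whether the temporal order of two events is frame-independent can be checked from the sign of the spacetime interval, a standard special-relativistic quantity assumed here for illustration: if (cΔt)² − (Δx)² is positive the separation is timelike and the order is absolute, and if it is negative the events lie in each other’s absolute elsewhere and the order depends on the frame. The numbers in the sketch below are invented.

```python
c = 299_792_458  # m/s

def separation(dt, dx):
    """Classify the separation of two events from their time gap dt (s) and spatial gap dx (m)."""
    interval2 = (c * dt) ** 2 - dx ** 2
    if interval2 > 0:
        return "timelike: temporal order is the same in every frame"
    if interval2 < 0:
        return "spacelike (absolute elsewhere): temporal order depends on the frame"
    return "lightlike"

print(separation(1.0, 1.0e3))    # 1 s apart, 1 km apart  -> timelike
print(separation(1.0, 1.0e12))   # 1 s apart, ~6.7 AU apart -> spacelike
```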

The special and general theories of relativity provide accurate descriptions of the world when their assumptions are satisfied. Both have been carefully tested. The special theory does not mention gravity, and it assumes there is no curvature to spacetime, but the general theory requires curvature in the presence of mass and energy, and it requires the curvature to change as their distribution changes. The presence of gravity in the general theory has enabled the theory to be used to explain phenomena that cannot be explained with either special relativity or Newton’s theory of gravity or Maxwell’s theory of electromagnetism.

Because of the relationship between spacetime and gravity, the equations of general relativity are much more complicated than are those of special relativity. But general relativity assumes the equations of special relativity hold at least in all infinitesimal regions of spacetime.

To give one example of the complexity just mentioned, the special theory clearly implies there is no time travel to events in one’s own past. Experts do not agree on whether the general theory has this same implication because the equations involving the phenomena are too complex for them to solve directly. Approximate solutions have to be used, yet still there is disagreement about this kind of time travel.

Because of the complexity of Einstein’s equations, all kinds of tricks of simplification and approximation are needed in order to use the laws of the theory on a computer for all but the simplest situations.

Regarding curvature of time and of space, the presence of mass at a point implies intrinsic spacetime curvature at that point, but not all spacetime curvature implies the presence of mass. Empty spacetime can still have curvature, according to relativity theory. This point has been interpreted by many philosophers as a good reason to reject Leibniz’s classical relationism. The point was first mentioned by Arthur Eddington.

Two accurate, synchronized clocks do not stay synchronized if they undergo different gravitational forces. This is a second kind of time dilation, in addition to dilation due to speed. So, a correct clock’s time depends on the clock’s history of both speed and gravitational influence. Gravitational time dilation would be especially apparent if a clock were to approach a black hole. The rate of ticking of a clock approaching the black hole slows radically upon approach to the horizon of the hole, as judged by the rate of a clock that remains safely back on Earth. This slowing is sometimes misleadingly described as time slowing down. After a clock falls through the event horizon, it can no longer report its values back to Earth, but it keeps ticking, and when it reaches the center of the hole not only does it stop ticking, but it also reaches the end of time, the end of its proper time.
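
Gravitational time dilation near a non-rotating black hole is commonly estimated with the Schwarzschild factor √(1 − 2GM/rc²), a textbook relation not stated in the article. The sketch below uses a hypothetical 10-solar-mass black hole and approximate constants purely for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458      # m/s

def tick_rate(M, r):
    """Rate of a static clock at radius r outside mass M, relative to a clock far away
    (Schwarzschild approximation; only meaningful outside the event horizon)."""
    rs = 2 * G * M / c**2            # Schwarzschild radius
    return math.sqrt(1 - rs / r)

M_bh = 10 * 1.989e30                 # a hypothetical 10-solar-mass black hole, kg
rs = 2 * G * M_bh / c**2             # about 30 km
for r in (100 * rs, 2 * rs, 1.001 * rs):
    print(f"r = {r/rs:7.3f} rs: clock runs at {tick_rate(M_bh, r):.4f} of the far-away rate")
```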

The general theory of relativity has additional implications for time. In 1948-9, the logician Kurt Gödel discovered radical solutions to Einstein’s equations, solutions in which there are what are called “closed time-like curves” in graphical representations of spacetime. The unusual curvature is due to the rotation of all the matter throughout Gödel’s universe. As one progresses forward in time along one of these curves, one arrives back at one’s starting point—thus, time travel! Fortunately, there is no empirical evidence that our own universe has this rotation. Some physicists are not convinced from Gödel’s work that time travel is possible in our own universe even if it obeys Einstein’s general theory of relativity. Here is Einstein’s reaction to Gödel’s work on time travel:

Kurt Gödel’s essay constitutes, in my opinion, an important contribution to the general theory of relativity, especially to the analysis of the concept of time. The problem involved here disturbed me already at the time of the building of the general theory of relativity, without my having succeeded in clarifying it.

Let’s explore the microstructure of time in more detail. In mathematical physics used in both relativity theory and quantum theory, the ordering of instants by the happens-before relation of temporal precedence is complete in the sense that there are no gaps in the sequence of instants. Any interval of time is a continuum, so the points of time form a linear continuum. Unlike physical objects, physical time and physical space are believed to be infinitely divisible—that is, divisible in the sense of the actually infinite, not merely in Aristotle’s sense of potentially infinite. Regarding the density of instants, the ordered instants are so densely packed that between any two there is a third so that no instant has a very next instant. Regarding continuity, time’s being a linear continuum implies that there is a nondenumerable infinity of instants between any two non-simultaneous instants. The rational number line does not have so many points between any pair of different points; it is not continuous the way the real number line is, but rather contains many gaps. The real numbers such as pi and the square root of two fill the gaps.

The actual temporal structure of events can be embedded in the real numbers, at least locally, but how about the converse? That is, to what extent is it known that the real numbers can be adequately embedded into the structure of the instants, at least locally? This question is asking for the justification of saying time is not discrete, that is, not atomistic. The problem here is that the shortest duration ever measured is about 250 zeptoseconds. A zeptosecond is 10⁻²¹ second. For times shorter than about 10⁻⁴³ second, which is the physicists’ favored candidate for the duration of an atom of time, science has no experimental grounds for the claim that between any two events there is a third. Instead, the justification of saying the reals can be embedded into the structure of the instants is that (i) the assumption of continuity is very useful because it allows the mathematical methods of calculus to be used in the physics of time; (ii) there are no known inconsistencies due to making this assumption; and (iii) there are no better theories available. The qualification earlier in this paragraph about “at least locally” is there in case there is time travel to the past. A circle is continuous, and one-dimensional, but it is like the real numbers only locally.

One can imagine two empirical tests that would reveal time’s discreteness if it were discrete—(1) being unable to measure a duration shorter than some experimental minimum despite repeated tries, yet expecting that a smaller duration should be detectable with current equipment if there really is a smaller duration, and (2) detecting a small breakdown of Lorentz invariance. But if any experimental result that purportedly shows discreteness is going to resist being treated as a mere anomaly, perhaps due to error in the measurement apparatus, then it should be backed up with a confirmed theory that implies the value for the duration of the atom of time. This situation is an instance of the kernel of truth in the physics joke that no observation is to be trusted until it is backed up by theory.

It is commonly remarked that, according to relativity theory, nothing can go faster than c, the speed of light, not even the influence of gravity. The remark needs some clarification, else it is incorrect. Here are three ways to go faster than the speed c. (1) First, the medium needs to be specified. c is the speed of light in a vacuum. The speed of light in certain crystals can be much less than c, say 40 miles per hour, and if so, then a horse outside the crystal could outrun the light beam. (2) Second, the limit c applies only to objects within space relative to other objects within space, and it requires that no object pass another object locally at faster than c. However, the general theory of relativity places no restrictions on how fast space itself can expand. So, two galaxies can fly apart from each other at faster than the speed c of light if the intervening space expands sufficiently rapidly. (3) Imagine standing still outside on the flat ground and aiming your laser pointer forward and parallel to the ground. Now change the angle in order to aim the pointer down at your feet. During that process of changing the angle, the point of intersection of the pointer and the tangent plane of the ground will move toward your feet faster than the speed c. This does not violate relativity theory because the point of intersection is merely a geometrical object, a point, not a physical object, so its speed is not restricted by relativity theory.
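
Point (2) can be made quantitative with Hubble’s law v = H₀·d, a standard cosmological relation assumed here for illustration. With a commonly quoted approximate value of H₀, galaxies farther away than roughly 4,300 megaparsecs recede faster than c, with no violation of relativity. The values below are approximate and the variable names are invented.

```python
# Recession speed from the expansion of space, v = H0 * d (Hubble's law).
Mpc = 3.086e22          # meters per megaparsec
c = 299_792_458         # m/s
H0 = 70e3 / Mpc         # Hubble constant of ~70 km/s per Mpc, expressed in 1/s

hubble_radius = c / H0  # distance beyond which recession exceeds c
print(f"Hubble radius ≈ {hubble_radius / Mpc:.0f} Mpc")   # roughly 4300 Mpc

d = 6000 * Mpc          # a galaxy well beyond the Hubble radius
print(f"Recession speed ≈ {H0 * d / c:.2f} c")            # > 1, yet nothing passes anything locally faster than c
```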

For more about special relativity, see Special Relativity: Proper Times, Coordinate Systems, and Lorentz Transformations.

3. Quantum Theory

Time is a continuum in quantum theory, just as it is in the theory of relativity and Newton’s mechanics, but change over time is treated in quantum theory very differently than in classical theories.

Quantum theory is a combination of quantum mechanics and the special theory of relativity, but not the general theory of relativity. It also includes the Standard Model of particle physics, which is a theory of all the known forces of nature except for the gravitational force and of all known particles except the graviton, the particle of gravity. Quantum theory has its name because it implies that some qualities or properties, such as energy and charge, are quantized in the sense that they do not change continuously but only in multiples of a minimum discrete step. The minimum changes are called quantum steps.
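
For a concrete sense of a quantum step, the energy of light of frequency f can change only in units of hf (Planck’s relation, assumed here rather than drawn from the article). The sketch below evaluates one such step for green light using approximate constants; the frequency chosen is merely illustrative.

```python
h = 6.62607015e-34      # Planck's constant, J*s (exact by definition)
eV = 1.602176634e-19    # joules per electronvolt

f = 5.5e14              # approximate frequency of green light, Hz
quantum = h * f         # the smallest step by which light of this frequency can exchange energy
print(f"One quantum ≈ {quantum:.3e} J ≈ {quantum / eV:.2f} eV")

# Any energy exchanged with this light is a whole number of such quanta:
print([round(n * quantum / eV, 2) for n in range(4)])   # 0, 1, 2, 3 quanta, in eV
```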

Quantum theory is our most successful theory in all of science, more so even than relativity theory, and it is well tested and very well understood mathematically despite its not being well understood intuitively or informally or philosophically. The variety of phenomena it can be used to successfully explain is remarkable. For four examples, it explains (i) why you can see through a glass window but not a potato, (ii) why the Sun has lived so long without burning out, (iii) why atoms are stable so that the negatively-charged electrons do not crash into the positively-charged nucleus, and (iv) why the periodic table of elements has the structure and most of the values it has. Without quantum theory, all these facts must be taken to be brute facts of nature.

Surprisingly, physicists still do not agree on the exact formulation of quantum theory. Its many so-called “interpretations” are really competing versions of the theory. That is why there is no agreement on what the axioms of quantum theory are. Also, there is a disagreement among philosophers of physics regarding whether the competing interpretations are (1) empirically equivalent and underdetermined by (all possible) experimental evidence and so must be decided upon by such features as their degree of mathematical elegance and simplicity, or (2) are not empirically equivalent theories but, instead, are theories that may in the future be confirmed or refuted by experimental evidence.

All current interpretations of quantum theory appear to prohibit time-like loops that allow a particle to travel along a path of spacetime that curves into its own past, although this is allowed by the general theory of relativity. To be more cautious, Gödel and Einstein believed the general theory of relativity allowed this, but some 21st century experts on relativity are not yet convinced that Gödel and Einstein interpreted the theory correctly.

Indeterminism

Determinism implies predictability in principle, and it implies the universe is not random.

Classical physicists envisioned the world to be deterministic in the sense that, given a precise and complete specification of the way things are at some time, called the “initial state,” then any later state, the so-called “final state,” is fixed, at least in principle, even if practically there are no available instruments that would provide the information about the initial state, and even if practically the required computations are too difficult to perform.

According to quantum theory, a state of an isolated system is described very differently than in all earlier theories of physics. It is described by the Schrödinger wave function. Schrödinger’s wave equation for that function describes how the state changes from one time to another. In this equation, time is fundamental, but space is not. However, the wave function at a time and place specifies the probability of detecting, say, an electron at that time and place. So, probability is at the heart of quantum theory. Because of the probability, if you were to set up your system the way it was the first time, then the outcome the second time might be different. Therefore, the key principle of causal determinism, “same cause, same effect,” fails.
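
Here is a toy sketch of the probabilistic character just described, using the standard Born rule (the probability of an outcome is the squared magnitude of its amplitude). The two-outcome “state” and its amplitudes are invented for illustration and are not the Schrödinger wave function itself.

```python
import random

# Toy two-outcome state written as complex amplitudes; the Born rule says the
# probability of each outcome is the squared magnitude of its amplitude.
amplitudes = {"left slit": 0.6, "right slit": 0.8j}      # |0.6|^2 + |0.8j|^2 = 1
probabilities = {k: abs(a) ** 2 for k, a in amplitudes.items()}
print(probabilities)   # ≈ {'left slit': 0.36, 'right slit': 0.64}

# Preparing the identical state repeatedly need not give identical outcomes:
runs = [random.choices(list(probabilities), weights=list(probabilities.values()))[0]
        for _ in range(5)]
print(runs)            # a different mix on each execution, so "same cause, same effect" fails
```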

Einstein reacted to this quantum indeterminism by proposing that there would be a future discovery of as yet unknown variables or properties that, when taken into account by a revised Schrödinger equation, would make quantum theory be deterministic. David Bohm agreed with Einstein and went some way in this direction by building a revision of quantum theory, but his interpretation has not succeeded in moving the needle of scientific opinion.

Physicists normally wish to assume that our universe’s total information is conserved over time—all the universe’s quantum information was present at the Big Bang, and it persists today. This principle of the conservation of information fails according to the classical interpretation of quantum theory, the Copenhagen Interpretation.

The Copenhagen Interpretation

The classical interpretation of quantum theory was the product of Niels Bohr and his colleagues in the 1920s. It is called the Copenhagen Interpretation because Bohr taught at the University of Copenhagen. According to its advocates, it has implications about time reversibility, determinism, the conservation of information, locality, the principle that causes affect the future and not the past, and the reality of the world independently of its being observed—namely, that they all fail.

In the famous two-slit experiment, an electron shot toward an otherwise impenetrable plate might pass through it by entering through the plate’s left slit or a parallel right slit. The slits are very narrow and closely aligned. Unlike macroscopic objects such as bullets entering through a narrow slit in a steel wall, which are at only one location at a time, a single electron is understood in the Copenhagen Interpretation as going through both slits at the same time, then interfering with itself on the other side, and then striking the optical screen behind the plate at only a single location, thereby helping to cast a unique pattern of dots on the screen. This pattern is very similar to the pattern obtained by diffraction of classical waves. The favored explanation of the two-slit experiment is to assume so-called “wave-particle duality,” namely that a single particle has wavelike properties, and a wave (a wave train, not just a wave crest) has particle-like properties. Also, before it is detected, the electron is in a cloud of possibilities such as being in the state of having gone through only the left slit, plus the state of having gone through only the right slit.

The optical screen that displays the dots is similar to a computer monitor that displays a pixel-dot when and where an electron collides with it. See the diagram below of electrons passing through slits in a screen (such as a piece of steel containing two narrow, parallel slits) and then hitting an optical screen that is behind the two slits. In the diagram, the interference pattern that is produced is displayed on the right (the front view). This interference pattern occurs even if the electrons are shot infrequently at the optical screen, such as only once per second. Surely, if electrons were like bullets, a bullet hitting the screen could not be affected by what the previous bullet did a second earlier. Because the collective electron behavior over time looks so much like optical wave diffraction, this behavior is considered to be definitive evidence of electrons behaving as waves.

But the interference does not occur if the electrons are actively observed during the experiment by, say, a light being shined on each slit to see which slit each electron went through. When observed going through the slits, the electrons’ behavior changes, and they act like tiny bullets with no diffraction and no other wave behavior; the corresponding diagram shows the dots collecting into two bands on the optical screen, one behind each slit, with no interference pattern.

Comparison of the two diagrams has led a great many researchers to conclude that, when an electron is not observed at the moment of passing through the slits or before colliding with the screen, it passes through both slits (and is in two places at once). When it is observed at the slits, it passes through only one slit.

According to the Copenhagen Interpretation of the two-slit experiment, observing the electron going through the slits collapses the wave function so it describes a single outcome, while deleting the other possibilities. To restate this, before the measurement, the electron is in a two-places-at-once-state of going through the left slit and of going through the right slit, and the measurement interaction collapses this superposition state into a single state of the electron’s going through the slit where it is detected.

To explain the two-slit experiment, Bohr proposed an anti-realist interpretation of the world by saying there is no determinate, unfuzzy way the world is when it is not being observed. There is only a cloud of possible values for each property of the system that might be measured. Eugene Wigner, a Nobel Prize winning physicist, promoted the claim that there exists a determinate, unfuzzy reality only when a conscious being is observing it. This prompted Einstein to ask a supporter of Bohr whether he really believed that the moon exists only when it is being looked at.

The two-slit experiment has caused philosophers of physics to disagree about what quantum theory implies an object is, what it means for an object to have a location, how an object maintains its identity over time, and whether consciousness of the measurer is required in order to make reality become determinate and not “fuzzy” or “blurry.” Also, in regard to the classical principle that causes affect the future and never the past, Princeton physicist John Wheeler famously remarked in his 1983 book Quantum Theory and Measurement: “Equipment operating in the here and now has an undeniable part in bringing about that which appears to have happened.” Opponents of the Copenhagen Interpretation have remarked that these interpretations of quantum theory are too weird to be true.

Measurement

According to the Copenhagen Interpretation, during the measurement process the wave function describing the fuzzy, superposition-state “collapses” instantaneously or nearly instantaneously from the superposition of states to a single state with a definite value for whatever is measured. Using a detector to measure which slit the electron went through in the two-slit experiment is the paradigm example of the collapse of the wave function.

Attempting to confirm this claim about the speed of the collapse via an experiment faces the obstacle that no measurement can detect such a short interval of time:

Yet what we do already know from experiments is that the apparent speed at which the collapse process sweeps through space, cleaning the fuzz away, is faster than light. This cuts against the grain of relativity in which light sets an absolute limit for speed (Andrew Pontzen).

During the collapse, one of the possible values for the measurement becomes the actual specific value, and the other possibilities are deleted. And quantum information is lost. According to the Copenhagen Interpretation, during any measurement, from full knowledge of the new state, the prior state cannot be deduced. Different initial states may transition into the same final state. So, time reversibility fails. There can be no un-collapsing.

When a measurement occurs, it is almost correct to explain this as follows: At the beginning of the measurement, the system “could be in any one of various possibilities, we’re not sure which.” But not quite. Strictly speaking, before the measurement is made the system is in a superposition of multiple states, one for each possible outcome of the measurement, with each outcome having a fixed probability of occurring as determined by quantum theory; and the measurement itself is a procedure that removes the superposition and randomly realizes just one of those states. Informally, this is sometimes summarized in the remark that measurement turns the situation from fuzzy to definite.
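
Here is a minimal sketch of that last point, under the simplifying assumption that a measurement can be modeled as sampling one outcome by its fixed probability and then discarding the rest of the superposition. It also illustrates why the step is not reversible: different prior superpositions can end in the same definite state, so the prior state cannot be deduced afterward. The states and names are invented for illustration.

```python
import random

def measure(state):
    """Toy 'collapse': pick one outcome with probability |amplitude|^2, then
    replace the whole superposition by that single definite state."""
    outcomes = list(state)
    weights = [abs(a) ** 2 for a in state.values()]
    result = random.choices(outcomes, weights=weights)[0]
    return {result: 1.0}                     # every other possibility is deleted

before_1 = {"slit L": 0.6, "slit R": 0.8}    # two different superpositions ...
before_2 = {"slit L": 0.8, "slit R": 0.6}
print(measure(before_1), measure(before_2))  # ... can end in the same definite state,
                                             # so the collapse cannot be undone or reversed
```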

For an instant, a measurement on an electron can say it is there at this specific place, but immediately afterward it becomes fuzzy again, and once again there is no single truth about where an electron is precisely, but only a single truth about the probabilities for finding the electron in various places if certain kinds of measurements were to be made.

Many opponents of the Copenhagen Interpretation have reacted in this way:

In the wake of the Solvay Conference (in 1927), popular opinion within the physics community swung Bohr’s way, and the Copenhagen approach to quantum mechanics settled in as entrenched dogma. It’s proven to be an amazingly successful tool at making predictions for experiments and designing new technologies. But as a fundamental theory of the world, it falls woefully short (Sean Carroll).

George Ellis, co-author with Stephen Hawking of the definitive book The Large-Scale Structure of Space-Time, identifies what he believes is a key difficulty with our understanding of quantum measurement in interpretations that imply the wave function collapses during measurement: “Usually, it is assumed that the measurement apparatus does not obey the rules of quantum theory, but this [assumption] contradicts the presupposition that all matter is at its foundation quantum mechanical in nature.”

Those who want to avoid having to bring consciousness of the measurer into quantum physics and who want to restore time-reversibility and determinism and conservation of quantum information typically recommend adopting a different interpretation of quantum mechanics that changes how measurement is understood. Einstein had a proposal, the Hidden Variable Interpretation. He hoped that, by adding new laws specifying the behavior of so-called “underlying variables” affecting the system, determinism, time-reversibility, and information conservation would be restored, and there would be no need to speak of a discontinuous collapse of the wave function during measurement. The “spookiness” would be gone. Also, quantum probabilities would be epistemological; they would be caused by our lack of knowledge of the hidden variables. Einstein’s proposal never gathered much support.

The Many-Worlds Interpretation and Branching Time

The Many-Worlds Interpretation is a popular replacement for the Copenhagen Interpretation. It introduces many worlds or multiple universes. Our own is just one of many, perhaps infinitely many. Anything that can happen according to quantum mechanics in our universe does happen in some universe or other.

This proposal removes the radical distinction between the measurer and what is measured and replaces it with a continuously evolving wave function for the combined system of measurement process plus measurer for the entire universe. Our being stuck in a single world, though, implies that during measurements it will appear as if there is collapse of the wave function for the system under study, but the wave function for the totality of the multiverse does not collapse. The laws of the Many-Worlds Interpretation are time-reversible symmetric and deterministic, and there is no need for the anti-realist stance. Also, quantum information is never lost in the sum of all worlds. It is an open question  whether the multiverse theory should require the same fundamental scientific laws in all universes.

The Many-Worlds Interpretation is frequently called the Everettian interpretation for its founder Hugh Everett III. It implies that, during any measurement having some integer number n of possible outcomes, the universe splits instantaneously into n copies of itself, each with a different outcome. If a measurement can produce any value from 0 to 10, and we find that “8” is the value we see for the outcome of our own measuring apparatus, then the counterparts of us who live in the other universes and who have the same memories as we have see outcomes other than “8”. Clearly, the weirdness of the Copenhagen interpretation has been traded for a new kind of weirdness.

In the Many-Worlds interpretation, there is no access from one world to another. They exist “in parallel” and not within the same physical space, so any two are neither far from nor close to each other. Information is conserved, but not within any single universe. If we had access to all information about all the many worlds (the multiverse’s wave function) and had unlimited computational capacity, then we could see that the multiverse of many worlds evolves deterministically and time-reversibly and see that the wave function for the multiverse never collapses discontinuously. Unfortunately, nobody can know the exact wave function for the entire multiplicity of universes. In a single universe, the ideally best available information can be used to predict only the probability of a measurement outcome, a probability that must be less than 1. So, in this sense, probability remains at the heart of our own world.

The notion that it takes consciousness to have a measurement has been rejected in favor of the idea that, when a system is measured, all that is required is that the system be in a superposition and then interact with and become entangled with its environment. This interaction process is called “decoherence,” an exotic kind of breaking apart. The state of a system of one free electron can be ‘measured’ by its hitting an air molecule. Not every interaction leads to decoherence, though it takes careful work to create the kind of interaction that preserves coherence. Preserving coherence is the most difficult goal to achieve in improving a quantum computer, and cooling is one of the main techniques used to reduce interactions that cause decoherence. These interactions are called “noise” in a quantum computer. According to the Many-Worlds Interpretation, the moon is there when it is not being looked at because the moon is always interacting with some particle or other and thereby decohering and, in that sense, getting measured. Decoherence is also why the moon’s quantum properties are not visible to us at our macroscale. Nevertheless, the moon is a quantum object (an object obeying the rules of quantum theory), like all other objects.

Although not all cosmologists who accept the Everettian or Many-Worlds Interpretation of quantum mechanics agree with each other, Sean Carroll’s particular position is that new universes are created whenever there is decoherence.

The multiverse of the Many-Worlds Interpretation is a different multiverse from the multiverse of chaotic inflation that is described below in the section about extending the Big Bang Theory. Those universes exist within a single background physical space, unlike in the multiverse of the Many-Worlds Interpretation. Not every expert here agrees, but many suggest that in both kinds of multiverse, time is better envisioned, not as linear, but rather as increasingly branching into the times of the new universes. Time itself branches and is not linear, and there can be no un-branching or branch removal. If Leibniz were alive, he might say that, despite all the many branches coming into existence, we live in the best of all possible branches. The reason for saying “not every expert here agrees” is that even though everyone agrees on what the wave function is doing and that it gets new parts when there is an interaction, they do not all want to say a new part is describing a new world.

What the Copenhagen Interpretation calls quantum fuzziness or a superposition of states, Everett calls a superposition of many alternate universes. One advantage of accepting all these admittedly weird alternate universes is that in one clear sense the multiverse is deterministic and has information conservation. Although any single universe fails to be deterministic and information-preserving, the evolution of the global state of the multiverse is deterministic and information-preserving, and the multiverse evolves according to the Schrödinger equation. At least this is so on an ontological approach to the wave function; but, on an epistemic approach, quantum theories are not directly about reality but rather are merely tools for making measurements. This is an instrumentalist proposal.

Experts do not agree on whether the quantum wave function is a representation of reality, or only of our possible knowledge of reality. And there is no consensus on whether we currently possess the fundamental laws of quantum theory, as Everett believed, or instead only an incomplete version of the fundamental laws, as Einstein believed.

Heisenberg’s Uncertainty Principle 

In quantum mechanics, various Heisenberg Uncertainty Principles restrict the simultaneous values of, for example, a particle’s position and momentum. The uncertainties in the two values cannot both be zero at the same time. Another Heisenberg uncertainty principle restricts time and energy. It implies that the uncertainties in the simultaneous measurements of time and energy in energy emission (or absorption) must obey the inequality ΔE Δt ≥ h/4π. Here h is Planck’s constant. ΔE is the (standard deviation of the) uncertainty in the value of the energy during a time interval. Δt is the uncertainty in the time. The values of E and t cannot both be known more precisely than this inequality allows. A system cannot have such a precise value of E that ΔE is zero, because then the inequality would be violated. According to ontological approaches to quantum mechanics, there are no such precise values to be known.
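
To make the inequality concrete, here is a minimal numerical sketch in Python (my own illustration, not part of the article); the chosen value of Δt is an arbitrary assumption used only to show the arithmetic.

```python
# Minimal sketch of the energy-time uncertainty inequality ΔE Δt ≥ h/(4π).
# The example Δt below is an arbitrary illustrative assumption.
import math

h = 6.62607015e-34  # Planck's constant, in joule-seconds

def min_energy_uncertainty(delta_t_seconds):
    """Smallest ΔE (in joules) compatible with a time uncertainty Δt."""
    return h / (4 * math.pi * delta_t_seconds)

delta_t = 1e-21  # a process confined to about 10^-21 seconds
delta_E = min_energy_uncertainty(delta_t)
print(f"Δt = {delta_t:.0e} s  ->  ΔE ≥ {delta_E:.2e} J")
# The shorter the time interval, the larger the unavoidable energy uncertainty.
```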

These uncertainties are detected over a collection of measurements because any single measurement has (in principle and not counting practical measurement error) a precise value and is not “fuzzy” or uncertain. Repeated measurements necessarily produce a spread in values that reveal the fuzzy, wavelike characteristics of the phenomenon being measured, and these measurements collectively obey the Heisenberg inequality. Heisenberg himself thought of his uncertainty principle as being about how the measurer necessarily disturbs the measurement and not about how nature itself does not have definite values.

One other significant implication of these remarks about the uncertainty principle for time and energy is that there can be violations in the classical law of the conservation of energy. The classical law says the total energy of a closed and isolated system is always conserved and can only change its form but not disappear or increase. A falling rock has kinetic energy of motion during its fall to the ground, but when it collides with the ground, the kinetic energy changes its form by heating the ground, heating the rock, and creating the sound energy of the collision. No energy is lost in the process. This classical law can be violated in two ways: (1) if the universe (or the isolated system being studied) expands in volume, and (2) if the law is violated by an amount ΔE for a time Δt, as described by Heisenberg’s Uncertainty Principle. The classical law is often violated for very short time intervals and is less likely to be violated as the time interval increases. Some philosophers of physics have described this violation as something coming from nothing and something disappearing into nothing. The quantum “nothing” or quantum vacuum, however, is not really what classical philosophers call “nothing.” Quantum theory (rather than quantum mechanics) does contain a more sophisticated law of conservation of energy that has no violations and that accounts for the deviations from the classical law.

Quantum Foam

Quantum theory allows so-called “virtual particles” to be created out of the quantum vacuum without violating the more sophisticated law of conservation of energy. Despite their name, these particles are real, but they are unusual, because they borrow energy from the vacuum and pay it back very quickly. What happens is that, when a pair of energetic virtual particles—say, an electron and anti-electron—are created from energy in the vacuum, the two exist for only a very short time before being annihilated or reabsorbed, thereby giving back their borrowed energy. The greater the energy of the virtual pair, the shorter the time interval that the two exist before being reabsorbed, as described by Heisenberg’s Uncertainty Principle. In short, the more energy that is borrowed, the quicker it is paid back.

The physicist John Wheeler first suggested that the ultramicroscopic structure of spacetime for periods on the order of the Planck time (about 5.4 × 10⁻⁴⁴ seconds) in regions about the size of the Planck length (about 1.6 × 10⁻³⁵ meters) probably is a quantum foam of rapidly changing curvature of spacetime, with black holes and virtual particle-pairs and perhaps wormholes rapidly forming and dissolving.

The Planck time is the time it takes light to travel a Planck length. The terms Planck length and Planck time were inventions of Max Planck in the early twentieth century during his quest to find basic units of length and time that could be expressed in terms only of universal constants. He defined the Planck unit of time algebraically as √(ħG/c⁵). Here √ is the square root symbol; ħ is Planck’s constant in quantum theory divided by 2π; G is the gravitational constant in Newtonian mechanics; c is the speed of light in a vacuum in relativity theory. Three different theories of physics are tied together in this one expression. The Planck time is a theoretically interesting unit of time, but not a practical one. No known experimental procedure can detect events that are this brief.
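
As a small check of this formula, here is a sketch in Python (my own illustration; the constant values are standard reference figures, not taken from the article) that evaluates √(ħG/c⁵) and the corresponding Planck length.

```python
# Minimal sketch: compute the Planck time and Planck length from universal constants.
import math

hbar = 1.054571817e-34  # reduced Planck constant ħ, in joule-seconds
G = 6.67430e-11         # Newtonian gravitational constant, in m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in a vacuum, in m/s

planck_time = math.sqrt(hbar * G / c**5)  # ≈ 5.4e-44 seconds
planck_length = c * planck_time           # light-travel distance in one Planck time, ≈ 1.6e-35 m

print(f"Planck time   ≈ {planck_time:.2e} s")
print(f"Planck length ≈ {planck_length:.2e} m")
```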

Quantum field theory is an amalgam of the theory of quantum mechanics and the special theory of relativity. There are no isolated particles in a vacuum according to quantum field theory because every ordinary elementary particle is surrounded by a cloud of virtual particles. Many precise experiments can be explained only by assuming there is this cloud.

So far, this article has spoken of virtual particles as if they are ordinary, but short-lived, particles. This is not quite correct. Virtual particles are not exactly particles like the other particles of the quantum fields. Both are excitations of these fields, and they both have gravitational effects and thus effects on time, but virtual particles are not equivalent to ordinary quantum particles, although the longer lived ones are more like ordinary particle excitations than the short lived ones.

Virtual particles are just a way to calculate the behavior of quantum fields, by pretending that ordinary particles are changing into weird particles with impossible energies, and tossing such particles back and forth between themselves. A real photon has exactly zero mass, but the mass of a virtual photon can be absolutely anything. What we mean by “virtual particles” are subtle distortions in the wave function of a collection of quantum fields…but everyone calls them particles [in order to keep their names simple] (Carroll 2019, p. 316).

For more presentation of the ontological implications of quantum field theory, see the last section of the supplementary article “Frequently Asked Questions about Time.”

Entanglement and Non-Locality

Classical theories imply locality, the feature that says an object is influenced immediately and directly only by its immediate surroundings. All the interpretations of quantum theory other than the Many-Worlds Interpretation imply the universe is not local. One particle can be coordinated with a distant particle instantly. Einstein discovered this phenomenon. Technically, it is called “quantum entanglement.” Some physicists speak of it as “spooky action at a distance,” and many scientists have attributed this phrase to Einstein himself, but he never said it; only other scientists and science reporters say it. For Einstein, it cannot be spooky action because it is not action. Entanglement is, though, a correlation over a distance.

If some properties of two particles somehow become entangled, this does not mean that, if you move one of them, then the other one moves, too. It is not that kind of entanglement. It is about a particle’s suddenly having a definite property it did not previously have. This entanglement leads to non-locality. A quantum measurement of a certain property of one member of an entangled pair of particles will instantaneously or nearly instantaneously determine the value of that property found by any similar measurement that will eventually be made on the other member of the pair, no matter how far away and how close in time to the first measurement. This is very unintuitive, but the only reasonable explanation is that neither particle has a definite value for the property until the first one is measured, after which the second one’s value is almost immediately fixed. This is at least spooky correlation at a distance.

For example, suppose two electrons have entangled spins, so that if either one has spin-up when measured, then the other always has spin-down when measured in the same direction, even if the particles are very far away from each other and both are measured at about the same time. The most important feature here is that the values of the spin properties of the entangled pair were not fixed at the time they became entangled. The value of the spin for the first electron is random or in a superposition of up and down, and only a measurement of spin will fix its value. It might be up; it might be down, but measuring the spin of the first particle to be up immediately fixes the value of spin of the second particle to be down. This initial randomness prevents use of the correlation for sending a useful signal.

Here is another way of describing the odd situation of the two entangled particles. The first is Alice’s particle; the second is Bob’s. Because of the correlation, any pair of their measurements will be found to be up-down or else down-up. Alice can look at her system and instantly learn about Bob’s, and she would like to use this fact to communicate quickly with Bob. They agree on the secret code that if Bob measures his electron to be spin-down, then he should buy the junk bonds, else he should not buy the junk bonds. Suppose Alice wants to use their secret code to tell Bob to buy the bonds. Unfortunately, Alice cannot force her particle always to be up, as a means of causing the second particle to be down. She might measure her particle to be down, and that would send the wrong stock signal to Bob. So, the correlation cannot be used for communication or action or causality. The limitation on the speed of communications, actions, and causal influences that holds in special relativity is preserved even in quantum theory.

In 1935, Erwin Schrödinger said:

Measurements on (spatially) separated systems cannot directly influence each other—that would be magic.

Einstein agreed. Yet the magic seems to exist.

Becoming entangled is a physical process, and it comes in degrees. The above discussion presumed a high level of entanglement.

Ontologically, the key idea about quantum entanglement is that if a particle becomes entangled with one or more other particles within the system, then it loses some of its individuality. The whole system is more than the sum of its sub-parts. The state of an entangled group of particles is not determined by the sum of the states of each separate particle. And vice versa. If you have the maximum information about the state of an entangled system of particles, you know hardly anything about the state of any individual particle. In that sense, quantum mechanics has led to the downfall of reductionism.

It is easy to create entangled pairs. Colliding two energetic photons will produce an entangled pair of an electron and an anti-electron whose spins along some axis are entangled. Most entanglement occurs over a short distance. But in order to explore this “magical” feature of the quantum world, researchers have separated two entangled particles by a great distance and measured their spins at the same time. This way, the first measurement outcome cannot have directly affected the second measurement outcome via sending some ordinary signal between them, because the signal would have had to move faster than light speed to get there by the time the second measurement is made. Nevertheless, the transmission of coordinated behavior happens in zero time or in nearly zero time. It is hard for us who are influenced by the manifest image to believe that the two entangled electrons did not start out with the spins that they were later measured to have, but careful observations have repeatedly confirmed this nonlocality. It has been shown repeatedly that any assumption that the two entangled particles started out already possessing the definite spins they were later measured to have is inconsistent with the data produced in the observations.

But entanglement needs to be better understood. The philosopher David Albert has commented that “In order to make sense of this ‘instantaneity,’ it looks as if there is a danger that one may require an absolute notion of simultaneity of exactly the kind that the special theory of relativity denied.”

Leonard Susskind has emphasized that it is not just particles that can become entangled. Parts of space can be entangled with each other, and it is this entanglement that “holds space together.” Some researchers have concluded that, because quantum theory implies that non-locality occurs most everywhere, this is the default, and what needs to be explained is any occurrence of locality.

Approximate Solutions

Like the equations of the theory of relativity, the equations of quantum theory are very difficult to solve and use except in very simple situations. The equations cannot be solved exactly for complicated systems, even with today’s computers. There have been many Nobel-Prize-winning advances in chemistry made by finding methods of approximating quantum theory in order to simulate the results of chemical activity within a computer. For one example, Martin Karplus won the Nobel Prize for chemistry in 2013 for creating approximation methods for computer programs that describe the behavior of the retinal molecule in our eye’s retina. The molecule has almost 160 electrons, but he showed that, for describing how light strikes the molecule and begins the chain reaction that produces the electrical signals that our brain interprets during vision, chemists can successfully use an approximation; they need to pay attention only to the molecule’s outer electrons, that is, to the electrons in the electron cloud that is farthest out from the nucleus.

a. Standard Model

The Standard Model of particle physics was proposed in the 1970s, and subsequently it has been revised and well tested. The Model is designed to describe elementary particles and the physical laws that govern them. The Standard Model is really a loose collection of theories about different particle fields, and it describes all known non-gravitational fields. It is our civilization’s most precise and powerful theory of physics.

The theory sets limits on what exists and what can happen. It implies that a particle can be affected by some forces but not others. It implies that a photon cannot decay into two photons. It implies that protons attract electrons and never repel them. It also implies that every proton consists in part of two up quarks and one down quark that interact with each other by exchanging gluons. The gluons “glue” the particles together via the strong nuclear force just as photons glue electrons to protons via the electromagnetic force. Gravitons, the hypothesized carrier particles for gravity, glue a moon to a planet and a planet to a star. Unlike how Isaac Newton envisioned forces, all forces are transmitted by particles. That is, all forces have carrier particles that “carry” the force from one place to another. The gluons are massless and transmit the strong force at nearly light speed; this force “glues” the quarks together inside a proton. More than 90% of the mass of the proton consists in a combination of virtual quarks, virtual antiquarks and virtual gluons. Because these virtual particles exist over only very short time scales, they are too short-lived to be detected by any practical experiment, and so they are called “virtual particles.” However, this word “virtual” does not imply “not real.”

The properties that serve to distinguish any particle from any other are the values for mass, spin, and charge at the particle’s spacetime point. Nothing else. There are no other differences among what is at a point, so in that sense fundamental physics is very simple. Charge, though, is not simply electromagnetic charge. There are three kinds of color charge for the strong nuclear force, and two kinds of charge for the weak nuclear force.

Except for gravity, the Standard Model describes all the universe’s forces. Strictly speaking, these theories are about interactions rather than forces. A force is just one kind of interaction. Another kind of interaction does not involve forces but rather it changes one kind of particle into another kind. The neutron, for example, changes its appearance depending on how it is probed. The weak interaction can transform a neutron into a proton. It is because of transformations like this that the concepts of something being made of something else and of one thing being a part of a whole become imprecise for very short durations and short distances. So, classical mereology—the formal study of parts and the wholes they form—fails.

Interaction in the field of physics is very exotic. When a particle interacts with another particle, the two particles exchange other particles, the so-called carriers of the interactions. So, when milk is spilled onto the floor, what is going on is that the particles of the milk and the particles in the floor and the particles in the surrounding air exchange a great many carrier particles with each other, and the exchange is what is called “spilling milk onto the floor.” Yet all these varied particles are just tiny fluctuations of fields. This scenario indicates one important way in which the scientific image has moved very far away from the manifest image.

According to the Standard Model, but not according to general relativity theory, all particles must move at light speed c unless they interact with other fields. All the particles in your body such as its protons and electrons would move at the speed c if they were not continually interacting with the Higgs Field. The Higgs Field can be thought of as being like a “sea of molasses” that slows down all protons and electrons and gives them the mass and inertia they have. Neutrinos are not affected by the Higgs Field, but they move at slightly less than c because they are slightly affected by the field of the weak interaction.

As of the first quarter of the twenty-first century, the Standard Model is incomplete because it cannot account for gravity or dark matter or dark energy or the fact that there is more matter than anti-matter. When a new version of the Standard Model does all this, then it will perhaps become the long-sought “theory of everything.”

4. Big Bang

The classical Big Bang Theory implies that the universe once was extremely small, extremely dense, extremely hot, nearly uniform, and expanding; and it had extremely high energy density and severe curvature of its spacetime at all scales. Now the universe has lost all these properties except one: it is still expanding. Some cosmologists believe time began with the Big Bang, at the famous cosmic time t = 0, but the Big Bang Theory itself does not imply anything about when time began, nor whether anything was happening before the Big Bang, although those features could be added into a revised theory of the Big Bang.

The Big Bang explosion was a rapid expansion of space itself, not an expansion of something in a pre-existing void. Think of the expansion as being due to the creation of new space everywhere.

The Big Bang Theory is only a theory of the observable universe, not of the whole universe. The observable universe is the part of the universe that is in principle observable by creatures on Earth. But surely there is more than we can in principle observe. Scientists have no well-confirmed idea about the universe as a whole; it might or might not be like the observable universe.

The Big Bang Theory was very controversial when it was created in the 1920s. Before the 1960s, physicists were unsure whether proposals about cosmic origins were pseudoscientific and so should not be discussed in a well-respected physics journal. By 1930, there was general agreement among cosmologists that the universe was expanding, but it was not until the 1970s that there was general agreement that the Big Bang Theory is correct. The theory’s primary competitor during this time was the steady state theory. That theory allows space to expand in volume, with the expansion compensated for by the spontaneous creation of matter so as to keep the universe’s overall density constant over time. This spontaneous creation violated the increasingly attractive principle of the conservation of energy.

The Big Bang explosion began approximately 13.8 billion years ago (although a minority of cosmologists suggest it might be as young as 11.4 billion years old). At that time, the observable universe would have had an ultramicroscopic volume. The explosion created new space, and this explosive process continues to create new space today as clusters of galaxies fly away from each other. In fact, in 1998, the classical theory of the Big Bang was revised to say the expansion rate has been accelerating slightly for the last five billion years due to the pervasive presence of dark energy. Dark energy has this name because so little is known about it other than that its amount per unit volume stays constant as space expands. That is, it does not dilute. There are two possibilities for what it is. Dark energy is either what is referred to as the “cosmological constant” or “the energy of the vacuum.” First, it might be:

a nonzero ground-state energy of the universe that will exist indefinitely into the future. Or second, it could be energy stored in yet another invisible background scalar field in the universe. If this is the case, then the next obvious question is, will this energy be released in yet another, future inflationary-like phase transition as the universe continues to cool down? At this time the answer is up for grabs” (Lawrence M. Krauss, The Greatest Story Ever Told—So Far: Why Are We Here?).

One hopes that, if it is the latter of the two possibilities, then that phase transition will not happen very soon.

The Big Bang Theory in some form or other (with or without inflation) is accepted by nearly all cosmologists, astronomers, astrophysicists, and philosophers of physics, but it is not as firmly accepted as is the theory of relativity.

The Big Bang Theory originated with several people, although Edwin Hubble’s very careful observations in 1929 of galaxy recession from Earth were the most influential pieces of evidence in its favor. He showed that on average the farther a galaxy is from Earth, the faster it recedes from Earth. In 1922, the Russian physicist Alexander Friedmann discovered that the general theory of relativity allows an expanding universe. Unfortunately, Einstein reacted to this discovery by saying this is a mere physical possibility and not a feature of the actual universe. He later retracted this claim, thanks in large part to the influence of Hubble’s data. The Belgian physicist Georges Lemaître suggested in 1927 that there is some evidence the universe is expanding, and he defended his claim using previously published measurements to show a pattern that the greater the distance of a galaxy from Earth the greater the galaxy’s speed away from Earth. He calculated these speeds from the Doppler shifts in their light frequency, as did Hubble.

Currently, space is expanding because most clusters of galaxies are flying away from each other, even though molecules, planets, and galaxies themselves are not now expanding. Eventually, according to the most popular version of the Big Bang Theory, in the very distant future, even these objects will expand away from each other and all structures of particles will be annihilated, leaving only an expanding soup of elementary particles as the universe chills and approaches thermodynamic equilibrium.

The acceptance of the theory of relativity has established that space curves locally near all masses. However, the theory of relativity has no implications about curvature of space at the cosmic level. The universe presumably has no edge, but the observable universe does. The observable universe is a sphere containing 350 billion large galaxies; it is called “our Hubble Bubble” and also “our pocket universe.” Its diameter is about 93 billion light years, and it is growing larger every day.

The Big Bang Theory presupposes that the ultramicroscopic-sized observable universe at a very early time had an extremely large curvature, but most cosmologists believe that the universe has straightened out and now no longer has any significant spatial curvature on the largest scale of billions of light years. Also, astronomical observations reveal that the current distribution of matter in the universe tends towards uniformity as the scale increases. At very large scales it is homogeneous and isotropic.

Here is a picture that displays the evolution of the observable universe since the Big Bang—although the picture displays only two spatial dimensions of it. Time is increasing to the right while space increases both up and down and in and out of the picture:

[Big Bang graphic. Attribution: NASA/WMAP Science Team.]

The term Big Bang does not have a precise definition. It does not always refer to a single, first event; rather, it more often refers to a brief duration of early events as the universe underwent a rapid expansion. In fact, the idea of a first event is primarily a product of accepting the theory of relativity, which is known to fail in the limit as the universe’s volume approaches zero, the so-called singularity. Actually, the Big Bang Theory itself is not a specific theory, but rather a framework for more specific Big Bang theories.

Astronomers on Earth detect microwave radiation arriving in all directions. It is the cooled-down heat from the Big Bang. More specifically, it is electromagnetic radiation produced about 380,000 years after the Big Bang when the universe suddenly turned transparent for the first time. Mapping the microwave radiation gives us a picture of the universe in its infancy. At that time, the universe had cooled down to 3,000 kelvins, which was cool enough to form atoms and to allow photons for the first time to move freely without being immediately reabsorbed by neighboring particles. This primordial electromagnetic radiation has now reached Earth as the universe’s most ancient light. Because of space’s expansion during the light’s travel to Earth, the radiation has cooled and dimmed, and its wavelength has increased and become microwave radiation with a corresponding temperature of only 2.73 kelvins above absolute zero. The microwave’s wavelength is about two millimeters, which is small compared to the roughly 120-millimeter wavelength of the microwaves in kitchen ovens. Measuring this incoming Cosmic Microwave Background (CMB) radiation reveals it to be extremely uniform in all directions in the sky.
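
A minimal sketch, assuming only the standard rule that the radiation temperature falls in proportion to the stretching of space, shows how much space has stretched since the universe turned transparent. The two temperatures are the ones mentioned above; everything else is my own illustration.

```python
# Minimal sketch: the stretch factor of space since the universe became transparent,
# assuming the radiation temperature falls as 1/(stretch factor).
T_recombination = 3000.0  # kelvins, when the first atoms formed (figure from the text)
T_today = 2.73            # kelvins, the CMB temperature measured today (figure from the text)

stretch_factor = T_recombination / T_today
print(f"Space has stretched by a factor of about {stretch_factor:.0f} since then")

emitted_wavelength_m = 2e-3 / stretch_factor  # trace a 2 mm microwave of today back in time
print(f"That microwave was emitted at a wavelength of about {emitted_wavelength_m:.1e} m")
# Roughly two millionths of a meter, which stretches into the millimeter (microwave) range.
```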

Extremely uniform, but not perfectly uniform. CMB radiation varies very slightly with the angle it is viewed from. The variation is a ten thousandth of a degree of temperature. These small temperature fluctuations of the currently arriving radiation indicate fluctuations in the density of the matter of the early plasma and so are probably the origin of what later would become today’s galaxies and the voids between them because the high density regions will contract under the pull of gravity and form stars. The temperature fluctuations, in turn, probably began much earlier as quantum effects.

After the early rapid expansion ended, the universe’s expansion rate became constant and comparatively low for billions of years. This rate is now accelerating slightly because there is another source of expansion—the repulsion of dark energy. The influence of dark energy was initially insignificant for billions of years, but its key feature is that it does not dilute as the space undergoes expansion. So, finally, after about seven or eight billion years of space’s expanding after the Big Bang, the dark energy became an influential factor and started to significantly accelerate the expansion. Today the expansion rate is becoming more and more significant. For example, the diameter of today’s observable universe will double in about 10 billion years. This influence from dark energy is shown in the above diagram by the presence of the curvature that occurs just below and before the abbreviation “etc.” Future curvature will be much greater. Most cosmologists believe this dark energy is the energy of space itself.

The initial evidence for dark energy came from observations in 1998 of Doppler shifts of supernovas. These observations are best explained by the assumption that distances between supernovas are increasing at an accelerating rate. Because of this rate increase, any receding galaxy cluster that is currently 100 light-years away from our Milky Way will be more than 200 light-years away in another 13.8 billion years, and it will be moving away from us much faster than it is now. One day, it will be moving so fast away that it will become invisible because the recession speed will exceed light speed. In enough time, every galaxy other than the Milky Way will become invisible. After that, the stars in the Milky Way will gradually become invisible, with the more distant ones disappearing first. We will lose sight of all our neighbors.

The universe is currently expanding, so everything is moving a bit away from everything else. But the influence is not currently significant except at the level of galaxy clusters getting farther away from other galaxy clusters as new space is created between them. But the influence is accelerating, and so someday all solar systems, and ultimately even all configurations of elementary particles, will expand and break apart. We approach the heat death of the universe, the big chill.

The term “our observable universe” and the synonymous term “our Hubble bubble” refer to everything that some person on Earth could in principle observe. Cosmologists presume that there are distant places in the universe in which an astronomer there could see more things than are observable from here on Earth. Physicists agree that, because of this reasoning, there exist objects that are in the universe but not in our observable universe. Because those unobservable objects are also the product of our Big Bang, cosmologists assume that they are similar to the objects we on Earth can observe—that those objects form atoms and galaxies, and that time behaves there as it does here. But there is no guarantee that this convenient assumption is correct. Occam’s Razor suggests it is correct, but that is the sole basis for such a claim. So, it is more accurate to say the classical Big Bang Theory implies that the observable universe once was extremely small, dense, hot, and so forth.

Because the Big Bang happened about 13.8 billion years ago, you might think that no observable object can be more than 13.8 billion light-years from Earth, but this would be a mistake that does not take into account the fact that the universe has been expanding all that time. The relative distance between galaxies is increasing over time. That is why astronomers can see about 45 billion light-years in any direction and not merely 13.8 billion light-years.
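
Here is a rough sketch of why the figure is about 45 billion rather than 13.8 billion light-years. It numerically integrates the comoving distance to the cosmic microwave background in a flat Lambda-CDM model; the parameter values are round assumed figures, not numbers asserted by this article.

```python
# Rough sketch: comoving distance to the CMB in a flat matter-plus-Lambda universe.
# Parameter values below are assumptions chosen for illustration.
import math

H0_km_s_Mpc = 67.7   # Hubble constant (assumed)
omega_m = 0.31       # matter density parameter (assumed)
omega_lambda = 0.69  # dark-energy density parameter (assumed)

KM_PER_MPC = 3.0857e19
KM_PER_GLY = 9.4607e21          # kilometers in one billion light-years
H0 = H0_km_s_Mpc / KM_PER_MPC   # expansion rate today, in 1/s
c_km_s = 299792.458             # speed of light, in km/s

def hubble_rate(z):
    """Expansion rate H(z) in 1/s, ignoring radiation."""
    return H0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_lambda)

z_max = 1100.0   # roughly the redshift of the CMB
steps = 200000
dz = z_max / steps
integral_s = sum(dz / hubble_rate((i + 0.5) * dz) for i in range(steps))  # midpoint rule
distance_gly = c_km_s * integral_s / KM_PER_GLY
print(f"Comoving distance to the CMB ≈ {distance_gly:.0f} billion light-years")
# Prints a value in the mid-40s: space kept expanding while the light was in transit.
```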

When contemporary physicists speak of the age of our universe and of the time since our Big Bang, they are implicitly referring to cosmic time measured in the cosmological rest frame. This is time measured in a unique reference frame in which the average motion of all the galaxies is stationary and the Cosmic Microwave Background radiation is as close as possible to being the same in all directions. This frame is not one in which the Earth is stationary. Cosmic time is time measured by a clock that would be sitting as still as possible while the universe expands around it. In cosmic time, t = 0 years is when the Big Bang began, and t = 13.8 billion years is our present. If you were at rest at the spatial origin in this frame, then the Cosmic Microwave Background radiation on a very large scale would have about the same average temperature in any direction.

The cosmic rest frame is a unique, privileged reference frame for astronomical convenience, but there is no reason to suppose it is otherwise privileged. It is not the frame sought by the A-theorist who believes in a unique present, nor by Isaac Newton who believed in absolute rest, nor by James Clerk Maxwell who believed in an aether that is at rest and that waves whenever a light wave passes through it.

The cosmic frame’s spatial origin point is described as follows:

In fact, it isn’t quite true that the cosmic background heat radiation is completely uniform across the sky. It is very slightly hotter (i.e., more intense) in the direction of the constellation of Leo than at right angles to it…. Although the view from Earth is of a slightly skewed cosmic heat bath, there must exist a motion, a frame of reference, which would make the bath appear exactly the same in every direction. It would in fact seem perfectly uniform from an imaginary spacecraft traveling at 350 km per second in a direction away from Leo (towards Pisces, as it happens)…. We can use this special clock to define a cosmic time…. Fortunately, the Earth is moving at only 350 km per second relative to this hypothetical special clock. This is about 0.1 percent of the speed of light, and the time-dilation factor is only about one part in a million. Thus to an excellent approximation, Earth’s historical time coincides with cosmic time, so we can recount the history of the universe contemporaneously with the history of the Earth, in spite of the relativity of time.

Similar hypothetical clocks could be located everywhere in the universe, in each case in a reference frame where the cosmic background heat radiation looks uniform. Notice I say “hypothetical”; we can imagine the clocks out there, and legions of sentient beings dutifully inspecting them. This set of imaginary observers will agree on a common time scale and a common set of dates for major events in the universe, even though they are moving relative to each other as a result of the general expansion of the universe…. So, cosmic time as measured by this special set of observers constitutes a type of universal time… (Davies 1995, pp. 128-9).
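
A quick check of the “one part in a million” figure in the passage above, using the ordinary special-relativistic time-dilation factor; this sketch is my own illustration.

```python
# Minimal sketch: time-dilation factor for motion at 350 km/s relative to the cosmic frame.
import math

c = 299792.458  # speed of light, in km/s
v = 350.0       # Earth's speed relative to the cosmic frame, in km/s (figure from the quote)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"Time-dilation factor minus one ≈ {gamma - 1:.1e}")
# Prints about 7e-7, i.e. roughly one part in a million, as Davies says.
```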

It is a convention that cosmologists agree to use the cosmic time of this special reference frame, but it is an interesting fact and not a convention that our universe is so organized that there is such a useful cosmic time available to be adopted by the cosmologists. Not all physically possible spacetimes obeying the laws of general relativity can have this sort of cosmic time.

In the 2020s, the standard model of cosmology and thus of the Big Bang is known as the lambda-CDM model or Λ-CDM model. Lambda (Λ) is the cosmological constant responsible for the accelerating expansion, and CDM is cold dark matter. The cold, dark matter is expected by some physicists to consist in as-yet-undiscovered weakly interacting massive particles, called WIMPs. A competing theory implies the dark matter is fuzzy, ultralight particles called axions.

a. Cosmic Inflation

According to one somewhat popular revision of the classical Big Bang Theory, the cosmic inflation theory, the universe was created from quantum fluctuations in an inflaton field; then, for some unknown reason, the field underwent a cosmological phase transition causing an exponentially accelerating expansion of space; and then, for some other unknown reason, it stopped inflating very soon after it began. After that, the universe continued expanding at a more or less constant rate for billions of years.

By the time that inflation was over, every particle was left in isolation, surrounded by a vast expanse of empty space extending in every direction. And then—only a fraction of a fraction of an instant later—space was once again filled with matter and energy. Our universe got a new start and a second beginning. After a trillionth of a second, all four of the known forces were in place, and behaving much as they do in our world today. And although the temperature and density of our universe were both dropping rapidly during this era, they remained mind-bogglingly high—all of space was at a temperature of 10¹⁵ degrees. Exotic particles like Higgs bosons and top quarks were as common as electrons and photons. Every last corner of space teemed with a dense plasma of quarks and gluons, alongside many other forms of matter and energy. After expanding for another millionth of a second, our universe had cooled down enough to enable quarks and gluons to bind together forming the first protons and neutrons (Dan Hooper, At the Edge of Time, p. 2).

About half the cosmologists do not believe in cosmic inflation. They hope there is another explanation of the phenomena that inflation theory explains. The theory provides an explanation for (i) why there is currently so little curvature of space on large scales (the flatness problem), (ii) why the microwave radiation that arrives on Earth from all directions is so uniform (the cosmic horizon problem), (iii) why there are not point-like magnetic monopoles most everywhere (called the monopole problem), and (iv) why we have been unable to detect proton decay that has been predicted (the proton decay problem). It is difficult to solve these problems in some other way than by assuming inflation.

The theory of primordial cosmic strings has been the major competitor to the theory of cosmic inflation, but the above problems are more difficult to solve with strings and without inflation, and the anisotropies of the Cosmic Microwave Background (CMB) radiation are consistent with inflation but not with primordial cosmic strings. The theory of inflation is accepted by a great many members of the community of professional cosmologists, but it is not as firmly accepted as is the Big Bang Theory. Princeton cosmologist Paul Steinhardt and Neil Turok of the Perimeter Institute are two of inflation’s noteworthy opponents, although Steinhardt once made important contributions to the creation of inflation theory. One of their major complaints is that at the time of the Big Bang, there should have been a great many long-wavelength gravitational waves created, and today we have the technology that should have detected these waves, but we find no evidence for them.

According to the theory of inflation, assuming the Big Bang began at time t = 0, then the epoch of inflation (the epoch of radically repulsive gravity) began at about t = 10⁻³⁶ seconds and lasted until about t = 10⁻³³ seconds, during which time the volume of space increased by a factor of 10²⁶, and any initial unevenness in the distribution of energy was almost all smoothed out, that is, smoothed out from the large-scale perspective, somewhat in analogy to how blowing up a balloon removes its initial folds and creases and looks flat when a small section of it is viewed close up.

Although the universe at the beginning of inflation was actually much smaller than the size of a proton, think of it instead as having been the size of a marble. Then during the inflation period this marble-sized object expands abruptly to a gigantic sphere whose radius is the distance that now would reach from Earth to the nearest supercluster of galaxies. This would be a spectacular change in something marble-sized.
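
A back-of-the-envelope check of the marble analogy; treating the expansion as a linear stretch by a factor of 10²⁶ is my own assumption for illustration, and the marble radius is taken to be about one centimeter.

```python
# Rough sketch: what a centimeter-sized region becomes after a linear stretch of 1e26.
marble_radius_m = 0.01             # about one centimeter
linear_stretch = 1e26              # assumed linear stretch factor during inflation
METERS_PER_LIGHT_YEAR = 9.4607e15

final_radius_ly = marble_radius_m * linear_stretch / METERS_PER_LIGHT_YEAR
print(f"Radius after inflation ≈ {final_radius_ly:.1e} light-years")
# About 1e8 light-years, on the order of the distance to a nearby supercluster of galaxies.
```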

The speed of this inflationary expansion was much faster than light speed. However, this fast expansion speed does not violate Einstein’s general theory of relativity because this theory places no limits on the speed of expansion of space itself.

At the end of that inflationary epoch at about t = 10⁻³³ seconds or so, the inflation stopped. In more detail, what this means is that the explosive material decayed for some unknown reason and left only normal matter with attractive gravity. Meanwhile, our universe continued to expand, although now at a slow, nearly constant, rate. It went into its “coasting” phase. Regardless of any previous curvature in our universe, by the time the inflationary period ended, the overall structure of space on the largest scales had very little spatial curvature, and its space was extremely homogeneous. Today, we see evidence that the universe is homogeneous on its largest scale.

But at the very beginning of the inflationary period, there surely were some very tiny imperfections due to the earliest quantum fluctuations in the inflaton field. These quantum imperfections inflated into small perturbations or slightly bumpy regions at the end of the inflationary period. The densest regions attracted more material than the less dense regions, and these dense regions would eventually turn into future galaxies. The less dense regions would eventually evolve into the voids between the galaxies. Those early quantum fluctuations have now left their traces in the very slight hundred-thousandth of a degree differences in the temperature of the cosmic microwave background radiation at different angles as one now looks out into space from Earth with microwave telescopes.

Let’s re-describe the process of inflation. Before inflation began, for some as yet unknown reason the universe contained an unstable inflaton field or false vacuum field. For some other, as yet unknown reason, this energetic field expanded and cooled and underwent a spontaneous phase transition (somewhat analogous to what happens when cooling water spontaneously freezes into ice). That phase transition caused the highly repulsive primordial material to hyper-inflate exponentially in volume for a very short time. To re-describe this yet again, during the primeval inflationary epoch, the gravitational field’s stored, negative, repulsive, gravitational energy was rapidly released, and all space wildly expanded. At the end of this early inflationary epoch at about t = 10⁻³³ seconds, the highly repulsive material decayed for some as yet unknown reason into ordinary matter and energy, and the universe’s expansion rate stopped increasing exponentially, and the expansion rate dropped precipitously and became nearly constant. During the inflationary epoch, the entropy continually increased, so the second law of thermodynamics was not violated.

Alan Guth described the inflationary period this way:

There was a period of inflation driven by the repulsive gravity of a peculiar kind of material that filled the early universe. Sometimes I call this material a “false vacuum,” but, in any case, it was a material which in fact had a negative pressure, which is what allows it to behave this way. Negative pressure causes repulsive gravity. Our particle physics tells us that we expect states of negative pressure to exist at very high energies, so we hypothesize that at least a small patch of the early universe contained this peculiar repulsive gravity material which then drove exponential expansion. Eventually, at least locally where we live, that expansion stopped because this peculiar repulsive gravity material is unstable; and it decayed, becoming normal matter with normal attractive gravity. At that time, the dark energy was there, the experts think. It has always been there, but it’s not dominant. It’s a tiny, tiny fraction of the total energy density, so at that stage at the end of inflation the universe just starts coasting outward. It has a tremendous outward thrust from the inflation, which carries it on. So, the expansion continues, and as the expansion happens the ordinary matter thins out. The dark energy, we think, remains approximately constant. If it’s vacuum energy, it remains exactly constant. So, there comes a time later where the energy density of everything else drops to the level of the dark energy, and we think that happened about five or six billion years ago. After that, as the energy density of normal matter continues to thin out, the dark energy [density] remains constant [and] the dark energy starts to dominate; and that’s the phase we are in now. We think about seventy percent or so of the total energy of our universe is dark energy, and that number will continue to increase with time as the normal matter continues to thin out. (World Science U Live Session: Alan Guth, published November 30, 2016 at https://www.youtube.com/watch?v=IWL-sd6PVtM.)

Before about t = 10⁻⁴⁶ seconds, there was a single basic force rather than the four we have now. The four basic forces (or basic interactions) are: the force of gravity, the strong nuclear force, the weak force, and the electromagnetic force. At about t = 10⁻⁴⁶ seconds, the energy density of the primordial field was down to about 10¹⁵ GeV, which allowed spontaneous symmetry breaking (analogous to the spontaneous phase change in which water cools enough to spontaneously change to ice); this phase change created the gravitational force as a separate basic force. The other three forces had not yet appeared as separate forces.

Later, at t = 10⁻¹² seconds, there was even more spontaneous symmetry breaking. First the strong nuclear force, then the weak nuclear force and finally the electromagnetic force became separate forces. For the first time, the universe now had exactly four separate forces. At t = 10⁻¹⁰ seconds, the Higgs field turned on. This slowed down many kinds of particles by giving them mass so they no longer moved at light speed.

Much of the considerable energy left over at the end of the inflationary period was converted into matter, antimatter, and radiation, such as quarks, antiquarks, and photons. The universe’s temperature escalated with this new radiation; this period is called the period of cosmic reheating. Matter-antimatter pairs of particles combined and annihilated, removing from the universe all the antimatter and almost all the matter. At t = 10⁻⁶ seconds, this matter and radiation had cooled enough that quarks combined together and created protons and neutrons. After t = 3 minutes, the universe had cooled sufficiently to allow these protons and neutrons to start combining strongly to produce hydrogen, deuterium, and helium nuclei. At about t = 379,000 years, the temperature was low enough (around 2,700 degrees C) for these nuclei to capture electrons and to form the initial hydrogen, deuterium, and helium atoms of the universe. With these first atoms coming into existence, the universe became transparent in the sense that short-wavelength light (about a millionth of a meter) was now able to travel freely without always being absorbed very soon by surrounding particles. Due to the expansion of the universe since then, this early light’s wavelength has expanded and is today invisible on Earth because it is at a much longer wavelength than it was 379,000 years ago. That radiation is now detected on Earth as having a wavelength of 1.9 millimeters, and it is called the cosmic microwave background radiation or CMB. That energy is continually arriving at the Earth’s surface from all directions. It is almost homogeneous and almost isotropic.

As the universe expands, the CMB radiation loses energy; but this energy is not lost from the universe, nor is the law of conservation of energy violated. There is conservation because the energy lost by the radiation goes into the expansion of space.

In the literature in both physics and philosophy, descriptions of the Big Bang often speak of it as if it were the first event, but the Big Bang Theory does not require there to be a first event, an event that had no prior event. Any description mentioning the first event is a philosophical position, not something demanded by the scientific evidence. Physicists James Hartle and Stephen Hawking once suggested that looking back to the Big Bang is just like following the positive real numbers back to ever-smaller positive numbers without ever reaching the smallest positive one. There isn’t a smallest positive number. If Hartle and Hawking are correct that time is strictly analogous to this, then the Big Bang had no beginning point event, no initial time.

The classical Big Bang Theory is based on the assumption that the universal expansion of clusters of galaxies can be projected all the way back to a singularity, to a zero volume at t = 0. The assumption is faulty. Physicists now agree that the projection to a smaller volume must become untrustworthy for any times less than the Planck time. If a theory of quantum gravity ever gets confirmed, it is expected to provide more reliable information about the Planck epoch from t = 0 to the Planck time, and it may even allow physicists to answer the questions, “What caused the Big Bang?” and “Did anything happen before then?”

For a short lecture by Guth on these topics aimed at students, see https://www.youtube.com/watch?v=ANCN7vr9FVk.

b. Eternal Inflation and Many Worlds

Although there is no consensus among physicists about whether there is more than one universe, many of the Big Bang inflationary theories are theories of eternal inflation, of the eternal creation of more Big Bangs and thus more universes. The theory is called the theory of chaotic inflation, the theory of the inflationary multiverse, and occasionally the Multiverse Theory (although this multiverse is different from the multiverse of Hugh Everett’s Many-Worlds Interpretation). The key idea is that once inflation gets started it cannot easily be turned off.

The inflaton field is the fuel of our Big Bang and of all of the other Big Bangs. Advocates of eternal inflation say that not all the inflaton fuel is used up in producing just one Big Bang, so the remaining fuel is available to create other Big Bangs, at an exponentially increasing rate because the inflaton fuel increases much faster than it gets used. Presumably, there is no reason why this process should ever end, so there will be a potentially infinite number of universes in the multiverse. Also, there is no good reason to suppose our actual universe was the first one. Technically, whether one Big Bang occurred before or after another is not well defined.

A helpful mental image here is to think of the multiverse as a large, expanding space filled with bubbles of all sizes, all of which are growing. Each bubble is its own universe, and each might have its own physical constants, its own number of dimensions, even some laws of physics different from ours. In some of these universes, there may be no time at all. Regardless of whether a single bubble universe is inflating or no longer inflating, the space between the bubbles is inflating and more bubbles are being born at an exponentially increasing rate. Because the space between bubbles is inflating, nearby bubbles are quickly hurled apart. That implies there is a low probability that our bubble universe contains any empirical evidence of having interacted with a nearby bubble.

After any single Big Bang, eventually the hyper-inflation ends within that universe. We say its bit of inflaton fuel has been used up. However, after the hyper-inflation ends, the expansion within that universe does not. Our own expanding bubble was produced by our Big Bang 13.8 billion years ago. It is called the Hubble Bubble.

The inflationary multiverse is not the quantum multiverse predicted by the many-worlds interpretation of quantum theory. The many-worlds interpretation says every possible outcome of a quantum measurement persists in a newly created world, a parallel universe. If you turn left when you could have turned right, then two universes are instantly created, one in which you turned left, and a different one in which you turned right. A key feature of both the inflationary multiverse and the quantum multiverse is that the wave function does not collapse when a measurement occurs. Unfortunately both theories are called the multiverse theory as well as the many-worlds theory, so a reader needs to be alert to the use of the term. The Everettian Theory is the theory of the quantum multiverse but not of the inflationary multiverse.

The original theory of inflation was created by Guth and Linde in the early 1980s. The theory of eternal inflation with a multiverse was created by Linde in 1983 by building on some influential work by Gott and Vilenkin. The multiplicity of universes of the inflationary multiverse also is called parallel worlds, many worlds, alternative universes, alternate worlds, and branching universes—many names denoting the same thing. Each universe of the multiverse normally is required to use some of the same physics (there is no agreement on how much) and all the same mathematics. This restriction is not required by a logically possible universe of the sort proposed by the philosopher David Lewis.

New energy is not required to create these inflationary universes, so there are no implications about whether energy is or is not conserved in the multiverse.

Normally, philosophers of science say that what makes a theory scientific is not that it can be falsified (as the philosopher Karl Popper proposed), but rather is that there can be experimental evidence for it or against it. Because it is so difficult to design experiments that would provide evidence for or against the multiverse theories, many physicists complain that their fellow physicists who are developing these theories are doing technical metaphysical speculation, not physics. However, the response from defenders of multiverse research is usually that they can imagine someday, perhaps in future centuries, running crucial experiments, and, besides, the term physics is best defined as being whatever physicists do professionally.

5. Infinite Time

Is time infinitely divisible? Yes, because general relativity theory and quantum theory require time to be a continuum. But this answer will change to “no” if these theories are eventually replaced by a Core Theory that quantizes time. “Although there have been suggestions by some of the best physicists that spacetime may have a discrete structure,” Stephen Hawking said in 1996, “I see no reason to abandon the continuum theories that have been so successful.” Twenty-five years later, the physics community had become much less sure that Hawking was correct.

Did time begin at the Big Bang, or was there a finite or infinite time period before our Big Bang? The answer is unknown. There are many theories that imply an answer to the question, but the major obstacle in choosing among them is that the theories cannot be tested practically.

Stephen Hawking and James Hartle said the difficulty of knowing whether the past and future are infinite in duration turns on our ignorance of whether the universe’s positive energy is exactly canceled out by its negative energy. All the energy of gravitation and spacetime curvature is negative. If the total of the universe’s energy is non-zero and if quantum mechanics is to be trusted, including the law of conservation of energy, then time is infinite in the past and future. Here is the argument for this conclusion. The law of conservation of energy implies energy can change forms but the total cannot change; so, if the total were ever non-zero, it could never become zero in the future, nor could it have been zero in the past, because any change of the total from non-zero to zero, or from zero to non-zero, would violate the law. So, if the total of the universe’s energy is non-zero and if quantum mechanics is to be trusted, then there always have been states whose total energy is non-zero, and there always will be states of non-zero energy. That suggests there can be no first instant or last instant and thus that time is eternal.

There is no solid evidence that the total is non-zero, but a slim majority of the experts favor a non-zero total, although their confidence in this is not strong. Assuming there is a non-zero total, there is no favored theory of the universe’s past, but the favored theory of the future is the big chill theory. The big chill theory implies the universe just keeps getting chillier forever as space expands and gets more dilute, and so there always will be changes and thus new events produced from old events.

Here are more details of the big chill theory. The last star will burn out in 10¹⁵ years. Then all the stars and dust within each galaxy will fall into black holes. Then the material between galaxies will fall into black holes as well, and finally in about 10¹⁰⁰ years all the black holes will evaporate, leaving only a soup of elementary particles that gets less dense and therefore “chillier” as the universe’s expansion continues. The microwave background radiation will red shift more and more into longer wavelength radio waves. Future space will expand toward thermodynamic equilibrium. But because of vacuum energy, the temperature will only approach, but never quite reach, zero on the Kelvin scale. Thus the universe descends into a “big chill,” having the same amount of total energy it always has had.

Here is some final commentary about the end of time:

In classical general relativity, the Big Bang is the beginning of spacetime; in quantum general relativity—whatever that may be, since nobody has a complete formulation of such a theory as yet—we don’t know whether the universe has a beginning or not.

There are two possibilities: one where the universe is eternal, one where it had a beginning. That’s because the Schrödinger equation of quantum mechanics turns out to have two very different kinds of solutions, corresponding to two different kinds of universe.

One possibility is that time is fundamental, and the universe changes as time passes. In that case, the Schrödinger equation is unequivocal: time is infinite. If the universe truly evolves, it always has been evolving and always will evolve. There is no starting and stopping. There may have been a moment that looks like our Big Bang, but it would have only been a temporary phase, and there would be more universe that was there even before the event.

The other possibility is that time is not truly fundamental, but rather emergent. Then, the universe can have a beginning. …And if that’s true, then there’s no problem at all with there being a first moment in time. The whole idea of “time” is just an approximation anyway (Carroll 2016, 197-8).

Back to the main “Time” article for references and citations.

Author Information

Bradley Dowden
Email: dowden@csus.edu
California State University, Sacramento
U. S. A.