What Else Science Requires of Time
Table of Contents
- What are Theories of Physics?
- Relativity Theory
- Quantum Theory
- Big Bang
- Infinite Time
What are Theories of Physics?
The answer to this question is philosophically controversial, and there is a vast literature on the topic. Here are some brief remarks.
A state describes what there is at some time. A law describes how things change over time: it describes how one state evolves into another state. A theory is a set of these laws. The fundamental theories have no exceptions, and their laws can be formulated as equations. The laws are local in the sense that they pay attention to the here and now and not to the distant universe, nor to the past or future. These laws are the same everywhere and at all times. We have no a priori reason to think theories must be like this, but the assumptions have been very fruitful. We are lucky that we live in a world governed by so few laws.
The term theory in this article is used in a technical sense, not in the sense of an explanation as in the remark, “My theory is that the mouse stole the cheese,” nor in the sense of a prediction as in the remark, “My theory is that the mouse will steal the cheese.” The general theory of relativity is an example of our intended sense of the term “theory.” Theories in science are designed for producing explanations, not to encompass all the specific facts. That is why there is no scientific theory that specifies your phone number. Some theories are expressed fairly precisely, and some are expressed less precisely. The fairly precise ones are often called models, and in physics the laws in those models are expressed in the language of mathematics. These are the theories of mathematical physics discussed below.
A fairly precise theory of physics is not simply a jumble of precisely expressed facts. It is a carefully crafted collection of specifications regarding possible states of a physical system at a given time plus constraints on what states the system can have at future times and past times. Statements describing these constraints are called the theory’s laws of evolution or its dynamical laws or just its dynamics. Think of the laws as mathematical equations. In the equations of motion, time is treated as an independent variable; and, in the reference frame or coordinate system that is used, the time coordinate is not a spatial coordinate. Time coordinates are names of point-times.
For simplicity, the preceding explanation of what a theory is has presumed that a theory is to be characterized syntactically as a collection of sentences. This position has been challenged by philosophers in various ways, one of which is that theories should be characterized semantically, not syntactically. See the articles Models and Scientific Representation for more on this.
Scientists retain and accept a theory because they believe its implications fit the data sufficiently. The acceptance implies they take the theory to be true or approximately true, or at least to be helpful in explaining phenomena and predicting novel phenomena. The less strongly they believe the theory is realistic, that is, true, the more likely they are to call the model a toy model. Also, all other things being equal, a theory is treated holistically in the sense that if it implies eight propositions, it would be improper to “cherry pick” the implications and say, “I choose to believe the first four but will be agnostic about the latter four.”
All scientists expect any single scientific theory to be logically consistent. But there is controversy about the goal of further unification. Some philosophers of physics are comfortable with the disunity of science, whereas others hope to discover that all specific theories are logical consequences of a single, more over-arching theory, a so-called theory of everything. Almost all scientists do not want to spend time trying to formalize or axiomatize their theories; that would be pursuing too much unity. And scientists would like their theories to be as concise as possible, in keeping with the hope that the map will be much smaller than the territory mapped.
Scientific theories can have the status of their merely being proposed but not yet confirmed; then they are called “hypotheses,” although they are much more elaborate than, say, the hypothesis that tigers cannot survive on a diet of celery and whisky. The theories discussed below are our most well-confirmed theories—confirmed in the sense of being accepted by the community of scientists on the basis of the theory’s ability to account for empirical evidence. A physical theory is valuable in part because, when it has been confirmed, we are justified in believing its implications even if those specific implications have never been observed. Confirmed astrophysical theory implies all the stars are hot, so we believe the stars we may never observe are also hot. The point of understanding-via-theory is to be able to extrapolate beyond the data set. And the better confirmed the theory, the more we trust in the existence of references for its theoretical terms.
Almost all physicists and most philosophers of physics are realists about the fundamental theories. Bas van Fraassen and Nancy Cartwright caution, however, that a theory’s having explanatory power is not a clear sign that it is true or even approximately true. Cartwright says in How the Laws of Physics Lie, “When it comes to the test, fundamental laws are far worse off than the phenomenological laws they are supposed to explain.”
This article does not take a position on that controversial issue nor on the controversial metaphysical issue of whether realism in mathematical physics also requires realism for the mathematical structures successfully used in the physics such as realism about integers and triangles. For more on that issue, see (Azzouni 2015).
On the point about extrapolating from the data set, Stephen Wolfram suggests that our confirmed theories are only human-specific, whereas almost all other philosophers and physicists believe that if our theories are confirmed, then they are objective in the sense of holding for all possible conscious beings, even non-DNA-based beings far across the galaxy.
The laws are the main, general claims of a theory. The claim that Mars is farther from the sun than the earth is does not qualify as being a law because it is not general enough. The principle that the laws of physical science do not change from one time to another and thus are time translation invariant is not itself time translation invariant so the principle is considered to be a meta-law rather than a law. Whether it is true is another matter that is discussed back in the main Time article.
Laws describe what will happen in given circumstances. Are they more than that? In the literature on the philosophy of science, there are two central treatments of a law. One says laws are not approximations to any underlying laws that the universe obeys. The laws are mere regularities in how objects do behave, either deterministic or probabilistic regularities. The laws supervene on the totality of physical states. The supposed necessities and dispositions of objects to behave a certain way are only human projections. This is the Humean Theory or Regularity Theory. The second treatment says there are these underlying laws that fix how nature necessarily must behave, and the laws written in the science books are designed to specify those underlying laws or to approximate them. This second philosophical theory of laws is called Non-Humean or Necessitarian.
Due to the influence of Isaac Newton, subsequent physicists have assumed that the laws of physics are time-translation invariant. This invariance over time implies the laws of physics we have now are the same laws that held in the past and will hold in the future. That assumption has been challenged by a few persons in the twenty-first century, especially Lee Smolin, but the assumption is still retained by the vast majority of physicists.
Physicists ideally hope their laws have no exceptions. To give a frivolous example of this, if E = mc^2 had been supposed to be a law of nature, but it were discovered that under condition C we have E = mc^3, then physicists would respond that the old law was not really a law of nature after all and that the new, exceptionless law of nature is: E = mc^2 if not C, else E = mc^3. But one can imagine that this common assumption might need to be revised some day.
Scientists and philosophers of science require the achievement of more goals than just the acquisition of theories. The philosopher of physics David Wallace says that what else is required is “some account of what there is in the world according to the theory, how [the world] behaves, what causes what, and what the explanations are.” (Wallace 2021, p. 18). One might add that philosophers desire to know what is ontologically basic.
All physicists and almost all philosophers believe there can be both explanatorily brute events and brute laws, so Leibniz’s Principle of Sufficient Reason is not strictly correct. They presume the notion of being self-explainable is not likely to be coherent. All scientists recognize that science is inherently provisional, so scientists need to be on the lookout for potentially required revisions of their theories.
Does scientific progress have “momentum” in the sense that we know there will always be new scientific revolutions? No. For all we know, it is possible that significant progress in revising theories to obtain better theories will stop some day.
In the 1890s, the philosopher C.S. Peirce first clearly asked the questions: Why are our scientific laws the laws? Do the laws evolve or does only our knowledge of them evolve?
Some physical theories are fundamental, and some are not. Fundamental theories are foundational in the sense that their laws cannot be derived from the laws of other physical theories even in principle. For example, the second law of thermodynamics is not fundamental, nor are the laws of plate tectonics in geophysics. The following three theories are fundamental: (i) the general theory of relativity, (ii) quantum mechanics, including the standard model of particle physics, and (iii) the big bang theory (the standard cosmological model). Their amalgamation is what Nobel Prize winner Frank Wilczek called the Core Theory, the theory of everything physical. Scientists believe it holds, not just in our solar system, but all across the universe. Wilczek claimed:
[T]he Core has such a proven record of success over an enormous range of applications that I can’t imagine people will ever want to junk it. I’ll go further: I think the Core provides a complete foundation for biology, chemistry, and stellar astrophysics that will never require modification. (Well, “never” is a long time. Let’s say for a few billion years.)
This claim that the Core gives us all the fundamental laws we will ever need in order to explain the phenomena of our ordinary lives implies that it is all that is needed to explain the cause of your future great grandchild’s death and why that particular leaf is now lying in the street. The Core Theory never uses the terms time’s arrow or now or even noon. The concept of time in the Core Theory is primitive or “brute.” It is used to define and explain other temporal concepts such as simultaneous and earlier.
The Core theory does not contain the concepts of potato, planet, or person; these are emergent concepts that are needed in good explanations at the higher scales. Potatoes, planets and persons have been considered by a number of twentieth-century philosophers to be just a mereological sum of particles, but the majority viewpoint in the twenty-first century is that potatoes, planets and persons are, instead, stable patterns over time of the relevant quantum fields, and there is serious speculation that the ontologically fundamental entity in the universe is not matter and not a field, but rather is information.
The Core has been tested in many extreme circumstances and with great sensitivity, so physicists have high confidence in it. There is no doubt that for the purposes of doing physics the Core theory provides a demonstrably superior representation of reality to that provided by its alternatives, including the manifest image. But all physicists know the Core is not true, and they know that all its fundamental theories need some revision. Physicists are motivated to find where it fails because such a discovery can lead to great praise from the rest of the physics community. Wilczek says the Core will never need modification for understanding the special sciences of biology, chemistry, stellar astrophysics, computer science and engineering, but he would agree that the Core will need revision for more esoteric issues such as neutrinos changing their identity over time, the imbalance between matter and antimatter, the incompatibility of general relativity and quantum mechanics, and why the energy of empty space is as small as it is.
The Core theory presupposes that time exists, that it emerges from spacetime, and that spacetime is fundamental and not emergent. Within the Core theory, relativity theory allows space to curve, ripple, and expand; and the curving, rippling, and expanding can change over time. Quantum theory does not allow any of this, although a future revision of the Core theory via a theory of quantum gravity will surely allow all these features of relativity.
The Core theory rests upon another assumption, the well-accepted Laplacian Paradigm, which implies that physicists should search for laws describing how a state of a system at one time turns into a different state at another time. David Deutsch, Chiara Marletto, and their collaborators (Deutsch 2013) have challenged that paradigm and proposed their alternative, Constructor Theory, which, among other things, requires time to be an emergent characteristic of nature from a non-temporal substrate.
Relativity Theory
Time is fundamental in relativity theory, and the theory has a great deal to say about the nature of time. When the term relativity theory is used, it usually means the general theory of relativity of 1915, but sometimes it means the special theory of relativity of 1905. Both the special and general theories have been well-tested, and they continue to be tested. They are almost universally accepted, and today’s physicists understand them better than Einstein did.
The relationship between the special and general theories is slightly complicated. Both theories are about the motion of objects, and both approach agreement with Newton’s theory as the speeds of objects decrease, as gravitational forces weaken, and as the energies of those objects decrease. Special relativity implies the laws of physics are the same for all inertial observers; that is, observers who are moving at a constant velocity relative to each other will find that all phenomena obey the same laws. General relativity implies the laws are the same even for observers accelerating relative to each other, such as observers changing their velocity due to the influence of gravitation. General relativity holds in all reference frames, but special relativity holds only for inertial reference frames, namely non-accelerating frames.
Special relativity allows objects to have mass but not gravity. It always requires a flat geometry—that is, a Euclidean geometry for space and a Minkowskian geometry for spacetime. General relativity does not have those restrictions. General relativity is a specific theory of gravity, assuming the theory is supplemented by a specification of the distribution of matter-energy at some time. Special relativity is not a specific theory but rather a general framework for theories, and it is not a specific version of general relativity. Nor is general relativity a generalization of special relativity. The main difference between the two is that, in general relativity, spacetime does not simply exist passively as a background arena for events. Instead, spacetime is dynamical in the sense that changes in the distribution of matter and energy are changes in the curvature of spacetime (though not necessarily vice versa).
The theory of relativity is generally considered to be a theory based on causality:
One can take general relativity, and if you ask what in that sophisticated mathematics is it really asserting about the nature of space and time, what it is asserting about space and time is that the most fundamental relationships are relationships of causality. This is the modern way of understanding Einstein’s theory of general relativity….If you write down a list of all the causal relations between all the events in the universe, you describe the geometry of spacetime almost completely. There is still a little bit of information that you have to put in, which is counting, which is how many events take place…. Causality is the fundamental aspect of time. (Lee Smolin)
In the Core theories, the word time is a theoretical term, and the dimension of time is treated somewhat like a single dimension of space. Space is a set of all possible point-locations. Time is a set of all possible point-times. Spacetime is a set of all possible point-events. Spacetime is presumed to be four-dimensional and also smooth, with time being a distinguished, one-dimensional sub-space of spacetime. Because the time dimension is so different from a space dimension, physicists very often speak of (3+1)-dimensional spacetime rather than 4-dimensional spacetime. Technically, any spacetime, no matter how many dimensions it has, is required to be a differentiable manifold with a metric tensor field defined on it that tells what geometry it has at each point. Both relativity theory and quantum theory assume that three-dimensional space is isotropic (rotation symmetric) and homogeneous (translation symmetric) and that there is translation symmetry in time. Although physical laws determine the totality of physically allowed situations and processes, specific physical systems within space-time need not show these symmetries; only the physical laws need to.
(For the experts: General relativistic spacetimes are manifolds built from charts involving open subsets of R4. General relativity does not consider a time to be a set of simultaneous events that do or could occur at that time; that is a Newtonian conception. Instead, general relativity defines time in terms of the light cone structures at each place. The theory requires spacetime to have at least four dimensions, not exactly four dimensions.)
Relativity theory implies time is smooth, continuous, and free of gaps, like a mathematical line. This feature was first emphasized by the philosopher John Locke in the late seventeenth century, but it is meant here in a more detailed, technical sense that was developed toward the end of the 19th century for calculus.
According to both relativity theory and quantum mechanics, time is not discrete or quantized or atomistic. Instead, the structure of point-times is a linear continuum with the same structure as the mathematical line or as the real numbers in their natural order. For any point of time, there is no next time because the times are packed together so tightly. Time’s being a continuum implies that there is a non-denumerably infinite number of point-times between any two non-simultaneous point-times. Some philosophers of science have objected that this number is too large, and we should use Aristotle’s notion of potential infinity and not the late 19th century notion of a completed infinity. Nevertheless, accepting the notion of an actual nondenumerable infinity is the key idea used to solve Zeno’s Paradoxes and to remove inconsistencies in calculus.
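The claim that no point of time has a next time follows from density: between any two distinct times there is always a third, for example their midpoint, so the halving never terminates. A minimal sketch using Python's exact Fraction arithmetic (the time labels t1 and t2 are arbitrary; note this illustrates only density, which the rationals also have, not the stronger gap-free continuity of the real line):

```python
from fractions import Fraction

def between(t1, t2):
    """Return a time strictly between two distinct times: their midpoint."""
    return (t1 + t2) / 2

t1, t2 = Fraction(0), Fraction(1)
# Repeatedly halving the interval never exhausts the times between
# t1 and t2, so t1 can have no "next" instant.
for _ in range(5):
    t2 = between(t1, t2)
    assert t1 < t2
print(t2)  # 1/32
```

Because the arithmetic is exact, the halving could in principle be repeated without limit, which is the point of the density claim.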
The fundamental laws of physics assume the universe is a collection of point events that form a four-dimensional continuum, and the laws tell us what happens after something else happens or because it happens. These laws describe change but do not themselves change. At least that is what laws are in the first quarter of the twenty-first century, but one cannot know a priori that this is always how laws must be.
Although relativity theory treats time as having the same exotic structure as the mathematical line in that it consists of a continuum of temporal points, no experiment is so fine-grained that it could show times to be that close together, although there are possible experiments that could show the assumption to be false if it were false and if the graininess of time were to be large enough.
In the twenty-first century, one of the most important goals in physics is to discover/invent a theory of quantum gravity that unites the best parts of quantum theory and the theory of relativity. Einstein claimed in 1916 that his general theory of relativity needed to be replaced by a theory of quantum gravity. A great many physicists of the 21st century believe a successful theory of quantum gravity will require quantizing time so that there are atoms of time.
If there is such a thing as an atom of time and thus such a thing as a next instant and a previous instant, then time cannot be like the real number line, because no real number has a next number. It is speculated that, if time were discrete, a good estimate for the duration of an atom of time is 10^-44 seconds, the so-called Planck time. No physicist can yet suggest a practical experiment that is sensitive to this tiny scale of phenomena. For more discussion, see (Tegmark 2017).
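The Planck time mentioned above is not an arbitrary number; it is built from the fundamental constants as t_P = sqrt(ħG/c^5). A quick check in Python (using rounded CODATA values for the constants):

```python
import math

hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8        # speed of light in a vacuum, m/s

# Planck time: the only duration constructible from hbar, G, and c alone.
planck_time = math.sqrt(hbar * G / c**5)
print(f"{planck_time:.2e} s")  # ~5.39e-44 s
```

This dimensional construction is why 10^-44 seconds is the favored candidate scale for any atom of time, even though no experiment comes close to probing it.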
The special and general theories of relativity imply that to place a reference frame upon spacetime is to make a choice about which part of spacetime is the space part and which is the time part. No choice is objectively correct, although some choices are very much more convenient for some purposes. This relativity of time, namely the dependency of time upon a choice of reference frame, is one of the most significant implications of both the special and general theories of relativity.
Since the discovery of relativity theory, scientists have come to believe that any objective description of the world can be made only with statements that are invariant under changes in the reference frame. Saying, “It is 8:00” does not have a truth value unless a specific reference frame is implied, such as one fixed to Earth with time being the time that is measured by our civilization’s standard clock. This relativity of time to reference frames is behind the remark that Einstein’s theories of relativity imply time itself is not objectively real but spacetime is real.
In regard to the idea of relativity to frame, Newton would say that if you are seated in a vehicle moving along a road, then your speed relative to the vehicle is zero, but your speed relative to the road is not zero. Einstein would agree. However, he would surprise Newton by saying the length of your vehicle is slightly different in the two reference frames, the one in which the vehicle is stationary and the one in which the road is stationary. Equally surprising to Newton, the duration of the event of your drinking a cup of coffee while in the vehicle is slightly different in those two reference frames. These relativistic effects are called space contraction and time dilation, respectively. So, both length and duration are frame dependent and, for that reason, say physicists, they are not objectively real characteristics of objects. Speeds also are relative to reference frame, with one exception. The speed of light in a vacuum has the same value c in all frames. And space contraction and time dilation change together so that the speed of light in a vacuum is always the same number.
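The size of these two effects is governed by the Lorentz factor, gamma = 1/sqrt(1 - v^2/c^2): durations dilate and lengths contract by this factor. A minimal sketch (the vehicle speed chosen is an arbitrary illustration):

```python
import math

c = 2.997_924_58e8  # speed of light in a vacuum, m/s

def lorentz_factor(v):
    """gamma = 1/sqrt(1 - v^2/c^2): a moving clock runs slow by this
    factor, and a moving length contracts by the same factor."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v_car = 30.0  # roughly highway speed, m/s (illustrative)
gamma = lorentz_factor(v_car)
# At everyday speeds gamma differs from 1 only around the 15th decimal
# place, which is why Newton never noticed dilation or contraction.
print(gamma - 1.0)  # ~5e-15
```

At 60% of light speed, by contrast, gamma is 1.25, so the coffee-drinking event lasts 25% longer in the road frame than in the vehicle frame.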
Relativity theory allows great latitude in selecting the classes of simultaneous events. Because there is no single objectively-correct frame to use for specifying which events are present and which are past—but only more or less convenient ones—one philosophical implication of the relativity of time is that it seems to be more difficult to defend McTaggart’s A-theory, which implies the temporal properties of events such as “is happening now” or “happened in the past” are intrinsic to the events and are objective, frame-free properties of those events. In brief, the relativity to frame makes it difficult to defend absolute time.
Relativity theory challenges other ingredients of the manifest image of time. For two events A and B occurring at the same place but at different times, relativity theory implies their temporal order is absolute in the sense of being independent of the frame of reference, and this agrees with common sense and thus the manifest image of time, but if they are distant from each other and occur close enough in time to be within each other’s absolute elsewhere, then relativity theory implies event A can occur before event B in one reference frame, but after B in another frame, and simultaneously with B in yet another frame. No person before Einstein ever imagined time has such a strange feature.
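This reversal of temporal order can be computed directly from the Lorentz transformation t' = gamma(t - vx/c^2). A sketch in units where c = 1 (the event coordinates are arbitrary illustrative choices, picked so that the two events lie in each other's absolute elsewhere):

```python
import math

c = 1.0  # work in units where the speed of light is 1

def t_prime(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (t - v * x / c**2)

# Two spacelike-separated events: A at (t=0, x=0), B at (t=1, x=3).
# In the original frame, A happens before B.
for v in (-0.5, 0.0, 0.5):
    dt = t_prime(1, 3, v) - t_prime(0, 0, v)
    print(v, dt)
# At v = -0.5, B is still after A (dt > 0); at v = +0.5, B is before A
# (dt < 0); at v = 1/3 the two events are simultaneous.
```

For events at the same place (x equal), the sign of dt is the same in every frame, which is why the theory preserves the absolute order of co-located events.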
The special and general theories of relativity provide accurate descriptions of the world when their assumptions are satisfied. Both have been carefully tested. The special theory does not mention gravity, and it assumes there is no curvature to spacetime, but the general theory requires curvature in the presence of mass and energy, and it requires the curvature to change as their distribution changes. The presence of gravity in the general theory has enabled the theory to be used to explain phenomena that cannot be explained with special relativity and Newton’s theory of gravity and Maxwell’s theory of electromagnetism.
Because of the relationship between spacetime and gravity, the equations of general relativity are much more complicated than are those of special relativity. But general relativity assumes the equations of special relativity hold at least in all infinitesimal regions of spacetime.
To give one example of the complexity just mentioned, the special theory clearly implies there is no time travel to events in one’s own past. Experts do not agree on whether the general theory has this same implication because the equations involving the phenomena are too complex to solve directly. Approximate solutions have to be used, yet still there is disagreement about time travel.
Regarding curvature of time and of space, the presence of mass at a point implies intrinsic spacetime curvature at that point, but not all spacetime curvature implies the presence of mass. Empty spacetime can still have curvature, according to relativity theory. This point has been interpreted by many philosophers as a good reason to reject Leibniz’s classical relationism. The point was first mentioned by Arthur Eddington.
Two accurate, synchronized clocks do not stay synchronized if they undergo different gravitational forces. This is a second kind of time dilation, in addition to dilation due to speed. So, a correct clock’s time depends on the clock’s history of both speed and gravitational influence. Gravitational time dilation would be especially apparent if a clock were to approach a black hole. The rate of ticking of a clock approaching the black hole slows radically upon approach to the horizon of the hole as judged by the rate of a clock that remains safely back on Earth. This slowing is sometimes misleadingly described as time slowing down. After a clock falls through the event horizon, it can no longer report its values to Earth, and when it reaches the center of the hole not only does it stop ticking, but it also reaches the end of time, the end of its proper time.
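For a clock hovering at radial coordinate r outside a non-rotating (Schwarzschild) black hole, general relativity gives its proper-time rate, relative to a distant clock, as sqrt(1 - r_s/r), where r_s = 2GM/c^2 is the Schwarzschild radius. A sketch for a black hole of one solar mass (the hovering radii chosen are arbitrary):

```python
import math

G = 6.674_30e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8  # speed of light, m/s
M = 1.989e30        # one solar mass, kg

r_s = 2 * G * M / c**2  # Schwarzschild radius, ~2.95 km for this mass

def tick_rate(r):
    """Proper-time rate of a clock hovering at radius r, relative to a
    clock at rest far from the hole (1.0 means no slowing)."""
    return math.sqrt(1.0 - r_s / r)

for r in (10 * r_s, 2 * r_s, 1.01 * r_s):
    print(f"r = {r/r_s:.2f} r_s: rate = {tick_rate(r):.3f}")
# The rate approaches 0 as r approaches the horizon at r = r_s, which
# is the radical slowing described in the paragraph above.
```

The slowing is relational: the hovering clock ticks normally by its own lights; only its rate as judged from Earth goes to zero, which is why "time slowing down" is a misleading description.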
The general theory of relativity has additional implications for time. In 1948-9, the logician Kurt Gödel discovered radical solutions to Einstein’s equations, solutions in which there are closed time-like curves in graphical representations of spacetime. The unusual curvature is due to the rotation of all the matter in Gödel’s possible universe. As one progresses forward in time along one of these curves, one arrives back at one’s starting point. Fortunately, there is no empirical evidence that our own universe has this rotation. Here is Einstein’s reaction to Gödel’s work on time travel:
Kurt Gödel’s essay constitutes, in my opinion, an important contribution to the general theory of relativity, especially to the analysis of the concept of time. The problem involved here disturbed me already at the time of the building of the general theory of relativity, without my having succeeded in clarifying it.
In mathematical physics, the ordering of instants by the happens-before relation of temporal precedence is complete in the sense that there are no gaps in the sequence of instants. Any interval of time is smooth, so the points of time form a linear continuum. Unlike physical objects, physical time is believed to be infinitely divisible—that is, divisible in the sense of the actually infinite, not merely in Aristotle’s sense of potentially infinite. Regarding the density of instants, the ordered instants are so densely packed that between any two there is a third so that no instant has a next instant. Regarding continuity, time’s being a linear continuum implies that there is a nondenumerable infinity of instants between any two non-simultaneous instants. The rational number line does not have so many points between any pair of different points; it is not continuous the way the real number line is, but rather contains many gaps. The real numbers such as pi, which is not a rational number, fill the gaps.
The actual temporal structure of events can be embedded in the real numbers, at least locally, but how about the converse? That is, to what extent is it known that the real numbers can be adequately embedded into the structure of the instants, at least locally? This question is asking for the justification of saying time is not discrete or atomistic. The problem here is that the shortest duration ever measured is about 250 zeptoseconds. A zeptosecond is 10^-21 second. For times shorter than about 10^-43 second, which is the physicists’ favored candidate for the duration of an atom of time, science has no experimental grounds for the claim that between any two events there is a third. Instead, the justification of saying the reals can be embedded into an interval of instants is that (i) the assumption of continuity is very useful because it allows the mathematical methods of calculus to be used in the physics of time; (ii) there are no known inconsistencies due to making this assumption; and (iii) there are no better theories available. The qualification earlier in this paragraph about “at least locally” is there in case there is time travel to the past so that the total duration of the time loop is finite. A circle is continuous, and one-dimensional, but it is finite, and it is like the real numbers only locally.
One can imagine two empirical tests that would reveal time’s discreteness if it were discrete—(1) being unable to measure a duration shorter than some experimental minimum despite repeated tries, yet expecting that a smaller duration should be detectable with current equipment if there really is a smaller duration, and (2) detecting a small breakdown of Lorentz invariance. But if any experimental result that purportedly shows discreteness is going to resist being treated as a mere anomaly, perhaps due to error in the measurement apparatus, then it should be backed up with a confirmed theory that implies the value for the duration of the atom of time. This situation is an instance of the kernel of truth in the physics joke that no observation is to be trusted until it is backed up by theory.
It is commonly remarked that, according to relativity theory, nothing can go faster than light. The remark needs some clarification, else it is incorrect. Here are three ways to go faster than light. (1) First, the medium needs to be specified. The speed of light in certain crystals can be much less than c, say 40 miles per hour, and a horse outside the crystal could outrun the light beam. (2) Second, the limit c applies only locally to objects within space relative to other nearby objects within space, and it requires that no object pass another object locally at faster than c. However, globally the general theory of relativity places no restrictions on how fast space itself can expand. So, two distant galaxies can drift apart from each other at faster than the speed of light simply because the intervening space expands. (3) Imagine standing still outside on the flat ground and aiming your laser pointer forward toward an extremely distant galaxy. Now aim the pointer down at your feet. During that process, the point of intersection of the pointer and the tangent plane of the ground will move toward your feet faster than the speed c. This does not violate relativity theory because the point of intersection is merely a geometrical object, not a physical object, so its speed is not restricted by relativity theory.
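Point (2) can be made quantitative with Hubble's law, v = H0 * d: recession speed grows in proportion to distance, so any galaxy farther away than the Hubble distance c/H0 recedes faster than light. A rough sketch (H0 = 70 km/s per megaparsec is an approximate present-day value, and the 6000 Mpc distance is an arbitrary illustration):

```python
H0 = 70.0             # Hubble constant, km/s per megaparsec (approximate)
c_km_s = 299_792.458  # speed of light, km/s

def recession_speed(d_mpc):
    """Hubble's law: recession speed in km/s of a galaxy d_mpc
    megaparsecs away, due solely to the expansion of space."""
    return H0 * d_mpc

# Beyond this distance, recession is superluminal.
hubble_distance = c_km_s / H0  # ~4283 Mpc, roughly 14 billion light-years
print(round(hubble_distance))  # 4283

# A galaxy 6000 Mpc away recedes faster than light, yet nothing moves
# through space faster than c locally:
print(recession_speed(6000) > c_km_s)  # True
```

No local speed limit is violated because the galaxies are carried apart by the stretching of the intervening space, not by motion through space.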
Quantum theory is a special relativistic theory of quantum mechanics including the standard model of particle physics. Quantum mechanics and quantum theory have their names because they imply that various phenomena, such as energy and charge, are quantized in the sense that they do not change continuously but only in multiples of minimum discrete steps, so-called quantum steps. When in popular discourse a quantum leap is described as a large, significant leap, this is a faulty description. A quantum leap is actually a smallest possible leap. Think of a quantum leap as an abrupt change. Relativity theory does not quantize energy and quantum theory does, so this is one of the several ways that the two theories disagree with each other. But not everything is quantized in quantum mechanics; that is, some observables have a continuum of outcomes rather than discrete possible outcomes. Time is a continuum in both quantum mechanics and quantum theory as it is in the theory of relativity and Newton’s mechanics.
Quantum theory is our most successful theory in all of science. The range and variety of phenomena it can successfully explain is remarkable. For two examples, it explains why you can see through a glass window but not a potato and why a glass window is hard, unlike light which is extremely soft. Before quantum theory, these were simply brute facts of nature.
For philosophers, the most important impact of quantum theory on our understanding of the universe is that either (1) the universe is non-local (because there is entanglement within any composite system, or what Einstein called “spooky action-at-a-distance”), or else (2) measurements do not have unique outcomes (because every possible outcome occurs in one of the many alternative worlds). Both disjuncts imply non-separability. That is, they imply that if there is entanglement within a composite system in the actual world, then even if you knew everything there is to know about that composite system, you would still be ignorant about some of the behavior of its individual parts, so its parts cannot truly be “separated” from the whole. This is science’s biggest impact on our manifest image.
Surprisingly, physicists still do not agree on the exact formulation of quantum theory. Its many competing interpretations are really competing versions of the theory. That is why there is no agreement on what the axioms of quantum theory are. Philosophers of physics disagree about whether the competing interpretations are (1) empirically equivalent and underdetermined by the evidence, so that one must be chosen on the basis of features such as mathematical elegance and simplicity, or (2) not empirically equivalent, in which case each interpretation merely has the standing of not yet having been refuted by experimental evidence.
“Anyone who is not shocked by quantum theory has not understood it,” said Niels Bohr. Quantum theory does not imply what events occur, but only the probability that they occur, so it is not a deterministic theory of our universe. That is, the equations of quantum mechanics do not tell us precisely where a particle is at a later time but only the probabilities of finding it in various places if a measurement were to be made there. Think of your own situation like this. At any moment you are faced with a probability distribution of what will happen over possible next moments. In the next moment, you could step left or step right with equal probability, the sun could keep shining (with high probability) or stop shining (with low probability), and so forth. You are faced with a probability distribution at any time regarding what might happen. That probability distribution evolves deterministically according to quantum theory, but that does not remove the probability.
Because quantum theory describes objects by using probabilities and waves, quantum objects are unlike objects that are described by Newtonian and relativistic physics. These latter, classical theories imply objects have definite positions and velocities simultaneously, but that implication appears to be inconsistent with Heisenberg’s uncertainty principle in quantum theory.
Also, in quantum field theory, two particles with the same quantum values are absolutely identical except for location, just as two instances of the number seven are perfectly the same, whereas our best engineered instances of two bullets are not quite the same.
The famous two-slit experiment is usually interpreted as showing that a particle can be in two places at once. Unfortunately, philosophers of physics do not agree on its implications. They do not agree on what quantum theory implies about what an object is, what it means for it to have a location, nor on how an object maintains its identity over time before and after passing through a slit. Nor do they agree on what happens during a measurement, especially whether the quantum wave function does or does not collapse to a specific value, and how collapse is related to the distinct process of decoherence. Assuming the wave function does collapse during measurement, scientists disagree on whether the collapse is instantaneous or just brief. George Ellis, co-author with Stephen Hawking of the definitive book The Large-Scale Structure of Space-Time, identifies a key difficulty with our understanding of quantum measurement in those theories that say the wave function collapses during measurement: “Usually, it is assumed that the measurement apparatus does not obey the rules of quantum theory, but this contradicts the presupposition that all matter is at its foundation quantum mechanical in nature.”
Regarding probability, physicists disagree with each other as to whether the probability is objective or subjective. Advocates of the relational interpretation of quantum theory insist that the quantum state of a system depends on the observer. Advocates of the many-worlds or multiverse interpretation of quantum theory say Schrödinger’s cat is alive in half the universes that branch off from the beginning of the cat experiment, and there are no fundamental probabilities in Nature because Nature is deterministic, or at least it is when all universes are brought into the picture. Regarding our one actual universe, the 50% probability is a product of our lack of knowledge of what is going on. We are not gods, so we cannot have an objective god’s-eye view of the multiverse and its wave function, and we cannot know how our particular universe will evolve forward in time; an omniscient being, however, is not encumbered by having to use probabilities.
The state of a system in quantum theory is very different from the state of a system in Newtonian theory, which treats the universe as a deterministic, clockwork universe containing well-defined material entities within space and time.
In quantum mechanics, the state of a system is a superposition of all the possible measurement outcomes, known as the “wave function” of the system. The wave function is a combination of every result you could get by doing an observation, with different weights for each possibility. The state of an electron in an atom, for example, will be a superposition of all the allowed orbits with fixed energies. The superposition representing a given quantum state might be heavily concentrated on one specific outcome–the electron might be almost perfectly localized in an orbit with some particular energy–but in principle every possible measurement outcome can be part of the quantum state…. When we say that a quantum state is a superposition, we don’t mean “it could be any one of various possibilities, we’re not sure which.” We mean “it is a weighted combination of all those possibilities at the same time.” If you could somehow play “quantum poker,” your opponent would really have some combination of each of the possible hands all at once, and their hand would become one specific alternative only once they turned over the cards for you to look at them. (Carroll, 2016, p. 163)
The wave function is a vector describing the state of a system. A system’s wave function evolves smoothly and time-reversibly and deterministically, at least when a measurement is not made on the system. But philosophers disagree about whether a state should be interpreted realistically or instrumentally. Despite quantum theory being the most successful theory in the history of physics, philosophers of physics do not agree on whether quantum theory is a theory about reality or instead merely a tool for making measurements. Nor do they agree on whether the quantum wave function is a representation of reality or instead a representation of our knowledge of reality. Physicists do not agree on whether we currently possess the fundamental laws of quantum theory, as Everett believed, or instead only an incomplete version of the laws, as Einstein believed.
There are many competing interpretations of quantum theory. David Albert says the many-worlds theory is the least likely to be true, and Sean Carroll says it is the most likely to be true. The earliest interpretation was developed by Niels Bohr in the 1920s. It is called the Copenhagen Interpretation. Bohr’s complementarity idea for interpreting quantum mechanics is that all particles have both wave and particle aspects. A full description of the particle requires specifying both its wave character and its particle character. The implication is that there is no experiment that can provide a precise result simultaneously for the value of an electron’s velocity and position. The implication has no practical effect upon a measurement of your kitchen countertop.
Consider a proton. Examined as a particle, a proton has a definite width. Examined as a wave, the proton is a relatively stable “bump” in a proton field. The bump has no definite width. So, in an imperfect sense, a proton both has and does not have a definite width. The amplitude of the bump is quantized; the amplitude cannot change continuously but only in quantized steps.
The Copenhagen Interpretation says that, when someone makes a measurement on a system, this process collapses the wave function that describes the system, and the probability of collapsing to a particular value of the measurement is the square of the amplitude of the wave function. However, if no measurement is being made, then the system does not collapse, but rather is described completely by the wave function that obeys the Schrödinger equation. Bohr gives this situation an antirealist interpretation: there is no way the world is when it is not being observed. “It is wrong to think that the task of physics is to find out how nature is,” he said. Physics is merely an instrument for telling us what we can say about nature. Many physicists object to this anti-realist interpretation of quantum theory. The Everett Interpretation, for example, is realist, universal, and allows no collapse of the wave function.
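The recipe just described, in which the probability of each measurement outcome is the squared magnitude of the corresponding amplitude in the wave function, can be sketched in a few lines of Python; the two-component state below is an invented example, not anything from the text:

```python
import math

# An invented two-outcome superposition, written as complex amplitudes.
# The Born rule: the probability of each outcome is the squared magnitude
# of its amplitude, and for a normalized state the probabilities sum to 1.
amplitudes = [complex(1 / math.sqrt(3), 0), complex(0, math.sqrt(2 / 3))]

probabilities = [abs(a) ** 2 for a in amplitudes]
print(probabilities)        # roughly [0.333, 0.667]
print(sum(probabilities))   # approximately 1.0, as normalization requires
```

Note that the second amplitude is purely imaginary; the phase of an amplitude drops out of the individual probabilities, though it matters for interference between outcomes.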
The philosopher David Chalmers has promoted a radical interpretation of quantum theory, one committed to property dualism. This interpretation implies consciousness is a fundamental feature of nature. He speculates that there are intrinsic properties of consciousness that can interact with ordinary physical properties, and he suggests that consciousness collapses the wave function in a manner similar to how measurement does in the Copenhagen Interpretation.
The issue of determinism looms large in quantum theory. The Copenhagen Interpretation implies that when a single nucleus of radioactive uranium decays at a specific time, there is no determining cause for the decay; the best our quantum theory can say is that there was a certain probability of the decay occurring at that time and that there were certain probabilities for other possible experimental outcomes. According to the Copenhagen Interpretation, the statistical veil of quantum theory cannot be penetrated. So, quantum mechanics is indeterministic. Identical measurement situations need not lead to the same result. And assuming that reasons are causes, the Copenhagen Interpretation also is inconsistent with Leibniz’s Principle of Sufficient Reason.
However, there are deterministic interpretations of quantum theory that imply a very different kind of statistical veil. The many-worlds interpretation or Everettian interpretation of quantum theory is deterministic over the totality of worlds, though not within a single world such as the actual world. The many-worlds theory implies we cannot know which world is determined to happen next—for example, a world in which Schrödinger’s cat is alive or a world in which the cat is dead. What we can know in advance is only the probabilities of our finding ourselves to be in various worlds.
Schrödinger’s wave function describes how states of a quantum system evolve over time. This quantum wave function at one time determines the wave function at all other times. So, if Laplace’s Demon knew the wave function, it could compute the function at all later and all earlier times. However, rather paradoxically, Heisenberg’s uncertainty principle of quantum theory implies that, if there exists more precise information about the time when an event occurs, then there must exist only less precise information about the energy involved. Because of this lack of precision in principle, it follows that probability is ineliminable. But philosophical debate continues about whether the existence of this probability is an epistemic constraint or a sign of physical indeterminism.
The uncertainty principle implies that the uncertainties in the simultaneous measurements of time and energy emission or energy absorption must obey the inequality ΔE ∙ Δt ≥ h/4π, where ΔE is the (standard deviation of the) uncertainty in the energy, Δt is the uncertainty in the time, and h is Planck’s constant. Depending on the experimental setup, Δt could be the duration for making the measurement of energy, or it could be the duration that a measured particle state exists. These uncertainties are produced over a collection of measurements because any single measurement has, in principle, a precise value and is not “fuzzy.” Repeated measurements produce a spread in values that reveal the wavelike characteristics of the phenomenon being measured. Normally the spread is defined to be the variance or standard deviation of the measurements. Philosophers of physics do not agree on whether Δt is a lack of precision in nature herself or a lack of knowledge of precise results in measurements or some inevitable disturbance during measuring. Heisenberg himself thought of his uncertainty principle as being about how there must be disturbances in measurements. Regardless, Δt is a measure of the spread of values for multiple measurements of duration, and one can think of the uncertainty principle as a limitation on the statistics of measurements.
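As a rough numerical illustration of the inequality ΔE ∙ Δt ≥ h/4π as stated above (using the SI value of h; the femtosecond duration is an arbitrary example, not from the text):

```python
import math

h = 6.62607015e-34  # Planck's constant in joule-seconds (exact SI value)

def min_energy_spread(delta_t):
    """Smallest energy spread ΔE (joules) compatible with ΔE · Δt ≥ h/4π."""
    return h / (4 * math.pi * delta_t)

# For a particle state lasting one femtosecond (an arbitrary example):
print(min_energy_spread(1e-15))  # ≈ 5.3e-20 joules
```

The reciprocal relationship is the important point: halving Δt doubles the minimum spread ΔE, which is why very short-lived states have very poorly defined energies.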
One significant implication of these remarks about the uncertainty principle for time and energy is that there can be violations of the classical law of conservation of energy. The classical law can be violated by ΔE for a time Δt. Quantum theory does contain a law of conservation of energy, but that law frequently is described carelessly as requiring that, in an isolated region of space, the total amount of energy cannot change no matter what happens within the region; the energy only can change its form. This explanation is not strictly correct. That version of the law is frequently violated for very short time intervals and is less likely to be violated as the time interval increases. Over the long term, though, energy is always conserved.
Consider what happens during one of these violations. In an isolated system, quantum theory allows so-called virtual particles to be created out of the quantum vacuum. These particles are real, but they borrow energy from the vacuum and pay it back very quickly. What happens is that, when a pair of energetic virtual particles—say, an electron and anti-electron—are created from the vacuum, the two exist for only a very short time before being annihilated or reabsorbed, thereby giving back their borrowed energy. The greater the energy of the virtual pair, the shorter the time interval that the two exist before being reabsorbed, as described by Heisenberg’s uncertainty principle. So, strictly speaking, quantum theory does allow something to be created from nothing. Some theologians have been outraged by this conclusion, suggesting that only God has the power to create something from nothing.
Virtual particles cause space-time to warp around them, and then to un-warp as the particles disappear very quickly. This coming in and out of existence creates all sorts of ultra-microscopic fluctuations known collectively as the quantum foam or space-time foam. The existence of this foam is why quantum mechanics implies there is turbulence at the smallest scales.
The effect of all these particles wiggling into and out of being is a thrumming “vacuum energy” that fills the cosmos and pushes outward on space itself. This activity is the most likely explanation for dark energy—the reason the universe, rather than staying static or even expanding at a steady rate, is accelerating outward faster and faster every moment (Moskowitz 2021, p. 26).
Regarding the quantum foam, John Wheeler suggested that the ultramicroscopic structure of spacetime for periods on the order of the Planck time (about 5.4 × 10⁻⁴⁴ seconds) in regions about the size of the Planck length (about 1.6 × 10⁻³⁵ meters) probably is a quantum foam of rapidly changing curvature of spacetime, with black holes and virtual particle-pairs and perhaps wormholes rapidly forming and dissolving.
The Planck time is the time it takes light to travel a Planck length. The terms Planck length and Planck time were inventions of Max Planck in the early twentieth century during his quest to find basic units of length and time that could be expressed in terms only of universal constants. He defined the Planck unit of time algebraically as √(ħG/c⁵).
√ is the square root symbol. ħ is Planck’s constant in quantum theory divided by 2π; G is the gravitational constant in Newtonian mechanics; c is the speed of light in a vacuum in relativity theory. Three different theories of physics are tied together in this one expression. The Planck time is a theoretically interesting unit of time, but not a practical one. No known experimental procedure can detect events that are this brief.
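The definition can be checked numerically with standard CODATA-style values for the three constants (a sketch; the constant values below are approximations):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant ħ, joule-seconds
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in a vacuum, m/s

planck_time = math.sqrt(hbar * G / c**5)
planck_length = c * planck_time  # light-travel distance in one Planck time

print(planck_time)    # ≈ 5.39e-44 seconds
print(planck_length)  # ≈ 1.62e-35 meters
```

The computation ties the three theories together exactly as the text says: ħ from quantum theory, G from Newtonian gravity, and c from relativity combine into a single duration.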
There are no isolated particles according to quantum mechanics. Every particle is surrounded by many other particles, mostly virtual particles. So far, this article has spoken of virtual particles as if they are ordinary but short-lived particles. This is not quite correct. Virtual particles are not exactly particles like the other particles of the quantum fields. Both are excitations of these fields, and both have gravitational effects and thus effects on time, but virtual particles are not equivalent to ordinary quantum particles, although the longer-lived ones are more like ordinary particle excitations than the shorter-lived ones.
Virtual particles are just a way to calculate the behavior of quantum fields, by pretending that ordinary particles are changing into weird particles with impossible energies, and tossing such particles back and forth between themselves. A real photon has exactly zero mass, but the mass of a virtual photon can be absolutely anything. What we mean by “virtual particles” are subtle distortions in the wave function of a collection of quantum fields…but everyone calls them particles [in order to keep their names simple] (Carroll 2019, p. 316).
To summarize the above discussion about virtual particles and quantum foam, one can say that if the fundamental theories could be trusted, then time is a smooth continuum. However, there is reason not to trust the fundamental theories. A great many physicists believe time might not be smooth, breaking up at durations around the Planck interval of about 10⁻⁴³ seconds. The reasoning involves quantum mechanics, in particular Heisenberg’s uncertainty principle. For these very short durations, very large amounts of energy can be “borrowed.” For Planck intervals in comparably small volumes, John Wheeler conjectured that the gravity of this large amount of energy would become so strong that black holes would be created and the microscopic structure of spacetime would become a turbulent sea, the so-called quantum foam. If so, the smooth structure of spacetime is only an approximation that works above the Planck scale. It is an open question whether Wheeler’s quantum foam exists.
Entanglement is an unusual feature of quantum theory that involves time. Ontologically, the key idea is that if a particle becomes entangled with one or more other particles, then it loses some of its individuality. Even though both the special and general theory of relativity place the speed limit c on how fast a causal influence can propagate through space, classical quantum mechanics does not have this limit. A quantum measurement of one member of an entangled pair of particles will instantaneously determine the value of any similar measurement that might be made on the other member of the pair. The concept of determining the value is not quite the same as the concept of causing, and entanglement cannot be used to cause information to be transferred from one place to another at faster than the speed of light.
Speaking about entanglement in 1935, Erwin Schrödinger said:
Measurements on (spatially) separated systems cannot directly influence each other—that would be magic.
Einstein agreed. Yet the magic seems to exist. With entangled pairs, there is instantaneous, coordinated behavior across great distances. Here is an example. Consider the production of two entangled electrons with correlated spins. Think of spin as the inertia of orientation, the kind of thing that keeps a spinning top pointing in the same direction. What is exciting and special is that, although the two entangled electrons were created so that they will give, let’s say, the same values when their spins are measured, it can be shown that they were not created with the same spin. It isn’t that both started out with spin up or both started out with spin down but only that they later will have to be found to have the same spin. To appreciate this “magical” point, separate the two by a great distance. Now perform a measurement of spin on one of the two entangled electrons. Suppose this electron’s spin is measured to be up. If a similar measurement were to be made on the very distant electron, its spin would be found to be up also. And this second measurement can be made before a particle moving at the speed of light has time to carry information to the very distant, second particle about what happened back at the first particle. The transmission of coordinated behavior happens in zero time. It is hard for us who are influenced by the manifest image to believe that the two electrons did not start out with the spins that they were later found to have. The manifest image presupposes this locality. Quantum theory implies entanglement or non-locality occurs most everywhere so that is the default, and what needs explaining is any occurrence of locality—or else the multiverse theory is correct and measurements do not have unique outcomes.
Some physicists and philosophers, including David Albert, suggest that the explanation of non-local phenomena such as entanglement requires some notion of absolute simultaneity, and therefore a revision in the general theory of relativity.
Physicists widely agree that relativity theory and quantum theory are logically contrary. So, the two theories need to be replaced with a theory customarily called quantum gravity that is “more fundamental.” It is usually not made clear what it is that makes a fundamental theory fundamental, but the overall, vague idea is that a fundamental theory should not leave anything clearly in need of explanation that might be given an explanation. For more discussion of what is meant or should be meant by the terms fundamental theory, more fundamental theory, and final theory, see (Crowther 2019).
The standard model of particle physics was proposed in the 1970s, and it has subsequently been perfected and very well-tested. It is our civilization’s most precise and powerful theory of physics. For example, it can be used to explain why the periodic table has the values it has, and it explains why glass is solid and transparent, but grapes are soft and not transparent.
The standard model of particle physics is really a loose collection of theories about different particle fields. It describes all known fields and forces except gravity, and all known particles except the graviton.
The theory sets limits on what exists and what can happen. It implies, for example, that a photon cannot decay into two photons. It implies that protons attract electrons and never repel them. It also implies that every proton consists in part of two up quarks and one down quark that interact with each other by exchanging gluons. The gluons glue the particles together via the strong nuclear force just as photons glue electrons to protons via the electromagnetic force. Gravitons, the hypothesized carrier particles for gravity, glue a moon to a planet and a planet to a star. Unlike how Isaac Newton envisioned forces, all forces are transmitted by particles. That is, all forces have carrier particles that “carry” the force from one place to another. The gluons are massless and transmit the strong force; this force “glues” the quarks together inside a proton. More than 90% of the mass of the proton consists in virtual quarks, virtual antiquarks, and virtual gluons. Because they exist over only very short time scales, they are too difficult to detect by any practical experiment, and so they are called “virtual particles.” However, this word “virtual” does not mean “not real.”
The properties of spacetime points that serve to distinguish any particle from any other are a spacetime point’s values for mass, spin, and charge at that point. There are no other differences among points, so in that sense fundamental physics is simple. Charge, though, is not simply electromagnetic charge. There are three kinds of color charge for the strong nuclear force, and two kinds of charge for the weak nuclear force.
Except for gravity, the standard model describes all the universe’s forces and interactions, but strictly speaking, these theories are about interactions rather than forces. A force is just one kind of interaction. Some interactions do not involve forces but rather they change one kind of particle into another kind. The weak interaction, for example, can transform a neutron into a proton.
Almost every kind of event and process in the universe is produced by one or more of the four interactions. When any particle interacts, say with another particle, the two particles exchange other particles, the so-called carriers of the interactions. So, when milk is spilled onto the floor, what is going on is that the particles of the milk and the particles in the floor and the particles in the surrounding air exchange a great many carrier particles with each other, and the exchange is what is called “spilling milk onto the floor.” Yet all these varied particles are just tiny fluctuations of fields. The scientific image here has moved very far away from the manifest image.
According to the standard model, but not according to relativity theory, all particles must move at the speed c unless they interact with other fields. All the particles in your body, such as its protons and electrons, would move at the speed c if they were not continually interacting with the Higgs field. The Higgs field can be thought of as a sea of molasses that slows down all protons and electrons and gives them the mass and inertia they have. Neutrinos are not affected by the Higgs field, but they move at slightly less than c because they are slightly affected by the weak interaction.
As of the first quarter of the twenty-first century, the standard model is incomplete because it cannot account for gravity, dark matter, dark energy, and the fact that there is more matter than anti-matter. When a new version of the standard model does all this, then it will perhaps become the long-sought “theory of everything.”
The classical big bang theory implies that the observable universe once was extremely small, dense, hot, nearly uniform, and expanding; and it had extremely high energy density and severe curvature of its spacetime. Now it has lost all these properties except that it is still expanding and is nearly uniform on the largest scale.
There is much evidence for the theory, but the single best piece of evidence is that, from our observations of the motions of galaxies, we can infer that, if time were reversed, all the galaxies would come together at about the same time.
The big bang explosion was a swelling of space, not an explosion in a pre-existing void. It happened everywhere and not at the center of anything.
It is not known whether the universe existed before the big bang, and the classical big bang theory has nothing to say about how the bang began.
In the 1960s, the big bang theory replaced the steady state theory as the dominant theory of cosmology, and the theory transitioned from a speculation to a fact. The steady state theory allowed space to expand in volume, but it compensated for this by providing spontaneous creation of matter in order to keep the universe’s density steady, thus violating the law of the conservation of energy. Before the 1960s, physicists were unsure whether proposals about cosmic origins were pseudoscientific and so should not be discussed in a well-respected physics journal. The term “big bang” was a derisive term coined by proponents of the steady state theory to emphasize that the big bang theory is incorrect, but due to the subsequent wide acceptance of the theory the term no longer has negative connotations.
Judging primarily from today’s rate of spatial expansion of the universe plus the assumption that gravity has been the main force affecting the change of the universe’s size, it is estimated the explosion began 13.8 billion years ago. At that time, the universe would have had an ultramicroscopic volume. The explosive process created new space, and it is still creating new space. In fact, in 1998, the classical theory of the big bang was revised to say the expansion rate is not constant but has been accelerating for the last five billion years due to the pervasive presence of dark energy. Dark energy has this name because so little is known about it other than that its amount per unit volume stays constant as space expands. That is, it does not dilute.
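The step of judging the universe’s age from today’s expansion rate can be sketched crudely: assuming a Hubble constant of about 70 km/s per megaparsec and a constant expansion rate, the naive age estimate 1/H₀ lands near the quoted figure. (The accepted 13.8-billion-year value comes from fitting a detailed cosmological model, including the acceleration mentioned above, not from this arithmetic.)

```python
# Naive "Hubble time" estimate: if the universe had always expanded at
# today's rate H0, its age would be 1/H0. The values below are assumed
# round numbers for illustration, not measurements from this article.
H0_km_per_s_per_Mpc = 70.0     # assumed Hubble constant
km_per_Mpc = 3.0857e19         # kilometers in one megaparsec
seconds_per_year = 3.156e7     # seconds in one year

H0_per_second = H0_km_per_s_per_Mpc / km_per_Mpc
age_in_years = 1 / H0_per_second / seconds_per_year
print(age_in_years / 1e9)      # ≈ 14 billion years
```

That this crude estimate agrees with the model-based figure to within a few percent is a coincidence of our cosmic epoch: early deceleration and the recent dark-energy acceleration roughly cancel.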
Here is a radial diagram of how the universe looks to an observer at the sun. Distances away from the sun are on a logarithmic scale back to the beginning of the big bang, which is represented as the outer circle. The diagram shows in reverse how much the universe has expanded since it was of ultramicroscopic size:
The presentation is in reverse because the current, large volume of the universe is displayed as the small center of the diagram and the old, tiny volume is displayed as the large outer ring.
It is assumed that a radial diagram centered on any other star or any place in the universe would be very much like the above diagram, especially the farther one gets from its center. Looking out from the center of the diagram you see back in time—the farther out you look the farther back in time. Looking farther and farther out is looking into times when the universe had lower and lower entropy. The principal thing to remember when looking at this diagram is that the farther out from the center of the diagram, the smaller the universe; the outer boundary of the diagram represents an ultramicroscopic universe at the beginning of the big bang. Scientists are very sure there was a big bang, and they know a great deal about the universe one second after the big bang, but they know very little about it less than one microsecond after the big bang.
The big bang theory in some form or other (with or without inflation) is accepted by nearly all cosmologists, astronomers, astrophysicists, and philosophers of physics, but it is not as firmly accepted as is the theory of relativity. The theory originated with several people, although Edwin Hubble’s observations in 1929 were the most influential. In 1922, the Russian physicist Alexandr Friedmann discovered that the theory of general relativity allows an expanding universe. Unfortunately, Einstein reacted by saying this is a mere physical possibility but surely not a feature of the actual universe. Then the Belgian physicist Georges Lemaître suggested in 1927 that there is some evidence the universe is expanding, and he defended his claim using previously published measurements to show a pattern that the greater the distance of a galaxy from Earth the greater the galaxy’s speed away from Earth. He calculated these speeds from the Doppler shifts in their light frequency. In 1929, the American astronomer Edwin Hubble carefully recorded clusters of galaxies moving away from each other, with the more distant clusters moving away at greater speeds, and these observations were crucially influential in causing scientists to accept what is now called the big bang theory of the universe. Both Lemaître’s calculations and Hubble’s observations suggest that, if time were reversed, all the galaxies would meet in a very small volume. Currently, space is expanding because most clusters of galaxies are flying away from each other, even though molecules, planets, and galaxies themselves are not now expanding. Eventually, even they will expand.
As clusters get farther apart, the electromagnetic radiation they emit gets more red-shifted on its way to Earth. The best explanation of the red-shift is that the universe is expanding. The expansion is also why the sky is dark at night instead of exceedingly bright.
The acceptance of the theory of relativity has established that space curves near all masses. However, the theory has no implications about curvature at the cosmic level. Regarding this curvature, the above radial picture of the universe can be misinterpreted by not distinguishing the universe from the observable universe. The diagram shows only the spherical observable universe. This is what could in principle be seen from Earth. The sphere with its contents of 350 billion large galaxies is called “our Hubble Bubble” and “our pocket universe.” Its diameter is about 93 billion light years, but it is rapidly growing. However, the picture should not be interpreted as implying the larger universe itself now has spherical curvature. The big bang theory presupposes that the ultramicroscopic universe at a very early time did have an extremely large curvature, but most cosmologists believe that the universe has straightened out and now no longer has any spatial curvature on the largest scale of billions of light years. Astronomical observations reveal that the current distribution of matter in the universe tends towards uniformity as the scale increases. At very large scales it is homogeneous and isotropic.
Here is another picture that displays the same information differently, with time increasing to the right and (two dimensions of our three-dimensional) space increasing up, down, out and into the picture:
Attribution: NASA/WMAP Science Team
The term big bang does not have a precise definition. It does not always refer to a single, first event; rather, it more often refers to a brief duration of early events as the universe underwent a rapid expansion. Actually, the big bang theory itself is not a specific theory, but rather a framework for more specific big bang theories.
Astronomers on Earth detect microwave radiation arriving in all directions from the light produced about 380,000 years after the big bang. It was then that the universe turned transparent for the first time. This occurred because the universe had cooled to 3,000 degrees Kelvin, which was cool enough to form atoms and to allow photons for the first time to move freely without being immediately reabsorbed by neighboring particles. This primordial electromagnetic radiation has now reached Earth as the universe’s most ancient light. But it is no longer bright light. Its wavelength has increased; it has now become microwave radiation because its wavelength was continually stretched (red-shifted) as the universe expanded during the time of its travel toward Earth. Measuring this incoming Cosmic Microwave Background (CMB) radiation reveals it to be very uniform in all directions in the sky. The energy or temperature of the radiation once was high but now is only 2.728 degrees above absolute zero (the coldest possible temperature). This temperature is not perfectly smooth, though. It varies slightly with angle by a ten-thousandth of a degree of temperature. This almost uniform temperature implies the earliest times of the big bang had even greater uniformity, and it implies the entropy of the big bang was very low. The minuscule microwave temperature fluctuations in different directions are traces of ultramicroscopic fluctuations in the density of primordial material very early during the big bang. These early, small fluctuations probably are quantum fluctuations, and they probably are the origin of what later became the first galaxies. Probably all the large-scale structure in today’s universe was triggered by early quantum uncertainty.
Since the first second of the big bang, the universe’s expansion rate has not been uniform because there is another source of expansion, the repulsion of dark energy. The influence of dark energy was initially insignificant, but its key feature is that it does not dilute as the space it is within undergoes expansion. So, finally, after about seven billion years of space’s expanding, the dark energy became an influential factor and started to accelerate the expansion. Its effect on the expansion rate is becoming more and more significant. This influence is shown in the above diagram as the curvature that occurs just below and before the word “etc.” Most cosmologists believe dark energy is the energy of space itself.
The initial evidence for this dark energy came from observations in 1998 of Doppler shifts of supernovas. These observations are best explained by the assumption that distances between supernovas are increasing at an accelerating rate. Because of this rate increase, it is estimated that the distances between galaxy clusters will double every 10^10 years. Any galaxy cluster that is now 100 million light-years away from our Milky Way will, in another 13.8 billion years, be more than 200 million light-years away and will be moving much faster away from us. Eventually, it will be moving away from us so fast that it will become invisible. In enough time, all galaxies other than the Milky Way will become invisible. After that, all the stars in the Milky Way will become invisible. In that sense, astronomers are never going to see more than they can see now.
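The arithmetic here can be checked with a quick sketch, taking the doubling to apply to intergalactic distances. The function name and the round numbers are illustrative assumptions, not from the source:

```python
# Exponential expansion: a distance that doubles every 10^10 years.
def future_distance(d_now, years, doubling_time_years=1e10):
    """Distance after `years` of exponential expansion, in the same units as d_now."""
    return d_now * 2 ** (years / doubling_time_years)

# A cluster now at distance 100 (in any unit), after another 13.8 billion years:
print(round(future_distance(100, 13.8e9)))  # 260, i.e. "more than 200"
```
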
Regarding the universe’s expansion, atoms are not currently expanding. They are held together tightly by the electromagnetic force and strong force (with a little help from the weak force and gravity) which overpower the current value of the repulsive force of dark energy or whatever it is that is causing the expansion of space. What is expanding now is the average distances between clusters of galaxies. It is as if the clusters are exploding away from each other, and, in the future, they will be very much farther away from each other. According to the cosmologist Sean Carroll, currently the “idea that the universe is overall expanding is only true on the largest scales. It’s an approximation that gets better and better as you consider galaxies that are farther and farther away.”
Eventually, though, as the rate of expansion escalates, all clusters of galaxies will become torn apart. Then galaxies themselves will become torn apart, then all solar systems, and ultimately even molecules and atoms and all other configurations of elementary particles.
Why does the big bang theory say space exploded instead of saying matter-energy exploded into a pre-existing space? This is a subtle issue. If it had said matter-energy exploded but space did not, then there would be uncomfortable questions: Where is the point in space that it exploded from, and why that point? Picking one would be arbitrary. And there would be these additional uncomfortable questions: How large is this pre-existing space? When was it created? Experimental observations clearly indicate that some clusters of galaxies must be separating from each other faster than the speed of light, but adding that they do this because they are moving that fast within a pre-existing space would require an ad hoc revision of the theory of relativity to make exceptions to Einstein’s speed limit. So, it is much more “comfortable” to say the big bang is an explosion of space or spacetime, not an explosion of matter-energy within spacetime.
The term “our observable universe” and the synonymous term “our Hubble bubble” refer to everything that a person on Earth could in principle observe. However, there are distant places in the universe from which an astronomer could see more things than are visible from Earth. Physicists agree that, because of this reasoning, there exist objects that are in the universe but not in our observable universe. Because those unobservable objects are also the product of our big bang, cosmologists assume that the unobservable objects are similar to the objects we on Earth observe—that those objects form atoms and galaxies, and that time behaves there as it does here. But there is no guarantee that this convenient assumption is correct.
Because the big bang happened about 14 billion years ago, you might think that no visible object can be more than 14 billion light-years from Earth, but this would be a mistake that does not take into account the fact that the universe has been expanding all that time. The increasing separation of clusters of galaxies over the last 14 billion years is why astronomers can see about 45 billion light-years in any direction and not merely 14 billion light-years.
Some distant galaxies are moving so fast away from us that they are invisible. Their speed of recession is greater than c. Nevertheless, assuming general relativity is correct, nothing in our universe is passing, or ever has passed, or will pass anything at faster than c; so, in that sense, c is still our cosmic speed limit.
When contemporary physicists speak of the age of our universe and of the time since our big bang, they are implicitly referring to cosmic time measured in the cosmological rest frame. This is time measured in a unique reference frame in which the average motion of all the galaxies is stationary and the Cosmic Microwave Background radiation is as close as possible to being the same in all directions. This frame is not one in which the Earth is stationary. Cosmic time is time measured by a clock that would be sitting as still as possible while the universe expands around it. In cosmic time, t = 0 years is when the big bang occurred, and t = 13.8 billion years is our present. If you were at rest at the spatial origin in this frame, then the Cosmic Microwave Background radiation on a very large scale would have the same temperature in any direction. That moment is represented by the ring in the radial diagram above at which the universe first became transparent to light. When the universe was smaller than it is now, about 100 million light-years across, its matter was roughly uniformly distributed. At that scale, it is as if all the galaxies are dust particles floating in a large room: at the center of the room the distribution of dust in one direction is the same as in any other direction, and in any region of the room there is as much dust as in any other region. On a finer scale, the matter in the universe is unevenly distributed.
The cosmic rest frame is a unique, privileged reference frame for astronomical convenience, but there is no reason to suppose it is otherwise privileged. It is not the frame sought by the A-theorist who believes in a unique present, nor by Isaac Newton who believed in absolute rest, nor by Maxwell who believed in his nineteenth century aether.
The cosmic frame’s spatial origin point is described as follows:
In fact, it isn’t quite true that the cosmic background heat radiation is completely uniform across the sky. It is very slightly hotter (i.e., more intense) in the direction of the constellation of Leo than at right angles to it…. Although the view from Earth is of a slightly skewed cosmic heat bath, there must exist a motion, a frame of reference, which would make the bath appear exactly the same in every direction. It would in fact seem perfectly uniform from an imaginary spacecraft traveling at 350 km per second in a direction away from Leo (towards Pisces, as it happens)…. We can use this special clock to define a cosmic time…. Fortunately, the Earth is moving at only 350 km per second relative to this hypothetical special clock. This is about 0.1 percent of the speed of light, and the time-dilation factor is only about one part in a million. Thus to an excellent approximation, Earth’s historical time coincides with cosmic time, so we can recount the history of the universe contemporaneously with the history of the Earth, in spite of the relativity of time.
Similar hypothetical clocks could be located everywhere in the universe, in each case in a reference frame where the cosmic background heat radiation looks uniform. Notice I say “hypothetical”; we can imagine the clocks out there, and legions of sentient beings dutifully inspecting them. This set of imaginary observers will agree on a common time scale and a common set of dates for major events in the universe, even though they are moving relative to each other as a result of the general expansion of the universe…. So, cosmic time as measured by this special set of observers constitutes a type of universal time… (Davies 1995, pp. 128-9).
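Davies’s figures are easy to verify. A minimal check of the Lorentz time-dilation factor at 350 km per second:

```python
import math

c = 299_792.458   # speed of light, km/s
v = 350.0         # Earth's speed relative to the cosmic rest frame, km/s

gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # Lorentz time-dilation factor
print(v / c)          # ~0.00117, i.e. about 0.1 percent of light speed
print(gamma - 1)      # ~6.8e-7, i.e. about one part in a million
```

So Earth-based clocks do track cosmic time to within about a part per million, as the quotation says.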
It is a convention that cosmologists agree to use the cosmic time of this special reference frame, but it is an interesting fact and not a convention that our universe is so organized that there is such a useful cosmic time available to be adopted by the cosmologists. Not all physically possible spacetimes obeying the laws of general relativity can have such a cosmic time.
According to one popular revision of the classical big bang theory, the cosmic inflation theory, the universe underwent an inflationary expansion soon after t = 0. It was a sudden and hyperfast expansion, a cosmological phase transition, with an exponentially increasing rate for a very short time. Nobody knows whether it expanded uniformly in all directions. It began at some early time for some unknown reason, and, again for some unknown reason, it stopped inflating very soon after it began.
The inflation was initiated soon after the grand unified force broke up into three separate forces—the strong force, the weak force, and the electromagnetic force.
About half the cosmologists do not believe in inflation; they hope there is another explanation of the phenomena for which the inflation theory was devised. The theory was created in order to explain why there are not point-like magnetic monopoles almost everywhere (called the monopole problem), why the microwave radiation that arrives on earth from all directions is so uniform (the cosmic horizon problem), why we have been unable to detect proton decay (the proton decay problem), and why there is currently so little curvature of space (the flatness problem). These problems are difficult to solve without assuming inflation.
The horizon problem is the problem of accounting for the fact that, looking in any direction, we see almost the same temperature of the microwave radiation reaching us. This is a remarkable feature because distant regions at different angles probably had different temperatures and presumably were not in any causal contact when their light was generated 380,000 years after the big bang. What could bring them to have the same temperature? “Rapid inflation brings it about,” say advocates of inflation theory. The inflation diluted the significance of any early temperature differences.
The flatness problem arises because the universe has no overall curvature and a nearly homogeneous temperature today, although it had extremely large curvature and high temperature at the beginning of the big bang. Solving this problem counts in favor of cosmic inflation theory, because the current lack of curvature and of temperature differences has been very difficult to explain without inflation.
Assuming space now has no overall curvature, our observations and measurements of the universe’s current energy indicate that some energy needed to account for this lack of curvature is missing. The missing energy is called dark energy. It is also called the cosmological constant because it appears to have the same value everywhere. It is the energy of otherwise empty space. After the universe’s initial inflation stopped, the universe’s expansion continued, but about seven billion years ago its expansion rate began speeding up due to the influence of the dark energy. The expansion rate will continue to increase.
If the cosmic inflation did occur, then it is likely that primordial gravitational waves were created. They would now have stretched to an extremely long wavelength. These waves might be detected by a future gravitational-wave detector.
The big bang theory is considered to be confirmed, but the theory of inflation is still unconfirmed. Princeton cosmologist Paul Steinhardt and Nobel Prize winner Roger Penrose are two of its noteworthy opponents.
Here is part of the argument in favor of an initial inflation. The cosmic microwave background (CMB) radiation reaching Earth from all directions is on average the same cold temperature everywhere, namely about 2.7 degrees Kelvin or about negative 270 degrees C or negative 455 degrees Fahrenheit, but with small temperature differences in different directions on the order of a hundred-thousandth of a degree. Room temperature, by comparison, is 300 degrees Kelvin or 80 degrees Fahrenheit. The classical big bang theory can account for the number 2.7 but not for the temperature being uniform in all directions at the largest scale nor for the very slight deviations in uniformity in temperature on the order of a hundred-thousandth of a degree. The big bang theory of inflation can account for these cosmological features.
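The temperature conversions in this paragraph can be reproduced with the standard Kelvin-to-Fahrenheit formula:

```python
def kelvin_to_fahrenheit(t_k):
    # Standard conversion: Celsius = K - 273.15, then Celsius to Fahrenheit.
    return (t_k - 273.15) * 9 / 5 + 32

print(round(kelvin_to_fahrenheit(2.7)))   # -455, the CMB temperature
print(round(kelvin_to_fahrenheit(300)))   # 80, room temperature
```
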
The theory of inflation postulates that extremely early in the big bang there was exponential inflation of space, or perhaps a small patch of space, due to the presence of a small amount of very dense, repulsive, primordial, material having negative pressure—that is, negative gravity. In other words, it was very explosive. Newton-style gravity cannot be repulsive, but Einstein’s theory does not rule out repulsive gravity. The addition by Einstein of the so-called cosmological constant term to his equations allows for this repulsive gravity, but, unfortunately, Einstein himself did not consider the possibility that there could be repulsive gravity or cosmic inflation.
Assuming the big bang began at time t = 0, then the epoch of inflation (the epoch of radically repulsive gravity) began perhaps at t = 10^-36 seconds and lasted until about t = 10^-34 or t = 10^-33 seconds, during which time the linear dimensions of space increased by a factor of at least 10^26, and any initial unevenness in the distribution of energy was almost all smoothed out, that is, smoothed out from the large-scale perspective, in analogy to how blowing up a balloon removes its initial folds and creases.
To appreciate just how fast the initial inflation was, consider this analogy. Although the universe at the beginning of inflation was a lump of repulsive gravity material much smaller than the size of a proton that then expanded to the size of a marble at the end of the inflationary period, think of it instead as if inflation began with the universe being the size of a marble. Then during that period from t = 10^-36 seconds to t = 10^-34 seconds, the marble expanded to a sphere whose radius reaches from Earth to the nearest supercluster of galaxies.
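The marble analogy can be sanity-checked with round numbers. The 1 cm marble and the 10^26 stretch factor are the assumptions here:

```python
# Sanity check of the marble analogy for inflation's expansion factor.
METERS_PER_LIGHT_YEAR = 9.46e15

marble_radius_m = 0.01   # a 1 cm marble
stretch = 1e26           # assumed linear expansion factor during inflation

final_radius_ly = marble_radius_m * stretch / METERS_PER_LIGHT_YEAR
print(f"{final_radius_ly:.1e}")  # ~1e8 light-years, roughly the distance
                                 # to a nearby supercluster of galaxies
```
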
The speed of this inflationary expansion was much faster than light speed. This does not violate Einstein’s general theory of relativity because his theory is a local theory, and locally during inflation no entity passed by any other at faster than the speed of light.
At the end of that inflationary epoch at, say, t = 10^-33 seconds or so, the explosive material decayed for some unknown reason and left only normal matter with attractive gravity. That is, gravity turned from negative to positive. This decay began the post-inflation period of the so-called quark soup. At this time, our universe continued to expand, although now at a nearly constant rate. It went into its “coasting” phase. Regardless of any previous curvature in our universe, by the time the inflationary period ended, the overall structure of space had very little spatial curvature, and its space was extremely homogeneous. Today, we see that the universe is homogeneous on its largest scale. But at the very beginning of the inflationary period, there were some very tiny imperfections due to quantum fluctuations. The densest regions attracted more material than the less dense regions, and these dense regions turned into what would eventually become galaxies. The quantum fluctuations themselves have left their traces in the very slight hundred-thousandth of a degree differences in the temperature of the cosmic microwave background radiation at different angles as one looks out into space from earth.
To add some more detail to the story of inflation, before inflation began, for some unknown reason the universe contained an unstable inflaton field or false vacuum field. This field underwent a spontaneous phase transition (analogous to superheated liquid water suddenly and spontaneously expanding into steam). That phase transition caused the highly repulsive primordial material to hyper-inflate exponentially in volume for a very short time. During this primeval inflationary epoch, the gravitational field’s stored, negative, gravitational energy was rapidly released, and all space wildly expanded. At the end of this early inflationary epoch, the highly repulsive material decayed for some as yet unknown reason into ordinary matter and energy, and the universe’s expansion rate settled down to just below the rate of expansion observed in the universe today. During the inflationary epoch, the entropy continually increased, so the second law of thermodynamics was not violated.
Alan Guth described the inflationary period this way:
There was a period of inflation driven by the repulsive gravity of a peculiar kind of material that filled the early universe. Sometimes I call this material a “false vacuum,” but, in any case, it was a material which in fact had a negative pressure, which is what allows it to behave this way. Negative pressure causes repulsive gravity. Our particle physics tells us that we expect states of negative pressure to exist at very high energies, so we hypothesize that at least a small patch of the early universe contained this peculiar repulsive gravity material which then drove exponential expansion. Eventually, at least locally where we live, that expansion stopped because this peculiar repulsive gravity material is unstable; and it decayed, becoming normal matter with normal attractive gravity. At that time, the dark energy was there, the experts think. It has always been there, but it’s not dominant. It’s a tiny, tiny fraction of the total energy density, so at that stage at the end of inflation the universe just starts coasting outward. It has a tremendous outward thrust from the inflation, which carries it on. So, the expansion continues, and as the expansion happens the ordinary matter thins out. The dark energy, we think, remains approximately constant. If it’s vacuum energy, it remains exactly constant. So, there comes a time later where the energy density of everything else drops to the level of the dark energy, and we think that happened about five or six billion years ago. After that, as the energy density of normal matter continues to thin out, the dark energy [density] remains constant [and] the dark energy starts to dominate; and that’s the phase we are in now. We think about seventy percent or so of the total energy of our universe is dark energy, and that number will continue to increase with time as the normal matter continues to thin out. 
(World Science U Live Session: Alan Guth, published November 30, 2016 at https://www.youtube.com/watch?v=IWL-sd6PVtM.)
Before about t = 10^-46 seconds, there was a single basic force rather than the four we have now. The four basic forces are: the force of gravity, the strong nuclear force, the weak force, and the electromagnetic force. At about t = 10^-46 seconds, the energy density of the primordial field was down to about 10^15 GeV, which allowed spontaneous symmetry breaking (analogous to the spontaneous phase change in which steam cools enough to spontaneously change to liquid water); this phase change created the gravitational force as a separate basic force. The other three forces had not yet appeared as separate forces.
Later, after inflation began and then ended, at t = 10^-12 seconds, there was more spontaneous symmetry breaking. First the strong nuclear force, then the weak nuclear force, and finally the electromagnetic force became separate forces. For the first time, the universe now had exactly four separate forces. At t = 10^-10 seconds, the Higgs field turned on (that is, came into existence). This slowed down many kinds of particles by giving them mass so they no longer moved at light speed.
Much of the considerable energy left over at the end of the inflationary period was converted into matter, antimatter, and radiation, such as quarks, antiquarks, and photons. The universe’s temperature escalated with this new radiation, and this period is called the period of cosmic reheating. Matter-antimatter pairs of particles combined and annihilated, removing all the antimatter and almost all the matter from the universe, and leaving a small amount of matter and even more radiation. At t = 10^-6 seconds, the universe had cooled enough that quarks combined together and created protons and neutrons. After t = 3 minutes, the universe had cooled sufficiently to allow these protons and neutrons to start combining strongly to produce hydrogen, deuterium, and helium nuclei. At about t = 379,000 years, the temperature was low enough (around 2,700 degrees C) for these nuclei to capture electrons and to form the initial hydrogen, deuterium, and helium atoms of the universe. With these first atoms coming into existence, the universe became transparent in the sense that this short wavelength light (about a millionth of a meter) was now able to travel freely without always being absorbed very soon by surrounding particles. Due to the expansion of the universe since then, this early light is today invisible on earth because it is at much longer wavelength than it was 379,000 years ago. That radiation is now detected on Earth as having a wavelength of 1.9 millimeters, and it is called the cosmic microwave background radiation or CMB. That energy is continually arriving at the Earth’s surface from all directions. It is almost homogeneous and almost isotropic.
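A rough sketch of this cooling uses the fact that blackbody wavelengths stretch in inverse proportion to temperature as space expands. The 3,000 K and one-micrometer figures are the ones given above; note that peak-wavelength conventions differ by a constant factor, which is why the detected 1.9 mm figure is slightly larger than this estimate:

```python
# Blackbody wavelengths scale as 1/T, so the stretch factor since the
# universe became transparent equals the ratio of the two temperatures.
T_emission = 3000.0    # K, when the universe first became transparent
T_today = 2.728        # K, the measured CMB temperature

stretch = T_emission / T_today
print(round(stretch))                   # ~1100-fold expansion since then

emitted_wavelength_m = 1e-6             # ~a millionth of a meter, per the text
print(emitted_wavelength_m * stretch)   # ~1.1e-3 m: millimeter-scale microwaves
```
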
As the universe expands, the CMB radiation loses energy; but this energy is not lost from the universe, nor is the law of conservation of energy violated. There is conservation because the same amount of energy is gained by going into expanding the space.
In the literature in both physics and philosophy, descriptions of the big bang often speak of it as if it were the first event, but the big bang theory does not require there to be a first event, an event that had no prior event. This description mentioning the first event is a philosophical position, not something demanded by the scientific evidence. Physicists James Hartle and Stephen Hawking once suggested that looking back to the big bang is just like following the positive real numbers back to ever-smaller positive numbers without ever reaching the smallest positive one. There isn’t a smallest one. If Hartle and Hawking are correct that time is strictly analogous to this, then the big bang had no beginning point event, no initial time.
The classical big bang theory is based on the assumption that the universal expansion of clusters of galaxies can be projected all the way back to a singularity, a zero volume, at t = 0. Physicists agree that the projection must become untrustworthy for any times less than the Planck time. If a theory of quantum gravity ever gets confirmed, it is expected to provide more reliable information about the Planck epoch from t=0 to the Planck time, and it may even allow physicists to answer the questions, “What caused the big bang?” and “Did anything happen before then?”
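For scale, the Planck time mentioned here can be computed from the standard values of the fundamental constants ħ, G, and c:

```python
# Planck time: t_P = sqrt(hbar * G / c^5), using standard constant values.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0          # speed of light, m/s

t_planck = (hbar * G / c**5) ** 0.5
print(t_planck)   # ~5.4e-44 seconds
```

Projections of the classical theory to times earlier than this are the ones physicists distrust.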
For a short lecture by Guth on these topics addressed to students, see https://www.youtube.com/watch?v=ANCN7vr9FVk.
Although there is no consensus among physicists about whether there is more than one universe, many of the big bang inflationary theories are theories of eternal inflation, of the eternal creation of more big bangs. The idea, also called chaotic inflation, is from Andrei Linde. His key idea is that once inflation gets started it cannot easily be turned off. The inflaton field is the fuel of our big bang and all other big bangs. Presumably, say advocates of eternal inflation, not all the inflaton fuel is used up in producing just one big bang, so the remaining fuel is available to create other big bangs, which themselves inflate and lead to still more big bangs, at an exponentially increasing rate. The inflaton fuel increases faster than it gets used. Presumably, there is no reason why this process should ever end, so there will be a potentially infinite number of universes in the multiverse. Also, there is no good reason to suppose our actual universe was the first one.
After any single big bang, eventually the initial hyper-inflation stops in some region. The expansion of this region does not stop, however, and it produces what cosmologists call an expanding bubble universe. Our own bubble that was produced by our big bang is called the Hubble Bubble. That term is ambiguous because often cosmologists require that the bubble be just our visible universe rather than our universe. At any time in the multiverse, most of the space is inflating.
The original theory of inflation was created by Guth and Linde in about 1980. The theory of eternal inflation with a multiverse was created by Linde in 1983 plus work by Gott and Vilenkin. The multiplicity of universes also is called parallel worlds, many worlds, alternative universes, and alternate worlds. Each universe of the multiverse normally is required to use some of the same physics (there is no agreement on which “some”) and the same mathematics. This restriction is not required by a logically possible universe of the sort proposed by the philosopher David Lewis. These multiple universes in the multiverse are “elsewhere,” but there is no agreement on whether they are scattered across the “space” of the multiverse, with our small, observable universe having its own location in this space. Each universe has its own space, but it is better not to think of multiple universes as existing in a common space at all. There have been searches by astronomers for evidence that an alternative universe within our space has collided with our universe, but these have failed. A little more sense can be made of two universes existing at the same time, but this idea is a bit vague and not worked out clearly, and good sense cannot yet be made of the idea that your counterpart in another specific universe had a 21st birthday before you did.
There are competing versions of the multiverse theory. One version, called the Many Worlds Interpretation of quantum mechanics, implies that at each event the universe splits into various new universes, one for each possibility that could have happened. There are many worlds containing a person just like you, but the phosphorus atom in your counterpart’s right eye will not be the very same phosphorus atom that is in your right eye.
New energy is not required to create these inflationary universes, so there are no implications about whether energy is or is not conserved in the multiverse.
In some of these multiple universes, there may be no time dimension.
Could the expansion of our universe eventually slow down? Yes. Could the expansion of the multiverse eventually slow down? No. The primordial explosive material in any single universe decays quickly, but the inflating material that has not yet decayed keeps increasing even faster than it decays, so the expansion of the multiverse continues. The rate of creation of new bubble universes increases exponentially.
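The claim that inflating material outpaces its own decay can be illustrated with a toy model. The growth and decay rates below are invented for illustration, not measured values; the point is only that whenever the growth rate g exceeds the decay rate d, the volume still inflating grows without bound even while it continuously sheds bubble universes.

```python
import math

g = 2.0   # hypothetical growth rate of the inflating volume (per unit time)
d = 0.5   # hypothetical decay rate into bubble universes (per unit time)

def inflating_volume(t, v0=1.0):
    """Volume still inflating at time t: grows at rate g, decays at rate d."""
    return v0 * math.exp((g - d) * t)

def bubble_volume(t, v0=1.0):
    """Cumulative volume converted into bubble universes up to time t
    (the integral of d * inflating_volume(s) for s from 0 to t)."""
    return d * v0 * (math.exp((g - d) * t) - 1.0) / (g - d)

for t in (0.0, 1.0, 2.0, 3.0):
    print(f"t={t}: inflating={inflating_volume(t):.2f}, "
          f"bubbles={bubble_volume(t):.2f}")
```

Because g > d, both quantities grow exponentially and the inflating “fuel” is never exhausted, which is the qualitative behavior eternal-inflation advocates describe.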
Normally, philosophers of science say that what makes a theory scientific is not that it can be falsified (as the philosopher Karl Popper proposed), but rather that there can be experimental evidence for it or against it. Because it is so difficult to design experiments that would provide evidence for or against the multiverse theories, many physicists complain that their fellow physicists who are developing these theories are doing technical metaphysical speculation, not physics. The usual response from defenders of multiverse theories is that they can imagine someday, perhaps in future centuries, running crucial experiments, and, besides, the term physics is best defined as whatever physicists do.
For an authoritative explanation of the multiverse, see episode 200 of Sean Carroll’s Mindscape podcast called “Solo: The Philosophy of the Multiverse.”
Is time infinitely divisible? Yes, because general relativity theory and quantum theory require time to be a continuum. But this answer will change to “no” if these theories are eventually replaced by a Core Theory that quantizes time. “Although there have been suggestions that spacetime may have a discrete structure,” Stephen Hawking said in 1996, “I see no reason to abandon the continuum theories that have been so successful.” Two decades later, he and other physicists were much less sure.
Did time begin at the big bang, or was there a finite or infinite time period before our big bang? The answer is unknown.
Stephen Hawking and James Hartle said the difficulty of knowing whether the past and future are infinite in duration turns on our ignorance of whether the universe’s positive energy is exactly canceled out by its negative energy. All the energy of gravitation and spacetime curvature is negative. If the total of the universe’s energy is non-zero and if quantum mechanics is to be trusted, including the law of conservation of energy, then time is infinite in the past and future. Here is the argument for this conclusion. The law of conservation of energy allows energy to change forms, but it forbids any change in the total; so, if the total is ever non-zero, it can never have been zero and can never become zero, because any transition between a zero and a non-zero total would violate the conservation law. So, if the total of the universe’s energy is non-zero and if quantum mechanics is to be trusted, then there always have been states whose total energy is non-zero, and there always will be such states. That implies there can be no first instant or last instant, and thus that time is eternal.
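The argument can be put compactly. Writing $E(t)$ for the universe’s total energy, conservation says $E$ is constant:

\[
\frac{dE}{dt} = 0 \quad\Longrightarrow\quad E(t) = E(t_0) \ \text{for all } t .
\]

Hence if $E(t_0) \neq 0$ at any one time $t_0$, then $E(t) \neq 0$ at every time $t$; a state of non-zero energy exists at every time, so time can have no first or last instant.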
There is no solid evidence that the total is non-zero, but a slim majority of the experts favor a non-zero total, although their confidence in this is not strong. Assuming there is a non-zero total, the favored theory of the future of the universe is the big chill theory. The big chill theory implies the future never ends and the universe just keeps getting chillier as space expands and gets more dilute. Empty space is self-repulsive, and presumably it will expand forever. So, there always will be new events produced from old events.
Here are more details of the big chill theory. The last star will burn out in about 10^15 years. Then all the stars and dust within each galaxy will fall into black holes. Then the material between galaxies will fall into black holes as well, and finally, in about 10^100 years, all the black holes will evaporate, leaving only a soup of elementary particles that gets less dense and therefore “chillier” as the universe’s expansion continues. The microwave background radiation will redshift more and more into longer-wavelength radio waves. Future space will look much like a vacuum. But because of vacuum energy, the temperature will only approach, but never quite reach, zero on the Kelvin scale. Thus the universe descends into a “big chill,” having the same amount of total energy it always has had.
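The redshift just mentioned follows the standard cosmological scaling: a photon’s wavelength stretches in proportion to the scale factor $a(t)$ of the expanding universe,

\[
\frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}} = \frac{a(t_{\text{obs}})}{a(t_{\text{emit}})},
\]

so as $a(t)$ grows without bound, the background radiation shifts to ever longer wavelengths.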
The situation is very different from that of the big chill theory if the total energy of the universe is now zero. In this case, time is not fundamental (nor is spacetime). Perhaps time is emergent from a finite collection of moments as described in the timeless Wheeler-DeWitt equation of quantum mechanics (namely the Schrödinger wave equation when there is no change).
Here is more commentary about this from Carroll (2016, pp. 197-8):
There are two possibilities: one where the universe is eternal, one where it had a beginning. That’s because the Schrödinger equation of quantum mechanics turns out to have two very different kinds of solutions, corresponding to two different kinds of universe.
One possibility is that time is fundamental, and the universe changes as time passes. In that case, the Schrödinger equation is unequivocal: time is infinite. If the universe truly evolves, it always has been evolving and always will evolve. There is no starting and stopping. There may have been a moment that looks like our Big Bang, but it would have only been a temporary phase, and there would be more universe that was there even before the event.
The other possibility is that time is not truly fundamental, but rather emergent. Then, the universe can have a beginning. The Schrödinger equation has solutions describing universes that don’t evolve at all: they just sit there, unchanging.
…And if that’s true, then there’s no problem at all with there being a first moment in time. The whole idea of “time” is just an approximation anyway.
Back to the main “Time” article for references and citations.
California State University, Sacramento
U. S. A.