Fine-Tuning
The term “fine-tuning” is used to characterize sensitive dependences of facts or properties on the values of certain parameters. Technological devices are paradigmatic examples of fine-tuning. Whether they function as intended depends sensitively on parameters that describe the shape, arrangement, and material properties of their constituents, e.g., the constituents’ conductivity, elasticity and thermal expansion coefficient. Technological devices are the products of actual “fine-tuners”—engineers and manufacturers who designed and built them—but for fine-tuning in the broad sense of this article to obtain, sensitivity with respect to the values of certain parameters is sufficient.
Philosophical debates in which “fine-tuning” appears are often about the universe’s fine-tuning for life: according to many physicists, the fact that the universe is able to support life depends delicately on various of its fundamental characteristics, notably on the form of the laws of nature, on the values of some constants of nature, and on aspects of the universe’s conditions in its very early stages. Various reactions to the universe’s fine-tuning for life have been proposed: that it is a lucky coincidence which we have to accept as a primitive given; that it will be avoided by future best theories of fundamental physics; that the universe was created by some divine designer who established life-friendly conditions; and that fine-tuning for life indicates the existence of multiple other universes with conditions very different from those in our own universe. Sections 1–4 of the present article review the case for this fine-tuning for life, the reactions to it, and major criticisms of these reactions. Section 5 turns from fine-tuning for life to the criterion of naturalness—a condition of no fine-tuning in a rather different sense which applies to theories in quantum field theory and plays a large role in contemporary particle physics and cosmology.
- 1. Fine-Tuning for Life: the Evidence
- 2. Does Fine-Tuning for Life Require a Response?
- 3. Fine-Tuning and Design
- 4. Fine-Tuning and the Multiverse
- 5. Fine-Tuning and Naturalness
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Fine-Tuning for Life: the Evidence
1.1 Examples from Physics
Our best current theories of fundamental physics are the Standard Model of elementary particle physics and the theory of general relativity. The Standard Model accounts for three of the four known fundamental forces of nature—the strong, the weak, and the electromagnetic force—while general relativity accounts for the fourth—gravity. Arguments according to which our universe is fine-tuned for life are aimed at showing that life could not have existed for the vast majority of other forms of the laws of nature, other values of the constants of nature, and other conditions in the very early universe.
The following is an—incomplete—list of suggested instances of fine-tuning for life. (For popular overviews see Leslie 1989: ch. 2, Rees 2000, Davies 2006, and Lewis & Barnes 2016; for more technical ones see Hogan 2000, Uzan 2011, Barnes 2012, Adams 2019 and the contributions to Sloan et al. 2020.)
1.1.1 Fine-tuned constants
- The strength of gravity, when measured against the strength of electromagnetism, seems fine-tuned for life (Rees 2000: ch. 3; Uzan 2011: sect. 4; Lewis & Barnes 2016: ch. 4). If gravity had been absent or substantially weaker, galaxies, stars and planets would not have formed in the first place. Had it been only slightly weaker (and/or electromagnetism slightly stronger), main sequence stars such as the sun would have been significantly colder and would not have exploded in supernovae, which are the main source of many heavier elements (Carr & Rees 1979). If, in contrast, gravity had been slightly stronger, stars would have formed from smaller amounts of material, which would have meant that, insofar as they were stable at all, they would have been much smaller and shorter-lived (Adams 2008; Barnes 2012: sect. 4.7.1).
- The strength of the strong nuclear force, when measured against that of electromagnetism, seems fine-tuned for life (Rees 2000: ch. 4; Lewis & Barnes 2016: ch. 4). Had it been stronger by more than about \(50\,\%\), almost all hydrogen would have been burned in the very early universe (MacDonald & Mullan 2009). Had it been weaker by a similar amount, stellar nucleosynthesis would have been much less efficient and few, if any, elements beyond hydrogen would have formed. For the production of appreciable amounts of both carbon and oxygen in stars, even much smaller deviations of the strength of the strong force from its actual value would be fatal (Hoyle et al. 1953; Barrow & Tipler 1986: 252–253; Oberhummer et al. 2000; Barnes 2012: sect. 4.7.2).
- The difference between the masses of the two lightest quarks—the up- and down-quark—seems fine-tuned for life (Carr & Rees 1979; Hogan 2000: sect. 4; Hogan 2007; Adams 2019: sect. 2.25). Partly, the fine-tuning of these two masses obtains relative to the strength of the weak force (Barr & Khan 2007). Changes in the difference between them have the potential to affect the stability properties of the proton and neutron, which are bound states of these quarks, or lead to a much less complex universe where bound states of quarks other than the proton and neutron dominate. Similar effects would occur if the mass of the electron, which is roughly ten times smaller than the mass difference between the down- and up-quark, were somewhat larger in relation to that difference. There are also absolute constraints on the masses of the two lightest quarks (Adams 2019: fig. 5).
- The strength of the weak force seems to be fine-tuned for life (Carr & Rees 1979). If it were weaker by a factor of about \(10\), there would have been many more neutrons in the early universe, leading very quickly to the formation of deuterium and tritium and, soon after, helium. Long-lived stars such as the sun, which depend on hydrogen that they can burn to helium, would not exist. Further possible consequences of altering the strength of the weak force for the existence of life are explored by Hall et al. (2014).
- The cosmological constant characterizes the energy density \(\rho_V\) of the vacuum. On theoretical grounds, outlined in Section 5 of this article, one would expect it to be larger than its actual value by many orders of magnitude. (Depending on the specific assumptions made, the discrepancy is between \(10^{50}\) and \(10^{123}\).) However, only values of \(\rho_V\) a few orders of magnitude larger than the actual value are compatible with the formation of galaxies (Weinberg 1987; Barnes 2012: sect. 4.6; Schellekens 2013: sect. 3). This constraint is relaxed if one considers universes with different baryon-to-photon ratios and different values of the number Q (discussed below), which quantifies density fluctuations in the early universe (Adams 2019: sect. 4.2).
1.1.2 Fine-tuned conditions in the early universe
- The global cosmic energy density \(\rho\) in the very early universe is extremely close to its so-called critical value \(\rho_c\). The critical value \(\rho_c\) is defined by the transition from negatively curved universes (\(\rho<\rho_c\)) to flat (critical density \(\rho=\rho_c\)) to positively curved (\(\rho>\rho_c\)) universes. Had \(\rho\) not been extremely close to \(\rho_c\) in the very early universe, life could not have existed: for slightly larger values, the universe would have recollapsed quickly and time would not have sufficed for stars to evolve; for slightly smaller values, the universe would have expanded so quickly that stars and galaxies would have failed to condense out (Rees 2000: ch. 6; Lewis & Barnes 2016: ch. 5).
- The relative amplitude \(Q\) of density fluctuations in the early universe, known to be roughly \(2\cdot10^{-5}\), seems fine-tuned for life (Tegmark & Rees 1998; Rees 2000: ch. 8). If \(Q\) had been smaller by about one order of magnitude, the universe would have remained essentially structureless since the pull of gravity would not have sufficed to create astronomic structures like galaxies and stars. If, in contrast, \(Q\) had been significantly larger, galaxy-sized structures would have formed early in the history of the universe and soon collapsed into black holes.
- The initial entropy of the universe must have been exceedingly low. According to Penrose, universes “resembling the one in which we live” (2004: 343) populate only one part in \(10^{10^{123}}\) of the available phase space volume.
1.1.3 Fine-tuned laws
It has been claimed that the laws of physics are fine-tuned for life not only with respect to the constants that appear in them but also with respect to their form itself. Three of the four known fundamental forces—gravity, the strong force, and electromagnetism—play key roles in the organisation of complex material systems. A universe in which one of these forces is absent—and the others are present as in our own universe—would most likely not give rise to life, at least not in any form that resembles life as we know it. The fundamental force whose existence is least clearly needed for life is the weak force (Harnik et al. 2006). A universe without any weak force but with all the other forces of the Standard Model in place and suitably adjusted might be habitable (Grohs et al. 2018). Further general features of the actual laws of nature that have been claimed to be necessary for the existence of life are the quantization principle and the Pauli exclusion principle in quantum theory (Collins 2009: 213f.).
1.2 Are Conditions Really Fine-Tuned for Life?
Considerations according to which the laws of nature, values of the constants, and boundary conditions of the universe are fine-tuned for life refer to life in general, not merely human life. According to them, a universe with different laws, constants, and boundary conditions would almost certainly not give rise to any form of life. A common worry about such considerations is that they are ill-founded due to lack of a widely accepted definition of “life”. Another worry is that we may seriously underestimate life’s propensity to appear under different laws, constants, and boundary conditions because we are biased to assume that all possible kinds of life will resemble life as we know it. A joint response to both worries is that, according to the fine-tuning considerations, universes with different laws, constants, and boundary conditions would typically give rise to much less structure and complexity, which would seem to make them life-hostile, irrespective of how exactly one defines “life” (Lewis & Barnes 2016: 255–274).
Victor Stenger (2011) is extremely critical of considerations according to which the laws, constants, and boundary conditions of our universe are fine-tuned. According to Stenger, the form of the laws of nature is fixed by the reasonable—very weak—requirement that they be “point-of-view-invariant” in that, as he claims, the laws “will be the same in any universe where no special point of view is present” (p. 91). Luke Barnes criticizes this claim (2012: sect. 4.1), arguing that it relies on confusingly identifying point-of-view-invariance with non-trivial symmetry properties that the laws in our universe happen to exhibit. Notably, as Barnes emphasizes, neither general relativity nor the Standard Model of elementary particle physics is without conceptually viable, though perhaps empirically disfavoured, alternatives.
A further criticism by Stenger is that considerations according to which the conditions in our universe are fine-tuned for life routinely fail to consider the consequences of varying more than one parameter at a time. In response to this criticism, Barnes (2012: sect. 4.2) gives an overview of various studies such as Barr and Khan 2007 and Tegmark et al. 2006 that explore the complete parameter space of (segments of) the Standard Model and arrives at the conclusion that the life-permitting range in multidimensional parameter space is likely very small.
Fred Adams (2019) cautions against claims that the universe is extremely fine-tuned for life. According to him, the range of parameters for which the universe would have been habitable is quite considerable. In addition, as he sees it, the universe could have been more, rather than less, life-friendly. Notably, if the vacuum energy density had been smaller, the primordial fluctuations (quantified by Q) had been larger, the baryon-to-photon ratio had been larger, the strong force had been slightly stronger, and gravity slightly weaker, there might have been more opportunities for life to develop (Adams 2019: sect. 10.3). If Adams is right, our universe may just be garden-variety habitable rather than maximally life-supporting.
1.3 Fine-Tuning in Biology
Biological organisms are fine-tuned for life in the sense that their ability to solve problems of survival and reproduction depends crucially and sensitively on specific details of their behaviour and physiology. For example, many animals rely on their visual apparatus to spot prey, predators, or potential mates. The proper functioning of their visual apparatus, in turn, depends sensitively on physiological details of their eyes and brain.
Biological fine-tuning has a long tradition of being regarded as evidence for divine design (Paley 1802), but modern biology regards it as the product of Darwinian evolution, notably as driven by natural and sexual selection. Relatively recently, some researchers have claimed that some specific “fine-tuned” features of organisms cannot possibly be the outcomes of Darwinian evolutionary development alone and that interventions by some designer must be invoked to account for them. For example, Michael Behe (1996) claims that the so-called flagellum, a bacterial organ that enables motion, is irreducibly complex in the sense that it cannot be the outcome of consecutive small-scale individual evolutionary steps, as they are allowed by standard, Darwinian, evolutionary theory. In a similar vein, William Dembski (1998) argues that some evolutionary steps hypothesized by Darwinians are so improbable that one would not rationally expect them to occur even once in the entire history of the visible universe. Behe and Dembski conclude that an intelligent designer likely intervened in the evolutionary course of events.
The overwhelming consensus in modern biology is that the challenges to Darwinian evolutionary theory brought forward by Behe, Dembski and others can be met. According to Kenneth Miller (1999), Behe’s arguments fail to establish that there are no plausible small-step evolutionary paths which have Behe’s allegedly “irreducibly complex” features as outcomes. For example, as Miller argues, there is in fact strong evidence for a Darwinian evolutionary history of the flagellum and its constituents (Miller 1999: 147–148).
2. Does Fine-Tuning for Life Require a Response?
Many researchers believe that the fine-tuning of the universe’s laws, constants, and boundary conditions for life calls for inferring the existence of a divine designer (see Section 3) or a multiverse—a vast collection of universes with differing laws, constants, and boundary conditions (see Section 4). The inference to a divine designer or a multiverse typically rests on the idea that, in view of the required fine-tuning, life-friendly conditions are in some sense highly improbable if there is only one, un-designed, universe. It is controversial, however, whether this idea can coherently be fleshed out in terms of any philosophical account of probability.
2.1 In Which Sense Are Life-Friendly Conditions Improbable?
Considerations as reviewed in Section 1.1 according to which the laws, constants and boundary conditions in our universe are fine-tuned for life are based on investigations of physical theories and their parameter spaces. It may therefore seem natural to expect that the relevant probabilities in the light of which fine-tuning for life is improbable will be physical probabilities. On closer inspection, however, it is difficult to see how this could be the case: according to the standard view of physical possibility, alternative physical laws and constants are physically impossible by the definition of physical possibility (Colyvan et al. 2005: 329). Accordingly, alternative laws and constants trivially have physical probability zero, whereas the actual laws and constants have physical probability one. If the laws and constants that physics has so far determined turned out to be merely effective laws and constants fixed by some random process in the early universe which might be governed by more fundamental physical laws, it would start to make sense to apply the concept of physical probability to those effective laws and constants (Juhl 2006: 270). However, the fine-tuning considerations as outlined in Section 1.1 do not seem to be based on speculations about any such process, so they do not seem to implicitly rely on the notion of physical probability in that sense.
Attempts to apply the notion of logical probability to fine-tuning for life are beset with difficulties as well. Critics argue that, from a logical point of view, arbitrary real numbers are possible values of the constants (McGrew et al. 2001; Colyvan et al. 2005). According to them, any probability measure over the real numbers as values of the constants that differs from the uniform measure would be arbitrary and unmotivated. The uniform measure itself, however, assigns zero probability to any finite interval. By this standard, the life-permitting range, if finite, trivially has probability zero, which would mean that life-friendly constants are highly improbable whether or not fine-tuning in the sense of Section 1.1 is required for life. This conclusion seems counterintuitive, but Koperski (2005) argues that it is not as unacceptable for proponents of the view that life-friendly conditions are improbable and require a response as it may initially seem.
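To see concretely why the uniform measure assigns zero probability to any finite interval, one can consider uniform distributions over ever larger finite cutoffs (the cutoff construction below is merely illustrative and is not drawn from the cited authors): a uniform distribution over \([-N,N]\) assigns probability
\[ P\big([a,b]\big) = \frac{b-a}{2N} \xrightarrow{\;N\to\infty\;} 0 \]
to any fixed interval \([a,b]\), so in the limit every finite life-permitting range receives probability zero, and no normalizable uniform probability distribution over all of \(\mathbb{R}\) exists.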
Motivated by the difficulties that arise in attempts to apply the physical and logical notions of probability to fine-tuning for life, contemporary accounts often appeal to an essentially epistemic notion of probability (e.g., Monton 2006; Collins 2009). According to these approaches, life-friendly conditions are improbable in that we would not rationally expect them. An obvious problem for this view is that life-friendly conditions are not literally unexpected for us: as a matter of fact, we have long been aware that the conditions are right for life in our universe, so the epistemic probability of life-friendly conditions appears to be trivially \(1\). As Monton (2006) highlights, to make sense of the idea that life-friendly conditions are improbable in an epistemic sense, we must find a way of strategically abstracting from some of our background knowledge, notably from our knowledge that life exists, and assess the probability of life’s existence from that perspective. (See Section 3.3 for further discussion.)
Views according to which life-friendly conditions are epistemically improbable face the challenge of providing reasons why we should not expect life-friendly conditions from an epistemic perspective which ignores that life exists. One response to this challenge is to point out that there is no clear systematic pattern in the actual, life-permitting, combination of values of the constants (Donoghue 2007: sect. 8), which suggests that this combination is disfavoured in terms of elegance and simplicity. Another response is to appeal to the criterion of naturalness (see Section 5), which would lead one to expect values for at least two constants of nature—the cosmological constant and the mass of the Higgs particle—which differ radically from the actual ones. Neither elegance and simplicity nor naturalness dictate any specific probability distribution over the values of the constants, however, let alone over the form of the laws itself. But proponents of the view that fine-tuning for life is epistemically improbable can appeal to these criteria to argue that life-friendly conditions will be ascribed very low probability by any probability distribution that respects these criteria.
2.2 Does Improbable Fine-Tuning Call for a Response?
Even if fine-tuned conditions are improbable in some substantive sense, it might be wisest to regard them as primitive coincidences which we have to accept without resorting to such speculative responses as divine design or a multiverse. It is indeed uncontroversial that being improbable does not by itself automatically amount to requiring a theoretical response. For example, any specific sequence of outcomes in a long series of coin tosses has low initial probability (namely, \(2^{-N}\) if the coin is fair, which approaches zero as the number \(N\) of tosses increases), but one would not reasonably regard any specific sequence of outcomes as calling for some theoretical response, e.g., a re-assessment of our initial probability assignment. The same attitude is advocated by Gould (1983) and Carlson and Olsson (1998) with respect to fine-tuning for life. Leslie concedes that improbable events do not in general call for an explanation, but he argues that the availability of reasonable candidate explanations of fine-tuning for life—namely, the design hypothesis and the multiverse hypothesis—suggests that we should not “dismiss it as how things just happen to be” (Leslie 1989: 10). Views similar to Leslie’s are defended by van Inwagen (1993), Bostrom (2002: 23–41), and Manson and Thrush (2003: 78–82).
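To put a number on the coin-toss example above (the choice of \(N\) is merely illustrative): already for \(N=100\) tosses, each specific sequence has initial probability
\[ 2^{-100} \approx 7.9\times 10^{-31}, \]
far below the probabilities typically invoked in fine-tuning considerations, yet observing some such specific sequence plainly calls for no theoretical response.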
Cory Juhl (2006) argues along independent lines that we should not regard fine-tuning for life as calling for a response. According to Juhl, forms of life are plausibly “causally ramified” in that they “causally depend, for [their] existence, on a large and diverse collection of logically independent facts” (2006: 271). He argues that one would expect “causally ramified” phenomena to depend sensitively on the values of potentially relevant parameters such as, in the case of life, the values of the constants and boundary conditions. According to him, fine-tuning for life therefore does not require “exotic explanations involving super-Beings or super-universes” (2006: 273).
The sense in which fine-tuning for life fails to be surprising according to Juhl differs from the sense in which it is surprising according to authors such as Leslie, van Inwagen, Bostrom, Manson and Thrush: while the latter hold that life-friendly conditions are rationally unexpected from an epistemic point of view which sets aside our knowledge that life exists, Juhl holds that—given our knowledge that life exists and is causally ramified—it is unsurprising that life depends sensitively, for its existence, on the constants and boundary conditions.
2.3 Avoiding Fine-Tuning for Life Through New Physics?
Biological fine-tuning for survival and reproduction, as marvellous as it often appears, is regarded as unmysterious by biologists because evolution as driven by natural and sexual selection can generate it (see Section 1.3). One may hope that, similarly, future developments in fundamental physics will reveal principles or mechanisms which explain the life-friendly conditions in our universe.
There are two different types of scenarios of how future developments in physics could realize this hope: first, physicists may hit upon a so-called theory of everything according to which, as envisaged by Albert Einstein,
nature is so constituted that it is possible logically to lay down such strongly determined laws that within these laws only rationally completely determined constants occur (not constants, therefore, whose numerical values could be changed without destroying the theory). (Einstein 1949: 63)
Einstein’s idea is that, ultimately, the laws and constants of physics will turn out to be dictated completely by fundamental general principles. This would make considerations about alternative laws and constants obsolete, and thereby undermine any perspective according to which these are fine-tuned for life.
Unfortunately, developments in the last few decades have not been kind to hopes of the sort expressed by Einstein. In the eyes of many physicists, string theory is still the most promising candidate “theory of everything” in that it potentially offers a unified account of all known forces of nature, including gravity. (See Susskind 2005 for a popular introduction, Rickles 2014 for a philosopher’s historical account and Dawid 2013 for a recent, favourable, methodological appraisal.) But according to our present understanding of string theory, the theory has an enormous number of lowest energy states, or vacua, which would manifest themselves at the empirical level in terms of radically different effective physical laws and different values of the constants. These would be the laws and constants that we have empirical access to, and so string theory would not come close to uniquely determining the laws and constants in the manner envisaged by Einstein.
A second type of scenario according to which future developments in physics may eliminate at least some fine-tuning for life would be a dynamical account of the generation of life-friendly conditions, in analogy to the Darwinian “dynamical” evolutionary account of biological fine-tuning for survival and reproduction. Inflationary cosmology (Guth 1981, 2000) is a paradigmatic example of such an account in that it dynamically explains why the total cosmic energy density \(\rho\) in the early universe is extremely close to the so-called critical value \(\rho_c\) (see Section 1.1)—or, equivalently, why the overall spatial curvature of the universe is close to zero. According to inflationary cosmology, the very early universe undergoes a period of exponential or near-exponential expansion (“inflation”) which effectively flattens out space and results in near-zero post-inflation curvature, leading to a total energy density \(\rho\) extremely close to the critical density \(\rho_c\). Further claimed achievements of inflationary cosmology include its ability to account for the observed near-perfect isotropy of the universe and the absence of magnetic monopoles. The strongest empirical support for inflationary cosmology, however, is now widely believed to come from its apparently correct predictions of the shape of the cosmic microwave background fluctuations (PLANCK collaboration 2014).
Inflationary cosmology’s attractions notwithstanding, its suggested achievements are not universally recognized. (See Steinhardt & Turok [2008] for harsh criticism by two eminent cosmologists and Earman & Mosterín [1999] and McCoy [2015] for critical appraisals by philosophers.) However, even if its dynamical accounts of the flatness, isotropy, and absence of magnetic monopoles in the early universe are correct, there is little reason to accept that similar accounts will be forthcoming for many other constants, boundary conditions, or even laws of nature that seem fine-tuned for life: whereas, notably, the critical energy density \(\rho_c\) has independently specifiable dynamical properties that characterize it as a systematically distinguished value of the energy density \(\rho\), the actual values of most other constants and parameters that characterize boundary conditions are not similarly distinguished and do not form any clear systematic pattern (Donoghue 2007: sect. 8). This makes it difficult to imagine that future physical theories will indeed reveal dynamical mechanisms which inevitably lead to these values (Lewis & Barnes 2016: 181f.).
3. Fine-Tuning and Design
A classic response to the observation that the conditions in our universe seem fine-tuned for life is to infer the existence of a cosmic designer who created life-friendly conditions. If one identifies this designer with some supernatural agent or God, the inference from fine-tuning for life to the existence of a designer becomes a version of the teleological argument. Indeed, many regard the argument from fine-tuning for a designer as the strongest version of the teleological argument that contemporary science affords.
3.1 The Argument from Fine-Tuning for Design Using Probabilities
Expositions of the argument from fine-tuning for design are typically couched in terms of probabilities (e.g., Holder 2002; Craig 2003; Swinburne 2004; Collins 2009); see also the review Manson 2009. An elementary Bayesian formulation considers the rational impact of the observation \(R\)—that the constants (and laws and boundary conditions) are right for life—on our degree of belief concerning the design hypothesis \(D\)—that there is a cosmic designer. According to standard Bayesian conditioning, our posterior degree of belief \(P^+(D)\) after taking into account \(R\) is given by our prior conditional degree of belief \(P(D\mid R)\). Analogously, our posterior \(P^+(\neg D)\) that there is no cosmic designer is given by our prior conditional degree of belief \(P(\neg D\mid R)\). By Bayes’ theorem, the ratio between the two posteriors is
\[\begin{equation} \label{simpledes} \frac{P^+(D)}{P^+(\neg D)} = \frac{P(D\mid R)}{P(\neg D\mid R)} = \frac{P(R\mid D)}{P(R\mid \neg D)} \frac{P(D)}{P(\neg D)}\,. \end{equation} \]Proponents of the argument from fine-tuning for design argue that, in view of the required fine-tuning, life-friendly conditions are highly improbable if there is no divine designer; see Barnes 2020 for a careful case for this claim. Thus, the conditional probability \(P(R\mid \neg D)\) should be set close to zero. In contrast, it is highly likely according to them that the constants are right for life if there is indeed a designer. Thus the conditional probability \(P(R\mid D)\) should be given a value not far from \(1\). If a sufficiently powerful divine being exists—the idea goes—it is only to be expected that she/he will be interested in creating, or at least enabling, intelligent life, which means that we can expect the constants to be right for life on that assumption. This motivates the likelihood inequality
\[\begin{equation} P(R\mid D) > P(R\mid \neg D), \label{likelihoods} \end{equation} \]which expresses that life-friendly conditions confirm the designer hypothesis and which likelihoodists such as Sober (2003) regard as at the core of the argument from fine-tuning for design.
Bayesians focus not only on likelihoods but also on priors and posteriors, and in their eyes the crucial significance of the inequality \(\eqref{likelihoods}\) is that it leads to a ratio \(P^+(D)/P^+(\neg D)\) of posteriors that is much larger than the ratio \(P(D)/P(\neg D)\) of the priors. Whether belief in a designer is rational depends ultimately on the priors as well, but unless those values dramatically favour \(\neg D\) over \(D\) in that \(P(\neg D)\gg P(D)\), the posteriors will favour design in that \(P^+(D)> P^+(\neg D)\). Bayesian proponents of the argument from fine-tuning for design conclude that our degree of belief in the existence of some divine designer should be greater than \(1/2\) in view of the fact that there is life, given the required fine-tuning.
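To illustrate how \(\eqref{simpledes}\) operates, consider purely hypothetical numerical assignments (the numbers below are for illustration only and are not endorsed by the authors cited): with \(P(R\mid D)=0.5\), \(P(R\mid \neg D)=10^{-10}\), and a strongly design-sceptical prior ratio \(P(D)/P(\neg D)=10^{-6}\),
\[ \frac{P^+(D)}{P^+(\neg D)} = \frac{0.5}{10^{-10}}\times 10^{-6} = 5\times 10^{3}, \]
so the likelihood ratio swamps the sceptical priors and the posteriors come out strongly favouring \(D\).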
3.2 The Anthropic Objection
We could not possibly have existed in conditions that are incompatible with the existence of observers. The famous weak anthropic principle (WAP) (Carter 1974) suggests that this apparently trivial point may have important consequences:
[W]e must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers. (Carter 1974: 293, emphasis due to Carter)
Our methods of empirical observation are unavoidably biased towards detecting conditions which are compatible with the existence of observers. For example, even if life-hostile places vastly outnumber life-friendly places in our universe, we should neither be surprised to find ourselves in one of the relatively few places that are life-friendly nor seek an explanation for this finding, simply because—in virtue of being living organisms—we could not possibly have found ourselves in a life-hostile place. Biases that result from the fact that what we observe must be compatible with the existence of observers are referred to as observation selection effects. The observation selection effects that the weak anthropic principle highlights with respect to our location in the universe are highlighted by what Carter dubs the strong anthropic principle (SAP) with respect to the universe as a whole:
[T]he Universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. (Carter 1974: 294)
Carter’s formulation of the SAP has led some authors, most influentially Barrow and Tipler (1986), to misinterpret it along teleological lines and as thereby categorically different from the WAP. But, as Carter himself highlights (1983: 352), see also Leslie (1989: 135–145), the SAP is meant to highlight exactly the same type of bias as the WAP and is literally stronger than the WAP only when conjoined with a version of the multiverse hypothesis.
The so-called anthropic objection against the argument from fine-tuning for design argues that that argument breaks down once our biasedness due to the observation selection effects emphasized by the weak and strong anthropic principles is taken into account. Elliott Sober (2003, 2009) advocates this objection. According to him, the argument from fine-tuning for design requires not the likelihood inequality \(\eqref{likelihoods}\) but the much more problematic
\[\begin{equation} P(R\mid D,\textit{OSE}) > P(R\mid \neg D,\textit{OSE})\,, \label{ose} \end{equation} \]where “OSE” stands for “observation selection effect”. Sober himself spells out OSE as “We exist, and if we exist, the constants must be right” (2003: 44). According to this interpretation, \(\eqref{ose}\) is patently false: our existence as living organisms entails that the constants are right for life, which means that the terms on both sides of \(\eqref{ose}\) are trivially \(1\) and hence equal.
Critics of the anthropic objection argue that Sober’s reasoning delivers highly implausible results when transferred to examples where rational inferences are less controversial. Most famous is Leslie’s firing squad (Leslie 1989: 13f.), in which a prisoner expects to be executed by a firing squad but, to his own surprise, finds himself alive after all the marksmen have fired and wonders whether they intended to miss. The firing squad scenario involves an observation selection effect because the prisoner cannot contemplate his post-execution situation unless he somehow survives the execution. His observations, in other words, are “biased” towards finding himself alive (see Juhl [2007] and Kotzen [2012] for further useful examples). Sober’s analysis, applied to the firing squad scenario, suggests that it would not be rational for the prisoner to suspect that the marksmen intended to miss (unless independent evidence suggests so) because that would mean overlooking the observation selection effect that he faces. But, as Leslie, Weisberg (2005) and Kotzen (2012) argue, this recommendation seems very implausible.
According to Weisberg, Sober’s analysis fails due to its incorrect identification of the observation selection effect OSE with “We exist, and if we exist, the constants must be right”. Weisberg argues that the weaker, purely conditional, statement “If we exist, the constants must be right” (Weisberg 2005: 819, Weisberg’s wording differs) suffices to capture the observation selection effect. But if we interpret “OSE” as this statement, there is no reason to suppose that the inequality \(\eqref{ose}\) fails and the argument from fine-tuning for design appears vindicated inasmuch as the anthropic objection is concerned. (See Sober 2009 for Sober’s response.)
To resolve the difficulty of accommodating observation selection effects in likelihood arguments, Kotzen (2012) suggests that bias due to such effects be taken into account as evidence rather than background information. Notably, instead of \(\eqref{ose}\) Kotzen proposes to consider
\[\begin{equation} P(R,I\mid D) > P(R,I\mid \neg D)\,, \label{ose_kotzen} \end{equation} \]where \(I\) contains information about the observation process, including observation selection effects (Kotzen 2012: 835). According to this analysis, the argument from fine-tuning for design can be saved from the anthropic objection for a variety of ways to spell out the information \(I\) about the observation process and anthropic bias.
3.3 Life-Friendly Conditions as Old Evidence
Views according to which life-friendly conditions are improbable in an epistemic sense due to the required fine-tuning are challenged to come to terms with the fact that we have long known that our universe is life-friendly, which means that life-friendly conditions are not literally unexpected for us. As a consequence of this fact, the Bayesian version of the argument from fine-tuning for a designer as outlined in Section 3.1 must adopt some solution to Bayesianism’s notorious problem of old evidence (Glymour 1980) because \(R\)—that the constants are right for life—is inevitably old evidence for us.
An obvious choice, endorsed by Monton (2006), who is critical of the argument from fine-tuning for design, and Collins (2009), who supports it, is the so-called counterfactual or ur-probability solution to the problem of old evidence, as defended by Howson (1991). The main advantage of this solution as applied to the argument from fine-tuning for design is that it allows one to preserve the argument essentially intact, including \(\eqref{simpledes}\) and \(\eqref{likelihoods}\), with the sole refinement that one must consistently construe all prior probabilities \(P(\cdot)\), conditional and unconditional, as “ur-probabilities”, i.e., rational credences of some counterfactual epistemic agent who is unaware that the constants are right for life. Somewhat bizarrely, as Monton points out (2006: 416), such an agent would have to be at least temporarily unaware of her/his existence (or at least her/his existence as a form of life) because otherwise she/he could not possibly be unaware that the conditions are right for life. Tentative suggestions concerning the background knowledge that can reasonably be ascribed to such an agent are developed by Monton (2006: sect. 4) and Collins (2009: sect. 4.3).
An advantage of approaching the argument from fine-tuning for design using the ur-probability solution is that it offers proponents of the argument a clear-cut rejection of the anthropic objection: as in Kotzen’s (2012) approach, the fact that we exist is treated not as background knowledge but as evidence taken into account by Bayesian conditioning. The appropriate comparison between likelihoods to consider is thus not Sober’s \(\eqref{ose}\)—at least not under Sober’s own interpretation of “OSE” as including “We exist”—but rather \(\eqref{likelihoods}\) or \(\eqref{ose_kotzen}\), both of which evade the anthropic objection.
3.4 Can We Expect a Designer to Design?
The likelihood inequality \(\eqref{likelihoods}\) on which the fine-tuning argument for design rests is based on the assumption that, reasonably, \(P(R\mid \neg D)\) is very small because life-friendly conditions are improbable if there is no designer. This assumption can be challenged, as already discussed in Section 2.1. But the likelihood inequality \(\eqref{likelihoods}\) also rests on the assumption that \(P(R\mid D)\) is comparatively large, i.e., on the view that, if there is indeed a designer, life-friendly conditions are more to be expected than if there is no designer. This assumption can be challenged as well.
Reasonable assignments of \(P(R\mid D)\) depend on how exactly the designer hypothesis \(D\) is spelled out. According to Swinburne, the most promising candidate designer is “the God of traditional theism” whom he characterizes as “a being essentially eternal, omnipotent (in the sense that He can do anything logically possible), omniscient, perfectly free, and perfectly good” (2003: 107). Swinburne argues that we can be at least moderately confident that the God of traditional theism, if he exists, “will bring about an orderly, spatially extended, world in which humans have a location” (2003: 113; note that Swinburne operates with a generalized, non-biological concept of “humans”). Hence, according to Swinburne, life-friendly conditions, conditional on the existence of the God of traditional theism, do not have very low probability, i.e. \(P(R\mid D)\), plausibly, is not many orders of magnitude smaller than \(1\). According to Rota (2016: 119f.), even if we assign to \(P(R\mid D)\) a value as low as \(1\) in a billion, this suffices for the fine-tuning argument for a divine designer to be strong, simply because, in view of the fine-tuning considerations, life-friendly conditions in the absence of a designer are so utterly unexpected. A similar point is made by Hawthorne and Isaacs (2018).
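Rota’s point can be illustrated numerically (the value of \(P(R\mid \neg D)\) below is hypothetical, chosen merely to reflect the alleged extreme improbability of life-friendly conditions in the absence of a designer): with \(P(R\mid D)=10^{-9}\) and \(P(R\mid \neg D)=10^{-50}\), the likelihood ratio in \(\eqref{simpledes}\) is
\[ \frac{P(R\mid D)}{P(R\mid \neg D)} = \frac{10^{-9}}{10^{-50}} = 10^{41}, \]
large enough to dominate all but the most extreme priors against design.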
Criticisms of the view that life-friendly constants are to be expected if there is a designer have a long tradition and go back to John Venn (1866) and John Maynard Keynes (1921). More recently, Sober has voiced general reservations about our abilities to competently judge what a divine designer, if real, would do:
Our judgements about what counts as a sign of intelligent design must be based on empirical information about what designers often do and what they rarely do. As of now, these judgements are based on our knowledge of human intelligence. The more our hypotheses of intelligent designers depart from the human case, the more in the dark we are as to what the ground rules are for inferring intelligent design. (Sober 2003: 38)
In a similar spirit, Narveson complains that we are in no position to predict how a cosmic designer would behave because “[b]odiless minded super-creators are a category that is way, way out of control” (Narveson 2003: 99). According to Sober and Narveson it is particularly problematic for theists to confidently assume that God, if she/he exists, would create life-friendly conditions and, at the same time, react to the problem of evil by highlighting our inability to understand “the mysterious ways of the Deity” (Narveson 2003: 99). Manson (2020) provides an updated defense of the position of Sober and Narveson against the arguments of Rota (2016) as well as Hawthorne and Isaacs (2018).
One can construct versions of the designer hypothesis \(D\) that are tailored to fulfil the likelihood inequality \(\eqref{likelihoods}\) by defining the designer as a being with both the intention and ability to create life-friendly conditions. However, one may question whether such tailored versions of the designer hypothesis have sufficient independent motivation and plausibility to deserve serious consideration in the first place. To use Bayesian terms, one may hesitate to ascribe them non-negligible prior probabilities \(P(D)\).
Motivating a non-negligible prior \(P(D)\) for design is especially challenging in the framework of the ur-probability solution to the problem of old evidence because it constrains the background evidence to facts that do not entail the existence of life. Collins argues that if we focus only on a limited class of constants \(C\), the background evidence that we can use to motivate the prior \(P(D)\) is allowed to “includ[e] the initial conditions of the universe, the laws of physics, and the values of all the other constants except C”. But appeals to the sacred texts of religions cannot be used to motivate the ascription of a non-negligible ur-prior \(P(D)\) because they presuppose, and thus entail, the existence of life. Notably, as pointed out by Monton, “[i]n formulating an urprobability for the existence of God, one cannot take into account Biblical accounts about Jesus” (2006: 418). According to Monton (2006: 419), proponents of the argument from fine-tuning for design may, however, try to motivate a non-negligible ur-prior \(P(D)\) by resorting to arguments for the existence of God that are either a priori, e.g., the ontological argument, or appeal only to very general empirical facts that do not entail that the conditions are right for life, e.g., the cosmological argument. According to Swinburne (2004: ch. 5), the hypothesis of traditional theism is a simple one and, as such, warrants the ascription of a non-negligible prior.
3.5 An Alternative Argument from Fine-Tuning for Design
The argument from fine-tuning for design as reviewed in Section 3.1 treats the fact that life requires fine-tuned conditions as background knowledge and assesses the evidential significance of the observation that life-friendly conditions obtain against that background. An alternative argument from fine-tuning for design, explored by John Roberts (2012) and independently investigated and endorsed by Roger White (2011) in a reply to Weisberg (2010), treats our knowledge that the conditions are right for life as background information and assesses the rational impact of physicists’ insight that life requires fine-tuned conditions against this background. An advantage of this alternative is that it fits better with our actual epistemic situation: that the conditions are right for life is something we have known for a long time; our actual new evidence is that the laws of physics—as White (2011) and Weisberg (2012) put it—are stringent rather than lax in the constraints that they impose on the constants and boundary conditions if there is to be life.
The central likelihood inequality around which White’s version of the argument revolves is
\[\begin{equation} P(S\mid D,O) > P(S\mid \neg D,O)\,, \label{alternative} \end{equation} \]where “\(D\)” is, again, the designer hypothesis, “\(S\)” is the proposition that the laws are stringent (i.e., that life requires delicate fine-tuning of the constants) and “\(O\)” is our background knowledge that life exists (White 2011: 678). (See Roberts [2012: 296] for an assumption that plays an analogous role as \(\eqref{alternative}\).) The inequality \(\eqref{alternative}\) expresses the statement that stringent laws confirm the designer hypothesis, given our background knowledge that life exists. Does it plausibly hold for reasonable probability assignments? White argues that it does and supports this claim by giving a rigorous derivation of \(\eqref{alternative}\) from assumptions that he regards as plausible. Crucial among them is the inequality
\[ \begin{equation} P(D\mid S) \ge P(D\mid \neg S)\,, \label{alternative1} \end{equation} \]which White motivates by arguing that “the fact that the laws put stringent conditions on life does not by itself provide any evidence against design” (White 2011: 678). Put differently, according to White, absent information that life exists, information that the laws are stringent does at least not speak against the existence of a designer.
Weisberg (2012) criticizes \(\eqref{alternative1}\)—and takes his criticism to undermine \(\eqref{alternative}\)—arguing that it is implausible by the design theorist’s own standards. The design theorist holds a combination of views according to which, on the one hand, life is more probable if there is a designer than if there is no designer and, on the other hand, life is less probable if the laws are stringent rather than lax. If one adds to this combination of views the assumption that none of the possible life-friendly conditions has higher probability than the others, both if there is a designer and if there is no designer, it dictates that—bracketing knowledge that life exists—stringent laws speak against the existence of a designer, i.e., it dictates \(P(D\mid S) < P(D\mid \neg S)\), contrary to \(\eqref{alternative1}\). Absent any evidence that life exists, evidence that the laws are stringent speaks against the existence of life in that stringent laws make life unexpected.
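Weisberg’s point can be made concrete with a toy model (a sketch under the stated indifference assumptions, not Weisberg’s own formalism): suppose each law-type, stringent (\(S\)) or lax (\(\neg S\)), admits \(n\) possible settings of the constants, of which \(k\) are life-permitting under stringent laws and \(K \gg k\) under lax ones. If a designer is certain to realize some life-permitting package but is indifferent among them, while without a designer all \(2n\) packages are equiprobable, then
\[ P(S\mid D) = \frac{k}{k+K} < \frac{1}{2} = P(S\mid \neg D), \]
so stringent laws are evidence against design, and Bayes’ theorem yields \(P(D\mid S) < P(D\mid \neg S)\), contrary to \(\eqref{alternative1}\).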
A possible response for the design theorist, anticipated by Weisberg (2012: 713), would be to support \(\eqref{alternative1}\) by arguing that the designer would plausibly first choose either stringent or lax laws, sidestepping her intention to enable the existence of life at that stage or actively preferring stringent laws, and only then choose life-friendly constants. A problem with this response, similar to the difficulties discussed in Section 3.4, is that we have little experience with cosmic designers and, therefore, difficulty predicting the hypothesized designer’s preferences and likely actions.
4. Fine-Tuning and the Multiverse
According to the multiverse hypothesis, there are multiple universes, some of them radically different from our own. Many of those who believe that fine-tuning for life requires some theoretical response regard the multiverse hypothesis as the main alternative to the designer hypothesis. The idea that underlies it is that, if there is a sufficiently diverse multiverse in which the conditions differ between universes, it is only to be expected that there is at least one universe where they are right for life. As the strong anthropic principle highlights (see Section 3.2), the universe in which we, as observers, find ourselves must be one where the conditions are compatible with the existence of observers. This suggests that, on the assumption that there is a sufficiently diverse multiverse, it is neither surprising that there is at least one universe that is hospitable to life nor—since we could not have found ourselves in a life-hostile universe—that we find ourselves in a life-friendly one. Many physicists (e.g., Susskind [2005], Greene [2011], Tegmark [2014]) and philosophers (e.g., Leslie [1989], Smart [1989], Parfit [1998], Bradley [2009]) regard this line of thought as suggesting the inference to a multiverse as a rational response to the finding that the conditions are right for life in our universe despite the required fine-tuning.
4.1 The Argument from Fine-Tuning for the Multiverse as an Inference to an Anthropic Explanation?
The argument from fine-tuning for the multiverse as just sketched is sometimes characterized as an inference to the multiverse as the best explanation of fine-tuning for life—an explanation which, in view of its appeal to anthropic reasoning, is sometimes characterized as “anthropic” (e.g., Leslie 1986, 1989: ch. 6; McMullin 1993: 376f., sect. 7; Bostrom 2002). It is controversial, however, whether this characterization is adequate. A paradigmatic anthropic “explanation”, characterized as such by Carter in the seminal paper (1974) that introduces the anthropic principles, is astrophysicist Robert Dicke’s (1961) account of coincidences between large numbers in cosmology. A prominent example of such a coincidence is that the relative strength of electromagnetism and gravity as acting on an electron/proton pair is of roughly the same order of magnitude (namely, \(10^{40}\)) as the age of the universe, measured in natural units of atomic physics. Impressed by this and other coincidences, Dirac (1938) speculated that they might hold universally and as a matter of physical principle. He conjectured that the strength of gravity may decrease as the age of the universe increases, which would indeed make it possible for the coincidence to hold across all cosmic times.
Dicke (1961), criticizing Dirac, argues that standard cosmology with time-independent gravity suffices to account for the coincidence, provided that we take into account the fact that our existence is tied to the presence of main sequence stars like the sun and of various chemical elements produced in supernovae. As Dicke shows, this requirement dictates that we could only have found ourselves in that cosmic period in which the coincidence holds. Accordingly, contrary to Dirac, there is no reason to assume that gravity varies with time to make the coincidence unsurprising. Carter (1974) and Leslie (1986, 1989: ch. 6) describe Dicke’s account as an “anthropic explanation” of the coincidence that impressed Dirac, and Leslie discusses it alongside the argument from fine-tuning for the multiverse. (Earman [1987: 309], however, disputes that Dicke’s account is adequately characterized as an “explanation”.) But whereas Dicke’s account of the coincidence uses life’s existence as background knowledge to show that standard cosmology suffices to make the coincidence expectable, the standard argument from fine-tuning for the multiverse, as reviewed in what follows, treats life’s existence as requiring a theoretical response (rather than as background knowledge) and advocates the multiverse hypothesis as the best such response. Friederich (2019b; 2021: ch. 6) outlines how to set up the fine-tuning argument for the multiverse so that it uses anthropic reasoning analogously to the Dicke/Carter accounts of large number coincidences.
4.2 The Argument from Fine-Tuning for the Multiverse Using Probabilities
More often than as an inference to the best explanation, the argument from fine-tuning for the multiverse is formulated using probabilities, in analogy to the argument from fine-tuning for design (see Section 3.1). In a simple version of the argument, reasonable probability assignments are compared for a single-universe hypothesis \(U\) (where the universe has uniform laws and constants) and a rival multiverse hypothesis \(M\) according to which there are many universes with conditions that differ between universes. (For the purposes of the discussion about fine-tuning for life, hypotheses according to which there is only a single universe with constants that vary across space-time qualify as versions of the multiverse hypothesis. They seem to be disfavoured by the available evidence, however; see Uzan 2003 for a review.)
As in Bradley 2009, we consider as the fine-tuning evidence the proposition \(R\) that there is (at least) one universe with the right constants for life. Using Bayesian conditioning and Bayes’ theorem one obtains for the ratio of the posteriors
\[ \begin{equation} \frac{P^+(M)}{P^+(U)} = \frac{P(M\mid R)}{P(U\mid R)} = \frac{P(R\mid M)}{P(R\mid U)} \frac{P(M)}{P(U)}\,. \label{simplemult} \end{equation} \]If the multiverse according to \(M\) is sufficiently vast and varied, life unavoidably appears somewhere in it, so the conditional prior \(P(R\mid M)\) must be \(1\) (or very close to \(1\)). If we assume that, on the assumption that there is only a single universe, it is improbable that it has the right conditions for life (see Section 2.1 for discussion), the conditional prior \(P(R\mid U)\) must be much smaller than \(1\). This gives \(P(R\mid M) \gg P(R\mid U)\), which entails \(P(R\mid M)/P(R\mid U)\gg1\), which in turn entails a ratio of posteriors that is much larger than the ratio of the priors: \(\frac{P(M\mid R)}{P(U\mid R)}\gg\frac{P(M)}{P(U)}\). Unless we have prior reasons to dramatically prefer a single universe over the multiverse, i.e., unless \(P(U)\gg P(M)\), the ratio of the posteriors \(\frac{P^+(M)}{P^+(U)}\) will be larger than \(1\).
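A purely illustrative numerical instance of \(\eqref{simplemult}\) (hypothetical numbers, analogous to the illustration given for the design argument above): with \(P(R\mid M)=1\), \(P(R\mid U)=10^{-10}\), and a multiverse-sceptical prior ratio \(P(M)/P(U)=10^{-3}\),
\[ \frac{P^+(M)}{P^+(U)} = \frac{1}{10^{-10}}\times 10^{-3} = 10^{7}, \]
so even substantial prior scepticism about the multiverse is overwhelmed by the likelihood ratio.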
Just like the argument from fine-tuning for design, the argument from fine-tuning for the multiverse must come to terms with the problem that the existence of life is old evidence for us. If one applies Howson’s ur-probability solution to it, one must consistently interpret all the probabilities in equation \(\eqref{simplemult}\) as assigned from the perspective of a counterfactual epistemic agent who is unaware of her/his own existence. At least prima facie, it is unclear what background knowledge can be assumed for an agent in that curious condition (see Section 3.3 for considerations). Juhl (2007) speculates that motivating a non-negligible prior \(P(M)\) is impossible without implicitly relying on evidence which entails that the conditions are right for life. If this is correct, it means that running the fine-tuning argument for the multiverse as in equation \(\eqref{simplemult}\) based on an empirically well motivated non-negligible prior \(P(M)\) would inevitably involve fallacious double-counting (“double-dipping”, as Juhl calls it (2007: 554)) of the fine-tuning evidence \(R\).
4.3 The Inverse Gambler’s Fallacy Charge
The inverse gambler’s fallacy, identified by Ian Hacking (1987), consists in inferring from an event with a remarkable outcome that there have likely been many more events of the same type in the past, most with less remarkable outcomes. For example, the inverse gambler’s fallacy is committed by someone who enters a casino and, upon witnessing a remarkable outcome at the nearest table—say, a five-fold six in a quintuple die toss—concludes that the toss is most likely part of a large sequence of tosses. Critics of the argument from fine-tuning for the multiverse accuse it of committing the inverse gambler’s fallacy. According to them, the argument commits this fallacy by, as White puts it,
supposing that the existence of many other universes makes it more likely that this one—the only one that we have observed—will be life-permitting. (White 2000: 263)
Versions of this criticism are endorsed by Draper et al. (2007) and Landsman (2016). Hacking (1987) himself regards as guilty of the inverse gambler’s fallacy only those versions of the argument from fine-tuning for the multiverse that infer the existence of multiple universes arranged in a temporal sequence.
Adherents of the inverse gambler’s fallacy charge against the argument from fine-tuning for the multiverse object to its focus on the impact of the proposition \(R\)—that the conditions are right for life in some universe. According to them, we should instead consider the impact of the more specific proposition \(H\): that the conditions are right for life here, in this universe. If we replace \(R\) by \(H\), they argue, it becomes clear that the argument breaks down, because the existence of other universes does not raise the probability that this universe is life-friendly.
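The contrast between \(R\) and \(H\) can be made vivid with a minimal Monte Carlo sketch of the casino case (illustrative only; for the sake of a cheap simulation it takes a double six on two dice, rather than five sixes, as the “remarkable” outcome):

```python
import random

P_REMARKABLE = 1 / 36  # chance of a double six on one toss of two dice

def estimate(n_tosses, trials=100_000):
    """Estimate, for a session of n_tosses tosses:
    p_this: P(the one toss we actually witness, say the last, is a double six)
    p_some: P(at least one toss in the session is a double six)
    """
    this_count = some_count = 0
    for _ in range(trials):
        tosses = [random.random() < P_REMARKABLE for _ in range(n_tosses)]
        this_count += tosses[-1]   # analog of H: *this* toss is remarkable
        some_count += any(tosses)  # analog of R: *some* toss is remarkable
    return this_count / trials, some_count / trials

for n in (1, 10, 100):
    p_this, p_some = estimate(n)
    print(f"{n:3d} tosses: p_this ~ {p_this:.3f}, p_some ~ {p_some:.3f}")
```

The estimate of p_some climbs towards \(1\) as the number of tosses grows, while p_this stays at about \(1/36\) throughout: many tosses make it likely that some toss is remarkable but do nothing to make this toss remarkable. The dispute reviewed here is precisely over which of the two quantities models the evidence we are entitled to update on.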
Many philosophers defend the argument from fine-tuning for the multiverse against this objection (McGrath 1988; Leslie 1988; Bostrom 2002; Manson & Thrush 2003; Juhl 2005; Bradley 2009; Epstein 2017). In an early response to Hacking, McGrath (1988) argues that the analogy between the argument from fine-tuning for the multiverse and a person who randomly enters a casino and witnesses a remarkable outcome is misleading: while the person entering a casino could have found any arbitrary outcome, we could not have found ourselves in a universe with conditions that are not right for life. The appropriate analogy to consider, according to McGrath, involves someone who is allowed to enter the casino only if and when some specific remarkable outcome occurs and who, upon being called in and finding that this outcome has occurred, infers the existence of other trials in the past. In that scenario, the inference to multiple trials (in the past) is indeed rational, and so, according to McGrath, is the inference from fine-tuned conditions to multiple universes.
The adequacy of McGrath’s casino analogy is contested as well. Whereas in McGrath’s analogy the epistemic agent waits outside the casino until the remarkable outcome occurs and she/he is called in, “it is not as though we were disembodied spirits waiting for a big bang to produce some universe which could accommodate us”, as White puts it (2000: 268). Epstein (2017: 653) retorts that “it is also not as though we were disembodied spirits keenly observing [the universe] ɑ—our designated potential home—and hoping that it, in particular, would be able to accommodate us.” Epstein’s diagnosis is that the inverse gambler’s fallacy charge rests on the Requirement of Total Evidence in Bayesianism, which, according to him, is to be rejected in cases like these. Draper (2020) as well as Barrett and Sober (2020) defend the Requirement of Total Evidence and, in doing so, attack Epstein’s criticism of the inverse gambler’s fallacy charge. Bradley (2009) offers further casino analogies beyond those considered by Hacking, McGrath and White which, according to him, speak in favour of rejecting the inverse gambler’s fallacy charge, but White’s diagnosis continues to find support, e.g., from Landsman (2016). Friederich (2019a; 2021: ch. 4) suggests that the question of whether we can rationally infer a multiverse from fine-tuning for life differs so much from the questions encountered in more familiar contexts, such as the casino scenarios, that accepted rationality criteria may deliver no determinate verdict on whether the inference commits the inverse gambler’s fallacy.
4.4 Independently Motivating and Testing the Multiverse Hypothesis?
As just outlined, it is controversial whether it is rational to infer the existence of multiple universes from our universe’s fine-tuning for life. However, if we had strong independent evidence for other universes with life-hostile conditions, attempts to account for why our own universe is life-friendly would most likely seem futile. Thus independent evidence for some multiverse scenario could have a strong impact on what we regard as a rational response to fine-tuning for life. Proponents of the argument from fine-tuning for the multiverse could moreover welcome such evidence as potentially helping to motivate a non-negligible prior \(P(M)\) for the multiverse.
Many physicists nowadays believe that a specific version of the multiverse hypothesis is indeed suggested by contemporary developments in fundamental physics, notably by the combination of inflationary cosmology and string theory, both of which have been introduced in Section 2.3. According to many advocates of inflationary cosmology, the process of inflation results in causally isolated space-time regions, so-called “island universes”. This process is in general “eternal” in that the formation of island universes never ends. As a result, it leads to the production of a vast (and, according to most models, infinite) “multiverse” of island universes (Guth 2000).
As remarked in Section 2.3, string theory has an enormous number of lowest-energy states (vacua), which would manifest themselves at the level of observations and experiments in terms of different higher-level physical laws and values of the constants. When this is combined with the idea of island universes suggested by inflationary cosmology, one obtains a cosmological picture in which there are infinitely many island universes and all the different string theory vacua—corresponding to different higher-level physical laws and constants in these laws—are actually realized in some of them. This so-called landscape multiverse qualifies as a concrete multiverse scenario in the sense of the argument from fine-tuning for the multiverse. A necessary condition is of course that the collection of island universes that make up the landscape multiverse includes, as is widely believed to be the case, at least one universe with the same effective (higher-level) laws and constants as our own.
Unfortunately, concrete multiverse scenarios such as the landscape multiverse are extremely difficult to test, precisely because they entail that different universes exhibit very different conditions. The broad consensus in the literature on multiverse cosmology is that, in order for a multiverse scenario to qualify as empirically confirmed, it must entail that the conditions we find in our own universe are typical among those found by observers across the multiverse. Widely used formulations of typicality are Vilenkin’s principle of mediocrity (Vilenkin 1995) and Bostrom’s self-sampling assumption (Bostrom 2002). Typicality principles can be regarded as refinements of the anthropic principles (Bostrom 2002) in the form of indifference principles of self-locating belief (Elga 2004): inasmuch as we are ignorant about who and where among observers we are, they recommend reasoning as if we were equally likely to be any of the observers we might possibly be, given our empirical evidence.
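As a toy illustration of how a typicality assumption generates predictions, consider the following sketch, in which the universe types, their frequencies, and their observer numbers are all invented for the purpose of the example:

```python
# Toy multiverse: each universe type comes with a made-up frequency among
# island universes and a made-up average number of observers it hosts.
multiverse = {
    "large cosmological constant": {"frequency": 0.90, "observers": 0.0},
    "small cosmological constant": {"frequency": 0.09, "observers": 1e6},
    "tiny cosmological constant":  {"frequency": 0.01, "observers": 1e9},
}

# Self-sampling: treat yourself as a random member of the class of all
# observers, so P(I find myself in a universe of type t) is proportional
# to frequency(t) * observers(t).
weights = {t: u["frequency"] * u["observers"] for t, u in multiverse.items()}
total = sum(weights.values())

for t, w in weights.items():
    print(f"{t}: P = {w / total:.4f}")
```

On these invented numbers, a typical observer almost certainly finds a tiny cosmological constant, even though such universes are rare. Note that the prediction depends entirely on the assumed observer counts; replacing them with a different observer proxy changes the verdict, which is one face of the reference-class problem discussed below.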
Typicality principles have the benefit of making multiverse theories at least in principle testable (Aguirre 2007; Barnes 2017). They are controversial, however, because it is contested whether typicality is always a reasonable assumption (Hartle & Srednicki 2007; Smolin 2007) and because it is difficult to specify the reference class of observers with respect to which typicality should be assumed. In practice, observer proxies are chosen, such as the share of baryonic matter clustered in large galaxies (Martel et al. 1998) or the entropy gradient (Bousso et al. 2007). These difficulties are exacerbated in cosmological scenarios such as the landscape multiverse, in which all the reference classes of observers that one might reasonably choose are infinite. The problem of regularizing those infinities corresponds to the so-called measure problem of cosmology, according to some cosmologists “the greatest crisis in physics today” (Tegmark 2014: 314). (See Schellekens [2013: sect. VI.B] for an introduction to the measure problem aimed at physicists, Smeenk [2014] for a philosopher’s sceptical assessment of its solvability, and Dorr & Arntzenius [2017] for a more optimistic perspective.) Friederich (2021: ch. 8) argues that the freedom to choose, for example, an observer proxy and a cosmic measure makes typicality-based predictions from multiverse theories untrustworthy and susceptible to confirmation bias.
The persisting difficulties with testing multiverse theories are a prime reason why the multiverse idea itself continues to be viewed very critically by many leading physicists (e.g., Ellis 2011).
5. Fine-Tuning and Naturalness
According to many contemporary physicists, the most deeply problematic instances of fine-tuning do not concern fine-tuning for life but violations of naturalness—a principle of theory choice in particle physics and cosmology that can be characterized as a no fine-tuning criterion.
5.1 Introducing Naturalness
The idea that underlies naturalness is that the phenomena described by some physical theory should not depend sensitively on specific details of a more fundamental (currently unknown) theory to which it is an effective low-energy approximation. In what follows, the motivation, significance, and implementation of this idea in the framework of quantum field theory are explained. For a more detailed introduction aimed at physicists see Giudice (2008); for one aimed at philosophers of physics see Williams (2015).
Modern physics regards our currently best theories of particle physics, collected in the Standard Model, as effective field theories. Effective field theories are low-energy approximations to hypothesized more fundamental physical theories whose details are currently unknown. An effective field theory has an in-built limit to its range of applicability, determined by some energy scale \(\Lambda\). When applied to phenomena associated with energies higher than \(\Lambda\), the effective field theory will not deliver correct results; beyond that scale, one must turn to the more fundamental theory to which the effective field theory is supposedly a low-energy approximation. For the theories collected in the Standard Model, it is known that they cannot possibly be empirically adequate beyond energies around the Planck scale \(\Lambda_{\textrm{Planck}}\approx 10^{19} \,\textrm{GeV}\), where—presently unknown—quantum gravitational effects become relevant. However, the Standard Model may well be empirically inadequate already at energy scales significantly below the Planck scale. For example, if there is some presently unknown particle which interacts with particles described by the Standard Model and whose mass \(M\) is smaller than the Planck scale \(\Lambda_{\textrm{Planck}}\) but beyond the range of current accelerator technology, the cut-off scale \(\Lambda\) at which the Standard Model becomes inadequate may well be \(M\) rather than \(\Lambda_{\textrm{Planck}}\).
In an effective field theory, any physical quantity \(g_{\Fphys}\) can be represented as the sum of a so-called bare quantity \(g_0\) and a contribution \(\Delta g\) from vacuum fluctuations corresponding to energies up to the cut-off \(\Lambda\):
\[ \begin{equation} g_{\Fphys} = g_0 + \Delta g. \label{natural} \end{equation} \]The bare quantity \(g_0\) can be regarded as a black box that sums up effects associated with energies beyond the cut-off scale \(\Lambda\), where unknown physics must be taken into account. Viewing a theory as an effective field theory means viewing it as a self-contained description of phenomena up to the cut-off scale \(\Lambda\). This perspective suggests regarding an effective field theory as natural only if the physical quantity \(g_{\Fphys}\) can have its actual order of magnitude without a delicate cancellation between \(g_0\) and \(\Delta g\) across many orders of magnitude. Since the bare quantity \(g_0\) sums up information about physics beyond the cut-off scale \(\Lambda\), such a delicate cancellation would mean that the order of magnitude of the physical quantity \(g_{\Fphys}\) would be different if phenomena associated with energies beyond the cut-off scale \(\Lambda\) were slightly different.
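To see what such a delicate cancellation would involve, suppose, purely for illustration, that \(\Delta g\) exceeds \(g_{\Fphys}\) by a factor of \(10^{32}\). Then

\[ g_0 = g_{\Fphys} - \Delta g = -\Delta g\left(1 - 10^{-32}\right)\,, \]

so the bare quantity would have to agree with \(-\Delta g\) in its first 32 significant digits.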
One can characterize violations of naturalness as instances of fine-tuning in that, where naturalness is violated, low-energy phenomena depend sensitively on the details of some unknown fundamental theory concerning phenomena at very high energies. Physicists have developed ways of quantifying fine-tuning in this sense (Barbieri & Giudice 1988; see the schematic formula below), critically discussed by Grinbaum (2012). It is controversial whether, to the degree that violations of naturalness can be seen as instances of fine-tuning, they should be regarded as problematic. Wetterich (1984) suggests that any fine-tuning of bare parameters is unproblematic because those parameters depend on the chosen regularization scheme and have no independent physical meaning. As highlighted by Rosaler and Harlander (2019), Wetterich’s perspective depends on an understanding of quantum field theories as defined by entire trajectories \(g^i(\Lambda)\) in parameter space.
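Schematically, the measure proposed by Barbieri and Giudice quantifies the sensitivity of an observable \(O\) to a parameter \(g\) by a logarithmic derivative,

\[ \Delta_g = \left|\frac{\partial \ln O}{\partial \ln g}\right| = \left|\frac{g}{O}\,\frac{\partial O}{\partial g}\right|\,, \]

where values \(\Delta_g\gg1\) (conventional thresholds are, e.g., 10 or 100) are taken to indicate fine-tuning.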
An alternative criterion of naturalness—sometimes dubbed absolute naturalness (see Wells [2015] for an empirical motivation)—is that a theory is natural if and only if it can be formulated using dimensionless numbers that are all of order \(1\). More permissive is ’t Hooft’s technical naturalness criterion (’t Hooft 1980), according to which a theory is natural if it can be formulated in terms of numbers that are either of order \(1\) or very small but such that, if they were exactly zero, the theory would have an additional symmetry. The motivation for this prima facie arbitrary criterion is that it elegantly reproduces verdicts based on the formulation of naturalness given above, according to which low-energy phenomena should not depend sensitively on the details of more fundamental physics at very high energies.
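A standard illustration of technical naturalness is the electron mass in quantum electrodynamics: if \(m_e\) were exactly zero, the theory would gain chiral symmetry, so quantum corrections to \(m_e\) are themselves proportional to \(m_e\) and depend on the cut-off only logarithmically, schematically

\[ \delta m_e \sim \frac{3\alpha}{4\pi}\, m_e \ln\frac{\Lambda^2}{m_e^2}\,, \]

which remains a modest correction to \(m_e\) even for \(\Lambda\) at the Planck scale. A small electron mass is therefore technically natural.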
5.2 Violations of Naturalness: Examples
A prime example of a violation of naturalness occurs in quantum field theories with a spin-\(0\) scalar particle such as the Higgs particle. In this case, the dependence of the squared physical mass on the cut-off \(\Lambda\) is quadratic:

\[ \begin{equation} m_{\FH,\Fphys}^2 = m_{\FH,0}^2 + \Delta m^2 = m_{\FH,0}^2 + \frac{h_t}{16\pi^2}\Lambda^2 + \ldots\,. \label{Higgs} \end{equation} \]The physical mass of the Higgs particle is empirically known to be \(m_{\FH,\Fphys}\approx125\,\textrm{GeV}\). The dominant contribution to \(\Delta m^2\), specified as \(\frac{h_t}{16\pi^2}\Lambda^2\) in equation \(\eqref{Higgs}\), is due to the interaction between the Higgs particle and the heaviest fermion, the top quark, where \(h_t\) is a parameter that measures the strength of that interaction. Given the empirically known properties of the top quark, the factor \(\frac{h_t}{16\pi^2}\) is of order \(10^{-2}\). Due to its quadratic dependence on the cut-off scale \(\Lambda\), the term \(\frac{h_t}{16\pi^2}\Lambda^2\) is very large if the cut-off scale is large. If the Standard Model is valid up to the Planck scale \(\Lambda_{\textrm{Planck}}\approx10^{19}\,\textrm{GeV}\), the squared bare mass \(m_{\FH,0}^2\) and the contribution from the vacuum fluctuations would have to cancel each other to about 34 orders of magnitude in order to result in a physical Higgs mass of \(125\,\textrm{GeV}\). There is no known physical reason why the effects collected in the bare mass \(m_{\FH,0}^2\) should be so delicately balanced against the effects from the vacuum fluctuations collected in \(\Delta m^2\). The fact that two fundamental scales—the Planck scale and the Higgs mass—are so widely separated from each other is referred to as the hierarchy problem, and it is because of this wide separation that the violation of naturalness due to the Higgs mass is so severe.
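A round-number check conveys the size of the required cancellation (rough, keeping only the leading top-quark term and ignoring convention-dependent factors):

\[ \frac{\frac{h_t}{16\pi^2}\Lambda_{\textrm{Planck}}^2}{m_{\FH,\Fphys}^2} \sim \frac{10^{-2}\times\left(10^{19}\,\textrm{GeV}\right)^2}{\left(125\,\textrm{GeV}\right)^2} \sim 10^{32}\,, \]

so \(m_{\FH,0}^2\) must track the vacuum contribution in roughly its first 32 significant digits; with subleading contributions included, one arrives at figures in the low-to-mid thirties, as quoted above.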
Various solutions to the naturalness problem for the Higgs mass have been proposed in the form of theoretical alternatives to the Standard Model. In supersymmetry (see Martin [1998] for an introduction), contributions to \(\Delta m_{\FH}^{2}\) from supersymmetric partner particles can compensate the contribution from heavy fermions such as the top quark and thereby eliminate the fine-tuning problem. However, supersymmetric theories with this feature appear to be disfavoured by more recent experimental results, notably from the Large Hadron Collider (Draper et al. 2012). Other suggested solutions to the naturalness problem for the Higgs particle include so-called Technicolour models (Hill & Simmons 2003), in which the Higgs particle is replaced by additional fermionic particles, models with large extra dimensions, where the hierarchy between the Higgs mass and the Planck scale is drastically diminished (Arkani-Hamed et al. 1998), and models with so-called warped extra dimensions (Randall & Sundrum 1999).
An even more severe violation of naturalness is created by the cosmological constant \(\rho_V\), which specifies the overall vacuum energy density. Here the contribution due to vacuum fluctuations is proportional to the fourth power of the cut-off scale \(\Lambda\):
\[ \begin{equation} \rho_V = \rho_0 + c\Lambda^4 + \ldots\,. \label{cosmo_constant} \end{equation} \]The physical value of the cosmological constant is empirically found to be of order \(\rho_V\sim(10^{-3}\,\textrm{eV})^4\). The constant \(c\), which depends on parameters that characterize the top quark and the Higgs particle, is empirically known to be roughly of order \(1\). If we take the cut-off to be of the order of the Planck scale, \(\Lambda\sim10^{19}\,\textrm{GeV}\), the bare term \(\rho_0\) must cancel the contribution \(c\Lambda^4\) to more than 120 orders of magnitude. Even if we assume a cut-off as low as \(\Lambda\sim1\,\textrm{TeV}\), i.e., already within reach of current accelerator technology, a cancellation between \(\rho_0\) and \(c\Lambda^4\) to about 60 digits remains necessary. In contrast to the case of the Higgs mass, there are few ideas about how future physical theories might avoid this problem.
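Again a round-number check (using \(1\,\textrm{GeV}=10^{9}\,\textrm{eV}\)) conveys the size of the cancellation:

\[ \frac{c\,\Lambda_{\textrm{Planck}}^4}{\rho_V} \sim \left(\frac{10^{19}\,\textrm{GeV}}{10^{-3}\,\textrm{eV}}\right)^{4} = \left(10^{31}\right)^{4} = 10^{124}\,, \]

which is the origin of the figure of more than 120 orders of magnitude; the analogous computation with \(\Lambda\sim1\,\textrm{TeV}\) gives \(\left(10^{15}\right)^{4}=10^{60}\).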
5.3 Violations of Naturalness and Fine-Tuning for Life
As explained in Section 5.1, violations of naturalness can be seen as instances of fine-tuning, but not in the sense of fine-tuning for life. A connection between naturalness and fine-tuning for life can be constructed, however, along the following lines:
One can interpret equations \(\eqref{Higgs}\) and \(\eqref{cosmo_constant}\) as suggesting that the actual physical values of the Higgs mass and the cosmological constant are much smaller than the values one would expect for them in the framework of the Standard Model. Notably, if the Higgs mass were of the order of the cut-off \(\Lambda\), e.g., the Planck scale, and if the cosmological constant were of order \(\Lambda^4\), the bare parameters would not need to be fixed to many digits in order for the physical parameters to have their respective orders of magnitude, which means that the physical values would be natural. Thus, assuming naturalness and the validity of our currently best physical theories up to the Planck scale, one would expect values for the Higgs mass and the cosmological constant of the same order of magnitude as their vacuum contributions, i.e., values much larger than the actual ones.
With respect to the problem, discussed in Section 2.1, of specifying probability distributions over possible values of physical parameters, naturalness may be taken to suggest that all reasonable such distributions have most of their probabilistic weight close to the natural values. As explained, for the Higgs mass and the cosmological constant the natural values are much larger than the observed ones. Advocates of the view that fine-tuning for life requires a response because life-friendly constants are improbable therefore put particular emphasis on those instances of fine-tuning for life that are associated with violations of naturalness, notably the cosmological constant (e.g., Susskind 2005: ch. 2; Donoghue 2007; Collins 2009: sect. 2.3.3; Tegmark 2014: 140f.).
Bibliography
- Adams, Fred C., 2008, “Stars in other universes: stellar structure with different fundamental constants”, Journal of Cosmology and Astroparticle Physics, 08: 10. doi:10.1088/1475-7516/2008/08/010
- Adams, Fred C., 2019, “The degree of fine-tuning in our universe – and others”, Physics Reports, 807: 1–111. doi:10.1016/j.physrep.2019.02.001
- Aguirre, Anthony, 2007, “Making predictions in a multiverse: conundrums, dangers, coincidences”, in Carr 2007: 367–386. doi:10.1017/CBO9781107050990.023
- Arkani-Hamed, Nima, Savas Dimopoulos, and Gia Dvali, 1998, “The Hierarchy problem and new dimensions at a millimeter”, Physics Letters B, 429(3–4): 263–272. doi:10.1016/S0370-2693(98)00466-3
- Barbieri, Riccardo and Gian F. Giudice, 1988, “Upper bounds on supersymmetric particle masses”, Nuclear Physics B, 306(1): 63–76. doi:10.1016/0550-3213(88)90171-X
- Barnes, Luke A., 2012, “The fine-tuning of the universe for intelligent life”, Publications of the Astronomical Society of Australia, 29(4): 529–564. doi:10.1071/AS12015
- –––, 2017, “Testing the multiverse: Bayes, fine-tuning and typicality”, in Chamcham et al. 2017: 447–466. doi:10.1017/9781316535783.023
- –––, 2020, “A reasonable little question: a formulation of the fine-tuning argument”, Ergo, 6: 1220–1257. doi:10.3998/ergo.12405314.0006.042
- Barr, S. M. and A. Khan, 2007, “Anthropic tuning of the weak scale and of \(m_u/m_d\) in two-Higgs-doublet models”, Physical Review D, 76(4): 045002. doi:10.1103/PhysRevD.76.045002
- Barrett, Martin and Elliott Sober, 2020, “The Requirement of Total Evidence: a reply to Epstein’s critique”, Philosophy of Science, 87(1): 191–203. doi:10.1086/706086
- Barrow, John D. and Frank J. Tipler, 1986, The Anthropic Cosmological Principle, Oxford: Oxford University Press.
- Behe, Michael J., 1996, Darwin’s Black Box, New York: The Free Press.
- Bostrom, Nick, 2002, Anthropic Bias: Observation Selection Effects in Science and Philosophy, New York: Routledge.
- Bradley, Darren J., 2009, “Multiple universes and observation selection effects”, American Philosophical Quarterly, 46: 61–72.
- Carlson, Erik and Erik J. Olsson, 1998, “Is our existence in need of further explanation?”, Inquiry, 41(3): 255–275. doi:10.1080/002017498321760
- Carr, Bernard J. (ed.), 2007, Universe or Multiverse?, Cambridge: Cambridge University Press. doi:10.1017/CBO9781107050990
- Carr, Bernard J. and Martin J. Rees, 1979, “The anthropic principle and the structure of the physical world”, Nature, 278: 605–612. doi:10.1038/278605a0
- Carter, B., 1974, “Large number coincidences and the anthropic principle in cosmology”, in M. S. Longair (ed.), Confrontation of Cosmological Theory with Observational Data, Dordrecht: Reidel, pp. 291–298.
- –––, 1983, “The anthropic principle and its implications for biological evolution”, Philosophical Transactions of the Royal Society A, 310(1512): 347–363. doi:10.1098/rsta.1983.0096
- Chamcham, Khalil, Joseph Silk, John D. Barrow, and Simon Saunders (eds.), 2017, The Philosophy of Cosmology, Cambridge: Cambridge University Press. doi:10.1017/9781316535783
- Collins, R., 2009, “The teleological argument: an exploration of the fine-tuning of the cosmos”, in W. L. Craig and J.P. Moreland (eds.), The Blackwell Companion to Natural Theology, Oxford: Blackwell, pp. 202–281.
- Colyvan, M., J. L. Garfield, and G. Priest, 2005, “Problems with the argument from fine-tuning”, Synthese, 145(3): 325–338. doi:10.1007/s11229-005-6195-0
- Craig, William Lane, 2003, “Design and the anthropic fine-tuning of the universe”, in Manson 2003: 155–177.
- Davies, Paul C.W., 2006, The Goldilocks Enigma: Why is the Universe Just Right for Life?, London: Allen Lane.
- Dawid, Richard, 2013, String Theory and the Scientific Method, Cambridge: Cambridge University Press. doi:10.1017/CBO9781139342513
- Dembski, William A., 1998, The Design Inference: Eliminating Chance through Small Probabilities, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511570643
- Dicke, R. H., 1961, “Dirac’s cosmology and Mach’s principle”, Nature, 192: 440–441. doi:10.1038/192440a0
- Dirac, P.A.M., 1938, “A new basis for cosmology”, Proceedings of the Royal Society A, 165: 199–208. doi:10.1098/rspa.1938.0053
- Donoghue, John F., 2007, “The fine-tuning problems of particle physics and anthropic mechanisms”, in Carr 2007: 231–246. doi:10.1017/CBO9781107050990.017
- Dorr, Cian and Frank Arntzenius, 2017, “Self-locating priors and cosmological measures”, in Chamcham et al. 2017: 396–428. doi:10.1017/9781316535783.021
- Draper, Kai, Paul Draper, and Joel Pust, 2007, “Probabilistic arguments for multiple universes”, Pacific Philosophical Quarterly, 88(3): 288–307. doi:10.1111/j.1468-0114.2007.00293.x
- Draper, Paul, 2020, “In defense of the requirement of total evidence”, Philosophy of Science, 87(1): 179–190. doi:10.1086/706084
- Draper, Patrick, Patrick Meade, Matthew Reece, and David Shih, 2012, “Implications of a 125 GeV Higgs boson for the MSSM and low-scale supersymmetry breaking”, Physical Review D, 85(9): 095007. doi:10.1103/PhysRevD.85.095007
- Earman, John, 1987, “The SAP also rises: a critical examination of the anthropic principle”, American Philosophical Quarterly, 24(4): 307–317.
- Earman, John and Jesus Mosterín, 1999, “A critical look at inflationary cosmology”, Philosophy of Science, 66(1): 1–49. doi:10.1086/392675
- Einstein, Albert, 1949, “Autobiographical notes”, in P.A. Schilpp (ed.), Albert Einstein: Philosopher-Scientist, Peru, IL: Open Court Press.
- Elga, Adam, 2004, “Defeating Dr. Evil with self-locating belief”, Philosophy and Phenomenological Research, 69(2): 383–396. doi:10.1111/j.1933-1592.2004.tb00400.x
- Ellis, George F.R., 2011, “Does the multiverse really exist?”, Scientific American, 305: 38–43. doi:10.1038/scientificamerican0811-38
- Epstein, Peter F., 2017, “The fine-tuning argument and the requirement of total evidence”, Philosophy of Science, 84(4): 639–658. doi:10.1086/693465
- Friederich, Simon, 2019a, “Reconsidering the inverse gamblers fallacy charge against the fine-tuning argument for the multiverse”, Journal for General Philosophy of Science, 50: 155–178. doi:10.1007/s10838-018-9422-3
- –––, 2019b, “A new fine-tuning argument for the multiverse”, Foundations of Physics, 49: 1011–1021. doi:10.1007/s10701-019-00246-2
- –––, 2021, Multiverse Theories: A Philosophical Perspective, Cambridge: Cambridge University Press.
- Giudice, Gian Francesco, 2008, “Naturally speaking: the naturalness criterion and physics at the LHC”, in Gordon L. Kane and Aaron Pierce (eds.), Perspectives on LHC physics, Singapore: World Scientific, pp. 155–178. doi:10.1142/9789812779762_0010
- Glymour, Clark N., 1980, Theory and Evidence, Princeton, NJ: Princeton University Press.
- Gould, Stephen Jay, 1983, “Mind and supermind”, Natural History, 92(5): 34–38.
- Greene, Brian, 2011, The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos, New York: Vintage.
- Grinbaum, Alexei, 2012, “Which fine-tuning arguments are fine?”, Foundations of Physics, 42(5): 615–631. doi:10.1007/s10701-012-9629-9
- Grohs, Evan, Alex R. Howe, and Fred C. Adams, 2018, “Universes without the weak force: Astrophysical processes with stable neutrons”, Physical Review D, 97: 043003. doi:10.1103/PhysRevD.97.043003
- Guth, Alan H., 1981, “Inflationary universe: A possible solution to the horizon and flatness problems”, Physical Review D, 23(2): 347. doi:10.1103/PhysRevD.23.347
- –––, 2000, “Inflation and eternal inflation”, Physics Reports, 333: 555–574. doi:10.1016/S0370-1573(00)00037-5
- Hacking, Ian, 1987, “The inverse gambler’s fallacy: the argument from design. The anthropic principle applied to Wheeler Universes”, Mind, 96(383): 331–340. doi:10.1093/mind/XCVI.383.331
- Hall, Lawrence J., David Pinner, and Joshua T. Ruderman, 2014, “The weak scale from BBN”, Journal of High Energy Physics, 2014(12): 134. doi:10.1007/JHEP12(2014)134
- Harnik, Roni, Graham D. Kribs, and Gilad Perez, 2006, “A universe without weak interactions”, Physical Review D, 74(3): 035006. doi:10.1103/PhysRevD.74.035006
- Hartle, James B. and Mark Srednicki, 2007, “Are we typical?”, Physical Review D, 75(12): 123523. doi:10.1103/PhysRevD.75.123523
- Hawthorne, John and Yoaav Isaacs, 2018, “Fine-tuning fine-tuning”, in Matthew A. Benton, John Hawthorne, and Dani Rabinowitz (eds.), Knowledge, Belief, and God: New Insights in Religious Epistemology, Oxford: Oxford University Press, pp. 136–168. doi:10.1093/oso/9780198798705.001.0001
- Hill, Christopher T. and Elizabeth H. Simmons, 2003, “Strong dynamics and electroweak symmetry breaking”, Physics Reports, 381(4–6): 235–402. doi:10.1016/S0370-1573(03)00140-6
- Hogan, Craig J., 2000, “Why the universe is just so”, Reviews of Modern Physics, 72: 1149–1161. doi:10.1103/RevModPhys.72.1149
- –––, 2007, “Quarks, electrons, and atoms in closely related universes”, in Carr 2007: 221–230. doi:10.1017/CBO9781107050990.016
- Holder, Rodney D., 2002, “Fine-tuning, multiple universes and theism”, Noûs, 36(2): 295–312. doi:10.1111/1468-0068.00372
- Howson, Colin, 1991, “The ‘old evidence’ problem”, British Journal for the Philosophy of Science, 42(4): 547–555. doi:10.1093/bjps/42.4.547
- Hoyle, Fred, D.N.F. Dunbar, W.A. Wenzel, and W. Whaling, 1953, “A state in C12 predicted from astrophysical evidence”, Physical Review, 92: 1095.
- van Inwagen, Peter, 1993, Metaphysics, Colorado: Westview Press.
- Juhl, Cory, 2005, “Fine-tuning, many worlds, and the ‘inverse gambler’s fallacy’”, Noûs, 39(2): 337–347. doi:10.1111/j.0029-4624.2005.00504.x
- –––, 2006, “Fine-tuning is not surprising”, Analysis, 66(4): 269–275. doi:10.1111/j.1467-8284.2006.00628.x
- –––, 2007, “Fine-tuning and old evidence”, Noûs, 41(3): 550–558. doi:10.1111/j.1468-0068.2007.00661.x
- Keynes, John M., 1921, A Treatise on Probability, London: Macmillan.
- Koperski, Jeffrey, 2005, “Should we care about fine-tuning?”, British Journal for the Philosophy of Science, 56(2): 303–319. doi:10.1093/bjps/axi118
- Kotzen, Matthew, 2012, “Selection biases in likelihood arguments”, British Journal for the Philosophy of Science, 63(4): 825–839. doi:10.1093/bjps/axr044
- Landsman, Klaas, 2016, “The fine-tuning argument: exploring the improbability of our own existence”, in K. Landsman and E. van Wolde (eds.), The Challenge of Chance, Heidelberg: Springer, pp. 111–129. doi:10.1007/978-3-319-26300-7_6
- Leslie, John, 1986, “Anthropic explanations in cosmology”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, pp. 87–95.
- –––, 1988, “No inverse gambler’s fallacy in cosmology”, Mind, 97(386): 269–272. doi:10.1093/mind/XCVII.386.269
- –––, 1989, Universes, London: Routledge.
- Lewis, Geraint J. and Luke A. Barnes, 2016, A Fortunate Universe: Life in a Finely Tuned Cosmos, Cambridge: Cambridge University Press. doi:10.1017/CBO9781316661413
- MacDonald, J. and D.J. Mullan, 2009, “Big bang nucleosynthesis: The strong nuclear force meets the weak anthropic principle”, Physical Review D, 80(4): 043507. doi:10.1103/PhysRevD.80.043507
- Manson, Neil A. (ed.), 2003, God and Design: The Teleological Argument and Modern Science, London: Routledge.
- –––, 2009, “The fine-tuning argument”, Philosophy Compass, 4(1): 271–286. doi:10.1111/j.1747-9991.2008.00188.x
- Manson, Neil A. and Michael J. Thrush, 2003, “Fine-tuning, multiple universes, and the ‘this universe’ objection”, Pacific Philosophical Quarterly, 84(1): 67–83. doi:10.1111/1468-0114.00163
- Manson, Neil A., 2020, “How not to be generous to fine-tuning sceptics”, Religious Studies, 56(3): 303–317. doi:10.1017/S0034412518000586
- Martin, Stephen P., 1998, “A supersymmetry primer”, in Gordon L. Kane (ed.), Perspectives on Supersymmetry, Singapore: World Scientific, pp. 1–98. doi:10.1142/9789812839657_0001
- McCoy, C.D., 2015, “Does inflation solve the hot big bang model’s fine-tuning problems?”, Studies in History and Philosophy of Modern Physics, 51: 23–36. doi:10.1016/j.shpsb.2015.06.002
- McGrath, P.J., 1988, “The inverse gambler’s fallacy and cosmology—a reply to Hacking”, Mind, 97(386): 265–268. doi:10.1093/mind/XCVII.386.265
- McGrew, Timothy, Lydia McGrew, and Eric Vestrup, 2001, “Probabilities and the fine-tuning argument: a sceptical view”, Mind, 110(440): 1027–1038. doi:10.1093/mind/110.440.1027
- McMullin, Ernan, 1993, “Indifference principle and anthropic principle in cosmology”, Studies in History and Philosophy of Science, 24(3): 359–389. doi:10.1016/0039-3681(93)90034-H
- Miller, Kenneth R., 1999, Finding Darwin’s God: A Scientist’s Search for Common Ground between God and Evolution, New York: Cliff Street Books.
- Monton, Bradley, 2006, “God, fine-tuning, and the problem of old evidence”, British Journal for the Philosophy of Science, 57(2): 405–424. doi:10.1093/bjps/axl008
- Narveson, Jan, 2003, “God by design?”, in Manson 2003: 88–105.
- Oberhummer, H., A. Csótó, and H. Schlattl, 2000, “Stellar production rates of carbon and its abundance in the universe”, Science, 289(5476): 88–90. doi:10.1126/science.289.5476.88
- Paley, William, 1802, Natural Theology: or, Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature, London: Rivington.
- Parfit, Derek, 1998, “Why anything? Why this?”, London Review of Books, January 22: 24–27.
- Penrose, Roger, 2004, The Road to Reality: A Complete Guide to the Laws of the Universe, London: Vintage.
- Planck Collaboration, 2014, “Planck 2013 results. I. Overview of products and scientific results”, Astronomy and Astrophysics, 571: A1. doi:10.1051/0004-6361/201321529
- Randall, Lisa and Raman Sundrum, 1999, “Large mass hierarchy from a small extra dimension”, Physical Review Letters, 83(17): 3370–3373. doi:10.1103/PhysRevLett.83.3370
- Rees, Martin, 2000, Just Six Numbers: the Deep Forces that Shape the Universe, New York: Basic Books.
- Rickles, Dean, 2014, A Brief History of String Theory, Berlin, Heidelberg: Springer.
- Roberts, John T., 2012, “Fine-tuning and the infrared bull’s eye”, Philosophical Studies, 160(2): 287–303. doi:10.1007/s11098-011-9719-0
- Rosaler, Joshua and Robert Harlander, 2019, “Naturalness, Wilsonian renormalization, and ‘fundamental parameters’ in quantum field theory”, Studies in History and Philosophy of Modern Physics, 66: 118–134. doi:10.1016/j.shpsb.2018.12.003
- Rota, Michael, 2016, Taking Pascal’s Wager: Faith, Evidence and the Abundant Life, Downers Grove, IL: Intervarsity Press.
- Schellekens, A.N., 2013, “Life at the interface of particle physics and string theory”, Reviews of Modern Physics, 85(4): 1491. doi:10.1103/RevModPhys.85.1491
- Sloan, David, Rafael A. Batista, Michael T. Hicks and Roger Davies (eds.), 2020, Fine-tuning in the Physical Universe, Cambridge: Cambridge University Press.
- Smart, J.J.C., 1989, Our Place in the Universe: A Metaphysical Discussion, Oxford: Blackwell.
- Smeenk, Chris, 2014, “Predictability crisis in early universe cosmology”, Studies in History and Philosophy of Modern Physics, 46(1): 122–133. doi:10.1016/j.shpsb.2013.11.003
- Smolin, Lee, 2007, “Scientific alternatives to the anthropic principle”, in Carr 2007: 323–366. doi:10.1017/CBO9781107050990.022
- Sober, Elliott, 2003, “The design argument”, in Manson 2003: 27–54.
- –––, 2009, “Absence of evidence and evidence of absence: evidential transitivity in connection with fossils, fishing, fine-tuning and firing squads”, Philosophical Studies, 143(1): 63–90. doi:10.1007/s11098-008-9315-0
- Steinhardt, Paul J. and Neil Turok, 2008, Endless Universe: Beyond the Big Bang, New York: Doubleday.
- Stenger, Victor J., 2011, The Fallacy of Fine-tuning: Why the Universe Is Not Designed for Us, New York: Prometheus Books.
- Susskind, Leonard, 2005, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, New York: Back Bay Books.
- Swinburne, Richard, 2003, “The argument to God from fine-tuning reassessed”, in Manson 2003: 105–123.
- –––, 2004, The Existence of God, second edition, Oxford: Clarendon. doi:10.1093/acprof:oso/9780199271672.001.0001
- Tegmark, Max, 2014, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, New York: Knopf.
- Tegmark, Max and Martin J. Rees, 1998, “Why is the cosmic microwave background fluctuation level \(10^{-5}\)?”, The Astrophysical Journal, 499(2): 526–532. doi:10.1086/305673
- Tegmark, Max, Anthony Aguirre, Martin J. Rees, and Frank Wilczek, 2006, “Dimensionless constants, cosmology, and other dark matters”, Physical Review D, 73(2): 023505. doi:10.1103/PhysRevD.73.023505
- ’t Hooft, G., 1980, “Naturalness, chiral symmetry and spontaneous chiral symmetry breaking”, in G. ’t Hooft (ed.), Recent Developments in Gauge Theories, New York: Plenum Press, pp. 135–157.
- Uzan, Jean-Philippe, 2003, “The fundamental constants and their variation: observational and theoretical status”, Reviews of Modern Physics, 75(2): 403. doi:10.1103/RevModPhys.75.403
- –––, 2011, “Varying constants, gravitation and cosmology”, Living Reviews in Relativity, 14: 2. doi:10.12942/lrr-2011-2
- Venn, John, 1866, The Logic of Chance, New York: Chelsea.
- Vilenkin, Alexander, 1995, “Predictions from quantum cosmology”, Physical Review Letters, 74(6): 846–849. doi:10.1103/PhysRevLett.74.846
- Weinberg, Steven, 1987, “Anthropic bound on the cosmological constant”, Physical Review Letters, 59(22): 2607. doi:10.1103/PhysRevLett.59.2607
- Weisberg, Jonathan, 2005, “Firing squads and fine-tuning: Sober on the design argument”, British Journal for the Philosophy of Science, 56(4): 809–821. doi:10.1093/bjps/axi139
- –––, 2010, “A note on design: what’s fine-tuning got to do with it?”, Analysis, 70(3): 431–438. doi:10.1093/analys/anq028
- –––, 2012, “The argument from divine indifference”, Analysis, 72(4): 707–714. doi:10.1093/analys/ans113
- Wells, James D., 2015, “The utility of Naturalness, and how its application to Quantum Electrodynamics envisages the Standard Model and Higgs boson”, Studies in History and Philosophy of Modern Physics, 49: 102–108. doi:10.1016/j.shpsb.2015.01.002
- Wetterich, Christof, 1984, “Fine-tuning problem and the renormalization group”, Physics Letters B, 140: 215–222.
- White, Roger, 2000, “Fine-tuning and multiple universes”, Noûs, 34(2): 260–276. doi:10.1111/0029-4624.00210
- –––, 2011, “What’s fine-tuning got to do with it: a reply to Weisberg”, Analysis, 71(4): 676–679. doi:10.1093/analys/anr100
- Williams, Porter, 2015, “Naturalness, the autonomy of scales, and the 125GeV Higgs”, Studies in History and Philosophy of Modern Physics, 51: 82–96. doi:10.1016/j.shpsb.2015.05.003
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- Nick Bostrom’s webpage on the anthropic principles and related matters
- Martin Rees on “many universes” as a candidate response to fine-tuning for life
- Sabine Hossenfelder on why the case for cosmic fine-tuning is shaky
- Paul J. Steinhardt on why the multiverse idea is ready for retirement
- Martin Rees on why the multiverse idea deserves to be better known
- controversy between Susskind and Smolin over the anthropic principle
- Fred Adams on why the universe is not so fine-tuned after all
- Simon Friederich on why we may live in a multiverse but may never know
Acknowledgments
I am grateful to Luke Barnes, Friedrich Harbach, Robert Harlander and two anonymous referees for helpful comments on earlier versions. Work on this article was supported by the Netherlands Organization for Scientific Research (NWO), Veni grant 275-20-065.