# Regularity and Inferential Theories of Causation

First published Tue Jul 27, 2021

A cause is regularly followed by its effect. This idea is at the core of regularity theories of causation. The most influential regularity theory can be found in Hume (1739). The theory was refined by Mill (1843), who insisted that the relevant regularities are laws of nature. Further refined versions enjoyed popularity until David Lewis (1973) criticized the regularity theory and proposed an analysis of causation in terms of counterfactuals (see the entry on counterfactual theories of causation). Since then, counterfactual theories of causation have risen to prominence, while regularity theories have increasingly fallen into disuse.

Regularities are closely tied to inferences. When a cause is invariably followed by its effect, it is reasonable to infer the effect from the cause. In the wake of logical empiricism and logical approaches to belief change, theories have emerged that explicitly analyse causation in terms of inference relations. The basic idea is that effects are inferable from corresponding causes in the context of a suitable background theory using a suitable logic. Inferential theories offer solutions to the problems which were decisive for the decline of regularity theories. Such theories may thus be seen as successors of the regularity theories.

## 1. Regularity Theories of Causation

The core idea of regularity theories of causation is that causes are regularly followed by their effects. A genuine cause and its effect stand in a pattern of invariable succession: whenever the cause occurs, so does its effect. This regular association is to be understood by contrast to a relation of causal power or efficacy. On a regularity theory, a cause and its effect only instantiate a regularity. No causal powers are posited in virtue of which a cause brings about its effect. Moreover, no metaphysical entities or connections are posited which would ground the regularities in the world. Causation is not conceived of as a metaphysically thick relation of production, but rather as simple instantiation of regularity.

### 1.1 Humean Regularity Theory

The most influential regularity theory is due to Hume. He defines a cause to be

[a]n object precedent and contiguous to another, and where all the objects resembling the former are plac’d in like relations of priority and contiguity to those objects, that resemble the latter. (Hume 1739: book I, part III, section XIV [1978: 169])

This definition contains three conditions for a cause. First, a cause temporally precedes its effect. Second, a cause is contiguous to its effect. That is, a cause is in spatiotemporal vicinity to its effect. Third, all objects similar to the cause are in a “like relation” to objects similar to the effect. This third resemblance condition says that cause and effect instantiate a regularity. To see this, note the presupposition of the resemblance condition: causes and effects can be sorted into types. The objects which are like the cause are of a certain type and so are the objects which are like the effect. The third condition thus says that all objects of a certain type are followed by an effect of a certain type.

A Humean Regularity Theory (HRT), which may or may not coincide with Hume’s genuine analysis of causation, can be summarized as follows: where $$c$$ and $$e$$ denote token events of the types $$C$$ and $$E$$, respectively,

$$c$$ is a cause of $$e$$ iff

(i)
$$c$$ is spatiotemporally contiguous to $$e$$,
(ii)
$$c$$ precedes $$e$$ in time, and
(iii)
all events of type $$C$$ are followed by an event of type $$E$$.

Causation is thus analysed in terms of spatiotemporal contiguity, temporal precedence, and regular connection. The causal relation is reduced to non-causal entities: the two particular facts about spatiotemporal contiguity and temporal precedence, and the general regularity. There is nothing more to causation according to HRT. In particular, causation does not involve a necessary connection, a productive relation, causal powers, or the like—not even to ground the regularities. This stance against metaphysically thick conceptions of causation is characteristic of the regularity theory (see, e.g., Dowe 2000: Ch. 2; Psillos 2009).

Let us explain some of the corollaries of HRT. The spatiotemporal contiguity of cause and effect excludes causation at a distance: a cause is proximate to its effect, either directly or via a chain of contiguous events (Hume 1739: book 1, part III, section II [1978: 75]). The temporal precedence of the cause and the direction of time imply that causation is asymmetric: if $$c$$ causes $$e$$, then it is not the case that $$e$$ causes $$c$$. Temporal precedence thus explains the asymmetry of causation, and how we can distinguish between cause and effect. However, it also rules out the possibility of simultaneous and backwards-in-time causation; and it dampens the prospects of a non-circular theory of time defined in terms of causation.

Condition (iii) relates a particular instance of causation, $$c$$ is a cause of $$e$$, to the class of all like cases, the types of $$c$$ and $$e$$. Hume takes causation to be primarily a relation between particular matters of fact. Yet the causal relation between these actual particulars holds in virtue of a certain regularity. Any causal relation thus supervenes on a regularity in the sense that there can be no change in the causal relation without a change in the regularity. Any regularity, in turn, supervenes on particular matters of fact: there can be no change in the regularity without a change of particular matters of fact. Whether a regularity is true is thus determined by which particular matters of fact obtain. Likewise, whether a causal relation obtains is determined by which regularity obtains. In the end, only the particular matters of fact determine whether or not a causal relation obtains, since the relation of supervenience is transitive. (For details, see the entries on supervenience and David Lewis, section on Humean supervenience.)

According to condition (iii), a cause is a sufficient condition for its effect in this sense: all $$c$$-like events occurring in the actual world are in fact followed by an $$e$$-like event, where the actual world is understood as the collection of all particular spatiotemporal matters of fact. The sufficiency of the cause for the effect is not to be understood in a modally stronger sense (Dowe 2000: 20). In particular, Hume denies that a cause necessitates its effect. In the world, there is—to use Hume’s phrase—just a “constant conjunction” between cause and effect.

On HRT, every effect has a sufficient cause. This does not imply that every event has a sufficient cause; it says only that, if there is a causal relation, it is deterministic. Probabilistic causation is thus not covered by HRT. And while the theory is compatible with a probabilistic analog of causation, we follow Hume in restricting ourselves to deterministic causation in this entry. (See the entry on probabilistic causation for an overview of theories of causation where the underlying processes may be indeterministic.)

HRT allows us to discover causal relations. We may observe spatiotemporal contiguity, temporal precedence, and that a certain type of event is generally followed by another type of event. Upon repeated observation of an invariable pattern of succession, we begin to expect that this particular event of a certain type is followed by that particular event of a certain type. We develop an inferential habit due to the experienced constant conjunction of types. Or as Hume puts it:

A cause is an object precedent and contiguous to another, and so united with it, that the idea of the one determines the mind to form the idea of the other. (Hume 1739: book 1, part III, section XIV [1978: 170])

Our mind forms a habit to expect or to infer the effect from a cause upon repeatedly experiencing events which occur in sequence. Thereby, we produce a connection between objects or events that goes “beyond what is immediately present to the senses” (1739: book I, part III, section II [1978: 73]). Our senses and our memory provide us only with finitely many instances of objects or events which occur jointly, and from which our minds form inferential habits. Still, these inferential habits also apply to future, as yet unobserved instances of the regularity. In Hume’s words,

after the discovery of the constant conjunction of any objects, we always draw an inference from one object to another, […]. Perhaps ’twill appear in the end, that the necessary connexion depends on the inference, instead of the inference’s depending on the necessary connexion. (Hume 1739: book 1, part III, section VI [1978: 88])

And indeed, Hume says that “[n]ecessity is something that exists in the mind, not in objects” (1739: book I, part III, section XIV [1978: 165]). So there is no necessary connection in the world which would go beyond regular association. However, the inferential habits we develop based on the perceived regularities make us feel that the connection between cause and effect is stronger than mere invariable succession. The necessary connection exists in our minds but not in the objects (see the section on the constructive phase of necessary connection in the entry on Hume).

According to the standard interpretation, the regularities themselves are real. There really are jointly and repeatedly occurring events of certain types that are spatially contiguous and some temporally precede others. And the regularities are mind-independent: even if there were no minds around, the regularities would exist as patterns in the Humean mosaic of particular facts.

Is causation in the world, as expressed by HRT, extensionally equivalent to causation in the mind? The difference between the two definitions seems to be that condition (iii) of HRT is replaced by the following condition: $$c$$ of type $$C$$ makes us infer $$e$$ of type $$E$$. While HRT does not require the presence of an epistemic agent, the other definition seems to require such an agent and is thus mind-dependent. And an epistemic agent may infer an event from another event even though there is no regularity connecting the two events. The two definitions are therefore not extensionally equivalent. Still, they can be read in ways which make them coextensive. The epistemic agent of the second definition, for example, can be seen as a hypothetical omniscient observer who always and only infers in accordance with all and only true regularities (for details, see Garrett 1997: Ch. 5).

What is Hume’s view on causation? We will not enter this exegetical debate here. Suffice it to say that the interpretations of Hume are manifold. Psillos (2009: 132) says that HRT “was Hume’s view, too, give or take a bit” and argues for it in Psillos (2002: Ch. 1). But there are many deviating interpretations (Beebee 2006; Garrett 2009; Strawson 1989; J. Wright 1983). The relation between causation in the world and causation in the mind, in particular, has remained a matter of debate among Hume scholars (see e.g., Garrett 1993).

Let’s go back to the Humean Regularity Theory. HRT enjoys two benefits. First, as we have observed, no substantive metaphysics is needed to explain causation: no entities like causal powers or a necessary connection between cause and effect need to be posited. Causation is rather reduced to spatiotemporal contiguity, temporal precedence, and regular association. Second, Hume’s regularity theory explains how we can figure out what causes what. A cause is simply the event which temporally precedes and is spatiotemporally contiguous to another event such that the former and the latter instantiate a regularity. Since temporal precedence and spatiotemporal contiguity can be observed, the problem of identifying causes reduces to the problem of identifying regularities.

However, HRT also faces serious problems. Recall that it presupposes that causes and effects can be sorted into types which are connected by a regularity. But it is unclear how the sorting can be achieved. What makes an event “like” another? Which aspects of the objects or events are relevant for the sorting? Many contemporary philosophers explicate the relevant notion of resemblance or similarity in terms of laws of nature. If there is a law of the form $$\forall x (A(x) \rightarrow B(x))$$, then events similar to a token event $$a$$ are simply events of the same type—other instantiations of $$A$$—and events similar to $$b$$ are also just token events of the same type. However, it is a non-trivial task to give a satisfying account of what a law of nature is—to say the least.

A second problem runs as follows. Assume, plausibly, that a giant meteor caused the dinosaurs to go extinct. For the meteor to be a cause on HRT, there must be a regularity saying that events like giant meteors hitting the Earth are followed by events like the extinction of the dinosaurs. But it seems as if such a regularity does not exist: the next giant meteor hitting the Earth will not cause the dinosaurs to go extinct. The dinosaurs are already extinct. The problem is that the meteor is only a cause of the extinction on a single occasion. But according to HRT this one meteor hitting the Earth is only a cause if some other meteors hitting the Earth also satisfy the regularity. But why would events far away in space and time determine whether or not this giant meteor caused the extinction of the dinosaurs on this occasion? It seems as if we can have causation without regularity on a single occasion. We will refer to this problem as the problem of singular causal relations.

A third problem is that there can be regularity without causation. The cry of the cock precedes the rise of the sun and the two events are regularly associated. And yet the cry of the cock is not a cause of the sunrise. A special case of this problem arises from joint effects of a common cause. Suppose an instantiation $$c$$ of $$C$$ is a common cause of the instantiations $$a$$ of $$A$$ and $$b$$ of $$B$$, where the token event $$a$$ precedes the token event $$b$$ in time (see Figure 1). Further suppose $$C$$ is instantiated whenever $$A$$ is. That is, whenever a token event of type $$A$$ occurs, a token event of type $$C$$ occurs. Then there is a regular connection between $$A$$ and $$B$$, which we normally do not count as a causal relation. This problem remains a central challenge for regularity theories to date.

Figure 1

Setting aside the possibilities of remote and backwards causation for now, a regularity theory of causation is only tenable when the three problems can be answered. The next section focuses on how laws of nature may help overcome the problem of sorting events into appropriate types and how they may exclude the cry of the cock as a cause of the sunrise.

### 1.2 Regularities and Laws

Mill (1843) refined the Humean Regularity Theory of causation. An effect usually occurs only due to several instantiated factors, that is, instantiated event types. Mill distinguishes between positive and negative factors. Let $$C_1,$$ $$C_2,$$ …, $$C_n$$ be a set of positive factors and $$\overline{D}_1,$$ $$\overline{D}_2,$$ …, $$\overline{D}_m$$ a set of negative factors which are together sufficient for an effect $$E$$. Mill says that the totality of present positive and absent negative factors sufficient for an effect to occur is the cause of this effect. In symbols, the totality of the token events $$c_1,$$ $$c_2,$$ …, $$c_n$$ and the absence of any token events of the types $$D_1,$$ $$D_2,$$ …, $$D_m$$ is a cause of an occurring event $$e$$ iff $$C_1,$$ $$C_2,$$ …, $$C_n,$$ $$\overline{D}_1,$$ $$\overline{D}_2,$$ …, $$\overline{D}_m$$ is sufficient for $$E$$.

Furthermore, Mill claims that causation requires lawlike regularities. Mere regular association is not enough. The cry of a cock (as part of a totality of conditions) precedes the rise of the sun and the former is invariably followed by the latter, and yet the cry of a cock does not count as a cause of the sunrise. For the regularity is not a law of nature, it is a mere accidental regularity.

The Millian Regularity Theory (MRT) can be summarized as follows:

The totality of present positive factors $$\{ C_{1-n} \}$$ and absent negative factors $$\{ \overline{D}_{1-m} \}$$ is a cause of an event $$e$$ of type $$E$$ iff

(i)

there are events of type $$C_1,$$ $$C_2,$$ …, $$C_n$$ which are spatiotemporally contiguous to $$e$$, and no events of type $$D_1,$$ $$D_2,$$ …, $$D_m$$ occur in the spatiotemporal vicinity of $$e$$,

(ii)

the events of type $$C_1,$$ $$C_2,$$ …, $$C_n$$ precede $$e$$ in time, and

(iii)

it is a law of nature that an instantiation of all positive factors $$\{ C_{1-n} \}$$ is followed by an instantiation of factor $$E$$ when none of the negative factors $$\{ D_{1-m} \}$$ is instantiated.

What are laws of nature? Well, on a regularity theory of causation, a law is a regularity understood as a stable pattern of events—nothing more, nothing less. And yet, laws are still special regularities. According to Mill (1843), regularities which subsume other true regularities are more general than the subsumed ones. The laws of nature are those regularities which are the most general. They thus organise our knowledge of the world in an ideal deductive system which strikes the best balance between simplicity and strength (see, e.g., 1843: Book III, Ch. XII). This best system account of laws has been further developed by Ramsey and Lewis. Ramsey puts it as follows,

even if we knew everything, we should still want to systematize our knowledge as a deductive system, and the general axioms in that system would be the fundamental laws of nature. The choice of axioms is bound to some extent to be arbitrary, but what is less likely to be arbitrary if any simplicity is to be preserved is a body of fundamental generalizations, some to be taken as axioms and others deduced. (1928 [1978: 131])

Everything else being equal, a system with fewer axioms is simpler. Simplicity thus demands the pruning of inessential elements from the system of laws. A deductive system is stronger when it entails more truths. Strength demands that the deductive system of laws should be as informative as possible. Note that simplicity and strength compete. A system consisting of only one uncomplicated axiom is simple but not very informative. A system which has one axiom for any truth is very strong but far from simple. To avoid a deductive system that is too simple to be informative and one that is too complicated, we need to balance simplicity and strength. The proper balance between simplicity and strength makes the deductive system best.

There is no guarantee that there is a unique best deductive system. If there is none, the laws of nature are those axioms and theorems that are common to all deductive systems that tie with respect to simplicity and strength (or are at least good enough). D. Lewis thus says that

a contingent generalization is a law of nature if and only if it appears as a theorem (or axiom) in each of the true deductive systems that achieves a best combination of simplicity and strength. (1973: 73)

As a consequence, laws are simple and yet highly informative, being part of each best deductive system. The regularity about the cry of the cock and the sunrise is not part of any best deductive system. And any regularity that fails to appear in every best system is accidental: it is not a law of nature. The dividing line between laws and accidental regularities is thus determined by membership in every best system. Note that no regularity can be regarded as a law of nature in the absence of a full system.

There is a worry that the best system account of laws is infected by epistemic considerations. Of course, what truths a best system entails is an objective matter; it does not depend on our knowledge of it. Whether a truth is objectively implied by a best system, however, depends on the way this system is organized. And there may be considerable leeway in choosing the axioms to obtain a simple system. The worry is that the notion of simplicity is not fully objective, and even if it were, striking a balance between simplicity and strength is not. Hence, which regularities are laws depends on epistemic criteria, our desiderata to organize our knowledge as simply as possible in a deductive system. This is not to say that the lawlike regularities are mind-dependent. Laws and regularities are still patterns in the world independent of our knowledge of them. What makes some of the regularities laws and some others not, however, is not only determined by the world. The law-making feature, striking a proper balance between simplicity and strength, seems to have an epistemic component. This is presumably the cost for being free of substantive metaphysical commitments. (For details on the best system account of laws, see the entry on laws of nature.)

What does the epistemic component mean for causation as lawlike regularity? The regularity is determined solely by the world, but the lawlikeness of a regularity rests, in part, on epistemic criteria. So, while causation is not arbitrary—there must still be a regularity for causation—there is an epistemic element in choosing which regularities constitute causation. In brief, the regularities are mind-independent; what causes what is not.

The Humean regularity theory, where causation is instantiation of regularity, presupposes that events can be sorted into types. A type is a class of resembling events. Based on the best system account, we may now appeal to laws of nature in order to determine which events are similar and which are dissimilar. Suppose it is a law that all events $$a$$ of type $$A$$ are followed by events $$b$$ of type $$B$$. Then the events or objects similar to $$a$$ are just the other instantiations of type $$A$$, and likewise for type $$B$$. But the law-making feature depends on epistemic considerations, and so the types determined by those laws depend on epistemic considerations. Hence, falling back to a Humean regularity theory does not solve the issue. We still need a way to sort events into types. But which events resemble each other comes in degrees and depends on the respects of comparison. The degrees of similarity and respects of comparison are not fixed by the world understood as the collection of all particular spatiotemporal matters of fact. The degrees and respects are rather epistemic components which depend on the categories and classification schemes we use. A related problem is posed by gerrymandered categories or properties like “grue”, as studied by Goodman (1955) in his new riddle of induction.

Some defenders of the regularity theory see the dependence on epistemic components as a problem. To solve it, one may want to posit that there are natural properties which carve nature at its joints. The idea is that those natural properties provide a fully objective classification. Those natural properties would ensure that the lawlike regularities are free of epistemic components. However, what is natural is notoriously difficult to define. Natural properties, which are objectively similar and dissimilar to each other, are thus often simply posited as primitives which are not analysed further, as Lewis (1986: 60–2) does for example.

### 1.3 INUS Conditions

On the Millian Regularity Theory, a cause is nomologically sufficient for its effect: there is a law of nature such that the occurrence of a cause suffices for the occurrence of its effect. A cause is taken to be a totality of positive and negative factors which is sufficient and necessary for the effect. However, one and the same effect can have many causes understood as sufficient and necessary totalities (as has already been observed by Venn 1889). Each of these distinct totalities, or clusters of factors, is sufficient to bring about the effect, but no single one is necessary if another cluster is actual.

Mackie (1974: 63) conceives of each cluster of factors as a conjunction, for example $$C \land \neg A$$, where $$C$$ is a type of event and $$\neg A$$ is the absence of any event of type $$A$$. Consider, for instance, a house which burns to the ground. There is a variety of distinct clusters which can cause the burning of the house. One cluster includes a short-circuit, the presence of oxygen, the absence of a sprinkler system, and so on. Another cluster includes an arsonist, the use of petrol, again the presence of oxygen, and so on. Yet another cluster includes a burning tree falling on the house, and so on. Each cluster is a conjunction of single factors. The disjunction of all clusters represents then the plurality of causes (see Mackie 1965: 245 and 1974: 37–38 for details).

The disjunction of all clusters for an effect is relative to a causal field. A causal field is “a background against which the causing goes on” (1974: 63). Roughly, a causal field captures the circumstances which are kept fixed and so cannot even be considered as (part of) a cause. Or, as Mackie puts it, what is caused is

not just an event, but an event-in-a-certain-field, and some conditions can be set aside as not causing this-event-in-this-field simply because they are part of the chosen field. (1974: 35)

Suppose a causal field includes that a certain person was born and alive for a while. Being born thus cannot be a cause of the person’s death relative to this causal field. However, with enough ingenuity, the person’s being born can count as a cause of the person’s death relative to a causal field which does not include, as part of the fixed background, that the person was born and alive for a while (see 1974: 37).

This suggests that the regularity for an effect is a disjunction of conjunctions which is necessary and sufficient for the effect relative to a causal field. Relative to this field, each cluster is sufficient for the burning of the house, but no single one is necessary; for another cluster can be sufficient for the burning. ($$C$$ is sufficient for $$E$$ means that, whenever an event of type $$C$$ occurs, so does an event of type $$E$$. $$C$$ is necessary for $$E$$ means that an event of type $$E$$ only occurs if an event of type $$C$$ occurs.) For simplicity, suppose the complex regularity (relative to a particular causal field) has the form:

$(C_1 \land C_2) \lor D_1 \leftrightarrow E.$

On the one hand, any cluster, for example $$C_1 \land C_2$$, is minimally sufficient for $$E$$ (and so is the “cluster” $$D_1$$). $$C_1 \land C_2$$ is minimally sufficient for $$E$$ means that the cluster is sufficient for $$E$$ and each conjunct of the cluster is necessary for its sufficiency. So no conjunct of any cluster is redundant for bringing about the effect. On the other hand, neither cluster is necessary for the effect $$E$$. $$C_1 \land C_2$$ is minimally sufficient but not necessary for $$E$$. $$E$$ occurs if $$D_1$$ does. Now, focus on $$C_1$$ which is on its own insufficient to bring about $$E$$. $$C_1$$ is a single factor of the cluster $$C_1 \land C_2$$ which is a sufficient but unnecessary condition for $$E$$. Hence, $$C_1$$ is an insufficient but non-redundant part of an unnecessary but sufficient condition for $$E$$ (relative to a causal field). Or simply, $$C_1$$ is an INUS condition for $$E$$.

In general, a complex regularity has the form:

$(C_{1,1} \land \ldots \land C_{1, n}) \lor \ldots \lor (C_{k, 1} \land \ldots \land C_{k, m}) \leftrightarrow E.$

Each single factor $$C_{i, j}$$ of a cluster is a potential cause. Hence, a cause is at least an INUS condition (relative to a particular causal field). “At least” because there are limiting cases:

(i)
if the regularity has the form $C_{1} \lor C_{2} \leftrightarrow E,$ the cause $$C_1$$ is a sufficient condition for $$E$$;
(ii)
if the regularity has the form $C_{1} \land C_{2} \leftrightarrow E,$ the cause $$C_1$$ is a necessary condition for $$E$$; and
(iii)
if the regularity has the form $C_{1} \leftrightarrow E,$ the cause $$C_1$$ is a necessary and sufficient condition for $$E$$.

On Mackie’s theory, a factor $$C$$ is a cause of $$E$$ iff $$C$$ is at least an INUS condition of $$E$$, and each factor of the cluster that contains $$C$$ and is sufficient for $$E$$ is instantiated.
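Since a Mackie-style regularity is just a disjunction of conjunctions of factors, the INUS conditions it licenses can be checked mechanically. The following sketch is a hypothetical encoding (not part of Mackie’s own apparatus): it represents the regularity $$(C_1 \land C_2) \lor D_1 \leftrightarrow E$$ as a list of clusters and tests, by brute-force enumeration of truth assignments, whether a factor is an insufficient but non-redundant part of an unnecessary but sufficient cluster.

```python
from itertools import product

# Hypothetical encoding of the regularity (C1 & C2) v D1 <-> E:
# each cluster is a set of factor names, and E occurs iff some
# cluster is fully instantiated.
clusters = [frozenset({"C1", "C2"}), frozenset({"D1"})]
factors = sorted({f for c in clusters for f in c})

def effect(assignment):
    """E occurs iff some cluster is fully instantiated."""
    return any(all(assignment[f] for f in c) for c in clusters)

def assignments():
    """Enumerate all truth assignments to the basic factors."""
    for vals in product([False, True], repeat=len(factors)):
        yield dict(zip(factors, vals))

def sufficient(subset):
    """E occurs in every assignment that instantiates all of subset."""
    return all(effect(a) for a in assignments()
               if all(a[f] for f in subset))

def necessary(subset):
    """Every assignment in which E occurs instantiates all of subset."""
    return all(all(a[f] for f in subset)
               for a in assignments() if effect(a))

def at_least_inus(factor):
    """factor is a non-redundant part of a sufficient but
    unnecessary cluster for E."""
    return any(factor in c
               and sufficient(c)          # the cluster suffices for E
               and not necessary(c)       # but is not necessary for E
               and not sufficient(c - {factor})  # and factor is non-redundant
               for c in clusters)
```

On this encoding, `at_least_inus("C1")` holds: $$C_1$$ on its own is insufficient for $$E$$, dropping it from the cluster $$\{C_1, C_2\}$$ destroys the cluster’s sufficiency, and the cluster is unnecessary since $$D_1$$ alone suffices. The singleton cluster $$\{D_1\}$$ also passes the check, corresponding to limiting case (i) in which the cause is itself a sufficient condition.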

How does Mackie’s theory of causation go beyond the regularity theories by Hume and Mill? Unlike HRT, Mackie’s theory captures causal scenarios where several different events are necessary to bring about a certain effect, and so qualify as a cause of the latter. Take the regularity

$(C_1 \land C_2) \lor D_1 \leftrightarrow E.$

Suppose $$C_1, C_2,$$ and $$D_1$$ occur independently of one another. There is neither a strict positive nor a strict negative correlation among these events. There are cases where $$C_1$$ does not occur but $$D_1$$ does, and so $$E$$ will occur. There are also cases where $$C_1$$ occurs but $$C_2$$ and $$D_1$$ do not, and so $$E$$ will not occur. And there will be cases where $$C_1$$ and $$C_2$$ occur together so that an instance of $$C_1$$ qualifies as a cause of an instance of $$E$$. An event of type $$C_1$$ may therefore be a cause of an event of type $$E$$, without there being a strict regular succession between $$C_1$$-events and $$E$$-events. This being said, Hume (1739: book I, part III, section XV) considers causation without strict regular succession. There, he seems to propose a regularity theory that goes beyond HRT and may cover causation based on non-strict succession.

The advantages of Mackie’s theory over Mill’s are less obvious. Recall that for Mill, a cause is a totality of positive and negative factors such that, in the respective causal scenario, this totality is sufficient for $$E$$. On this theory, events of type $$C$$ may be members of a cause for an effect of type $$E$$, without there being a strict regular succession between $$C$$-events and $$E$$-events. Moreover, the notion of a totality of factors is consistent with there being several totalities that are sufficient to bring about an event of type $$E$$. In other words, several totalities may serve as instances of some cluster in one and the same Mackie-style regularity. These totalities may be represented by conjunctions and joined by disjunctions. If we amend the Millian regularity theory by a requirement of minimal sufficiency, it seems as if we can go back and forth between a Mill-style and a Mackie-style representation of causes, with the qualification that causes in the sense of Mackie are mere factors of a cause in the sense of Mill. In this sense, the two theories seem to be intertranslatable. It should be pointed out, however, that Mackie’s theory gives us a more explicit and concise representation of the several totalities, or clusters, which are minimally sufficient to bring about a certain type of effect. The complex regularities and their elegant logical representation were not part of Mill’s conceptual repertoire.

Another merit of Mackie’s theory is that it provides an explanation for causal inference. If we know that an effect of type $$E$$ occurred, and that $$D_1$$ was not present, we may infer that the cluster $$(C_1 \land C_2)$$ occurred. In particular, we may infer that $$C_1$$ caused $$E$$.

Mackie’s regularity theory of causation still fails to distinguish between genuine causes and mere joint effects of a common cause. To see this, consider an example adapted from Russell (1921 [1961: 289]): the sounding of factory hooters in Manchester is regularly followed by London workers leaving their work. And yet the former is not a cause of the latter. However, as Mackie (1974: 81–84) himself points out, the sounding of the hooters in Manchester counts, on his own theory, as a cause of the workers going home in London:

In more concrete terms, the sounding of the Manchester factory hooters, plus the absence of whatever conditions would make them sound when it wasn’t five o’clock, plus the presence of whatever conditions are, along with its being five o’clock, jointly sufficient for the Londoners to stop work a moment later—including, say, automatic devices for setting off the London hooters at five o’clock, is a conjunction of features which is unconditionally followed by the Londoners stopping work. (1974: 84)

The structure of the example can be given as follows. There are two instantiated effects $$E_1$$ (the sounding of the hooters in Manchester) and $$E_2$$ (the workers going home in London) such that $$C_1$$ (knocking-off time at five o’clock) is an instantiated common INUS condition:

\begin{align} (C_1 \land C_2) & \lor D \leftrightarrow E_1, \text{ and}\\ (C_1 \land C_3) & \lor B \leftrightarrow E_2. \end{align}

Furthermore, suppose that $$C_2$$ and $$C_3$$ are instantiated on this particular occasion. Then $$C_1$$ counts as a common cause of $$E_1$$ and $$E_2$$ on Mackie’s theory.

In the presence of the complex regularities, the cluster

$E_1 \land \neg D \land C_3$

is sufficient for $$E_2$$. Indeed, the cluster of the sounding of the hooters in Manchester ($$E_1$$), the absence of any conditions that would make the Manchester hooters sound when it was not five o’clock ($$\neg D$$), and the presence of all conditions ($$C_3$$) that are together with it being five o’clock ($$C_1$$) jointly sufficient for the workers going home in London ($$E_2$$)—this cluster is sufficient for the workers going home in London. Hence, $$E_1$$ is at least an INUS condition of $$E_2$$. If $$D$$ is not instantiated, but $$C_2,$$ $$C_3,$$ $$E_1,$$ and $$E_2$$ are, then the instantiation of $$E_1$$ counts as a cause of that of $$E_2$$. More generally, an effect of a cause $$C_1$$ may be part of an instantiated cluster sufficient for another effect of $$C_1$$. This shows that, on Mackie’s theory, an effect of a common cause may falsely count as a cause for another effect of said common cause.

For Mackie (1974: 85–86), this type of problem indicates that his regularity theory is at best incomplete. And he tells us what is missing: a genuine cause-effect relation is marked by causal priority. As an approximation, a token event $$c$$ was causally prior to another token event $$e$$ if an agent could have—in principle—prevented $$e$$ by “(directly or indirectly) preventing, or failing to bring about or to do $$c$$” (1974: 190). To prevent the sounding of the Manchester hooters does not prevent the workers from going home in London. Hence, the instance of $$E_1$$ was not causally prior to the associated instance of $$E_2$$ (though it is temporally prior). By contrast, preventing the sounding of the London hooters does prevent the workers from going home in London.

The approximate relation of causal priority is interventionist. Causes are seen as devices for an (ideal) agent to manipulate an effect. However, Mackie did not propose an interventionist theory of causation. His final notion of causal priority is independent of a notion of agency. $$c$$ was causally prior to $$e$$ if there was a time at which $$c$$ was fixed while $$e$$ was unfixed (see 1974: 180–183 and 189–192). It should be noted, however, that Mackie himself does not think that his analysis of causal priority succeeds (1974: xiv).

Another problem for Mackie’s regularity theory is that the complex regularities seem to be symmetric. This symmetry blurs the asymmetry between cause and effect, and leads to the problem of the direction of causation. Like Hume, one could stipulate that causes precede their effects in time. The direction of causation is thus simply the direction of time. However, this stipulation excludes both the possibility of (a) backwards causation and (b) an analysis of “time’s arrow” in terms of causation. Since—to the best of our knowledge—there is no account of causation which has fully solved the problem of the causal direction, the objections (a) and (b) do not speak decisively against the regularity theory.

### 1.4 Contemporary Regularity Theories

Mackie’s theory of causation continues to be influential. Richard Wright (1985, 2011) proposed a similar account for identifying causes which can be, and indeed is, used in legal contexts. Building on Hart and Honoré (1985), he defined a cause as follows: a condition $$c$$ was a cause of $$e$$ iff $$c$$ was necessary for the causal sufficiency of a set of existing conditions that was jointly sufficient for the occurrence of $$e$$. Causal sufficiency is here different from lawful sufficiency: the set of conditions must instantiate the antecedent of a causal law whose consequent is then instantiated by the effect. Wright’s account requires that we have causal laws—as opposed to laws—at our disposal and so it is only reductive if we have a reduction of the causal laws. Nevertheless, the idea that a cause is a necessary element of a sufficient set for an effect has been applied in legal cases to determine which event has caused another. In fact, this NESS test for causation has itself become influential in legal theory (see the entry on causation in the law).

Strevens (2007) develops Mackie’s theory further. On the way, he likewise gives up its reductive character by replacing sufficiency with causal sufficiency. The latter is to be understood in terms of causal entailment, which in turn is supposed to represent a causal process. (We explain Strevens’s theory of causation in greater detail in §2.4.) Notably, Strevens (2007) argues that Mackie’s original theory already solves not only the causal scenario known as overdetermination—as everyone agrees—but also (early and late) preemption. (For an overview of problematic causal scenarios, see Paul & Hall 2013).

Baumgartner (2008, 2013) developed Mackie’s theory further while keeping its reductive character. He observes that complex regularities like

$(C_1 \land C_2) \lor D_1 \leftrightarrow E$

are not symmetric in the following sense: an instantiation of a cluster is sufficient for $$E$$, but an instantiation of $$E$$ is generally not sufficient to determine which cluster is instantiated. An instantiation of $$E$$ does not determine whether $$C_1 \land C_2$$ or $$D_1$$ is instantiated. An instantiation of $$E$$ only determines the whole disjunction of minimally sufficient clusters. This non-symmetry may be exploited to establish a notion of causal priority or the direction of causation.

There is, of course, a problem when a single cluster is necessary and sufficient for an effect:

$(C_1 \land C_2) \leftrightarrow E.$

Here the instantiation of $$E$$ is sufficient to determine the instantiation of the cluster. Baumgartner’s suggestion requires an effect to have at least two alternative sufficient clusters in order to establish the direction of causation. And even then, it seems that the problem of joint effects of a common cause is still unsolved. Consider Figure 2, where $$A$$ and $$B$$ are the joint effects of a common cause $$C$$, and $$D$$ and $$E$$ are alternative causes for $$A$$ and $$B$$, respectively.

Figure 2

Whenever $$A \land \neg D$$ is actual, so is $$C$$—for no effect occurs without any of its causes. Furthermore, whenever $$C$$ occurs, so does $$B$$. Hence, $$A \land \neg D$$ is minimally sufficient for $$C$$, and thus for $$B$$. And $$A \land \neg D$$ is a minimally sufficient cluster in a necessary condition for $$B$$:

$\tag{1} (A \land \neg D) \lor C \lor E \leftrightarrow B.$

But $$A \land \neg D$$ should not count as a cause of $$B$$. After all, the causes of $$B$$ are only $$C$$ and $$E$$ in this causal scenario.

Baumgartner proposes a solution. The complex regularity for an effect must be necessary for that effect in a minimal way. $$B$$ is instantiated only if $$C$$ or $$E$$ is instantiated. Hence, $$C \lor E$$ is necessary for $$B$$. And the left side of the complex regularity (1) contains no other disjunction—obtained by removing one disjunct—which is necessary for $$B$$. There are only two candidates. $$(A \land \neg D) \lor C$$ is not necessary because $$B$$ can be instantiated along with $$E, \neg C$$, and $$A \land D$$. Similarly, $$(A \land \neg D) \lor E$$ is not necessary because $$B$$ can be instantiated along with $$C,$$ $$\neg E$$ and $$A \land D$$. Hence, $$B \rightarrow C \lor E$$ and no disjunct can be removed from $$C \lor E$$ such that the implication still holds. In this sense, $$C \lor E$$ is a minimally necessary disjunction of minimally sufficient clusters for $$B$$.

The notion of a minimally necessary disjunction can also be characterized in terms of more formal concepts. Suppose $$\mathbf{C}$$ is a set of clusters, where each cluster is given by a conjunction of factors. $$\mathbf{C}$$ is a minimally necessary condition for $$E$$ iff

(i)
$$\bigvee \mathbf{C} \leftrightarrow E$$ is true in the actual world, and
(ii)
there is no $$\mathbf{C}'\subset \mathbf{C}$$ such that $$\bigvee \mathbf{C}' \leftrightarrow E$$ is also true in the actual world.

($$\bigvee \mathbf{C}$$ designates some disjunction of the members of $$\mathbf{C}$$.)
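Conditions (i) and (ii) can be checked by brute force over a finite factor set. A minimal Python sketch, assuming the causal structure of Figure 2 for illustration (so $$A \leftrightarrow C \lor D$$ and $$B \leftrightarrow C \lor E$$), verifies that $$C \lor E$$ is a minimally necessary disjunction for $$B$$ while the cluster $$A \land \neg D$$ can be pruned:

```python
from itertools import product

ATOMS = ["A", "B", "C", "D", "E"]

def consistent(w):
    # Assumed structure of Figure 2: A is caused by C or D,
    # B is caused by C or E.
    return w["A"] == (w["C"] or w["D"]) and w["B"] == (w["C"] or w["E"])

WORLDS = [w for w in (dict(zip(ATOMS, v))
                      for v in product([True, False], repeat=len(ATOMS)))
          if consistent(w)]

def nec_and_suff(clusters):
    """True iff the disjunction of the clusters <-> B in every world."""
    return all(any(c(w) for c in clusters) == w["B"] for w in WORLDS)

full = [lambda w: w["A"] and not w["D"],  # the prunable cluster A & ~D
        lambda w: w["C"],
        lambda w: w["E"]]

print(nec_and_suff(full))      # True: the full disjunction is nec. & suff.
print(nec_and_suff(full[1:]))  # True: so is C v E -- A & ~D can be pruned
print(nec_and_suff(full[2:]))  # False: E alone is not necessary for B
```

Since no proper subset of $$\{C, E\}$$ yields a necessary condition, $$C \lor E$$ satisfies condition (ii).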

The underlying reason why $$A \land \neg D$$ can be pruned from the left side of (1) is that $$A \land \neg D$$ is sufficient for $$C \lor E$$, while the latter is not sufficient for the former in the given scenario. Now, an instantiation of $$C \lor E$$ makes a difference as to whether or not $$B$$ is instantiated—independently of $$A \land \neg D$$. The converse is not true: an instantiation of $$A \land \neg D$$ makes a difference as to whether or not $$B$$ is instantiated, but not independently of $$C \lor E$$ (see Baumgartner 2008: 340–346 and 2013: 90–96). This suggests the following requirement: a complex regularity for an effect must be a minimally necessary disjunction of minimally sufficient clusters for said effect. Indeed, given that each effect has at least two alternative causes, this seems to solve the problem of joint effects of a common cause.

Baumgartner (2013: 95) defines a cause on the type level as follows. Relative to a factor set containing $$C$$ and $$E$$, $$C$$ is a type-level cause of $$E$$ iff

(i)
$$C$$ is a factor of a minimally necessary disjunction of minimally sufficient clusters for $$E$$, and
(ii)
$$C$$ remains such a factor for any suitable expansion of the factor set.

A type-level cause is thus permanently non-redundant for its effect. He then goes on to define a cause on the token level (for details, see Baumgartner 2013: 98). Relative to a factor set containing $$C$$ and $$E$$, an instance $$c$$ of $$C$$ is an actual cause of an instance $$e$$ of $$E$$ iff $$C$$ is a type-level cause of $$E$$, and, for all suitable expansions of the factor set including the original factor set, $$C$$ is on an active causal route to $$E$$. An active causal route is roughly a sequence of factors $$\langle Z_1, \ldots , Z_n \rangle$$, where each element $$Z_i$$ is a factor of a minimally necessary disjunction of minimally sufficient clusters for its successor element $$Z_{i+1}$$, and each $$Z_i$$ is co-instantiated with a set of factors $$X_i$$ such that $$Z_i \land X_i$$ forms a minimally sufficient cluster for $$Z_{i+1}$$. This amounts to a quite powerful analysis of actual causation. It solves many scenarios which are troublesome for counterfactual accounts, including overdetermination, early and late preemption, and scenarios known as “switches” and “short-circuits”. For a detailed study of these scenarios and an explanation of why they are troublesome for the counterfactual approach to causation, see Paul and Hall (2013). Finally, it should be noted that Baumgartner’s regularity theory faces a problem relating to what Baumgartner and Falk (forthcoming) call structural redundancies. They explain this problem and offer a solution based on a causal uniqueness assumption.

## 2. Inferential Theories of Causation

Inferential theories of causation are driven by the idea that causal relations can be characterized by inferential relations. In essence, $$C$$ is a cause of $$E$$ iff

(i)
$$C$$ and $$E$$ are actual events, and
(ii)
$$E$$ can be inferred from $$C$$ in the context of a suitable background theory using a suitable logic.

Inferential theories may be seen as refinements and generalizations of regularity theories. For example, one can explain INUS conditions in terms of relatively simple inferences between the presumed cause and the putative effect. In this explanation, the distinctive feature of Mackie’s theory is explained by the logical form of the background theory. In a propositional setting, this background theory has the logical form of a biconditional

$(C_{1,1} \wedge \ldots \wedge C_{1,n}) \vee \ldots \vee (C_{k,1} \wedge \ldots \wedge C_{k,m}) \leftrightarrow E.$

Each $$C_{i,j}$$ that co-occurs with all conjuncts of one conjunction on the left-hand side of this biconditional is at least an INUS condition for the effect $$E$$. Obviously, we can infer $$E$$ from such a $$C_{i,j}$$ in the context of the biconditional and the other conjuncts that form, together with $$C_{i,j}$$, a sufficient condition for $$E$$. However, the exposition of the INUS account in Mackie (1965, 1974) is not explicitly inferential. Likewise, Hume’s original regularity theory can be interpreted inferentially, but it is not an explicitly logical or inferential account.

We understand the notion of an inference relation in a broad way. In essence, an inference relation maps sets of sentences to sets of sentences such that this mapping is guided by some idea of truth preservation: if all members of a given set of premises are true, then all sentences inferable from this set must be true as well. Moreover, there are inference relations that are guided by a weaker requirement: if all members of a given set of premises are believed, then it is rational to believe a certain set of conclusions. Nonmonotonic logics and formal theories of belief revision define inference relations in this sense. We shall say more about such inference systems in Section 2.2. Suffice it to say for now that an inference relation understood in our broad way may be defined syntactically in terms of relations of derivability, or semantically in terms of possible worlds and model-theoretic interpretations, or both ways.

Inferential theories seek to improve on regularity theories. Recall from Section 1.1 that virtually all regularity theories face two problems of extension. First, in the case of singular causal relations, we have a causal relation without regularity. Second, in the case of spurious causation, we have a regularity that is not considered to be causal. A sophisticated inferential theory may allow us to infer singular events, such as the nuclear disaster in Fukushima and the extinction of the dinosaurs, from a rich background theory and certain further hypotheses about presumed causes and the context of the presumed causal process. At least some inferential theories hold the promise to distinguish between spurious and genuine causal relations. The idea is that genuine causal relations may be characterized by distinctive inferential relations, and so be distinguished from spurious causal relations.

### 2.1 Deductive Nomological Approaches

Inherent in the philosophy of logical empiricism, broadly construed, is a strong opposition to traditional metaphysics. Not surprisingly, many of the logical empiricists were guided by Humean empiricist principles. Even some concepts of scientific language were suspected to be metaphysical in nature. Wittgenstein (1922: 5.136) held that belief in a general law of causality is superstition. He also denied that laws of nature could explain natural phenomena (6.371n). Frank (1932: Ch. V, IX) pointed out several difficulties in giving a precise formulation of a general law of causality. Russell (1913) argued that the notion of cause is beset with confusion and is best excluded from the philosophical vocabulary.

Ramsey (1931) was less sceptical than Russell (1913) and Wittgenstein (1922) about the prospect of an analysis of causation. His account of causation seems to be the logical empiricist analysis that comes closest to Hume’s original analysis. It is centred on the notion of a variable hypothetical. Such a hypothetical is a universally quantified implication that has instances whose truth or falsity we are unable to verify for now. A case in point is the sentence saying that all humans are mortal. This sentence goes beyond our finite experience. For there is no way to conclusively establish that all humans—past, present, and future—are mortal.

To believe a variable hypothetical consists in “a general enunciation” and “a habit of singular belief” (Ramsey 1931: 241). That is, we are willing to believe instances of the universal sentence, even though we are not able to verify them at the moment. Here Ramsey builds Humean ideas about the formation of a habit or custom into an informal semantics of universal sentences.

Ramsey’s account, then, amounts to the following analysis. $$C$$ is a cause of $$E$$ iff

(i)
$$C$$ and $$E$$ are actual,
(ii)
the implication $$C \rightarrow E$$ follows from a variable hypothetical $$\forall x (\phi(x) \rightarrow \psi(x))$$ and certain facts $$F$$,
(iii)
the events described by $$E$$ are not earlier than the events described by $$C$$ and $$F$$,
(iv)
$$(C \wedge F) \rightarrow E$$ is an instance of $$\forall x (\phi(x) \rightarrow \psi(x))$$, and
(v)
$$\forall x (\phi(x) \rightarrow \psi(x))$$ is believed, or we are led to believe it in light of future observations (Ramsey 1931: 249).

Condition (i) remains implicit, but it would be confusing to omit it here. Strictly speaking, condition (ii) is redundant since it follows from condition (iv).

Note that a regularity by itself need not give rise to a variable hypothetical that we believe or are led to believe by future observations. A case in point is the regular connection between the sound of the hooters at a factory in Manchester and the workers going home at a factory in London (see §1.3). Ramsey (1931: 242) literally speaks of trust in order to characterize our epistemic attitude towards causal laws. Ramsey’s account of such laws, which supersedes his earlier best systems analysis, is explicitly epistemic. However, Ramsey’s account does not seem to have the resources to address the problem of singular causal relations. Under which causal law could we subsume the causal hypothesis that dinosaurs became extinct by a collision of the Earth with a meteor?

More work on causation and explanation has been done within the logical empiricist program. The deductive nomological model of explanation by Hempel and Oppenheim (1948)—also referred to as the DN model of explanation—seems to give us an exemplar of a conceptual analysis without metaphysics. Let us briefly recall the basic elements of this model. The explanans of an empirical phenomenon $$E$$ consists of two types of statements. First, a set $$C$$ of antecedent conditions $$C_1, \ldots, C_k$$. Second, a set $$L$$ of laws $$L_1, \ldots, L_r$$. These two sets form the explanans, while $$E$$ is called the explanandum. $$C$$ and $$L$$ explain $$E$$ iff

(i)
$$E$$ is a logical consequence of the union of $$C$$ and $$L$$,
(ii)
$$L$$ is non-empty (while $$C$$ may be empty),
(iii)
the explanans is testable, and
(iv)
each member of $$C$$ and $$L$$ is true.

The explanandum must thereby be subsumed under at least one law.
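Condition (i)—the explanandum being a logical consequence of $$C \cup L$$—can be illustrated with a toy derivation. The law and predicates below (all metals conduct electricity) are stand-ins chosen for illustration, not Hempel and Oppenheim’s own example:

```python
# Antecedent conditions C and laws L; a law (P, Q) abbreviates the
# covering generalization "for all x: P(x) -> Q(x)".
C = {("Metal", "a")}              # antecedent condition: a is a metal
L = [("Metal", "Conducts")]       # law: all metals conduct electricity

def closure(facts, laws):
    """All facts derivable from the antecedent conditions by
    repeatedly applying the laws (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in laws:
            for pred, obj in list(facts):
                if pred == pre and (post, obj) not in facts:
                    facts.add((post, obj))
                    changed = True
    return facts

# The explanandum "a conducts electricity" follows from C and L:
print(("Conducts", "a") in closure(C, L))  # True
```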

While Hempel and Oppenheim were not primarily interested in an analysis of causation, they thought that the DN model may yield such an analysis:

> The type of explanation which has been considered here so far is often referred to as causal explanation. If $$E$$ describes a particular event, then the antecedent circumstances described in the sentences $$C_1, C_2, \ldots, C_k$$ may be said jointly to “cause” that event, in the sense that there are certain empirical regularities, expressed by the laws $$L_1, L_2, \ldots, L_r$$, which imply that whenever conditions of the kind indicated by $$C_1, C_2, \ldots, C_k$$ occur, an event of the type described in $$E$$ will take place. Statements such as $$L_1, L_2, \ldots, L_r$$, which assert general and unexceptional connections between specified characteristics of events, are customarily called causal, or deterministic, laws. (Hempel & Oppenheim 1948: 139)

This passage nicely indicates how inferential approaches to causation parallel regularity approaches. The regular connection between the presumed cause and the putative effect is expressed by certain laws. The putative effect must be inferable from the presumed cause using these laws if the former is an actual cause of the latter. Causation implies that there are covering laws, according to which the cause is sufficient for its effect (cf. Pap 1952 and Davidson 1967, 1995). At the same time, the above passage reveals logical empiricist scruples about talking of causation. Hempel and Oppenheim are hesitant to set forth the DN model as an analysis of causation and to say that the antecedent conditions of a DN explanation are causes of the explanandum if the latter is a particular event.

More than a decade after the first publication of the DN model, Hempel (1965: 351n) suggested replacing the notion of cause with the notion of antecedent conditions along the lines of the DN model. Moreover, he there distinguished more carefully between laws of succession and laws of coexistence (1965: 352). Only the former give rise to causal explanations. If a DN explanation is based on laws of coexistence, this explanation does not qualify as causal. A case in point is the law that connects the length of a pendulum with its period:

$T=2 \pi \sqrt{l/g}$

where $$T$$ is the period of the pendulum, $$l$$ its length, and $$g$$ the gravitational acceleration nearby the surface of the Earth.

Arguably, even laws of succession do not always give rise to causal explanations. Such laws allow not only for predictions, but also for retrodictions. That is, we can derive statements about events that are prior to the events described by the antecedent conditions $$C_1,$$ $$C_2,$$ …, $$C_k$$ in a DN explanation. Hempel (1965: 353) himself observed that Fermat’s principle may be used in a DN explanation of an event that precedes an event described by an antecedent condition.

In sum, there are at least two problems with a simple DN account of causation. First, DN explanations using laws of coexistence do not seem to qualify as causal. Second, in some DN explanations, the explanandum is an event that occurs prior to an event described in the antecedent conditions. In neither case do we view the antecedent conditions of the DN explanation as causes of the event to be explained.

These observations anticipate later criticisms of the DN model, which are commonly referred to as symmetry problems. In the case of the infamous tower-shadow example, most people have come to view only the derivation of the length of the shadow as properly explanatory, but not the derivation of the height of the tower (Bromberger 1966). Hempel (1965: 353n) himself admits that causal DN explanations seem “more natural and plausible” than non-causal ones. Woodward (2003) goes a step further by expounding a causal account of explanation using causal models.

One may wonder why Hempel did not simply define that $$C$$ is a cause of a particular event $$E$$ iff there is a DN explanation of $$E$$ such that $$E$$ is the explanandum, $$C$$ is among the antecedent conditions, all antecedent conditions precede the occurrence of $$E$$, and all laws are laws of succession. Presumably, his interest in an analysis of causation was limited.

### 2.2 Ranking Functions

Spohn (2006, 2012) expounds a broadly Humean analysis of causation. $$C$$ is a cause of $$E$$ iff

(i)
$$C$$ and $$E$$ occur,
(ii)
$$C$$ precedes $$E$$, and
(iii)
$$C$$ “raises the metaphysical or epistemic status” of $$E$$, given the circumstances of the causal scenario (Spohn 2006: 97).

(iii) says that $$C$$ is a reason for $$E$$, given the circumstances. This relation is analysed in terms of ranking functions. Spohn views this analysis as an alternative to counterfactual approaches. At the end of this section, we will understand why the analysis qualifies as an inferential approach to causation.

Spohn develops the ranking-theoretic analysis in two steps. First, he explains what it is for a proposition to be a reason for another proposition. Second, causal relations are characterized as specific types of reason relations. Let us go a bit further into details. As regards notation, we follow Spohn (2006). The notation in Spohn (2012) is more refined, but slightly more complex as well.

A ranking function represents a belief state. More specifically, a ranking function $$\kappa$$ assigns each possible world a non-negative natural number such that $$\kappa(w)=0$$ for at least one possible world $$w$$. Intuitively, the rank $$\kappa(w)$$ of a possible world $$w$$ expresses a degree of disbelief. $$\kappa(w)=0$$ means that the degree of disbelief is zero. $$w$$ is not disbelieved in this case. Otherwise, it is disbelieved with rank $$\kappa(w)=n > 0$$. A ranking function thus defines a Grovian system of spheres, used to represent AGM belief revision operators (Alchourrón, Gärdenfors, & Makinson 1985; Grove 1988). A Grovian system of spheres is basically a set of nested sets. At the center of the system is its smallest member, a non-empty set of worlds. The smallest set is surrounded by its supersets, as shown in Figure 3.

Figure 3

The ranks 0, 1, 2, 3,… are understood as cardinal numbers. For example, if world $$w_1$$ has rank 5 and world $$w_2$$ rank 10, then $$w_2$$ is disbelieved twice as strongly as $$w_1$$. Some ranks may even be empty in the sense that there is no world that has a specific rank $$n$$. For example, ranks 3 and 4 may be without any possible worlds, while there are possible worlds at ranks 0, 1, 2, and 5. The cardinal interpretation of the ranks is needed for certain arithmetical operations, such as lifting the ranks of all possible worlds in a certain subset by a certain cardinal number. Because of the possibility of empty ranks, a Grovian system of spheres does not define a unique ranking function.

At each possible world, either a proposition or its negation is true. We call a world at which a proposition $$A$$ is true an $$A$$-world, and identify the proposition $$A$$ with the set of $$A$$-worlds. The rank $$\kappa(A)$$ of a proposition $$A$$ is the rank of those possible worlds $$w$$ whose rank $$\kappa(w)$$ is minimal among the $$A$$-worlds. In Figure 3, the rank of $$A$$ is thus $$0$$ and the rank of $$\neg A$$ is $$1$$. A proposition $$A$$ is believed iff $$\kappa(\neg A)>0$$.

Ranking functions can be used not only to express degrees of disbelief, but also to express degrees of belief. This is accomplished by the belief function $$\beta$$. Where $$A$$ is a proposition,

$\beta(A)= \kappa(\neg A) - \kappa(A).$

That is, $$\beta(A)$$ is the difference between the minimal rank of the $$\neg A$$-worlds and the minimal rank of the $$A$$-worlds. For example, if $$\kappa(\neg A)$$ is rather high, we strongly believe $$A$$. $$\beta(A)>0$$ means that $$A$$ is believed to be true, and $$\beta(A)<0$$ that $$A$$ is believed to be false. $$\beta(A)=0$$ means that $$A$$ is neither believed nor disbelieved. In the latter case, we have both $$A$$- and $$\neg A$$-worlds whose rank is zero.

We are almost done with the core of ranking theory. It remains to explain conditionalization. Conditionalization of a ranking function on a proposition represents the belief change upon coming to believe the proposition. The conditionalization of $$\kappa$$ on $$A$$ is defined as follows:

$\kappa(w\mid A)= \kappa(w) - \kappa(A)$

for all worlds $$w$$ at which $$A$$ is true, while $$\kappa(w\mid A)= \infty$$ for all worlds $$w$$ where $$A$$ is false. In less formal terms, if our agent comes to believe $$A,$$ then the ranks of all $$A$$-worlds are lowered by the rank of $$A$$. And the rank of all $$\neg A$$-worlds is set to infinity, which means that she definitely disbelieves those worlds. The conditionalization of a belief function $$\beta$$ is defined analogously:

$\beta(B\mid A) = \kappa(\neg B \: |\: A) - \kappa(B \: |\: A).$
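These definitions translate directly into code. A minimal Python sketch, in which the worlds and their ranks are illustrative assumptions and a proposition is simply a set of worlds:

```python
import math

WORLDS = {"w1", "w2", "w3", "w4"}
KAPPA = {"w1": 0, "w2": 1, "w3": 2, "w4": 5}  # at least one world has rank 0

def rank(A):
    """kappa(A): the minimal rank among the A-worlds."""
    return min((KAPPA[w] for w in A), default=math.inf)

def beta(A):
    """beta(A) = kappa(~A) - kappa(A): positive iff A is believed."""
    return rank(WORLDS - A) - rank(A)

def rank_given(w, A):
    """kappa(w|A) = kappa(w) - kappa(A) for A-worlds, infinity otherwise."""
    return KAPPA[w] - rank(A) if w in A else math.inf

def beta_given(B, A):
    """beta(B|A) = kappa(~B|A) - kappa(B|A)."""
    k = lambda X: min((rank_given(w, A) for w in X), default=math.inf)
    return k(WORLDS - B) - k(B)

A = {"w1", "w2"}
print(beta(A))                 # 2: A is believed
print(beta_given({"w2"}, A))   # -1: {w2} is disbelieved given A
```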

With these ranking-theoretic concepts at hand, Spohn defines what it is for a proposition to be a reason for another proposition. $$A$$ is a reason for $$B$$ in the context $$C$$—relative to $$\beta$$—iff

$\beta(B \: |\: A \cap C) > \beta(B \: |\: \neg A \cap C).$

Moreover, he introduces subtle distinctions between an additional, a sufficient, a necessary, and a weak reason. These distinctions give rise to the notions of an additional, a sufficient, a necessary, and a weak cause. We leave out these subtleties for simplicity.

Now that we know what a ranking-theoretic reason is, it remains to explain which types of reasons are causes. In essence, $$C$$ is a direct cause of $$E$$ at the possible world $$w$$—relative to the belief function $$\beta$$—iff

(i)
$$C$$ and $$E$$ are true at $$w$$,
(ii)
the event expressed by $$C$$ precedes the event expressed by $$E$$, and
(iii)
$$C$$ is a reason for $$E$$ in the context $$K$$, which is given by the past of $$E$$, excluding information about $$C$$.

Spohn thinks that we should understand causation relative to small worlds such that the context is given in a certain frame of possible events, but not by the global past of $$E$$. Causation is thus doubly relative to an epistemic perspective. First, it is relative to a belief function, and second to a frame of possible events.

A final step is needed to also capture non-direct causal relations. $$C$$ is a cause of $$E,$$ possibly non-direct, iff the ordered pair $$(C,E)$$ is in the transitive closure of the relation of direct causation. That is, $$C$$ is a cause of $$E$$ iff there is a sequence $$\langle C, C_1, \ldots, C_n, E \rangle$$ $$(n\geq 0)$$ such that each element of the sequence is a direct cause of the successor (if there is one).
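The transitive closure can be computed by an ordinary reachability search; the factor names in this sketch are illustrative:

```python
def causes(direct, c, e):
    """True iff (c, e) lies in the transitive closure of the relation
    of direct causation, given as a set of ordered pairs."""
    frontier = {b for (a, b) in direct if a == c}
    reached = set()
    while frontier:
        x = frontier.pop()
        if x in reached:
            continue
        reached.add(x)
        frontier |= {b for (a, b) in direct if a == x}
    return e in reached

# C directly causes C1, which directly causes E:
DIRECT = {("C", "C1"), ("C1", "E")}
print(causes(DIRECT, "C", "E"))   # True: causation mediated by C1
print(causes(DIRECT, "E", "C"))   # False
```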

Most notably, the ranking-theoretic account of causation is able to solve the problem of spurious causation, arising from common cause scenarios. The solution is quite subtle. Suppose $$C$$ is a common cause of $$E$$ and $$F,$$ where $$F$$ precedes $$E$$. The crucial question then is whether or not $$F$$ is a reason for $$E$$ in the context $$C$$. Spohn (2006: 106) argues that it is not. Recall that $$F$$ is a reason for $$E$$ in the context $$C$$ iff

$\beta(E\mid C \cap F) > \beta(E\mid C \cap \neg F),$

where

$\beta(E\mid C \cap F)=\kappa(\neg E\mid C \cap F) - \kappa( E\mid C \cap F),$

and

$\beta(E\mid C \cap \neg F)=\kappa(\neg E\mid C \cap \neg F) - \kappa( E\mid C \cap \neg F).$

Spohn determines the difference between the ranks $$\kappa(\neg E \mid C \cap F)$$ and $$\kappa( E\mid C \cap F)$$ by the difference in the number of violations of causal regularities between the world where $$\neg E \land C \land F$$ is true and the world where $$E \land C \land F$$ is true. In the former world, the causal regularity between $$C$$ and $$E$$ is violated, while the causal regularity between $$C$$ and $$F$$ is respected. No causal regularity is violated in the world where $$E \land C \land F$$ is true. Hence,

$\beta(E\mid C \cap F)=1.$

As for the calculation of $$\beta(E\mid C \cap \neg F),$$ we have two violations of a causal regularity in the world of $$\neg E \land C \land \neg F$$, but only one such violation in the world of $$E \land C \land \neg F$$. Hence,

$\beta(E\mid C \cap \neg F)=1.$

It therefore holds that

$\beta(E\mid C \cap F) = \beta(E\mid C \cap \neg F).$

Hence, $$F$$ is neither a reason nor a cause of $$E$$. It is easy to show that $$C$$ is a reason for $$E$$, as desired.
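The violation-counting argument can be replayed computationally. In the following sketch, the rank of a world is—as an illustrative assumption—the number of causal regularities $$C \to E$$ and $$C \to F$$ it violates; since the shift by $$\kappa(A)$$ in conditionalization cancels in the difference, $$\beta(B \mid A)$$ reduces to $$\kappa(\neg B \cap A) - \kappa(B \cap A)$$:

```python
from itertools import product

def violations(C, E, F):
    # Rank of a world: the number of violated causal regularities
    # among C -> E and C -> F.
    return int(C and not E) + int(C and not F)

def kappa(prop):
    """Minimal rank among the worlds satisfying prop."""
    return min(violations(*w)
               for w in product([True, False], repeat=3) if prop(*w))

def beta(B, A):
    """beta(B|A); the shift by kappa(A) cancels in the difference."""
    return (kappa(lambda C, E, F: not B(C, E, F) and A(C, E, F))
            - kappa(lambda C, E, F: B(C, E, F) and A(C, E, F)))

E_ = lambda C, E, F: E
print(beta(E_, lambda C, E, F: C and F))      # 1
print(beta(E_, lambda C, E, F: C and not F))  # 1: F makes no difference
print(beta(E_, lambda C, E, F: C))            # 1
print(beta(E_, lambda C, E, F: not C))        # 0: C is a reason for E
```

The first two values are equal, so $$F$$ is not a reason for $$E$$ given $$C$$; the last two differ, so $$C$$ is a reason for $$E$$.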

Spohn (2006, 2012) considers a number of further causal scenarios, and so he offers ranking-theoretic solutions to the problems of overdetermination, trumping, and various scenarios of preemption. The account of overdetermination parallels probabilistic accounts of this type of causal scenario. Spohn does not explicitly address singular causal relations that lack a corresponding regularity. But there might be a way to argue that a collision of the Earth with a huge meteor—at the time when dinosaurs were roaming the Earth—raises the ranks of those worlds where dinosaurs are alive. Such a raising of the ranks amounts to believing that dinosaurs could have died because of such a collision.

Why is Spohn’s analysis of causation inferential? Recall that a ranking function represents the epistemic state of an agent, which implies that it determines which beliefs are held by the agent. Moreover, the conditionalization of a ranking function defines how such a function changes if the agent comes to believe a certain proposition. Hence, conditionalization defines an inference relation that maps a proposition to a set of propositions. The intended meaning of this inference relation is that, if we come to believe a certain proposition, then it is rational to believe a certain set of propositions. In other words, proposition $$B$$ is inferable from proposition $$A$$ in the context of a ranking function $$\kappa$$ iff $$\beta(B\mid A)>0$$, where $$\kappa$$ underlies the belief function $$\beta$$. Put into symbolic notation:

$A \infd_\kappa B \;\text{ iff }\; \beta(B\mid A)>0.$

This notation is not used by Spohn (2012), but may be helpful to grasp the inferential nature of Spohn’s condition that a cause must be a reason for its effect. Notice that $$\infd_\kappa$$ is an enthymematic inference relation in the sense that certain premises are hidden in the background.

The inferential nature of the condition that a cause must be a ranking-theoretic reason for its effect can also be seen from the following two results. First, the conditionalization of a belief function $$\beta$$ defines a belief revision scheme (Spohn 1988). Such a scheme tells us how a rational agent should change his or her beliefs in light of new information (Gärdenfors 1988). Second, any belief revision scheme—in the sense of the AGM theory by Alchourrón, Gärdenfors, and Makinson (1985)—defines a nonmonotonic and enthymematic inference relation (Gärdenfors 1991). This result motivates the above definition of the inference relation $$A \infd_\kappa B$$.

Spohn’s theory of causation is first expounded in terms of ranking functions which model the epistemic states of some agents. However, it should be noted that Spohn (2012: Ch. 14) aims to spell out the objective counterpart to ranking functions. He seeks to objectify ranking functions by saying what it means that a ranking function represents objective features of the world. The notion of truth may serve as an introductory example. Given that $$w$$ is the actual world, a ranking function $$\kappa$$ is an objective representation of this world iff $$\beta(A)>0$$ for all propositions $$A$$ such that $$w\in A$$. (Proposition $$A$$ is construed as a set of possible worlds; $$\beta$$ is the belief function of $$\kappa$$, as defined above.) This explanation, however, does not yet capture modal and causal features of our world. And so Spohn goes on to explain how modal and causal properties—defined by a given ranking function $$\kappa$$—can be understood as objective features of the world. The details are intricate and complex. They are spelled out in Spohn (2012: Ch. 15); the basic ideas are summarized in Spohn (2018). Suffice it to say that Spohn’s objectification of ranking functions is not only a promising attempt at an objective account of causation and modality, but also a way to resolve the tension between an epistemic and a non-epistemic theory of causation.

### 2.3 Strengthened Ramsey Test

Drawing on Spohn (2006), Andreas and Günther (2020) analyse causal relations in terms of certain reason relations. This is the basic schema of the analysis: $$C$$ is a cause of $$E$$ iff

(i)
$$C$$ is a reason for believing $$E$$, and
(ii)
the ordered pair $$(C, E)$$ satisfies a few further conditions.

Andreas and Günther specify the first condition by a strengthened Ramsey Test, which emerges from an analysis of the word “because” in natural language. Ramsey (1931) proposed to evaluate conditionals in terms of belief change. Roughly, you accept “if $$A$$ then $$C$$” when you believe $$C$$ upon supposing $$A$$. This Ramsey Test has been pointedly expressed by Stalnaker:

First, add the antecedent (hypothetically) to your stock of beliefs; second, make whatever adjustments are required to maintain consistency (without modifying the hypothetical belief in the antecedent); finally, consider whether or not the consequent is then true. (1968: 102)

Gärdenfors (1988) expressed the Ramsey Test in more formal terms. Where $$K$$ denotes an agent’s set of beliefs and $$K \ast A$$ the operation of changing $$K$$ on the supposition of $$A,$$ $$A > C \in K$$ if and only if $$C \in K \ast A.$$ In the wake of work by Rott (1986), Andreas and Günther suggest strengthening the Ramsey Test as follows:

$$A \gg C$$ iff, after suspending judgment about $$A$$ and $$C$$, an agent can infer $$C$$ from the supposition of $$A$$ (in the context of further beliefs in the background). (2019: 1230)

This conditional is then used to give a simple analysis of the word “because” in natural language:

$\text{Because } A, C \text{ (relative to }K)\;\;\;\text{ iff }\;\;\;A \gg C \in K \;\;\; \text{ and }\;\;\; A, C \in K$

where $$K$$ designates the belief set of the agent. This analysis may well be incomplete, but proves useful for analysing causation. The next step toward such an analysis is to impose further constraints on the inferential relations between the antecedent and the consequent. For the expert reader, it may be worth noting that the conditional $$\gg$$ is defined in terms of finite belief bases rather than logically closed sets. Inferential relations between antecedent and consequent can therefore be defined syntactically in terms of derivability.
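The strengthened Ramsey Test can be illustrated with a toy belief base. The sketch below rests on drastic simplifying assumptions: beliefs are atomic sentences, background generalizations are Horn-style rules, and inference is forward chaining. It implements only the suspension-and-inference core of $$A \gg C$$, not the further conditions on inferential paths discussed below; all names and the example are hypothetical.

```python
# A toy belief base: atomic beliefs plus Horn-style rules standing in for
# background generalizations. All names and the example are hypothetical.
def derivable(premises, rules):
    """Close a set of atomic beliefs under forward chaining."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def strengthened_ramsey(a, c, beliefs, rules):
    """A >> C: suspend judgment about A and C, then suppose A and check
    whether C can be inferred against the remaining background beliefs."""
    background = beliefs - {a, c}
    return c in derivable(background | {a}, rules)

def because(a, c, beliefs, rules):
    """Because A, C (relative to K): A >> C holds and A, C are believed."""
    return strengthened_ramsey(a, c, beliefs, rules) and {a, c} <= beliefs

rules = [({"struck", "oxygen"}, "lit")]   # a background generalization
beliefs = {"struck", "oxygen", "lit"}     # the belief base K
print(because("struck", "lit", beliefs, rules))  # True
print(because("lit", "struck", beliefs, rules))  # False
```

The asymmetry of the two printed values shows that the analysis, even in this crude form, distinguishes the direction of inference: the striking yields the lighting against the background beliefs, but not conversely.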

How may the conditional $$\gg$$ be used to analyse causation? Suppose the antecedent $$C$$ designates a presumed cause and the consequent $$E$$ a putative effect. It is first required by Andreas and Günther (2020) that we can infer $$E$$ from $$C$$ in a forward-directed manner. That is, there is an inferential path between $$C$$ and $$E$$ such that, for every inferential step, whenever we infer the occurrence of an event, no premise asserts the occurrence of an event that is temporally later than the inferred event. This condition is a refined and slightly modified variant of Hume’s requirement according to which a cause always precedes its effect.

Second, the inferential path between $$C$$ and $$E$$ must be such that every intermediate conclusion is consistent with our beliefs about the actual world. The idea is that there must be an inferential path between the presumed cause and the putative effect that tells us how the effect is brought about by the cause. This inferential representation of how the effect is produced by the cause must be consistent with what we believe the world is like.

Third, every generalization—used in the inferential path between $$C$$ and $$E$$—must be non-redundant in the following sense: it must not be possible to derive the generalization from other explicit beliefs. (A generalization is simply a universal sentence, which may represent a strict or non-strict law.) By means of this condition, Andreas and Günther (2020) try to solve the problem of spurious causation in common cause scenarios. Suppose there is a thunderstorm. Then, we have an electrical discharge that causes a flash which is followed by thunder. The flash precedes the thunder, and there is a regular connection between these two types of events. But we would not say that the flash is a cause of the thunder. So, we need to discriminate between forward-directed inferences with a causal meaning and other forward-directed inferences that lack such a meaning. Note that we can derive the event of the flash and the event of the thunder from the electrical discharge in a forward-directed manner using only non-redundant generalizations, that is, universal sentences that we think are relatively fundamental laws of nature. Among these are the laws of electrodynamics, atomic and acoustic theory, and optics. However, the only inferential path from the flash to the thunder that is forward-directed requires redundant generalizations: universal sentences that we can derive from more fundamental laws (see Andreas 2019 for details). The strategy is that spurious causal relations can be excluded by requiring the generalizations used in the inferential path to be non-redundant. This strategy works in the mentioned common cause scenario. It remains to be seen whether this strategy works in full generality.

In sum, $$C$$ is a cause of $$E$$—relative to a belief state—iff

(i)
$$C$$ and $$E$$ are believed to be true,
(ii)
the event of $$C$$ precedes that of $$E$$, and
(iii)
after suspending judgement about $$C$$ and $$E$$, we can infer $$E$$ from $$C$$ in a forward-directed manner, without using redundant generalizations, and without inferring sentences that are inconsistent with our beliefs.

Obviously, this analysis is epistemic, just as Hume’s analysis of causation in the mind and Ramsey’s and Spohn’s analyses are. Condition (ii) is needed because it is not implied by the requirement that the inferential path between $$C$$ and $$E$$ is forward-directed. (Such an implication does not hold because, strictly speaking, forward-directedness of the inferential path means only that no inferential step is backward-directed.) The problem of instantaneous causation is addressed in Andreas (2019), at least for some causal scenarios. Andreas and Günther (2020) deliver rigorous solutions to a number of causal scenarios, including overdetermination, early and late preemption, switches, and some scenarios of prevention. Singular causal relations without a regular connection among $$C$$-type and $$E$$-type events are not addressed.

### 2.4 The Kairetic Account

The kairetic account by Strevens (2004, 2008) analyses causation in terms of causal models. It is driven by the idea that every causal claim is grounded in a causal-explanatory claim. Causation and explanation go hand in hand. Strevens uses the notion of a causal model to define a relation of entailment with a causal meaning. A set of propositions entails an explanandum $$E$$ in a causal model only if this entailment corresponds to a “real causal process by which $$E$$ is causally produced” (Strevens 2004: 165). Causal models are assumed to be founded in physical facts about causal influence, and it is assumed that these facts “can be read off the true theory of everything” (Strevens 2004: 165).

The kairetic account parallels the DN account of causation in some respects. The logical form of the explanans is basically the same as in the DN model. We have laws and propositions about events. A causal model is here simply a set of causal laws taken together with a set of propositions about potential causal factors. Unlike the DN account of causation, the kairetic analysis uses causal notions in the analysans. More specifically, the notion of entailment with a causal meaning is taken as antecedently given. In the previous section, we have mentioned Andreas’s (2019) attempt to further specify the relation of logical entailment with a causal meaning using non-modal first-order logic, belief revision theory, and some further non-logical concepts. This specification may be taken to supplement the kairetic account.

In Strevens (2004), the kairetic account is essentially motivated by three types of problems. First, causation by conditions that are most of the time, but not always, sufficient to produce a corresponding effect. Second, problems of preemption. Third, the problem of how fine-grained the description of a cause should be. Consideration of these problems leads to a sophisticated and powerful analysis of causation, which is based on the notion of an explanatory kernel for an event. Such a kernel contains laws and propositions about events. Given the veridical causal model $$M$$, the explanatory kernel for an event $$E$$ is the causal model $$K$$ such that (i) $$M$$ entails $$K$$, (ii) $$K$$ entails $$E$$, and (iii) $$K$$ optimizes two further properties called generality and cohesion. Generality means that $$K$$ encompasses as many physical systems as possible. Cohesion is a measure of the degree to which potential causal factors are active in all systems satisfying $$K$$ (Strevens 2004: 171). Eventually, Strevens defines $$C$$ to be a causal factor for $$E$$ iff it is a member of some explanatory kernel for $$E$$. Causes, in the kairetic account, are difference-makers with respect to the effect. If we were to take an actual cause out of an explanatory kernel for $$E$$, the remainder of the kernel would not produce the putative effect $$E$$.

Let us try to understand why the description of the cause should be maximally general. There are three reasons for this. First, this requirement allows us to eliminate potential causal factors that do not contribute to the causal production of the effect. A kernel $$K$$ that does mention such a factor—in addition to the “active” causal factors—is less general than a kernel $$K'$$ that mentions only the “active” causal factors. Hence, $$K'$$ should be preferred over $$K$$, other things being equal.

Put simply, some elements of a veridical causal model are not involved in the production of the effect. Other elements are involved, but their influence can be neglected, provided the description of the effect is not overly fine-grained. For example, the gravitational forces of other planets may influence the trajectory of a rock thrown near the Earth’s surface. But these forces do not make a difference to whether or not the rock hits a fragile object. Hence, the kernel should not contain such forces. More generally, a kernel for $$E$$ should not contain elements of $$M$$ that are involved in the production of $$E$$ without actually making a difference to it. This gives us a second reason for the requirement of generality.

These two reasons for optimizing generality are motivated by a consideration of difference-making: each member of the kernel should make a difference to the production of the effect. If some member of the kernel is absent, the remainder of the kernel does not suffice to bring about the effect. Otherwise, it is not a proper kernel for the effect.

The third reason for optimizing generality of the kernel is a bit more intricate. Suppose a brick of four kilograms is thrown against a window such that the window shatters. Then, for this effect to be brought about, it is often not necessary that the brick has a weight of exactly four kilograms. A brick that is slightly lighter and thrown with the same velocity would also do. Nor does it have to be a brick, as we know from the rock-throwing examples. So it seems more accurate to say that the shattering of the window was caused by the fact or event that a rigid, non-elastic, and sharp object was thrown against the window in such a manner that the object’s impact was within a certain interval. The latter description is obviously more general than the former in the sense that more physical systems satisfy it. In brief, the kernel for $$E$$ should not contain aspects of causally relevant factors that do not make a difference to $$E$$.

The requirement of cohesion is needed to exclude disjunctions of causal factors from a kernel. Suppose $$C_1$$ is a potential, but inactive causal factor, while $$C_2$$ is active. That is, $$C_2$$ is actually contributing to the causal production of $$E$$, and $$C_1$$ is not. Then neither $$C_1$$ nor $$C_1 \vee C_2$$ should be part of the kernel. We have just seen how $$C_1$$ is excluded from the kernel by the requirement to maximize generality. $$C_1 \vee C_2$$ is excluded by requiring that the kernel should be maximally cohesive. Other things being equal, kernels with $$\{C_1 \vee C_2\}$$ and $$\{C_1, C_1 \vee C_2\}$$ are less cohesive than the kernel containing just $$\{C_2\}$$.

Strevens (2008) is more comprehensive, and addresses a number of further topics, most of which are primarily concerned with the notion of an explanation. Notably, an account is offered for singular causal relations without a corresponding regular connection between the presumed cause and the putative effect (Ch. 11).

### 2.5 Causal Model Approaches

In his seminal book on causality, Pearl (2000) expounded a formal framework of causal models, which is centred on structural equations. A structural equation

$v_i=f_i(pa_i, u_i),\;\; i=1, \ldots, n$

tells us that the values $$pa_i$$ of certain parent variables determine the value $$v_i$$ of a child variable according to a function $$f_i$$ in the context of a valuation $$u_i$$ of background variables. At bottom, a causal model is a set of structural equations. Each equation stands for some causal law or mechanism. We can view these equations as representing elementary causal dependencies. The notion of a parent variable relies on causal graphs, which represent elementary causal relations by directed edges between nodes. Each node stands for a distinct variable of the causal model. Causal models have been used to study both deterministic and probabilistic causation. (For details, see the entry on causal models.)
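To make the notation concrete, here is a minimal sketch of a deterministic structural causal model. It illustrates the idea only; the class design, the variable names, and the tiny thunderstorm example (echoing the common cause scenario discussed earlier) are ours, not Pearl’s.

```python
# A minimal deterministic structural causal model: each endogenous variable
# v_i is computed by a function f_i of its parent variables; exogenous
# variables play the role of the background valuation u_i. Names and the
# tiny thunderstorm example are illustrative only.
class CausalModel:
    def __init__(self, equations):
        # equations: {variable: (list_of_parents, function)}
        self.equations = equations

    def evaluate(self, context, interventions=None):
        """Solve the (assumed acyclic) equations given exogenous values.
        An intervention do(X = x) replaces X's equation with the constant x."""
        interventions = interventions or {}
        values = dict(context)
        values.update(interventions)
        pending = set(self.equations) - set(interventions)
        while pending:
            for var in list(pending):
                parents, f = self.equations[var]
                if all(p in values for p in parents):
                    values[var] = f(*(values[p] for p in parents))
                    pending.remove(var)
        return values

# An electrical discharge is a common cause of the flash and the thunder:
model = CausalModel({
    "discharge": (["u_storm"], lambda u: u),
    "flash":     (["discharge"], lambda d: d),
    "thunder":   (["discharge"], lambda d: d),
})
print(model.evaluate({"u_storm": True}))
# Intervening on the flash leaves the thunder untouched:
print(model.evaluate({"u_storm": True}, {"flash": False})["thunder"])  # True
```

The second call shows the asymmetric determination encoded by structural equations: setting the child variable `flash` by intervention does not propagate back to its parent or to `thunder`.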

A distinctive feature of the semantics of structural equations is that it encodes some notion of asymmetric determination. The values of the parent variables determine the value of the child variable, but not the other way around. Together with Halpern, Pearl devised several counterfactual analyses of causation in the framework of causal models. These analyses incorporate elements of the inferential approach to causation. To be more precise, condition AC2(a) of the original and the updated Halpern-Pearl definition of actual causality is counterfactual, while AC2(b) is inferential. The latter condition requires that a given effect must be a consequence of its cause for a range of background conditions. (See Halpern (2016) for details.) Notably, the framework of causal models has also been used to develop inferential approaches to causation that work without a counterfactual condition along the lines of Lewis (1973). We explain the basic concepts of the accounts by Beckers and Vennekens (2018) and Bochman (2018).

Beckers and Vennekens (2018) begin by explaining what it is for the value of a variable to determine the value of another variable—according to a structural equation—in a context $$L$$. This explanation gives us the notion of $$C$$ being a direct possible contributing cause of $$E$$. Such a direct possible contributing cause is actual if $$C$$, $$E$$, and the context $$L$$ are actual. For $$C$$ to be an actual contributing cause of $$E$$, there must be a sequence

$\langle C, C_1, \ldots, C_n, E\rangle\quad (n\geq 1)$

such that each element of the sequence is a direct actual contributing cause of its successor (if there is one). Finally, $$C$$ is a cause of $$E$$ iff

(i)
$$C$$ is an actual contributing cause of $$E$$,
(ii)
the causal path between $$C$$ and $$E$$ satisfies a specific temporal constraint, while
(iii)
there would be no such path if $$\neg C$$ were realized instead of $$C$$.

The temporal constraint basically requires that the events of the sequence $$\langle C, C_1, \ldots, C_n, E\rangle$$ are temporally ordered, i.e., no element in the sequence occurs later than its successor.
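The chaining part of this definition can be pictured as reachability in a directed graph whose edges are the direct actual contributing causes. The sketch below illustrates only the sequence condition; the temporal constraint (ii) and the counterfactual condition (iii) are not modelled, and the names and the example are hypothetical.

```python
# Edges record direct actual contributing causation; C is an actual
# contributing cause of E iff a sequence <C, C1, ..., Cn, E> exists,
# i.e., iff E is reachable from C. Conditions (ii) and (iii) of the
# full definition are deliberately left out of this sketch.
def has_causal_path(direct, c, e):
    """Depth-first search for a directed path from c to e."""
    stack, seen = list(direct.get(c, [])), set()
    while stack:
        node = stack.pop()
        if node == e:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(direct.get(node, []))
    return False

direct = {"poisoning": ["poisoned drink"], "poisoned drink": ["death"]}
print(has_causal_path(direct, "poisoning", "death"))  # True
print(has_causal_path(direct, "death", "poisoning"))  # False
```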

Beckers and Vennekens (2018) deliver rigorous solutions to a number of causal problems, including overdetermination, early and late preemption, switches, and some scenarios of prevention. The account is non-reductive since structural equations represent elementary causal dependencies. The framework of causal models does not give us a further analysis of such dependencies. The benefit of this strategy is that the problem of spurious causal relations in common cause scenarios does not arise in the first place. The problem of singular causal relations is not explicitly addressed, but it allows for a trivial solution in the framework of causal models. We can build a causal model with a structural equation saying that a collision of the Earth with a huge meteor leads to the extinction of dinosaurs. A non-trivial solution is as challenging as in the other inferential approaches. It seems very difficult to cite all the different causal laws and background conditions that causally determine dinosaurs to die.

The analysis by Beckers and Vennekens (2018) is inferential in spirit, but not explicitly so: they do not define a logical inference system on top of the structural equations. Bochman (2018) goes a step further in this respect. His analysis is centred on what computer scientists call a production inference relation. This inference relation is characterized by certain metalogical axioms, such as Strengthening (of the antecedent), Weakening (of the consequent), Cut, and Or. To get some understanding of a production inference relation, it suffices to know that it can be reduced to a set of rules

$A_1 \wedge \ldots \wedge A_k \Rightarrow B_1 \vee \ldots \vee B_n$

where $$A_1, \ldots, A_k$$ and $$B_1, \ldots, B_n$$ are propositional literals, i.e., propositional atoms or negations thereof. Such a rule represents an elementary causal dependency in a manner akin to a structural equation. The symbol $$\Rightarrow$$ accordingly stands for some notion of asymmetric determination. Bochman (2018) shows how structural equations using only binary variables may be translated into a set of rules.

It seems as if we could now define $$C$$ to be a cause of $$E$$ iff we can infer $$E$$ from $$C$$ in the context of a set $$K$$ of background rules and facts. But things are more complicated. The challenge is that the inferential path between $$C$$ and $$E$$ must also represent some active causal path, i.e., a sequence of events such that each event actually caused its successor (if there is one). In other words, we need some notion of an inferential path that corresponds to a causal path in the actual setting of events. Once such a notion is established, Bochman defines $$C$$ to be a cause of $$E$$ iff (i) we can infer $$E$$ from $$C$$ in the context $$K$$, while (ii) we cannot infer $$E$$ from the context $$K$$ alone. The details of this inferential account are a bit intricate, and so we must refer the reader to Bochman (2018) for a full understanding. There, we can also find rigorous solutions to a number of causal problems, including overdetermination, early and late preemption, and some scenarios of prevention. It is worth noting, finally, that Bochman presents his analysis as an inferential explication of NESS conditions.
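Bochman’s two conditions can be illustrated with a simple forward-chaining sketch. This is a drastic simplification, assuming only rules with single-literal heads (Bochman’s heads may be disjunctions) and ignoring the active-path requirement; the example rules are hypothetical.

```python
# Production rules with single-literal heads (a simplification of Bochman's
# rules, whose heads may be disjunctions). All names are hypothetical.
def produces(literals, rules):
    """Close a set of literals under the production rules."""
    derived = set(literals)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def is_cause(c, e, context, rules):
    """(i) E is producible from C together with the context K, and
    (ii) E is not producible from the context K alone."""
    return (e in produces(context | {c}, rules)
            and e not in produces(context, rules))

rules = [({"short_circuit"}, "fire"), ({"match", "oxygen"}, "fire")]
print(is_cause("short_circuit", "fire", set(), rules))  # True
print(is_cause("match", "fire", {"oxygen"}, rules))     # True
print(is_cause("oxygen", "fire", set(), rules))         # False
```

The third call fails condition (i): without the match in the context, the oxygen alone does not produce the fire.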

## 3. Conclusion

Regularity theories of causation have lost acceptance among philosophers to a considerable extent. One reason is David Lewis’s (1973) influential attack on the regularity approach which concludes that its prospects are dark. And indeed, the problem of spurious causation—to pick just one—was severe for the regularity and inferential theories back then. Mackie’s INUS account and simple deductive-nomological accounts alike cannot properly distinguish genuine from spurious causes, for example in common cause scenarios. In the meantime, however, regularity and inferential theories have been developed that offer at least tentative solutions to the problem of spurious causation, for instance Baumgartner (2013), Spohn (2012), and Andreas and Günther (2020) (which are presented in §1.4, §2.2, §2.3, respectively).

Another challenge is the direction of causation. David Lewis (1973, 1979) thought that counterfactuals themselves yield an account of the direction of causation. This account, however, turned out to be rather controversial and is rarely adopted (see, e.g., Frisch 2014: 204n). To determine the direction of causation, regularity and inferential theories tend to rely on broadly Humean constraints on the temporal order between cause and effect. This move comes at the cost of excluding backwards-in-time causation. We must wonder how severe this problem is. So far, there is neither a commonly agreed understanding of the very idea of backwards causation nor actual empirical evidence for it (cf. the entry on backwards causation). The direction of causation remains a point of controversy within regularity, inferential, and counterfactual approaches.

To conclude, the contemporary regularity and inferential theories have made some remarkable progress. Valid criticisms have been largely overcome and many causal scenarios—which continue to be challenging for counterfactual accounts of causation—have been solved. In particular, Baumgartner (2013), Beckers and Vennekens (2018), and Andreas and Günther (2020) all solve the set of scenarios including overdetermination, early and late preemption, and switches. In light of these recent developments, it should not be considered evident anymore that counterfactual theories of causation have a clear edge over regularity and inferential theories.

## Bibliography

• Alchourrón, Carlos E., Peter Gärdenfors, and David Makinson, 1985, “On the Logic of Theory Change: Partial Meet Contraction and Revision Functions”, Journal of Symbolic Logic, 50(2): 510–530. doi:10.2307/2274239
• Andreas, Holger, 2019, “Explanatory Conditionals”, Philosophy of Science, 86(5): 993–1004. doi:10.1086/705447
• Andreas, Holger and Mario Günther, 2019, “On the Ramsey Test Analysis of ‘Because’”, Erkenntnis, 84(6): 1229–1262. doi:10.1007/s10670-018-0006-8
• –––, 2020, “Causation in Terms of Production”, Philosophical Studies, 177(6): 1565–1591. doi:10.1007/s11098-019-01275-3
• Baumgartner, Michael, 2008, “Regularity Theories Reassessed”, Philosophia, 36(3): 327–354. doi:10.1007/s11406-007-9114-4
• –––, 2013, “A Regularity Theoretic Approach to Actual Causation”, Erkenntnis, 78(S1): 85–109. doi:10.1007/s10670-013-9438-3
• Baumgartner, Michael and Christoph Falk, forthcoming, “Boolean Difference-Making: A Modern Regularity Theory of Causation”, The British Journal for the Philosophy of Science, first online: December 2020. doi:10.1093/bjps/axz047
• Beckers, Sander and Joost Vennekens, 2018, “A Principled Approach to Defining Actual Causation”, Synthese, 195(2): 835–862. doi:10.1007/s11229-016-1247-1
• Beebee, Helen, 2006, Hume on Causation, London: Routledge. doi:10.4324/9780203966600
• Beebee, Helen, Christopher Hitchcock, and Peter Menzies (eds.), 2009, The Oxford Handbook of Causation, Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199279739.001.0001
• Bochman, Alexander, 2018, “Actual Causality in a Logical Setting”, in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden: International Joint Conferences on Artificial Intelligence Organization, 1730–1736. doi:10.24963/ijcai.2018/239
• Bromberger, Sylvain, 1966, “Why-Questions”, in Mind and Cosmos: Essays in Contemporary Science and Philosophy, R. Colodny (ed.), Pittsburgh: University of Pittsburgh Press, 86–111.
• Carroll, John W., 2003, “Laws of Nature”, The Stanford Encyclopedia of Philosophy (Fall 2016), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2016/entries/laws-of-nature/>
• Davidson, Donald, 1967, “Causal Relations”, The Journal of Philosophy, 64(21): 691–703. doi:10.2307/2023853
• –––, 1995, “Laws and Cause”, Dialectica, 49(2–4): 263–280. doi:10.1111/j.1746-8361.1995.tb00165.x
• Dowe, Phil, 2000, Physical Causation, (Cambridge Studies in Probability, Induction and Decision Theory), Cambridge: Cambridge University Press. doi:10.1017/CBO9780511570650
• Faye, Jan, 2001, “Backward Causation”, The Stanford Encyclopedia of Philosophy (Summer 2018), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2018/entries/causation-backwards/>
• Frank, Philipp, 1932, Das Kausalgesetz und Seine Grenzen, Wien: Springer.
• Friederich, Simon and Peter W. Evans, 2019, “Retrocausality in Quantum Mechanics”, The Stanford Encyclopedia of Philosophy (Summer 2019), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2019/entries/qm-retrocausality/>
• Frisch, Mathias, 2014, Causal Reasoning in Physics, Cambridge: Cambridge University Press. doi:10.1017/CBO9781139381772
• Garrett, Don, 1993, “The Representation of Causation and Hume’s Two Definitions of ‘Cause’”, Noûs, 27(2): 167–190. doi:10.2307/2215754
• –––, 1997, Cognition and Commitment in Hume’s Philosophy, New York: Oxford University Press.
• –––, 2009, “Hume”, in Beebee, Hitchcock, and Menzies 2009: 73–91.
• Gärdenfors, Peter, 1988, Knowledge in Flux, Cambridge, MA: MIT Press.
• –––, 1991, “Belief Revision and Nonmonotonic Logic: Two Sides of the Same Coin?: Abstract”, in Logics in AI, J. van Eijck (ed.), (Lecture Notes in Computer Science 478), Berlin, Heidelberg: Springer Berlin Heidelberg, 52–54. doi:10.1007/BFb0018432
• Goodman, Nelson, 1955, Fact, Fiction, and Forecast, Cambridge, MA: Harvard University Press.
• Grove, Adam, 1988, “Two Modellings for Theory Change”, Journal of Philosophical Logic, 17(2): 157–170. doi:10.1007/BF00247909
• Halpern, Joseph, 2016, Actual Causality, Cambridge, MA: MIT Press.
• Hart, H. L. A. and Tony Honoré, 1985, Causation in the Law, second edition, Oxford: Clarendon Press.
• Hempel, Carl G., 1965, Aspects of Scientific Explanation, New York: The Free Press.
• Hempel, Carl G. and Paul Oppenheim, 1948, “Studies in the Logic of Explanation”, Philosophy of Science, 15(2): 135–175. doi:10.1086/286983
• Hitchcock, Christopher, 1997, “Probabilistic Causation”, The Stanford Encyclopedia of Philosophy (Fall 2018), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/fall2018/entries/causation-probabilistic/>
• –––, 2018, “Causal Models”, The Stanford Encyclopedia of Philosophy (Summer 2020), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/sum2020/entries/causal-models/>
• Hume, David, 1739, A Treatise of Human Nature, London. Second edition of L. A. Selby-Bigge & P. H. Nidditch (eds.), Oxford: Clarendon Press, 1978.
• Lewis, David, 1973, Counterfactuals, Oxford: Blackwell.
• –––, 1973, “Causation”, Journal of Philosophy, 70(17): 556–567. doi:10.2307/2025310
• –––, 1979, “Counterfactual Dependence and Time’s Arrow”, Noûs, 13(4): 455–476. doi:10.2307/2215339
• –––, 1986, On the Plurality of Worlds, Oxford: Blackwell.
• Mackie, J. L., 1965, “Causes and Conditions”, American Philosophical Quarterly, 2(4): 245–264.
• –––, 1974, The Cement of the Universe: A Study of Causation, Oxford: Clarendon Press.
• McLaughlin, Brian and Karen Bennett, 2005, “Supervenience”, The Stanford Encyclopedia of Philosophy (Winter 2018), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/win2018/entries/supervenience/>
• Menzies, Peter and Helen Beebee, 2019, “Counterfactual Theories of Causation”, The Stanford Encyclopedia of Philosophy (Spring 2020), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/spr2020/entries/causation-counterfactual/>
• Mill, John Stuart, 1843, A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation, two volumes, London: Parker.
• Moore, Michael, 2019, “Causation in the Law”, The Stanford Encyclopedia of Philosophy (Winter 2019), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/win2019/entries/causation-law/>
• Morris, William Edward and Charlotte R. Brown, 2014, “David Hume”, The Stanford Encyclopedia of Philosophy (Summer 2020), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/sum2020/entries/hume/>
• Pap, Arthur, 1952, “Philosophical Analysis, Translation Schemas, and the Regularity Theory of Causation”, The Journal of Philosophy, 49(21): 657–666. doi:10.2307/2020991
• Paul, L. A. and Ned Hall, 2013, Causation: A User’s Guide, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199673445.001.0001
• Pearl, Judea, 2000, Causality: Models, Reasoning and Inference, New York: Cambridge University Press. Second edition, 2009.
• Psillos, Stathis, 2002, Causation and Explanation, London: Routledge. doi:10.4324/9781315710716
• –––, 2009, “Regularity Theories”, in Beebee, Hitchcock, and Menzies 2009: 131–157.
• Ramsey, Frank Plumpton, 1928, “Universals of Law and of Fact”, manuscript. Printed in his Foundations: Essays in Philosophy, Logic, Mathematics, and Economics, D. H. Mellor (ed.), London: Routledge & Kegan Paul, 1978.
• –––, 1931, “General Propositions and Causality”, in his The Foundations of Mathematics and Other Logical Essays, R. B. Braithwaite (ed.),, New York: Humanities Press, 1931, 237–255.
• Reichenbach, Hans, 1956, The Direction of Time, Berkeley/Los Angeles, CA: University of California Press.
• Rott, Hans, 1986, “Ifs, Though, and Because”, Erkenntnis, 25(3): 345–370. doi:10.1007/BF00175348
• Russell, Bertrand, 1913, “On the Notion of Cause”, Proceedings of the Aristotelian Society, 13(1): 1–26. doi:10.1093/aristotelian/13.1.1
• –––, 1921, “Psychological and Physical Causal Laws”, Lecture V of The Analysis of Mind, London: George Allen and Unwin, 93–107. Reprinted in The Basic Writings of Bertrand Russell, Robert E. Egner and Lester E. Denonn (eds.), New York: Simon and Schuster, 1961, 287–295.
• Spohn, Wolfgang, 1988, “Ordinal Conditional Functions: A Dynamic Theory of Epistemic States”, in Causation in Decision, Belief Change, and Statistics: Proceedings of the Irvine Conference on Probability and Causation, Volume II, William L. Harper and Brian Skyrms (eds.), Dordrecht: Kluwer, 105–134. doi:10.1007/978-94-009-2865-7_6
• –––, 2006, “Causation: An Alternative”, The British Journal for the Philosophy of Science, 57(1): 93–119. doi:10.1093/bjps/axi151
• –––, 2012, The Laws of Belief: Ranking Theory and Its Philosophical Applications, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199697502.001.0001
• –––, 2018, “How the Modalities Come into the World”, Erkenntnis, 83(1): 89–112. doi:10.1007/s10670-016-9874-y
• Stalnaker, Robert, 1968, “A Theory of Conditionals”, in Studies in Logical Theory (American Philosophical Quarterly Monograph Series), Nicholas Rescher (ed.), Oxford: Blackwell, 98–112.
• Strawson, Galen, 1989, The Secret Connexion: Causation, Realism, and David Hume, Oxford: Oxford University Press.
• Strevens, Michael, 2004, “The Causal and Unification Approaches to Explanation Unified-Causally”, Noûs, 38(1): 154–176. doi:10.1111/j.1468-0068.2004.00466.x
• –––, 2007, “Mackie Remixed”, in Causation and Explanation, Joseph Keim Campbell, Michael O’Rourke, and Harry S. Silverstein (eds.), Cambridge, MA: MIT Press, 4–93.
• –––, 2008, Depth: An Account of Scientific Explanation, Cambridge, MA: Harvard University Press.
• Venn, John, 1889, The Principles of Empirical or Inductive Logic, London: Macmillan.
• Weatherson, Brian, 2009, “David Lewis”, The Stanford Encyclopedia of Philosophy (Winter 2016), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/win2016/entries/david-lewis/>
• Wittgenstein, Ludwig, 1922, Tractatus Logico-Philosophicus, London: Routledge & Kegan Paul.
• Woodward, James, 2001, “Causation and Manipulability”, The Stanford Encyclopedia of Philosophy (Winter 2016), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/win2016/entries/causation-mani/>
• –––, 2003, Making Things Happen: A Theory of Causal Explanation, Oxford: Oxford University Press. doi:10.1093/0195155270.001.0001
• Wright, John P., 1983, The Sceptical Realism of David Hume, Manchester: Manchester University Press.
• Wright, Richard W., 1985, “Causation in Tort Law”, California Law Review, 73(6): 1735–1828.
• –––, 2011, “The NESS Account of Natural Causation: A Response to Criticisms”, in Perspectives on Causation, Richard Goldberg (ed.), Portland, OR: Hart Publishing, ch. 14.