Supplement to The Analytic/Synthetic Distinction

Analyticity and Chomskyan Linguistics

  • 1. Background
  • 2. Semantic Features
  • 3. I- vs E-languages
  • 4. Persisting Explanatory Burdens
  • 5. Chomsky’s Doubts and Retreats
  • 6. Conclusion

[References to sections in the main entry are prefixed with “ASD”. Footnotes are substantive.]

1. Background

This supplement to the entry on the analytic-synthetic distinction will not set out in any detail Noam Chomsky’s important proposals about the nature of human language.[19] Whether or not one ultimately agrees with those proposals, there can be no doubt that they have been tremendously influential in linguistics, philosophy and in cognitive science generally (a discipline that his work partly initiated). For anyone doubtful of their relevance to philosophy, it is enough simply to note how frequently philosophers before him, such as Ayer (1934 [1952]) and Wittgenstein (1953 [1967]), appealed to a notion of “grammar” that is entirely ungrounded in the kind of detailed empirical research that Chomsky and others have assiduously pursued for some seventy years. This supplement will be concerned only with ways in which Chomskyan proposals have recast and deepened our understanding of what an account of the analytic might involve.

The main aim of a Chomskyan linguistics is to explain the remarkable fact that virtually all human beings automatically acquire a natural language that enables them to understand a potential infinity of novel sentences. Chomsky’s approach to this issue has its roots in much of the Fregean and Positivist traditions that were reviewed in the main entry. He was a student in the 1940s of the formal linguist Zellig Harris, who recommended he study with the philosopher Nelson Goodman, who was then teaching seminars on the manuscript that would become his The Structure of Appearance (Goodman, 1951 [1977]). This was a formidable work in which Goodman employed the then still fairly novel techniques of “logical construction” developed by the Positivists (especially Carnap, 1928 [1967]) to derive the structure of experience from a sensory base (see ASD, §1.2ff). Although Chomsky was likely already inclined to formal approaches, the elegance and formal precision of both Harris’ and Goodman’s work became and has remained an ideal throughout the many developments of his views.

Chomsky regards this human linguistic ability as the result of a largely innate sensitivity to the elaborate structures of natural language, structures that, he argued, the methods of 1940s (so-called) “structuralism” and behaviorism in linguistics were inadequate to describe. The logical techniques he learned from Harris and Goodman (as well as from the work of the logician Emil Post, 1936) seemed to him far more promising for setting out the “generative syntax” of a language, or the recursive system for combining symbols in ways that give rise to those structures. They also promised to do so in ways independent of any controversial “semantics,” or complex relations of the symbols to ideas or phenomena in the world. Just how independent the syntax of a natural language is from its “semantics” has been the topic of continuing controversy, some of it involving the issue of analyticity, about which Chomsky has vacillated in important ways that will be discussed below.
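To make the notion of a “generative syntax” concrete, here is a minimal sketch in Python of a recursive rewrite-rule grammar. It is of course no part of Chomsky’s own formalism; the categories, rules and lexicon are all invented for illustration. The point is only that a finite set of recursive rules suffices to generate an unbounded set of sentences:

    import random

    # A toy context-free grammar. The second VP rule embeds a whole S,
    # so the grammar generates unboundedly many distinct sentences.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
        "VP":  [["Vt", "NP"], ["Vs", "that", "S"]],
        "Det": [["the"], ["a"]],
        "Adj": [["small"], ["green"]],
        "N":   [["linguist"], ["sentence"], ["idea"]],
        "Vt":  [["analyzes"], ["admires"]],
        "Vs":  [["believes"], ["doubts"]],
    }

    def generate(symbol="S"):
        """Recursively expand a symbol; anything without a rule is a word."""
        if symbol not in GRAMMAR:
            return [symbol]
        expansion = random.choice(GRAMMAR[symbol])
        return [word for part in expansion for word in generate(part)]

    for _ in range(3):
        print(" ".join(generate()))
    # e.g.: "a linguist doubts that the green idea admires a sentence"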

An important feature of Chomsky’s work has been its focus on data that are immediately available to almost any native speaker of a natural language, but which were by and large little noticed until Chomsky called attention to them. They generally consist of pairs of very similar sentence-like strings of words, one of which is perfectly acceptable and the other clearly not (the latter marked by “*”). To cite some simple examples: virtually all English speakers would balk at such strings as

(1)
*Who did John and kiss Mary? (cf. John and who kissed Mary?).
(2)
*Who did stories about terrify Mary? (cf. Stories about who terrified Mary?)
(3)
*She’s as likely as he’s to get ill (cf. She is as likely as he is to get ill)
(4)
*I recommended the Brie without tasting (cf. Which cheese did you recommend without tasting?)[20]

Much of Chomskyan linguistics has been devoted to determining the surprisingly complex principles needed to account for a virtual infinity of such examples.

2. Semantic Features

(1)–(4) exhibit the kind of grammatical, or syntactic, data that have most concerned Chomsky: the acceptability of certain strings of English words largely without regard to their meaning, or semantics. But Chomsky was not uninterested in semantics,[21] and he often endorses the analytic-synthetic distinction:

If the best argument for dispensing with the analytic – synthetic distinction is that it is of no use to the field linguist, then virtually everyone who actually works in descriptive semantics, or ever has, must be seriously in error, since such work is shot through with assumptions about connections of meaning, which will (in particular) induce examples of the analytic-synthetic distinction. (2000, p. 47; see also pp. 34–35; 61–65)

A particular issue that is implicit in Chomsky’s discussion of language is an issue of semantic productivity that was crucial to Frege:

the possibility of our understanding sentences which we have never heard before rests evidently on this, that we can construct the sense of a sentence out of parts that correspond to words. (Frege, c. 1914 [1979]: 79; emphasis added)

This semantic productivity is made possible by what has come to be called the compositionality (q.v.) of language, which Frege famously captured on the model of mathematical functions, whereby the semantic value of a complex expression is a function of the semantic values of its parts. To take a familiar example, the truth of “Snow is white and grass is black” is a truth-function of the truth values of its parts (in this case, it would obviously be false, given that one constituent is false).
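The functional model is easy to illustrate computationally. The following minimal sketch (in Python; the stipulated “facts” are just for the example) computes the truth value of a complex sentence purely as a function of the values of its parts:

    # Stipulated truth values for the atomic sentences of the example.
    FACTS = {"snow is white": True, "grass is black": False}

    def value(expr):
        """Return the semantic value (a truth value) of an expression,
        computed compositionally from the values of its parts."""
        if isinstance(expr, str):            # atomic sentence
            return FACTS[expr]
        op, *parts = expr                    # complex: (connective, parts...)
        if op == "and":
            return all(value(p) for p in parts)
        if op == "or":
            return any(value(p) for p in parts)
        if op == "not":
            return not value(parts[0])
        raise ValueError(f"unknown connective: {op}")

    # "Snow is white and grass is black" comes out False, as in the text:
    print(value(("and", "snow is white", "grass is black")))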

In an (at the time) influential attempt to combine the Fregean insight with Chomsky’s then nascent syntactic program, Jerry Fodor and Jerry Katz (1963) also stressed that:

The fact that a speaker can understand any sentence must mean that the way he understands sentences which he has never previously encountered is compositional: on the basis of his knowledge of the grammatical properties and the meanings of the morphemes of the language, the rules which the speaker knows enable him to determine the meaning of a novel sentence in terms of the manner in which the parts of the sentence are composed to form the whole. (Fodor and Katz, 1963, p. 171, emphasis added)

To capture the semantic aspects of “understanding,” Fodor and Katz proposed that, in addition to markers for grammatical categories, there are also “semantic markers” that provide

the means by which we can decompose the meaning of one sense of a lexical item into its atomic concepts, and thus exhibit the semantic structure IN a dictionary entry and the semantic relations BETWEEN dictionary entries. (p. 185, emphasis original)

They provide as an example an analysis of the several meanings of “bachelor” in terms of markers for animal (for young, unmated seals) and human, the latter bifurcating into person with lowest academic degree and male, which further divides into unmarried and young knight (p. 186). A semantic theory would provide such systems of markers for, presumably, a wide swath of expressions in natural languages. Generally:

Grammatical markers mark the formal differences on which the distinction between well-formed and ill-formed strings of morphemes rests, while semantic markers give each well-formed string the conceptual content that permits it to be a means of genuine verbal communication. (p. 210)

(Katz and Postal, 1964, pursued similar proposals.) Extending the scope of such a program, Katz (1972) further enlarged on the analytic data (ASD, §1) and drew attention to speakers’ agreements about, e.g., synonymy, redundancy, antonymy, and implication. And, to be sure, at least some examples of such semantic data seem as striking and in need of explanation as purely syntactic cases.[22] Since, as we have seen (ASD, §3.7), the explanations of analytic data offered by Quine, Putnam and (later) Fodor seem empirically inadequate, perhaps the best explanation of this data is to be had in terms of semantic markers.
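To fix ideas, the Fodor-Katz analysis of “bachelor” sketched above can be rendered as a simple tree of markers ending in distinguishing glosses. The following is an illustrative reconstruction in Python, not their actual 1963 notation, and the glosses at the leaves are paraphrases:

    # A simplified rendering of the marker analysis of "bachelor";
    # parenthesized leaves play the role of Fodor and Katz's "distinguishers".
    BACHELOR = {
        "animal": {"(young unmated seal)": {}},
        "human": {
            "(person holding the lowest academic degree)": {},
            "male": {
                "(adult who has never married)": {},
                "(young knight serving under another's standard)": {},
            },
        },
    }

    def senses(tree, path=()):
        """Yield each sense as its full chain of markers and distinguisher."""
        for marker, subtree in tree.items():
            if subtree:
                yield from senses(subtree, path + (marker,))
            else:
                yield path + (marker,)

    for sense in senses(BACHELOR):
        print(" -> ".join(sense))
    # e.g.: human -> male -> (adult who has never married)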

As we’ll see in §5, Chomsky vacillated in his views of the analytic. In his (1965), however, he came to share these interests of Fodor and Katz (1963), and included in the purview of linguistic theory at least some clearly semantic phenomena:

It is clear, as Katz and Fodor have emphasized, that the meaning of a sentence is based on the meaning of its elementary parts and the manner of their combination. … [T]here are cases that suggest the need for an even more abstract notion of grammatical function and grammatical relation than any that has been developed so far, in any systematic way. Consider, for example, these sentence pairs:

(19)
(i)
John strikes me as pompous – I regard John as pompous.
(ii)
I liked the play – the play pleased me. …

Clearly, there is a meaning relation, approaching a variety of paraphrase, in these cases. … It seems that beyond the notions of surface structure (such as “grammatical subject”) and deep structure (such as “logical subject”), there is still some more abstract notion of “semantic function” still unexplained. (1965, pp. 161–3)

Chomsky (1965) regarded Fodor and Katz’s semantic markers as giving rise to “selectional rules” that supplement more purely syntactic “subcategorization rules,” and claimed that:

Failure to observe a selectional rule will produce such examples as:

(20)
#Colorless green ideas sleep furiously.
(21)
#Golf plays John
(22)
#Misery loves company

which he (1965, p. 149) regards as “deviant” (n.b., (22) can’t be literal, misery being inanimate). Such examples are now usually marked with a “#” rather than a “*” to indicate semantic/pragmatic rather than purely syntactic anomaly.

Chomsky mentions other examples of the sort that were treated as analytic data in ASD §1:

(23)
#Oculists are generally better trained than eye-doctors.
(24)
#Both of John’s parents are married to aunts of mine.[17]
(25)
#I knew you would come, but I was wrong. (1965, p. 77)

But he also cites as an example one that many philosophers might actually find more purely syntactic:

(26)
Mary expects to feed herself; so Mary expects to feed Mary

claiming that this is “analytic, with the three occurrences of Mary taken to be coreferential” (2000, p. 47).[23]

And he sometimes treats as syntactic, features of words that might initially seem semantic:

features such as [Human], [+Abstract] … play a role in the functioning of the syntactic component, no matter how narrowly syntax is conceived. (1965, p. 151)

Such cases are presumably like the linguistic treatments of “gender,” which are largely distinct from the biological distinction (thus, in Spanish, the noun for the moon – la luna – is feminine, while in German the moon – der Mond – is masculine, and the girl – das Mädchen – neuter; see Corbett, 1991, §7.3, for further such divergencies. Similar divergencies are observed in the case of “animacy”; see, e.g., Trompenaars et al., 2021). Just which “semantic” features are in this way assimilated into the syntax is not always entirely clear, but it presumably has to do with the presence of some more purely syntactic effect, as in examples Chomsky discusses in the same passage:

(27)
*A very walking/hitting person appeared.

vs.

(28)
A very frightening/amusing person appeared.

where the gerund adjective seems to involve a selection rule requiring [+abstract] (see §4 below for more such mixed semantic/syntactic examples).

Chomsky (2000, p. 32) stresses that verbs, with their relational structure, are richer sources of semantic data than the nouns that philosophers tend to discuss,[24] and he frequently discusses the (supposed) inferential connection between persuade and intend:

It seems reasonable to suppose that semantic relations between words like “persuade,” “intend,” “believe,” can be expressed in purely linguistic terms (namely: if I persuade you to go, then you intend to go). (Chomsky 1977, p. 142; see also 1980a, p. 62; 2000, pp. 62ff, 176ff)

He sometimes even presses surprisingly strong views about philosophically controversial cases. Contrary to the trend among many materialists who advocate computational theories of mental processes, Chomsky (2000) notes with approval Wittgenstein’s claim

[T]he question whether machines think cannot seriously be posed: “We can only say of a human being and what is like one that it thinks” (Wittgenstein 1953 [1967], §360), maybe dolls and spirits; that is the way the tool is used. (2000, p. 44)

(Cf. also Wittgenstein, 1953 [1967], §281, although it’s worth wondering whether Wittgenstein – or Chomsky! – really intended to be making claims merely about features of lexical entries on a par, per above, with a claim about linguistic gender; see ASD, §5 for further discussion of this example.)

In any case, Chomsky early on makes an important claim that would seem to support the traditional philosophical interest in the analytic:

[T]he syntactic component contains a lexicon, and … each lexical item is specified in the lexicon in terms of its intrinsic semantic features, whatever these may be. (1965, p. 198fn11, italics mine)

Thus, knowing the meaning of a term would seem to entail that one appreciates what “intrinsic semantic features” are specified in its lexical entry. However, understanding exactly the role such semantic features actually play requires a further important distinction.

3. I- vs. E-languages

Crucial to Chomsky’s views is a distinction he draws (1965, §1.1) between “competence” and “performance,” or speakers’ ability to understand (or reject) a potential infinitude of examples vs. their actual use of words in speech. He sharpens this distinction in his (1986, pp. 20–2) by distinguishing what he regards as essentially the ordinary, folk notion of external languages, what he calls “E-languages” – English, Mandarin, Swahili, ASL and other languages that are commonly taken to be spoken or signed by various social groups – from what he regards as the theoretically more interesting notion of an internal “I-language.” This is not a “language” that is spoken at all, but an internal, largely innate computational system in the brain (or a stable final-state of that system) that is responsible for a speaker’s linguistic competence.[25] Chomsky (2000, pp. 16–40) argues that E-languages are too ill-defined by vague, pragmatic and changing social boundaries and conventions to serve as a theoretically interesting category (but see Ringe and Eska, 2013, for evidence nevertheless of their scientific viability). Whether or not his views prove ultimately correct, it is worth pursuing them as an explanatory avenue that could conceivably provide an empirically principled basis for the analytic.

This distinction between I- and E-languages dovetails nicely with recent, independent philosophical attention to the complex interplay between syntax, semantics and pragmatics in ordinary conversation, particularly in the phenomenon of “polysemy.” This is the use of a word that retains its single “meaning” despite being used to indicate different, conflicting referents or truth-conditions, sometimes in a single sentence. For example, the bank is understood polysemously in

(29)
The bank was destroyed by fire and so moved across the street.

The sentence is perfectly acceptable despite the switch in intended reference: what was destroyed was a building but what moved across the street was a financial institution. It stands in contrast to

(30)
#The bank was eroding fast and so raised its interest rates.

where such a switch between the alluvial and financial senses would involve clear ambiguity, or homonymy (Chomsky, 2000, pp. 180–1).

A particularly interesting and, for proponents of the analytic, troublesome related phenomenon is what the philosopher Friedrich Waismann (1945) called “open texture,” or the serious possibility of using old terms in new ways to accommodate previously unanticipated cases. Simon Blackburn (1996) provides a timely example of the term “mother,” whose

open texture is revealed if, through technological advance, differences open up between the mother that produces the ovum, the mother that carries the foetus to term, and the mother that rears the baby. It will then be fruitless to pursue the question of which is the ‘real’ mother, since the term is not adapted to giving a decision in the new circumstances. (Blackburn 1996)[26]

For Chomsky, phenomena of polysemy and open-texture underwrite an important claim he advances:

We cannot assume that statements (let alone sentences) have truth conditions. At most they have something more complex: “truth indications” in some sense. There is good evidence that words have intrinsic properties of sound, form, and meaning; but also open texture, which allows their meanings to be extended and sharpened in certain ways. (Chomsky 1996, p. 52)

He (2000, pp. 36–52, 188) points out that this proposal is a way of fleshing out Peter Strawson’s (1950) claim that it is not words but people that refer by using words. The items that are true or false are not sentences by themselves, but statements made on specific occasions in specific contexts.

Indeed:

A lexical item provides us with a certain range of perspectives for viewing what we take to be the things in the world, or what we conceive in other ways; these items are like filters or lenses, providing ways of looking at things and thinking about the products of our minds. … Referring to London, we can be talking about a location or area, people who sometimes live there, the air above it (but not too high), buildings, institutions, etc., in various combinations (as in London is so unhappy, ugly, and polluted that it should be destroyed and rebuilt 100 miles away, still being the same city). (2000, p. 36)

These “perspectives” are provided by what he, Robyn Carston (2002, 2012) and Paul Pietroski (2005, 2018) have variously called “indicators,” or “pointers” to concepts, or sets of concepts in “regions of conceptual space,” which contain various kinds of information out of which a speaker can make a selection in using the I-language structures to express a truth-conditional content in a particular context:[27] that is, truth-conditions are a matter of performance, not competence. As Sperber and Wilson (1986/95) and many others have stressed, ordinary communication is often driven by a concern to be relevant, saying all that is needed as efficiently as possible for the purposes at hand. To this end, speakers may delete material provided by the I-language, and exploit gestures, intonation, social conventions, and contextual and background information to convey truth-conditions in ways that go well beyond what I-language structures provide.

Given the richness of the resources available to speakers, they certainly needn’t be slaves to the particular constraints of their I-language. Indeed, they often aren’t, as when, for example, English speakers omit subjects and auxiliaries that (they know full well) their (I-)language requires, asking, e.g., Finish your thesis? or Been to New York lately?; or when determiners and auxiliaries are omitted in headlines (President shot!) or public signs (No vehicles allowed in park);[28] or when logicians introduce formalisms, or chemists notations like “H2O,” whose syntax may be quite different from that of any natural language (cf. Chomsky, 2000, pp. 42–43). A speaker selects those features of an I-language structure that are appropriate for the context in which a specific utterance is being produced, and ignores those that aren’t. What the I-language provides might be regarded as defaults: the structures and features of lexical items are normally respected, but may be overridden for any number of good reasons, such as cogency, efficiency or the development of scientific theory. Like ethical precepts against, for example, lying and killing, they are prima facie or default presumptions that can be overridden by other considerations (soldiers killing in war can still appreciate that killing is wrong). Thus one can override syntactic principles and produce tokens of President shot!, as well as of (22), Misery loves company, knowing but overriding the [–abstract] selection restriction on loves.
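This “default” picture can be modeled straightforwardly. In the following toy sketch (in Python, with invented entries; nothing here is an actual proposed lexicon), a selectional feature is checked like a default: a violation is merely flagged with “#” rather than ruled out, leaving the speaker free to override it:

    # Toy lexical entries: "loves" imposes a [-abstract] default on its
    # subject, per the text's reading of (22), "Misery loves company".
    LEXICON = {
        "loves":  {"subject": {"-abstract"}},
        "misery": {"+abstract"},
        "John":   {"-abstract"},
    }

    def flip(feature):
        """'+abstract' <-> '-abstract'."""
        return ("-" if feature[0] == "+" else "+") + feature[1:]

    def violated_defaults(subject, verb):
        """Return the selectional defaults the subject's features violate."""
        return {f for f in LEXICON[verb]["subject"]
                if flip(f) in LEXICON[subject]}

    for subj in ("John", "misery"):
        v = violated_defaults(subj, "loves")
        mark = "# (default {} overridden)".format(", ".join(sorted(v))) if v else "ok"
        print(f"{subj} loves company -> {mark}")
    # John loves company -> ok
    # misery loves company -> # (default -abstract overridden)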

Such a view would seem to offer the possibility of an ecumenical resolution of some of the disputes we discussed in ASD (§§3.4, 3.7) between Putnam (1962 [1975]) and Katz (1990, pp. 216ff) about whether Cats are animals is analytic or could be refuted by discovering the things are robots. They might both be right: Cats are animals might be analytic insofar as [+animal] is indeed part of the I-language item pronounced “cats,” but tokens of it may nevertheless be false as a matter of use in the light of surprising scientific discoveries. Similarly, in the Canadian dispute over whether gay marriage is a contradiction (ASD, §3.7), Chomsky (1965, p. 77) might be right in his claim about (24), #Both of John’s parents are married to aunts of mine, being anomalous (at least in 1965), while the court could be right about how the term should be used legally, a case of “open texture” not unlike the case of “mother” mentioned by Blackburn above. And logical constants such as and and or may have certain classical inferential roles specified in their entries which are nonetheless sometimes not respected, either because of expressive efficiency or overriding theory (cf. ASD, fn 9). (So the definition of “analytic” would have to be qualified: p is analytic iff p could be known by virtue of knowing the meanings of its constituent words, unless it is false. Thus, someone could know something is analytic in this sense despite not believing it to be true.)

Polysemous use of language might also go some way towards explaining a persistent puzzle about the analytic: why some claims seem analytic despite the persistent failure/impossibility of providing successful analyses (cf. Fodor, 1981, 1998), and why the best we seem to be able to find are what Wittgenstein (1953 [1967], §§66–67) called “family resemblances” between different uses of a word:[29] there may be nothing in common between all uses of a univocal word.

But does anything go? The trouble with Wittgenstein’s “family resemblances” and Waismann’s “open texture” is that they seem totally unconstrained. Everything, after all, resembles everything else in some respect or other (any two things are co-members of an infinite number of sets), and so a term could be extended in virtually any way at all. There must be some constraints on meaning if people are to acquire their competence not only to understand acceptable strings, but to come to think and to communicate with each other in the remarkably rich and stable ways they manifestly do, despite familiar failures. As Fodor and Lepore (1998 [2002]) observe, “Surely there just couldn’t be a word that’s polysemous between lamb-the-animal and (say) beef-the-meat? Or between lamb-the-animal and succotash-the-mixed-vegetable?” (p. 117).

4. Persisting Explanatory Burdens

All this pushes the question back to how to justify the identification of specific semantic constraints as part of the identity of a lexical item. This has not proven easy to do. Fodor (1970, 1981) contested on linguistic grounds some of the most prized examples of analyticities, such as #(8) in set II (ASD, §1), If Holmes killed Sikes, then Sikes is dead (although see Pietroski, 2002, for a reply). And Fodor, J.D., et al. (1975) raised doubts about whether any kind of “semantic decomposition” of lexical items into “basic” semantic features is psychologically real – indeed, whether there is a psychologically principled basis for determining the features into which they might be decomposed: given the failure of traditional empiricist efforts to reduce concepts to sensory experience (see ASD, §§2–3), it’s unlikely the basic features would be perceptual (see Fodor, 1981; 1998, p. 44).

However, it’s important to distinguish here, as Fodor, J.D., et al., 1980, and many psycholinguists don’t always do, between decomposition as a performance issue, reflected, say, in response times to presented sentences, and decomposition as an issue of linguistic competence, which might not always be reflected in performance (cf. McCourt, 2021). One might access a lexical item in a variety of ways more efficient than consulting its definition. For example, Jackendoff (1992) and others have called attention to the heavy use of spatial metaphors in many grammatical constructions. But such facts don’t entail that the concepts of the domains to which these metaphors are applied – say, the structure of the mind, social relations, or mathematics – are, themselves, somehow intrinsically spatial, or really thought by anyone to be so. People may find it useful to conceive of many domains in spatial ways. However, conceptions – common beliefs, family resemblances, prototypes, memories of exemplars – are one thing, people’s concepts and the meanings of their words quite another. Two mathematicians can have the thought that there is no largest prime, even if one of them thinks of numbers spatially and the other purely algebraically (see Rey, 1985). Moreover, as Fodor and Lepore (1998 [2002]) repeatedly stress (following Frege, see §2 above), it’s essential to concepts and word meanings that they are compositional, the meaning of a syntactic combination being in general a function of the meanings of its parts; and this prototypes and images fail to be (pet fish is not composable from prototypes or images of pet and fish).

But even if we restrict attention to competence, one must still ask Quine’s question: how are we to distinguish an analytic claim from simply a tenacious worldly belief? When he’s sympathetic to the analytic (which, as will be evident shortly, he is not always), Chomsky claims, for example:

It has been argued plausibly that concepts of a locational nature – including goal and source of action, object moved, etc. – enter widely into lexical structure, often in quite abstract ways. In addition, notions like actor, recipient of action, instrument, event, intention, causation and others are pervasive elements of lexical structure, with their specific properties and interrelations. (Chomsky 2000, p. 62).

And he (2000, pp. 128, 183, 204fn14) endorses work of Julius Moravcsik (1975, 1990) and James Pustejovsky (1995) on the role of such Aristotelian categories in natural language semantics.

Aristotelian categories may well have a place; but one needs to be careful. Pustejovsky (1995, 2002) offers a rich theory of “the generative lexicon,” whereby lexical items have “argument,” “event,” “qualia” and “inheritance” structures, each of which “contributes a different kind of information to the meaning of a word” (p. 419). Now, the first two, argument and event structures, have been independently proposed as parts of syntactic theory (see Pietroski, 2018), but it is hard to see how the latter two, qualia and inheritance structures, are separable from merely common beliefs about the phenomena in the world words pick out (cf. ASD, §3.6C). Pustejovsky, for example, writes:

the qualia structure of a word specifies four aspects of its meaning:

  • the relation between it and its constituent parts;
  • that which distinguishes it within a larger domain (its physical characteristics);
  • its purpose and function;
  • whatever brings it about. (2002, p. 418)

and the

Inheritance Structure [specifies] how the word is globally related to other concepts in the lexicon. (2002, p. 419)

But the purposes and functions of things, and the relations they bear to their parts and to other things, as well as issues of, e.g., intention and causation, are a large part of simply what constitutes a person’s mere beliefs about the world, which, as Putnam (1965 [1975]) rightly observed (ASD, §3.4A), are frequently revised in the light of evolving theories without apparent changes in the meanings of words. Speakers perfectly competent with the word “music” may be ignorant of or revise their ideas about its constituent parts (instruments, counterpoint, harmonic structure), its purpose and function (amusement, edification, the glory of god), as well as about how on earth musicians manage to produce it (or perhaps it’s just naturally occurring “music of the spheres!”). And many speakers may understand natural phenomena as artifacts, as when they regard the world as the creation of God, and earthquakes as expressions of His wrath. As Jerry Fodor (1987) pointed out:

People can have radically false theories and really crazy views, consonant with our understanding perfectly well, thank you, which false views they have and what radically crazy things it is they actually believe. Berkeley thought that chairs are mental, for heaven’s sake! Which are we to say he lacked, the concept MENTAL or the concept CHAIR? (1987, p. 125)

(See Fodor and Lepore, 1998 [2002], for related objections, and Pustejovsky, 1998, for a reply.)

Specific proposals of a recent linguistics text about the semantic structure of some English words make the problem particularly vivid. Laurel Brinton (2000) writes, for example, that:

Trot – requires [+QUADRUPED] subject
{The horse, *the money, *the spider} trotted home

Fly – requires [+WINGED] subject
{The airplane, the bird, *the goat} flew north

Admire – requires [+HUMAN] subject
{Judy, *the goldfish} admires Mozart (Brinton 2000, p. 154)

Set aside the straightforward empirical inadequacies of the proposals (don’t missiles and balls fly through the air without wings?). Even if they matched speakers’ intuitions, there would still be the issue of their significance. It would, of course, be surprising to encounter trotting spiders, flying goats, and goldfish admiring Mozart, but it’s hard to see why we should conclude that our surprise is due to constraints of the I-language rather than merely to commonplace beliefs about spiders, goats and goldfish. Perhaps there are meaning constraints in the vicinity of these examples (trots requires legs, admire a mental agent), but the challenge is to provide a principled basis for insisting upon specific ones.[30]

These examples point to a still further serious problem of determining just what, if anything, it is specifically about the I-language that presents difficulties for violations of semantic constraints. Are such worldly features as [+quadruped], [+winged] or (returning to standard examples of bachelor and pediatrician) [–married] and [+doctor] really included in lexical items processed by the I-language? Notice that the analytic data certainly don’t present quite the same immediate difficulties in processing that are presented by syntactic ones:

(31)
#Some bachelor is married,
(32)
#No one who runs moves,

or denials that ancestor is transitive or marriage symmetric (see ASD, §1), seem to be perfectly parsable, in contrast to:

(33)
*Who did John and kiss Mary

and

(34)
*She’s as likely as he’s to get ill.

Indeed, for starters, what are in fact (non-formally) contradictory sentences, e.g.,

(35)
There is a finite number of primes

often appear as premises in perfectly intelligible reductio ad absurdum proofs that show they are! And philosophers often quite intelligibly dispute proposed analyses of, e.g., cause and know, as well as supposed “category mistakes” that many have claimed are involved in claiming, e.g., that reasons are causes, or sensations events in the brain.

Lastly, unlike syntactically unacceptable cases, mere semantic anomalies can be perfectly well embedded in other phrases: thus,

(36)
Sue denied that some bachelor is married.
(37)
John was amused by the thought that the number three likes Tabasco sauce.

are perfectly acceptable, unlike

(38)
*Sue wondered who did John and kiss Mary.
(39)
*John asked who stories about frightened Ann.

So it’s not clear that the I-language so much as cares about mere semantic anomalies. Perhaps they really aren’t phenomena of language at all, but rather features of our conceptual and/or belief systems, the (unembedded) sentences being simply silly and false – precisely as Quine (1960 [2013], p. 210) proposed.

It’s worth contrasting these cases with semantic cases that the I-language does seem to care about. These don’t seem to be abundant – we mentioned a few earlier in discussing (29)–(30) (§3 above) – but one striking set of examples are “negative polarity” items (“NPI”s), such as ever and at all, which can only appear in certain, e.g., “negative” contexts.[31] Thus:

(40)
Sue doubts Tom has ever flown.
(41)
Sue doesn’t like pistachios at all.

are fine, but not:

(42)
*Sue knows Tom has ever flown.
(43)
*Sue likes pistachios at all.

Here the semantic features seem to have a kind of syntactic reflex, prohibiting material that, in contrast to mere “category mistakes,” cannot in general be acceptably embedded:

(44)
*John thinks Sue knows he has ever flown.
(45)
*It is widely believed that Sue likes pistachios at all.
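By way of illustration, the contrast in (40)–(45) can be approximated with a crude checker. The sketch below (in Python) simply requires that some “negative” licensor precede the NPI; the word lists are invented for the example, and real NPI licensing is a matter of structural scope and downward entailment, not linear order:

    # Crude linear approximation of NPI licensing: an NPI ("ever", "at all")
    # must be preceded by some licensing "negative" element.
    LICENSORS = {"not", "never", "nobody", "no", "doubts", "denies", "doesn't"}

    def npi_ok(sentence):
        """True if every NPI in the sentence follows some licensor."""
        words = [w.strip(".,!?").lower() for w in sentence.split()]
        npis = [i for i, w in enumerate(words)
                if w == "ever"
                or (w == "at" and i + 1 < len(words) and words[i + 1] == "all")]
        licensors = [i for i, w in enumerate(words) if w in LICENSORS]
        return all(any(l < n for l in licensors) for n in npis)

    for s in ("Sue doubts Tom has ever flown.",       # (40)
              "Sue doesn't like pistachios at all.",  # (41)
              "Sue knows Tom has ever flown.",        # (42)
              "Sue likes pistachios at all."):        # (43)
        print(("ok:" if npi_ok(s) else "* :"), s)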

So here we have what would seem to be some serious linguistic evidence for the reality of what seems to be a semantic phenomenon.[32] Until we find analogous data that show some such syntactic reflexes to category mistakes and others of the analytic data, it’s going to be hard to claim that these latter “analytic” cases are due to some genuine property of the language system, as opposed to merely something about our belief systems, in the way Quinean sceptics suspect (cf. ASD, §3.5) – and, as we will now see, as sometimes Chomsky himself does!

5. Chomsky’s Doubts and Retreats

Chomsky vacillates a great deal about the analytic. Passages quoted so far seem decidedly sympathetic to it, but interwoven throughout those passages are more Quinean reservations that (given the earlier passages) are so surprising and significant that they need to be quoted at length. What is particularly distinctive and important about his vacillations is that, unlike the traditional philosophical reactions that he sometimes shares, Chomsky sees the issues not as ones involving merely intuitions and behavioral dispositions, but as ones of ascertaining their aetiologies in what he suspects is a subtle and complex, but still not well-understood internal organization of speakers’ minds.

He (1965) early on recognizes the difficulty of distinguishing semantic from syntactic phenomena:

A decision as to the boundary separating syntax and semantics (if there is one) is not a prerequisite for theoretical and descriptive study of syntactic and semantic rules. On the contrary, the problem of delimitation will clearly remain open until these fields are much better understood than they are today. Exactly the same can be said about the boundary separating semantic systems from systems of knowledge and belief. That these seem to interpenetrate in obscure ways has long been noted. (1965, p. 159; emphasis added)

Indeed, it isn’t at all clear that semantic phenomena should be included in the language faculty at all:

Do the “semantic rules” of natural language that are alleged to give the meanings of words belong to the language faculty strictly speaking, or should they be regarded perhaps as centrally embedded parts of a conceptual or belief system, or do they subdivide in some way? (1980a, p. 62)

Addressing some of the examples that were discussed in ASD, §3, he continues:

Much of the debate about these matters has been inconclusive. It turns on dubious intuitions as to whether we would still call our pets “cats” if we learned that they are robots controlled by Martians, or whether we would call the stuff in the Ganges “water” were we to discover that it differs in chemical composition from what flows in the Mississippi…. (1980a, p. 62)

And, recalling his own examples of persuade/intend, chase/follow:

[Q]uite generally, arguments for analytic connections of some sort involving properties of the verbal system, with its network of thematic relations, are more compelling than those devised for common nouns. Even assuming there to be certain analytic connections, as I believe to be the case, the question remains whether these are to be assigned to the language faculty, hence to appear in some form in its representations of meaning, or to a conceptual system. (1980a, p. 62; emphasis added)

This last passage seems to raise an interesting possibility of a “conceptual” system distinct from both the language faculty and a general belief confirmation system, along lines many traditional philosophers might find congenial (cf. the talk in §2 of lexical items containing not the semantic features themselves, but merely “pointers to (sets of) concepts in conceptual space”). But, of course, such a possibility would need to be spelt out empirically, and the prospects aren’t clear (see Harman 1994 [1999] for prima facie difficulties).

Sometimes, moreover, Chomsky seems to capitulate almost entirely to Quine:

Much of what is often regarded as central to the study of meaning cannot be dissociated from systems of belief in any natural way. (Chomsky 1975, p. 23)

Indeed:

With regard to “meaning holism” [Quine] may well turn out to be correct, at least in large part. (2000, p. 61; cf., ASD, §3.4fn13)

However, despite all of Chomsky’s vacillations, the really important point to bear in mind is his methodological one:

The status of a statement as a truth of meaning or of empirical fact can only be established by empirical inquiry, and considerations of many sorts may well be relevant; for example, inquiry into language acquisition and variation among languages. The question of the existence of analytic truths and semantic connections more generally is an empirical one, to be settled by inquiry that goes well beyond the range of evidence ordinarily brought to bear in the literature on these topics. (2000, pp. 63–64)

He provides as an example his (1980b) dispute with Harman (1980) over the “persuade”/“intend” connection,[33] writing:

Suppose that two people differ in their intuitive judgments as to whether I can persuade John to go to college without his deciding or intending to do so (see Harman 1980). We are by no means at an impasse. Rather, we can construct conflicting theories and proceed to test them. (2000, p. 64)

Specifically, along lines he himself pursues:

One who holds that the connection between persuade and decide or intend is conceptual will proceed to elaborate the structure of the concepts, their primitive elements, the principles by which they are integrated and related to other cognitive systems, and so on; and will seek to show that other properties of language and other aspects of the acquisition and use of language can be explained in terms of the very same assumptions about the innate structure of the language faculty, in the same language and others, and that the same concepts play a role in other aspects of thought and understanding. (2000, p. 64)

On the other hand:

One who holds that the connection is one of deeply held belief, not connection of meaning, has the task of developing a general theory of belief fixation that will yield the right conclusions in these and numerous other cases. (2000, p. 64)

What with the phenomenon of “confirmation holism” that was discussed in ASD, §3.4, a general theory of belief fixation looks to be far more difficult than philosophers have traditionally supposed. Chomsky is likely sympathetic to Fodor’s (2000) pessimism in this regard, and so, unsurprisingly, concludes:

The first tack – in terms of innate conceptual structure – seems far more promising to me, and is the only approach that has any results or even proposals to its credit; it is, however, a matter of empirical inquiry, not pronouncements on the basis of virtually no evidence. (2000, p. 64; see also p. 129)

Perhaps the best expression of Chomsky’s ultimate view of semantics and the analytic is one he stated early on:

The syntactic and semantic structure of natural languages evidently offers many mysteries, both of fact and principle, and that any attempt to delimit the boundaries of these domains must certainly be quite tentative. (1965, p. 163)

6. Conclusion

Chomsky’s program offers a welcome invitation to explore the aetiologies of judgments regarding the analytic data much more deeply in the internal structure of the mind than did traditional appeals merely to intuitions and behavioral dispositions; but it remains to be seen whether in the end the aetiologies can ultimately ground the analytic-synthetic distinction any better than those appeals did. Perhaps a linguistic distinction is not what is wanted, but rather some independent theory of concepts that might better do the work many 20th-Century philosophers hoped the analytic would perform (cf. ASD, §2), and that is arguably the sort of investigation many philosophers have really had in mind. But here it is reasonable to regard the verdict as still out, the theoretical understanding of our understanding of language, concepts and the mind being a much subtler topic of on-going empirical research than many philosophers have traditionally supposed.

Copyright © 2022 by
Georges Rey