Devoir de Philosophie

Compositionality

Published 22/02/2012


A language is compositional if the meaning of each of its complex expressions (for example, ‘black dog’) is determined entirely by the meanings of its parts (‘black’, ‘dog’) and its syntax. Principles of compositionality provide precise statements of this idea. A compositional semantics for a language is a (finite) theory which explains how semantically important properties such as truth-conditions are determined by the meanings of parts and syntax. Supposing English to have a compositional semantics helps explain how finite creatures like ourselves have the ability to understand English’s infinitely many sentences. Whether human languages are in fact compositional, however, is quite controversial.

It is often supposed that meaning, within a context of use, determines truth-value. Meaning and context are also generally thought to determine what terms refer to, and what predicates are true of - that is, to determine extensions. Because of this, compositionality is generally taken to license a principle of substitutivity: if expressions have the same meaning, and substituting one for another in a sentence does not change the sentence’s syntax, the substitution can have no effect on truth. Given this, the idea that meaning is compositionally determined constrains what can be identified with meaning. For example, expressions with the same extension cannot always be substituted for one another without change of truth-value. Natural languages obey a principle of compositionality (PC) only if something more ‘fine-grained’ than extensions plays the role of meanings, so that expressions with the same extension can still have different meanings.

Some say that no matter how fine-grained we make meaning, there will be counterexamples to the claim that meaning is compositionally determined. For example, ‘woodchuck’ and ‘groundhog’ are synonymous. But the sentences ‘Mae thinks Woody is a woodchuck’ and ‘Mae thinks Woody is a groundhog’ apparently may diverge in truth-value.
(Certainly Mae may assert, with understanding, that Woody is a woodchuck, while denying, with understanding, that Woody is a groundhog. For she can understand ‘woodchuck’ and ‘groundhog’ without realizing that they are synonymous.) Such arguments lead to the conclusion that meaning is not compositionally determined (or that different forms cannot share meanings). There is no consensus about such arguments.

All agree that PCs need some restrictions - they do not, for instance, apply to words in quotational constructions. So if ‘thinks’ is implicitly quotational, the argument loses its force (see Use/mention distinction and quotation). If ‘thinks’, as Frege held, produces a (more or less) systematic shift in the semantic properties of the expressions in its scope, this perhaps reduces the threat to compositionality (see Sense and reference §§4, 5). Some, such as Davidson, find such accounts of ‘thinks’ incredible.

Linguists often understand the claim that a language is compositional as asserting an extremely tight correspondence between its syntax and semantics (see Syntax). A (simplified) version of such a claim is that (after disambiguating simple word forms) there is, for each simple word, a meaning and, for each syntactic rule used in sentence construction, an operation on meanings, such that the meaning of any sentence is mechanically determined by applying the operations on meanings (given by the rules used in constructing the sentence) to the meanings of the simple parts. (Often a host of extra restrictions is incorporated. For example: the operations may be limited to applying function to argument, or the order in which operations are applied may be settled by the structure of the sentence.) Some see such principles as providing significant constraints on semantic theories, constraints which may help us decide between theories which are in other respects equivalent (for example, Montague 1974).
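The tight-correspondence picture just described can be illustrated with a toy sketch. Everything here - the two-word lexicon, the model, the single composition rule - is invented for illustration and is not a claim about English: word meanings are sets of individuals (or functions on such sets), the only syntactic operation is function application, and a phrase's meaning is computed mechanically from its parse tree.

```python
# Toy compositional fragment (all data invented for illustration).
# Predicate meanings are sets of individuals; an intersective
# adjective is a function from sets to sets.
DOGS = {"Rex", "Fido"}
BLACK_THINGS = {"Rex", "Coal"}

lexicon = {
    "dog":   DOGS,
    "black": lambda noun_ext: noun_ext & BLACK_THINGS,  # function meaning
}

def combine(function_meaning, argument_meaning):
    """The single composition rule: apply function to argument."""
    return function_meaning(argument_meaning)

def meaning(tree):
    """Compute a phrase's meaning from the meanings of its parts and
    its (binary-branching) syntax - nothing else is consulted."""
    if isinstance(tree, str):      # a simple word: look it up
        return lexicon[tree]
    left, right = tree             # a syntactic rule: [function argument]
    return combine(meaning(left), meaning(right))

# ‘black dog’ = the black-function applied to the dog-set:
print(meaning(("black", "dog")))   # {'Rex'}
```

Note that the sketch builds in the extra restriction mentioned above: the only operation on meanings is function application, and the order of application is fixed by the tree's structure.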
Such strong PCs imply that every ambiguous sentence is either syntactically ambiguous or contains simple expressions which are themselves ambiguous (see Ambiguity). There are putative counterexamples to this. For instance, it has been alleged that ‘The women lifted the piano’ has two meanings, a ‘collective’ one (the group lifted it) and a ‘distributive’ one (each individual lifted it) (see Pelletier 1994).

Context sensitivity makes the formulation of PCs a delicate matter. In some important sense, differing uses of ‘that is dead’ have like meanings and syntax. But obviously they may differ in truth-value; presumably such differences may arise even within one context. How can this be reconciled with compositionality? We might (implausibly) say that demonstrative word forms, strictly speaking, lack meanings (only tokens have them). An alternative (see Kaplan 1989) sees the meaning of the word type ‘that’ as ‘incomplete’, needing contextual supplementation to determine a meaning of the sort other terms have. A third strategy insists that if ‘that is dead but that is not’ is true, its ‘that’s are different demonstrative word types. (Advocates of this view are good at seeing very small subscripts.) This view makes it generally opaque to speakers whether two tokens are tokens of the same word type, and thus opaque whether their arguments are logically valid. A fourth strategy sees context as providing an assignment of referents to uses of a type in that context. On most ways of working this out, it seems there is no true fleshing out of the principle that the meaning of a sentence type (a) is determined by the meanings of its parts and its syntax, and (b) determines, in context, the sentence’s truth-value. This approach also seems to undermine the idea that an argument like ‘that is dead; so that is dead’ is formally valid (see Demonstratives and indexicals).
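The fourth strategy can likewise be sketched in a toy model (again, every detail is invented): the standing meaning of the type ‘that’ is treated as a function which, given a context assigning a referent to each use of the type, yields a truth-evaluable content. One context can then verify ‘that is dead but that is not’, even though both occurrences of ‘that’ share a word type, a meaning and a syntax.

```python
# Toy model (invented): the extension of ‘is dead’ in the model.
DEAD = {"Socrates"}

def that(context, use_index):
    """Standing meaning of the type ‘that’: incomplete until the
    context supplies a referent for this particular use."""
    return context["demonstrata"][use_index]

def that_is_dead(context, use_index):
    """Truth-value of a use of ‘that is dead’ in a context."""
    return that(context, use_index) in DEAD

# One context, two uses of the same word type, demonstrating
# different things:
context = {"demonstrata": {0: "Socrates", 1: "Plato"}}

print(that_is_dead(context, 0))   # True  - the first ‘that’ is dead
print(that_is_dead(context, 1))   # False - the second ‘that’ is not
```

As the sketch makes vivid, what is constant across the two uses is the sentence type's meaning; what varies is only the context's assignment of referents to uses - which is exactly why, on this approach, the type's meaning cannot by itself determine a truth-value in a context.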
Even if relatively tight PCs should prove untenable (as claims about natural languages), it would not follow that natural languages did not have compositional semantics - finite theories which assign truth-conditions and other semantically relevant properties to sentence types or their uses. It has been argued that only if they do would it be possible for finite creatures like ourselves to learn them (since the languages involve infinite pairings of sounds and meanings); analogous arguments hold that our ability to understand (in principle) natural languages requires the existence of such semantics.

Such arguments apparently presuppose that learning and understanding a language requires knowing a theory from which semantic facts (such as the fact that, for any speaker x and time t, a use of ‘J’existe encore’ by x at t says that, at t, x still exists) are deducible. While such a view is not implausible, alternative plausible views of competence do not support this sort of argument. For example, one might identify linguistic competence with the possession of syntactic knowledge (or even just with the possession of syntactic abilities) by someone with appropriate social and environmental relations and behavioural dispositions. Whether natural languages have compositional semantics, and whether the meanings of their sentences are determined simply by the meanings of their parts and syntax, is still not settled.
