Conceptual semantics

Conceptual semantics is a framework for semantic analysis developed mainly by Ray Jackendoff. Its aim is to provide a characterization of the conceptual elements by which a person understands words and sentences, and thus to provide an explanatory semantic representation (the title of a 1976 paper by Jackendoff). Explanatory in this sense refers to the ability of a given linguistic theory to describe how a component of language is acquired by a child (as proposed by Noam Chomsky; see Levels of adequacy).

Recently, conceptual semantics in particular, and lexical semantics in general, have taken on increasing importance in linguistics and psycholinguistics. Many contemporary theories of syntax (how sentences are constructed from individual words) rely on elements that are idiosyncratic to words themselves. As a result, a sound theory accounting for the properties of the meanings of words is required.

Meaning and decomposition

Like many semantic theorists, Jackendoff argues that a decompositional method is necessary to explore conceptualization. Just as a physical scientist tries to understand matter by breaking it down into progressively smaller parts, a scientific study of conceptualization proceeds by breaking down, or decomposing, meanings into smaller parts. This decomposition cannot go on forever, however: it must reach bottom at some stage. That bottom is the level of conceptual structure, the level of mental representations which encode the human understanding of the world. It contains the primitive conceptual elements out of which meanings are built, together with their rules of combination; Jackendoff envisages these conceptual primitives as the semantic equivalents of phonological features.

Just as generative syntax posits a finite set of syntactic categories and rules for combining them, conceptual semantics posits ‘a finite set of mental primitives and a finite set of principles of mental combination’ governing their interaction, a system Jackendoff calls the ‘grammar of sentential concepts’ (Jackendoff 1990: 9). The conceptual structure of a lexical item is an element with zero or more open argument slots, which are filled by the syntactic complements of that item. Jackendoff’s starting point is a close analysis of the meanings of lexemes, aimed at bringing out parallelisms and contrasts which reveal the nature of the conceptual structures underlying them. What this method shows, he argues, is that the psychological organization on which meaning rests ‘lies a very short distance below the surface of everyday lexical items – and that progress can be made in exploring it’ (1991: 44). The primitives it posits are highly abstract, and they permit interesting connections to be drawn between apparently unrelated meanings.
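To illustrate the notation, a simplified sketch (in the style of Jackendoff’s analyses, not a definitive formulation) treats the verb enter as an Event in which a Thing traverses a Path ending IN another Thing:

[Event GO ([Thing ]i, [Path TO ([Place IN ([Thing ]j)])])]

Here GO, TO and IN are conceptual primitives, the labels Event, Thing, Path and Place mark the ontological categories of the constituents, and the indexed empty brackets are the open argument slots filled by the verb’s subject and object respectively.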

Problems with conceptual semantics

Jackendoff’s system could be criticized for precisely this feature: its highly abstract primitives. These may permit interesting connections to be made between apparently unrelated meanings, but how justified are we in believing that these connections are cognitively real? Clearly, the more abstract the conceptual primitives we propose, the greater the number of possible connections between domains we can make. A similar criticism of arbitrariness has been made against cognitive semantics. What guarantee do we have, for instance, that a conceptual feature like [PL] (plural) really exists? In its current early state, the theory seems somewhat arbitrary and unconstrained: the investigator simply looks for plausible underlying conceptual structures, but there are no clear procedures for determining when a primitive is justified. Jackendoff has addressed this question in two ways. First, he has stated that it is simply too early to demand that the theory justify its primitives: as in any immature science, all we have to go on are hunches, and only when we have a good description of the semantic phenomena can we begin to constrain the theory (1990: 4). Second, he adopts a holistic approach to the justification of his primitives:

In fact, an isolated primitive can never be justified: a primitive makes sense only in the context of the overall system of primitives in which it is embedded. With this proviso, however, I think a particular choice of primitives should be justified on the grounds of its capacity for expressing generalizations and explaining the distribution of the data. That is, a proposed system of primitives is subject to the usual scientific standards of evaluation.
[1]

References
