Finitary relation

This article is about the set-theoretic notion of relation. For the common case, see binary relation.
For other uses, see Relation (disambiguation).

In mathematics, a finitary relation has a finite number of "places". In set theory and logic, a relation is a property that assigns truth values to k-tuples of individuals. Typically, the property describes a possible connection between the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to whether the property does or does not hold.

An example of a ternary relation (i.e., a relation between three individuals) is: "X was introduced to Y by Z", where (X, Y, Z) is a 3-tuple of persons; for example, "Beatrice Wood was introduced to Henri-Pierre Roché by Marcel Duchamp" is true, while "Karl Marx was introduced to Friedrich Engels by Queen Victoria" is false.

Informal introduction

Relation is defined formally in the section Formal definitions below. In this section we introduce the concept of a relation with a familiar everyday example. Consider a relation involving three roles that people might play, expressed in a statement of the form "X thinks that Y likes Z". The facts of a concrete situation could be organized in a table like the following:

Relation S : X thinks that Y likes Z
Person X    Person Y    Person Z
Alice       Bob         Denise
Charles     Alice       Bob
Charles     Charles     Alice
Denise      Denise      Denise

Each row of the table records a fact or makes an assertion of the form "X thinks that Y likes Z ". For instance, the first row says, in effect, "Alice thinks that Bob likes Denise". The table represents a relation S over the set P of people under discussion:

P = {Alice, Bob, Charles, Denise}.

The data of the table are equivalent to the following set of ordered triples:

S = {(Alice, Bob, Denise), (Charles, Alice, Bob), (Charles, Charles, Alice), (Denise, Denise, Denise)}.

By a slight abuse of notation, it is usual to write S(Alice, Bob, Denise) to say the same thing as the first row of the table. The relation S is a ternary relation, since there are three items involved in each row. The relation itself is a mathematical object, defined in terms of concepts from set theory (i.e., the relation is a subset of the Cartesian product P × P × P), that carries all of the information from the table in one neat package. Mathematically, then, a relation is simply a set of ordered tuples.
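
To make this set-of-triples view concrete, here is a minimal Python sketch (the names P, S, and holds are chosen only for this illustration):

    # The set P of people under discussion and the ternary relation S,
    # represented directly as a set of ordered triples, i.e. a subset of P x P x P.
    P = {"Alice", "Bob", "Charles", "Denise"}
    S = {
        ("Alice", "Bob", "Denise"),
        ("Charles", "Alice", "Bob"),
        ("Charles", "Charles", "Alice"),
        ("Denise", "Denise", "Denise"),
    }

    def holds(x, y, z):
        """True exactly when "x thinks that y likes z" according to the table."""
        return (x, y, z) in S

    print(holds("Alice", "Bob", "Denise"))  # True  -- the first row of the table
    print(holds("Bob", "Alice", "Denise"))  # False -- no such row in the table

Writing S(Alice, Bob, Denise) in the text corresponds to the membership test performed by holds.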

The table for relation S is an extremely simple example of a relational database. The theoretical aspects of databases are the specialty of one branch of computer science, while their practical impacts have become all too familiar in our everyday lives. Computer scientists, logicians, and mathematicians, however, tend to see different things when they look at these concrete examples and samples of the more general concept of a relation.

For one thing, databases are designed to deal with empirical data, and experience is always finite, whereas mathematics at the very least concerns itself with potential infinity. This difference in perspective brings up a number of ideas that may be usefully introduced at this point, if by no means covered in depth.

Relations with a small number of "places"

The variable k giving the number of "places" in the relation, 3 for the above example, is a non-negative integer, called the relation's arity, adicity, or dimension. A relation with k places is variously called a k-ary, a k-adic, or a k-dimensional relation. Relations with a finite number of places are called finite-place or finitary relations. It is possible to generalize the concept to include infinitary relations between infinitudes of individuals, for example infinite sequences; however, in this article only finitary relations are discussed, which will from now on simply be called relations.

Since there is only one 0-tuple, the so-called empty tuple ( ), there are only two zero-place relations: the one that always holds, and the one that never holds. They are sometimes useful for constructing the base case of an induction argument. One-place relations are called unary relations. For instance, any set (such as the collection of Nobel laureates) can be viewed as a collection of individuals having some property (such as that of having been awarded the Nobel prize). Two-place relations are called binary relations or, in the past, dyadic relations. Binary relations are very common, given the ubiquity of relations such as equality, order, and set membership.

A k-ary relation is a straightforward generalization of a binary relation.

Formal definitions

When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation.

— Augustus De Morgan[1]

The simpler of the two definitions of k-place relations encountered in mathematics is:

Definition 1. A relation L over the sets X1, …, Xk is a subset of their Cartesian product, written L ⊆ X1 × … × Xk.

Relations are classified according to the number of sets in the defining Cartesian product, in other words, according to the number of terms following L: a relation over one set is called unary, over two sets binary, over three sets ternary, and over four sets quaternary.

Relations with more than four terms are usually referred to as k-ary or n-ary, for example, "a 5-ary relation". A k-ary relation is simply a set of k-tuples.
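
As an illustration of Definition 1 (the particular relation chosen here is only an example), the set L = {(x, y, z) : x, y, z are natural numbers and x + y = z} is a ternary (3-ary) relation over N, N, N, since L ⊆ N × N × N; the triple (1, 2, 3) belongs to L because 1 + 2 = 3, while (1, 2, 4) does not.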

The second definition makes use of an idiom that is common in mathematics, stipulating that "such and such is an n-tuple" in order to ensure that such and such a mathematical object is determined by the specification of n component mathematical objects. In the case of a relation L over k sets, there are k + 1 things to specify, namely, the k sets plus a subset of their Cartesian product. In the idiom, this is expressed by saying that L is a (k + 1)-tuple.

Definition 2. A relation L over the sets X1, …, Xk is a (k + 1)-tuple L = (X1, …, Xk, G(L)), where G(L) is a subset of the Cartesian product X1 × … × Xk. G(L) is called the graph of L.
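
For instance, under Definition 2 the relation S of the informal introduction would be packaged as the 4-tuple S = (P, P, P, G(S)), where each of the three domains is the set P of people and G(S) is the set of four ordered triples listed above.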

Elements of a relation are more briefly denoted by using boldface characters, for example, the constant element a = (a1, …, ak) or the variable element x = (x1, …, xk).

A statement of the form "a is in the relation L " or "a satisfies L " is taken to mean that a is in L under the first definition and that a is in G(L) under the second definition.

The following considerations apply under either definition:

As a rule, whatever definition best fits the application at hand will be chosen for that purpose, and anything that falls under it will be called a relation for the duration of that discussion. If it becomes necessary to distinguish the two definitions, an entity satisfying the second definition may be called an embedded or included relation.

If L is a relation over the domains X1, …, Xk, it is conventional to consider a sequence of terms called variables, x1, …, xk, that are said to range over the respective domains.

Let a Boolean domain B be a two-element set, say, B = {0, 1}, whose elements can be interpreted as logical values, typically 0 = false and 1 = true. The characteristic function of the relation L, written ƒL or χ(L), is the Boolean-valued function ƒL : X1 × … × Xk → B, defined in such a way that ƒL(x) = 1 just in case the k-tuple x = (x1, …, xk) is in the relation L. Such a function can also be called an indicator function, particularly in probability and statistics, to avoid confusion with the notion of a characteristic function in probability theory.
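
Continuing the earlier Python sketch (reusing the set S defined there; the name f_S is again only illustrative), the characteristic function of S can be written as a function returning 1 or 0:

    # Characteristic (indicator) function of the ternary relation S:
    # it maps a triple to 1 if the triple is in S, and to 0 otherwise.
    def f_S(x, y, z):
        return 1 if (x, y, z) in S else 0

    print(f_S("Charles", "Alice", "Bob"))    # 1 -- the triple is in S
    print(f_S("Alice", "Alice", "Alice"))    # 0 -- the triple is not in S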

It is conventional in applied mathematics, computer science, and statistics to refer to a Boolean-valued function like ƒL as a k-place predicate. From the more abstract viewpoint of formal logic and model theory, the relation L constitutes a logical model or a relational structure that serves as one of many possible interpretations of some k-place predicate symbol.

Because relations arise in many scientific disciplines as well as in many branches of mathematics and logic, there is considerable variation in terminology. This article treats a relation as the set-theoretic extension of a relational concept or term. A variant usage reserves the term "relation" to the corresponding logical entity, either the logical comprehension, which is the totality of intensions or abstract properties that all of the elements of the relation in extension have in common, or else the symbols that are taken to denote these elements and intensions. Further, some writers of the latter persuasion introduce terms with more concrete connotations, like "relational structure", for the set-theoretic extension of a given relational concept.

History

The logician Augustus De Morgan, in work published around 1860, was the first to articulate the notion of relation in anything like its present sense. He also stated the first formal results in the theory of relations (on De Morgan and relations, see Merrill 1990). Charles Sanders Peirce restated and extended De Morgan's results. Bertrand Russell's The Principles of Mathematics (1903; 2nd ed. 1938) was historically important, in that it brought together in one place many 19th-century results on relations, especially orders, by Peirce, Gottlob Frege, Georg Cantor, Richard Dedekind, and others. Russell and A. N. Whitehead made free use of these results in their Principia Mathematica.

Notes

  1. De Morgan, A. (1858). "On the syllogism, part 3". Reprinted in Heath, P., ed. (1966). On the Syllogism and Other Logical Writings. Routledge. p. 119.

