Littlewood–Offord problem

In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset S of vectors and a convex subset A, the number of subsets of S whose sum lies in A.

The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord.[1] This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than $\left(\frac{c \log n}{\sqrt{n}}\right) 2^n$ of the $2^n$ possible subsums of S fall into the disc, where $c$ is an absolute constant.

In 1945 Paul Erdős improved the upper bound for d = 1 to

$$\binom{n}{\lfloor n/2 \rfloor} \approx 2^n \sqrt{\frac{2}{\pi n}} \qquad \text{(for large } n\text{)}$$

using Sperner's theorem.[2] This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting in which V is a normed space.[2]
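As an illustration of why the bound is sharp (a standard worked example stated here for concreteness, not drawn verbatim from the cited sources), suppose every $v_i$ equals 1. The subsums of S are then the integers $0, 1, \dots, n$, and the number of subsets of S with sum equal to $k$ is $\binom{n}{k}$. An open interval of length 2 centred at $\lfloor n/2 \rfloor$ contains no integer other than $\lfloor n/2 \rfloor$ itself, so exactly

$$\binom{n}{\lfloor n/2 \rfloor}$$

of the $2^n$ subsums fall into it, matching Erdős's upper bound.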

Suppose $S = \{v_1, \dots, v_n\}$. By subtracting

$$\frac{1}{2}\left(v_1 + \cdots + v_n\right)$$

from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form

$$\sum_{i=1}^{n} \varepsilon_i v_i$$

that fall in the target set A, where $\varepsilon_i$ takes the value 1 or −1 (with $\varepsilon_i = 1$ exactly when $v_i$ belongs to the chosen subset). This makes the problem into a probabilistic one, in which the question concerns the distribution of these random vectors and what can be said about it knowing nothing more about the $v_i$.
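The equivalence can be checked directly by brute force for small sets. The following is a minimal illustrative sketch in Python (the sample numbers, the choice of disc, and the function names are assumptions made for this example, not taken from the cited sources): it counts the subsums of a small real set that fall in a disc of radius one, counts the corresponding signed sums $\sum_i \varepsilon_i v_i$ in the rescaled disc, and checks that the two counts agree and respect Erdős's bound.

from itertools import combinations, product
from math import comb

def count_subsums_in_disc(vectors, centre, radius):
    # Count subsets of `vectors` whose sum lies in the open disc around `centre`.
    n = len(vectors)
    return sum(
        1
        for r in range(n + 1)
        for subset in combinations(vectors, r)
        if abs(sum(subset) - centre) < radius
    )

def count_signed_sums_in_disc(vectors, centre, radius):
    # Count sign choices eps_i in {+1, -1} with sum(eps_i * v_i) in the disc.
    return sum(
        1
        for signs in product((1, -1), repeat=len(vectors))
        if abs(sum(e * v for e, v in zip(signs, vectors)) - centre) < radius
    )

vectors = [1.0, 1.3, -2.1, 1.0, 4.7]   # every |v_i| >= 1 (hypothetical sample data)
shift = sum(vectors) / 2               # the change of origin described above
centre, radius = 1.0, 1.0              # an arbitrary disc of radius one

# A subsum lies within `radius` of `centre` exactly when the corresponding
# signed sum lies within 2*radius of 2*(centre - shift).
a = count_subsums_in_disc(vectors, centre, radius)
b = count_signed_sums_in_disc(vectors, 2 * (centre - shift), 2 * radius)
print(a, b, a == b)                                  # the two counts agree
print(a <= comb(len(vectors), len(vectors) // 2))    # within Erdős's bound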

References

  1. Littlewood, J.E.; Offord, A.C. (1943). "On the number of real roots of a random algebraic equation (III)". Rec. Math. (Mat. Sbornik) N.S. 12 (54) (3): 277–286.
  2. Bollobás, Béla (1986). Combinatorics. Cambridge: Cambridge University Press. ISBN 0-521-33703-8.