Gibbs' inequality

[Portrait: Josiah Willard Gibbs]

In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality. It was first presented by J. Willard Gibbs in the 19th century.

Gibbs' inequality

Suppose that

    P = \{ p_1, \ldots, p_n \}

is a probability distribution. Then for any other probability distribution

    Q = \{ q_1, \ldots, q_n \}

the following inequality between positive quantities (since the p_i and q_i are positive numbers less than one) holds[1]:68

    - \sum_{i=1}^{n} p_i \log_2 p_i \leq - \sum_{i=1}^{n} p_i \log_2 q_i

with equality if and only if

    p_i = q_i

for all i. Put in words, the information entropy of a distribution P is less than or equal to its cross entropy with any other distribution Q.

The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written:[2]:34

    D_{\mathrm{KL}}(P \| Q) \equiv \sum_{i=1}^{n} p_i \log_2 \frac{p_i}{q_i} \geq 0.

Note that the use of base-2 logarithms is optional, and allows one to refer to the quantity on each side of the inequality as an "average surprisal" measured in bits.
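
The statement can be checked numerically. The following short Python sketch is not part of the original article; it assumes only NumPy and uses two arbitrarily chosen example distributions to compare the entropy of P, its cross entropy with Q, and the Kullback–Leibler divergence between them.

    import numpy as np

    # Two arbitrary example distributions on the same four outcomes.
    p = np.array([0.5, 0.25, 0.125, 0.125])
    q = np.array([0.25, 0.25, 0.25, 0.25])

    entropy_p = -np.sum(p * np.log2(p))          # H(P)
    cross_entropy = -np.sum(p * np.log2(q))      # H(P, Q)
    kl_divergence = np.sum(p * np.log2(p / q))   # D_KL(P || Q)

    # Gibbs' inequality: H(P) <= H(P, Q), with the gap equal to D_KL(P || Q) >= 0.
    assert entropy_p <= cross_entropy + 1e-12
    assert abs(cross_entropy - entropy_p - kl_divergence) < 1e-12

    # Equality holds only when Q = P: D_KL(P || P) = 0.
    assert abs(np.sum(p * np.log2(p / p))) < 1e-12

For these particular numbers the entropy is 1.75 bits, the cross entropy is 2 bits, and the divergence is 0.25 bits, so the inequality holds with a strict gap; replacing q by p makes the divergence vanish, matching the equality condition.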

Proof

Since

    \log_2 a = \frac{\ln a}{\ln 2},

it is sufficient to prove the statement using the natural logarithm (ln). Note that the natural logarithm satisfies

    \ln x \leq x - 1

for all x > 0, with equality if and only if x = 1.

Let I denote the set of all i for which p_i is non-zero. Then

    - \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \geq - \sum_{i \in I} p_i \left( \frac{q_i}{p_i} - 1 \right)
        = - \sum_{i \in I} q_i + \sum_{i \in I} p_i
        = - \sum_{i \in I} q_i + 1
        \geq 0.

So

    - \sum_{i \in I} p_i \ln q_i \geq - \sum_{i \in I} p_i \ln p_i

and then trivially

    - \sum_{i=1}^{n} p_i \ln q_i \geq - \sum_{i=1}^{n} p_i \ln p_i

since the right hand side does not grow, but the left hand side may grow or may stay the same.
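
The displayed chain can also be traced numerically. The following Python sketch is only an illustration, not from the original article; it assumes NumPy and uses arbitrary example distributions, with P deliberately given a zero entry so that the restriction to the support I is exercised.

    import numpy as np

    # Arbitrary example distributions; P puts zero mass on the last outcome,
    # so the sums below run only over the support I = {i : p_i > 0}.
    p = np.array([0.5, 0.3, 0.2, 0.0])
    q = np.array([0.25, 0.25, 0.25, 0.25])

    support = p > 0                  # the index set I
    pI, qI = p[support], q[support]

    line1 = -np.sum(pI * np.log(qI / pI))   # -sum_I p_i ln(q_i / p_i)
    line2 = -np.sum(pI * (qI / pI - 1))     # bound obtained from ln x <= x - 1
    line3 = -np.sum(qI) + np.sum(pI)        # -sum_I q_i + sum_I p_i (= -sum_I q_i + 1)

    assert line1 >= line2 - 1e-12      # first inequality of the chain
    assert abs(line2 - line3) < 1e-12  # the middle lines agree exactly
    assert line3 >= -1e-12             # last line: the expression is non-negative

For this choice the first line evaluates to roughly 0.36 and the remaining lines to 0.25, so every step of the chain is satisfied.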

For equality to hold, we require:

  1. \frac{q_i}{p_i} = 1 for all i \in I, so that the approximation \ln \frac{q_i}{p_i} = \frac{q_i}{p_i} - 1 is exact.
  2. \sum_{i \in I} q_i = 1, so that equality continues to hold between the third and fourth lines of the proof.

This can happen if and only if

    p_i = q_i

for i = 1, ..., n.

Alternative proofs

The result can alternatively be proved using Jensen's inequality or the log sum inequality.

Corollary

The entropy of P is bounded by:[1]:68

    H(p_1, \ldots, p_n) \leq \log_2 n.

The proof is trivial: simply set q_i = 1/n for all i.
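
As a last sketch (again only an illustration assuming NumPy and an arbitrary example P), taking Q to be uniform makes the cross entropy collapse to \log_2 n, which yields the bound directly.

    import numpy as np

    p = np.array([0.7, 0.2, 0.05, 0.05])   # arbitrary example distribution
    n = len(p)
    q = np.full(n, 1.0 / n)                # Q set to the uniform distribution

    entropy_p = -np.sum(p * np.log2(p))
    cross_entropy_uniform = -np.sum(p * np.log2(q))   # equals log2(n) exactly

    assert abs(cross_entropy_uniform - np.log2(n)) < 1e-12
    assert entropy_p <= np.log2(n) + 1e-12   # the corollary: H(P) <= log2 n

Here H(P) is roughly 1.26 bits, comfortably below \log_2 4 = 2 bits.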

References

  1. Pierre Brémaud (6 December 2012). An Introduction to Probabilistic Modeling. Springer Science & Business Media. ISBN 978-1-4612-1046-7.
  2. David J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press. ISBN 978-0-521-64298-9.