Extremum estimator

In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective function, which depends on the data. The general theory of extremum estimators was developed by Amemiya (1985).

Definition

An estimator $\hat\theta$ is called an extremum estimator if there is an objective function $\hat{Q}_n$ such that

$$\hat\theta = \underset{\theta\in\Theta}{\operatorname{arg\,max}}\ \hat{Q}_n(\theta),$$

where Θ is the possible range of parameter values. Sometimes a slightly weaker definition is given:

$$\hat{Q}_n(\hat\theta) \,\geq\, \max_{\theta\in\Theta}\,\hat{Q}_n(\theta) - o_p(1),$$

where $o_p(1)$ denotes a random variable converging in probability to zero. With this modification $\hat\theta$ does not have to be the exact maximizer of the objective function, just sufficiently close to it.

The theory of extremum estimators does not specify what the objective function should be. There are various types of objective functions suitable for different models, and this framework allows the theoretical properties of such estimators to be analysed from a unified perspective. The theory only specifies the properties that the objective function has to possess, so once a particular objective function is selected, one only has to verify that those properties are satisfied.
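As a concrete illustration (not part of the original article), the sample mean is the extremum estimator associated with a least-squares objective $\hat{Q}_n(\theta) = -\tfrac{1}{n}\sum_i (x_i-\theta)^2$. The following Python sketch recovers it by brute-force maximization over a discretized parameter set; all names and values here are illustrative choices.

```python
import numpy as np

# Illustrative sketch: the sample mean as an extremum estimator.
# The objective Q_n(theta) = -(1/n) * sum((x_i - theta)^2) is maximized
# at the sample mean, so a grid search over a candidate set Theta
# recovers the mean up to the grid resolution.

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)   # data, true parameter theta_0 = 2

def Q_n(theta, data):
    """Sample objective function: negative mean squared deviation."""
    return -np.mean((data - theta) ** 2)

grid = np.linspace(-5.0, 5.0, 10001)           # discretized parameter set Theta
theta_hat = grid[np.argmax([Q_n(t, x) for t in grid])]
# theta_hat agrees with x.mean() up to the grid spacing of 0.001
```

In practice one would use a numerical optimizer rather than a grid, but the grid makes the "argmax of an objective" definition explicit.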

Consistency

When the set Θ is not compact (for example, Θ = ℝ), then even if the objective function is uniquely maximized at θ0, this maximum may not be well-separated, in which case the estimator will fail to be consistent.

If the set Θ is compact and there is a limiting function Q0(θ) such that $\hat{Q}_n(\theta)$ converges to Q0(θ) in probability uniformly over Θ, and the function Q0(θ) is continuous and has a unique maximum at θ = θ0, then $\hat\theta$ is consistent for θ0.[1]

The uniform convergence in probability of $\hat{Q}_n(\theta)$ means that

$$\sup_{\theta\in\Theta}\,\bigl|\hat{Q}_n(\theta) - Q_0(\theta)\bigr| \ \xrightarrow{p}\ 0.$$

The requirement for Θ to be compact can be replaced with a weaker assumption that the maximum of Q0 is well-separated, that is, there should not exist points θ distant from θ0 at which Q0(θ) is close to Q0(θ0). Formally, this means that for any sequence {θi} such that Q0(θi) → Q0(θ0), it must be true that θi → θ0.
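A small simulation can illustrate uniform convergence for a specific objective. For $\hat{Q}_n(\theta) = -\tfrac{1}{n}\sum_i (x_i-\theta)^2$ with standard normal data, the limiting function is $Q_0(\theta) = -(1+\theta^2)$, and the sup-gap over a compact Θ shrinks as n grows. This sketch is an illustration under those assumptions, not from the article:

```python
import numpy as np

# Illustrative sketch: for Q_n(theta) = -(1/n) sum (x_i - theta)^2 with
# x_i ~ N(0, 1), the limit is Q0(theta) = -(1 + theta^2). Uniform convergence
# over a compact Theta = [-3, 3] means the largest pointwise gap
# sup_theta |Q_n(theta) - Q0(theta)| tends to zero in probability.

rng = np.random.default_rng(1)
grid = np.linspace(-3.0, 3.0, 601)             # discretized compact Theta

def sup_gap(n):
    """Largest gap between the sample and limiting objectives over Theta."""
    x = rng.normal(size=n)
    Qn = np.array([-np.mean((x - t) ** 2) for t in grid])
    Q0 = -(1.0 + grid ** 2)
    return np.max(np.abs(Qn - Q0))

gaps = [sup_gap(n) for n in (100, 10_000, 200_000)]
# the sup-gap typically shrinks toward zero as n grows
```

Compactness matters here: over Θ = ℝ the same gap would be unbounded for any finite n, since the $-2\theta\,\bar{x}$ term grows without limit in |θ|.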

Asymptotic normality

Assuming that consistency has been established and the derivatives of the sample objective function $\hat{Q}_n$ satisfy some other conditions,[2] the extremum estimator is asymptotically normal:

$$\sqrt{n}\,\bigl(\hat\theta - \theta_0\bigr)\ \xrightarrow{d}\ \mathcal{N}\bigl(0,\ H^{-1}\Sigma H^{-1}\bigr),$$

where H is the probability limit of the Hessian of $\hat{Q}_n$ at θ0 and Σ is the asymptotic variance of $\sqrt{n}$ times the gradient of $\hat{Q}_n$ at θ0.
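Asymptotic normality can be checked by Monte Carlo simulation for a simple case. For the sample-mean extremum estimator with i.i.d. $\mathcal{N}(\theta_0, \sigma^2)$ data, $\sqrt{n}(\hat\theta - \theta_0)$ is distributed as $\mathcal{N}(0, \sigma^2)$. The following sketch is illustrative; the sample size and number of replications are arbitrary choices.

```python
import numpy as np

# Illustrative Monte Carlo sketch: for the sample-mean extremum estimator
# (the maximizer of -(1/n) sum (x_i - theta)^2), the scaled error
# sqrt(n) * (theta_hat - theta_0) should be approximately N(0, sigma^2).

rng = np.random.default_rng(2)
n, reps, theta0, sigma = 400, 5000, 2.0, 1.5

draws = rng.normal(theta0, sigma, size=(reps, n))
theta_hat = draws.mean(axis=1)                 # closed-form maximizer per replication
z = np.sqrt(n) * (theta_hat - theta0)          # scaled estimation errors
# the empirical mean of z is close to 0 and its standard deviation close to sigma
```

For less tractable objectives one would replace the closed-form maximizer with a numerical optimization in each replication; the distributional check on z is the same.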

Examples

Typical examples include maximum likelihood, nonlinear least squares, and generalized method of moments estimators, each corresponding to a particular choice of the objective function.

Notes

  1. Newey & McFadden (1994), Theorem 2.1
  2. Shi, Xiaoxia. "Lecture Notes: Asymptotic Normality of Extremum Estimators" (PDF).

References

  Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge, MA: Harvard University Press.
  Newey, Whitney K.; McFadden, Daniel (1994). "Large sample estimation and hypothesis testing". Handbook of Econometrics, Vol. 4. Amsterdam: Elsevier.
