Ratio estimator
The ratio estimator is a statistical estimator defined as the ratio of the means of two variates. Ratio estimates are biased, and corrections must be made when they are used in experimental or survey work. The distribution of ratio estimates is asymmetrical, so symmetric procedures such as the t test should not be used to generate confidence intervals.
The bias is of the order O(1/n) (see big O notation) so as the sample size (n) increases, the bias will asymptotically approach 0. Therefore, the estimator is approximately unbiased for large sample sizes.
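As a rough illustration of this behaviour, the following Python sketch estimates the bias of the naive ratio of sample means at several sample sizes. The model, distributions and parameters are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative model (assumed): x ~ Gamma(2, 1), y = 5 + x + noise,
# so the true ratio of means is R = (5 + 2) / 2 = 3.5.
R = 3.5

def sample_ratio(n):
    x = rng.gamma(shape=2.0, scale=1.0, size=n)
    y = 5.0 + x + rng.normal(0.0, 0.5, size=n)
    return y.mean() / x.mean()          # naive ratio of sample means

for n in (10, 40, 160, 640):
    reps = np.array([sample_ratio(n) for _ in range(20_000)])
    # the estimated bias shrinks roughly in proportion to 1/n
    print(f"n = {n:4d}   estimated bias ≈ {reps.mean() - R:+.4f}")
```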
Definition
Assume there are two characteristics – x and y – that can be observed for each sampled element in the data set. The ratio R is the ratio of the mean of the y variate to the mean of the x variate in the population.
The ratio estimate of a value of the y variate (θy) is
θy = R θx
where θx is the corresponding value of the x variate and R is estimated from the sample as described below. θy is known to be asymptotically normally distributed.[1]
Statistical properties
The sample ratio (r) is estimated from the sample as the ratio of the sample means,
r = my / mx = Σ yi / Σ xi
where mx and my are the sample means of the x and y variates.
That the ratio is biased can be shown with Jensen's inequality as follows (assuming independence between x and y):
E( r ) = E( my / mx ) = E( my ) E( 1 / mx ) ≥ E( my ) / E( mx ) = R
since 1/mx is a convex function of mx, so Jensen's inequality gives E( 1/mx ) ≥ 1 / E( mx ); the expected value of the sample ratio therefore exceeds the population ratio.
Under simple random sampling the bias is of the order O(1/n). An upper bound on the relative bias of the estimate is provided by the coefficient of variation (the ratio of the standard deviation to the mean).[2] Under simple random sampling the relative bias is O(1/√n).
Correction of the mean's bias
The correction methods differ in their efficiency depending on the distributions of the x and y variates, which makes it difficult to recommend a single best method. Because the estimates of r are biased, a corrected version should be used in all subsequent calculations.
A correction of the bias accurate to the first order is[3]
where mx is the mean of the variate x and sab denotes the covariance between the variates a and b, a notation used throughout what follows.
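The expression itself is not reproduced above. A commonly used first-order correction under simple random sampling, derived from the Taylor expansion of r, subtracts an estimate of the leading bias term (r sx2 − sxy) / (n mx²); it may differ in detail from the estimator of reference [3]. A minimal Python sketch of that standard correction:

```python
import numpy as np

def ratio_first_order(x, y):
    """Naive ratio of means and a first-order (Taylor-series) bias correction.

    Assumes simple random sampling with a negligible sampling fraction; this is
    the textbook correction, which need not coincide exactly with the estimator
    of Hartley and Ross cited in the text.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    mx, my = x.mean(), y.mean()
    r = my / mx                                   # naive sample ratio
    s_xx = x.var(ddof=1)                          # sample variance of x
    s_xy = np.cov(x, y)[0, 1]                     # sample covariance of x and y
    bias_hat = (r * s_xx - s_xy) / (n * mx ** 2)  # estimated leading bias term
    return r, r - bias_hat

x = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.5]
y = [3.0, 5.1, 6.2, 7.9, 9.4, 10.8, 12.9]
print(ratio_first_order(x, y))
```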
Another estimator based on the Taylor expansion is
where n is the sample size, N is the population size, mx is the mean of the variate x, sx² and sy² are the sample variances of the x and y variates respectively and ρ is the sample correlation between the x and y variates.
A computationally simpler but slightly less accurate version of this estimator is
where N is the population size, n is the sample size, mx is the mean of the x variate, sx² and sy² are the sample variances of the x and y variates respectively and ρ is the sample correlation between the x and y variates. The two versions differ only by a factor of (N − 1) in the denominator; for large N the difference is negligible.
A second-order correction is[4]
Other methods of bias correction have also been proposed. To simplify the notation the following variables will be used
Pascual's estimator:[5]
Beale's estimator:[6]
Tin's estimator:[7]
Sahoo's estimator:[8]
Sahoo has also proposed a number of additional estimators:[9]
If mx and my are both greater than 10, then the following approximation is correct to order O(1/n³).[4]
An asymptotically correct estimator is[10]
Jackknife estimation
A jackknife estimate of the ratio is less biased than the naive form. A jackknife estimator of the ratio is
where n is the size of the sample and the ri are estimated with the omission of one pair of variates at a time.[11]
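The expression itself is not reproduced above. The usual delete-one (Quenouille) jackknife, rJ = n r − (n − 1) × (mean of the leave-one-out ratios), matches this description; assuming that standard form is what is meant, a minimal Python sketch is:

```python
import numpy as np

def jackknife_ratio(x, y):
    """Delete-one (Quenouille) jackknife estimate of the ratio y/x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = y.sum() / x.sum()                      # naive sample ratio
    # leave-one-out ratios: omit one (x_i, y_i) pair at a time
    r_loo = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
    return n * r - (n - 1) * r_loo.mean()

x = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.5]
y = [3.0, 5.1, 6.2, 7.9, 9.4, 10.8, 12.9]
print(jackknife_ratio(x, y))
```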
An alternative method is to divide the sample into g groups each of size p with n = pg.[12] Let ri be the estimate of the ith group. Then the estimator
has a bias of at most O(1/n²).
Other estimators based on the division of the sample into g groups are:[13]
where r̄ denotes the mean of the ratios rg of the g groups and
where ri' is the value of the sample ratio with the ith group omitted.
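Since the grouped expressions themselves are not shown here, the following Python sketch illustrates only the general delete-a-group idea; the exact estimators of references [12] and [13] may differ, and the combination used below is the standard grouped analogue of the delete-one jackknife, offered as an assumption.

```python
import numpy as np

def grouped_jackknife_ratio(x, y, g):
    """Delete-a-group jackknife for the ratio y/x.

    The sample of size n = p*g is split into g groups of size p; r_i' is the
    ratio computed with the i-th group omitted.  The combination
    g*r - (g - 1)*mean(r_i') mirrors the delete-one jackknife.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    assert n % g == 0, "n must be divisible by g"
    groups = np.array_split(np.arange(n), g)
    r = y.sum() / x.sum()
    r_omit = []
    for idx in groups:
        keep = np.setdiff1d(np.arange(n), idx)
        r_omit.append(y[keep].sum() / x[keep].sum())
    return g * r - (g - 1) * np.mean(r_omit)

x = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.5, 9.0]
y = [3.0, 5.1, 6.2, 7.9, 9.4, 10.8, 12.9, 13.6]
print(grouped_jackknife_ratio(x, y, g=4))
```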
Other methods of estimation
Other methods of estimating a ratio estimator include maximum likelihood and bootstrapping.[11]
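A minimal bootstrap sketch in Python, resampling (x, y) pairs with replacement and forming a percentile interval; the number of replicates and the interval type are arbitrary illustrative choices, not prescriptions from the cited work.

```python
import numpy as np

def bootstrap_ratio(x, y, n_boot=10_000, alpha=0.05, seed=0):
    """Bootstrap the ratio of means by resampling (x, y) pairs with replacement."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    idx = rng.integers(0, n, size=(n_boot, n))               # bootstrap resamples
    boot = y[idx].sum(axis=1) / x[idx].sum(axis=1)           # ratio in each resample
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])   # percentile interval
    return y.sum() / x.sum(), (lo, hi)

x = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.5]
y = [3.0, 5.1, 6.2, 7.9, 9.4, 10.8, 12.9]
print(bootstrap_ratio(x, y))
```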
Estimate of total
The estimated total of the y variate ( τy ) is
τy = r τx
where τx is the total of the x variate and r is the sample ratio.
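For example, with purely hypothetical figures, if the known total of the x variate were τx = 10,000 and the sample ratio were r = 1.2, the estimated total of the y variate would be τy = 1.2 × 10,000 = 12,000.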
Variance estimates
The variance of the sample ratio is approximately:
where sx² and sy² are the variances of the x and y variates respectively, mx and my are the means of the x and y variates respectively and sab is the covariance of a and b.
Although the approximate variance estimator of the ratio given below is biased, the bias is negligible when the sample size is large.
where N is the population size, n is the sample size and mx is the mean of the x variate.
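The expressions themselves are not reproduced above. The familiar large-sample approximation under simple random sampling, var(r) ≈ (1 − n/N)(sy² − 2 r sxy + r² sx²) / (n mx²), can be sketched in Python as follows; this standard form may differ in detail from the estimators cited in the text.

```python
import numpy as np

def ratio_variance(x, y, N=None):
    """Large-sample variance approximation for the sample ratio r = my/mx.

    Uses the standard simple-random-sampling form with an optional finite
    population correction (1 - n/N); pass N=None to ignore the correction.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    mx = x.mean()
    r = y.mean() / mx
    s_xx, s_yy = x.var(ddof=1), y.var(ddof=1)
    s_xy = np.cov(x, y)[0, 1]
    fpc = 1.0 if N is None else 1.0 - n / N      # finite population correction
    return fpc * (s_yy - 2 * r * s_xy + r ** 2 * s_xx) / (n * mx ** 2)

x = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.5]
y = [3.0, 5.1, 6.2, 7.9, 9.4, 10.8, 12.9]
print(ratio_variance(x, y, N=500))
```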
Another estimator of the variance based on the Taylor expansion is
where n is the sample size, N is the population size and ρ is the correlation coefficient between the x and y variates.
An estimate accurate to O(1/n²) is[10]
An estimator accurate to O(1/n³) is[4]
A jackknife estimator of the variance is
where ri is the ratio with the ith pair of variates omitted and rJ is the jackknife estimate of the ratio.[11]
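The expression in [11] is stated in terms of rJ; the common textbook form below instead uses deviations of the leave-one-out ratios from their mean, which is an assumption about the intended variant. A minimal Python sketch:

```python
import numpy as np

def jackknife_ratio_and_variance(x, y):
    """Delete-one jackknife estimate of the ratio and of its variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = y.sum() / x.sum()
    r_loo = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
    r_jack = n * r - (n - 1) * r_loo.mean()                       # jackknife point estimate
    var_jack = (n - 1) / n * np.sum((r_loo - r_loo.mean()) ** 2)  # jackknife variance
    return r_jack, var_jack

x = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.5]
y = [3.0, 5.1, 6.2, 7.9, 9.4, 10.8, 12.9]
print(jackknife_ratio_and_variance(x, y))
```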
Variance of total
The variance of the estimated total is
Variance of mean
The variance of the estimated mean of the y variate is
where mx is the mean of the x variate, sx² and sy² are the sample variances of the x and y variates respectively and ρ is the sample correlation between the x and y variates.
Skewness
The skewness and the kurtosis of the ratio depend on the distributions of the x and y variates. Estimates of these parameters have been made for normally distributed x and y variates, but expressions for other distributions have not yet been derived. In general, ratio variables are skewed to the right and leptokurtic, and their non-normality increases as the magnitude of the denominator's coefficient of variation increases.
For normally distributed x and y variates the skewness of the ratio is approximately[7]
where
Effect on confidence intervals
Because the ratio estimate is generally skewed, confidence intervals created with the variance and symmetric tests such as the t test are incorrect.[11] Such intervals tend to be too wide on the left side of the point estimate and too narrow on the right.
If the ratio estimator is unimodal (which is frequently the case) then a conservative estimate of the 95% confidence intervals can be made with the Vysochanskiï–Petunin inequality.
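For a two-sided 95% interval, the Vysochanskiï–Petunin inequality P(|X − μ| ≥ kσ) ≤ 4/(9k²) gives the multiplier k with 4/(9k²) = 0.05, i.e. k ≈ 2.98, in place of the normal-theory 1.96. A minimal Python sketch, taking a point estimate and a variance (for example from the approximation above) as given; the numbers are illustrative only.

```python
import math

def vp_conf_interval(r, var_r, alpha=0.05):
    """Conservative confidence interval for a unimodal estimator via the
    Vysochanskii-Petunin inequality: P(|X - mu| >= k*sigma) <= 4/(9*k^2)."""
    k = math.sqrt(4.0 / (9.0 * alpha))      # ≈ 2.98 for alpha = 0.05
    half_width = k * math.sqrt(var_r)
    return r - half_width, r + half_width

# Illustrative figures only (e.g. a ratio estimate and its approximate variance).
print(vp_conf_interval(r=1.52, var_r=0.004))
```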
Alternative methods of bias reduction
An alternative method of reducing or eliminating the bias in the ratio estimator is to alter the method of sampling. The variance of the ratio under these methods differs from the estimates given previously. Note that while many applications, such as those discussed in Lohr,[14] are intended to be restricted to positive integers only (for example, sizes of sample groups), the Midzuno-Sen method works for any sequence of positive numbers, integral or not. It is not clear in what sense Lahiri's method can be said to work, since it returns a biased result.
Lahiri's method
The first of these sampling schemes is a double use of a sampling method introduced by Lahiri in 1951.[15] The algorithm here is based upon the description by Lohr.[14]
- Choose a number M = max( x1, ..., xN) where N is the population size.
- Choose i at random from a uniform distribution on [1,N].
- Choose k at random from a uniform distribution on [1,M].
- If k ≤ xi, then xi is retained in the sample. If not then it is rejected.
- Repeat this process from step 2 until the desired sample size is obtained.
The same procedure for the same desired sample size is carried out with the y variate.
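A direct Python transcription of the steps listed above; the continuous uniform draw for k and the handling of repeated selections are implementation choices that the description leaves open.

```python
import numpy as np

def lahiri_sample(x, n, seed=0):
    """Lahiri-style acceptance-rejection sampling, following the steps above.

    Each accepted index i is drawn with probability proportional to x[i].
    Here k is drawn from a continuous uniform distribution on (0, M], which
    also covers non-integer size measures; repeats are allowed because the
    description does not say how duplicates should be handled.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    N = len(x)
    M = x.max()                          # step 1: M = max(x_1, ..., x_N)
    sample = []
    while len(sample) < n:
        i = rng.integers(0, N)           # step 2: pick a unit uniformly at random
        k = rng.uniform(0, M)            # step 3: pick a threshold uniformly on (0, M]
        if k <= x[i]:                    # step 4: accept i with probability x[i]/M
            sample.append(i)
    return sample

x = [3.0, 7.5, 1.2, 9.8, 4.4, 6.1, 2.7, 8.3]
print(lahiri_sample(x, n=4))
```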
Lahiri's scheme as described by Lohr is biased high and so is of interest only for historical reasons. The Midzuno-Sen technique described below is recommended instead.
Midzuno-Sen's method
In 1952 Midzuno and Sen independently described a sampling scheme that provides an unbiased estimator of the ratio.[16][17]
The first member of the sample is chosen with probability proportional to the size of the x variate. The remaining n − 1 members are chosen at random without replacement from the remaining N − 1 members of the population. The probability of selection under this scheme is
where X is the sum of the N x variates and the xi are the n members of the sample. The ratio of the sum of the y variates to the sum of the x variates chosen in this fashion is then an unbiased estimate of the ratio R.
In symbols we have
r = Σ yi / Σ xi
where the xi and yi are chosen according to the scheme described above.
The ratio estimator given by this scheme is unbiased.
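A Python sketch of the scheme as described: the first unit is drawn with probability proportional to x and the rest by simple random sampling without replacement. The selection-probability formula itself is not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def midzuno_sen_sample(x, n, seed=0):
    """Midzuno-Sen sampling: first unit with probability proportional to x,
    the remaining n - 1 units by simple random sampling without replacement."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    N = len(x)
    first = rng.choice(N, p=x / x.sum())      # probability proportional to size
    rest = rng.choice(np.setdiff1d(np.arange(N), first), size=n - 1, replace=False)
    return np.concatenate(([first], rest))

def ratio_estimate(x, y, sample):
    """Ratio of the sampled y total to the sampled x total."""
    sample = np.asarray(sample)
    return np.asarray(y, float)[sample].sum() / np.asarray(x, float)[sample].sum()

x = [3.0, 7.5, 1.2, 9.8, 4.4, 6.1, 2.7, 8.3]
y = [4.1, 11.0, 2.0, 14.6, 6.3, 9.2, 3.9, 12.5]
s = midzuno_sen_sample(x, n=4)
print(s, ratio_estimate(x, y, s))
```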
Särndal, Swensson, and Wretman credit Lahiri, Midzuno and Sen for the insights leading to this method[18] but Lahiri's technique is biased high.
Ordinary least squares regression
If a linear relationship between the x and y variates exists and the regression equation passes through the origin then the estimated variance of the regression equation is always less than that of the ratio estimator. The precise relationship between the variances depends on the linearity of the relationship between the x and y variates: when the relationship is other than linear the ratio estimate may have a lower variance than that estimated by regression.
Uses
Although the ratio estimator may be of use in a number of settings, it is of particular use in two cases:
- when the variates x and y are highly correlated and the relationship passes through the origin
- when the total population size is unknown
History
The first known use of the ratio estimator was by John Graunt of England who, in 1662, estimated the ratio y/x, where y represented the total population and x the known total number of registered births in the same areas during the preceding year.
Later, Messance (~1765) and Moheau (1778) published very carefully prepared estimates for France based on enumeration of the population in certain districts and on the count of births, deaths and marriages as reported for the whole country. The districts from which the ratio of inhabitants to births was determined constituted only a sample.
In 1802, Laplace wished to estimate the population of France. No population census had been carried out and Laplace lacked the resources to count every individual. Instead he sampled 30 parishes whose total number of inhabitants was 2,037,615. The parish baptismal registrations were considered to be reliable estimates of the number of live births so he used the total number of births over a three-year period. The sample estimate was 71,866.333 baptisms per year over this period giving a ratio of one registered baptism for every 28.35 persons. The total number of baptismal registrations for France was also available to him and he assumed that the ratio of live births to population was constant. He then used the ratio from his sample to estimate the population of France.
In 1897 Karl Pearson said that ratio estimates are biased and cautioned against their use.[19]
References
1. Scott AJ, Wu CFJ (1981) On the asymptotic distribution of ratio and regression estimators. JASA 76: 98–102
2. Cochran WG (1977) Sampling techniques. New York: John Wiley & Sons
3. Hartley HO, Ross A (1954) Unbiased ratio estimators. Nature 174: 270–271
4. Ogliore RC, Huss GR, Nagashima K (2011) Ratio estimation in SIMS analysis. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 269 (17): 1910–1918
5. Pascual JN (1961) Unbiased ratio estimators in stratified sampling. JASA 56 (293): 70–87
6. Beale EML (1962) Some use of computers in operational research. Industrielle Organization 31: 27–28
7. Tin M (1965) Comparison of some ratio estimators. JASA 60: 294–307
8. Sahoo LN (1983) On a method of bias reduction in ratio estimation. J Statist Res 17: 1–6
9. Sahoo LN (1987) On a class of almost unbiased estimators for population ratio. Statistics 18: 119–121
10. van Kempen GMP, van Vliet LJ (2000) Mean and variance of ratio estimators used in fluorescence ratio imaging. Cytometry 39: 300–305
11. Choquet D, L'Ecuyer P, Léger C (1999) Bootstrap confidence intervals for ratios of expectations. ACM Transactions on Modeling and Computer Simulation (TOMACS) 9 (4): 326–348. doi:10.1145/352222.352224
12. Durbin J (1959) A note on the application of Quenouille's method of bias reduction to estimation of ratios. Biometrika 46: 477–480
13. Mickey MR (1959) Some finite population unbiased ratio and regression estimators. JASA 54: 596–612
14. Lohr S (2010) Sampling: Design and Analysis. 2nd edition
15. Lahiri DB (1951) A method of sample selection providing unbiased ratio estimates. Bull Int Stat Inst 33: 133–140
16. Midzuno H (1952) On the sampling system with probability proportional to the sum of the sizes. Ann Inst Stat Math 3: 99–107
17. Sen AR (1952) Present status of probability sampling and its use in the estimation of a characteristic. Econometrika 20-103
18. Särndal C-E, Swensson B, Wretman J (1992) Model assisted survey sampling. Springer, §7.3.1 (iii)
19. Pearson K (1897) On a form of spurious correlation that may arise when indices are used for the measurement of organs. Proc Roy Soc Lond 60: 498