Friedman test

For US Army cryptologist William F. Friedman's cryptanalytic test, see Vigenère cipher § Friedman test.
For Friedman pregnancy test, see Rabbit test.

The Friedman test is a non-parametric statistical test developed by Milton Friedman.[1][2][3] Similar to the parametric repeated measures ANOVA, it is used to detect differences in treatments across multiple test attempts. The procedure involves ranking each row (or block) together, then considering the values of ranks by columns. Applicable to complete block designs, it is thus a special case of the Durbin test.

Classic examples of use are:

  • n wine judges each rate k different wines. Are any of the k wines ranked consistently higher or lower than the others?
  • n welders each use k welding torches, and the ensuing welds are rated on quality. Do any of the k torches produce consistently better or worse welds?

The Friedman test is used for one-way repeated measures analysis of variance by ranks. In its use of ranks it is similar to the Kruskal–Wallis one-way analysis of variance by ranks.

The Friedman test is widely supported by statistical software packages.
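For instance, SciPy provides the test as scipy.stats.friedmanchisquare, which takes one sequence of measurements per treatment, aligned across blocks (the data below are invented for illustration):

```python
# Friedman test with SciPy: each argument holds one treatment's measurements,
# aligned across the same n = 5 blocks. Data are invented for illustration.
from scipy.stats import friedmanchisquare

t1 = [8.0, 7.5, 6.0, 9.0, 8.5]
t2 = [7.0, 6.5, 5.5, 8.0, 7.5]
t3 = [6.0, 6.0, 5.0, 7.5, 7.0]

stat, p = friedmanchisquare(t1, t2, t3)
print(f"Q = {stat:.1f}, p = {p:.4f}")  # Q = 10.0, p = 0.0067
```

Note that n = 5 here is well below the n > 15 guideline given below, so in practice exact tables would be preferred over the chi-squared p-value that SciPy reports.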


Method

  1. Given data {x_ij}, that is, a matrix with n rows (the blocks), k columns (the treatments) and a single observation x_ij at the intersection of each block and treatment, calculate the ranks within each block. If there are tied values, assign to each tied value the average of the ranks that would have been assigned without ties. Replace the data with a new matrix {r_ij} where the entry r_ij is the rank of x_ij within block i.
  2. Find the values:
    • r̄_j = (1/n) Σᵢ r_ij, the mean rank of treatment j across the blocks.
  3. The test statistic is given by Q = [12n / (k(k + 1))] Σⱼ (r̄_j − (k + 1)/2)². Note that the value of Q as computed above does not need to be adjusted for tied values in the data.
  4. Finally, when n or k is large (i.e. n > 15 or k > 4), the probability distribution of Q can be approximated by that of a chi-squared distribution with k − 1 degrees of freedom. In this case the p-value is given by P(χ²_{k−1} ≥ Q). If n or k is small, the approximation to chi-squared becomes poor and the p-value should be obtained from tables of Q specially prepared for the Friedman test. If the p-value is significant, appropriate post-hoc multiple-comparisons tests would be performed.
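The steps above can be sketched in plain Python (friedman_q is an illustrative name, not a library function):

```python
# Sketch of the procedure above: rank within each block (averaging ties),
# take column mean ranks, and form the statistic Q.
def friedman_q(data):
    """Q for data[i][j] = observation of treatment j in block i."""
    n, k = len(data), len(data[0])
    # Step 1: within-block ranks, averaging the ranks of tied values.
    ranked = []
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        pos = 0
        while pos < k:
            end = pos
            while end + 1 < k and row[order[end + 1]] == row[order[pos]]:
                end += 1
            avg = (pos + end) / 2 + 1  # average of ranks pos+1 .. end+1
            for m in range(pos, end + 1):
                ranks[order[m]] = avg
            pos = end + 1
        ranked.append(ranks)
    # Step 2: mean rank of each treatment (column) across the blocks.
    rbar = [sum(r[j] for r in ranked) / n for j in range(k)]
    # Step 3: Q = 12n / (k(k + 1)) * sum_j (rbar_j - (k + 1)/2)^2
    return 12 * n / (k * (k + 1)) * sum((rb - (k + 1) / 2) ** 2 for rb in rbar)

blocks = [
    [8.0, 7.0, 6.0],
    [7.5, 6.5, 6.0],
    [6.0, 5.5, 5.0],
    [9.0, 8.0, 7.5],
    [8.5, 7.5, 7.0],
]
print(friedman_q(blocks))  # 10.0: every block ranks the treatments identically
```

For step 4, with k − 1 = 2 degrees of freedom the chi-squared approximation gives p = exp(−Q/2) ≈ 0.0067, although n = 5 here is small enough that exact tables would normally be used instead.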

Related tests

  • When the response is binary, Cochran's Q test is used instead.
  • For k = 2 treatments the Friedman test reduces to the two-sided sign test.
  • Kendall's W is a normalization of the Friedman statistic to the interval [0, 1].
  • The Skillings–Mack test generalizes the Friedman test to block designs with missing observations.

Post hoc analysis

Post-hoc tests were proposed by Schaich and Hamerle (1984)[4] as well as Conover (1971, 1980)[5] in order to decide which groups are significantly different from each other, based upon the mean rank differences of the groups. These procedures are detailed in Bortz, Lienert and Boehnke (2000, p. 275).[6]

Not all statistical packages support post-hoc analysis for Friedman's test, but user-contributed code exists that provides these facilities (for example in SPSS[7] and in R[8]).
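One common user-implemented approach can be sketched as pairwise Wilcoxon signed-rank tests with a Bonferroni correction; this is an illustrative sketch on invented data, not the Conover or Schaich–Hamerle procedure itself:

```python
# Post-hoc sketch: pairwise Wilcoxon signed-rank tests across all treatment
# pairs, with a Bonferroni-adjusted significance threshold. Data are invented.
from itertools import combinations
from scipy.stats import wilcoxon

treatments = {
    "t1": [8.0, 7.5, 6.0, 9.0, 8.5],
    "t2": [7.0, 6.5, 5.5, 8.0, 7.5],
    "t3": [6.0, 6.0, 5.0, 7.5, 7.0],
}
pairs = list(combinations(treatments, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction for 3 comparisons
for a, b in pairs:
    stat, p = wilcoxon(treatments[a], treatments[b])
    print(f"{a} vs {b}: p = {p:.4f}  significant at adjusted alpha: {p < alpha}")
```

With only n = 5 blocks the individual tests have little power, so this is purely a demonstration of the mechanics; dedicated Conover-style procedures compare mean rank differences directly.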


References

  1. Friedman, Milton (December 1937). "The use of ranks to avoid the assumption of normality implicit in the analysis of variance". Journal of the American Statistical Association. American Statistical Association. 32 (200): 675–701. doi:10.2307/2279372. JSTOR 2279372.
  2. Friedman, Milton (March 1939). "A correction: The use of ranks to avoid the assumption of normality implicit in the analysis of variance". Journal of the American Statistical Association. American Statistical Association. 34 (205): 109. doi:10.2307/2279169. JSTOR 2279169.
  3. Friedman, Milton (March 1940). "A comparison of alternative tests of significance for the problem of m rankings". The Annals of Mathematical Statistics. 11 (1): 86–92. doi:10.1214/aoms/1177731944. JSTOR 2235971.
  4. Schaich, E. & Hamerle, A. (1984). Verteilungsfreie statistische Prüfverfahren. Berlin: Springer. ISBN 3-540-13776-9.
  5. Conover, W. J. (1971, 1980). Practical nonparametric statistics. New York: Wiley. ISBN 0-471-16851-3.
  6. Bortz, J., Lienert, G. & Boehnke, K. (2000). Verteilungsfreie Methoden in der Biostatistik. Berlin: Springer. ISBN 3-540-67590-6.
  7. "Post-hoc comparisons for Friedman test".
  8. "Post hoc analysis for Friedman's Test (R code)". February 22, 2010.

This article is issued from Wikipedia (version of 9/7/2016). The text is available under the Creative Commons Attribution/Share-Alike License, but additional terms may apply for the media files.