What accuracy statistics really measure

The paper aims to give the software estimation research community a better understanding of the meaning of, and relationship between, two statistics often used to assess the accuracy of predictive models: the mean magnitude of relative error, MMRE, and the proportion of predictions within 25% of the actuals, pred(25). It is demonstrated that MMRE and pred(25) are, respectively, measures of the spread and the kurtosis of the variable z, where z = estimate/actual. Thus z is considered a measure of accuracy, and statistics such as MMRE and pred(25) are measures of properties of the distribution of z. It is suggested that measures of the central location and skewness of z, as well as measures of spread and kurtosis, are necessary. Furthermore, since the distribution of z is non-normal, non-parametric measures of these properties may be needed. For this reason, boxplots of z are useful alternatives to simple summary statistics. It is also noted that the simple residuals are better behaved than the z variable, and could also be used as the basis for comparing prediction systems.
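To make the definitions concrete, the following minimal sketch (not from the paper) computes MMRE, pred(25), and the z = estimate/actual ratios for a small set of hypothetical effort predictions; the data values are illustrative, not taken from any study.

```python
def accuracy_stats(estimates, actuals, level=0.25):
    """Return MMRE, pred(level), and the z = estimate/actual ratios."""
    # z is the accuracy variable whose distribution MMRE and pred(25) summarise.
    z = [e / a for e, a in zip(estimates, actuals)]
    # Magnitude of relative error for each prediction: |actual - estimate| / actual.
    mre = [abs(a - e) / a for e, a in zip(estimates, actuals)]
    mmre = sum(mre) / len(mre)
    # pred(level): proportion of predictions whose MRE is within the level (25% by default).
    pred = sum(1 for r in mre if r <= level) / len(mre)
    return mmre, pred, z

# Hypothetical project effort data (person-hours): actuals and model estimates.
actuals = [100, 250, 80, 400, 150]
estimates = [110, 200, 100, 380, 90]

mmre, pred25, z = accuracy_stats(estimates, actuals)
print(f"MMRE = {mmre:.3f}, pred(25) = {pred25:.2f}")
```

A boxplot of the returned z values (e.g. via matplotlib's `plt.boxplot(z)`) displays the central location, spread, and skewness of z in one figure, which is the non-parametric alternative the paper recommends over a single summary statistic.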
