Value of ranked voting methods for estimation by analogy



One long-standing issue in estimation by analogy (EBA) is finding the closest analogies. Prior studies have shown that existing similarity measures are easily influenced by extreme values and irrelevant features. Instead of identifying the closest projects from aggregated similarity degrees, the authors propose using ranked voting methods that rank projects per feature and then aggregate those ranks over all features using vote-counting rules. The project(s) with the highest score are declared the winners and form the new estimate for the target project. This also determines the preferred number of analogies for each target project automatically, since the winner set may contain more than one winner. An empirical evaluation using a jack-knifing procedure was carried out on nine datasets drawn from two repositories (PROMISE and ISBSG). The proposed models are compared with several well-known estimation methods: regular K-based EBA, stepwise regression, ordinary least-squares regression and categorical regression trees. The performance figures of the proposed models were promising, and the use of voting methods offers some useful advantages: (i) it saves the time spent finding an appropriate number K of analogies for each individual project, (ii) no project pruning is needed and (iii) no data standardisation is required.
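The rank-then-vote idea above can be sketched in a few lines. This is a minimal illustration only, assuming the Borda count as the vote-counting rule and using made-up project names, feature values and tie handling; the paper itself evaluates several voting rules, and none of these specifics should be read as its exact procedure.

```python
def borda_select(candidates, target):
    """Rank candidate projects per feature by closeness to the target,
    convert ranks to Borda scores, and return all top-scoring winners.

    candidates: dict mapping project name -> dict of feature values
    target:     dict of feature values for the project being estimated
    """
    n = len(candidates)
    scores = {name: 0 for name in candidates}
    for f in target:
        # Rank candidates by absolute distance to the target on this feature
        ranked = sorted(candidates, key=lambda name: abs(candidates[name][f] - target[f]))
        for rank, name in enumerate(ranked):
            scores[name] += n - 1 - rank  # Borda: closest project gets n-1 points
    best = max(scores.values())
    # The winner set may hold several projects, which fixes the number
    # of analogies for this target automatically
    return sorted(name for name, s in scores.items() if s == best)

# Hypothetical historical projects and target
history = {
    "P1": {"size": 100, "team": 5},
    "P2": {"size": 300, "team": 9},
    "P3": {"size": 120, "team": 4},
}
target = {"size": 110, "team": 5}
winners = borda_select(history, target)  # P1 ranks first on both features
```

The target's effort estimate would then be formed from the winners' known efforts (e.g. their mean). Note that only per-feature rank order matters, which is why the abstract can claim that no data standardisation is required.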



An Erratum has been published for this content.