Discover deeper bugs with dynamic symbolic execution and coverage-based fuzz testing


Coverage-based fuzz testing and dynamic symbolic execution are both popular program testing techniques. On their own, however, both techniques struggle to scale to the complexity of modern software. Hybrid testing methods attempt to mitigate these problems by leveraging dynamic symbolic execution to assist fuzz testing. Unfortunately, the efficiency of such methods is still limited by specific program structures and by the scheduling of seed files. In this study, the authors introduce a novel lazy symbolic pointer concretisation method and a symbolic loop bucket optimisation to mitigate the path explosion caused by dynamic symbolic execution in hybrid testing. They also propose a distance-based seed selection method that rearranges the seed queue of the fuzzer engine in order to achieve higher coverage. They implement a prototype and evaluate its ability to find vulnerabilities in software and to cover new execution paths. They show on different benchmarks that it finds more crashes than other off-the-shelf vulnerability detection tools, and that the proposed method discovers 43% more unique paths than vanilla fuzz testing.
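To make the distance-based seed selection idea concrete, the following is a minimal sketch of one plausible realisation: each seed's execution coverage is modelled as a set of covered edge identifiers, and the seed queue is reordered greedily so that the seed whose coverage is farthest from all already-scheduled seeds is tried next. The Jaccard distance metric, the farthest-first ordering, and all function names here are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of distance-based seed selection for a fuzzer queue.
# Assumption: each seed maps to the set of coverage-map edge IDs its
# execution touched; this is NOT the paper's exact distance definition.

def jaccard_distance(a: set, b: set) -> float:
    """Return 1 - |a ∩ b| / |a ∪ b|; 0.0 means identical coverage."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def reorder_seed_queue(seeds: dict) -> list:
    """Greedy farthest-first ordering of seed names by coverage distance.

    seeds: mapping of seed name -> set of covered edge IDs.
    Starts from the seed with the largest coverage, then repeatedly
    schedules the remaining seed whose minimum distance to every
    already-scheduled seed is largest (i.e. the most "novel" seed).
    """
    remaining = dict(seeds)
    first = max(remaining, key=lambda s: len(remaining[s]))
    ordered = [first]
    del remaining[first]
    while remaining:
        nxt = max(
            remaining,
            key=lambda s: min(
                jaccard_distance(remaining[s], seeds[o]) for o in ordered
            ),
        )
        ordered.append(nxt)
        del remaining[nxt]
    return ordered

if __name__ == "__main__":
    queue = {
        "seed_a": {1, 2, 3, 4},
        "seed_b": {1, 2, 3},   # coverage very similar to seed_a
        "seed_c": {7, 8, 9},   # coverage disjoint from seed_a
    }
    # seed_c is scheduled before seed_b because its coverage is
    # more distant from the already-chosen seed_a.
    print(reorder_seed_queue(queue))  # → ['seed_a', 'seed_c', 'seed_b']
```

The intuition is that seeds exercising dissimilar paths are more likely to lead the fuzzer into unexplored regions, which matches the abstract's goal of raising coverage by rearranging the seed queue.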


