Environment generation for validating event-driven software using model checking

Event-driven systems maintain an ongoing dialogue with their environment. Examples include distributed programs, where each process reacts to received messages by performing computation and sending messages to peers; graphical user interfaces (GUIs), where the interface software and the underlying application react to user inputs; and web-based applications, where presentation, business-logic and storage-tier functionality react to user inputs and to responses from other web-based services. Such applications are difficult to test because the set of possible interaction sequences between the system and its environment can be very large and is governed by complex constraints. The exhaustive nature of software model-checking techniques offers hope for effectively validating such systems; however, these techniques only work for closed systems. Previously, the authors developed the Bandera Environment Generator, which, given an open system called a unit under analysis, closes it with a model of its environment. The authors' previous work on environment generation has focused on developing broadly applicable mechanisms for modelling environment behaviour. The generality of this approach often makes it difficult to produce environment models that enable precise and efficient system analysis. The authors' experience shows that, by exploiting information about the application domain, generated environments can be made both more precise and more efficient for model checking. This study presents the concept of domain-specific environment generation, details techniques for customising environment generation for the domain of event-driven software, and assesses those techniques on the domains of GUI and web-based applications.
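
To make the idea of closing an open unit concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' generated code: an invented LoginDialog stands in for the unit under analysis, and a hand-written environment driver replays every bounded user-event sequence against it. Under a model checker such as Java PathFinder the explicit enumeration would be replaced by nondeterministic choice explored via backtracking, and a domain-specific environment would prune sequences the GUI framework cannot produce.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical stand-in for the unit under analysis: a small dialog that
    // reacts to user events. In the authors' setting this would be the real
    // Swing/application code, not a stub.
    class LoginDialog {
        private boolean open = true;
        private boolean loggedIn = false;

        void typePassword(String s) {
            if (!open) throw new IllegalStateException("dialog already closed");
        }
        void clickOk() {
            if (!open) throw new IllegalStateException("dialog already closed");
            loggedIn = true;
            open = false;
        }
        void clickCancel()   { open = false; }
        boolean isLoggedIn() { return loggedIn; }
    }

    // Environment driver that "closes" the open unit: it replays every
    // user-event sequence up to a fixed bound against a fresh unit instance.
    public class GuiEnvironmentDriver {
        static final String[] EVENTS = {"typePassword", "clickOk", "clickCancel"};
        static final int BOUND = 3;   // maximum interaction-sequence length

        public static void main(String[] args) {
            List<List<String>> sequences = new ArrayList<>();
            enumerate(new ArrayList<>(), sequences);
            for (List<String> seq : sequences) {
                LoginDialog unit = new LoginDialog();   // fresh unit per sequence
                try {
                    for (String event : seq) {
                        fire(unit, event);
                    }
                } catch (IllegalStateException e) {
                    // A domain-specific environment would prune such sequences
                    // up front: the GUI framework can never produce them.
                    System.out.println("infeasible sequence " + seq + ": " + e.getMessage());
                }
            }
        }

        // Generate all non-empty event sequences of length at most BOUND.
        static void enumerate(List<String> prefix, List<List<String>> out) {
            if (!prefix.isEmpty()) out.add(new ArrayList<>(prefix));
            if (prefix.size() == BOUND) return;
            for (String event : EVENTS) {
                prefix.add(event);
                enumerate(prefix, out);
                prefix.remove(prefix.size() - 1);
            }
        }

        // Dispatch one environment event to the unit under analysis.
        static void fire(LoginDialog unit, String event) {
            switch (event) {
                case "typePassword" -> unit.typePassword("secret");
                case "clickOk"      -> unit.clickOk();
                case "clickCancel"  -> unit.clickCancel();
            }
        }
    }

Creating a fresh unit for each sequence mirrors, in plain Java, the state restoration a model checker performs when it backtracks between branches of a nondeterministic choice.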
