This is an open access article published by the IET under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/)
Transforming graphical user interface (GUI) mockups into code has recently become a common yet challenging task for software developers. The transformation is time-consuming, especially when GUIs change frequently to keep pace with evolving features. Many studies have acknowledged this challenge and proposed solutions for computer-drawn GUI mockups; however, very few have adopted hand-drawn mockups as input, leaving a gap in this line of research. In this study, the authors employ YOLOv5, a fast and accurate deep learning framework, to automate the conversion of hand-drawn GUI mockups into an Android-based GUI prototype. The process starts by detecting all GUI mockup elements in an input image and determining their bounding boxes, then classifying these mockups into their corresponding GUI objects, and finally aligning these objects to form the output prototype according to the layout presented in the input image. Experimental results show the effectiveness of the proposed approach in generating a visually appealing Android GUI from hand-drawn mockups, with a recognition accuracy of 98.54% when tested on various hand-drawn GUI structures designed by five developers.
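The detect → classify → align pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the detection step is stubbed with example `(class, x, y, w, h)` tuples of the kind a trained YOLOv5 model would emit, and the widget class names and the row-grouping tolerance are assumptions chosen for the example.

```python
# Illustrative sketch of the detect -> classify -> align pipeline.
# Detections are stubbed; in the actual approach they would come from a
# YOLOv5 model trained on hand-drawn GUI element classes. The widget-name
# mapping and y_tolerance value below are assumptions, not the paper's.

WIDGET_XML = {  # assumed mapping from detected class to an Android widget tag
    "button": "Button",
    "edittext": "EditText",
    "label": "TextView",
    "image": "ImageView",
}

def group_into_rows(detections, y_tolerance=30):
    """Group boxes whose vertical positions are close into rows,
    top-to-bottom, then sort each row left-to-right."""
    rows = []
    for det in sorted(detections, key=lambda d: d[2]):  # sort by y
        for row in rows:
            if abs(row[0][2] - det[2]) <= y_tolerance:
                row.append(det)
                break
        else:
            rows.append([det])
    return [sorted(row, key=lambda d: d[1]) for row in rows]  # sort by x

def emit_layout(detections):
    """Render grouped detections as a nested Android LinearLayout skeleton."""
    lines = ['<LinearLayout android:orientation="vertical">']
    for row in group_into_rows(detections):
        lines.append('  <LinearLayout android:orientation="horizontal">')
        for cls, *_ in row:
            lines.append(f'    <{WIDGET_XML[cls]} />')
        lines.append('  </LinearLayout>')
    lines.append('</LinearLayout>')
    return "\n".join(lines)

# Stubbed detections: (class, x, y, w, h) in pixels, e.g. a login-style sketch.
sketch = [("label", 40, 50, 200, 30),
          ("edittext", 40, 120, 300, 40),
          ("button", 40, 200, 120, 40),
          ("button", 180, 205, 120, 40)]
print(emit_layout(sketch))
```

The two buttons near y ≈ 200 fall within the tolerance, so they share one horizontal row; everything else becomes its own row in the vertical layout, mirroring how the approach aligns recognised objects to match the sketched layout.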