Retrieval and management system for layer sound effect library

Here, the authors present a novel interactive prototype system that helps sound designers explore, more effectively and more creatively, sound effect libraries built by layering, using multiple exploration methods. The system combines three retrieval methods: semantic keyword, acoustic feature, and layer relationship. In particular, it visualises the layer relationship as a circle pack, which helps sound designers understand the components of a mixed sound effect in terms of its designed layers and sourced layers. To evaluate the proposed method, the authors conduct a timing experiment along with a five-point Likert scale survey, analysing search efficiency, user experience, and interactive user behaviour. Their studies show that the proposed system enhances sound designers' ability to search for sound effects, enabling new combinations and designs.
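To illustrate the layer-relationship view described above, the following is a minimal sketch of rendering one mixed sound effect as a circle pack in Python, assuming the third-party circlify and matplotlib packages; the layer names, weights, and data model are hypothetical and are not taken from the authors' system.

```python
# Minimal circle-pack sketch of one mixed sound effect.
# Assumes: pip install circlify matplotlib
# The effect name, layer labels, and weights below are illustrative only
# (e.g. relative loudness or duration of each layer in the mix).
import circlify
import matplotlib.pyplot as plt

hierarchy = [{
    "id": "explosion_mix",          # the mixed (designed) sound effect
    "datum": 1.0,
    "children": [
        {"id": "designed: low-end boom", "datum": 0.5},
        {"id": "sourced: glass debris",  "datum": 0.3},
        {"id": "sourced: fire crackle",  "datum": 0.2},
    ],
}]

# circlify lays out the hierarchy as nested circles with areas
# proportional to each node's datum.
circles = circlify.circlify(hierarchy, show_enclosure=False)

fig, ax = plt.subplots(figsize=(5, 5))
ax.set_aspect("equal")
ax.axis("off")
for c in circles:
    ax.add_patch(plt.Circle((c.x, c.y), c.r, fill=False, lw=1.5))
    if c.ex and "id" in c.ex:
        ax.annotate(c.ex["id"], (c.x, c.y),
                    ha="center", va="center", fontsize=8)

# Fit the axes to the packed circles.
lim = max(abs(c.x) + c.r for c in circles)
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
plt.show()
```

A deeper hierarchy, such as a sourced layer that is itself a mix, could be expressed by nesting further "children" entries, which circlify lays out recursively.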
