Automatic detection, segmentation and classification of snore related signals from overnight audio recording

IET Signal Processing

Snore related signals (SRS) have been found to carry important information about the snore source and the obstruction site in the upper airway of an Obstructive Sleep Apnea/Hypopnea Syndrome (OSAHS) patient. An overnight audio recording of an individual subject is the preliminary and essential material for further study and diagnosis. Automatic detection, segmentation and classification of SRS from overnight audio recordings are therefore significant both for establishing a personal health database and for researching the area on a large scale. In this study, the authors focused on how to implement such an intelligent method by combining acoustic signal processing with machine learning techniques. They proposed a systematic solution that includes SRS event detection, classifier training, and automatic segmentation and classification. An overnight audio recording of a severe OSAHS patient is taken as an example to demonstrate the feasibility of the method. Both experimental data testing and a subjective test with 25 volunteers (17 males and 8 females) demonstrated that the method can be effective in automatically detecting, segmenting and classifying SRS from original audio recordings.
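The detection and segmentation stage described above can be illustrated with a minimal sketch. The paper does not disclose its exact detector, so the following assumes a simple short-time-energy threshold (a common baseline for audio event detection); the frame length, threshold, and function names are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch (NOT the authors' exact algorithm): segment candidate
# SRS "events" out of a 1-D audio signal by thresholding short-time energy.
import math

def frame_energies(signal, frame_len=160):
    """Short-time energy of non-overlapping frames of `frame_len` samples."""
    return [sum(s * s for s in signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def detect_events(energies, threshold):
    """Return (start_frame, end_frame) pairs where energy exceeds threshold."""
    events, start = [], None
    for i, e in enumerate(energies):
        if e > threshold and start is None:
            start = i                      # event onset
        elif e <= threshold and start is not None:
            events.append((start, i))      # event offset
            start = None
    if start is not None:                  # event still open at end of signal
        events.append((start, len(energies)))
    return events

# Toy signal: 800 samples of silence, an 800-sample tone burst, 800 of silence.
sig = [0.0] * 800 + [math.sin(0.3 * n) for n in range(800)] + [0.0] * 800
ev = detect_events(frame_energies(sig), threshold=1.0)
# ev -> [(5, 10)]: the burst occupies frames 5..9 (frame length 160 samples)
```

In a full pipeline of the kind the abstract outlines, each detected segment would then be passed to a trained classifier (e.g. on spectral features) to label it as snore-related or not.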


