Archives of Acoustics, Online first
10.24425/aoa.2024.148775

Snoring Sound Recognition Using Multi-Channel Spectrograms

Ziqiang YE
South China University of Technology
China

Jianxin PENG
South China University of Technology
China

Xiaowen ZHANG
Guangzhou Medical University
China

Lijuan SONG
Guangzhou Medical University
China

Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a common and high-risk sleep-related breathing disorder, and snoring detection offers a simple, non-invasive method for its screening. In many studies, feature maps are obtained by applying a short-time Fourier transform (STFT), and the model is fed single-channel input tensors. However, this approach may limit the ability of convolutional networks to learn diverse representations of snore signals. This paper proposes a snoring sound detection algorithm using a multi-channel spectrogram and a convolutional neural network (CNN). Sleep recordings from 30 hospital subjects were collected, and four feature maps were extracted from them as model input: the spectrogram, the Mel-spectrogram, the continuous wavelet transform (CWT) scalogram, and a multi-channel spectrogram composed of the three single-channel maps. Three data set partitioning methods were used to evaluate the performance of the feature maps, which were compared on training and test sets of independent subjects using a CNN model. The results show that the multi-channel spectrogram reaches an accuracy of 94.18%, surpassing the Mel-spectrogram, which performs best among the single-channel maps. This study optimizes the feature extraction stage to exploit the feature learning capability of deep models, providing a more effective feature map for snoring detection.
Keywords: obstructive sleep apnea-hypopnea syndrome; snoring; convolutional neural network; multi-channel spectrogram
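The core idea of the abstract, stacking several time-frequency representations into one multi-channel CNN input, can be sketched as follows. This is a minimal numpy-only illustration on a synthetic tone, not the authors' pipeline: the log-compressed and normalized maps below merely stand in for the Mel-spectrogram and CWT channels, and all three maps are computed on the same time-frequency grid so they can be stacked directly.

```python
import numpy as np

def stft_mag(x, n_fft=256, hop=128):
    # Frame the signal with a Hann window and take the magnitude FFT
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # shape: (freq, time)

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t)  # 1 s synthetic tone standing in for a snore episode

spec = stft_mag(x)                       # linear-frequency spectrogram channel
log_spec = np.log1p(spec)                # stand-in for the Mel-spectrogram channel
norm_spec = spec / (spec.max() + 1e-9)   # stand-in for the CWT scalogram channel

# All three maps share one (freq, time) grid here, so they stack
# directly into a 3-channel tensor a CNN can consume
multi = np.stack([spec, log_spec, norm_spec], axis=0)
print(multi.shape)  # (3, 129, 61)
```

In practice the three representations have different native resolutions (Mel bands, CWT scales), so each map would be resized to a common grid before stacking.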
Copyright © 2023 The Author(s). This work is licensed under the Creative Commons Attribution 4.0 International CC BY 4.0.
