Archives of Acoustics, Online first
10.24425/aoa.2024.148784

Speech Emotion Recognition Using a Multi-Time-Scale Approach to Feature Aggregation and an Ensemble of SVM Classifiers

Antonina STEFANOWSKA
Faculty of Computer Science, Białystok University of Technology
Poland

Sławomir Krzysztof ZIELIŃSKI
ORCID ID 0000-0002-3205-974X
Faculty of Computer Science, Białystok University of Technology
Poland

Due to its relevant real-life applications, the recognition of emotions from speech signals constitutes a popular research topic. In traditional speech emotion recognition methods, audio features are typically aggregated using a fixed-duration time window, potentially discarding information conveyed by speech at other signal durations. By contrast, in the proposed method, audio features are aggregated simultaneously using time windows of different lengths (a multi-time-scale approach), thereby potentially making better use of the information carried at the phonemic, syllabic, and prosodic levels. A genetic algorithm is employed to optimize the feature extraction procedure. The features aggregated over the different time windows are subsequently classified by an ensemble of support vector machine (SVM) classifiers. To enhance the generalization of the method, a data augmentation technique based on pitch shifting and time stretching is applied. According to the obtained results, the developed method outperforms the traditional one on the selected datasets, demonstrating the benefits of the multi-time-scale approach to feature aggregation.
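The abstract describes aggregating per-frame audio features over windows of several lengths and classifying each scale with its own SVM, combined into an ensemble. The sketch below illustrates that general idea only; the class and function names are hypothetical, and the actual feature set, window lengths, fusion rule, and the GA-optimized extraction procedure are detailed in the full paper.

```python
# Illustrative sketch of multi-time-scale aggregation with an SVM ensemble.
# Assumes each utterance is given as an (n_frames, n_features) matrix of
# per-frame features (e.g., MFCCs); all names here are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def aggregate_multiscale(frames, window_lengths):
    """Summarize a frame sequence at several time scales.

    For each window length, the sequence is split into non-overlapping
    windows, each window is summarized by its mean and standard deviation,
    and the window summaries are averaged into one utterance-level vector.
    Returns one feature vector per scale.
    """
    scales = []
    for w in window_lengths:
        chunks = [frames[i:i + w] for i in range(0, len(frames) - w + 1, w)]
        stats = np.array(
            [np.concatenate([c.mean(axis=0), c.std(axis=0)]) for c in chunks]
        )
        scales.append(stats.mean(axis=0))
    return scales


class MultiScaleSVMEnsemble:
    """One SVM per time scale; predictions combined by majority vote."""

    def __init__(self, window_lengths):
        self.window_lengths = window_lengths
        self.models = [
            make_pipeline(StandardScaler(), SVC()) for _ in window_lengths
        ]

    def fit(self, utterances, labels):
        for k, model in enumerate(self.models):
            X = np.array(
                [aggregate_multiscale(u, self.window_lengths)[k] for u in utterances]
            )
            model.fit(X, labels)
        return self

    def predict(self, utterances):
        votes = np.array([
            m.predict(np.array(
                [aggregate_multiscale(u, self.window_lengths)[k] for u in utterances]
            ))
            for k, m in enumerate(self.models)
        ])
        # Majority vote across the per-scale classifiers (integer labels assumed).
        return np.array([np.bincount(col).argmax() for col in votes.T])
```

A simple majority vote is used here for fusion; any weighted or probability-based combination scheme could be substituted without changing the overall structure.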
Keywords: speech emotion recognition; feature aggregation; ensemble classification
Copyright © 2023 The Author(s). This work is licensed under the Creative Commons Attribution 4.0 International CC BY 4.0.

