Language Model Score Regularization for Speech Recognition

Inspired by the fact that back-off and interpolated smoothing algorithms have a significant effect on statistical language modeling, this paper proposes a sentence-level language model (LM) score regularization algorithm to improve the fault tolerance of LMs to recognition errors. The proposed algorithm is applicable to both count-based LMs and neural network LMs. Instead of predicting the occurrence of a sequence of words under a fixed-order Markov assumption, we use a composite model, consisting of models of different orders with either n-gram or skip-gram features, to estimate the probability of the word sequence. To simplify implementation, we derive a connection between bidirectional neural networks and the proposed algorithm. Experiments were carried out on the Switchboard corpus. Results on N-best list rescoring show that the proposed algorithm achieves consistent word error rate reductions when applied to count-based LMs, feedforward neural network (FNN) LMs, and recurrent neural network (RNN) LMs.
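The composite scoring idea can be illustrated with a minimal sketch. Assuming the regularized sentence score is a weighted combination of log-probabilities from count-based models of several orders, an N-best list can be rescored as below. The toy corpus, add-one smoothing, interpolation weights, acoustic scores, and LM scale are illustrative placeholders, not the paper's actual configuration (which also covers skip-gram features and neural network LMs).

from collections import defaultdict
import math


class NGramLM:
    """Count-based n-gram LM with add-one smoothing (for illustration only)."""

    def __init__(self, order):
        self.order = order
        self.ngram_counts = defaultdict(int)
        self.context_counts = defaultdict(int)
        self.vocab = set()

    def train(self, sentences):
        for sent in sentences:
            tokens = ["<s>"] * (self.order - 1) + sent + ["</s>"]
            self.vocab.update(tokens)
            for i in range(self.order - 1, len(tokens)):
                context = tuple(tokens[i - self.order + 1:i])
                self.ngram_counts[context + (tokens[i],)] += 1
                self.context_counts[context] += 1

    def log_prob(self, sent):
        tokens = ["<s>"] * (self.order - 1) + sent + ["</s>"]
        logp = 0.0
        V = len(self.vocab)
        for i in range(self.order - 1, len(tokens)):
            context = tuple(tokens[i - self.order + 1:i])
            num = self.ngram_counts[context + (tokens[i],)] + 1
            den = self.context_counts[context] + V
            logp += math.log(num / den)
        return logp


def regularized_score(models, weights, sent):
    """Sentence-level score: a weighted sum of log-probabilities from models
    of different orders, instead of a single fixed-order model."""
    return sum(w * m.log_prob(sent) for m, w in zip(models, weights))


if __name__ == "__main__":
    # Toy training corpus (hypothetical).
    corpus = [["the", "cat", "sat", "on", "the", "mat"],
              ["the", "dog", "sat", "on", "the", "rug"],
              ["a", "cat", "sat", "on", "a", "rug"]]
    models = [NGramLM(order) for order in (1, 2, 3)]
    for m in models:
        m.train(corpus)
    weights = [0.2, 0.3, 0.5]  # hypothetical interpolation weights

    # Rescore a toy 2-best list of (hypothesis, acoustic log score) pairs.
    nbest = [(["the", "cat", "sat", "on", "the", "rug"], -12.0),
             (["the", "cat", "sat", "in", "the", "rug"], -11.5)]
    lm_scale = 1.0
    best = max(nbest,
               key=lambda h: h[1] + lm_scale * regularized_score(models, weights, h[0]))
    print("selected hypothesis:", " ".join(best[0]))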

DOI: 10.1049/cje.2019.03.015