Chinese Language Model Adaptive Method Based on Recurrent Neural Network
Keywords:
Deep learning, Recurrent neural network, Chinese language model, Self-adaption algorithm
Abstract
Deep learning is increasingly widely used in natural language processing. Compared with the traditional n-gram statistical language model, recurrent neural network (RNN) modeling has shown clear advantages in language modeling and has gradually been applied to speech recognition, machine translation, and other fields. At present, however, RNN language models are mostly trained offline; for different speech recognition tasks, linguistic differences between the training corpus and the recognition task degrade the recognition rate of the speech recognition system. Building on RNN modeling to train a Chinese language model, this paper proposes an online RNN self-adaptation algorithm that takes the preliminary recognition results of the speech signal as additional corpus and continues training the model, so that the adapted RNN model matches the recognition task as closely as possible. Experimental results show that the adapted model effectively reduces the linguistic mismatch between the language model and the recognition task, and that rescoring the Chinese word confusion network with the adapted model further improves the recognition rate; this has been verified in an actual Chinese speech recognition system.
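To make the adaptation step concrete, the following is a minimal sketch, not the authors' implementation, of how an already-trained RNN language model could be fine-tuned online on first-pass recognition hypotheses before rescoring. It is written in PyTorch; the RNNLM class, the adapt_online function, the learning rate, and the dummy hypothesis batches are all illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    """Simple recurrent language model: embedding -> LSTM -> softmax over vocabulary."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)

def adapt_online(model, hypothesis_batches, lr=1e-4, epochs=1):
    """Continue training a pretrained RNN LM on first-pass recognition
    hypotheses so the model better matches the current task (illustrative
    adaptation loop, not the paper's exact procedure)."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for tokens in hypothesis_batches:  # tokens: (batch, seq_len) word ids
            inputs, targets = tokens[:, :-1], tokens[:, 1:]
            logits = model(inputs)
            loss = criterion(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

if __name__ == "__main__":
    # Toy usage: adapt a (here untrained) model on dummy first-pass hypotheses.
    vocab_size = 1000
    lm = RNNLM(vocab_size)
    fake_hypotheses = [torch.randint(0, vocab_size, (4, 20)) for _ in range(3)]
    adapt_online(lm, fake_hypotheses)
```

After such an adaptation pass, the updated model would be used to rescore the word confusion network produced by the first recognition pass.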