In this study, we used unidirectional and bidirectional long short-term memory (LSTM) deep learning networks for Chinese news classification and characterized the effects of contextual information on text classification, achieving a high level of accuracy. A Chinese vocabulary was built using the jieba word segmentation tool, followed by stop-word removal and word-frequency analysis. Next, word2vec was used to map the processed words into word vectors, creating a convenient lookup table that served as the feature input for the LSTM model. A bidirectional LSTM (BiLSTM) network extracted features from the word vectors, passing information to the hidden layer in both the forward and backward directions. Subsequently, a unidirectional LSTM network integrated all the outputs of the BiLSTM network, with the output of its last time step treated as the feature-vector representation of the text. This feature vector was then fed to a fully connected layer, which acted as a classifier over the integrated features, finally assigning each news article to a category. The hyperparameters of the model were optimized based on the loss between the true and predicted values using the adaptive moment estimation (Adam) optimizer, and multiple dropout layers were added to the model to reduce overfitting. As text classification models for Chinese news articles, the BiLSTM and unidirectional LSTM models obtained F1-scores of 94.15% and 93.16%, respectively, with the former outperforming the latter in terms of feature extraction.
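The BiLSTM-then-LSTM architecture described above can be sketched numerically: a forward and a backward LSTM pass are concatenated per time step, and a second unidirectional LSTM integrates those outputs, with its last hidden state serving as the text's feature vector. This is a minimal NumPy sketch with randomly initialized stand-in weights, not the paper's trained model; the dimensions (embedding size 8, hidden size 4, sequence length 5) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: compute gates from input x and previous state (h, c)."""
    z = W @ x + U @ h + b                      # stacked gate pre-activations
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))               # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))          # forget gate
    o = 1 / (1 + np.exp(-z[2 * H:3 * H]))      # output gate
    g = np.tanh(z[3 * H:])                     # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def run_lstm(seq, H, W, U, b):
    """Run an LSTM over seq (a list of vectors), returning all hidden states."""
    h, c = np.zeros(H), np.zeros(H)
    out = []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        out.append(h)
    return out

def params(in_dim, H):
    """Random stand-in parameters (the real model learns these)."""
    return (rng.standard_normal((4 * H, in_dim)) * 0.1,
            rng.standard_normal((4 * H, H)) * 0.1,
            np.zeros(4 * H))

D, H, T = 8, 4, 5                              # embedding dim, hidden size, length
seq = [rng.standard_normal(D) for _ in range(T)]

# BiLSTM: forward and backward passes, concatenated per time step
fw = run_lstm(seq, H, *params(D, H))
bw = run_lstm(seq[::-1], H, *params(D, H))[::-1]
bi = [np.concatenate([f, b]) for f, b in zip(fw, bw)]   # each has shape (2H,)

# A second, unidirectional LSTM integrates the BiLSTM outputs;
# its final hidden state is the text's feature vector
feat = run_lstm(bi, H, *params(2 * H, H))[-1]
print(feat.shape)                              # (4,)
```

In the full model, `feat` would pass through dropout and a fully connected softmax layer to produce the class prediction.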
The study aims to develop and evaluate LSTM-based models for classifying Chinese news articles, focusing on how contextual information influences classification performance.
The research utilized unidirectional and bidirectional LSTM deep learning networks. A Chinese glossary was created using the jieba word segmentation tool, followed by stop-word removal and word frequency analysis. Word2vec was then applied to map processed words into word vectors, which served as feature inputs for the LSTM models. The models were optimized using the Adam optimizer, and multiple dropout layers were incorporated to mitigate overfitting.
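The preprocessing pipeline (segmentation, stop-word removal, frequency-ranked vocabulary, embedding lookup table) can be sketched as follows. In the paper's pipeline, jieba supplies the token lists and word2vec supplies the embedding rows; here a tiny pre-segmented corpus and random vectors stand in for both, and the embedding dimension of 50 is an illustrative assumption.

```python
import numpy as np
from collections import Counter

# Toy pre-segmented corpus (jieba would produce these token lists)
corpus = [
    ["北京", "新闻", "报道"],
    ["体育", "新闻", "比赛"],
]
stop_words = {"报道"}

# Stop-word removal and word-frequency analysis
tokens = [w for doc in corpus for w in doc if w not in stop_words]
freq = Counter(tokens)

# Vocabulary ordered by frequency; index 0 is reserved for padding/unknown
vocab = {w: i + 1 for i, (w, _) in enumerate(freq.most_common())}

# Embedding lookup table; in the paper these rows come from word2vec,
# here they are random stand-ins
dim = 50
emb = np.random.default_rng(0).standard_normal((len(vocab) + 1, dim))

def vectorize(doc):
    """Map a segmented document to its sequence of word vectors."""
    return emb[[vocab.get(w, 0) for w in doc if w not in stop_words]]

x = vectorize(["体育", "新闻"])
print(x.shape)                                 # (2, 50)
```

Each document thus becomes a matrix of word vectors, which is exactly the shape of input the LSTM layers consume.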
Both models achieved high classification accuracy, with the BiLSTM and unidirectional LSTM models obtaining F1-scores of 94.15% and 93.16%, respectively. The bidirectional model's advantage stems from processing each sequence in both the forward and backward directions, capturing contextual information on both sides of each word.
Chen Liu (2024) developed and assessed LSTM-based models for classifying Chinese news articles, highlighting the significance of contextual information in text classification. By employing unidirectional and bidirectional LSTM networks, the study achieved high accuracy rates, with the bidirectional model outperforming the unidirectional one. These findings underscore the effectiveness of LSTM architectures in processing and classifying textual data, offering valuable insights for future research in natural language processing and machine learning applications.
| Month | Manuscript | Video Summary |
| --- | --- | --- |
| 2025 April | 2 | 2 |
| 2025 March | 68 | 68 |
| 2025 February | 43 | 43 |
| 2025 January | 55 | 55 |
| 2024 December | 43 | 43 |
| 2024 November | 68 | 68 |
| 2024 October | 34 | 34 |
| Total | 313 | 313 |