Abstract
Text-to-speech (TTS) is a speech-processing technology of great benefit to visually impaired individuals, as it converts written text into human-like speech. However, producing TTS output for non-diacritized Arabic text is particularly demanding, since the language encompasses manifold rules and unique features. To address this, the proposed LAPESDO-EATTS technique first applies several levels of data preprocessing to enhance textual clarity and prepare the input for effective speech synthesis. Next, the Word2Vec model performs word embedding to capture the contextual relationships among words in a high-dimensional space. For classification, a stacked convolutional autoencoder (SCAE) is employed, and its hyperparameters are selected with the Egyptian stray dogs optimization (ESDO) algorithm to improve the classification results of the SCAE system. For the final text-to-speech synthesis, the LAPESDO-EATTS system utilizes NVIDIA NeMo TTS models, which are known for their high-quality, natural-sounding output. Extensive experiments validate the improved performance of the LAPESDO-EATTS method, and the comparative study reports gains over existing approaches across multiple metrics.
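The abstract's claim that Word2Vec embeddings "capture the contextual relationships among words in a high-dimensional space" is typically exploited via vector similarity. As a rough illustration only (not the authors' implementation; the words, vector values, and 4-dimensional size below are invented for the example, whereas real Word2Vec vectors usually have 100-300 dimensions), semantically related words end up closer in embedding space than unrelated ones:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings" with hand-picked illustrative values.
embeddings = {
    "happy":  [0.9, 0.1, 0.3, 0.0],
    "joyful": [0.8, 0.2, 0.4, 0.1],
    "table":  [0.1, 0.9, 0.0, 0.7],
}

# Related words score higher than unrelated ones.
sim_related = cosine_similarity(embeddings["happy"], embeddings["joyful"])
sim_unrelated = cosine_similarity(embeddings["happy"], embeddings["table"])
assert sim_related > sim_unrelated
```

In practice this similarity structure is learned from a corpus (e.g. with a trained Word2Vec model) rather than hand-specified; the sketch only shows the geometric intuition the embedding step relies on.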
| Original language | English |
|---|---|
| Number of pages | 21 |
| Journal | Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers, Series A/Chung-kuo Kung Ch'eng Hsuch K'an |
| Early online date | Oct 2025 |
| DOIs | |
| State | Published - 16 Oct 2025 |
Keywords
- Arabic tweets
- Egyptian stray dogs optimization
- Natural Language Processing
- Text-to-speech
- Visually Challenged People
Fingerprint
Dive into the research topics of 'Leveraging applied linguistics with stacked convolutional autoencoder for Arabic sentiment analysis and text-to-speech accessibility for visually impaired people'. Together they form a unique fingerprint.