Volume No: 10 | Issue No: 1
Article Type: Google Scholar
Authors: P. Suresh Kumar, P. Saranya, P. Nisha, M. Anish, R. Jawahar
Published Date: 08 April 2025
Publisher: Journal of Artificial Intelligence and Cyber Security (JAICS)

References

1. N. C. Camgoz, S. Hadfield, O. Koller, and R. Bowden, “Sign Language Transformers: Joint End-to-End Sign Language Recognition and Translation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 12, pp. 3001–3015, Dec. 2020.
2. R. Ko, S. Tang, and W. Li, “Sign Language Recognition Using Hybrid Deep Learning Models,” IEEE Access, vol. 8, pp. 134567–134578, 2020.
3. H. Lee and K. Park, “Animated Avatar Generation for Real-Time Sign Language Communication,” Comput. Animat. Virtual Worlds, vol. 31, no. 2, pp. 1–12, Jun. 2020.
4. A. Lopez, M. Camara, and P. Gutierrez, “Real-Time Sign Language Recognition Using Mobile Devices,” J. Ambient Intell. Humaniz. Comput., vol. 12, no. 4, pp. 4511–4525, Apr. 2021.
5. J. S. Park, S. H. Lee, and Y. H. Kim, “Efficient End-to-End Speech Recognition for Real-Time Accessibility,” IEEE Access, vol. 9, pp. 113211–113223, 2021.
6. S. Kaur, P. Singh, and A. Kumar, “Deep Learning Based Automated Sign Language Translation: A Survey,” IEEE Access, vol. 10, pp. 11247–11260, 2022.
7. L. Zhang, J. Li, and H. Kim, “Avatar-Based Sign Language Visualization for Accessibility,” IEEE Access, vol. 10, pp. 11234–11246, 2022.
8. H. Chen and T. Yu, “Privacy-Preserving Client-Side Speech Recognition for Accessibility Systems,” IEEE Trans. Multimedia, vol. 25, no. 6, pp. 1621–1634, Jun. 2023.
9. M. Patel and R. Mehta, “Browser-Based AI for Real-Time Applications,” IEEE Internet Things J., vol. 10, no. 5, pp. 2312–2324, May 2023.
10. D. Martinez, F. Torres, and M. Velazquez, “Low-Latency Speech Recognition for Web Accessibility,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Rhodes, Greece, 2023, pp. 1532–1536.
11. A. Kumar, R. Singh, and S. Rao, “Real-Time Speech-to-Sign Language Translation System Using Animated Avatars,” IEEE Trans. Emerg. Topics Comput., vol. 12, no. 3, pp. 890–903, 2024.
12. P. Gupta, S. Desai, and V. Patel, “Multi-Modal Assistive Interfaces for Inclusive Communication,” IEEE Trans. Human-Mach. Syst., vol. 54, no. 1, pp. 120–131, Mar. 2024.
13. T. Nguyen and H. Tran, “Sign Gesture Generation Using Deep Reinforcement Learning,” IEEE Robot. Autom. Lett., vol. 9, no. 2, pp. 588–595, Apr. 2024.
14. J. Zhao and X. Li, “Real-Time Cross-Modal Translation for Accessibility Applications,” IEEE Access, vol. 12, pp. 45421–45435, 2024.
15. S. Bhatt, “Evaluating Real-Time Accessibility Systems: Metrics and Benchmarks,” in Proc. IEEE Int. Conf. User Model., Adapt., Personaliz., Tokyo, Japan, 2024, pp. 78–85.
16. O. Koller, N. C. Camgoz, H. Ney, and R. Bowden, “Weakly Supervised Learning With Multi-Stream CNN-LSTM-HMMs to Discover Sequential Parallelism in Sign Language Videos,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 9, pp. 2306–2320, Sep. 2020.
17. S. Albanie, A. Vedaldi, and S. Zisserman, “Learning Grammatical Structures for Continuous Sign Language Recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 6, pp. 2956–2970, Jun. 2022.
18. Y. Wang, X. Chen, and F. Li, “End-to-End Multimodal Speech-to-Gesture Translation Using Deep Neural Networks,” IEEE Trans. Multimedia, vol. 25, no. 4, pp. 1012–1024, Apr. 2023.
19. R. S. Rao, K. Narayanan, and M. Balakrishnan, “Client-Side Deep Learning for Privacy-Aware Assistive Communication Systems,” IEEE Access, vol. 11, pp. 88934–88947, 2023.
20. A. Verma and S. Banerjee, “Avatar-Based Multilingual Sign Language Rendering for Inclusive Human–Computer Interaction,” IEEE Trans. Human-Mach. Syst., vol. 55, no. 2, pp. 245–256, Apr. 2025.
