TY - JOUR
AU - Cherrate, Meryem
AU - Sabri, My Abdelouahed
AU - Yahyaouy, Ali
AU - Aarab, Abdellah
PY - 2026
TI - Recognizing Sign Language Gestures Using a Hybrid Spatio-Temporal Deep Learning Model
JF - Journal of Computer Science
VL - 21
IS - 12
DO - 10.3844/jcssp.2025.2965.2974
UR - https://thescipub.com/abstract/jcssp.2025.2965.2974
AB - Recognizing gestures in American Sign Language (ASL) from video data presents significant challenges due to the intricate combination of hand gestures, facial cues, and body motion. In this work, we introduce a hybrid deep learning framework that integrates Convolutional Neural Networks (CNNs) for extracting spatial characteristics with Long Short-Term Memory (LSTM) networks for capturing temporal sequences. The model was trained and evaluated on a subset of 25 classes from the WLASL dataset, a comprehensive video collection comprising over 2,000 labeled ASL signs. Achieving an accuracy of 96%, the proposed system demonstrates superior performance compared to traditional methods. These findings underscore the strength of spatio-temporal modeling in sign language recognition. With a design geared toward scalability and real-time deployment, the approach shows strong potential to support communication and accessibility for individuals with hearing impairments. Future developments will aim to mitigate class imbalance, broaden applicability to other sign languages, and assess the benefits of Transformer-based models for enhanced recognition.
ER -