Bridging Silence and Semantics: A Multimodal Review of Sign Language Recognition, Translation, and Adaptive Learning Systems
Author: Antony Jacob
Abstract: Bridging communication gaps for the deaf and mute community remains an open AI challenge, demanding systems that go beyond static sign recognition toward adaptive, emotion-aware interaction. While existing research has advanced isolated gesture recognition, few works address dynamic sentence translation, contextual understanding, and learner adaptability in real-world environments. This review analyzes recent developments in multimodal learning—integrating vision, text, and speech—to enable seamless bidirectional communication and personalized education. It highlights the evolution from CNN-based recognition to transformer-driven sign language understanding and avatar-based delivery, and synthesizes emerging multimodal approaches that blend recognition, translation, and emotion-aware adaptation into a unified assistive learning framework.
Keywords: Sign Language Recognition, Neural Machine Translation, Emotion Detection, Adaptive Learning Platforms, Transformer Models, Multimodal Deep Learning.
Conference Name: International Conference on Science, Engineering & Technology (ICSET-25)
Conference Place: Cochin, India
Conference Date: 8th Nov 2025