Sign language recognition (commonly abbreviated SLR) is a computational task that involves recognizing actions from sign languages, that is, languages that use the visual-manual modality to convey meaning through manual articulation combined with non-manual markers. Solving it is essential, especially in the digital world, for bridging the communication gap faced by people with hearing impairments. Approaches usually rely not only on annotated color (RGB) video, but also on other modalities such as depth maps and wearable-sensor data.
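As a rough illustration of how multiple modalities can be combined, the following sketch concatenates per-frame feature vectors from RGB, depth, and sensor streams into a single vector for a downstream model. The feature sizes and the late-fusion-by-concatenation strategy are illustrative assumptions, not a description of any particular system; real pipelines would extract these features with trained encoders.

```python
import numpy as np

# Hypothetical per-frame feature vectors from different modalities.
# Real systems would extract these with pretrained encoders; random
# placeholders of plausible sizes stand in for them here.
rng = np.random.default_rng(0)
rgb_features = rng.standard_normal(512)    # e.g. from a CNN on the RGB frame
depth_features = rng.standard_normal(128)  # e.g. from a depth-map encoder
sensor_features = rng.standard_normal(32)  # e.g. from glove/IMU readings

# A simple late-fusion strategy: concatenate the modality features
# into one vector that a downstream classifier can consume.
fused = np.concatenate([rgb_features, depth_features, sensor_features])
print(fused.shape)  # (672,)
```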


Isolated sign language recognition

ISLR (also known as word-level SLR) is the task of recognizing individual signs, or tokens called glosses, from a given segment of signing video. When the input is a pre-segmented clip this is commonly framed as a classification problem, but real-time applications additionally require handling steps such as video segmentation.
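Framing ISLR as classification can be sketched as follows: pool per-frame features over time into one clip-level vector, then apply a classifier over the gloss vocabulary. The mean-pooling step, the linear classifier, and all dimensions are illustrative assumptions; actual systems use trained video models.

```python
import numpy as np

def classify_isolated_sign(frame_features, weights, bias):
    """Classify one sign clip from its per-frame features.

    frame_features: (num_frames, feature_dim) array, one feature
    vector per video frame (assumed to come from some encoder).
    weights: (feature_dim, num_glosses); bias: (num_glosses,).
    Returns the index of the predicted gloss class.
    """
    clip_feature = frame_features.mean(axis=0)   # temporal average pooling
    logits = clip_feature @ weights + bias       # linear classifier
    return int(np.argmax(logits))

# Toy usage with random numbers standing in for a trained model.
rng = np.random.default_rng(1)
features = rng.standard_normal((16, 64))   # 16 frames, 64-dim features
W = rng.standard_normal((64, 10))          # 10 glosses in the vocabulary
b = np.zeros(10)
pred = classify_isolated_sign(features, W, b)
print(pred)
```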


Continuous sign language recognition

In CSLR (also known as sign language transcription), the task is to predict the full sequence of signs (or glosses) appearing in a given signing video. This is better suited to real-world transcription of sign languages. Depending on the approach, it can also be seen as an extension of the ISLR task.
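One common way CSLR systems turn per-frame predictions into a gloss sequence is CTC-style greedy decoding: merge consecutive repeated labels, then drop a special blank symbol. The sketch below shows only this decoding step, with made-up gloss names; the frame-level predictions themselves would come from a trained sequence model.

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Collapse a per-frame label sequence into a gloss sequence.

    CTC-style decoding: merge consecutive repeats, then remove the
    blank symbol, yielding the final gloss transcript.
    """
    glosses = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            glosses.append(label)
        prev = label
    return glosses

# Twelve frames of per-frame predictions for a short signing clip
# (gloss names here are illustrative, not from a real dataset):
frames = ["-", "HELLO", "HELLO", "-", "-", "MY",
          "MY", "MY", "-", "NAME", "NAME", "-"]
print(ctc_greedy_decode(frames))  # ['HELLO', 'MY', 'NAME']
```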


Continuous sign language translation

Sign language translation refers to the problem of translating a sequence of signs (glosses) into a target spoken language, that is, a language produced by articulate sounds rather than manual gestures. It is generally modeled as an extension of the CSLR problem.
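To illustrate only the input/output shape of the translation step, the toy sketch below maps a gloss sequence to a spoken-language sentence via a lexicon lookup. The glosses and lexicon entries are hypothetical; real systems use neural sequence-to-sequence models, since gloss order and spoken-language word order generally differ.

```python
def translate_glosses(glosses, lexicon):
    """Toy gloss-to-text translation via a lexicon lookup.

    This only illustrates the shape of the task: a gloss sequence in,
    a spoken-language sentence out. Unknown glosses fall back to
    their lowercased form.
    """
    return " ".join(lexicon.get(g, g.lower()) for g in glosses)

# Hypothetical gloss vocabulary and target phrases:
lexicon = {"IX-1p": "I", "GO": "am going", "STORE": "to the store"}
print(translate_glosses(["IX-1p", "GO", "STORE"], lexicon))
# I am going to the store
```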

