    SignSpeak: Sign Language Translator

    Members of the Deaf community frequently face communication barriers when their hearing peers don't know sign language and lipreading is not possible or reliable. SignSpeak addresses this need by translating signed phrases into speech output and displaying spoken responses as text.

    SignSpeak is built with JavaScript and CSS, using TensorFlow.js for the machine learning. The model learns signs via dynamic start/stop training buttons, which makes two-handed signing possible: while a training button is active, webcam frames are captured at 5 frames per second and registered in the TensorFlow model with the corresponding label. Two signs are required. The sign "ready" begins building a sentence; the sign "finish" ends it. When "finish" is detected, the computer speaks the collected sentence, clears the phrase array, and starts a timeout to listen for a response.
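    The flow above can be sketched in plain JavaScript. This is a minimal, hedged sketch, not the project's actual code: the helper names `captureFrame`, `addExample`, `speak`, and `listenForResponse` are hypothetical stand-ins for the real webcam capture, TensorFlow.js training call, speech synthesis, and response-listening code.

    ```javascript
    // Sketch of the training capture loop: while a training button is held,
    // grab a webcam frame every 200 ms (5 fps) and register it with its label.
    // `captureFrame` and `addExample` are hypothetical helpers.
    function startTraining(label, { captureFrame, addExample, fps = 5 }) {
      const id = setInterval(() => addExample(captureFrame(), label), 1000 / fps);
      return () => clearInterval(id); // stop-training callback for the button
    }

    // Sketch of the "ready"/"finish" sentence-building loop.
    // `speak` and `listenForResponse` are hypothetical helpers.
    function createSentenceBuilder({ speak, listenForResponse, timeoutMs = 3000 }) {
      let building = false;
      let phrase = [];

      // Called once per classified webcam frame (~5 fps) with the predicted label.
      function onSign(label) {
        if (!building) {
          if (label === "ready") building = true; // "ready" starts a new sentence
          return;
        }
        if (label === "finish") {
          speak(phrase.join(" "));                  // speak the collected sentence
          phrase = [];                              // clear the phrase array
          building = false;
          setTimeout(listenForResponse, timeoutMs); // then listen for a reply
          return;
        }
        // Avoid re-adding the same sign on consecutive frames.
        if (phrase[phrase.length - 1] !== label) phrase.push(label);
      }

      return { onSign };
    }
    ```

    In the browser, `speak` might wrap the Web Speech API's `speechSynthesis.speak`, and `addExample` might feed a TensorFlow.js classifier; the de-duplication guard is one simple way to keep a sign held across several frames from repeating in the sentence.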

    Project Members: Carolyn Campbell