Two undergraduates at the University of Washington built a glove that translates American Sign Language (ASL) gestures into English and speaks the result aloud through a speaker. The SignAloud glove won them a $10,000 Lemelson-MIT Student Prize and international attention.

The SignAloud glove captures ASL gestures with sensors that record the hand's position and orientation as well as how individual fingers flex and bend. That sensor data is streamed over Bluetooth to a nearby computer, where algorithms classify each gesture; recognized gestures are translated into English and spoken aloud through a speaker.
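SignAloud's actual recognition code is not public, but the pipeline described above can be sketched in broad strokes. The Python sketch below is purely illustrative: the feature layout, template values, threshold, and the speak helper are assumptions for this example, not details from the SignAloud project. It simply shows the general idea of matching an incoming sensor reading against stored gesture templates and speaking the best match.

```python
import math
from dataclasses import dataclass


@dataclass
class GestureSample:
    # Hypothetical feature vector: five finger-flex readings plus hand
    # orientation (roll, pitch, yaw). Values are illustrative only.
    features: list


# Tiny illustrative "template library" mapping known signs to reference
# feature vectors. A real system would learn these from recorded data.
TEMPLATES = {
    "hello":     [0.10, 0.10, 0.10, 0.10, 0.10, 0.00, 0.30, 0.00],
    "thank you": [0.20, 0.80, 0.80, 0.80, 0.80, 0.10, -0.20, 0.00],
}


def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def classify(sample, threshold=0.5):
    """Return the closest known sign, or None if nothing is close enough."""
    best_word, best_dist = None, float("inf")
    for word, template in TEMPLATES.items():
        d = distance(sample.features, template)
        if d < best_dist:
            best_word, best_dist = word, d
    return best_word if best_dist <= threshold else None


def speak(text):
    # Stand-in for a text-to-speech call (e.g. a platform TTS API).
    print(f"[speaker] {text}")


if __name__ == "__main__":
    # A single simulated sensor reading arriving over Bluetooth.
    reading = GestureSample([0.12, 0.11, 0.09, 0.10, 0.10, 0.02, 0.28, 0.01])
    word = classify(reading)
    if word is not None:
        speak(word)
```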

However, Navid Azodi, co-creator of SignAloud, is careful to qualify the system's scope: "Keep in mind, we have by no means captured the entire language and we're nowhere near that. [ASL] is more than just words and phrases, and we know that. It has complex grammar structures. What we eventually want to get is for SignAloud to categorize a majority of the language."

"Many of the sign language translation devices already out there are not practical for everyday use. Some use video input, while others have sensors that cover the user's entire arm or body," says Thomas Pryor, the other co-creator of SignAloud.

Azodi and Pryor shared a desire to give back to the world using their engineering talents and experience. SignAloud could eventually help translate ASL into other languages, or serve purely as an ASL input device for a computer. They knew that translating ASL with a gadget would be a challenge, but they didn't anticipate the difficulty of tracking the nuances of the language. "That learning process was one of, if not the most humbling experience. People called it a miracle device, and it's not. It's not about helping people; they don't need help. It's about how technology can be accessible and inclusive. It's just a means of building bridges and breaking down barriers."

Photo and Source of Information: Fast Company