NSF CAREER award to fund research improving sign language education
Computer scientist developing wearable technology that helps teach ASL

One day, it might be possible to learn American Sign Language from anywhere — at home, in an office or at a party — as long as you’ve got a pair of smart glasses, a watch or even a ring.
This is the future Binghamton University computer scientist Yincheng Jin envisions in his recently awarded National Science Foundation CAREER project, which aims to design a daily ASL training platform using wearable technology and artificial intelligence.
The CAREER award is the highest distinction the NSF offers to early-career researchers it has tapped to become future leaders of their fields. As an assistant professor in the Thomas J. Watson College of Engineering and Applied Science’s School of Computing, Jin focused his previous work on wearables for ASL translation. Yet he quickly discovered there was more to it: Many people, including members of the deaf and hard of hearing (DHH) community, don’t understand ASL well enough in the first place.
“When I went in depth, I found that most DHH people aren’t familiar with sign language. If they’re not familiar, the translation is not that important,” Jin said. “I want to go further to help this community by helping them learn, so that’s why I conducted this project.”
The reason it’s so difficult to grasp ASL, despite the prevalence of online learning resources, is not only that certain areas, such as rural regions, lack qualified teachers. It’s also that those online lessons are just that — online.
“Even though websites and online materials are increasing, the limitation is that there is no one who helps [ASL learners] check whether their sign is correct or not,” Jin said. “In that case, they are unable to practice by themselves or improve because of limited teachers, and are unable to teach themselves. This limits sign-language learners.”
Jin’s latest research will be the first to combine wearable technology and advanced AI to support daily ASL learning.
“We proposed our method to use recent, advanced wearable technologies to sense or detect ASL motions, including facial and hand gestures,” he said. “And based on advanced large language models such as ChatGPT, we want to conduct a solution for learners by sensing gestures and providing corrective feedback.”
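In rough outline, the approach Jin describes could work something like the sketch below: capture the learner’s hand movements as landmark sequences, compare them frame by frame against a reference signing of the same word, and flag the segments that deviate. This is a hypothetical illustration, not the project’s actual code; the landmark format, the wrist-centered normalization and the error threshold are all assumptions.

```python
import numpy as np

# Hypothetical sketch: compare a learner's captured hand-landmark sequence
# against a reference signing of the same word. Frames are assumed to be
# (T, 21, 3) arrays of 21 hand landmarks in 3D; none of this is Jin's code.

def normalize(seq: np.ndarray) -> np.ndarray:
    """Center each frame on the wrist and scale to unit size, so the
    comparison ignores where the hands sit relative to the sensor."""
    centered = seq - seq[:, :1, :]  # landmark 0 as the wrist (assumption)
    scale = np.linalg.norm(centered, axis=(1, 2), keepdims=True)
    return centered / np.maximum(scale, 1e-8)

def frame_errors(learner: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-frame distance between learner and reference, after resampling
    the learner's sequence to the reference's length."""
    idx = np.linspace(0, len(learner) - 1, num=len(reference)).round().astype(int)
    return np.linalg.norm(normalize(learner[idx]) - normalize(reference), axis=(1, 2))

def flag_segments(errors: np.ndarray, threshold: float = 0.5) -> list[tuple[int, int]]:
    """Return (start, end) frame ranges whose error exceeds the threshold,
    i.e. the parts of the sign the learner should correct."""
    segments, start = [], None
    for i, bad in enumerate(errors > threshold):
        if bad and start is None:
            start = i
        elif not bad and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(errors)))
    return segments

# Toy usage, with random data standing in for real sensor streams:
# the learner's sign drifts off the reference in the middle frames.
rng = np.random.default_rng(0)
reference = rng.normal(size=(30, 21, 3))
learner = reference.copy()
learner[10:18] += rng.normal(scale=1.0, size=(8, 21, 3))
print(flag_segments(frame_errors(learner, reference)))
```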
This way, anybody with access to such technology can learn ASL anywhere, while simultaneously receiving teacher-like feedback — even if no one is actually there to lead the class.
“I want to help all people who are interested in ASL or are required to learn ASL, which means that they just need to bring a wearable device — like smart glasses with three cameras — and they can practice ASL words and sentences and grammar,” Jin said. “All of that, they can do it by themselves, whatever and wherever they want.”
To get to this point, Jin and his team must first gather data on all components of sign language for these devices and reconstruct those signs. The team will also use advanced machine-learning algorithms to address the challenges of limited data, generalizability and differing scenarios. Students in Binghamton’s ASL classes will have the opportunity to use Jin’s AI-powered devices and practice ASL on their own.
“We combine the reconstructed gestures with online gestures and provide which part of your sign is incorrect,” Jin said. “Like a real teacher, we will use the large language model to give human-like feedback.”
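The “human-like feedback” step could then hand those flagged segments to a large language model. Below is an equally hypothetical sketch of that hand-off: the prompt wording and the phase labels are invented for illustration, and the resulting string would be sent to whatever chat-style model the system uses.

```python
def build_feedback_prompt(word: str, flagged: list[tuple[int, int]],
                          total_frames: int) -> str:
    """Turn flagged frame ranges into a prompt asking an LLM to respond
    like a patient ASL teacher. (Hypothetical, for illustration only.)"""
    if not flagged:
        return f"The learner signed '{word}' correctly. Offer brief praise."
    phases = []
    for start, end in flagged:
        # Map each flagged frame range to a rough phase of the sign.
        if start < total_frames / 3:
            position = "beginning"
        elif start < 2 * total_frames / 3:
            position = "middle"
        else:
            position = "end"
        phases.append(f"the {position} of the sign (frames {start}-{end})")
    return (
        f"A student is practicing the ASL sign for '{word}'. Their handshape "
        f"or movement deviated from the reference during {', '.join(phases)}. "
        "As a supportive ASL teacher, explain in one or two sentences what "
        "to adjust, then encourage the student to try again."
    )

# Example: the prompt can be passed to any chat-style model API.
print(build_feedback_prompt("THANK-YOU", [(10, 18)], total_frames=30))
```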
On top of working with ASL learners and the DHH community in the Binghamton area, Jin will collaborate with Gallaudet University, a leading institution in deaf education. The results of this project can feed into the design of mobile apps that aid ASL learning and potentially never-before-seen smart devices, while generating robust datasets to be used in future courses on human-computer interaction.
But while Jin’s CAREER research is primarily focused on aiding ASL education, the principles behind this technology apply to many other fields. He imagines his research might lend a hand to robot control and interaction, or even rehabilitation services. Take a gander through his previous work, and you’ll see the names of many creative solutions already bridging people and computers — ThermoTag, EchoWhisper, SonicASL.
All of this, he said, is in the name of “improving human well-being, and to make humans live more intelligently.”