Kinect Sign Language Translator – part 1

Today, we have the first part of a two-part blog post written by program managers in Beijing and Redmond, respectively. First up, Guobin Wu:

I consider myself incredibly lucky to be the program manager of the Kinect Sign Language Translator project. There are more than 20 million people in China who are hard of hearing, and an estimated 360 million such people around the world, so this project has immense potential to generate positive social impact worldwide.

Opening new doors of communication for sign language users

I clearly remember the extensive effort we put into writing the proposal. We knew that Prof. Xilin Chen of the Chinese Academy of Sciences had been researching sign language recognition technology for 10 years, and he was eager to try the Kinect technology, which offered a very favorable price-to-performance ratio. Ming Zhou, a principal researcher from Microsoft Research Asia, had a good working relationship with Prof. Chen, and it was on Ming’s strong recommendation that we submitted the sign language translator proposal in response to Stewart Tansley’s call for Kinect projects.

Students from the special education school at Beijing Union University participated in the project.

During the first six months, we focused mainly on Chinese sign language data collection and labeling. Prof. Chen’s team worked closely with Prof. Hanjing Li of the special education school at Beijing Union University. The first step was to recruit two or three of Prof. Li’s students who are deaf to be part of the project. One candidate in particular stood out: Dandan Yin. We were moved when, during the interview, she told us, “When I was a child, my dream was to create a machine to help people who can’t hear.”

The next milestone was to build a sign language recognition system. The team has published many papers that explain the technical details, but what I want to stress here is the collaborative nature of the project. Every month, we had a team meeting to review the progress and plan our next steps. Experts from a host of disciplines—language modeling, translation, computer vision, speech recognition, 3D modeling, and special education—contributed to the system design.

Our system is still a research prototype. It is progressing from recognizing isolated words signed by a specific person (translator mode) to understanding continuous communication from any competent signer (communication mode). Our current prototype can successfully produce good results for translator mode, and we are diligently working to overcome the technology hurdles so that the system can reliably understand and interpret in communication mode. And while we’re solving those challenges, we are also starting to build up the system’s vocabulary of American Sign Language gestures, which are different from those of Chinese Sign Language.
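The team’s published papers describe the actual recognition pipeline. Purely as an illustrative sketch of what isolated-word recognition in translator mode involves, one can imagine comparing an observed skeleton-joint trajectory from the Kinect sensor against a library of labeled sign templates, using dynamic time warping to tolerate differences in signing speed. The function and template names below are hypothetical, not part of the project’s actual system.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two trajectories.

    Each trajectory is a list of frames; each frame is a tuple of
    joint coordinates (e.g., flattened x/y/z values from a skeleton
    tracker). DTW lets a slow signer match a fast template.
    """
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch template
                                 cost[i][j - 1],      # stretch observation
                                 cost[i - 1][j - 1])  # match frames one-to-one
    return cost[n][m]

def classify_sign(trajectory, templates):
    """Return the label of the stored template closest to the observation."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))
```

Continuous communication mode is much harder precisely because this kind of matching assumes the sign’s start and end frames are already known; recognizing unsegmented, signer-independent streams is one of the hurdles the paragraph above refers to.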

An avatar on the computer screen (right) represents the hearing person and interprets their spoken language into sign language.

We’ve had the good fortune to demo the system at both the Microsoft Research Faculty Summit and the Microsoft company meeting this year. Dandan attended both events and displayed her professionalism as a signer. After the Faculty Summit in July, she emotionally thanked Microsoft for turning her dream into reality. I was nearly moved to tears by our reception during the company meeting, the first one that I’d ever attended in person. And I was thrilled to hear thundering applause when Dandan communicated with a hearing employee by using our system.

Since these demos, the project has received much attention from researchers and the deaf community, especially in the United States. We expect that more and more researchers from different disciplines and different countries will collaboratively build on the prototype, so that the Kinect Sign Language Translator system will ultimately benefit the global community of those who are deaf or hard of hearing. The sign language project is a great example of selecting the right technical project with the right innovative partners, and applying effort and perseverance over the years. It has been a wonderful, multidisciplinary, collaborative effort, and I’m honored and proud to be involved.

Guobin Wu, Research Program Manager, Microsoft Research Asia

Learn more