Tell us more about your background and job at Google.
When I moved to the U.S. in 1984, there were no transcription services. I wanted to change that, so I focused my work on optimizing speech and language recognition to help people who are deaf or hard of hearing.
How has your personal experience shaped your career?
I completely lost my hearing when I was one. I learned to lipread well so I could communicate with other students and teachers. My family was also very supportive. When I switched to a school where my father taught, he made sure I was in a class with children I knew so the transition was smoother.
But in eighth grade, I moved to a math school with new teachers and students and was unable to lipread what they taught in class or communicate with my new classmates. I sat, day after day, not understanding the material being taught, and had to teach myself from textbooks. If I had a tool like Live Transcribe when I was growing up, my experience would have been very different.
In what ways has assistive technology — like Live Transcribe — changed your experience today?
Technology provides tremendous opportunities to help people with disabilities — I know this firsthand.
I use Live Transcribe every day to communicate with others. I use it to play games and share stories with my twin granddaughters — which is life-changing. And just last week, I gave a lecture at a mathematical seminar at Johns Hopkins University. During it, I could interact with the audience and answer questions — without Live Transcribe, that would have been very difficult for me to do.
I used to rely heavily on lipreading for day-to-day tasks, but when people wear masks I can’t do that — I don’t even know when someone who’s wearing a mask is talking to me. Because of this, Live Transcribe is even more important to me — especially at stores, on public transit or at the doctor’s office.
What are you excited about when you think about speech recognition technology ten years from now?
My dream is to use speech recognition technology to help people communicate. As technology advances, it will unlock new possibilities — such as transcribing speech even as people switch languages, understanding people with all accents and speech motor skills, indicating more sound events with visual symbols and automatically integrating sign recognition or additional haptic feedback technologies.
Further in the future, I hope to see an experience where people are no longer dependent on a mobile phone to see transcriptions. Perhaps transcriptions will be available in convenient wearable eye technologies or appear on a wall when someone looks at it. Some even predict that there will be no mobile phones at all, since the devices around us — like our walls — will act as mobile devices when people need them to.
What do you want others to learn from World Hearing Day?
According to the WHO, one in ten people will experience disabling hearing loss by 2050. Still, many people with hearing loss don’t know about novel speech recognition technologies that could help them communicate, and many hearing people aren’t aware of these tools either.
World Hearing Day is an opportunity to make everybody aware of the needs of people with hearing loss and the technology that everyone can use to have a tremendous impact on their lives.