New AI Tech for TV Sign Language Debuts at IBC
UK start-up Robotica is creating state-of-the-art artificial intelligence to bring sign language translations to the small screen at scale to make more programming accessible to people who prefer or require sign languages.
Premiering at the International Broadcasting Convention (IBC) on Friday (Stand 9.C03, Hall 9), Robotica is bringing the world’s first broadcast-standard sign language avatars to television audiences. Its ultra-realistic, human-like digital signers already know British Sign Language, and are now learning American, Italian and other sign languages, as well as visual signing systems such as Makaton and Cued Speech.
About sign language
As many as one in six people are deaf or have difficulty hearing, and there are 70 million sign language users globally, collectively using more than 300 different sign languages. People who are born deaf, and children of deaf adults (CODA), may learn a sign language as their first or only language, and this may lead to them being excluded from mainstream information and entertainment.
CEO Adrian Pickering says, “There’s a global shortage of sign language translators and interpreters. They work really hard to improve lives in hospitals and courtrooms, at job interviews, helping people buy a new home. It’s a tough job and takes years to learn. Even if there were a hundred times as many translators, there still wouldn’t be nearly enough to meet the demands of a content-hungry digital world. Last year, the BBC released 28,000 hours of new content. Every single hour, tens of thousands of new web pages are crafted and 30,000 hours of new videos are uploaded to YouTube. The only way that sign language users can gain equality of access to information and entertainment is with machine translation.”
Pickering, together with co-founders Michael Davey and Matthew Bolton, started Robotica to meet this demand. “Human translations will always be first choice. Anything that can be signed by human interpreters should be signed by human interpreters. You don’t want a computer giving your diagnosis, or reporting a disaster. There will always be a need for empathy, for the personal touch. We’ll just translate everything else!” said Davey.
Aren’t subtitles enough?
Sign languages do not share grammar or concepts with their local spoken counterparts, and typically can’t be written down. For many deaf people, reading English can be difficult or impossible, and subtitles and audio description may be of no help. “Learning to read English as a second language, without being able to hear it, is like learning to read Korean without knowing how to speak it,” said Catherine Cooper, Robotica’s Product Owner and Deaf Culture Consultant. “For children in particular, subtitles just don’t work. We need sign language on TV as that’s the language we think and speak.”
Robotica Machine Learning Limited is an audiovisual machine learning, comprehension, translation, and synthetic presentation services company, based in the heart of Norwich, United Kingdom.
Since its inception in 2020, Robotica’s vision – The Power to Comprehend™ – has inspired us to create technology that can comprehend language, unlocking endless possibilities.
Drawing from fields as diverse as neuroscience, telecoms and video games, our breakthrough motion capture, AI, translation, 3D and synthetic video technologies generate visual translations for television, public transport, leisure and education. Presented by virtual interpreters, this technology enables us to translate everything into sign languages, at scale.
Robotica works with deaf communities and deaf charities, and industry leaders from TV broadcast, public transport and leisure to create sign language videos to accompany customers’ original content.
Increasing the value of content
The addition of sign language brings new value to broadcast content, helping it to reach further and find a new audience.