Silent speech voice control
Researchers have built a system that can recognise commands from tongue, ear and jaw movements - a breakthrough in the development of silent speech control for disabled people

Researchers from Georgia Tech, Google and Microsoft have collaborated to build a system that lets disabled people control wearables such as Google Glass using just tongue, ear and jaw movements.

The silent speech system combines a magnetic tongue-control system, similar to one previously used by paralysed patients to control wheelchairs, with earpieces that use infrared light to map how the shape of the ear canal changes when a person utters a particular word. Each word in the English vocabulary deforms the ear canal in a distinctive way.
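
To illustrate the sensing idea, here is a minimal Python sketch. It is not the published implementation: the sampling rate, window length and sensor interface are all assumptions, and it only shows how a raw window of infrared readings might be reduced to a fixed-length "deformation signature" that can be compared across utterances.

    import numpy as np

    # A sketch only: the sensor interface, sampling rate and signature
    # length are assumptions, not details from the study.
    SAMPLE_RATE_HZ = 100      # assumed earpiece sampling rate
    SIGNATURE_LENGTH = 64     # fixed feature length for comparison

    def deformation_signature(ir_samples: np.ndarray) -> np.ndarray:
        """Reduce one window of raw infrared readings, captured while
        a word is spoken, to a fixed-length feature vector."""
        # Subtract the resting baseline so only canal movement remains.
        centred = ir_samples - np.median(ir_samples)
        # Normalise amplitude so quiet and forceful articulations compare.
        peak = np.max(np.abs(centred))
        if peak > 0:
            centred = centred / peak
        # Resample to a fixed length so utterances of different durations
        # produce vectors of the same dimensionality.
        positions = np.linspace(0, len(centred) - 1, SIGNATURE_LENGTH)
        return np.interp(positions, np.arange(len(centred)), centred)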

The magnetic tongue-control system used to steer wheelchairs requires paralysed patients to have a magnetic tongue piercing or a sensor affixed to the tongue, so the researchers set out to develop something less invasive.

Inspiration struck Thad Starner, a professor who directs the contextual computing group at the Georgia Institute of Technology and serves as technical lead on Google Glass, during a dental appointment.

The dentist stuck a finger in Starner's ear and asked him to bite down – a common test for jaw function – and Starner noticed that when his jaw moved, the space in his ears did too. "I said, 'Well, that's cool. I wonder if we can do silent speech recognition with that?'" Starner told New Scientist.

To go with the new device, the researchers programmed software to perform speech recognition based on how wearers' ear canals moved as they spoke 12 distinct phrases, including "Give me my medicine, please" and "I need to use the bathroom".
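
The article does not describe the recognition model itself, so the following Python sketch stands in with the simplest plausible approach: averaging each phrase's training signatures into a template, then picking the nearest template at run time. The phrase list, training data and function names are hypothetical.

    import numpy as np

    # Two phrases are named in the article; the study used 12 in total,
    # so the rest would be filled in from the real phrase set.
    PHRASES = ["Give me my medicine, please", "I need to use the bathroom"]

    def train_templates(examples):
        """examples: dict mapping each phrase to a list of deformation
        signatures recorded while that phrase was spoken."""
        return {phrase: np.mean(np.stack(signatures), axis=0)
                for phrase, signatures in examples.items()}

    def recognise(signature, templates):
        """Return the phrase whose averaged template is closest to the
        incoming signature (Euclidean distance)."""
        return min(templates,
                   key=lambda phrase: np.linalg.norm(signature - templates[phrase]))

A production system would likely use a trained classifier and proper time alignment rather than plain Euclidean distance, but the overall structure - templates learned per phrase, nearest match at run time - would be similar.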

The researchers found that the system correctly recognised what testers were saying 90% of the time when both the tongue and ear trackers were worn, with accuracy slightly lower when only the ear trackers were used. In a further experiment, modified ear trackers reached 96% accuracy.
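
The accuracy gain from wearing both trackers suggests the two signals are complementary. One common way to exploit that, sketched below in Python, is early fusion: concatenating the two feature vectors before classification. This is an assumption for illustration, not necessarily how the study combined its sensors.

    import numpy as np

    def fused_signature(ear_sig: np.ndarray, tongue_sig: np.ndarray) -> np.ndarray:
        # Concatenating the two feature vectors lets the classifier see
        # both signals at once; ear-only operation simply omits this step.
        return np.concatenate([ear_sig, tongue_sig])

    def accuracy(predictions, truths):
        """Fraction of utterances recognised correctly - the figure the
        study reports as 90% (both sensors) or 96% (modified trackers)."""
        return sum(p == t for p, t in zip(predictions, truths)) / len(truths)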

It is still early days for the technology. Next, the researchers will compile a phrasebook of common words and phrases that patients are likely to want to use, to improve the accuracy of the ear trackers.

If commercially realised, silent speech recognition could transform the lives of disabled people by allowing them to discreetly control a wide range of wearable devices. The technology has been researched by institutes around the world since 2007.

The study, entitled "Toward Silent-Speech Control of Consumer Wearables", is published in the IEEE Computer Society journal Computer.