[Image: We've all heard Stephen Hawking talk, but no one's ever asked if he likes his computerised voice. Credit: Reuters]

Researchers in the US have launched the Human Voicebank Initiative, a database of donated speech recordings used to build synthetic voices matched to patients with speech impairments. The aim is to help the tens of millions of people worldwide who rely on generic computerised voices, and the project needs people to donate their voices.

If you've ever heard Stephen Hawking talk, you'll know that people who have speech impairments caused by cerebral palsy, Parkinson's disease or other neurological conditions often have to make do with generic computerised voices that make them sound robotic.

To solve this problem, Professor Rupal Patel from Northeastern University and Dr Tim Bunnell from the Nemours Alfred I. duPont Hospital for Children have created a new technology called VocaliD that can build synthetic voices using whatever vocal sounds a patient can still produce. These are then blended with a voice from a donor who matches the patient's age, gender and size.
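As a loose illustration of that matching step, here is a minimal Python sketch that scores hypothetical donor records against a patient's age, gender and size. The field names, scoring weights and five-year age window are illustrative assumptions, not VocaliD's actual criteria.

```python
# Toy donor matching on age, gender and size (illustrative fields only,
# not VocaliD's real matching criteria).
def match_score(patient, donor):
    score = 0
    score += donor["gender"] == patient["gender"]     # True counts as 1
    score += abs(donor["age"] - patient["age"]) <= 5  # assumed age window
    score += donor["size"] == patient["size"]
    return score

def best_donor(patient, donors):
    # Return the donor record with the highest match score.
    return max(donors, key=lambda donor: match_score(patient, donor))

donors = [
    {"name": "A", "age": 34, "gender": "f", "size": "medium"},
    {"name": "B", "age": 12, "gender": "m", "size": "small"},
]
patient = {"age": 11, "gender": "m", "size": "small"}
print(best_donor(patient, donors)["name"])  # -> B
```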

The end result is a synthetic voice that suits the patient and has a powerful positive impact on their ability to express themselves. And each synthetic voice is as unique as a fingerprint.

"We thought that there had to be a way to reverse-engineer what is left of a voice and we did just that, using some funding from the National Science Foundation to create custom-crafted voices that captured their unique vocal identities," Patel said at the TEDWomen Talk conference in December 2013.

To donate a voice, donors have to record two to three hours of speech: several thousand sample sentences sourced from classic books such as The Wonderful Wizard of Oz, The Velveteen Rabbit and White Fang.

"The idea is for the [donor] to cover all the different combinations of sounds that occur in language. The more speech you have, the better the voice."

When a donor reads out the sentences, the software chops the recordings into multiple files, each containing one word or one common utterance such as "um" or "ah". These clips can then be strung together, so that when a patient types into the program to communicate, a more natural voice that suits them comes out.
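As a rough sketch of that pipeline, the Python below strings pre-recorded word clips into a single utterance. It assumes a hypothetical clips/ folder containing one WAV file per word, all recorded with the same sample rate and format; it is a toy version of the idea, not VocaliD's software.

```python
# Toy word-level concatenative synthesis: look up one WAV clip per
# typed word and append the audio frames into a single output file.
import wave

def speak(text, clip_dir="clips", out_path="utterance.wav"):
    out = None
    for word in text.lower().split():
        # Assumes a clip exists for every word, e.g. clips/hello.wav.
        with wave.open(f"{clip_dir}/{word}.wav", "rb") as clip:
            frames = clip.readframes(clip.getnframes())
            if out is None:
                out = wave.open(out_path, "wb")
                # Copy channels, sample width and rate from the first clip.
                out.setparams(clip.getparams())
            out.writeframes(frames)
    if out is not None:
        out.close()

speak("hello world")
```

A real synthesiser would also smooth the joins between clips rather than simply butting them together, which is part of what separates a natural-sounding banked voice from a choppy, robotic one.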

At the moment, the existing software, ModelTalker, is Windows-only and designed primarily for patients who are losing their voices, but the researchers are developing a smartphone app that will work on iOS and Android devices.

In the meantime, interested voice donors, as well as people with technical or business expertise who can help with app development or fundraising, can apply to contribute through the project's website.