Scientists have found the area of the human brain responsible for the pitch of our speech
In June, a team of scientists from the University of California, San Francisco published a study that sheds light on how people change the pitch of their speech.
The results of this research could be useful for creating natural-sounding speech synthesizers, capable of emotion and varied intonation. We cover the study in today's article.
Photo Florian Koppe / CC
How the study was conducted
A team of scientists at the University of California recently conducted a series of experiments studying the relationship between different parts of the brain and the organs of speech. The researchers are trying to find out what happens in the brain during a conversation.
The work discussed in this article focuses on the brain area that controls the larynx, including during changes in pitch.
The study was led by neurosurgeon Edward Chang. He works with patients suffering from epilepsy, performing operations that prevent seizures. Chang monitors the brain activity of some of his patients with the help of special equipment.
The team recruited volunteers for the study specifically from this group of patients. Implanted electrodes made it possible to monitor their neural activity during the experiments. This method, known as electrocorticography, helped the scientists find the area of the brain responsible for changes in pitch.
Participants in the study were asked to repeat the same sentence aloud, stressing a different word each time. This changed the meaning of the phrase, and the fundamental frequency, the rate at which the vocal folds vibrate, changed along with it.
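For readers unfamiliar with the term: the fundamental frequency (F0) can be estimated directly from a recorded waveform. Below is a rough illustrative sketch, not something from the study itself, of a minimal autocorrelation-based pitch estimator in Python; the synthetic 200 Hz tone stands in for a recorded voice.

```python
import numpy as np

def estimate_f0(signal: np.ndarray, sample_rate: int,
                fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Estimate the fundamental frequency (pitch) via autocorrelation.

    The strongest autocorrelation peak inside the plausible range of
    human vocal-fold vibration (fmin..fmax Hz) gives the period.
    """
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# Synthetic "voiced" signal: a 200 Hz tone at a 16 kHz sample rate.
sr = 16_000
t = np.arange(4000) / sr            # a quarter-second of audio
tone = np.sin(2 * np.pi * 200.0 * t)
print(round(estimate_f0(tone, sr)))  # ≈ 200
```

Real voiced speech is noisier than a pure tone, so production pitch trackers add voicing detection and smoothing, but the core idea of finding the period of vocal-fold vibration is the same.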
The team found that neurons in one region of the brain activated when a patient raised their pitch. This region, in the motor cortex, controls the muscles of the larynx. When the researchers stimulated neurons in this area with electrical current, the laryngeal muscles responded by tensing, and some patients spontaneously uttered sounds.
Participants in the study also listened to recordings of their own voices, which produced a neural response. From this the team concluded that this area of the brain is involved not only in changing the fundamental frequency but also in the perception of speech. This may shed light on how the brain supports the imitation of other people's speech, allowing us to change pitch and other characteristics to mimic an interlocutor.
Useful for the development of voice synthesizers
Wired journalist Robbie Gonzalez suggests that the results of the study could be useful in laryngeal prosthetics, allowing patients without a voice to "speak" more realistically. The scientists themselves confirm this.
Human speech synthesizers, such as the one Stephen Hawking used, can already reproduce words by interpreting neural activity. However, they cannot place stress the way a person with a healthy speech apparatus would. As a result, the speech sounds unnatural, and it is not always clear whether the speaker is asking a question or making a statement.
Scientists continue to explore the brain area responsible for changing the fundamental frequency. The expectation is that future speech synthesizers will be able to analyze neural activity in this area and, based on the data received, construct sentences in a natural way: raising the pitch of the right words and shaping the intonation of questions and statements according to what the person wants to say.
Other studies of speech
Not long ago, another study was conducted in Edward Chang's laboratory that could help in the development of voice-generating devices. Participants read hundreds of sentences that together covered nearly all of the phonetic constructions of American English, while the scientists tracked the subjects' neural activity.
Picture PxHere / PD
This time the subject of interest was coarticulation: how the organs of the vocal tract (for example, the lips and tongue) work when pronouncing different sounds. Attention was paid to words in which the same consonant is followed by different vowels. When pronouncing such words, the lips and tongue often move differently, and as a result our perception of the corresponding sounds also differs.
The scientists not only identified the groups of neurons responsible for specific movements of the vocal tract organs, but also established that the speech centers of the brain coordinate the movements of the muscles of the tongue, larynx, and other organs of the vocal tract based on the context of speech, that is, on the order in which the sounds are pronounced. We know that the tongue takes different positions depending on what the next sound in the word will be, and there is an enormous number of such sound combinations; this is another factor that makes human speech sound natural.
Studying all the variants of coarticulation controlled by neural activity will also play a role in the development of speech synthesis technologies for people who have lost the ability to speak but whose neural functions are preserved.
Systems working on the inverse principle, AI-based tools that convert speech to text, are also used to help people with disabilities. Intonation and stress pose a challenge for this technology as well: they interfere with the artificial-intelligence algorithms' ability to recognize individual words.
Employees of Cisco, the Moscow Institute of Physics and Technology, and the Higher School of Economics recently presented a possible solution to this problem for transcribing American English speech. Their system uses the CMUdict pronunciation dictionary and the capabilities of a recurrent neural network. Their method automatically "cleans" the speech of extraneous sounds in advance, so that it comes closer to standard colloquial American English, without distinct regional or ethnic "traces."
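To give a sense of what CMUdict provides, here is a minimal sketch of parsing entries in its plain-text format (WORD followed by ARPAbet phones, with digits on vowels marking stress: 1 primary, 2 secondary, 0 unstressed). The sample entries are illustrative; the real dictionary contains over 100,000 words.

```python
# Sample lines in CMUdict format; "(1)" marks an alternate pronunciation.
SAMPLE = """\
HELLO  HH AH0 L OW1
WORLD  W ER1 L D
READ  R IY1 D
READ(1)  R EH1 D
"""

def parse_cmudict(text: str) -> dict[str, list[list[str]]]:
    """Parse CMUdict-style lines into word -> list of pronunciations."""
    lexicon: dict[str, list[list[str]]] = {}
    for line in text.splitlines():
        if not line or line.startswith(";;;"):  # ;;; marks comments
            continue
        word, phones = line.split(None, 1)
        word = word.split("(")[0]  # strip variant markers like READ(1)
        lexicon.setdefault(word, []).append(phones.split())
    return lexicon

lex = parse_cmudict(SAMPLE)
print(lex["HELLO"])      # [['HH', 'AH0', 'L', 'OW1']]
print(len(lex["READ"]))  # 2 pronunciation variants
```

A speech system can use such a lexicon to map recognized words to canonical phone sequences, which is one way to normalize away pronunciation variation.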
The future of speech research
In the future, Professor Chang wants to investigate how the brain works in people who speak Chinese dialects, where variations in pitch can significantly change the meaning of a word. The scientists wonder how people perceive different phonetic constructions in that case.
Benjamin Dichter, one of Chang's colleagues, believes the next step is to go further in understanding the brain-larynx connection. The team must now learn to predict which pitch a speaker will choose by analyzing their neural activity. This is the key to creating a natural-sounding speech synthesizer.
The scientists do not expect such a device to appear in the near future, but the work of Dichter and the team brings science closer to the moment when artificial speech devices learn to interpret not only individual words but also intonation, and thus add emotion to speech.
More interesting material about sound in our Telegram channel:
How the Star Wars
Unusual audio gadgets
Sounds from the world of nightmares
Cinema on the plates
Music at work