Source: Wikipedia, the free encyclopedia (retrieved 2015/10/07 12:32:33 JST)
Lip reading, also known as lipreading or speechreading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available, relying also on information provided by the context, knowledge of the language, and any residual hearing. Although primarily used by deaf and hard-of-hearing people, people with normal hearing generally process visual information from the moving mouth at a subconscious level.
In everyday conversation, people with normal vision, hearing and social skills subconsciously use information from the lips and face to aid aural comprehension, and most fluent speakers of a language are able to speechread to some extent (see McGurk effect). This is because each speech sound (phoneme) has a particular facial and mouth position (viseme), and people can to some extent deduce which phoneme has been produced from visual cues, even if the sound is unavailable or degraded (e.g. by background noise).
Lipreading while listening to spoken language provides the redundant audiovisual cues needed to learn language in the first place. Lewkowicz's studies showed that babies between 4 and 8 months of age pay special attention to mouth movements when learning to speak, for both native and nonnative languages. By 12 months of age they have attained enough audiovisual cues that they no longer need to look at the mouth when encountering their native language, but hearing a nonnative language spoken again prompts a shift back to combined visual and auditory engagement, lipreading while listening, in order to process, understand and produce speech.[1]
Research has shown that, as expected, deaf adults are better at lipreading than hearing adults, owing to their greater practice and heavier reliance on lip reading to understand speech. However, when the same research team conducted a similar study with children, it found that deaf and hearing children have similar lip reading skills. Only after 14 years of age do the skill levels of deaf and hearing children begin to diverge significantly, indicating that lipreading skill in early life is independent of auditory capability. This may reflect a deterioration in lip reading ability with age in hearing individuals, or an increase in lip reading efficiency with age in deaf individuals.[2]
Lipreading has been shown to activate not only the visual cortex of the brain but also the auditory cortex, in the same way as when actual speech is heard. Research indicates that rather than having clear-cut regions dedicated to different senses, the brain works in a multisensory fashion, making a coordinated effort to consider and combine all the types of speech information it receives, regardless of modality. Because hearing captures more articulatory detail than sight or touch, the brain uses speech sound to compensate for the other senses.[3]
Speechreading is limited, however, in that many phonemes share the same viseme and thus are impossible to distinguish from visual information alone. Sounds whose place of articulation is deep inside the mouth or throat are not detectable, such as glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). It has been estimated that only 30% to 40% of sounds in the English language are distinguishable from sight alone.
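The collapse of many phonemes into a single viseme can be made concrete with a short sketch. The Python example below is an illustrative simplification, not a standard viseme inventory; the groupings and helper names (viseme_of, viseme_sequence) are hypothetical, chosen only to show how visually identical words arise:

```python
# Illustrative sketch of viseme ambiguity in lipreading.
# The phoneme groupings below are a simplification for demonstration,
# not a standard viseme inventory.

VISEME_CLASSES = [
    {"p", "b", "m"},            # bilabial closures look identical on the lips
    {"f", "v"},                 # labiodental pair
    {"t", "d", "s", "z", "n"},  # alveolar gestures, hard to separate visually
    {"k", "g"},                 # velar pair, articulated too far back to see
]

def viseme_of(phoneme: str) -> str:
    """Return a label for the viseme class a phoneme belongs to."""
    for group in VISEME_CLASSES:
        if phoneme in group:
            return "/".join(sorted(group))
    return phoneme  # phonemes with distinct mouth shapes map to themselves

def viseme_sequence(phonemes: list[str]) -> list[str]:
    """Map a phoneme sequence to the viseme sequence a lipreader sees."""
    return [viseme_of(p) for p in phonemes]

# "pat" and "bad" produce the same viseme sequence, so they cannot be
# distinguished by sight alone:
assert viseme_sequence(["p", "a", "t"]) == viseme_sequence(["b", "a", "d"])
print(viseme_sequence(["p", "a", "t"]))  # ['b/m/p', 'a', 'd/n/s/t/z']
print(viseme_sequence(["b", "a", "d"]))  # ['b/m/p', 'a', 'd/n/s/t/z']
```

Under any such grouping, only the viseme sequence is recoverable by sight, which is consistent with the estimate above that only 30% to 40% of English sounds are visually distinguishable.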
Thus, for example, the phrase "where there's life, there's hope" looks identical to "where's the lavender soap" in most English dialects. Author Henry Kisor titled his book What's That Pig Outdoors?: A Memoir of Deafness in reference to his misreading of the question "What's that big loud noise?", and used the example in the book to discuss the shortcomings of speechreading.[4]
As a result, a speechreader must depend heavily on cues from the environment, from the context of the communication, and from knowledge of what is likely to be said. It is much easier to speechread customary phrases such as greetings, or connected discourse on a familiar topic, than utterances that appear in isolation and without supporting information, such as the name of a person never met before.
Speechreading is especially difficult in certain scenarios, such as poor lighting, distance from the speaker, a face that is turned away or partly obscured (for example by a hand, a microphone or facial hair), fast or mumbled speech, unfamiliar accents or vocabulary, and group conversations with rapid turn-taking.
Lip reading is difficult because only about 30% of speech is visible; the remaining 70% must be inferred from context clues. Some things can nevertheless be done to make the process easier. Learning to lip read is like learning to read a book: a novice lip reader concentrates on each individual sound and may miss the meaning. Lip reading is more effective when the message is received as a whole rather than sound by sound.
Lipreading is a skill that is easier to develop in those who have experience with spoken language. In one study by Tonya R. Bergeson, adults who became deaf progressively were able to read lips much better than those who became deaf suddenly.[6]
Lip reading can be taught, but infants initially begin to lip read between the ages of 6 and 12 months. In order to imitate, a baby must learn to shape its lips in accordance with the sounds it hears.[7] Even newborns have been shown to imitate adult mouth movements such as sticking out the tongue or opening the mouth, which may be a precursor to further imitation and lip reading abilities.[8] Infants as young as 4 months can connect visual and auditory information, which is helpful when learning to lip read. For example, one study showed that infants tend to look longer at a visual stimulus that corresponds to an auditory stimulus they hear from a recording.[9]
New studies suggest that aspects of lip reading may indicate signs of autism.[citation needed] Research from Florida Atlantic University compared groups of infants (ages four to 12 months) with a group of adults in a test of lip reading abilities. The study discusses the significance of the shift babies make between watching the eyes and the mouth of a speaker at different developmental stages. At the age of four months, they typically focus their attention on the eyes. Between six and eight months, during the "babbling" stage of language acquisition, they shift their focus to the mouth of the speaker. They continue lip reading until about 10 months of age, at which point they switch their attention back to the eyes. The researchers suggest that the second stage relates to the emergence of speech and the ability to better understand "social cues, shared meanings, beliefs and desires", according to professor of psychology David J. Lewkowicz.[10] When hearing a language different from their native language, babies revert their attention to the mouth regardless of what stage of language acquisition they are in, and continue to lip read up to about 12 months of age. Although further research is needed to support the claim, the data suggest that "the infants who continue to focus most of their attention on the mouth past 12 months of age are probably not developing the age-appropriate perceptual and cognitive skills and thus may be at risk for disorders like autism".
While lip reading is a natural ability that develops in babies at a young age, people can be taught to lip read and to become better lip readers. There are trainers and teachers who can aid people learning to lip read and help them focus on context cues, and there are several ways in which lip reading can be taught or improved.[11]
Speechreaders who have grown up deaf may never have heard the spoken language and are unlikely to be fluent users of it, which makes speechreading much more difficult. They must also learn the individual visemes by conscious training in an educational setting. In addition, speechreading takes a lot of focus, and can be extremely tiring. For these and other reasons, many deaf people prefer to use other means of communication with non-signers, such as mime and gesture, writing, and sign language interpreters.
To quote from Dorothy Clegg's 1953 book The Listening Eye,[12] "When you are deaf you live inside a well-corked glass bottle. You see the entrancing outside world, but it does not reach you. After learning to lip read, you are still inside the bottle, but the cork has come out and the outside world slowly but surely comes in to you." This view—that speechreading, though difficult, can be successful—is relatively controversial within the deaf world; for an incomplete history of this debate, see manualism and oralism.
When talking with a deaf person who uses speechreading, exaggerated mouthing of words is not considered to be helpful and may in fact obscure useful clues. However, it is possible to learn to emphasize useful clues; this is known as "lip speaking".
Speechreading may be combined with cued speech, movements of the hands that visually represent otherwise invisible details of pronunciation. One of the arguments in favor of cued speech is that it helps develop lip-reading skills that may be useful even when cues are absent, i.e., when communicating with people who are neither deaf nor hard of hearing.
Cued speech helps resolve speechreading ambiguities; the combined practice of lipreading and cued speech brings greater clarity and accuracy to the understanding of spoken sentences. Dr. R. Orin Cornett, the inventor of cued speech, was known for his work at Gallaudet University in Washington, D.C., before his death in 2002. In one study he tested 18 profoundly deaf children, each with at least four years of cued speech instruction, on their understanding of sentences presented in different ways (e.g. cued speech alone, lipreading alone, cued speech combined with lipreading).[13] His research showed that comprehension can reach up to 95% when lipreading is combined with cued speech, a significant increase over the roughly 30% of words understood by lipreading alone. Likewise, a person who listened, lipread and was exposed to cued speech showed increased understanding of the sentences. Thus a deaf person's understanding of spoken sentences can be substantially augmented by combining lipreading with cued speech.