Source: Wikipedia, the free encyclopedia (article retrieved 2016/06/24 10:41:40 JST).
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs (the ears) and the parts of the nervous system that process auditory information.
The folds of cartilage surrounding the ear canal are called the pinna. Sound waves are reflected and attenuated when they hit the pinna, and these changes provide additional information that will help the brain determine the direction from which the sounds came.
The sound waves enter the auditory canal, a deceptively simple tube. The ear canal amplifies sounds that are between 3 and 12 kHz. At the far end of the ear canal is the tympanic membrane, which marks the beginning of the middle ear.
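The lower edge of this passband reflects the canal's acoustic resonance. As a rough illustration, the canal can be treated as a tube closed at one end by the eardrum (a quarter-wave resonator); the canal length and speed of sound below are typical assumed values, not figures given in the text.

```python
# Quarter-wave resonator estimate of the ear canal's primary resonance.
# Assumed values (not from the article): canal length ~2.5 cm, speed of sound 343 m/s.

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at roughly 20 degrees C
CANAL_LENGTH_M = 0.025       # typical adult ear canal length (assumption)

# A tube open at one end and closed at the other (by the eardrum) resonates
# at roughly c / (4 * L), which is why gain peaks in the low-kHz range.
resonance_hz = SPEED_OF_SOUND_M_S / (4.0 * CANAL_LENGTH_M)
print(f"Estimated primary resonance: {resonance_hz:.0f} Hz")  # ~3400 Hz
```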
Sound waves travel through the ear canal and hit the tympanic membrane, or eardrum. This wave information travels across the air-filled middle ear cavity via a series of delicate bones: the malleus (hammer), incus (anvil) and stapes (stirrup). These ossicles act as a lever, converting the lower-pressure eardrum sound vibrations into higher-pressure sound vibrations at another, smaller membrane called the oval window or vestibular window. The manubrium (handle) of the malleus articulates with the tympanic membrane, while the footplate (base) of the stapes articulates with the oval window. Higher pressure is necessary at the oval window than at the tympanic membrane because the inner ear beyond the oval window contains liquid rather than air. The stapedius reflex of the middle-ear muscles helps protect the inner ear from damage: in response to loud sound, the stapedius muscle contracts and reduces the transmission of sound energy. The middle ear still contains the sound information in wave form; it is converted to nerve impulses in the cochlea.
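The pressure gain of the middle ear comes mainly from the eardrum being much larger than the stapes footplate, with a smaller contribution from the ossicular lever. The sketch below reproduces the commonly quoted gain of roughly 20- to 25-fold; the areas and lever ratio are typical textbook assumptions, not values stated in this article.

```python
import math

# Rough impedance-matching estimate for the middle ear.
# Assumed typical values (not from the article):
EARDRUM_AREA_MM2 = 55.0      # effective vibrating area of the tympanic membrane
FOOTPLATE_AREA_MM2 = 3.2     # area of the stapes footplate at the oval window
LEVER_RATIO = 1.3            # mechanical advantage of the ossicular chain

# Pressure = force / area, so funnelling the same force onto a smaller
# membrane, plus the ossicular lever, multiplies the pressure.
pressure_gain = (EARDRUM_AREA_MM2 / FOOTPLATE_AREA_MM2) * LEVER_RATIO
gain_db = 20.0 * math.log10(pressure_gain)
print(f"Pressure gain is roughly {pressure_gain:.0f}x ({gain_db:.0f} dB)")  # ~22x, ~27 dB
```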
Figure: Diagrammatic longitudinal section of the cochlea. The cochlear duct, or scala media, is labeled as ductus cochlearis at right.
The inner ear consists of the cochlea and several non-auditory structures. The cochlea has three fluid-filled sections, and supports a fluid wave driven by pressure across the basilar membrane separating two of the sections. Strikingly, one section, called the cochlear duct or scala media, contains endolymph, a fluid similar in composition to intracellular fluid. The organ of Corti is located in this duct on the basilar membrane, and transforms mechanical waves into electric signals in neurons. The other two sections are known as the scala tympani and the scala vestibuli; these are located within the bony labyrinth and are filled with perilymph, a fluid similar in composition to cerebrospinal fluid. The chemical difference between endolymph and perilymph is important for the function of the inner ear because it creates electrochemical gradients, particularly of potassium and calcium ions, across the membranes of the hair cells.
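One consequence of these fluid differences can be illustrated with the Nernst equation: because endolymph is potassium-rich, potassium is nearly at chemical equilibrium across the hair cell's apical membrane, yet the large endocochlear potential still drives it strongly into the cell. All concentrations and potentials in the sketch below are typical assumed values rather than figures from this article.

```python
import math

# Nernst potential for K+ across the apical (endolymph-facing) hair-cell membrane,
# and the resulting electrical driving force. All concentrations and potentials
# below are typical textbook values, not taken from this article.
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

K_ENDOLYMPH_MM = 150.0   # K+ in endolymph (unusually high for an extracellular fluid)
K_INTRACELL_MM = 140.0   # K+ inside the hair cell

# Nernst: E_K = (RT/zF) * ln([K+]_out / [K+]_in), z = 1 for K+
e_k_mv = (R * T / F) * math.log(K_ENDOLYMPH_MM / K_INTRACELL_MM) * 1000.0

ENDOCOCHLEAR_POTENTIAL_MV = 80.0   # endolymph relative to perilymph
RESTING_POTENTIAL_MV = -45.0       # hair cell interior relative to perilymph

# Voltage across the apical membrane (cell interior minus endolymph):
v_apical_mv = RESTING_POTENTIAL_MV - ENDOCOCHLEAR_POTENTIAL_MV
driving_force_mv = v_apical_mv - e_k_mv   # how far K+ is from equilibrium
print(f"E_K is about {e_k_mv:.1f} mV; driving force is about {driving_force_mv:.0f} mV")  # ~ -127 mV
```

The large inward driving force means that potassium flows into the hair cell whenever transduction channels open, which is why the endolymph/perilymph difference matters for hearing.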
The plan view of the human cochlea (typical of all mammals and most vertebrates) shows where specific frequencies occur along its length. Frequency is an approximately exponential function of position along the organ of Corti within the cochlea. In some species, such as bats and dolphins, the relationship is expanded in specific areas to support their active sonar capability.
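This exponential place-frequency map is commonly approximated by the Greenwood function. The sketch below uses frequently quoted constants for the human cochlea; those constants are assumptions beyond what is stated above.

```python
import math

def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at relative position x along the cochlea.

    x runs from 0 at the apex to 1 at the base; the constants are commonly
    quoted values for the human cochlea (assumed here, not from the article).
    """
    return A * (10.0 ** (a * x) - k)

# Frequency rises roughly exponentially from apex to base.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: ~{greenwood_frequency(x):,.0f} Hz")
# prints roughly 20 Hz at the apex up to roughly 20,000 Hz at the base
```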
The organ of Corti forms a ribbon of sensory epithelium which runs lengthwise down the cochlea's entire scala media. Its hair cells transform the fluid waves into nerve signals; this is the first step in a chain of neural processing that ultimately gives rise to the full range of auditory reactions and sensations.
Hair cells are columnar cells, each with a bundle of 100-200 specialized cilia at the top, for which they are named. There are two types of hair cells. Inner hair cells are the mechanoreceptors for hearing: they transduce the vibration of sound into electrical activity in nerve fibers, which is transmitted to the brain. Outer hair cells are a motor structure. Sound energy causes changes in the shape of these cells, which serves to amplify sound vibrations in a frequency specific manner. Lightly resting atop the longest cilia of the inner hair cells is the tectorial membrane, which moves back and forth with each cycle of sound, tilting the cilia, which is what elicits the hair cells' electrical responses.
Inner hair cells, like the photoreceptor cells of the eye, show a graded response, instead of the spikes typical of other neurons. These graded potentials are not bound by the “all or none” properties of an action potential.
At this point, one may ask how a wiggle of the hair bundle triggers a difference in membrane potential. The current model is that the cilia are attached to one another by “tip links”, structures which link the tip of one cilium to another. Stretching and compressing, the tip links may open an ion channel and produce the receptor potential in the hair cell. Recently it has been shown that Cadherin-23 (CDH23) and Protocadherin-15 (PCDH15) are the adhesion molecules associated with these tip links.[1] It is thought that a calcium-driven motor causes these links to shorten and regenerate tension. This regeneration of tension allows the hair cell to remain responsive during prolonged auditory stimulation.[2]
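This tip-link picture is often summarized as a “gating-spring” model, in which the probability that a transduction channel is open is a Boltzmann function of hair-bundle displacement. The sketch below is a minimal version of that model; the gating force and operating point are illustrative assumptions, not measured values from the cited work.

```python
import math

# Two-state gating-spring model: channel open probability as a Boltzmann
# function of hair-bundle displacement. Parameter values are illustrative
# assumptions, not measurements from the article.
KT = 4.28e-21            # thermal energy at body temperature, J
GATING_FORCE_N = 5e-13   # force coupled to channel opening, ~0.5 pN (assumed)
SET_POINT_M = 0.0        # displacement at which half the channels are open (assumed)

def open_probability(displacement_m: float) -> float:
    """Fraction of transduction channels open at a given bundle displacement."""
    return 1.0 / (1.0 + math.exp(-GATING_FORCE_N * (displacement_m - SET_POINT_M) / KT))

# Displacements of a few tens of nanometres sweep the channels from mostly
# closed to mostly open, which is what produces the receptor potential.
for nm in (-40, -10, 0, 10, 40):
    print(f"{nm:+4d} nm -> P(open) = {open_probability(nm * 1e-9):.2f}")
```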
Afferent neurons innervate cochlear inner hair cells, at synapses where the neurotransmitter glutamate communicates signals from the hair cells to the dendrites of the primary auditory neurons.
There are far fewer inner hair cells in the cochlea than afferent nerve fibers – many auditory nerve fibers innervate each hair cell. The neural dendrites belong to neurons of the auditory nerve, which in turn joins the vestibular nerve to form the vestibulocochlear nerve, or cranial nerve number VIII.[3] The region of the basilar membrane supplying the inputs to a particular afferent nerve fibre can be considered to be its receptive field.
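To make the ratio concrete, commonly cited approximate counts for the human cochlea (assumed here, not given in the text) are about 3,500 inner hair cells and about 30,000 afferent nerve fibers.

```python
# Rough innervation ratio of the human cochlea. The counts are commonly
# cited approximations (assumed here, not stated in the article).
INNER_HAIR_CELLS = 3_500
AFFERENT_NERVE_FIBERS = 30_000

fibers_per_ihc = AFFERENT_NERVE_FIBERS / INNER_HAIR_CELLS
print(f"~{fibers_per_ihc:.0f} afferent fibers per inner hair cell")  # roughly 9
```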
Efferent projections from the brain to the cochlea also play a role in the perception of sound, although this is not well understood. Efferent synapses occur on outer hair cells and on the afferent (toward the brain) dendrites under inner hair cells.
The cochlear nucleus is the first site of the neuronal processing of the newly converted “digital” data from the inner ear (see also binaural fusion). In mammals, this region is anatomically and physiologically split into two regions, the dorsal cochlear nucleus (DCN), and ventral cochlear nucleus (VCN). The VCN is further divided by the nerve root into the posteroventral cochlear nucleus (PVCN) and the anteroventral cochlear nucleus (AVCN).[4]
The trapezoid body is a bundle of decussating fibers in the ventral pons that carry information used for binaural computations in the brainstem. Some of these axons come from the cochlear nucleus and cross over to the other side before traveling on to the superior olivary nucleus. This is believed to help with localization of sound.[5]
The superior olivary complex is located in the pons and receives projections predominantly from the ventral cochlear nucleus, although the dorsal cochlear nucleus also projects there, via the ventral acoustic stria. Within the superior olivary complex lie the lateral superior olive (LSO) and the medial superior olive (MSO). The former is important in detecting interaural level differences while the latter is important in distinguishing interaural time differences.[6]
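The time differences available to the MSO are tiny, on the order of hundreds of microseconds. A common way to estimate them is the Woodworth spherical-head approximation; the head radius below is an assumed typical value and the model itself is a simplification, not something stated in this article.

```python
import math

# Woodworth approximation of interaural time difference (ITD) for a distant
# source at azimuth theta, assuming a spherical head of radius ~8.75 cm.
# Both the model and the radius are illustrative assumptions.
HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND_M_S = 343.0

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a far-field source; 0 deg = straight ahead, 90 deg = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

for deg in (0, 30, 60, 90):
    print(f"{deg:2d} deg -> ITD of about {itd_seconds(deg) * 1e6:.0f} microseconds")
# the maximum ITD, for a source at 90 deg, is roughly 650-700 microseconds
```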
The lateral lemniscus is a tract of axons in the brainstem that carries information about sound from the cochlear nucleus to various brainstem nuclei and ultimately the contralateral inferior colliculus of the midbrain.
The inferior colliculus (IC) is located just below the visual processing centers known as the superior colliculi. The central nucleus of the IC is a nearly obligatory relay in the ascending auditory system, and most likely acts to integrate information (specifically regarding sound source localization from the superior olivary complex[7] and dorsal cochlear nucleus) before sending it to the thalamus and cortex.[8]
The medial geniculate nucleus is part of the thalamic relay system.
The primary auditory cortex is the first region of cerebral cortex to receive auditory input.
Perception of sound is associated with the left posterior superior temporal gyrus (STG). The superior temporal gyrus contains several important structures of the brain, including Brodmann areas 41 and 42, which mark the location of the primary auditory cortex, the cortical region responsible for the sensation of basic characteristics of sound such as pitch and rhythm. Work in nonhuman primates suggests that the primary auditory cortex can itself be divided further into functionally differentiable subregions.[9][10][11][12][13][14][15] The neurons of the primary auditory cortex can be considered to have receptive fields covering a range of auditory frequencies and to have selective responses to harmonic pitches.[16] Neurons that integrate information from the two ears have receptive fields covering a particular region of auditory space.
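A frequency receptive field of the kind described here is often summarized, in highly simplified form, as a tuning curve centered on a neuron's best frequency. The sketch below is purely illustrative; the best frequency and bandwidth are arbitrary assumptions, not values from the cited studies.

```python
import math

# Illustrative frequency receptive field for a single auditory cortical neuron,
# modeled as a Gaussian in log-frequency (a common simplification; the best
# frequency and bandwidth below are arbitrary assumptions).
BEST_FREQUENCY_HZ = 1_000.0
BANDWIDTH_OCTAVES = 0.5

def response(frequency_hz: float) -> float:
    """Normalized firing-rate response (0..1) to a pure tone at frequency_hz."""
    octaves_from_best = math.log2(frequency_hz / BEST_FREQUENCY_HZ)
    return math.exp(-0.5 * (octaves_from_best / BANDWIDTH_OCTAVES) ** 2)

for f in (250, 500, 1_000, 2_000, 4_000):
    print(f"{f:5d} Hz -> response {response(f):.2f}")
```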
The primary auditory cortex is surrounded by secondary auditory cortex, and interconnects with it. These secondary areas interconnect with further processing areas in the superior temporal gyrus, in the dorsal bank of the superior temporal sulcus, and in the frontal lobe. In humans, connections of these regions with the middle temporal gyrus are probably important for speech perception. The frontotemporal system underlying auditory perception allows us to distinguish sounds as speech, music, or noise.
The outer ear funnels sound vibrations to the eardrum, increasing the sound pressure in the middle frequency range. The middle-ear ossicles further amplify the vibration pressure roughly 20 times. The base of the stapes couples the vibrations into the cochlea via the oval window, setting the perilymph of the inner ear in motion and causing the round window to bulge out as the oval window bulges in. The vestibular and tympanic ducts are filled with perilymph, while the smaller cochlear duct between them is filled with endolymph, a fluid with very different ion concentrations and electrical potential.[17][18][19][20]

Vibrations of the perilymph in the vestibular duct bend the hair bundles of the outer hair cells of the organ of Corti (arranged in four rows), activating the motor protein prestin in the cells' lateral membranes. Prestin drives rapid elongation and contraction of the cell body (the somatic motor), and the shifting hair bundles in turn affect the basilar membrane's movement (the hair-bundle motor). These outer-hair-cell motors amplify the perilymph vibrations that initially excited them more than 40-fold. Because both motors need time to recover, they are largely unaffected by the vibrations they have just amplified.[21]

The outer hair cells (OHCs) are only sparsely innervated by the spiral ganglion, via slow (unmyelinated) reciprocal fiber bundles (30 or more hair cells per nerve fiber); in contrast, the inner hair cells (IHCs) have only afferent innervation but are heavily connected (30 or more nerve fibers per hair cell). There are roughly four times as many OHCs as IHCs. The basilar membrane is the partition on which most of the IHCs and OHCs sit. Its width and stiffness determine the frequencies best sensed at each position: at the base of the cochlea the basilar membrane is narrowest and stiffest (high frequencies), while at the apex it is widest and least stiff (low frequencies). The tectorial membrane, which lies over the hair cells, helps facilitate cochlear amplification by stimulating the OHCs directly and the IHCs via endolymph vibrations; its width and stiffness parallel the basilar membrane's and similarly aid frequency differentiation.[22][23][24][25][26][27][28][29][30]
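For readers more used to decibels, the amplification factors quoted above translate as follows (treating them as pressure or amplitude ratios, which is an assumption about how the figures are meant):

```python
import math

def pressure_ratio_to_db(ratio: float) -> float:
    """Convert a sound-pressure (amplitude) amplification factor to decibels."""
    return 20.0 * math.log10(ratio)

# Factors quoted in the text above; the dB values are just a unit conversion.
print(f"Middle ear (~20x):        ~{pressure_ratio_to_db(20):.0f} dB")  # ~26 dB
print(f"Cochlear amplifier (40x): ~{pressure_ratio_to_db(40):.0f} dB")  # ~32 dB
```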
The superior olivary complex (SOC), in the pons, is the first site where the pulses from the left and right cochleae converge. The SOC has 14 described nuclei; their abbreviations are used here (see Superior olivary complex for their full names). The MSO determines the angle from which a sound came by measuring the difference in arrival time of the signals from the two ears. The LSO normalizes sound levels between the ears and uses the sound intensities to help determine the sound angle. The LSO innervates the IHCs. The VNTB innervates the OHCs. The MNTB inhibits the LSO via glycine. The LNTB is insensitive to glycine and is used for fast signalling. The DPO is high-frequency and tonotopically organized. The DLPO is low-frequency and tonotopically organized. The VLPO has the same function as the DPO but acts in a different area. The PVO, CPO, RPO, VMPO, ALPO and SPON (inhibited by glycine) are various signalling and inhibiting nuclei.[31][32][33][34]
The trapezoid body is where most of the cochlear nucleus (CN) fibers decussate (cross from left to right and vice versa); this crossing aids in sound localization.[35] The CN divides into ventral (VCN) and dorsal (DCN) regions. The VCN contains three principal cell types. Bushy cells transmit timing information; their shape averages out timing jitter. Stellate (chopper) cells encode sound spectra (peaks and valleys) through spatial patterns of neural firing rates based on auditory input strength rather than frequency. Octopus cells fire with close to the best temporal precision and decode the auditory timing code. The DCN has two nuclei and also receives information from the VCN. Fusiform cells integrate information to determine spectral cues to location (for example, whether a sound originated in front of or behind the listener). The cochlear nerve fibers (30,000 or more) each have a most sensitive frequency and respond over a wide range of levels.[36][37]
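The statement that each fiber responds “over a wide range of levels” is usually depicted as a saturating rate-level function. The sketch below is a generic illustration; the spontaneous rate, threshold, maximum rate, and dynamic range are assumed values, not data from the cited sources.

```python
import math

# Illustrative rate-level function for a single auditory nerve fiber at its
# most sensitive (characteristic) frequency. All parameters are assumptions.
SPONTANEOUS_RATE_SPS = 10.0   # spikes/s with no sound
MAX_RATE_SPS = 250.0          # saturated firing rate
THRESHOLD_DB_SPL = 20.0       # level at which the rate starts to climb
DYNAMIC_RANGE_DB = 30.0       # range of levels over which the rate grows

def firing_rate(level_db_spl: float) -> float:
    """Spikes per second as a sigmoid function of sound level."""
    x = (level_db_spl - THRESHOLD_DB_SPL - DYNAMIC_RANGE_DB / 2) / (DYNAMIC_RANGE_DB / 6)
    return SPONTANEOUS_RATE_SPS + (MAX_RATE_SPS - SPONTANEOUS_RATE_SPS) / (1 + math.exp(-x))

for level in (0, 20, 35, 50, 80):
    print(f"{level:3d} dB SPL -> ~{firing_rate(level):.0f} spikes/s")
```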
In simplified terms, nerve-fiber signals are carried by bushy cells to the binaural areas in the olivary complex, while signal peaks and valleys are noted by stellate cells and signal timing is extracted by octopus cells. The lateral lemniscus has three nuclei: the dorsal nuclei respond best to bilateral input and have complex tuned responses; the intermediate nuclei have broad tuning responses; and the ventral nuclei have broad and moderately complex tuning curves. The ventral nuclei of the lateral lemniscus help the inferior colliculus (IC) decode amplitude-modulated sounds by giving both phasic and tonic responses (short and long notes, respectively).

The IC also receives inputs from other areas, including visual areas (the pretectal area, which moves the eyes toward a sound, and the superior colliculus, which drives orientation and behavior toward objects as well as saccadic eye movements), the pons (the superior cerebellar peduncle, the thalamus-to-cerebellum connection that links hearing a sound with learning a behavioral response), the spinal cord (the periaqueductal grey, for instinctive movement toward a heard sound), and the thalamus. These connections implicate the IC in the startle response and in ocular reflexes. Beyond multisensory integration, the IC responds to specific amplitude-modulation frequencies, allowing for the detection of pitch. The IC also determines time differences in binaural hearing.[38]

The medial geniculate nucleus divides into a ventral division (relay and relay-inhibitory cells: frequency, intensity, and binaural information relayed topographically), a dorsal division (broadly and complexly tuned nuclei: connections to somatosensory information), and a medial division (broadly, complexly, and narrowly tuned nuclei: relaying intensity and sound duration).

The auditory cortex (AC) brings sound into awareness and perception. The AC identifies sounds (sound-name recognition) and also identifies a sound's location of origin. The AC is organized as a topographic frequency map, with neuron groups responding to different harmonics, timing, and pitch. The right AC is more sensitive to tonality, while the left AC is more sensitive to minute sequential differences in sound.[39][40] The rostromedial and ventrolateral prefrontal cortices are involved in processing tonal space and in storing short-term memories, respectively.[41] Heschl's gyrus (the transverse temporal gyrus) adjoins Wernicke's area and is heavily involved in emotion-sound, emotion-facial-expression, and sound-memory processes. The entorhinal cortex is the part of the hippocampal system that helps form and store visual and auditory memories.[42][43] The supramarginal gyrus (SMG) aids in language comprehension and is responsible for compassionate responses. The SMG links sounds to words together with the angular gyrus and aids in word choice, and it integrates tactile, visual, and auditory information.[44][45]
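The amplitude-modulated sounds mentioned here are easiest to picture from the waveform itself. The sketch below generates a sinusoidally amplitude-modulated tone; the carrier frequency, modulation frequency, and modulation depth are arbitrary illustrative choices, not stimuli described in the cited work.

```python
import math

# Sinusoidally amplitude-modulated (SAM) tone, the kind of stimulus the inferior
# colliculus is described as being tuned to. Carrier/modulation frequencies and
# modulation depth are arbitrary illustrative choices.
SAMPLE_RATE_HZ = 16_000
CARRIER_HZ = 1_000.0       # the tone that is heard
MODULATION_HZ = 40.0       # the slow fluctuation in loudness
MODULATION_DEPTH = 0.8     # 0 = no modulation, 1 = full modulation

def sam_tone(duration_s: float) -> list[float]:
    """Samples of (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), normalized to [-1, 1]."""
    samples = []
    for n in range(int(duration_s * SAMPLE_RATE_HZ)):
        t = n / SAMPLE_RATE_HZ
        envelope = 1.0 + MODULATION_DEPTH * math.sin(2 * math.pi * MODULATION_HZ * t)
        samples.append(envelope * math.sin(2 * math.pi * CARRIER_HZ * t) / (1 + MODULATION_DEPTH))
    return samples

signal = sam_tone(0.1)
print(f"{len(signal)} samples, peak amplitude {max(abs(s) for s in signal):.2f}")
```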