Source: the free encyclopedia Wikipedia, retrieved 2013/05/07 18:41:54 (JST).
Motion perception is the process of inferring the speed and direction of elements in a scene based on visual, vestibular and proprioceptive inputs. Although this process appears straightforward to most observers, it has proven to be a difficult problem from a computational perspective, and extraordinarily difficult to explain in terms of neural processing.
Motion perception is studied by many disciplines, including psychology (i.e. visual perception), neurology, neurophysiology, engineering, and computer science.
The inability to perceive motion is called akinetopsia and it may be caused by a lesion to cortical area V5 in the extrastriate cortex. Neuropsychological studies of a patient who could not see motion, seeing the world in a series of static "frames" instead, suggested that visual area V5 in humans is homologous to motion processing area MT in primates.[1][2]
Two or more stimuli that are switched on and off in alternation can produce two different motion percepts. The first, demonstrated in the figure to the right, is "beta movement", often used in billboard displays, in which an object is perceived as moving when, in fact, a series of stationary images is being presented. This is also termed "apparent motion" and is the basis of movies and television. However, at faster alternation rates, and if the distance between the stimuli is just right, an illusory "object" the same colour as the background is seen moving between the two stimuli and alternately occluding them. This is called the phi phenomenon, and it is an example of "pure" motion detection, uncontaminated by form cues, unlike beta movement.[3]
This pure motion perception is referred to as "first-order" motion perception and is mediated by relatively simple "motion sensors" in the visual system, which have evolved to detect a change in luminance at one point on the retina and correlate it with a change in luminance at a neighbouring point on the retina after a short delay. Sensors of this type have been referred to as Hassenstein-Reichardt detectors, after the scientists Bernhard Hassenstein and Werner Reichardt who first modelled them,[4] as motion-energy sensors,[5] or as Elaborated Reichardt Detectors.[6] These sensors detect motion by spatio-temporal correlation and are plausible models for how the visual system may detect motion, although there is still considerable debate regarding the exact nature of this process.
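The delay-and-correlate scheme described above can be sketched in a few lines, assuming a 1D stimulus sampled at two neighbouring retinal points. The function name, delay parameter, and test signals below are illustrative, not taken from any published implementation.

```python
def reichardt_detector(left, right, delay=1):
    """Minimal sketch of a Hassenstein-Reichardt correlator.

    `left` and `right` are luminance samples over time at two neighbouring
    retinal points. Each half-detector multiplies one point's delayed signal
    by its neighbour's current signal; the opponent subtraction stage makes
    the output positive for left-to-right motion and negative for the reverse.
    """
    out = []
    for t in range(delay, len(left)):
        # half-detector tuned to rightward motion: delayed left x current right
        rightward = left[t - delay] * right[t]
        # mirror half-detector tuned to leftward motion
        leftward = right[t - delay] * left[t]
        out.append(rightward - leftward)
    return out

# A bright spot passes the left point at t=2 and the right point at t=3
left = [0, 0, 1, 0, 0]
right = [0, 0, 0, 1, 0]
response = sum(reichardt_detector(left, right))
# a positive total indicates rightward motion; reversing the sequence flips the sign
```

Note that the detector responds to spatio-temporal correlation alone; it never represents the "object" itself, consistent with the form-free character of first-order motion described above.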
Second-order motion is motion in which the moving contour is defined by contrast, texture, flicker or some other quality that does not result in an increase in luminance or motion energy in the Fourier spectrum of the stimulus.[7][8] There is much evidence to suggest that early processing of first- and second-order motion is carried out by separate pathways.[9] Second-order mechanisms have poorer temporal resolution and are low-pass in terms of the range of spatial frequencies to which they respond. Second-order motion produces a weaker motion aftereffect unless tested with dynamically flickering stimuli.[10] First and second-order signals appear to be fully combined at the level of Area V5/MT of the visual system.
Each neuron in the visual system is sensitive to visual input in only a small part of the visual field, as if each neuron views the world through a small window or aperture. The motion direction of a contour seen through such an aperture is ambiguous, because the motion component parallel to the contour cannot be inferred from the visual input; this is known as the aperture problem. As a result, contours of many different orientations, moving at many different speeds, can produce identical responses in a motion-sensitive neuron.
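The ambiguity can be made concrete with a short numeric sketch, using the standard decomposition of a 2D velocity into components normal and parallel to a straight contour. All names and values here are illustrative.

```python
import math

def normal_component(vx, vy, contour_angle_deg):
    """Project a 2D velocity onto the unit normal of a contour.

    A sensor viewing the contour through an aperture measures only this
    projection; the component parallel to the contour is invisible to it.
    """
    theta = math.radians(contour_angle_deg)
    nx, ny = -math.sin(theta), math.cos(theta)  # unit normal to the contour
    return vx * nx + vy * ny

# True velocity of a 45-degree contour...
v_true = (1.0, 0.0)
# ...plus an arbitrary amount of extra motion parallel to the contour itself
par = (math.cos(math.radians(45.0)), math.sin(math.radians(45.0)))
v_alt = (v_true[0] + 2.0 * par[0], v_true[1] + 2.0 * par[1])

m1 = normal_component(v_true[0], v_true[1], 45.0)
m2 = normal_component(v_alt[0], v_alt[1], 45.0)
# m1 == m2: the sensor cannot distinguish these two very different velocities
```

Because any parallel component vanishes in the projection, a whole line of candidate velocities is consistent with each local measurement.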
Individual neurons early in the visual system (V1) respond to motion that occurs locally within their receptive field. Because each local motion-detecting neuron will suffer from the aperture problem, the estimates from many neurons need to be integrated into a global motion estimate. This appears to occur in Area MT/V5 in the human visual cortex.
Having extracted motion signals (first- or second-order) from the retinal image, the visual system must integrate those individual local motion signals at various parts of the visual field into a 2-dimensional or global representation of moving objects and surfaces. Further processing is required to disambiguate true "global motion" direction.[citation needed]
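One textbook approach to this disambiguation is the "intersection of constraints": each aperture-limited measurement constrains the global velocity to a line in velocity space, and two such lines intersect at the true velocity. The sketch below assumes noise-free measurements from just two non-parallel contours; the function and variable names are my own, not a model of Area MT.

```python
import math

def intersect_constraints(angle1_deg, speed1, angle2_deg, speed2):
    """Solve v . n1 = speed1 and v . n2 = speed2 for the global velocity v.

    Each (angle, normal speed) pair is what a local, aperture-limited sensor
    can report about a contour. Assumes the two contours are not parallel,
    so the 2x2 system has a unique solution.
    """
    t1, t2 = math.radians(angle1_deg), math.radians(angle2_deg)
    n1 = (-math.sin(t1), math.cos(t1))  # unit normal of contour 1
    n2 = (-math.sin(t2), math.cos(t2))  # unit normal of contour 2
    det = n1[0] * n2[1] - n1[1] * n2[0]
    # Cramer's rule for the 2x2 linear system
    vx = (speed1 * n2[1] - speed2 * n1[1]) / det
    vy = (n1[0] * speed2 - n2[0] * speed1) / det
    return vx, vy

# Measurements generated from a known global velocity v = (2, 1):
# a horizontal contour (0 deg) reports normal speed 1,
# a vertical contour (90 deg) reports normal speed -2.
vx, vy = intersect_constraints(0.0, 1.0, 90.0, -2.0)
# recovers the global velocity (2.0, 1.0)
```

With noisy measurements from many contours, the same idea becomes a least-squares problem over all constraint lines, which is one way to think about the pooling attributed to MT/V5.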
As in other aspects of vision, the observer's visual input is generally insufficient to determine the true nature of stimulus sources, in this case their velocity in the real world. In monocular vision for example, the visual input will be a 2D projection of a 3D scene. The motion cues present in the 2D projection will by default be insufficient to reconstruct the motion present in the 3D scene. Put differently, many 3D scenes will be compatible with a single 2D projection. The problem of motion estimation generalizes to binocular vision when we consider occlusion or motion perception at relatively large distances, where binocular disparity is a poor cue to depth. This fundamental difficulty is referred to as the inverse problem.
Detection and discrimination of motion can be improved by training, with long-lasting results. Participants trained to detect the movement of dots on a screen in a single direction become particularly good at detecting small movements in directions close to the trained one, and this improvement is still present 10 weeks later. However, perceptual learning is highly specific: the participants show no improvement when tested on other motion directions or on other sorts of stimuli.[11]