Source: the free encyclopedia Wikipedia, retrieved 2012/09/02 17:40:24 (JST)
Video is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion.
Video technology was first developed for cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Charles Ginsburg led an Ampex research team developing the first practical video tape recorder (VTR). In 1951 the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic video tape.
Video recorders sold for $50,000 in 1956, and videotape cost $300 per one-hour reel.[1] However, prices steadily dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) tapes to the public. After the invention of the DVD in 1997 and Blu-ray Disc in 2006, sales of videotape and tape equipment plummeted.
Later advances in computer technology allowed computers to capture, store, edit and transmit video clips.
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) standards specify 25 frame/s, while NTSC (USA, Canada, Japan, etc.) specifies 29.97 frame/s. Film is shot at the slower frame rate of 24 frame/s, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate needed to achieve the illusion of a moving image is about fifteen frames per second.
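The film-to-video transfer mentioned above is typically handled with 2:3 (also called 3:2) pulldown, which spreads every four film frames across ten interlaced fields. A minimal illustrative sketch, assuming frames can be modeled as list items (the function name is ours, not from any standard tool):

```python
def two_three_pulldown(film_frames):
    """Map 24 frame/s film frames onto ~60 fields/s NTSC video.

    Each group of 4 film frames becomes 10 fields (2+3+2+3),
    so 24 film frames/s -> 60 fields/s (30 interlaced frames/s;
    in practice the video runs at 59.94 fields/s).
    """
    fields = []
    pattern = [2, 3]  # alternate 2 fields, then 3 fields, per film frame
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * pattern[i % 2])
    return fields

# 4 film frames -> 10 video fields
assert len(two_three_pulldown(list("ABCD"))) == 10
```

The slight speed mismatch (29.97 vs. 30 frame/s) is handled separately, usually by slowing the audio/film clock by 0.1%.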
Video can be interlaced or progressive. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second, which would have required sacrificing image detail in order to remain within the limitations of a narrow bandwidth. The horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines.
Analog display devices reproduce each frame in the same way, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more life-like reproduction (although with halved detail) of rapidly moving parts of the image when viewed on an interlaced CRT display, but the display of such a signal on a progressive scan device is problematic.
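The odd/even field decomposition described above can be sketched as follows, treating a frame as a simple list of scan lines:

```python
def split_fields(frame):
    """Split a frame (a list of scan lines) into odd and even fields.

    Scan lines are conventionally numbered from 1, so the 'odd'
    (upper) field holds lines 1, 3, 5, ... which sit at even
    Python indices.
    """
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

lines = [f"line{n}" for n in range(1, 7)]
odd, even = split_fields(lines)
# odd  -> ['line1', 'line3', 'line5']
# even -> ['line2', 'line4', 'line6']
```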
NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often specified as 576i50, where 576 indicates the vertical resolution (the number of active scan lines), i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
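A spec string of this shape is easy to decompose mechanically; a small illustrative parser:

```python
import re

def parse_video_spec(spec):
    """Parse a spec like '576i50' or '720p60' into its parts.

    Returns (lines, scan_type, rate): for interlaced ('i') the
    rate is fields per second, for progressive ('p') it is
    frames per second.
    """
    m = re.fullmatch(r"(\d+)([ip])(\d+(?:\.\d+)?)", spec)
    if not m:
        raise ValueError(f"unrecognized spec: {spec!r}")
    return int(m.group(1)), m.group(2), float(m.group(3))

assert parse_video_spec("576i50") == (576, "i", 50.0)
assert parse_video_spec("720p60") == (720, "p", 60.0)
```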
In progressive scan systems, each refresh period updates all of the scan lines of each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. When displaying a natively interlaced signal, however, overall spatial resolution will be degraded by simple line doubling and artifacts such as flickering or "comb" effects in moving parts of the image will be seen unless special signal processing is applied to eliminate them. A procedure known as deinterlacing can be used to optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD Television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
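The simplest (and lowest-quality) deinterlacing approach is line doubling, often called "bob": rebuild a full frame from a single field by repeating each of its lines. A toy sketch, assuming lines are list items; real deinterlacers interpolate or blend fields instead:

```python
def bob_deinterlace(field, total_lines):
    """Naive 'bob' deinterlacing: reconstruct a full frame from
    one field by duplicating each field line.

    This illustrates why deinterlaced video loses vertical
    detail relative to true progressive source material.
    """
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)  # duplicate to stand in for the missing line
    return frame[:total_lines]

field = ["L1", "L3", "L5"]  # odd field of a 6-line frame
assert bob_deinterlace(field, 6) == ["L1", "L1", "L3", "L3", "L5", "L5"]
```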
Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.
Ratios where the height is taller than the width are uncommon in general everyday use, but do have application in computer systems where the screen may be better suited for a vertical layout. The most common tall aspect ratio of 3:4 is referred to as portrait mode and is created by physically rotating the display device 90 degrees from the normal position. Other tall aspect ratios such as 9:16 are technically possible but rarely used. (For a more detailed discussion of this topic please refer to the page orientation article.)
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats. Therefore, an NTSC DV image which is 720 pixels by 480 pixels is displayed with the aspect ratio of 4:3 (which is the traditional television standard) if the pixels are thin and displayed with the aspect ratio of 16:9 (which is the anamorphic widescreen format) if the pixels are fat.
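The relationship is simply: display aspect ratio = storage aspect ratio × pixel aspect ratio. A short sketch using nominal pixel aspect ratios (the exact PAR values used in practice vary slightly between conventions):

```python
from fractions import Fraction

def display_aspect_ratio(width_px, height_px, pixel_aspect):
    """Display aspect ratio = storage ratio x pixel aspect ratio."""
    return Fraction(width_px, height_px) * pixel_aspect

# NTSC DV is stored as 720x480; nominal PARs of 8:9 ("thin" pixels)
# and 32:27 ("fat" pixels) yield 4:3 and 16:9 displays respectively.
assert display_aspect_ratio(720, 480, Fraction(8, 9)) == Fraction(4, 3)
assert display_aspect_ratio(720, 480, Fraction(32, 27)) == Fraction(16, 9)
```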
The color model describes how color is represented in video. YIQ was used in NTSC television; it corresponds closely to the YUV scheme used in PAL television and the YDbDr scheme used by SECAM television.
The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A common way to reduce the number of bits per pixel in digital video is by chroma subsampling (e.g. 4:4:4, 4:2:2, 4:2:0/4:1:1).
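The savings from chroma subsampling can be computed directly from the J:a:b notation. A sketch of the arithmetic:

```python
def bits_per_pixel(bits_per_sample, subsampling):
    """Average bits per pixel for Y'CbCr video with chroma subsampling.

    subsampling is the J:a:b triple, e.g. (4, 2, 0): a J x 2 pixel
    block carries 2*J luma (Y) samples plus a + b samples each of
    Cb and Cr.
    """
    j, a, b = subsampling
    samples_per_block = 2 * j + 2 * (a + b)  # Y + Cb + Cr samples
    pixels_per_block = 2 * j
    return bits_per_sample * samples_per_block / pixels_per_block

assert bits_per_pixel(8, (4, 4, 4)) == 24.0  # no subsampling
assert bits_per_pixel(8, (4, 2, 2)) == 16.0
assert bits_per_pixel(8, (4, 2, 0)) == 12.0
assert bits_per_pixel(8, (4, 1, 1)) == 12.0
```

So 4:2:0 and 4:1:1 both halve the storage of 4:4:4, just with the chroma resolution lost along different axes.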
Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR), or subjectively, using expert observation.
Many subjective video quality methods are described in the ITU-R recommendation BT.500. One standardized method is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".
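For the objective side, PSNR is defined from the mean squared error between the reference and impaired frames. A self-contained sketch, treating images as flat lists of pixel values:

```python
import math

def psnr(reference, impaired, max_value=255):
    """Peak signal-to-noise ratio, in dB, between two equal-size
    images given as flat sequences of pixel values."""
    mse = sum((r - i) ** 2 for r, i in zip(reference, impaired)) / len(reference)
    if mse == 0:
        return math.inf  # identical images: no noise at all
    return 10 * math.log10(max_value ** 2 / mse)

ref = [0, 128, 255, 64]
bad = [2, 126, 250, 70]
quality_db = psnr(ref, bad)  # roughly mid-30s dB for this small error
```

Higher values indicate less distortion; identical images give infinite PSNR.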
A wide variety of methods are used to compress video streams. Video data contains spatial and temporal redundancy, making uncompressed video streams extremely inefficient. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, Mobile phones (3GP) and Internet.
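The core idea of interframe compression can be shown with a toy delta coder: store the first frame intact and later frames only as differences. This is only an illustration of temporal redundancy; real codecs such as MPEG-2 and MPEG-4 add motion compensation, transform coding, and entropy coding on top:

```python
def encode_interframe(frames):
    """Toy interframe coder: keep the first frame (an 'I-frame')
    and store each later frame as per-pixel deltas from its
    predecessor."""
    key = frames[0]
    deltas = [[cur - prev for cur, prev in zip(f, p)]
              for f, p in zip(frames[1:], frames[:-1])]
    return key, deltas

def decode_interframe(key, deltas):
    """Rebuild the frame sequence by accumulating the deltas."""
    frames = [list(key)]
    for delta in deltas:
        frames.append([p + d for p, d in zip(frames[-1], delta)])
    return frames

frames = [[10, 10, 10], [10, 11, 10], [10, 12, 11]]
key, deltas = encode_interframe(frames)
assert decode_interframe(key, deltas) == frames
```

In static scenes the deltas are mostly zero, which is what makes them highly compressible.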
Stereoscopic video can be created using several different methods.
Blu-ray Discs greatly improve the sharpness and detail of the two-color 3D effect in color coded stereo programs. See articles Stereoscopy and 3-D film.
There are different layers of video transmission and storage, each with its own set of formats to choose from.
For transmission, there is a physical connector and signal protocol ("video connection standard" below). A given physical link can carry certain "display standards" which specify a particular refresh rate, display resolution, and color space.
Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital "video encoding", of which a number are available.
Video can be transmitted or transported in a variety of ways. It can be broadcast wirelessly as an analog or digital signal. In a closed-circuit system, analog interlaced video can be sent over coaxial cable as a 1 volt peak-to-peak signal with a maximum horizontal resolution of up to about 480 lines. Broadcast and studio cameras use a single or dual coaxial cable system carrying a progressive scan format known as the serial digital interface (SDI), with HD-SDI for high-definition video. Transmission distances are somewhat limited, and depending on the manufacturer the format may be proprietary. SDI has negligible lag and is uncompressed. There are initiatives to use the SDI standards in closed-circuit surveillance systems to carry higher-definition images over longer distances on coaxial or twisted-pair cable. Because of the higher bandwidth required, the distance the signal can be effectively sent is one half to one third of what the older interlaced analog systems supported.[2]
New formats for digital television broadcasts use the MPEG-2 video codec and include standards such as ATSC, DVB, and ISDB.
Analog television broadcast standards include NTSC, PAL, and SECAM.
An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing synchronization information or a time delay. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.
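The blanking overhead is easy to quantify from the line counts: analog NTSC uses 525 total lines with about 480 visible, and analog PAL uses 625 total lines with about 576 visible. A quick illustration of the arithmetic:

```python
def active_fraction(total_lines, active_lines):
    """Fraction of each frame's scan lines that carry picture data;
    the remainder is the vertical blanking interval."""
    return active_lines / total_lines

# Analog NTSC: 525 total lines, about 480 visible.
# Analog PAL:  625 total lines, about 576 visible.
print(f"NTSC vertical blanking: {1 - active_fraction(525, 480):.1%}")
print(f"PAL vertical blanking:  {1 - active_fraction(625, 576):.1%}")
```

Roughly 8% of each frame's lines in both systems are blanking rather than picture.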
See Computer display standard for a list of standards used for computer monitors and comparison with those used for television.
(See List of video recording formats.)