This article is about the method of recording and reproducing audio.
Surround (surround sound) is one of the methods of recording and reproducing audio. It uses more channels (three or more) than monaural (1.0ch) or stereo (2.0ch) audio.
It is generally referred to simply as surround, or as surround sound.
Because it reproduces sound with a strong sense of presence, it has been used in movie theaters for a comparatively long time (for example, Disney's Fantasia, 1940). In the big-budget 70mm productions that appeared in the 1950s, 5.1ch surround sound was the norm, and a small number of 35mm films were made with 4.0ch surround. After the enormous success of Star Wars (1977), which used the analog Dolby Stereo format, the majority of American films came to adopt surround sound. In the Japanese film industry, where there was little habit of spending money on sound, adoption lagged far behind.
When the DTS format, which uses digital compression technology, was adopted for Jurassic Park in 1993, attention to sound quality in movie theaters increased.
For home use, from around the mid-1980s, videotapes and LaserDiscs of American films carried Dolby Surround (the home version of the theatrical Dolby Stereo mentioned above); by adding an AV amplifier and two rear speakers, surround sound could be enjoyed at home, and TV dramas also came to be produced in surround. In Japan, AV amplifiers spread from the early 1990s, and simple virtual surround (described below) also came into common use.
Surround spread even more widely into ordinary homes with the Dolby Digital and DTS formats delivered by DVD-Video and digital AV amplifiers, which became widespread from the late 1990s.
Today, the number of surround channels is usually written as "5.1ch", "7.1ch", and so on. An ordinary speaker counts as one channel, while a speaker dedicated to reproducing very low frequencies (a subwoofer) is counted as ".1ch" to indicate that it belongs to a different class of channel from the ordinary speakers. In other words, the period merely separates the two counts; it is not the decimal fraction 0.1.
The sound output from this dedicated low-frequency channel (the ".1ch" above) is called the low-frequency effect (LFE: Low-frequency effect).
The basic configuration is 5.1ch, and the signal contained on a DVD title comprises 5.1 channels (although upward-compatible formats such as 6.1ch Dolby Digital Surround EX and 7.1ch Dolby Digital Plus also exist).
The basic 5.1ch system consists of front left and right speakers, a center speaker, left and right surround speakers, and a subwoofer for the LFE channel.
Starting from this base, virtual surround technology can be used to reduce the number of speakers, or speakers can be added to reproduce more immersive sound. Examples of configurations currently in common use are described below.
In 2005, the NHK Science & Technology Research Laboratories announced the 22.2ch surround format, which uses two LFE channels and speakers placed above and below the listener to reproduce sound from every direction. It is planned to be adopted for Super Hi-Vision, the next-generation video standard.[1]
Surround sound is a technique for enriching the sound reproduction quality of an audio source with additional audio channels from speakers that surround the listener (surround channels), providing sound from a 360° radius in the horizontal plane (2D) as opposed to "screen channels" (centre, [front] left, and [front] right) originating only from the listener's forward arc.
Surround sound is characterized by a listener location or sweet spot where the audio effects work best, and presents a fixed or forward perspective of the sound field to the listener at this location. The technique enhances the perception of sound spatialization by exploiting sound localization: a listener's ability to identify the location or origin of a detected sound in direction and distance. Typically this is achieved by using multiple discrete audio channels routed to an array of loudspeakers.[1]
There are various surround sound based formats and techniques, varying in reproduction and recording methods along with the number and positioning of additional channels.
Though cinema and soundtracks represent the major uses of surround techniques, its scope of application is broader than that as surround sound permits creation of an audio-environment for all sorts of purposes. Multichannel audio techniques may be used to reproduce contents as varied as music, speech, natural or synthetic sounds for cinema, television, broadcasting, or computers. In terms of music content for example, a live performance may use multichannel techniques in the context of an open-air concert, of a musical theatre or for broadcasting;[2] for a film specific techniques are adapted to movie theater, or to home (e.g. home cinema systems).[3][4] The narrative space is also a content that can be enhanced through multichannel techniques. This applies mainly to cinema narratives, for example the speech of the characters of a film,[5][6][7] but may also be applied to plays for theatre, to a conference, or to integrate voice-based comments in an archeological site or monument. For example, an exhibition may be enhanced with topical ambient sound of water, birds, train or machine noise. Topical natural sounds may also be used in educational applications.[8] Other fields of application include video game consoles, personal computers and other platforms.[9][10][11][12] In such applications, the content would typically be synthetic noise produced by the computer device in interaction with its user. Significant work has also been done using surround sound for enhanced situation awareness in military and public safety applications.[13]
Commercial surround sound media include videocassettes, DVDs, and HDTV broadcasts encoded as compressed Dolby Digital and DTS, and lossless audio such as DTS HD Master Audio and Dolby TrueHD on Blu-ray Disc and HD DVD, which are identical to the studio master. Other commercial formats include the competing DVD-Audio (DVD-A) and Super Audio CD (SACD) formats, and MP3 Surround. Cinema 5.1 surround formats include Dolby Digital and DTS. Sony Dynamic Digital Sound (SDDS) is an 8 channel cinema configuration which features 5 independent audio channels across the front with two independent surround channels, and a Low-frequency effects channel. Traditional 7.1 surround speaker configuration introduces two additional rear speakers to the conventional 5.1 arrangement, for a total of four surround channels and three front channels, to create a more 360° sound field.
Most surround sound recordings are created by film production companies or video game producers; however some consumer camcorders have such capability either built-in or available separately. Surround sound technologies can also be used in music to enable new methods of artistic expression. After the failure of quadraphonic audio in the 1970s, multichannel music has slowly been reintroduced since 1999 with the help of SACD and DVD-Audio formats. Some AV receivers, stereophonic systems, and computer soundcards contain integral digital signal processors and/or digital audio processors to simulate surround sound from a stereophonic source (see fake stereo).
In 1967, the rock group Pink Floyd performed the first-ever surround sound concert at "Games for May", a lavish affair at London’s Queen Elizabeth Hall where the band debuted its custom-made quadraphonic speaker system.[14] The control device they had made, the Azimuth Co-ordinator, is now displayed at London's Victoria and Albert Museum, as part of their Theatre Collections gallery.[15]
The first documented use of surround sound was in 1940, for the Disney studio's animated film Fantasia. Walt Disney was inspired by Nikolai Rimsky-Korsakov's operatic piece, Flight of the Bumblebee to have a bumblebee featured in his musical Fantasia and also sound as if it was flying in all parts of the theatre. The initial multichannel audio application was called 'Fantasound', comprising three audio channels and speakers. The sound was diffused throughout the cinema, controlled by an engineer using some 54 loudspeakers. The surround sound was achieved using the sum and the difference of the phase of the sound. However, this experimental use of surround sound was excluded from the film in later showings. In 1952, "surround sound" successfully reappeared with the film "This is Cinerama", using discrete seven-channel sound, and the race to develop other surround sound methods took off.[16][17]
In the 1950s, the German composer Karlheinz Stockhausen experimented with and produced ground-breaking electronic compositions such as Gesang der Jünglinge and Kontakte, the latter using fully discrete and rotating quadraphonic sounds generated with industrial electronic equipment in Herbert Eimert's studio at the Westdeutscher Rundfunk (WDR). Edgard Varèse's Poème électronique, created for the Philips Pavilion designed by Iannis Xenakis at the 1958 Brussels World's Fair, also utilised spatial audio, with 425 loudspeakers used to move sound throughout the pavilion.
In 1957, working with artist Jordan Belson, Henry Jacobs produced Vortex: Experiments in Sound and Light - a series of concerts featuring new music, including some of Jacobs' own, and that of Karlheinz Stockhausen, and many others - taking place in the Morrison Planetarium in Golden Gate Park, San Francisco. Sound designers commonly regard this as the origin of the (now standard) concept of "surround sound." The program was popular, and Jacobs and Belson were invited to reproduce it at the 1958 World Expo in Brussels.[18] There are also many other composers that created ground-breaking surround sound works in the same time period.
In 1978, a concept devised by Max Bell for Dolby Laboratories called "split surround" was tested with the movie "Superman". This led to the 70mm stereo surround release of "Apocalypse Now," which became the first formal release in cinemas with three channels in the front and two in the rear. There were typically five speakers behind the screens of 70mm-capable cinemas, but only the Left, Center and Right were used full-frequency, while Center-Left and Center-Right were used only for bass frequencies (as is currently common). The "Apocalypse Now" encoder/decoder was designed by Michael Karagosian, also for Dolby Laboratories. The surround mix was produced by an Oscar-winning crew led by Walter Murch for American Zoetrope. The format was also deployed in 1982 with the stereo surround release of Blade Runner.
The 5.1 version of surround sound originated in 1987 at the famous French cabaret the Moulin Rouge. A French engineer, Dominique Bertrand, used a mixing board specially designed in cooperation with Solid State Logic, based on the 5000 series and including six channels: A left, B right, C centre, D left rear, E right rear, F bass. The same engineer had already achieved a 3.1 system in 1974, for the International Summit of Francophone States in Dakar, Senegal.
Surround sound is created in several ways. The first and simplest method is using a surround sound recording technique—capturing two distinct stereo images, one for the front and one for the back, or using a dedicated setup, e.g. an augmented Decca tree[19]—and/or mixing in surround sound for playback on an audio system using speakers encircling the listener to play audio from different directions. A second approach is processing the audio with psychoacoustic sound localization methods to simulate a two-dimensional (2-D) sound field with headphones. A third approach, based on Huygens' principle, attempts to reconstruct the recorded sound field wave fronts within the listening space, a form of "audio hologram". One form, wave field synthesis (WFS), produces a sound field with an even error field over the entire area. Commercial WFS systems, currently marketed by the companies sonic emotion and Iosono, require many loudspeakers and significant computing power.
The Ambisonics form, also based on Huygens' principle, gives an exact sound reconstruction at the central point, but is less accurate away from the center point. There are many free and commercial software programs available for Ambisonics, which dominates most of the consumer market, especially among musicians using electronic and computer music. Moreover, Ambisonics products are the standard in surround sound hardware sold by Meridian Audio. In its simplest form, Ambisonics consumes few resources, although this is not true for recent developments, such as Near Field Compensated Higher Order Ambisonics.[20] It has been shown that, in the limit, WFS and Ambisonics converge.[21]
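For reference, the first-order (B-format) encoding equations commonly given in the Ambisonics literature, for a monophonic source signal S arriving from azimuth θ and elevation φ, are shown below; this is a standard textbook form and is not specific to any product mentioned above.

```latex
W = \tfrac{1}{\sqrt{2}}\,S,\qquad X = S\cos\theta\cos\phi,\qquad Y = S\sin\theta\cos\phi,\qquad Z = S\sin\phi
```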
Finally, surround sound can also be achieved at the mastering level from stereophonic sources, as with Penteo, which uses digital signal processing analysis of a stereo recording to parse out individual sounds to component panorama positions, then positions them, accordingly, into a five-channel field. There are other ways to create surround sound out of stereo, for instance routines based on the QS and SQ matrices used for encoding quadraphonic sound, where instruments were divided over four speakers in the studio. Creating surround with such software routines is normally referred to as "upmixing",[22] which was particularly successful on the Sansui QSD-series decoders, which had a mode that mapped the L ↔ R stereo onto an ∩ arc.
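To illustrate matrix-style upmixing in general terms, a deliberately simplified sum-and-difference sketch follows; it is not the Penteo algorithm or the Sansui QS/SQ matrices (whose real coefficients involve phase shifts), and the gain values are assumptions chosen only for illustration.

```python
import numpy as np

def naive_upmix(stereo, center_gain=0.707, surround_gain=0.5):
    """Toy 2-to-5 upmix sketch: the L+R sum feeds a centre channel and the
    L-R difference feeds the surrounds.  `stereo` has shape (n_samples, 2);
    the result has shape (n_samples, 5), ordered L, R, C, LS, RS."""
    left, right = stereo[:, 0], stereo[:, 1]
    center = center_gain * (left + right)      # sum signal anchors the centre
    diff = surround_gain * (left - right)      # difference signal feeds the rear
    return np.column_stack([left, right, center, diff, -diff])
```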
In most cases, surround sound systems rely on the mapping of each source channel to its own loudspeaker. Matrix systems recover the number and content of the source channels and apply them to their respective loudspeakers. With discrete surround sound, the transmission medium allows for (at least) the same number of channels of source and destination; however, one-to-one, channel-to-speaker, mapping is not the only way of transmitting surround sound signals.
The transmitted signal might encode the information (defining the original sound field) to a greater or lesser extent; the surround sound information is rendered for replay by a decoder generating the number and configuration of loudspeaker feeds for the number of speakers available for replay – one renders a sound field as produced by a set of speakers, analogously to rendering in computer graphics. This "replay device independent" encoding is analogous to encoding and decoding an Adobe PostScript file, where the file describes the page and is rendered per the output device's resolution capacity. The Ambisonics and WFS systems use audio rendering; Meridian Lossless Packing contains elements of this capability.
There are many alternative setups available for a surround sound experience, with a 3-2 (3 front, 2 back speakers and a Low Frequency Effects channel) configuration (more commonly referred to as 5.1 surround) being the standard for most surround sound applications, including cinema, television and consumer applications.[23] This is a compromise between the ideal image creation of a room and that of practicality and compatibility with two-channel stereo.[24] Because most surround sound mixes are produced for 5.1 surround (6 channels), larger setups require matrixes or processors to feed the additional speakers.[24]
The standard surround setup consists of three front speakers, LCR (left, center and right), two surround speakers, LS and RS (left and right surround respectively), and a subwoofer for the Low Frequency Effects (LFE) channel, which is low-pass filtered at 120 Hz. The angles between the speakers have been standardized by the ITU (International Telecommunication Union) recommendation 775 and the AES (Audio Engineering Society) as follows: 60 degrees between the L and R channels (allowing for two-channel stereo compatibility), with the center speaker directly in front of the listener. The surround channels are placed 100–120 degrees from the center channel, with the subwoofer's positioning not being critical due to the low directionality of frequencies below 120 Hz.[25] The ITU standard also allows for additional surround speakers, which need to be distributed evenly between 60 and 150 degrees.[23][25]
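A small sketch of the loudspeaker azimuths just described follows; the ±30° front pair follows from the 60° L–R spread, 110° is picked arbitrarily from the recommended 100–120° surround range, and the 2 m listening radius is an assumption.

```python
import math

# Nominal azimuths in degrees: 0 = straight ahead, negative = listener's left.
ITU_5_1_AZIMUTHS = {"C": 0, "L": -30, "R": 30, "LS": -110, "RS": 110}

def speaker_xy(azimuth_deg, radius_m=2.0):
    """(x, y) position of a speaker on a circle centred on the listener,
    with +y pointing towards the screen."""
    a = math.radians(azimuth_deg)
    return (radius_m * math.sin(a), radius_m * math.cos(a))

layout = {name: speaker_xy(az) for name, az in ITU_5_1_AZIMUTHS.items()}
```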
Surround mixes with more or fewer channels are acceptable if they are compatible, as described by ITU-R BS.775-1, with 5.1 surround. The 3-1 channel setup (consisting of one monophonic surround channel) is such a case, where both LS and RS are fed by the monophonic signal at an attenuated level of −3 dB.[24]
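Expressed as a formula, the −3 dB attenuation described above for the monophonic surround signal S amounts to:

```latex
L_S = R_S = 10^{-3/20}\,S \approx 0.708\,S
```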
The function of the center channel is to anchor the signal so that any centrally panned images do not shift when a listener is moving or is sitting away from the sweet spot.[26] The center channel also prevents the timbral modifications, typical of 2-channel stereo, that occur due to phase differences at the two ears of a listener.[23] The centre channel is especially used in films and television, with dialogue primarily feeding the center channel.[24] The function of the center channel can either be of a monophonic nature (as with dialogue) or it can be used in combination with the left and right channels for true three-channel stereo. Motion pictures tend to use the center channel for monophonic purposes, with stereo being reserved purely for the left and right channels. Surround microphone techniques have, however, been developed that fully use the potential of three-channel stereo.
In 5.1 surround, phantom images between the front speakers are quite accurate, while images towards the back and especially to the sides are unstable.[23][24] The localisation of a virtual source, based on level differences between two loudspeakers to the side of a listener, shows great inconsistency across the standardised 5.1 setup, and is also largely affected by movement away from the reference position. 5.1 surround is therefore limited in its ability to convey 3D sound, making the surround channels more appropriate for ambience or effects.[23]
7.1 channel surround is another setup, most commonly used in large cinemas, that is compatible with 5.1 surround, though it is not stated in the ITU-standards. 7.1 channel surround adds two additional channels, center-left (CL) and center-right (CR) to the 5.1 surround setup, with the speakers situated 15 degrees off centre from the listener.[23] This convention is used to cover an increased angle between the front loudspeakers as a product of a larger screen.
Most 2-channel stereophonic microphone techniques are compatible with a 3-channel setup (LCR), as many of these techniques already contain a center microphone or microphone pair. Microphone techniques for LCR should, however, try to obtain greater channel separation to prevent conflicting phantom images between L/C and L/R for example.[24][26][27] Specialised techniques have therefore been developed for 3-channel stereo. Surround microphone techniques largely depend on the setup used, therefore being biased towards the 5.1 surround setup, as this is the standard.[23]
Surround recording techniques can be differentiated into those that use single arrays of microphones placed in close proximity, and those treating front and rear channels with separate arrays.[23][25] Close arrays present more accurate phantom images, whereas separate treatment of rear channels is usually used for ambience.[25] For accurate depiction of an acoustic environment, such as a hall, side reflections are essential. Appropriate microphone techniques should therefore be used if room impression is important. Although the reproduction of side images is very unstable in the 5.1 surround setup, room impressions can still be accurately presented.[24]
Some microphone techniques used for coverage of the three front channels include double-stereo techniques, INA-3 (Ideal Cardioid Arrangement), the Decca Tree setup and the OCT (Optimum Cardioid Triangle).[24][27] Surround techniques are largely based on 3-channel techniques, with additional microphones used for the surround channels. A distinguishing factor for the pickup of the front channels in surround is that less reverberation should be picked up, as the surround microphones will be responsible for the pickup of reverberation.[23] Cardioid, hypercardioid, or supercardioid polar patterns will therefore often replace omnidirectional polar patterns for surround recordings. To compensate for the lost low end of directional (pressure-gradient) microphones, additional omnidirectional (pressure) microphones, exhibiting an extended low-end response, can be added; their output is usually low-pass filtered.[24][27] A simple surround microphone configuration involves the use of a front array in combination with two backward-facing omnidirectional room microphones placed about 10–15 meters away from the front array. If echoes are noticeable, the front array can be delayed appropriately. Alternatively, backward-facing cardioid microphones can be placed closer to the front array for a similar reverberation pickup.[25]
The INA-5 (Ideal Cardioid Arrangement) is a surround microphone array that uses five cardioid microphones resembling the angles of the standardised surround loudspeaker configuration defined by ITU Rec. 775.[25] The dimensions between the front three microphones, as well as the polar patterns of the microphones, can be changed for different pickup angles and ambient response.[23] This technique therefore allows for great flexibility.
A well established microphone array is the Fukada Tree, which is a modified variant of the Decca Tree stereo technique. The array consists of 5 spaced cardioid microphones, 3 front microphones resembling a Decca Tree and two surround microphones. Two additional omnidirectional outriggers can be added to enlarge the perceived size of the orchestra and/or to better integrate the front and surround channels.[23][24] The L, R, LS and RS microphones should be placed in a square formation, with L/R and LS/RS angled at 45 degrees and 135 degrees from the center microphone respectively. Spacing between these microphones should be about 1.8 meters. This square formation is responsible for the room impressions. The center channel is placed a meter in front of the L and R channels, producing a strong center image. The surround microphones are usually placed at the critical distance (where the direct and reverberant field is equal), with the full array usually situated several meters above and behind the conductor.[23][24]
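A rough sketch of the geometry described above, with the L/R/LS/RS square centred on the origin, is given below; distances are in metres and real setups are adjusted to the hall, so the numbers should be read as nominal assumptions derived from the description.

```python
# Nominal Fukada tree microphone positions (x, y) in metres, with +y pointing
# towards the orchestra: a 1.8 m square of L/R/LS/RS microphones, and the
# centre microphone placed about 1 m ahead of L and R, as described above.
HALF = 1.8 / 2.0
FUKADA_TREE = {
    "C":  (0.0, HALF + 1.0),
    "L":  (-HALF, HALF),
    "R":  ( HALF, HALF),
    "LS": (-HALF, -HALF),
    "RS": ( HALF, -HALF),
}
```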
NHK (the Japanese broadcasting corporation) developed an alternative technique also involving five cardioid microphones. Here a baffle is used for separation between the front left and right channels, which are 30 cm apart.[23] Outrigger omnidirectional microphones, low-pass filtered at 250 Hz, are spaced 3 meters apart in line with the L and R cardioids. These compensate for the bass roll-off of the cardioid microphones and also add expansiveness.[26] A microphone pair spaced 3 meters apart, situated 2–3 meters behind the front array, is used for the surround channels.[23] The centre channel is again placed slightly forward, with the L/R and LS/RS again angled at 45 and 135 degrees respectively.
The OCT-Surround (Optimum Cardioid Triangle-Surround) microphone array is an augmented version of the stereo OCT technique, using the same front array with added surround microphones. The front array is designed for minimum crosstalk, with the front left and right microphones having supercardioid polar patterns and angled at 90 degrees relative to the center microphone.[23][24] It is important that high-quality small-diaphragm microphones are used for the L and R channels to reduce off-axis coloration.[25] Equalization can also be used to flatten the response of the supercardioid microphones to signals arriving at up to about 30 degrees from the front of the array.[23] The center channel is placed slightly forward. The surround microphones are backward-facing cardioid microphones placed 40 cm behind the L and R microphones. The L, R, LS and RS microphones pick up early reflections from both the sides and the back of an acoustic venue, therefore giving significant room impressions.[24] Spacing between the L and R microphones can be varied to obtain the required stereo width.[24]
Specialized microphone arrays have been developed for recording purely the ambience of a space. These arrays are used in combination with suitable front arrays, or can be added to the above-mentioned surround techniques.[25] The Hamasaki square (also proposed by NHK) is a well-established microphone array used for the pickup of hall ambience. Four figure-eight microphones are arranged in a square, ideally placed far away from the source and high up in the hall. Spacing between the microphones should be between 1–3 meters.[24] The microphones' nulls (zero pickup points) are set to face the main sound source, with the positive polarities facing outward, therefore very effectively minimizing the direct sound pickup as well as echoes from the back of the hall.[25] The back two microphones are mixed to the surround channels, with the front two channels being mixed, in combination with the front array, into L and R.
Another ambience technique is the IRT (Institut für Rundfunktechnik) cross. Here, four cardioid microphones, oriented 90 degrees relative to one another, are placed in a square formation, separated by 21–25 cm.[25][27] The front two microphones should be positioned 45 degrees off axis from the sound source. This technique therefore resembles back-to-back near-coincident stereo pairs. The microphones' outputs are fed to the L, R, LS and RS channels. The disadvantage of this approach is that direct sound pickup is quite significant.
Many recordings do not require pickup of side reflections. For live pop music concerts a more appropriate array for the pickup of ambience is the cardioid trapezium.[24] All four cardioid microphones are backward-facing and angled at 60 degrees from one another, in a shape similar to a semicircle. This is effective for the pickup of audience and ambience.
All the above-mentioned microphone arrays take up considerable space, making them quite impractical for field recordings. In this respect, the double MS (mid-side) technique is quite advantageous. This array uses back-to-back cardioid microphones, one facing forward, the other backwards, combined with either one or two figure-eight microphones. The different channels are obtained by sum and difference of the figure-eight and cardioid signals.[24][25] When using only one figure-eight microphone, the double MS technique is extremely compact and therefore also perfectly compatible with monophonic playback. This technique also allows for post-production changes of the pickup angle.
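One common form of the sum-and-difference decoding mentioned above is shown below, assuming a forward-facing mid microphone M_f, a backward-facing mid microphone M_r, and a single figure-eight side microphone S with its positive lobe facing left; this is a typical textbook decoding, not necessarily the one used in any particular product.

```latex
L = M_f + S,\qquad R = M_f - S,\qquad L_S = M_r + S,\qquad R_S = M_r - S
```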
Surround replay systems may make use of bass management, the fundamental principle of which is that bass content in the incoming signal, irrespective of channel, should be directed only to loudspeakers capable of handling it, whether the latter are the main system loudspeakers or one or more special low-frequency speakers called subwoofers.
There is a notation difference before and after the bass management system. Before the bass management system there is a Low Frequency Effects (LFE) channel. After the bass management system there is a subwoofer signal. A common misunderstanding is the belief that the LFE channel is the "subwoofer channel". The bass management system may direct bass to one or more subwoofers (if present) from any channel, not just from the LFE channel. Also, if there is no subwoofer speaker present then the bass management system can direct the LFE channel to one or more of the main speakers.
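A minimal bass-management sketch along the lines just described follows; the 80 Hz crossover frequency, the filter order and the use of SciPy Butterworth filters are all assumptions for illustration, not a reference implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bass_manage(mains, lfe, fs, crossover_hz=80.0):
    """Route bass from every main channel plus the LFE channel to a single
    subwoofer feed, and keep only high-passed content in the main channels.
    mains: array of shape (n_samples, n_channels); lfe: shape (n_samples,)."""
    lp = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    hp = butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    sub_feed = sosfilt(lp, mains, axis=0).sum(axis=1) + lfe   # all bass to the sub
    mains_out = sosfilt(hp, mains, axis=0)                    # mains keep highs only
    return mains_out, sub_feed
```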
Because the low-frequency effects channel requires only a fraction of the bandwidth of the other audio channels, it is referred to as the ".1" channel; for example "5.1" or "7.1".[citation needed]
The LFE is a source of some confusion in surround sound. The LFE channel was originally developed to carry extremely low "sub-bass" cinematic sound effects (e.g., the loud rumble of thunder or explosions, with commercial subwoofers sometimes going down to 30 Hz) on their own channel. This allowed theaters to control the volume of these effects to suit the particular cinema's acoustic environment and sound reproduction system. Independent control of the sub-bass effects also reduced the problem of intermodulation distortion in analog movie sound reproduction. A subwoofer capable of playing back frequencies as low as 5 Hz was developed by a small speaker manufacturer in Florida. It utilized a propeller design and required a large cabinet to move the subsonic air mass.[28]
In the original movie theater implementation, the LFE was a separate channel fed to one or more subwoofers. Home replay systems, however, may not have a separate subwoofer, so modern home surround decoders and systems often include a bass management system that allows bass on any channel (main or LFE) to be fed only to the loudspeakers that can handle low-frequency signals. The salient point here is that the LFE channel is not the "subwoofer channel"; there may be no subwoofer and, if there is, it may be handling a good deal more than effects.[29]
Some record labels such as Telarc and Chesky have argued that LFE channels are not needed in a modern digital multichannel entertainment system.[citation needed] They argue that all available channels have a full-frequency range and, as such, there is no need for an LFE in surround music production, because all the frequencies are available in all the main channels. These labels sometimes use the LFE channel to carry a height channel, underlining its redundancy for its original purpose. The label BIS generally uses a 5.0 channel mix.
The descriptions of surround sound specifications below distinguish between the number of discrete channels encoded in the original signal and the number of channels reproduced for playback. The number of channels reproduced for playback can be changed by using matrix decoding. A distinction is also made between the number of channels reproduced for playback and the number of speakers used to reproduce (each channel may refer to a group of speakers). The graphics to the right of each specification description represent the number of channels, not the number of speakers.
This notation, e.g. "5.1", reflects the number of full-range channels, with a ".1" reflecting the limited range of the LFE channel.
E.g. 2 basic stereo speakers with no LFE channel = 2.0
5 full-range channels + 1 LFE channel = 5.1
It can also be expressed as the number of full-range channels in front of the listener, separated by a slash from the number of full-range channels beside or behind the listener, separated by a decimal point from the number of limited-range LFE channels.
E.g. 3 front channels + 2 side channels + an LFE channel = 3/2.1
This notation can then be expanded to include the notation of Matrix Decoders. Dolby Digital EX, for example, has a sixth full-range channel incorporated into the two rear channels with a matrix. This would be expressed:
3 front channels + 2 rear channels + 3 channels reproduced in the rear in total + 1 LFE channel = 3/2:3.1
Note: The term stereo, although popularised in reference to two channel audio, can also be properly used to refer to surround sound, as it strictly means "solid" (actually meaning three-dimensional sound) sound. However this is no longer a common usage and "stereo sound" is almost exclusively used to describe two channel, left and right, sound.
In accordance with ANSI/CEA-863-A[30]
Zero-based order within multi-channel mp3/wav/flac datastream[31][32][33][34] | Order within DTS/AAC[35][36] | Channel name | Color-coding on commercial receiver and cabling |
---|---|---|---|
0 | 1 | Front left | White |
1 | 2 | Front right | Red |
2 | 0 | Center | Green |
3 | 5 | Low frequency | Purple |
4 | 3 | Surround left | Blue |
5 | 4 | Surround right | Grey |
6 | 6 | Surround back left | Brown |
7 | 7 | Surround back right | Khaki |
[Speaker layout diagram: Front left / Center / Front right; Surround left / Surround right; Surround back left / Surround back right; Low frequency]
In 2002, Dolby premiered a master of We Were Soldiers which featured a Sonic Whole Overhead Sound soundtrack. This mix included a new ceiling-mounted height channel.
Ambisonics is a series of recording and replay techniques using multichannel mixing technology that can be used live or in the studio and that recreates the soundfield as it existed in the recorded space, in contrast to traditional surround systems, which can only create an illusion of the soundfield if the listener is located in a very narrow sweet spot between speakers. Any number of speakers in any physical arrangement can be used to recreate a sound field. With six or more speakers arranged around a listener, a three-dimensional ("periphonic", or full-sphere) sound field can be presented. Ambisonics was invented by Michael Gerzon.
Binaural recording is a method of recording sound that uses two microphones, arranged with the intent to create a 3-D stereo sound sensation for the listener of actually being in the room with the performers or instruments. This idea of a three dimensional or "internal" form of sound has also translated into useful advancement of technology in many things such as stethoscopes creating "in-head" acoustics and IMAX movies being able to create a three dimensional acoustic experience.
PanAmbio combines a stereo dipole and crosstalk cancellation in front and a second set behind the listener (total of four speakers) for 360° 2D surround reproduction. Four channel recordings, especially those containing binaural cues, create speaker-binaural surround sound. 5.1 channel recordings, including movie DVDs, are compatible by mixing C-channel content to the front speaker pair. 6.1 can be played by mixing SC to the back pair.
This table shows the various speaker configurations that are commonly used for end-user equipment. The order and identifiers are those specified for the channel mask in the standard uncompressed WAV file format (which contains a raw multichannel PCM stream) and are used according to the same specification for most PC-connectible digital sound hardware and PC operating systems capable of handling multiple channels.[37][38] While it is certainly possible to build any speaker configuration, there is not much commercially available movie or music content for alternative speaker configurations. Such cases can, however, be worked around by remixing the source content channels to the speaker channels using a matrix table specifying how much of each content channel is played through each speaker channel. A small sketch of how a channel mask is assembled from these flags follows the table.
Channel name | Identifier | Index | Flag | 1.0 Mono[Note 1] | 2.0 Stereo[Note 2] | 3.0 Stereo | 3.0 Surround | 4.0 Quad | 4.0 Surround | 5.0 | 5.0 Side[Note 3] | 6.0 | 6.0 Side[Note 3] | 7.0 | 7.0 Side[Note 4] | 7.0 Surround[Note 3] | 9.0 Surround | 11.0 Surround |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Front Left | SPEAKER_FRONT_LEFT | 0 | 0x00000001 | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Front Right | SPEAKER_FRONT_RIGHT | 1 | 0x00000002 | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Front Center | SPEAKER_FRONT_CENTER | 2 | 0x00000004 | Yes | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Back Left | SPEAKER_BACK_LEFT | 4 | 0x00000010 | No | No | No | No | Yes | No | Yes | No | Yes | No | Yes | No | Yes | Yes | Yes |
Back Right | SPEAKER_BACK_RIGHT | 5 | 0x00000020 | No | No | No | No | Yes | No | Yes | No | Yes | No | Yes | No | Yes | Yes | Yes |
Front Left of Center | SPEAKER_FRONT_LEFT_OF_CENTER | 6 | 0x00000040 | No | No | No | No | No | No | No | No | No | No | Yes | Yes | No | No | Yes |
Front Right of Center | SPEAKER_FRONT_RIGHT_OF_CENTER | 7 | 0x00000080 | No | No | No | No | No | No | No | No | No | No | Yes | Yes | No | No | Yes |
Back Center | SPEAKER_BACK_CENTER | 8 | 0x00000100 | No | No | No | Yes | No | Yes | No | No | Yes | Yes | No | No | No | No | No |
Side Left | SPEAKER_SIDE_LEFT | 9 | 0x00000200 | No | No | No | No | No | No | No | Yes | No | Yes | No | Yes | Yes | Yes | Yes |
Side Right | SPEAKER_SIDE_RIGHT | 10 | 0x00000400 | No | No | No | No | No | No | No | Yes | No | Yes | No | Yes | Yes | Yes | Yes |
Front Left Height | SPEAKER_LEFT_HEIGHT | 12 | 0x00001000 | No | No | No | No | No | No | No | No | No | No | No | No | No | Yes | Yes |
Front Right Height | SPEAKER_RIGHT_HEIGHT | 14 | 0x00004000 | No | No | No | No | No | No | No | No | No | No | No | No | No | Yes | Yes |
Any of the channel configurations above may include a low frequency effects (LFE) channel (the channel played through the subwoofer), which makes the configuration ".1" instead of ".0". Most modern multichannel mixes contain an LFE channel.
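The sketch below shows how such a channel mask is assembled from the flag bits in the table above; the LFE bit 0x00000008 does not appear in the table and is an assumed value.

```python
# Flag bits taken from the table above (WAV channel-mask convention).
SPEAKER_FRONT_LEFT    = 0x00000001
SPEAKER_FRONT_RIGHT   = 0x00000002
SPEAKER_FRONT_CENTER  = 0x00000004
SPEAKER_LOW_FREQUENCY = 0x00000008   # assumed LFE bit, not listed in the table
SPEAKER_BACK_LEFT     = 0x00000010
SPEAKER_BACK_RIGHT    = 0x00000020

# A 5.1 layout is simply the bitwise OR of the flags of its speakers.
MASK_5_1 = (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | SPEAKER_FRONT_CENTER |
            SPEAKER_LOW_FREQUENCY | SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT)
assert MASK_5_1 == 0x0000003F
```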
7.1 surround sound is a popular format in theaters and home cinema, including Blu-ray Disc releases, with Dolby and DTS being the major players.[39]
10.2 is a surround sound format developed by THX creator Tomlinson Holman of TMH Labs and the University of Southern California (schools of Cinema/Television and Engineering). Developed along with Chris Kyriakakis of the USC Viterbi School of Engineering, 10.2 refers to the format's promotional slogan: "Twice as good as 5.1". Advocates of 10.2 argue that it is the audio equivalent of IMAX.
11.1 is a sound format supported by Barco, with a few installations in theaters worldwide.[40]
22.2 is the surround sound component of Ultra High Definition Television, and has been developed by NHK Science & Technical Research Laboratories. As its name suggests, it uses 24 speakers. These are arranged in three layers: A middle layer of ten speakers, an upper layer of nine speakers, and a lower layer of three speakers and two sub-woofers. The system was demonstrated at Expo 2005, Aichi, Japan, the NAB Shows 2006 and 2009, Las Vegas, and the IBC trade shows 2006 and 2008, Amsterdam, Netherlands.