
Why Haven't You Tried Immersive 3D Audio Yet?


Recommended Posts

At that time there was no computer processing. With modern technologies, the conclusions may be different.

 

We also need to model different speaker systems against an optimality criterion (for example, an allowable error in the frequency and phase response of the sound field at one or several points). Only computer methods can do this within a reasonable calculation time.

 

You need to read about why three channels were considered better, then comment. The theory does not require computers.

Link to comment
The theory does not require computers.

 

I suppose a computer is very useful for theoretical work. Instead of a slide rule ;-)

AuI ConverteR 48x44 - HD audio converter/optimizer for DAC of high resolution files

ISO, DSF, DFF (1-bit/D64/128/256/512/1024), wav, flac, aiff, alac,  safe CD ripper to PCM/DSF,

Seamless Album Conversion, AIFF, WAV, FLAC, DSF metadata editor, Mac & Windows
Offline conversion saves energy and nature

Link to comment

Compared to stereo or 5.1, an Ambiophonic system saves space, cables, etc. The front speakers are much closer together, even less than one third of the usual stereo spacing, or you can just put them on either side of a TV screen. You never need a center speaker. In the rear, the two speakers should be at a narrow angle, around 20 degrees or so, so you don't need anything at 110 degrees, or any of the Atmos or Auro speakers, for full surround. No one will believe it, but Ambio is the least expensive, least intrusive on home décor, and the only full-surround protocol out there. It plays music just as well, so you don't need two systems, as witchdoctor has wrestled with.

Link to comment
Sorry, this is a tape-measure issue, not a computational issue. You need to read the research.

 

Give me a link, please.


Link to comment
Witch,

 

Two speakers go back to the Bell Labs research. In general, three is the optimal number according to them, but two was more practical.

That depends on how you look at it. Bell Labs started with a microphone and a speaker per instrument. Realizing, of course, that this was impractical, they started reducing the number of microphones/channels used (remember, this was the early 1930s and the microphones were primitive. They might have even been carbon telephone-style microphones; I don't know what they used, and Welch and Read, in their seminal history of audio recording and playback, "From Tinfoil to Stereo," don't shed any light on the matter) until they reduced the number of mikes/channels to three. It is important to note here that these multiple channels were not recorded. The technology to allow that simply did not exist at the time. They set up the musicians in one soundproof room and ran lines to another designated listening room containing the amplifiers and speakers. From these trials, they concluded that while three channels were ideal for relaying all of the spatial cues required for stereo, two could be used by splitting the output of the center channel equally into each of the two flanking channels. Having said that, I know that what wasn't tried was a coincident pair, because British engineer Alan Blumlein had not yet invented that particular miking technique. Blumlein's coincident method of recording is designed for, and requires, only two speakers. To this day, some form of coincident miking is the only kind of miking arrangement that yields true stereo.

 

Unfortunately, in the 1950s, when record and motion picture companies started recording in stereo, most of the technical people involved went back to the Bell Labs experiments to see how it should be done. Two of the record-company pioneers of stereo recording, RCA Victor and Mercury, took a page from Bell Labs' book, took the conclusions at face value, and started to use three spaced omni mikes: right, left, and center. The result was that until multi-channel, multi-miking came along in the early '60s, this was adopted (at least here in the U.S.A.) as the way to record for stereo. In the early days, engineers like RCA's Lewis Layton and Mercury's C. R. Fine had a very practical reason to use the Bell approach. Their bread and butter, in the middle '50s, was monaural records. With the Bell method, they could take the center channel and use it directly to cut the mono release, and then use all three for stereo by splitting the center channel equally into the right and left channels. Both RCA and Mercury used three-track 1/2-inch Ampex 350 tape machines.

 

Whether or not the U.S. record companies knew about Blumlein's experiments at the BBC before the war is unknown to me, but I do know that British Decca and E.M.I./HMV knew about coincident miking and developed their own versions of it. Decca developed the "Decca Tree," a roughly cross-shaped stand with three mikes, the two stereo mikes being coincident and the third, single mike mounted above them (for mono). I don't know why they did it this way, as one of the major positive characteristics of coincident miking is that it is phase-coherent and will sum perfectly into mono (something spaced omnis are not and will not do). Meanwhile, on the Continent, the French devised a system called O.R.T.F. (named for the French national radio network), which is, generally speaking, a pair of cardioid condenser mikes on a "T" bar spaced about 17 cm apart with the mikes angled at about 110 degrees to one another. This too gives excellent, phase-coherent stereo. In Germany, meanwhile, DGG came up with what they call the M-S (Mitte-Seite, or Middle-Side) pattern. This consists of a figure-of-eight mike situated with its two polar lobes at right angles to the ensemble; i.e., with one lobe facing the right-hand stage wall and the other lobe facing the left stage wall. Mounted slightly above this is a single omnidirectional or cardioid mike facing straight ahead, directly at the center of the ensemble (either will work, but if you are recording or broadcasting where there's an audience, you probably don't want to use an omni, as it will pick up audience noise as readily as it will pick up the musicians). Where all three pickup lobes intersect is the area covered by the mikes. A bit of algebra via a mixing matrix yields the left and right stereo channels.
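The "bit of algebra" behind the M-S matrix is just a sum and a difference. Here is a minimal numeric sketch; the two short arrays are hypothetical stand-ins for the mid and side microphone feeds, not real recordings:

```python
import numpy as np

# Hypothetical stand-ins for the mid (forward-facing) and side
# (figure-of-eight) microphone feeds.
mid = np.array([1.0, 0.5, -0.3])
side = np.array([0.2, -0.1, 0.4])

# The M-S matrix: sum and difference give left and right.
left = mid + side
right = mid - side

# The inverse matrix recovers mid and side from the stereo pair,
# which is why an M-S recording sums cleanly back into mono (the mid feed).
assert np.allclose((left + right) / 2, mid)
assert np.allclose((left - right) / 2, side)
```

The same two-by-two matrix, run in either direction, is all a console needs to move between M-S and conventional left/right stereo.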

 

All of these variations on Blumlein's coincident microphone technique have one thing in common: they all yield excellent TWO-channel stereo, designed to be played back using two channels/speakers.

As to why I haven't tried it: in my case, I hear sounds from where they are actually coming from, and I prefer a "wall of sound" a lot of the time. If the drum sound is coming from a speaker, I hear the speaker, even in a small environment like a living room. Probably too many years consulting in the broadcast industry to change.

George

Link to comment
Hearing the 3D effect and liking it are two different things. In the case of the failure of 3D TV, it was discovered rather late that some people do not have stereoscopic vision. The condition is well documented.

 

However, whether a similar deficiency exists in human hearing is not known. At least, I am not aware of any research addressing a lack of stereophonic hearing. IMO, this is crucial to understanding our hearing, as we mostly rely on the two speakers to create the phantom center image.

 

Furthermore, out of habit, we may not like, or may not be able to properly decode, the artificial 3D sound created by two speakers. An experiment conducted by Washington Uni found that "It seems that people can become so accustomed to attending to the input from just one ear, that they have difficulty ignoring any input from that ear. Selective attention is not always a conscious process." Although this statement is not in reference to 3D hearing, it could well apply to audiophiles who, having trained for decades to accept and decode the flawed stereo sound, may now suddenly need to readjust their brains to hear without crosstalk. To appreciate 3D sound, the brain may need some time to readjust.

 

Look at all these people waiting in line for an Auro 3D demo

 

Link to comment
Compared to stereo or 5.1, an Ambiophonic system saves space, cables, etc. The front speakers are much closer together, even less than one third of the usual stereo spacing, or you can just put them on either side of a TV screen. You never need a center speaker. In the rear, the two speakers should be at a narrow angle, around 20 degrees or so, so you don't need anything at 110 degrees, or any of the Atmos or Auro speakers, for full surround. No one will believe it, but Ambio is the least expensive, least intrusive on home décor, and the only full-surround protocol out there. It plays music just as well, so you don't need two systems, as witchdoctor has wrestled with.

 

This seems perfect for a second system in another room. I need to read the links you posted, thanks

Link to comment
.... All of these variations on Blumlein's coincident microphoning technique, have one thing in common: They all yield excellent TWO channel stereo, designed to be played back using two channels/speakers.

 

George, you told me four years ago that you would try Ambiophonics. At least you could have experienced some other methods; then it would be easier to understand the OP. I spent many years perfecting my stereo setup. Now I have spent about five years with Ambio. The first two were very difficult without support and help from others. When I say 3D sound, I know exactly what it is. Stereo playback will not give you that, no matter how expensive your setup is.

 

IMO, there is no stereo sound in nature. Sound originates from a single source; stereo just recreates an illusion. It is not real. The main problem with stereo is crosstalk. How do you propose to address the crosstalk in stereo? This is a known defect of stereo.
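For context, Ambiophonics attacks this crosstalk with recursive cancellation in the spirit of RACE (Recursive Ambiophonic Crosstalk Elimination). The sketch below expands the recursion into a finite series of taps; the delay, attenuation, and tap count are illustrative assumptions, not the published parameters:

```python
import numpy as np

def crosstalk_cancel(left, right, delay=3, atten=0.85, taps=8):
    """Finite-tap expansion of the recursive cancellation idea: each
    speaker emits inverted, attenuated, delayed copies intended to cancel
    the other speaker's wavefront at the 'wrong' ear. The parameter
    values here are assumptions for illustration only."""
    out_l = left.astype(float).copy()
    out_r = right.astype(float).copy()
    for i in range(1, taps + 1):
        g = (-atten) ** i                  # alternating-sign, decaying gain
        d = delay * i                      # each bounce adds another delay
        src_l = right if i % 2 else left   # odd taps cross over channels
        src_r = left if i % 2 else right
        out_l[d:] += g * src_l[:-d]
        out_r[d:] += g * src_r[:-d]
    return out_l, out_r

# An impulse on the left channel: the right speaker emits a delayed,
# inverted, attenuated copy meant to cancel the left speaker at the right ear.
left = np.zeros(64); left[0] = 1.0
out_l, out_r = crosstalk_cancel(left, np.zeros(64))
```

Each cancellation signal itself leaks to the opposite ear, which is why the series keeps going with decaying gain; in practice, a DSP (such as the miniDSP box mentioned later in the thread) runs this recursively in real time.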

 

Maybe this will convince you to experiment with the concept.

 

http://download.springer.com/static/pdf/301/art%253A10.1155%252F2010%252F719197.pdf?originUrl=http%3A%2F%2Fasp.eurasipjournals.springeropen.com%2Farticle%2F10.1155%2F2010%2F719197&token2=exp=1486774778~acl=%2Fstatic%2Fpdf%2F301%2Fart%25253A10.1155%25252F2010%25252F719197.pdf*~hmac=c45e0307639c87b012b11eee552f00d754c1d4783cf8780152daa01f3cdf9e5b

 

 

 

 

 

Sent from my iPhone using Tapatalk

Link to comment
George, you told me four years ago that you would try Ambiophonics. At least you could have experienced some other methods; then it would be easier to understand the OP. I spent many years perfecting my stereo setup. Now I have spent about five years with Ambio. The first two were very difficult without support and help from others. When I say 3D sound, I know exactly what it is. Stereo playback will not give you that, no matter how expensive your setup is.

 

 

Yes, I recall telling you that. Unfortunately, things have changed in the ensuing time, and I'm not in a position to do that any more. Sorry about that, but sometimes circumstances beyond one's control can completely change even well-intentioned plans.

 

If it makes you feel any better, my general dislike of multichannel or "surround sound" is mainly aimed at Dolby 5.1, DTS, etc., and is not aimed at Ambiophonics, with which I have no experience. I did have a cheap Ambisonics decoder back in the late '70s, but that's not the same thing, from what I understand. Good thing too, because that decoder was no better than Columbia's SQ or Sansui's QS.

IMO, there is no stereo sound in nature. Sound originates from a single source; stereo just recreates an illusion. It is not real. The main problem with stereo is crosstalk. How do you propose to address the crosstalk in stereo? This is a known defect of stereo.

 

I suppose that if you nit-pick, that's true. But what is called stereophonic sound is merely a (crude) attempt at recreating a complete sound field from what are essentially two data points (the microphones). Within that sound field, individual points of sound can emanate from right, left, and center, from front or back, from near the floor or up above, but always between the playback speakers. That is what interests me: an orchestra spread out before me as if I were third-row-center. An illusion in which I can close my eyes and aurally "SEE" the individual instruments and players arrayed in space exactly as they would be were I attending a live concert. Very few recordings are made in such a way as to be able to do that, and I find it thrilling. The paucity of what I consider an essential component of stereo is what pushed me into recording in the first place. By Jiminy, if I couldn't buy recordings that do that, I'd make 'em myself.

 

 

I'll go there and read it.

 

OK, I certainly agree that in a multichannel setup such as Dolby 5.1, DTS, or perhaps Ambiophonics, crosstalk is the big destroyer of localization. But it's not true in true coincident stereo recording. When two channels are employed, the X-Y or other coincident microphone techniques act as our surrogate ears: not as good, perhaps, and certainly not as discerning of what they "hear," but they do what our ears do as well as technology can deliver. I've heard many recordings where the seeming goal of the recordist was to minimize crosstalk between instruments. Back in the '60s and '70s, they did this by recording orchestral works using as many as one mike per instrument. By close-miking everything, crosstalk is minimized. I've even seen engineers put gobos around and between various "big" instruments such as drums in order to minimize crosstalk. Did it work? You bet, but the result was basically unlistenable to anyone who had ever heard live music before. Soundstage? Imaging? There was none. Nothing but a line of individual instruments lined up cheek-by-jowl from left to right, and only that because each instrument or instrument group was pan-potted electronically into the position it seemed to occupy. There was no depth and no height to the ensemble. Many a fine performance was ruined by this kind of overproduction.

George

Link to comment
IMO, there is no stereo sound in nature. Sound originates from a single source; stereo just recreates an illusion. It is not real. The main problem with stereo is crosstalk. How do you propose to address the crosstalk in stereo? This is a known defect of stereo.

 

Sorry to jump in, but there is no sound in nature, only variations in air pressure that radiate from a source. It takes a listener to hear it for it to become sound. Most listeners have two ears. The illusion created in the mind of the listener is what permits the listener to locate the source.

 

Stereo recording/playback is an attempt to recreate an analogous illusion.

Kal Rubinson

Senior Contributing Editor, Stereophile

 

Link to comment
Sorry to jump in, but there is no sound in nature, only variations in air pressure that radiate from a source. It takes a listener to hear it for it to become sound. Most listeners have two ears. The illusion created in the mind of the listener is what permits the listener to locate the source.

 

Stereo recording/playback is an attempt to recreate an analogous illusion.

 

Well put, Kal! +1

George

Link to comment
Sorry to jump in, but there is no sound in nature, only variations in air pressure that radiate from a source. It takes a listener to hear it for it to become sound. Most listeners have two ears. The illusion created in the mind of the listener is what permits the listener to locate the source.

 

Stereo recording/playback is an attempt to recreate an analogous illusion.

 

Variation in air pressure = sound?

 

Then how come I can hear the rubbing/touching of the cotton bud in my ear when it is touching the eardrum?

 

As far as I know, sound originates from vibration. Acoustic transmission through the air is via the vibration of air molecules. The intensity of the vibration creates different pressures: the louder the sound, the stronger the vibration of the air molecules, which increases the air pressure.

 

This is my understanding. The sound that I hear is usually due to the excitation of my eardrums and some tiny hairs by the air molecules, except when I am diving or cleaning my ears with cotton buds. Air pressure is not sound. You don't hear air pressure but the vibration of the molecules.

 

 

 

 

 


Link to comment
.... I've even seen engineers put gobos around and between various "big" instruments such as drums in order to minimize crosstalk. Did it work?

 

That would not work, because it has nothing to do with crosstalk; the person using it, thinking it would minimize crosstalk, misunderstood the principle of crosstalk in stereo reproduction.

 

Now I see the picture clearly. :)

 

 


Link to comment
Sorry to jump in, but there is no sound in nature, only variations in air pressure that radiate from a source. It takes a listener to hear it for it to become sound. Most listeners have two ears. The illusion created in the mind of the listener is what permits the listener to locate the source.

Stereo recording/playback is an attempt to recreate an analogous illusion.

 

"Most listeners have two ears"

 

Most? Are you counting Minions as listeners?

Link to comment
Variation in air pressure = sound?

 

Then how come I can hear the rubbing/touching of the cotton bud in my ear when it is touching the eardrum?

 

As far as I know, sound originates from vibration. Acoustic transmission through the air is via the vibration of air molecules. The intensity of the vibration creates different pressures: the louder the sound, the stronger the vibration of the air molecules, which increases the air pressure.

 

This is my understanding. The sound that I hear is usually due to the excitation of my eardrums and some tiny hairs by the air molecules, except when I am diving or cleaning my ears with cotton buds. Air pressure is not sound. You don't hear air pressure but the vibration of the molecules.

 

 

 

 

 


 

Cont/....

 

The vibrating air molecules will reach your ears. Some will go to the left ear and some to the right.

 

Depending on the location of the source, the ear closer to the source will receive a slightly different vibration level (loudness), and the sound will also reach it earlier than the other ear (timing).

 

When you use an X-Y microphone pair, you are capturing the single source much as your ears would capture it. But that is not the same as how you would hear it.

 

Now you use the recording and play back the sound from a stereo setup: X to one channel and Y to the other.

 

What you will hear is a repeat of the original sound, but now from two different locations. The same sound is now split into what was captured by X, located at one side, and the other half, Y, at a location far away from the original source.

 

The same process repeats again. The air vibration from X (let's say the left speaker) reaches your left ear and your right ear. The same also happens from the right speaker.

 

What was originally a single source is now duplicated into two sources. One source will be slightly lower in loudness and delayed.

 

This is what happens with stereo, and no matter how well you record the sound, you cannot eliminate the fundamental error of doubling the source in stereo playback.
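The doubling described above can be put into a toy model in which each ear receives the near speaker directly and the far speaker attenuated and delayed. The delay and attenuation values below are assumptions for illustration, not measured head-related data:

```python
import numpy as np

FS = 44100   # sample rate in Hz (assumed)
ITD = 11     # ~0.25 ms extra path length to the far ear, in samples (assumed)
ILD = 0.7    # far-ear level attenuation, treated as frequency-independent (assumed)

def ears_from_speakers(spk_l, spk_r):
    """Each ear hears BOTH speakers: the near one directly, the far one
    attenuated by ILD and delayed by ITD samples. The far-speaker terms
    are exactly the crosstalk under discussion."""
    ear_l = spk_l.astype(float).copy()
    ear_r = spk_r.astype(float).copy()
    ear_l[ITD:] += ILD * spk_r[:-ITD]   # right speaker leaking into left ear
    ear_r[ITD:] += ILD * spk_l[:-ITD]   # left speaker leaking into right ear
    return ear_l, ear_r

# A click panned hard left still arrives at the right ear: the source is doubled.
click = np.zeros(128); click[0] = 1.0
ear_l, ear_r = ears_from_speakers(click, np.zeros(128))
```

In a real room the leakage is frequency-dependent and shaped by the head, but even this crude model shows why a hard-panned source reaches both ears twice.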

 

 

 


Link to comment
That depends on how you look at it. Bell Labs started with a microphone and a speaker per instrument.

 

...

 

All of these variations on Blumlein's coincident microphone technique have one thing in common: they all yield excellent TWO-channel stereo, designed to be played back using two channels/speakers.

 

Cool post, George.

 

A microphone per instrument is a way of artificially synthesizing the sound field during mixing.

 

For full capture of the sound field, two microphones (better, on a dummy head) are enough.

 

We need to separate the number of microphones (raw channels) from the number of speakers (audio-holography synthesis channels).

 

They are different things.

 

Using several speakers in a room for sound holography demands computer control (I know of no other technology, now or in the past) for phase and frequency management.

 

I can only roughly imagine how these speakers interfere. Possibly two speakers are enough, possibly not. I cannot say without calculations for a particular room.


Link to comment
Cool post, George.

 

A microphone per instrument is a way of artificially synthesizing the sound field during mixing.

 

For full capture of the sound field, two microphones (better, on a dummy head) are enough.

 

We need to separate the number of microphones (raw channels) from the number of speakers (audio-holography synthesis channels).

 

They are different things.

 

Using several speakers in a room for sound holography demands computer control (I know of no other technology, now or in the past) for phase and frequency management.

 

I can only roughly imagine how these speakers interfere. Possibly two speakers are enough, possibly not. I cannot say without calculations for a particular room.

 

I am using JRiver with zones. Except for a timing adjustment for my convolution zones (only during setup), I do not need any other corrections, apart from filtering frequencies above 6 kHz.

 

You can hear samples of the 12 speakers and of stereo in my profile link. Can you hear the phase errors from the original 2L file?

 

 


Link to comment
Variation in air pressure = sound?

 

Then how come I can hear the rubbing/touching of the cotton bud in my ear when it is touching the eardrum?

Yes, in the vast majority of situations and, specifically, as it pertains to music. You can hear sounds via other media (like water) and even by direct application of pressure to the tympanic membrane or other body parts connected to it.

 

As far as I know, sound originates from vibration. Acoustic transmission through the air is via the vibration of air molecules. The intensity of the vibration creates different pressures: the louder the sound, the stronger the vibration of the air molecules, which increases the air pressure.
The vibration of an object (sound source) creates compressions and rarefactions in the air, which can be described as local variations in air pressure.

 

This is my understanding. The sound that I hear is usually due to the excitation of my eardrums and some tiny hairs by the air molecules, except when I am diving or cleaning my ears with cotton buds.
The air pressure variation does not directly impinge on the hair cells but they are affected by the transmission of the pressure via intervening ossicles and endolymphatic fluid.

 

Air pressure is not sound. You don't hear air pressure but the vibration of the molecules.
The vibration is measured as variations in air pressure.

Kal Rubinson

Senior Contributing Editor, Stereophile

 

Link to comment
Yes, in the vast majority of situations and, specifically, as it pertains to music. You can hear sounds via other media (like water) and even by direct application of pressure to the tympanic membrane or other body parts connected to it.

 

The vibration of an object (sound source) creates compressions and rarefactions in the air, which can be described as local variations in air pressure.

 

The air pressure variation does not directly impinge on the hair cells but they are affected by the transmission of the pressure via intervening ossicles and endolymphatic fluid.

 

The vibration is measured as variations in air pressure.

 

I always thought it was vibration that translates to the sound we hear.

 

https://www.sciencedaily.com/releases/2009/04/090423132955.htm

 

The relevant portion.

 

Ricci explained, "Location is important, because our entire theory of how sound activates these channels depends on it. Now we have to re-evaluate the model that we've been showing in textbooks for the last 30 years."

 

Deep inside the ear, specialized cells called "hair cells" sense vibrations in the air. The cells contain tiny clumps of hairlike projections, known as stereocilia, which are arranged in rows by height. Sound vibrations cause the stereocilia to bend slightly, and scientists think the movement opens small pores, called ion channels. As positively charged ions rush into the hair cell, mechanical vibrations are converted into an electrochemical signal that the brain interprets as sound.

 

 

 

 


Link to comment

You can hear samples of the 12 speakers and of stereo in my profile link. Can you hear the phase errors from the original 2L file?

 

A phase error alters the position of the virtual sound source.

 

I don't see how to objectively measure the reference (intended) position and the perceived position.

 

Here we could compare the acoustic wave captured close to the ear in a concert hall with the wave close to the ear during playback.


Link to comment
A phase error alters the position of the virtual sound source.

 

I don't see how to objectively measure the reference (intended) position and the perceived position.

 

Here we could compare the acoustic wave captured close to the ear in a concert hall with the wave close to the ear during playback.

 

Since you are implying that whatever error you are referring to must have a negative effect when using more than two speakers for stereo, I am asking whether those errors are audible and negatively impact your listening experience.

 

In theory, putting two more speakers at the rear for stereo playback should cause all kinds of errors. However, actual listening experience shows no negative effect of any kind.

 

 


Link to comment
I am asking whether those errors are audible and negatively impact your listening experience.

 

In theory, putting two more speakers at the rear for stereo playback should cause all kinds of errors. However, actual listening experience shows no negative effect of any kind.

 

Whether or not there is an error depends on the precision of the phase management of the distributed speaker system. The management system must take the two recorded channels and calculate how to deliver them 1:1 to the points of the ears: left channel to the left ear, right channel to the right ear, with the ears acoustically isolated from each other. How many speakers are needed, I don't know.

 

But that is only a first approximation.

 

A real sound hologram must be created around the listener's whole body, because sound impacts the whole body, not only the ears; especially the low frequencies.

 

Listening experience is a different aspect from fidelity (original = played-back copy, as perceived in the brain).

 

I will consider fidelity achieved if I listen to a live acoustic band in a hall, then immediately afterward listen to a playback of its recording, and hear no difference in the perception of the full spatial picture.

 

Objectively, this can be checked by comparison, as I wrote in the previous post.
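The proposed comparison (wave at the ear in the hall versus wave at the ear during playback) could be scored crudely as follows. The lag search and error metric here are my own illustrative choices, not an established fidelity measure:

```python
import numpy as np

def aligned_rms_error(ref_ear, played_ear, max_lag=64):
    """Slide the playback capture over the reference capture, pick the
    delay with the smallest mean-squared error, and report the residual
    relative to the reference's RMS level (0.0 = identical waveforms)."""
    errs = [np.mean((ref_ear[:len(ref_ear) - lag] - played_ear[lag:]) ** 2)
            for lag in range(max_lag)]
    best = int(np.argmin(errs))
    return best, float(np.sqrt(errs[best]) / np.sqrt(np.mean(ref_ear ** 2)))

# A playback that is just the reference delayed by 7 samples scores zero error.
ref = np.sin(np.linspace(0.0, 20.0, 500))
played = np.concatenate([np.zeros(7), ref])[:500]
lag, err = aligned_rms_error(ref, played)
```

A real comparison would need level matching and a perceptually weighted metric, but even this simple residual makes the "original = played-back copy" criterion measurable in principle.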


Link to comment
Whether or not there is an error depends on the precision of the phase management of the distributed speaker system. The management system must take the two recorded channels and calculate how to deliver them 1:1 to the points of the ears: left channel to the left ear, right channel to the right ear, with the ears acoustically isolated from each other. How many speakers are needed, I don't know.

 

But that is only a first approximation.

 

A real sound hologram must be created around the listener's whole body, because sound impacts the whole body, not only the ears; especially the low frequencies.

 

Listening experience is a different aspect from fidelity (original = played-back copy, as perceived in the brain).

 

I will consider fidelity achieved if I listen to a live acoustic band in a hall, then immediately afterward listen to a playback of its recording, and hear no difference in the perception of the full spatial picture.

 

Objectively, this can be checked by comparison, as I wrote in the previous post.

 

If you have another practical method to improve my current stereo system, please share it. I promise to read every word of it, experiment myself, and ask you more questions to improve the sound.

 

I promise to faithfully record my progress for evaluation and post it here. I would sincerely spend months, even years, if it can improve my current stereo system and does not cost more than $125. That is what I paid for Ambiophonics, which ranges from free of charge to $125 for the MiniDSP.

 

So which method are you proposing?

 

 


Link to comment
