
True to life recording? - We are fooling ourselves!


STC


40 minutes ago, STC said:

 

Who said 250 ms?

Ah-ha, microseconds... I misread... yes, that delay makes sense.

 

46 minutes ago, STC said:

In the case of an identical signal from each speaker but with one inverted, you do not hear silence, because the ears receive two different sounds. However, if you listen to one speaker at a time, both will sound exactly the same, unless you are one of the rare people who can hear absolute phase. 

 

When the wavefronts reach the ears at the same time, what was heard as two sounds does not merge into one image, because the ears still hear two separate sounds, each providing its own ITD and ILD. However, since the sound from one speaker reaches the opposite ear with a delay of about 250 microseconds, the sounds that hit an ear at the same moment from the two speakers cause confusion, because their phases differ. The brain becomes confused and loses the ability to locate them. What you hear is sound coming from every direction. 
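The ~250 µs delay and the ITD cues discussed here can be sketched numerically. A minimal example, assuming the simple spherical-head (Woodworth) approximation with an illustrative 8.75 cm head radius; neither the model nor the dimensions come from this thread:

```python
import math

# Illustrative constants (assumptions, not values from the thread).
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C
HEAD_RADIUS = 0.0875     # m, roughly half a 17.5 cm ear-to-ear spacing

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta)), with theta in radians."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 30 degrees off-centre arrives at the far ear roughly
# 260 microseconds late -- close to the ~250 us delay discussed above.
print(round(itd_seconds(30) * 1e6), "us")
```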

You know, I think we are saying more or less the same thing but misinterpreting each other's writing. It's rather a complicated subject to write clearly about. Let's agree to, errr, agree 😁 

10 minutes ago, Blackmorec said:

You know, I think we are saying more or less the same thing but misinterpreting each other's writing. It's rather a complicated subject to write clearly about. Let's agree to, errr, agree 😁 

 

I am afraid we are not. You are equating how human ears work with stereo production. That is a myth perpetuated by audiophiles and also by some audio manufacturers. Stereo is nothing more than two sounds and has no relation to our hearing. If it were true that stereo contains all the phase, amplitude and timing information, and that it can be accurately reproduced at the ears, then you would hear natural 3D sound, like a recording made with binaural microphones worn in your own ears. Even that would not be fully accurate, because you would have to place the microphones exactly where your eardrums are. 

 

 

45 minutes ago, STC said:

 

I am afraid we are not. You are equating how human ears work with stereo production. That is a myth perpetuated by audiophiles and also by some audio manufacturers. Stereo is nothing more than two sounds and has no relation to our hearing. If it were true that stereo contains all the phase, amplitude and timing information, and that it can be accurately reproduced at the ears, then you would hear natural 3D sound, like a recording made with binaural microphones worn in your own ears. Even that would not be fully accurate, because you would have to place the microphones exactly where your eardrums are. 

Actually STC, you're correct. We don't share the same view at all. It's just that I'm getting bored with these nonsensical arguments and thought I'd be pleasant about it. I'm not equating how human ears work with stereo production. I'm stating how stereo reproduction fools the ears into hearing something that isn't there. Stereo reproduction is designed to fool the ears, not to be like them. Stereo done right works by producing two signals that the ears treat like they treat every other signal in nature. It's no more complicated, nor simpler, than that. The difference is that in nature every signal originates from a single point source, whereas in stereo the signal originates from two closely matched sources. Which brings me right back to where I started, and where I now intend to stop. 

16 minutes ago, Blackmorec said:

The difference is that in nature every signal originates from a single point source, whereas in stereo the signal originates from two closely matched sources

 

There you go again. A sound from a speaker is a sound like any other you hear, whether from a bird's tweet, a drum, a car, a horn, a siren, a guitar or anything else. Even an electric guitar played live is the same kind of sound as one coming out of a speaker. The ears' function is to detect sound, any sound, and the HRTF works the same way whenever any sound reaches the ears. 

 

Whenever you keep saying "in nature" you give the impression that the sound from the speakers is not natural. It is. There is no distinction in the sound we perceive, irrespective of where it originates. It is absurd to think that just because you hear a single sound from stereo speakers, stereo somehow manages to defy human hearing. A simple acoustic recording of stereo playback with a 17 cm spaced AB microphone pair would provide the evidence. I have provided mine. 

 

 

40 minutes ago, STC said:

 

There you go again. A sound from a speaker is a sound like any other you hear, whether from a bird's tweet, a drum, a car, a horn, a siren, a guitar or anything else. Even an electric guitar played live is the same kind of sound as one coming out of a speaker. The ears' function is to detect sound, any sound, and the HRTF works the same way whenever any sound reaches the ears. 

 

Whenever you keep saying "in nature" you give the impression that the sound from the speakers is not natural. It is. There is no distinction in the sound we perceive, irrespective of where it originates. It is absurd to think that just because you hear a single sound from stereo speakers, stereo somehow manages to defy human hearing. A simple acoustic recording of stereo playback with a 17 cm spaced AB microphone pair would provide the evidence. I have provided mine. 

 

 

I'll have one more go at this... I can't resist. 

 

A sound from a speaker is like any other sound you hear. True...absolutely true

 

A SPEAKER....singular 

 

But in stereo the sound is split and THE SAME sound is produced by a second speaker, which NEVER happens in nature. Now I have two simultaneous signals reaching my ears, where the amplitude between the two can be manipulated. 

 

If all I had was a single loudspeaker, it would send a signal to both ears; the ears would detect the difference between the two amplitudes caused by the diameter of the head and the different distances travelled, and would locate ALL the sounds as coming from the same place where that loudspeaker stands. 

 

In stereo, with two loudspeakers, we set out to fool the brain. Instead of a single-source signal reaching both ears L & R, with a difference in amplitude that corresponds to the geometry between loudspeaker and head, we instead send one signal to the left ear and another, separate signal to the right. Our ears hear these two signals and find that they match each other perfectly, apart from some subtle changes in amplitude and phase. Because WE have generated those two signals, we can manipulate the relative amplitude of all the signals' elements, such that the brain assigns different locations to each element: the so-called soundstage. 
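The relative-amplitude manipulation described here is conventionally done with a pan law. A minimal sketch, assuming the common constant-power (-3 dB) law; the post itself doesn't name a specific law:

```python
import math

def pan_gains(position: float) -> tuple[float, float]:
    """Constant-power panning.
    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain); left^2 + right^2 is always 1,
    so the total acoustic power stays constant as the image moves."""
    angle = (position + 1.0) * math.pi / 4.0   # maps position to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)
# A centred image: both speakers at ~0.707 (-3 dB), and the brain
# fuses the two matched signals into one phantom source in the middle.
```

Shifting `position` toward ±1 tilts the level balance, and the phantom image follows, which is exactly the amplitude manipulation attributed to the producer above.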

 

Now whether you think this works or not is probably down to your system and your experience, but in my system the soundstage can be huge, highly focussed and highly specific, so I would report that in a well-sorted system, which mine is, the stereo illusion works pretty much perfectly and you hear not a single trace of the individual loudspeakers. 

 

 

 

 

 

3 minutes ago, Blackmorec said:

In stereo, with two loudspeakers, we set out to fool the brain. Instead of a single-source signal reaching both ears L & R, with a difference in amplitude that corresponds to the geometry between loudspeaker and head, we instead send one signal to the left ear and another, separate signal to the right. Our ears hear these two signals and find that they match each other perfectly, apart from some subtle changes in amplitude and phase. Because WE have generated those two signals, we can manipulate the relative amplitude of all the signals' elements, such that the brain assigns different locations to each element: the so-called soundstage. 

 

Ah finally we are on the same wavelength. But two questions. 

 

1) What do you mean by saying we send one signal to the left ear and another, separate signal to the right ear?

 

2) How does the sound emerging from one speaker impose self-discipline on itself so that it doesn't travel beyond the intended ear? Short of headphones, that's not possible. The ears will always perceive two sets of signals for one phantom image. 

1 hour ago, STC said:

 

Ah finally we are on the same wavelength. But two questions. 

 

1) What do you mean by saying we send one signal to the left ear and another, separate signal to the right ear?

 

2) How does the sound emerging from one speaker impose self-discipline on itself so that it doesn't travel beyond the intended ear? Short of headphones, that's not possible. The ears will always perceive two sets of signals for one phantom image. 

Oh, that’s good!

 

What I mean by sending one signal to the left ear and one to the right is that we have two loudspeakers, one carrying the left signal intended for the left ear and the other carrying the right signal intended for the right ear. 

Now obviously the signal intended for the left ear is also going to reach the right ear, albeit a little later, and here’s where psychoacoustics, the brain, helps out.

 

The ears will always receive (not perceive!!!) two sets of signals for one phantom image. But the psycho-acoustic phenomenon known as the precedence effect, or the law of the first wavefront, works as follows:

When a sound is followed by another sound separated by a sufficiently short delay, listeners perceive a single auditory event whose spatial location is dominated by the first-arriving sound. And that's why, when you record the signal reaching the ears (the signal you receive), it sounds different on replay from the signal you actually perceive.

 

So, in summary, there's a difference between what you receive and what you perceive, thanks to psycho-acoustics. When you record what you receive and replay it, there are no psycho-acoustics, so for a recording, what you receive is also what you perceive; whereas when you hear the signal live, what you receive is NOT what you perceive. 
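The receive-vs-perceive distinction can be caricatured as a rule of thumb. The band edges below (1 ms and 40 ms) are round figures of the kind quoted in this thread, not precise psychoacoustic limits:

```python
def perceived(delay_ms: float) -> str:
    """Toy classifier for what a listener reports when the same sound
    arrives twice, separated by delay_ms (lead/lag)."""
    if delay_ms < 1.0:
        # Below ~1 ms the two arrivals fuse, and the image merely
        # shifts toward the earlier (or louder) one.
        return "summing localisation"
    if delay_ms <= 40.0:
        # Precedence effect / law of the first wavefront: one event,
        # located at the first-arriving sound.
        return "precedence"
    # Beyond ~40 ms the lagging copy is heard as a discrete echo.
    return "echo"

print(perceived(0.25))   # the ~250 us inter-speaker case
print(perceived(15.0))
print(perceived(80.0))
```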


I should also add that this psycho-acoustic ability to identify and favour the first-arriving wavefront is quite delicate, and any impactful set-up shortcoming will send you back to hearing two speakers. Something as seemingly trivial as a large piece of furniture in close proximity to a loudspeaker can cause enough diffraction to unbalance the two signals, in which case the brain treats them separately. Bad contacts can similarly cause problems; in fact anything that could cause a channel imbalance of some kind, so there are plenty of candidates, which is what makes it so sensitive. 

And Frank is quite correct when he says that it's like a switch. Somewhere in those synapses the signals are routed one way or the other (either/or), so a switch is an appropriate image. And that's why you always hear only one or the other, not both. They don't run in parallel; they run singly and selectively according to their defined route through the brain. The switch must be autonomous and is probably conditional on whether the two ear signals match well enough to be combined. When they do, you hear the combined signal with all the spatial information, whereas when they don't, you hear your two loudspeakers as sources. 

 

One interesting little trinket was the idea that out-of-phase signals confuse the brain significantly enough to make it present this overall amorphous, everywhere-yet-nowhere, undifferentiated soundstage. 

7 hours ago, Blackmorec said:

The ears will always receive (not perceive!!!) two sets of signals for one phantom image. But the psycho-acoustic phenomenon known as the precedence effect, or the law of the first wavefront, works as follows:

 

Yes. I use the precedence effect all the time, from 50 microseconds to 100 ms. Ambiophonics is all about understanding the precedence effect; a clear understanding of it is required for effective crosstalk cancellation and for the exact values needed to create the virtual concert-hall reverberation. 

 

But it is a fallacy to think you don't perceive the second sound. Sample (5) clearly shows that the image no longer shifts in stereo playback, because our ears receive two sets of ITD cues, unlike when listening with headphones. Either you can explain (5) or you cannot. 

 

 


The image does shift because of the time delay between the two speakers - this is precisely what I hear when true mono is replayed over a competent stereo setup, as I've described many times: if I stand at the centre of the speakers and then move to the left, the phantom images 'follow' me - that is, they still appear to be directly in front of me, rather than along the centre line between the speakers. If the rig is not at its optimum and I keep moving laterally, at some point the mind no longer sustains that interpretation - and the sound then dives into the nearer speaker.

 

The remarkable thing is that at its peak this phantom image following can't be shaken, no matter how you move your head or body - the delay of the signals is always translated as an illusion that the sound is directly in front.

10 hours ago, STC said:

 

Yes. I use the precedence effect all the time, from 50 microseconds to 100 ms. Ambiophonics is all about understanding the precedence effect; a clear understanding of it is required for effective crosstalk cancellation and for the exact values needed to create the virtual concert-hall reverberation. 

 

But it is a fallacy to think you don't perceive the second sound. Sample (5) clearly shows that the image no longer shifts in stereo playback, because our ears receive two sets of ITD cues, unlike when listening with headphones. Either you can explain (5) or you cannot. 

 

 

Hey STC, this discussion could still finish up with us both agreeing.  

 

Regarding example 5, here’s what I’m GUESSING is going on, based on physics and logic. 

 

With headphones the actual sound sources are clamped to each ear, so the distance from sound source to eardrum is a couple of centimetres, and that distance is fixed, identical L & R, with no external reflections or diffraction.

 

So:

sound travels at approximately 343 metres per second

250 microseconds (µs) is 0.000250 seconds

in 250 µs, sound therefore travels 34,300 cm/s × 0.00025 s ≈ 8.6 cm, approximately the width of the back of your hand 
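As a sanity check on the arithmetic, a minimal sketch (assuming ~343 m/s, the usual figure for sound in air at room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed value)

def distance_cm(delay_us: float) -> float:
    """Distance sound travels during delay_us microseconds, in cm."""
    return SPEED_OF_SOUND * 100.0 * delay_us * 1e-6

# In 250 us sound covers about 8.6 cm -- roughly the width of the
# back of a hand, as described above.
print(f"{distance_cm(250):.1f} cm")
```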

 

Loudspeakers are anything from 200 to 400 cm away from each ear, and the head moves quite freely between them. 

 

With headphones, the delay expressed as a distance more or less equals the distance between sound source and eardrum, so essentially, with the delay, when the sound wave hits the left eardrum the right signal is only just being generated at the headphone membrane. The ear/brain therefore has no difficulty sensing the difference between the two channels and assigning precedence to the first-arriving signal. The ratio between delay and distance is roughly 1:1. 

 

With speakers at, say, 400 cm distance, the ratio between delay and distance is 8.6/400, roughly 1:50, and head movement, i.e. error, can be larger than the delay. Further, any minute differences in speaker position vs the ears will add to the error. As soon as the error gets even close to the delay, the effect disappears (I noted elsewhere how sensitive the precedence effect is). 

 

So in essence, headphones provide a highly controlled, stable environment capable of resolving a 250 µs delay, whereas with speakers the far greater latitude of movement, and therefore experimental error, means that 250 µs is to all intents and purposes undetectable. 

 

Is that such a problem? Not really, because the precedence effect is only considered to work between 1 and 40 ms; even the lower bound is four times longer than 250 µs. It probably doesn't work below 1 ms for exactly the above reason. 

 

So, to answer your question: due to their fixed nature and the much smaller distances between sound-pressure source and eardrum, headphones can resolve far smaller (shorter) time delays between the L & R channels than loudspeakers can. 


I should summarise the above by asking whether this difference in resolution between headphones and loudspeakers (roughly 1:1 vs 1:50) is important. Experimentally, sure; however, in the real world the delays we are interested in, i.e. the ones that bestow location on the musicians, all fall well within AN ACCURATE and WELL SET UP loudspeaker system's ability to resolve, and are reinforced by amplitude differences. The smaller delays, i.e. those caused by the head when the L channel reaches the R ear, we in any case want to ignore in terms of assigning location. 

 

 

 

1 hour ago, Blackmorec said:

Hey STC, this discussion could still finish up with us both agreeing.  

 

Regarding example 5, here’s what I’m GUESSING is going on, based on physics and logic. 

 

With headphones the actual sound sources are clamped to each ear, so the distance from sound source to eardrum is a couple of centimetres, and that distance is fixed, identical L & R, with no external reflections or diffraction.

 

So:

sound travels at approximately 343 metres per second

250 microseconds (µs) is 0.000250 seconds

in 250 µs, sound therefore travels 34,300 cm/s × 0.00025 s ≈ 8.6 cm, approximately the width of the back of your hand 

 

Loudspeakers are anything from 200 to 400 cm away from each ear, and the head moves quite freely between them. 

 

With headphones, the delay expressed as a distance more or less equals the distance between sound source and eardrum, so essentially, with the delay, when the sound wave hits the left eardrum the right signal is only just being generated at the headphone membrane. The ear/brain therefore has no difficulty sensing the difference between the two channels and assigning precedence to the first-arriving signal. The ratio between delay and distance is roughly 1:1. 

 

With speakers at, say, 400 cm distance, the ratio between delay and distance is 8.6/400, roughly 1:50, and head movement, i.e. error, can be larger than the delay. Further, any minute differences in speaker position vs the ears will add to the error. As soon as the error gets even close to the delay, the effect disappears (I noted elsewhere how sensitive the precedence effect is). 

 

So in essence, headphones provide a highly controlled, stable environment capable of resolving a 250 µs delay, whereas with speakers the far greater latitude of movement, and therefore experimental error, means that 250 µs is to all intents and purposes undetectable. 

 

Is that such a problem? Not really, because the precedence effect is only considered to work between 1 and 40 ms; even the lower bound is four times longer than 250 µs. It probably doesn't work below 1 ms for exactly the above reason. 

 

So, to answer your question: due to their fixed nature and the much smaller distances between sound-pressure source and eardrum, headphones can resolve far smaller (shorter) time delays between the L & R channels than loudspeakers can. 

 

As you said, guessing, and that is pretty much incorrect. This is what crosstalk is all about. In theory, the sound from the left speaker is intended only for the left ear, and likewise the sound from the right speaker is meant for the right ear. 

 

If you place a divider between the speakers right up to your head (so that you sufficiently attenuate the opposite speaker's sound from reaching each ear), you will hear a resolution of even 40 microseconds. I have no problem reproducing the sound of tearing paper 30 degrees to the left in my loudspeaker playback.
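The 40 µs resolution and the 30-degrees-left image can be related through the simple sine-law ITD model. A sketch assuming a 17 cm ear spacing (the same figure as the AB pair mentioned earlier); illustrative only, not a full HRTF:

```python
import math

# Assumed constants, not values taken from the thread.
SPEED_OF_SOUND = 343.0  # m/s
EAR_SPACING = 0.17      # m, assumed ear-to-ear distance

def source_angle_deg(itd_us: float) -> float:
    """Invert the sine-law model ITD = (d / c) * sin(theta) to get the
    source azimuth (degrees off-centre) implied by an ITD in microseconds."""
    return math.degrees(math.asin(itd_us * 1e-6 * SPEED_OF_SOUND / EAR_SPACING))

print(round(source_angle_deg(40)))    # ~5 degrees: a small but audible shift
print(round(source_angle_deg(250)))   # ~30 degrees, as in the example above
```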

 

 

18 minutes ago, Blackmorec said:

I should summarise the above by asking whether this difference in resolution between headphones and loudspeakers (roughly 1:1 vs 1:50) is important. Experimentally, sure; however, in the real world the delays we are interested in, i.e. the ones that bestow location on the musicians, all fall well within AN ACCURATE and WELL SET UP loudspeaker system's ability to resolve, and are reinforced by amplitude differences. The smaller delays, i.e. those caused by the head when the L channel reaches the R ear, we in any case want to ignore in terms of assigning location. 

 

 

 

 

Now you are slowly agreeing that accurate phase is no longer relevant. The point is that even "ACCURATE and WELL SET UP" stereo loudspeakers cannot produce the accurate position, due to crosstalk. I have another track where the male and female voices appear to be coming from the centre, but with headphones the male is on the left and the female on the right. No stereo system can reproduce that accurately. There goes the reality and accuracy.

12 minutes ago, Blackmorec said:

so no psycho-acoustics were applied

 

I am speechless. Do you understand what psychoacoustics is? 

4 hours ago, STC said:

 

Now you are slowly agreeing that accurate phase is no longer relevant. The point is that even "ACCURATE and WELL SET UP" stereo loudspeakers cannot produce the accurate position, due to crosstalk. I have another track where the male and female voices appear to be coming from the centre, but with headphones the male is on the left and the female on the right. No stereo system can reproduce that accurately. There goes the reality and accuracy.

 

It looks like you are not interested in finding out how this is even possible. Either the loudspeaker positional information is correct and the headphone playback wrong, or vice versa. 

13 hours ago, STC said:

 

Now you are slowly agreeing that accurate phase is no longer relevant. The point is that even "ACCURATE and WELL SET UP" stereo loudspeakers cannot produce the accurate position, due to crosstalk. I have another track where the male and female voices appear to be coming from the centre, but with headphones the male is on the left and the female on the right. No stereo system can reproduce that accurately. There goes the reality and accuracy.

 

ST, could you possibly post a snippet of that track, please - I would be interested in what it shows.

10 hours ago, Blackmorec said:

I still don't agree with you. I have documented what I hear, and what the physics and what we know about psychoacoustics indicate will happen, and they pretty much agree. What I hear from my system in terms of music is 'no speakers as sources' and 'all musicians occupying their individual points in space', making for outstanding clarity and resolution, while rhythms and the interplay of rhythms are incredibly involving and alluring. Are they correct? According to what? They sound great, but I have no idea what the sound engineer put on the master tape, or whether what I hear is what he heard or intended. And why should I care, as long as what I'm hearing sounds natural, musical, highly involving, joy-giving and, as far as my tastes are concerned, error-free?

 

The good news is that there is enough 'data' on all recordings for this to happen - the "correctness" is irrelevant, because the mind is doing all the necessary compensation to allow this level of  subjective involvement ... if one is listening to live playing of music, there is no "correct" place or way to listen to it, because everywhere is "correct".

 

Your hearing system goes way beyond the capabilities of the sound engineer - he's a rank amateur in making it "work for you!" ... it's pretty obvious at times how contrived the effects are, but it doesn't matter, because your ear/brain is just enjoying the energy, the vitality of the sound of the musical elements.

