
Soundstage Width cannot extend beyond speakers


STC

Recommended Posts

On 10/30/2018 at 9:39 AM, STC said:

 

Going back to the chart. 

 

[Attached chart: speaker and listener geometry with the violin at 120 degrees]

 

Previously, it was calculated that there was a 410 microsecond delay between the left and right ears for the violin signal at 120 degrees. This time difference has already been encoded in the recording.

 

This difference is represented by L3 and R3. Now, going back to the Haas experiment on the first wavefront, the second signal L3a will arrive at the right ear about 250 microseconds later, the delay corresponding to 60 degrees. Our brain will now localize the sound as coming from 60 degrees.

 

However, the original delay of 410 microseconds (R3) is yet to arrive at the right ear. R3 will arrive 160 microseconds (410 - 250) after L3a.

 

Now your ears hear three signals, separated by 250 and 160 microseconds. Then there will be another delayed signal, R3a, which arrives 250 µs after R3. The difference between L3 and R3a is 660 µs.

 

From the Haas experiment and your reference to the precedence effect, what will happen to the delayed signals R3 and R3a when they arrive at the ears? Will the image shift, will it superimpose on the original perceived image of the first delayed sound L3a, or will it be treated as early reflections?

 

You can use the same principle by moving the speakers closer to the side walls so that the image can shift outside the speaker boundary. But the speakers should be very close to the wall so that the delay does not exceed 0.6 to 1 ms.

 

Reading (and interpreting) the research, time delays of up to 1 ms do not cause the precedence effect, but instead the summation effect. This simply shifts the apparent sound source in the direction of the ear with the shortest delay. So the violin will shift to the left, if I understand this correctly.
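For reference, the arrival-time arithmetic in the quoted post can be checked with a few lines. This is only a sketch of the numbers already given above (410 µs recorded ITD, 250 µs inter-speaker delay at 60 degrees); the rough wall-distance bound at the end assumes the reflected path is about twice the speaker-to-wall distance longer than the direct path, which is my simplification, not a figure from the thread.

```python
# Arrival times at one ear, relative to L3 at t = 0, using the figures
# quoted above: 410 us recorded ITD, 250 us speaker-crosstalk delay.
C = 340.0  # speed of sound, m/s (rounded, as agreed later in the thread)

arrivals_us = {"L3": 0, "L3a": 250, "R3": 410, "R3a": 410 + 250}

events = sorted(arrivals_us.items(), key=lambda kv: kv[1])
for (name_a, t_a), (name_b, t_b) in zip(events, events[1:]):
    print(f"{name_a} -> {name_b}: {t_b - t_a} us")  # gaps: 250, 160, 250 us
print(f"L3 -> R3a in total: {arrivals_us['R3a']} us")  # 660 us, under 1 ms

# Wall-distance bound for the speaker-placement trick: if the reflection
# adds roughly 2*d of extra path (my simplification), a 1 ms limit gives:
d_max = 1e-3 * C / 2
print(f"speaker-to-wall distance below about {d_max:.2f} m")  # 0.17 m
```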

14 minutes ago, pkane2001 said:

 

Reading (and interpreting) the research, time delays of up to 1 ms do not cause the precedence effect, but instead the summation effect. This simply shifts the apparent sound source in the direction of the ear with the shortest delay. So the violin will shift to the left, if I understand this correctly.

 

By how many degrees, based on the speakers' location in the diagram? Can the shift reflect the location of a 410 µs interaural time difference? It can't.

4 minutes ago, STC said:

 

By how many degrees, based on the speakers' location in the diagram? Can the shift reflect the location of a 410 µs interaural time difference? It can't.

 

Why not? I'm sorry, but I'm missing something in your argument. If a 410 µs delay is added to whatever other ITD is produced by the speakers (which is the same from the left speaker and the right speaker), then the sound will shift by exactly the distance represented by a 410 µs delay.

36 minutes ago, pkane2001 said:

 

Why not? I'm sorry, but I'm missing something in your argument. If a 410 µs delay is added to whatever other ITD is produced by the speakers (which is the same from the left speaker and the right speaker), then the sound will shift by exactly the distance represented by a 410 µs delay.

 

Didn't you also ask me to read about the first-wavefront principle? It can be answered using that.

 

Based on the principle, the precedence effect takes place after 1 ms (sic). Whatever first "reflected" sound arrives at your ear within 1 ms will shift the image. In the diagram, the second sound that arrives at the ear is L3a. That will be the second loudest and the most similar to L3. That sound arrives after 250 µs. It will be localized by our brain, and that location will be fixed. Now, R3 and R3a will arrive much later. The time difference between any two successive signals, in the order L3a, R3, and R3a, never reaches the real value of 410 µs, the original time difference for the violin's location. The longest ITD the brain could receive is only 250 µs. Since there is no clear ITD value of 410 µs in the speakers' reproduction, could you help me understand how it is possible for the brain to localize the sound outside the speaker boundary?

 

 

8 minutes ago, STC said:

 

Didn't you also ask me to read about the first-wavefront principle? It can be answered using that.

 

Based on the principle, the precedence effect takes place after 1 ms (sic). Whatever first "reflected" sound arrives at your ear within 1 ms will shift the image. In the diagram, the second sound that arrives at the ear is L3a. That will be the second loudest and the most similar to L3. That sound arrives after 250 µs. It will be localized by our brain, and that location will be fixed. Now, R3 and R3a will arrive much later. The time difference between any two successive signals, in the order L3a, R3, and R3a, never reaches the real value of 410 µs, the original time difference for the violin's location. The longest ITD the brain could receive is only 250 µs. Since there is no clear ITD value of 410 µs in the speakers' reproduction, could you help me understand how it is possible for the brain to localize the sound outside the speaker boundary?

 

I don't recall asking you about the first wavefront. But again, under 1 ms there is no perception of a 'first', 'second', or 'third' sound. The sounds coming in with different time delays within 1 ms all add up to one perceived source (the summation effect).

 

6 minutes ago, pkane2001 said:

 

I don't recall asking you about the first wavefront. But again, under 1 ms there is no perception of a 'first', 'second', or 'third' sound. The sounds coming in with different time delays within 1 ms all add up to one perceived source (the summation effect).

 

 

OK. Under the summation effect, where do you think the violin will be located? It is easy to calculate, and let's keep it simple by agreeing to round the speed of sound to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?
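Taking up the challenge literally, under a deliberately crude model: the sketch below converts each delay into a path-length difference at 340 m/s and into an azimuth via the textbook approximation ITD ≈ (d/c)·sin θ. The 0.18 m ear spacing is an assumed round number, not a value from the thread, and the model ignores head shadowing.

```python
import math

C = 340.0           # m/s, speed of sound rounded as agreed above
EAR_SPACING = 0.18  # m, assumed ear-to-ear distance (not from the thread)

for itd_us in (250, 410, 660):
    itd = itd_us * 1e-6
    path_diff = C * itd            # extra path length to the far ear
    s = C * itd / EAR_SPACING      # sin(theta) in this simple model
    angle = math.degrees(math.asin(min(s, 1.0)))
    note = " (clipped: exceeds the model's maximum ITD)" if s > 1.0 else ""
    print(f"{itd_us} us -> {path_diff * 100:.1f} cm path difference, "
          f"~{angle:.0f} deg azimuth{note}")
```

Note that 660 µs is larger than the biggest ITD this assumed head spacing can produce (about 529 µs), which is why the last value clips at 90 degrees.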

3 minutes ago, STC said:

 

OK. Under the summation effect, where do you think the violin will be located? It is easy to calculate, and let's keep it simple by agreeing to round the speed of sound to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

 

I can’t. Can you? 

24 minutes ago, STC said:

 

OK. Under the summation effect, where do you think the violin will be located? It is easy to calculate, and let's keep it simple by agreeing to round the speed of sound to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

 

Why bother calculating -- that's way too complicated. Just put the song on and listen. You'll hear the violin easily.


1 hour ago, STC said:

 

OK. Under the summation effect, where do you think the violin will be located? It is easy to calculate, and let's keep it simple by agreeing to round the speed of sound to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

 

Per @jabbr's suggestion, here is a test track with a recorded bell and voice. Both are centered on the original track (delay of 0) but recorded at various heights. I've delayed the right channel by various amounts relative to the left.

 

And yes, I used phase in the frequency domain to make these adjustments, not time:

 

IPDTestFiles

 

I'll keep this up for only a short time, so please download and try it if you're interested. The set has delays of 0, 200, 400, 600, and 1000 microseconds in the right channel.
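The post doesn't include the code, but the standard way to apply such a delay in the frequency domain is a linear phase shift, multiplying the spectrum by exp(-j·2π·f·τ). A minimal numpy sketch, assuming a mono channel array and its sample rate (the function and variable names are mine, not from the thread):

```python
import numpy as np

def delay_channel(x, delay_s, fs):
    """Delay a 1-D signal by delay_s seconds via a linear phase shift.

    Multiplying the spectrum by exp(-j*2*pi*f*tau) is exactly a time
    delay of tau; for a whole track the circular wrap-around at the
    ends is negligible.
    """
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum *= np.exp(-2j * np.pi * freqs * delay_s)
    return np.fft.irfft(spectrum, n=len(x))

# Example: shift the right channel of a stereo buffer by 400 microseconds.
fs = 44100
stereo = np.random.randn(fs * 2, 2)  # stand-in for a loaded track
stereo[:, 1] = delay_channel(stereo[:, 1], 400e-6, fs)
```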

 

24 minutes ago, pkane2001 said:

And yes, I used phase in the frequency domain to make these adjustments, not time:

 

You labelled the files with time, not phase, even though you want to prove that IPD is ITD. Why, despite saying that they are both the same, are you still bringing in time to illustrate the point? Just stick to phase alone. What is the phase angle for each file?

 

This is about the human localization process, not about proving that time is phase, which is mathematically correct; any schoolkid should be able to confirm that. I am using ITD to illustrate the human localization process. If you want to use IPD to prove that it can be done, use my chart, give the angles, and discuss it entirely in the phase domain without bringing in time. I never used phase in my diagrams. Why is it so complicated to make a point which is so obvious?
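On the "what is the phase angle for each file?" question: for a pure time delay there is no single phase angle, because the interaural phase difference grows linearly with frequency, IPD(f) = 360°·f·τ. A small illustration (the test frequencies here are chosen arbitrarily):

```python
# For a fixed inter-channel delay tau, the phase angle depends on frequency:
#   phi(f) = 360 * f * tau   (degrees)
for tau_us in (200, 400, 600, 1000):
    for f_hz in (250, 1000, 4000):
        phi = 360.0 * f_hz * tau_us * 1e-6
        print(f"{tau_us:>4} us delay at {f_hz:>4} Hz -> {phi:8.1f} deg")
```

This is why a delay-based label describes every frequency in the file at once, while a phase-angle label would only be correct at one frequency.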


For more fun, slow the file down to 44.1 kHz and listen. I do mean slow down, not resample. In Audacity you can change the file's rate without changing the samples. Or, alternatively, slow the file down to 23% of its normal speed. Then listen again.

 

Also, with the original file, try listening centered, then over to the left in line with the left speaker, and ditto for the right.

4 minutes ago, esldude said:

For more fun, slow the file down to 44.1 kHz and listen. I do mean slow down, not resample. In Audacity you can change the file's rate without changing the samples. Or, alternatively, slow the file down to 23% of its normal speed. Then listen again.

 

Also, with the original file, try listening centered, then over to the left in line with the left speaker, and ditto for the right.

 

Amazing! Try listening at 0.16x speed. Nice echo.

2 minutes ago, STC said:

 

Amazing! Try listening at 0.16x speed. Nice echo.

At 23% you have lots of reverb. At 16% you are in the transition zone between huge reverb and echo.

6 hours ago, STC said:

 

You labelled the files with time, not phase, even though you want to prove that IPD is ITD. Why, despite saying that they are both the same, are you still bringing in time to illustrate the point? Just stick to phase alone. What is the phase angle for each file?

 

This is about the human localization process, not about proving that time is phase, which is mathematically correct; any schoolkid should be able to confirm that. I am using ITD to illustrate the human localization process. If you want to use IPD to prove that it can be done, use my chart, give the angles, and discuss it entirely in the phase domain without bringing in time. I never used phase in my diagrams. Why is it so complicated to make a point which is so obvious?

 

STC, this wasn't about proving anything; only you seem to want to prove something. It was a simple test. You are the one who keeps trying to make a distinction between time delay and phase. I asked you to stop doing this a long time ago, as it's a meaningless distinction, and I seriously don't want to continue arguing it. I've already stated all the arguments and really don't want to repeat them.

 

What is true, however, is that I used phase to delay the right track, as this was done in the frequency domain. 

 

8 hours ago, esldude said:

At 23% you have lots of reverb. At 16% you are in the transition zone between huge reverb and echo.

 

Makes sense. Above 1 ms is when the precedence effect is supposed to kick in. Around 5 ms, the two sounds should start to become distinct, turning first into reverb and then into echo.
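Those observations line up with the delay arithmetic: playback at a fraction s of normal speed stretches every inter-channel delay by 1/s. A quick check against the ~1 ms precedence and ~5 ms echo figures mentioned above:

```python
# Playback at a fraction of normal speed stretches all delays by 1/speed.
for delay_us in (400, 1000):
    for speed in (1.00, 0.23, 0.16):
        effective_ms = delay_us * 1e-3 / speed
        print(f"{delay_us:>4} us at {speed:.0%} speed -> {effective_ms:5.2f} ms")
# 1000 us at 23% -> 4.35 ms (deep into reverb territory);
# 1000 us at 16% -> 6.25 ms (past the ~5 ms transition toward distinct echo).
```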


Try listening to the track Anthem Without Nation by Nitin Sawhney (from the Beyond Skin album, available on Tidal). You may like the whole album.

This is one of my favourite test tracks; it has an amazing sound image and depth, with big lows and plenty of detail.
When I listen to it at high volumes I can hear all sorts of sounds well beyond my speakers in all directions.

Best.
Mev
 

 

