pkane2001 Posted November 1, 2018

On 10/30/2018 at 9:39 AM, STC said: Going back to the chart. Previously, it was calculated that there is a 410 microsecond delay between the left and right ears for the violin signal at 120 degrees. This time difference has already been encoded in the recording and is represented by L3 and R3. Now, going back to the Haas experiment on the first wavefront: the second signal, L3a, will arrive at the right ear about 250 microseconds later (the 60-degree path), so our brain will now localize the sound as coming from 60 degrees. However, the signal carrying the original 410 microsecond delay (R3) is yet to arrive at the right ear; R3 will arrive 160 microseconds (410 − 250) after L3a. Now your ears hear three signals delayed by 250 and 160 microseconds. Then there will be another delayed signal, R3a, that arrives 250 µs after R3; the difference between L3 and R3a is 660 µs. From the Haas experiment, and your reference to the precedence effect, what will happen to the delayed signals R3 and R3a when they arrive at the ears? Will the image shift, will it superimpose on the image already formed by the first delayed sound L3a, or will they be treated as early reflections? You can test the same principle by moving the speakers closer to the side wall so that the image shifts outside the speaker boundary, but the distance to the wall must be very small so that the delay does not exceed 0.6 to 1 ms.

Reading (and interpreting) the research: time delays of up to 1 ms do not trigger the precedence effect, but rather the summation effect. This simply shifts the apparent sound source toward the ear with the shortest delay. So the violin will shift to the left, if I understand this correctly.

-Paul

DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer
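The arrival-time bookkeeping in the quoted post is easy to mislay in prose. A minimal sketch of the same arithmetic, using only the delays quoted above (the 250 µs crosstalk path is the thread's assumption for speakers at roughly ±30 degrees):

```python
# Arrival times at the two ears for the four signals named in the post,
# in microseconds, relative to L3 (the left signal at the left ear).
# L3a/R3a are the crosstalk copies reaching the opposite ear.

RECORDED_ITD = 410   # us, encoded in the recording (violin at 120 deg)
CROSSTALK = 250      # us, speaker-to-far-ear path delay assumed in the thread

arrivals = {
    "L3":  0,                         # left signal, left ear
    "L3a": CROSSTALK,                 # left signal leaking to the right ear
    "R3":  RECORDED_ITD,              # right signal, right ear
    "R3a": RECORDED_ITD + CROSSTALK,  # right signal leaking to the left ear
}

print(arrivals["L3a"])                   # 250 us after L3
print(arrivals["R3"] - arrivals["L3a"])  # 160 us, the gap named in the post
print(arrivals["R3a"])                   # 660 us after L3
```

This reproduces the 250, 160, and 660 µs figures in the quote, and makes the point at issue explicit: no single pair of arrivals is separated by the original 410 µs except L3 and R3 themselves.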
STC (Author) Posted November 1, 2018

14 minutes ago, pkane2001 said: Reading (and interpreting) the research: time delays of up to 1 ms do not trigger the precedence effect, but rather the summation effect. This simply shifts the apparent sound source toward the ear with the shortest delay. So the violin will shift to the left, if I understand this correctly.

By how many degrees, based on the speaker locations in the diagram? Can the shift reproduce the location implied by a 410 microsecond interaural time difference? It can't.

ST

My Ambiophonics System with Virtual Concert Hall Ambience
pkane2001 Posted November 1, 2018

4 minutes ago, STC said: By how many degrees, based on the speaker locations in the diagram? Can the shift reproduce the location implied by a 410 microsecond interaural time difference? It can't.

Why not? I'm sorry, but I'm missing something in your argument. If a 410 µs delay is added on top of whatever other ITD the speakers produce (which is the same for the left speaker and the right speaker), then the sound will shift by exactly the distance represented by a 410 µs delay.

-Paul
STC (Author) Posted November 1, 2018

36 minutes ago, pkane2001 said: Why not? I'm sorry, but I'm missing something in your argument. If a 410 µs delay is added on top of whatever other ITD the speakers produce (which is the same for the left speaker and the right speaker), then the sound will shift by exactly the distance represented by a 410 µs delay.

Didn't you also ask me to read about the first-wavefront principle? This can be answered using it. According to that principle, the precedence effect takes place after 1 ms (sic). Any first "reflected" sound that arrives at your ear within 1 ms will shift the image. In the diagram, the second sound to arrive at the ear is L3a. It is the second loudest and nearly identical to L3, and it arrives 250 µs later. Our brain will localize on it, and that location will be fixed. Now, R3 and R3a arrive much later. The time difference between any TWO successive signals in the order L3a, R3, R3a never reaches the 410 µs of the original time difference for the violin's location; the longest ITD the brain could receive was only 250 µs. Since the speaker reproduction never presents a clean 410 µs ITD, could you help me understand how the brain can now localize the sound outside the speaker boundary?

ST
pkane2001 Posted November 1, 2018

8 minutes ago, STC said: Didn't you also ask me to read about the first-wavefront principle? This can be answered using it. According to that principle, the precedence effect takes place after 1 ms (sic). Any first "reflected" sound that arrives at your ear within 1 ms will shift the image. In the diagram, the second sound to arrive at the ear is L3a. It is the second loudest and nearly identical to L3, and it arrives 250 µs later. Our brain will localize on it, and that location will be fixed. Now, R3 and R3a arrive much later. The time difference between any TWO successive signals in the order L3a, R3, R3a never reaches the 410 µs of the original time difference for the violin's location; the longest ITD the brain could receive was only 250 µs. Since the speaker reproduction never presents a clean 410 µs ITD, could you help me understand how the brain can now localize the sound outside the speaker boundary?

I don't recall asking you about the first wavefront. But again, under 1 ms there is no perception of a 'first', 'second', or 'third' sound. Sounds arriving with different time delays within 1 ms all add up to one perceived source (the summation effect).

-Paul
STC (Author) Posted November 1, 2018

6 minutes ago, pkane2001 said: I don't recall asking you about the first wavefront. But again, under 1 ms there is no perception of a 'first', 'second', or 'third' sound. Sounds arriving with different time delays within 1 ms all add up to one perceived source (the summation effect).

Ok. Under the summation effect, where do you think the violin will be located? It is easy to calculate; let's keep it simple by agreeing that the speed of sound rounds to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

ST
pkane2001 Posted November 1, 2018

3 minutes ago, STC said: Ok. Under the summation effect, where do you think the violin will be located? It is easy to calculate; let's keep it simple by agreeing that the speed of sound rounds to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

I can't. Can you?

-Paul
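For what it's worth, one can at least sketch what a given ITD corresponds to in angle. This sketch uses Woodworth's spherical-head model, ITD = (a/c)(θ + sin θ), which is an assumption not invoked anywhere in the thread, with an assumed head radius of 8.75 cm and the 340 m/s agreed above:

```python
import math

# Invert Woodworth's formula ITD = (a/c) * (theta + sin(theta)) by bisection.
# a (head radius) is an assumed 0.0875 m; c = 340 m/s as agreed in the thread.

A = 0.0875   # m, assumed head radius
C = 340.0    # m/s, speed of sound

def itd_to_azimuth(itd_us):
    """Return the azimuth (degrees from straight ahead) for an ITD in us."""
    target = itd_us * 1e-6 * C / A        # equals theta + sin(theta)
    lo, hi = 0.0, math.pi / 2
    for _ in range(60):                    # bisection: theta + sin(theta) is monotone
        mid = (lo + hi) / 2
        if mid + math.sin(mid) < target:
            lo = mid
        else:
            hi = mid
    return math.degrees((lo + hi) / 2)

print(round(itd_to_azimuth(250), 1))   # ~28 deg -- close to the +/-30 deg speakers
print(round(itd_to_azimuth(410), 1))   # ~48 deg -- outside the speaker span
```

Under this model the 250 µs crosstalk delay lands almost exactly at the ±30 degree speaker positions, while 410 µs corresponds to an angle well outside them, which is the crux of the disagreement.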
jabbr Posted November 1, 2018

24 minutes ago, STC said: Ok. Under the summation effect, where do you think the violin will be located? It is easy to calculate; let's keep it simple by agreeing that the speed of sound rounds to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

Why bother calculating -- that's way too complicated. Just put the song on and listen. You'll hear the violin easily.

Custom room treatments for headphone users.
pkane2001 Posted November 1, 2018

1 hour ago, STC said: Ok. Under the summation effect, where do you think the violin will be located? It is easy to calculate; let's keep it simple by agreeing that the speed of sound rounds to 340 m/s. Can you calculate where the phantom image of the violin will be produced by the speakers?

Per @jabbr's suggestion, here is a test track with a recorded bell and voice. Both are centered on the original track (delay of 0) but recorded at various heights. I've delayed the right channel by various amounts relative to the left. And yes, I used phase in the frequency domain to make these adjustments, not time: IPDTestFiles

I'll keep this up for only a short time, so please download and try it if you're interested. The set has 0, 200, 400, 600, and 1000 microseconds of delay in the right channel.

-Paul
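"Phase in the frequency domain" here means multiplying the spectrum by a linear phase ramp, exp(−j·2πf·τ), which is mathematically identical to a time delay of τ (including fractional-sample delays). A minimal sketch of the idea; the actual tool pkane2001 used is not specified in the thread, and the signal here is a made-up test tone:

```python
import numpy as np

# Delay a channel by tau seconds via an FFT phase ramp.
# Note this is a circular delay; for delays of a few hundred us
# on a signal seconds long, the wrap-around is negligible.

def delay_channel(x, delay_s, fs):
    """Return x delayed by delay_s seconds (may be a fraction of a sample)."""
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)              # bin frequencies in Hz
    return np.fft.irfft(X * np.exp(-2j * np.pi * f * delay_s), n)

fs = 48000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t)                  # 1 s test tone
right = delay_channel(left, 400e-6, fs)             # 400 us ITD in the right channel
```

This makes the "phase vs. time" point concrete: the operation is written entirely as a per-frequency phase shift, yet its effect on the waveform is exactly a time delay.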
STC (Author) Posted November 1, 2018

24 minutes ago, pkane2001 said: And yes, I used phase in the frequency domain to make these adjustments, not time:

You labelled the files with time, not phase, even though you want to prove that IPD is ITD. Why, despite saying they are the same, do you still bring in time to illustrate the point? Just stick to phase alone: what is the phase angle for each file? This is about the human localization process, not about proving that time is phase, which is mathematically correct -- any schoolkid could confirm that. I am using ITD to illustrate the human localization process. If you want to use IPD to prove it can be done, use my chart, give the angles, and discuss it entirely in the phase domain without bringing in time. I never used phase in my diagrams. Why is it so complicated to make a point that is so obvious?

ST
esldude Posted November 1, 2018

For more fun, slow the file down to 44.1 kHz and listen. I do mean slow down, not resample: in Audacity you can change the file's rate without changing the samples. Or, alternatively, slow the file down to 23% of its normal speed. Then listen again. Also, with the original file, try listening centered, then over to the left in line with the left speaker, and ditto for the right.

And always keep in mind: Cognitive biases, like seeing optical illusions, are a sign of a normally functioning brain. We all have them; it's nothing to be ashamed about, but it is something that affects our objective evaluation of reality.
STC (Author) Posted November 1, 2018

4 minutes ago, esldude said: For more fun, slow the file down to 44.1 kHz and listen. I do mean slow down, not resample: in Audacity you can change the file's rate without changing the samples. Or, alternatively, slow the file down to 23% of its normal speed. Then listen again.

Amazing! Try listening at 0.16x speed. Nice echo.

ST
esldude Posted November 1, 2018

2 minutes ago, STC said: Amazing! Try listening at 0.16x speed. Nice echo.

At 23% you have lots of reverb. At 16% you are in the transition zone between huge reverb and echo.
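The Audacity trick described above -- keep the samples untouched and only relabel the declared sample rate -- can be sketched with Python's standard `wave` module. The filenames and the 192 kHz source rate are illustrative assumptions (relabelling a 192 kHz file as 44.1 kHz gives 44.1/192 ≈ 23% speed, matching the figure quoted above):

```python
import wave

# Copy a WAV file, changing only the declared sample rate.
# The audio data is untouched, so playback runs slower (or faster)
# by the ratio new_rate / original_rate, and every delay stretches
# by the inverse ratio -- which is why sub-ms ITDs become audible.

def relabel_rate(src, dst, new_rate):
    """Write dst as a copy of src with only the sample-rate field changed."""
    with wave.open(src, "rb") as win:
        params = win.getparams()
        frames = win.readframes(params.nframes)
    with wave.open(dst, "wb") as wout:
        wout.setparams(params._replace(framerate=new_rate))
        wout.writeframes(frames)

# e.g. (hypothetical filenames):
# relabel_rate("ipd_test_192k.wav", "ipd_test_slow.wav", 44100)
```

At 23% speed, a 400 µs interchannel delay is stretched to about 1.7 ms, well past the summation region, which is presumably why the character of the test files changes so dramatically.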
pkane2001 Posted November 1, 2018

6 hours ago, STC said: You labelled the files with time, not phase, even though you want to prove that IPD is ITD. Why, despite saying they are the same, do you still bring in time to illustrate the point? Just stick to phase alone: what is the phase angle for each file?

STC, this wasn't about proving anything; only you seem to want to prove something. It was a simple test. You are the one who keeps trying to draw a distinction between time delay and phase. I asked you to stop doing this long ago because it's a meaningless distinction, and I seriously don't want to keep arguing it. I've already stated all the arguments and really don't want to repeat them. What is true, however, is that I used phase to delay the right track, since this was done in the frequency domain.

-Paul
pkane2001 Posted November 1, 2018

8 hours ago, esldude said: At 23% you have lots of reverb. At 16% you are in the transition zone between huge reverb and echo.

Makes sense. Above 1 ms is when the precedence effect is supposed to kick in. Around 5 ms, the two sounds should start to become distinct, turning first into reverb and then into echo.

-Paul
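The perceptual regimes discussed across this thread can be summed up in a toy classifier. The 1 ms and ~5 ms boundaries are the ones quoted in the posts above; the ~50 ms echo threshold is a conventional textbook figure added here as an assumption, and all of these limits really vary with signal type and level:

```python
# Rough map from inter-arrival delay to the perceptual regime
# described in this thread. Thresholds are approximate.

def delay_regime(delay_ms):
    """Classify a lead/lag delay (milliseconds) into a perceptual regime."""
    if delay_ms < 1:
        return "summation: one fused image, shifted toward the earlier ear"
    if delay_ms < 5:
        return "precedence: fused image localized at the first arrival"
    if delay_ms < 50:
        return "reverb-like: second sound colors the first, not yet separate"
    return "echo: perceived as a distinct repeat"

print(delay_regime(0.41))   # the violin's 410 us ITD -> summation region
print(delay_regime(2.0))
print(delay_regime(80.0))
```

Note how the slowed-down playback experiments above walk a fixed recording through these regimes simply by stretching every delay.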
semente Posted November 11, 2018 Share Posted November 11, 2018 I can't remember if this has been posted here before: Linkwitz-Hearing spatial detail.pdf "Science draws the wave, poetry fills it with water" Teixeira de Pascoaes HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256) Link to comment
mevdinc Posted November 11, 2018

Try listening to the track Anthem Without Nation by Nitin Sawhney (from the Beyond Skin album, available on Tidal). You may like the whole album. This is one of my favourite test tracks: it has an amazing sound image and depth, with big lows and plenty of detail. When I listen to it at high volume I can hear all sorts of sounds well beyond my speakers in all directions. Best. Mev

mevdinc.com (My autobiography) Recently sold my ATC EL 150 Actives!
STC (Author) Posted November 11, 2018

There are many recordings that can do that. This thread is about true stereo recordings that work mostly on level differences alone, irrespective of frequency, based on the stereophonic principle.

ST