
Soundstage Width cannot extend beyond speakers



7 minutes ago, Jud said:

 

A sound source moving between mics will show an increasing time/phase delay at one mic, with a decreasing time/phase delay at the other. Then as it moves outside the frame of the mics, it will exhibit increased time/phase delay with both mics.

 

 

 

What is the effect of hearing a different phase from a single speaker? What would 180 degree and -180 degree phase sound like?

36 minutes ago, Shadorne said:

 

Not sure everyone can hear absolute phase; however, the room or a thin-wall speaker may respond differently to low frequency initial excitation depending on the non-linearity of the walls - at high enough levels, of course - and assuming the walls have more give in one direction than the other...

 

If something doesn't work for everyone, then it cannot be the basis for how stereo works.

 

The only thin-wall speaker design I am familiar with is Harbeth, and I am not sure what it has to do with phase. Anyway, this is irrelevant to the Amused to Death recording you referred to in support of your argument.

 

 

33 minutes ago, Shadorne said:

 

Yes - if you read many of my posts - I regard phase as an extremely important aspect of accurate audio reproduction.

 

Why? Can you hear the difference in the phase of a single frequency? Play a 2000Hz sine wave from a single monitor at a distance of 3.42 meters at 80 dB. Then move the speaker to 3.59 meters and play it again. Adjust the volume so the two levels match. Now do a blind test and see if you can identify which is which. The phase reaching your ear at the two different spots will correspond to the full wavelength and half the wavelength of that frequency.

 

Edit: I wrongly stated +180 earlier. It was supposed to be the peaks on the two sides, separated by 180 degrees.
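
For anyone who wants to check the numbers in this test, a minimal sketch (assuming a 343 m/s speed of sound, which varies slightly with temperature) converting a change in speaker distance into a phase shift:

```python
# Phase shift at the listening position caused by moving a speaker,
# assuming a 343 m/s speed of sound.
C = 343.0  # m/s

def phase_shift_deg(freq_hz, dist1_m, dist2_m):
    wavelength = C / freq_hz
    extra_path = abs(dist2_m - dist1_m)
    return (extra_path / wavelength) * 360.0 % 360.0

# The test above: 2000 Hz, speaker moved from 3.42 m to 3.59 m.
# Note that 0.17 m is close to one full wavelength at 2000 Hz
# (about 0.1715 m), so the shift comes out near 360 degrees.
print(phase_shift_deg(2000, 3.42, 3.59))  # ~356.9 degrees
```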

On 10/20/2018 at 9:15 PM, Blackmorec said:

Would be interesting to re-read some of this stuff, as confusion tends to creep in over time when you're not using what you learned many years ago. I know that the precedence effect is very active in suppressing short-delay reflections, but on reflection I'm not exactly sure how it would deal with one half of a stereo signal reaching both ears... I'm a little foggy on the facts.

 

From Toole’s book.

 

1) Delays from zero to about 0.6-1ms alter the image. This is the basis for stereo to work.

 

2) Delays from 1 to 40ms add loudness, liveliness, and body. They are also said to broaden the primary sound source.

 

————————

 

Usually, the side wall will be more than 1 foot away, which corresponds to approximately 1ms or more of delay. The broadening effect is explained by 2).
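
As a rough check on that figure, a minimal sketch (assuming a 343 m/s speed of sound) converting a reflection's extra path length into delay:

```python
# Convert a reflection's extra path length into delay,
# assuming a 343 m/s speed of sound.
C = 343.0  # m/s

def delay_ms(extra_path_m):
    return extra_path_m / C * 1000.0

# Each foot (0.3048 m) of extra path adds roughly 0.9 ms, so a
# reflection travelling a foot or more further than the direct
# sound lands near the bottom of the 1-40 ms range in 2) above.
print(delay_ms(0.3048))  # ~0.89 ms
```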

24 minutes ago, pkane2001 said:

 

Are we deliberately ignoring relative phase in these discussions? After all, hearing localization depends, in a large part, on the relative phase of the two channels rather than absolute. 

 

I'd still like to hear any substantiation of why the sound couldn't possibly extend beyond the two speakers, as it's been stated here that this is a 'fact'.

 

I have added a link showing what pro audio does to extend sound beyond the monitors. I have tried to explain that a reference to phase is actually a reference to level/timing. Two equal levels of, say, 70 dB will sum to about 73 dB when the signals are uncorrelated (76 dB when coherent and in phase), but two coherent signals in opposite phase cancel completely. This is perfectly understood.
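
As a sanity check on that arithmetic, here is a minimal numpy sketch (the 1 kHz tone and 48 kHz sample rate are arbitrary choices) showing the coherent in-phase sum and the opposite-phase cancellation:

```python
import numpy as np

FS = 48000                           # sample rate, Hz (arbitrary)
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz sine, one second

def gain_db(x, ref):
    """RMS level of x relative to ref, in dB (epsilon avoids log(0))."""
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return 20 * np.log10((rms(x) + 1e-15) / rms(ref))

print(gain_db(tone + tone, tone))  # +6.0 dB: coherent, in-phase sum
print(gain_db(tone - tone, tone))  # huge negative dB: opposite phase cancels
```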

 

Now consider inverting the polarity of one speaker, so that the two speakers are in positive and negative phase. In this case, even though they are in opposite phase, you will hear sound as if it is coming from everywhere, and sound that is supposed to be spread out will sound more focused. Now move one of your speakers about 3 inches forward. What sounded dead center will now be shifted toward the speaker closer to you. The phase is definitely not accurate anymore, per your definition, but if you now use the balance control and increase the other speaker's level, the sound will be dead center again. So what happened to the phase accuracy here?
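
The 3 inch experiment can also be put into numbers. A minimal sketch, assuming a 343 m/s speed of sound; the time-intensity trading ratio is purely illustrative, since published values vary widely with frequency and signal type:

```python
C = 343.0      # m/s, assumed speed of sound
INCH = 0.0254  # m

# Interaural delay introduced by moving one speaker 3 inches closer
delay_us = 3 * INCH / C * 1e6
print(delay_us)  # ~222 microseconds

# Hypothetical time-intensity trading ratio, for illustration only;
# published estimates vary widely with frequency and signal type.
TRADE_US_PER_DB = 50.0
print(delay_us / TRADE_US_PER_DB)  # ~4.4 dB of balance offset to recenter
```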

 

 

12 minutes ago, pkane2001 said:

 

It is the relative phase differences between the recorded sounds in the two channels that aid in sound localization. So what if the whole scene is shifted by a few inches to the left or to the right? 

 

Phase accuracy is a different discussion (and it's not hard to calculate its effect, BTW). But, while shifting one speaker by 3 inches changes the overall phase relationship between two channels, it doesn't alter the relative phase shifts of individual sounds recorded in the two channels. 

 

So, again, what prevents the recorded sound from appearing to come from outside the speakers?


 

 

The answer you are looking for will not make sense to you because of your paragraph 1.

 

When you say it is relative phase, you must also state: relative to what? It is relative to the stereo microphone pair. Let's take ORTF as an example. The capsules are separated by 17cm, which is about the same as the average human inter-ear distance. Some sources give 21.5cm, but 17cm seems closer. The microphones are angled at 110 degrees.

 

So the accurate phase reaches the two microphones and is converted into the recorded sound. According to you, the phase has been accurately preserved.

 

Now, tell me: at what distance/separation should the speakers be placed so that the same accurate phase information reaches you?
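
In the far field, the arrival-time difference at a spaced pair is approximately spacing x sin(angle) / c. A minimal sketch, assuming 343 m/s and treating the 17 cm ORTF pair as a simple spaced pair (the 110 degree capsule angling mainly affects level differences, which this ignores):

```python
import math

C = 343.0  # m/s, assumed speed of sound

def pair_delay_us(spacing_m, source_angle_deg):
    """Far-field arrival-time difference between two spaced capsules."""
    return spacing_m * math.sin(math.radians(source_angle_deg)) / C * 1e6

print(pair_delay_us(0.17, 30))  # ~248 us: source 30 degrees off the median
print(pair_delay_us(0.17, 90))  # ~496 us: hard left, the most the pair encodes
```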

4 minutes ago, pkane2001 said:

 

Sorry, but how did this morph into an 'accuracy of phase reproduction' discussion? I thought this was about whether sounds can appear to come from outside the speakers?

 

 

But you already said that it's because of phase, and nowhere in the stereo reproduction literature do they talk about phase; they only talk about level and time differences. So maybe I am wrong. I gave an example asking how phase preserves this information.

30 minutes ago, pkane2001 said:

 

For a 700 ms delay you'll hear a separate sound in the left and right channels, like an echo -- that delay is huge. It's like moving one speaker 240 meters closer to you than the other one :)

 

A small delay in one of the channels will shift all the sounds to one side (delay in right channel will cause the sound to shift to the left, not right). But now play the recorded difference in phase between the two channels and the sound position will move relative to this new position.

 

I corrected that to microseconds. But you get the picture. I have posted a link about phase and will leave that for you to explain.

 

Now, from the picture: the speakers are placed 3 meters apart, forming an equilateral triangle. The violin is at 15.5 meters and the sax is at 18 meters. I am sure you can calculate the delay between the right and left microphones, and also reaching the ears. That's the original timing and level difference captured in the recording.

 

So the timing difference between L1 and R1 is about 410µs, and between L2 and R2 it is 60µs. That's the encoded timing difference. The level can be calculated too, and the level difference will likewise be encoded correctly. So far okay?

8 hours ago, pkane2001 said:

 

So far OK, except I can't verify your delay calculations without also having an angle specified.

 

 

My bad. Say the violin is 56 degrees to the left at 15.5 meters, and the sax is 15 degrees to the left at 18 meters, measured from the center of the head or microphone pair. I changed the degrees slightly because the earlier figures didn't accurately reflect the spread in the picture. So the new delay for the sax is 128 microseconds. Please correct me if my calculation is wrong. It has been more than 30 years. :)
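
The calculation checks out. A minimal sketch using the exact geometry (two capsules 17 cm apart on the x axis, speed of sound assumed 343 m/s):

```python
import math

C = 343.0       # m/s, assumed speed of sound
SPACING = 0.17  # m between the two capsules

def arrival_diff_us(angle_deg, dist_m):
    """Exact path-length difference to two capsules on the x axis."""
    a = math.radians(angle_deg)
    sx, sy = -dist_m * math.sin(a), dist_m * math.cos(a)  # source to the left
    d_left = math.hypot(sx + SPACING / 2, sy)
    d_right = math.hypot(sx - SPACING / 2, sy)
    return (d_right - d_left) / C * 1e6

print(arrival_diff_us(56, 15.5))  # violin: ~410 microseconds
print(arrival_diff_us(15, 18.0))  # sax:    ~129 microseconds
```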

8 hours ago, pkane2001 said:

 

The difference between two ears spaced 17cm apart, for a source at 90 degrees, will be 495μs. For a bigger head with a 23cm spacing it will be 670μs. That is the widest soundstage over which you could spread the speakers (as with headphones). In most stereo setups the speakers will be placed at 60 degrees. For wide-dispersion speakers, it can extend beyond that. You can also toe in the speakers so that they face you, where the frequency response is flat. This will allow you to extend the speaker width without creating a hole in the middle.

 

[Two attached diagrams: the recording and playback geometry referred to in this post.]

 

A speaker at the corner of an equilateral 3 meter triangle will be localized by a timing difference of 247μs. That is, the sound from the left speaker alone will reach the two ears with a timing difference of L3a - L3 = 247μs.
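
These figures are easy to verify. A minimal sketch, assuming a 343 m/s speed of sound:

```python
import math

C = 343.0  # m/s, assumed speed of sound

print(0.17 / C * 1e6)  # ~496 us: maximum ITD for 17 cm ear spacing
print(0.23 / C * 1e6)  # ~671 us: maximum ITD for 23 cm
print(0.17 * math.sin(math.radians(30)) / C * 1e6)  # ~248 us: speaker at 30 deg
```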

 

The original information for the violin was encoded in the recording as a timing difference of 410μs; the sax's timing difference is 128μs. When played back, the stereo speakers send a signal from the left speaker and another from the right speaker, delayed by 128μs for the sax and 410μs for the violin. This is a perfect recreation of the original recording as far as location is concerned.

 

But... that doesn't happen with speakers, because of the PRECEDENCE EFFECT.

 

36 minutes ago, pkane2001 said:

 

Precedence effect applies to inter-aural delay above 1ms. Take your example of a 495μs delay from a 90 degree sound source (that's to the left or right side of the microphone), add a 247μs delay from the speaker angle and you get a 742μs ITD, which is still less than 1ms. So why is precedence effect an issue?

 

 

Yes and no. The precedence effect is an explanation involving the arrival of a second sound and how it affects the perception of the first. What happens when a second sound arrives after 200μs and a third sound arrives after another 200μs delay? Will it alter the position, or be processed within the fusion zone? Is it confusing? Will the brain get confused too?

 

In the diagram, the original delay of the violin is 410μs. This will be reproduced by the speakers, with R3 arriving 410μs later than L3. But before the arrival of R3, the right ear receives another signal, L3a, which arrives 247μs after L3. This is the first cue the brain gets for localization, and in this case it will place the sound at the speaker of the 60 degree setup. Then R3 arrives 163μs later than L3a. Where will this sound be located? Then there is yet another sound, R3a, which arrives at the left ear from the right speaker after a further 247μs delay. Where will this be localized?

 

In any case, do you see either ear receiving a second sound 410μs later than the first, which is what would reflect the exact position of the violin? No matter how far the violin is on the stage, and no matter how much delay between the left and right signals is captured intact, the eventual interaural delay a 60 degree speaker pair can produce cannot exceed 247μs for a 17cm inter-ear distance.
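
To make that arrival sequence concrete, here is a minimal sketch of the four wavefronts for the violin, taking the left speaker's direct arrival as time zero (247μs crosstalk delay and 410μs encoded delay, as above):

```python
# Arrival times (microseconds) of the four wavefronts at the two ears
# for the violin, with the left speaker's direct arrival as time zero.
ENCODED = 410  # inter-channel delay in the recording, us
XTALK = 247    # near-ear to far-ear delay for a +/-30 degree speaker, us

arrivals = [
    ("L3:  left  speaker -> left  ear", 0),
    ("L3a: left  speaker -> right ear", XTALK),
    ("R3:  right speaker -> right ear", ENCODED),
    ("R3a: right speaker -> left  ear", ENCODED + XTALK),
]
for name, t_us in sorted(arrivals, key=lambda p: p[1]):
    print(f"{t_us:4d} us  {name}")

# Whatever delay is encoded, the ear-to-ear difference of any single
# wavefront never exceeds the 247 us set by the speaker angle.
```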

1 hour ago, pkane2001 said:

 

Yes, I noticed that when you said this IR was for the rear channels. Without direct sound your IR treats all sound as reflected, which explains the large reverb I heard. That’s what I meant when I said we may have different aims here, your IR was not created for the same purpose.

 

This is a good example for answering the question about phase. The 120 degree IR, when reproduced from the front speakers, doesn't throw the sound so that it comes from 120 degrees. Most references to phase are actually references to timing. Sometimes they refer to level.

 

An example is at http://www.mcsquared.com/mono-stereo.htm, which talks about level and phase. Here, phase actually refers to timing.

 

Stereo

True stereophonic sound systems have two independent audio signal channels, and the signals that are reproduced have a specific level and phase relationship to each other so that when played back through a suitable reproduction system, there will be an apparent image of the original sound source.

 

Another example is here, where phase again refers to timing: http://alumni.media.mit.edu/~araz/sss/Sound_Localization.html

 

A binaural cue is a cue which relies on the fact that a listener hears sound from two different locations - namely the ears. Localization cues at low frequencies are given by interaural phase differences, where the phase difference of the signals heard at the two ears is an indication of the location of the sound source. At frequencies where the wavelength is shorter than the ear separation, phase cues cannot be used; interaural intensity difference cues are used, since the human head absorbs high frequencies.
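
The crossover the MIT page describes is easy to put a number on: interaural phase becomes ambiguous roughly where the wavelength drops below the ear separation. A minimal sketch, assuming 343 m/s and a 17 cm ear spacing:

```python
C = 343.0           # m/s, assumed speed of sound
EAR_SPACING = 0.17  # m

# Frequency at which one wavelength equals the ear spacing; above this,
# an interaural phase difference no longer maps to a unique direction.
print(C / EAR_SPACING)  # ~2018 Hz

def itd_phase_deg(itd_us, freq_hz):
    """Interaural phase (degrees) produced by a given ITD at a frequency."""
    return 360.0 * freq_hz * itd_us * 1e-6

print(itd_phase_deg(496, 500))   # ~89 degrees: unambiguous at low frequency
print(itd_phase_deg(496, 3000))  # ~536 degrees: wraps past 360, ambiguous
```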


And there are many scholarly references that clearly state stereo cannot create sound outside the speakers. This is in reference to real stereo sound, without synthesized effects. There is also no height information unless DSP is involved, and sometimes the HF from the tweeter plus prior knowledge can falsely give a sense of height where no height information exists. It is normal to visualize a bird above the speakers when you hear bird chirps in the music.

 

.....sounds can be panned to locations between the speakers, thereby creating phantom images where there are no speakers. Stereo cannot, however, create phantom images to the sides, above, below or behind the listener....

 

https://pdfs.semanticscholar.org/3842/1b06d81fab81378465bad8c2aa39bf770331.pdf

13 minutes ago, pkane2001 said:

 

Somehow I get the sense that we might agree, at least on some things :)

 

Do you still think that sound cannot be reproduced outside the two speakers' 60 degree triangle?

 

 

I am waiting for you to answer; you have not responded to my last post. No, sound cannot be produced outside the speakers.

 

 

 

6 minutes ago, pkane2001 said:

 

Listener is the keyword in the above. Sound cannot be reproduced to the sides, above, below or behind the listener, not speaker. These are different statements. Any other references that show the original point that sound can't come from outside the speakers?

 

Read carefully: 'listener' doesn't apply to the rest. Read the full article.

 


1) Traditionally, a single pair of speakers could only produce phantom images that appear to originate from a location at or between the physical speakers.

 

2) The idea of stereo music was to place the featured artist as a phantom image between the two speakers.

 

From the same link. 

 

Maybe it is so obvious that no one cared to state it explicitly. I am really surprised that wasn't the case.

 

And do you think the sentence would make sense if it said 'behind the speakers'?

 

And if you want to know the answer, I already provided it in my earlier post, where you brought in the precedence effect and then went silent after that.

5 hours ago, pkane2001 said:

 

That does seem to back up your assertion. But it doesn't explain why the speakers' boundary is the limit.

 

Based on references I provided earlier in this thread, phase difference (ITD) is part of the localization mechanism our ears use under 1KHz. There's no reason that I can see that would make ITD localization be limited to speaker positions when captured and reproduced with a two mic and a two speaker system, as long as phase is properly captured and reproduced.

 

Perhaps for frequencies above 1KHz, where amplitude differences dominate the localization process, this would make sense. But below 1KHz, where phase is the primary localization mechanism,  I don't see what prevents phantom sound sources from being placed outside the speakers. In fact, artificial phase manipulation can (and is often used to) place the sounds outside the speakers, so we know it's possible.

 

 

I thought it was you who brought up precedence effect, and I answered that it's not in play for the under 1ms delays?

 

https://www.computeraudiophile.com/forums/topic/54537-soundstage-width-cannot-extend-beyond-speakers/?do=findComment&comment=889251

 

All the questions are answered there.

2 hours ago, gmgraves said:

That's just wrong. Whether or not there's a "sweet spot" depends almost entirely on the speaker design.

 

Stereophony works on the basis that there is a sweet spot. The phantom images created between the speakers are based entirely on interaural level and time differences.

 

This principle works accurately only when you position yourself midway between the two speakers. It still works even if you are not in the middle, but the phantom image is no longer at the actual position it had in the original event. The further you move toward one speaker, the less you hear of the phantom image and the more you hear of the direct sound from the speaker closer to you.

 

Take your stereo mic to a field, record a sound at every 30 degrees, and listen to it over your speakers. The placement of the instruments will be most accurate when you are sitting in the sweet spot.
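
The collapse toward the nearer speaker is simple geometry. A minimal sketch, assuming a 3 m equilateral setup and a 343 m/s speed of sound: sliding just 0.3 m off-center already biases the arrival times by more than the entire interaural budget of the recording:

```python
import math

C = 343.0  # m/s, assumed speed of sound

def offcenter_bias_us(offset_m, base_m=3.0):
    """Arrival-time lead of the nearer speaker for an off-center listener.

    Speakers at (+/- base/2, 0); the centered listener sits base*sqrt(3)/2
    in front and slides sideways by offset_m toward one speaker.
    """
    y = base_m * math.sqrt(3) / 2
    d_near = math.hypot(base_m / 2 - offset_m, y)
    d_far = math.hypot(base_m / 2 + offset_m, y)
    return (d_far - d_near) / C * 1e6

print(offcenter_bias_us(0.0))  # 0 us in the sweet spot
print(offcenter_bias_us(0.3))  # ~872 us: swamps the ~250 us speaker ITD
```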

On ‎10‎/‎24‎/‎2018 at 10:42 AM, pkane2001 said:

 

Precedence effect applies to inter-aural delay above 1ms. Take your example of a 495μs delay from a 90 degree sound source (that's to the left or right side of the microphone), add a 247μs delay from the speaker angle and you get a 742μs ITD, which is still less than 1ms. So why is precedence effect an issue?

 

 

I typed this on the 24th but could not complete it:

 

 

Yes and no. The precedence effect is about the arrival of a second sound. What happens when a second sound arrives after 200μs and a third sound arrives after another 200μs delay? Will it alter the position, or be processed within the fusion zone?

 

 

 

This requires a long answer, but I am sure the precedence effect is discussed from 0 to 40ms. I thought I posted the reference and the relevant pages. Anyway, as you said, it is all about the first-arriving wavefront. In my first paragraph, I asked what happens to the third sound that follows the second sound arriving at the ear 200μs later. What will be the effect of the third sound? You may have to go back several posts to follow the thread and this reply.

 

Thanks.

 

 

