
Soundstage Width cannot extend beyond speakers



12 minutes ago, pkane2001 said:

 

It is the relative phase differences between the recorded sounds in the two channels that aid in sound localization. So what if the whole scene is shifted by a few inches to the left or to the right? 

 

Phase accuracy is a different discussion (and it's not hard to calculate its effect, BTW). But, while shifting one speaker by 3 inches changes the overall phase relationship between two channels, it doesn't alter the relative phase shifts of individual sounds recorded in the two channels. 

 

So, again, what prevents the recorded sound from appearing to come from outside the speakers?


 

 

The answer you're looking for will not make sense to you because of your para 1.

 

When you say it is relative phase, you must also state: relative to what? It is relative to the stereo microphone. Let's take ORTF as an example. The capsules are separated by 17cm, which is about the same as the average distance between human pinnae. Some sources give 21.5cm, but 17cm seems to be closer. The microphones are angled at 110 degrees.

 

So the accurate phase reaches the two microphones and is converted into the recorded sound. According to you, this phase has been accurately preserved.

 

Now, tell me at what distance/separation the speakers should be placed so that the same accurate phase information reaches you?
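As a rough sketch of the time-of-arrival side of this question (the 30-degree source angle and the far-field assumption below are illustrative, not taken from the thread):

```python
# Arrival-time difference at the two capsules of an ORTF pair for a
# distant source. Far-field approximation: path difference = d*sin(angle).
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
CAPSULE_SPACING = 0.17   # m, the ORTF spacing mentioned above

def ortf_time_difference(source_angle_deg: float) -> float:
    """Inter-capsule arrival-time difference, in seconds, for a far-field
    source at the given angle from the front axis of the pair."""
    angle = math.radians(source_angle_deg)
    return CAPSULE_SPACING * math.sin(angle) / SPEED_OF_SOUND

# A source 30 degrees off-axis arrives ~248 µs earlier at the nearer capsule.
print(f"{ortf_time_difference(30.0) * 1e6:.0f} µs")
```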

4 minutes ago, STC said:

Now, tell me at what distance/separation the speakers should be placed so that the same accurate phase information reaches you?

 

Sorry, but how did this morph into an 'accuracy of phase reproduction' discussion? I thought this was about whether sounds can appear to come from outside the speakers?

 

4 minutes ago, pkane2001 said:

 

Sorry, but how did this morph into an 'accuracy of phase reproduction' discussion? I thought this was about whether sounds can appear to come from outside the speakers?

 

 

But you already said that's because of phase, and nowhere in stereo reproduction do they talk about phase. They only talk about level and time difference. So maybe I am wrong. I gave an example asking how phase preserves this information.

2 minutes ago, STC said:

But you already said that's because of phase, and nowhere in stereo reproduction do they talk about phase. They only talk about level and time difference. So maybe I am wrong. I gave an example asking how phase preserves this information.

Time difference is phase.
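For a single frequency the two are interchangeable, which a quick sketch makes concrete (the 410µs delay and 1kHz tone below are just example values):

```python
# "Time difference is phase": at a single frequency f, a delay of dt
# seconds is a phase shift of 2*pi*f*dt radians.
import math

def delay_to_phase_deg(delay_s: float, freq_hz: float) -> float:
    """Phase shift, in degrees, corresponding to a time delay at freq_hz."""
    return math.degrees(2 * math.pi * freq_hz * delay_s) % 360

print(f"{delay_to_phase_deg(410e-6, 1000.0):.0f} degrees")  # ~148° at 1 kHz
```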

Just now, STC said:

 

But you already said that's because of phase, and nowhere in stereo reproduction do they talk about phase. They only talk about level and time difference. So maybe I am wrong. I gave an example asking how phase preserves this information.

 

Absolute phase difference between channels, such as that caused by speaker positioning, is different from the relative phase differences between sound sources recorded in the two channels.

As long as the relative phase differences recorded between the sounds in the two channels can be reproduced (not completely destroyed by filters, speakers, amps, etc.), the position of a sound can be placed between or outside the speakers.

8 minutes ago, STC said:

 

Like this?

 

[attached image: STC's diagram of the recording layout]

 

4 minutes ago, pkane2001 said:

 

Something like that, yes.

 

 

I only see capture, not reproduction, in that diagram.

 

What is the mechanism that will have the speakers "place" an image of the harp to the left of the left speaker?

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

12 minutes ago, STC said:

 

Like this?

 

[attached image: STC's diagram of the recording layout]

 

Perhaps @gmgraves could do us a favour and record a sound source positioned like the harp or back violins.

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

1 minute ago, pkane2001 said:

 

Phase differences between the harp sound recorded in the left and the right channels.

 

 

These will be in the recording.

How are they accurately reproduced with a pair of speakers?

 

I've never heard this (images outside the space between the speakers) happen with real stereo, have read that it's not possible, and am as interested as you in learning why.

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

Just now, STC said:

 

Thanks for keeping up. Do you agree the image will now shift towards the right? The same will happen if you delay the right speaker by 700ms. Now, what happens to the phase?

 

For a 700ms delay you'll hear a separate sound in the left and right channels, like an echo -- that delay is huge. It's like moving one speaker 240 meters closer to you than the other one :)

 

A small delay in one of the channels will shift all the sounds to one side (a delay in the right channel will cause the sound to shift to the left, not right). But now play the recorded difference in phase between the two channels, and the sound position will move relative to this new position.
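The 240-meter figure is easy to verify, taking the speed of sound as 343 m/s (a common room-temperature value):

```python
# Extra path length corresponding to an inter-channel delay.
SPEED_OF_SOUND = 343.0  # m/s

def delay_to_distance(delay_s: float) -> float:
    return SPEED_OF_SOUND * delay_s

print(delay_to_distance(0.700))   # ~240 m: heard as a separate echo
print(delay_to_distance(0.0007))  # ~0.24 m: merely shifts the image
```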

30 minutes ago, pkane2001 said:

 

For a 700ms delay you'll hear a separate sound in the left and right channels, like an echo -- that delay is huge. It's like moving one speaker 240 meters closer to you than the other one :)

 

A small delay in one of the channels will shift all the sounds to one side (a delay in the right channel will cause the sound to shift to the left, not right). But now play the recorded difference in phase between the two channels, and the sound position will move relative to this new position.

 

I corrected that to microseconds. But you get the picture. I have posted a link about phase and will leave that for you to explain.

 

Now, from the picture, the speakers are placed at 3 meters, forming an equilateral triangle. The violin is at 15.5 meters and the sax is at 18 meters. I am sure you can calculate the delay between the right and left microphones, and likewise at the ears. That's the original timing and level difference captured in the recording.

 

So the timing difference between L1 and R1 is about 410µs, and between L2 and R2 it is 60µs. That's the encoded timing difference. The levels can be calculated too, and those differences will also be encoded correctly. So far okay?
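For anyone wanting to check these numbers, a sketch like the following computes the inter-microphone timing difference from source and microphone coordinates; the coordinates below are placeholders, since the diagram's exact layout isn't reproduced in the thread:

```python
# Arrival-time difference between two microphones for a point source.
import math

SPEED_OF_SOUND = 343.0  # m/s

def time_difference(src, mic_l, mic_r) -> float:
    """Right-minus-left arrival-time difference, in seconds; all
    positions are (x, y) coordinates in metres."""
    return (math.dist(src, mic_r) - math.dist(src, mic_l)) / SPEED_OF_SOUND

mic_l, mic_r = (-0.085, 0.0), (0.085, 0.0)  # a 17 cm spaced pair
violin = (-8.0, 13.3)                        # ~15.5 m away, off to the left
print(f"{time_difference(violin, mic_l, mic_r) * 1e6:.0f} µs")
```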

55 minutes ago, STC said:

Before that, what happens to the instrument sound in the dead center when you turn the balance control to the right? Do you change the phase or the level?

 

The level, by means of less summation of the same phase.

 

Suppose a sine of 10V.

Same phase from two speakers gives a summed voltage of 20V in the centre.

Attenuate the left channel by "50%" (6dB) and the summed output is now 15V, with the image midway between the centre and the right-hand speaker.

Something like that?
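The arithmetic checks out, assuming coherent (voltage) summation at the centre:

```python
# Two in-phase 10 V signals sum to 20 V; attenuating one channel by
# 6 dB (a factor of 2 in voltage) gives 10 V + 5 V = 15 V.
left, right = 10.0, 10.0
attenuation_db = 6.0
left_attenuated = left / (10 ** (attenuation_db / 20))  # ~5.0 V
print(left + right)             # 20.0
print(left_attenuated + right)  # ~15.0
```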

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

1 hour ago, pkane2001 said:

 

Let me ask you a different question: why would a pair of speakers not be able to reproduce the recorded phase difference?

 

 

 

Instinctively I'd think that it would be reproduced as a change in level. I'd have to do a lot of studying to come up with a substantiated response...

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

1 minute ago, pkane2001 said:

 

Phase and amplitude are separate and distinct. Our ears appear to have the ability to detect differences in both.

 

I know that, but you asked about the system/speakers being able to reproduce those phase differences.

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)


I've just bumped into this:

 

Stereo & the Soundstage

John Atkinson  |  Dec 4, 1986

 

(...)

 

Blumlein's genius, however, lay in the fact that he realized that the low-frequency phase information can be replaced by corresponding amplitude information. If you have two independent information channels, each feeding its own loudspeaker, then the ratio of the signal amplitudes between those two loudspeakers will define the position of a virtual, phantom, sound source for a centrally placed listener equidistant from them. For any ratio of the sound levels of the two speakers, this virtual source occupies a dimensionless point somewhere on the line joining their acoustic centers. The continuum of these points, from that represented by maximum-left/zero-right to that represented by zero-left/maximum-right, makes up the conventional stereo image. If there is no reverberant information, then the brain will place the virtual image of the sound source in the plane of the speakers; if there is reverberation recorded with the correct spatial relationship to the corresponding direct sound, if it is "coherent," then the brain places the virtual image behind the speakers, the exact distance depending on the recorded direct-sound/reverberant-sound ratio.

 

Thus by recording amplitude information only in a two-channel system, we can create a virtual soundstage between and behind the loudspeakers.

 

Hands go up everywhere: but...but...but surely both ears receive the signal from both loudspeakers. Shouldn't this acoustic crosstalk work against the creation of a stereo image?

 

The facile answer is that, as the vast majority of people can perceive stereo images, it doesn't. The real answer is that, contrary to what you might have read in Polk's advertising, the brain is able to work out which signal is intended for which ear. If a wavefront reaches the left ear from the left speaker, the brain knows that that wavefront will reach the right ear around 0.7ms later, the time taken for the wave to travel around the head, and therefore can ignore it.

 

So there we have it: a perfect stereo image implies a perfect soundstage. All is rosy in the audiophile garden.

 

Hmm. A suspicious word, perfect. Where's the catch?

 

Well, we have only been discussing the interaction between the two loudspeakers and the listener. What about the amplitude-information only, two-channel recording? Where does that come from?

 

When it comes to recording music, there are two mutually incompatible philosophies. One is to capture as faithfully as possible the acoustic sound produced by a bunch of musicians, in effect treating a performance as an event to be preserved in a documentary manner. The second, which is far more widespread, is to treat the recording itself as the event, the performance, using live sounds purely as ingredients to be mixed and cooked. This, of course, is how all nonclassical recordings are made. The sound of an instrument or singer is picked up with one microphone, and the resultant mono signal, either immediately or at a later mixdown session, is assigned a lateral position in the stereo image with a panpot. As this is a device which by definition produces a ratio of amplitudes between the two channels, it would seem that every recording made this way is a true amplitude-stereo recording, capable of producing a well-defined stereo image.

 

Do such recordings have a soundstage associated with that image, however?

 

Sometimes.

 

When producing such a recording, the producer decides how much and what type of reverberation should be associated with each of the mono sound sources, and also decides where in space that reverberation should be positioned. There is no reason at all why the ambience surrounding, say, a centrally placed lead vocalist, should have any relationship with that around the drums. Or the guitar. Or the synthesizer. And if it doesn't, then the listener doesn't hear a soundstage. Rather, he hears a collage of individual musical events, bearing no spatial relationship to one another.

 

(...)


Read more at https://www.stereophile.com/asweseeit/1286awsi/index.html
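The panpot Atkinson describes, producing a ratio of amplitudes between the two channels, can be sketched like this; the constant-power sine/cosine law is the standard textbook form, not something the article itself spells out:

```python
# A constant-power panpot: one mono input, a ratio of amplitudes out.
import math

def panpot(position: float) -> tuple[float, float]:
    """position in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right.
    Returns (left_gain, right_gain) with left**2 + right**2 == 1."""
    theta = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

for p in (-1.0, 0.0, 0.5, 1.0):
    l, r = panpot(p)
    print(f"pos {p:+.1f}: L = {l:.3f}, R = {r:.3f}")
```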

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

