
Soundstage Width cannot extend beyond speakers


STC

Recommended Posts

1 hour ago, semente said:

 

Here's the definition of sweet spot as per Stereophile's Glossary:

 

sweet spot That listening seat from which the best soundstage presentation is heard. Usually a center seat equidistant from the loudspeakers.
Read more at https://www.stereophile.com/content/sounds-audio-glossary-glossary-r-s

 

Exactly. What happens with high quality SQ is that there is no "best spot" - if I sit in the prescribed, correct position there is zero advantage to be gained. As Peter says, you can walk around, doing some useful things as well at the same time :D - the experience remains as captivating as it would be if one locked oneself rigidly in one spot, not daring to move a muscle in case some of the "magic" is lost ... :).

Link to comment
Just now, fas42 said:

 

Exactly. What happens with high quality SQ is that there is no "best spot" - if I sit in the prescribed, correct position there is zero advantage to be gained. As Peter says, you can walk around, doing some useful things as well at the same time :D - the experience remains as captivating as it would be if one locked oneself rigidly in one spot, not daring to move a muscle in case some of the "magic" is lost ... :).

 

"the experience remains (almost) as captivating" this I agree.

But the imaging starts losing quality once you move away from the tip of the isosceles triangle.

 

And I'm actually one of those listeners who find timbral accuracy more important than soundstage...

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

Link to comment
7 minutes ago, semente said:

 

"the experience remains (almost) as captivating" this I agree.

But the imaging starts losing quality once you move away from the tip of the isosceles triangle.

 

And I'm actually one of those listeners who find timbral accuracy more important than soundstage...

 

Okay, this is where I step up another notch from what Peter achieves - the imaging never loses quality, no matter where I move in the room. To repeat what I have said many times, the presentation is convincing - it comes across exactly as if the real sources of sound lay in an arrangement extending back from the plane of the speakers, always behind the speakers. If there were a curtain in line with the speakers, hiding them and extending fully to the side walls, one could walk anywhere in front of that curtain and detect no aural clues that you were in fact being fooled ...

Link to comment
Just now, fas42 said:

 

Okay, this is where I step up another notch from what Peter achieves - the imaging never loses quality, no matter where I move in the room. To repeat what I have said many times, the presentation is convincing - it comes across exactly as if the real sources of sound lay in an arrangement extending back from the plane of the speakers, always behind the speakers. If there were a curtain in line with the speakers, hiding them and extending fully to the side walls, one could walk anywhere in front of that curtain and detect no aural clues that you were in fact being fooled ...

 

I hope you don't mind me saying this but I am ever more convinced that your expectations are not very high.

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

Link to comment

Soundstage width in my experience is guided less by the electronics and more by the music and speakers. Speakers with a wide dispersion pattern seem to be better / easier in attaining width of soundstage. Room / speaker setup is key.

 

In my environment, soundstage quality (imaging, stability, focus, etc) is more important than width (or depth).

Link to comment

An amusing aside - went to a flashy opening  last night - had a pretty good quality PA; one could tell at times what it was capable of. Yet it was set up as such things nearly always are. Lots of bass oomph, making the room feel like it was full of sound, "enriching the space" - and the treble ... where was it?!!

 

A clever violin recital was done; she recorded on the fly a rhythm backing, just plucking it out, and then sequencing that to endlessly repeat; and then accompanying herself. The finale was to add layer upon layer, so that it became ever richer, round by round - very creative.

 

Did it sound like a violin? A million miles from that - the bass strings were huge, inches in thickness, the treble were like spider's gossamer, if you blinked they vanished completely.

Link to comment
4 hours ago, gmgraves said:

 

I can't really tell you why some speakers make images beyond the speaker's edges while others don't, nor why some recordings have that characteristic while others don't. I just know that it happens.

 

The fact that setups were getting the sound right 50 years ago shows how little true progress has been made in understanding - there's nothing new under the sun! Again, it's not the speakers but how well the whole rig has been sorted. There's a continuum of behaviour: start with the very best recordings on a decent rig, and work up to all recordings on a setup of the highest quality that's been optimised to the last detail - there are places all along the spread between those points for systems to reside.

Link to comment
11 hours ago, PeterSt said:

 

I hear you. But you forgot the reason why.

That something is recorded from the center of the stage is irrelevant to begin with. Shift it to the left a little (2 meters, seen from the listener) and it will show up more to the left of the center (of your stereo speakers).

You make it sound like I now must sit 2 meters (or so) more to the left or else there is no stereo image ? This just doesn't make sense.

Of course you said that the mikes must be in the center of the recording. No wait, you don't say that either.

 

What are you saying ?

 

 

Aha, this gives some clues. You work with time delay.

No.

Nothing in your body works with time delay that I know of. It is all phase. And that per frequency. You obviously think that if you turn your head (left/right) by one degree (say your nose moves 1 cm and with that each of your ears 0.5 cm in opposite directions), you are suddenly able to catch a time-of-arrival difference between left and right ear over this one cm - which, at 340 m/s, is how little ?

Of course with this you are also saying that for a nice stereo image your nose must point dead-center to the speakers (and you must be in the middle between them) ?
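
For anyone who wants the actual number behind "how little?": a minimal back-of-envelope sketch in Python, taking the post's 340 m/s speed of sound and treating the quoted 1 cm as the left/right path-length difference (the real difference from a small head rotation depends on geometry). The 21.5 cm row anticipates the average ear spacing mentioned later in the thread.

```python
# Back-of-envelope: time-of-arrival difference for a given left/right
# path-length difference, at the 340 m/s quoted in the post.
SPEED_OF_SOUND = 340.0  # m/s

def itd_from_path_difference(path_diff_m: float) -> float:
    """Interaural time difference (seconds) for a given path-length difference."""
    return path_diff_m / SPEED_OF_SOUND

if __name__ == "__main__":
    # 0.5 cm and 1 cm are the figures from the post; 21.5 cm is the average
    # ear-to-ear spacing that comes up later in the thread.
    for d_cm in (0.5, 1.0, 21.5):
        itd_us = itd_from_path_difference(d_cm / 100) * 1e6
        print(f"path difference {d_cm:5.1f} cm -> {itd_us:7.1f} microseconds")
```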

 

All works with phase angles and the differences between them. From that we derive the angle (to the source). And just saying : should you move your head the mentioned 1 cm (left/right), you can still envision the angle of where the source is. Or would you say that the source moves ? Or that its distance changes ?

 

If a guitar was 60 degrees to the left of a stereo microphone setup (seen from the listener, who is in the center of the stage) and you play this back through loudspeakers, then if all is right the guitar shows up 60 degrees to the left of an imaginary center, which btw was created by the microphone's distance (to the guitar and all). So the distance is related, and not by means of time again - just measurable distance in meters. So if you approach the speakers, the angle to the guitar gets wider. This would happen in reality just the same if you only approach from the middle and keep on that center line.

In some mysterious way when you are at say 5 meters from the speakers and walk sideways, the guitar goes ... where ?

I think the problem with your reasoning could be that you see that 60 degrees as fixed. This is obviously not so.

 

To the latter I should add that your perceived sweetspot also has a defined distance. And Oh, I already know, this is related to the toeing of the speaker, right ?

Wrong. The toeing of the speakers isn't related to a thing, except for waves meeting at another place. Waves of which a kazillion exist in parallel to begin with (just look at your loudspeaker driver(s) and how they radiate sound). If that total beam, thus of one speaker, were 3 meters wide at the middle position at the distance where you reside, you'd have 1.5 meters left/right margin to stay in both beams and perceive a stereo image. With a somewhat longer room this 3 meters no longer applies - the beam will merely be the total width of the room and you can be anywhere, sideways.
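
To put rough numbers on the "wide beam" idea: a minimal geometric sketch, assuming a listening distance of 5 m and a few illustrative dispersion half-angles (none of these are measured values for any particular speaker). A coverage width of about 3 m at 5 m corresponds to roughly ±17 degrees.

```python
import math

# Simple geometry for the "wide beam" argument: how wide one speaker's
# coverage is at the listening distance, for an assumed dispersion half-angle.
# Real driver dispersion varies strongly with frequency; these are only
# illustrative numbers, not measurements.
def coverage_width_m(distance_m: float, half_angle_deg: float) -> float:
    """Width of the covered area at the given distance."""
    return 2.0 * distance_m * math.tan(math.radians(half_angle_deg))

if __name__ == "__main__":
    listening_distance_m = 5.0  # assumed
    for half_angle in (17, 30, 45):  # assumed half-angles
        w = coverage_width_m(listening_distance_m, half_angle)
        print(f"±{half_angle:2d} deg at {listening_distance_m:.0f} m -> coverage ~ {w:.1f} m wide")
```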

 

 

This one again; yes, you are correct. But this is only because the guitar, wherever it resides, does not move. So if you walk far enough to the left, then at some stage the guitar is in between you and the speaker. And it doesn't matter whether the guitar was somewhere in between the speakers at first, or whether it was outside of them to begin with (which is harder to believe for you anyway).

The whole image shifts of course. This is because you're making the angle of perception smaller. The very same would happen if you walked sideways of the real-life stage. Be at a 90 degree angle and you'd have everything in one plane (longitudinal for you), assuming all the players stood/sat on one line (that line 90 degrees opposed to the normal audience).

 

By now I am not sure why I need all this explanation. I am not making up anything of what I perceive and what I regard normal. Come over and have a listen.


 

Quote

 

You make it sound like I now must sit 2 meters (or so) more to the left or else there is no stereo image ? This just doesn't make sense.

"..two-channel stereo is an antisocial system: Only one

 


listener can hear it the way it was created. If one leans a little to the left or right,
the featured artist fl ops into the left or right loudspeaker, and the soundstage
distorts. When we sit up straight, the featured artist fl oats as a phantom image
between the loudspeakers, often perceived to be a little too far back and with a
sense of spaciousness that is different from the images in the left and right
loudspeakers (see Figure 8.4 and the associated discussion).
This puts the sound image more or less where it belongs in space, but
then there is another problem:" - Toole

 

" To hear the phantom center image, and
any other panned images between the loudspeakers correctly located, listeners must be on
the symmetrical axis between the loudspeakers. Away from the symmetrical axis, as in
cars, and through headphones, we don’t hear real stereo; we hear a spatially distorted, but
still entertaining, rendering."  - Toole.

 

You are referring to quote 2 situation.

 

Quote

Nothing in your body works with time delay that I know of

 

Hearing works with time and level differences. If you don't get this then the rest of this discussion is not going to yield any meaningful conclusion. Humans hear timing differences in the microsecond range to determine location.
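
For scale, here is a minimal Python sketch of the interaural time differences involved, using the classic Woodworth spherical-head approximation; the head radius is an assumed textbook average, not a figure from this thread.

```python
import math

# Scale of interaural time differences (ITD) via the classic Woodworth
# spherical-head approximation: ITD ≈ (a / c) * (sin θ + θ).
HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s

def woodworth_itd(azimuth_deg: float) -> float:
    """Approximate ITD in seconds for a far-field source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

if __name__ == "__main__":
    for az in (1, 5, 15, 30, 60, 90):
        print(f"azimuth {az:3d} deg -> ITD ~ {woodworth_itd(az) * 1e6:6.1f} microseconds")
```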

 

Quote

By now I am not sure why I need all this explanation. I am not making up anything of what I perceive and what I regard normal. Come over and have a listen.

 

Same to you. People have listened. That includes a reviewer who thought it was a surreal experience. 

 

Link to comment
13 minutes ago, STC said:

Hearing works with time and level differences. If you don't get this then the rest of this discussion is not going to yield any meaningful conclusion. Humans hear timing differences in the microsecond range to determine location.

 

https://www.coursera.org/lecture/human-brain/lecture-4-2-s-deducing-the-location-of-sounds-P2gbi

Link to comment
8 hours ago, semente said:

sweet spot That listening seat from which the best soundstage presentation is heard. Usually a center seat equidistant from the loudspeakers.

 

HiFi 1976.

 

Quote

But the imaging starts losing quality once you move away from the tip of the isosceles triangle.

 

Can someone now tell us why this would be so ?

Mind you please, I explained it from my POV with the 3m and 1.5m wide beam and such.

 

What's radiated towards you - and most certainly what arrives at your listening position (assuming a few m from the speakers) - is not a point. It is a wide beam. Two wide beams.

Now please explain instead of stating that it isn't so. And do anticipate a visit to support the claim, of course. :ph34r:

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
48 minutes ago, STC said:


 


"..two-channel stereo is an antisocial system: Only one listener can hear it the way it was created. If one leans a little to the left or right, the featured artist flops into the left or right loudspeaker, and the soundstage distorts. When we sit up straight, the featured artist floats as a phantom image between the loudspeakers, often perceived to be a little too far back and with a sense of spaciousness that is different from the images in the left and right loudspeakers (see Figure 8.4 and the associated discussion). This puts the sound image more or less where it belongs in space, but then there is another problem:" - Toole

 

" To hear the phantom center image, and
any other panned images between the loudspeakers correctly located, listeners must be on
the symmetrical axis between the loudspeakers. Away from the symmetrical axis, as in
cars, and through headphones, we don’t hear real stereo; we hear a spatially distorted, but
still entertaining, rendering."  - Toole.

 

 

Toole is not wrong, but doesn't have full understanding. The variable which he fails to take into account is the level of audible anomalies - vary that, and the experience changes.

 

Achieving low levels of such anomalies is hard, and hence it's understandable he writes this.

Link to comment

Interesting discussion.  IMHO, yes there is a sweet "spot", but the spot is not a laser point - it is in fact a sizeable area that allows movement of your head and your body.  If one has to sit tight, frozen into a laser-point position to receive the sound wave, I am sorry, I am not one who can do that, and it is not relaxing at all to hold such a frozen position for hours. 

 

A subwoofer does not need to point at the sitting position because the frequencies it produces are so low and the sound waves so wide that they embrace the whole area; you do not need a sweet spot to hear where the sound is coming from.  I believe the same applies to other frequencies, though the higher they go, the smaller the sweet "spot" becomes.
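
A quick way to see why a subwoofer's output embraces the whole area is to compare wavelength with typical room dimensions. A minimal Python sketch, assuming 343 m/s for the speed of sound:

```python
# Wavelength vs frequency: at subwoofer frequencies the wavelength is far
# larger than a typical listening room, which is why the sound seems to fill
# the space rather than point at a spot.
SPEED_OF_SOUND = 343.0  # m/s, assumed

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency."""
    return SPEED_OF_SOUND / freq_hz

if __name__ == "__main__":
    for f in (20, 24, 40, 100, 1000, 10000):
        print(f"{f:6d} Hz -> wavelength {wavelength_m(f):7.2f} m")
```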

MetalNuts

Link to comment
9 hours ago, Abtr said:

I'm quite sure this must be caused by the bass overtones (sympathetic vibration) which can be localized even with a (single) subwoofer. A plain 24Hz source can't be localized. 

 

Another claim. And "quite sure" ?

 

Orelo MKII Sub-low Specs <-- Please read.

OK, you won't. Then a small excerpt :

 

Distortion free means : No audible harmonics, and that requires the THD (Total Harmonic Distortion) to be better than 3.8% in the range under 100Hz.
Audible harmonics : For example a 20Hz tone which is totally inaudible to us humans, but now showing a profound audible 40Hz tone because of a 2nd harmonic (40Hz) being too high (worse than 3.8% THD).
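
To put the 3.8% figure in more familiar terms: expressed in decibels, a harmonic at 3.8% of the fundamental sits roughly 28 dB below it. A minimal conversion sketch in Python:

```python
import math

# Convert a harmonic-distortion percentage into its level relative to the
# fundamental: level_dB = 20 * log10(percentage / 100).
def thd_percent_to_db(percent: float) -> float:
    """Level of a distortion component relative to the fundamental, in dB."""
    return 20.0 * math.log10(percent / 100.0)

if __name__ == "__main__":
    for pct in (3.8, 1.0, 0.1):
        print(f"{pct:4.1f}% -> {thd_percent_to_db(pct):6.1f} dB relative to the fundamental")
```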

 

[Image: Orelo MKII Sub-low frequency response plot]

 

This is straight down to 19Hz, +/- 0.5 dB, OK ? (no, not +/-3dB, which would be a killing 6dB difference). Notice the 1/12-octave smoothing, which is hardly any smoothing at all. So nothing is faked anywhere.

The text in the link also states that this is at 88dBSPL.

Edit : And the roll-off explicitly shows that should frequencies lower than 19Hz be present, they don't audibly distort either. So this is all carefully (DSP) tuned. And btw, the -3dB point would be at 16.5Hz.

 

So all we can probably be quite sure about is that the context I am talking about is as unknown as the moon to most.

Abtr, your idea about this is 100% OK of course. I would have said the same (I hope that this is obvious now). But in my case all is different ...

 

And oh, about unknown contexts: listening to undistorted low frequencies requires re-learning how to listen. I am serious. You just won't know the experience. And all owners of this speaker most certainly agree.

 

Now envision the extra dimension in the listening experience when you'd be able to perceive bass from left and right separately. And news maybe : you won't even know it's in the material without being able to perceive it. But I do ...

More news : even with higher frequency support, an upright double bass sounds from the center, even if the material depicts that the guy played left on stage. Unless you have the same experience as I do (all over), this is about the channel separation in the electronics. Go look up the figures of your DACs. What do they say ? Here's mine (from 2011) :

 

-  Channel separation you probably never heard before (watch the lowest basses);

and

channel separation of better 125dB (130dB with differential (XLR) output)

 

Maybe this is not so normal (but this is up to you). But wait, maybe people observe this all through vinyl playback. What's that ? a generous 40dB ? nah, make it 35dB.

So yes, that comes across as dead center to me (I am serious as I have a 1000+ vinyl rips just the same).
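
For reference, a minimal Python sketch converting the separation figures quoted above into linear crosstalk ratios; the 125/130 dB and 35-40 dB numbers are the ones from the post, not independent measurements.

```python
# Channel separation in dB -> how much signal leaks into the other channel,
# as a linear amplitude ratio and a percentage.
def crosstalk_ratio(separation_db: float) -> float:
    """Amplitude of the leaked signal relative to the original."""
    return 10.0 ** (-separation_db / 20.0)

if __name__ == "__main__":
    for sep_db in (35, 40, 125, 130):
        r = crosstalk_ratio(sep_db)
        print(f"{sep_db:3d} dB separation -> crosstalk {r:.2e} ({r * 100:.4f}% of the signal)")
```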

 

:)

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
7 hours ago, GUTB said:

Soundstage width in my experience is guided less by the electronics and more by the music and speakers

 

I won't disagree with this, but have a remark anyway;

 

The most profound means of influencing the imaging is the high frequencies. Now a whole plethora of prerequisites is in order, beginning with the digital filtering. Thus, let that smear the happening and imaging deteriorates. No further words on this please, but do know that I speak with the "no ringing" filtering as a base. No quality judgment, but my way to get to the subject of this thread (which for me is more explicit regarding my background (the Phasure name, blahblah)). But also how I started the NOS DAC, which wasn't allowed to be that, and thus in-software filtering emerged. More blahblah.

 

People should, by whatever means, try to envision what actually happens when you have a speaker which is 118dB sensitive for the complete mid-high range from ~180Hz to 22KHz and which thus requires only 1 Watt to blow the windows out. This is all so super speedy that it is another world. Really. And yes, I made it like that for that sheer reason. And yes, the DAC does 2000V/us (rise/fall time) on its output stage for inordinate reasons (like the D/A chips not doing that at all - but, always oversize in audio !). It all doesn't make sense really (ah, that's what you thought already) but it all contributes.

 

Summarized : what I just talked about is mostly impeded by the electronics. The speaker drivers themselves help vastly of course, but meanwhile it is the sheer electrical (not really electronic) arrangement which makes the speaker 118dB. Without that arrangement it would be 115dB. Does that 3dB matter ? you don't want to know !!
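
As a sanity check on "does that 3dB matter": in power terms 3 dB is roughly a factor of two. A minimal Python sketch, assuming sensitivity is quoted as dB SPL for 1 W at 1 m and ignoring distance effects:

```python
# How much amplifier power is needed to reach a target SPL, given a speaker
# sensitivity quoted as dB SPL for 1 W at 1 m (distance effects ignored).
def watts_for_spl(target_spl_db: float, sensitivity_db_1w: float) -> float:
    """Required power in watts to reach target_spl_db with the given sensitivity."""
    return 10.0 ** ((target_spl_db - sensitivity_db_1w) / 10.0)

if __name__ == "__main__":
    target = 118.0  # arbitrary target level, chosen for easy comparison
    for sens in (115.0, 118.0):
        print(f"sensitivity {sens:.0f} dB/W -> {watts_for_spl(target, sens):.2f} W for {target:.0f} dB SPL")
```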

 

As of late I am talking about electric butterflies dancing through the room (this started to work like that with Lush^2). The last few days, however, I describe the lot as electrifying, which is the electric butterfly^2, because it now is the whole energizing of the room (and this time because of a new ^2 interlink - it never stops).

But sticking to the butterflies : they dance everywhere, impeded by the "noises" of the music. Btw, the Entheogenic album I referred to yesterday is a good example of it. And I know, most people are not interested in such music (if music at all), @semente ahead. And if the "play" is not in the music, then no butterflies, no seagulls and also no crows. But basses playing from one side only ? sure. But it would be "weak" for the experience. People will be as pleased with the bass from the middle.

So this is not about "I'm good". It is about how sounds may appear between the speakers only and the why of that (reading the OP may make you think differently :P). Try LP and you may understand. Now it's not electronics but mechanics (or something).

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
18 minutes ago, MetalNuts said:

an article about problems of stereo production and the fix:

 

Yes. And I am afraid that @STC is all about that and that he does not really like to see his hard work debunked. Which won't be the case anyway, I'm sure. But maybe there are more ways leading to Rome, some unexplored.

And most probably, Ambiophonics achieves more of it than normal stereo might be able to. I can't tell, but I assume so.

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
58 minutes ago, MetalNuts said:

but in fact a sizeable area that allows movement of your head and your body. 

 

Even in a live performance head shifting happens all the time and the stage (or object) changes position. It is understood that the sweet spot is not a pin-point area but a reasonable spot where the sound sounds best with the correct perspective. In a concert hall the sweet spot is subjective, as you can choose wherever sounds best to you. In stereo reproduction, the whole idea is to recreate the stage accurately, and that can only happen when both speakers' energy is directed to a spot and arrives with exact timing, so that a solid phantom centre image is created.

Link to comment
31 minutes ago, PeterSt said:

Yes. And I am afraid that @STCis all about that and that he does not really like to see his hard work debunked.

 

Nothing there to debunk. It is a valid and accepted method. "The most recent, and the most ambitious, attempt to extract the maximum from legacy stereo recordings is Ambiophonics (Glasgal, 2001, 2003; www.ambiophonics.org). It has gone through several phases of evolution, incorporating binaural techniques as well as complex synthesis of spatial effects to provide optimum sound delivery." Toole ( Sound Reproduction: Loudspeakers and Room ).

Link to comment

 

1 hour ago, MetalNuts said:

 

So ... People who seriously read that will see how differently I approach it all. But, I am not an MD and I did not study medicine. But ...

@vrao is, and did (with a focus on the auditory system). I am pretty sure that mentioning him won't work out (he will be busy elsewhere these days), but he is the owner of all the same gear (also the speaker) and he also treated his room, unlike me. And I already know how this works out for him. But he needs to tell it himself (and you will see that he will also tell you that it works out better at his place than at mine). If he chimes in ...

 

From both his medical education and my localization project, we together provided a seminar at some show in the US. Mind on Music, it was named. We had discussed in advance how both our approaches could be equally true, with person A (him) saying it is all about timing and person B (me) saying that this can work via phase only. People may sort out for themselves where the truth is, but I personally don't see timers running anywhere in my head, while I - but you too - know that we work with phase to begin with. So ... OK.

 

If I browse quickly through the Ambiophonics article/guide I (personally) see mistakes only. Almost a commercial for the phenomenon. That it all goes against what I think myself doesn't help, and all I see is "false" prerequisites. But good to know : ITD is also known as the Phase Domain (don't take my word for it).

Here's an excerpt from the 100 or so email exchanges with vrao, to prepare for this seminar (btw CAS 2013) :

 

 

VR> Avg. ear distance 21.5 cm, corresponding to an interaural time delay of 625 µs but is different for different head sizes and other factors...
 
PS: Because you seem to depend on time. Time is nothing. It is something someone made up and seems plausible. Phase is another matter, because it is very easy to theoretically see how it works. Example : the speakers are in phase (or not) test. Super easy to perceive, right ? This has nothing to do with time (timing). Add up levels, yes. So, when two waves come to you in the same phase, they add up. When 180 degree not, they subtract. Just level difference. Well, anyone not deaf can perceive level difference. No research needed for that. So all what is needed further is the recognition that minute level differences make us localize. And I tell you again (because it is crucial) : with one frequency only (which is a sine) this can not be done.
 
VR: Well ............ a very compelling argument, luckily I got this out of the way before the talk. Hope you don't have any more such over the top, requires a thesis research project to clarify. Anyway as you mentioned the answer was in the slides, sharp noises including high frequency will be ITD or in the phase domain. The overlap is more than the 1.6, it extends to 3-5K. Maybe more who knows, but there is a frequency dominated range and it has to do with the auditory pathway. I'll write some thing up on a slide. 
 

So fun. Two people from completely different fields, sparring. 
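
To connect the two views in that exchange with numbers: the 625 µs figure follows (within rounding) from the 21.5 cm ear spacing, and the phase description of that same delay becomes ambiguous (exceeds 180 degrees for a fully lateral source) a little below 800 Hz. A minimal Python sketch, with the speed of sound assumed at 343 m/s:

```python
EAR_SPACING_M = 0.215    # average ear-to-ear distance quoted in the email
SPEED_OF_SOUND = 343.0   # m/s, assumed

# Maximum interaural delay for a source fully to one side (straight-line path).
max_itd_s = EAR_SPACING_M / SPEED_OF_SOUND

# The interaural phase difference that this delay produces depends on frequency;
# above the frequency where it exceeds 180 degrees, phase alone is ambiguous.
ambiguity_freq_hz = 1.0 / (2.0 * max_itd_s)

print(f"max ITD               : {max_itd_s * 1e6:.0f} microseconds")
print(f"phase ambiguous above : {ambiguity_freq_hz:.0f} Hz (for a fully lateral source)")
for f in (200, 500, 1000, 1500, 3000):
    phase_deg = 360.0 * f * max_itd_s
    print(f"{f:5d} Hz -> interaural phase of a hard-panned source: {phase_deg:6.1f} degrees")
```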

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
