
Sound Stage - Is it all in the timing?



I can't imagine the USB cable having any effect, positive or negative, on imaging. As long as the digital file being sent over that USB cable has proper separation (more than about 30 dB, and all digital audio has greater than 90 dB of electronic separation), the stereo soundstage will be intact.
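A quick unit check may help ground those figures. Converting a channel-separation spec in dB to a linear amplitude ratio (a minimal sketch; the function name is mine) shows how small the leaked channel is in each case:

```python
def separation_db_to_amplitude_ratio(db: float) -> float:
    """Linear amplitude of the leaked channel relative to the
    wanted one, for a given channel separation in dB."""
    return 10 ** (-db / 20)

# ~30 dB separation: crosstalk is about 3% of the signal amplitude
print(f"{separation_db_to_amplitude_ratio(30):.4f}")   # 0.0316
# ~90 dB separation: crosstalk is about 0.003% of the signal amplitude
print(f"{separation_db_to_amplitude_ratio(90):.6f}")   # 0.000032
```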

Hi George

Just because you can't imagine it doesn't mean it isn't so.

Hundreds of posters in different threads can attest that USB cables with improved separation between data and power have an improved soundstage. It's about minimising the degrading effects of RF/EMI hitching a ride along with the binary data.

E.E. John Swenson has been able to demonstrate this. Obscure low-level detail and low-level harmonics with system noise, and the soundstage will be degraded. With a good recording having plenty of low-level ambience and detail, try listening to it first with the air conditioning on, then with the A/C turned off, to get the general idea.

Digital audio isn't as resilient to the effects of RF/EMI as many would wish to believe!

Regards

Alex

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020


Since I am yet to hear a difference between USB cables, my point of view remains intact for all those who might be interested: USB cables impart no difference in the audible portion of an analog presentation.

 

On a side note: lots of misuse of the term 'jitter' here again. Folks, jitter is not the omnipotent variable in digital audio.

 

Talking USB in relation to soundstage to me sounds like judging wines based on the bottle shape.

 

Enjoy!

Since I am yet to hear a difference between USB cables, my point of view remains intact for all those who might be interested: USB cables impart no difference in the audible portion of an analog presentation.

 

USB Cables can impart significant difference in the audible portion of an analog presentation, such as the sound coming out of the amplifier and going to the speakers... not that they always do, but they sure can.

 

 

On a side note: lots of misuse of the term 'jitter' here again. Folks, jitter is not the omnipotent variable in digital audio.

 

Talking USB in relation to soundstage to me sounds like judging wines based on the bottle shape.

 

Enjoy!

 

Huh again - jitter will kill the sound of a device. Collapse the soundstage, make anything sound like a patented ear ripper, and fatigue any listener in record time. While I agree that it is not the digital boogeyman some make it out to be, it is a significant factor in the sound of digital devices, in particular DACs.

 

Drop by here anytime; I can demo three or four cables that I think even you will have to admit "changed the soundstage". The explanation of why I will leave up to you. :)

 

-Paul

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Since I am yet to hear a difference between USB cables, my point of view remains intact for all those who might be interested: USB cables impart no difference in the audible portion of an analog presentation.

 

You have posted the same kind of disclaimer several times previously. The vast majority of C.A. members simply do not agree with you, as evidenced by the thousands of posts and numerous threads on the subject in this forum.

Perhaps, as you are unable to hear these things, and as being on the wrong side of 50 means you are unlikely to hear much above 15 kHz (if that), you should get younger people to evaluate your speaker designs by actually listening to HF detail, instead of relying on what your measurements appear to be suggesting? (grin)

 


Drop by here anytime; I can demo three or four cables that I think even you will have to admit "changed the soundstage". The explanation of why I will leave up to you.

 

-Paul

 

Paul

You will need to convince him, by the use of DBT in a format suitable to him, that YOU can hear these differences, as his preconditioned brain will almost certainly refuse to permit him to hear these things for himself.

You may as well ask Archimago or several of the other naysayers along too, as they are also highly unlikely to hear them either, no matter how obvious the differences are to others WITHOUT the need for DBT.

Perhaps you and Alex C could join forces on a suitable demo?

 

Alex

 


Paul

You will need to convince him, by the use of DBT in a format suitable to him, that YOU can hear these differences, as his preconditioned brain will almost certainly refuse to permit him to hear these things for himself.

You may as well ask Archimago or several of the other naysayers along too, as they are also highly unlikely to hear them either, no matter how obvious the differences are to others WITHOUT the need for DBT.

Perhaps you and Alex C could join forces on a suitable demo?

 

Alex

 

Relax; there are people who do not hear differences in USB cables, and thank goodness there are. They act as a control on the wilder side of things.

 

But usually once a little directed listening has taken place, that changes. Not always, but often. Then DBT tests do show positive results.

 

In any case, it isn't worth arguing over: if someone doesn't hear a difference, they are a lucky soul. You have no idea how much it irked me to put out $179 on a USB cable a few weeks ago. It made all the difference in the world to me, but I still resent having to spend that kind of money on a cable that should not make any difference in the first place.

 

-Paul


I tend to think that all the soundstage information we hear is related to phase differences we perceive.

 

The sound stage perceived from a two-channel recording is a complex function of:

 

-interaural level difference

-interaural temporal difference

-the ratio of a source's direct sound over the reflected sound (both recorded and added by the replay room) and the decay time of that reflected sound

-the spectral balance of the direct sound (skewed by the attenuation in air and, more importantly, how a spectrum matches the ear's HRTF for a particular angle of incidence)

-the precise temporal location of lateral reflections in the replay room (some are good, contrary to common thinking)

-the randomness of the room's lateral sound components

 

 

As an aside: I can get excellent sound stage from well-recorded 128Mbps MP3s, both in-room and using decent headphones. And the way to get there involved moving big bits around in a very physical world, as opposed to fooling around with the cables that move the little bits around.
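The interaural temporal difference in that list is easy to put rough numbers on. The sketch below uses Woodworth's classic spherical-head approximation; the head radius and speed of sound are assumed typical values, not figures from this thread:

```python
import math

HEAD_RADIUS_M = 0.0875   # typical adult head radius -- an assumption
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def itd_woodworth(azimuth_deg: float) -> float:
    """Interaural time difference in seconds for a distant source at
    the given azimuth, using Woodworth's spherical-head formula:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source dead ahead produces no ITD; one at 90 degrees to the side
# arrives roughly 0.66 ms earlier at the near ear.
print(f"{itd_woodworth(90) * 1000:.2f} ms")   # prints "0.66 ms"
```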

You have posted the same kind of disclaimer several times previously. The vast majority of C.A. members simply do not agree with you, as evidenced by the 1,000s of posts and numerous threads on the subject in this forum.

 

Hardly surprising, as the stylus has been stuck in his broken record for a long time.

"Relax, it's only hi-fi. There's never been a hi-fi emergency." - Roy Hall

"Not everything that can be counted counts, and not everything that counts can be counted." - William Bruce Cameron

 

The sound stage perceived from a two-channel recording is a complex function of:

 

-interaural level difference

-interaural temporal difference

-the ratio of a source's direct sound over the reflected sound (both recorded and added by the replay room) and the decay time of that reflected sound

-the spectral balance of the direct sound (skewed by the attenuation in air and, more importantly, how a spectrum matches the ear's HRTF for a particular angle of incidence)

-the precise temporal location of lateral reflections in the replay room (some are good, contrary to common thinking)

-the randomness of the room's lateral sound components

 

 

As an aside: I can get excellent sound stage from well-recorded 128Mbps MP3s, both in-room and using decent headphones. And the way to get there involved moving big bits around in a very physical world, as opposed to fooling around with the cables that move the little bits around.

 

Thanks, nicely thorough and informative. Would you include phase relationships among the temporal factors affecting soundstage; as a separate factor affecting soundstage; or as a factor not affecting soundstage?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Would you include phase relationships

 

'Phase relationships' is useless audiophilistic talk. Can you tell me what these phase relationships are? A component or speaker has a phase response, but as long as both channels have the same response, and as long as non-linear phase distortion in the mid-band is limited and contains no drastic changes, it is a non-issue.

 

Interested parties may want to read up on microphone techniques and on auditory localisation, e.g. the works of Jens Blauert.


Replies like post 35, plus the quote below, make me wonder what you are doing at the Computer Audiophile forum.

 

https://www.google.com.au/#q=phase+relationship+with+audio

 

128Mbps MP3s indeed!

 

As an aside: I can get excellent sound stage from well-recorded 128Mbps MP3s, both in-room and using decent headphones. And the way to get there involved moving big bits around in a very physical world, as opposed to fooling around with the cables that move the little bits around.

 


'Phase relationships' is useless audiophilistic talk. Can you tell me what these phase relationships are? A component or speaker has a phase response, but as long as both channels have the same response, and as long as non-linear phase distortion in the mid-band is limited and contains no drastic changes, it is a non-issue.

 

Interested parties may want to read up on microphone techniques and on auditory localisation, e.g. the works of Jens Blauert.

 

Thinking of my Vandersteen speakers with their "time-aligned" drivers - they've always done marvelously well in terms of sound stage, giving all the variety in the signal they receive (huge for recordings in spaces like cathedrals or where there's been signal processing in the recording to artificially obtain that sound; small where the recording's in a studio, a room in a house, or purposely crushed, as in Tom Waits' recording of "Heigh Ho" from the Snow White soundtrack; and everything in between). Vandersteen's literature speaks of the design of the crossover and its effect on phase being important to maintenance of this "time alignment."


Care to learn why the use of 'phase' in the majority of these finds is a misnomer?

From you? No thanks. Try connecting your speakers out of phase!

As is so often the case, someone new comes along with an agenda to attack the "Audiophile" in Computer Audiophile, not to learn or share experiences but to preach, usually with sarcastic replies like yours.

Anybody who seriously believes that 128 kbps can have a very good soundstage is delusional.

Using a decent pair of headphones and a good quality headphone amplifier will show just how poor MP3 really is with challenging material until you get to 320 kbps (although 256 kbps may be acceptable to some non-discerning listeners).

It's dull, boring and lifeless in comparison even with Red Book CD, let alone 24/96, 24/192 or good DSD.

 



Thanks for correcting the typo.

 

Anybody who seriously believes that 128 kbps can have a very good soundstage is delusional.

 

That's an interesting statement. Let's generalise it ...

 

Anybody who seriously believes that AAAA can sound BBBB is delusional.

 

Is this applicable to the sort of claims you, particularly you, so often make here and on other forums?

Vandersteen's literature speaks of the design of the crossover and its effect on phase being important to maintenance of this "time alignment."

 

I wrote

 

"as long as non-linear phase distortion in the mid-band is limited and contains no drastic changes"

 

And about the only place in an audio chain where such distortions would be possible is in the crossover region(s) of multi-way speakers. That is obvious.

But the imaging capabilities of your speakers are a function of much much more than just their on-axis phase response.

I wrote

 

"as long as non-linear phase distortion in the mid-band is limited and contains no drastic changes"

 

And about the only place in an audio chain where such distortions would be possible is in the crossover region(s) of multi-way speakers. That is obvious.

But the imaging capabilities of your speakers are a function of much much more than just their on-axis phase response.

 

Re "obvious," thank you for making the answer more obvious to me. As you know, the technical side of audio engineering is not a strong point for me, so I am happy to be able to get a little more understanding as a layperson. Re much more than on-axis phase response being involved, sure; my question was just if phase response could be involved at all, thinking of the example of the crossover design in my speakers, and that's been answered affirmatively.



Uh huh... manipulating bits can certainly have an effect on all these things.

 

So can moving a speaker an inch or so.

 

Or changing the listening location.

 

And so can USB cables.

 

All of which affect the phase of the signals from the speakers, which can and does create interference patterns at various locations in the room that humans can both hear and interpret as meaningful location information. Those exact locations being defined by a large number of variables, including all the ones mentioned above.

 

It's easiest understood if you skip the FFTs and use two coherent waves, originating from two sources, and traveling different distances. At some points those waves are going to interact and produce constructive interference, and at some points, they will interact and produce destructive interference. Manipulation of those points, in whatever manner, moves those points around and humans, using many of the mechanisms you detailed below, process that data. The same thing happens, in a more complicated way, in a normal audio system playing music.
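The two-coherent-sources picture described above can be sketched numerically. This is an illustrative equal-amplitude phasor-sum model, not a room simulation, and the function names are mine:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def summed_amplitude(freq_hz, listener, src_a, src_b):
    """Amplitude of two equal, coherent tones summed at the listener
    position: 2.0 is fully constructive, 0.0 fully destructive."""
    delta = math.dist(listener, src_a) - math.dist(listener, src_b)
    wavelength = SPEED_OF_SOUND / freq_hz
    phase = 2 * math.pi * delta / wavelength
    # magnitude of the sum of two unit phasors separated by `phase`
    return abs(2 * math.cos(phase / 2))

# Stereo pair 2 m apart; on the centre line both paths are equal,
# so a 1 kHz tone adds fully constructively:
print(summed_amplitude(1000, (0.0, 2.0), (-1.0, 0.0), (1.0, 0.0)))  # 2.0
# A few centimetres off-axis the sum is already somewhere in between:
print(summed_amplitude(1000, (0.05, 2.0), (-1.0, 0.0), (1.0, 0.0)))
```

Moving the listener, the speakers, or anything that shifts relative phase moves those constructive and destructive points around, which is the mechanism the post describes.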

 

If you have a correctly setup system, you will already have taken into account much of this, and located the speakers in a location to produce the best sound. Further, you will probably have already cleaned up the rest of the system to produce the best possible sound as well, within the limits of the equipment. In a case like this, changing the USB cable can be the difference between a good system and a *really* good system.

 

So what's your point? Obviously manipulation of the signal will change this, but just as obviously, manipulation of the physical environment will accomplish the same ends, and allow one to hear a recording more as it was engineered.

 

I do agree with you that you can have an excellent soundstage, with real extension, depth, and separation, from a 128 kbps MP3. But it is a lot easier to hear and appreciate with higher sampling rates. :)

 

-Paul

 

 

 

 

The sound stage perceived from a two-channel recording is a complex function of:

 

-interaural level difference

This is mostly a human thing - humans have signal receptors in both ears that are excited by stimulation in one ear, and inhibited by stimulation in the other ear.

-interaural temporal difference

This is the difference in the arrival time of a signal to each ear.

-the ratio of a source's direct sound over the reflected sound (both recorded and added by the replay room) and the decay time of that reflected sound

Uh huh...

-the spectral balance of the direct sound (skewed by the attenuation in air and, more importantly, how a spectrum matches the ear's HRTF for a particular angle of incidence)

I think you are talking more about localization in the horizontal plane here, which is primarily accomplished in humans by ILD and ITD.

-the precise temporal location of lateral reflections in the replay room (some are good, contrary to common thinking)

Uh huh...

-the randomness of the room's lateral sound components

Uh huh again...

 

 

As an aside: I can get excellent sound stage from well-recorded 128Mbps MP3s, both in-room and using decent headphones. And the way to get there involved moving big bits around in a very physical world, as opposed to fooling around with the cables that move the little bits around.



So what's your point?

 

To list the properties of a sound field that are known to figure importantly in the auditory location of sound sources. It is left to the reader to ponder which aspects of audio system performance affect these properties, and how.

 

And no, the item mentioning HRTFs emphatically is not uniquely concerned with lateral localisation.


Most of the theoretical discussion goes over my head, but I would like to add a comment about phase and its effect on sound stage. I use Audiolense digital room correction software. With this you have the capability of doing a frequency-response correction only, or a frequency-response and phase correction together. In my own experience, as well as from reading other users' experiences, there is generally no comparison between a frequency-only correction and a frequency-and-phase correction. The improvement in sound stage with time-domain/phase correction is clearly audible to me.

I don't know how much this relates to the original topic but I feel it's well worth mentioning in the context of the effect of phase on sound stage.

To list the properties of a sound field that are known to figure importantly in the auditory location of sound sources. It is left to the reader to ponder which aspects of audio system performance affect these properties, and how.

 

And no, the item mentioning HRTFs emphatically is not uniquely concerned with lateral localisation.

 

For those like me unfamiliar with the acronym: HRTF = "head related transfer function," which can be thought of as the effects on a sound as it travels from outside the ear to the eardrum.



On a related note, some people might wonder if we can objectively measure soundstage height, depth and width.

Here is a response I received from CA member tonmeister86. Tonmeister86 is Sean Olive | Director, Acoustic Research | Harman International | Audio Musings by Sean Olive

 

 

Originally Posted by Blake

Sean:

 

Apologies for the off-topic detour, but in your opinion or experience:

 

With currently available measurement equipment, is it possible to measure all aspects of sound? For example, is it possible to measure soundstage width, height and depth? How about imaging or instrumental layering/separation?

 

I would be very interested to know your views on this topic.

 

Cheers,

 

Blake

 

Response from Sean Olive: "Current measurements are able to capture the linear and nonlinear distortions in audio equipment, which can be used to predict perceptual dimensions related to timbre (e.g. bright/dull, clarity, coloration, etc.). Current measurements of nonlinear distortion such as THD are not reliable indicators of audibility, as the added harmonics are often masked by the signal.

 

Spatial dimensions are generally harder to characterize with measurements as the recordings themselves, speaker directivity and listening room all interact in ways that affect the dimensions you suggest. That said, binaural measurements at the listening seat using some signal processing can reveal the general location of the image (azimuth) and the width and envelopment of the imagery which is related to the IACC. Look at some of PhD work of Wolfgang Hess for example."

 

Time-variant Binaural Activity Characteristics as Indicator of Auditory ... - Wolfgang Hess - Google Books
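For readers curious what the IACC Olive mentions actually is: it is commonly computed as the peak of the normalised cross-correlation between the two ear signals within roughly ±1 ms of interaural lag. A toy sketch (the implementation details here are my assumptions, not Hess's or Olive's):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the peak of the
    normalised cross-correlation of the two ear signals within
    +/- max_lag_ms of interaural lag (the usual ~1 ms window)."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            c = np.sum(left[:lag] * right[-lag:])
        best = max(best, abs(c) / norm)
    return best

# Identical ear signals -- a hard-centred phantom image -- give IACC = 1;
# decorrelated ear signals (a diffuse, enveloping field) give values near 0.
fs = 48000
t = np.arange(fs // 10) / fs
sig = np.sin(2 * np.pi * 500 * t)
print(round(iacc(sig, sig, fs), 3))   # 1.0
```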

Speaker Room: Lumin U1X | Lampizator Pacific 2 | Viva Linea | Constellation Inspiration Stereo 1.0 | FinkTeam Kim | dual Rythmik E15HP subs  

Office Headphone System: Lumin U1X | Lampizator Golden Gate 3 | Viva Egoista | Abyss AB1266 Phi TC 

To list the properties of a sound field that are known to figure importantly in the auditory location of sound sources. It is left to the reader to ponder which aspects of audio system performance affect these properties, and how.

 

 

Perhaps some very confused readers, but okay, I can live with that.

 

 

And no, the item mentioning HRTFs emphatically is not uniquely concerned with lateral localisation.

 

I'm not certain that you will have enough refraction of the direct sound through air at normal listening distances to have a significant amount of vertical localization information. Especially with audio frequencies. But I could easily be wrong on that. Have to calculate it.

 

How exactly do you manipulate the distributed spectral energy in the direct sound, and in what way to achieve what end? Mind expanding on that just a little please?

 

-Paul



Blake, re Sean Olive's reference to signal masking THD: I'm noticing lots of stuff about masking, and wondering if we are looking at this the wrong way round. We're trying to hear the music, not the noise, so shouldn't the question be how much of the low-level music signal is masked by the noise, rather than vice versa?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Recordings themselves have variable quality, as we know; not all recordings are made optimally, but when they are recorded properly the sound stage is excellent.

Even with poor soundstage in the recording, it can be "evolved" or "grown" by the use of hyper-end power cables. Bad recordings turn into good recordings. A good recording, umm... turns into an even better recording.

 

I don't believe the cable can actually add anything to the signal to perhaps brighten it up.

The signal from the source is constant; it doesn't change. But the power cables create / re-manufacture / alter "sound" in two ways:

 

1) Vibrations, through resonance properties of the cables themselves. Power cables make the most significant difference.

 

2) Noise damping: unwanted noise outside of 60 Hz, including RFI and EMI.

 

The soundstage information needs to exist in the recording in the first place, because playing MP3 or radio generally has no soundstage at all; there's only the mid point between the speakers, and that's it.

All music has its sound stage; one recording may have more than another. But with power-cable upgrades the sound stage can be increased beyond what was engraved in the recording itself. Spaciousness can also increase significantly, with more clarity, separation and texture. Power cables can get very, very expensive. I know it's hard to believe... it sounds *too* magical. But it does work. I witnessed it, as did many other folks in the small audio community.

 

 

 

The noise floor in current DACs is very low: −90 dB, and below that for the really good ones. If music contains the finer details that allow the soundstage to breathe, at what point does the DAC need to be extremely quiet? −120 dB? Would we hear anything at this level? Given the noise from a typical USB connection, and from other sources such as PSU mains-noise injection, at what point would the noise interfere with the soundstage? Anybody have some rules of thumb?

 

It seems pointless to have a very quiet DAC when it's being swamped with noise via the front and back door.

Noise can infest not only the DACs but, even more so, the amplification devices. It is natural to draw more current and give off more electrical noise at high output power. That is why I have been telling you guys not to listen to music too loud. Once you get good hearing from a good diet, you won't need to listen to music loud, because you can hear it well at lower volumes.
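For context on the −90 dB figure in the quoted question: the textbook dynamic range of linear PCM follows directly from bit depth (about 6.02 dB per bit plus 1.76 dB for a full-scale sine against quantisation noise). A minimal sketch:

```python
def pcm_dynamic_range_db(bits: int) -> float:
    """Textbook dynamic range of linear PCM: about 6.02 dB per bit
    plus 1.76 dB (full-scale sine versus quantisation noise)."""
    return 6.02 * bits + 1.76

print(round(pcm_dynamic_range_db(16), 1))   # 98.1  (CD / Red Book)
print(round(pcm_dynamic_range_db(24), 1))   # 146.2 (24-bit, in theory)
```

So a −90 dB analog noise floor already sits close to the theoretical limit of 16-bit material; whether noise below that matters audibly is exactly the open question here.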

 

 

bunny

 

  • Windows PC + Creative EMU0404 USB DAC w/ stock USB cable
  • Focal CMS 65 speakers
  • Very hyper-end Power cables for all components

 
