
Sound Stage - Is it all in the timing?


Recommended Posts

Blake, re Sean Olive's reference to signal masking THD: I'm noticing lots of stuff about masking, and wondering if we are looking at this the wrong way round. We're trying to hear the music, not the noise, so shouldn't the question be how much of the low-level music signal is masked by the noise, rather than vice versa?

 

 

Good point Jud. Perhaps we can get Sean to jump in to provide further insight. I just sent him a pm inviting him to join, to the extent he has time.

Speaker Room: Lumin U1X | Lampizator Pacific 2 | Viva Linea | Constellation Inspiration Stereo 1.0 | FinkTeam Kim | dual Rythmik E15HP subs  

Office Headphone System: Lumin U1X | Lampizator Golden Gate 3 | Viva Egoista | Abyss AB1266 Phi TC 

Link to comment
Even with a poor soundstage in the recording, it can be "evolved" or "grown" by the use of hyper-end power cables. Bad recordings turn into good recordings. A good recording, umm... turns into an even better recording.

 

 

The signal from the source is constant; it doesn't change. But the power cables create / re-manufacture / alter the "sound" in two ways:

 

1) Vibrations, via resonance properties of the cables themselves. Power cables make the most significant difference.

 

2) Noise damping - unwanted noise outside of 60 Hz, including RFI and EMI.

 

 

All music has its soundstage; one recording may have more than another. But with power cable upgrades the soundstage can be increased beyond what was engraved in the recording itself. Spaciousness can also increase significantly, with more clarity, separation and texture.

 

bunny

 

This is exactly what I do not want in audio equipment - something that imposes its own sound on the material. That is by definition a distortion, and more than that, any time you have a single sound character imposed on everything you listen to, it quickly becomes boring, no matter how impressive it may seem at first. (This is the same reason I have no time for "sonic spectacular" recordings - I'll always choose artistic merit.) I think of Tom Waits' version of "Heigh Ho," and the wonderful humor of giving a song supposedly sung by dwarves who spend their days mining in caves exactly the soundstage and atmosphere it deserves - a crushed soundstage, and a plodding rhythm fitting people at the end of a workday underground. To make that into something soaring and airy would be a travesty of the artist's intent.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
Perhaps some very confused readers, but okay, I can live with that.

 

 

 

 

I'm not certain that you will have enough refraction of the direct sound through air at normal listening distances to carry a significant amount of vertical localization information, especially at audio frequencies. But I could easily be wrong on that; I'd have to calculate it.

 

How exactly do you manipulate the distributed spectral energy in the direct sound, and in what way to achieve what end? Mind expanding on that just a little please?

 

-Paul

 

Directivity and power response are two ways... speaker placement is another, but it won't correct for the previous two.

 

I've been mentioning power response, directivity, polar response, phase quadrature, etc., for years here on CA, but for some reason nobody takes the time to read the vast sea of concrete literature available explaining these properties... everything falls back to bits and jitter, which IMO make up around 10% of what we hear, and much less for those with decent DACs, which can be had for just shy of $150 these days.

 

...and yet I get labeled as the broken record, or the one beating my head against a wall. Sean Olive, Earl Geddes, Tom Danley... these are the guys pioneering the most exciting analog experiences around and publishing remarkable work in the acoustics realm.

Link to comment
...and yet I get labeled as the broken record, or the one beating my head against a wall.

 

That's because you keep insisting that the speakers and the room are the be-all and end-all, and you mainly ignore the other important areas. It doesn't help, either, that you are unable (unwilling?) to hear any of the differences so many other members report from USB cables (mainly), power cables, and software players, even ones playing from system memory, where there is a vast amount of anecdotal evidence that these do indeed result in audible improvements.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
That's because you keep insisting that the speakers and the room are the be-all and end-all, and you mainly ignore the other important areas. It doesn't help, either, that you are unable (unwilling?) to hear any of the differences so many other members report from USB cables (mainly), power cables, and software players, even ones playing from system memory, where there is a vast amount of anecdotal evidence that these do indeed result in audible improvements.

 

Alex - It's possible that mayhem's system isn't (as) subject to changes in sound due to switches in power or USB cables.

 

In other words, each of you should consider the possibility of system or other environmental differences being responsible for the variance in your experiences, and thus that the other isn't simply being deluded or bull-headed.

 

On second thought...naah! ;D

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

In my experience, soundstage depends heavily on both timing accuracy and frequency accuracy. Personally, I consider frequency reproduction solved. The remaining issue is to get accurate reproduction of transients.

 

I believe there is a mind-boggling amount of crucial information in attack transients, for both timbre recognition and sound-source location. There is necessarily a lot of information in a sound's ending as well, since it is by a room's reverberation characteristics that we detect room size and ambience.

 

So, provided you already have proper frequency response, clean up the attack transients and you should reap a lot of benefits in soundstage.

 

Therefore clocks, jitter reduction or elimination, higher time-domain resolution in recording, higher-resolution processing, a proper output stage (I'm thinking tubes here), appropriate minimal-effect filtering (e.g. minimum-phase filters, apodizing filters, passive vs. active, minimal high-grade components, etc.), and everything else you can implement or use in the HRA hi-fi system to help this, will help.
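To put the timing discussion in scale, here is a small illustrative sketch (plain Python): it compares the sample periods of common rates against the roughly 10 microsecond interaural timing sensitivity cited elsewhere in this thread. Note that inter-channel timing finer than one sample period is still encoded in the amplitude of the samples, so this is a comparison of scales, not a resolution limit.

```python
# Sample periods of common rates vs. a ~10 us interaural timing
# sensitivity (a commonly cited figure, used here as an assumption).
SENSITIVITY_US = 10.0

for fs in (44_100, 96_000, 192_000):
    period_us = 1e6 / fs                    # one sample period, microseconds
    ratio = period_us / SENSITIVITY_US
    print(f"{fs:>6} Hz: {period_us:5.1f} us/sample "
          f"({ratio:.1f}x the 10 us sensitivity)")
```

At 44.1 kHz one sample period is about 22.7 microseconds, i.e. coarser than the cited sensitivity, which is why interpolation between sample values (not sample spacing alone) carries the inter-channel timing cue.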

 

As for the mechanism with cables, that's a whole lot to absorb and understand, but I once did an experiment myself to see what all the polemic was about, by simply adding a piece of Scotch tape on the + power line at the computer end of a USB cable. Not only were the high frequencies higher when using my DAC (I could especially hear it on violins and guitars), but my sensitive tooth also instantly started hurting.

 

I'm not acquainted with the field in detail, but from what I gather, you would not want interference from the power lines affecting your data for audio purposes, and USB was not initially designed with audio data transfer in mind (yes, despite the fact that it is supposedly 'digital data', the digital is actually represented by analog state transitions; and despite the protocol and design guidelines for USB - I know the naysayers' drill, but the proof is in the experimentation). We're stuck with those designs with our generic USB cables, unlike those of us fortunate enough to buy LightSpeed or iFi's more affordable solutions.

 

I would posit that any USB cable that works on reducing the power lines' interference with the audio data helps: e.g. the distancing on these, minimizing noise and interference at the sockets, etc...

 

Finally, you don't need to believe in cables' effects to benefit from the previous description about timing, attack transients and soundstage.

 

Things I found that vastly improved soundstage in my setup:

 

- Original resolution or high-res audio instead of lossy audio

- Using a ripped file with an audiophile player instead of reading from CD with iTunes (I use Audirvana)

- Using a good flat-response set of monitors or musical speakers with a good amplifier

- Using an external DAC through USB and playing files in BitPerfect mode

- DSD/DXD native recordings rather than lower-resolution PCM recordings

- Room acoustic treatment (using REW and DIY acoustic panels and bass traps)

Dedicated Line DSD/DXD | Audirvana+ | iFi iDSD Nano | SET Tube Amp | Totem Mites

Surround: VLC | M-Audio FastTrack Pro | Mac Opt | Panasonic SA-HE100 | Logitech Z623

DIY: SET Tube Amp | Low-Noise Linear Regulated Power Supply | USB, Power, Speaker Cables | Speaker Stands | Acoustic Panels

Link to comment
Not only were the high frequencies higher when using my DAC (I could especially hear it on violins and guitars), but my sensitive tooth also instantly started hurting.

The high frequencies, even some female singers' voices, may appear to be slightly higher due to less masking of the low-level harmonic structure by noise. If the sound was then fatiguing, it suggests that improvements elsewhere, in the area of further reduction of system noise, may be needed.

No USB cable on its own can stop RF/EMI from getting into the DAC, although some with better separation between data and power wires may result in worthwhile improvements, as described by many members.

Room acoustic treatment and correct positioning of the speakers are also very important, as Anthony keeps saying; however, I disagree with the proposition that it should be done using further digital processing.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment

Am I the only one who finds it strange that there is a discussion about improving the soundstage, apparently without agreement about how it is perceived, and hence what would be required to improve it?

 

The book “Auditory Neuroscience” has a chapter on the neural basis of sound localization, which is well written and quite accessible. The comments below have been taken from that reference.

 

As has been noted above by Fokus, we have a duplex system of sound localization, using both interaural level differences (ILDs) and interaural timing differences (ITDs): ILDs dominate at higher frequencies and ITDs at lower frequencies. The result is that we can hear changes in location of as little as one degree from the midline in front of the head. Sensitivity to vertical changes, or to sources behind the head, is much lower. This implies timing differences at our ears of as little as 10 to 15 microseconds, or level differences of 0.5 to 0.8 dB.
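For a rough check of the one-degree figure, the Woodworth spherical-head approximation can be used (a standard textbook model, not taken from the book cited above; the head radius and speed of sound below are typical assumed values):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """ITD in seconds for a far source at the given azimuth, using the
    spherical-head (Woodworth) approximation: ITD = (a/c) * (theta + sin theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# One degree off the midline comes out at roughly 9 microseconds,
# in line with the 10-15 us sensitivity quoted above.
print(f"{woodworth_itd(1.0) * 1e6:.1f} us")
```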

 

The implication is that anything that changes the relative timing of the music by more than 10 microseconds risks distorting the sound localization.

 

The above does not address determining sound source distance about which less is known.

 

It is probably worth adding that differing aspects of hearing use differing parts of the brain. In particular the qualitative assessment of sound is performed separately from the localization aspect.

Link to comment
That's because you keep insisting that the speakers and the room are the be-all-end-all, and mainly ignore the other important areas. It doesn't help either that you are unable (unwilling?) to hear any of the differences so many other members report about USB cables (mainly), power cables and audible differences between software players, even those being played from system memory, where there is a vast amount of anecdotal evidence that it does indeed result in audible improvements.

 

Simply put... you're deluding yourself. Anecdotal evidence? ...of what? More audiophile B.S.

 

Enjoy!

Link to comment
This implies timing differences at our ears of as little as 10 to 15 microseconds, or level differences of 0.5 to 0.8 dB.

 

The implication is that anything that changes the relative timing of the music by more than 10 microseconds risks distorting the sound localization.

 

The above does not address determining sound source distance about which less is known.

 

It is probably worth adding that differing aspects of hearing use differing parts of the brain. In particular the qualitative assessment of sound is performed separately from the localization aspect.

 

I consider level reproduction solved too.

 

It seems to me that both level and frequency content are useful for distance perception.

 

Hence, as I mentioned, it is by improving timing, especially of attack transients, that we get better soundstage nowadays. There's a lot of work on clocks/jitter and minimal clock phase noise in the new PS Audio DAC and the new iFi iDSD Nano (and many others).

Dedicated Line DSD/DXD | Audirvana+ | iFi iDSD Nano | SET Tube Amp | Totem Mites

Surround: VLC | M-Audio FastTrack Pro | Mac Opt | Panasonic SA-HE100 | Logitech Z623

DIY: SET Tube Amp | Low-Noise Linear Regulated Power Supply | USB, Power, Speaker Cables | Speaker Stands | Acoustic Panels

Link to comment
Directivity and power response are two ways... speaker placement is another, but it won't correct for the previous two.

 

I've been mentioning power response, directivity, polar response, phase quadrature, etc., for years here on CA, but for some reason nobody takes the time to read the vast sea of concrete literature available explaining these properties... everything falls back to bits and jitter, which IMO make up around 10% of what we hear, and much less for those with decent DACs, which can be had for just shy of $150 these days.

 

...and yet I get labeled as the broken record, or the one beating my head against a wall. Sean Olive, Earl Geddes, Tom Danley... these are the guys pioneering the most exciting analog experiences around and publishing remarkable work in the acoustics realm.

 

This is Computer Audiophile, not commercial audio like discos, clubs, casinos, etc. (or "Pro", if you want).

 

Of course the speakers and their listening environment (the listening room) mean a lot for soundstage reproduction, but you cannot forget the source at any time. What about the 10% jitter, etc., that you talk about, amplified, reaching your ears?

 

Even if the speakers are outstanding, and the listening room extraordinarily well matched, I would like a clean source, almost free of any type of distortion.

 

Roch

Link to comment

It should also be noted that, according to presently (generally, but not universally) accepted theory, 16/44.1 is all that is necessary for perfect reproduction of audio, as very few humans can hear even a 22 kHz sine wave, and with typical ambient noise levels a 96 dB range is more than adequate.

Clearly this is not the case, as many people can hear worthwhile improvements with higher-resolution formats, including 24/96 and 24/192 PCM, SACD, and the more recent DSD formats. Does the book also address these issues? If not, then it is out of date and of far less value as a reference. This forum is more about exploring the boundaries and challenging obviously inadequate or outdated theory, not perpetuating the status quo.
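For reference, the 96 dB figure follows directly from the 16-bit word length (roughly 6 dB per bit); a quick sketch:

```python
# Dynamic range of an N-bit quantizer: ~6.02*N dB, plus ~1.76 dB for a
# full-scale sine (the standard SQNR formula). 16 bits gives the ~96 dB
# figure mentioned above; 24 bits gives ~144 dB on paper.
for bits in (16, 24):
    print(f"{bits}-bit: ~{6.02 * bits:.0f} dB "
          f"({6.02 * bits + 1.76:.1f} dB for a full-scale sine)")
```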

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
Am I the only one who finds it strange that there is a discussion about improving the soundstage, apparently without agreement about how it is perceived, and hence what would be required to improve it?

 

The book “Auditory Neuroscience” has a chapter on the neural basis of sound localization, which is well written and quite accessible. The comments below have been taken from that reference.

 

As has been noted above by Fokus, we have a duplex system of sound localization, using both interaural level differences (ILDs) and interaural timing differences (ITDs): ILDs dominate at higher frequencies and ITDs at lower frequencies. The result is that we can hear changes in location of as little as one degree from the midline in front of the head. Sensitivity to vertical changes, or to sources behind the head, is much lower. This implies timing differences at our ears of as little as 10 to 15 microseconds, or level differences of 0.5 to 0.8 dB.

 

The implication is that anything that changes the relative timing of the music by more than 10 microseconds risks distorting the sound localization.

 

The above does not address determining sound source distance about which less is known.

 

It is probably worth adding that differing aspects of hearing use differing parts of the brain. In particular the qualitative assessment of sound is performed separately from the localization aspect.

 

The reception in the cranium generally works quite well, since nature is reasonably au fait with perceiving and getting the timing right to determine "ah, there's a knock on the door to my right", or that the guitar is coming from the right speaker. Barring mitigating medical or chemical conditions where the sense of imagery and directional location is impaired, the discussion of perception isn't the question... this is assumed to work reliably, unlike electronics, which can drift.

 

What doesn't work, from the power outlet to the speaker and the room, is what's of interest: how managing that process maximizes the soundstage, or at least makes it consistent enough that if something is wrong with the soundstage, the problem can be identified and rectified.

AS Profile Equipment List        Say NO to MQA

Link to comment
This is exactly what I do not want in audio equipment - something that imposes its own sound on the material. That is by definition a distortion, and more than that, any time you have a single sound character imposed on everything you listen to, it quickly becomes boring, no matter how impressive it may seem at first.

 

Unfortunately, your speakers and your room will shape the "sound" of the music to a degree so significant that if you switch speakers in the same room it will sound different, and if you put the same speakers in a different space they will also sound different. How do you get away from that? And how do you know what "right" is? I suppose if we could find out what speakers were used in the mixing room, and then listen to those speakers for that song, we could get close. Short of that, we are always listening to our "house sound", whether we believe our gear is dead flat with no distortion or otherwise. That is why speaker selection is so important; but to say you don't want audio equipment that imposes its own sound is to chase a myth.

 

And I am not saying the front end doesn't impact soundstage... it does, and in a significant way. I'm just commenting that the idea that gear can be "transparent" is a myth, at least as far as speakers are concerned. They all "sound" like something... and we pick the one that sounds "best" to us. Subjective, not objective, because two speakers that measure essentially the same can sound very different.

 

John

 

P.S. Sorry if I sound cranky…just off work!

Positive emotions enhance our musical experiences.

 

Synology DS213+ NAS -> Auralic Vega w/Linear Power Supply -> Auralic Vega DAC (Symposium Jr rollerball isolation) -> XLR -> Auralic Taurus Pre -> XLR -> Pass Labs XA-30.5 power amplifier (on 4" maple and 4 Stillpoints) -> Hawthorne Audio Reference K2 Speakers in MTM configuration (Symposium Jr HD rollerball isolation) and Hawthorne Audio Bass Augmentation Baffles (Symposium Jr rollerball isolation) -> Bi-amped w/ two Rythmic OB plate amps) -> Extensive Room Treatments (x2 SRL Acoustics Prime 37 diffusion plus key absorption and extensive bass trapping) and Pi Audio Uberbuss' for the front end and amplification

Link to comment
Thanks for correcting the typo.

 

 

 

That's an interesting statement. Let's generalise it ...

 

Anybody who seriously believes that AAAA can sound BBBB is delusional.

 

Is this applicable to the sort of claims you, particularly you, so often make here and on other forums?

 

128 kbps MP3 is irritating to listen to on a good system. It sounds so bad, it's not even worth bothering with a soundstage discussion.

AS Profile Equipment List        Say NO to MQA

Link to comment
That's because you keep insisting that the speakers and the room are the be-all and end-all, and you mainly ignore the other important areas. It doesn't help, either, that you are unable (unwilling?) to hear any of the differences so many other members report from USB cables (mainly), power cables, and software players, even ones playing from system memory, where there is a vast amount of anecdotal evidence that these do indeed result in audible improvements.

 

Speakers may not be the be-all and end-all, but the scale of their impact is off the charts. Oh, and I'm on your side on the issue of certain cables/interconnects making a noticeable difference. I am all about my ribbon connections between gear, as I think they make a significant difference in my perceived experience (dynamics, clarity and tone all seem better).

 

I just think that speaker selection is the single most important factor in the sound of a system, and that everything else is a distant second.

 

John

 

P.S. Again with the sorry if I sound cranky comment...

Positive emotions enhance our musical experiences.

 

Synology DS213+ NAS -> Auralic Vega w/Linear Power Supply -> Auralic Vega DAC (Symposium Jr rollerball isolation) -> XLR -> Auralic Taurus Pre -> XLR -> Pass Labs XA-30.5 power amplifier (on 4" maple and 4 Stillpoints) -> Hawthorne Audio Reference K2 Speakers in MTM configuration (Symposium Jr HD rollerball isolation) and Hawthorne Audio Bass Augmentation Baffles (Symposium Jr rollerball isolation) -> Bi-amped w/ two Rythmic OB plate amps) -> Extensive Room Treatments (x2 SRL Acoustics Prime 37 diffusion plus key absorption and extensive bass trapping) and Pi Audio Uberbuss' for the front end and amplification

Link to comment
I just think that speaker selection is the single most important factor in the sound of a system, and that everything else is a distant second.

 

John

 

Agreed. Get that right, with correct positioning and some room correction too, and then you are better able to hear the marked differences due to other components. The differences should be far more obvious, NOT less obvious or inaudible as mayhem13 keeps insisting.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
This is Computer Audiophile, not commercial audio like discos, clubs, casinos, etc. (or "Pro", if you want).

 

Of course the speakers and their listening environment (the listening room) mean a lot for soundstage reproduction, but you cannot forget the source at any time. What about the 10% jitter, etc., that you talk about, amplified, reaching your ears?

 

Even if the speakers are outstanding, and the listening room extraordinarily well matched, I would like a clean source, almost free of any type of distortion.

 

Roch

 

Yeah yeah yeah... it's yadda HiFi yadda... commercial or 'PROFESSIONAL' audio has no business imparting any opinions here on CA, where jitter and bitrate rule the roost. It's just this audiophool mentality that keeps this hobby in the dark ages and shrinking quickly. Keep up the good work here!

Link to comment
Am I the only one who finds it strange that there is a discussion about improving the soundstage, apparently without agreement about how it is perceived, and hence what would be required to improve it?

 

The book “Auditory Neuroscience” has a chapter on the neural basis of sound localization, which is well written and quite accessible. The comments below have been taken from that reference.

 

As has been noted above by Fokus, we have a duplex system of sound localization, using both interaural level differences (ILDs) and interaural timing differences (ITDs): ILDs dominate at higher frequencies and ITDs at lower frequencies. The result is that we can hear changes in location of as little as one degree from the midline in front of the head. Sensitivity to vertical changes, or to sources behind the head, is much lower. This implies timing differences at our ears of as little as 10 to 15 microseconds, or level differences of 0.5 to 0.8 dB.

 

The implication is that anything that changes the relative timing of the music by more than 10 microseconds risks distorting the sound localization.

 

The above does not address determining sound source distance about which less is known.

 

It is probably worth adding that differing aspects of hearing use differing parts of the brain. In particular the qualitative assessment of sound is performed separately from the localization aspect.

 

Without going into who really understands this or not, we have plenty of "rule of thumb" tools to use to optimize sound in our listening areas. It really doesn't matter one bit that people do not totally understand what is happening when they change the toe-in of their speakers, or run sophisticated DRC software on a pre/pro. All they are after is getting the best sound.

 

So to answer your question - yes, it makes sense to discuss it. Even for people who do not understand the physics behind it.

 

-Paul

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment
Speakers may not be the be-all and end-all, but the scale of their impact is off the charts. Oh, and I'm on your side on the issue of certain cables/interconnects making a noticeable difference. I am all about my ribbon connections between gear, as I think they make a significant difference in my perceived experience (dynamics, clarity and tone all seem better).

 

I just think that speaker selection is the single most important factor in the sound of a system, and that everything else is a distant second.

 

John

 

P.S. Again with the sorry if I sound cranky comment...

 

(grin) And here I thought the music was always the single most important factor in the sound of a system... (/grin)

 

Yeah, physically, the speakers are the most decisive factor in the sound of an audio system. But after you choose your speakers, there is a lot that you can do to make things sound even better.

 

-Paul

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment
Am I the only one who finds it strange that there is a discussion about improving the soundstage, apparently without agreement about how it is perceived, and hence what would be required to improve it?

 

No.

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
Without going into who really understands this or not, we have plenty of "rule of thumb" tools to use to optimize sound in our listening areas. It really doesn't matter one bit that people do not totally understand what is happening when they change the toe-in of their speakers, or run sophisticated DRC software on a pre/pro. All they are after is getting the best sound.

 

Hey Paul,

 

The latter yes. But none of these "tools" are going to help a thing.

 

I think it is correct that people don't need to fully understand what is really happening, but that is because it won't be solved by understanding on the part of you, "the consumer". So you might toe in more in an attempt to solve a problem (and undoubtedly an optimal setting will be obtained), but the "problem" really has to be solved by, well, someone like me (ha ha).

 

So Paul, by saying what you just said, I'm afraid you only testify that you're not in that league of maybe the few who can see where the real matters happen; and this is so low-level that indeed it is for the few only.

 

Now, the above obviously brings you nothing, and at this moment I myself don't even know where to go, because the subject is so huge that even if all the questions/suggestions from the OP were cut to 10%, we still would not get anywhere any time soon. However, it is my very subject, so I thought to at least start by responding in this thread, and next see whether and how I can contribute usefully. Maybe that won't happen...

 

To start with at least something (again, not worth much): before I started Phasure, I designed a local positioning system with 0.1 mm accuracy in a space of around 50x50 m (~150x150 ft), and you know what? It is all phase based.

When I started Phasure (now 8 years ago) I had the idea that this could be reversed for sound perception. Very briefly: think of GPS as the commonly known application, where you detect a couple of antennas which, by means of phase-angle differences, tell you where you are. This can be reversed: two speakers send sound while you localize the original positions of the instruments.

This is a way, way too brief explanation, but theoretically it can be done in 3D space, provided the sound waves are short enough to allow a sufficient phase-angle difference at the receptor (our ears).
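As an illustration of the time-difference cue this phase-based idea relies on, here is a toy sketch (NumPy; the sample rate and delay are made-up numbers, not from any actual product): delay one channel of a broadband signal by a few samples and recover the delay from the cross-correlation peak.

```python
import numpy as np

fs = 96_000                  # assumed sample rate, Hz
true_delay = 5               # right channel lags left by 5 samples (~52 us)
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)          # broadband "instrument" signal
left = sig
right = np.concatenate([np.zeros(true_delay), sig])[: len(sig)]

# The cross-correlation peak gives the inter-channel lag, i.e. the
# time-difference cue from which a direction can be inferred.
corr = np.correlate(right, left, mode="full")
lag = int(corr.argmax()) - (len(sig) - 1)
print(lag, f"-> {lag / fs * 1e6:.1f} us")   # 5 -> 52.1 us
```

Real localization would of course have to cope with reflections, multiple sources and head geometry; this only shows that the lag itself is recoverable from the two channels.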

 

So, I'll try to spit out undoubtedly scattered "data" in following posts, maybe, but at least now you know how "Phasure" emerged. You should also now know how it came about that the Phasure NOS1 - if I am well informed - is recognized worldwide as the most accurate D/A converter. This is just what's needed for the job...

(Hey, this is not a commercial, but hopefully it tells how it all has to start out, if it will ever work.)

 

Peter

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment

Here's a first scatter:

 

MP3 has a WIDER sound stage compared to lossless material.

 

OK, who am I to say so? But let's say it needs a few "accuracy" prerequisites to let that happen.

Well, nice that it happens. But assuming I am correct, we now must wonder WHY. For now, just think about that.

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment

Next one:

 

Can anybody tell me how wide the soundstage must be, as we perceive it through loudspeakers?

 

Is it as wide as the original stage?

Was there even one? (studio recording)

 

If the answer is yes, is the stage allowed to go beyond our side walls?

 

Can it even?

 

If the answer is no, but the original stage is wider than our room, what will happen to the size of the instruments?

 

 

Here too, think about this. The proper answers will vastly help the subject (I think).

And uhm, I am not saying that I have all the answers, but no matter what, if we were able to end up with answers to these kinds of maybe strange questions, I'm sure we would have progressed.

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment

If you are able to answer this question, you are miles ahead already:

 

Can you tell me how it is that you have never seen a cuckoo? (Outside of a zoo.)

Of course you must have them in your country, and you must hear them sometimes. So you hear them, but never see them.

Why?

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment
