
A conversation with Charles Hansen, Gordon Rankin, and Steve Silberman


Recommended Posts

This was a very interesting read. It is a move towards supporting what many have said for a very long time: that digital transmission is not infallible. It is a move towards validating what many could hear for a very long time despite being told it was impossible. In fact, it challenges the holy mantra that 'bits is bits,' describing bits as abstractions that can only be realized in the real analogue world. Most importantly, it reinforces what many have known for a very long time: trust your ears, and anything in the sonic chain, from cello to cortex, can potentially affect sound quality and music. Listen to notes, not numbers, and hopefully we can get back to discussing what enhances the enjoyment of music.

Sound Minds Mind Sound

 

 


Once again I have to disagree with you. I think you draw the wrong conclusion from this very interesting article. There are a few passages in it that are very vague (and in fact my two points are not the only ones that could be challenged).

 

But, no problem at all... Let us agree to disagree

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

It is a move towards supporting what many have said for a very long time, that digital transmission is not infallible.

 

Digital transmission is fallible - which is why well-designed digital transmission systems always involve error detection and correction.
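As a generic illustration of error detection (not specific to any audio protocol, and the function names are hypothetical), a cyclic redundancy check lets a receiver notice a corrupted frame so the link layer can discard or re-request it:

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    # Append a CRC-32 of the payload so the receiver can detect corruption.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    # Recompute the CRC over the payload and compare with the trailer.
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received_crc

frame = frame_with_crc(b"audio samples")
assert check_frame(frame)           # intact frame passes
corrupted = b"Audio" + frame[5:]    # one byte flipped in transit
assert not check_frame(corrupted)   # corruption is detected
```

Detection alone only tells the receiver a frame is bad; "correction" in practice usually means retransmission, which not every transport offers.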


Two things that seem to me to cause trouble are:

 

- The word "digital" causes people to concentrate on the "digits," i.e., bits, to the exclusion of other real issues affecting sound, such as timing (jitter), and electrical effects such as EMI, RFI, voltage stability to extremely small tolerances, etc.

 

- As Boris rightly points out, the various protocols in common use are made to be very robust: error correcting, error tolerant, or both. But the fact that these protocols keep working in adverse environments does not mean those conditions can't affect the sound - e.g., electrical noise that doesn't stop digital transmission over USB, but gets into the analog side of the system through power or ground, or affects the stability of clocking, causing higher jitter.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.


Yep, you don't have to worry about it (at least nothing that is audible in our case) when you have your DAC on the async USB bus, because the USB driver handles the error detection. Look inside a USB packet and see what is going on there. A point these people haven't even touched...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


- As Boris rightly points out, the various protocols in common use are made to be very robust: error correcting, error tolerant, or both. But the fact that these protocols keep working in adverse environments does not mean those conditions can't affect the sound - e.g., electrical noise that doesn't stop digital transmission over USB, but gets into the analog side of the system through power or ground, or affects the stability of clocking, causing higher jitter.

 

Jud, thanks for your kind words. The sources of noise that you mention are the reason why devices such as the Aubisque USB filter, the AQVOX or the iUSB can in principle improve the performance of USB DACs. I have personally attached an iUSB to my Benchmark DAC2, though I must admit that I have been lazy and have not tested whether it changes anything in the sound in practice. I just wanted to be on the safe side and be sure that I would make the most of my DAC.

What I could do, as it is so easy with the iUSB, is to switch its galvanic isolation (which iFi Audio pompously calls "ISOearth") on and off to hear whether it makes a difference. But I have done so much A-Bing lately (between my Hegel HD10+AP2 and my new Benchmark DAC2; between the SPDIF-AP2 input on my DAC2 and its iUSB-USB input) that I am getting tired of this sort of test and just want to relax and listen to music. Also, when I A-B things intensively, I tend to become much more vulnerable to being annoyed by my tinnitus, which is what has been happening to me in recent days after all this testing.

Yep, you don't have to worry about it (at least nothing that is audible in our case) when you have your DAC on the async USB bus, because the USB driver handles the error detection. Look inside a USB packet and see what is going on there. A point these people haven't even touched...

 

They didn't talk about error correction because the USB isochronous protocol doesn't feature it: errors can be detected (there is a CRC), but bad packets are not retransmitted.

System (i): Stack Audio Link > Denafrips Iris 12th/Ares 12th-1; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

System (iv) Technics 1210GR > Leak 230 > Tannoy Cheviot


I don't want to be cocky here, but do you know why? Because there is reserved bandwidth, buffering, and so on, to provide reliability...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

I don't want to be cocky here, but do you know why? Because there is reserved bandwidth, buffering, and so on, to provide reliability...

 

I was just correcting what you said.

System (i): Stack Audio Link > Denafrips Iris 12th/Ares 12th-1; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

System (iv) Technics 1210GR > Leak 230 > Tannoy Cheviot


No problem; I need to learn better English... Most of the time I have trouble describing what I mean...

With the sentence "because the USB driver handles the error detection," I meant what I wrote in reply to your statement...

But that part was not even touched on in their article...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

No problem; I need to learn better English... Most of the time I have trouble describing what I mean...

With the sentence "because the USB driver handles the error detection," I meant what I wrote in reply to your statement...

But that part was not even touched on in their article...

 

I am saying that the statement "because the USB Driver handles the error detection" is factually incorrect.

System (i): Stack Audio Link > Denafrips Iris 12th/Ares 12th-1; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

System (iv) Technics 1210GR > Leak 230 > Tannoy Cheviot


OK, thanks for correcting me. The USB driver is designed so that you don't need error correction (that's why I wrote to look inside a USB packet): the reliability and the buffering of the signal are guaranteed.

 

Thanks again...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


Hi Chris. You mentioned asynchronous USB, and also talked about buffering, etc., as means to ensure reliable passage of the USB signal.

 

Async USB relies on the DAC's clock, so many sources of jitter are eliminated. However, the specific problems I mentioned above remain. If electrical noise affects the DAC's clock, since that is the clock async USB relies on by design, it cannot correct any clocking errors resulting from that noise. If you read some of the ESS white papers about the SABRE DAC, they talk about how critical it is to maintain a constant voltage as an absolute reference against which to compare the incoming signal to evaluate whether it is above or below the "zero crossing point," and thus whether it represents a 1 or 0. Very small fluctuations will alter the time at which the signal crosses, in effect causing jitter. This takes place in the DAC chip *after* the data has been clocked out of the DAC's buffer, so again async USB does not correct it. Low level electrical noise from the computer can have similar effects.

 

Buffering, etc., will allow the USB signal to be read, but will not eliminate power fluctuations, noise, etc., from coming through the electrical connection into the DAC and the rest of the system.
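As a toy sketch of the buffering point (hypothetical code, not any real driver): a FIFO between USB packet arrival and the DAC's sample clock absorbs irregular delivery timing - while doing nothing about electrical noise riding in on the same cable:

```python
from collections import deque

# A playback FIFO decouples bursty packet arrival from the DAC's steady
# consumption rate: as long as it never runs dry, irregular bus timing
# never reaches the audio clock. (It cannot filter noise on the wire.)
fifo = deque()

def on_usb_packet(samples):
    fifo.extend(samples)             # packets may arrive in bursts

def dac_tick():
    return fifo.popleft() if fifo else None   # None = buffer underrun

on_usb_packet([0.1, 0.2, 0.3])       # one bursty packet, three samples
played = [dac_tick() for _ in range(4)]
# three good samples come out, then the fourth tick underruns
```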

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

it cannot correct any clocking errors resulting from that noise.

 

Yep, by design this is correct. Every network designer has to ensure that this is not the case...

 

About the SABRE DAC: I cannot say anything about that, because I don't know that DAC's design...

 

But I can talk about network design... Until the stream hits the DAC, as a network designer, you have to ensure that the stream arrives properly... And now it gets difficult... I can only capture that digital stream in the buffer in the DAC; after that it becomes an analog signal... If I understand you right, you want me to prove that the digital signal coming in is the same as the PCM output, right?

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


OK, before anyone here starts nitpicking: there are network protocols that have error handling built in (TCP/IP, for example, where a layer-4 protocol ensures that we have a checksum on the transmitted data).
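That layer-4 checksum can be sketched; this is a simplified RFC 1071-style 16-bit one's-complement sum of the kind TCP and UDP use (ignoring the pseudo-header and other real-protocol details):

```python
def internet_checksum(data: bytes) -> int:
    # 16-bit one's-complement sum (RFC 1071 style), as used by TCP/UDP.
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    total = (total & 0xFFFF) + (total >> 16)      # final fold
    return ~total & 0xFFFF

# A receiver re-sums the segment including the checksum field;
# an undamaged segment yields zero.
segment = b"some tcp payload"
check = internet_checksum(segment)
assert internet_checksum(segment + check.to_bytes(2, "big")) == 0
```

Note this is detection, not correction: TCP achieves "correction" by discarding a bad segment and retransmitting it.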

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


But I can talk about network design... Until the stream hits the DAC, as a network designer, you have to ensure that the stream arrives properly... And now it gets difficult... I can only capture that digital stream in the buffer in the DAC; after that it becomes an analog signal... If I understand you right, you want me to prove that the digital signal coming in is the same as the PCM output, right?

 

No, sorry if I did not explain this well.

 

The signal can come over to the DAC just fine. No alteration to the bits whatever, they are sitting there in the DAC's buffer just as they were sent from the source.

 

So if there are no network problems, what could the matter be?

 

Well, the DAC is electrically connected (if we are talking about USB) to the rest of the system. Plain old electrical noise, or slight voltage fluctuations, can be carried by the USB cable to the DAC. In the DAC, this can cause problems in various ways. None of these problems involves alteration of bit values. But they can alter the timing of the bits, i.e., jitter.

 

- Noise can affect the DAC's clocking circuitry, causing clock jitter.

 

- To obtain the bit values, the DAC chip evaluates the voltage of the signal against a base. The place where this voltage over the base makes the DAC chip see a "1" rather than a "0" is the "zero crossing point." If electrical noise or voltage fluctuations very slightly change the base against which the DAC chip is evaluating the signal, the changeover between 0 and 1 or vice versa can be delayed or sped up slightly, i.e., jitter is introduced to the signal. This happens in the DAC chip itself, after the DAC's clock.

 

So no network problems at all; the USB cable has transmitted the data just fine. But plain old electrical noise and tiny voltage fluctuations can affect sound quality in the DAC *after* network data transmission has taken place.
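A toy numerical model of that mechanism (nothing like a real DAC circuit, just the arithmetic): a rising edge with a finite slew rate, compared against a wobbling reference, trips the comparator earlier or later - which is jitter:

```python
def trip_time(threshold_v: float, slew_rate_v_per_s: float = 1e9) -> float:
    # A rising edge v(t) = slew_rate * t trips the comparator
    # when v(t) reaches the threshold: t = threshold / slew_rate.
    return threshold_v / slew_rate_v_per_s

nominal = trip_time(0.5)           # clean 0.5 V reference
wobbled = trip_time(0.5 + 0.001)   # 1 mV of noise on the reference
jitter_s = wobbled - nominal       # the edge is detected late: timing error
print(f"timing shift: {jitter_s * 1e12:.1f} ps")  # prints "timing shift: 1.0 ps"
```

The point of the sketch: no bit value changed, only *when* the transition is seen - and the slower the edge, the larger the timing error a given amount of reference noise produces.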

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

No, sorry if I did not explain this well.

 

No, you explained it perfectly; the problem lies on my side... Many thanks for your patience with me...

 

The signal can come over to the DAC just fine. No alteration to the bits whatever, they are sitting there in the DAC's buffer just as they were sent from the source.

 

So, we have a bit-perfect signal...

 

None of these problems involves alteration of bit values.

 

Yep, normally this is where we are when we play music...

 

But they can alter the timing of the bits, i.e., jitter

 

Sorry, no. Why? The transport protocol doesn't allow that... We have reliability and buffering here... If you have a problem at this level, you hear exactly NOTHING...

 

Noise can affect the DAC's clocking circuitry, causing clock jitter.

 

Now, I am not the right person to answer this... In a modern DAC I would say no, but this is not my profession...

 

To obtain the bit values, the DAC chip evaluates the voltage of the signal against a base. The place where this voltage over the base makes the DAC chip see a "1" rather than a "0" is the "zero crossing point." If electrical noise or voltage fluctuations very slightly change the base against which the DAC chip is evaluating the signal, the changeover between 0 and 1 or vice versa can be delayed or sped up slightly, i.e., jitter is introduced to the signal. This happens in the DAC chip itself, after the DAC's clock

 

Yep, I would say this is true... As I wrote long before, jitter happens IN the DAC... in the process of making the signal analog...

 

So no network problems at all; the USB cable has transmitted the data just fine. But plain old electrical noise and tiny voltage fluctuations can affect sound quality in the DAC *after* network data transmission has taken place.

 

Sorry, @Jud, but I am a network technician; I only talk about the digital domain... No question about it: after the signal gets converted to a PCM signal, anything is possible... Before that, it plays by my rules *LOL* - not exactly, but 99.9%.

 

I hope you can do something with my post...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

No, you explained it perfectly; the problem lies on my side... Many thanks for your patience with me...

 

 

 

So, we have a bit-perfect signal...

 

Yep, normally this is where we are when we play music...

 

Sorry, no. Why? The transport protocol doesn't allow that... We have reliability and buffering here... If you have a problem at this level, you hear exactly NOTHING...

 

Right, as you mention below, what I am talking about is jitter *in the DAC*.

 

Now, I am not the right person to answer this... In a modern DAC I would say no, but this is not my profession...

 

People who have worked for audio manufacturers and build and test their own DACs have said on the forum that the clock circuitry is sensitive and can be affected by electrical noise, so I believe them. :-)

 

Yep, I would say this is true... As I wrote long before, jitter happens IN the DAC... in the process of making the signal analog...

 

So it looks as if we may agree. I will read the Audiostream article, but I don't think they were saying much that was different from this.

 

Sorry, @Jud, but I am a network technician; I only talk about the digital domain... No question about it: after the signal gets converted to a PCM signal, anything is possible... Before that, it plays by my rules *LOL* - not exactly, but 99.9%.

 

I hope you can do something with my post...

 

If you are speaking of a "PCM signal" as essentially what happens after the data is in the DAC but before the conversion to analog occurs, then again I believe we are in agreement.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.


Yep, I would say this is true... As I wrote long before, jitter happens IN the DAC... in the process of making the signal analog...

 

Sorry, @Jud, but I am a network technician; I only talk about the digital domain... No question about it: after the signal gets converted to a PCM signal, anything is possible... Before that, it plays by my rules *LOL* - not exactly, but 99.9%.

 

I hope you can do something with my post...

 

Let's see if I can try. But I'll be brief.

 

One example: With USB and Firewire, noise can come in over the ground connection of the interface cable. Noise also comes in on the USB VBUS if it is connected. Currents also flow in both directions in the cable, so again contamination is possible.

 

Second example: With player s/w that bypasses a lot of the OS audio layers to give the most direct path of data out the USB port, the algorithms for caching the file into memory (another feature of good player s/w) and feeding it out do vary from product to product (and sometimes version to version, right Audirvana users?), changing the subtle and not-so-subtle character of the music.

 

I am sure the above is hard for someone coming strictly from the computer side to accept. It is hard for a lot of the people already here in audio to accept. But I experiment a lot with these things, and my and my friends' ears very clearly and repeatably hear these differences without hesitation. Sometimes it is hard to know exactly what is causing the difference, but it is not hard to point a finger in certain directions when one is changing just one variable at a time.


@Jud: I hope I don't insult you... But if you look at my posts, I ALWAYS talk about the digital domain...

 

So, Barry says he hears a difference between FLAC and WAV. NEVER EVER... You agreed with me that the interesting audible factor is IN THE DAC, right... And the FLAC file gets converted back to WAV long before it reaches the DAC... (and, as I have always written, the same bitstream goes into the DAC)

You know what I mean....

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


@Superdad: I am very sorry, but I don't understand where you are challenging my statements...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

@Jud: I hope I don't insult you... But if you look at my posts, I ALWAYS talk about the digital domain...

 

So, Barry says he hears a difference between FLAC and WAV. NEVER EVER... You agreed with me that the interesting audible factor is IN THE DAC, right... And the FLAC file gets converted back to WAV long before it reaches the DAC... (and, as I have always written, the same bitstream goes into the DAC)

You know what I mean....

 

Well I do know what you mean. And after a lot of listening (at the request of others here) I agree with it only under the following exact circumstance:

1) CD properly ripped to hard drive first as a full WAV or AIFF file (depending on Windows or Mac);

2) File converted into another file (leaving original in place) that is FLAC or Apple Lossless;

3) FLAC (or Apple Lossless) file converted back to WAV or AIFF;

4) Checksum of both original and twice converted file compared and shown identical;

5) Playback of original WAV or AIFF will have identical sound to playback of twice converted WAV or AIFF.
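Step 4 above can be sketched with Python's hashlib (the file names here are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large audio files aren't loaded into RAM at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the original rip and the WAV -> FLAC -> WAV result.
# If the digests match, the round trip was bit-perfect.
# bit_perfect = sha256_of("original.wav") == sha256_of("roundtrip.wav")
```

Any checksum will do for this purpose; a cryptographic hash simply makes an accidental collision between differing files vanishingly unlikely.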

 

Realtime decoding/playback of FLAC or Apple Lossless sometimes will sound slightly worse--mostly depending on the power, RAM, OS processes, and configuration of the computer.

I could pick out Apple Lossless versus AIFF 95% of the time with good recordings when my computer was an Apple G4 mini (and without memory-play s/w).

With an Intel Core 2 Duo Mac mini, 8GB of RAM, and a fine-tuned OS (stripped of many extraneous processes), I cannot hear the difference between AIFF and Apple Lossless of the same file.

I can easily hear a thousand other changes that "objectivists" challenge (USB cables, computer linear vs. switching PS, tiny changes to digital filters, mastering differences, sample rate, etc.), but these days I don't sweat FLACs.

@Superdad: I am very sorry, but I don't understand where you are challenging my statements...

 

I read you as saying that before the "bits" get to the DAC chip, everything is always perfect. I gave you examples where the noise and quality of the bitstream are affected by various means BEFORE they get to the DAC, and which audibly affect subtleties of the music that the DAC receives to convert.

 

If you don't believe it that is fine. But these areas are where today's high-end audio engineers are focusing their attention and getting results.

Well I do know what you mean. And after a lot of listening (at the request of others here) I agree with it only under the following exact circumstance:

1) CD properly ripped to hard drive first as a full WAV or AIFF file (depending on Windows or Mac);

2) File converted into another file (leaving original in place) that is FLAC or Apple Lossless;

3) FLAC (or Apple Lossless) file converted back to WAV or AIFF;

4) Checksum of both original and twice converted file compared and shown identical;

5) Playback of original WAV or AIFF will have identical sound to playback of twice converted WAV or AIFF.

 

1) of course...

2) I am not talking about MP3

3) Yep

4) Yep

5) What I always say!!!

 

Can you now tell me where I am wrong?

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

I read you as saying that before the "bits" get to the DAC chip, everything is always perfect. I gave you examples where the noise and quality of the bitstream are affected by various means BEFORE they get to the DAC, and which audibly affect subtleties of the music that the DAC receives to convert.

 

If you don't believe it that is fine. But these areas are where today's high-end audio engineers are focusing their attention and getting results.

 

Many thanks for explaining your post... Examples where the noise affects the bitstream in the digital domain? I must have overlooked them????

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

