
A conversation with Charles Hansen, Gordon Rankin, and Steve Silberman


Recommended Posts

Many thanks for explaining your post... Are there examples where the noise affects the bitstream in the digital domain? I must have read past it?

 

See my post #44.

 

And by the way, although I respect his right to his opinion and his crusade, please do not get my opinions mixed up with those of the other Alex on this forum (sandyk). He and I (and others) have already been around the block on some of these topics. (He is convinced that identical-checksum files can sound different through the same playback chain.)

 

Regards,

Alex C.

Link to comment

We both agree about songstress Renee Olstead though!

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that identical checksums give you is the possibility of REGENERATING the file to something close to the original.

PROFILE UPDATED 13-11-2020

Link to comment
Yep, by design this is correct. Every designer of a network has to ensure that this is not the case...

 

About the SABRE DAC: I cannot say anything about that, because I don't know the design of that DAC...

 

But I can talk about network design... Until the stream hits the DAC, as a network designer, you have to ensure that the stream arrives properly... And now it gets difficult... I can only follow that digital stream as far as the buffer in the DAC; after that it becomes an analog signal... If I understand you right, you want me to prove that the digital signal coming in is the same as the PCM output, right?

 

The SABRE DAC works on the same principle as all other DACs, and as your humble Ethernet packets: both are differential-mode transmissions of electrical signals, 1s and 0s grouped into larger 8-bit, 16-bit, or 24-bit words.

 

TCP/IP Ethernet packets and their error-correction systems are all good for that purpose. Buffering and holding work well; the main difference between audio packet data and network packet data is timing. If a network packet arrives 20 ms to 10 s late, it eventually gets fed into a buffer; the web page takes a little longer to load and you have to wait.

 

Audio packets, and these are voltages (yes, analog voltages, roughly 0.4 V for a 0 and 4.8 V for a 1), switched quickly and at the right time, contain coded 24-bit words that carry markers in TIME, so that the DAC can match the correct level to the right moment. Audio has no tolerance for timing errors; the end result is that it sounds rough, unclear and has 'bad edges'.
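
To put a rough number on that timing sensitivity, here's a minimal sketch (the sample rate, tone and jitter figures are assumed values for illustration, not measurements): the data is the same sine either way; only the instants at which it is reproduced move.

```python
import numpy as np

fs = 96_000                 # sample rate in Hz (assumed)
f = 10_000                  # test tone in Hz (assumed)
n = np.arange(4096)

ideal_t = n / fs                              # evenly spaced sample instants
jitter = np.random.normal(0.0, 1e-9, n.size)  # 1 ns RMS clock jitter (assumed)

# The "data" is the same sine in both cases; only the instants at which it
# is realised differ.
ideal = np.sin(2 * np.pi * f * ideal_t)
jittered = np.sin(2 * np.pi * f * (ideal_t + jitter))

err = jittered - ideal
print("RMS error: %.1f dB re full scale" % (20 * np.log10(np.sqrt(np.mean(err ** 2)))))
# Roughly 2*pi*f*sigma_t / sqrt(2), i.e. about -87 dB for these numbers.
```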

 

The USB 1 protocol didn't have an error-correction protocol and we ended up with the half-baked adaptive processes, which sounded OK up to a point, but not fantastic. Streamlength and asynchronous connections changed all that; these days it doesn't matter, as this method in some form or other has become established as a 'standard' with most DACs.

 

Jud is very good at explaining the concept; I hope I can add something from another angle. Have a look at the picture of a USB data stream on the TV monitor on page 1 of this thread. It's an analog waveform, because it has electrical values; for this definition it really doesn't matter whether the waveforms are coded 1s and 0s, logic arrays, sine waves, or music: they are all analog waveforms.

 

Like this:

 

logic analyser 57.png

AS Profile Equipment List        Say NO to MQA

Link to comment

@Jud and One and a half: Many thanks to both of you...

 

@One and a half: That's the reason why I wrote about async DACs (which use a USB 2.0 driver); in this case we need no error correction...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

Link to comment

from Chriss71:

Sorry, no, why? The transport protocol doesn't allow that... Here we have reliability and a buffer... If you have a problem with that, you hear exactly NOTHING...

 

If I understand what you are trying to say, you are looking at it too much like a network person. The audio stream doesn't work the same way. You can have errors in the TIMING info (not the DATA) and still hear music. But the music is altered.

 

Jud essentially explained it when he said this:

 

Well, the DAC is electrically connected (if we are talking about USB) to the rest of the system. Plain old electrical noise, or slight voltage fluctuations, can be carried by the USB cable to the DAC. In the DAC, this can cause problems in various ways. None of these problems involves alteration of bit values. But they can alter the timing of the bits, i.e., jitter.

 

- Noise can affect the DAC's clocking circuitry, causing clock jitter.

 

- To obtain the bit values, the DAC chip evaluates the voltage of the signal against a base. The place where this voltage over the base makes the DAC chip see a "1" rather than a "0" is the "zero crossing point." If electrical noise or voltage fluctuations change very slightly the base against which the DAC chip is evaluating the signal, the changeover between 0 and 1 or vice versa can be delayed or speeded up slightly, i.e., jitter is introduced to the signal. This happens in the DAC chip itself, after the DAC's clock.

 

So no network problems at all; the USB cable has transmitted the data just fine. But plain old electrical noise and tiny voltage fluctuations can affect sound quality in the DAC *after* network data transmission has taken place.

 

As Jud wrote, none of this directly changes the bitstream over the USB, but it does change the timing information of the signal in the DAC, and thus changes the resulting audio that is reconstructed in analog from BOTH the unaltered DATA and the TIMING info. This is not the same way data is transferred and reconstructed using network/data protocols in a computer, and this is where I think you are mistaken.
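
As a back-of-the-envelope illustration of the zero-crossing mechanism Jud describes (all numbers below are assumed purely for the example, not taken from any datasheet), an edge with a finite rise time crosses a shifted reference earlier or later by roughly the voltage shift divided by the slew rate:

```python
# Back-of-the-envelope: how far a threshold crossing moves when the
# reference the signal is compared against wiggles slightly.
rise_time = 1e-9                  # 0 -> 3.3 V logic edge in 1 ns (assumed)
swing = 3.3                       # logic swing in volts (assumed)
slew_rate = swing / rise_time     # volts per second along the edge

delta_v = 5e-3                    # 5 mV of noise on the reference (assumed)
delta_t = delta_v / slew_rate     # shift in the moment the edge is "seen"

print(f"Crossing shifts by about {delta_t * 1e12:.1f} ps")   # ~1.5 ps here
```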

Main listening (small home office):

Main setup: Surge protectors +>Isol-8 Mini sub Axis Power Strip/Protection>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three BXT (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three BXT

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)

Link to comment
...I hope I can add something from another angle. Have a look at the picture of a USB data stream on the TV monitor on page 1 of this thread. It's an analog waveform, because it has electrical values; for this definition it really doesn't matter whether the waveforms are coded 1s and 0s, logic arrays, sine waves, or music: they are all analog waveforms.

 

Excellent post. Totally clear.

 

 

"Analog", "Digital"--its all just voltages, signals, and noise passing through time. Some aspects of what is audible at the edge of the audio arts are quantifiable from measurements, some may be present in graphical depictments of signals but remain unseen because we don't hear with our eyes, and many things (e.g. wide variations in the sound of cables) we are just nowhere near having measurements to correlate to perceived quality of sound.

My ears are the most accurate instrument for transducing the signal to my brain; it being a vastly complex super-computer wired specifically (since I was 2) for interpreting the achievement of ever greater aural ecstasy.

 

 

Maybe I should put the paragraph I just wrote into my signature. It really sums up my bottom-line sentiment about the whole "who can hear or measure what between what components" debate. So much time and effort is wasted trying to convince each other of what is or is not.

 

 

And getting back to the original topic of the thread: I actually thought the Hansen/Rankin "interview" (it was canned in that it was conducted via a composed written exchange) was very timely and cogent. They both know their stuff, and what they said is logical and does not seem very controversial.

 

Baby Alex.jpg

Yep, it's me. In front of a silver-soldered Fisher 500C from when my dad worked for Avery Fisher, selling stainless car racing mufflers and trying to convince him that seat belts could be a big business! The Garrard must have been playing Nina, Brubeck, Miles, or Rodrigo.

Link to comment
@Jud and One and a half: Many thanks to both of you...

 

@One and a half: That's the reason why I wrote about async DACs (which use a USB 2.0 driver); in this case we need no error correction...

 

This is where we all have our particular knowledge to piece together the fundamentals of CA. I really want to know what the process is for reading "Beethoven sym 09.aif", which is just a block of data, into a USB or an Ethernet stream. By understanding the path we can analyse where improvements can be made: perhaps cut out a lot of junk hardware, use better power supplies to keep the noise down... Computers have many PSUs; let's identify which ones are critical. It's about time we opened Pandora's box. It might be a long road ahead, but it's always a challenge.
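
As a very rough sketch of the software end of that path (this assumes the Python soundfile and sounddevice libraries, and the filename is just the placeholder from the post; the OS driver and the USB controller below this level are not shown): the player decodes the file to PCM samples in RAM and hands buffers to the OS audio API, which clocks them out to the interface.

```python
import soundfile as sf      # decodes AIFF/WAV/FLAC to raw PCM in memory
import sounddevice as sd    # hands PCM buffers to the OS audio API

# Placeholder filename from the post; any PCM file would do.
data, samplerate = sf.read("Beethoven sym 09.aif", dtype="float32")

# Below this call it is the OS driver and the USB (or network) hardware
# that queue the buffers and clock them out to the interface.
sd.play(data, samplerate)
sd.wait()
```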

AS Profile Equipment List        Say NO to MQA

Link to comment
@Jud: I hope I don't insult you... But if you look at my posts, I ALWAYS talk about the digital domain....

 

So, Barry says he hears a difference between FLAC and WAV. NEVER EVER.... You agreed with me that the interesting audible factor would be IN THE DAC, right.... So, the FLAC file gets converted to a WAV file long before the DAC... (and as I have always written: the same bitstream goes into the DAC)

You know what I mean....

 

No insult taken at all, Chris. I don't have the education in network or digital audio concepts to be precise in what I say or understand, so to me since PCM is a way of encoding a digital recording I think of "PCM" and "digital" as being the same. I'm guessing to you as someone with a networking background there is a difference in meanings between these two terms. I always like to learn things, so you would be doing me a favor by explaining the distinction.

 

As far as a potential difference between FLAC and WAV: This goes along with what I mentioned about (very) slight voltage fluctuations and the possibility they may cause slight timing variations right in the DAC chip by slightly altering the reference voltage the chip is using for comparison to the signal, in order to determine whether the signal should be evaluated as a "1" or "0." Think of a computer decompressing audio from FLAC as it sends the resulting WAV file to the DAC, versus one that is sending a WAV file. The work of doing the decompression is very little for a modern CPU, granted. So suppose the very tiny resulting voltage dip in the baseline only speeds up the time when the signal is evaluated by the DAC chip as crossing from "0" to "1" by 50 picoseconds, a mere instant. That may still be an audibly significant amount of jitter.
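
For a sense of scale (my own arithmetic, using the 50 ps figure above and an assumed worst-case full-scale 20 kHz tone):

```python
import math

f = 20_000       # worst-case audio frequency in Hz (assumed)
dt = 50e-12      # the 50 ps timing error from the example above

# Maximum amplitude error of a full-scale sine caused by that timing error
max_err = 2 * math.pi * f * dt                 # ~6.3e-6 of full scale
print(f"{20 * math.log10(max_err):.0f} dBFS")  # about -104 dBFS

# For comparison, 24-bit audio is usually quoted as ~144 dB of dynamic range,
# so an error at -104 dBFS is not obviously negligible.
```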

 

That is what I think Charlie Hansen is describing when he talks about the possibility that a computer having to do "extra" work on compressed formats, for example, may be responsible for a difference in sound quality. It is not a change in the digits or anything to do with the integrity of the stream being transmitted to the DAC. Rather, it is the possibility these tiny voltage fluctuations or increases in electrical noise may cause jitter in the DAC chip itself (or the DAC's clocking mechanism).

 

Edit: Since Gordon Rankin pretty much invented async USB for audio as we know it today, and he worked for Charlie Hansen on the design of the Ayre QB-9 DAC, I would suppose both of them are familiar with the operation of the USB network part of things. So I don't think you need to worry that they are ignoring this aspect. :-)

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

I have put this link up a couple of times elsewhere on the forum, because I think the scope traces there help visualize some of the things we have been discussing:

 

Noise from computer, how ?

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

That is what I think Charlie Hansen is describing when he talks about the possibility that a computer having to do "extra" work on compressed formats, for example, may be responsible for a difference in sound quality. It is not a change in the digits or anything to do with the integrity of the stream being transmitted to the DAC. Rather, it is the possibility these tiny voltage fluctuations or increases in electrical noise may cause jitter in the DAC chip itself (or the DAC's clocking mechanism).

 

If any sonic differences between WAV and FLAC files come from this phenomenon, then memory play will eliminate them effectively.

Link to comment
If any sonic differences between WAV and FLAC files come from this phenomenon, then memory play will eliminate them effectively.

 

One would think so, wouldn't one? :-)

 

But I am always afraid of being too simplistic or definitive in areas I know little about, so I can't say for sure. I will go this far, though: To me, the fact that many of the most popular players are memory players may mean such players help to minimize or eliminate the problems we've been describing. On the other hand, you can find many posts here and elsewhere that say there are no measurable/audible differences between different players, let alone slightly different versions of one player. (Or, as one can find a couple of years back in this forum, identical player source code compiled by GCC versus LLVM - heh, heh. :-)

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

As far as a potential difference between FLAC and WAV

 

Hi Jud!

 

I will describe my views on the problem you are referring to:

I (as a Network Technician) can ensure that nothing gets twisted from the HDD (and here it's completely irrelevant whether the bits are WAV or FLAC - bits are bits) to the buffer in the DAC. Long before the stream hits the DAC, the processor turns that FLAC file into a WAV file. So, in the RAM of the computer you already have a WAV file. The computer ensures that the stream is correct and the timing is reliable. And this is why I don't think it sounds different. I have checked that many times, because it is relatively simple to do.
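
A minimal sketch of the kind of check I mean (assuming ffmpeg is installed; the filenames are only placeholders): decode both files to raw PCM and compare checksums; for a FLAC made losslessly from the WAV, the hashes match.

```python
import hashlib
import subprocess

def pcm_sha256(path):
    """Decode a file to raw 24-bit little-endian PCM with ffmpeg and hash it."""
    raw = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-f", "s24le", "-"],
        capture_output=True, check=True,
    ).stdout
    return hashlib.sha256(raw).hexdigest()

# Placeholder filenames: a WAV and the FLAC made from it.
print(pcm_sha256("track01.wav") == pcm_sha256("track01.flac"))  # True if bit-identical
```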

What I can think about is IN THE DAC! I am not familiar with how the DAC turns the stream into the PCM signal. I think somebody who knows what he is talking about (and not me) should explain that. I would like the programmers of the DAC chips to explain what is going on there: how much noise (you will say jitter) does it take to change the signal? And so on...

 

So, I think Gordon Rankin would be the right person to ask about this...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

Link to comment

 

I generally agree, and wonder myself whether there can be ongoing conversion in the computer while the DAC is working on the bits (not necessarily conversion of the same file - the computer might be converting track 12 while the DAC is working on track 1...).

 

The only thing I would disagree with is the statement that the computer can ensure the timing is reliable in the case of an async USB input, where it is the DAC's clock that governs the timing.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

@Jud: Sorry, I should have been more specific: the USB driver ensures the reliability and that the timing is correct...

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

Link to comment
@Jud: Sorry, I should have been more specific: the USB driver ensures the reliability and that the timing is correct...

 

I said this yesterday and I thought you agreed that you were incorrect, but you are still saying, incorrectly, that 'the USB driver ensures the reliability'. It doesn't; there is no error checking.

 

I am saying that the statement "because the USB Driver handles the error detection" is factually incorrect.

System (i): Stack Audio Link > Denafrips Iris 12th/Ares 12th-1; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

System (iv) Technics 1210GR > Leak 230 > Tannoy Cheviot

Link to comment
Please explain how this exactly applies to Asynchronous and Adaptive USB audio.

Because async USB runs over isochronous USB links!

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

Link to comment
Please explain how this exactly applies to Asynchronous and Adaptive USB audio.

 

I don't understand what you want me to explain. Async USB 2.0 ensures the reliability of the bitstream. It's just that simple (not really, but...). Its design has many features that mean we don't need error correction... (reliability, buffering, reserved bandwidth and so on)

Albert Einstein: Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

Link to comment
Where do you find "ERROR CHECKING" in that quote?

 

Extra for you, Richard...

 

[Attached image: excerpt from the USB audio specification]

 

Well, to me 'the USB Driver ensures the reliability' carries an implication of error checking. The mention of 'reliability' in your image of the USB audio specs is in connection with how many bit errors are to be expected, given the lack of error correction in the USB isochronous protocol and the driver that implements it. The USB driver does not 'ensure reliability', although typically you don't expect many errors, as the document says.

 

But bit errors were not really mentioned in the interview, as the main point was to discuss subtler problems resulting in jitter or noise that were imposed on the analog signal coming out of the DAC.

System (i): Stack Audio Link > Denafrips Iris 12th/Ares 12th-1; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

System (iv) Technics 1210GR > Leak 230 > Tannoy Cheviot

Link to comment
