
Experimental Verification of Bit Perfect Music Server


ksamnic


Can anyone point me to the results of an actual test of the output from audio servers? I.e., a test that captures the bits that come out of the servers, compares them to the original CD, and measures any jitter introduced by the interface choice.

 

Why?

 

I keep reading a large number of postings on this site (and others) about cards like Lynx (for example) being bit perfect and great and sounding better than a cheap on-board spdif or usb output (w/o a sound card).

 

My feeling (and I am new to all of this) is that if two digital output streams sound different (through the same external DAC and subsequent audio components), then one (or both) of the audio bit streams has been corrupted on the path from the file to the dac.

 

My instincts tell me that the fewer components in the path from WAV file to bits going into my DAC, the better.

 

I have the same question for cables. I figure that digital cables either work (get the bits there) or they don't (they lose or change bits).

 

Anyway, I am just looking for any links to actual scientific studies (I tried searching the forums here first but could not find anything quantitative - just opinions).

 

 

 


Hi ksamnic - You'll never get a single answer to your questions surrounding these topics. Some of this stuff is incredibly technical and many engineers cannot even agree on reasons why or why not. I'm certainly no expert but will offer some opinions based on my own research and talking to real experts in the field.

 

A test that captures the bits from a music server and compares them to the original CD source won't tell you much, other than whether all the bits arrived. There is no timing information in such a test. It would be like copying two spreadsheets from one computer to another, followed by two Word documents on another day: all four files will be identical on both computers, but that doesn't tell you much about the transmission of the files from one to the other.
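A toy illustration of this point, sketched in Python (the byte buffers are made up for illustration; a real test would hash the stream captured from the interface): a checksum can prove two PCM payloads are bit-identical, yet it carries no information at all about the clock that delivered the samples.

```python
import hashlib

def pcm_digest(pcm_bytes):
    """Hash a PCM payload; identical digests mean identical bits."""
    return hashlib.sha256(pcm_bytes).hexdigest()

# Stand-ins for the CD rip and the stream captured from the server.
# The capture may have arrived with lots of timing jitter -- the
# digest is blind to that, since it only sees the bit values.
source_pcm = bytes(range(256)) * 4
captured_pcm = bytes(source_pcm)

if pcm_digest(source_pcm) == pcm_digest(captured_pcm):
    print("bit perfect - but timing is still unknown")
```

This is exactly the spreadsheet analogy above: the comparison confirms delivery, not delivery timing.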

 

Your feeling that two digital streams sounding different must mean corruption in the data stream is fairly common. However, it is not supported by facts, engineering, or measurements. Every interface measures differently even though the data coming out may be bit perfect. One example is when I reviewed the hiFace USB to S/PDIF converter. I really did not like the sound of the converter compared to others, even though both output bit perfect signals. I had the hiFace tested and found the jitter spec was OK but it also output way too much voltage to the S/PDIF interface. Neither of these change or corrupt the bits but both change the sound.

 

The examples you provide, Lynx and on-board S/PDIF, are perfect for comparison. Unfortunately, not many people are skilled enough to test the jitter on each device, and many of those who are have better things to do with their time.

 

Jitter, for the most part, doesn't alter the bits (ones or zeros); it is imperfection in timing. If you had an incredible amount of jitter I suppose it could alter the bit stream, but I haven't seen that yet.

 

I highly recommend looking for Audio Engineering Society (AES) papers on jitter. They can be very confusing, as they are to most people, but they may give you some of the objective information you're seeking.

 

I love that you're asking for solid information about this stuff. The more people that ask and learn what's going on the more we all learn in the process. Please share your findings if possible.

 

 

Founder of Audiophile Style | My Audio Systems


Hello,

 

Quote: "a test that captures the bits that come out of the servers and compares them to the original CD, and measures any jitter introduced by the interface choice."

 

Gosh, that's easy. Just buy a Weiss DAC202 and use the provided 16 and 24-bit WAV files to perform a transparency test. $6.7k seems like a small price to pay to know if your playback system is bit-perfect! Once you're done with your testing, you can even use the DAC202 to play music. I don't have one (yet), but I hear that they are lovely. :-)

 

Besides that, I have not had much luck with this. Just for fun, I did take a small USB audio interface that had a TOSLINK S/PDIF output. I connected that to the S/PDIF input on my E-MU 0404 USB and recorded a track. Playback on the source side was via Mediamonkey with the ASIO plugin and a native ASIO driver for the audio interface.

 

Next, I used the SoX command-line utility to decode the PCM data to ASCII so that I could compare the recording with the original WAV file. They were amazingly similar and would go on for pages and pages with no differences at all. However, periodically I would see an amplitude value that landed at a different point in time (a sample early or late) in the recording vs. the source. Also (and more concerning), sometimes there would be a few to as many as a few dozen samples that were completely missing from the recording. These two anomalies are likely caused by two different issues with my playback/recording chain, but I got bored with testing and decided to go back to enjoying my music--something that I just realized I should go and do now (why am I sitting in front of this computer??!)
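For anyone who wants to repeat this kind of comparison without SoX, here is a rough Python sketch of the same idea (the 16-bit mono assumption and the helper names are mine, not from the post above): decode both files to integer samples and report where they first diverge.

```python
import struct
import wave

def read_samples(path):
    """Decode a 16-bit mono WAV file into a list of integer samples."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    return list(struct.unpack("<%dh" % (len(raw) // 2), raw))

def first_mismatch(a, b):
    """Index of the first differing sample, or None if bit-identical."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))

# Tiny demo with in-memory "samples" instead of real files:
source = [0, 100, 200, 300, 400, 500]
capture = [0, 100, 200, 301, 400, 500]   # one altered sample
print(first_mismatch(source, capture))   # -> 3
```

A missing-samples run, like the one described above, would show up as a mismatch index after which the capture stays permanently offset from the source.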

 

Good-bye and Good luck with your testing!

 

-- David

 

 

 

 


Are you sure about this (audio files having time)? I understand that there is timing info in the metadata on a CD, but my understanding was that digital audio is just PCM'd audio. There would be a sampling rate etc., and the stream is written out as a series of 1s and 0s.

 


Audio is all about timing, whereas "standard" data is not. Once the digital file is turned into an audio signal instead of a static WAV/AIFF/FLAC file then timing is critical.

 

 

From Wikipedia:

"As samples are dependent on time, an accurate clock is required for accurate reproduction. If either the encoding or decoding clock is not stable, its frequency drift will directly affect the output quality of the device. A slight difference between the encoding and decoding clock frequencies is not generally a major concern; a small constant error is not noticeable. Clock error does become a major issue if the clock is not stable, however. A drifting clock, even with a relatively small error, will cause very obvious distortions in audio and video signals, for example."

 

http://en.wikipedia.org/wiki/Pulse-code_modulation
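To put a rough number on this, here is a back-of-envelope sketch (a standard worst-case estimate, not from the Wikipedia article): for a full-scale sine at frequency f, a timing error of dt seconds produces an amplitude error of at most about 2*pi*f*dt of full scale, because that is the steepest slope of the waveform.

```python
import math

def worst_case_error_db(freq_hz, timing_err_s):
    """Worst-case amplitude error, in dB relative to full scale, from
    sampling a full-scale sine of freq_hz at a moment that is off by
    timing_err_s. The slope of sin(2*pi*f*t) peaks at 2*pi*f."""
    return 20 * math.log10(2 * math.pi * freq_hz * timing_err_s)

# 1 ns of timing error on a 20 kHz tone:
print(round(worst_case_error_db(20_000, 1e-9), 1))  # -> -78.0
```

So even nanosecond-level timing errors sit around -78 dB worst case at 20 kHz, which is why the jitter debate revolves around such tiny numbers.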

 



I agree with Chris: audio is all about timing. This has been the main problem in digital music since the beginning: in 1972 Nippon Columbia began digitally mastering recordings, and in the same year the BBC began using pulse code modulation for high-quality sound; now, 39 years later, digital engineers are still trying to improve it.

 

You can have a perfect frequency response, but with bad timing, music will not sound like real music.

 

That's the big debate between analogue and digital, but finally (I do believe) we are getting the "real thing".

 

Happy listening,

 

Roch

 


Yes, I totally get this (need for a good stable clock) - but I didn't think that it came into play except when encoding/decoding. The part that I am interested in is the stream of bits from the media server to the external DAC.

 

I think it is a pretty straightforward computing job: take an encoded digital file and send it out a port at a constant rate without messing up the bits. No A/D or D/A.

 

I can only think of two things that can go wrong with the streaming (which doesn't mean there are only two things ... these are just the ones I can think of!):

 

1. the hardware and/or software in the data path alters or drops bits, or

2. there is jitter introduced by the hardware and/or Operating System, resulting in data loss/corruption at the receiving end

 

This is why I am looking for quantitative studies. There are many opinions that equipment x or y "sounds better", and I am sure that it does - but the science guy in me wants to know why.

 

... is it because some interfaces lose bits?

... is it because some cards unintentionally change bits?

... is it because some systems introduce jitter?

... is it because some components intentionally "color" the sound?

 

I figure that the way to find out is to compare the actual audio stream, measuring parameters that could account for a change in sound quality.

 

Also, I am pretty sure that someone must have already done this study. It seems like a pretty obvious research project for a computer engineering grad student.

 

Anyway, I will post here if I find out anything.

 


Hey Chris,

 

I had the hiFace tested and found the jitter spec was OK but it also output way too much voltage to the S/PDIF interface. Neither of these change or corrupt the bits but both change the sound.

 

This seems to be a tough one. Something like a contradictio in terminis.

I didn't read back through the review concerned, but I'd say that the excessive voltage on the S/PDIF interface would account for large amounts of jitter. Next, it would be fairly easy to make mistakes in the measuring, meaning *where* the jitter measurements took place (and I don't think you did that yourself, so I'm not addressing you here :-). Options would be (I think):

 

1. At the i2s in the HiFace;

2. At the SPDIF-out on the HiFace;

3. At the end of the SPDIF cable, used for the listening sessions;

4. Inside the DAC used for the listening sessions, after the receiver chip, on i2s again;

5. At the analogue out of the DAC used for the listening sessions.

 

In my thinking, only 4 and 5 would be valid options, but they are also unlikely to be seen as the most logical ones. The latter because of the inclusion of the DAC itself (4 for the receiver chip and whatever else may be there, 5 for the whole DAC being in the chain).

 

1 would not be valid because it leaves out the i2s-to-SPDIF conversion (which creates jitter), plus SPDIF just *is* the means in use (SPDIF implies jitter in itself compared to i2s).

 

2 would not be valid because it's not a real-world situation in the first place (see 3 below) - and, by the way, no more real than 1 (right above). Besides, it would still need an SPDIF cable to measure.

 

3 would not be valid because the cable (length) implies additional jitter (or at least may change the signature). Also, the load of the DAC's input will play a role here, and I assume this could not be 100% simulated by the analyser. Furthermore, it is just this which may play a large role, thinking of the inherent problem in the first place (too high voltage, reflections, etc.).

 

When we can agree a bit on the above, the most "precise" point would be 4 (but think about right in front of it being the SPDIF input terminal, which sadly implies the problems of 3 again). Now the problem will be that the analyser needs an i2s input, and certainly not all analysers have that (probably only a few do).

However, "precise" may not be precise at all, when there's an SRC involved (in the DAC concerned).

 

You see? This is a tough one. My conclusion (FWIW) would be that only 5 is really valid, but then it needs proper interpretation. You won't be able to measure the jitter of the HiFace itself; you can only measure its relative effects. Relative means: try other connections, and look at how they work out relative to each other.

Of course 5 implies measuring at the analogue side of things, which IMO should be done anyway. Not everybody may agree with this though.

 

Long story short : This was not about the HiFace again, but about the quote above, and the seemingly "unexplainable". I think everything can be explained, but it may need digging deeper. And lots of time, sometimes.

 

Regards,

Peter

 

 

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


1. the hardware and/or software in the data path alters or drops bits, or

2. there is jitter introduced by the hardware and/or Operating System, resulting in data loss/corruption at the receiving end

 

Emphasis is mine.

 

Maybe you didn't exactly want to say this, but as stated, your premise is wrong. I mean, there is no loss as such anywhere. It is about jitter.

Maybe carefully read Chris's first post again. It may be somewhat hard to interpret if you don't already know the background, but I think it is all there.

 

Otherwise, try to investigate first what exactly jitter *is*. In far too short a summary: it is not about corruption at all.

 

Edit : And sadly your #1 doesn't happen either, when all is right.

 

HTH

Peter

 

 

 

 

 



Gang,

 

This is my setup for testing:

 

Host computer (OS X, Win 7/Vista/XP, Linux) ==> USB analyzer ==> DAC (UUT) ==> Prism dScope III

 

I also have my Tektronix MSO4000 scope with the Audio Module attached at the I2S level. It can decode I2S frames into data and store those on USB flash drives or CompactFlash. A WaveCrest DTS jitter analyzer for clocks is usually connected to Word Clock.

 

Now I can look at the data at the USB analyzer in frames going to the DAC. This is really just PCM data, just like the root file, and it will tell me whether the application/operating system is sending bit-true data. I can also verify that this information is getting to the DAC via my Tek scope, and then I can look at the output waveform the same way John Atkinson does in Stereophile, with the Prism dScope III and the J-Test.

 

The WaveCrest can tell the amount of jitter going to the DAC. But most DACs will reject a good bit of this unless it gets too high. Most DACs that have jitter problems are getting it from the master clock, as most current DACs really operate from this clock only.

 

Don't ask the price tag for this stuff... rough guess... about $50k

 

You could use a USB to SPDIF converter and feed it back and record it into the computer and verify bits that way.

 

~~~~~~~~~~

 

One note about timing... PCM is a time-based protocol. If there were a timing problem, the output of the device would not be bit true, as there would have to be some filler or zero data present in the stream that was not present in the original content, and therefore the stream would no longer be bit true.

 

Now most of the bit-true tests are not really a 1:1 comparison. Some people use HDCD discs and players, since the LSB is the trigger there, and this is usually where non-bit-true problems are most obvious (i.e., the loss or change of the least significant bit).

 

Really, feeding it back into the computer as digital-only data and using a sliding window comparison tool is the only way of checking for complete bit accuracy.
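A toy version of such a sliding-window comparison might look like this (my own sketch, not Gordon's tool; a real implementation would also handle sample formats, polarity inversion and partial matches): slide the captured stream against the source until every overlapping sample matches, which absorbs the recording latency.

```python
def aligned_match(source, capture, max_offset=1024):
    """Slide the capture against the source by up to max_offset samples
    and return the delay at which every overlapping sample matches,
    or None if no bit-true alignment exists."""
    for offset in range(max_offset + 1):
        overlap = len(capture) - offset
        if overlap <= 0:
            break  # ran out of capture data to compare
        if source[:overlap] == capture[offset:]:
            return offset
    return None

# A recorded stream usually starts with some latency padding:
source = [3, 1, 4, 1, 5, 9, 2, 6]
capture = [0, 0, 0] + source           # 3 samples of leading silence
print(aligned_match(source, capture))  # -> 3
```

If no offset gives a bit-true match, the chain altered or dropped bits somewhere; if one does, the stream is bit perfect despite the time shift.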

 

Thanks

Gordon

 


Ehhm ... that you guys usually don't understand what I write about is one thing, but that I don't understand what you write, is another. Well, I think. Haha.

 

So Gordon, what was this all about ?

Was this about how the HiFace should be tested ? no.

Was this about how to test jitter ? maybe. But why the story. Where does it hook in. What i2s, why.

Was this about how to test for bit perfect data ? could be. It seems the most likely. But I guess then this would have been enough :

 

You could use a USB to SPDIF converter and feed it back and record it into the computer and verify bits that way.

 

and

 

Really feeding it back into the computer as digital only data and using a sliding window comparison tool is the really only way of checking for complete accuracy with the bits.

 

So ?

(but sorry if I completely missed the point !)

 



Could it be that timing is "better" on DSD than on PCM?

 

It is, in a way, since the timing errors don't all fall into the audio band - the "Nyquist band" is 1.4 MHz. The higher the sampling rate, the wider the error distribution over frequency - if it's truly random. But this naturally depends on the type of jitter.
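A crude way to see this with numbers (a deliberately simplified model of my own, assuming the jitter-induced error power is spread evenly up to Nyquist): the fraction of that error landing below 20 kHz shrinks as the sample rate rises.

```python
# Toy model: if jitter-induced error power were white up to Nyquist,
# only the slice below the audio band limit would be audible.
def audible_fraction(sample_rate_hz, audio_band_hz=20_000.0):
    return audio_band_hz / (sample_rate_hz / 2.0)

pcm_44k = audible_fraction(44_100)      # nearly all error is in-band
dsd64 = audible_fraction(2_822_400)     # DSD64: ~1.4 MHz Nyquist band
print(round(pcm_44k, 3), round(dsd64, 3))  # -> 0.907 0.014
```

Under this toy assumption, roughly 91% of the error energy is in-band at 44.1 kHz versus about 1.4% at DSD64's rate; shaped or correlated jitter would behave differently, as noted above.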

 

Now if you take DSD and increase the number of bits to 6 and the sampling rate to 40 MHz, you end up with the ESS Sabre DAC... :) Naturally this requires a bunch of DSP processing to convert it from 32-bit PCM.

 

Most modern DACs are somewhere between that and DSD. And some are hybrids, like modern BB parts with combination of 6-bit multi-bit DAC running at 352.8/384 kHz and 3-bit SDM running at 22.5/24.5 MHz.

 

So these don't have all jitter "in-band" like traditional multi-bit DACs. So the jitter behavior is different. There are some elaborate papers on the topic too...

 

The original idea of DSD is that you skip DSP processing on ADC side to convert it to PCM and then skip DSP processing again on DAC side to convert it back. That's where the name comes from.

 

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


try: http://www.stereophile.com/reference/193jitter/index.html

 

Unlike simple data transfer, audio requires data+timing info. Missed timing is jitter. So unlike what you assume, it isn't just a job where you take an "encoded digital file and send it out a port at a constant rate without messing up the bits. No A/D or D/A"; it's more involved than that. Audio streams don't have a protocol that assures bit perfect transmission along with perfect timing, so even without A/D or D/A conversion, the timing errors can creep in.

 

Once the conversion takes place we hear the results of the timing errors/jitter. The data (bits) aren't corrupted; but the timing info is altered.

 

Main listening (small home office):

Main setup: Surge protectors +>Isol-8 Mini sub Axis Power Strip/Protection>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three BXT (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three BXT

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)


Peter,

 

I think you should re-read the original post. You seem a little lost on the topic.

 

~~~~~

 

DSD, PCM and timing...

 

Gang,

 

Look, you are missing the point of what I was saying. In computer audio everything is done in a block format. Usually this block is sent at the same time every time, and its size is relative to the sample frequency. This is the same for FireWire, PCI, USB, internal sound cards and S/PDIF. The DAC at the end of the chain is clocked either asynchronously or synchronously. Therefore any method of output, be it DSD or PCM, would still result in the same problem.

 

But since the computer has to send the data at a set block size and frequency, the only way to have a timing problem would be to insert some data that would make the setup non-bit-perfect.

 

Timing therefore is not the issue. I thought so for almost a year, and then the mastering email list I am on told me otherwise.

 

Timing problems would not cause jitter with asynchronous devices, because the clock is located at the DAC. Remember, there is NO JITTER in the transmission of data. There are only audio-related jitter errors when the system converts the parallel data to a serial audio stream such as I2S (or left/right-justified, DSP, etc.). With synchronous systems, like most FireWire interfaces and adaptive USB DACs, this becomes a serious problem.

 

Computer timing may be a more microscopic problem than what I was describing, and maybe this is still something to investigate. But when I look at frames from the computer, they happen at such exacting times (down to the microsecond) that I really think something else is at play.

 

Thanks

Gordon

 


Thanks Gordon, PeterSt, Miska & firedog for your detailed explanations, research and/or personal conclusions, but I'm right now more confused than before...!

 

Then, I agree with Chris again regarding: "You'll never get a single answer to your questions surrounding these topics."

 

And also with David: "but I got bored with testing and decided to go back to enjoying my music--something that I just realized I should go and do now..."

 

Thanks anyway,

 

Happy listening!

 

Roch

 

 

 

