
Blue or red pill?



Just now, manisandher said:


Had we used a USB DAC, any idea how we could have captured the USB digital outputs in real time during the A/B/X to verify that they were bit-identical?

 

Mani.

 

One way would be to use a DAC or a DDC that has USB input and outputs SPDIF, Toslink, AES/EBU, etc. 

 

Another is a hardware or software USB monitor/analyzer. Hardware can be expensive. Software (some of it free) can run on the same PC that generates the USB traffic to capture, record, and analyze it.
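For the software route, something like Wireshark with the Linux usbmon interface can record the raw USB traffic while the track plays. Once the audio payloads from two runs have been extracted to files, checking bit-identity is a simple compare. A minimal Python sketch, with the file names made up for illustration:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so a long capture doesn't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names for payloads extracted from two capture runs.
if sha256_of("usb_payload_A.bin") == sha256_of("usb_payload_B.bin"):
    print("bit-identical")
else:
    print("captures differ")
```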

45 minutes ago, PeterSt said:

 

But wait now. This is the difference between "general" playback software and software that pays attention to the "flow" for better SQ. XXHighEnd is not alone in that (but it was the first and maybe for that reason the most extensive with it). So you buy a fine turntable with some nice features for more money; with software it is no different.

So it is really nothing strange (these days). Or maybe I take too much for granted?

 

Using HQPlayer I can play from a file stored on RAM drive, local hard disk, or NAS. To me, this is a better test to determine audible differences between NAS and local storage playback, since HQPlayer does not do anything special or have any additional settings beyond ASIO buffer size that might matter in this case. 

 

25 minutes ago, manisandher said:

Like you, Paul, I would be really, really happy if Mans did find consistent differences between 'A' and 'B', because we'd no doubt learn something very useful from this. I'll do anything I can to aid him.

 

Thanks a lot, Mani! I may have said this before the test, but I'll say it again: finding something we can't explain, or don't yet know how to, is much more exciting to me than confirming what I already know. I like a puzzle! And your cooperation with Mans is really appreciated.

 

4 hours ago, STC said:

@manisandher and @mansr, guys how about trying a different DAC? 

 

There are three source-related factors that can affect the output of a DAC:

 

1. Bits change -- supposedly not in this test

2. Timing change (jitter) -- possibly happened; it remains to be seen whether jitter is detectable in the analog captures

3. Electrical or EMI noise coming from the PC -- again, possibly visible in the analog captures

 

@STC is proposing another possibility, due to differences between the DAC and the recorder: that the DAC reacts differently to the incoming digital data than the recorder used in the test did. For example, if the receiver circuit in the DAC is very sensitive to noise, it may not receive every bit exactly as the recorder circuit would. So, while the recorded bits match perfectly, perhaps the DAC didn't get exactly the same bits. As another example, if the DAC PHY forms a ground loop with the PC while the recorder doesn't, the DAC may not receive the same unmolested bits as the recorder. I think this is a possibility as well, although it's harder to check from the existing captures unless the DAC has a digital output that allows recording.

 

 

 

 

9 minutes ago, STC said:

 

I think they both are getting exactly the same bits. However, the recorder ensures the bits are identical to the original files. In the DAC's case, I think the priority is to convert the digital to analogue for audio, and some compromise could have taken place. If I am not mistaken, the chip used in the Altmann DAC was the PCM1604, which is rather dated, and the error correction, if any, may not be as good as in modern chips. I am just speculating here.

 

OK, then I propose yet another failure mode :) Remember that bits are carried by analog waveforms. If the receiver circuit in the DAC is not as good as the receiver in the recorder, it may not recover the same bits the recorder did, for instance if the circuit is flaky, more sensitive to noise, forms a ground loop, has a different detection threshold, or...?

 

9 minutes ago, manisandher said:

 

I'm assuming you meant "supposedly not in this test"?

 

But even so, why the "supposedly"? Mans has said that all the digital captures were bit-identical. Everyone else (including PeterSt, who initially felt that there was a possibility they might not be, until Mans shared his findings) seems to have accepted this.

 

Mani.

Yes, sorry. Corrected.

 

'Supposedly' because I'd still want to see Mans report his findings and how he checked for bit-correctness.
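For what it's worth, one way such a check could be done (not necessarily how Mans did it) is to align the digital capture with the source file and then compare sample for sample. A rough numpy sketch, assuming 16-bit little-endian PCM at the same rate on both sides; the file names are hypothetical:

```python
import numpy as np

# Hypothetical file names; both assumed to be 16-bit little-endian PCM.
source = np.fromfile("source.pcm", dtype="<i2")
capture = np.fromfile("capture.pcm", dtype="<i2")

# Locate the start of the source inside the capture by cross-correlating
# a short chunk of the source against the beginning of the capture.
chunk = source[:2000].astype(np.float64)
search = capture[:200000].astype(np.float64)
offset = int(np.argmax(np.correlate(search, chunk, mode="valid")))

aligned = capture[offset:offset + len(source)]
print("bit-identical" if np.array_equal(aligned, source) else "differs")
```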

10 minutes ago, manisandher said:

 

Mans took some digital and analogue captures of 'digital silence' playing through the DAC. Presumably electrical and/or EMI noise would be detectable in the analogue captures (though I can't see anything untoward in my cursory analysis.)

 

And remember, what we're looking for in the analogue captures isn't anything 'absolute'. We're looking for differences between the analogue captures of 'A' and 'B'. It wouldn't be enough to show that there was electrical and/or EMI noise getting to the DAC - it would be necessary to show that this noise was consistently different between the analogue captures of 'A' and 'B'.

 

Mani

 

Mani, I agree. But what's been proposed by others, including Peter as I recall, is that different algorithms produce different load on the CPU, memory, bus, etc., resulting in different noise patterns and current draw. The conclusion that's drawn from this is that different player programs, different versions of the PC operating system, or even just where the source file is being accessed from, can all result in different patterns of PC noise that can somehow infect the DA process and result in audible differences. Following this logic, it's possible that different SFS settings produced wildly different CPU activity in the PC that resulted in very different noise patterns. Put an AM radio next to a PC, and you can actually hear the differences ;)

 

I'm skeptical about this process in the general case, but I have seen measurements of some really poorly designed DACs where PC activity was clearly reflected in the analog output. Proper design should take care of this, and according to measurements, it appears to do so even in some very inexpensive DACs.
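For anyone who wants to look for that kind of difference in the existing analogue captures, a simple first pass is to compare averaged noise spectra of the 'A' and 'B' recordings. A sketch with numpy, assuming a 192 kHz capture rate and hypothetical file names:

```python
import numpy as np

def avg_spectrum(x, fs, nfft=65536):
    """Average magnitude spectrum (dB) over non-overlapping Hann-windowed blocks."""
    win = np.hanning(nfft)
    blocks = [x[i:i + nfft] * win for i in range(0, len(x) - nfft, nfft)]
    avg = np.mean([np.abs(np.fft.rfft(b)) for b in blocks], axis=0)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return freqs, 20 * np.log10(avg / np.max(avg))

fs = 192000  # assumed capture rate
a = np.fromfile("analog_A.pcm", dtype="<i2").astype(np.float64)
b = np.fromfile("analog_B.pcm", dtype="<i2").astype(np.float64)

_, spec_a = avg_spectrum(a, fs)
_, spec_b = avg_spectrum(b, fs)
# A consistent A/B difference in the noise floor would show up here.
print("max spectral difference: %.2f dB" % np.max(np.abs(spec_a - spec_b)))
```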

 

29 minutes ago, PeterSt said:

 

FYI and hopefully not more confusion: Isochronous USB (which is what a USB DAC would normally use) also doesn't do error correction (it may steer the USB transfer speed though).

 

It can also detect an error and drop a microframe containing a few samples, or try some interpolation/guessing to make up for the samples in the frame that's in error.

 

4 minutes ago, STC said:

 

But interpolation is a form of guessing what the bad data should have been. I may have mistakenly thought this was some form of error correction.

 

Interpolation is intelligent guessing; it'll almost never be exactly right, but it can get close. The protocol does not specify what should be done with bad packets. I assume that in most cases they will just be skipped.
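To make "intelligent guessing" concrete, the simplest form is linear interpolation: bridge the gap between the last good sample before the bad packet and the first good one after it. A toy sketch, not taken from any particular DAC's firmware:

```python
import numpy as np

def patch_gap(samples, start, length):
    """Replace `length` bad samples starting at `start` with a linear ramp
    between the neighbouring good samples."""
    left = samples[start - 1]
    right = samples[start + length]
    samples[start:start + length] = np.linspace(left, right, length + 2)[1:-1]
    return samples

sine = np.sin(2 * np.pi * 1000 * np.arange(480) / 48000)
damaged = sine.copy()
damaged[100:106] = 0.0              # pretend a packet of 6 samples was lost
repaired = patch_gap(damaged, 100, 6)
print("max error after interpolation:", np.max(np.abs(repaired - sine)))
```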

8 minutes ago, STC said:

 

Paul, isn't it true that a DAC needs to process the data continuously, unlike the recorder, which can take an extra few microseconds to write the data? All DACs do have a sample buffer to collect adequate data to prevent buffer underrun. The size of the buffer is also responsible for latency.

Now, going back to the data coming from XXHE: can the SFS influence how much data reaches the DAC's sample buffer? Does a smaller SFS create more errors or delays in the sample buffer, which translates to different latency and becomes audible?

 

The DAC does, but the incoming data doesn't have to arrive as a continuous stream. In the USB isochronous protocol (that's what we are still talking about, right?) the data is sent in small packets every 125 µs, on the dot. Each packet contains a number of samples, and that number can be increased or decreased by one if the buffer is filling up too slowly or too quickly. The receiver on the DAC side decides to increase or decrease the packet size and communicates with the PC to do so.

 

The DAC itself is driven entirely by an internal clock that expects the buffer to have at least one sample left in it at the next clock cycle. If, despite the sample adjustments, the buffer runs empty or overflows, samples will be lost, but that's an unusual condition and will result in audible clicks and pops.
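As a rough illustration of the mechanism described above (not the actual USB Audio Class feedback format), here is a toy simulation in which the host nominally sends fs/8000 samples per 125 µs packet and the device nudges that count by one whenever its buffer drifts away from the target fill:

```python
import random

FS = 48000
NOMINAL = FS // 8000            # 6 samples per 125 µs packet at 48 kHz
TARGET_FILL = 64                # device buffer target, in samples

buffer_fill = TARGET_FILL
request = NOMINAL

for packet in range(80000):     # roughly 10 seconds of packets
    buffer_fill += request                                    # host delivers samples
    buffer_fill -= NOMINAL + random.choice([0, 0, 0, 1, -1])  # DAC clock drains them

    # Device-side feedback: ask for one sample more or fewer next time.
    if buffer_fill < TARGET_FILL - 8:
        request = NOMINAL + 1
    elif buffer_fill > TARGET_FILL + 8:
        request = NOMINAL - 1
    else:
        request = NOMINAL

    if buffer_fill <= 0 or buffer_fill > 4 * TARGET_FILL:
        print("underrun/overrun at packet", packet)   # this is the audible glitch case
        break
else:
    print("no glitches; final buffer fill:", buffer_fill)
```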

 

5 minutes ago, STC said:

 

That means the Altmann DAC couldn't tell the PC to send the desired packet size. Can the different SFS now affect the data in the sample buffer? Or is that irrelevant....

Sorry, all we've been discussing for the past few hours is USB, not SPDIF.

 

SPDIF is a very different protocol. The timing is controlled entirely by the PC. The data is sent continuously and the clock is embedded in the data signal. This means the timing of the data must be very well controlled by the PC. There is a simple parity check, but that does not help with timing errors. If the source clock is poor or noisy, the output of the DAC will be jittery.
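For reference, the parity in question is one bit per 32-slot S/PDIF subframe, chosen so that time slots 4-31 have even parity; it can flag a single flipped bit but, as noted, says nothing about when the edges arrived. A small sketch of the check:

```python
def spdif_parity_ok(subframe: int) -> bool:
    """Check even parity over time slots 4-31 of a 32-bit S/PDIF subframe.
    Slot 31 is the parity bit itself, so the whole range should XOR to zero."""
    bits = (subframe >> 4) & 0x0FFFFFFF   # drop the 4 preamble slots
    parity = 0
    while bits:
        parity ^= bits & 1
        bits >>= 1
    return parity == 0

# Example: two set bits in slots 4-31 (slots 5 and 31) -> even parity -> True.
print(spdif_parity_ok((1 << 31) | (1 << 5)))
```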

3 minutes ago, Miska said:

 

The "controlled by the PC" is a bit vague. If you use USB-to-SPDIF converter for example, most of those use asynchronous USB transfer and the timing is driven by the clock in the converter. I have bunch of such devices, including bare ones like the first generation M2Tech hiFace and newer Musical Fidelity V-Link192. These are pretty good sources, although with V-Link I needed to cut the ground connection of AES cable at receiver end to make it actually floating (otherwise the DAC output easily had ground noise issues).

 

If you use something like Lynx AES cards, those have their own clocks too and are not as such timed by the PC.

 

But in the end it depends on the definition of "by the PC"; if anything connected to the computer counts, then yes. But yes, of course S/PDIF and AES/EBU transfer the clock with the data, so the timing ultimately relies on the source and is just massaged by PLLs at the receiver end. The same goes for Bluetooth, AirPlay and such. At best, digital PLLs can do a pretty good job of recovering clocks. The end result naturally depends on the combination of transmitter and receiver quality/capabilities in this case. At best, it can be really good, better than poor asynchronous USB implementations.

 

 

A USB-to-SPDIF box simply moves the source of the clock outside the PC, into the converter; it doesn't change the fact that the clock is still outside the DAC. The SPDIF link still carries the embedded clock, but this time it comes from the converter rather than from the PC. I have a few of these as well, and some have 10x worse jitter than feeding the DAC directly from the SPDIF output on my motherboard :) while others are much better (such as an SU-1, for instance).
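As a crude picture of what the receiver-side PLL does with that embedded clock, a first-order loop acts like a low-pass filter on the incoming edge timing: fast jitter gets smoothed, slow wander gets tracked. A toy sketch, not a model of any particular receiver chip:

```python
import random

nominal = 1.0 / 44100            # ideal sample period in seconds
alpha = 0.01                     # loop bandwidth: smaller means more smoothing

recovered = nominal
jitter_in, jitter_out = [], []

for _ in range(100000):
    incoming = nominal + random.gauss(0, 1e-9)   # embedded clock with ~1 ns jitter
    recovered += alpha * (incoming - recovered)  # first-order loop tracks it slowly
    jitter_in.append(incoming - nominal)
    jitter_out.append(recovered - nominal)

rms = lambda xs: (sum(v * v for v in xs) / len(xs)) ** 0.5
print("input jitter RMS:  %.3g s" % rms(jitter_in))
print("output jitter RMS: %.3g s" % rms(jitter_out))
```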

 

1 hour ago, Confused said:

Plus, unless I have missed a key post somewhere, whatever this change is, it cannot be measured. Or maybe it can, but not with the measurement apparatus available. Maybe it cannot be measured with any apparatus currently available or that has been applied to audio?

 

What you’ve missed is that no analysis or review of the data has been published yet, so any conclusions about immeasurability are premature. The rest of what you’ve posted is correct.

2 hours ago, Audiophile Neuroscience said:

Jitter and timing issues were dismissed as we were told a decent DAC would buffer the signal and re-clock the timing. Similarly, the issue of noise was dismissed as galvanic isolation should fix this.

 

Mani has provided evidence that challenges many of the above assertions. 

 

You are implying conclusions that are not justified by the test.

 

1. There was no buffering of data in this case and no reclocking, since the output from the PC went over SPDIF into the DAC. Buffering/reclocking is usually associated with asynchronous protocols, such as isochronous USB.

 

2. I see no evidence of galvanic isolation in Mani's DAC or PC. Please share it if you have it.

 

5 minutes ago, Audiophile Neuroscience said:

I was referring to historical reasons provided by others as to why it was supposedly impossible for bit identical files to sound different, not specifically to Mani's setup. 

 

Right. And these reasons are not debunked by Mani's test score and may easily be the explanation for why he heard the differences that he did. Your post seemed to imply that Mani somehow proved these assertions about jitter and galvanic isolation wrong. He didn't.

 

 

3 minutes ago, mansr said:

Mani did some longer captures of a 10 kHz sine tone using both SFS settings. Here's an FFT of those:

 

[Attached image: mani-10k-fft-16m.png, an FFT of the 10 kHz captures at both SFS settings]

 

Looks like there's a little more jitter with SFS=0.1. I'm not saying this is necessarily what Mani heard, only that it's an identifiable difference. Maybe. It could also be caused by some unrelated change in the environment that occurred between the two recordings.

 

Both of the long captures also had a dropout of a few milliseconds about 140 seconds in. The FFT is from the portion preceding this glitch.

 

 

Is that (horizontal) frequency scale correct? +/- 2 Hz from fundamental? And what's the vertical scale? dB? Is the equipment sensitive enough for such a fine-grained frequency analysis? The difference is visible on the graph, but I can't imagine it would be at all audible if the jitter sidebands are a fraction of a Hz away from the fundamental. Do you?
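On the scale question, the bin width of an FFT is roughly the sample rate divided by the number of points, so sub-Hz resolution around 10 kHz is plausible if the transform is long enough (the "16m" in the file name suggests a 16M-point FFT; the capture rate below is an assumption):

```python
fs = 192000                 # assumed capture sample rate
n = 16 * 1024 * 1024        # 16M-point FFT, as the file name seems to suggest
print("bin width: %.4f Hz" % (fs / n))             # about 0.011 Hz
print("signal length needed: %.1f s" % (n / fs))   # about 87 s
```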

 

The drop-outs are worth investigating, as they point to some sort of miscommunication, timing error, or even programming problem.
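For anyone wanting to locate those dropouts in the captures, scanning for unusually long runs of near-silent samples is a simple start. A numpy sketch with a hypothetical file name, an assumed 192 kHz rate, and an arbitrary threshold:

```python
import numpy as np

fs = 192000                                    # assumed capture rate
x = np.fromfile("capture_10k.pcm", dtype="<i2")

# Mark near-silent samples, then find the start/end of each silent run.
quiet = np.concatenate(([False], np.abs(x.astype(np.int32)) < 50, [False]))
edges = np.diff(quiet.astype(np.int8))
starts = np.flatnonzero(edges == 1)
ends = np.flatnonzero(edges == -1)

for s, e in zip(starts, ends):
    if (e - s) > fs // 1000:                   # only runs longer than 1 ms
        print("dropout at %.3f s, %.1f ms long" % (s / fs, 1000.0 * (e - s) / fs))
```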

 

 
