
Why does SPDIF basically suck?



47 minutes ago, Pete-FIN said:

Wait what!? I did not know that there is no 'universally agreed' standard for USB-C.

 

Can you please tell us more about this? What is not agreed in USB-C?

I'm curious about that too. The standard has gone through some updates, but to my knowledge all variants are compatible.

1 minute ago, jabbr said:

Of course it’s possible — every reasonable FIFO codes for the possibility of buffer over- or under-runs. With a big enough buffer, they should be statistically rare.

The S/PDIF spec requires a transmitter clock tolerance of ±1000 ppm for consumer devices. If your local clock is perfect, that worst case translates to a drift of 3.6 s per hour. To play a CD's worth of music without risk of underflow, you'd thus have to pre-buffer for about 4 s before starting playback. That might just about be acceptable. However, the DAC can't know upfront how long the incoming stream will last. For all it knows, you'll be playing continuously for a week. Handling that would require a 10-minute pre-buffer.
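To put numbers on that, here is a quick sketch assuming the ±1000 ppm worst case and a perfect local clock:

# Worst-case S/PDIF clock drift: how much pre-buffering is needed
# to survive a given playback duration without an underrun?
# Assumes the consumer-grade tolerance of +/-1000 ppm quoted above.

PPM = 1000  # worst-case transmitter clock error, parts per million

def prebuffer_seconds(duration_s, ppm=PPM):
    """Seconds of audio to pre-buffer so a worst-case slow sender can't starve us."""
    return duration_s * ppm / 1_000_000

print(prebuffer_seconds(3600))            # 3.6 s for an hour of playback
print(prebuffer_seconds(74 * 60))         # ~4.4 s for a CD-length programme
print(prebuffer_seconds(7 * 24 * 3600))   # ~605 s, i.e. about 10 minutes, for a week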

 

I suppose you could deliberately slow your local clock by 1000 ppm and avoid the need to pre-buffer. You'd still need a huge FIFO to absorb the excess incoming data, of course, and eventually it would fill up.

2 minutes ago, Ralf11 said:

back to TOSLINK and SPDIF - how bad are they?

Depends on the quality at both ends. Done well, it can be very good. Done poorly, it is dreadful. Feeding 192 kHz S/PDIF into a cheap Cambridge Audio receiver actually made it emit smoke. That's not done well.

1 minute ago, Superdad said:

Sorry, was thinking about I2S at the time.

They showed up around the same time.

 

1 minute ago, Superdad said:

Doesn’t change my opinion of S/PDIF though. A lot of effort is required at both the transmit and receive ends to make it perform well,

Transmitting it isn't hard. The problem for the receiver is recovering the clock without excessive jitter. Early implementations used all transitions for clock recovery, resulting in data-dependent jitter, which is what the J-Test signal is designed to expose. Later implementations started using only the preamble of each subframe, which is a fixed pattern, to avoid this issue. USB is infinitely more complex but also far more capable.
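For anyone curious why all-transition recovery is data-dependent: S/PDIF uses biphase-mark coding, where every bit cell starts with a transition and a 1 bit adds an extra transition mid-cell, so the edge density varies with the data. A toy encoder (illustrative only; preambles and subframe structure omitted) makes that visible:

# Toy biphase-mark encoder: shows why the transition density of an
# S/PDIF stream depends on the data, which is the root of the
# data-dependent jitter seen by all-transition clock recovery.

def biphase_mark(bits, level=0):
    """Return the line level for each half bit cell."""
    out = []
    for bit in bits:
        level ^= 1            # transition at the start of every cell
        out.append(level)
        if bit:
            level ^= 1        # a 1 bit adds a mid-cell transition
        out.append(level)
    return out

def transitions(levels):
    return sum(a != b for a, b in zip(levels, levels[1:]))

print(transitions(biphase_mark([0] * 8)))  # 7 transitions for all-zero data
print(transitions(biphase_mark([1] * 8)))  # 15 transitions for all-one data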

 

1 minute ago, Superdad said:

and then there are still issues with the cable, etc. No denying that; unless generated right in the computer, S/PDIF adds another unnecessary conversion in the chain.

Yes, going from USB to S/PDIF to I2S is pointless when you can just as well go to I2S directly.

8 minutes ago, davide256 said:

Digital cables are one of the reasons I want my next DAC to have the UPNP/Roon renderer as part of a single chassis solution... I've found I really can't stand USB cables, even with ISO Regen I use back to back USPCB connectors.  SPDIF I splurged on quite a while ago, but I'm sure that if I was willing to spend $$$ on even better  I'd find differences... which all could be rendered moot by keeping the endpoint digital D/A function internal to a single integrated well designed unit.

The argument against this is that it's better to keep noisy digital electronics far away from the sensitive analogue parts, such as in a separate box.

1 minute ago, Ralf11 said:

another question is how much damage do unnecessary conversions really do in the digital domain - under what circumstances will jitter increase to hearable levels, for example?

Jitter doesn't accumulate. A good final stage can undo much damage done earlier, and a poor one will ruin the cleanest of inputs.

13 minutes ago, Superdad said:

I2S (via LVDS over HDMI) is rarely done in ideal fashion as the source ends up as the master clock (very few DACs feed master clock out to slave the I2S source).

One reason might be that to have the DAC be the master clock, you'd need to deal with an unknown amount of signal skew. A 1-metre cable would have a minimum of 10 ns round-trip delay, probably twice as much once LVDS interfaces and other logic are included. That's too much to be fed directly into most DAC chips, and you'd need a resynchronisation stage to ensure the phase requirements are met. The source would also need to support an external master clock, which not all do.
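Rough arithmetic behind the 10 ns figure, assuming a typical cable velocity factor of about 0.66 (roughly 5 ns per metre); actual cables vary:

# Round-trip delay estimate for a master clock sent from DAC to source
# and back over a 1 m cable. Assumes ~0.66c propagation velocity,
# which is typical but cable-dependent.

C = 299_792_458          # speed of light, m/s
VELOCITY_FACTOR = 0.66   # assumed; varies with the cable dielectric

def round_trip_ns(length_m):
    one_way = length_m / (C * VELOCITY_FACTOR)
    return 2 * one_way * 1e9

print(round_trip_ns(1.0))  # ~10 ns before any LVDS buffer delays are added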

1 minute ago, Audiophile Neuroscience said:

What is confusing for me, as far as I can understand the concepts, is the notion that the clock doesn't have to be *recovered* so long as it is *replaced* by a better one.

Absent a feedback channel for flow control, the playback rate must be slaved to the sender in order to avoid clock drift.

3 minutes ago, Audiophile Neuroscience said:

So if spdif lacks the feedback channel it must be slaved to the sender, with its jitter? In the alternative, why can't you just discard the sending clock, forget about drift, just use a new clock at the receiving end?

If you ignore drift, you'll need to either drop or insert samples whenever the clocks slip by more than a sample period. The S/PDIF spec requires a frequency accuracy of 1000 ppm for the sender. Suppose your local clock is running at a perfect 48 kHz while the sender is at the upper end of the permitted range, that is 48048 Hz. Every second, you'll be receiving 48 samples more than you know what to do with. You have no choice but to discard them, and this causes distortion. Similarly, if the sender is slow, you'll have to somehow pull 48 samples per second out of thin air, again distorting the signal.
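Spelled out with those example rates:

# How many samples per second accumulate (or go missing) if the
# receiver free-runs instead of slaving to the sender's clock.
# Uses the worst-case 1000 ppm example rates from above.

local_rate = 48_000    # Hz, receiver's "perfect" clock
sender_rate = 48_048   # Hz, sender at +1000 ppm

excess_per_second = sender_rate - local_rate
print(excess_per_second)          # 48 samples/s must be dropped
print(excess_per_second * 3600)   # 172800 samples discarded per hour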

7 hours ago, Superdad said:

Or you can do what @JohnSwenson did for the unique S/PDIF input of the Bottlehead DAC, which was to use an FPGA instead of a traditional S/PDIF receiver with its jitter-prone PLL.  He did a little cleanup of the S/PDIF signal then sent it into an FPGA for the decoding. But the special part was that he’s used a digitally controlled low phase-noise clock, with performance close to some of the best fixed frequency clocks. The FPGA told the variable clock to speed up or slow down so it was synchronized to the average data rate of the source.  It was a REALLY good S/PDIF input!

Yes, this is a well-known and obvious approach. The point is that you must do something to synchronise with the source clock.
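For the curious, that kind of control loop amounts to nudging a variable clock according to the receive FIFO fill level. A minimal sketch, not John Swenson's actual implementation; the gain and target values here are invented for illustration:

# Minimal sketch of slaving a digitally controlled oscillator to the
# average incoming S/PDIF data rate via FIFO fill level.
# NOT the Bottlehead/FPGA implementation; gain and targets are invented.

NOMINAL_HZ = 44_100.0
TARGET_FILL = 0.5        # keep the FIFO half full on average
GAIN_PPM = 20.0          # proportional gain, ppm of correction per unit of error

def clock_correction(fifo_fill):
    """Return the frequency (Hz) to program into the variable clock."""
    error = fifo_fill - TARGET_FILL          # positive: data arriving too fast
    ppm = GAIN_PPM * error
    return NOMINAL_HZ * (1.0 + ppm / 1_000_000)

# FIFO filling up -> speed the DAC clock up slightly; draining -> slow it down.
print(clock_correction(0.6))   # a hair above 44100 Hz
print(clock_correction(0.4))   # a hair below 44100 Hz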

13 minutes ago, adamdea said:

But there are at least two ways in which the conversion clock can run independently of the playback rate (long term clock rate) of the sender- one is to have a long buffer and delay playback by long enough to render any clock drift moot- I think chord did this around 2007, PS audio later on. The price is a 2-4 sec delay which never bothered me.

We already discussed this a few pages ago. With a 4-second pre-buffer and a worst-case (1000 ppm slow) sender, your buffer will run dry in about an hour. Enough for a regular album, but too short for longer works, of which there are many, both classical (think opera) and modern (double albums).
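The same worst-case arithmetic, inverted:

# How long a given pre-buffer lasts against a worst-case slow sender.
def time_to_underrun_s(prebuffer_s, ppm=1000):
    return prebuffer_s / (ppm / 1_000_000)

print(time_to_underrun_s(4) / 60)   # ~67 minutes: enough for an album, not an opera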

 

13 minutes ago, adamdea said:

The other is ASRC where the sending clock rate is encoded in the data and the discrepancy between the sending and conversion clocks manifests as a minute change in pitch (if my brain's wired up correctly this morning).

One application of ASRC is to replace a variable-rate clock. Instead of adjusting your clock to match the sender, you measure the average rate of the incoming data (relative to the local clock) and resample the signal to make it match. The advantage of this method is that you can use a fixed-rate clock, and these generally have better stability than variable ones. Your clock is also not limited to the usual audio sampling rates, so you can choose whatever makes your DAC stage perform optimally. I believe Benchmark DACs use an approach similar to this.
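A toy illustration of the resampling step, given a rate ratio you've already measured. Real ASRCs use long polyphase filters rather than linear interpolation, and this is not any particular product's implementation:

# Toy asynchronous sample-rate conversion: resample an incoming block
# by a measured rate ratio using linear interpolation.

def asrc_block(samples, ratio):
    """ratio = measured_input_rate / local_output_rate."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# Sender running 1000 ppm fast relative to our fixed local clock:
block = [float(n) for n in range(100)]
print(len(asrc_block(block, 1.001)))  # slightly fewer output samples, nothing dropped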

11 minutes ago, adamdea said:

but buffer size is not necessarily equal to one sample.

Of course it isn't. That would never work. The buffer size is, however, irrelevant. No matter what it is, if you don't synchronise playback to the incoming data, any buffer will eventually become inadequate. A properly designed DAC should be able to run indefinitely without glitches.

6 minutes ago, adamdea said:

I see your comment about the 4 second buffer and indefinitely etc. But in my experience (with the ps audio dac) it works perfectly well, probably because most sending devices are much closer than 1000ppm (I think one of my squeezebox touches is 44094, the other 44100, as is my cdp). You can get through the ring cycle with that.

That puts your Squeezebox at an error of 136 ppm, making it a bit of an outlier. Most devices seem to manage an accuracy of 50 ppm or better. So yes, in practice you can get away with a smaller buffer for longer. Nevertheless, the spec is what it is, and it is IMO inexcusable not to comply with it.
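For reference, the ppm figure is just the relative error of the measured rate:

# Clock error in parts per million from a measured sample rate.
def ppm_error(measured_hz, nominal_hz):
    return (measured_hz - nominal_hz) / nominal_hz * 1e6

print(ppm_error(44_094, 44_100))  # about -136 ppm for the Squeezebox above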

 

6 minutes ago, adamdea said:

Even listening to opera, there are always track breaks - no one wants a recording of the interval between acts. Even Rheingold is only 2.5 hours or so without an interval. A few things (maybe Einstein on the Beach?) go on for longer than that without an interval.

How will the playback software know when to stop and restart the DAC so as to let it re-buffer?

 

6 minutes ago, adamdea said:

In practical terms there is no real issue apart from a delay at the beginning. And of course you can switch over to pll or rebuffer if there is a problem.

That really complicates things.

17 minutes ago, Superdad said:

If that is “well known and obvious,” can you point us to any other commercial DACs that implement this technique?

I don't know about DACs specifically, but the idea of adjusting a variable-rate clock to match an external signal is hardly novel. It's such an obvious thing to do that I really can't recall when I first encountered it. If you don't see it that way, I can only say I'm sorry.

 

17 minutes ago, Superdad said:

It actually was not that easy to develop.

The concept can still be obvious.

15 minutes ago, Summit said:

I don’t know how your reply is related to my response to your declaration that SPDIF wasn’t even meant to be used externally when in fact it was I2S, and that hasn’t stopped you from using it, and shouldn’t.

He already said he misspoke, misremembered, or mis-something-else.

2 minutes ago, Summit said:

They use a coax cable with BNC connectors. SPDIF uses the same 75 Ohm coax cable, and this is a thread about “Why does SPDIF basically suck”.

If S/PDIF sucks, it's not because of the cable or connector.

