
Why does SPDIF basically suck?



4 minutes ago, Superdad said:

Between elimination of stages and the ability to slave an internal USB>I2S board to the DAC master clock, there would seem to be very few decent arguments in favor of using an external DDC.  The only valid argument I have ever heard was that the DAC designer's USB (or Ethernet) input did not have enough effort put into it.

A valid reason for an external converter would be if the DAC doesn't have USB at all.

 

S/PDIF also lets you do things like split the signal and feed multiple receivers at the same time. Admittedly not a typical thing to do, but if you do need to, it's possible. With USB it's pretty much impossible.

Link to comment
1 hour ago, Superdad said:

 

I certainly never said it did.  I don't see the world in black-and-white. 9_9

 

My original point was simply that, unless one is producing a really good, well-clocked S/PDIF signal right in the source computer (eschewing USB altogether), the choice to use S/PDIF is just a choice to do USB>I2S>S/PDIF conversion externally--and to then require another S/PDIF>I2S conversion in the DAC.

 

Between elimination of stages and the ability to slave an internal USB>I2S board to the DAC master clock, there would seem to be very few decent arguments in favor of using an external DDC.  The only valid argument I have ever heard was that the DAC designer's USB (or Ethernet) input did not have enough effort put into it.

 

Cheers,

--AJC

 

I don't believe conversion between different digital interfaces is bad; I even think it has its advantages. One type of transmission protocol can be better at reducing RF noise or at sending over longer distances, while another type of protocol can be better at reducing jitter, and so on.

 

I actually think most would go USB > S/PDIF or Network > USB > S/PDIF (which includes conversion, buffering and reclocking) externally, and then S/PDIF > I2S conversion in the DAC. What will be lost or added in that conversion vs. Network > USB > USB externally and USB > I2S conversion in the DAC?

 

Okay, the only valid argument for me is whether the sound improves with a DDC, USB regen, or renderer, or not. I have tested with some DACs, and mR/uR or JCAT USB direct to the DAC didn't sound as good as mR/uR or JCAT USB into a DDC, then BNC S/PDIF to the DAC.

 

No wired digital interface sucks, only some implementations. The same is true for tube vs. SS, R2R vs. SD, renderer vs. server, planar vs. dipole vs. horn vs. box speakers, and Class A vs. Class A/B vs. Class D.

Link to comment
3 hours ago, Summit said:

 

They use a coax cable with BNC connectors. S/PDIF uses the same 75 ohm coax cable, and this is a thread about “Why does SPDIF basically suck”.

 

And coax S/PDIF should, for best performance, also use 75 ohm BNC connectors (including the sockets) at both ends of the 75 ohm cable.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
2 hours ago, mansr said:

S/PDIF also lets you do things like split the signal and feed multiple receivers at the same time. Admittedly not a typical thing to do, but if you do need to, it's possible. With USB it's pretty much impossible.

 

Even my old MF X-DAC V3 permits feeding another DAC via Coax SPDIF.

[Image: xdacv3withcricket3s.jpg]

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment

So, no doubt naively, I am conceptualizing that if the buffer is large enough to hold *all* samples then controlling/synchronizing their flow becomes moot......

 

11 hours ago, mansr said:

 So yes, in practice, you can get away for longer with a smaller buffer.

 

 

 

Don't you mean a larger buffer??? o.O (confused)

 

11 hours ago, adamdea said:

Anyway the major advantage of trying out a really long buffer

 

That's "long" as in large, right?

 


11 hours ago, adamdea said:

 

a really long buffer is to discover that it doesn't make any difference and stop angsting  about jitter.

 

Do you mean the buffer doesn't alter jitter problems or solves jitter problems?

 

 

Sound Minds Mind Sound

 

 

Link to comment
1 minute ago, Audiophile Neuroscience said:

So, no doubt naively, I am conceptualizing that if the buffer is large enough to hold *all* samples then controlling/synchronizing their flow becomes moot......

How many samples is that? A week's worth? A month? A year?
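
For a rough sense of scale, here is a back-of-the-envelope sketch assuming uncompressed 16-bit/44.1 kHz stereo PCM (CD rates); the durations are illustrative, not figures quoted in the thread:

```python
# Back-of-the-envelope buffer sizes for uncompressed CD-quality PCM
# (44.1 kHz, 16-bit, stereo). Assumed rates, for illustration only.

BYTES_PER_SECOND = 44_100 * 2 * 2   # frames/s * channels * bytes/sample = 176,400 B/s

def buffer_gib(seconds: float) -> float:
    """Buffer size in GiB needed to hold `seconds` of audio."""
    return seconds * BYTES_PER_SECOND / 2**30

for label, seconds in [("one song (4 min)", 4 * 60),
                       ("1.5 hours", 1.5 * 3600),
                       ("a week", 7 * 24 * 3600),
                       ("a year", 365 * 24 * 3600)]:
    print(f"{label:17s} ~ {buffer_gib(seconds):8.2f} GiB")

# one song (4 min)  ~     0.04 GiB
# 1.5 hours         ~     0.89 GiB
# a week            ~    99.36 GiB
# a year            ~  5180.90 GiB   (about 5 TiB)
```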

Link to comment
4 minutes ago, mansr said:

How many samples is that? A week's worth? A month? A year?

 

Hi Mans. I am not challenging you (this time haha), just trying to learn. I gather a bigger buffer is required for a year vs. a week. IF my thinking is correct, then I would say I would be 'happy' if the buffer held enough samples on average to last a full song. Now that may mean a variable pause before each playback, but that would be my preference IF it solved jitter (and SQ improved) when using S/PDIF. OTOH, IMO just go with USB, which doesn't have the same problem and to my ears sounds better anyway.

Sound Minds Mind Sound

 

 

Link to comment
1 hour ago, Audiophile Neuroscience said:

Hi Mans. I am not challenging you (this time haha), just trying to learn. I gather a bigger buffer is required for a year vs. a week. IF my thinking is correct, then I would say I would be 'happy' if the buffer held enough samples on average to last a full song. Now that may mean a variable pause before each playback, but that would be my preference IF it solved jitter (and SQ improved) when using S/PDIF.

How would the DAC know how much to pre-buffer? Nothing in the S/PDIF signal indicates the duration, nor is there any kind of start/stop command.

Link to comment
12 hours ago, Summit said:

 

I don't believe conversion between different digital interfaces is bad; I even think it has its advantages. One type of transmission protocol can be better at reducing RF noise or at sending over longer distances, while another type of protocol can be better at reducing jitter, and so on.

 

I actually think most would go USB > S/PDIF or Network > USB > S/PDIF (which includes conversion, buffering and reclocking) externally, and then S/PDIF > I2S conversion in the DAC. What will be lost or added in that conversion vs. Network > USB > USB externally and USB > I2S conversion in the DAC?

 

Okay, the only valid argument for me is whether the sound improves with a DDC, USB regen, or renderer, or not. I have tested with some DACs, and mR/uR or JCAT USB direct to the DAC didn't sound as good as mR/uR or JCAT USB into a DDC, then BNC S/PDIF to the DAC.

 

No wired digital interface sucks, only some implementations. The same is true for tube vs. SS, R2R vs. SD, renderer vs. server, planar vs. dipole vs. horn vs. box speakers, and Class A vs. Class A/B vs. Class D.

 

All DDCs I've tried add coloration. It can be good or bad depending on the system and the DDC, but it's one more component in the chain plus an extra cable. It always alters the sound.

If one has a clean USB output from a PC (or streamer) and the USB input board in the DAC is done right (a separate PSU, decent clock, etc.), I don't see a reason to use a DDC. In my system any DDC affects performance negatively.

JPLAY & JCAT Founder

Link to comment
17 minutes ago, Marcin_gps said:

All DDCs I've tried add coloration. It can be good or bad depending on the system and the DDC, but it's one more component in the chain plus an extra cable. It always alters the sound.

If one has a clean USB output from a PC (or streamer) and the USB input board in the DAC is done right (a separate PSU, decent clock, etc.), I don't see a reason to use a DDC. In my system any DDC affects performance negatively.

Really subtle marketing there.

Link to comment
11 hours ago, Audiophile Neuroscience said:

 

 

That's "long" as in large, right?

 


 

Do you mean the buffer doesn't alter jitter problems or solves jitter problems?

 

 

I guess you can call a buffer long or large depending on whether you are measuring it in bits or in the length of time that those bits represent.

 

The buffer basically solves the "problem" of jitter. To be fair (which I generally avoid), some people would term this only the "first order" effect of jitter, i.e. the problem of the conversion clock having to track the sending clock. People like John Swenson claim that there is a second-order jitter effect: all the little bits marching into the buffer have such heavy footsteps that they make the conversion clock wobble, even though they tiptoe out perfectly in time.

 

I have tried a DAC with a long buffer, and I have tried a proper CD transport slaved to the DAC's conversion clock. Both of them are reassuring, but they mainly convinced me that there wasn't much to be worried about in the first place. Jitter is largely a bogeyman: the actual evidence of audibility is slight, and after it is solved nothing much changes (at least for me).

 

It should be borne in mind that mansr is basically setting out why, from an engineering point of view, S/PDIF is not the best way of doing things. A properly asynchronous way of sending data would be better (e.g. Ethernet, or maybe bulk USB). But that isn't the same as saying there is a real problem with S/PDIF which limits the listener. I notice, incidentally, that some designers and some listeners still prefer S/PDIF over audio USB.

 

 

You are not a sound quality measurement device

Link to comment
10 hours ago, mansr said:

How would the DAC know how much to pre-buffer? Nothing in the S/PDIF signal indicates the duration, nor is there any kind of start/stop command.

If you've got enough for 1.5 hours and it rebuffers whenever someone stops playback, see how many complaints you get. My guess would be the square root of bugger all.

 

Incidentally, I would have guessed it would be possible in principle to detect whether track changes involve a silence between tracks or whether genuinely gapless playback is necessary. Only a few live albums and some heavily tracked classical albums actually require this. Many (most?) classical symphonies are still put on the CD one track per movement.

For most of the music most people play most of the time, rebuffering between tracks would be fine and a relatively short buffer would do.

This all sounds theoretical and impractical, but some DAC manufacturers seem to have managed a scheme using a buffer to enable the use of a fixed-frequency clock. I seem to remember that Naim used (or claimed they used) a solution of having a variety of different fixed-frequency clocks which they matched to the incoming stream.

I can see that it wouldn't be a satisfying solution in the sense of being guaranteed to work all the time for anything that was thrown at it. But seriously, if I were worried about jitter I would happily sacrifice the ability to listen to 4 hours of absolutely continuous music without a couple of seconds' break.
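
As a hedged sketch of that general idea (the clock frequencies and matching rule below are illustrative guesses, not Naim's actual design), selecting a fixed master-clock family from a measured incoming rate might look like this:

```python
# Sketch: pick a fixed master-clock family based on the measured
# incoming sample rate. Clock values and matching rule are
# illustrative, not any manufacturer's actual design.

FIXED_CLOCKS_HZ = {
    "44.1 kHz family": 22_579_200,   # 512 x 44.1 kHz
    "48 kHz family":   24_576_000,   # 512 x 48 kHz
}

def pick_clock(measured_rate_hz: float) -> str:
    """Return the family whose base rate the measured rate most nearly divides into an integer multiple of."""
    def fractional_error(name: str) -> float:
        base_rate = FIXED_CLOCKS_HZ[name] / 512      # 44100 or 48000
        ratio = measured_rate_hz / base_rate
        return abs(ratio - round(ratio))
    return min(FIXED_CLOCKS_HZ, key=fractional_error)

print(pick_clock(44_099.7))   # -> 44.1 kHz family
print(pick_clock(96_001.2))   # -> 48 kHz family
```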

You are not a sound quality measurement device

Link to comment
37 minutes ago, adamdea said:

I guess you can call a buffer long or large depending on whether you are measuring it in bits or in the length of time that those bits represent.

 

The buffer basically solves the "problem" of jitter. To be fair (which I generally avoid), some people would term this only the "first order" effect of jitter, i.e. the problem of the conversion clock having to track the sending clock. People like John Swenson claim that there is a second-order jitter effect: all the little bits marching into the buffer have such heavy footsteps that they make the conversion clock wobble, even though they tiptoe out perfectly in time.

 

I have tried a DAC with a long buffer, and I have tried a proper CD transport slaved to the DAC's conversion clock. Both of them are reassuring, but they mainly convinced me that there wasn't much to be worried about in the first place. Jitter is largely a bogeyman: the actual evidence of audibility is slight, and after it is solved nothing much changes (at least for me).

 

It should be borne in mind that mansr is basically setting out why, from an engineering point of view, S/PDIF is not the best way of doing things. A properly asynchronous way of sending data would be better (e.g. Ethernet, or maybe bulk USB). But that isn't the same as saying there is a real problem with S/PDIF which limits the listener. I notice, incidentally, that some designers and some listeners still prefer S/PDIF over audio USB.

 

 

Thanks for your thoughtful reply. I'm guessing your views on jitter will be controversial.

Sound Minds Mind Sound

 

 

Link to comment
16 hours ago, Summit said:

They use a coax cable with BNC connectors. S/PDIF uses the same 75 ohm coax cable, and this is a thread about “Why does SPDIF basically suck”.

You mean the same cables and connectors used in lab equipment to get very clean multi-gigahertz signals around? The same connector used in just about all scopes, even the fastest ones?

 

The BNC connector is very hard to improve on. 

NUC10i7 + Roon ROCK > dCS Rossini APEX DAC + dCS Rossini Master Clock 

SME 20/3 + SME V + Dynavector XV-1s or ANUK IO Gold > vdH The Grail or Kondo KSL-SFz + ANK L3 Phono 

Audio Note Kondo Ongaku > Avantgarde Duo Mezzo

Signal cables: Kondo Silver, Crystal Cable phono

Power cables: Kondo, Shunyata, van den Hul

system pics

Link to comment
1 minute ago, Audiophile Neuroscience said:

Thanks for your thoughtful reply. I'm guessing your views on jitter will be controversial.

All views on jitter are controversial on a hi-fi forum, I'd say.

But there was a lot of literature in the early 90s. It is exciting when a problem is "discovered" which can be solved, as this gives a justification for newer and better products. Julian Dunn wrote some interesting stuff and proposed the J-test. Stereophile uses it to measure jitter and has done for years. Pretty much all proper DACs over £100 (and probably loads of them under it) now pass the J-test with flying colours, and they have done for years. That much is pretty much a fact.

If you doubt my scepticism, try finding some actual evidence of audibility of jitter since Benjamin and Gannon. And look at the levels of jitter found in the output of modern DACs.
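
For reference, the J-test stimulus is commonly described as a tone at Fs/4 combined with an LSB-level square wave at Fs/192. A rough sketch of such a signal (the amplitudes and exact construction here are illustrative assumptions, not Dunn's published specification):

```python
# Rough sketch of a J-test-style stimulus as commonly described:
# a tone at Fs/4 plus an LSB-level square wave at Fs/192, intended
# to provoke data-dependent jitter. Illustrative assumptions only.

import numpy as np

def jtest_like(fs: int = 48_000, seconds: float = 1.0, bits: int = 16) -> np.ndarray:
    n = np.arange(int(fs * seconds))
    lsb = 1.0 / (2 ** (bits - 1))                        # one LSB relative to full scale
    tone = 0.5 * np.sin(2 * np.pi * (fs / 4) * n / fs)   # Fs/4 sine at an assumed -6 dBFS
    # Fs/192 has a period of exactly 192 samples, so toggle every 96 samples
    square = np.where((n // 96) % 2 == 0, lsb, -lsb)
    return tone + square

signal = jtest_like()   # 1 s at 48 kHz; feed it to the device under test
```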

You are not a sound quality measurement device

Link to comment
28 minutes ago, adamdea said:

If you've got enough for 1.5 hours and it rebuffers whenever someone stops playback, see how many complaints you get. My guess would be the square root of bugger all.

 

Incidentally, I would have guessed it would be possible in principle to detect whether track changes involve a silence between tracks or whether genuinely gapless playback is necessary. Only a few live albums and some heavily tracked classical albums actually require this. Many (most?) classical symphonies are still put on the CD one track per movement.

For most of the music most people play most of the time, rebuffering between tracks would be fine and a relatively short buffer would do.

This all sounds theoretical and impractical, but some DAC manufacturers seem to have managed a scheme using a buffer to enable the use of a fixed-frequency clock. I seem to remember that Naim used (or claimed they used) a solution of having a variety of different fixed-frequency clocks which they matched to the incoming stream.

I can see that it wouldn't be a satisfying solution in the sense of being guaranteed to work all the time for anything that was thrown at it. But seriously, if I were worried about jitter I would happily sacrifice the ability to listen to 4 hours of absolutely continuous music without a couple of seconds' break.

You're looking at this as a user. I'm looking at it from an engineering point of view. If I were to build a DAC, it would work equally well no matter how it was used. If someone wants to play continuously for a month, that should be possible. Hmm, maybe I should build a DAC. How hard can it be?

Link to comment
12 minutes ago, mansr said:

You're looking at this as a user. I'm looking at it from an engineering point of view. If I were to build a DAC, it would work equally well no matter how it was used. If someone wants to play continuously for a month, that should be possible. Hmm, maybe I should build a DAC. How hard can it be?

I get that. While we are at it, why have any sort of isochronous transmission mechanism at all? Even async USB seems inelegant: why bother troubling the sending device all the time, and surely we want proper error correction and retransmission?

You are not a sound quality measurement device

Link to comment
32 minutes ago, adamdea said:

All views on jitter are controversial on a hi-fi forum, I'd say.

But there was a lot of literature in the early 90s. It is exciting when a problem is "discovered" which can be solved, as this gives a justification for newer and better products. Julian Dunn wrote some interesting stuff and proposed the J-test. Stereophile uses it to measure jitter and has done for years. Pretty much all proper DACs over £100 (and probably loads of them under it) now pass the J-test with flying colours, and they have done for years. That much is pretty much a fact.

If you doubt my scepticism, try finding some actual evidence of audibility of jitter since Benjamin and Gannon. And look at the levels of jitter found in the output of modern DACs.

 

I quoted the seminal 1992 Dunn and Hawksford paper earlier in the thread. Jitter was a concern then, and DAC designers since have gone to great lengths to reduce it to ever more vanishingly small values. Evidence for audibility is also always going to be controversial on an audio forum. Interestingly, I noticed that the Red Pill/Blue Pill thread seems to have stalled.

Sound Minds Mind Sound

 

 

Link to comment
6 minutes ago, Audiophile Neuroscience said:

 

I quoted the seminal 1992 Dunn and Hawksford paper earlier in the thread. Jitter was a concern then, and DAC designers since have gone to great lengths to reduce it to ever more vanishingly small values.

The thing is: if one considers the effect of jitter (sidebands on the signal at +/- the jitter frequency), the relative ease of getting very high levels of jitter attenuation at HF, the effects of masking, and the hypotheses about audibility in Dunn's work, it is difficult to avoid the conclusion that the problem was solved (if it was a real problem). IIRC, in the Dunn and Hawksford paper they were comparing devices with a proper PLL to ones which basically had no jitter attenuation at all.
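
To put a number on those sidebands: under the standard small-angle phase-modulation approximation, sinusoidal jitter with peak timing error dt on a tone at frequency f0 produces sidebands roughly 20*log10(pi*f0*dt) dB below the carrier. A quick sketch (the example figures are illustrative, not values from the paper):

```python
# Small-angle estimate of jitter sidebands: sinusoidal jitter with
# peak timing error dt on a tone at f0 gives sidebands at f0 +/- f_jitter,
# roughly 20*log10(pi * f0 * dt) dB below the carrier.
# Standard phase-modulation approximation; example figures illustrative.

import math

def sideband_level_dbc(f0_hz: float, jitter_peak_s: float) -> float:
    return 20 * math.log10(math.pi * f0_hz * jitter_peak_s)

for jitter_s in (10e-9, 1e-9, 100e-12):   # 10 ns, 1 ns, 100 ps peak
    print(f"{jitter_s * 1e12:6.0f} ps jitter on a 10 kHz tone: "
          f"sidebands ~ {sideband_level_dbc(10_000, jitter_s):6.1f} dBc")

# 10 ns -> about -70 dBc, 1 ns -> about -90 dBc, 100 ps -> about -110 dBc
```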

 

 

 

 

 

You are not a sound quality measurement device

Link to comment
9 minutes ago, adamdea said:

I get that. While we are at it, why have any sort of isochronous transmission mechanism at all? Even async USB seems inelegant: why bother troubling the sending device all the time,

Over the course of the playback, the average transfer rate must equal the playback rate. A buffer at the receiver enables the data transfer to take place in bursts. With USB 2.0 audio class devices, a data burst every 125 μs transfers, at the fixed bus rate of 480 Mbps, all the sample data for that microframe interval. Generally speaking, the larger the receive buffer, the less frequently data bursts are required. Still, no matter how large a buffer is deployed, it will require periodic refilling, be it every 125 μs or once an hour, and if the sender is late, playback will stutter. Smaller, more frequent transfers also reduce latency, which is a good thing. Although music playback is less sensitive to latency than, say, gaming, it still matters in some situations. Suppose you use a software volume control. When you adjust the volume, you expect the change to take effect more or less immediately, not after a minute.
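
To put rough numbers on that (a sketch counting payload only and ignoring USB protocol overhead; the sample formats are assumptions for illustration):

```python
# Rough numbers for USB 2.0 high-speed isochronous audio: how much
# sample data one 125 microsecond microframe must carry, and how briefly
# that payload occupies the 480 Mbit/s bus. Payload only, protocol
# overhead ignored; sample formats are illustrative assumptions.

MICROFRAME_S = 125e-6
BUS_BITS_PER_S = 480e6

for rate_hz in (44_100, 96_000, 192_000):
    bytes_per_second = rate_hz * 2 * 3                     # stereo, 24-bit samples
    bytes_per_microframe = bytes_per_second * MICROFRAME_S
    bus_busy_us = bytes_per_microframe * 8 / BUS_BITS_PER_S * 1e6
    print(f"{rate_hz / 1000:5.1f} kHz: {bytes_per_microframe:6.1f} bytes per microframe, "
          f"bus busy ~{bus_busy_us:.2f} of every 125 microseconds")
```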

 

9 minutes ago, adamdea said:

and surely we want proper error correction and retransmission?

Define proper. Guaranteed error-free transmission implies unbounded latency, and we can't have that. Instead, we must decide how much latency is acceptable and do a best-effort transfer within that interval. Since low latency is generally desirable, we look at the typical error rate of the link layer and choose a retransmission scheme resulting in an acceptable level of packet loss. With USB, it turns out that errors are extremely rare to begin with, so there is no need for retransmissions. Bulk transfers, which have higher reliability demands, do have retransmission on error at the cost of unbounded latency.
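
A toy illustration of that trade-off, assuming independent transmission attempts and made-up link error rates:

```python
# Toy model of the latency/reliability trade-off: with per-packet error
# probability p and room for at most k transmission attempts inside the
# latency budget, residual packet loss is roughly p**k. Attempts are
# treated as independent (an idealisation); error rates are illustrative.

def residual_loss(p: float, attempts: int) -> float:
    return p ** attempts

for p in (1e-3, 1e-6):
    for attempts in (1, 2, 3):
        print(f"link error rate {p:.0e}, {attempts} attempt(s): "
              f"residual loss {residual_loss(p, attempts):.0e}")
```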

Link to comment
4 minutes ago, mansr said:

Over the course of the playback, the average transfer rate must equal the playback rate. ...

Define proper. Guaranteed error-free transmission implies unbounded latency, and we can't have that. Instead, we must decide how much latency is acceptable and do a best-effort transfer within that interval. Since low latency is generally desirable, we look at the typical error rate of the link layer and choose a retransmission scheme resulting in an acceptable level of packet loss.

I'm rather boring about this, but personally I think Slim Devices nailed it years ago. I suppose you might want multiple inputs to a DAC, but having a server in a cupboard somewhere transmitting via Ethernet or WiFi to playback devices around the house seems like a sensible way of doing it. There is a slight latency on the volume control, but it's not a big deal.

 

What do you regard as the ideal transmission mechanism?

You are not a sound quality measurement device

Link to comment
