
Assertions on DACs & Digital Audio



I agree with mansr on this. Unless something is really done wrong the reflections on USB are quite small.

 

This all started with S/PDIF interfaces using RCA plugs and jacks, which DO have significant reflections (RCA plugs are nowhere near 75 ohms). The 1.5 meters was based on the round-trip time of a cable relative to the cycle time of the 44.1 kHz S/PDIF stream.

 

Even if there ARE reflections on USB, this reasoning only applies when using full speed, NOT high speed. By the same logic, the minimum cable length for high-speed USB would be an inch or so.
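The scaling John describes can be sketched with a quick back-of-envelope calculation. This assumes a typical cable propagation velocity of about 0.66c and uses "round trip equals one bit period" as the criterion; both are illustrative assumptions, not figures from the thread:

```python
# Rough sketch: the cable length at which a reflection's round trip
# spans one bit period scales inversely with the signalling rate.
# Assumes a typical velocity factor of 0.66 (illustrative, not from spec).

C = 3.0e8            # speed of light, m/s
V = 0.66 * C         # assumed propagation velocity in cable, m/s

def length_for_round_trip(bit_period_s):
    """Cable length whose round-trip delay equals one bit period."""
    return V * bit_period_s / 2

full_speed = length_for_round_trip(1 / 12e6)    # USB full speed, 12 Mbit/s
high_speed = length_for_round_trip(1 / 480e6)   # USB high speed, 480 Mbit/s

print(f"Full speed: {full_speed:.2f} m")        # ~8 m
print(f"High speed: {high_speed * 100:.1f} cm") # ~21 cm
```

Whatever critical length one derives for full speed shrinks by a factor of 40 at high speed, which is why the 1.5 m rule cannot carry over to 480 Mbit/s signalling.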

 

I don't see this as a criterion at all.

 

John S.

 

 

12 minutes ago, Ralf11 said:

Here is Quote #2:

 

re Jitter... "timing inaccuracies of as little as 10 picoseconds are audible"

Sure. In a PCIe link that amount of jitter will cause transmission errors. If these errors are frequent, audible data loss may result.

 


9 hours ago, mansr said:

Notice that the peaks can only reach about 0.2 V. With a reflection ratio of 1.6%, the worst-case resultant shift in the zero crossing would be about 0.75% UI, which is well within the allocated margin. A higher peak level must also have a shorter rise/fall time, and the maximum skew caused by a 1.6% reflection remains around 0.8% UI.

 

This amount of jitter in the actual DAC clock would of course be unacceptable. However, even the clock recovered from this signal will be much better. Moreover, this clock is only used within the USB PHY.

 

Nice analysis ;) 

 

I agree that the jitter on USB, given buffering etc., is not nearly the concern that it is for a DAC clock. My point was simply that the amount of correlated jitter (along with other sources) is orders of magnitude higher than the random jitter of the USB clock itself, which places a bound on how much jitter ought to be tolerated on the USB clock. The same analysis can be made for external clocks: small discrepancies in cable impedance cause small reflections, which result in small amounts of jitter that are still orders of magnitude more than the clock oscillator itself produces. This point is more appropriate for other recent threads, but @Ralf11 asked specifically, and this is a specific answer: yes, reflections matter, to a very limited degree, even if the cable meets spec.
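As a rough illustration of how the impedance tolerance bounds reflections, the voltage reflection coefficient Γ = (Z_load − Z0) / (Z_load + Z0) can be evaluated at the limits of the 90 Ω ±10% figure mentioned in this thread (the exact tolerance band is an assumption here):

```python
def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at a load on a line of impedance z0."""
    return (z_load - z0) / (z_load + z0)

Z0 = 90.0  # nominal USB 2.0 differential impedance, ohms
# Evaluate at the assumed +/- 10% tolerance limits.
gammas = {z: reflection_coefficient(z, Z0) for z in (81.0, 99.0)}
for z, g in gammas.items():
    print(f"Z = {z:.0f} ohm -> Gamma = {g:+.3f}")  # about -0.053 and +0.048
```

Roughly ±5% of the incident wave reflects at the worst-case mismatch: small, as mansr says, but not zero, which is the point being made here.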

 

These same issues also apply to Ethernet. The specific reason I recommend Belden bonded cables, and particularly the REVConnect connectors, is that they maintain a very constant impedance through connector, cable, and connector, which also minimizes reflections.

 

In any case, there seems to be a recent fascination with using "OCXO" clocks for USB, Ethernet, external clocks, and so on. Regardless of any unconcern with very slight reflections in cables, any clocked signal that travels over a cable is subject to these types of correlated jitter, so the use of "extreme" clocks is not warranted.

Custom room treatments for headphone users.

30 minutes ago, mansr said:

If the cable meets spec, reflections do not matter at all. That's what it means to meet spec.

 

The impedance is specified as 90 ohms +/- 10% (ish). Meeting spec doesn't mean that reflections don't matter at all, rather that they are within limits. In my use of language, a small number is not the same as zero.

 

In any case, the specific question posed to me did not suggest, to me, that it was limited to the USB specification, which frankly interests me very little, but rather concerned reflections in cables, which I find more interesting in general, as well as reflections in PCB traces, around vias, and so on, which I do find relevant to the topics I'm interested in. So suit yourself. 👋


4 hours ago, R1200CL said:

So will the amount of reflection and jitter in a cable be affected by how well it is shielded? (John's suggestion of using a Faraday cage implemented in cables.)

 

The amount of reflection is independent of shielding.

Jitter has several contributors. The following documents may help:

https://www.keysight.com/upload/cmc_upload/All/ADMF2009_HowToMeasureJitterEffectively.pdf

https://www.ieee.li/pdf/viewgraphs/jitter_basics_advanced.pdf

 

 

Jitter.jpg

9 hours ago, jabbr said:

The impedance is specified as 90 ohms +/- 10% (ish). Meeting spec doesn't mean that reflections don't matter at all, rather that they are within limits. In my use of language, a small number is not the same as zero.

Meeting spec means reflections won't be worse than what the receiver is required to handle.

 

9 hours ago, jabbr said:

In any case, the specific question posed to me did not suggest, to me, that it was limited to the USB specification, which frankly interests me very little, but rather concerned reflections in cables, which I find more interesting in general, as well as reflections in PCB traces, around vias, and so on, which I do find relevant to the topics I'm interested in. So suit yourself. 👋

Ralf's quote explicitly said USB cables. In general, reflections occur wherever there is a discontinuity of some kind. How much they matter depends on their strength (which can be calculated), the type of signal, and the length of the connection.

 

Digital interface standards typically specify timing requirements that transmitters must stay within and (usually) somewhat relaxed parameters that a receiver must accept. Provided all the parts are as specified, the link will work reliably.

 

High-frequency analogue signals will be degraded by reflections. In video signals, reflections can sometimes be seen to cause blurring (with short cables) or ghosting (with long cables). For example, with the VESA standard mode of 1024x768 pixels at 75 Hz, reflections in a 5 m long cable will cause a ghost image shifted by about 4 pixels.
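mansr's 4-pixel figure can be checked directly. This sketch assumes the standard VESA pixel clock of 78.75 MHz for 1024x768 at 75 Hz and a cable velocity factor of ~0.66 (both are assumptions on my part, not stated in the post):

```python
PIXEL_CLOCK = 78.75e6     # assumed VESA pixel clock for 1024x768 @ 75 Hz, Hz
V = 0.66 * 3.0e8          # assumed propagation velocity in cable, m/s

round_trip = 2 * 5.0 / V  # reflection round trip in a 5 m cable, seconds
ghost_pixels = round_trip * PIXEL_CLOCK
print(f"Ghost shift: {ghost_pixels:.1f} pixels")  # ~4.0
```

The reflection travels down the cable and back (~50 ns), during which roughly four pixels have been clocked out, so the ghost lands about four pixels to the right.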

 

Low-frequency analogue signals, including audio, are much less affected by normal reflection levels. Since the wavelength of a 20 kHz signal in typical cabling is 10 km, there will be no appreciable effect unless the cables are very long. It does matter for phone lines, but not for much else.
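The 10 km figure follows directly from wavelength = velocity / frequency, again assuming a velocity factor of ~0.66 (an assumption, not from the post):

```python
V = 0.66 * 3.0e8       # assumed propagation velocity in cable, m/s
wavelength = V / 20e3  # wavelength of a 20 kHz signal in the cable, m
print(f"{wavelength / 1000:.1f} km")  # ~9.9 km
```

Any plausible audio interconnect is a vanishingly small fraction of that wavelength, so transmission-line effects are negligible at audio frequencies.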


The first time I read that USB cables should be 1.5 m was in the Berkeley Audio Design Alpha USB manual: “1.5 meters is a good default length for USB, SPDIF and AES cables.” And I believe some audio reviews also reiterated this and went further into the explanations.

 

I’ve always thought the USB part didn’t make much sense, but I don’t know enough to really be sure. Good to hear from a few experts that it’s not true. Too bad I already bought a 1.5 m cable; I could have used a shorter one.
 

14 hours ago, JohnSwenson said:

This all started with S/PDIF interfaces using RCA plugs and jacks, which DO have significant reflections (RCA plugs are nowhere near 75 ohms). The 1.5 meters was based on the round-trip time of a cable relative to the cycle time of the 44.1 kHz S/PDIF stream.

I still don't see how one arrives at 1.5 m. The round-trip time of such a cable is about 15 ns whereas the bit interval for 44.1 kHz S/PDIF is 350 ns.
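mansr's numbers can be reproduced directly. This assumes a velocity factor of ~0.66 and the standard S/PDIF framing of 64 time slots per stereo frame (both assumptions on my part):

```python
V = 0.66 * 3.0e8                      # assumed propagation velocity, m/s

round_trip_ns = 2 * 1.5 / V * 1e9     # 1.5 m cable, round trip  -> ~15 ns
bit_interval_ns = 1e9 / (44100 * 64)  # 64 slots per frame at 44.1 kHz -> ~354 ns
print(f"round trip {round_trip_ns:.0f} ns vs bit interval {bit_interval_ns:.0f} ns")
```

The round trip is over 20x shorter than the bit interval, which is exactly why the 1.5 m figure is hard to derive from first principles: a reflection in such a short cable lands well within the same bit cell.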

55 minutes ago, Ralf11 said:

I think we can all agree to put this one in the unlikely category (tho not impossible as Jabbr wockyied above).

 

 

Well ... nothing to suggest that 1.5m is better for hypothetical USB reflections in any case ...

 

55 minutes ago, Ralf11 said:

Are we ready for #3??

 

Shoot



ok, here is #3:

 

"The S/PDIF interface is fundamentally flawed, in that the clock is carried within the audio data...  This requires that the receiving device lock to that clock with a phase-locked loop (PLL)  and generate a new clock based on the incoming clock.  PLLs are imperfect devices, they tend to track and pass along rather than reject variations in the incoming signal.

 

The clock "recovered" by the PLL becomes the timing reference for the critical D to A conversion stage.  This is the point where jitter matters - in the clock that controls when each digital sample is converted to an analog value in the DAC chip.  If that clock isn't perfectly precise, the ...[signal] is reconstructed with slightly staggered spacing between samples, resulting in tiny errors in the analog waveform.

 

Such micro-errors in amplitude don't exist in nature, which perhaps explains why our brains are highly sensitive to [it].

 

This interface-induced jitter isn't random in nature; it's directly correlated with the audio signal being transmitted... the clock is modulated by the music carried by the interface * * * and degrades SQ."

 

p. 190

 

There are a lot of individual links in the chain of logic here, so I added some paragraph breaks to split it up. There is also some induction in the assertion. Best to be specific as to which statement one critiques, I think.

4 minutes ago, Ralf11 said:

"The S/PDIF interface is fundamentally flawed,

That's a bit of an overstatement. Personally, I'd go for the less dramatic "imperfect."

 

4 minutes ago, Ralf11 said:

in that the clock is carried within the audio data...  This requires that the receiving device lock to that clock with a phase-locked loop (PLL)  and generate a new clock based on the incoming clock.

There are two alternatives to this:

  1. Put the master clock in the receiver and use a flow control mechanism.
  2. Use a separate clock signal.

#1 is obviously the best, but is also the most difficult to implement. Compare the cost and complexity of USB audio (which does this) to S/PDIF.

 

#2 would give a slightly more stable clock signal than embedding it with the data. The downsides are more complex (and thus more expensive) cabling as well as potential problems with signal skew, cross-talk, and other things that can only happen with more than two wires.

 

4 minutes ago, Ralf11 said:

PLLs are imperfect devices, they tend to track and pass along rather than reject variations in the incoming signal.

A PLL tracks variations below its corner frequency and rejects those above it. That is its entire purpose. Implementations differ in where they place the corner frequency and how well they attenuate jitter above it. Cascading two or more PLLs is a simple way of improving the total performance. A PLL coupled with a VCXO can give very good performance if the source clock is reasonably stable.
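The corner-frequency behaviour mansr describes can be illustrated with a first-order jitter transfer function, the simplest model of a PLL's low-pass response (real PLLs are typically second order, and the 1 kHz corner here is just an assumed value for illustration):

```python
import math

def jitter_transfer(f_hz, corner_hz):
    """Magnitude of a first-order PLL jitter transfer function:
    jitter below the corner is tracked (gain ~1), above it is rejected."""
    return 1 / math.sqrt(1 + (f_hz / corner_hz) ** 2)

FC = 1e3  # assumed corner frequency, 1 kHz
for f in (100.0, 1e3, 1e4, 1e5):
    single = jitter_transfer(f, FC)
    cascaded = single ** 2  # two identical PLLs in cascade multiply attenuation
    print(f"{f:>8.0f} Hz: single {single:.3f}, cascaded {cascaded:.4f}")
```

At 100 Hz the incoming jitter passes through almost untouched, at 10 kHz it is cut to about a tenth, and cascading a second PLL squares the attenuation, which is the "simple way of improving total performance" mentioned above.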

 

4 minutes ago, Ralf11 said:

The clock "recovered" by the PLL becomes the timing reference for the critical D to A conversion stage.  This is the point where jitter matters - in the clock that controls when each digital sample is converted to an analog value in the DAC chip.  If that clock isn't perfectly precise, the ...[signal] is reconstructed with slightly staggered spacing between samples, resulting in tiny errors in the analog waveform.

I can't really argue with that.

 

4 minutes ago, Ralf11 said:

Such micro-errors in amplitude don't exist in nature, which perhaps explains why our brains are highly sensitive to [it].

That's quite a leap. Usually, our senses are less sensitive to stimuli that don't exist, or are of no significance, in nature, not more.

 

4 minutes ago, Ralf11 said:

This interface-induced jitter isn't random in nature; it's directly correlated with the audio signal being transmitted... the clock is modulated by the music carried by the interface * * * and degrades SQ."

This was true of early S/PDIF receivers. Current (and not so current) ones use only the fixed preamble of each data word for clock recovery in order to remove any data dependency. Perhaps the book was written before this practice became commonplace, or perhaps the author is ignorant of this development.
