Understanding USB


23 minutes ago, unbalanced output said:

A digital data stream, on the other hand, carries the time information along with it. It can be non-real-time (i.e. both the time stamp and the data are transmitted and buffered at the receiver, where the time signal is regenerated) or real-time (only the data is passed; the time information is passed synchronously from the source clock down to the last user of the data). This is surprisingly difficult to do properly in a digital environment, which involves all sorts of latency issues.

Nonsense. In an audio stream, the sample timing is implied by the sample rate and position in the stream. Nothing needs to be sent anywhere.

 

23 minutes ago, unbalanced output said:

A USB transmitter creates a synchronous data stream which is sent to the receiver. In many DACs, the timing of this data stream also determines the DAC update cycle - the process is greatly improved by using the most accurate clock in the chain as the source (either at the transmitter or at the receiver).

Show me one such DAC available to purchase new today. All current USB DACs use asynchronous isochronous mode with a local master clock.

 

23 minutes ago, unbalanced output said:

If there are packets lost or delayed, there may be issues if the data is not buffered and interpolated in the DAC.

In isochronous mode, the required bandwidth is reserved from host to device, and bulk transfers are limited to whatever remains. Every 125 µs (one high-speed microframe), the host controller sends a data packet containing the number of samples requested by the DAC. Provided the OS keeps the buffers filled, this cannot go wrong. Since latency isn't important for music playback, the buffer size can be set high enough that underruns simply don't occur; 100 ms is usually plenty.
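To put numbers on that, here's a quick back-of-the-envelope sketch in C (the 48 kHz rate, 32-bit stereo samples and 100 ms buffer are just the illustrative values used in this thread):

```c
#include <stdio.h>

int main(void)
{
    const double sample_rate  = 48000.0; /* samples per second per channel */
    const double microframe_s = 125e-6;  /* high-speed USB microframe */
    const int channels        = 2;
    const int bytes_per_smp   = 4;       /* 32-bit samples */

    /* Nominal payload of one isochronous packet */
    double samples_per_uframe = sample_rate * microframe_s;            /* 6 */
    double bytes_per_uframe   = samples_per_uframe * channels * bytes_per_smp;

    /* Depth of a 100 ms playback buffer */
    double buffer_ms      = 100.0;
    double buffer_samples = sample_rate * buffer_ms / 1000.0;          /* 4800 */
    double buffer_uframes = buffer_ms / (microframe_s * 1000.0);       /* 800 */

    printf("samples per microframe: %.1f\n", samples_per_uframe);
    printf("bytes per microframe:   %.0f\n", bytes_per_uframe);
    printf("100 ms buffer: %.0f samples, %.0f microframes\n",
           buffer_samples, buffer_uframes);
    return 0;
}
```

In other words, a 100 ms buffer is around 800 packets deep; a single late or missing packet doesn't get anywhere near emptying it.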

 

It is of course possible for a data packet to be corrupted, e.g. from noise in the cable. If this happens, it is up to the DAC whether to fill in silence, try to interpolate, or simply drop the packet. With proper cables, this is so rare an occurrence that it is of no consequence whatsoever.

 

Reliable transfer of audio data over USB is a non-issue. Apparently some DACs are susceptible to noise somehow coupled over the USB cable. This has nothing to do with the data transfer and its timing.

 

23 minutes ago, unbalanced output said:

In order to use USB purely for data transmission without relying on the time information, the DAC has to regenerate the data stream locally, buffering the data and building it up from scratch (that's, for example, what a Regen or a few DACs like Chord do).

Because USB data arrives in bursts, all DACs, even the old synchronous ones, must have a local buffer.

23 minutes ago, Ralf11 said:

"implied" ?? do you mean determined?

 

so is there any jitter or not?

I mean that if the sample rate is fs, then sample number n corresponds to time t = n / fs seconds from the start of the recording. Counting samples is trivial and not subject to jitter.
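Or in code form, the timestamp reconstruction is just a division (the rate and sample index here are arbitrary):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const double fs = 44100.0;       /* sample rate in Hz */
    uint64_t n = 44100ULL * 3600;    /* an hour's worth of samples */

    /* The timestamp of sample n is fully determined by n and fs;
     * no timing information has to travel with the data. */
    double t = (double)n / fs;
    printf("sample %llu plays at t = %.6f s\n", (unsigned long long)n, t);
    return 0;
}
```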

 

23 minutes ago, Ralf11 said:

and what about noise & galv. isolation, loop currents?

That's a separate issue. It is absolutely possible to create such problems, as I have demonstrated.

15 minutes ago, unbalanced output said:

Sorry if I oversimplified. The time pace is indeed defined by the master device, whichever side it is placed on. This clock dictates the pace of the data stream. Each data packet can carry at most 1 kb at 8 kHz (for XMOS at least).

At 48 kHz sample rate, each data packet carries on average 6 samples. If the host clock is a bit fast, some packets will have 5 samples. If it is a bit slow, some packets will have 7 samples. With 32-bit samples and two channels, this comes out to 48 bytes. At 384 kHz, it's a whopping 384 bytes. Nothing to be concerned about.
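If you want to see how those 5/6/7-sample packets can come about, here's a sketch of a simple fractional accumulator. It's only an illustration of the idea, not any particular host's or DAC's code, and the rate offset is grossly exaggerated so the effect shows up within a handful of packets (real clock offsets are in the ppm range):

```c
#include <stdio.h>

int main(void)
{
    /* Pretend the effective rate is 48.5 kHz instead of 48 kHz --
     * hugely exaggerated, purely so the variation is visible. */
    double effective_rate = 48500.0;
    double per_packet = effective_rate * 125e-6;  /* nominal samples per packet */
    double acc = 0.0;

    for (int i = 0; i < 16; i++) {
        acc += per_packet;
        int n = (int)acc;   /* whole samples that go into this packet */
        acc -= n;           /* carry the fractional remainder forward */
        printf("packet %2d: %d samples\n", i, n);
    }
    return 0;
}
```

Most packets carry 6 samples; every so often one carries 7 (or 5 with drift in the other direction).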

 

15 minutes ago, unbalanced output said:

The data is then broken down into higher frequency/individual samples using the same clock reference! Therefore it doesn't make much of a difference whether the data is sent in packets or not, they're simply broken down further down the line from the buffer - actually the only guarantee is that it doesn't get any better downstream.

That doesn't make any sense.

 

15 minutes ago, unbalanced output said:

Perhaps what you're not considering is the fact that the clock is (typically) not regenerated or checked past the buffer - the data may be used in the correct clock cycle or not, there's no guarantee of that.

The clock is typically downstream of the USB receiver, directly connected to the I2S transmitter and the DAC chip. The USB receiver decides through unspecified means when to request more or fewer samples per frame. One possibility is to trigger this when the FIFO level crosses certain thresholds.
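Here's a sketch of that watermark idea. The FIFO size, thresholds and drift are made up for illustration; a real device would feed the result back to the host through the isochronous feedback endpoint rather than a function call, but the principle is the same:

```c
#include <stdio.h>

#define FIFO_SIZE      1024                /* samples the FIFO can hold */
#define LOW_WATERMARK  (FIFO_SIZE / 4)
#define HIGH_WATERMARK (3 * FIFO_SIZE / 4)

/* Called once per (micro)frame; returns how many samples to request next. */
static int feedback_adjust(int fifo_level, int nominal)
{
    if (fifo_level < LOW_WATERMARK)
        return nominal + 1;   /* running dry: ask for a little more */
    if (fifo_level > HIGH_WATERMARK)
        return nominal - 1;   /* filling up: ask for a little less */
    return nominal;           /* comfortably in the middle: leave it alone */
}

int main(void)
{
    int level   = FIFO_SIZE / 2;  /* start half full */
    int request = 6;              /* nominal samples per frame at 48 kHz */

    /* Simulate a DAC clock that drains slightly more than 6 samples per
     * frame on average, so the request has to creep up occasionally. */
    for (int frame = 0; frame < 2000; frame++) {
        level += request;                     /* host delivers what was asked */
        level -= (frame % 4 == 0) ? 7 : 6;    /* DAC consumes, a hair fast */
        request = feedback_adjust(level, 6);
    }
    printf("final FIFO level: %d samples\n", level);
    return 0;
}
```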

 

15 minutes ago, unbalanced output said:

Agreed on the point that latency is not an issue as long as it is constant, however that is not the case in synchronous or adaptive modes. In asynchronous mode it is indeed minimised, however the processing delay is still variable.

You appear to have the different modes confused. Let me clarify:

  • Synchronous: the DAC uses a PLL to recover the clock from the USB (micro)frame arrival times. Nobody uses this any more.
  • Adaptive: the DAC derives its rate from the incoming data stream, e.g. with a free-running local clock and an ASRC, or it simply hopes for the best. Nobody uses this either.
  • Asynchronous: the DAC runs from a local master clock and adjusts the requested number of samples per frame to compensate for drift between that clock and the host clock. Everybody uses this mode.

All three modes of operation use USB isochronous transfers.
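Incidentally, a device advertises which of these it uses in bits 3:2 of its isochronous endpoint descriptor's bmAttributes field, so you can check any DAC with lsusb -v (or equivalent). A little decoder, for reference:

```c
#include <stdio.h>

/* Decode the synchronization type from an isochronous endpoint's
 * bmAttributes byte (USB 2.0 spec, endpoint descriptor, bits 3:2). */
static const char *sync_type(unsigned char bmAttributes)
{
    switch ((bmAttributes >> 2) & 0x3) {
    case 1:  return "asynchronous";
    case 2:  return "adaptive";
    case 3:  return "synchronous";
    default: return "none";
    }
}

int main(void)
{
    printf("bmAttributes 0x05 -> %s\n", sync_type(0x05)); /* typical async DAC */
    printf("bmAttributes 0x09 -> %s\n", sync_type(0x09));
    printf("bmAttributes 0x0d -> %s\n", sync_type(0x0d));
    return 0;
}
```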

12 minutes ago, beerandmusic said:

Not according to what MANSR said, and I tend to believe him that whatever size the buffer is, it is sufficient. I would have a very difficult time believing that DAC engineering is so premature that they are not properly buffering.

The DAC buffer is quite small, no more than 100 ms. Pro gear used for live effects has a latency of only a few milliseconds. The buffer only needs to hold a few packets to do its job.

 

Because isochronous mode is intended to provide a constant, low latency, there is no retransmission on error. Bulk mode has retransmission but no guaranteed latency or bandwidth.

 

DACs use isochronous mode since the guaranteed throughput and latency are more important than perfect delivery. Remember, actual packet errors are very rare, and the only consequence, should one occur, is a minor annoyance. Under normal usage, you're unlikely to encounter one during a year of listening. A storage device has the opposite requirements: even a single bit in error can have dire consequences, but nobody notices if copying a file takes a few milliseconds longer. This is the situation USB bulk transfers are intended for.

55 minutes ago, beerandmusic said:

Assuming transmission of a music file can't be done in the same manner, what do you suggest is a "proper cable"? I mean, are you going to suggest that one needs to spend $200 on a cable that meets the USB spec?

A proper 6-foot cable need not cost more than $10. You can probably find one for $5 too.

7 minutes ago, GUTB said:

Even if you don't want to go through the minor trouble of buying and returning to test your belief system, could you at least make it out to an audio show and listen for yourself? Audio shows aren't the best venues for real listening, but some DACs and some rooms are very impressive. Go listen to an Optologic, a Wavedream, an Yggdrasil, etc., and come back with your findings.

At a show, it's highly unlikely that they'd be using the same amps and speakers. Any comparisons made there are meaningless.
