
Hi-Res - Does it matter? Blind Test by Mark Waldrep


Ajax


6 hours ago, mansr said:

I think we can all agree that an oversampling sigma-delta DAC needs a clock to work at all. If this clock is synchronous with the audio data, the design of the DAC chip is simplified since everything can run in lockstep, with a constant number of cycles per sample in which to do computations. For this reason, most chips require a clock (variously designated the system or master clock) at a simple multiple of the sample rate, typically 128, 256, or 512. If a chip supports these multiples, a constant 24.576 MHz clock can be used for sample rates of 48 kHz, 96 kHz, and 192 kHz, while the 44.1 kHz rate family can be handled with a 22.5792 MHz clock. For best jitter performance, designers often use two crystal oscillators at these frequencies and enable one or the other depending on the current sample rate.
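The clock arithmetic above is easy to check in a few lines of code. A minimal sketch (the constant names are my own, not from any datasheet):

```python
# Sketch: verify that one master clock per rate family covers the common
# sample rates at a simple multiple (128, 256, or 512), as described above.

CLOCK_48K = 24_576_000   # Hz, master clock for the 48 kHz family
CLOCK_44K1 = 22_579_200  # Hz, master clock for the 44.1 kHz family

for rate in (44_100, 48_000, 88_200, 96_000, 176_400, 192_000):
    clock = CLOCK_48K if rate % 48_000 == 0 else CLOCK_44K1
    assert clock % rate == 0, "clock must be an exact multiple of the rate"
    print(f"{rate:>7} Hz -> {clock / 1e6} MHz master clock = {clock // rate} x fs")
```

Running it shows multiples of 512, 256, and 128 within each family, which is why a two-oscillator design covers everything.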

 

To support audio sample rates unrelated to the system clock frequency, an asynchronous sample rate converter (ASRC) is required. This device compares the incoming sample rate to the system clock rate and periodically adjusts the parameters of an interpolation filter to match. The output is a data stream with a sample rate synchronised to the system clock. The rest of the chip works as usual. Downsides of this approach include greater design effort, larger chip area, increased power consumption, and more electrical noise. To work well, a substantially faster system clock is also needed, and this too can add to the challenges. Nonetheless, this is how ESS DACs work, so clearly these issues can be overcome. Then again, those chips are not cheap.

 

The constraints and trade-offs involved in DAC chip design are almost entirely unrelated to software resampling between two known rates. The maths behind the interpolation filters is of course the same, but that's where the similarities end.

 

Do I understand correctly?

 

(1) All this talk about floating point math relates only to the asynchronous sample rate converter?

(2) A DAC with only a 22.5792 MHz clock uses an asynchronous sample rate converter only for the 48K family of sample rates?

(3) If an asynchronous sample rate converter is used, extra effort is required to reduce jitter?

 

Thanks.

 

 

mQa is dead!

14 hours ago, mansr said:

To work well, a substantially faster system clock is also needed, and this too can add to the challenges. Nonetheless, this is how ESS DACs work, so clearly these issues can be overcome

 

This would explain the 100 MHz clock in the Brooklyn DAC+. Since this clock is not a simple multiple of any sample rate, does that mean an asynchronous sample rate converter is required for all (i.e. both the 44.1K and 48K families of) sample rate conversions?

3 hours ago, mansr said:

I think we need to take a step back. We've been discussing three different situations.

 

Firstly, there is resampling to an integer multiple of the input rate. This is the simplest case. Doubling (or tripling, etc) the sample rate can be done by simply inserting one (or two, etc) zero samples after each input sample, then applying a low-pass filter with a cut-off at the Nyquist frequency of the input (half the sample rate). As we know, applying a filter means convolving the signal with the impulse response of the filter. Since we've inserted a bunch of zeros into the signal, we know that many of the multiplications involved in the convolution will give a zero result, so we can simplify the calculations by skipping those entirely. We can also skip the step of actually placing zeros into the signal and directly do the multiplications that (might) give a non-zero result. On a rate doubling, half the output samples are coincident in time with the input samples while the other half are positioned midway between two input samples. The computations for the former of these involve half the values in the filter impulse response (let's say the even-numbered ones), and for the latter the other half of the impulse response (the odd-numbered values) are used.
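The polyphase shortcut just described can be sketched in a few lines. This is my own illustrative code, not taken from any real DAC: the even-indexed taps produce the outputs coincident with input samples, and the odd-indexed taps produce the midpoint outputs.

```python
def upsample2x(signal, taps):
    """Double the sample rate of `signal`.

    Equivalent to inserting a zero after each input sample and convolving
    with `taps`, but the multiplications by the inserted zeros are skipped.
    """
    even = taps[0::2]  # phase for outputs coincident with input samples
    odd = taps[1::2]   # phase for outputs midway between input samples
    out = []
    for n in range(len(signal)):
        for phase in (even, odd):
            acc = 0.0
            for k, h in enumerate(phase):
                if n - k >= 0:  # skip taps that reach before the signal start
                    acc += h * signal[n - k]
            out.append(acc)
    return out
```

Note that for unity passband gain the taps must have a DC gain of 2, since zero-stuffing halves the signal energy.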

 

Secondly, we discussed resampling to a non-integer multiple of the input rate. Conceptually, this can be achieved by zero-stuffing the input to yield a sample rate equal to the lowest common multiple of the two rates, low-pass filtering this, and finally discarding samples to leave the desired target rate. For example, to produce 1.5 (3/2) times the input rate, we would first triple the rate by inserting two zeros after each sample and low-pass filtering as discussed above. Then we'd simply discard half of those samples (which is fine since the signal is already properly band-limited), thus halving the sample rate to the desired 1.5x multiple of the input. That's the long way around, and as before, there are some shortcuts to be made. Multiplying by zero is silly, so that can be skipped. It is likewise silly to actually calculate the values of the samples that are then immediately discarded. After these simplifications, we notice that the output samples can be divided into three sets: those coincident with input samples, those positioned one third of the way between input samples, and those at the two-thirds point. As in the rate-doubling case, each of these sets involves a separate subset of values from the filter impulse response. This is what the term polyphase refers to. For a conversion from 44.1 kHz to 96 kHz, the ratio reduces to 320/147, so the impulse response is split into 320 parts or phases. Compared to doubling the rate, we need to store 160 times as many filter coefficients, which may be an issue for a small microcontroller or DAC chip. The computational effort per output sample is, however, the same. For software running on a PC, this extra memory requirement is of no consequence.
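Reducing the rate ratio to lowest terms, and hence finding the number of polyphase phases, is a one-line computation. A small sketch (the function name is mine):

```python
from math import gcd

def polyphase_ratio(rate_in, rate_out):
    """Reduce rate_out/rate_in to lowest terms, returning (up, down).

    `up` is the number of polyphase filter phases; conceptually we insert
    `up - 1` zeros per input sample and keep every `down`-th filtered sample.
    """
    g = gcd(rate_in, rate_out)
    return rate_out // g, rate_in // g

up, down = polyphase_ratio(44_100, 96_000)
print(f"44.1 kHz -> 96 kHz reduces to {up}/{down}: {up} filter phases")
```

This reproduces the 320/147 figure quoted above for 44.1 kHz to 96 kHz.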

 

Thirdly, we have the asynchronous sample rate converter. This is used to convert an input with an unknown or variable sample rate to a (typically higher) fixed rate. Two parts are involved here. First, a digital PLL determines the input rate compared to the chosen output rate. Second, that ratio is used to configure a polyphase resampler as discussed above. The input rate is monitored continuously, and if it drifts, the resampler is adjusted accordingly. An ASRC is typically used only for on-the-fly conversions. In offline processing, the source rate is known (or can be determined), so a fixed-ratio converter is all one needs.
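The rate-tracking part can be caricatured in a few lines. This is a deliberately oversimplified sketch of the idea, not how any real ASRC chip is implemented; the function name and loop gain are illustrative assumptions:

```python
def track_ratio(estimate, samples_in, samples_out, gain=1e-4):
    """Nudge the rate-ratio estimate toward the ratio observed in one block.

    The small `gain` gives the loop a very low corner frequency, so incoming
    jitter is averaged out over many blocks instead of reaching the output.
    """
    observed = samples_in / samples_out
    return estimate + gain * (observed - estimate)

# Feed the loop identical blocks; the estimate converges on the true ratio,
# which would then configure the polyphase resampler.
ratio = 1.0
for _ in range(100_000):
    ratio = track_ratio(ratio, samples_in=44_100, samples_out=96_000)
print(ratio)  # approaches 44100 / 96000 = 0.459375
```

A real digital PLL also has to cope with jittery block timings and bounded buffer sizes, which this sketch ignores.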

 

In all the cases above, arithmetic must be performed with sufficient precision that the accumulated error ending up in each output sample is smaller than one LSB of the output format (roughly). If the input and output are both 24-bit integer, the intermediate format used for the convolution must have somewhat higher precision. On a PC, it is often easiest to simply use 64-bit floating-point which has all the precision required, though it is possible to screw things up and amplify small errors into large ones. In a constrained environment, a likelier choice is something like 48-bit fixed-point. Fixed-point requires a little more design effort to ensure everything stays in range, but once done it is more efficient in terms of silicon utilisation.
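The precision point can be demonstrated numerically. The sketch below (my own illustration) computes a small dot product, of the kind a convolution performs per output sample, both in 64-bit floating point and in a 48-bit-style fixed-point format, and checks that the two agree to well within one LSB of 24-bit audio:

```python
import random

FRAC = 47           # 48-bit fixed point: 1 sign bit + 47 fractional bits
LSB24 = 2.0 ** -23  # one LSB of 24-bit audio scaled to [-1, 1)

def fixed(x):
    """Quantise a real value onto the 48-bit fixed-point grid."""
    return round(x * (1 << FRAC))

def dot_fixed(coeffs, samples):
    """Dot product in fixed point: accumulate exact integer products,
    scaling back down only once at the end."""
    acc = sum(fixed(c) * fixed(s) for c, s in zip(coeffs, samples))
    return acc / (1 << (2 * FRAC))

random.seed(0)
coeffs = [random.uniform(-0.05, 0.05) for _ in range(64)]
samples = [random.uniform(-1.0, 1.0) for _ in range(64)]

float_result = sum(c * s for c, s in zip(coeffs, samples))
fixed_result = dot_fixed(coeffs, samples)
assert abs(float_result - fixed_result) < LSB24  # within one 24-bit LSB
```

Real fixed-point hardware must additionally guard against overflow of the accumulator, which is the "staying in range" design effort mentioned above.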

 

While such a design is possible, it is definitely unusual. A single-clock design typically uses an ASRC to convert all inputs to a much higher fixed rate. Benchmark DACs, for example, work this way.

 

Actually, an ASRC is a common tool for jitter reduction. Simply put, jitter can be dealt with in two ways: by adjusting the local clock to match the data, or by adjusting (resampling) the data to match the local clock. Since the ASRC uses a digital PLL, it can be made with a very low corner frequency, down to a few Hz, whereas analogue PLL/VCO designs tend to have a much higher corner frequency in order not to lose lock. The best analogue jitter cleaners use a cascade of two or more PLLs, each stage lowering the corner frequency. Needless to say, that can get expensive. That's not to say the ASRC is without issues of its own. The appropriate choice depends on many factors, and neither method can be universally declared superior.

 

Thank you. This is well written and informative; I was able to follow along, but I will need a little time to fully digest it (i.e. to connect all the dots in my mind).

 

 

5 hours ago, lucretius said:

 

This would explain the 100 MHz clock in the Brooklyn DAC+. Since this clock is not a simple multiple of any sample rate, does that mean an asynchronous sample rate converter is required for all (i.e. both the 44.1K and 48K families of) sample rate conversions?

 

Already answered by @mansr in another post: "A single-clock design typically uses an ASRC to convert all inputs to a much higher fixed rate."

