
How many bits, how fast, just how much resolution is enough?



Easy. Using a high-res source, apply the same low-pass filter and dither you'd use for a conversion while leaving the result at 192/24. Feed the original and the filtered data to the same DAC with the same settings.
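(As a rough illustration, here is a minimal sketch of that comparison, not taken from the post above; it assumes scipy/numpy and placeholder band edges: band-limit the 192/24 material to a RedBook-like bandwidth and add 16-bit-level TPDF dither, while leaving the data at 192 kHz so both versions hit the same DAC at the same rate.)

```python
# Minimal sketch of the comparison described above (assumed parameters, not the
# original poster's code): band-limit 192 kHz / 24-bit material to a
# RedBook-like bandwidth and add 16-bit-level TPDF dither, while keeping the
# data at 192 kHz so both versions can be fed to the same DAC at the same rate.
import numpy as np
from scipy import signal

FS = 192_000        # sample rate stays unchanged
PASS = 20_000       # assumed passband edge, Hz
STOP = 22_050       # assumed stopband edge (RedBook Nyquist), Hz

def bandlimit_in_place(x):
    """Low-pass the signal to a RedBook-like bandwidth without resampling."""
    numtaps, beta = signal.kaiserord(100.0, (STOP - PASS) / (FS / 2))
    taps = signal.firwin(numtaps, (PASS + STOP) / 2, window=("kaiser", beta), fs=FS)
    return np.convolve(x, taps, mode="same")

def tpdf_dither_16bit(x):
    """Add triangular-PDF dither scaled to one 16-bit LSB (full scale = +/-1.0)."""
    lsb = 1.0 / 2**15
    return x + (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb

# usage: filtered = tpdf_dither_16bit(bandlimit_in_place(original))  # still 192 kHz data
```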

 

Except that the same DAC very likely behaves in a different way if you send it 44.1k content instead... ;) The difference will be huge, for example, if you use any NOS R2R ladder DAC! And your result still doesn't tell anything about the audibility of the format differences alone; it only tells about that particular DAC.

 

For example, the output is vastly different if I send the same RedBook data to a TEAC UD-503 or a Metrum Musette. The first is an oversampling delta-sigma DAC while the second is a NOS R2R ladder DAC. Then there are various DACs that technically sit somewhere between the two, with shoddy oversampling filters offering just 40 dB of stop-band rejection or less, etc.

 

If the digital and/or analog filters are leaky, their audibility depends on what kind of amplifiers follow, how many intermodulation products those leaks end up producing in the audio band, etc.

 

In addition, even in the digital domain there is no single way to do the conversion to/from RedBook; each method also produces different results.

 

So overall this is a complex topic and the results are not straightforward.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


If the original "audio sample rate" doesn't go through the decimation step but stays at the original higher rate, then one gets to "help filtering and response" and "optimiz[e] the conversion hardware" without requiring the "local processes" of oversampling/upsampling. And if these higher rates do indeed "help filtering and response," then where exactly is the tradeoff between speed and accuracy he's trying to sell?

 

Yes, that's why we should be using hi-res as much as possible, be it PCM or DSD. Much less back-and-forth massaging of the data. Filters can be much more relaxed: for DSD, a simple 1st-order anti-aliasing filter is enough on the ADC side. For DXD it can also be pretty simple. RedBook needs the most massaging on both the ADC and DAC side to get something proper.
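(Back-of-envelope sketch, with assumed band edges rather than measured figures, of how much room the anti-aliasing filter gets at each rate; RedBook's fraction of an octave is what forces the brick wall:)

```python
# Back-of-envelope sketch (assumed band edges, not measured figures): how much
# transition bandwidth the anti-aliasing filter gets at each rate. RedBook has
# only a fraction of an octave to work with, which forces a brick-wall design.
from math import log2

cases = {
    "RedBook 44.1 kHz": (20_000, 22_050),       # passband edge -> Nyquist
    "DXD 352.8 kHz":    (40_000, 176_400),      # assumed passband edge
    "DSD256 11.29 MHz": (100_000, 5_644_800),   # assumed analog corner -> Nyquist
}
for name, (edge, nyquist) in cases.items():
    print(f"{name:17s}: {log2(nyquist / edge):5.2f} octaves of transition band")
```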

 

If only all content were native DSD256 recordings...

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

I would be more convinced if someone had a rational explanation for the point at which it sounds so good that no improvement can be detected with higher sample rates. So far it is just "higher is better," as if the improvement can be extended forever with more, more, more.

 

I have posted quite a number of measurement results to show what the differences are... What else is needed?

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Let's not forget the oil-can resonance of many hard-dome tweeters, which, once triggered, may very well have repercussions further down in the audible range:

 

Yes, and that is particularly exaggerated by leaky oversampling filters with RedBook sources, which trigger the resonance all the time.

 

As I've said, the sound gives me the feeling of being in a dentist's chair. It's the "CD sound of the 80s". It hurts... Some people think it sounds "accurate"; to me it sounds hissy and messy.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

DAC imperfections are not part of the data format.

 

Of course not, and the same applies to any data format, be it PCM or DSD. Imperfections come from the algorithms involved in generating and dealing with the data, and from the converters, A/D and D/A. Some are better, some are worse...

 

What matters in the end is what comes out of the DAC in analog form. That's the ultimate measure.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

It's important to look at these concerns in relative terms and conditions. In the case of your example, given the wavelength of 1.35" at 10 kHz, if your head at the time of listening or your speakers in the horizontal plane are off axis by 1 degree or more, you've already exceeded your 100-microsecond time smear by a factor of 3 or greater... and this assumes laboratory consistency for everything else in the signal and environmental chain.

 

The typical time-domain spread of a RedBook oversampling filter is around 1 millisecond.
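(For a rough sense of scale, a textbook Kaiser-window estimate with assumed specs, not a measurement of any particular filter: a linear-phase FIR needs about

\[ N \approx \frac{A - 8}{2.285\,\Delta\omega}, \qquad \Delta\omega = 2\pi\,\frac{\Delta f}{f_s}, \qquad \Delta t \approx \frac{N - 1}{f_s} \]

taps. With an assumed A = 100 dB of stop-band attenuation and a transition band from 20 kHz to 24.1 kHz at f_s = 44.1 kHz, this gives N ≈ 69 taps and a total impulse-response span of roughly 1.5 ms, i.e. on the order of a millisecond.)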

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

I seem to have missed where you gave the specs for a sample-rate/filter combination so good that no further audible improvement is possible by increasing the sample rate or improving the filter. One might measure a better result beyond this, but not hear the improvement. Care to tell us what it is?

 

If you think you don't hear the measurable improvements, including improvements in the audio band, that's your opinion. However, you cannot claim that nobody else in this world is able to hear them.

 

That way there is a target, instead of this mindless idea that more is better ad infinitum.

 

As long as measurable (and/or audible) signal fidelity keeps improving as the implementation improves, I will keep doing it ad infinitum. I have no reason not to. So far I've also been hearing improvements, but I'm not going to claim anything about anybody else's hearing or non-hearing. That's something they need to decide on their own.

 

If improvements become unmeasurable, there just need to be better measurements. Luckily, measurement equipment also keeps improving at a steady pace. ;)

 

My target is beyond what is possible, so the target will never be reached. That's how I like it.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Some have made the case for having pre-amps, amps and speakers that are linear out to 40 kHz or more, such that feeding them information not brick-walled at 20 kHz doesn't introduce new problems that we were trying to solve with the hi-res and gentle-filter solution at the DAC. Is it therefore possible that the reason some of us do/don't prefer (or hear) the benefits of hi-res is the limitations of the downstream equipment?

 

Likely yes... I have paid particular attention to using equipment that can cleanly reproduce at least up to ~50 kHz.

 

Some of the improvements are also audible without such equipment, for example improvements in intermodulation and time-domain behavior. In many cases jitter and level-linearity performance also improves.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

No surprise. His business depends on it.

 

Luckily I don't depend on my business. :) (It is not what I do for a living.)

 

But overall, how is that different from any other audiophile development? Maybe your profile picture explains your opinion in the clearest way. :D I don't share that opinion, though.

 

I was developing HQPlayer for myself long before I started selling licenses for it; I just thought that since I had something, I could also make it available to others while I keep improving it. My business was started to do audio measurement/analysis systems... A lot of the same algorithms just happen to fit HQPlayer's purpose too.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Only working with his example, buddy... but even for a millisecond (0.001 s) there's still no relevance unless one's head is clamped in a vise or a chair positioned in a measured equilateral triangle in an anechoic room of equidistant dimensions.

 

Only if you think of it from a positional point of view. But if you think of it from a settling-time point of view, you may understand better: the step response takes about one millisecond to settle, which is the same time as a full cycle of a 1 kHz sine wave.

 

I like to move around a bit, ya know?

 

My headphones (~90% of my listening) stay pretty much stable on my head, no matter how much I move around.

 

God help us all if the culinary world develops the same audiophobia criteria where everything matters! Who wants a lab technician instead of a Michelin chef?

 

A Michelin chef is the lab technician of the culinary world... Everything needs to be exactly perfect, and the plate must also look very tidy and beautiful, not just stuff randomly thrown on a plate.

 

Many Michelin chefs find it very tiresome to maintain Michelin ratings and move away from them to gain more freedom and comfort in their work.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Again... I'd have figured that after a few years of this some of you would have stopped trying to separate time from frequency in an effort to incorrectly attribute suspected audible differences to this mystical place. Can't have one without the other, fellas!

 

No, not at all! That's what I've been saying! The more you "improve" the frequency-domain performance of your "perfect" filter by making it steeper, the worse the time-domain performance becomes. It is a 1/x relationship after all. The art form is to create filters that really push the limits of the impossible within that 1/x relationship, because in filter design you always have a "margin" between the two. It is not hard to design a filter that is bad in both domains simultaneously, but it is hard to design one that is as good as possible in both simultaneously.

 

This is the same with the Fourier transform: the longer the transform is, the more frequency resolution it has, but the less time resolution it has. However, hearing is known to exceed this Fourier uncertainty limit in its capability to distinguish both time and frequency simultaneously.
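(For reference, the time-frequency uncertainty relation being referred to, in its usual Gabor form:

\[ \sigma_t \, \sigma_f \ge \frac{1}{4\pi} \]

where σ_t and σ_f are the effective widths in time and frequency: narrowing a filter's frequency response necessarily spreads its impulse response in time.)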

 

You know, your filters have a very good time-frequency response if they are at most 1st order. But then you need to use a very high sampling rate...

 

If timing changes, so does the response at the listening position.

 

But you just need to understand that your head doesn't really move within the transient's one-millisecond time window. Your head's Doppler modulation is pretty low because the frequency caused by the movement is much lower than the signal you are listening to. Even if you move your head, it causes a frequency shift, but it doesn't really affect the system's step response the way the filters do. So at most you frequency-modulate the source signal with a low-frequency signal, which is a different thing.

 

And certainly when I use my Sennheiser HD800, or for example Sennheiser IE800s, their placement relative to my head is very steady as a function of time.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Nevertheless, the idea that there is no end to audible improvements in that area does not make much sense to me.

 

Because this is an argument that is never going to end, just like any argument that depends on human senses, be it whiskey, wine, music, whatever.

 

A colleague at Nokia (when I was still working there) once claimed to me that digital filter rejection, or audio electronics distortion, doesn't have to be better than -40 dB because nobody is able to hear anything beyond that. And a lot of people have claimed that nobody is able to hear the difference between 128 kbps MP3 and lossless. A local technical magazine stopped reviewing audio equipment in the 80s when the CD was introduced, because they claimed that every CD player has sounded the same since and nobody is able to hear any differences.

 

Nothing in the world is going to change the opinion of these people, and trying to argue with them is just like trying to argue which one is better, Shiraz or Cabernet Sauvignon, or whether Islay or Speyside is better. It is just futile and a waste of time.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Say a sampling rate around 20 GHz.

 

It is easy to calculate, depending on where you want to set the boundaries. You know your dynamic range and the 6 dB/oct filter's magnitude and phase response, so you can calculate where you need to put the sampling rate.

 

If you put, for example, fc = 25 kHz, you immediately know that a 96 kHz sampling frequency is not going to make it...
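(Rough worked example, my arithmetic under the stated 6 dB/oct assumption: a first-order filter falls 6 dB per octave above its corner, so the frequency at which it reaches a given attenuation A is

\[ f_A \approx f_c \cdot 2^{A/6} \]

With f_c = 25 kHz, reaching about 120 dB takes 20 octaves, i.e. roughly 25 kHz × 2^20 ≈ 26 GHz, which is where sampling rates in the tens of gigahertz come from. At 96 kHz sampling, the Nyquist frequency of 48 kHz is less than one octave above 25 kHz, so the filter is only about 6 dB down there.)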

 

If you look at any of the modern delta-sigma converter chips, we could just use the ADC's modulator output directly, instead of decimating to some low-rate PCM only to end up interpolating back up for the DAC output...

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

96 kHz is enough to push every reasonable complaint about filtering and everything else completely out of consideration.

 

Certainly it is not. And for 96 kHz PCM you end up using decimation factors of over 10x from the actual A/D conversion stage, and again interpolation factors of over 10x for the D/A conversion stage. Just wasted effort.

 

And while many would say "why not do 192/384, etc.", in the pro recording area, which is his business, as cheap as storage is, those higher rates become very burdensome for recording, mixing and mastering. Not just in the cost of using up hard drives, but in terms of how long processing such streams takes.

 

Processing 384k, even tens of channels in real time, is a complete non-issue these days. Really.

 

Lavry does use a gentler filter, starting to roll off above 30 kHz, when doing 96 kHz sampling in his gear.

 

Going down >100 dB in the 30 kHz to 48 kHz band is practically a brick wall. It is more than 100 dB/oct; that is not "gentle".
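(Rough arithmetic for the slope:

\[ \frac{100\ \text{dB}}{\log_2(48/30)} \approx \frac{100}{0.68} \approx 147\ \text{dB/octave} \]

which is well beyond anything a "gentle" filter provides.)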

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

If anybody has blind listening test info where a lot of transients were used (say snare drums, etc.) for redbook vs. hi-zoot digital, I'd really like to see it.

 

That is the program material where I'd expect a difference (if there is a difference).

 

In the past I made my own set of percussion recordings and have been using those as test tracks. Not a blind test, though, but easy to compare, because I can also compare against the direct sound without any recording equipment involved. The mics were the same distance from the instruments as my ears are when playing.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Thanks again. So this theorem proves that there can be no audible difference between RedBook and higher-resolution recordings other than different audible artifacts of D/A conversion. I suppose that's possible...

 

Of course there can be... :D

 

Theorems don't prove anything about hearing, only the maths that happens in the digital domain, and only under certain conditions.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Shannon-Nyquist theorem. That is the mathematical proof.

 

...that you cannot implement exactly in the real world. It conveniently assumes infinitely long signals with filters that are infinitely long, so that nothing ever comes out of the filter because it also has infinite delay. And it assumes infinite precision of sample timing and amplitude. But other than that, yes, it works nicely...
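(The reconstruction the theorem promises is the Whittaker-Shannon interpolation, an infinite sum of infinitely long sinc kernels:

\[ x(t) = \sum_{n=-\infty}^{\infty} x[nT]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right), \qquad T = \frac{1}{f_s} \]

Truncating the sum or the kernel to something realizable is exactly where the practical compromises start.)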

 

View the video linked in my signature. You can skip to the 20 min 50 sec mark and watch about two minutes. Using analog sources and analog monitoring gear with AD/DA in between, he shows you can move a band-limited square wave through various amounts of delay between sample points, and the wave shape you get on the analog o-scope is exactly the same other than moving in time relative to a second square wave. What more proof could you want? You have the theorem predicting something, and an analog monitoring system showing the theorem works as advertised.

 

IIRC, those are pretty heavily oversampled converters, which makes a pretty big difference...

 

Here's a 19 kHz sine wave from a NOS DAC running at a 44.1k sampling rate:

musette-19k-44k1_2.png

 

And the same source data, same DAC, but now upsampled to a 384k sampling rate before being sent to the DAC:

musette-19k-384_2.png

 

So you certainly want to have the conversion running at a higher rate than 44.1 kHz...
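(A back-of-envelope sketch of why the two traces look so different — an idealized zero-order hold with no analog output filter, not the actual Musette measurement: for a NOS DAC the first alias image of a tone at f sits at fs − f, and at 44.1 kHz that image is barely attenuated, so it beats visibly against the 19 kHz tone.)

```python
# Back-of-envelope sketch (idealized zero-order hold, no output filter) of the
# alias image levels behind the two scope shots above. For a NOS/ZOH DAC the
# first image of a tone at f sits at fs - f and is attenuated only by the
# ZOH's sinc rolloff.
import numpy as np

def zoh_gain_db(f_hz, fs_hz):
    """Zero-order-hold (staircase) magnitude response in dB at frequency f."""
    return 20 * np.log10(abs(np.sinc(f_hz / fs_hz)))  # np.sinc(x) = sin(pi*x)/(pi*x)

for fs in (44_100, 384_000):
    tone, image = 19_000, fs - 19_000
    print(f"fs = {fs/1000:5.1f} kHz: 19 kHz tone {zoh_gain_db(tone, fs):5.1f} dB, "
          f"first image at {image/1000:6.1f} kHz {zoh_gain_db(image, fs):6.1f} dB")
```

At 44.1 kHz the 25.1 kHz image comes out within a few dB of the wanted tone, which is the beating envelope visible in the first screenshot; after upsampling, the nearest image moves out to ~365 kHz, where it is both much lower and trivial to filter.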

 

 

P.S. A DPO scope is a good way to see how stable the waveform is, which also tells quite a bit about the reconstruction that you don't see on an old-school scope.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

I must say I am surprised and dismayed at you, Miska. If that is from the Musette at 44.1, it simply indicates a bad DAC.

 

Any DAC running conversion at 44.1k is just bad.

 

It will look pretty much the same on any non-oversampling DAC where the conversion section actually runs at 44.1k. To correctly reconstruct it (from 16-bit data), you would need an analog filter with a 96 dB attenuation slope in the 20 kHz to 24.1 kHz band, thus rolling off by 96 dB in a band just 4.1 kHz wide. You are not going to find that kind of analog filter in DACs... (and it would have a completely horrible phase response in the audio band).
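(Rough arithmetic: that transition band is only about a quarter of an octave wide, so the required analog slope is enormous:

\[ \frac{96\ \text{dB}}{\log_2(24.1/20)} \approx \frac{96}{0.27} \approx 360\ \text{dB/octave} \]

i.e. on the order of a 60th-order analog filter at ~6 dB/octave per order.)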

 

When I have time, I can give you more pictures of NOS behavior from delta-sigma DACs, where the upsampling digital filters have been disabled.

 

I don't have any images, but early DACs displayed 19 kHz with good waveform shape long ago in the early days of CD; I have seen them do it on my o-scope. If your NOS DAC needs 384 kHz to do a 19 kHz wave, you should forget it as a poor design.

 

You have probably been looking at an oversampling DAC. Already CD players using the TDA1541A chip used the SAA7220 digital oversampling filter to run the converter at 176.4 kHz, for example the Marantz CD-60 I had (from the late 80s). A 19 kHz sine from those is not as clean as the one in my picture at the 384k rate, but it is better than the NOS one.

 

The video uses an inexpensive ADC/DAC from around the year 2002, so probably a sigma-delta chip. If such chips give 44.1 kHz PCM results with a nice clean wave and the NOS can't, then that suggests a problem for the NOS, not some blanket dismissal of results using modern DAC chips that aren't handicapped.

 

All delta-sigma DAC chips are oversampled by a factor of at least 64x, and practically all have digital oversampling filters that upsample the input data to 352.8k (for 44.1/88.2/176.4 inputs) or 384k (for 48/96/192 inputs).

 

Modern DAC chips are not "handicapped" because they do upsampling and don't run the conversion at a 44.1k rate; doing so would be madness, because the quality would be horrible!

 

Earlier in the video he steps 1 kHz at a time from 15 kHz to 20 kHz, and you can see the image Miska posted has no bearing on what is possible with modern DACs (well, calling a 14-year-old low-end DAC modern).

 

Because they are not running the conversion section at a 44.1 kHz sampling rate! Instead, the data is systematically upsampled to a 352.8/384 kHz sampling rate as the very first thing! A 44.1 kHz sampling rate is completely unsuitable for running a high-fidelity D/A conversion process. That's the entire point of my argument.

 

The same goes for A/D converters. A typical ADC today runs delta-sigma modulation at 5.6 MHz (when the output is 44.1k PCM), which is then converted down to 44.1 kHz 24-bit PCM using a chain of digital filters.

 

Then the D/A converter does the inverse: it converts the 44.1 kHz 24-bit PCM input to a 5.6 MHz (or higher, like tens of MHz in the ESS Sabre) delta-sigma modulation bitstream for conversion.

 

My point has been that you could at least leave the PCM at the typical intermediate rate of 352.8 kHz, instead of shuffling the data down to 44.1k and then back up for no good reason (in the modern world).

 

Here is the analog scope image of 18 kHz from that video. He moved the camera, but you can see 19 kHz and 20 kHz are just as nice.

 

[attached image: analog oscilloscope capture of the 18 kHz tone from the video]

 

That's certainly from a heavily oversampled DAC....

 

Actually it looks nicer than your 384 kHz image, though I understand that is just an artefact of the display.

 

Analog scopes have a natural smoothing function built in... ;)

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Your figures prove that the 44.1 kHz sampled signal contains everything needed for accurate reconstruction.

 

Yes..

 

Whether it contains all the necessary information and whether it comes with embedded artifacts from the anti-alias filtering is another matter.

 

That incorporating a digital processing step aids in producing a quality output is completely beside the point.

 

No, it is precisely the point! If you don't unnecessarily shuffle the rate back and forth, you can leave out a large part of that digital processing! Hi-res -> less processing needed. You could completely cut out the "Decimation Filter" block from the ADC chip and the "8X Interpolator" block from the DAC chip; completely unnecessary stuff!

 

And oh yes, with my marketing hat on: by all means keep using RedBook, as it needs a hefty amount of DSP to make a top-quality analog signal out of it! Meaning more need for HQPlayer's upsampling!

 

If all content were recorded in native DSD256, there wouldn't be as much need for HQPlayer upsampling. So whenever I advocate those top-notch hi-res formats, I'm advocating fewer sales for HQPlayer. Well, people could still use it for speaker adjustments, digital room correction and such, but they wouldn't need it so much for upsampling.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

It is very likely that the engineers who designed 8x oversampling into DAC chips decades ago did so for solid engineering reasons, as engineers dislike doing things irrationally (though they are sometimes forced to do irrational things by their MBA overlords).

 

But best practices decades ago may also not be best practices today.

 

It is still just 8x today because of pricing/resource reasons. Top-of-the-line DAC chips sell for around $2.50 apiece, which means heavy cost awareness is needed when designing the DSP engine for those chips. There are other significant factors too: to keep the cost of the final product down, only one clock input is used. And since the chips must run without cooling, current consumption must be limited. Another reason is that since the DSP engine resides on the same piece of silicon as the sensitive analog parts, less than a millimeter away, the amount of noise generated by the DSP engine must be kept low.

 

But these days all that DSP can be done externally, outside the chip, without resource limits: either in player software or on a separate processor chip inside the DAC. That's why I do 512x oversampling digital filters in software these days.
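(A minimal sketch of the idea, using a generic scipy polyphase resampler rather than HQPlayer's actual filters: do the interpolation in software on the host, so the DAC receives data already at its high internal rate.)

```python
# Minimal sketch (generic polyphase resampler, not HQPlayer's actual algorithm)
# of moving the oversampling filter out of the DAC chip and into software:
# upsample 44.1 kHz PCM by 8x on the host so the chip's own interpolator has
# little or nothing left to do.
import numpy as np
from scipy import signal

def upsample_8x(x):
    """8x polyphase upsampling with a Kaiser-windowed FIR (assumed beta)."""
    return signal.resample_poly(x, up=8, down=1, window=("kaiser", 14.0))

# usage: y = upsample_8x(x)  # 44.1 kHz in, 352.8 kHz out, ready for the DAC
```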

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

