
192 kHz vs 48 kHz poll


esldude

192 kHz sampled digital audio will record and reproduce analog musical signals below 20 kHz more accurately than 48 kHz sampled digital audio.

58 members have voted


Recommended Posts

What *wouldn't* be reconstructed is a sudden transient that began *and* ended between two consecutive samples. But that is equivalent to the "frequency of interest" being half the sample rate or higher - in other words, if you have an event that occurs within 1/23,000th of a second and your sample rate is 44.1 kHz, that event won't be captured, for the same reason a 44.1 kHz sample rate is inadequate to reconstruct a 23 kHz waveform.
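A quick numerical sketch of that equivalence (my own illustration, not from the post: a hypothetical ~12 µs Gaussian pulse stands in for the "event"):

```python
import numpy as np

# Hypothetical ~12 us Gaussian pulse, centered midway between two 44.1 kHz samples
sigma = 2e-6
t0 = 10.5 / 44100
pulse = lambda t: np.exp(-0.5 * ((t - t0) / sigma) ** 2)

for fs in (44100, 192000):
    t = np.arange(int(fs * 0.0005)) / fs   # half a millisecond of sample instants
    print(f"{fs:6d} Hz: largest sampled value = {pulse(t).max():.6f}")

# 44.1 kHz sees essentially zero; 192 kHz catches the pulse -- consistent with
# the equivalence above, since a ~12 us event puts most of its energy well
# above the 22.05 kHz Nyquist limit.
```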

 

If you want to see filter transient anomalies:

 

1) Use whatever ADC with a 48 kHz output sampling rate

2) Input a transient that is shorter than the length of the anti-alias filter's impulse response and has frequency content exceeding the 24 kHz bandwidth. For example, a single step from one DC level to another (a 100 ns transition is OK).

3) Inspect the frequency and time domain output data of the ADC

 

The impulse response of the anti-alias filter used in (2) is usually at least ~50 samples long on the output side, which means 50/48000 ≈ 1 millisecond.

 

In (3) you can see that the output reflects the properties of the particular filter. The better the frequency-domain response, the longer the time domain takes to settle (and vice versa). So in the example case with a linear-phase AA filter, the step would become unsettled 25 samples before the step and continue to wobble around for 25 samples after it, while with a minimum-phase filter it would wobble around for 50 samples only after the step. Thus the transient is smeared across a 1 ms period. If you make the filter steeper, the transient smear becomes longer; if you make the filter roll off more slowly, you get more aliased frequency content back into the audio band, or alternatively an early frequency roll-off.
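A rough digital stand-in for steps 1)-3) (a sketch, not the exact procedure: it replaces the analog step with an instantaneous one, and the firwin/minimum_phase pair is just an assumed example of the two filter types):

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

fs, n = 48000, 400
x = np.zeros(n)
x[n // 2:] = 1.0                                   # step from one DC level to another

h_lin = firwin(51, 22000, fs=fs)                   # ~50-tap linear-phase AA filter
h_min = minimum_phase(firwin(101, 22000, fs=fs))   # minimum-phase filter, similar magnitude

# Align both outputs on the step so the before/after behaviour is comparable
y_lin = np.convolve(x, h_lin)[25 : 25 + n]         # compensate the 25-sample group delay
y_min = np.convolve(x, h_min)[:n]                  # minimum phase: negligible delay

np.set_printoptions(precision=3, suppress=True)
sl = slice(n // 2 - 30, n // 2 + 30, 5)            # samples around the step edge
print("linear phase :", y_lin[sl])                 # wobbles both before and after the edge
print("minimum phase:", y_min[sl])                 # flat before, wobbles only after
```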

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Yes, it is cynical. I think many, many audiophiles have this idea that with more dots (higher sample rate) you have a smoother and more accurate waveform. That isn't a benefit of higher sample rates. You only get more bandwidth. Within the stipulation I made of 20 kHz and lower, both are, at least theoretically, equally accurate below 20 kHz. I think this idea that hirez equals smoother, or hirez equals more resolution, is a problem. It is technically incorrect as a way to imagine what goes on and what it gets you. More bit depth actually gets you more resolution.
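That below-20 kHz equivalence is easy to check numerically. A minimal sketch, assuming ideal (sinc) reconstruction via scipy.signal.resample and a made-up three-tone test signal:

```python
import numpy as np
from scipy.signal import resample

fs_lo, fs_hi, dur = 48_000, 192_000, 0.01        # 10 ms, so every tone has whole cycles
t_lo = np.arange(int(fs_lo * dur)) / fs_lo
t_hi = np.arange(int(fs_hi * dur)) / fs_hi

def sig(t):                                      # stand-in "music": tones all below 20 kHz
    return (np.sin(2 * np.pi * 1100 * t)
            + 0.5 * np.sin(2 * np.pi * 7300 * t)
            + 0.2 * np.sin(2 * np.pi * 18900 * t))

x_lo, x_hi = sig(t_lo), sig(t_hi)                # the same signal captured at both rates
x_rec = resample(x_lo, len(x_hi))                # ideal reconstruction of the 48 kHz capture

print("max error vs the 192 kHz capture:", np.max(np.abs(x_rec - x_hi)))
# prints something on the order of 1e-13: the extra dots added no accuracy below Nyquist
```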

 

It actually isn't open to any substantial argument, as the theory, and largely the actual working of the system, says it works this way. However, I wonder how many have the opinion that a higher sample rate equals higher accuracy over the range of human hearing. It really isn't a poll of what people think they do or don't hear at the different rates. Perhaps that should have been stipulated. It is a poll of what people think happens.

 

And if you had the opinion the poll statement was true, then explain why.

 

Now of course theory and reality can be somewhat different. I don't think there is much evidence that the theory and reality of digital audio are at odds other than at the very margins (excepting poorly designed equipment, which is not all that common anymore). If you have frequencies above Nyquist, the filtering and such might vary a bit, and whatever folding back into the audible range there is could cause minor differences, as wgscott and some others have hinted.

 

So why do I care? Because repeatedly, when discussing what might be going on between hirez and normal digital, this is a point of misunderstanding.

 

I think properly done 48 kHz is very, very close to fully transparent. As wgscott mentioned, and as papers by Lavry indicate, you could perhaps benefit from 96 kHz. You get for sure everything any mikes are capturing, and a chance to do the filtering a bit differently, with more space beyond what humans hear. But it should be a pretty small difference. Beyond 96 kHz I don't know what one could be getting. When I ask people, about as often as not I am told simply that more bits is better, and how could I even wonder whether or not it would be better? Well, where does it stop? Some folks have said 192 kHz sounds like the mike feed. I have read of other professionals in mastering who say 384 finally gets close. Of course, what little blind testing has been done fails to indicate the higher rates are audible.

Hi esldude - Nicely written point of view. I don't agree with all of it, but that's irrelevant. Comments like yours above at least allow a good starting point for further discussion if desired.

 

Thanks for following up your original comment.

Founder of Audiophile Style | My Audio Systems

btw, Benoit Mandelbrot introduced fractals with the question of measuring the actual length of the coast of Brittany (western tip of Europe...)

 

Guess the "False" voters would call that outstanding mathematician a moron and suggest he use a ruler

 

Or Jackson Pollock maybe? Really, I don't see anyone calling names here.

That I ask questions? I am more concerned about being stupid than looking like I might be.

In my example, you will never know the 3 points on the wave, because the pitch is unique, just as the very beginning of the wave... and what you speak of that wouldn't be reconstructed is what I am referring to.

 

If the pitch at the beginning occurs quickly enough not to be sampled, by definition its frequency must be above the Nyquist frequency (half the sample rate). No one is saying a 44.1 kHz sample rate allows you to reconstruct a 45 kHz tone.
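A short sketch of what sampling a 45 kHz tone at 44.1 kHz actually yields (illustrative only): the samples are numerically indistinguishable from a 900 Hz alias.

```python
import numpy as np

fs = 44100
n = np.arange(64)                               # 64 consecutive sample instants
x_45k = np.sin(2 * np.pi * 45000 * n / fs)      # a 45 kHz tone, sampled at 44.1 kHz
x_900 = np.sin(2 * np.pi * 900 * n / fs)        # its alias: 45000 - 44100 = 900 Hz
print("max difference:", np.max(np.abs(x_45k - x_900)))   # ~1e-12: the same samples
```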

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

btw, Benoit Mandelbrot introduced fractals with the question of measuring the actual length of the coast of Brittany (western tip of Europe...)

 

Guess the "False" voters would call that outstanding mathematician a moron and suggest he use a ruler

 

No, they would simply point out that you aren't dealing with a self-similarity transformation. Music isn't a fractal.

No, they would simply point out that you aren't dealing with a self-similarity transformation. Music isn't a fractal.

That ain't the point. The point is complexity and how to approach it.

 

There are many things to take note of when reading the Nyquist theorem. Notably, it doesn't care about human hearing but about the frequency range: if the mikes feed a 50 kHz range, you need to sample at 100 kHz.

 

And that's just the static view, whereas music is ever-changing. Common sense applies perfectly: the Nyquist theorem addresses ONE signal, while music is ever-changing (notes, harmonics, transients, reverberations, what have you). Let's say, OK, what you get at 44 k allows you to perfectly reconstruct all the collected data at that time up to 22 k; but it's a different signal than the one you collect whatever fraction of time later. Common sense applies: the more moments you capture, even though 44 k is enough for reconstruction up to 22 k, the closer you are to the breath of music in time.

Do you have one, apart from the insult?

Yes: the point is complexity and how to approach it.

 

There are many things to take note of when reading the Nyquist theorem. Notably, it doesn't care about human hearing but about the frequency range: if the mikes feed a 50 kHz range, you need to sample at 100 kHz.

 

And that's just the static view, whereas music is ever-changing. Common sense applies perfectly: the Nyquist theorem addresses ONE signal, while music is ever-changing (notes, harmonics, transients, reverberations, what have you). Let's say, OK, what you get at 44 k allows you to perfectly reconstruct all the collected data at that time up to 22 k; but it's a different signal than the one you collect whatever fraction of time later. Common sense applies: the more moments you capture, even though 44 k is enough for reconstruction up to 22 k, the closer you are to the breath of music in time.

 

Yes, it makes a difference for transient response, because 48k requires a steep brickwall filter (unless you are fine with 3 kHz of usable bandwidth).

 

You need to sample at least 8x the wanted bandwidth to preserve transients.

 

Describe what kind of transients you are talking about. I sure don't see a limit of 3 kHz usable bandwidth. I also haven't found actual music to be nearly as transient-rich as people imagine it is. And when I push a pair of IMD test tones through at max level, that seems to make it through okay. The transients of that signal are more than you see in music. Now if you are talking impulse and step responses, then some of those are going to be above 20 kHz, and again, I don't see those signals in music.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 


The problem is not frequency content as such; the problem is band limiting and its consequences.

 

This is also related to the discrete Fourier transform. When you make the transform longer (more points), you increase frequency resolution but lose time resolution in equal proportion. Doubling the transform length doubles frequency resolution but halves time resolution, because the transform is now twice as long in time. A recent study showed that hearing exceeds the Fourier transform's time-frequency capabilities (there are advanced mathematical methods that exceed the Fourier transform's time-frequency resolution too).
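The tradeoff falls straight out of the transform parameters; a minimal sketch (the 1 kHz tone and window lengths are arbitrary choices):

```python
import numpy as np

fs = 48000
x = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # one second of a 1 kHz tone

for N in (1024, 2048, 4096):                        # doubling the transform length
    X = np.fft.rfft(x[:N])                          # DFT over an N-sample window
    print(f"N={N:4d}: bin spacing {fs / N:7.2f} Hz,"
          f" window spans {N / fs * 1e3:6.2f} ms")
# Each doubling halves the bin spacing (better frequency resolution) while the
# window it averages over doubles in duration (worse time resolution).
```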

 

The same thing happens with filters: when you make the anti-alias filter steeper (increase its spectral resolution), it becomes longer in time and loses temporal resolution in equal proportion. The better the filter algorithm, the less tradeoff there is between the two, but there is always some.

 

Full spectral and temporal resolution is preserved only when the filter's impulse response fits in time within a half-wave of a 20 kHz sine (25 µs) while providing 20·log10(2^x) dB of attenuation (where x is the number of sample bits) at fs − 20000 Hz.
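The same tradeoff can be measured on actual filters (a sketch using scipy.signal.firwin; the -1 dB and -40 dB markers are arbitrary measurement points I chose, not part of the argument above):

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 48000
for ntaps in (51, 101, 201):                        # roughly doubling the length each time
    h = firwin(ntaps, 20000, fs=fs)                 # lowpass anti-alias prototype
    w, H = freqz(h, worN=16384, fs=fs)
    mag = 20 * np.log10(np.abs(H) + 1e-12)
    f1 = w[np.argmax(mag < -1)]                     # first frequency below -1 dB
    f40 = w[np.argmax(mag < -40)]                   # first frequency below -40 dB
    print(f"{ntaps:3d} taps = {ntaps / fs * 1e3:4.2f} ms long,"
          f" transition ~{f40 - f1:5.0f} Hz wide")
# A longer impulse response (worse in time) buys a narrower transition band
# (better in frequency), and vice versa.
```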

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

The transients of that signal are more than you see in music. Now if you are talking impulse and step responses, then some of those are going to be above 20 kHz, and again, I don't see those signals in music.

 

It goes to at least 100 kHz. Have you not checked this one?

There's life above 20 kilohertz! A survey of musical instrument spectra to 102.4 kHz

 

Especially, reproducing a transient like the initial 60 µs of this

http://www.cco.caltech.edu/~boyk/spectra/11.htm#b

accurately takes quite a lot more temporal resolution than 1 millisecond.

 

You may want to use something like this for recording:

SANKEN MICROPHONE CO .,LTD. | Product [ CO-100K ]

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

The fact that it is debatable at all, and the fact that there is a near 50% split in opinion, leads me to believe what I already do believe: that you will always increase the accuracy of the reproduction with a higher sampling rate, provided perfect circuitry to accomplish the sampling task does not cause distortion. I don't even believe it is possible to achieve perfect reproduction without infinite sampling. If you have to use an algorithm to mathematically reproduce the signal from a "sampling", there will be inaccuracies in the reproduction. Unfortunately this is not a perfect world, and we don't have perfect circuitry.

 

What is the purpose of the debate? Are you of the belief that we cannot continue to improve the technology, from live music to reproduction? Do we stop technology from improving the great gift of hearing the music of the world?

 

Well, I will note once more, you don't seem to understand how it is supposed to work, much less how it does. Just a blind idea that more is better. And you seem to think having math involved is some kind of black eye to a process. I've got news for you: all your analog processes involve math too, you just don't seem to see it.

 

My idea isn't about whether something can be improved. It is about whether or not the approach being used would even be an improvement. What is keeping us from perfect reproduction of live music in the home isn't a matter of higher sample rates or lower distortion or any of this high-end stuff. That is, at best, gilding the current lily. What is needed to advance is a different lily altogether.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

The answer to the question, as stated, is yes (True): more accurate at 192. Plain physics, folks. Bringing in other factors, scenarios, "perceived" audible differences, etc. doesn't change it.

 

Care to explain why this is? It isn't obvious at all why that would be.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

In my example, you will never know the 3 points on the wave, because the pitch is unique, just as the very beginning of the wave... and what you speak of that wouldn't be reconstructed is what I am referring to.

 

Jud was quite correct.

 

If your signal started and stopped between samples, it was something at a higher frequency than 20 kHz. So it doesn't apply.

 

And what he tried to show is that if your waveform started between samples and continued, it will affect the sample value of the very next sample. Not easy to imagine perhaps, but it will. Not only that, but the next sample value will be different if it started at the halfway point between samples vs a quarter of the way, or three-quarters, or one-eighth. Further, as only one set of points fits any particular wave, that waveform, starting in between samples, will also be correctly reconstructed just as it happened when the DA conversion is done. No gaps, no dropouts, no losing the beginning of the signal (yes, even a beginning between samples).
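A small sketch of that point (assuming a 5 kHz tone switched on at various fractions of a sample period; the hard gating itself isn't band-limited, so this only illustrates that the sub-sample onset timing shows up in the very next sample values):

```python
import numpy as np

fs = 44100
n = np.arange(6)                                    # the first few sample instants
for frac in (0.0, 0.125, 0.25, 0.5, 0.75):          # onset between samples 0 and 1
    t0 = frac / fs
    t = n / fs
    x = np.where(t >= t0, np.sin(2 * np.pi * 5000 * (t - t0)), 0.0)
    print(f"onset at {frac:5.3f} of a sample: next samples {np.round(x[1:5], 4)}")
# Every sub-sample onset position yields a distinct set of sample values, which
# is exactly the information the reconstruction filter uses to put the start back.
```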

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

If you want to see filter transient anomalies:

 

1) Use whatever ADC with a 48 kHz output sampling rate

2) Input a transient that is shorter than the length of the anti-alias filter's impulse response and has frequency content exceeding the 24 kHz bandwidth. For example, a single step from one DC level to another (a 100 ns transition is OK).

3) Inspect the frequency and time domain output data of the ADC

 

The impulse response of the anti-alias filter used in (2) is usually at least ~50 samples long on the output side, which means 50/48000 ≈ 1 millisecond.

 

In (3) you can see that the output reflects the properties of the particular filter. The better the frequency-domain response, the longer the time domain takes to settle (and vice versa). So in the example case with a linear-phase AA filter, the step would become unsettled 25 samples before the step and continue to wobble around for 25 samples after it, while with a minimum-phase filter it would wobble around for 50 samples only after the step. Thus the transient is smeared across a 1 ms period. If you make the filter steeper, the transient smear becomes longer; if you make the filter roll off more slowly, you get more aliased frequency content back into the audio band, or alternatively an early frequency roll-off.

 

Okay Miska, I get this. I don't think I know as much as you about these things, but I have looked at such. But you are describing something above the Nyquist frequency, and something not found in musical signals.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

It goes to at least 100 kHz. Have you not checked this one?

There's life above 20 kilohertz! A survey of musical instrument spectra to 102.4 kHz

 

Especially, reproducing a transient like the initial 60 µs of this

Graphs 11a, 11b, and 11c

accurately takes quite a lot more temporal resolution than 1 millisecond.

 

You may want to use something like this for recording:

SANKEN MICROPHONE CO .,LTD. | Product [ CO-100K ]

 

Well, Miska, we are talking past each other, I think. I didn't say there was nothing above 20 kHz, nor that mikes don't record it. I do say we can't hear it, and that makes those matters not so important.

 

Now in regards to music: in hirez wide-bandwidth material, even in recordings with ultrasonic info, it has always been low in level. I am sure somewhere there is music with it high in level, but it appears rather uncommon. I haven't seen those super-steep transients. If there are some, they are not an ongoing, common signal that would explain general sound quality differences.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 


 

My idea isn't about whether something can be improved. It is about whether or not the approach being used would even be an improvement.

 

The question wasn't about an "improvement". Improvement is a relative term and will always be debatable. The question was about accuracy, and common sense dictates the more the sampling, the more the accuracy that is obtainable, provided the circuitry is perfected such that an increase in sampling doesn't distort the result.

Yes, it is cynical. I think many, many audiophiles have this idea that with more dots (higher sample rate) you have a smoother and more accurate waveform.

 

An algorithm can make a "smooth" waveform, so smoothness is irrelevant.

I think many people have this idea that a higher sampling rate can't make a more accurate reproduction.... kind of silly when you really think about it....

When you make the [Fourier] transform longer (more points), you increase frequency resolution but lose time resolution in equal proportion. Doubling the transform length doubles frequency resolution but halves time resolution, because the transform is now twice as long in time. A recent study showed that hearing exceeds the Fourier transform's time-frequency capabilities (there are advanced mathematical methods that exceed the Fourier transform's time-frequency resolution too).

 

The same thing happens with filters: when you make the anti-alias filter steeper (increase its spectral resolution), it becomes longer in time and loses temporal resolution in equal proportion. The better the filter algorithm, the less tradeoff there is between the two, but there is always some.

 

Full spectral and temporal resolution is preserved only when the filter's impulse response fits in time within a half-wave of a 20 kHz sine (25 µs) while providing 20·log10(2^x) dB of attenuation (where x is the number of sample bits) at fs − 20000 Hz.

 

If nothing else, Miska seems to be describing one very concrete reason why a higher sample rate would allow for construction of a filter (in a DAC) that allows for better frequency-response accuracy AND diminishes the time smearing that our ears do seem to be far more sensitive to than Fourier imagined.

Synology NAS>i7-6700/32GB/NVIDIA QUADRO P4000 Win10>Qobuz+Tidal>Roon>HQPlayer>DSD512> Fiber Switch>Ultrarendu (NAA)>Holo Audio May KTE DAC> Bryston SP3 pre>Levinson No. 432 amps>Magnepan (MG20.1x2, CCR and MMC2x6)

The question wasn't about an "improvement". Improvement is a relative term and will always be debatable. The question was about accuracy, and common sense dictates the more the sampling, the more the accuracy that is obtainable, provided the circuitry is perfected such that an increase in sampling doesn't distort the result.

 

Common sense says more is better; understanding how this works will show that the common-sense idea of more is better in fact won't help you in this situation.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

... And you seem to think having math involved is some kind of black eye to a process.

 

I love math, and would never suggest any such thing. What I said was, when you need a mathematical "prediction algorithm" to create "predictable samples" based on history instead of actual samples, your result will be less accurate.

I never implied anything was wrong with math.

Common sense says more is better; understanding how this works will show that the common-sense idea of more is better in fact won't help you in this situation.

 

Since you know how it works, please explain to me: if a rare bird noise is made between T50.1 and T50.9, and samples are taken only at T50 and T51, how is that sound reproduced?

