
The Optimal Sample Rate for Quality Audio



Hi Jud,

 

...There will nearly always be some necessity for converting rates - as you mentioned, for the sake of EQ and other processing. But to the extent it can reasonably be avoided, and original data in the recording preserved, why not?...

 

I believe we agree on this.

I was not arguing *for* SRC, just mentioning situations where I believe it provides a benefit.

 

In terms of converting to a higher sample rate for playback, I prefer not to simply because I'd rather listen to files at their native rate. If I *was* going to use SRC to attain a higher than native sample rate, I'd have to dedicate quite a bit of time to converting the music in my library using an off-line process. (Even the best SRC I've heard is, in my view, not at its best when performing the conversion in real time, i.e. during playback.) Since there are too many other things I prefer to do with my time, I listen to music in my library at the rate at which it was delivered.

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

Link to comment

Hi David,

 

Barry, a question for you -- just something I'm curious about: When you upsample material for mastering, do you use "power-of-two," or "integer," resampling (i.e., 44.1 goes to 88.2 or 176.4), or do you just resample everything to 192?

 

--David

 

The rate I convert to depends on the individual program I'm working with. But to answer your question, I don't aim for integer conversion, as my experience has been that this matters only for SRC algorithms that are not very capable. In other words, the algorithms that perform "better" at integer conversion than they do at non-integer conversion will, to my ears, not do a particularly good job even at integer conversion; they merely perform less *badly*.

 

With the most transparent of the algorithms I've tried (currently iZotope's 64-bit SRC by a good long country mile), it just doesn't matter. To my ears, their SRC will create results that are *much* more faithful to the unconverted original, whether integer or non-integer. It seems to me, this algorithm can handle the math without issue, i.e. it "doesn't care" what sort of conversion I ask it to do.
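
For illustration only - this is not iZotope's algorithm, which is proprietary - here is a minimal Python sketch using SciPy's polyphase resampler. It shows that integer and non-integer conversions differ only in the rational up/down ratio handed to the filter; a capable resampler treats both the same way.

```python
# Minimal sketch: integer vs. non-integer sample rate conversion ratios.
# scipy's resample_poly stands in for a commercial SRC such as iZotope's.
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

def convert(signal, src_rate, dst_rate):
    """Resample `signal` from src_rate to dst_rate with a polyphase filter."""
    ratio = Fraction(dst_rate, src_rate)   # 88200/44100 -> 2/1, 192000/44100 -> 640/147
    return resample_poly(signal, ratio.numerator, ratio.denominator)

t = np.arange(44_100) / 44_100.0
tone = np.sin(2 * np.pi * 1000 * t)            # one second of a 1 kHz tone at 44.1 kHz

integer_up = convert(tone, 44_100, 88_200)     # "power-of-two" / integer conversion
non_integer = convert(tone, 44_100, 192_000)   # non-integer conversion (ratio 640/147)
print(len(integer_up), len(non_integer))       # 88200, 192000
```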

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

Link to comment
That wasn't actually what I was trying to say.

 

In that case my apologies for misunderstanding what you were saying.

 

I think the interpolation facilities of the best SRC software are good enough that while the additional samples are not actual recorded samples, they are not quite "empty air" either.

 

I have to agree that the best non-realtime sample rate conversion algorithms are probably better than the on-the-fly ones used in the current crop of DACs. The engineer in me still thinks "early upsampling" is solving the wrong problem. With the processing power of the chips used increasing constantly, the non-realtime algorithms used on a PC today will be implemented in real time in a DAC in a couple of years. Do we want to increase the cost of our equipment significantly to overcome that temporary performance gap? Some people will say "yes", but I would say that money is probably better spent elsewhere in the chain.

 

What I was referring to is that it would be quite nice to have material recorded in 176.4/192 or even 352.8/384, or DSD, that would require only 2x (in the case of 176.4/192) or no oversampling at all in either the computer or the DAC. There is at least some non-negligible amount of material available in 176.4/192 and DSD, though one could wish for both more and cheaper.

 

And it will be a cost issue - and again I am questioning (but that is just my personal opinion) whether the potential improvement in going beyond 96/24 is justified in terms of complexity and cost, compared to concentrating the effort on areas where the returns are much clearer (speakers and room). For some people it might be, for others not.

Link to comment

Curious if anyone has heard the Lavry DA11 DAC. Looks like it has some interesting features and is reasonably priced. I recall mentioning to my brother all the seemingly endless possible DAC solutions and approaches that are available these days. He mentioned friends who were using Lavry stuff in their pro audio applications. I may have seen a poster in another thread who mentioned that he preferred the older Lavry DA10 DAC to the newer DA11.

JohnMH

Link to comment
Hi Jud,

 

I believe we agree on this.

I was not arguing *for* SRC, just mentioning situations where I believe it provides a benefit.

 

Yep, understood.

 

In terms of converting to a higher sample rate for playback, I prefer not to simply because I'd rather listen to files at their native rate. If I *was* going to use SRC to attain a higher than native sample rate, I'd have to dedicate quite a bit of time to converting the music in my library using an off-line process. (Even the best SRC I've heard is, in my view, not at its best when performing the conversion in real time, i.e. during playback.) Since there are too many other things I prefer to do with my time, I listen to music in my library at the rate at which it was delivered.

 

Don't know if the iZotope packaged with Audirvana+ can do offline SRC. While it may not be at its best doing on-the-fly SRC (though I wonder what the effect is of Audirvana putting the file in memory before playback - does this mean the SRC is effectively accomplished offline?), I like it better than without, so that's what I choose in my particular setup.

 

Julf -

 

With the processing power of the chips used increasing constantly, the non-realtime algorithms used on a PC today will be implemented in real time in a DAC in a couple of years. Do we want to increase the cost of our equipment significantly to overcome that temporary performance gap?

 

In the meantime I'll make do with iZotope bundled with Audirvana+. Since I'd have bought Audirvana+ anyway, iZotope's very good SRC capabilities are effectively free to me. For the future, at any given time it will simply depend on whether pre-DAC or in-DAC upsampling sounds better to me with the particular software and DAC I have.

 

And it will be a cost issue - and again I am questioning (but that is just my personal opinion) whether the potential improvement in going beyond 96/24 is justified in terms of complexity and cost, compared to concentrating the effort on areas where the returns are much clearer (speakers and room). For some people it might be, for others not.

 

As a practical matter, the effective cost to me is whatever I want to spend on hi-res files. That expense comes in relatively small increments, versus speaker or room changes that would take considerably greater one-time outlays. (To say nothing of the fact that my wife and I are happy with the speakers and the room as they are.) I made sure I preferred the DAC I bought last year (first new one in about 20 years) to my old one with RedBook files, since that is what I have most of. Thus the additional hi-res capability was a bonus.

 

Others' calculations may be very different - whether to replace a not very old DAC with a new one considerably more expensive than mine, for example, versus spending the same on speakers. Even in that situation, I might go for the DAC. There is something in my nature that prefers giving the best possible signal to the speakers I have, versus changing speakers and giving them a signal that isn't quite as good. Of course if I hadn't been extremely well contented with my current speakers for a couple of decades, I might feel differently.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
Don't know if the iZotope packaged with Audirvana+ can do offline SRC.

 

I'm pretty sure the current version of A+ cannot do offline SRC. If it does, it's well hidden. I'll point out that Fidelia Advanced can do offline resampling using iZotope, with a fair amount of control over a number of parameters. If one wanted to fool around with offline iZotope resampling, Fidelia Advanced may be the cheapest way to stick one's toe in the water.

 

In the meantime I'll make do with iZotope bundled with Audirvana+. Since I'd have bought Audirvana+ anyway, iZotope's very good SRC capabilities are effectively free to me. For the future, at any given time it will simply depend on whether pre-DAC or in-DAC upsampling sounds better to me with the particular software and DAC I have.

 

I have to agree with this approach, although on my current system, I still prefer the overall SQ I get from Pure Music with its on-the-fly upsampling. (I wouldn't bet against A+ moving into the lead with v1.4, though.)

 

--David

Listening Room: Mac mini (Roon Core) > iMac (HQP) > exaSound PlayPoint (as NAA) > exaSound e32 > W4S STP-SE > Benchmark AHB2 > Wilson Sophia Series 2 (Details)

Office: Mac Pro >  AudioQuest DragonFly Red > JBL LSR305

Mobile: iPhone 6S > AudioQuest DragonFly Black > JH Audio JH5

Link to comment
So? The ESS SABRE³² Reference ES9018 chip can upsample 24-bit 192 kHz material to no less than 1536 kHz, even (...and it uses a 32-bit internal data path to go with that).

 

Is that something special? ;) I already support the same with 64-bit floating point internal data path and 32-bit integer output. Or alternatively up to 24.576 MHz 1-bit Delta-Sigma.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
A very good point, Chris. Red book only needs 1.4 Mbit/s, while 384/24 requires 18.5 Mbit/s - pretty serious transmission speeds.

 

Quite pathetic speed, goes just fine even over WLAN transmission link. Or easily at eight channels over gigabit ethernet.

 

Another issue is disk space - a red book CD is 0.6 GB, the same album in 384/24 is 8 GB. Yes, disk capacities are constantly increasing, and prices are decreasing, but still... Again, perhaps justifiable if those extra bits actually contain real information, but if the music is just upsampled, it is just fluff - better do the upsampling at the DAC instead of wasting bandwidth and disk space.

 

Why would it ever go to disk? I'm performing upsampling on the fly during playback. It never goes to disk, and I can change and update the upsampler at any time. If it's built into the DAC, it's mostly carved in stone. And in DACs there are typically performance limitations due to heat, power consumption, etc...
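
As a sanity check on the figures quoted above, the bitrates and file sizes follow directly from rate x word length x channels. A quick back-of-the-envelope calculation in Python (raw stereo PCM, no container overhead assumed):

```python
# Raw PCM bitrate and per-hour storage for Red Book vs. 384/24 stereo.
def pcm_bitrate_mbps(sample_rate_hz, bits, channels=2):
    """Raw PCM bitrate in Mbit/s."""
    return sample_rate_hz * bits * channels / 1e6

redbook = pcm_bitrate_mbps(44_100, 16)    # ~1.41 Mbit/s
hires = pcm_bitrate_mbps(384_000, 24)     # ~18.43 Mbit/s

for name, mbps in (("Red Book", redbook), ("384/24", hires)):
    gb_per_hour = mbps * 1e6 * 3600 / 8 / 1e9     # Mbit/s -> bytes per hour -> GB
    print(f"{name}: {mbps:.2f} Mbit/s, ~{gb_per_hour:.1f} GB per hour")
# Red Book: 1.41 Mbit/s, ~0.6 GB per hour
# 384/24:   18.43 Mbit/s, ~8.3 GB per hour
```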

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
Hi Lavry Tech - While this paper may be interesting and very valuable from an engineering standpoint, your surrounding statements really hurt your credibility. If you post here on CA in an effort to educate there is no need to tout Dan as "One of the world’s top converter designers..." or to begin your post with "Interested in the facts?"

 

I can find several engineers and AES Fellows who contradict much of what Dan says. My point is there's not one set of facts.

 

I would appreciate something to support your assertion that my “surrounding statements really hurt” my credibility.

 

1.) “Interested in the facts?” is not a statement; it is a query with the specific goal of generating interest in a rather “dry subject” that has important implications for anyone serious about digital audio. It is relevant to the subject of the paper because the vast majority of “rebuttals” to Dan Lavry’s assertion that there is an optimal sample rate for high quality audio are based on opinion or subjective “test” results. We are not afraid of facts, and we would be interested in hearing from the “AES Fellows who contradict much of what Dan says” in their own words. This is not a “new subject,” and during the years that have passed since the original Sampling Theory paper was published, no one has yet come forward with credible scientific evidence to the contrary.

 

2.) Regarding- “If you post here on CA in an effort to educate there is no need to tout Dan as "One of the world’s top converter designers..." or to begin your post with "Interested in the facts?"

One cannot really “educate” anyone else; one can only show them the way and hope they can educate themselves. I find it quite surprising that anyone associated with an online forum would take the perspective that people who are not familiar with a very narrow field of electronics design would also not be interested in this subject. For example, I typed “Optimal sample rate” into Google, and Computer Audiophile was third on the list of results, which is something anyone, anywhere in the world can do.

 

Personally, I believe that despite the fact that Dan Lavry is well known and respected in the professional audio industry, there are millions of people world-wide who are interested in the subject and are not aware of who Dan Lavry is, or of why his fact-based argument might be more credible than the opinions of people who either lack anything even close to the depth of his understanding or have commercial interests in promoting lower quality audio as “better.”

 

In a world where nothing less than “extreme” even registers with so many who are overwhelmed by the amount of information available to them (useful and otherwise), I felt that a mildly provocative subtitle would help in the effort to bring attention to the subject.

 

Here is what Dan Lavry had to say:

“The industry is exposed to a well-financed campaign by large manufacturers trying to sell the false notion that faster sampling is better. There is a lot of advertising of higher sample rate conversion gear, aimed at benefiting the makers of such gear. A smaller converter manufacturer has a choice. One can join the high sample rate crowd (making high sample rate converters) while riding the advertising hype that is well financed by larger companies. The alternative is to stay true to quality audio.

 

Lavry Engineering stands for quality audio. So we do what we can to steer the industry in the right direction in a manner that is transparent and does not benefit only our interests.

 

A few years back, I resisted the 192kHz sampling hype. That is when I wrote the paper “Sampling Theory” and refused to make higher sample rate gear. The hype died down and 44.1-96 kHz became mainstream again in professional recording and mastering studios. A few years passed by and here we are again, this time with the pushing of 384kHz and even 768kHz. Again there is no credible engineering reason for it, and no supporting objective listening test results.

 

We are trying to do our best to steer audio in the right direction. I am sorry to see that you seem to be focused on the paper introduction instead of the paper itself. I agree that the introduction was aimed towards getting people interested in reading the paper. I think that the Lavrytech introduction was a drop in the ocean compared to the well subsidized advertising hype for higher and higher sample rates for audio. I hope that people would concentrate more on the issue (the paper content) and less on the packaging (the announcement).”

Link to comment
Hi Jud,

 

Why use SRC? Well for me, since I record at 192k, I need it to create 96k and CD versions.

I use it in mastering too. Even when a mix comes in at 44.1k, one of the first things I'll do is create copies at a higher sample rate. The reason for this is that when applying EQ or other processing, I find the results sound better at higher rates. Further, if done at higher rates and the results later converted to 44.1 with a high quality algorithm (such as iZotope's), *some* of the benefits of the higher rates are preserved. In other words, I've found it creates a better sounding 44.1 version than if the SRC was eliminated and all mastering done at 44.1.

 

I believe SRC means reclocking data, and if you have a very stable master clock, it is in effect a de-jitter. Also, when one goes from 16 bit to 24 bit, this tends to lessen or redistribute quantization error. These should have audible effects.

 

Those of us who were into sampling from the stone age (remember the 8-bit Ensoniq Mirage or original Emu Emax?) can attest to what quantization error does to sound quality. Going from 8 to 12 then to 16 bit sampling tremendously improved sound quality. So you can imagine what going from 16 to 24 bits does.
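
As a rough illustration of the point about quantization error - using a synthetic test tone, not any particular sampler - quantizing a full-scale sine to different word lengths and measuring the resulting SNR tracks the familiar 6.02 x N + 1.76 dB rule:

```python
# Quantize a full-scale sine to N bits (no dither) and measure the SNR.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 997 * t)     # one second of a 997 Hz full-scale sine

for bits in (8, 12, 16, 24):
    scale = 2 ** (bits - 1) - 1
    q = np.round(x * scale) / scale                 # uniform quantization
    err = x - q
    snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits:2d}-bit: ~{snr_db:5.1f} dB SNR (theory ~{6.02 * bits + 1.76:5.1f} dB)")
```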

 

Regards,

JR

Oppo UDP-205/Topping D90 MQA/eBay HDMI->I2S/Gallo Reference 3.5/Hsu Research VTF-3HO/APB Pro Rack House/LEA C352 amp/laser printer 14AWG power cords/good but cheap pro audio XLR cables.

Link to comment
Quite pathetic speed, goes just fine even over WLAN transmission link. Or easily at eight channels over gigabit ethernet.

 

Not all of us have gigabit ethernet to our homes yet - somewhat ironically, as I spent a lot of time 10 years ago preaching the benefits of gigabit ethernet as a delivery medium instead of SONET/SDH. :)

 

Why would it ever go to disk? I'm performing upsampling on the fly during playback.

Because that is what a lot of the discussion was about - pre-upsampling (or better, recording in high resolution in the first place) versus upsampling at playback time. Architecturally what you are doing is moving the processing from the DAC to the main computer, but it is really just a different way to do the DAC processing - a slightly different thing from what Jud was discussing.

Link to comment
Those of us who were into sampling from the stone age (remember the 8-bit Ensoniq Mirage or original Emu Emax?) can attest to what quantization error does to sound quality. Going from 8 to 12 then to 16 bit sampling tremendously improved sound quality.

 

Oh yes :). I remember how, after having played a bit with the 8-bit Emulator, I got my hands on the 12-bit Yamaha TX16W - such an improvement in sound quality (once you had loaded the OS from floppy disks).

 

Speaking of 8-bit sampling, my favourite performance is still Peter Langston's version of Some Velvet Morning ("by Eedie & Eddie And The Reggaebots")

 

(done on the ancient DECtalk speech synthesizer (made famous by Stephen Hawking), an Ensoniq Mirage, a Casio CZ-101, and a set of classic Yamaha gear (DX7, TX816 and RX11))

Link to comment

Hi JR,

 

I believe SRC means reclocking data, and if you have a very stable master clock, it is in effect a de-jitter. Also, when one goes from 16 bit to 24 bit, this tends to lessen or redistribute quantization error. These should have audible effects...

 

While I agree with regard to SRC and jitter, I think it goes deeper than that, inasmuch as it could be taken to indicate that SRC necessarily makes things sound better by reclocking the data. In my experience, most SRC algorithms, despite the reclocking, tend to brighten and harden timbre, which I deem a distinct negative.

 

Additionally, there is at least one brand of converter that automatically reclocks any input to an arbitrary rate (which the designers feel is optimal). To my ears, the results with this are always brightened and hardened. So again, the improved jitter spec comes at a sonic price (which to some, will be a net loss in sound quality rather than the gain implied by reduced jitter).

 

As to going from 16-bit to 24-bit, I agree 100%. In an earlier post, I mentioned using SRC when mastering source material that comes in at less than high resolution, even when the target is a CD. Before applying the SRC, the very first step is to copy the source files to create versions with longer word lengths (usually 24-bit but sometimes 32-bit float, depending on the mastering application I'm using; there are currently four in the toolbox - each does something the others don't).

 

With the longer word length files, there is more room for any processing that may be applied in mastering and as with the higher sample rates, the end result is better sounding than can be created otherwise. For projects destined for CD, once all mastering processes are complete, the penultimate step is SRC down to 44.1 kHz and last of all, dither/noise shaping is applied to reduce the word length to 16-bits.
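
A minimal sketch of that ordering - word length up first, process at a higher rate, SRC back down, dither last. SciPy's resampler, a simple low-pass as a placeholder for real mastering processing, and plain TPDF dither stand in for the actual tools; this shows only the shape of the pipeline, not the specific software described above.

```python
# Pipeline shape only: promote word length, upsample, process, downsample, dither.
import numpy as np
from scipy.signal import butter, resample_poly, sosfilt

def master_for_cd(x_int16, src_rate=44_100, work_rate=176_400):
    x = x_int16.astype(np.float64) / 32768.0       # 1) longer word length (float here)
    factor = work_rate // src_rate                  # 176.4 kHz is 4x 44.1 kHz
    x = resample_poly(x, factor, 1)                 # 2) SRC up for processing
    sos = butter(2, 20_000, btype="low", fs=work_rate, output="sos")
    x = sosfilt(sos, x)                             # 3) placeholder for EQ/processing
    x = resample_poly(x, 1, factor)                 # 4) SRC back down to 44.1 kHz
    tpdf = (np.random.rand(len(x)) - np.random.rand(len(x))) / 32768.0   # ~1 LSB TPDF dither
    y = np.round((x + tpdf) * 32767)                # 5) dither, then reduce to 16 bits
    return np.clip(y, -32768, 32767).astype(np.int16)
```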

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

Link to comment

http://test.beperkdestraling.org/Studies%20en%20Rapporten/Tinnitus/Auditory%20Response%20to%20Pulsed%20radiofrequency%20energy.pdf

 

If you get the sample rates high enough, then it could cause clicks and buzzes in your ears. Admittedly the intensity will have to be pretty high as well as the rate. Pretty interesting stuff here anyway. Intense pulses of 2.5 MHz to 10 GHz will cause sound to be heard.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment
If you get the sample rates high enough, then it could cause clicks and buzzes in your ears.

 

Absolutely. I have several friends who work with military radars - and they claim it is not that unusual to hear a click whenever the radar points your way. The scary part is that, according to the paper you linked to, the cause is actually thermal expansion in your ear bones because of the microwave radiation - and I remember reading about that effect back in the days of Wireless World.

Link to comment
Is that something special? ;)

Of course it isn't. That was actually my point: even my $1K DAC (which I consider relatively cheap) can do it. :)

I already support the same with 64-bit floating point internal data path and 32-bit integer output. Or alternatively up to 24.576 MHz 1-bit Delta-Sigma.

Yeah, but IMO that's overkill if your DAC is connected directly to your power amp, with no EQ, preamp, or analog attenuation. The theoretical 144 dB of dynamic range you'll get with just 24-bit integer output already provides sufficient headroom due to thermal noise kicking in at around -120 dB, and 32-bit float internal data path ought to be just as good as 64-bit float internal data path for just upsampling 24-bit 192 kHz material. Or am I wrong?

If you had the memory of a goldfish, maybe it would work.
Link to comment
The theoretical 144 dB of dynamic range you'll get with just 24-bit integer output already provides sufficient headroom due to thermal noise kicking in at around -120 dB, and 32-bit float internal data path ought to be just as good as 64-bit float internal data path for just upsampling 24-bit 192 kHz material. Or am I wrong?

 

Of course. More is always better. Just as with cars - 6 cylinders is better than 4. 8 cylinders is better than 6. 12 cylinders is better than 8. 24 cylinders is better than 12. 32 cylinders...

Link to comment

Hi spdif-usb,

 

What I find interesting is that the best sounding mastering software I have (among several different packages and apps) and the best sounding recording/mixing apps I've heard tend to do their internal math at 48-bits, 64-bits and 80-bits.

 

I'm not suggesting my DAC needs this; just an observation about the software that happens, to my ears, to be the most transparent at processing audio. (My DAC is spec'd at 24 bits, no more, and is to date the most transparent one I've yet experienced, by a good country mile. Of course, while I've been fortunate to hear a great many, I have not heard them all.)

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

Link to comment
What I find interesting is that the best sounding mastering software I have (among several different packages and apps) and the best sounding recording/mixing apps I've heard tend to do their internal math at 48-bits, 64-bits and 80-bits.

 

Absolutely - as I think I might have said before, any intermediate processing definitely needs more headroom and precision, so you want extra bits for that. But once the processing is done, and the results normalized, the subsequent DAC doesn't really benefit from having any more bits than the source material.
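
A toy example of that headroom point, with made-up numbers: a +6 dB boost clips when applied directly to 16-bit integers, but survives when the intermediate math is done in floating point and the result is brought back into range before re-quantizing - and the final output still needs no more than 16 bits.

```python
# Headroom during processing: 16-bit integer math clips, float math does not.
import numpy as np

x = (0.9 * 32767 * np.sin(2 * np.pi * np.arange(1000) / 100)).astype(np.int16)

# Integer path: the +6 dB boost hits the 16-bit ceiling and clips.
boosted_int = np.clip(x.astype(np.int32) * 2, -32768, 32767).astype(np.int16)

# Float path: boost with headroom, normalize, then convert back to 16 bits.
xf = x.astype(np.float64) / 32768.0
boosted = xf * 2.0                                   # peaks near 1.8, fine in float
out = np.round(boosted / np.max(np.abs(boosted)) * 32767).astype(np.int16)

print(np.sum(boosted_int == 32767), np.sum(out == 32767))   # ~310 clipped samples vs. 10
```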

Link to comment

Hi Julf,

 

Absolutely - as I think I might have said before, any intermediate processing definitely needs more headroom and precision, so you want extra bits for that. But once the processing is done, and the results normalized, the subsequent DAC doesn't really benefit from having any more bits than the source material.

 

For my ears and based on my experience so far, I agree.

(That is why I wrote the second paragraph in post #95).

 

On the other hand and perhaps merely coincidental (I don't know), with my 24-bit DAC, I'm hearing the best playback of 16-bit material I've yet heard. (That said, I attribute this to the whole design, not simply the wordlength.)

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

Link to comment
Of course. More is always better. Just as with cars - 6 cylinders is better than 4. 8 cylinders is better than 6. 12 cylinders is better than 8. 24 cylinders is better than 12. 32 cylinders...

I was talking about the internal upsampling from 192 kHz to 1536 kHz. As for car analogies, I think the best car should have no cylinders at all... just a single big jet turbine will nicely fit the job. =P

If you had the memory of a goldfish, maybe it would work.
Link to comment
Hi Julf,

 

 

 

For my ears and based on my experience so far, I agree.

(That is why I wrote the second paragraph in post #95).

 

On the other hand and perhaps merely coincidental (I don't know), with my 24-bit DAC, I'm hearing the best playback of 16-bit material I've yet heard. (That said, I attribute this to the whole design, not simply the wordlength.)

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

 

A question, Barry. You have said that when you hear 176 or 192 it crosses a threshold into sounding like a live feed. You may have had no reason to do so, but have you listened with some of the relatively affordable consumer-level DACs playing at those high sample rates? And if so, do they get pretty close or fall far short, in your opinion? By affordable I don't mean like $100 units, but maybe units of $1000 and less.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment

Hi elsdude,

 

A question, Barry. You have said that when you hear 176 or 192 it crosses a threshold into sounding like a live feed. You may have had no reason to do so, but have you listened with some of the relatively affordable consumer-level DACs playing at those high sample rates? And if so, do they get pretty close or fall far short, in your opinion? By affordable I don't mean like $100 units, but maybe units of $1000 and less.

 

To be clear, with regard to 4x rates, I said I find the threshold crossed *with the best converters I've heard*. I've also said that many converters I've heard which have a "192" in their spec sheets actually perform *worse* at 4x rates (to my ears) than they do at 2x rates.

 

In other words, I find marvelous *potential* in 4x rates but only *some* gear able to exhibit this. I find that gear magical because never before have I been unable to distinguish the recorded sound from the mic feed - not with any analog device and not with any digital device operating at less than 4x rates (even the ones that drop my jaw when run at 4x rates).

 

I know of no converter in the $1000 range that does 4x rates - and I'm not talking about what's on the spec sheet, but about what comes out of my speakers. Even at $2000, the best I've heard isn't even spec'd for 4x rates - but it does a wonderful job at 96k.

 

Best regards,

Barry

Soundkeeper Recordings

Barry Diament Audio

Link to comment
