
Hi-Res - Does it matter? Blind Test by Mark Waldrep


Ajax

Recommended Posts

  • 2 weeks later...
14 minutes ago, Rexp said:

Yeah, the results won't be out for a couple of months, so we can only speak in general terms. To me there is a clear difference between tracks; it would be great if one of the members with a scope could identify it through analysis. I read somewhere that group delay is the thing to measure. @pkane2001 might know?


You can certainly try using DeltaWave to determine differences between the files. It’ll analyze timing/phase, amplitude and frequency differences. Group delay by itself isn’t going to make an audible difference — it’s just a delay. A variable group delay can certainly be audible and DeltaWave will show if it’s present.
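As a rough illustration of why constant group delay is benign: a linear-phase FIR delays every frequency by the same number of samples, so it amounts to a pure time shift. A minimal scipy sketch (not DeltaWave's actual code, which isn't public; the filter and cutoff are made up for illustration):

```python
import numpy as np
from scipy.signal import firwin, group_delay

# A 101-tap linear-phase low-pass FIR (hypothetical cutoff of
# 0.25 x Nyquist, chosen only for illustration).
taps = firwin(101, 0.25)

# Group delay of a linear-phase FIR is constant: (N-1)/2 = 50 samples.
# Every frequency is delayed equally -- just a time shift.
w, gd = group_delay((taps, [1.0]), w=512)
```

A *variable* group delay would show up here as `gd` changing with frequency, which is the case that can become audible.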

Link to comment
  • 2 months later...
18 minutes ago, John Dyson said:

Well, I am arguing for floating point for internal calculations. I hope that most software written anew nowadays is not 32-bit signed integer like SoX is. I don't really know how other people do it -- but keep things in FP when you can. As I have mentioned, it appears that some of my master tape rips as used by pros are FP. (I don't know about all of them.)

 

When you talk about 'whatever has to be rounded' -- that is the problem. A lot of math is not straight-through; sometimes that rounding becomes amplified noise. Simple software doesn't see that problem. Every time there is a loss of precision, I actually think about it.

 

With a little bit of experience, it becomes harder and harder to be cavalier with precision.

 

John

 

 

So, getting back to this blind test: do you see anything wrong with the methodology? Mark used Sonic Studio's PROCESS tool to do the conversions. The maker claims it to be an "ultra-resolution" set of DSP tools, whatever that means. Are there any issues with this tool that you or anyone else is aware of that would invalidate the test?

 

Just as an FYI, I do all my math in 64-bit floating point internally :) And yes, I compute all the filter coefficients at execution time.
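For what run-time coefficient design in double precision looks like in practice, here's a minimal scipy-based sketch (purely illustrative -- the function name, cutoff, and tap count are made up, not DeltaWave's actual internals):

```python
import numpy as np
from scipy.signal import firwin

def make_lowpass(cutoff_hz, fs, num_taps=255):
    """Design a low-pass FIR at execution time, in float64 throughout.

    All parameters here are illustrative, not any real product's spec.
    """
    taps = firwin(num_taps, cutoff_hz, fs=fs)
    assert taps.dtype == np.float64   # scipy designs in double precision
    return taps

# Hypothetical example: 20 kHz low-pass for 96 kHz material.
taps = make_lowpass(20_000.0, 96_000.0)
```

Designing at execution time means the coefficients are exact for the actual sample rate in use, rather than interpolated from precomputed tables.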

 

Link to comment
14 minutes ago, John Dyson said:

As long as the interim files maintain the highest precision, and the tools are indeed good -- then I am pretty happy with the general methodology. I mean, if the tools support 64-bit interim .wav files, then I am super happy with that as well. The precision conversion should be deferred as long as it can be, unless that conversion is an aspect of the test.

 

BTW -- off topic -- I would use 64 bits 100% internally in my software if it didn't get so danged slow. I'll have to document the algorithms some day, so that a researcher can figure out a better way to do the math. If someone knows how to do something like a 4096-tap (or 4095, or whatever) Hilbert transform with the same mathematical accuracy (ignore the low-level optimization available for symmetric taps), also using a Kaiser-Bessel 4.5 window (note that this window has lots of small coefficients near the tails), then I'd be very appreciative. Yes, I need the bandwidth & accuracy!!!! Unfortunately, because of the situation, I must use a number of taps that makes the filter unsymmetrical. I REALLY need the window; a raw Hilbert transform without a window just won't work. I also use a Hann window in some places, when trying to economize on taps and the sideband character isn't important. I haven't considered the IIR approximations; it seems like they would wobble too much, even with lots of taps at high precision? Does anyone have experience with intensive use of IIR Hilbert transforms? (I need 20 Hz to 20+ kHz with very high precision; 1% error is a killer.)
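For reference, the windowed-FIR Hilbert design being described can be sketched roughly like this -- assuming "Kaiser-Bessel 4.5" means a Kaiser window with beta = 4.5 (an assumption on my part), and using the textbook odd-length type III form rather than the unsymmetrical variant John needs:

```python
import numpy as np

def hilbert_fir(num_taps=4095, beta=4.5):
    """Windowed FIR Hilbert transformer (type III: odd length,
    antisymmetric).

    beta=4.5 assumes 'Kaiser-Bessel 4.5' refers to the Kaiser window's
    beta parameter -- an assumption, not a confirmed spec.
    """
    m = (num_taps - 1) // 2
    n = np.arange(num_taps) - m
    h = np.zeros(num_taps)
    odd = (n % 2) != 0
    h[odd] = 2.0 / (np.pi * n[odd])   # ideal Hilbert response at odd lags
    # Kaiser window tames the truncation ripple at the tails.
    return h * np.kaiser(num_taps, beta)

h = hilbert_fir()
```

The antisymmetry is what the "symmetric taps" optimization exploits: only half the multiplies are needed. An unsymmetrical tap count forfeits that, which is part of the cost John mentions.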

 

What do you use for FFTs? FFTW is pretty amazing at running large transforms and convolutions in 64 bits, but only if you're willing to abide by the GPL license.
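As a quick sanity check on what 64-bit FFT precision buys: even NumPy's stock double-precision FFT round-trips a large audio-sized buffer at close to machine epsilon (FFTW behaves similarly in double precision):

```python
import numpy as np

# NumPy's FFT runs in 64-bit floats by default; a forward/inverse
# round trip on a 65536-sample buffer stays near machine epsilon.
rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)
y = np.fft.irfft(np.fft.rfft(x), n=x.size)

err = np.max(np.abs(x - y))   # on the order of 1e-15 to 1e-13
```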

Link to comment
1 minute ago, John Dyson said:

I have watched FFTW since it came out back in the '90s, and I really like the completeness & apparent quality of their work. I haven't needed an FFT yet on this project, and a previous project just needed something simple -- absolute and complete accuracy/production quality wasn't a goal at the time. I just 'grabbed' a freely usable FFT and used it. (Investigated dynamic range compression/expansion in the Fourier domain. Results worked, but were inconclusive.)

Off topic: I almost considered using some FFTs in the DHNRDS DA instead of the other filtering techniques, but *at the time* I felt that I would be delving into another science-project realm. (The DA was already a science project by itself.) Using an FFT for the input bandpass filters would be very interesting. The needed filters are very smooth (Q = 0.470 and Q = 0.450), with smooth phase, and right now I have an evil brute-force FIR implementation, because IIR filters don't match the requirements due to the subtle difference between the DSP world and the analog world. The input filters cost a big part of the CPU for non-MD reduction decodes (like 30% of a Haswell core at 96k -- WAY too much). The important thing is that they must be sample-accurate in timing. N log N would be a speed improvement over what I did (too ugly to admit to).

 

 

 

Applying large filters with lots of non-zero coefficients would be a lot faster using a well-optimized FFT library.
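A small illustration of that speedup: scipy's FFT-based convolution runs in O(N log N) while direct convolution is O(N·M), yet the two agree to near machine precision on a dense FIR (sizes below are toy values, not the 4095-tap / 96 kHz case discussed above):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
signal = rng.standard_normal(10_000)   # toy-sized input
taps = rng.standard_normal(1_023)      # dense FIR, all taps non-zero

# O(N log N) FFT-based convolution vs O(N*M) direct form --
# same mathematical result, very different cost at large sizes.
fast = fftconvolve(signal, taps)
slow = np.convolve(signal, taps)

diff = np.max(np.abs(fast - slow))     # near machine precision
```

For streaming audio, the same idea is applied block-wise (overlap-add / overlap-save), which is what `scipy.signal.oaconvolve` and FFTW-based convolution engines do internally.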

Link to comment
  • 5 months later...
  • 4 weeks later...
25 minutes ago, John Dyson said:

I found what you said to be true. I even found that the hiss (well, it is above 20 kHz -- hiss? :-)) is added AFTER the compression used on almost all CDs and downloads (of course, talking about sample rates above CD's). So, the hiss is TOTALLY specious, and the first thing that I do is remove it. This removal is beneficial because the amplitude of the hiss is so strong that it might mess up tracking in my project.

 

* ADD-ON -- in my statement above, I meant that the compression was used on most CDs, but the hiss is also sometimes added to hi-res products. The intention was to state that the hiss is actually added AFTER most of the actual mastering... therefore TOTALLY specious.

 

Maybe the basic mastering is slightly better on some SACDs (like on the Love Over Gold example), but the hiss is probably just intended to justify the 'High Res' moniker for the product. I was told by my project partner that it was CRITICAL to have bandwidth above 30 kHz -- but it does appear that the additional information above 21 kHz is often specious.

 

John

 

 

The SACD spec calls for an optional 50 kHz low-pass filter, but I've seen quite a bit of content that seems to move the quantization noise to well below 50 kHz, often at significant levels. Perhaps that's due to some poorly implemented ADCs or encoder software. From my experience, I'd say a 30 kHz cutoff is a safer bet.
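One quick way to check whether content carries real energy above the audio band, or just shaped quantization noise, is to look at the fraction of total power above a cutoff. A rough Welch-PSD sketch (the 30 kHz cutoff follows the discussion above; the function name and parameters are otherwise illustrative):

```python
import numpy as np
from scipy.signal import welch

def ultrasonic_fraction(x, fs, cutoff=30_000.0):
    """Fraction of total power above `cutoff` Hz (Welch PSD estimate).

    Distinguishing real ultrasonic content from DSD noise-shaping
    residue would still need listening/inspection; this only measures
    how much energy sits above the cutoff.
    """
    f, psd = welch(x, fs=fs, nperseg=4096)
    band = f >= cutoff
    return psd[band].sum() / psd.sum()

# Synthetic check: a pure 1 kHz tone has essentially no energy
# above 30 kHz, so the fraction should be vanishingly small.
fs = 192_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1_000 * t)
frac = ultrasonic_fraction(tone, fs)
```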

Link to comment



