
Lies about vinyl vs digital



8 hours ago, sandyk said:

 Given that most material reissued as high res was recorded from tape and had very little genuine musical content above 35kHz, that seems highly unlikely. Barry Diament is one of the few recording engineers who provides a genuine frequency response in his high-res releases to a little over 50kHz.

 

You do not understand - many mics record up to 50kHz, and LPs easily reproduce that. It is present on pretty much any vinyl rip. You keep emphasizing text, meaning you just want to ignore some facts. But “the facts is the facts” and that energy is there. Just look at any competent LP recording. (Shrug) 

 

What that energy means may be controversial, the same as when it is recorded at high res straight into the digital realm. Whether it is there is not controversial at all when you run the digital files through a DAW. 
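
For what it's worth, here is a rough sketch of the kind of check I mean - the file name is hypothetical and I am assuming the Python soundfile library; any WAV reader and any 24/96 or 24/192 rip would do:

```python
# Rough sketch only: estimate how much energy a high-res vinyl rip carries
# above the CD Nyquist limit. File name is hypothetical.
import numpy as np
import soundfile as sf

data, fs = sf.read("vinyl_rip_2496.wav")
mono = data.mean(axis=1) if data.ndim > 1 else data

spectrum = np.abs(np.fft.rfft(mono * np.hanning(len(mono)))) ** 2
freqs = np.fft.rfftfreq(len(mono), d=1.0 / fs)

total = spectrum.sum()
above_cd = spectrum[freqs > 22050.0].sum()   # energy above the CD Nyquist limit
print(f"fraction of energy above 22.05 kHz: {above_cd / total:.2e}")
```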

 

And what *I* find with my own little ears is that high-res vinyl rip recordings sound like the best vinyl, and are easily identified as such. 16/44.1 or even 24/48 vinyl rips do not - they invariably sound like a heavy - very heavy - layer of cotton is over the speakers. Honestly, you demand people believe that two identical files can sound different for unknown and unknowable reasons, but you are gonna give me crap over this? 👏

 

 

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

7 hours ago, sandyk said:

A test from Barry Diament's new 24/192 album. Obviously, Barry does NOT agree with you.

 Click on the image several times for a full screen image.

B.D -zAlexTestR.jpg

 

Barry records everything digitally, and an LP could reproduce this without issue. 

 

Also, I think Barry uses 24/192k for everything, but you would have to ask him. His recordings are always astoundingly gorgeous.

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

8 hours ago, John Dyson said:

The band-limiting filter is something both have in common -- and those can be done essentially perfectly.  Those are NOT what causes the distortion requiring heroic measures.  It is the non-integral rate conversion mechanisms (the contorted methods) that cause the distortion.

 

Who cares about a filter that lops off 500Hz, when it has already been lopped off?  Linear phase filters are perfect (for the purpose), no phase shifts or anything like that.  I could run a signal through a linear phase filter 100s of times -- and there would be no significant loss beyond the first filter (other than a bit of dithering due to finite-precision math.)  Linear phase filters can be done SUPER FLAT, and all there is is roundoff and the necessary slight frequency response undulations...   That is the main distorting factor of integral rate conversion.  Non-integral adds a bit of NONLINEAR distortion IN ADDITION -- it isn't really what should be done all of the time.
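
(To illustrate the flatness claim, a minimal sketch - tap count and cutoff are arbitrary choices and scipy is assumed; this is an illustration, not John's code:)

```python
# Design a linear-phase FIR lowpass and measure its passband ripple.
import numpy as np
from scipy.signal import firwin, freqz

fs = 96000.0
taps = firwin(2047, 22000.0 / (fs / 2))      # linear phase by construction (symmetric taps)
w, h = freqz(taps, worN=16384)
freqs = w * fs / (2 * np.pi)

passband = np.abs(h[freqs < 20000.0])
ripple_db = 20 * np.log10(passband.max() / passband.min())
print(f"passband ripple: {ripple_db:.5f} dB")   # a tiny fraction of a dB
```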

 

The non-integral rate conversion has to do MORE things -- the ugly part of general rate conversion that is NOT mathematically perfect.  Every time that you do a non-integral rate conversion, you get a loss that doesn't have to be there.

 

In the case of integral rate conversion, you can do it over and over and over again -- many more times than non-integral rate conversion -- and still have less loss.  Non-integral should be special purpose only.

 

John

 

I don’t see this any more, John - there are plenty enough bits to avoid it. Even __float128 is still measured in MFLOPS on modern chips. Or even 5-year-old chips. 
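
To put a rough number on "plenty enough bits", a bit of back-of-the-envelope arithmetic (mine alone, nothing more):

```python
# Machine epsilon of ordinary double precision vs. the size of a 24-bit LSB.
import numpy as np

eps64 = np.finfo(np.float64).eps            # ~2.2e-16
print(20 * np.log10(eps64))                 # ~ -313 dB
print(20 * np.log10(2.0 ** -23))            # ~ -138 dB, a 24-bit LSB
# Even plain doubles leave ~175 dB of headroom below the 24-bit level;
# quad precision only widens that gap.
```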

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

5 hours ago, John Dyson said:

Doing the conversion back and forth is pretty much impossible to do reliably.  Neither integral nor non-integral conversion can be done perfectly -- so it would be wrong to claim that non-integral is somehow inferior simply because perfect conversion cannot normally be done.

 

It is just that non-integral is more likely to give less accurate results than integral conversion.  (There are so many variables that it would be dishonest of me to make absolute claims unless the specific situation is controlled and well defined.)

 

Basically (in a crude way), here are the two schemes:

 

non-integral:

44.1k -> create approximate samples in between the 44.1k samples (by polynomial approximation or a common-factor method) -> brickwall just below 22kHz -> clean 96kHz

 

integral:

48k -> create 96k by keeping every original sample and inserting a zero in between -> brickwall just below 24kHz -> result is clean 96kHz -> double the signal level to correct for the inserted zeros.

 

The integral conversion is perfect in that every output sample is either a zero or a 48kHz input sample -- counterintuitive, but after the brickwall & 6dB gain the result is essentially mathematically perfect (it is perfect if the filter is ignored.)
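
(A rough sketch of that integral scheme - filter length and cutoff are assumed values; an illustration, not anyone's production code:)

```python
# 48k -> 96k the "integral" way: zero-stuff, brickwall just below 24 kHz, +6 dB.
import numpy as np
from scipy.signal import firwin, lfilter

def upsample_2x(x, fs_in=48000):
    up = np.zeros(2 * len(x))
    up[::2] = x                                  # original samples, zeros in between
    # cutoff is normalized to the NEW Nyquist (fs_out/2 == fs_in), hence the /fs_in
    taps = firwin(1023, (fs_in / 2 - 500) / fs_in)
    return 2.0 * lfilter(taps, 1.0, up)          # 6 dB gain restores the level
```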

 

The non-integral conversion doesn't have an all-or-nothing shot at perfection, but has either a polynomial approx conversion type thing (other methods also), or a common factor thing (which CAN be done for specific rates.)  However, invariably the non-integral scheme results in more distortion (wrt audiophile specsmanship.)

 

The good thing is that integral conversion doesn't lose anything beyond the frequency response thing and the micro-micro simple roundoff that exists everywhere -- the integral conversion has VERY LITTLE opportunity for approximation errors.  The non-integral conversion has lots (LOTS more) opportunity for all kinds of errors to creep in.  There is a choice:  INTEGRAL (coefficients of 1.0 or 0.0), or NON-INTEGRAL (usually a long calculation - often based on multiple input samples.)  Where are there more chances for errors to creep in?

 

John

 

 

So, have you measured how much quantization noise a non-integral conversion adds? I get your point that *any*  at all is not good, but how much is actually added? I am not sure I even know how to measure that, so I am just looking for a feel of how significant it is. 
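
In case it helps frame my question, here is the sort of crude measurement I had in mind (my own sketch, assuming scipy's resample_poly; not suggesting this is how you would do it):

```python
# Crude round-trip test: 44.1k -> 96k -> 44.1k, then look at what is left over.
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs * 5) / fs
x = np.sin(2 * np.pi * 1000.0 * t)            # 1 kHz tone, 5 seconds

up = resample_poly(x, 320, 147)               # 44.1k -> 96k (ratio 320/147)
back = resample_poly(up, 147, 320)            # 96k -> 44.1k

n = min(len(x), len(back))
res = (back[:n] - x[:n])[fs:-fs]              # trim a second off each end
err_db = 20 * np.log10(np.std(res) / np.std(x))
print(f"round-trip residual: {err_db:.1f} dB relative to the signal")
```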

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

51 minutes ago, mansr said:

Resampling using a proper band-limited interpolation filter, integer ratio or not, involves no nonlinear operations. Thus, it cannot cause nonlinear distortion.

Refer to the reviews showing otherwise.  The distortion exists.

 

I just checked -- and did about 10 seconds of internet lookup to verify -- refer to the URL below.

 

The term 'cubic interpolation', for example -- THAT IS NONLINEAR.

 

It is common to use such algorithms.  The rather onerous common-factor type algorithms that brute-force a result might be used in fixed-conversion LSI of some kind.  Not so good for normal programming (bits move around differently in LSI than in programming, and some things that are easy in LSI are not easy to do in software.)

 

Read the following URL and the attached document with the comments that essentially agree with mine (and I knew nothing about that URL until a few minutes ago) -- hint again: cubic interpolation is nonlinear.

https://www.psaudio.com/article/sample-rate-conversion/

John

 

AES2005_ASRC.pdf

3 hours ago, The_K-Man said:

 

Who gives a hoot about what's going on up at 50kHz - or even 30k for that matter?

 

Must be a lot of DOGS on this forum.  As for me, I can hear clearly up to only about 14kHz, so none of that matters to me.

 

Wow - hostile aren’t you?

 

Which part of “the recording sounds like the vinyl playback” makes me a dog? Obviously, nobody can hear 50kHz, so that is not the reason; it just happens to be a fact. Ignoring facts means your conclusions are likely to be wrong.

 

What’s your reasoning then, or are you just one of those people who ignore facts when they inconveniently contradict a theory you hold on to? 

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

7 minutes ago, Paul R said:

 

So, have you measured how much quantization noise a non-integral conversion adds? I get your point that *any*  at all is not good, but how much is actually added? I am not sure I even know how to measure that, so I am just looking for a feel of how significant it is. 

 

 

Being very blunt -- some levels of precision required by SOME audiophiles are insane :-), but the complication of doing non-integral conversion makes it a really good thing to avoid.  I just attached a URL from PS Audio as further information.  I really don't know everything about every bit of DSP, but in my own history of dealing with the stuff, sample rate conversion is best avoided unless really needed.

(Answering your question better -- I haven't done measurements, but I have seen results.)

Not as the be-all and end-all of SRC methods -- just as an example, I also posted an AES article about ONE method as an attachment (in another post) that shows the arcane complexity needed -- while integral conversion can be done 100% accurately while sleeping. :-)  Things like multi-channel or multi-threading DO NOT affect the hoops that need to be jumped through to get accurate conversion...

 

If the conversions have reasonable common factors, there are brute-force accurate ways of doing the conversion.  44.1k is somewhat of an oddball.  (so is 88.2k.)
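
(To make the common-factor arithmetic concrete, a tiny sketch:)

```python
# The 44.1k <-> 48k conversion reduces to the ratio 160:147, so an exact
# polyphase implementation has to work at a 160x (or 147x) intermediate rate.
from fractions import Fraction

print(Fraction(48000, 44100))      # 160/147
print(44100 * 160 // 147)          # 48000 -- the integer up/down factors check out
```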

 

 

John

28 minutes ago, John Dyson said:

Being very blunt -- some levels of precision required by SOME audiophiles are insane :-), but the complication of doing non-integral conversion makes it a really good thing to avoid.  I just attached a URL from PS Audio as further information.  I really don't know everything about every bit of DSP, but in my own history of dealing with the stuff, sample rate conversion is best avoided unless really needed.

(Answering your question better -- I haven't done measurements, but I have seen results.)

Not as the be-all and end-all of SRC methods -- just as an example, I also posted an AES article about ONE method as an attachment (in another post) that shows the arcane complexity needed -- while integral conversion can be done 100% accurately while sleeping. :-)  Things like multi-channel or multi-threading DO NOT affect the hoops that need to be jumped through to get accurate conversion...

 

If the conversions have reasonable common factors, there are brute-force accurate ways of doing the conversion.  44.1k is somewhat of an oddball.  (so is 88.2k.)

 

 

John

 

I will have to reread that old AES paper again, but in section 6.2, they are showing a peak spur of -126.9dB, 116.4dB THD+N, and a pass band of 17.97kHz for a 48k to 44.1k conversion, which is one of the worst conversions of course. (It was 2005, they did not do high res as we know it today. That poor SHARC would have rolled over and died if it could handle it at all. :) )

 

I am not sure exactly what that is saying, but it seems like a pretty small conversion penalty. And we have better algorithms and processors today.  That isn’t to dispute your experience, just an observation.
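
Just to give myself a feel for those numbers, a bit of back-of-the-envelope arithmetic (mine, not the paper's):

```python
# Convert the quoted figures to linear ratios and compare to PCM step sizes.
spur   = 10 ** (-126.9 / 20)    # ~4.5e-7 of full scale
lsb_16 = 2.0 ** -15             # ~3.1e-5  (16-bit LSB)
lsb_24 = 2.0 ** -23             # ~1.2e-7  (24-bit LSB)
print(spur, lsb_16, lsb_24)     # the spur sits well below a 16-bit LSB
```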

 

Plus 48k was chosen just to match an existing tape recorder I think, not a lot of “sound” science behind it! (Pun intended!) 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

6 hours ago, fas42 said:

 

With regard to LP limitations I meant that certain levels of waveforms are just not attempted, because cartridges will never be able to track them, or adjacent grooves are just too close to each other - that type of thing.

 

I have never had issues with digital "capturing all the information" - CD replay for me conveyed all I have ever heard from vinyl, in the earliest years of the silver disk being around - I can think of only about 2 TT rigs I've heard in the last 35 years that had anything really special about them.

 

 

Have to be specific there Frank - certainly there is equalization applied to vinyl, but that is neither subtracting nor adding information. 

 

We will simply have to disagree. I do think you have a bit of tunnel vision, probably brought on by being thrown so much shade about your ideas. But I totally disagree with you on standard res digital capturing all the information. I have cassette tapes I made that sound better than some CDs. I digitize them at 88.2k though, not at the much higher resolution I believe is needed for vinyl. 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

23 minutes ago, Paul R said:

 

I will have to reread that old AES paper again, but in section 6.2, they are showing a peak spur of -126.9dB, 116.4dB THD+N, and a pass band of 17.97kHz for a 48k to 44.1k conversion, which is one of the worst conversions of course. (It was 2005, they did not do high res as we know it today. That poor SHARC would have rolled over and died if it could handle it at all. :) )

 

I am not sure exactly what that is saying, but it seems like a pretty small conversion penalty. And we have better algorithms and processors today.  That isn’t to dispute your experience, just an observation.

 

Plus 48k was chosen just to match an existing tape recorder I think, not a lot of “sound” science behind it! (Pun intended!) 

 

 

Hey, those early Sony PCM machines ran at 44,056 Hz and we just play them at 44,100 Hz.  Close enough I guess.  The funny thing is both 44,100 and 48,000 rates were chosen because of video one way or another.  If I were emperor of audio, I would have just picked a nice round number like 50 kHz.  :)

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

43 minutes ago, Paul R said:

 

I will have to reread that old AES paper again, but in section 6.2, they are showing a peak spur of -126.9dB, 116.4dB THD+N, and a pass band of 17.97kHz for a 48k to 44.1k conversion, which is one of the worst conversions of course. (It was 2005, they did not do high res as we know it today. That poor SHARC would have rolled over and died if it could handle it at all. :) )

 

I am not sure exactly what that is saying, but it seems like a pretty small conversion penalty. And we have better algorithms and processors today.  That isn’t to dispute your experience, just an observation.

 

Plus 48k was chosen just to match an existing tape recorder I think, not a lot of “sound” science behind it! (Pun intended!) 

 

 

My 'complaint' isn't that it is impossible to do accurately, it is just a real pain, and simple integral conversions (and reasonable common-factor conversions also) are soooo muuuuch better than general-purpose rate conversion...    For example, a half-way rate like 72k isn't all that bad either -- it has simple multiply/divide factors.  So, when I mentioned 'integral' -- that kind of includes some other simple factors like 72k -- not too bad.

 

Maybe I am wrapped up in elegance rather than 'yes it can be done accurately'...  However, it is so silly (to me) to stay mired in odd rates like 44.1k (there was a real reason for the choice -- but now we only need it for legacy, right?)

 

(This is stream of consciousness -- not rudeness below):

 

Think about it this way -- you are putting together a nice little audio widget, but no one wants it unless it supports all of the various sample rates...  Your widget is simple/clean and works super well; however, to do ALL rates accurately, you have to either 1) purchase the use of someone else's code, or 2) write your own, spending a month or two including proving that it works well.

 

I solved my own project's problem -- it just supports ALL sample rates directly -- no conversions except integral when absolutely necessary.  All of the math automatically adapts -- the LITERALLY 100's* of filters, delays, Hilbert transforms, attack/decay calculations.  When I talked to my PROFESSIONAL recording engineer project partner about dealing with sample rates -- he REALLY did not want SRC...  I agreed with him, so now EVERY LITTLE THING is dynamically adapted.  (Luckily, I am pretty good at structuring things -- so the program LOOKS no more complex even though it calculates ALL of the filter parameters based upon the current sample rate.)

 

* Off topic: I haven't counted the Hilbert transforms/filters/etc lately, but I can be pretty sure that the program does over 48 Hilbert transforms PER SAMPLE.  It does well over 320 FIR filters PER SAMPLE -- it has all of these filters because there are nonlinear operations being done between them -- so they cannot just all be collapsed into a single set of filters... Most of the work is being done in about 17 threads -- so the program can benefit from up to about 12 cores, maybe more in reality.   There are ZERO precalculated filters -- they are all 'designed' at runtime.  I have a LOT of rate adaptation in my program 🙂.
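
(For anyone curious what "designed at runtime" can look like in practice -- a minimal sketch of the general idea, with made-up corner frequencies and tap counts; an illustration, not John's code:)

```python
# Derive every coefficient set from the current sample rate instead of precalculating.
from scipy.signal import firwin

def make_filters(fs):
    nyq = fs / 2
    return {
        "antialias": firwin(511, 20000.0 / nyq),                  # 20 kHz corner at any fs
        "rumble":    firwin(511, 30.0 / nyq, pass_zero=False),    # 30 Hz highpass
    }

filters_44k1 = make_filters(44100)
filters_192k = make_filters(192000)   # same code path, different coefficients
```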

John

1 hour ago, Paul R said:

 

Barry records everything digitally, and an LP could reproduce this without issue. 

 

Also, I think Barry uses 24/192k for everything, but you would have to ask him. His recordings are always astoundingly gorgeous.

 

 

http://www.channld.com/vinylanalysis1.html

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

24 minutes ago, esldude said:

Hey, those early Sony PCM machines ran at 44,056 Hz and we just play them at 44,100 Hz.  Close enough I guess.  The funny thing is both 44,100 and 48,000 rates were chosen because of video one way or another.  If I were emperor of audio, I would have just picked a nice round number like 50 kHz.  :)

 

There was a lot of advocacy for 60kHz if I recall correctly. But video was the 800# 🦍 that won. 🤪

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

23 minutes ago, John Dyson said:

My 'complaint' isn't that it is impossible to do accurately, it is just a real pain, and simple integral conversions (and reasonable common-factor conversions also) are soooo muuuuch better than general-purpose rate conversion...    For example, a half-way rate like 72k isn't all that bad either -- it has simple multiply/divide factors.  So, when I mentioned 'integral' -- that kind of includes some other simple factors like 72k -- not too bad.

 

Maybe I am wrapped up in elegance rather than 'yes it can be done accurately'...  However, it is so silly (to me) to stay mired in odd rates like 44.1k (there was a real reason for the choice -- but now we only need it for legacy, right?)

 

(This is stream of consciousness -- not rudeness below):

 

Think about it this way -- you are putting together a nice little audio widget, but no one wants it unless it supports all of the various sample rates...  Your widget is simple/clean and works super well; however, to do ALL rates accurately, you have to either 1) purchase the use of someone else's code, or 2) write your own, spending a month or two including proving that it works well.

 

I solved my own project's problem -- it just supports ALL sample rates directly -- no conversions except integral when absolutely necessary.  All of the math automatically adapts -- the LITERALLY 100's* of filters, delays, Hilbert transforms, attack/decay calculations.  When I talked to my PROFESSIONAL recording engineer project partner about dealing with sample rates -- he REALLY did not want SRC...  I agreed with him, so now EVERY LITTLE THING is dynamically adapted.  (Luckily, I am pretty good at structuring things -- so the program LOOKS no more complex even though it calculates ALL of the filter parameters based upon the current sample rate.)

 

* Off topic: I haven't counted the Hilbert transforms/filters/etc lately, but I can be pretty sure that the program does over 48 Hilbert transforms PER SAMPLE.  It does well over 320 FIR filters PER SAMPLE -- it has all of these filters because there are nonlinear operations being done between them -- so they cannot just all be collapsed into a single set of filters... Most of the work is being done in about 17 threads -- so the program can benefit from up to about 12 cores, maybe more in reality.   There are ZERO precalculated filters -- they are all 'designed' at runtime.  I have a LOT of rate adaptation in my program 🙂.

John

 

I can see that I suppose. All the pros I know use SRC sparingly, but are not overly concerned with it, as it really only happens one time. And the better SRC algorithms are more than just  pretty good. 

 

Yours,

-Paul

 

P.S.

Is the CAPITAL LETTER SHOUTING emphasis stuff really necessary?  Italics are so much better for emphasis. 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

23 minutes ago, semente said:

 

Rob knows his stuff.

 

His Pure Vinyl is the best vinyl recording software out there, period. Oh, it has a few quirks in the latest Beta version, but the results sound very good indeed. I really want, but can not afford, a pair of his Seta phono preamps too...

 

I usually just refrain from arguing about this subject though. There are some people who are dead convinced that CD playback is the supremest pinnacle of all the supreme pinnacles in all of Oz high end audio. 😁

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

29 minutes ago, Paul R said:

 

There was a lot of advocacy for 60kHz if I recall correctly. But video was the 800# 🦍 that won. 🤪

I don't really think video was the 800 pound gorilla.  It was simply a matter of economics.  Sony et al. figured out they could adapt video recorders to store digital data more easily and less expensively than building dedicated gear from scratch.  So 44,100 was a rate that could be used with both NTSC and PAL video recorders, coming out to a whole number of samples per video field in both systems.

 

60 kHz would actually have been better for all purposes, such as working evenly with all video frame rates.  But TV did drive a 48 kHz compromise that worked evenly with most frame rates other than NTSC.  And everyone thought 60 kHz was excessive.  

 

Oh well, standards are like that, and we are stuck with split rate families based upon the 44.1 and 48 rates.  Maybe if we'd had 60 kHz that is all we would have needed. 
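
A quick arithmetic check of that video tie-in, using the figures usually quoted for the early PCM adaptors (3 samples per video line, 294 active lines per PAL field, 245 per NTSC field) -- a sketch of the commonly told story:

```python
# Why 44,100 (and the 44,056 mentioned earlier) fall out of video timing.
pal        = 3 * 294 * 50             # 44100 -- exact at the PAL field rate
ntsc_nom   = 3 * 245 * 60             # 44100 -- exact at the nominal 60 Hz field rate
ntsc_color = 3 * 245 * (60 / 1.001)   # ~44056 -- the early Sony PCM adaptor rate
print(pal, ntsc_nom, round(ntsc_color))
```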

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

29 minutes ago, Paul R said:

 

I can see that I suppose. All the pros I know use SRC sparingly, but are not overly concerned with it, as it really only happens one time. And the better SRC algorithms are more than just  pretty good. 

 

Yours,

-Paul

 

P.S.

Is the CAPITAL LETTER SHOUTING emphasis stuff really necessary?  Italics are so much better for emphasis. 

I am rude :-).  Actually I keep trying to find a way to emphasize to help people avoid my blather - and help get to the point.  I have so darned much stuff going on -- it is so hard to organize.

 

About the SRC stuff -- I look at it like this: a trivial decimation or interpolation routine is almost 5 lines of code (the few times that I do it, it is a few lines inline, along with one or two FIR filter definitions and a delay specification for timing compensation).  For general SRC, we are talking probably a few hundred lines of code to do it correctly.
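
(Roughly what such a "few lines" routine looks like -- a sketch with an assumed tap count and cutoff, not John's actual code:)

```python
# Trivial 2:1 decimation: band-limit below the new Nyquist, then keep every other sample.
from scipy.signal import firwin, lfilter

def decimate_2x(x, fs):
    taps = firwin(255, 0.45)          # cutoff just below the new Nyquist (normalized)
    y = lfilter(taps, 1.0, x)
    return y[::2], fs // 2
```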


If whoever used SRC really knew the mess that it is -- they'd avoid it more.  It isn't just the numbers (in the audiophile world -- they do worry about such insanely low numbers), but the simple *very poor* design choice for today's use.  (Asterisk instead of upper case, okay?)

 

Ideally (and correctly), do the conversion *with the very best possible algorithms* only *once*, then simply up/down convert around the normal rates.  That is even the philosophy of the use of my program, even though there *will* be people who use it more than once on each recording (I understand a redo because of a calibration or mode setting matter, but that is it.)

 

My program does interpolation/decimation for only one reason -- to run at 192k/384k  and not slow down the process too much since the program runs at sample rate.   No reason to worry about processing material at 90kHz, when the original DolbyA only had 35kHz BW rolled off, and the DHNRDS is like, really totally flat to 40kHz.  No need for more BW, so 96kHz (or 160kHz, actually) is the highest rate that it runs internally.

 

John

3 minutes ago, esldude said:

I don't really think video was the 800 pound gorilla.  It was simply a matter of economics.  Sony et al. figured out they could adapt video recorders to store digital data more easily and less expensively than building dedicated gear from scratch.  So 44,100 was a rate that could be used with both NTSC and PAL video recorders, coming out to a whole number of samples per video field in both systems.

 

60 kHz would actually have been better for all purposes, such as working evenly with all video frame rates.  But TV did drive a 48 kHz compromise that worked evenly with most frame rates other than NTSC.  And everyone thought 60 kHz was excessive.  

 

Oh well, standards are like that, and we are stuck with split rate families based upon the 44.1 and 48 rates.  Maybe if we'd had 60 kHz that is all we would have needed. 

For a while, my best audio recorders were my D9 video decks (but what a pain to maintain a sync source just to record audio :-)).

John

1 hour ago, John Dyson said:

 

Ideally (and correctly), do the conversion *with the very best possible algorithms* only *once*, then simply up/down convert around the normal rates.  That is even the philosophy of the use of my program, even though there *will* be people who use it more than once on each recording (I understand a redo because of a calibration or mode setting matter, but that is it.)

 

 

 

I think this is pretty much what everyone does. Even for home usage, which is a different workflow but still essentially the same thing. 

 

At home, if I have a 192k audio file and I wish to play it on my Wavelength Proton, the software has to down sample it to 96k. Or to whatever sample rate works best. 

 

On the Proton for example, I play CD quality files at 88.2k, because that is where they sound the best on that DAC. The files are still stored as 44.1k though. Effectively, that is a one time upsample even if it happens with every play.

 

Understand, every bit of that happens without any attention from me. I set it up on the software one time, and that software is plenty smart enough to do what I tell it to do. It also happens on a server with plenty of processing power. Not on a low powered endpoint. 

 

So, when I play a 24/192k file on the Proton, the server knows I want that down converted to 24/96k. If I play a 16/44.1k file, it knows I want that streamed as a 24/88.2k file. If it grabs a DSD64 file, it knows very well that should wind up at the Proton as 24/88.2k. The same files sent to different DACs will be streamed differently. That DSD will be sent to the iFi iDSD Micro as native DSD, for example. 

 

And yep, some DACs just sound the best with bit perfect input. And for other DACs, it really makes no difference what you feed them, as they upsample to what they want anyway - like 100kHz for Benchmarks, or 8X oversampling with GTO filters in the iDSD Micro BL. 

 

So yeah, I think we are agreeing to some extent. My big point is that this is all possible because of separating music players from servers, and being able to apply processing power where it does the most good. Oh, and even a modest computer today has plenty of processing power to handle all these little tasks with essentially no bothersome delay.  Even downsampling 24/96k to 16/44.1 is no problem at all. 
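
Conceptually it is nothing fancier than a little rule table like this (a toy sketch of mine; the real software is of course far more capable):

```python
# Toy per-DAC rule table: source rate/format -> what the server should stream.
RATE_RULES = {
    "Wavelength Proton": {44100: 88200, 192000: 96000, "dsd64": 88200},
    "iFi iDSD Micro":    {"dsd64": "native"},
}

def target_format(dac, source):
    # fall through to bit-perfect if no rule matches
    return RATE_RULES.get(dac, {}).get(source, source)
```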

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

4 hours ago, Paul R said:

You do not understand - many mics record up to 50kHz, and LPs easily reproduce that. It is present on pretty much any vinyl rip. 

 Paul

 I did quite a bit of research before posting that reply, and I was unable to confirm your claims about typical vinyl recordings having musical content to 50kHz. At least not back when I owned Half Speed Mastered LPs bought at audio shows many years ago, where the covers claimed a response to 35kHz. It may be possible with today's technology, but we need to remember that most of the tape masters used for high res downloads have been sorely lacking in the upper HF area, which resulted in many problems for HD Tracks etc., as the masters were obviously not genuine high res material.

 

Alex

 

 

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

2 hours ago, Paul R said:

 

Rob knows his stuff.

 

His Pure Vinyl is the best vinyl recording software out there, period. Oh, it has a few quirks in the latest Beta version, but the results sound very good indeed. I really want, but can not afford, a pair of his Seta phono preamps too...

 

I usually just refrain from arguing about this subject though. There are some people who are dead convinced that CD playback is the supremest pinnacle of all the supreme pinnacles in all of Oz high end audio. 😁

Don't lump me in with that statement about CD being the supreme pinnacle of all of Oz high end audio.

I have all of Barry's high res recordings and can hear the differences between the formats, even the superiority of the 16/48 used for video over the same recording on RBCD at 16/44.1.

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

5 hours ago, Paul R said:

 

Barry records everything digitally, and an LP could reproduce this without issue. 

 

Also, I think Barry uses 24/192k for everything, but you would have to ask him. His recordings are always astoundingly gorgeous.

 

Americas is 192
Confluence is 192
Equinox is 192
Kay Sa (New Album-SR006) is 192
Lift is 96
Winds of Change is 192
Kay Sa was recorded after a major clocking upgrade of the Metric Halo, and used Ethernet instead of Firewire.

 It is clearly improved over the previous releases and is more 3D sounding.

SR006 Liner Notes.pdf

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

3 hours ago, John Dyson said:

Refer to the reviews showing otherwise.  The distortion exists.

 

I just checked -- and did about 10 seconds of internet lookup to verify -- refer to the URL below.

 

The term 'cubic interpolation', for example -- THAT IS NONLINEAR.

Cubic interpolation is not a band-limited filter. It's a quick approximation that might sometimes be good enough. Nobody serious would use it when quality matters.

 

3 hours ago, John Dyson said:

It is common to use such algorithms.  The rather onerous common-factor type algorithms that brute-force a result might be used in fixed-conversion LSI of some kind.  Not so good for normal programming (bits move around differently in LSI than in programming, and some things that are easy in LSI are not easy to do in software.)

An arbitrary rational factor sample rate conversion can be done by resampling first up to a common multiple, then down to the target rate, both steps being integer ratio conversions. You have yourself said that these can be essentially flawless. If you look at the calculations performed in this process, you will notice that many intermediate results are actually unused. A lot of computational effort can thus be spared by not calculating them in the first place. The resulting construct is called a polyphase filter, and there is nothing exceptionally hard, let alone impossible, about doing it right.
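
In code that construct is nearly a one-liner with any decent DSP library. A minimal sketch, assuming scipy's upfirdn (filter length and design are arbitrary illustrative choices):

```python
# Rational resampling the polyphase way: conceptually upsample by L, filter,
# downsample by M -- upfirdn only computes the outputs that are actually kept.
import numpy as np
from scipy.signal import firwin, upfirdn

def resample_rational(x, up, down):
    cutoff = 1.0 / max(up, down)          # normalized to the intermediate rate's Nyquist
    h = up * firwin(64 * max(up, down) + 1, cutoff)
    return upfirdn(h, x, up=up, down=down)

y = resample_rational(np.random.randn(44100), 160, 147)   # one second, 44.1k -> 48k
```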

 

3 hours ago, John Dyson said:

Read the following URL and the attached document with the comments that essentially agree with mine (and I knew nothing about that URL until a few minutes ago) -- hint again: cubic interpolation is nonlinear.

https://www.psaudio.com/article/sample-rate-conversion/

John

You have got to be joking, referring to PS Audio.

 

3 hours ago, John Dyson said:

AES2005_ASRC.pdf

That's about asynchronous sample rate conversion. Different beast. Not relevant.

9 hours ago, semente said:

 

Some less than transparent systems do add a cloud of fog, and this in turn may generate a need for euphonic distortions.

But to reproduce those qualities through an optimal system they have to have been recorded...

 

They are recorded. That is, the sound cues that matter to the listening brain are picked up by the microphones, whether the person setting up the microphones went to some effort to 'optimise' pickup, or merely switched on some convenient recording device. I have been regularly "bowled over" when listening to recordings done right back at the start of the recording era, when I'm able to "see" into the space where the music happened - "the backing piano is well back, exactly there - and the playing is completely 'transparent' ..."

 

9 hours ago, semente said:

 

It baffles me how you can be so vocal about the importance of optimum playback, with which I agree with, and yet dismiss the equal importance of optimum recording.

 

The recording is now a historical document - I see the job of playback to present it exactly as it is - and it turns out there is plenty enough, every time, to make it 'work'.

 

9 hours ago, semente said:

 

Optimum playback will only extract whatever information has been recorded. But an un-optimal recording will not capture enough information.

Go on YouTube and play a few videos of live gigs made from the audience with mobile phones; it's easy to see my point. Or play an MP3 file of a proper stereo recording with loads of HF content...

 

Live gigs, using a PA setup?
