
Chord's New M-Scaler


Recommended Posts

On 7/23/2018 at 8:06 PM, mansr said:

All manufacturers seem to have some pet feature they emphasise the importance of above all else. For Chord, it's absurdly long linear phase filters. For Ayre it's rather shorter minimum phase filters. For Schiit it's the avoidance of sigma-delta modulators. There are many other examples. The interesting thing is that not only do these approaches differ in the weights they give to different aspects, they are in direct conflict. If two men say they're Jesus, one of them must be wrong.

 

As I said already, Rob Watts is technically correct, but I still think he's overdone it to an extent that is hard to justify.

 

Other manufacturers can't control or design the DAC completely, so they are constrained by the supplier and have little to play with. But Chord, with Rob, designs it from the ground up, and I'm sure it is the most advanced design ever made by a human, thirty years in the making, though it comes with their sound philosophy, which some people may not like.

Link to comment
On 7/23/2018 at 11:20 AM, ecwl said:

I have listened to DAVE with and without Blu2 M-Scaler for the past 14 months. I can tell you the sonic difference is not subtle. I completely agree with you that conceptually it seems hard to justify 1 million taps vs 164,000 taps. But it’s definitely something you should try to listen to some day.

 

You can do the same with Audirvana Plus on the Mac with the iZotope upsampling software just by moving a slider in a preference window.  In fact I think iZotope allows up to 2 million taps and as few as 10,000.  

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
17 hours ago, ecwl said:

In order to be able to do that, you need to upsample the signal, say 44.1kHz 16-bit, to a higher sample rate, say 705.6kHz, so that you're filling in the gaps between the original samples to regenerate the original analog waveform.

Isn't this "filling in the gaps" a misconception? So I don't think that's the rationale behind the design. 

Main listening (small home office):

Main setup: Surge protector +>Isol-8 Mini sub Axis Power Strip/Isolation>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three .

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)

Link to comment
4 minutes ago, firedog said:

Isn't this "filling in the gaps" a misconception? So I don't think that's the rationale behind the design. 

 

That's right. The filling-in-the-gaps metaphor is "analog intuition," which often doesn't work well with digital audio.

 

However, it doesn't mean upsampling can't be beneficial (though by another means than gap-filling).  That's presumably why it's been a standard step in processing digital audio since before there were such things as commercially available separate DACs.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
On 7/23/2018 at 7:06 PM, mansr said:

All manufacturers seem to have some pet feature they emphasise the importance of above all else. For Chord, it's absurdly long linear phase filters. For Ayre it's rather shorter minimum phase filters. For Schiit it's the avoidance of sigma-delta modulators. There are many other examples. The interesting thing is that not only do these approaches differ in the weights they give to different aspects, they are in direct conflict. If two men say they're Jesus, one of them must be wrong.

 

As I said already, Rob Watts is technically correct, but I still think he's overdone it to an extent that is hard to justify.

When you state that you are certain that no one can hear a difference between 100k taps and 1M taps, what are you basing that on?

Have you, like some others here, actually listened and compared, for example, a Chord DAVE on its own and with the 1M-tap M-Scaler?

I for one wish that, for digital to sound better, i.e. more realistic with acoustic material, PC upsampling via, for example, Audirvana or Pure Music were the equal of Chord's much more expensive versions, the BLU2 and the new M-Scaler.

My own still very limited comparisons, so far only via high-end headphones and not under ideal conditions, indicate that there is a VERY NOTICEABLE difference between DAVE's 164k taps and the BLU2's 1M taps.

More importantly, the upsampling via Audirvana or Pure Music does not sound as effortless and realistic as even DAVE on its own, even when I upsample to 32/768 in Audirvana or 64/384 with Pure Music.

I for one want the SQ of Rob Watts' M-Scaler tech. But if I can avoid it, I don't want to pay the price still asked for it.

But so far there is no question of its merits to me.

I have never before or after heard 16/44.1 sound as realistic as via DAVE/BLU2.

Who can deliver equal SQ at a much lower price???

Not only in theory but also in practice?

 

 

Link to comment
On 7/23/2018 at 7:41 AM, mansr said:

Rob Watts is technically correct in that a longer filter gives a more accurate reconstruction. However, I'm certain that nobody can hear the difference between 100k and 1M taps. Conferring such great importance on the filter length falls, in my opinion, in the same category as talking about skin effect at audio frequencies: real phenomena of limited or no relevance to audio applications.

 

Separate and apart from what filter lengths might be audibly distinct from each other -

 

Would it be roughly correct to say:

 

(1) the number of taps is the number of times (or length of time) the filter acts on the signal; 

 

(2) as a consequence of #1, a filter with a greater number of taps will cut the signal further (more steeply) than one with a lower number of taps - at least to the point where the signal disappears into the noise? 

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
13 minutes ago, Jud said:

Would it be roughly correct to say:

 

(1) the number of taps is the number of times (or length of time) the filter acts on the signal; 

I wouldn't put it that way.

 

13 minutes ago, Jud said:

(2) as a consequence of #1, a filter with a greater number of taps will cut the signal further (more steeply) than one with a lower number of taps - at least to the point where the signal disappears into the noise? 

More taps allow for steeper filters, yes. More generally, the greater the number of taps, the more precisely you can shape the frequency response.
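As a rough illustration of that point (a quick Python/scipy sketch of my own, not anything from Chord or this thread), here is how the attainable steepness of a plain windowed-sinc low-pass changes with tap count:

```python
# Sketch: more FIR taps -> a narrower transition band for the same cutoff,
# i.e. a steeper, more precisely shaped response. Filter lengths are arbitrary.
import numpy as np
from scipy.signal import firwin, freqz

fs = 44100.0       # sample rate, Hz
cutoff = 20000.0   # desired low-pass cutoff, Hz

for ntaps in (101, 1001, 10001):
    h = firwin(ntaps, cutoff, fs=fs)             # windowed-sinc low-pass FIR
    w, H = freqz(h, worN=1 << 16, fs=fs)         # frequency response
    mag_db = 20 * np.log10(np.abs(H) + 1e-12)
    f_stop = w[np.flatnonzero(mag_db < -40)[0]]  # first point 40 dB down
    print(f"{ntaps:6d} taps: -40 dB reached {f_stop - cutoff:7.1f} Hz past the cutoff")
```

For a fixed window the transition width shrinks roughly in inverse proportion to the number of taps, which is the sense in which more taps buy a more precisely shaped response.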

Link to comment
31 minutes ago, firedog said:

Isn't this "filling in the gaps" a misconception? So I don't think that's the rationale behind the design. 

 

"Filling in the gap" equals low pass filtering.  So I would not say it is a "misconception."

 

Sure, you can do it entirely in analog... and eventually, that gets done....  (i.e. even at 705.6kHz or whatever, it is still not the "infinite" sampling rate that is analog)..... 

 

The philosophy is, you can get better system performance if you do "more" of your work in digital and leave "less" work for analog.....

 

This is no different from the philosophy behind upsampling with HQP, or even in most oversampling DAC chips, etc, though Chord, by selecting what is essentially a windowed sinc function, is embracing the further philosophy that reconstructing towards "100% correctness" is better than towards psychoacoustic goals a la MQA, for example.  Whether you agree or disagree with the digital trumps analog premise is a different matter.  (e.g. if you are an NOS believer, you would do it all in analog and maintain that is the best sounding.)

 

If you had infinite digital compute power and infinite time, you can go to an arbitrary level of "smallness" in gaps with 100% mathematical correctness on the assumption that the original digitally sampled signal was bandwidth limited (below the Nyquist rate).  i.e. if I want the "gaps" to be small enough for a 1GHz sampling rate, I can compute that.  By 100% correctness, what I mean is, you take a bandwidth limited signal (below half 44.1kHz sampling rate) and sample it at say 44.1kHz and 705.6kHz.  If I apply the reconstruction to the 44.1kHz, I can get the exact same values as your 705.6kHz (modulo any small sampling errors in ADC).  This is true for any sampling rate you choose, 1GHz, 1 gadzillion Hz.... 

 

That came from the work of Nyquist, Shannon, Whittaker....   This reconstruction is Shannon-Whittaker interpolation.  (Rob Watts and most others call it Whittaker-Shannon -- but I am biased due to historically having one of Shannon's disciples as an advisor).  I believe it was formally proven by Shannon shortly after WW2.
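To make the "100% correctness" claim concrete, here is a small numerical sketch (my own illustration under the bandlimited assumption stated above; the test tone and lengths are arbitrary): Whittaker-Shannon interpolation of 44.1kHz samples lands on essentially the same values you would get by sampling the signal at 705.6kHz directly.

```python
# Sketch of Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n).
# The sum is truncated to the samples generated below, which is exactly where
# finite tap counts enter in practice.
import numpy as np

fs_lo, fs_hi = 44100.0, 705600.0   # 1fs and 16fs
f0 = 10000.0                       # a tone safely below fs_lo / 2

n = np.arange(-2000, 2000)                     # sample indices at 44.1 kHz
x_lo = np.sin(2 * np.pi * f0 * n / fs_lo)      # the 44.1 kHz samples

t_hi = np.arange(-100, 100) / fs_hi            # target instants near t = 0
x_hi_true = np.sin(2 * np.pi * f0 * t_hi)      # "sampled at 705.6 kHz" directly

x_hi_rec = np.array([np.sum(x_lo * np.sinc(fs_lo * t - n)) for t in t_hi])

# The residual is small and keeps shrinking as more samples (taps) are included.
print("max reconstruction error:", np.max(np.abs(x_hi_rec - x_hi_true)))
```

The only error left is from truncating the sinc sum; include more terms (more taps) and it keeps falling, which is the knob the long-filter approach is turning.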

 

The Chord game is to get as close to that as possible, at as high an upsampling rate as possible, with today's FPGA offerings, which is a 1.016M-tap filter interpolating up to 16fs.

Link to comment
20 minutes ago, rayl1234 said:

 

"Filling in the gap" equals low pass filtering.  So I would not say it is a "misconception."

 

Sure, you can do it entirely in analog... and eventually, that gets done....  (i.e. even at 705.6kHz or whatever, it is still not the "infinite" sampling rate that is analog)..... 

 

The philosophy is, you can get better system performance if you do "more" of your work in digital and leave "less" work for analog.....

 

This is no different from the philosophy behind upsampling with HQP, or even in most oversampling DAC chips, etc, though Chord, by selecting what is essentially a windowed sinc function, is embracing the further philosophy that reconstructing towards "100% correctness" is better than towards psychoacoustic goals a la MQA, for example.  Whether you agree or disagree with the digital trumps analog premise is a different matter.  (e.g. if you are an NOS believer, you would do it all in analog and maintain that is the best sounding.)

 

If you had infinite digital compute power and infinite time, you can go to an arbitrary level of "smallness" in gaps with 100% mathematical correctness on the assumption that the original digitally sampled signal was bandwidth limited (below the Nyquist rate).  i.e. if I want the "gaps" to be small enough for a 1GHz sampling rate, I can compute that.  By 100% correctness, what I mean is, you take a bandwidth limited signal (below half 44.1kHz sampling rate) and sample it at say 44.1kHz and 705.6kHz.  If I apply the reconstruction to the 44.1kHz, I can get the exact same values as your 705.6kHz (modulo any small sampling errors in ADC).  This is true for any sampling rate you choose, 1GHz, 1 gadzillion Hz.... 

 

That came from the work of Nyquist, Shannon, Whittaker....   This reconstruction is Shannon-Whittaker interpolation.  (Rob Watts and most others call it Whittaker-Shannon -- but I am biased due to historically having one of Shannon's disciples as an advisor).  I believe it was formally proven by Shannon shortly after WW2.

 

The Chord game is to get as close to that as possible, at as high an upsampling rate as possible, with today's FPGA offerings, which is a 1.016M-tap filter interpolating up to 16fs.

 

But none of this has to do with the "smallness" of the "gaps," since, as the people you mentioned proved, once you have more than 2 samples per cycle at the highest frequency of interest, there are no more gaps. 

 

Once we move from math to filtering in the real world, we have to contend with considerations like aliasing (minimized by steeper filters) and ringing (made worse by steeper filters).  My impression is that oversampling allows the filter to cut enough to help with aliasing while not causing as much of a problem with ringing as there might be without oversampling.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
7 minutes ago, Jud said:

 

But none of this has to do with the "smallness" of the "gaps," since, as the people you mentioned proved, once you have more than 2 samples per cycle at the highest frequency of interest, there are no more gaps. 

 

Once we move from math to filtering in the real world, we have to contend with considerations like aliasing (minimized by steeper filters) and ringing (made worse by steeper filters).  My impression is that oversampling allows the filter to cut enough to help with aliasing while not causing as much of a problem with ringing as there might be without oversampling.

 

The aliasing you are referring to (that is, in the oversampling/upsampling context) is incorrect filling in of the gaps because of filter design.

 

Recall that the sinc function (time domain) in Shannon-Whittaker is a perfect rectangular brickwall in the frequency domain, i.e. it is of infinite steepness.  That's why it is 100% correct, i.e. no aliasing. So I think we are in fact talking about the same thing, but perhaps from slightly different angles.

 

Of course, it is impossible to realize because it is infinite in taps..... I tried to illustrate that with the frequency domain plots in my diagrams.... how the LPF in frequency domain representation has fewer aliasing side lobes as you increase the taps...
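The flip side of that tradeoff can be put in rough numbers. A hedged sketch (my own, using scipy's Kaiser-window estimate, not anything Chord publishes) of how many taps a 16fs interpolation filter needs as you demand deeper image rejection and a narrower transition band:

```python
# Sketch: required FIR length vs. stopband attenuation and transition width
# for a filter running at 16fs (705.6 kHz). The target figures are arbitrary.
from scipy.signal import kaiserord

fs_out = 16 * 44100.0   # 705.6 kHz

for atten_db, trans_hz in [(80, 2000.0), (160, 500.0), (240, 50.0)]:
    ntaps, beta = kaiserord(atten_db, trans_hz / (fs_out / 2))
    print(f"{atten_db} dB rejection, {trans_hz:.0f} Hz transition -> ~{ntaps:,} taps")
```

Tighten either requirement and the tap count climbs; push both hard enough and the estimate runs into the hundreds of thousands of taps.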

 

(Edit: P.S.: Actually, it is theoretically possible to realize with a finite PCM file because you know it's all zeroes before/after the file... but computationally expensive...)

Link to comment
2 hours ago, The Computer Audiophile said:

I just contacted the US distributor for a review unit. I'm guessing they won't be available for a while. 

 

 

I had heard from my dealer in FL it's around the fall time frame. 

The Truth Is Out There

Link to comment

If one accepts the premise that the upsampling and filtering of the Chord M-Scaler is far superior in SQ to any software approach via server, then I would be interested in why that is and what prevents the software approach via server from duplicating that accomplishment.

Otherwise it's just technical-knowledge one-upmanship that fails to connect to reality in practice and results.

(JRiver) Jetway barebones NUC (mod 3 sCLK-EX, Cybershaft OP 14)  (PH SR7) => mini pcie adapter to PCIe 1X => tXUSBexp PCIe card (mod sCLK-EX) (PH SR7) => (USPCB) Chord DAVE => Omega Super 8XRS/REL t5i  (All powered thru Topaz Isolation Transformer)

Link to comment
2 hours ago, The Computer Audiophile said:

I just contacted the US distributor for a review unit. I'm guessing they won't be available for a while. 

 

 

But you could get the Blu MkII M-Scaler now.  In fact, it may even be easier to get, considering that its little brother (minus the transport), at a much reduced cost and with minor improvements, will be in high demand. 

(JRiver) Jetway barebones NUC (mod 3 sCLK-EX, Cybershaft OP 14)  (PH SR7) => mini pcie adapter to PCIe 1X => tXUSBexp PCIe card (mod sCLK-EX) (PH SR7) => (USPCB) Chord DAVE => Omega Super 8XRS/REL t5i  (All powered thru Topaz Isolation Transformer)

Link to comment
37 minutes ago, ElviaCaprice said:

If one accepts the premise that the upsampling and filtering of the Chord M-Scaler is far superior in SQ to any software approach via server, then I would be interested in why that is and what prevents the software approach via server from duplicating that accomplishment.

The answer is nothing.  The exact same approach used by Chord can easily be accomplished in software.  In fact, I suspect that Jussi (Miska) believes the oversampling/filters he has developed for HQPlayer are actually superior to what Chord does in the Blu/M-Scaler.  Rob Watts could certainly reproduce exactly the same thing (or even better; 2 million taps would be easy for a decent computer) by using his approach in a computer.

Chord offering these as hardware solutions is only one way to get there; there is no reason we cannot get to the same (or an even better) place through oversampling in software.  Anyone who believes differently is just fooling themselves; all that is required would be to run the same code in the software oversampling program.  Chord, of course, is a hardware company, so they offer their approach in hardware.  

If one wants to reproduce what Chord does in DAVE via software, take a look at the Stereophile test measurements of DAVE's filter response and make an analogous filter in iZotope with Audirvana+.  You will have to use iZotope's RX-6 to get a graphic representation for comparing the results of the settings, but you can get very, very close to the same response as DAVE's.
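For anyone who wants to experiment outside iZotope, the underlying technique is simply designing an FIR to match a published magnitude curve. A generic sketch (my own; the breakpoints below are invented placeholders, not DAVE's measured response) using scipy:

```python
# Sketch: build a long FIR from a handful of frequency/gain points read off a
# published magnitude plot. The breakpoints here are hypothetical placeholders.
import numpy as np
from scipy.signal import firwin2, freqz

fs = 705600.0                           # 16fs output rate
freqs = [0.0, 20000.0, 22050.0, fs / 2] # Hz, must run from 0 to fs/2
gains = [1.0, 1.0, 0.0, 0.0]            # linear gain at each breakpoint

h = firwin2(8191, freqs, gains, fs=fs)  # FIR approximating the target curve
w, H = freqz(h, worN=1 << 15, fs=fs)
print("gain at 10 kHz:", 20 * np.log10(abs(H[np.argmin(abs(w - 10000.0))])), "dB")
```

Swap in frequency/gain points read off the actual measurement and you can compare the resulting response against the published plot.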

 

Of course, a listening test requires one to listen to other aspects of the DAC's performance; sound quality is certainly not governed solely by the filter response. So how does one do comparisons?  You cannot remove the other aspects of the DAC's performance from the equation.

SO/ROON/HQPe: DSD 512-Sonore opticalModuleDeluxe-Signature Rendu optical with Well Tempered Clock--DIY DSC-2 DAC with SC Pure Clock--DIY Purifi Amplifier-Focus Audio FS888 speakers-JL E 112 sub-Nordost Tyr USB, DIY EventHorizon AC cables, Iconoclast XLR & speaker cables, Synergistic Purple Fuses, Spacetime system clarifiers.  ISOAcoustics Oreas footers.                                                       

                                                                                           SONORE computer audio

Link to comment
31 minutes ago, ecwl said:

Except I think iZotope defines 2 million taps differently than Chord does. Chord's 1-million-tap filter, used for 16fs upsampling, actually takes in about 1.41s of music (62500 digital samples) to compute each 16fs digital sample. It is my understanding that most software algorithms' tap counts are actually a "virtual" tap length, which means that if they are upsampling to 16fs, they are running 50-tap filters to go from 1fs to 2fs to 4fs to 8fs to 16fs, so 50x50x50x50 = 6.25 million. But that means they are only taking 0.00056s of music (25 digital samples) to eventually compute the final 16fs digital sample. I think Rob Watts has mentioned once that if he were to use this kind of calculation, then the M-Scaler should be thought of as having 125000x125000x125000x125000 taps = whatever...

 

So I think it is definitely possible to get a high-powered GPU and CPU to do the Chord type of computation, as sinc filters and coefficients are standard math things. The issue is that to compute each digital filtered sample, you would need to store in memory the 62500 original digital samples, do the 62500 multiplications and add up the sum, and then do it again and again using 16 different sets of coefficients a million times to get to 1 million samples from the original 62500. An FPGA can be programmed to have hundreds of cores and store this in memory and do these computations in a power-efficient manner in real time. To do this with the CPU & GPU on your PC, you would have to sort out how to manage the memory and then you would have to use your non-dedicated CPU/GPU cores to do the multiplications and additions one at a time, so it would consume a lot of power. Your question is similar to asking why there are dedicated graphics chips or dedicated neural network chips. It is not because our general CPU cannot do the same computations as the GPU or the neural network chips. It's that they are not designed to do parallel processing so would take longer and be much less energy efficient. This is all fine if you want to take every song you own and upsample it to 705.6kHz FLAC ahead of time. But it wouldn't work if you just want to stream Tidal.

 

At least that's my understanding of the issues. Although people who actually write these software upsampling programs can correct me if I'm wrong.

 

If Chord actually has hardware that will do tasks billions of times larger than can be done now in CPUs or GPUs, and/or do them billions of times faster, I suspect the UK military, Intel, Apple, Microsoft, etc., would all long since have been knocking at his door. 

 

I could be wrong, of course.  I would be curious to know what @Miska has to say, if he cares to.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
4 minutes ago, Jud said:

If Chord actually has hardware that will do tasks billions of times larger than can be done now in CPUs or GPUs, and/or do them billions of times faster, I suspect the UK military, Intel, Apple, Microsoft, etc., would all long since have been knocking at his door. 

They use FPGAs from either Altera (now Intel) or Xilinx, both of which have much bigger chips for sale.

Link to comment

I concede that I have no expertise in DAC design or electrical/computer engineering. So my understanding of filter/DAC designs, FPGA vs Neural Network chips vs CPU vs GPU is limited and potentially wrong.

 

And I appreciate that we have had a few forum members point out that I am wrong.

 

I know people with more expertise are not obliged to share their knowledge.

I guess I would appreciate it more if they could point out how I am wrong.

Link to comment
25 minutes ago, mansr said:

Whatever gave you that notion? A tap in a FIR or IIR filter is a well defined term. It can only mean one thing.

 

https://dsp.stackexchange.com/questions/8685/filter-order-vs-number-of-taps-vs-number-of-coefficients

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment
4 hours ago, ecwl said:

Except I think iZotope defines 2 million taps differently than Chord does. Chord's 1-million-tap filter, used for 16fs upsampling, actually takes in about 1.41s of music (62500 digital samples) to compute each 16fs digital sample. It is my understanding that most software algorithms' tap counts are actually a "virtual" tap length, which means that if they are upsampling to 16fs, they are running 50-tap filters to go from 1fs to 2fs to 4fs to 8fs to 16fs, so 50x50x50x50 = 6.25 million. But that means they are only taking 0.00056s of music (25 digital samples) to eventually compute the final 16fs digital sample. I think Rob Watts has mentioned once that if he were to use this kind of calculation, then the M-Scaler should be thought of as having 125000x125000x125000x125000 taps = whatever...

 

I don't know what iZotope does, but HQPlayer can do conversions in a single step, even to 512fs rate. That is much more than Chord does.

 

4 hours ago, ecwl said:

So I think it is definitely possible to get a high-powered GPU and CPU to do the Chord type of computation, as sinc filters and coefficients are standard math things. The issue is that to compute each digital filtered sample, you would need to store in memory the 62500 original digital samples, do the 62500 multiplications and add up the sum, and then do it again and again using 16 different sets of coefficients a million times to get to 1 million samples from the original 62500. An FPGA can be programmed to have hundreds of cores and store this in memory and do these computations in a power-efficient manner in real time. To do this with the CPU & GPU on your PC, you would have to sort out how to manage the memory and then you would have to use your non-dedicated CPU/GPU cores to do the multiplications and additions one at a time, so it would consume a lot of power. Your question is similar to asking why there are dedicated graphics chips or dedicated neural network chips. It is not because our general CPU cannot do the same computations as the GPU or the neural network chips. It's that they are not designed to do parallel processing so would take longer and be much less energy efficient. This is all fine if you want to take every song you own and upsample it to 705.6kHz FLAC ahead of time. But it wouldn't work if you just want to stream Tidal.

 

Don't tell that to HQPlayer, which has been doing that kind of stuff for many years, in realtime with only a CPU or with CPU+GPU... That is not a problem to do in a computer. In addition, CPUs have been able to do multiple computations per instruction for a long time. GPUs are specialized in that and can do thousands of computations simultaneously.

 

For the price of M-scaler, you can get Nvidia's Titan V:

https://www.nvidia.com/en-us/titan/titan-v/

 

In addition, HQPlayer can monitor the output, adjust parameters and do recomputations of the data multiple times if necessary, in realtime, even if you are streaming something like Tidal. Because processing runs asynchronously, unlike in DACs, this can be done.

 

P.S. I just tested setting one of my filters to a million taps and upsampling to 16x PCM; the load on my oldish quad-core Xeon E5 is less than 5% of CPU time.
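For anyone curious what that kind of load looks like outside HQPlayer, here is a minimal sketch (my own, with an arbitrary filter design; not HQPlayer's code) of a roughly million-tap, 16x upsample done with FFT convolution in scipy:

```python
# Sketch: zero-stuff 1 s of 44.1 kHz audio to 16fs and apply a ~1M-tap
# windowed-sinc low-pass via FFT convolution, then report the elapsed time.
import time
import numpy as np
from scipy.signal import firwin, fftconvolve

fs_in, up = 44100, 16
x = np.random.randn(fs_in)                     # 1 second of test "audio"

h = firwin(1_000_001, 20000.0, fs=fs_in * up)  # ~1M-tap interpolation filter
h *= up                                        # restore gain lost to zero-stuffing

xz = np.zeros(len(x) * up)                     # zero-stuff to 705.6 kHz
xz[::up] = x

t0 = time.perf_counter()
y = fftconvolve(xz, h)                         # full convolution, FFT-based
print(f"filtered 1.0 s of audio in {time.perf_counter() - t0:.2f} s")
```

FFT-based (or polyphase) convolution is what makes very long filters cheap; on a modern CPU the elapsed time should come in well under the one second of audio being processed, in line with the low CPU load reported above.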

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
5 minutes ago, Miska said:

 

I don't know what iZotope does, but HQPlayer can do conversions in a single step, even to 512fs rate. That is much more than Chord does.

 

 

Don't tell that to HQPlayer, which has been doing that kind of stuff for many years, in realtime with only a CPU or with CPU+GPU... That is not a problem to do in a computer. In addition, CPUs have been able to do multiple computations per instruction for a long time. GPUs are specialized in that and can do thousands of computations simultaneously.

 

For the price of M-scaler, you can get Nvidia's Titan V:

https://www.nvidia.com/en-us/titan/titan-v/

 

In addition, HQPlayer can monitor the output, adjust parameters and do recomputations of the data multiple times if necessary, in realtime, even if you are streaming something like Tidal. Because processing runs asynchronously, unlike in DACs, this can be done.

 

 

Perhaps off topic for this thread, but I never could figure out how to get HQP set up such that arbitrary audio, via WASAPI and also via the Windows mixer, passes through HQP and is output at 768 or 705.6 kHz to the USB device of my choice. 

 

I tried and failed. If there's a tutorial somewhere, I would find it helpful. I have other rigs to try, like the one in the office. 

 

That was what drove me to settle on a hardware solution. 

Link to comment
