
MQA technical analysis


mansr


From the graphs above, it looks like MQA works like a 24-bit uncompressed format.

 

But is MQA lossless or lossy?

 

It is definitely lossy. It's just plain impossible to pack all the high-frequency content into the bandwidth afforded by the format. That's simple information theory.
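The information-theory point is easy to put numbers on. A minimal sketch; the figure of roughly 8 low-order bits carrying the folded band is a commonly cited assumption, not something established in this thread:

```python
# Back-of-envelope check of the information-theory argument: a 24-bit/96 kHz
# stereo stream carries more raw bits per second than a 24-bit/48 kHz
# container has room for, so the folded ultrasonic band cannot fit losslessly.

def raw_bitrate(bits: int, rate_hz: int, channels: int = 2) -> int:
    """Uncompressed PCM bit rate in bits per second."""
    return bits * rate_hz * channels

hires = raw_bitrate(24, 96_000)      # the original 24/96 master
container = raw_bitrate(24, 48_000)  # the 24/48 delivery container

# ASSUMPTION: MQA is said to hide the folded band in roughly the bottom
# 8 of the 24 bits, leaving the rest as "real" baseband audio.
buried_bits = raw_bitrate(8, 48_000)

print(f"24/96 source:     {hires:,} bit/s")
print(f"24/48 container:  {container:,} bit/s")
print(f"room for the folded band: {buried_bits:,} bit/s")
print(f"shortfall vs. a lossless fold: {hires - container:,} bit/s")
```

Even before any entropy coding, the folded band has far less room than the content it would need to represent, hence "lossy" by construction.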

 

The decoded files are also substantially different from the originals (assuming the 2L MQA samples were created from the masters provided).

 

I read about a probable "frequency-amplitude response correction", but the actual implementation of the encoder/decoder is unknown.

 

It's only a matter of time before we learn how the decoder works.


My impression of the various steps in MQA decoding:

 

"first unfolding" to 88/96:

This is where -actual- file information is used to recover (lossy) information above the 44/48 kHz sample rate. This step is "universal", i.e. no tailoring to the output device.

 

"rendering" to a higher sample rate:

This is entirely an upsampling step: very little, if any, information above the 88/96 rate from the original file exists or is used. The DAC chip's profile now plays a big role, as it's really just upsampling at this point rather than reconstructing actual information.

 

Is this a possible interpretation?

 

I wonder how much, if any, the Bluesound decoder is tailored to the Bluesound DAC chain.

NUC10i7 + Roon ROCK > dCS Rossini APEX DAC + dCS Rossini Master Clock 

SME 20/3 + SME V + Dynavector XV-1s or ANUK IO Gold > vdH The Grail or Kondo KSL-SFz + ANK L3 Phono 

Audio Note Kondo Ongaku > Avantgarde Duo Mezzo

Signal cables: Kondo Silver, Crystal Cable phono

Power cables: Kondo, Shunyata, van den Hul

system pics

My impression of the various steps in MQA decoding:

 

"first unfolding" to 88/96:

This is where -actual- file information is used to recover (lossy) information above the 44/48 kHz sample rate. This step is "universal", i.e. no tailoring to the output device.

 

"rendering" to a higher sample rate:

This is entirely an upsampling step: very little, if any, information above the 88/96 rate from the original file exists or is used. The DAC chip's profile now plays a big role, as it's really just upsampling at this point rather than reconstructing actual information.

 

Is this a possible interpretation?

 

It's a possible interpretation, sure, but it's not the only one.

 

I wonder how much, if any, the Bluesound decoder is tailored to the Bluesound DAC chain.

 

So far we know that the Bluesound decoder output is bit-identical to that of the Tidal software decoder. No tailoring there.
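A bit-identity check like the one behind this claim can be sketched in a few lines, assuming both decoders' outputs were captured to uncompressed PCM WAV files (the paths in the usage note are placeholders):

```python
# Sketch of the bit-identity comparison: check that two decoder outputs
# carry exactly the same PCM payload, sample for sample.
import wave

def bit_identical(path_a: str, path_b: str) -> bool:
    """True iff both WAV files carry exactly the same PCM data."""
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        same_fmt = (a.getnchannels() == b.getnchannels()
                    and a.getsampwidth() == b.getsampwidth()
                    and a.getframerate() == b.getframerate()
                    and a.getnframes() == b.getnframes())
        return same_fmt and a.readframes(a.getnframes()) == b.readframes(b.getnframes())
```

Usage (placeholder file names): `bit_identical("bluesound_out.wav", "tidal_out.wav")`.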

So far we know that the Bluesound decoder output is bit-identical to that of the Tidal software decoder. No tailoring there.

It was not 100% clear to me, but I guess that the decoding from the Bluesound code is to a max of 96 kHz, correct? The plots all stop at 48 kHz, so I am guessing this is right.

It was not 100% clear to me, but I guess that the decoding from the Bluesound code is to a max of 96 kHz, correct? The plots all stop at 48 kHz, so I am guessing this is right.

 

It seems to double the input sample rate no matter what parameters I throw at it. The "render" part can upsample by higher factors.

It seems to double the input sample rate no matter what parameters I throw at it. The "render" part can upsample by higher factors.
Is there a render component in the Bluesound software?

If the design is as I detailed above, it would make sense from a design and software-distribution standpoint to split the library into two components: the first unfold to 88/96, which is common across the board, and a device-specific component which upsamples.

Is there a render component in the Bluesound software?

If the design is as I detailed above, it would make sense from a design and software-distribution standpoint to split the library into two components: the first unfold to 88/96, which is common across the board, and a device-specific component which upsamples.

 

There is a single library containing both decode and render parts. The decode runs first, then the output of that is optionally sent through the render part.

Can you isolate the primary upsampler's response (magnitude, phase, impulse) for a non-MQA input, and see if it is identical to Fig.6 here (Mytek HiFi Brooklyn D/A processor–headphone amplifier Measurements | Stereophile.com) and Fig.4 here (Meridian Explorer2 D/A headphone amplifier Measurements | Stereophile.com)?

 

I don't see what comparing to those graphs would tell. They don't appear related to MQA at all.


They are related. The filter is named 'MQA' on the Mytek. The interesting thing is that it looks suboptimal, regardless of which filter church you belong to, and yet the Meridian Explorer2 has a very similar, possibly identical, filter. If the Bluesound code again shows the same response (or a very similar one), then we know what sort of reconstruction filter MQA requires. That may yield insight into how the band splitting and joining is done, and what compromises were made there which may or may not impact the quality of undecoded replay compared to decoded replay.
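The measurement being proposed can be sketched as follows: push a unit impulse through a candidate 2x interpolator and FFT the result. The windowed-sinc half-band filter below is a generic stand-in of my own, not the actual Mytek/Meridian/MQA filter:

```python
# Sketch: isolate a resampler's impulse and magnitude response for
# comparison against published plots. The filter here is a toy
# windowed-sinc design, NOT the "MQA" filter under discussion.
import numpy as np

def halfband_sinc(taps: int = 33) -> np.ndarray:
    """Toy 2x-interpolation filter: windowed sinc, cutoff at fs/4."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / 2) * np.hamming(taps)
    return h / h.sum()  # normalise to unity (0 dB) passband gain

def impulse_response(h: np.ndarray, length: int = 64) -> np.ndarray:
    """Filter's response to a unit impulse after 2x zero-stuffing."""
    x = np.zeros(length)
    x[0] = 1.0
    up = np.zeros(2 * length)
    up[::2] = x  # insert zeros to double the sample rate
    return np.convolve(up, h)

def magnitude_db(h: np.ndarray, nfft: int = 1024) -> np.ndarray:
    """Magnitude response in dB, comparable to the published graphs."""
    H = np.abs(np.fft.rfft(h, nfft))
    return 20 * np.log10(np.maximum(H, 1e-12))

resp = magnitude_db(halfband_sinc())
print(f"passband gain at DC: {resp[0]:.2f} dB")
```

Running the same impulse through the Bluesound code and overlaying the two magnitude plots would answer the question directly.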

There is a single library containing both decode and render parts. The decode runs first, then the output of that is optionally sent through the render part.

So I gather the plots are after the first stage only? What does the second stage look like?

I took it to mean he's seeing the noise characteristic of the MQA renderer when playing some non-MQA 96 kHz files. No DAC I've ever seen normally generates such noise. Even the built-in upsamplers in the DAC chips are better than that.

 

I took it to mean the same thing, which is what makes no sense to me.

 

He said he has one non-MQA FLAC that produces the same noise profile when run through the renderer. My understanding is that the rendering/decoding uses metadata to reproduce (not perfectly, of course) the original hi-res file. If there are FLACs that trigger the renderer/decoder (he said he has one; it could be the only one, but that would be statistically improbable), then the renderer must be reading something in the file as metadata which, when run through the process, produces the noise profile. In the non-MQA FLAC it is not real metadata; it's some aspect of the track itself, or some byproduct introduced somewhere in the production chain, from original recording to mastering to processing on Mussi's setup.

 

The fact that not all non-MQA FLACs display this argues, to me, that it is not an inherent problem with the MQA process but something specific to the track itself. The only way to say the extra noise is due to the MQA process would be if the non-MQA FLAC he described actually is MQA, which should be easy to determine.

 

Barring that, I would say there has not been enough information presented in this thread to attribute the extra noise to MQA. That doesn't mean it isn't true, just that more investigation is needed.
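One crude way to approach the "is it actually MQA?" question: MQA buries its signalling in the low-order bits of the PCM, so the LSBs of a file decoded to PCM can be tested for structure. The agreement heuristic below is an assumption of mine, not the real MQA sync check:

```python
# Crude heuristic sketch (an assumption, NOT MQA's actual sync detection):
# plain dithered audio should have near-random, independent LSBs in each
# channel, while a buried data stream shared across channels makes the
# LSBs agree far more (or less) often than chance.
import numpy as np

def lsb_agreement(left: np.ndarray, right: np.ndarray, bit: int = 0) -> float:
    """Fraction of samples whose chosen low bit matches across channels."""
    return float(np.mean(((left >> bit) & 1) == ((right >> bit) & 1)))

# Demonstration on synthetic 16-bit data: independent LSBs agree ~50%
# of the time; identical LSBs agree every time.
rng = np.random.default_rng(0)
ch_a = rng.integers(-(1 << 15), 1 << 15, 10_000)
ch_b = rng.integers(-(1 << 15), 1 << 15, 10_000)
print(f"independent channels: {lsb_agreement(ch_a, ch_b):.3f}")  # ~0.5
print(f"shared bit stream:    {lsb_agreement(ch_a, ch_a):.3f}")  # 1.000
```

A result far from 0.5 on a real file would only be a hint, not proof, that something non-audio is encoded in the low bits.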

My impression of the various steps in MQA decoding:

 

"first unfolding" to 88/96:

This is where -actual- file information is used to recover (lossy) information above the 44/48 kHz sample rate. This step is "universal", i.e. no tailoring to the output device.

 

"rendering" to a higher sample rate:

This is entirely an upsampling step: very little, if any, information above the 88/96 rate from the original file exists or is used. The DAC chip's profile now plays a big role, as it's really just upsampling at this point rather than reconstructing actual information.

 

Is this a possible interpretation?

 

I wonder how much, if any, the Bluesound decoder is tailored to the Bluesound DAC chain.

 

For the first unfold, I agree. This is just a decompression operation that can be done in software or firmware.

 

For the second unfold, your description doesn't fully match the process described in the relevant MQA patent. Having previously downsampled while allowing ultrasonic aliasing, the renderer upsamples and reverses the originally applied FIR filter, restoring the ultrasonics along with their aliases. So it's not the same as regular upsampling.
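The "aliasing allowed" step the patent describes can be illustrated numerically. The tone, rates, and naive 2:1 decimation below are illustrative only; MQA's actual filters are not public:

```python
# Illustration of alias-permitting decimation: decimate 2:1 with no
# anti-alias filter, so a 30 kHz ultrasonic tone folds down into the
# 0-24 kHz baseband of the 48 kHz stream.
import numpy as np

fs = 96_000
t = np.arange(4096) / fs
ultrasonic = np.sin(2 * np.pi * 30_000 * t)  # above the 24 kHz Nyquist of 48 kHz

decimated = ultrasonic[::2]  # naive 2:1 decimation: aliasing permitted

spectrum = np.abs(np.fft.rfft(decimated))
alias_bin = int(np.argmax(spectrum))
alias_freq = alias_bin * (fs / 2) / len(decimated)
print(f"30 kHz tone reappears at {alias_freq / 1000:.1f} kHz in the 48 kHz stream")
# A matched decoder that knows the original filter can unfold this alias
# back toward 30 kHz -- which is why this differs from regular upsampling.
```

The 30 kHz tone folds to 48 − 30 = 18 kHz, which a plain upsampler would simply reproduce at 18 kHz rather than restore to 30 kHz.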

By graphs above, looks like MQA work like 24 bit uncompressed format.

 

But MQA lossless or lossy?

 

I read about a probable "frequency-amplitude response correction", but real implementation of encoder/decoder is unknown.

 

The terms imply a delivery method that aims to preserve all information. But MQA has always said that they do not preserve the original input: they apply DSP as part of the recording chain. That DSP is claimed to improve the way the file sounds by correcting temporal errors. That is their claim. So words such as "lossy" and "lossless" are a little inappropriate ... MQA is always lossy, but they believe that delivers a better sound. It's lossy in the same way that a crossover is lossy, or DSP room correction is.

 

That aside, there are various compression and encoding schemes used by MQA. None are mathematically lossless: bit depth is always truncated, from a mathematical point of view.

 

I know MQA uses the word "lossless" (or Bob Stuart has). I believe when pressed he talks about "audibly lossless". That's a fair concept, but a lot harder to define and test than mathematical losslessness.

The terms imply a delivery method that aims to preserve all information. But MQA has always said that they do not preserve the original input: they apply DSP as part of the recording chain. That DSP is claimed to improve the way the file sounds by correcting temporal errors. That is their claim. So words such as "lossy" and "lossless" are a little inappropriate ... MQA is always lossy, but they believe that delivers a better sound. It's lossy in the same way that a crossover is lossy, or DSP room correction is.

 

That aside, there are various compression and encoding schemes used by MQA. None are mathematically lossless: bit depth is always truncated, from a mathematical point of view.

 

I know MQA uses the word "lossless" (or Bob Stuart has). I believe when pressed he talks about "audibly lossless". That's a fair concept, but a lot harder to define and test than mathematical losslessness.

 

My understanding is that the frontend processing to "correct" ADC errors is distinct from the delivery format. I see no reason the former couldn't be delivered in a truly lossless container.

My understanding is that the frontend processing to "correct" ADC errors is distinct from the delivery format. I see no reason the former couldn't be delivered in a truly lossless container.

 

In theory, sure. They could do their DSP and then ship a massive 24/768 kHz file. Some of the publicly stated design goals of MQA are:

 

- compatible with existing consumer non-hi-rez gear (i.e. max 16/48)

- compatible with 24/48 gear

- compatible with existing entry-level hi-rez (24/96), e.g. Dragonfly, which limits the USB input rate to 24/96 (but the internal DAC can do more)

- compatible with the best DACs (i.e. super-high sampling rates)

 

I like this approach; it's a neat idea. I also live in a world where streaming mp3 is sometimes flaky. I don't want to stream massive files.

In theory, sure. They could do their DSP and then ship a massive 24/768 kHz file. Some of the publicly stated design goals of MQA are:

 

- compatible with existing consumer non-hi-rez gear (i.e. max 16/48)

- compatible with 24/48 gear

- compatible with existing entry-level hi-rez (24/96), e.g. Dragonfly, which limits the USB input rate to 24/96 (but the internal DAC can do more)

- compatible with the best DACs (i.e. super-high sampling rates)

 

I like this approach; it's a neat idea. I also live in a world where streaming mp3 is sometimes flaky. I don't want to stream massive files.

 

They could take the processed master and resample it normally to the usual delivery rates.
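That conventional alternative is straightforward. A sketch using SciPy's standard polyphase resampler; the rates are the usual delivery targets and nothing here is MQA-specific:

```python
# Sketch of "process once, then resample normally": take a processed
# 192 kHz master and downsample it to ordinary delivery rates with an
# off-the-shelf polyphase resampler. The "master" is placeholder noise.
import numpy as np
from scipy.signal import resample_poly

fs_master = 192_000
master = np.random.default_rng(0).standard_normal(fs_master)  # 1 s placeholder

deliveries = {
    96_000: resample_poly(master, 1, 2),      # 192k -> 96k
    48_000: resample_poly(master, 1, 4),      # 192k -> 48k
    44_100: resample_poly(master, 147, 640),  # 192k * 147/640 = 44.1k
}
for rate, audio in deliveries.items():
    print(f"{rate} Hz: {len(audio)} samples")
```

Each output could then go into an ordinary FLAC container, with no proprietary unfold step needed on playback.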

My understanding is that the frontend processing to "correct" ADC errors is distinct from the delivery format. I see no reason the former couldn't be delivered in a truly lossless container.

 

There is no reason that I can think of why the "de-blurring" process cannot be applied without the "Origami" process. Selling only the filters would be a tough thing, but bundling them with a lower file size makes it attractive for streaming services.

That DSP is claimed to improve the way the file sounds by correcting temporal errors. That is their claim. So words such as "lossy" and "lossless" are a little inappropriate ... MQA is always lossy, but they believe that delivers a better sound.

 

In my opinion, we have either lossless (digital without distortions) or a sound enhancer (something that modifies the digital sound).

 

Many people love vinyl sound. But it is not lossless.

 

What are the temporal errors here?

AuI ConverteR 48x44 - HD audio converter/optimizer for DAC of high resolution files

ISO, DSF, DFF (1-bit/D64/128/256/512/1024), wav, flac, aiff, alac,  safe CD ripper to PCM/DSF,

Seamless Album Conversion, AIFF, WAV, FLAC, DSF metadata editor, Mac & Windows
Offline conversion save energy and nature

There is no reason that I can think of why the "de-blurring" process cannot be applied without the "Origami" process. Selling only the filters would be a tough thing, but bundling them with a lower file size makes it attractive for streaming services.

 

Or, put another way: sell the "de-blurring" to the likes of Merging (Pyramix) and the recording studios pay; sell it to the DAC and playback-software vendors and everybody pays. Which do you think Mr. Stuart likes better?

It seems to double the input sample rate no matter what parameters I throw at it. The "render" part can upsample by higher factors.

So are you running two pieces of code, one for the first "unfold" and one for "rendering"? Sorry, but I am a bit confused about this.

So are you running two pieces of code, one for the first "unfold" and one for "rendering"? Sorry, but I am a bit confused about this.

 

The library contains two functions with names similar to "decode" and "render". I made one test program for each of them, storing the output of "decode" in a file. Both functions could of course be called one after the other in a single program, but I wanted to look at the intermediate data anyway.
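A test harness along these lines might look like the following. The library path, the function names `mqa_decode`/`mqa_render`, and their signatures are hypothetical placeholders; the post only says the real names are *similar to* decode and render:

```python
# HYPOTHETICAL sketch of the two test programs described above: one runs
# the library's decode stage and dumps the intermediate file, the other
# feeds that file to the render stage. All names and signatures here are
# placeholders, not the library's real API.
import ctypes

def call_stage(lib_path: str, func_name: str, in_path: str, out_path: str) -> None:
    """Load the shared library and run one stage, file in, file out."""
    lib = ctypes.CDLL(lib_path)
    func = getattr(lib, func_name)
    # Assumed signature: int stage(const char *infile, const char *outfile)
    func.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
    func.restype = ctypes.c_int
    rc = func(in_path.encode(), out_path.encode())
    if rc != 0:
        raise RuntimeError(f"{func_name} returned {rc}")

# Usage mirroring the described workflow (all paths are placeholders):
# call_stage("libmqa.so", "mqa_decode", "track.mqa.flac", "intermediate.pcm")
# call_stage("libmqa.so", "mqa_render", "intermediate.pcm", "rendered.pcm")
```

Splitting the stages this way is what makes the intermediate 88/96 data available for inspection, as described.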
