
MQA at CES



I get that part. But MQA will market decoders because MQA-decoded material sounds better, not because FLAC will sound worse if you don't get decoding capabilities. The "I don't care what the decoded file sounds like" is the part I'm not getting.

 

The "I don't care what the decoded file sounds like" (with MQA decoding) is probably (he can of course clarify for himself) based on the fact he will never incorporate a proprietary/closed magic box format into his ecosystem (personal or his software business) for a host of reasons, not the least of which is $financial$.

 

By the way, if what Miska is saying is true, this appears to contradict Meridian's explicit promise that an MQA-encoded file looks like standard 16/44 PCM to a standard PCM DAC (and thus has essentially the same SQ). OR it could be an artifact of these particular recordings in this A/B comparison, so more of these will obviously need to be done...

Hey MQA, if it is not all $voodoo$, show us the math!

Link to comment

For those of you who might not be seeing the implications of a closed/proprietary format, let me throw a couple of analogies out there. MQA is not just another audio "product", like a speaker or amp or cable. It is not even a new source, like FM radio or streaming in the sense of Apple or Tidal. It is potentially something that could - depending on market direction - undergird all of these things.

 

We are able to see and post on this forum because of something called TCP/IP. If I wanted to sell you a new router, or computer, or perhaps become your ISP, that would be one thing (and you could easily choose another product or service). If, however, I patented a new version of TCP/IP (and this new version became adopted by the market), obscuring its inner workings with "intellectual property", that would be something else altogether. Of course, this new version of TCP/IP would be faster, better, have more security, and promise end-user "authentication" to make sure they were old enough to download porn (or that the porn is at least "the original" ;) ).

 

Perhaps some of you are "car guys". MQA is not yet another make or model, like a Ford or a Honda. It is more like the road, or the gas. What if I told you that you were going to have to pay a toll on every road you traveled, or that your car from now on was going to run on hemp oil, and you could only buy "authenticated" hemp oil from one seller?

 

We tend to underappreciate these things because they are the ground we walk on while we are looking at other things. Standard PCM encoding is sort of like the ground - what happens when one company comes up with an "improved" ground, patents it, and therefore controls it? You no longer own the ground on which to build your audio house (which is made up of things like speakers, amps, streaming services, etc.).

 

Now, following this post there will be at least half a dozen comments as to why my analogies don't hold up - and they don't; they have holes big enough to drive Wilson speakers through. But that is not the point: there is an important element of truth in them, and it is that truth we should keep in mind when thinking about MQA...

Hey MQA, if it is not all $voodoo$, show us the math!

Link to comment
OK, let's for a moment try to understand the bold part. What exactly in the MQA process is able to compensate for the losses introduced in the first ADC steps? How can they "remove the ADC signature" or whatever... That's a pretty bold claim that needs some clear explanation, in terms the average music lover and audiophile can understand... so far I have not found any explanation...

 

The clearest example would be those 2L recordings. 2L still has all the original equipment they have used going back to 1993. Meridian can test it to see what the transfer function is. I know Mr. Stuart has mentioned impulse tests, etc. So in the simplest terms, they could fix response that wasn't perfect. They also say they can undo the damage of time-smearing filters and reproduce what the signal was before it was digitized, then digitize it anew with filters that smear time less. If they do this to the point of truly knowing what the original analog signal was, it would be more involved. In simple terms, that is the claim.

 

In cases where they don't know or don't have the original equipment, it is possible to look at the results, recognize the general filter type used, and compensate backwards. Their exact process for this is unknown in detail, to say the least. At least in theory I don't see that as impossible, though how well they can manage it is something I would like to see examples of, just to convince me. Like taking an old Sony PCM tape unit, sending a known signal through it, then showing me the messed-up version and how close to the original the MQA version becomes.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment
They also say they can undo the damage of time-smearing filters and reproduce what the signal was before it was digitized, then digitize it anew with filters that smear time less.

 

That's called an apodizing filter, and it's nothing new. They have themselves been doing it for a long time in their own equipment.

 

What you cannot do is put back something that the filter or quantization has removed, no matter how much you inspect the original equipment. If the information is not there, it is lost for good. For hires, you need hires.
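To illustrate the point, here is a toy example (my own sketch in Python, nothing to do with MQA's or HQPlayer's code): a 30 kHz tone brickwall-filtered at 22.05 kHz leaves essentially nothing behind, so no downstream processing can recover it.

```python
# Toy demonstration: content removed by a brickwall filter is gone for good.
import numpy as np
from scipy import signal

fs = 88200
t = np.arange(fs) / fs                     # one second of samples
x = np.sin(2 * np.pi * 30000 * t)          # a 30 kHz tone, above the cutoff

taps = signal.firwin(1023, 22050, fs=fs)   # linear-phase FIR low-pass at 22.05 kHz
y = signal.lfilter(taps, 1.0, x)

print("energy in: ", np.sum(x ** 2))       # ~44100
print("energy out:", np.sum(y ** 2))       # tiny -- the tone is essentially gone
```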

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
That's called an apodizing filter, and it's nothing new. They have themselves been doing it for a long time in their own equipment.

 

What you cannot do is put back something that the filter or quantization has removed, no matter how much you inspect the original equipment. If the information is not there, it is lost for good. For hires, you need hires.

 

Yes, apodising it is, though they have implied it is more than that as well. I also don't believe they are claiming they can make hires out of redbook. What they are claiming is that they can more fully realize what redbook or even hires can accomplish: that some information has been compromised in the conversion, which they can unravel to come closer to what the conversion should have been. Not adding lost info, just straightening out existing info. Again, without more transparency we are left mostly guessing.

 

Let me ask you a question you might be able to answer, as it exceeds my knowledge of the matter. If I gave you a music file, told you it was recorded using a conventional FIR brickwall filter, and described the filter parameters and tap length, would it be possible for you to process it and give me back the file that would have resulted had one of your good filters been used instead? For that seems to be their claim.
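For what it's worth, here is a sketch of what such compensation could look like in principle when the original filter is known exactly (my own illustration, not MQA's process): divide out the old filter's in-band response and apply the preferred filter instead. It can only reshape what survived the first filter; the stopband stays gone.

```python
# Sketch: swap a known FIR filter's in-band response for a preferred one.
# The frequency-domain division here is circular -- fine for an illustration.
import numpy as np
from scipy import signal

fs = 44100
bad_taps = signal.firwin(511, 20000, fs=fs)   # the known long brickwall filter
good_taps = signal.firwin(63, 20000, fs=fs)   # the preferred, gentler filter

def recompensate(x, old_taps, new_taps, eps=1e-4):
    n = len(x)
    X = np.fft.rfft(x)
    H_old = np.fft.rfft(old_taps, n)
    H_new = np.fft.rfft(new_taps, n)
    H_inv = np.zeros_like(H_old)
    usable = np.abs(H_old) > eps    # don't divide where the old filter removed everything
    H_inv[usable] = 1.0 / H_old[usable]
    return np.fft.irfft(X * H_inv * H_new, n)

# Usage:
# y = signal.lfilter(bad_taps, 1.0, x)          # what the "bad" chain produced
# x_fixed = recompensate(y, bad_taps, good_taps)
```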

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment
…I believe Tidal is a pretty big deal to many of us, and the fact that they are (as of this hour - anyone at CES right now who can report on the latest developments??) going to switch over from a de facto open standard (of the last 30+ years: CD 16/44) to a proprietary/closed "magic box" solution is worth a bit of discussion, to say the least...

 

Okay, I get that part — Tidal's a big deal to me, too — but it's still probably better to see what exactly Tidal is streaming for MQA when they start doing it, and how the Tidal (or in my case, Roon) software decoder handles it.

 

I understand Miska's perspective, I think (up to the limits of my overall technical knowledge), and while his objections are valid and could affect me and other HQP users, we're now talking about a small subgroup of a small subgroup of the potential MQA "audience." For the time being, at least, I'm going to hope that the MQA software decoder used in Tidal and Roon will unfold MQA files to some flavor of standard FLAC that's at least Red Book quality. I can't predict success, but I'm pretty certain that the vocal cadre of Roon/HQP users will put as much pressure as they can on the Roon team to ensure that HQP continues to work at least as well as it currently does with Tidal via Roon (which is pretty darn great, BTW).

 

--David

Listening Room: Mac mini (Roon Core) > iMac (HQP) > exaSound PlayPoint (as NAA) > exaSound e32 > W4S STP-SE > Benchmark AHB2 > Wilson Sophia Series 2 (Details)

Office: Mac Pro >  AudioQuest DragonFly Red > JBL LSR305

Mobile: iPhone 6S > AudioQuest DragonFly Black > JH Audio JH5

Link to comment

I will quit my Tidal subscription if they start using MQA for all their "HIFI" streams, because it is much worse than RedBook. Then I might as well keep on using Spotify (to which I'm also subscribed) at about the same quality, but half the price and with more content.

 

Really?

 

Now you sound very anti-MQA. Is this based on your listening to a pure downloaded MQA vs. redbook from 2L? Which to me is the only way at the moment to "verify correctly".

 

I do not have the technical knowledge to argue against your analysis posted here:

Some analysis and comparison of MQA encoded FLAC vs normal optimized hires FLAC - Blogs - Computer Audiophile

 

But how can you do a correct analysis without MQA decoding available?

Would you take the time to explain the basics?

 

If you can't hear the difference between Tidal and Spotify, well..... (aka CD vs. MP3).

And you even claim that a future hi-res coded MQA stream is worse than today's offered quality, before the service even goes online.....

 

At the moment we may not even know 100% for sure whether MQA redbook will be 16- or 24-bit. Only the fact that 2L converts 16 to 24.

Link to comment
OR it could be an artifact of these particular recordings in this A/B comparison, so more of these will obviously need to be done...

 

I downloaded and checked every single one of the MQA files and the corresponding DXD files from here:

2L High Resolution Music .:. free TEST BENCH

 

And every single one of them exhibits the same pattern...

 

 

I'm doing some guessing, but I doubt the MQA decoder expands the dynamics of the baseband downwards, even less that it removes the HF noise. It probably just expands the frequency band upwards, leaving the baseband intact.

 

After some speculation and playing with two gentle noise shapers I have, I can cut the original to 352.8/8 with one and 352.8/10 with the other before the 0 - 22.05 kHz band begins to look like the MQA FLAC. Meaning that I would be left with 14 - 16 bits of each 24-bit sample for encoding the band above 22.05 kHz...
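For anyone who wants to try that kind of experiment themselves, here is a minimal sketch of noise-shaped requantization (my own code; the actual noise shapers involved are surely more sophisticated): TPDF dither plus first-order error feedback, which pushes the quantization noise up toward Nyquist.

```python
# Minimal noise-shaped requantizer: TPDF dither + first-order error feedback.
import numpy as np

def requantize(x, bits, seed=0):
    """x: float samples in [-1, 1). Returns samples on a `bits`-bit grid."""
    rng = np.random.default_rng(seed)
    q = 2.0 ** -(bits - 1)                        # quantizer step size
    tpdf = (rng.random(len(x)) - rng.random(len(x))) * q
    out = np.empty_like(x)
    err = 0.0
    for i, s in enumerate(x):
        v = s - err                               # feed the previous error back
        out[i] = np.round((v + tpdf[i]) / q) * q  # dithered quantization
        err = out[i] - v                          # error shaped into the next sample
    return out

# e.g. y = requantize(samples_from_a_352k8_file, 8)   # a "352.8/8" cut
```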

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
I downloaded and checked every single one of the MQA files and the corresponding DXD files from here:

2L High Resolution Music .:. free TEST BENCH

 

And every single one of them exhibits the same pattern...

 

 

I'm doing some guessing, but I doubt the MQA decoder expands the dynamics of the baseband downwards, even less that it removes the HF noise. It probably just expands the frequency band upwards, leaving the baseband intact.

 

After some speculation and playing with two gentle noise shapers I have, I can cut the original to 352.8/8 with one and 352.8/10 with the other before the 0 - 22.05 kHz band begins to look like the MQA FLAC. Meaning that I would be left with 14 - 16 bits of each 24-bit sample for encoding the band above 22.05 kHz...

 

The patent shows them using bits 13-16 for that along with the other info in the lower 8 bits.
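As a rough illustration of that sort of packing (a hypothetical layout to show the idea only, not the actual bit assignments from the patent):

```python
# Hypothetical illustration of burying auxiliary data in the low bits of a
# 24-bit word (NOT the actual MQA layout): the top bits remain ordinary PCM
# for legacy decoders, the bottom bits carry encoded extra content.
def pack(pcm16: int, aux8: int) -> int:
    return ((pcm16 & 0xFFFF) << 8) | (aux8 & 0xFF)

def unpack(word24: int) -> tuple[int, int]:
    return (word24 >> 8) & 0xFFFF, word24 & 0xFF

assert unpack(pack(0x1234, 0xAB)) == (0x1234, 0xAB)
```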

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment
...., the MQA version has background noise at best at the 16-bit level in some frequency areas, but above 15 kHz it begins to drop significantly below that of 16-bit. So compare noise levels especially between 15 - 20 kHz.

 

How do you explain where the noise went after that MQA file is decoded to 24/352.8?

How can a "noisey"24/44.1 convert to a much better 24/352.8.

 

Can your analysis SW have a bug? :D

 

Does your software analyze after the DAC, as your ears do?

(Because it is the digital signal you are looking at?)

Link to comment
Now you sound very anti-MQA. Is this based on your listening to a pure downloaded MQA vs. redbook from 2L? Which to me is the only way at the moment to "verify correctly".

 

No, this is based on my listening to an MQA vs. a RedBook file I produced with my own tools from the original 2L DXD files, and especially to the hires FLAC I also produced with my own tools from the DXD source, which is both smaller than the MQA file and can be fully decoded by all standard FLAC decoders...

 

So where is the bandwidth saving for the streaming providers, and why do we need to pay for a decoder instead of using the current free and open standard?

 

But how can you do a correct analysis without MQA decoding available?

 

I'm not interested in having MQA decoding; I'm interested in keeping my current Tidal quality without degradation!

 

And I also wanted to argue that you can stream the content in full with standard FLAC while using the same or less bandwidth than MQA takes.

 

And you even claim that a future hi-res coded MQA stream is worse than today's offered quality, before the service even goes online.....

 

It is not hires without MQA; it is even less than what RedBook is today, unless you pay for MQA.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment

Yes, apodising it is, though they have implied it is more than that as well. I also don't believe they are claiming they can make hires out of redbook. What they are claiming is that they can more fully realize what redbook or even hires can accomplish: that some information has been compromised in the conversion, which they can unravel to come closer to what the conversion should have been. Not adding lost info, just straightening out existing info. Again, without more transparency we are left mostly guessing.

 

None of which should require proprietary encoding/decoding, no?

Link to comment
So essentially one gets about noise-shaped 44.1/12 out of the FLAC without an MQA decoder. And I believe that also becomes the baseband of the decoded result, as-is.

 

That would be 44.1/12 with MQA (without proprietary decoding) vs. 44.1/16 from standard CD/Tidal today, correct?
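For a rough sense of the gap, the ideal-quantizer formula SNR = 6.02 x N + 1.76 dB puts numbers on it (my own back-of-the-envelope, ignoring any noise-shaping credit):

```python
# Ideal quantizer SNR with flat dither, no noise-shaping credit:
for bits in (12, 16):
    print(f"{bits:>2}-bit: {6.02 * bits + 1.76:5.1f} dB")
# 12-bit:  74.0 dB
# 16-bit:  98.1 dB   -> roughly a 24 dB higher noise floor without decoding
```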

 

edit: I just want to add that I would be interested in your method applied to other recordings than just the ones from 2L, since they are obviously an "early adopter" and may have taken some missteps with their process. However, only 2L has released MQA-encoded files into the wild so far as far as I know (can anyone point us to others?), which is sort of remarkable in and of itself given that Meridian is still "finalizing" its product and/or strategy. Meridian reminds me of my favorite college football team, always fumbling the ball... :)

Hey MQA, if it is not all $voodoo$, show us the math!

Link to comment
None of which should require proprietary encoding/decoding, no?

 

Without knowing more about what they do, I couldn't say. One could certainly do at least what I describe and get similar results, yes.

 

What I don't see is how MQA can help at all with more than 99% of recordings: things that have been through multiple layers of mixing, processing, compression, etc. You can't do anything with digital filtering to fix all of that, or even know what the original filtering was, as far as I can see. Too much has been mucked up in the middle to work backwards from it in any greatly beneficial way.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment
How do you explain where the noise went after that MQA file is decoded to 24/352.8?

How can a "noisey"24/44.1 convert to a much better 24/352.8.

 

I'm pretty sure now it doesn't go anywhere; it'll stay there. And it won't decode bit-perfect back to the original 352.8/24.

 

Can your analysis SW have a bug? :D

 

I checked with two different pieces of software on two different machines. And I'm pretty sure there's no bug that would recognize a file as MQA-encoded FLAC and make the results look worse for those... ;)
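For anyone who wants to check independently, this kind of noise-floor comparison is easy to reproduce. A minimal sketch (my illustration, not the actual analysis software; file names are placeholders and the soundfile and scipy packages are assumed):

```python
# Compare noise power in the 15-20 kHz band between two files.
import numpy as np
import soundfile as sf
from scipy import signal

def band_noise_db(path, lo=15000, hi=20000):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x[:, 0]                              # analyze one channel
    f, psd = signal.welch(x, fs, nperseg=65536)  # averaged power spectral density
    band = (f >= lo) & (f <= hi)
    return 10 * np.log10(np.mean(psd[band]))

print("MQA FLAC    :", band_noise_db("2l_track_mqa.flac"))      # placeholder name
print("RedBook FLAC:", band_noise_db("2l_track_redbook.flac"))  # placeholder name
```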

 

Does your software analyze after the DAC, as your ears do?

(Because it is the digital signal you are looking at?)

 

I also have audio analyzers, so I can measure DAC outputs too; that's what I do all the time while developing software. But for this, I know I don't need to measure in the analog domain to know what the result is.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
Without knowing more about what they do, I couldn't say. One could certainly do at least what I describe and get similar results, yes.

 

What I don't see is how MQA can help at all with more than 99% of recordings: things that have been through multiple layers of mixing, processing, compression, etc. You can't do anything with digital filtering to fix all of that, or even know what the original filtering was, as far as I can see. Too much has been mucked up in the middle to work backwards from it in any greatly beneficial way.

 

Here is part of an interview from TAS where Bob Stuart speaks somewhat to your questions:

 

What’s wrong with the current digital chain from the source master to the playback device? What are some of the problems in current digital that MQA addresses?

 

Well, the first problem is with the question, specifically what you mean by the “master”? The idea that the master is a digital file that came out of the A-to-D is a descriptor that is used pedantically by the audiophile community but loosely in the studio. In the MQA world, we suggest that the master is actually the sound that created the file in the first place. Getting access to that is the most important thing. We want access to it without it being polluted by what the A-to-D converter does to it, and we want to play it back without the pollution of the D-to-A. The playback chain has problems, but so does the recording chain. Typical analog-to-digital conversion has concepts embedded that are not ideal from the point of view of the human listener seeking high resolution.

A purely academic exercise could have created entirely new A-to-D and D-to-A converters that would be fantastic. But pragmatically, there are hundreds of millions of DAC chips out there. You’re not going to go to Apple and say, “I need you to change the chip in my iPhone.” That’s not going to happen. So we worked out how to get the best out of those DACs that are already out there.

When the engineers listen in the studio, their DAC is almost certainly not your DAC. You as a listener can’t hear it as it was heard in the studio. MQA is not only about accessing the sound inside the file, but also managing the DAC in the studio and in the decoder on your device at home to produce a much closer sound to each other. We’re actually drilling upstream to the analog sound in the studio and downstream to the analog sound in your playback device. What we’re trying to do, conceptually, is directly connect together the modulators at both ends—the high-speed delta-sigma modulators in the A-to-D and the DAC. That’s the essence of a large step forward in transparency and accuracy, because when you do that, it all sounds more like the original analog sound.

If you’re in the recording studio you have access to the microphone feed, or the analog tape recorder, and can compare it to digital. We’ve been to scores of studios that tell us if they take an analog signal and feed it into the A-to-D and then straight into the D-to-A, what comes out the other end doesn’t sound like what goes in. It doesn’t because of the brickwall digitizing process, which creates pre-echo, post-echo, quantization of the wrong type, arithmetic noise, and temporal blur. Because of the pre-ringing and post-ringing you have to wait a long time to find out when a transient happens. That is unnatural because it’s too loosely connected to the natural world of sound.

We’ve designed MQA so that doesn’t happen. In fact, there’s no pre-ring and basically no post-ring, and everything’s compact and tight in the time domain.

Link to comment
Here is part of an interview from TAS where Bob Stuart speaks somewhat to your questions:

 

What we’re trying to do, conceptually, is directly connect together the modulators at both ends—the high-speed delta-sigma modulators in the A-to-D and the DAC.

 

352.8k is not going to be enough for that; modulators don't run at such slow speeds. Also, about one third of the sound of a DAC comes from the modulator. I like to replace the modulator too, not just the digital filter. So I run digital filters up to 24.576 MHz and feed my own modulators with that... a DSD stream comes out.
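For scale, simple arithmetic on the rate mentioned:

```python
# 24.576 MHz relative to the two common base rates:
rate = 24_576_000
print(rate / 48_000)   # 512.0 -> exactly 512 x 48 kHz
print(rate / 44_100)   # ~557.3 -> not an integer multiple of 44.1 kHz
```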

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
Here is part of an interview from TAS where Bob Stuart speaks somewhat to your questions:

 

Actually, it speaks to my question not at all. I understand his description as it relates to a situation like 2L, which did its mixing, and little or no processing, in analog before hitting the ADC. And they know the ADC used. So nothing happened between ADC and DAC.

 

My question is about cases where compression was applied at several different points after the ADC, plus delay, reverb, multiple channels from different DACs mixed together, limiting, etc., etc. Most music is heavily, heavily processed. I see no way you can take the master tape, or even the mix tape, and get back to the other end.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment

 

So essentially it is 17-bit after decoding, which is what they begin with. After a bit of "mastering analysis" of the 2L content, I ended up selecting a 120 kHz sampling rate and 18-bit resolution to preserve everything (all frequency harmonics and all dynamics). Encoding that as standard FLAC results in a file smaller than the MQA file. And using the more typical sampling rate of 176.4 kHz (which leaves ~30 kHz of unused bandwidth) with 18-bit TPDF-dithered samples in a zero-padded 24-bit container results in a completely typical FLAC that is a very tiny bit larger than the MQA file (17.0 MB vs. 16.7 MB).
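A minimal sketch of how one could reproduce that kind of comparison with open tools (my own illustration; file names are placeholders and the soundfile and scipy packages are assumed):

```python
# Resample DXD to 176.4 kHz, TPDF-dither to 18 bits in a 24-bit container,
# write standard FLAC, and compare file sizes against the MQA FLAC.
import os
import numpy as np
import soundfile as sf
from scipy import signal

x, fs = sf.read("2l_track_dxd.flac")                 # 352.8 kHz source (placeholder)
y = signal.resample_poly(x, up=1, down=2, axis=0)    # 352.8k -> 176.4k

q = 2.0 ** -17                                       # 18-bit step for [-1, 1) audio
rng = np.random.default_rng(0)
tpdf = (rng.random(y.shape) - rng.random(y.shape)) * q
y18 = np.clip(np.round((y + tpdf) / q) * q, -1.0, 1.0)

sf.write("2l_track_176k_18bit.flac", y18, fs // 2, subtype="PCM_24")
print(os.path.getsize("2l_track_176k_18bit.flac") / 1e6, "MB")
```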

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment

It all seems to me to be too little, too late. With the rapid increase in storage capabilities and the trend toward much faster download speeds, why do we need another proprietary format that still isn't quite as good as well-implemented high-resolution LPCM and DSD recordings?

Besides which, given that many companies baulked at paying licensing fees to Sony etc., I can't see too many companies wishing to pay royalties to Meridian for a format that is never likely to become mainstream!

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
It all seems to me to be too little, too late. With the rapid increase in storage capabilities and the trend toward much faster download speeds, why do we need another proprietary format that still isn't quite as good as well-implemented high-resolution LPCM and DSD recordings?

Besides which, given that many companies baulked at paying licensing fees to Sony etc., I can't see too many companies wishing to pay royalties to Meridian for a format that is never likely to become mainstream!

 

Come on, Alex, you didn't listen to the hype. ;)

 

This process gets us right back to the analog signal before it was digitized, giving perfect analog playback connected end to end. Let me see, perfect sound... where have we heard that before?

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment

...Meridian claims that, first, audio is encoded to MQA at the A/D stage. Then, obviously, we need to edit that captured audio in the studio. All editors work with PCM, so for editing we need to decode first, then edit and mix, and after that re-encode to MQA for streaming; and after transmission/delivery and decoding from MQA to some form of PCM, a listener pops up and says: this file sounds better! Better than what? Better than the original? Better than PCM?

 

And the MQA lobbyists (7digital) are busy right now with the work of encoding entire HiRes catalogs to MQA - where the sound was captured as usual, not via an MQA encoder. Again, the famous listener says: this sounds better...

 

I want to laugh like MQA does on their website - MQA | Home - but without the writing on the video...

Sorry, English is not my native language.

Fools and fanatics are always certain of themselves, but wiser people are full of doubts.

Link to comment
