
AudioQuest adds MQA support to DragonFlies via firmware



Gang,

 

A little more information...

 

Yes, DragonFly is an MQA renderer; we do not do the full decode. The first unfold is done in the application (Tidal, Audirvana, and others, as I stated above). This unfolded stream is sent to the DragonFly as MQA data, and we pass it on to an MQA library inside the DragonFly code. The library performs DSP functions on that data and sets up custom filters in the ESS DAC chip for each song, based on the MQA information for that song.
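For readers who want the division of labour in code form, here is a hypothetical sketch of the renderer split described above. All names and fields are illustrative only; the actual MQA library API is proprietary and not public.

```python
# Hypothetical sketch of the two-stage MQA playback chain on a
# renderer device. Function and field names are illustrative only.

def app_first_unfold(mqa_file):
    """Stage 1, done in the application (Tidal, Audirvana, ...):
    decode the MQA stream to 88.2/96 kHz PCM while keeping the
    MQA signalling intact for the downstream renderer."""
    return {"pcm": mqa_file["pcm"], "rate": 96000,
            "mqa_info": mqa_file["mqa_info"]}

def dragonfly_render(stream):
    """Stage 2, done on the DragonFly: the embedded MQA library
    reads the per-track info, runs its DSP, and selects the custom
    filter that is then programmed into the ESS DAC chip."""
    return {"dac_filter": stream["mqa_info"]["filter"],
            "rate": stream["rate"]}
```

The only point of the sketch is the split: the first unfold happens before USB, and the filter selection plus final DSP happen after.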

 

This gives it the highest possible quality available for the DragonFly platform.

 

Thanks,

Gordon


Plissken,

 

No, the metadata is not sent over USB in a wrapper. It's also not decompression, whatever that means.

 

Applications can send MQA material to any DAC for playback. The output just will not benefit from the MQA decoding if the DAC and application are not MQA enabled.

 

Thanks,

Gordon

 

5 minutes ago, miguelito said:

My understanding is the DF gets an 88k or 96k PCM stream unfolded by TIDAL or Audirvana - what rate is the ESS DAC upsampling to once the controller determines the upsampling parameters? Or is this a misinterpretation of the final stage of MQA rendering?

 

If you choose pass-through, then TIDAL won't be doing the first unfold I don't think - unless it still does it because it detects a DAC that has MQA rendering capabilities?

 

Yeap, A+ 3 is fabulous.

 

If you choose pass-through, then MQA is basically not enabled.

 

Why do people think this is all about upsampling and decompression?

 

Guys, if the files were compressed and required decompressing in a certain format, then how do non-MQA DACs play back MQA files?

 

Look, it's much more than what everyone is speculating about. This is probably part of the problem with companies who think MQA is a bad thing. Maybe not; they might be a lot smarter than I am. But as a musician of some 50 or more years, I can tell you this is the real deal. Lowering the noise floor is a real undertaking.

 

Anyway, before you pass judgement you should listen.

 

Thanks,

Gordon

 

7 hours ago, revand said:

 

Am I right that in the TIDAL desktop app settings, when streaming to the AudioQuest DragonFly Red, the only option that should be checked is Use Exclusive Mode to reach the highest quality available with the DragonFly Red?

Many thanks

Andras 


 

 

Correct: inside Tidal, your only setting should be Exclusive Mode.

 

MQA libraries in all the applications that support it have a database of known MQA devices. They match the interface to each device so that it is correct and everything going to that particular DAC is aligned.
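A device-database lookup of this kind might look like the following sketch. The device IDs and fields here are made up for illustration; the real MQA library's tables are not public.

```python
# Hypothetical sketch of a renderer-device database lookup.
# Vendor/product IDs and profile fields are illustrative only.
MQA_DEVICES = {
    (0x1234, 0x0001): {"name": "DragonFly Red", "max_rate": 96000,
                       "renderer": True},
}

def match_device(vid, pid):
    """Return the interface profile for a known MQA device, or a
    conservative default (no MQA rendering) for unknown DACs."""
    default = {"name": "unknown", "max_rate": 48000, "renderer": False}
    return MQA_DEVICES.get((vid, pid), default)
```

The design choice worth noting is the conservative fallback: an unknown DAC still gets plain PCM playback, just without MQA rendering, which matches how MQA files remain playable on non-MQA DACs.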

 

I would check with other companies on the pass-through option, but I think you would want this off for any MQA-capable DAC. Checking it will basically bypass the MQA content.

 

Thanks,

Gordon

1 hour ago, mansr said:

Wow, such hostility. Everything I've said about the MQA rendering process is true. I learned it by studying the actual code. I obviously don't know exactly how it is implemented on the Dragonfly, but here's what I do know:

  • The Dragonfly has a PIC32MX microcontroller based on a MIPS CPU. Anyone can open the case and see this.
  • MQA rendering in real time needs 100 MHz of CPU time on a more efficient ARM system.

Based on this, I find it unlikely that the DF microcontroller is actually performing the calculations. The PIC32MX simply doesn't have enough CPU power.

 

mansr,

 

The reason I get frustrated with you is simple. You're making assumptions about products that you either don't have or don't know enough about. In the end this will mislead users and cause false claims.

 

For instance, your claim about a 100 MHz ARM processor is very vague. There are a boatload of different ARM processors, and they vary vastly in performance and audio capabilities. I can think of M-series, A-series, and ARM9-series processors, some with great capabilities and others without. Some have high-speed USB, others only full-speed USB. Some have capable I2S, others have really poor implementations. The poor ones require significantly more MIPS than a 100 MHz part provides and would therefore perform worse than any Microchip MX processor.

 

Take, for instance, most XMOS single-core parts (what marketing calls 8-core is really 8 threads), which are the basis for a number of products in the industry. Six of those threads are used for USB and I2S, eating up almost 90% of the MIPS of those processors, so they are not really good candidates for MQA.

 

In the DragonFly line, with the Microchip MX32 processor, we wanted to make a product that was really low power and would work on all platforms, something neither the XMOS nor the ARM processors can do. When we started to work with MQA, we talked to the engineers at Microchip, and they sent us DSP algorithms written in assembler. The reason is that these processors have specific DSP functions that standard C/C++ programming would not have access to. The engineers at MQA took that source code and optimized it for this implementation.

 

MIPS don't equal MIPS when you are talking about processors. You have to look at the entire system as a whole.

 

Heck, take an i.MX7 ARM processor from NXP/Freescale and compare it to, say, an i.MX6UL. The i.MX6UL will beat the pants off the 7 because of the audio IP it has. Just as the MX270 from Microchip will beat the pants off the MX795. You can't just make blanket statements about performance and suggest you know what's going on here.

 

That leads to misinformation, and everybody who reads your posts will get confused.

 

Thanks,

Gordon

18 minutes ago, mansr said:

I measured the CPU cycles required to run the rendering code on a Cortex-A7 ARM device. It needed about 100 million cycles per second of audio. This CPU is more efficient per cycle than the MIPS M4K core in the PIC32. It also has bigger caches and better memory bandwidth. I read somewhere that the Dragonfly uses a PIC32MX270 which runs at 50 MHz. It would take one hell of an optimisation to run the upsampling algorithm on that and still have time for handling the usual tasks (USB communication etc). Assembly optimisation is something I have a great deal of experience with, for what it's worth.

 

mansr,

 

So you are admitting you don't have a DragonFly, correct? Then why are you even commenting here?

 

Actually, the A7 has the same problems as the A5 does. With the MX DSP functions you can do the multiply-and-add that filtering requires in one T-state. On the A5/A7 processors that takes far more cycles!

 

Thanks, but no thanks,

Gordon

7 hours ago, Chel2772 said:

Hello, 

 

I have updated my DF red but am confused about the bitrate display colors.

 

I thought that any MQA file should show blue, but my DF changes color according to the bitrate shown in Audirvana streaming Tidal Masters.

 

So should it shine blue for every MQA file, or does it follow the old bitrate color scheme?

 

Thanks!

 

Chel2772,

 

As I stated in my post above, the standard DragonFly colors correspond to sample rates. When the DragonFly goes purple, it is playing back via MQA.

 

If you are having a problem getting it to go purple, a couple of things may be happening:

 

1) Check that you have a HiFi/Master account under Tidal->Settings->Streaming.

2) Make sure your DragonFly is on version 1.06. You can run the Device Manager again; it will tell you that, along with the serial number and other details.

 

Thanks,

Gordon

10 hours ago, GUTB said:

Okay, so based on the comments from this thread:

 

1. MQA is unfolded to 88/96 (i.e., first unfold) by the MQA-enabled player.

2. The MQA is streamed to the DFR's controller.

3. The controller sends the stream into the Sabre with some DSP values.

4. The stream goes into the Sabre's SRC stage for SDM conversion.

5. DSP values are applied before the output stage.

 

So, BASICALLY, the DFR ***DOES NOT*** unfold the MQA stream to the master resolution.

 

Assumptions:

  • The controller doesn't perform any further unfolding of the MQA stream.
  • The Sabre doesn't perform any further unfolding prior to SRC.
  • DSP values are related to master ADC corrections.

 

No, not correct...

 

First off, why does everyone feel like there is upsampling involved here? Also, the ESS Sabre DAC is only doing its normal job with standard PCM using minimum-phase filters. Each MQA track has its own set of filters, which are then set up in the ESS DAC chip for MQA.

 

MQA does an unfold with the DragonFly processor, then sends that to the ESS DAC chip, now with the custom filter for that song.

 

~~~

 

Other MQA products may work differently... say the DAC chip does not allow the downloading of custom filters. Then that product would have to assume the responsibility of applying those custom filters in the main processor. This would require a much faster (and more current-hungry) processor.

 

The DragonFly is optimized for power so that it will work on all platforms. With this in mind, MQA and AudioQuest (and I) came up with this idea, which takes advantage of the DragonFly system without sacrificing power usage.

 

Thanks,

Gordon

6 hours ago, GUTB said:

OKAY GUYS.

 

I just compared two tracks from the same album: Led Zeppelin (Deluxe Edition), VOLUME 1 TRACK 6: Black Mountain Side.

 

I have this album in 96/24 from HDTracks, and it's available as a Master in Tidal.

 

It seems clear to me that the track played from Roon (streaming from my dedicated audio PC, connected to the same router as my broadband modem) is clearly INFERIOR to the Tidal version. Since the DFR supports 96/24, the Master version shouldn't be any higher resolution. But it seems like the MQA version has more micro-detail present, just a deeper insight into the guitar. It's as if a veil had been lifted, and it sounds more like I would expect high-resolution audio to sound. Is this the MQA source-correction magic at work? Or am I just falling for a cheap EQ trick?

 

Here's the 96/24 file. What do YOU guys think???

 

06-Black Mountain Side.flac

 

 

GUTB,

 

Thanks, I would agree with your assessment.

 

Do remember, everyone, what MQA is about and the lowering of the noise floor. If you look at the MQA site, you can see how more of the music is revealed by removing the unwanted noise associated with the track.

 

Some of the really early stuff is stunning to hear in MQA. The other nice thing is that the MQA Masters library is not limited to audiophile tracks; there is a ton of stuff for everyone to listen to.

 

I would again suggest that Mac users try out the new Audirvana 3.0. Damien did a great job on it, and it gives MQA and Tidal users a great experience.

 

Thanks,

Gordon

3 minutes ago, Jud said:

 

Can you point me to some specifics on lowering noise (in the process of remastering for MQA, I assume?) on the MQA site? And can you recommend a few of the tracks/albums you are referring to as "some of the early stuff"?

Jud,

 

You can find that information on the MQA site. It explains how MQA works.

 

http://www.mqa.co.uk/

 

Thanks,

Gordon

12 minutes ago, miguelito said:

Yes it is. And as much as I respect Gordon Rankin, he needs to be a little less dogmatic here. For example, someone used the term decompress to refer to embedded information in the PCM stream used to set such filters. And no, there is no real information other than vagaries and mumbling on MQA's pages.

 

I would like to understand how filters are tagged and selected. The MQA file must have a universal dictionary of these that the specific DAC implementations translate into the actual filters for each DAC API.

miguelito,

 

First, "decompressed" was used out of context. Tidal downloads FLAC; you don't decompress FLAC, you uncompress it. It was the wrong term and confused several people.

 

I don't know exactly how MQA determines which filter to use or how it unfolds the data. The USB stream has some identifier in it that tells the MQA library in the DragonFly that the stream is MQA, which filter to load into the ESS DAC chip, and how to unfold the data, plus DSP and other things.

 

If any of you remember Pacific Microsonics' HDCD format: what they did was embed in the LSB a signature for their custom filter and output level. MQA may be doing the same thing.
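As a toy illustration of that HDCD-style idea (this is not the actual HDCD or MQA signalling, both of which are proprietary), a control byte can be hidden in, and recovered from, the least-significant bits of 16-bit samples, where it sits at roughly -96 dBFS and is effectively inaudible:

```python
def embed_lsb(samples, signature):
    """Overwrite the LSB of each sample with one bit of an 8-bit
    signature pattern, cycling through the pattern. Each sample
    changes by at most 1 count."""
    out = []
    for i, s in enumerate(samples):
        bit = (signature >> (i % 8)) & 1
        out.append((s & ~1) | bit)
    return out

def read_lsb_byte(samples):
    """Recover the first 8 signature bits from the sample LSBs."""
    value = 0
    for i in range(8):
        value |= (samples[i] & 1) << i
    return value
```

A decoder that recognises the signature can switch in a custom filter or output level; a DAC that doesn't just plays the samples as ordinary PCM, which is why such streams stay backward compatible.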

 

Really, why do we need to know details like this? Why don't you just enjoy what we spent months creating for you?

 

Thanks,

Gordon

10 minutes ago, abrxx said:

 

I am a software developer, not a hardware guy, but it seems to me highly implausible that custom assembly code had to be written by the microcontroller's manufacturer purely to set ESS filters. Gordon has kindly divulged this info, so I can only assume that certain DSP is being done by the controller, as well as the setting of the ESS filters. As to what DSP that might be, I suggest we all go back and re-read the patent on how to implement the second and third folds. I posted the link to this in another thread.

 

One thing that is not clear to me is this issue of lowering the noise floor. Exactly what part of the MQA process is responsible for it? Does avoiding unnecessary sample-rate conversions give us a better noise floor?

abrxx,

 

In most cases, when building an efficient system around time-sensitive data, it's best to code at the lowest level. In DSP there is a standard tap function that uses pointers: it multiplies sample(x#) by coefficient(x#) and adds the result to an accumulated value. In that same function the pointers for samples and coefficients are incremented by 1. A loop (or an unroll of the length of the operation) is used for speed. The output of the tap function becomes the new sample presented to the DAC chip.

 

In C/C++/C#, this gets coded but is not compiled to the best of the processor's abilities unless specific intrinsic functions are declared for the microprocessor or DSP chip.
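The tap function described above, written in plain Python purely to show the arithmetic (the real code would be fixed-point assembler using the PIC32's multiply-accumulate instruction):

```python
def fir_tap(samples, coeffs):
    """One output sample of an FIR filter: multiply each input
    sample by its coefficient and accumulate, advancing both
    'pointers' by one each step (the tap loop)."""
    acc = 0.0
    for x, c in zip(samples, coeffs):
        acc += x * c  # one multiply-accumulate per tap
    return acc

def fir_filter(signal, coeffs):
    """Slide the tap loop along the signal; each output value
    becomes the new sample presented to the DAC."""
    n = len(coeffs)
    return [fir_tap(signal[i:i + n], coeffs)
            for i in range(len(signal) - n + 1)]
```

For example, with the simple 3-tap smoothing kernel [0.25, 0.5, 0.25], `fir_filter([1, 2, 3, 4], [0.25, 0.5, 0.25])` yields [2.0, 3.0]. The inner multiply-add is exactly the operation a DSP instruction performs in one cycle, which is the point Gordon is making about intrinsics versus plain C.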

 

In the case of the DragonFly we did this to pack as much functionality into the product as we could without sacrificing the power usage.

 

~~~~

 

As regards the noise removal, you will have to go to the MQA website to understand how that is done. The files on the server have already gone through that portion of MQA before they are downloaded by the application.

 

Thanks,

Gordon

4 minutes ago, PeterSt said:

So ... Either of this (or all) is happening :

 

 

A. Mr Rankin is not under MQA NDA and makes up more than all of us together could.

 

B. Mr Rankin didn't understand anything of what he has been doing / attempting.

 

C. MQA is hoaxing all of us, including me, but not Mr Rankin.

 

D. Everybody is genuinely co-operating, but the DF just doesn't work as intended and nobody notices.

 

 

Have fun with this multiple choice. Maybe I overlooked possibilities.

 

Peter,

 

Really, go troll somewhere else.

 

~~~

 

I am under NDA, as anyone working with MQA would be.

 

I am not an employee of AudioQuest. I am a hired designer, as I am for the other 18 companies I do work for, including Ayre, Berkeley, MBL...

 

crenca, why do you follow guys like mansr? If he is so great, why is he wasting time trying to reverse engineer and put down MQA? Why isn't he out there making a product that is better than all of this? Probably because he can't; and if he can, great, it would lend credibility to whoever he is.

 

All this bickering about "HOW" MQA works is useless, mainly because you have no idea what is done on the file side of things: what and how the files are encoded, and what is actually in the format. Oh sure, you can sit around and speculate about the possibilities, as any engineer would. But claiming to truly know what's going on just creates misinformation.

 

Look I say if you don't have a DragonFly, you shouldn't even be on this thread.

 

I have pretty much laid out how it works on the DragonFly, probably more than I should have. If you are having issues or want to know how to set it up, I would be happy to answer. Or if you are having problems, I can help you out.

 

I am not getting paid for this; I am not a spokesperson for AudioQuest or MQA. I am just an engineer who has been programming his whole life. My bio is pretty well known... I designed PCs for a living (540K at last count), wrote BIOS code in assembler, and developed ICs for communications (Ethernet, Token Ring, USB, 802.11 bridges). I left the sixth-largest hardware/software company, where I was Chief Engineer, to work on my passion, audio. Well, that and I had two Class A products in Stereophile and received product of the year in The Absolute Sound. Two jobs was one too many, and I had other products to finish: dual-DSP S/PDIF DACs, preamps, and yes, even speakers. I have designed, worked on, and sold over 165 products since then. It's been a lot of fun...

 

But really, posts like this are just too disturbing. It really doesn't make any sense, Peter, why you even say things like this. It's one of the reasons I don't frequent this forum more often.

 

Thanks,
Gordon

34 minutes ago, PeterSt said:

C. MQA is hoaxing all of us, including me, but not Mr Rankin.

 

D. Everybody is genuinely co-operating, but the DF just doesn't work as intended and nobody notices.

I think the above options remain.

 

FYI, I am pro-MQA as you are, with the difference that the options you seem to have available are different from mine. So see? Something ain't matching up.

 

 

Peter,

 

I know who you are and what you make. Also, I am sure MQA would appreciate it if you would not prod with questions like this. Anyway... to answer:

 

C) I don't think this is a hoax at all. Bob and the rest of the team at MQA discovered something truly unique years ago. Bob believed in it so much that he left Meridian to pursue it full time.

 

D) DragonFly is a renderer version of MQA. This means the first unfold is done in the application. The second unfold happens in the DragonFly processor, and the result is sent to the ESS90xx DAC chip with custom filters that match the track being played.

 

Thanks,

Gordon

 

 

Just now, citsur86 said:

I have been using the Tidal App on my MacBook and I am getting the dark purple MQA color on my DFR.  Are you sure it is only TIDAL on Windows?

 

For now, the Tidal desktop app for Windows is the only app. I think Amarra has said it will release its products for Windows and Mac next month. I know of another product, a couple of months out, that will have this support as well.

 

There will, of course, be more playback options available. If you want the app you're currently using to get MQA support, I would ask them to look into it.

 

Thanks,

Gordon

Just now, citsur86 said:

 

So then what am I actually getting when I am listening on my MacBook Pro to TIDAL Masters with DFR and it is glowing dark purple? 

 

I just answered that above... so here it is again:

 

DragonFly is a renderer version of MQA. This means the first unfold is done in the application. The second unfold happens in the DragonFly processor, and the result is sent to the ESS90xx DAC chip with custom filters that match the track being played.

 

As for the content (i.e., the original sample rate, etc.), it's best seen in Audirvana, as it spells out the data rates of the original track.

 

Thanks,

Gordon

3 minutes ago, abrxx said:

 

What if the track's original sample rate was > 192 kHz?

 

Does the DragonFly do a third unfold?

 

First, the DragonFly's max sample rate is 96 kHz, so the data going to the DAC cannot exceed that. Others here will say they know what goes on beyond this, but I will say I don't. I don't know what the application does with sample rates above the DAC's 96K threshold. Sure, I could guess, as they do, but I would rather not mislead you.

 

If the MQA LED goes purple, then you know you're getting the best possible results from the file you are playing.

 

Thanks,

Gordon

16 minutes ago, citsur86 said:

I understand the difference between a hardware and software unfold and that the first "core" unfold happens in the application in the case of TIDAL.  The DFR is not capable of doing the Core unfold though right?  So if the TIDAL application on Macbook is not doing it, how am I getting anything to the headphones from the DFR on Macbook?

 

OK, this is a bit of a confusing question... Tidal, Audirvana, and soon other renderer-capable MQA applications will do the first unfold of the track. Is that your question, or something else?

 

Thanks,

Gordon

21 minutes ago, citsur86 said:

I just found this article, which states that both Mac and PC are indeed doing the first/core/software unfold of the MQA file. Where did you hear it is Windows only? I believe that to be incorrect.

 

Sorry, but I did not say it was Windows only. Mac has both Tidal and Audirvana, which I have stated a number of times.

 

Thanks,

Gordon

11 minutes ago, citsur86 said:

So if you are using the Dragonfly Red or Black and having Tidal do the "core" unfold, and it sends the Dragonfly 96KHz, then what was the point of the update to the Dragonfly to make it able to do the second unfold if it can only go up to 96KHz?  

 

So that the DragonFly can do the second unfold and, with the MQA libraries, match the MQA filter for that track inside the ESS DAC chip.

 

Remember MQA is preserving the authenticity of the music. Think about it that way!

 

So if you are playing some 24/96 file in standard PCM format in any file type (i.e., AIFF/WAV or FLAC/ALAC), you get that on the DragonFly using the minimum-phase filters built into the ESS DAC chip.

 

If you play an MQA file, it will be unfolded in the host application, delivered to the DragonFly over USB, unfolded again by the MQA code inside the DragonFly, and presented to the ESS DAC chip with the filter that matches the track.

 

Thanks,

Gordon

8 minutes ago, mansr said:

The USB interface is limited to 96 kHz. Subsequent upsampling is done either by the microcontroller (which I doubt it has the CPU power for) or by the DAC chip itself using filter parameters specified by the MQA metadata.

Whatever the original sample rate, the MQA file contains true audio data only up to 48 kHz (and the 24-48 kHz band is heavily compressed). Anything above that is discarded at the encoding stage, and no amount of hand-waving can bring it back.

 

Again, this is why we should not listen to people who speculate about how things work. mansr, there are example upsampling code and libraries available for the MX processor. Even with MQA running, we would be able to add another 256-tap 2x (stereo) upsample if we wanted to. But that would just waste power, increase system noise, and lower the overall quality of the experience.
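For what a "2x upsample" of this kind involves, here is a minimal Python sketch: insert a zero between every input sample to double the rate, then run an interpolation FIR over the zero-stuffed stream to remove the image that stuffing creates. The 2-tap kernel below is a trivial stand-in for illustration, not a real 256-tap design.

```python
def upsample_2x(signal, coeffs):
    """Naive 2x oversampling: zero-stuff the input, then low-pass
    it with an FIR interpolation filter."""
    # Zero-stuff: [x0, 0, x1, 0, ...] doubles the sample rate.
    stuffed = []
    for x in signal:
        stuffed.extend([x, 0.0])
    # Convolve the stuffed stream with the FIR kernel (zero-padded
    # history, so the output has one sample per stuffed sample).
    n = len(coeffs)
    padded = [0.0] * (n - 1) + stuffed
    return [sum(padded[i + k] * coeffs[n - 1 - k] for k in range(n))
            for i in range(len(stuffed))]
```

With the crude averaging kernel [1.0, 1.0] this degenerates to sample-and-hold interpolation: `upsample_2x([1, 2], [1.0, 1.0])` gives [1.0, 1.0, 2.0, 2.0]. A real design would use a long (e.g. 256-tap) low-pass kernel, which is exactly why the extra multiply-accumulates cost power on a small microcontroller.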

 

Thanks,

Gordon

