
JRiver vs JPLAY Test Results


mitchco

Recommended reading first: my FLAC vs WAV post. The reason is that I am not going to reiterate the baseline components and measurements of my test gear already covered in that post.

 

Here is a high level block diagram of my test setup:

 

JRivervsJPLAYtestsetup.jpg

 

On the left side is my HTPC with both JRiver MC 17 and JPLAY mini installed. The test FLAC file is the same Tom Petty and The Heartbreakers, Refugee at 24/96 that I have been using for my FLAC vs WAV tests.

 

JRiver is set up for bit perfect playback with no DSP, resampling, or anything else in the chain, as per my previous tests:

 

JRiversettings.jpg

 

JPLAY mini is set up in Hibernate mode with the following parameters:

 

JPlayhibernate.jpg

 

On the right-hand side of the diagram, I am using Audio DiffMaker to record the analog waveforms from the Lynx L22 analog outputs of my playback HTPC. All sample rates for the tests are 24/96.

 

Here is the differencing process used by Audio DiffMaker:

 

AudioDiffMakerProcess.jpg

 

Audio DiffMaker comes with an excellent help file that is worth reading in order to get repeatable results. One tip is to ensure both recordings start within a second of each other.
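The core of this differencing process can be sketched in a few lines of Python. This is a simplification for illustration only, not DiffMaker's actual algorithm: align the two recordings to whole-sample accuracy via cross-correlation, neutralize level differences with a least-squares gain, subtract, and report the residual relative to the reference in dB.

```python
import numpy as np

def residual_db(reference, comparison):
    """Align, level-match, subtract; return residual level vs reference in dB."""
    # Whole-sample alignment via cross-correlation (DiffMaker also does
    # fractional-sample alignment, which this sketch omits)
    corr = np.correlate(comparison, reference, mode="full")
    lag = int(corr.argmax()) - (len(reference) - 1)
    comparison = np.roll(comparison, -lag)  # circular shift: fine for a sketch
    # Least-squares gain match to neutralize simple level differences
    gain = np.dot(reference, comparison) / np.dot(comparison, comparison)
    difference = reference - gain * comparison
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(difference) / rms(reference))
```

A result around -90 dB means the difference track is buried 90 dB below the music itself.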

 

As an aside, this software can be used to objectively evaluate any change you make to your audio playback chain, whether that is an SSD, power supply, DAC, interconnects, or, of course, a music player.

 

My assertion is that if you are audibly hearing a difference when you change something in your audio system (e.g. in ABX testing), the audio waveform must have changed, and if it has changed, it can be objectively measured. I find a direct correlation between what I hear and what I measure, and vice versa. I want a balanced view across subjective and objective test results.

 

First, I used JRiver as the reference and I recorded about 40 seconds of TP’s Refugee onto my laptop using DiffMaker. Then I used JPLAY mini, in hibernate mode, and recorded 40 seconds again onto the laptop. I did this without touching anything on either the playback machine or the recording laptop aside from launching each music player separately.

 

Just to be clear about what is going on: the music player loads the FLAC file from my hard drive, performs the digital-to-analog conversion, and passes the signal through the analog line output stage. I go from the balanced outputs of the Lynx L22 to the unbalanced inputs on my Dell, through its ADC, and the result is recorded by Audio DiffMaker.

 

Clicking on Extract in Audio DiffMaker to get the Difference produces this result:

 

JRivervsJPlayminihib.jpg

 

As you can see, it is similar to when I compared FLAC vs WAV. What the result is saying is that the Difference signal between the two music players is at -90 dB. I repeated this process several times and obtained the same results.

 

You can listen to the Difference file yourself as it is attached to this post. PLEASE BE CAREFUL as you will need to turn up the volume (likely to max) to hear anything. I suggest first playing at a low level to ensure there are no loud artifacts while playing back and then increasing the volume.

 

As you can hear for yourself, there is a faint trace of the music that nulls out completely halfway through the track and slowly drifts back to being barely audible at the end.

 

According to the DiffMaker documentation, this is called sample rate drift and there is a checkbox in the settings to compensate for this drift.

 

“Any test in which the signal rate (such as clock speed for a digital source, or tape speed or turntable speed for an analog source) is not constant can result in a large and audible residual level in the Difference track. This is usually heard as a weak version of the Reference track that is present over only a portion of the Difference track, normally dropping into silence midway through the track, then becoming perceptible again toward the end. When severe, it can sound like a "flanging" effect in the high frequencies over the length of the track. For this reason, it is best to allow DiffMaker to compensate for sample rate drift. The default setting is to allow this compensation, with an accuracy level of "4".”

 

Of course this makes sense, as I used a different computer to record than to play back, and I did not have the two sample rate clocks locked together. The DiffMaker software recommends compensating for this, which is just as well, because I have no way of syncing the sample rate clock on the Dell with my Lynx card.
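As a toy illustration of what that drift compensation does (my own hypothetical sketch, not DiffMaker's algorithm), one can model a constant clock-rate mismatch and remove it by resampling the recording back onto the reference clock:

```python
import numpy as np

def rms_db(x, ref):
    """Level of x relative to ref, in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) / np.sqrt(np.mean(ref ** 2)))

def on_reference_clock(recording, ratio):
    """Re-index a recording whose ADC clock ran `ratio` times fast,
    using linear interpolation (a crude stand-in for drift compensation)."""
    src = np.arange(len(recording))
    wanted = np.arange(int(len(recording) / ratio)) * ratio
    return np.interp(wanted, src, recording)
```

With a simulated 100 ppm clock mismatch, the uncorrected difference signal is tens of dB worse than the corrected one, which mirrors the drift behaviour described above.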

 

Given that the Difference signal is -90 dB from the reference and that the noise level of my Dell sound card is -86 dB, we are at the limits of my test gear. A -90 dB signal is inaudible compared to the reference signal level.
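For reference, a noise-floor figure like the -86 dB quoted above is just the RMS level of a recording of "silence" relative to full scale. A trivial sketch:

```python
import numpy as np

def noise_floor_dbfs(silence, full_scale=1.0):
    """RMS level of a recording of 'silence' relative to full scale, in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(silence ** 2)) / full_scale)
```

Tools like RightMark Audio Analyzer report essentially this number (plus a spectral breakdown) from a loopback recording.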

 

I am not going to reiterate my subjective listening test approach, as I covered it in my FLAC vs WAV post.

 

In conclusion, using my ears and measurement software, on my system, I cannot hear or (significantly) measure any difference between JRiver and JPLAY mini (in hibernate mode).

 

April 2, 2013 Updated testing of JRiver vs JPLAY, including JPLAY ASIO drivers for JRiver and Foobar plus comparing Beach and River JPLAY engines. Results = bit-perfect.

 

June 13, 2013 Archimago's Musings: MEASUREMENTS: Part II: Bit-Perfect Audiophile Music Players - JPLAY (Windows). "Bottom line: With a reasonably standard set-up as described, using a current-generation (2013) asynchronous USB DAC, there appears to be no benefit with the use of JPLAY over any of the standard bit-perfect Windows players tested previously in terms of measured sonic output. Nor could I say that subjectively I heard a difference through the headphones." Good job Archimago!

 

Interested in what is audible relative to bit-perfect? Try Fun With Digital Audio - Bit Perfect Audibility Testing. For jitter, try Cranesong's jitter test.

 

Happy listening!

Attachments: jriver vs jplay analog difference.zip, jriver vs jplay digital difference.zip

64 Comments


Recommended Comments



herring - point taken. I didn't make myself clear enough either. With JPLAY mini [perceived difference from JRiver bigger] and a measuring setup which says it is all just the same, I just assumed that you/we might not detect any differences between Stealth and JRiver either [perceived difference is smaller]. So at least this is an assumption.

 

In *my* real life i just can't set them up in a way the music sounds the same. That's why.

 

Anyway, Stealth player would be a good place to start if anyone were in for more on this case, because it doesn't belong to anyone's camp and is free.

 

 

 

Cheers

 

Andi

Link to comment

It's that many people don't like reality. It's the same here. Eloise has pointed it out also. His tests on both subjects are impeccable.

 

 

 

Sadly, pointless, though interesting. We already knew, at least on FLAC/WAV, that the 'believers' will not change their views.

 

 

 

Doesn't matter. I smiled at a Jehovah's Witness the other day. Doesn't mean that I agree with him.

Link to comment

Thanks for doing this. I wonder whether it would be worthwhile to do the same test comparing Linux against Windows on a dual boot computer?

Link to comment

"And as long as anyone / the legions that are jumping by ("Oh my God! The truth! It is revealed!") is trying to install these findings as valid truth for really everybody..."

 

 

 

 

 

I believe this test performed a valid function for the forum, in that it serves as a marker for those people who would take a single data point and as you say "install these findings as valid truth for really everybody".

 

 

 

Seen in that light, it's grand entertainment. :)

Link to comment

A truth is a truth. The number of people doesn't matter. Facts existed before people. And of course a truth is 'valid' by definition. Same as 'very unique' and 'past history', both of which make people laugh.

 

 

 

That said, it *seems* perfectly valid to me and I can't find any holes in it. That of course does not mean that it actually *is* true.

Link to comment

The reason I wrote the FLAC vs WAV and this post was to show that my computer audio playback system is working correctly.

 

 

 

FLAC and WAV are lossless audio file formats; the audio data they carry is bit-for-bit identical.

 

 

 

Bit-perfect playback: "in audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file." http://en.wikipedia.org/wiki/Bit-perfect and "Poor device drivers often alter the data, resulting in playback that is not bit-perfect. This is especially true for device drivers used in consumer-grade sound cards."
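One way to check bit-for-bit equivalence yourself is to hash only the decoded PCM frames, ignoring headers and metadata. A small stdlib-only Python sketch (the filenames in the comment are hypothetical):

```python
import hashlib
import wave

def pcm_sha256(path):
    """SHA-256 of the raw PCM frames in a WAV file (header/metadata ignored)."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return hashlib.sha256(frames).hexdigest()

# Decode the FLAC to WAV first (e.g. `flac -d refugee.flac -o decoded.wav`),
# then compare: pcm_sha256("original.wav") == pcm_sha256("decoded.wav")
# Equal digests mean the audio data is bit-for-bit identical.
```

This separates "same audio" from "same file": two WAVs with different metadata chunks still hash identically on their PCM content.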

 

 

 

If you are hearing a difference between any lossless audio file formats and/or bit-perfect music players, then there is something not working correctly with your computer audio playback system (i.e. it is not bit-perfect playback).

 

 

 

The "free" measurement tools I presented can assist in troubleshooting what might be the issue(s).

 

 

 

On Windows, you can use:

 

 

 

DPC Latency Checker: http://www.thesycon.de/deu/latency_check.shtml DPC Latency Checker is a Windows tool that analyses the capabilities of a computer system to handle real-time data streams properly. It may help to find the cause for interruptions in real-time audio and video streams, also known as drop-outs.

 

 

 

To me, DPC Latency Checker is a critical tool because, in my experience, a high-latency computer is the number one place where things go wrong. If you look at the latency on my computer, it is 10X below the accepted threshold. I designed my computer to ensure I never have any latency issues.

 

 

 

RightMark Audio Analyzer: http://audio.rightmark.org/index_new.shtml Excellent tool to measure the electrical noise present in your computer audio system. You can also check frequency response, distortion, etc., but it is the noise measurement that we are mostly interested in.

 

 

 

Pro tip: have a look at the size of the power supply I use in my computer. Again, in my experience, more power means less load, which means lower electrical noise. In addition, the Lynx L22 sound card has good noise rejection and a very low noise floor (-107 dB measured on my rig with DAC + ADC in external loopback mode).

 

 

 

Audio DiffMaker: http://www.libinst.com/Audio%20DiffMaker.htm Audio DiffMaker is a freeware tool set intended to help determine the absolute difference between two audio recordings, while neglecting differences due to level difference, time synchronization, or simple linear frequency responses.

 

 

 

I purposely included the DAC and analog line output amplifier in my tests to show that a) the digital-to-analog conversion and analog line output amplifier are not altering the bit-perfect waveform in any way, and b) the electrical noise of my playback computer is so low that I am into the noise floor on the measurement computer.

 

 

 

Meaning that my computer audio system is operating as it should be. Therefore, I should not hear any difference between any lossless audio file formats or bit-perfect music players.

 

 

 

The tools are free and the tests are simple. I encourage folks to try these tools out to ensure you are getting the best performance out of your computer audio playback system.

Link to comment

Interesting how qualified individuals have been working on this for some time, and suddenly it is "solved", er, refuted.

Link to comment

"If you are hearing a difference between any lossless audio file formats and/or bit-perfect music players, then there is something not working correctly with your computer audio playback system (i.e. it is not bit-perfect playback)."

 

 

 

So all lossless formats and all bit perfect players should sound the same if *your* computer is working properly.

 

 

 

So hopefully no more of this gibberish about 'truth for everybody' and a simple explanation "your computer is broken" if they hear differences.

 

 

 

Good Good Good.

 

 

 

Unfortunately it will not change anyone's opinion, but at least we know that such opinions can be safely ignored.

 

 

 

BTW, the current JRiver has a 'bit perfect' light, so you know that both it and your computer are working correctly.

 

 

 

So those who care about sound quality can concentrate on improving (latency, etc) their computers.

Link to comment

What's a 'qualified' individual? And who says he isn't?

 

 

 

We all knew what would happen, no doubt including the OP. And it is a whole lot more constructive than those, and there are some (not you), who *never* voice an opinion, but prefer to sit on the sidelines and criticise those who do, such criticism often based on their undefined 'long experience'.

Link to comment

Brings me right back to what screenmusic said earlier.

 

My point of view is about perception. Which is subjective and can be coloured or wrong. But as long as you really need software helpers giving you numbers or curves to judge whether this player is the player you'll learn to love John Coltrane with, you miss my point.

 

And as long as I really need to listen to music for hours to evaluate, I might be missing yours.

 

To each his own.

 

 

 

Cheers

 

Andi

Link to comment

Press the download button and jump back!

 

 

 

My run-of-the-mill laptop is fine. With JRiver, a flight simulator (not the Microsoft one), and a video in Internet Explorer, it's all fine. Task Manager shows at most 80% CPU with all that going, only 1 or 2% with just JRiver, and the latency checker says OK all the time.

 

 

 

Had a bit of a problem flying the airplane and watching the other dials!

Link to comment

If there is a disagreement between measurement and perception, there are several possibilities. The perception of some individuals is different than that of most of us and includes something that can't be measured; some folks insist on seeing ghosts, talking to dead people, etc. (one of them used to be a Prime Minister of Canada, Mackenzie King). That doesn't make their perception wrong, just not relevant for most of us.

Another possibility is that the measurement is wrong or incomplete. Back in the 1950s, when transistors first came out, engineers claimed transistor amps to be nearly perfect. Many listeners disagreed, and it was discovered that since the new transistor amps were being tested the same way as tube amps, some distortions were being missed. Once they learned how to test and measure for these distortions, the race was on to improve transistor circuits to get rid of them. I'm not saying there is anything missed in these measurements, only that it is possible.

We know that the human perception system is an amazing thing. Most people can drink wine and say I like it or I don't like it. Some people can drink wine and identify the types of grapes used, and sometimes even the year. I have taken part in, and run, blind listening tests (I know some people disagree with blind listening as a valid test) with all kinds of equipment (CD vs records, cables, speakers, amplifiers, ...). I'm most likely to be cynical and agree with the measurements if the blind listening tests verify the measurements. When they don't, then scientifically it means we need to try to find reasons why.

In my own tests I would tend to agree that I don't hear these differences. My ears are pretty good; I play and build guitars, have for 40 years, and have been all over the world studying guitar building. I have, I believe, like a wine taster, practiced listening for small subtleties in the sound of instruments. My sound system consists of about $20k of well-regarded 2-channel CD playing equipment (although I am experimenting with computer-based). I would be happy to have anyone who thinks they perceive a difference verify that. Every time I thought I could hear a difference that was not measurable, and I set up a careful ABX test, I found that there was a reason, or that I couldn't reliably hear the difference after all.

The real challenge here, I think, for those who feel that this test does not explain what they are hearing, is to find a way to verify that. Mitchco has done an excellent job of describing and supporting his position. Those of you who disagree are free to do so of course, but please take the stance that you have a different opinion, which is at this point only opinion.

Link to comment

If you are a wine taster, then you should know that perception, although different in each person, is very different in each subject: some will perceive more, others will perceive less. And as a luthier you should know that every detail counts. If you build a solid Adirondack spruce top acoustic guitar, you know that the sound is different from a solid Sitka spruce top, or the warmer tone of Indian rosewood against the brighter mahogany sides. If you use cow bone on the bridge, it will sound edgier and a little more sterile than fossilized walrus bone, which has richer overtones and sweeter treble.

 

Then everything resides in perception. I love sound; as a musician and producer I am thinking about sound all the time, always looking for alternatives to get a more realistic and natural sound in my recordings. I did many blind tests before; if someone turns the direction of an interconnect cable, I will notice that. If you work on your perception, it is incredible how far it can reach; it is not an illusion, it is real fact. Can you measure beauty?

Link to comment

Let me try this again, but in a different manner.

 

 

 

Mind experiment: Consider that computers, the grid, and the relevant audio components of a given system encompass almost infinite variables re SQ. Now consider JRiver and JPlay. Put JRiver in your system, call it system J. Start over, and put JPlay in your system, call it JP.

 

 

 

Compare system J to JP in mind (not listening). You've got two very different systems. Now unless everyone has the perfect computer for audio (whatever that is) it's likely that J and JP will sound different.

 

 

 

JP and other niche "highend" players attempt to mitigate the "flaws" in computers that contribute to less than "perfect" SQ. (That's how I see it anyway, I could be wrong as I know nothing real of any of these players). So in "perfect for audio" computers perhaps JRiver (which I don't think believes in mitigating computer audio flaws, to a significant degree anyway) and JPlay would sound the same.

 

 

 

However, if JPlay successfully mitigates flaws in "imperfect" computers, it's likely to sound better or different from JRiver because, I think most of us agree, computers can be and often are flawed in many ways when it comes to audio.

 

 

 

This would certainly explain why people hear differences when using different players.

 

 

 

Notice, by the way, that bit perfect is irrelevant here and that Mitchco's tests prove nothing for any system but the one he tested on and those that are exactly the same, possibly down to power source and time of day. This may sound like nitpicking, but a lot of the SQ differences seem to be mighty small nits too.

 

 

 

-Chris

Link to comment

Of course not, it is subjective, I might like a different guitar from you.

 

 

 

But the whole point of this test is that his results *are* measurable. They are *not* subjective, and on his particular computer they are identical, so no one will hear any difference. One might even say that the more perceptive person should be more sure of the 'no difference' than the less perceptive person.

Link to comment

http://en.wikipedia.org/wiki/Bit-perfect

 

 

 

"Bit-perfect audio/video does not perform any digital signal processing (DSP) such as channel matrixing, filters and equalizing and does not do any resampling or sample rate conversion (such as upsampling or downsampling). In audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file. Unaltered passthrough.

 

 

 

The data stream (audio/video) will remain pure and untouched and be fed directly without altering it.

 

 

 

Bit-perfect audio is often desired by audiophiles."

 

 

 

http://en.wikipedia.org/wiki/Audio_file_format

 

 

 

"Lossless compression formats enable the original uncompressed data to be recreated exactly"

 

 

 

Digital audio has a series of technical specifications, coded file formats, application programming interfaces, etc., that all manufacturers of digital audio players, workstations, etc., must adhere to. This is what makes bit-perfect audio possible.

 

 

 

I totally agree, different computers have varying degrees of latency, electrical noise, performance issues, you name it. The list of free tools provided is a way to troubleshoot and verify computer audio performance.

Link to comment

One thing Mitch may want to test is running the same comparison using two deliberately non-bit-perfect outputs, to see if something measurable shows up in the comparison. Perhaps using the software's EQ in a noticeable but subtle way...

 

 

 

Eloise
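For example, even a very gentle deliberate alteration, such as a subtle one-pole smoothing filter, should leave a residual tens of dB above the -90 dB null reported in the post, confirming the method has the sensitivity to catch real processing. A hypothetical sketch of such a check:

```python
import numpy as np

def one_pole_lowpass(x, alpha):
    """Subtle smoothing filter: y[n] = (1 - alpha) * x[n] + alpha * y[n-1]."""
    y = np.empty_like(x)
    state = 0.0
    for n, sample in enumerate(x):
        state = (1 - alpha) * sample + alpha * state
        y[n] = state
    return y

def residual_db(a, b):
    """Level of (a - b) relative to a, in dB."""
    d = a - b
    return 20 * np.log10(np.sqrt(np.mean(d ** 2)) / np.sqrt(np.mean(a ** 2)))
```

A milder filter (smaller alpha) leaves a smaller residual, but even a barely audible amount of smoothing sits far above a -90 dB noise floor.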

Link to comment

Very nice experiment. There is just one caveat I see: the Audio DiffMaker software MAY in fact dilute some of the differences between tracks which are only present in the time domain.

 

Here is their paper: http://www.libinst.com/AES%20Audio%20Differencing%20Paper.pdf

 

 

 

This sentence made me nervous:

 

"In the time domain, a digitally recorded track can be easily shifted only by whole samples. But if it is transformed into the frequency domain, delays can be easily varied by any amount by adjusting the phase of each component an increment proportional to its frequency. For this reason, trial and error iteration to optimize delay compensation to fine values is done in the frequency domain."

 

 

 

If I understand this correctly, the software iterates through different parameters to adjust for and compensate shifts in the time domain. This could discard real differences merely by overfitting. I may be wrong, but a more conservative alignment approach might lead to results which better match reality. Probably not an easy task.
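The frequency-domain fractional delay the paper describes can be sketched in a few lines (a simplified, circular version; real use would pad the signal first):

```python
import numpy as np

def fractional_delay(x, delay):
    """Delay x by `delay` samples (possibly fractional) by rotating the
    phase of each FFT bin in proportion to its frequency."""
    freqs = np.fft.rfftfreq(len(x))          # bin frequencies, cycles/sample
    spectrum = np.fft.rfft(x)
    spectrum *= np.exp(-2j * np.pi * freqs * delay)
    return np.fft.irfft(spectrum, len(x))
```

Delaying by 0.5 samples twice equals delaying by one whole sample, which for this circular version is exactly `np.roll(x, 1)`.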

 

 

 

Just some thoughts.

Link to comment

I believe the results of the tests are completely valid within the limits of the test. There are a number of external aspects that limit their generality. The point of measurement is the line-level signal at the output of the DAC/preamp. Ideally you would want the acoustic signal, but that is very limiting; the next best is the speaker terminals.

 

 

 

You also need to be sure that adding the extra computer isn't somehow affecting the measurement (the observer effect) in a way that limits the results. This, and a lot of other things, are why really good research is hard work.

 

 

 

First, the results are only valid for the conditions of the test. The tests need to be repeated on a number of different systems to get any idea of whether the results are consistent or repeatable elsewhere (I suspect they mostly will be, but keep in mind that the Lynx card is very good and may do a better job of suppressing differences in the source data timing). Then we really need to test for sensitivity. This is difficult for subjective stuff. There is some existing literature that can point to some things to test for.

 

 

 

I think this is important and I'll try to contribute when I get a chance, but I would not say these results are conclusive, just very indicative.

Link to comment

It all comes down to this: "hearing is believing". I bet he'd get similar results if he compared foobar or even WMP to JRiver, or the differences between DirectSound/ASIO/WASAPI/KS, etc.

 

 

 

It's just a hobby. Why so serious?

Link to comment

On ripped CDs I cannot tell the slightest difference between a maximally tweaked JRiver 17 and maximally tweaked Windows and WMP left 'as is', with Windows also left 'as is'.

 

 

 

Obviously my several tens of thousand dollar 'system' is total crap. My ears are fine as they are large and stick out.

 

 

 

It's a hobby. And the music entertains us too :)

Link to comment

Ok, let me try this from yet another angle.

 

 

 

If so-called audiophile software players deliver a particular unaltered bit perfect file to the dac and they are playing on "perfect for audio" computers, said players should all sound the same. I'm assuming that if they don't, they have a problem and they're not ready for prime time. If we can't agree on that, well then we're talking beyond current measurement capabilities and we need not discuss it further. If we can agree on that, we've agreed on something of very little value, little has been revealed. And that's all that mitchco's test(s) reveal.

 

 

 

I think that players differ in how they deal with real world "imperfect computers" and that's why they may sound different. They need to be tested with that in mind; Then the tests will have value, and tell us something.

 

 

 

Mitchco acknowledges as much by having us test our machines. But we can't test for everything, and even if we could, then what? Much simpler if a software player takes care of that for us.

 

 

 

Players also have added features (various forms of dsp, upsampling) which set them apart from each other and which may make them sound better, but those features should all be disabled for testing purposes.

 

 

 

-Chris

Link to comment

Hi-fi is 'accurate reproduction', presumably of the source you have available.

 

 

 

So ALL hi-fi systems should sound alike. They don't, of course. But the more you pay, the more they should sound the same. If not, there is no point in paying more than a few hundred dollars for the lot. High fidelity does not mean 'I like the sound my expensive equipment makes'.

Link to comment

It's hi fi, not perfect fi. If it were pe fi, you'd have a case.... but what's it got to do with anything in this thread?

 

 

 

-Chris

Link to comment



