
How many bits, how fast, just how much resolution is enough?


BlueSkyy


Shannon-Nyquist theorem. That is the mathematical proof.

 

...that you cannot implement exactly in the real world. It conveniently assumes infinitely long signals and filters that are infinitely long, so that nothing ever comes out of the filter, because it also has infinite delay. It also assumes infinite precision of timing and infinite resolution of the samples. But other than that, yes, it works nicely...

 

View the video linked in my signature. You can skip to the 20 min 50 sec mark and watch about two minutes. Using analog sources and analog monitoring gear with AD/DA conversion in between, he shows that you can move a band-limited square wave through various amounts of delay between sample points, and the wave shape you get on the analog oscilloscope is exactly the same apart from moving in time relative to a second square wave. What more proof could you want? You have the theorem predicting something, and an analog monitoring system showing the theorem works as advertised.
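The demonstration in the video can be mirrored numerically: for a band-limited signal, the 44.1 kHz samples determine the continuous waveform exactly, so a fractional-sample delay applied as a pure phase shift in the DFT domain lands exactly on the analytically shifted waveform. A minimal numpy sketch (a periodic test signal with made-up parameters, chosen so the interpolation is exact):

```python
import numpy as np

fs, N = 44_100, 441                  # one 10 ms frame; integer cycles of every harmonic
f0 = 1_000

def bl_square(t):
    """Band-limited square wave: odd harmonics of f0 up to 21 kHz (below Nyquist)."""
    return sum(np.sin(2*np.pi*m*f0*t)/m for m in range(1, 22, 2)) * 4/np.pi

n = np.arange(N)
x = bl_square(n / fs)                # the 44.1 kHz samples

tau = 0.37 / fs                      # delay by 0.37 of a sample period
f = np.fft.fftfreq(N, 1/fs)
y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j*np.pi*f*tau)).real  # shift between the samples

print(np.max(np.abs(y - bl_square(n/fs - tau))))  # error at float round-off level
```

The delayed sample set differs from the original, yet both describe the same continuous waveform, which is exactly what the o-scope comparison shows in analog form.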

 

IIRC, those are pretty heavily oversampled converters. Which makes a pretty big difference...

 

Here's a 19 kHz sine wave from a NOS DAC running at a 44.1 kHz sampling rate:

musette-19k-44k1_2.png

 

And the same source data, same DAC, but now upsampled to a 384 kHz sampling rate before being sent to the DAC:

musette-19k-384_2.png

 

So you certainly want to have the conversion running at a higher rate than 44.1 kHz...
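The heavily modulated envelope in the NOS screenshot follows directly from the missing reconstruction filter: the 19 kHz tone is accompanied by its first image at 44.1 - 19 = 25.1 kHz at nearly the same level. A rough numpy sketch, modelling the NOS output as an ideal zero-order hold (an assumption; real output stages differ):

```python
import numpy as np

fs, N = 44_100, 4_410                     # 0.1 s of 19 kHz: exactly 1900 cycles
n = np.arange(N)
x = np.sin(2*np.pi*19_000*n/fs)           # the RedBook-rate samples

# NOS behaviour approximated as a zero-order hold, drawn at a 352.8 kHz grid
zoh = np.repeat(x, 8)
spec = 2*np.abs(np.fft.rfft(zoh))/len(zoh)

bin_19k = 19_000 * len(zoh) // 352_800    # the wanted tone
bin_img = 25_100 * len(zoh) // 352_800    # first image: 44.1 kHz - 19 kHz
print(spec[bin_img] / spec[bin_19k])      # ~0.76, i.e. the image is only ~2.4 dB down
```

With the image barely attenuated, the scope displays the sum of the two tones, hence the beating envelope in the first picture.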

 

 

P.S. A DPO scope is a good way to see how stable the waveform is, which also tells you quite a bit about the reconstruction that you don't see on an old-school scope.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
Of course there can be... :D

 

Theorems don't prove anything about hearing, only about the maths that happens in the digital domain, and only under certain conditions.

I would say in this case they indicate where it would be best to look for any advantage of hi-res, and that would appear to be not better tracking of the signal but the opportunity for better filtering.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

I must say I am surprised and dismayed at you, Miska. If that is from the Musette at 44.1 kHz, it simply indicates a bad DAC. I don't have any images, but early DACs displayed 19 kHz with good waveform shape long ago in the early days of CD, because I have seen them do it on my oscilloscope. If your NOS DAC needs 384 kHz to do a 19 kHz wave, you should forget it as a poor design.

 

The video uses an inexpensive ADC/DAC from around the year 2002, so probably a sigma-delta chip. If such chips give clean 44.1 kHz PCM results and the NOS can't, then that suggests a problem for the NOS, not some blanket dismissal of results from modern DAC chips that aren't handicapped. Earlier in the video he steps 1 kHz at a time from 15 kHz to 20 kHz, and you can see the image Miska posted has no bearing on what is possible with modern DACs (well, calling a 14-year-old low-end DAC modern).

 

Here is the analog scope image of 18 kHz from that video. He moved the camera, but you can see that 19 kHz and 20 kHz are just as nice.

 

18 khz.png

 

It actually looks nicer than your 384 kHz image, though I understand that is just an artefact of the display.

 

Really intentionally deceptive on this one, Miska. Shame on you.

 

Edit to add: It appears the Musette (assuming that is the NOS you are showing) doesn't use output filtering at all, neither digital nor analog. So you need to run 384 kHz to let the rest of your gear, your speakers, and the 6th-order filter of your ears perform the filtering. Terribly deceptive to use that as an example. Another name for a DAC with no output filtering is a broken DAC. It most certainly completely ignores the bandwidth limiting required by Shannon-Nyquist.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

Link to comment
Instruments don't produce pure sine waves.

 

True - they produce *combinations* of pure sine waves.


Link to comment
Don't tell that to the orchestra! :)

 

Any two sine waves of a given frequency but arbitrary amplitude and phase sum to produce a sine wave of the same frequency (with different amplitude and phase).

 

This is true for complex signals, which have components at multiple frequencies. It is part of Fourier analysis.
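The single-frequency case can be verified with phasor addition; a small numpy check (arbitrary example amplitudes and phases):

```python
import numpy as np

w = 2*np.pi*19_000                       # any single frequency (rad/s)
a1, p1 = 1.0, 0.3                        # two arbitrary amplitude/phase pairs
a2, p2 = 0.6, 2.1

r = a1*np.exp(1j*p1) + a2*np.exp(1j*p2)  # phasor addition
A, phi = np.abs(r), np.angle(r)          # amplitude and phase of the sum

t = np.linspace(0, 1e-3, 1000)
lhs = a1*np.sin(w*t + p1) + a2*np.sin(w*t + p2)
rhs = A*np.sin(w*t + phi)
print(np.max(np.abs(lhs - rhs)))         # round-off level: one sinusoid, same frequency
```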

Custom room treatments for headphone users.

Link to comment
I must say I am surprised and dismayed at you Miska. If that is from the Musette at 44.1 it simply indicates a bad DAC.

 

Any DAC running the conversion at 44.1k is just bad.

 

It will look pretty much the same on any non-oversampled DAC where the conversion section actually runs at 44.1k. To correctly reconstruct it (from 16-bit data), you would need an analog filter with 96 dB of attenuation over the 20 kHz to 24.1 kHz band, thus rolling off by 96 dB in a band just 4.1 kHz wide. You are not going to find that kind of analog filter in DACs... (and it would have a completely horrible phase response in the audio band)
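As a back-of-the-envelope check of that filter requirement (assuming the textbook 6 dB/octave per filter order):

```python
import math

passband_edge = 20_000    # audio must be intact up to here (Hz)
stopband_edge = 24_100    # first image of 20 kHz content: 44.1 kHz - 20 kHz
attenuation_db = 96       # roughly the 16-bit dynamic range

octaves = math.log2(stopband_edge / passband_edge)  # ~0.27 octave
slope = attenuation_db / octaves                    # ~357 dB/octave required
order = math.ceil(slope / 6)                        # ~60th-order analog filter
print(octaves, slope, order)
```

A roughly 60th-order analog filter is the kind of thing you only build on paper, which is why the brickwall is done digitally at a high oversampled rate instead.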

 

When I have time, I can give you more pictures of NOS behavior from delta-sigma DACs where the upsampling digital filters have been disabled.

 

I don't have any images, but early DACs have displayed 19 khz with good waveform shape long ago in the early days of CD because I have seen them do it on my oscope. If your NOS DAC needs 384 khz to do a 19 khz wave, you should forget it as a poor design.

 

You have probably been looking at an oversampled DAC. Already CD players using the TDA1541A chip used the SAA7220 digital oversampling filter to run the converter at 176.4 kHz, for example the Marantz CD-60 I had (from the late 80s). The 19 kHz sine from those is not as clean as the one in my picture at the 384k rate, but it is better than the NOS.

 

The video uses an inexpensive ADC/DAC from around the year 2002. So probably a sigma delta chip. If such chips give PCM 44.1 khz results with a nice clean wave and the NOS can't then it suggests problem for the NOS, not some blanket dismissal of results using modern DAC chips that aren't handicapped.

 

All delta-sigma DAC chips are oversampled by a factor of at least 64x, and practically all have digital oversampling filters upsampling the input data to 352.8k (for 44.1/88.2/176.4 inputs) or 384k (for 48/96/192 inputs).
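The rate arithmetic behind those figures, for both PCM rate families, checks out like this:

```python
# The two PCM rate families and the oversampling factors mentioned in the thread.
base_44 = 44_100
base_48 = 48_000

assert base_44 * 8 == 352_800      # typical internal upsampled rate, 44.1k family
assert base_48 * 8 == 384_000      # same, 48k family
assert base_44 * 64 == 2_822_400   # 64x oversampling ("2.8 MHz", the DSD64 rate)
assert base_44 * 128 == 5_644_800  # 128x, the "5.6 MHz" modulator rate cited below
print("rate families check out")
```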

 

Modern DAC chips are not "handicapped" because they do upsampling; running the conversion at the 44.1k rate would be madness, because the quality would be horrible!

 

Earlier in the video he steps 1 khz at a time from 15 khz to 20 khz and you can see the image Miska posted has no bearing on what is possible with modern DACs. (well calling a 14 year old low end DAC modern)

 

Because they are not running the conversion section at a 44.1 kHz sampling rate! Instead, the data is systematically upsampled to a 352.8/384 kHz sampling rate as the very first thing! A 44.1 kHz sampling rate is completely unsuitable for running a high-fidelity D/A conversion process. That's the entire point of my argument.

 

The same goes for A/D converters. A typical ADC today runs delta-sigma modulation at 5.6 MHz (when the output is 44.1k PCM), which is then converted down to 44.1 kHz 24-bit PCM using a chain of digital filters.

 

Then the D/A converter does the inverse: it converts the 44.1 kHz 24-bit PCM input to a 5.6 MHz (or higher, like tens of MHz in the ESS Sabre) delta-sigma bitstream for conversion.

 

My point has been that you could at least leave the PCM conversion at the typical intermediate rate of 352.8 kHz, instead of shuffling the data down to 44.1k and then back up for no good reason (in the modern world).
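The decimation chain described above can be illustrated with a toy two-stage decimator in numpy (filter lengths and cutoffs are made up for the sketch, not taken from any actual converter): an audio-band tone survives the trip from the modulator rate down to 44.1 kHz, while out-of-band content is filtered out before each rate drop so it cannot alias into the audio band.

```python
import numpy as np

def lowpass(x, cutoff, fs, ntaps=255):
    """Windowed-sinc FIR lowpass (Hamming window), unity DC gain."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * (cutoff / fs) * np.sinc(2 * (cutoff / fs) * n)
    h *= np.hamming(ntaps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

fs0 = 5_644_800                        # 128 x 44.1 kHz, the "5.6 MHz" modulator rate
t = np.arange(4410 * 128) / fs0        # 0.1 s
x = np.sin(2*np.pi*1_000*t) + np.sin(2*np.pi*100_000*t)  # audio tone + out-of-band junk

# Stage 1: 5.6448 MHz -> 352.8 kHz (filter, then decimate by 16)
y = lowpass(x, 150_000, fs0)[::16]
# Stage 2: 352.8 kHz -> 44.1 kHz (filter, then decimate by 8)
z = lowpass(y, 20_000, fs0 // 16)[::8]

# Measure amplitudes on a transient-free, integer-period slice
z = z[441:441 + 3528]
spec = 2 * np.abs(np.fft.rfft(z)) / len(z)
bin_1k = round(1_000 * len(z) / 44_100)      # the wanted tone: stays near 1.0
bin_alias = round(11_800 * len(z) / 44_100)  # where 100 kHz would fold to: tiny
print(spec[bin_1k], spec[bin_alias])
```

The argument above is that the final drop to 44.1k, and the matching interpolation back up inside the DAC, could simply be left out by staying at the intermediate rate.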

 

Here is the analog scope image of 18 kHz from that video. He moved the camera, but you can see that 19 kHz and 20 kHz are just as nice.

 

[ATTACH=CONFIG]31162[/ATTACH]

 

That's certainly from a heavily oversampled DAC....

 

Actually looks nicer than your 384 khz image. Though I understand that is just an artefact of the display.

 

Analog scopes have a natural smoothing function built in... ;)


Link to comment
Hi George,

I don't do classical very often, but given how everything else sounds I don't see why it would be represented poorly.

I think I gave my explanation for that.

I think the key thing appears to be how well the DAC does RedBook... the VEGA seems to excel in this area, and that's why I'm enjoying it so much. And as I've voiced before, the system just seems to be so much more forgiving than any other system/config I had before.

 

By definition, if a DAC sounds great doing 24-bit, it will do 16-bit just as well.

 

I will continue to enjoy... but won't rule out hires in case I hear something I've been missing.;-)

That's perfectly OK; it's (still) a free country.

George

Link to comment
Your figures prove that the 44.1 kHz sampled signal contains everything needed for accurate reconstruction. That incorporating a digital processing step aids in producing a quality output is completely beside the point.

Link to comment
Your figures prove that the 44.1 kHz sampled signal contains everything needed for accurate reconstruction.

 

Yes..

 

It contains all the necessary information, but whether it also comes with embedded artifacts from the anti-alias filtering is another matter.

 

That incorporating a digital processing step aids in producing a quality output is completely beside the point.

 

No, it is precisely the point! If you don't unnecessarily shuffle the rate back and forth, you can leave out a large part of that digital processing! Hi-res -> less processing needed. You could completely cut the "Decimation Filter" block from the ADC chip and the "8X Interpolator" block from the DAC chip; completely unnecessary stuff!

 

And oh yes, with my marketing hat on: by all means keep using RedBook, as it needs a hefty amount of DSP to make a top-quality analog signal out of it! Meaning more need for HQPlayer's upsampling!

 

If all content were recorded in native DSD256, there wouldn't be as much need for HQPlayer upsampling. So whenever I'm advocating those top-notch hi-res formats, I'm advocating fewer sales for HQPlayer. Well, people could still use it for speaker adjustments, digital room correction and such, but they wouldn't need it so much for upsampling.


Link to comment
Your figures prove that the 44.1 kHz sampled signal contains everything needed for accurate reconstruction. That incorporating a digital processing step aids in producing a quality output is completely beside the point.

It's beside the point you are making (that more samples aren't needed to accurately represent the waveform), but it is possibly still quite relevant to the OP's question about whether a higher sample rate makes for better digital audio.

 

Can we agree that the engineers who designed 8x oversampling into DAC chips decades ago did so for solid engineering reasons? And if that's granted, can we go from there to saying there is no engineering reason for a decimation-interpolation sequence in the middle of the recording/playback chain, and that it exists solely as an artifact of the way the music industry has evolved?


Link to comment
And oh yes, with my marketing hat on; by all means keep using RedBook as it needs hefty amount of DSP to make top quality analog signal out of it! Meaning more need for HQPlayer's upsampling!

If all content would be recorded in native DSD256, there wouldn't be as much need for HQPlayer upsampling. So whenever I'm advocating those top notch hires formats, I'm advocating less sales for HQPlayer. Well, people could still use it for speaker adjustments, digital room correction and such, but not need it so much for upsampling.

 

Is it just me, or does upsampling, say, DSD128 -> DSD512 or DSD256 -> DSD512 take more CPU cycles than, say, PCM44 -> DSD512? But do I really care? (The Firstwatt J2 that I'm using to drive my HD800 is perhaps one of the most inefficient uses of electrons.)

 

I guess the question comes down to: assuming A) recording at DSD256, and then B) state-of-the-art conversion to RedBook, what percentage of people can tell the difference when it is upsampled to DSD512 and fed to the DAC? Assuming optimal playback equipment/conditions.


Link to comment
I guess the question comes down to: assuming A) recording at DSD256, and then B) state of the art conversion to Redbook, what percentage of people can tell the difference when upsampled to DSD512 and fed to the DAC? Assuming optimal playback equipment/conditions.

 

Isn't that the wrong question? If your DAC is doing internal upsampling anyway, whether to DSD512 or to something else, isn't the real question: "Can I tell a difference between where the upsampling occurs, how it is done, and what filters are applied in that process?"

Synology NAS>i7-6700/32GB/NVIDIA QUADRO P4000 Win10>Qobuz+Tidal>Roon>HQPlayer>DSD512> Fiber Switch>Ultrarendu (NAA)>Holo Audio May KTE DAC> Bryston SP3 pre>Levinson No. 432 amps>Magnepan (MG20.1x2, CCR and MMC2x6)

Link to comment
It's beside the point you are making (that more samples aren't needed to accurately represent the waveform), but is possibly still quite relevant to the OP's question about whether a higher sample rate makes for better digital audio.

 

Can we agree that the engineers who designed 8x oversampling into DAC chips decades ago did so for solid engineering reasons? And if that's granted, then can we go from there to saying there is no engineering reason for a decimation-interpolation sequence in the middle of the recording/playback chain, but this exists solely as an artifact of the way the music industry has evolved?

 

Well, if PCM 48/24 (or 96/24) is enough, then bits and sampling at that rate are enough, whether via some purist implementation that performs as it should or with DSP in between. If we go straight to some higher rate and a higher number of bits transmitted, that sounds like an unengineered method that will work. An engineered method is more efficient.

 

That is before we mention recording, mixing, mastering, and processing being easily doable in PCM. Then, upon playback, digital volume control, room EQ, and that sort of thing are also easily handled in PCM formats. If the gain is some measured but inaudible improvement plus simplicity, at the cost of those things plus higher bit throughput, that doesn't sound like a clear-cut great trade-off to me.


Link to comment

It is very likely that the engineers who designed 8x oversampling into DAC chips decades ago did so for solid engineering reasons, as engineers dislike doing things irrationally (though they are sometimes forced to do irrational things by their MBA overlords).

 

But best practices decades ago may also not be best practices today.

Link to comment

That's how I read it (as long as the DAC does the right job with the data).

 

This thread is very good, but for me, as it has grown, it seems to reinforce that the performance of the DAC with RedBook source data decides whether RedBook is good enough, not that RedBook can never be good enough as source data itself.

 

It ought to be the case, however, that most hi-res recordings will probably have had more care taken over them than RedBook, and that's a perfectly valid reason to choose them.

 

But the argument that 16/44 source data is unable to reproduce music to startling standards (to human ears) has not been made here (yet), and the fact that it happens in my lounge whenever I ask makes that argument hard to accept.

 

;-)

 


Source:

*Aurender N100 (no internal disk : LAN optically isolated via FMC with *LPS) > DIY 5cm USB link (5v rail removed / ground lift switch - split for *LPS) > Intona Industrial (injected *LPS / internally shielded with copper tape) > DIY 5cm USB link (5v rail removed / ground lift switch) > W4S Recovery (*LPS) > DIY 2cm USB adaptor (5v rail removed / ground lift switch) > *Auralic VEGA (EXACT : balanced)

 

Control:

*Jeff Rowland CAPRI S2 (balanced)

 

Playback:

2 x Revel B15a subs (balanced) > ATC SCM 50 ASL (balanced - 80Hz HPF from subs)

 

Misc:

*Via Power Inspired AG1500 AC Regenerator

LPS: 3 x Swagman Lab Audiophile Signature Edition (W4S, Intona & FMC)

Storage: QNAP TS-253Pro 2x 3Tb, 8Gb RAM

Cables: DIY heavy gauge solid silver (balanced)

Mains: dedicated distribution board with 5 x 2 socket ring mains, all mains cables: Mark Grant Black Series DSP 2.5 Dual Screen

Link to comment

Re: comparing classical... see if you can do the same experiment on an Auralic VEGA?

 

... if someone wants to make available two files of classical, 16/44 and hi-res, I'll try it out.

 

Luckily the UK is also a free country.

 

:-)

 

 



Link to comment
Re: comparing classical... see if you can do the same experiment on an Auralic VEGA?

 

... if someone want to make available 2 files of classical 16/44 and hires, I'll try it out.

 

Luckily the UK is also a free country.

 

:-)

 

You could try some of these:

 

2L High Resolution Music .:. free TEST BENCH


Link to comment
It is very likely that the engineers who designed 8x oversampling into DAC chips decades ago did so for solid engineering reasons as engineers dislike doing things irrationally (tho are sometimes forced to do irrational things by their MBA Overlords).

 

But best practices decades ago may also not be best practices today.

 

It is still just 8x today for pricing/resource reasons. Top-of-the-line DAC chips sell for around $2.50 apiece, which means heavy cost awareness must be applied when designing the DSP engine in those chips. There are other significant factors too: to keep the cost of the final product down, only one clock input is used. And since the chips must run without cooling, current consumption must be limited. Another reason is that since the DSP engine resides on the same piece of silicon as the sensitive analog parts, less than a millimeter away, the amount of noise generated by the DSP engine must be kept low.

 

But these days all that DSP can be done externally, outside of the chip, without resource limits: either in player software, or on a separate processor chip inside the DAC. That's why I do 512x oversampling digital filters in software these days.


Link to comment
But the argument that 16/44 source data is unable to reproduce music to startling standards (to human ears) has not been made here (yet) - and that it happens in my lounge whenever I ask it to make that argument hard to accept.

 

It can sound pretty good, but hi-res can sound a lot better, for a lot of reasons we've already discussed here and in other threads over the years.


Link to comment
Well if PCM 48/24 (or 96/24) is enough, then bits and sampling at that are enough whether via some purist version that performs as it should or whether you have in between DSP. If we go straight to some higher rate and higher number of bits transmitted that sounds like an unengineered method that will work. An engineered method is more efficient.

 

That is before we mention recording, mixing, mastering, and processing being easily doable in PCM. Then upon playback digital volume control and Room EQ that sort of thing also being easily handled in PCM formats. If the cost is some measured inaudible improvement and simplicity at the cost of those things plus a higher bit thru put, doesn't sound like a clear cut great trade off to me.

 

There is no need to go below the 352.8/384 kHz sampling rate. You can very easily do all of that at that rate, without cramming the signal down with brickwall filters.

 

For example, I'm running 5.1-channel-to-stereo downmixes of DSD64 content in real time while at the same time upsampling it to DSD256. And it all works just great.


Link to comment
Is it just me or does it seem that upsampling say DSD128 -> DSD512 and DSD256 -> DSD512 takes more CPU cycles than say PCM44 -> DSD512 ? But do I really care? (the Firstwatt J2 that I'm using to drive my HD800 is perhaps one of the most inefficient use of electrons)

 

About the same for all those cases. I just checked, and the loads are so similar that the differences are irrelevant.

 

I guess the question comes down to: assuming A) recording at DSD256, and then B) state of the art conversion to Redbook, what percentage of people can tell the difference when upsampled to DSD512 and fed to the DAC? Assuming optimal playback equipment/conditions.

 

That is not hard to try, because you can take some DSD256 recording and convert it to RedBook. My ADCs are limited to DSD128 (they are based on TI's PCM4202 chip), but already that makes a difference.

 

I made a special converter quite a long time ago. It has a single analog input stage running into two PCM4202 chips in parallel. One of those chips runs in DSD128 mode, and the other in 192/24 PCM mode (it can be switched to 48/24 or 96/24 too). The output can be taken to a computer, allowing parallel recording of PCM and DSD, and/or listened to through an on-board DAC based on the Cirrus CS4398 chip, running in either DirectDSD or PCM mode depending on the input. Switching the DAC input between the two sources is quick.

 

Here's the converter with only the DAC part of the board populated; I don't have a better picture of the internals at hand:

dca1-v0.jpg


Link to comment

Which two files would you recommend I compare?

 

(I only want to do this once... comparing files isn't what music enjoyment is about)

 

:-)

 

You could try some of these:

 

2L High Resolution Music .:. free TEST BENCH


Link to comment
Which two files would you recommend I compare?

 

(I only want to do this once... comparing files isn't what music enjoyment is about)

 

:-)

I don't have any suggestions. Download all the 44.1 kHz versions, as they are smaller. Pick a couple you like, and pick one of the highest resolutions you can play.

 



Link to comment
