
Why does streaming from local storage sound better than the same album, at the same resolution, streamed from Qobuz?


Recommended Posts

3 hours ago, davide256 said:

No, that's not the case. This is purely a function of software and endpoint hardware optimization. Qobuz legally has to do content protection to prevent piracy, which adds overhead to streaming and caching and creates background network activity demand on the CPU. An offline local file eliminates the network activity demand on the CPU, and a purchased file allows you to bypass the Qobuz app entirely for your preferred/optimal player.

 

This is the best, most plausible explanation I have seen yet.

No electron left behind.

Link to comment
8 hours ago, Blackmorec said:

We can correlate those improvements to 

less jitter

less noise and ripple

less EMI

less vibration 

fewer cable reflections

less network traffic

Correlating doesn't actually show that anything is happening that improves or degrades SQ.

Can you actually show: a) cause and effect between any of your changes and "less jitter", etc.?

b) If you show changes in (a), does that cause any change at the output of the DAC?

c) Is that change audible? Changes at -115 dB and below don't count. No one can hear them. And I'm being generous.

Main listening (small home office):

Main setup: Surge protectors +>Isol-8 Mini sub Axis Power Strip/Protection>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three BXT (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three BXT

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)

Link to comment

If someone wanted the best chance of finding out why the streaming-service version of a given album sounds different from a local copy of the same album purchased from the same streaming service, then you would need to choose what you believe is the same version of that album as a starting point. An example being: the streaming service only offers one version for sale. You buy it and use it to compare against the streamed version.

 

The one part you will never know is whether the download and the stream are the same version (same catalog number, pressing, etc.), because there is a clear lack of transparency with all the streaming services and digital download stores. I've yet to see one publish such info.

 

But anyway, my point with the above is that you could capture both streams and record them with Audacity, as an example. Once both were captured, you could then export the spectrum plots to .csv format. This export shows the frequency and dB level at each point that was captured. You can then do some fancy Excel-foo on the tables of data to observe any differences.
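For illustration only, a minimal sketch of that kind of comparison, assuming both captures were exported from Audacity's Plot Spectrum dialog with the same spectrum settings (tab-separated frequency/level columns and a one-line header; adjust the delimiter if your export differs). The file names here are hypothetical:

```python
import csv

def load_spectrum(path):
    """Load an Audacity 'Plot Spectrum' export as {frequency (Hz): level (dB)}."""
    points = {}
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter="\t")   # adjust delimiter if your export differs
        next(reader)                             # skip the header row
        for row in reader:
            if len(row) >= 2:
                points[float(row[0])] = float(row[1])
    return points

# Hypothetical file names for the two captures
local = load_spectrum("local_capture.txt")
stream = load_spectrum("qobuz_capture.txt")

# Difference (streamed minus local) at each frequency bin present in both exports
for freq in sorted(set(local) & set(stream)):
    delta = stream[freq] - local[freq]
    if abs(delta) > 0.1:                         # only report differences above 0.1 dB
        print(f"{freq:10.1f} Hz  {delta:+.2f} dB")
```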

 

It's not a perfect way to go about it, but it's certainly more scientific than just trying to use your ears to figure it out.

 

I might know someone who did the above, attempting to answer the same question posted by the OP of this thread. It was said that what they saw was that the streamed version was down in level across the whole time window of the captured song compared to the local version: at some points in the frequency scale by over 2 dB, and at other points by fractions of a dB.

 

But with that said, I hear differences myself between streamed versions and local versions. It's not subtle either, IMO. Local is better to these ears for sure.

Link to comment
30 minutes ago, cjf said:

If someone wanted the best chance of finding out why the streaming-service version of a given album sounds different from a local copy of the same album purchased from the same streaming service, then you would need to choose what you believe is the same version of that album as a starting point. An example being: the streaming service only offers one version for sale. You buy it and use it to compare against the streamed version.

The one part you will never know is whether the download and the stream are the same version (same catalog number, pressing, etc.), because there is a clear lack of transparency with all the streaming services and digital download stores. I've yet to see one publish such info.

But anyway, my point with the above is that you could capture both streams and record them with Audacity, as an example. Once both were captured, you could then export the spectrum plots to .csv format. This export shows the frequency and dB level at each point that was captured. You can then do some fancy Excel-foo on the tables of data to observe any differences.

It's not a perfect way to go about it, but it's certainly more scientific than just trying to use your ears to figure it out.

I might know someone who did the above, attempting to answer the same question posted by the OP of this thread. It was said that what they saw was that the streamed version was down in level across the whole time window of the captured song compared to the local version: at some points in the frequency scale by over 2 dB, and at other points by fractions of a dB.

But with that said, I hear differences myself between streamed versions and local versions. It's not subtle either, IMO. Local is better to these ears for sure.

 

So I picked an album that I bought from Qobuz, one that is still streaming at the same resolution and has quiet and loud parts. My ear is bothering me some today, so take this with a grain of salt, but I didn't hear any level differences. The playback chain was the same except for the source of the file being played. Volume control was in HQP.

 

I can see that some record company exec would want streaming services to alter the stream in some way, but I don't see them creating entirely new masters just for streaming. Is there anything else that can cause the differences this mysterious someone may have found? A setting in Audacity?

 

 

Screen Shot 2021-09-24 at 11.50.36 PM.png

Screen Shot 2021-09-24 at 11.45.31 PM.png

No electron left behind.

Link to comment
14 hours ago, davide256 said:

Qobuz legally has to do content protection to prevent piracy, which adds overhead to streaming and caching and creates background network activity demand on the CPU. An offline local file eliminates the network activity demand on the CPU, and a purchased file allows you to bypass the Qobuz app entirely for your preferred/optimal player.

 

I'm not sure this is the case. There is an authentication mechanism: a security token is sent with a streaming request, and the token is checked before the file content is streamed.
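As a rough illustration of that kind of token check, here is a minimal sketch of an authenticated streaming request over HTTP. The endpoint, track ID, and token are hypothetical and are not Qobuz's actual API:

```python
import requests

API = "https://streaming.example.com/v1"   # hypothetical service endpoint
token = "..."                              # short-lived security token issued at sign-in

# The token travels with the streaming request and is validated by the service
# before any audio bytes are returned.
resp = requests.get(
    f"{API}/tracks/12345/stream",
    headers={"Authorization": f"Bearer {token}"},
    stream=True,
)
resp.raise_for_status()

# Once authorized, the file content itself arrives as ordinary HTTP data.
with open("track.flac", "wb") as out:
    for chunk in resp.iter_content(chunk_size=64 * 1024):
        out.write(chunk)
```

The point is that the check happens up front, before the audio data flows, which is the distinction being drawn in the post above.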

Link to comment

Well it has been established that Universal has "watermarked" streaming versions of albums. So they are going to sound slightly worse than other versions.

There's been discussion here about this before. You can search for it. 

(But, of course, only a paranoid conspiracy theorist would believe that)....

 

Edit: I saw that there's some online discussion that UMG stopped uploading watermarked versions of files to streaming services. Maybe, but it still means you don't know whether what you are streaming is watermarked.

Main listening (small home office):

Main setup: Surge protectors +>Isol-8 Mini sub Axis Power Strip/Protection>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three BXT (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three BXT

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)

Link to comment
17 hours ago, plissken said:

 

This is easily tested. I'm game for testing this if others are. I'll help put in the effort if others will sign on to participate. But @Archimago already did something like this...

 

Doesn't mean it can't be done again.

Hi Plissken 

As you say, it's extremely easy to test. Just throw an Ethernet cable over my bannister rail and connect it between router and server, thereby bypassing all my optimisation measures. And the result is a massive downgrade, like I'd downgraded from some Magico M2s to a very much lesser speaker….ridiculously easy to hear the difference. A complete collapse from the fully immersive, 3-dimensional musicians playing instruments to a much more 2-dimensional presentation. I actually tried this manoeuvre before spending more money on further improvements, just to make sure the effects I was hearing were indeed coming from the network.

 

One thing I would freely admit is that what changes with these network optimisations is the presentation of the music. For example, the main effect of adding better power supplies could be heard in the pace, rhythm and timing and in the dynamics and micro-dynamics of the music, while with better cables and vibration control the effects were mainly on detail recovery, air and atmosphere. By continually improving the network, the sound took on an entirely different nature, becoming holographically 3-dimensional and completely immersive, like you are sitting in the middle of a huge sphere of music, where the venue and its musicians are replayed in a most believable way (assuming the recording has those qualities, obviously). The whole upgrade strategy brings predictable and reproducible improvements.

Link to comment
13 hours ago, firedog said:

Correlating doesn't actually show that anything is happening that improves or degrades SQ.

Can you actually show: a) cause and effect between any of your changes and "less jitter", etc.?

b) If you show changes in (a), does that cause any change at the output of the DAC?

c) Is that change audible? Changes at -115 dB and below don't count. No one can hear them. And I'm being generous.

Hi there firedog,

I suggest you read the very next paragraph I wrote following those ‘correlates’ 

 

“But what we haven’t yet done is qualify exactly what these effects are having on the final music we listen to”

 

Pretty much encapsulates your above comments, I would have thought….. at least, that was my intent.

 

Also, this arbitrary -115 dB….. have a look at the amplitude difference between the left and the right ear for an 85 dB @ 1 m signal sourced on the left-hand side, at a listening position 4 metres away. The differences are very small… on the order of 0.4 dB on a 73 dB signal, and we have absolutely no problem hearing that. No problem = the difference is HUGE, because that's how we assign directionality to the sound source. The question is not “what absolute minimum level can we hear?”….the question should be “what minimal differential amplitude can we detect between our two ears?” I think you'll find that the answers are VERY different, in that we are FAR more sensitive to small differentials in amplitude than we are to overall signal amplitude. And what seems to be changing via these network optimisations is indeed these differentials.
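For reference, the 73 dB figure in that example follows from simple inverse-square spreading (about 6 dB per doubling of distance). A quick sketch of the arithmetic, using the distances and the roughly 0.4 dB interaural difference cited above:

```python
import math

level_at_1m = 85.0   # dB SPL at 1 m, from the example above
distance = 4.0       # listening distance in metres

# Inverse-square spreading loss: the level drops by 20*log10(d2/d1) dB
drop = 20 * math.log10(distance / 1.0)
level_at_listener = level_at_1m - drop
print(f"Level at {distance:.0f} m: {level_at_listener:.1f} dB SPL")   # about 73 dB

# The cited interaural level difference is ~0.4 dB on that ~73 dB signal:
# a differential far smaller than the overall level, yet used to assign direction.
ild = 0.4
print(f"Interaural difference: {ild} dB on a {level_at_listener:.0f} dB SPL signal")
```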

Link to comment
19 hours ago, davide256 said:

But you aren't... please understand that one reason we do digital networks instead of analog network signal transmission is that it eliminates additive noise transmission.

 

No, that's not the case. This is purely a function of software and endpoint hardware optimization. Qobuz legally has to do content protection to prevent piracy, which adds overhead to streaming and caching and creates background network activity demand on the CPU. An offline local file eliminates the network activity demand on the CPU, and a purchased file allows you to bypass the Qobuz app entirely for your preferred/optimal player.

So why does changing from Ethernet to fibre optic make a huge difference in terms of SQ? Why can you hear large improvements when replacing cheap network power supplies with much better linear supplies? Why can you literally transform the sound quality by upgrading network components without touching software or endpoint hardware?

 

and analog noise is an entirely different kettle of fish….nothing to do with what we’re talking about here. 

Link to comment
7 hours ago, Blackmorec said:

As you say, it's extremely easy to test. Just throw an Ethernet cable over my bannister rail and connect it between router and server, thereby bypassing all my optimisation measures. And the result is a massive downgrade,

 

No, this can be done over your existing network. It's called tunneling, or a VPN. It's a logical overlay over your physical infrastructure as it sits.

Link to comment
55 minutes ago, Blackmorec said:

So why does changing from Ethernet to fibre optic make a huge difference in terms of SQ? Why can you hear large improvements when replacing cheap network power supplies with much better linear supplies? Why can you literally transform the sound quality by upgrading network components without touching software or endpoint hardware?

 

and analog noise is an entirely different kettle of fish….nothing to do with what we’re talking about here. 

If it's directly attached to the server or endpoint, it matters for wired noise transmission, which is analog voltage noise. If it's a device in between endpoint switches/routers, only the routing/switching efficiency and jitter come into play.

Digital noise is what you get from D/A conversion. IME, D/A conversion is vulnerable to PS voltage noise interference and is the most important place in the audio chain for preventing PS bus voltage noise from affecting USB sender/receiver circuits.

I haven't had a need to use the FE port on my Etherregen since I switched from the off-brand Ethernet ports on an AMD board to Intel-brand ports on a Z390 board. That said, I also no longer have any jitter/throughput anomalies, having isolated all audio components to one Etherregen.

All you are doing by upgrading power supplies on gear in between is reducing internal network delay/error-correction overhead... but networks don't come with a "buy better gear" light to tell you that.

Regards,

Dave

 

Audio system

Link to comment
3 hours ago, davide256 said:

If it's directly attached to the server or endpoint, it matters for wired noise transmission, which is analog voltage noise. If it's a device in between endpoint switches/routers, only the routing/switching efficiency and jitter come into play.

Digital noise is what you get from D/A conversion. IME, D/A conversion is vulnerable to PS voltage noise interference and is the most important place in the audio chain for preventing PS bus voltage noise from affecting USB sender/receiver circuits.

I haven't had a need to use the FE port on my Etherregen since I switched from the off-brand Ethernet ports on an AMD board to Intel-brand ports on a Z390 board. That said, I also no longer have any jitter/throughput anomalies, having isolated all audio components to one Etherregen.

All you are doing by upgrading power supplies on gear in between is reducing internal network delay/error-correction overhead... but networks don't come with a "buy better gear" light to tell you that.

Interesting!  Can you explain the basic physics behind how a better power supply reduces internal network delay and error correction overhead in terms of what’s causing what?

 

Link to comment
4 hours ago, plissken said:

 

No, this can be done over your existing network. It's called tunneling, or a VPN. It's a logical overlay over your physical infrastructure as it sits.

Too many potential unknown variables. With a cable swap the only variable is the current streaming network vs a direct cable. Easy and simple.

Link to comment
1 hour ago, Blackmorec said:

Interesting!  Can you explain the basic physics behind how a better power supply reduces internal network delay and error correction overhead in terms of what’s causing what?

 

A device with a noisier/min-spec power supply is more likely to error, invoking fault-protection/error-correction processor overhead. Heat is also an enemy.

If an Ethernet frame is bad, the frame is dropped, and the receiving end at the IP layer should detect that a packet was lost; it can either request retransmission of the data from the source or, for audio, dither the missing data if it can't wait for retransmission. The IP layer expects packets to arrive out of sequence, and a missing packet does not trigger an instant retransmit request; there is a timer that has to expire before a retransmission request. Some audiophile player solutions sound poorer if you don't maintain very low network jitter and errors, even with endpoint device song buffers, and there's no certainty why.

 

 

 

 

Regards,

Dave

 

Audio system

Link to comment
22 hours ago, AudioDoctor said:

 

I can see that some record company exec would want streaming services to alter the stream in some way, but I don't see them creating entirely new masters just for streaming. Is there anything else that can cause the differences this mysterious someone may have found? A setting in Audacity?

 

 

 

 

Hello,

 

I don't disagree by any means that it would seem strange for the streamed version to have a separate pressing/version when dealing with an album that is only offered as one choice when playing and purchasing it from the provider.

 

The test was done using free software, so it's possible that the pieces of software needed to pull off the test are not as accurate as some high-dollar dedicated piece of software might be. But that's just me speculating.

 

I hear there are a few challenges present when trying to keep the playing field as even as possible for comparison purposes. One is trying to synchronize the start/stop timestamps of the recording process for both streams. Since one is local to the LAN and the other is coming from the "cloud", additional latency is introduced, which translates into the cloud version taking longer to initialize than the local LAN version.

 

I hear the closest one can probably get to achieving a truly in-sync recording of the two streams with the software involved would be by utilizing the signal-sensing feature in Audacity. What that does is allow the software to automatically start the recording the moment the level of the track/stream rises above a predetermined threshold set by the user. So, as an example, you set that value to something like -60 or -90 dBFS or something along those lines. As soon as the signal hits that level, the recording starts. This applies to the end of the stream/track as well, where it will then stop the recording in the same fashion. Compare that to having to sit there with mouse button in hand, waiting to click the record button the moment your ear hears the stream start playing or fade out at the end.
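A rough sketch of the same idea applied after the fact: trim each capture at the first and last samples that cross a chosen dBFS threshold, so both files start and end on signal rather than silence. This assumes the captures were exported as WAV and that the numpy and soundfile packages are available; file names and the -60 dBFS threshold are just examples:

```python
import numpy as np
import soundfile as sf  # assumes the captures were exported as WAV/FLAC

def trim_to_threshold(path, threshold_dbfs=-60.0):
    """Return the audio trimmed to the first/last samples above the threshold."""
    audio, rate = sf.read(path)
    mono = np.abs(audio).max(axis=1) if audio.ndim > 1 else np.abs(audio)
    linear = 10 ** (threshold_dbfs / 20)    # convert the dBFS threshold to linear amplitude
    above = np.nonzero(mono > linear)[0]
    start, end = above[0], above[-1] + 1    # first and last threshold crossings
    return audio[start:end], rate

local, rate = trim_to_threshold("local_capture.wav")
stream, _ = trim_to_threshold("qobuz_capture.wav")
print(f"Trimmed lengths: local {len(local)/rate:.2f} s, stream {len(stream)/rate:.2f} s")
```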

 

In any case, where this comes into play (I hear) is when looking at the side-by-side .csv Excel output. If the timestamps don't line up exactly, it becomes more challenging to line up and calculate the differences at each point that was captured.

 

But with that said, all of the above shouldn't cause a reduction in output level as long as both streams are piped through the same chain using the same settings and volume levels. So all that is just a long way of saying... I don't know why 🤣

Link to comment
22 hours ago, firedog said:

Well it has been established that Universal has "watermarked" streaming versions of albums. So they are going to sound slightly worse than other versions.

There's been discussion here about this before. You can search for it. 

(But, of course, only a paranoid conspiracy theorist would believe that)....

 

Edit: I saw that there's some online discussion that UMG stopped uploading watermarked versions of files to streaming services. Maybe, but it still means you don't know whether what you are streaming is watermarked.

 

The watermark was also found on purchased and downloaded allegedly lossless files too, so we're still not at the point where there is a separate master for streaming only, and therefore that's not the cause of the difference heard in streaming versus local files.

No electron left behind.

Link to comment
22 minutes ago, cjf said:

Hello,

I don't disagree by any means that it would seem strange for the streamed version to have a separate pressing/version when dealing with an album that is only offered as one choice when playing and purchasing it from the provider.

The test was done using free software, so it's possible that the pieces of software needed to pull off the test are not as accurate as some high-dollar dedicated piece of software might be. But that's just me speculating.

I hear there are a few challenges present when trying to keep the playing field as even as possible for comparison purposes. One is trying to synchronize the start/stop timestamps of the recording process for both streams. Since one is local to the LAN and the other is coming from the "cloud", additional latency is introduced, which translates into the cloud version taking longer to initialize than the local LAN version.

I hear the closest one can probably get to achieving a truly in-sync recording of the two streams with the software involved would be by utilizing the signal-sensing feature in Audacity. What that does is allow the software to automatically start the recording the moment the level of the track/stream rises above a predetermined threshold set by the user. So, as an example, you set that value to something like -60 or -90 dBFS or something along those lines. As soon as the signal hits that level, the recording starts. This applies to the end of the stream/track as well, where it will then stop the recording in the same fashion. Compare that to having to sit there with mouse button in hand, waiting to click the record button the moment your ear hears the stream start playing or fade out at the end.

In any case, where this comes into play (I hear) is when looking at the side-by-side .csv Excel output. If the timestamps don't line up exactly, it becomes more challenging to line up and calculate the differences at each point that was captured.

But with that said, all of the above shouldn't cause a reduction in output level as long as both streams are piped through the same chain using the same settings and volume levels. So all that is just a long way of saying... I don't know why 🤣

 

I am far from an expert on this, but is there a way to figure out the offset via a null test and use that to align them? My "null test" idea would be to invert one file, play both back, and see how far it is from cancelling out the other file. Then, because that offset should be consistent across the entire file, align them somehow with that time-difference information?
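That offset is usually found by cross-correlating the two captures rather than by ear. A minimal sketch of the idea, assuming both captures are WAV files at the same sample rate and that numpy, scipy, and soundfile are installed (file names are hypothetical):

```python
import numpy as np
import soundfile as sf
from scipy.signal import correlate

# Load both captures and fold to mono (hypothetical file names)
local, rate = sf.read("local_capture.wav")
stream, _ = sf.read("qobuz_capture.wav")
local = local.mean(axis=1) if local.ndim > 1 else local
stream = stream.mean(axis=1) if stream.ndim > 1 else stream

# Cross-correlation peaks at the lag where the two captures line up best
corr = correlate(stream, local, mode="full")
lag = int(corr.argmax()) - (len(local) - 1)
print(f"Stream is offset from local by {lag} samples ({lag / rate * 1000:.1f} ms)")

# Shift by that lag, then subtract: the residual is the 'null test' result
if lag >= 0:
    s, l = stream[lag:], local[: len(stream) - lag]
else:
    s, l = stream[: len(stream) + lag], local[-lag:]
n = min(len(s), len(l))
residual = s[:n] - l[:n]
print(f"Residual RMS after alignment: {np.sqrt(np.mean(residual ** 2)):.6f}")
```

Whether the residual is essentially silence or shows broad level differences like the ~2 dB mentioned earlier in the thread is exactly what this kind of alignment would let you check.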

No electron left behind.

Link to comment
7 hours ago, plissken said:

 

ZERO variables. Nothing would change with your setup.

Errrr… I'm having some difficulty with the scientific logic here. If there are no variables, there is no comparison; there has to be at least one variable. The whole point is to make a comparison between two things, so what's the other thing I'm comparing my network with? Surely it's an alternate network….. at least, that's how I understand VPN tunnelling?

Link to comment
1 hour ago, AudioDoctor said:

 

The watermark was also found on purchased and downloaded allegedly lossless files too, so we're still not at the point where there is a separate master for streaming only, and therefore that's not the cause of the difference heard in streaming versus local files.

You actually don't know that as a general rule; you are just assuming it applies across all files. I have UMG files that don't have the watermark (as far as anyone can tell).

You are welcome to any assumption you want, but responding as if your assumptions are the only ones possible or the only ones that make sense has no particular credibility over other scenarios. It's just your personal prejudices. And it shows your comments about paranoia and conspiracy theories are baseless.

Main listening (small home office):

Main setup: Surge protectors +>Isol-8 Mini sub Axis Power Strip/Protection>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three BXT (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three BXT

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)

Link to comment
8 hours ago, plissken said:

 

ZERO variables. Nothing would change with your setup.

Hi Plissken,

I added a lot to my previous post, but the edits were too late and rejected, so in addition to my last post…….

 

Errrr… I'm having some difficulty with the scientific logic here. If there are no variables, there is no comparison; there has to be at least one variable. The whole point is to make a comparison between two things, so what's the other thing I'm comparing my network with? Surely it's an alternate network….. at least, that's how I understand VPN tunnelling? Anyway, let me know the process and I'll take a look. I really am keen to understand why upgrading network components like power supplies and cables has such an impact. I'd like to know what those changes do exactly to the network stream and why those changes have such an impact on sound quality.

You may not trust my ears, but I do, implicitly. For example, when I changed two DC cables on a switch and Wi-Fi bridge from Neotech to Mundorf silver/gold, I first heard an improvement, in that the system hinted at being more 3-dimensional, even holographic… but after a couple of days it was sounding worse, then quite a lot worse, as I needed to up the volume by 2 dB (no idea why). The system then would sound better for a day, worse for several days, then better, then worse… until finally it suddenly sounded stunningly better, at which point it stabilised and remained completely stable in this new, altered state. Along the way the treble presentation changed, the bass presentation changed and, most of all, the 3-dimensional presentation. I didn't analyze the sound to hear those changes… I simply responded with feelings and emotions, loving it one day, finding it irritating the next. In terms of expectations, there were several occasions where it had sounded great and I thought it had finished running in, so my expectation on sitting down for a listen was that it was going to sound great, only for it to sound irritating, with all the magic missing. But in the end I was so impressed with the final sound using those Mundorf DC cables (very kindly made by Nenon, BTW) that I changed the rest of my DC cables and all the internal PS cabling to the Mundorf silver/gold. The result was truly stunning, but the running in was equally long and irritating. I recommended the cables to a friend, who obtained a pair and documented his listening impressions, and he suffered exactly the same running-in rollercoaster as I had.

Over 45 years of building and refining hi-fi systems I've learned that true analytical listening is really quite a skilled process that requires a lot of patience, careful comparisons and a really good, stable reference with which to compare. So instead of using the conscious, analytical part of my brain, which is prone to all sorts of conscious biases, I've learned to depend on the limbic part of my brain and simply monitor how the system is making me feel: joyous, happy, elated, excited, etc., or disappointed, unmoved, uninvolved, disinterested. I don't need to know what specifically changed, just whether my system is more or less magical to listen to. During listening I'll write down adjectives that pop into my mind to describe what I'm hearing and how it's making me feel, so in the end I can write a description of what I heard.

With a science (but not IT) background, I'd very much like to understand what's going on, but I do get really irritated when someone quotes a lot of networking-standards jargon and claims that changes can't happen, because what's actually going on is that theory and practice don't match when the measurement criterion is musical quality. The interesting question is therefore: why?

Link to comment
14 hours ago, davide256 said:

A device with a noisier/min-spec power supply is more likely to error, invoking fault-protection/error-correction processor overhead. Heat is also an enemy.

If an Ethernet frame is bad, the frame is dropped, and the receiving end at the IP layer should detect that a packet was lost; it can either request retransmission of the data from the source or, for audio, dither the missing data if it can't wait for retransmission. The IP layer expects packets to arrive out of sequence, and a missing packet does not trigger an instant retransmit request; there is a timer that has to expire before a retransmission request. Some audiophile player solutions sound poorer if you don't maintain very low network jitter and errors, even with endpoint device song buffers, and there's no certainty why.

Thanks Davide, a very concise answer to my question…..but here’s the gotcha, and I’m honestly not trying to catch you out, just looking for an answer. 

Let's say that I replace the cheap-as-chips SMPS on the final switch before the server with a DC3, a double-regulated supply and still one of the finest LPSs available. I've done this, and the result is an almighty leap in sound quality. Great. So now let's say I replace that DC3 LPS with the new kid on the block, the double-regulated DC4. I've done this too, and the result was another mighty leap in sound quality… so do you think that the same mechanism you describe can still be responsible, given the close-to-SoTA quality of the DC3, or is there something else going on whereby you can CLEARLY hear the quality of the LPS in the final presentation? I'd be quite happy to find out that the high level of sound improvement has nothing to do with the ACTUAL data stream, but what is going on that I can very clearly hear major improvements in presentation between different power supplies, cables, the addition of anti-vibration measures, etc.? When I went into digital streaming, I believed that as long as the stream wasn't error-prone, things like cables and power supplies, which make major differences in analog, would make no difference in digital, yet in reality the replacement of a DC cable between the LPS and the switch is clearly audible. Transformative, even. I would love to be able to demonstrate these effects to you, so you can hear what I'm hearing. What's more, all these effects are additive, such that a network where every component is optimized sounds way, way superior to one built with ordinary patch cords, an improvement that in analog would need more than one very major component upgrade to achieve. And here's what's even weirder. With the very simple patch-cord LAN, there's a major difference between local and remote streamed files, but as you improve the quality of the network that gap closes, significantly, to the point it's virtually undetectable. But with ALL the upgrades and consequential SQ improvements, the remote-file SQ should, at least theoretically, overtake the local-file replay, yet that never happens. Why? Because as the remote-file replay improves, so does the local. That's the part I really don't understand, but I'm far from the only person to have found this, which makes me wonder if the network improvements have nothing to do with the data per se and are more to do with other co-generated, co-transmitted effects that ripple through the network to disturb some key audio processes.

Link to comment
1 hour ago, Blackmorec said:

Thanks Davide, a very concise answer to my question…..but here’s the gotcha, and I’m honestly not trying to catch you out, just looking for an answer. 

Let's say that I replace the cheap-as-chips SMPS on the final switch before the server with a DC3, a double-regulated supply and still one of the finest LPSs available. I've done this, and the result is an almighty leap in sound quality. Great. So now let's say I replace that DC3 LPS with the new kid on the block, the double-regulated DC4. I've done this too, and the result was another mighty leap in sound quality… so do you think that the same mechanism you describe can still be responsible, given the close-to-SoTA quality of the DC3, or is there something else going on whereby you can CLEARLY hear the quality of the LPS in the final presentation? I'd be quite happy to find out that the high level of sound improvement has nothing to do with the ACTUAL data stream, but what is going on that I can very clearly hear major improvements in presentation between different power supplies, cables, the addition of anti-vibration measures, etc.? When I went into digital streaming, I believed that as long as the stream wasn't error-prone, things like cables and power supplies, which make major differences in analog, would make no difference in digital, yet in reality the replacement of a DC cable between the LPS and the switch is clearly audible. Transformative, even. I would love to be able to demonstrate these effects to you, so you can hear what I'm hearing. What's more, all these effects are additive, such that a network where every component is optimized sounds way, way superior to one built with ordinary patch cords, an improvement that in analog would need more than one very major component upgrade to achieve. And here's what's even weirder. With the very simple patch-cord LAN, there's a major difference between local and remote streamed files, but as you improve the quality of the network that gap closes, significantly, to the point it's virtually undetectable. But with ALL the upgrades and consequential SQ improvements, the remote-file SQ should, at least theoretically, overtake the local-file replay, yet that never happens. Why? Because as the remote-file replay improves, so does the local. That's the part I really don't understand, but I'm far from the only person to have found this, which makes me wonder if the network improvements have nothing to do with the data per se and are more to do with other co-generated, co-transmitted effects that ripple through the network to disturb some key audio processes.

Not all software is created equal for network performance. Even with an Oppo 103 I can hear a clear superiority of SMB file access over UPnP access for local streaming.

Euphony offers a feature to "turn off the network" during buffered queue playback, to prevent network processing from degrading sound.

 

If you hear a difference, it's because the receiver SW/HW solution has a weakness for when the network errors or adds unpredictable overhead during playback. I don't disagree that you are hearing what you hear with the receiver solution used, but trying to assert that the reason is end-to-end analog voltage noise transmission is a false path, like trying to explain electron tunneling in a transistor using E&M theory without quantum mechanics.

 

The basic reason local can sound better is that packet retransmission/sequencing occurs locally with minimal latency, a few milliseconds at most, vs 20-100 ms over a long-haul landline national network. If audio playback uses a normal file transfer for whole tracks before play, with error checking, network performance is largely removed from the equation other than the delay before the start of a track. The programs that try to start playing before the current track is fully downloaded are more challenged by network performance issues during playback.
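As a rough sketch of the distinction being drawn here, contrasting fetch-the-whole-track-first with progressive playback; the function names and buffer size are illustrative, not any particular player's implementation:

```python
import urllib.request

def play(pcm_bytes):
    """Placeholder for handing decoded audio to the output device."""
    pass

# Approach 1: fetch the whole track first, then play from local memory/disk.
# Error checking and any retransmission happen before playback starts, so network
# performance only affects the delay before the first note.
def prefetch_then_play(url):
    with urllib.request.urlopen(url) as resp:
        track = resp.read()          # complete transfer; TCP handles retransmission
    play(track)

# Approach 2: start playing while the track is still downloading.
# Every stall, retransmission, or jitter spike during the song now has to be
# absorbed by the playback buffer, or it becomes audible as a dropout.
def progressive_play(url, chunk_size=64 * 1024):
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            play(chunk)
```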

 

Regards,

Dave

 

Audio system

Link to comment
9 hours ago, Blackmorec said:

Errrr… I'm having some difficulty with the scientific logic here. If there are no variables, there is no comparison; there has to be at least one variable. The whole point is to make a comparison between two things, so what's the other thing I'm comparing my network with? Surely it's an alternate network….. at least, that's how I understand VPN tunnelling?

 

I responded in the context of what you considered to be changing variables in your setup as far as a hardware change-up goes (that you would run a cable?), and that is what I meant by ZERO change to the variables in your physical layout.

 

I'm only saying we can introduce a VPN tunnel to the mix with the same copy of a song stored locally and one hosted.

 

Does that clear it up?

Link to comment
2 hours ago, davide256 said:

If audio playback uses a normal file transfer for whole tracks before play, with error checking, network performance is largely removed from the equation other than the delay before the start of a track. The programs that try to start playing before the current track is fully downloaded are more challenged by network performance issues during playback.

 

Yep. This all goes back to statements I made about what constitutes best practice. Even in networking we have RDMA (Remote Direct Memory Access), where two NICs can set up a channel and go from RAM block to RAM block.

 

iPerf does something like this, where it sets up a RAM buffer on the server and client side so the disk subsystem isn't involved when measuring how much network throughput can be achieved.

 

With JRiver and their killer buffering options, their ability to abstract themselves from the lower layers literally means it is a playback system that is feasibly capable of taking advantage of full wire rate, and even of exceeding disk I/O limitations, if you were somehow crazy enough to put together a battery-backed-up system with a few TB of RAM to store all your music (you'd probably want the ECC buffered type for that 😉).

Link to comment
