
Ars prepares to put “audiophile” Ethernet cables to the test in Las Vegas


Recommended Posts

(Very OT)

Maybe not. Accusations of collusion, corruption, and cheating abound in every other high-dollar industry, and many audio publications have been tainted over the years. Although I find it hard to believe that high-end cable manufacturers make intentionally fraudulent claims or collude to keep prices up, it's not beyond the realm of possibility.

Link to comment
Uh... really... what & where & why do you think that applies?

If it affects the accuracy of data transfer, it could affect sound quality. Although packet order wouldn't be affected, other factors, like variance in packet spacing/timing, might be. This is an entirely untested hypothesis that hardly seems worthy of sarcasm & scorn, my friend.

Link to comment

No, you stated that Cisco provides evidence of this. Please produce it, that's all.

 

You first say that Cisco has published, then say that it's possible but untested. Those are two radically different statements -- entirely aside from whether it's possible.


Link to comment
No, you stated that Cisco provides evidence of this. Please produce it, that's all.

 

You first say that Cisco has published, then say that it's possible but untested. Those are two radically different statements -- entirely aside from whether it's possible.

1. Latency may have a beneficial effect on internet forums... Despite your zeal to criticize, you really should read more slowly, carefully, and accurately before responding. What I said was untested is my hypothesis that latency effects could affect sound quality. Latency and high-speed trading accuracy are known to be associated, and several hypotheses are offered as to why. It may, in fact, be that the market price simply changes during transit, so that a trade entered at a specific price cannot be completed because of a price change between order transmission and order processing. But accuracy of data transmission may also be the cause, since a trade shouldn't be completed if bid & ask prices don't match. I don't know, and I don't care. I have no desire to dig any further into this, as it's both way OT and not relevant to the discussion I thought we were having.

 

2. Start with this white paper from Cisco. My statement that "Cisco has published evidence that ultra low latency also reduces the incidence of actual errors" was made in the context of high-speed trading, not computer audio. This paper is the main source of my information.

 

3. Then read this one (Pragmatic Network Latency Engineering: Fundamental Facts and Analysis) if you're up for 31 pages of technodetail. There's a table of the many effects on data transit, and the one that seems to me to be of the greatest potential impact on sound quality is "asymmetric network transit times and random jitter impact on clock synchronization". But again, this is my hypothesis, and I can find no evidence that it's been tested by anyone for its relevance to computer audio.

 

4. You'll catch more flies with honey than vinegar.

Link to comment

 

3. Then read this one (Pragmatic Network Latency Engineering: Fundamental Facts and Analysis) if you're up for 31 pages of technodetail. There's a table of the many effects on data transit, and the one that seems to me to be of the greatest potential impact on sound quality is "asymmetric network transit times and random jitter impact on clock synchronization". But again, this is my hypothesis, and I can find no evidence that it's been tested by anyone for its relevance to computer audio.

 

4. You'll catch more flies with honey than vinegar.

 

You'll also sound more intelligent if you understand what you are reading.

 

You need to define WHAT clock the 31 pages of technodetail is referring to. I'll give you a hint: It's not the clock on a DAC.

 

Jitter on the Ethernet side, whether wired, wireless, or optical, matters naught unless the timing variance is huge enough to cause a buffer underrun.

 

Computers and streamers do not stream in real time off of Ethernet. They stream data out of a buffer, one that is filled in an as-needed, intermittent fashion.

 

Buffer always kept full? Great: then any variance on the Ethernet side that is less than the buffer size (that is, that doesn't underrun the buffer) will have absolutely zero, zilch, nada effect on playback.

 

This is an absolute.
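
A minimal sketch of that claim in Python, with made-up rates rather than any particular device's numbers: the playback side reads from the buffer on its own fixed clock, the network side fills it in irregular bursts, and as long as the buffer never empties, the arrival jitter never shows up in playback.

```python
import random

# Toy model (hypothetical numbers, not any particular device): playback
# drains a buffer on its own fixed clock; the network fills it in
# irregular, jittery bursts. As long as the buffer never empties,
# arrival-time jitter cannot influence when samples are played.
DRAIN_PER_MS = 176          # ~CD rate: 176,400 bytes/s -> 176 bytes per 1 ms tick
PACKET = 1_400              # bytes per Ethernet-ish packet
TARGET = 176_400 * 20       # a 20-second buffer

random.seed(1)
level = TARGET              # start full
underruns = 0

for tick in range(60_000):  # one minute of 1 ms playback ticks
    # Playback side: metronomic, driven by the local clock only.
    if level < DRAIN_PER_MS:
        underruns += 1      # only an *empty* buffer can disturb playback
    else:
        level -= DRAIN_PER_MS
    # Network side: arrivals are irregular and bursty (jitter), but on
    # average far faster than the drain, so the buffer stays topped up.
    if random.random() < 0.3:
        level = min(TARGET, level + random.randint(0, 50) * PACKET)

print("underruns:", underruns)   # prints 0: the jitter was fully absorbed
```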

Link to comment
You'll also sound more intelligent if you understand what you are reading.

Wow - another real sweet guy.....

 

I'm happy to learn from anyone willing to share information. You might try offering some - you'll sound more intelligent if you do (that's a hint too). If you know something I/we don't, it might be nice to let us in on the secret.

Link to comment
Wow - another real sweet guy.....

 

I'm happy to learn from anyone willing to share information. You might try offering some - you'll sound more intelligent if you do.

 

I just did. You have no idea what you are talking about.

 

How does jitter on Ethernet affect playback out of a buffer?

Link to comment
Do you think communication over the ethernet link stops while the buffer is being emptied or what exactly is your understanding?

 

Ethernet isn't a real-time protocol. It's bursty by its very nature.

 

Let me ask you this:

 

Take a server and a client PC. Place an Ethernet cable between them. Set up an SMB share called 'music'.

 

Either map a drive letter to \\server\music or just use the UNC path.

 

Fire up JRiver or Foobar and set the buffer to 20 seconds.

 

Two takeaways:

 

1. The music will start playing as soon as data hits the buffer.

2. The buffer will fill, so you can pull the Ethernet plug and play for 15 seconds. Plug the cable in for 5 seconds, pull it, play for 15, plug it in for 5. Rinse, repeat.

 

You will find that you can play an entire album this way with no interruption.
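
The arithmetic behind that experiment can be sketched in a few lines of Python. This is a toy model with assumed rates and a hypothetical low-water mark, not a description of JRiver's or Foobar's actual buffering logic: a CD-rate stream over gigabit Ethernet refills the buffer vastly faster than playback drains it, so the 15-seconds-off / 5-seconds-on cycle never underruns a 20-second buffer.

```python
# Toy model of the plug-pulling experiment (made-up rates, not a claim
# about any real player's buffering logic).
BUFFER_CAP = 20.0      # seconds of audio the player buffers
WIRE_SPEED = 100.0     # seconds of audio fetched per wall-clock second
                       # (a 1.4 Mbps CD stream over ~gigabit Ethernet is
                       # roughly this lopsided)
REFILL_BELOW = 19.0    # hypothetical low-water mark: the player only asks
                       # for more data once the buffer drops below this

buffered = BUFFER_CAP  # start with a full buffer
played = 0.0
dropouts = 0

for second in range(45 * 60):            # a 45-minute album, 1 s steps
    cable_in = (second % 20) >= 15       # 15 s unplugged, 5 s plugged in
    if buffered >= 1.0:
        buffered -= 1.0                  # playback drains 1 s of audio
        played += 1.0
    else:
        dropouts += 1                    # buffer underrun -> audible gap
    if cable_in and buffered < REFILL_BELOW:
        # Bursty, as-fast-as-the-wire-allows refill, then silence again.
        buffered = min(BUFFER_CAP, buffered + WIRE_SPEED)

print(f"played {played/60:.0f} min, dropouts: {dropouts}")  # dropouts: 0
```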

 

Data transmission will indeed stop for that buffer operation until a threshold event occurs and more data is requested.

 

The data transmission rate is not going to throttle down to match the speed of audio playback.

 

SMB is designed for wire saturation, not throttling. In fact, with SMB 3, multiple connections are now aggregated automatically, so if you have two GbE NICs you would get around 180 MB/s of throughput.

 

It's called SMB Multichannel.

Link to comment
You'll also sound more intelligent if you understand what you are reading.

 

You need to define WHAT clock the 31 pages of technodetail is referring to. I'll give you a hint: It's not the clock on a DAC.

 

Jitter on the Ethernet side, whether wired, wireless, or optical, matters naught unless the timing variance is huge enough to cause a buffer underrun.

 

Computers and streamers do not stream in real time off of Ethernet. They stream data out of a buffer, one that is filled in an as-needed, intermittent fashion.

 

Buffer always kept full? Great: then any variance on the Ethernet side that is less than the buffer size (that is, that doesn't underrun the buffer) will have absolutely zero, zilch, nada effect on playback.

 

This is an absolute.

 

The "technobabble" seemed basically sound. I've worked in the field of computer networks, protocol design, router and switch architecture and network performance analysis and measurement and had a bunch of PhD level researchers reporting to me on this stuff, so I didn't see anything unusual in the paper. As far as I can tell, the recent extreme concern about latency is relevant to money robbing applications. I figured out a few years ago how to make distributed automated trading systems fair without concern over latency, but I concluded that there would be no way to get this implemented because the players want to cheat, having become used to an unlevel playing field for 100 years.

 

At least with audio, people want to get good sound, provided it doesn't cost them too much money... In the case of audio there are a number of protocols which are not particularly latency sensitive, provided buffers are big enough. This includes all of the streaming music services and all the DLNA stuff.

 

As far as I know, the only latency-critical music applications that would need tight control of latency and symmetric propagation delay are those based on AES67, which requires precise real-time clock synchronization. How critical this would be to sound quality will depend on how a network is configured, the algorithms used for clock synchronization, and the hardware implementations thereof as they impact DAC master clocks (location of the master clock close to the ADC or DAC; details of the PLL, DPLL, or frequency synthesizer; etc.). As I mentioned earlier, tighter latency requirements apply to interactive audio applications due to human-factor issues, but these don't apply for straight recording or straight playback applications.
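
For the clock-synchronization point, here is the standard two-way time-transfer calculation used by IEEE 1588 PTP (the mechanism AES67 builds on), sketched in Python with invented timestamps. The estimate assumes symmetric path delays; an asymmetric path biases the recovered offset by half the asymmetry, which is exactly why asymmetric transit times matter to networked clock recovery.

```python
# Two-way time transfer as in IEEE 1588 / PTP (which AES67 relies on),
# with made-up numbers. t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.
true_offset = 250e-6        # slave clock runs 250 us ahead (ground truth)
d_ms, d_sm = 120e-6, 80e-6  # asymmetric one-way delays (master->slave, back)

t1 = 1.000000
t2 = t1 + d_ms + true_offset    # slave timestamps in its own (offset) clock
t3 = t2 + 0.001                 # slave replies 1 ms later
t4 = t3 - true_offset + d_sm    # master timestamps in its own clock

# PTP's estimate assumes the path is symmetric:
est_offset = ((t2 - t1) - (t4 - t3)) / 2
est_delay = ((t2 - t1) + (t4 - t3)) / 2

print(f"true offset {true_offset*1e6:.0f} us, "
      f"estimated {est_offset*1e6:.0f} us")   # off by (d_ms - d_sm)/2 = 20 us
print(f"mean path delay estimate {est_delay*1e6:.0f} us")
```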

Link to comment
The "technobabble" seemed basically sound. I've worked in the field of computer networks, protocol design, router and switch architecture and network performance analysis and measurement and had a bunch of PhD level researchers reporting to me on this stuff, so I didn't see anything unusual in the paper. As far as I can tell, the recent extreme concern about latency is relevant to money robbing applications. I figured out a few years ago how to make distributed automated trading systems fair without concern over latency, but I concluded that there would be no way to get this implemented because the players want to cheat, having become used to an unlevel playing field for 100 years.

 

At least with audio, people want to get good sound, provided it doesn't cost them too much money... In the case of audio there are a number of protocols which are not particularly latency sensitive, provided buffers are big enough. This includes all of the streaming music services and all the DLNA stuff.

 

As far as I know, the only latency-critical audio applications that would need tight control of latency and symmetric propagation delay are those based on AES67, which requires precise real-time clock synchronization. How critical this would be to sound quality will depend on how a network is configured, the algorithms used for clock synchronization, and the hardware implementations thereof as they impact DAC master clocks (location of the master clock close to the ADC or DAC; details of the PLL, DPLL, or frequency synthesizer; etc.).

 

And none of it applies to non-real-time listening to a file over a network.

 

None of it affects the DAC's master clock.

 

A stored file, whether on an HD or in a buffer, contains no jitter. Period. Jitter is a real-time phenomenon. Audio playback from a server/NAS or even a local HD is not real time.

 

Since I'm obviously full of B.S., read Steve Nugent's rather well-written article:

jitter

 

Pay attention to the section labeled:

 

Jitter and Networked audio

 

Networked audio (Ethernet), both wired and WiFi is a unique case. Because the data is transmitted in packets with flow-control, re-try for errors and buffering at the end-point device, it is not as much of a real-time transfer as USB, S/PDIF or Firewire. The computer transmitting the data packets must still "keep-up" the pace to prevent dropouts from occurring, but the real-time nature of the transfer is looser. Unlike with other protocols, there can be dead-times when no data is being transferred. Networking also avoids the use of the audio stack of the computer audio system since it treats all data essentially the same. This avoids kmixer on XP systems and the audio stacks on Mac and PC Vista. Because of the packet-transfer protocol of Ethernet and data buffering at the end-point, the jitter of the clock in the computer is a non-issue. The only clock that is important is the one in the end-point device. Examples of end-point devices are: Squeezebox, Duet and Sonos. This would seem to be the ideal situation, which it certainly is. The only problem that can occur is overloading the network with traffic or WiFi interference, which may cause occasional dropouts. The problem for audiophiles is that the majority of these end-point devices were designed with high-volume manufacturing and low-cost as requirements, with performance taking a lower priority. As a result, the jitter from these devices is higher than it could be. It should be the lowest of all the audio source devices available.

Link to comment
And none of it applies to non-real-time listening to a file over a network.

 

None of it affects the DAC's master clock.

 

A stored file, whether on an HD or in a buffer, contains no jitter. Period. Jitter is a real-time phenomenon. Audio playback from a server/NAS or even a local HD is not real time.

 

Since I'm obviously full of B.S., read Steve Nugent's rather well-written article:

jitter

 

Pay attention to the section labeled:

 

Jitter and Networked audio

 

Networked audio (Ethernet), both wired and WiFi is a unique case. Because the data is transmitted in packets with flow-control, re-try for errors and buffering at the end-point device, it is not as much of a real-time transfer as USB, S/PDIF or Firewire. The computer transmitting the data packets must still "keep-up" the pace to prevent dropouts from occurring, but the real-time nature of the transfer is looser. Unlike with other protocols, there can be dead-times when no data is being transferred. Networking also avoids the use of the audio stack of the computer audio system since it treats all data essentially the same. This avoids kmixer on XP systems and the audio stacks on Mac and PC Vista. Because of the packet-transfer protocol of Ethernet and data buffering at the end-point, the jitter of the clock in the computer is a non-issue. The only clock that is important is the one in the end-point device. Examples of end-point devices are: Squeezebox, Duet and Sonos. This would seem to be the ideal situation, which it certainly is. The only problem that can occur is overloading the network with traffic or WiFi interference, which may cause occasional dropouts. The problem for audiophiles is that the majority of these end-point devices were designed with high-volume manufacturing and low-cost as requirements, with performance taking a lower priority. As a result, the jitter from these devices is higher than it could be. It should be the lowest of all the audio source devices available.

 

Apparently, you haven't looked at AES67 or RAVENNA, where, for example, a DAC clock could come from various places. As with AES/EBU, the clock could be located at the source, at an external master clock, or at the DAC. This is just AES/EBU and S/PDIF clocking repeated all over again, this time using Ethernet.

 

I am getting the impression that some people here are "bits are just bits" people. I am not one of these people. My slogan is "Bits should be just bits, but they aren't at present."

Link to comment
Ethernet isn't a real-time protocol. It's bursty by its very nature.

 

Let me ask you this:

 

Take a server and a client PC. Place an Ethernet cable between them. Set up an SMB share called 'music'.

 

Either map a drive letter to \\server\music or just use the UNC path.

 

Fire up JRiver or Foobar and set the buffer to 20 seconds.

 

Two takeaways:

 

1. The music will start playing as soon as data hits the buffer.

2. The buffer will fill, so you can pull the Ethernet plug and play for 15 seconds. Plug the cable in for 5 seconds, pull it, play for 15, plug it in for 5. Rinse, repeat.

 

You will find that you can play an entire album this way with no interruption.

 

Data transmission will indeed stop for that buffer operation until a threshold event occurs and more data is requested.

 

The data transmission rate is not going to throttle down to match the speed of audio playback.

 

SMB is designed for wire saturation, not throttling. In fact, with SMB 3, multiple connections are now aggregated automatically, so if you have two GbE NICs you would get around 180 MB/s of throughput.

 

It's called SMB Multichannel.

 

I'm not talking about it being a real-time protocol - I know it isn't.

I'm talking about two possible noise sources that are concurrent with playback of audio data from the buffer: noise on the Ethernet cable, and poor signal integrity of the data on the Ethernet cable forcing more work by the PHY and hence self-generated noise. Signal integrity is any shift from the ideal in the differential waveform. There's a tolerance level defined for error-free Ethernet signalling, but that doesn't mean the signal is therefore noise-free.

 

Your experiment may well be worth trying for anyone using wired Ethernet, but as Tony says, it needs a more controlled approach to the experimental design.

 

I don't have network-connected audio atm, but I have played around with a Squeezebox, and there certainly was a difference between WiFi & Ethernet as far as SQ was concerned - in fact, unless you turned off the WiFi side of things, it caused a reduction in SQ even when using wired Ethernet.

 

I've never tried any Ethernet cables, but I can understand a possible operational explanation for differences in SQ.

 

Some "audiophile" cable pricing is the thing that gets most people up in arms about this - if all these cables were <$50 we would not have the flame wars that we see

Link to comment
Apparently, you haven't looked at AES67 or RAVENNA, where, for example, a DAC clock could come from various places. As with AES/EBU, the clock could be located at the source, at an external master clock, or at the DAC. This is just AES/EBU and S/PDIF clocking repeated all over again, this time using Ethernet.

 

I am getting the impression that some people here are "bits are just bits" people. I am not one of these people. My slogan is "Bits should be just bits, but they aren't at present."

 

You are confusing yourself. I did professional edit suite installs. We used to install house clock to get audio, video, and switchers all on house sync.

 

This isn't needed for Ethernet-based transmission. There IS no clocking data on Ethernet!

 

I've just shot a video and will see if I can upload it here to prove my point.

 

You have ZERO clue what you are talking about.

Link to comment
1. Latency may have a beneficial effect on internet forums... Despite your zeal to criticize, you really should read more slowly, carefully, and accurately before responding. What I said was untested is my hypothesis that latency effects could affect sound quality. Latency and high-speed trading accuracy are known to be associated, and several hypotheses are offered as to why. It may, in fact, be that the market price simply changes during transit, so that a trade entered at a specific price cannot be completed because of a price change between order transmission and order processing. But accuracy of data transmission may also be the cause, since a trade shouldn't be completed if bid & ask prices don't match. I don't know, and I don't care. I have no desire to dig any further into this, as it's both way OT and not relevant to the discussion I thought we were having.

 

Ok, fair enough. It's the former, whereas the latter -- CRC errors in the packets -- cause packets to be dropped and retransmitted rather than allowing actual data errors to be transmitted over the network. The number of faulty (and hence dropped) packets is easy to actually measure, so it doesn't need to be a mystery. This is actually relevant to your original hypothesis: that network jitter affects audio SQ. If you want to test the effects of ultra low latency, you can pick up an InfiniBand switch, a few cards, and a cable and test it out.
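
A stdlib-only sketch of that drop-and-retransmit behavior (a toy channel model, not real NIC code; Ethernet computes its CRC-32 frame check sequence in hardware, but the logic is the same): corrupted frames fail the CRC and are discarded, so the application sees either bit-perfect data or a retransmission delay, never flipped bits.

```python
import random
import zlib

def send_frame(payload: bytes) -> tuple[bytes, int]:
    """Transmitter appends a CRC-32 over the payload (as Ethernet's FCS does)."""
    return payload, zlib.crc32(payload)

def maybe_corrupt(payload: bytes, ber: float) -> bytes:
    """Flip each bit independently with probability `ber` (crude channel model)."""
    bits = bytearray(payload)
    for i in range(len(bits)):
        for b in range(8):
            if random.random() < ber:
                bits[i] ^= 1 << b
    return bytes(bits)

def deliver(payload: bytes, ber: float) -> bytes:
    """Receiver drops any frame whose CRC fails; dropped frames are resent.
    The application therefore sees bit-perfect data or nothing -- never errors."""
    attempts = 0
    while True:
        frame, fcs = send_frame(payload)
        received = maybe_corrupt(frame, ber)
        attempts += 1
        if zlib.crc32(received) == fcs:
            print(f"delivered intact after {attempts} attempt(s)")
            return received
        # CRC mismatch: frame discarded; a higher layer (e.g. TCP) retransmits.

random.seed(7)
data = bytes(random.randrange(256) for _ in range(1400))  # one packet's worth
assert deliver(data, ber=1e-4) == data
```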

 

There is actually a lot of literature that deals with jitter and networking.

 

1) Consider jitter as a "blur" of a signal that increases with distance traveled. This becomes a limiting factor in the distance over which a network cable can carry a signal. In an electrical signal, as the signal travels down a cable it blurs/widens, and then a bit of noise can affect the time at which it transitions from 0 to 1. In an optical signal, the light beams bounce off the walls of the fiber (a simplification, but nonetheless) and similarly widen the pulse. So very precise clocking is very important for high-speed networks -- that's what those OCXO and low-jitter TCXO clocks are really made for. (A toy numeric sketch of this appears at the end of this post.)

 

2) Now consider 40G and 100G Ethernet, where 4 parallel lanes are tied together... if they're out of sync, it doesn't work so well.

 

3) "green ethernet" is based upon the fact that, keeping the same data rate, s/n etc., there is a lesser need for power with shorter cables (and presumably lower jitter) and clearly there is a mechanism whereby a lower powered interface might be less "noisy"

 

In any case, I agree with you that these effects should be looked at.
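
Item 1) can be made concrete with a toy calculation (arbitrary numbers, not measurements of any real link): take a linear rising edge with a given rise time, add amplitude noise, and look at how far the threshold-crossing time wanders. The more the edge has "blurred" (the slower the rise), the more timing jitter the same noise produces.

```python
import random
import statistics

def crossing_time(rise_time_ns: float, noise_rms: float) -> float:
    """Time (ns) at which a noisy linear 0-to-1 edge crosses the 0.5 threshold."""
    # The ideal edge ramps from 0 to 1 over rise_time_ns, crossing 0.5 at the
    # midpoint. Additive noise v at the comparator shifts the crossing by
    # -v * rise_time_ns (the slope of the edge is 1 / rise_time_ns).
    v = random.gauss(0.0, noise_rms)
    return rise_time_ns / 2 - v * rise_time_ns

random.seed(3)
for rise in (0.5, 2.0, 8.0):      # ns; longer cable -> more "blur" -> slower edge
    times = [crossing_time(rise, noise_rms=0.02) for _ in range(100_000)]
    jitter_ps = statistics.stdev(times) * 1e3
    print(f"rise time {rise} ns -> timing jitter ~{jitter_ps:5.0f} ps")
# Same noise, blurrier edge, proportionally worse timing jitter.
```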


Link to comment

Here you go, guys. Proof that Ethernet isn't real time even though you can hear the music.

 

Please keep in mind this is low-brow wireless for both the server and client computer, and the file bit rate is 9216 kbps (which works out to 24/192 stereo PCM); 1411 kbps is full-rate 16/44.1 (CD/Redbook).

 

Some of you may have to accept that others have the ability to teach you something, just as I learn from others all the time.

 

Link to comment
Ok, fair enough. It's the former, whereas the latter -- CRC errors in the packets -- cause packets to be dropped and retransmitted rather than allowing actual data errors to be transmitted over the network. The number of faulty (and hence dropped) packets is easy to actually measure, so it doesn't need to be a mystery. This is actually relevant to your original hypothesis: that network jitter affects audio SQ. If you want to test the effects of ultra low latency, you can pick up an InfiniBand switch, a few cards, and a cable and test it out.

 

There is actually a lot of literature that deals with jitter and networking.

 

1) Consider jitter as a "blur" of a signal that increases with distance traveled. This becomes a limiting factor in the distance over which a network cable can carry a signal. In an electrical signal, as the signal travels down a cable it blurs/widens, and then a bit of noise can affect the time at which it transitions from 0 to 1. In an optical signal, the light beams bounce off the walls of the fiber (a simplification, but nonetheless) and similarly widen the pulse. So very precise clocking is very important for high-speed networks -- that's what those OCXO and low-jitter TCXO clocks are really made for.

 

2) Now consider 40G and 100G Ethernet, where 4 parallel lanes are tied together... if they're out of sync, it doesn't work so well.

 

3) "green ethernet" is based upon the fact that, keeping the same data rate, s/n etc., there is a lesser need for power with shorter cables (and presumably lower jitter) and clearly there is a mechanism whereby a lower powered interface might be less "noisy"

 

In any case, I agree with you that these effects should be looked at.

 

And again, it has ZERO to do with playback out of a buffer.

 

Again, jitter on Ethernet does nothing to affect the clock on the DAC.

 

See the paper I linked to. Read, comprehend, then understand.

Link to comment

Some "audiophile" cable pricing is the thing that gets most people up in arms about this - if all these cables were <$50 we would not have the flame wars that we see

+1

 

Continues to surprise me that there is more listening to Ethernet cables than to switches or NICs or drivers.


Link to comment
Here you go, guys. Proof that Ethernet isn't real time even though you can hear the music.

 

Some of you may have to accept that others have the ability to teach you something, just as I learn from others all the time.

Then you should have learned by now that nobody thinks Ethernet is real-time!!

Link to comment
And again, it has ZERO to do with playback out of a buffer.

 

Again, jitter on Ethernet does nothing to affect the clock on the DAC.

 

See the paper I linked to. Read, comprehend, then understand.

 

You fail in your ability to think in system terms - Ethernet data delivery is not some isolated event that occurs apart from the rest of the system. Ethernet receiver chips in an audio device have an intimate connection with the downstream sensitive analogue systems, such as the DAC clock & DAC output stages, via the ground plane.

Link to comment
Then you should have learned by now that nobody thinks Ethernet is real-time!!

 

But they do. Or they wouldn't argue that jitter on an Ethernet cable is going to affect sound quality on a buffered device, whether it be a PC/Mac/Linux box or a network-attached DAC.

 

Your own post #159 just stipulated that it's real time. You said "Do you think communication over the ethernet link stops while the buffer is being emptied or what exactly"

 

I didn't state "I think" or "what exactly". I know, not think. Watch the video.

 

Yes, communication over the Ethernet link stopped. For crying out loud...

Link to comment
Ethernet isn't a real-time protocol. It's bursty by its very nature.

Digital signals are "burst" by their very nature. Whether cached or continuously streamed, the number of interruptions in the signal is at least equal to the total number of packets transferred, minus 1.

SMB is designed for wire saturation, not throttling. In fact, with SMB 3, multiple connections are now aggregated automatically, so if you have two GbE NICs you would get around 180 MB/s of throughput.

 

It's called SMB Multichannel.

What does this have to do with jitter and latency in computer audio? SMB is a file access & transfer protocol at the application level (Samba is an open-source implementation of it). Multichannel facilitates load balancing by enabling multiple connections, e.g. in a data center. If there's more, please explain... nicely. I'd like to learn whatever there is to learn.

Link to comment
