
Network card Clock upgrade


Recommended Posts

It's tricky -- first of all, the clocks on real Intel NICs are already very, very good -- check out the Intel X520.

 

Paul Pang starts out with really, really cheap hardware -- ugh, not sure that's advisable. If you go fiber optic you get better clocks than with a battery or LPS supply, and that's better than a good clock with a bad supply.

Custom room treatments for headphone users.

jabbr - Any thoughts about how/why/if an improved (presumably) clock in a NIC might affect SQ?

 

If you are inviting me to speculate, well then ... ;)

 

Let's start at the DAC and work our way back. For DSD let's use the Signalyst DSC1 (because the discrete design is simple and published); the discussion thread is here: Signalyst DSC1 - diyAudio. It takes as input a direct DSD signal, or alternatively for PCM the I2S or PCM signals, e.g. as the PCM1704 datasheet discusses: http://www.qlshifi.com/jszl/PCM1704.pdf

 

Let's assume that the signal integrity of either the DSD, I2S or PCM lines is of paramount importance.

 

From the network, the bits necessarily have a clock domain crossing from the NIC clock to the master DAC clock (e.g. BCLK).

 

Perhaps having low jitter on the NIC input improves the clock domain crossing and if so, might result in less jitter on the BCLK.

 

That could be tested.

Yes, PP's network switches look like they are made from pretty "standard" boxes, but with his mods some seem to like them better. I have not tried them, but would like to find out if such things can make a difference.

My first inclination is toward a DIY approach because that’s what I’ve been doing audio wise for quite some time now.

So some DIY network switch work may be on the cards after some reading.

 

I greatly encourage your DIY approach, but rather than start with a $10 piece of hardware and then "upgrade", why not start with something really good and then try to improve it? In any case, consider the Intel X520 10G NIC. Consider its eye pattern performance, and then consider what equipment you would need to improve on it. DIY doesn't mean forgetting good engineering practices.

Jud, are the I2S and Ethernet clocks ever crossing? Does the paper answer that?

 

Here you go Jud, from the paper that Jabbr posted and directly answers your question about pulling the plug.

 

5.8.1 Multi-bit CDC signal passing using asynchronous FIFOs

Passing multiple bits, whether data bits or control bits, can be done through an asynchronous FIFO. An asynchronous FIFO is a shared memory or register buffer where data is inserted from the write clock domain and data is removed from the read clock domain. Since both sender and receiver operate within their own respective clock domains, using a dual-port buffer, such as a FIFO, is a safe way to pass multi-bit values between clock domains. A standard asynchronous FIFO device allows multiple data or control words to be inserted as long as the FIFO is not full, and the receiver can then extract multiple data or control words when convenient as long as the FIFO is not empty.
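The quoted mechanism can be sketched in plain Python as a toy model (software analogy, not RTL; all names here are illustrative): the writer and reader each own a pointer in their own domain, data is inserted while the FIFO is not full and extracted while it is not empty. Real async FIFOs also exchange the pointers gray-coded across the domains, so that only one bit changes per increment; `to_gray` shows that property.

```python
def to_gray(n):
    # binary -> Gray code: adjacent counts differ in exactly one bit,
    # which is what makes pointer exchange across clock domains safe
    return n ^ (n >> 1)

class AsyncFifo:
    """Toy model of a dual-clock FIFO (software analogy, not RTL)."""

    def __init__(self, depth=8):
        self.buf = [None] * depth
        self.depth = depth
        self.wptr = 0  # owned by the write clock domain
        self.rptr = 0  # owned by the read clock domain

    def full(self):
        return self.wptr - self.rptr == self.depth

    def empty(self):
        return self.wptr == self.rptr

    def write(self, word):
        # sender inserts while the FIFO is not full
        if self.full():
            return False
        self.buf[self.wptr % self.depth] = word
        self.wptr += 1
        return True

    def read(self):
        # receiver extracts while the FIFO is not empty
        if self.empty():
            return None
        word = self.buf[self.rptr % self.depth]
        self.rptr += 1
        return word
```

The point of the gray coding in real hardware: if the other domain samples a pointer mid-transition, it sees either the old or the new value, never a garbled intermediate.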

 

Again: pull the plug and the music still plays. What does the clock on the Ethernet cable have to do with it? And the buffered data from the Ethernet cable isn't even in the buffer the audio application sets up, or the buffer the USB bus uses.

 

You don't even need the paper Jabbr provided. You just need to bring the same open mind to the question I was asking that one would bring to a soldering iron and an oscillator.

 

So a way to test this is to modify the Intel NIC, take a standard one, and place them in a LAG. You tell me when the modified NIC is active during playback and when the bog-standard NIC is.

I see that you are at least starting to read the paper I provided rather than ASSUMING a point you thought I was trying to make.

 

Clock domain crossings (the number depends on the exact system but for example): NIC to PCIe to USB to DAC input to DAC BCLK.

 

You've quoted the short answer, which is correct. The long answer is that it's more complicated: the behavior of the async FIFO is not fixed but depends on its own engineering. That's the rest of this one paper, and there are many others on the topic. When designing and modeling such circuits, input and output constraints are specified. If the actual signals exceed the specifications, then instability can occur. The constraints have to do with jitter: the tighter the constraints that can be specified, the higher the performance that can be achieved (to a certain degree).

 

So yes, having better signal integrity and lower jitter on the input signals allows lower jitter on the output signals -- in general. Alternatively one could achieve the same output jitter/signal integrity with a more effective design -- much the same as different amplification circuits have different PSRR but you need to know what specs you aim to achieve in order to properly design the circuit to do so.

 

The paper does not address Ethernet clocks per se, rather clock domain crossing in general, but let me spell this out very clearly:

 

1) Data on the Ethernet line is clocked by the Ethernet clock domain

2) When data is converted from Ethernet to USB, it crosses the Ethernet clock domain to the USB clock domain

3) When data is converted from USB to I2S it crosses the USB clock domain to the I2S clock domain

4) If the last crossing is not gated by the DAC master clock, then there is an additional clock domain crossing between the I2S clock and the DAC clock
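The four crossings above can be modeled in a few lines of Python (a toy simulation; the hop count and burst sizes are made up): each consumer drains the upstream buffer at its own pace, so the payload values and their order survive every crossing, while the delivery timing is re-derived per domain.

```python
import random
from collections import deque

def cross_domains(samples, n_hops=4, seed=0):
    # model NIC -> PCIe -> USB -> I2S-style hops: each consumer drains the
    # upstream buffer in bursts sized by its own (simulated) clock
    rng = random.Random(seed)
    stage = deque(samples)
    for _ in range(n_hops):
        nxt = deque()
        while stage:
            burst = rng.randint(1, 4)  # the consumer's own pace
            for _ in range(min(burst, len(stage))):
                nxt.append(stage.popleft())
        stage = nxt
    return list(stage)
```

Order and content are preserved bit-perfect at every hop; what a crossing cannot preserve, and what the jitter discussion is about, is the exact edge timing.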

 

So of course the Ethernet and I2S clocks are crossed. A direct Ethernet input DAC might even directly cross these clock domains.

Can we collectively entertain the thought that Ethernet is a data standard, not an audio standard? That it's async, and there is actually no clock placed on the data itself, just the analog wire frequency that lets two endpoints form a collision domain and manage the framing from there?

 

Of course there is a clock placed on the data -- just read the spec. Or, since you are into pulling plugs, why not just rip the clock out of your NIC and see if it works... FWIW: when I disconnect my Ethernet cable, the music stops ... period.

I meant to say I don't understand how this clock change can help.

 

I'm all for measurements -- though these can be hard. This topic is actually very technical and again proof would be (in my mind) making X change and then measuring phase error at the DAC clock.

 

But consider this (as an analogy): it is well known that a PLL can re-sync a signal, or sync two signals together, but -- and this is the big but -- a PLL only does this for "far out" phase error. Every PLL has a corner frequency below which the phase error is relatively less improved.
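To put a number on that corner (a minimal sketch with purely illustrative values): the jitter transfer of a simple first-order loop passes reference phase error below the corner frequency to the output nearly unchanged, and attenuates it above.

```python
import math

def jitter_transfer(f_hz, corner_hz):
    # |H(f)| for a first-order loop: ~1 below the corner (error passes
    # through uncorrected), falling as corner/f above it (error cleaned)
    return 1.0 / math.sqrt(1.0 + (f_hz / corner_hz) ** 2)

fc = 1_000.0  # hypothetical 1 kHz loop corner
close_in = jitter_transfer(10.0, fc)       # close-in wander: barely touched
far_out = jitter_transfer(100_000.0, fc)   # far-out jitter: ~40 dB down
```

At the corner itself the transfer is 1/sqrt(2), i.e. only 3 dB of improvement; the slow wander the post worries about sits well below that.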

 

So let's not assume an async FIFO is perfect at reclocking. In the absence of actual measurements, it's a mistake to assume that the reclocking gives the same reduction in phase error at all offsets ... what if the async FIFO has a corner frequency like a PLL? What if the slow wandering (close-in phase error) causes a clock transition collision every "n" seconds, or whatever?

 

Close-in phase errors (these are the phase errors that "fatten" what would otherwise be sharp FFT peaks) are the hardest to correct. How many phase-error-by-frequency-offset plots have you seen published for actual DAC circuits (as opposed to their clocks)?
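The "fattening" is easy to demonstrate numerically. A sketch with NumPy (all values arbitrary): the same tone, with and without a slow 2 Hz phase wander, and a measure of how much of the near-peak energy stays in the peak bin.

```python
import numpy as np

fs, n = 48_000, 1 << 16
t = np.arange(n) / fs
f0 = 1_500.0  # chosen so the tone lands exactly on an FFT bin

clean = np.sin(2 * np.pi * f0 * t)
# same tone with slow phase wander: 0.3 rad wobble at 2 Hz (close-in error)
wander = np.sin(2 * np.pi * f0 * t + 0.3 * np.sin(2 * np.pi * 2.0 * t))

win = np.hanning(n)
peak = round(f0 * n / fs)  # index of the tone's FFT bin

def peak_concentration(x, width=20):
    # fraction of the near-peak spectral magnitude that sits in the peak bin
    S = np.abs(np.fft.rfft(x * win))
    return S[peak] / S[peak - width:peak + width + 1].sum()
```

`peak_concentration(clean)` comes out higher than `peak_concentration(wander)`: the wandered tone's energy leaks into the neighboring bins, visibly fattening the peak even though the wander is far too slow to hear as pitch change.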

Sorry, there is no clock placed on the audio as it goes over Ethernet.

Sure. As I've said many times, ultimately the only thing that is important is the clocked data in the DAC itself.

The PLL is only syncing the read-out of the buffer at the clock rate requested by the DAC chip, i.e. 16/44.1 or 24/192 or whatever else.

 

....

 

Is this an actual case or just a mental exercise? What you are failing to recognize is that the PLL on the DAC is locking onto the buffer to clock data out of it. Again, as long as the static (and this is the key term here) buffer is full, the send-side clock is not material, because you can't disassociate data rate from the event.

 

You are maintaining that the data in the static buffer has clocking data on it: "what if the async FIFO has the same corner frequency like a PLL". The only clock on that is the clock on the RAM that constitutes its buffer, or the USB buffer.

 

Again, this has nothing to do with the 125 MHz that the Ethernet cable operates at. That data has been moved through several copies.

 

This isn't a zero copy stack.

 

OK several different issues are getting muddled together here:

 

1) What "PLL on the DAC" are you referring to? Are you insisting that all DACs use an async FIFO as described in the paper? Have you looked at specific schematics? Please show me a specific example. (That said, a PLL is not the best way to get a low-phase-noise DAC, because of the corner issue I've described above.)

 

2) Do you think all or even most DACs use async FIFO isolation? Yes, I am suggesting (and have provided literature as to why) this should be done, but it is not done uniformly -- for example, let's take the Amanero XMOS USB to I2S interface... used on DACs such as Lampizator.

 

3) Maybe it is a zero-copy stack, but why should that matter? What is needed is dual-port memory which can be written with one clock and read out with a different clock, with very careful mechanisms to be sure there aren't read/write collisions -- not only with memory but also registers. This is the domain of FPGAs, but FPGAs are not nearly universally used in DACs -- these technologies have advantages that go beyond filtering and upsampling.


Is this an actual case or just a mental exercise? What you are failing to recognize is that the PLL on the DAC is locking onto the buffer to clock data out of it. Again, as long as the static (and this is the key term here) buffer is full, the send-side clock is not material, because you can't disassociate data rate from the event.

 

You are maintaining that the data in the static buffer has clocking data on it: "what if the async FIFO has the same corner frequency like a PLL". The only clock on that is the clock on the RAM that constitutes its buffer, or the USB buffer.

 

Again, this has nothing to do with the 125 MHz that the Ethernet cable operates at. That data has been moved through several copies.

 

This isn't a zero copy stack.

 

A) not a mental exercise: IMG_2584.JPG

B) not a static buffer

C) No USB

D) Maybe zero copy stack

What is that from? What is the PHY on it, as it is most likely buffered itself? Do you know the OS running it?

 

It doesn't look like an Intel Pro 1000GT, which is what this thread is about. If your board is driven by an RTOS and there is no buffering going on, then maybe an upgraded clock will improve things. I doubt very highly that what you pictured is running an RTOS with no buffering.

 

I noticed the Kingston on it. If that's DRAM then AudioQuest thinks that is the worst sounding RAM module out there.

 

Correct, it is not an Intel Pro NIC ... to clarify, I've never said that an Intel NIC could be casually improved, so if we are limiting this discussion to that, then we are straying waaay off topic -- I don't use my 1000GTs for audio -- currently using X520s in most parts of my machines.

 

What you are seeing is a very highly integrated SoC that incorporates dual ARM cores and an FPGA. There is extraordinary flexibility in handling Ethernet, essentially allowing an SFP cage to be hung off the IO pins and the rest handled on-chip -- or not. It runs Ubuntu Linux with a low-latency kernel, if that matters. Or FreeRTOS.

 

Clocks yes clocks it has clocks -- very good clocks -- no need to "upgrade" ;)


I happened on this while looking at something else today:

https://wiki.trenz-electronic.de/display/PD/TEB0745+TRM

 

This is a carrier board that has 8 SFP+ cages and a single RJ45 for Ethernet. You can see that the SFP+ cages are powered, but the data pins go directly to the "SOM", which is a board like the one I posted above that contains the ARM/FPGA "SoC" chip.

 

You can see that the entire Ethernet (and TCP/IP) stacks can be handled by this chip. Why would someone want to do that? Oh, perhaps a 10GbE layer 1 switch, or line-rate packet inspection. In any case, I'm not using this board, but the pictures in the link are really good at illustrating my point.

 

But anyway, when we talk about being able to do 16 channels of DSD1024 or 32 of DSD512 (or higher with 10GbE), that's what I'm talking about. That's a piece of cake for this stuff. The fun stuff starts if I can also design an on-chip phase error measurement system with sufficient resolution (the sufficient resolution is the hard part ;) ;) ;)

 

So if, or when, I get this running, and if I can measure phase error on an actual DAC clock (the clocks are already being sent into the chip to control the FIFO), then some of these speculations about blah, blah, blah tweak, clock, memory, SSD, power supply, and cables actually affecting the sound (and here I mean the actual stream of bits in the DAC) can actually be tested.
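The analysis side of such a measurement reduces to comparing captured clock edges against an ideal grid; capturing the timestamps with sufficient resolution is the hard part. A minimal host-side sketch (simulated data; the 22.5792 MHz figure is just a common audio master clock rate, and the jitter numbers are made up):

```python
import random

def phase_errors(edge_times, period):
    # residual of each observed edge against the ideal grid i * period
    return [t - i * period for i, t in enumerate(edge_times)]

T = 1 / 22_579_200  # e.g. a 22.5792 MHz (512 x 44.1 kHz) master clock
rng = random.Random(1)
# simulated edge timestamps with 5 ps rms of added gaussian jitter
edges = [i * T + rng.gauss(0.0, 5e-12) for i in range(1000)]
errs = phase_errors(edges, T)
```

The rms of `errs` recovers roughly the 5 ps that was injected; an FFT of the residual sequence would give the phase-error-by-offset plot asked about earlier in the thread.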

 

Oh, and A/BX -- well, with 32 channels I think that can be handled in "software" (actually programmable logic, or "PL").

  • 3 weeks later...
10 hours ago, plissken said:

 

 

Audio is cached. You can completely test whether the externally powered clock is of importance by having someone start playback and, while you are listening, pull the Ethernet cable. You won't have to solder a thing and it won't cost a penny. So if your point is the external power, then someone should get crackalacking on an external power supply for all three voltage busses (10, 100, 1000), as you won't even have to solder in a new clock at that point.

 

AGAIN... The obvious elephant in the room: when the network cable is yanked during playback and the system is still playing, what good is a 25 MHz clock with more zeros after the decimal point, externally powered or otherwise? The clock isn't even in use in this case. Is it somehow MORE 25-MHz-y?

 

The only thing the clock is there for is to sync up with the Ethernet port on the far end of the cable. There are two FIFO buffers on the NIC: one for the link, and one for whatever system bus (most often PCI).

 

Those Realtek cards are still comparative pieces of shit next to a $20-30 Intel server NIC, no matter the shade of lipstick you want to apply.


Nice video post. Properly explains subtleties like "gray codes".

 

Haha, well my system doesn't work with the network yanked so there...

 

I totally agree that starting out with a POS NIC is a "suboptimal" way to save a differential of $10 versus an Intel.

 

The clock has to, you know, be within the designed spec in terms of "jitter", which affects setup times etc., so I can see how a bad clock could exceed tolerances and cause actual bit errors, e.g. flops capturing on a transition. I can also see how ground noise and noisy PSUs could cause clocks to exceed specifications. I agree that as long as things are "within spec" it shouldn't matter, and looking into this, I don't think we've made any real progress in diminishing noise etc. since the 1980s. The Intel stuff, even a previous generation or two available for $20, has terrific performance, and yeah, I'd be impressed if any casual tweaks could improve it.
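That "within spec" point can be made concrete with a trivial timing-margin check (all numbers hypothetical): a flop captures safely while the clock period, minus the setup requirement and the data-path delay, still exceeds the worst-case jitter.

```python
def timing_margin_ps(period_ps, setup_ps, data_delay_ps, jitter_pp_ps):
    # slack left after setup requirement, data-path delay,
    # and worst-case peak-to-peak clock jitter
    return period_ps - setup_ps - data_delay_ps - jitter_pp_ps

# hypothetical 125 MHz domain: 8000 ps period, 200 ps setup, 5000 ps data path
ok = timing_margin_ps(8000, 200, 5000, 300)    # modest jitter: positive slack
bad = timing_margin_ps(8000, 200, 5000, 3000)  # grossly bad clock: negative
```

With positive slack, a marginally "better" clock buys nothing for data integrity; only when jitter eats the whole margin do actual bit errors appear.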

 

That said, I keep an open mind.

