
Network Card Clock Upgrade



Hi!

 

I use a SOtM sms-200 directly connected to my audio server. For the moment the sms-200 is connected to one of the two network cards embedded on my server's motherboard.

 

I have ordered a PCI-e network card (Intel PRO/1000 GT). I will use this card to connect my sms-200, and I would like to upgrade its clock to a better one (a TCXO, an OCXO, or something even better, like a femto clock).

 

I wonder if someone has already tried to upgrade the clock of a network card? I think upgrading the clock of a network card should have the same impact as upgrading the clock in a USB card. I did some research on Google, but I'm surprised that I found nothing about it.

 

So, do you have experience with clock upgrades on network cards, and do you know what the frequency of the clock in a network card is?

 

Thanks in advance

 

CAT

 

It won't do you any good. You can start music playback, let it cache up, and pull the network cable, and it will still play. None of the clock upgrading matters, since computer audio playback isn't real time.
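To make that concrete, here's a minimal sketch (plain Python, purely illustrative; no real player is built exactly like this) of why playback survives the pull: the read side drains a buffer at the pace of the DAC's own clock, and nothing on the network side is consulted at all.

```python
# Toy model of buffered playback: once the buffer is filled, the "network"
# can vanish and the "DAC" keeps draining samples at its own pace.
from collections import deque

buffer = deque()

# Network side: cache up a few seconds of samples, then "pull the cable".
for sample in range(44100 * 5):      # ~5 seconds of 44.1 kHz samples
    buffer.append(sample)

# DAC side: read-out is timed against the buffer by the DAC's clock.
played = 0
while buffer:                        # plays on until the buffer underruns
    buffer.popleft()                 # one sample per DAC clock tick
    played += 1

print(f"played {played} samples after the cable was pulled")
```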

Yes, your statement is interesting, but in that case why does upgrading the clock of a router (when we don't use a direct connection) make a difference?

I personally haven't tried it, but I have read positive posts from many users regarding upgraded routers (the PPA router), which have a TCXO clock instead of the stock clock.

 

I have a video where I compare a 315-foot cable and a 12-foot cable. In the video I pull the network cable out, but my audio is still playing.

 

So what I would ask is: what good is upgrading a clock on a NIC or router going to do, since you aren't playing music off the cable but out of a buffer, i.e. you can pull the cable?

Yes, but upgrading to a better CPU, improving the OS (Server 2016 with AO, Fidelizer), improving the SATA cable, and using better memory are all ways to improve an audio server, and as you said, audio over Ethernet is not real time ... so how do you explain that these improvements are there?

 

I know it is always complicated to understand improvements in computer audio, because sometimes it just looks weird... but it works.

 

Even high-res playback of 24/192 is child's play for the CPU of any machine you would purchase today.
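For scale (simple arithmetic, assuming stereo PCM): 24 bits × 192,000 samples/s × 2 channels = 9,216,000 bits/s, i.e. about 9.2 Mbit/s or 1.2 MB/s. That's roughly 1% of a gigabit link, and a trivial workload for any modern CPU.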

 

Until someone sits down, allows their sighted evaluation to be removed from the evaluation equation, and shows that there is such a thing as 'audiophile RAM' or 'audiophile SATA cables', then there would be something to talk about.

 

I'm not interested in either conjecture or sighted evaluations.

 

Again: you can pull the cable, thereby taking the clock on the router/NIC out of the picture, and your audio will still play back. It is what it is. Find me an honest subjectivist.

All interesting points, but not answering the question from the OP.

He did not ask why but how.

Do you have an answer or not? :-)

 

 

I answered the question:

 

Start playback, pull the cable, the music plays on. How about this: explain how the music is still playing with the cable unplugged, and what the upgraded clock on the NIC would be doing for us in this instance.

No, you did not. :-)

You picked out a very small part at the end to make your own view known.

I've read your post re network cables (interesting and well done) and respect the work that you've done there, but TBH the question was re clock modding a NIC.

 

 

 

I've answered the original post.

 

Again, how can the clock of the PHY on either a router or a NIC affect the audio playing if you have disconnected the cable?

 

Why are you avoiding an attempted answer?


What is the clock modding of the NIC going to do for HEVC/H.265-encoded 4K video? Sharper colors? Better picture contrast, less motion blur, more 3D-ish, greater color saturation, a larger color palette?

 

The clock on the NIC is fixed frequency. Each side syncs up and starts transmitting data.

 

Add to that: isn't the PRO/1000 GT an old legacy PCI adapter? Why not mod a current PCIe NIC?

Yes, this is an interesting point of view, but I would like to have other points of view, because what Plissken says is that all improvements before the SOtM sms-200 are not useful. I don't really agree with this statement, because I have tried a lot of improvements and some are really helpful.

 

We aren't talking about 'other improvements'. YOU specifically asked about modding the clock on a NIC. Your mind is already made up. Get the NIC, modify it, and enjoy.

Maybe you can show a little self control and stop trying to be the Batman of Computer Audio.

 

Guarding us against the evils of open mindedness.

 

If you can't be helpful, then maybe you should do one.

 

Yes we'll have a hunt round and see if we can find any useful pointers and report back.

 

Thanks for nothing;-)

 

Again, answer a simple question, if you can:

 

If you start music playback and you pull the Ethernet cable, what function is the upgraded clock on the NIC performing for the quality of the audio playback at this point?

 

That you are avoiding it means you are closed-minded to the prospect that an upgraded clock is going to do nothing for you.

 

I'm open-minded, but not so open-minded that I've let my brain fall out or lost the ability to think critically about the questions I've been asking you to answer. You can lead a horse to water, but you can't make it think.

It's the preamble of the Ethernet frame that syncs and sets the clock rate. Wouldn't it be more algorithm-based? I thought the oscillator just determined the speed at which the raw data is passed. That would still be in the physical layer. Do Ethernet cards have more than one clock?

 


 

The clock on the PHY is fixed at 25 MHz and then multiplied up from there. What is being discussed, and the actual oscillator being modified, is all layer 1.

 

Are you referring to data rate negotiation?

 

It still has no effect on sound quality, since we are dealing with buffered systems, and the clock (what is really being talked about here is the jitter performance of the Ethernet link) isn't going to alter the data that is ultimately sent over it, since it's not real time.

 

Check out this short Advantech discussion.

If you are inviting me to speculate, well then... ;)

 

Let's start at the DAC and work our way back. For DSD let's use the Signalyst DSC1 (because the discrete design is simple and published; the discussion thread is here: Signalyst DSC1 - diyAudio). It takes as input a direct DSD signal, or alternatively, for PCM, the I2S or PCM signals, e.g. as the PCM1704 datasheet discusses: http://www.qlshifi.com/jszl/PCM1704.pdf

 

Let's assume that the signal integrity of either the DSD, I2S or PCM lines is of paramount importance.

 

From the network, the bits necessarily undergo a clock domain crossing from the NIC clock to the master DAC clock (e.g. BCLK).

 

Perhaps having low jitter on the NIC input improves the clock domain crossing and if so, might result in less jitter on the BCLK.

 

That could be tested.

 

Packets don't pass directly from the NIC clock to the I2S clock, or whatever is clocking them, however. The only two clocks involved on the DAC are the I2S clock and the USB clock.

 

Again, start playback, pull the network cable, and the DAC is still receiving data. You are trying to bend a paper to fit a conjecture it has nothing to do with.

Interesting topic. I won't rule out possible benefits, or that some have experienced benefits from such an upgrade. However, I will provide my opinion and experience: I don't understand how this type of clock change could possibly help, and I've never been able to hear differences when applying tweaks upstream of Ethernet audio equipment.

 

That's just me. I make no judgement about others or those with open minds trying to squeeze every ounce of sound quality out of their systems.

 


 

Chris, it cannot make a difference. Again: start playback, pull the plug, and tell me what the clocking of the Ethernet PHY has to do with the playback you are currently experiencing. A clock is about timing; timing is about jitter control to accepted parameters. Mac, Linux, and Windows are not RTOSes, and variance on the Ethernet side, including some clock float, isn't going to affect playback unless the buffer empties.

 

Read the paper Jabbr provided. It doesn't support the conjecture because it's being misapplied / misunderstood.

 

It's really time to accept some things as absolutes. Just as a file copied from a USB disk or over the Internet can't sound different if the copies are bitwise identical.

So while I agree that changes to the network card in a non-AoIP setup might not make a difference in theory, it seems to me that only Jabbr attempted to answer the OP's actual question, and did so respectfully.

 

OP did not ask if it would make a difference. He asked how to upgrade so he could try it for himself.

 

I am finding the contentiousness and chest beating a bit tiresome on CA these days. What happened to respect and a sense of discovery? How about a bit less bark and a bit more wag?

 

Hang in there Nouchka. Folks with an open mind who experiment and take risks have led the way here...

 

Why is it that I can be asked to have an open mind about a topic I understand completely but when I ask:

 

If you start playback, pull the Ethernet cable, and the music still plays: What does the modified crystal on the NIC / Router have to do with it?

 

No one will venture an answer. Just throw something out there.

 

It's not chest beating if one is 100% correct. Ignorance turns into stupidity when left willfully uncorrected.

I do not know if I can credit this paper if they did not perform the critical "pull the plug" test.

 

 


 

Jud, are the I2S and Ethernet clocks ever crossing? Does the paper answer that?

 

Here you go Jud, from the paper that Jabbr posted and directly answers your question about pulling the plug.

 

5.8.1 Multi-bit CDC signal passing using asynchronous FIFOS

Passing multiple bits, whether data bits or control bits, can be done through an asynchronous FIFO. An asynchronous FIFO is a shared memory or register buffer where data is inserted from the write clock domain and data is removed from the read clock domain. Since both sender and receiver operate within their own respective clock domains, using a dual-port buffer, such as a FIFO, is a safe way to pass multi-bit values between clock domains. A standard asynchronous FIFO device allows multiple data or control words to be inserted as long as the FIFO is not full, and the receiver can then extract multiple data or control words when convenient as long as the FIFO is not empty.
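As a rough software analogue of that excerpt (a sketch only; a real asynchronous FIFO is hardware with Gray-coded pointers and synchronizers, and the queue/thread machinery here is just Python's): two independent "clock domains" share a FIFO, and as long as it is neither full nor empty, the write side's timing jitter never shows up in the data the read side extracts.

```python
# Toy async FIFO: writer and reader run on independent, deliberately
# mismatched "clocks"; the data read out is identical regardless.
import queue
import random
import threading
import time

fifo = queue.Queue(maxsize=64)       # the dual-port buffer between domains
N = 2000

def writer():                        # e.g. the Ethernet-side clock domain
    for word in range(N):
        time.sleep(random.uniform(0, 0.0002))  # a jittery write "clock"
        fifo.put(word)               # blocks only if the FIFO fills up

received = []
t = threading.Thread(target=writer)
t.start()
for _ in range(N):                   # the read-side clock domain:
    received.append(fifo.get())      # extract "when convenient", steady pace
    time.sleep(0.0001)
t.join()

print(received == list(range(N)))    # True: write-side jitter is invisible
```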

 

Again: pull the plug and the music still plays. What does the clock on the Ethernet cable have to do with it? And the buffer holding data from the Ethernet cable isn't even the buffer the audio application sets up, or the buffer the USB bus uses.

 

You don't even need the paper Jabbr provided. You just need to bring the same open mind that one would bring to a soldering iron and an oscillator to the question I was asking.

 

So a way to test this is to modify the Intel NIC, take a standard one, and place them in a LAG. You tell me when the modified NIC is active during playback and when the bog-standard NIC is active.

I see that you are at least starting to read the paper I provided rather than ASSUME a point you thought I was trying to make.

 

Clock domain crossings (the number depends on the exact system but for example): NIC to PCIe to USB to DAC input to DAC BCLK.

 

You've quoted the short answer, which is correct. The long answer is that it's more complicated, in that the behavior of the async FIFO is not fixed but depends on its own engineering. That's the rest of this one paper, and there are many others on the topic. When designing and modeling such circuits, input and output constraints are specified. If the actual signals exceed the specifications, then instability can occur. The constraints have to do with jitter. The tighter the constraints that can be specified, the higher the performance that can be achieved (to a certain degree).

 

Thanks for the above, but do you truly appreciate what you have written? So is the Intel PRO/1000 GT an 'unstable' product that needs to be made 'more stable'?

 

So yes, having better signal integrity and lower jitter on the input signals allows lower jitter on the output signals -- in general. Alternatively one could achieve the same output jitter/signal integrity with a more effective design -- much the same as different amplification circuits have different PSRR but you need to know what specs you aim to achieve in order to properly design the circuit to do so.

 

So an Intel NIC doesn't have proper output jitter/signal integrity? Or is it otherwise not properly designed?

 

The buffer is what solves the clock crossing issues. The clocks are removed from the equation, and the buffer can be read from 'at convenience'. Then the reading system's clock is applied to stream that static data back out of the buffer.

 

 

The paper does not address Ethernet clocks per se, rather clock domain crossing in general, but let me spell this out very clearly:

 

1) Data on the Ethernet line is clocked by the Ethernet clock domain

2) When data is converted from Ethernet to USB, it crosses the Ethernet clock domain to the USB clock domain

3) When data is converted from USB to I2S it crosses the USB clock domain to the I2S clock domain

4) If the last crossing is not gated by the DAC master clock, then there is an additional clock domain crossing between the I2S clock and the DAC clock

 

So of course the Ethernet and I2S clocks are crossed. A direct Ethernet input DAC might even directly cross these clock domains.

 

Jabbr, this is my point: there is no Ethernet-to-USB boundary as it pertains to a USB DAC.

 

Data is read into a buffer off the NIC (reordered, sorted, and resent if required), then into a buffer set aside in some POSIX-style fashion by the CPU until it needs to be read by the application that requested it, then into the buffer set aside by the player application, then into another buffer (most likely USB), and then over the wire into the DAC buffer, where the DAC's clock is applied, killing any timing variance from clocks earlier in the chain.
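A sketch of that copy chain (the stage names are my own shorthand; real stacks differ in the details): every hop is just a copy into another buffer, and only the final read-out clock touches the timing the DAC sees.

```python
# Toy copy chain: the payload is buffered and re-read at every hop and
# arrives at the "DAC" bit-identical to what left the "NIC".
payload = b"PCM audio payload"

stages = ["NIC ring buffer",
          "kernel socket buffer",
          "player application buffer",
          "USB transfer buffer",
          "DAC FIFO"]

data = payload
for stage in stages:
    buf = bytearray()      # each stage has its own buffer...
    buf.extend(data)       # ...into which the previous hop is copied
    data = bytes(buf)      # the next hop reads from this stage's buffer

print(data == payload)     # True: no clock along the way altered a bit
```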

 

In the case of Ethernet enabled DACs, Dante etc: "Alternatively one could achieve the same output jitter/signal integrity with a more effective design"

 

And this is what those designers have done. They have to, because the marketplace will eat them up and spit them out.

 

I have another question. When you are streaming Tidal, do the clocks at their server farm, the intervening routing, your ISP, etc. matter?

 

I'm asking questions that people seem incapable of applying critical thought to.

Beyond all this: plissken, do you tend to listen more closely to someone who speaks to you from time to time in a normal tone, or someone who is constantly, loudly, in your face?

 

Yep, exactly. So if you really want to persuade people....

 

 


 

No, I want threads to stop sliding when I ask a simple question:

 

When you pull the Ethernet cable and the music still plays, does the clock on the Ethernet cable matter?

 

I tend to listen when someone makes a cogent argument. The OP already had an answer in mind before even asking the question.

 

Jabbr provided a paper that is going to provide all the information I need to make the point that needs to be made.

I won't rule out possible benefits or that some have experienced benefits from such an upgrade. However, I will provide my opinion and experience: I don't understand how this type of clock change could possibly help

 

 

This is the part that I was mainly responding to.

 

I'm now asking a third question (let me know if asking it is unreasonable):

 

Can we collectively entertain the thought that Ethernet is a data standard, not an audio standard? That it's async, and there is actually no clock placed on the data itself, just the analog wire frequency used to get the two endpoints to form a collision domain and manage the framing from there?

 

My fourth question is:

 

Why can I be asked to keep an open mind about using a clock that is more accurate out to some Nth decimal point....

 

But when I ask what happens to the sound quality of the audio with the enhanced clock when the Ethernet cable is pulled and the music continues to play, it presents a problem?

 

Are my questions in any way unfair?

Of course there is a clock placed on the data. Just read the spec. Or since you are into pulling plugs -- why not just rip the clock out of your NIC and see if it works... FWIW: when I disconnect my Ethernet cable, the music stops ... period.

 

Sorry, there is no clock placed on the audio as it goes over Ethernet.

I'm all for measurements -- though these can be hard. This topic is actually very technical and again proof would be (in my mind) making X change and then measuring phase error at the DAC clock.

 

But consider this (as an analogy): it is well known that a PLL can re-sync a signal or sync two signals together, but ... and this is the big but ... a PLL only does this for "far out" phase error. Every PLL has a corner frequency below which the phase error is relatively less improved.
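To put a number on that intuition (a first-order idealization on my part, not any specific chip): the reference-to-output jitter transfer of a PLL is a low-pass,

|H(jω)| ≈ 1 / √(1 + (ω/ωc)²)

so input phase error far above the corner frequency ωc is rolled off at 20 dB per decade, while the slow, close-in wander below ωc passes through nearly untouched.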

 

The PLL is only syncing the read-out of the buffer at the clock rate requested by the DAC chip, i.e. 16/44.1 or 24/192 or whatever else.

 

"and the receiver and then extract multiple data or control words

when convenient as long as the FIFO is not empty"

 

It has zero to do with the 125 MHz signalling rate on Ethernet. "When convenient" is the key term here. The PLL has set up the retrieval clock from the buffer. As long as the buffer isn't underrun we are fine, and as long as the process does its FIFO correctly, the send side will never see a full buffer where it needs to write data.

 

So let's not assume an async FIFO is perfect at reclocking, and in the absence of actual measurements...

 

I would rather assume what is in the paper you provided: that the async FIFO is within its design parameters and working correctly. Perfection is a loaded term and is a bit of a smoke screen.

 

...it's a mistake to assume that the reclocking has the same reduction in phase error at all offsets ... what if the async FIFO has a corner frequency like a PLL? What if slow wandering (close-in phase error) causes a clock transition collision every "n" seconds or whatever?

 

Even if a buffer has the same frequency on the FI and FO sides of things, you can't discount asymmetric packet sizes.

 

I may be able to deliver a 1k packet to the FIFO for every two reads of 0.5k.

 

Is this an actual case or just a mental exercise? What you are failing to recognize is that the PLL on the DAC is locking onto the buffer to clock data out of it. Again, as long as the static (and this is the key term here) buffer is full, the send-side clock is not material, because you can't disassociate the data rate from the event.

 

Are you maintaining that the data in the static buffer has clocking data on it ("what if the async FIFO has the same corner frequency like a PLL")? The only clock on that is the clock on the RAM that constitutes its buffer, or the USB buffer.

 

Again, it has nothing to do with the 125 MHz that the Ethernet cable operates at. That data has been moved through several copies.

 

This isn't a zero copy stack.

OK several different issues are getting muddled together here:

 

1) what "PLL on the DAC" are you referring to? Are you insisting that all DACs use async FIFO as described in the paper? Have you looked at specific schematics? Please show me a specific example? (That said PLL is not the best way to get low phase noise DAC because of the corner issue I've described above)

 

2) Do you think all or even most DACs use async FIFO isolation? Yes, I am suggesting, and have provided literature as to why, that this should be done, but it is not done uniformly -- for example, let's take the Amanero XMOS USB-to-I2S interface... used on DACs such as the Lampizator.

 

3) Maybe it is a zero copy stack, but why should that matter -- what is needed is dual-readout memory which can be written with one clock and then read out with a different clock, with very careful mechanisms to be sure there aren't read/write collisions, not only with memory but also with registers -- this is the domain of FPGAs, but FPGAs are not nearly universally used in DACs -- these technologies have advantages that go beyond filtering and upsampling.

 

1> I was using your example. Correct, a PLL isn't the universal mechanism for jitter elimination.

 

2> It's not material, because we are talking about a direct boundary between Ethernet and DAC, and that simply isn't the case. What DAC has zero buffer and uses async USB?

 

3> No, it's not a zero copy stack, period. Correct on the read/write collisions in the buffer. But to circle back: how does a tighter-tolerance TCXO make a system even "more solved"?

 

Here's a typical block diagram of an Ethernet PHY; notice there are two buffers. The next hop is the PCIe bus, and that could be buffered or DMA'd if the NIC has the appropriate CPU. But it's still nowhere near the buffer feeding the USB bus for the DAC, or the DAC buffer, regardless of how they have it set up. So we are back to: what difference do more numbers after the decimal point of an oscillator make for audio SQ?

 

[block diagram of a typical Ethernet PHY, showing its two FIFO buffers]

A) Not a mental exercise: [photo of the board]

B) not a static buffer

C) No USB

D) Maybe zero copy stack

 

What is that from? What is the PHY on it, as it is most likely buffered itself? Do you know the OS running it?

 

It doesn't look like an Intel PRO/1000 GT, which is what this thread is about. If your board is driven by an RTOS and there is no buffering going on, then maybe an upgraded clock will improve things. I doubt very highly that what you pictured is running an RTOS with no buffering.

 

I noticed the Kingston on it. If that's DRAM, then AudioQuest thinks that is the worst-sounding RAM module out there.

Correct, it is not an Intel PRO NIC ... to clarify, I've never said that an Intel NIC could be casually improved, so if we are limiting this discussion to that then we are straying waaay off topic -- I don't use my 1000 GTs for audio -- currently using X520s in most parts of my machines.

 

What you are seeing is a very highly integrated SoC that incorporates dual ARM cores and an FPGA. There is extraordinary flexibility in handling Ethernet, essentially allowing an SFP cage to be hung off the IO pins and the rest handled on chip -- or not. It runs Ubuntu Linux with a low-latency kernel, if that matters. Or FreeRTOS.

 

Clocks, yes clocks, it has clocks -- very good clocks -- no need to "upgrade" ;)

 

Agreed, we are off topic. If someone wants to fart around with a NIC and solder on a new clock, that's fine, but it's not going to improve the audio playback. We used to overclock the Motorola 680x0 series by soldering in a faster oscillator and adding a heatsink.

  • 3 weeks later...
On 3/6/2017 at 6:13 AM, Mihaylov said:

[three photos of the clock-modded network card]

 

Fortunately the extended 25 MHz accuracy is going to be rendered moot by the asynchronous FIFO. I have to point this out: that NIC is about the cheapest-built NIC you can purchase. Comparatively it's junk next to a $20 Intel NIC. I would rather hit ServerSupply or Amazon and just get an Intel server NIC; pulls from new systems, where they are being replaced with 10/25/40/100G NICs, are super affordable ($18 for a dual-port Intel server NIC is common).

 

 

3 hours ago, Crom said:

Here are my thoughts on this, in case I can help the debate. I have been messing around with clocks and audio PC stuff for a few years, and what might be relevant to this discussion is my finding that it's not always the quality/accuracy (etc.) of the clock that provides the initial improvement, but rather the fact that the clock is separately powered.

 

It's true that the better the clock, the better the results, but separately powering the clock (and the best I've found is a LiFePO4 battery) provides the best bang for the buck.

 

I have changed the clock in this way in pretty much everything I have in the 'signal' (or data) chain, using exactly this method, and I can't think, off the top of my head, of anywhere there hasn't been an improvement of one sort or another. Some examples: CompactFlash adapter cards, motherboards, USB cards.

 

So, whereas logic might suggest this is a fool's errand, it's probably worth a couple of hours of fiddling to give it a go, and as the cards are so cheap to start with, buy two and A/B test.

 

 

Audio is cached. You can completely test whether the externally powered clock is of importance by having someone start playback and, while you are listening, pull the Ethernet cable. You won't have to solder a thing and it won't cost a penny. So if your point is the external power, then someone should get crackalacking on an external power supply for all three voltage busses (10, 100, 1000), as you won't even have to solder in a new clock at that point.

 

AGAIN... the obvious elephant in the room: when the network cable is yanked during playback and the system is still playing, what good is a 25 MHz clock with more zeros after the decimal point, externally powered or otherwise? The clock isn't even in use in this case. Is it somehow MORE 25Mhzrty?

 

The only thing the clock is there for is to sync up with the Ethernet port on the far end of the cable. There are two FIFO buffers on the NIC: one for the link and one for whatever system bus (most often PCI).

 

Those Realtek cards are still comparative pieces of shit next to a $20-30 Intel server NIC, no matter the shade of lipstick you want to apply.

