
Optical Network Configurations



One piece of information that might be relevant to the current discussion:

The processor in the microRendu (iMX6) has its Ethernet controller connected to an internal bus with a maximum speed of 400 Mbps. Since most of the peripherals are turned off, the Ethernet controller pretty much gets all of that.

 

Thus, even with a Gigabit connection it will not run at full speed, and there may be an issue with how flow control is handled across the copper-to-fiber transition.

 

I do not have a fiber setup to try this, but I have had no problems with a Gigabit connection through several different switches, or series of switches.

 

Since the CuBox-i has the same SoC, I've used it quite a lot and have not experienced any problems. Of course there could still be hardware incompatibility issues with some pieces of hardware, or this particular use case may expose bugs in networking gear. (For example, I've seen dumb switches that don't pass UPnP discovery but do pass NAA discovery, which is really strange.)

 

I've been running my CuBox-i's with two different kernels, the official Freescale one (now NXP after the acquisition) and the Debian one. Both have been working for me. I have not yet checked which particular kernel version the microRendu is using.

 

Anyway, the microRendu with the 2.2 software is working fine in my gigabit network environment, even over flaky but convenient flat ribbon CAT 5e cable.

 

My recommendation: play it safe and use UTP (unshielded) cables with the microRendu, to let it and the DAC float properly relative to the network.

 

 

P.S. One note: my old laptop, for example, is sensitive to Ethernet cables. With anything less than proper CAT 6 it will silently downgrade the gigabit connection first to 100 Mbps and eventually to a 10 Mbps crawl. Since this takes time and happens quietly in the background, for a while I was left wondering why backups over the network took forever...

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 2 years later...
9 minutes ago, jabbr said:

If your network is heavily loaded you can consider QoS etc. My own switches have adequate bandwidth that VLans are not necessary. 

 

HQPlayer/NAA, for example, already uses QoS capabilities, as long as the network infrastructure supports them. And the OS already deals with it.

 

Also RAVENNA seems to do the same.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Why would it make any difference in sound quality?

 

Tidal sounds the same regardless of whether I listen to it over 4G data or VDSL2, and those paths have all kinds of other strange hardware along the way...

 

The only sound quality point I see in fiber is galvanic isolation, and the type of fiber makes no difference there. Apart from that, it is just a data pipe...

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

18 hours ago, jabbr said:

There's also a meme floating around, apparently launched by @JohnSwenson (at least I've seen it attributed that way), that jitter/phase noise in the network bitstream can somehow find its way to the DAC. 10G+ devices are designed to have lower jitter (they must hit a tight eye pattern) and if this does affect the DAC somehow then that's also an advantage. 

 

Lower jitter means more concentrated interference: spread spectrum clocking (maximum jitter) spreads the energy and gives lower interference peaks... So I'm not ready to accept that without comparative measurements in different circumstances.

 

In this respect it would be interesting to compare air gap isolation (WiFi) and optical networking...

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 1 month later...
4 hours ago, barrows said:

Hi, so flow control can be defeated in the settings for this device, just trying to confirm for sure!  Thanks, B.

 

Why would you want to defeat flow control? Especially in a case like this, where you could have a 10 Gbps to 1 Gbps link transition?

 

Of course, if you prefer packet loss instead...

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

6 hours ago, jabbr said:

Also, the datasheet for the iMX6 specifies pause frames because the internal speed is capped at 400 Mbps ... are the pause frames generated by the hardware, i.e. the PHY, with the device driver controlling it? Or does the driver generate the frames?

 

AFAIK, in most cases the PHY advertises pause frame capabilities and handles the negotiation, but the MAC is the one that actually generates the pause frames, since it is the one that owns the packet buffers. There is always orchestration between PHY and MAC (through MII) to configure such interactions, and this is where the driver steps in (the Linux kernel abstracts this internally). Since driver code runs on the CPU, it cannot properly deal with pause frames: in the iMX6 case, for example, it sits behind the slower link between MAC and CPU. The CPU cannot react in time to upstream traffic it sees only "after the fact", once it has traveled across the slow link, and overall it cannot know when the MAC is about to overflow its buffer. The MAC, on the other hand, always has up-to-date information, being on the Ethernet side of the slower local bus link and being the one that fills the packet FIFO.
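On Linux, this driver-level configuration is exposed through ethtool. A minimal sketch (the interface name is just an example; actual support depends on the driver and the link partner):

# Show the current pause frame configuration
ethtool -a eth0

# Request pause auto-negotiation with both RX and TX pause enabled
ethtool -A eth0 autoneg on rx on tx on

Note that the command only expresses a request; what actually gets used is settled between the PHYs during auto-negotiation.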

 

There are now also newer standards (802.1Qbb) that support flow control based on packet priority, so lower priority traffic can be put on hold if necessary while higher priority traffic keeps flowing. This is important for cases where you have a lot of intermixed traffic, like links between switches, and less important for cases like NAA where you usually have just one notable stream anyway. These newer features are mostly supported by fancier, newer hardware. For example, the NAA connection attempts to utilize 802.1p type QoS/CoS, which can benefit from such flow control categorization.
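As an illustration of how those 802.1p priority bits travel on Linux (this is the generic kernel mechanism, not a description of NAA internals, and the interface names and VLAN ID are hypothetical), a VLAN interface can be given an egress QoS map so that socket priorities set by applications become PCP bits in the VLAN tag:

# Map socket/skb priority 6 to 802.1p PCP 6 on a tagged interface
ip link add link eth0 name eth0.10 type vlan id 10 egress-qos-map 0:0 6:6

A switch that honors 802.1p, or per-priority 802.1Qbb PFC, can then prioritize and flow-control that traffic class separately.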

 

Even the cheapest integrated PCI Express NICs tend to include both QoS/CoS and flow control using the older, more established standards, like the commonly used cheap Realtek 8111:

https://www.realtek.com/en/products/communications-network-ics/item/rtl8111g

You can see both 802.1p and 802.3x listed there, plus hardware offload of various network checksumming operations, and also 802.3az (EEE) to keep power consumption low. So, all the specs I list for NAA.
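Whether a given NIC/driver combination actually exposes these offloads can be verified on Linux, for example (interface name assumed, output abbreviated):

# ethtool -k eth0 | grep -i checksum
rx-checksumming: on
tx-checksumming: on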

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

15 minutes ago, jabbr said:

Actually it will. If there is no jitter out, or if the switch perfectly removes the signature of incoming jitter, then it’s a nonissue. 

 

Networking aside, eye patterns and the like tend to be a problematic way of measuring low frequency drifts.

 

As an example, some of the ESS Sabre problems are quite tricky to measure, because the level is low (-120 dB or so) and the cycle takes about one minute. Low frequency drifts at a -120 dB level would be much, much smaller than the pixel size of the eye pattern plot. (To put rough numbers on it: on a plot 1000 pixels tall covering a 1 V eye, one pixel already represents 1 mV, while -120 dB of 1 V is just 1 µV.)

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 1 year later...
9 hours ago, jabbr said:

When lots of pause frames are present they can cause network congestion; that said, there shouldn't be pause frames when endpoints that can handle a full 1 GbE are used.

 

In practice there are. All my big machines do it too. But remember that modern switches manage pause frames on a per-port basis.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

1 hour ago, plissken said:

This is where any pause frames should be happening, not normally by the switch itself. That's my point. It's also the point of the Cisco blog post. If a device's NIC driver isn't doing this, it's a device I don't want on my network. Something is broken if you have to have the switch manage this.

 

In order for pause frames to work, auto-negotiation for the support needs to complete. On unmanaged dumb switches this is automatically enabled, but on managed switches you can configure whether the switch will support it. If the support is disabled, the NIC hardware cannot enable it, because the upstream hardware either doesn't respond to the auto-negotiation request or denies it.

 

So if you have a managed switch, you need the switch to manage whether this functionality is supported on your network or not.
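Whether the negotiation actually succeeded can be checked from the NIC end. With ethtool on Linux it looks something like this (interface name and exact wording vary by driver):

# ethtool eth0 | grep -i pause
	Advertised pause frame use: Symmetric
	Link partner advertised pause frame use: Symmetric

If the switch side has flow control disabled, the link partner line reports "No" instead.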

 

For example, on my network, on my Linux workstation I have this (without doing anything at the Linux side):

root@linux-wks:~# ethtool -a enp6s0
Pause parameters for enp6s0:
Autonegotiate:	on
RX:		on
TX:		on

 

And on the same network, with macOS I have this (without doing anything at macOS side):

jussi@MacMini ~ % networksetup -getmedia en0
Current: autoselect
Active: 1000baseT <full-duplex flow-control energy-efficient-ethernet>

 

And, for example, statistics on one of my HPE switches show that there is some flow control activity (see the Transmitted Pause Frames and Received Pause Frames columns):

[Screenshot: HPE switch port statistics (2021-08-26) showing transmitted and received pause frame counters]

 

No NAAs are involved directly on this switch, just regular i9/Xeon computers and one NAS.

 

Similar statistics from one of my Cisco switches:

[Screenshot: Cisco switch port statistics (2021-08-26) showing similar pause frame counters]

 

1 hour ago, plissken said:

On interswitch links we have other methods of mitigation: LACP LAGs, 10/28/56 SFP+, QSFP 40/100 as you know since you have HPE enterprise class switch gear. BTW I do 90% Aruba AO-S/CX-OS switching and in my rack I have a complete Aruba mobility stack: MM1K, 7210 controller, Clearpass, Airwave running on HPE Proliant Gen 8.

 

The HPE switch decides for itself which method it will use. I have Cisco and HPE switches as core switches and a number of unmanaged (HPE and Zyxel) switches as per-room leaf switches.

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

22 minutes ago, plissken said:

I use other mitigation techniques. I've got implementation guides from Mitel, Vocera, Getwell, Pyxis and none of them mention enabling flow control on the switch. Therefore I do not.

 

Well, as long as you are using an unmanaged switch, it is enabled for you, and you cannot change that unless you explicitly disable it on the NIC (so far, all NICs I've seen have auto-negotiation enabled by default).

 

If you use, for example, HQPlayer NAA, then my documentation states you must have it enabled: you specifically don't want any re-send traffic on your NAA, and it is needed by a number of NAA hardware devices on the market. Even where it is not strictly required functionally, it greatly reduces the number of re-sends on most hardware, and thus the electrical noise they produce.

 

If you use, for example, UPnP, you should also enable it for the same reasons.

 

Quote

Let me ask this another way since I don't know your product: Does a Pi4 running an NAA like Ropieee need flow control enabled on the switch?

 

Yes. The Pi4 is not always able to keep up with sustained 1 Gbps inflows (RX) without resorting to re-sends from the source.

 

Even if you don't get occasional drop-outs or stuttering (due to network traffic stalls caused by packet loss and re-sends), you get an increase in packet processing overhead.

 

 

I also highly recommend having EEE (802.3az) enabled, for the same reasons. Cable length detection reduces the transmit power at the source (less noise), and idle sleeps further reduce electrical noise.
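On Linux, EEE can be checked and toggled per NIC, assuming driver support (interface name is just an example):

# Show current EEE status and advertised link modes
ethtool --show-eee eth0

# Enable EEE on the interface
ethtool --set-eee eth0 eee on

The actual low-power idle entry and exit then happen autonomously in the PHYs whenever the link is idle.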

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

2 minutes ago, jabbr said:

Let's ask a different way: if the Pi4 is sending out pause frames, do you want to enable processing? It seems like if the device is requesting them, then I'd want to handle them. 

 

IIRC, out of all the equipment I have, only the Seagate NAS does not request flow control support. If you have the support enabled and pause frames are not needed, none will be sent, so it has no effect. If the support is enabled and pause frames do get sent, then there is certainly a reason for them being sent...

 

I think this is also the baseline thinking for most unmanaged switches.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

18 minutes ago, plissken said:

What in audio requires 125 MB/s for just stereo or multichannel playback?

 

Remember that the hardware FIFOs can be as small as, for example, 8 Ethernet frames, and that everything is packet based. You may already overflow the hardware buffer by sending a single 128 kilobyte block at 1 Gbps: 128 kB is roughly 87 full-size 1500-byte frames arriving back-to-back in about 1 ms, so an 8-frame FIFO draining at 400 Mbps overflows within a couple of hundred microseconds unless pause frames slow the sender down. How many frames of buffering a specific piece of hardware has in front of the local bus between NIC and CPU varies. This is why I said "sustained": we don't know for how long the 1 Gbps can be handled, unless we specify "indefinitely". It may be just 1 ms, or 10 ms, or some other amount of time. It is still a full 1 Gbps for that period.

 

You cannot be very certain about the local bus speed vs. latency either. Let's say you have a big Intel CPU with 16 lanes of PCI Express, all of which are already taken by a GPU in a 16x PCIe slot. You may have a situation where there is enough traffic between CPU and GPU that the DMA transfer latency between NIC and CPU is affected. Which in turn means the NIC may run out of its local packet buffer, if it has any in the first place! The same GPU transfers may also mean that the kernel is spending considerable time in its ISR, meaning it has interrupts disabled and cannot process other interrupts. Sometimes interrupts can be distributed differently between cores (see "cat /proc/interrupts" on Linux). On Windows, ISR latencies can easily exceed 10 ms!
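For example, on Linux you can check which cores service the NIC interrupts, and pin them away from busy cores if needed (interface name and IRQ number here are hypothetical, and irqbalance may override manual settings):

# See which IRQs belong to the NIC and how they are distributed
grep eth0 /proc/interrupts

# Pin IRQ 130 to CPU1 only (the value is a CPU bitmask in hex)
echo 2 > /proc/irq/130/smp_affinity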

 

When it comes to your NIC, check out "ethtool -k" for the NIC capabilities and "ethtool -S" for statistics, to get a better picture of what is going on.
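For example, a quick way to spot buffer trouble is to filter the statistics for drop and overflow counters (statistic names are driver-specific, so the exact fields vary):

# ethtool -S eth0 | grep -Ei 'pause|fifo|drop|missed'

Non-zero FIFO or missed counters on the RX side are a hint that the NIC ran out of buffer faster than the host could drain it.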

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Just my €0.05: trying to deal with hardware buffer overflows at the application layer is a conceptually doomed and failed attempt. If and when the application layer learns about packet loss, a lot of Ethernet frames have already been lost. And the application layer cannot possibly know the status and amount of hardware buffer available on the gear at the other side of the network; that is affected by things like the activity of other applications. In the first place, the medium and the MTU size are not even known to the application layer. It may not even be Ethernet; it could just as well be ATM, WiFi, or something else along the way between hosts, below the IP protocol stack. And it shouldn't be of concern for anything above the IP protocol stack.

 

Even OS network stacks deliberately don't operate at Ethernet packet rate, because on a 1 Gbps network, for example, a 1500-byte MTU means over 80,000 frames per second, which causes too high an interrupt rate and too much overhead if packets are processed one at a time. Instead, multiple packets are bundled into a DMA transfer before an interrupt occurs. Not to even mention a 1500-byte MTU on 10 Gbps networks. Only the NIC hardware has fast enough reaction times to apply proper flow control before packets reach the DMA buffer (which does not have deterministic bandwidth and latency).
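That bundling shows up as interrupt coalescing settings, for example on Linux (supported knobs and values depend on the driver):

# ethtool -c eth0

Typical output shows parameters such as rx-usecs and rx-frames, i.e. how long, or for how many frames, the NIC waits before raising an interrupt.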

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

1 hour ago, Zauurx said:

I find it hard to follow... but I must admit that a direct HQPlayer server > NAA connection without a switch works very well (with very low latencies).

 

That will also certainly have said flow control enabled. But it is not exactly a network anymore... And I discourage multi-homed HQPlayer servers, as they are just one big source of networking problems.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 2 weeks later...
  • 1 year later...
17 hours ago, jabbr said:

This could be something that @Miska etc could be concerned with if he cared about the rate at which packets are issued ...

 

The NAA protocol is already designed in such a way that even if there were leakage, it would not create an issue.

 

I also prefer higher network speeds over lower ones, but 1 Gbps is already sufficient for most use cases. No harm in 2.5G, 10G or similar though, as long as you use either optical fiber or CAT6(a) U/UTP cabling. STP cables are a no-go. I use 500 MHz certified CAT6a cables myself, which are perfectly good for up to 10 Gbps.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

  • 2 months later...
20 hours ago, Bertel said:

Absolutely, the SignatureRendu via flow control should throttle the switch. From my layman's understanding I thought that this is where the TX and RX Pauses come from, which I thought is fine. But I do have occasional 1-3 second pauses (where I think HQPlayer has to stop playing; I can hear the fans spin down until, a few seconds later, playback resumes and the fans spin up again). I thought the RX Overrun might correlate with that...?

 

If those pauses appear not because of CPU load but because of the network, they are most likely due to network stalls caused by non-functional flow control.

 

20 hours ago, Bertel said:

Looks like the settings are 'on' so ok?

 

On that link, yes. But please remember it needs to be properly propagated throughout the network, i.e. for each "cable step", aka link, on the path between the final endpoints.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

