Puzzling Ethernet Behavior from mRendu


I was curious whether anyone else notices behavior similar to what I seem to have stumbled upon with my mRendu.


While the mRendu sits idle with no music playing, its response time to ping commands is significantly higher than it is when music is playing. In my setup the mRendu, the NAS, and the Roon Server Core PC are all connected to the same local switch, so no routing is involved. The switch itself is an enterprise-class Gigabit Cisco 2960G. The interface connected to the mRendu has been tested with the link speed/duplex set to both "Auto" and "1000 Full", and the behavior is the same in either case.


Below is an example ping output taken from the Roon Core server, which runs Ubuntu 16.04. You can see that for the first 13 pings (with music playing) the response times average well under 1 ms. Then, when I stop playback, as seen from the 14th ping onward, response times go through the roof.


Again, this test was run from the Roon server, which is connected to the same switch as the mRendu. If I perform the same test against my NAS, also on that switch, the ping times look like the first 13 pings of the mRendu test: all well under 1 ms.


Thoughts? Is the mRendu dropping into a low-power mode immediately after the music stops? Strange and interesting behavior.
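If the mRendu runs a standard Linux kernel and you can get shell access to it, one quick way to test the low-power-mode guess would be to read the CPU frequency governor while playing versus idle. This is just a sketch: the sysfs path below is the stock Linux cpufreq interface, and I'm assuming (not confirming) that the mRendu exposes it.

```python
from pathlib import Path

# Standard Linux cpufreq sysfs path; may not be exposed on every build.
gov = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")

if gov.exists():
    # Typical values: "performance", "ondemand", "powersave"
    print(gov.read_text().strip())
else:
    print("cpufreq interface not exposed on this system")
```

If the governor (or the current frequency next to it in the same directory) changes when playback stops, that would line up with the ping spikes.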



ROONSRV:~$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.239 ms
64 bytes from icmp_seq=2 ttl=64 time=0.281 ms
64 bytes from icmp_seq=3 ttl=64 time=0.724 ms
64 bytes from icmp_seq=4 ttl=64 time=0.352 ms
64 bytes from icmp_seq=5 ttl=64 time=0.490 ms
64 bytes from icmp_seq=6 ttl=64 time=0.345 ms
64 bytes from icmp_seq=7 ttl=64 time=0.315 ms
64 bytes from icmp_seq=8 ttl=64 time=0.343 ms
64 bytes from icmp_seq=9 ttl=64 time=0.403 ms
64 bytes from icmp_seq=10 ttl=64 time=0.401 ms
64 bytes from icmp_seq=11 ttl=64 time=0.356 ms
64 bytes from icmp_seq=12 ttl=64 time=0.395 ms
64 bytes from icmp_seq=13 ttl=64 time=0.399 ms
64 bytes from icmp_seq=14 ttl=64 time=34.2 ms
64 bytes from icmp_seq=15 ttl=64 time=87.9 ms
64 bytes from icmp_seq=16 ttl=64 time=1.33 ms
64 bytes from icmp_seq=17 ttl=64 time=10.2 ms
64 bytes from icmp_seq=18 ttl=64 time=19.2 ms
64 bytes from icmp_seq=19 ttl=64 time=28.2 ms
64 bytes from icmp_seq=20 ttl=64 time=37.3 ms
64 bytes from icmp_seq=21 ttl=64 time=46.2 ms
64 bytes from icmp_seq=22 ttl=64 time=38.8 ms
64 bytes from icmp_seq=23 ttl=64 time=63.6 ms
64 bytes from icmp_seq=24 ttl=64 time=39.2 ms

--- ping statistics ---
24 packets transmitted, 24 received, 0% packet loss, time 23014ms
rtt min/avg/max/mdev = 0.239/17.149/87.928/23.941 ms
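To put numbers on the two phases, the RTT values above can be pulled out and averaged separately. This is just a sketch over the sample output from this post, with the split after icmp_seq 13 (where playback stopped):

```python
import re

# RTT values copied from the ping output above (ms);
# the first 13 were taken while music was playing.
ping_output = """
time=0.239 ms time=0.281 ms time=0.724 ms time=0.352 ms time=0.490 ms
time=0.345 ms time=0.315 ms time=0.343 ms time=0.403 ms time=0.401 ms
time=0.356 ms time=0.395 ms time=0.399 ms time=34.2 ms time=87.9 ms
time=1.33 ms time=10.2 ms time=19.2 ms time=28.2 ms time=37.3 ms
time=46.2 ms time=38.8 ms time=63.6 ms time=39.2 ms
"""

times = [float(t) for t in re.findall(r"time=([\d.]+) ms", ping_output)]
playing, idle = times[:13], times[13:]

avg = lambda xs: sum(xs) / len(xs)
print(f"playing: {avg(playing):.3f} ms avg over {len(playing)} pings")
print(f"idle:    {avg(idle):.3f} ms avg over {len(idle)} pings")
```

On this sample the idle average works out to roughly two orders of magnitude above the playing average, which is far more than normal switch-to-switch jitter would explain.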


  • 4 weeks later...

Perhaps I am overthinking this but...


When the computer sends the stop request to the mRendu, the network stack shifts over to TCP, which involves a series of handshakes with the devices connected to the mRendu. That takes slightly more time per network transaction, and ICMP should get deprioritized while it is happening. While the device is streaming (like any device involved in streaming, e.g. the NAS), it is using a stateless connection protocol, UDP: no negotiation or handshakes needed, no TCP trying to keep state between devices, and no flow-control or congestion-control operations taking place. So during playback ICMP (ping) should not be getting deprioritized, since TCP is not present to act on it. That is probably what you are observing here.
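The UDP point can be illustrated with a toy loopback example (plain Python sockets, nothing mRendu-specific): a datagram goes out with no connect() and no handshake, whereas TCP must complete its three-way handshake before any data moves.

```python
import socket

# Toy loopback demo, not the mRendu itself: UDP is connectionless, so a
# datagram can be sent immediately, with no handshake and no per-connection
# state kept on either end.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"audio frame", ("127.0.0.1", port))   # fire and forget, no connect()

data, addr = rx.recvfrom(1024)
print(data)     # b'audio frame'
tx.close()
rx.close()
```

Whether the mRendu's streaming path actually runs over UDP in a given setup is worth verifying with a packet capture rather than assuming.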

