
Optical Network Configurations



18 minutes ago, plissken said:

 

I want to change tack: you go find a network pro who, knowing what I, Barrows, and Jabbr know, will take up a contrary position.

 

Because what's happening is like an MD debating health issues with their patient. I'd rather that patient go get another MD to bring to the table.

This is just a question: does a network pro understand file creation?

 

22 minutes ago, plissken said:

 

I want to change tack: you go find a network pro who, knowing what I, Barrows, and Jabbr know, will take up a contrary position.

 

Because what's happening is like an MD debating health issues with their patient. I'd rather that patient go get another MD to bring to the table.

Are we just talking network pro here? Surely it's every aspect of the digital chain? Also, is John S not a pro in his field?

1 hour ago, ASRMichael said:

This is just a question: does a network pro understand file creation?

 

I understand we aren't talking a zero copy stack. I understand MD5 and CRC32.

1 hour ago, ASRMichael said:

Are we just talking network pro here? Surely it's every aspect of the digital chain? Also, is John S not a pro in his field?

 

We are talking about networking as it pertains to optical interfaces. I'm sure John is a pro in his field. The white paper he authored backs my (and others') position about breaking the electrical layer (WiFi or optical) and getting as much xfer speed as possible (802.11ax or 802.3ae).

 

A pro would produce objective, reproducible data. I've done this more than a few times over the years here: everything from showing FLAC levels only affect compression time and resource usage (PC and Raspberry Pi) and not decompression, to FLACs being extracted to the original MD5 hash, to wired and WiFi buffered playback and what it looks like from a non-realtime perspective.
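A bit-perfect check like the extract-to-original-MD5 test described above can be sketched in a few lines of Python. This is a generic illustration, not the exact tooling used in those tests; the function names are my own:

```python
import hashlib
import zlib

def stream_digests(chunks):
    """MD5 and CRC32 over an iterable of byte chunks (e.g. decoded PCM)."""
    md5 = hashlib.md5()
    crc = 0
    for chunk in chunks:
        md5.update(chunk)
        crc = zlib.crc32(chunk, crc)   # second arg chains a running CRC
    return md5.hexdigest(), f"{crc:08x}"

def file_digests(path, chunk_size=1 << 20):
    """Digest a file without loading it all into memory."""
    with open(path, "rb") as f:
        return stream_digests(iter(lambda: f.read(chunk_size), b""))
```

Two extractions of the same FLAC are bit-identical exactly when both digests match.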

 

Barrows has been speaking rather sagely about all of this but it's falling on deaf ears. Interesting for a hobby that is focused on active listening.

1 hour ago, ASRMichael said:

Networking is about absolutes! Ha ha! We can agree absolutely that networks are not absolute! Otherwise there wouldn’t be failures/errors! Nothing in the universe is absolute! 

 

Networking gives you roughly a 1-in-4-billion undetected error rate (the odds of a corrupted frame slipping past the CRC-32 check). Nothing is brickwall absolute, but we are pretty darn close.
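That 1-in-4-billion figure falls out of the 32-bit frame check sequence: a random corruption only slips past CRC-32 with probability about 1 in 2^32, and any single-bit flip is always caught. A quick illustrative sketch (toy payload, not a real Ethernet frame):

```python
import zlib

frame = bytes(range(256)) * 4          # stand-in for an Ethernet payload
good_crc = zlib.crc32(frame)

# Flip a single bit and recompute: CRC-32 always detects 1-bit errors.
corrupted = bytearray(frame)
corrupted[100] ^= 0x01
assert zlib.crc32(bytes(corrupted)) != good_crc

# A random corruption matches a 32-bit check value with odds 1 / 2**32.
print(f"undetected-error odds: about 1 in {2**32:,}")  # 1 in 4,294,967,296
```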

 

I deal with hardware failures by designing with redundancy in mind. Configuration and design, however, are the bane of my existence. I was doing T-shoot today and something tangential popped up when looking at the RIB on a routing switch that has set off another chain of events. Sometimes you're the bat, sometimes the ball.

7 hours ago, barrows said:

Consider: if there was somehow a way in which an Ethernet timing clock "embedded" its phase noise/jitter in the data, then the internet would not work, because all of the accumulating clock jitter over hundreds or even thousands of re-clocking steps would so corrupt the data as for it to become unintelligible.  The same could be said of streaming audio from the likes of Qobuz: if clock jitter is accumulating and "embedding" itself in the data, the sound quality of streaming audio would be hopelessly corrupted due to the hundreds of re-clocking steps on the way from the Qobuz servers to one's home (and indeed, I very much doubt that these clocks are ultra low jitter ones, or that the power supplies used for the internet path from Qobuz to one's home are SJ designs, or even linear for that matter).


Barrows, you should know better than to write this.
We're not debating that Ethernet is error free; it is by definition. No one questions that.
(Well, some do.) Even if there are one or two bit errors, it doesn't affect SQ. And you know it.
 

Since we're not discussing critical time synchronization over long distances, like the White Rabbit project, let's also keep the discussion to what's happening after your ISP modem.
 

Also, fiber will solve some issues; still, it's my understanding we are left with the phase noise issue, and possibly some other gremlins that seem able to travel with the digital signal.
 

Q. What about fiber-optic interfaces? Don’t these block everything?


A. In the case of a pure optical input (zero metal connection), this does block leakage current, but it does not block phase-noise effects. The optical connection is like any other isolator: jitter on the input is transmitted down the fiber and shows up at the receiver. If the receiver reclocks the data with a local clock, you still have the effects of the ground-plane noise from the data causing threshold changes on the reclocking circuit, thus overlaying on top of the local clock.

2 hours ago, plissken said:

Which I also offer to you. I've made a proof of concept video showing how this can be done in real-time with me in another room swapping cabling while you listen.


We aren't listening to the switch in your experiment, rather the buffer.
 

Wouldn’t we achieve the same with a WiFi USB dongle?

Music will play after we pull that connection.  
 

Would you accept doing it this way? If not, why not?

7 hours ago, barrows said:

The upstream clock in the switch is gone and technically cannot have any further influence on the sound quality once the file data is in the downstream buffer, and the Ethernet cable is unplugged.  The clock in the switch is not "embedded" in the data somehow, that clock is long gone when the data is in the downstream buffer, as data in a buffer has no clock reference at that point.  John Swenson has speculated that upstream clock phase noise does matter to downstream playback, but this can only happen when the Ethernet cable is in place (and is still unproven speculation).


To simplify, we can keep this related to fiber transfer. My understanding is fiber solves problem one, the problem solved by the etherRegen (ref. white paper).

Your statement above is more interesting to verify and understand better, as John says otherwise. 
 

This is all about understanding clock threshold jitter, and how it’s being affected further down the chain. 


We will just have to wait until the new products are ready for release. My understanding is he's quite busy now. (Hopefully he has finished Sonore's new products?)

 

1 hour ago, plissken said:

and getting as much xfer speed as possible (802.11AX or 802.3ae).

Not sure the paper says anything about speed.

..as you said:

On 10/14/2020 at 9:41 PM, plissken said:

Considering that 100mbit is ~11 times overkill for 24/192 PCM data...


The only reason someone is promoting 10GB is the expectation of less jitter. And we're not 100% sure if there may be other effects we don't see (or hear😀), for example the effects of higher power demands.

Reports say 10GB SFP+ sounds worse. And Finisar data sheets don’t show jitter measurements, as they do with some SFP data sheets. 

 

Even if buffers were the salvation, those that don't have huge buffers (which is many) must still rely on the best possible Ethernet connection. And to 2000 ears, it has happened through the use of the etherRegen.
 

@plissken

Wouldn't it be the same sound in your system if you added a USB disk to that PC instead of the network?

Should save you the hassle of unplugging the cable.

 

19 minutes ago, barrows said:

There is nothing "embedded" in the data when it goes into a receiving buffer; the data is perfect, and it contains nothing in terms of clocking, as there is no clock in the buffer, just the perfect data and nothing else.

What size is the buffer in the opticalRendu? There must have been a good reason it was copied to the microRendu 1.5.
Besides size, I suppose time is interesting too, but I think I have sound for one or two seconds after pulling the plug.

Any reason not to make the buffer bigger ? 

1 hour ago, R1200CL said:

What size is the buffer in the opticalRendu? There must have been a good reason it was copied to the microRendu 1.5.
Besides size, I suppose time is interesting too, but I think I have sound for one or two seconds after pulling the plug.

Any reason not to make the buffer bigger ? 

The buffer size used depends on the transfer protocol (RAAT, DLNA, NAA, LMS); they are all different. And some, like LMS, allow for some adjustment of that.

That said, after having experimented with buffer sizes in the LMS settings, I struggled to hear any difference at all, to the point that the differences I might have heard could just as easily have been imaginary.  If there were differences, there was certainly no clear conclusion; in other words, longer/bigger buffering was not superior.

As far as the hardware buffer capability of the µR, uR, and oR, that would be proprietary and not something Sonore would reveal to our competitors on a public forum.

 

"Any reason not to make the buffer bigger ?"

 

I would ask the following: Any reason to make the buffer bigger?

 

As long as one is not getting underruns, why would you desire a larger buffer?  There is no conceivable advantage that I can think of.  There are those who have reported on these forums that low latency is a desirable feature for playback software sound quality (though I can imagine no reason why it would actually matter), so from those persons' perspective, buffering "should" be as little as possible...
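The underrun point can be illustrated with a toy producer/consumer model (entirely hypothetical numbers; real renderers are far more sophisticated): once the fill rate keeps up with the drain rate, extra buffer depth buys nothing, and when it can't keep up, extra depth only delays the inevitable underrun.

```python
def simulate(buffer_frames, fill_per_tick, drain_per_tick, ticks):
    """Toy playback-buffer model: returns the number of underrun ticks."""
    level, underruns = buffer_frames, 0      # start with a full buffer
    for _ in range(ticks):
        level = min(buffer_frames, level + fill_per_tick)  # network refill
        if level >= drain_per_tick:
            level -= drain_per_tick          # playback drains the buffer
        else:
            level = 0
            underruns += 1                   # playback gap: not enough data
    return underruns

# Network keeps up (fill == drain): no underruns at any buffer size.
assert simulate(64, 10, 10, 1000) == 0
assert simulate(4096, 10, 10, 1000) == 0     # bigger buffer changes nothing
# Sustained deficit: even a big buffer eventually runs dry.
assert simulate(4096, 8, 10, 10_000) > 0
```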

SONORE computer audio

1 hour ago, R1200CL said:

Not sure with paper says anything about speed. 

 

It most certainly does speak to speed if you're in the know. John's phase noise point can only exist on Tx/Rx for starters. So as you go from 100 to 1000 to 10,000, your transmission interval is shorter, and so is your exposure to "phase" noise.  Also, if this is the case, then you should be able to play back a file from local storage and tell when the NIC is in use, since that is the argument being made.
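The shorter-transmission-interval point is simple arithmetic: the time a frame spends on the wire scales inversely with link rate. A quick sketch with nominal rates, ignoring preamble and inter-frame gap overhead:

```python
def wire_time_us(frame_bytes, bits_per_second):
    """Serialization delay for one frame, in microseconds."""
    return frame_bytes * 8 / bits_per_second * 1e6

# 1500-byte frame: 120 us at 100 Mbit/s, 12 us at 1 Gbit/s, 1.2 us at 10 Gbit/s
for name, rate in [("100BASE-TX", 100e6), ("1000BASE-T", 1e9), ("10GBASE-R", 10e9)]:
    print(f"{name:>10}: {wire_time_us(1500, rate):7.2f} us per 1500-byte frame")
```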

 

Let me ask you this: When getting a tooth cavity filled do we go with an old drill at 4000 RPM or with a SOTA drill at 400,000 to 800,000 RPM?

5 minutes ago, The Computer Audiophile said:

This would be true if the buffer was 1k. It's like saying, "that's a picture of me when I was younger." All pictures are when you were younger :~)

 

Chris, your reply is pretty much the point that I, Barrows, and Jabbr have been trying to make. If it's a motion camera, they even have something called a frame buffer. Some really high-end cameras can store like 1,000,000 frames per second.

2 minutes ago, plissken said:

 

Chris, your reply is pretty much the point that I, Barrows, and Jabbr have been trying to make. If it's a motion camera, they even have something called a frame buffer. Some really high-end cameras can store like 1,000,000 frames per second.

I hear ya. 

 

I think buffers are often discussed without anyone knowing how they are implemented in audio components. Some people think an entire track is stored in a buffer that stores it as data without a clock attached or that in the buffer it's reclocked etc... This just isn't the case in many audio components. 

 

Anyway, I don't want to derail the discussion.

Founder of Audiophile Style | My Audio Systems

2 hours ago, R1200CL said:

Wouldn’t we achieve the same with a WiFi USB dongle?

Music will play after we pull that connection.  
 

Do you accept to do it this way ? If no, why not ?

 

I'm an open advocate of WiFi. I even shot a video of playing 24/192 over a $200 Asus laptop with 802.11g at 54 Mbps. You could see the file max out the Network Monitor and then ramp down to idle while watching JRiver play. Then, with no break in playback, watch the Network Monitor ramp up again. Rinse and repeat.
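The arithmetic backs this up: 24/192 stereo PCM needs only about 9.2 Mbit/s, which fits comfortably even inside 802.11g's nominal 54 Mbit/s. A back-of-envelope sketch (variable names are mine):

```python
bits_per_second = 192_000 * 24 * 2     # sample rate x bit depth x channels
mbps = bits_per_second / 1e6           # about 9.2 Mbit/s

print(f"24/192 stereo PCM: {mbps:.2f} Mbit/s")
print(f"headroom on 54 Mbit/s 802.11g:   {54 / mbps:.1f}x")
print(f"headroom on 100 Mbit/s Ethernet: {100 / mbps:.1f}x")  # the '~11x overkill'
```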

 

My WiFi consists of 3 TP-Link Omada 1350AC units with PoE and the Omada Wireless Controller.  $56 a pop, and I routinely get 38 MB/s on average. So 300%+ over 100 Mbit/s copper, and anywhere from $250 to $1850 less expensive than other options.


This is getting interesting, as we're moving towards buffers and how data is stored or moved further down (with or without clock threshold jitter).
 

Maybe even the sample rate of the music matters? Meaning there is (I hope I'm right) much more data that must be processed and clocked with DSD512 or PCM768. Will it affect the possible amount of clock threshold jitter?

I suppose you would then also need a bigger buffer for the same time frame of music. Hm. Not that it should be an issue. Just a thought.
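Back-of-envelope, the buffer needed for the same seconds of music does grow with format. These are raw, uncompressed figures, and the function name is mine:

```python
def buffer_bytes(seconds, sample_rate, bits_per_sample, channels=2):
    """Raw audio bytes needed to hold `seconds` of playback."""
    return int(seconds * sample_rate * bits_per_sample * channels / 8)

two_sec = {
    "CD 44.1k/16":    buffer_bytes(2, 44_100, 16),
    "PCM 768k/32":    buffer_bytes(2, 768_000, 32),
    "DSD512 (1-bit)": buffer_bytes(2, 44_100 * 512, 1),
}
# CD ~0.35 MB, PCM768 ~12.3 MB, DSD512 ~11.3 MB for a 2-second buffer
for fmt, nbytes in two_sec.items():
    print(f"{fmt:>15}: {nbytes / 1e6:5.2f} MB for 2 s")
```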
 



 

 

On 10/15/2020 at 12:11 PM, barrows said:

Consider: if there was somehow a way in which an Ethernet timing clock "embedded" its phase noise/jitter in the data, then the internet would not work, because all of the accumulating clock jitter over hundreds or even thousands of re-clocking steps would so corrupt the data as for it to become unintelligible.  The same could be said of streaming audio from the likes of Qobuz: if clock jitter is accumulating and "embedding" itself in the data, the sound quality of streaming audio would be hopelessly corrupted due to the hundreds of re-clocking steps on the way from the Qobuz servers to one's home (and indeed, I very much doubt that these clocks are ultra low jitter ones, or that the power supplies used for the internet path from Qobuz to one's home are SJ designs, or even linear for that matter).

 

Kicking what should be a dead horse: the fact that accumulating jitter would corrupt the internet is exactly the reason the Ethernet specifications require the "stressed receiver test" or "stressed eye pattern test": Jitter must not accumulate from upstream to downstream.

 

Actually the clocks are probably very low jitter because 100Gbe has taken over and even 10Gbe is considered legacy.

 

Linear power supplies: ha! My Mellanox 100Gbe NICs and switch *are* fed by SMPS. That said, Analog Devices did not develop the LT3045 for the home audio market 😂, and using ultra-low-noise onboard/oncard/onchip power supplies (often linear) is a very common design pattern; these guys have lots of resources, develop their own chips and boards, and aren't dumb.

Custom room treatments for headphone users.

On 10/15/2020 at 4:11 PM, ASRMichael said:

I agree, but also disagree. Why? Because you can’t prove it either way. Is it not because we don’t have the right measurement tools yet? It works both ways as far as I’m concerned. 
 

 

The measurement tools exist. Believe me. It is absurd to think they don't.

 

OK, I'm not asserting that everything regarding SQ is related to jitter (quite the contrary), but the measurement tools regarding jitter do exist.

 

Which is why we have endless speculation: folks throw out speculations, but they aren't supported. In this thread I want to keep the discussion away from speculation about metaphysical jitter, and rather focus on specific products and how they interact with each other.

 

I am not saying that 10Gbe fiber is necessary or even sounds better, simply that, regarding jitter, these products exist, have had their jitter measured, and I have used them. Products such as the opticalRendu, which are ≤1Gbe, don't make any claims about jitter, and I am not asking anyone to measure them. If people like the SQ, that's all that is important.


On 10/15/2020 at 8:04 PM, R1200CL said:

The only reason someone is promoting 10GB is the expectation of less jitter. And we're not 100% sure if there may be other effects we don't see (or hear😀), for example the effects of higher power demands.

Reports say 10GB SFP+ sounds worse. And Finisar data sheets don’t show jitter measurements, as they do with some SFP data sheets. 

 

 

I'm not "promoting" anything; I have zero financial interest. The discussion of Ethernet jitter having any SQ effect is pure speculation.  I *am* saying that if network jitter is important (which itself hasn't been demonstrated), 10Gbe switches don't pass upstream jitter downstream.

 

If there are SQ differences between specific SFP(+) modules then let's hear people's impressions. 


1 hour ago, jabbr said:

 

I'm not "promoting" anything; I have zero financial interest. The discussion of Ethernet jitter having any SQ effect is pure speculation.  I *am* saying that if network jitter is important (which itself hasn't been demonstrated), 10Gbe switches don't pass upstream jitter downstream.

 

If there are SQ differences between specific SFP(+) modules then let's hear people's impressions. 

 

Let me expand on this: there is a large contingent of people who believe that electrical characteristics of the server affect the SQ, and that noise on the server can cross a network and ultimately affect the DAC, hence SQ. Let's break down the types of noise that exist:

 

1) differential mode voltage/current noise

2) common mode voltage/current noise

3) phase noise

 

This isn't exhaustive or comprehensive, but obviously fiber-optic transmission provides strong common-mode noise isolation.

Hitting a tight eye pattern ensures that differential-mode and phase noise are within low limits; specifically, the 10Gbe specifications ensure that such noise is not transmitted across the network.

 

I am *not* saying that a 10Gbe switch has better SQ than, for example, the opticalModule feeding into the opticalRendu (to use products that @barrows is familiar with). I strongly believe in keeping things as simple as possible, so if you already have a network in your house, it's simple to get these two and you have fiber isolation as well as a low-powered/low-noise endpoint connected to your DAC. I think this fits the NAA model that @Miska created perfectly.

 

I also use Wifi with a NUC as NAA.

 

The reason I've been "harping" about 10Gbe is simply to counter arguments that somehow noise might worm its way from a server to an endpoint across the network.


