PFM Flea from Alex
The ultimate Power Supply Units for music servers (and other devices for cleaner power source)
1 hour ago, Nenon said:
What is a "definite No,No"?
Don't feed power to both an Oscillator and an SSD from the same supply rail without further regulation (preferably) or further filtering of the power to the oscillator.
I use the attached, which is fairly similar to the P.F.M. Flea to separately power oscillators from an already low noise voltage regulator.
2019-06 Isolation Pt2
Sonore opticalRendu
6 hours ago, MagnusH said:
Ok, that might well be the case. But once you add an optical transmission like fiber, that all goes away, so any clocks before the fiber won't matter as long as the data was delivered correctly. In fact, nothing before the fiber should matter at all, provided the data was delivered correctly (the FMC after the fiber will matter though).
All the optical does is block leakage; it doesn't get rid of clocking issues at all (it can actually make them worse). The fact that the link is optical does not automatically apply some universal quantum time scheme that mystically aligns edges perfectly. If you send in a pulse, then another 50ns later, then another at 51ns, then another at 49ns, those differences get preserved at the receiver; the optical link does not magically force all of them to be exactly 50ns.
The raw data coming out of the optical receiver goes into a chip that rebuilds the Ethernet signal using its own local clock. That is done with flip flops inside the chip, and these flip flops behave just like any other flip flops; again, no magic here. I was trying to avoid re-iterating what I have said before on this, but it looks like I'm going to have to do it anyway.
So how come this reclocking with a new clock is not perfect? As edges from the input stream go into a circuit, each and every one of those edges creates a current pulse on the power and ground network inside the chip and on the board. The timing of that pulse is exactly related to the timing of the input data, and the timing of the input data is directly related to the jitter on the clock producing the stream. This noise on the PG network changes the threshold voltage of anything receiving data inside the chip, especially the local clock going into the chip. This means the phase noise spectrum of the incoming data gets overlaid on top of the phase noise spectrum of the local clock. It's attenuated from what it is in the source box, but it is definitely still there.
THAT is how phase noise gets from one device to the next, EVEN over optical connections.
If you look at this in a system containing all uniformly bad clocks, you don't particularly see this, since they are all bad to begin with. BUT when you go from a bad clock to a very good clock you can definitely see this contamination of the really good clock by the overlaying of the bad clock. This is really hard to measure directly because most of the effect is happening inside the flip flop chip itself. You CAN see the effect on the data coming out of the flip flop.
This process happens all the way down the chain, Ethernet to USB, USB into DAC box, and inside the DAC chips themselves, finally winding up on the analog out.
Wherever reclocking is happening, how strong this overlay is depends primarily on the impedance of the power and ground network, both on boards and inside chips. A lower impedance PG network produces lower clock overlay; a higher PG impedance gives a stronger overlay.
This is something that is difficult to find out about a particular chip, the impedance of the PG network is NEVER listed in the data sheets! I have somewhat of an advantage here having spent 33 years in the semiconductor industry, spending a lot of time designing PG networks in chips, I have some insight into which chips look like good candidates for low impedance PG networks.
On a side note, because Ethernet and USB are packet systems the receiving circuit CAN use a completely separate clock; the frequency just has to be close enough to handle the small number of bits in the packet. If it is a little too slow or too fast the difference is made up in the dead time between packets.
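As a rough arithmetic sketch of that point (the numbers are mine, assuming gigabit Ethernet's specified 100 ppm clock tolerance and its 96-bit-time minimum inter-packet gap):

```python
BIT_RATE = 1_000_000_000   # 1 Gb/s Ethernet
PACKET_BITS = 1500 * 8     # a full-size Ethernet frame
PPM_OFFSET = 100e-6        # clock tolerance allowed by the Ethernet spec

packet_time = PACKET_BITS / BIT_RATE   # time to clock one packet through
drift_s = packet_time * PPM_OFFSET     # timing error accumulated per packet
drift_bits = drift_s * BIT_RATE        # the same error in bit periods

IPG_BITS = 96   # minimum inter-packet gap, in bit times

print(f"worst-case drift per packet: {drift_bits:.1f} bit periods")
print(f"inter-packet gap:            {IPG_BITS} bit periods")
# The dead time between packets easily re-absorbs the rate mismatch.
```

Even at worst-case clock offset the receiver only slips about one bit period per maximum-size packet, far less than the gap between packets.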
To reiterate none of this has ANYTHING to do with accurately reading bits, this is assumed. It IS all about high jitter on network clocks working its way down through reclockings to the DAC chips and hence to audio outs. All the work done on DACs in recent years has cleaned up the signals so dramatically that these effects are getting to be audible in many systems.
2019-06 Isolation Pt1
The understanding of "isolation" in digital audio has been my passion for at least 10 years. There is a LOT of misunderstanding on the subject floating around in audio circles. Here is a quick summary of my current understanding and how the current products fit in with this.
There seems to be TWO independent mechanisms involved: leakage current and clock phase noise. Various amounts of these two exist in any system. Different "isolation" technologies out there address one or the other, but very rarely both at the same time. Some technologies that attenuate one actually increase the other. Thus the massively confusing information out there.
Leakage current is a property of power supplies. It is the leakage of AC mains frequency (50/60 Hz) into the DC output. It is usually common mode (i.e. it exists on BOTH the + and - wires at the same time), which makes it a bit difficult to see. There seem to be two different types: one that comes from linear supplies and is fairly easy to block, and an additional type that comes from SMPS and is MUCH harder to block. An SMPS contains BOTH types. They are BOTH at line frequency.
Unfortunately in our modern times where essentially all computer equipment is powered by SMPS we have to deal with this situation of both leakage types coming down cables from our computer equipment. There are many devices on the market (I have designed some of them) for both USB and Ethernet, most can deal with the type from linear supplies but only a few can deal with the type from SMPS.
Optical connections (when the power supplies are completely isolated from each other) CAN completely block all forms of leakage, it is extremely effective. Optical takes care of leakage, but does not deal with the second mechanism.
Clock phase noise
Phase noise is a frequency-domain measurement of "jitter", yes, that term that is so completely misunderstood in audio circles that I'm not going to use it. Phase noise is a way to look at the frequency spectrum of jitter; the reason to use it is that there seems to be fairly decent correlation to sound quality. Note this has nothing to do with "picoseconds" or "femtoseconds". Forget those terms, they do not directly have meaning in audio; what matters is the phase noise. Unfortunately phase noise is shown on a graph, not as a single number, so it is much harder to directly compare units. This subject is HUGE and I'm not going to go into any more detail here.
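To make "phase noise as the spectrum of jitter" concrete, here is a small numerical sketch (my own illustration with made-up jitter values, not from any measurement): synthesize the edges of a 10 MHz clock whose timing wanders sinusoidally at 5 kHz on top of random jitter, convert the timing error to phase error, and look at its spectrum. The deterministic wander shows up as a spur at its own frequency; the random jitter shows up as a noise floor.

```python
import numpy as np

f0 = 10e6                          # nominal clock frequency, 10 MHz
n = 10_000                         # number of edges analyzed
k = np.arange(n)
ideal_edges = k / f0               # ideal edge times, seconds

rng = np.random.default_rng(0)
jitter = 5e-12 * np.sin(2 * np.pi * 5e3 * ideal_edges)  # 5 ps wander at 5 kHz
jitter += 1e-12 * rng.standard_normal(n)                # 1 ps rms random jitter

phase_err = 2 * np.pi * f0 * jitter           # timing error -> phase error, radians
spectrum = np.abs(np.fft.rfft(phase_err)) / n
freqs = np.fft.rfftfreq(n, d=1 / f0)          # offset frequency from the carrier, Hz

spur_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
print(f"largest spur at {freqs[spur_bin]:.0f} Hz offset")  # the 5 kHz wander
```

A real phase noise plot is essentially this spectrum normalized to dBc/Hz and plotted against log offset frequency, which is why it is a graph rather than a single number.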
Different oscillators (the infamous "clocks" that get talked about) can have radically different phase noise. The level of phase noise that is very good for digital audio is very difficult to achieve and costs money. The corollary is that the cheap clocks used in most computer equipment (including network equipment) produce phase noise that is very bad for digital audio.
The important thing to understand is that ALL digital signals carry the "fingerprint" of the clock used to produce them. When a signal coming from a box with cheap clocks comes into a box (via Ethernet or USB etc) with a much better clock, the higher level of phase noise carried on the data signal can contaminate the phase noise of the "good" clock in the second box. Exactly how this happens is complicated, I've written about this in detail if you want to look it up and see what is going on.
The contamination is not complete, every time the signal gets "reclocked" by a much better clock the resulting signal carries an attenuated version of the first clock layered on top of the fingerprint of the second clock. The word "reclocked" just means the signal is regenerated by a circuit fed a different clock. It may be a better or a worse clock, reclocking doesn't always make things better!
As an example if you start with an Ethernet signal coming out of a cheap switch, the clock fingerprint is going to be pretty bad. If this goes into a circuit with a VERY good clock, the signal coming out contains a reduced fingerprint from the first clock layered on top of the good clock. If you feed THIS signal into another circuit with a very good clock, the fingerprint from the original clock gets reduced even further. But if you feed this signal into a box with a bad clock, you are back to a signal with a bad fingerprint.
The summary is that stringing together devices with GOOD clocking can dramatically attenuate the results of an upstream bad clock.
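The cascade described above can be caricatured with a toy numeric model. All figures here are mine and purely illustrative (real phase noise is a curve, not one number); the point is only the arithmetic of per-stage attenuation plus power-summing with the local clock's own noise:

```python
import math

def reclock(upstream_db, local_db, attenuation_db=20):
    """One reclocking stage: power-sum the attenuated upstream fingerprint
    with the local clock's own phase noise (values in dBc/Hz at some
    offset of interest; the 20 dB attenuation figure is assumed)."""
    residual = upstream_db - attenuation_db
    return 10 * math.log10(10 ** (residual / 10) + 10 ** (local_db / 10))

cheap_switch = -90    # dBc/Hz: a poor network clock (illustrative figure)
good_clock = -150     # dBc/Hz: a very good local oscillator (illustrative)

after_one = reclock(cheap_switch, good_clock)
after_two = reclock(after_one, good_clock)
after_bad = reclock(after_two, cheap_switch)   # then through a bad clock again

print(f"one good reclocking:  {after_one:.1f} dBc/Hz")
print(f"two good reclockings: {after_two:.1f} dBc/Hz")
print(f"then a bad clock:     {after_bad:.1f} dBc/Hz")
```

Two good stages push the bad fingerprint well below the good clock's own noise, but one downstream bad clock puts you right back where you started.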
The latest devices from Sonore take on BOTH of these mechanisms that affect sound: optical for blocking leakage, and multiple reclockings with very good clocks. The optical part should be obvious. A side benefit of the optical circuit is that it completely regenerates the signal with a VERY low phase noise clock; this is a one step reclocking. It attenuates effects from upstream circuits but does not completely get rid of them. This is where the opticalModule comes into play: if you put an opticalModule in the path to the opticalRendu you are adding another reclocking with VERY good clocking. The result is a very large attenuation of upstream effects. It's not completely zero, but it is close.
The fact that the opticalRendu is a one stage reclocking (which leaves some effects from upstream circuits) is why changing switches etc can still make a difference. Adding an opticalModule between the switch and opticalRendu reduces that down to vanishingly small differences.
So an opticalModule by itself adds both leakage elimination and significant attenuation of clock effects. TWO opticalModules in series give you the two level reclocking.
An opticalRendu still has some significant advantages over, say, an ultraRendu fed by a single opticalModule: the circuitry inside the opticalRendu has been improved significantly over the ultraRendu (it uses new parts that did not exist when the ultraRendu was designed). In addition, the opticalRendu has the reclocking taking place a couple of millimeters away from the processor, which cuts out the effects of a couple of connectors, transformers, and cable.
An opticalModule feeding an ultraRendu does significantly improve it, but not as much as an opticalRendu. So you can start with an opticalModule, then when you can afford it add an opticalRendu, also fed by the opticalModule and get a BIG improvement.
I hope this gives a little clarity to the situation.
Raspberry Pi as a music server?
2 hours ago, Giuanniello said:
Ok, I downloaded and install the LMS on my Mini which, by the way, runs a Core Duo 2 CPU which I upgraded from a Core Duo so it won't install any other OS higher than Snow Leopard but I am fine with it if it does what I wish it to do which is simply to act as a media streamer for music, movie wise I have them on the NAS and use Infuse to stream over the Apple TV4.
Now, once I have the server up and running, how can I control it off an iPhone to make things easy, I guess there is an iOS app but I can't manage to find it.
Congratulations! There are apps, but first try your iPhone web browser. Enter the IP address of your Mac and add port 9000 — for example, 184.108.40.206:9000 — and you'll open the original classic web interface. I suggest you install the Material Skin plug-in for an updated and mobile-friendly interface, after which you'd append /material/ to the address, for example 220.127.116.11:9000/material/
On the iPhone or on the Mac itself, click "Settings" at the bottom of the LMS page, and you'll find tabs for scanning your library, adding plug-ins, and more. To add the Material Skin plug-in, scroll down here for installation instructions: https://github.com/CDrummond/lms-material Go here for discussion and support: https://forums.slimdevices.com/showthread.php?109624-Announce-Material-Skin Other useful plug-ins include Music and Artist Information, What Was that Tune, Radio Paradise, and streaming services.
All of the above is free. On iOS the most popular app is not free but it is very good: iPeng. I was using an open-source app on Android, Squeezer, but have switched to the web-based Material Skin as it offers more features. There's lots of information on https://forums.slimdevices.com/ and there are many users here on CA as well.
Isolation & Reclocking
USB jitter - why does it matter?
1 hour ago, Sound Hound said:
I'm putting together an 8 channel system with DDX amps for experimenting with ambisonics and multiway active setups.
since my background is computers and I've only recently forayed into audio, I'm a bit mystified by jitter and precision clocking.
I get that galvanic isolation and a separate, clean power source are important to the audio bits beyond the computer.
but I don't understand why the USB connection between such needs to have more than a reliable/accurate transfer of data.
does the jitter transmit stray signals into the latter stages? if not, then the only place high precision clocks are warranted is in driving the DAC or DDX stage.
surely any USB implementation is sufficient with the data adequately buffered.
reclockers?! iPurifiers?! pah - audio voodoo!
I'm cynical but ready to be enlightened!
Hi Sound Hound,
I have been working on this for years. I'm getting close to a complete end to end measurement, but test equipment to properly measure this stuff doesn't exist, so I'm having to design and build my own as I go along. I can measure pieces of the chain now, with the rest hopefully coming soon. Part of the slowness was getting laid off, retiring, and moving to a new state. I now have a working lab again and am working on the next piece of test equipment.
The hypothesis goes thusly:
ALL crystal oscillators exhibit frequency change with power supply voltage change. This is known and well measured. A cyclical change in voltage causes a cyclical change in frequency which shows up in phase noise plots. For example if you apply a 100Hz signal to the power supply of the oscillator you will see a 100Hz spur in the phase noise plot.
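A back-of-envelope sketch of that example, using an assumed supply-pushing coefficient (the 0.1 ppm/V figure is mine, not from any datasheet): the ripple frequency-modulates the oscillator, and for a small modulation index the standard narrowband-FM approximation gives the spur level relative to the carrier.

```python
import math

f0 = 25e6                 # nominal oscillator frequency, Hz
pushing_ppm_per_v = 0.1   # assumed supply-pushing coefficient, ppm/V
ripple_v = 0.001          # 1 mV of ripple on the supply rail
f_m = 100                 # ripple frequency, Hz

delta_f = f0 * pushing_ppm_per_v * 1e-6 * ripple_v  # peak frequency deviation, Hz
beta = delta_f / f_m                                # FM modulation index (<< 1)
spur_dbc = 20 * math.log10(beta / 2)                # narrowband-FM spur level

print(f"peak deviation {delta_f * 1e3:.2f} mHz -> {f_m} Hz spur at {spur_dbc:.1f} dBc")
```

Even a millivolt of ripple on a mildly supply-sensitive oscillator produces a spur that a good phase noise plot will clearly show at the ripple frequency.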
A circuit that has a digital stream running through it will generate noise on the power and ground planes of the PCB just from the transistors turning on and off as they process that stream. This effect is very well known and measured. Combine this with the previous paragraph and you have jitter on the incoming data stream producing varying noise on the PG planes that modulates the clock, increasing its jitter.
The above has been measured.
But shouldn't ground plane isolation and reclockers fix this? At first glance you would think so, but look carefully at what is happening. What is a reclocker? A flip flop. The incoming data with a particular phase noise profile goes through transistors inside the flip flop. Those transistors switching create noise on its internal PG traces, wires in the package and traces on the board. This noise is directly related to the phase noise profile of the incoming data. This PG noise changes the thresholds of the transistors that are clocking the data out thus overlaying the phase noise profile of the local clock with that of the clock used to generate the stream that is being reclocked. This process is hard to see, so I am working on a test setup that generates a "marker" in the phase noise of the incoming clock so it becomes easy to see this phase noise overlaying process.
This process has always been there but has been masked by the phase noise of the local clock itself. Now that we are using much lower phase noise local clocks, this overlaying is a significantly larger percentage of the total phase noise from the local clock.
Digital isolators used in ground plane isolation schemes don't help this. Jitter on the input to the isolator still shows up on the output, with added jitter from the isolator. This combination of the original phase noise and that added by the isolator is what goes into the reclocking flip flop, increasing the jitter in the local clock. Some great strides have been made in the digital isolator space, significantly decreasing the added phase noise, which overall helps; but now the phase noise from the input is a larger percentage, so changes to it are more obvious.
The result is that even digital isolators and reclocking don't completely block the phase noise contribution of the incoming data stream. It can help, but it doesn't get rid of it.
For USB (and Ethernet) it gets more complicated since the data is not a continuous stream; it comes in packets, so this PG noise comes in bursts. This makes analysis in real systems much more difficult since most of the time the noise is not there. Thus any effects on an audio stream come and go, and just looking at a scope is not going to show anything, since any distortion caused by this only happens when the data over the bus actually arrives. Looking at anything with a scope will take synchronizing to the packet arrivals. Things like FFTs get problematic as well since what you are trying to measure is not constant. It will probably take something like wavelet analysis to see what is really happening.
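A quick illustration (mine, with arbitrary numbers) of why a plain FFT underplays burst effects: gate a 1 kHz tone so it is present only 10% of the time, as if it only existed during packet arrivals, and its FFT peak drops by roughly 20 dB while the energy smears into gating sidebands.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                   # one second of samples
tone = np.sin(2 * np.pi * 1000 * t)      # a 1 kHz disturbance

gate = (t % 0.01) < 0.001                # "packet" bursts: on 10% of the time
burst = tone * gate

def peak_db(x):
    """Peak FFT-bin magnitude in dB (relative scale)."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return 20 * np.log10(spec.max())

print(f"continuous tone peak: {peak_db(tone):.1f} dB")
print(f"bursty tone peak:     {peak_db(burst):.1f} dB")  # lower, energy smeared
```

The disturbance is just as strong while it lasts, but averaged over the whole record it nearly disappears, which is why time-localized tools like wavelets, or triggering on packet arrivals, are needed.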
The next step in my ongoing saga is to actually measure these effects on a DAC output. Again I have to build my own test equipment. The primary tool is going to be an ADC with a clock with lower phase noise than the changes which occur from the above. AND it needs to be 24 bits or so resolution. You just can't go out and buy these, they don't exist. So I build it myself.
I have done the design and have the boards and parts, but haven't had time to get them assembled yet. Then there is a ton of software to make this all work. Fortunately a large part already exists, designed to work with other systems but I can re-purpose it for this.
So it's not going to be right away, but hopefully not too off in the future I should be able to get to actually testing the end to end path of clock interactions all the way to DAC output.
Crespi on SMPSs
astron lpsu experience?
The Mytek Brooklyn DACs measure well—with their internal SMPS (another Mean Well model, similar to what Mutec uses in the MC3+ we were discussing). Yet Mytek acknowledges better performance when an external LPS is used, even going so far as to link to a couple on their web page.
We build about 250 per year of our big, choke-filtered, dual-output, 5-7.2A JS-2 linear supplies, and in 2017 about 50 of those went to Brooklyn DAC owners—who were, to a one, quite thrilled with the sonic result.
I even gave you a link to one of the Mutec engineers saying that for their new product they focused even more on the power supply and ditched the SMPS.
If you do not think power supply design matters to product performance that’s your choice. But if you acknowledge that it does matter, then you need to think of it all the way to the wall. If a preamp or DAC has its own complete AC>DC PS built in that’s fine—and hopefully the designers took care not just with the LDO regs for their various circuits, but also with the AC>DC rectification side to produce quality, low-noise, low-impedance, regulated gross DC for their network of lower voltage regs—while not also infusing them with common-mode AC leakage.
Unfortunately, the all too popular use of off-the-shelf, caged SMPS modules inside some gear (even the $10K Merging NADAC), is a hindrance to allowing otherwise very fine products to achieve the best SQ they are capable of.
So then why do manufacturers use SMPS bricks—either internally or externally? Two main reasons:
1) Cost. A SMPS wall-wart, brick, or caged module costs (in quantity) between $3 to $12. That’s a LOT less than the cost to design and build in even the most basic trans>diodes>caps>regulators LPS.
2) Certifications. The (typically Chinese) SMPS units already come with certificates and marks from sometimes as many as a dozen world certifying bodies. So the audio component manufacturer does not have to worry about the hassle and expense of getting their AC-attached product's PS certified.
You keep referring to “well designed hi-fi components.” Yet that is a spectrum, and is almost as vague as “well prepared food.” Yes, the food—or the hi-fi component may be “well done,” but that does not mean that it will taste or sound the best that it could be.
Jitter: The Digital Devil
Indeed. But esldude has a point. I have also noticed that John S is happy to discuss the mechanics of jitter but generally steers clear of discussing its audibility. A wise man.
I can hear effects of jitter, but I have not been able to quantify exactly what aspect of the jitter correlates to what I hear. I know this is just "anecdotal evidence" that would not convince anybody since it was not garnered from a 5 year study with 10,000 people etc.
For me as a person who has been working on DAC designs for many years I'm not willing to spend the time and money to run every possible change through a 5 year study. It's just too slow as part of the design process.
My usual approach is to come up with a hypothesis and come up with a way to test it, design a circuit that I think might make a change, build a small circuit that I hope is just testing that one hypothesis (not always true) and listen to it and do a number of measurements. Most of the time these tests make no difference, or make it worse, but occasionally they make it sound better. If it does sound better I will send a copy of the circuit to a friend or two to see what they hear. If everybody in the test set agrees I then run a ton of measurements trying to come up with some correlation to what is being heard. I will usually go through 5 or 6 trials per year. Every year to year and a half I will then make several copies of a complete new DAC using the best of the old and the few new mechanisms which seems to make things better. These go out to a number of people for extended listening tests to see how successful I was.
After 10 years of doing this I'm getting some pretty good DACs and learned a LOT along the way.
Through all this I have found that decreasing jitter frequently does make significant improvements in sound. But strangely enough, not always. There have been a few cases where decreasing jitter did nothing or made things worse. In one of those cases I did determine that the mechanism used for decreasing the jitter was increasing distortion through a mechanism I hadn't originally thought of.
I still remember one of my first jitter experiments many years ago. I had given up on trying to get S/PDIF to work really well, adaptive USB was just not good enough, and there was no way to do async at the time without a HUGE expense of time. I found that the Squeezeboxes were perfect for what I wanted to experiment on: local fixed oscillators which directly controlled the rate coming out of the buffer, a perfect architecture. So I bought a couple SBs and listened to them for quite some time as-is to get a baseline. Then I hacked into the I2S lines, sent them out to a board with Tent clocks on it and an FPGA to convert I2S to a pair of 1704s, reclocked that data with the Tent clocks, and sent the clocks back into the SB instead of its local oscillators. It sounded way better than what was coming out of the SB analog outs.
I wasn't using any good method for getting the clocks to the SB, just a simple ribbon cable, I didn't care if it picked up all kinds of jitter on the way since it was just for syncing the SB. After listening to this for a while I decided to listen to the analog outs from the SB. I fully expected it to sound horrible since I was sure the jitter being fed to it over the ribbon cable was worse than what was originally there. When I listened I was astonished to find it sounded quite a bit better than the "factory" configuration.
Later I tested the jitter on the clock in the SB and even with the not great signal integrity coming over that cable, it was still quite a bit lower than the factory configuration, and that really did make a significant difference. It was then and there that I realized that there really was something to this jitter stuff and I should look into it more.
Signature Series Rendu SPDIF/i2s - Discussion and Experiences5 hours ago, Cooler said:
Yes, they are USB to i2s, and there is no ethernet to i2s converter on the market or i didnt hear about that device. Now if you want get out all from the modern dac (with i2s input) you need ethernet to usb device and usb to i2s device, thats not only really expensive, but also to many conversions and connections, 2 PSU, additional cables etc. Why not to create new microRendu ethernet to i2s device, small and simple, it could be a new product and with global trend it should be very popular, especially if you would not leave $1000 price range.
I understand, that you have clearer picture, what people want and how to do your business, but just check the recent dacs market and the potential to sell that new device.
ps. i would definitely buy ethernet to i2s mR (with Roon Ready and HQ NAA of course)
There are some major technical issues with this. The processor in the micro/ultraRendu was chosen because it lets the USB subsystem be powered and clocked separately from the rest of the processor. That particular processor can only do I2S up to 192, and CANNOT do DSD over the I2S wires (other than DoP). Note that the I2S spec has absolutely nothing to do with DSD. There are a few DACs and DDCs that multiplex the DSD signals onto the same wires used for I2S, but it is NOT I2S.
If you are willing to live with the 192 maximum and no DSD, then it COULD output I2S, but then you would degrade the performance of the USB output. If you optimize USB, you get less than great I2S; if you optimize I2S, you get less than great USB (or you can get less than great both).
There are some ways to do both USB and I2S and do both very well, but they are neither simple nor cheap. There would be a long development time and the end result would be expensive. Even just an Ethernet to VERY good I2S converter is not easy at all. None of the processors have an I2S block that will do it; it will take something like a processor interfaced to an FPGA to do it properly, with a whole bunch of other stuff in there to keep the processor from messing up the I2S timing. Again, complex and expensive.
The reason you see it in USB input DDCs is that XMOS has USB audio code that does I2S and DSD over I2S wires. That makes it easy to do. There is nothing equivalent for something that can do Ethernet and support the various Ethernet audio protocols.
At this point the best way to do this is Ethernet to USB, to DDC to I2S. Anything else "simpler" is going to take a long time to come to fruition and cost a lot.
Superclocks
On 4/25/2017 at 4:11 PM, Hammer said:
Are these clocks different than a rubidium clock from say Stanford Research Systems? I picked one up on the cheap off eBay and had been meaning to purchase a DAC such as a Mytek which accepts clock input to play around, but have not had the time. Has anyone tried this with good result? Thanks, hammer
Rubidium clocks are usually very bad to use for audio. They have very good long term stability, but high phase noise. The long term stability has nothing to do with audio but the close in phase noise is what is important. So a rubidium is exactly the wrong oscillator to use.
Another problem is that the rubidium is probably NOT going to output a frequency that can be used directly by audio circuitry, so some form of frequency synthesizer is going to have to be used, and these ALWAYS increase the phase noise.
A rubidium is great for an actual clock (you can read the time) that you want to be accurate to the microsecond over years of run time, but not so good for audio.
ISO REGEN launch thread! (product web page up; photos, etc.)
16 hours ago, rickca said:
Alex, you said that when you and John evaluated the Crystek CCHD-575, you quickly decided it was well worth using in the ISO REGEN.
Have you experimented with more expensive clocks? I'm trying to understand whether there's a point of diminishing returns even if you had no cost constraints.
No, we have not tried better clocks than the 575, the next step up in lower phase noise needs an OCXO. Note that inexpensive OCXOs do NOT have lower phase noise than the 575, you have to go to very expensive OCXOs to better it. And it is not just the cost of the OCXO itself. An OCXO takes a lot of power to run the oven and other circuitry, this will also add cost to the system. With the right OCXO we can probably still use the LPS-1 to power the board, but then you would have a very hard time powering anything else from the same one.
Both Alex and I are very much interested in producing items that are very high performance but still low enough in cost that a fairly large number of people can afford them. This is one of those cases where my gut feeling here is that spending the money on a better clock will give better results in a DAC rather than in an upstream device.
CLOCKS, what should we look for in next generation
I've been thinking about writing a primer on crystal oscillators and digital audio and this looks like the perfect place to put it. I promise I will leave out all the complex math that most articles are filled with. I'm NOT going to go into how it all works, since most people don't care, just what makes them different and how that matters for audio.
A crystal oscillator is a combination of a special piece of quartz crystal and an electronic circuit; the combination produces a periodic signal at a specific frequency. Several things can change this frequency:
Thickness of the quartz piece: this is the primary determining factor in the frequency.
Temperature of the crystal: the sensitivity to temperature is called the temperature coefficient (TEMPCO for short), the change in frequency for a small change in temperature. It is not constant but changes with temperature; this is the TEMPCO curve. All TEMPCO curves have a temperature where the TEMPCO is zero, called the inflection point. If you run the crystal at this temperature, small changes in temperature produce no change in frequency; THIS is where you want to run a crystal oscillator. Far from this point, a small change in temperature makes a big change in frequency, and you do not want to be there.
Capacitance across the crystal: all crystal oscillators need some capacitance across the crystal to work, and changing that capacitance changes the frequency.
Power flowing through the crystal: the oscillator circuit works by running power (in the form of an AC signal) through the crystal, and changing the power changes the frequency.
TEMPCO is THE most important characteristic besides the thickness, so a lot of crystal oscillator design has to do with this.
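The AT-cut TEMPCO curve is commonly modeled as roughly cubic about the inflection temperature, which makes the inflection-point behavior easy to see numerically. This is an idealized sketch with an assumed coefficient, not data for any real crystal:

```python
A_CUBIC = 1e-4    # ppm/degC^3: assumed cubic coefficient for an idealized AT cut
T_INFLECT = 28.0  # assumed inflection temperature, degC

def freq_dev_ppm(temp_c):
    """Frequency deviation from nominal (ppm), idealized cubic AT-cut curve."""
    return A_CUBIC * (temp_c - T_INFLECT) ** 3

def tempco_ppm_per_c(temp_c):
    """Slope of the curve: the effective TEMPCO at a given temperature."""
    return 3 * A_CUBIC * (temp_c - T_INFLECT) ** 2

for t in (28.0, 35.0, 70.0):
    print(f"{t:5.1f} C: TEMPCO = {tempco_ppm_per_c(t):.4f} ppm/C")
# At the inflection point a small temperature wiggle barely moves the
# frequency; far away from it the same wiggle moves it much more.
```

The slope is exactly zero at the inflection temperature and grows as the square of the distance from it, which is the whole argument for operating a crystal at that point.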
Now on to "cut", this is how a slice of crystal is cut out of a block of quartz. This is all very complicated so I won't go into the details, just to say there are many ways to do this and the exact cut determines the properties of the oscillator.
The most common cut (BY FAR) is called the AT cut. Almost all the oscillators in your electronics devices use the AT cut. The primary reason for this is that the inflection point of its TEMPCO curve is at 25-35C, right around "normal" room temperature, especially in a box where the electronics warm it up slightly. With this cut you usually do not need to apply any temperature stabilization since it is at a point where a change in temperature makes a very small change in frequency.
The other cut we need to talk about is the SC cut, used in OCXOs (more on those later). This cut has a much higher Q than the AT cut, which means much lower phase noise, BUT in order to get that, the inflection point of the TEMPCO curve is at 95C. THIS is why an oven is needed: not so much to stabilize the temperature but to get the crystal to the inflection point, where a change in temperature makes an extremely small change in frequency. The slope of the TEMPCO curve around the inflection point is much shallower than the AT cut's, so a given change in temperature makes a much smaller change in frequency IF the crystal is at 95C; outside of that range it is worse than an AT. So you ONLY want to use an SC cut in an oven.
So what aspect of this is really important for digital audio? Most oscillator spec sheets spend a lot of time talking about their long term stability. It turns out crystals change frequency over time (called aging). Some applications need low aging, digital audio does not. A 1 part per million change in frequency over years is completely irrelevant. Another spec that is important for some applications is the TEMPCO, how much the frequency is going to change as the heater turns on and off. Again, irrelevant to digital audio. What DOES matter is phase noise. I'm not going to go into much detail on this, but that is what matters. It is not a single number but a graph; you have to see the graph to really get an idea of what it is.
The manufacturers are starting to realize this and are now making some fairly inexpensive AT cut crystals with extremely low phase noise. They don't have great aging or great TEMPCO but they DO have great phase noise.
There are three common crystal oscillator configurations you will come across in digital audio:
XO - basic simple crystal oscillator, always uses an AT cut crystal, susceptible to ambient temperature (remember that 25-35C inflection point), drifts a fair amount over the years, and has a huge range of phase noise from one model to the next. Anywhere from $0.35 to $25.
TCXO - Temperature Compensated Crystal Oscillator. A standard AT crystal with a temperature sensor that feeds a voltage variable capacitor across the crystal. In order to have a large enough "pull range" to handle large changes in temperature, the crystal is modified so the frequency changes a lot with a given capacitance change. Unfortunately this radically increases the phase noise of the crystal. Thus TCXOs are about the worst clock you can use for digital audio. You get much better temperature stability, which you don't care about, in exchange for much worse phase noise, which you DO care about. A very bad trade off. So if you see a digital audio device with a TCXO, stay away.
OCXO - Oven Controlled Crystal Oscillator. The oscillator sits in an oven that brings its temperature to 95C. Most writing you find on the net will say this is to stabilize the temperature, but the real reason is to bring an SC cut crystal up to 95C where its built in TEMPCO is zero. This gives extremely low frequency change with temperature, but the SC also has MUCH lower aging than the AT AND much lower phase noise than the AT. Thus the OCXO is great both for systems that require extremely low drift and for systems that require extremely low phase noise.
The problem is that OCXOs are not cheap, $100 and up (WAY UP). The cheapest OCXOs have about the same phase noise as the best AT cut XOs, for about 4 times the price. So for digital audio at least, a low end OCXO is not particularly useful. You have to get into the $300 range to get OCXOs with significantly lower phase noise. As you go up from there you can get WAY better phase noise, but you really have to pay for it. So when looking at OCXO specs, all you need to look at is the phase noise; all the stuff in PPB etc is irrelevant. Don't waste money on getting the best in those specs. If a manufacturer just shows the PPB numbers and doesn't give phase noise, stay away.
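If you want to see why the graph matters more than the PPB numbers, here's a rough Python sketch that turns a phase noise curve into a single RMS jitter number by integrating it. The curve points are invented (roughly the shape of a decent XO's plot), and the simple trapezoid integration is only an approximation; real parts publish their own curves:

```python
import math

# Hedged sketch: turning a phase noise graph into one RMS jitter number.
# The L(f) points below are invented, and the straight trapezoid
# integration between offsets is only a rough approximation.

def rms_jitter_sec(f0_hz, points):
    """points: list of (offset_hz, dBc_per_Hz), sorted by offset.
    Integrates the single-sideband phase noise, then converts the
    integrated phase error into seconds of RMS jitter."""
    total = 0.0
    for (f1, l1), (f2, l2) in zip(points, points[1:]):
        p1 = 10 ** (l1 / 10)   # dBc/Hz -> linear power ratio per Hz
        p2 = 10 ** (l2 / 10)
        total += 0.5 * (p1 + p2) * (f2 - f1)   # trapezoid between offsets
    phase_rad = math.sqrt(2 * total)           # radians RMS
    return phase_rad / (2 * math.pi * f0_hz)   # seconds RMS

curve = [(10, -90), (100, -120), (1_000, -140),
         (10_000, -155), (100_000, -160), (1_000_000, -160)]
jitter = rms_jitter_sec(22_579_200, curve)
print(jitter)  # a few picoseconds for this invented curve
```

Two oscillators with identical PPB stability specs can differ by orders of magnitude in this integral, which is why the graph, not the stability numbers, tells you what you need to know.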
Another thing that has been talked about is "atomic clocks". The "inexpensive" ones (less than $10k) are rubidium. These have EXTREMELY low long term drift, but very bad phase noise. There is NO reason to get one of these for digital audio. Sometimes a rubidium oscillator is paired with an OCXO, the rubidium "disciplines" the OCXO, this gives the best of both worlds, but if you spent the same amount of money on just the OCXO you could get much lower phase noise which is what matters.
In the next installment I'll go into frequency synthesizers and how recent changes are changing the landscape of clocks for digital audio.
Isolation & Reclocking
Are all Asynchronous USB chips/implementations created equal?
Thanks for your explanations.
Would an opto-isolator between USB receiver chip and a separate, cleanly powered area with the master clock plus DAC chip, not prevent any noise/jitter from passing on from the computer? If the opto-isolator is powered and grounded through the USB connection will it matter? This certainly would make it easier and cheaper to connect the computer to the DAC.
As Alex mentioned I hate opto-isolators, there are much better isolators out there, I use the GMRs exclusively. Unfortunately they are pretty expensive.
Even using isolators does not completely block jitter. I'll try and get this across without pictures. A signal goes into the USB side of the isolator; current flows from the driver, through the input side of the isolator (whatever it is), and back through the plane to the driver chip. That signal passes through the isolator somehow (light, magnetic field, radio waves, whatever) (yep, one of the isolator technologies actually sends radio waves between the sides) and causes the receiver side to do something, which changes the signal on its output. That output then drives the DAC chip or reclocking flop, which then sends the current back to the isolator output on the groundplane. Current ALWAYS flows in loops, thus the signal going to the DAC chip creates noise on the DAC side groundplane.
Thus any jitter on the signal crossing the isolation barrier is added to the inherent jitter of the isolator, and that shows up as noise on the DAC side ground plane, even with the isolator! What the isolator does is prevent OTHER ground plane noise (GPN), such as that produced by the USB receiver itself, from getting into the DAC groundplane. It's definitely worth it, but you still have to deal with the jitter on the I2S signals themselves which cross the barrier.
Because the I2S signals are fairly jittery after the isolators, you usually should reclock them before sending them to the DAC chip. Why do you need to do this? Isn't the jitter on the clock the only signal that matters? Because GPN also happens INSIDE the DAC chip. Jittery input signals generate noise on the ground traces in the DAC chip, which change how the clock signal is received. You can have an extremely low jitter clock going into the DAC chip, but if the I2S signals are very jittery, the GPN inside the chip will cause that ultra low jitter clock to look MUCH worse.
So you still have to look at the jitter on the I2S signals, even with a perfect clock.
There are ways to cut down on these issues by careful board layout, but you have to include these as part of the overall design from day 1 to make sure they will be effective. But even with this, some influence still gets through.
Some info on I2S and especially I2S between boxes.
I2S is very simple, no packets or overhead, one wire with serial data, alternating between left and right channels on the same wire, an LRCLK signal that says when it is right data and when it is left data and a bit clock, that specifies when to read the serial data line. In addition sometimes a "master clock" is sent along as well, this is an integer multiple of the bit clock.
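The clock relationships on an I2S link are simple integer multiples. A quick sketch (32-bit channel slots and a 256x master clock are just common choices, not universal; real designs vary):

```python
# Hedged sketch: the clock relationships on a plain stereo I2S link.
# 32-bit channel slots and a 256x master clock are common choices,
# not universal; real designs vary.

def i2s_clocks(sample_rate_hz, bits_per_slot=32, mclk_multiple=256):
    lrclk = sample_rate_hz                     # flips once per L/R sample pair
    bclk = sample_rate_hz * 2 * bits_per_slot  # two channel slots per frame
    mclk = sample_rate_hz * mclk_multiple      # optional master clock
    return lrclk, bclk, mclk

print(i2s_clocks(44_100))  # (44100, 2822400, 11289600)
print(i2s_clocks(96_000))  # (96000, 6144000, 24576000)
```

Note there is no framing or protocol beyond this: the three (or four) clocks ARE the interface, which is exactly why the timing of those wires goes straight into the DAC chip.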
The timing of the signal is on these wires. If the DAC chip is connected directly to the wires, there is no other timing, so whatever is generating the I2S signal IS determining the timing and jitter going into the DAC chip. Thus if the I2S signal is coming from another box/board, THAT is now directly determining the jitter going into the DAC chip. How the I2S signal gets between the boxes has a lot to do with how good that timing is really going to be.
It IS possible to use a local clock that will reclock the incoming I2S signals, but in order for this to work that local clock has to be sent to the source of the I2S signals so it can synchronize the signals to the local clock in the DAC. This requires a DAC that sends the local clock OUT and a source that synchronizes itself to the local clock. These are few and far between, and the few that do exist do not always use the same clock signal or pins on the interface cable.
So let's take the more common case of an I2S source and DAC that do not synchronize to each other: the clock in the SOURCE is in charge. If the source has a REALLY good clock and the circuitry used to drive the link between boxes is REALLY low jitter, then this configuration MAY sound better than another interface using a local clock in the DAC. If the DAC does not have a particularly good clock AND the I2S source component DOES have a really good clock, then the I2S connection may sound significantly better. If the I2S source does NOT have a really good clock then the I2S connection is probably not going to be much better and may be worse.
Another aspect to this is that all of the box to box I2S implementations out there do NOT block leakage loops between the source power supply and the DAC power supply. IF a setup is using the approach where a clock is fed back from the DAC to the source it is possible to implement isolation on the I2S signals, but I don't know of anyone that has actually done that.
Now on to the details of different I2S implementations. Most I2S signaling is done as CMOS level digital signals on PC boards between chips. This type of signal is only good for a few inches on a PC board. ANYTHING else, especially between boxes needs something different.
The early implementations used a single ended line driver chip to drive 75 ohm coax; the most popular implementation used a DIN connector, the same one used by the S-Video standard. Several companies used this.
Recently most implementations have shifted to a differential method (LVDS - Low Voltage Differential Signaling) sent over HDMI cables, primarily because they already exist and the cables have just the right number of wires to make this work. This LVDS over HDMI gives significantly better signal integrity than the earlier single ended implementation over S-Video connectors. Unfortunately not all CMOS <-> LVDS converters are really low jitter, so there can be higher jitter levels in the DAC than there should be.
With a well done LVDS interface on both the source and DAC the jitter in the DAC is going to be primarily determined by the source, if it does a good job, the jitter at the DAC chip will be good, if it does NOT do such a good job, then the jitter at the DAC chip will be higher.
So as with just about anything else in audio there are no absolutes here, it depends on the implementation in the DAC and in the I2S source component.
Digital Data Transmission Protocols
HDMI=ISSUES, USB=ISSUES, TOSLINK=ISSUES, WHAT ABOUT DLNA or Network?
DLNA etc are complex protocols riding on top of Ethernet, which in itself is a fairly complex protocol. This is going the wrong direction. The more complex the protocol the more work has to be done at the DAC which means more noise generated in the DAC to deal with those protocols.
I can come up with three requirements:
1) master clock is in DAC, right next to DAC chip(s).
2) protocol is very simple, preferably not bursty packet based.
3) full galvanic isolation
S/PDIF coax, I2S, HDMI don't meet #1 or #3
S/PDIF optical meets #2 and #3 but not #1
USB async meets #1 but not #2 or #3
Ethernet solutions meet #1 and #3 but not #2
So none of the common interfaces in use today meet all three.
So how do you get something that meets all three?
The easiest way is to use optical and do two fibers, one going each direction, one sending the data from the computer to the DAC and one going from the DAC to the computer carrying the clock. If you do this right it works very well.
Isolation & Reclocking
USB Isolator advice needed
What about Chord's upcoming 2Qute and Hugo TT which offer "galvanically isolated" USB 2.0 port? Did they come up with their own design, or is it marketing fluff?
You don't have to isolate BEFORE the USB receiver, you can isolate AFTER the USB receiver. The USB receiver chip is directly connected to the USB bus, but all the signals coming out of and into the receiver go through isolators. As long as you either have two power supplies or power the receiver (and input side of the isolators) from VBUS you have full galvanic isolation. Of course this has to be built in to the DAC, it's not a separate box that you can add to any USB DAC out there.
This is quite easy to do, but you have to be careful of the implementation. ALL digital isolators add a lot of jitter to the signal, so you need to reclock the signals after the isolators. This means the low jitter master clock has to be on the DAC chip side, and that same clock gets fed back through an isolator into the "dirty" side and on to the USB receiver.
It's surprising how many designs get this wrong, they don't reclock or they put the master clock on the dirty side, both of which add a lot of jitter to the clock.
Uptone Audio Regen
I don't recall John or Alex ever focusing on the REGEN as a USB reclocker. I think that's a mischaracterization of their technical explanations of its effectiveness.
As with many things it is not a simple yes or no. The term "reclocker" has several different meanings and some of them have a lot of baggage associated with them by audiophiles.
The basic engineering definition of reclocking is running a digital signal through a flip-flop. This will constrain the edges of the input signal to only change at the active edge (negative or positive) of the clock fed to the flip-flop. There are a couple of reasons to do this: one is to reduce jitter in the input signal (as long as the FF clock has lower jitter than the input signal), another is to synchronize the timing of the input signal to a different "domain".
Reclocking can be either synchronous or asynchronous. In synchronous reclocking the clock fed to the FF is exactly the same frequency as the clock used to generate the input stream (or an integer multiple). This can be because the two clocks were derived from the same clock, OR one of the clocks used a PLL to synchronize it to the other.
If the clocks are NOT the same this is called asynchronous reclocking, this can cause weird things to happen and can even result in bits getting lost. In some circumstances it actually does work, but you have to be very careful about it. Note this is completely different than asynchronous sample rate conversion. Don't get these confused. ASRC generates new bits to deal with the difference in clocks, asynchronous reclocking uses the same bits, but at different times.
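A toy model may help here. This sketch treats reclocking as "snap each input edge to the nearest edge of the local clock" (a real flip-flop uses the next active edge, but "nearest" keeps the toy simple). All the numbers are illustrative:

```python
import random

# Hedged toy model of reclocking: each input edge is moved to the
# nearest edge of the local clock. A real flip-flop uses the *next*
# active edge; "nearest" keeps the toy simple. Numbers are illustrative.

def reclock(edge_times, local_period):
    return [round(t / local_period) * local_period for t in edge_times]

rng = random.Random(1)
period = 50e-9                                      # nominal 20 Mb/s stream
ideal = [i * period for i in range(1000)]
jittery = [t + rng.gauss(0, 1e-9) for t in ideal]   # 1 ns RMS input jitter

# Synchronous reclocking: local clock at exactly the same frequency.
# The edges land back on the clean grid -- input jitter is gone (in this
# toy; a real flop adds its own clock's jitter and PG-noise jitter back).
sync_out = reclock(jittery, period)

# Asynchronous reclocking: local clock slightly off-frequency. Edges
# still snap to a clean grid, but neighbouring edges occasionally pile
# onto the same local edge -- this is how bits can get lost.
async_out = reclock(jittery, 50.7e-9)
collisions = len(async_out) - len(set(async_out))
print(collisions)  # more than zero: some edges collided
```

The synchronous case wipes out the input jitter entirely in this idealized toy; the asynchronous case also produces clean edges but can merge neighbouring ones, which is exactly the "bits getting lost" hazard.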
For S/PDIF there have been boxes called "reclockers" for quite some time. Most of these employ some sort of PLL to generate a local clock that is synchronized to the timing of the incoming data. The incoming stream is then run through a FF clocked by this local clock. IF the clock coming from the PLL has lower jitter, the result will be a lower jitter S/PDIF signal. Note that this is a very simple operation; the reclocker does not have to know anything about the S/PDIF protocol, it just moves the edges to line up with the local clock. The primary reason for these boxes was to decrease jitter, but they also had the side benefit that they could clean up the edges and, if the designer did it right, provide a signal that more closely matched the impedance spec.
Now on to USB.
First off, you cannot use the simple reclocker model above with USB. It is a bidirectional bus; data goes both ways over the same wires. In order to do simple reclocking you need to know which way the data is going at any given time, and this is not easy with USB. There is no separate wire that says which way the bus is going. The ONLY way to do it is to actually decode all the bus transactions in order to figure out which way the data is going. This takes a full blown USB protocol engine.
The easiest way to do this is with a USB hub chip; it has a built in protocol engine and data buffer. A packet comes in, the data goes into the buffer, then it builds a new packet with the same data and sends it out the other end. The transmission is done with a local clock, so in a sense it IS being reclocked, but it is not the simple reclocking that is done in an S/PDIF "reclocker". An interesting aspect of this is that it is asynchronous reclocking: it uses a local clock that is not synchronized in any way to the computer clock, but this doesn't cause a problem because the data comes (and goes) in packets. If the local clock is a little slower than the computer clock the outgoing packets will take a little longer to transmit, but this doesn't cause a problem because there is a lot of dead time in between packets. What matters is the average rate of data, and the local clock doesn't change this even if the bits speed up or slow down a little.
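Here's the arithmetic behind that, as a hedged sketch. The payload size is made up; the 125 us microframe and 480 Mb/s signaling rate are from the USB 2.0 high speed spec:

```python
# Hedged toy model of a hub's store-and-forward path: packets arrive on
# the computer's clock, get buffered, and go back out on the hub's own
# clock. The payload size is invented; 125 us microframes and 480 Mb/s
# signaling are from the USB 2.0 high speed spec.

MICROFRAME_S = 125e-6        # spacing between packet bursts
PACKET_BITS = 8_000          # invented payload per microframe

def transmit_time_s(bits, clock_hz):
    return bits / clock_hz

nominal = transmit_time_s(PACKET_BITS, 480e6)        # on-spec bit clock
slow = transmit_time_s(PACKET_BITS, 480e6 * 0.9999)  # hub clock 100 ppm slow

# The slow hub takes slightly longer per packet, but both finish far
# inside the 125 us microframe, so the dead time absorbs the difference.
# The average data rate is set by packet arrival, not by the hub's clock.
print(nominal, slow, MICROFRAME_S)
```

This is why the frequency offset between the computer clock and the hub's local clock simply doesn't matter: the dead time between packets soaks it up.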
So yes the REGEN is a reclocker, but not the same thing as used in the S/PDIF reclocker boxes. The primary purpose for the REGEN was not the fact that it reclocks but that it builds a new wave form with better signal integrity. The reclocking comes along for free.
Uptone Audio Regen
What is packet noise?
This is a term I coined to refer to the power and ground noise a receiver generates when reading a packet of data. For example in USB (and Ethernet) data comes in packets with a fair amount of space between packets. The receiver chip doesn't do much of anything in-between packets, so it doesn't generate much noise on the supply planes. But when a packet comes in, the receiver goes into high gear processing the data that just came in. This processing generates a lot of highly variable current draw from the board, which generates a fair amount of noise on the supply planes.
This noise comes in bursts, which is the packet frequency. For example USB high speed has packets at 8KHz, which is in the human hearing range. This noise can modulate processes in the DAC (such as the main clock oscillator and the DAC conversion to analog) producing subtle distortions which are in the audio range.
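The arithmetic is trivial but worth seeing: a current burst every 125 us microframe repeats at 8 kHz, so the burst train has spectral lines at every multiple of 8 kHz, and the first two land inside the audio band:

```python
# Hedged arithmetic: a burst of receiver current every USB high-speed
# microframe (125 us) repeats at 8 kHz, so the burst train has spectral
# lines at every multiple of 8 kHz. The first two sit inside the
# nominal 20 Hz - 20 kHz audio band.

PACKET_RATE_HZ = 8_000       # 1 / 125e-6 s
AUDIO_BAND_TOP_HZ = 20_000

audible = [n * PACKET_RATE_HZ for n in range(1, 100)
           if n * PACKET_RATE_HZ <= AUDIO_BAND_TOP_HZ]
print(audible)  # [8000, 16000]
```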
Isolation & Reclocking
Uptone Audio Regen
Interesting explanation.
In the presence of a galvanically isolated USB interface like JLSound where clocks are on the isolated side and there are two separate ps (one for xmos chip and one for oscillators), can that "fair amount of noise on the supply planes" still be relevant? If yes, how?
The two separate PS share ground.
If we measure noise on both PS, what kind of measured value should we aim for in order to leave DAC unaffected?
Very good question. The isolation helps but is not nearly the panacea many people think. Let's travel through the system and look at both the power and signal and what happens to them as we go.
So let's start with a USB receiver with bursts of high frequency noise on both the power and ground planes. This PG (power/ground) noise will modulate the data being sent to the isolators. It will slightly increase jitter, and the amplitude of the pulses will vary with the noise.
This noisy power also goes into the driver side of the isolator. The signal going across the barrier (light, EM waves, magnetic field etc) gets modulated by this PG noise as well. The PG noise also changes the threshold of the input receivers, adding jitter to the signal.
On the other side of the barrier we have a couple of things happening. The varying signal level, caused by the PG noise in the driver, also causes the receiver current to change, even with no signal applied. Thus the receiver causes PG noise on the "clean side" directly related to the PG noise on the "dirty side". It is definitely attenuated, but not by nearly as much as most people expect. Then we also have traditional logic noise caused by the fact the output is a normal logic signal; every time the output changes it creates noise on the PG planes on the clean side. The jitter on the signal created by the PG noise on the dirty side is still there, PLUS jitter introduced by the isolation scheme. This jitter changes the spectrum of the logic noise on the PG planes on the clean side.
So then we feed the signal through a reclocking flop, which is supposed to get rid of all that jitter on the input. Well it helps, but no reclocking flop is completely effective. The PG noise at the flop still causes jitter to show up on its output: PG noise changes the threshold where the flop detects the "switch" of the clock, thus increasing jitter on the output.
The result of this chain is that PG noise on the "dirty side" can still make it through to the "clean" side. It IS attenuated, but not completely gone.
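A back-of-envelope way to see the threshold mechanism: an edge with a finite slew rate crosses a shifted threshold earlier or later by dt = dV / slew rate. The numbers below are invented but plausible for 3.3V CMOS logic:

```python
# Hedged back-of-envelope for "PG noise changes the threshold, which
# becomes jitter": an edge with a finite slew rate crosses a shifted
# threshold earlier or later by dt = dV / slew. Values are invented
# but plausible for 3.3 V CMOS logic.

def threshold_jitter_s(noise_v, rise_time_s, swing_v=3.3):
    slew_v_per_s = swing_v / rise_time_s   # slope of the edge
    return noise_v / slew_v_per_s

# 10 mV of supply/ground noise on a 1 ns edge:
print(threshold_jitter_s(0.010, 1e-9))  # ~3e-12, i.e. roughly 3 ps
```

So even a few millivolts of PG noise turns directly into picoseconds of timing error, which is large compared to what a really good clock can do on its own.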
Cascading such stages can theoretically help, but in order for that to work the reclocking clock has to get fed back through the isolators which significantly degrades the clock so it turns out cascading doesn't help much. (a two stage cascade does make things better, but not by a huge amount)
On the issue of PS supply noise THAT is a whole story in itself which needs to get tackled separately.
USB PHY SI
Uptone Audio Regen
Hi Pepsican, you give great advice over at HDPlex (Pepsica?). Good to see you here.
Yeh I'm not challenging the likes of JS, just trying to get my head around it. I got the impression that JS was writing about "signal integrity" as distinct from timing issues/jitter. This may well be my inadequate understanding. If the stream is bit perfect then, as you say, it comes down to timing (at least at my simple level). If so, does "regeneration" mean reclocking? I (think I) know that jitter is cumulative, referring back to seminal articles written by Julian Dunn. I do vaguely remember him also talking about "intrinsic jitter" and the idea that you can add jitter. Some (lessloss.com) even argue in principle against async reclocking (notwithstanding how good the implementation is) for this reason, I think. They advocate synchronous reclocking to the original master clock just before the actual conversion process.
The question about JS2 was in relation to the physical connection to the mobo. Here I am lost and was just planning on using Larry's LPSU, apart from anything else, having (I guess) an ATX style connector to plug into the 24 pin (?) mobo connection. I've only ever built desktop PCs. I've never powered any mobo with a wallwart thing supplying DC power via a barrel type laptop connection.
Here goes, I'll try to be concise.
What I have been finding in looking at DACs etc with USB inputs is what I am calling "packet noise". This is bursts of noise caused by the USB receiver processing the packets of data. This noise shows up on both the power and ground planes. Since the rate of packets is 8KHz there are strong components of this noise in the audio band. This noise can cause jitter in clock oscillators, reclocking flops, and DAC chips. It can also go directly into noise on the output of DAC chips.
The question everybody asks then is: well, what about the DACs that have full isolation between the USB system and reclocking on the DAC side? Unfortunately this noise likes to make it through even that. Exactly how this works is complicated; I have written about it in the Audiostream articles, and in bits and pieces in other posts recently. The upshot is that neither galvanic isolation nor reclocking completely gets rid of it. They help attenuate it some, but don't get rid of it.
This packet noise consists of two parts: noise from the USB protocol engine and from the USB PHY. The protocol engine noise does not depend on the input signal quality, just the data, so its impact is always going to be the same no matter what is done with the input. The PHY is the part that actually connects to the electrical signals on the bus, ITS contribution to packet noise IS dependent on the quality of the input signal. This is the part the REGEN targets.
A high speed USB signal runs at 480 megabits per second, which is fairly high. Different cables and connectors can significantly degrade the "Signal Integrity" (SI). SI consists of the rise/fall times of the signal, noise on the signal, and jitter of the edges. Degradation in any or all of these decreases the SI. The decrease in SI can be so large that it becomes difficult for the PHY to determine the actual bits. Thus the PHY contains several methods to pre-process the analog signals in order to make it easier to determine the bits. Modern high speed serial interfaces only work at all because of these techniques that have been developed over the years.
When the SI is very good, the PHY can turn off the pre-processing steps and easily determine the bits. As the SI degrades the PHY turns on different parts of the pre-processing as needed. Each of these steps takes a fair amount of power to operate, thus creating noise on the power and ground planes. The more processing the PHY needs to use to determine the bits, the more noise is generated. Thus part of the packet noise is directly related to the signal integrity of the incoming signal. The higher the SI, the lower the noise.
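Some quick arithmetic shows why SI matters so much at this speed. The "eye budget" model below is crude (it just treats rise time and peak-to-peak jitter as directly closing the horizontal eye opening), and the example numbers are invented:

```python
# Hedged arithmetic on why SI matters at 480 Mb/s: the whole bit cell
# (unit interval) is only about 2 ns, so nanosecond-scale rise-time
# degradation and edge jitter eat a big slice of the eye. The model is
# crude and the example numbers are invented.

BIT_RATE_HZ = 480e6
UI_S = 1 / BIT_RATE_HZ       # unit interval, ~2.08 ns per bit

def eye_fraction_lost(rise_time_s, jitter_pp_s):
    # treat the rise time and peak-to-peak jitter as directly closing
    # the horizontal eye opening
    return (rise_time_s + jitter_pp_s) / UI_S

print(UI_S)                               # ~2.08e-09 seconds
print(eye_fraction_lost(0.5e-9, 0.2e-9))  # about a third of the bit cell
```

With a third of the bit cell gone, the PHY has to lean harder on its pre-processing arsenal, and that extra processing is exactly where the added packet noise comes from.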
It is very important here to realize this is noise that is GENERATED inside the DAC by its own operation, it is NOT noise on the USB bus that is somehow getting into the DAC as is commonly thought.
The REGEN uses a common USB hub chip to create a new USB stream. I'm calling this a regeneration not just a reclocking. Because it uses clean power and a low jitter clock the output of the HUB has low noise and low jitter. By making sure the impedances are good and the REGEN is as close as possible to the DAC the rise/fall times have very small degradation.
The result is that the PHY in the DAC doesn't have to use any of its pre-processing arsenal so the packet noise is as low as it is going to get. Note: it does not get rid of the packet noise altogether, it is just as low as it can be.
The hub chip inside the REGEN has its own PHYs, which themselves generate packet noise on ITS power and ground planes. I have worked hard to minimize this noise, but it is still there. The result is that the REGEN itself is also sensitive to the SI of the signal fed to it, which is why USB cables on its input still make a difference.
I hope that is all clear. It is about as short as I can make it.
SD Cards usage on Rendu
Sonore microRendu
John, I still don't get the decision to use a micro SD for the OS rather than some nonvolatile memory. Wouldn't NVM be faster and more reliable?
Glad you asked!
The iMX6 has three memory subsystems: the DDR3, which we need to use for the main memory of the system; a very small, simple, low power SD card subsystem; and the generic everything-else memory subsystem. The latter is what you use for NVRAM, flash chips etc. It is a large complex system designed to run very fast. This uses a lot of power and generates a lot of noise in the chip. The SD card controller is slow, low power and generates very little noise, and on top of that has its own power supply pins on the chip which cuts down even more on the noise it generates. So by using the SD card rather than something like NVRAM I can drastically cut down on the noise in the chip.
There are also things like SSDs, but they all need some form of high power bus to talk over (SATA, PCIe etc), which would mean I would have to turn on those subsystems.
On the reliability front, I have actually found that using on board FLASH or NVRAM is less reliable. I have worked with several embedded boards over the last few years with flash chips that have had problems far more often than ones that run straight off an SD card. I think it has to do with where the controller is. With an SD card the flash controller is built into the card; the software doesn't have to know anything about that. The inexpensive flash chips used with these systems do not have a built in controller; they require the OS to deal with the issues specific to flash memory. Linux has some good code for this, but if something happens with the kernel during runtime, it is very easy for the flash to get corrupted. I had one board where, if power went out during boot, the flash was guaranteed to be corrupted.
On top of that you have to have some method for programming the flash chip, various methods have been used for this over the years, but they are all WAY more complicated than sticking an SD card into a slot!
Because of all that going with the OS stored on SD card seemed like a very good idea.
A novel way to massively improve the SQ of computer audio streaming
17 minutes ago, austinpop said:
I have high hopes for the EtherRegen. It would be great if it simplified network topology and improved SQ.
You're not the only one with high hopes my friend! It darn well better do both 'cause between development and first production run close to 6 figures in $$ will have gone into the launch...
A novel way to massively improve the SQ of computer audio streaming
20 minutes ago, Johnseye said:
Has anyone compared using the USB 3 ports direct to the NUC board vs the optional USB 2 ports? Or has anyone compared USB v2 to v3 in general?
Signal integrity (seen on eye-pattern) is generally a bit better from USB 3.1 chips--even when just doing USB 2.0 480Mbps high-speed--as the internals of those more modern chips have to be very carefully designed so they can achieve USB3 SuperSpeed. But they do vary, and the SuperSpeed power circuitry can generate a bit of excess noise and current bounce on the ground-plane. Do NUC BIOS adjustments offer the ability to turn off USB3 SuperSpeed? That would be a good thing.
The Linear Solution Reference 1 Linear Power Supply: Viable Alternative to a Paul Hynes SR7?
On 1/15/2019 at 4:52 AM, auricgoldfinger said:
Given the lack of response, there must be some doubt regarding the statement on transformer emissions.
Uh, no doubt at all! Transformers used in power supplies radiate at low multiples of the AC line frequency, plus a small amount of ringing at around 175KHz (typical, varies, depending upon diodes and if an RC snubber on secondary is or is not used).
So I'm afraid that GHz-absorbing 3M material is not going to do squat around a transformer.
Maybe you gents are enjoying its effect on other components in the LPS, though again, nothing in a traditional LPS is going to be emitting anywhere near the GHz range. The 3M stuff is slightly heavy and single-stick on one side, so maybe you are getting some mechanical damping benefits. Kind of expensive for that application though.
If you want to block fields from power transformers, 3/8" aluminum is the ticket. Have to be careful with steel and especially with mu-metal as strong fields will cause those materials to saturate if they are close.
The SFP cage side of the SFP module is standardized, same physical, same electrical and same protocol. The part where you plug in the fiber can be different.
There are two common wavelengths, usually referenced as SX and LX. There are two different types of fiber: single mode and multi mode.
So any optical SFP module will work with any SFP cage designed for optical use (which is pretty much all of them). You DO have to match the fiber side: use SX to SX, LX to LX, single mode to single mode and multimode to multimode. If you have an SFP cage on both sides, it's easy: just use the same model SFP module on both sides and use a cable that matches. If one side is an FMC with a built in optical interface (ie NO SFP cage), you need to find out what it is and get an SFP module that matches.
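The matching rule boils down to a tiny check, sketched here in Python (the SX/LX and single/multi mode labels are just the shorthand from above):

```python
# Hedged sketch of the matching rule as a tiny compatibility check.
# The SX/LX and single/multi mode labels are the shorthand from above.

def sfp_link_ok(end_a, end_b):
    """Each end is (wavelength, fiber_mode), e.g. ("SX", "multi").
    The cage side is standardized; only the optical side must match."""
    return end_a == end_b

print(sfp_link_ok(("SX", "multi"), ("SX", "multi")))   # True
print(sfp_link_ok(("LX", "single"), ("SX", "multi")))  # False
```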
As to which of these combinations sounds better, who knows. But be careful: LX is designed for much longer range, but that doesn't mean it is going to sound better between two boxes in the same rack. Some people think LX is actually bad for very short lengths; the electronics are designed assuming a lot of attenuation from the long fiber, with a short cable that attenuation does not exist, probably putting the receivers in a state they were not designed for. It may or may not make any difference; I have not spent any time comparing different combinations, other than checking that they WORK.