
15 USB/SPDIF converters shootout



Miska: OK, I think I understand what you are trying to say here. I think you are talking about your needs specifically, rather than talking about what everyone else needs:

 

"IMO, USB is just a bit better than S/PDIF. Firewire is already technically better than USB, although unfortunately fading out. Ethernet and especially PCIe (Thunderbolt) are much better.

 

I could even call design of USB standard "stupid", if HDMI wouldn't be worse…"

 

The fact is that USB class 2 audio transfer (assuming a good implementation of course) is "perfect" (meaning no data loss, and jitter at essentially the intrinsic rate of the oscillators used) for PCM (or DSD/DoP up to 2x DSD) up to 384 kHz. Neither Ethernet, nor Firewire is better AT ALL.
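As a rough back-of-the-envelope check of that bandwidth claim (my own numbers, assuming 32-bit subframes per channel; real USB Audio Class 2 isochronous framing adds some overhead but changes nothing fundamental):

```python
# Rough bandwidth check: 2-channel PCM at 384 kHz over USB 2.0 high-speed (480 Mbit/s raw).
# Assumes 32 bits per sample per channel; actual UAC2 packet framing adds modest overhead.
sample_rate_hz = 384_000
channels = 2
bits_per_sample = 32

payload_mbps = sample_rate_hz * channels * bits_per_sample / 1e6   # ~24.6 Mbit/s
usb_hs_mbps = 480.0

print(f"PCM payload: {payload_mbps:.1f} Mbit/s")
print(f"Share of raw USB 2.0 high-speed bandwidth: {payload_mbps / usb_hs_mbps:.1%}")  # ~5%
```

So raw throughput is clearly not the limiting factor; the argument here is about timing and implementation quality, not bits per second.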

 

But, I understand that you do things much differently, and are transferring much different data in your approach, hence you may need more capability than what is offered by USB. But, please do not confuse the issue here, I know of nobody else who is doing what you are, so your specific case is moot to the discussion here, and is liable to confuse readers. Not only that, but I do not think the USB-SPDIF converters being discussed here would have any application for what you are doing.

SO/ROON/HQPe: DSD 512-Sonore opticalModuleDeluxe-Signature Rendu optical with Well Tempered Clock--DIY DSC-2 DAC with SC Pure Clock--DIY Purifi Amplifier-Focus Audio FS888 speakers-JL E 112 sub-Nordost Tyr USB, DIY EventHorizon AC cables, Iconoclast XLR & speaker cables, Synergistic Purple Fuses, Spacetime system clarifiers.  ISOAcoustics Oreas footers.                                                       

                                                                                           SONORE computer audio

Link to comment
So, please experts, riddle me this - who is best? Answers on a postcard or in this forum...

 

I wouldn't use any of those...

 

But design 2 looks worst, then design 1. Design 3 is pretty standard and design 4 looks like some attempt, but still gets it wrong.

 

And depending on the case, I would possibly go for AES with transformers at both ends, and check if I could float the ground.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
rather than talking about what everyone else needs

 

I think it's quite bold to say anything about what "everyone else" needs. I don't think I'm so unique in being a perfectionist.

 

The fact is that USB class 2 audio transfer (assuming a good implementation of course) is "perfect" (meaning no data loss, and jitter at essentially the intrinsic rate of the oscillators used) for PCM (or DSD/DoP up to 2x DSD) up to 384 kHz. Neither Ethernet, nor Firewire is better AT ALL.

 

Well, of course you can find suitable constraints where it is "perfect". Perfect for me is something that doesn't have any restrictions or downsides in any aspect. Ethernet has at least some amount of isolation by design (plus has fiber optic capability for perfect isolation), while Firewire, for example, has ways to synchronize multiple audio devices on the bus, etc.

 

Plus of course Ethernet is many-to-many, so any number of players can access any number of DACs without touching any wiring. And of course you can also easily go wireless.

 

A perfect interface would at least:

  • Be bus-master DMA capable
  • Be many-to-many capable
  • Support fiber optic links
  • Support wireless links
  • Support multi-device clock synchronization (sample-synchronous playback and recording of unlimited channels)
  • Support separate clock devices and clock distribution
  • Provide galvanic isolation and differential signaling by design for copper links

 

Ethernet + Thunderbolt + Firewire comes closest.

 

Not only that, but I do not think the USB-SPDIF converters being discussed here would have any application for what you are doing.

 

Well, actually you can transfer one channel of DSD128 using DoP over a single AES cable (or two channels of DSD64). You can also buy 16/16-channel (in & out, 8+8 wires) PCIe AES cards, so you can have 8 channels of synchronized DSD128. It also works for recording. So it is actually kind of related.
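For readers unfamiliar with DoP, the arithmetic behind those figures works out as below (a minimal sketch of the standard DoP packing: 16 DSD bits per 24-bit PCM word, the top byte carrying the alternating 0x05/0xFA marker):

```python
# DoP carries 16 DSD bits in each 24-bit PCM word; the top byte is the 0x05/0xFA marker.
DSD_BASE_HZ = 44_100

def dop_pcm_rate_hz(dsd_multiple):
    """PCM frame rate needed to carry ONE DSD channel of the given rate (DSD64, DSD128, ...)."""
    dsd_bit_rate = dsd_multiple * 64 * DSD_BASE_HZ      # DSD64 = 2.8224 Mbit/s per channel
    return dsd_bit_rate / 16                            # 16 DSD bits per PCM word

print(dop_pcm_rate_hz(1))   # 176400.0 -> stereo DSD64 fits one AES link (2 subframes @ 176.4 kHz)
print(dop_pcm_rate_hz(2))   # 352800.0 -> one DSD128 channel needs both subframes of one AES link
```

So stereo DSD64 fits the two subframes of one AES link at 176.4 kHz, and a single DSD128 channel can be carried by using both subframes at the same frame rate.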

 

Naturally PCIe can do bus-master DMA, so the card can access raw audio data directly from the computer's RAM. "All" PCIe cards with S/PDIF (or AES/EBU) output support this, unlike USB, where the computer has to actively packetize the audio for transmission.

 

From a practical perspective, S/PDIF == AES/EBU, because most TX/RX chips transparently handle both.

 

Now at least there's a way to put together a 7.1-channel DSD playback system.

 

P.S. My humble point anyway was that built-in async USB isn't automatically better than S/PDIF. As usual it's so much more about the actual design.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment

I just removed my Audiophilleo AP2 from my system after upgrading from a Hegel HD10 to a Benchmark DAC2. With the Hegel HD10, the Audiophilleo, powered from an AQVOX linear PSU, made a huge, really night-and-day difference. With the Benchmark DAC2, the USB input (with an iFi iUSB) is very slightly superior to the SPDIF input, even when using the AP2 on the SPDIF input.

Link to comment
But design 2 looks worst, then design 1. Design 3 is pretty standard and design 4 looks like some attempt,

 

Really? Interesting.

 

Do you want to ask the audience, take 50/50 or call a friend? Or do you stick with this?

 

And depending on the case, I would possibly go for AES with transformers at both ends, and check if I could float the ground.

 

Interesting, now exactly what was the characteristic impedance of an XLR connector? 110 Ohm? Plus or minus how much?

 

And is the variance from the "correct" impedance greater or lesser than BNC to RCA and is the break in impedance longer or shorter with XLR compared to RCA instead of BNC...

Magnum innominandum, signa stellarum nigrarum

Link to comment
the S/PDIF receiver itself rarely adds any jitter, but it may fail to remove some, since it's just a low-pass filter for a jitter anyway.

 

I must take exception here.

 

The common digital receivers (Cirrus Logic, AKM, TI) all use a PLL with a VCO (even the WM Parts do, but their PLL is rather different). This VCO is on-board and very noisy (and very PSU dependent).

 

Cirrus demonstrated this exceptionally well with the CS8416. This implemented the great idea of no longer locking the PLL on the data low/high transitions, but instead locking on the SPDIF/AES-EBU preamble.

 

The result: data/source-related jitter practically disappears. Which is great news, except for a minor flaw, namely that the noise-like jitter from the VCO is now 20 dB or so higher...

 

Now measured jitter is through the roof compared to the older CS841X parts, and you cannot get the expected SNR from a given DAC chip. When customers started complaining, CS issued a "bugfix" version of the CS8416 that allows you to restore the chip to the same behaviour as the older chips, except now you have no 192 kHz support and all source jitter rides through as before...

 

If only they had changed the oscillator to something less noisy instead... But that would cost money.

 

In effect the PLL acts as a feedback loop around the VCO that reduces the VCO noise by the feedback factor, but some is still added, as neither loop gain nor loop bandwidth is infinite. This is incidentally also the reason why such receivers cannot suppress any source jitter: for that, the PLL bandwidth would have to be narrower, and if you do that the VCO noise goes up.
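To make that trade-off concrete, here is a minimal sketch (a first-order idealization of my own, not any particular chip): incoming source jitter is shaped by the loop's low-pass jitter transfer, while the receiver's own VCO noise sees the complementary high-pass, so one can only be improved at the expense of the other.

```python
# First-order idealization of a PLL-based S/PDIF receiver:
#   source jitter is scaled by the low-pass jitter transfer |H(f)|,
#   the receiver's own VCO noise by the complementary high-pass |1 - H(f)|.
import math

def jitter_transfer(f_hz, loop_bw_hz):
    """Return (|H|, |1 - H|) for a first-order loop with corner frequency loop_bw_hz."""
    x = f_hz / loop_bw_hz
    lp = 1.0 / math.sqrt(1.0 + x * x)
    return lp, x * lp

for loop_bw in (100.0, 10_000.0):            # narrow vs. wide loop bandwidth
    for f in (100.0, 1_000.0, 10_000.0):      # jitter frequencies within the audio band
        lp, hp = jitter_transfer(f, loop_bw)
        print(f"BW {loop_bw:>6.0f} Hz, jitter at {f:>6.0f} Hz: source x{lp:.2f}, VCO x{hp:.2f}")
```

With the wide loop, source jitter below a few kilohertz passes essentially unattenuated (the "fails to filter ANY in the audio range" case), while the narrow loop suppresses it but lets nearly all of the VCO's own noise through instead.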

 

The added jitter usually measures around 100-200 ps rms, depending on who implemented the chip, who did the measurements, with precisely what filter settings, and so on.
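For a sense of scale, a common rule of thumb (assuming white, uncorrelated jitter and a full-scale sine near the top of the audio band) caps the achievable SNR at roughly -20*log10(2*pi*f*tj):

```python
# Rule-of-thumb SNR ceiling imposed by random sampling jitter on a full-scale sine.
import math

def snr_limit_db(f_signal_hz, jitter_rms_s):
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * jitter_rms_s)

for tj in (100e-12, 200e-12):
    print(f"{tj * 1e12:.0f} ps rms at 20 kHz -> SNR ceiling ~{snr_limit_db(20_000, tj):.0f} dB")
# ~98 dB at 100 ps, ~92 dB at 200 ps - already marginal for a modern high-resolution DAC
```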

 

So, yes, all SPDIF receivers add jitter of their own and quite a bit (even for CD).

 

And second, no, most of them do not filter source jitter worth a damn; they do not merely fail to filter some, they fail to filter ANY in the audio range.

 

There is no particular reason why that should be so (as Wolfson Micro show, though not well enough). But that's the way it is. And this is why everyone is agonising about connectors and this, that and the other.

 

Bad receiver chips coupled with bad analogue transmission line design and inappropriate serial formats. Well, it was never supposed to be HiFi, that SPDIF kludge...

Magnum innominandum, signa stellarum nigrarum

Link to comment
I just removed my Audiophilleo AP2 from my system after upgrading from a Hegel HD10 to a Benchmark DAC2. With the Hegel HD10, the Audiophilleo, powered from an AQVOX linear PSU, made a huge, night-and-day really difference. With the Benchmark DAC2, the USB input (with a iFi iUSB) is very slightly superior to the SPDIF input, even when using AP2 on the SPDIF input.

 

Hallelujah! USB finally beats S/PDIF!

Mac Mini 5,1 [i5, 2.3 GHz, 8GB, Mavericks] w/ Roon -> Ethernet -> TP Link fiber conversion segment -> microRendu w/ LPS-1 -> Schiit Yggdrasil

Link to comment
Hallelujah! USB finally beats S/PDIF!

 

I extensively A/B'ed the USB input of the Benchmark DAC2 against its coax SPDIF input fed by my AQVOX-powered AP2 (much alphabet soup in a single sentence, sorry), and I identified a small, but clear and stable, difference: the bass was a bit more precise through the USB input. The difference was just barely audible with my favourite orchestral test CD (Bartok's Wooden Prince by Boulez-CSO) but more noticeable (though still small) with my preferred bass-range test CD (Ralph Thamar Otentik - music from the French West Indies). In the medium range, at first I thought the AP2-fuelled SPDIF input was a bit warmer than the USB input, but after quite some listening I had to acknowledge that it was because it was slightly less precise. One striking thing, for instance, is that, on another CD I know very well (Chambre Avec Vue by Henri Salvador), when listening to this disc from the USB input, for the first time I was struck by how old Salvador was when he recorded this CD. With the SPDIF input, or with my previous Hegel HD10 (also fed via its coax input from the AP2), his voice was a tad warmer and did not sound as old. In a way, it sounded nicer with the HD10 or the SPDIF input of the Benchmark DAC2, but the USB input is just more precise, more detailed, which is ultimately what I am after.

Link to comment

Miska: Yes, of course. I was speaking theoretically and technically:

 

"P.S. My humble point anyway was that built-in async USB isn't automatically better than S/PDIF. As usual it's so much more about the actual design."

 

Right, I could not agree more. My posts here have always mentioned this comparison when considering a "perfect" implementation, or with a "proper" implementation.

 

But, to the point of this specific discussion - USB to SPDIF converters vs. internal (to a DAC) USB receivers - I think it is important to limit our posts to the topic at hand, and not expand the discussion into other areas, which confuse the issue. For the topic at hand we are talking about two-channel audio, PCM (including DoP), at sample rates up to 192 kHz (since we are considering SPDIF DACs, and I am aware of no commercially available SPDIF DAC which accepts higher sample rates). When the topic is limited to this area, a properly implemented, internal-to-the-DAC USB interface can be "perfect", and there are USB DACs commonly available for consumers to purchase which use virtually "perfect" USB interfaces, such as Ayre, Wavelength Audio, and Aesthetix (I am sure there are others as well, but I do not know the actual implementation of all USB DACs). I say virtually, because there is always going to be some threshold of jitter, due to intrinsic clock inaccuracies, power supply imperfections, and layout issues: but the basic design of the USB receivers in the mentioned DACs is close enough to "perfect" that they will outperform SPDIF, with the (uncommon) exception of fully asynchronous SPDIF receivers.

 

While you may be achieving actual "perfection", or really coming closer to that ideal than anyone else with your approach, what you are doing is moot to this discussion as far as 99.9% of audiophiles are concerned, because no one can purchase components which operate in the way that you do things. I have much interest in your approach from a theoretical standpoint, but bringing it up here confuses the topic at hand, as what you are doing is not relevant. If you would like to discuss your unique approach to digital audio playback, I am all for it, as I would love to learn more about exactly what you are doing. But please do so in a separate thread, or start a blog here to discuss it.

SO/ROON/HQPe: DSD 512-Sonore opticalModuleDeluxe-Signature Rendu optical with Well Tempered Clock--DIY DSC-2 DAC with SC Pure Clock--DIY Purifi Amplifier-Focus Audio FS888 speakers-JL E 112 sub-Nordost Tyr USB, DIY EventHorizon AC cables, Iconoclast XLR & speaker cables, Synergistic Purple Fuses, Spacetime system clarifiers.  ISOAcoustics Oreas footers.                                                       

                                                                                           SONORE computer audio

Link to comment
and I am aware of no commercially available SPDIF DAC which accepts higher sample rates

 

IIRC, the Chord QuteHD accepts up to 384 kHz over S/PDIF and also has an otherwise pretty good S/PDIF receiver.
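For context on what 384 kHz over S/PDIF means at the physical layer, the standard frame arithmetic (2 subframes of 32 time slots per frame, with biphase-mark coding giving two line symbols per slot) works out as follows:

```python
# S/PDIF / AES3 line rate: 64 time slots per frame, 2 line symbols (UI) per slot = 128 UI / frame.
def spdif_line_rate_mhz(frame_rate_hz):
    return 128 * frame_rate_hz / 1e6

for fs in (44_100, 192_000, 384_000):
    print(f"{fs / 1000:g} kHz frames -> {spdif_line_rate_mhz(fs):.3f} MHz line rate")
# 44.1 kHz -> 5.645 MHz, 192 kHz -> 24.576 MHz, 384 kHz -> 49.152 MHz
```

So a 384 kHz-capable receiver has to recover its clock from roughly twice the line rate that most S/PDIF hardware was designed around.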

 

but the basic design of the USB receivers in the mentioned DACs is close enough to "perfect" that they will outperform SPDIF, with the (uncommon) exception of fully asynchronous SPDIF receivers.

 

No, looking at measurements I've done, and for example HiFi-News, I would say you have something like a 50/50 chance of getting better performance with USB than with S/PDIF when using a good S/PDIF transmitter like the M2Tech hiFace (Evo). It is not uncommon to see jitter figures around 350 ps for the async USB interface in DACs. And when there's a difference, it's not that big. This is not because S/PDIF is such a great interface (it is not; it is horribly poor), but likely because USB is much more complex to implement right. And S/PDIF has a fairly long R&D evolution behind it, so it will take some years for USB implementations to get up to speed.

 

For example, the Mytek DAC seems to perform about as well with USB as it does with good S/PDIF or AES sources, thanks to the JetPLL of the DICE chip.

 

Another common issue is RFI/EMI dirt in the output that doesn't impact the jitter measurements as such, but appears in spectrum analysis as spurious peaks across the spectrum. USB seems to be typically more sensitive to this than, for example, AES. This is something you can usually improve by using a better and quieter computer at the other end of the USB cable. Comparing an audio-optimized, battery-powered ARM computer to an audio-optimized PC seems to give roughly a 10 dB reduction in these spurious noise components.

 

So I would say that it is impossible to make the generalization that some interface type is systematically better than another; rather it is case-by-case, also largely depending on the source quality. So even if X performs better than Y in system A, it may still be vice versa in system B.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment

Miska: I have agreed with you numerous times now that proper implementation is critical, but for some reason you keep responding as if I have not mentioned this?

 

Chord uses asynchronous SPDIF reception in their high-end DACs (one of the very few companies to do so); does the QuteHD also use their asynchronous SPDIF receiver? As I mentioned previously, if one goes to the trouble to implement a truly asynchronous (no PLL) SPDIF receiver, then SPDIF performance can be virtually "perfect" as well. But, it is important for people to understand that these types of SPDIF receiver circuits are not common at all.

 

Sure, one can clean up measured jitter by adding an additional circuit, like the JetPLL you mention in the Mytek. ESS chips do this internally as well. My experience with such circuits is that they present sonic problems, and the better sounding solution is to not have the jitter in the first place, by using a truly asynchronous receiver with low phase noise oscillator(s). I do not know the specific technical mechanism as to why PLLs sound bad to me, and I understand that mathematically they should not be a problem, but that does not change my dislike for them. My ESS DAC sounds better when synchronously clocked from a low jitter source with the DPLL inactive...

 

Your discussion of RF problems showing in the analog output is very interesting to me. Certainly, USB processors like the XMOS are high-speed chips, and it is not surprising that they can induce RF problems in the output. Of course, this depends on the implementation. The best USB designs I am aware of use isolation between the USB processor and the I2S output to the DAC chip, while re-aligning the data with the master clock right at the output. This approach minimizes any RF from the USB processor (like XMOS etc.), isolates the grounds, and at the same time eliminates any additional jitter added by the isolation. No doubt not every USB interface uses this approach, but there are commercial DACs which do.

SO/ROON/HQPe: DSD 512-Sonore opticalModuleDeluxe-Signature Rendu optical with Well Tempered Clock--DIY DSC-2 DAC with SC Pure Clock--DIY Purifi Amplifier-Focus Audio FS888 speakers-JL E 112 sub-Nordost Tyr USB, DIY EventHorizon AC cables, Iconoclast XLR & speaker cables, Synergistic Purple Fuses, Spacetime system clarifiers.  ISOAcoustics Oreas footers.                                                       

                                                                                           SONORE computer audio

Link to comment
So you are saying you've personally tested the WBT connector to be non 110 ohm? If so, then what characteristic impedance did you find?

 

You/we are mixing up 110 ohm AES/EBU and 75 ohm BNC.

 

I have not measured the WBT "75 ohm BNC". I have no intention to, but if someone supplies one, I will measure it, and post my findings here.

 

My reference to 110 ohms was in regards to what is printed on the jacket of some well-known AES/EBU cables. The point is, just because the manufacturer says it does not make it so. Even Belden is guilty of fudging the results.

 

And PS: is your buddy in Richardson? I worked at Kfab a while back. A lot of fun in Dallas. ;)

 

He was in the military side of the house................moved to semiconductor marketing. Which is where he was when he got a patent on the bug/feature.

Link to comment
Oh I don't know - give me a nice ST Fibre connection any day.... (grin)

 

I thought I was supposed to be the comedian around here. I guess not.

 

Talk about jitter...............yes, jitter, on fiber. The way it is done (i.e. wrong) leads to lots of jitter. Right way.......not hard to do, but no one in this bidnis knows that. Or, if they do, they ignore it.

 

Hint: fiber was designed to work where copper starts to crap out, iow around 1 km. Anything shorter.................asking for problems. At least with single-mode.

Link to comment
But it was my point. Plus that the S/PDIF receiver itself rarely adds any jitter, but it may fail to remove some, since it's just a low-pass filter for jitter anyway.

 

So most of the time, the S/PDIF receiver doesn't have any jitter itself, but the circuitry surrounding it may have................

 

Another guy who thinks he is a comedian.

 

Google PLL. You may learn something.

 

IMO, USB is just a bit better than S/PDIF...............

 

Just a bit. At least you get rid of the PLL, and all of its problems. You just add a poorly balanced line, that is not terminated, and has to limit its length to around 12', to deal with all the reflection issues.

 

Both work, both are bone-headed ways to get good sound. That isn't what they were designed for. (SPDIF for a test spigot; USB for printers, mice, and keyboards.)

 

I could even call design of USB standard "stupid", if HDMI wouldn't be worse...

 

I'll do it for you: it is stupid. (Doing my job as comedian, even though I am serious.)

Link to comment
It is quite excellent to have so many highly experienced and opinionated experts here.

 

I would like to pose a little contest to all of you experts. I have some purely hypothetical practical implementations of SPDIF outputs. Of course, it could be that they are based on real devices - who knows, right?

 

I should like to describe these in some degree of detail, covering physical layout and circuit.

 

The SPDIF outputs shall be connected to one of the following:

 

One DAC is fitted with the CS841X or AKM receiver (pick your favourite) and a direct (no transformer) input. The other is fitted with an equal input but a Wolfson Micro WM880X receiver. Neither employs secondary PLLs, asynchronous sample rate converters, etc., and they shall be known from here on as CS-DAC and WM-DAC. The DAC chip may be anything without ASRC built in.

 

We may presume the source is a PC (that includes Mac, Linux, Android, etc.) with the usual contamination of ground, with possibly very high common-mode voltages (as much as 1/2 of mains voltage), etc.

 

And I should like your expert opinions on which of these entirely hypothetical SPDIF output arrangements, coupled with which equally hypothetical DAC, will produce the lowest measured jitter, and which (if different) will sound best, looking at or listening to the analogue output of a given DAC...

 

So far the challenge, now the implementations:

 

Design 1:

 

Five parallel inverters (unbuffered) operating from 3.3V and driven by a sixth inverter drive a coupling cap (X7R SMD 0603 format, 100nF) and a resistive divider of 330 Ohm & 91 Ohm. The voltage divider drives an SPDIF transformer without shield. Around 2 inches of wires twisted loosely together link the signal after the transformer to a 75 Ohm BNC socket.

 

Design 2:

 

An unbuffered inverter operating from 3.3V drives an SPDIF transformer (actually the same model as above) directly via a coupling cap (X7R SMD 0603 format, 100nF). The resistive divider is placed after the transformer. Around 2 inches of PCB traces link the resistive divider to a PCB-mount 75 Ohm BNC socket. The traces are quite obviously not 75 Ohm striplines; an educated guess might put them closer to 100 Ohm.

 

Design 3:

 

A TI Transmitter operating on 5V drives a transformer (similar design but different maker and model) via a coupling capacitor (100nF) and series resistor that produces 73 Ohm output impedance. The transformer connects to a PCB mounted 75 Ohm BNC connector with short traces.

 

Design 4:

 

Super-fast "popcorn logic" running at 4V implements a balanced drive to a transformer (similar to all earlier ones) with a parallel combination of NP0, X7R and electrolytic capacitors amounting to 100uF as DC blocker. The resistive divider is placed after the transformer and supplies 75 Ohm nominal (ensured by design) at better than +/- 3% and 0.7V P-P. All parts are SMD 0603. Additional RC "conjugates" are applied. The actual resistive divider and conjugates are placed directly below the output socket, which is RCA type; for BNC an adapter is supplied. The applied conjugates are experimentally tuned using a TDR and a known 75 Ohm cable, including the RCA socket and the RCA>BNC adapter.

 

So, please experts, riddle me this - who is best? Answers on a postcard or in this forum...

 

Magnum innominandum, signa stellarum nigrarum

 

Is "None of the above" an option?

 

#4 comes closest, but needs tweaking. Does not need to be as complex as you propose. Needs different, and better, parts.

 

I know of only one company that does all that tweaking, with Zobels, but I cannot say who. I can only say it is a pain. Especially if you do it on the jack itself.
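For what it's worth, a quick sanity check of Design 1's output network from the challenge quoted above (my own idealization: rail-to-rail 3.3 V drive, an ideal 1:1 transformer, coupling cap ignored), compared against the roughly 0.5 V p-p into 75 Ohm consumer S/PDIF target:

```python
# Thevenin view of Design 1's divider (330 ohm series, 91 ohm shunt), driven rail-to-rail from
# 3.3 V through an assumed ideal 1:1 transformer into a 75 ohm terminated line.
V_DRIVE = 3.3
R_SERIES, R_SHUNT, R_LOAD = 330.0, 91.0, 75.0

v_open   = V_DRIVE * R_SHUNT / (R_SERIES + R_SHUNT)        # ~0.71 V p-p open circuit
z_source = R_SERIES * R_SHUNT / (R_SERIES + R_SHUNT)       # ~71 ohm source impedance
v_loaded = v_open * R_LOAD / (z_source + R_LOAD)           # ~0.37 V p-p into 75 ohm

print(f"Source impedance: {z_source:.1f} ohm (75 ohm nominal)")
print(f"Level into 75 ohm: {v_loaded:.2f} V p-p (consumer S/PDIF target ~0.4-0.6 V p-p)")
```

On paper the source impedance is close and the level lands slightly below spec; none of this, of course, captures the X7R coupling cap, the unshielded transformer or the loosely twisted wiring the design is really being poked at for.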

Link to comment
Interesting, now exactly what was the characteristic impedance of an XLR connector? 110Ohm? Plus minus how much?

 

And is the variance from the "correct" impedance greater or lesser than BNC to RCA and is the break in impedance longer or shorter with XLR compared to RCA instead of BNC...

 

The XLR actually is 110 ohms. The problem is there is too much stray reactance, mainly due to the way the cable is terminated. Right-angle connectors will add to that.

 

Sticking an RCA connector in, anywhere, blows the mismatch out of the water.
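To put a rough number on the mismatch, the reflection coefficient at an impedance step is gamma = (Z - Z0) / (Z + Z0). The RCA value below is purely an assumed, illustrative figure (real RCA connectors vary, but none of them are 75 Ohm):

```python
# Reflection coefficient at an impedance discontinuity: gamma = (Z - Z0) / (Z + Z0).
def gamma(z_ohm, z0_ohm):
    return (z_ohm - z0_ohm) / (z_ohm + z0_ohm)

Z_RCA_GUESS = 40.0   # assumed, illustrative only; actual RCA connectors vary widely

print(f"~40 ohm RCA (assumed) in a 75 ohm S/PDIF line: {gamma(Z_RCA_GUESS, 75.0):+.2f}")  # ~ -0.30
print(f"75 ohm load hung on a 110 ohm AES/EBU line:    {gamma(75.0, 110.0):+.2f}")        # ~ -0.19
```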

Link to comment
Bad receiver chips coupled with bad analogue transmission line design and inappropriate serial formats. Well it was never supposed to HiFi, that SPDIF kludge...

 

Exactly.

 

Add to that all the other problems that arise from following the examples in the data sheets and app notes of how to hook the two ends together. Put using the absolute worst possible transformer at the top of the list.

Link to comment

So, to summarize:

 

1. s/pdif, usb, rca, aes-ebu, and st fiber (perhaps bnc too- not clear on that) all suck.

2. s/pdif converters suck.

3. running straight from the computer's usb buss to a usb dac sucks.

 

I enjoy reading these posts for educational purposes, but for those of us who are not electrical engineers, audio component designers or industry insiders, what is the practical "take away" from all this discussion?

 

Emphasis on the word "practical".

Speaker Room: Lumin U1X | Lampizator Pacific 2 | Viva Linea | Constellation Inspiration Stereo 1.0 | FinkTeam Kim | Revel subs  

Office Headphone System: Lumin U1X | Lampizator Golden Gate 3 | Viva Egoista | Abyss AB1266 Phi TC 

Link to comment

Another common issue is RFI/EMI dirt in the output that doesn't impact the jitter measurements as such, but appear in spectrum analysis as spurious peaks over the spectrum. USB seems to be typically more sensitive to this than for example AES. This is something you can usually improve by using better and more quiet computer at the other end of USB cable. Comparing audio-optimized battery powered ARM computer to an audio-optimized PC seems to give roughly 10 dB reduction in these spurious noise components.

 

If an ARM based computer is running at say <2% of its CPU (ie not doing any signal processing at all, not even converting Apple Lossless or FLAC to PCM), do you think there are any advantages in using an audio-optimized PC with a more powerful Intel processor? For instance, do the expensive PCs have better USB implementations with better timing accuracy or whatever?

System (i): Stack Audio Link > 2Qute+MCRU psu; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

 

Link to comment
If an ARM based computer is running at say <2% of its CPU (ie not doing any signal processing at all, not even converting Apple Lossless or FLAC to PCM), do you think there are any advantages in using an audio-optimized PC with a more powerful Intel processor? For instance, do the expensive PCs have better USB implementations with better timing accuracy or whatever?

 

More powerful tends to mean more noisy, and a large part of the noise comes from peripherals. There's quite a large correlation between power consumption and the amount of electrical noise. On the other hand, it is useful to utilize a powerful PC for doing signal processing. That's why I ended up splitting the two (processing and playback) apart and using a network between them, and making the ARM as dumb as possible, with all unnecessary peripherals (like the display adapter) disabled or non-existent.

 

Some of the biggest sources of noise inside a computer are:

- Display adapters (GPU)

- Mass storage adapters (SATA)

- SSDs and HDDs

- Switching power supplies

Also, PWM-controlled fans generate their own contribution. So leaving these out helps. And of course a completely fan-less device is also acoustically silent.

 

Luckily PCs are also headed for lower power consumption, so it will most likely benefit audio. :)

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment

Just transposed the WBT 110 model number for the impedance. Yes 75 ohms.

 

You/we are mixing up 110 ohm AES/EBU and 75 ohm BNC.

 

I have not measured the WBT "75 ohm BNC". I have no intention to, but if someone supplies one, I will measure it, and post my findings here.

 

My reference to 110 ohms was in regards to what is printed on the jacket, of some well-known AES/EBU cables. The point is just because the manufacturer says it does not make it so. Even Belden is guilty of fudging the results.

A Digital Audio Converter connected to my Home Computer taking me into the Future

Link to comment

Thank you Blake!

 

My ranking of AES-EBU > BNC > Toslink comes from Berkeley Audio, whom I trust more than just about anybody out there. At least in their implementation this is how they rank them. Good enough for me to start with as a "layman", and I would just do some listening tests from there. If I remember correctly, what Michael Ritter said was that the higher voltage and differential signal of AES worked the best with their unit. And he also mentioned how he preferred BNC over RCA.

 

[Attached image: digital inputs.jpg]

 

So, to summarize:

 

1. s/pdif, usb, rca, aes-ebu, and st fiber (perhaps bnc too- not clear on that) all suck.

2. s/pdif converters suck.

3. running straight from the computer's usb buss to a usb dac sucks.

 

I enjoy reading these posts for educational purposes, but for those of us who are not electrical engineers, audio component designers or industry insiders, what is the practical "take away" from all this discussion?

 

Emphasis on the word "practical".

A Digital Audio Converter connected to my Home Computer taking me into the Future

Link to comment
More powerful tends to mean more noisy, while large part of the noise comes from peripherals. There's quite large correlation between power consumption and amount of electrical noise. While on the other hand it is useful to utilize powerful PC for doing signal processing. That's why I ended up splitting the two (processing and playback) apart and using network between the two. Just making the ARM dumb as possible with all unnecessary peripherals (like display adapter) disabled or non-existing.

 

Some of the biggest sources of noise inside a computer are:

- Display adapters (GPU)

- Mass storage adapters (SATA)

- SSDs and HDDs

- Switching power supplies

Also PWM controlled fans generate their own contribution. So leaving these out helps. And of course completely fan-less device is also acoustically silent.

 

Luckily PCs are also headed for lower power consumption, so it will most likely benefit audio. :)

 

So, as you haven't mentioned better USB implementations, I assume a good ARM implementation of USB is as good as a USB implementation on one of the more expensive CAPS Intel-based computers or similar. To me Miska's viewpoint seems to make a lot of sense, but I would like to hear an alternative position. There are four different CAPS computers at increasingly higher price levels. What is it that you get from a more expensive CAPS computer relative to the cheaper ones? Is it just a quieter PSU? More power for signal processing relative to EMI/RFI noise (not needed if you adopt Miska's distributed architecture)? Or something else, like making the best of a flawed Microsoft Windows OS, that wouldn't apply if you were using Linux?

System (i): Stack Audio Link > 2Qute+MCRU psu; Gyrodec/SME V/Hana SL/EAT E-Glo Petit/Magnum Dynalab FT101A) > PrimaLuna Evo 100 amp > Klipsch RP-600M/REL T5x subs

System (ii): Allo USB Signature > Bel Canto uLink+AQVOX psu > Chord Hugo > APPJ EL34 > Tandy LX5/REL Tzero v3 subs

System (iii) KEF LS50W/KEF R400b subs

 

Link to comment
So, to summarize:

 

1. s/pdif, usb, rca, aes-ebu, and st fiber (perhaps bnc too- not clear on that) all suck.

2. s/pdif converters suck.

3. running straight from the computer's usb buss to a usb dac sucks.

 

I enjoy reading these posts for educational purposes, but for those of us who are not electrical engineers, audio component designers or industry insiders, what is the practical "take away" from all this discussion?

 

Emphasis on the word "practical".

 

+1. You took the words from my mouth.

Link to comment
So, to summarize:

 

1. s/pdif, usb, rca, aes-ebu, and st fiber (perhaps bnc too- not clear on that) all suck.

2. s/pdif converters suck.

3. running straight from the computer's usb buss to a usb dac sucks.

 

I enjoy reading these posts for educational purposes, but for those of us who are not electrical engineers, audio component designers or industry insiders, what is the practical "take away" from all this discussion?

 

Emphasis on the word "practical".

 

The practical takeaway is that there is no silver bullet, no single inherently superior protocol/system/cable, etc.

 

There is nothing that can be applied "cookie cutter" from some app note or such.

 

If sufficient care is taken, ANY of the protocols you mention can deliver very low jitter, low leakage of RF noise and so on, and produce excellent results.

 

If such care is not taken (Sturgeon's law applies), then any of these can deliver the worst sound you have ever heard.

 

So, USB, async or not, SPDIF via RCA or BNC or glass... none of them in itself guarantees anything, be it a lack of quality or excellent quality. Most often the designs are lacking, and a "fashion item" (async this, 75 Ohm that, USB 3.0 thither or isolated the other) has been added for sales promotion.

Magnum innominandum, signa stellarum nigrarum

Link to comment
