
A first test and some food for audio thought


bdiament


Gang,

 

Let me clear up a couple of things.

 

I have been doing PC/Mac development since 1981. Yes, even Apple II stuff...

 

Anytime you can use an internal operating-system driver as opposed to a loaded third-party driver, the product will be more stable and require less support.

 

I of course tried several Firewire solutions before turning to USB. Again, though there is an asynchronous Firewire protocol, it is not supported by any operating system. As a result, many of the Firewire audio chips had extremely high jitter, which is only slightly better with some of the adaptive USB devices. That said, companies have done both Firewire and USB solutions that are asynchronous and use custom drivers, and these can work very well if implemented correctly.

 

USB 2.0 can support up to 24 channels at 24/192, though realistically I have only been able to get that to 16 channels. Still way more than required. USB 3 is basically USB 2 with the addition of PCIe, which means you may in the future have card-style access for laptops and such. Knowing the device-driver problems associated with that, USB 3 will probably face a whirlwind of issues when it comes out.

 

Latency... well, if you are talking audio playback, this is meaningless. For that matter, take all the time you want: the larger the buffer in the DAC, the better it will sound, and a larger buffer raises the latency. The USB High Speed Class 2 Audio interface is blindingly fast, almost 2x faster than Firewire, with 1/5 the latency. Again, this has nothing to do with sonic results.

 

When it comes down to it, this discussion is really useless. Nobody can declare one interface better than another, or one computer better than another, without looking at the complete solution. There are many bad async devices already appearing. There are also some good adaptive ones.

 

You can read all you want about any of this. The problem is that the writers don't have the same system as you do, nor do they have a bunch of them, so it's complete tunnel vision. Really step back and look at what you're reading. Everyone here is pretty smart: make some good decisions and don't take any of this as gospel.

 

Go out and listen and try some of these ideas and make up your own mind.

 

Thanks

Gordon

 


"I'm going to give it a couple of days to allow more input before I give mine (as I seem to have an unrivalled effect of killing threads)"

 

too late, Gordon just did (kill it, I mean). :)

 

 

clay

 

 

PS: one more practical advantage of Firewire: it IS easier to eliminate the power leg in Firewire cables than with USB, by pulling the power pin (on FW400), or by using a 4-pin FW connector (on one end) with an adapter.

 

Plus, high quality Firewire interfaces seem less susceptible to differences in cables, and do NOT require incredibly expensive cables for optimum performance, as many have claimed is the case with USB DACs, even async ones. ;)

 

 

 

 

 


I understand that to you the discussion may be useless; you have already made your technology choices and put your money into the products you sell. (Which are quite nice products indeed.)

 

I am quite curious about:

 

"The USB High Speed Class 2 Audio interface is blinding fast... almost 2x faster than Firewire is and the latency is 1/5."

 

This is the 2006 audio standard for USB? While I am not doubting your statement, I do not quite understand how it can be. Would you be so kind as to provide more detail?

 

Also, the latency (or, more accurately, the timing and packet-queuing delays) is the basis of a theory I have been developing about how digital players affect the sound they emit. If you are telling me that is total hogwash, I can accept that. However, can you give me a hint as to what you believe the reason to be? Those differences, here at least, seem to vanish when I use Firewire connections.

 

This tends to support the idea of USB transfer being much more susceptible to outside influence than Firewire.

 

Of course, I may hear different things than other people. Subjective assessments again. If you cannot measure it...

 

I have been working with audio processing since about the late 1970s. (How else can one locate a submarine under hundreds of meters of ocean if not by sound?) ATM transfer turned out to be really important to that technology, so I can see that the technology you are developing is based upon sound principles and is significant.

 

-Paul

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


Paul,

 

I spoke a little early. Having pulled my notes out: the 5x latency figure was for Full Speed USB; it is really more like 3x better for USB Class 2 over Firewire using the OXFW971-type interface.

 

All of these numbers are round trip (2x, meaning out and back again).

 

At 24/96 we have this kind of turnaround:

 

Full Speed: 15ms

Firewire: 7ms

Class 2 HS: 3.8ms

 

At 24/192 we have these numbers:

 

Full Speed: NA

Firewire: 9ms

Class 2 HS: 2.8ms

 

Actually, with Firewire the latency is pretty much a straight line, increasing steadily across the samples-per-second graph.

 

USB, on the other hand, is more dependent on the sample rate: the latency actually decreases as the sample rate goes up.

 

This is due to what are called microframes. Unlike Full Speed, which sends a complete frame every 1ms, the USB 2.0 High Speed protocol breaks these up into smaller chunks at 8 times that rate (one microframe every 125µs). Each microframe can carry as much as 1024 bytes of data.
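To put rough numbers on that (my own back-of-the-envelope sketch, not from Gordon's notes), here is the microframe arithmetic for 24/192 stereo. The 1024-byte and 8-per-millisecond figures come from the USB 2.0 spec; the rest is simple division:

```python
# Back-of-the-envelope check that 24/192 stereo audio fits comfortably
# in USB 2.0 High Speed microframes. Assumes isochronous transfer with
# one packet per 125 us microframe, up to 1024 bytes per packet.

SAMPLE_RATE = 192_000        # samples per second, per channel
CHANNELS = 2
BYTES_PER_SAMPLE = 3         # 24-bit audio
MICROFRAMES_PER_SEC = 8_000  # 1 ms frames split 8 ways -> 125 us each

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
bytes_per_microframe = bytes_per_second / MICROFRAMES_PER_SEC

print(bytes_per_second)      # 1152000
print(bytes_per_microframe)  # 144.0 -> well under the 1024-byte packet limit
```

At 144 bytes per microframe against a 1024-byte ceiling, there is plenty of headroom, which is consistent with the claim that one stream at 24/192 is nowhere near the bus limit.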

 

~~~~~~~~

 

Joe, I am not saying latency can't have an effect in other products. But my work on Streamlength started back in the '80s, when I was developing the first commercial PPP dial-up bridge (corporate wanted to sell it only to Fortune 500 companies; I told them to sell it to anyone, because the internet was going to be huge... which is why I left). Anyway, I designed a lot of modems and such back then, and we worked hard on buffering, because keeping the pipe full meant you had the best specs, and back then every nibble counted.

 

So latency, by definition, is merely the time between sending the frame and the audio actually coming out. We don't care whether that is 2.2ms or 500ms, because we are not relating the time to anything else (unless you are syncing to video). So we don't care when the sample comes out.

 

If we buffer more on the DAC side, the latency will go up. But what we gain is less worry about what you are talking about: the theory that timing differentials in the applications, OS and hardware affect overall playback. That's because we are keeping the pipe full on the DAC side, and this expanded buffering is what makes it work.
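The "keep the pipe full" idea can be sketched as a ring buffer. This is my own illustration, not Gordon's code: the host fills a large FIFO in irregular bursts, the DAC clock drains it one frame per tick, and host-side timing jitter never reaches the converter as long as the buffer neither empties nor overflows. The added latency is simply the fill level divided by the sample rate:

```python
# Minimal sketch of a DAC-side buffer: bursty host writes in, steady
# clocked reads out. Names and sizes are illustrative only.
from collections import deque

class DacBuffer:
    def __init__(self, capacity_frames):
        self.buf = deque()
        self.capacity = capacity_frames

    def host_write(self, frames):
        """Host pushes frames in irregular bursts (OS scheduling, USB timing)."""
        for f in frames:
            if len(self.buf) < self.capacity:
                self.buf.append(f)

    def dac_read(self):
        """DAC pulls exactly one frame per sample-clock tick."""
        return self.buf.popleft() if self.buf else 0  # underrun -> silence

    def latency_ms(self, sample_rate):
        """Added latency is the fill level divided by the sample rate."""
        return 1000.0 * len(self.buf) / sample_rate

buf = DacBuffer(capacity_frames=9600)  # 50 ms worth of frames at 192 kHz
buf.host_write(range(9600))            # one bursty host fill
print(buf.latency_ms(192_000))         # 50.0 -> bigger buffer, more latency
```

This is exactly the trade Gordon describes: a 50ms buffer is an eternity in protocol terms, irrelevant for playback, and it decouples the converter from host timing.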

 

Thanks

Gordon

 


Thanks Gordon!

 

That makes a lot of sense the way you put it. Do I understand what you are saying correctly?

 

For streaming audio to a DAC, it really does not make a significant difference whether you are using Firewire or USB 2.0 (Hi-Speed), so long as the DAC has a big whomping buffer and the interface can keep the buffer full. That implies the DAC must re-clock the audio data.

 

Any fluctuations caused by the extra CPU load imposed by USB are evened out at the buffered DAC, and latency becomes a non-issue in all cases except video sync.

 

If I understood that correctly, then audible differences between USB and Firewire in a system will usually be related more directly to the performance of the DAC than the interface, depending of course on the DAC. A DAC with a small buffer will have its performance affected far more dramatically by an interface choice than one with a large buffer.

 

In the case of a DAC with a small buffer, factors such as CPU load, software players, cables, and so forth have a much higher probability of producing audible differences than if the output is being fed to a DAC with a large buffer. The absolute value of that probability is unknown.

 

And all this applies to audio streamed out to a DAC, not audio being captured for recording purposes, which introduces a whole new set of issues.

 

If that agrees with your thinking, then I think I have some ideas on ways to calculate what the effects would be and test them. Just for fun on my part, as I have limited test gear and it is just a hobby to me.

 

There was some work done along these lines up in Canada somewhere, but I think it was for speaker acoustics. Wonder if any of it applies?

 

-Paul

 

 



“In the case of a DAC with a small buffer, factors such as CPU load, software players, cables, and so forth have a much higher probability of producing audible differences than if the output is being fed to a DAC with a large buffer. The absolute value of that probability is unknown.”

 

If so, then any testing is very specific, say to a particular DAC, and any results and conclusions are only applicable to that setup; they would seem to be meaningless if applied to a different setup.

 

For instance, I have owned and auditioned several DACs. The worst-sounding DAC I own is a Firewire DAC. When I auditioned the Berkeley Alpha against the Weiss Minerva, I preferred the Berkeley. Others here have stated that, comparing the LIO-8 to the Alpha, they prefer the LIO-8.

 

Thus from my vantage point we are back to square one where implementation is everything.

 

 

 


Not at all.

 

If that is indeed what is happening, then the results can be calculated and measurements can be taken to validate or disprove the theory. If the calculated results match reality, then we have a pretty good theory. If not, then we still have learned a lot.

 

That is a far cry indeed from being back to the beginning.

 

 

-Paul

 

 



While it certainly doesn't matter for typical playback, in some situations, latency can be a concern. For example, when recording in the common multitrack style, the musicians performing the overdubs must be in sync with the original tracks laid down beforehand. Otherwise they'd hear a delay in their new performance relative to the original tracks; disorienting to say the least.

 

In a device like the ULN-8, which is often used for multitracking, the latency is deliberately kept very low. In my experience, this has not harmed its performance - at least relative to all the other converters to which I've compared it.

 

***

With regard to one of the questions posed by I_S: pros need gear that works, period. When the players have turned in a great performance, they, the producer, and the label (or whoever is paying the bill) expect that it was captured by the gear the engineer/studio is using.

 

Personally, I select the gear I use first for how it sounds but I also expect it to work every time. Whether it uses FireWire or shoelaces isn't something I'm concerned about; just as whether it uses op amps or discrete components isn't something I worry about - as long as it sounds like what I feed it.

 

I believe one of the main reasons we tend to see FireWire in pro installations is that it can handle multitrack productions at 24/192 without a hiccup. Theory notwithstanding, I am not (yet) aware of any USB devices that can do this. If and when they appear, if their performance is up to snuff, I'm sure we'll be seeing them in some pro installations too.

 

***

As an aside, this thread sure has taken off on a tangent. ;-}

 

Best regards,

Barry

www.soundkeeperrecordings.com

www.barrydiamentaudio.com

 

 


I select audio components on the basis of how well they sound (or how well I think they will sound) and what they cost. I also expect the audio equipment to work every time.

 

When it comes to testing, I believe in changing only one variable at a time. This is usually more expensive and time-consuming, and many may resist it, but it is accurate. That's not to say that you cannot test or measure audio components with many different variables; it just means that your results and conclusions will be dependent on the particular mix of variables.

 

Which is why for me as a listener at the end of the day I only care about whether A sounds better than B and whether A is worth the cost.

 


I believe that USB 2.0 was forced on Apple. In Apple's universe, USB 1.1 was fine for everyone plugging in keyboards, mice, and other low-bandwidth devices. If users wanted some heavy-duty throughput, there was built-in FireWire.

 

If you look at Apple's model line-up, they stubbornly refused to go to USB 2.0 until 2003, when their last Power Mac, the G5, debuted.

 

 

CD

 


Interesting thread. Great effort from Barry; good to know that iTunes and other music players are bit-perfect when they should be (volume at 100%, no EQ, etc.). But I think the REAL test would be to digitize the ANALOG output of a given DAC playing tracks from different software players, and then compare those digitized tracks. That could show that, despite receiving the same numeric data, DACs can produce different analog results with different software players on the connected computer; it's not just a placebo effect. Depending on how controlled the environment is (power supply, etc.), one track could be digitized more than once and then averaged, to avoid distortion from, e.g., fluctuations in AC power.

 

Maybe Barry has some additional DAC handy to feed the Metric Halo's analog inputs? :) As I understand it, the ULN-8 is a very high quality A/D converter as well, ideal for such a test.

 

MBP → M2Tech hiFace → Heed Q-PSU/Dactilus 2 → Heed CanAmp → Sennheiser HD650


Hi I.G.

 

At some point, I'll try capturing the analog output of the ULN-8 (which can easily be brought back into the unit for capture).

 

While I would not tend to suspect a difference, given that what I captured was the output of the apps just prior to the DAC chip, I'll still add this test when testing resumes, i.e., when time avails. The first step would be to do this a few times with a single app and verify the analog output is identical between iterations. (I would not "average", because any difference between two captures from the same app would tell me the test is not useful for comparing apps. But... we'll see.)

 

Best regards,

Barry


 


Barry,

 

Here in Cincy we are a hotbed of Metric Halo use. We are the largest beta site they have, and almost every studio I work with has their equipment in use. I have a ton of respect for BJ and others at MH and have worked with almost every model they have made.

 

But remember, this is high end audio, not recording. In high end audio we are a lot more anal about design. Things like latency have no purpose here. Heck, I don't even use op amps anywhere; instead I use discrete transistors for gain. I don't use feedback anywhere... heck, I am making a guitar amp for the NAMM show next month with no feedback. I made a dual battery-powered microphone preamplifier whose design uses only 6 cross-coupled, 0.1%-matched, 142dB S/N discrete transistors per side. By cross-coupled I mean the matching is also between channels.

 

Anyway, I am getting off the topic at hand. The MH equipment could take a huge step toward high end (if they wanted to) by eliminating the DC-DC converters in the digital section. This alone can have a drastic effect on jitter, and also on S/N in any DAC/ADC chip, not to mention the effect it has on other things it is attached to (mic pres, compressors, etc.) or shares a mains (AC) connection with.

 

Testing for bit-true in the analog domain is really not a good idea; you need to test completely in the digital domain. This was one of the reasons why, for USB 2.0 HS Class 2, my first device is a USB-to-S/PDIF converter. This way I can show it's totally bit-true, and then we can move on to the DACs. I can still measure their internal I2S bus for bit accuracy using my Tek scope, which has a Linux OS and a Tek app that breaks out I2S into samples I can save off to disk.
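The digital-domain check Gordon describes boils down to comparing captured output words against the source file sample-for-sample. A hypothetical sketch (plain lists of 24-bit sample words stand in for an S/PDIF or I2S capture; the helper name is my own):

```python
# Bit-true check: find the first sample that differs between source and
# capture, or report None if the path is bit-transparent. Illustrative only.

def first_mismatch(source_samples, captured_samples):
    """Index of the first differing sample, or None if bit-true."""
    for i, (s, c) in enumerate(zip(source_samples, captured_samples)):
        if s != c:
            return i
    if len(source_samples) != len(captured_samples):
        # A truncated capture fails at the point the shorter stream ends.
        return min(len(source_samples), len(captured_samples))
    return None

source = [0x000001, 0x7FFFFF, 0x800000, 0x123456]  # 24-bit words
bit_true = list(source)
altered = [0x000001, 0x7FFFFF, 0x800001, 0x123456]  # one LSB flipped

print(first_mismatch(source, bit_true))  # None: bit-true
print(first_mismatch(source, altered))   # 2: index of the first bad sample
```

Reporting the first failing index, rather than just pass/fail, helps locate where in the stream a dropout or resampling artifact begins.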

 

~~~~~~

 

In regards to sound and what people like and don't like... well... I think that is pretty simple. If we all listened the same way and had the same system then there would only be one device to choose from.

 

Luckily we all have our own ideas about what is best and so we all like different things.

 

Thanks

Gordon

 


Hi Gordon,

 

Understood regarding latency. That's why I said "in some situations, latency can be a concern." It isn't for audiophile playback, but I use my audio system for my work as well (though it still isn't an issue for me, since I record direct to stereo, with no overdubs).

 

As to the other aspects of MH design, what you suggest may indeed result in even better sound. What I know is that whatever its "flaws", for my ears (and for my work), I have not heard anything that approaches, much less matches or beats, B.J.'s designs, particularly, the ULN-8.

 

Hearing the sound of my mic feeds preserved through an A-D-A conversion (or any recording device) is a first in my experience, and one I rejoice in with my '8 capturing the sound at 24/192.

 

I use the '8 for listening in my own high end setup too. And the recordings I create with it are shown at their best (not surprisingly) with a good high end playback system. Can it be bettered? I would always hope so for any audio device. Right now, until I hear something I find to be more true, I think the '8 has the field all to itself. (Just my perspective of course. We all hear things differently. For my work as well as my listening pleasure, I have to go by what my ears tell me.)

 

Next week, I'll be recording what I hope will be the next release on Soundkeeper: a quartet playing Haitian flavored jazz. If all goes well, it will be released in multiple formats, including 24/192 aif or wav (customer's choice) files on DVD-R.

 

***

Back to the subject at hand: some folks here have requested capturing the analog output of the ULN-8. Won't hurt to try but my initial idea was to capture the digital output of each app, just prior to the DAC stage of the '8. Certainly, I wouldn't attempt to verify bit accuracy to the source file after a trip through analog.

 

Best regards,

Barry


 

 

