
Hi Everyone,


I would like to understand, in detail, the different implementations of data pathways. I currently use Windows, and I can hear a stark difference between music players that use different playback libraries, while most players sharing the same library sound similar or identical. HQPlayer, Winyl and XMPlay sound identical at identical settings; MusicBee is similar but sounds slightly lacking in depth. AIMP sounds very different. Foobar2000 sounds very different and lifeless (it apparently goes through the Windows mixer even in ASIO mode and has measurable distortion). Album Player sounds very different. I can pretty much narrow it down to either unintended processing happening (as in foobar2000) or the way the instructions are laid out. The science of getting jitter-free data out of a CPU is a topic in itself, and I would like to know if there is any kernel/compiler available for Linux/BSD that tries to remove the effect of speculative-execution jitter. From what I know, the current approaches are either to insert tons of LFENCE instructions or to modify Clang so that the instructions are arranged such that speculative/out-of-order execution and other enhancements always return a predictable sequence (a miss). I have tried a few such programs, and while I can hear differences (with the USB DAC and Supra cable I have), I am not sure if it is all improvement or if there is some skipping going on.


I am planning to upgrade my DAC. I am thinking of the DDDAC but haven't settled on a single DAC yet. Without a doubt it will be DIY, and my intention is to use custom-coded upsamplers/dithers whose output I would pre-process and store before sending the data at 192 kHz or 176.4 kHz, 24 or 32 bit. I am now confused about the available ways to transmit the information from the processor to the DAC. I will split the question into two parts.


My first question is: what protocols are available for communicating data to the DAC? I currently use USB. As I understand it, the data is fetched, packetized and handed to the USB host controller via DMA; from there it goes to a USB receiver in the DAC, which also has a buffer and converts the stream into clocked I2S data fed to the DAC chip. USB transfers are timed (1 ms frames at full speed, 125 µs microframes at high speed). Since 44,100 samples per second does not divide evenly into those frames, synchronous/adaptive isochronous mode alternates packet sizes (for example 44 and 45 samples per full-speed frame, or 5 and 6 per high-speed microframe), so no samples are actually skipped. Asynchronous mode instead lets the DAC's own clock pace the transfer via a feedback endpoint, so there is no interpolation on the host side. Isochronous packets do carry a CRC, so errors can be detected, but they are never retransmitted, and who knows what other issues could creep in (I hear changes between cables, so I don't believe all is well with USB).

I went around looking at other protocols: S/PDIF, TOSLINK, etc. S/PDIF uses biphase-mark coding (a Manchester variant), and the receiver has to recover the clock from the data stream. That makes very good engineering necessary in both the transmitter and the receiver, and I'm unsure they will be that issue-free even then. The standard also looks variable: general certification only goes up to 48 kHz/24 bit, though there are implementations that go up to 192 kHz/24 bit. Each subframe carries a single parity bit, so there is minimal error detection and no correction. TOSLINK is the same format in optical form, but over plastic fiber rather than the high-quality glass fiber used in telecom, which is apparently why it varies with cables. All of these seem to have a single serial data line. Is there any dual-channel data alternative? (Ethernet seems to support "dual channel", or do they just mean full duplex?) Another interface I saw was AES3, which turns out to be the professional standard that S/PDIF is the consumer variant of (balanced 110 Ω over XLR rather than 75 Ω coax), not the other way around. Ethernet-based audio transfer seems to be what network streamers use, with custom protocols layered on top of Ethernet (UPnP/DLNA, RAAT, Dante and the like, if I'm right), and it seems a good way to go about things before handing off to the receiver.
Direct I2S communication over RJ45 or HDMI connectors seems to be available as well, but then again I2S is a blank slate meant mainly for inter-IC data pathways on a single board, and I'm unsure how the protocol holds up over a long cable. I would like more enlightenment on this topic, plus subjective experiences of what works best at the moment: reliable brands of receivers and reclockers, the available chips, DIY options, etc.


My second question is: how is the data taken out of the CPU/memory? For USB, the CPU communicates with the USB host controller, which fills its buffer via DMA before sending. Another option I've seen, common on single-board computers, is I2S exposed as an I/O from the board itself. As usual, since a general mass-production board may not have good clock timing (besides, they typically run on switching power supplies), there are units that take in this I2S stream, buffer it, and regenerate it as I2S, S/PDIF or USB (like the Allo Kali and DigiOne). I have seldom come across cards that talk to the CPU over PCIe and output I2S directly (except maybe Pink Faun). Why is that? What other options are available for taking data out of the CPU with as little jitter as possible (assuming the software makes sure the CPU itself is not introducing much jitter)? Are there any dedicated computer architectures meant to handle music streaming well (ignoring real-time microcontrollers running DOS-like operating systems)?


I'm sorry if this sounded too pedantic. I already have a setup I enjoy even over USB, and I'm only planning to get a better experience. I just want to learn about all the options before diving in. As an EE student, learning these things is a lot of fun, and implementing them even more so. That's also why I'm going the DIY DAC route and doing my own custom upsampling: just to learn.

Edited by manueljenkin
slight grammar corrections
