
Getting my head round things ...



Forgive me if this post is completely superfluous or misleading to others ... but I've spent all weekend trying to get my head round how digital audio (and specifically the S/PDIF interface) works. I think I've got it sorted now, but I might still be on the wrong track - hopefully someone can tell me if my thoughts are correct.

 

Assuming a CD quality track, the "sound" is stored at 44,100 samples per second (44.1kHz), with each sample being a 16-bit number - one of 65,536 possible values (on CD these are signed, running from -32,768 to +32,767, with 0 being silence). On a CD these are represented by pits and lands which are read by a laser.
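
A quick check of those numbers in Python, since it's easy to be off by one here:

```python
BITS = 16
print(2 ** BITS)                                       # 65536 distinct levels
print(-(2 ** (BITS - 1)), "to", 2 ** (BITS - 1) - 1)   # signed range: -32768 to 32767
```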

 

When we get to the S/PDIF interface, these values have to be sent serially down a single cable. As we have only a single conductor, all 16 bits can't be sent at once, so we have to send them one at a time. This increases the frequency of the bits on the cable to 44,100 x 16 - or 705,600 bits sent each second per channel (double that for stereo). Is this correct? The specification of the S/PDIF interface indicates that the signal is 0.5V p-p. I am assuming therefore that if you measured the output of an S/PDIF interface it would alternate (very rapidly) between 0V and 0.5V as the bits are sent down the cable. At 705,600 bits per second, each bit would last about 1.42µs (1/705,600 of a second). If you measured the stream I'm assuming it would look like a barcode - each bit occupying the same length of time on the cable with no gap in between, appearing as long and short pulses.
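
And a quick sanity check of that serial-rate arithmetic, assuming the simplified picture above (raw bits only, no framing or line-code overhead yet):

```python
# Simplified arithmetic: 16 raw bits per sample, no framing overhead.
SAMPLE_RATE = 44_100          # samples per second
BITS_PER_SAMPLE = 16

bits_per_second = SAMPLE_RATE * BITS_PER_SAMPLE   # per channel
bit_duration = 1 / bits_per_second                # seconds per bit

print(f"{bits_per_second:,} bits/s per channel")          # 705,600
print(f"{bits_per_second * 2:,} bits/s for stereo")       # 1,411,200
print(f"{bit_duration * 1e6:.2f} microseconds per bit")   # ~1.42 µs
```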

 

A 16-bit sample would (I assume) have to have additional bits added as framing data, etc., but these are presumably well defined in the IEC 60958-3 standard.
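
From what I've read of IEC 60958, each 16-bit sample actually travels in a 32-slot "subframe". A rough sketch of the packing as I understand it (the 4-slot preamble is a special sync pattern, so it's just left as zeros here):

```python
def pack_subframe(sample: int) -> list[int]:
    """Pack one 16-bit audio sample into a 32-slot S/PDIF subframe.

    Slot layout per IEC 60958, as far as I can tell:
      0-3   preamble (special sync pattern - left as zeros in this sketch)
      4-7   auxiliary bits (unused for 16-bit audio)
      8-27  audio word, sent LSB first (16-bit audio occupies slots 12-27)
      28    validity flag (0 = sample is fit for conversion)
      29    user data bit
      30    channel status bit
      31    parity, making slots 4-31 contain an even number of ones
    """
    slots = [0] * 32
    word = sample & 0xFFFF                  # 16-bit two's-complement pattern
    for i in range(16):
        slots[12 + i] = (word >> i) & 1     # LSB first; MSB lands in slot 27
    slots[31] = sum(slots[4:31]) % 2        # even parity over slots 4-30
    return slots

print(pack_subframe(12345))
```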

 

Jitter, therefore, is the phenomenon where the edges of the bits being sent don't line up exactly with the pulses of the clock controlling the receiving of the signal - the receiver clocks in the 5th bit (for example) while the 4th is still being sent. Is this correct? This can happen for a variety of reasons, but a big one is that the two clocks (sending and receiving) don't agree exactly on what 44.1kHz actually is. This is where a master clock comes in, as then both the sending and receiving devices are using the same clock.
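
One way to picture the clock-mismatch part: suppose the receiver's idea of the bit rate is off by just 100 parts per million (a made-up figure for illustration). The disagreement grows with every bit until something resynchronises:

```python
# Two free-running clocks that disagree by 100 ppm (illustrative figure).
sender_period = 1 / 705_600                       # seconds per bit at the sender
receiver_period = sender_period * (1 + 100e-6)    # receiver runs 100 ppm slow

for bits in (1, 100, 10_000):
    drift = bits * (receiver_period - sender_period)
    print(f"after {bits:>6} bits the clocks disagree by {drift * 1e9:8.1f} ns")

# After ~10,000 bits the drift is about 1.4 µs - a whole bit period - so the
# receiver must keep locking to edges in the stream (or share a master clock).
```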

 

Now in computer audio, we have to read the PCM data from a file, and then internally to the computer this has to be reformed so that it can be sent via the S/PDIF interface. I'm thinking this is a two-stage process: first the software player and operating system combined have to convert the PCM data to something that can be fed to the sound card (note this may be built into the computer's motherboard, may be a separate PCI card, or may be a USB-connected device). The sound card then, using a combination of software and hardware, needs to convert this to the S/PDIF standard that can be understood by all other devices.
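
The first stage, at least, is easy to poke at yourself: the PCM inside a WAV file really is just the raw sample values. A few lines with Python's standard wave module show this (the filename is only an example):

```python
import struct
import wave

# Peek at the raw PCM inside a WAV file ("track.wav" is just an example name).
with wave.open("track.wav", "rb") as wav:
    print(wav.getframerate(), "Hz /", wav.getsampwidth() * 8, "bit /",
          wav.getnchannels(), "channel(s)")
    frames = wav.readframes(wav.getnframes())    # raw little-endian PCM bytes

# For 16-bit stereo, each frame is 4 bytes: left sample then right sample,
# both signed 16-bit little-endian - exactly the values the player hands on.
left, right = struct.unpack("<hh", frames[:4])
print("first sample pair:", left, right)
```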

 

I think people argue that issues can arise at this stage even while the data is kept in its bit-perfect state. Is this correct?

 

Sorry for such a long post ... I'm just trying to understand the basics of how digital audio is transferred from one system to another. I'm not sure if I've over-complicated it, or if my explanation to myself has simplified it too much. I've been reading lots of different things on the web, each explaining the same thing in a different way, and I hope I've now got it straight. I like to think I'm reasonably well informed, while not an expert in digital audio from an engineering point of view.

 

Thanks

Eloise

 


---

...in my opinion / experience...

While I agree "Everything may matter", working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

Keep your mind open... but mind your brain doesn't fall out.


Would love a reply to this post from an expert who knows what the exact chain of events is between bits on a disk and a signal entering a DAC. Great post - hopefully it will allow us to get some good info.

 

After two recent comments I am particularly interested in what is done to a PCM stream encapsulated in a WAV or AIFF - whether it is processed at all - and also in how these are packaged into the S/PDIF transport.

 


Just one thing missing afaict: the data stream actually has to be twice what you suggest, because it is BMC encoded to avoid long strings of 0s or 1s. The BMC encoding doubles the number of bits but makes the transitions easier to detect, if I understand the standard. Btw, although this mixes clock and PCM data in the same stream, it is still easy to remove the jitter. Because the added transition bits are in a predictable place, it is trivial to remove them and to re-clock the entire stream according to the bit rate specified in the metadata. This is why, contrary to some expert views, one can design a DAC that rejects jitter coming in on the S/PDIF channel 100%.
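
To make that concrete, here is a rough Python sketch of biphase-mark coding as I understand it: every data bit becomes two half-bit "cells", the line always inverts at a bit boundary, and it inverts again mid-bit only for a 1 - which is why the cell rate is double the data rate, and why there's a transition to lock onto in every single bit:

```python
def bmc_encode(bits, level=0):
    """Biphase-mark encode a bit sequence into half-bit line levels.

    The line level always toggles at the start of a bit; it toggles
    again at mid-bit only when the data bit is 1. Result: at least one
    transition per bit, and twice as many cells as data bits.
    """
    cells = []
    for bit in bits:
        level ^= 1            # mandatory transition at the bit boundary
        cells.append(level)
        if bit:
            level ^= 1        # extra mid-bit transition encodes a 1
        cells.append(level)
    return cells

print(bmc_encode([1, 0, 1, 1, 0]))   # 10 cells for 5 data bits
```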

 

- John.

 


Audio_ELF:

 

Here is a good article on the subject: http://www.stereophile.com/features/396bits/index.html

 

The main issue with S/PDIF & its professional equivalent AES/EBU is that the master clock is at the source (sound card) and the receiving DAC must recover and sync to this clock. Doing this with low jitter is difficult, as the above article describes. The authors address the question of whether the interface is fundamentally flawed.
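
To illustrate the "recover and sync" idea, here is a toy sketch (not how Weiss, Berkeley, or anyone in particular does it): the receiver predicts when the next edge should arrive and nudges its own timebase by a small fraction of the observed error, so nanosecond-scale wiggles on individual edges barely move the recovered clock, while the long-term rate is still tracked:

```python
import random

# Toy illustration only - real receivers use PLLs with carefully chosen dynamics.
nominal = 1 / 44_100            # ideal frame period in seconds
gain = 1e-3                     # small gain = strong jitter filtering

phase, period = 0.0, nominal * 1.0001   # receiver starts 100 ppm fast on purpose
for n in range(1, 100_001):
    edge = n * nominal + random.gauss(0.0, 5e-9)  # true edge + 5 ns RMS jitter
    phase += period                               # predicted arrival of this edge
    error = edge - phase
    phase += gain * error                         # correct phase a little...
    period += gain * gain * error                 # ...and frequency even less

print(f"recovered period is off by {(period / nominal - 1) * 1e6:+.2f} ppm")
```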

 

Recent high end DACs, e.g., Weiss and Berkeley Audio, do a very good job of overcoming this difficulty. See for example some of the white papers and manuals on the Weiss web site.

 

My own choice of the Weiss DAC2 was strongly influenced by its FireWire mode, in which the master clock is in the DAC and the computer is slaved to it. But it also sounds great with S/PDIF input from a Meridian G08 CD player.

 

RayW

 

