
Optical drive



Often I read that the software used for ripping is more important than the drive.

I will skip the comments about 100% accuracy and the claims that, after ripping, it sounds exactly like the master.

In this series of articles about optical drives I will start with the brushless DC spindle motor and the jitter created by offset drift of the Hall-effect sensors. The impact started to be tackled around 2000, when some solutions were proposed: http://www.dtic.mil/dtic/tr/fulltext/u2/p011838.pdf . A few years later the research was still ongoing: http://www.chalcogen.ro/883_Paun.pdf

The power supply does have an impact on this phenomenon; this document from Honeywell may help you understand it: http://sensing.honeywell.com/index.php?ci_id=47847

So when you are ripping, jitter can be created, depending on the quality of the spindle motor your drive uses, the quality of its power supply, and the compensation that is put in place.
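As an illustration of this mechanism, here is a minimal sketch that models a Hall-sensor-induced speed ripple as a small sinusoidal modulation of the linear velocity and computes the resulting EFM channel-bit timing error. The ripple depth and rotation frequency are illustrative assumptions, not measurements of any real drive:

```python
import math

# Assumed figures for illustration only: a constant offset drift in a
# Hall-effect sensor shifts the commutation instants of the BLDC spindle
# motor, producing a periodic speed ripple.
NOMINAL_VELOCITY = 1.3        # m/s (CD spec allows 1.2-1.4 m/s)
RIPPLE = 0.001                # 0.1 % peak speed ripple (assumed)
SPIN_HZ = 8.0                 # spindle rotation frequency (varies across the disc)

def channel_bit_time(t, t_bit=1 / 4.3218e6):
    """Effective duration of one EFM channel bit at time t, in seconds."""
    v = NOMINAL_VELOCITY * (1 + RIPPLE * math.sin(2 * math.pi * SPIN_HZ * t))
    return t_bit * NOMINAL_VELOCITY / v

# Peak-to-peak timing error per channel bit over one second, in picoseconds
times = [channel_bit_time(t / 1000) for t in range(1000)]
jitter_pp = (max(times) - min(times)) * 1e12
```

Even a 0.1 % speed ripple shifts each ~231 ns channel bit by several hundred picoseconds peak-to-peak, which is the kind of read-out jitter the compensation circuits have to absorb.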

 

If you find this of interest, the next article will be about the optics and the laser diode, and how they can affect the sample amplitude.

 

 


 



The Objective Lens

 

« Presently disc drives use lenses for focusing the laser beam to a diffraction-limited spot. These lenses consist of two aspheric surfaces, have fairly large numerical apertures, and are essentially free from aberrations. The numerical aperture of a lens is defined as NA = sin θ, where θ is the half-angle subtended by the focused cone of light at its apex. A 0.5 NA lens, for example, will have a focused cone whose full angle is 60°. The diameter of the focused spot is of the order of λ₀/NA, where λ₀ is the vacuum wavelength of the laser beam. It is thus clear that higher numerical apertures are desirable if smaller spots (and therefore higher recording densities) are to be attained. Unfortunately, the depth of focus of an objective lens is proportional to λ₀/NA², which means that the higher the NA, the smaller will be the depth of focus. It thus becomes difficult to work with high-NA lenses and maintain focus with the desired accuracy in an optical disc drive.

But a small depth of focus is not the main reason why optical drives operate at moderate numerical apertures. The more important reason has to do with the fact that the laser beam is almost invariably focused onto the storage medium through the disc substrate. The disc substrate, being a slab of plastic, has a thickness of 1.2 mm. When a beam of light is focused through such a substrate, it will develop an aberration, known as coma, as soon as the substrate becomes tilted relative to the optical axis of the objective lens. Even a 1° tilt produces unacceptably large values of coma in practice. The magnitude of coma is proportional to NA³, and therefore higher-NA lenses exhibit more sensitivity to disc tilt. Another aberration, caused by the variability of the substrate's thickness from disc to disc, is spherical aberration. This aberration, which scales with the fourth power of NA, is another limiting factor for the numerical aperture. This led to the development of servo mechanisms whereby the tilt and thickness variations of the disc are automatically sensed and corrected, using Gaussian focus as a reference point. »
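The scaling relations in the quoted passage can be sketched numerically. The 780 nm wavelength is the typical CD laser figure; the NA values are illustrative, and the proportionalities are order-of-magnitude only (the exact constants depend on the beam profile):

```python
import math

wavelength = 780e-9          # vacuum wavelength of a typical CD laser diode (m)

def spot_diameter(na):
    return wavelength / na            # ~ lambda0 / NA

def depth_of_focus(na):
    return wavelength / na ** 2       # ~ lambda0 / NA^2

# Aberration sensitivities grow much faster than the resolution gain:
# coma ~ NA^3 (disc tilt), spherical aberration ~ NA^4 (thickness error).
for na in (0.45, 0.60, 0.85):
    print(f"NA={na:.2f}  spot={spot_diameter(na) * 1e6:.2f} um  "
          f"DOF={depth_of_focus(na) * 1e6:.2f} um  "
          f"coma x{(na / 0.45) ** 3:.1f}  spherical x{(na / 0.45) ** 4:.1f}")
```

The table makes the quoted trade-off visible: raising NA shrinks the spot (good for density), but the depth of focus collapses as 1/NA² while tilt and thickness sensitivity explode as NA³ and NA⁴.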

This theory about solving spherical aberrations is questionable, because the limits of the scaling laws for correction are easily reached:

http://homepage.tudelft.nl/99s1c/pdfs%20mypapers/ao_2005_1.pdf

 

Spherical aberrations can create small phase shifts that have a direct impact on sample amplitude.

 

Next article: laser diodes.

 

 


 



Laser diode:

A general characteristic of laser diodes is their uniqueness. Although laser diodes are usually manufactured in great numbers in one process, every laser diode is slightly different from another unit of the same batch. As a consequence, emission properties such as central wavelength, shortest pulse width, or achievable output power vary slightly from diode to diode.

A CD drive uses a pulsed laser, a system that emits light in the form of optical pulses rather than a continuous wave (CW). There are numerous methods to achieve laser pulsing, but the end result follows the same principles: a pulsed laser periodically emits pulses of energy of ultra-short duration. The duration, or pulse width, for laser diodes can range from nanoseconds to picoseconds. The average power of a pulsed laser is defined by the amount of energy released over the period of the cycle, which equals energy × frequency.
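The average-power relation can be stated in two lines; the pulse energy and repetition rate below are placeholder values for illustration, not figures for any particular drive's laser diode:

```python
# P_avg = pulse energy x repetition frequency (assumed illustrative values)
pulse_energy = 2e-12      # joules per pulse
rep_frequency = 500e6     # pulses per second

average_power = pulse_energy * rep_frequency   # watts; 2 pJ x 500 MHz = 1 mW
```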

Which brings us back to timing (jitter) (sorry Roch :)) and power (you are often right Alex :)).

 

 

 

 

Noise and Jitter: http://www.informit.com/articles/article.aspx?p=1087655&seqNum=2

 

 

Info about CDs and LEDs: http://www.ld-didactic.de/documents/de-DE/EXP/PHO/4747124EN.pdf

 


Hi alfe

 

A while back I followed up on your recommendation of a Plextor LB950UE as a good quality option for occasional CD replay.

 

Via the Plextor I really quite like CD replay via HQP or my Bryston BDP and wondered if you felt the Plextor was about as good as it gets or if there is a current external drive out there to better it for ultimate SQ?

 

Thanks again

 

The Plextor is a good product: add a good LPS, use it as a standalone drive on a flat surface, and you are done.

 



Conclusion:

Even if all the ripping software were equal, the drives are not.

You get better results with single-wavelength drives; avoid high-aperture ones.

Regulated power is a must-have.

Avoid slim drives and slot-loading ones.

Avoid heat; a pause between two CDs when you rip is welcome.

And don't forget: optical drives are not designed for audiophiles :)

 


When you refer to "better result", are you referring to better sound quality when playing a CD, better ripping performance (less errors), or rips that sound better?

 

See you coming, Tom :) If we are heading to the "bits are bits" theory, then they had better be on time.

PCM audio has two components: bits, which give the signal value, and step timing, which gives the sample rate. For SQ you can't have one without the other, no matter whether it's playback or rips.
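A quick sketch of that point: a PCM stream stores only the sample values, while the sample instants come from the playback clock, so identical bits reconstructed with an imperfect clock still yield a different analog waveform. The tone frequency and jitter figure below are illustrative:

```python
import math

FS = 44100.0              # CD sample rate
FREQ = 10000.0            # 10 kHz test tone (illustrative)

def analog(t):
    """The analog waveform the samples are meant to represent."""
    return math.sin(2 * math.pi * FREQ * t)

n = 7                                   # an arbitrary sample index
ideal_value = analog(n / FS)            # the value the bits encode
jitter = 2e-9                           # 2 ns clock error at playback (assumed)
played_value = analog(n / FS + jitter)  # what a jittered clock reproduces

amplitude_error = abs(played_value - ideal_value)
```

The error is bounded by 2π·f·Δt, so it grows with both signal frequency and clock jitter, even though the stored bits never changed.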

 



Talking about optical drives leads us to the ripping process of audio CDs, which were designed to be played continuously, not to be read as individual sectors.

This series of articles, called "what the hell is happening to the bits before they get static in an HDD" :), may help you understand the ripping process.

 

CD-ROM sectors are 2352 bytes long, divided into 2048 bytes of data plus 304 bytes of synchronisation, header, and additional ECC information that are used to control positioning and error-free reads. For audio, all 2352 bytes are used for audio data, and the only way to address an audio sector is to use the Q subcode information.
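The sector arithmetic above can be checked directly, and it also shows where the CD sample rate comes from:

```python
# CD sector arithmetic (Red Book / ECMA figures)
SECTOR_BYTES = 2352
CDROM_DATA = 2048
CDROM_OVERHEAD = SECTOR_BYTES - CDROM_DATA        # 304 bytes sync/header/ECC

# For CD-DA the full 2352 bytes are 16-bit stereo samples
BYTES_PER_SAMPLE = 2 * 2                          # 16 bits x 2 channels
samples_per_sector = SECTOR_BYTES // BYTES_PER_SAMPLE   # 588 stereo samples

SECTORS_PER_SECOND = 75                           # Red Book playback rate
samples_per_second = samples_per_sector * SECTORS_PER_SECOND   # 44100 Hz
```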

A good understanding of the encoding of an audio CD may help: Chip's CD Media Resource Center: CD-DA (Digital Audio), page 9

http://www.computeraudiophile.com/f8-general-forum/encoding-and-decoding-cd-22848/

https://en.wikipedia.org/wiki/Compact_Disc_subcode

 

Timing again:)

 


Hi Alfe,

 

Thanks for all your detailed explanations (after & before)!

 

Is Nero at 1x good for ripping CDs to hard disk too? (I own a CD-only Plextor with a good LPSU.)

 

Thanks,

 

Roch

 

Hi Roch,

 

I personally use Nero for ripping my discs; I always run a CRC32 check before starting the rip and then compare at the end.

The speed depends on the condition of your disc, but if you want to go the safe route, 1x will do.

 

cheers,

Al

 



From this document: http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-130.pdf

A section or a sector, call it what you want :), is 98 F3 frames.

An F1 frame is 24 bytes of audio data at the input of the CIRC encoder.

An F2 frame is 32 bytes at the output of the CIRC encoder.

An F3 frame is an F2 frame plus a control byte, input to the 8-to-14 (EFM) encoder.

For error-correction purposes, the 24 audio bytes in a frame do not represent consecutive audio samples (p. 35); in fact, the longest delay between input and output of the CIRC encoder is 108 F1 frame times.

The Q subcode determines the sector identity, but the audio data are spread over 108 frames instead of the sector's 98, and there is no clear definition of which audio samples belong to the time frame covered by the sector determined by the absolute time of the Q subcode.
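The frame arithmetic described above can be sketched in a few lines (figures from ECMA-130):

```python
# Frame sizes from ECMA-130
F1_BYTES = 24        # audio bytes into the CIRC encoder
F2_BYTES = 32        # after CIRC adds 8 parity bytes
F3_BYTES = 33        # F2 plus one control (subcode) byte
FRAMES_PER_SECTOR = 98

audio_bytes_per_sector = FRAMES_PER_SECTOR * F1_BYTES    # 2352

# CIRC interleaving: the longest encoder-to-output delay is 108 F1 frame
# times, so the samples nominally belonging to one 98-frame sector are
# physically spread beyond that sector on the disc.
CIRC_MAX_DELAY_FRAMES = 108
spread_beyond_sector = CIRC_MAX_DELAY_FRAMES - FRAMES_PER_SECTOR  # 10 frames
```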

I will explain later the drive manufacturers' interpretation of this degree of freedom.

Before that, next up is how to store jitter :)

 



The run-lengths represented by the pits and lands on the disc have discrete values, determined by the EFM modulation. The nominal run-lengths of pits and lands are 3T, 4T, 5T, ..., 11T, where T = 1/f(EFM clock) = 1/4.3218 MHz = 231 ns.

The actual long-term average length of a pit or land of run-length nT is called the EFFECT LENGTH.
The difference between the momentary length and the long-term average length of a pit or land of run-length nT is called the JITTER.

3T is represented by 001, ..., 11T by 00000000001.

For example, on long-playing discs the linear velocity reaches its lower limit, which means that the pits and lands become shorter (higher density). The pits and lands representing the higher EFM frequencies (I3) (pp. 8-13 of the ECMA doc) are therefore closer to the optical cut-off frequency, resulting in smaller amplitudes in the read-out signal.

Offset drift of the spindle motor may affect the bit size: the higher the jitter, the more difficult it is for the reader to tell 3T from 4T, or 4T from 5T..., which leads to a variation in amplitude of the original signal.
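A sketch of the run-length timing, using the EFM clock given above and the spec's minimum linear velocity:

```python
# Run-length timing: pits and lands last 3T..11T, where T is one
# channel-bit period at the 4.3218 MHz EFM clock.
EFM_CLOCK_HZ = 4.3218e6
T = 1 / EFM_CLOCK_HZ                      # ~231 ns

run_lengths_ns = {n: n * T * 1e9 for n in range(3, 12)}
# 3T ~ 694 ns ... 11T ~ 2545 ns

# At the minimum linear velocity (1.2 m/s, the "long playing" case above)
# the shortest pit is physically smallest, so the high EFM frequencies
# sit closest to the optical cut-off:
VELOCITY = 1.2                             # m/s
pit_3T_um = 3 * T * VELOCITY * 1e6         # ~0.83 micrometres
```

With only ~231 ns separating one run-length from the next, a few hundred picoseconds of spindle-induced jitter already eats a visible fraction of the decision margin between 3T and 4T.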

 


 

 

Next: little thumb and checksum :)

 



3 statements:

 

- All forms of error checking involve adding something to the digital pattern.

 

http://ccsun.nchu.edu.tw/~imtech/course/ods/Chapter%203%20-%20Error%20Correction.pdf

 

- CRCs are based on polynomial arithmetic; for example, CRC32 will detect all errors that span fewer than 32 contiguous bits within a packet, and all 2-bit errors fewer than 2048 bits apart (wonder why CD-ROM data is 2048 bytes, maybe easy computation :))

Undetected errors when data are spread over different blocks are explained in this document: ftp://ftp.cis.upenn.edu/pub/mbgreen/papers/ton98.pdf

 

- CRC performance is independent of data values; it's only the pattern of error bits that matters: http://users.ece.cmu.edu/~koopman/pubs/KoopmanCRCWebinar9May2012.pdf
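This data-independence follows from the linearity of CRC and is easy to demonstrate with Python's zlib: for two different payloads of equal length, XOR-ing the CRCs before and after applying the same error pattern gives the same signature, regardless of the underlying data:

```python
import zlib

def crc_signature(data: bytes, pattern: bytes) -> int:
    """XOR of the CRCs before/after corrupting `data` with `pattern`.

    By CRC linearity this depends only on the error pattern and the
    message length, never on the data values themselves.
    """
    corrupted = bytes(d ^ p for d, p in zip(data, pattern))
    return zlib.crc32(data) ^ zlib.crc32(corrupted)

pattern = bytes([0b10000001]) + bytes(11)     # one 12-byte error pattern
sig_a = crc_signature(b"audio sample", pattern)
sig_b = crc_signature(b"DIFFERENT!!!", pattern)
# sig_a == sig_b: whether an error is caught depends only on the pattern
```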

 

Something to digest before starting (respectfully, please; I hate being called dumb) this controversial subject :)

 

Next: the drive manufacturers' interpretation.

 


+1

 

BTW, this thread might be more active if you could explain things in simpler terms for idiots like myself.

 

For example, you seem to support Alex's observation that identical data stored on a hard drive can somehow sound different, but I can't see the explanation based on the references you provided. Can you help us connect the dots?

 

I would like to believe that Alex is right, but so far no one has come up with a plausible explanation.

 

Tom, I never said that identical data may sound different; I said identical checksums may sound different.

In the statements I also said that CRC is independent of data values.

I also explained in "how to store jitter" the effect on amplitude, so the dots are easy to connect.

 


Reading all this information, it's clear that there are a lot of technical issues influencing data extraction from audio CDs. The main question I'm not getting an actual answer to is what the main cause for different-sounding rips could be; rephrased: what the cause(s) could be if there were to exist different-sounding rips from the same source material.

 

- Unreadable blocks and/or read errors causing different error corrections?

- Am I correct in assuming "jitter" only has an audible effect on real-time playback? I understand burned CDs of varying burn quality cause varying amounts of jitter on read; can this have any effect on the bits being written to the hard drive?

 

You are connecting the dots :) There is more info coming, and we can draw the evident conclusion all together.

 

Thanks to you, to Jud, and to Tom; at least now I know that I'm not writing for myself :)

 


I guess this is the part that I am missing.

 

For example, Alex provided me with two files that he insists sound different. I calculated the checksum of both files and did a byte-by-byte comparison, and as far as I can tell they are identical. Are you saying that they may be different?

 

I understand; you are talking bits and I'm talking sample values. OK, take an example: for a 3T pit you may have a smaller amplitude in the read-out signal due to the length effect, but you still represent it by 001.

 

*I have also checked the two files from Alex and they were exactly the same, but his subjective statement pushed me to do some research, and I thank him for that.

 


This matches my understanding. The signal may have a lower amplitude but there is no mechanism for capturing this lower amplitude in the digital data that is written to the HDD when the CD is ripped.

 

It is also my understanding that there can only be two types of rips: ones that match the binary data on the CD (correct ones) and ones that don't (incorrect ones).

 

Alex has observed rips that seem to fall between the two. They are correct and identical to each other at the binary level but the quality varies according to the power supplied to the drive during the ripping process.

 

I don't believe that such rips can exist. What do you think?

 

We didn't finish connecting the dots. 3T is the smallest value, but if you have a smaller amplitude in the read-out signal, what will happen to a 4T? You will read it as a 3T, so it is represented by 001 instead of 0001.

When we go further into the explanation, with sectors, data chunks, block read errors, and splicing in checksums, you will understand. That's why I said it needs a demonstration.

 


If the amplitude drops below the threshold level and the data changes from 0001 to 001, this will be a read error and, if this error is uncorrected, the resulting rip will have a different checksum than a rip in which the data didn't change from 0001 to 001, right?

 

In theory yes, but real data are different animals.

 


I understand that it is possible for different files to have the same checksum, but the chances of this occurring in this situation are so small that I think we can ignore it as a possibility. Also, as I mentioned above, I compared the data at the byte level and found it to be identical.

 

Alex has said that you have proposed phase shift and the Hall effect as reasons why identical data files may sound different. I am having a hard time with this, as I believe these factors are only applicable when data is being read from the CD. Once the CD has been ripped, there is only binary data and these factors don't come into play. Is my understanding incorrect?

 

No, my message was: phase shift = same checksum = different sound.

 


Thanks for the clarification. I look forward to hearing more when you are ready.

 

All the best,

 

Tom

 

Tom,

 

Just for clarification, I never said and will never say that identical bits sound different; I just tried to find out why Alex and others can hear a difference.

I could see only two possibilities: either the data are spread over different packets, which may affect the bit error rate (BER) or make the jitter higher during the reading process, or the non-uniformity of the checksum is the culprit.

I had to start with the behaviour of the optical drive, to try to explain what is happening first and whether the two possibilities can be covered.

So please, let's forget about the "bits are bits" war, and let's continue to try to find what is behind the mathematical assumptions.

And anyhow, if at the end you are not convinced by my demonstration, I'm sure that in any case you are learning stuff about optical drives :)

 


That works if the drive (or a drive-dependent variable like offset) is part of what goes into the checksum.

 

Hi Jud,

 

I will explain the drive offset in the article on the manufacturers' interpretation, but some of you are in a hurry to see in which direction I'm going.

 

 

First, this may help: Differences Between CRC And Checksum

 

Second, the reason why ripping software uses CRC32 is that it will detect the majority of errors by catching:

Any 1-bit error

Any two adjacent 1-bit errors

Any odd number of 1-bit errors

Any burst of errors with a length of 32 bits or less

The majority is not all: what if, with overlapping, we have a burst longer than 32 bits?
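Collisions are also easy to construct, which shows that an identical CRC-32 does not by itself prove identical data. One well-known property of the reflected CRC-32 used by Python's zlib is that appending a message's own CRC in little-endian order drives the overall CRC to a fixed residue, so any two messages padded this way share a checksum while differing in every other respect:

```python
import zlib

def with_crc_appended(msg: bytes) -> bytes:
    """Append the message's own CRC-32 (little-endian) to the message."""
    return msg + zlib.crc32(msg).to_bytes(4, "little")

a = with_crc_appended(b"first audio chunk")
b = with_crc_appended(b"something entirely different......")

# a != b, yet zlib.crc32(a) == zlib.crc32(b): the CRC of any message
# with its own CRC appended collapses to the same fixed residue.
```

This is a contrived construction, not something a ripper would produce by accident, but it illustrates that "same CRC" is a much weaker statement than "same bytes".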

 

That's what the demonstration is about: accuracy...

But getting there needs a lot of things to be explained before jumping to the conclusion that different bits may sound different :)

 



Let's go back to p. 19 of the ECMA document: the tolerance on nominal time is ±1 s, which represents 7350 frames (75 sectors/s × 98 frames).

Now add that, to maximise focus, an offset is applied:

http://www.ieeecss.org/CSM/library/2008/june08/11-June08ApplicationsOfControl.pdf

(Tom, from this document you will understand what I meant by phase shift.)

And usually the drive manufacturers will use the most significant bit of the first left-channel sample of the time frame of the first frame of the sector.

You will get the famous drive offset:)
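For a feel of the magnitude, here is the usual conversion of a read offset quoted in stereo samples into bytes and time; the +30-sample figure is an assumed example, since real drives each have their own offset:

```python
# Converting an AccurateRip-style drive read offset (in stereo samples)
BYTES_PER_SAMPLE = 4          # 16 bits x 2 channels
SAMPLE_RATE = 44100

offset_samples = 30           # assumed example; real drives vary
offset_bytes = offset_samples * BYTES_PER_SAMPLE        # 120 bytes
offset_us = offset_samples / SAMPLE_RATE * 1e6          # ~680 microseconds
```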

 

*edbk: I'm not teasing; it's just about timing, I'm not retired yet.

 

