  • 0

Is bit depth about dynamic range or data?


audiojerry

Question

I thought after all this time I was correctly explaining bit depth and sample rates to my non-audiophile friends, but now I'm not so sure. I thought that bit depth or bit size determines how much information can be captured in a single sample taken from an analog signal. So if, for example, you are recording a symphony orchestra, there are lots of instruments creating a lot of complex tonal information and sound levels. This creates a complex analog waveform, and when you take a sample of this waveform, you are going to digitize it and store it in a file. This single sample of the waveform would obviously contain a lot of information about what was happening in this symphony orchestra in that instant of time. The larger the bit depth, the more information you can capture, and you have a better quality file to produce a better quality recording.

 

But now I'm hearing that bit depth is all about dynamic range. That seems too simplistic to me.  

Any experts out there who can set me straight?

 

 

Link to comment

Recommended Posts

  • 0
17 minutes ago, fas42 said:

 

Where digital "gets it wrong" for many people in the real world of playback, is that critical information that is encoded at relatively quiet levels compared to the maximum signals that are occurring at the same time, is too distorted by imperfections in the playback chain to be easily discerned by the listening mind - people hear this all the time in sub-par systems; a track which is a complex mix of sounds is played, and it "sounds a mess!" ... the dynamic range is there, as a technical, measurable characteristic, but distortion of low level information is too great - and subjectively "you can't hear what's going on" ...

 

I recently posted a clip of a track from a Ry Cooder album, and the response was that it "just collapses into a bowl of mush" - this is a classic symptom of inadequate effective resolution of the playback chain; subjectively, the "dynamic range" is not good enough ... and this has absolutely nothing to do with the encoding using only 16bits.

 

 

While I don't have too much of a problem with the first paragraph, if the posted clip was from YouTube, even most mediocre systems should have no trouble playing virtually all YouTube audio.

 

I also agree with this comment from Mansr

Quote

Even with flat 16-bit TPDF dither, a 1 kHz tone at -100 dBFS is audible over headphones, even though the dither noise is subjectively louder.
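A quick numpy sketch of why that works (my own illustration; the 1 kHz / -100 dBFS figures come from the quote, everything else is assumed): quantize the tone to 16 bits with TPDF dither and look at the spectrum. The total dither noise really is louder than the tone, but it is spread across the whole band, so in any narrow frequency region - which is roughly how the ear analyses sound - the tone stands well clear of it.

```python
import numpy as np

fs, n = 44100, 1 << 16
t = np.arange(n) / fs
lsb = 2.0 / 2**16                                   # 16-bit step, full scale +/-1

tone = 10**(-100/20) * np.sin(2*np.pi*1000*t)       # -100 dBFS sine, ~1/3 LSB peak
tpdf = (np.random.uniform(-0.5, 0.5, n) +
        np.random.uniform(-0.5, 0.5, n)) * lsb      # TPDF dither, +/-1 LSB peak
out = np.round((tone + tpdf) / lsb) * lsb           # dithered 16-bit samples

spec_db = 20*np.log10(2*np.abs(np.fft.rfft(out))/n + 1e-20)
bin_1k = int(round(1000 * n / fs))

print("1 kHz bin        :", round(float(spec_db[bin_1k]), 1), "dBFS")    # ~ -100
print("median noise bin :", round(float(np.median(spec_db)), 1), "dBFS") # far lower
```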

 

 

How a Digital Audio file sounds, or a Digital Video file looks, is governed to a large extent by the Power Supply area. All that Identical Checksums gives is the possibility of REGENERATING the file to close to that of the original file.

PROFILE UPDATED 13-11-2020

Link to comment
  • 0
7 minutes ago, sandyk said:

 

While I don't have too much of a problem with the first paragraph, if the posted clip was from YouTube, even most mediocre systems should have no problems playing virtually all YouTube Audio without any real problem.

 

 

Again, the posting of a YouTube clip is there as an easy way to reference the style of the music, etc. - it wouldn't make sense to try to prove to you how good my system is by ringing you up on my old-fashioned corded phone and holding up the handset so that you could listen ... 🙂.

 

I played that Ry Cooder CD a couple of visits ago for the local audio friend ... ummm, it was not good - not a mush, but very dodgy on the ears, 😉.

Link to comment
  • 0
22 minutes ago, fas42 said:

it wouldn't make sense to prove you how good my system was, by ringing you up on my old fashioned corded phone, and holding up the handset so that you could listen ... 🙂.

 

In the latter days of the analogue networks (at least in Australia), the old-fashioned analogue corded phone was vastly superior in clarity to anything currently available via digital networks with their typical 300-3,000 Hz frequency response, and it did quite a reasonable job of reproducing music too, once the old carbon microphones had been replaced with electret types. Rocking-armature receivers even did a reasonable job of reproducing the sound of air conditioning, given the use of later op-amps than the 709 to drive them.

 


Link to comment
  • 0
44 minutes ago, bluesman said:

The Bell system “speech band” was 300-3400 Hz through decades of dial phone use.  Bell Labs did a lot of research to determine everything from the optimal frequency response of their phones to the size of the holes in the dial and buttons on touch tone phones. The equipment was very high quality until the demise of Bell - and it was tough as nails.  I suspect that those black dial phones were bulletproof!

 

I blew a 6L6 in my guitar amplifier on a gig in the summer of 1968. It was almost midnight, and I had no spare.....but we had another 2 hours to play. So I called the phone company’s repair service from the club, explained my predicament, and asked if they had any tubes I could buy. The guy who answered asked where I was and said he’d get back to me.  About ten minutes later, a Bell System truck pulled up and the driver brought two 6L6s to the bandstand, telling me I should replace both for best sound. I asked what I owed him, and he asked for my home phone number - he told me it was “repair service” because I was a customer!

Yes, that's what I call service. :D

The bulk of our analogue network, at least in Sydney, was determined by distance and the amount of attenuation, and used balanced transmission feeds.

Outlying and new areas had to use PCM systems of course, the passband of which varied a little between the system manufacturers. To maintain a high S/N, the earth (ground) resistance of the various exchanges needed to be very low, or there could be a small amount of hum. The earth systems occasionally needed upgrading.

We had a problem with the remaining Strowger (USA) equipment at Chatswood in Sydney, as the ring tone was derived via capacitors from the actual LF ring, and didn't get through the carrier systems to some other states.

It would appear to be a No Progress call until somebody suddenly answered. My O.I.C. asked me for assistance, so I used a DIY 52 V transistor amplifier from the new ARE11 processor-controlled exchange to modulate the LF ring with the 400 Hz ring tone, which was used to supply the remaining Strowger gear until it was replaced. This also benefited a couple of large PABXs in the area which were fed ring from the exchange via dedicated pairs.

 


Link to comment
  • 0
On 2/2/2020 at 6:02 PM, audiojerry said:

I thought after all this time I was correctly explaining bit depth and sample rates to my non-audiophile friends, but now I"m not so sure. I thought that bit depth or bit size determines how much information can be captured in a single sample taken from an analog signal. So if, for example, you are recording a symphony orchestra, there are lots of instruments creating a lot of complex tonal information and sound  levels. This creates a complex analog waveform, and when you take a sample of this waveform, you are going to digitize it and store it in a file. This single sample of the waveform would obviously contain a lot of information about what was happening in this symphony orchestra in that instant of time. The larger the bit depth, the more information you can capture, and you have a better quality file to produce a better quality recording.

 

But now I'm hearing that bit depth is all about dynamic range. That seems too simplistic to me.  

Any experts out there who can set me straight?

A bit is either on or off - two possible states. The number of values a single sample can take is 2 raised to the power of the bit depth.

2 raised to the 16th gives a signal with a maximum count, and theoretical resolution, of 65,536 levels.

2 raised to the 24th gives a signal with a maximum count, and theoretical resolution, of 16,777,216 levels.

A higher bit depth gives a signal that can be resolved into finer detail. Whether that finer detail makes a difference is a matter of whether your hardware can process that signal to good effect.

The sample rate - the "frequency" - is simply how many samples are taken in a given time period.

Think of it as a camera - you can have cheap black-and-white film or gorgeous Kodachrome - that's the bit depth.

The frequency is the shutter speed - really fast to freeze that Grand Prix car rounding Eau Rouge, or a slower speed that just can't quite capture all the detail.
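To put numbers on the 2-to-the-power figures above, a small Python sketch of my own, just doing the arithmetic:

```python
# Distinct amplitude values available per sample at common bit depths
for bits in (8, 16, 24):
    steps = 2 ** bits
    print(f"{bits:>2}-bit: {steps:>10,} possible values, "
          f"smallest step = 1/{steps:,} of full scale")
```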
 

 

 

 

Link to comment
  • 0
On 2/2/2020 at 6:02 PM, audiojerry said:

I thought after all this time I was correctly explaining bit depth and sample rates to my non-audiophile friends, but now I"m not so sure. I thought that bit depth or bit size determines how much information can be captured in a single sample taken from an analog signal. So if, for example, you are recording a symphony orchestra, there are lots of instruments creating a lot of complex tonal information and sound  levels. This creates a complex analog waveform, and when you take a sample of this waveform, you are going to digitize it and store it in a file. This single sample of the waveform would obviously contain a lot of information about what was happening in this symphony orchestra in that instant of time. The larger the bit depth, the more information you can capture, and you have a better quality file to produce a better quality recording.

 

But now I'm hearing that bit depth is all about dynamic range. That seems too simplistic to me.  

Any experts out there who can set me straight?

 

 

Both...  It is about dynamic range and signal quality. However, the resolution essentially depends on dither being present. If a signal is properly dithered and you narrow the frequency range, you get a wider dynamic range in that band. So the number of bits is really about information: the amount of information is proportional to (bits * sample rate). If you narrow the frequency range (filter), the dither noise in that band decreases, which increases the dynamic range for that limited frequency range.

 

(The above isn't the whole story, and might have a few minor details wrong, but trying to explain a concept in this very short time that I have to write this!!!  Very busy, and have to run!!!!)
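A rough numpy sketch of that last point (my own illustration, with an assumed 2 kHz band purely for the example): quantize TPDF-dithered silence to 16 bits, then compare the noise power across the whole band with the noise power left in a 0-2 kHz slice. The narrower band holds proportionally less of the (white) dither noise, which is the extra dynamic range being described.

```python
import numpy as np

fs, n = 44100, 1 << 18
lsb = 2.0 / 2**16                     # 16-bit step, full scale +/-1

# Quantize "digital silence" with TPDF dither: what remains is the noise floor
tpdf = (np.random.uniform(-0.5, 0.5, n) +
        np.random.uniform(-0.5, 0.5, n)) * lsb
noise = np.round(tpdf / lsb) * lsb

power = np.abs(np.fft.rfft(noise))**2        # noise power per FFT bin
freqs = np.fft.rfftfreq(n, 1/fs)

full = power.sum()                           # 0 .. 22.05 kHz
band = power[freqs <= 2000].sum()            # 0 .. 2 kHz only

print("0-2 kHz noise vs full band:", round(10*np.log10(band/full), 1), "dB")
# Expect about -10*log10(22050/2000) ~ -10.4 dB: roughly 10 dB more usable
# dynamic range if the content of interest only occupies that narrower band.
```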

 

John

 

Link to comment
  • 0

I like the photo resolution analogy - it helps this layman understand.

 

I also like the dithering explanation, but I question the argument that human hearing can compensate for shortcomings in the digital recording. Does that viewpoint imply that the quality of the digital recording or the quality of one's dac aren't important?

 

To me an exceptional dac is an absolute requirement to tolerate listening to digital music.

Link to comment
  • 0
On 3/1/2020 at 12:49 PM, audiojerry said:

I like the photo resolution analogy - it helps this layman understand.

 

I also like the dithering explanation, but I question the argument that human hearing can compensate for shortcomings in the digital recording. Does that viewpoint imply that the quality of the digital recording or the quality of one's dac aren't important?

 

To me an exceptional dac is an absolute requirement to tolerate listening digital music    

In my experience the DAC wasn't as important as what it was being fed with - or from.  I won't disagree that some manufacturers do a better job than others, particularly on the analog side of things.  But, much like back in the mainframe days there was always a big sign somewhere in the computer room - GIGO - Garbage In, Garbage Out.  

 

The biggest change recently wasn't with the DAC, but going from a laptop feeding it through USB to a streaming input over Ethernet where the DAC/DSP does all the work.  Same DAC, better sound.  

Link to comment
  • 0
5 hours ago, SJK said:

In my experience the DAC wasn't as important as what it was being fed with - or from.  I won't disagree that some manufacturers do a better job than others, particularly on the analog side of things.  But, much like back in the mainframe days there was always a big sign somewhere in the computer room - GIGO - Garbage In, Garbage Out.  

 

The biggest changes recently wasn't with the DAC, but going from a laptop feeding it through USB to a streaming input over Ethernet where the DAC/DSP does all the work.  Same DAC, better sound.  

 

Yes, I completely agree - almost completely. I had a chance to audition a PS Audio DirectStream dac with an ethernet bridge in my system for a week. It allowed me to compare it to my Oppo Sonica dac with an ESS 9038PRO Sabre chip, using USB/ethernet streaming. The Oppo represented a big sonic improvement over my previous dac, and at $700, it was a tremendous value in my opinion. To my chagrin the PS DirectStream was far superior to the Oppo. Even digital recordings from the mid '80s that I could not tolerate for the most part sound very good through the DirectStream. I don't know if its 1-bit conversion was the reason, but I did not like the realization that a multi-thousand-dollar component could bring about such an improvement.

Link to comment
  • 0
56 minutes ago, audiojerry said:

 

Yes, I completely agree - almost completely. I had a chance to audition a PS Audio DirectStream dac with an ethernet bridge in my system for a week. It allowed me to compare it to my Oppo Sonica dac with an ESS 9038PRO Sabre chip, using USB/ethernet streaming. The Oppo represented a big sonic improvement over my previous dac, and at $700, it was a tremendous value in my opinion. To my chagrin the PS DirectStream was far superior to the Oppo. Even digital recordings from the mid '80s that I could not tolerate for the most part sound very good through the DirectStream. I don't know if its 1-bit conversion was the reason, but I did not like the realization that a multi-thousand-dollar component could bring about such an improvement.

Perhaps you could expand on exactly what the difference was - whether it was the input type or the source. 
 

PS Audio was ethernet input, Oppo was USB?  And that the PS Audio made a difference?
 

Enquiring minds need to know....

 

Link to comment
  • 0
On 2/2/2020 at 6:21 PM, fas42 said:

Bit depth is about the signal to noise ratio. If you reduce the depth, you get a higher level of random noise - tape hiss is the obvious analogue variant. A decent digital encoding can capture that tape hiss with ease, so "everything that matters" is being transferred

 

This all assumes that the person who might be playing around with bit depth, while recording and/or mastering, knows how to apply the correct dither, at the correct point of operations ... get it wrong, and you can hear the mistake.

 

Human hearing can compensate for random loss of data, or excess noise, remarkably well - good handling of digital data can rely on that ability, to make even poor bit depth "sound OK".

 

Dynamic range is purely about mastering decisions - nothing to do with bit depth.

SNR has nothing to do with either bit depth or the frequency of a digital recording. There is no noise, at least not in digital terms. 
 

A digital recording captures the source, whatever that may be, the same as an analog recording and with consideration for the digital ADC front end.  
 

You make reference to tape hiss and yes, in an analog world there was great effort made to move as far away from the noise floor as possible.
 

I’m deeply confused as to how that relates to a digital recording with the two values under discussion - bit depth and frequency. 

 

 

Link to comment
  • 0
21 hours ago, SJK said:

SNR has nothing to do with either bit depth or the frequency of a digital recording. There is no noise, at least not in digital terms. 
 

A digital recording captures the source, whatever that may be, the same as an analog recording and with consideration for the digital ADC front end.  
 

You make reference to tape hiss and yes, in an analog world there was great effort made to move as far away from the noise floor as possible.
 

I’m deeply confused as to how that relates to a digital recording with the two values under discussion - bit depth and frequency. 

 

 

 

A recorder will typically use at least 16 bits for recording, which means you won't hear noise, assuming decent settings - but there is nothing to stop one reducing the bit depth of that recording afterwards, which will introduce audible noise ... we have a basic, older digital camera which I once tried to use for capturing some playback; it was close to useless because the SNR was terrible - nominally 16 bits, but the automatic gain control was hopeless.

 

Modern devices are normally fine, but being careless, in a manner that compromises the bit depth, will certainly be audible.

Link to comment
  • 0
1 hour ago, fas42 said:

 

A recorder will typically use as least 16 bits for recording, which means you won't hear noise, assuming decent settings - but there is nothing to stop one reducing the bit depth of that recording afterwards; which will introduce audible noise ... we have a basic, older digital camera which I once tried to use for capturing some playback; which was close to useless because the SNR was terrible; nominally 16 bits, but the automatic gain control was hopeless.

 

Modern devices are normally fine, but being careless, in a manner that compromises the bit depth, will certainly be audible.

I have no idea what you’re talking about.

 

I’m thinking that you’re posting too often and not really following the conversation. 
 

Dude. Slow down, stop and smell the flowers.  
 


 

 

Link to comment
  • 0
On 2/2/2020 at 6:21 PM, fas42 said:

Bit depth is about the signal to noise ratio. If you reduce the depth, you get a higher level of random noise - tape hiss is the obvious analogue variant. A decent digital encoding can capture that tape hiss with ease, so "everything that matters" is being transferred

 

This all assumes that the person who might be playing around with bit depth, while recording and/or mastering, knows how to apply the correct dither, at the correct point of operations ... get it wrong, and you can hear the mistake.

 

Human hearing can compensate for random loss of data, or excess noise, remarkably well - good handling of digital data can rely on that ability, to make even poor bit depth "sound OK".

 

Dynamic range is purely about mastering decisions - nothing to do with bit depth.

Dude. Stop.
 

It has nothing to do with SNR, that’s only manifested in that long gone analog world. 
 

The topic for discussion is bit depth and frequency. 

 

In a digital world there is no noise floor, tape hiss, or limited bandwidth. Our conversation is about how bit depth and frequency affect our ability to capture the moment. 
 

Have you understood?

Link to comment
  • 0
32 minutes ago, SJK said:

In a digital world there is no noise floor, tape hiss, or limited bandwidth. 

 

False, true, false. Two out of three wrong.

 

In a digital world to have linearity we need dither, and dither is noise. No dither means accepting quantization distortion - so a 'no noise floor' digital world would be a 'quantization distortion' digital world. Subjectively far less appealing to listen to than relatively benign noise.
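To illustrate the difference, here is a numpy sketch of my own (the -45 dBFS / 8-bit figures are arbitrary, chosen only to make the effect obvious): quantizing a small sine without dither turns it into a crude stepped wave full of harmonics, while adding TPDF dither before the quantizer leaves just the tone plus a relatively benign noise floor.

```python
import numpy as np

fs, n = 48000, 48000
t = np.arange(n) / fs
lsb = 2.0 / 2**8                                  # coarse 8-bit quantizer, FS +/-1
x = 10**(-45/20) * np.sin(2*np.pi*1001*t)         # -45 dBFS tone, ~0.7 LSB peak

def amp_db(y):
    # Amplitude spectrum in dB re full scale (1 Hz bins, tone lands on a bin)
    return 20*np.log10(2*np.abs(np.fft.rfft(y))/n + 1e-20)

plain = np.round(x / lsb) * lsb                   # no dither: pure rounding
tpdf = (np.random.uniform(-0.5, 0.5, n) +
        np.random.uniform(-0.5, 0.5, n)) * lsb
dithered = np.round((x + tpdf) / lsb) * lsb       # TPDF dither before rounding

h3 = 3 * 1001                                     # 3rd-harmonic bin
print("3rd harmonic, no dither  :", round(float(amp_db(plain)[h3]), 1), "dBFS")
print("3rd harmonic, TPDF dither:", round(float(amp_db(dithered)[h3]), 1), "dBFS")
```

Without dither the quantizer produces a strong, signal-correlated harmonic; with dither that distortion product drops into the broadband noise floor.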

Link to comment
  • 0

AFAIK bit depth and dynamic range are related but not equivalent. The bit depth of a capturing or displaying device gives you an indication of the highest possible dynamic range and level of precision, e.g. a bit depth of 16 has a potential of 65,536 values of resolution and the potential for a 65,536:1 dynamic range. However, for a dynamic range of 10,000:1 you only need two values, not 10,000 values, so in reality bit depth is more an indication of the potential number of values. An 8-bit file will give you potentially 256 values and potentially, at the extremes, a dynamic range of 256:1.
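Those ratios map straight onto the usual dynamic-range figures in dB - a quick arithmetic check (my own sketch):

```python
import math

# Ratio of full scale to one quantization step, expressed in decibels
for bits in (8, 16, 24):
    ratio = 2 ** bits                  # e.g. 65,536:1 for 16 bits
    print(f"{bits:>2}-bit: {ratio:>10,}:1  =  {20*math.log10(ratio):6.1f} dB")
# ~48 dB, ~96 dB, ~144 dB -- the familiar "about 6 dB per bit" rule of thumb
```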

Sound Minds Mind Sound

 

 

Link to comment
  • 0

I have an opera CD, which bugged me from the earliest days after I bought it - there was this peculiar noise, right at the beginning of the first track, which disappears quite quickly ... after many years I was in a position to rip that track, and study the waveform ...well, well - some dingaling had mucked up the transfer from tape; and it was quite obviously suffering from severe degradation from bit depth loss, for about 30 secs. The chap probably suddenly realised his mistake, and then set it correctly from then on - but never went back and fixed the opening bit ... poor mastering captured for eternity, set in plastic ... 😁

Link to comment
  • 0
On 2/3/2020 at 1:33 PM, yamamoto2002 said:

Audio tape recorders use AC bias to improve signal linearity.

 

The AC bias signal frequency effectively defines the upper recording-frequency limit of audio tape recorders; it is typically around 50 kHz in R2R machines.

 

Interestingly, because of the way that the tape nonlinearities work, the bias frequency should be 3X higher than the highest audio frequency.  It all comes from some rather tricky and obscure math.   Of course, up to a point, the higher the bias frequency, the better.

 

John

 

Link to comment
  • 0
7 hours ago, opus101 said:

 

False, true, false. Two out of three wrong.

 

In a digital world to have linearity we need dither, and dither is noise. No dither means accepting quantization distortion - so a 'no noise floor' digital world would be a 'quantization distortion' digital world. Subjectively far less appealing to listen to than relatively benign noise.

I agree -- but another point: with proper dither, the 'resolution' effectively becomes 'infinite' -- well, in a way. There is a trick whereby, if you have a properly dithered signal, you can narrow the bandwidth and then mathematically increase the bit resolution. This 'trick' is used in cell systems all the time, where they might use a raw HW 12- or 14-bit A/D converter, giving only a rather narrow dynamic range in RF terms. However, when they make a narrower selection of the signal (not just frequency-domain shrinkage), the dynamic range increases.

It is all about information content: as long as you respect the rule that one cannot increase the information in the signal, all kinds of nice things fall out. This all works only on a dithered signal; without dither, all kinds of evils start appearing.
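A back-of-the-envelope version of that trade (my own sketch; the converter and bandwidth numbers are made up purely for illustration): with white, dithered quantization noise, the in-band noise power scales with the bandwidth you keep, and every ~6 dB of gained SNR is worth roughly one extra bit.

```python
import math

def effective_bits(raw_bits, sample_rate_hz, bandwidth_hz):
    """Rough effective resolution after band-limiting a dithered signal.

    Assumes white quantization/dither noise: the in-band noise power scales
    with bandwidth, and ~6.02 dB of SNR corresponds to one bit.
    """
    process_gain_db = 10 * math.log10(sample_rate_hz / (2 * bandwidth_hz))
    return raw_bits + process_gain_db / 6.02

# Hypothetical figures: a 14-bit converter sampling at 100 MHz, viewed
# through a 200 kHz channel filter
print(round(effective_bits(14, 100e6, 200e3), 1), "effective bits")   # ~18
```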

 

John

 

Link to comment
  • 0
10 hours ago, opus101 said:

 

False, true, false. Two out of three wrong.

 

In a digital world to have linearity we need dither, and dither is noise. No dither means accepting quantization distortion - so a 'no noise floor' digital world would be a 'quantization distortion' digital world. Subjectively far less appealing to listen to than relatively benign noise.

I'll cheerfully admit I'm wrong - the conversation to this point keeps bringing up tape hiss as if you can't have a recording without it having been through a tape medium - which stopped being true some decades ago.  If we're talking about digital recording perhaps we should use dynamic range as a realistic measure for comparison.

 

Link to comment
  • 0
On 3/2/2020 at 6:18 PM, SJK said:

Perhaps you could expand on exactly what was the difference, whether the input type or the source. 
 

PS Audio was ethernet input, Oppo was USB?  And that the PS Audio made a difference?
 

Enquiring minds need to know....

 

Sorry I did not get around to answering your question. I'm not a reviewer, and I get a rash when I try to describe my sonic experiences using typical analogies, so I will briefly offer a few observations.

 

When I compared the PS to the Oppo, I did not have a wifi switch in my listening room to use ethernet, so I used USB for comparison. Simply put, the PS sounded less digital. What does that mean? Originally I thought I liked digital because I thought I was hearing a lot more detail (clarity), and I wasn't hearing surface noise, clicks and pops, rumble, etc. But over time I found that the enjoyment I got when listening to music via vinyl was eroding when listening to CD. What used to be involving and relaxing grew increasingly annoying, and even painful. It was almost as if the nerve endings in my inner ear or brain were being irritated. No matter how much I spent on upgrades to equipment, cables, and isolation peripherals, I could never overcome that seemingly inherent flaw in digital music.

 

I did realize improvements by upgrading things like power supplies, dacs and transports, and, to a great extent, streaming. But I was never completely satisfied, so I always relied on vinyl playback for my preferred listening pleasure. 

 

As I said before, the Oppo dac offered a solid improvement over my previous twice as expensive dac, but in comparison to the PS, there simply was no comparison. Why? The PS took away much of the hardness of digital, it put flesh back on the bones of instruments, voices sounded more natural and human, and the irritation to my nerve endings virtually went away (unless the quality of the digital recording was nasty).  I wouldn't say the PS offered more detail or resolution, but I was able to hear into the performance more and pick out more nuance. I was just able to enjoy the music for music's sake, and I could forget about hi-fi. Things got even better once I was able to use ethernet.

 

Hope that helps. 

 

Link to comment
  • 0
18 hours ago, audiojerry said:

Sorry I did not get around to answering your question. I'm not a reviewer, and I get a rash when I try to describe my sonic experiences using typical analogies, so I will briefly offer a few observations.

 

When I compared the PS to the Oppo, I did not have a wifi switch in my listening room to use ethernet, so I used USB for comparison. Simply put the PS sounded less digital. What does that mean? For me I originally thought I liked digital because I thought I was hearing a lot more detail (clarity), and I wasn't hearing surface noise, clicks and pops, rumble, etc. But over time I found that my enjoyment when listening to music via vinyl was eroding when listening to cd. What used to be involving and relaxing grew increasingly annoying, and even painful. It was almost as if the nerve endings in my inner ear or brain were being irritated. No matter how much I spent on upgrades to equipment, cables, and isolation periperals, I could never overcome that seemingly inherent flaw in digital music.   

 

I did realize improvements by upgrading things like power supplies, dacs and transports, and, to a great extent, streaming. But I was never completely satisfied, so I always relied on vinyl playback for my preferred listening pleasure. 

 

As I said before, the Oppo dac offered a solid improvement over my previous twice as expensive dac, but in comparison to the PS, there simply was no comparison. Why? The PS took away much of the hardness of digital, it put flesh back on the bones of instruments, voices sounded more natural and human, and the irritation to my nerve endings virtually went away (unless the quality of the digital recording was nasty).  I wouldn't say the PS offered more detail or resolution, but I was able to hear into the performance more and pick out more nuance. I was just able to enjoy the music for music's sake, and I could forget about hi-fi. Things got even better once I was able to use ethernet.

 

Hope that helps. 

 

Thanks for a detailed response.  I've seen not-so-subtle changes with hardware but was never sure if it was due to a change in connection type or simply because they did a better job with the analog side of things.  My previous setup was a Bryston BDA-2 DAC with a BDP-2 player over USB, because with the player there was better sound than with a laptop.

When I went to a PS Audio DSJ I found I didn't need the BDP-2 player anymore; the sound from the laptop, still over USB, was great.  It's so hard to quantify in empirical terms what will make a difference without considering the entire system.

Link to comment
  • 0
1 hour ago, SJK said:

Thanks for a detailed response.  I've seen not so subtle changes with hardware but was never sure if it was due to a change in connection type or simply because they did a better job with the analog side of things.  I went from a Bryston BDA-2 DAC with a BDP-2 player over USB because with the player there was a better sound than with a laptop.

 

When I went to a PS Audio DSJ I found I didn't need the BDP-2 player anymore, the sound from the laptop still with USB was great.  It's so hard to quantify in empirical terms what will make a difference without consideration for an entire system.

 

This is not intended as a one-up, but just to share my personal experience:

 

After hearing the DS SR, I grabbed a good deal on a DS JR demo unit with Bridge II from Underwood Wally. I was very pleased with the sound and promptly put my Oppo up for sale, which I sold for more than list, once Oppo announced it was stopping production. Unfortunately, I began experiencing frequent drops in the DLNA connection with my JRiver player when using the PS Bridge. 

 

After numerous calls with PS Support to try to fix the issue, PS surprisingly offered to exchange the PS JR for a PS SR plus $1500. Although that was more than I ever anticipated paying for a dac, I rationalized that the offer represented a nearly $3k discount, so I bought it but asked PS to ship the SR before I returned the JR. This allowed me to directly compare the SR with the JR. As much as I liked the JR, the SR put a smile on my face that I couldn't wipe away. The SR just takes the sound of the JR to a higher level. For whatever reason the SR does not have the Bridge issues of the JR. I have read that the dac designer, Ted Smith, is brilliant.

 

With the dac's ability to perform free firmware updates, I am a real fan.           

Link to comment
  • 0
On 2/2/2020 at 6:02 PM, audiojerry said:

I thought after all this time I was correctly explaining bit depth and sample rates to my non-audiophile friends, but now I"m not so sure. I thought that bit depth or bit size determines how much information can be captured in a single sample taken from an analog signal. So if, for example, you are recording a symphony orchestra, there are lots of instruments creating a lot of complex tonal information and sound  levels. This creates a complex analog waveform, and when you take a sample of this waveform, you are going to digitize it and store it in a file. This single sample of the waveform would obviously contain a lot of information about what was happening in this symphony orchestra in that instant of time. The larger the bit depth, the more information you can capture, and you have a better quality file to produce a better quality recording.

 

But now I'm hearing that bit depth is all about dynamic range. That seems too simplistic to me.  

Any experts out there who can set me straight?

 

 

You are right about bit depth being about the amount of information per sample.  However, that information per sample maps to dynamic range -- especially if the signal is dithered.  Dynamic range can be further modified by narrowing the bandwidth (if dithered).  It isn't all right or all wrong; it is kind of both.

 

(Sorry, this is a repeated answer -- my browser got into a weird mode, but I am leaving the reply, because it shows the original questioner was essentially correct, but not the whole background.)

John

 

Link to comment
