
24/192 Downloads ... and why they make no sense?


Recommended Posts

"I am treating the samples as instantaneous values, albeit with slew and integration error over the much shorter time it takes to obtain a sample, and ignoring quantization and noise."

 

Sorry, to me that sounds like a bunch of unrelated technical terms strung together. Perhaps you could elaborate a bit, as I can't make head or tail of it?

 

Link to comment

"You have not shown any timing differences."

 

OK, so you didn't understand my picture at all. That's OK. If you don't get it, you don't get it.

 

"Those signals in your graph may or may not be related to each other in any way."

 

No, I can assure you they are maternal cousins.

 

"Timing resolution is an invariant function of the sample rate."

 

That is not a definition. That is a statement. Any references to support that claim?

 

And could you please provide the definition of "timing resolution" I asked for?

 

Hard to debate something we haven't even defined.

 

"In other words timing resolution is shown by the x axis in this case."

 

No. The x axis shows time. Not timing resolution.

 

 

Link to comment

I agree that hopefully the search for ever higher quality will never stop. But there is not much *good* 24/192 music to listen to yet, and effectively zero 24/384. Let's hope 24/192 gets off the ground, which it has not done yet, before 'obsessing' about even higher rates.

 

Re SACDs and the like, maybe it is a country thing. They remain popular in Japan, I believe, but here in the UK they never really got started. I have never even seen an SACD, let alone heard one. We still have a big 'record shop' in the nearby town of 400,000 people, and I have never seen one there, not over many years. It is the only record shop left in the town. Regarding hi-res downloads, or even CD quality, I am the only person I know who has ever heard of them. Downloads mean iTunes, and the people I know think that is for teenagers.

Regards

 

Link to comment

and similar places. But personally I don't know anyone who has ever bought 'silver disc' music by mail order of any kind. People seem to like going to a record shop and browsing. I know I do, and I have never gone in with any particular music in mind. But there are very few left. Supermarkets have 'this month's popular CD releases', but that's all. I bought 'What's the Story Morning Glory' from my local supermarket when it came out, because I liked the cover. I thought it was crap :)

 

Link to comment

Ok, I'll try something ...

 

Let's try to look at things the way I do: a 1:1 representation of what the D/A conversion should do. Or actually, of what it does when the analogue stages behind it can follow.

 

This sets aside whatever "manipulation" happens ahead of the D/A to improve upon whatever needs improvement. So yes, this will be "filtering" when Redbook is the original, and no, nothing is needed for HiRes, which is assumed to be filtered sufficiently already.

 

Let's think PCM, because it is the easiest to "see".

So, setting the filtering aside, the net result going into the D/A chips implies transients. How large? That depends on the sample rate: the lower the sample rate, the larger the transients will be relative to whatever analogue came before. But since this is per unit of time, in the end it doesn't matter much. Increase the sample rate by a factor of two and, within the same time unit, two samples now represent the one transient from before; the same transient happens in the same time frame.

 

How accurately the transient goes from the starting point to the end point (in volume level) depends on the bit depth: the more bits, the more accurate the transient will be. Or say: the more accurate the length of the transient will be on the vertical axis, its volume level.

 

Of course, the higher the sample rate, the better the granularity (which is another word for resolution, but hopefully more understandable) of the volume level will work out. So, while the same transient will happen in the same time frame anyway, it is the granularity of the bit depth which makes the transient accurate. But the bit depth is only useful when the samples are taken frequently enough. And so yes, the accuracy of the transient is the result of sample rate plus bit depth. Each needs the other.
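A minimal numeric sketch of the point above (all values illustrative, not from the post): the smallest volume step is set by the bit depth, and how often a step can be taken is set by the sample rate.

```python
# Illustrative only: a 2 V full-scale output is an arbitrary assumption.
FULL_SCALE = 2.0  # volts

for bits in (16, 24):
    lsb = FULL_SCALE / 2**bits          # smallest representable volume step
    for fs in (44_100, 352_800):
        # steepest one-LSB-per-sample ramp this format can describe
        slope = lsb * fs                # volts per second
        print(f"{bits}-bit @ {fs} Hz: LSB = {lsb:.3e} V, "
              f"finest ramp = {slope:.3e} V/s")
```

So a finer LSB (more bits) only buys accuracy if the sample rate offers enough opportunities per second to take those steps, which is the "each needs the other" point.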

 

What I tried to sneak in with the above is a sense of all this being unrelated to the higher layer we usually think in: frequencies, how they can be represented by D/A conversion (and A/D ahead of it), and how they may be more or less molested by stronger or weaker filters.

IOW: forget about that for now.

 

Thinking 1:1, all that needs to happen is that this transient - no matter that it was somewhat lowered by filtering - is D/A'd the best we can. Keep in mind, I am talking PCM, and let's say the sample rate is 352.8 kHz for any realistic original source.

 

Now, do we think that all these volume steps of transients go from one volume level to the next smallest step available, even in the 24-bit domain? They most certainly do NOT. Just look at the files and you will know. From one sample to the next, 10 volume steps can easily be skipped. What does this mean? Well, that the transient happened (for the dB value concerned) within two adjacent samples, at 352,800 of them per second.
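You can check claims like this yourself. A hypothetical sketch (a synthetic decaying ping stands in for a rimshot here; real music data would be read from a file instead): count how many 24-bit "volume steps" adjacent samples skip.

```python
import numpy as np

fs = 352_800
t = np.arange(0, 0.01, 1 / fs)
# stand-in for a rimshot: a 5 kHz ping with a fast decay (arbitrary values)
burst = np.exp(-t * 2000) * np.sin(2 * np.pi * 5000 * t)
q = np.round(burst * (2**23 - 1)).astype(np.int64)   # quantize to 24 bits

steps = np.abs(np.diff(q))                           # jump between neighbours
print("largest adjacent-sample jump:", steps.max(), "LSBs")
```

Even this mild synthetic transient jumps by far more than one LSB between adjacent samples, which is the observation being made above.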

 

Someone may do the math on what frequency this implies.

I won't.

 

I won't, because this is not about frequencies. Not in the 1:1 thinking. It is just about a transient which is so fast that it needs more than 352,800 samples per second to nicely utilize that very next volume step available.

Btw, don't ask me what natural (music) sound it takes to produce such fast transients; all I know is that they happen in music data. But take a rimshot to get an idea of something which is a fast transient.

 

Of course, when you get down to it, only a sine of infinite frequency would make the transient infinitely steep. But it doesn't work like that in digital audio data.

 

1:1 again, the transient is just there, going one way up and never coming down again. Oh, it will, but that is virtual and buried in the following music samples. So, that infinitely high frequency sine can be there all right, but it will (try to) ride on the further envelope of that rimshot, which is of lower frequency.

What is and remains is that this one transient of 10 volume steps is there. It is a given fact.

 

Still with me ?

 

Good. This is not a 10 us or whatever length of a "sample", hence a frequency. Why? Because it doesn't go down again. But remember: it does, only now riding on another, lower-frequency wave and with lower amplitude, because the attack of it was the worst part (for transient, hence amplitude).

 

So, it is nothing of 10us of length, but it does go up in ...

Now do that math.

 

There is now nothing which tells me that any following analogue stage, including the speaker, needs to be able to follow a "frequency" for that PULSE. The frequency is not there. But the pulse is. Up only (or down only).

 

This pulse goes into the D/A chip, and within the time frame this happens (which is one sample in my example) the chip might be able to follow it. Well, it should, thinking about my own NOS1, which easily does it at full scale (meaning from 0 V to 2 V). This still adheres to the 1:1.

Now things may get nasty, because when the D/A can follow this (here the "infinite sines" start to apply), and I mean including the gain stage behind it, it will go into the main amp (forget about the preamp for now). Here too the "infinite sines" apply, and when it can do it, next in turn is your tweeter.

This is not as devastating as it looks, because this is one way only. So an overshoot may be implied, but there's no immediate turning back to zero or beyond (minus). It is easier than we might think.

 

Hoping this is all a little clear, only *now* start thinking about what we do to the time domain when this filtering is applied. It will affect the transients I talked about, and it will do so ahead of everything else. Nothing actually capable will have a chance anymore; the transient is gone. And worse, it smears the samples around it, because that is how filtering works (upon transients - which are perceived as frequencies). And if the filter doesn't work upon the transient, it will work upon "detected" high frequencies which it tries to make nicer. All will be flattened, but thinking "with smear as the means" gives the better idea of it.

 

What remains, of course, is that the missing samples, as in Redbook, indeed require correction to do it right (think "reconstruction"). So what we need is a filtering means which adequately takes care of that, without smearing.

Oh, we call that ringing.

 

Ok, I tried something.

Peter

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

Link to comment

"You have not shown any timing differences. Those signals in your graph may or may not be related to each other in any way."

 

Then I suggest you don't take my word or picture for it. Find out yourself. There is an easy way. Take (either for real or in a mathematical simulation package such as MATLAB) a sine wave of less-than-22-kHz frequency (to stay below Nyquist) at a 192 kHz sample rate. Now make a copy of it, and delay/shift the copy by one sample (5.2 us). Downsample both waves to 44.1 kHz. Compare waves. What you will find is two sine waves, one delayed by 5.2 us compared to the other. So, can a 44.1 sample rate represent time differences smaller than 10 us? Yes. Q.E.D.
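The suggested experiment can be sketched in Python/NumPy instead of MATLAB. One liberty taken here: rather than literally decimating the 192 kHz arrays, both continuous sines are sampled directly on the 44.1 kHz grid, which is what an ideal band-limited downsample of a sub-Nyquist sine produces. The 1 kHz test frequency is an arbitrary choice.

```python
import numpy as np

fs_hi, fs_lo, f = 192_000, 44_100, 1_000.0
delay = 1 / fs_hi                         # one 192 kHz sample = ~5.2 us

# both sines sampled on the 44.1 kHz grid (the ideal downsample result)
t = np.arange(0, 0.1, 1 / fs_lo)          # 4410 samples, 100 full cycles
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * (t - delay))

# recover the delay from the phase difference at the tone's FFT bin
spec_a, spec_b = np.fft.rfft(a), np.fft.rfft(b)
k = np.argmax(np.abs(spec_a))             # bin 100 = 1 kHz exactly
phase_diff = np.angle(spec_b[k]) - np.angle(spec_a[k])
recovered = -phase_diff / (2 * np.pi * f)
print(f"recovered delay: {recovered * 1e6:.2f} us")   # → 5.21 us
```

The 5.2 us shift survives the 44.1 kHz representation intact, which is the Q.E.D. claimed above.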

 

Link to comment

SACD's are widely available in the UK, along with indoor flushing lavatories, the internal combustion engine, telephonic communication and smallpox. Indoor cold running water is standard for most, with hot running water being rolled out nationally to members of the Conservative party. (It was deemed unnecessary for members of the Labour party, as they simply need to open their mouths in order to rid themselves of bodily waste).

 

It is rumoured that when the Greek government gives us our money back, we are going to invest in a network of super fast interwebclouds. However, the House of Lords has also put a bid in for some luxury yachts so we will have to wait and see.

 

The UK is, as ever, at the cutting edge of Victorian technology and we should be rightly proud of that. Unfortunately, when it comes to the downloading of high resolution, smega bit rate, unsmeared usb cables we are sadly lacking and will probably have to hobble, caps v2 in hand, to the White House.

 

But we do have SACD's!

 

Link to comment

Steepness of transient and thus frequency content of it, is ultimately defined by the microphone frequency response. IOW, mass and area of the diaphragm. Since it defines the maximum physical acceleration it can have.

 

Fastest responses are possible from piezoelectric sensors. But sensitivity of these in air is bad.

 

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment

OK, they do exist in the UK, but they are pretty hard to find. I never saw any in the big HMV and Virgin record shops in Southampton. One of those shops has since closed, cannot recall which.

 

So we won't ever be getting any interwebclouds for us oiks or yachts for the lords then?

 

Was going to build a CAPS V2 but could not find the parts. Maplins only had steam engines and catswhiskers.

 

BTW. Read a great SF novel called 'The Difference Engine'. Set in a Victorian world where Babbage got the money to develop his brass computers and as a result Britain really did rule the world.

 

Link to comment

I understood your picture perfectly. It is an illustration that a 44.1 sample rate can reproduce two frequencies that are below 20.5k. Sure it can. In fact, it can reproduce a whole bunch of frequencies below 20.5K, even when they are playing at the same time. There is no time factor implicit in that, though if you integrate over a very long period, you can infer the trends.

 

That is not timing resolution, and either you know that or you do not really understand what you are talking about.

 

Show me a 44.1 sample rate that can resolve and reproduce a 10us event (get that word, "event") and I'll agree you have timing resolution that low.

 

You already know it cannot do that, you need a sample rate of 100K+ to resolve that, so this is nothing but a ridiculous conversation.

 

As for me publishing babble, at least I don't grab stuff off Wikipedia for heaven's sake. I have not seen you offer the math for any of this stuff, as Miska did.

 

-Paul

 

 

 

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment

You mean when it is reproduced I think, right?

 

During recording I am not sure how they would be integrated, other than some filter to account for the small but real integration error that happens because of the time it takes to collect a sample.

 

-Paul

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment

"I understood your picture perfectly."

 

No. You did not understand it at all.

 

"It is an illustration that a 44.1 sample rate can reproduce two frequencies that are below 20.5k."

 

No. It is an illustration that a 44.1 sample rate system can reproduce two signals with a time difference of 5 us between them, preserving both the signals and the time difference between them.

 

"There is no time factor implicit in that, though if you integrate over a very long period, you can infer the trends."

 

There is a 5 us delay implicit and explicit in that, and if less than half a waveform is a "very long period" to you, then so be it.

 

"Show me a 44.1 sample rate that can resolve and reproduce a 10us event (get that word, "event") and I'll agree you have timing resolution that low."

 

Why should I? The Kunchur research was not about events. It was about timing differences (get those two words, "timing difference"). I don't know why you keep ranting about events.

 

So far, the only thing we agree on is that this "conversation" (actually, I think "conversation" requires both parties to listen, so I am not sure it is the right word to use) is nothing but ridiculous.

 

At least I don't just spout totally unsubstantiated opinions. Both Miska and I have spent a lot of effort in trying to educate you (by providing you illustrations, references and even a way that you can verify the thing yourself). What have you done? You have not even tried to explain your rather garbled pseudo-technical sentences when I have asked for clarification, and you have not defined the terms you have been using when I have asked for them, as you seemed to shift the meaning of terms you were using to whatever was convenient for you.

 

So, in order to spare the poor dead horse from more flogging, I am done with this thread until you actually provide something more than just rhetoric, hot air and unsubstantiated opinions. How about answering my questions, defining the terms you keep throwing around, and actually doing the test I suggested? Even better, how about *you* providing some math that supports your position?

 

 

Link to comment

"During recording I am not sure how they would be integrated, other than some filter to account for the small but real integration error that happens because of the time it takes to collect a sample."

 

The integration happens in the low-pass filter you need to ensure the Nyquist limit is adhered to.
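A hedged sketch of that anti-alias step (all parameters illustrative; a 4:1 ratio of 192 kHz to 48 kHz keeps the decimation integer): the low-pass filter averages/integrates the signal so content above the new Nyquist frequency cannot fold back down.

```python
import numpy as np

fs_hi, fs_lo = 192_000, 48_000
t = np.arange(0, 0.05, 1 / fs_hi)
# 1 kHz tone plus a 60 kHz component that would alias to 12 kHz at 48 kHz
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 60_000 * t)

# windowed-sinc FIR low-pass, cutoff 20 kHz (below the new Nyquist of 24 kHz)
n = np.arange(-128, 129)
h = np.sinc(2 * 20_000 / fs_hi * n) * np.hamming(len(n))
h /= h.sum()                              # unity gain at DC

filtered = np.convolve(x, h, mode="same")
decimated = filtered[::4]                 # now sampled at 48 kHz

spec = np.abs(np.fft.rfft(decimated))
bin_hz = fs_lo / len(decimated)           # 20 Hz per bin
print("1 kHz level :", spec[int(1_000 / bin_hz)])
print("12 kHz level:", spec[int(12_000 / bin_hz)])   # alias is suppressed
```

Skip the filter (decimate `x` directly) and the 60 kHz component reappears at 12 kHz, which is the aliasing the low-pass exists to prevent.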

 

Link to comment

Sure, I knew you weren't going to pony up any facts or real answers.

 

If it is any comfort to you, let me tell you that the part you don't get is not entirely intuitive to people without any digital signal processing background, and you are not the only one confused by it. See this discussion.

 

 

 

Link to comment

I'm not the math challenged person in this conversation. Do you get all your references from the web or do you ever work out the math for yourself?

 

You have your opinion, but it's just an opinion. I notice you never "ponied up" any example of a 10us event being resolved by a 44.1 sample rate.

 

Obviously, you can't, because it cannot be done.

 

I am quite aware of how to align two waveforms to a resolution finer than their sampling interval. It isn't even that complex using DFT representations. And yes, it is an iterative process.

 

That isn't an increase in the temporal resolution, and isn't going to precisely reconstruct data that was not captured.
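One common (and, unlike the description above, non-iterative) version of such sub-sample alignment, not necessarily the poster's exact method: locate the cross-correlation peak via FFT, then refine it with parabolic interpolation.

```python
import numpy as np

fs = 44_100
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(len(t))

# shift x by 3.4 samples using an FFT phase ramp (exact circular shift)
k = np.fft.rfftfreq(len(t))                # cycles per sample
y = np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * k * 3.4), len(t))

# circular cross-correlation via FFT; peak lies near lag 3.4
corr = np.fft.irfft(np.fft.rfft(y) * np.conj(np.fft.rfft(x)), len(t))
p = int(np.argmax(corr))
ym1, y0, yp1 = corr[p - 1], corr[p], corr[p + 1]
frac = 0.5 * (ym1 - yp1) / (ym1 - 2 * y0 + yp1)   # parabola vertex offset
print(f"estimated shift: {p + frac:.2f} samples")  # true shift is 3.40
```

Note this estimates the *relative alignment* of two waveforms to a fraction of a sample; as the post says, it does not add temporal resolution or reconstruct content that was never captured.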

 

-Paul

 

 

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment

I see you just have to have a final word - so I know it won't end here :)

 

"I notice you never "ponied up" any example of a 10us event being resolved by a 44.1 sample rate"

 

You never really read what I write before answering, do you?

 

"I am quite aware of how to align two waveforms to a resolution finer than their sampling interval. It isn't even that complex using DFT representations. And yes, it is an iterative process."

 

Let me try to rephrase that... How about "Red trumpet pink elephant cronosynchronous infabulation warp space"?

 

No, petunias. Definitely petunias.

 

Link to comment
