
CPU Load and Sound Quality



6 minutes ago, STC said:

In my example, I could capture the output digitally and the 24/96 files should show the difference. 

 

I’ve done that, and the result showed no real difference beyond normal noise well below the audible threshold. This was between a CPU that was nearly idle and one overloaded with an intensive stress test.

33 minutes ago, PeterSt said:

 

Wrong is a big word. Insufficiently detailed is another.

 

[Attached image: difference plot between playback with and without the GUI cursor program]

 

Here is my running time cursor again (shown many times since 2009). This repeats exactly once per second. It shows the difference between two situations with the same playback software (in a separate EXE): one with a separate GUI program showing the running cursor (this does not play audio), and one without that GUI (the audio still playing in that separate program).

The vertical scale comprises sample values, the best resolution being the screen pixels (zoomed at will). If the maximum spread were 50 pixels (plus/minus, hence peak to peak), this can be seen as a value of 50 out of 65536 (this is 16-bit playback here). Maybe someone can do the math on how many dB this would be, and whether any Archi-program would be able to show it. Mind you, such a program will also average; you can clearly see that the average is close to zero, because the excursions go to both sides (plus and minus) on roughly a sample-per-sample basis. Such a program would show nothing (IMO).

 

Notice that the red line is the reference (in this case the audio playing without the GUI / cursor program running) and that the wobbling in the left part is the ADC noise vs. the playback noise against that reference. And notice that both the reference recording and the observed recording obviously contain inherent noise. So two recordings of the same situation would show the wobbling line only. But if one of the two has even the slightest difference in CPU load (or whatever load, for that matter), it shows against the other.

DAC and ADC clocks are (and have to be) shared. Notice that the shown differences can at least partly exist because the ADC itself is influenced by noise coming over the clock line and other (also backdoor) connections. But since this is all so easily audible, I would not count too much on that influence alone.

Back then, I always checked the bit-perfectness of any software situation by capturing the digital output of one PC on the digital input of a second PC, and then comparing the files after trimming the start and end.

 

I have documented many telling examples of differences between bit-perfect players, all 100% repeatable as long as the situation remains the same.

 

[Attached image: comparison graphs between bit-perfect players, including Compare12 of Q1=4 vs. Foobar]

 

Without too much elaboration, my documented text going with this one is:

 

Knowing that Q1=4 seems more reliable, Compare12 shows most clearly that a pattern is present between Q1=4 and Foobar. It is mild, though.

Note: Q1 is a dial in XXHighEnd that had 34 different values back at the time. Today it can have 1700 different values. It implies a CPU "resonance" because it deals with the size of a certain internal buffer (which can range from close to 1 sample to 23M samples or so).

And this is ONE dial. Quite a few more exist, each influencing another part of the (OS) playback system. And they all do their job in conjunction, hammering on a crucial single CPU core, or more of them, depending on what I want to achieve.

There is no real science in this that I can discover after so many years by now (going back to 2006), but from empirical findings quite a lot can be controlled at will (like steering the bass in a certain direction).

 

Surely you could reason that all of this can't be audible (this is actually the common-sense view behind the Archi-method). Well, tell that to the users of XXHE. But at least you can't say it isn't measurable.

 

Peter,

 

50 out of 65536 is -62dB. I'd be very surprised if Archimago's software/hardware couldn't resolve this; any decent 16-bit ADC is more than adequate to do this, with lots of room to spare.
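For the record, that arithmetic as a quick check:

```python
import math

# Peter's 50-codes-out-of-65536 peak-to-peak spread, relative to 16-bit full scale
print(f"{20 * math.log10(50 / 65536):.1f} dB")  # -62.3 dB
```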

 

So let's try to repeat your measurement results. Suppose I have a DAC playing a tone at, say, -90dB, and I record the analog output using a quality 24-bit ADC (accurate to about -110dB). I'll use a separate laptop to record the output of the ADC. Say I start and stop Prime95 or another torture test a few times during this recording on the computer generating the -90dB signal.

 

Would you say this should produce alternating bands of noise and no noise in the recording? This is a much more intensive test than a cursor blinking in your example, so it should have an obvious effect, no? Do you see anything wrong with this test setup?
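If it helps, here is a minimal sketch of how such a capture could be scanned for those bands (the filename and the 100 ms window size are my assumptions; any WAV reader would do in place of soundfile):

```python
import numpy as np
import soundfile as sf  # assumed available; any WAV reader works

# Hypothetical capture of the DAC output while Prime95 was toggled on and off
data, fs = sf.read("dac_capture_prime95_toggled.wav")
if data.ndim > 1:
    data = data[:, 0]  # analyze one channel

# Short-term RMS in 100 ms windows: load-induced noise should show up
# as alternating bands of higher and lower level over time
win = int(0.1 * fs)
n = len(data) // win
rms = np.sqrt(np.mean(data[:n * win].reshape(n, win) ** 2, axis=1))
for t, level in zip(np.arange(n) * 0.1, 20 * np.log10(np.maximum(rms, 1e-12))):
    print(f"{t:6.1f} s  {level:7.1f} dBFS")
```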

 

 

7 minutes ago, PeterSt said:

 

But Paul, this difference exists for one sample duration ... Nothing is going to measure that in dB. Try it yourself! Thus:

 

+25, -25, +25, -25, and that for a sufficient amount of time. This is what you see in my graphs. If you ask me, that results in a 0dB net difference. And if you agree, then try a single +25 over a minute. You won't see that in dB, nor would you hear it. But it can be graphed, like I do ...

 

But let me know whether you might agree.

 

 

I’m not sure why not. One sample at 16/44.1k can be easily measured if the ADC is running at say 24/192k, no? 
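As a numerical aside: the ±25 pattern nulls out on average but not in RMS terms (a minimal check, assuming 16-bit full scale of 32768):

```python
import numpy as np

# Peter's hypothetical pattern: sample values alternating +25, -25 (16-bit codes)
x = np.tile([25.0, -25.0], 22050)  # one second's worth at 44.1 kHz

print(x.mean())  # 0.0: the average nulls out, as Peter says
rms = np.sqrt(np.mean((x / 32768) ** 2))
print(f"{20 * np.log10(rms):.1f} dBFS")  # about -62.3 dBFS, easily measurable
```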

Just now, PeterSt said:

 

You could find it, yes. In the waveform (oscilloscope memory). That is what I do. But measure it somehow? At what bin size?

But you can try it with your software ?

 

My software lets you zoom in on the waveform as far as you want, in time and amplitude (I'm not talking about FFTs). So you can certainly see individual samples. But so do Audacity and, I'm sure, many other software packages.

12 hours ago, PeterSt said:

 

Yes. But the stance was that Archimago should be able to measure it (according to you). And from there I say: try it yourself with your own software.

You might already have a problem with the not-clean environment. So, for example, if I measure the inherent system noise (DAC output) at something like -146dB (anyway, 30uV p-p), which I can measure because I have an analyser for that (have you??), and in your situation that would be -120dB, which is still very, very good ... then what? Then you may not be bothered at all by tiny noise differences (but meanwhile everything smears because of the larger pile of noise). And please don't say that this will be inaudible ... (because we are talking measurement now).

 

Peter, I certainly don't think a single number like THD, THD+N, jitter, or whatever represents everything about a DAC or a system. DeltaWave produces RMS null values, jitter, and phase offsets as part of the analysis, but these are also not enough to completely describe the distortions or noise all by themselves. That's why DW provides plots of various data over time, frequency, etc., to let you view the differences in much more detail than a single number could possibly reveal. It's easy to zoom in on any portion of a plot to see the magnitude of the differences.

 

-146dB from a 2V output represents about 100nV, not 30µV. I'd be curious what equipment lets you measure such a low-level signal. Even the best AP analyzer I'm aware of doesn't go much below -120dB or so. And no, I don't have an AP analyzer. Which one do you have?
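The conversion behind that correction, treating both figures as simple voltage ratios against a 2V output (p-p vs. RMS conventions ignored):

```python
import math

print(2 * 10 ** (-146 / 20))       # ~1.0e-07 V, i.e. about 100 nV
print(20 * math.log10(30e-6 / 2))  # 30 uV against 2 V is only about -96 dB
```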

 

I am suggesting a simple test to try to reproduce what you posted. First, I'd like to see if there is noise produced by CPU load that isn't there when the CPU is idle. Capture a waveform at the output of a DAC with a quality ADC at a high sampling rate, then compare it to the original waveform. No averages, no FFTs, just a simple waveform comparison. Once we confirm that CPU activity causes unusual noise at the output of a DAC, we can then talk about analysis, measurements, audibility, etc. Fair?
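A bare-bones sketch of what that raw comparison could look like (filenames are placeholders, and this ignores the level matching and clock-drift correction that DeltaWave handles properly):

```python
import numpy as np
import soundfile as sf  # assumed available; any WAV reader works

# Hypothetical files: the original test signal and the ADC capture of the DAC output
ref, fs = sf.read("original.wav")
rec, fs2 = sf.read("adc_capture.wav")
assert fs == fs2, "resample first if the rates differ"
if ref.ndim > 1: ref = ref[:, 0]
if rec.ndim > 1: rec = rec[:, 0]

# Coarse time alignment by cross-correlating the first second, then subtract
lag = int(np.argmax(np.correlate(rec[:fs], ref[:fs], "full"))) - (fs - 1)
rec = rec[lag:] if lag > 0 else np.concatenate([np.zeros(-lag), rec])
n = min(len(ref), len(rec))
diff = rec[:n] - ref[:n]

print(f"peak difference: {20 * np.log10(np.max(np.abs(diff)) + 1e-12):.1f} dBFS")
```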

40 minutes ago, PeterSt said:

Then, nobody said that the lowest usage of the CPU implies the lowest noise, no matter how intuitive that may seem. The least noise means: the least varying current draw.

 

Well, yes. But if 99% CPU utilization with many active threads and context switches doesn't generate a difference compared to a 1% load, I don't see how a blinking cursor could do this so easily :)

 

46 minutes ago, PeterSt said:

The task ahead of you is virtually impossible without the necessary experience

 

I have some experience here... having written a full multitasking OS for the 80386 processor in the mid-80s. Task switching, scheduling, memory management, DMA, interrupts, device drivers, storage management, protection rings: all of that is old hat.

 

So I'll try to reproduce your 'blinking cursor' result. But from past experience, this will not be easy. Maybe I can find a DAC or two where the CPU load does affect the output. I'll give it a shot.

 

 

46 minutes ago, fas42 said:

 

You see, Alex, that's where we have a problem here ... the technicals want the cart before the horse - unless something is measurable, it can't possibly exist 😜... the poor ol' universe, out there, struggled with self-confidence for millennia, because humans hadn't worked out how to 'measure' it - only now, slowly, is it starting to feel OK about being so unusual, because people are pumping out more and more numbers about it - its sense of reality will finally fall into place when mankind gets all the i's dotted and t's crossed ... 🙂

 

Trouble is, this tidiness has failed to make it into, say, the medical world - wouldn't it be great if you were not feeling OK, and the best experts of the day proclaimed that they couldn't find anything wrong ... you would be instantly cured, because "it was all in your head", 😉.


You got that wrong, Frank. I don’t need measurements, I just need valid evidence. So far, the closest was from Peter, which is why I want to confirm it.

 

All else was talk about others and what they may or may not have heard, how many people here believe in it, and how the products that claim to reduce CPU load wouldn't exist if it weren't true. This secondhand evidence is not something I can take seriously, especially since it contradicts my own experience.

Patiently awaiting your next car analogy to explain why this is all wrong-headed ;)

 

16 minutes ago, fas42 said:

 

You want visual evidence? - our infamous Gearslutz collection of converter loop captures provides plenty of that, in that every DA/AD chain does it slightly differently - is it such a stretch that extraneous electrical activity, possibly impacting the chain, causes variations also? 😀

 

The bolded bit above is what it's all about - one has to have it happen "in front of one's ears" to start believing things; and it's so, so easy to make excuses when one doesn't want to believe ... "the room was all wrong!"; "it's a distortion enhancing the recording!", "I had too much to drink!" - that's a real goody, 😜, "the recordings were specially selected to make it sound different!", etc, etc, etc, ...

 

When one knows, in the sense of inner knowing, that something is a factor, then it becomes easy to set up an experiment which demonstrates that - to oneself. This has nothing to do with expectation bias - one can try one direction, and it makes no difference; another direction, and things get worse; then the next variation is tried, and a definite gain occurs - this is something like cooking: you play with added ingredients, to see if you can make the flavour better - or should we discard those notoriously unreliable taste buds? 🙂

 

Oh dear ... the car analogy lost out to an upstart - who would have thunk it?


You forget that I've run these exact tests with an ADC loop through multiple DACs and multiple ADCs in the process of developing and testing. If there ever was a dependency on CPU load, it was not at all obvious. I've also run these tests specifically to check for the effect of CPU load, as I was trying to understand all the sources of distortion introduced in such a loop. I didn't find any effect of CPU load, I'm afraid. In fact, different capacitance in the interconnects had a greater effect than any changes to CPU load I tried.

 

Wait! What? No car analogy? Now I’m disappointed ;)

 

7 minutes ago, fas42 said:

 

And the simple answer, in your case, may be that the loops you were using were sufficiently well engineered not to be significantly affected by the types of electrical interference that can occur with changes in CPU usage. Does this mean that all chains are that well engineered?

 

Personally, I have zero desire for all these factors to matter - they make life messy! I don't have a need, as some audiophiles do, to make my rig incredibly sensitive to tiny variations - this strays into hifi being a hobby where the vehicle is everything, and the road you drive on doesn't matter at all.

 

There, there ... you got your analogy, this time ... feeling better, now? 😉


Of course, it could be that my specific DACs/ADCs are immune. As an example, Peter's Lush^2 USB cable actually made an obvious difference with an older Emotiva DAC, but not with any of the others I tried, including a $50 Behringer USB interface.

 

4 minutes ago, mansr said:

I thought Emotiva was better than that. What kind of difference did it make?

 

The Lush produced more jitter and spuriae than a generic USB cable; it didn't improve things. This was with the XDA-2. It may have to do with ground loops or shield connections, since these are configurable on the Lush^2.

1 hour ago, Blackmorec said:

No? Surely it depends almost entirely on WHAT they’re listening to? 

There are one or two folk who have included pictures of their systems, which comprise some fairly decent components, and some of the worst setups I've ever seen. Setups so bad you can see why they can't hear fairly major differences that would otherwise be obvious were their systems decently organised.

 

Personally, I see this as an easy put-down used against anyone you don't agree with: their system isn't transparent enough.
 

But how would you judge someone’s system if you’ve never heard it? By price? By number of Stereophile-approved components? By number of tweaks, such as REGEN devices and Entreq boxes and specialty power cables? Why would it be up to you to decide how good or bad something sounds and not up to the person who actually put the system together?

5 hours ago, PeterSt said:

 

Paul, I hope you understand I wasn't trying to offend you. My experience is/was quite similar, back from the 70s, though on mainframes. But

 

 

that requires different experience and, in the end, altogether different knowledge. When I started with this, I wasn't measuring computers as such, electrically. Why would I have? Why would anyone do such a thing? Why would anyone do such a thing on behalf of audio? He surely would be crazy doing such things (mind you, this still holds today, only not for me). So as I said earlier on, there is no science in it that I can see, but working on these matters for over 10 years brings some empirically found knowledge (yes), which is still hard to describe because it is all unknown, not written about, and even rejected by those who are educated in the field.

 

The 1% load would probably be the worse one, because of more distinct (noticeable) peaks. Trust me.

 

Get yourself a power meter. 20-30 USD. Put it in the mains outlet, and plug your PC into it. Btw, notice that for me this would not be a situation to listen to/through, because it will sound bad. Yep.

 

A PC, hence CPU (and obviously the one I am using, which is a 115W-TDP or so Xeon), is capable of almost "instantly" drawing 20-30W more from the mains. I never measured the "instantly", but I take it to be a few milliseconds (others may know, or may be able to find the data on the latest ATX PSU version). And I say 20-30W because that is what I see myself in an idle environment, not really power-controlled or optimized. Thus this is NOT derived from some kind of instant testing that takes the CPU from an idle state to engaging all of its cores at 100% instantly (theoretically that would imply something like 100W more, as "instantly").

Now, how could this not influence everything and anything hooked to the mains, or whatever goes on over the interlinks (USB) otherwise? Of course, when everything is made for it, the influences vanish. Part of that is a linear PSU, which is slower (yes, think about that too). But when it is too slow, things won't be able to cope and again SQ worsens (this is why a very small-TDP processor never sounds good, but that is actually another story; notice it is related, though, because you can't solve the 20-30W issue by means of a lower-wattage CPU). The headroom of the Porsche vs. the Trabant (sorry).

 

The movement of that cursor is that 20-30W increase. Try it. And once you see it happening, you start to dig into what the heck is going on in that stupid OS, why it happens, and that it is indeed all so.

Not when *I* start playback; then this is all moved out of the way. Now the usage varies by only 1.5W (which I still deem too much, but this is USB-related and out of my real control), with playback (say 32/705.6) going on on top of it.

Btw, the whole OS tweaking (also done by XXHighEnd) takes a normally idle-running system like this, one consuming 90-120W, down to 45W.

Now also consider that 20-30W varying on top of 90W is less impactful than 20-30W on top of 45W. So these elements also play their role.
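That last point as plain arithmetic, taking 25W as a representative swing:

```python
# The same ~25 W swing is a much larger fraction of a 45 W baseline
# than of a 90 W one, so the lower-power system sees more relative ripple
for baseline in (90, 45):
    print(f"{baseline} W baseline: {25 / baseline:.0%} relative swing")  # 28% vs 56%
```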

 

 

I'll stop here, because this isn't supposed to be a lecture or anything of that kind. But it hopefully shows that my systems-programming experience from back in the day doesn't really contribute much to the knowledge required for this stuff. One thing, though: your interest will go in this direction easily because of that base knowledge and experience.

Btw, I recall it took me more than a year to tame the Garbage Collection of the Windows OSes (OSes, because in each version they change things). So this alone justifies some reading of Mark Russinovich's Windows Internals books.

And when all that has been done, we're up to influencing the lot, because now we can finally see it. Next up is whether it can change the sound. Say that this is today, anno November 2019.

 

Obviously, for me the path went the other way around; I first built the influencing from some theoretical ideas (computer playback for the better also being such an idea), that worked, and only then, as a kind of next step, came writing this analysis software.

And all I really learned from there is that the smallest deviation of those graphs from the reference line is the most accurate representation of what's in the data (the file). Already that part is killing, because when it is 16/44.1 Redbook, that won't be what we play; it has to be filtered first ... Anyway, this is why I showed the graph of the less accurate channel. These things *can* be measured. And it is easy to see how a discrete number could be assigned to the deviations.
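For what it's worth, one common way to boil such deviations down to a single figure is an RMS null; here is a minimal sketch (not necessarily Peter's or DeltaWave's exact formula):

```python
import numpy as np

def rms_null_db(ref, test):
    """RMS of the difference, relative to the RMS of the reference (lower is better)."""
    ref, test = np.asarray(ref), np.asarray(test)
    diff = test - ref
    return 20 * np.log10(np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(ref ** 2)))

# Toy example: a 1 kHz tone plus a tiny random deviation nulls to about -77 dB
t = np.arange(44100) / 44100
ref = np.sin(2 * np.pi * 1000 * t)
print(f"{rms_null_db(ref, ref + 1e-4 * np.random.randn(len(t))):.1f} dB")
```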

 

[Attached image: deviation graph of the less accurate channel against the reference]

 

As a matter of fact, I started out with such discrete numbers, but they didn't tell me what was going on ...

Still, it could serve as an accuracy number.

 

 

Sure, it's possible that peaky, noisy power draw can inject junk into the AC line. But this indicates a poor PSU design, one without enough capacitance to handle the demands of the circuit, or without enough filtering and regulation to remove unwanted noise. As it is, when I test, I usually have a laptop running on batteries doing the measurement. That way it's not affected by the AC line and can't create ground loops through the mains.

 

So, can you please describe how the top graph was captured? Was it captured as digital data, or as analog output using an ADC, scope, or analyzer? And at what point in the DAC was it captured?

1 hour ago, Blackmorec said:

How would I judge someone’s system if I’ve never heard it? Now there’s a good question which I’ll happily answer.

 

By recognising the type of speakers and the way they are positioned far too close together, too close to the wall and with no toe-in

By seeing components on a table with lots of loose, unsecured panels that will vibrate, or on a cabinet that's essentially just a huge resonant wooden box

By seeing components stacked on top of one another with no thought to radiated EMI or vibration control

By seeing how power and interconnect cables are tangled with each other and coiled up behind components

By noting the use of cheap power strips with poor contacts and variable earth impedances. 

By seeing speakers positioned right next to and level with large cabinets that will cause huge amounts of diffraction.

By seeing systems installed in highly asymmetrical rooms with walls or windows to one side and complete open space on the other, completely destroying any chance of a sound stage and imaging. 

I could continue, but I'm sure you get the picture (s'cuse the pun).

 

When I look at a system that's been installed without due care and attention, I know several things about how it sounds. I also know that it's not producing true high-fidelity sound that's true to the recording, it's not revealing fine detail, and it's entirely unable to resolve the differences brought by things like better cables. So when people with such systems tell me that cables et al. make no difference, I believe them, because I know exactly why that's the case.

 

The real problem comes when such a system sets the quality benchmark for its owner. It uses good-quality components that review well, so the owner's conclusion is that it must sound excellent and must be capable of reproducing differences between cables, in the event such a thing actually exists.

 

That's reasonable, but mostly superficial. It doesn't take into account the actual component-matching process, or even things like proper positioning of speakers based on measurements rather than rules of thumb. Or what happens with a headphone system. So yes, you can probably eliminate some very obvious, poorly-put-together systems, but anyone who's spent any time on system design and optimization will be missed by the above list.

 

Oh, and I consider any system that's revealing of differences in cables to be poorly designed and engineered :)

 

8 minutes ago, Blackmorec said:

Ah ha. And here's me thinking that physics was about equations, formulae, and graphs, when all the time a few well-worn clichés were all I needed. That damned physics master.

 

But jokes aside: I'm not talking hearsay... because personally I've heard it first hand. It's only hearsay for you, who hasn't, presumably ever.

I want you to try something. I would like you to listen to a piece of music; a Michel Petrucciani Trio album would be great. The guy was truly a genius, so I'm sure you'll enjoy his music. Now, while you're listening, I want you to imagine hearing the following:

  • Increased reverberation from the hall...a faint echo in the RH channel that’s obviously hall ambience, only heard on forte sections where the energy is sufficient to activate the hall’s reflections. 
  • For Michel, I want you to imagine better defined timing and phrasing and more tension in his playing
  • Then I’d like you to add some further detail and definition to Richard Tee’s brush strokes
  • And finally, I'd like you to add a little more timbre and body to Miroslav Vitous's gentle bass plucks....not too much, just a little. 

And I want you to do all this without thinking about it. That's right, I want you to imagine those things subconsciously, but at the same time you're not consciously imagining them; I want you to be consciously hearing them, so you perk up and say, "wow, that sounds better".

I’d also like you to do the same thing next time you listen to this album, or even better every time you listen to it. 

So that’s Michel Petrucciani done....let’s move on to the next album. Any favorites you’d like to imagine improving?  

 

Seriously, this happens all the time. I've been a victim of it many times: listening to music through some fancy new component, finding amazing differences and improvements, only to discover later that I had forgotten to switch the input and was listening to the old component I've had for years. So, what does this prove?

2 hours ago, Blackmorec said:

Actually break in is a pain in the ass, because it really does generate doubt in a new purchase. But there’s a very simple way to avoid it.  Simply install the new component and leave it playing music for a few days without listening. When you come back the transition is complete and you hear the new component’s final sound. So no ears adjusting, just the sound before and after.

 

No, not ears. Your post-purchase paranoia subsides after a few days, but you keep insisting it's the new component breaking in. Admit it to yourself; you'll be a much happier person! (This one is a freebie too, I'm just too nice!)

37 minutes ago, PeterSt said:

 

Paul, why ?

 

Simply because. It's a flaw that would make the device inconvenient to use and render it unusable and unpredictable for a lengthy period of time. If it really exists in a device, it's a broken design. Any engineer aware of such a lengthy break-in period would design around it to ensure the stability of the circuit during break-in. Amazingly, nobody does this. Not in the pro-audio field, not in sensitive electronic equipment that measures individual photons, not in LIGO's extremely sensitive gravitational-wave detectors. So why is it that audiophiles are the only group in the world that suffers from cable and component break-in?

