
Bit perfect software changing sound. How?


Recommended Posts

2 minutes ago, March Audio said:

A PC motherboard ground plane is an absolute mess of noise currents that do not stop because you have fiddled with and reduced a few processes.

Well, that is nothing more than your opinion. There are plenty of people who find an optimized PC sounds better than an Ethernet streamer or a galvanically isolated DDC.

Link to comment
10 minutes ago, idiot_savant said:

But my point is even if we assume this noise is distinct enough from everything else to matter - it's *extremely* difficult to see how optimising a file could reduce this noise. Consider, for example, if you play the same file twice - it's not read from the file system twice, it will be read from the filesystem cache, *even* if the application doesn't explicitly buffer it. I'm also a bit confused about your "lower level language" assertion, could you elaborate?

 

 

 

your friendly neighbourhood idiot

 

 

I'm not sure what's stopping you from playing the same file twice. To get it into RAM, there still has to be an access at some point.
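As an aside, the filesystem-cache point being argued here is easy to observe. A minimal Python sketch (it creates a hypothetical throwaway file; absolute timings vary by system, so none are asserted):

```python
import os
import tempfile
import time

# Create a throwaway 32 MB file so the example is self-contained.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(32 * 1024 * 1024))

def timed_read(p):
    """Read the whole file, returning (data, elapsed_seconds)."""
    t0 = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - t0

first, t1 = timed_read(path)    # may hit the disk
second, t2 = timed_read(path)   # typically served from the OS page cache
assert first == second          # the bytes are bit-identical either way
print(f"first read: {t1:.4f}s, second read: {t2:.4f}s")
os.remove(path)
```

Whether the second, cached read produces different electrical activity than the first is exactly the point in dispute; the sketch only shows that the data itself is unchanged.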

 

There is software that has been written in assembler, like this: https://www.igorware.com/small-player/download (that one is a little too old now, but there are similar approaches even today). There are others written with different approaches, but the common thread is that the code executing during playback is slimmer and requires less activity. Some use specific CPU instruction sets to reduce the total number of accesses. Generic players are built on multiple layers of abstraction so they work on all systems, at the expense of proper optimization for specific capable systems.

5 minutes ago, idiot_savant said:

And I think we've drifted onto two subjects, which are different:

 

(1) Can optimising a PC (e.g. task scheduler, playback software, etc.) affect e.g. RF or EMC in a way that might conceivably affect a DAC?

 

(2) Can we modify a file on the disk to enhance (1)?

 

Just to be clear, I can think of no way for (2) to happen if the *data* is identical on a modern OS - consider if our OS was running a deduplicating file system, where identical files are identified and only one copy is physically kept on the disk? Or a file on a NAS, where we absolutely have no control over it?

 

your friendly neighbourhood idiot

 

I don't think Windows 10 does any data deduplication by default. I haven't seen any evidence of it on my system.

Just now, idiot_savant said:

@manueljenkin - for playing the same file twice, is it equally optimised on the second playback is my point, as it will not have been read.

 

 

As for "fewer accesses" - to what? RAM? Disk? OS calls? Like I say, you have to realise how much stuff is happening *all* the time just to let you see a static screen. Are we really saying some *instructions* sound better than others?

 

 

your friendly neighbourhood idiot

 

Yes. One less source of noise can help. And yes, some instructions "can" sound better than others if we go by this; they definitely have different intrinsic noise and determinism patterns.


Also, @idiot_savant, you've got to understand something. I have tried it, and to me it brought improvements, so I'm searching to see how it truly works at a deeper level (I have a fair idea of the generic principles, but I like to explore further). The other guy seems not to hear any changes (I don't know whether he has tried it properly, but that's his business), so he's interested in "debunking" it with any abstraction he can. And I'm debunking his "abstractions".

 

However, you're speculating without even trying, when it wouldn't cost you anything to try. I'm not sure how refusing an easy test relevant to the discussion is part of the "objective" process.

38 minutes ago, idiot_savant said:

@manueljenkin 

You do realise that in a modern CPU each x86 instruction doesn't really map into what a CPU does any more?

Yes, but it still doesn't mean two different pieces of code get compiled to the same instruction sequences.

 

38 minutes ago, idiot_savant said:

@manueljenkin 

As for noise, consider the following:

a 4k framebuffer is 3840*2160*32 bits = 33MBytes. This has to be read from a framebuffer ( DDR local to a graphics card typically ) at 60 times a second. That's the equivalent of reading 3 CDs a second from DDR and shoving it out of an HDMI port.

And this generates less noise than choosing instructions apparently.

I never said a GPU displaying a wallpaper produces "less noise" than music playback. And of course most of these tools have modes to remove the image-displaying load from the CPU. http://wtfplay-project.org is a command-line OS, so there isn't much to render, and during playback you don't even see the cursor blinking; the image is static. XXHighEnd comes with an unattended mode where only a static image is displayed, and when the next song plays it actually takes time to update (it likely generates the image at that instant and loads it into GPU memory). It would be naive to think the CPU generates every pixel at every instant and loads it into GPU memory for display 😅; there would be no purpose for a GPU then (so the noise figure you mentioned is based on speculation, not fact). The GPU has a parallel pipeline to generate these frames, has its own architecture with its own noise patterns (which need not be as high as the CPU's for the same task), and sends them out via the HDMI port.

 

So we have little to no correlation here. Yes, if you're using a normal desktop environment there will be noise from the GPU, but even then the access noise is a separate thing with its own patterns. Do they influence each other? Very likely. Can one completely mask the differences of the other? Maybe, maybe not! So far my experience has shown the "may not" case. But this is irrelevant for me, since I use my players in unattended mode, and the paragraph above shows why your argument doesn't hold well.

29 minutes ago, idiot_savant said:

@manueljenkin 

I'm the one giving numbers and facts, these aren't speculations. If you really want me to install a VM so I can install some potentially virus-ridden code, and use a debugger to try and find out what this thing does, then I'm afraid you might not like the results. After all, if it's really written in a low-level language, that should be pretty simple to prove

 

Sorry, but installing these on a VM will also likely add overhead and cause issues in the analysis. The code hasn't caused any problems on my system, but YMMV.

35 minutes ago, idiot_savant said:

@manueljenkin - look, I understand - you've heard something, and have become interested in how that might have happened, read some stuff on the internet and so on. That's great! But you can't mistake that for actually knowing how stuff works.

HDMI is constantly running - so I'm not talking about generating frames, I'm saying even for a static image ( black even ), *something* has to store that image *somewhere* to transmit it over the HDMI wire. Now this amount of data (33MBytes)  has to be *continually* sent.

33 MBytes is once again your speculation. If you want to display just command-line text in black and white, I doubt you need more than 4 bits (or even 2) per pixel, nor does the content need to be at 4K resolution. So the total access size might be much smaller, and the GPU can upscale both in color bit depth and resolution (otherwise I don't see much point in these devices being SIMD to begin with). Also, the GPU should be accessing RAM through its own dedicated DMA. All of this assumes the GPU doesn't have a cache, or that the cache isn't used; otherwise the scenario gets even easier.
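For what it's worth, both sides' framebuffer figures are simple arithmetic. A back-of-envelope Python check, using the 4K/32-bit case quoted earlier and a hypothetical 1280x720, 4-bit console as the low-end case:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Size of a single frame in bytes."""
    return width * height * bits_per_pixel // 8

# The 4K desktop case: ~33 MB per frame, ~2 GB/s at 60 Hz.
full_4k = framebuffer_bytes(3840, 2160, 32)
print(full_4k / 1e6, full_4k * 60 / 1e9)

# A hypothetical low-res, 4-bit command-line console: ~0.46 MB per frame,
# roughly 70x smaller than the 4K case.
console = framebuffer_bytes(1280, 720, 4)
print(console / 1e6)
```

The 1280x720/4-bit numbers are illustrative assumptions, not figures from either poster; the point is only that the frame size scales linearly with resolution and bit depth.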

2 minutes ago, PeterSt said:

 

All access is noisy. Noisy for ground-plane coupling over at the DAC, from all sorts of angles and backdoors and direct lines, including galvanically isolated ones. From there, the least that happens is oscillator influence, hence jitter. And there we have it.

Just to add: by jitter, the point to probe is the clock and data pulses being fed to the DAC chip.

3 minutes ago, March Audio said:

Wrong.  The point to probe is the dac analogue output.

There are ways to fake low jitter at the DAC output (and it'll affect SQ poorly even though it'll improve sine-squiggle numbers). The parameters currently measured at the DAC analog output to judge fidelity are not very conclusive. Measuring at the input will show the differences for what they are.

6 minutes ago, March Audio said:

No there aren't.  Please explain these faking methods.

 

Anyway, as I asked, do you have any evidence that problems feed through to the areas you mention?

I can just put a 1 kHz oscillator at the analog section and send a 1 kHz input, but with noisy clock and data lines, and it can still give me very low deviation at the output 😁. Now tell me if it'll produce a nice song if I input a song.

15 minutes ago, idiot_savant said:

 

So my contention is that there are huge amounts of access that you have no control over, and a *very* small amount that you can.

 

So why are we introducing this noise-ridden thing into our precious listening rooms then? It obviously isn't for the screen ;)

 

 

your friendly neighbourhood idiot

Huge vs small is mostly your perception. It's about reducing issues in any area where it's feasible. There's also something known as correlation: certain types of noise correlate more with audio issues (the 8 kHz tizz from 125 µs polling if the system priority is too high, or other issues that cause sudden spikes during this polling) than others. So it's not quite as direct as things may seem, and of course this area is so deep that we don't have any well-established, conclusive correlation metrics yet (and are unlikely to anytime soon; we haven't even figured out human hearing beyond a certain basic abstraction).

 

If you don't want "this" noise-ridden thing, maybe get a vinyl record player (but live with its own flaws).
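For reference, the 8 kHz figure above follows directly from the 125 µs interval (the USB high-speed microframe period):

```python
polling_interval_s = 125e-6           # USB high-speed microframe period
rate_hz = 1.0 / polling_interval_s    # polling events per second
print(rate_hz)                        # ~8000, i.e. an 8 kHz fundamental
```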

2 minutes ago, idiot_savant said:

Huge: 2Gbytes/second

small:  176.4kbytes/second.

 

I don't think that's perception, that's pretty simple math.

 

So I can't use a CD player then?

 

your friendly neighbourhood idiot

 

 

You surely can use a CD player. But then it has its own problems that you may have to correct, which could get expensive, and I can't turn it into a PC for general-purpose computation when I need one (when I'm not listening to music).
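The "huge vs small" numbers a few posts up check out as plain arithmetic (16-bit/44.1 kHz stereo PCM versus the 4K/60 Hz framebuffer stream discussed earlier):

```python
cd_rate = 44_100 * 2 * 2          # 16-bit stereo PCM: 176,400 bytes/s
fb_rate = 3840 * 2160 * 4 * 60    # 4K, 4 bytes/pixel, 60 Hz: ~2 GB/s
print(cd_rate, fb_rate, fb_rate // cd_rate)  # ratio is roughly 11,000x
```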

12 minutes ago, Racerxnet said:

I'd suggest that you are the one with fairy dust at the end of a rainbow, looking for a pot of gold. This is an objective forum: show proof or take it elsewhere. You deleted my reply like a wimp with no credible evidence. They pushed March Audio out on the other thread. Provide evidence on the software you are supporting.

Well, you were building on a claim by @Currawong and putting blame on the developer with your narrative skewing. The dev never claimed the software does defragmentation, and there is no evidence that it does. Your post might have been worthy elsewhere, but it is off-topic and misleading in this thread, with an irrelevant and flawed basis, so there's nothing wrong with me deleting it.

 

And here you make an abusive remark about me (calling me a wimp).

12 minutes ago, March Audio said:

So considering that pretty much all audio recordings are made with computers these days I suppose you think that the quality is stuffed before we replay it at home?

Depends on how the computer is optimized (both physically: power supplies, PCIe-to-USB/audio-out cards and their clocks, etc., and software-wise), and on how the DAW code is written and what it does. I've heard good references for the SQ of DAWs like Steinberg WaveLab, but that's another topic.

17 minutes ago, idiot_savant said:

Or is it better to create loads of alleged problems with "access noise" for myself, and then have to keep plugging and unplugging screens etc whenever I want to switch tasks?

Different strokes for different folks. I see more value in a $3000 PC that I can use for data crunching and then turn into an audio transport with acceptably low noise when I want to listen to music (if I'm critically listening, I'm unlikely to be doing any other task, but that's me).

2 minutes ago, PeterSt said:

 

You can't. But we must wonder whether it is about those. I have no real evidence of that. And people over at Phasure really tried in this realm (in vain, if you'd ask me).

But the audio playing PC itself and its main (Linear) PSU are of vast importance. More important than a DAC these days.

PC PCB design is generally very high-level stuff (very large multi-layer boards), and the power-supply design (regulators, etc.) is very intricate as well (including the power-delivery circuitry inside the processors). I am not sure it is the same SMPS we encounter in other generic devices (there have been massive developments on this front in the low-power area, and they have also been successfully extended to certain areas of audio; my Burson Fun uses an SMPS that sounds very good). Certain ports can be an afterthought in things like laptops, but I'm not sure any such thing happens in workstations.

 

Of course an unclean power supply feeding all of this may leave residuals, so improvement in any area will be worthwhile, but the power-delivery system on PC motherboards isn't as bad as we might think.


Well, your arguments again stem from an assumption that the PC power supply is sloppy work. Far from it. The 12 V supply is regulated in multiple stages to ensure there is enough buffering in place to absorb any disruption that changes in power consumption would bring, and it is generally very low noise because it has to run through multiple layers in the CPU. Can it be improved by a better power-supply input? Surely yes, and a better input can also help the rest of the PCB. You can afford this much buffering and filtering because it is power (a fixed voltage and current with some transient deviation). But you can't apply these multiple levels to data, which is a switching sequence of pulses, or you'll lose speed.

 

Data lines are hence non-deterministic, as mentioned earlier. There aren't many ways to fully control them other than controlling your software, and since they use ground as a reference, they both play together, creating the problems.

 

 

1 minute ago, Racerxnet said:

As much as I try not to buy into the magic, I bought the JCAT card and did a comparison with the DACup ports on the Gigabyte board. I could not hear a difference. What the JCAT did do is provide a more robust connection. It seems that the Gigabyte boards suffer from lag/interrupts to the connected devices on USB. I run a custom BIOS which gives me greater control to stabilise the OS, RAM timings, and CPU. Maybe people should start tweaking the BIOS for better compatibility and a stable OS first.

 

I did notice a definite improvement with the Berkeley Alpha USB to Spdif converter. I have isolated the noise/jitter fed to the DAC. 

Thanks for the reference. It's the first time I'm hearing of a motherboard manufacturer giving their USB ports this much attention, at least in terms of power-supply regulation. No idea about the JCAT, though it does seem to use high-quality components (last I remember seeing a Crystek oscillator and an NEC µPD720201).

This topic is now closed to further replies.


