
Bit perfect software changing sound. How?



Righty-ho, please nobody attack me :)

 

I've kind of not really read this thread as it appears to be degenerating rapidly, but this *is* the objective-fi forum, so let's see what we can objectively ascertain, no?

 

1: So we have some playback software that claims to improve sound quality (SQ) on playback. Assertions have been made both for and against this, but as far as I can see we don't have any measurements, making it a purely subjective claim.

2: An additional feature is the "optimise" function, which can be run more than once and which creates supposedly improved copies of the original source material. These improvements are apparently very fragile. Again, there are subjective claims both for and against, and the files have been nulled successfully (although I'm not sure anyone has done a simple file compare or a CRC - see the sketch below).
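For anyone who wants to run that check themselves, here's a minimal sketch in Python (the file names are placeholders for an original and an "optimised" copy):

```python
import filecmp
import hashlib
import zlib

def crc32_of(path, chunk_size=1 << 20):
    """CRC-32 of a file, read in 1 MB chunks."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def sha256_of(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file names - substitute your own original and "optimised" copy.
a, b = "original.wav", "optimised.wav"
print("byte-identical:", filecmp.cmp(a, b, shallow=False))
print("CRC-32:        ", hex(crc32_of(a)), hex(crc32_of(b)))
print("SHA-256:       ", sha256_of(a), sha256_of(b))
```

If the two copies come out byte-identical, any claimed difference has to come from somewhere other than the data itself.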

 

So, does the claimed improvement only work with the associated software player, or with all players? Is it PC-specific? Can more than one OS be used?

 

Now, if the improvement only works on one player, it is feasible that the contents of a file could be restructured whilst maintaining the actual reconstructed audio, and that this is in some way beneficial to that player - anyone who has ever attempted to write an audio file parser will understand this (typically there are a bunch of "chunks", which can contain audio, metadata and other guff).
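To make the "chunks" point concrete, here's a minimal sketch (Python, with a placeholder file name) that walks the top-level chunks of a RIFF/WAVE file. Two files can carry identical audio in their "data" chunk yet still differ in chunk order, padding or metadata:

```python
import struct

def list_chunks(path):
    """Print the id, size and offset of each top-level chunk in a WAV (RIFF) file."""
    with open(path, "rb") as f:
        riff, riff_size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a RIFF/WAVE file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            print(chunk_id.decode("ascii", "replace"), chunk_size, "bytes at offset", f.tell())
            # Chunks are word-aligned: skip the payload plus a pad byte if the size is odd.
            f.seek(chunk_size + (chunk_size & 1), 1)

list_chunks("example.wav")  # placeholder file name
```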

 

Actually, what file types can you optimise? WAVs? DSF? FLAC? All files?

 

If it is attempting to interact with the file system, does it require administrative rights?

 

your friendly neighbourhood idiot

 

 

 

 

 


@The Computer Audiophile, in the *very* opening post on this sub-forum:

 

Other than the logical uses of this sub-forum, it will also be used to house objective-based comments that seek to refute someone's personal experience  posted elsewhere. For example, if someone says that two bit identical files sound different, the new Objective-Fi sub-forum is the place to begin / continue the discussion unabated and away from appeals to authority or unprovable psychoacoustic comments. 

 

 

your friendly neighbourhood idiot


I'm sorry?

There's a very fundamental problem with this theory, which I'll attempt to explain for anyone who cares, and it is this:

The application *cannot* have the level of access to the file that would be required to alter the kind of things you are talking about. As you have said yourself, there is enormous non-determinism in a PC - there are typically three levels of CPU cache, plus a file cache run by the OS, and there are people whose entire job is dealing with caches. Because of this, you simply *cannot* guarantee with any certainty when anything is read or written.

The whole function of an OS is to provide "abstraction" from the hardware - so that an application doesn't care whether the file is on a USB stick, HDD, SSD, etc., or even what brand it is - and the OS will actively block any attempt by an application to gain lower-level access.
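As a rough illustration of that abstraction (the paths below are hypothetical, and the raw-device part is Windows-specific): ordinary file I/O looks exactly the same whatever device is behind the path, whereas getting underneath the filesystem means opening the raw device, which the OS only permits with elevated rights - which ties back to my earlier question about administrative rights.

```python
# Ordinary file access: the application neither knows nor cares whether the
# path below is backed by an SSD, HDD, USB stick or network share.
with open(r"D:\music\track.wav", "rb") as f:   # hypothetical path
    header = f.read(4096)

# Bypassing the filesystem means opening the raw volume or disk instead, which
# on Windows requires Administrator rights and is refused otherwise.
try:
    with open(r"\\.\PhysicalDrive0", "rb") as raw:   # raw disk, not a normal file
        pass
except PermissionError:
    print("raw device access denied - a normal application doesn't get this")
```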

You're also leaving out several layers of RAM access between the original file and the USB interface (the playback software will almost certainly use the OS audio stack, which talks to the USB audio driver, and so on).

 

You also have to consider how much *stuff* a CPU is doing when you're just looking at a static screen; any extraneous noise added by reading a few megabytes of data will be swamped by this. I honestly don't doubt that you perceive a difference, but I equally honestly believe that if things were as "fairytale" as you are stating, we wouldn't be having this discussion, because the internet wouldn't work.

 

your friendly neighbourhood idiot

 


But my point is that even if we assume this noise is distinct enough from everything else to matter, it's *extremely* difficult to see how optimising a file could reduce it. Consider, for example, playing the same file twice - it's not read from the file system twice, it will be read from the filesystem cache, *even* if the application doesn't explicitly buffer it (a rough sketch of this is below). I'm also a bit confused about your "lower level language" assertion - could you elaborate?
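A rough way to see the filesystem cache doing this (Python; the file name is a placeholder, and the exact timings will vary wildly from machine to machine):

```python
import time

def timed_read(path):
    """Read a whole file and return (seconds taken, bytes read)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

# The second read is normally served from the OS page cache rather than the disk,
# so it is typically much faster - even though the application did nothing special.
for attempt in (1, 2):
    seconds, size = timed_read("example.flac")  # placeholder file name
    print(f"read {attempt}: {size} bytes in {seconds * 1000:.1f} ms")
```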

 

 

 

your friendly neighbourhood idiot

 

 


And I think we've drifted onto two subjects, which are different:

 

(1) Can optimising a PC (e.g. the task scheduler, playback software, etc.) affect things like RF or EMI that might conceivably affect a DAC?

 

(2) Can we modify a file on the disk to enhance (1)?

 

Just to be clear, I can think of no way for (2) to happen if the *data* is identical on a modern OS. Consider if our OS were running a deduplicating file system, where identical files are identified and only one copy is physically kept on the disk. Or a file on a NAS, where we have absolutely no control over it.

 

your friendly neighbourhood idiot

 


@manueljenkin - my point about playing the same file twice is this: is it equally "optimised" on the second playback, given that it will not have been read from disk again?

 

 

As for "fewer accesses" - to what? RAM? Disk? OS calls? Like I say, you have to realise how much stuff is happening *all* the time just to let you see a static screen. Are we really saying some *instructions" sound better than others?

 

 

your friendly neighbourhood idiot

 


@manueljenkin - right, this is the objective forum. If you're going to make these factual claims:

 

They definitely have different intrinsic noise and determinism patterns.

 

then I'm sure you can back them up? You do realise that in a modern CPU each x86 instruction doesn't really map directly onto what the CPU actually does any more (instructions are decoded into micro-ops, reordered and so on)?

 

Does the optimisation work on an NTFS-compressed drive, as supported natively by Windows 10?
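(As an aside, checking whether a particular file is actually stored NTFS-compressed is a couple of lines on Windows; the file name below is a placeholder, and st_file_attributes only exists on Windows builds of Python.)

```python
import os
import stat

# Windows-only sketch: report whether a file is stored NTFS-compressed.
attrs = os.stat("example.wav").st_file_attributes   # placeholder file name
print("NTFS-compressed:", bool(attrs & stat.FILE_ATTRIBUTE_COMPRESSED))
```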

 

As for noise, consider the following:

A 4K framebuffer is 3840 x 2160 pixels x 32 bits, which is about 33 MB. This has to be read from the framebuffer (typically DDR local to the graphics card) 60 times a second. That's the equivalent of reading about 3 CDs' worth of data per second from DDR and shoving it out of an HDMI port.

And this generates less noise than choosing instructions apparently.
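The sums behind that, for anyone who wants to check them (plain arithmetic, nothing more):

```python
# 4K framebuffer scanout versus CD-quality audio, in bytes per second.
width, height, bytes_per_pixel, fps = 3840, 2160, 4, 60      # 32 bits per pixel
frame_bytes = width * height * bytes_per_pixel               # ~33 MB per frame
scanout_rate = frame_bytes * fps                              # ~2 GB/s, read continuously

cd_audio_rate = 44_100 * 2 * 2                                # 44.1 kHz, stereo, 16-bit

print(f"frame: {frame_bytes / 1e6:.1f} MB, scanout: {scanout_rate / 1e9:.2f} GB/s")
print(f"CD audio: {cd_audio_rate / 1e3:.1f} kB/s, ratio: {scanout_rate / cd_audio_rate:,.0f}x")
```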

 

I'm the one giving numbers and facts; these aren't speculations. If you really want me to install a VM so I can install some potentially virus-ridden code, and use a debugger to try to find out what this thing does, then I'm afraid you might not like the results. After all, if it's really written in a low-level language, that should be pretty simple to prove.

 

 

your friendly neighbourhood idiot

 

 


@manueljenkin - look, I understand - you've heard something, and have become interested in how that might have happened, read some stuff on the internet and so on. That's great! But you can't mistake that for actually knowing how stuff works.

HDMI is constantly running - so I'm not talking about generating frames; I'm saying that even for a static image (even an all-black one), *something* has to store that image *somewhere* in order to transmit it over the HDMI wire. And that amount of data (33 MB) has to be *continually* sent.

 

Now, in a good PC with a separate GPU, that GPU has its own DDR that is being read; but a more generic PC will have something like an APU that uses *the same DDR* as everything else. Internally the framebuffer might be cached, but even you must see that a 2 GB/s RAM access stream and a 176 kB/s RAM access stream inside the same box are orders of magnitude apart?

 

You might claim there is no correlation, but how is reading a framebuffer to keep a static image less intensive than reading a buffer to provide audio to a DAC? This has *nothing* to do with how you create that image.

 

And you do know that your CPU is *constantly* accessing memory anyway, right, even if that access is just to say "nothing to see here"?

 

your friendly neighbourhood idiot


Erm,

 

I'm not talking about *updating* the screen.

*sigh*

So... application -> many, many layers of graphics stuff -> framebuffer -> HDMI.

If you crash a PC, does the screen turn off? Or - maybe, just maybe - the last thing the OS does is draw a blue screen into the framebuffer, which still has to be sent out by the GPU.

 

your friendly neighbourhood idiot

 


Can people not do maths or read on this forum?

I'll do it again for @manueljenkin

Let's take a really low-res screen, say 720p. HDMI *requires* 8-bit-per-channel sRGB as a *minimum*. So *each frame* is 1280 (width) * 720 (height) * 3 bytes (24 bits/pixel).

This gives us about 2.8 MB per frame, 60 of them a second, i.e. roughly 166 MB/s - over 900 times the bandwidth of CD-quality audio.

My earlier 33 MB/frame figure was for 4K at 32 bits/pixel, which is fairly common.

It doesn't matter what you *show* - be it a command line, a blank screen, whatever - there is a framebuffer that is *written to* by the CPU/GPU and *read by* a constantly running framebuffer->HDMI bit of hardware. I'll repeat: how can adding or removing a few instructions come close to a 900x bandwidth increase (at a minimum)?
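The 720p version of the same sum, scripted so anyone can rerun it:

```python
# 720p framebuffer scanout versus CD-quality audio.
frame_bytes = 1280 * 720 * 3           # 8 bits per channel, sRGB: ~2.76 MB per frame
hdmi_rate = frame_bytes * 60           # ~166 MB/s, sent whether the image changes or not
cd_rate = 44_100 * 2 * 2               # CD audio: ~176.4 kB/s

print(f"{frame_bytes / 1e6:.2f} MB/frame, {hdmi_rate / 1e6:.0f} MB/s, "
      f"{hdmi_rate / cd_rate:.0f}x CD-quality audio")
```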

 

This is *not* speculation; these are actual sums done on actual specs.

 

@PeterSt - so now I'm not allowed to use a screen?

 

your friendly neighbourhood idiot

 

 

 

8 minutes ago, PeterSt said:

All access is noisy. Noisy for ground-plane coupling in over at the DAC from all sorts of angles and backdoors and direct lines including galvanically isolated ones. From there the least what happens is oscillator influence, hence jitter. And there we have it.

 

So my contention is that there is a huge amount of access that you have no control over, and a *very* small amount that you do.

 

So why are we introducing this noise-ridden thing into our precious listening rooms then? It obviously isn't for the screen ;)

 

 

your friendly neighbourhood idiot

1 minute ago, manueljenkin said:

You surely can use a cd player. But then it's got its own problems that you may have to correct and could get expensive and I can't turn it to a pc to do general purpose computation when I need it to (when I'm not listening to music).

 

What problems does the CD player have that I have to correct?

Why can't I have a PC that I turn off when I'm listening to music, *and* a CD player that I can listen to as well? Maybe even both at once?

 

Or is it better to create loads of alleged problems with "access noise" for myself, and then have to keep plugging and unplugging screens etc whenever I want to switch tasks?

 

I'm genuinely confused - we apparently add noise that we then have to jump through hoops to ameliorate, when we didn't have to add it in the first place?

 

your friendly neighbourhood idiot

2 minutes ago, PeterSt said:

RDC man, RDC !

 

RDC? Is that some kind of remote desktop (like RDP)? Frankly, I find those things extremely unsatisfactory.

 

As for the 705,600 Hz stereo stream, that's about 5.6 MB/s (at 32 bits per sample) - still somewhat smaller than the ~2 GB/s of "accesses" in my example.

 

And isn't all this "noise" transferred mysteriously via EMC or RF? So this PC with a screen can't be in the listening room?

 

your friendly neighbourhood idiot

This topic is now closed to further replies.