
Why does latency or offset change the sound?



OK, it's another "explain something to the idiot" thread!

 

As I understand it, latency of playback means the delay between sound being processed by, e.g., the playback software and it coming out of the back of the computer. Am I wrong, and if so (or even if not), how can this change the sound?
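To put rough numbers on that definition (back-of-envelope Python; assuming the delay is dominated by the driver's buffer, which is the usual knob, and a 44100 Hz sample rate):

def buffer_latency_ms(buffer_samples, sample_rate_hz=44100):
    # time to play out one driver buffer = its length in samples / sample rate
    return 1000.0 * buffer_samples / sample_rate_hz

for buf in (64, 256, 1024, 4096):
    print(buf, "samples ->", round(buffer_latency_ms(buf), 2), "ms")
# 64 -> 1.45 ms, 256 -> 5.8 ms, 1024 -> 23.22 ms, 4096 -> 92.88 ms

So we're talking tens of milliseconds at most between settings, all before the music even starts.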

 

your friendly neighbourhood idiot

 


 

IS, you probably know more about this than me, but I'll bite.

 

I'm not aware of latency being of any importance in playback, so long as the latency is the same in each channel, obviously.

 

The most likely occurrence of latency (in digital playback) would seem to be when DSP is employed.

 

We do know that Sonic Studio are not too concerned about latency, as evidenced by the fact that Amarra has noticeably more latency than iTunes. This seems to fly in the face of their claims of more efficient processing and of no DSP (unless invoked by the user) - to me, anyway.

 

Latency in recording is a much more significant issue, and occurs not only within DSP, but also in the ADCs.

cfmsp,

 

Thank you for your opinion, which you have expressed extremely succinctly. One of the reasons ASIO was invented was to reduce latency - this was much more to do with feedback to performers (e.g. letting them hear themselves through monitoring systems) than with playback quality - but it seems that a common opinion is that lower latency = better sound. I guess I'd better come clean and say that I can't see how it can make an improvement, but I await (and encourage) opinions to the contrary.

 

As far as Amarra goes, my position has been made quite clear on the forum - I'm sceptical, but until my demo version comes through, I'm not going to make judgements.

 

I promise that I'm only here to try and help people think about things - I'm hoping that my postings help people separate the signal from the noise for themselves.

 

your friendly neighbourhood idiot

 


I was convinced for a while that I could hear differences depending on where the latency sliders were in ASIO, and there were plenty of people around on internet forums reporting the same thing. Curious as to why, I researched it as best I could, but could find no rational explanation as to why it would make any difference. I listened a little more skeptically and - nope - I hear no difference after all. Funny how brains work. So now I set the sliders to the max and forget about it.

 

I.S., something tells me if there was a valid explanation you'd be the first to let us know.

 

Re Amarra, don't forget to allow sufficient time for it to burn in before evaluating it critically.

 

OE

 

hFX Classic fanless i7 SSD > Locus Nucleus / SW Diverter HR > RWA Isabella LFP-V Pro / New Sensor Genalex Gold Lion E88CC > ALO Sennheiser HD 800 balanced


As Olive said, it has been commonly known for many years, and I don't think I was the one who started it back then.

Contrary to Olive - as I explained in the other thread - I very often forget to set it back to minimum, and I have always noticed it the same day of listening. No exception.

 

Ah, you asked "how come". Right.

I don't know.

As I said elsewhere on how the software matters, it will have power (consumption) implications on the DAC. Funny thing is though - and I'm only realizing this while writing this - while I especially built a DAC to overcome this, including the influence from software (make it immune with super shunt stuff and all) ... this goes through the soundcard first really (passing through SPDIF), so now I'm not sure anymore what I tried with that DAC. It still will be so that the soundcard uses a buffer of 48 samples (in my Fireface case + 116 or so) instead of 1024 etc., so I guess it communicates with the DAC more often at the lower latencies, but then ... where is the large buffer in the DAC?

 

So, I don't know.

Maybe it influences the outgoing jitter on the soundcard ...
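Back-of-envelope for those two buffer settings (my sketch; 44100 Hz assumed, ignoring the extra +116 or so samples):

SAMPLE_RATE = 44100  # assumed; the Fireface buffer sizes above are what matter

for buffer_samples in (48, 1024):
    fills_per_second = SAMPLE_RATE / buffer_samples
    print(buffer_samples, "samples ->", round(fills_per_second, 1), "fills/second")
# 48 samples -> 918.8 fills/second
# 1024 samples -> 43.1 fills/second

So at the low setting the soundcard is being serviced roughly twenty times as often.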

 

Peter

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Since no one has come up with any theories here, I've been doing some thinking, and here's how it goes:

It can't be anything to do with the latency per se, as this would (effectively) mean that the moment I choose to press "play" affects how the software sounds (since that is all latency is).

So, how about processing that may be involved? Surely data going through a buffer can't make any difference (especially since there are loads of buffers in a PC/Mac)?

Now, here's the thing. The way playback software works (Peter, feel free to correct me here) is that it reads from the file (or a buffer, whatever) and presents the data to the audio subsystem of the OS. The OS then does whatever it does, and indicates to the playback software when it needs some more (in reality it is double, possibly triple, buffered - it plays back one buffer while the software fills up another, and switches them when it needs to).

So, the lower the latency, the smaller the buffer. Surely this means that the playback software (and the OS) has more work to do at lower latency, since the OS will be requesting data more often? And this will require more CPU usage (which some people say is bad for the sound)?
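In rough Python, the scheme I mean looks like this (purely a sketch: 'PrintDevice' stands in for the OS audio subsystem and a BytesIO for the decoded file):

import io

BUFFER_SIZE = 1024  # halve this and the loop below has to run twice as often

def playback_loop(source, device, buffer_size=BUFFER_SIZE):
    front = source.read(buffer_size)      # the buffer about to be played
    while front:
        back = source.read(buffer_size)   # fill the second buffer ...
        device.play(front)                # ... while the first one plays out
        front = back                      # swap and repeat
    # a real driver pulls data via callbacks rather than being pushed to,
    # but the work-per-second argument is the same either way

class PrintDevice:
    def play(self, chunk):
        print("played", len(chunk), "bytes")

playback_loop(io.BytesIO(b"\x00" * 8192), PrintDevice())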

 

your friendly neighbourhood idiot

 


But in the end that is maybe only 50% of it.

 

CPU usage *will* be higher, because the *driver* is busier dealing with ... whatever it deals with (you can just check it), but the buffer the program sees does not change size (not in Vista/WASAPI anyway).

As I said in the other thread, the buffer the program sees can be changed by the program, but it won't happen by setting the latency in the driver.

 

In the end it is as you say ... double buffering etc.; the software only sees the first buffer (in the DAC direction).
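One way to picture that decoupling (my sketch, not actual WASAPI code): the program fills at its own block size, the driver drains at whatever the latency setting says, and neither side sees the other's size.

from collections import deque

ring = deque()  # stands in for the shared buffer between program and driver

def program_write(block_samples=4096):    # block size chosen by the player
    ring.extend([0] * block_samples)

def driver_drain(chunk_samples):          # chunk size set by the latency slider
    n = min(chunk_samples, len(ring))
    return [ring.popleft() for _ in range(n)]

program_write()
print(len(driver_drain(48)), len(driver_drain(1024)))  # -> 48 1024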

 

Peter

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


So, we have the folks at Amarra saying latency isn't an issue, Peter saying it is.

 

The cPlay explanatory notes on AA suggest that the driver buffer level should match the player's output latency, measured in samples (which might suggest it's not a matter of high or low per se); that the player's buffer setting should be large when the file is at 48k or less and small for output rates above 96k; and that in all cases the buffer sizes work best with low ASIO buffer/latency settings of less than 512 samples.

 

The ASIO4ALL release notes, on the other hand, state that audiophiles should just set the sliders to the max (implying that latency is not an issue in playback).

 

I think all a punter can really do is listen and decide what they like, but out of interest I would like to know the reasoning behind why it matters, if it matters.

 

hFX Classic fanless i7 SSD > Locus Nucleus / SW Diverter HR > RWA Isabella LFP-V Pro / New Sensor Genalex Gold Lion E88CC > ALO Sennheiser HD 800 balanced


FWIW: Amarra (Sonic) only says that they have no interest in low latency, "because this is audio, and not (live) performing". Besides, for them it may justify the slow response?

 

In the end, everybody not recognizing (or knowing) that it makes a difference for SQ will state that it is better to have a longer buffer; it avoids dropouts as well as possible ...

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Peter,

 

If buffering makes a difference in the sound, wouldn't this be a(nother) likely explanation for SSDs sounding better?

 

Presumably limiting the amount of hard disk access would be the greater positive impact from using SSDs, but eliminating buffering interactions would seem to benefit the sound even if the music were stored on an external disk, yes?

 

 

clay


As for this one, ALL modern hard disks have a fairly substantial cache built in - typically 8 MB or so, though I'm not as up to date as I should be on these things :(

Additionally, ALL OSes will use most of the available RAM to act as a disk cache.

 

What does this mean?

In all cases, all of the time, the data read by audio software is read from RAM. SSD, HDD, compact flash, USB pen drive, typing samples into a WAV file...
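Easy to check for yourself (a little Python sketch; "some_track.wav" is a placeholder path, and the first read will only be slow if the file isn't already cached):

import time

def timed_read(path):
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return len(data), time.perf_counter() - t0

# the second read comes from the OS page cache - i.e. RAM - no matter
# what kind of drive the file lives on
for attempt in (1, 2):
    size, secs = timed_read("some_track.wav")
    print("read", attempt, ":", size, "bytes in", round(secs * 1000, 1), "ms")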

 

The latency issue (to me) sounds for all the world like there was a really good reason for reducing latency (feedback to the performer during recording, for example) which has been taken as a "golden rule" for playback (where it is irrelevant). Indeed, as far as I can tell, decreasing latency increases CPU processing and makes software MORE prone to clicks/pops (as buffers underflow), whereas increasing it only increases the time between you pressing "play" and the music starting.

 

Please, if you feel that lowering the latency setting helps, contrary to what I think, I'm not going to say "you're wrong" - I just can't see it, and would basically welcome it if you could re-run any tests you've done.

 

your friendly neighbourhood idiot

"I'm not going to say "you're wrong" - I just can't see it, and would basically welcome it if you could re-run any tests you've done."

 

i_s, I am not sure whether you are addressing me, but assuming you are ...

 

I now recall that - where I could easily show differences between player settings and players - I could NOT show or prove that this latency thing causes differences by changing (in my case) Fireface settings.

I almost forgot about that ...

 

I must add something though:

Because of the necessity to use related clocks (otherwise two subsequent takes of the same thing would always look different), I reasoned that I couldn't capture jitter variations this way. Don't ask me to re-do that reasoning, but it was part of the whole lot of interpretation, with the conclusion (or assumption, maybe) that I was never looking at jitter differences in the first place. Still dangerous, because quite a few types of jitter exist, but what to say if I take 10 recordings (digital -> analogue -> digital) with the same settings and they are all exactly the same apart from one LSB? I say: either there is no jitter (unlikely) or I can't see it because of the (clock) means used (also unlikely to me, but if I must choose ...).

 

Long story short: the first thing I did (obviously) was measure the differences that were known to be audible. They all showed up, apart from that latency thing ...
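For what it's worth, the comparison of those 10 recordings boils down to something like this (a sketch, assuming the captures are already aligned 16-bit sample arrays):

import numpy as np

def max_lsb_difference(capture_a, capture_b):
    # largest per-sample difference between two aligned captures, in LSBs
    a = np.asarray(capture_a, dtype=np.int32)
    b = np.asarray(capture_b, dtype=np.int32)
    return int(np.max(np.abs(a - b)))

# two hypothetical loopback takes, identical apart from one LSB here and there
take1 = np.array([100, -250, 3071], dtype=np.int16)
take2 = np.array([101, -250, 3070], dtype=np.int16)
print(max_lsb_difference(take1, take2))  # -> 1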

 

Clay, a buffer is not necessarily memory, although most often it is. But I think - thinking in the context here - it needs to be seen that it is always about the *number of* buffers. That is what takes the processing (and throughput time): copying from one to the other. In XXHighEnd alone I easily use 6, but they are all outside of the real playback. Just think: something like a FLAC needs to go to WAV first before it can be offered to the OS buffers. Put the WAV into memory and you have added a buffer; put it straight into the OS buffer (which can't happen with FLAC) and you have avoided one.
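Schematically (my sketch; names are illustrative and 'decode' stands in for the FLAC decoder):

def play_wav(wav_pcm, os_buffer):
    os_buffer.write(wav_pcm)     # one copy: file data straight into the OS buffer

def play_flac(flac_bytes, decode, os_buffer):
    pcm = decode(flac_bytes)     # copy 1: decode into an intermediate PCM buffer
    os_buffer.write(pcm)         # copy 2: intermediate buffer into the OS buffer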

 

Peter

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

