mansr Posted November 16, 2018
2 hours ago, diecaster said: "Why would latency make a difference in sound quality?"
Latency of what?
mansr Posted November 17, 2018
14 hours ago, LTG2010 said: "AudioLinux is based on realtime custom kernels and on the work of that part of the linux community trying to achieve very low audio and processor latencies." See here: www.audio-linux.com
Again, latency of what? From which event to what action?
mansr Posted November 17, 2018
14 minutes ago, LTG2010 said: "I think it's partly based on DPC latency and how the OS prioritizes tasks. I'm no expert, just enjoying the extra clarity over the Windows setup I was using previously."
DPC is a Windows term.
mansr Posted November 17, 2018
4 minutes ago, LTG2010 said: "which is measurable in Linux and used in kernel design."
How can something that doesn't exist in Linux be measurable there?
mansr Posted November 17, 2018
29 minutes ago, LTG2010 said: "Realtime processor latency tests were carried out using an oscilloscope test in Linux as well as others, and compared with a DPC latency tester in Windows. It's detailed on their website if of interest."
So what latency did they measure, and what is the relevance to audio playback?
mansr Posted November 17, 2018
15 minutes ago, One and a half said: "This noise can be a wide range of frequencies, and the common 0V is the medium where this causes the most problem, also known as common mode noise."
No, that is not what common-mode noise means.
mansr Posted November 18, 2018
7 hours ago, nbpf said: "In this context, latency typically denotes the time it takes for a process to react to an interrupt signal."
I know that. The question was which interrupt, and what must the software do in response? What is the deadline, and how was it calculated? Unless those questions are answered, simply claiming "lower latency" is meaningless.
mansr Posted November 18, 2018
52 minutes ago, nbpf said: "I do not think so. A scheduler is necessarily unaware of what the specific processes it manages are actually performing. A real-time scheduler simply attempts to reduce the max. latency as measured by standard latency tests like cyclictest, see https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/cyclictest/start. Perhaps this is achieved at the expense of average latencies or of min. latencies. It is very easy to measure the impact of different system setups (e.g. fixed vs. variable CPU frequency, running a system from RAM or from disk, standard vs. real-time kernel) on latency tests. Whether these differences have an impact on sound quality and, if so, why, and what would be meaningful measures of such an impact, are further interesting questions."
The scheduler latency measured by that tool is just one of many in a system. In relation to audio, all I'm seeing is vague claims about something having "lower latency" without specifying which latencies, what the bounds are, or why it matters.
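[For readers unfamiliar with cyclictest: it repeatedly arms a timer and records how much later than requested the thread actually wakes up. The following is a minimal Python sketch of that idea, not the real tool, which uses clock_nanosleep and real-time scheduling priorities; the interval and iteration count are arbitrary.]

```python
import time

def measure_wakeup_latency(interval_s=0.001, iterations=100):
    """Sleep for a fixed interval and record how much later than
    requested the process actually woke up, in microseconds."""
    overshoots = []
    for _ in range(iterations):
        t0 = time.monotonic()
        time.sleep(interval_s)
        t1 = time.monotonic()
        overshoots.append((t1 - t0 - interval_s) * 1e6)
    # cyclictest reports min/avg/max in the same spirit
    return min(overshoots), sum(overshoots) / len(overshoots), max(overshoots)

mn, avg, mx = measure_wakeup_latency()
print(f"min {mn:.1f} us, avg {avg:.1f} us, max {mx:.1f} us")
```

On a stock kernel the max figure can spike by orders of magnitude under load; a PREEMPT_RT kernel bounds it. Whether that bound matters for playback is exactly the question being debated here.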
mansr Posted November 18, 2018
6 minutes ago, jabbr said: "the option of running in RAM"
That's another nonsense claim. Every modern system executes code residing in RAM. It is impossible for the CPU to do anything else.
mansr Posted November 18, 2018
4 minutes ago, beerandmusic said: "i think they mean entire os running in ram so it doesn't need to retrieve parts not in ram first. I could see that as a benefit."
If you have enough RAM, everything ends up cached there anyway.
mansr Posted November 18, 2018
3 minutes ago, greenleo said: "Sorry, I don't get you. The code is executed in the CPU. In principle, the code can reside entirely in the cache; just think of Win 3.1, which ran in only 4 MB of RAM 25 years ago, while nowadays a cache can be more than 8 MB. RAM is the storage, hence virtual memory using the HDD as RAM. The key point of a RAM OS, I believe, is no more reading from local storage: I/O is reduced, and so is noise."
The CPU fetches instructions from RAM (or from cache, if present there). The same goes for data. The kernel loads any required code or data into RAM and keeps it there until the pages are needed for something else, which only happens if the whole system doesn't fit in RAM. In that case, pre-loading it all wouldn't be possible either.
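[The page-cache behaviour mansr describes is easy to observe: the second read of a file is served from RAM, not from disk. A small sketch; the 4 MiB size is arbitrary and absolute timings will vary by system.]

```python
import os
import time
import tempfile

# Write a scratch file, then read it twice. The second read is served
# from the kernel page cache in RAM: once loaded, code and data live in
# RAM whether or not the OS was explicitly "loaded into RAM".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MiB of random data
    path = f.name

def timed_read(p):
    t0 = time.perf_counter()
    with open(p, "rb") as fh:
        data = fh.read()
    return data, time.perf_counter() - t0

cold, t_cold = timed_read(path)   # may touch the disk
warm, t_warm = timed_read(path)   # served from the page cache
assert cold == warm
print(f"first read {t_cold * 1e3:.2f} ms, cached read {t_warm * 1e3:.2f} ms")
os.remove(path)
```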
mansr Posted November 18, 2018
3 minutes ago, greenleo said: "I agree with these in general. If the size of the data is ever increasing, then pre-loading is impossible. However, if a track or an album is played repeatedly, pre-loading it into RAM is not impossible."
I thought we were talking about the software (OS and applications), not music files. Obviously, the entire music library won't fit in RAM.
mansr Posted November 18, 2018
6 minutes ago, nbpf said: "All I can say is that I played around with a real-time kernel for Raspbian last week. The effects on minimal, average and maximal latencies as measured by cyclictest are very obvious and can be reproduced both on an idling system and on a system that transcodes 24bit/192kHz files."
I don't doubt that. What's missing is any relevance whatsoever for audio playback.
6 minutes ago, nbpf said: "As mentioned there, I have not found the reported latency differences to have an obvious impact on the sound quality of my system."
I don't doubt that either.
mansr Posted November 18, 2018
22 minutes ago, LTG2010 said: "The entire operating system is loaded into RAM; there is no SSD/hard disc/SATA interface for the CPU to communicate with."
It's got to be stored somewhere.
mansr Posted November 18, 2018
32 minutes ago, Ralf11 said: "Does anyone think low latency has any mechanism by which to affect SQ?"
Not anyone who understands how computers actually work.
mansr Posted November 19, 2018
6 minutes ago, beerandmusic said: "arriving late would affect data or noise? everyone agrees that the dac gets the data with 100% accuracy, so arriving late affects 'noise'?"
Late-arriving data results in dropouts or pops/clicks.
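[A toy illustration of why late data is audible as a dropout rather than as some subtle quality change: if a buffer refill misses its deadline, the device has nothing to play and outputs silence. The play function and the None-for-late convention are hypothetical, purely for illustration.]

```python
def play(chunks):
    """Toy playback loop. A None chunk models data that missed its
    deadline: the device has nothing to output for that period, so it
    plays silence (zero) -- an audible dropout or click, not a gradual
    degradation of "sound quality"."""
    return [0 if c is None else c for c in chunks]

samples = [1, 2, None, 4]      # third chunk arrived late
print(play(samples))           # -> [1, 2, 0, 4]
```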
mansr Posted November 27, 2018
17 minutes ago, lmitche said: "So how can the NUC deliver more bits to the DAC?"
What do you think you mean by that?
mansr Posted November 27, 2018
6 minutes ago, lmitche said: "Perhaps I should have said more bits are represented in the waveform output by the DAC."
Still makes no sense.
mansr Posted November 27, 2018
1 hour ago, jabbr said: "I think it's possible that in a high-noise environment, there might be dropped bits across a USB interface which might not error check."
There is no such thing as dropped bits. The USB checksum is guaranteed to detect any 2-bit error in a packet. Isochronous mode does not resend bad packets, but they are detected. Besides, it is trivial to transfer a few hours of music over USB and verify that no errors occurred.
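[The 2-bit-error guarantee can be checked by brute force. Below is a sketch using the USB data CRC-16 polynomial (x^16 + x^15 + x^2 + 1, i.e. 0x8005, processed bit-reflected as 0xA001). The init and final-XOR conventions used here are assumptions; they don't affect the error-detection property, which depends only on the polynomial.]

```python
def crc16_usb(data: bytes) -> int:
    """Bitwise CRC-16 over the USB data polynomial 0x8005 (reflected 0xA001)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

packet = bytes([0x12, 0x34, 0x56, 0x78])
good = crc16_usb(packet)

# Flip every pair of distinct bits and confirm the CRC changes each time.
nbits = len(packet) * 8
undetected = 0
for i in range(nbits):
    for j in range(i + 1, nbits):
        bad = bytearray(packet)
        bad[i // 8] ^= 1 << (i % 8)
        bad[j // 8] ^= 1 << (j % 8)
        if crc16_usb(bytes(bad)) == good:
            undetected += 1
print("undetected 2-bit errors:", undetected)  # -> 0
```

The exhaustive check passes because a 2-bit error goes undetected only if the bit positions are a multiple of the polynomial's period (32767 bits) apart, which cannot happen within a maximum-size USB packet.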
mansr Posted November 28, 2018
10 hours ago, jabbr said: "Do you have a DAC that indicates when a USB packet fails its checksum, or another technique to measure the error rate in isochronous mode? A software program? I'm sure people would like to test their USB connections if they could do this at home."
I have done such tests by sending random data to a USB-S/PDIF converter and capturing its output with another USB device. By comparing the captured data to the transmitted data, it is trivial to detect whether any errors occurred and, if so, what the receiver did with them.
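[The comparison step of such a loopback test is straightforward; the hardware capture itself is not shown here. A sketch with a simulated single-bit corruption, where the corrupted index 1234 is arbitrary:]

```python
import os

def first_difference(sent: bytes, captured: bytes):
    """Return the index of the first mismatching byte, or None if the
    captured stream matches the transmitted one exactly."""
    for i, (a, b) in enumerate(zip(sent, captured)):
        if a != b:
            return i
    if len(sent) != len(captured):
        return min(len(sent), len(captured))  # one stream was truncated
    return None

sent = os.urandom(1 << 16)          # 64 KiB of random test data
captured = bytearray(sent)          # stand-in for the captured stream
captured[1234] ^= 0x01              # simulate one corrupted bit in transit
print(first_difference(sent, bytes(captured)))  # -> 1234
```

With real hardware, `captured` would come from the second USB device; a match over a few hours of random data puts a very low ceiling on the link's error rate.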
mansr Posted November 29, 2018
21 minutes ago, pkane2001 said: "Infrasound, yes. Ultrasound, no."
If you stick a finger in an ultrasonic cleaner, you'll feel a tingle.
mansr Posted December 2, 2018
8 minutes ago, esldude said: "But for trust-your-ears audiophiles, hearing is believing."
And vice versa.
mansr Posted December 2, 2018
3 minutes ago, Ralf11 said: "YES, it is. However, I did not make the claims that you did. Now please answer the question."
You already know the answer.
mansr Posted December 3, 2018
9 hours ago, Ralf11 said: "upsampling is non-bit-perfect?"
It alters the bits, so no. The term "bit perfect" doesn't really make sense where an intentional alteration takes place.
mansr Posted December 5, 2018
8 hours ago, Jud said: "Speaking of Tilray, one of their Board of Directors 'completed the Cable Management program at Harvard Business School.'"
Really! As in cable TV?