pkane2001 Posted November 23, 2020

https://archimago.blogspot.com/2020/11/on-measurements-listening-and-what.html

Vote in the poll and feel free to provide any evidence (not just an opinion) to support or refute one side or the other, including @Archimago. Here's the original article: http://www.theabsolutesound.com/articles/measurements-listening-and-what-matters-in-audio/

-Paul
DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer
pkane2001 Posted November 23, 2020

1 hour ago, Speedskater said: Note that the Robert E. Greene editorial is 1600 words long and that the Robert Harley reply is 1000 words. http://www.theabsolutesound.com/articles/measurements-listening-and-what-matters-in-audio/ It seems that Mr. Harley took off on a tangent (rant) and his reply has little to do with the editorial.

So, 1600 words is objectively longer than 1000 words; that part is true :)
pkane2001 Posted November 23, 2020

32 minutes ago, semente said: Is Harley protecting the industry or just defending his own approach to audio? His arguments are but a house of cards...

Yes, to me it also seemed like it was more of a rant than an argument. I found Archimago's rebuttal about jitter and power cords more interesting.
pkane2001 Posted November 24, 2020

2 hours ago, fas42 said: Why I put Archimago last is summed up in his conclusion. Key word here is "much" - my current active speakers are a case in point; delivering outstanding subjective results for 'ridiculous' money, where only a small number of tweaks were necessary to achieve an acceptable standard - this is indeed maturity of much of the technology. However, there is still an absence of the knowledge that "everything has to be got right" for the listening experience to deliver - which people like Archimago aren't strongly motivated to explore.

I'd disagree with that last point, Frank. Archimago has spent a large amount of time investigating and exploring exactly this space. He just didn't come to the same conclusions as you.

"Everything has to be got right" is not supported by any evidence that you've presented so far, nor could it be, since there's no definition of what that "everything" is. Do you account for thermal noise? Do you correct for the parasitic capacitance between PCB traces? Do you know if the wire used inside your speaker drivers is oxygen-free copper, or just some copper alloy with impurities? What do you do to stop cosmic-ray hits from interfering? Or sunspots? Etc., etc.

In real life, one must compromise. "Everything" can't be right, or we'd never get anywhere. In my experience, and I believe this is what both Mr. Greene and Archimago are saying, everything has to be right enough. Not everything is equally important, and there are some very large elephants in the room that must be addressed before you get to swatting tiny bacteria. A power cord, for example, is one of the last things you should focus on to improve the sound of your system, especially if you've not yet dealt with your speakers, their proper positioning, and their room interactions.
"Good enough" is often all we can do. And Mr. Greene and Archimago appear to be all about trying to figure out what that "good enough" really is.
pkane2001 Posted November 24, 2020

1 hour ago, PeterG said: Harley has built his career and TAS around the small differences that Greene asserts are insignificant, and he believes that these small differences are what being an audiophile is about. By that standard, Greene is spreading disinformation that could mislead newer listeners and/or more gullible readers.

But you can see that Greene might have exactly the same opinion of Harley's viewpoint, right?
pkane2001 Posted November 24, 2020

17 minutes ago, opus101 said: It seems Greene thinks electronics in general is 'good enough'. Fair enough for him, that's based on his experience. My own experience differs - to me, not all DACs are 'good enough'. So I must disagree - electronics quality IME affects how much sense is able to be made from reproduced music. When recordings make more sense the enjoyment level rises considerably.

I didn't read it quite like that. What I thought Greene was stating is that there are often very large errors in the transducer part of the audio chain, with far smaller errors in the electronics. So, instead of spending time trying to squeeze out that last error at -120 dB by using a better power cord, one may get much further by first addressing the errors in speakers or headphones, which often rise to the level of many dB. He doesn't deny that some tiny effects might be audible, but he states this:

Quote: Some of these tiny effects may be audible, but the important point is that there is seldom any mechanism for deciding if the changes are to the good or not. If there is no way to know why some change, of a power cord say, affected the sound, there is no way to decide whether the effect, if any, was positive or not. How could you tell? Believe the manufacturer? Believe reviewers, who have as little basis as you yourself? This is a major issue.
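To make those orders of magnitude concrete, here is a small illustrative calculation (mine, not from the thread): converting dB levels to linear amplitude ratios shows how a few dB of speaker response error dwarfs an electronic artifact at -120 dB.

```python
def db_to_ratio(db):
    """Convert a level in dB to a linear amplitude ratio: 10^(dB/20)."""
    return 10 ** (db / 20)

# A 3 dB speaker response error is a ~41% amplitude deviation;
# an artifact at -120 dBFS is a 0.0001% deviation from full scale.
speaker_error = db_to_ratio(3) - 1      # ~0.41
artifact_level = db_to_ratio(-120)      # 1e-6
print(f"3 dB response error: {speaker_error:.1%} amplitude deviation")
print(f"-120 dB artifact:    {artifact_level:.7%} of full scale")
```

The five-orders-of-magnitude gap between the two numbers is the whole of Greene's priority argument in miniature.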
pkane2001 Posted November 24, 2020

28 minutes ago, opus101 said: In terms of 'objectivity' it's clear Greene sets up a straw man about 'soundstage'. He says: "This idea of evaluating everything in terms of soundstage is potentially a major source of confusion." I've not seen any argument from any reviewer or audiophile where everything's evaluated in terms of soundstage. But if anyone has a link for an example, I'm game to read it.

Soundstage is mentioned quite frequently as the argument against measurements, as in "we don't know how to measure a soundstage". I've encountered this argument plenty of times myself.
pkane2001 Posted November 24, 2020

2 minutes ago, opus101 said: Yes, I agree that's what he's saying. But notice that 'way smaller' is from the point of view of our current measurement capabilities, not from the point of view of perception. Where I agree with him is that some things matter more than others; I disagree on what those things are. He's determining importance, ISTM, from a numbers POV. I'd say that's nonsensical; what matters is what's perceived by the listener.

Perception of differences can also be measured, and has been for many things: amps, DACs, power cords, speakers, headphones. Are you able to show any evidence that a swap of a power cord can make more of an audible difference (assuming both are functional, of course!) than swapping, say, speakers or headphones?
pkane2001 Posted November 24, 2020

6 minutes ago, opus101 said: I'm not at all interested in the question as it's about 'audible differences'. To me they're a distraction.

I'm not sure what you're saying. In order to be perceived, differences must be audible. The test subject must be able to differentiate between two devices by listening; otherwise, any perception they claim is not due to audio differences.
pkane2001 Posted November 24, 2020

1 minute ago, opus101 said: I'm saying what Robert M. Pirsig says in 'Zen and the Art of Motorcycle Maintenance': "The test of the machine is the satisfaction it gives you. There isn't any other test. If the machine produces tranquility it's right. If it disturbs you it's wrong until either the machine or your mind is changed."

That, of course, is fine. But then you are disagreeing with Greene, Harley, and Archimago, since they all seem to think that there's a way to predict how much satisfaction an audio device will give to another user.
pkane2001 Posted November 24, 2020

7 minutes ago, sandyk said: Perform non-sighted testing as several members (including Audiophile Neuroscience and myself) did a few years ago. We all independently came to the same conclusion that the more expensive cable did sound a smidgen better, resulting in a "cleaner" sounding presentation.

Alex, again, I ask you to stop. You keep saying the same things in every thread, with no evidence to back up anything you claim.
pkane2001 Posted November 24, 2020

8 hours ago, opus101 said: Going back to Greene's apparent lacuna on 'soundstage' for a moment. After stating his straw man he says this: "Since no one has any idea of what kind of soundstage ought to arise from most recordings, soundstage is not really a sensible criterion for evaluation of anything." Hmm, dismissive over-much? In the course of my DAC development in the past week or so I've uncovered (in the limited context of multibit DAC design) something objective that appears to affect soundstage. That is: noise in the analog stage after the DAC chip. I'm using a passive filter followed by an opamp (which can't be a virtual ground because of the preceding filter). The opamp introduces noise, as far as I can ascertain, beneath the dither level of RBCD (-93 dB), but a lower noise-gain circuit using the same opamp makes the soundstage bigger. I don't, though, have any evidence beyond my own ears that the soundstage is clearer and larger.

So you found the soundstage to be "bigger". Some will say that your testing method is not to be trusted. But let's say it really is true: how did you decide that the difference in the opamp noise level was responsible for the change? And how do you know that the change was for the better? Phase-response changes, larger crosstalk, and increased uncorrelated noise can all "improve" the sense of soundstage. Objectively worse results, such as with vinyl playback, can make the sound appear more spacious and less constricted to some; others might hear the extra distortions.

I think Greene is really pointing out the need to objectively understand the differences at a deeper level than "what sounds good to me", since that carries very little predictive value for other listeners.

Quote: Some of these tiny effects may be audible, but the important point is that there is seldom any mechanism for deciding if the changes are to the good or not.
If there is no way to know why some change, of a power cord say, affected the sound, there is no way to decide whether the effect, if any, was positive or not.
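As a side note on the "-93 dB" dither figure opus101 cites for RBCD: it follows from the textbook quantization-noise formula for a 16-bit channel, with TPDF dither raising the floor by roughly 4.8 dB. A quick sketch of the standard arithmetic (generic formulas, not specific to his DAC):

```python
def ideal_snr_db(bits):
    """Ideal SNR for a full-scale sine through an N-bit quantizer: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

TPDF_DITHER_PENALTY_DB = 4.77  # TPDF dither raises the noise floor by ~4.77 dB

undithered = ideal_snr_db(16)                   # ~98.1 dB for 16-bit
dithered = undithered - TPDF_DITHER_PENALTY_DB  # ~93.3 dB: the "-93 dB" floor
print(f"16-bit ideal SNR:  {undithered:.2f} dB")
print(f"with TPDF dither: ~{dithered:.2f} dB below full scale")
```

An opamp stage whose noise contribution sits below that ~93 dB floor is, by this arithmetic, buried under the dither of the CD format itself, which is exactly why the claimed audible soundstage change calls for a measurable explanation.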
pkane2001 Posted November 24, 2020

55 minutes ago, bluesman said: Give me enough good information and the job is as easy as pie. The data must include purchase history, historical satisfaction, repeat purchases, mean time of ownership, how and why each device was dismissed from the stable, mods done, social media posts questioning how to improve each one, what his or her friends bought / sold and when, etc. Add in everything we can know about each of the devices themselves, including all technical data and what reviews the subject read before, during, and after ownership of each piece. Accuracy improves with each additional subset, e.g. stability of interpersonal relationships, job security, illness, unexpected downturns, etc. Facts like knowing that one purchase was rapidly followed by a flood of web posts asking for ideas on improving the new acquisition, while another was followed by a year of quiet enjoyment, add to the accuracy of such predictions.

I suspect what you describe isn't as simple as it sounds. How many data points would you need to accurately predict what drives someone's preferences? Five? Ten? A hundred? And how would you go about doing it at a distance, by reading someone else's posts or reviews, to determine what truly drives their preferences? Is it look and feel that affects them? Price? Brand name? Advertising? Influence of other reviews? Engineering or design principles, or the components used? Some interaction of components in their system? The actual audio performance of the device in question? Or is it some complex and variable weighted average of all of these, and probably of hundreds more factors? Realize, of course, that most people don't themselves fully understand all the drivers that lead them to prefer one thing over another.
You are, in effect, suggesting that you'll be able to understand, explain, and predict the choices and motivations of someone you hardly know, at a distance, from a few known purchase decisions and a few posts on internet forums. I'd argue that this is a fool's errand, especially if you're interested in any sort of accuracy. I often can't predict what my wife would prefer, and I've spent most of my life with her, observing her preferences and talking to her about her choices thousands of times. So, no, I don't think it's as simple as you describe :)
pkane2001 Posted November 24, 2020

31 minutes ago, bluesman said: You skipped a bit of my post - I clearly said it takes "enough good information". Of course "a few" decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models. Access to an individual's social media posts, web searches, etc. is a treasure trove of objective and highly predictive data that are more accurate at diagnosing disease than a lot of traditionally used medical and demographic info (e.g. this typical example). It's a very valuable population health tool. The sheer amount of available data is astounding, and it's very revealing. Look up Lyle Ungar's work - he's been studying this for years. We obviously have patients' permission to access what we use for research - but you can buy vast deidentified datasets and build models with great accuracy, as long as you have enough good data. Here's a simplified example. If a man does a web search on treatment for increasing urinary frequency, you only know that he either has the problem, knows someone with the problem, or is curious about it. If he only does it once and his other web searches are compatible with a young adult, he's more likely to be writing a report than to have a medical problem. If his other interests suggest that he's middle-aged and he searches again every few months at a slowly increasing rate, the most likely reason is benign prostatic enlargement. If his web profile suggests a young adult and he searches every few days, adding burning pain to the second search, he probably has an infection. And if his searches suggest his age to be 60+ and he also seeks info on unexplained weight loss, prostate cancer becomes a more likely explanation. Now throw in tweets about how he feels. Add his credit card purchasing data and you start to get a clearer picture.
Obviously it takes more than a few data points. But current and historical behavior definitely predicts future behavior. Why do you think consumer data are worth so much money? Knowing that a given audiophile had returned 6 out of 10 equipment purchases would tell you something about him or her. Access to the alleged problems prompting each return might offer even more insight. Knowing that Stereophile (to which Amazon says he has a Kindle subscription) reviewed all 10 favorably a month or less before purchase, but that an audio website he visits frequently panned the 6 he returned just before he returned them, focuses the picture a bit more. Run a correlation analysis on performance data of the units in question - if it turns out that all 6 returns were replaced with items that shared some measured "improvement", we're developing a model likely to predict his satisfaction with future audio purchases. It takes thousands of data points to support a sound and useful model. But you can buy or otherwise access millions of data points today - this is how those "targeted ads" somehow follow you from website to seemingly unrelated website. Believing that our behavior is private and inaccessible to others is hopelessly naive. Many industries are monitoring and guiding much of our lives right now. Predictive analytics are telling them what you're going to buy next year, what you'll pay for it, and how soon you'll replace it. And they're very often right.

I'm very aware of data mining practices and large data sets -- something I worked with long before the web, Twitter, and deep learning. I worked on deriving patterns from large data sets by training neural nets back in the '80s and '90s, well before this type of thing became popular. But you can't buy more than a few data points on an individual's audio preferences if all they post is a couple of their purchase decisions, a few comments, and a few reviews.
If you are talking about deriving common patterns from larger data sets spanning multiple audiophiles, then I'm with you: that might even be an interesting project. But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then no, I don't see how that's possible, and we'll just have to agree to disagree.
pkane2001 Posted November 24, 2020

21 minutes ago, fas42 said: It is the noise that infests most audio playback that causes the problem; whether it's due to the behaviour of an opamp stage, or from a variety of other sources, is not really that relevant - the common factor is that the presence of the noise makes it too difficult for the ear/brain to unravel and decode the low-level cues and clues in the recording; unconsciously, the mind "gives up" trying to understand the meaning of the low-level 'hash' in the playback - and "soundstage" is severely diminished.

You have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio.
pkane2001 Posted November 25, 2020

36 minutes ago, Rexp said: 'Measurement equipment allows us to determine the accuracy of audio reproduction' from @Archimago. Which measurements determine whether a sound has been reproduced accurately or not?

What Kal said... Measurements can show whether the reproduction is close to the recorded signal. If it differs in specific ways from the measurements of another device, they can also tell which one is more true to the original, and what needs to be corrected in the other device. Even if we don't yet know every possible thing to measure, what we do know is still very useful, since it allows us to make meaningful, repeatable comparisons and get to the root cause. This article from Benchmark covers my philosophy:

Quote: When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements.

Quote: Any design process that relies solely on listening tests is doomed to fail. If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection. We may arrive at a solution that just masks the artifact with another, less-objectionable artifact. On the other hand, if we focus on eliminating every artifact that we can measure, we can quickly converge on a solution that approaches sonic transparency.

To me, this approach leads to finding real issues and real solutions. Stabbing in the dark at all possible noise sources until everything "sounds just right" simply doesn't fit my temperament, sorry Frank :)
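The core of a measurement-to-signal comparison can be sketched in a few lines. This is a toy version (my own simplification, ignoring the clock-drift and phase-alignment corrections a real tool such as DeltaWave must handle): level-match the device's output to the reference, subtract, and express the residual in dB.

```python
import numpy as np

def null_depth_db(reference, output):
    """Least-squares level-match the output to the reference, subtract,
    and return the residual level in dB relative to the reference RMS."""
    gain = np.dot(reference, output) / np.dot(output, output)
    residual = reference - gain * output
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(residual) / rms(reference))

# Toy "device": a small gain error plus low-level noise on a 1 kHz tone.
rng = np.random.default_rng(0)
t = np.arange(48_000) / 48_000
ref = np.sin(2 * np.pi * 1000 * t)
out = 0.98 * ref + 1e-4 * rng.standard_normal(t.size)
print(f"Null depth: {null_depth_db(ref, out):.1f} dB")
```

A deeper (more negative) null means the output is closer to the recorded signal; comparing null depths between two devices is exactly the "which one is more true to the original" question made quantitative.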
pkane2001 Posted November 25, 2020

9 minutes ago, opus101 said: Since you identify the above as 'your philosophy' @pkane2001, is the 'redesign' here purely random or guided in some way by the result of the listening test?

Wait. What you quoted isn't my philosophy. That's what Benchmark said is wrong with using only listening tests during a design, and I agree. My philosophy is that any proper listening test that produces an unexpected result that doesn't mesh with existing measurements is a reason to find a way to measure the "unknown effect", rather than to try to fix it through a random trial-and-error process.
pkane2001 Posted November 25, 2020

16 minutes ago, opus101 said: You quoted Benchmark and said they cover your philosophy. That's what I quoted: your quote of them. So when you agree with them in characterizing that process as 'wrong', I'm curious about the details of that purportedly 'wrong' process. To agree with them, surely you must know what process they're talking about here?

Here's what I quoted, and I'll highlight the part I felt particular kinship to:

Quote: When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements.

The process that I believe is wrong, as I've already mentioned twice, is using listening tests to try to correct, by random trial and error, for errors that can't be measured. As also mentioned in the Benchmark quote: "If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection."
pkane2001 Posted November 25, 2020

5 minutes ago, opus101 said: I'm having trouble parsing your last sentence, 'by random trial/error'. So I take it the answer to my question is 'it's random', in which case I agree, and I think their process is probably just a strawman. After all, you quoted their marketing materials, right?

I don't get what you don't get. Marketing materials or not, that's the approach I take with listening tests and measurements. If I can't measure something I can identify audibly, I look for a way to measure it. To me, understanding the root cause and being able to find it again is much more valuable than just patching up a problem by guessing.
pkane2001 Posted November 25, 2020

2 minutes ago, opus101 said: I'm getting it. Their marketing materials of course want to paint their approach in the best possible light, hence they set up a strawman and demolish it, implying that they're the truly enlightened ones in the audio business and that others are, by implication, imbeciles.

So do you think that's what I'm doing also? I'm not using Benchmark to validate my own philosophy. I used it as shorthand so I wouldn't have to write all that text to describe what my philosophy is. If you want to argue about why my philosophy is wrong, then let's have that discussion. I really couldn't care less about Benchmark or their products or marketing, although I hear they measure well :)
pkane2001 Posted November 25, 2020

3 minutes ago, opus101 said: If your question is 'Do I think you're trying to paint your own products in the best light by marketing them using strawmen?', then the answer's definitely 'no'. I'm not even clear if you've got stuff to sell. I rather suspect we're talking at cross-purposes here. I've been focusing on what process Benchmark wish to discredit in their marketing. You've been talking about what your own philosophy is. Two rather different focuses, no?

I'm not selling a thing, and yes, I think we are talking at cross-purposes. Maybe I was a bit lazy in having Benchmark describe my philosophy :)
pkane2001 Posted November 25, 2020

30 minutes ago, opus101 said: I'm interested in what Benchmark have to say about development using listening tests because in the past I had the pleasure of working with a guy who did (pro) audio design/development using an ABX box he built himself. Here's one of his posts on Gearslutz (the second post on this page): https://www.gearslutz.com/board/music-computers/542885-paul-frindle-truth-myth-4-print.html

I like him. I find it hard to disagree with most of what he said in that thread.
pkane2001 Posted November 25, 2020

52 minutes ago, Rexp said: Take a real-world example: if I record a voice at 24/192 and compare it to the original, it sounds close enough to be deemed an accurate reproduction. No doubt the measurements would be similar. Now, if I downsample to 16/44 it doesn't sound like the original, but the measurements won't reflect this, will they?

Of course measurements can reveal whether a recording is at 16/44 or 24/192. Do you mean that you can tell the difference between speech recorded at 24/192 and the downsampled version at 16/44? That's not a hard test to perform. In fact, the recent hi-res test by Mark Waldrep was of a very similar design. Didn't you take part in that test? How did you do?
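To illustrate why measurements do reflect the downsampling: conversion to 44.1 kHz discards everything above 22.05 kHz, and a simple spectrum measurement reveals that directly. A toy numpy sketch (illustrative only, modelling the anti-alias filter as an ideal brick wall rather than a real resampler):

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs
# A voice-band tone plus an ultrasonic component that only the 24/192 file can carry.
signal = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)

def band_energy_db(x, fs, lo, hi):
    """Energy in the [lo, hi) Hz band, in dB relative to total energy."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = spec[(freqs >= lo) & (freqs < hi)].sum()
    return 10 * np.log10(band / spec.sum() + 1e-30)  # tiny floor avoids log(0)

# Idealized 44.1 kHz anti-alias filter: zero every bin above 22.05 kHz.
spec = np.fft.rfft(signal)
spec[np.fft.rfftfreq(len(signal), 1 / fs) > 22_050] = 0
band_limited = np.fft.irfft(spec, n=len(signal))

print(f"ultrasonic energy, 192 kHz original: {band_energy_db(signal, fs, 22_050, 96_000):.1f} dB")
print(f"ultrasonic energy, band-limited:     {band_energy_db(band_limited, fs, 22_050, 96_000):.1f} dB")
```

The band-limited version shows essentially no energy above 22.05 kHz, so a spectrum comparison flags the 16/44 conversion immediately; whether that measurable difference is audible in speech is precisely what a controlled listening test (like Waldrep's) is for.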
pkane2001 Posted November 25, 2020

6 hours ago, fas42 said: Interesting the need to use phrases like "stabbing in the dark" and "random trial/error" to describe the process of finding the cause-and-effect linkage - it's almost as if the actual measuring of some electrical anomaly is more important than the fixing of the issue; that is, if the playback sounds wrong, and you manage to come up with some numbers "that describe it", then you can relax and keep listening to the fault, without being overly concerned with rectifying it 🙂. If one uses a good technique, that of using recordings which highlight the defective behaviour, then it's usually very quick to pinpoint a cause/weakness combination - the hard work is often then to work out a solution which is not expensive and which delivers a robust fix, meaning it works under all scenarios. Sessions of some hours at the friend's up the road are usually enough to locate where there is a bottleneck in the SQ - it may take weeks to devise a 'smart' resolution, which I leave to him. Most people seem to find it hard to understand the approach - you listen to a recording you know well, and it's definitely sub-par, probably from noise. Most noise issues come from a lack of physical integrity in some area, or from electrical interference - it's quite easy to alter these factors, usually; and the feedback from trying things gives you the knowledge to make the next move.

But of course it is "stabbing in the dark", Frank! I'm talking about your method of "fixing" audio faults. You may recall recommending to me to open up my speakers to find out if they have soldered connections AS THE FIRST STEP in troubleshooting my system. Sorry, but if that's not random, then I don't know what is. There's no technique here; it's stabbing in the dark.
But yes, I prefer to find the root cause and understand the problem, and to do so without resoldering every joint on each PCB in my system and then listening for an improvement after each one in the hopes that I got it this time!
pkane2001 Posted November 25, 2020

15 minutes ago, Jud said: Perhaps you are evaluating her preferences subjectively rather than objectively. 😉

That's the problem!