PeterG Posted November 24, 2020 16 hours ago, pkane2001 said: But you can see that Greene might have exactly the same opinion of Harley's viewpoint, right? Yes, definitely. But Harley is the gatekeeper at TAS. Also, that does not mean that Greene's view is equally valid.
semente Posted November 24, 2020 15 hours ago, opus101 said: Going back to Greene's apparent lacuna on 'soundstage' for a moment. After stating his straw man he says this: Since no one has any idea of what kind of soundstage ought to arise from most recordings, soundstage is not really a sensible criterion for evaluation of anything. Hmm, dismissive over-much? In the course of my DAC development in the past week or so I've uncovered (in the limited context of multibit DAC design) something objective that appears to affect soundstage. That is - noise in the analog stage after the DAC chip. I'm using a passive filter followed by an opamp (which can't be a virtual ground because of the preceding filter). The opamp introduces noise that, as far as I can ascertain, is beneath the dither level of RBCD (-93dB), yet a lower noise-gain circuit using the same opamp makes the soundstage bigger. I don't, though, have any evidence beyond that of my own ears that the soundstage is clearer and larger. It must be taken into account that Greene's comments are entirely concerned with the reproduction of classical music. And, as most people know, you can only achieve a reasonably realistic soundstage using minimalist mic'ing. But even that depends on the mic technique used (spaced vs. near-coincident) and the distance of the mics to the sources. His site is down at the moment, but there you will find a few pieces on this subject. The soundstage of multi-mic'ed studio mixes, on the other hand, is not captured but fabricated. Which is why he says that "soundstage is not really a sensible criterion for evaluation of anything." "Science draws the wave, poetry fills it with water" Teixeira de Pascoaes HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)
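To put the -93 dB dither figure quoted above in context, here is a rough back-of-the-envelope sketch in Python. The opamp noise density (5 nV/√Hz), the two noise gains, the 20 kHz bandwidth and the 2 V RMS full-scale level are assumed values chosen for illustration, not measurements of the circuit being discussed in the thread.

```python
import math

def db(ratio):
    """Convert an amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

# Noise floor of 16-bit RBCD with TPDF dither, relative to a full-scale sine.
# Quantisation noise (LSB/sqrt(12)) plus TPDF dither (LSB/sqrt(6)) sums to LSB/2 RMS.
lsb = 2.0 / 2**16                       # LSB for a +/-1 full-scale range
dither_floor = db((lsb / 2) / (1 / math.sqrt(2)))
print(f"RBCD dithered noise floor: {dither_floor:.1f} dBFS")

# Assumed opamp stage: 5 nV/sqrt(Hz) input noise, 20 kHz bandwidth, 2 V RMS full scale.
en, bw, full_scale = 5e-9, 20_000, 2.0
for noise_gain in (3, 10):
    out_noise = en * noise_gain * math.sqrt(bw)   # RMS output noise in volts
    print(f"noise gain {noise_gain}: {db(out_noise / full_scale):.1f} dB re full scale")
```

On these assumed numbers both noise gains leave the analog noise 15-25 dB below the dithered 16-bit floor, which is exactly why the audibility of the difference is the contested point in the thread.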
bluesman Posted November 24, 2020 2 hours ago, pkane2001 said: You are, in effect, suggesting that you'll be able to understand, explain, and predict choices and motivations of someone you hardly know at a distance, from a few known purchase decisions and a few posts on internet forums. I'd argue that this is a fool's errand, especially if you're interested in any sort of accuracy. I often can't predict what my wife would prefer, and I've spent most of my life with her, observing her preferences and talking to her about her choices thousands of times. So, no, I don't think it's as simple as you describe :) You skipped a bit of my post - I clearly said it takes “enough good information”. Of course “a few” decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models. Access to an individual’s social media posts, web searches, etc is a treasure trove of objective and highly predictive data that are more accurate at diagnosing disease than a lot of traditionally used medical and demographic info (eg this typical example). It’s a very valuable population health tool. The sheer amount of available data is astounding, and it’s very revealing. Look up Lyle Ungar’s work - he’s been studying this for years. We obviously have patients’ permission to access what we use for research - but you can buy vast deidentified datasets and build models with great accuracy, as long as you have enough good data. Here’s a simplified example. If a man does a web search on treatment for increasing urinary frequency, you only know that he either has the problem, knows someone with the problem, or is curious about it. If he only does it once and his other web searches are compatible with a young adult, he’s more likely to be writing a report than he is to have a medical problem.
If his other interests suggest that he’s middle aged and he searches again every few months at a slowly increasing rate, the most likely reason is benign prostatic enlargement. If his web profile suggests a young adult and he searches every few days, adding burning pain to the second search, he probably has an infection. And if his searches suggest his age to be 60+ and he also seeks info on unexplained weight loss, prostate cancer becomes a more likely explanation. Now throw in tweets about how he feels. Add his credit card purchasing data and you start to get a clearer picture. Obviously it takes more than a few data points. But current and historical behavior definitely predict future behavior. Why do you think consumer data are worth so much money? Knowing that a given audiophile had returned 6 out of 10 equipment purchases would tell you something about him or her. Access to the alleged problems prompting return might offer even more insight. Knowing that Stereophile (to which Amazon says he has a Kindle subscription) reviewed all 10 favorably a month or less before purchase, but that an audio website he visits frequently panned the 6 he returned just before he returned them, focuses the picture a bit more. Run a correlation analysis on performance data of the units in question - if it turns out that 6 of 6 were returned and replaced with items that all shared some measured “improvement”, we’re developing a model likely to predict his satisfaction with future audio purchases. It takes thousands of data points to support a sound and useful model. But you can buy or otherwise access millions of data points today - this is how those “targeted ads” somehow follow you from website to seemingly unrelated website. Believing that our behavior is private and inaccessible to others is hopelessly naive. Many industries are monitoring and guiding much of our lives right now.
Predictive analytics are telling them what you’re going to buy next year, what you’ll pay for it, and how soon you’ll replace it. And they’re very often right.
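As an illustration of the kind of model bluesman describes, here is a minimal sketch: a logistic regression trained by plain stochastic gradient descent on synthetic purchase data. The two features (a measured "improvement" figure and a review score) and the rule generating the kept/returned labels are entirely made up for the example - the point is only that, given enough data points, past behavior predicts future behavior.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Synthetic purchase history: (measured improvement in dB vs previous unit,
# magazine review score 0-10). Label 1 = kept, 0 = returned.
# The generating rule below is an assumption purely for illustration.
def make_data(n=400):
    data = []
    for _ in range(n):
        imp = random.uniform(-6, 6)
        review = random.uniform(0, 10)      # reviews are glowing regardless
        p_keep = sigmoid(0.8 * imp + 0.05 * (review - 5))
        data.append(((imp, review), 1 if random.random() < p_keep else 0))
    return data

# Plain SGD on the logistic log-loss: the loss is convex, so this converges.
def train(data, lr=0.02, epochs=200):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (imp, review), y in data:
            g = sigmoid(w0 * imp + w1 * review + b) - y   # dLoss/dz
            w0 -= lr * g * imp
            w1 -= lr * g * review
            b -= lr * g
    return w0, w1, b

def predict(params, imp, review):
    w0, w1, b = params
    return sigmoid(w0 * imp + w1 * review + b)

params = train(make_data())
# A clear measured improvement should predict "kept"; a regression, "returned".
print(predict(params, 5.0, 8.0), predict(params, -5.0, 8.0))
```

The trained model recovers the pattern hidden in the synthetic data even though reviews were uniformly positive - the forum-thread analogue of separating what the magazine said from what the buyer actually did.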
fas42 Posted November 24, 2020 8 hours ago, opus101 said: I surmise that it was the noise in that particular opamp stage because I listened to a couple of options - one being a lower noise opamp and the other being lower noise-gain with a higher noise opamp. I'm not done yet though as I'm experimenting with a 3rd circuit configuration which I predict to have even lower noise - to see what the results are. The first two produced similar improvements in the soundstage. I know it was for the better because my listening satisfaction increased. It is the noise that infests most audio playback that causes the problem; whether it's due to the behaviour of an opamp stage, or from a variety of other sources, is not really that relevant - the common factor is that the presence of the noise makes it too difficult for the ear/brain to unravel and decode the low level cues and clues in the recording; unconsciously, the mind "gives up" trying to understand the meaning of the low level 'hash' in the playback - and "soundstage" is severely diminished. For me, this is trivially obvious ... I had a system that went from just an OK soundstage, however people choose to interpret that word, to one that could throw up a monstrous one, depending completely on what was in the recording - and which would then slowly drop back to a mediocre imitation, over ten minutes or so ... this was purely because a highly subtle tweak was no longer effective, and had to be refreshed. What was happening was that the noise, unsettling to the audio processing part of the brain, was slowly building, and reached a point where my mind, completely unconsciously, couldn't recognise the soundstaging cues any longer.
fas42 Posted November 24, 2020 6 hours ago, botrytis said: Sound stage is really a psychoacoustic phenomenon. It is an interplay from the speakers, room, and ears. It is how our brain then discerns that soundstage. It may be we are so used to hearing music live, that we naturally and automatically assign soundstage to the music. I disagree. The soundstage is 100% due to what's on the recording - easily proven with a system that is capable; simply play 3 tracks in a row with completely different acoustics, and the soundstages will completely change, at the end of one going on to the next - with good examples, it's like entering different universes, it's almost a shock to one's physical senses. Quote I mean from a studio, how can one actually have a soundstage when, in these times, people record alone and then compile those recordings? Very simple ... the soundstages of all the separate sound events coexist - they are layered on top of each other, and each can be focused on in turn, and seen as having a separate identity. A visual analogy is having 3 or 4 images of completely different things on top of each other in a photoshopping program, with equal levels of transparency for each - there's the montage; and then there is also each image, with full integrity, when you closely focus on it.
pkane2001 Posted November 24, 2020 Author 31 minutes ago, bluesman said: You skipped a bit of my post - I clearly said it takes “enough good information”. Of course “a few” decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models. [...] I'm very aware of data mining practices and large data sets -- something I've worked with long before the web and Twitter and deep learning. I worked on deriving patterns from large data sets by training neural nets back in the 80s and 90s, way before this type of stuff became popular. But you can't buy more than a few data points on an individual's audio preferences if all they post is a couple of their purchase decisions and maybe a few comments and a few reviews.
If you are talking about deriving common patterns from the larger data sets spanning multiple audiophiles, then I'm with you: that might even be an interesting project. But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then no, I don't see how that's possible, and we'll just have to agree to disagree. -Paul DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer
pkane2001 Posted November 24, 2020 Author 21 minutes ago, fas42 said: It is the noise that infests most audio playback that causes the problem; whether it's due to the behaviour of an opamp stage, or from a variety of other sources, is not really that relevant - the common factor is that the presence of the noise makes it too difficult for the ear/brain to unravel, decode the low level cues and clues in the recording; unconsciously, the mind "gives up" trying to understand the meaning of the low level 'hash' in the playback - and "soundstage" is severely diminished. You have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio.
fas42 Posted November 24, 2020 41 minutes ago, pkane2001 said: You have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio. It's an observation - of how human hearing reacts to what is presented in the playback - that has been repeated, constantly, over decades. How do I prove it - well, to myself, I simply sabotage the sound, by reversing a couple of tweaks; okay, "soundstaging" is lost - after a few hundred times, I guess I pick up a pattern in what is going on - silly me! To prove it, I guess one of them good ol' scientific papers will do the trick - right, organise a couple of hundred people, hire some venue to fit them all in, and all the other expenses needed for such affairs ... hmmm, a "go fund me!" will be needed, me thinks; so, how do I do that again, ... 🙃. Turns out noise is the key "ill" - which is why those incredibly well spec'ed, or incredibly expensive, rigs sometimes sound bland, boring, or downright unlistenable ...
botrytis Posted November 25, 2020 1 hour ago, pkane2001 said: You have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio. Since Frank can't measure the noise (he has stated that previously), he seems to be parroting the 'urban legends' out there. Greene basically said, there is more about speaker placement, room treatments, etc. that are important to deal with than with noise from the electronic chain. Current: Daphile on an AMD A10-9500 with 16 GB RAM DAC - TEAC UD-501 DAC Pre-amp - Rotel RC-1590 Amplification - Benchmark AHB2 amplifier Speakers - Revel M126Be with 2 REL 7/ti subwoofers Cables - Tara Labs RSC Reference and Blue Jean Cable Balanced Interconnects
opus101 Posted November 25, 2020 3 hours ago, semente said: It must be taken into account that Greene's comments are entirely concerned with the reproduction of classical music. And, as most people know, you can only achieve a reasonably realistic soundstage using minimalist mic'ing. But even that depends on the mic technique used (spaced vs. near-coincident) and the distance of mics to sources. I agree - my diet is overwhelmingly of classical music and hence my comments were made in that context. I tend to gravitate towards the more minimally mic'd recordings too.
opus101 Posted November 25, 2020 12 minutes ago, botrytis said: Greene basically said, there is more about speaker placement, room treatments, etc. that are important to deal with than with noise from the electronic chain. Yeah and my experience is the opposite. I have done precisely zero about room treatment, small attention to speaker placement. And previously I hadn't paid enough attention to noise in the electronics chain but I have changed my mind as a result of the surprising result I got when I lowered noise.
fas42 Posted November 25, 2020 15 minutes ago, botrytis said: Since Frank can't measure the noise (he has stated that previously), he seems to be parroting the 'urban legends' out there. Greene basically said, there is more about speaker placement, room treatments, etc. that are important to deal with than with noise from the electronic chain. Because there is always going to be noise in the replay; it's impossible to completely eliminate it - what matters is whether it matters, subjectively. IME, there is a "good enough" level of it being part of the mix - yes, it would be interesting to monitor exactly what's going on in the 'quality' of that noise, to make it audibly significant or not - but that's for further down the track ...
opus101 Posted November 25, 2020 2 hours ago, fas42 said: It is the noise that infests most audio playback that causes the problem; whether it's due to the behaviour of an opamp stage, or from a variety of other sources, is not really that relevant - the common factor is that the presence of the noise makes it too difficult for the ear/brain to unravel, decode the low level cues and clues in the recording; unconsciously, the mind "gives up" trying to understand the meaning of the low level 'hash' in the playback - and "soundstage" is severely diminished. That's my take too - and it's what's lacking in Greene's article. What makes sense to the listener isn't being considered in his approach, just what moves the needle.
fas42 Posted November 25, 2020 This was in Greene's piece, as quoted by Paul, Quote Some of these tiny effects may be audible, but the important point is that there is seldom any mechanism for deciding if the changes are to the good or not. If there is no way to know why some change, of a power cord say, affected the sound, there is no way to decide whether the effect, if any, was positive or not. In fact, there is an excellent mechanism, using observation, to determine whether a move is positive or not. Simply use a recording where part of the mix is very unpleasant to the ears, or sounds a mess - if the change is of benefit, the sound becomes clearer, easier on the ear, exhibits more detail, is less messy - all the attributes of what you are listening to improve.
bluesman Posted November 25, 2020 2 hours ago, pkane2001 said: But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then no, I don't see how that's possible, and we'll just have to agree to disagree We’re actually not disagreeing at all. It takes the big data approach to build the model before we can apply it to individuals. But it’s eminently doable right now, and the data are readily accessible. An individual may only post a few times about a purchase - but he or she leaves a huge trail of searches, downloads, vendor inquiries etc that are equally important. They can all be tracked by IP address, screen name, etc. I’ve built successful predictive models for hospital readmissions, success of treatment for heart failure, when to stop medications, etc. I even built a model for criterion-based diagnosis of Covid-19 in March when it became obvious that we wouldn’t be testing random population samples to identify patterns of spread. Even with the support of a group that does NFL predictive analytics, I couldn’t convince anyone who mattered that it was a worthwhile effort. It is.
botrytis Posted November 25, 2020 1 hour ago, opus101 said: Yeah and my experience is the opposite. I have done precisely zero about room treatment, small attention to speaker placement. And previously I hadn't paid enough attention to noise in the electronics chain but I have changed my mind as a result of the surprising result I got when I lowered noise. I have done what you did, to my detriment. I now know that room treatments, etc. are one of the most important things one can do, besides DSP, to deal with room interactions. I can't measure the noise in my chain - it could be that my equipment is not sensitive enough - but since I can't hear it, does it matter? Frank and you sound like a friend of mine who is never satisfied with his equipment because he expects a certain level of response and is disappointed when it doesn't happen, but all he does is plunk things down and expect it to be brilliant. It couldn't be further from the truth.
opus101 Posted November 25, 2020 2 minutes ago, botrytis said: I have done what you did to my detriment. What thing that I did have you done? I'd like to understand more here - please explain.
Rexp Posted November 25, 2020 'Measurement equipment allows us to determine the accuracy of audio reproduction' from @Archimago Which measurements determine whether a sound has been reproduced accurately or not?
Kal Rubinson Posted November 25, 2020 5 minutes ago, Rexp said: Which measurements determine whether a sound has been reproduced accurately or not? Logically, all of them do, even the ones that are inaudible to humans. Now, if you had asked about their significance to the listener, I might answer differently ... or, more likely, not at all. 😉 Kal Rubinson Senior Contributing Editor, Stereophile
fas42 Posted November 25, 2020 14 minutes ago, Rexp said: 'Measurement equipment allows us to determine the accuracy of audio reproduction' from @Archimago Which measurements determine whether a sound has been reproduced accurately or not? Very simple technique I've used over the years - I have the strange idea that a recording should sound like the recording, rather than a recording overlaid with the patina of the playback setup - so, I build up a sense of the intrinsic nature of the recording, by noting its characteristics when being replayed with the finest possible states of the reproduction chain - this then becomes a reference, for that recording, and the same characteristics have to be on show when I'm optimising a rig, or evaluating some setup I'm not familiar with. Some people seem to have the peculiar idea that every system should make a particular recording sound different from how every other system reproduces it ... I've never quite got the logic of this thinking ... 😁.
pkane2001 Posted November 25, 2020 Author 36 minutes ago, Rexp said: 'Measurement equipment allows us to determine the accuracy of audio reproduction' from @Archimago Which measurements determine whether a sound has been reproduced accurately or not? What Kal said... Measurements can show if the reproduction is closer to the recorded signal. If it's different in some specific ways from measurements from another device, this can also tell which one is more true to the original, and what needs to be corrected in the other device. Even if we don't yet know every possible thing to measure, what we do know is still very useful since it allows us to make meaningful, repeatable comparisons and get to the root cause. This article from Benchmark covers my philosophy. Quote When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements. and Quote Any design process that relies solely on listening tests is doomed to fail. If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection. We may arrive at a solution that just masks the artifact with another less-objectionable artifact. On the other hand if we focus on eliminating every artifact that we can measure, we can quickly converge on a solution that approaches sonic transparency. To me, this approach leads to finding real issues and real solutions. Stabbing in the dark at all possible noise sources until everything "sounds just right" simply doesn't fit my temperament, sorry Frank :)
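The "closer to the recorded signal" comparison Paul describes can be illustrated with a toy null test: gain-match the reference against the device output, subtract, and measure what's left. The 1 kHz test tone, the 0.1% gain error, and the -60 dB second harmonic below are invented numbers purely for the sketch; a real tool such as DeltaWave also has to align clock drift, phase, and timing before the subtraction means anything.

```python
import math

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

# Reference: 100 full cycles of a 1 kHz tone at 48 kHz sampling (whole cycles
# keep the fundamental and harmonic exactly orthogonal, so the gain match is exact).
n, fs = 4800, 48000
ref = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]

# Simulated device output: 0.1% gain error plus a -60 dB second harmonic.
# These figures are invented for the sketch, not any real device's numbers.
out = [1.001 * s + 1e-3 * math.sin(2 * math.pi * 2000 * t / fs)
       for t, s in enumerate(ref)]

# Null test: least-squares gain match, subtract, and measure the residual.
gain = sum(r * o for r, o in zip(ref, out)) / sum(r * r for r in ref)
residual = [o - gain * r for r, o in zip(ref, out)]
level_db = 20 * math.log10(rms(residual) / rms(ref))
print(f"matched gain: {gain:.4f}, residual: {level_db:.1f} dB below reference")
```

The residual is exactly the injected distortion, because the least-squares gain match removes the level error without touching anything uncorrelated with the reference - which is the sense in which a measured null "gets to the root cause" rather than just sounding different.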
opus101 Posted November 25, 2020 5 minutes ago, pkane2001 said: Any design process that relies solely on listening tests is doomed to fail. If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection. Since you identify the above as 'your philosophy' @pkane2001, is the 'redesign' here purely random or guided in some way by the result of the listening test?
pkane2001 Posted November 25, 2020 Author 9 minutes ago, opus101 said: Since you identify the above as 'your philosophy' @pkane2001 is the 'redesign' here purely random or guided in some way by the result of the listening test? Wait. What you quoted isn't my philosophy. That's what Benchmark said is wrong with just using listening tests during a design, and I agree. My philosophy is that any proper listening test that produces an unexpected result that doesn't mesh with existing measurements is a reason to try to find a way to measure the "unknown effect" rather than to try to fix it through a random trial/error process.
opus101 Posted November 25, 2020 Just now, pkane2001 said: Wait. What you quoted isn't my philosophy. That's what Benchmark said is wrong with just using listening tests during a design, and I agree. You quoted Benchmark and said they cover your philosophy. That's what I quoted - your quote of them. So when you agree with them in characterizing that process as 'wrong', I'm curious about the details of that purportedly 'wrong' process. To agree with them, surely you must know what process they're talking about here?
pkane2001 Posted November 25, 2020 Author 16 minutes ago, opus101 said: You quoted Benchmark and said they cover your philosophy. That's what I quoted, your quote of them. So when you agree with them in characterizing that process as 'wrong' I'm curious about the details of that purportedly 'wrong' process? To agree with them surely you must know what process they're talking about here? Here's what I quoted, and I highlight the part I felt particular kinship to: Quote When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements. The process that I believe is wrong, as I've already mentioned twice, is to use listening tests to try to correct for errors that can't be measured, by random trial/error. Also mentioned in the Benchmark quote: "If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection."