
Archimago on Greene vs Harley



Recommended Posts

15 hours ago, opus101 said:

Going back to Greene's apparent lacuna on 'soundstage' for a moment. After stating his straw man, he says this:

 

Since no one has any idea of what kind of soundstage ought to arise from most recordings, soundstage is not really a sensible criterion for evaluation of anything.

 

Hmm, dismissive over-much? In the course of my DAC development over the past week or so I've uncovered (in the limited context of multibit DAC design) something objective that appears to affect soundstage: noise in the analog stage after the DAC chip. I'm using a passive filter followed by an opamp (which can't be a virtual ground because of the preceding filter). As far as I can ascertain, the opamp's noise is beneath the dither level of RBCD (-93dB), yet a lower noise-gain circuit using the same opamp makes the soundstage bigger. I don't, though, have any evidence beyond my own ears that the soundstage is clearer and larger.
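For what it's worth, the scale of the effect can be sanity-checked on paper. The sketch below uses invented component values (a 5 nV/√Hz op-amp, 20 kHz bandwidth, 2 Vrms full scale - not measurements of any actual circuit) to compare integrated op-amp noise at two noise gains against the -93 dB dither floor:

```python
import math

def noise_gain(r_feedback, r_ground):
    """Noise gain of a standard op-amp feedback stage: 1 + Rf/Rg."""
    return 1 + r_feedback / r_ground

def output_noise_dbfs(en_nv_rthz, bandwidth_hz, gain, full_scale_vrms):
    """Integrated output noise in dB relative to full scale.

    en_nv_rthz: input-referred voltage noise density in nV/sqrt(Hz).
    Ignores resistor thermal noise and current noise, for simplicity.
    """
    vn_rms = en_nv_rthz * 1e-9 * math.sqrt(bandwidth_hz) * gain
    return 20 * math.log10(vn_rms / full_scale_vrms)

# Hypothetical resistor values giving noise gains of 2 and 1.1.
high_ng = output_noise_dbfs(5, 20e3, noise_gain(10e3, 10e3), 2.0)
low_ng = output_noise_dbfs(5, 20e3, noise_gain(10e3, 100e3), 2.0)
print(round(high_ng, 1), round(low_ng, 1))  # both far below -93 dBFS
```

With these numbers both configurations land well below the dither floor, which is what makes the audible difference puzzling if it's real.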

 

It must be taken into account that Greene's comments are entirely concerned with the reproduction of classical music.

And, as most people know, you can only achieve a reasonably realistic soundstage using minimalist mic'ing. But even that depends on the mic technique used (spaced vs. near-coincident) and the distance of the mics to the sources.

His site is down at the moment, but there you will find a few pieces about this subject.

 

On the other hand, the soundstage of multi-mic'ed studio mixes is not captured but fabricated. Which is why he says that "soundstage is not really a sensible criterion for evaluation of anything."

2 hours ago, pkane2001 said:

You are, in effect, suggesting that you'll be able to understand, explain, and predict choices and motivations of someone you hardly know at a distance, from a few known purchase decisions and a few posts on internet forums. I'd argue that this is a fool's errand, especially if you're interested in any sort of accuracy. I often can't predict what my wife would prefer, and I've spent most of my life with her, observing her preferences and talking to her about her choices thousands of times. So, no, I don't think it's as simple as you describe :)

You skipped a bit of my post - I clearly said it takes “enough good information”. Of course “a few” decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models. Access to an individual's social media posts, web searches, etc. is a treasure trove of objective and highly predictive data that are more accurate at diagnosing disease than a lot of traditionally used medical and demographic info (e.g. this typical example). It's a very valuable population health tool. The sheer amount of available data is astounding, and it's very revealing. Look up Lyle Ungar's work - he's been studying this for years.
 

We obviously have patients’ permission to access what we use for research - but you can buy vast deidentified datasets and build models with great accuracy, as long as you have enough good data.

 

Here's a simplified example. If a man does a web search on treatment for increasing urinary frequency, you only know that he either has the problem, knows someone with the problem, or is curious about it. If he only does it once and his other web searches are compatible with a young adult, he's more likely to be writing a report than to have a medical problem. If his other interests suggest that he's middle aged and he searches again every few months at a slowly increasing rate, the most likely reason is benign prostatic enlargement. If his web profile suggests a young adult and he searches every few days, adding burning pain to the second search, he probably has an infection. And if his searches suggest his age to be 60+ and he also seeks info on unexplained weight loss, prostate cancer becomes a more likely explanation.
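That chain of reasoning is essentially a Bayesian update, which can be sketched in a few lines. Everything below - the hypotheses, priors, and likelihoods - is invented purely for illustration:

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a set of mutually exclusive hypotheses."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Made-up prior probabilities for why someone searches "urinary frequency".
priors = {"writing a report": 0.60, "BPH": 0.25,
          "infection": 0.10, "cancer": 0.05}

# Evidence: profile suggests middle age, and the searches recur every
# few months. Made-up likelihoods of seeing that under each hypothesis.
likelihoods = {"writing a report": 0.05, "BPH": 0.70,
               "infection": 0.20, "cancer": 0.30}

post = posterior(priors, likelihoods)
print(max(post, key=post.get))  # the evidence shifts the bet toward BPH
```

Each additional signal (tweets, purchases) is just another likelihood term multiplied in, which is why more data points sharpen the picture.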

 

Now throw in tweets about how he feels. Add his credit card purchasing data and you start to get a clearer picture.  Obviously it takes more than a few data points. But current and historical behavior definitely predict future behavior. Why do you think consumer data are worth so much money?  
 

Knowing that a given audiophile had returned 6 out of 10 equipment purchases would tell you something about him or her. Access to the alleged problems prompting the returns might offer even more insight. Knowing that Stereophile (to which Amazon says he has a Kindle subscription) reviewed all 10 favorably a month or less before purchase, but that an audio website he visits frequently panned the 6 he returned just before he returned them, focuses the picture a bit more. Run a correlation analysis on performance data of the units in question: if it turns out that 6 of 6 were returned and replaced with items that all shared some measured “improvement”, we're developing a model likely to predict his satisfaction with future audio purchases.
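That correlation step is simple enough to sketch. The numbers below are fabricated, and SINAD is just a hypothetical stand-in for "some measured improvement":

```python
def pearson(x, y):
    """Pearson correlation coefficient, pure Python."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Fabricated data: a measured figure of merit for 10 purchases,
# and whether each one was kept (1) or returned (0).
sinad_db = [88, 90, 95, 102, 105, 110, 96, 85, 92, 108]
kept     = [0,  0,  0,  1,   1,   1,   0,  0,  0,  1]

r = pearson(sinad_db, kept)
print(round(r, 2))  # strongly positive in this toy data set
```

A strong correlation like this in real purchase histories would be the seed of exactly the kind of satisfaction-prediction model described above.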

 

It takes thousands of data points to support a sound and useful model. But you can buy or otherwise access millions of data points today - this is how those “targeted ads” somehow follow you from website to seemingly unrelated website.  Believing that our behavior is private and inaccessible to others is hopelessly naive.  Many industries are monitoring and guiding much of our lives right now.  Predictive analytics are telling them what you’re going to buy next year, what you’ll pay for it, and how soon you’ll replace it.  And they’re very often right.

6 hours ago, botrytis said:

Soundstage is really a psychoacoustic phenomenon. It is an interplay of the speakers, room, and ears, and of how our brain then discerns that soundstage. It may be that we are so used to hearing music live that we naturally and automatically assign a soundstage to the music.

 

 

I disagree. The soundstage is 100% due to what's on the recording - easily proven with a capable system: simply play 3 tracks in a row with completely different acoustics, and the soundstage will change completely as one track ends and the next begins. With good examples it's like entering different universes; it's almost a shock to one's physical senses.

 

Quote

I mean from a studio, how can one actually have a soundstage when, in these times, people record alone and then compile those recordings?  

 

Very simple ... the soundstages of all the separate sound events coexist - they are layered on top of each other, and each can be focused on in turn and seen as having a separate identity. A visual analogy is having 3 or 4 images of completely different things layered in a photo-editing program, with equal levels of transparency for each: there's the montage, and then there is also each image, with full integrity, when you focus closely on it.
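The physical basis for that layering is linear superposition: separately recorded events simply sum in the mix, and each remains individually recoverable. A minimal sketch, with invented bin frequencies standing in for the separate sound events:

```python
import cmath
import math

def dft_bin(signal, k):
    """Normalised DFT coefficient of the signal at integer bin k."""
    n = len(signal)
    return sum(s * cmath.exp(-2j * math.pi * k * t / n)
               for t, s in enumerate(signal)) / n

n = 256
sources = [5, 20, 57]  # three hypothetical sources at different bins

# The "mix" is just the sum of the individual signals.
mix = [sum(math.sin(2 * math.pi * f * t / n) for f in sources)
       for t in range(n)]

# Each source is still fully present at its own frequency,
# untouched by the others: a unit sine shows up with magnitude 0.5.
mags = {f: abs(dft_bin(mix, f)) for f in sources}
print(mags)
```

Whether the ear/brain can then attend to each layer separately is the perceptual question; the information itself is preserved by the mix.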

31 minutes ago, bluesman said:

You skipped a bit of my post - I clearly said it takes “enough good information”. Of course “a few” decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models. Access to an individual's social media posts, web searches, etc. is a treasure trove of objective and highly predictive data that are more accurate at diagnosing disease than a lot of traditionally used medical and demographic info (e.g. this typical example). It's a very valuable population health tool. The sheer amount of available data is astounding, and it's very revealing. Look up Lyle Ungar's work - he's been studying this for years.

 

I'm very aware of data mining practices and large data sets -- something I've worked with long before the web, Twitter, and deep learning. I worked on deriving patterns from large data sets by training neural nets back in the 80s and 90s, well before this type of thing became popular.

 

But you can't buy more than a few data points on an individual's audio preferences if all they post is a couple of purchase decisions and maybe a few comments and reviews. If you're talking about deriving common patterns from larger data sets spanning multiple audiophiles, then I'm with you: that might even be an interesting project. But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then no, I don't see how that's possible, and we'll just have to agree to disagree.

21 minutes ago, fas42 said:

It is the noise that infests most audio playback that causes the problem; whether it's due to the behaviour of an opamp stage or to a variety of other sources is not really that relevant. The common factor is that the presence of the noise makes it too difficult for the ear/brain to unravel and decode the low-level cues and clues in the recording; unconsciously, the mind "gives up" trying to understand the meaning of the low-level 'hash' in the playback, and "soundstage" is severely diminished.

 

Do you have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio.

1 hour ago, pkane2001 said:

 

Do you have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio.

 

Since Frank can't measure the noise (he has stated that previously), he seems to be parroting the 'urban legends' out there.

 

Greene basically said that speaker placement, room treatments, etc. are more important to deal with than noise from the electronic chain.

3 hours ago, semente said:

It must be taken into account that Greene's comments are entirely concerned with the reproduction of classical music.

And, as most people know, you can only achieve a reasonably realistic soundstage using minimalist mic'ing. But even that depends on the mic technique use (spaced vs. near-coincident) and the distance of mics to sources.

 

I agree - my diet is overwhelmingly of classical music and hence my comments were made in that context. I tend to gravitate towards the more minimally mic'd recordings too.

15 minutes ago, botrytis said:

 

Since Frank can't measure the noise (he has stated that previously), he seems to be parroting the 'urban legends' out there.

 

Greene basically said that speaker placement, room treatments, etc. are more important to deal with than noise from the electronic chain.

 

Because there is always going to be noise in the replay; it's impossible to eliminate it completely. What matters is whether it matters, subjectively. IME there is a "good enough" level at which it's just part of the mix. Yes, it would be interesting to monitor exactly what it is in the 'quality' of that noise that makes it audibly significant or not, but that's for further down the track ...

2 hours ago, pkane2001 said:

But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then, no I don't see how that's possible, then we'll just have to agree to disagree

We’re actually not disagreeing at all. It takes the big data approach to build the model before we can apply it to individuals. But it’s eminently doable right now, and the data are readily accessible.  An individual may only post a few times about a purchase - but he or she leaves a huge trail of searches, downloads, vendor inquiries etc that are equally important.  They can all be tracked by IP address, screen name, etc.
 

I’ve built successful predictive models for hospital readmissions, success of treatment for heart failure, when to stop medications, etc.  I even built a model for criterion based diagnosis of Covid-19 in March when it became obvious that we wouldn’t be testing random population samples to identify patterns of spread.  Even with the support of a group that does NFL predictive analytics, I couldn’t convince anyone who mattered that it was a worthwhile effort.

 

It is.

14 minutes ago, Rexp said:

'Measurement equipment allows us to determine the accuracy of audio reproduction' from @Archimago

Which measurements determine whether a sound has been reproduced accurately or not? 

 

Very simple technique I've used over the years. I have the strange idea that a recording should sound like the recording, rather than a recording overlaid with the patina of the playback setup. So I build up a sense of the intrinsic nature of the recording by noting its characteristics when it's replayed with the finest possible state of the reproduction chain; this then becomes a reference for that recording, and the same characteristics have to be on show when I'm optimising a rig or evaluating some setup I'm not familiar with.

 

Some people seem to have the peculiar idea that every system should make a particular recording sound different from how every other system reproduces it ... I've never quite got the logic of this thinking ... 😁.

5 minutes ago, pkane2001 said:

Any design process that relies solely on listening tests is doomed to fail. If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection.

 

Since you identify the above as 'your philosophy' @pkane2001 is the 'redesign' here purely random or guided in some way by the result of the listening test?

9 minutes ago, opus101 said:

Since you identify the above as 'your philosophy' @pkane2001 is the 'redesign' here purely random or guided in some way by the result of the listening test?

 

Wait. What you quoted isn't my philosophy. That's what Benchmark said is wrong with just using listening tests during a design, and I agree.

 

My philosophy is that any proper listening test that produces an unexpected result that doesn't mesh with existing measurements is a reason to try to find a way to measure the "unknown effect" rather than to try to fix it through a random trial/error process.

Just now, pkane2001 said:

 

Wait. What you quoted isn't my philosophy. That's what Benchmark said is wrong with just using listening tests during a design, and I agree.

 

 

You quoted Benchmark and said they cover your philosophy. That's what I quoted: your quote of them.

 

So when you agree with them in characterizing that process as 'wrong', I'm curious about the details of that purportedly 'wrong' process. To agree with them, surely you must know what process they're talking about here?

16 minutes ago, opus101 said:

 

You quoted Benchmark and said they cover your philosophy. That's what I quoted: your quote of them.

 

So when you agree with them in characterizing that process as 'wrong', I'm curious about the details of that purportedly 'wrong' process. To agree with them, surely you must know what process they're talking about here?

 

Here's what I quoted, with the part I felt a particular kinship with highlighted:

Quote

When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements.

 

The process that I believe is wrong, as I've already mentioned twice, is to use listening tests to try to correct for errors that can't be measured, by random trial/error. Also mentioned in the Benchmark quote:  "If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection."

 

