
Archimago on Greene vs Harley


Archimago/Greene/Harley  

40 members have voted



 

https://archimago.blogspot.com/2020/11/on-measurements-listening-and-what.html

 

Vote in the poll and feel free to provide any evidence (not just an opinion) to support or refute one side or the other, including @Archimago. Here's the original article:

 

http://www.theabsolutesound.com/articles/measurements-listening-and-what-matters-in-audio/

 

1 hour ago, Speedskater said:

Note that the Robert E. Greene editorial is 1600 words long and that the Robert Harley reply is 1000 words.

http://www.theabsolutesound.com/articles/measurements-listening-and-what-matters-in-audio/

 

It seems that Mr. Harley took off on a tangent (rant) and his reply has little to do with the editorial.

 

So, 1600 words is objectively longer than 1000; that part is true :)

 

32 minutes ago, semente said:

Is Harley protecting the industry or just defending his own approach to audio? His arguments are but a house of cards...

 

Yes, to me it also seemed like more of a rant than an argument. I found Archimago's rebuttal about jitter and power cords more interesting.

1 hour ago, PeterG said:

Harley has built his career and TAS around the small differences that Greene asserts are insignificant, and he believes that these small differences are what being an audiophile is about.  By that standard, Greene is spreading disinformation that could mislead newer listeners and/or more gullible readers.

 

But you can see that Greene might have exactly the same opinion of Harley's viewpoint, right?

17 minutes ago, opus101 said:

 

It seems Greene thinks electronics in general is 'good enough'. Fair enough for him; that's based on his experience. My own experience differs - to me, not all DACs are 'good enough'. So I must disagree: electronics quality, IME, affects how much sense can be made of reproduced music. When recordings make more sense, the enjoyment level rises considerably.

 

I didn't read it quite like that. What I thought Greene was saying is that there are often very large errors in the transducer part of the audio chain, with far smaller errors in the electronics. So, instead of spending time trying to squeeze out that last error at -120 dB with a better power cord, one may get much further by first addressing the errors in speakers or headphones, which often amount to many dB. He doesn't deny that some tiny effects might be audible, but he states this:

 

Quote

Some of these tiny effects may be audible, but the important point is that there is seldom any mechanism for deciding if the changes are to the good or not. If there is no way to know why some change, of a power cord say, affected the sound, there is no way to decide whether the effect, if any, was positive or not. How could you tell? Believe the manufacturer? Believe reviewers, who have as little basis as you yourself? This is a major issue.
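
To put those magnitudes in perspective, here's a quick back-of-the-envelope conversion (a minimal sketch; the -120 dB and +5 dB figures are illustrative, not from the article):

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A power-cord-sized artifact at -120 dB below the signal:
print(f"-120 dB -> {db_to_amplitude_ratio(-120):.6f} of full scale")   # 0.000001

# A loudspeaker response error of, say, +5 dB at some frequency:
print(f"  +5 dB -> {db_to_amplitude_ratio(5):.2f}x the intended level")  # ~1.78x
```

Roughly six orders of magnitude separate the two error sources, which is the core of Greene's priority argument.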

 

28 minutes ago, opus101 said:

In terms of 'objectivity', it's clear Greene sets up a straw man about 'soundstage'. He says:

 

This idea of evaluating everything in terms of soundstage is potentially a major source of confusion.

 

I've not seen any argument from any reviewer or audiophile where everything's evaluated in terms of soundstage. But if anyone has a link for an example, I'm game to read it.

 

Soundstage is mentioned quite frequently as the argument against measurements. As in "we don't know how to measure a soundstage". I've encountered this argument plenty of times myself.
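
To be fair, while 'soundstage' as a perception has no single agreed metric, the interchannel cues that largely produce it can be measured. A minimal sketch (assuming two numpy arrays holding the channels of a stereo capture; this measures signal cues, not the perception itself):

```python
import numpy as np

def interchannel_cues(left: np.ndarray, right: np.ndarray, fs: int):
    """Estimate interchannel time and level differences -- two of the
    cues the ear/brain uses to place sources in a stereo image."""
    # Time difference: lag at the peak of the cross-correlation
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd_ms = 1000.0 * lag / fs
    # Level difference: RMS ratio in dB
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild_db = 20 * np.log10(rms(left) / rms(right))
    return itd_ms, ild_db
```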

2 minutes ago, opus101 said:

 

Yes, I agree that's what he's saying. But notice that 'way smaller' is from the point of view of our current measurement capabilities, not from the point of view of perception. Where I agree with him is that some things matter more than others; I disagree on what those things are. He's determining importance, ISTM, from a numbers POV. I'd say that's nonsensical; what matters is what's perceived by the listener.

 

Perception of differences can also be measured, and has been for many things, like amps, DACs, power cords, speakers, headphones. Are you able to show any evidence that a swap of a power cord can make more of an audible difference (assuming both are functional, of course!) than swapping say, speakers or headphones?
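
That's what controlled listening tests like ABX are for: the listener's perception is "measured" as a score against chance. A minimal sketch of the usual statistics (the 14/16 result is hypothetical):

```python
from scipy.stats import binomtest

# Hypothetical result: a listener identifies X correctly in 14 of 16 ABX trials.
trials, correct = 16, 14
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"{correct}/{trials} correct, p = {result.pvalue:.4f}")  # p ≈ 0.0021
# p < 0.05: very unlikely to be guessing, so the difference was audible to them.
```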

6 minutes ago, opus101 said:

I'm not at all interested in the question as it's about 'audible differences'. To me they're a distraction.

 

I'm not sure what you're saying. In order to be perceived, differences must be audible. The test subject must be able to differentiate between two devices by listening, otherwise any perception they claim is not due to audio differences.

1 minute ago, opus101 said:

 

I'm saying what Robert M. Pirsig says in 'Zen and the Art of Motorcycle Maintenance':

 

“The test of the machine is the satisfaction it gives you. There isn't any other test. If the machine produces tranquility it's right. If it disturbs you it's wrong until either the machine or your mind is changed.”

 

That, of course, is fine. But then you are disagreeing with Greene, Harley, and Archimago, since they all seem to think that there's a way to predict how much satisfaction an audio device will give to another user.

7 minutes ago, sandyk said:

Perform Non Sighted testing as several members (including Audiophile Neuroscience and myself) did a few years ago.

We all independently came to the same conclusion that the more expensive cable did sound a smidgen better, resulting in a "cleaner" sounding presentation.

 

Alex, again, I ask you to stop. You keep saying the same things in every thread, with no evidence to back up anything you claim.

55 minutes ago, bluesman said:

Give me enough good information and the job is as easy as pie.  The data must include purchase history, historical satisfaction, repeat purchases, mean time of ownership, how and why each device was dismissed from the stable, mods done, social media posts questioning how to improve each one, what his or her friends bought / sold and when,  etc.  Add in everything we can know about each of the devices themselves, including all technical data and what reviews the subject read before, during, and after ownership of each piece.  Accuracy improves with each additional subset, e.g. stability of interpersonal relationships, job security, illness, unexpected downturns, etc.  Facts like knowing that one purchase was rapidly followed by a flood of web posts asking for ideas on improving the new acquisition while another was followed by a year of quiet enjoyment add to the accuracy of such predictions.

 

I suspect what you describe isn't as simple as it sounds. How many data points would you think you'd need to accurately predict what drives someone's preferences? Five? Ten? A hundred? And how would you go about doing it at a distance, by reading someone else's posts or reviews, to determine what truly drives their preferences? Is it look and feel that affects them? Price? Brand name? Advertising? Influence of other reviews? Engineering or design principles, or components used? Some interaction of components in their system? Actual audio performance of the device you are interested in? Or is it some complex and variable weighted average of all of these, and probably of hundreds more factors? All while realizing, of course, that most people don't themselves fully understand all the drivers that lead them to prefer one thing over another.

 

You are, in effect, suggesting that you'll be able to understand, explain, and predict choices and motivations of someone you hardly know at a distance, from a few known purchase decisions and a few posts on internet forums. I'd argue that this is a fool's errand, especially if you're interested in any sort of accuracy. I often can't predict what my wife would prefer, and I've spent most of my life with her, observing her preferences and talking to her about her choices thousands of times. So, no, I don't think it's as simple as you describe :)

 

31 minutes ago, bluesman said:

You skipped a bit of my post - I clearly said it takes “enough good information”.  Of course “a few” decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models.  Access to an individual’s social media posts, web searches, etc is a treasure trove of objective and highly predictive data that are more accurate at diagnosing disease than a lot of traditionally used medical and demographic info (eg this typical example).  It’s a very valuable population health tool.  The sheer amount of available data is astounding, and it’s very revealing.  Look up Lyle Ungar’s work - he’s been studying this for years.  
 

We obviously have patients’ permission to access what we use for research - but you can buy vast deidentified datasets and build models with great accuracy, as long as you have enough good data.

 

Here’s a simplified example. If a man does a web search on treatment for increasing urinary frequency, you only know that he either has the problem, knows someone with the problem, or is curious about it. If he only does it once and his other web searches are compatible with a young adult, he’s more likely to be writing a report than to have a medical problem. If his other interests suggest that he’s middle-aged and he searches again every few months at a slowly increasing rate, the most likely reason is benign prostatic enlargement. If his web profile suggests a young adult and he searches every few days, adding burning pain to the second search, he probably has an infection. And if his searches suggest his age to be 60+ and he also seeks info on unexplained weight loss, prostate cancer becomes a more likely explanation.

 

Now throw in tweets about how he feels. Add his credit card purchasing data and you start to get a clearer picture.  Obviously it takes more than a few data points. But current and historical behavior definitely predict future behavior. Why do you think consumer data are worth so much money?  
 

Knowing that a given audiophile had returned 6 out of 10 equipment purchases would tell you something about him or her. Access to the alleged problems prompting each return might offer even more insight. Knowing that Stereophile (to which Amazon says he has a Kindle subscription) reviewed all 10 favorably a month or less before purchase, but that an audio website he visits frequently panned the 6 he returned just before he returned them, focuses the picture a bit more. Run a correlation analysis on performance data of the units in question - if it turns out that 6 of 6 were returned and replaced with items that all shared some measured “improvement”, we’re developing a model likely to predict his satisfaction with future audio purchases.

 

It takes thousands of data points to support a sound and useful model. But you can buy or otherwise access millions of data points today - this is how those “targeted ads” somehow follow you from website to seemingly unrelated website.  Believing that our behavior is private and inaccessible to others is hopelessly naive.  Many industries are monitoring and guiding much of our lives right now.  Predictive analytics are telling them what you’re going to buy next year, what you’ll pay for it, and how soon you’ll replace it.  And they’re very often right.

 

I'm very aware of data mining practices and large data sets -- something I worked with long before the web, Twitter, and deep learning. I worked on deriving patterns from large data sets by training neural nets back in the 80s and 90s, way before this type of stuff became popular.

 

But you can't buy more than a few data points on an individual's audio preferences if all they post is a couple of their purchase decisions and maybe a few comments and reviews. If you're talking about deriving common patterns from larger data sets spanning many audiophiles, then I'm with you: that might even be an interesting project. But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then no, I don't see how that's possible, and we'll just have to agree to disagree.
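
For what it's worth, the correlation analysis you describe is simple enough to sketch. Here's a minimal, hypothetical version (the SINAD figures and kept/returned labels are invented for illustration):

```python
import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical data: one measured spec per purchased unit (SINAD, dB)
# and whether that unit was kept (1) or returned (0).
sinad = np.array([118, 120, 115, 121, 119, 96, 94, 98, 92, 95])
kept  = np.array([  1,   1,   1,   1,   1,  0,  0,  0,  0,  0])

r, p = pointbiserialr(kept, sinad)
print(f"point-biserial r = {r:.2f}, p = {p:.4f}")
# A strong positive r would suggest this buyer's satisfaction tracks the
# measured spec -- but with 10 data points it's a toy, not a model.
```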

21 minutes ago, fas42 said:

It is the noise that infests most audio playback that causes the problem; whether it's due to the behaviour of an opamp stage or to a variety of other sources is not really that relevant. The common factor is that the presence of the noise makes it too difficult for the ear/brain to unravel and decode the low-level cues and clues in the recording; unconsciously, the mind "gives up" trying to understand the meaning of the low-level 'hash' in the playback, and "soundstage" is severely diminished.

 

You have some evidence to back this up, Frank, or is this just an opinion? Sure, noise can cause all kinds of ills in sound reproduction, but noise is not the only issue in audio. 

9 minutes ago, opus101 said:

Since you identify the above as 'your philosophy', @pkane2001, is the 'redesign' here purely random or guided in some way by the result of the listening test?

 

Wait. What you quoted isn't my philosophy. That's what Benchmark said is wrong with just using listening tests during a design, and I agree.

 

My philosophy is that any proper listening test that produces an unexpected result that doesn't mesh with existing measurements is a reason to try to find a way to measure the "unknown effect" rather than to try to fix it through a random trial/error process.
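
As a concrete illustration of what I mean by "finding a way to measure it": the simplest tool is a null test - capture the output with and without the change, align and level-match the two captures, subtract, and quantify whatever is left. A minimal sketch (real tools also correct timing drift and phase, which this omits):

```python
import numpy as np

def null_test_residual_db(a: np.ndarray, b: np.ndarray) -> float:
    """Level-match capture b to capture a, subtract, and report the
    residual -- everything that differs between them -- in dB vs. a."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    gain = np.dot(a, b) / np.dot(b, b)   # least-squares level match
    residual = a - gain * b
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(residual) / rms(a))
```

If the residual sits at, say, -110 dB, whatever you think you heard has to live down there; if it sits at -40 dB, there's a real, measurable difference to chase.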

16 minutes ago, opus101 said:

 

You quoted Benchmark and said they cover your philosophy. That's what I quoted, your quote of them.

 

So when you agree with them in characterizing that process as 'wrong', I'm curious about the details of that purportedly 'wrong' process. To agree with them, surely you must know what process they're talking about here?

 

Here's what I quoted, and I'll highlight the part I felt a particular kinship to:

Quote

When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements.

 

The process that I believe is wrong, as I've already mentioned twice, is to use listening tests to try to correct for errors that can't be measured, by random trial/error. Also mentioned in the Benchmark quote:  "If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection."

 

5 minutes ago, opus101 said:

I'm having trouble parsing your last sentence, 'by random trial/error'. So I take it the answer to my question is 'it's random', in which case I agree, and I think their process is probably just a strawman. After all, you quoted their marketing materials, right?

 

I don't get what you don't get. Marketing materials or not, that's the approach I take with listening tests and measurements. If I can't measure something I can identify audibly, I look for a way to measure it. To me, understanding the root cause and being able to find it again is much more valuable than just patching up a problem by guessing.

2 minutes ago, opus101 said:

I'm getting it. Their marketing materials of course want to paint their approach in the best possible light, hence they set up a strawman and demolish it, implying they're the truly enlightened ones in the audio business and that others are, by comparison, imbeciles.

 

So do you think that's what I'm doing also?  I'm not using Benchmark to validate my own philosophy. I used it as a shorthand for me not to have to write all that text to describe what my philosophy is. If you want to argue about why my philosophy is wrong, then let's have that discussion. I really couldn't care less about Benchmark or their products or marketing, although I hear they measure well :)

 

3 minutes ago, opus101 said:

 

If your question is 'Do I think you're trying to paint your own products in the best light by marketing them using strawmen?' then the answer's definitely a 'no'. I'm not even clear if you've got stuff to sell.

 

I rather suspect we're talking at cross-purposes here. I've been focussing on what process Benchmark wish to discredit in their marketing. You've been talking about what your own philosophy is. Two rather different focuses, no?

 

I'm not selling a thing, and yes, I think we are talking at cross-purposes. Maybe I was a bit lazy by having Benchmark describe my philosophy :)

 

30 minutes ago, opus101 said:

I'm interested in what Benchmark have to say about development using listening tests because in the past I had the pleasure of working with a guy who did (pro) audio design/development using an ABX box he built himself. Here's one of his posts on Gearslutz (the second post on this page) : https://www.gearslutz.com/board/music-computers/542885-paul-frindle-truth-myth-4-print.html

 

I like him. Finding it hard to disagree with most of what he said in that thread.

52 minutes ago, Rexp said:

Take a real-world example: if I record a voice at 24/192 and compare to the original, it sounds close enough to be deemed an accurate reproduction. No doubt the measurements would be similar. Now if I downsample to 16/44 it doesn't sound like the original, but the measurements won't reflect this, will they?

 

Of course measurements can reveal whether a recording is 16/44 or 24/192. Do you mean that you can tell the difference between speech recorded at 24/192 and the downsampled version at 16/44? That's not a hard test to perform. In fact, the recent Hi-res test by Mark Waldrep was of a very similar design. Didn't you take part in that test? How did you do?
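
If anyone wants to try it, the conversion itself is a one-liner, and the two things that actually change are measurable: bandwidth (everything above ~22 kHz is gone) and the quantization noise floor. A minimal sketch with scipy, using a test tone as a stand-in for the voice recording:

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi = 192_000
t = np.arange(fs_hi) / fs_hi                 # one second of samples
x = 0.5 * np.sin(2 * np.pi * 1000 * t)       # stand-in for the voice capture

y = resample_poly(x, up=147, down=640)       # 192k -> 44.1k (44100/192000 = 147/640)
y16 = np.round(y * 32767) / 32767            # quantize to 16 bits

err = y16 - y
print(f"16-bit quantization floor: {20 * np.log10(np.std(err) / np.std(y)):.0f} dB")
```

Whether those measurable changes are audible on voice is exactly what a blind test like Waldrep's is designed to answer.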

 
