
Optical Networking & SFPs



On 6/22/2020 at 11:19 AM, cat6man said:

I personally define objective proof in audio as requiring measurements that confirm a hypothesis.  I do not include blind or double blind testing as 'objective' (your mileage may vary) so let's continue on my version of a possibly 'objective' answer. any engineers or scientists here?

As a tenured professor at a university medical center, I and my colleagues rely on DBT as a valid research tool.  Well designed, properly powered, double blinded clinical trials have been the basis for many major scientific achievements that have saved countless lives.  If a well chosen, appropriately applied test of the delta between a placebo or control cohort and the active study cohort yields a p value of 0.02, there is only a 2% probability that a difference this large (or larger) would have been observed by random chance alone if the intervention had no real effect. I don’t understand how you can dismiss this as not being objective.
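To make the p value arithmetic concrete, here is a minimal sketch of a one-sided Fisher exact test on a hypothetical 2x2 trial outcome (the patient counts are made up for illustration, not from any real study):

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p value for a 2x2 table:
                   improved   not improved
        treatment      a           b
        control        c           d
    Returns the probability of seeing `a` or more improvements in the
    treatment arm if the treatment actually did nothing (null hypothesis).
    """
    n_treat = a + b
    total = a + b + c + d
    improved = a + c
    denom = comb(total, n_treat)
    # Sum hypergeometric probabilities for all outcomes at least as extreme.
    return sum(
        comb(improved, k) * comb(total - improved, n_treat - k)
        for k in range(a, min(improved, n_treat) + 1)
    ) / denom

# Hypothetical trial: 12/20 improved on treatment vs 4/20 on placebo.
p = fisher_one_sided(12, 8, 4, 16)
print(f"one-sided p = {p:.4f}")  # → one-sided p = 0.0112
```

A p of about 0.011 here means: if the treatment were inert, a split this lopsided (or worse) would turn up in roughly 1 trial in 90 by luck alone.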

 

The problem with most amateur DBT is how it’s done, not the principle behind it.

8 hours ago, cat6man said:

However in most(?) cases at the medical center, i'd guess that you have some (what i'll call) objective output measure such as blood pressure, survival rate, visual acuity, heart ejection fraction........i.e. things you can measure. 

 

compare that with 'the soundstage is wider' or 'it sounds more real' or 'i hear more breath on the vocals'

The critical factor in choosing a test is whether it can answer the question being asked.  As you point out, the question asked by audiophiles is often unanswerable and therefore untestable as asked.  It’s not possible to determine if a given cable improves SQ with blinded testing - for a start, one audiophile’s smooth is another’s muddy. But it’s possible to determine if there’s a consistent difference between two that subjects can identify with 95+% consistency in enough well done blinded listening trials to be statistically significant.  Correctly identifying one alternative from another with 95% certainty in well designed and well conducted DBT is objective.  And this can be useful info.
 

The “best” questions for any study are objectively measurable, as you point out.  But many healthcare decisions are made on the basis of parameters that can be as vague and nebulous as “how real it sounds”, e.g. sense of well-being, intensity of pain, quality-adjusted life years (QALYs), patient satisfaction, and likelihood of recommending. Picking and using good tests to get valid, repeatable results depends on what question is asked, how it’s asked, and in what form an answer is sought.  And measurement systems are less reliable than we believe.  Even “simple” blood pressure measurement is not simple, e.g. 3 consecutive readings 5 minutes apart can vary widely.

 

Other factors ignored by those who oversimplify DBT include consistency among multiple raters and consistency of the same rater over time.  If a subject correctly identifies A or B in 95% of presentations in one trial session but averages 53% over 5 identical sessions, that 95% session is not representative - it’s random chance.  Controls are also needed.  Present the same alternative to raters multiple times and see if they think they perceive differences that aren’t there.
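The repeatability point yields to a little arithmetic.  A sketch with made-up session sizes (10 presentations per session, where 9/10 is the nearest achievable "impressive" score, assuming a pure guesser at 50/50 per presentation):

```python
from math import comb

def p_at_least(correct, trials):
    """Chance a pure guesser gets >= `correct` of `trials` right."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# One short session: 9 or more of 10 correct by luck alone.
single = p_at_least(9, 10)              # 11/1024, about 1%
# Chance of at least one such "impressive" session across 5 sessions.
across_five = 1 - (1 - single) ** 5
print(f"single session: {single:.4f}, in 5 sessions: {across_five:.4f}")
```

With short sessions, roughly 1 guesser in 20 will produce at least one 90%+ session over five tries, which is why a single standout session proves nothing until it replicates.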

32 minutes ago, bluesman said:

Present the same alternative to raters multiple times and see if they think they perceive differences that aren’t there.

To bring this back OT, well done DBT could help determine if network platforms and components make a discernible difference in SQ.

22 minutes ago, The Computer Audiophile said:

There will always be people who, either real or not, believe they are outliers or different from the test group and the results don't ring true for them. 

And there really are a few clinging to each end of the bell shaped curve.  But most of us are like most of us. Only in Lake Wobegon are the children all above average.

39 minutes ago, Audiophile Neuroscience said:

As you point out p=.05 is the 'usual' figure nominated but some would like a bit better than this depending on the situation. @manisandher in the Red Pill/ Blue Pill scored p=0.1, a 99% consistency ie probability that his perception was not a product of random chance or guesswork.

A p value of 0.1 means a 10% probability that the result arose by chance alone, i.e. at most 90% confidence, not 99%.  A 99% figure would correspond to p = 0.01.

