
Ars prepares to put “audiophile” Ethernet cables to the test in Las Vegas



Sorry. Whatever happens in Vegas, stays in Vegas.

Tell that to the guys banned from casinos worldwide for card counting!

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter", working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.


That is one badge of honor I would really enjoy. Apart from the bit where the Chicago mafia makes a hit.

 

I'm beginning to understand the choice of venue now.

 

There are some fine comments in that thread.

 

The audio test I always consider raises the stakes: the owner of the company that makes the device has to take the test (or an appointed surrogate who makes the same claims).

 

If the device representative can correctly detect the qualities set forth in their claims, they win $1000. If they cannot detect the qualities, they get a 220V shock to their genitals. The test lasts ten rounds.

 

I think this would go a long way towards weeding out the charlatans.


I just ordered a set of Blue Jeans Cable Cat 6a Ethernet cables. I don't know if better Ethernet cables can make a difference. I'm sort of an agnostic on the issue. I do agree that there are some logical (non-snake-oil) reasons to think they could.

 

But what is clear to me is that most inexpensive "off the shelf" ethernet cables don't even meet spec; and often the RJ-45 plugs themselves are clearly not very good - you can see and feel the lack of a good connection.

 

So I figured at the very least the BJC meet spec (they test each cable) and have well made connectors. I will let you know if in my non-scientific audition I can hear any difference. They also are a fraction of the price of the "audiophile" brands, so the cost involved is quite small.

Main listening (small home office):

Main setup: Surge protector +>Isol-8 Mini sub Axis Power Strip/Isolation>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three BXT

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)


“Can humans run the 4-minute mile (4MM)?” is ambiguous and can be interpreted as “can any human, even once, run a 4MM?” or “can some humans consistently run a 4MM?”, and the answers to both are yes. But it can also mean “can the {average, typical, normal} human run a 4MM?” and the answer would be no.

From the article: The goal will be to see if a statistically significant number of test subjects can differentiate…
… indicates they will see whether the {average, typical, normal} person can hear a difference, and the answer will be no. But the target customer believes they can hear better than {average, typical, normal}. So what’s the point? If, instead, they tested whether “any individual can differentiate for a statistically significant number of trials”, it is still likely that they will find none, but at least they could be praised for an honest, sincere attempt. Their current plan is just preaching to the choir, and they know it...
we also know that this test won’t sway anyone
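The "statistically significant number of trials" criterion is easy to make concrete. As a minimal sketch (the 16-trial count and the 0.05 threshold here are my own illustrative assumptions, not numbers from the article): in a forced-choice ABX test a pure guesser is right half the time, so the one-sided binomial tail tells you the score needed to reject chance.

```python
from math import comb

def binom_p_value(correct: int, trials: int, p_guess: float = 0.5) -> float:
    """One-sided p-value: probability of scoring >= `correct`
    out of `trials` by guessing alone."""
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(correct, trials + 1))

# How many hits out of 16 ABX trials are needed for p < 0.05?
for hits in range(8, 17):
    if binom_p_value(hits, 16) < 0.05:
        print(f"{hits}/16 correct -> p = {binom_p_value(hits, 16):.4f}")
        break
# -> 12/16 correct (p ~ 0.0384); 11/16 is only p ~ 0.105, i.e. not significant
```

Note that this is per individual: one listener hitting 12/16 under blind conditions is evidence, whereas a group averaging near 8/16 is exactly the null result the post above predicts.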

 

This will be entertaining:

 

Ars prepares to put “audiophile” Ethernet cables to the test in Las Vegas

 

Why entertaining? We already know the result.

There are some fine comments in that thread.
The audio test I always consider raises the stakes: the owner of the company that makes the device has to take the test (or an appointed surrogate who makes the same claims).

 

If the device representative can correctly detect the qualities set forth in their claims, they win $1000. If they cannot detect the qualities, they get a 220V shock to their genitals. The test lasts ten rounds.

 

I think this would go a long way towards weeding out the charlatans.

 

LOL. Silly, but LOL...

 

But $1000 won't even cover air fare and hotel (and losing in Vegas ;-) ). I'd prefer that the manufacturer offer free cables to the first X people (1? 10?) who can hear the difference a statistically significant number of times. It would motivate people who believe they could benefit to try really hard, and it would also show the owner is confident. The owner may be deaf.

“Can humans run the 4-minute mile (4MM)?” is ambiguous and can be interpreted as “can any human, even once, run a 4MM?” or “can some humans consistently run a 4MM?”, and the answers to both are yes. But it can also mean “can the {average, typical, normal} human run a 4MM?” and the answer would be no.

… indicates they will see whether the {average, typical, normal} person can hear a difference, and the answer will be no. But the target customer believes they can hear better than {average, typical, normal}. So what’s the point? If, instead, they tested whether “any individual can differentiate for a statistically significant number of trials”, it is still likely that they will find none, but at least they could be praised for an honest, sincere attempt. Their current plan is just preaching to the choir, and they know it...

 

 

 

Why entertaining? We already know the result.

I completely agree & it is the usual problem encountered with perceptual testing by amateurs - the test is full of experimenter's bias, which is guaranteed to give a null result.

 

As you said the null hypothesis isn't clearly established - it looks like it's a test to answer the question you suggested "whether the {average, typical, normal} person can hear a difference" & I agree with you - the answer is known to be "no". The same as would be achieved for any blind testing of almost anything that doesn't have gross, obvious differences - wine, etc.

 

What is disingenuous about this is that the supporters of blind testing who know full well that the test is flawed will stay quiet about these flaws, preferring instead another null feather in their already festooned cap, which they pretend not to be wearing ("because null results prove nothing"). If null results prove nothing, why do they waste everybody's time with flawed blind tests which are guaranteed to return null results? :)

 

Leave perceptual testing to honest experts in the field who aren't agenda driven, says I.

From the article: The goal will be to see if a statistically significant number of test subjects can differentiate…

… indicates they will see whether the {average, typical, normal} person can hear a difference, and the answer will be no. But the target customer believes they can hear better than {average, typical, normal}. So what’s the point? If, instead, they tested whether “any individual can differentiate for a statistically significant number of trials”, it is still likely that they will find none, but at least they could be praised for an honest, sincere attempt. Their current plan is just preaching to the choir, and they know it...

I quite agree with you ... it's unimportant how many people can detect that there is a difference; all that is important is that one person can consistently pick a difference.

 

Why entertaining? We already know the result.

I think the discussion about it will be more entertaining than the result.

Eloise


I quite agree with you ... it's unimportant how many people can detect that there is a difference; all that is important is that one person can consistently pick a difference.

 

 

I think the discussion about it will be more entertaining than the result.

Only if you are entertained by the usual uninformed bickering that you can find on most audio forums regarding blind testing!

it's unimportant how many people can detect that there is a difference; all that is important is that one person can consistently pick a difference.

That raises the question of value. Even if there is an improvement that's consistently identifiable by a small group of super-hearers, is it worth the marginal cost to those who can't hear it?

 


That raises the question of value. Even if there is an improvement that's consistently identifiable by a small group of super-hearers, is it worth the marginal cost to those who can't hear it?

 


Ah, this is a common mistake - mixing up the difficulty of a positive result on a blind test with the notion that you have to be a super-hearer to do so.

 

If a difference is gross then everybody will hear it & blind testing is not required, but as the difference gets smaller it quickly becomes very difficult to differentiate A from B - not because they are not different, but because the very process of blind listening imposes a different style of listening - a style that needs special techniques to avoid being confused.

 

Fact is that all perceptual blind tests are difficult to get right - none more so than audio blind testing - there are so many factors to control in order to ensure that what is being examined is auditory abilities.

 

What is usually required is, firstly, identifying & isolating a specific audible factor that you can consistently & reliably differentiate sighted. This requires some expertise & training. It is much different from the usual casual listening we do over time when auditioning equipment at home. It's the equivalent of being able to identify & describe in language the differences heard.

 

The second ability needed is retaining focus on the above audible factor throughout the multiple listening that is demanded by blind listening tests.

 

So these two factors are the main things being tested in blind testing - not super hearing!

I quite agree with you ... it's unimportant how many people can detect that there is a difference; all that is important is that one person can consistently pick a difference.

 

 

I think the discussion about it will be more entertaining than the result.

 

Why does this group consistently spend much more time and thought on cables than on the equipment that connects them? "Audiophile" copper Ethernet cables are simply the answer to entirely the wrong question: if you are at all concerned about the signal integrity (SI) of an Ethernet connection, then go optical. A $5 Corning optical Ethernet cable will best any so-called audiophile electrical Ethernet cable.

Custom room treatments for headphone users.

Why does this group consistently spend much more time and thought on cables than on the equipment that connects them? "Audiophile" copper Ethernet cables are simply the answer to entirely the wrong question: if you are at all concerned about the signal integrity (SI) of an Ethernet connection, then go optical. A $5 Corning optical Ethernet cable will best any so-called audiophile electrical Ethernet cable.

Yes, it definitely could/should work, but are there similar problems with this Corning solution as with the Corning optical USB cable? Connection & reliability issues, AFAIK?

I just ordered a set of Blue Jeans Cable Cat 6a Ethernet cables. I don't know if better Ethernet cables can make a difference. I'm sort of an agnostic on the issue. I do agree that there are some logical (non-snake-oil) reasons to think they could.

 

But what is clear to me is that most inexpensive "off the shelf" ethernet cables don't even meet spec; and often the RJ-45 plugs themselves are clearly not very good - you can see and feel the lack of a good connection.

 

So I figured at the very least the BJC meet spec (they test each cable) and have well made connectors. I will let you know if in my non-scientific audition I can hear any difference. They also are a fraction of the price of the "audiophile" brands, so the cost involved is quite small.

 

You did the right thing. I bought theirs too, because I was stringing it under the house, and I did not want to get it done and find out an internal wire had snapped. The Home Depot-grade stuff is notorious for that.

“Can humans run the 4-minute mile (4MM)?” is ambiguous and can be interpreted as “can any human, even once, run a 4MM?” or “can some humans consistently run a 4MM?”, and the answers to both are yes. But it can also mean “can the {average, typical, normal} human run a 4MM?” and the answer would be no.

… indicates they will see whether the {average, typical, normal} person can hear a difference, and the answer will be no. But the target customer believes they can hear better than {average, typical, normal}. So what’s the point? If, instead, they tested whether “any individual can differentiate for a statistically significant number of trials”, it is still likely that they will find none, but at least they could be praised for an honest, sincere attempt. Their current plan is just preaching to the choir, and they know it...

 

 

 

Why entertaining? We already know the result.

 

 

I agree with you. If any ONE person can, in a statistically robust and reproducible manner, identify the better cable double-blind, that should be sufficiently compelling. There is no need for more than one test subject.

I quite agree with you ... it's unimportant how many people can detect that there is a difference; all that is important is that one person can consistently pick a difference.

 

 

I think the discussion about it will be more entertaining than the result.

 

 

This.

Ah, this is a common mistake - mixing up the difficulty of a positive result on a blind test with the notion that you have to be a super-hearer to do so.

 

If a difference is gross then everybody will hear it & blind testing is not required, but as the difference gets smaller it quickly becomes very difficult to differentiate A from B - not because they are not different, but because the very process of blind listening imposes a different style of listening - a style that needs special techniques to avoid being confused.

 

Fact is that all perceptual blind tests are difficult to get right - none more so than audio blind testing - there are so many factors to control in order to ensure that what is being examined is auditory abilities.

 

What is usually required is, firstly, identifying & isolating a specific audible factor that you can consistently & reliably differentiate sighted. This requires some expertise & training. It is much different from the usual casual listening we do over time when auditioning equipment at home. It's the equivalent of being able to identify & describe in language the differences heard.

 

The second ability needed is retaining focus on the above audible factor throughout the multiple listening that is demanded by blind listening tests.

 

So these two factors are the main things being tested in blind testing - not super hearing!

 

 

It isn't a mistake at all. If physically significant differences do not exist, then they cannot be detected by anyone, ipso facto.

 

If one person can detect differences in a statistically reliable manner (and the experiment itself is not flawed), then those differences most probably do exist, and the burden is then shifted upon those who want to claim that there are no significant, audible differences. They have to demonstrate that the person who perceived the differences is audiologically or neurologically atypical, which is a very difficult thing to do (given the extraordinary nature of the claim).

I agree with you. If any ONE person can, in a statistically robust and reproducible manner, identify the better cable double-blind, that should be sufficiently compelling. There is no need for more than one test subject.

Well, there is more to it, I believe.

For a useful test that one subject has to be one who has overcome the obstacles outlined below. Do you think that they already have this person or are they expecting to find someone on the day, from the public, with these attributes?

 

Look at this guy's detailing of his experience with just such blind testing. He was blind testing high-res vs Red Book audio, & on that thread he has posted his many positive ABX results, thus confirming that he is not guessing. He had already established that he consistently had a preference for high-res audio. He's a recording engineer, so he has some expertise & training in listening to audio. His thread from 2013 is here

 

You will see three crucial elements:

- self training i.e. isolating a particular characteristic to use

- finding the correct program material

- retaining focus during testing

 

Some of the quotes from it:

It took me a **lot** of training. I listened for a dozen wrong things before I settled on the aspects below.

 

I try to visualize the point source of every single instrument in the mix--that's why I picked a complex mix for this trial. I pinpoint precisely where each instrument is, and especially its distance from the listener. Problem is, both versions already have *some* spatial depth and placement, it's only a matter of deciding which one is deeper, and more precise. I've tried making determinations off of a particular part, like a guitar vamp or hi-hat pattern, but can't get above about 2/3 correct that way.

The better approach is just to ask myself which version is easier to precisely visualize, as a holistic judgment of all the pieces together. Equally effective, or rather equally contributing to the choice, is asking which version holistically gives me a sense of a physically larger soundstage, especially in the dimension extending directly away from me--thus the idea of listening to reverb characteristics.

Having to listen to four playbacks (A/B, X/Y, for one choice) gives rise to the problem of desensitization. Neurons naturally give decreased response to repetitions, so I've found I can target my answer more easily if I pause 5-10 seconds between an A/B (or an X/Y). Otherwise, A/B is always easier than X/Y.

 

"Keeping my attention focused for a proper aural listening posture is brutal. It is VERY easy to drift into listening for frequency domains--which is usually the most productive approach when recording and mixing. Instead I try to focus on depth of the soundstage, the sound picture I think I can hear. The more 3D it seems, the better. "

 

Caveats--Program material is crucial. Anything that did not pass through the air on the way to the recording material, like ITB synth tracks, I'm completely unable to detect; only live acoustic sources give me anything to work with.

I think you are misunderstanding me. Let's say you do the tests on 1000 people. 999 people's results support the null hypothesis. One does not.

 

What do you conclude, or do next?

 

I think the best thing would be to test that 1 person several more times, perhaps under more careful conditions, etc., to see if this is simply a statistical anomaly, or whether they can reproducibly detect the differences. (If it is a statistical fluke, then the more repeats you try, the less likely you are to see a positive outcome survive).

 

If that one person can reproducibly detect differences, then audible differences do in fact exist.
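The "statistical fluke" point above can be quantified. A rough back-of-the-envelope sketch, under hypothetical assumptions of my own (each of 1000 subjects takes a 16-trial ABX test and "passes" at 12/16 correct, the usual p < 0.05 cutoff): even if nobody can hear anything, one lucky passer is all but guaranteed, which is exactly why the retest matters.

```python
from math import comb

def pass_prob(trials: int, threshold: int, p: float = 0.5) -> float:
    """Chance a pure guesser scores >= threshold out of trials."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(threshold, trials + 1))

subjects = 1000                      # hypothetical sample size
p_single = pass_prob(16, 12)         # per-guesser pass rate, ~0.038
p_any = 1 - (1 - p_single) ** subjects
print(f"P(at least one of {subjects} guessers passes) = {p_any:.6f}")  # ~1.0

# Retesting the lone "winner" is an independent round, so a guesser's
# chance of passing twice collapses to p_single**2 (~0.0015), while a
# genuine detector's pass rate survives the retest unchanged.
print(f"P(a given guesser passes both rounds) = {p_single**2:.6f}")
```

This is just the multiple-comparisons problem: screening many subjects inflates false positives, and repeating the test on the single positive subject deflates them again.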

It isn't a mistake at all. If physically significant differences do not exist, then they cannot be detected by anyone, ipso facto.

 

If one person can detect differences in a statistically reliable manner (and the experiment itself is not flawed),

And that is the crucial text - the experiment is flawed to start with. If you don't understand the complexities & difficulties of perceptual testing, you really should not be trying to do pseudo-scientific tests that are guaranteed to be flawed. If you read my last post you will see just some of the headline issues in ensuring the test is not flawed. There are many more outlined in the ITU standards document BS.1116-1. It's just ignorant or divisive to take such a sensitive scientific test procedure out of the lab & pretend that it is still rigorous.

then those differences most probably do exist, and the burden is then shifted upon those who want to claim that there are no significant, audible differences. They have to demonstrate that the person who perceived the differences is audiologically or neurologically atypical, which is a very difficult thing to do (given the extraordinary nature of the claim).

But there won't be any differences found - it's guaranteed by experimenter's bias!!
I think you are misunderstanding me. Let's say you do the tests on 1000 people. 999 people's results support the null hypothesis. One does not.

 

What do you conclude, or do next?

 

I think the best thing would be to test that 1 person several more times, perhaps under more careful conditions, etc., to see if this is simply a statistical anomaly, or whether they can reproducibly detect the differences. (If it is a statistical fluke, then the more repeats you try, the less likely you are to see a positive outcome survive).

The chances of finding one person in 1,000 from the public who will prove positive on this blind test are infinitesimally small. The test is flawed to begin with - set up with almost 0% chance of success, for all the reasons I already gave (& some more I didn't).

 

If they pick from a group of people who have trained themselves to hear any difference that might exist, & they can use their own program material in a system & listening environment they are familiar with, then maybe it is trying to deal with some of the known issues - but there is still the administration of the test itself to attend to.

 

Valid blind testing is not trivial & requires much more expertise to do correctly than is demonstrated here.


OK, for the sake of argument, consider a hypothetically flawless experiment, so we can focus (just for now) purely on the interpretation of the results. Put down the manual and think about what it means when one person out of your random sample of 1000 listeners can, in a statistically robust and reproducible manner, identify the expensive cable in a (hypothetically) perfect double-blind trial.

