
True to life recording? - We are fooling ourselves!


STC


On 8/16/2019 at 11:43 AM, fas42 said:

 

Yes, I call it the Cocktail Party Effect ... it means that I'm able to switch my attention from one source of sound in the environment, to another, when there's a mix of sounds occurring ... I'm not sure whether I'm the only one who can do this, though ...

It's basically attention focusing. Let's say you're not concentrating on the music and the violins register as sounding a little irritating. When you actually pay attention you hear some perfectly good violins and some voices hitting high notes very close to the violins'. Don't pay any attention and the sound you hear becomes something of a wash, and the similar frequencies aren't separated. But pay attention and you hear immediately that the voices occupy a slightly different part of the soundstage and can easily be separated from the violin tones.

10 hours ago, STC said:

 

A stereo system creates a soundstage by making use of the sound level/intensity difference between two speakers. Your brain uses the level difference between the two speakers to reconstruct the image. This image is known as a phantom image because it isn't real. Technically, if you could also get a single timing difference between the two speakers, you would get a more precise and sharper image, i.e. the location of the performers.

 

In a live performance, the sound from each performer comes from a single spot. Each ear receives only one set of level, timing and phase cues for each source. These cues will be consistent irrespective of where you are in the room. This is what distinguishes natural from unnatural sound when we hear it.

 

Unlike what you allege, in a live performance we are always able to pinpoint accurately where the performers are, because all the cues relating to spatial hearing reach the ears correctly.

 

In stereo playback, the instruments are positioned across the soundstage, which is the sound field between the most extreme points from which sound emerges. In a true stereo recording this is limited to the width spanned by the two loudspeakers.

 

It is simply impossible to claim that you could recreate the stage from any position outside the mid point of the triangle. Outside the mid point you would perceive the sound of the speaker closer to you as louder than the other, which destroys the ability of stereo to fully utilise the level difference between the two speakers to create the phantom image.

 

It is not possible. And if you insist that your brain can reconstruct the positioning information, then it is definitely impossible for me and the rest of the people I know, because we cannot. Having said that, there are occasions where it is possible, where the recording has a very small number of instruments and the instruments are split between the extreme left and right channels. For example, a sax could be strictly confined close to the left speaker and a drum to the right. In such circumstances it is generally possible to perceive a stage because you can hear two distinct sources. So in a way, what you are saying is possible for certain types of recording.

It's really very simple.

In real life an instrument launches a sound wave that creates a circular, expanding set of soundwaves in air. The soundwaves travel at a fixed speed, losing energy on the way as they expand and compress the air ahead. As the soundwave reaches your ears, depending on your head's orientation it will either reach both ears simultaneously (when the origin of the sound is at 90 degrees to both ears, i.e. straight ahead or straight behind), or it will take longer to reach one ear than the other, in which case it will have slightly different timing, slightly different phase and slightly different amplitude at each ear. Your brain utilises the differential between the 2 signals to provide location. If there's no difference, the sound originated from straight ahead...if it arrives first at the right ear, its origin is some distance off centre to the right, and the same for left.
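For anyone who wants to put rough numbers on that timing differential, here is a minimal sketch in Python of the classic Woodworth spherical-head approximation (the 8.75 cm head radius and 343 m/s speed of sound are textbook assumptions, not figures from this thread):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C
HEAD_RADIUS = 0.0875     # m, nominal adult head radius (assumption)

def itd_woodworth(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a distant source,
    using the Woodworth spherical-head approximation."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

if __name__ == "__main__":
    for az in (0, 15, 45, 90):
        print(f"azimuth {az:3d} deg  ITD = {itd_woodworth(az) * 1e6:6.1f} us")
```

Running it gives 0 µs for a source dead ahead and roughly 650 µs for a source off to one side at 90 degrees, which is the order of magnitude of timing difference the brain works with. The level (ILD) part of the cue comes from head shadowing and is frequency dependent, so it isn't captured by this simple geometry.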

 

What stereo does is duplicate those same 2 signals that reach the ears from a natural sound and give them the same differential amplitude and delay as the original, so your ears can't tell that the sounds are coming from 2 locations...rather they hear the same signal in both ears, with the same differential amplitude and delay as the signal from the single, natural source. When the 2 sets of signals (natural and stereo) reaching each ear are more or less identical, how can the brain tell if they come from a single source (as in nature) or a double, coordinated source (as in stereo)? The short answer is it can't, because all the qualities the brain requires to define location are present in both sets of signals...namely the differences caused by hearing a sound with 2 ears, placed at different locations around the head.

1 hour ago, Kal Rubinson said:

I think that is a great simplification.  While the sound wave may expand in all directions, it does not usually do so symmetrically in a circular or, more specifically, spherical pattern but is decidedly asymmetric due to the shape and configuration of the instrument and the presence of the player.  For example, the sound pattern that is created by a violin comes from the strings, the body and the f-holes (which direct the energy in different ways) and all are greatly influenced by the position of the player who holds it on one side of his body.

Kal, we are talking about sound waves, not sound patterns. A sound wave can be reflected, diffracted or refracted. A sound pattern is simply a collection of sound waves that may or may not have been subject to one or more of the above. But what reaches the ear and is analysed are sound waves. That they belong to a sound pattern is not relevant to the discussion beyond the fact that the sound pattern should be the same in both ears save for some amplitude, timing and therefore phase shifts caused by the waves travelling further to enter one ear vs the other.  

The fact that a violin is a complex instrument has no bearing on how we hear it naturally vs reproduced from a stereo system.

1 hour ago, STC said:

 

That’s what Blauert says too. 

 

 

This can be debunked easily. Take a stereo recording. Delete one channel and copy the other channel to it. Adjust the amplitude. You still hear the phantom image. 

 

Secondly, get a good microphone and record a true stereo recording of a small live ensemble. At the same time, also record the same performance using a binaural microphone. Now play back the stereo recording and record it with the binaural microphone. Listen to both and you will realise how fake and blurred the stereo playback is. Interaural crosstalk is a well researched area and you can find many research papers on it. Anyway, it is hard to describe this to someone who is yet to hear what binaural sound over loudspeakers is.

Nope. Take one channel of a stereo signal, duplicate it, adjust the amplitude and play it back over a stereo system. What you'll get is a centre image and NOTHING else. In other words a mono sound. Why? Because the entire content has the same delay and the same amplitude difference. In the original stereo recording, musicians and vocalists all have their individual locations and therefore their individual differences in delay and amplitude, but you throw away those differences when you take only one channel, so all you are left with is a single centre image with musicians playing at different levels (according to the levels that were on the channel that was duplicated).
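To make the experiment being argued about concrete, here is a minimal sketch of it, assuming numpy, the soundfile package and a hypothetical input file name; it copies the right channel into both channels and then applies a level offset to one copy:

```python
import numpy as np
import soundfile as sf  # assumption: the pysoundfile package is installed

# Load any stereo recording (hypothetical file name).
audio, rate = sf.read("stereo_recording.wav")    # shape: (samples, 2)
right = audio[:, 1]

# STC's experiment: discard the left channel and copy the right into it.
# Both channels are now identical, so the per-instrument ILD/ITD cues of
# the original mix are gone and everything images at the centre.
dual_mono = np.column_stack([right, right])
sf.write("duplicated_right.wav", dual_mono, rate)

# Attenuating one of the identical copies pans that single centre image
# towards the louder speaker; it does not bring back the original layout.
gain = 10 ** (-6.0 / 20)                          # -6 dB on the left copy
panned = np.column_stack([right * gain, right])
sf.write("duplicated_right_panned.wav", panned, rate)
```

This is only a sketch of the argument, not a claim about what either poster actually rendered.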

 

Regarding your example, I presume you mean, “take a good stereo microphone” right? 

Anyway I’ve had a passing interest in binaural recordings over the years but you’re right in that I’ve never heard them on anything other than headphones, which is where they’re generally demonstrated.  I actually wasn’t aware of any real interest in binaural recordings for speakers. 

However I can clearly explain why your example would sound strange. A binaural microphone is still just a microphone with the very simple ability to transduce sound pressure waves into voltage. It is nothing like the human hearing system, which involves massively complex cognitive and psycho-acoustic processes, so what you hear vs what you re-record and then hear are bound to be very different, because you've stripped out all the brain's functions and recorded stuff in each channel that our normal hearing would ignore. Once it's recorded in the opposite channel, our hearing can no longer ignore it because you've completely changed its characteristics.

 

 

1 hour ago, Ralf11 said:

 

Let me offer a few tidbits.  The complex waveform is distorted as it bounces off the pinna, and then experiences more distortion as it propagates thru the ear canal.  It then hits the eardrum which further distorts it due to the sluggish response of the physical membrane, and the inertia of the 3 small bones attached. 

 

The bones provide a 3x amplification but smear the transient response even further.  They also have a particular resonance frequency that can gum things up.  The mangled result is delivered to the oval window which is sluggish too, like the outer ear drum.

 

More distortions follow and eventually the sound waves cause vibrations in the Organ of Corti (which has NOT been tuned by any organ-meister, I assure you!).  But as it spreads along the deeply buried tapered wedge that is for some screwball reason wrapped into a spiral, parts of it resonate from different frequencies so it is kinda "tuned" - and I mean kinda!

 

Hair cells (if any remain after your life of live concerts, jack-hammers going outside your apt. window, and bus horns) are stimulated by this mechanical energy and emit neuro-chemicals, which then have to cross a FREAKIN' synapse to trigger another damn nerve and its impulse.  My grandmother could design a better system and she is dead.

 

You can nit-pick but that's the gist of it.  It is a messed up system, full of distortions, transduction of one type of energy to another, losses everywhere, filtering,  and a parade of horribles.

 

And we didn't even get into the psycho-acoustic or cross-sensory problems.

That is possibly the most hilarious message I think I may have ever read on any forum. Well done Ralf 

4 hours ago, STC said:

Just to prove you're wrong. Stereo can work with just a level difference, although with timing it can sound more accurate. There is no such thing as stereo creating one image. Sample (5) shows you how the original right channel can now sound like the left channel of the original recording.

STC, with respect, this thread is starting to behave like the Oozlum bird. 

 

There was never any doubt that manipulating levels and phase causes image shifts. It's how stereo works and what I've been arguing since the beginning, so I've no idea why you went to so much trouble simply to prove what I've said all along. If the 2 signals in the stereo channels are similar or the same, save for some subtle shifts in amplitude and phase, the brain will combine the 2 signals into a single signal with direction/location. It's been like that since man hunted with stone tipped spears, because it's based on our innate hearing ability.

 

As for there being no such thing as stereo creating one image, simply switch your pre-amp to mono. Your 2 speakers will now both receive the same signal, with no differences in amplitude or timing, and they'll produce ONLY a centre image. Why? Because there's no differential signal. All the information is there, but it's now combined and delivered as 2 identical L&R signals, played through both loudspeakers equally loud. Your brain picks up 2 matching signals from 2 spatially different sources. In nature, when 2 signals match perfectly in frequency content, time and amplitude, with the only difference being caused by head shape, the only logical conclusion is that both sounds came from the same single source, so that is what you hear.

 

BTW, I no longer own headphones. I used to use AKG K1000 ear speakers, then some Stax, but that was when my daughter was studying. The experience was great but very different to loudspeakers.


Here’s why I’m confused and why I reference the Oozlum bird. 

40 minutes ago, STC said:

Sample (5) clearly shows that loudspeakers cannot produce the image shift based on timing, because the ears receive two timing differences, which causes the phantom image to be smeared.

 That’s what you just wrote

 

Now let's look at what you wrote previously.

 

4 hours ago, STC said:

5) This is the most important one. Here you can see why stereo replay over loudspeakers cannot retrieve all the information, due to crosstalk. Listen with headphones and through loudspeakers. This is the mono track (2) with identical copies of the original right channel in both channels. The only difference is that a 250 microsecond delay was added to the right channel, which shifts the position of the "Shhhhhss" to the left and can be heard clearly with headphones, but it sounds almost centred in loudspeaker playback.

Let me summarise

Timing can’t shift an image

Timing does shift an image 

If your point was that you have to cut out crosstalk, as headphones do, for timing to shift an image, then that's a good point. BUT, I've never written that phase or timing ALONE will alter the location of an image. Amplitude differences are an essential part of the brain's ability to detect location. In fact, frequency content, amplitude and timing are all critical elements.

BTW, a 250ms delay is massive in terms of distance, so a bit of an out of context example. 

 

Both phase and frequency aspects are more centred around the brain’s evaluation of the signal to confirm whether its a single source or not. Amplitude is related to location. If frequency and phase relations don’t match, the brain won’t combine the signals and you will hear 2 separate locations.  If phase and frequency do match then the chances are very high that in nature we are dealing with a single point source, so the 2 signals are combined, using the amplitude shift to give location. 
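For reference, here is a minimal sketch of the kind of test described in sample (5), assuming a stereo WAV file, numpy and the soundfile package; it builds a dual-mono track from the right channel and delays one copy by roughly 250 µs (nearest whole sample), with no level change:

```python
import numpy as np
import soundfile as sf  # assumption: the pysoundfile package is installed

DELAY_S = 250e-6                                  # 250 microseconds

audio, rate = sf.read("stereo_recording.wav")     # hypothetical file name
right = audio[:, 1]

# Delay one copy of the right channel by the nearest whole sample
# (11 samples at 44.1 kHz, i.e. ~249 us).
delay_samples = int(round(DELAY_S * rate))
delayed = np.concatenate([np.zeros(delay_samples), right])[: len(right)]

# Left = undelayed copy, right = delayed copy: a pure interchannel
# time difference with identical levels.
sf.write("itd_only_test.wav", np.column_stack([right, delayed]), rate)
```

On headphones the image should pull towards the earlier (left) channel; over loudspeakers, each speaker also reaches the opposite ear, so the cue is diluted, which is exactly the point being argued over.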

40 minutes ago, STC said:

 

Who said 250ms?  

Ah-ha microsecond...I misread.....yes that delay makes sense 

 

46 minutes ago, STC said:

In the case of identical signals from each speaker but with one inverted, you do not hear silence, because the ears receive two different sounds. However, if you were to listen to one speaker at a time, both would sound exactly the same, unless you are one of the rare people who can hear absolute phase.

 

When the two sounds reach the ear at the same time, what was heard as two does not merge into one image, because the ears are still hearing two separate sounds, each providing its own ITD and ILD. However, since the sound from one speaker reaches the opposite ear with a delay of about 250 microseconds, the sounds that hit the ear at the same moment from each speaker cause confusion because their phases are different. The brain becomes confused and loses the ability to locate them. What you hear is sound that seems to come from every direction.

You know, I think we are saying more or less the same thing but misinterpreting each other's writing. It's rather a complicated subject to write clearly about. Let's agree to errr agree 😁

45 minutes ago, STC said:

 

I am afraid we are not. You are equating how human ears work with stereo production. That is a myth perpetuated by audiophiles and also by some audio manufacturers. Stereo is nothing more than two sounds and has no relation to our hearing. If it were true that stereo contains all the phase, amplitude and timing information and that it could be accurately reproduced at the ears, then you would hear natural 3D sound, like a recording made with you wearing binaural microphones. Even that would not be accurate, because you would have to place the microphones exactly where your eardrums are.

Actually STC, you’re correct. We don’t share the same view at all. Its just that i’m getting bored with these non-sensical arguments and thought I’d be pleasant about it.  I’m not equating how human ears work with stereo production. I’m stating how stereo reproduction fools the ears into hearing something that isn’t there. Stereo reproduction is designed to fool the ears, not be like them.  Stereo done right works by producing 2 signals that the ears treat like they treat every other signal in nature. Its not more complicated, nor simpler than that.  The difference is that in nature every signal originates from a single points source, whereas in stereo, the signal originates from 2 closely matched sources. Which brings me right back to where I started and where I now intend to stop. 

40 minutes ago, STC said:

 

There you go again. A sound from a speaker is a sound like any other sound you hear, whether it comes from a bird's tweet, a drum, a car, a horn, a siren, a guitar or anything else. Even an electric guitar played live is a sound just like the same guitar coming out of a speaker. The ears' function is to detect sound, any sound, and the HRTF works the same way whenever any sound reaches the ears.

 

Whenever you keep on saying "in nature" you are giving the impression that the sound from the speakers is not. It is. There is no distinction in the sound we perceive, irrespective of where it originates. It is absurd to think that just because you hear a single sound from stereo speakers, stereo somehow manages to defy human hearing. A simple acoustic re-recording of a stereo playback with a 17 cm AB microphone pair would provide the evidence. I have provided mine.

 

 

I’ll have one more go at this....I can’t resist 

 

A sound from a speaker is like any other sound you hear. True...absolutely true

 

A SPEAKER....singular 

 

But in stereo the sound is split and THE SAME sound is produced by a second speaker, which NEVER happens in nature. Now I have 2 simultaneous signals reaching my ears, where the amplitude between the 2 can be manipulated.

 

If all I had was a single loudspeaker, it would send its signal to both ears; the ears would detect the difference between the 2 amplitudes caused by the diameter of the head and the different distances travelled, and the brain would locate ALL the sounds as coming from the one place where that loudspeaker is standing.

 

In stereo, with 2 loudspeakers, we set out to fool the brain. Instead of a single-sourced signal reaching both ears L & R, with a difference in amplitude that corresponds to the geometry between loudspeaker and head, we instead send a signal to the left ear and another, separate signal to the right. Our ears hear these 2 signals and find that they match each other perfectly, other than some subtle changes in amplitude and phase. Because WE have generated those 2 signals, we can manipulate the relative amplitude of all the signals' elements, such that the brain assigns different locations to each of the elements: the so-called soundstage.
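As an aside, here is a minimal sketch of one common way that amplitude manipulation is expressed in practice: a constant-power pan law plus the stereophonic tangent law, which estimates where the phantom image should appear for a given level split (the ±30 degree speaker angle is an assumption, not something from this thread):

```python
import math

SPEAKER_ANGLE_DEG = 30.0     # assumption: speakers at +/-30 degrees

def pan_gains(position: float) -> tuple[float, float]:
    """Constant-power pan: position = -1 (hard left) ... +1 (hard right)."""
    p = (position + 1) * math.pi / 4          # map to 0 ... pi/2
    return math.cos(p), math.sin(p)           # (left gain, right gain)

def perceived_azimuth_deg(g_left: float, g_right: float) -> float:
    """Tangent-law estimate of the phantom image angle (negative = left)."""
    theta0 = math.radians(SPEAKER_ANGLE_DEG)
    ratio = (g_left - g_right) / (g_left + g_right)
    return -math.degrees(math.atan(ratio * math.tan(theta0)))

if __name__ == "__main__":
    for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
        gl, gr = pan_gains(pos)
        print(f"pan {pos:+.1f}  L={gl:.2f} R={gr:.2f}  "
              f"image at {perceived_azimuth_deg(gl, gr):+5.1f} deg")
```

It only models the idealised listener at the apex of the triangle; move off-centre and the assumptions behind the tangent law no longer hold, which is the limitation STC raised earlier in the thread.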

 

Now whether you think this works or not is probably down to your system and your experience, but in my system the soundstage can be huge, highly focussed and highly specific, so I would report that in a well sorted system, which mine is, the stereo illusion works pretty much perfectly and you hear not a single trace of the individual loudspeakers.

 

 

 

 

 

1 hour ago, STC said:

 

Ah finally we are on the same wavelength. But two questions. 

 

1) what do you mean by "we send one signal to the left ear and another separate signal to the right ear"?

 

2) how does the sound emerging from one speaker discipline itself so that it doesn't travel beyond the intended ear? Short of headphones, that's not possible. The ears will always perceive two sets of signals for one phantom image.

Oh, that’s good!

 

What I mean by we send one signal to the left ear and 1 signal to the right ear is that we have 2 loudspeakers, one carrying the left signal intended for the left ear and the other carrying the right signal and intended for the right ear. 

Now obviously the signal intended for the left ear is also going to reach the right ear, albeit a little later, and here’s where psychoacoustics, the brain, helps out.

 

The ears will always receive (not perceive!!!) two sets of signals for one phantom image. But the psycho-acoustic phenomenon known as the precedence effect, or law of the first wavefront, works as follows:

When a sound is followed by another sound separated by a sufficiently short delay, listeners perceive a single auditory event whose spatial location is dominated by the first arriving sound. And that's why, when you record the signal reaching the ears (the signal you receive), it sounds different on replay from the signal you actually perceive.

 

So, in summary, there's a difference between what you receive and what you perceive, thanks to psycho-acoustics. When you record what you receive and replay it, there are no psycho-acoustics, so for a recording, what you receive is also what you perceive, whereas when you hear the signal yourself, what you receive is NOT what you perceive.
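For anyone who wants to hear the precedence effect for themselves, here is a minimal sketch that generates the standard lead/lag test stimulus: a click in the left channel followed by an identical, equally loud click in the right channel after a short lag (numpy and the soundfile package assumed; file names and the click shape are arbitrary):

```python
import numpy as np
import soundfile as sf  # assumption: the pysoundfile package is installed

RATE = 44100
click = np.zeros(RATE)                 # one second of silence...
click[1000:1050] = 0.8                 # ...containing a short rectangular click

def lead_lag_pair(lag_ms: float) -> np.ndarray:
    """Left = leading click, right = identical click delayed by lag_ms."""
    lag = int(round(lag_ms * 1e-3 * RATE))
    return np.column_stack([click, np.roll(click, lag)])

# For lags of very roughly 1-30 ms the two clicks fuse into a single event
# heard at (or near) the leading loudspeaker; much longer lags are heard
# as a separate echo instead.
for ms in (1, 5, 10, 30, 80):
    sf.write(f"precedence_{ms}ms.wav", lead_lag_pair(ms), RATE)
```

The exact fusion window depends on the signal and the listener, so treat the 1-30 ms range as the commonly quoted ballpark rather than a hard rule.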


I should also add that this psycho-acoustic ability to identify and favour the first arriving wavefront is quite delicate, and any impactful set-up shortcomings will send you back to hearing 2 speakers. Something as seemingly trivial as a large piece of furniture in close proximity to a loudspeaker can cause enough diffraction to unbalance the 2 signals, in which case the brain treats them separately. Bad contacts can similarly cause problems, in fact anything that could cause a channel imbalance of some kind, so there are plenty of possible causes, which is what makes it sensitive.

And Frank is quite correct when he says that it's like a switch. Somewhere in those synapses the signals are either routed one way or the other (either/or), so a switch would be appropriate. And that's why you always hear only one or the other, not both. They don't run in parallel; they run singly and selectively according to their defined route through the brain. The switch must be autonomous and is probably conditional on whether the 2 ear signals match well enough to be combined. When they do, you hear the combined signal with all the spatial information, whereas when they don't, you'll hear your 2 loudspeakers as sources.

 

One interesting little trinket was the idea that out-of-phase signals confuse the brain significantly enough to make it present this overall amorphous, everywhere-yet-nowhere, undifferentiated soundstage.

10 hours ago, STC said:

 

Yes. I use the precedence effect all the time, from 50 microseconds to 100 ms. Ambiophonics is all about understanding the precedence effect, and a clear understanding of it is required for effective crosstalk cancellation and for the exact values needed to create the virtual concert hall reverberation.

 

But it is a fallacy to think you don't perceive the second sound. Sample (5) clearly shows that the image no longer shifts in stereo playback, because our ears receive two sets of ITD cues, unlike when listening with headphones. Either you can explain (5) or you cannot.

 

 

Hey STC, this discussion could still finish up with us both agreeing.  

 

Regarding example 5, here’s what I’m GUESSING is going on, based on physics and logic. 

 

With headphones the actual sound sources are clamped to each ear, so the distance involved from sound source to eardrum is a couple of centimetres, and the distance is fixed,  identical L&R and with no external reflections or diffraction.

 

So; 

sound travels at approximately 343 metres per second

250 microseconds (µs) is 0.000250 seconds

in 250 µs sound therefore travels 34,300 cm/s × 0.00025 s ≈ 8.6 cm, approximately the width of the back of your hand
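Put as a quick sketch (same 343 m/s assumption; the 1 ms line is added only because the 1-40 ms precedence window comes up just below):

```python
SPEED_OF_SOUND_M_S = 343.0   # m/s in air at ~20 degrees C

def delay_to_path_cm(delay_us: float) -> float:
    """Distance sound travels during a given delay, in centimetres."""
    return SPEED_OF_SOUND_M_S * delay_us * 1e-6 * 100.0

print(delay_to_path_cm(250))    # ~8.6 cm: interchannel delay in sample (5)
print(delay_to_path_cm(1000))   # ~34 cm: lower edge of the precedence window
```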

 

Loudspeakers are anything from 200 to 400 cm (2 to 4 m) away from each ear, and the head moves quite freely between them.

 

With headphones, the delay in the signal in terms of distance more or less equals the distance between sound source and eardrum, so essentially with the delay, when the sound wave hits the left eardrum, the right signal is only just being generated at the headphone membrane, so the ear/brain has no difficulty in sensing the difference between the 2 channels and in assigning precedence to the first arriving signal. The ratio between delay and distance is ca 1:1 

 

With speakers at, say, 400 cm distant, the ratio between delay and distance is 8.6/400 ≈ 0.02:1 (roughly 1:50), and head movement, i.e. error, can be larger than the delay. Further, any minute differences in speaker position vs the ears will add to the error. As soon as the error gets even close to the delay, the effect disappears (I noted elsewhere how sensitive the precedence effect is).

 

So in essence, headphones will provide a highly controlled stable environment capable of resolving a 250us delay, whereas with speakers, the far greater latitudes of movement and therefore experimental error mean that 250us is to all intents and purposes undetectable. 

 

Is that such a problem? Not really, because the precedence effect is only considered to work between 1 and 40 ms, i.e. a minimum of 4 times longer than 250 µs. It probably doesn't work below 1 ms for exactly the above reason.

 

So to answer your question, due to their fixed nature and the much smaller distances involved between sound pressure wave source and eardrum, headphones have the ability to resolve far smaller (shorter) times delays between L&R channels than loudspeakers. 


I should summarise the above by asking whether this difference in resolution between headphones and loudspeakers (roughly 1:1 vs 1:50, i.e. well over an order of magnitude) is important. Experimentally, sure; however in the real world, the delays we are interested in, i.e. the ones that bestow location on musicians, all fall well within AN ACCURATE and WELL SET UP loudspeaker system's ability to resolve, and they are reinforced by amplitude differences. The smaller delays, i.e. the delays caused by the head when the L channel reaches the R ear, we in any case want to ignore in terms of assigning location.

 

 

 

17 hours ago, STC said:

 

I am speechless. Do you understand what psychoacoustics is? 

 

Looks like you are not interested in finding out how this is even possible. Either the loudspeaker positional information is correct and the headphone playback is wrong, or vice versa.

In an argument or debate, when one of the parties attacks the other instead of providing a counter-argument, it usually means they can offer no credible or plausible arguments such as facts, statistics, references etc. You may want to bear that in mind when posting stuff like the above.

 

Again, in a debate, when a reaction is 'over-the-top' aggressive, this is known as 'pushing someone's buttons', and very clearly I pushed yours, interestingly with some fairly basic physics, neuroscience and logic. Clearly this thread, "True to life recording? We are fooling ourselves", is designed to give you a platform to champion some concept, idea, cause or product. Does the idea of 'true 3D from standard stereo' kind of rain on that parade?

45 minutes ago, STC said:

 

I guess that’s another way of saying you do not know what psychoacoustics is.

 

 

Well i don’t have a Masters in it, if that’s what you mean. Psycho-acoustics is a very complex subject, but what it does (as it applies to this discussion) is to enable my standard, 2 channel stereo to deliver huge, room-busting 3 dimensional sound-stages from absolutely regular Quboz music streams. Literally millions of albums with complex, gorgeous acoustics made crystal clear by the brain’s ability to ignore/exclude extraneous signals that confuse and interfere with directionality. 

But that’s only one example of psycho-acoustics. If you think about the different ways you can listen to music, pick out a voice in a crowded room, pick out warning sounds on a busy street etc. you’ll find many more examples. Essentially psycho-acoustics are pretty much anything that has to do with humans’ perception of sound.  Psycho-acoustics are what gives meaning to and makes sound pressure waves reaching the ears understandable to a conscious mind. 

 

So, as we say in Yorkshire, "Let's get down to brass tacks here". What is it you are promoting or championing that requires you to discredit standard stereo's abilities? I'm curious.

52 minutes ago, STC said:

 

 

You don't see yourself contradicting what you asserted earlier?

 

No, there are all sorts of psycho-acoustic effects. We are talking about a particular one, so perhaps I should have said no "law of the first wavefront" is applied. But hey, that's what we call nitpicking.

 

52 minutes ago, STC said:

 

I am championing against audiophile snake oil and BS, and reminding new members to this wonderful hobby about the basics of stereophonics and psychoacoustics. As an example, you wrote something about headphones, head movements and room acoustics in a futile attempt to explain the tearing of paper in the money track, but you failed to ask yourself what would happen if that experiment were conducted in an anechoic chamber with the head fixed.

Aha, so that’s what it is. Got it. You’re an anti-snake oil and BS man. A sort of Audiophile evangelist. Fair enough.  Then let me give you a little advice, if I may.

Firstly politeness and consideration will get you a lot further and win you a lot more support than simple, dumb personal attacks. 

Likewise, presenting facts, figures, stats in a well constructed and logical argument instead of making personal attacks generally bestows you with a lot more credibility. 

Third, attacking what you see as the competition instead of presenting your own arguments and benefits is a surefire  way to fail. Why? Because it gives your opposite number licence to present all their arguments and demonstrate why your counter arguments  are just so much bum fluff.  

Finally, some of your arguments just border on the ridiculous. Building a wall down the middle of your face, for example, or making a recording of a recording and comparing that to human hearing, is not going to sway a lot of people. Anyway, now I've sorted out your motives and given my advice, for what it's worth, I'm really done. I'm here for fun, not self-flagellation, and that's what perpetuating this discussion is getting to feel like.

9 hours ago, fas42 said:

ST, note that Blackmorec has achieved most of the behaviour displayed by competent playback through careful choice of the hardware, and plenty of extra effort in refining its setup - exactly the philosophy I espouse. Of course he has his own slant on what is critical, in that he believes somewhat extreme fussiness in dealing with reflections is essential; as a contrast, another enthusiast who achieved truly invisible gear had the attitude that it was all about the speakers - since his interest was in the design and building of speakers it made perfect sense that he would think this way ... now, what's that story again of trying to understand what an elephant was in the dark, and depending upon where you felt, your opinion was completely different? :) ... underneath it all there is always the integrity of the elephant, and it always remains so, irrespective of what people think they've got ...

 

The simple truth is that recordings have all the information for full blown, immersive presentations to be generated - no prettying up or makeup is necessary ... just the data of what's been captured is good enough - can you handle it? :P

Actually, Frank and I are not far off agreeing on most things, because I hear this sonic picture where the loudspeakers as a source totally disappear, replaced by highly focused images of instruments playing in free, independent space, according to where they were placed by the sound engineers. I also find this ability to hear the sound differently is like a switch....it's either there or not, rather than slowly coalescing from the ether. And I realise that in order to achieve this sonic Nirvana, you have to remove a lot of the extraneous noise, distortion and inaccuracies from your system. Once you do, the sonic picture changes entirely and you get a highly focused soundstage with width, depth and height differentiation. But this illusion isn't static. It can be brought into greater focus, its timbral information can be improved, the air between instruments can be imbued with sonic character, instruments can change from being sounds in space to sounding like actual instruments playing the music or actual people singing the songs.

So is this some kind of alchemy? No, of course not. It's electronics, physics and psycho-acoustics, and is the goal of most audiophiles and audio manufacturers. Now Frank says that I'm obsessive about reflections and room treatments, but that is not entirely correct. I really only use diffusion on the wall behind the listening position, because I find it really takes away a fairly obtrusive room identity to the sound caused by delayed reflections, and leaves a very neutral space that will support rather than mask whatever acoustic is on the recording.

 

But now we get into areas where Frank and I disagree. Frank believes that once you get the illusion, suddenly nothing else, like speaker position, listener position etc., matters. The listener hears the magic no matter where they sit or stand. Now I do get this, partially, in that when I move I don't suddenly hear 2 loudspeakers as sources. Rather, sounds are still independent of the speakers....there's not suddenly 2 loudspeakers playing music, but when I move, the depth, the width, the height, the image specificity, the acoustically charged air between instruments, in other words the entire soundstage, collapses, or at least makes a lot less sense. Now for me, getting that soundstage, something that can sound altogether bigger than my room, with acoustics that are all about the venue and nothing about the room, was a labour of several decades, this despite buying equipment with no obvious shortcomings. But, and here Frank and I agree again, I did do a few simple things during those decades that enhanced my SQ far beyond the £££ paid. A dedicated earth was one, dedicated mains with really good quality cabling another, a really good rack to avoid vibration a third, improved power supplies for my network etc. And most probably soldered interconnects would be up there too, if I were prepared to sacrifice $000s and never again sell the gear I own (or sell it for 0.0x on the $) 😫

But nowhere along the way have I EVER felt that the room doesn't matter, because with or without the illusion, the room will always DOMINATE proceedings if its colorations (and it ALWAYS has colorations) are not sufficiently benign to allow the recording acoustics to dominate. Essentially you are always listening simultaneously to 2 performances: the musicians as they played on the recording, and the recording as it plays in your room. The recording contains a portion of the recording venue's acoustics, and the recording playback contains all your room's acoustics. If the latter is too intrusive, it will overpower and blot out the former. This is just logic.

In the end, this is ONLY about the soundwaves reaching our ears, nothing else. All this room treatment, speaker placement, vibration control, reworking mains supply etc etc is only so we can deliver GOOD vibes to our ears such that our Psycho-acoustic system can create a believable illusion of musicians playing instruments and singing songs. The better and more accurate the vibes, the better the illusion....also just simple logic. 

 

 

 

15 hours ago, fas42 said:

 

Thanks for that rundown, Blackmorec, it gives me an excellent handle on where you're at.

 

 

Not obsessive ... people could, would say I'm obsessive in worrying about system integrity - it's merely that one has "cornered" a key factor about what's important, and so a lot of effort is spent on dealing with that.

 

 

There's an excellent, historical reason for why I think the way I do - when it happened for me, I got the whole shebang; there was no collapse or diminution of soundstage when I moved around, none whatsoever ... I lucked on getting a 'premium' version straight off - there appeared nowhere to go, in terms of "making it better!"

 

The big drama for me was to stop the gear from losing that peak tune that made such happen - that's always been the battle.

 

 

You see, and very unfortunately ... that simple hardwiring of the whole chain could be the one last major obstacle to getting the next level of SQ. Every rig along the way that I've played with has always had that happen as the first step - because it's always transformed the playback from something that I couldn't tolerate for any length of time, to a status where most recordings could be thoroughly enjoyed at some volume level - plenty of issues still, yes, but the sense of 'musicality' was in the room.

 

 

That's the theory - but my experience is otherwise. The performance of the room is secondary, as far as the ear/brain is concerned - in the same way as if you had a live rock band - no PA - squeezed in to your living room; as compared to being set up on a sports oval, the sound of what you hear changes, but the integrity of the primary sound makers doesn't alter. In the case of a recording, the primary sound makers are the music plus the captured or manipulated acoustic; and that is what dominates - it nulls out the room contribution, subjectively.

 

 

The soundwaves reaching our ears are split, by our mind - one lot is attached to the recording event; the rest is the reaction of the room to the former. In the same way someone can talk to you at the same time as you listen to live music, and you will never mistake that as being part of the music making - so also can our minds discard what they evaluate as not being part of the primary sound of the recording.

 

The big problem normally is that the mind has to work too hard to separate these two sound streams, so it gives up - it's too much of a jumble; so then you have to apply acoustic treatments, solutions to aid the listening brain focusing on what you want as the primary stream. But, the good news is that if one achieves a high enough attenuation of distortion artifacts in the replay chain, then this is adequate for the mind to do full separation, without acoustic aids.

Well there’s one point there that I have to say has merit and that’s the removing of connectors. Not something I would ever do given that my system is a substantial investment that I like to recoup some of when I upgrade, but you are right that in total there could be some substantial degradation and therefore gains......non-gas-tight pressure contacts......especially those network plugs that i always view with dark suspicion.  
