
Bits is bits?



9 hours ago, fas42 said:

 

You're apparently talking about the beloved audiophile sweet spot, that tiny, tiny little location in the room where it "all comes together". Umm, I have never been the slightest bit interested in that, because the universal :) sweet spot is a far, far nicer thing to have - it means that I can move around anywhere in the room, and the house, and there is 100% consistency in what I hear - I don't have to be in a special spot for any "magic" to happen.

 

That microscopic audiophile sweet spot gets the job done, because you are making sure that the direct sound is so strongly 'concentrated' that the message in the recording finally makes sense - I see it as being akin to a https://en.wikipedia.org/wiki/Stereoscope, where you have to have everything in alignment for any illusion to occur ... this is just not my cup of tea; what I'm after is the sense of real music making, irrespective of how I'm listening.

 

Hi Frank,

Would you like to share the attributes of the stereo sound waves that create the illusion and don't change when you move? The problem as I see it is that in nature there's a single signal that remains constant when you move, while in stereo there are 2 signals that change independently when you move, thus changing the signal you perceive.

Link to comment
8 hours ago, fas42 said:

 

A comprehensive response! :)

 

 

What I call convincing sound produces the illusion that I have described many times, with a true mono source over stereo speakers - it happened the first time I achieved the necessary SQ, over 3 decades ago, and it has always repeated. That is, the soundstage exists in a space beyond the speakers as far back as the recording cues indicate - a 1910s Nellie Melba recording positions the piano about 12 feet behind her, say. There's no precise lateral positioning of course, but some clues give a sense of 'sideways' sound - how our brains interpret what we hear, from prior learning.

 

 

IME the illusion manifests in spite of what 'damage' has been done during the process of recording - how I interpret this is that the ear/brain wants the sound to make sense - this is something I built up a better understanding of over the years: there were recordings I thought were always going to be hopeless, just too poor in quality, archival in nature - and then I was bowled over when the replay lifted another notch, and the content snapped into shape ... I'm now very familiar with this behaviour; a marginal, historic capture initially has a somewhat cartoony quality to it, a "for interest only" presentation; and then when the SQ steps up correctly, everything changes - I am now involved with it emotionally, it's music being performed by real people.

 

 

 

 

 

 

Because the brain is interpreting what it hears, rather than merely responding to physical stimuli - the difference in measured SPL is digested by the hearing mechanism, and the result of its analysis - a completely unconscious process - is that it's hearing sounds from a real event which just happens to be conveyed to you via two separate 'openings' in an invisible barrier between yourself and the event. If too many contradictions in the sound heard add up, then the outcome is that our minds decide upon a completely different scenario: that it's merely sound being projected by a source, the loudspeaker. When people play with the 'sweet spot' that's exactly the same process in operation; move beyond the sweet spot and the whole thing falls apart - the mind switches, instantly, from one interpretation to another ... ultimately, all I'm doing is expanding the beloved audiophile's sweet spot to fill the whole room; it's an exercise in improving the illusion, and not something radically different.

Hi Frank,

I'm a lot closer to agreeing with many of your points than I am to disagreeing. I agree that when the brain receives signals in each ear that it can relate, using timing, amplitude and phase, it treats the signal as if it emanated from a single point source rather than from 2 loudspeakers, and the speakers completely disappear as point sources, leaving only a wide, deep and (in some cases) high soundscape of musicians playing in their own individual spaces. The initial sound of an instrument, be it piano, drum, guitar, flute or even voice, starts as a pinpoint of intense sound that expands to fill whatever acoustic space the engineer has allowed. In the event that 2 musicians share very similar sonic frequency spectrums (voices for example), this precise location of the start of each note makes identifying and deconvoluting the 2 separate voices very easy and relaxing, requiring no conscious effort on the part of the listener. It's this type of improvement we hear when implementing improvements in power supplies, cables, electronic circuitry etc. We hear more information, yet the information was always there, just hidden for lack of aural clues to its whereabouts. And when we don't deconvolute the 2 signals, they actually contaminate each other, leading to a certain muddiness and confusion in the sound. This can come across as harshness when it comprises a lot of high-energy, high-frequency waves.

But, and this is a big BUT, you can't claim the effect comes from the brain's ability to detect subtle, related differences in 2 signals, then ignore massive changes to those 2 signals by claiming that the brain somehow ignores those differences. It doesn't. What you hear when standing close to the speaker is nothing like what you hear when positioned exactly as the sound engineer who created the recording intended. That huge, deep, wide and high soundstage will be nowhere to be found, because the signals that created it in the first place are no longer present, replaced by something entirely different, with a completely different distribution of amplitudes and phases and utterly different timing clues. By moving you change the characteristics and relationships of the 2 signals that the brain uses to create the effect, thus the effect changes. It may not collapse back into 2 sources, but it's certainly not the same. That would be physically impossible, regardless of any psychoacoustics applied.

I use a lot of psychoacoustics in the design of my system. A smallish, reflective room with a large area of diffusion, lossy in the bass, utilising the Haas 'law of the first wavefront' to deal with reflections below the echo threshold, thus adding intensity to the performance. Some recordings literally light up the whole room, with almost perfect imaging and focus and a huge sense of something real happening. Both the speakers and the room disappear, leaving only musicians playing in their original acoustic. It's taken a lot of careful set-up and optimisation to reach that point, and here again we agree: if those same optimisations are focused on even relatively inexpensive equipment, it can sound damned good. We also agree that most recordings sound good to exceptional, some having the ability to change 'state of consciousness' due to their unusual and intense acoustics.

Where we disagree is your assertion that the same effect is available anywhere in the room and outside of the room. If this is true for you, then I can categorically state that you've yet to achieve anywhere near the pinnacle of what's available and possible when the sources of stereo signals are carefully controlled to match exactly what is on the recording. Remember, the 2 stereo signals are different and it's that difference that creates the effect you're looking for. Change your position and you change those differences and the effect they create.....that's just schoolboy physics.

Link to comment
1 hour ago, STC said:

 

Two loudspeakers are still two point sources, irrespective of how well the engineer recorded the music. It is still two point sources creating the phantom image.

 

If somehow your brain can ignore two point sources as one then the impossible magic can happen to any sound. Binaural hearing is still at work irrespective of whether the sounds are natural or reproduced.

The point is that the 2 loudspeakers are not functioning as 2 point sources. If they were, then each would be producing its own discrete, non-related signals and the brain would treat them as 2 discrete sources. The key word there is non-related. What we actually do in stereo is to take a point source signal and share it between 2 loudspeakers, such that each loudspeaker is carrying part of the signal. The signals from each loudspeaker are thereby very closely related. The relationship is a function of amplitude, phase and time, and the goal is to mimic the signal that would reach each ear if the source were a single point source rather than 2 loudspeakers.

So binaural hearing is indeed what is happening in both cases..... natural or reproduction. BUT, binaural hearing has a gatekeeper. It only works if 1. the sound is indeed natural, with only a single point source, or 2. the sound is a reproduction from 2 point sources (loudspeakers), where the relationship between the 2 signals is very similar to what would reach the ears in the case of a natural single point source.

If, due to system shortcomings, the relationship between the 2 signals becomes distorted such that the 2 signals no longer replicate what would reach the ears from a single source in nature, the brain detects 2 independent sources of sounds and that’s what you’d hear....2 loudspeakers...2 point sources, spatially differentiated.  The magic only happens when the relationship of amplitude, phase and time across the 2 loudspeaker outputs is correct. 

 

Going back to fas42's point, a natural sound always remains in the same perceived place, regardless of where we choose to stand. That's because the signal only has a single source, and although the amplitude, phase and timing change with position, the relationship between the signal reaching each ear is always correctly maintained, i.e. the only thing that changes is related to the listener's head position. When the signal has 2 sources and the position changes, that relationship is affected by both the listener's head and the changed geometry between the 2 point sources.
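If you want to put numbers on that relationship: the brain's main lateral cue is the tiny arrival-time difference between the two ears, and the textbook Woodworth spherical-head formula approximates it. Here's a rough Python sketch of mine; the head radius and angles are illustrative values, not measurements from anyone's system:

import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
HEAD_RADIUS = 0.0875     # m, a typical adult head (illustrative value)

def itd_woodworth(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a distant source,
    using the Woodworth approximation: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 30 degrees off-centre arrives ~260 microseconds earlier at the near ear.
print(f"ITD at 30 degrees: {itd_woodworth(30) * 1e6:.0f} us")
# A dead-centre source gives zero difference: the phantom-centre condition.
print(f"ITD at 0 degrees: {itd_woodworth(0) * 1e6:.0f} us")

Move your listening position and the azimuth to each speaker changes independently, so both time differences change - which is exactly the geometry point above.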

Link to comment
40 minutes ago, STC said:

Sound from any source - be it a trumpet or a speaker - will be localized to the spot it comes from. A sound is still a sound to the ears. It does not interpret them differently because they are stereo recordings.

 

[No, it interprets them the same way... It's the stereo signals that are different. That's the whole point].

 

In stereo recordings, the left and right channels are captured separately

 

[Not that I'm aware of. As far as I know a mixing desk performs that function]

 

There is no way during playback that the reproduction from the speakers will produce phase identical to what originally reached the stereo microphones

 

[They don’t have to be the same as the microphones...there only has to be a phase relationship  (match) between the 2 loudspeaker outputs in order to indicate to the brain that the signals are related and have the same source]. 

 

The only cue now perceived by us is the level difference. Although it does contain some timing differences, they can be anything from 0 to several tens of microseconds depending on the microphone configuration

 

[you’re mixing up the recorded information with the information you get in playback. They are 2 separate processes]. 

 

For example, an XY configuration will not contain any timing difference, but an AB configuration can have a 100-microsecond difference.

 

[no idea what you’re talking about here]
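(A note for anyone else following along: XY and AB are stereo microphone techniques. An XY pair is coincident - both capsules at essentially the same point, so no arrival-time difference - while an AB pair is spaced apart, so an off-axis source reaches one capsule later than the other. A rough far-field sketch of that arithmetic in Python; the spacing and angle are values I've picked for illustration:)

import math

SPEED_OF_SOUND = 343.0  # m/s

def spaced_pair_delay(spacing_m: float, angle_deg: float) -> float:
    """Arrival-time difference (s) between two spaced capsules for a distant
    source at angle_deg off-centre; extra path = spacing * sin(angle)."""
    return spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# Coincident XY pair: zero spacing, hence zero timing difference at any angle.
print(f"XY, 30 degrees off-axis: {spaced_pair_delay(0.0, 30) * 1e6:.0f} us")
# A 17 cm AB pair reaches ~100 us for a source only ~12 degrees off-centre.
print(f"AB, 12 degrees off-axis: {spaced_pair_delay(0.17, 12) * 1e6:.0f} us")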

 

In stereo reproduction, a speaker at 30 degrees to the left will still be localised by the ear/brain. 

[Then why can't I hear it?]

 

However, when another signal is reproduced by the right speaker, that too will be localized by us as coming from the right. Now both signals will have a level difference, which will produce a phantom image between the speakers. The brain will still register the original sound locations, but at the same time it will also perceive the level difference between the two speakers.

 

[The brain only knows about the signals reaching our 2 ears. Its default assumption is that both signals come from the same source and that the difference in level is down to the sound travelling around the head (which it knows) and the position of the sound source relative to the ears (which it ‘calculates’ based on differences between the 2 ears.)]

 

And lastly, there is the role of the pinna, which will localize the original sound from the respective speaker, and which cannot be eliminated.

 

[There's a lot of research on this, too complex to summarise in one sentence. The pinna basically serves to help differentiate and localise the signals reaching both ears].

 

This may not make sense to modern man, but if you started your music listening with a single speaker, then when you first listened to stereo you would have felt the stereo to be unnatural and lacking intensity. (You can get some reference from recording engineers in the early 50s when the migration from mono to stereo took place.) But as we have familiarized ourselves with stereo sound, we somehow now do not recognize the confusion caused by the 3 conflicting cues produced by stereo playback.

 

[There is no confusion....a single signal reaches each ear and the combination of the 2 creates the perception, which is what we hear, exactly as in nature. No confusion]

 

An easier way to understand this is to arrange 6 or more speakers on the stage, each producing a mono signal of one instrument. Now convert all the mono signals to stereo and play them with two speakers. A quick A/B would reveal how fake the stereo sounds when compared to each sound coming out of its own speaker. With the 6 (or more) speakers, the cues produced by the speakers are correct, where the ILD, ITD and pinna would all localize them correctly, unlike in stereo.

 

[Essentially what you are saying is that 6 sounds produced by 6 point sources will sound different to the effect created by 2 point sources with appropriate signal balancing. I'm sure that's true. The problem is that all 6 sounds become location-fixed, can only originate from a speaker, and each sound must be recorded discretely. Not ideal for recording and selling music, so not a viable solution to the problem, rather just a different way of producing a different sound]

 

There is also a visual aid involved here, but that can be eliminated by closing the eyes.

 

[Visual is not an aid, it's a contradiction that should be eliminated by closing the eyes]

 

 

Wow, that's a complex reply, a lot of which I would argue with.

My replies are in [square brackets]. I hope it's comprehensible. If anyone can be bothered: is there an easy way to divide up a message like this into lots of quotes? Remember, I said easy!

 

Link to comment
7 hours ago, STC said:

 

Ears hear sound. Each ear receives one soundwave. When the sound source is off centre (laterally or horizontally), the signals reaching the left and right ears are distinguished by level, timing and changes in frequency content, which provide the cues to the location. This is actually a learned skill, and some are better at localization than others.

No, it's not a learned skill or something you can teach or train...you can only refine it, like the sense of smell or sense of taste. It's an innate survival mechanism. Without it mankind could not have hunted, and would themselves have been hunted to extinction.

7 hours ago, STC said:

Stereo signals are just two signals produced to trick the brain into producing the phantom image, which in reality does not exist, as sound cannot emerge from a space without vibration.

Stereo signals are something more special. It's not just 2 signals...it's 2 signals with a relationship to one another. A single sound from a single source is artificially split and played from 2 sources in order to mimic the sound that would otherwise reach each ear from a single source. With a single source, the sound reaching each ear would have a different amplitude and phase based on the different distances travelled. Stereo seeks to mimic that difference by providing the 2 signals with the correct amplitude differences, such that the brain is tricked into thinking it's still hearing a single source. Of course the phantom image does not exist in space...it exists in your brain and is therefore what you hear, because you only hear what the brain produces after it's processed the signals....you don't hear the pre-processed signals, only the post-processed result.

7 hours ago, STC said:

 

There is nothing special about it, as you can take one mono signal and reproduce the same on the other channel, where you can now arbitrarily place the sound to be coming from anywhere between the two speakers or even outside the two speakers. Just record a mono recording of someone's voice and by playing with the level difference you can make it seem as if the person is moving in the stereo recording. Any recording engineer or someone with a DAW could easily do it, and they have been doing it. Play with phase and you can now move it outside the speakers.

Nothing different to what I've been saying. You take a signal, divide it and you can pan its position anywhere between the 2 speakers. However, distort or delay one of the signals just slightly and you'll hear 2 separate sources....L & R....no phantom image.

7 hours ago, STC said:

I meant that with a stereo microphone the left and right channels each capture their own unique sound waves. A reproduction of the two signals is required to recreate the sound field.

Ah-ha

7 hours ago, STC said:

How do you think the phase relationship can exist during playback? A slight shift in your position would have a different phase of the soundwave hitting the respective ear drums. Draw a chart with a 2kHz sinewave and see how much they change with a slight shift of the head or the speakers' position. There is nothing you can do to ensure the exact phase relationship exists during playback, as soundwaves do not travel linearly.

Of course you'd have a different phase reaching each ear, and of course a slight shift of the head would change what you hear. That's the very point! In nature when a sound wave is generated it's a wave, with 360 degrees of phase. At any point when that wave reaches something like your ear it's going to have a phase of X degrees. When it reaches your second ear it's going to have a different phase, according to the extra distance it's had to travel around your head. The shift in phase will vary according to the size of your head and the direction from which the sound came. BUT, BUT the 2 phases are related...in essence it's a single wave detected at 2 points, so the phase detected will be a function of the extra distance travelled. Your brain's firmware is well aware of your head and pinna shape, so it uses the 2 signals with their different phase to calculate the origin of the sound.
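To put your 2kHz example into numbers: a fixed time delay maps to a phase shift that grows with frequency, delta_phi = 360 * f * delta_t. A quick sketch; the 5 cm head shift is my own illustrative figure:

SPEED_OF_SOUND = 343.0  # m/s

def phase_shift_deg(freq_hz: float, delay_s: float) -> float:
    """Phase difference (degrees) that a time delay produces at a given frequency."""
    return 360.0 * freq_hz * delay_s

# Shift your head 5 cm closer to one speaker: that path shortens by 0.05 m.
delay = 0.05 / SPEED_OF_SOUND   # ~146 microseconds
print(f"2 kHz:  {phase_shift_deg(2000, delay):.0f} degrees")  # ~105 degrees
print(f"200 Hz: {phase_shift_deg(200, delay):.0f} degrees")   # ~10 degrees

So yes, the absolute phase at each ear swings wildly with position at high frequencies; the point is that it's the relationship between the two ears' signals that the brain evaluates.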

7 hours ago, STC said:

How? Please explain. 

When you listen to a recording you are hearing the sum of 2 things: 1. what's on the recording and 2. what the replay mechanism further adds to that sound. The goal is to minimise 2, but a recording must be played in a venue of some sort, and that venue has a signature, just like the recording venue has a signature. In order to hear the recording venue's signature, the signature of the replay venue needs to be as neutral and benign as possible. You seem to be mixing up characteristics of the recording with characteristics of the replay mechanism.

7 hours ago, STC said:

This is important because this will explain my other paragraphs. 

It may be important, but I don't at all understand what you've written, so if it really is important please find a way to express it in a way my simple intellect can grasp. Thanks.

7 hours ago, STC said:

You are hearing it. That's why, despite a stereo sound having a soundstage, we always still know that it's not natural. For a sound to be natural, all the cues must correspond to the cues that would occur in nature. The closer you reproduce them the more natural it becomes. However, a mono vocal or instrument over a single speaker can be very hard to distinguish from the original.

The reason a mono sounds very like the original is because the original is a mono sound. All sounds in nature are mono....point sources. But they also have another characteristic: location, and that's a trick that mono can't pull off. Its position remains static. It can't move unless the thing that generates the sound moves.

 

The fact that the ear receives something doesn't mean that you hear it. Hearing something is essentially an act of consciousness....if a sound doesn't make it to the consciousness, you don't hear it. If the brain takes 2 signals, one from each ear, combines them and makes the result conscious, you hear only the combination and not the original 2 signals. So there aren't 3 signals (original L & R plus processed L & R); the brain only makes conscious the processed signal.

 

In nature, we take the signal reaching each ear, process it and hear a sound with direction. 

In stereo, we manipulate what is essentially a mono signal with no location, by splitting it across 2 sources in order to provide the missing directionality. In order for the brain not to detect 2 discrete signals we have to ensure that the proper relation exists between the 2 signals, to make sure the brain detects what it believes to be a single sound.

 

Think about this. When a sound is produced in nature it does not include any positional information. The positional information is added by you and your entire hearing mechanism, including head, pinna, ears etc. What we then hear is the position of the sound source relative to us.

All stereo does is seek to emulate the natural sound AS IT REACHES OUR EARS - not the sound as it's produced, which has no positional information, but the sound as the signal is divided and enters our ears, which has had positional information added. If you get that, you get the whole thing.

 


Link to comment
12 hours ago, fas42 said:

 

Not in a position right now to reply to all said since last posting - I will just state now that subjectively what is experienced is precisely what you would hear if the real performance was occurring beyond the speakers, and you moved around in the vicinity of that, including going outside the room. Precision in imaging is a minor element of that, a curiosity, and one which is far less interesting than the sense of engagement with the music making.

Let’s for a moment discuss the difference between said live performance and the stereo reproduction. 

In the live performance you’d have sound waves originating from a single point source and impinging on both ears. The difference in the signal reaching each ear would be a function of source position, your position and head and pinna shape and size. 

In stereo, there isn't a single source of sound, there are 2, so the sound you hear would be a function of source position 1 (S1), source position 2 (S2), your position vs S1, your position vs S2 and your head and pinna shape. In stereo, the sounds are balanced by the recording engineer such that when you are sitting exactly midway between the 2 sources, the 2 sources balance out to produce the same amplitudes as those coming from a single source in its desired position. In any other position, the stereo sounds will lose their relationship and sound different.

Link to comment
13 minutes ago, STC said:

 

It is a learned skill, like walking. http://www.jneurosci.org/content/25/22/5413

https://scholarworks.wmich.edu/cgi/viewcontent.cgi?article=3912&context=masters_theses

 

Actually what you posted was a scientific paper on how hearing has to be retaught after the ears are lost. That's like saying that you have to learn to walk because someone with prosthetic limbs needs to learn to walk. Walking is not a learned skill. It's a natural attribute that every single able-bodied human can do without a single minute's instruction or training.

Running isn't a learned skill either, despite all the world's best athletes having running coaches. Of course every skill, learned or natural, can be refined or honed, hence people like perfumers and genuine wine experts have refined their natural abilities. But everything about listening to audio is based on a natural, intrinsic ability. This, like anything else, can be refined, but we don't need to be audiophiles to hear the full effect of stereo, since it's based on an innate ability that everyone with well-functioning hearing has.


13 minutes ago, STC said:

 

They are just two signals. A real stereo recording can be more accurate, but as far as stereo reproduction is concerned, you can take a mono recording and duplicate it (an exact copy) as two channels. When you reduce the level in one channel the sound moves towards the louder loudspeaker. There is no need to split the signal to create the stereo effect.

This is just semantics. Whether you duplicate or split, the resulting 2 copies have a direct relationship to the original, which is the point. 

13 minutes ago, STC said:

 

There is no need to divide to create the phantom image from a mono recording. In fact, most studio recordings are made from mono tracks, artificially placed.

If you don't divide (or duplicate) the signal into the second channel, you'll have a mono signal playing in one channel, either hard left or hard right. Split or duplicate, then adjust the amplitude between the 2 signals, and you can place the sound wherever you want between the 2 speakers. 'Artificially placed' essentially means part of the signal is played from one channel and the rest from the other....in other words a divided signal.
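In DAW terms, that 'divide and adjust' is a pan law. A minimal sketch of the common constant-power (sin/cos) variety - my own illustration, not a claim about any particular desk or plugin:

import math

def constant_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Place a mono sample between two speakers.
    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    sin/cos gains keep perceived loudness roughly constant across positions."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

print(constant_power_pan(1.0, 0.0))   # centre: ~0.707 in each channel
print(constant_power_pan(1.0, -1.0))  # hard left: (1.0, 0.0)
print(constant_power_pan(1.0, 0.5))   # partway right of centre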

13 minutes ago, STC said:

 

Delaying one signal also alters the position. That's what I am doing with crosstalk. I can tell exactly how much delay is required to shift the phantom image.

Of course it does. Sound takes time to travel and phase = time. Time is proportional to distance, therefore phase is proportional to distance (along with wavelength), and distance is what gives anything position relative to something else.
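A quick back-of-envelope for that, using an invented half-millisecond delay:

SPEED_OF_SOUND = 343.0  # m/s

# A delay in one channel is equivalent to moving that speaker further away:
delay_s = 0.0005                        # 0.5 ms, an illustrative value
extra_path = SPEED_OF_SOUND * delay_s   # ~0.17 m
print(f"0.5 ms of delay ~= {extra_path * 100:.0f} cm of extra path")
# The phantom image shifts toward the undelayed (effectively nearer) speaker.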

13 minutes ago, STC said:

Stereo is just a method to trick the brain into reconstructing the positional space. If a stereo recording is all about phase, as you alleged, then it should also be able to produce sound from above and behind you, since you are saying that a microphone will capture phase and amplitude accurately. Unfortunately this is not true, and the notion is fueled by audiophiles and high-end manufacturers.

Do you live anywhere near the North East of England?  If you do I’d consider inviting you round for a listening session. 

Sit down, close your eyes and listen. What you'll hear has nothing to do with the room (at least nothing that you can detect) or the speaker positions, which you will also not be able to detect at all, and everything to do with the recording and either its recording venue or the soundstage that the recording engineer created. When asked where certain sounds originate you'll point to extreme positions way away from the boundaries designated by the speaker/listener triangle. With some recordings, which I'll happily post, you'll end up pointing to a spot somewhere beyond the ceiling.

 

 

Link to comment
29 minutes ago, STC said:

 

I have the Chesky binaural recording but I couldn't find any track named "The Storm". Height can be perceived due to suggestive knowledge for HF, especially if your tweeters are way above your ears. Technically, stereo does not have height information, but it is often perceived, especially with sounds like birds tweeting or bees buzzing.

 

QSound is OT. I think most of the posts here are OT too, and not a bit on bits. Better I stop here.

I can play you a track or two of electronic/acoustic music that features rising frequencies that start bottom left and trace a path across the soundstage to end above ceiling height right, then reverse. 

I can play you tracks where some sort of electronic jingling/shimmering bells cross the soundstage describing a large floor-to-ceiling wave pattern. These are essentially normal Red Book files streamed from Qobuz, so nothing particularly special, other than the sound engineering, which is SoTA.

Link to comment
21 minutes ago, mansr said:

Care to mention which tracks, specifically?

Sure, as I come across them I'll post a few album/track IDs that anyone can access on Qobuz, with examples of various types of soundstage and recordings.

BTW, if you want some perfectly lovely classical music, beautifully selected to meet a fairly broad range of tastes and generally extremely well performed, you can't beat Radio Swiss Classic at 128kbps (I'm not taking the piss here AT ALL, btw; this is a serious recommendation). They play lovely music, spend minimal time talking, with only music introductions, and the replay standard is well high enough to thoroughly enjoy, with lovely, if basic, sound staging and enough intensity, colour and vibrancy to bring the music to life. And the announcement voices are a great tool to check the performance of your system.....things need to be mostly right to get good sound from 128kbps, and the voices will immediately reveal when something isn't, which is BTW an excellent tool for monitoring the quality of your network, as any shortcomings will show up.

 

I won't forget the tracks. I have my notebook beside my listening chair. I won't just post 'stunts and tricks'; I'll post albums with outstanding and exciting soundstage content, where the presentation is an essential part of the music, and let you guys decide who can hear what. And try my recommendation of RSC...it's a goodun.

Link to comment

OK....let's start with a classic......Mike Oldfield's Tubular Bells 2003...I stream mine from Qobuz in CD resolution.

 

Right from the get-go this album should be lighting up your room, and 'Introduction' will do just that. With massive vibrancy and great musical power it should sound hugely energetic and exciting. The soundstage is nicely balanced with excellent width and depth. Tonal colours are accurate and dense. Not so much height ceiling-wise, but the soundstage is nicely layered, with some instruments set higher than others.

Play the whole album cos it's all good, but I'll highlight a few tracks.

Next try Basses.  This track will show you how well your system weaves rhythms. 

Wild, exciting, exuberant, intense introduction...some lovely mid-bass power and rhythmic involvement of the highest order. There's a purposeful lack of focus in the soundstage that creates a sort of cloud of sound, which then resolves into crystal-clear and highly focussed guitar which serves beautifully to highlight your system's treble abilities, transparency and ability to handle high levels of high-frequency energy. There're lovely gentle guitar notes with oodles of timbre, demonstrating your system's ability to portray warmth.

 

Thrash....the strummed guitar is brilliant, a lovely driving rhythm like a warm, stiff breeze.

 

Russian...and more lovely acoustic guitar; full, warm, centre stage then we’re offffff

Listen to that soundstage.....full of atmosphere, the room full of deep rolling bass played low with other instruments layered above. Instruments are introduced stage left, then gradually take their place in the soundstage. The music energises the room.

 

Caveman - a lovely bit of fun....lovely solid beat and an entertaining soundstage with various characters popping up everywhere.....then it gets serious....speed, dynamics, pace, rhythm, timing....if this doesn't get you moving, book a medical.

 

Ambient guitars - possibly some of the nicest ambient music you'll ever hear, because there's simply oodles of emotion and aural beauty...and beautiful use of the huge spaces offered by the soundstage.

 

Finally Sailor’s Hornpipe 2003....nothing more to say!

 

So there you are. The above album contains tremendous soundstage width and depth and demonstrates that height differentiation in a recording is a system/recording attribute (not all systems will resolve height...it's one of the last things to appear during system optimisation). All in all a very nice album with a tremendous presentation that is key when listening to the music. I will however post a couple of albums that have considerably more height content, whenever I come across them.

Link to comment

OK, I had a hunch I remembered which album I was ‘quoting’ and I found it first try. 

Again Qobuz in standard resolution 

 

Shpongle - Tales of the Inexpressible 

So if you want a soundstage that fills your room, goes floor to ceiling and beyond and has electronic generated tones that do the floor-to-ceiling thing, here it is. This album is about music,  but a lot of the artistry lies in its presentation. Soundstages are massive, instruments and tones highly agile and there are sections of this album where the acoustics are so different and altered they’ll feel like they’re altering your consciousness.  A good percentage of this album happens at ceiling height. 

 

I have a suspicion that the ability to hear height involves the pinna of the ear as much as the stereo system, as it helps to rest your head against the chair's headrest when listening, which has the effect of tipping your head back slightly and really allows you to hear the height element perfectly. It isn't however about tipping the head, as only part of the music happens at ceiling height...the rest is divided between floor and ceiling. All tipping your head back very slightly does is allow you to differentiate, and therefore hear, the height element much more clearly.

 

 

Link to comment
5 hours ago, STC said:

 

What is the major difference (if any) between a point source sound in nature and a point source sound from ONE loudspeaker? What mechanism do you think is capable of telling the brain this sound is from a speaker and therefore not to localize it? :)

Now there’s a good question!

In nature a sound is emitted as spherical sound waves which travel away from their source. When they reach you, depending on your position, the sound wave will enter both ears. Again depending on your position, the sound-wave will have to travel further to enter one ear vs the other, and when it does enter the ears it will have a different amplitude and a different phase...one will also be very slightly delayed. This difference between the 2 signals is computed by the brain, which combines the 2 signals into a single signal with direction. You may then turn your head in the direction of the signal.....guided by the fact that as you turn your head the signals reaching each ear become equal. At that point your eyes are pointing directly at the sound source, allowing you to identify what it is and where it may be. Survival capability....purely instinctual. It happens fast and automatically, so it's part of the autonomic nervous system. This is not a learned skill, and the best you can do is refine your capabilities, like a tightrope walker trains his balance.

 

When you are in your listening room listening to one loudspeaker, let's call it Ch1, the exact same thing happens. The signal from Ch1 reaches both ears - call these Ch1.1 and Ch1.2 - and your brain hears a single source with direction.

 

Now switch on your second loudspeaker, Ch2....essentially what you've just done is to replace Ch1.2 with Ch2. So now the 2 signals reaching your ears are Ch1 and Ch2.

 

The brain can't localise a speaker because it's not getting a single speaker signal to both ears. Your second ear's signal is swamped by the 2nd speaker, Ch2, and vice versa. So the brain still has 2 signals reaching each ear, so what does it do? Well, assuming that the relationship between the 2 signals is correct in terms of timing, phase, amplitude and frequency content, it processes everything normally, as if it were sound-waves from point sources, so now you hear the musicians with their spatial location.....business as usual for the brain. But for this trick to work, as I've said dozens of times, the 2 signals in each ear must relate properly. If the brain can't find that relationship it assumes that the signals have different sources and will then process them as such, presenting you with 2 loudspeakers and their locality.

This detection of location is not just a matter of amplitude and nothing else. If it were, crosstalk between the 2 ears would become problematic. Fortunately your brain has algorithms that will identify certain signals and ignore others....meaning you can still pick out directionality in the presence of a lot of interfering noises, either autonomously or simply by focusing your conscious attention on what you want to hear and ignoring the rest. Again, survival.
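To make the Ch1/Ch2 picture concrete, here's a toy steady-state model of both speakers summing at both ears. The crosstalk figure is a number I've invented for illustration; real head shadowing is frequency-dependent and adds a small delay:

CROSSTALK_GAIN = 0.7  # far-side speaker level relative to near side (assumed)

def signals_at_ears(left_feed: float, right_feed: float) -> tuple[float, float]:
    """Each ear hears BOTH speakers: its near speaker at full level, plus
    the far speaker attenuated (and, in reality, slightly delayed)."""
    left_ear = left_feed + CROSSTALK_GAIN * right_feed
    right_ear = right_feed + CROSSTALK_GAIN * left_feed
    return left_ear, right_ear

print(signals_at_ears(1.0, 1.0))  # equal feeds -> equal ears -> centred image
print(signals_at_ears(1.0, 0.5))  # louder left feed -> image pulls left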

Link to comment
6 hours ago, fas42 said:

 

And it's frustrating for me too when I don't have something right there, right now, to show! So far I've found the typical audiophile hardest to get on board, because they're usually looking for the wrong things - the spouses are the ones who clap, because they appreciate the lack of distortion in what they're hearing. At the moment the aim is to get an easily transportable rig, which is so simple that it's relatively easy to thoroughly 'debug', and to make robust against the electrical environment it may encounter - i.e., the results as posted in the Simple Media Server thread I started.

 

Your remark about which part I soldered again shows you misunderstand my approach - nothing I do has the smell of a ritual behaviour; they are all the results of finding where something is weakly implemented, to the point where it has an audible impact ... I have repeated over and over again that I always hardwire all the links in the playback chain - that's a perfect example of a soldering that I find critical; but you choose to disregard this, because it doesn't suit you to have to deal with this requirement.

I think the main reason people ignore your advice to solder everything is that it would totally destroy the resale value and integrity of their electronics and cables. This technique of soldering cables, or doing without cables altogether, is implemented throughout my system by the manufacturers.....(Innuos, Devialet, Magico, Sean Jacobs). But I'd never dream of opening up their boxes and destroying several expensive cables in order to solder everything together. I guess that's why kit needs to be old and cheap....that way you squeeze out its maximum value for money; no question.

Link to comment
1 hour ago, STC said:

Crosstalk has been problematic since the invention of stereo. It was not so evident till the 70s because other errors were far greater than crosstalk. At the same time, we have managed to adapt, accepting the different sound of loudspeaker stereo reproduction as similar to real sound. It is not. Otherwise, the sound of loudspeaker replay captured by microphones close to your ears would sound almost the same (taking into account HF loss due to distance). However, if you were to capture a single speaker's sound, it would sound alike (almost) with headphones. HRTF is always at work and does not discriminate between natural and reproduced sound. For the ears they are all still sound. Unless you have references in support, there is nothing to add to these OT posts.

 

A microphone is a simple transducer that converts sound pressure waves into an electrical signal.

Ears are a whole different thing entirely. They have sound-shaping, direction-finding pinnae, and more importantly they have a human brain that processes the entire signal before making it 'conscious'.

 

When I listen to my stereo system, I hear focused images of instruments and voices that sound as though they came from a single point of origin, have exactly the timbre I expect from the instrument, have a certain ‘humanness’ (breath sounds, wet-mouth sounds,  fingers scraping along metal strings etc.), a dynamic note shape that starts with a pinpoint source and expands into its own acoustic space, an interplay with other instruments that can be utterly magical and an alluring beauty that soothes, excites, generates joy, all contained in a beautiful acoustic that’s as large or small as the recording engineer elected to make it. More than that I don’t really need.

 

And given that this is a hi-fi forum where ideas are exchanged, I don't feel very motivated to dig out references....of which there are plenty, all rigorously scientific and therefore requiring a lot of energy and concentration to fully absorb. I did that for a living....I do this for enjoyment.

Link to comment
11 minutes ago, Summit said:

The ability to reproduce a realistic sound stage and all the other SQ aspects associated with the sound of a live concert depends on the fulfilment of three general conditions. All need to be really good for us to get the sensation that we are hearing music that sounds like it's being played on a stage in front of us. The reproduction is never 100% like IRL, though, but if all three conditions are accomplished we can get pretty close.

 

  1. The record. If the sound stage is small, the ambience over-damped, or there are any other limitations, the recording will sound like that. That includes the height. A good and accurate recording of the event is therefore paramount.
  2. The listening room. A big room with a high ceiling and good acoustics, where you can set up the speakers further away (everything else held equal), will present a better sound with a bigger sound stage and with more air between the musicians.
  3. The audio system. Well-matched gear of good SQ will reproduce a more realistic sound stage, deep bass and all the other aspects related to the sound of a live concert better and more lifelike than a stereo setup with less good or poorly matched gear. The audio gear should of course also match the size and the acoustics of the room.

 

So yes, the height of the sound-stage and the ability to reproduce that and many other sound aspects realistically depend on the record, the listening room and your audio system. If, OTOH, one of the above is not fulfilled, the sound will not come even close to a realistic sound-stage.

I don't necessarily agree with point 2. I used to think that a large room was desirable, but then I learned that a large room has challenges, just like a small room. The major advantage of a big room is that you can position speakers in free space and there's a lot of flexibility to find the ideal listening position. The disadvantage is that it can be slightly echoey, with its own acoustic that gets superimposed on the recording's acoustic. Big rooms also require much larger speakers to fully energise. Large rooms can support deeper bass, but that can also be problematic if dimensions cause some major bass resonances (standing waves). Finally, large rooms can typically accommodate more people sitting in a reasonable position. In a large room it's also easier to integrate (hide) acoustic treatments, but correspondingly more expensive to treat.

 

A small room has its many challenges too. It needs a good source of diffusion so you don't get a lot of energetic sound waves bouncing in the same direction. It needs to have an optimum RT (reverb time) and it needs well-matched speakers that don't cause bass problems. Also, the need to position speakers in free space doesn't go away, so there are far more limitations on speaker positioning and listening position, which in a small room is often a one-man affair. Finally, it's very difficult to employ acoustic treatments in anything other than a superficial manner, so the room has to be reasonably good acoustically from the get-go.
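For anyone wanting to estimate that 'optimum RT', the standard starting point is Sabine's formula, RT60 = 0.161 * V / A. A rough sketch with room figures I've made up:

def sabine_rt60(volume_m3: float, absorption_sabins: float) -> float:
    """Sabine reverberation time: seconds for sound to decay by 60 dB.
    V in cubic metres, A in metric sabins (m^2 of perfect absorber)."""
    return 0.161 * volume_m3 / absorption_sabins

# A smallish 5 x 4 x 2.5 m room with a modest total amount of absorption.
volume = 5 * 4 * 2.5   # 50 m^3
absorption = 20.0      # metric sabins, an assumed figure
print(f"RT60 ~= {sabine_rt60(volume, absorption):.2f} s")  # ~0.40 s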

Link to comment
2 hours ago, Summit said:

 

I didn’t specify how big a big room and what room is small. I presented three conditions that will be of importance for reproducing an accurate sound and sound-stage.

 

IME, a big room with a high ceiling and good acoustics, where you can set up the speakers further away (everything else held equal), will present a better sound with a bigger sound stage and with more air between the musicians. In condition three I stated that the audio gear should of course also match the size and the acoustics of the room.

I wouldn't argue that it's generally easier to set up a pair of speakers in a large room, thanks mainly to the freedom of placement and listening position it bestows...but there's a big BUT coming up....

 

Are you familiar with the Haas effect, otherwise known as the precedence effect or the law of the first wavefront? In essence, when a direct sound is followed by reflections of that first sound, we hear the reflections as an echo unless they arrive within the first 20-30ms of the original wavefront (which equates to approximately 20-30 feet of extra round trip for the reflection). When that criterion is met we perceive a single auditory event, and all the reflections are added to the original signal. Essentially what this means is that in a smallish room, reflections are integrated into the original wavefront, giving the signal increased intensity and a more clearly identified and focussed spatial position. With the small room not imposing its reflections and acoustics onto the signal, the signal is free to communicate to the listener the original recording venue's acoustics, with no interference from the room. This is not the case for large rooms, where reflections need to be dealt with, otherwise the room acoustic will tend to mask or at least confuse the recording's acoustic.
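The arithmetic behind that window is just path length over the speed of sound. A sketch in the feet-based units above; the 30ms threshold is the upper end of the quoted range:

SPEED_OF_SOUND_FT = 1125.0  # ft/s in air, approximately

def reflection_fuses(extra_path_ft: float, window_ms: float = 30.0) -> bool:
    """True if a reflection arrives within the precedence (Haas) window,
    so the ear fuses it with the direct sound instead of hearing an echo."""
    delay_ms = extra_path_ft / SPEED_OF_SOUND_FT * 1000.0
    return delay_ms <= window_ms

print(reflection_fuses(10))  # small-room bounce, ~9 ms late -> fused (True)
print(reflection_fuses(45))  # big-room bounce, ~40 ms late -> echo (False)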

 

If a big room makes the music sound big and airy, that’s not what you want. You want big, airy recordings to make big airy sounds, and small intimate recordings to sound appropriately immediate.  You really don’t want the room to add anything, and that’s easier with a smallish room than a large one. 

 

So what does this mean? It means that in a smaller room you generally need much less acoustic treatment (a rear wall of diffusion is usually sufficient, as you don't want multiple reflections) AND you can produce sounds with a huge acoustic, if that's what's on the recording.

 

Also, when you listen to say a grand piano in a room, the power of the instrument will often bring the room alive....light it up with music and saturate all the air with its melodious tones until the room feels full of music, but without any ugly resonances or emphasis. That's much easier to do in a small room, requiring altogether less power and loudspeaker area, so it's also considerably cheaper.

Link to comment
3 hours ago, fas42 said:

 

There is an alternative - silver paste treatments work just as well if very carefully applied ... the key point is that every point of weakness means that all the strengths elsewhere are wasted - and that's the main reason that well made, usually expensive gear does well overall - the cost saving, implementation shortcuts are normally at a minimum.

Yeah, thanks for the input. However, I would whisper a little caution with silver paste. Its goal is to increase conductivity and reduce impedance caused by poorly made physical contacts. But in modern electronics, conductivity and insulation are very cosy bedfellows, often lying adjacent to one another. Any 'creep' from that silver paste (and it does migrate) and you could have a bit of a disaster on your hands. A friend of mine, who is a mainstream manufacturer, once reported that the most common fault they rectified in their equipment was caused by contact enhancement products....so all I would say is "Caveat Emptor".

Link to comment