
Soundstage Width cannot extend beyond speakers



10 minutes ago, fas42 said:

I would find it easy to achieve your standard of playback by not being fussy enough - but I'm not interested in compromising ...

 

Said the man who is currently listening to music on his laptop's internal speakers...

 

Sometimes it's like someone took a knife, baby
Edgy and dull and cut a six inch valley
Through the middle of my skull


Ummm, I'm using them to assess the content of YT clips and other posted files. They do the job well enough to easily hear differences, and to make clear the flaws in the captures of other playback gear, say.

 

No, they don't have walloping sub-20Hz bass, and the treble is shelved off - but those extremes tend to get in the way of hearing what's important in the sound.

 

Also, I thought all DACs sound the same, and a simple circuit feeding speakers working effectively as headphones would be as good as it gets - or have I got that wrong ... ?

3 minutes ago, fas42 said:

Also, I thought all DACs sound the same, and a simple circuit feeding speakers working effectively as headphones would be as good as it gets - or have I got that wrong ... ?

 

This doesn't help you, as it looks too serious.

 

:sarcasmemoji:

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

6 hours ago, miguelito said:

What I mean is really very simple: imagine a very simple recording setup where you have two microphones separated by about a foot, each recording to the left and right channels respectively. If you have a sound source at any angle, the timing of the signal arriving at each microphone will be different (sound will have to travel longer to reach the farther microphone). This translates into a phase difference between the two channels. On playback, each of the speakers will play with the same phase difference, recreating the effect of the side placement of the source. And it can easily be past the speaker itself if the phase difference is large enough.

 

My point was that no processing is required to achieve this phase difference, it happens naturally given the mike placement.

 

 

I agree but how do you explain this?

 

A 2000Hz tone has a wavelength of about 17.171cm. Let's say your pinnae, or an ORTF pair, are also spaced exactly 17.171cm apart. A source 171.71cm from both the left and right receivers will be exactly at the centre. Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receiver (pinna or microphone).

 

Now if there is another source at about 6.1 degrees towards the left of the right receiver, at a distance of 120.197cm, the distance to the left receiver is also 120.197cm. 120.197cm is 7 times the wavelength of a 2000Hz tone. The phase reaching both receivers is exactly the same, yet we localize the two sources at two different locations. The only differences between the two arrivals are the amplitude reaching the ears (receivers), which is the level, and the timing, that is, the time taken to reach each ear. I can explain localization with these two, but phase is not providing the answer.

 

So how do you explain this with phase difference?

 

 

4 hours ago, STC said:

 

 

I agree but how do you explain this?

 

A 2000Hz tone has a wavelength of about 17.171cm. Let's say your pinnae, or an ORTF pair, are also spaced exactly 17.171cm apart. A source 171.71cm from both the left and right receivers will be exactly at the centre. Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receiver (pinna or microphone).

 

Now if there is another source at about 6.1 degrees towards the left of the right receiver, at a distance of 120.197cm, the distance to the left receiver is also 120.197cm. 120.197cm is 7 times the wavelength of a 2000Hz tone. The phase reaching both receivers is exactly the same, yet we localize the two sources at two different locations. The only differences between the two arrivals are the amplitude reaching the ears (receivers), which is the level, and the timing, that is, the time taken to reach each ear. I can explain localization with these two, but phase is not providing the answer.

 

So how do you explain this with phase difference?

 

 

 

Ignore. There was a mistake in the second part. 

8 hours ago, STC said:

 

I think I understand what you are saying. But let's clear up the first confusion here. I am referring to sound outside the listener/speakers triangle. That means making a left-panned sound appear even further out, to the left of the left speaker.

 

Dual mono recordings are just two recordings played simultaneously, one through each speaker. The image will not shift from the respective speaker's position unless panpotting is involved.

 

Good speakers are not the limiting factor in any direction; it is how the recording is made and how you set up your audio system that limits the size of the stage. Normally no one records a lead instrument so that it sounds far, far to the left or very distant to the right, because no real live stage looks like that. But ambient sound and echoes from the sidewalls sometimes make the total soundstage bigger than the distance between the speakers, and sometimes not.

 

Dual mono recordings are two channel recordings. In all 2 channel recordings you have 2 different channels, which means you can record or mix so that a musician plays in only one channel or in both, and consequently the soundstage can be left, right, or all over the place.
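Both of the last two points rest on the same mechanism: a mix can place a mono source anywhere between the speakers purely by how it splits the signal across the two channels. As a minimal illustration (Python; a standard constant-power pan law, with purely illustrative values, not anything from a specific recording):

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Split a mono signal into left/right with a constant-power pan law.
    pan = -1.0 is hard left, 0.0 is centre, +1.0 is hard right."""
    theta = (pan + 1.0) * np.pi / 4.0      # map [-1, 1] onto [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono

# A 1 kHz test tone, panned halfway to the left
t = np.linspace(0.0, 1.0, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 1000.0 * t)
left, right = constant_power_pan(tone, -0.5)

# The channels differ only in level; print the inter-channel level in dB
print(20 * np.log10(right.max() / left.max()))   # about -7.7 dB
```

At pan = -0.5 the only difference between the channels is roughly 7.7 dB of level, and that level difference alone is what moves the phantom image away from the centre.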

14 hours ago, miguelito said:

What I mean is really very simple: imagine a very simple recording setup where you have two microphones separated by about a foot, each recording to the left and right channels respectively. If you have a sound source at any angle, the timing of the signal arriving at each microphone will be different (sound will have to travel longer to reach the farther microphone). This translates into a phase difference between the two channels. On playback, each of the speakers will play with the same phase difference, recreating the effect of the side placement of the source. And it can easily be past the speaker itself if the phase difference is large enough.

 

My point was that no processing is required to achieve this phase difference, it happens naturally given the mike placement.

 

Please ignore my earlier response to this.

 

I'll repeat the same point with some modification.

—————————————————

I agree but how do you explain this?

 

A 2000Hz tone has a wavelength of about 17.171cm. Let's say your pinnae, or an ORTF pair, are also spaced exactly 17.171cm apart. A source 171.71cm from both the left and right receivers will be exactly at the centre. Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receiver (pinna or microphone).

 

Now suppose there is another source off to the left, in line with the two receivers, at a distance of 137.368cm from the right receiver; the distance to the left receiver is then 120.197cm. 120.197cm is 7 times the wavelength of a 2000Hz tone and 137.368cm is 8 times that wavelength. The phase reaching both receivers is exactly the same, yet we localize the two sources at two different locations. The only differences between the two arrivals are the amplitude reaching the ears (receivers), which is the level, and the timing, that is, the time taken to reach each ear. I can explain localization with these two, but phase is not providing the answer.

 

So how do you explain this with phase difference?
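The arithmetic here is easy to check. A minimal sketch in Python (the speed of sound is taken as 343.42 m/s, an assumption chosen so that 2000Hz gives exactly the 17.171cm wavelength used above):

```python
import math

C = 343.42          # speed of sound, m/s (assumed so 2000 Hz -> 17.171 cm)
F = 2000.0          # tone frequency, Hz
WAVELENGTH = C / F  # 0.17171 m

d_left, d_right = 1.20197, 1.37368   # receiver distances from the example, m

path_diff = d_right - d_left                 # 0.17171 m
cycles = round(path_diff / WAVELENGTH, 9)    # exactly 1 full wavelength
phase_diff = 2 * math.pi * (cycles % 1.0)    # 0 rad: the phases coincide
itd = path_diff / C                          # interaural time difference

print(f"path difference: {path_diff * 100:.3f} cm")
print(f"phase difference mod 2*pi: {phase_diff:.6f} rad")
print(f"ITD: {itd * 1e6:.1f} microseconds")
```

The two arrivals line up cycle-for-cycle (phase difference 0 mod 2*pi), yet one lags the other by about 500 microseconds. That is exactly the point being made: for a steady tone, phase measured mod 2*pi cannot distinguish the two positions, while the arrival timing can.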

8 minutes ago, Summit said:

 

Good speakers are not the limiting factor in any direction; it is how the recording is made and how you set up your audio system that limits the size of the stage. Normally no one records a lead instrument so that it sounds far, far to the left or very distant to the right, because no real live stage looks like that. But ambient sound and echoes from the sidewalls sometimes make the total soundstage bigger than the distance between the speakers, and sometimes not.

 

Dual mono recordings are two channel recordings. In all 2 channel recordings you have 2 different channels, which means you can record or mix so that a musician plays in only one channel or in both, and consequently the soundstage can be left, right, or all over the place.

 

I understand, but the issue here is unprocessed stereo recordings. In such circumstances, without the aid of room reflections, the soundstage will be stuck between the two speakers, as localization in stereo is based on level and timing differences. Some here say phase, but maybe they can provide a convincing explanation.

15 hours ago, semente said:

 

That is not real 2-channel stereo.

 

Of course it is. I linked to 7 different mic techniques. Btw, you can use more than one stereo mic to capture a live gig without manipulating the soundstage. As with speakers, it's very important to place all the different mics correctly.

 

https://www.soundliaison.com/index.php/253-bach-live-edition-1

 

https://www.soundliaison.com/index.php/276-carmen-gomes-sings-the-blues

5 minutes ago, STC said:

Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receiver (pinna or microphone).

 

Ignoring the fact that you say yourself that you can't explain it all, I'd say that you are heading in the right direction now. B| So please continue.

 

Yesterday, when I announced that I would stop typing, my post started out with this:

 

-----

7 minutes ago, STC said:

Is your antenna capable of receiving an omnidirectional signal equally from all angles? If the answer is YES, then how can you tell apart a sound that originates at 45 degrees, 6 meters away, from another sound that originates at 135 degrees to your right?

 

Yes, antennas receive from any direction. But that is not important (and not detrimental).

Keep in mind it is about TWO antennas. OK?

And it is not about a time difference between the two. Indirectly, maybe.

 

Just thinking 2D.

We have the two antennas. Let's observe the midpoint between them. 2 meters straight out from that midpoint there is a source radiating a frequency.

What will be the phase angle at each of the two antennas?

Answer: we don't know, because the frequency was not given.

But is the angle the same?

No, the angle is not the same. Think about this. Only if the antennas were at the same position would the angle be the same. But they are not; they are 10cm apart.

Do we know where the radiating object is?

Well, looking at the almost equal phase angles *and* knowing its distance (assumed for now), we can tell that it is at those 2m dead ahead, but it could also be at about a 45 degree angle, still at 2m distance, assuming the frequency divides evenly into 200cm. It could be at 90 degrees as well. It could be at 135 degrees (which is 45 degrees behind), 180 degrees (dead behind), 225 degrees, 270, etc. The distance is always 2m - that was a given for now.

 

Apart from that, it can be at 8 locations (in the 2D plane alone), which we can calculate because each of these locations has a relative phase angle to both antennas ... (meaning: the relationship in radians remains the same). We can of course do this for any minute angle; we can do it for the complete circle. Still, for each relation (which is one pair of, thus 2, numbers), it can be at 8 positions. This is not unique(ly identifying) ...

-----

 

And there I stopped.
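The "not uniquely identifying" claim above is straightforward to probe numerically. A sketch in Python (the frequency and propagation speed are assumptions here, since the post deliberately leaves the frequency open): sweep a source around the 2m circle and collect every bearing that produces the same pair of phase readings at two receivers 10cm apart.

```python
import numpy as np

C = 343.0    # propagation speed, m/s (assumed; the post never fixes a medium)
F = 2000.0   # source frequency, Hz (assumed; "the frequency was not given")
D = 0.10     # 10 cm receiver spacing, as in the post
R = 2.0      # source distance from the midpoint, m, as in the post

antennas = np.array([[-D / 2, 0.0], [D / 2, 0.0]])

def phases(bearing_deg):
    """Arrival phase (mod 2*pi) at each antenna for a source R metres
    from the midpoint at the given bearing (0 degrees = dead ahead)."""
    b = np.radians(bearing_deg)
    src = np.array([R * np.sin(b), R * np.cos(b)])
    dist = np.linalg.norm(antennas - src, axis=1)
    return (2 * np.pi * F * dist / C) % (2 * np.pi)

target = phases(0.0)                       # the "dead ahead" phase pair
grid = np.arange(0.0, 360.0, 0.01)
hits = [b for b in grid if np.allclose(phases(b), target, atol=0.01)]

# Collapse runs of neighbouring grid points into distinct bearings
bearings = [round(b, 1) for i, b in enumerate(hits)
            if i == 0 or b - hits[i - 1] > 1.0]
print(bearings)   # more than one bearing shares the same phase pair
```

With these assumed numbers the search returns more than one distinct bearing for a single phase pair, including the mirror position dead behind, which is exactly the ambiguity described above.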

 

 

 

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

8 hours ago, STC said:

 

There is one big difference when you replay this recording. While the microphones capture the sound with the same interaural time and level differences as our ears, the playback is done with two radiating sources. A single saxophone on the left of a real stage is now reproduced as if two saxophones were located at 30 degrees to the left and right.

I do understand this. My post was answering another that postulated that the speakers create the imaging. I was just pointing out that it's the ears/brain that create the imaging, based on the differential signals they receive from the 2 speakers.

 

Let me put all this discussion in a slightly different way. Our hi-fis, and specifically our speakers, don't produce the sound of, say, a violin, which is mono and physically located in space. Our speakers produce 2 signals from the violin which closely match the 2 signals our ears would have received while listening to the original violin. Those 2 signals are different, and it's the differential that allows our brain to 'localize' the sound.

8 minutes ago, Blackmorec said:

Our speakers produce 2 signals from the violin which closely match the 2 signals our ears would have received while listening to the original violin. Those 2 signals are different, and it's the differential that allows our brain to 'localize' the sound.

 

This is important but...

 

The violin, like all real sound, is mono, as there is no stereo sound in nature. This mono sound goes into our ears. ONLY ONE signal to each ear.

 

With speakers, ideally there should be only ONE signal from each side, but we also hear the opposite speaker. So each ear now receives TWO signals where a single real source would have delivered only one.
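The geometry behind that second arrival is easy to put numbers on. A minimal sketch (Python; the layout and head width are illustrative assumptions, not taken from any post here):

```python
import math

C = 343.0      # speed of sound, m/s (assumed)
HEAD = 0.17    # ear-to-ear spacing, m (assumed)

# An assumed near-equilateral layout: speakers 2 m apart, listener 2 m back
spk = {"L": (-1.0, 2.0), "R": (1.0, 2.0)}
ear = {"L": (-HEAD / 2, 0.0), "R": (HEAD / 2, 0.0)}

def delay_ms(a, b):
    """Straight-line propagation delay between two points, in ms."""
    return math.dist(a, b) / C * 1e3

for side, other in (("L", "R"), ("R", "L")):
    direct = delay_ms(spk[side], ear[side])   # the intended signal
    cross = delay_ms(spk[other], ear[side])   # crosstalk from the far speaker
    print(f"{side} ear: direct {direct:.3f} ms, "
          f"crosstalk {cross:.3f} ms (+{(cross - direct) * 1000:.0f} us)")
```

With this layout each ear hears its intended speaker first and the opposite speaker roughly 0.2 ms later; that later arrival is the extra signal being described.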

15 minutes ago, STC said:

 

I understand, but the issue here is unprocessed stereo recordings. In such circumstances, without the aid of room reflections, the soundstage will be stuck between the two speakers, as localization in stereo is based on level and timing differences. Some here say phase, but maybe they can provide a convincing explanation.

 

First of all, all recordings are processed, and when we listen to a record we are listening to a trickery. All live concerts consist of direct sound and ambient sound. That is the very thing about live music, in contrast to studio recordings, which consist only of direct sound and some added fake ambience.

13 minutes ago, Summit said:

 

Of course it is. I linked to 7 different mic techniques. Btw, you can use more than one stereo mic to capture a live gig without manipulating the soundstage. As with speakers, it's very important to place all the different mics correctly.

 

https://www.soundliaison.com/index.php/253-bach-live-edition-1

 

https://www.soundliaison.com/index.php/276-carmen-gomes-sings-the-blues

 

Nope, not REAL 2-channel stereo...

 

From the first link:

DPA 4006 TL matched stereo pair (A-B Stereo)
Neumann USM 69i (Soloists)
Neumann TLM 103 (Cello)
Neumann KM184 (Oboe 1-2, Violin 1-2, Viola and Bassoon)
AKG C414 XLS (Organ, Violone)
 

2nd link:

Carmen: Audix SCX25
Folker: Josephson C700
Peter: Josephson C700
Bert: Josephson C617 (overheads) - Audix D6 (bass drum)
Main system - Schoeps MK5 (AB)
Micpre's: Merging Horus
Microphone cables: AudioQuest Yokon

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)


 

 

Quote

the playback is done with two radiating sources.

 

I'd say that it is crucial to "see through" the fact that just *because* it is two sources, the mapping of the sound is allowed to "re-happen" in mid space. This is where the frequency waves collide most.

It still requires the "phase thing" but at least we should be able to see how the two radiators can do that. One never can.

Three will be better. Four better again. Etc.

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

7 minutes ago, Summit said:

 

First of all, all recordings are processed, and when we listen to a record we are listening to a trickery. All live concerts consist of direct sound and ambient sound. That is the very thing about live music, in contrast to studio recordings, which consist only of direct sound and some added fake ambience.

 

Two (adequately positioned) mics, two channels, two speakers, no multi-track mixing-down to two channels. That is real (2-channel) stereo.

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)


Something else, perhaps:

Numerous times I have been referring (on the Phasure forum) to things being created or recreated in mid space (from the sound from loudspeakers). Simply because I noticed it happening, relative to a prior situation where it was not happening yet (say, after some upgrade). So thinking from there makes things a little different, because what we hear may not even be in the recording.

I think, but am not sure, that this could be the "electric butterflies" happening. They sparkle energy at various places, and you can kind of feel that this just emerges in mid air because of the interaction of frequencies (better: waves). The sparkle here is literal. Think firecrackers. So not figurative at all. It really sounds dangerous.

 

More in sparring and brainstorming mode today (hopefully that is good), I also wonder by now why nothing of the Seagull shows through headphones. The thing doesn't even seem to be there, never mind that it is a crow. As if this does not happen in our brain at all, but really in mid space and there only (and from there we perceive it, of course).

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)

2 minutes ago, PeterSt said:

I think, but am not sure, that this could be the "electric butterflies" happening. They sparkle energy at various places, and you can kind of feel that this just emerges in mid air because of the interaction of frequencies (better: waves). The sparkle here is literal. Think firecrackers. So not figurative at all. It really sounds dangerous.

 

[image: hero.jpg]

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

9 minutes ago, semente said:

 

Two (adequately positioned) mics, two channels, two speakers, no multi-track mixing-down to two channels. That is real stereo.

 

No, if only two mics were allowed we would have extremely few stereo recordings besides home recordings. Most use one stereo mic and some nearfield mics to capture a live gig. You can have your own definition of what real stereo is, but I don't agree, and nor do most others. Look up stereo and see the description.

16 minutes ago, Summit said:

First of all, all recordings are processed, and when we listen to a record we are listening to a trickery.

 

Anyone can make recordings without any processing using stereo microphones, and the phantom image will appear between the speakers. Having said that, I have no preference whether the recording is close-mic'ed or multi-miked, as long as they get the essence of a good recording right.

2 minutes ago, Summit said:

 

No, if only two mics were allowed we would have extremely few stereo recordings besides home recordings. Most use one stereo mic and some nearfield mics to capture a live gig. You can have your own definition of what real stereo is, but I don't agree, and nor do most others. Look up stereo and see the description.

 

It isn't stereo just because you want it to be (well, also because you're not Trump).

 

We are discussing why real (2-channel) stereo only positions images of sources between speakers. Any other "stereo" is off-topic (and not real stereo).

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)


[image: realstereo_grey.jpg]

 

True stereophonic sound, as devised by Blumlein, is quite capable of reproducing such an event with just two channels and two loudspeakers, so why complicate the issue with more channels and more speakers? The problem is, that as consumer electronics manufacturers and media providers concentrate their efforts increasingly on home theatre, stereo is being increasingly sidelined. We can already see this happening, with a rapidly diminishing choice of affordable stereo hi-fi components, the market being polarised towards low-cost, all-in-one mini systems at one end of the scale and exorbitantly priced specialist components at the other. Similarly with recorded media. Apart from the stream of re-issues, how many contemporary recordings are made using the real stereo techniques which have served us so well in the past?
There is a distinction of course between multi-track recordings mixed down to two channels and stereo. The latter provides a completely different listening experience - an experience which is now in danger of disappearing if the industry believes there is little future in it.
Hence the Campaign for Real Stereo.

 

https://www.tnt-audio.com/topics/realstereo_e.html

 

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

