
Soundstage Width cannot extend beyond speakers


STC

Recommended Posts

35 minutes ago, PeterSt said:

 

I use nothing.

 

 

This is how I mentioned the two antennas and frequencies etc. in the post you did not want to read.

 

With one radiating (frequency) object of 2cm diameter and two antennas 10cm apart, that object can be localized in a space of 12x6x2.5m at an accuracy of 0.1mm.

 

Mind the distance of the antennas and how the object can be e.g. 10 meters to the left of both (one + 10cm) under an angle of even 90 degrees (the side of the stage and line of performers example) and beyond of course (91, 92 ... degrees).

 

There is no difference between the two antennas (receivers) being transducers (radiators AKA speakers) and the object of 2cm being projected in the 3D space. Both use the exact same mechanism, though reversed (what radiated (the object) now receives (the instrument in space) and what received (the antennas) now radiate (the speakers).

It is all about how the radiated frequencies form a unique phase relationship in the projected space.

The important side note (just repeating myself) :

 

While this all works with GHz frequencies, it does not work at all for the way lower frequency of audio. However, this is exactly why it works for "sounds" (listen to the crow and how sharp-boundaries the on/off frequency of its throat is) and not for instruments as such because their general frequency is too low to localize. Read : to form a unique phase relationship in the 3D space. The whole shebang is unrelated to phase manipulation (like in Q-Sound) because it is not necessary. It works as it is and it works the same as Q-Sound. One difference : with Q-Sound the whole spectrum will be manipulated so the low frequencies now appear to be elsewhere just the same - something no-manipulation can not do. If you listen closely to Q-Sound sounds, you will notice an out of phase (inside out) behavior.

 

The vector idea is nice, but is the very same as phase ANGLE. So where we tend to speak about phase differences, it might be good to understand that this shows by the difference in phase angle. These are sheer numbers for math.

 

The other clue might be that colliding frequencies of the proper phase in air, add (why did I quote from that 1 out of 100 emails with vrao). If a stereo microphone captures (read : catches a 0.01mm instance) of a sound which are a bunch of frequencies, then it can be regarded that this moment of capture is the optimal amplitude for that sound (it doesn't matter where the wave of each of the frequencies resides (think degrees)). When this is radiated again by two speakers, somewhere in air this same optimal amplitude emerges again. One crucial thing : this "somewhere" could be at a 1000 places because it is not unique for location and this is because of the waves being far too long.

I can't determine the phase angle of a 0.01mm part of a 50Hz frequency wave. It will be zero.

 

It is not super easy to see that 

a. low frequency waves are harder to locate than higher frequencies;

b. that where amplitudes add up, the sound is louder at that point (think LF standing waves now);

c. and that when the frequencies are sufficiently high and TWO radiators form it, at one point in space it adds up and sound loud.

 

Ad c. There's the seagull.

But only because of its very square sound with sufficiently high frequency. And then still it is too low. This works partly by illusion because the high frequencies are the only "sounds" which allow localization and the lower frequency (say formed by the beak of the beast) are found to be on the same location as the higher frequencies, by our brains. This was exactly @Abtr's point (though seen from a distortion point of view, but this does not matter).

 

 

 

 

I need to read it again because the first round did not make any sense.


 

Quote

 

With one radiating (frequency) object of 2cm diameter and two antennas 10cm apart, that object can be localized in a space of 12x6x2.5m at an accuracy of 0.1mm.

 

Mind the distance of the antennas and how the object can be e.g. 10 meters to the left of both (one + 10cm) under an angle of even 90 degrees (the side of the stage and line of performers example) and beyond of course (91, 92 ... degrees).

 

There is no difference between the two antennas (receivers) being transducers (radiators AKA speakers) and the object of 2cm being projected in the 3D space. Both use the exact same mechanism, though reversed (what radiated (the object) now receives (the instrument in space) and what received (the antennas) now radiate (the speakers).

It is all about how the radiated frequencies form a unique phase relationship in the projected space.

 

 

This paragraph alone is confusing. Humans cannot tell whether a sound comes from the back or the front without the aid of the pinnae, so how is this similar to human hearing? Moreover, the localization of a sound with two antennas 10cm apart can be explained with time difference alone. Alternatively, imagine a bat with a head no more than 1.5cm wide that can locate an insect smaller than 1cm in pitch darkness.

 

Also remember, you cannot use this analogy, because we receive sound with two ears.

41 minutes ago, PeterSt said:

 

Did you already answer the question on the Cuckoo ?

x-D

 

Why can we not locate/find the Cuckoo ?

Hint : you won't even know which direction to go, once you are closer.

 

I am making an effort to understand the antenna post which you brought up in response to my post. Now I am trying to answer your post so that I can bring this topic back on track, but you are asking about the cuckoo. Let’s stick to this one, i.e. the antenna, which is nothing extraordinary because the object can be localized by time difference alone.

18 minutes ago, PeterSt said:

 

But we do. That is, unless you can quote me somewhere on saying that it can be done in an anechoic room just the same. And mind you please, bass traps are different things than diffusers. So people can apply bass traps all right, but diffusers - it could be personal. I don't use neither.

 

May image is as sharp as mentioned blade just the same because the far more delimited waves are allowed - no, required to reflect on the walls. If Diana Krall would be singing in here I would allow it too. And I am fairly sure that she won't ask me te close the curtains.

Luckily not. ;)

 

The reflections are a necessity (please hear me) to define those spots in space. The more angles the "sound" comes from, the more different waves the more chance of unique phase relations (for the point in space). With no reflections there's two data points only (apart from the wider beam but that gets too complicated) and two data points are not sufficient. Not for the still way too low frequency.

In the end we agree. I never tried it but I don't see it happening out in the (anechoic) field. This is already easy to see by the lines (the path / route) a sound may follow. Like the seagull which follows the ceiling (overhead).

 

In my previous house I had a pillar in the middle of the room (RH). I still know the sounds which hung up to that from Madonna's Immaculate Collection (Q-Sound). Same with a fireplace extension to the left of me. All follows the (wild) boundaries of the room and cabinets and such. Or in between. Never outside of it. Behind ? never further back than the wall behind. Easiest to hear with rolling sounds. Vogue (Immaculate Collection) is a good example of it. Vogue-vogue-vogue-vogue. And up at the wall behind. Always.

 

And thus it is about reflections. Even with phase manipulation.

Unless you're Lyngdorf. Then it goes outside of the room (somehow). Not for me.

 

 

PS: Isn't the answer to the cuckoo not in Google, or what ?

 

Too long didn’t read. 

 

Just say whether your method will work in an anechoic chamber or not.

23 minutes ago, Yucca06 said:

 

 

I have stereo 2 channels only.

Processing is from an analog processor, used to amplify harmonics (SPL Vitalizer, really great)

NO multichannel.

An the sound is all around me.

 

Yes, because it works on the crosstalk cancellation principle.

 

“The final control is for Stereo Width, and it operates on a very simple and well‑known principle. Some of the left‑channel signal is inverted in phase and fed into the right channel, while some of the right‑channel signal is reversed in phase and fed into the left channel. This has the effect of widening the stereo image beyond the speakers”.
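Just to illustrate the principle described in that quote, here is a minimal sketch of the phase-inverted cross-feed width trick (the function name and the width coefficient are my own illustrative choices, not SPL's actual implementation):

```python
import numpy as np

def widen_stereo(left, right, width=0.3):
    """Classic stereo-width trick: mix a phase-inverted fraction of each
    channel into the opposite channel. width=0 leaves the signal unchanged;
    larger values push the image wider (at the cost of mono compatibility)."""
    wide_left = left - width * right    # inverted right fed into left
    wide_right = right - width * left   # inverted left fed into right
    return wide_left, wide_right

# Toy example: a 440 Hz source panned slightly to the right
t = np.linspace(0, 1, 48000, endpoint=False)
src = np.sin(2 * np.pi * 440 * t)
left, right = 0.4 * src, 0.6 * src
wl, wr = widen_stereo(left, right)
```

The effect is easy to see in mid/side terms: the side signal (L-R) is boosted by a factor (1 + width) while the mid signal (L+R) is reduced by (1 - width), which is what pushes the image outward.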

2 hours ago, semente said:

 

My guess is that your brain is combining the sound coming from both sources and for this reason it cannot come from any point in space outside of those two sources but can only shift between one an another due to changes in amplitude / balance.

 

Precisely. Sound localization in stereo is based on interchannel level and time differences. Phase is involved too, but that’s left out here.

 

It cannot extend beyond the speakers without room reflections. It will give you a spacious feel and a stage that appears bigger than it is.

 

Remember the Bose 901 (sic), which radiates sound to the sides to create the live feel?

16 minutes ago, PeterSt said:

 

Aha. So now I understand why you think I never reply to your not-so-questions. Thanks.

 

 

Next time, at least look at it. Possibly you see the answer to you never answered question in exactly sentence #1.

Try it.

 

And in the second paragraph you also said no reflection is needed, which means you are claiming it should work in an anechoic chamber.

2 hours ago, PeterSt said:

 

I use nothing.

 

 

This is how I mentioned the two antennas and frequencies etc. in the post you did not want to read.

 

With one radiating (frequency) object of 2cm diameter and two antennas 10cm apart, that object can be localized in a space of 12x6x2.5m at an accuracy of 0.1mm.

 

Mind the distance of the antennas and how the object can be e.g. 10 meters to the left of both (one + 10cm) under an angle of even 90 degrees (the side of the stage and line of performers example) and beyond of course (91, 92 ... degrees).

 

There is no difference between the two antennas (receivers) being transducers (radiators AKA speakers) and the object of 2cm being projected in the 3D space. Both use the exact same mechanism, though reversed (what radiated (the object) now receives (the instrument in space) and what received (the antennas) now radiate (the speakers).

It is all about how the radiated frequencies form a unique phase relationship in the projected space.

The important side note (just repeating myself) :

 

While this all works with GHz frequencies, it does not work at all for the way lower frequency of audio. However, this is exactly why it works for "sounds" (listen to the crow and how sharp-boundaries the on/off frequency of its throat is) and not for instruments as such because their general frequency is too low to localize. Read : to form a unique phase relationship in the 3D space. The whole shebang is unrelated to phase manipulation (like in Q-Sound) because it is not necessary. It works as it is and it works the same as Q-Sound. One difference : with Q-Sound the whole spectrum will be manipulated so the low frequencies now appear to be elsewhere just the same - something no-manipulation can not do. If you listen closely to Q-Sound sounds, you will notice an out of phase (inside out) behavior.

 

The vector idea is nice, but is the very same as phase ANGLE. So where we tend to speak about phase differences, it might be good to understand that this shows by the difference in phase angle. These are sheer numbers for math.

 

The other clue might be that colliding frequencies of the proper phase in air, add (why did I quote from that 1 out of 100 emails with vrao). If a stereo microphone captures (read : catches a 0.01mm instance) of a sound which are a bunch of frequencies, then it can be regarded that this moment of capture is the optimal amplitude for that sound (it doesn't matter where the wave of each of the frequencies resides (think degrees)). When this is radiated again by two speakers, somewhere in air this same optimal amplitude emerges again. One crucial thing : this "somewhere" could be at a 1000 places because it is not unique for location and this is because of the waves being far too long.

I can't determine the phase angle of a 0.01mm part of a 50Hz frequency wave. It will be zero.

 

It is not super easy to see that 

a. low frequency waves are harder to locate than higher frequencies;

b. that where amplitudes add up, the sound is louder at that point (think LF standing waves now);

c. and that when the frequencies are sufficiently high and TWO radiators form it, at one point in space it adds up and sound loud.

 

Ad c. There's the seagull.

But only because of its very square sound with sufficiently high frequency. And then still it is too low. This works partly by illusion because the high frequencies are the only "sounds" which allow localization and the lower frequency (say formed by the beak of the beast) are found to be on the same location as the higher frequencies, by our brains. This was exactly @Abtr's point (though seen from a distortion point of view, but this does not matter).

 

 

 


 

I am not going to run in circles with you. So I want to thrash out this post first.

 

Going back to your 

Quote

 

With one radiating (frequency) object of 2cm diameter and two antennas 10cm apart, that object can be localized in a space of 12x6x2.5m at an accuracy of 0.1mm.

 

Mind the distance of the antennas and how the object can be e.g. 10 meters to the left of both (one + 10cm) under an angle of even 90 degrees (the side of the stage and line of performers example) and beyond of course (91, 92 ... degrees).

 

 

Is your antenna capable of receiving an omnidirectional signal equally from all angles? If the answer is YES, then how can you tell the difference between a sound that originates at 45 degrees, 6 meters away, and another sound that originates at 135 degrees to your right?

Quote

 

There is no difference between the two antennas (receivers) being transducers (radiators AKA speakers) and the object of 2cm being projected in the 3D space. Both use the exact same mechanism, though reversed (what radiated (the object) now receives (the instrument in space) and what received (the antennas) now radiate (the speakers).

It is all about how the radiated frequencies form a unique phase relationship in the projected space.

 

 

How can you reverse the concept for speakers and hearing? Two antennas equal two speakers; OK, that is accepted. But then does a single source equal two ears? We receive sound at two distinct spots. So how can this analogy be used?

5 minutes ago, PeterSt said:

 

Language problem alert !

 

 

May = My of course. Now look :

 

because the far more delimited waves are allowed

 

no, required

 

to reflect on the walls.

 

If not language problem than my lousy writing. But anyway, I say the exact opposite of what you apparently read.

 

Summarized in more straight language :

Waves are required to reflect on the walls ...

 

... or else there is insufficient phase angle data to combine to be conclusive about the location.

 

Thank you so much! That's what I have been saying. The soundstage cannot extend beyond the speakers' outer boundary unless there is phase manipulation or reflection from nearby walls.

 

Now go back to the other post. After that, we will do the cuckoo.

 

 

5 minutes ago, STC said:

 

I am not going to run in circles with you. So I want to thrash out this post first.

 

Going back to your 

 

Is your antenna capable of receiving an omnidirectional signal equally from all angles? If the answer is YES, then how can you tell the difference between a sound that originates at 45 degrees, 6 meters away, and another sound that originates at 135 degrees to your right?

 

How can you reverse the concept for speakers and hearing? Two antennas equal two speakers; OK, that is accepted. But then does a single source equal two ears? We receive sound at two distinct spots. So how can this analogy be used?


 

2 minutes ago, PeterSt said:

 

Found the cuckoo ?

I first emphasized the importance of the answer, repeated the question, repeated it again you saying that you don't like the question, repeated it once again and now

 

I repeated it again.

 

Looks childish ?

 

 

It is a long long way before we are both up to that.

"Both" because I can not explain if the basics are not present. I understand that you want to know, though.

And no, I won't elaborate on the cuckoo because you have difficulties in wanting to hear what I have to say. If the internet tells you somewhere, I should be good. Btw, I am not claiming that the internet/Google has the answer. Now on to your other question because that's feasible to respond to (in next post).

 

Ahh as expected. Because I know the antenna cannot tell the difference.

27 minutes ago, Jud said:

 

Of course it can. Carver’s Sonic Holography would do it at the push of a button, using sum and difference signals. Recordings with Q Sound could locate a source anywhere around the room using phase effects. So it can be done by the recording in a couple of different ways; or it can be done by the room.

 

 I personally would rather have it done (at least to a noticeable extent) by the recording, since if all recordings have the same huge soundstage in your room, it’s of course inaccurate, and for me it becomes boring and even irritating very quickly (as listening to a friend’s system with Carver’s Sonic Hologram Generator did for me many years ago - he eventually became bored with it too and got rid of it).

 

Carver was one of the early guys who attempted crosstalk cancellation, which didn't work quite right due to the limits of the technology at the time. With crosstalk cancellation the real soundstage can be retrieved, but I am not bringing crosstalk into this discussion because this is about stereo.

Just now, PeterSt said:

 

Ah, OK. Then I stop typing.

 

No, don’t stop. We still need to do the cuckoo. Btw, if you can measure the delay between the signal arriving at the two antennas, I can give you the location where the sound originated. It won’t be accurate, because the speed of sound in your room is not known, nor do I know the frequency of the emitter. But it will be close enough.

12 minutes ago, semente said:

Triangulation? That's 3 antennas

 

That is how your location is tracked with mobile phones; they need three towers. Here Peter says he could do it with two, which is possible, but the rear mirror image will cause confusion: you cannot tell whether the source is in front or behind. Even humans at times cannot locate a sound that lies on the cone of confusion, which is resolved with head movements.
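A quick sketch of that point, under simple far-field assumptions (the 10cm spacing is taken from Peter's antenna example; the names and numbers are mine): from a measured arrival-time difference you can recover an angle, but a mirrored position behind the pair produces exactly the same delay, so two receivers alone cannot tell front from back.

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s
D = 0.10   # receiver spacing, m

def delay_for_angle(theta_deg):
    """Far-field time difference of arrival for a source at theta degrees
    (0 = straight ahead, +/-90 = fully to one side)."""
    return D * np.sin(np.radians(theta_deg)) / C

def angle_from_delay(dt):
    """Invert the delay back to an angle; arcsin only returns the frontal
    solution, even though the mirrored rear position fits just as well."""
    return np.degrees(np.arcsin(np.clip(dt * C / D, -1.0, 1.0)))

dt = delay_for_angle(45)                      # source at 45 degrees in front
print(round(angle_from_delay(dt), 1))         # -> 45.0
print(np.isclose(delay_for_angle(135), dt))   # -> True: the rear mirror image matches
```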

21 minutes ago, Blackmorec said:

Worth bearing in mind that in order to maintain clarity in identifying the source of any sound in a reverberant environment, human hearing operates what’s called the Precedence Effect or ‘Law of first waveform’. Essentially any reflection of a sound that arrives at the ear within ca. 40ms of the first wave is fused with the original wave and the original source direction preserved. 

 

Ideally for good, clear imaging a hi-fi room should be small enough to ensure all reflections fall within 30ms.  Anything after 40ms is heard as an echo, which would tend to impact clarity and confuse imaging.  

 

In light of the above, it can be seen that a reflective room will not increase the perceived width of the soundstage.

 

What the above does for us humans is to preserve our ability to localise sound in a reverberant environment.  

 

It is theoretically correct, but there are other experiments with lateral reflections in real-room scenarios where image shift takes place. Lateral reflections also affect the perceived loudness level.
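For a sense of scale on those fusion-window numbers, assuming a speed of sound of about 343 m/s:

```python
C = 343.0  # assumed speed of sound, m/s
for delay_ms in (30, 40):
    extra_path_m = C * delay_ms / 1000.0
    print(f"{delay_ms} ms  ->  ~{extra_path_m:.1f} m of extra travel for the reflection")
# 30 ms -> ~10.3 m, 40 ms -> ~13.7 m: in an ordinary listening room most early
# reflections arrive well inside the fusion window.
```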

7 hours ago, Summit said:

 

Soundstage width can extend beyond the speakers, and I am going to explain why. First things first: I am saying that the speakers do not limit where the sound appears to be coming from, as opposed to where it really comes from.

 

[Image: sasha-hifi-audio-speakers.jpg]

 

The left speaker can produce an image in which the guitar is located to the right of where the transducer is actually placed, and the singer is on the guitar's right or left side, or in the middle of the stage. The same, but mirrored, is true for the right speaker. Now to my point: more than 95 percent of all speakers are not made to be used as only a left or only a right speaker, meaning that you can use either one as the left or the right speaker. If both speakers can produce an image that seems closer to the middle of the stage than where the transducer is placed, and the speakers are interchangeable, then the speakers are not the limiting factor in any direction. Any limitation is then set by the recording, or should I say the placement of the mics, and of course your placement of the speakers in the room.

 

Remember that there is a big difference between playing dual mono and true stereo. With dual mono you can play with amplitude and get some left-right sense, but it is together with phase that we get stereo and a pinpoint image.

 

I think I understand what you are saying. But let's clear up the first confusion here. I am referring to sound outside the listener/speaker triangle, that is, making a sound on the left appear further out, to the left of the left speaker.

 

Dual mono recordings are just two recordings played simultaneously, one through each speaker. The image will not shift from the respective speaker's position unless panpotting is involved.
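For readers unfamiliar with the term: a panpot simply splits one mono signal between the two channels by amplitude, which is what moves the phantom image. A minimal sketch using the common constant-power law (this is the textbook law, not any particular console's):

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan. position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Only the level ratio between the channels changes; there is no time or phase
    offset, so the phantom image stays between the two speakers."""
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return np.cos(angle) * mono, np.sin(angle) * mono

mono = np.ones(4)
left, right = pan(mono, -0.5)   # image pulled halfway toward the left speaker
```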

8 hours ago, Blackmorec said:

Hi STC, I’d be interested to read any good reference or link you could provide. ?

I can imagine changes to loudness levels as reflections that arrive early from different directions are summed together with the origin. 

 

I use https://www.amazon.com/Sound-Reproduction-Psychoacoustics-Loudspeakers-Engineering/dp/0240520092

and https://www.amazon.com/Master-Handbook-Acoustics-Alton-Everest/dp/0071603328 and hundreds of papers on this topic.

 

Just a word of caution here. These books are written for people with some basic understanding of hearing. When I started to read them many years ago I understood them differently; now, with a little better grasp of the keywords, the message in the books is more refined. Moreover, I have Ralph, who is a good teacher, to explain things when I need further clarification on any of these topics.

 

Some of the things written there are the results of experiments under lab conditions, which are very different from real listening in a room. Also, read Blauert on Spatial Hearing.

 

Did you watch the Duke University video I attached earlier? There are about 5 short courses on human hearing which will help you understand these books better.

6 hours ago, Blackmorec said:

The soundwaves carry all the image information but they cannot arrange themselves or create anything...they simply carry information as uniform pressure variations.....waves, which are the same everywhere in the room.

The recording and therefore the soundwaves contain all the information on timing, phase, frequency and amplitude that the brain needs to assign lateral location, depth, height and extent of the soundstage, etc.

 

The speakers transmits the data. The ears capture that data, send it to the brain as nerve impulses and the brain processes those impulses to make music and assign locations of the musicians, build the sonic soundstage and give the air in that soundstage presence and texture. The more perfect the information provided to the ears, the more perfect the sonic picture your brain can build. 

 

There is one big difference when you replay this recording. While the microphones capture the sound with the same interaural time and level differences as our ears, the playback is done with two radiating sources. A single saxophone on the left of a real stage is now reproduced as if two saxophones were located at 30 degrees to the left and right.

6 hours ago, miguelito said:

What I mean is really very simple: Imagine a very simple recording setup where you have two microphones separated by about a foot, each recording to the left and right channels respectively. If you have a sound source at any angle, the timing of the signal arriving at each microphone will be different (sound will have to travel longer to reach the farther microphone). This translates in a phase difference between the two channels. On playback, each of the speakers will play with the same phase difference, recreating the effect of the side placement of the source. And it can easily be past the speaker itself if the phase difference is large enough. 

 

My point was that no processing is required to achieve this phase difference, it happens naturally given the mike placement.

 

 

I agree but how do you explain this?

 

A 2000Hz frequency has a wavelength of about 17.171cm. Let's say your pinnae, and the ORTF microphone pair, are also spaced exactly 17.171cm apart. A source 171.71cm from both the left and right receivers will be exactly at the centre. Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receivers (pinnae or microphones).

 

Now if there is another source at about 6.1 degrees towards the left of the right receiver at a distance of 120.197cm, the distance to the left receiver is also 120.197cm. 120.197cm is 7 times the wavelength of a 2000Hz frequency. The phase is exactly the same reaching both receivers yet we localize both at two different locations. The only difference between the two is the amplitude of the phase reaching the ears (receivers) which is the level; and timing. That is the time taken to reach each ear. I can explain localization with this two but phase is not providing the answer.

 

So how do you explain this with phase difference?

 

 

4 hours ago, STC said:

 

 

I agree but how do you explain this?

 

A 2000Hz frequency has a wavelength of about 17.171cm. Let's say your pinnae, and the ORTF microphone pair, are also spaced exactly 17.171cm apart. A source 171.71cm from both the left and right receivers will be exactly at the centre. Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receivers (pinnae or microphones).

 

Now if there is another source at about 6.1 degrees towards the left of the right receiver at a distance of 120.197cm, the distance to the left receiver is also 120.197cm. 120.197cm is 7 times the wavelength of a 2000Hz frequency. The phase is exactly the same reaching both receivers yet we localize both at two different locations. The only difference between the two is the amplitude of the phase reaching the ears (receivers) which is the level; and timing. That is the time taken to reach each ear. I can explain localization with this two but phase is not providing the answer.

 

So how do you explain this with phase difference?

 

 

 

Ignore. There was a mistake in the second part. 

14 hours ago, miguelito said:

What I mean is really very simple: Imagine a very simple recording setup where you have two microphones separated by about a foot, each recording to the left and right channels respectively. If you have a sound source at any angle, the timing of the signal arriving at each microphone will be different (sound will have to travel longer to reach the farther microphone). This translates in a phase difference between the two channels. On playback, each of the speakers will play with the same phase difference, recreating the effect of the side placement of the source. And it can easily be past the speaker itself if the phase difference is large enough. 

 

My point was that no processing is required to achieve this phase difference, it happens naturally given the mike placement.

 

Please ignore my earlier response to this.

 

I repeat the same with some modification.

—————————————————

I agree but how do you explain this?

 

A 2000Hz frequency has a wavelength of about 17.171cm. Let's say your pinnae, and the ORTF microphone pair, are also spaced exactly 17.171cm apart. A source 171.71cm from both the left and right receivers will be exactly at the centre. Since this is 10 wavelengths of a 2000Hz tone, the phase will be the same at the left and right receivers (pinnae or microphones).

 

Now suppose there is another source off to the left, lying on the line through the two receivers, at a distance of 137.368cm from the right receiver and therefore 120.197cm from the left receiver. 120.197cm is 7 times the wavelength of a 2000Hz tone and 137.368cm is 8 times the wavelength. The phase reaching both receivers is exactly the same, yet we localize the two sources at different places. The only differences between the two arrivals are the amplitude reaching each ear (receiver), which is the level, and the timing, that is, the time taken to reach each ear. I can explain localization with these two, but phase is not providing the answer.

 

So how do you explain this with phase difference?
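A small check of those numbers (using c = 343.42 m/s so that the 2 kHz wavelength comes out at exactly 17.171 cm, as in the example above):

```python
import numpy as np

C = 343.42   # assumed speed of sound, m/s
F = 2000.0   # Hz; wavelength C/F = 0.17171 m

def phase_diff(d_left, d_right):
    """Inter-receiver phase difference for two path lengths, wrapped to [-pi, pi]."""
    dphi = 2 * np.pi * F * (d_right - d_left) / C
    return (dphi + np.pi) % (2 * np.pi) - np.pi

def itd_ms(d_left, d_right):
    """Inter-receiver arrival-time difference in milliseconds."""
    return (d_right - d_left) / C * 1000.0

# Centre source: 10 wavelengths (1.7171 m) to each receiver
print(phase_diff(1.7171, 1.7171), itd_ms(1.7171, 1.7171))        # 0.0 rad, 0.0 ms

# Side source: 7 wavelengths to the left receiver, 8 to the right
print(phase_diff(1.20197, 1.37368), itd_ms(1.20197, 1.37368))    # ~0.0 rad, 0.5 ms
```

The steady-state phase difference really is essentially zero in both cases; what distinguishes the two sources is the roughly 0.5 ms arrival-time difference, which is the point being made.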

8 minutes ago, Summit said:

 

Good speakers are not the limiting factor in any direction; it is how the recording is made and how you set up your audio system that limits the size of the stage. Normally no one records a lead instrument so that it sounds like it is far, far to the left or very distant to the right, because no real live stage looks like that. But ambient sound and echoes from sidewalls sometimes make the total soundstage bigger than the distance between the speakers, and sometimes not.

 

Dual mono recordings are two-channel recordings. In all 2-channel recordings you have 2 different channels, which means you can record or mix so that a musician plays in only one channel or in both, and consequently the recording can have a left, right, or all-over-the-place soundstage.

 

I understand, but the issue here is non-processed stereo recordings. In such circumstances, without the aid of room reflections, the soundstage will be stuck between the two speakers, as localization in stereo is based on level and timing differences. Some here say phase, but maybe they can provide a convincing explanation.

8 minutes ago, Blackmorec said:

Our speakers produce 2 signals from the violin which closely match the 2 signals our ears would have received while listening to the original violin. Those 2 signals are different and its the differential that allows our brain to ‘localize’ the sound. 

 

This is important but...

 

The violin, like every real sound, is mono; there is no stereo sound in nature. This mono sound goes into our ears: ONLY ONE signal to each ear.

With speakers, ideally there should be only ONE signal for each side, but we also hear the opposite speaker. So each ear now receives TWO signals instead of the single signal it would receive from a single real source.
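To make the "two signals per ear" point concrete, here is a minimal sketch of loudspeaker playback as a 2x2 mix; the crosstalk gain is a made-up illustrative number, not a measured head-related value:

```python
import numpy as np

# Rows = ears (left, right), columns = speakers (left, right).
# The off-diagonal terms are the crosstalk: each ear also hears the opposite speaker.
H = np.array([[1.00, 0.70],
              [0.70, 1.00]])

speaker_feed = np.array([1.0, 0.0])   # a source panned hard left on the recording
ear_signals = H @ speaker_feed
print(ear_signals)                    # [1.0, 0.7]: the right ear still gets 70% of it
```

Crosstalk cancellation schemes such as the Carver approach mentioned earlier amount to pre-processing the speaker feeds with an approximate inverse of this matrix, so that each ear receives mostly only its own channel.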

16 minutes ago, Summit said:

First of all, all recordings are processed, and when we listen to a record we are listening to trickery.

 

Anyone can make recordings without any processing using stereo microphones, and the phantom image will appear between the speakers. Having said that, I have no preference whether the recording is close-miked or multi-miked, as long as they get the essence of a good recording correct.

