EarSpace!!!!


Blake


On 10/8/2018 at 12:35 PM, esldude said:

If I play music over speakers you hear the direct sound first, the next several milliseconds of reflected sound is filtered away from our hearing.

 


 

 

It is known as the Haas effect. The reflected sound is not filtered away; it is converted into spatial information, as long as the delay between the direct and reflected sound is short enough.

 

On 10/8/2018 at 12:35 PM, esldude said:

 


This happens at least partly due to directional cues arriving later from a different direction detected by the shape of the outer ear, and from our ears constantly moving slightly (even if we think we are holding them still).  The brain can find reflections that have a similar signature though delayed and ignore those.  

This, too, is converted into spatial information.

 

On 10/8/2018 at 12:35 PM, esldude said:

If I record the sound at listening position, then the microphone picks up direct sound and reflections.  When I play that back over speakers the delayed and reflected sound comes from the same physical spot.  The effects of our outer ear have no way to separate that from recorded music.  Nor does movement of our head.  So it gets treated as a primary direct sound mixed in with the music and it is not filtered.  Those recorded reflections will create secondary reflections of those in the room, but our ears filter that away.

 

All recordings, whether made in a church or in a listening room, capture reflected sound, which adds space and reverb. Having said that, the distance of the microphone from the source limits the amount of reflected sound it picks up.
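A minimal sketch of the point about early reflections (illustrative numbers only; the 10 ms delay and roughly -6 dB level are my assumptions, not from any measurement): a single reflection is just a delayed, attenuated copy of the direct sound, and as long as it arrives within the Haas window it fuses with the direct sound instead of being heard as an echo.

```python
# Model a single early reflection as a delayed, attenuated copy of the
# direct sound. Delay and gain are illustrative values only.
import numpy as np

fs = 48000                                              # sample rate in Hz
t = np.arange(fs) / fs                                  # one second of time
direct = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)   # decaying 440 Hz tone

delay_ms = 10                       # within the Haas window (~1-30 ms)
gain = 0.5                          # reflection attenuated by about 6 dB
delay_samples = int(fs * delay_ms / 1000)

reflection = np.zeros_like(direct)
reflection[delay_samples:] = gain * direct[:-delay_samples]

at_listener = direct + reflection   # what the ear (or a microphone) receives
```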

 

 

33 minutes ago, esldude said:

First, I don't seem to get the real feel with binaural recordings that other people report.  So maybe my reports aren't comparable with those who do.  In this case, it was better than usual in that sound objects seemed on a line from just outside each ear thru the middle of my head.  Often these sound fully in my head and clustered near the top third of my head subjectively.   

 

Binaural recording is always about hearing what someone else heard through their individual pinnae. On a dummy head the ear spacing is usually fixed at 17.5 cm and the head shape is modelled on an average Caucasian. Furthermore, the result also depends on your own ears' frequency response, which can go badly wrong for some people listening to binaural recordings. Basically, no two people hear alike.

7 minutes ago, esldude said:

My guess is good microphones in my own ears might make binaural recordings which are good to me.  Still I wanted to know how the original video linked sounded to others. 

 

That's how it should work. With microphones in your own ears, it should be hard for you to distinguish the recording from the real thing.

 

I have heard better binaural recordings, but I am not familiar with those particular tracks.

 

1 hour ago, esldude said:

My understanding is some part of our ability to dismiss the room is related to small continuous head movements we all make.  I get this from some writings of James Johnston.   And there is also the effect of us turning our head in larger movements.  The thing the Smyth Realizer is said to fix.  

 

I've also read, and wish I kept the paper, where it was said in an anechoic chamber listening to a pair of loudspeakers, the room is dead of course, and you just hear sound from speakers at the speaker mainly.  But if you had the listener place their head on a chin rest and against a forehead rest to arrest small head movements the sound for at least many people suddenly imaged mostly inside their heads similar to what most recordings do over headphones.  

 

So in mitchco's recordings I hear a juddering sound at times.  Most noticeable in the Giorgio track.  It is almost as if each track were on tape and one track or the other is sticking and releasing rapidly.  Or as if one channel is speeding up and slowing rapidly rather like one channel only flutter. So I wonder if this is involuntary head movement by mitchco or some vibration of the mics in his ears or what exactly is it?

 

I remember reading something like that, but I am not sure that was the purpose or conclusion of the paper. In any case, we still have our pinnae for direction finding, and with ILD and ITD cues, I doubt the sound would image inside your head.

 

These three elements for direction finding still exist in an anechoic chamber, and reflections, or the lack of them, are of no relevance except for timbre and spatial information.
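For a rough feel of the ITD cue, the classic Woodworth approximation can be sketched as follows (the 8.75 cm head radius simply matches the 17.5 cm ear spacing mentioned earlier, and 343 m/s is a nominal speed of sound; this is an illustration, not a model of any particular head):

```python
# Woodworth approximation of interaural time difference (ITD) for a
# far-field source at a given azimuth (0 deg = straight ahead).
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:.0f} microseconds")
# At 90 degrees this gives roughly 650 microseconds, the familiar
# maximum ITD for an average-sized head.
```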

 

The problem with binaural recordings is that the image is three-dimensional but lacks realism when your head movement and the 3D image don't tally. For example, if you make a binaural in-ear recording with a guitar on the left, a drum on the right and a piano behind you, you will hear exactly that over headphones as long as you don't move your head.

 

If you turn your head around to hear the piano, the 3D image stays fixed in the original recording perspective, so the stage remains the same even when your head is turned 180 degrees. This is where the realism collapses. To address this we have dynamic head tracking, where the sound changes according to your head movement: when you turn to the back, the piano will sound in front and the drum and guitar behind you.

 

This unrealistic imaging is even more prevalent in stereophonic playback, where even a slight sway of the head to the left or right destroys the phantom image. As you can guess by now, no one seems to be bothered by it.


The chart from the Master Handbook of Acoustics refers to a SINGLE reflection. Based on that experiment, it was suggested that lateral reflections can be used for spaciousness and stereo imaging, PROVIDED the early interfering reflections are eliminated. How can this be achieved?

 

As far as soundstage is concerned, reflections arriving from around 120 degrees can help; this is based on trial and error and on some old literature about pinna stimulation from the rear.

7 hours ago, Blake said:

I still need to listen on headphones (which is what Jana suggested in her intro).

 

A binaural recording must be listened to with headphones. When you listen to it over loudspeakers you introduce so many errors that the natural ILD and ITD cues are destroyed. Furthermore, the frequency response at your ears is modified so extensively that it will never sound correct over speakers. You also introduce crosstalk between the speakers.
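A minimal sketch of that crosstalk (the 0.25 ms interaural delay and the attenuation are placeholder values, not measurements): over speakers each ear also receives the opposite channel slightly later and softer, which headphones avoid.

```python
# Each ear receives its own speaker plus a delayed, attenuated copy of the
# opposite speaker. Illustrative delay/gain values only.
import numpy as np

def add_crosstalk(left, right, fs=48000, delay_ms=0.25, gain=0.7):
    d = int(round(fs * delay_ms / 1000))
    ear_l = left.copy()
    ear_r = right.copy()
    ear_l[d:] += gain * right[:-d]   # right speaker leaking into the left ear
    ear_r[d:] += gain * left[:-d]    # left speaker leaking into the right ear
    return ear_l, ear_r
```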

 


On 10/10/2018 at 5:23 PM, esldude said:

I've also read, and wish I kept the paper, where it was said in an anechoic chamber listening to a pair of loudspeakers, the room is dead of course, and you just hear sound from speakers at the speaker mainly.  But if you had the listener place their head on a chin rest and against a forehead rest to arrest small head movements the sound for at least many people suddenly imaged mostly inside their heads similar to what most recordings do over headphones.  

 

Now I remember. I think you are referring to the cone of confusion, where without head movement you cannot locate a source accurately because the ILD and ITD are identical. This is a unique situation and hardly arises in normal listening, as we can also judge a sound's location from its spectral content, learnt from prior experience.

28 minutes ago, pkane2001 said:

 

It would be an interesting experiment to try IR's from different systems to hear how they all sound. Anyone who has done measurements of their speaker system at the listening position using REW sine sweeps should be able to create an IR wave file that captures their room and system characteristics. If everyone could then upload their IR file to share with others, we'd have a very interesting collection to play with! All that's required is a player/renderer software that supports convolution (for example HQPlayer) and a set of quality headphones. If there's interest, we could start a separate thread for this discussion.

 

Heck, I might actually finally learn what Frank's sorted system sounds like.

 

 

Why look for the imperfect IRs of listening rooms? Google for free IRs of real concert halls, churches and many other venues.

42 minutes ago, esldude said:

The idea is for me to take the impulse response and let others use it with convolution so they can listen over headphones and get a good idea (we hope) of what listening at my house sounds like.  Ditto for other people

 

Maybe I am still stuck on the OP's intention.

 

What will you be using for convolution? Dirac?

23 minutes ago, esldude said:

if we get this figured out to work better than binaural

 

It will never be better than binaural. 

 

BTW, I think there is some confusion about the terminology used here. Convolution here means using a real IR to create reverb; another way is to use artificial reverb. It is also possible to take the IR of the room, invert its frequency response and use that for room correction. I thought that's what the OP intended, but some of the other replies caused a bit of confusion.
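To make the distinction concrete, here is a minimal sketch of the first use, adding a room's reverb signature to a track by convolution. The filenames are hypothetical and Python with scipy/soundfile is only one way to do it; room correction would instead require deriving an inverse filter from the IR, which is not shown.

```python
# Convolve a music track with a room impulse response to impose that room's
# reverb signature on the track.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

track, fs = sf.read("track.wav")          # the music track (mono or stereo)
ir, fs_ir = sf.read("room_ir.wav")        # the room impulse response
assert fs == fs_ir, "resample the IR first if the sample rates differ"

if ir.ndim > 1:                           # keep a single IR channel for simplicity
    ir = ir[:, 0]
if track.ndim == 1:
    track = track[:, None]                # treat mono as a one-column array

# Apply the same IR to every track channel.
wet = np.column_stack(
    [fftconvolve(track[:, ch], ir) for ch in range(track.shape[1])]
)
wet /= np.max(np.abs(wet))                # normalise to avoid clipping
sf.write("track_in_that_room.wav", wet, fs)
```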

3 minutes ago, esldude said:

Well to be very clear, binaural is piss poor for me.  If you wished to let people use headphones to hear what the sound was on another system at another location what would you suggest?

 

My complaint was with binaural being so poor.  With a stereo pair at the listening position being so poor.  It was suggested using IR and convolvers would let you hear over headphones a better result to accurately let one hear what another location sounded like. 

 

Why not burn about $75 on the Roland binaural microphone and make a few recordings? Use the same earphones to listen back to them. Theoretically, you should hear the same thing (a little compromise is required).


 

Quote

 

It was suggested using IR and convolvers would let you hear over headphones a better result to accurately let one hear what another location sounded like.

 

 

Unless the IR was recorded with binaural microphones, the room signature will never be the same.


Only a binaural recording will do, but even then it is still someone else's pinnae. Hearing is like a thumbprint: it is unique to each individual.

 

The more practical approach is to capture the direct sound reaching your ears and the 360-degree IRs of the room. The other person should then be able to replicate the sound, provided his own room acoustics are eliminated.

7 minutes ago, pkane2001 said:

 

The same as what? I think you're still missing the point of an IR convolution. It is to reproduce the effects of the room and the characteristic sound of the playback system, including the speakers for another listener, with a different system at a different location.

 

Capturing IR doesn't require binaural microphones, and in fact, IR is probably better to be captured through one of the channels rather than through a stereo mic.

 

I know what it is. I use over 100 of them for each channel. That's the reason this topic, about Dirac room correction using an IR and listening to another room through that IR, is confusing.

 

Could you please list all the software and impulse responses you use “to reproduce the effects of the room and the characteristic sound of the playback system, including the speakers for another listener, with a different system at a different location”?

 

That will help me understand how you are handling the convolution, because my understanding of it is very different. At times we are on the same page, but at times you are describing something IRs are not supposed to do. That's where the confusion is.

11 minutes ago, pkane2001 said:

 

IR is a measurement of a system response to an infinitely short pulse.  Convolution is a mathematical operation, applying a function to a function. Convolving a waveform with an impulse response results in applying the same system response that was measured to the new waveform. In effect, it applies the same distortions, reflections and other system characteristics of the originally measured system.

 

I use REW to compute impulse response from a sine sweep. The sine sweep measurements are done from my listening position using a measurement microphone and my speaker system.

 

I use HQPlayer to apply this generated impulse response to my playback system.

 

 So this impulse response is without correction?  
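For context, the sweep-to-IR step that REW performs is essentially a deconvolution of the recorded sweep against the reference sweep. A minimal conceptual sketch (real tools use a logarithmic sweep plus an inverse filter, so this is only an approximation of the idea, assuming both signals are numpy arrays at the same sample rate):

```python
# Estimate an impulse response by regularised spectral division of the
# recorded sweep by the reference sweep.
import numpy as np

def sweep_to_ir(recorded_sweep, reference_sweep, eps=1e-8):
    n = len(recorded_sweep) + len(reference_sweep) - 1
    R = np.fft.rfft(recorded_sweep, n)
    S = np.fft.rfft(reference_sweep, n)
    # Divide out the sweep; eps guards against division by near-zero bins.
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)
```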

7 minutes ago, pkane2001 said:

 

Impulse response is a measurement of the overall system, all warts, reflections, and corrections included.

 

No correction is implied by IR. But, if your system has correction activated during measurement, it will also be part of the captured IR. 

 

 

OK. That cleared up the big confusion I was having. I used the IR to correct room defects, but that was a long time ago. So you are just using it to feed your headphones, so that listening sounds like listening to your speakers in your room. Am I correct?

But that will not correct the inside-the-head perspective with headphones. Maybe the tonal balance will be similar.

 

Edit: I think I got this mixed up with the other thread using IR with sloping treble. My apologies. 

8 minutes ago, pkane2001 said:

If I upload my IR file, you can use it to listen to my speaker/room combination through your own system and headphones.

 

I am confident enough to say that, under a blind test, your IR will not sound like your room on my system.

 

There was another thread by a manufacturer who uses true stereo IRs and XTC (crosstalk cancellation). They provided a good explanation of the IRs there.

 

Firstly, when I convolve your IR with the original recordings, I am adding additional reverb and errors to the frequency response of the recordings. This reverb may be the true acoustic signature of your room, but the original IR captured room acoustics arriving from 360 degrees. Unless your microphone is pointed in one direction, the recorded IR will be omnidirectional. This can give a likeable sound but never an accurate one. A venue's acoustic signature must be captured at each reflecting angle and reproduced the same way. Only then will it sound close to the original event.

 

This doesn't mean the sound you perceive is different. You will hear your room's sound because its acoustic signature is stored in your memory, and when you pick up the slightest hint of that signature, your brain equates the sound with your room. This is the same as the claims that a stereo setup can make a full orchestra sound as if it were in a concert hall. It is there because your brain associates the reverb with your audio memory and creates the imaginary soundfield. It is perfectly valid until you hear a better system that actually recreates the event.

18 minutes ago, pkane2001 said:

 

Our ears are not omnidirectional, and neither is a microphone. Neither one will capture 360 degrees to reproduce system/room response. Could the captured IR be different when pointing the mic in different directions? Probably. How different? And what if I point the microphone in the same direction as my ear at the listening position? Will this capture something closer to my listening IR? 

 

If you can point me to the thread where this was discussed, I'd love to try to understand what that manufacturer was saying about IR.

 

 

If hearing were not omnidirectional, you would not hear people who talk to you from behind.

If hearing were not omnidirectional, an anechoic chamber would not make a difference to you.

If hearing were not omnidirectional, side-wall and rear-wall treatment would not make a difference to you.

 

There are omnidirectional microphones. The rear channels of your multichannel SACDs are usually recorded with them.

 

https://www.dpamicrophones.com/mic-university/directional-vs-omnidirectional-microphones

 

I can give you my IRs, but they are true stereo, i.e. you need to convolve them with a 4-channel convolution engine such as SIR2 or WaveIR. HQPlayer's convolution engine is most likely only stereo. If you are keen, I will modify them for you.

 

 

1 minute ago, pkane2001 said:

 

I guess I'm missing your point. You first said that IR cannot be accurate because hearing is omnidirectional, and then you point to omnidirectional microphones that can capture IR. So where's the problem?

 

 

The problem is the direction from which the room acoustics reach your ears. They arrive at your ears/microphones from various angles, each with its own direction and frequency response. To reproduce them you must also have speakers at the same angles from which those sounds arrived. This is the reason recording engineers always place the microphone within the critical distance of the direct sound: it avoids capturing the true room acoustics, which would make the direct sound muddy because the “surround” ambience is now confined to the two spots occupied by the front speakers.
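For reference, the critical distance mentioned above, the distance at which direct and reverberant levels are equal, can be estimated with the usual Sabine-based approximation; the room volume, RT60 and source directivity below are illustrative values, not taken from the thread.

```python
# Critical distance estimate: d_c ~ 0.057 * sqrt(Q * V / RT60)
# (metric units: V in cubic metres, RT60 in seconds, result in metres).
import math

def critical_distance_m(room_volume_m3, rt60_s, directivity_q=1.0):
    return 0.057 * math.sqrt(directivity_q * room_volume_m3 / rt60_s)

# e.g. a 60 m^3 listening room with RT60 = 0.4 s and an omnidirectional source:
print(f"{critical_distance_m(60, 0.4):.2f} m")   # roughly 0.7 m
```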

 

Here is a crude analogy: when you record a performance on stage, you are expected to put the speakers where the performers were. Now, if you decide to put your speakers behind you, you are reproducing the frontal soundstage from the rear, and it will sound wrong. But if you were to turn the microphone towards the audience and record them during the applause, that recording would sound correct with the speakers placed behind you.

 

This is one reason the applause during a performance never sounds correct: it comes from the front of the stage.

 

In the case of an IR, the sound comes from everywhere, so it should not be confined to one spot if you expect it to recreate the venue's acoustic signature.

4 minutes ago, pkane2001 said:

 

HQPlayer can apply separate convolution to 4 left and 4 right channels. 

 

True stereo convolution deals with a left and a right IR for each channel. Anything less than that is a compromise.

 

The short explanation is: a sound originating from the left side of the stage is reflected from the left and the right walls too. So the left channel must be convolved into both output channels, and likewise the right.
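A minimal sketch of that “true stereo” (2x2) convolution, with hypothetical IR names; this is what a 4-channel convolution engine such as the SIR2 mentioned above handles internally.

```python
# Each input channel is convolved with an IR for each output channel,
# so left-side sounds also contribute to the right output via ir_lr.
import numpy as np
from scipy.signal import fftconvolve

def true_stereo_convolve(left_in, right_in, ir_ll, ir_lr, ir_rl, ir_rr):
    """ir_xy = impulse response from input channel x to output channel y."""
    left_out = fftconvolve(left_in, ir_ll) + fftconvolve(right_in, ir_rl)
    right_out = fftconvolve(left_in, ir_lr) + fftconvolve(right_in, ir_rr)
    peak = max(np.max(np.abs(left_out)), np.max(np.abs(right_out)))
    return left_out / peak, right_out / peak
```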

6 hours ago, pkane2001 said:

Again, I'm afraid I'm missing your point. HQP can apply the same or different IR files to each of the 8 channels. Depends on your needs. This was in answer to your statement:

 

From the HQPlayer manual (quoted below), it is not even true stereo. But you are also using a matrix to mix. How did you capture the different IRs?

 

Convolution engine requires impulse responses to be mono RIFF (WAV) format files. If some of the channels don't need processing, or are not used, clearing the filename will disable convolution engine for those channels

14 minutes ago, esldude said:

Now there is the idea our brain lets us mostly hear the direct sound of speakers and filter out the room other than low frequency issues below the Schroeder frequency.

 

It doesn't filter it out, although the term "filter" is sometimes used in some papers. Indirect sound is always heard and interpreted. Depending on the arrival time, it gives a sense of space and reverb. As long as the delay after the preceding sound is not too long, the two will be interpreted as one. Otherwise, you will hear echoes.

26 minutes ago, esldude said:

That sounds like filtering to me.  Yes our ear responds to all of it.  But depending upon arrival time our subjective perception does not hear those early reflections or is only mildly heard.  Which is why I said in very large spaces the various reflected sounds are so late they'll be heard as a sense of space.  

 

If what you are saying were true, then the sound in an anechoic room would be the same as the sound in a room with early reflections.
