
Article: Tigerfox Immerse 360 Review


Recommended Posts

7 hours ago, bobfa said:

The 3D reflection is not there when the wall is not up! Virtually any recording works! A good pick is BT's "This Binary Universe".


Wow! This is amazing!

 

Interesting that the isolation of the sound to each ear is done by the pod alone, without any DSP, although I am still not sure how the speaker crosstalk is eliminated or reduced, i.e. how the blue line as described in the diagram can be neutralized by the reflection.

Link to comment
8 hours ago, botrytis said:

The timing reflections are mostly in the midrange/high end as these are 'directional' waves, unlike bass, below 200 Hz, which is omnidirectional. Also, the enclosure enhances the bass already produced by the speakers.

 
I have been reading the TF patents but am still keen on finding out how the cancellation is done by the reflection. The patent diagrams show Lc and c, with c being the crosstalk, but I am still unsure how c is eliminated.
 

Can this be used in tandem with my other XTC to provide better cancellation?

Link to comment


Thank you for the lengthy reply. Much obliged.

 

1 hour ago, ROPolka said:

New technologies are very difficult to describe especially those with so little prior reference to compare them to!


I am well versed in most of the references you quoted in the patents you filed, and that's the reason I was curious to see the workings of the pod.

 

 

1 hour ago, ROPolka said:

One of the ways the Immerse 360 pod "eliminates" crosstalk isn't really by stopping it from happening in the first place. But by the pod greatly overpowering the smaller, weaker time-corrupted crosstalk quantity of sound that reaches the opposite ears of the listener.

 
I suspected that TF is masking the crosstalk. However, I am still unsure how the reflected sound could arrive within the roughly 90 μs ITD (based on the speaker setup in the demo videos) to reach the ear and mask the crosstalk. Is the pod touching the speakers critical for the effect? The only way stereo speakers could produce a frontal azimuth of 180 degrees is when no crosstalk is heard within about 700 μs. Otherwise, the stage is confined within the width of the speaker placement, except with effects such as QSound or simple phase/level tricks.
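To put rough numbers on that 90 μs estimate, here is a small sketch using the simple spherical-head (Woodworth) approximation; the 8.75 cm head radius is a textbook assumption, not a TigerFox figure, so treat the output only as an order-of-magnitude check:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a source at the given azimuth,
    using the simple spherical-head (Woodworth) approximation."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A narrow ~10 degree speaker azimuth lands near 90 us, while the conventional
# +/-30 degree stereo placement is closer to 260 us.
for az in (10, 30):
    print(f"azimuth {az:>2} deg -> ITD ~ {woodworth_itd(az) * 1e6:.0f} us")
```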
 

1 hour ago, ROPolka said:

as is done by normal stereo capture like in Pink Floyd's Time on the Dark Side of the Moon, or by design),


Dark Side of the Moon is QSound processed. It works great in such enclosures. In the late 90s, when QSound was demonstrated, it was within a small circle of space surrounded by curtains, just with a stereo setup. QSound is supposed to produce a more immersive experience than typical stereo but is limited to short-duration effects.

Link to comment
2 hours ago, STC said:

Dark Side of the Moon is QSound processed. It works great in such enclosures. In the late 90s, when QSound was demonstrated, it was within a small circle of space surrounded by curtains, just with a stereo setup. QSound is supposed to produce a more immersive experience than typical stereo but is limited to short-duration effects.


My apologies. Dark Side of the Moon wasn't recorded with QSound; I was referring to Amused to Death. Dark Side of the Moon was experimented with in quadraphonics, which was supposed to give you a 360-degree lateral effect. The 90s demo was with Amused to Death.

Link to comment


 

6 hours ago, ROPolka said:

Do high performance headphones position sounds from two channel audio in their proper locations around the listener?


I am sure the multiple authors you referred to in your patent have already explained the headphone difference. Without the pinna, there will be no externalization and the sound will be confined inside the head. By using a pinna filter, either generic or personalized, you get externalization and the stage becomes life size.
 

6 hours ago, The Computer Audiophile said:

Christina Aguilera’s Stripped in 12 channel Atmos has tracks with her vocals only in the rear channels. Playing the two channel Atmos version, from two front speakers in the Immerse 360, are you saying the vocals will only be heard behind the listener, even though the sound is coming from the front and only two speakers?

One is object-based audio and the other is channel-based. I am skeptical that it could do so, but listeners can be convinced to believe so. I have witnessed this with a so-called room device which did nothing, yet listeners were convinced they were hearing the sound as described by the designer.
 

5 hours ago, The Computer Audiophile said:

I’m also trying to distinguish the differences between Immerse 360 and the Bacch SP. Bacch is all DSP and while it presents an immersive style sound from two speakers, it has nothing to do with accurately reproducing what’s on the recording. It makes an image the designer thinks you want to hear.


This is so wrong and misleading. The objective is to deliver the exact ILD and ITD of each channel without corruption. Occasionally, in the hands of a novice, you get weird positioning, but the problem is the recording itself, and that too can be addressed.
 

5 hours ago, ROPolka said:

As to the importance of physically touching the speakers to the wall, I've found that that's open-ended at this time but actually not needed many times in my speaker testings and listening sessions.

Thanks for confirming this point. I think it is a novel approach to masking the IAC errors, but it should be possible to explain it technically.
 

Once again, thank you for your time. 
 

ST

 

Link to comment
17 minutes ago, The Computer Audiophile said:

 

When I've sat through Bacch SP demos and heard Sonny Rollins playing almost behind me, I concluded it has nothing to do with accurately reproducing the source material. How could it? The music was never meant to sound like that and never released in a format to sound like that. DSP is causing the wrap-around effect. Neat effect, but effect nonetheless. 


I have read another review of the BACCH SP, and when the reviewer described the sound I knew it was not correctly set up. The concept of BACCH is solid, but the implementation requires precise adjustments. The problem is that BACCH is trying to please the 60-degree crowd, and under such an implementation you need to be an audiophile to get it correctly done. I cannot afford BACCH, but technically it is supposed to function like any other XTC, with the added advantage of head tracking. I have read his early development papers, including the IR approach for cancellation. But at a 60-degree speaker position you are going to get a phasey effect if they still want to achieve what it is supposed to do at a 20-degree or narrower speaker position. Was Choueiri there during the demo?

 

Do you mind telling me which Sonny Rollins album and track you listened to?

 

Thank you. 
ST

Link to comment
13 minutes ago, The Computer Audiophile said:

He setup the demo specifically for me and customized it for my ears etc...

 

Can't remember which Rollins album. 

 
You are not the first one!😂😂😂

 

There was another place where his setup didn't work either. A 60-degree solution is not feasible for all, IMO. Not to say it is unworkable, but it requires elaborate setting up.
 

Thanks, Chris.  

Link to comment
56 minutes ago, The Computer Audiophile said:

I should also say that I thought the demo was really cool, but I don't believe it has anything to do with accuracy to what's on the recording. 


I think I remember the Sonny Rollins CD. I think it had something like hard left and right panned sound. Over-cancellation can place the sound right at the ears in the case of hard-panned material. The problem is that during cancellation they didn't (IMO) take into account that most stereo recordings are mastered to provide a 60-degree stage. If they rely on clean cancellation, hard-panned recordings are going to sound weird, with sound coming from the extreme left or right, and some would even say from behind. Correct cancellation should take this weird wrap-around sound into account. I guess they didn't.

 

 

Link to comment
17 minutes ago, ROPolka said:

I've heard headphone manufacturers say the reason is because the speakers in headphones are positioned on each side of the head and "out of view" of these 3 areas around the head. So, if headphone users close their eyes and hear a generic sound, they seem to guess wrong most of the time as to where sounds positioned in these locations are coming from.


Headphones cannot provide an outside-the-head experience because the role of the pinna is eliminated. Without the pinna filtering the frequencies reaching the ears, the brain cannot tell whether the sound is coming from front, back, top, or bottom. The unique shape of the pinna alters the frequency response depending on the direction the sound is coming from. Localization by the pinna is a learned process, as the brain needs to learn the differences in frequency response for sounds coming from different directions. I am not sure how the headphone manufacturers justify their position by invoking visual aids for localization.

 

The SmythRealizer mimics these changes in frequency response to create a real sense of space with headphones. I think Samsung's app now takes a picture of your pinna to create the filter for externalization of sound.
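For anyone curious, the basic idea behind such externalization can be sketched in a few lines: convolve each speaker channel with a measured binaural room impulse response (BRIR) for each ear. This is only an illustrative sketch assuming you already have BRIR arrays; it is not the Smyth or Samsung implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def externalize(stereo, brir_to_left_ear, brir_to_right_ear):
    """Render a stereo signal (N x 2) for headphones using BRIRs.

    brir_to_left_ear / brir_to_right_ear: arrays of shape (M, 2), the measured
    impulse responses from the left and right loudspeaker to that ear.
    Returns an (N + M - 1, 2) headphone signal carrying the pinna and room
    cues captured in the BRIRs.
    """
    left_ear = (fftconvolve(stereo[:, 0], brir_to_left_ear[:, 0])
                + fftconvolve(stereo[:, 1], brir_to_left_ear[:, 1]))
    right_ear = (fftconvolve(stereo[:, 0], brir_to_right_ear[:, 0])
                 + fftconvolve(stereo[:, 1], brir_to_right_ear[:, 1]))
    out = np.stack([left_ear, right_ear], axis=1)
    return out / np.max(np.abs(out))  # simple peak normalization
```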
 

cheers!

ST

Link to comment
40 minutes ago, ROPolka said:

Accuracy of the recording is important! Thank you!

 

Just a note that, even tho immersive sound from conventional two channel audio is the objective, the technologies of the Immerse 360 and Bacch-like systems are very far apart from each other. Almost total opposites in so many ways.

 

One is exclusively digital based and the Immerse 360 is pure acoustic based. I can't imagine the difficulty of designing a digital system to obtain accuracy of a recording as there are so many ways it could go off or wrong.

 

Conversely, with the Immerse 360, the dare-I-say "natural" result of using purified sound itself (audio is not corrupted by the room, speakers or lack of knowledge of the listener) is the automatic revelation of the true original accurate recording itself. 

 

Where "purified audio" here means audio that's been cleaned up to where the traditional huge sound reproduction problems of playing back two channel in a room have been removed or corrected - including removing the room itself to where it is not the massive acoustic "elephant" that it was, to where the speaker's powerfully corrupting crosstalk has been completely corrected, and to where the listener is not in charge anymore of positioning the speakers and the listener in their perfect golden triangle locations.

 

The accuracy of the Immerse 360, therefore - and logically - is accuracy beyond the normal accuracy that's been limited with normal two channel playback in a room. The acoustic result of this is accuracy beyond what was even heard or experienced before in two channel playback.

 

That truth is what I believe the definition of accuracy is, especially for an acoustic designer.

 


How come this was misquoted? I didn’t say that!  
 

Anyway, accuracy according to what? We do not have a reference. Toole described this as the 'circle of confusion'. Your 30-degrees-to-the-left sound could be 40 degrees to the mastering engineer. For someone to claim that a system is producing accurate sound (I am not referring to FR, only to localization), they need to be sure the sound is reproduced with the exact ITD and ILD as heard and captured by the mics. The reproduction should produce exactly the same placement within a reasonable stage.
 

If you ask Miller or Choueiri or Glasgal about the reproduction, they would insist it is correct based on measurement, but if you ask the mastering engineers they would insist that is not what it is supposed to be. Common sense would tell you that Sonny Rollins shouldn't sound 3D at your extreme left, but a setup that is wrongly done would show exactly that, because that is what is mathematically correct. Technically, sound coming from just one channel should be at the extreme left, yet that's wrong. What is accurate here?
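To make the "exact ITD" point concrete, here is a rough sketch of how one could pull an ITD estimate out of a binaural capture at the listening position with a simple cross-correlation peak (illustrative code only, assuming two equal-length mono arrays from in-ear mics):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference from a binaural capture.

    Returns the lag (seconds) at the peak of the full cross-correlation.
    A positive value means the left channel is delayed relative to the right,
    i.e. the source is toward the right.
    """
    n = len(left)
    corr = np.correlate(left, right, mode="full")
    lag_samples = np.argmax(corr) - (n - 1)
    return lag_samples / fs

# Example with a synthetic 0.2 ms (10 sample) lead in the right channel at 48 kHz:
fs = 48000
src = np.random.default_rng(0).standard_normal(fs)
delay = round(0.0002 * fs)
left = np.concatenate([np.zeros(delay), src])   # left ear signal arrives later
right = np.concatenate([src, np.zeros(delay)])
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e6:.0f} us")
```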

Link to comment

The science behind this is rather simple to understand. The pod is basically a small horseshoe auditorium, a room within a room. The sound of such a design has been studied extensively in concert hall design. The effect is a rich sense of envelopment, of sound wrapping around the listener, and a more focused sound.
 

Is it immersive? Is it 3D? That depends on one's understanding of what immersive or 3D audio truly means. You have a heightened feeling of sound coming from all directions due to the reflections. Can the inherent stereo interaural crosstalk be reduced or masked? No.
 

ATMOS, multichannel, Auro-3D and others are just steps forward in delivering something closer to binaural sound to the listener. Even stereo was an early attempt to produce a front stage to match the cinema screen.
 

So what 3D sound is encoded in stereo recordings that needs to be retrieved? None. You only need to ensure that the channels are delivered correctly to the ears, so that the brain is convinced that the sounds in the stereo phantom stage's soundscape are real sounds, as one would hear in a natural event.
 
Link to comment

Any system that is supposed to minimize the crosstalk will produce better separation of the instruments and better clarity. It will help you distinguish sounds that generally go unnoticed in conventional stereo playback. The difference is obvious (provided it is done correctly), and the overall experience is more natural.

 

This can be easily demonstrated by using your standard stereo.

 

For example:

 

1) Listen to Sonny Rollins' "Solitude" on Way Out West. With a crosstalk-cancelled system you will notice the clinging sound isolated and floating separately. Previously, this sound was not even noticed in typical stereo despite being there. But once you have heard it in the crosstalk-cancelled system, you will now also notice it in stereo.

 

2) Peiju Lien - Whisper (MA Recording). You will hear birds chirping (very faint and not part of the intended recording), but that sound is so buried that it is not easily detectable, although audible, in typical stereo playback. When I thought it was a bad recording, I was told I was hallucinating, but it's there and audible; without crosstalk cancellation or reduction it would not be distinct and would not stand out among the other sounds.

 

These differences are the result of the 3D reproduction of the stereo playback. This can happen if the TF360 is doing some sort of masking of the inherent IAC.

 

Whether the sound of the TF360 is similar to ATMOS playback can easily be tested with $100 binaural mics. The difference would be obvious. But I understand why one would hear them as similar to ATMOS: the brain is good at recreating the sound scene in the head based on what it has previously heard. A simple pair of binaural mics would prove otherwise.

Link to comment
6 hours ago, ROPolka said:

Just a note about our human ability to remember sound. 

 

That's an interesting thought.

 

I have not had the same experience of people being able to remember the sound even for a very short time.


 

3 hours ago, ROPolka said:

The average person doesn't need special binaural mics or complex measuring devices to reliably determine whether a soundtrack's sounds are heard in the same physical locations around the listener in the TigerFox Pod as with an ATMOS playback of the same soundtrack.


You just contradicted yourself. 
 

Echoic memory lasts just a few seconds, but the sound scene is reconstructed based on prior knowledge. Just place your phone on the opposite side from where you usually put it, and you will notice that when the phone rings you naturally hear it as if it is coming from the side where you usually keep it. Once you realize it's not there, the localization cues are used to find the phone.

Link to comment
17 hours ago, ROPolka said:

Positioning observation: After listening to more than 50 such dual recordings (ATMOS & stereo versions of the same soundtrack) streamed with Tidal using various playback devices and in the TigerFox Pod with an assortment of speakers, I have not personally heard individual sounds positioned in noticeably different locations between the ATMOS and the original stereo version of the same soundtracks.

 

Does this help answer your question?

 

Thank you for the detailed description.

 

I couldn't reply to this much earlier, as I couldn't play the Dolby version to try to understand what Bob was describing. After listening to both versions of "Time", the difference in detail is telling, but with AirPods spatial audio I didn't hear anything out of the ordinary, perhaps just a little sense of height.

 

For music, most of the direct sound is a frontal event. Multichannel or ATMOS could be adding reverbs for the feeling of envelopment and occasionally some sound just to trigger the surround effect of sound around you. So for music, you are right to say there's not much individual sound outside the normal soundstage of stereo. Now what Bob described makes sense. Thanks again for clarifying the point.

Link to comment
2 hours ago, ROPolka said:

However, it's important to understand that:

 

(1.) Where there is no prior "learning", echoic "memory" does not come into play. That is, if the listener never heard a sound's location before, there is no echoic "memory".

 

How this relates to the TigerFox sound positioning accuracy statement in italics above is, even where the listener never heard a soundtrack before (and therefore never "learned" a sound's particular location), a sound's spatial location around the listener is immediately localized by the TigerFox Pod to the point where the listener can clearly point to its exact physical location around them. Not only its 360-degree location, but the listener can also hear and relate its location around them as a factor of depth or distance, height and movement, as that sound was positioned in the original immersive stereo content. 


[Before I continue to respond, let me make my position clear. My interest in the TFP (TigerFox Pod) is the claim that the crosstalk is masked. I am interested in finding out how it is done because it could help my own setup, where I use crosstalk cancellation. How interaural crosstalk cancellation is achieved is well documented, with several AES papers published. The question now is whether crosstalk masking is possible.]
 

The echoic memory I referred to was in relation to localization, as the duration is short. Auditory sensory memory is said to last somewhere from 10 to 20 seconds. Curiously, sounds that we have become familiar with through experience can be recalled even after several years. Since you are now sharing observations of listeners who had not previously heard the particular tracks, I suggest we stick to how humans localize sound and how stereo creates phantom images that appear to come from a particular direction. These subjects have been well researched and are well understood.

 

Now let’s consider what stereo can do without TFP.


1) The stage width will be within the spread of the speakers.

 

2) Depth depends on the variance of reverbs, where we recreate distance based on prior knowledge, estimating it from the changes in frequencies and reverbs.

 

3) Although there is no height information in stereo, it is possible that changes in frequencies can give you a sense of height, as in how the LEDR test produces height information. I'll leave it to the reader to decide whether the height is above the head as in ATMOS or limited to frontal elevation along the speakers.

 

4) Stereo with effects can:

 a) produce sound that appears to come even from the rear; QSound recordings are a good example;

 b) but the information of sound coming from the extreme side or rear is often only there for short durations.
 

This is what ordinary stereo without the TFP can do in a well-treated room.

 

From the measurements you posted earlier, the TFP indeed focuses the sound and increases the level. This is well documented in concert hall research on horseshoe-shaped halls. I fully agree with and believe this claim, and it can indeed sound better than typical listening without the pod.

 

The only way you can demonstrate 360-degree sound with the pod is to show how the masking changes the position of the sound. This involves the crosstalk arriving within about 90 μs (an estimate based on the speaker position shown in one of the TFP videos) being masked, and a calculation could clearly establish the arrival time of the reflections.

 

Looking at the size of the pod, all reflections are likely to arrive within 10 ms. There could be reflections happening almost immediately with the direct sound, but these reflections would all need to arrive within 650 μs to mask any crosstalk that is going to reach the opposite ear for sound localization purposes. As I said earlier, the pod's dimensions indicate reflections arriving over about 10 ms, and that is within the Haas effect window, which would have no effect on localization. BUT can correlated, prolonged reverberation of around 10 ms in certain frequency bands somehow affect localization to the extent that the interaural crosstalk is masked? That's what I am interested in finding out, to see if the claims about the TFP can be verified.
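The arithmetic behind those 650 μs and 10 ms figures is just path difference divided by the speed of sound. A small sketch with placeholder distances (not TigerFox's actual geometry) shows how the extra delay grows with extra path length:

```python
C = 343.0  # speed of sound in air, m/s

def extra_delay_us(direct_path_m, reflected_path_m, c=C):
    """Delay of a reflection relative to the direct sound, in microseconds."""
    return (reflected_path_m - direct_path_m) / c * 1e6

direct = 0.9  # assumed speaker-to-listener distance, m (placeholder value)
for reflected in (0.95, 1.12, 1.5, 2.5, 4.3):
    print(f"reflected path {reflected:.2f} m -> arrives "
          f"{extra_delay_us(direct, reflected):.0f} us after the direct sound")
# About 0.22 m of extra path already adds ~650 us; ~3.4 m of extra path adds ~10 ms.
```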

 

I cannot think of a proper method to demonstrate rear localization. Even with binaural recordings you won't perceive the externalization with a BRIR. But the following video should be a good demo to show the difference with and without the TFP using a binaural recording. It is not going to produce the full effect, but obviously there will be differences in placement.

[embedded video]
Link to comment
2 hours ago, ROPolka said:

Your question on how crosstalk is cancelled by reflection

Stereo speaker crosstalk is completely cancelled by the Immerse 360 by the capture, preservation and the mathematical control of massive quantities of indirect sound that otherwise would be lost, damaged or damaging sound and sound information in any room.

 

It works by precisely time-aligning this huge quantity of normally "excess" throw away sound by carefully orchestrating it from the instant it exits the speakers and continuing to force control it in a coordinated way all the way to the listener's location.

 

This was generally explained in a prior post and much more completely in one or more of our issued utility patents.

 

You mentioned you have read portions of one of TigerFox's patents referring to a crosstalk illustration there. To more fully understand what is going on with the product and crosstalk, please read all of that particular patent's content, especially the parts that refer to the crosstalk illustration you saw. It's quite lengthy (more lengthy than what belongs here) but it should completely answer your inquiry.

 

Removing the corruption of crosstalk, and how this is functionally accomplished, is one of the parts of our patent's intellectual property that was completely new to the world. We organized and shared that new information in order to receive patent protection. (As you know, US and foreign utility patents are given only for revealing previously unknown, substantially novel and functionally important information).

 

Cancelling crosstalk is only one of the sound reproduction problems the Immerse 360 corrects in a synergistic way. While you're there, the patents get into many more.

 

Does the Immerse 360 work with other products to provide better cancellation?

About your question on this, because the Immerse 360 cancels crosstalk on its own in a very low cost way. Because it works reliably in most any size, shape and sound quality of room, including working in virtually any location in the room and while facing in any direction.

 

And because its results are latency free and do not interfere with or intrude upon the original sound signals. Because of these operational results (while it operates in an energy-efficient, sustainable way), there's no need to further correct crosstalk, especially by using other add-on methods or products that work by intruding into the sound signals or by cancelling one or more parts of the original audio signal.

 

By keeping how the Immerse 360 works as simple and intrusion-free as possible (as audiophiles know) it is then more possible to allow the electronics and the quality of the original music to unfold and bloom, to be heard and enjoyed in a more pure way - which provides the basis for getting the best sound out of one's content.

 

One thing to keep in mind, tho. This new technology is nothing like something experienced before. It needs to be experienced because it does a number of things for the sound never done before. And in new ways never experienced synergistically before.

 

I hope this is helpful.  I plan on getting into measurements in the next few days.

 

My best, Rick

 

PS, here's a comparison illustration that hopefully helps to graphically explain how sound looks after it leaves the speakers without controlling it, vs. it being captured, controlled and orchestrated by the Immerse 360 when it leaves the speakers and time-aligning it to converge with synchronization at the listener's location:

 

[comparison illustration]


I, or rather "we", have gone through the 107-page patent and nowhere did I/we find how the cancellation is done. You also referred to Glasgal and the Ambiophonics papers; I too rely on them. Miller's DSP was the one I started with for crosstalk cancellation before moving on to other DSP. You also referred to Gardner, besides Miller and Farina. Apart from seeing their papers mentioned in the footnotes, you have not shown which parts of their papers support your patent.
 

I have estimated the duration of the reflections. That duration is only correct if the sound waves somehow magically stop at the listener and do not travel to the other side of the wall and get reflected again. You have not engaged with that point. You have not answered any of the questions I asked. You just repeated the same things as in the patent, which also says nothing about the question I asked.
 

The only way crosstalk can be cancelled by a physical plane is by placing it in the middle, between the speakers.

 

 

Link to comment

[patent diagram photos]
 

 

Looking at the patent diagrams, the speakers are described as being placed 36 inches apart. Mathematically, the sound-wave path C will be longer than L. The precise timing of the sound reaching the ears can be calculated, and the same goes for the reflections.

 

To cancel the crosstalk, the reflection would have to arrive at the same time as C, but none of the reflections can be the same length/distance as C, and therefore cancellation is not possible. Even masking is not possible, as none of the reflections can arrive at the same time as C, since their paths are longer.
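Here is a rough sketch of that geometry. The 36-inch spacing is from the patent diagram; the listener distance and ear spacing are placeholder assumptions, used only to show that the crosstalk path C is longer than the direct path L and that any wall reflection is longer still:

```python
import math

C_SOUND = 343.0          # speed of sound, m/s
half_span = 0.914 / 2    # half of the 36 in (0.914 m) speaker spacing
listen_dist = 1.0        # assumed listener distance from the speaker plane, m
half_ears = 0.15 / 2     # assumed half interaural spacing, m

# Left speaker to the listener's left ear (direct) and right ear (crosstalk).
L = math.hypot(listen_dist, half_span - half_ears)
C = math.hypot(listen_dist, half_span + half_ears)

print(f"direct path L    = {L:.3f} m")
print(f"crosstalk path C = {C:.3f} m "
      f"({(C - L) / C_SOUND * 1e6:.0f} us after the direct sound)")
# A reflection must travel from the speaker out to a wall and back to the ear,
# so its path is necessarily longer than C and it arrives later still.
```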

 

So far this has not been explained anywhere.

Link to comment
5 hours ago, ROPolka said:

Sorry you were confused by this one part of this one illustration in one of our patents. I sense your frustration.

 

In patents, it is helpful to keep in mind that the illustrations are supported by the content. Because the content is more important, it needs to be carefully looked at in its entirety and included in a discussion of the illustration.

 

For that, so various parts of the illustration are not misunderstood or misinterpreted, I need to defer to the patent content describing this illustration, which I mentioned is lengthy (too lengthy for this forum) and goes through the entire patent.

 

Let's continue this part of the conversation therefore off-line if, after reading the content, you would like to discuss this one part of the physics further.

 

This one illustration, by the way, is only one of many different illustrations and embodiments in our patents that, as a whole, describe what's going on with the system. As you'll see there, there are many ways to explain how and why it works.

 

In general, however, here's some boiled-down relevant information that may help.

 

Of importance is that the reflections don't have to be perfectly the same exact length in order for the system as a whole to work in a human functionally-perfect way.

 

Flexibility and forgiveness are importantly built into the design of the Immerse 360 acoustic system!

 

If all of the reflections, for example, were required to be exactly the same physical length for the system to work, the sweet spot would be smaller and the system would be more restrictive.

 

Other shape-oriented factors as well come into play in making the physical structure work smoothly, efficiently and practically.

 

It might help to also keep in mind that this isn't theory here anymore. The system works!

And it works well with enough with built in versatile forgiveness to work immediately out of the box, including with a simple 3-minute tool-free, electronics-free, and wire-free setup, along with being adaptive to different types, shapes and sizes of speakers and rooms, and it being able to compatibility work with different electronics along with a multiplicity of different content from high-performance music playback, to 360-degree video games and full theater surround sound movies.

 

There are other functional and difficult to get one's head around important things going on here as well that need to be included in an objective discussion of functional integrity. Like my prior mention of the golden spiral and golden ratio that directly relates to the physical design of the Immerse 360's structure (see general Googled short videos explaining this amazing physical phenomenon).

 

Another difficult to get one's head around thing going on here too that's related to the functional design of the Immerse 360 is the musical instrument soundboard. Why and how it works. And how it relates to the Pod and a Stradivarius violin (which I will touch on in another post). 

 

I'm looking forward to it!

 

My best

 

Rick


Hi Rick,

 

I am not disputing the fact that the pod is capable of being immersive, since the sound is reflected back and focused towards the listener. You can get such a feeling when you play music in an empty cargo container or a tiny bathroom. The patent could have included a spectrum graph and the absorption coefficient values of the pod to explain that point.
 

The main point is that the patent was for a "Portable Sound System", NOT for immersion or crosstalk cancellation. So maybe I was confused into thinking this does some sort of crosstalk cancellation based on the claim.
 

Thank you for your time. Not many designers or manufacturers would do so.

 



Cheers!

Link to comment
5 hours ago, The Computer Audiophile said:

What interests me most about seeing the TigerFox measurements is to see if something is way out of the ordinary.


The measurements are there, but they only prove the focusing effect. Even placing umbrellas or unused satellite dishes around the listener would show an increase in dB.
 

What the measurements need to show is whether it can improve the spatial imaging. That can easily be done by taking the level difference between the two ears. Accurate measurement is difficult, but simple Sound Professional mics can prove the point. If the level difference is greater with the pod, then it will be more dimensional. Could it achieve 10 dB or more to be effective? I doubt it, but who knows?
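For anyone who wants to try it, here is a simple sketch of the broadband interaural level difference from a binaural capture; the array names are placeholders for recordings made with the same in-ear mics and test signal, once with the pod and once without:

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def level_difference_db(left, right):
    """Broadband interaural level difference of a binaural capture, in dB."""
    return 20 * np.log10(rms(left) / rms(right))

# with_pod and without_pod would be (N, 2) arrays loaded from your own
# recordings (hypothetical names, shown here for illustration only):
# ild_pod   = level_difference_db(with_pod[:, 0], with_pod[:, 1])
# ild_plain = level_difference_db(without_pod[:, 0], without_pod[:, 1])
# print(f"ILD with pod: {ild_pod:.1f} dB, without pod: {ild_plain:.1f} dB")
```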

Link to comment
7 minutes ago, The Computer Audiophile said:

I'd like to see some standard measurements with and without the pod. 


There are many measurement graphs in the patent, all confirming the increased reflection. From the graphs, you can get a general idea of the absorption coefficient of the material at a given frequency. Changing the materials can alter the response. They all confirm other research on the concert hall design of horseshoe architecture.
 

[patent measurement graphs]
 

 

These measurements were taken by placing the microphone in the centre. However, with ears it gets complicated, as there are two receiving points.

Link to comment
13 minutes ago, The Computer Audiophile said:

That's certainly some interesting data. In a way it's like an EQ. The measurement without the pod looks much flatter. 


I am not sure why the response drops after 12.5 kHz. Looking at Stereophile's measurements, the difference is quite large, and even more so the loss in HF in a non-anechoic room.
 


Link to comment
41 minutes ago, The Computer Audiophile said:

 

If we set up the pod, take a measurement, then remove the walls of the pod and take a measurement without changing anything else, we will have some data from which to work. If one's room is sufficiently large or non-lively, it could be very relevant. 

 

If the pod measurement looks like this (below), that certainly tells us something.

 

[example comb filter measurement image]


IMO, it will not look like that. The reflections are fairly even because the area is small. The reflections would start within 1 or 2 ms and hit the listener evenly and continuously for more than 10 ms, and that is only the first reflections; the reflections will continue for much longer. I don't recall seeing an IR chart in the patent, so we do not know the RT, which will have a bigger effect on the listener.
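For reference, the comb shape in that example plot comes from the direct sound plus a single delayed copy of itself. A small sketch of that magnitude response, with an illustrative 1.5 ms delay and 0.6 gain:

```python
import numpy as np

tau = 0.0015   # assumed reflection delay, s (1.5 ms)
gain = 0.6     # assumed reflection level relative to the direct sound

freqs = np.linspace(100, 5000, 50)
mag_db = 20 * np.log10(np.abs(1 + gain * np.exp(-2j * np.pi * freqs * tau)))

# Notches fall at odd multiples of 1/(2*tau): ~333 Hz, 1 kHz, 1.67 kHz, ...
for f, m in zip(freqs[::7], mag_db[::7]):
    print(f"{f:6.0f} Hz : {m:6.1f} dB")
```

A dense, evenly spread set of reflections, as described above, would smear those notches out rather than produce one clean comb.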

Link to comment
