
Temporal Confusion


monteverdi


I was reading about Meridian’s MQA, and one claim was that it is based on the newest psychoacoustic research, but I have not seen any references to what that research is. I dislike claims that something is based on research without any evidence for it; as a scientist I believe any statement should be verifiable.

Not knowing what Meridian refers to, I did some web searching. Almost everything I found is not very new research!

1. Frequencies: humans can hear from a lower limit of 16-24 Hz to an upper limit of 12-24 kHz, depending on age, gender, and hearing damage. Audibility also depends on level; see the Fletcher-Munson curves. Bone conduction is often cited for extended frequency response, but I think it is irrelevant for music, as it is insensitive to pitch and pace at higher frequencies.

2. Dynamic range: there is a lower limit, which is again frequency dependent (see Fletcher-Munson). The upper limit is time dependent: long-term exposure can cause hearing damage already at 90 dB, while the short-term upper comfort limit is about 120 dB. It is not clear what "short term" means here: milliseconds or seconds?

3. Time resolution: about 10 µs (which corresponds to about 3.4 mm of sound travel in air). This is mostly important for the localization of sound.

4. Detectability of differences (such as distortion) is frequency and level dependent. I did not find much about this except https://www.meridian-audio.com/meridian-uploads/ara/coding2.pdf on page 25.

There are a lot of other interesting effects like masking etc.

 

So what would that mean for the digital reproduction of audio?

From 1 and 2: a 48 kHz sampling rate and 20 bits should be sufficient. Imagine a single waveform covering that frequency and dynamic range from a single source (like a microphone). A time shift of that waveform by 10 µs should not be audible by itself, but referenced against another waveform, such as the other stereo channel or another microphone, that time shift becomes audible according to 3.

So my confusion: do you need a sampling rate higher than 48/20 to get that time resolution, or is 48/20 sufficient as long as encoding and decoding preserve that time resolution? What about digital filters and pre-ringing?

I see a bigger problem in preserving the 10 µs time resolution in recording and in loudspeakers. Different microphones and cables already have different LCR values, which cause phase shifts and therefore time shifts >10 µs. Maybe that is why I like many simple stereo recordings made with only two identical microphones and cables. I wonder what can be achieved in terms of time resolution with modern multi-micing, digital mixing, and corrections. One could digitally correct for time errors, but with what precision is that done? Loudspeakers with multiple drivers and a crossover have significant time problems; alignment to better than a millisecond is difficult, if they are time-aligned at all.


I'll give you answers that probably most here will disagree violently with.

 

Time resolution of 48 kHz/20-bit audio is a few orders of magnitude better than the required microsecond level. So that is a non-issue. This can be, and has been, demonstrated.
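That demonstration is easy to reproduce numerically. Below is a minimal sketch (the 1 kHz tone and 100 ms duration are my own illustrative choices): a copy of the tone delayed by 10 µs, sampled at 48 kHz, differs from the original by far more than the quantization step of a 20-bit converter.

```python
import numpy as np

fs = 48000                       # sample rate under discussion
t = np.arange(4800) / fs         # 100 ms of samples
delay = 10e-6                    # the 10 microsecond timing threshold

a = np.sin(2 * np.pi * 1000 * t)
b = np.sin(2 * np.pi * 1000 * (t - delay))

max_diff = np.max(np.abs(a - b))   # largest per-sample change caused by the shift
lsb20 = 2.0 / 2 ** 20              # quantization step of a full-scale 20-bit signal
print(max_diff / lsb20)            # tens of thousands of quantization steps
```

The shift changes individual sample values by roughly 2π × 1000 × 10e-6 ≈ 0.06 of full scale, about four orders of magnitude above the 20-bit quantization floor, so the timing information is encoded with plenty of margin.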

 

Digital filters ring at a frequency you can't hear. So your eardrum doesn't respond, meaning the rest of your auditory nerve and the processing in the brain can't respond to them either. (Other effects of filters may be audible, just not the ringing.)
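One way to check this claim (a sketch with an assumed 22 kHz cutoff, not any particular DAC's filter) is to build a truncated linear-phase lowpass and measure what frequency its ringing actually oscillates at:

```python
import numpy as np

fs = 48000
cutoff = 22000.0                             # assumed cutoff just above audibility
n = np.arange(-511, 512)
h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n)  # ideal lowpass impulse response
h *= np.hamming(h.size)                      # truncated, windowed linear-phase FIR

tail = h[700:]                               # the ringing, well past the main lobe
crossings = np.sum(np.diff(np.sign(tail)) != 0)
ring_freq = crossings * fs / (2 * tail.size)  # oscillation frequency of the ringing
print(round(ring_freq))                       # lands at the cutoff, ~22 kHz
```

The pre- and post-ringing of such a filter oscillates at the cutoff frequency itself, which sits at or above the edge of hearing, not somewhere down in the audible band.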

 

You are correct that the problems you describe with speakers, and probably microphones, are larger than the microsecond level. Though if both channels are altered equally, we might still notice a smaller interchannel difference than the level of gross error in those other parts of the signal chain.

 

I would also note that the best time resolution between the ears doesn't occur at high frequencies; it occurs around 800 Hz. Your perception of imaging at higher frequencies is mostly or wholly due to level differences, not timing. What data exists also shows that hearing is not very sensitive to phase differences at the higher frequencies it can hear.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

I was reading about Meridian’s MQA, and one claim was that it is based on the newest psychoacoustic research, but I have not seen any references to what that research is. I dislike claims that something is based on research without any evidence for it; as a scientist I believe any statement should be verifiable.

Not knowing what Meridian refers to, I did some web searching.

 

Well, have you tried searching for 'Robert Stuart Psychoacoustics'?

 

Almost everything I found is not very new research!

 

Does it matter if the research isn't new? What if the research is old but still not applied?

 

3. Time resolution: about 10 µs (which corresponds to about 3.4 mm of sound travel in air). This is mostly important for the localization of sound.

 

Now yes (and I've stripped the rest above), this is very, very important, and one of the characteristics of MQA is dealing with the time-domain content.

 

So what would that mean for the digital reproduction of audio?

From 1 and 2: 48 kHz sampling rate and 20 bit should be sufficient.

 

Stuart says you won't be able to do proper reconstruction of the audio at anything below 192 kHz.

 

I see a bigger problem in preserving the 10 µs time resolution in recording and in loudspeakers. Different microphones and cables already have different LCR values, which cause phase shifts and therefore time shifts >10 µs. Maybe that is why I like many simple stereo recordings made with only two identical microphones and cables. I wonder what can be achieved in terms of time resolution with modern multi-micing, digital mixing, and corrections. One could digitally correct for time errors, but with what precision is that done? Loudspeakers with multiple drivers and a crossover have significant time problems; alignment to better than a millisecond is difficult, if they are time-aligned at all.

 

Stuart views the issue from the perspective of the whole chain, but I'm not sure he takes the microphone and the speakers into account. He does mention the ADC, digital or analogue processing, and the DAC, so if the equipment is known, e.g. from the record liner notes or from the label or studio, then inverse processing can be done to remove the effects of that equipment (and it is said to use filters by Meridian).

 

For example, if a particular ADC was used and is known to have a certain response, then MQA can undo that response and substitute another, more transparent one.

 

In addition, MQA will also encode differently based on how human auditory perception responds to different types of sound: natural sounds, animal sounds, or speech.

 

So, overall, the time-domain information is crucial to MQA's quality, but there are also some clever folding/compression techniques that work within the frequency-domain coding, and the backward compatibility (fall-back only) with existing equipment is an excellent idea as well.

 

This is a very well executed project. Hoping to hear it soon and compare it to DSD128.

Dedicated Line DSD/DXD | Audirvana+ | iFi iDSD Nano | SET Tube Amp | Totem Mites

Surround: VLC | M-Audio FastTrack Pro | Mac Opt | Panasonic SA-HE100 | Logitech Z623

DIY: SET Tube Amp | Low-Noise Linear Regulated Power Supply | USB, Power, Speaker Cables | Speaker Stands | Acoustic Panels

Well, have you tried searching for 'Robert Stuart Psychoacoustics'?

 

Yes, there is a 2011 paper from him, but the most recent citation in it is from 1995. I am not saying that older research is irrelevant, quite the contrary, but the claim that MQA is based on recent psychoacoustic research is not supported! I would like to see a link or reference for it, but I do not believe mere marketing announcements!

 

I am not saying that MQA is or is not a significant advancement in music recording and distribution. I have not heard it, nor do I have enough information about it. I like the idea of smaller file sizes (especially for streaming) and better sound/music reproduction achieved by understanding what we can actually hear, rather than by focusing on technical parameters that are undetectable to us. I am only averse to unsubstantiated marketing claims and the uncritical repetition of them.


 

I believe beanbag has a copy of the AES papers from late last year related to MQA. So perhaps he could divulge from the references in the paper what recent research is involved.

The Nyquist limit survives the Fourier transform from the frequency to time domain intact.

 

Come on Bill.......you can't speak to audiophiles that way. Surely you know this?

 

Speaking of temporal confusion... has anyone seen the movie Predestination? Now that is temporal confusion if ever there was.

 

If you could go back in time to just one event in audiophile history to alter the trajectory of high end audio what would it be?


If you could go back in time to just one event in audiophile history to alter the trajectory of high end audio what would it be?

 

Oh Dennis, if you don't start a new thread with that title and question then I'll have to! That's a good one.

 

Of course part of me wants to say that I'd get the Philips/Sony people to hold out for 24/96 as the standard for CD. But then again, maybe holding CES in some city other than Las Vegas would have been almost as significant. ;)

Convincing Saul Marantz not to sell out to Superscope and the Tishenckie (sp?) brothers might be next on my list. Or convincing Tomlinson Holman to stick it out with his own company (Apt Holman) instead of inventing THX (now he works for Apple). There are 100s of other forks in the road not taken. That really could be a great new thread.

--Alex


 

Okay, ask and you shall receive. I'll start a new thread on that very topic. Like all of your suggestions by the way.

 

 

http://www.computeraudiophile.com/f8-general-forum/if-you-could-go-back-time-23240/#post391197

The Nyquist limit survives the Fourier transform from the frequency to time domain intact.

 

Yes, but that's not the issue. If you want good time resolution in the frequency domain, you need to use a short transform, but then you lose frequency resolution. And if you want good frequency resolution (a steep filter), you lose time resolution. You fundamentally cannot have both good frequency and good time resolution simultaneously with a Fourier transform.
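The tradeoff is easy to demonstrate (window lengths and tone spacing below are my own illustrative choices): two tones 100 Hz apart merge into one peak through a short analysis window and separate through a long one, while the long window correspondingly smears events over a much wider time span.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1100 * t)  # tones 100 Hz apart

def count_peaks(N):
    """Count significant spectral peaks seen through an N-sample analysis window."""
    X = np.abs(np.fft.rfft(x[:N] * np.hanning(N)))
    thr = 0.5 * X.max()
    return sum(1 for i in range(1, len(X) - 1)
               if X[i] > thr and X[i] > X[i - 1] and X[i] >= X[i + 1])

# 256 samples: 5.3 ms of time resolution, but only ~188 Hz frequency resolution
# 4096 samples: ~12 Hz frequency resolution, but 85 ms of time smearing
print(count_peaks(256), count_peaks(4096))
```

The short window cannot separate the two tones at all; the long window separates them easily but averages over 85 ms of signal, which is Miska's point about not getting both resolutions at once from one fixed window.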

 

The result of a recent study was that hearing exceeds this time-frequency resolution limit of the Fourier transform. The reason is that hearing effectively uses a wavelet transform instead. See

Morlet wavelet - Wikipedia, the free encyclopedia

But there is much more to study about the exact types and properties of the wavelet banks of hearing, since the wavelet doesn't seem to follow the same function for all frequency bands. One of the main properties is that the hearing filter bank is logarithmic, while a Fourier-transform filter bank is linear.
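The key property can be sketched in a few lines (the Q value is an arbitrary illustration, not a model of the ear): in a wavelet bank, each filter's time spread shrinks in proportion to its center frequency, so high-frequency events are pinned down more precisely in time.

```python
import numpy as np

def morlet_sigma_t(fc, q=8.0):
    """Time spread (standard deviation, seconds) of a Morlet wavelet at center
    frequency fc, for a fixed quality factor q (constant-Q filter bank)."""
    return q / (2 * np.pi * fc)

for fc in (100, 1000, 10000):                # log-spaced bands, as in hearing models
    print(fc, morlet_sigma_t(fc) * 1e3)      # time spread in ms: 12.7, 1.27, 0.127
```

A fixed-window Fourier transform gives every frequency the same time resolution; a constant-Q wavelet bank instead keeps the product of time spread and center frequency constant, which is closer to how the cochlear filter bank behaves.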

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Yes, there is a 2011 paper from him, but the most recent citation in it is from 1995. I am not saying that older research is irrelevant, quite the contrary, but the claim that MQA is based on recent psychoacoustic research is not supported! I would like to see a link or reference for it, but I do not believe mere marketing announcements!

 

It could be interesting to know about more recent supporting research. Couldn't you email Bob Stuart?


Very interesting, Miska.

 

I wonder if the log nature is used by MQA for proper perceptual encoding.

 



 

According to the patent the answer is no.


Thanks esldude. I believe I have downloaded the patent file but haven't gotten around to reading it.

 

I was reading about the MQA format again, and supposedly Stuart says that at least 192 kHz is necessary (with 24 bits?).

 

Now, one could technically license the technology so as to make a computer-based MQA-enabled software decoder.

 

So, does that mean that the decoder will definitely need at least a 192 kHz/24-bit DAC for the end user to have MQA playback?

 

In other words, even with that software decoder, if your DAC is limited to, say, 96 kHz/32 bits, you wouldn't be able to say that you have MQA playback, correct?


 

I don't know for sure. At a minimum, a decoding DAC will need to be 24/96.

 

One example in the patent shows why, and they use a 24/96 explanation.

 

MQA first splits the band into below and above 20 kHz, using 48 kHz sampling for each half. One sometimes-forgotten implication of Shannon-Nyquist is that you can place your bandwidth where you want it. For instance, you could use a 48 kHz sample rate to record a 20 kHz-wide band between 140 kHz and 160 kHz. So MQA uses 48 kHz below 20 kHz and another 48 kHz from 20 kHz to 40 kHz.
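That bandpass-sampling idea can be checked directly (the 145 kHz tone is my own example inside the 140-160 kHz band mentioned above): sampled at 48 kHz, it produces exactly the same samples as its 1 kHz baseband alias.

```python
import numpy as np

fs = 48000
n = np.arange(1000)
f_hi = 145000.0                  # a tone inside the 140-160 kHz band
f_alias = f_hi - 3 * fs          # folds down by three full sample rates -> 1 kHz

x = np.sin(2 * np.pi * f_hi * n / fs)     # sampling the 145 kHz tone at 48 kHz
y = np.sin(2 * np.pi * f_alias * n / fs)  # a plain 1 kHz tone
print(np.max(np.abs(x - y)))              # essentially zero: identical samples
```

So a 48 kHz stream can carry any band up to 24 kHz wide, as long as encoder and decoder agree on which band it represents; that is what lets the 20-40 kHz content travel in a second 48 kHz channel.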

 

For the 20-40 kHz band, MQA uses lossless compression up to about 30 kHz and lossy compression above 30 kHz. Unlike the additive dither we normally see in audio, it uses subtractive dither and noise shaping. It keeps only the difference signal between the original 20-40 kHz band and what is in the 20-30 kHz band, and it encodes the information needed to perform the subtractive dither into that lossy code in a way that can be retrieved on decoding. So MQA is only lossless to 30 kHz; above that it is heavily compressed, though of good resolution thanks to noise shaping and subtractive dither. This lets it encode a potentially wide bandwidth in only 3 bits; the dither gives it enough resolution to reconstitute a good facsimile of the original full-bandwidth signal.
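Here is a minimal sketch of subtractive dither (the quantizer step and test signal are illustrative, not MQA's actual parameters): the decoder subtracts the very same dither sequence the encoder added, leaving a small error that is independent of the signal.

```python
import numpy as np

rng = np.random.default_rng(0)    # both ends must generate the same dither sequence
step = 1 / 2 ** 3                 # a coarse, 3-bit-scale quantizer step
x = 0.3 * np.sin(2 * np.pi * np.arange(2000) / 100)

d = rng.uniform(-step / 2, step / 2, x.size)   # shared dither sequence
q = np.round((x + d) / step) * step            # encoder: add dither, then quantize
y = q - d                                      # decoder: subtract the same dither

err = y - x
print(err.std(), step / np.sqrt(12))  # error matches plain quantization noise
print(np.corrcoef(err, x)[0, 1])      # and is uncorrelated with the signal
```

With ordinary additive dither the total error would be larger; subtractive dither recovers the full resolution of the quantizer while keeping the error statistically independent of the audio, which is what lets a few bits carry a usable facsimile of the band.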

 

Stuart has been clear that, in his opinion, we don't actually hear the ultrasonics; he thinks we hear the filtering. Lossless to 30 kHz and lossy above that, up to much higher frequencies, allows gentle, presumably inaudible filtering with better time-domain performance. Which is why he keeps calling MQA audibly lossless: nothing we could hear is lossy, and the rest is just a fancy way to get the filter performance of high-rate audio.

 

Now, whether we will see some MQA equipment good to only 96 kHz or whether he will require 192 kHz is something I don't know. My guess is you will need 192. Of course this can be (and Meridian has hinted it will be) available in software, so anybody with a 192 kHz DAC and the right software could play back MQA files. Whether they will do the same for 96 kHz, I don't know. They have said MQA could be used with any original file up to 384 kHz, so it sounds like they would do their MQA processing and let you play the 384 kHz file back at 192, I think.


 

I still think this sounds like a "best of both worlds" scenario. It seems to me we currently waste an *awful* lot of bandwidth encoding very little content (above 20 kHz or, as you say, above 30 kHz). Lossless up to 30 kHz, excellent temporal resolution / gentle filtering, backwards compatibility, etc. I would love to see this take off, at least as an alternative to traditional CDs and high-resolution downloads.

John Walker - IT Executive

Headphone - MacMini running Roon Server > Netgear Orbi > Blue Jeans Cable Ethernet > mRendu Roon endpoint > Topping D90 > Topping A90 > Dan Clark Aeon 2 Closed / Focal Elegia

Home Theater - Mac Mini running Roon Server / AppleTV > Blue Jeans Cable HDMI > Denon X3700h > Anthem Amp for front channels > Revel F208-based 5.2.4 Atmos speaker system


DSD is quite an efficient encoding in that sense; it has many of these properties... Even simple DPCM encoding reduces size compared to uncompressed hi-res PCM, and FLAC does a very good job without losses. So I don't see much point in having a new lossy encoding where everybody would need to pay one company for licenses (although we've seen a lot of those: MP3, AAC, H.264, etc.).
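For reference, DPCM in its simplest form just stores sample-to-sample differences, which for a slowly varying signal span a much smaller range than the samples themselves (toy signal below; this is not MQA's disclosed scheme):

```python
import numpy as np

# a slowly varying test signal, rounded to integer sample values
x = np.round(1000 * np.sin(2 * np.pi * np.arange(256) / 64)).astype(int)

d = np.diff(x, prepend=0)   # encoder: transmit differences instead of samples
y = np.cumsum(d)            # decoder: a running sum restores the signal exactly

print(int(np.abs(x).max()), int(np.abs(d).max()))  # diffs span a far smaller range
```

Reconstruction is lossless, and the differences here peak around 1000 × 2π/64 ≈ 98 instead of 1000, i.e. they fit in roughly three fewer bits per sample.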

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Yes, I actually think DPCM is probably how they are doing the compression in MQA, though I don't recall the patent specifically identifying it as such. As far as I recall, the method used for the lossy part is also not disclosed.


 

Stuart has been clear that, in his opinion, we don't actually hear the ultrasonics; he thinks we hear the filtering. Lossless to 30 kHz and lossy above that, up to much higher frequencies, allows gentle, presumably inaudible filtering with better time-domain performance. Which is why he keeps calling MQA audibly lossless: nothing we could hear is lossy, and the rest is just a fancy way to get the filter performance of high-rate audio.

 

If the high-frequency content is only there to keep the filters behaving correctly in the time domain, why can't the high-frequency band just be added during conversion (or subtracted during A-to-D)? Or does that inaudible frequency content determine the filter function? In other words, do the "supersonics" have to be part of the music file, or can they be generated ad hoc (perhaps adjusted to the waveform and filter)?

The dynamic range is quite different in the different frequency bands, and that is also what Bob Stuart shows. So would one need lossy compression in the band above 30 kHz, or could one just reduce the dynamic range there to a few bits?

 

That is part of how the compression can work so effectively: in that range little more than the 3 LSBs is needed. That is one piece of what MQA does. But they also do lossy compression. It is their scheme, and you would need to ask them why; perhaps just getting the bitrate as low as possible.

If the high-frequency content is only there to keep the filters behaving correctly in the time domain, why can't the high-frequency band just be added during conversion (or subtracted during A-to-D)? Or does that inaudible frequency content determine the filter function? In other words, do the "supersonics" have to be part of the music file, or can they be generated ad hoc (perhaps adjusted to the waveform and filter)?

 

I'm not sure I understand the question. If you don't carry some information with the signal, you can't know what the original signal was or what needs to be there. Without some indicator you don't know whether a signal stopped at 30 kHz or had other components, and you don't know whether a transient was steep enough to fit the lower bandwidth or not. Again, it is their scheme.

Maybe my question is whether the slope of a signal has to be steeper than what the audible frequencies can represent in order to reproduce all the transients?

 

I don't know what Mr. Stuart's opinion on that is.

 

My opinion is no. The highest audible frequency at the highest sound level is as steep a slope as your ear will respond to, so you don't need the rest. Mine is an unpopular opinion around here.
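That opinion has simple quantitative backing (a textbook bound, not something from the thread): a signal band-limited to f_max with peak amplitude A can never have a slope steeper than 2π × f_max × A, so once f_max exceeds what the ear responds to, extra bandwidth cannot make transients audibly faster.

```python
import math

def max_slope(f_max, amplitude=1.0):
    # Bernstein-type bound: a signal band-limited to f_max with peak amplitude A
    # can change no faster than 2*pi*f_max*A per second (the peak derivative of
    # A*sin(2*pi*f_max*t))
    return 2 * math.pi * f_max * amplitude

print(round(max_slope(20000)))   # slope ceiling at the audible band edge
print(round(max_slope(40000)))   # doubling the bandwidth doubles the ceiling
```

So the steepest transient a 20 kHz-limited, full-scale signal can contain is exactly the slope of a full-scale 20 kHz sine; anything steeper requires ultrasonic content that, by the argument above, the eardrum does not follow anyway.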


2. Dynamic range: there is a lower limit, which is again frequency dependent (see Fletcher-Munson). The upper limit is time dependent: long-term exposure can cause hearing damage already at 90 dB, while the short-term upper comfort limit is about 120 dB.

 

Monteverdi,

 

The ear has a dynamic range of about 120 dB.

CD (16 bit) has a dynamic range of about 130 dB.

24-bit PCM has a dynamic range of about 146 dB.

32-bit PCM has a dynamic range of about 200 dB.

 

What follows uses a loose definition of SNR and ignores the threshold of pain; it is only an example:

1. Normalize levels so that CD full scale sits at the top of the ear's 120 dB range.

2. CD (16 bit): the minimum perceivable signal has an «SNR» (the difference between signal level and noise floor, not the classical definition of SNR) of 10 dB = -120 - (-130) dB.

3. 24-bit PCM: minimum perceivable signal «SNR» of 26 dB = -120 - (-146) dB.

4. 32-bit PCM: minimum perceivable signal «SNR» of 80 dB = -120 - (-200) dB.

We don't listen at normalized levels like these. Recordings don't contain only high and middle levels, especially classical music.

We also turn the level up for comfortable playback of quiet passages.

So quite often we listen to signals with a lower SNR (more distorted) than the loud passages have.

Thus a direct comparison of the ear's dynamic range with the somewhat wider dynamic range of CD is not fully correct for music with a wide dynamic range.
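The bit-depth figures above can be checked against the textbook quantization-SNR formula for an ideal converter (flat quantization noise, full-scale sine; the ~130 dB figure for CD presumably assumes noise-shaped dither, which pushes noise away from the ear's most sensitive band):

```python
def dynamic_range_db(bits):
    """Full-scale-sine to quantization-noise ratio of an ideal N-bit quantizer."""
    return 6.02 * bits + 1.76

for bits in (16, 20, 24, 32):
    print(bits, round(dynamic_range_db(bits), 1))
```

Plain 16-bit comes out near 98 dB and 24-bit near 146 dB, matching the figure quoted above; 32-bit lands near 194 dB rather than 200, and in practice the analog noise floor of any real converter stops far short of these theoretical numbers.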

 

Yuri

AuI ConverteR 48x44 - HD audio converter/optimizer for DAC of high resolution files

ISO, DSF, DFF (1-bit/D64/128/256/512/1024), wav, flac, aiff, alac,  safe CD ripper to PCM/DSF,

Seamless Album Conversion, AIFF, WAV, FLAC, DSF metadata editor, Mac & Windows
Offline conversion save energy and nature

