fas42 Posted April 12, 2021 An interesting new thread in that 'other place' is developing, https://www.audio "science" review/forum/index.php?threads/making-very-small-distortions-errors-audible-with-music-signals-some-examples.20886/ . It's refining a method for finding meaningful differences in all those things which are "unmeasurable", and therefore "inaudible" 😁 ... It will be interesting to see how this develops - hopefully, to some degree, countering all those "misleading measurements" 🙂.
stereo coffee Posted April 21, 2021 Perhaps the most misleading thing is not measuring equipment with respect to recognised standards that the majority of home audio equipment complies with. Consumer line level, for example, is nominally 316 mV RMS. Until they get that right, I would suggest ignoring all measurements that fail to respect consumer line level, as anything higher is meaningless with respect to the equipment you use every day.
The Computer Audiophile Posted April 21, 2021 Author 2 hours ago, stereo coffee said: Perhaps the most misleading thing is not measuring equipment with respect to recognised standards that the majority of home audio equipment complies with. Consumer line level, for example, is nominally 316 mV RMS ... Can you link to the document establishing this as the recognized standard? Founder of Audiophile Style | My Audio Systems
stereo coffee Posted April 21, 2021 Share Posted April 21, 2021 Yes, https://en.wikipedia.org/wiki/Line_level Also https://audiouniversityonline.com/consumer-vs-professional-audio-levels-what-is-the-difference/ Link to comment
pkane2001 Posted April 21, 2021 3 hours ago, stereo coffee said: Perhaps the most misleading thing is not measuring equipment with respect to recognised standards that the majority of home audio equipment complies with ... Why ignore? We very rarely use devices such as DACs, amps, etc. at their nominal output level, unless you never adjust volume. While it may be useful to also measure at nominal level, it's not that you must ignore all other measurements, as long as these are consistent. Most DACs I've measured, for example, measure better at 2 V output than at 0.316 V, and the same with head-amps, simply due to better SNR. -Paul DeltaWave, DISTORT, Earful, PKHarmonic, new: Multitone Analyzer
The Computer Audiophile Posted April 21, 2021 Author 49 minutes ago, stereo coffee said: Yes, https://en.wikipedia.org/wiki/Line_level Also https://audiouniversityonline.com/consumer-vs-professional-audio-levels-what-is-the-difference/ If I were designing a product, I wouldn't use Wikipedia as my source of published industry standards, especially when all it says is: "The most common nominal level for professional equipment is +4 dBu (by convention, decibel values are written with an explicit sign symbol). For consumer equipment it is −10 dBV, which is used to reduce manufacturing costs." Do you know of any standards organization that has published an accepted line level standard? Founder of Audiophile Style | My Audio Systems
bluesman Posted April 21, 2021 2 hours ago, The Computer Audiophile said: Do you know of any standards organization that has published an accepted line level standard? AES defines a line level standard in their Pro Audio Reference (which, as they describe it, defines concepts, terminology, standards, history, and "assorted surprises"): "line-level Standard +4 dBu (pro) or -10 dBV (consumer) audio levels". As described in the video linked in stereo coffee's post above, there are different reference levels for pro and consumer audio equipment. The units of measure (UoM) are different, and the operating ranges are different. This is important to those who want to combine pro and consumer audio devices, since the input level required to drive a pro gain stage to its rated output will be a higher voltage than the maximum output levels of most consumer equipment can deliver. And the output levels of pro line level devices will be much higher than those of consumer line level devices, which can generate grossly excessive and damaging SPLs from your poor speakers. This explains the common complaint of the unknowing audiophile who sticks a pro device into a consumer audio chain and finds that the overall gain (as manifested in SPL from the speakers) is either grossly lower or higher than it was with the consumer device that was replaced. Using similar gain settings on variable controls, a consumer preamp won't drive a pro amp to full output, and a pro line level device will overdrive a consumer amplification stage. If you use fixed maximum output levels in DACs etc., you may find yourself in need of new speaker drivers if you don't properly rebalance your levels among devices. For analog audio devices, the consumer UoM is the decibel volt (dBV), while the pro UoM is the decibel unloaded (dBu). A 0 dBV level (1 RMS volt) will push 1 milliwatt through a 1 kOhm load. 
A 0 dBu level (approximately 0.775 V) will push 1 mW of power across a 600 Ω load. The most common nominal level for consumer audio equipment is −10 dBV, and the most common nominal level for professional equipment is +4 dBu. For digital audio devices, the UoM is the dBFS (decibel relative to full scale). A 0 dBFS level is the maximum level achievable with digital equipment - there is no level >0 dBFS, which for 16 bit audio is the digital word 1111 1111 1111 1111 (the largest encodable value). The lowest possible 16 bit level is -96 dBFS (the digital word 0000 0000 0000 0001). There is an industry standard (unofficial, AFAIK, but commonly followed in broadcasting and commercial audio). Here's the gist of it (from the Alabama Broadcasters Association, for those interested in learning more): "It makes sense to calibrate all equipment such that when 0 dBFS occurs at the mixing console, every additional, or downstream, piece of gear should be set to match the 0 dBFS level. In this manner, we can faithfully know that the entire system is calibrated, and matched to the output of the mixing console. As for a nominal operating level, the recommended practice is to utilize -12 dBFS as the reference. This would be to observe dynamic peak levels to land at the -12 dBFS indication, thereby leaving 12 dB of headroom for the system. Assuming 0 dBFS is to be at the highest audio level before clipping occurs, which corresponds to an analog level of 24 dBu, +4 dBu is the same as -20 dBFS." The manipulations applied to audio by recording engineers can change the above. For example, compression and EQ can grossly alter peak levels, which may raise or lower the headroom needed to stay between desired SPLs and clipping. Real time DSP can change optimal live recording parameters, while post processing can turn a beautifully captured performance into one in serious need of normalization etc. 
An audiophile needs a lot of knowledge to avoid common errors that grossly affect SQ, usability, and the general joy of listening. The assumption that pro equipment is better than consumer equipment is simply wrong.
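The two nominal levels above are easy to check numerically. Here's a quick Python sketch (the 1.0 V and 0.775 V reference values are the ones from the dBV and dBu definitions in the post above):

```python
def dbv_to_volts(dbv):
    # 0 dBV is referenced to 1.0 V RMS
    return 10 ** (dbv / 20)

def dbu_to_volts(dbu):
    # 0 dBu is referenced to 0.775 V RMS (1 mW into 600 ohms)
    return 0.775 * 10 ** (dbu / 20)

# Consumer nominal level: -10 dBV
print(round(dbv_to_volts(-10), 3))   # -> 0.316 V RMS
# Pro nominal level: +4 dBu
print(round(dbu_to_volts(4), 3))     # -> 1.228 V RMS
```

So the consumer nominal level of −10 dBV works out to about 316 mV RMS, and the pro nominal level of +4 dBu to about 1.23 V RMS - roughly 12 dB apart, which is why mixing the two without rebalancing goes badly.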
The Computer Audiophile Posted April 21, 2021 Author 2 minutes ago, bluesman said: AES defines a line level standard in their Pro Audio Reference (which, as they describe it, defines concepts, terminology, standards, history, and "assorted surprises"): "line-level Standard +4 dBu (pro) or -10 dBV (consumer) audio levels" ... Thanks for the information. It still seems very loose. Do you know of something with an official number like "EBU3276" which I use for room correction? 
Founder of Audiophile Style | My Audio Systems
Popular Post bluesman Posted April 21, 2021 55 minutes ago, The Computer Audiophile said: Thanks for the information. It still seems very loose. Do you know of something with an official number like "EBU3276" which I use for room correction? Sadly, I do not - but I don't think it's loose. The reasoning behind these units of measure is sound (no pun intended, but it's not a bad one) and has evolved along with technology. The original basic reference UoM was the dBm, which is a unit of electrical power. 0 dBm = 1 milliwatt, regardless of the load or the voltage drop across it. So maintenance of a standard line level in dBm throughout a device chain obviously requires impedance matching across interfaces. And the dBm UoM is useful for expressing low power levels like line level inputs and outputs in mics, preamps, DACs etc. A lot of modern audio equipment is much more sensitive to voltage than power and is therefore more dependent on voltage matching than on power matching across devices. So the dBu was created as a UoM with a reference level of 0 dBu = 0.775 volts. This standard was intentionally set so that 0 dBm and 0 dBu indicate the same 1 mW of power through a 600-ohm load (which, as you know, has been an industry standard for decades). The difference is that 0 dBm is only 0.775 V into a 600 Ohm load, while 0 dBu is always 0.775 V regardless of load. From here, it gets a bit confusing (but still not loosely or ill defined). There are the dBv and the dBV, which are reference ratios of voltages. 0 dBv = 0 dBu while 0 dBV = 1 V. And there's the dBW, a UoM for wattage ratios. 0 dBW = 1 watt of electrical power. This is not equal to 1 acoustic watt, which is a measure of sound power at the transducer. 
In fact, most of our speaker systems are only somewhere between 1 and 4% efficient - they put out about one acoustic watt for every 25 to 100 watts of electrical power pushed through their voice coils (or other actuating mechanisms).
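The dBm vs dBu distinction above - power-referenced vs voltage-referenced - can be illustrated with a short sketch. Because dBm is a power ratio, the voltage it implies depends on the load; 600 Ω is the historical standard load mentioned in the post:

```python
import math

def volts_for_dbm(dbm, load_ohms):
    # dBm is a power ratio: 0 dBm = 1 mW, so the implied
    # voltage depends on the load it is driven into
    power_watts = 1e-3 * 10 ** (dbm / 10)
    return math.sqrt(power_watts * load_ohms)

# 0 dBm needs different voltages into different loads...
print(round(volts_for_dbm(0, 600), 3))   # -> 0.775 V into 600 ohms
print(round(volts_for_dbm(0, 1000), 3))  # -> 1.0 V into 1 kOhm (i.e. 0 dBV)
# ...whereas 0 dBu is always 0.775 V, regardless of load.
```

This also confirms the earlier figures: 1 mW through 1 kΩ needs exactly 1 V RMS (0 dBV), and through 600 Ω it needs 0.775 V (0 dBm = 0 dBu at that load).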
stereo coffee Posted April 21, 2021 7 hours ago, pkane2001 said: Why ignore? We very rarely use devices such as DACs, amps, etc. at their nominal output level, unless you never adjust volume ... Because 316 mV RMS is the level that consumer equipment has at its output. If we look at reviews of equipment, the figures used rarely if EVER relate to what we end up hearing. Certainly a test signal can be arranged to run through a CD player, but few of us enjoy listening to test signals, or would invite friends around to listen to them. There is also a human attraction to larger-looking figures, which may help sell advertising space. We are being fooled, and should wake up - firstly by politely requesting that equipment be measured correctly - but given the habit now routinely present, my best advice would be to ignore reviews that cannot get such an important figure correct... turn the page. How consumer line level (nominal 0.316 V RMS) relates to the bigger picture is that it underlines a choice: whether to add reactance between your source and power amp, or not. The best scenario is to have your power amp's sensitivity close to consumer line level, so as not to add reactance.
stereo coffee Posted April 21, 2021 13 minutes ago, stereo coffee said: Because 316 mV RMS is the level that consumer equipment has at its output ... Or make a decision to always use pro level equipment and not mismatch. Your choices will be severely restricted, though, and will change dramatically if taking the pro level path. The focus would change to trying to assume the knowledge and experience of, for instance, mastering engineers. I can see we would be eternally frustrated, not being given access to the original tapes... Consumer line level equipment is the mainstream, and is sensible in terms of our continued enjoyment of recorded sound. Getting as close as we possibly can to what was recorded, not trying to change what was recorded, is then achieved.
bluesman Posted April 22, 2021 3 hours ago, stereo coffee said: Because 316 mV RMS is the level that consumer equipment has at its output. 14 hours ago, stereo coffee said: If we observe for example consumer line level which is nominal 316 mV RMS The way you say these things makes me wonder if you understand what that figure means. The usual voltage at fixed level outputs from line stage preamplifiers (e.g. tape outputs), CD players etc. is about 300 mV. This is the RMS equivalent of the actual standard, which is defined as -10 dBV. Many line level outputs in consumer equipment can apply a peak voltage drop of well over 2 V across the input they're driving and still be within their performance spec. Some preamps and HT receivers can put out much higher levels - I recall a few HT receivers over the last 10+ years that would top 7 V from "line level" outputs. But most variable preamp line outputs used to drive power amps are variable between 0 and a minimum of 2 V, as determined by the "volume control". Unless you have an amplifier with unusually low input sensitivity, you're not likely to be listening in most home settings at a -10 dBV preamp output level, because it would be much too loud. A 50 watt amplifier driving an 8 Ohm load at its maximum rated output is applying a 20 V drop across that load. If that amplifier has a typical gain of about 28 dB, it needs about 0.8 V at the input to drive it to 50 W. Unless you have some unusually insensitive speakers and/or a ballroom in your house, you're not going to come anywhere near 50 W in normal listening. The output of other devices ahead of the power stage is not referenced the same way. Phono stages have gains of about 40 dB (high output MM), 58-60 dB (high output MC), or 70+ (low output MC) to step up the very low output of a phono cartridge sufficiently to drive a preamp's line stage(s) - and most phono stages have maximum outputs of about half a volt. 
4 hours ago, stereo coffee said: How consumer line level (nominal 0.316 V RMS) relates to the bigger picture is that it underlines a choice: whether to add reactance between your source and power amp, or not ... Unless you're using a fixed output line stage to drive a power amplifier without input attenuation (i.e. no "volume control"), you can pair most current preamps and amplifiers with little concern for their output levels and input sensitivities - and you don't need a buffer stage between them. You should always check that the input sensitivity of a power amplifier you're considering is a functional match for the output level range of whatever preamp you want to use with it, since there are a few devices out there today that are outside the usual range of such electronics. But most preamps and amps today are quite compatible with regard to preamp line out levels and amplifier input sensitivity.
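The 50 W example above follows directly from P = V²/R and the gain expressed as a voltage ratio; a small sketch to verify the arithmetic:

```python
import math

def output_volts(power_watts, load_ohms):
    # RMS voltage across the load at a given power: V = sqrt(P * R)
    return math.sqrt(power_watts * load_ohms)

def input_volts_needed(power_watts, load_ohms, gain_db):
    # Input voltage that drives the amp to the given output power,
    # treating gain_db as a simple voltage gain
    return output_volts(power_watts, load_ohms) / (10 ** (gain_db / 20))

v_out = output_volts(50, 8)           # 20 V across 8 ohms at 50 W
v_in = input_volts_needed(50, 8, 28)  # ~0.8 V input at 28 dB gain
print(round(v_out, 1), round(v_in, 2))
```

Both figures match the post: 50 W into 8 Ω is a 20 V RMS drop, and with 28 dB (about 25×) of voltage gain the amp needs roughly 0.8 V at its input to get there.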
stereo coffee Posted April 22, 2021 As consumer line level is 0.316 V RMS, it just needs to be matched to a power amp whose sensitivity is close to that same figure. In between, of course, just resistance to attenuate, as it adds no reactance other than the minor contribution of cabling. Within that resistance attenuation product there needs to be the ability to independently refine shunt and series resistance... and you are there. The result is hearing your consumer audio source component, without unnecessary reactance in between, without the contribution of coupling capacitors or semiconductors, and without shifting the level up or down as the case may be. Surely that is what this pursuit is all about - getting to hear what the source component provides?
stereo coffee Posted April 22, 2021 1 hour ago, bluesman said: But most preamps and amps today are quite compatible with regard to preamp line out levels and amplifier input sensitivity. But preamp line level outs are adding reactance. This is absolutely fine if you have also resigned yourself to never hearing what your source can actually provide. Far better is to always design around not adding reactance other than the minor contribution of cabling, which means always having the sensitivity of your power amp close to consumer line level - let's say below 500 mV RMS - and then being able to use resistance attenuation. The ability to experience the capability of your source component is the desired result with any audio system; putting layers in between to bridge the difference to consumer line level, which you think is adding something, is abstract at best. The Quad 306 is, I think, the best example of such a power amp. Being from a manufacturer of considerable experience (since 1936), it has a sensitivity of 375 mV RMS... perfectly matching consumer line level... they knew what they were doing.
March Audio Posted April 27, 2021 On 4/22/2021 at 11:05 AM, stereo coffee said: But preamp line level outs are adding reactance ... The reality is that the vast majority of modern consumer digital sources have an output of, or very close to, 2 V RMS at 0 dBFS. It's been this way for years. This may not be a "standard" but it certainly is the current convention. Most equipment is now designed around and works with this level. The 0.316 V level is effectively redundant. The vast majority of power amps sit between 25 and 30 dB gain as a result. As an example, one of our power amps would require 43 dB of gain to reach full power output (425 watts into 4 ohms) if the signal input were limited to just 300 mV. That much gain is bad for noise levels; the 2 volt convention is far more sensible. Can you explain what you mean by "adding reactance"? In an active preamp the input impedance will be unrelated to the output impedance.
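The gain figure quoted above checks out with the same V = sqrt(P·R) arithmetic; here's a small sketch comparing the 300 mV and 2 V cases:

```python
import math

def gain_db_needed(power_watts, load_ohms, input_v_rms):
    # Voltage gain (in dB) needed to reach full rated power
    # from a given input level
    v_out = math.sqrt(power_watts * load_ohms)
    return 20 * math.log10(v_out / input_v_rms)

# 425 W into 4 ohms from a 300 mV source vs a 2 V source
print(round(gain_db_needed(425, 4, 0.3), 1))  # -> 42.8 dB
print(round(gain_db_needed(425, 4, 2.0), 1))  # -> 26.3 dB
```

425 W into 4 Ω is about 41.2 V RMS at the output, so a 300 mV input really does demand roughly 43 dB of gain, while a 2 V source needs only about 26 dB - right in the 25-30 dB range the post says most modern power amps occupy.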
Miska Posted April 27, 2021 On 4/21/2021 at 3:30 PM, pkane2001 said: Most DACs I've measured, for example, measure better at 2v output than at 0.316, the same with head-amps simply due to better SNR. The optimal level for THD+N figures on DACs seems to be around -10 dBFS: THD drops while noise is not yet starting to dominate. In addition, many clip inter-sample overs. Signalyst - Developer of HQPlayer Pulse & Fidelity - Software Defined Amplifiers
stereo coffee Posted April 27, 2021 5 hours ago, March Audio said: Reality is that the vast majority of modern consumer digital sources have an output of, or very close to, 2V rms at 0dBFS ... Indeed players have that capability, but the media played remains strictly and sensibly at consumer line level of 0.316 V RMS... Reactance can be described as inertia against the flow of current, notably contained naturally in the electrical properties of capacitors and inductors. Reactance is added to a circuit by any such component, particularly in the signal path. A review of the schematics of active pre's will reveal how many reactive components they use in order to pass signal from input to output.
Popular Post bluesman Posted April 27, 2021 1 hour ago, stereo coffee said: Reactance can be described as inertia against the flow of current That's a definition I've never heard before. Reactance refers to any force in an electric circuit that opposes change in the flow of current, not the flow of current itself. Your definition (apart from the term inertia) would include resistance as well - but the two are entirely different, even though both oppose the flow of current. In a pure DC circuit (of which there are many in audio equipment), there is resistance but there is no reactance. Reactance is a phenomenon of alternating current, and its magnitude is frequency dependent. Impedance is the combination of resistance and reactance and is therefore frequency dependent because of the reactive components. I assume you're using the term inertia because inertia opposes changes in mass motion. But inertia is a function of kinetic energy, for which the unit of measure is the joule (calculated from mass and velocity). Reactance has nothing to do with mass or velocity, so describing it as inertia is a cute metaphor but not a valid definition. Reactance is a function of magnetic and static forces. Inductive reactance is caused by the magnetic field generated around a conductor when alternating current flows through it. The induced magnetic field opposes changes in the current flow that creates it. Capacitive reactance is caused by the temporary "storage" of flowing electrons in a dielectric substance inserted into a conductor - the flow is "captured and released", which delays the phase of the output relative to the input waveform. I join March Audio and others above in wondering where you're getting some of the confusing concepts you've set forth, e.g. "preamp line level outs are adding reactance". Line level outputs can be driven by many elements, depending on circuit design. 
They're not all the same, and each design has its own set of operating parameters - yet you're lumping them all together. There are balanced and unbalanced, 0 gain and finite gain, inverting and non-inverting, direct coupling vs transformer/choke/capacitor coupling, cathode follower vs plate follower, etc. etc. There are many fine systems in which more than one device has line level outputs. I doubt that their owners have resigned themselves to "...not ever wanting to hear what your source can actually provide".
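The frequency dependence of reactance described above is captured by the standard formulas X_L = 2πfL and X_C = 1/(2πfC). A quick sketch - the 2.2 µF and 1 mH values are purely illustrative, not taken from any equipment discussed in this thread:

```python
import math

def inductive_reactance(freq_hz, inductance_h):
    # X_L = 2*pi*f*L - rises with frequency
    return 2 * math.pi * freq_hz * inductance_h

def capacitive_reactance(freq_hz, capacitance_f):
    # X_C = 1 / (2*pi*f*C) - falls with frequency
    return 1 / (2 * math.pi * freq_hz * capacitance_f)

# An illustrative 2.2 uF coupling cap across the audio band:
for f in (20, 1000, 20000):
    print(f, "Hz:", round(capacitive_reactance(f, 2.2e-6), 1), "ohms")
```

Note how the capacitor's reactance is three orders of magnitude higher at 20 Hz than at 20 kHz - exactly the frequency dependence that makes impedance, unlike pure resistance, vary across the audio band.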
fas42 Posted April 27, 2021 17 minutes ago, bluesman said: That's a definition I never heard before. Reactance refers to any force in an electric circuit that opposes change in the flow of current, not the flow of current itself ... A slight correction, if I may, 🙂 ... Quote Electrical reactance, (is) the opposition to a change in voltage due to capacitance (capacitive reactance) or in current due to inductance (inductive reactance); the imaginary component of AC impedance.
March Audio Posted April 27, 2021 2 hours ago, stereo coffee said: Indeed players have that capability, but the media played remains strictly and sensibly at consumer line level of 0.316 V RMS ... With respect, you have a misunderstanding about this. The signal level is entirely dependent upon how loud you turn the volume, and of course the level at any given moment in the recording. With modern digital sources it will be at any level between 0 V and 2 volts RMS. Thanks for your explanation, but I know what reactance is; I was really looking for some kind of explanation of why you think it's "bad".
stereo coffee Posted April 27, 2021 Share Posted April 27, 2021 1 hour ago, bluesman said: That's a definition I never heard before. Reactance refers to any force in an electric circuit that opposes change in the flow of current, not the flow of current itself. Your definition (apart from the term inertia) would include resistance as well - but the two are entirely different, even though both oppose the flow of current. In a pure DC circuit (of which there are many in audio equipment), there is resistance but there is no reactance. Reactance is a phenomenon of alternating current, and its magnitude is frequency dependent. Impedance is the combination of resistance and reactance and is therefore frequency dependent because of the reactive components. I assume you're using the term inertia because inertia opposes changes in mass motion. But inertia is a function of kinetic energy, for which the unit of measure is the joule ( calculated from mass and velocity). Reactance has nothing to do with mass or velocity, so describing it as inertia is a cute metaphor but not a valid definition. Reactance is a function of magnetic and static forces. Inductive reactance is caused by the magnetic field generated around a conductor when alternating current flows through it. The induced magnetic field opposes changes in the current flow that creates it. Capacitive reactance is caused by the temporary "storage" of flowing electrons in a dielectric substance inserted into a conductor - the flow is "captured and released", which delays the phase of the output relative to the input waveform. I join MarchAudio and others above in wondering where you're getting some of the confusing concepts you've set forth, e.g. " preamp line level outs are adding reactance". Line level outputs can be driven by many elements, depending on circuit design. They're not all the same and each design has its own set of operating parameters - yet you're lumping them all together. 
There are balanced and unbalanced, 0 gain and finite gain, inverting and non-inverting, direct coupling vs transformer/choke/capacitor coupling, cathode follower vs plate follower, etc etc etc. There are many fine systems in which more than one device has line level outputs. I doubt that their owners have resigned themselves to "...not ever wanting to hear what your source can actually provide". A definition was not offered; rather, I said it "can be described as". If we look at what are regarded as good active pre's, do you find that the better types, often marketed as reference types, make an effort to minimise reactance? The schematics I looked at all did this. Better still is to just use resistance to attenuate, accompanied by matching the consumer line level source voltage to the power amp's sensitivity. It's a good recipe for continuous audio enjoyment. Link to comment
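The frequency dependence bluesman describes can be put in numbers with the standard formulas X_L = 2πfL and X_C = 1/(2πfC). A minimal sketch; the 10 µF coupling-capacitor value is purely illustrative and not taken from any preamp schematic discussed in the thread:

```python
import math

def inductive_reactance(f_hz, l_henry):
    """X_L = 2*pi*f*L: rises linearly with frequency."""
    return 2 * math.pi * f_hz * l_henry

def capacitive_reactance(f_hz, c_farad):
    """X_C = 1/(2*pi*f*C): falls as frequency rises."""
    return 1 / (2 * math.pi * f_hz * c_farad)

# Illustrative values only: a 10 uF coupling capacitor presents
# ~796 ohms at 20 Hz but under 1 ohm at 20 kHz.
print(capacitive_reactance(20, 10e-6))      # ~795.8
print(capacitive_reactance(20_000, 10e-6))  # ~0.796
```

This is why reactive components shape frequency response (their opposition to current changes with frequency) while a plain resistor attenuates all frequencies equally.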
stereo coffee Posted April 28, 2021 Share Posted April 28, 2021 48 minutes ago, March Audio said: With respect you have a misunderstanding about this. The signal level is entirely dependant upon how loud you turn the volume, and of course the level at any given moment in the recording. With modern digital sources it will be at any level between 0v and 2 volts rms. Thanks for your explanation but I know what reactance is, I was really looking for some kind of explanation of why you think it's "bad". You only need to attenuate consumer line level, which is NOT 2 V RMS; it is nominally 0.316 V RMS. You might be reading too many reviews that test equipment at that higher level, which is unrelated to what is at an RCA socket when playing CDs in home audio systems. Why it's bad is because your source component contains the very best opportunity of presenting music as it actually can be; adding anything in between other than resistance to attenuate simply detracts from that opportunity. Link to comment
Jud Posted April 28, 2021 Share Posted April 28, 2021 2 minutes ago, stereo coffee said: You only need to attenuate consumer line level, which is NOT 2v RMS its nominal 0.316v RMS , You might be reading too many reviews that test equipment at that high level, that is unrelated to what is at a RCA socket when playing CD's in home audio systems. Why its bad is because your source component contains the very best opportunity of presenting music as it actually can be, adding anything in-between other than resistance to attenuate, simply detracts from that opportunity. Your "explanation" of why this is bad is actually just a restatement of your original position. Would you please try to give an actual explanation as to why it causes deterioration in the resulting sound? Also, you mentioned looking through schematics of good preamplifiers. Can you please mention which particular preamps you viewed the schematics of? Thanks. March Audio 1 One never knows, do one? - Fats Waller The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature. Link to comment
bluesman Posted April 28, 2021 Share Posted April 28, 2021 1 hour ago, stereo coffee said: You only need to attenuate consumer line level, which is NOT 2v RMS its nominal 0.316v RMS I think I figured out why you're clinging to this artificial dichotomy. The reference you keep citing as "consumer line level" (0.316 VRMS) is the approximate RMS voltage of a steady 1 kHz sine wave across a 1 Ohm resistor at -10 dBV and is in the middle of the usual range for mean analog line levels when playing music on consumer electronics. Although louder than most of us listen most of the time, this level is used because it leaves sufficient headroom for peaks to 2+V peak to peak (not RMS). In other words, it's the highest mean playback level you can use for music with a wide dynamic range without pushing distortion on peaks to levels beyond rated specs for the equipment in question. It's a rare (and foolish) audiophile who pushes his or her equipment to the limit of its rated specs (let alone beyond). Peak to peak voltage equals RMS volts x 2*sqrt(2). Program peaks at 0 dBV will measure up to 2+V peak to peak with most music when played at an average level of -10 dBV RMS (which is, again, very very loud). So music playing at -10 dBV (which is the ~0.316 VRMS average level to which you cling) will be reaching peak volumes of up to 0 dBV (which is about 2 V peak-to-peak) within the spec'ed distortion of the equipment. In short, 0.316 VRMS (-10 dBV) is the reference average playback level for music that contains peaks of up to 2+ volts peak to peak (0 dBV). Average playback levels are stated in RMS volts while peaks are stated in peak-to-peak volts rather than RMS (and I do not know why this is the way it's done). Look at the graph below for a comparison of RMS, peak, and peak-to-peak voltages. The Computer Audiophile 1 Link to comment
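bluesman's arithmetic (dBV referenced to 1 V RMS; sine-wave peak-to-peak voltage = 2√2 × RMS) is easy to check numerically. A minimal sketch of the conversions, not taken from the thread:

```python
import math

def dbv_to_vrms(dbv):
    """dBV is referenced to 1 V RMS: V = 10**(dBV/20)."""
    return 10 ** (dbv / 20)

def vrms_to_vpp(vrms):
    """For a sine wave, peak-to-peak voltage = 2*sqrt(2) * RMS."""
    return 2 * math.sqrt(2) * vrms

nominal = dbv_to_vrms(-10)             # ~0.316 V RMS, the "consumer line level"
peak_pp = vrms_to_vpp(dbv_to_vrms(0))  # ~2.83 V p-p for program peaks at 0 dBV
print(round(nominal, 3), round(peak_pp, 2))  # 0.316 2.83
```

So an average level of -10 dBV (0.316 V RMS) with peaks reaching 0 dBV gives peaks of roughly 2.8 V peak-to-peak, consistent with the "2+ V" headroom figure above.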
Don Hills Posted April 28, 2021 Share Posted April 28, 2021 29 minutes ago, bluesman said: I think I figured out why you're clinging to this artificial dichotomy. The reference you keep citing as "consumer line level" (0.316 VRMS) is the approximate RMS voltage of a steady 1 kHz sine wave across a 1 Ohm resistor at -10 dBV ... 1 ohm resistor? Shurely shome mishtake... ☺️ (dBm takes into account the source/load resistance, dBu / dBV doesn't.) opus101 1 "People hear what they see." - Doris Day The forum would be a much better place if everyone were less convinced of how right they were. Link to comment
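Don Hills's correction, that dBm involves a load impedance while dBu and dBV are pure voltage ratios, can be shown with the standard reference values (1 V RMS for dBV, 0.7746 V RMS i.e. √0.6 V for dBu, 1 mW for dBm). A sketch for illustration:

```python
import math

def dbv(vrms):
    """dBV: voltage relative to 1 V RMS. No impedance involved."""
    return 20 * math.log10(vrms / 1.0)

def dbu(vrms):
    """dBu: voltage relative to sqrt(0.6) V (~0.7746 V RMS). No impedance involved."""
    return 20 * math.log10(vrms / math.sqrt(0.6))

def dbm(vrms, load_ohms):
    """dBm: power relative to 1 mW. The load resistance matters."""
    power_w = vrms ** 2 / load_ohms
    return 10 * math.log10(power_w / 1e-3)

# 0.7746 V RMS into 600 ohms dissipates exactly 1 mW,
# which is why 0 dBu coincides with 0 dBm only at 600 ohms.
print(round(dbm(math.sqrt(0.6), 600), 6))  # 0.0
```

Change the load to anything other than 600 Ω and the dBm figure moves while dBu and dBV stay put, which is the distinction being made above.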