
John Dyson


About John Dyson

  • Rank
    Audio DSP/SW developer, sometimes listener


  1. Was just taking a nap, listening to the Cars -- noticed something strange during the realtime play of the decoder -- the Cars' greatest hits album that I have is DOUBLE ENCODED also!!! John
  2. Don't give up -- listen to reason... You are actually important to me -- I want to help (probably causing more friction -- but I really want to help the general understanding.) Here we go -- do you hear the 5msec jitter in the DHNRDS? That is how much (AND MORE) jitter is happening in the DHNRDS all of the time. Why does it not make any difference? Buffering to the final clock. (Actually, it is possible for it to jitter as much as 1/3 second or more.) Do you hear even a dropout in the recordings? (Only if the CPU is overwhelmed during realtime play.) Do you hear the myriad of rate conversions (even though I limited them)? Do you know how much stuff would JUST NOT WORK if the 'common knowledge' prevailed? Not even a CD player would work. The problem with all of this is that claims are made based upon defective experiments and anecdotes. Here is my anecdote -- I took the 'brickwall ringing' as fact a long time ago, until I realized that the ringing wasn't really ringing. Superficially, things might look one way, but the reality might be very, very different. John
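The buffering argument above can be sketched in a few lines. This is a toy model (not the DHNRDS code): random chunk sizes stand in for transport jitter, and a FIFO stands in for buffering to the final clock. However irregularly the samples arrive, the read-out order and values are bit-identical.

```python
import random
from collections import deque

def jittered_delivery(samples, rng):
    """Deliver samples in irregularly sized chunks, emulating transport timing jitter."""
    i = 0
    while i < len(samples):
        n = rng.randint(1, 64)              # irregular chunk size = jittery arrival
        yield samples[i:i + n]
        i += n

def reclock(chunks):
    """Buffer arriving chunks in a FIFO, then read out in fixed order (toy model of
    re-clocking to the final DAC clock -- values and order are untouched)."""
    fifo = deque()
    out = []
    for chunk in chunks:
        fifo.extend(chunk)                  # arrival timing is irregular...
        while fifo:
            out.append(fifo.popleft())      # ...readout order/values are not
    return out

src_rng = random.Random(1)
src = [src_rng.random() for _ in range(1000)]
assert reclock(jittered_delivery(src, random.Random(42))) == src  # bit-identical
```

The delivery jitter vanishes entirely in the buffered domain; only the final conversion clock matters.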
  3. They sound different because of 1) electronics differences causing analog-based noise (bad circuit/layout design -- different current flows), or 2) brain-state/hearing differences. Digital difference -- NADA. Truly control the experiment; anecdotal evidence is NOT evidence for controversial and technically unsupportable claims. Don't make claims about digital bits being different unless the experiment is truly controlled as such. I sometimes hear differences when they aren't there, and have learned that audio/listening memory is approx 10-15 seconds. Sure, general sounds are longer, but precision hearing is extraordinarily short. John
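A controlled listening claim can at least be checked with simple binomial math. A minimal sketch (stdlib only, hypothetical trial counts): the probability of getting at least `hits` correct out of `trials` forced-choice ABX trials by pure guessing.

```python
from math import comb

def abx_p_value(hits, trials):
    """One-sided binomial p-value: chance of scoring >= hits out of trials
    by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# e.g. 14 correct out of 16 trials: very unlikely to be guessing (~0.2%)
assert abx_p_value(14, 16) < 0.01

# 9 of 16 is entirely consistent with guessing (~40%)
assert abx_p_value(9, 16) > 0.3
```

Without this kind of control, a reported "difference" cannot be separated from the ~10-15 second listening-memory problem described above.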
  4. The pits represent a complex digital coding which, after proper demodulation, produces either errors or an accurate signal. CDs aren't like laserdiscs, where the noise pollution in the signal can reach the target (even after massaging with a TBC.) CDs are a totally different animal, where bits are either erroneous or not -- there is no in-between in the case of CD data. CD data comes in coherent blocks, and if there are little 'wobbles' in the data, then the entire block (a pretty big chunk of data) is rejected or corrected back to data-perfection -- NOTHING in between. There are cases where CD data can cause errors, but they are DISCRETE errors and not noise in the traditional analog sense. If there is a noise coming from a CD, it is a discrete GLITCH, which might or might not be corrected by ECC. Any timing errors are buffered and resynched to a clock -- usually some kind of on-board circuit. That clock, however lousy, is meaningless WRT audio jitter unless a D/A is onboard and there are bad-design analog leakages in the circuit. When differences might occur in the audio, the strongest comes from analog conducted noise caused by suboptimal physical circuit layout. The second layer of noise comes from capacitively or inductively coupled noise, but tends to be weaker than the conducted noise. The third layer of noise comes from radiated sources -- but isn't really very strong on a sanely designed circuit. Any non-error-causing deviations on the CD media cannot come into play -- because the CD data is NOTHING like the audio that it represents. Any small noise changes mostly come from ham-handed grounding issues where the digital ground leaks into the audio ground path (conducted), or insanely bad layout causing conducted or radiated interference. As each group of bits from a CD is read and corrected -- it is either 100% accurate or meaningless garbage.
The various groups of bits are re-assembled after ECC, and the resulting output is either accurate or gross/glitch fill-in. Similar higher-level reading techniques are used on your *CD/DVD/Blu-ray* technologies as on your HDD. If you had little errors that created noise in the digital domain, then heaven help you when trying to read a CD for data. Sub-bit errors don't happen unless there is ham-handed design in the circuitry and the signal is resolved to analog. Clock jitter also need not apply in this case -- the only way that clock jitter comes into play on this matter is if the jitter is so very bad that the circuitry/reader design can no longer lock onto the CD (super unlikely), or the jitter leaks directly into the ANALOG audio because of the ham-handed circuit layout described above. If your CD drive doesn't resolve directly to analog, then the clock jitter is like internet timing jitter -- it is all resynched at the D/A conversion. This is NOT religion, but comes from a developer type who came from AT&T Bell Labs research -- no longer a person bound to full engineering discipline, but one who still does a modicum of research. If I still 100% subscribed to engineering discipline, my current project wouldn't function -- but the same rules apply, that is: REALITY. The singular place where clock issues come into play is at the D/A conversion point, and the analog output is partially dependent on the ANALOG design of the circuit board, and also the clock jitter at that point. (Clock jitter can even be added by bad design, even if you use a precision external clock unit.) John
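The "either exact or garbage" behavior of block codes is visible even in a toy code. The CD actually uses cross-interleaved Reed-Solomon (CIRC), which is far stronger, but the principle below is the same: a correctable error is repaired back to bit-perfect data, never to "slightly noisy" data. A minimal Hamming(7,4) sketch:

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming codeword (positions p1 p2 d1 p4 d2 d3 d4)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Decode 7 bits; any single-bit error is repaired EXACTLY -- no partial fix."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s4 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + 2 * s2 + 4 * s4      # non-zero syndrome = exact error position
    if syndrome:
        b[syndrome - 1] ^= 1             # flip the bad bit: back to data-perfection
    d = [b[2], b[4], b[5], b[6]]
    return sum(bit << i for i, bit in enumerate(d))

# every single-bit 'wobble' decodes back to the exact original nibble
for n in range(16):
    cw = hamming74_encode(n)
    for e in range(7):
        bad = cw[:]
        bad[e] ^= 1
        assert hamming74_decode(bad) == n
```

There is no output state between "bit-exact" and "uncorrectable"; that is why sub-bit "noise" cannot leak through the digital domain.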
  5. About the release -- heads up, and a request for opinions. There is a decoding submode option 'b'. It is different from the old 'b', but has a similar purpose. When you try to do decodes -- the straight --fcd, --fce, etc. -- if something seems a little odd, try adding a 'b' argument, like: --fcd=b, or --fce=Gb, or whatever. I wanna know if 'b' enabled should be the default. If it helps, use 'b' for now; if it doesn't help -- or hurts -- then tell me. I'll maybe be able to remove 'b' altogether, and we will have simplified things. My hearing is so darned unreliable, especially near the end of the day, that I need to ask for help. I wanna simplify these things, but a lot of the 'options' and 'submodes' come from my inability to decide. John
  6. New release: V1.4.7C. Very nice, very important. The best sound yet. Super-duper clean. When I do A/B, the previous version wasn't bad, but this is REALLY REALLY smooth. You can sometimes hear where the midrange might be suppressed on the older version, or something is partially missing. These problems are greatly improved. In the midst of working on documentation, I had a HW failure, needed a full re-install on the Windows box, etc. Well, back running and able to do the release. No substantive user differences; the best decoding commands are --fcd, --fce, --fcf, --fcd=G, --fce=G, --fcf=G. Same as before. There have been some very substantive sound improvements, but normally you use exactly the same settings as before. The decoding is much more robust, but once in a while you might benefit from perhaps one '-'. If you still have scripts that used two '-', that is okay, but normally one '-' is all you really need. There were some bugfixes (e.g. disabling HF1 works more correctly now, but when do you really need that?) There are also some minor frequency tweaks that zero in on the correct EQ more precisely -- the result being better cancellation of generated artifacts. Most of the time you won't notice the improvements, but they are really helpful. There are some other internal improvements, but nothing that means much to the end user. The decoder is MUCH MUCH tighter WRT the proper EQ. The good news is that the raw decoding of DolbyA material is perfect, but the EQ has been a painful and blind reverse engineering. Here is an example of the VERY TOUGH two-level decoding, as done against the Al Stewart recordings. I am not claiming perfection, but the original CD sounds small and narrow, and the decoded version is much more normal. This was decoded by: sourceCD -> decode0 -> decode2 -> output snippet. The entire album is done very well, like this example. Unlike a single-level decode, this one is two levels and much more difficult to do.
IT REALLY DOESN'T SOUND BAD, esp when compared to the compressed original. If I heard a snippet of the DolbyA master, it would be a little better. However, this is very listenable: (Al Stewart, Year of the Cat, If it doesn't come naturally). This recording is tougher to do than 'Dreamworld' (which, btw, is near perfect now.) Dreamworld is also two layers. * I tried to balance the levels -- the CDorig version has lots of compression and gives a 4dB level advantage. I tried to match the audible levels. Example snippets are attached; look in the repository for V1.4.7C. YOU NEED THE NEW .DLLs in the zipfile!!! The example snippets are directly attached... This is the first build on the new environment -- LET ME KNOW IF THERE ARE RUNTIME PROBLEMS!!! https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0 John aldemo-CDorig.flac aldemo-decodedsnippet.flac
  7. Even though I am working on various quality issues, after a certain level of quality, simple enjoyment (or lack of it) is the criterion. Do you like the music? Did someone really botch the performance? Did the recording engineer really mess up? Unless the technical defects are severe, technical issues seem to be secondary. There are two (or more) ways/modes of listening; sometimes just a plausible rendition of the basic music is plenty good enough (e.g. AM radio in the car.) When listening for beauty, that might be another mode... Perhaps listening for audio experience and music enjoyment, yet another mode... Severe variations that damage these various modes might qualify as a 'bad' recording. On the other hand, on the technical side, the biggest *normal* defects in my mind are 'woody' midrange (almost sounding like the music is coming through an old horn speaker), distorted bass, 'blank' vocal chorus, and damaged stereo image. These are super-common defects, and one reason why I started losing my hi-fi habit in the late '80s/early '90s. Squishy cymbals/high-hats aren't all that distracting -- hearing high-hats and cymbals in the real world CAN be very distracting. Squishy high-hats are easy to describe, but really aren't much of an irritant to me. I think that many people have accommodated common recording defects nowadays. This is similar to mostly accommodating the hiss and ticks/pops of the olden days. John
  8. I am still working on a beginning user guide; it is focused on decoding procedures and steps. One difficult thing for me: I am trying to avoid technical blather -- a 'beginner' isn't going to want to be overloaded with even more complexity and details. Also, I ran into an INTERESTING, very INTERESTING case. I will NOT be covering this case in the beginners guide, but will definitely mention the possibility... The Al Stewart recording, 1976 Year of the Cat, CD FA 3253, is a REALLY interesting case. It has a defect that makes decoding challenging, and it also requires TWO PASSES to decode to non-FA. It is a strange beast. Here are the data points: 1) Defect: The CD has the lows rolled off. To properly decode, the following EQ is needed: bass [email protected]/Q=0.8409 and [email protected]/Q=0.8409. Without the LF correction, the material produces all kinds of LF distortion -- ugly. (Yes, you have to add bass to make it clean -- it is NOT an overload kind of problem.) DolbyA decoding does require a nearly flat response (small perturbations are okay, but not multiple-dB errors.) This kind of thing is one of the 'tells' that prove DolbyA compression in the recording. Decoding and encoding must be mirror images at the lower frequencies. 2) The first pass must have the HF1 decoder turned off (--hf1off), --fwide=none (no image modification). 3) The second pass must have the MF and HF1 decoders turned off (--mfoff, --hf1off), --fwide=tpop (the minor stereo correction). There are several other odd settings that are used for the recording. Some day, when I can really start contributing to @lucretius's list, there will be a full decoding spec for the CD. The recordings are sounding very good now, no grain, but I am still not totally satisfied -- the decoded CD sounds as good as any commercial version that I have heard -- and MUCH better than the non-decoded CD.
Using two passes is a bit troublesome, but since the DHNRDS has very low distortion, it isn't as bad as if trying to use DolbyA HW. The caveat about recordings like this: the anti-distortion code might have strange interactions if used two times in a row. Part of what the code does is to remove parts of the signal that look like modulation products (it is a complex thing, as it doesn't just remove modulation products, but does so as it sees what it is doing -- really, really subtle, implicit design.) It does seem that the anti-MD still works well with two sequential passes, even in the higher-quality, anti-MD modes. A note about the reason why they were using DolbyA HW units for multi-band compression: it is probably about the exquisitely expert design in the attack/release circuitry, where even though it still produces distortion, it is much less than a simple, straightforward design. It is also very optimal -- you might notice that most compressors have more simplistic attack/release circuitry, but the DolbyA is highly dynamic and very good. I simply don't think that attack/release can be any faster while producing low distortion -- that is, without the crazy code in the DHNRDS. (On the DHNRDS, I have tested the anti-distortion by turning off the attack/release -- creating effectively about a 2msec attack/release, and producing very low distortion. It can only do this and still sound good because of the anti-IMD and anti-MD code.) Hardware from the '60s couldn't do what the DHNRDS does. This (CD FA 3253) is a perfect example of the FA encoding being intentional, using the DolbyA HW units as selectable-mode multiband compressors. By turning off the HF1 compressor when producing the material, the result has less distortion. The HF1 band is complex and has extra & strange time delays causing interactions into the HF0 band. It is a reasonable decision to turn off the HF1 compressor if using a DolbyA as a multiband general-purpose compressor.
(Also, the extra 5dB of compression above 9kHz is kind of silly and is problematical anyway.) I am impressed every time I review the design concepts. Amazingly, he made a compatible unit both with diodes (linear gain vs. current, exponential gain vs. voltage) and with jFETs (which have extreme variability from device to device.) The more important question: why are they doing the compression AT ALL? I think that my posting would be bleeped if I wrote MY opinion about using the 'bad-touch' compression. Mindless compression on recordings distributed on contemporary digital media (90+dB dynamic range) is 'bad touch'. Attempting to recover from the damage is 'good touch' :-). John
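The bass-shelf correction described above is the kind of thing a standard second-order shelving filter does. A minimal sketch using the widely used "Audio EQ Cookbook" low-shelf formulas; the actual gain/frequency values for this CD are redacted in the post, so the numbers below are placeholders (only Q=0.8409 comes from the post):

```python
import math

def low_shelf_coeffs(gain_db, f0, fs, q):
    """RBJ audio-EQ-cookbook low-shelf biquad; returns normalized (b, a) coefficients."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    saa = 2 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cosw + saa)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - saa)
    a0 = (A + 1) + (A - 1) * cosw + saa
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - saa
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

# placeholder values: +3 dB shelf at 100 Hz, Q = 0.8409, fs = 44.1 kHz
b, a = low_shelf_coeffs(3.0, 100.0, 44100.0, 0.8409)
# the full shelf gain is reached at DC: H(z=1) = sum(b)/sum(a)
h_dc = sum(b) / sum(a)
assert abs(20 * math.log10(h_dc) - 3.0) < 1e-9
```

Because encode and decode must be mirror images at low frequencies, a flat-response error of a few dB here is exactly the sort of thing that breaks a DolbyA decode.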
  9. The thing that bothers me about filters that aren't linear phase (constant delay) is that the delay is different vs. frequency. This means that the time-of-arrival will be distorted for filters other than linear phase. Maybe sometimes it sounds better; maybe it is a personal preference and not a general truism -- I do not know. If a filter does not have constant delay, that is tantamount to temporal distortion. However, in the extremely complex software that I wrote and have stewardship of, if I were stuck using IIR filters (a kind of minimum phase) or non-linear-phase FIR filters, the timing relationships would have made the project impossible. Or, at least, impossible to do all of the filtering and processing that it does do. The only non-linear-phase filters that I use are the super-wideband, accurate 90deg Hilbert transform filters, and the few IIR filters needed to properly emulate a DolbyA. Here is an idea of the beauty of a linear phase filter complex. I want to add a 200-300Hz chunk of a signal along with a 400-500Hz chunk, and make the bands very sharp... All you need to do is to pick a fixed number of taps based on your skirts, use the same for both filters, then filter the signal through each bandpass filter and add them together. No muss, no fuss -- everything stays in sync. In fact, I have a simple-to-utilize scheme that allows adding signals with different tap counts in the filters, the delays being compensated, and everything is still good. Another real-world complication: the DHNRDS has a propagation time of about 1/2 second. Much of that propagation is Hilberts, high-pass and low-pass filters... There is no way that I could predict the propagation if using minimum phase filters and equalizers. Intermediate stages still have the same problem. As it is now, through that 1/2 second of propagation, complex processing, etc., the DHNRDS is accurate (or should be) to just a few samples (well under 10.)
Last time I checked, two different modes on the DHNRDS still keep in sync to about 2-5 samples. Most of my recent tests show that you can subtract signals with different decoding modes, and actually hear the improvement of the higher modes. That is, even different modes will produce essentially the same output files (modulo differences in quality.) I use non-linear-phase filters with trepidation, though I would certainly use them if I needed to. However, anything but a linear phase filter will temporally distort your signal. (Hilbert transforms are also not linear phase, but they have well-defined characteristics that are easy to handle.) John
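The "two sharp bands, same tap count, just add them" recipe above can be sketched with plain windowed-sinc FIR design (a pure-Python sketch; a real implementation would use a proper filter-design tool). Symmetric taps give an exact, frequency-independent group delay of (N-1)/2 samples, so equal-length filters stay sample-aligned when their outputs are summed:

```python
import math

def bandpass_fir(lo, hi, fs, n_taps):
    """Symmetric (linear-phase) windowed-sinc bandpass: difference of two
    lowpass sinc kernels, Hamming-windowed. n_taps must be odd."""
    assert n_taps % 2 == 1
    m = (n_taps - 1) // 2                   # constant group delay, in samples
    taps = []
    for k in range(n_taps):
        t = k - m
        if t == 0:
            h = 2.0 * (hi - lo) / fs
        else:
            h = (math.sin(2 * math.pi * hi * t / fs)
                 - math.sin(2 * math.pi * lo * t / fs)) / (math.pi * t)
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * k / (n_taps - 1))  # Hamming window
        taps.append(h)
    return taps

N = 401
band1 = bandpass_fir(200.0, 300.0, 48000.0, N)   # the 200-300 Hz chunk
band2 = bandpass_fir(400.0, 500.0, 48000.0, N)   # the 400-500 Hz chunk
combined = [x + y for x, y in zip(band1, band2)]

# each filter, and their sum, stays symmetric => linear phase, identical delay
for h in (band1, band2, combined):
    assert all(abs(h[i] - h[-1 - i]) < 1e-12 for i in range(len(h)))
```

Because both kernels delay everything by exactly m samples, summing the two filtered signals keeps every frequency component time-aligned; a minimum-phase design would give each band a different, frequency-dependent delay.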
  10. Those are the good, minimum specs that are needed for reasonable judgements. Hobby claims need that kind of characterization also (perhaps to a bit less detail, just because of practicality.) I am also skeptical of the effects of the input transducers. I haven't seen the full specs recently -- but for those still into vinyl -- the impedance thing, esp for MM, is important. Since MM/MC preamps are common in the hobby realm, claims about distortion really need to include the impedance of the transducer. But yep -- this is getting into the realm of relatively meaningful. It WOULD be nice if a transparent explanation were given for non-techie audiophiles also -- what does that 0.001% mean if it increases to 0.01% at 30kHz? You know what I mean... IMD is also important, where THD vs IMD have different importance at different frequencies. To do the specs in full detail can be onerous. But it isn't about the 'specs' per se... Like, how bad is the IMD? Is IMD even an issue in the design? (It can be an issue, but hopefully a good design won't make it worse than what it should be.) I can keep on rambling about this without communicating more of what I intended -- but as we all know (both people who prefer the objective and those who prefer the subjective), raw numbers are meaningless by themselves; the *effect* of those 'numbers' (however diminishingly small or big) on the sound is the important thing. I'll sign off on this subject because my point is made -- I only feel uncomfortable when being too much or too little focused on the objective specs, and also uncomfortable if the subjective effects aren't verified/tested. When I suggest 'testing', I mean using experiments with controls. These true subjective tests, with some reasonable scientific/statistical/blind method, are very inconvenient, but can be amazingly beneficial to both the consumer and the engineer doing a design. For the designer, it is more about verification; for the consumer, it can be about validation or choice.
(There are probably other reasons.) I guess what I am trying to say is: don't discount ANY information source, and use whatever tools you can make available. Doing things 'right' can be incredibly inconvenient, and happily, much of the time, we are lucky -- and HOPEFULLY whatever mistakes we make in design/testing and as a customer end up being inconsequential. (To be polite, unless someone directly replies and effectively asks for a response, I'll just lurk again :-)). John
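The "what does that 0.001% mean" question at least has a well-defined measurement behind it. A minimal THD sketch (pure Python, single-bin DFTs over an integer number of periods; a synthetic cubic nonlinearity stands in for a device under test):

```python
import cmath, math

def bin_mag(x, freq, fs):
    """Amplitude of the component at `freq` via a single DFT bin
    (assumes an integer number of periods of `freq` in x)."""
    n = len(x)
    return abs(sum(v * cmath.exp(-2j * math.pi * freq * k / fs)
                   for k, v in enumerate(x))) * 2 / n

fs, f0, nsamp = 48000, 1000, 4800           # exactly 100 periods of the test tone
tone = [math.sin(2 * math.pi * f0 * k / fs) for k in range(nsamp)]
out = [v + 0.01 * v ** 3 for v in tone]     # mild cubic nonlinearity (the 'DUT')

fund = bin_mag(out, f0, fs)
harms = [bin_mag(out, f0 * h, fs) for h in (2, 3, 4, 5)]
thd = math.sqrt(sum(m * m for m in harms)) / fund

# x + 0.01*x^3 on a unit sine => 1.0075*sin(wt) - 0.0025*sin(3wt),
# so THD = 0.0025/1.0075, about 0.25%
assert abs(thd - 0.0025 / 1.0075) < 1e-4
```

The same single-bin approach extends to a two-tone IMD measurement by looking at sum/difference frequencies instead of harmonics; either way, the number only becomes meaningful once its audible effect (and the measurement conditions) are stated.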
  11. I think that these matters about specsmanship would be a little more honest if there were open explanations of when/if some initially extreme value for a measurement might be helpful. Just saying 'my circuit has 0.0001% distortion and is so much better than Sam's circuit with 0.0002% distortion' is a rather useless argument or comparison. Those are blind comparisons with no context. I have this wonderful little paper that some person wrote on op-amps, creating his own specs with LOTS more detail than normal manufacturers' specs. That 'wonderful' paper helps to show the behavior with much more circuit context involved. It provides much more helpful behavior information for more realistic circuit configurations (of course, not perfect.) There are so many choices of op-amps that good, understandable objective measurements are very helpful to get started. I can agree that a blind spec without explanation of context is just a little better than irrelevant. This reminds me of the old 'lines of resolution' spec for SVHS and VHS decks. The number was meaningless, but we consumers always know that 'bigger is better', right? Heh -- the way that those 'lines of resolution' were measured was almost meaningless WRT actual quality of video reproduction. They were especially meaningless when comparing consumer vs pro video equipment. (Nowadays, such matters are anachronistic -- we are so spoilt with almost flat & more linear video response in comparison.) This would be similar to the 0.001% vs 0.00001% distortion... For example (another one of my diversions): how many such 'wonderful' preamps' characteristics are measured with a source that truly emulates (for example) an MM cartridge? Such transducers are well known to have very wide-ranging characteristics and high impedance in certain frequency bands. Such a high transducer impedance can certainly cause negative effects WRT modulation of input impedance vs. signal waveform/frequency/level.
The noise matter bothers me less, because it is almost impossible for a competent designer to create much more preamp noise than an MM cartridge at frequencies where the ear is sensitive. However, I seldom see a real distortion measurement where the source is at least a model of a cartridge (or, perhaps, a low impedance signal feeding through a cartridge of choice, so that an actual distortion measurement can be done.) Big, fat low-noise jFETs or medium-geometry BJTs can have noticeably changing input capacitance in a normal amplifier circuit. This changing capacitance acts superficially like a changing resistance, and in certain cases can produce significant distortion. It WOULD be nice if the measurements actually measured in-circuit/in-situ behavior, and comparisons were made with the priorities openly explained. At least, when someone says that (for example) the 0.25dB down at 20kHz and 1dB down at 30kHz has the effect of significantly changing the sound of the cymbals crashing -- the judgement can then be prioritized by the person reading the spec. At least I know how I would prioritize that interpretation of that raw frequency response spec, esp at 30kHz. (BTW, the change in cymbals crashing might be caused more by dynamic input impedance effects, say, on MM preamps.) The general categories of objective & subjective can matter -- the problem with relevance has to do with the priority/usefulness of the actual spec, and the measurement/usability situation for that spec. John
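The rising source impedance of an MM cartridge is easy to put rough numbers on. A sketch with a simple series R-L model; the element values are typical-ish assumptions for illustration, not measurements of any particular cartridge:

```python
import math

def mm_source_impedance(freq_hz, r_ohm=600.0, l_henry=0.5):
    """|Z| of a series R-L model of a moving-magnet cartridge.
    Default R/L values are assumed 'typical-ish', purely illustrative."""
    xl = 2 * math.pi * freq_hz * l_henry    # inductive reactance rises with frequency
    return math.hypot(r_ohm, xl)

z_100 = mm_source_impedance(100.0)          # resistance dominates: well under 1 kohm
z_20k = mm_source_impedance(20000.0)        # inductance dominates: tens of kohms
assert z_20k > 50 * z_100                   # huge swing across the audio band
```

A distortion spec taken from a 50-ohm lab generator tells you little about how a preamp's signal-dependent input capacitance behaves against a source whose impedance swings this much across the band; that is exactly the in-situ measurement gap described above.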
  12. Hey -- while trying to figure out how to describe choosing between the modes -- I found a REALLY CLEAN recording, and I think that it might be interesting to show the comparisons (both simple objective measurements and the subjective-oriented results.) This weekend and probably a few more following days will be usage guides and docs. The decoder MUST be easy to use, and usability has been the primary focus for the recent updates. The problem with the decoder design is that I have NO objective measurement for general performance, and tweak-tweak-tweak is so very time-consuming, error-prone and frustrating!!!! The recording for testing/checking decoding is from the 'Brothers In Arms' Dire Straits album, selection #2, Walk of Life. I have two relatively superb commercial versions, and the decoded version to demo. I also have a 'j random' version from an old CD -- only one cut from it, but it is much more compressed than the two commercial versions shown here. * You can really hear the relative lack of compression in the decoded version. Also, the raw CD has more HF compression, but even the MFSL version is somewhat compressed. (As mentioned previously -- I have another CD, SOMEWHERE, that is much more compressed than any of these.) The decoding parameters were: "--fce --tone=-13.45 --xpp=max --wof=1.19", for the decoded version. There was ZERO post-decoding EQ. A full copy of the album would be improved by the following, but I did *not* use this on this demo: "--pe45=2,0.75 --pe375=4,0.75 --pe1k=4,0.75 --pe9k=K,-0.75 --pe10p5k=4,-0.75 --pe12k=2,-0.75". This EQ makes the tonality sound closer to the MFSL version, but the general differences in sound character are still intact. * When decoding, I found a very significant decrease in fog when using --xpp=max instead of --xpp alone. There is a difficulty in choosing between "--fce" and "--fcd=G", because they are so similar, but different enough for getting the very best, most precise results.
(--fce is 1 [email protected], stop at 22kHz; --fcd=G is [email protected], stop at 10.5kHz, and 1 [email protected], stop at 20kHz.) You can see where the results would be similar, except for the overlap on the --fcd=G, and some recordings DO benefit from this. My challenge is that I am trying to describe how to choose between --fcd=G and --fce in documentation, and I need to learn what the differences sound like. Gonna try to explain it. Here are the snippets: start at 60 seconds into '02 - Walk of Life', lasting for 30 seconds. There is ZERO monkey business in this demo, except a serious attempt to present with approx the same apparent audio levels (e.g. stripping the playback level metadata, audibly matching the levels as reasonably as I can.) Also included is a screen grab of the primitive SoX stats, with the summary: 1) MFSL version (pk - RMS: 16.37); 2) Raw, high quality FA CD (pk - RMS: 16.63); 3) decode of the high quality FA CD (pk - RMS: 17.82). (Actual snippets at the end.) John Dire-Walk-MFSLCD.flac Dire-Walk-ORIGCD.flac Dire-Walk-decoded.flac
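The "pk - RMS" figures quoted from the SoX stats are just the crest factor in dB, and it is easy to compute directly from samples (a minimal sketch; for a pure sine the answer is 3.01 dB, and heavy compression pushes the figure down, which is why the decoded version's higher 17.82 indicates less compression):

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB -- the 'pk - RMS' figure reported by SoX stats."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# a pure sine has a crest factor of 20*log10(sqrt(2)) ~= 3.01 dB
sine = [math.sin(2 * math.pi * k / 100) for k in range(100)]
assert abs(crest_factor_db(sine) - 3.0103) < 0.01

# hard clipping (a crude stand-in for compression/limiting) lowers the figure
clipped = [max(-0.5, min(0.5, s)) for s in sine]
assert crest_factor_db(clipped) < crest_factor_db(sine)
```

This is a crude dynamics indicator (it says nothing about which band was compressed), but it is an honest, repeatable number for comparing masterings of the same snippet.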
  13. I think that my diversions-as-examples might diffuse the importance of what I am saying. Let me try to translate: we have so many great tools nowadays, IF we use them; and if we don't fall into biases like 'my hearing is so good that I don't need to measure', or 'the old tools are good enough', maybe we can do things better, quicker, more optimally, etc. Maybe even have more fun doing other things also. More time to do new things because of less time wasted. Maybe each of these wonderful claims of sufficiency is true, but sometimes we also over-estimate our own capabilities. Sometimes we even totally miss the mark. More succinctly: I don't believe in an objective-vs-subjective view of things. However, relying too much on the subjective, and not taking FULL advantage of the available objective measurement tools, is a bit anachronistic, and often a waste of time. This is a corollary of my anti-'tweak-tweak-tweak' stance. I don't mean to say NEVER 'tweak tweak tweak', but instead: why not take advantage of the WONDERFUL tools that we already have, and aggressively avoid 'tweak tweak tweak' instead? It isn't either/or -- the reason for the Dolby diversion is simply because he did SUPER well considering the limited tools available. He could have done better with the WONDERFUL stuff that we have today.* I doubt that, given the time frame of his early work, he would have had the chance to do a computer simulation for a first-cut optimization. I could do a wonderful low noise pre-amp without computer optimization and without careful spectral distortion analysis of the results -- but why not take *full* advantage of current tools? It isn't difficult to do so. I try to be more self-critical instead of being totally self-sufficient, 'knowing' that my hearing is 'good enough'.
I know that EVERYONE HERE has human hearing and human intelligence -- and I have known some of the most brilliant people that there are -- but sometimes being biased towards one or the other technique might make someone a little less productive and innovative than they could have been. Some of the brightest people that I have known (and certainly one degree of separation from some of the VERY brightest) have also been stunted by needlessly set-in-stone opinions. * Today, R Dolby wouldn't have even needed to do his 'DolbyA NR', but just using him and his situation as an example. John
  14. Update on the *great* quality of the decoding examples (just reviewed a few -- perhaps a 10% suboptimum rate in the list that I created, but still darned good.) After all of the decoding tests that I have done in the last year, there has been a checkered past on quality. These latest results are astoundingly good. I don't know if many of you remember my travails on Supertramp recordings, Fleetwood Mac, Carpenters, ABBA, etc. Going back to the previous results and comparing with the examples given yesterday -- these new results are, for all practical purposes, using the correct sources, near perfect or *perfect*. SIMPLE decoding results, better than my previous, but not as good as my 'tweaked' examples yesterday, can be achieved with JUST --fcd or --fce, and a proper setting of --tone=. It is REALLY that simple now. No arguments are really needed... For example, the 'precise' results for certain recordings are --fcd=G, but in reality, very good results can be gotten with --fce alone. There is a very slight difference in the 9-11kHz region between --fcd=G and --fce; the difference is a dB or so. Just --fcd or --fce is probably good enough for most casual listening (except for VERY FEW select recordings and certain good classical stuff.) The less accurate setting might actually sound better -- but accuracy is my goal now. Today is documentation update day. I think that I am going to be forcibly distracted away tomorrow also, but gonna try to update all of the background information and start integrating more work from others. It might take a few more days, as I am much worse at communication skills than at doing 'new' development. It takes me several times longer to do concise documentation than it would take someone with normal writing skills. So, be patient. I am going to TRY to stay away from programming for a few days. John
  15. I don't disagree with your abilities/skill, but here is my position: the problem is that just using 'scopes in the traditional ways is not very selective in providing noise information. They are okay at general information, but it takes spectral and other presentations to study what is going on. I am NOT claiming that *aided* measurements are the only way to find problems, but nowadays we have so many easy-to-use information-sorting aids, there is no reason not to use them. Here is one of my long, blathering anecdotes (off topic, but an exemplar): Do you know how to make an amazingly good AM/MW/SW receiver, very simply, if you know what you are doing? A very simple stable oscillator at approx 4X the receiving frequency, a fancy analog switch circuit with a few specially chosen *almost commodity* analog switches, a very simple, carefully laid out circuit, and a good, wideband (at least 96k, but 192k is better) 24bit stereo audio interface, connected to your computer. Maybe a slight amount of input selectivity is a little helpful, and minimal input gain -- any analog RF amp is a tricky design with low enough distortion not to make the receiver worse than the raw RF switching converter. Perhaps use the RF amp as a buffer against radiation, and to impedance match from a short wire antenna. (The switching device likes to work in the 50-150 ohm range.) (Direct conversion SW receiver, using an audio interface, with full demod capabilities: AM/FM/SSB/digital/etc.) The only real limitation is the baseband bandwidth of the composite signal, limited to 1/2 the sample rate. This VERY VERY simple design will very often out-perform a very highly engineered, very fancy analog SW receiver. With a little more work, it can easily blow one away.
There is a whole series of very new (pretty much in the last 20yrs), innovative designs, both for the switching RF converter designs and even traditional RF mixer designs. (A guy named Trask did some good papers on his super-innovative improvements to RF mixer designs at lower frequencies, and I forget the name of the person who designed this crazy-good, but simple, SW-level receiver concept.) I think that Trask did both a brilliant switching mixer & a lossless-feedback derivative of the traditional MC1496-type scheme. Both methods used real scientific and innovative thinking. With the anecdote above -- which is wisest? To do a lot of hard-core, grating engineering/design to develop a retrogressive SW receiver, where perhaps the biggest advantage is that it 'looks and feels' like a traditional SW receiver? Or a relatively 'smart' design using an ingenious RF receiver that takes advantage of current technology, and doesn't have any of the IMD effects of a superhet design? The question is rhetorical, because there is no real answer -- but a direct conversion approach tends to be very common nowadays -- eliminating huge chains of complex, hard-earned circuitry. I am not claiming that the new way is the only way to do it, but USING new technology opens up opportunities and gives more information for understanding what is going on. Clear away the weeds, rather than deal with all of the weeds left over from the past. Sometimes the old way is okay, but an 'oscilloscope' -- still a super useful tool -- just measuring gross noise levels (unless there is some kind of information processing, like spectrum analysis) is doing it the hard way. There be dragons, otherwise. Using my current project as an example... I am pretty sure that if R Dolby had the technology commonly available today when designing the DolbyA, he would have made significant improvements over the eventual current design...
He was a genius, and did a wonderful job, much better than I would have -- but he would have also caught some audibly noticeable (because of the demands of current recordings) flaws in the design. What he DID do was amazingly good -- for an easily understandable example, I made the mistake of criticizing his FET/transistor combination gain blocks -- until I analyzed them. They work very well -- he was a special genius -- some of his design parameters are very counter-intuitive relative to what a hobby person would probably do today. I was super surprised by the performance of his little gain blocks. I doubt anyone corresponding here today is at the innovative level of Mr Dolby, but the available technology aids tend to level the playing field for us more average innovative pseudo-geniuses :-). John
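Once the quadrature pair from a direct-conversion front end reaches the computer, the demodulation math is tiny. A toy sketch (pure Python; the signal is synthesized rather than sampled from hardware, and a boxcar moving-average stands in for a real decimating lowpass): quadrature-mix an AM signal to baseband and take the envelope.

```python
import math

fs, fc, fm = 48000, 10000, 500          # sample rate, carrier, modulation (toy values)
n = 8192
mod = [0.5 * math.sin(2 * math.pi * fm * k / fs) for k in range(n)]
x = [(1 + mod[k]) * math.cos(2 * math.pi * fc * k / fs) for k in range(n)]  # AM signal

# quadrature mix to baseband (what the switching mixer + soundcard would deliver)
i_raw = [x[k] * math.cos(2 * math.pi * fc * k / fs) for k in range(n)]
q_raw = [-x[k] * math.sin(2 * math.pi * fc * k / fs) for k in range(n)]

def moving_avg(v, width):
    """Crude linear-phase lowpass: boxcar average, group delay (width-1)/2 samples."""
    acc, out = 0.0, []
    for k, s in enumerate(v):
        acc += s
        if k >= width:
            acc -= v[k - width]
        out.append(acc / min(k + 1, width))
    return out

W = 49                                   # odd width => integer delay of 24 samples
i_bb, q_bb = moving_avg(i_raw, W), moving_avg(q_raw, W)
env = [math.hypot(a, b) for a, b in zip(i_bb, q_bb)]   # recovered (scaled, delayed) envelope

# the envelope should track the 500 Hz modulation, delayed by (W-1)/2 = 24 samples
ref = [math.sin(2 * math.pi * fm * (k - 24) / fs) for k in range(n)]
lo, hi = 500, n - 500                    # skip filter warm-up and tail
mean = sum(env[lo:hi]) / (hi - lo)
e = [env[k] - mean for k in range(lo, hi)]
r = [ref[k] for k in range(lo, hi)]
corr = (sum(a * b for a, b in zip(e, r))
        / math.sqrt(sum(a * a for a in e) * sum(b * b for b in r)))
assert corr > 0.95
```

SSB and other modes fall out of the same I/Q pair with different post-processing, which is why the composite baseband (limited to half the sample rate) plus software demodulation replaces whole chains of superhet circuitry.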