
Posts posted by bluesman

  1. 7 hours ago, Harold Shand said:

    I am not so much looking for component suggestions as for first principles on how to achieve optimal stereo sound quality for my viewing, without resorting to multi-channel A/V receivers, soundbars &c.

    Unless you really want the complexity of many (if not most) of the suggestions offered besides mine, you’ll probably find great joy in a decent A/V receiver even if you only use 2 of its channels. SQ is excellent once you get above $400-500 street price.  All have DSP, many at that price do DSD, and even the lesser ones have decent 24/192 DACs.  All you need is HDMI cables and a pair of decent speakers.
     

    As for video quality, gamers have different definitions from ours. They need immediacy, rapid refresh rates, and minimal motion artifact, none of which will improve TV viewing beyond the quality of the source program, even at 4K.  And gamers trade contrast for brightness, which can degrade TV and movie viewing at extremes like 100,000:1.

  2. 2 hours ago, R1200CL said:

    The normal and best and cheapest option is digital out (very often Toslink) from TV to DAC.  Haven’t you done that yet?

    This is how we do it in our living room.  I ran an optical cable from the Samsung’s audio out to my SMSL DAC, and we get great stereo TV sound through my Prima Luna amp & Focal speakers.

     

    In our bedroom, I ran the optical audio cable from a smaller Samsung TV to a low latency BT transmitter, with the LL receiver connected optically to a pair of active Edifier speakers.  The TV’s in a built-in wall unit surrounded by clothes closets, with no place to put speakers & no way to run power or audio cables to them.  The LL Bluetooth is fine, with good audio-video sync and far better SQ than the TV’s own speakers.
     

    In the library, I’m using my years-old Pioneer Elite AV receiver with passive Edifier speakers in a 4.0 configuration right now, driven by HDMI.  I’ve used it for simple 2 channel audio off & on, and it’s surprisingly good.  But today’s $400 HT receivers are even better and make perfect sense for TV sound even in “only” 2 channels. They all have sub line outputs, and a small powered sub does add a lot to TV sound (I have an inexpensive 8” Yamaha that we love).

  3. 2 hours ago, pkane2001 said:

    But if you're talking about understanding what motivates a single individual to buy a specific piece of equipment based on a few of their posts and reviews, then no, I don't see how that's possible, and we'll just have to agree to disagree.

    We’re actually not disagreeing at all. It takes the big data approach to build the model before we can apply it to individuals. But it’s eminently doable right now, and the data are readily accessible.  An individual may only post a few times about a purchase - but he or she leaves a huge trail of searches, downloads, vendor inquiries, and the like that are equally important.  They can all be tracked by IP address, screen name, etc.
     

    I’ve built successful predictive models for hospital readmissions, success of treatment for heart failure, when to stop medications, etc.  I even built a model for criterion-based diagnosis of Covid-19 in March, when it became obvious that we wouldn’t be testing random population samples to identify patterns of spread.  Even with the support of a group that does NFL predictive analytics, I couldn’t convince anyone who mattered that it was a worthwhile effort.

     

    It is.

  4. 2 hours ago, pkane2001 said:

    You are, in effect, suggesting that you'll be able to understand, explain, and predict choices and motivations of someone you hardly know at a distance, from a few known purchase decisions and a few posts on internet forums. I'd argue that this is a fool's errand, especially if you're interested in any sort of accuracy. I often can't predict what my wife would prefer, and I've spent most of my life with her, observing her preferences and talking to her about her choices thousands of times. So, no, I don't think it's as simple as you describe :)

    You skipped a bit of my post - I clearly said it takes “enough good information”.  Of course “a few” decisions and posts is an insufficient number. But this can be done - and it is, every day, by thousands of data scientists with access to sufficient information to build statistically sound models.  Access to an individual’s social media posts, web searches, etc. is a treasure trove of objective and highly predictive data that are more accurate at diagnosing disease than a lot of traditionally used medical and demographic info (e.g. this typical example).  It’s a very valuable population health tool.  The sheer amount of available data is astounding, and it’s very revealing.  Look up Lyle Ungar’s work - he’s been studying this for years.
     

    We obviously have patients’ permission to access what we use for research - but you can buy vast deidentified datasets and build models with great accuracy, as long as you have enough good data.

     

    Here’s a simplified example.  If a man does a web search on treatment for increasing urinary frequency, you only know that he either has the problem, knows someone with the problem, or is curious about it.  If he only does it once and his other web searches are compatible with a young adult, he’s more likely to be writing a report than he is to have a medical problem.  If his other interests suggest that he’s middle aged and he searches again every few months at a slowly increasing rate, the most likely reason is benign prostatic enlargement.  If his web profile suggests a young adult and he searches every few days, adding burning pain to the second search, he probably has an infection.  And if his searches suggest his age to be 60+ and he also seeks info on unexplained weight loss, prostate cancer becomes a more likely explanation.
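    To make that reasoning concrete, here’s the same logic as a toy rule set in Python. Every threshold and category is invented for illustration - a real model would learn these boundaries from thousands of labeled cases rather than hand-written rules:

    ```python
    # A toy restatement of the example above. All thresholds are invented;
    # a real model would learn these splits from labeled data, not hand-coded rules.
    def likely_explanation(est_age, searches_per_month, rate_rising,
                           mentions_burning, mentions_weight_loss):
        if searches_per_month < 0.1:
            return "one-off curiosity (e.g., writing a report)"
        if est_age < 35 and mentions_burning and searches_per_month > 4:
            return "probable infection"
        if est_age >= 60 and mentions_weight_loss:
            return "prostate cancer moves up the differential"
        if 40 <= est_age < 60 and rate_rising:
            return "benign prostatic enlargement"
        return "insufficient data"

    print(likely_explanation(52, 0.5, True, False, False))
    # -> benign prostatic enlargement
    ```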

     

    Now throw in tweets about how he feels. Add his credit card purchasing data and you start to get a clearer picture.  Obviously it takes more than a few data points. But current and historical behavior definitely predict future behavior. Why do you think consumer data are worth so much money?  
     

    Knowing that a given audiophile had returned 6 out of 10 equipment purchases would tell you something about him or her. Access to the alleged problems prompting return might offer even more insight. Knowing that Stereophile (to which Amazon says he has a Kindle subscription) reviewed all 10 favorably a month or less before purchase, but that an audio website he visits frequently panned the 6 he returned just before he returned them, focuses the picture a bit more.  Run a correlation analysis on performance data of the units in question - if it turns out that 6 of 6 were returned and replaced with items that all shared some measured “improvement”, we’re developing a model likely to predict his satisfaction with future audio purchases.
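    As a minimal sketch of that correlation step (with invented numbers standing in for the 10 purchases and their shared measurement):

    ```python
    # A minimal sketch with invented data: does a measured "improvement" track
    # the keep/return decision across the 10 purchases described above?
    import pandas as pd

    df = pd.DataFrame({
        "returned": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],               # 6 returns, 4 keeps
        "sinad_db": [88, 90, 87, 91, 89, 90, 104, 102, 106, 103], # hypothetical metric
    })
    # Pearson on a binary/continuous pair is the point-biserial correlation
    print(df["returned"].corr(df["sinad_db"]))
    # Strongly negative here: the returns cluster at the low-SINAD end
    ```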

     

    It takes thousands of data points to support a sound and useful model. But you can buy or otherwise access millions of data points today - this is how those “targeted ads” somehow follow you from website to seemingly unrelated website.  Believing that our behavior is private and inaccessible to others is hopelessly naive.  Many industries are monitoring and guiding much of our lives right now.  Predictive analytics are telling them what you’re going to buy next year, what you’ll pay for it, and how soon you’ll replace it.  And they’re very often right.

  5. 13 hours ago, pkane2001 said:

    That, of course, is fine. But then you are disagreeing with Greene, Harley, and Archimago, since they all seem to think that there's a way to predict how much satisfaction an audio device will give to another user.

    And there is a way.  In fact, there are multiple ways using predictive analytics and a dataset that includes detailed info on a given audiophile's experiences with audio devices.  For me, trying to predict satisfaction from traditional technical metrics is usually a fool's errand.  I'm an old surgeon, so I always viewed behavioral science as weak when I was younger.  Now I realize that our behavior contains more information about our interactions with the world than most object-related direct measurements.  Many social scientists are not as rigorous about their data analyses and standards for significance as I'd like - but they're correct in believing that our behavior contains a lot of objective information if we look closely enough and think outside the box.

     

    Give me enough good information and the job is as easy as pie.  The data must include purchase history, historical satisfaction, repeat purchases, mean time of ownership, how and why each device was dismissed from the stable, mods done, social media posts questioning how to improve each one, what his or her friends bought / sold and when,  etc.  Add in everything we can know about each of the devices themselves, including all technical data and what reviews the subject read before, during, and after ownership of each piece.  Accuracy improves with each additional subset, e.g. stability of interpersonal relationships, job security, illness, unexpected downturns, etc.  Facts like knowing that one purchase was rapidly followed by a flood of web posts asking for ideas on improving the new acquisition while another was followed by a year of quiet enjoyment add to the accuracy of such predictions.

     

    With enough good data, it's easy to predict a given audiophile's satisfaction with the next device within a confidence interval that inversely reflects the number of devices and experiences, the consistency of the subject's behavior toward audio devices, the consistency of behavior toward the rest of his or her world, etc.  I'd build a model using random forests, recursive partitioning, or any of a number of other approaches that identify the biggest dichotomies in behavior, e.g. kept 24 of 27 devices for more than 6 months that had at least one rave review after purchase, but sold 16 of 18 that were given lukewarm post-buy reviews in a major publication.
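    For the curious, here's a minimal scikit-learn sketch of that kind of model. The file and column names are hypothetical - the point is the technique: let the forest find the big behavioral dichotomies.

    ```python
    # A minimal sketch, not a finished model. Dataset and feature names are
    # hypothetical; the forest surfaces the biggest behavioral dichotomies.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("audiophile_history.csv")             # one row per purchase
    features = ["rave_review_pre_purchase", "post_buy_forum_posts",
                "prior_return_rate", "avg_months_owned_prior"]
    X, y = df[features], df["kept_over_6_months"]          # 1 = kept, 0 = sold/returned

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

    print(model.score(X_te, y_te))                          # held-out accuracy
    print(dict(zip(features, model.feature_importances_)))  # which behaviors split hardest
    ```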

  6. 40 minutes ago, ASRMichael said:

    it did work - a long process to get just 10 albums, but they were my favourite albums at the time

    ...and that's how progress is made.  It's easier now, and before long it'll be the status quo.  Check out the latest AI/VC efforts online - they're really pretty cool, even though they're still add-ons.  Once some genius figures out how to integrate them into SoCs, we'll be in fat city!

  7. 33 minutes ago, LarryMagoo said:

    Roon with Voice control would just be icing on the music listening cake

    The new crop of VC/AI programs will almost certainly be able to control Roon.  What's needed is the ability to send, in response to a voice command, the same call to the processors that's generated by clicking on a Roon icon. It's not as simple as that sounds, but it's doable today.  I've been playing with this using Braina, but I've not yet succeeded.
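    Here's the shape of what I've been trying, as a minimal Python sketch. The speech capture is real (the SpeechRecognition package); the Roon call itself is a hypothetical stub, because reproducing the call that the icon click generates is exactly the unsolved part:

    ```python
    # A minimal sketch of the voice-to-Roon idea. Speech capture uses the real
    # SpeechRecognition package; send_to_roon() is a hypothetical stub standing in
    # for whatever call clicking the corresponding Roon icon actually generates.
    import speech_recognition as sr

    def send_to_roon(command: str):
        # Hypothetical - this dispatch is the hard, unsolved part of the problem.
        print(f"[stub] would send '{command}' to Roon")

    COMMANDS = {"play": "play", "pause": "pause", "next track": "next"}

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source)

    heard = recognizer.recognize_google(audio).lower()  # free Google Web Speech API
    for phrase, command in COMMANDS.items():
        if phrase in heard:
            send_to_roon(command)
            break
    ```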

     

    Siri is almost certainly capable of doing this now - there are many custom business applications out there resulting from licensed use of Siri technology.  But I suspect that Apple's not about to support, at its own expense, any platform that doesn't augment its revenue stream beyond its costs.

  8. On 11/14/2020 at 11:33 AM, mitchco said:

    This article on Ableton suggests that it can also be done with ASIO4ALL on the PC. I did not look closely and have not tried it myself: https://www.noterepeat.com/articles/how-to/101-ableton-live-using-multiple-interfaces

    There used to be several digital audio interfaces that would work together to increase the I/O complement - most were PCI cards. The purpose of this was to enable multitrack recording back when affordable, high quality desktop recording was emerging for home and small studio use.

     

    Here's a LINK to an excellent discussion of this topic from Sound on Sound, but it's 15 years old because this is not done much any more.  As they point out, M-Audio Delta devices supported use of 4 units together on Win, 3 on Mac, and 8 on Linux (using OSS drivers). There were also MOTU and ESI DAIs that worked in multiples.  But that was back when external / USB DAIs were less common and MC units were very expensive. Most musicians and others who record today just buy an MC USB DAI.

     

    From this article:  "If you ever think you'll need more inputs and outputs than you have at present, the best approach is to choose an interface that already has multi-device drivers, such as the ones I've mentioned. Then, when you buy another compatible interface, your ASIO (Audio Streaming Input Output) compatible audio applications will simply see one larger interface. Most musicians find this runs like a dream, although in the case of multiple PCI cards, very occasionally the odd PC motherboard may throw a spanner in the works and prevent the cards from running smoothly alongside each other."
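    The Linux analogue of those multi-device drivers still exists in ALSA's "multi" plugin. Here's a minimal ~/.asoundrc sketch that presents two stereo interfaces as one 4-channel device. The card numbers are placeholders for whatever "aplay -l" reports on your machine, and note the classic caveat: the two cards' clocks are unsynchronized and will drift, which is a big part of why this approach fell out of favor.

    ```
    # ~/.asoundrc - a minimal sketch: two stereo cards as one 4-channel PCM.
    # hw:0,0 and hw:1,0 are placeholders; run "aplay -l" for your card numbers.
    # Caveat: the cards run on independent clocks, so they will slowly drift apart.
    pcm.quad {
        type multi
        slaves.a.pcm "hw:0,0"
        slaves.a.channels 2
        slaves.b.pcm "hw:1,0"
        slaves.b.channels 2
        bindings.0 { slave a; channel 0; }
        bindings.1 { slave a; channel 1; }
        bindings.2 { slave b; channel 0; }
        bindings.3 { slave b; channel 1; }
    }
    ```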

     

    Using multiple 1- or 2-channel DAIs for simple MC playback is rather awkward for most audiophiles.  These devices are designed for recording, so the controls and layouts favor that use.  Playback is a secondary consideration, as it's only there for monitoring. And, of course, you have multiple industrial-looking devices and their interconnects sitting around.  The best approach for most is simply to buy MC equipment.  I know of no good way to send pairs of MC outputs to different USB ports (the unsynchronized clocks in the ALSA sketch above are one reason why).

  9. On 11/1/2020 at 8:41 PM, Audiophile Neuroscience said:

    A friend of mine is wanting to buy a JRiver Id to connect directly to his USB DAC, an Emotiva Stealth DC-1.

     

    The Emotiva manual states it supports UAC2 - USB Audio Class 2.

     

    From the Interact forum, JimH notes that the Id runs Debian (not Samba, as stated in the Wiki). A Linux distribution may support UAC2 but apparently still not work with a specific DAC.

     

    Does anyone have first hand knowledge of an Id working with the Emotiva Stealth DC-1?

     

    Cheers and thanks

    Somebody’s misinformed.  Samba is a suite of programs that implement the SMB file sharing protocol and allow a Linux box to act as a domain controller. With Samba (and similar programs), you can access files on Win machines and networks from Linux machines & vice versa.  Debian is an operating system built on the Linux kernel. The two are neither mutually exclusive nor determinants of compatibility between a computer and a DAC.

     

    Samba is one way for the Id to access files on a Windows network share.  But this has nothing to do with driving the DAC - it’s how the file gets from the network to the Id. Once the file’s open in JRMC, it’s processed and sent to the DAC via USB.

     

    The USB audio class drivers in all current and recent versions of Debian, Ubuntu, and other major Linux distros are UAC2 compliant. My Stealth has worked fine with every device I’ve described in any of my posts and articles.  This includes NUCs running multiple Linux distros, Raspberry Pis with many different OSs, Win10 PCs, an Asus Chromebox, a 2005 Toshiba Satellite laptop on Ubuntu 18 and Ubuntu Studio 20, etc.
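    If anyone wants to check this on their own machine, here’s a quick Python one-off. The “DC-1” substring is just whatever name your DAC reports to the kernel - adjust to taste:

    ```python
    # A quick sanity check on any Linux box: list the sound cards the kernel sees.
    # "DC-1" is simply the name my DAC happens to report; substitute yours.
    from pathlib import Path

    cards = Path("/proc/asound/cards").read_text()
    print(cards)                       # every ALSA device, USB DACs included
    if "DC-1" in cards:
        print("DAC enumerated via the generic snd-usb-audio (UAC2) driver")
    ```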

     

    Anything is possible, but I know of no reason why a DC-1 wouldn’t work with a JRMC Id.  Mine has played fine with dozens of diverse computers on many Linux platforms, both old and new.

  10. 19 minutes ago, ShawnC said:

    Again sorry, @bluesman 

    No worries! Your posts are appropriate expressions of a valid point of view, and I appreciate your clarification.  But we see many AS posts about minor distinctions in sound quality (the very existence of many of which is disputed among audiophiles) from people who strangely dismiss major sonic differences that they should be hearing, as demonstrated by those two guitar videos I posted.  I'd almost go so far as to say that anyone who can't hear any difference between them (a Maton concert-body flat top and a 17" D'Aquisto archtop) might want to reconsider calling himself or herself an audiophile.

     

    There's absolutely no reason for anyone to know anything about specific instruments to understand, appreciate, and love music.  But you'd have to screw up recording these guys and their guitars really badly to make them sound alike.  The same is true for voices.  I've heard some recordings of Sarah Vaughan and of Ella Fitzgerald in which it was not easy to identify which of the two was singing.  And their voices both changed a lot as they aged.  Fortunately, there are so many good recordings of each over the years that you can appreciate how their voices matured and became more expressive.  Hearing all of these things and much more is part of the experience of listening to music.

     

    PS:  I'm not a recording engineer - I'm a musician.

  11. 1 hour ago, ShawnC said:

    I didn’t say anything about being accurate. Just that a human voice and/or an acoustic guitar or combo has a fantastic sound quality to listen to. You could say the same for a violin and vocals or whatever. But any decent engineer should be able to recreate the human voice and an acoustic or whatever without any trivial problems. Now, to listen to a portion of music and pick out every detail like what model of guitar and guitar strings, pickups, amps and mics were used is beyond the scope of this thread and wasn’t what my post was about.

    Wow - I'm sorry you took umbrage at my post. To be honest, I didn't (and still can't) imagine why it brought so negative a response.  To me (and, I bet, to most audiophiles), a recording that masks or alters gross differences in SQ to the point at which they become inaudible is screwed up by any definition.

     

    The differences I'm describing are not so subtle that it takes any knowledge of guitars to hear them, and it doesn't matter whether or not you know which is which.  What matters is that you can hear the differences I'm discussing - they are big and should be clearly audible over any half decent system.  

     

    Sound quality discussions usually revolve around vague and subjective subtleties like harshness, clarity, warmth, balance, etc.  The sonic differences between a Martin D28 and a Gibson L50 are huge - they dwarf harshness, clarity, etc. in audibility and in importance to the musical program.   Any audiophile should be able to tell that they're different instruments when hearing them side by side.  Of course you don't know which instrument is which - there's no reason you should, and it doesn't change what I'm saying at all.

     

    Listen to the difference between these two guitars played solo.  Each is ideally suited to the music being played and the way in which it's played.  Neither performance would sound the way the performer or the engineer intended it to sound if played on the other guitar.

     

     

     

  12. 4 hours ago, ShawnC said:

    When it's mostly acoustic guitars and clean voices, it's hard to screw that up on a recording

    I respectfully disagree with this.  Accurate capture and playback of acoustic guitars (and human voices) is one of the more difficult tasks for a recording engineer.  Presenting a lifelike acoustic guitar sound and image (i.e. not too big and not too small) is not that difficult.  But an accurate recording sounds like the specific guitar that was recorded, rather than a generic instrument.

     

    A Gibson J200 sounds quite different from a Martin O-16NY, and a big archtop with a carved spruce top (e.g. an 18” Gibson Super 400 or an original Epiphone Emperor) sounds quite different from a 16” laminated maple box like the ES-175 that Wes Montgomery played on his first few albums.  Similarly, a single-cone resonator guitar like a National Style O sounds very different from a tricone.

     

    There are many well known and widely loved recordings on which it’s not at all clear what’s being played.  This often reflects the general belief that they all sound alike, which becomes a self-fulfilling prophecy.

     

    Good single-unit speaker systems like the reviewed unit are actually well suited to acoustic guitar reproduction. The sound source is similar to a guitar, with different parts generating different spectral segments and projecting the sound from what’s effectively a single point once you’re at least a few feet away from it.

     

    I do think it important to consider the cabinet on which you place it.  A big wooden box with 10+ cubic feet of interior space and unbraced flat surfaces can add a lot of unwanted resonance. It’s not dissimilar to an acoustic guitar of similar size.

     

    I’d also add that the latest BT aptX HD sounds decent+ and is not your father’s Bluetooth.  But if you want to use a BT device in a home theater or MC system, you have to use the latest low latency codecs - and I can hear a slight degradation in definition and presentation between HD (better) and LL.  The LL codec smears everything just enough to hear it in better systems.
     

    OTOH, I haven’t found an MC distribution system that lets me send individual channels over a network.  So I don’t know how we could use one or more of these units for HT or MC audio - and this Mac piece would probably make an excellent rear pair.  You can use LL BT driven by an MC DAC to drive powered speakers in decent sync with acceptable (although sometimes barely audible) delays among them, if you can’t run wires and want bigger sound in a particular space than your TV or stereo system will provide.

     

    I really like this McIntosh device, and the review is stellar - informative, easy to read, and enjoyable.  I won’t be spending 3 large on such a device, but if I wanted one this would be high on my list.  It’s great to know that the sheer physical pleasure of McIntosh products persists unchanged through all these years and ownership changes. I think Frank would be pleased.

  13. 1 hour ago, SNJay said:

    I feel like my audio balancing may be off, and/or my microphone recording has too much gain

    Hi and welcome!  I wonder if you're not overlooking a few basics.  I love your video, and I don't think there are any major audio problems.  The sibilance in your voice may be a bit pronounced, but given the devices on which most people listen to videos, it probably enhances SQ through average computer speakers and inexpensive powered systems (which works in your favor compared to videos mixed for the best audio equipment).  Your background music is a bit down in volume and lacks some weight.

     

    I can't tell from your description if you're concerned about the relative levels and frequency spectra of your voice vs background music, or if there's something else on your mind.  You don't describe your full setup, so we don't know how you're mixing in your background and from what source(s).  First, I think the RE20 has a midbass EQ switch.  If I'm correct, you might want to change its setting to see if that balances out your vocal spectrum (if your voice is what concerns you).  Second, the Cloudlifter is a cool little preamp - but I'm not sure it's helping you the way you're using it.  It minimizes extraneous noise and is a big help when recording acoustic instruments at higher gain levels into the DAW.  But for close-mic'ed vocal use, I wonder if it's not extraneous at best.  I hear no clipping or other serious sonic aberration on one listen through my desktop system.

     

    If the level and/or spectrum of the background music is your concern, you need to look at the entire signal chain from source to recording input to see what's what.  Do the source files sound fine when played back at full listening levels?  Are they MP3s, FLACs, WAVs, or something else?  Are they being processed in some way during recording, or are they being captured in their native formats?

     

    I assume you're not running the background in real time while recording your part, so you should be able to edit it fully to get what you want.  If you're using a video production program like Lightworks (my favorite), you should have complete control over every parameter of your audio before you add it.  I hope the above helps you find the solution you want!

     

    FWIW, we don't know what your voice sounds like - so it's hard to know how accurately it's being presented in the video.

  14. I agree completely with what's already been posted.  Your equipment choices are excellent.  But if I understand you correctly, you also asked if you're making a mistake with 2 channels.  I strongly suggest that you use your budget for a really good 2 channel system and get used to fine sound in stereo before considering MC.  Multichannel audio is really fine and really fun, but I would not compromise SQ to get more channels.  With the latest crop of MC "integrators" (e.g. the MiniDSP UDIO8) and DACs (e.g. the ESI Gigaport), you can easily add more channels of equal quality when you're ready.

     

    On the other hand, if you're already on the fence about stereo vs MC, you could start with a UDIO8, a pair of less expensive but still very good 2 channel DACs (e.g. Topping, SMSL, iFi etc), and 4 8030s for about the same total outlay as the Benchmark + 2 Genelecs. 

     

    A third MC alternative with fine SQ at lower cost is an ESI Gigaport eX and 4 Genelecs.  The Benchmark 3 is a 192k device as I recall, and it "only" does DSD64.  So you're not losing much with the ESI, which is also a 24/192 device but does not do DSD64.  I haven't compared the Benchmark with the ESI for SQ and suspect that the ESI's a tiny bit behind - but probably not by much.

  15. 1 hour ago, palpatine242 said:

    For an audiophile system with voice control, can't you use a Bluesound Node 2i with digital output to an external DAC which feeds your stereo?  You can then use Google Assistant or Alexa to choose a song to play on the Node 2i.  I would assume that Tidal Connect to the Node 2i could be controlled with Google Assistant and Alexa as well.

    Yes you can.  There are several streamers like this, but a comparison was far beyond the scope of this article. Yamaha makes 2 models with which I'm familiar, Denon has the Heos system and devices, etc.  Voice control in all of these is limited by what Alexa, GA etc can do.

     

    From the Bluesound website, "...you can use voice commands to play saved playlists, select your favorite radio station, adjust volume levels, or even group Players together".  The Node 2i will respond to Alexa using the Bluesound skill and to Google Assistant using middleware called Blue Voice. As long as you've set up everything correctly from the DAC downstream, you can control the stated functions from the Node - but you can't control any other element in your system.  There are enough downsides to this to deter me from using it.

     

    For example, your power amp gain control has to be set high enough to encompass the loudest playback you'll ever use, if Alexa's controlling your system volume at the streamer.  This leaves your speakers vulnerable to any transients generated in your front end but not attenuated by the variable gain stage, e.g. at turn-on and turn-off or when switching sources.  If you have an uncontrollable pop with any function, your voice controller can't cut the gain before executing it.

     

    The BS Node is marketed as able to "...instantly [breathe] new life into your decades-old stereo equipment" by adding network and web streaming sources. The only analog input I see is the combo 3.5mm optical / line-level jack, which must be how one connects a turntable or CD player.  I can't tell if you can stream the output over the LAN or WLAN (I assume not), but the two-way BT should let you drive BT speakers with any source.  The USB input is only for storage devices - you can't connect a USB turntable or other real-time USB source.  There are many limitations to this approach to voice control, although it does work within the limits of system and technological constraints.  Stay tuned - it will get better!

  16. 1 hour ago, audiobomber said:

    The last sentence needs an ending.

     

    Note that you don't need JRiver to use DLNA with Chromecast Audio. I use my CCA devices with two QNAP controllers (Music Station and QMusic), and with BubbleUPnP. I believe there are others too (maybe MConnect and Kazoo?). No voice control though, which I don't care about.

     

    Whoops!  Chris was having some problems with the formatting of the original document I sent (a conversion from odt to docx).  When I converted it to a pdf for him, I must have converted the wrong draft.  Here's what it should have said:

     

    "If you want to have voice control over JRiver playing through a Google device as a zone, you’ll have to use Alexa. If she’s not sharing a device with the GA, you can link her from an Amazon device or a third party host using an app like Helea Smart."  [Chris, if you can drop this in, it will save others the irritation of the typo.]

     

    I'm sorry if I gave the erroneous impression that you had to use JRMC in order to cast to CCAs in general.  My point was that if you use Google smart speakers but want to have voice control over JRMC playing to them, you have to use Alexa either from an Alexa-enabled device or with a 3rd party integration app. Google speakers will show up as DLNA zones in JRMC if you have BubbleUPnP etc running along with JRMC, so there's some functional integration there among JRMC, Alexa and the GA (albeit crude integration).  But it's not ideal, and it's one reason we chose Amazon / Alexa for our primary smart platform.

  17. 3 hours ago, blue2 said:

    Very professional article! It piqued my interest, so I did a bit of Googling:

     

    System    | Platform/Hardware                  | Notes
    ----------|------------------------------------|---------------------------------------------
    aido      | Robot with GUI                     | Cameras, multiple CPUs and GPUs; not available yet; price?
    athena    |                                    | Open source software project written in Python
    bixby     | Samsung mobile devices             | Samsung's Google Assistant
    hound     | Automotive                         |
    jibo      | Robot for healthcare and education |
    josh      |                                    | Interfaces in posh homes to Lutron lighting, Sonos, Crestron thermostats, home security, smart TVs, and home theatre
    mycroft   | Runs on Raspberry Pi etc.          | Open source software project (Python?); Mk I was $180, now sold out; Mk II coming soon
    ubi ucic  | Runs on Android and Linux          | Ubi Kit is free for developers; supports Google Assistant and Alexa
    viv.ai    |                                    | AI platform / intelligent personal assistant created by the developers of Siri, bought by Samsung; now to be integrated into Bixby 2.0

    Thanks for your time & comments!  There's so much potential here that I'm amazed the audio industry hasn't recognized how important VC & AI are for development and sales of future products.  We all experience mechanical controller failures in everything we use, from audio to cars to coffee machines.  Tiny touch screens, bubble switches, touch-sensitive controls, etc. are only pseudoelectronic - they still have physical parts that fail too often.  We could eliminate most of those last-century pieces and concepts by integrating excellent voice recognition and synthesis with AI. Imagine no more noisy pots, no cracked or dented bubble switches, no broken or lost knobs, minimal internal wiring, etc.

     

    Then imagine being able to control and monitor every audio parameter of interest to us in real time using voice input and synthesized voice response.  Throw in AI's ability to monitor real time performance and identify impending failures by detecting as yet inaudible changes in everything from voltage & current stability at various points to distortion to early ID of asymmetry in channel outputs.  In addition to telling your system what you want to hear (and how and where and when...), you could ask for a status check and get a verbal response plus a downloadable log report.  You could set up spontaneous verbal warnings when voltage, temperature, and other metrics go out of spec.  You could alter or switch amplifier operating characteristics to A-B changes in SQ.
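    As a tiny proof of concept of the channel-asymmetry idea, here's a minimal Python sketch. The capture uses the real sounddevice package and assumes a loopback feed from the amp's outputs; the 10% window is invented - a real monitor would trend this over weeks:

    ```python
    # A minimal sketch: capture 2 seconds of a stereo loopback feed and flag
    # L/R level drift. The +/-10% window is invented for illustration.
    import numpy as np
    import sounddevice as sd

    FS = 48000
    block = sd.rec(int(2 * FS), samplerate=FS, channels=2, dtype="float32")
    sd.wait()                                   # block until the capture finishes

    rms = np.sqrt(np.mean(block ** 2, axis=0))  # per-channel RMS: [left, right]
    ratio = rms[0] / rms[1]
    if not 0.9 < ratio < 1.1:
        print(f"Warning: L/R asymmetry (ratio {ratio:.2f}) - check the output stage")
    ```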

     

    How about a status report when powering up, e.g.  "Good morning, Bob - your system is in perfect operating condition and ready to play"?  Get vocal alerts as needed - "THD in your left channel has increased to 105% of the right channel.  Diagnostics show early failure of V4 with no other abnormality.  Replace tube."

     

    This is cool stuff!  I can't wait to play with it all as it develops.

  18. 39 minutes ago, jrobbins50 said:

    Good article. I use five Harmony Hubs (only $70 each at Amazon) at home, each tied to a different Gmail and Amazon account.

    Thanks!
     

    Your willingness to use multiple accounts and devices to “integrate” functions says that you’re flexible and adventurous, like me.  But we’re in the minority by far - most people would think we’re a bit daft to go that far... and we shouldn’t have to.  I believe it won’t be long before there are more universal platforms and approaches available to us.  But until then, let’s stretch the envelope to see how much it can hold 😁

  19. 1 hour ago, The Computer Audiophile said:

    This is fantastic @bluesman

     

    I've been following josh.ai for a while now. I think josh is the company to watch in the high end space for sure. 

    Josh is cool, for sure - but they seem to be using current methods and tools to achieve something for which current methods and tools are not ideally suited.  Unless they're the ones to come out of the garage or basement with the next big thing in AI coding and output modalities, they'll be looked on as crude when someone else finally succeeds.  I know of no current platform that integrates the various functions necessary to achieve accurate and efficient voice control over devices in disparate systems, including what I know of Josh (which admittedly isn't a lot at the design level).

     

    Right now, there are too many data bouncing way too far over way too many jury-rigged networks to do this smoothly.  And platform integration is not in the cards for an industry that profits largely from differentiation, so there's not likely to be one approach shared by all. This is a lot like the world of electronic medical records.  The most they hope for is "interoperability" - and that still leaves us unable to share data universally across all healthcare institutions plus payers and the scientific community.  All Epic users can share their data if they wish, as can users of several other major EMR platforms.  But these programs aren't written in the same languages and they run on different architectures.  So if your hospital is on Epic and the one where you ended up unconscious in the ER because you fell off the train is on Cerner, you're out of luck unless you carry your medical records around on a USB drive or a CD.

     

    It's a lot like needing home hubs for Z-Wave, Zigbee, Google Home, Samsung SmartThings, Smart Life, and Apple Home because you have a few devices that work on each platform.  There are several smart speakers that require you to control some audio functions from Alexa and some from Google Assistant - this is no way to run a railroad.  As I was just saying to my watch, "Siri, tell Alexa to open House Band; Siri, tell Alexa to tell House Band to tell JRiver to play music by Wayne Henderson in the master bedroom; Siri, tell Alexa to make the music louder."

  20. 1 hour ago, accwai said:

    That was then, but this is now

    I raced Formula Vee in SCCA for almost 20 years.  The first Vees were like Cadillacs compared to current ones.  At 6’2” with 36” arms, I had plenty of room in Formcars etc. By the time I got into FV from C Sedan (1275 Cooper S), they’d already shrunk some - but I still fit into my Zink C4 once I made a custom seat to lower me and stretch me out a bit.  And I was consistently in the top 10 at regional races years after most Zinks were on Medicare.

     

    As I and my Zink aged gracefully, the average driver shrank to fit each new design, because every advance in aerodynamics & suspension design left less cockpit room but cut lap times.  So successful drivers grew younger and smaller, and I fell further back in the pack despite mods like zero-roll suspension, aero and NACA scoops, and lower-drag body panels.  I wear a 12A shoe, and there’s insufficient room to hold my feet straight up in many current FV (& FF) models.
