Everything posted by ray-dude

  1. Geoff, I used this as the baseline for my setup: https://github.com/mjp66/Ubiquiti/blob/master/Ubiquiti Home Network.pdf Far more complex than most folks need, but you can filter down from there. My mods were to connect a second ERX-SFP (configured as a generic switch) to one of the ethernet ports for the home network, and to connect my WiFi access point (with my WiFi VLANs) and home wired ethernet to that second ERX (this optically isolated my home ethernet and WiFi network from the first ERX). I also configured one of the other ethernet ports on the first ERX to have its own subnet and DHCP server, for my audio network. Basically, on the first ERX I have the setup above, with my audio net configured on its own subnet (192.168.9.x), and the SFP port on the first ERX configured as part of the home network and connected via fiber to the second ERX (configured as a generic switch), where the rest of my home network plugs in (WiFi and wired ethernet). My first ERX is powered by a DXPWR dual regulated 12V supply, energized by a PowerAdd Pilot2 battery. The same supply powers my ISP-provided ONT (basically the equivalent of a cable modem for fiber). My audio network runs from an ERX ethernet port to a Sonore opticalModule (powered by a DXPWR dual regulated 5V supply, energized by a PowerAdd Pilot2 battery), through a Finisar SFP module, over single mode fiber to another Finisar SFP module, and into a StarTech optical NIC in my Extreme. See below for details.
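     For anyone wanting to replicate the dedicated audio subnet piece on an EdgeRouter X, here is a minimal sketch of the EdgeOS CLI commands involved. This is illustrative only: the eth4 port name, the 192.168.9.x addressing, and the DHCP range are assumptions for the example, not a dump of my actual config.

       configure
       # illustrative sketch only: adjust port name, addresses, and DHCP range to your own network
       set interfaces ethernet eth4 description "Audio LAN"
       set interfaces ethernet eth4 address 192.168.9.1/24
       set service dhcp-server shared-network-name AUDIO subnet 192.168.9.0/24 default-router 192.168.9.1
       set service dhcp-server shared-network-name AUDIO subnet 192.168.9.0/24 dns-server 192.168.9.1
       set service dhcp-server shared-network-name AUDIO subnet 192.168.9.0/24 start 192.168.9.100 stop 192.168.9.200
       commit
       save
       exit

     With something like this, anything plugged into that port gets a 192.168.9.x address, and it sees routed (but not broadcast) traffic to and from the rest of the house.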
  2. Now there is an old friend! My '030 Cube is safely in a storage locker, as is the Megapixel Display case that some former coworkers kindly converted into a fish tank for me. Extraordinary how much of the future was invented from whole cloth 30 years ago...
  3. I've been in absolute denial about everything Nick has done and is doing with DC4 and DAVE. With my DC3-era SJ supply, the lift with DAVE was second only to mScaler, but at this point, I'd prefer the DC3 to mScaler with DAVE. Hope to be able to hear a DC4 in my chain soon, and fall completely off the denial wagon
  4. Jason Isbell has been opening up his vault of concert recordings, and making them available on Bandcamp. With Bandcamp's Friday "all proceeds go to the artist" deal, I picked up a bunch today. The Alabama date is absolutely lovely, and really captures the feel of being at an Isbell concert. Looking forward to giving the CO and NY dates a listen as well.
  5. I think the term VLAN can be confusing... it is a specific way to configure a network, while most folks are speaking to the general concept of isolating (fully or partially) a network, not setting up an actual VLAN. With the goal of minimizing the network traffic seen by my music server (mainly background broadcast traffic), I looked at the following approaches:

     1 (LAN) - Isolate my audio server on a dedicated audio network by configuring a dedicated network for a specific ethernet port on my ERX
     2 (VLAN) - Isolate my audio server on a dedicated audio network by configuring a VLAN
     3 (Subnet) - Isolate my audio server on a dedicated audio subnetwork

     I started with 1. In this situation, there was a firewall between my home network and my audio network. To allow my laptop (running Roon remote) to connect to my audio server, I set up a firewall rule saying that if I initiate a network session from my laptop to the music server, the firewall will allow traffic between the two computers (a sketch of that kind of rule follows at the end of this post). It worked well, but I had to ping the music server to initiate a session before being able to use Roon remote. OK for me, but definitely not civilian friendly. I did, however, hear a marked improvement in SQ with the music server isolated from all the background network traffic, so I was encouraged to keep moving forward.

     With 2, in a VLAN you configure the network interface on devices to assign them a VLAN ID (1, 2, 3, etc.). The router can then use the VLAN ID to create virtual networks where only devices with the same VLAN ID can see each other. Same idea as 1, but you don't need a shared physical ethernet connection to manage isolation, so it is easier to move wired devices between VLANs without having to rewire the network. From a routing perspective, you need to play the same firewall rule tricks to allow traffic from one VLAN to the other. No net benefit to me over 1.

     I ended up with 3. I configured my ERX so the ethernet port for my audio network is on its own subnetwork (I happen to use 192.168.9.x), with a DHCP server for that subnet. In this configuration, routed network traffic can go freely between the subnet with my laptop/phone/etc. and the subnet with my music server (no firewall hacks needed). However, broadcast network traffic is not routed between subnets. Works like a charm.

     I should note that in all these scenarios, Roon relies on broadcast packets to discover Roon devices/remotes/endpoints on the network. When you fire up Roon on your laptop, you will not see the Roon server on its own LAN/VLAN/subnet; the Choose Your Roon Core screen will just sit there searching and nothing will show up. The trick is to hit the help link, which lets you enter the IP address of your Roon Core (on a different network). Once you do that, Roon will use routed traffic to make the connection with the Roon Core, and it will show up and work as normal.

     Net net, at this point if I disconnect the network connection to my music server, I honestly can not tell the difference (maybe the barest hint of a difference, but so vanishingly small that I have zero confidence in calling it a difference).
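     For approach 1 above, here is a minimal sketch of the kind of stateful EdgeOS firewall rule I mean. Again an illustrative assumption rather than my exact setup: the AUDIO_IN name and eth4 port are made up for the example, and my actual rules were clunkier (hence the ping workaround).

       configure
       # illustrative sketch: only allow traffic from the audio port that belongs to sessions started from the home side
       set firewall name AUDIO_IN default-action drop
       set firewall name AUDIO_IN rule 10 description "allow replies to home-initiated sessions"
       set firewall name AUDIO_IN rule 10 action accept
       set firewall name AUDIO_IN rule 10 state established enable
       set firewall name AUDIO_IN rule 10 state related enable
       set interfaces ethernet eth4 firewall in name AUDIO_IN
       commit
       save
       exit

     With rules along these lines, the home side can open connections to the audio server and get replies back, but nothing on the audio port can start a conversation toward the rest of the house.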
  6. Key to my new world order has been the high efficiency single driver speakers, driven directly by my DAC. It has allowed me to eliminate the crossover in the speaker (devastating to my sense of reality) and the amplifier (ditto, but less so with the right amp). My speaker drivers are 104dB sensitivity, so they are remarkably light and fast, and they are point source, so I can have perfect phase alignment and no dispersion between drivers. My DAC (Chord DAVE) has a remarkably low noise floor and remarkably fast dynamics, with only a couple of elements on the output (the 2W "amp" is intrinsic to the analog output stage, so the analog signal goes through remarkably few components). It is the speaker that is the biggest compromise vs. "traditional" hifi for me. I came from B&W 802d3's and adore the B&W sound. Those were lifetime dream speakers. As soon as I heard a modest $1400/pair set of Omega Super Alnico Monitors (single drivers), it was a revelation, and I knew I needed to leave the B&W dream behind. I struggled mightily for a long time to get that sense of reality from the B&Ws, but I just couldn't. With the single drivers, the biggest things I give up are tonal balance and the sense of "power" (not loudness... plenty loud even with 2W). Interestingly, I found that within a couple of days my brain fully adjusts to tonal imbalances and doesn't notice them, but it NEVER adjusts to the sense of reality being gone. As for the sense of "power", one never gets that with a live singer or piano player or horn player; one instead gets a compelling sense of space from the power of their voice/playing/etc. The single drivers have an amazing sense of space. I am transplanted into the physical space where the recording was made, but I have given up that "blow your hair back power chord" feeling. Before this life pivot, tonal balance and physicality were key for me, with a sense of space being a nice occasional bonus. That has completely inverted. I deeply appreciate a MegaFi setup that delivers perfect tonal balance and tangible physicality, but I infinitely prefer to be in the studio with Coltrane.
  7. At the risk of being the person intruding in a passionate debate at a dinner party... At RMAF and my local dealer, I have heard amazing systems that are the pinnacle of a sound I sought for decades, but they hold only intellectual interest for me now (which is remarkable, given how passionately I chased those heights for so many years). They are truly a world class HiFi experience of listening to music, but they only hint at what I've come to think of as experiencing and participating in an in-person performance. I shared the experience before that even when walking down a street, I can tell whether it is a live performer in a coffee shop or recorded playback. Needless to say, the distortion through walls and glass, with street noise raising the noise floor, is atrocious HiFi, but I know it to be real, and one draws me in while the other does not. With traditional HiFi rigs, the analogy I use is moving from looking at a photo of a forest, to an even better photo of a forest, to a full 100" 4K HDR OLED photo of a forest, where you start to get an inkling of what it is like to look through a window at a forest. If you work hard enough, the "through a window" feeling becomes more and more prevalent, the window gets clearer and larger, and you start to get the barest hint of being in a forest with no window at all. I compare that to walking through a forest, where even with scratched up sunglasses that cast a yellowish tint, I am unambiguously IN A FOREST, and all my senses have shifted to a completely different kind of experience and engagement and feeling of being alive. That difference is not due to the fidelity of the image. It is the amalgam of sensory inputs that cause my brain (which has been trained by Darwin and 53 years of hard knocks) to switch to "this is real, pay attention" mode. It takes precious little to break that sense of reality and go back to trying to get a better and better photo, then a better and better window. The last several years for me have been about starting all over, and trying to get that sense of reality from the ground up. It has been devastatingly humbling, but incredibly rewarding. So much of what I put on the first tier of "this can never be compromised" I've realized just doesn't matter once my brain kicks into "this is real" mode. Back to my earlier analogy: given a choice between listening to Carly Simon live in a noisy coffee shop with the crappiest acoustics and listening to Moonlight Serenade on a $1M PinnacleFi system, find me in the coffee shop, completely engaged and over the moon delighted for the experience, leaving afterwards inspired and elevated by the artistry. I listen to the mega Wilson and YT setups and I'm blown away by how incredible they are (truly... after decades of tweaking and tuning I know intimately what an incredible achievement and performance level they are delivering), but it is now an intellectual interest rather than a passion. I'll happily give up 90% of what they deliver to get that sense of reality (the walking in the forest experience) that they struggle to deliver (at least for my brain). All that being said, the reaction of people when they hear my rig is decidedly bimodal: there are those that have a proverbial red pill moment and want more and more of that reality rush, and others that are scratching their heads going "I thought you had a nice stereo system... what's up with this?"

     The former group has had their brain click in on that sense of reality; the latter is focused on what I was willing to give up to get that sense of reality. The sharp divide I've seen in my living room really highlights how differently our brains get triggered, and the different responses we all seek in music.
  8. Dan (@dmance) is indeed a fellow traveler! My last way-excessive write up was for his Opto*DX product (with some wide detours into RF/power hygiene). See: https://www.head-fi.org/showcase/audiowise-opto•dx-optical-isolation-bridge-for-dual-spdif.23757/reviews#review-22155 It is quite remarkable what you can hear as you start to strip away all the things that were keeping you from hearing it. For me, DACs like the Chord DAVE are true reference pieces. Everything else in a chain takes something away from it. The trick is to eliminate those things, or minimize their impact, to get as close to the true reference as you can.
  9. With a VLAN, I'd have to tag traffic, and the VLANs would be fully isolated without routing/forwarding rules. Since I want my audio server to be able to talk to the Roon remote on my laptop and phone (and my file server), that would have been a pain. I had it set up this way initially, but the way I had the firewall rules basically required session initiation (I used ping) before the connection would open. There are probably better ways to do it, but it was not very civilian or audio-guest friendly. With a separate subnet, broadcast traffic does not go between the subnets. Only routed traffic goes to and from the audio server, so I can minimize packets to my audio server without having to play wild routing/forwarding games (my audio server is the only device on my audio subnet).
  10. My home ethernet traffic (including WiFi) is on my second EdgeRouter X SFP, including my UniFi Access Point for WiFi. I have VLANs on my WiFi for guest, IoT devices, and home, to isolate WiFi traffic (security and privacy for data traffic). My audio net is on a separate subnet on my first EdgeRouter X. I don't use WiFi for audio.
  11. FWIW, I have ATT Fiber -> ATT Lucent ONT -> EdgeRouter X SFP -> opticalModule -> Extreme. I can very clearly (and happily) hear the Sablon ethernet cable between the ONT and ERX, and between the ERX and oM (the latter being more impactful). As I detailed in my long Extreme write up, the EtherRegen moat held back the Extreme, and it worked better when using the EtherRegen as an FMC (optical and copper on the same side, not going across the moat). In this FMC configuration, I preferred the oM to the EtherRegen. When I was experimenting with the EtherRegen going across the moat, the impact of the Sablon ethernet was clear as well. Net net, at least in my chain, the Sablon ethernet cables made a (still) surprising difference very far from the business end of the signal. I still don't understand why or how that could be, but empirically, that's what I'm hearing. To echo Rajiv's note, Mark is incredibly generous with his loaner cables (and it has worked... I have Sablon ethernet cables, USB cables, and a power cord now... all had a "first 5 seconds, yup this is the one" impact in my system).
  12. I'm in the same boat. I hear huge benefit under DACs and power supplies (and I'm comfortable that I at least have a hypothesis why), but I don't have a working theory why there would be an impact with transducers (not saying that there isn't!) The physical displacement distances would have a vanishingly small impact on phase alignment. The only other thing I can think of is draining unwanted resonances from the case work, but that would be hugely speaker/room dependent. This is an intriguing area to explore and understand.
  13. Wow, you definitely earned your "made it through all five parts" post-pandemic beer with that feedback! Thank you very much for the very kind words. I'm very glad it was interesting for you!
  14. Holy crap, $300/pound!! This is the UPOCC of darjeelings!! I'm very tempted....
  15. Shouldn't you be enjoying vacation, Chris? This week my project is rigorous AB testing with First Flush Darjeeling vs Second Flush Darjeeling, with some lovely Assam as a control...
  16. (HQ Player) The modulator section only applies if you're upsampling DSD content. Tidal content is PCM, so it goes down the PCM Defaults path. If you happen to play any DSD content, it will go down the SDM Defaults path and use that modulator. Choice of modulator depends on the power of your computer (some are VERY compute heavy) and what sounds good to you. In your case, 99% of the time it won't matter. If you set Vol Min and Vol Max both to -3dB, it will fix the output volume, and you will control volume on your DAC. As background, HQP can also control volume, and these settings set the min and max volume levels for HQP. If you're controlling volume with your DAC (which is what I do), setting both to -3dB basically sets HQP at max volume output.
  17. (HQ Player) Here are my current settings. sinc-M and LNS15 get you mScaler-like sound, and adjusting buffer time and DAC bits gives you that final tuning to your ear. I find sinc-L quite nice as well (more for some albums than others), but I keep coming back to sinc-M.
  18. (HQ Player) I can only speak to Chord DACs, but in addition to the above, DAC bits and buffer both serve as final "tone" controls for me (I use USB from the Taiko Extreme to my DAVE). The sound is most incisive at 32 bits; as I step down, I can even out the midrange presentation (I'm usually around 29 or 30). Buffer time is less impactful for me, but it does affect dynamics (this may be more system specific than DAC specific): a higher buffer is slower (but fuller), a lower buffer more dynamic (but perhaps thinner). I'm usually at 20 or 50ms. These really are final tweaks for me. Get your preferred filters and noise shapers, then use these settings for the very final tweaks. In general, I've found that the more I improve my digital chain (power, etc.), the more I'm able to push incisiveness and dynamics and still have things sound natural and relaxed. YMMV of course... this is subtle stuff, and very system specific.
  19. This is very true. I experience a very abrupt phase transition where some recordings suddenly seem very real. Some recordings never get there; others get to the "good side" of reality mountain after some significant effort or enhancement. I've attributed this to my brain getting better at fabricating the illusion for me once the appropriate auditory cues are there. Conversely, for recordings on the edge of that transition, it is a very sensitive "tell" when I've done something to disrupt some aspect of music reproduction. In a way, much of my audio optimization journey has been about getting more and more of my music library to feel like it is "real" and in the room.
  20. The head vice would be fabricated out of Panzerholz. In the spirit of this anecdote, I was trying to share what is detectable, (thankfully) not how I listen. FWIW, it is angular distance from the driver. That is a LOT of lateral head position to have a 1mm impact on distance to the driver (at a 9' listening distance, ~3" if I did my math right? a quick sketch of that math is below). My head twist variance is absolutely more than 1mm, agreed. Interestingly, I find that my head position naturally gravitates to where the soundstage is most expansive and natural (or in the case of the pink noise scenario, where the null is most pronounced). On ITD, you are certainly better read on this than I am, but isn't that related to localization of a sound source by the time difference it takes for sound to get to each ear? With stereo music reproduction, we actually don't want to hear the speaker driver; we want a soundstage projected before us. My phase analogies were related to that soundstage projection, not to localizing where a speaker may be. That being said, I am google-level ignorant on the ITD measures. I have no idea if the psychoacoustics are the same mechanism between ITD for sound source localization and reconstructing a soundscape from the aggregate phases of the sounds we are hearing. I suspect our brains are doing a lot of interpolation/projection for the latter function, just because our brains are really good at casting things in a way that is easier to digest/interpret (Coltrane obviously isn't standing in front of me, but damn does my brain get a lot of juice when it sounds like he is... sign me up for more of that kind of self-delusion!).
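     As a sanity check on that ~3" figure, here is the back-of-the-envelope geometry (a sketch only, assuming the head moves perpendicular to the line toward the driver, with listening distance L = 9 ft ≈ 2.74 m and a path-length change of Δd = 1 mm):

       \Delta d = \sqrt{L^2 + x^2} - L \approx \frac{x^2}{2L}
       x \approx \sqrt{2 L \, \Delta d} = \sqrt{2 \times 2.74\,\mathrm{m} \times 0.001\,\mathrm{m}} \approx 0.074\,\mathrm{m} \approx 2.9\,\mathrm{in}

     In other words, it takes roughly three inches of sideways head movement to change the distance to the driver by a single millimeter.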
  21. I'm with you on filters. I'm still trying to get my arms around what the various filters in HQP are doing on various recordings. I'm talking more about our brain's ability to detect and process phase/timing differences, especially for spatial placement. For better or worse, our brains have evolved to give us a sense of spatial placement and space based in part on these sorts of phase cues (at least for naturally occurring signal scenarios).

     To Chris' original ask about the "threshold of human hearing", there are certainly multiple thresholds for different types of information that we hear. As a practical example of phase vs frequency thresholds, I typically phase align my speakers with mono pink noise. I drive both speakers with the same signal. My speakers are single driver high efficiency speakers (no crossovers, no multiple driver phase alignment issues, basically point sources). As the speakers become phase aligned to my listening position, the image converges to a dot. The tighter the dot, the better the phase alignment and the fewer spurious paths/reflections. If I invert the signal to one speaker, it becomes even easier to fine tune, since I'm now sitting in an effective null. Even very small phase differences between the speakers become audible as a buddy is tweaking a speaker position. As a practical matter, in my room, ~1mm changes in speaker position are audible for me in this (very) artificial scenario (and since this is an objective forum: the distance from my listening position to the same position on each speaker driver was confirmed to be identical to within the ~2mm resolution of my laser measure).

     If we were to naively translate that to frequency, at the speed of sound that implies ~340kHz hearing resolution (the arithmetic is sketched below). My 53 year old ears tap out around 15kHz and clearly can not hear >300kHz tones. However, I can hear phase timing differences with that level of signal timing resolution, in this (very) artificial scenario. With a better treated room, I'm sure things would be better still.

     This is akin to interferometry. In optics, resolution is limited to the wavelength of the light you're using to "see" divided by two. If you want to see smaller things, you need to use smaller and smaller wavelengths. However, you can use phase information to get arbitrary resolution, if you have a coherent enough light source and you're able to integrate the signal long enough to overcome any noise in the measurement. Way back in the day, this allowed me to monitor etch depth in semiconductor structures to essentially the atomic level, clearly WAY beyond the resolution of the light I was using to do the etch depth measurement.

     For me, higher resolution sources (whether natively recorded or reconstructed with a sinc reconstruction function) have been about phase timing accuracy, not audibility of the ultra high frequencies. Depending on the recording chain and the performance of the components, that phase resolution may or may not matter, obviously. (Chris, sorry for the long detour into phase land... I suspect your original question was more related to limits of audibility in distortion and noise measurements than timing accuracy.)
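     For the curious, the arithmetic behind that ~340kHz figure (a rough sketch, taking the speed of sound as c ≈ 343 m/s and treating the 1mm positioning sensitivity as a pure path-length/timing difference):

       \Delta t = \frac{\Delta d}{c} = \frac{0.001\,\mathrm{m}}{343\,\mathrm{m/s}} \approx 2.9\,\mu\mathrm{s}, \qquad f \sim \frac{1}{\Delta t} \approx 340\,\mathrm{kHz}

     So the ~340kHz number is just the reciprocal of that timing resolution, not a claim that anyone can hear a 340kHz tone.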
  22. I suspect it is differing levels of phase/timing sensitivity. I'm definitely a phase/point source guy (on steroids) and have optimized my system around same. I've noted that when folks come over, some people definitely react with more WOW to phase-related optimizations, and others barely hear them at all. The latter group seems to be more power/amplitude focused. Some folks fall somewhere in between.