
jabbr

  • Posts

    8177
  • Joined

  • Last visited

  • Country

    United States

5 Followers

Retained

  • Member Title
    Senior Member

Personal Information

  • Location
    Cincinnati

Recent Profile Visitors

14643 profile views
  1. English Blues evolved into its own forms and remains distinct from American Blues. The best thing Eric Clapton did was shine a spotlight on Buddy Guy, whom I have seen live many times over the years. Artists such as Led Zeppelin, The Rolling Stones and the other English kids might have sounded like American Blues on their demo tapes and earliest work, but they quickly incorporated English folk and went off in their own direction.
  2. Interesting ... a cable that actually does something! Is it a common-mode choke? The Ethernet PHY transformers aren't great at filtering common-mode noise, and adding common-mode chokes has been discussed.
  3. Would a good application for Ravenna be using HQPE as a digital crossover to produce 3+3 or 4+4 channels, each going to a DAC/amp, i.e. tri- or quad-amping each channel?
  4. The 100M connection can't try to stuff >100MbE into an endpoint. On the 1GbE side, the upstream system can try to burst data at 1GbE. Your switch has a buffer that stores the packets bursted in at 1GbE and sends them out at 100MbE. If the buffer fills, the switch may send pause frames upstream so that there isn't packet loss and retransmission. If the endpoint advertises that it can handle 1GbE but can really only handle 600MbE, then the switch may try to send it packets too fast, and the endpoint would then need to send pause frames itself. If the endpoint wants to receive stereo DSD1024 or even DSD2048 (e.g. for the Holo May), then it needs a switch that can provide 1GbE. (A rough buffer/pause-frame sketch follows this list.)
  5. Both Xilinx and Mellanox offer NICs, i.e. PCIe cards, with integrated processors that are used for high-speed applications; presumably they could decode Ravenna if that were something you couldn't already do on your PC CPU. The ZMAN module, as I superficially understand it, is designed to provide an Ethernet/Ravenna interface to a DAC and outputs I2S. Presumably one could also program an RPi to decode Ravenna and supply I2S to a DAC. Maybe that's why ZMAN is DOA despite being extraordinarily cheap for what you get. Then again, using a mainframe as a calculator might be cheap for a mainframe but not cost effective as a calculator ... unless you really care about timing ... but a DAC designer would measure and decide which interface is appropriate.
  6. To be very clear, ground plane noise is not transmitted over optical fiber. The stressed receiver test also checks for this, because the ground plane forms the floor of the stressed eye pattern. Asserting that ground plane noise is transmitted along Ethernet is mere speculation that you've promised to demonstrate for years and haven't. Moreover, the extensive testing done by the modern network industry demonstrates that this doesn't happen in compliant modern networks. The continued assertion of this, in the absence of the measurements you promised years ago and in the face of numerous measurements that disprove it, is at best ludicrous.
  7. I have a Xilinx Kria KV260 kit for $200, and that's waaay more powerful than the original Zynq. I'm looking to see if the development platform has gotten as easy as promised, but typically you need to get down into the nitty gritty and not rely on high-level language support, particularly with clock timing. I guess Merging might be essentially giving away the ability to input Ravenna 🤷‍♂️ ... and if they can't give it away, that tells you something about the market.
  8. I have a different view: in my experience an application can take up no more than about 80% of the bandwidth (roughly speaking), so 73% should usually work. Regarding pause frames, it's the hardware/SoC that often emits them. SolidRun, which makes the ClearFog SBC that I use (with SFP input), is perhaps more famous in audio circles for the CuBox series, which used the i.MX6/8 series of low-powered CPUs. SolidRun makes SOMs, or system-on-modules, which package the CPU on a small, sophisticated multilayer board and greatly ease incorporating the CPU onto a more easily developed baseboard that contains all the I/O, power supply, etc. So ... if you look at the i.MX datasheets (NXP Semiconductors), you see that the underlying processor is limited to 600MbE or so (from memory) even though a 1GbE interface is advertised. The i.MX itself emits the pause frames, so this is a popular chip that has been widely used in audio endpoints for many years and emits pause frames. @Miska, IMHO, shouldn't have to clog the NAA protocol to avoid pause frames, because that would add extreme redundancy at the application level to what is a layer 2 (or 3?) protocol. So turn on pause frames if you have i.MX processors, or just turn on switch processing 🤷‍♂️ (A sketch of the ~80% headroom check follows this list.)
  9. Since I use HQPE on Linux, and have zero possibility of moving to either Windows or macOS ... and particularly since this is a server-based program, any consideration of a Linux version? Since this is JavaScript, would it run on Linux?
  10. Yes, for those of us who understand that, as an analogue device, it is the analogue input to the DAC which determines the sound, as opposed to its interpretation as bits. As such I am interested in changes in the analogue output of the DAC. It would be great to demonstrate that clearly. My very firm opinion is that if there is a real audible difference in sound (not just SQ), then there will be a real difference in the analogue DAC output. A failure to demonstrate that will necessarily be a failure to measure correctly. I am not placing any requirements on what the analogue changes are, nor on whether they are bandwidth limited.
  11. How about this: forget what you think people will or won't believe and state 1) clear answers, 2) how the software changes the timing, or 3) how the software changes the electrical signal. You are treating this too much like a trade secret, and I'm afraid your software is not getting the attention it might otherwise deserve. Contrast this with HQPlayer, which does a fair amount more to document its filters and modulators and has seen much higher usage growth. Tell us what the settings do. You are being too coy. Shed some light.
  12. The premise that bit-perfect playback sounds identical is fundamentally flawed. Imagine these software settings: A: do not inject common-mode noise into the USB power/ground; B: inject maximal common-mode noise into the USB power/ground. It should be obvious to anyone that ground loops exist and are audible. If this were known to be the setting, no one would question the results. The problem is undocumented settings whose actions have not been explained. The really scientific way to explain a black box is to open it up and look inside, i.e. reverse engineer it, if anyone cared. Short of that, this argument is pointless. I am not saying that common-mode noise is the mechanism, just that it would be a possible mechanism; there are others: you could send signals down the USB power/ground, you could vary the USB D+/D- within allowed values, etc. etc.
  13. Unlike copper, optical does not have the same increase in power with increased bandwidth, which is one of the reasons we are here. Yes, you can weigh different perceived advantages and disadvantages. I've done that for myself, and of course other people are free to make their own optimizations. It's great we have such diverse choices. As this recent discussion has elicited, we are bumping up against the 100MbE limit for many real-world audio applications.
  14. DSD1024 requires 100 Mb of bandwidth. Native, i.e. without upsampling. Obviously not all music is DSD1024, but there are DACs which accept it. Not sure why anyone would limit themselves to 100MbE these days. (The rate table after this list lays out the arithmetic.)
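A rough toy model of the rate mismatch described in post 4: a switch port absorbing a 1GbE ingress burst onto a 100MbE egress link and asserting flow control when its buffer fills. The buffer size, watermarks, and burst size are illustrative assumptions, not values from any particular switch.

```python
# Toy model of a switch port buffering a 1GbE ingress burst onto a 100MbE
# egress link, asserting/deasserting pause frames at assumed watermarks.
# All sizes and thresholds here are illustrative, not from a real switch.

INGRESS_BPS = 1_000_000_000       # upstream bursts at 1GbE
EGRESS_BPS = 100_000_000          # endpoint link runs at 100MbE
BUFFER_BYTES = 512 * 1024         # assumed per-port buffer
HIGH_WATERMARK = 0.8              # assumed: send a pause frame above 80% full
LOW_WATERMARK = 0.2               # assumed: resume below 20% full

def simulate_burst(burst_bytes: int, step_s: float = 1e-4) -> None:
    """Fill the buffer at the ingress rate while draining at the egress rate."""
    occupancy = 0.0
    received = 0.0
    paused = False
    t = 0.0
    while received < burst_bytes or occupancy > 0:
        if not paused and received < burst_bytes:
            chunk = min(INGRESS_BPS / 8 * step_s, burst_bytes - received)
            received += chunk
            occupancy += chunk
        occupancy = max(0.0, occupancy - EGRESS_BPS / 8 * step_s)
        if not paused and occupancy > HIGH_WATERMARK * BUFFER_BYTES:
            paused = True
            print(f"t={t*1e3:6.1f} ms: buffer at {occupancy/1024:.0f} KiB, pause frame sent upstream")
        elif paused and occupancy < LOW_WATERMARK * BUFFER_BYTES:
            paused = False
            print(f"t={t*1e3:6.1f} ms: buffer drained to {occupancy/1024:.0f} KiB, resume")
        t += step_s
    print(f"t={t*1e3:6.1f} ms: {burst_bytes} bytes delivered at 100MbE without loss")

simulate_burst(burst_bytes=1_000_000)   # a 1 MB burst arriving at 1GbE
```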
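A minimal sketch of the ~80% headroom rule of thumb from post 8, assuming the stream rate is checked against the lesser of the advertised link rate and what the endpoint's SoC can actually sustain; the ~600 Mb/s i.MX figure is quoted from memory in the post, not from a datasheet.

```python
# Headroom check per the ~80% rule of thumb: a stream should fit within
# roughly 80% of the *effective* link rate, i.e. the lesser of the advertised
# speed and what the endpoint's SoC can actually sustain. The 600 Mb/s i.MX
# figure below is recalled from memory, not taken from a datasheet.

def fits_with_headroom(stream_mbps: float, link_mbps: float,
                       soc_limit_mbps: float | None = None,
                       headroom: float = 0.8) -> bool:
    """True if the stream stays under ~80% of the effective link rate."""
    effective = link_mbps if soc_limit_mbps is None else min(link_mbps, soc_limit_mbps)
    return stream_mbps <= headroom * effective

# Stereo DSD1024 is roughly 90 Mb/s native, before packet/NAA overhead.
print(fits_with_headroom(90, link_mbps=100))                       # False: too tight on 100MbE
print(fits_with_headroom(90, link_mbps=1000, soc_limit_mbps=600))  # True: fine on a 1GbE i.MX endpoint
```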
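And the arithmetic behind the DSD bandwidth claim in post 14: stereo DSD-N carries N x 44.1 kHz x 1 bit x 2 channels of payload, before any Ethernet framing or NAA/transport overhead.

```python
# Native payload bitrate for stereo DSD: DSDn = n * 44.1 kHz * 1 bit * 2 ch,
# i.e. before Ethernet framing or NAA/transport overhead.

DSD_BASE_HZ = 44_100

for n in (64, 128, 256, 512, 1024, 2048):
    mbps = n * DSD_BASE_HZ * 2 / 1_000_000
    print(f"DSD{n:<5} {mbps:6.1f} Mb/s stereo")

# DSD1024 lands around 90 Mb/s and DSD2048 around 181 Mb/s, which is why a
# 100MbE link is marginal for the former and a non-starter for the latter.
```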