JohnSwenson
  1. A switch can deal with the pause frames. FMCs are theoretically very simple devices, just converting from one network form to another without doing anything to the data, but it depends on how the designer of the FMC implemented it. If the FMC uses a switch chip, it will handle the pause frames; if a switch was NOT used, it won't. For the oM I deliberately chose to use a very simple circuit; a switch chip does a whole lot more, causing significantly more ground-plane noise, etc. I wanted the oM to be as pristine as possible, so I left out stuff it didn't need. (Switch chips also use quite a bit more power.) Other designers will make different decisions on this. John S.
  2. Hi All, the issue with NAA is somewhat complicated, so I'll try to explain it so it is easy to understand. Most protocols have explicit "flow control": the server needs some way to send the data at a rate that doesn't overflow the DAC. The NAA protocol doesn't seem to do this; it seems to rely on a very low-level mechanism built into Ethernet called pause frames. If the receiver's buffer is getting too full, it sends a pause frame upstream, and the switch sending the data then stops sending. The problem with this is that it is handled by the network equipment, not by the source of the data, HQP.

There are some generic issues here. Network professionals hate pause frames; it's very easy for a network to go into gridlock because of them. The result is that almost all "professional" switches, which are usually managed switches, do not handle pause frames by default; you have to explicitly turn them on. "Home" routers usually do support pause frames.

The issue with the oM is that the circuit inside is NOT a switch, so it cannot handle pause frames. It passes them on upstream, but it cannot pause the stream by itself. The problem is that the Linux kernel inside the Rendus asks the connected device if it supports pause frames; the oM says no, it cannot (which is the correct response), so Linux never sends any pause frames when its buffers fill up. But it SHOULD send them, because the oM will pass them right along to the upstream switch, which CAN deal with them. We found this issue during opticalRendu development and found a fix for it: it causes the Linux kernel to go ahead and send pause frames even when the oM says it cannot handle them. This has gone out in every oR sold. It was supposed to be added to the software for the other Rendus, but I'm not sure if it actually made it.

So if you are having the problem with NAA and an oM on an ultraRendu or microRendu, try doing an update (you have to be at 2.7 for this to work) and see if that fixes the problem. Note this is only going to work if the upstream switch properly handles pause frames. If you have a managed switch, you may need to explicitly turn on pause frames to use NAA (whether there is an oM involved or not). I hope this helps clarify the situation. John S.
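The watermark logic behind pause frames, as described above, can be sketched in miniature. This is a purely illustrative model (the class name, buffer sizes, and watermarks are all made up; this is not code from the Rendu firmware or any real NIC driver):

```python
# Hypothetical sketch of IEEE 802.3x-style pause-frame flow control: a
# receiver whose buffer crosses a high-water mark asks the upstream sender
# to pause, and resumes it once the buffer drains. All thresholds are
# illustrative.

class Receiver:
    def __init__(self, capacity=100, high_water=80, low_water=20):
        self.capacity = capacity
        self.high_water = high_water   # request PAUSE at or above this fill
        self.low_water = low_water     # request resume at or below this fill
        self.fill = 0
        self.paused_upstream = False

    def on_frame(self):
        """Called for each arriving frame; returns a pause request or None."""
        if self.fill >= self.capacity:
            return "DROP"              # overflow: what happens with no pause frames
        self.fill += 1
        if self.fill >= self.high_water and not self.paused_upstream:
            self.paused_upstream = True
            return "PAUSE"             # tell the upstream switch to stop sending
        return None

    def consume(self, n=1):
        """The DAC side drains the buffer; may un-pause the upstream."""
        self.fill = max(0, self.fill - n)
        if self.paused_upstream and self.fill <= self.low_water:
            self.paused_upstream = False
            return "RESUME"            # a PAUSE frame with quanta = 0
        return None
```

On a Linux box, `ethtool -a eth0` shows the negotiated pause settings and `ethtool -A eth0 tx on` forces transmit pause on; whether that knob corresponds to the Rendu fix described above is an assumption on my part, not something stated in the post.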
  3. I didn't have a good high-speed probe at the time to really look at USB high-speed signals, so looking at the signals was almost useless. I do have a good high-speed differential probe now, but I have found sort of a Heisenberg issue: connecting the probe changes the signal to some degree, so it is difficult to measure things like noise on the signal. I really need to build a multi-gigahertz, high-impedance, two-channel buffer I can build into a board right on the USB traces, but that is not going to happen right away. I did do some listening, and looked at the output of the DACs on a scope, during some of the tests. When errors were occasional, most of the time they were inaudible. I would play a 400 Hz sine wave and look at the DAC output, and most of the time when the analyzer said an error occurred I could not see anything on the scope. This seems to indicate that the DACs were covering up the errors pretty well. With UAC a DAC can detect an error but can't error-correct; it CAN do forms of interpolation to try to hide the bad data. Once in a while something was enough to cause a blip in the waveform; those were definitely heard as clicks. When error rates got higher I could actually hear subtle changes to the sound, but no specific crackle or clicks. I don't know exactly what was causing this, but my guess is that the distortion caused by the covering up was happening often enough that the brain was noticing it. As the error rates went higher there was definite crackling and pops. Two of the DACs just shut down when this started happening, but the other kept on going with massive crackling and pops. I think shutting down was actually the better way to deal with it; the pops could get pretty loud. John S.
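The interpolation-style concealment described above can be sketched as follows. This is an illustrative model of one possible concealment strategy, not any particular DAC's algorithm; the sample rate, frequency, and function names are all made up:

```python
# Sketch of error "covering up": a DAC that cannot re-request bad data can
# only conceal it, e.g. by interpolating over a sample it knows is bad.
import math

def sine_block(freq=400.0, rate=48000, n=16):
    """A short block of a 400 Hz sine, like the test tone in the post."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def conceal(samples, bad_index):
    """Replace one known-bad sample with the mean of its neighbors."""
    fixed = list(samples)
    fixed[bad_index] = (samples[bad_index - 1] + samples[bad_index + 1]) / 2.0
    return fixed

block = sine_block()
corrupted = list(block)
corrupted[8] = 1.0                      # a bit error lands on sample 8
healed = conceal(corrupted, 8)
# The concealed sample is far closer to the true waveform than the glitch was:
assert abs(healed[8] - block[8]) < abs(corrupted[8] - block[8])
```

With a slowly varying signal like a 400 Hz tone, the interpolated value is nearly exact, which is consistent with the glitches being invisible on the scope most of the time; a glitch landing on a fast transient would conceal much less cleanly.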
  4. This brings up an interesting aspect: the test design. Every time anybody posts a test, it seems almost everybody starts screaming "the test was invalid due to xxxxx". So what IS a proper test? What would you be willing to accept as a proper test? What would the people who say they can hear a difference accept as a proper test? Until agreement is reached on what constitutes a "proper test", I don't think saying "do a proper test" is going to prove anything. Any thoughts on how to go about designing the proper test? John S.
  5. I used a LeCroy Mercury T2 USB protocol analyzer to measure errors.

Sources:
  • Deskside computer, homebuilt: Supermicro motherboard, Xeon processor, 32 GB memory, Windows 7 64-bit. Foobar2000 playing local files.
  • Lenovo T460s laptop, powered from the provided SMPS and also from internal batteries. Also running Win7 64-bit, Foobar2000 playing local files.
  • Sonore ultraRendu in squeezelite mode, over LAN from vortexbox software running on a Compulab fitlet.

DACs:
  • Soekris dac1101
  • Micca OriGen+
  • Bottlehead DAC

Cables: whatever I had on hand. This was NOT designed to be a test of specific brands; I didn't buy any cables for this test. Only three cables had a brand name: Supra, Belkin and EVERNEW. The rest did not have brand names but usually had some form of "USB 2.0" marking, a temperature max, etc. The Supra and Belkin were the only ones I bought specifically as cable purchases; the others came with DACs and other gear bought over the years.

Error rates:
  • Under 6 ft: no errors with any cable, source, DAC combination.
  • 6 to 10 ft: all source and DAC combos, except for the Supra cable, showed occasional errors, somewhere between 1 error every several minutes and 1 error every few seconds. Different source and DAC combinations made very little difference. It was hard to give precise error rates; even with exactly the same setup, error rates varied quite a bit. I tried different times of day and different room temperatures and couldn't find any decent correlations. Sometimes it would go ten minutes without an error, then have a whole bunch of errors within 10 seconds. The Supra in this range didn't have any errors, but all the others behaved pretty much the same: they all varied a lot, between minutes per error and groups of errors close together.
  • Above 10 ft: ALL the cables had errors. This is where I saw some differentiation between cables and equipment combinations. A few combinations had many errors per second, which caused the DACs to drop the connection.

With the same source and DAC, changing cables did make significant differences. They all had errors, but some would be at 1 per second and others at 10-20 per second. At this length sources did make a difference, but the DAC made a much bigger difference. One of the DACs would stay connected even when getting lots of errors, but the other two would disconnect when the error rate got above 2 errors per second.

As a separate test, quite a while after the original, I DID measure a Corning optical cable, 30 ft long; this worked very well, no errors under any combination.

Note this was NOT a test designed to measure specific brand cables. I really had no idea what error rates existed on music over USB cables, so I decided to test systems using equipment and cables I had on hand; that is all this test was about. There was no sound-quality part of this test; it was all about errors as detected by the analyzer. It was not supposed to be any form of uniform representation of the entire universe of cables.

Summary of results (for the cables and equipment I had on hand; others may be different):
  • Cable length DOES matter. Under 6 ft, no errors with any cables and equipment combinations.
  • Equipment (source, DAC) makes very little difference.
  • With longer cables, giving lots of errors, DACs vary a lot in how they handle errors.

Note: this is JUST error rates; it has NOTHING to do with how things sound, that is a completely different test. John S.
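As a rough sketch of how observations like these turn into numbers, here is a hypothetical way to convert analyzer error timestamps into a sliding-window errors-per-second rate and flag the roughly 2 errors/sec level at which two of the DACs dropped the connection. The function name, window length, and threshold are illustrative, not from the actual test setup:

```python
# Turn a list of error timestamps (seconds) into a sliding-window rate and
# report when that rate first exceeds a DAC-disconnect threshold.
from collections import deque

def rate_exceeded(error_times, threshold=2.0, window=5.0):
    """Return the first timestamp at which errors-per-second averaged over
    `window` seconds exceeds `threshold`, or None if it never does."""
    recent = deque()
    for t in error_times:          # timestamps assumed sorted ascending
        recent.append(t)
        while recent and recent[0] < t - window:
            recent.popleft()       # drop errors that fell out of the window
        if len(recent) / window > threshold:
            return t
    return None

# Occasional errors (one every minute or two) never trip the threshold:
assert rate_exceeded([0.0, 90.0, 200.0]) is None
# A burst of 12 errors inside one second does:
burst = [100.0 + 0.08 * i for i in range(12)]
assert rate_exceeded(burst) == burst[10]
```

This also illustrates why burstiness matters: the same long-run average error count can either stay comfortably under a disconnect threshold or blow past it, depending on how the errors cluster.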
  6. I would recommend 6 ft (2 m) and under. The protocol used to transfer audio data over USB does NOT have any kind of error correction. I did a study of about 12 different USB cables (none of them expensive audiophile types). Above 6 ft I started to get occasional errors, and above 10 ft I got frequent errors. The only exception to this was Supra: a 10 ft Supra showed no errors, but ALL the other cables were showing frequent errors at 10 ft. None of the cables 6 ft and under showed any errors at all. Note that the audio protocol is very different from all the other protocols used in USB; the others DO have error correction, so you can go with longer cables and have an error-free connection, but not with audio. John S.
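To see why "no error correction" matters, here is a hedged back-of-envelope calculation. The figure of 1000 audio packets per second is an assumption for illustration only (it is not a measured value from the study above), but it shows how even a tiny per-packet error probability becomes a steady stream of uncorrectable glitches:

```python
# With no retransmission, every damaged audio packet is simply lost.
# Assumed (illustrative) packet rate; real rates depend on USB speed and
# the DAC's audio format.
PACKETS_PER_SECOND = 1000

def errors_per_hour(p_packet_error):
    """Expected uncorrectable glitches per hour at a given per-packet
    error probability."""
    return p_packet_error * PACKETS_PER_SECOND * 3600

# A one-in-a-million packet error rate still gives a few glitches per hour:
assert abs(errors_per_hour(1e-6) - 3.6) < 1e-9
# At one-in-ten-thousand, it is several glitches per minute:
assert abs(errors_per_hour(1e-4) - 360.0) < 1e-9
```

With bulk-transfer protocols the same damaged packets would simply be retried, which is why only the audio (isochronous) traffic shows this length sensitivity as audible errors.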
  7. The A side RJ45 jacks are all 10/100/1000. The A side SFP is just 1000 and the B side is 100. There really is no "clean" and "dirty" side with the current implementation. There is a slight difference in clocking and power between sides, but certainly not worthy of the monikers "clean" and "dirty". I'm just using A and B sides. A has the 4 RJ45 jacks and the SFP cage, B has the single 100Mb RJ45. John S.
  8. That is one of the intriguing things, we don't know yet. In the configuration most people will be using it in (B side RJ45 to streamer), we have a built-in high-quality isolating power supply between the external power supply and the circuitry. The B side chips actually have a 4-stage power network powering them; I'm not sure how much of the external power supply will get through that. Even on the A side, from say one RJ45 to another A side RJ45, there is a two-stage power network for everything. The result is that the quality of the external supply may not be as important as for other designs. But we really won't know for a while. John S.
  9. Most of the ports on that switch are not gigabit, EXCEPT the RJ-45 and SFP cage off to the side; they both work at gigabit. So if you connect the opticalModule to one of those it should work. John S.
  10. I highly doubt it. Of course nothing is stopping you from trying, but my guess is that it will make no difference. John S.
  11. The metal case of the B RJ-45 is connected to the GND of the B side. BUT the GND of the B side is not connected to the GND of the A side. The only thing a connection from the B RJ-45 shield can possibly connect to is the shell of the BNC jack for external clock. John S.
  12. Please read post #658 https://audiophilestyle.com/forums/topic/55217-sonore-opticalrendu/?do=findComment&comment=963599 This goes into some detail on what is happening here. For your specific question: an opticalModule plugged into an RJ-45 jack of the switch is probably going to produce lower phase noise coming out of its SFP port than the SFP port of the switch. The result is that running the opticalModule into the opticalRendu is probably going to sound significantly better than the SFP port of the switch going into the opticalRendu. John S.
  13. The iFi Groundhog does not use a common spade; it is very carefully designed so it is exactly the right size, and it has little "hooks" at the ends that go slightly more around the barrel than a common U spade. The result is that it "clicks" on and stays in place, making a fairly decent connection. John S.
  14. One thing to consider: your current situation, where sending music through the Ethernet while playing significantly degrades sound, may in fact go away when using an EtherREGEN. That is its primary purpose: to get rid of anything coming over the network that will degrade sound. It seems like your assumption is that the degradation is coming from the renderer itself having to deal with the data, rather than from what is coming over the connection itself. It's hard to tell exactly which it is. The EtherREGEN may clean up the external degradation so much that having music data coming over the wire while it is being played does not cause any degradation. Or there still might be some effect from the renderer's electronics itself. That's going to be hard to tell until you try it. Unfortunately there is no way anyone can tell you exactly how it is going to turn out in any situation; the fully functional EtherREGEN simply does not exist at this point in time. Even if I had all the equipment you have, I couldn't test it, since I don't have a fully functional board at this point. I'm not trying to hide things or be difficult; it just cannot be done right now. There is no way I can tell you what configuration is going to sound best; there is just no way I can do that right now, and any attempt at making a pure guess would be a disservice to the community. It seems to me that the best way to deal with this is to wait until you have an EtherREGEN, then try it in the different modes and see if prebuffering everything before playing is in fact better than data coming over while playing. THEN we have some information to start coming up with scenarios for testing to find out what is ultimately best for you. John S.