
About JohnSwenson


  1. The ultraRendu's Ethernet port is 10/100/1000, so it will work fine with the 100Mb B side port of the ER. The opticalRendu is gigabit only on its SFP cage (optical), which will also work with the ER since the optical port on the ER is gigabit as well. So in THAT case you would connect the SFP port of the ER over optical to the optical port of the opticalRendu, and the B port of the ER to your main network. John S.
  2. Hmm, so the connection to the line through the wall is gigabit and the connection to the Blu-Ray player is 100Mb, but when you have the 100Mb switch instead of the ER, the connection through the wall is 100Mb. One thing to try is to put the 100Mb switch on the left side of the diagram, feeding the wire to the wall. That way the switch on the left of the diagram sees a 100Mb connection, not a gigabit connection. I have seen a situation where this mattered, causing lots of dropouts just like you are seeing. I don't remember the exact details, but the audio protocol wound up behaving differently depending on the connection speed; if both ends were not the same it didn't work right. John S.
  3. This configuration will work very well. As long as the packet flow is going from B to two A ports this is fine. The problem occurs when you have something like a NAS or router connected to the A side AND an endpoint on the A side. Then the timing of the packets coming into the A side can affect the timing going out on another A side port. If two A ports are going out with data coming from the B port, this can't happen. John S.
  4. Just look at the rated distance compared to what you will be using. The ones you are using are rated at 40KM; the SX ones are rated at 500M. Those 40KM ones need a very powerful laser to reach that distance, and with a short cable (like what you would have in a house) that is severe overkill and can fry the receiver. The 500M ones have a much lower power laser which works fine with the short cables we use for audio. There MAY be things that make the 40KM ones better (maybe better receivers etc, I have no clue what is inside) that might make them worth it, but I would start with the SX ones, get familiar with them, and then later try playing with the ones that seem a little out there for this purpose. John S.
  5. No, it's not in the white paper, but I can give some insight into measuring it. First off is what you are measuring. You can use test equipment to measure the supply response: apply a controlled current load to the supply, then measure what the supply does. Some electronic loads let you vary the load with periodic patterns; then you look at the voltage with a scope and see what it does. These are usually square wave changes to the current. This is nowhere near good enough to really characterize a supply. Another method is to hook the supply up to a real load (computer etc) and see what you get. Scopes are fairly useless for this; the patterns in load current from a computer are so all over the place that no matter how you set a scope you will miss important stuff. Then there is how you look at the results: you can look at it in time or in frequency. It is exactly the same thing as using a scope and a spectrum analyzer, they are different ways to look at the same thing. A scope gives you the timing view and an output impedance plot gives you the frequency view. Finding the right test equipment to measure this is not easy. You need to measure small changes sitting on top of DC voltages. The usual way to do this is with AC coupling (the signal goes through a cap); that works fine for periodic high frequency signals, but we are not measuring that. The variations from digital loads (computers etc) are not very periodic, and some of the most interesting events are low in frequency, which will not pass through the cap. When doing an impedance plot we also want to measure low frequencies, which again will not pass through the cap. If you use DC coupling, a scope does not have enough resolution to measure the small voltage changes sitting at the DC voltage of the supply. Many years ago I tried doing this with a custom AC coupled amplifier. I first tried high value electrolytic caps, which couple the low frequencies fine, but are very non-linear at high frequencies. 
So I tried large electrolytics in parallel with small film caps. Well, the parasitic inductance and capacitance reacted with each other, creating massive resonances which again made the results useless. Then I tried high value film caps, which are way more linear, but by the time you get enough capacitance they are large, with enough inductance that the high frequency response is terrible. OK, so now it is non-inductive film; this might have worked, but the prices were insane, I could not afford it. So I dropped the project at the time. To make an impedance plot you have a voltage controlled load and you feed it a swept frequency sine wave. This will cause small changes in the voltage of the supply; the higher the impedance at a particular frequency, the higher the voltage change. Part of the problem here is that NONE of the test equipment manufacturers actually make such a device. The frequency range, voltage and current of supplies vary so much that no company can produce a universal load that will work for most people. A year and a half or so ago I built my own voltage controlled load. It works pretty well, but it is not calibrated; there is no single voltage-change-to-current-change number, and unfortunately the delta varies depending on the supply voltage and how much current you are pulling with the load. I used this with my spectrum analyzer, which has a "tracking oscillator" that produces a sine wave at the center of the instantaneous frequency of the analyzer. This DID actually work. I get a nice amplitude versus frequency graph, but it is not calibrated, and because of the non-linear behavior of the load you can't just look at the graph; there have to be corrections made. In addition, the spectrum analyzer can only go down to 10Hz, which is not low enough to capture some of the most important ranges. 
So after all this I decided the best way to do this is to use a high resolution ADC and DAC: the DAC generates the variable frequency sine wave to drive the load, and the ADC is set so the DC of the supply is close to the top of its range, so the high resolution can properly measure the small changes in output voltage. The data from the ADC can then be post processed to account for the non-linearities in the load and put in graph form (either time or frequency). It turns out there is a fairly inexpensive piece of equipment on the market today which is perfect for this, called the Red Pitaya. It has high resolution DACs and ADCs in a data collection system with a lot of memory, plus an FPGA and ARM processor to control things. I have one of these but have not had the time to write all the code necessary to make this work. It is doable but is going to take a fair amount of work to get it right. Well, that is probably more than anybody wanted to know about measuring power supplies! John S.
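The swept-sine impedance measurement described above can be sketched in a few lines of simulation. To be clear, everything here is a hypothetical stand-in: the supply model, the component values, and the RMS-based amplitude recovery are illustrative only, in place of what a real Red Pitaya DAC/ADC pair would do with a physical supply and load.

```python
import numpy as np

# Hypothetical supply model: a series R+L source feeding an output cap
# with ESR. A real measurement would replace this with ADC captures.
def supply_impedance(f, r=0.02, l=1e-7, c=1e-3, esr=0.01):
    w = 2 * np.pi * f
    z_src = r + 1j * w * l
    z_cap = esr + 1 / (1j * w * c)
    return z_src * z_cap / (z_src + z_cap)   # parallel combination

def measure_impedance(f, i_mod=0.1, cycles=16, pts=256):
    """Drive a sine current load at frequency f, look at the resulting
    voltage ripple riding on the rail, and recover |Z| = dV/dI."""
    t = np.arange(cycles * pts) / (pts * f)  # integer cycles -> clean RMS
    z = supply_impedance(f)
    v_ripple = abs(z) * i_mod * np.sin(2 * np.pi * f * t + np.angle(z))
    amp = np.sqrt(2 * np.mean(v_ripple ** 2))  # sine RMS -> amplitude
    return amp / i_mod                          # ohms at this frequency

# Sweep and tabulate |Z(f)| across the band of interest.
freqs = np.logspace(1, 5, 9)                    # 10 Hz .. 100 kHz
z_plot = [(f, measure_impedance(f)) for f in freqs]
```

Sampling an integer number of cycles per frequency is what lets the simple RMS estimate recover the ripple amplitude cleanly; real captured data would need windowing or lock-in detection instead.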
  6. Higher input voltages produce lower currents, which produce smaller voltage drops across cables and connectors, which MIGHT give slightly better performance. The lower current can also significantly affect the power supply output, but exactly what happens is going to be very dependent on the particular supply. So again, as with pretty much everything else, no cut and dried universal rule. If you have the option, try it yourself and don't worry. Just remember: say you want to buy a model X 12V supply, but it costs many hundreds of dollars, so you try a model Y 12V supply first. If Y sounds better, that is not a guarantee that the 12V X will also sound better. John S.
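The current-and-drop arithmetic above is just Ohm's law; here is a quick worked example. The load power and cable resistance are made-up illustrative numbers, not measurements of any particular device.

```python
# Higher supply voltage -> lower current for the same load power ->
# smaller I*R drop across the DC cable. Values are hypothetical.
def cable_drop(supply_v, load_w, cable_ohms=0.1):
    i = load_w / supply_v      # current drawn for the same load power
    return i * cable_ohms      # volts lost in the cable

drop_7v = cable_drop(7.0, 7.0)    # 7 W at 7 V:  1 A    -> 0.1 V lost
drop_12v = cable_drop(12.0, 7.0)  # 7 W at 12 V: ~0.58 A -> ~0.058 V lost
```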
  7. When going from B to A, the CONNECTION characteristics of the RJ45 A ports don't change; they are still auto-negotiated to 10/100/1000. BUT since the B side is 100, the overall throughput is limited to 100. So if you connect to an A port at gigabit, the bits in a packet are traveling at gigabit speed, but there is more time between packets. The A side SFP interface is ALWAYS gigabit only. John S.
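A rough illustration of "bits travel at gigabit speed, but with more time between packets": with a 100 Mb bottleneck feeding a gigabit link, each frame is serialized fast and then the wire sits idle. The full-size frame and the clean pacing are simplifying assumptions; real Ethernet adds preamble, inter-frame gap, and header overhead.

```python
# One full-size 1500-byte frame, ignoring preamble/IFG/header overhead.
FRAME_BITS = 1500 * 8

def frame_timing_us(link_bps, bottleneck_bps):
    on_wire = FRAME_BITS / link_bps * 1e6        # serialization time, us
    spacing = FRAME_BITS / bottleneck_bps * 1e6  # pacing set by the 100Mb side
    return on_wire, spacing - on_wire            # (wire time, idle gap)

wire_us, gap_us = frame_timing_us(1e9, 100e6)
# Each frame occupies the gigabit link for 12 us, followed by ~108 us of
# idle before the 100 Mb B side can deliver the next frame.
```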
  8. Hi dmagnus1, you are inquiring into one of the most complicated and controversial subjects in audio, so don't worry about not understanding it well. First off, the "number" for jitter: this is like giving a single "performance number" for a car, it is almost meaningless without defining exactly how it was arrived at and what it means. As far as jitter goes there are a whole bunch of different measurements of jitter that all give a single time number (ps, ns, fs etc), yet these measurements are quite different. For example, I have a particular oscillator that when measured one way gives 11ps and another way 97fs, kind of a big difference. The result is that using a "ps" number in comparisons is only valid when the same person, using the exact same test equipment in exactly the same way, is doing the measurements. Comparing numbers coming from different companies is essentially meaningless. Even if they specify exactly WHAT the measurement is, using different test equipment, or even the same equipment in different ways, can give different results for the SAME test. And all this is assuming that what you are measuring in time units (ie ps) is even correlated to sound quality. My experience seems to be showing that a spectral measurement of jitter (ie phase noise) has a greater correlation to sound quality. But GOOD phase noise measurement equipment is VERY expensive. Unfortunately, understanding a phase noise plot is not easy, and again comparing two is fraught with peril. Then there is where you are measuring. Are you measuring a clock right at the oscillator, at the DAC chip pins, or somewhere else? You can get radically different results depending on where in a circuit you measure it. Then there is the interesting part that the jitter inside a DAC chip can be much worse than the jitter at the package pin; what is going on inside the chip can make it worse. Unfortunately there is essentially NO way to measure THIS directly. 
The upshot of all this is: don't even try to select equipment based on jitter numbers, it is meaningless. This doesn't mean that jitter doesn't matter, or that everything is the same, far from it. It just means that the commonly bandied-about numbers are meaningless for comparing the jitter of one device to another in any way that has any correlation with sound quality. John S.
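One way to see why a single "ps" number is ambiguous: RMS jitter is an integral of the phase-noise spectrum, so the answer depends entirely on the integration bandwidth. The oscillator frequency and phase-noise profile below are invented for illustration, but the orders-of-magnitude spread between the two bands mirrors the 11ps-vs-97fs example above.

```python
import numpy as np

F0 = 25e6   # hypothetical 25 MHz oscillator

def phase_noise_dbc(f):
    """Made-up profile: -70 dBc/Hz at 10 Hz offset, falling 20 dB/decade
    down to a -170 dBc/Hz floor."""
    return np.maximum(-70 - 20 * np.log10(f / 10.0), -170.0)

def rms_jitter_s(f_lo, f_hi, n=20000):
    """Integrate one-sided phase noise over [f_lo, f_hi] -> RMS jitter."""
    f = np.logspace(np.log10(f_lo), np.log10(f_hi), n)
    s_phi = 10 ** (phase_noise_dbc(f) / 10)                   # rad^2/Hz
    area = np.sum((s_phi[1:] + s_phi[:-1]) / 2 * np.diff(f))  # trapezoid rule
    phi_rms = np.sqrt(2 * area)                               # radians RMS
    return phi_rms / (2 * np.pi * F0)                         # seconds RMS

wide = rms_jitter_s(10, 20e6)      # 10 Hz .. 20 MHz band: picoseconds
narrow = rms_jitter_s(12e3, 20e6)  # 12 kHz .. 20 MHz band: hundreds of fs
```

Same curve, same oscillator, yet the wide band reports roughly thirty times the "jitter" of the narrow band, which is why quoting a bare ps figure without the measurement definition says almost nothing.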
  9. There have been some discussions about thermal issues with the ER and I'd like to go into a little detail about how this all works. There is a lot of partial understanding here that can cause people to make bad decisions. (BTW this is generic; ALL electronics follows this.) To see how the system works thermally, start at the outside of the case: its thermal properties are by far the biggest factor, the internal details have very little to do with it. (I know that sounds bizarre, but it's true.) What matters for a case like the ER, which is a closed case made out of aluminum that reaches an equilibrium where the entire case is at almost the same temperature, is how well the case gets rid of heat AND the total power being generated inside the case. This type of case has two primary means of getting rid of heat: IR radiation and air movement in contact with the surface. Without a fan blowing air across the surface, both of these are heavily determined by the temperature of the case. As the temperature goes up, more heat goes out through IR, and more air convection happens. BTW, in this particular case the IR is the biggest contributor to getting the heat out, not air. So what happens is the case reaches a temperature that produces IR and convection power equal to the total amount of power generated by the circuit. That's it. The temperature of the case has nothing to do with the internal structure, whether things are connected by metal to the case or just air, etc. NONE of that matters for case temperature. It is entirely the equilibrium temperature where internal heat flow matches external heat flow. The structure INSIDE does matter for the temperature of the board, though. The temperature of the board is again an equilibrium between the heat coming from the board and how efficiently the board can get heat to the case. 
We know from thermodynamics that the board HAS to be hotter than the inside of the case; how much hotter depends on the heat transfer efficiency of the board. It turns out that the common PC board is actually pretty good at doing this, and most of it is IR radiation, not air. It turns out that for a system like the ER the board temperature is only a degree or so hotter than the case! Even when there is just air between them! There is no need for direct metal to case connections; it would only drop the board temperature by a fraction of a degree. This only works when most of the heat being generated by the devices on the board gets coupled well into the board and the internal power and ground planes do a good job of spreading the heat around throughout the board. In the ER's case it is a six layer board with a LOT of P/G planes that do a VERY good job of this. The only issue is that some of the small devices do not couple well into the board; their size is so small they just can't transfer their heat dissipation well into the board. Most of the devices on the board DO couple well, and those device temperatures are only a few degrees hotter than the board. BUT a few of them don't couple well and they are much hotter than the board. These are the ones that have the heatsinks; this adds another path to get the heat from the device to the case (again IR and convection). The result is these devices are quite a bit cooler than they would be without the heatsink. (They still are hotter than the board, but not by much.) Note these internal heatsinks do not change the case temperature one bit; the total heat stays the same, they just lower the temperature of certain devices that have a hard time getting their heat into the board. We spent a lot of work on this: analyzing, measuring, trying different things etc. The result is a well integrated thermal system where every device runs well below its thermal rating. Nothing you could do on the inside will make the parts run any cooler. 
The only thing that really matters is getting the case itself cooler: external heatsinks, external air flow etc. But you have to do a LOT of this to make a big difference, since a large percentage of the total case heat flow comes from IR; air flow changes only make a fairly small change in the total. One very important thing to understand is that the IR emission from a black surface is vastly greater than from a bare metal surface, so putting a bare metal heatsink on top of an ER will actually increase the temperature of the case because you are almost stopping the IR emission. If you DO use a heatsink, make sure it is black! This brings up another interesting way to cool the thing: put a thick black aluminum plate under the ER. The bottom will transfer a lot of heat to the plate through IR, which then spreads out through the plate into the environment. You could also make that into a nice anti-vibration platform. The temperature inside the case does not inherently matter to the oscillator, but it DOES affect the thermal sensitivity. All crystal oscillators have a sensitivity to temperature change, but it is not constant. At a certain temperature the temperature coefficient is zero. Far away from that temperature, small changes in temperature make a fairly big change in frequency. The zero point for the oscillators we use is a little above common room temperature, so at the temperature they sit at inside the case they are definitely above the zero point, and thus somewhat sensitive to temperature changes. This is why reaching thermal equilibrium is quite important: you want to keep the temperature stable because the oscillator is sensitive to temperature change. The cooler the case is, the less this sensitivity is. There you go, far more than you ever wanted to know about thermal flow and electronics. John S.
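The equilibrium described above can be sketched numerically: find the case temperature at which IR radiation plus convection equals the internal dissipation. The surface area, dissipation, convection coefficient, and emissivity values below are rough guesses for illustration, not ER measurements; only the Stefan-Boltzmann constant is a real physical number.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def heat_out_w(t_case_k, emissivity, t_amb_k=295.0, area_m2=0.05, h_conv=5.0):
    """Total heat leaving the case: IR radiation plus natural convection."""
    ir = emissivity * SIGMA * area_m2 * (t_case_k ** 4 - t_amb_k ** 4)
    conv = h_conv * area_m2 * (t_case_k - t_amb_k)
    return ir + conv

def equilibrium_temp_c(power_w, emissivity):
    """Bisect for the temperature where heat out equals heat generated."""
    lo, hi = 295.0, 400.0
    for _ in range(60):            # heat_out_w is monotonic in temperature
        mid = (lo + hi) / 2
        if heat_out_w(mid, emissivity) < power_w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2 - 273.15

t_black = equilibrium_temp_c(5.0, 0.9)  # black surface: strong IR emitter
t_bare = equilibrium_temp_c(5.0, 0.1)   # bare metal: poor IR emitter, hotter
```

With these guessed numbers the black case settles roughly 8-10 C cooler than the bare one at the same 5 W dissipation, which is the point about surface finish: killing the IR path forces the case temperature up until convection alone can carry the heat.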
  10. Since you are running optical into the ER, it is probably better to just have that connection into the ER and hook other stuff up to the 2960, although if they are near each other you could try it both ways and see what sounds best. There is no obvious "this will definitely be the best", so it looks like if you really are interested in optimizing the system you are going to have to try some things out and see what sounds best. John S.
  11. You want to keep the ER near the ALLO without any boxes in between. I would probably run the existing optical connection from the Ubiquiti router to the optical SFP on the ER. This sounds like it is just swapping the 10/100 2960 for the ER, correct? Is the router near the ER? Is the 125ft cable near the ER? If the router, ER and 125ft cable are all near each other, it seems you have several ways to hook this up: 1) ER optical from router, 125ft cable from router, gigabit 2960 or Netgear at the other end; 2) gigabit 2960 from router, 125ft cable from gig 2960, Netgear or 10/100 2960 at the other end; 3) gigabit 2960 from router, 125ft cable AND optical to ER from gig 2960, Netgear or 10/100 2960 at the other end; 4) 10/100 2960 from router, ER from 10/100 2960 optical, 125ft cable from router, gig 2960 at the other end. The 10/100 2960 has one gig port and the rest 10/100, so whether you can use it at the end of the 125ft cable depends on whether the TV etc can run at 100 or whether they NEED gigabit. Several people have stated that the ER driven by a 2960 sounds better than the ER directly into a router. The SFP port on the 10/100 2960 IS gigabit, so it can talk to the SFP on the ER. I'm getting a feeling that #4 probably gives you the best overall optimization. John S.
  12. I don't know about the Emo, but I have looked at and measured the Baaske extensively. The Baaske is just a transformer, that's it. This gives some extra common mode attenuation, and the bandwidth of the transformer is low, just enough to barely pass an Ethernet signal. This can cut down on some frequencies of differential noise, but it also degrades signal integrity significantly. So putting one between the B side jack of an ER and an endpoint is almost guaranteed to make things worse. On the A side it may have some effect, but it could be worse or better. My gut feeling is to not use these at all. John S.
  13. This is where it gets complicated: you do NOT want the DC- to be the same for both supplies, as this bypasses the moat, giving up much of the goodness of the ER. Connecting the DC- to safety ground should ONLY be done for the most upstream ER. Doing it for both again shorts out the moat. Whether you need the safety ground on the upstream ER depends on whether there are other things plugged into the A side. If nothing else is plugged in, then don't bother connecting DC- to safety ground. You CAN, but I would not do it if it will cost more or take extra time etc. John S.
  14. Are the two rails completely isolated from each other? An EtherREGEN runs nicely at 12V, and there is almost no difference heat-wise from 7V to 12V, so I would go with 12V; it will pull less current from the supply, reducing any voltage drop across the DC cable from the supply. John S.
  15. I think your problem is the MC200cm. It says it is 10/100/1000 capable, which requires auto-negotiation, BUT the fiber protocol the oM uses is 1000 only; it cannot auto-negotiate. This makes me think that the TP-Link device is using an incompatible fiber protocol. John S.