Friday, 15 November 2019

passive networks - Understanding USB Inrush Current Requirements


In relation to a previous question, I'm trying to understand the USB 2.0 requirements for inrush current. I understand the basic idea, but a few details are still unclear to me. The specification states, in part, that:



The maximum load (CRPB) that can be placed at the downstream end of a cable is 10 μF in parallel with 44 Ω. The 10 μF capacitance represents any bypass capacitor directly connected across the VBUS lines in the function plus any capacitive effects visible through the regulator in the device. The 44 Ω resistance represents one unit load of current drawn by the device during connect.


If more bypass capacitance is required in the device, then the device must incorporate some form of VBUS surge current limiting, such that it matches the characteristics of the above load.



The USB-IF also provides a description of an inrush current test:



Inrush current is measured for a minimum of 100 milliseconds after attach. Attach is defined at the moment the VBus and ground pins of the plug mate with the receptacle.
Any current exceeding 100 mA during the 100 ms interval is considered part of the inrush current event. The inrush current is divided into regions. A region is an interval where the current exceeds 100 mA until the time the current falls below 100 mA for at least 100 µs. There can be multiple inrush regions during the 100 ms period. Pass/fail is determined by the region having the highest charge.



That is explicit as far as it goes, but it only gives a minimum measurement time, and it doesn't spell out what algorithm is applied to the inrush regions to produce a pass/fail decision. I think the idea is that during the regions when the current exceeds 100 mA, the current is integrated to get the total charge transferred during that window, and the total charge must not be greater than what you would get with the 10 µF ∥ 44 Ω load. According to one source, that would be 5 V × 10 µF = 50 µC. This is where my understanding gets a little shaky.
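To make the quoted procedure concrete, here is a minimal sketch of the region segmentation and per-region charge integration as I read the test description. The function names, and the choice to integrate the full current rather than only the excess over 100 mA, are my own assumptions, since the USB-IF does not publish its algorithm:

```python
import numpy as np

def inrush_regions(t, i, threshold=0.100, gap=100e-6):
    """Split a current waveform into USB-IF style inrush 'regions':
    a region begins when i(t) rises above `threshold` and ends once
    i(t) has stayed below `threshold` for at least `gap` seconds.
    t in seconds, i in amperes; returns (start, end) index pairs."""
    regions, start, below = [], None, None
    for k in range(len(i)):
        if i[k] > threshold:
            if start is None:
                start = k
            below = None                      # still inside a region
        elif start is not None:
            if below is None:
                below = k                     # first sample back below threshold
            elif t[k] - t[below] >= gap:
                regions.append((start, below))
                start = below = None
    if start is not None:                     # region still open at end of capture
        regions.append((start, below if below is not None else len(i) - 1))
    return regions

def worst_region_charge(t, i, **kw):
    """Charge (coulombs) of the region with the highest charge,
    using trapezoidal integration of the full current."""
    charges = [np.trapz(i[a:b + 1], t[a:b + 1])
               for a, b in inrush_regions(t, i, **kw)]
    return max(charges, default=0.0)
```

Whether the resulting worst-region charge is then compared against the 50 µC figure, and whether the 100 mA baseline is first subtracted, is exactly the part that isn't published.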


To help me understand, I analyzed the following circuit:



[schematic: voltage source \$V_1\$ in series with \$R_1\$, driving \$R_2\$ in parallel with \$C_1\$]


[The R1 resistance is not part of any specification, but I need it to do the math, and I can let it go to zero as needed.] The current starts at \$V_1/R_1\$ and decays exponentially to \$V_1/(R_1 + R_2)\$ with time constant \$(1/R_1 + 1/R_2)^{-1} C_1\$.
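Spelling that out (writing \$\tau = (1/R_1 + 1/R_2)^{-1} C_1\$ for the time constant), the current drawn from \$V_1\$ is

$$ i(t) = \frac{V_1}{R_1 + R_2} + \frac{V_1 R_2}{R_1 (R_1 + R_2)} \exp\left(\frac{-t}{\tau}\right) $$

which starts at \$V_1/R_1\$ and settles to \$V_1/(R_1 + R_2)\$ as expected.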


The total charge transferred at time \$t\$ will be


$$ Q(t) = \frac{V_1}{R_1 + R_2}\,t + \frac{V_1 R_2^2 C_1}{(R_1 + R_2)^2}\left\{1 - \exp\left(\frac{-t}{C_1}\left(\frac{1}{R_1} + \frac{1}{R_2}\right)\right)\right\} $$


In the limit as \$R_1\$ goes to zero, this simplifies to


$$ Q(t) = \frac{V_1}{R_2}\,t + V_1 C_1 $$
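As a sanity check on that limit, here is a quick symbolic verification (a sketch using sympy; the variable names are mine):

```python
import sympy as sp

V1, R1, R2, C1, t = sp.symbols('V1 R1 R2 C1 t', positive=True)

tau = C1 / (1/R1 + 1/R2)                     # time constant from above
Q = (V1*t/(R1 + R2)
     + V1*R2**2*C1/(R1 + R2)**2 * (1 - sp.exp(-t/tau)))

# Limit as the source resistance R1 goes to zero:
print(sp.limit(Q, R1, 0, '+'))               # -> C1*V1 + V1*t/R2
```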


One part I don't understand is that with 5 volts from the USB bus and the prescribed 44 Ω load, there will always be a 5 V / 44 Ω = 114 mA current, which is more than the 100 mA limit described in the quoted USB-IF test, and also more than the maximum one unit load (i.e. 100 mA) allowed for a low-power USB function (USB 2.0 spec, section 7.2.1). In the \$R_1 = 0\$ limiting case, this current will draw as much charge as the capacitor (i.e. 50 µC) in \$R_2 C_1 = 440\ \mu s\$.
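For reference, the 440 µs figure falls out directly by equating the resistive and capacitive terms of the limiting expression:

$$ \frac{V_1}{R_2}\,t = V_1 C_1 \quad\Rightarrow\quad t = R_2 C_1 = 44\ \Omega \times 10\ \mu\text{F} = 440\ \mu\text{s} $$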


So the question, if you're still reading, is what does it mean, precisely, to "[match] the characteristics of the above load" (i.e. 44 Ω parallel with 10 µF), and how does the described USB-IF inrush current test decide how much current is too much?


Thanks.



Answer




The answer is: no one knows.


Well, someone knows, but the inrush pass/fail test is considered proprietary information and how that determination is made is not published by the USB-IF, for reasons that are theirs to know. I know that's not a very satisfying answer, but that's the simple truth.


To quote the electrical compliance tests page (the text is in red so you know they're even more serious than the normal level of USB-IF seriousity):



NOTE: Some of the following approved test solutions use proprietary software to assess signal quality and inrush current events. The only official analysis tool for certifying signal quality and inrush current is USBET20 published by the USB-IF. Please be sure to run captured signal quality and inrush current test data through USBET for an official assessment of the measurement.



So they are explicitly saying that neither a waveform capture alone nor the "USB inrush test" feature of various oscilloscopes (I've never seen this, so I must not be using expensive enough oscilloscopes) is valid for making the determination; the only way to meet inrush current compliance is if USBET20 says your device meets compliance. It takes in .tsv/.csv waveform capture data and poops out USB compliance justice (in HTML format).


From the USB tools download page:



USBET20 (8MB, August 2016) is a stand-alone electrical signal analysis tool for USB Compliance testing. USBET20 is the official compliance electrical analysis tool that performs pass/fail assessments on signal quality and inrush current data captured from an oscilloscope.




To elaborate further, they only tell you the minimum measurement time because that's all you need to know. You don't need to know how the actual pass/fail determination is made, and indeed, they aren't telling. The USB-IF is willing to tell you if you are in compliance, but they aren't telling anyone how they actually determine that (at least for inrush current).


That maximum downstream load is a specification relevant to an upstream device (host port or hub): when designing one of those, and NOT a peripheral, the hub or port should be able to withstand a maximum downstream load of a 44 Ω resistor and a 10 µF capacitor in parallel. And you're absolutely correct - this load can exceed the 100 mA limit, by roughly 19 mA at the 5.25 V maximum bus voltage (5.25 V / 44 Ω ≈ 119 mA). As such, an upstream device must be able to handle such a load being connected ("handle" meaning not suffer more than a 330 mV droop).


However, if your peripheral were such a load, it would not pass compliance, because it would draw over 100 mA across some (essentially all) of the possible voltage range. That load is meant entirely as a worst-case design scenario for upstream devices, and is used to test them. It is not relevant to a peripheral's inrush current compliance test.


What is relevant is that this is not really about the current. It's about charge, so you're already sort of on the right path. Specifically, it is about the voltage droop. An upstream port on a hub must have no less than 120 µF of very low ESR capacitance on its output VBUS, the bus powering downstream peripherals.


Take a host or powered hub supplying the worst-case output voltage (4.75 V), going through the crappiest connectors and the crappiest cable to an unpowered hub that also uses the crappiest connectors, and give that hub the crappiest allowed VBUS input-to-output voltage drop (350 mV): the downstream-facing voltage will be 4.4 V. That 4.4 V, connected through crappy connectors to a crappy peripheral, can leave the peripheral seeing the real absolute minimum voltage for a low-power device: 4.35 V. From page 175 of the USB 2.0 spec:


[figure from the USB 2.0 specification: worst-case VBUS voltage drop budget from host/powered hub down to a low-power function]


Let's do some math. An unpowered hub must have 120 µF of downstream capacitance. At 4.4 V × 120 µF, that's 528 µC of charge. A device being connected has a 10 µF capacitor. If you pretend that there is no static load or power, just a charged capacitor on the port and an uncharged 10 µF one in the peripheral, charge will be redistributed not until the smaller one is full, but until the voltages across the two are equal. Charge is conserved, so the point at which the two capacitor voltages equal each other, given 528 µC of starting charge, is roughly 4.06 V. That is 40.6 µC transferred. Add in the connector resistances, and the downstream capacitor will not even be able to draw that much charge during inrush.
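A quick numeric sketch of that charge-sharing argument (the variable names are mine):

```python
# Charge sharing between the hub's 120 uF reservoir (charged to 4.4 V)
# and an initially empty 10 uF peripheral capacitor.
C_hub, C_dev = 120e-6, 10e-6      # farads
V0 = 4.4                          # volts on the hub reservoir at attach

Q0 = C_hub * V0                   # 528 uC of stored charge
V_final = Q0 / (C_hub + C_dev)    # charge is conserved -> ~4.062 V
Q_transferred = C_dev * V_final   # charge ending up on the device cap

print(f"V_final       = {V_final:.3f} V")             # ~4.062 V
print(f"Q_transferred = {Q_transferred*1e6:.1f} uC")  # ~40.6 uC
print(f"droop         = {(V0 - V_final)*1e3:.0f} mV") # ~340 mV
```

Note that the ideal-capacitor droop (4.4 V - 4.06 V ≈ 340 mV) lands essentially right at the 330 mV budget; real connector and cable resistance spreads the transfer out in time, which is why the capacitor won't even draw that much during the inrush window.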


So, literally the only important factor is that it does not exceed 10 µF. Current is not really the thing that matters; it's how far the hub's downstream port capacitance can be depleted without drooping more than 330 mV during the transient, before things like cable inductance give the actual host power time to catch up. And 10 µF is the closest standard value that won't cause that.


Also note that there is no total capacitance limit. You can have 1 F of all-ceramic capacitance on a downstream device, as long as it is divided into 10 µF sections, only one of which connects at attachment. Once the device is attached, you must never switch in more than a 10 µF step at a time, but you may gradually bring more capacitance online in 10 µF increments. The whole point is avoiding that transient.



And yes, this means a low-power peripheral is not only supposed to work down to 4.35 V, but also to withstand a 330 mV voltage droop transient, such as when something new is connected to a hub. This also means that, theoretically, if you plugged in two devices at JUST the right times, so as to be nearly simultaneous, you could possibly disrupt operation of other devices on the unpowered hub. I'm sure the robots, with their HPETs, will exploit this critical flaw in our USB bus specification to bring about our downfall.


Now, there are probably other subtle aspects like dI/dt rates or whatever else; who knows exactly what is incorporated into the pass/fail test. Considering that they have an entire 7.5 MB installer for the program that performs that test, it's probably safe to assume it's not something simple. But just keep in mind that you are trying to avoid over-depleting upstream capacitor reservoirs with your own downstream capacitance, and that's really all there is to it. As long as you don't cause other devices to fail due to the voltage transient your peripheral could potentially cause, you'll be OK.

And really, it just amounts to keeping the capacitance seen at attach or other power state changes to 10 µF. It would actually be better to aim for less than that: 10 µF is the maximum. I don't know where the idea started that the absolute maximum should be the 'standard' capacitance, but good engineers know better than to go for maximum ratings. Always underrate. I like a nice 4.7 µF capacitor. If you need more decoupling, all you have to do is not connect it directly to VBUS and limit it to a 100 mA surge draw, and you're golden. And you're even allowed far in excess of 100 mA - as long as only 40.6 µC worth of charge is transferred during a region.
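For a sense of scale (the 47 µF value is my own illustration, not from the spec), a bulk capacitor charged behind a 100 mA current limiter fills fairly quickly:

$$ t = \frac{C\,\Delta V}{I} = \frac{47\ \mu\text{F} \times 5\ \text{V}}{100\ \text{mA}} \approx 2.35\ \text{ms} $$

and because the current never exceeds 100 mA, it never opens an inrush "region" in the first place.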


Don't worry about the inrush current. The inrush current test is not really about inrush current.

