Saturday 31 October 2015

pcb - Eagle: supply pin vcc overwritten with more than one signal


In short, my circuit is fed from a battery (VBATT). VBATT goes through a voltage divider so the ATmega can read the voltage, and is also routed into a voltage regulator (0.8–3.2 V in, 3.3 V out) whose output is VREG.


All nets for voltage and GND have names and labels (I "only" need to route the signals and the voltage divider), as you can see.


Checking for errors, the DRC complains with



supply pin vcc overwritten with more than one signal


This is the image (if needed I can also provide the sch and brd files).


On the board, however, all airwires seem OK.


[schematic image]


Thank you for your effort and support.




gprs - GSM Module gets into reboot loop


I have assembled a DIY M590E GSM/GPRS module kit, bought from Amazon. The seller-provided image is: m590e gsm gprs diy kit


The Schematic used to assemble the kit is: Schematic used


I connected it to the Laptop using a CP2102 based USB to TTL converter: cp2102 based usb to ttl


GSM Module    CP2102 Module
GND           GND
RXD           TXD
TXD           RXD
VCC           5V

Then, after opening PuTTY, the output from the serial port reads:


at
OK

at+cpas
+CPAS: 0


OK

MODEM:STARTUP

+PBREADY

MODEM:STARTUP

+PBREADY


MODEM:STARTUP

+PBREADY

MODEM:STARTUP

...

However, on powering up without a SIM card, the modem acts normally without getting into the loop.


The SIM Card used is Airtel Prepaid.




Answer





How are you powering that GSM module?



[...] from the usb port



Using a PC USB port for power is likely to be a problem, for the same reason I explained in my answer here to someone who (initially) also used a power supply with a similar current capability.


The supplier's website from that earlier question mentioned that the power supply should be capable of supplying 1A. (Actually, that GSM module's datasheet says that for a 100uF main capacitor, as shown in the photo of your kit, the power supply may need to provide up to 1.2A.)


Since a normal PC USB port is unlikely to successfully supply 1A (or 1.2A) at 5V, then when the module tries to register with a network (meaning its RF transmitter needs power), the USB port voltage will probably drop and that fits with the symptom that your GSM module appears to reboot.



This is especially true because I now see you mentioned in your question that you connected the GSM module's power "through" the CP2102 USB-to-TTL adapter. The thin PCB traces on the CP2102 will cause an even larger voltage drop on the 5V supply reaching the GSM module when it tries to draw more current.



on powering up without sim card, the modem acts normally



That is because, without a SIM card installed, the module doesn't try to register on a network, therefore it doesn't enable its RF transmitter, meaning it doesn't need much current, and so it can operate with the limited current from a USB port.


If you have an oscilloscope, you can view the voltage actually on the module (pins 2 & 3) to confirm that the supply voltage to the module dips (perhaps briefly) below its absolute minimum 3.3V, during the sequence where you see the module reboot with a SIM card installed. A multimeter cannot react quickly to short dips in voltage, and so is a less reliable measurement tool in this case.


Also note that the lack of design features (like additional filtering capacitors) recommended in the module datasheet, and the use of a diode to drop the input voltage (which is specifically discouraged in the datasheet), suggest that the module is designed for low cost and not maximum reliability.


Summary





  • Power the GSM module with short, thick ground and 5V wires from a power supply capable of providing at least 1.2A (preferably rated at 2A). Note: you need to ensure a common 0V reference between all the devices.


    and/or




  • Increase the main capacitor from the 100uF shown in the photo of the parts kit, as explained in the module datasheet, to reduce the maximum current requirement from the external power supply:






recommended capacitor values from Neoway M590 hardware design manual

Source




impedance - Circuit to Convert Line Level to Mic Level


How would I design a circuit that converts a line-level signal so it can be fed into a microphone input? The input impedance of the microphone jack is 2 kΩ.




Negative to Positive Voltage Converter


Is there an easy circuit or IC that will convert negative voltage to positive voltage and retain the initial value? Context below.



I am trying to add a voltmeter to my power supply using some old parts I have lying around. My power supply has two 2–12 V channels, one positive and one negative. I have an ADC that can take anywhere from 0 to 5 V. To start, I have a voltage divider on each channel output to my ADC so that my maximum voltage on both channels is just under 5 V. However, I don't think my ADC can take negative voltage. I only have one 7-segment display, so I am using a switch to select which voltage I am measuring.


On my ADC there is a VCC, +VREF, and a -VREF. I have VCC coming from my rectifier and my VREFs are coming from my regulated output. Initially I was using the switch to swap both VREFs depending on the channel. For example, my positive channel would have +VREF on my regulated positive output, and -VREF would be ground. On my negative channel it would switch so my +VREF was ground and -VREF was my negative output. This would still provide up to a 5v drop to my ADC for measurement. However, it doesn't seem to work that way.


My theory is that the 5V VCC doesn't change, so when I switch to my negative channel, it becomes much more than 5V relative to my VREF. So I want to put a negative to positive voltage converter on my negative output so that the ADC will receive a 5v drop relative to the initial ground. I just have no idea how to do that and all google seems to give me are positive to negative converters. Any help is appreciated! Thanks!



Answer



How about a simple inverting amplifier:


$$ Vout = - \frac{Rf}{Ri} Vin $$


schematic


simulate this circuit – Schematic created using CircuitLab


Ri at around 10 kΩ is a good starting value. Choose Rf to scale the output voltage so that it always stays within the output voltage limit of the op-amp and the input limit of the ADC.
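As a quick sanity check on that sizing rule, here is a small Python sketch. The 12 V input swing and 5 V ADC limit below are illustrative assumptions, not values from the question:

```python
# Sketch: sizing Rf for the inverting amplifier Vout = -(Rf/Ri) * Vin.

def rf_for_gain(ri_ohms, vin_max, vout_max):
    """Return Rf so that |Vout| stays at or below vout_max for |Vin| <= vin_max."""
    gain = vout_max / vin_max          # required |Rf/Ri|
    return ri_ohms * gain

# Example: scale a +/-12 V input down to a 5 V ADC range with Ri = 10 kOhm.
rf = rf_for_gain(10e3, 12.0, 5.0)
print(round(rf, 1))  # 4166.7 -> pick the nearest standard value below, e.g. 3.9 kOhm
```

In practice you would then round Rf down to the nearest standard E24 value so the output can never exceed the ADC's limit.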


Friday 30 October 2015

pcb design - How to split Analog and Digital GND planes for a TQFN device


This question is in reference to Mixed signal PCB layout for PSoC.


GNDs connected to thermal pad


If I am using a microcontroller which has separate analog and digital supply inputs and GNDs, what should I do with the grounds?


Should I connect all GND pins (both analog and digital) to the thermal pad? Or should I only connect the digital GNDs to the thermal pad and connect the analog GND to its own GND plane?




Answer



Sorry, my bad. I posted this image from the CY8C32 datasheet into my answer to the other question:


layout


but didn't copy the caption. "Figure 2-8. Example PCB Layout for 100-pin TQFP Part for Optimal Analog Performance". This is for the TQFP100 part, which doesn't have the thermal pad, and doesn't apply to the QFN48 you're using.


For parts with a thermal pad the split makes no sense, and you should connect the thermal pad to digital ground.



The center pad on the QFN package should be connected to digital ground (VSSD) for best mechanical, thermal, and electrical performance. If not connected to ground, it should be electrically floated and not connected to any other signal. (page 6)



Note that when you use a thermal pad on your PCB you shouldn't apply solder paste all over it; use a windowed stencil to avoid the IC being pushed up by the solder paste:


[illustration: windowed solder-paste stencil pattern]




"The solder paste pattern area should cover 35 % of the solder land area. When printing solder paste on the exposed die pad solder land, the solder paste dot area should cover no more than 20 % of this solder land area. Furthermore, the paste should be printed away from the solder land edges. This is illustrated in Figure 9; the solder paste pattern area lies within the boundary indicated by the red line and it is divided by the entire solder land area." (from here)




Further reading
CY8C32 datasheet
HVQFN application information, NXP application note


Is single ended amplifier a subset of differential amplifier?


The reason I'm asking is that when I read about CMRR in amplifiers, I only see it discussed for differential amplifiers, as if it were only a diff-amp issue. But isn't a single-ended amplifier also a differential amplifier where one input is zero with respect to the other? Can we talk about CMRR issues for single-ended inputs in data acquisition, or does it only apply to differential inputs?



Answer




does cmrr value make sense for a differential amplifier if its inputs are separate and totally different voltages?



Yes, it makes sense. Consider two independent voltage sources as the inputs to a differential amplifier:



schematic


simulate this circuit – Schematic created using CircuitLab


This is equivalent to:


schematic


simulate this circuit

where we've defined the differential and common-mode voltages to be:


$$V_d = V_{in1} - V_{in2}$$


$$V_{cm} = \dfrac{V_{in1} + V_{in2}}{2}$$
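A small numeric sketch of that equivalence (the example voltages are arbitrary, not taken from the schematic):

```python
# Decompose two independent input voltages into differential and
# common-mode components, then reconstruct the originals.

def decompose(vin1, vin2):
    vd = vin1 - vin2              # differential voltage
    vcm = (vin1 + vin2) / 2       # common-mode voltage
    return vd, vcm

vin1, vin2 = 3.0, 1.0
vd, vcm = decompose(vin1, vin2)

# The equivalent circuit drives each input with Vcm plus/minus Vd/2:
assert vin1 == vcm + vd / 2
assert vin2 == vcm - vd / 2
print(vd, vcm)  # 2.0 2.0
```

So any pair of "separate and totally different" voltages is just one common-mode level plus a differential signal, which is why CMRR still applies.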


led - How many individual lights can an arduino control


I want to use a regular Arduino board to control LEDs. What is the largest number of LEDs I can control individually from the board?


Is there a tutorial on how to hook it up?




Answer



I believe using Charlie-plexing N lines controls N*(N-1) LEDs. There is a good article on Wikipedia.
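A quick way to convince yourself of the N·(N-1) count: each LED occupies one ordered pair of distinct pins (which pin is the anode matters, since the LED only conducts one way). A small Python sketch:

```python
# Charlieplexing: N tri-state pins can drive N*(N-1) LEDs,
# one per ordered pair of pins (anode pin, cathode pin).

from itertools import permutations

def charlieplex_leds(n_pins):
    # Each ordered pair of distinct pins hosts one LED.
    return len(list(permutations(range(n_pins), 2)))

for n in (2, 3, 4, 12):
    print(n, charlieplex_leds(n))  # matches n*(n-1): 2, 6, 12, 132
```

With 12 pins that allows up to 132 LEDs, which is consistent with the 126-LED shield mentioned below.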


A friend of mine, Jimmie P. Rodgers, fit 126 LEDs on an Arduino Shield. He uses charlie-plexing to control the LEDs. Some information on his board is at -- jimmieprodgers.com/2009/12/my-development-process/ (archive.org copy)




At the last Boston Arduino User Group meeting Jimmie P. Rodgers drew a Charlie-plexing diagram as a matrix with labeled nets. Schematics drawn in this manner seemed to do a good job of communicating the concept. I created a couple of similar schematics -- See http://wiblocks.luciani.org/FAQ/faq-charlie-plex.html


power - What is the practical difference between watts and VA (volt-amps)?


I have seen one inverter rated at 300W, and another at 300VA. What is the practical difference between them, if any?



Answer



For resistive loads they're equivalent. For reactive (inductive/capacitive) loads however, voltage and current are not in phase, and you have to take this phase difference (phi) into account.




Effective power = Voltage \$\cdot\$ Current \$\cdot\$ cos(\$\phi\$), in Watt



and



Apparent power = Voltage \$\cdot\$ Current, in VA



Apparent power is higher for non-resistive loads, but that's because it includes a reactive ("blind") power component: power the utility has to provide but also gets back during part of the cycle (in the graph, the part where the power is negative, the purple line under the X-axis).


[graph: voltage, current, and instantaneous power over one AC cycle]


Because they have to provide it, but can't invoice it (after all they get it back) the utilities are not too happy with reactive loads, and industries have to pay fines if their cos(\$\phi\$) becomes too low.


edit (re jpc's comment):

The 300W PSU will consume 300 W of effective power. For the other PSU the apparent power is given; its effective power will be lower, e.g. if cos(\$\phi\$) = 0.9, the effective power will be 270W.
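That 270 W figure is a one-line calculation; a minimal sketch, passing the power factor cos(φ) directly:

```python
# Effective (real) power from apparent power and power factor:
# P [W] = S [VA] * cos(phi)

def effective_power(apparent_va, power_factor):
    return apparent_va * power_factor

print(effective_power(300, 1.0))          # 300.0 -> resistive load: W and VA agree
print(round(effective_power(300, 0.9)))   # 270 -> reactive load at cos(phi) = 0.9
```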


terminology - Electric vs Electrical vs Electronic


I want to know if the words electric, electrical, and electronic can be used interchangeably. They all seem to mean about the same thing... Any suggestions?



Answer



"Electric" and "electrical" mean exactly the same thing and apply to anything that uses electricity, but they are mostly used for systems involving "passive" components — a battery or generator, motors, resistive loads, inductors and transformers, capacitors, and maybe even rectifiers and relays.


"Electronic" is a term that is generally reserved for systems that incorporate "active" (electron) devices such as vacuum tubes or transistors — any device that can be said to have "gain", or the ability to proportionally control a large voltage or current using a small voltage or current.


Thursday 29 October 2015

digital logic - Shifting -5 V to +5 V signal to 0 to +5 V for ADC


I want to convert the output voltage of an ECG (-5 to +5 V) to 0 to +5 V for an ADC. Should I use an op-amp? What is the problem in this picture? I simulated it in Proteus, but when the input voltage is higher than +4 V or lower than -4 V, Vout gives an unexpected value.


Below is another simulation that has the same problem.



My idea for this level shift is (Vin + 5)/2, which gives the range 0–5 V.
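The intended mapping is "halve the signal and shift it up by 2.5 V". A quick numeric check of out = (Vin + 5)/2 (note that adding 10 before halving would land at 2.5–7.5 V instead of 0–5 V):

```python
# Map a -5..+5 V signal into 0..5 V: out = (vin + 5) / 2.

def level_shift(vin):
    return (vin + 5.0) / 2.0

for vin in (-5.0, 0.0, 5.0):
    print(vin, level_shift(vin))  # -5 -> 0.0, 0 -> 2.5, 5 -> 5.0
```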



Answer



The 741 can't accept an input that's too close to the negative rail. The data sheet says the input range is +/- 13V when the power supply is +/- 15V. Not sure about when the negative power supply is 0V but I doubt it will work the way you expect with a 0V input. Similar story on the output side. That would explain the strange numbers you're getting below -4V input. Don't know what's going on with the +4V input since there would seem to be plenty of headroom with your +12V supply. Does the simulation work if you power the 741 with +/- 15V? It looks basically correct if you assume that the op amp is ideal.


memory - Storing an LED's previous state even when power is removed


Storing an LED's previous state even if power is removed


I want to build a simple circuit that consists of 2 push buttons and an LED. I want the LED to turn on when one pushbutton is pressed and off when the other is pressed. I am pretty sure this can be done with a flip-flop. But that is not the only part: I also want the circuit to keep its previous state even if power is removed. So if the LED is on and power is removed, I want it to be on when power is restored; likewise, if the LED is off and power is removed, I want it to stay off when power is restored. I think NAND gates or something similar are used to store flash data, but I am not sure. I want this circuit to consist only of transistors and other common parts like capacitors, resistors, diodes, crystals, inductors, etc. I don't want to use any uncommon integrated circuits (I only have 555 timers, some dual flip-flops, some buffers, and a few other really odd ICs).



This may be impossible (especially with only transistors), but any information is helpful, and I'm only 14, so I'm still a beginner at this stuff.


Edit: I want to figure out a semiconductor way to do this. I know I don't have the parts but what would I need?



Answer



A bi-stable switch as Dave Tweed mentioned will certainly work. Another way is to use a tiny microcontroller that has EEPROM built in. There are some PIC12 devices available with EEPROM. The micro reads the two switches, drives the LED, and stores the last state in EEPROM, which it then recovers on power-up.


led driver - Power Supply for LED Array




The circuit I have attached is basically LED arrays with some transistors and an N-channel MOSFET. It requires 17 V at 400 mA to work. I will only light either the white LEDs or the RGB LEDs; they won't be on at the same time, due to the limited current. The N-channel MOSFET (XORB48) has:



  • IDS: 5.8A

  • VGS: ±12V

  • VDS: 30V

  • CISS: 623

  • VGS(th) = 1.05V


It has low gate charge and operates with gate voltages as low as 2.5 V.



Each white led is of 0.2W (2835) and RGB is of 1W (5050) each.


I will control and dim the LEDs using Arduino PWM, for that individual GPIOs are connected, mentioned in circuit.


I want to know what kind of supply is needed for this. Constant current? Constant voltage? Does it require a high power factor? How about ST's VIPer22A or Power Integrations' LNK ICs?


Schematic :-


[schematic image]




rf - How to minimize characteristic impedance mismatches caused by vias?


This question springs from an answer here.


When using stripline configurations for rf signalling, the conductors are routed on an inner layer of the pcb, sandwiched between two ground planes.


enter image description here



I understand that vias cause mismatches in the characteristic impedance, which I would like minimize. My frequencies are 2.4 GHz and 3.5 GHz.


Which aspects should I consider when carrying rf signals through vias? For example:



  1. Drill size

  2. Annular ring thickness

  3. Distance from rf source

  4. etc?



Answer



Well... depending on what you're doing, you could use a 3D solver to design a set of vias and return vias to closely match the impedance of your line. Or you could keep your vias close to other discontinuities such as your source (your chip), your destination, or a connector. Usually you want to keep those within 1/4 wavelength of your highest frequency of interest.



Lacking tools but strong in motivation, you could try to calculate it by hand. That could be useful if, say, you're jumping from layer 1 to layer 3 and layer 2 is your reference plane. If your two routing layers don't share a reference plane, you'll have to consider how the current will return: it could be a nearby via, a decoupling cap, or radiated coupling, depending on the frequency.


Really you want to control that return path so try to make it easy on yourself if you can.


timer - Use USB as an on-off switch


I'm looking for a way to build a zero(ish) cost intervalometer for my Canon SLR using only stuff that I have or can get very cheaply (my budget is about $4). I've already managed to "build" a wired remote for the camera, which is basically two wires connected to the ground and shutter release pins of the Canon remote connector. Touching the two wires (manually or via a switch) causes the camera to click.


I also have an old Android phone lying about that I'd like to repurpose for this project, since the phone is fully programmable. A requirement for this project is that neither the camera nor the phone can be modified in hardware (i.e., no soldering to or dismantling either).


Since the only wired interface in and out of the phone is USB (mini-USB on the phone, if that matters), I was wondering if a modified USB cable could be built that can be controlled via the phone's software, such that the circuit with the camera is completed or broken. Mutilating or otherwise modifying the USB cable itself is OK.


I'm willing to modify the software on the phone, so if this is possible but the software doesn't support it, I'll figure it out.


Can this even be done? If not, is there an alternative way of doing this without buying a microcontroller, board and other associated stuff? I'm not a hardware guy, so please be gentle ;).



Answer



There's pretty much no chance you'll get all this for $4, it is not a realistic budget for this. But that's not to say it's not possible. You do need a microcontroller with USB functionality (or have it bit-banged, but USB ones are cheap enough) to talk to the Android phone. Then the Android phone could use its USB API in the SDK to talk to it and issue a command. When the USB microcontroller recognizes the 'take picture' command or whatever, it could flip a transistor on/off to toggle those wires you have and tell the camera to take a picture.



But you mentioned an 'old' Android phone; that could be a problem, because only newer ones running Android OS 3.0 and up expose the USB API needed to talk to the device. If your target phones don't run that, this isn't going to work.


In that case, to accommodate most Android phones, even older ones, you could do this over Bluetooth. But that's going to raise your costs even more, because you'll need a Bluetooth module and a power source for it, probably a battery, on top of the micro. But Bluetooth modules can be very cheap; here's one for $5.50 USD, and they're even less in larger quantities: http://imall.iteadstudio.com/prototyping/basic-module/im120723010.html


Otherwise, an even simpler approach with no micro could be a circuit with a photosensor that triggers the camera wires to cross when it detects some amount of light; you could vary the threshold with a pot. Then tape or otherwise attach the photosensor to your phone and program the phone to light up the LCD, or the flash on the back (if it has one), when it wants to trigger the photo. That might be doable for under $4, but it's not very elegant.


Good luck!


Pull down resistor on logic gate input


I would like to use a pull-down resistor on the input of a NOT logic gate. I am using a 5 V microcontroller, and I want to ensure that the input of the gate will never be floating (e.g. in case of microcontroller damage).


General information:
The logic gate I am using is a HD74LS04 inverter whose datasheet can be found here. I am testing the output by placing my positive multi-meter lead on R4 and the negative lead on GND. With the digital pin floating, the output of 1Y is 4.33V. This is the expected behavior.


The logic gate requires a minimum of 2.0V for Vih. If I understand this circuit correctly, I expect the voltage at pin 1A to be 3.76V since R2 and R3 will form a voltage divider. This value seems plenty to signify a 'high' in the logic gate.



R2 exists to protect the digital pin. Ideally, I would have liked R2 to be 1KΩ. Increasing the value of R2 means I have to increase the value of R3 to maintain the same voltage divider. During testing I found that if R3 was any value over 3.3KΩ, the pin 1A would not get pulled down all the way if I left the digital pin floating.


Tests:
If I disconnect everything except GND and Vcc and place the positive lead of my multi-meter on pin 1A (now floating) and the negative lead to GND, I read 1.49V. Placing a pull-down resistor (R3) on pin 1A to GND with a value of 100Ω changes the reading to 0.02V. Increasing the resistor's value to 330Ω increases the reading to 0.07V. At the value I need, 3.3KΩ, the reading increases to 0.60V. Finally, changing the resistor's value to 1MΩ increases the reading to 1.47V and actually changes the output of pin 1Y to low.


The test that uses the 3.3KΩ resistor is concerning. The reading I took (0.60V) seems to be approaching the Vil maximum of the logic gate which is 0.80V.


Questions:
This must be a trivial question, but why does an R3 value of 3.3KΩ not pull pin 1A all the way to GND? Is there a formula for determining the correct pull-down resistor value? Other answers I found on here simply state 10KΩ as a 'good' value for a pull-up resistor. I'm not sure where that value comes from or if it is applicable to a pull-down resistor.


schematic


simulate this circuit – Schematic created using CircuitLab


Edit:
Although Ignacio's answer technically answers the question, I thought I would share some information I found for others who may come across this question.



User 'Papabravo' on the All About Circuits forum states that the value of a pull-down resistor can be calculated by taking Vil (0.8V) and dividing it by the "standard TTL load" of 1.6mA. As Ignacio pointed out, my specific logic gate can source up to 0.4mA, so the calculation is 0.8 V / 0.0004 A, which equals 2KΩ. For safety, I performed the calculation at 0.5V to give myself some headroom.
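The rule of thumb above is just Ohm's law applied to the gate's low-level input current; a minimal sketch using the figures from this question:

```python
# Maximum pull-down resistance that still holds a TTL input below Vil
# while sinking the input's low-level current IIL: R_max = Vil / IIL.

def max_pulldown_ohms(vil_volts, iil_amps):
    return vil_volts / iil_amps

# HD74LS04 figures from the question: Vil(max) = 0.8 V, IIL = 0.4 mA
print(round(max_pulldown_ohms(0.8, 0.4e-3)))  # 2000
# With 0.5 V of headroom instead of the full 0.8 V:
print(round(max_pulldown_ohms(0.5, 0.4e-3)))  # 1250
```

This also explains the measurements: a 3.3 kΩ pull-down exceeds the 2 kΩ ceiling, so 0.4 mA through it lifts the pin well above the safe margin.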


Other Resources:
Sparkfun's logic gate tutorial



Answer



Due to their input structure, TTL devices have a relatively low input impedance in the low state. As shown in the datasheet (IIL), the input pin will source up to 0.4mA when pulled near ground. You must make sure that your voltage divider takes that current into consideration when choosing resistor values.


Device/circuit for high current, short pulse discharge?


I'm tasked with replacing a relay with a semiconductor switch in a high voltage, high current, short pulse duration application (all relative, I concede). Basically I'm charging a capacitor up to ~1kV and dumping it through a low-impedance load. The di/dt requirements are pretty gnarly: ~400A with a ~300ns pulse width. Due to other constraints it has to be a high-side switch, too.


schematic



simulate this circuit – Schematic created using CircuitLab


I originally tried using a high-power SCR, but all viable devices I've found have a di/dt limit of 100–150 A/µs, whereas I need about 2500 A/µs.


schematic


simulate this circuit


My next plan of attack is using an IGBT with a gate-drive transformer, though I'll have to make sure to keep the leakage inductance low.


schematic


simulate this circuit


My question, though, is if anyone has another approach to consider? The isolation is a bit of a PITA but that's the route I'll take if it's the most practical.


EDIT: we were using these voltage-controlled solidtrons which worked perfectly (in a 5-lead TO-247-ish package) but now this is no longer an option.




How correct is my understanding of basic electricity?


I have a very basic way of understanding things. Sometimes I over-complicate and sometimes I make things way too simple, so I am hoping someone can help me judge my understanding of basic electricity. I am in school to be an aircraft mechanic and am working part time at an aircraft repair station, where I do some avionics installations.


In school I have learned that



  • Ohms = resistance (opposition to current flow is how I understand this)

  • Watts = electrical power (what is powering the system)

  • Voltage = electrical pressure (in my head, this is what controls the rate at which current flows through the system while attempting to return to ground)

  • Amperes = current (this is the flow of electricity through the system, but I am a little fuzzy on this compared to voltage, because to me they seem like the same thing)




voltage - Bench DC power supply output


I recently purchased an ATTEN APS3005D regulated DC power supply from eBay and I'm having some odd and potentially alarming results that I'm trying to get my head around.


Note: I'm testing this with a reliable Fluke multimeter.


[photo: ATTEN APS3005D]


The device is plugged into the 240 V Australian mains supply, which it is rated for.


Test 1:
The device is switched off.
The Fluke multimeter is set to AC; leads are placed between the ground (green) terminal and each of the positive and negative terminals on the APS3005D.
The result reads ~110 VAC between ground and positive.
The result reads ~110 VAC between ground and negative.



Test 2:
Turning the device on, the device appears to work correctly. The control circuitry is working, 7-seg display is working. The power supply is switching when the voltage is turned up.
Placing the multimeter onto DC mode and measuring between negative and positive, I get the correct readings that are also described by the 7-seg displays.


Test 3:
Device is on as per Test 2
Multimeter in DC mode between earth and either the positive or negative terminal gives wrong, floating readings (probably because there's actually 110 VAC there, not DC).


Test 4:
Device is off
Ohmmeter measuring the output resistance between ground and either positive or negative terminals gives an infinite resistance reading.





It does not seem safe at all to have 110 V between the earth terminal and either the positive or negative terminal; this could seriously hurt someone.
This doesn't seem right, does it?


I was under the impression that the ground terminal was a reference ground between positive and negative yielding this sort of behaviour:
-15V ----- 0V ----- +15V
<----------30V--------->



Answer



The ground terminal is not halfway between the negative and positive pins; it is ground, i.e. connected through to your building's earth wire. It should be isolated from both outputs (black and red).


However, an isolated power supply is still capacitively connected (a few tens of pF) to the incoming AC live and neutral. The net effect is that these caps form a potential divider, reducing (when unloaded) the 240 V to exactly half, i.e. 120 V. This won't hurt you because the capacitance is so small; however, your meter's input impedance is maybe 10 MΩ, so it will register the voltage.


Try putting a 100 kΩ resistor between the red terminal and ground and re-measuring; I bet it falls to just a few volts.
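A rough model of what the meter sees. The coupling capacitance below is an assumed illustrative value (the supply's real line-to-output capacitance is unknown), so treat the numbers as qualitative:

```python
import math

# Why the floating output reads a large AC voltage on a high-impedance
# meter but collapses under a 100 k load.

MAINS_V = 240.0
F_HZ = 50.0
C_EACH = 100e-12   # assumed live->output and neutral->output coupling caps

def reading(load_ohms):
    # Unloaded, the two equal caps divide mains to half; their parallel
    # combination is the source impedance seen by the load.
    z_src = 1.0 / (2 * math.pi * F_HZ * (2 * C_EACH))
    v_open = MAINS_V / 2
    return v_open * load_ohms / math.hypot(z_src, load_ohms)

print(round(reading(10e6), 1))   # 10 Mohm meter: tens of volts registered
print(round(reading(100e3), 2))  # 100 k load: well under a volt
```

Whatever the actual capacitance, the pattern is the same: a high-impedance meter sees a substantial fraction of the open-circuit voltage, while a 100 kΩ load swamps the tiny coupling capacitance and the reading collapses.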


Wednesday 28 October 2015

pic - Will an SPI flash memory chip have the same issues with non-atomic write operations as a dsPIC's internal EEPROM?


A while back I had some intermittent trouble with the internal EEPROM of a dsPIC. Every so often, some value in the EEPROM would be found zeroed out on power-on. I tracked the problem down to when the chip lost power after the erase step of the write cycle, but before the write had completed. It was all about the timing of the power-down relative to the firmware execution, which was (in normal operation) random. I solved this by adding a buffer section to my EEPROM, to ensure that an incomplete write-cycle could be completed on restoration of power. I had to turn EEPROM writes into an atomic operation.


Now I'm using a different dsPIC without internal EEPROM, and I'm trying to use an external flash memory chip to store persistent data. I'm wondering if I should have similar concerns. Should I be worried that my external flash chip will power down mid-write and lose data, and write a fix for this in my firmware like I did for internal EEPROM? Or does the chip itself guarantee atomic write operations?


For further detail, my buffering technique defines an area of persistent memory that consists of three fields: address to write to, data to be written, and a READY flag. A "write" consists of four steps: write to buffer, set READY flag, write from buffer, clear READY flag. On power-up, you check the READY flag. If it's set, execute whatever's in the buffer. This worked well in EEPROM, but I'm not sure if it will work well in flash.



Answer



I have never heard of a flash memory chip (or processor with internal flash) that has sufficient energy storage internally to complete a write (or erase) cycle if external power should be removed. In other words, if you don't have control over when your system powers down, you always need to create a protocol that can detect and deal with any individual flash update operation that might have been interrupted.



One way around this is to provide the necessary energy storage (e.g., an electrolytic capacitor) on your board, such that you can detect an external power failure, yet still complete any write/erase operation that may have already started.


EDIT: Your write buffer concept could be used with the external flash, but it needs to be modified to take into account the larger erase granularity. According to the datasheet, the minimum erase size is one "sector" (4K bytes).


You'll need to reserve three sectors for your write buffer. One of these will hold your READY flag (call this the WB_R sector). The second will hold the sector address of the sector being updated (call this the WB_A sector). The third will hold the updated data for that sector (call this the WB_D sector).


To update any particular byte (or a group of bytes in a single sector), perform the following steps. We assume that WB_R is already erased.



  1. Erase WB_A.

  2. Locate the flash sector that contains the byte you want to change (call this the DEST sector).

  3. Write the sector address of DEST to WB_A.

  4. Erase WB_D.

  5. Copy the contents of DEST to WB_D, but when you get to the byte(s) that you're changing, write the new value(s) to WB_D instead of the old value(s).


  6. Set the READY flag in WB_R. Note that this means you change it to its non-erased state. Since the erased state is 0xFF, this means that you write 0x00.

  7. Erase DEST (getting the sector address from WB_A).

  8. Copy the contents of WB_D to DEST.

  9. Erase WB_R.


On power up, check the READY flag, and if it's set (anything other than 0xFF — it may have been only partially written or partially erased), jump directly to step 7.


Note that with this algorithm, each of the write buffer sectors gets written and erased at least once for each write operation you do. This could become a problem if you do a lot (more than 100,000) of writes over the lifetime of the product. If that's the case, you'll need a more sophisticated wear-leveling algorithm.
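For reference, the nine steps and the power-up recovery can be sketched in a few lines. This is a toy Python model of the flash: the sector names (WB_R, WB_A, WB_D) and the 0x00 READY marker follow the text above, while the flash model itself is made up purely for illustration.

```python
SECTOR = 4096   # minimum erase size per the datasheet discussion
ERASED = 0xFF

class Flash:
    """Toy NOR-flash model: erasing a sector sets it all to 0xFF."""
    def __init__(self, n_sectors):
        self.sectors = [bytearray([ERASED] * SECTOR) for _ in range(n_sectors)]
    def erase(self, n):
        self.sectors[n][:] = bytes([ERASED] * SECTOR)

# Reserve sectors 0..2 for the write buffer, as in the text.
WB_R, WB_A, WB_D = 0, 1, 2

def update_byte(flash, dest, offset, value):
    flash.erase(WB_A)                             # 1. erase WB_A
    flash.sectors[WB_A][0] = dest                 # 2-3. record DEST's sector number
    flash.erase(WB_D)                             # 4. erase WB_D
    flash.sectors[WB_D][:] = flash.sectors[dest]  # 5. copy DEST to WB_D ...
    flash.sectors[WB_D][offset] = value           #    ... patching in the new byte
    flash.sectors[WB_R][0] = 0x00                 # 6. set READY (non-erased state)
    finish(flash)                                 # 7-9.

def finish(flash):
    """Steps 7-9; also run at power-up when READY is anything but 0xFF."""
    dest = flash.sectors[WB_A][0]
    flash.erase(dest)                             # 7. erase DEST
    flash.sectors[dest][:] = flash.sectors[WB_D]  # 8. copy WB_D to DEST
    flash.erase(WB_R)                             # 9. erase READY

def power_up(flash):
    if flash.sectors[WB_R][0] != ERASED:          # interrupted update: resume at 7
        finish(flash)
```

If power fails anywhere before step 6, DEST is still intact and the half-built buffer is simply ignored; anywhere after step 6, `power_up` replays steps 7-9 from the buffer.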


design - How can I go from prototype to commercial product?


I have made many designs that I would consider "professional grade", but in reality they are still just prototypes. I have used professional board fabs and professional population, but what I haven't had the ability to go through is actually turning that product into a commercial product.


What is needed in order to take the product across this large last step? I can think of obvious things like FCC/CE testing, case design, marketing, distributors, etc., but I am sure there are other things that aren't thought of until going through the process a time or two. As an engineer, what are the things that would come up?



Answer



In my industrial experience, a design cycle generally follows this sort of flow (condensed version):



  • Market research/contact with a customer (identification of what the basics of the product should be)

  • Establishment of a specification (or "spec" as it's often abbreviated)


  • A formal quotation and business agreement between the parties (or between engineering and marketing for a 'standard' product)

  • Initial design / calculation / simulation / debugging

  • Design validation testing, or DVT (you making sure the product meets the spec)

  • Bugfix implementation / regression test

  • Independent DVT (someone else making sure the product meets the spec)

  • Safety and regulatory approvals

  • Design for manufacturability review (DFM)

  • Production release / marketing release


If you have a project that you've developed, and you feel you're ready to market it, you may want to consider some or all of the following points:




  • Write a spec if one doesn't exist. Characterize the typical performance of the product and put this in the spec, as well as any absolute maximums or minimums in terms of 'externals' (volts / amps / degrees) that should be avoided.

  • Make sure that your product meets your spec. Do a formal DVT and make a report of the results. Also, test a quantity of boards against the spec and use statistical analysis (Cp/Cpk) to prove that over tolerances and normal variation, your product can do what you say it can do.

  • Come up with a reduced number of tests to perform per-unit as a manufacturing reliability test. Bad builds, bad parts, new operators, bad equipment can all lead to problems.

  • Make sure that your PCB has in-circuit test pads on as many nets as possible to facilitate in-circuit test (ICT) or manufacturing defect analysis (MDA) - finding problems earlier rather than later is always beneficial.

  • If this product is intended to be high reliability, you may want to perform a life test. Run a quantity of units perpetually until something fails. There are industry standards on life testing (acceleration factors, lot size, etc.) which fall beyond the scope of casual advice.

  • Consider HALT (highly accelerated life testing) as well - testing the product under thermal and mechanical vibration stress can show weaknesses in the design as well as its construction.


Many of these sorts of tests are big bucks. (Tens of thousands of dollars per test). The cheapest things you can do are those that you can do yourself - the spec, the DVT test and statistical analysis, test pads on the PCB, for instance.
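To illustrate the Cp/Cpk analysis mentioned above, here is a minimal sketch; the spec limits and sample readings are invented for the example, not from any real product.

```python
import statistics

def cp_cpk(samples, lsl, usl):
    """Process capability indices: Cp ignores centering, Cpk penalises it."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)   # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical: an output-voltage spec of 4.9 V .. 5.1 V, measured on 8 boards.
readings = [4.98, 5.01, 4.99, 5.02, 5.00, 4.97, 5.01, 5.00]
cp, cpk = cp_cpk(readings, lsl=4.9, usl=5.1)
```

A common rule of thumb is Cpk ≥ 1.33 for a capable process; in practice you would measure far more than 8 boards.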


You may wish to consult with an experienced manufacturing engineer to figure out what you really need to do, given your budget and the expected revenue of the product.



Good luck!


Can an audio circuit be powered by a switched-mode power supply?


Most audio circuits are powered with large, heavy transformers and a small ripple after smoothing. SMPS are smaller and more efficient. EMI can be shielded by a metal enclosure and the output filtered for noise suppression.


Especially where the power is going to be further regulated. Why aren't switched-mode power supplies used in audio circuits, eg. power amplifiers, and what improvements can be done to make a SMPS suit an audio circuit?



Answer



Let me give you a little background on myself... I've been working professionally in the audio industry for more than 14 years. I've designed circuits for most of the major pro-audio companies, one audiophile company, and several consumer audio companies. The point is, I've been around and know a lot about how audio is done!


SMPS can and are used for audio circuits! I've used them from sensitive microphone preamps to huge power amplifiers. In fact, for the larger power amplifiers they are mandatory. Once an amplifier gets over a couple of hundred watts then the power supply needs to be super efficient. Imagine the heat produced by a 1000 watt amp if its power supply was only 50% efficient!


But even on a smaller scale, the efficiency of a SMPS often makes a lot of sense. If the analog circuitry is properly designed then the noise from the power supply gets rejected by the analog circuitry and doesn't impact the audio noise (very much).



For those super-noise-sensitive applications you can do a hybrid approach. Let's say that you have an ADC that requires +5V. You can use a SMPS to generate +6V, then a super-low-noise linear regulator to bring that down to +5V. You get most of the benefit of the SMPS, but the low noise of the linear regulator. It is not as efficient as a SMPS alone, but those are the trade-offs.
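As a rough sketch of the trade-off in that hybrid arrangement (all numbers below are assumptions for illustration, not from any datasheet):

```python
# Rough overall efficiency of a SMPS followed by a low-noise linear
# post-regulator. Voltages, load current, and SMPS efficiency are assumed.
v_smps_out = 6.0    # V, SMPS output
v_ldo_out  = 5.0    # V, after the low-noise linear regulator
i_load     = 0.10   # A, assumed ADC supply current
smps_eff   = 0.90   # assumed SMPS efficiency

p_load     = v_ldo_out * i_load                   # 0.5 W delivered to the load
p_ldo_loss = (v_smps_out - v_ldo_out) * i_load    # 0.1 W burned in the LDO
p_in       = (p_load + p_ldo_loss) / smps_eff     # power drawn from the input
overall_eff = p_load / p_in                       # about 75% overall
```

Keeping the SMPS output only a volt or so above the LDO output is what keeps the linear stage's loss, and therefore the efficiency penalty, small.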


But one thing to keep in mind... A SMPS for audio applications needs to be designed with audio in mind. Of course you'll need better filtering on the output. But you will also need to keep other details in mind. For example, at very low current the SMPS might go into something called "burst mode" or "discontinuous mode". Normally a SMPS will switch at a fixed frequency, but in one of these modes the switching will become somewhat erratic. That erratic behavior might push the output noise into the audio frequency band where it becomes more difficult to filter out. Even if the SMPS is normally switching at 1 MHz, when in one of these modes you could get 10 kHz noise. Controlling how this happens depends on the design of the chip that the power supply uses. In some cases, you can't control it. In that case you have no choice but to use a different chip or use a hybrid approach.


Some people advocate using only linear power supplies for audio. While linear supplies are less noisy, they have lots of other issues. Heat, efficiency, and weight being the biggest ones. In my opinion, most of the people who preach linear supplies only are either misinformed or lazy. Misinformed because they don't know how to handle switching supplies or lazy because they don't care to learn how to design robust circuits. I've designed enough audio gear with SMPS to prove that it can be done without too much pain.


diy - Maximize efficiency and brightness of diff color, cheap, 5m 5630 SMD LED strip lighting



I'm a science teacher and we are going to explore different design/effects of grow-light setups this year in class. I've gotten some great ideas for this project from reading from this stackexchange site, thanks! I also see problems I hadn't even considered, double-thanks :-)


For cost, brightness, and versatility purposes I'll be buying the ~$10/5m strips of 5630 SMD LEDs. But there are issues of efficiency (current-controlling resistors wasting power) and unrealized brightness that need to be addressed.


First, in the question exploring why there was such a huge loss of energy with LED's that are energy efficient, it was explained that the blue and white LED strips might be up to 80% efficient, but red LED strips be only 45% efficient. Now, it's not the LED's 'fault', as I understood that discussion, but the circuit design. The strips (regrettably) used a consistent circuit design, regardless of the forward voltage of the LED's being used. The blue and white LED's have a 3-3.6 forward voltage and with 3 in a series meaning they will need anywhere from 9 to 10.8 of those volts, but the red LED's have 2-2.2 Vf so will only need 6 to 6.6 of those Volts. And, if one is using a 12 V power supply, there's a lot of 'pressure' that needs to be reduced with resistance. And as I understand it, the resistors 'waste' that energy. SO... I'm wondering, if I can match my power supply more closely to the demands of what each 3-LED module needs and then adjust the resistance to what it needs to be, I could substantially increase the efficiency of the strips. (I know, it's crazy, but remember, I have many students who would be happy to solder resistors in parallel on these strips!!!--a learning opportunity.)


What do you think about this idea? Lots of these sites where I buy don't give the Vf specs of the individual 5630 LED's so how could I find that out?


Relating to that question, how ought I decide at what current to drive these LED's (so as to be able to calculate the total resistance each module will need)? The discussion about why an LED strip was using less power than expected got me wondering if I should be driving the strips at a higher voltage than they recommend, for example 14.5 Volts to get the hoped-for brightness as given in that discussion. I know heat sinks and LED life relate to this, I'd really like to hear people's thoughts and recommendations on this, too.


For those people interested, from discussions w/plant people I've had at the U of MN about this project, I'm going to suggest to my students to start out with a 3:2:1 ratio of red:warm white:blue LED's in their initial designs of plant-grow systems (though that is the ratio of lumens so depending on the comments on the preceding questions, this may not be the ratio of lights, themselves).


EDIT to include new questions from comments: Do I need the data sheet to determine the highest light output I can drive the LED at and still have it last a long life? Or can I drive it up until it reaches some rule-of-thumb temperature?


Also, it seems that the closer one can match the voltage of the power supply to the sum of the Vf of the LED's, the smaller the resistor and therefore the less power wasted. Is that true? What margin of error should one calculate for? (BTW yes, each of the modules has 3 LED's and 1 resistor in parallel.)



Answer



Firstly, to find the Vf for the LEDs your best bet is to find a similar component from a major supplier (RS, Farnell etc.). I've found a 5630 red LED from RS Components; whilst this isn't 100% accurate it will give you a relatively close starting point.



A lot of these strips call for a 12V supply because it's simple and the majority of people using these strips don't put as much thought in as you have. If you wanted to keep the same brightness of the strips as when they first arrive then it's a relatively simple operation.


schematic


simulate this circuit – Schematic created using CircuitLab


Your strips may have a different setup to this (1 resistor per LED, less LEDs in series etc.) but the principle remains the same. To keep the same 'brightness' then the same current will need to be flowing through each branch of LEDs.


Now I've just used 100mA based on the datasheet of the 5630 red LED I linked. To calculate the current flowing through your LED branch you need to find the total resistance per branch. Using datasheets to get an estimated voltage drop of the LEDs, you can calculate how much voltage is being dropped across the resistor(s) and use Ohm's law to solve for I.
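As a worked example of that Ohm's law step (the Vf and resistor values below are illustrative, not taken from a real strip):

```python
# Branch current for 3 series LEDs plus a resistor on a 12 V supply:
# subtract the LED drops, then apply Ohm's law to what's left.
v_supply = 12.0   # V
vf_red   = 2.1    # V per LED, assumed from a similar 5630 datasheet
n_leds   = 3
r_branch = 57.0   # ohms, assumed series resistor per branch

v_resistor = v_supply - n_leds * vf_red   # ~5.7 V left across the resistor
i_branch   = v_resistor / r_branch        # ~0.1 A (100 mA) per branch
p_wasted   = v_resistor * i_branch        # ~0.57 W lost as heat in the resistor
```

The `p_wasted` line shows the efficiency complaint from the question directly: nearly half the supply voltage, and thus nearly half the branch power, is burned in the resistor.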


The downside is, for maximum efficiency you will need a different power supply for your red LED strip and your blue/white LED strip.


This is the case if you want the LED strips to have the same brightness that they arrived with, if you want to match the brightness of the different LED strips this will be a little more difficult without the original datasheets.


Using the datasheet I linked we can see that the luminous intensity of the red LED is typically 4.2cd, if we want to match this to a green LED with a luminous intensity of 2.8cd we need to look in the datasheet of the red LED and find two graphs. Forward Current vs Forward Voltage and Luminous Intensity vs Forward Current.


enter image description here


Using our example value of green being 2.8cd, our 4.2cd red LED's luminous intensity needs to be at 66% of its current value (2.8 is 66% of 4.2). From the graphs in the datasheet, we can now find the forward current required to bring the red LED's luminous intensity down to 2.8cd, and also the corresponding forward voltage.



enter image description here


Now that we know the forward voltage of the LED required for 2.8cd, we can adjust our first schematic accordingly


schematic


simulate this circuit


I may have gone a little bit off track at some points and rambled a bit too much but hopefully I've given you some sort of help.


Edit in response to questions in comments:


All of the voltage that is dropped across the resistor is essentially wastage. The closer the supply voltage is to the combined Vf of the LEDs, the smaller the resistor required and therefore the less power wasted.


I completely missed this earlier, but thinking now it's become obvious. If you power the LED strip with 12V. Using a multimeter, measure the voltage across the resistor in the LED chain. You can then subtract this measured value from your 12V supply and figure out the combined Vf of the 3 LEDs. Then just divide this value by 3 to get an approximate average Vf per LED.
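That measurement procedure is just a subtraction and a division; with a hypothetical 5.7 V multimeter reading across the resistor:

```python
# Estimate the average per-LED forward voltage from a single multimeter
# measurement, as described above. The 5.7 V reading is a made-up example.
v_supply = 12.0
v_resistor_measured = 5.7   # V, hypothetical reading across the resistor
n_leds = 3

vf_total = v_supply - v_resistor_measured   # ~6.3 V across the 3 LEDs
vf_avg   = vf_total / n_leds                # ~2.1 V per LED
```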


In order to determine the 'highest light value' you will need to know what the maximum forward current is which unfortunately is in the datasheet. From other 5630's I've looked at they seem to all be around 150mA maximum forward current, unfortunately this might be a case of "Try 100mA, did it die? Try 150mA, did it die?". It's likely the supplier drives them well below their maximum value.


In terms of margin of error to calculate for: a batch of LEDs will have varying Vf values, which is why I suggested finding the combined Vf for 3 and taking an average. I doubt there will be significant differences between the LED branches. To play it safe, however, I'd always pick the standard resistor value above what you calculate; for example, for 44.43R you would use a 47R resistor.



Voltage regulator for linear constant-current (1.5A) LED driver?


I am considering building my own driver for a ~1.5A LED to power a microscope for my research.


For this application, efficiency isn't critical, but because exposure times to the camera can be short (~1ms), stability/lack of ripple is important. Thus buck/boost or other switching regulators, and PWM, are usually avoided here. (Perhaps suitable output filtering would sort this out, but <500ns [edit: <500 microsec] turn-on/off time is also highly desirable.)


What's the best current regulator for this set-up? (I'll be using a decent switching DC voltage source wall-wart to provide input power to whichever constant-current source I choose.)



One option would be to use an adjustable voltage regulator like a LM317 (but capable of handling more current, so maybe the LT1764 which also has a useful shutdown pin), set up in the standard constant-current mode where current is determined by a resistor between the Vout and Vadj pins. (The voltage between these pins is maintained at ~1.21V for the LT1764, so for 1A you'd want a 1.2-ohm resistor, and for 1.5A a 0.8-ohm resistor.)



(source: diyaudioprojects.com)
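As a quick sanity check of that OUT-to-ADJ constant-current arrangement (using the ~1.21 V reference quoted above; the power figure is just Ohm's law on the sense resistor):

```python
# Sense-resistor choice for an LM317/LT1764-style constant-current source:
# the regulator holds ~1.21 V between OUT and ADJ, so R = Vref / I.
v_ref = 1.21   # V, LT1764 reference per the question

def sense_resistor(i_led):
    return v_ref / i_led

r_1a  = sense_resistor(1.0)   # ~1.21 ohms for 1 A
r_1a5 = sense_resistor(1.5)   # ~0.81 ohms for 1.5 A
p_1a5 = v_ref * 1.5           # ~1.8 W dissipated in the resistor at 1.5 A
```

Note the resistor dissipation: at 1.5 A it needs to be a power resistor, not a small-signal part.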


The other option might be using a new "one-resistor" voltage regulator like the LT3083, where the output voltage is set by the resistance to ground from a single SET pin, out of which a fixed 50 microamps flows: so e.g. a 20kohm resistor gives a 1V output, and a 1-ohm resistor on that output will therefore cause 1 amp to flow.
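The SET-pin arithmetic above can be sketched as follows (values illustrative, assuming the fixed 50 µA SET current and a 1-ohm sense resistor as described in the question):

```python
# LT3083-style arrangement: a fixed 50 uA out of the SET pin programs the
# output voltage, and a sense resistor turns that voltage into LED current.
i_set   = 50e-6   # A, SET-pin current per the question
r_sense = 1.0     # ohms, sense resistor on the output

def led_current(r_set):
    v_out = i_set * r_set    # e.g. 20 kohm -> 1.0 V
    return v_out / r_sense   # 1.0 V across 1 ohm -> 1.0 A

i_full = led_current(20e3)   # ~1.0 A with the pot at 20 kohm
i_low  = led_current(2e3)    # ~0.1 A with the pot at 2 kohm
```

This is why an ordinary 20 kΩ pot gives a usable current-adjust range here, whereas the OUT-to-ADJ topology would need an awkward low-ohm pot.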


Here there are two "suggested" constant-current circuits from the datasheet: A constant-current source:



Or a lower-dropout LED driver



The nice thing about the LT3083 is that a 20-kohm potentiometer can be used to adjust current, unlike the LT1764, where you'd need a 20-ohm pot (rather harder to source) to adjust between ~.1A and 1.5A. But I'm not sure which of the two suggested driver circuits for the LT3083 would be more resistant to ripple from the input voltage source, if either. And I don't know if this is a bad idea for other reasons compared to the more standard regulators. (e.g. if the adj pin is left floating, it looks like the LT3083 will probably drive the voltage way up and fry the LED. So if the pot fails or a connection breaks, this could be bad news.)


So, anyone have suggestions for the best option?




Answer



You mention that you want fast switching. None of these devices will be able to do that. Their internal feedback circuits are deliberately slow so that they don't incorporate too much noise, and in fact some of the feedback is thermal. Put another way, the dominant pole is low; hundreds of kHz is typically the fastest response. This is true whether you are using a disable pin or connecting/opening the load to the source.


The best way to fix this is that you build an external circuit that current steers between two limbs. On one limb will be your LED chain and on the other a dummy load that draws the same amount of current (and hopefully also has similar load characteristics). An example of such a circuit is a differential pair. You should be able to get 10's of ns switching time then.


The challenge with the two limbs will be that the load characteristic differences may disturb the feedback of the LM317 etc., which then reacts slowly. If you can't match the two limbs then I'd suggest building a current mirror that decouples the current output of the LM317 etc., so that it only sees a constant load while the mirror transistor sees the variability.


You should be able to simulate all of this in LTspice or similar before building.


The other factors you mention above are not show-stoppers, so pick what you want.


Tuesday 27 October 2015

mains - Why does oscilloscope show large amplitude when I touch the probe?


When I touch the probe I see a 50Hz sine wave with an amplitude of about 100V from peak to peak. I understand that 50Hz is the mains hum picked up through my body, which acts as an antenna, but where is that huge amplitude coming from?



Answer



Your body is only very "loosely" earthed. In fact there could be tens or hundreds of Mohms between your body and earth.


This is one of the reasons why the body can accumulate charge (through movement) and then you feel the discharge when you touch something or someone that is earthed.


The cables in (and around) your house produce an electric field that is 230V (or whatever your AC power live voltage is) close to the cable and 0V at earth. If your body is "somewhere" between live and earth, the voltage that will appear on your body is also going to be somewhere between live and earth.


The oscilloscope usually measures a voltage relative to earth and it can have several Mohms input impedance. This means when you touch the probe tip the screen shows a large voltage is present. Because the signal on the scope is AC, the capacitance of your body to earth and "live" also play a significant part in your scope showing this signal.
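As a rough order-of-magnitude model of that capacitive coupling: your body sits in a divider between the live wiring and earth, loaded by the scope input. Only the 230 V mains figure comes from the answer; the coupling capacitance and the 10 MΩ scope input below are assumptions to show the scale.

```python
import math

# Capacitive divider estimate: body-to-wiring capacitance in series with
# the scope's input resistance. All component values are rough guesses.
f = 50.0              # Hz, mains frequency
v_mains = 230.0       # V RMS, live voltage
c_body_live = 100e-12 # F, assumed body-to-wiring coupling capacitance
r_scope = 10e6        # ohms, assumed scope input resistance

z_c = 1.0 / (2 * math.pi * f * c_body_live)            # ~32 Mohm at 50 Hz
v_scope = v_mains * r_scope / math.hypot(r_scope, z_c) # divider magnitude
# v_scope comes out at tens of volts RMS, i.e. on the order of the
# ~100 V peak-to-peak the question describes.
```

The exact number depends entirely on the guessed capacitance, but the point stands: even a tiny coupling capacitance against a multi-megohm input produces a large 50 Hz reading.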



Why is my solar panel short circuit rating much higher than in reality?


I have two 6 volt 100mA solar panels. I have connected them in parallel and was expecting to see a maximum of 200mA when I short-circuit them. As expected, I got 6 volts, but only 65mA. Does this always occur? The sun was really bright at that time.




Answer



The 100mA rating is what the panel can deliver at maximum possible brightness of the sun, not what it will force into any short circuit. Solar panels are not batteries, which can put substantial current into a short circuit. The 65mA reading you were getting is all the current the solar cells could put into a short circuit with the light you had.

Solar cells are actually photo-diodes and the current output is limited by how much light is hitting the diode and its efficiency in converting photon energy into excited electrons. They do NOT behave the same as a charged battery when shorted.

That is why solar cells used to charge batteries often have a charger IC designed to properly charge the batteries based on current sunlight conditions and the charging needs of the battery. Look up 'solar cell charger IC' on the web. They are cheap to buy and will prevent overcharging your batteries. They can also boost the charge current when the sunlight is low. If very low sunlight they stop charging.

Huge panels of solar cells that are over 2 square meters are often fused and can be a shock hazard. The small ones you have pose no threat except possibly overcharging a small battery if a regulator is not used.


Replacing LED with Flashing LED in a circuit


I am currently building a FRED Photopopper, but can I use a normal LED instead of the flashing LED? I don't get the difference between the two in this circuit. If the FLED cannot be replaced, is there an equivalent circuit for a FLED?



enter image description here




USB Softcore for FPGA : Extra resistor on FPGA transmitter pin


Has anyone used this USB softcore before?


Why do we need the extra resistance on the Tx pin ONLY?



Whichever pins you transmit on need to have resistors after them. The exact values will depend on the internal resistance of the pins; usually something around 27 ohms will be ok.



According to section 7.1.1 (Tx) and 7.1.2 (Rx) of the USB 2.0 specification, it seems like we need some resistance for impedance matching for BOTH Tx and Rx.



enter image description here


1) Is it correct that I only need TWO 27 ohm resistors in series with the FPGA usbcorev softcore D+ and D- pins (to compensate for the low output impedance, say 73 ohm, of the FPGA pins)?


2) And another 1.5 kilo-ohm pull-up resistor for D+ line ?


3) Since D+ and D- are bi-directional, do we need 15 kilo-ohm pull-down resistors for both D+ and D- line at FPGA pins ?


4) From the on-chip termination section of the Intel Cyclone IV IO interface spec (has anyone used on-chip termination?), I am not sure if I need to use external series termination resistors for both the D+ and D- lines. When I look deeper into the Cyclone IV IO spec, it only supports 25 ohm or 50 ohm on-chip termination. This means the on-chip termination is not sufficient for the 90 ohm transmission line in the USB spec. Could anyone advise?




altium - PCB track clearance


In the PCB I am designing there is 0.88mm between one track and a through-hole pad. Between the track and the pad there is a DC voltage difference of around 425V.


Clearance calculators like this one specify a minimum clearance of 0.8mm for 500V if "external conductors coated" are used. If that is the case, I am safe, but I am not sure about what "external conductors coated" refers to.


The track and the pad are in the top layer, so I assume they are external, but are they coated? Does that coating refer to a standard solder mask or is it a special one?


EDIT:


If the clearance is not enough I have two options:

  1. Apply a coating after soldering. (I am not sure how yet.)

  2. Cut the problematic track and use a wire instead.


What do you think is best? This is the problematic track:



enter image description here



Answer



The standard solder mask is not a conformal coating.


IPC-SM-840C:



Solder mask materials covered in the standard are not intended for use as a substitute for conformal coatings that are applied after assembly to cover components, component leads/terminations and solder connections.



Related to IPC-2221A (4.5.2.1) there are some types and requirements for the thickness:



  • Type AR - Acrylic Resin : 0.03 - 0.13 mm


  • Type ER - Epoxy Resin : 0.03 - 0.13 mm

  • Type UR - Urethane Resin : 0.03 - 0.13 mm

  • Type SR - Silicone Resin : 0.05 - 0.21 mm

  • Type XY - Paraxylylene Resin : 0.01 - 0.05 mm


IPC-2221A (6.3.4):



Typical applications are computers, office equipment, and communication equipment, bare boards operating in controlled environments in which the bare boards have a permanent polymer coating on both sides.


After they are assembled and soldered the boards are not conformal coated, leaving the solder joint and soldered land uncoated.


Note: All conductors, except for soldering lands, must be completely coated in order to ensure the electrical clearance requirements in this category for coated conductors.




"permanent polymer coating" stands for solder mask / solder resist.


So when you don't do a conformal coating after the component assembly, you can only rely on the "uncoated" or "with permanent polymer coating" categories for your clearances.


microcontroller - Maximum attainable delay with Micro controller


I am designing a microcontroller-based delay circuit to implement delays of 2 hours, 1 hour, 45 minutes, and 30 minutes. The circuit will automatically turn a relay on or off after this time period has elapsed.


I am stuck with a narrow selection of microcontrollers available locally in market:



  • 89C51

  • 89C52

  • 89S51

  • 89S52

  • 89C2051


  • PIC 16C71

  • PIC 16F84


I have checked the datasheets of these microcontrollers but there is no information about the maximum delay they can produce.


What is the maximum delay that can be produced with these microcontrollers?



Answer



The delay can be as long as you want. If a timer won't give you the delay you need, simply increment a register, or several registers, each time it overflows. Here is a simple program using Timer0 that illustrates the technique:


        #include    "P16F88.INC"

#define LED 0

        errorlevel -302     ;suppress "not in bank 0" message

#define RB0 0
#define INIT_COUNT 0        ;gives (255 - 199) * 17.36 = 972 us between interrupts

        cblock 0x20
        tick_counter
        temp
        endc

;reset vector
        org 0
        nop
        goto main

;interrupt vector
        org 4
        banksel INTCON
        bcf INTCON,TMR0IF   ;clear Timer0 interrupt flag
        movlw 0x00          ;re-initialise count
        movwf TMR0
        decf tick_counter,f
        retfie

main:
        banksel OPTION_REG
        movlw b'00000111'   ;prescaler 1/128
        movwf OPTION_REG    ;giving 7.3728 MHz/128 = 57600 Hz (period = 17.36 us)
        banksel TMR0
        movlw 0x00          ;initialise timer count value
        movwf TMR0
        bsf INTCON,GIE      ;enable global interrupt
        bsf INTCON,TMR0IE   ;enable Timer0 interrupt
        banksel TRISB
        bcf TRISB,LED       ;RB0 is LED output
        banksel 0

mainloop:
        bsf PORTB,LED
        call dly
        bcf PORTB,LED
        call dly
        goto mainloop

dly:
        movlw 20
        movwf tick_counter
dly1:
        movf tick_counter,f
        skpz
        goto dly1
        return

        end
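As a sanity check on how wide the software counter must be for the longest delay, the arithmetic can be done offline. This sketch assumes the ~972 µs tick quoted in the listing's comments; with a different tick period the register count changes accordingly.

```python
import math

# How many timer interrupts make up the longest delay, and how many
# chained 8-bit registers are needed to count them.
tick_us = 972             # us per Timer0 interrupt, per the listing's comments
delay_s = 2 * 60 * 60     # longest required delay: 2 hours

ticks = delay_s * 1_000_000 // tick_us            # ~7.4 million interrupts
bytes_needed = math.ceil(ticks.bit_length() / 8)  # counter width in bytes
# 7,407,407 ticks needs 23 bits, so three chained 8-bit registers suffice.
```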

switches - Relay circuit with off switch


I've been searching for a similar circuit and only found latching relays with two momentary switches for on and off. I don't know if this circuit has a specific name or not, and since I am a complete beginner I'm sorry if I missed it somehow. Here's my case.


I have a circuit that runs off a 12V DC power supply with one main switch. I need the circuit to be active when the power supply goes on. Simple enough. What I would need now is a switch that cuts the power to the circuit and keeps it off until the power supply is turned off and on again. So just one switch, no turning the circuit on after it's been pressed.



Answer



This is pretty straightforward. What you need is a single pole, double throw relay with a 12 volt coil and contacts which will handle your load current. Connect it like


schematic



simulate this circuit – Schematic created using CircuitLab


When the circuit turns on, there is no power applied to the relay, so it connects the 12V to the load through the NC (normally closed) contact.


When you close the switch, the relay activates, and the load is disconnected. 12V is applied to the coil, so when the switch is released the relay stays activated, and it will remain so until power is removed.


Monday 26 October 2015

emc - Weak internal pullups on microcontrollers and EMI susceptibility


What disadvantages can I run into when using weak internal pullups (100k) on microcontrollers? I'm wondering how susceptible lines (with only parasitic trace/component capacitance) become to EMI transients with these weak pullups.


Digital filtering with a 3-4ms window may decrease the odds of transients having an effect, but is there anything else to keep in mind on a professional PCB?




transformer - How to properly isolate multiple power supplies


Say you have a single 110V AC supply and need to use it for a number of devices, each requiring very different characteristics. For me personally these are: direct use of the 110V AC supply for a computer, a very stable yet variable high-voltage supply (up to 2.5kV, low DC current) to power some photomultiplier tubes, and a 5V DC supply for a DAQ board.


My question is therefore: how should one connect the respective transformers and rectifiers to ensure there is no feedback from one output to another and that the setup is as safe as possible?


I guess, in other words, what I'm asking is: what are the necessary considerations and equipment one should employ to safely use a single AC supply for a diverse range of multiple devices?


By "safely" here I mean for the equipment to be placed in a commercial / industrial environment and therefore necessarily complying with 'standard' safety regulations.




Does USB Power Delivery handle USB cables that are not PD-aware?


I think USB-PD is still fairly new (I don't know anyone who owns a device that has it) but reading wikipedia and info sites does not make it clear what happens when you use a cable that is out of spec in a USB-PD charging setup.



So if I have a host and device that are USB-PD compliant, and my device wants to use say a 12V profile, can I get away with using a normal USB cable? Or will it not charge or charge at a lower profile instead?


What about the edge case where I use the 5V profile at 2A. Technically that is out of spec also since USB BC1.2 is spec'd to 7.5W (1.5A), but many normal cables work fine with 2A in practice.




capacitance - How to calculte mosfet capacitors (Cgs Cds Cdg) knowing the charges Q


I need to use the best model for the MOSFET in the circuit I'm simulating in PSIM; the MOSFET is the Si4108-TI-GE3.



But I am not given the parasitic capacitance values (Cgs, Cds, Cdg); I am only given this:


dynamic charge characteristics of the capacitor to be modeled


I think I should use Q=CU, but I don't see how to use Q=CU for Cds since I only have Total gate charge and I don't know if that is the drain-source charge. How do I calculate the necessary parasitic capacitance values?



Answer



Read the data sheet: -


enter image description here


Ciss is the input capacitance, i.e. Cgs plus Cgd measured with drain and source AC-shorted. From Ciss, Coss and Crss you should be able to work out the individual capacitances.
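For completeness, the standard conversion from the datasheet's Ciss/Coss/Crss to the simulator's Cgs/Cgd/Cds follows from Ciss = Cgs + Cgd, Coss = Cds + Cgd, and Crss = Cgd. The numeric values below are placeholders for illustration, not the Si4108's real figures; read them from its capacitance table at your operating Vds.

```python
# Convert datasheet terminal capacitances (Ciss, Coss, Crss) into the
# inter-electrode capacitances (Cgs, Cgd, Cds) a simulator model wants.
def mosfet_caps(ciss, coss, crss):
    cgd = crss          # reverse-transfer capacitance IS Cgd
    cgs = ciss - crss   # input capacitance minus the gate-drain part
    cds = coss - crss   # output capacitance minus the gate-drain part
    return cgs, cgd, cds

# Placeholder values in farads - substitute the Si4108 datasheet numbers.
cgs, cgd, cds = mosfet_caps(ciss=1200e-12, coss=300e-12, crss=90e-12)
```

Bear in mind these capacitances vary strongly with Vds, so pick the datasheet values closest to your circuit's operating point.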


power supply - Source switch between two batteries


I've found several answers on switching between an adapter and a battery, but I need to automatically switch between two batteries. One of them is hot-pluggable.


Problem: I've got a solar cell which charges a ~16000mAh Li-Ion battery (BATa). Once in a while, the solar cell doesn't manage to charge the battery enough. It's crucial that I have 3.0 to 4.1 volts on my VSYS, but the voltage will drop below this value if the solar cell wasn't able to do its job (cloudy weather, volcano eruption [joke], dirty cells, etc).


My plan: I've come up with the idea of feeding VSYS with an additional, hot-pluggable backup battery (BATb) which gets connected to VSYS if BATa drops below ~3.1V.


So basically I need a low-voltage cutoff (using a Zener diode with a MOSFET?) which, when the cutoff happens, turns on another MOSFET?


schematic


simulate this circuit – Schematic created using CircuitLab


I'm not sure about the selection of M2 and how to wire it so that it becomes conductive when M1 no longer forwards power.


And I know about another electrical problem: just using a Zener diode (D1) with M1 will turn M1 into something like a resistor, not a switch. I've read that I could use something like a shunt reference (i.e. TI's TL43xx). Again, I'm not sure how to choose the correct one and how to wire it.




arm - Strange error when connecting JTAG


My setup:


Common) CPU: at91sam7x512, JTAG: olimex-arm-usb-tiny-h, proprietary single-CPU board
1) Windows 7, OpenOCD 0.4.0 <- works
2) Ubuntu 13.04, OpenOCD 0.7.0 <- strange error


There are some config files written a while back, and the Windows version works. Now I am trying to make things work under Linux.


The manufacturer's page doesn't seem to have a ready config file, although they have instructions for building OpenOCD.


I built it with ft2232 support. OpenOCD seems to locate the JTAG adapter, but not the device:



sw3@pc90313-sw3:~/ocd_conf$ openocd -f ocdconfig.cfg 
Open On-Chip Debugger 0.7.0 (2013-08-16-13:27)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.sourceforge.net/doc/doxygen/bugs.html
Info : only one transport option; autoselect 'jtag'
adapter speed: 20 kHz
srst_only srst_pulls_trst srst_gates_jtag srst_open_drain connect_deassert_srst
Info : device: 6 "2232H"
Info : deviceID: 364511274

Info : SerialNumber: OLWKRKY?A
Info : Description: Olimex OpenOCD JTAG ARM-USB-TINY-H A
Info : max TCK change to: 30000 kHz
Info : clock speed 20 kHz
Error: couldn't read enough bytes from FT2232 device (70 < 81)
Error: couldn't read from FT2232
Error: Trying to use configured scan chain anyway...
Warn : Bypassing JTAG setup events due to errors
Info : Embedded ICE version 1
Info : sam7x512.cpu: hardware has 2 breakpoint/watchpoint units


Here is my config file:


sw3@pc90313-sw3:~/ocd_conf$ cat ocdconfig.cfg 
source [find interface/olimex-arm-usb-tiny-h.cfg]

adapter_khz 20

source [find target/at91sam7x512.cfg]

This error, Error: couldn't read enough bytes from FT2232 device (70 < 81), seems to be the giveaway. Other people have experienced similar problems, but with (0 < 81), in which case it is clear that communication is not working at all. In my case, the number oscillates between 69, 79, and 71.



Ideas?



Answer



The FT2232H uses Hi-Speed USB, and the error translates into "problem in USB communication". I'd recommend trying another USB cable first - but that seems not to be the problem here, as it works in other configurations.


You should use the newer ftdi driver in OpenOCD 0.7.0:


source [find interface/ftdi/olimex-arm-usb-tiny-h.cfg]

You can ignore the warning; the interface was tested OK, but only after the 0.7.0 release.


Updating the OpenOCD Windows binaries might be a good idea, too - the 0.4.0 release is getting rusty.


What is the difference between a PLL and a DLL?



Phase-locked loops (PLLs) and delay-locked loops (DLLs) are used in various applications, but there isn't yet a salient discussion of the key aspects of these circuits: how they operate, in what applications they might be used, the comparison between the two circuits, and why one should be used vs. the other.



Answer



A PLL controls a voltage-controlled oscillator in order to bring its frequency (or some derivative of it) into phase (and frequency) lock with a reference signal.


PLLs have many applications, from creating a "clean" replica of a noisy reference signal (with amplitude and phase variations removed), to creating new frequencies through multiplication and division, to demodulating phase- and frequency-modulated communications signals. The input-to-output transfer characteristics of a PLL can be controlled through the design of its feedback network.


A DLL controls a voltage-controlled delay line, which typically has many taps, in order to bring one of those taps into phase alignment with a reference signal. The input to the delay line is usually also the reference signal, so the various taps provide additional signals that are interpolated and/or extrapolated from the period of the reference signal.


DLLs are commonly used in high-speed communications among chips on a board (e.g., between a memory controller and its SDRAM chips) in order to "cancel out" things like input and output buffer delays as well as wiring delays, allowing very tight control over setup and hold times relative to the clock signal. This allows data rates to be much higher than would otherwise be possible.


With suitably-designed phase detectors, both PLLs and DLLs can work with nonperiodic reference signals; a common application involves aligning data signal transitions with a reference clock.


As mentioned above (i.e., producing a clean version of a signal -> PLL), a key aspect in which PLLs and DLLs differ is that PLLs filter the reference, effectively blocking jitter in the source from affecting the VCO output, whereas DLLs propagate jitter. At first this may appear to be a drawback of DLLs, but it can be used to great effect. In some cases you need to recover the main sampling point from the arriving signal and ignore the jitter in it; there you would use a PLL. In other cases - say, when a signal and its clock are subjected to the same jitter-inducing effects, either at the source or in the communications channel - you can use a DLL and multiple tap delays to reduce or eliminate the relative jitter between the two signals (because they carry very similar jitter) and use the delayed clock to align to the final sampling point.
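The phase-tracking behavior of a PLL can be sketched with a toy discrete-time model (all loop gains here are arbitrary illustrative choices, not from any real design): a numerically-controlled oscillator chases the reference phase through a PI loop filter.

```python
def pll_lock(ref_freq, n=5000, kp=0.05, ki=0.002):
    # Toy type-II digital PLL: the integral path learns the reference
    # frequency, driving the steady-state phase error to zero.
    phase_ref = phase_nco = integrator = err = 0.0
    for _ in range(n):
        phase_ref += ref_freq                 # reference phase advances
        err = phase_ref - phase_nco           # (idealized) phase detector
        integrator += ki * err                # integral path of PI filter
        phase_nco += integrator + kp * err    # NCO accumulates frequency
    return integrator, err

freq_est, residual = pll_lock(0.01)   # locks: freq_est -> 0.01, residual -> 0
```

After lock, the integrator holds an estimate of the reference frequency; a DLL, by contrast, has no oscillator of its own and simply re-times the reference through its delay taps, so reference jitter passes straight through.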


Sunday 25 October 2015

voltage regulator - LM1117 3.3 output gives 4.2


I used this circuit and formula as a reference for 5 V to 3.3 V conversion:


this



It's from the LM1117 datasheet, but the output gives 4.2 V. Here is my circuit:


this


I tried resistors of different values, but the output is still 4.2 V. What could the problem be, and how can I find my mistake?



Answer



Are you building your circuit on a solderless breadboard? If so, it's very easy to miswire your feedback resistors and see strange problems like this one. I've seen similar problems before.


Also, I see you have a decoupling capacitor at Vadj. The junctions between resistors and capacitors can be pretty confusing on a breadboard and can easily be miswired. I suggest removing the capacitor at the Vadj pin, leaving only the two resistors and one output capacitor. First, make sure the resistor network is correctly connected; add the Vadj capacitor later, only after you see the correct output voltage.


I managed to reproduce your problem. It's only one possible way for things to go wrong, but as an example, consider this simple LM1117 circuit.


LM1117 example


According to the formula, it should output


$$ V_{out} = 1.25 \times (1 + \frac{220}{100}) = 4.0 $$
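As a quick arithmetic cross-check of the adjustable-regulator formula (a trivial sketch; the small Iadj current through the lower resistor is neglected):

```python
def lm1117_vout(r1, r2, vref=1.25):
    # Vout = Vref * (1 + R2/R1) for the adjustable LM1117,
    # neglecting the small Iadj contribution.
    return vref * (1 + r2 / r1)

vout = lm1117_vout(r1=100, r2=220)   # -> 4.0 V, matching the formula above
```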



Now let's build the circuit on the breadboard...


Wrong breadboard example


And the moment of truth, let's measure the output voltage.


Voltmeter shows 4.192 volt


4.2 volt?!


Can you find the mistake? In this case, the problem is caused by the miswired yellow wires. Just a few additional wires on the breadboard, like those added for a capacitor at Vadj, can make it difficult to notice.


Miswired portion of the breadboard.


The right side of the resistor network should be connected to pin 2 of the LM1117, and the center tap to pin 1. But in this diagram it's connected backwards, so Vadj won't receive the proper feedback, and R2 would soon get extremely hot, since Vout is connected to ground through R2.


What is the moral of the story?





  1. The human brain is not a good netlist processor. If you want to make sure your wiring is correct, you may still want to use a PCB layout tool: draw a breadboard-like grid, route inside it first, then transfer it to the physical breadboard. The computer can help you ensure the layout is correct.




  2. Often, you can get better results by not using a solderless breadboard at all, as Lundin commented. For example, a breadboard forces you to route your circuit horizontally; many mistakes can be avoided on a perfboard, where you can route in all directions and there won't be any accidental open or short circuits due to loose springs or touching metal wires underneath.




  3. It's only one possible way for things to go wrong. You must post a high-resolution picture that clearly shows the construction and connections for us to help.





Saturday 24 October 2015

batteries - Charging li-ion cell using constant-voltage only



Various resources [1] [2] state that the optimal method of charging a li-ion cell -- such as one found in a mobile phone -- is to charge at a constant current (usually <1C) until a certain voltage threshold is reached, then switch to charging at a constant voltage until the charging current drops to about 0.1C, at which point the battery is fully charged.



The exact target voltage differs slightly for different battery chemistries, but according to National Semiconductor (see figure 5) it's common to switch from CC to CV mode at 4.1V, with the CV target voltage being 4.2V.


Many manufacturers sell inexpensive charge-controller ICs that take the guesswork out of charging li-ion cells, but they're not as widely available to hobbyists as, say, an LM317 voltage regulator. Additionally, they tend to be small, SMT devices that are not breadboard friendly.


Question


Although both constant-current and constant-voltage charger circuits can be built with an LM317, a current-limited, constant-voltage circuit is particularly simple to build.


Is there a downside to charging at a constant voltage (say 4.2V) so long as the maximum current is limited to a reasonable value for the cell (say 0.5C)?


(In this context, consider "downside" to be more along the lines of "cell doesn't fully charge with CV-only, is damaged, explodes, steals your lunch money, burns house down, etc." and less along the lines of "CV-only charging takes longer", though that is also useful information to know.)


Background


I ask because the internal charger circuit in my mobile phone died unexpectedly and, while waiting for a replacement phone, I constructed a CC-CV charger using an LM317 and successfully charged the battery. This was a bit messier than expected, and I was curious if a simple current-limited constant voltage circuit would also be suitable for occasional use.


Edits





  1. Possible duplicate of this question, which was answered with "If you really want to charge battery from time to time - charging with resistor and constant voltage 4.2V or less will work and battery will not blow up if you choose proper resistor." and a caution to ensure it's OK for the cell to be charged at high current when empty.




  2. Fortunately, charger ICs like the TP4056 as well as assembled PCBs with such a chip and the necessary connecting pins are available cheaply on eBay and other vendors. Since I needed to safely charge lithium batteries for other, unrelated, electronics projects I've purchased several of them and keep a few spares around just in case. They work very well.





Answer



(The numbers given are for LiCoO2 chemistry; for LiFePO4 they should be lower.) Short answer: yes, this is OK. I charge lithium cells with a bench supply set to 4.2 V and whatever the C/10 figure is for the cell I'm charging. If you are not in a hurry, slow charging is better, even for cells claimed to withstand higher charging currents.


However (quoting you):




charging at a constant voltage (say 4.2V) so long as the maximum current is limited to a reasonable value for the cell



means you will effectively have a constant-current charger until your cell is at ~95% charge. Up to that point, the voltage across the battery, if you measure it, will be less than 4.2 V. Only when your charger starts outputting 4.2 V does it become constant-voltage.


What you are about to build is a CC/CV charger, and that is the right thing to do. A "constant voltage only" charger would be set to 4.2 V with no current limiting, and it would charge the lithium cell very slowly. You can check this yourself: build a variable-voltage circuit and measure the current into a (discharged) cell at 3.5, 3.7, 4.0, 4.2, and 4.5 V. Cheap Chinese chargers are constructed like that; they restrict the voltage to 4.2 V so the cell won't ignite after being charged, but the consumer has to wait longer. I once bought a portable emergency charger which could be emptied in a couple of hours and then took 3 days to recharge its internal cells.
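The CC-then-CV behavior falls out of the numbers. A rough sketch with assumed values (a 0.1 ohm effective series resistance and a 0.5 A current clamp - both illustrative, not measured):

```python
def charger_current(v_cell, v_set=4.2, i_limit=0.5, r_series=0.1):
    # Current a current-limited 4.2 V source pushes into the cell:
    # early on the clamp dominates (CC phase); near full charge the
    # shrinking headroom across the series resistance dominates (CV taper).
    return min(i_limit, (v_set - v_cell) / r_series)

i_empty = charger_current(3.6)         # clamp active -> 0.5 A (CC phase)
i_nearly_full = charger_current(4.18)  # 0.2 A and falling (CV taper)
```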


There are other precautions to observe while charging lithium; you can learn them from any modern charge-controller IC datasheet (my favorite is Linear Technology - their literature is very high quality). If you don't implement these precautions in your design, never leave it unattended while charging; otherwise it may ruin your morning one day.


protection - To protect a LiPO cell from undervoltage, how low current is low enough?


This may seem like a topic beaten to death, but bear with me - this is an apparently ignored wrinkle. Over the past several weeks I've been designing various circuits to protect a LiPo cell from under-voltage in case the user of my device carelessly leaves it on. Now, I know you can buy protection circuits ready-made, but most of them start cutting the current only when the LiPo voltage gets down to around 2.5 V. If you really want to protect a LiPo cell from damage, 3.0 V is a much better discharge point at which to call it quits. At this point I thought I had a few good solutions, but I may be wrong... dead wrong!



I don't know about you, but if I accidentally leave something on, and it's something I don't use every day, then it might be on for days... maybe weeks or months. I've come to realize that just about any scheme to stop battery draw when the voltage is too low is NOT going to drop the current to ZERO. Even the best MOSFET-based circuits will have leakage current, and a good control circuit may add more. So how low is low enough?


I suppose it's somewhat related to the capacity of the cell. Obviously, if my cutoff circuit limits cell current to less than 1 uA, that's going to protect a 10,000 mAh cell from damage for a good long time. But what about a 200 mAh cell? Would a cutoff to 1 uA offer "reasonable" protection, or am I just kidding myself? What about 1/10 of that (100 nA)? The lower the circuit leakage, the more expensive the design. So how low is low enough?
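For a ballpark answer: capacity in mAh divided by drain in mA gives hours. A quick sketch (which ignores the cell's own self-discharge, likely the dominant loss at these current levels):

```python
def hours_to_drain(capacity_mah, leak_ua):
    # mAh / mA = hours; the leak is the cutoff circuit's residual draw.
    return capacity_mah / (leak_ua / 1000.0)

hours = hours_to_drain(200, 1.0)   # 200 mAh cell, 1 uA residual draw
years = hours / 8760.0             # ~22.8 years: the leak itself is not the limit
```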


Addendum: Here is a circuit I'm intending to try. If it works as I'm hoping, it will reduce the residual current to about 1 uA when the cell voltage reaches about 3 V. There are only 3 parts here: a load switch made by Fairchild (FDC6331L) does all the grunt work of cleanly switching my load, while a Microchip part (MCP112-315 or MCP112-300m) "trips" at about 3 V to control the load switch. The total cost of this circuit is about $1, and the low part count is due to the multiple parts inside each IC. This is still unproven, but I'm hopeful. If it works as planned, time and experimentation will tell how long it actually protects a little 200 mAh cell in actual use when the user leaves the load on.


enter image description here




power supply - Using a regulator or not?


We have designed a board for mass production. The board has a few components on it that work at 3.3 V, such as an MCU, Wi-Fi, and a few sensors.



We used to power the board directly from a 3.7 V battery, and later on we decided that we must use a regulator, because it's safer. The regulator is an AMS1117.


After some research, we found out that this regulator (and others) needs about 1 V more at its input, relative to its output.


So a 3.7 V lithium battery pack will not work properly with such regulators (3.3 V output), even when it's charged to 4.2 V.


We concluded that connecting the battery directly to the board (no regulator) is much more sensible, because you can "enjoy" the battery power for as long as it is > 3.3 V.


Our only concern is that not using a regulator isn't healthy.


Is that a good, "professional" thing to do?


Can you think of other options, such as a Zener diode instead of a regulator?



Answer



If you're asking questions like this, you're not ready for "mass production", whether you wish to be professional or not.


You need to look at the supply-voltage range of everything, and then see whether your battery's voltage range from full to flat will work. If not, you will likely need either a very-low-dropout linear regulator (hint: the AMS1117 isn't one), or a buck-boost switch-mode regulator that can provide a stable 3.3 V supply from a source that can be either above or below the output (i.e. as the battery goes from full to flat). With the linear-regulator option, even if the battery voltage drops below the regulator's dropout threshold (for a 3.3 V LDO output, let's say 3.5 V, 200 mV above the output), you might be able to get away with it - the regulator won't really be in regulation, but depending on your load profile that may or may not matter much.
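The headroom check is simple arithmetic. A sketch with assumed dropout figures (the AMS1117's worst-case dropout is taken here as roughly 1.1 V - check your actual datasheet):

```python
def regulator_ok(v_out, v_dropout, v_batt_min):
    # The regulator stays in regulation over the whole discharge range
    # only if the battery's minimum voltage still covers Vout + dropout.
    return v_batt_min >= v_out + v_dropout

# A "3.7 V" Li-ion cell runs ~4.2 V full down to ~3.0 V flat.
ams1117 = regulator_ok(3.3, 1.1, 3.0)   # False: would need >= 4.4 V in
ldo     = regulator_ok(3.3, 0.2, 3.0)   # still False at 3.0 V flat
```

A buck-boost converter sidesteps the problem entirely, since it regulates whether the battery is above or below 3.3 V.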



When you say a "3.7 V lithium" battery, its range from full to flat will be ~4.2 V full to no less than 3.0 V flat. And you don't charge lithium from a constant-voltage source. Well, at least not at first - lithium battery charging is at least a two-stage process, first constant-current, then constant-voltage, using a charge-management chip specific to lithium chemistry.


You are not ready for mass production, not even close.


arduino - Can I use TI's cc2541 BLE as micro controller to perform operations/processing instead of ATmega328P AU to save cost?

I am using arduino pro mini (which contains Atmega328p AU ) along with cc2541(HM-10) to process and transfer data over BLE to smartphone. I...