Monday 30 November 2015

arduino - Measure Lithium ion battery voltage (thus remaining capacity)


What I am working with: I am running my self-made Arduino board (in the sense that I use the Arduino bootloader and code editor) at 3.3V, powered from a lithium-ion battery that is USB-charged by a Microchip charger IC.


What I am trying to achieve: I want to measure battery capacity once every minute or so. I have an LCD attached, so the idea is that the overall setup lets me know how the battery is doing at a given moment. The datasheet of the battery has a voltage versus discharge-level curve, and so by measuring the voltage of the battery, I can estimate the remaining capacity (very roughly but enough for me!).
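A lookup table with linear interpolation is enough for the rough voltage-to-capacity estimate described above. A minimal sketch in C; the breakpoints below are purely illustrative placeholders, to be replaced with points read off the battery datasheet's voltage-versus-discharge curve:

```c
#include <math.h>

/* Rough state-of-charge estimate from cell voltage by linear interpolation.
   These breakpoints are illustrative, NOT datasheet values: substitute the
   discharge curve from your own battery's datasheet. */
static const float volts[]   = {3.0f, 3.4f, 3.6f, 3.7f, 3.9f, 4.2f};
static const float percent[] = {0.0f, 10.0f, 30.0f, 55.0f, 85.0f, 100.0f};
#define NPOINTS (sizeof volts / sizeof volts[0])

float soc_from_voltage(float v)
{
    if (v <= volts[0])           return percent[0];
    if (v >= volts[NPOINTS - 1]) return percent[NPOINTS - 1];
    for (unsigned i = 1; i < NPOINTS; i++) {
        if (v <= volts[i]) {
            /* Interpolate between breakpoints i-1 and i. */
            float t = (v - volts[i - 1]) / (volts[i] - volts[i - 1]);
            return percent[i - 1] + t * (percent[i] - percent[i - 1]);
        }
    }
    return percent[NPOINTS - 1]; /* not reached */
}
```

With the table above, a reading of 3.65 V would report about 42% remaining; the real numbers depend entirely on the curve you key in.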


What I did:




  • (EDIT: Resistor values updated and P-MOSFET switch added based on @stevenvh and @Jonny's suggestions).





  • I connected a voltage divider from the battery V_plus, with the larger "portion" going to an analog-read-pin (i.e., ADC) on the Arduino/Atmega chip.




  • The divider is 33 kΩ to 10 kΩ, thus allowing measurement up to the 4.1 V maximum of the Li-ion battery from my 3.3 V-level microcontroller.




  • Also, using one of the I/O pins connected to an n-channel MOSFET, I can switch the current through the divider only when I need the measurement.





  • Here is a rough schematic (updated for a 2nd time based on suggestions of @stevenvh and @Nick):




[schematic]
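Undoing the divider in software is one multiply. A sketch, assuming a 10-bit ADC (0..1023) with a 3.3 V reference and the ADC tap taken across the 33 kΩ leg of the divider:

```c
#include <math.h>

/* Raw ADC count -> battery voltage. Assumes a 10-bit ADC (0..1023),
   a 3.3 V reference, and the ADC reading the voltage across the 33 k
   leg of the 33 k / 10 k divider, so V_adc = V_batt * 33 / (33 + 10). */
float battery_voltage(unsigned adc_count)
{
    float v_adc = (adc_count / 1023.0f) * 3.3f;   /* voltage at the ADC pin */
    return v_adc * (33.0f + 10.0f) / 33.0f;       /* undo the divider */
}
```

Full scale works out to 3.3 V x 43/33, about 4.3 V, so a fully charged cell stays in range.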


My question:




  • How is my current setup?





  • My only constraints are: (1) I would like to make a rough measurement of the battery capacity based on the voltage reading, as described above. (2) I would like to prevent the voltage divider from interfering with my charging IC's reading of battery presence (in my original setup, the divider sometimes caused the IC to misread presence even when the battery was absent).





Answer



[schematic]


This is very similar to Nick's schematic; I was probably busy drawing mine when he posted :-).


First, why you can't use the N-FET on the high side: its gate needs a voltage a few volts higher than its source, and the battery's 4.2 V is all you have, nothing higher, so that won't work.


I have a higher value for the pull-up, though a value of 100 kΩ will also do. 10 kΩ would cause an unnecessary extra current of 400 µA while you're measuring. Not the end of the world, but it's one resistor in both cases, so why not use the higher value.


For the MOSFETs, the requirements are not strict, so there are many parts to choose from; inexpensive options include, e.g., the Si2303 for the P-channel and the BSS138 for the N-channel.



Sunday 29 November 2015

digital logic - Essential Prime Implicant



[figure from the question]


While solving this question using the Quine–McCluskey algorithm, I get 3 EPIs in total, but by the definition I get 2 EPIs. I am attaching both solutions. Please correct me where I am wrong.


[attached solutions]



Answer



A prime implicant is only essential if no other prime implicants can 'cover' its outputs. As you pointed out, the only prime implicants that fit this definition are the ones in the corners. The ones in the middle can be covered both by the square and by the two horizontal rectangles. So the answer is two.
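The coverage test can be mechanized: represent each prime implicant as a bitmask over minterms; an implicant is essential iff it covers at least one minterm that no other prime implicant covers. A small sketch (the function name and masks are just for illustration):

```c
#include <stdint.h>

/* Count essential prime implicants. Each entry of pi[] is a bitmask over
   minterms (bit k set = implicant covers minterm k). An implicant is
   essential iff it covers a minterm that no other implicant covers. */
int count_essential(const uint32_t *pi, int n)
{
    int essential = 0;
    for (int i = 0; i < n; i++) {
        uint32_t others = 0;
        for (int j = 0; j < n; j++)
            if (j != i)
                others |= pi[j];       /* union of everyone else's coverage */
        if (pi[i] & ~others)           /* some minterm only implicant i covers */
            essential++;
    }
    return essential;
}
```

For example, with pi[] = {0x3, 0x6, 0xC} (three overlapping implicants in a chain) the middle one is fully covered by the other two, so only two are essential, mirroring the situation in the question.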


soldering - How to start implementing your own computer?



The question



As a first step I need to select a circuit board, CPU & RAM/ROM chips, and I/O connectors. How do I select a board that can interconnect all these components? Perhaps you could give examples of a full configuration (board and chip models). What type of board would you recommend for a beginner (solderless, stripboard, PCB, ...)?


Project Outcome: create a motherboard by myself to which I could connect a monitor and a keyboard and the monitor would display the keys I've typed. An additional extension would be receiving and sending data from a network interface.


Question background


I have a fairly decent background in software development. However I would like to start doing some hardware projects in my spare time.


I've been looking around on the internet for materials to understand how programs are executed on hardware. Two great resources I've found are http://nand2tetris.org/, which I am currently working through, and the Raspberry Pi board, which seems like a simple hardware platform for writing a first OS. With the understanding of the basic workings of operating systems I can get from those resources, I would like to start assembling my own motherboard using standard ICs and writing programs for it.


Could you give some initial pointers on where to look for information regarding this kind of project? In particular, I have no idea which ICs would make up a working computer, which kind of board to use to assemble them, or how to create the circuits between the ICs. Any information on these basics would be useful to get me started.


UPDATE:


Since the question seems to be considered vague (how else could it be if you need some kick-start information on an unknown topic), I would like to emphasize that I know programming (Java, C/C++, Python, x86 assembler), and I've also done some projects in tools like OrCAD. The problems I am struggling with are:



  • I've never learned the basics for working with physical components


  • I have little knowledge of components other than general-purpose CPU and RAM/ROM chips, and of how the other units (the clock, the power supply, and the keyboard/monitor/network interfaces) interconnect with the CPU and memory chips to make a working computer

    So given all this, I need pointers to resources with step-by-step instructions on buying the necessary components and assembling them into a working mini computer. An example of such a resource is the one given by Alfred Centauri in the comments below: A Homebuilt Z-80 Computer.





Answer



If you really want to do this, I suggest looking at old processors, something of the 6502 or 8080 vintage. In those days, you could take a CPU and some external chips and assemble a working, if limited, computer. Newer processors are much more exacting and challenging to work with.


Note that this is a huge undertaking; do a search for the Apple 1 schematic and you'll get some idea of what it will take. Then you get to start writing software for it, and you'll get to do at least some of it in assembly.


If you're actually trying to get things done, the current crop of microcontrollers are so much easier to use; you can write in C/C++, they have great built-in capabilities, and they're cheap.


How to calibrate multimeters?


Without buying an expensive calibration tool, how can I accurately calibrate my multimeters?



Answer




You can buy a few 0.1% resistors to check the resistance ranges cheaply.


Voltage is trickier: if you have access to several meters you can 'calibrate by consensus', as it is improbable that they will all drift in the same direction.


Another option is to buy a precision voltage reference IC; e.g., the AD581 is a 10 V reference with 0.1% accuracy.


Current can also be checked using the voltage across a known, accurate resistance.
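The shunt trick above is just Ohm's law, and to first order the tolerances simply add. A quick sketch (the tolerance figures in the note are illustrative):

```c
#include <math.h>

/* Current through a known shunt resistor: I = V / R. */
float current_from_shunt(float v_measured, float r_shunt)
{
    return v_measured / r_shunt;
}

/* First-order worst-case relative error of I = V/R:
   the voltmeter and resistor tolerances add. */
float current_rel_error(float v_tol, float r_tol)
{
    return v_tol + r_tol;
}
```

With a 0.1% resistor and a voltage reading good to 0.5%, the derived current is good to roughly 0.6%.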


batteries - How to calculate battery life with a given load?


I am not very good at electricity things so any help anyone could give me would be fantastic. Here goes:




  • I have a battery (for power tools); it is 18 V and 1500 mAh (DC).




  • I'm trying to get it to power a tool that is designed to take 120 V at 6 A (AC).





Is there a way to calculate how long the battery would last? I read the article on "How to calculate battery life" and it was extremely helpful... but I have more questions. Does the AC/DC conversion have any effect on battery life? What effect does the voltage have on battery life? Is there an equation to calculate this?
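As a rough energy-balance sketch: battery energy (Wh) divided by load power (W) gives an upper bound on runtime. Conversion losses in a real DC-to-AC inverter only shorten this further; the efficiency parameter below models that (1.0 = lossless, which is optimistic):

```c
#include <math.h>

/* Upper-bound runtime estimate from an energy balance.
   efficiency models the DC-to-AC conversion (1.0 = lossless). */
float runtime_hours(float batt_v, float batt_ah,
                    float load_v, float load_a, float efficiency)
{
    float energy_wh = batt_v * batt_ah * efficiency; /* 18 V * 1.5 Ah = 27 Wh */
    float load_w    = load_v * load_a;               /* 120 V * 6 A  = 720 W */
    return energy_wh / load_w;
}
```

For the numbers in the question this gives 27 Wh / 720 W = 0.0375 h, about 2.25 minutes at absolute best, before inverter losses and the battery's inability to deliver that power are even considered.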




Protection of ADC inputs


I am using an ADC MCP3424.


After breaking one of the ADC inputs, I saw this thread: ADC input protection?



I also read Microchip app note TB3013. Figure 3 in particular:


examples of input pin protection


The ADC's analog inputs need to be clamped to within 0.3 V of the supply rails.


The Vf of a Schottky diode like the BAT85 depends on the current flowing through it. So for Vf < 0.3 V, we would need a series resistance of 10 k to keep the current below 1 mA (assuming the op-amp supply is +15 V/-5 V). The problem is: wouldn't the 10 k resistance slow down the charging of the sampling capacitor and reduce my 18-bit accuracy? I am measuring slowly moving DC voltages.


Is there a solution to this problem?



Answer



Putting a 10 k resistor on the input will not change things to any great extent. Yes, technically it will slow down charging the sampling cap. It will act as a low-pass filter with a cutoff frequency of 1/(2*pi*R*C), which works out to 4.97 MHz. So only frequencies above 4.97 MHz will be significantly attenuated.
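For reference, the arithmetic behind that figure; a sampling capacitance of about 3.2 pF is what makes 10 k come out near 4.97 MHz (check the ADC datasheet for the actual input capacitance):

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* First-order RC low-pass cutoff: f_c = 1 / (2 * pi * R * C). */
double rc_cutoff_hz(double r_ohms, double c_farads)
{
    return 1.0 / (2.0 * M_PI * r_ohms * c_farads);
}
```

rc_cutoff_hz(10e3, 3.2e-12) comes out at roughly 4.97 MHz, matching the figure above.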


This ADC has several modes, but the quickest one runs at 240 samples per second, meaning it can handle signal frequencies below 120 Hz. The cutoff of the RC filter formed by the input capacitance and the 10 k resistor is still more than four orders of magnitude higher!


In short, that 10k resistor on the input isn't going to do anything negative for you.


msp430 - Msp430g2553 i2c communication library with itg3200 gyroscope


I've been writing an I2C library for communicating with the ITG3200 gyroscope on the MSP430G2553 LaunchPad. I still have slight problems with sequential reads and the signed values, but I'm nearly done. Can you help with the last problems I'm dealing with? I seem to be stuck at one point. Here is the library: http://e2e.ti.com/cfs-file.ashx/__key/communityserver-discussions-components-files/166/8272.i2c_5F00_lib.rar


family user guide : http://www.ti.com/lit/ug/slau144i/slau144i.pdf


gyro user guide : https://www.sparkfun.com/datasheets/Sensors/Gyro/PS-ITG-3200-00-01.4.pdf


If you do not want to download those 3 source files, here is the explanation of the required functions:


Main:


WDTCTL = WDTPW + WDTHOLD; // Stop WDT
BCSCTL1 = CALBC1_1MHZ; // Set DCO to 1Mhz
DCOCTL = CALDCO_1MHZ;

P1SEL |= BIT1 + BIT2 + BIT6 + BIT7; // Assign I2C pins to USCI_B0 // Assign Uart pins to USCI_A0
P1SEL2 |= BIT1 + BIT2 + BIT6 + BIT7; // Assign I2C pins to USCI_B0 // Assign Uart pins to USCI_A0
init_I2C(); // initialize i2c
initUart(); // initialize uart

..

..

I2C init:



void init_I2C(void) {
    UCB0CTL1 |= UCSWRST;                  // Enable SW reset
    UCB0CTL0 = UCMST + UCMODE_3 + UCSYNC; // I2C master, synchronous mode
    UCB0CTL1 = UCSSEL_2 + UCSWRST;        // Use SMCLK, keep SW reset
    UCB0BR0 = 10;                         // fSCL = 1 MHz / 10 = ~100 kHz
    UCB0BR1 = 0;
    UCB0I2CSA = itgAddress;               // Slave address is 0x69
    UCB0CTL1 &= ~UCSWRST;                 // **Initialize USCI state machine**
    IE2 |= UCB0RXIE + UCB0TXIE;           // Enable RX and TX interrupts
}


Receive function:


uint8_t Receive(char registerAddr){
    uint8_t receivedByte;
    while (UCB0CTL1 & UCTXSTP);      // Ensure stop condition got sent
    UCB0CTL1 |= UCTR + UCTXSTT;      // I2C start condition with UCTR flag for transmit
    while((IFG2 & UCB0TXIFG) == 0);  // UCB0TXIFG is set immediately
    UCB0TXBUF = registerAddr;        // write registerAddr into TX buffer
    while((IFG2 & UCB0TXIFG) == 0);  // wait until TX buffer is empty and transmitted
    UCB0CTL1 &= ~UCTR;               // Clear I2C TX flag for receive

    UCB0CTL1 |= UCTXSTT + UCTXNACK;  // I2C start condition with NACK for single-byte read
    while (UCB0CTL1 & UCTXSTT);      // Start condition sent? RX buffer full?
    receivedByte = UCB0RXBUF;
    UCB0CTL1 |= UCTXSTP;             // I2C stop condition
    return receivedByte;
}

Uart init:


void initUart(void) {
    UCA0CTL1 |= UCSSEL_2;   // Use SMCLK

    UCA0BR0 = 104;          // 1 MHz, 9600 baud
    UCA0BR1 = 0;
    UCA0MCTL = UCBRS0;      // Modulation UCBRSx = 1
    UCA0CTL1 &= ~UCSWRST;   // **Initialize USCI state machine**
}

integer print:


void serialPrintInteger(uint16_t num) {
    int i;
    uint16_t num_send[5];
    uint16_t numTemp;

    num_send[0] = num / 10000;    // extract 5th digit
    numTemp = num % 10000;        // get remaining 4
    num_send[1] = numTemp / 1000; // extract 4th digit
    numTemp = numTemp % 1000;     // get remaining 3
    num_send[2] = numTemp / 100;  // extract 3rd digit
    numTemp = numTemp % 100;      // get remaining 2
    num_send[3] = numTemp / 10;   // extract 2nd digit
    num_send[4] = numTemp % 10;   // extract 1st digit

    if(num_send[0] > 0) {             // if num has 5 digits
        for(i = 0; i <= 4; i++)
            serialWrite(num_send[i]); // send each digit as one byte
    }
    else if(num_send[1] > 0) {        // if num has 4 digits
        for(i = 1; i <= 4; i++)
            serialWrite(num_send[i]);
    }
    else if(num_send[2] > 0) {        // if num has 3 digits
        for(i = 2; i <= 4; i++)
            serialWrite(num_send[i]);
    }
    else if(num_send[3] > 0) {        // if num has 2 digits
        for(i = 3; i <= 4; i++)
            serialWrite(num_send[i]);
    }
    else {                            // if num has 1 digit
        serialWrite(num_send[4]);
    }
}


void serialPrintAscii(uint8_t ascii) {
    UCA0TXBUF = ascii;
    while (!(IFG2 & UCA0TXIFG));   // USCI_A0 TX buffer ready?
}

And here is my problem. When I try these reads in my main loop separately, I get the results that I should:


serialPrintInteger(Receive(0x00));

or



serialPrintInteger(Receive(0x15));

or


serialPrintInteger(Receive(0x16));

and the outputs are 0x69 from Receive(0x00), which reads the gyroscope's slave-address register; 9 from Receive(0x15), where I wrote 9 into register 0x15 for configuration; and 25 from Receive(0x16), where I also wrote a value.


I don't think my serialPrint functions are corrupted either; I've tried a lot of combinations within the 16-bit range where they should work. This sequence is working fine:


serialPrintInteger(5);
serialPrintAscii(Ascii_Comma);
serialPrintInteger(10);

serialPrintAscii(Ascii_And);
serialPrintInteger(15);
serialPrintAscii(Ascii_Dot);

I see an output like this in my serial console: 5,10&15.5,10&15.5...


The funny part starts when I try this logic with my Receive function. Here is the sequence I use:


serialPrintInteger(Receive(0x00)); // result 105
serialPrintAscii(Ascii_Comma);
serialPrintInteger(Receive(0x15)); // result 9
serialPrintAscii(Ascii_And);

serialPrintInteger(Receive(0x00)); // result 105
serialPrintAscii(Ascii_Dot);

The sequence in my console is like this: 105,105&9.105,105&9.105...


First I thought I wasn't properly sending a NACK to the slave for the single-byte read, and that the gyro kept incrementing the register address on its own, but the reads work fine separately, and the gyro's x, y, z registers are not corrupting my sequence either.


I was struggling with the gyro motion registers for a while, but I realised that I'm not in full control of my I2C just yet. So can you point out what I'm doing wrong here?




identification - What kind of knob fits a slotted potentiometer shaft?


What I thought was an easy question is more elusive than I suspected. I have a number of potentiometers with smooth (not knurled) shafts that have a slot cut into the end; it looks like a flathead screw.


similar to this, from Mouser's site


I need a knob for it, but can't find one. I suspect I don't know the right thing to ask for. When I google "knob slotted potentiometer" or similar, I get a ton of potentiometer results drowning out any knobs, and no useful links when I search for "slotted knob", "knob for slotted", etc.


While a set screw would likely work I'd prefer to get something that's made for the shaft I'm working with. Does someone know what I'm looking for and what search terms I need to use to effectively filter out the pots from the knobs?



Answer



The slot is for adjustment with a flat-headed screwdriver (instead of a knob), and is not generally intended as a feature for the knob to engage with. Just use the same type of knob that you'd use with a purely cylindrical shaft - a knob with a setscrew:


[knob with setscrew]



Saturday 28 November 2015

clock signals in computers and machines



Why do computers have clock signals and clock rates, while ordinary machines do not?


And why is the clock so fundamental to the CPU and mainboard?




pcb - Altium Power Plane Gerber inverted


Good Day


I used internal power planes for the first time (Altium), and I now want to generate fabrication outputs in the form of Gerbers.


The Gerber below shows my internal GND plane. To avoid confusion with the fab house, I would like to invert it: currently the voids are supposed to be copper and the copper is supposed to be voids, so it looks to me like the whole Gerber should be inverted.



Where do I change the settings for this?


Thank you!


Altium Internal Power Plane Inverted



Answer



Power planes are always created as negatives. You have a couple of options:



  1. Specify to your manufacturer that this layer is a negative. Many manufacturers will ask you the polarity of the layers.

  2. (My personal preference) Instead of using planes, use polygon pours. This way there is no doubt, and it's clear where the copper is.


The choice is ultimately up to you as a designer, but I tend to find #2 easier to do and it doesn't rely on having smart people at the board house.



power supply - How to convert AC to DC


I am designing a circuit that needs to output 5VDC @ 1A. I'm trying to use a wall transformer to step the voltage down to 12VAC. The next step is the diode bridge and ripple capacitor.


The ripple voltage equation is:


$$V_{ripple} = \frac{I}{2fC}$$


I = load current (1 A)
f = AC frequency (60 Hz)
C = filter capacitance (? µF)

If I choose a C of 1000 uF, the ripple voltage is 8.3 V! Do I really need to put more capacitance to lower the ripple voltage? Is there another method of converting AC to DC?



Answer



1000 µF at this voltage isn't terribly big. Are you limited by size or something?


To completely get rid of the ripple and produce 5 V, you need to add a voltage regulator after the capacitor.


12 V RMS ≈ 17 V peak, which, minus the two diode drops, is the peak DC voltage you'll see at the output of the rectifier: 17 - 1.1 - 1.1 = 14.8 V. So there's no threat of exceeding the regulator's input limit (35 V maximum).


If the ripple is 8.3 V, then the DC voltage will vary from 6.5 V to 15 V. That minimum is just barely high enough to feed the regulator without dropping out of regulation, since the 7805 has about 1.5 V of dropout at 1 A (depending on temperature). So yes, you should use a somewhat larger capacitor (or multiple capacitors in parallel, if space is an issue).
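Rearranging the ripple equation gives the capacitance needed for a chosen minimum voltage. A sketch with the numbers above (14.8 V peak, and a 6.5 V floor for the 7805: 5 V out plus ~1.5 V dropout):

```c
#include <math.h>

/* Required filter capacitance for full-wave rectification:
   C = I / (2 * f * Vripple_max). */
double required_cap_farads(double i_load, double f_mains, double v_ripple_max)
{
    return i_load / (2.0 * f_mains * v_ripple_max);
}
```

required_cap_farads(1.0, 60.0, 14.8 - 6.5) comes out just over 1000 µF, i.e. the original 1000 µF choice sits right at the dropout limit, which is why some extra margin is wise.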


[power supply schematic] (Source: Alan Marshall)


Here's a guide to each stage of the power supply circuit.



Also:


Real life power line voltages vary from one outlet to the next, and the frequency varies by country. You need to calculate the low line/high load condition to make sure it doesn't drop below regulation, as well as the high line/low load condition to make sure it doesn't exceed the regulator's input voltage limit. These are the generally recommended values:



  • JP: 85 VAC to 110 VAC (+10%, -15%), 50 and 60 Hz

  • US: 105 VAC to 132 VAC (+10%), 60 Hz

  • EU: 215 VAC to 264 VAC (+10%), 50 Hz


operational amplifier - Where does an op amp's input bias current come from if capacitor is used?


In the case of a non-inverting amplifier, if I AC-couple the input signal with a capacitor only, \$C_1\$ (i.e. no resistor to ground), sources (as well as an experiment in the lab) show me that the op-amp input will saturate as the input bias current deposits charge on the terminal without having a return path to ground. This is solved by connecting a resistor (\$R_1\$, as shown in the image below) to ground to create the return path. [The image is reproduced from an answer by "Neil_UK" on one of my previous questions, thanks Neil!]



[non-inverting amplifier schematic]


My questions:


1) How is the input bias current able to continuously deposit charges and saturate the input if I don't use the resistor? The resistor should not allow the DC bias current through, so where is this current flowing from/into?


2) Why is this not a problem for the feedback path to the "-" terminal? There is no resistor to ground in that case, and capacitor \$C_2\$ seems to be causing no issues. The PDFs I've been reading suggest that the feedback behaves like the resistor to ground by acting as the return path; how does that work?



Answer



1) In the case that you do not have R1 in place, you would expect the + input node to be floating. But that is not the whole story. All the pins of op-amps, and in fact of every chip, have ESD protection diodes between the pins and the supplies. So if you apply a signal at the left of C1, there is a way to charge and discharge C1 whenever the input signal goes above or below the supply rails. Leakage currents can also raise or lower the voltage of the input.



"The resistor should not allow the DC bias current through": actually, it should and it must. You found that R1 is needed; even though the DC input bias current can be extremely small, the resistor is still needed to provide a path for it.
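To get a feel for how quickly an unterminated input drifts, note that a floating node obeys dV/dt = I/C. A sketch with purely illustrative numbers (1 nA of bias or leakage current into 10 pF of stray capacitance; real values vary widely by op-amp and layout):

```c
#include <math.h>

/* Drift rate of a floating node charged by a constant current:
   dV/dt = I / C. */
double drift_volts_per_sec(double i_bias_amps, double c_node_farads)
{
    return i_bias_amps / c_node_farads;
}
```

With those numbers, 1 nA into 10 pF gives 100 V/s, so even a tiny bias current pushes the input to a rail almost immediately, which matches the saturation seen in the lab.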


2) The - input's bias current flows from the output of the op-amp through R3. If C2 was not there, that current could also flow through R2, or partly through R2 with the rest through R3; how the current divides depends on the DC voltage at each point.



I suggest you browse through the excellent and free Op Amps for Everyone to learn more about how to use op-amps.


transistors - LED's flash with music


I want to have a few LEDs flash with the intensity of my music. I don't need a color organ; I just want the LEDs to flash in sync with the music. I am amplifying the signal with an LM386 chip and playing music through a 4 Ω, 3 W speaker, and the sound quality is great. The problem is that my LEDs don't turn on at all.

[schematic]


I am using a configuration that many people claim works well, but it's usually done with a TIP31 transistor, not a 2N3904. I tried inserting an op-amp with a gain of 10 before the base of the transistor, with no success.


Does anyone see a problem with my circuit, or know of a reason the LEDs aren't turning on? Any advice would be appreciated.


Thanks



Answer



Odds are that Q1 is smoked. You forgot to add a base resistor to limit the current.


You should probably add a reverse diode on the base (after the resistor) to protect the transistor. The diode is recommended because you are feeding the base with an alternating voltage that swings above and below zero volts. When it swings negative the base-emitter junction is reverse biased. It will probably survive given that you are operating on a low voltage but it's good practice anyway.



Test the transistor with your multimeter's diode-test function. You should read about 0.7 V b-e and b-c with the + lead on the base, and a high reading with the leads reversed.


schematic


simulate this circuit – Schematic created using CircuitLab


Figure 1. Modified circuit.


When you get Q1 going again the next problem will be that you'll probably smoke the LEDs. You have no LED current limiting resistor in your schematic. You might get away with it if your supply voltage is low.


Edit: I couldn't read the supply voltage. I now see it's only 5 V. That won't be enough for four LEDs. As others have suggested, try them in parallel pairs.
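Sizing the missing LED resistor is one line of arithmetic. A sketch with illustrative values (red LEDs at roughly 2 V forward drop, about 0.2 V across the saturated transistor, 20 mA target; substitute your actual LED and supply figures):

```c
#include <math.h>

/* Series resistor for a string of LEDs driven through a saturated
   transistor: R = (Vcc - n * Vf - Vce_sat) / I_led. */
float led_resistor_ohms(float vcc, int n_leds, float vf,
                        float vce_sat, float i_led)
{
    return (vcc - n_leds * vf - vce_sat) / i_led;
}
```

For example, led_resistor_ohms(5.0f, 2, 2.0f, 0.2f, 0.02f) gives 40 Ω for a parallel pair of two series red LEDs at 20 mA from 5 V; note that a negative result means the supply can't drive that many LEDs in series at all, which is exactly the four-LED problem above.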


Friday 27 November 2015

microcontroller - What is a boot loader, and how would I develop one?


I've seen many projects in which an AVR microcontroller uses a bootloader (such as the Arduino), but I don't understand the concept very well.


How can I make a bootloader (for any microcontroller)?


After writing my bootloader, how is it programmed into the microcontroller (like any .hex program burnt into the flash ROM of the AVR, or by some other method)?



Answer



A bootloader is a program that runs in the microcontroller to be programmed. It receives new program information externally via some communication means and writes that information to the program memory of the processor.


This is in contrast with the normal way of getting the program into the microcontroller, which is via special hardware built into the micro for that purpose. On PICs, this is an SPI-like interface. If I remember right, AVRs use JTAG, or at least some of them do. Either way, this requires some external hardware that wiggles the programming pins just right to write the information into the program memory. The HEX file describing the program memory contents originates on a general-purpose computer, so this hardware connects to the computer on one side and the special programming pins of the micro on the other. My company makes PIC programmers among other things as a sideline, so I am quite familiar with this process on PICs.


The important point of external programming via specialized hardware is that it works regardless of the existing contents of program memory. Microcontrollers start out with program memory erased or in an unknown state, so external programming is the only means to get the first program into a micro.



If you are sure about the program you want to load into your product and your volumes are high enough, you can have the manufacturer or a distributor program chips for you. The chip gets soldered to the board like any other chip, and the unit is ready to go. This can be appropriate for something like a toy, for example. Once the firmware is done, it's pretty much done, and it will be produced in large volumes.


If your volumes are lower, or more importantly, you expect ongoing firmware development and bug fixes, you don't want to buy pre-programmed chips. In this case blank chips are mounted on the board, and the firmware has to be loaded onto the chip as part of the production process. In that case the hardware programming lines have to be made available somehow. This can be via an explicit connector, or pogo-pin pads if you're willing to create a production test fixture. Often such products have to be tested and maybe calibrated anyway, so the additional cost of writing the program to the processor is usually minimal. Sometimes when small processors are used, a special production-test firmware is first loaded into the processor. This is used to facilitate testing and calibrating the unit; then the real firmware is loaded after the hardware is known to be good. In this case there are some circuit design considerations: the programming lines must be accessible enough for the programming process to work, without inconveniencing the circuit too much. For more details on this, see my in-circuit programming writeup.


So far so good, and no bootloader is needed. However, consider a product with relatively complex firmware that you want field upgradable or even allow the end customer to upgrade. You can't expect the end customer to have a programmer gadget, or know how to use one properly even if you provided one. Actually one of my customers does this. If you buy their special field customizing option, you get one of my programmers with the product.


However, in most cases you just want the customer to run a program on a PC and have the firmware magically updated. This is where a bootloader comes in, especially if your product already has a communications port that can easily interface with a PC, like USB, RS-232, or ethernet. The customer runs a PC program which talks to the bootloader already in the micro. This sends the new binary to the bootloader, which writes it to program memory and then causes the new code to be run.


Sounds simple, but it's not, at least not if you want this process to be robust. What if a communication error happens and the new firmware is corrupt by the time it arrives at the bootloader? What if power gets interrupted during the boot process? What if the bootloader has a bug and craps on itself?


A simplistic scenario is that the bootloader always runs from reset. It tries to communicate with the host. If the host responds, then it either tells the bootloader it has nothing new, or sends it new code. As the new code arrives, the old code is overwritten. You always include a checksum with uploaded code, so the bootloader can tell whether the new app is intact. If not, it stays in the bootloader, constantly requesting an upload, until something with a valid checksum gets loaded into memory. This might be acceptable for a device that is always connected, possibly with a background task on the host that responds to bootloader requests. This scheme is no good for units that are largely autonomous and only occasionally connect to a host computer.


Usually the simple bootloader described above is not acceptable since there is no fail-safe. If a new app image is not received intact, you want the device to continue running the old image, not to be dead until a successful upload is performed. For this reason, there are usually two special modules in the firmware, an uploader and a bootloader. The uploader is part of the main app. As part of regular communications with the host, a new app image can be uploaded. This requires memory separate from the main app image, like an external EEPROM, or a larger processor so that half the program memory space can be allocated to storing the new app image. The uploader just writes the received new app image somewhere, but does not run it. When the processor is reset, which could happen on command from the host after an upload, the bootloader runs. This is now a totally self-contained program that does not need external communication capability. It compares the current and uploaded app versions, checks their checksums, and copies the new image onto the app area if the versions differ and the new image's checksum checks out. If the new image is corrupt, it simply runs the old app as before.
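The reset-time decision just described can be sketched in a few lines. All names here are hypothetical, and a real bootloader would be erasing and writing flash sectors rather than RAM buffers:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simple additive checksum over an image (illustrative; a CRC is better). */
static uint16_t checksum(const uint8_t *img, size_t len)
{
    uint16_t sum = 0;
    while (len--)
        sum += *img++;
    return sum;
}

/* Reset-time check: install the staged image only if its checksum verifies
   and it differs from the running app. Returns 1 if installed, 0 if the
   old app is kept. */
int maybe_install(uint8_t *app, const uint8_t *staged, size_t len,
                  uint16_t staged_sum)
{
    if (checksum(staged, len) != staged_sum)
        return 0;                 /* corrupt upload: keep running the old app */
    if (memcmp(app, staged, len) == 0)
        return 0;                 /* same version: nothing to do */
    memcpy(app, staged, len);     /* copy the new image over the app area */
    return 1;
}
```

The key property is the one the text stresses: a corrupt upload leaves the old app untouched, so the device is never bricked by a bad transfer.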


I've done a lot of bootloaders, and no two are the same. There is no general purpose bootloader, despite what some of the microcontroller companies want you to believe. Every device has its own requirements and special circumstances in dealing with the host. Here are just some of the bootloader and sometimes uploader configurations I've used:



  1. Basic bootloader. This device had a serial line and would be connected to a host and turned on as needed. The bootloader ran from reset and sent a few upload requests to the host. If the updater program was running, it would respond and send a new app image. If there was no response within 500 ms, the bootloader would give up and run the existing app. To update firmware, therefore, you had to run the updater app on the host first, then connect and power on the device.


  2. Program memory uploader. Here we used the next size up PIC that had twice as much program memory. The program memory was roughly divided into 49% main app, 49% new app image, and 2% bootloader. The bootloader would run from reset and copy the new app image onto the current app image under the right conditions.

  3. External EEPROM image. Like #2 except that a external EEPROM was used to store the new app image. In this case the processor with more memory would have also been physically bigger and in a different sub-family that didn't have the mix of peripherals we needed.

  4. TCP bootloader. This was the most complex of them all. A large PIC 18F was used. The last 1/4 of memory or so held the bootloader, which had its own complete copy of a TCP network stack. The bootloader ran from reset and tried to connect to a special upload server at a known port at a previously configured IP address. This was for large installations where there was always a dedicated server machine for the whole system. Each small device would check in with the upload server after reset and would be given a new app copy as appropriate. The bootloader would overwrite the existing app with the new copy, but only run it if the checksum checked. If not, it would go back to the upload server and try again.

    Since the bootloader was itself a complicated piece of code containing a full TCP network stack, it had to be field-upgradeable too. The way we did that was to have the upload server feed it a special app whose only purpose was to overwrite the bootloader once it got executed, then reset the machine so that the new bootloader would run, which would cause the upload server to send the latest main app image. Technically, a power glitch during the few milliseconds it took the special app to copy a new image over the bootloader would be an unrecoverable failure. In practice this never happened. We were OK with that very unlikely chance, since these devices were parts of large installations where there were already people doing maintenance on the system, which occasionally meant replacing the embedded devices for other reasons anyway.




Hopefully you can see that there are a number of other possibilities, each with its own tradeoffs of risk, speed, cost, ease of use, downtime, etc.


voltage measurement - How can I calibrate my sound card based oscilloscope?


I'm using Soundcard Oscilloscope software that presents as below



Soundcard Oscilloscope software display


You'll notice that the display has a grid, and in the example is set for 100mV/div. Since it's using the sound card integral to my motherboard, there is no calibration of this display or the line in socket, so the divisions could actually be anything.


I have a digital multimeter as well that can measure AC style voltages in the 10 - 400 Hz range. How can I calibrate it, at least roughly? My only thought was to build a sine wave oscillator to produce 100 Hz @ 1 V RMS. That way I can confirm the amplitude with my multimeter, feed it through line-in to the software and then adjust for the reading.


Is this likely to work, or is there some other way of calibrating PC based soft oscilloscopes?


EDIT:


Following some comments, I tested my multimeter against a range of sine wave frequencies. These were line outs generated via Audacity software, with the TONE command set to a volume of 0.8. The red dot is @ 50 Hz, so this should be ideal for the meter and forms a baseline voltage reading. The response seems reasonable over 10 Hz - 10 kHz, and even better across 50 Hz - 1 kHz.


DMM calibration graph



Answer



The only way to truly calibrate an oscilloscope is with a calibrated function generator. A quick internet search will demonstrate that these things are usually not cheap, of course.


...but you don't need perfect, just "reasonable"...



The best way to do this, then, is to:



  1. Build a variable frequency square-wave generator (CMOS 555 FTW)

  2. Use your DMM to measure the output peak voltage with freq set to ~DC.

  3. Bump the frequency to ~1k and adjust the scope.


Provided the square-wave gen's impedance is fairly consistent over all frequencies used, the peak voltage that appears on the scope should reflect the DC voltage fairly accurately. Knowing this, it's just a matter of getting the scope to agree with the DMM.
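Since the correction is a single scale factor, the adjustment can be sketched in a couple of lines (the readings below are hypothetical examples, not measured values):

```python
# Derive a calibration factor for the soundcard scope from a DMM reference.
# Both readings below are hypothetical, for illustration only.
dmm_dc_volts = 1.02        # peak voltage of the square wave, measured at ~DC
scope_peak_volts = 0.87    # peak the scope software reports at ~1 kHz

# Multiply every scope reading by this factor (or adjust the software's
# V/div setting by it) so the scope agrees with the DMM.
scale = dmm_dc_volts / scope_peak_volts
print(f"scale factor = {scale:.3f}")
```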


If you want more confidence, repeat the process with your gen powered from other voltages, then compare the scope output to what you expected.


Lastly, I would not rely on the AC function of any DMM to be useful for something like this. Those things are usually specified for 50 or 60 Hz specifically, and will read inaccurately at other frequencies. And, in case you're thinking you've got a workaround for that, you should know that your sound card will (read: should) take to 50-60 Hz like an NSA vending machine to a crinkled Won.


Thursday 26 November 2015

voltage - Explain in layman's terms Vgs and Vgs(th) of MOSFETs


I'm trying to understand \$V_{GS}\$ of a MOSFET transistor. From what I understand, \$V_{GS}\$ normally stands for the gate-to-source breakdown voltage, but other than that I lack an understanding. \$V_{GS(th)}\$ is the threshold voltage at which the MOSFET will turn on, so I have some questions about the threshold voltage:




  1. What happens if I go over the max threshold as told by the data sheet?




  2. What happens if I'm under it?






Answer



Vgs is just the voltage from gate to source (with the red lead of the multimeter on the gate and the black one on the source). Everything else is from context.


The Absolute Maximum Vgs is the maximum voltage you should ever subject the MOSFET to under any conditions (stay well away). Usually the actual breakdown is quite a bit different (borrowing from this datasheet):


enter image description here


Vgs(th) is the voltage at which the MOSFET will 'turn on' to some degree (usually not very well turned on). For example, it might be 2V minimum and 4V maximum for a drain current of 0.25mA at Tj = 25°C (the die itself is at 25°C). That means that if you want your 20A MOSFET to really turn on fully (not just conducting 250uA) you need a lot more voltage than 4V to be sure about it, but if your Vgs is well under about 2V you can be pretty sure it's well turned off (at least around room temperature).


Rds(on) is always measured at a specified Vgs. For example, it might be 77m\$\Omega\$ with Vgs = 10V and Id = 17A and Tj = 25°C. That 10V is the Vgs you need to feed your MOSFET for it to be happily turned on so it looks like a very low resistance.
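Those example figures also make it easy to estimate how much heat the MOSFET dissipates when fully on; a quick sketch using the numbers above:

```python
# Conduction loss of a MOSFET used as a switch: P = Id^2 * Rds(on).
# Example figures from the datasheet numbers quoted above.
rds_on = 0.077   # ohms, at Vgs = 10 V
i_d = 17.0       # amps

p_loss = i_d ** 2 * rds_on
print(f"conduction loss = {p_loss:.1f} W")  # roughly 22 W at full current
```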


Vgs also comes up when you want to know the gate leakage. Igss might be +/-100nA at Vgs = +/-20V and Tj = 25°C.


digital logic - Recommended voltage/current for dry contact input?


What would be a reasonable value for voltage and max current to use in conjunction with a remote dry contact output so that it can be returned to my MCU (through an isolator)? I plan on using this input for both dry contact and open collector (in other words, I plan on having a pull-up on my board). I don't want it too high, so that a standard open collector output (30V/50mA-100mA) can work with it as well.



Answer



A dry contact can sometimes require what is known as a wetting current. This means that when the contact closes, a current is available to flow through the contact of a certain amount (usually specified by the vendor). This is usually achieved by the load and, in the case of a relay, it might be a few milliamps to several amps. The current has the effect of cleaning the contact.


When the contact is used solely for signalling, oxide layers can form and although the contact may appear to be closed, it registers an open or partially-closed circuit. Normally, vendors (as in the case of relays) suggest a wetting current, and this wetting current is typically supplied by a pull-up resistor to a local DC supply (maybe 5V). The current will go some way towards ensuring the contact remains clean but the supply voltage is also important - if too low, no matter what potential current may be available, the contact oxides that have built up remain impenetrable.
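Sizing that pull-up is just Ohm's law; a small sketch, assuming a 5 V local supply and a vendor-recommended wetting current of 6 mA (example figures only):

```python
# Size a pull-up so a closed dry contact carries at least the wetting current.
v_supply = 5.0          # volts, local pull-up supply (example)
i_wetting = 0.006       # amps, vendor-recommended wetting current (example)

# Contact drop assumed negligible when closed; any larger resistor
# would starve the contact of its wetting current.
r_pullup_max = v_supply / i_wetting
print(f"pull-up must be <= {r_pullup_max:.0f} ohms")
```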


Here are a few words from wiki on wetting current. Below is a quote from a link on that page: -




Wetting current is the minimum current needing to flow through a mechanical switch while it is operated to break through any film of oxidation that may have been deposited on the switch contacts.[12] The film of oxidation occurs often in areas with high humidity. Providing a sufficient amount of wetting current is a crucial step in designing systems that use delicate switches with small contact pressure as sensor inputs. Failing to do this might result in switches remaining electrically "open" due to contact oxidation.



Here is a thread from a control.com site giving users' experience of the problem. In short - you need to do your homework on the contact if it's exposed to humidity.


If you read through this discussion, 6mA is mentioned, BUT there is no excuse for not doing your homework on the dry contact and finding out what the manufacturer says.


I'd say some dry contacts will be OK at well under 0.1mA but some may not be OK at 10mA. A sealed contact such as one in a reed relay will nearly always be good for low micro amps.


Pluggable terminal connector long term reliability


I'm investigating the issues related to long term reliability of pluggable terminals of the "Phoenix" style. These terminals are available in a range of pitches and configurations (right angle, straight...). See the picture as an example.


Pluggable terminal connector


In my application I need to supply a control card with power in excess of 10A. Currently the wire coming from the power supply is simply stripped, put into the terminal plug, and then fastened with the terminal screw. Is this method considered safe and reliable over many years of use? The environment is clean and stable, with no relevant vibrations. Would it be better to pre-tin the stripped wire before screwing it into the terminal? Or should I use single wire terminals like the ones in the picture? Or would it be better to use a pluggable connector which is itself crimpable? I am aware that for reliability reasons, crimping should be preferred to soldering. Am I right?


Wire terminals




circuit analysis - How to bias a voltage for an ADC


I am part of a project that is implementing a power storage system. The storage device voltage must be monitored in order to direct power. The storage device voltage should remain between 12 and 36 V. The project uses a TI TMS320F28027 MCU which operates at 3.3 V. How can the 12 to 36 V be mapped onto 0 to 3.3 V for the MCU?


I posted a similar question here but it specified that the voltage would range from 0 to 60 V. The difference for this question is how to bias the voltage appropriately.



I also posted another question about biasing the output of a current sensor for an ADC, but it involved biasing a balanced voltage around 0 V. I am having trouble adapting the answer to this problem. Diagram from answer:


diagram from answer
Figure 1. Diagram from previous answer


An attempt to emulate the design methodology would be:


24 V maps to 1.65 V
38 V maps to 3.3 V
10 V maps to 0 V


The most basic circuit model:


enter image description here


How to design the bias?




Answer



Rearranging:



  • 38 V maps to 3.3 V

  • 24 V maps to 1.65 V

  • 10 V maps to 0 V


schematic


simulate this circuit – Schematic created using CircuitLab


Figure 1. An 11.5:1 potential divider.



The simplest solution is to use a potential divider with a ratio of 38:3.3 or 11.5:1. This would result in:



  • 38 V maps to 3.3 V

  • 24 V maps to 2.08 V

  • 10 V maps to 0.868 V


The 0.868 V offset can be removed in software. Again, you lose a little resolution with this approach.
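The arithmetic behind those three mappings can be checked in a few lines:

```python
# Map the input voltage to ADC volts through an 11.5:1 divider,
# then remove the constant offset (the 10 V point) in software.
RATIO = 38 / 3.3          # ~11.5:1, so 38 V lands exactly on 3.3 V

def adc_volts(v_in: float) -> float:
    """Voltage seen at the ADC pin for a given input voltage."""
    return v_in / RATIO

for v in (38, 24, 10):
    print(v, "V ->", round(adc_volts(v), 3), "V")
# 38 V -> 3.3 V, 24 V -> 2.084 V, 10 V -> 0.868 V
```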


If a negative voltage supply is available then the offset can be removed.


schematic


simulate this circuit



Figure 2. With a negative rail available the offset at minimum input voltage can be removed.


How:



  • The span is 38 - 10 = 28 V.

  • This has to be scaled to 3.3 V so a divider ratio of 28 / 3.3 = 8.5:1. Let's use 7.5k and 1k to give us the required ratio.


Now we need to figure out the negative reference voltage.



  • At 10 V in, Vout will be 0 V. The current through R1 is then 10 V / 7.5 k = 1.33 mA, all of which must flow through R2, so we need to hold Vref at \$ - \frac {R_2}{R_1} \times 10 = -1.33\ V \$.



So, R1 = 7.5k, R2 = 1k, Vref = -1.33 V should do the trick.


I'll leave it to you to work out how to create the reference voltage.


mechanical assembly - I twisted a L200CV in Pentawatt package; what kind of damage can I expect?


I was installing a heatsink which didn't quite fit and after installing it, I noticed that I've twisted the L200CV about 5° when looking from above the chip. What kind of damage, if any, can I expect?



Answer



If you bent the leads after the package was soldered in, the solder joints may have cracked. It is worth touching them up. Make sure you don't create dry joints: use flux and a reasonable temperature.



The regulator itself will be fine. Leads are meant to be bent.


calibration - How viable is it to just use 1% resistors and calibrate out the error?


At the moment, I use 0.1% resistors to get accurate voltage measurement through a voltage divider. However, the cost is high, so I was thinking of using 0.5% or 1% resistors and calibrating out the error in software by using a precision voltage reference during production. Has anyone done this successfully? What pitfalls might I encounter?



Answer



So you've got:


          R_x         R_fixed
Vcc -----^v^v^----+----^v^v^------- Gnd
                  |
                  |
                  +--- V_sensed --- ADC input

Rx is some unknown resistance (probably a sensor of some kind). And you're using R_fixed at 0.1% right now in order to effectively calculate R_x, but you want to use a cheaper fixed resistor with a lower tolerance of perhaps 1%. In doing so you want to perform some kind of calibration during production to correct for the increased error, is that right?


The way you end up doing this is putting a byte in EEPROM (or some other non-volatile memory) that acts as an "offset" in your calculation, and it's a perfectly viable thing to do. The thing is, it's going to cost you some time during production to do the calibration activity. In order to do the calibration, you'll need one of those 0.1% resistors (call it R_cal) of nominally comparable value to your 1% resistor to substitute into the circuit for R_x. Measuring V_sensed, you can infer more precisely the value of R_fixed (i.e. to something like 0.2%).


If R_cal and R_fixed are nominally the same value, you would expect V_sensed to be equal to Vcc / 2. You would store the measured deviation from Vcc / 2 as a calibration offset byte, and always add it to V_sensed as perceived by your ADC.


The pitfall, as I see it, is that there is a bunch of work involved in doing the measurement and subsequently in storing the value. Another pitfall to consider is that temperature can play a role in causing a resistance to deviate from its nominal value, so you'll want a reasonably well temperature-controlled calibration environment. Also, don't forget to use calibrated measurement equipment, as that's another potential source of additive error. One last pitfall I can think of is that the calibration byte should be stored in units of the LSB of your ADC (so if you have a 12-bit ADC, the units of the calibration offset byte should be "Vcc/2^12 volts").
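As a sketch of that last point, converting a measured deviation into an offset stored in ADC counts (the supply voltage and reading below are made-up examples):

```python
# Convert a measured deviation from the ideal mid-point into an offset
# expressed in ADC LSBs, suitable for storing in EEPROM.
ADC_BITS = 12
VCC = 3.3                       # volts (example supply)
lsb = VCC / 2 ** ADC_BITS       # volts per ADC count

v_ideal = VCC / 2               # expected when R_cal == R_fixed (nominal)
v_measured = 1.6589             # hypothetical reading during calibration

# Signed correction, in counts, to add to every raw ADC reading.
offset_counts = round((v_ideal - v_measured) / lsb)
print(offset_counts)
```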


Edit


If you are using two fixed resistors to divide a large voltage down to a lower scale as follows:


        R1_fixed       R2_fixed
V_in -----^v^v^----+----^v^v^------- Gnd
                   |
                   |
                   +--- V_sensed --- ADC input

Re-edited Section


So now you want to use a precision voltage reference (call it V_cal) to stimulate V_in during a calibration step in production. What you've got there is in theory:


V_sensed = V_predicted = V_cal * R2_fixed / (R1_fixed + R2_fixed) = V_cal * slope_fixed

But what you've got in reality is:


V_sensed = V_measured = V_cal * R2_actual / (R1_actual + R2_actual) = V_cal * slope_actual


In effect you have a different transfer function slope in reality than what you would predict from the resistor values. The deviation from the predicted divider transfer function will be linear with respect to the input voltage, and you can safely assume that 0V in will give you 0V out, so making one precision voltage reference measurement should give you enough information to characterize this linear scale factor. Namely:


V_measured / V_predicted = slope_actual / slope_fixed
slope_actual = slope_fixed * V_measured / V_predicted

And you would use slope_actual as your calibrated value to determine the voltage in as a function of the voltage measured.
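Numerically, with made-up example values, the whole calibration looks like this:

```python
# One-point slope calibration of a resistive divider.
# All component values and readings here are illustrative examples.
R1, R2 = 75_000, 10_000                  # nominal divider, 8.5:1
V_CAL = 25.0                             # precision reference applied to V_in

slope_fixed = R2 / (R1 + R2)             # predicted transfer ratio
v_predicted = V_CAL * slope_fixed        # what we expect at the ADC
v_measured = 2.912                       # hypothetical actual ADC reading

slope_actual = slope_fixed * v_measured / v_predicted

def v_in(v_sensed: float) -> float:
    """Recover the input voltage using the calibrated slope."""
    return v_sensed / slope_actual

# At the calibration point the recovered voltage matches the reference.
print(round(v_in(v_measured), 3))  # -> 25.0
```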


below courtesy of @markrages


To get the actual slope sensitivity to resistor values requires partial differentiation:


alt text


Wednesday 25 November 2015

signal integrity - Sensor question for low freq


A floating DC voltage output sensor will be powered locally, and the signal will be sent 200 meters outdoors to a data acquisition board. I haven't received the sensor yet and don't have the datasheet. But my question is about something else.


For such a long transmission I was first thinking of using a differential line driver at the sensor output, or converting the voltage signal to current and sending it as current, or sending it as digital, etc.


But I can use a differential-input data-acquisition board that has the following architecture:


Omitting the voltage divider effect, if I use this module and data acquisition as in the diagram below, would that be adequate to eliminate common-mode noise issues or capacitive coupling noise issues? Or is a differential line driver or current conversion still needed?


to be updated soon...




charger - LiIon tp4056 with protection, avoid drop out voltage at output


I have noticed some behaviour in TP4056 boards (with protection circuit) that I don't like at all (see schematics at the end).



When you use the TP4056 module with the protection board, you will get a dropout voltage of 0.4 - 0.7V at the output.


That is because of the forward voltage of the body diodes of the MOSFETs in the FS8205A.



  • DW01A pin 3 is for overcharge control and is connected to pin 5 in FS8205A

  • DW01A pin 1 is for discharge control and is connected to pin 4 in FS8205A. Overcurrent protection also uses DW01A pin 1


The TP4056 already protects the battery from overcharge, so what about a bypass between FS8205A's pin 6 or 7 and FS8205A's pin 1 or 8 (see datasheet)?




  • We will lose the overcharge protection of the DW01A, but we already have the TP4056's.





  • We will change the overcurrent protection: the Ron of the FS8205A's MOSFETs that the DW01A sees will be halved, so the overcurrent threshold changes from 2.5A to 5A (I think both of them are useless; 2.5A is usually a lot for our projects). You can "repair" it by putting a resistance of 30mOhm or bigger in place of the direct bypass.




With this modification we will get the real battery voltage at our output, only losing the overcurrent protection (more or less), won't we?


I hope you understand me, what do you think ?


Regards,


Schema


DW01A reverse diodes




Answer



you will get a dropout voltage of 0.4 - 0.7V in the output


I doubt that. Actually I'm quite sure it is simply not true.


The DW01A datasheet (page 8, section 11.1) confirms this:


Normal condition ... M1 and M2 are both turned on.


Also: Most phone batteries have a protection circuit similar to what you show. On phones with removable battery the protection circuit is inside the battery. Accepting a 0.4 V drop due to this protection would severely limit the battery life.


So I think you're forgetting that both MOSFETs must be on when the battery is in use. Perhaps you're thinking that only one MOSFET is on at any time. This is not the case!


Why use two MOSFETs in series then?


Well, each MOSFET can only block current in one direction due to its drain-source (body) diode. So two MOSFETs in anti-series have to be used so that current can be blocked in both directions.


I think you should simply do what everyone else does and that is simply this:



schematic


simulate this circuit – Schematic created using CircuitLab


This is how it is supposed to be used!


The overcharge protection of the DW01A is not the same as what the TP4056 has! You want both. The TP4056 charges until the battery is full and then it stops charging. The DW01A's protection is there to prevent fire/smoke when the TP4056 does not stop charging (for whatever reason). Some cheap Chinese gadgets rely on the DW01A to stop the charging but actually that's a really bad idea. The DW01A stops the charging at a much higher voltage than any decent charger. So the battery will be overcharged all the time and wear out more quickly.


matlab - Changing LTspice export from Cartesian to voltage values when using AC analysis


I'm trying to export a waveform from LTspice into an Excel document to be later graphed using Matlab. I am simulating a simple series bandpass filter, and when I export the voltage across a capacitor, it is not simply putting the voltage values into the second column. I am only given the option to use polar or Cartesian coordinates. Then when I am parsing the data in Matlab, it is not populating the variables because the second column is occupied by coordinates. Can I change this export format somehow?
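One workaround, rather than changing the export format, is to post-process the Cartesian data: each AC-analysis sample is just a complex number, and its magnitude and phase fall straight out of `abs()` and `atan2`. A sketch, assuming the exported cell holds a `(re,im)` pair:

```python
import cmath

# Convert one Cartesian LTspice-style AC sample to magnitude/phase.
# Assumes the exported cell looks like "(1.234e-01,-5.678e-02)".
def to_polar(cell: str):
    re, im = (float(x) for x in cell.strip("()").split(","))
    z = complex(re, im)
    return abs(z), cmath.phase(z)   # magnitude in volts, phase in radians

mag, phase = to_polar("(3.0,4.0)")
print(mag)  # 5.0
```

The same one-liner works in Matlab with `abs()` and `angle()` on a complex column.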





Sharp Infrared sensor-Filtering supply ripple




I am using Pololu's sharp infrared sensor board part# GP2Y0A60SZLF and I am trying to minimize the noise caused by the sensor on the voltage supply as much as possible.


I have made a filter and voltage regulator circuit as shown below. The IR sensor and the filter PCB are directly connected together. The output is carried via a 7" long wire from the filter circuit to a main PCB that has another RC filter (R2, C10).

filter schematic

Here is the scope shot at the 6-12 V supply terminal. The spikes below are 1 kHz, 46 mV p-p.

Scope Probe VDD


What do you suggest to clean the supply line below?


I am also including the waveform on the sensor output.


Sensor Output



Answer



This sensor draws something like 300 mA in bursts; you can likely see that on the supply. I have had a similar problem when driving servos, which made huge spikes on the power line when powered from a lab PSU.


I suggest to:




  • power this all with a battery with small internal resistance, i.e. alkaline AA cells, or LiPo

  • increase c3 to at least 470uF or 1000uF

  • (might try LC filter on the power lines)


The output signal is not bad with 30mV of noise. Your idea of the RC filters looks correct. I'd also try a bigger C and smaller R, like an R of not more than 1k and a C of some microfarads.
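For reference, the corner frequency of such an RC low-pass is \$f_c = 1/(2 \pi R C)\$; with the suggested values of about 1k and a few microfarads:

```python
import math

# Cutoff (corner) frequency of a simple RC low-pass filter.
def rc_cutoff(r_ohms: float, c_farads: float) -> float:
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Example values in the spirit of the suggestion: R = 1k, C = 4.7 uF.
print(round(rc_cutoff(1_000, 4.7e-6), 1), "Hz")  # ~34 Hz, well below 1 kHz
```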


datasheet - Where can I search components according to characteristics, not model?


I've been disconnected from electronics for 2 years, and I don't remember how I found and learned about the components I used. I want to get to know a few more components, but I can't manage to... I was wondering if there is some search engine or database where components are classified according to their characteristics. I never came across anything like that, only books... but if there are books, there are certainly sites for it... I just can't find them...



Is there a component search engine where I can search according to the characteristics of a certain component? For example, the gain and power of an op-amp, or an integrated circuit for ADC and DAC, a certain gate current of a FET, etc.?




Tuesday 24 November 2015

How do I determine the maximum current for charging a li-ion battery?


I have a cell phone that has a 1500 mAh 3.7 V battery. It comes with a 700 mA charger but I've successfully used a 1 A charger with no problems. I'm now trying to make my own multi-device charging station and my first problem is figuring out the maximum amount of current I can use at 5 V.



Answer



If you are charging the battery through the phone then this will have the charge controller circuitry between the 5V charge supply and the battery. You CANNOT/MUST NOT just connect a battery pack to a power supply and expect it to charge without fire and or explosion.


The charge controller in the phone will limit the current supplied to the battery pack to be within the limits specified by the battery manufacturer to ensure that the battery is not damaged. Supplying the phone from a 5V source that has a higher current capability will not make the battery charge any faster. If it did then you would run the risk of damaging the connector on the phone or even melting the tracks on the PCB within the phone. Small USB connectors that I have used have a contact rating of up to 1A on the power lines.



If you want to build a charging station to charge multiple phones at a time then you need to have a power source that can supply up to the maximum charge current taken by the phone down each of the charging leads. These can all be in parallel but I would place a diode in each of the positive supply lines to prevent the possibility of any current flowing from the battery back to the charger.


battery charging - Solar powered single cell LiFePo4 charger circuit


As a follow-up to this question I've come up with this schematic for a solar powered battery charger:



enter image description here


Subcircuits, from left to right:



  • Solar cell (just a voltage source), 12V, 10W.

  • Buck regulator with current limit set to about 300 mA,

  • output voltage selection: 3.5V (float) or 4V (fast charging)

  • LiFePo4 battery with charge enable transistors (M2 and M3)

  • 3V LDO for the application circuit which includes an MCU


M2 and M3 are used to enable charging. If M2 is off, the battery can supply the application through M2's internal body diode. The gate voltage of M2 can rise above 3V, which is the reason why M3 exists.



M1 is used to alter the FB voltage divider. When M1 is off, the buck regulator's output voltage is 3.5V. When it is on, the output voltage is 4V.


I have selected the regulators as shown above mainly because they were available in LTspice, but also because U1 (the buck regulator) has current limiting and because U2 (the LDO) has a low dropout voltage. I don't need a shutdown pin on that one.


The design suggested in the linked question also includes a signal to detect if solar power is available, but I have omitted that in this schematic.


I also omitted the battery protection IC (BQ29706) because I could not simulate it. It will disconnect the battery from the charger when the battery voltage reached 3.85V (in fast charging mode), so that the buck regulator's voltage rises to 4V. With a simple voltage divider this can be detected by the MCU (not shown), which will then let the battery relax to 3.5V.


I chose a LiFePo4 battery because I want the circuit to operate at both low (say -20°C) and high temperatures (60°C). On average it will draw very little current and I want it to work over a full year. Peak current is 300 mA for a few milliseconds every now and then, and less than 1 mA while sleeping.


Specific questions:



  • Are there alternative parts that you would recommend? Is anything wrong with the parts I chose?

  • If I wanted to add MPPT to this, how could I get finer control over the solar cell current? The selectable voltage divider I'm using now is a bit coarse.

  • Apart from the battery protection, what other means of protection should I consider?


  • Is it safe to add another transistor next to M1, below R2, in order to shave away those 20µA through the fixed part (R1 + R2) of the voltage divider?




Arduino MEGA 2560 R3 pin 13 LED is always on, unless I connect a multi-meter


I have a problem with the on-board LED staying lit up despite running a blink program.


I was running the blink program below, which writes HIGH/LOW to pin 13, but the light was solid. As soon as I connected the circuit between ground and pin 13 with the multi-meter, the LED started blinking.


Here's a video showing my setup. The video shows the "L" LED lit solid. The digital multi-meter (connected to pin 13 and GND) is switched from OFF to DCV, at which point the LED starts to blink. The wire connections are removed, and the LED is again lit up solid.



When the pin is set HIGH, the voltage reading is 5.07V. However, when the pin is set LOW, the voltage reads 0.61V whereas other pins read 100 times less, around 6mV.


void setup() {
  pinMode(13, OUTPUT);
}

void loop() {
  digitalWrite(13, HIGH);
  delay(2000);
  digitalWrite(13, LOW);
  delay(2000);
}


The question is why is this happening? Is my board bugged?


Reference schematic linked here. "arduino-mega2560_R3-sch.pdf"




Does using multiple resistors in series instead of a single resistor have any advantage? Is the heat produced by resistors of different wattages different?


I have two doubts; please answer them separately. :)


1) I need a resistance of 'X'; is it better to use a single resistor of value 'X' or multiple resistors with r1+r2+r3 = 'X'? What I mean is: does using multiple resistors in series instead of a single resistor have any advantage? Will it keep the resistors from overheating?



2) Consider a 1W 2k2 resistor and a 1/4W 2k2 resistor. Is the heat produced by resistors of different wattages different? Which resistor will get hotter under the same conditions (i.e., the same current and voltage applied to both)?


Regards, Kiran.




Monday 23 November 2015

components - What will happen if I use a voltmeter instead of an oscilloscope in a Chua's circuit?


Suppose I use a voltmeter instead of an oscilloscope in Chua's circuit. Voltmeters only measure the average voltage over a short time period.


Questions (please consider both DC and AC voltmeter, because I don't know which one should be used...):



  1. So does the voltmeter show a "coordinate" of the centroid of the double scroll attractor (or any other pattern formed), since it is an "average"?

  2. What voltmeter reading corresponds to the double scroll? If the double scroll is symmetric about the origin, should the voltmeter read zero? Will the voltmeter suddenly jump to zero or gradually decline to zero?

  3. Practically, is there any other difference between a voltmeter and an oscilloscope?




Are termination resistors needed for UART, I2C and SPI?


I see that termination resistors might be needed for some communication lines. What about UART, I2C and SPI - are they needed? If so, how do we choose their values?



Answer



Take 50 MHz clock SPI.


50 MHz has a fundamental wavelength of 6 metres, but you could argue that due to the fast rising edges everything up to the 5th harmonic is potentially capable of disrupting the shape too much if terminations are not applied. So that's a wavelength of 1.2 metres.


On the other hand, on PCBs the speed of signals is about 60% of the speed of light, so that fifth harmonic has a wavelength of 0.72 metres.



A sensible rule of thumb suggests that you need to consider adding a termination if the transmission line is longer than about one-tenth of a wavelength - so if your PCB tracks are more than about 7 cm long you should consider a termination. But you'll probably get away without one for a few more centimetres, and if the edges your chip generates are not that fast, maybe you can use the 3rd harmonic as the benchmark rather than the fifth.
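That rule of thumb is easy to capture numerically (0.6 velocity factor and the 5th harmonic, as assumed above):

```python
# Estimate the trace length above which a termination should be considered,
# using the lambda/10 rule of thumb.
C = 3.0e8  # speed of light in vacuum, m/s

def critical_length(f_clock_hz: float, harmonic: int = 5,
                    velocity_factor: float = 0.6) -> float:
    """Trace length (metres) at one-tenth of the highest harmonic's wavelength."""
    wavelength = velocity_factor * C / (f_clock_hz * harmonic)
    return wavelength / 10

print(round(critical_length(50e6) * 100, 1), "cm")  # 7.2 cm for 50 MHz SPI
```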


The same argument applies for UART transmission but it's a little different with I2C because of the pull-up resistors and the inability to drive very fast edges due to the output device being active only in the low state.


led - How can I effectively reduce the voltage needed to activate a transistor?



I have built a circuit which basically connects the line out (audio output) of a music playing device to a set of LEDs (actually a huge strip of around 200 LEDs), so they flash in time with the music (from internet tutorials - I'm a bit of a beginner).


schematic


simulate this circuit – Schematic created using CircuitLab


My circuit works very well using my laptop as the audio device (connecting my circuit to the headphone jack on it). But when I use something smaller such as an iPod, the lights barely turn on at all.


I've tried using a Darlington Pair (below), but that makes the issue worse. This is why I think the issue is that the audio line out is not reaching the 0.7 volts across the base and emitter that the TIP31C transistor needs to activate (the Darlington Pair means it now needs 1.4 volts to activate).


schematic


simulate this circuit


From my research, it looks like using an op amp might be the way forward, to amplify the audio line out signal before the TIP31C transistor. Would somebody be able to suggest one, and which inputs I should connect to?


I've also read that Germanium transistors only need 0.3v across the base and emitter to activate, would that be useful?



Answer




In short: you can't. The 0.6V threshold for a BJT is a consequence of the physics of silicon P-N junctions.


A germanium transistor would work, but you will have to mail-order it, and it will be expensive.


A rail-to-rail op-amp indeed may be an option.


However, another solution is to make the voltage of your audio signal higher, rather than making the transistor threshold lower. You could do this two ways:


Make the emitter voltage lower


schematic


simulate this circuit – Schematic created using CircuitLab


Now, the audio signal is 0.6V higher than the emitter. Of course, you'd have to come up with a way to get a 0.6V power supply, and probably adjust it to get just the action you want. There's another way...


Add a DC bias to the signal


schematic



simulate this circuit


Here you can adjust the pot to add some amount of DC bias to the signal to get the sensitivity you desire. The capacitor serves to isolate this DC from your audio source while allowing the AC signal to pass. This is called capacitive coupling.


R4 exists to limit the base current in case R1 is adjusted too far. There's no point in biasing the signal above 0.7V since that would mean the transistor is always on, so R4 also makes the useful adjustment range of R1 wider.
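As a sanity check on that adjustment range, here is a rough sketch of the open-circuit bias voltage at the pot wiper. The supply rail and pot value are assumptions for illustration, not values taken from the schematic:

```python
# Rough estimate of the DC bias added to the audio signal.
# V_SUPPLY and R1's total resistance are assumed values -- the
# real numbers come from the CircuitLab schematic above.
V_SUPPLY = 9.0   # assumed supply rail feeding the bias pot (V)

def bias_voltage(wiper_fraction, v_supply=V_SUPPLY):
    """Open-circuit wiper voltage of the bias pot (fraction 0.0..1.0)."""
    return v_supply * wiper_fraction

# Only the range below ~0.7 V is useful, since biasing above the
# base-emitter threshold would leave the transistor always on.
for frac in (0.02, 0.05, 0.07):
    print(f"wiper at {frac:.0%}: bias = {bias_voltage(frac):.2f} V")
```

In practice R4 and the base current load the wiper somewhat, so the actual bias will sit a little below this open-circuit estimate.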


Also, notice in both cases I've added a resistor to the transistor base. You don't want to make this mistake.


cpu - How Modern Overclocking Works


Forgive me if I misunderstand some basic EE principles -- I'm a CS guy. After some googling, I still haven't found anyone who really explains how the chip runs faster. I understand the voltage must increase, per this related article.


Do we actually increase the frequency at which the chip operates?


A CPU runs at a certain frequency, which is programmed into its registers. This frequency can be modified at any time to account for drift, which compensates for the distance between the crystal and the CPU. (This is going from memory from years ago -- chances are every assumption here is inaccurate.)


So, the original frequency is defined by the crystal, which, by its nature, oscillates in a fixed frequency band. This in turn is programmed into the CPU, which goes on to do X number of calculations per millisecond.


At this point, overclockers manipulate the drift that is programmed into the CPU? My gut tells me that the drift registers can't increase the frequency enough to be relevant. So where does the increase in frequency come from?



Something that has just occurred to me is that just by applying more voltage, the 'bits' actually move around faster... but that wouldn't mean an increase in frequency, right?



Answer




Do we actually increase the frequency at which the chip operates?



Yes, we do!


Modern CPUs have a unit called a PLL - Phase-Locked Loop - which generates the multi-GHz core clock from relatively cheap crystals running at something like 33.3 or 100 MHz. These units are programmable over a wide range of output frequencies. That is used to slow the core(s) down when there is less work to do, to save power - or to overclock them.
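The arithmetic of that multiplication can be sketched in a couple of lines. The reference frequency and multiplier values here are illustrative, not taken from any particular CPU:

```python
# A PLL multiplies a cheap reference clock up to core speed:
#   f_core = f_ref * multiplier / divider
def core_frequency(f_ref_hz, multiplier, divider=1):
    return f_ref_hz * multiplier / divider

f_ref = 100e6                       # 100 MHz base clock (assumed)
stock = core_frequency(f_ref, 36)   # 3.6 GHz at the stock multiplier
oc    = core_frequency(f_ref, 42)   # 4.2 GHz with a raised multiplier
print(f"stock: {stock/1e9:.1f} GHz, overclocked: {oc/1e9:.1f} GHz")
```

Overclocking by raising the multiplier (or the base clock itself) is exactly this calculation, which is why motherboard firmware exposes those two knobs.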


You can increase the clock frequency further when the voltage is higher - but at the price of massive additional heat. And the silicon will "wear out" faster, as bad effects like electromigration increase too.
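The extra heat follows the usual CMOS rule of thumb, P ≈ C·V²·f: raising both voltage and frequency compounds. A sketch with illustrative (assumed) numbers:

```python
# Rule-of-thumb dynamic power of CMOS logic: P ~ C_eff * V^2 * f.
def dynamic_power(c_eff, v, f):
    return c_eff * v**2 * f

# Normalized C_eff = 1; voltages and clocks are assumed examples.
base = dynamic_power(1.0, 1.20, 3.6e9)   # stock: 1.20 V, 3.6 GHz
oc   = dynamic_power(1.0, 1.35, 4.2e9)   # overclock: 1.35 V, 4.2 GHz
print(f"power ratio vs stock: {oc/base:.2f}x")
```

A roughly 17% clock bump with a 12% voltage bump costs nearly 50% more dynamic power, which is where the "massive additional heat" comes from.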


basic - why electricity requires a closed loop to flow?


Imagine a battery with the positive terminal connected to ground. The positive side has a high electrical potential; ground obviously has a lower electric potential. Electricity wants to seek a lower potential, and there's no significant resistance between the terminal and the ground. Shouldn't electricity flow? I know it doesn't, but I don't understand why. I know you have to close the loop, but why is this necessary?




Answer



It did flow.


A current appeared briefly as you connected the terminal to ground, and again when the battery restored its output voltage to full value.


Say we have a 9V battery, and one terminal is at +4.5V wrt Earth, while the other terminal is at -4.5V wrt Earth. (So, 9.0V between the battery terminals.)


If we now connect the battery positive terminal to Earth, this discharges the tiny capacitance found between the metal battery parts and Earth (it's probably a fraction of a picofarad.) So, a current does flow momentarily in the earth connection. After touching the terminal to earth, the positive battery terminal is now at 0V wrt the earth. But the negative battery terminal is still at -4.5V wrt earth.


What does a battery do when a load has reduced its output voltage? The internal charge-pump turns on! This charge-pump only runs until the battery terminals are again at 9.0V potential-difference. As this happens, a current briefly appears in the earth-connection again. (This current is there because the capacitance between the earth and the battery neg terminal has to be charged up from -4.5V to -9.0V.)


After the current has finally fallen to zero, the battery negative terminal is at -9.0V wrt earth, and the battery positive terminal is at 0.0V wrt earth.


How long do these currents last? If the capacitance between the metal battery parts and the nearby Earth is a fraction of a picofarad, and the resistance of the wire (and of the battery internal impedance) is about an ohm, then the time-constant of the capacitor charge/discharge is a fraction of a picosecond.
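Plugging in the rough figures from the paragraph above (the exact stray capacitance is an assumption; "a fraction of a picofarad" is taken here as 0.5 pF):

```python
# Duration of the "brief zap": tau = R * C, using the rough
# figures from the text.
R = 1.0        # ohms: wire resistance plus battery internal impedance
C = 0.5e-12    # farads: assumed stray capacitance between battery and earth

tau = R * C
print(f"time constant: {tau*1e12:.2f} ps")  # well under a picosecond
```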


A tiny, very brief zap!


(Try holding an untuned AM radio near the wire, crank the volume way up, and you might hear a slight "click" when the battery terminal touched earth. Find a large metal object and try alternately touching the + and the - battery terminals against it. Can the radio detect these slight impulses of current? I haven't tried this. Maybe it works, or you need far more than 9V to hear the pulses.)





2nd question: why are complete circuits needed? Ah, for that question we have to confront the fact that all electric circuits are based on electrostatics. The physics of "Static Electricity" rules the wires. In particular, wires are composed of positive and negative charges in equal quantities: tens of thousands of coulombs of each. Wires are "neutral" but only because the opposite charges in the metal are in perfect balance.


OK, what happens to a piece of wire if we try to remove some charge; some of its movable electron-ocean? A few feet of wire might have, say, tens of thousands of coulombs worth of electrons, and also has about one picofarad of capacitance, all on its own (with the rest of the universe being the other capacitor plate.) Say we take out a millionth of a coulomb of electrons from that wire and dump them into the ground. (Not one coulomb, but only 1/1,000,000.) Now our wire will have a positive charge from the extra protons left behind. What's its static voltage? Q=CV, or V=Q/C. The positive voltage will be 1×10^-6 coulombs divided by 1×10^-12 farads, or ONE MEGAVOLT WTF?!!!!! Yep, that's how it works. By trying to remove even a millionth of a coulomb from a hunk of metal, we end up creating a huge DC voltage.
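The megavolt surprise, as a one-line sketch of V = Q/C with the numbers from the paragraph above:

```python
# Remove a millionth of a coulomb from a wire with ~1 pF of
# self-capacitance, and V = Q/C gives the resulting static voltage.
Q = 1e-6    # coulombs removed
C = 1e-12   # farads of self-capacitance

V = Q / C
print(f"{V/1e6:.2f} MV")
```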


Yet an everyday current of one ampere is a charge-flow involving one coulomb per second - a million microcoulombs per second... so sustaining it without a return path would need a voltage a million times higher than a megavolt.


Finally, again look at an electric circuit. If we want to pump charges out into the air, but not have the charges loop around into a closed circle, then obviously we need some megavolts. And that's just for a microamp flow, and only for a 1-second flow! A continuous one-way current of 1uA, with no circuits involved, would require a rising voltage of MEGAVOLTS PER SECOND. That's what happens when we try to force a current out of a metal, but without having a return-path.


Whew. Here's an analogy: what if we had a pool of liquid neutronium. Millions of tons per cubic centimeter. It's impossible to scoop any up and lift it against gravity. However, we can still stir the liquid neutronium, so it flows in a circle within the pool. The electrons in metals act a lot like that. It's not hard to stir them, so they flow in complete ring-shaped paths, without being "lifted up" at all. But to lift them out of the metal, so they don't just flow right back again - that takes megavolts, constantly increasing megavolts.


For visualizing electricity, I think the best model is to imagine a solid cube of copper, plus a supermagnet. The entire copper block is full of coulombs worth of movable electrons; kilocoulombs, megacoulombs. The copper block is like an aquarium, a water-tank, where the water is made of pure electron-stuff. But the static voltage is near zero, because every movable electron in the copper block is very close to a proton in the copper atoms.

OK, now wave your supermagnet near the copper. The electron-sea inside the copper will swirl in closed loops. We'll perform some work doing this, and feel a magnetic drag force. The circles of current in the copper might be several tens of amperes. Lots of coulombs flowing per second. But the flows are circular. And as each electron moves away from a proton, another electron moves right behind it and takes its place. On average the voltage remains near zero, because the average distance between the electrons and protons inside the metal block is not changing.

We're allowed to "stir" the "bucket of electrons" as much as we want, and that's what circuitry is based upon. But we're not allowed to scoop a big glob of electrons out of the bucket - not unless we have gigavolt power supplies doing it.



digital logic - AND, OR gates: 3 transistors. NAND, NOR gates: 2 transistors. Why?



Why do NAND and NOR gates require two transistors each, whereas AND and OR gates require three each?




Sunday 22 November 2015

microcontroller - Mosfet as switch for high output current



I'm working on an Android-controlled input/output/gauge manager for cars, based on a PIC24 microcontroller. I'm really not an electronics expert, so I had someone working on the schematics, but he no longer has time. It's mostly done and functional, but I'm having trouble with the output section.


The idea is to be able to drive various loads (leds, solenoids, dc motors, etc) and turn them on/off from the pic micro controller.


Pin43 on the schematic is a PIC pin that swings between 0V and 3.3V, Vin is 12V, and Pos_Out1 is connected to a load that is grounded.


The issue I'm having is that, testing it with a light for example, there is always current flowing, even when the PIC pin is floating or grounded. When I send 3.3V on the PIC pin, the light gets brighter, so more current flows, but there should be no current at all when the pin is grounded or floating.


Is something wrong in the schematic?


Thank you for your time!


output schematic



Answer



The MOSFET is shown connected incorrectly in the schematic- the body diode conducts so you get 11.3V out with 12V in, and when the transistor turns on you get 12V out with 12V in. This kind of inverse operation is used deliberately sometimes (for example in the classic reverse polarity protection circuit as shown below- not related directly to your application, of course).


enter image description here



The 1M resistor is too large to let the MOSFET switch quickly, so you will be unnecessarily stressing the MOSFET when it switches off. However, that may actually save its life, since there is no protection against inductive load flyback (a case where two wrongs may make a sorta-right), and there is no gate protection for the MOSFET against typical automotive transients.
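To see why 1M is slow, a back-of-the-envelope sketch: the gate charges through that resistance into the FET's input capacitance. The capacitance figure below is an assumed typical value for a power MOSFET, not from the actual part's datasheet:

```python
# Gate charging time constant: tau = R_gate * C_iss.
R_GATE = 1e6     # ohms, the pull-up from the schematic
C_ISS  = 1e-9    # farads, assumed input capacitance (~1 nF typical)

tau = R_GATE * C_ISS
print(f"gate time constant: {tau*1e3:.1f} ms")  # roughly 1 ms per tau
```

A millisecond-scale transition means the MOSFET spends a long time in its linear region on every switching edge, dissipating far more than it would with a stiffer gate drive.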


I'm not going to correct all those issues for you in this answer; there are plenty of examples of good high-side switch design for automotive applications. It needs a few more parts - a couple of diodes and a resistor at a minimum - and a better choice of pull-up value to be acceptable. If you don't fix those problems you will have failures caused by inductive loads. There is also a potential brownout issue that you should address at a system level if nowhere else - think about what happens during cranking or with a dead-ish battery: can the MOSFET go linear and burn out?


microcontroller - For a 'customized' arduino consuming less energy making use of Attiny85 vs PCB Fabrication?


Currently I am working on a prototype project where I move a servo (6V) according to the level of light detected by a photodiode, making use of the ISR function. A torch is used as the light source.


First, I built this project successfully using an Arduino Uno. After that, in order to reduce energy consumption, I made an Arduino on a breadboard using an ATmega328 microcontroller, adding a sleep function with the JeeLib library. It worked well. Now I want to take this project to the next step and turn the prototype into a functional tool/device.


I was planning to customize an arduino and solder the servo (6V), photodiode etc.


I have researched this topic and carefully read the following link (How to go from newbie to manufactured?), and learned that creating a customized Arduino can turn out to be an exhausting task, especially for a newbie to electronics like me.


At the same time I am wondering whether there is a way to skip PCB fabrication altogether. I have since learned that I could use an ATtiny85 on a perfboard, which can also drive a servo and handle the ISR function. The ATtiny85 consumes less energy as well.



Given the fact that I 'only' need to control a servo (6V) depending on the voltage across a photodiode, I do think that an ATtiny85 microcontroller might be a decent way to realize my project with less effort. I am aware of the small EEPROM and flash sizes. However, as noted in an earlier discussion (My Atmega328 seems to be overkill, what should I use instead?), apparently not every program that works on an Arduino Uno will work on an ATtiny85.


Can you tell me whether there is a way for me to skip working with a PCB manufacturer to finalize my project? Is an ATtiny85 microcontroller a good alternative for my project?



Answer



An ATtiny will be fine for a single servo and ADC sensor with the Arduino firmware. The things an ATtiny can't do that an ATmega can mostly come down to code space and the number of timers and interrupts. Your project is very small and not affected by these limitations.


As for developing a custom Arduino PCB, it's not that complicated. An Arduino is essentially just a breakout board for the microcontroller, with a regulator and a USB-to-serial circuit attached, neither of which is strictly required to be on board. Look at the various bare-bones Arduinos that are no more than the microcontroller and a few passives.


sensor - Reed switch sticks until tapped




  1. Why is the reed switch sticking when the magnet is no longer nearby? Only when I tap the switch it releases, otherwise it just sticks for many minutes.




  2. Are there any best practices (do's or don'ts) with reed switches?




  3. Does it matter what magnet you use? Eg. regular versus rare earth.


schematic


simulate this circuit – Schematic created using CircuitLab



  • Did I manage to magnetize the contacts?

  • Is the current too large (approx 4mA)?


It is one of these and exact part number is unknown.



enter image description here



Answer



Reed switches can stick for the following reasons:



  • The metal of the contacts has been pitted or otherwise compromised, e.g. from arcing when switching active loads

  • The contacts have become magnetized (or another magnetic field is unaccounted for)

  • The reed switch is physically damaged, such as through bending or cutting the leads improperly


Current Rating:


4mA doesn't sound like much current, even for switching. The lowest rated switching current value for reed switches I could find at Digikey is 50mA.



A reed switch has two values for current rating. Switching and carry. Switching is the amount of current that it is designed to switch on and off without harming the contacts. Carry is the amount of current for a switch that has already been engaged, and thus won't arc when switching.


Nickel and iron used for the contacts are soft metals, which means they are more vulnerable to melting. So the contacts are generally plated with a harder metal (like rhodium or ruthenium) to increase the life of the switch.


Residual Magnetism:


Reed switches are made of metal contacts that have been annealed, a process that leaves no or very little magnetic retentivity. This means after the reed switch is removed from a magnetic field, there should not be any residual magnetism in the contacts. (Meder)


It's possible the reed switch is defective and is not properly annealed, causing residual magnetism problems.


Physical Damage:


Reed switches are manufactured with fairly exacting tolerances. When cutting or shaping the leads, it's possible to damage the reed or glass. Meder has a whole document dedicated to handling precautions. In short, physical damage may be difficult to verify visually, but can affect performance. An excerpt from the linked document:



Internal damage can occur with no visible signs on the seal. In these instances, seal stress has occurred, leaving a torsional, lateral, or translational stress in the seal. This produces a net force on the contact area that can affect the operate characteristics (Pull-In and Drop-Out), contact resistance, and life characteristics.






Summary:



Did I manage to magnetize the contacts?



I would say unless the reed is defective (improper annealing) it is unlikely.



Is the current too large (approx 4mA)?



Without the datasheet specifics, unknown. But also unlikely.




Does it matter what magnet you use? Eg. regular versus rare earth.



No, it should not matter. The reed switch contacts simply react to a magnetic field, whatever its origin.


mosfet - Controlling a very small DC motor with PWM?


I have a very small DC motor (from a Walkman), I would like to control the speed from a micro-controller. For that I would like to connect a MOSFET in series and apply a PWM signal to its gate in order to change the speed of the motor.


I have measured the motor's L and R: 4.7mH and 11.5 Ohm (so τ = L/R ≈ 0.41 ms).
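From those measurements, a quick sketch of the electrical time constant and the corresponding L/R corner frequency. Comparing the PWM frequency to this corner is a common rule of thumb for getting smooth average current, not a figure from any datasheet:

```python
import math

# Electrical time constant of the motor winding: tau = L / R.
L = 4.7e-3   # henries, measured
R = 11.5     # ohms, measured

tau = L / R                         # about 0.41 ms, matching the text
f_corner = 1 / (2 * math.pi * tau)  # L/R corner frequency, ~390 Hz

print(f"tau = {tau*1e3:.2f} ms, corner ~ {f_corner:.0f} Hz")
```

Choosing a PWM frequency well above this corner lets the winding inductance smooth the chopped current into a nearly steady value.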



From running a few experiments with the motor using a bench power supply I can see that it runs well from a voltage of 0.2V up to somewhere around 0.4V - that is all the range I require.


The power supply I have for this is set to 1.8V (it is used for the digital part of the circuit), which makes using standard MOSFETs a bit difficult because I cannot supply the gate voltage required for saturation. I bought a few P-channel MOSFETs like this.


So even though I would have thought this setup would work (Vcc -> motor -> FET -> GND), I can't seem to get good resolution over the control, and I don't get as much torque from the motor as I used to when it ran from the DC power supply.


I am not sure what freq. I should be using and not sure what other parameters need to be checked in order to make this work as intended. Any help on this will be appreciated.


* UPDATE * Following Olin's answer, I built the circuit he suggested. I used a 2N3904 transistor and a 180 Ohm resistor in parallel with a 4.7nF cap. Attached is the collector voltage when running at PWM code 100 (out of 256). Vcc is 1.8V.


enter image description here




arduino - Can I use TI's cc2541 BLE as micro controller to perform operations/ processing instead of ATmega328P AU to save cost?

I am using arduino pro mini (which contains Atmega328p AU ) along with cc2541(HM-10) to process and transfer data over BLE to smartphone. I...