The 12-bit resolution of an ADC can be increased to 14 bits through the 'oversampling and decimation' method. An Atmel application note says that
The higher the number of samples averaged is, the more selective the low-pass filter will be, and the better the interpolation. The extra samples, m, achieved by oversampling the signal are added, just as in normal averaging, but the result is not divided by m as in normal averaging. Instead the result is right shifted by n, where n is the desired number of extra bits of resolution, to scale the answer correctly. Right shifting a binary number once is equal to dividing the binary number by a factor of 2.
It is important to remember that normal averaging does not increase the resolution of the conversion. Decimation, or interpolation, is the averaging method which, combined with oversampling, increases the resolution.
This reference clearly says that for the decimation method, the result is right shifted by the desired number of extra bits of resolution, and not divided by m as in normal averaging.
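For concreteness, here is a minimal C sketch of that procedure, assuming a hypothetical `read_adc()` routine that returns one raw 12-bit sample: to gain n = 2 extra bits, 4^n = 16 samples are summed and the sum is right shifted by n.

```c
#include <stdint.h>

/* Hypothetical 12-bit ADC read; replace with your hardware's routine. */
extern uint16_t read_adc(void);

/*
 * Oversampling and decimation: to gain n extra bits of resolution,
 * take 4^n samples, sum them, and right shift the sum by n.
 * Here n = 2, so 16 samples of a 12-bit ADC give a 14-bit result.
 */
uint16_t read_adc_14bit(void)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < 16; i++) {   /* 4^2 = 16 oversamples */
        sum += read_adc();               /* each sample is 0..4095 (12 bits) */
    }
    return (uint16_t)(sum >> 2);         /* shift right by n = 2 -> 0..16383 (14 bits) */
}
```

Note the arithmetic difference from normal averaging: dividing the sum of 16 samples by 16 (a right shift by 4) would scale the result back to the original 12-bit range, whereas shifting by only 2 keeps the result in a 14-bit range.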
So, the question is: why do we need to use the decimation method instead of normal averaging after oversampling to increase the ADC resolution?
The note says "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we are not working with a binary number? How do we apply the decimation method in that case?