In AVR programming, register bits are invariably set by left-shifting a 1 into the appropriate bit position, and cleared by ANDing with the complement of that same value.
Example: on an ATtiny85, I might set PORTB bit 4 like this:
PORTB |= (1<<PB4);
or clear it like this:
PORTB &= ~(1<<PB4);
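To make the idiom concrete, here is a minimal sketch in context; it assumes an ATtiny85 with something (an LED, say) on PB4, the default 1 MHz clock for F_CPU, and an arbitrary half-second delay:

#define F_CPU 1000000UL      /* assumed clock: 1 MHz internal RC oscillator */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1<<PB4);            /* make PB4 an output */
    for (;;) {
        PORTB |= (1<<PB4);       /* set bit 4: pin high */
        _delay_ms(500);
        PORTB &= ~(1<<PB4);      /* clear bit 4: pin low */
        _delay_ms(500);
    }
}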
My question is: Why is it done this way? The simplest code ends up being a mess of bit-shifts. Why are bits defined as bit positions instead of masks?
For instance, the IO header for the ATtiny85 includes this:
#define PORTB _SFR_IO8(0x18)
#define PB5 5
#define PB4 4
#define PB3 3
#define PB2 2
#define PB1 1
#define PB0 0
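For context, this bit-number style is what avr-libc's _BV() convenience macro (defined as (1 << (bit)) in the headers that avr/io.h pulls in) is built around; a small sketch of that shorthand:

#include <avr/io.h>

void pb4_high(void) { PORTB |= _BV(PB4); }    /* same as PORTB |= (1<<PB4)  */
void pb4_low(void)  { PORTB &= ~_BV(PB4); }   /* same as PORTB &= ~(1<<PB4) */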
To me, it would be much more logical to define the bits as masks instead (like this):
#define PORTB _SFR_IO8(0x18)
#define PB5 0x20
#define PB4 0x10
#define PB3 0x08
#define PB2 0x04
#define PB1 0x02
#define PB0 0x01
So we could do something like this:
// as bitmasks
PORTB |= PB5 | PB3 | PB0;
PORTB &= ~PB5 & ~PB3 & ~PB0;
to turn bits b5, b3, and b0 on and off, respectively. As opposed to:
// as bit-fields
PORTB |= (1<<PB5) | (1<<PB3) | (1<<PB0);
PORTB &= ~(1<<PB5) & ~(1<<PB3) & ~(1<<PB0);
The bitmask code reads much more clearly: set bits PB5, PB3, and PB0. Furthermore, it would seem to save operations, since the bits no longer need to be shifted.
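For reference, the two notations name the same compile-time constants; a quick sanity check (using the ATtiny85 values of PB5, PB3, and PB0 above, and C11's _Static_assert) would be:

#include <avr/io.h>

/* (1<<PBx) is a constant expression equal to the corresponding mask value. */
_Static_assert((1<<PB5) == 0x20, "PB5 mask");
_Static_assert((1<<PB3) == 0x08, "PB3 mask");
_Static_assert((1<<PB0) == 0x01, "PB0 mask");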
I thought maybe it was done this way to preserve generality, to allow porting code from an n-bit AVR to an m-bit one (8-bit to 32-bit, for example). But this doesn't appear to be the case, since #include <avr/io.h> resolves to definition files specific to the target microcontroller. Even changing targets from an 8-bit ATtiny to an 8-bit ATmega (where bit definitions change syntactically from PBx to PORTBx, for example) requires code changes.
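If the mask style is preferred, it can also be layered on top of the vendor bit numbers without editing the headers; a minimal sketch (the *_MASK names here are mine, not part of any AVR header):

#include <avr/io.h>

/* Hypothetical mask aliases derived from the existing bit-number macros. */
#define PB0_MASK (1u << PB0)
#define PB3_MASK (1u << PB3)
#define PB5_MASK (1u << PB5)

void demo(void)
{
    PORTB |=  (PB5_MASK | PB3_MASK | PB0_MASK);   /* set b5, b3, b0   */
    PORTB &= ~(PB5_MASK | PB3_MASK | PB0_MASK);   /* clear b5, b3, b0 */
}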