Many processors / µCs / dev-platforms (BeagleBoard, Arduino,...) use interrupts.
These can be triggered by the detection of:
- HIGH level
- RISING edge
- CHANGING level (either FALLING or RISING edge)
- FALLING edge
- LOW level
Now one of two things must be true:
- FALLING and LOW (/ RISING and HIGH) are virtually the same
- When a LOW (/HIGH) level is applied over a non-trivial time, the controller is stuck repeating the interrupt service routine over and over
Neither of these makes sense to me:
The first cannot be true, since then it would be pointless to distinguish between them in the first place.
So if the second one is true: how could this be useful? What application would not be better served by a combination of RISING and FALLING instead of HIGH?
Research so far:
Answer
Level-triggered (high or low) interrupts allow the source to say "never mind" (de-assert before service) or to keep the request asserted until the ISR gets around to it. Interrupt latency is not guaranteed on a single core with multiple sources, though it is usually quite short. Generally, the signal for a level-triggered interrupt is itself latched on an edge, and you have to clear it in the ISR or else you'll come right back into it again.
As Ignacio said, level-triggered can also do something continuously while active, though you should write your software so that it does not get stuck in an "interrupt loop". Never reaching your main code can be difficult to debug.
Edge-triggered is good for things that happen once per event. If the event happens again, your response will happen again, so you'll need to be careful about repeated events such as switch bounce.