Sunday 18 December 2016

adc - Why do modern oscilloscopes use hardware triggering?


I've been learning how DSOs work for the past few days. As far as I understand, digital storage oscilloscopes have trigger circuitry, which usually consists of an analog comparator and a DAC. A voltage corresponding to the trigger level is generated by the DAC and compared with the input signal. Once the input signal crosses the threshold level, the acquisition starts.


My actual question is: why can't this all be done in software? Wouldn't it be easier to constantly acquire data from the ADC, store it in a circular/FIFO buffer, and compare the values against the given trigger level?




Answer



There are a few different ways of implementing triggering on a scope, and they all have various tradeoffs.


For low-end scopes that sample slowly enough to use an MCU of some sort, this can be done in software. But this sort of scope isn't really what I would consider a true scope; it's either a low-end 'toy' or a low-bandwidth data acquisition unit of some sort. These scopes either operate slowly enough that they can check each sample for trigger conditions as it arrives, or they grab an entire buffer blindly and then process it to see whether it happened to contain a trigger event. This is what some of the really cheap USB scopes do.
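To make the software approach concrete, here is a minimal sketch of the buffer-scanning variant, assuming an 8-bit ADC and a simple rising-edge trigger; the buffer size, function name, and parameters are illustrative, not taken from any particular scope:

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_LEN 4096  /* capture depth, illustrative */

/* Scan a circular buffer of ADC samples for a rising-edge trigger.
 * Returns the index of the first sample at/above the trigger level
 * that was preceded by a sample below it, or -1 if the buffer
 * happened not to contain a trigger event. */
static int find_rising_edge(const uint8_t buf[BUF_LEN], size_t head,
                            uint8_t level)
{
    for (size_t i = 1; i < BUF_LEN; i++) {
        size_t prev = (head + i - 1) % BUF_LEN;
        size_t cur  = (head + i) % BUF_LEN;
        if (buf[prev] < level && buf[cur] >= level)
            return (int)cur;
    }
    return -1;  /* no trigger in this acquisition */
}
```

The weakness is visible right in the loop: the CPU must touch every sample, so this only scales to sample rates where the processor can keep up with the ADC.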


For anything above a few tens of MSa/s, dedicated hardware is required to manage the data coming out of the ADC and get it stored in dedicated high-speed sample memory, as general-purpose CPUs cannot handle the firehose of data efficiently. This is done on an FPGA or an ASIC. Since the data has already been digitized, it's quite simple to add some digital trigger circuitry that checks for various trigger conditions in the data stream coming directly out of the ADC, without requiring any additional components. It's possible to implement some rather complex triggering capabilities this way, possibly with multiple thresholds (for things like windowed triggering).

In some scopes, especially mixed-signal scopes, each channel has a comparator that can either be used for edge triggering directly or to extract the digital level of the channel for use by serial decoding logic, which can in turn generate trigger events based on the decoded data. This works on most simple architectures that have a single ADC per channel. The comparator is generally implemented in the ADC data path, though I suppose it doesn't have to be. Another advantage of building a trigger comparator into the digital data path after the ADC is that it makes calibration simpler: you don't need an extra step to calibrate a trigger DAC level against the main ADC.
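As a rough model of what that datapath logic does, here is a per-sample windowed trigger check written in C; in a real scope this would be a small piece of FPGA or ASIC logic clocked at the ADC rate, and the struct and function names here are purely illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of a windowed trigger in the digital datapath: fire when the
 * signal exits the window bounded by lo and hi. Because the thresholds
 * are plain digital compares on ADC codes, there is no separate trigger
 * DAC to calibrate against the ADC. */
typedef struct {
    uint16_t lo, hi;     /* window bounds, in ADC codes (illustrative) */
    bool     was_inside; /* state from the previous sample */
} window_trigger_t;

static bool window_trigger_step(window_trigger_t *t, uint16_t sample)
{
    bool inside = (sample >= t->lo) && (sample <= t->hi);
    bool fire   = t->was_inside && !inside;  /* exit-window event */
    t->was_inside = inside;
    return fire;
}
```

Changing the comparison and the state kept between samples gives edge, glitch, runt, and similar trigger types, which is why such complex triggering is cheap to add once the data is digital.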


Very high-end scopes use various interleaving and sampling techniques across multiple ADCs to achieve very high equivalent sample rates, and these methods can require more signal processing to recover the original data than can be done in real time. In this case, there is no point in the signal path where trigger conditions can be checked, so dedicated trigger circuitry is required. See the 100 GHz scope from LeCroy for a good example of where you have to have a separate trigger path: the 100 GHz band is split into three bands with diplexers, and each band is downconverted and then sampled by multiple interleaved ADCs. The original signal is then reconstructed by a general-purpose CPU as a post-processing step after acquisition is complete.
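For intuition only, here is the idealized core of that reconstruction: merging N time-interleaved ADC streams back into one record. This sketch deliberately ignores everything that makes the real problem hard (per-ADC gain, offset, and timing-skew correction, and recombining the frequency bands in architectures like LeCroy's), which is exactly why the full job is done on a CPU after acquisition rather than live in the trigger path:

```c
#include <stdint.h>
#include <stddef.h>

/* Idealized merge of n_adcs time-interleaved ADC streams into one
 * output record. ADC k is assumed to have sampled at clock phase k,
 * so its samples land at output positions k, k + n_adcs, ... */
static void interleave_merge(const uint16_t *const adc[], size_t n_adcs,
                             size_t samples_per_adc, uint16_t *out)
{
    for (size_t i = 0; i < samples_per_adc; i++)
        for (size_t k = 0; k < n_adcs; k++)
            out[i * n_adcs + k] = adc[k][i];
}
```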

