Satellites in low Earth orbit move at close to 8 km/s. Most consumer-grade GPS chips still enforce the CoCom limit of 1,000 knots, about 514 m/s. The CoCom limits are voluntary export restrictions; you can read more about them in this question and answer, this question and answer, and elsewhere.
For this question, let's assume they are implemented as numerical limits in the output stage of the firmware: the chip must actually calculate the speed (and altitude) before it can decide whether the limit is exceeded, and then either present the solution at the output or block it.
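As a minimal sketch of such an output gate (the names and thresholds here are hypothetical; the real firmware is closed source, and vendors are known to differ on whether the speed and altitude conditions are combined with AND or with OR):

```python
# Hypothetical CoCom-style output gate. The real firmware logic is closed
# source; vendors differ on whether speed and altitude combine via AND or OR.
COCOM_SPEED_LIMIT_MS = 514.0       # ~1000 knots
COCOM_ALTITUDE_LIMIT_M = 18_000.0  # ~60,000 ft

def gate_solution(speed_ms: float, altitude_m: float, use_and: bool = True) -> bool:
    """Return True if the fix may be output, False if it must be blocked."""
    over_speed = speed_ms > COCOM_SPEED_LIMIT_MS
    over_alt = altitude_m > COCOM_ALTITUDE_LIMIT_M
    tripped = (over_speed and over_alt) if use_and else (over_speed or over_alt)
    return not tripped

# A LEO satellite exceeds both thresholds, so either interpretation blocks it:
print(gate_solution(8000.0, 500_000.0, use_and=True))   # False
# A slow but high receiver is blocked only by the OR interpretation:
print(gate_solution(400.0, 500_000.0, use_and=False))   # False
print(gate_solution(400.0, 500_000.0, use_and=True))    # True
```

The AND-vs-OR question matters in practice for balloon payloads, but a LEO satellite trips both conditions regardless.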
At 8000 m/s the Doppler shift at 2 GHz is about 0.05 MHz, a small fraction of the signal's natural bandwidth due to its modulation.
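Checking the figure above with the standard first-order Doppler formula f_d = (v/c) · f_carrier:

```python
# First-order Doppler shift at the worst-case relative velocity.
C = 3.0e8          # speed of light, m/s
V = 8000.0         # LEO orbital speed, m/s
F_CARRIER = 2.0e9  # ~2 GHz, rounding the GPS L1 carrier (1575.42 MHz) up

f_doppler = V / C * F_CARRIER
print(f"{f_doppler / 1e6:.3f} MHz")  # → 0.053 MHz, i.e. about 0.05 MHz
```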
Several companies sell GPS units for CubeSats. They are expensive (hundreds to thousands of dollars) and probably worth every penny, because at least some of them are designed for satellite applications and have been space-tested.
Ignoring the implementation of the CoCom limits, and setting aside all other issues of operating in space besides velocity: are there any reasons why a modern GPS chip specced at a 500 m/s maximum velocity would not work at 8000 m/s? If so, what are they?
Note: 8000 m/s divided by c (3×10⁸ m/s) gives about 27 ppm of expansion/compression of the received sequences. This might affect some implementations of the correlation (both in hardware and in software).
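The same v/c ratio applied to the C/A code's nominal 1.023 MHz chipping rate gives the code-rate Doppler a correlator must track:

```python
# Code compression at 8 km/s: the received PRN sequence runs ~27 ppm fast
# or slow, shifting the nominal 1.023 MHz C/A chipping rate by ~27 Hz.
C = 3.0e8
V = 8000.0
CHIP_RATE = 1.023e6   # C/A code chips per second

ppm = V / C * 1e6
chip_rate_shift = CHIP_RATE * V / C
print(f"{ppm:.1f} ppm")             # → 26.7 ppm
print(f"{chip_rate_shift:.1f} Hz")  # → 27.3 Hz of code-rate Doppler
# Over one 1 ms code period (1023 chips) a fixed-rate replica slips by
# roughly 0.027 chips (~27 ns), so the code NCO must track this rate.
```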
Answer
I would not advise using an integrated GPS solution (containing an MCU and closed-source firmware) for a satellite application. There are several reasons why it might fail to work:
- The front-end frequency plan might be optimized for a limited Doppler range. Typically, the RF front end mixes the signal down to an IF below 10 MHz (a higher IF would require a higher sampling rate and consume more energy). This IF is not chosen arbitrarily: the ratio IF/sample rate should stay non-harmonic over the whole Doppler range, to avoid spurious tones from A/D truncation errors in the sampled signal. You may observe beating effects that make the signal unusable at some Doppler rates.
- The digital-domain correlator needs to reproduce a replica of the carrier and the C/A code at the correct rate, including Doppler effects. It uses DCOs (digitally controlled oscillators) to pace carrier and code generation; these are tuned via configuration registers from the MCU. The bit width of these registers may be constrained to the Doppler range expected for a ground-based receiver, making it impossible to tune the channel to the signal if you are traveling too fast.
- The firmware has to do a cold acquisition if no position/time estimate is available: it searches Doppler frequency bins and code phases to find a signal. This search range will be restricted to what is expected for a ground-based user.
- The firmware will typically use Kalman filtering for position solutions. This involves a model of the receiver's position, velocity, and acceleration. While acceleration is not a concern for a satellite, the model will fail for velocity if the firmware is not adapted for in-orbit use.
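To give a rough sense of the cold-acquisition cost mentioned above, here is a back-of-the-envelope sketch of the (Doppler bin × code phase) search space. The ±5 kHz ground-user range, ±50 kHz orbital range, and 500 Hz bin width are typical textbook values, not taken from any particular receiver:

```python
# Rough cost of cold acquisition: number of (Doppler bin, code phase)
# cells to search. Ranges and bin width are illustrative textbook values.
CODE_PHASES = 1023 * 2   # half-chip spacing over one C/A code period
BIN_WIDTH_HZ = 500.0     # common bin width for 1 ms coherent integration

def search_cells(doppler_range_hz: float) -> int:
    """Total search cells for a +/- doppler_range_hz sweep."""
    bins = int(2 * doppler_range_hz / BIN_WIDTH_HZ) + 1
    return bins * CODE_PHASES

ground = search_cells(5_000.0)   # ±5 kHz covers GPS satellite motion + a slow user
leo = search_cells(50_000.0)     # ±50 kHz when the receiver itself moves at 8 km/s
print(ground, leo)               # the LEO search is roughly 10x larger
```

A firmware hard-coded to the ±5 kHz sweep simply never visits the bins where an orbital receiver's signals actually sit.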
All of these issues can be addressed if you use a freely programmable front end and correlator with custom firmware. You may, for example, look at the Piksi.