# Oversampling

In signal processing, **oversampling** is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at or above the Nyquist rate, which is defined as twice the bandwidth of the signal. Oversampling can improve resolution and signal-to-noise ratio, and can help avoid aliasing and phase distortion by relaxing the performance requirements of the anti-aliasing filter.

A signal is said to be oversampled by a factor of *N* if it is sampled at *N* times the Nyquist rate.

## Motivation

There are three main reasons for performing oversampling:

### Anti-aliasing

Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampling system, design constraints for the anti-aliasing filter may be relaxed. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, the digital filter associated with this downsampling is easier to implement than a comparable analog filter required by a non-oversampled system.

### Resolution

In practice, oversampling is implemented in order to reduce cost and improve performance of an analog-to-digital converter (ADC) or digital-to-analog converter (DAC). When oversampling by a factor of *N*, the dynamic range also increases by a factor of *N* because there are *N* times as many possible values for the sum. However, the signal-to-noise ratio (SNR) increases only by $\sqrt{N}$, because summing up uncorrelated noise increases its amplitude by $\sqrt{N}$, while summing up a coherent signal increases its average by *N*. For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Summing a group of 256 consecutive 20-bit samples increases the dynamic range by a factor of 256 (8 bits), while the uncorrelated noise grows only by a factor of $\sqrt{256} = 16$; the net SNR therefore improves by a factor of 16 (4 bits, or about 24 dB), producing a single sample with 24-bit resolution.

The number of samples required to get $n$ bits of additional data precision is

$$\text{number of samples} = (2^n)^2 = 2^{2n}.$$

To get the mean sample scaled up to an integer with $n$ additional bits, the sum of $2^{2n}$ samples is divided by $2^n$:

$$\text{scaled mean} = \frac{2^n \sum\limits_{i=0}^{2^{2n}-1} \text{data}_i}{2^{2n}} = \frac{\sum\limits_{i=0}^{2^{2n}-1} \text{data}_i}{2^n}.$$
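As a minimal sketch (the function name and the stream of integer ADC readings are hypothetical), the scaled-mean computation above might look like:

```python
def oversample_decimate(samples, n):
    """Combine 2**(2*n) raw ADC samples into one sample with n extra bits.

    The sum of 2**(2*n) samples is divided by 2**n, which scales the
    mean up to an integer with n additional bits of resolution.
    """
    count = (2 ** n) ** 2          # number of samples required: 2^(2n)
    if len(samples) != count:
        raise ValueError(f"need exactly {count} samples for {n} extra bits")
    return sum(samples) // 2 ** n  # scaled mean with n additional bits

# Example: 16 samples (n = 2) from a hypothetical 4-bit ADC
# yield one value with 2 extra bits of resolution.
raw = [7, 8, 7, 9, 8, 8, 7, 8, 9, 7, 8, 8, 7, 9, 8, 8]
print(oversample_decimate(raw, 2))  # → 31 (i.e. 7.75 on the 4-bit scale)
```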

This averaging is only effective if the signal contains sufficient uncorrelated noise to be recorded by the ADC. If not, in the case of a stationary input signal, all $2^{2n}$ samples would have the same value and the resulting average would be identical to this value; in that case, oversampling would have made no improvement. In similar cases where the ADC records no noise and the input signal is changing over time, oversampling improves the result, but to an inconsistent and unpredictable extent.

Adding some dithering noise to the input signal can actually improve the final result because the dither noise allows oversampling to work to improve resolution. In many practical applications, a small increase in noise is well worth a substantial increase in measurement resolution. In practice, the dithering noise can often be placed outside the frequency range of interest to the measurement, so that this noise can be subsequently filtered out in the digital domain—resulting in a final measurement, in the frequency range of interest, with both higher resolution and lower noise.
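This effect can be sketched with an idealized quantizer (the step size, input value, and amount of dither here are hypothetical, chosen only for illustration): without dither, averaging many samples of a stationary input stays stuck at the nearest ADC code, whereas roughly one LSB of dither lets the average converge toward the true value.

```python
import random

def adc(x, step=1.0):
    """Ideal quantizer: rounds the input to the nearest LSB step."""
    return round(x / step) * step

true_value = 3.37        # DC input lying between two ADC codes
n_samples = 4096

# Without noise, every sample quantizes to the same code: no improvement.
no_dither = sum(adc(true_value) for _ in range(n_samples)) / n_samples

# With about 1 LSB of dither, the average converges toward the true value.
random.seed(0)
dithered = sum(adc(true_value + random.uniform(-0.5, 0.5))
               for _ in range(n_samples)) / n_samples

print(no_dither)   # 3.0 -- stuck at the nearest code
print(dithered)    # close to 3.37
```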

### Noise

If multiple samples are taken of the same quantity with uncorrelated noise added to each sample, then, because uncorrelated signals combine more weakly than correlated ones (as discussed above), averaging *N* samples reduces the noise power by a factor of *N*. If, for example, we oversample by a factor of 4, the signal-to-noise ratio in terms of power improves by a factor of 4, which corresponds to a factor of 2 improvement in terms of voltage.
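A small Monte Carlo sketch (with hypothetical unit-variance Gaussian noise and trial counts) illustrates the factor-of-4 power improvement:

```python
import random

random.seed(1)
N = 4            # oversampling factor
trials = 20000
signal = 1.0     # constant (fully correlated) signal level

def noise_power(factor):
    """Estimate residual noise power after averaging `factor` samples."""
    total = 0.0
    for _ in range(trials):
        avg = sum(signal + random.gauss(0, 1) for _ in range(factor)) / factor
        total += (avg - signal) ** 2
    return total / trials

p1 = noise_power(1)   # roughly 1.0 (unit-variance noise)
p4 = noise_power(N)   # roughly 0.25: power reduced by the factor N = 4
print(p1 / p4)        # close to 4, i.e. a 2x improvement in voltage terms
```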

Certain kinds of ADCs known as delta-sigma converters produce disproportionately more quantization noise at higher frequencies. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled signal down to half the target sampling rate, a final result with *less* noise (over the entire band of the converter) can be obtained. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
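The error-feedback loop at the heart of a delta-sigma converter can be sketched as a first-order modulator (a deliberately simplified model, not a production design; real converters add higher-order loops and decimation filters):

```python
def delta_sigma_1st_order(x):
    """First-order delta-sigma modulator producing a 1-bit stream.

    The quantization error is accumulated and subtracted from later
    inputs, which pushes the quantization noise toward high frequencies
    (noise shaping); a low-pass filter then recovers the signal.
    """
    acc = 0.0
    bits = []
    for sample in x:          # input assumed to lie in [-1, 1]
        acc += sample
        out = 1.0 if acc >= 0 else -1.0
        acc -= out            # feed the quantization error back
        bits.append(out)
    return bits

# A DC input of 0.25 yields a bitstream whose low-pass (mean) value
# recovers the input level.
stream = delta_sigma_1st_order([0.25] * 1000)
print(sum(stream) / len(stream))  # → 0.25
```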

## Example

Consider a signal with a bandwidth or highest frequency of *B* = 100 Hz. The sampling theorem states that the sampling frequency would have to be greater than 200 Hz. Sampling at four times that rate requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz ($f_s/2 - B$ = 800 Hz/2 − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency were 200 Hz. Achieving an anti-aliasing filter with a 0 Hz transition band is unrealistic, whereas an anti-aliasing filter with a transition band of 300 Hz is not difficult.
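The arithmetic of this example can be written out directly (the function name is illustrative only):

```python
def transition_band(f_s, bandwidth):
    """Width of the anti-aliasing filter's transition band, in Hz."""
    return f_s / 2 - bandwidth

B = 100.0                     # signal bandwidth (Hz)
nyquist_rate = 2 * B          # minimum sampling frequency: 200 Hz
f_s = 4 * nyquist_rate        # oversample by a factor of 4: 800 Hz

print(transition_band(f_s, B))           # → 300.0 Hz with oversampling
print(transition_band(nyquist_rate, B))  # → 0.0 Hz without
```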