Calibration methods depend on several factors:
the system design,
the component being calibrated (probe, meter, or set), and
for probes, whether it is an electric field or a magnetic field probe.
See RF Survey Equipment for details on equipment designs.
Probe calibration begins with establishing either an electric field or a magnetic field of known intensity. The probe is then precisely positioned in the field. Some type of fixture is often used. A thorough calibration should take into account isotropic response by either manually or mechanically rotating the probe about its axis. The value used should be the average value obtained throughout the 360° rotation of the probe. The intensity of the calibration field should be somewhere in the middle of the probe's dynamic range. This will minimize the effects of any linearity error. Linearity errors refer to the errors that occur at the same frequency and probe position when only the magnitude of the field is changed. In a good probe, linearity errors are normally small—typically ±¼ dB.
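The rotation-averaging and linearity ideas above can be sketched numerically. This is a minimal illustration with hypothetical readings, not any manufacturer's procedure; the 20·log10 form is used because field strength is a field quantity.

```python
import math

def isotropic_average(readings_v_per_m):
    """Average of field readings taken at evenly spaced angles
    during a full 360-degree rotation of the probe."""
    return sum(readings_v_per_m) / len(readings_v_per_m)

def error_db(measured, reference):
    """Error in dB between a reading and the known calibration
    field intensity (field quantity, so 20*log10)."""
    return 20 * math.log10(measured / reference)

# Hypothetical readings (V/m) at 30-degree steps in a known 10 V/m field
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9, 10.2, 10.1]
avg = isotropic_average(readings)
# A good probe should stay within about +/- 0.25 dB of the reference
within_quarter_db = abs(error_db(avg, 10.0)) <= 0.25
```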
Several different types of hardware are used to establish a precise field. They include
horn antennas,
sliding waveguide fixtures, and
TEM (Transverse ElectroMagnetic) cells.
A description of this equipment and its use is beyond the scope of this Web site. In general, horn antennas are used at the higher frequencies, typically above 1 GHz, although some calibration systems use antennas down to 500 MHz. Sliding waveguide fixtures are most useful from 500 MHz to 1,000 MHz. TEM cells are used at frequencies below 500 MHz.
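The frequency ranges above amount to a simple selection rule, sketched here as an assumed lookup (the boundaries are the typical values from the text, not hard limits):

```python
def calibration_fixture(freq_mhz):
    """Pick the typical field-generating hardware for a given
    calibration frequency, per the ranges described above."""
    if freq_mhz < 500:
        return "TEM cell"
    elif freq_mhz <= 1000:
        # some horn-antenna systems also reach down to 500 MHz
        return "sliding waveguide fixture"
    else:
        return "horn antenna"
```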
If a probe is only going to be calibrated at a single frequency, an adjustment is made within the probe if it has an amplifier. If the probe does not have an amplifier where gain can be adjusted, an adjustment is made in the meter, or a calibration factor is provided and stored in the meter, so that the survey set reads correctly at the calibration frequency. Single-frequency calibration is fine if the instrument has a dedicated application or is rated only for a narrow bandwidth. A microwave oven leakage instrument is a good example of a narrow-band instrument: it only has to work in a very narrow band centered at 2,450 MHz. Another example would be a survey set dedicated to checking a specific system that operates at a single frequency or a narrow band of frequencies. In such cases, even if the equipment is broadband in design, a single-frequency calibration is perfectly adequate. The major problem with single-frequency calibration is that the sensitivity of broadband probes can vary dramatically over their rated frequency range. There is automatically an unknown error every time the instrument is used at any frequency other than the calibration frequency, since the largest component of measurement uncertainty is normally frequency deviation.
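The stored-calibration-factor idea can be shown with a short sketch. The numbers are hypothetical: a probe that reads low in a known field at the calibration frequency, corrected by a factor the meter applies.

```python
def corrected_reading(raw_reading, cal_factor):
    """Apply a stored calibration factor so the survey set
    reads correctly at the calibration frequency."""
    return raw_reading * cal_factor

# Hypothetical: the probe reads 9.2 V/m in a known 10.0 V/m field
# at 2,450 MHz, so the meter stores a factor of 10.0 / 9.2.
factor = 10.0 / 9.2
display = corrected_reading(9.2, factor)  # now reads the true 10.0 V/m
```

At any other frequency the same factor is applied blindly, which is exactly the unknown error described above.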
Multiple-frequency calibration is the only way to guarantee accuracy with broadband probes. Most manufacturers calibrate at 10 to 20 frequencies, depending on the frequency range of the probe and whether the probe has a flat-frequency response or shaped-frequency response. See Shaped-Frequency Response for a description of how shaped-response sensors work and why they can be very important.
Calibration frequencies should normally include the band ends (the highest and lowest rated frequencies for the probe) and frequencies no more than about an octave (2:1 ratio) apart. For shaped-frequency response probes, there should be calibration points at the "breakpoints" in the particular standard to which the probe is attempting to conform. For example, in the FCC Regulations, the breakpoints are at 3 MHz, 30 MHz, 300 MHz, and 1,500 MHz. Since it is impossible for the probe to accurately mimic these sharp breaks in the standard, these are the regions within the rated band of the probe where the frequency sensitivity is the greatest.
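Those rules (band ends, octave spacing, standard breakpoints) can be combined into a simple frequency-plan generator. This is an assumed sketch, not a documented manufacturer algorithm; the breakpoint values are the FCC ones quoted above.

```python
def calibration_frequencies(f_low_mhz, f_high_mhz, breakpoints_mhz=()):
    """Build a calibration frequency list: both band ends, points no
    more than an octave (2:1) apart, plus any standard breakpoints
    that fall inside the probe's rated band."""
    freqs = set()
    f = f_low_mhz
    while f < f_high_mhz:          # octave ladder up from the low band end
        freqs.add(f)
        f *= 2
    freqs.add(f_high_mhz)          # always include the high band end
    for bp in breakpoints_mhz:     # add in-band standard breakpoints
        if f_low_mhz <= bp <= f_high_mhz:
            freqs.add(bp)
    return sorted(freqs)

# Hypothetical broadband probe rated 0.3 MHz to 3,000 MHz,
# with the FCC breakpoints from the text
plan = calibration_frequencies(0.3, 3000, (3, 30, 300, 1500))
```

For this example the plan comes out to 19 points, consistent with the 10-to-20 range mentioned earlier.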
Multiple-frequency probe calibration involves the following steps:
An initial calibration is made at a center frequency. If the probe has an amplifier, it is adjusted to yield accurate readings on the meter at this frequency.
Measurements are made at all remaining calibration frequencies.
Results are analyzed to determine if the overall frequency sensitivity is within specification (± a defined maximum deviation from a center value).
The nominal midpoint is determined to "center" the frequency response. If the probe has an amplifier, it is adjusted so that the readings across the band fall above and below this point and all are within specification. If the probe does not have an amplifier, this nominal midpoint value is used to adjust the meter. In modern meter designs, it is provided as a calibration factor that is stored in the meter.
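The centering step above can be sketched as arithmetic: convert each reading to a dB error against the known field, take the midpoint between the worst high and low errors, and check the centered response against the spec. The values and the ±1 dB spec are hypothetical.

```python
import math

def center_frequency_response(readings, reference, spec_db=1.0):
    """Center a probe's frequency response on its nominal midpoint.
    readings: {freq_mhz: measured value in a known `reference` field}.
    Returns the linear cal factor for the meter, the centered errors
    in dB, and whether every point is within +/- spec_db."""
    errors_db = {f: 20 * math.log10(m / reference)
                 for f, m in readings.items()}
    midpoint_db = (max(errors_db.values()) + min(errors_db.values())) / 2
    centered = {f: e - midpoint_db for f, e in errors_db.items()}
    in_spec = all(abs(e) <= spec_db for e in centered.values())
    cal_factor = 10 ** (-midpoint_db / 20)  # stored in the meter
    return cal_factor, centered, in_spec

# Hypothetical readings at three calibration frequencies, 10 V/m field
readings = {10: 9.0, 100: 10.5, 1000: 11.0}
cal_factor, centered, in_spec = center_frequency_response(readings, 10.0)
```

After centering, the errors fall symmetrically above and below zero, which is the "readings across the band fall above and below this point" condition in the steps above.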
Meters normally have a single adjustment for "gain," assuming that all probes use the same input. Some meter designs have two inputs. In these cases, each of the inputs must be adjusted separately. Older analog meters have potentiometers that are manually adjusted so that the meter reads correctly with a particular input level. Modern digital meters store the "gain" as a value that is used by the microprocessor to automatically compensate for errors that are based on the levels coming from the probe. For example, Narda’s 8700 series meters are designed so that a 1-Volt input represents full scale from any probe (the probes all have amplifiers). Thus, it is important that the meter accurately display a value equal to half the probe's rating when there is an input of 0.500 Volts.
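The 1-volt-equals-full-scale convention described for the Narda 8700 series can be illustrated with a sketch. The linear scaling and the probe rating are assumptions for the example; only the 1 V = full scale and 0.500 V = half rating facts come from the text.

```python
def displayed_value(input_volts, full_scale_rating, gain=1.0):
    """Digital meter display under a 1-volt-full-scale convention:
    `gain` stands in for the stored calibration value the
    microprocessor applies to the probe's output level."""
    return input_volts * gain * full_scale_rating

# Hypothetical probe rated 200 V/m full scale:
# an input of 0.500 V should display half the rating, 100 V/m
reading = displayed_value(0.500, 200.0)
```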
Some survey instruments are calibrated as a set. The probe is connected to the meter and placed in a test fixture at a specific field intensity. The adjustments, either mechanical or digital, are then made inside the meter. This is a less expensive way of calibrating equipment. The downside is that the calibration is lost if the probe and meter are separated. Even identical models of the same probe or meter cannot be substituted without calibrating the equipment again.