
General Introduction to Standards

Every industrial country requires a sound infrastructure for measurement. This is reflected in the increasing importance given to quality schemes and accreditation, for example the ISO 9000 series. Properly authenticated measurements are required in support of regulation (for example, for environmental assessment and for health care), and to underpin good product design and efficient manufacturing in demanding and competitive markets.

However, measurements are only meaningful if they are performed in a technically sound manner and if they can be related to common standards of measurement. The general requirements for a “good-quality” measurement include:

  • expertise in the use of the instrument (including any necessary training/certification);
  • a calibrated instrument or artefact (with the calibration traceable to agreed standards);
  • a standard method and procedure for using the instrument.

The first of these relates to the knowledge, experience and training of the individual undertaking the measurement, which inevitably influence the ability to obtain high-quality results. However, ensuring that different individuals consistently obtain comparable results requires a calibrated instrument and an agreed procedure, and it is these two influences that are the subjects of this introduction.

In the measurement context considered here, the word “standard” is used with two meanings. Firstly, it refers to a calibration standard – a method (or occasionally an artefact, as is the case for the kilogram) which is used to provide traceability back to a common benchmark. Secondly, it may refer to a specification standard – a written procedure describing the method for undertaking a measurement. Both of these are discussed here.
 

I. Calibration standards
 

In most industrialised countries, calibration standards are provided through a hierarchical infrastructure, at the head of which are the primary national standards realised and maintained by national metrology institutes (NMIs). The National Institute of Standards and Technology (NIST) in the USA and the National Physical Laboratory (NPL) in the UK are examples of such NMIs.

Traceability and uncertainty
Measurements should ideally be traceable to the standards held at the NMI. This concept of traceability of measurements can be defined as [1]:

the property of a result of a measurement whereby it can be related to appropriate standards, generally international or national standards, through an unbroken series of comparisons.

Traceability requires that the instruments used for measurement (for example, the hydrophones used in acoustic measurements) be calibrated such that the calibration may be traced back to national standards, either by a calibration undertaken by the NMI, or by an accredited laboratory whose standards are themselves traceable.

When comparing two measurements (or comparing one measurement with a required tolerance or specification), it is important to take full account of the measurement accuracy. When making a measurement, it is not generally possible to identify the absolute error, since the true value of the quantity being measured is not known. Instead, an estimate is made of the uncertainty in the measurement.

The uncertainty may be defined as:

an estimate of the range of values within which the true value is considered to lie to a specified degree of confidence (for example, for a confidence level of 95%).

Analysis of uncertainty is a well-established discipline with standard procedures, such as those set out in the international Guide to the Expression of Uncertainty in Measurement (GUM) [2].
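
As a minimal illustration of the idea of a confidence level (the numerical values and the coverage factor below are assumed for the purpose of the example, not taken from any real calibration), the following sketch shows how a combined standard uncertainty is expanded by a coverage factor of k = 2 to give an interval at a confidence level of approximately 95%:

    # Illustrative sketch only: the measured value and combined standard
    # uncertainty u_c are hypothetical numbers, not real calibration data.
    measured_value = -211.3   # e.g. a hydrophone sensitivity, dB re 1 V/uPa (assumed)
    u_c = 0.25                # combined standard uncertainty (one standard deviation), dB
    k = 2.0                   # coverage factor giving ~95% confidence for a normal distribution

    U = k * u_c               # expanded uncertainty
    lower, upper = measured_value - U, measured_value + U
    print(f"Result: {measured_value} dB, U = {U:.2f} dB (k = {k})")
    print(f"True value considered to lie within [{lower:.2f}, {upper:.2f}] dB at ~95% confidence")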

Note that uncertainty estimation should be distinguished from the natural variation in the quantity being measured that might occur due to the quantity itself varying. For example, the level of underwater ambient noise will vary with sea-state, and the noise radiated by a vessel may vary depending on the speed and engine loading.

There are two general classes of uncertainty, categorised in terms of how they are estimated. Type A uncertainty is sometimes described as the “random uncertainty” or degree of repeatability, and may be assessed by making repeated measurements of a quantity and examining the statistical spread in the results. However, it may not be possible to make repeated measurements if the event being measured is unique. The Type A uncertainty is a measure of the precision in the measurement – high precision is obtained if the measurements are repeatable with little dispersion in results.
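
As a sketch of a Type A evaluation (the readings below are invented purely for illustration), the standard uncertainty of the mean is obtained from the scatter of repeated readings:

    import statistics

    # Hypothetical repeated readings of the same quantity (values are illustrative only)
    readings = [102.1, 101.8, 102.4, 102.0, 101.9, 102.2]

    n = len(readings)
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)      # sample standard deviation of the readings
    u_A = s / n ** 0.5                  # Type A standard uncertainty of the mean

    print(f"mean = {mean:.2f}, s = {s:.3f}, u_A = {u_A:.3f}")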

The second category is the Type B uncertainty, which is sometimes referred to as the “systematic uncertainty”, and represents the potential for systematic bias in a measurement (caused by incorrect instrument calibration, for example). This category of uncertainty cannot be assessed using repeated measurements, and must instead be evaluated by considering the factors that could influence the measurement accuracy.
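
A common Type B evaluation, sketched below with assumed numbers, is to take the limits quoted on a calibration certificate or instrument specification, assume a rectangular distribution, and divide the half-width by √3 to obtain a standard uncertainty; uncorrelated Type A and Type B components are then combined in quadrature, as described in the GUM [2]:

    import math

    # Hypothetical Type B contribution: limits of +/- a are quoted with no
    # further information, so a rectangular distribution is assumed.
    a = 0.3                        # half-width of the quoted limits (illustrative units)
    u_B = a / math.sqrt(3)         # Type B standard uncertainty for a rectangular distribution

    # Combine with an (assumed) Type A component in quadrature to give the
    # combined standard uncertainty for uncorrelated contributions.
    u_A = 0.12                     # illustrative Type A component, same units
    u_c = math.sqrt(u_A ** 2 + u_B ** 2)
    print(f"u_B = {u_B:.3f}, combined u_c = {u_c:.3f}")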

Validation of calibration standards
Whilst traceability of measurements to primary standards provides some quality assurance, there remains the question of how to determine that the primary standard itself is accurate. Since it provides the benchmark for secondary and tertiary calibrations, the primary standard cannot simply be compared to a method that is lower down the standards hierarchy. Instead, an assessment of the absolute accuracy must be made, which can be achieved in three ways:

  1. a systematic study of the uncertainties in the primary standard calibration method, which may involve performing experiments to determine the size of the uncertainty contributions;
  2. where possible, a comparison with another independent absolute calibration method, preferably one based on a different physical principle (and therefore with few common sources of uncertainty);
  3. comparisons with the NMIs of other countries, which help to harmonise standards across national boundaries and can lead to the discovery of previously unknown sources of error.

The result of these comparisons may be expressed in terms of the degree of equivalence between the different countries, along with the associated uncertainties (see, for example, [3]). This type of exercise provides greater confidence in the individual results and allows for mutual recognition of calibrations undertaken in different countries.
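
As a rough sketch of how such a comparison might be summarised (the laboratory names and values are invented, the use of a simple weighted mean as the reference value is an assumption, and the treatment ignores the correlation between each result and the reference value that a full key-comparison analysis would take into account):

    import math

    # Hypothetical results: each laboratory reports a value and a standard uncertainty.
    results = {"Lab A": (100.2, 0.15), "Lab B": (99.9, 0.20), "Lab C": (100.1, 0.10)}

    # Reference value taken here as the uncertainty-weighted mean (an assumption).
    weights = {lab: 1.0 / u ** 2 for lab, (x, u) in results.items()}
    x_ref = sum(w * results[lab][0] for lab, w in weights.items()) / sum(weights.values())
    u_ref = 1.0 / math.sqrt(sum(weights.values()))

    for lab, (x, u) in results.items():
        d = x - x_ref                               # degree of equivalence with the reference value
        U_d = 2.0 * math.sqrt(u ** 2 + u_ref ** 2)  # expanded (k = 2); correlation neglected
        print(f"{lab}: d = {d:+.3f}, U(d) = {U_d:.3f}")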
 

II. Specification standards
 

Specification standards are documents describing the procedures to be followed when undertaking measurements. The highest level of such standards are the international standards produced under the auspices of organisations such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

ISO produces standards covering physical measurements; with regard to underwater acoustics these include the measurement of environmental noise and of noise radiated from specific sources such as ships. IEC produces standards covering electrical measurements, including the calibration of instruments such as hydrophones. ISO and IEC standards are typically adopted as national standards within member countries.

Sometimes, national standards bodies within individual countries produce their own standards, for example the American National Standards Institute (ANSI) in the USA [4], and the British Standards Institution (BSI) in the UK.

In addition, informal guidelines and good practice guides may be produced, which are very useful in promoting good practice, although they do not have the status of international or national standards.
 

III. Joint standards
 

There are several other standards which are of relevance to measurement in general, and which are joint publications by several standards bodies including ISO and IEC. These are:

JCGM 200:2012, International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition.

JCGM 100:2008, Evaluation of measurement data – Guide to the Expression of Uncertainty in Measurement (GUM).

Both are available from www.bipm.org.

References

[1]   JCGM 200:2012, International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition, joint publication by BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, 2012. Available from www.bipm.org.

[2]   JCGM 100:2008, Evaluation of measurement data – Guide to the Expression of Uncertainty in Measurement (GUM), joint publication by BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, 2008. Available from www.bipm.org.

[3]   S. P. Robinson, P. M. Harris, J. Ablitt, G. Hayman, A. Thompson, A. Lee van Buren, J. F. Zalesack, R. M. Drake, A. E. Isaev, A. M. Enyakov, C. Purcell, Z. Houqing, W. Yuebing, Z. Yue, P. Botha, D. Krüger, “An international key comparison of free-field hydrophone calibrations in the frequency range 1 to 500 kHz”, J. Acoust. Soc. Am., 120, 1366-1373, 2006.

[4]   S. B. Blaeser, “International standards development and the U.S. technical advisory group process”, Acoustics Today, 8(1), January 2012.