


Vol. 7 No. 3 September 2012

Welcome to NCSLI Measure, a metrology journal published by NCSL International for the benefit of its membership.

24 SPECIAL FEATURE: Historical Development of Units Based on Fundamental Constants or Invariants of Nature, by Terry Quinn

32 Measurement Techniques for Evaluating Current Range Extenders from 1 Ampere to 3000 Amperes, by Marlin Kraft

38 North American 100 Ampere Interlaboratory Comparison, by Jay Klevens
48 Software Tools for Evaluation of Measurement Models for Complex-valued Quantities in Accordance with Supplement 2 to the GUM, by Cho-man Tsui, Aaron Y. K. Yan, and Hing-wah Li
56 Precise Blackbody Sources Developed at VNIIOFI for Radiometry and Radiation Thermometry, by Sergey A. Ogarev, Boris B. Khlevnoy, Boris E. Lisiansky, Svetlana P. Morozova, Mikhail L. Samoylov, and Victor I. Sapritsky
70 A Sub-Sampling Digital PM/AM Noise Measurement System, by Craig W. Nelson and David A. Howe

3 Letter From the Editor
4 NMI News
14 Metrology News
74 New Products and Services
76 Advertisers Index

NCSL International
Craig Gulka, Executive Director
2995 Wilderness Place, Suite 107
Boulder, CO 80301
(303) 440-3339


NCSLI Measure J. Meas. Sci. |

NCSLI Measure (ISSN 1931-5775) is a metrology journal published by NCSL International (NCSLI). The journal's primary audience consists of practitioners and researchers in the field of metrology, including laboratory managers, scientists, engineers, statisticians, and technicians. NCSLI Measure provides NCSLI members with practical and up-to-date information on calibration techniques, uncertainty analysis, measurement standards, laboratory accreditation, and quality processes, as well as providing timely metrology review articles. Each issue contains peer-reviewed metrology articles, new product and service announcements, technical notes, national metrology institute news, and other metrology information. Author instructions, as well as information about submitting new product/service announcements or purchasing advertising, are available on the NCSLI website.

Managing Editor:
Michael Lombardi, National Institute of Standards and Technology (NIST), USA

Associate Editors:
Dr. Klaus Jaeger, Jaeger Enterprises; Harry Moody, Harry J. Moody Enterprises; Dr. Leslie R. Pendrill, SP Technical Research Institute of Sweden; Dr. Alan Steele, National Research Council of Canada

NMI/Metrology News Editor:

Dr. Richard B. Pettit, Sandia National Laboratories (retired)

New Product/Service Announcements & Advertising Sales:

Linda Stone, NCSL International, 2995 Wilderness Place, Suite 107, Boulder, CO 80301-5404 USA

Technical Support Team:

Norman Belecki, NIST (Retired), USA
Carol Hockert, National Institute of Standards and Technology (NIST), USA
Dr. James K. Olthoff, National Institute of Standards and Technology (NIST), USA
Dr. Salvador Echeverria-Villagomez, Centro Nacional de Metrologia (CENAM), MX
Dr. Seton Bennett, National Physical Laboratory (NPL), UK
Dianne Lalla-Rodrigues, Antigua/Barbuda Bureau of Standards, Antigua W.I.
Dr. Angela Samuel, National Measurement Institute (NMI), Australia
Pete Unger, American Association for Laboratory Accreditation (A2LA), USA
Dr. Michael Kühne, Bureau International des Poids et Mesures (BIPM), FR

Copyright © 2012, NCSL International. Permission to quote excerpts or to reprint any figures, tables, and/or text from articles (Special Reports/Features, Technical Papers, Review Papers, or Technical Notes) should be obtained directly from the author. NCSL International, for its part, hereby grants permission to quote excerpts and reprint figures and/or tables from articles in this journal with acknowledgment of the source. Individual teachers, students, researchers, and libraries in nonprofit institutions, and those acting for them, are permitted to make hard copies of articles for use in teaching or research, provided such copies are not sold. Copying of articles for sale by document delivery services or suppliers, or beyond the free copying allowed above, is not permitted. Reproduction in a reprint collection, or for advertising or promotional purposes, or republication in any form requires permission from one of the authors and written permission from NCSL International.


Letter From the Editor

We are pleased to present a special feature that was graciously written for this issue of Measure by Terry Quinn, the Emeritus Director of the Bureau International des Poids et Mesures (BIPM). Dr. Quinn's paper, entitled "Historical Development of Units Based on Fundamental Constants or Invariants of Nature," is based upon the lecture he presented to the American Physical Society on March 1, 2012 in Boston, Massachusetts. It also contains some material from his recent book, From Artefacts to Atoms: The BIPM and the Search for Ultimate Measurement Standards (Oxford University Press, USA: November 2011).

History in general is a fascinating subject, and the history of science and metrology is no exception. When well presented, history can be as compelling a story as any work of fiction; in fact, the word "story" is part of the word "history." As we all know, there are stories behind the development of every standard, every instrument, and every measurement technique that we practice in our laboratories. It is important to know these stories, because they not only reveal where we have been, but also provide clues as to where we are going. Perhaps most importantly, a careful examination of history can help us to better understand the way things are now.

A skilled historian, Dr. Quinn explains that the recent (October 2011) proposal to change the International System of Units (SI) to a system based on fundamental constants and invariants of nature is the culmination of more than 200 years of advances in physics and metrology. He deftly describes this progress, beginning with a 1791 report that outlined the original units of the metric system, and continuing to the developments of today. His paper is a must read that we are sure you will enjoy.

This issue also includes five technical papers covering a variety of measurement topics. We are especially pleased to publish a comprehensive review paper written by Dr.
Sergey Ogarev and his colleagues at the All-Russian Research Institute for Optical and Physical Measurements (VNIIOFI), an acknowledged world leader in the field of blackbody sources. The paper, entitled "Precise Blackbody Sources Developed at VNIIOFI for Radiometry and Radiation Thermometry," describes a wide range of radiometric, photometric, and radiation thermometry standards for the entire UV-VIS-IR spectrum, covering the temperature range from 80 K to 3500 K.

Cho-man Tsui and his colleagues at the Standards and Calibration Laboratory (SCL) in Hong Kong have contributed a paper that describes the software tools they've developed for the evaluation of complex-valued measurement models, as described in Supplement 2 of the ISO Guide to the Expression of Uncertainty in Measurement, better known as the GUM. Published less than a year ago, in October 2011, Supplement 2 deals with measurement models that have more than one output quantity, such as the S-parameters that are common in electrical metrology. We believe that this is one of the first journal articles published on the application of Supplement 2, and expect it to be a lasting contribution to the literature. This paper marks the second fine contribution to the journal from SCL, as Tsui and his colleagues recently won the 2011 NCSLI Measure Editor's Choice Award for a paper that described the use of high speed video recordings to calibrate stopwatches (vol. 6, no. 3, pp. 64-71, September 2011).

Craig Nelson and David Howe of NIST describe a sub-sampling digital noise measurement system for phase modulation and amplitude modulation noise. This state-of-the-art measurement system is easy to use and has a phase noise floor of less than -180 dBc/Hz at 10 MHz.

Finally, this issue features two papers related to shunt calibrations. Jay Klevens of Ohm-Labs provides results from a 100 ampere interlaboratory comparison (ILC) that involved two artifacts and 17 participating laboratories.
Marlin Kraft of NIST, who provided assistance with the ILC, has also contributed a very useful technical note that describes measurement techniques for evaluating current range extenders from 1 to 3000 amperes.

I'm writing this letter about two weeks after returning from the annual NCSLI Workshop & Symposium in Sacramento, California, and the meeting is still fresh in my mind. Although attendance was somewhat down this year, it was an enjoyable and informative meeting, and I had the opportunity to talk to many old friends and to meet many of you for the first time. Even so, the conference went by too quickly, and I was unable to talk with everyone that I wanted to meet. In addition, due to the concurrent sessions, I was unable to attend all of the presentations that I wanted to attend.

If you would like to discuss your present or future work, please feel free to send me a note at any time. Your opinions, comments, and suggestions about the journal are always appreciated, and we especially welcome your papers.

Sincerely,

Michael Lombardi
Managing Editor

NCSLI Measure, 2995 Wilderness Place, Suite 107, Boulder, CO 80301-5404 USA


New BIPM Director in 2013

On October 1, 2012, Dr. Martin Milton will join the Bureau International des Poids et Mesures (BIPM) as Deputy Director/Director Designate. He will replace the present Director, Dr. Michael Kühne, on January 1, 2013.

Dr. Martin Milton, Director Designate, BIPM.

Dr. Milton received a BA in Physics from Oxford University in 1981 and a Ph.D. in Physics from Southampton University in 1990, followed by an MBA from the London Business School in 1991. He joined the National Physical Laboratory (NPL) in 1981 as a Scientific Officer, became a Principal Research Scientist in 1994 and an NPL Fellow in 1998. Since 2005 he has been the Lead Scientist for the Gas Metrology and Trace Analysis Group at NPL. Dr. Milton is an active participant in the meetings of the Consultative Committee for Amount of Substance (CCQM) and the Consultative Committee for Units (CCU) of the International Committee for Weights and Measures (CIPM). He is a Fellow of the International Union of Pure and Applied Chemistry (IUPAC) and a Fellow of the Institute of Physics of the UK.

Dr. Alan Steele Appointed General Manager of NRC Measurement Science and Standards

The National Research Council of Canada (NRC) has announced that Dr. Alan Steele has been appointed Canada's Chief Metrologist and the General Manager of NRC Measurement Science and Standards, which includes the former Institute for National Measurement Standards (INMS), as well as expanded programs to address measurement challenges in emerging industries. Measurement Science and Standards is one of four portfolios that are part of the Emerging Technologies Division.

Dr. Alan Steele, General Manager, INMS.

Alan holds a B.S. in Physics from the University of Toronto (1984) and a Ph.D. in solid-state physics from Simon Fraser University (1988). He held a Natural Sciences and Engineering Research Council of Canada (NSERC) post-doctoral fellowship at King's College, London, UK, before joining NRC, first as a Research Associate with the Institute for Microstructural Sciences in 1991 and then as a Research Officer in the Thermometry Group of INMS in 1993. In 2005, following a two-year sabbatical working as a development program manager at a telecommunications start-up company in Ottawa, Alan became Director of Metrology at the NRC-INMS, where he oversaw research and measurement science activities for the Institute's physical and chemical metrology research programs.

Alan is well known internationally for his work within the Sistema Interamericano de Metrologia (SIM) Regional Metrology Organization, where he was the past Chair of the Technical Committee; on the Joint Committee of the Regional Metrology Organizations (JCRB); and as NRC representative to the Board of the National Conference of Standards Laboratories International (NCSLI). Alan's enthusiasm for learning and educational development was one factor that led to both his selection as a Co-Director of the Bureau International des Poids et Mesures (BIPM) Summer School in 2008 and Chair of the BIPM Workshop on Metrology at the Nanoscale in 2011. For more information, visit the NRC website.

Germany Sends Clock Signal 920 km Using Optical Fiber

Illustration showing the intercomparison of optical clocks at PTB and MPQ (by a fictitious physical connection).

Physicists in Germany have sent optical clock signals over a distance of 920 km using an optical fiber, demonstrating that the relative frequency remains stable to parts in 10^19. As well as supporting the development of highly accurate optical clocks, the breakthrough could also be used in a range of commercial and scientific applications, including precision spectroscopy, geodesy and very-long-baseline

astronomy.

Optical clocks are like conventional microwave atomic clocks but operate at much higher frequencies, at about 10^15 Hz rather than 10^10 Hz. They are therefore potentially much more accurate than microwave atomic clocks, which use a specific electronic transition frequency of an atom as a frequency standard. The best optical clocks have uncertainties in their frequencies of less than one part in 10^17.

A team including Gesine Grosche, Physikalisch-Technische Bundesanstalt (PTB), and Katharina Predehl, Max Planck Institute for Quantum Optics (MPQ), sent a stable optical signal 920 km along fibers linking PTB with MPQ. The experiment posed two significant technological challenges. The first was to amplify the relatively weak signal at several points along the route to ensure that it could be measured upon arrival. The second was to ensure that physical disturbances to the optical properties of the fiber, which are caused by local changes in temperature or vibrations, did not shift the frequency of the signal.

International time standards operated in various countries around the world are intercompared using satellite links to ensure that they all produce the same result. Comparisons are also important for basic research, particularly for testing the fundamental physical laws and fundamental constants that are involved in the operation of atomic clocks. However, satellite links are only able to compare clocks with uncertainties of parts in 10^15 over reasonably short averaging times, and therefore cannot be used to compare optical clocks. Therefore, scientists are investigating the use of optical fibers to intercompare clocks. Earlier comparisons have been performed over shorter distances with similar uncertainties. In 2009, Gesine Grosche and colleagues at PTB transmitted an optical signal 146 km over an optical fiber with relative uncertainties of parts in 10^19.
These uncertainties were matched by Atsushi Yamaguchi and colleagues at Japan's National Institute of Information and Communications Technology (NICT) in 2011, over a distance of 120 km. Looking to the future, the optical fiber network could be extended to allow standards laboratories across Europe to compare their optical clocks. Some European countries have already taken steps to ensure that they have access to the appropriate fiber networks. For more information, contact Dr. Gesine Grosche or Katharina Predehl at PTB.
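As a back-of-the-envelope illustration of what fractional frequency stabilities like these mean, the sketch below converts a fractional instability into the elapsed time needed to accumulate a given timing error; the function name and example numbers are illustrative, not figures from the experiment:

```python
def time_to_accumulate_error(error_s, fractional_instability):
    """Elapsed time over which a constant fractional frequency
    offset df/f accumulates a given time error: t = error / (df/f)."""
    return error_s / fractional_instability

# At 1 part in 10^19, accumulating just 1 ns takes ~10^10 s, about 317 years.
years = time_to_accumulate_error(1e-9, 1e-19) / (365.25 * 24 * 3600)
print(round(years))  # → 317
```

This is why satellite links, limited to parts in 10^15, cannot resolve the differences between optical clocks in any practical averaging time.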

VSL and Bosnia/Herzegovina Sign MoU

Dr. Nellie Schipper from VSL (left) and Mr. Zijad Demi from IMBIH (right) at the MoU signing ceremony.

VSL (Dutch Metrology Institute) and IMBIH (Institute of Metrology of Bosnia and Herzegovina) signed a Memorandum of Understanding (MoU) in March 2012. Both institutes emphasize the importance of collaboration in the field of metrology and recognize that this cooperation between the two metrology institutes will create favorable conditions for trade, industry, health, law enforcement and economic relations. The main points of the memorandum are:

- To sustain the development of mutual cooperation in the field of metrology and metrology related activities;
- To exchange information on issues related to metrology, in particular on the capacity of metrological infrastructures, and to plan projects of common interest for their improvement;
- To cooperate on improving the quality system of IMBIH and its laboratories, including participation in key and bilateral comparisons, laboratory accreditation, registering Calibration and Measurement Capabilities (CMCs) in the Key Comparison Database (KCDB) of the BIPM, conducting training sessions and workshops to improve the competence of staff, and implementing research projects in the field of metrology;
- To cooperate in drafting relevant legislation and/or implementation papers in the field of metrology and other related areas in Bosnia and Herzegovina according to EU legislation;
- To exchange opportunities in metrology and related fields;
- To decide independently and individually on the recognition of the results of testing, measurement or calibration in order to issue their respective certificates, through the development of mutual trust and familiarity with the satisfactory capacity of the other Party.

For more information, visit the VSL website.

NIST's Elizabeth Gentry Receives Arthur S. Flemming Award

Elizabeth Gentry, of the National Institute of Standards and Technology (NIST) Office of Weights and Measures, received the 2011 Arthur S. Flemming Award for her exceptional leadership as Metric Program Coordinator. The award was created in 1948, and twelve Flemming awards are given each year. Elizabeth leads the nation's efforts to implement the use of the International System of Units (SI) in trade and commerce through voluntary conversion. She advises federal agencies and businesses as they address SI related issues and works to ensure that the laws of other countries allow U.S. products into foreign markets. She also works with states to eliminate barriers to the use of SI units on packaged goods and in commercial transactions. Ms. Gentry has developed SI teaching tools and publications and has conducted educational outreach to school teachers and students across the country. One tool she created for teaching the SI, which is exceptionally popular with teachers, is the Metric Estimation Game, in which participants earn points by guessing the mass or dimensions of everyday objects.


Better accuracy for a broader workload

The 52120A Transconductance Amplifier lets you accurately test and calibrate a broad workload at full current range.

The 52120A expands your calibration capabilities to a broad workload of power and energy meters, clamp meters, CTs and Rogowski coils, up to 6000 A. Use it with all Fluke Calibration calibrators, including the 5080A, 9100, 55XX and 57XX Series. For enhanced accuracy, use it closed-loop with the 610X Electrical Power Standards.

Output ranges: 2 A, 20 A, 120 A
Frequency to 10 kHz
Generate 3,000 or 6,000 amps with accessory coils
Industry-leading amplifier accuracy

Get the details:

Fluke Calibration. Precision, performance, confidence.

© 2011 Fluke Calibration. Specifications are subject to change without notice. Ad no. 4080184A

NMI NEWS

Dr. Kamal Hossain, NPL, New Chairperson of EURAMET

Dr. Kamal Hossain, NPL's Director, Research and International, has officially taken over as EURAMET Chairperson following his election in 2011. EURAMET (European Association of National Metrology Institutes), which is the Regional Metrology Organisation (RMO) of Europe, coordinates cooperation between each of the National Metrology Institutes (NMIs) and manages the European Metrology Research Programme (EMRP).

Kamal Hossain, NPL, new Chairperson of EURAMET.

As a member of the EURAMET Board of Directors since 2009, Kamal Hossain led the development of the EURAMET 2020 Strategy (see the March 2012 issue of NCSLI Measure), which outlines the vision, mission and objectives of the organization. He will be the Chairperson for the next three years. Kamal joined the National Physical Laboratory (NPL) after completing his Ph.D. research in Physics at the Cavendish Laboratory, Cambridge University. His research expertise is in high-resolution electron microscopy, relating microstructures to materials performance and behavior in aggressive environments. He is a Fellow of the Institute of Physics and the Institute of Nanotechnology, and is a Visiting Professor at the University of Surrey. Since 2000, Kamal has been responsible for the strategic development of NPL's scientific research, focusing on developing partnerships with academia, business and government. In 2008, he took on the additional role of leading NPL's international work, and he was honored with an Order of the British Empire (OBE) for Services to Industry by the Queen in June 2009.

NMI Australia New Fact Sheet on Measurement Safety

The National Measurement Institute (NMI) Australia has issued a Measurement Safety Fact Sheet describing its role in the area of measurements supporting safety, in areas such as food, consumer products, environment, and highways. The document, which is intended for the general public, points out that where new products, services and technologies are being introduced, the public must be assured of their safety. The following items are examples of NMI's calibration services that support safety:

Food Safety: NMI produces standards used for testing residues in food, such as heavy metals in seafood and pesticides in fruits and vegetables. Since 2008, procedures for testing melamine in infant formula and other foods have been available. Finally, standards for food allergens, including gluten, peanut, hazelnut, almond, egg, milk, soy, seafood and sesame, have been developed.

Children's Safety: NMI has developed standards and tests for toxic elements, such as selenium, arsenic, mercury, and lead, that can be present in children's toys.

Road Safety: Standards and measurements supporting light-emitting diodes (LEDs) used in traffic signals and signs ensure they meet required standards. Certified reference standards of aqueous ethanol are used to calibrate breathalyzers that test motorists. Finally, standards are available for calibrating speed measuring devices, such as LIDAR instruments.

Environmental Safety: NMI has a facility dedicated to analyzing dioxins, polychlorinated biphenyls (PCBs), flame retardants and other persistent organic pollutants (POPs) at ultra-trace levels within a range of materials.

For a copy of the Measurement Safety Fact Sheet, visit the NMI Australia website.

NPL Developing Optical Technique for Acoustic Measurements

National Physical Laboratory (NPL) scientists have made the first measurements of airborne acoustic free-field pressures using a laser technique based on photon correlation spectroscopy. This optical measurement technique directly measures particle velocities and realizes the acoustic pascal, the SI derived unit of pressure. The technique, which could potentially be used to calibrate microphones, is being developed as part of a wider initiative to base future primary acoustic standards on optical methods.

The new technique uses two lasers that intersect at a point in space and produce an interference fringe pattern. When sound is produced by a source (such as a loudspeaker), the intensity of light that is scattered by particles in the air changes as they pass through the fringe pattern. The intensity change can be detected, and the acoustic free-field pressure calculated.

The current method for calibrating microphones relies on their reciprocal nature. Part of the process requires one microphone to be used as a sound source and coupled to a second receiving microphone. This set-up, together with complex modeling of the acoustic coupling between the microphones, eventually leads to an evaluation of the microphone sensitivity, which is then traceable to a number of dimensional and electrical quantities. However, this method imposes limits on the types of microphone that can be calibrated, meaning that non-standard microphones and new technologies such as MEMS (microelectromechanical) sensors cannot be tested and calibrated. Optical techniques may be used to calibrate these devices based on their ability to directly measure the pascal at a point within a sound field.

The use of optical techniques and the improved calibration they offer may also help to accelerate the development of new microphone technologies and acoustic devices. For more information, contact Triantafillos Koukoulas at NPL.

VSL Frequency Comb Laser Accurately Measures Absolute Distances

Researchers from the Dutch National Metrology Institute VSL and Delft University of Technology have designed an optical ruler that uses 9 000 frequencies of laser light simultaneously to measure absolute distances. The results of their work were published in the May issue of Physical Review Letters (S. A. van den Berg, S. T. Persijn, G. J. P. Kok, M. G. Zeitouny, and N. Bhattacharya, "Many-wavelength interferometry with thousands of lasers for absolute distance measurement," Phys. Rev. Lett., vol. 108, 183901, May 2012). Using a frequency comb laser, the researchers demonstrated absolute distance measurements with an accuracy of tens of nanometers, combined with a very large non-ambiguity range of about 15 cm. All the frequencies were individually resolved using a spectrometer with an unprecedented spectral resolution. The technique merges multi-wavelength interferometry and spectral interferometry within a single scheme, and allows for high accuracy distance measurement without the need to incrementally measure the displacement. This is a
Overview of the setup and typical image of the spectrometer. The distance is derived from the interference pattern as measured with the spectrometer.

huge advantage compared to single wavelength measurements, since in many setups it's not possible to have a linear translation from the starting point to the end point. The present technique is expected to be easily extendible to hundreds of meters or even kilometers.

Absolute distance measurements with such a high accuracy are, for example, essential in plans for future optical space telescopes. Using different satellite telescopes, a high resolution image can be obtained by combining the separate images acquired by each telescope. However, this is only possible if the distance between the satellites is known with a very high accuracy. The research thus constitutes an important step towards reaching this goal. For more information, contact Steven van den Berg at VSL.
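The quoted ~15 cm non-ambiguity range follows directly from the pulse repetition rate of the comb. A minimal sketch, assuming a 1 GHz repetition rate (a plausible value consistent with 15 cm, not one stated in the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def nonambiguity_range_m(rep_rate_hz):
    """Non-ambiguity range of comb-based ranging, set by the
    pulse-to-pulse spacing folded over the round trip: L = c / (2 f_rep)."""
    return C / (2.0 * rep_rate_hz)

print(nonambiguity_range_m(1e9))  # ≈ 0.15 m, the ~15 cm quoted above
```

Distances beyond this range are folded back, which is why combining many resolved comb frequencies (rather than a single wavelength, whose non-ambiguity range is only half a wavelength) makes absolute, non-incremental distance measurement possible.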

Calibration Services You Can Count On

Your laboratory standards are the basis for your organization's most accurate measurements.

Examples of Standards Calibrated: Multi-Product Calibrators, Measuring Bridges, Reference Multimeters, High Voltage Dividers, Voltage Standards, Standard Resistors, Standard Capacitors, Standard Inductors, SPRTs/PRTs/RTDs, Metrology Temperature Wells, Super Micrometer, Gage Blocks, Dead Weight Tester, Torque Standards, Force Standards, Two-Pressure Humidity Standard, Height Gages, Thread Set Plugs, Measuring Wires, Plain Plug & Ring Gages

Essco is an extraordinarily diverse commercial laboratory serving many industries as well as other laboratories. Our capabilities far surpass those of most commercial laboratories.

Super Micrometer is trademarked to Pratt & Whitney.

Accredited to ISO/IEC 17025:2005 by NVLAP (Lab Code 200972-0). Over 35 metrology technicians on staff. 800.325.2201. Ask about online reporting via EsscoNet.


NMI NEWS

PTB Tactile Probe for Micro-CMMs

Novel probe tip mounted on an exchangeable head. (Credit: PTB)

Micro-coordinate measuring machines (micro-CMMs) can measure small toothed gears, injector nozzles or micro-channels with measurement uncertainties below 0.5 µm. While the probing elements in commercial systems are relatively expensive, Physikalisch-Technische Bundesanstalt (PTB) has developed a 3D micro-probe using hybrid silicon technology. Since a large number of probes can be produced simultaneously, the cost is much reduced. In addition, a novel exchanging device mounted on the probe head means that micro-probes are easily changed and characterized with short setup times. For more information, contact Dr. Sebastian Bütefisch at PTB.

NIST Laser Power Capability Increased to 100 kW

A schematic diagram of the new 100 kW laser power meter showing the specially designed rotating mirror and water-cooled copper cone with black coating.

The new NIST High Power Meter can measure power at levels of 100 kW and higher. The calibration of high power lasers to within 1 % is needed to support the U.S. laser industry and military. Lasers are used for cutting and welding a wide variety of materials from steel and aluminum to fabrics and plastics, and for drilling tiny holes in devices such as iPhones. In addition, lasers have numerous military uses, such as defusing unexploded land mines. The new calibration system has the same simple design as the 10 kW calibration units: laser radiation enters a cone-shaped cavity


lined with a high-absorption, textured coating. A rotating mirror (custom fabricated for the specific wavelength of the incoming beam) mounted toward the back of the unit reflects the incoming beam throughout the device's interior. The cavity is surrounded by a hollow-walled copper enclosure through which water from a 100-gallon tank flows at a carefully controlled rate, which is about 10 liters per minute for the 10 kW standard and 80 liters per minute for the 100 kW standard. The average laser power is determined by measuring the temperature difference between the incoming and outgoing water and factoring in the flow rate and heat capacity of the water. By increasing the flow rate, the researchers expect to be able to make future measurements potentially up to the GW power level.

Finding a suitable material to coat the inside of the cavity entailed a long and complex research process. The team wanted to take advantage of the profound blackness and damage-resistance of carbon nanotubes, as well as the heat tolerance of ceramics. However, the resulting composite nanotube-ceramic coating would not stick to copper. The solution was to grind the nanotube-ceramic material into a powder and mix it with a potassium silicate (water glass) binder. The spray-on coating, which holds the world record for damage threshold in this application, is easy to apply and highly robust.

Both the new and old NIST power meters are based on beam absorption and water calorimetry, but the similarities end there. The old unit weighs about 272 kg and is 1.5 meters long. The new 10 kW version is about the size of a beach ball and twice as accurate. Even the 100 kW unit is smaller than a dishwasher. In addition, measurements can be performed in real time, whereas in the previous system a measurement could only be obtained about every 20 minutes.
Another advantage is that the new system is completely portable and can be shipped using two crates the size of a doghouse: one contains the pump and flow-metering apparatus, while the other holds the sensor inputs and data analysis equipment. For more information about laser power and energy calibrations, contact Marla Dowell at NIST.
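The water-calorimetry arithmetic described above (flow rate, heat capacity, and temperature rise) can be sketched as follows. The flow rates match those quoted in the article, but the ~18 K temperature rise and the round-number properties of water are illustrative assumptions, not NIST calibration data:

```python
def laser_power_watts(flow_lpm, delta_t_k, c_water=4186.0, rho_water=1000.0):
    """Average absorbed laser power from water calorimetry:
    P = (mass flow rate) * (specific heat) * (temperature rise)."""
    mass_flow_kg_per_s = rho_water * (flow_lpm / 1000.0) / 60.0  # L/min -> kg/s
    return mass_flow_kg_per_s * c_water * delta_t_k

# 80 L/min with an assumed ~18 K rise corresponds to roughly 100 kW absorbed.
print(round(laser_power_watts(80, 18.0)))  # → 100464
```

The same relation shows why raising the flow rate extends the range: for a fixed, measurable temperature rise, the measurable power scales linearly with the mass flow.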

NPL Nano-device Accurately Generates Electric Currents

Scanning electron microscope image of the electron pump. The arrow shows the direction of electron pumping. The hole in the middle of the electrical control gates, where the electrons are trapped, is ~0.1 µm across.

PTB Special Anniversary Publications

At part of Physikalisch-Technische Bundesanstalts (PTBs) 125th anniversary year, both the PTB-Mitteilungen (Metrological Journal of PTB) and the mastbe (Science Magazine of PTB) have special articles highlighting PTB. In addition, PTB has reprinted its book series on the history of the institute. Finally, PTB is making the classic experimental physics reference book Praktische Physik by Friedrich Kohlrausch available on their website in electronic form. The original one volume edition of 150 pages in 1870 has grown to a three volume compendium of Practical Physics in the 24th printed edition that appeared in 1996 with about 2460 pages. The electronic version is available at: publikationen/buecher/der-kohlrausch-praktische-physik/ praktische-physik-band-1.html
10 | NCSLI Measure J. Meas. Sci.

A team of scientists at the National Physical Laboratory (NPL) and University of Cambridge has made an electron pump nano-device that selects electrons one at a time and moves them across a barrier, creating a very well-defined electrical current. The device drives electrical current by manipulating individual electrons, one-by-one at very high speed. This technique could replace the traditional definition of electrical current, the ampere, which relies on measurements of mechanical forces on current-carrying wires. The key breakthrough came when scientists experimented with the exact shape of the voltage pulses that control the trapping and ejection of electrons. By changing the voltage slowly while trapping electrons, and then much more rapidly when ejecting them, it was possible to massively speed up the overall rate of pumping without compromising the accuracy. By employing this technique, the team was able to pump almost 109 electrons per second, 300 times faster than the previous record for an accurate electron pump set at the National Institute of Standards and Technology (NIST) in 1996. Although the resulting current of 150 pA is small, the team was able to measure the current with an accuracy of one ppm, confirming that the electron pump was accurate at this level. This result is a milestone in the precise, fast, manipulation of single electrons and an important step towards a re-definition of the unit ampere. As reported in Nature Communications (S.P. Giblin, M. Kataoka, J.D. Fletcher, P. See, T.J.B.M. Janssen, J.P. Griffiths, G.A.C. Jones, I. Farrer, and D.A. Ritchie, Towards a quantum representation of the ampere using single electron pumps, Nature Communications, vol. 3, 930doi:10.1038/ncomms1935, July 2012), the team used a semiconductor quantum dot less than 0.1 m wide to pump electrons through a circuit. The shape of the quantum dot is controlled by voltages applied to nearby electrodes. 
For more information, contact Stephen Giblin (stephen.giblin@ or Masaya Kataoka (
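The pumped current follows directly from I = e·f when one electron is transferred per voltage-pulse cycle. A minimal sketch follows; the 0.94 GHz rate is an illustrative value chosen here to reproduce the reported ~150 pA, not a figure from the paper.

```python
E_CHARGE = 1.602176634e-19  # C, elementary charge (exact in the revised SI)

def pump_current_amps(cycles_per_second: float) -> float:
    """Current from an ideal single-electron pump: I = e * f,
    with one electron transferred per cycle."""
    return E_CHARGE * cycles_per_second

# Illustrative: a pumping rate just under 1 GHz gives ~150 pA.
i_pump = pump_current_amps(0.94e9)
```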

JULY 14-18, 2013



INFO@NCSLI.ORG 303-440-3339

NMI NEWS New NIST Vacuum Calibration System

workhorse sensors for most high-precision vacuum measurements. By combining the two gauges, the system takes advantage of the CDGs extremely fine resolution at low pressure and the RSGs outstanding long term drift stability. Igloos and their related equipment have been successfully sent outside NIST for round-robin international comparisons. Currently, they are not sent to customers, although that could happen in the future. The new automated calibration system can handle almost any vacuum gauge that has a digital or DC output. That is important in view of the multiplicity of gauge types (each decade of pressure has a preferred sensor type) and the recent trend is toward combination gauges, such as thermocouples paired with piezoresistive sensors. For more information, contact Jay Hendricks at:
Jacob Ricker (left) and Jay Hendricks (right) with the new NIST Vacuum Comparison Standard (VCS) and supporting apparatus.

A new, automated Vacuum Comparison Standard (VCS), developed by National Institute of Standards and Technologys (NISTs) Sensor Science Division, can accommodate up to 10 customer gauges of different types at one time. Because calibration times are reduced to two to three hours, the new system provides customers with faster calibration turn around and at a lower cost. The system calibrates gauges over five orders of magnitude in pressure, from 0.65 Pa (5 millitorr) to 130 000 Pa (19 psi). At the low end, the pressures cover plasma etching, chemical vapor deposition, and sputtering. At the high end, the pressures cover airplane/helicopter altimeters and barometric sensors. Until this new system was developed, getting direct NIST traceability for vacuum gauges was a lengthy and expensive process. Calibrating a customer gauge against one of NISTs Ultrasonic Interferometer Manometers (UIMs) usually required about 50 hours of data-acquisition, followed by several days to write the report. Everything was done manually, which resulted in a high cost of about $5,000; in addition, the turnaround time for a customer was approximately eight weeks. Measurement uncertainties in the new VCS range from 0.1 % at 13 Pa to 0.04 % at 1 300 Pa to 0.01 % or better at 130 000 Pa (all k = 2). At those same pressures, the UIM delivers 0.023 %, 0.000 7 %, and 0.000 5 %, respectively. Thus, the new calibration system is designed for customers who do not need the highest uncertainty calibrations or for transfer standards that are accurate to only a few tenths of a percent. The new systems key element is an extremely accurate and stable transfer standard which is calibrated against the primary UIM standard. The transfer standard consists of a set of interconnected components informally called an igloo after the enclosure manufacturer. The device is so accurate that for most common industrial process vacuumgauges, it eliminates the need to directly test customer standards against a primary UIM. 
The igloos transfer the SI unit Pascal from the UIM manometers to the VCS. Inside is a resonance silicon gauge (RSG) that monitors two capacitance diaphragm gauges (CDGs). RSGs are microelectromechanical systems (MEMS) that measure the effect of pressure-induced strain on the resonant frequency of a silicon oscillator. CDGs measure pressure-induced changes in the position of an alloy diaphragm that serves as one plate of a capacitor, and are the
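The VCS range endpoints quoted in pascals can be cross-checked against the millitorr and psi values with standard unit conversions. This is a simple illustration using the usual defined conversion factors, not NIST-specific data.

```python
PA_PER_TORR = 133.322368  # 1 torr = 101325/760 Pa
PA_PER_PSI = 6894.757     # 1 psi in pascals

def pa_to_mtorr(p_pa: float) -> float:
    """Convert pascals to millitorr."""
    return p_pa / PA_PER_TORR * 1000.0

def pa_to_psi(p_pa: float) -> float:
    """Convert pascals to pounds per square inch."""
    return p_pa / PA_PER_PSI

low_mtorr = pa_to_mtorr(0.65)     # ~4.9 mtorr (quoted as 5 millitorr)
high_psi = pa_to_psi(130_000.0)   # ~18.9 psi (quoted as 19 psi)
```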

New NIST Dew Point Standard

Peter Huang and colleagues of the National Institute of Standards and Technology (NIST) Temperature and Humidity Group have devised a new humidity generator that enables dew-point measurements up to 98 °C. This is a substantial extension above the previous limit of 85 °C, and provides expanded calibration services for hygrometers supporting a variety of industries. The new Multi-Gas Humidity Generator can generate controlled water vapor in nitrogen, air, hydrogen, methane, and other gases. It produces air streams of known moisture content with water-vapor mole fraction levels of 0.50 to 0.95, which correspond to a range of dew-point temperatures from 85 °C to 98 °C at 1 atmosphere of pressure, with only a modestly expanded uncertainty of 0.05 °C at an approximately 95 % confidence level.

One application of the improved capability is to support the measurement of steam in modern industrial bakeries. The baking process consists of a number of physical, biochemical, and chemical processes tied to quite narrow humidity and temperature ranges. The energy in an oven must reach the baked goods, for example Kaiser rolls, in exactly the required form. Too much humidity means enormous energy consumption. But too little humidity can cause the dough's surface to dry prematurely, resulting in cracked and dull crusts and forcing the company to discard the product. Controlling temperature and humidity is also essential to holding down costs, because much of the thermal energy is not used for baking. Typically only about 25 % of the consumed energy is utilized for baking the product; about 44 % is used to produce water vapor, 27 % is expelled in exhaust gas, and 4 % is lost through radiation and convection in a well-constructed oven. Thus accurate measurement and control of oven humidity are needed. In addition, the new NIST capability will also benefit the fuel cell industry, since high-level humidity measurement and control are crucial for testing and optimizing hydrogen fuel cells.
Additional technical information is available in the recent publication - Peter Huang, Wyatt Miller and Gregory Strouse, High Level Humidity Generator for Nitrogen-Water Mixtures, NCSLI Measure J. Meas. Sci., vol. 7, no. 2, pp. 72-77, 2012. For more information, contact Peter Huang at:
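The correspondence between dew point and water-vapor mole fraction can be illustrated with the approximate Magnus saturation-vapor-pressure formula: at total pressure P, an ideal-gas estimate gives x_w = p_sat(T_dp)/P. This is a simplified sketch under stated assumptions; NIST's formulation, which includes water-vapor enhancement factors and rigorous vapor-pressure equations, is more accurate, so these values only roughly bracket the quoted 0.50 to 0.95 range.

```python
import math

def magnus_psat_pa(t_c: float) -> float:
    """Approximate saturation vapor pressure of water over liquid (Pa),
    Magnus form; adequate for illustration, not for calibration work."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def water_mole_fraction(dew_point_c: float, p_total_pa: float = 101325.0) -> float:
    """Ideal-gas estimate of water-vapor mole fraction: x_w = p_sat(T_dp) / P."""
    return magnus_psat_pa(dew_point_c) / p_total_pa

x_85 = water_mole_fraction(85.0)  # ~0.58
x_98 = water_mole_fraction(98.0)  # ~0.95
```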

NMI NEWS

NIST's Olthoff Receives 2012 Physical Sciences Award

James Olthoff, Physical Measurement Laboratory (PML) Deputy Director, has received the 2012 Award for Physical Sciences from the Washington Academy of Sciences "for broad contributions to metrology through advancing plasma physics and through management of the NIST Measurement Services Program, the most robust, rigorous and diverse in the world." This spans his initial research at NBS in developing diagnostics for electrical discharges, including those of interest to the semiconductor industry, through his leadership in coordinating measurement services. Dr. Olthoff received his Ph.D. in the area of atomic and molecular physics from the University of Maryland in 1985 and joined the Applied Electrical Measurements Group of NIST in 1987. In 2001, he became Chief of the Quantum Electrical Metrology (QEM) Division, and in 2007, the Deputy Director of the Electronics and Electrical Engineering Laboratory. In his current position as Deputy Director for Measurement Services, Dr. Olthoff is responsible for the oversight of all calibration services at NIST. He has over 120 publications and has co-authored or edited four books. In addition, Jim is the NIST Representative to the NCSLI Board of Directors.

Dr. James Olthoff, Deputy Director, PML

NPL Posters Highlight Measurements in Sports

The National Physical Laboratory (NPL) has produced a series of posters celebrating the role of measurement in sport to tie in with the London 2012 Olympic and Paralympic Games. Every event in the London 2012 Games will use precise measurement to enable fairness and to compare performance. NPL's Measurement in Sport posters describe the science behind the Olympic measurements. The posters (available as PDF downloads) consider how units and scales will impact Olympic and Paralympic sports and athletes. The posters include:

Pressure: How does pressure measurement in inflatable sports balls ensure fairness in football and volleyball?
Distance: How do we know a meter rod is really a meter long?
Speed: How is frequency used to calculate the speed of a tennis ball?
Mass: How does weight measurement ensure athletes qualify for competitions?
Time: How does measurement ensure that clocks used in the 100 m final are accurate?
Height: How does measurement ensure fairness in the hurdles event?
Amount of Substance: What methods are used to test for doping?

Another NPL educational resource is ReactFast ( reactfast/), a series of downloadable resources that enable students to measure data correctly and understand its significance in evaluating and improving sports performance. For a detailed discussion of metrology in sports, the reader is referred to the 2008 paper: Seton Bennett, "Citius, altius, fortius - accuratior? Metrology in Sport," NCSLI Measure J. Meas. Sci., vol. 3, no. 4, pp. 24-28, December 2008. For copies of the posters, please visit the NPL website ( For more information, contact Andrew Hanson at:

Updated VIM3 Now Available on BIPM Website
A corrected version of the 3rd edition of the International vocabulary of metrology - basic and general concepts and associated terms (VIM), also designated as VIM Edition 3 or VIM3, was posted on the Bureau International des Poids et Mesures (BIPM) website on February 13, 2012. This Joint Committee for Guides in Metrology (JCGM) document, JCGM 200:2012, replaces the previous VIM3 version (document JCGM 200:2008). The reason for this update was to consolidate the JCGM 200:2008 edition, which existed as three very slightly different versions (BIPM, International Organization for Standardization - ISO, and International Organization of Legal Metrology - OIML) as a result of a failure to provide a definitive file to the Member Organizations of the JCGM in 2007. An interim remedy in 2010 was the issuing of three different corrigenda, one for each version, which was never an entirely satisfactory solution. A definitive PDF master file constituting the unique VIM Edition 3 has now been published as document JCGM 200:2012. To avoid future multiple versions of VIM Edition 3, JCGM member organizations will be allowed to display their own logos on the document when posted on their websites, but are advised that they are not permitted to modify the text.

In addition to updating VIM3, Working Group 2 (WG2) of the JCGM, which is responsible for the VIM, has selected a representative sample of papers describing basic concepts and terms such as measurement, calibration, quantity, and traceability. The current list of eight papers is available on the BIPM website and will be enhanced with more references over time. To further help users of the VIM, WG2 is preparing a set of answers to frequently asked questions, which will be made available as VIM3 FAQs during 2012. More than 20 questions have been identified and answers are currently being collated.
The updated VIM3 is available at ( guides/vim.html), while the list of eight papers is available at (

28th ISO/CASCO Meeting Set for 11-12 October 2012

The 28th ISO/CASCO plenary will be hosted by the Colombian Institute of Technical Standards and Certification (ICONTEC) on October 11-12, 2012 in Bogota, Colombia. In addition, based on the positive experience of the open day organized last October in Montreux, Switzerland, ISO/CASCO has decided to combine its plenary meeting with another open day on conformity assessment. A total of six sessions will be held on October 10, 2012, the day before the plenary, providing a unique forum to discuss the most recent developments in the conformity assessment community. A detailed program of the sessions with respective schedules and locations, as well as registration details, is available on the ISO/CASCO website:

NIST Baldrige Program Moves to New Funding Model

The Baldrige Performance Excellence Program (BPEP), managed for 25 years by the National Institute of Standards and Technology (NIST) in collaboration with the private sector, has announced that it is transitioning to a new business model that will move the program from federal government funding to a self-sustaining operation. NIST will continue its leadership role in managing the program, but it will be funded through a combination of new fees together with expanded support from the private sector. The Baldrige Program offers business, health care, nonprofit and educational institutions a proven path to becoming more competitive, more efficient and more successful. The Baldrige Award, named after Malcolm Baldrige and established by Congress in 1987, has recognized 90 organizations to date.

The new business model focuses on expanding markets, strategic partners and customer relationships, with the goal of long-term sustainability and growth for the Baldrige Program. New products, services and strategies include educational offerings for international quality experts and others; a fee-based alternative assessment for organizations not eligible for a Baldrige Award; and sponsorships of exhibits and activities. The BPEP raises awareness about the importance of performance excellence in driving the U.S. and global economy; provides organizational assessment tools and criteria; educates leaders in businesses, schools, health care organizations, and government and nonprofit organizations about the practices of national role models; and recognizes them with the only Presidential Award for performance excellence. More information on the Baldrige transition plan is available at:

U.S. Celebrates World Standards Day October 11, 2012

This year, the U.S. Celebration of World Standards Day, themed "Standards Increase Efficiency," will recognize the crucial role that standards, codes, and conformance activities play in driving more efficient processes, services, and systems worldwide. Globally relevant standards and codes provide the technological and scientific foundation that drives innovation and efficiency, harmonizes trade, and leads to more efficient and sustainable built environments. The state-of-the-art know-how contained in international standards and codes helps companies and organizations compete globally, produce more efficiently for more markets, and reduce costs.

Leaders of business, industry, academia, and government will gather in Washington, DC, on Thursday, October 11, 2012. The event will kick off with a reception and exhibition at The Fairmont Washington, while the evening's festivities will include the presentation of the 2012 Ronald H. Brown Standards Leadership Award, which honors an individual who has dedicated himself or herself to the promotion of standardization. The winners of the 2012 World Standards Day Paper Competition will also be announced. For more information, visit the American National Standards Institute (ANSI) website:

10th International Seminar on Electrical Metrology

The 10th International Seminar on Electrical Metrology, X SEMETRO, will be held on September 25-27, 2013 in Buenos Aires, Argentina. SEMETRO is devoted to topics related to electromagnetic measurements, covering high-accuracy and industrial measurements in the frequency range from DC to the optical region. SEMETRO is an international forum that involves specialists from various fields related to electrical measurements. The SEMETRO congress, which is held every two years, provides the opportunity to participate in various activities and to share knowledge and experience in metrology. The nine previous events of SEMETRO took place in Brazil. Due to the significant participation of scientists and metrologists from Argentina, it was decided that the tenth edition will be held there. Papers describing original work within the scope of the conference are invited. Two-page, two-column summary papers in PDF format should be submitted through the conference web page. The closing date for submission is March 15, 2013. All papers will be reviewed by the Technical Program Committee, and those accepted will be assigned for either oral or poster presentation. Authors will be notified of acceptance for either an oral or poster presentation by June 7, 2013. More information about the X SEMETRO can be found at:

On-site calibration and adjustment.

- Generates stable humidity and temperature conditions
- Increased calibration productivity
- Stable humidity values within 5 to 10 minutes
- Calibrates up to 5 sensors simultaneously
- Integrated sample loop for use with Reference Hygrometers
- Integrated desiccant cell and digital water level monitoring
- Easy-to-use graphical user interface
Visit for more information or call 631-427-3898.
ROTRONIC Instrument Corp, 135 Engineers Road, Hauppauge, NY 11788, USA,



METROLOGY NEWS

NACLA Recognizes CMEC under Construction Materials Program

The National Cooperation for Laboratory Accreditation (NACLA) has awarded its Construction Materials Testing (CMET) recognition to the Construction Materials Engineering Council (CMEC), Orlando, Florida. The recognition runs through May 31, 2016. About four months after the award, the U.S. Department of Transportation, Federal Highway Administration formally accepted the CMET recognition awarded by NACLA. The CMET recognition satisfies the Federal Highway Administration (FHWA) regulation titled "Quality Assurance Procedures for Construction," 23 CFR 637, paragraphs 637.209 (a) (2), (3), and (4). This is the second CMET recognition issued by NACLA; the first was granted to Laboratory Accreditation Bureau (L-A-B), Fort Wayne, Indiana.

According to David Savage, CMEC's Director of Accreditation, "With the growing number of third-party Accreditation Bodies in the United States, NACLA's role is becoming more and more important. There has always been a need in the United States for a program to recognize the Accreditation Bodies, and NACLA is fulfilling that role." According to Arman Hovakemian, NACLA President, "This recognition is another milestone in NACLA's growth and expansion and a testimony to its important mission and value for the United States laboratory accreditation system." For more information, visit:

Agilent Technologies Receives European Calibration Services Award

Frost & Sullivan has recognized Agilent Technologies with the 2012 Europe Frost & Sullivan Customer Service Leadership Award for Calibration Services. This Best Practices award is presented to a company that has demonstrated excellence in quality, timeliness and cost of service, as well as a positive impact of service on customer value. Agilent was recognized for having an integrated team of professionals, resolving customer queries and meeting its customers' changing requirements in delivering the highest quality of calibration services. This included being responsive to customer feedback and thereby developing better services, while gaining a reputation for utmost reliability and trustworthiness. On a weekly average, Agilent collects approximately 25 customer responses, which are reviewed by the services management team. Taking all the customer feedback and responses into account, the company interacts directly with any customer who has given the company a score of less than 6 out of 10 and subsequently acts on the improvement suggestions. The other factor that distinguishes the company's service quality is its strong emphasis on delivery metrics, such as internal turnaround time (TAT) and ship-on-time (SOT). In fiscal year 2011, the calibration SOT for the company was as high as 95 %. Agilent is involved with more than 35 standards groups and associated with calibration standards committees that support ISO/IEC 17025, ILAC-G8, the ISO Guide to the Expression of Uncertainty in Measurement (GUM)

www.isotechna.com


The Source for Calibration Professionals


The solution you have been waiting for

Automatic Calibration of Thermometry Bridges

- The only solution for AC or DC Bridges - Includes Analysis Software




and ANSI/NCSLI Z540.3. Agilent offers accredited calibrations with accreditations from European bodies such as the United Kingdom Accreditation Service (UKAS), Comité Français d'Accréditation (COFRAC), Entidad Nacional de Acreditación (ENAC, Spain) and the Israel Laboratory Accreditation Authority (ISRAC). Agilent provides calibration services in the Europe, Middle East and Africa region to more than 3,000 customers across 27 countries. For more information, visit:

Group Hunting Planets with Laser Rulers

Because extra-solar planets cannot be imaged directly, other techniques must be used to search for them. One of the most successful detection methods is the measurement of Doppler shifts in the spectrum of the parent star, due to the recoil motion caused by an orbiting planet. Currently, more than 600 planets have been discovered using this technique (a comprehensive list is maintained at http://www. Laser frequency combs, invented some ten years ago, are now making their way into astronomy and astrophysics. Recently, a team of scientists from the Laser Spectroscopy Division at the Max Planck Institute of Quantum Optics (Garching), in collaboration with the European Southern Observatory (ESO), the Instituto de Astrofísica de Canarias, and Menlo Systems GmbH (Martinsried), has modified the frequency comb technique so that it can be applied to the calibration of astronomical spectrographs (Nature, 31 May 2012, doi:10.1038/nature11092). The new instrument has been tested successfully with the High Accuracy Radial velocity Planet Searcher (HARPS), a spectrograph at the 3.6-meter telescope at the La Silla Observatory in Chile. HARPS covers the visible wavelength range from 380 nm to 690 nm; other systems operate in the near-infrared (see Gabriel G. Ycas, et al., "Demonstration of On-Sky Calibration of Astronomical Spectra using a 25 GHz near-IR Laser Frequency Comb," Optics Express, vol. 20, no. 6, pp. 6631-6643, March 12, 2012). A tenfold improvement in precision was obtained as compared with traditional spectral-lamp calibrators.

Ultimately, the search for extra-solar planets may answer the question of whether our solar system is the only place in the universe that provides the conditions for life as we know it. The light from distant stars is composed of numerous spectral lines that are characteristic of the different chemical elements in the star's atmosphere. When the star is moving towards or away from the observer, these lines are shifted to slightly higher or lower frequencies. This provides a way to find extra-solar planets which, while travelling around their star, push and pull the star a small amount, causing a relatively small change of its velocity. For example, the recoil on our sun caused by the Earth amounts to a velocity change of 9 cm/s. Hence, the amount of Doppler shift in the star's spectrum, which is very small, can be detected only with the help of high-precision measurement tools. The calibration of the HARPS spectrograph with a frequency comb resulted in a sensitivity to velocity changes of 2.5 cm/s. This was demonstrated in a series of measurements in November 2010 and January 2011. By observing a star with a well-known planet for a couple of nights, the team could prove the high stability of the system over time. For more information, contact Dr. Ronald Holzwarth (ronald. or Dr. Tobias Wilken (tobias.wilken@ at the Max Planck Institute of Quantum Optics, München, Germany.

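The radial-velocity figures quoted in the planet-hunting article correspond to extraordinarily small fractional Doppler shifts, as a quick sketch shows (the non-relativistic approximation Δλ/λ ≈ v/c is assumed):

```python
C_M_PER_S = 299_792_458.0  # speed of light, m/s

def fractional_doppler_shift(dv_m_per_s: float) -> float:
    """Non-relativistic radial-velocity Doppler shift: d_lambda/lambda ≈ v/c."""
    return dv_m_per_s / C_M_PER_S

earth_on_sun = fractional_doppler_shift(0.09)   # 9 cm/s -> ~3e-10
comb_sensitivity = fractional_doppler_shift(0.025)  # 2.5 cm/s -> ~8e-11
```

Resolving a fractional line shift of a few parts in 10^10 is what demands a calibrator as stable and finely ruled as a laser frequency comb.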
11th ISMTII Planned for July 1-5, 2013

The 11th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII) will take place in Aachen and Braunschweig, Germany from July 1-5, 2013 and will be hosted by Aachen University and the Physikalisch-Technische Bundesanstalt (PTB). Abstracts are due by November 16, 2012, and, for accepted presentations, a manuscript is due by April 1, 2013. The conference addresses the role of metrology in technical solutions facing global challenges. The sessions will focus on the following topics:

1. Micro and Nanometrology
2. Macro-metrology
3. In-process and In-line Metrology
4. Intelligent Instruments for Automation
5. Sensors and Actuators
6. Management of Measurement Processes
7. Calibration and Machine Tool Performance
8. Optical Metrology
9. Material Characterization
10. Education and Training in Metrology

For more information, contact the Conference Chair of ISMTII 2013, Prof. Dr.-Ing. Robert Schmitt ( or visit:



Discover the BlueBox Difference

Are Your Primary Standards Accurate or Are They Blue Box Accurate?

9300A Air Bath Stability <15mK

MI Standard Resistor Air & Oil Baths

Excellent stability and uniformity GPIB controllable Large working space
9400 Oil Bath Stability <2mK

Peltier cooled for low noise Establish temperature coefficients


BetaGauge 321A-EX
Intrinsically Safe for Use in Hazardous Areas
Also Available in Single Sensor BetaGauge

Dual Sensor

Ex ia IIB T3 Gb (Ta=10... +45C) KEMA 10 ATEX 0168X 0344 Ex ia IIB T3 Gb (Ta=10... +45C) II 2 G IECEx CSA 10.0013X

- Accuracy: 0.025 %
- ClearBrite LCD Display
- 25 Pressure Ranges
- Single or Dual Sensor
- Pt100 RTD Connector
- Read mA
- 3-Key Martel Menu System
Martel Electronics Corporation Tel: 800-821-0023 / 603-434-1433

NIST Traceable Calibration


MODEL    ACCURACY     MODEL     ACCURACY
CS-01    <0.005 %     CS-100    <0.01 %
CS-1     <0.005 %     CS-200    <0.025 %
CS-5     <0.01 %      CS-300    <0.03 %
CS-10    <0.01 %      CS-500    <0.05 %
CS-20    <0.01 %      CS-1000   <0.05 %
CS-50    <0.01 %      MCS       MULTIPLE



611 E. CARSON ST. PITTSBURGH PA 15203 TEL 412-431-0640 FAX 412-431-0649 WWW.OHM-LABS.COM


February 6 and 7, 2013

Charlotte, North Carolina

What is the Technical Exchange?

The Technical Exchange, developed by NCSL International (NCSLI), is a new educational event designed to provide you with regional access to low-cost, high-quality metrology training. At this two-day event you will receive metrology training covering several fields, with each session taught by subject-matter experts from throughout the industry. The NCSLI Technical Exchange provides a forum for exchanging ideas, measurement techniques, best practices and innovations with others interested in metrology industry trends. It will build or enhance specific hands-on skills in the calibration of measurement and test equipment, and teach best practices along with introducing new and innovative calibration hardware, software and calibration services.

Sheraton Charlotte Airport Hotel 3315 Scott Futrell Drive Charlotte, NC 28208


NCSL International | 2995 Wilderness Place, Suite 107 | Boulder, CO 80301 | Phone 303-440-3339 | Fax 303-440-3389

Tutorials at a Glance
An Introduction to Instrument Control and Calibration Automation in LabVIEW™
Best Practices and Evaluation of Suppliers in Testing
Good, Bad or Indeterminate? Measurement Literacy: Speaking the language of measurement to support your understanding and the needs of your organization
Calibration of Torque Wrenches (Type I and II)
Uncertainty in Testing
Introduction to Statistical Process Control in the Calibration Laboratory
Understanding ISO/IEC 17025 Requirements and Most Common Deficiencies
Forensic Sciences - Testing, Inspection and Accreditation
Is this Measurement Result Traceable?


For registration questions and answers

Please call the NCSLI business office. 303-440-3339 or visit


The Metre Convention was signed in Paris, France in 1875.

Historical Development of Units Based on Fundamental Constants or Invariants of Nature

Terry Quinn

Abstract: This article outlines the origins and history of the Metre Convention, the BIPM and the International System of Units (SI), with particular reference to the historical development of units based on fundamental constants or invariants of nature. In the past, the idea of, and the intention to proceed towards, a unit system based on invariants of nature existed, but it has only recently become a practical possibility. The adoption by the 24th General Conference on Weights and Measures in October 2011 of a Resolution outlining the principles of such a system is the culmination of more than two hundred years of advances in physics and metrology. This article is based on a lecture given at the March 2012 meeting of the American Physical Society in Boston.
1. The Original Units of the Metric System

On 19th March 1791, a Report1 was presented to the Acadmie Royale des Sciences of Paris by a Committee made up of Messrs. Borda, Lagrange, Laplace, Monge and Condorcet. It was entitled On the choice of a unit of measurement and began as follows: The idea of referring all measurements to a unit of length taken from nature was seized upon by mathematicians as soon as the existence of such a unit and the possibility of determining it became known. They saw it as the only way to exclude all that was arbitrary from a system of measurement and to conserve it unchanged, so that no event or revolution in the world could cast uncertainty upon it. They felt that with such a system, belonging exclusively to no one nation, one could hope that it would be adopted by all. They went on to list the three options they saw for the basis of such a unit, namely, the length of a pendulum having a 1 s beat, the quarter of the circle of the equator, and the quarter of a terrestrial meridian. Of these, they initially considered the pendulum to be the most attractive: The length of a pendulum has the advantage of being the easiest to determine and, in consequence, the easiest to verify if some accident happens that renders it necessary. Furthermore, those who wish to adopt this measure already adopted by another country, or having adopted it wish to verify it, would not be obliged to send observers to the place where it was originally established. In addition, the law of the length of a pendulum is well known, confirmed by experiment and can be used without fearing small errors. Despite these obvious advantages of a unit based on the length of a pendulum, the Committee finally recommended the adoption of a fraction of the length of the meridian, although with a pendulum envisaged as a means to retrieve the unit in case of accidents. The reason for this change appears to be the ambition of Borda to demonstrate his
Figure 1. The Metre and Kilogram of the Archives (photograph by the author, December 2010).
¹ My translation from Histoire de l'Académie Royale des Sciences, 1788, pp. 6-16 (the date 1788 is not a misprint; this is the date of the volume in which the text of the 1791 Report appears).


Figure 2. The original plan of the Observatoire, the first laboratories at the BIPM in 1878, with handwritten additions from about 1890 (courtesy BIPM).

new dividing circle. There then followed the determinations of the length of the meridian from Dunkerque to Barcelona by Méchain and Delambre, using Borda's dividing circle for the triangulation, and then of the mass of a cubic decimeter of water by Lavoisier, Lefèvre-Gineau and Fabbroni. The new metric units, the Metre and the Kilogram, were adopted and deposited in the Archives in 1799, where they remain today (Fig. 1). Thus, despite the original idea of a unit referred to nature, the metric system was in fact based on the length and mass of two particular platinum artefacts, the Metre and Kilogram of the Archives, which continued to be the reference standards for the metric system for the next eighty years.
2. The Metre Convention of 1875 and New Metric Prototypes

There were many reasons why, by the middle of the nineteenth century, the time was ripe to develop the idea of an international agreement on measurement standards. The great scientists of the nineteenth century had all been preoccupied with how to measure things: Weber, Faraday, Maxwell, Kelvin, Rayleigh, Helmholtz, Regnault, to name just a few. Indeed, James Clerk Maxwell made his well-known statement on absolute standards in an address to the Mathematics and Physics Section of the British Association for the Advancement of Science in Liverpool in 1870: "Yet, after all, the dimensions of our earth and its time of rotation, though, relatively to our present means of comparison, very permanent, are not so by physical necessity. The Earth might contract by cooling, or it might be enlarged by a layer of meteorites falling on it, or its rate of revolution might slowly slacken, and yet it would continue to be as much a planet as before. But a molecule, say of hydrogen, if either its mass or its time of vibration were to be altered in the least, would no longer be a molecule of hydrogen. If, then, we wish to obtain standards of length, time, and mass which shall be absolutely permanent, we must seek them not in the dimensions, or the motion, or the mass of our planet, but in the wavelength, the period of vibration, and the absolute mass of these imperishable and unalterable and perfectly similar molecules." At the time, science and technology were not sufficiently advanced for Maxwell's precepts to be implemented. Now, at the beginning of the 21st century, they are, and this is what we are about to do.

The Metre of the Archives was what is called an end standard, that is to say a simple bar of rectangular shape with the metre defined simply as the distance between the supposedly flat and parallel ends. By the middle of the 19th century, geodesists recognized that a line standard was superior, in which the length is defined as the distance between fine lines engraved on the surface of the bar. Furthermore, they judged the Metre of the Archives inaccessible because all calibrations made in Paris, supposedly in terms of the metre, were traceable not directly to the Metre of the Archives but to a copy held by the Conservatoire (Impérial) des Arts et Métiers in Paris. This was considered less and less satisfactory, and in 1867 a proposal was made at a Conference on Geodesy in Berlin to construct a new European Metre, to be based on the Metre of the Archives but in the form of a line standard, and that it should be deposited in a new International Bureau of Weights and Measures. As one might imagine, this led to much controversy and debate, particularly in France, but in 1875 a Diplomatic Conference took place in Paris that culminated in the signing on 20 May of an intergovernmental treaty known as the Metre Convention. It created an International Bureau of Weights and Measures (BIPM)² placed

² The acronyms BIPM, CIPM and CGPM are derived from the French names of these bodies.




Figure 3. The Brunner comparator, the principal comparator for metre prototypes from the 1880s until 1954, side view only. Travaux et Mémoires du BIPM, Vol. IV, 1885 (courtesy BIPM).

Figure 4. The Ruprecht balance, used as the primary balance for 1 kg prototypes from 1878 to 1973. Travaux et Mémoires du BIPM, Vol. I, 1881 (courtesy BIPM).

under the authority of an International Committee for Weights and Measures (CIPM), itself under the authority of a General Conference on Weights and Measures (CGPM) made up of delegates of the contracting governments to the Convention, the organizational structure that continues to exist today. The BIPM was quickly established at the Pavillon de Breteuil in Sèvres. The Pavillon was repaired after its damage in the 1870 Franco-Prussian war, and laboratories were built to house the new metric prototypes, whose construction had started by international agreement even before the signing of the Convention. The laboratories were remarkable in their design, incorporating all the most modern ideas on stability and temperature control (Fig. 2), and were equipped with the best available apparatus for comparing line scales and kilogram standards as well as for the accompanying scientific work that had been foreseen (Figures 3 and 4). The new metric prototypes were made to be as close as possible to the Metre and Kilogram of the Archives. It had been decided during the preliminary discussions that the metre would be defined as the length of the new metric prototype and not one ten-millionth of the quadrant of the earth, and that the kilogram would be defined as the mass of the new prototype of the kilogram and not as the mass of a cubic decimeter of water, although it was stated at the time that it would be good to check the weight of a cubic decimeter of water. At the time of their formal adoption at the 1st CGPM in October 1889, these new prototypes replaced the Metre and Kilogram of the Archives, which became simply historic artefacts. The grand ideas of the originators of the metric system, eighty years before, appeared to be laid to rest, but not for long.

3. First Moves Towards Natural Units
Almost as soon as the new metric prototypes had been deposited in the safe in the vault at the BIPM, the new International Committee for Weights and Measures began thinking about the future. At its meeting in 1891 the American member B. A. Gould, referring to earlier discussions, said: "From the very beginning of the International Committee it has been generally recognized to be of fundamental importance to determine the relation between the metric units and some basic fundamental constants that one can deduce from natural phenomena." He proposed inviting A. A. Michelson to the BIPM to measure the wavelength of the red spectral line of cadmium in terms of the new International Prototype of the Metre. This was accomplished in the years 1892 and 1893 by Michelson and Benoît, then Director of the BIPM (Fig. 5). Their result was confirmed with an improved accuracy of a few parts in 10⁷ in 1906 by Benoît, Fabry and Perot. It was not until 1960, however, that the decision to change the definition of the metre to one based on the wavelength of the orange line of the atom of krypton was taken, by the 11th CGPM. Remarkable work was done in the 1890s at the BIPM by Guillaume (who received the Nobel Prize for physics in 1920 for his discovery of invar) and Chappuis in determining the density of water with a precision of a part per million (ppm). The results, later confirmed to within 1 ppm by work in Australia in the 1960s, showed that the value of the density of water originally determined by Lefèvre-Gineau and Fabbroni in 1794, after the death of Lavoisier at the hands of the Revolution, was only 28 ppm away from the modern value. This was an extraordinary achievement; the mass of the Kilogram of the Archives was thus only 28 ppm away from its formal definition!




Figure 5. The Michelson interferometer used by Michelson and Benoît in 1892 to determine the wavelength of the red line of cadmium in terms of the metre. Travaux et Mémoires du BIPM, Vol. XI, 1895 (courtesy BIPM).

Electrical units had been much discussed in the 1930s. The 8th CGPM in 1933 decided to change the definitions of the electrical units from the so-called International Units, adopted by the 1908 London Electrical Congress, to absolute units based on the mechanical units of length, mass and time. It gave authority to the CIPM to adopt the new absolute units when the necessary absolute measurements had been made. This finally took place in 1946 with the effect, in the new definition of the ampere, of fixing the numerical value of the magnetic constant μ₀, also known as the permeability of free space, to be exactly 4π × 10⁻⁷ N/A². However, reference standards of the new units remained sets of Weston cells and wire-wound resistors kept at the BIPM, although their values were set from absolute measurements. The unit of time, the second, in the past always a fraction of the day, was changed in 1956 to become a fraction of the tropical year. This in turn was replaced in 1967 by today's atomic definition of the second, based on the frequency of a hyperfine transition of the atom of cesium, maybe not a fundamental constant but certainly an invariant of nature. The redefinition of the metre in 1983 in terms of a fixed numerical value for the speed of light was the next step along the path towards units explicitly based on fixed numerical values for fundamental constants. The unit of mass, the kilogram, remained the platinum-iridium cylinder at the Pavillon de Breteuil and there seemed little prospect of it becoming anything else. In 1971, when introducing the definition of the mole to the 14th CGPM, Jan de Boer, Secretary of the

CIPM and President of the Consultative Committee for Units, said: "As far as the unit of mass is concerned, the choice of an atomic definition, for example the mass of a proton or the unified atomic mass unit, would seem natural; but such a proposal seems to me still a far cry from practical because of the necessity of determining to high precision the mass of the proton." About electrical units he said: "Here again one could imagine the elementary charge of the proton as the natural and fundamental electrical unit to serve as the base of a universal system of units; but in this case as well it is the requirements of metrology that render such a proposition impracticable for the high precision measurement of electrical quantities." And he ended by saying: "Naturally, one might ask also in the case of the mole would it not be preferable to replace the definition of the mole given here by a molecular one; but as in the cases of the unit of mass and of electric current this would require determinations such as the absolute counting of molecules or the measurement of the mass of molecules that are not possible with the required precision."
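The fixed value of μ₀ mentioned above, and its fate under a constants-based system discussed later in this article, can be illustrated numerically. This is only a sketch: it assumes CODATA-style values for the fine-structure constant α and for h, e and c, and uses the relation μ₀ = 2αh/(ce²), which follows from the definition of α.

```python
import math

# Pre-redefinition exact value: mu_0 = 4*pi*1e-7 N/A^2
mu0_exact = 4 * math.pi * 1e-7

# When h and e are fixed, mu_0 = 2*alpha*h/(c*e^2) becomes a measured quantity,
# with an uncertainty set by that of the fine-structure constant alpha.
alpha = 7.2973525693e-3   # fine-structure constant (measured, CODATA-style)
h = 6.62607015e-34        # Planck constant, J s
e = 1.602176634e-19       # elementary charge, C
c = 299792458             # speed of light, m/s

mu0_derived = 2 * alpha * h / (c * e**2)
rel_diff = (mu0_derived - mu0_exact) / mu0_exact
print(f"mu0 (4*pi*1e-7)      = {mu0_exact:.10e} N/A^2")
print(f"mu0 (2*alpha*h/c*e^2) = {mu0_derived:.10e} N/A^2")
print(f"relative difference   = {rel_diff:.1e}")  # well below 1e-8
```

The tiny relative difference shows why the change is invisible to any practical user: μ₀ keeps the value 4π × 10⁻⁷ N/A² to within a few parts in 10¹⁰.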



Today, the landscape is different. Advances in science and technology since Jan de Boer wrote these words in 1971 have made it possible to do all of the things he then thought were a far cry from practical. We can now construct a fully absolute system of units by redefining the kilogram, the ampere, the kelvin and the mole in terms of fixed numerical values for a set of fundamental constants or invariants of nature. How we actually do this, I come to a little later. First I give some explanation of what has changed to allow us to contemplate taking this big step.

4. Key Advances in Physics since 1971

The key advance in science since 1971 that opened the way to a fully absolute system of units was the discovery of the quantum-Hall effect by Klaus von Klitzing in 1980, which allowed an electrical resistance to be made whose value was exactly proportional to h/e², where h represents the Planck constant and e the elementary charge. Taken together with the Josephson effect, discovered in 1962, which gives a voltage exactly proportional to the ratio h/2e, it became possible to establish values of electrical voltage and resistance at the levels of millivolts and thousands of ohms, and hence an electrical current, related to quantum effects. The ratio 2e/h was designated the Josephson constant KJ and the ratio h/e² the von Klitzing constant RK. It thus became possible to produce not only an electric current but also other electrical quantities, notably power, proportional to a combination of fundamental constants at a level where they produced visible and accurately measurable physical effects in the laboratory. In consequence, mechanical quantities became, in principle, linked in a measurable way to electrical quantities through exact quantum equations applicable in macroscopic experiments.

In 1976 Bryan Kibble of the NPL in the UK had a clever idea on how to use a simple equal-arm balance to compare electrical and mechanical power. It was very simple in principle and is now used to balance the gravitational force acting on an object, such as a 1 kg mass standard, against an electromagnetic force developed by an electrical current passing through a coil in a magnetic field. The electrical current is measured in terms of a voltage and resistance linked to the Josephson and quantum-Hall effects, the measured quantities being mass, the acceleration due to gravity, the velocity of a moving coil and the microwave frequencies needed to make the Josephson effect work. These two macroscopic quantum effects make, in one step, the enormous jump in scale between the microscopic world of quantum phenomena and the macroscopic world of kilogram mass standards. The whole apparatus is known as a watt balance.

But this was not all; in a totally different domain of science, advances were made that opened the way to linking the kilogram to the mass of an atom. In the 1970s Richard Deslattes at the National Bureau of Standards (NBS) made direct measurements of the lattice spacing of silicon by combining x-ray and optical interferometry. This key advance made it possible to conceive of an experiment in which the number of atoms in a 1 kg piece of silicon, about 215 253 842 × 10¹⁷, could be determined to high accuracy by weighing an artifact whose volume had been determined. The measured quantities are the lattice spacing of the atoms in a crystal of silicon and the mass and volume of the artifact. This became a second, quite independent, path to defining a unit of mass in terms of a fundamental constant, namely the mass of an atom of silicon. This is known as the x-ray crystal density method. Despite the simplicity of the principles of these two routes to a new definition of the kilogram, it has taken nearly thirty years to reach the stage at which the results of these experiments have accuracies and consistencies among themselves sufficient actually to draw up a detailed proposal for the new, absolute system of units.

Figure 6. A sphere of 28Si on a BIPM balance, 2012 (courtesy BIPM).

The implementation of the Josephson and quantum-Hall effects in the 1980s and 90s had, however, another and much more immediate consequence. It very quickly became possible to maintain reference standards for the volt and the ohm, based on these two effects, whose reproducibility approached parts in 10¹⁰. Their absolute values in terms of the SI volt and ohm were, however, known only to a few parts in 10⁷, limited mainly by uncertainties in the value of the Planck constant. These quantum-based reference standards were so useful that in 1990 the CIPM adopted conventional values for KJ and RK, designated KJ-90 and RK-90 respectively. By so doing, electrical metrology became much more precise but also became, in one sense, decoupled from the other units of the SI at the highest levels of accuracy.
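The sizes involved can be checked in a few lines. This is only a sketch: the values of h and e below are CODATA-style figures that postdate this article, so only the orders of magnitude of the offsets matter.

```python
# Conventional 1990 values, adopted by the CIPM as exact for electrical metrology:
KJ_90 = 483597.9e9    # Josephson constant, Hz/V
RK_90 = 25812.807     # von Klitzing constant, ohm

# KJ = 2e/h and RK = h/e^2 evaluated from CODATA-style constants:
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C

KJ = 2 * e / h
RK = h / e**2

# Relative offsets of the conventional values from the constant-based values:
print(f"(KJ-90 - KJ)/KJ = {(KJ_90 - KJ) / KJ:.2e}")   # ~ +1.1e-7
print(f"(RK-90 - RK)/RK = {(RK_90 - RK) / RK:.2e}")   # ~ -1.8e-8
```

The offsets at the 10⁻⁷ and 10⁻⁸ levels are exactly the decoupling referred to above: reproducibility at parts in 10¹⁰, but SI values known far less well.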
5. Defining Units in Terms of a Fixed Numerical Value of a Fundamental Constant

One might quite reasonably ask how it is possible to arbitrarily fix the numerical value of a fundamental constant when, by its very nature, its value must be fixed and constant. The important distinction to be made is the following: the value of a constant is indeed fixed by nature, but its numerical value depends on the magnitude of the unit in which we choose to measure it. Let us take the case of the speed of light, which is a constant of nature and is the same everywhere, but whose numerical value depends on the magnitude of our units of time and length. We generally write the speed of light, c, in the following way, c = 299 792 458 metres per second, but we can also write it in feet per second, c = 983 571 056.4 feet per second, or even in yards per second, c = 327 857 018.8 yards per second. For a given value of the SI second, it is thus evident that the numerical value of the speed of light depends on the size of the unit of length we choose to use. Conversely, if we decide to choose a numerical value of 327 857 018.8, then we have fixed the size of the unit of length to be equal to that of the yard, or, put another way, we have defined the magnitude of the yard. This is the way in which the base units of the New SI are to be defined. We shall fix the numerical values of a set of fundamental constants or invariants of nature in such a way as to define the seven base units of the SI. In the case of the kilogram, for example, which will be defined in terms of a fixed numerical value for the Planck constant, the units in which the Planck constant is expressed are joule seconds, or J s, which we can write as s⁻¹ m² kg. The second is already defined in terms of a fixed numerical value for the frequency of a hyperfine transition of the atom of cesium, and the metre is already defined in terms of a fixed numerical value for the speed of light. Thus, fixing the numerical value for the Planck constant defines the kilogram.

The ampere, kelvin and mole will be defined in terms of fixed values for the elementary charge e, the Boltzmann constant k and the Avogadro constant NA respectively. The candela is different in that its definition is linked to a biological weighting factor relating optical power to the visible sensation of the eye and is already defined in terms of a fixed value for the ratio lumens per watt.

6. Draft Resolution to the 24th CGPM 2011 for the New SI
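The speed-of-light arithmetic above is easy to verify. A minimal check, assuming the exact international definitions of the foot (0.3048 m) and the yard (0.9144 m):

```python
c = 299_792_458     # speed of light, m/s (numerical value fixed by definition)
FOOT = 0.3048       # international foot in metres (exact by definition)
YARD = 0.9144       # international yard in metres (exact by definition)

c_ft = c / FOOT     # numerical value of c expressed in feet per second
c_yd = c / YARD     # numerical value of c expressed in yards per second

print(f"c = {c_ft:,.1f} ft/s")   # ~983,571,056.4
print(f"c = {c_yd:,.1f} yd/s")   # ~327,857,018.8
```

Running the conversion the other way round illustrates the point being made: choosing the numerical value 327 857 018.8 for c would itself define a length unit equal to the yard.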

Confident that the time was ripe to make a formal proposal outlining the principles of a new system of units based on fundamental constants, the CIPM submitted a detailed draft to the 24th CGPM meeting in Paris in October 2011. This was adopted unanimously and is now publicly available on the BIPM website as Resolution 1 of the 24th CGPM, with the title "On the possible future revision of the International System of Units, SI" ( Resolutions.pdf). This somewhat tentative title belies the very precise and detailed proposal it contains. The actual implementation of the new system, with the adoption of new definitions of the base units, will take place as soon as final agreement is obtained on the numerical values of the constants in question. This is expected to be within a few years from now, perhaps at the next CGPM in 2014 or a little later, so the new definitions are not yet adopted. According to the Resolution adopted by the 24th CGPM, the new SI will be presented in two parts. In the first part, the set of constants or invariants of nature whose numerical values will be fixed will appear in the following way, which essentially fixes the scale of the new SI, i.e., the relative sizes of the base units: The International System of Units, the SI, will be the system of units in which

the ground state hyperfine splitting frequency of the caesium 133 atom Δν(¹³³Cs)hfs is exactly 9 192 631 770 hertz,
the speed of light in vacuum c is exactly 299 792 458 metres per second,
the Planck constant h is exactly 6.626 068... × 10⁻³⁴ joule second,
the elementary charge e is exactly 1.602 176... × 10⁻¹⁹ coulomb,
the Boltzmann constant k is exactly 1.380 650... × 10⁻²³ joule per kelvin,
the Avogadro constant NA is exactly 6.022 141... × 10²³ reciprocal mole,
the luminous efficacy Kcd of monochromatic radiation of frequency 540 × 10¹² Hz is exactly 683 lumens per watt.
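Taken at face value, the digits quoted in this list already determine the magnitudes of the units. A minimal sketch of two consequences discussed later in the article, using only the truncated digits given above (so the outputs are approximate, not the final fixed values):

```python
# Draft fixed values, truncated exactly as quoted in the Resolution text:
nu_Cs = 9_192_631_770    # caesium hyperfine frequency, Hz
c = 299_792_458          # speed of light, m/s
h = 6.626_068e-34        # Planck constant (truncated digits), J s
e = 1.602_176e-19        # elementary charge (truncated digits), C

# Effect for the ampere: 1 A corresponds to this many elementary charges per second
charges_per_second = 1 / e
print(f"1 A = {charges_per_second:.6e} elementary charges per second")  # ~6.24e18

# Effect for the kilogram: m = h*nu/c^2 ties mass to frequency; for example,
# the mass equivalent of a single photon at the caesium frequency
m_photon = h * nu_Cs / c**2
print(f"mass equivalent of one Cs photon: {m_photon:.3e} kg")
```

Both numbers fall straight out of the fixed constants, with no reference to any artefact: this is the sense in which the list alone fixes the scale of the system.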

In the Resolution this list is followed by a few lines of explanation, which I do not reproduce here, as to the meaning of the symbols. In the second part, the new definitions of the base units will be given. They will appear in the following order: second, metre, kilogram, ampere, kelvin, mole and candela. The order is chosen so that in fixing the numerical value of a constant, no unit is called upon that has not already been defined, as we have seen in the case of the kilogram. Although the order is chosen for clarity of understanding, it is not essential, since the whole set taken together can be considered a set of simultaneous linear equations. Here I shall not give all the proposed new definitions but only those for the kilogram, ampere and mole. For the others, the reader is invited to look at the BIPM website and the papers referred to in the bibliography. The formal definition of each unit will be followed by a short text giving the effect of the definition in defining the unit. The versions


given here are those from the paper by I. M. Mills et al. referred to in the bibliography and are written, for clarity, in the present tense rather than the future tense of Resolution 1 as actually adopted by the 24th CGPM; they are given here only for illustration and are not yet the formal definitions of these units.

kilogram, unit of mass
The kilogram, kg, is the SI unit of mass; its magnitude is set by fixing the numerical value of the Planck constant to be equal to exactly 6.626 068... × 10⁻³⁴ when it is expressed in the SI unit s⁻¹ m² kg, which is equal to J s.

Thus we have the exact relation h = 6.626 068... × 10⁻³⁴ J s. The value of the Planck constant is a constant of nature, which may be expressed as the product of a number and the unit joule second, where J s = s⁻¹ m² kg. The effect of this definition, together with those for the second and the metre, is to express the unit of mass in terms of the unit of frequency through two of the most fundamental equations of physics, namely E = mc² and E = hν, which relate energy E to mass and to frequency, and which together lead to the relation m = hν/c². Another effect of the new definition is that at the moment of the redefinition, the mass of the International Prototype of the kilogram, at present 1 kilogram exactly, will continue to have a mass of 1 kilogram but with an uncertainty equal to that of the Planck constant at the moment of the redefinition (a few parts in 10⁸), and subsequently that of the uncertainty of realization of the unit using the new definition. If, over the years, its mass is found to change, then its value will be adjusted accordingly.

ampere, unit of electric current
The ampere, A, is the SI unit of electric current; its magnitude is set by fixing the numerical value of the elementary charge to be equal to exactly 1.602 176... × 10⁻¹⁹ when it is expressed in the SI unit s A, which is equal to C.

Thus we have the exact relation e = 1.602 176... × 10⁻¹⁹ C. The effect of this definition is that the ampere is the electric current corresponding to the flow of 1/(1.602 176... × 10⁻¹⁹) elementary charges per second. Another effect of the new definition of the ampere is that the value of μ₀, the magnetic constant, at present equal to exactly 4π × 10⁻⁷ N A⁻², will become a quantity of the same value but with an uncertainty of a few parts in 10¹⁰, and henceforth subject to measurement.

mole, unit of amount of substance
The mole, mol, is the SI unit of amount of substance of a specified elementary entity, which may be an atom, molecule, ion, electron, any other particle or a specified group of such particles; its magnitude is set by fixing the numerical value of the Avogadro constant to be equal to exactly 6.022 141... × 10²³ when it is expressed in the SI unit mol⁻¹.

Thus we have the exact relation NA = 6.022 141... × 10²³ mol⁻¹. The effect of this definition is that the mole is the amount of substance of a system that contains 6.022 141... × 10²³ specified elementary entities.

As regards the new definition of the mole, it is important to note that it will no longer be defined in terms of the number of entities in 12 grams of carbon 12. This is a significant change and is intended to highlight the distinction between amount of substance, the quantity whose unit is the mole, and mass, the quantity whose unit is the kilogram. The effect of the new definition is that the molar mass constant, Mu, which in the present SI is equal to exactly 10⁻³ kilograms per mole, will become a quantity whose value will remain close to 10⁻³ kilograms per mole but will have an uncertainty of a few parts in 10¹⁰ in the New SI, and will henceforth be subject to measurement. I must emphasize that these draft definitions remain drafts; the final numerical values have yet to be fixed and there may be small changes in wording before they are adopted by the CGPM, but they give a clear idea of how the new definitions will look.

7. The Choice of a New Definition for the Kilogram

In drawing up these definitions, the Consultative Committee for Units, CCU, took great care to ensure scientific rigor and clarity. Nevertheless, many questions can quite reasonably be put as to why certain decisions were made and what the effects of the definitions are. In response, the BIPM website carries a list of Frequently Asked Questions on the New SI. An obvious one is why it is planned to define the kilogram in terms of a fixed value for the Planck constant rather than the mass of a specified number of atoms of silicon, the latter apparently being a more intuitively understandable definition. The answer is related largely to the important role played by the Planck constant in the Josephson and quantum-Hall effects, where KJ-90 and RK-90 occupy a key place in ensuring high accuracy electrical metrology. By fixing the numerical values of h and e we fix the values of KJ and RK in SI units and thus resolve the present difficulty that, although KJ-90 and RK-90 are exactly defined by convention and can be realized to parts in 10¹⁰, their true values in SI units are known only to parts in 10⁷. In the New SI as proposed, they will be known exactly and thus electrical metrology will be able to take full advantage of their reproducibility to make measurements to the highest accuracy in SI units. In addition, by fixing the numerical value of the Planck constant, many other fundamental constants and conversion factors will have either zero or much smaller uncertainties than at present.
8. Practical Realization of the New Definitions, Primary Methods

The practical realization of the new definitions is currently the subject of discussions at the various Consultative Committees of the CIPM. Drafts of most of the documents explaining how the realizations can be accomplished already exist and will be published in due course on the BIPM website. Clearly, great care is being taken to ensure continuity of the magnitudes between the old and new definitions so that there will be no discernible change visible to any user. The method used for the practical realization of any base unit is referred to as a primary method, which is a method of measurement in which a result is obtained without reference to a standard for a quantity of the same kind. This must be obvious, since clearly if one is realizing the definition of the kilogram it cannot be by means of a method in which the measurement requires a calibrated mass standard to start with. In the same way, in the realization of the definition of the kelvin, one cannot use a method in which the measurement requires a calibrated thermometer. For the kilogram, the watt balance and the x-ray crystal density methods (Fig. 6) are the obvious primary methods



of realizing the new definition, but in the future any novel method based on an explicit equation of physics would be possible. For the ampere, single-electron tunnelling is a potential primary method, as are the Josephson and quantum-Hall effects taken together. All these and many more will be described in detail in the new section of the BIPM SI Brochure that will appear on its website in due course.
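For the x-ray crystal density route, the atom count quoted earlier for 1 kg of silicon can be cross-checked. The sketch below takes the shortcut of dividing 1 kg by the atomic mass of ²⁸Si (the actual experiment counts atoms via lattice spacing and volume, with no reference to a mass standard); the atomic-mass values are CODATA-style assumptions, not figures from the article.

```python
# Cross-check of the number of atoms in 1 kg of silicon-28.
m_si28_u = 27.976926535    # atomic mass of 28Si, in unified atomic mass units (assumed)
u = 1.660539066e-27        # atomic mass constant, kg (assumed)

atoms_per_kg = 1.0 / (m_si28_u * u)
print(f"{atoms_per_kg:.6e} atoms in 1 kg of 28Si")  # ~2.1525e25
```

The result agrees with the figure of about 215 253 842 × 10¹⁷ atoms quoted in section 4, which corresponds to an isotopically enriched ²⁸Si sphere such as the one shown in Fig. 6.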
9. What of the Future?

Although the base units of the SI will in the future be based on fundamental constants or invariants of nature, this will not eliminate the need to carry out international comparisons to check that the realizations in different National Metrology Institutes around the world are consistent. The operation of the CIPM MRA (Mutual Recognition Arrangement for National Measurement Standards; see the BIPM website) will continue as it is now. For the foreseeable future, the only way to maintain and confirm worldwide uniformity of measurement standards based on the SI will continue to be by international comparisons of transportable primary or secondary reference standards. The only exception is for time standards, which can be compared at a distance through radio or optical signals. The work of the NMIs will not be changed, except that the new definition of the kilogram

will open the possibility for an NMI, if it so wishes, to make its own primary realization of the unit of mass. Under the CIPM MRA, however, such primary realizations will still have to be compared with others (http:// CIPM_MRA-D-04.pdf). In the particular case of the kilogram there will still be a need to maintain groups of 1 kilogram artefacts. These will be needed not only to maintain the unit between primary realizations but also because balance technology and the excellent short-term stability of mass standards allow a highly stable unit easily to be maintained and distributed worldwide with a better precision than can at present be obtained with a watt balance. Such a group will be maintained at the BIPM as well as at any NMI that wishes to do so. In summary, we can say that the New SI, with its set of definitions all based on fixed values for fundamental constants or invariants of nature and all in the same format, will not only ensure the long-term stability of our measurement system but will also lead to a system whose elegance and simplicity will facilitate teaching and a broader understanding of metrology. No longer will all measurements made in the world of mass, force, density, electric current, electric and mechanical power and many other quantities be ultimately traceable to a

cylinder of a platinum-iridium alloy made in 1879 in the workshops of Johnson Matthey Ltd. in Hatton Garden, London, which since 1889 has by definition had a mass equal to 1 kilogram exactly.
10. Further Reading

[1] T. Quinn, From Artefacts to Atoms: The BIPM and the Search for Ultimate Measurement Standards, New York: Oxford University Press, 2011.
[2] Papers presented at a Discussion Meeting at the Royal Society, London: "The new SI based on fundamental constants," Phil. Trans. Roy. Soc., vol. 369, no. 1953, pp. 3903-4142, October 28, 2011 (in particular see "Adapting the International System of Units to the 21st century" by I. M. Mills, P. J. Mohr, T. J. Quinn, B. N. Taylor, and E. R. Williams, pp. 3907-3924).
[3] "A conversation with the Director of the BIPM," NCSLI Measure: J. Meas. Sci., vol. 7, no. 2, pp. 24-28, June 2012.

Terry Quinn
Emeritus Director, BIPM, 92 rue Brancas, 92310 Sèvres, France Email:

Vol. 7 No. 3 September 2012

NCSLI Measure J. Meas. Sci. |



Measurement Techniques for Evaluating Current Range Extenders from 1 Ampere to 3000 Amperes
Marlin Kraft

Abstract: When measuring standard resistors at different ratios with a direct current comparator (DCC) incorporating current range extenders (CREs), it can be difficult to determine whether the measurements are correct and to evaluate the measurement uncertainty. This technical note discusses techniques developed at the National Institute of Standards and Technology (NIST) for measuring a set of low value resistors at the same current using different ratios. The design of modern automated DCCs allows measurement currents up to about 150 mA, and DCCs can operate at 1:10 ratios with uncertainties well below one part in 10⁶. Higher currents require CREs with ratios from 1:10 to 1:100,000 or more. The techniques described here allow low uncertainties to be obtained by measuring high quality medium current resistors at different power levels, so that they can be used as standards when measuring high current resistors. These techniques also allow the user to determine whether a CRE is functioning properly.
1. Introduction

In the early 1950s, scientists at the National Standards Laboratory (NSL) in Australia, the National Bureau of Standards (NBS) in the United States, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the National Research Council (NRC) of Canada recognized the increased accuracy and stability that could be obtained in DC resistance ratio measurements by using transformers constructed with new high-permeability, low-loss magnetic materials. NRC initiated a study to improve three-winding current transformer calibration techniques after an intercomparison with PTB. The measurement technique used with three-winding current ratio transformers is similar to that used with the three-winding voltage ratio transformer; however, the power source and the balancing detector are interchanged. The important difference is that the current ratio transformer operates with zero flux in its magnetic core at balance, and this, together with appropriate flux detection techniques, makes it possible to measure direct current. In 1961, Kusters and Moore of NRC collaborated with Miljanic of the University of Belgrade in Yugoslavia to develop the direct current comparator (DCC). A patent was granted in 1964 [1, 2, 3]. The DCC is essentially a magnetic flux detector, which can be used to determine the ratio between two currents with a high degree

of accuracy. It operates on the principle that when ampere-turn equality is achieved between two windings of opposite polarity on a magnetic core, no flux will be induced in that core. With appropriate windings, this allows the bridge to pass a known ratio of currents through a pair of resistors whose values are in a ratio equal to the ratio of turns. A separate winding is used to detect minor corrections, and feedback is applied to the secondary to achieve zero DC flux in the core. When the primary and secondary ampere-turns are equal and opposite, the voltage developed across each resistor is nearly the same. A simplified schematic diagram of an early manual DCC resistance bridge is shown in Fig. 1 [4]. The bridge consists of an adjustable 1111.111 turn winding in the primary circuit, and a fixed 1000 turn winding and adjustable deviation windings in the secondary circuit. The resistor to be measured is connected in the primary circuit and a reference resistor, RS, is connected in the secondary circuit. If the deviation winding is set equal to the correction cS of resistor RS, the bridge becomes direct reading in ohms. The reference resistors and low-power resistors are located in a constant temperature oil bath maintained at 25.00 ± 0.01 °C. The DCC resistance bridge is balanced by adjusting the primary turns ratio for a null condition on detector D using the reversal balancing procedure.
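The direct-reading balance condition can be written down in a few lines. The following is an illustrative sketch, not code from the paper; the function name and the example turn settings are hypothetical.

```python
# Sketch of the DCC balance condition (illustrative; values hypothetical).
# At balance the ampere-turns are equal and opposite:
#     n_primary * i_primary = n_secondary * i_secondary
# and the detector nulls the voltage difference across RX and RS:
#     r_x * i_primary = r_s * i_secondary
# Eliminating the currents gives RX = RS * (n_primary / n_secondary).

def dcc_unknown_resistance(r_s, n_primary, n_secondary):
    """Resistance of RX implied by a balanced DCC resistance bridge."""
    return r_s * (n_primary / n_secondary)

# A 1 ohm standard balanced with the adjustable primary winding set to
# 1000.123 turns against the fixed 1000 turn secondary implies an
# unknown about 123 parts in 10**6 above nominal.
print(round(dcc_unknown_resistance(1.0, 1000.123, 1000), 6))  # 1.000123
```

With the fractional-turn resolution quoted above, one step of the last dial corresponds to a resolution of about 0.1 part per million in the ratio.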

The fully adjustable ratio winding in the primary circuit of Fig. 1 can be replaced by a fixed winding of 100 turns, 10 turns, or 1 turn, which provides additional bridge ratios of 10:1, 100:1, and 1000:1. For these higher ratios, the adjustable fractional-turn section in the primary side is switched to the secondary side in order to balance the bridge. The bridge has a resolution of 0.1 parts per million (ppm) for all ratios. For resistance ratio measurements of 1:1 and 10:1, the adjustable 1 A internal power supply is connected in the primary circuit. When the bridge is used for ratio measurements of 100:1 and 1000:1, the internal power supply is replaced by an external, adjustable 100 A supply. The 100:1 and 1000:1 ratios have respective maximum current ratings of 20 A and 100 A.

Marlin Kraft
National Institute of Standards and Technology* 100 Bureau Drive, Stop 8171 Gaithersburg, Maryland 20899-8171

* Quantum Measurement Division, Physical Measurements Laboratory. NIST is part of the U.S. Department of Commerce. This paper is a contribution of the U. S. government and not subject to copyright.

In the late 1980s, work started on developments leading to automated DCCs at the request of Sandia National Laboratories. By 1994, the first microprocessor controlled automated DCC was commercially available.
2. Comparator Current/Power Measurements

The DCC bridge produces a known ratio of currents in the two resistors, RS and RX. However, the greater power is dissipated in the smaller resistor. In the comparison of a 1 Ω standard and a 0.1 Ω unknown, with 1 A in the 0.1 Ω resistor there will be only 0.1 A in the 1 Ω standard. It is also possible to use a 10 Ω standard (100:1 ratio) or a 100 Ω standard (1000:1 ratio). By taking a series of measurements at different current levels (which dissipate insufficient power in the standard to cause self-heating errors), the power coefficient characteristics of the resistor that carries the higher current can be determined. Extensive intercomparisons of the manual DCC bridge and the automated binary DCC bridge were performed at Sandia National Laboratories, and the results were presented in 2001 [5]. NIST still uses the manual DCC to quickly confirm results from time to time, as do several other laboratories. With the manual current comparator bridge, if something is wrong in the connections or settings, the bridge will not function or a bridge balance cannot be obtained. It is important to ensure that the internal power supplies and the primary and secondary galvanometers are in good working order. Automated DC current comparator bridges rely on many internal digital electronic components. They can still function when connections or settings are incorrect, but will produce erroneous results. The results of an international comparison of low ohmic resistor measurements between NIST and the Van Swinden Laboratories (VSL) in the Netherlands were published in 2012 [6]. This international intercomparison will later be expanded to include the Federal Office of Metrology (METAS) in Switzerland and the National Physical Laboratory (NPL) in England.
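The power asymmetry between the two arms can be made concrete with a short numeric sketch (illustrative only; the resistor and current pairings follow the ratios discussed above):

```python
# Power dissipated in each arm of a DCC comparison, P = I**2 * R.
def dissipation_watts(r_ohms, i_amps):
    return i_amps ** 2 * r_ohms

# 10:1 ratio: 0.1 ohm unknown at 1 A vs. 1 ohm standard at 0.1 A.
print(round(dissipation_watts(0.1, 1.0), 6))    # 0.1  W in the unknown
print(round(dissipation_watts(1.0, 0.1), 6))    # 0.01 W in the standard

# Higher ratios keep the standard's dissipation even lower:
print(round(dissipation_watts(10.0, 0.01), 6))    # 100:1  -> 0.001 W
print(round(dissipation_watts(100.0, 0.001), 6))  # 1000:1 -> 0.0001 W
```

Because the standard's dissipation falls as the square of its current, the higher-ratio standards run essentially without self-heating while the unknown is exercised at full power.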
3. Description of Range Extender Validation Process

Figure 1. Simplified schematic diagram of the DCC resistance bridge.

RX (Ω)    Current in RX    DCC Setup      RS (Ω)    Current in RS
0.1       150 mA           Bridge only    1         15 mA
0.1       150 mA           Extender       1         15 mA
0.1       1 A              Extender       1         100 mA
0.1       1 A              Extender       10        10 mA
0.1       1 A              Extender       100       1 mA
0.01      1 A              Extender       0.1       100 mA
0.01      1 A              Extender       1         10 mA
0.01      1 A              Extender       10        1 mA
0.01      10 A             Extender       1         100 mA
0.01      10 A             Extender       10        10 mA
0.001     1 A              Extender       0.01      100 mA
0.001     1 A              Extender       0.1       10 mA
0.001     1 A              Extender       1         1 mA
0.001     10 A             Extender       0.1       100 mA
0.001     10 A             Extender       1         10 mA
0.0001    10 A             Extender       0.01      100 mA
0.0001    10 A             Extender       0.1       10 mA
0.0001    100 A            Extender       0.1       100 mA

Table 1. RX and RS resistance values and currents used for range extender validation.
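One property of the plan in Table 1 can be checked mechanically: at each DCC balance the voltage drops across RX and RS are nominally equal, so the product RX × IX should match RS × IS for every pairing. The sketch below is illustrative, not code from the paper, and verifies a few pairings taken from the text:

```python
# Verify that each RX/RS pairing produces equal nominal voltage drops,
# as required at DCC balance. Pairings follow those described in the
# text (resistances in ohms, currents in amperes).
pairings = [
    (0.1,    0.150, 1.0,  0.015),   # bridge only, 10:1
    (0.1,    1.0,   1.0,  0.100),   # extender, 10:1
    (0.01,   1.0,   0.1,  0.100),   # extender, 10:1
    (0.001,  10.0,  1.0,  0.010),   # extender, 1000:1
    (0.0001, 100.0, 0.1,  0.100),   # extender, 1000:1
]

for r_x, i_x, r_s, i_s in pairings:
    assert abs(r_x * i_x - r_s * i_s) < 1e-12, (r_x, i_x, r_s, i_s)
print("all pairings balance")
```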

Comparing results for several different ratios and current levels using range extenders is likely to reveal any errors that exist. Range extenders of 1000 A,
2000 A, and 3000 A can be checked with this same measurement technique using a well characterized 10 μΩ high current resistor to achieve expanded uncertainties of less than 1 × 10⁻⁶ (k = 2). To test a range extender, a series of comparisons can be performed using standard resistors of the four-terminal design that have a well-known, predictable drift and known temperature coefficients, which are directly proportional to the power coefficients. The validation process requires a combination of at least seven

resistors, including four of the Reichsanstalt [7, 8] design having nominal values of 0.1 Ω, 0.01 Ω, 0.001 Ω, and 0.0001 Ω. We used two resistors at each nominal value, giving a total of 14 resistors used in the following process. The reference resistors and unknown resistors are all placed in a constant temperature oil bath maintained at 25.00 ± 0.005 °C. This is critical to achieving the desired uncertainties because the temperature coefficients of the resistors are in the range of 1 × 10⁻⁶/°C to 5 × 10⁻⁶/°C. Given that the power dissipation is as much as 1 W, presumably even state-of-the-art air baths will have difficulty maintaining the temperature stability required to achieve relative uncertainties at or below 0.5 × 10⁻⁷ (k = 2). Shunts of the same nominal values could be used as well in an oil bath, but the size of large current shunts makes this impractical.

Figure 2. Measurement results for a 0.1 Ω resistor.

Figure 3. Measurement results for another 0.1 Ω resistor.

Figure 4. Measurement results for a 0.01 Ω resistor.

Figure 5. Measurement results for another 0.01 Ω resistor.

Table 1 shows the unknown resistor (RX) and the current applied to it, and the standard resistor (RS) with the appropriate current passing through it. The first step is to measure the 0.1 Ω resistor at 150 mA, obtaining a resistance value by putting 15 mA through a 1 Ω resistor using the current comparator bridge only. Two primary reference resistors were used at each of the four levels of measurement described in this technical note. We then connect the range extender, measure the same 0.1 Ω resistor at 150 mA against the same 1 Ω resistor, and repeat the previous measurements. The fewer changes made in the test setup, the lower the probability of measurement errors. From this point on, the same 18 gauge shielded wires were used for the potential terminals of each of the RX resistors. The current terminals of each of the RX resistors were connected using the same 12 gauge wire up to 10 A. Above 10 A and up to 100 A, the same AWG 1/0 gauge cables were used to connect to the RX resistors' current terminals. The same number of readings and reversal rates were used in all of the measurements. The current is increased to 1 A on the 0.1 Ω resistor, which puts 100 mA through the 1 Ω standard. After making repeated measurements, change RS to a 10 Ω resistor and repeat the 1 A measurements, then change RS to a 100 Ω resistor and repeat the 1 A measurements. Figure 2 shows the results of the bridge and extender comparison at 150 mA and the 1 A measurements on all three current ranges. Figure 3 shows the same measurement results for another 0.1 Ω resistor. The green error bars in Figs. 2 and 3 are ±0.5 × 10⁻⁶, which is the published expanded uncertainty (k = 2) of a 0.1 Ω resistor measured at 10 mW [4]. The next level of testing utilizes a 0.01 Ω resistor as RX with 1 A applied, and RS is now a 0.1 Ω resistor with 0.1 A applied. The procedure continues with the 0.01 Ω resistor repeatedly measured using the same test parameters as in the previous tests.
Figure 4 shows the results. As before, three well-characterized resistors were used for RS. All three current ranges of the range extender agree at 1 A to within 0.1 × 10⁻⁶. The 10 A measurement results agree within 0.05 × 10⁻⁶. The blue error bars in Fig. 4 are ±0.8 × 10⁻⁶, which is the published expanded uncertainty (k = 2) of a 0.01 Ω resistor measured at 10 mW [4]. The difference between the 1 A value and the 10 A value is the heating effect of the resistor. The blue dashed line is the historical drift of this resistor for the last 10 years. Figure 5 shows the results for another 0.01 Ω resistor. The 1 A data on the blue dashed line agree to within 0.5 × 10⁻⁶ and the 10 A


Figure 6. Measurement results for a 0.001 Ω resistor.

Figure 7. Measurement results for another 0.001 Ω resistor.

data agree to within 0.1 × 10⁻⁶. The blue error bars in Fig. 5 are again ±0.8 × 10⁻⁶, which is the published expanded uncertainty (k = 2) of a 0.01 Ω resistor measured at 10 mW [4]. The difference between the 1 A value and the 10 A value is the heating effect of the resistor. The next level of testing utilizes a 0.001 Ω resistor as RX with 1 A applied, and RS is now a 0.01 Ω resistor with 0.1 A applied. The 0.001 Ω resistor is measured repeatedly with the same test parameters as in the previous tests. As before, three well-characterized resistors were used for RS. Figures 6 and 7 show the measurement results for two different 0.001 Ω resistors. The 1 A and 3.16 A measurement results of all three current ranges of the range extender agree to within 0.1 × 10⁻⁶. The 10 A results agree to within 0.05 × 10⁻⁶. The difference between the 1 A value and the 10 A value is small because there is very little heating effect in the resistor. The small change is because of the temperature coefficient, which is about 1 × 10⁻⁶/°C. The blue dashed line is the historical drift of this resistor. The blue error bars in Figs. 6 and 7 are ±1.2 × 10⁻⁶, which is the published expanded uncertainty (k = 2) of a 0.001 Ω resistor measured at 10 mW [4]. The final level of testing listed in Table 1 utilizes a 0.0001 Ω resistor as RX with 10 A applied, and RS is now a 0.01 Ω resistor with 0.1 A applied. The 0.0001 Ω resistor is measured repeatedly with the same test parameters as in the previous tests. As before, well-characterized resistors were used for RS. Figure 8 shows the results. The blue error bars in Fig. 8 are ±4 × 10⁻⁶, which is the published expanded uncertainty (k = 2) of a 0.0001 Ω resistor measured at 10 mW [4]. It is critical that the current cables be changed from American wire gauge (AWG) 12 gauge to AWG 1/0 gauge cables for the 100 A currents that will be used in the next level of testing. The next three figures show how the full range of the range extender was checked.
Figure 9 shows a 0.0001 Ω resistor with a temperature coefficient of +2 × 10⁻⁶/°C that peaks around 25 °C. This makes it ideal for a full range check. The error bars are 1 σ at each current level. The change in value from 10 A to 100 A is small and in the positive direction, which agrees with the temperature coefficient. Figure 10 shows a 0.0001 Ω resistor with a temperature coefficient of -5 × 10⁻⁶/°C. The change in value from 10 A to 100 A is small and in the negative direction, which agrees with the temperature coefficient.

Figure 8. Measurement results for a 0.0001 Ω resistor.

Figure 9. 0.0001 Ω resistor with a temperature coefficient of +2 × 10⁻⁶/°C.

Figure 11 shows a 0.0001 Ω resistor with a larger temperature coefficient of +10 × 10⁻⁶/°C. The change in the resistor's value from 10 A to 100 A is much larger, which agrees with the temperature coefficient.
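The sign reasoning applied to these full-range checks can be stated directly: the self-heating temperature rise at higher current, multiplied by the temperature coefficient, predicts the direction of the 10 A to 100 A change. A minimal sketch, assuming a hypothetical 0.5 °C self-heating rise (the text does not state the rise):

```python
# Predict the direction of the 10 A -> 100 A value change from the TCR.
# TCR values (in parts in 10**6 per degC) are those of the three shunts
# discussed above; the self-heating rise DELTA_T is an assumption.

def expected_shift_ppm(tcr_ppm_per_degc, delta_t_degc):
    """Relative resistance change, in parts in 10**6."""
    return tcr_ppm_per_degc * delta_t_degc

DELTA_T = 0.5  # hypothetical self-heating rise from 10 A to 100 A, degC

for tcr in (+2.0, -5.0, +10.0):
    shift = expected_shift_ppm(tcr, DELTA_T)
    direction = "up" if shift > 0 else "down"
    print(f"TCR {tcr:+g} x 10^-6/degC -> value moves {direction}")
```

The predicted directions (up, down, up) match the behavior described for the resistors of Figs. 9, 10, and 11, and the magnitude scales with the size of the coefficient.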


Figure 10. 0.0001 Ω resistor with a temperature coefficient of -5 × 10⁻⁶/°C.

Figure 11. 0.0001 Ω resistor with a temperature coefficient of +10 × 10⁻⁶/°C.

4. Summary

As a good metrological practice, DC current comparator range extenders and bridges need to be periodically checked to ensure that results will meet or exceed stated uncertainties. The technique presented here allows the user to check the functionality of all ranges of a range extender. Keep in mind that the resistors used in this technique (RS and RX) must have an accurate historical drift record and well-known temperature and power characteristics, and must be placed in a constant temperature oil bath maintained at 25.00 ± 0.005 °C to achieve the best possible results. Some of the low value resistors discussed in this paper were used in an international low ohm intercomparison, with results reported in [6].
5. References

[1] N. Kusters, W. Moore, and P. Miljanic, "A Current Comparator for the Precision Measurement of D-C Ratios," IEEE T. Commun. Electron., vol. 83, pp. 22-27, January 1964.
[2] M. MacMartin and N. Kusters, "A Direct-Current-Comparator Ratio Bridge for Four-Terminal Resistance Measurements," IEEE T. Instrum. Meas., vol. 15, no. 4, pp. 212-220, 1966.

[3] W. Moore and P. Miljanic, The Current Comparator, Peter Peregrinus Ltd., London, UK, 1988.
[4] R. Elmquist, D. Jarrett, G. Jones, M. Kraft, S. Shields, and R. Dziuba, "NIST Measurement Service for DC Standard Resistors," NIST Technical Note 1458, January 2004.
[5] M. Kraft and S. Kupferman, "Intercomparison of Decade Resistance Values Between 0.0001 and 10,000 Ohm Measured on an Automated Binary Current Comparator and on a Manual Current Comparator and a Double Ratio Set," Proceedings of the NCSL International Workshop and Symposium, Washington, DC, July 2001.
[6] G. Rietveld, J. van der Beck, and M. Kraft, "Evaluation of Low-Ohmic Resistance Measurement Capabilities between VSL and NIST," Conference on Precision Electromagnetic Measurements (CPEM) Digest, pp. 195-196, July 2012.
[7] R. Dziuba, "The NBS Ohm: Past-Present-Future," Proceedings of the Measurement Science Conference, pp. 15-27, January 1987.
[8] E. Rosa, "A New Form of Resistance Standard," Bulletin of the Bureau of Standards, vol. 5, no. 3, reprint no. 107, pp. 413-434, 1908-1909.





North American 100 Ampere Interlaboratory Comparison

Jay Klevens

Abstract: The base SI unit for electricity is the ampere. At present, there is no intrinsic standard for the ampere, so in practice it is disseminated by measuring the voltage across a resistor, using Ohm's Law (I = E / R). Higher current is measured with a shunt, which is a high power resistor. Accurate electrical current measurement is critical to the power and electrical test industries. In cooperation with the National Institute of Standards and Technology (NIST) and the NCSLI Utilities Committee, Ohm-Labs performed a North American 100 ampere interlaboratory comparison (ILC). Many measurements did not meet claimed uncertainties, revealing errors in measurement and uncertainty estimation. Two rounds of measurements were performed, and the results of both rounds are presented in this paper.
1. Introduction

Interlaboratory comparisons (ILCs) are an important part of measurement assurance programs. At the international level, national metrology institutes (NMIs) regularly circulate artifacts to verify agreement of their measured values. This assures dissemination of standard and derived international units within claimed uncertainties, and is necessary to establish traceability [1]. At the regional level, measurement labs can participate in similar comparisons. Regional ILCs allow laboratories to compare their methods, procedures, uncertainty estimations, and results with other participating laboratories. Current shunts are low ohmic value resistors designed for high power dissipation. Power is expressed in watts and equals current squared multiplied by resistance (W = I²R). Shunts are calibrated by comparison with a calibrated resistance standard. Calibration of the resistance standard provides traceability to the SI unit of the ampere. There are several methods for calibrating shunts. Formerly, a Kelvin bridge method was used [2]. The Kelvin bridge subjects both the shunt under test and the standard to equal current. Most resistance standards are not designed to handle high power. This limited the accuracy of shunt measurements to about 0.01 % of measured value. At a time when metrology grade shunts claimed 0.04 % accuracy, this provided a comfortable 4:1 test uncertainty ratio. The Kelvin bridge method is rarely used today.

A second method calibrates shunts by direct comparison. A calibrated standard shunt (Rs) is connected in series with a shunt under test (Rx) so that equal current flows through both. Both shunts are metered (Es and Ex). The value of Rx will equal (Ex / Es) × Rs. The accuracy of this method is primarily limited by the calibration uncertainty of the standard shunt, Rs. A third calibration method involves comparisons using a current comparator bridge. The current comparator bridge has two separate current loops, one through a resistance standard, a second through a resistor under test. Ratio windings allow up to 1,000,000:1 current comparisons. A current comparator system in wide use has 1,000:1 ratio capability, allowing direct comparison of 100 A through a shunt with 100 mA through a resistance standard [3]. Because 100 mA is the nominal measurement current for a 1 Ω resistance standard, it was suggested that the 100 A level was a desirable area for examination. This led to a proposal to perform a 100 A ILC.
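The direct (series) comparison reduces to one line of arithmetic. A minimal sketch with hypothetical voltage readings; the function name is not from the paper:

```python
# Shunt under test from a series comparison: equal current flows through
# the calibrated standard Rs and the test shunt Rx, so by Ohm's law
#     Rx = (Ex / Es) * Rs.
def shunt_by_direct_comparison(e_x_volts, e_s_volts, r_s_ohms):
    return (e_x_volts / e_s_volts) * r_s_ohms

# 100 A through a 1 milliohm standard reading 0.1000000 V and a test
# shunt reading 0.0999870 V (hypothetical readings):
r_x = shunt_by_direct_comparison(0.0999870, 0.1000000, 0.001)
print(f"{r_x:.9f} ohm")  # 0.000999870 ohm
```

Because the result scales directly with Rs, the calibration uncertainty of the standard shunt dominates the uncertainty budget, as noted above.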
2. Proposal and Charter

The ILC followed NCSLI's Recommended Practice RP-15, Recommended Practice for Interlaboratory Comparisons [4]. As current measurement is integral to electrical utilities, participants were solicited from the NCSLI Utilities Committee. Participants also included manufacturers of current comparator bridge systems, manufacturers of precision current shunts, a U.S. Department of Energy lab, a U.S. Navy lab, and two U.S. Department of Defense prime contractor labs. A total of 16 laboratories participated. Most are accredited to ISO/IEC 17025 or controlled by nuclear regulatory quality system requirements. NIST provided opening and closing measurements. A draft proposal, participant list, and draft measurement worksheet were circulated for participant review and comment. Suggestions and corrections were incorporated into a final proposal, which was distributed to the participants. The proposal defined the region and scope, identified the coordinator, specified the artifacts, identified potential problems, and outlined the ILC structure and cost. A modified petal structure was used, with the artifacts returning to the pivot lab several times during the ILC for intermediate checks. Participants were requested to bear the cost of outbound shipping to the next participant's lab, and to contribute a share of the NIST measurement cost. Measurement methods and uncertainty estimations were not initially defined, with the objective of surveying and evaluating existing practices. The proposal formed the basis of the ILC charter. The charter formalized the proposal and specified confidentiality. Each participant was assigned a letter code. The ILC coordinator acted as the pivot lab, and participated blind until the closing measurements were completed. The charter was subsequently modified to include a second round of measurements.

Jay Klevens
Ohm-Labs, Inc., 611 E. Carson St., Pittsburgh, PA 15203-1021

Figure 1. 1 mΩ shunt measurements showing connection variations.
3. Artifacts

NIST provided two artifacts. One was a Leeds & Northrup model 4363 1 mΩ shunt, manufactured in 1980; the other was a Rubicon model 1166 10 mΩ shunt of similar vintage. Using older artifacts allowed a depth of measurement history. Using artifacts of different nominal values allowed evaluation of systems at two power levels. Current connection lugs were provided by NIST to accompany the Leeds & Northrup shunt. The coordinator provided a transit container.
4. Instructions

Participants received instructions in the form of a worksheet. The worksheet had check boxes for receiving inspection. Participants reported, for each shunt, the time required to stabilize in the laboratory, the ambient temperature and relative humidity, a photo of the setup for review, the measured resistance, the date of measurement and the uncertainty, and a final inspection checklist prior to release to the next laboratory. A second page of the worksheet requested information on the measurement method and uncertainty calculations. These sections were deliberately left open to the participants' interpretation; reports ranged from brief narrative descriptions to detailed mathematical analysis.
5. Results

Figure 2. Current connection lug on 1 mΩ shunt.

The first round of the ILC began on November 29, 2007, following the NIST opening measurements. The first round comprised 20 measurements: 17 by participants (one of the 16 participants discovered an error and performed a second test) and three intermediate checks. The first round required an average of 2.6 weeks per measurement and concluded on December 2, 2008. Several participants were prompt in their measurements and reporting, but several needed reminders from the coordinator.

NIST concluded the closing measurement of the 1 mΩ shunt on December 15, 2008. The closing measurement showed an upward shift of 342 μΩ/Ω, indicating damage to the artifact (Fig. 1). For ease of comparing data, the charts are scaled in μΩ/Ω. The baseline is a linear interpolation between the opening and closing measurements.
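The baseline construction can be sketched as follows. This is an assumed implementation; the dates and resistance values below are hypothetical stand-ins, not the ILC data:

```python
from datetime import date

def baseline_value(d, d_open, v_open, d_close, v_close):
    """Linear interpolation, by date, between opening and closing values."""
    frac = (d - d_open).days / (d_close - d_open).days
    return v_open + frac * (v_close - v_open)

def deviation_uohm_per_ohm(measured, reference):
    """Deviation from the baseline, in uohm/ohm (parts in 10**6)."""
    return (measured - reference) / reference * 1e6

# Hypothetical opening/closing values for a 1 milliohm artifact that
# drifted upward by roughly 342 uohm/ohm over the round:
d_open, d_close = date(2007, 11, 29), date(2008, 12, 15)
v_open, v_close = 1.000000e-3, 1.000342e-3  # ohms

ref = baseline_value(date(2008, 6, 1), d_open, v_open, d_close, v_close)
print(round(deviation_uohm_per_ohm(1.000200e-3, ref), 1))
```

Each participant's reported value is reduced to a signed deviation from this date-interpolated reference, which is what the charts plot.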




Figure 3. First round of 10 mΩ shunt ILC results. The red lines represent the boundaries of the NIST uncertainties.

The Fig. 1 results illustrate problems associated with accurate shunt measurements. The artifact shifted. Significant differences in connection errors can be seen in data taken both with and without the supplied current posts. The shunt temperature was not reported, so temperature errors are unknown. Stabilization time was not specified, adding to temperature variations caused by varying degrees of thermal stabilization under power. One set of data was not received (lab code I), and one measurement is off the scale of the chart (lab code M without posts, -1012.3 μΩ/Ω with a claimed uncertainty of 3.0 μΩ/Ω). Finally, uncertainty estimates, often lower than those of NIST, do not allow for variables, and thus nearly all of these measurements would fail a proficiency test. It appears the value shifted in two steps, one after the opening measurement (first point) and one prior to the closing measurement (last point). Because the pivot lab, as a participant, was operating blind, and because of the relative stability of pivot lab measurements through the ILC, this shift was not noticed until the closing. The blue diamonds indicate measurements made with the current connection posts supplied by NIST; the red squares show measurements made without these posts. The connection variations are apparent. Figure 2 shows a current post installed. The posts are nickel plated solid copper bars, approximately 3 in × 0.75 in (7.6 cm × 1.9 cm). One end is threaded; the other is machined to closely fit the 0.75 in (1.9 cm) diameter hole in the shunt. The post distributes current more uniformly through the brass current posts on the shunt. Variations in current distribution through brass posts or blocks affect the measured resistance of a shunt. The manufacturer does not note or quantify this error source. Torque on the connecting bolts and the cleanliness (surface resistivity) of the current connection are also variable factors which cause errors

by affecting current distribution through the shunt [9]. The author has observed connection errors on older metrology shunts of this type exceeding 200 μΩ/Ω. Connecting to the top surface instead of inside the holes can cause errors greater than 400 μΩ/Ω. On lower cost metering type shunts, the author has observed connection errors exceeding 1000 μΩ/Ω. A review of photos of the test setups showed a variety of current connections, including some to the top of the current posts. NIST concluded closing measurements on the 10 mΩ shunt on January 12, 2009. A linear interpolation between the opening and closing values formed a baseline value. Participants' measured values were compared to this baseline value. The drift of this shunt was determined from the difference between the opening and closing measurements, which was -1.6 μΩ/Ω and can be considered negligible. Figure 3 shows the participants' results. Letter code X represents pivot lab measurements. Red upper and lower limits represent the NIST uncertainty. Error bars show participants' claimed uncertainties (UC). All uncertainties were reported at a coverage factor of k = 2. One set of readings was not received, and four measurements are off the 100 μΩ/Ω scale of the chart (Lab A, -619.9, UC 21.3; Lab F, +2038.5, UC 75.8; Lab G, -208.0, UC 501; Lab O, -1664.0, UC 27.0. Lab F discovered an error and performed a second measurement as a corrective action.). Figure 3 also illustrates the difference in uncertainty between current comparator systems (smaller error bars) and shunt comparison systems (larger error bars). It also shows many labs claiming a lower uncertainty than NIST. NIST claimed an uncertainty of 20 μΩ/Ω for the measurement of this shunt. This uncertainty included the standard deviation obtained from multiple measurements. The NIST report and an accompanying fact sheet note the effects of drift, transport, temperature, current and humidity, but not connection variations.
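The proficiency-test criterion alluded to above is commonly expressed as a normalized error, En = (x_lab − x_ref) / √(U_lab² + U_ref²), with |En| ≤ 1 considered satisfactory. This is standard interlaboratory-comparison practice rather than a formula stated in the paper, and the numbers below are hypothetical, not the participants' data:

```python
import math

def en_number(x_lab, u_lab, x_ref, u_ref):
    """Normalized error; |En| <= 1 is usually considered satisfactory."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Deviations and expanded (k = 2) uncertainties in uohm/ohm (hypothetical):
en = en_number(x_lab=35.0, u_lab=15.0, x_ref=0.0, u_ref=20.0)
print(round(en, 2), "unsatisfactory" if abs(en) > 1 else "satisfactory")
```

A lab that underestimates its uncertainty shrinks the denominator, which is why claims lower than the reference laboratory's make a failing En more likely even for a modest deviation.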

Due to the number of outlying measurements and the shift of one artifact, the coordinator proposed a second round of measurements. The goal was to minimize problems encountered in the first round [5]. The coordinator requested that all participants continue, drafted a revised charter, and developed new instructions. For the second round, participants performed one measurement using existing procedures and three subsequent measurements following a procedure defined by the coordinator. For the second round, the Leeds & Northrup shunt was replaced by a Rubicon 1168 1 mΩ shunt supplied by the coordinator. Copper current connector bars were fabricated, and a torque wrench with mating socket was supplied. Type T thermocouples were affixed to the mid-point of both shunts, and a type T thermometer was included with the artifacts. Figure 4 shows the second round items. The 10 mΩ shunt is on the left; the 1 mΩ shunt is on the right.

Figure 4. Artifacts for the second round of the shunt ILC.

Figure 5. Second round of 10 mΩ shunt comparison using standard laboratory procedure.

Fourteen of the 16 participants agreed to repeat the measurements (one was unable to budget the time, and one went out of business). The opening measurements for the second round were completed on September 20, 2009. Fifteen measurements (13 by participants and two by the pivot laboratory) concluded on April 5, 2011. These measurements averaged 5.3 weeks each, roughly twice the duration of the first round measurements. The longer time was partly due to increased measurement requirements. Closing measurement results were not received from NIST until November 1, 2011, during which time the ILC was idle. After the closing, two participants with prior scheduling conflicts completed second round measurements. These concluded February 17, 2012. The preliminary ILC results were distributed to participants by March 19, 2012.

The 10 mΩ shunt shifted significantly between the opening and closing measurements. The shift in value, as measured by NIST, was -42.5 μΩ/Ω with an uncertainty of 10 μΩ/Ω (k = 2). The pivot laboratory measurements of this shunt agreed within 15 μΩ/Ω after the opening and prior to the closing measurements, compared to a 6 μΩ/Ω agreement for the first round. The larger shift in value calls into question the validity of the second round results, as the artifact instability was greater than many participants' claimed uncertainties. This artifact was provided by NIST and had a long calibration history. The cause of the change in value is unknown, although, being a negative change in accordance with the long term drift of the shunt, it may be partially due to an accelerated downward trend caused by repeated operation at full power during the ILC. This effect, which varies from shunt to shunt, also occurs in resistors, and can be largely
NCSLI Measure J. Meas. Sci. | 41


Figure 6. Second round of 10 mΩ shunt comparison using ILC procedure. The red lines represent the boundaries of the NIST uncertainties.

Figure 7. Temperature curve for the 10 mΩ ILC shunt.

attributed to relaxation of stresses in the resistance alloy. To allow for the shift, results for this artifact include both the NIST uncertainty and the change in value. The baseline value is a linear interpolation between the opening and closing measurements. Eleven of the 14 participants submitted standard calibration reports, per the ILC instructions. Figure 5 shows the results of the 11 standard laboratory calibrations of the 10 mΩ shunt, plus two NIST measurements and four pivot measurements. NIST claimed a measurement uncertainty of 10 μΩ/Ω for this shunt. One measurement is off the scale of the chart (Lab A, +699.2, UC 55.0). After a measurement using the laboratory's standard procedure, participants were instructed to perform three measurements on three separate days, cleaning the current connection terminals, connecting the

copper bus bars with a torque of 20 newton-meters (15 foot-pounds) using the supplied torque wrench, and recording the shunt temperature with the supplied thermometer. This procedure was designed to reduce connection and temperature variations. All 14 participants provided these measurements. Figure 6 shows the results, including one pivot measurement and the two NIST measurements. An average of the three measurements is shown, along with the claimed uncertainty (error bars). Three measurements are off the scale of this chart (Lab A, -682.5, UC 52.0; Lab G, -176.0, UC 556.0; Lab E, -109.1, UC 199.5). A comparison of measurements made using the standard laboratory calibration procedures with those made using the special ILC procedure reveals that four were improved and 11 were worsened. Participants recorded the shunt temperature at the time of test. Since all metals change resistance with temperature, variations in temperature at the time of measurement may have caused errors. The temperature coefficient of resistance (TCR) of a shunt generally will not change over time. It can be positive or negative, and its curvature can cause a shunt to have both positive and negative regions across its power (or temperature) range. TCR varies from one shunt to another, even between identical models. The participants recorded temperatures between 53.6 °C and 63.0 °C during tests. With this shunt, the change in resistance around its mean temperature at 100 A is approximately -20 (μΩ/Ω)/°C. Figure 7 shows the temperature chart for this shunt. The difference between the NIST opening measurement at 58.9 °C and the closing measurement at 58.0 °C would cause a resistance difference of approximately +9.2 μΩ/Ω. This difference is a significant proportion of the NIST claimed uncertainty of 10 μΩ/Ω and highlights the difficulty of accurately transferring valid and repeatable shunt measurements. The difference can be factored into the closing measurement to give a corrected drift of -33.3 μΩ/Ω, but


Figure 8. Participants' average reported temperature of the 10 mΩ shunt at 100 amperes.

Figure 9. Participants' measurements of the 10 mΩ shunt corrected for temperature.

applying this correction does not significantly affect the participants' results. Figure 8 shows the average of the participants' three temperature measurements. The trend line in Fig. 7 is a third-order polynomial fit. Although a second-order curve is usually used for resistors or shunts, the third-order curve better fits the data points. By applying the third-order polynomial equation from the curve to the participants' measurements, we can estimate and apply a temperature correction. Figure 9 shows the participants' measurements corrected for temperature variation from the mean; however, applying this temperature correction does not significantly alter the results either. The second shunt, with a resistance of 1 mΩ, heats less than the 10 mΩ shunt because it dissipates less power: at 100 A, I²R for the 10 mΩ shunt equals 100 W, while for the 1 mΩ shunt it is 10 W. Lower power reduces errors from heating.
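The polynomial temperature correction described above can be sketched as follows. This is an illustrative Python sketch, not the tool actually used in the ILC; the characterization data are hypothetical (not the Fig. 7 data), and NumPy's `polyfit` stands in for whatever fitting software was used.

```python
import numpy as np

# Hypothetical characterization data: deviation of the shunt value
# (in uOhm/Ohm) versus shunt temperature (deg C) at 100 A.
temps = np.array([54.0, 56.0, 58.0, 60.0, 62.0])
devs = np.array([95.0, 55.0, 20.0, -12.0, -40.0])

# Third-order polynomial fit, as used for the Fig. 7 trend line.
curve = np.poly1d(np.polyfit(temps, devs, 3))

def correct_to_mean(measured_dev, t_meas, t_mean):
    """Remove the estimated temperature effect by referring a measurement
    made at t_meas back to the mean temperature t_mean."""
    return measured_dev - (curve(t_meas) - curve(t_mean))

t_mean = temps.mean()
print(round(correct_to_mean(30.0, 59.0, t_mean), 1))
```

A measurement made exactly at the mean temperature is, by construction, returned unchanged.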

This shunt changed in value by +10.0 μΩ/Ω between the opening and closing measurements. This change is within the NIST claimed uncertainty of 20 μΩ/Ω, so the data are presented without additional correction. As with the 10 mΩ shunt, the baseline value is an interpolation between the opening and closing measurements. Eleven participants submitted results and uncertainties using their existing calibration procedures. Figure 10 shows results for the 1 mΩ shunt, with two NIST and four pivot measurements. Participants performed a subsequent set of three measurements on three separate days, following the ILC instructions. As with the 10 mΩ shunt, all 14 participants reported this data. Figure 11 shows the average of these three measurements, including the NIST result and two pivot measurements. Comparing the 16 measurements performed using participants' existing procedures with those using the ILC procedure shows that 10 were improved by the ILC procedure and six were worsened.





Figure 10. Second round of 1 mΩ shunt comparison, standard procedure. The red lines represent the boundaries of the NIST uncertainties.

Figure 11. Second round of 1 mΩ shunt comparison, ILC procedure. The red lines represent the boundaries of the NIST uncertainties.

6. Evaluation of Results

A useful gauge of a laboratory's measurement proficiency is to calculate the difference between the laboratory's measured value and a reference value, and to compare that difference to the combined uncertainties of the two values. The equation for this calculation is

En = (x − X) / √(Ux² + UX²)   (1)

In Eq. 1, x is the participant's measured value, X is the reference measured value, Ux and UX are their respective measurement uncertainties, and En is the result. An En of magnitude greater than 1 shows that a lab has failed to perform a measurement

within its claimed uncertainty. An En close to 1 can reveal areas of concern. Reported uncertainty is generally calculated by combining Type B ("built-in") uncertainty components inherent in the laboratory's system with Type A ("at time of test") components that are unique to the unit under test. Table 1 shows the En results for the ILC. Note that the En results for the 10 mΩ shunt factor in the shift in value of the artifact and may not be reliable. In general, laboratories using the shunt comparison method achieved an En of less than 1, due to their larger measurement uncertainty. Laboratories with the highest En generally claimed the smallest uncertainty, leaving the least allowance for error.
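In code, the En calculation of Eq. 1 is a one-liner. The sketch below uses hypothetical values, not figures taken from Table 1:

```python
import math

def en_number(x, u_x, x_ref, u_ref):
    """Normalized error (Eq. 1): the difference between a participant's
    value and the reference value, divided by the root-sum-square of
    the two measurement uncertainties."""
    return (x - x_ref) / math.sqrt(u_x ** 2 + u_ref ** 2)

# Hypothetical values in uOhm/Ohm: a lab reports +25 with U = 30
# against a reference value of 0 with U = 10.
en = en_number(25.0, 30.0, 0.0, 10.0)
print(round(en, 3))  # 0.791, i.e. |En| < 1: within claimed uncertainty
```

Note that a laboratory claiming a smaller U leaves itself a smaller denominator, and therefore less allowance for error, exactly as observed in the ILC results.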



Table 1. En tabulation. [The table lists each participant's En values for the 10 mΩ shunt (standard calibration and ILC calibration) and the 1 mΩ shunt (with posts, without posts, standard calibration and ILC calibration); the column alignment of the individual values was not recoverable from the source layout.]

Performing separate tests on separate days helps to establish repeatability as a Type A uncertainty component. The standard deviation of the participants' three measurements is shown in Fig. 12. Although repeatability is a significant Type A uncertainty component, a comparison of En and repeatability does not show a strong correlation: laboratories with highly repeatable results may be repeating measurements that would fail a proficiency test.
7. Root Cause Error Analysis

NIST reports the measured value of a shunt using a current comparator bridge system. Careful attention is paid to the potential and current connections. Shunts with a temperature sensor are allowed to reach thermal stabilization before measurements begin. Shunts without a temperature sensor are allowed to approach thermal equilibrium, at which point the resistance measurements will show stabilization. Ambient temperature and the rate of air flow across the shunt both affect the equilibrium temperature and thus the resistance [6]. Figure 13 shows the stabilization time of the 10 mΩ shunt used in the ILC. Figure 14 shows the stabilization time for the 1 mΩ shunt. Both measurement runs were made with 100 A applied from a cold condition at a laboratory temperature of 23 °C.

Both shunts require about one hour to reach thermal stability. Due to the temperature coefficient of resistance of these shunts, the most likely error source is temperature, which is affected by the duration of applied current, the level of applied current, ambient temperature, air flow, and the thermal mass of the connectors. Without carefully duplicating all of these environmental conditions, a laboratory may not obtain a measurement in agreement with NIST [7]. The material and design of these shunts are also noteworthy. The material is an alloy of copper, manganese and nickel called Manganin, developed in the late 1800s. The design of the L&N shunt dates from the early 1900s, and the Rubicon design (differing only in its current connections) from around 1960. Many laboratories rely on their calibration history for shunts of this design, because the history of shunts of newer designs is not as well established [8]. The ILC instructions for the second round attempted to minimize connection, temperature and stabilization errors. Drift errors were accounted for by interpolating between the NIST opening and closing measurements. A review of participants' uncertainty components shows that participants generally understood and applied appropriate elements and calculated them properly.




Figure 12. Standard deviation of three measurement runs.

Figure 13. Stabilization time for the 10 mΩ shunt (100 A single-range shunt, S/N 36300).

Figure 14. Stabilization time for the 1 mΩ shunt (0.001 Ω, 300 A Rubicon model 1150, S/N 102145).

8. Comparison of Methods

Two measurement methods were used by participants: comparison using a current comparator bridge system and direct comparison with a calibrated shunt. Five participants used the direct comparison method. Twenty of the 24 measurements made using the direct comparison method passed (En < 1). Of the four that did not pass, one was caught by the participant and subsequently corrected. Eleven participants used a current comparator bridge system. When properly used to measure resistance standards, these systems can support uncertainties at the μΩ/Ω level. Resistors are usually measured at a power level at or below 10 mW. When measuring current shunts at higher power, additional error factors become increasingly significant, often overwhelming the system's best uncertainty. NIST claims a Type B uncertainty at 10 mΩ of 0.8 μΩ/Ω at a power level of 10 mW. At 100 W, as in this ILC, the NIST Type B uncertainty at 10 mΩ is 10 μΩ/Ω. At the 1 mΩ level, the Type B uncertainties are 1.2 μΩ/Ω and 20 μΩ/Ω respectively. The higher uncertainty at higher power is derived from experience with shunts.

9. Conclusions

Although many laboratories have experience measuring shunts, errors inherent in higher-power measurements may not be specified in manuals for shunts or current comparator bridge systems. Such error factors were also absent from earlier publications, such as the instruction sheets for the shunts used in this ILC. An end user must rely on a laboratory's measured value and claimed uncertainty. Laboratories with many years' experience, state-of-the-art equipment, and careful environmental controls, when supported by third-party accreditation, are expected to provide reliable calibration results. A comparison of measurements with NIST shows that this is often not the case with current shunts.
10. Acknowledgements

The author acknowledges the contributions and assistance of Marlin Kraft of the Quantum Measurement Division of NIST.



11. References

[1] R. Elmquist, D. Jarrett, and N. Zhang, "RMO comparison final report: 2006-2007 Resistance standards comparison between SIM laboratories. SIM.EM-K1, 1 Ω; SIM.EM-K2, 1 GΩ; SIM.EM-S6, 1 MΩ," Metrologia, vol. 46, no. 1A (Technical Supplement), 2009.
[2] F. Wenner, "Methods, Apparatus and Procedures for the Comparison of Precision Standard Resistors," J. Res. Nat. Bur. Stand., vol. 25, pp. 229-293, August 1940.
[3] M. P. MacMartin and N. Kusters, "A Direct-Current-Comparator Ratio Bridge for Four-Terminal Resistance Measurements," IEEE T. Instrum. Meas., vol. IM-15, no. 4, pp. 212-220, December 1966.
[4] NCSLI, "Guide for Interlaboratory Comparisons," NCSLI Recommended Practice RP-15, NCSL International, Boulder, Colorado, 2008.
[5] D. Braudaway, "The Problem With Shunts," Proceedings of the Measurement Science Conference, Anaheim, California, March 2008.
[6] R. Elmquist, D. Jarrett, G. R. Jones, Jr., M. Kraft, S. Shields, and R. Dziuba, "NIST Measurement Service for DC Standard Resistors," NIST Technical Note 1458, 75 p., December 2003.
[7] M. Kraft, "Measurement Techniques of Low-Value High-Current Single-Range Current Shunts from 15 Amps to 3000 Amps," NCSLI Measure J. Meas. Sci., vol. 2, no. 1, pp. 44-49, March 2007.
[8] D. Braudaway, "Behavior of Resistors and Shunts: With Today's High-Precision Measurement Capability and a Century of Materials Experience, What Can Go Wrong?," IEEE T. Instrum. Meas., vol. 48, no. 5, pp. 889-893, October 1999.
[9] D. Destefan, "Current Feed Point and Torque Sensitivity of High Current dc Shunts," Proceedings of the NCSL International Workshop and Symposium, Tampa, Florida, July 2003.

Appendix: List of Participants

Organization, Participant, City
NIST, Marlin Kraft, Gaithersburg
Energy Northwest, John Atkins, Richland
Exelon PowerLabs, Cory Peters, Coatesville
FirstEnergy BETA Lab, John Decker, Mayfield Village
FPL Energy Seabrook, Bill Hinton, Seabrook
Guildline Instruments, Cliff Chouinor, Smiths Falls
High Current Technologies, Dennis Destefan, Broomfield
Lockheed Martin Denver Metrology, Bill Miller, Littleton
Lockheed Martin Tactical Systems, Rod Enke, St. Paul
Measurements International, Duane Brown, Prescott
Norfolk Naval Shipyard, Scott Smith, Norfolk
Ohm-Labs, Jay Klevens, Pittsburgh
Pacific Gas and Electric, Gary Barnes, San Ramon
Process Instruments, Karl Klevens, Pittsburgh
Sandia National Laboratories, Jim Novak, Albuquerque
Southern California Edison, Richard Brenia, Westminster
Southern Texas PNOC, Keith Scoggins, Wadsworth





Software Tools for Evaluation of Measurement Models for Complex-valued Quantities in Accordance with Supplement 2 to the GUM
Cho-man Tsui, Aaron Y. K. Yan and Hing-wah Li

Abstract: In October 2011, a new guidance document, "Evaluation of measurement data - Supplement 2 to the Guide to the expression of uncertainty in measurement - Extension to any number of output quantities," was released. It deals with measurement models that have more than one output quantity. Such models are common in electrical metrology, where the measurands can be complex-valued quantities such as S-parameters. The guidance document describes a generalized GUM Uncertainty Framework (GUF) and a Monte Carlo Method (MCM) to estimate the output quantities, their standard uncertainties, the covariances between them, and the coverage region. The GUF has limitations; the MCM has a broader domain of validity but requires software tools for its computation. The Standards and Calibration Laboratory (SCL) in Hong Kong has developed software tools for evaluation of complex-valued measurement models using both GUF and MCM. The software tools were written in Visual C++ and Visual Basic, with Microsoft Excel as a front-end user interface, but could be adapted to other programming languages. The user is only required to encode the measurement model as a Visual Basic subroutine and to enter the information on the uncertainty components in a table. The software tools include two parts that can be used separately: the first part is a standalone simulator and the second part is an Excel user-defined function.
1. Introduction

In 1993, the document Guide to the expression of uncertainty in measurement (GUM) [1] was released. It defined the GUM Uncertainty Framework (GUF) for the evaluation of measurement uncertainties. There are limitations in the application of the GUF. In 2008, Supplement 1 to the GUM [2] was published; it discussed the propagation of probability distributions through a measurement model. The GUM and Supplement 1 to the GUM mainly deal with measurement models having any number of input quantities but only one output quantity. Such a measurement system is represented by a function involving real input quantities X1, …, XN and a single real output quantity Y, i.e. Y = f(X), where X denotes the vector (X1, …, XN)T. This is called the real univariate measurement function. In many measurement systems, especially those involving complex numbers, there may be more than one output quantity. These measurement systems are multivariate and can be represented either as Y = f(X), where Y denotes the vector (Y1, …, Ym)T, or in the more general form h(X, Y) = 0. The output quantities

may be correlated even when all the input quantities are independent. The coverage regions for the output quantities, being multidimensional, are more complicated than those for the univariate case. The GUM and Supplement 1 to the GUM are therefore inadequate for measurement models with multiple output quantities. A new guidance document, Supplement 2 to the GUM [3], is intended to deal with measurement models having any number of input quantities and any number of output quantities. In this supplement, the GUF is extended to cover multivariate measurement models. Since the GUF is based on a linear approximation of the measurement function and assumes a Gaussian distribution for the output, Supplement 2 to the GUM also describes a more general procedure for evaluating uncertainty by propagating distributions using MCM. The Standards and Calibration Laboratory (SCL) has developed software tools for evaluation of measurement models for complex-valued quantities in accordance with Supplement 2 to the GUM. With

these tools, users only need to encode the measurement model as a Visual Basic (VB) subroutine and to specify the relevant parameters of the uncertainty components, such as estimates, standard uncertainties and probability distribution functions (PDFs), in table form on a Microsoft Excel worksheet. The software tools include two parts that can

Cho-man Tsui

Aaron Y. K. Yan

Hing-wah Li The Government of the Hong Kong Special Administrative Region Standards and Calibration Laboratory 36/F Immigration Tower, 7 Gloucester Road, Wan Chai, Hong Kong

be used separately. The first part is a standalone interactive simulator for evaluation of measurement models. The second part is a Microsoft Excel user-defined function that can be embedded in any Excel worksheet for GUF or MCM computation. The software tools, written in Visual C++ and Visual Basic, are tightly integrated with Microsoft Excel, which serves as the front-end user interface. Computationally intensive routines that require faster execution speed were developed in Visual C++ and compiled into a Dynamic Link Library (DLL). Note that the terms "measurement function" and "measurement model" have distinct meanings in Supplement 2 to the GUM: a measurement function is a subset of a measurement model. In Supplement 2 to the GUM, the term measurement model is chiefly used in cases where the output quantity Y and input quantity X are related by the equation h(Y, X) = 0. On the other hand, measurement function is chiefly used in cases where Y is written in the form Y = f(X). The GUM and Supplement 1 to the GUM do not appear to make such a distinction. The SCL software tools support only measurement functions. In this paper the term measurement model is used in a more general sense that also encompasses measurement functions.
2. The Three Stages of Uncertainty Evaluation

The main stages of uncertainty evaluation described in the GUM and its Supplements are listed below. This paper will present how the SCL software tools work in each stage. (a) Formulation: the input quantities X and output quantities Y are defined; the estimates, standard uncertainties and PDFs for X are assigned based on available knowledge; and the mathematical relationship between X and Y is established to form the measurement model. (b) Propagation: the PDF of Y is derived from that of X through the measurement model. (c) Summarizing: the expectation, covariance matrix and coverage region of Y are obtained from the PDF of Y.

3. Formulation
3.1 Setting up the Measurement Model

Microsoft Excel includes a Visual Basic (VB) editor, allowing users to employ a subset of the VB programming language to develop software routines that enhance and customize the features of Excel. The SCL software tools make use of this built-in VB programming language to encode the measurement model, which is represented by a VB subroutine. The input quantities of the measurement model are passed to the subroutine in an array x with a maximum of 100 elements, x(1) to x(100). The output quantities are returned in an array y with two elements, y(1) and y(2). It is easier to explain using examples. Section 9.2 of Supplement 2 to the GUM describes the following additive measurement model:

Y1 = X1 + X3 ,  Y2 = X2 + X3   (1)

In the SCL software tools, this measurement model is encoded as the following VB subroutine. Lines 1 and 4 are the VB syntax for declaring a subroutine. Lines 2 and 3 form the core of the measurement model and their usage is self-explanatory.

1: Public Static Sub model(x() As Double, y() As Double)
2:   y(1) = x(1) + x(3)
3:   y(2) = x(2) + x(3)
4: End Sub

Although the SCL software tools handle only two output quantities at a time, it is straightforward to extend their use to measurement systems with more than two output quantities. For example, for a measurement model with three output quantities, the computations can be performed three times, taking two outputs in rotation, to obtain the three covariances between the output quantities. Another example is the co-ordinate system transformation given in section 9.3.2 of Supplement 2 to the GUM:

Y1² = X1² + X2² ,  tan Y2 = X2 / X1   (2)

The corresponding VB subroutine is shown below. Line 3 ensures that both x(1) and x(2) are non-zero before the arctangent function in line 4 is called; this avoids an error condition that would halt the software execution. During MCM computation, this subroutine is called many times, each time with different random values passed to the variables x(1) and x(2). The subroutine must be designed to avoid illegal operations such as division by zero. The atan2_c() function is a stub for calling the atan2() function in Visual C++; the authors found that the built-in arctangent function in VB did not behave well for this application.

1: Public Static Sub model(x() As Double, y() As Double)
2:   y(1) = Sqr(x(1) * x(1) + x(2) * x(2))
3:   If x(1) <> 0 And x(2) <> 0 Then
4:     y(2) = atan2_c(x(2), x(1))
5:   Else
6:     y(2) = 0
7:   End If
8: End Sub

The examples above are measurement models with independent input quantities (i.e. correlation coefficient = 0). If the input quantities are correlated, the encoding of the measurement model is more complicated; see the Appendix for an example. The GUM [F1.2.4] suggests that correlations may be eliminated by re-defining input quantities in terms of additional fundamental independent input quantities. Such redefinition is desirable since it simplifies the encoding of the measurement model and speeds up the computation. VB supports user-defined data types; with this feature it is easy to represent and manipulate complex numbers. An example is shown in Section 8.
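The two-at-a-time rotation for models with more than two outputs can be illustrated with a short sketch. This is not the SCL VB implementation: the three-output model and the standard normal inputs are hypothetical, and the pairwise routine stands in for one run of a two-output tool.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Hypothetical three-output model (not from the paper):
    # y1 = x1 + x3, y2 = x2 + x3, y3 = x1 - x2
    return np.column_stack([x[:, 0] + x[:, 2],
                            x[:, 1] + x[:, 2],
                            x[:, 0] - x[:, 1]])

def pair_covariance(i, j, n=100_000):
    """One run of a two-output tool: Monte Carlo estimate of the 2x2
    covariance of outputs i and j, with standard normal inputs assumed."""
    y = model(rng.standard_normal((n, 3)))
    return np.cov(y[:, i], y[:, j])

# Take the outputs two at a time in rotation and assemble the full
# 3x3 covariance matrix from the three pairwise runs.
full = np.zeros((3, 3))
for i, j in [(0, 1), (0, 2), (1, 2)]:
    c = pair_covariance(i, j)
    full[i, i], full[j, j] = c[0, 0], c[1, 1]
    full[i, j] = full[j, i] = c[0, 1]
print(np.round(full, 1))
```

Each pairwise run supplies one off-diagonal covariance, and the diagonal entries are simply overwritten with the latest estimates.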
3.2 Setting up Input Quantities and Assigning PDF

The SCL software tools support a maximum of 100 input quantities. Users can specify the parameters of the input quantities in table form on an Excel worksheet. An example is shown in Table 1. The PDF column is used to specify the PDF for the input quantities. The following types are supported: CTP - curvilinear



Quantity  PDF  Expectation  G,T: std. dev.; R,U,TR: semi-range; CTP,TP: a  T: DOF; CTP,TP: b  TP: -; CTP: d
X1        G    2.000E+00    5.000E-01
X2        T    0.000E+00    1.000E+00
X3        R    1.000E+00    2.000E+00
X4        U    3.000E+00    1.000E+00
X5        TR   0.000E+00    1.000E+00
X6        CTP               -1.000E+00                                     1                  0.3
X7        TP                -1.000E+00                                     1                  0.5
X8        E

Table 1. An example of how to specify the parameters of input quantities.

trapezoid; E - exponential; G - Gaussian; R - rectangular; TP - trapezoidal; TR - triangular; T - Student's t; U - arc sine. The other columns are used for entering the expectation, standard deviation, semi-range or other parameters relevant to the particular type of PDF, as indicated in the table heading. For example, in Table 1, the input quantity x1 has an expectation of two, a standard deviation of 0.5, and a Gaussian PDF. During MCM computation, pseudo-random numbers will be generated with these properties and passed to the variable x(1) in the VB subroutine representing the measurement model. Similarly, the input quantity x3 has an expectation of one, a semi-range of 2 and a rectangular PDF. The input quantity x6 has a curvilinear trapezoid distribution with parameters a = -1, b = 1 and d = 0.3.
4. Propagation

4.2 Propagation using Numerical Method

Supplement 2 to the GUM describes a Monte Carlo method (MCM) for the propagation of distributions. The idea is to make a large number of draws from the PDFs of the input quantities and to derive the output quantities for each draw. Increasing the number of draws makes the results more reliable, but the trade-off is a longer computation time. There are two ways to select the number of Monte Carlo trials: a fixed number of trials can be chosen beforehand, or an adaptive algorithm may be used to determine the number of trials on the fly based on the convergence of the simulated output. The SCL software tools support both the fixed sample size method and the adaptive method. When the adaptive method is chosen, the required number of significant digits (1 or 2) for the output quantities can be selected.
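The adaptive idea can be illustrated with a deliberately simplified sketch: blocks of trials are added until the estimate of the standard uncertainty stabilizes. This is not the SCL implementation, and it is much cruder than the adaptive procedure of Supplement 1 to the GUM; the model and input PDFs are taken from the example in section 9.2.4 of Supplement 2 to the GUM.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_draws(n):
    # Draws of y1 for the additive model of Eq. (1): x1 Gaussian(0, 1)
    # and x3 rectangular with semi-range 3*sqrt(3) (standard deviation 3).
    x1 = rng.normal(0.0, 1.0, n)
    x3 = rng.uniform(-3.0 * np.sqrt(3.0), 3.0 * np.sqrt(3.0), n)
    return x1 + x3

def adaptive_u(tol=0.005, block=50_000, max_blocks=100):
    """Keep adding blocks of trials until the standard uncertainty
    estimate changes by less than the chosen tolerance."""
    y = model_draws(block)
    u_prev = y.std(ddof=1)
    for _ in range(max_blocks):
        y = np.concatenate([y, model_draws(block)])
        u = y.std(ddof=1)
        if abs(u - u_prev) < tol:
            return u, y.size
        u_prev = u
    return u, y.size

u, n_trials = adaptive_u()
print(round(u, 2), n_trials)
```

The result converges towards sqrt(1 + 9) = 3.16; a tighter tolerance (more significant digits) simply forces more blocks of trials, which is the trade-off the adaptive mode automates.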
5. Summarizing

Supplement 2 to the GUM describes several ways to propagate the distributions: (i) an analytical method; (ii) a first-order Taylor series approximation of the measurement model; (iii) a numerical method. The first method is impractical for most real-world measurement systems. The SCL software tools support the second and third methods.
4.1 Propagation Using First Order Taylor Series Approximation

This is the method employed in the GUF. The input quantities are characterized by a vector x = (x1, …, xN)T representing the estimates of the input quantities X and a covariance matrix Ux containing the covariances of the input quantities. Let the measurement model be denoted by Y = f(X); the estimate of Y is then given by y = f(x). The covariance matrix of y, Uy, is obtained using the following matrix formula:

Uy = Cx Ux CxT   (3)

Cx is the sensitivity matrix. The entry at row i and column j of Cx is given by the partial derivative of the i-th model function with respect to the j-th input quantity, evaluated at the estimates. In the SCL software tools, the values of Cx are evaluated numerically.
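As an illustration of this step (a Python sketch, not the SCL VB/C++ code), the sensitivity matrix for the additive model of Eq. (1) can be formed by central finite differences and propagated through Eq. (3):

```python
import numpy as np

def sensitivity_matrix(f, x, h=1e-6):
    """Numerical sensitivity matrix Cx: entry (i, j) approximates the
    partial derivative of f_i with respect to x_j by central differences."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(f(x)).size
    C = np.zeros((m, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        C[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * h)
    return C

# Additive model of Eq. (1): y1 = x1 + x3, y2 = x2 + x3,
# with u(x1) = u(x2) = 1 and u(x3) = 3 as in section 9.2.4.
f = lambda x: np.array([x[0] + x[2], x[1] + x[2]])
Cx = sensitivity_matrix(f, [0.0, 0.0, 0.0])
Ux = np.diag([1.0, 1.0, 9.0])
Uy = Cx @ Ux @ Cx.T          # Eq. (3)
print(np.round(Uy, 3))
```

For this linear model the central differences are exact, and Uy comes out as [[10, 9], [9, 10]], the GUF covariance matrix quoted in Table 3.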

After propagating the distributions to obtain the PDF of the output quantities Y, the expectation, covariance matrix and coverage region of Y can in turn be summarized from the PDF. To illustrate the results, the case described in section 9.2.4 of Supplement 2 to the GUM is used as an example. The measurement model is given by Eq. (1). The definitions of the input quantities are shown in Table 2. Here, x(1) and x(2) are both characterized by a Gaussian PDF with zero expectation and unit standard deviation, and x(3) is characterized by a rectangular distribution with zero expectation and semi-range 3√3 (i.e. standard deviation equal to three). In this example MCM was carried out for 1,000,000 trials. The estimates and covariance matrix of the output quantities were computed directly from these 1,000,000 trials. The results are listed in Table 3 below, along with those obtained by GUF. To help visualize the results, the SCL software tools utilize the chart functions of Excel to display a histogram of the PDF for each output. Figure 1 shows the histogram for y1. The blue line represents the PDF computed by MCM, while the purple line represents the Gaussian PDF assumed by GUF. To depict the correlation between the output quantities, the contour of their joint PDF is displayed. Figure 2 shows the contour for the above example; the correlation between y1 and y2 is apparent from the shape of the contour. It is not easy to determine multidimensional coverage regions for vector output quantities. Supplement 2 to the GUM considers three types of coverage regions for multivariate cases: hyper-ellipse,


Quantity  PDF  Expectation  G: std. dev.; R: semi-range
X1        G    0.000E+00    1.000E+00
X2        G    0.000E+00    1.000E+00
X3        R    0.000E+00    5.196E+00

Table 2. Input quantities for the example in 9.2.4 of Supplement 2 to the GUM.

Method  Estimate of y1  Estimate of y2  u(y1)  u(y2)  r(y1, y2)
MCM     -0.004          0.001           3.161  3.161  0.900
GUF     0.000           0.000           3.162  3.162  0.900

Covariance matrix of output obtained by MCM:
   9.995   8.996
   8.996   9.995

Covariance matrix of output obtained by GUF:
  10.000   9.000
   9.000  10.000

Table 3. Results for the example in 9.2.4 of Supplement 2 to the GUM.
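The MCM side of Table 3 can be reproduced with a few lines of Python. This is an illustrative sketch, not the SCL implementation; NumPy's random generator is assumed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Input draws per Table 2: x1, x2 Gaussian with zero mean and unit
# standard deviation; x3 rectangular with semi-range 3*sqrt(3),
# i.e. standard deviation 3.
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
x3 = rng.uniform(-3.0 * np.sqrt(3.0), 3.0 * np.sqrt(3.0), n)

# Additive measurement model of Eq. (1).
y1, y2 = x1 + x3, x2 + x3

u1, u2 = y1.std(ddof=1), y2.std(ddof=1)
r = np.corrcoef(y1, y2)[0, 1]
print(round(u1, 3), round(u2, 3), round(r, 3))
```

With 1,000,000 trials the standard uncertainties come out close to 3.162 and the correlation coefficient close to 0.900, in line with both rows of Table 3.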

Figure 1. Histogram of PDF for an output quantity.

Figure 2. Contour of the joint PDF.

hyper-rectangle and smallest coverage region. The first two types of coverage region are characterized by two sets of quantities: the covariance matrix of the output quantities and a scalar parameter (kp for the hyper-ellipse and kq for the hyper-rectangle) that determines the volume under the PDF corresponding to the coverage probability. The SCL software tools calculate kp and kq from the MCM results. In the above example, the value obtained for kp is 2.283, while that for kq is 1.870. For comparison, the GUF assumes that the output quantities follow a multivariate Gaussian distribution; the values of kp and kq for the bivariate case are then 2.45 and 2.24 respectively. The current version of the SCL software does not compute parameters for the smallest coverage region.
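One plausible way to obtain kp from MCM draws (an assumption about the computation, not a description of the SCL code) is to take the square root of the p-quantile of the squared Mahalanobis distances of the draws:

```python
import numpy as np

def kp_from_draws(ys, p=0.95):
    """Hyper-ellipse coverage factor: the square root of the p-quantile
    of the squared Mahalanobis distances of the MCM draws."""
    d = ys - ys.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ys, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return float(np.sqrt(np.quantile(d2, p)))

# For a bivariate Gaussian output the estimate approaches the GUF value,
# the square root of the 95th percentile of a chi-squared distribution
# with 2 degrees of freedom, about 2.45.
rng = np.random.default_rng(7)
ys = rng.multivariate_normal([0.0, 0.0], [[10.0, 9.0], [9.0, 10.0]], 500_000)
print(round(kp_from_draws(ys), 2))
```

Applied to draws from a non-Gaussian joint PDF, the same recipe yields values such as the kp = 2.283 reported above, smaller than the Gaussian 2.45.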
6. The Simulator

The SCL software tools include two parts that can be used separately. The first part is a standalone simulator that allows users to interactively vary the parameters of the input quantities and the measurement model. Users can also experiment with different configurations of the measurement system and explore the effects on the measurement uncertainties. A screen shot of the SCL simulator is shown in Fig. 3.

The SCL simulator also allows comparison of results obtained by GUF with those obtained by MCM. Supplement 1 to the GUM details perceived limitations of the GUF method in its sections 5.7 and 5.8. Section 8 of Supplement 2 to the GUM describes a procedure that uses MCM to check whether the GUF provides an adequate approximation in a particular case: both GUF and MCM are applied to the same data and their results are compared. If the differences between the results obtained by the two methods are small, compared to the numerical tolerances, then the Supplement allows GUF to be used for that particular measurement system. While the Supplement states that MCM has a wider domain of validity than GUF, it should be pointed out that there is currently some controversy over this view [4].

7. User Defined Function

If the validation of GUF described in section 8 of Supplement 2 to the GUM fails, then the computationally more demanding MCM needs to be used. To facilitate such computation, the second part of the SCL software tools is an Excel user-defined function (UDF) that can be embedded in any Excel worksheet for GUF or MCM computation.


Figure 3. Screen shot of the SCL simulator.

An Excel UDF behaves just like other built-in Excel functions: it takes arguments and returns a value, and it is re-executed whenever any value in its argument list changes, so recalculation is automatic. If a UDF takes a long time to execute, such as when running MCM for a large number of trials on a complicated measurement model, Microsoft Excel will appear frozen. Unlike macros in Excel, there is no convenient way for a UDF to report the progress of its execution. A solution is to declare an Excel macro that invokes the UDF, so that a progress bar can be added. The tradeoff is that, since the macro must be invoked manually, recalculation is no longer automatic. The syntax of the UDF is:

=gum2(range, index, sim_mode, sim_par, conf_type, model_index)

where

range : points to the table of input quantities.
index : specifies the return parameter (see Table 4 for details).
sim_mode : enter 1 to select the adaptive simulation mode; enter 2 to select the fixed sample size simulation mode.
sim_par : the number of significant digits (1 or 2) for the adaptive simulation mode, or the number of trials for the fixed sample size simulation mode.
conf_type : optional parameter. Enter 1 to select the symmetrical coverage interval; enter 2 to select the shortest coverage interval. This is for compatibility with Supplement 1 to the GUM.
model_index : optional parameter reserved for future expansion.
Computed by MCM:

  Index   Return parameter
  1       Expectation of measurand 1
  2       Standard uncertainty of measurand 1
  3       Low boundary of 95 % confidence interval of measurand 1
  4       High boundary of 95 % confidence interval of measurand 1
  11      Expectation of measurand 2
  12      Standard uncertainty of measurand 2
  13      Low boundary of 95 % confidence interval of measurand 2
  14      High boundary of 95 % confidence interval of measurand 2
  21      kp
  22      kq
  23      Correlation coefficient

Computed by GUF:

  Index   Return parameter
  101     Expectation of measurand 1
  102     Standard uncertainty of measurand 1
  103     Effective degrees of freedom of measurand 1
  104     Coverage factor of measurand 1
  105     Expanded uncertainty of measurand 1
  111     Expectation of measurand 2
  112     Standard uncertainty of measurand 2
  113     Effective degrees of freedom of measurand 2
  114     Coverage factor of measurand 2
  115     Expanded uncertainty of measurand 2
  121     Correlation coefficient

Table 4. Return parameters of the UDF.

8. Example

The following is a simplified measurement model for the effective output voltage reflection coefficient (Γ) of a microwave power splitter [5]. The 3-port S-parameters and Γ are complex quantities:

Γ = S22 − (S12 × S23) / S13    (4)

The measurement model is encoded below in Fig. 4. In lines 1 to 4, a user-defined data type Complex, with r as the real part and i as the imaginary part, is declared to represent complex numbers. In lines 20 to 44, the functions cadd(), cminus(), cmul() and cdiv() are defined to manipulate complex numbers. In this way Eq. (4) can be written in a single VB statement, as in line 16. Lines 12 to 15 convert the input quantities from polar form to rectangular form.
1:  Type Complex
2:      r As Double
3:      i As Double
4:  End Type
5:  Public Static Sub model(x() As Double, y() As Double)
6:  '   X(1) : s22 magnitude    X(2) : s22 phase
7:  '   X(3) : s12 magnitude    X(4) : s12 phase
8:  '   X(5) : s23 magnitude    X(6) : s23 phase
9:  '   X(7) : s13 magnitude    X(8) : s13 phase
10:     Dim S22 As Complex, S12 As Complex, S23 As Complex, S13 As Complex
11:     Dim VRC As Complex
12:     S22.r = x(1) * Cos(x(2)): S22.i = x(1) * Sin(x(2))
13:     S12.r = x(3) * Cos(x(4)): S12.i = x(3) * Sin(x(4))
14:     S23.r = x(5) * Cos(x(6)): S23.i = x(5) * Sin(x(6))
15:     S13.r = x(7) * Cos(x(8)): S13.i = x(7) * Sin(x(8))
16:     VRC = cminus(S22, cdiv(cmul(S12, S23), S13))
17:     y(1) = VRC.r: y(2) = VRC.i
18: End Sub
19:
20: ' To add two complex numbers
21: Public Function cadd(a As Complex, b As Complex) As Complex
22:     cadd.r = a.r + b.r
23:     cadd.i = a.i + b.i
24: End Function
25:
26: ' To subtract one complex number from another
27: Public Function cminus(a As Complex, b As Complex) As Complex
28:     cminus.r = a.r - b.r
29:     cminus.i = a.i - b.i
30: End Function
31:
32: ' To multiply two complex numbers
33: Public Function cmul(a As Complex, b As Complex) As Complex
34:     cmul.r = a.r * b.r - a.i * b.i
35:     cmul.i = a.r * b.i + a.i * b.r
36: End Function
37:
38: ' To divide two complex numbers
39: Public Function cdiv(a As Complex, b As Complex) As Complex
40:     Dim D As Double
41:     D = b.r * b.r + b.i * b.i
42:     cdiv.r = (a.r * b.r + a.i * b.i) / D
43:     cdiv.i = (a.i * b.r - a.r * b.i) / D
44: End Function
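As an aside (not from the paper), the same MCM propagation of Eq. (4) can be sketched in Python, where complex arithmetic is native; the sample size, seed, and use of NumPy are assumptions, and the input values are those of Table 5:

```python
import numpy as np

# (estimate of magnitude, u(magnitude), estimate of phase / rad, u(phase))
inputs = {
    "s22": (0.24776, 0.00337, 4.88683, 0.01392),
    "s12": (0.49935, 0.00340, 4.78595, 0.00835),
    "s23": (0.24971, 0.00170, 4.85989, 0.00842),
    "s13": (0.49952, 0.00340, 4.79054, 0.00835),
}

M = 200_000                      # number of MCM trials (assumed)
rng = np.random.default_rng(0)

def draw(name):
    """Draw M trials of one S-parameter, converting polar to rectangular."""
    mag, u_mag, ph, u_ph = inputs[name]
    return rng.normal(mag, u_mag, M) * np.exp(1j * rng.normal(ph, u_ph, M))

s22, s12, s23, s13 = (draw(k) for k in inputs)
vrc = s22 - s12 * s23 / s13      # Eq. (4) in one line of complex arithmetic

# Summarize in rectangular form, as recommended in section 9
y = np.column_stack([vrc.real, vrc.imag])
est = y.mean(axis=0)             # close to the Table 6 estimates 0.0074, 0.0031
unc = y.std(axis=0, ddof=1)      # close to the Table 6 uncertainties 0.0050, 0.0045
```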

Figure 4. Code for measurement model.

The parameters for the input quantities are taken from an actual measurement and are listed in Table 5. The results computed by GUF and MCM are shown in Table 6. The input quantities are stated in polar form, since users of microwave network analyzers normally have separate procedures for estimating the measurement uncertainties of the magnitude and phase of the S-parameter readings [6]. For this example, the input quantities are assumed to be independent. The output quantities, for the reasons stated in section 9, should always be expressed in terms of the real and imaginary parts.

  Input quantity   Type of PDF   Estimate   Uncertainty parameter*
  s22 magnitude    G             0.24776    0.00337
  s22 phase        G             4.88683    0.01392
  s12 magnitude    G             0.49935    0.00340
  s12 phase        G             4.78595    0.00835
  s23 magnitude    G             0.24971    0.00170
  s23 phase        G             4.85989    0.00842
  s13 magnitude    G             0.49952    0.00340
  s13 phase        G             4.79054    0.00835

  * G, T: standard uncertainty; R, U, TR: semi-range; CTP, TP: a

Table 5. Input quantities for microwave power splitter measurement.

  By MCM                                       VRC real part   VRC imaginary part
  Estimate                                      0.0074          0.0031
  Standard uncertainty                          0.0050          0.0045
  Low boundary of 95 % confidence interval     -0.0023         -0.0057
  High boundary of 95 % confidence interval     0.0172          0.0119
  kp = 2.4469, kq = 2.2352, correlation coefficient = 0.0299

  By GUF                                       VRC real part   VRC imaginary part
  Estimate                                      0.0074          0.0031
  Standard uncertainty                          0.0050          0.0045
  Low boundary of 95 % confidence interval     -0.0023         -0.0057
  High boundary of 95 % confidence interval     0.0172          0.0119

Table 6. Results obtained by MCM and GUF.

9. Some Considerations in Encoding Measurement Models for Complex Quantities

There are typically two ways to represent a complex quantity: in terms of its real and imaginary parts, or in polar form by its magnitude and phase. When encoding a measurement model for complex quantities, it is recommended that the output quantities of the model be expressed in terms of real and imaginary parts. The reason is that, in the summarizing stage of MCM, statistical analysis is applied to the MCM trials, and it is known that statistical analysis of complex quantities produces different results depending on whether the inputs are represented in polar form or rectangular form [7]. There are potential problems in performing statistical analysis on the polar form of complex quantities: the transformation between rectangular and polar form is non-linear; the real and imaginary axes in the complex plane extend to plus and minus infinity while the magnitude is always non-negative; and the phase is cyclic in nature. To avoid these problems, the output quantities of the measurement model should be in terms of real and imaginary parts, and results should only be converted into polar form after statistical analysis.

10. Conclusions

Software tools have been developed for the evaluation of measurement models for complex-valued quantities in accordance with Supplement 2 to the GUM. The tools have several desirable features: tight integration with Microsoft Excel, easy encoding of measurement models in the VB programming language, an interactive simulator that allows users to experiment with the parameters of the input quantities and the measurement model, and a user-defined function that can be embedded in any Excel worksheet for GUF or MCM computation.

11. References

[1] JCGM, "Evaluation of Measurement Data – Guide to the Expression of Uncertainty in Measurement," Joint Committee for Guides in Metrology, JCGM 100:2008, September 2008.
[2] JCGM, "Evaluation of Measurement Data – Supplement 1 to the Guide to the Expression of Uncertainty in Measurement – Propagation of Distributions Using a Monte Carlo Method," Joint Committee for Guides in Metrology, JCGM 101:2008, First Edition, 2008.
[3] JCGM, "Evaluation of Measurement Data – Supplement 2 to the Guide to the Expression of Uncertainty in Measurement – Extension to Any Number of Output Quantities," Joint Committee for Guides in Metrology, JCGM 102:2011, October 2011.
[4] R. Willink, "On the Validity of Methods of Uncertainty Evaluation," Metrologia, vol. 47, pp. 80-89, 2010.
[5] N. Ridler and M.J. Salter, "A Generalised Approach to the Propagation of Uncertainty in Complex S-parameter Measurements," Symposium Digest: 64th ARFTG Microwave Measurements Conf., Orlando, Florida, pp. 114, December 2004.
[6] J. de Vreede, "Draft for Comment: Proposed Revision to the EA Guidelines on Evaluation of Vector Network Analysers (VNAs) to Include Uncertainty in Phase," ANAMET Report 038, August 2003.

[7] N.M. Ridler and M.J. Salter, "An Approach to the Treatment of Uncertainty in Complex S-parameter Measurements," Metrologia, vol. 39, no. 3, pp. 295-302, 2002.
Appendix: Example of Measurement Model with Correlated Input Quantities

For the example described in section 9.3.3 of Supplement 2 to the GUM, the input quantities are drawn from a multivariate Gaussian distribution with the following values: x1 = 0.001, x2 = 0, u(x1) = u(x2) = 0.01 and correlation coefficient r(x1, x2) = 0.9. The VB subroutine implementing this measurement model is encoded below in Fig. A1. Lines 15 to 32 are a subroutine which takes two independent variables x1 and x2 with standard Gaussian distribution N(0, 1) as input and produces two outputs, yy(1) and yy(2), drawn from a bivariate Gaussian distribution with the required properties. For details of the algorithm, please refer to section 6.4.8 of Supplement 1 to the GUM.
1:  Public Static Sub model(x() As Double, y() As Double)
2:
3:  ' x(1) and x(2) are set up to have a Gaussian pdf with mean = 0 and u = 1
4:
5:      Dim yy(1 To 2) As Double    ' output of multivariate Gaussian distribution
6:      Call MG2(x(1), x(2), 0.001, 0, 0.01, 0.01, 0.9, yy)
7:      y(1) = Sqr(yy(1) * yy(1) + yy(2) * yy(2))
8:      If yy(1) <> 0 And yy(2) <> 0 Then
9:          y(2) = atan2_c(yy(2), yy(1))
10:     Else
11:         y(2) = 0
12:     End If
13: End Sub
14:
15: Public Sub MG2(x1 As Double, x2 As Double, xx1 As Double, xx2 As Double, uxx1 As Double, uxx2 As Double, r As Double, yy() As Double)
16:     Static initialised As Integer   ' initialised is set to 0 on first call
17:     Static c(1 To 2, 1 To 2) As Double
18:     Static cov(1 To 2, 1 To 2) As Double
19:
20: ' only perform initialisation once to improve speed of execution
21:
22:     If initialised = 0 Then
23:         cov(1, 1) = uxx1 * uxx1
24:         cov(1, 2) = r * uxx1 * uxx2
25:         cov(2, 1) = cov(1, 2)
26:         cov(2, 2) = uxx2 * uxx2
27:         Call cholesky(2, cov(), c())
28:         initialised = 1
29:     End If
30:     yy(1) = xx1 + c(1, 1) * x1
31:     yy(2) = xx2 + c(1, 2) * x1 + c(2, 2) * x2
32: End Sub

Figure A1. Code for measurement model with correlated input quantities.
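The same correlated-input sampling can be sketched in Python using NumPy's built-in Cholesky factorization (illustrative only; the sample size and seed are assumptions):

```python
import numpy as np

# Bivariate Gaussian input with the values from the example above
mean = np.array([0.001, 0.0])
u = np.array([0.01, 0.01])
r = 0.9
cov = np.array([[u[0] ** 2,       r * u[0] * u[1]],
                [r * u[0] * u[1], u[1] ** 2      ]])

L = np.linalg.cholesky(cov)             # lower triangular, cov = L @ L.T
rng = np.random.default_rng(42)
z = rng.standard_normal((100_000, 2))   # independent N(0, 1) draws
xy = mean + z @ L.T                     # correlated trials

# The two measurands, as in the model() subroutine: magnitude and phase
y1 = np.hypot(xy[:, 0], xy[:, 1])
y2 = np.arctan2(xy[:, 1], xy[:, 0])
```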





Precise Blackbody Sources Developed at VNIIOFI for Radiometry and Radiation Thermometry
Sergey A. Ogarev, Boris B. Khlevnoy, Boris E. Lisiansky, Svetlana P. Morozova, Mikhail L. Samoylov, and Victor I. Sapritsky Abstract: This paper reviews a wide range of precise blackbody (BB) sources developed at the All-Russian Research Institute for Optical and Physical Measurements (VNIIOFI). These radiometric, photometric and radiation thermometry standards span the entire UV-visible-IR spectrum and cover the temperature range from 80 K to 3500 K [1, 2, 3]. Low-temperature blackbodies, with temperatures from cryogenic to 450 K, were developed for the calibration of space-borne instruments [4, 5, 6]. We present models of variable-temperature BBs, as well as BBs based on fixed points of Ga, In and binary metal-metal eutectic alloys. The BBs are characterized by high temperature uniformity and stability; for the BB100-V1 model, for example, these parameters amount to 0.05 K to 0.1 K, and 0.1 % over the 1.5 μm to 15 μm wavelength region, under cryo-vacuum conditions of a medium-background environment emulating the orbital working environment. Copper and aluminum alloys are used as the radiation cavity materials for the low-temperature and cryogenic BBs. Recent advances in high-temperature technology and a novel design have made it possible to develop Planckian sources with temperatures as high as 3500 K, high uniformity, and stable radiation characteristics [7, 8, 9]. These large-area blackbodies allow the creation of a new generation of radiometric and radiance temperature standards with low uncertainties. High-temperature, large-aperture blackbodies of the BB3500 series allow the realization of projects requiring high-temperature fixed points based on metal-carbon eutectic and peritectic alloys.
1. Introduction

The blackbody-based approach reflects the preference of the Russian metrological school for the radiation approach in the field of radiometry and radiation thermometry [8]. It represents a practical implementation of the concept of standard integration: an integrated system for reproducing and transferring the units on the basis of a parametric series of VNIIOFI's cavity and plane type BBs. This includes devices of the variable working temperature type, and devices based upon fixed points corresponding to phase transitions of pure materials [3, 9, 10, 11, 12]. Most of the blackbody sources developed at VNIIOFI are cavity type sources. The radiation of a perfect cavity type source, regarded as an absolute BB, is not a function of the cavity shape or of the properties of its wall materials, and according to Planck's law is completely determined by its temperature. As a thermal radiation source, a cavity type BB has advantages over other thermal sources, including higher reproducibility and lower dependence of the effective emissivity on variations and degradation of the optical properties of the cavity's inner surface.

The development of state-of-the-art technologies and optics calls for more accurate blackbody radiometry, and for a practical implementation of the spectral radiance and spectral irradiance scales, which currently have a reproducibility uncertainty of about 0.1 %. One of the fundamental tasks of radiometry is the development of a new generation of BB sources for the implementation of scales with a total uncertainty of less than 0.1 %. The basic characteristics of most BB radiation sources developed at VNIIOFI are given in Table 1 as functions of the required spectral range and radiation flux density, in accordance with their working temperatures: from cryogenic (80 K to 200 K), to low (200 K to 400 K), medium (400 K to 1800 K), and high (1800 K to 3500 K).
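Planck's law, which fixes the cavity radiance once the temperature is known, is straightforward to evaluate numerically. The short sketch below is added here for illustration (it is not from the paper) and uses the 2019 SI exact values of the constants:

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light in vacuum, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def spectral_radiance(wavelength, temperature):
    """Planck spectral radiance, W m^-3 sr^-1 (per metre of wavelength),
    of an ideal blackbody at the given wavelength (m) and temperature (K)."""
    a = 2.0 * H * C ** 2 / wavelength ** 5
    return a / math.expm1(H * C / (wavelength * KB * temperature))
```

Because the radiance depends so steeply on temperature, small temperature non-uniformities of a cavity translate directly into radiance non-uniformities, which is why the sub-0.1 K figures quoted above matter.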
2. Low and middle temperature blackbodies

All-Russian Research Institute for Optical and Physical Measurements (VNIIOFI), 46 Ozernaya Str., 119361 Moscow, Russia

The low-temperature BBs were developed for the calibration of infrared (IR) instruments under cryo-vacuum conditions emulating the orbital working environment. These BBs are utilized in calibration facilities at such space research institutions as the Space Dynamics Laboratory (SDL) in the U.S., the German Aerospace Center (DLR) in Germany, the Keldysh Space Center and RNIIKP/RISDE in Russia, NEC-Toshiba Space Systems in Japan, and the Korea Research Institute of Standards and Science (KRISS) in South Korea.

Type HfC-C cell in BB3500 ZrC-C; TiC-C cell in BB3500MP BB3500MP BB3500-YY* BB3500-M BB3500 BB-PyroG ZrC-C cell in BB3200pg TiC-C cell in BB3200pg (MoC)-C cell in BB3500-M Re-C cell in BB3200pg BB3200-M BB3200pg BB3200c BB3000 BB22p BB39p BB2500 BB14 BB2000/40 BB2000 BB1200 BB1100 BB1000 [vacuum] BB156in VMTBB, mediumvacuum conditions VTBB BB900 [vacuum] BB29gl VTBB (for KRISS) BB100-K1 BB100-V1 [ vacuum ] BB-80/350 [vacuum] VLTBB, mediumvacuum conditions BB300 BB100 [vacuum]

Temperature, K 3458.7 3155 / 3033 2000 to 3500 1800 to 3500 1500 to 3500 1500 to 3500 1300 to (2800 to 3300) 3154.2 (melting) 3033.8 (melting) 2856 (melting) 2748.5 (melting) 1800 to 3300 1800 to 3300 1800 to 3300 2500 to 3000 1600 to 2900 1800 to 2900 2000 to 2500 1600 to 2500 1070 to 2300 800 to 2000 800 to 1200 450 to 1375 400 to 1000 429.85 420 to 700 330 to 380 / 210 to 350 / 320 to 450 400 to 900 302.914

Radiating cavity or heater (cell) Graphite cavity in fixed-point BB on phase transition of HfC-C Graphite cavity in fixed-point BB on phase transition of ZrC-C / TiC-C

Dimensions of cavity (or crucible), length diameter, mm 43 4 124 16 250 57 230 48 (47)* 200 37 200 37 200 (15 to 25)

Opening, mm 3 16 34 47* 24 24 10 3/8 3/8 4 2/8 24 24 20 12 14 30 12 8 40 60 8 60 32 20 20 8 / 30 / 20 100 8 / 20 30 100 100 350 20 60 100

Effective emissivity 0.999 0.9997 0.999 N/A* 0.999 0.999 0.9996 0.0003 0.99994 / 0.99965 0.99994 / 0.99965 0.9997 0.9999 / 0.99965 0.999 0.999 0.999 0.997 0.999 0.998 0.998 0.998 0.995 0.003 0.995 0.9995 2 >0.99 (2.7 to 5.5 m) 0.9998 (1 to 20 m) 0.9996 0.9998 (1 to 20 m) 0.992 0.9998 (1 to 20 m)

Power consumption, kW 28 25.5 24 24 10 14 14 12 12 7 10 15 3 8 0.5 2 0.3

Year developed 2003 2005 2004 2004-2005 2004-2010 1997 2005-2010 2002 2002 2005 2001 2004 1995 1995 1985 1992 1992 1975 1990 2010 1995 1985 1999 2000 2000 2008 1998/2002

Pyrolytic graphite

Graphite cavity in fixed-point BB on phase transition of ZrC-C Graphite cavity in fixed-point BB on phase transition of TiC-C Graphite cavity in fixed-point BB on phase transition of (MoC)-C Graphite cavity in fixed-point BB on phase transition of Re-C Pyrolytic graphite Niobium carbide Graphite Carbon glass Graphite Sodium heat pipe Cr-Ni alloy Graphite V-grooved bottom, Goldcovered Aluminum lateral walls Copper cavity in fixed-point BB on phase transition of In Copper, chemical nickel plating Copper Aluminum alloy Copper cavity in fixed-point BB on phase transition of Ga Aluminum alloy, black paint LORD Aeroglaze Z306; Copper cavity, black paint LORD Aeroglaze Z306 Copper cavity, black paint Nextel Velvet 811-21; V-grooved bottom Copper plane disk, black paint Nextel Velvet 811-21 Copper, Aeroglaze Z306 painted Copper Aluminum alloy

43 4 / 65 10 43 4 / 65 10 54 24 (cell) / 34 4 (cavity) 43 4 / 65 10 200 37 200 37 145 32 350 19 130 22 250 39 450 19 110 14 300 55 190 90 100 10 400 90 80 80 172 34 26 mm 240 mm, cone bottom 250 40 190 160 172 34 Conical bottom V-grooved bottom 200 120 Plane type BB, 350 mm 40 mm 250.6 mm, cone bottom 500 140 190 160

1 0.1 thermostabilization system 3 0,8 with thermostabilization system;

1997 1995

231 to 363


0.996 0.001 0.997 (1.5 to 15m) 0.96 0.02 (2.5 to 15 m) 0.9996 (10 to 25 m) 0.9997 (1 to 20 m) 0.999

240 to 350 220 to 350 100 to 450 80 to 300 100 to 400

2005 2006 2008

0.1 1

1992 1997

* Instrument is intended for operation as a furnace for inserted BB fixed-point cells.

Table 1. Characteristics of main BB radiation sources developed at VNIIOFI.


Figure 1. An external view of the VTBB equipment.

  Environment operation conditions               Vacuum (1.3 × 10⁻² Pa)
  Working temperature range                      −60 °C to 90 °C
  Cavity inner diameter                          40 mm
  Opening diameter                               30 mm
  Temperature gradient along cavity at −10 °C    0.1 °C
  Temperature stability at −10 °C                0.02 °C
  Temperature control                            by means of external thermostat-circulator
  Temperature sensors                            5 PRTs (100 Ω)

Table 2. Specifications of the VTBB.

Figure 2. Schematic drawing of the VTBB radiator unit with temperature control and measurement system.

Copper and aluminum alloys are used as the radiation cavity materials for the low-temperature and cryogenic BBs. The required emissivity is achieved by using different black coatings [13]: Chemglaze Z-302 for BB29gl, Nextel Velvet 811-21 for BB100, BB100-V1 and BB-80/350, and LORD Aeroglaze Z306 black paint for the latest VTBB, BB100K1 and VLTBB models. Low-temperature and cryogenic BBs are based on the principles of indirect multi-zone electric heating (a high-resistance winding arranged around particular zones of the cavity) with heat isolation from a liquid nitrogen cooling loop, or on the use of an external liquid thermostat with a circulating heat-transfer agent.

2.1 Vacuum Temperature Blackbody model (VTBB)

The VTBB was designed for operation at the Middle Background Calibration Facility (KRISS, South Korea) to serve as a highly stable reference source with a 30 mm aperture diameter in the temperature range from −60 °C to 90 °C under medium-vacuum conditions (up to 1 × 10⁻⁴ mbar). For operation under vacuum, the VTBB has a hermetic housing and a flange for mounting to the vacuum chamber. The VTBB consists of the following main units (see Fig. 1):

  Radiator (1);
  Unistat 705 liquid thermostat (2) with connecting hoses;
  Keithley 2000 digital multimeter (3) with multiplexer;
  VRL lamellar-rotary vacuum pump (4), model 100-3.5;
  VD83M vacuum controller (5) with a power supply unit;
  Computer system (6).

The Unistat 705 liquid thermostat is used to stabilize the VTBB temperature, the digital multimeter is used to measure the temperature of the VTBB radiator, the lamellar-rotary vacuum pump thermally isolates the heat exchanger of the blackbody from its housing and environment, and the vacuum controller measures the vacuum level around the heat exchanger of the VTBB. An external view of the VTBB equipment is presented in Fig. 1, and specifications are presented in Table 2. A numerical investigation of the effective emissivity of the VTBB was performed using the STEEP3 modeling software, which is based on a Monte-Carlo algorithm [9]. The normal effective emissivity of the VTBB radiator cavity was calculated for a temperature range from 213.15 K to 363.15 K, at wavelengths from 5 μm to 30 μm. The internal surface of the cavity is covered by paint (LORD Aeroglaze Z-306) with an emissivity of 0.94 and a diffusivity of 0.15 [13]. The calculations were performed for cases where the temperature non-uniformity along the radiator cavity lies within limits of 50 mK to 100 mK. If the non-uniformity along the cavity lies within 50 mK, the normal effective emissivity of the cavity at a temperature of 213.15 K in the spectral range from 5 μm to 30 μm is 0.9997 to 0.9999. If the non-uniformity lies within 100 mK, the normal effective emissivity at a temperature of 363.15 K in the same spectral range is more than 0.9998. The body of the VTBB is made of an aluminum alloy. The radiating cylindrical cavity with a conical bottom is formed in a copper cylindrical block. The internal diameter of the VTBB radiating cavity is 40 mm, the length of the cavity with its conical bottom is 250 mm, and the apex angle of the cavity bottom is 75 degrees.
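For orientation, a much cruder estimate than STEEP3's ray tracing is the standard first-order formula for an isothermal, diffusely reflecting cavity, eps_eff = eps / (eps + (1 − eps) s/S), with s the aperture area and S the internal cavity area. The sketch below applies it to the VTBB-like geometry quoted above (cone bottom and front annulus neglected); it is an assumption-laden illustration, not the paper's method, and it underestimates the full STEEP3 result:

```python
import math

def effective_emissivity(eps_wall, s_aperture, s_cavity):
    """First-order diffuse-cavity estimate of effective emissivity."""
    return eps_wall / (eps_wall + (1.0 - eps_wall) * s_aperture / s_cavity)

# VTBB-like geometry: 40 mm diameter, 250 mm long cylinder, 30 mm opening
d, length, d_open = 0.040, 0.250, 0.030
s = math.pi * (d_open / 2) ** 2                     # aperture area
S = math.pi * d * length + math.pi * (d / 2) ** 2   # lateral wall + flat bottom
eps_eff = effective_emissivity(0.94, s, S)          # roughly 0.9986
```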
The solid angle of radiation from the variable-temperature blackbody cavity is limited by the optical baffle within the aperture diaphragm, which has a diameter of 30 mm. The heat exchanger, made in the form of a copper tube, is fixed around the copper block containing the radiating cavity. The heat exchanger tubes are brought outside through the back flange of the body and have nipples on their ends for joining to the hoses of the liquid thermostat. Five platinum resistance thermometers (PRTs) are installed in the walls of the copper block with the radiating cavity. Wires from the PRTs are brought outside through a tight electric socket on the rear flange of the blackbody body. The copper block with the radiating cavity is fixed to the front and rear flanges of the body with the help of a flexible baffle made from stainless steel. The heat exchanger is insulated from the body of the blackbody by two cylindrical radiation screens made from 0.5 mm thick stainless steel. Pumping out the internal volume of the blackbody thermally insulates the copper block, with its heat exchanger and radiating cavity, from the environment. The pump is connected to the vacuum tube

Figure 3. Cross-section and 3-D view of BB100-V1 blackbody.

  Parameter                                      Tested value
  Operating temperature range                    240 K (−33 °C) to 350 K (77 °C)
  Spectral range                                 1.5 μm to 15 μm
  Cavity effective emissivity                    0.997 ± 0.001 (estimated from computer modeling by STEEP3 software)
  Opening (non-precision aperture)               100 mm
  System Field-of-View (FOV)                     12 mrad (0.688°)
  Environment operation conditions               Vacuum chamber (10⁻⁶ Torr, below 100 K). Tests were carried out in 2 modes: (1) warm vacuumized chamber (P = 10⁻² Torr); and (2) cold vacuumized chamber (P = 10⁻⁶ Torr, T = −196 °C). Air environment (clean room at 23 ± 3 °C)
  Temperature non-uniformity across opening      0.04 K, measured by internal temperature sensors of cavity bottom
  Temperature set point resolution               0.01 K
  Maximum temperature instability under
  thermostabilization                            0.02 K, measured by internal temperature sensors of cavity bottom during a 1-hr measurement
  Limitation on the blackbody warming-up
  time (approx.)                                 40 to 80 min
  Total wattage (approx.)                        1600 W (switchable to 3500 W with use of thermostat LAUDA Proline PR1845 LCK 1891)
  Input voltage                                  200 V AC with additional 200 to 220 V transformer
  Blackbody temperature set up and control       External controller of LAUDA Proline PR1845 LCK 1891 thermostat with RS-232 interface to PC and one PRT-100 sensor incorporated in cavity bottom of BB100-V1
  Temperature sensors for control system         Pt RTD by MINCO Products, Inc., USA, 5 pieces: sensors Mod. S278PD06 (4 pcs.) and Mod. S1059PA5X6 (1 pc.)
  Calibration traceability of Pt RTD to NIST     Assumed for one Pt RTD only
  Operating environment pressure                 10⁻⁶ Torr
  Orientation of the blackbody                   Facing down (30° leaned)
  Cable (tubing) length                          5 m inside and 5 m outside the vacuum chamber

Table 3. Technical features of BB100-V1 blackbody model.




Figures 4(a), left, and 4(b), right. BB100-V1 aperture scan at T = +30 °C (left) and −30 °C (right). The spatial distribution of radiation temperature across the aperture of BB100-V1 was determined by means of an AGA-780 thermal imager placed near the IR window outside the cryo-vacuum chamber.

located on the rear flange of the blackbody body by means of a flexible metal sleeve. To cover the −60 °C to 90 °C temperature range, the DW-Therm M90.200.02 fluid was utilized for both the VTBB and BB100K1 blackbodies within a single joint closed loop of the hydraulic system. The temperature gradient along the VTBB cavity under vacuum operating conditions does not exceed 40 mK at working temperatures from −60 °C to 90 °C.
2.2 BB100-V1

The large-area, low-temperature blackbody model BB100-V1 is intended for use as a reference temperature radiation source for the calibration of space-borne radiometers and thermometers within the 240 K to 350 K (−30 °C to +80 °C) temperature range in the IR wavelength range from 1.5 μm to 15 μm. It was developed for NEC-Toshiba Space Systems and JAXA (Japan) and the Keldysh Space Center (Russia) in 2005 [7]. The basic features of BB100-V1, based on tests conducted in cryo-vacuum chambers (both in Russia and Japan), are presented in Table 3. A cross-section of the blackbody is shown in Fig. 3. In order to meet the requirements for accuracy and long-term stability, the design of the cavity-type BB100-V1 was based on the prototypes of high-precision BBs developed within the last decade at VNIIOFI [8, 14]. Among them are the variable-temperature and fixed-point models BB100, BB300, BB900, BB1000, VTBB, BB29Ga, and BB156In, with a 100 K to 1000 K working temperature range, designed for the calibration of space-borne IR sensors and for high-precision radiometry at such research organizations as DLR and the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the National Physical Laboratory (NPL) in the United Kingdom, SDL in the United States, and the National Institute of Metrology (NIM) and the Xian Institute of Applied Optics (IAO) in China.

The BB100-V1 blackbody utilizes a copper cylindrical cavity, painted inside with Nextel Velvet 811-21 black coating, which is heated or cooled by thermal transfer from the liquid heat-transfer fluid KRYO-51 (polydimethylphenylsiloxane) from LAUDA Dr. R. Wobser GmbH & Co. (Germany). The fluid circulates along the copper coil tubing (heat exchanger) around the copper cavity of the BB. The radiating cavity has a cylindrical shape of 120 mm diameter and 2260 cm³ volume. The coating of the BB100-V1 bottom must possess an emissivity better than 0.9 in the spectral range of interest from 1.5 μm to 15 μm. The Nextel Velvet 811-21 coating was chosen to meet these requirements after a comparative analysis of the optical properties of black paints and various coatings with respect to stray-light suppression, solar-energy absorption, and radiation-loss control. The screen-vacuum insulation around the radiating cavity is made of multilayered polyethylene terephthalate film. Temperature stabilization of the BB100-V1 cavity is performed by pumping the fluid along tubing connected to the external LAUDA Proline PR1845 thermostat. A digital controller, part of the thermostat, is connected to a PRT-100 temperature sensor (a 100 Ω platinum resistance thermometer) incorporated in the copper body of the BB100-V1 cavity. Resistance


Figure 5. External view and cross-section of BB100K1 (the left picture shows BB100K1 equipped with a special hood for operation under open air conditions).

is measured with an HP-34970A multi-channel voltmeter (from Agilent Technologies, USA) to determine the working mode of the thermostat controller for heating or cooling the BB100-V1 cavity. To monitor the temperature uniformity of the blackbody cavity, four additional PRT-100 temperature sensors are incorporated in the cavity; a fifth sensor is enabled in the feedback loop of the temperature stabilization system. The BB100-V1 external housing is made of stainless steel. The radiating cavity of BB100-V1 with its external housing and screen-vacuum insulation is shown in Fig. 1b. The blackbody consists of three main parts: a central cylindrical part (the radiating cavity), and rear (left side of Fig. 1) and front flanges. The outer diameter of the blackbody housing is 214 mm, and the inner diameter of the radiating cavity is 120 mm. The length of the radiating cavity is 200 mm. The overall length of the BB100-V1, including the nozzles (nuts) of the copper heat exchanger tubing, is 500 mm. The BB100-V1 is installed in its working position either horizontally or vertically inside the cryo-vacuum chamber, with the possibility of mounting via the front flange to customer-owned equipment. The reference PRT-100 temperature sensor has a special removable holder to allow periodic recalibration; it is placed in the center of the external (back) side of the V-grooved bottom of the radiating cavity. The temperature control of BB100-V1 is based on the liquid circulation-type thermostat PR1845 with the heat-transfer liquid KRYO-51, which has a −45 °C to 200 °C working temperature range. In order to satisfy the customer's requirement of a 20 m overall tubing length (metal sylphon tubing for placement inside the vacuum chamber and Viton polymer pipes outside it) connecting BB100-V1 with the LAUDA thermostat, the latter is equipped with two pumps (suction and forcing).
Specially-designed software to monitor the temperature of the radiating cavity was developed with LabView 5.0 (from National Instruments, USA). The signals from PRT-100 precision MINCO resistors are connected to a 20-channel scanner card (HP 34901A) and measured with the digital voltmeter. A visual representation of sample scans across BB100-V1 aperture, obtained in a cryo-vacuum chamber at Keldysh Space Center cooled with liquid nitrogen, is shown in Fig. 4(a) and 4(b).

2.3 BB100K1

Blackbody model BB100K1 was created for operation in a joint complex together with the VTBB source at KRISS, and is based upon the prototype blackbody source BB100V1. The BB100K1 features a wide aperture of 100 mm diameter, which is convenient for routine calibration of laboratory equipment. It is intended for use as a reference temperature radiation source for the calibration of radiometers and thermometers within a 233 K to 363 K (-40 °C to +90 °C) temperature range, and for an IR wavelength range from 1.5 µm to 15 µm (Table 4). It can be utilized under the following conditions: (1) under cryo-vacuum, with the radiator placed fully inside the vacuum chamber; (2) inside a chamber purged with N2, inert gas or dry air, with the radiator placed fully inside the working volume; (3) in an open-air environment in the temperature range from -20 °C to 90 °C, with certain limitations at temperatures below the dew point. The temperature distribution along the radiating cavity and across the BB bottom is monitored by five precision PRTs and a digital multimeter equipped with a scanner card. Two sensors are placed along the cavity, and three are placed on the bottom of the cavity. The central bottom sensor is easily removable for recalibration. As for the VTBB source, the liquid thermostat UNISTAT 705 is used for stabilization of the BB100K1 temperature at the level of 0.01 °C. The BB100K1 demonstrated stable operation inside a vacuum chamber in the temperature range from -60 °C to 90 °C, and in a dry-air or inert gas environment in the temperature range from -40 °C to 90 °C, when used with an extra hood with an aperture. The effective emissivity of the BB100K1 radiating cavity, also covered with LORD Aeroglaze Z306 black paint, was calculated as 0.996 ± 0.001 with the STEEP3 software. Measurements, including temperature uniformity tests and thermal imaging of the VTBB and BB100K1 radiation sources, were performed




Environment operation conditions: vacuum (1.3 × 10^-2 Pa), inert or N2 gas purge, or open air (with limitations below dew point)
Working temperature range: -60 °C to 90 °C in vacuum; -40 °C to 90 °C in inert gas or dried air; -20 °C to 90 °C in open air
Cavity inner diameter: 120 mm
Opening diameter: 100 mm
Temperature gradient along cavity: 0.1 °C
Temperature instability: 0.05 °C
Temperature control: by means of external thermostat-circulator
Incorporated temperature sensors: five PRTs (100 Ω)

Table 4. Specifications of the BB100K1.



Figure 6. Temperature uniformity of the BB100K1 at -9.7 °C (a), and of the VTBB cavity at 0.0 °C (b).

at KRISS during September 2009. The temperature gradient across the aperture does not exceed 50 mK in a dry-air environment over the temperature range from -40 °C to 90 °C (Fig. 6).
2.4 VLTBB and VMTBB sources

The VLTBB was constructed as a highly stable reference radiation source for the temperature range from -173 °C up to 177 °C under medium-vacuum conditions (10^-3 Pa) and a medium-background environment (liquid-nitrogen-cooled cryo-shroud) for the low-background calibration facility at PTB. The VLTBB employs a deep radiation cavity with a cooled 20-mm aperture. The VLTBB cylindrical cavity (40 mm diameter × 250.6 mm length) with a cone-shaped bottom is made

of oxygen-free copper (see Fig. 7). The inner walls of the cavity are covered with black Aeroglaze Z306 paint. The temperature control of the three-zone VLTBB is provided by coarse and precision controllers. Coarse stabilization of the VLTBB temperature is performed by cooling with liquid nitrogen and heating with electrical heaters. The precision radiator-cavity temperature stabilization system provides a stability and a temperature gradient along the cavity within 20 mK across the operating temperature range. The effective emissivity at 150 K, in the spectral range from 10 µm to 25 µm, is 0.9996. The vacuum variable medium temperature blackbody (VMTBB), featuring a 20 mm aperture, was constructed for a working

temperature range of 150 °C to 430 °C for the calibration facility at PTB, in the field of reduced-background radiation thermometry under vacuum. An external view and cross-section of the VMTBB are presented in Fig. 8. The operating conditions are the same as those of the VLTBB. The cavity (similar in shape to the VLTBB; 26 mm diameter × 240 mm length) is made of oxygen-free copper. As a result of investigations at PTB, a special procedure for coating the surface of the cavity with high-emissivity paint has been developed: the cavity surface is coated by chemical nickel plating prior to covering it with the high-emissivity paint. To achieve the best temperature uniformity along the cavity, a three-module design is used, based on two precision proportional-integral-derivative (PID) controllers. The


Figure 7. Schematic drawing and cross-section of VLTBB.

Figure 8. External view and cross-section of VMTBB.

temperature of the cavity is monitored with 15 precision PRTs. The effective emissivity of the VMTBB is 0.9994. The results of VMTBB tests carried out at PTB demonstrated 20 mK radiating-temperature stability and 100 mK uniformity across the whole temperature range. Measurement and construction details are presented in [4, 5].
3. High Temperature Blackbodies

The high-temperature blackbody (HTBB) models such as the BB22P, BB39, BB3200pg, 3500MP and 3500YY are built on the basis of direct resistance heating of the radiation cavity, which features a graphite tube or a set of linked rings made from anisotropic pyrolytic graphite (PG), heated by the flow of direct current. The best-known cavity-type HTBBs with PG cavities are the BB3200/3500 series, owing to their ultra-high working temperature, high emissivity and stability, not only at VNIIOFI but also at metrological institutes worldwide. These devices are used as spectral radiance (SR) and spectral irradiance (SI) sources for radiometry and for the realization of the high-temperature scale. HTBBs make it possible to work in an inert gas atmosphere without an outlet window. Three improved HTBBs for a temperature range of 800 °C to 3400 °C with a wide variety of cavity sizes (from 15 mm to 59 mm inner diameter) have been developed and investigated. The materials of the cylindrical cavities used are carbon-based graphite of the MPG6 and MPG7 types, PG of the UPV-1 and UPV-1T types, and their combination. Pyrolytic graphite has a significantly lower sublimation rate than normal graphite, which makes it a preferable material, in terms of lifetime, for radiators with temperatures above 2500 °C.

3.1 BB2000/40

The graphite BB2000/40 blackbody was developed for the calibration of radiation pyrometers at SCEI (Singapore). Operating within the temperature range from 800 °C to 2000 °C, it features a large cavity opening of 40 mm. The cross-section of the BB2000/40 is presented in Fig. 9, and specifications are shown in Table 5. The BB2000/40 is built using the BB3500YY furnace housing [9] as a basis, but it has a normal graphite tube radiator instead of a heater consisting of PG rings. The radiator consists of a few tube pieces with an outer diameter of 55 mm that are separated from each other by the cavity bottom and the graphite baffles in the rear part, the latter reducing radiation heat losses. The radiator is supported at both ends by PG rings, which


Figure 9. External view and cross-section of BB2000/40.

Temperature range: 900 °C to 2000 °C
Type of radiator: graphite tube
Dimensions of radiator (mm): length 350; outer diameter 55; inner diameter 50
Dimensions of radiating cavity (mm): depth 250; diameter 50; opening diameter 40
Emissivity: 0.998 ± 0.0015
Temperature control: optical feedback system
Stability: within 0.1 °C
Cooling agent: water
Purging gas: argon
Furnace housing dimensions (mm): diameter 260; length 850
Furnace power supply: 380 V, 3 phases
Max. output current: 800 A DC
Max. output voltage: 30 V DC

Table 5. Specifications of the BB2000/40.

serve as additional heaters and improve the temperature uniformity and stability (Figs. 10, 11, 12). In order to obtain good isothermal behavior of a BB radiating cavity, its temporal stability should first be measured. The measurements performed revealed that the long-term temperature stability (over approximately 1 hour) of the BB2000/40 at different set-point temperatures does not exceed 0.1 °C. The short-term temperature stability within a 10 min period does not exceed 20 mK (Fig. 10). The temperature distribution across the cavity bottom (bottom diameter 50 mm) was measured using an imaging radiation thermometer (VNIIOFI-made) with a spot image size of 1.5 mm. The distribution was less than 1 °C across the whole temperature range from

900 °C to 2000 °C. Measurement results are presented in Fig. 11. The uniformity along the cavity axis was measured using a B-type platinum-rhodium thermocouple at temperatures of 900 °C and 1500 °C. Figure 12 presents the longitudinal temperature distribution measured within a 50 mm to 310 mm zone along the BB cavity at different set-point temperatures of the BB2000/40. The temperature curves were obtained without any cavity bottom installed; hence, a version of the tube heater intended for thermocouple calibration was utilized. The chosen cavity design, with two PG rings placed at the ends, leads to a significant improvement of the temperature uniformity. A longitudinal zone with isothermal behavior within 10 °C over a 180 mm to 200 mm length along the cavity was obtained.


Figures 10(a), left, and 10(b), right. Measured 1-hour temporal stability of the BB temperature at 1430 °C and 1877 °C.

The experimental data on the spatial temperature non-uniformity of the BB2000/40 cavity (across the bottom and along the cavity), presented in Figs. 11 and 12, were taken into account during computation of the effective emissivity of the cavity using the STEEP3 software [15]. The average value of the emissivity was 0.998 ± 0.0015. Modifications of the PG furnace, in which the PG heater rings are replaced partly or totally by graphite elements, extend the lifetime of the heater, reduce the cost for some applications and sometimes improve the temperature uniformity. The BB2000/40 source can be used as a Planckian radiator or as a furnace for fixed-point applications. Another advantage of this BB is the opportunity to modify it by replacing the graphite radiator with a set of PG rings in order to reach temperatures as high as 3200 °C. These latest developments offer more opportunities for potential users of the BB-type blackbodies.
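Since these sources are used as Planckian radiators, the spectral radiance at a given set point follows Planck's law scaled by the effective cavity emissivity. The short sketch below illustrates that relation (Python; the wavelength and temperature in the example are illustrative values, not measured data, and the 0.998 emissivity is taken from Table 5):

```python
import math

# CODATA physical constants
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temp_k, emissivity=1.0):
    """Planck spectral radiance of a gray-body cavity, in W / (m^2 * sr * m).

    `emissivity` is the effective cavity emissivity, e.g. 0.998 for BB2000/40.
    """
    x = H * C / (wavelength_m * KB * temp_k)
    return emissivity * (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

# Illustrative example: radiance at 650 nm for a cavity near 2000 C (2273.15 K)
L_bb = spectral_radiance(650e-9, 2273.15, emissivity=0.998)
```

Using `math.expm1` rather than `exp(x) - 1` keeps the expression accurate at long wavelengths and low temperatures, where x is small.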
3.2 BB-PyroG modications




Figure 11 (a, b, c). Measured temperature distribution across the cavity bottom of the BB at 890 °C, 1500 °C and 2036 °C.

The BB-PyroG is the smallest blackbody featuring a cavity made from PG rings; it comes in two modifications: BB-PyroG/2500 and BB-PyroG/3000. These BB sources feature a cavity with a 15 mm to 25 mm inner diameter, a 10-mm opening, and a low-consumption power supply. With a maximal working temperature of 3200 °C, this low-cost BB is intended for both SR and SI measurements. The BB-PyroG emissivity was estimated as 0.9996 ± 0.0003 in the spectral range from 300 nm to 2000 nm. An external view and cross-section of the BB-PyroG are shown

in Fig. 13. The housing of the BB-PyroG is reduced in outer diameter and length to 200 mm and 300 mm, respectively (flanges not taken into account), in comparison with 260 mm and 400 mm for the standard BB3500M. The BB-PyroG has a radiator of approximately 200 mm length consisting of PG rings with outer and inner diameters of 25 mm and 15 mm. The radiator can be replaced by a larger one with outer and inner diameters of 35 mm and 25 mm, respectively. The depth of the radiating cavity is 115 mm and the opening diameter is 10 mm.

Annealing PG at temperatures higher than 2000 °C changes its properties irreversibly, because the structure of the material is changed; a higher annealing temperature causes more significant changes. In particular, the resistivity is substantially reduced with increased annealing temperature. It was therefore reasonable to make two versions of the BB-PyroG with different maximum working temperatures: 2500 °C and 3000 °C (Fig. 14 shows the dependence of the electrical current on the blackbody temperature for both versions). Another advantage of the BB-PyroG over larger blackbodies is the reduced operational time. For example, it takes about 1.5 hours to heat the BB3500M up to operational temperature and another 1.5 hours to cool it down; the massive BB3500MP takes 2.5 hours, but the corresponding time for the BB-PyroG is only about 30 minutes. This gives the user more flexibility in planning and saves time for real measurements.

Figure 12 (a, b). Measured longitudinal temperature distribution of the BB radiating cavity at 900 °C and 1500 °C.

Figure 13. External view and cross-section of BB-PyroG.

3.3 BB3500M/BB3500MP modifications

Another BB series was developed by modifying the BB3500M/BB3500MP blackbodies to use a combined graphite-PG radiator, a design with a larger number of potential users. This flexible design allows the BB3500 to be used as a blackbody with a uniform-heating furnace for handling insertions in the form of high-temperature fixed-point (HTFP) cells.

The large demand for high-temperature wide-aperture (24 mm to 45 mm) BB3200/3500 blackbodies based on pyrolytic graphite is due to their unique temperature uniformity across the radiation field, the time stability of their emissivity, and the convenience of using these sources as heaters for BB graphite crucibles containing metal-carbon eutectics. One of the main reasons for upgrading the high-temperature BBs to operate both at variable and at fixed-point temperatures was to attain wider flexibility of use, in particular with filter radiometers in both spectral irradiance (SI) and spectral radiance (SR) modes, i.e., with alternative methods of BB temperature determination. It became necessary to create new wide-aperture (up to 16 mm) crucibles with eutectics, used as inserts into the PG cavity of the parent BB3500, which resulted in the creation of wider-aperture versions with cavity cross-sections up to 57 mm. The development of the new BB3500MP in a package with a wide-aperture eutectic-based BB makes it possible to implement both independent methods of BB temperature determination and to considerably reduce the uncertainty of temperature-scale determination at high temperatures. The improved BB3500MP (Fig. 15) currently operates at PTB as part of an SI standard facility; the original version was first presented in [1]. The BB3500MP is mounted in a water-cooled housing consisting of a central cylindrical part (1) and rear (2) and front (3) flanges. The outer diameter of the housing is 370 mm; the inner diameter is 300 mm. The heater (4) is a cylinder assembled from separate pyrolytic graphite rings with outer and inner diameters of 73 mm and 59 mm, respectively, mounted inside the chamber along its axis. The furnace working zone (5) is formed by the external walls of the cylindrical heater and inner baffles. At both ends the heater leans on graphite electrodes (6) and (7), which are rigidly secured to the front and rear flanges. The rear electrode is fitted with a movable rod (8), supported via a ceramic rod (9), with a spring device (10) required to compensate for the thermal expansion of the heater. The spring device is fastened to the rear flange on the outer side of the chamber. The heater is surrounded by a thermal shield (11), a graphite coil with wound insulation fabric. The thermal insulation of the outer layer (270 mm × 540 mm) is made from stainless steel. In addition to the detailed investigation of the BB3500MP carried out at PTB [10], the results of its emissivity and, for the first time, of the uniformity investigations along the cavity axis are presented in Fig. 16. Measurements were carried out by means of a W-Re thermocouple at 2175 K, 2365 K and 2556 K. For example, the temperature varied within 100 °C along a 200 mm zone from the cavity bottom at a temperature of 2556 K. The safety and reliability of the latest versions of the BB3500MP series have been improved with the new elbow design of their water-cooled copper flanges. All the new HTBBs mentioned are equipped with an optical feedback temperature control system mounted on the rear flange. The system is based on a lens, an optical fiber and a filtered Si photodiode assembled with an amplifier in the same case, and allows temperature stabilization within 0.1 °C. This rear-view feedback allows the entire BB aperture to be used for measurements.
3.4 Fixed-point BBs on phase transitions of metal-carbon eutectic alloys

Figure 14. Dependence of the required DC current on working temperature for the two BB-PyroG blackbody options: BB-PyroG/2500 with a maximum temperature of 2500 °C (lower line) and BB-PyroG/3000 with a maximum temperature of 3000 °C (upper line).

Beginning in 2000, VNIIOFI has developed sample BBs based on the high-temperature phase transitions of the metal-carbon eutectics Re-C (2748.5 K) and Ir-C (2560.5 K) and the metal-carbide-carbon eutectics TiC-C (3033.8 K), ZrC-C (3154.2 K) and HfC-C (3458.7 K) (see Table 1), made in the form of graphite crucibles inserted between the pyrolytic graphite rings of the BB3500 (or BB3500-MP) radiation cavity in its isothermal zone [11, 12]. A new generation of high-temperature (2748.5 K to 3458.7 K) BBs on phase transitions of Re-C, (MoC)-C, TiC-C, ZrC-C and HfC-C eutectics has been developed. Their unique repeatability, at the level of 0.1 % to 0.01 %, makes it possible to extend the presently available ITS-90 temperature scale up to 3500 K. Investigations performed at VNIIOFI on the metals forming eutectics with carbon (graphite), with melting-solidification phase-transition temperatures above 2500 °C, showed a repeatability of their melting temperatures better than 100 mK. Detailed results of the studies are given in [1, 2, 11, 12, 16, 17].
4. Summary

Figure 15. Internal organization of the electric furnace BB3500MP (section drawing, side view).

During the last 30 years, VNIIOFI has developed a wide range of precision BB radiation sources that cover the entire optical range, from the IR to the near UV inclusive. The latest achievements in high-temperature and cryogenic technology, together with the application of state-of-the-art materials, made it possible to create at VNIIOFI Planckian sources with working temperatures from 80 K to 3500 K, featuring high stability and uniformity of radiation. Recently VNIIOFI has developed several variable-temperature cavity-type blackbodies: low-temperature sources (in the temperature range from -173 °C to 430 °C) for vacuum-background facilities for the pre-flight calibration of spaceborne radiometric instruments at PTB (Germany), KRISS (Korea) and the Keldysh Space Center (Russia), and high-temperature blackbodies for use in radiation thermometry: five PG sources and one graphite source with a wide variety of cavity sizes. The main characteristics of these blackbodies are presented in Table 1. Experimental tests of the low-temperature BB sources showed that their temperature uniformity does not exceed 50 mK over the whole temperature range. A new generation of high-temperature BB sources (from 800 °C to 3500 °C) can be used as precise Planckian radiators or as furnaces for fixed-point applications. The latest developments have demonstrated that the earlier-developed blackbodies based on pyrolytic graphite can easily be modified to incorporate graphite elements in the heater.

Figure 16. Results of experimental investigations of the temperature uniformity along the radiating cavity axis of the BB3500MP blackbody. Measurements were carried out by means of a W-Re thermocouple at 2175 K, 2365 K and 2556 K.

5. Acknowledgements

The work was partially carried out with the financial support of the Ministry of Education and Science of the Russian Federation.



6. References

[1] V. Sapritsky, S. Ogarev, B. Khlevnoy, M. Samoylov, and V. Khromchenko, "Blackbody sources for the range 100 K to 3500 K for precision measurements in radiometry and thermometry," in Temperature: Its Measurement and Control in Science and Industry, Vol. VII (Eighth Temperature Symposium), AIP Conference Proceedings 684, Chicago, Illinois, pp. 619-624, October 2003.
[2] V. Sapritsky, S. Ogarev, B. Khlevnoy, M. Samoylov, V. Khromchenko, and S. Morozova, "Dissemination of Ultra-Precise Measurements in Radiometry and Remote Sensing within 100-3500 K Temperature Range on the Base of Blackbody Sources Developed in VNIIOFI," Proc. of SPIE, Seattle, Washington, vol. 4818, pp. 127-136, July 2002.
[3] J. Hartmann, J. Hollandt, B. Khlevnoy, S. Morozova, S. Ogarev, and F. Sakuma, "Blackbody and Other Calibration Sources," Chapter 6 in Radiometric Temperature Measurements, I. Fundamentals, Experimental Methods in the Physical Sciences, vol. 42, pp. 241-295, Elsevier, USA, 2009.
[4] S. Morozova, N. Parfentiev, B. Lisiansky, V. Sapritsky, N. Dovgilov, U. Melenevsky, B. Gutschwager, C. Monte, and J. Hollandt, "Vacuum Variable Temperature Blackbody VLTBB100," Int. J. Thermophys., vol. 29, no. 1, pp. 341-351, February 2008.
[5] S. Morozova, N. Parfentiev, B. Lisiansky, U. Melenevsky, B. Gutschwager, C. Monte, and J. Hollandt, "Vacuum Variable Medium Temperature Blackbody," Int. J. Thermophys., vol. 31, no. 8-9, pp. 1809-1820, September 2010.
[6] S. Morozova, B. Lisyanskiy, A. Stakharny, M. Samoilov, S. Ogarev, Y.-S. Yoo, C.-W. Park, and S.-N. Park, "Low temperature blackbodies for temperature range from -60 °C up to 90 °C," Proc. of the Joint International Symposium on Temperature, Humidity, Moisture and Thermal Measurements in Industry and Science (TEMPMEKO & ISHM), Portorož, Slovenia, May 2010.
[7] V. Sapritsky, "National Primary Radiometric Standards of the USSR," Metrologia, vol. 27, no. 1, pp. 53-60, 1990.
[8] V. Sapritsky, "Black-body radiometry," Metrologia, vol. 32, no. 6, pp. 411-417, 1995.
[9] B. Khlevnoy, M. Sakharov, S. Ogarev, V. Sapritsky, Y. Yamada, and K. Anhalt, "Investigation of Furnace Uniformity and Its Effect on High-Temperature Fixed-Point Performance," Int. J. Thermophys., vol. 29, no. 1, pp. 271-284, February 2008.
[10] P. Sperfeld, S. Pape, B. Khlevnoy, and A. Burdakin, "Performance limitations of carbon-cavity blackbodies due to absorption bands at the highest temperatures," Metrologia, vol. 46, pp. S170-S173, 2009.
[11] V. Sapritsky, B. Khlevnoy, V. Khromchenko, S. Ogarev, M. Samoylov, and Y. Pikalev, "High Temperature Fixed-Point Blackbodies Based on Metal-Carbon Eutectics for Precision Measurements in Radiometry, Photometry and Radiation Thermometry," in Temperature: Its Measurement and Control in Science and Industry, Vol. VII (Eighth Temperature Symposium), AIP Conference Proceedings 684, Chicago, Illinois, pp. 273-278, October 2003.
[12] V. Sapritsky, S. Ogarev, B. Khlevnoy, M. Samoylov, and V. Khromchenko, "Development of metal-carbon high-temperature fixed-point blackbodies for precision photometry and radiometry," Metrologia, vol. 40, no. 1, pp. S128-S131, 2003.
[13] M. Persky, "Review of black surfaces for space-borne infrared systems," Rev. Sci. Instrum., vol. 70, no. 5, pp. 2193-2217, May 1999.
[14] H. Yoon, C. Gibson, and J. Gardner, "Spectral radiance comparison of two blackbodies with temperatures determined using absolute detectors and ITS-90 techniques," in Temperature: Its Measurement and Control in Science and Industry, Vol. VII (Eighth Temperature Symposium), AIP Conference Proceedings 684, Chicago, Illinois, pp. 601-606, October 2003.
[15] STEEP3, Version 1.3, User's Guide, Virial International, LLC, 2000.
[16] V. Sapritsky, M. Sakharov, S. Morozova, B. Lisiansky, S. Ogarev, B. Khlevnoy, and A. Panfilov, "High-Temperature Fixed Points on the Basis of Metal-Carbon Eutectics and Their Utilization for Radiometric Calibrations of Space-Borne Instruments for Remote Sensing," CALCON 2004 (Conference on Characterization and Radiometric Calibration for Remote Sensing), Logan, Utah, USA, August 2004.
[17] V. Sapritsky, S. Ogarev, and B. Khlevnoy, "Dissemination of high-temperature fixed points developed at VNIIOFI, based on metal-carbon eutectics, for space applications of ultra-precise radiometry and spectral radiation thermometry measurements," 34th Scientific Assembly of the Committee on Space Research (COSPAR 2002), Houston, Texas, October 2002.





A Sub-Sampling Digital PM/AM Noise Measurement System

Craig W. Nelson and David A. Howe

Abstract: A digital phase/amplitude modulation (PM/AM) noise measurement system (DNMS) implementing field programmable gate array (FPGA) based digital down converters (DDCs) and 250 MHz analog-to-digital converters (ADCs) is reported. Measurements in the first (baseband) Nyquist region show white phase noise floors of less than -180 dBc/Hz. With proper prefiltering of the input signals to prevent undesired aliasing, high-bandwidth track and hold amplifiers (THAs) extend the operating range of the DNMS to microwave frequencies. Preliminary testing with an 18 GHz THA shows residual white phase noise floors at 10 GHz of less than -160 dBc/Hz.
1. Introduction

During the last decade, the digital phase noise measurement system (DNMS) has become a powerful tool that makes simple and accurate phase noise measurements accessible even to the neophyte. The differential measurement of phase between two oscillators with different frequencies, without requiring a phase lock between the device under test (DUT) and the reference, has drastically simplified the process of making phase noise measurements. The complex analog frequency synthesis needed to compare two signals with different frequencies can now be replaced with a simple mathematical scaling factor in the DNMS. The basic building block of the DNMS is the digital down converter (DDC) [1, 2] shown in Figure 1. The DDC is a digital implementation of the traditional in-phase/quadrature (I/Q) demodulator. The signal to be analyzed is digitized and multiplied by an I/Q reference generated from a numerical oscillator. The I and Q channels are filtered and down-sampled to a convenient sample rate for mathematical analysis. A rectangular-to-polar conversion of the I and Q signals yields the phase and amplitude of the input signal. The instantaneous phase of the signal is computed directly in units of radians, and no calibration factor is needed [3, 4]. However, because the carrier frequency is directly sampled by an analog-to-digital converter (ADC), one of the primary drawbacks of a DNMS is its limited frequency range of operation.
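The DDC chain of Figure 1 can be illustrated with a short numerical sketch (Python/NumPy). The 250 MHz sample rate matches the system described here, but the carrier, modulation and decimation values are illustrative assumptions, and a simple boxcar average stands in for the CIC/polyphase filter chain:

```python
import numpy as np

fs = 250e6          # ADC sample rate (as in the NIST system)
f0 = 10e6           # carrier frequency of the digitized signal (illustrative)
n = 1 << 16
t = np.arange(n) / fs

# Digitized input: unit-amplitude carrier with a small phase modulation
phi_mod = 1e-3 * np.sin(2 * np.pi * 1e3 * t)
adc = np.cos(2 * np.pi * f0 * t + phi_mod)

# Multiply by the I/Q reference from the numerical oscillator
i_mix = adc * np.cos(2 * np.pi * f0 * t)
q_mix = adc * -np.sin(2 * np.pi * f0 * t)

# Crude low-pass filter and decimation (boxcar average over 256 samples)
dec = 256
i_bb = i_mix.reshape(-1, dec).mean(axis=1)
q_bb = q_mix.reshape(-1, dec).mean(axis=1)

# Rectangular-to-polar conversion: instantaneous phase directly in radians
# and amplitude in the input units; no calibration factor is needed
phase = np.arctan2(q_bb, i_bb)
amplitude = 2 * np.hypot(i_bb, q_bb)   # factor 2 restores the carrier amplitude
```

The mixing products at 2f0 are only partially suppressed by the boxcar average; a real DDC uses the CIC/polyphase decimators discussed below for far better rejection.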

The modern implementation of the DNMS was created after a Small Business Innovation Research (SBIR) grant request was issued by National Institute of Standards and Technology (NIST) in 2003. This SBIR (SB1341-03-W-0817) was awarded to Timing Solutions Corporation (now part of Symmetricom, Inc.) and they implemented many important improvements to the standard digital down converter (DDC) configuration; primarily, cross-spectrum analysis and the suppression of common mode clock noise by the differential measurement between a DUT and a reference [5]. The commercial implementation consists of a four-channel system where the DUT and reference signals are differentially measured in two parallel measurement systems. The common mode clock noise is suppressed by subtraction in each differential phase measurement. The non-common mode noise such as ADC quantization noise is suppressed by cross-spectrum analysis between the two measurement systems. A block diagram of this configuration is shown in Fig. 2.
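The cross-spectrum suppression described above rests on the fact that noise terms uncorrelated between the two parallel measurement systems average toward zero in the cross spectrum, while the common DUT noise does not. A minimal synthetic-data sketch of the principle (Python/NumPy; the signal model and noise levels are illustrative assumptions, not the NIST implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft, n_avg = 1024, 400

# Common "DUT" noise seen by both channels, plus independent per-channel
# noise (e.g., ADC quantization) that cross-spectrum averaging rejects
common = rng.normal(scale=1.0, size=(n_avg, n_fft))
ch1 = common + rng.normal(scale=3.0, size=(n_avg, n_fft))
ch2 = common + rng.normal(scale=3.0, size=(n_avg, n_fft))

X1 = np.fft.rfft(ch1, axis=1)
X2 = np.fft.rfft(ch2, axis=1)

# Averaged cross spectrum: converges to the common (DUT) spectrum, while
# the uncorrelated terms fall off roughly as 1/sqrt(n_avg)
cross = np.mean(X1 * np.conj(X2), axis=0)

# Single-channel auto spectrum: retains the uncorrelated noise power
auto1 = np.mean(np.abs(X1) ** 2, axis=0)
```

With these illustrative noise levels, the averaged cross spectrum sits well below the single-channel auto spectrum, mimicking the suppression of ADC quantization noise in Fig. 2.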
Craig W. Nelson and David A. Howe
Time and Frequency Division, National Institute of Standards and Technology, Boulder, CO 80305

Work of the United States government; not subject to copyright. No endorsement of any manufacturer is intended or implied.

2. Description of Basic Measurement System

The commercially available DNMS that evolved from the SBIR program is sufficient for most users, but our metrology needs at NIST require several additional features and capabilities, listed below:

Dual reference capability
Shorter measurement runs
Higher carrier frequency
Higher offset frequency analysis
Lower noise floors
Amplitude noise
Control of all aspects of the measurement

The dual reference capability allows the DUT to be measured differentially against two independent signal references. The noise of these references is uncorrelated and can be suppressed by cross-correlation. To implement these additional features and capabilities, a four-channel cross-spectrum PM/AM measurement system was constructed at NIST. Each of the four channels digitizes its radio frequency (rf) carrier with a 250 MHz, 16-bit ADC. The four DDCs and decimation chains were implemented in field programmable gate arrays (FPGAs). Cascaded integrator-comb



(CIC) polyphase decimators [2, 4, 6] were used in the DDC and analysis chain. The cross-spectrum fast Fourier transforms (FFTs) were calculated in a PCI eXtensions for Instrumentation (PXI) host computer that was interfaced to the FPGA. Figure 3 shows that the residual white phase noise floor of the DNMS at 10 MHz is less than -180 dBc/Hz. The spectrum of a discretely sampled signal is periodic; therefore, with careful prefiltering, we can take advantage of aliasing to down-convert signals that lie above the Nyquist frequency. The ADCs selected for our implementation have an analog bandwidth of about 600 MHz, which allows limited sub-sampling into the third Nyquist region.
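The sub-sampling bookkeeping can be sketched numerically: a carrier above the Nyquist frequency folds to a predictable baseband image, with spectral reversal in even-numbered Nyquist regions. The helper below is a hypothetical illustration of that relation, not part of the instrument firmware:

```python
def alias_frequency(f_carrier_hz, f_sample_hz):
    """Return (nyquist_region, f_alias_hz) for a sub-sampled carrier.

    Region 1 is baseband (0 to fs/2); in even-numbered regions the
    spectrum is reversed, so the image lands at fs - (f mod fs) there.
    """
    f_nyq = f_sample_hz / 2.0
    region = int(f_carrier_hz // f_nyq) + 1
    f_mod = f_carrier_hz % f_sample_hz
    f_alias = f_mod if f_mod <= f_nyq else f_sample_hz - f_mod
    return region, f_alias

# Illustrative example: a carrier at 300 MHz sampled at 250 MHz lies in
# the third Nyquist region (250 to 375 MHz) and aliases to 50 MHz
region, f_alias = alias_frequency(300e6, 250e6)
```

For a real input, the prefilter must confine the signal to a single Nyquist region, otherwise multiple images fold onto each other.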
3. Track and Hold Frontend

Figure 1. Block diagram for a digital down converter. Analog-to-digital converter (ADC), in-phase/quadrature direct digital synthesizer (I/Q DDS).

To achieve digital PM/AM noise analysis at much higher frequencies than the 600 MHz available with the system described above, a track and hold analog sampling front-end was constructed for the DNMS. A block diagram of the front-end is shown in Figure 4. Track and hold amplifiers [7] with 18 GHz bandwidth and sampling rates of 4 GS/s were selected for the front-end. A fractional-N phase-locked loop (PLL) synthesizer with an 18 MHz to 3 GHz tuning range was implemented as a common sampling clock for each THA pair. Careful delay matching of the input, output and clock transmission lines between each THA pair was performed to ensure that common-mode clock noise can be strongly rejected. Options for providing an external clock and additional clock outputs, including a spread spectrum generator, were also implemented. The spread spectrum clock modulator was not used or evaluated in this experiment. Figure 5 shows the block diagram of the completed sub-sampling digital measurement system utilizing a pair of dual THA modules, each with an independent sampling clock.
4. Performance

4.1. Residual Noise Tests

Figure 2. Block diagram of a four-channel DNMS. Reference source (REF), device under test (DUT), analog-to-digital converter (ADC), digital down-converter (DDC). ωx and φx are the angular frequency and phase of the reference or DUT. φADCn represents the ADC quantization noise of channel n, suppressed by the cross spectrum. φCLKn represents clock noise suppressed by common-mode subtraction. Nave is the number of averages in the cross-spectrum calculation.

Residual SSB phase noise floors of the sub-sampling measurement system were measured at 2, 10, and 18 GHz and are shown in Figure 6. Noise floors of L(thermal) = -160, -158, and -144 dBc/Hz and L(1 Hz) = -122, -110, and -110 dBc/Hz, respectively, were achieved. Rejection of common-mode clock noise via the differential phase measurement was as high as 60 dB. Non-rejected noise was due to uncorrelated jitter in the clock circuitry of each individual THA. For these measurements the clock frequencies were chosen to produce a 10.7 MHz intermediate frequency at the output of the THA; 10.7 MHz was chosen because convenient anti-alias filters are readily available at that frequency. The DDC reference clock was set to produce a baseband signal at its output.

4.2. Absolute Noise Tests

Absolute phase noise measurements between a commercial synthesizer at three different frequencies and a fixed-frequency dielectric resonator oscillator (DRO) at 10 GHz are shown in Figure 7. This demonstrates the capability to measure DUT and reference signals that are not phase locked and whose carrier frequencies differ by a ratio as high as five. Independent verification of this measurement was made with a commercial phase noise analyzer; the results agreed to within 1 dB, with slightly higher disagreement at a 100 kHz offset frequency. Figure 8 shows the same results normalized to a carrier frequency of 10 GHz.

4.3. PM/AM Correlation Tests

Figure 9 shows simultaneous amplitude and phase noise measurements [8] along with the associated correlations between them. These correlations may be useful in the analysis of nonlinear oscillator optimization [9].
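The cross-spectrum averaging at the heart of these noise floors (Nave in Figure 2) can be illustrated numerically: noise common to a channel pair (the DUT) survives averaging of the cross spectra, while each channel's independent instrument noise averages toward zero, improving roughly as 5·log10(Nave) dB. The following is a toy model with made-up noise levels, not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft, n_avg = 1024, 400
sigma_dut, sigma_ch = 0.3, 1.0   # illustrative noise levels (arbitrary units)

cross = np.zeros(n_fft)
for _ in range(n_avg):
    dut = sigma_dut * rng.standard_normal(n_fft)   # common DUT phase noise
    ch1 = np.fft.fft(dut + sigma_ch * rng.standard_normal(n_fft))
    ch2 = np.fft.fft(dut + sigma_ch * rng.standard_normal(n_fft))
    cross += (ch1 * np.conj(ch2)).real             # correlated part survives
cross /= n_avg

est = cross.mean()                  # -> n_fft * sigma_dut**2 (DUT power only)
single = np.mean(np.abs(ch1) ** 2)  # -> n_fft * (sigma_dut**2 + sigma_ch**2)
```

Here the single-channel spectrum is dominated by instrument noise, while the averaged cross spectrum converges on the common DUT power, which is how the system reaches floors well below either ADC's own quantization noise.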
Figure 3. Residual single-sideband (SSB) phase noise floor of the digital noise measurement system at 10 MHz. A single source at +15 dBm drives both the reference and device under test ports.

Figure 4. Dual track and hold with common fractional-N PLL sampling clock. Track and hold amplifier (THA), frequency divider (DIV 1-16), spread spectrum generator (DSSS GEN).

Figure 5. DNMS with track and hold front-end.

Figure 6. Residual SSB phase noise floor of the sub-sampling DNMS. N = Nyquist region; approximate sample rates = 125 MHz, 2 GHz, 2.5 GHz, and 3 GHz; averaging time is about 10 min.

Figure 7. Absolute phase noise of the E8241A synthesizer at three carrier frequencies versus a fixed 10 GHz phase-locked DRO. Approximate sample rates = 2 GHz, 2.5 GHz, and 3 GHz; averaging time is ~10 min.

5. Conclusions

A four-channel, 250 MS/s, 16-bit AM/PM digital noise measurement system was constructed, and residual phase noise floors of L(thermal) less than -180 dBc/Hz at 10 MHz were achieved. A four-channel track-and-hold front-end for the DNMS was demonstrated, extending measurement capability up to 18 GHz. Future research involving the DNMS will include optical sampling [10] to extend the analysis of carrier frequencies out to 40 GHz, faster FPGA-based FFT calculation, and a move to 12-bit, 3.6 GHz analog-to-digital converters.

6. Acknowledgements

The authors thank their NIST colleagues Neil Ashby, Corey Barnes, Jason DeSalvo, Archita Hati, Jeramy Hughes, Danielle Lirette, and Frank Quinlan for assistance and useful discussions.

7. References

[1] R. Lyons, Understanding Digital Signal Processing, 3rd edition, Prentice Hall, 2010.
[2] F. Harris, Multirate Signal Processing for Communication Systems, 1st edition, Prentice Hall, 2004.
[3] D. Bogomolov and D. Ochkov, "Phase noise evaluation digital algorithm for precision oscillations," Proceedings of the Joint Meeting of the 1999 IEEE International Frequency Control


Figure 8. Absolute phase noise of the E8241A synthesizer at three carrier frequencies versus a fixed 10 GHz phase-locked DRO, normalized to 10 GHz.
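Normalizing phase noise measured at one carrier to a common reference carrier, as in Figure 8, follows from the ideal frequency-multiplication rule: scaling a carrier by a factor N raises its phase noise by 20·log10(N) dB. A minimal helper, sketching the standard rule rather than reproducing code from the paper:

```python
import math

def normalize_L(L_dBc, f_meas, f_ref):
    """Scale SSB phase noise L(f), in dBc/Hz, from carrier f_meas to
    carrier f_ref, assuming ideal (noiseless) frequency multiplication."""
    return L_dBc + 20.0 * math.log10(f_ref / f_meas)

# Example: -130 dBc/Hz measured at 2 GHz corresponds to about
# -116 dBc/Hz referred to a 10 GHz carrier (20*log10(5) ~ 14 dB).
```

Plotting all carriers on one normalized axis this way makes sources at different frequencies directly comparable.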

Figure 9. Amplitude and phase noise of a synthesizer, with the correlation between them.





Symposium and European Frequency and Time Forum, Besançon, France, pp. 1117-1120, April 1999.
[4] L. Angrisani and M. D'Apuzzo, "A digital signal-processing approach for phase noise measurement," Proceedings of the 2000 IEEE Instrumentation and Measurement Technology Conference (IMTC), Baltimore, Maryland, vol. 2, pp. 811-816, May 2000.
[5] J. Grove, J. Hein, J. Retta, P. Schweiger, W. Solbrig, and S. Stein, "Direct-digital phase-noise measurement," Proceedings of the 2004 IEEE International Frequency Control Symposium, Montreal, Canada, pp. 287-291, August 2004.
[6] E. Hogenauer, "An economical class of digital filters for decimation and interpolation," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 2, pp. 155-162, April 1981.
[7] W. Kester, ed., The Data Conversion Handbook, 3rd edition, Newnes, 2004.

[8] L. Ruppalt, D. McKinstry, K. Lauritzen, A. Wu, S. Phillips, and S. Talisa, "Simultaneous digital measurement of phase and amplitude noise," Proceedings of the 2010 IEEE International Frequency Control Symposium, Newport Beach, California, pp. 97-102, June 2010.
[9] D. Howe, A. Hati, C. Nelson, and D. Lirette, "PM-AM correlation measurements and analysis," Proceedings of the 2012 IEEE International Frequency Control Symposium, Baltimore, Maryland, May 2012.
[10] A. Nejadmalayeri, M. Grein, A. Khilo, J. Wang, M. Sander, M. Peng, C. Sorace, E. Ippen, and F. Kärtner, "A 16-fs aperture-jitter photonic ADC: 7.0 ENOB at 40 GHz," 2011 Conference on Lasers and Electro-Optics (CLEO), Optical Society of America, paper CThI6, May 2011.



New Products and Services

Essco Calibration Laboratory can now Calibrate Vane-Type Anemometers
The Airflow Special Products Ltd. Anemometer Calibration Rig (Open Jet Wind Tunnel) is designed for the calibration of vane-type anemometers. The Open Jet Wind Tunnel has a velocity range of 0.25 to 30.5 m/s (50 to 6000 ft/min). It consists of a centrifugal fan driven by a 2.2 kW motor with a frequency speed controller. The fan supplies air through a duct transition into a 200 mm diameter orifice meter measuring section fitted with a quick-release clamping arrangement for changing orifice plates. After passing through the orifice meter, the air enters a smoothing chamber fitted with perforated screens and a honeycomb flow straightener, which provides uniform flow into the final contraction section. The contraction is terminated with a sharp-edge exit nozzle from which the air is delivered at test velocity, providing a uniform air stream with a diameter of 125 mm.
For more information, contact Joe Morse at:

Yokogawa Meters and Instruments Releases New Digital Earth Tester for Electrical and Maintenance Personnel
Yokogawa Corporation of America is pleased to announce the release of another high-quality and well-engineered product, the EY200 Digital Earth Tester. The EY200 measures earth resistance up to 2,000 Ω using the two-pole or three-pole method. It is extremely simple to use, with one-button and one-switch operation. The EY200 is compact and lightweight and includes a complete test kit. The Yokogawa EY200 was developed to help maintenance personnel and technicians ensure proper wiring and grounding of electrical equipment and systems. Confirming proper wiring and grounding helps to reduce power quality issues and potential safety hazards. The Yokogawa EY200 complements the Yokogawa portfolio of portable test instruments, including digital multimeters, clamp-on meters, insulation testers, and portable power monitors. The Yokogawa EY200 is dust and water resistant (IP54 rated), CE compliant, and CAT III 300 V rated. Standard accessories provided with the unit are two- and three-pole test leads, earth spikes, a soft case, a shoulder strap, 6 AA batteries, and an instruction manual.
For more information visit:

Hexagon Metrology Launches New ROMER Portable CMM

The Absolute Arm is already known for its pioneering advancements. The upgraded design increases the accuracy of the portable coordinate measuring machine (PCMM) by up to 23 % compared to previous versions. With point repeatability values from 0.016 mm, the ROMER Absolute Arm is the most accurate portable measuring arm produced by Hexagon Metrology. Other features include SmartLock technology, which securely locks the arm in its rest position using a simple switch at the base; SmartLock also enables users to lock the arm in any intermediate position to ease inspections in physically limited areas. The new version also contains an easy-to-access battery pack to minimize the downtime required for battery changes. Absolute encoders recognize the position of the arm at all times, effectively eliminating the need for complex homing procedures. Automatic probe recognition allows operators to change probes within seconds without recalibration. An optional integrated scanning solution is factory calibrated and certified with the arm and scanner as a complete system. The ROMER Absolute Arm is available in seven lengths, from 1.5 m to 4.5 m. "The Absolute Arm was already the most accurate arm on the market, but with advancements in design, the new version is even more accurate than before. This pushes the envelope of what is possible with portable measurement technology," states Eric Hollenbeck, product manager for portable technology at Hexagon Metrology. "The most dramatic improvements will be seen on the larger models, including our 4 m and 4.5 m arms, the largest in the industry." All new orders will be shipped from the state-of-the-art facility in Oceanside, CA with the revised specifications.
To learn more, visit:


Fluke Biomedical Device is the First to Achieve the Environmentally-Friendly Label

Fluke Biomedical has announced that the new ProSim SPOT Light SpO2 Functional Tester is the first biomedical test tool on the market to achieve the RoHS E Environmentally-Friendly label. Short for Restriction of Use of Hazardous Substances, RoHS regulations are designed to limit or eliminate substances that are dangerous to the environment and to people. The RoHS E label certifies that the new SPOT Light SpO2 Functional Tester is designed to protect people and the environment from dangerous substances like lead, mercury, and hexavalent chromium. SPOT Light has already been heralded as the most environmentally friendly SpO2 test device available today. Jamey Smart, Product Manager for Fluke Biomedical, explains: "Other products sneak a profit by using a disposable model: constantly churning through batteries and finger sensors. Disposing of batteries and electronics is terrible for the environment. Our aim was to design a tool that is both affordable and green: big savings, small footprint. The RoHS E Environmentally-Friendly label affirms our commitment to that goal." SPOT Light is lightweight and flexible, with three custom presets specially designed to make it the fastest and easiest-to-use device on the market today for pulse oximeter functional testing. It sets up in seconds to send SpO2 saturation, heart rate, perfusion, transmission, artifact noise, and eight different manufacturers' custom r-curves to a pulse oximeter or patient monitor at plus or minus 1 % accuracy. A helpful LCD display and three simple buttons make it effortless to rapidly change parameters and view each signal output sent to the pulse oximeter at a glance. An interchangeable, long-life battery ensures uninterrupted all-day operation without the need to connect to a power supply.
For more information, visit:

AKTAKOM's Innovative Power Supply Allows Remote Control Through Any Mobile Device
T&M Atlantic, a distributor of test and measurement equipment, unveiled the Aktakom APS-7303L and APS-7305L power supplies in June 2012. These models are economical, programmable, and regulated, but the major advantage of the 73xxL-series power supplies is that they can be remotely controlled via LAN or USB. The Aktakom Power Manager software allows the user to set and pre-set inputs, graph results, and time-control the output of the power supply. The software works on any PC, iPhone, iPad, or Android mobile device. The Aktakom APS-7303L and APS-7305L have been nominated for best product of the year by the Dealerscope and TechnologyTell magazines.
For more information, visit:

Do you have a new product or service announcement you would like featured in NCSLI Measure?
Send it to:





Advertisers Index
Agilent Technologies
AssetSmart
CENAM Symposio 2012
Essco Calibration Laboratory
FasTest
Fluke Calibration
Guildline
Isotech North America
Martel Electronics
Measurements International
Mensor Corporation
Morehouse Instrument Company
NCSLI Expand Your Reach
NCSLI Join Us
NCSLI Technical Exchange
NCSLI Training Center
NCSLI Workshop & Symposium
Northrop Grumman
Ohm-Labs, Inc.
Rotronic
Vaisala


