Relative Calibration Functional Test Description

The Relative Calibration Test is an example of a fully developed test created to fill a gap identified in the Commissioning Test Protocol Library. 

The purpose of the test is to ensure the relative accuracy of a group of sensors associated with a system or selected portion of a system where errors related to the calibration accuracy window of the sensors could cause energy to be wasted or operating data to be misinterpreted. 

 

Functional Test for Relative Calibration: Link to a functional test form for relative calibration.  The sections below describe this test form.

 

 

Functional Testing Benefits


Energy Efficiency Related Benefits

1.       Minimizes the potential for simultaneous heating and cooling due to the specific operating points of sensors within their accuracy windows. 

Other Benefits

1.       Improves system operability by eliminating false indications of temperature differences that do not exist.  For instance, after relative calibration, a temperature rise across a coil that is supposed to be inactive really will be an indicator of potential energy waste.  While it is difficult to quantify the energy savings associated with this, they can be significant over the life of a system.

2.       Improves system performance by minimizing the potential for misrepresenting what is actually going on and acting on that information, either manually or automatically.

Functional Testing Field Tips


Purpose of Test

The purpose of the test is to ensure the relative accuracy of a group of sensors associated with a system or selected portion of a system where errors related to the calibration accuracy window of the sensors could cause energy to be wasted or operating data to be misinterpreted. 

Instrumentation Required

The fundamental test can be performed without any instrumentation other than the sensors that are being tested.  However, a reference standard is helpful to establish the baseline for comparison when making adjustments.  Minute-by-minute trending or data logging of the points under test is useful for documenting the test results.  A Shortridge meter with a temperature probe makes checking the average mixed air temperature much easier.
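As an illustration only, the following is a minimal data-logging sketch in Python.  The read_sensor() hook and the point names are hypothetical placeholders; a trend-log export from the building automation system serves the same documentation purpose.

# Minimal sketch of minute-by-minute logging of the points under test.
# read_sensor() and the point names are hypothetical placeholders; substitute
# the actual BAS trend interface or data-logger API used on the project.
import csv
import time
from datetime import datetime

POINTS = ["MA-T", "PH-DAT", "CC-DAT", "SA-T"]   # placeholder point names

def read_sensor(point_name: str) -> float:
    """Placeholder for the actual read call to the BAS or data logger."""
    raise NotImplementedError("Connect this to the project's trend interface.")

def log_test(filename: str, minutes: int = 30) -> None:
    """Record each point once per minute so the test results are documented."""
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + POINTS)
        for _ in range(minutes):
            row = [datetime.now().isoformat(timespec="seconds")]
            row += [read_sensor(p) for p in POINTS]
            writer.writerow(row)
            f.flush()               # keep the file usable if the test is cut short
            time.sleep(60)          # one-minute sampling interval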

Test Conditions

The system needs to be placed in a steady state condition where the parameter measured by the sensors undergoing the relative calibration process can be assumed to be uniform at all points in the portion of the system under test.

Time Required to Test

Test times will vary from 15 minutes to an hour depending on how long it takes to set up for and achieve steady state operation, how many sensors are being calibrated, and the ease of making adjustments.

Acceptance Criteria

1.       With the system in a steady state condition, all sensors read the same value relative to a baseline, within their accuracy tolerance, prior to adjustment.

2.       With the system in a steady state condition, all sensors read the same value after adjustment.
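As an illustration only, the short Python sketch below checks these two criteria against a set of hypothetical recorded readings; the point names, the 1.5 degree tolerance, and the post-adjustment agreement band are assumptions, not requirements from the test form.

# Sketch of an acceptance check for the two criteria above.  All readings,
# the 1.5 deg F tolerance, and the 0.1 deg F agreement band are illustrative.

def within_tolerance(readings: dict[str, float], baseline: float, tol: float) -> dict[str, bool]:
    """Criterion 1: each sensor reads within its accuracy tolerance of the baseline."""
    return {name: abs(value - baseline) <= tol for name, value in readings.items()}

def agree_after_adjustment(readings: dict[str, float], band: float = 0.1) -> bool:
    """Criterion 2: after adjustment, all sensors read essentially the same value."""
    return max(readings.values()) - min(readings.values()) <= band

pre_adjust = {"MA-T": 56.1, "PH-DAT": 54.4, "CC-DAT": 55.8}    # hypothetical readings, deg F
print(within_tolerance(pre_adjust, baseline=55.0, tol=1.5))     # all True in this example

post_adjust = {"MA-T": 55.0, "PH-DAT": 55.1, "CC-DAT": 55.0}
print(agree_after_adjustment(post_adjust))                      # True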

Potential Problems and Cautions

1.       The system will essentially be out of control for the interval of time under which the test occurs.  This may or may not be acceptable during occupied hours in the area served, so the test may need to be coordinated to occur when temporary deviations from set point can be tolerated.

2.       Absolute sensor calibration should be known to a reasonable degree of certainty.  On new construction projects, this can be established fairly easily by the sensor specifications supplemented with a factory calibration certificate.  On existing projects, good documentation of periodic calibration checks may prove sufficient.  Lacking that, it may be desirable to calibrate all sensors to the extent possible in the field.  It may also be desirable to consider returning critical sensors to a lab or factory for recalibration, or replacing them with sensors of known accuracy, and then using that sensor as the baseline sensor for the test[1].

3.       Selection of the baseline sensor can be a critical issue.  Use of a consistent, reliable baseline standard is the best approach to dealing with this issue.  Desirable standards include:

Temperature - Lab grade mercury thermometers with a range and graduations appropriate for the application.

Humidity or dewpoint - Sling psychrometer and psychrometric chart or ASHRAE psychrometric equations.

Pressure - Inclined manometer or recently calibrated pressure gauge.

           Lacking a standard, the sensor that is most critical to the HVAC process outcome should be selected as the baseline.  If all sensors are equally critical to the process, then the sensor that most closely represents the median value indicated by all of the sensors under test should be selected as the baseline.
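The median-based fallback described above can be expressed as a short sketch.  This is illustrative Python with hypothetical point names and readings; it simply picks the sensor whose reading sits closest to the median of the group.

# Sketch of the fallback rule above: with no reference standard and all sensors
# equally critical, use the sensor closest to the median reading as the baseline.
from statistics import median

def select_baseline(readings: dict[str, float]) -> str:
    """Return the name of the sensor whose reading is closest to the group median."""
    med = median(readings.values())
    return min(readings, key=lambda name: abs(readings[name] - med))

readings = {"MA-T": 56.2, "PH-DAT": 54.6, "CC-DAT": 55.5}   # hypothetical readings, deg F
print(select_baseline(readings))                             # "CC-DAT"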

 

In many instances the relative calibration of sensors in a system is more critical than their absolute calibration within the constraints of their specified accuracy[2].  Two sensors that have been calibrated to the same standard with the same accuracy specification will have the same absolute accuracy.  For example, two averaging type RTDs with transmitters may both be certified as ±1.5°F over their 0-100°F span.  This means several things:

1   If either sensor is subjected to a steady-state temperature condition between 0-100°F, then the user can expect that the sensor, at its terminals, will accurately indicate the measured temperature within 1.5°F of the true value.  If the sensor is indicating a temperature of 57.4°F, then the actual temperature is somewhere between 55.9°F and 58.9°F (the indicated value plus and minus the stated accuracy range of 1.5°F)[3].

2   Without a copy of the sensor's calibration certificate, we can only know that the sensor will be indicating a temperature within its accuracy tolerance. 

-    It could be consistently high or low by some amount within its tolerance. 

-    It could be high or low over its entire span, but by some variable amount within its tolerance.

-    It could be high at one end of its span and low at the other, all within its tolerance.

-    It could be randomly high and low over its entire span, all within its tolerance.

Even with a copy of the calibration certificate, all we really know is the deviation at the specific points tested.

3   If both sensors are subjected to the same steady-state condition, they may indicate a temperature difference of as much as 3°F[4].
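The arithmetic behind items 1 and 3 can be sketched in a few lines of Python; the ±1.5°F figure and the 57.4°F reading come from the example above, and the functions are illustrative only.

# Worked example of the accuracy-window arithmetic in items 1 and 3 above,
# using the ±1.5°F certified accuracy from the example sensor specification.
ACCURACY = 1.5      # certified accuracy, ±°F

def true_value_window(indicated: float, accuracy: float = ACCURACY) -> tuple[float, float]:
    """Window in which the true temperature must lie for an in-spec sensor."""
    return indicated - accuracy, indicated + accuracy

def worst_case_disagreement(accuracy: float = ACCURACY) -> float:
    """Largest apparent difference between two in-spec sensors at the same condition:
    one reading at the high limit while the other reads at the low limit."""
    return 2 * accuracy

print(tuple(round(x, 1) for x in true_value_window(57.4)))   # (55.9, 58.9)
print(worst_case_disagreement())                             # 3.0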

The temperature difference noted in item 3 would imply a temperature differential that does not exist.  In HVAC systems, temperature differentials usually indicate that energy transfers are occurring.  For instance, the temperature change across an active coil is a good indication of its performance.  The system's heat transfer elements are often controlled to ensure a temperature differential across them, for instance, to maintain a fixed leaving air temperature.

A temperature rise across a heating coil that is supposed to be off probably means that the control valve is leaking or that there is a problem with the control signal to the control valve.  In either case, energy is being wasted at several points in the system, specifically through:

-    Unnecessary heating of the air stream at the coil.

-    Unnecessary heating plant energy to provide the unnecessary heat to the coil.

-    Unnecessary cooling of the air stream to offset the unnecessary heating in order to maintain comfort.

-    Unnecessary cooling plant energy to provide the unnecessary cooling to the air stream.

Figure 1 The impact of calibrated accuracy of identical sensors serving the same system and operating at different points within their certified calibration accuracy window

The system in Figure 1 requires a 55°F cooling coil discharge temperature.  It uses independent control loops for each heat transfer element.  All of the sensors serving the system meet the project's ±1.5°F accuracy requirement for averaging type sensors.  But, because the one serving the preheat coil is operating at the bottom limit of that range, it detects the outdoor air condition as lower than it actually is and adds heat, even though heating is not necessary.  This air then reaches the cooling coil's controller, which not only re-cools the air to remove the unnecessary heat added by the preheat coil, but also overcools the air because its sensor is operating at the upper limit of its accuracy window and thus detects the cooling coil leaving condition as being warmer than it actually is.  As a result, the AHU uses heating and cooling energy in an unsuccessful attempt to achieve a leaving air temperature that could have been achieved by simply bringing outdoor air into the system at its current condition.
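To put rough numbers on this, the sketch below applies the standard-air sensible heat relation Q = 1.08 x cfm x delta-T.  The 10,000 cfm airflow and the 1.5°F offsets are illustrative assumptions, not values taken from Figure 1.

# Rough sizing of the waste in the Figure 1 scenario using the standard-air
# sensible heat relation Q = 1.08 * cfm * delta-T (Btu/h).  The airflow and the
# offsets are illustrative assumptions only.
CFM = 10_000                 # assumed supply airflow
PREHEAT_OFFSET = 1.5         # deg F of unnecessary heating (preheat sensor reading low)
OVERCOOL_OFFSET = 1.5        # deg F of overcooling (cooling coil sensor reading high)

unnecessary_heating = 1.08 * CFM * PREHEAT_OFFSET                      # Btu/h added at the preheat coil
unnecessary_cooling = 1.08 * CFM * (PREHEAT_OFFSET + OVERCOOL_OFFSET)  # Btu/h to remove that heat and then overcool

print(f"Unnecessary heating: {unnecessary_heating:,.0f} Btu/h")   # 16,200 Btu/h
print(f"Unnecessary cooling: {unnecessary_cooling:,.0f} Btu/h")   # 32,400 Btu/h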

In many instances, the relative accuracy of the sensors in a system is far more important than their absolute accuracy for ensuring efficient performance and detecting problems.   Two sensors that are performing per specification but indicating a temperature difference that does not exist could be misleading and create operating problems. 

Consider a makeup air handling system with a preheat coil and cooling coil where each coil is controlled by an independent control loop, a fairly common arrangement.  Let's further assume that the temperature sensors that provide inputs to these control loops are RTDs with flexible averaging elements and 4-20 mA transmitters with an overall accuracy of ±1.5°F, a fairly common type of sensor and accuracy for this type of application.  It would be possible for this system to perform unnecessary simultaneous heating and cooling, even if both sensors are operating within their accuracy windows and the set points of the control loops have been coordinated.  This is illustrated in Figure 1.  Add an economizer to the picture with another independent control loop and the situation could become even worse.

Figure 2 The impact of cooling coil discharge sensor error on maintaining space conditions inside the design envelope 

The psychrometric chart on the left depicts the coil and space conditions that would be associated with a ±1-1/2°F calibration error at the sensor controlling the cooling coil discharge temperature in a system serving a clean room.  The chart on the right depicts the same situation for an office building comfort cooling application.  Notice how the error in measurement at the cooling coil discharge still places the space within the design envelope for the comfort cooling process, while the same error places the space outside the design envelope for the clean room process.

All of this is not to say that absolute accuracy is unimportant.  For instance, if the discharge temperature of a cooling coil is not controlled accurately in a dehumidification process, then space humidity levels may suffer even though the space temperature is satisfactory.  The importance of this will vary with the application, as illustrated in Figure 2.  However, even in critical applications like clean rooms, the concern is often not so much for absolute accuracy as it is for stability and repeatability[5].  Absolute accuracy is typically critical in process applications where subjecting the product to a temperature outside of a certain limit will result in unacceptable product or damage.  HVAC systems seldom have to deal with this issue directly, although they may impact the performance of systems that do.

Some of these issues, such as the absolute sensor accuracy requirements, are best addressed at the time of design.  Others, like the relative accuracy of the sensors, can only be addressed under operating conditions, and thus, are best left to the commissioning process where a relative calibration can be performed with the system in operation.  In general, a relative calibration test will include the following steps.

1   Identify the sensors in a system where relative accuracy is important in order to ensure efficient operation or proper interpretation of the data they present.  Prime candidates include:

-    Sensors monitoring temperature, humidity or pressure differentials across equipment.

-    Sensors used to calculate energy or mass transfer across a piece of equipment.

-    Sensors used in cascaded control loops where the output from one loop becomes the input to another.

2   Identify areas served by the system to be tested and document acceptable deviations from norm that can be tolerated during the time of test[6].

3   Identify and document any phenomena, like fan heat or duct temperature rise due to transmission, that could legitimately change the conditions between the sensors under test[7].

4   Document the current software calibration and scaling factors and then return them to standard settings so that the information displayed is the actual, un-augmented value from the sensors under test[8].

5   Place the system in an operating mode that will subject all of the sensors to the same conditions.  Often, this involves running the system with the heat transfer elements valved out and/or shut down and at a fixed flow rate[9].

6   Verify that all sensors are reading within their specified accuracy window relative to each other and/or some reference standard.

7   Select a sensor or a reference standard to be the baseline for relative calibration.

8   Identify sensors with calibration errors outside of the certified accuracy range and correct these errors.

9   Adjust software calibration factors as required so that the sensors all read the same value under the test conditions (see the sketch following this list)[9 continued below].

10 Document the software calibration factors.

11 Return the system to normal operation.
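As a sketch of step 9, the following Python computes the software calibration offsets that would bring each sensor into agreement with the selected baseline at the steady-state test condition.  The point names and readings are hypothetical, and the offsets would be entered through whatever calibration parameters the control system provides.

# Sketch of step 9: offsets that bring each sensor into agreement with the
# baseline value recorded at the steady-state test condition.  Readings are
# hypothetical; apply the results via the control system's calibration parameters.

def calibration_offsets(readings: dict[str, float], baseline_value: float) -> dict[str, float]:
    """Offset to add to each sensor so it reports the baseline value at the test condition."""
    return {name: round(baseline_value - value, 2) for name, value in readings.items()}

steady_state = {"MA-T": 56.2, "PH-DAT": 54.6, "CC-DAT": 55.5, "SA-T": 55.9}   # deg F
print(calibration_offsets(steady_state, baseline_value=55.5))
# {'MA-T': -0.7, 'PH-DAT': 0.9, 'CC-DAT': 0.0, 'SA-T': -0.4}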

If time permits and the system can tolerate a longer test window, test the system at a second, different steady state condition within the normal operating range prior to returning it to service.  For instance, it is often possible to test economizer-equipped units on a mild day so that the test can be performed with full recirculation and with 100% outdoor air, thereby checking the sensors at about 70-75°F and at 50-55°F.  This two-point calibration will help ensure that the final relative calibration factors provide good results under all normal operating conditions.  When performed in this manner, it may take several iterations of the test sequence to establish software calibration factors that provide consistent readings among all sensors at both extremes, which can add significantly to the time it takes to run the test and make the adjustments.  If a two-point test is not practical, satisfactory results can often be obtained by running the test at a single condition, either in the middle of the normal operating range or at the condition seen most often by the system.
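Where a two-point test is performed, the readings at the two conditions can be turned into a combined gain and offset correction.  The sketch below is illustrative Python, and the raw and reference values shown are assumed for the example.

# Sketch of a two-point correction: solve corrected = gain * raw + offset so the
# sensor matches the baseline at both test conditions.  Values are hypothetical.

def two_point_correction(raw_lo: float, ref_lo: float,
                         raw_hi: float, ref_hi: float) -> tuple[float, float]:
    """Return (gain, offset) mapping raw sensor readings onto the baseline reference."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

# Example: sensor vs. baseline at ~52°F (100% outdoor air) and ~73°F (full recirculation)
gain, offset = two_point_correction(raw_lo=51.1, ref_lo=52.0, raw_hi=72.8, ref_hi=73.0)
print(round(gain, 4), round(offset, 2))   # 0.9677 2.55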

Click on the button below the following table to be taken to a functional test for relative accuracy for the temperature sensors in an air handling unit with an economizer cycle, warm-up coil and cooling coil.  The concepts illustrated in this test can be adapted to other system configurations as well as other system types by using the sample as a template for procedures that are specific to your projects.

 



[1] If the parameter is truly critical to the process, a regular program of certifiable calibration should be implemented.  There are several approaches that can be used for this including true multipoint calibration in the field if the necessary constant temperature baths, current and voltage simulators, decade boxes, etc. are available.  A more practical alternative for Owners who do not have access to an instrument shop and/or instrumentation engineers may be to maintain a spare sensor in stock that can be calibrated and switched periodically with the operating sensor.  The sensor that was in service can then be sent out for factory calibration, returned to stock, and then used for the swap at the next calibration cycle.

[2]   For the purposes of our discussion, relative accuracy is defined as the accuracy of the sensors relative to each other and perhaps to some field reference standard.  Absolute accuracy is the accuracy of a sensor relative to an NBS-traceable standard.

[3]   The window in which the true temperature lies is equal to twice the temperature sensor's accuracy range, centered on the indicated value.  In our example, if the sensor were reading 1.5°F low, then the true temperature would be 1.5°F higher than the indicated temperature.  On the other hand, if the sensor were reading 1.5°F high, then the true temperature would be 1.5°F lower than the indicated temperature.  Since we don't know whether the sensor is reading high or low, just that it is capable of reading within ±1.5°F of the true temperature, we can only assume that the temperature is within this window created by the sensor's accuracy tolerance.

[4]   This would occur if one sensor were reading at the high end of its accuracy tolerance and the other sensor were reading at the low end of its accuracy tolerance.

[5] For example, in clean rooms, the issue often is not the exact temperature and humidity in the space but the stability of these parameters and the consistency of the documentation of them. 

[6]   The system will be essentially out of control for the duration of the test.  In many cases, this will not be a significant problem since the test will not last that long and/or the test condition can be selected to minimize the disruption to the loads.  However, there may be critical zones on the system, like a lab or a computer room, which cannot tolerate a significant disruption.  This may mean that the test will need to be coordinated to occur when a disruption can be tolerated.  It may also mean that it is necessary to trend the space during the test to document any deviations from norm that occur.

[7]   If the test is to include the return air temperature sensor on an economizer-equipped unit, then the best test will result if the system is operated in full recirculation with no minimum outdoor air or exhaust, since this virtually eliminates one variable from the system (the temperature change associated with mixing the minimum outdoor air with the return air).  Full return may not be possible depending on the nature of the loads in the building and the time when the test is scheduled to occur.  If it is not possible, then it will be necessary to document the temperature change associated with the mixing function or simply eliminate the return sensor from the test.  Poor mixing can also be a problem for sensors located immediately downstream of the mixing box if the test is performed in any mode other than 100% outdoor air or 100% recirculation.  If the system has leaky dampers, then some temperature change may occur through the mixing plenum even with no outdoor air being introduced actively.  An effort should be made to document this if possible.  Testing on a mild day can mitigate the effect, since any minor leakage that does occur will have a minimal impact on the temperature in the system.  Similar considerations apply when testing humidity sensors located at various points in the system.

[8]   For example, if the sensor you are testing is a 4-20 mA sensor with a range of 0-100°F, then set the system up so that 4 mA indicates 0°F and 20 mA indicates 100°F.  You may discover that the system is already set up to do this.  However, it is not uncommon to adjust the software scaling and calibration factors to tweak a sensor reading and compensate for its certified accuracy limitations so that it reads closer to the true temperature.  In fact, this is exactly what we will do later in the procedure to calibrate the sensors relative to each other.  But at this stage of the test, it is important that the system be set up to the sensor standards to allow a baseline to be established and to evaluate whether everything is reading within its certified accuracy specification.
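The standard scaling described in this footnote amounts to a simple linear conversion; the sketch below is illustrative Python for the 4-20 mA, 0-100°F example.

# Sketch of the standard 4-20 mA scaling described above: 4 mA corresponds to the
# bottom of the span (0°F) and 20 mA to the top (100°F).

def ma_to_temp(current_ma: float, span_lo: float = 0.0, span_hi: float = 100.0) -> float:
    """Convert a 4-20 mA transmitter signal to the engineering value it represents."""
    return span_lo + (current_ma - 4.0) / 16.0 * (span_hi - span_lo)

print(ma_to_temp(4.0))     # 0.0
print(ma_to_temp(12.0))    # 50.0
print(ma_to_temp(20.0))    # 100.0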

[9] The more positively the potential for heat transfer is eliminated, the better.  For instance, forcing the control valve to a hot water coil closed is the bare minimum requirement.  Forcing the valve closed and manually closing both service valves is better.  Shutting down the hot water system is best.  It is also important to deal with passive elements like preheat coils with face and bypass dampers and humidifier manifolds to eliminate their impact.  Even if a preheat coil is in full bypass, there can still be heat transfer via radiation and leakage from the heating element, so it needs to be shut down for the test.  Even if a humidifier is not active, its jacket heating system is, and it will add several degrees of temperature rise to the system, so it also needs to be shut down for the test.