ABSTRACT

The classical approach to error analysis in any branch of experimental physics usually deals with the “direct measurement” of a quantity, a temperature, for example. It concerns the sensor and the instrument connected to it, with absolute/relative errors provided by the manufacturer. However, other sources of error exist that are not related to the instrumentation: they depend on the model used to convert the sensor output into the desired quantity. For example, a thermocouple provides the temperature of its hot junction, which is not always that of the material it is embedded in. Things become more involved when a heat transfer coefficient h has to be calculated from temperature and heat flow rate measurements in one or several experiments. This corresponds to the “indirect measurement” of a quantity. The notions of absolute/relative errors are revisited here on a statistical basis, using least squares estimation. Calculation of the variance–covariance matrix of the quantities to be estimated, based on the sensitivities of the modeled output to them, yields their standard deviations, even when these estimates are correlated. This statistical analysis is implemented on very simple models, and the different causes of error in indirect measurements are recalled.
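
As a minimal sketch of the variance–covariance calculation mentioned above, and under the standard assumption (not stated in the abstract itself) of independent, identically distributed measurement noise of standard deviation $\sigma$, the ordinary least squares result can be written in terms of the sensitivity matrix $S$ of the modeled output $y_{\mathrm{mod}}$ to the parameters $\boldsymbol{\beta}$ (here $h$ and any companion quantities):

\[
\operatorname{cov}(\hat{\boldsymbol{\beta}}) \;\approx\; \sigma^{2}\,\bigl(S^{\mathsf T} S\bigr)^{-1},
\qquad
S_{ij} \;=\; \frac{\partial y_{\mathrm{mod}}(t_i;\boldsymbol{\beta})}{\partial \beta_j}.
\]

The standard deviation of each estimated parameter is the square root of the corresponding diagonal term of $\operatorname{cov}(\hat{\boldsymbol{\beta}})$, while the off-diagonal terms quantify the correlation between estimates.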