ABSTRACT

From the very moment the British started a collaborative research programme in September 1959, as a means to the end of developing interferon as an antiviral drug, the issue of mastering differences had been on the agenda. Making comparisons between the results obtained on different occasions within any one of the collaborating laboratories was already quite demanding; far greater difficulties arose when attempts were made to compare results obtained in different laboratories. Again and again, interferon researchers reported considerable variations in experimental results from day to day, week to week, and from laboratory to laboratory. For the most part, this was thought to be due to the wide variety of methods and materials employed in the assay of interferon. It was generally believed that standardization would help ensure the reliability and credibility of the products of their research work, both locally and transnationally.

The need for standardization was reinforced by the involvement of the pharmaceutical industry, which had practical as well as strategic interests in preventing the proliferation of different arbitrary units for expressing the potency of one and the same drug. Performing clinical trials meant passing judgements on the actions of experimental substances within certain limits related to human health. Owing to the high level of public responsibility involved in experiments with humans, there was special concern for quantitative control and rigour with regard to issues such as effectiveness, toxicity and stability. Establishing and using standards was considered a necessary operation to give authority to trial data, which could be appealed to in the future.