ABSTRACT

Gases have been used as reactor coolants since the earliest days of nuclear power. Early reactors, such as the Windscale piles in the UK, intended for the production of weapons-grade plutonium, used atmospheric air as the coolant in an open cycle. The air was at atmospheric pressure, and the flow rate was sufficiently large to limit the temperature rise and thus the rate of oxidation of the core graphite. This meant that the main blowers had to pump a large volume of low-density air, with the consequent high power demand. For a power reactor the coolant temperature rise has to be greater to achieve a reasonable thermal efficiency, and the pumping power has to be minimised. The pumping power is minimised by increasing the density of the gas, which is achieved by containing it in a pressurised closed circuit. However, there are two main problems with using high-pressure, high-temperature air as a (thermal) reactor coolant. First, nitrogen is a strong neutron absorber, and neutron capture in nitrogen produces radioactive carbon-14. Second, a gas-cooled thermal reactor contains a graphite moderator, which would oxidise rapidly at high temperature. Similarly, metallic structures within the reactor would be oxidised more rapidly as the coolant temperature is increased.
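As an illustrative aside (not stated in the abstract, and assuming a fixed circuit geometry with flow area $A$, a mass flow rate $\dot{m}$ fixed by the thermal power $Q = \dot{m} c_p \Delta T$, and turbulent frictional losses), the density argument can be sketched as

\[
\Delta p \;\propto\; \rho u^{2} \;=\; \frac{\dot{m}^{2}}{\rho A^{2}},
\qquad
P_{\text{pump}} \;=\; \frac{\dot{m}\,\Delta p}{\rho\,\eta}
\;\propto\; \frac{\dot{m}^{3}}{\rho^{2} A^{2}\,\eta},
\]

so increasing the gas density $\rho$ by pressurising the closed circuit (for an ideal gas, $\rho = pM/(RT)$) reduces the blower power roughly as $1/\rho^{2}$.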