ABSTRACT

The use of microorganisms with intent to cause harm or fear has a long history. Ancient Persian, Greek and Roman manuscripts describe the use of decaying bodies to contaminate water sources; the Tartars, laying siege to Caffa in 1346, catapulted the corpses of plague victims over the city walls; and during the French and Indian War (1754–63), British forces attempted to spread smallpox amongst American Indians by distributing blankets from a smallpox hospital.1,2

In the early twentieth century, interest in the hostile use of pathogens developed in parallel with a growing understanding of the effects of specific pathogens and with other technological advances, including the industrial-scale production of microorganisms. Although the Geneva Protocol of 1925 proscribed the use of biological weapons by signatory states, it did not prevent their possession or development, and many nations, including Canada, France, Great Britain, the United States and the former Soviet Union, invested in bioweapons research.3 These programmes, which were not without occupational risk (456 cases of laboratory-acquired infection occurred in workers at Fort Detrick, United States, between 1942 and 1969, including two fatal cases of anthrax and one fatal case of viral encephalitis4), had largely been terminated by the 1970s. The 1972 Biological Weapons Convention – which has over 140 nation-state signatories – prohibits the development, production or stockpiling of biological agents (or their toxins) in ‘quantities that have no justification for prophylactic, protective or other peaceful purposes’.3 However, in some states, notably the Russian Federation and Iraq, offensive bioweapons research continued until at least the early 1990s.5 In 1979, the accidental release and airborne spread of anthrax spores from a research facility in Sverdlovsk caused more than 60 deaths from inhalational anthrax in the surrounding community, and in 1990, immediately before the Gulf War, concern about a possible threat resulted in the vaccination of more than 150 000 coalition military personnel against anthrax.6,7

The revelation that Iraq had developed bioweapons, the deliberate release of sarin gas on the Tokyo subway by the Aum Shinrikyo sect in 1995, and fears that a sharp decline in financial support might have compromised biosecurity in laboratories in Russia and the former Soviet Union together provided the impetus for a detailed series of reviews examining the potential use of specific pathogens as bioweapons. These reviews encouraged policy-makers and planners to ensure that health and other civilian services were adequately prepared to deal with the threat of bioterrorism.8-11

In the last 50 years, there have been only five reported incidents involving the intentional release of pathogens. Four involved gastrointestinal or foodborne pathogens (Salmonella typhi, Salmonella enteritidis, Shigella dysenteriae and Ascaris suum). In the largest of these incidents, over 700 people developed symptoms after eating from salad bars in two restaurants in The Dalles, Oregon, in 1984; the source of the outbreak was not recognized until late in 1985, when it was discovered that followers of Bhagwan Shree Rajneesh had deliberately contaminated the salads with cultures of S. enteritidis.12-15 In 2001, however, the attacks on the World Trade Center and the Pentagon were followed, a month later, by an outbreak of anthrax caused by the deliberate dissemination of anthrax spores via the US Postal Service.16 These events stimulated a period of intense public health activity at international and national levels, focused initially on improving biosecurity and emergency preparedness for bioterrorism but now seeking to ensure that planning for bioterrorist threats is integrated, within a more rational, generic ‘all-hazards’ approach, with planning for infectious disease emergencies (including pandemic influenza), natural disasters and other public health emergencies.