ABSTRACT

The ECAT II, one of the very first commercial PET systems, had a simple design: its 66 detectors were arranged in a single partial ring to measure coincidence events along 363 lines of response (Phelps et al. 1978). For image reconstruction, it came equipped with a DEC PDP-11/45, a computer capable of executing 189,000 instructions per second with only 32 KB of memory. Thirty-five years later, at the time of this writing, PET systems have changed tremendously. They comprise tens of thousands of detectors spanning multiple rings, and their image reconstruction uses iterative algorithms that accurately model the coincidence detection process, including scatter, attenuation, resolution blurring, and time of flight. With these advances, the computing requirements for processing and reconstruction have exploded. A state-of-the-art PET system such as the Philips Gemini TF comprises 28,336 detector elements and performs reconstruction entirely in list mode to preserve the full spatial and temporal resolution of the measurements (Surti et al. 2007). To keep reconstruction times practical, the system comes equipped with a cluster of four computers, each with two six-core CPUs, for a total peak performance of 640.8 billion floating-point operations per second (FLOPS) (Table 10.1). Stunningly, this amounts to a roughly 3-million-fold increase in the computing power applied to PET over a span of 30 years, an average growth of about 65% per year.
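
As a back-of-the-envelope check of the growth figure, treating one instruction per second on the PDP-11/45 as roughly comparable to one floating-point operation per second (as the comparison above implicitly does):

\[
\frac{640.8 \times 10^{9}\ \text{FLOPS}}{1.89 \times 10^{5}\ \text{IPS}} \approx 3.4 \times 10^{6},
\qquad
\left(3.4 \times 10^{6}\right)^{1/30} \approx 1.65,
\]

that is, a roughly 3-million-fold increase over the 30 years separating the two systems, compounding at about 65% per year.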