ABSTRACT

Let us start by looking back over the history of dependability. During the Cold War in the 1960s, as the Apollo program aimed at putting a man on the moon, the fault-tolerant computer was proposed as a means of supporting real-time computing and mission-critical applications. This marked the start of active discussion in the field of dependability [1, 2]. The subsequent increase in the scale of hardware and software and the growing popularity of online services led to the development of RAS (Reliability, Availability, and Serviceability). Focusing on system-error detection and recovery, this concept integrated resistance to failure (reliability), assurance of a high operating ratio (availability), and the ability to recover rapidly from failure (serviceability, or maintainability) [3, 4]. As computers came into ever wider business use in the latter half of the 1970s, two attributes were added to RAS: assurance of data consistency (integrity) and prevention of unauthorized access to confidential content (security). This extended concept, RASIS, has provided a standard for evaluating information systems. Around the turn of the century, the concept of autonomic computing was proposed as a means of achieving the highest possible level of self-sustained dependability in complex systems connected by networks; this approach took its inspiration from the human involuntary (autonomic) nervous system [5, 6, 7, 8] (Fig. 2-1).