Chapter 4

Modeling Network Reliability with Graphs

Technical systems such as communication networks, power grids, traffic systems, or command and control systems have a network-like structure. Let us consider a computer network as one typical example. The topology of a computer network is characterized by two main components, namely computers (servers, routers, terminals) and transmission lines between computers (copper cables, fiber-optic lines, wireless channels). In a mathematical model, a so-called graph, these two types of network components are represented by vertices (or nodes) and edges. One way to specify a network topology, or a graph, is to give a list of vertices and edges together with an incidence function that indicates the end vertices of each edge. A more convenient way to visualize the structure of a network, however, is to draw the corresponding graph: we represent vertices by small circles and edges as lines connecting the two circles that correspond to their end vertices. Figure 4.1 shows an example.

Figure 4.1: A graph

Figure 4.2: A graph with directed edges, multiple edges, and loops

A graph may have directed edges that can be traversed in only one direction
or multiple edges connecting the same pair of vertices. A loop is an edge whose end vertices coincide. All these possibilities are illustrated in Figure 4.2. The next section of this chapter is devoted to a more precise foundation of the graph-theoretic tools that are indispensable in network reliability analysis.

Besides its topology, a network is characterized by a great diversity of parameters assigned to edges or vertices. An edge may have a length, a capacity (a data transfer rate), costs, and an availability (reliability, i.e. the probability of operating properly during a given period of time). A vertex might have geographic coordinates (a location), costs, and a reliability. The network itself has been designed to support a communication process. The description of this communication process leads to a new class of parameters that describe dynamic properties such as packet delay, throughput, or packet loss. An even closer look at the inner structure of the edges and vertices of the network is required if we have to evaluate the network performance or the quality of service. In this case, traffic parameters, protocol properties, and server characteristics have to be incorporated into network analysis. Here we restrict our attention to questions of network reliability.

The first basic observation in network reliability analysis is that
vertices and edges of a network are subject to random failure. Failures of vertices and edges may be caused by natural disasters such as thunderstorms, by human errors, or by technical defects, for instance as a result of overstressing. We assume that the failure distributions of the components (vertices and edges) are known. In general, we presume only knowledge of the probability that a component operates within a given period of time. These data may be derived from statistical observations, stress tests, or manufacturer quality certificates.

Usually, reliability is defined as the probability that a component or system is operating within a certain period of time. When do we consider a network to be operating? The minimum requirement for successful communication from one vertex in a network to another is the existence of a path of operating edges and vertices connecting the two terminal vertices in question. Thus the first question in network reliability analysis is the determination of the probability that there exists at least one operating path between two given terminal vertices. More generally, we may select a set of terminal vertices of a graph and ask for the probability that all these terminal vertices are connected by operating paths. In the following sections we will introduce further generalizations and alternative reliability measures.

Once we have introduced a suitable reliability measure, the question of
determining the reliability emerges naturally. Unfortunately, most of the reliability problems considered here turn out to be computationally intractable, i.e. (loosely speaking) only algorithms are known whose computation time grows exponentially with the size of the network. Consequently, in order to analyze large networks we are interested in fast approximation methods or in developing bounds for the desired reliability measure. Finally, for huge networks where all analytic methods fail, simulation may help. Often, even when the computation of the reliability is computationally hard, we can find interesting classes of networks that can be analyzed efficiently. One such class consists of so-called series-parallel networks, which can easily be reduced.

The paper on reliable relays by Moore and Shannon [227] can be considered the pioneering work in this field. An enormous increase in publications on network reliability started around 1980. We can mention here only some milestones: Satyanarayana and his colleagues [269], [272] developed factoring and reduction methods for network reliability calculations. Valiant [300] showed that many important reliability problems are #P-complete, i.e. computationally intractable. The first two monographs on network reliability, by Colbourn [92] and Shier [278], deal with combinatorial and algebraic aspects of network reliability, respectively.