ABSTRACT

Regardless of a system's degree of automation, it is humans who are responsible for its functioning. The basic tenet of human-centred automation - that human operators need to be in control of technical systems - is derived from this responsibility. In view of human fallibility in general, as well as the loss of control evidenced in accidents in particular, doubts are expressed about how achievable the goal of human control over technology really is. Reasons for this lack of controllability can be found in the normative assumptions of those developing and implementing technology, which may support a self-fulfilling prophecy of turning humans into risk factors. However, the ever-increasing complexity of systems also has to be acknowledged as a limiting factor. It is therefore suggested that system design be founded on the premise of partial non-controllability of technology. This approach could help human operators deal better with system opaqueness and uncertainty by providing systematic information on the limits of control, thereby also relieving them of some of their responsibility. At the same time, it would force system designers, the organizations operating the systems, and regulatory institutions to take on responsibility for the use of technical systems whose complexity can no longer be mastered entirely. Consequences for regulators' and companies' decisions on system automation are discussed within the larger realm of establishing a new politics of uncertainty at the societal level, based on deliberately giving up the pretence of always being in control.