ABSTRACT

The idea of using time as the basis for predicting human activity has its roots in the early 20th century, specifically in the “scientific management” of Frederick Taylor (although the practice of breaking work into constituent parts and timing those parts can be traced to the Industrial Revolution in the late 18th century). The basic aim of such approaches was to simplify work and then seek ways of making it as efficient as possible, i.e., to reduce the time taken for each task step and, as a consequence, the overall time for the activity. Such an approach is not without problems. For example, Taylor faced hearings before a special committee of the U.S. House of Representatives after workers rioted or went on strike in response to the imposition of his methods. At a more basic level, there is no clear evidence that there is “one best way” to perform a sequence of tasks, and people are often adept at employing several ways. Thus, while the timing of individual task steps can be seen as fairly straightforward, the combination of those steps into meaningful wholes is problematic.