ABSTRACT

Any discussion of reversibility inevitably touches on another important concept, namely entropy. The entropy of a system is often loosely defined as a measure of “disorder,” but such a definition is imprecise because disorder is a subjective notion. In fact, in a closed dynamical system, any increasing function (or, more generally, any non-decreasing function that “eventually” increases) of the system’s variables can be defined as an entropy function of that system. Functions that never increase (i.e., that stay constant, since by assumption they also never decrease) are not interesting entropy functions, especially if at least one other entropy function can be found that exhibits some increase. To identify or define an entropy function of a system, the system is viewed as a composition of two or more subsystems. The values of the variables describing the enclosed subsystems are called microstates, and the aggregated states of the enclosing system are called macrostates. An entropy function of the system is then determined from the possible mappings from macrostates to microstates as the system evolves. When the system moves from an “old” macrostate to a “new” macrostate, an ambiguity may arise about which specific microstates underlying the old macrostate gave rise to the new one. In general, if nothing in the system can be used to resolve this ambiguity, the system accumulates it during its evolution: the path actually taken becomes increasingly uncertain. Any function that captures this growth of ambiguity about the system’s evolution path qualifies as an entropy function of the system [Gottwald and Oliver, 2009].
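
The following is a minimal sketch of this mechanism, not taken from the source: the eight-microstate space, the update rule `f`, and the coarse-graining `g` are assumptions chosen for concreteness. Because `f` is many-to-one, distinct micro-histories merge, and the logarithm of the number of micro-histories still consistent with the system’s current macrostate plays the role of an entropy function; in this toy system that count never decreases.

```python
import math

# Toy closed system (assumed for illustration): 8 microstates, a
# deterministic but many-to-one micro-dynamics f, and a coarse-graining g
# that aggregates microstates into 2 macrostates.
N = 8

def f(x):
    """Micro-level update rule; many-to-one, so it erases information."""
    return (2 * x) % N

def g(x):
    """Coarse-graining: macrostate 0 = lower half, 1 = upper half."""
    return x // (N // 2)

def consistent_histories(macro, t):
    """Count length-t micro-histories (x_0, ..., x_t) under f whose final
    microstate lies in the observed macrostate `macro`."""
    # counts[y] = number of length-t histories ending at microstate y;
    # at t = 0 every microstate is the endpoint of exactly one history.
    counts = {y: 1 for y in range(N)}
    for _ in range(t):
        nxt = {y: 0 for y in range(N)}
        for y, c in counts.items():
            nxt[f(y)] += c          # histories merge here: ambiguity grows
        counts = nxt
    return sum(c for y, c in counts.items() if g(y) == macro)

# Follow one realized trajectory. The system "remembers" only its current
# macrostate, so nothing in it can single out which past actually occurred.
x = 5
for t in range(6):
    n = consistent_histories(g(x), t)
    print(f"t={t}  macrostate={g(x)}  consistent pasts={n}  "
          f"entropy=log2(n)={math.log2(n):.2f}")
    x = f(x)
```

Running the sketch shows the entropy-like quantity rising from 2.00 to 3.00 and then saturating: once the merging dynamics have funneled every microstate into the same macrostate, all eight possible pasts are consistent with what the system retains, which is exactly the accumulation of path ambiguity described above.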