Research in the last decade on supervised land-cover classification has emphasized new distribution-free algorithms as high-performance alternatives to traditional classifiers. Such classifiers include decision trees, neural networks, nearest-neighbor, and support vector machine algorithms. Distribution-free algorithms operate on the spectral frontiers between land-cover classes, a marked improvement over conventional parametric classifiers, which rely on statistics of central tendency. A number of comparisons between distribution-free and parametric methods have been made, and they have generally favored the distribution-free techniques. Ince (1987) and Hardin and Thomson (1992) showed that nearest-neighbor classifiers were superior to parametric classifiers. Hansen et al. (1996) and Friedl and Brodley (1997) found comparable performance between a classification tree approach and a maximum likelihood one. Key et al. (1989), Bischof et al. (1992), and Gopal et al. (1999) tested the maximum likelihood classifier against neural network classifiers and found that the neural networks provided accuracies similar or superior to those of the maximum likelihood classifier. Likewise, support vector machines have been compared to the maximum likelihood classifier and found to yield higher accuracies (Huang et al., 2002). Support vector machines, in turn, have been found to outperform decision trees and neural networks (Huang et al., 2002). However, variables such as the number of features, model parameter selection, and the number of training samples can affect the relative performance of distribution-free classifiers (Pal and Mather, 2003).
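To make the parametric versus distribution-free contrast concrete, the following sketch compares a Gaussian maximum likelihood classifier, which summarizes each class by its mean vector and covariance matrix, with a 1-nearest-neighbor rule, whose decisions are driven by the training samples closest to the class frontier. The two-band "spectral" data, class layout, and all numeric values are invented for illustration; this is a minimal NumPy sketch, not a reproduction of any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-band reflectance data for two land-cover classes.
# Class B is deliberately bimodal (two sub-clusters), so single-Gaussian
# central-tendency statistics describe it poorly, while the nearest-neighbor
# rule adapts directly to the true class frontier.
n = 200
a = rng.normal([0.30, 0.40], 0.05, size=(n, 2))
b = np.vstack([rng.normal([0.45, 0.40], 0.04, size=(n // 2, 2)),
               rng.normal([0.30, 0.60], 0.04, size=(n // 2, 2))])
X = np.vstack([a, b])
y = np.array([0] * n + [1] * n)

# Random train/test split.
idx = rng.permutation(2 * n)
tr, te = idx[:300], idx[300:]

def gaussian_ml_fit(X, y):
    """Per-class mean and covariance: the 'central tendency' statistics."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def gaussian_ml_predict(params, X):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    scores = []
    for c, (mu, cov) in sorted(params.items()):
        inv = np.linalg.inv(cov)
        d = X - mu
        logdet = np.log(np.linalg.det(cov))
        # Log-likelihood up to an additive constant: -0.5 * (Mahalanobis^2 + log|cov|)
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.argmax(np.stack(scores), axis=0)

def nn_predict(Xtr, ytr, X):
    """1-nearest-neighbor: label each pixel by its closest training sample."""
    dists = ((X[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    return ytr[np.argmin(dists, axis=1)]

params = gaussian_ml_fit(X[tr], y[tr])
acc_ml = (gaussian_ml_predict(params, X[te]) == y[te]).mean()
acc_nn = (nn_predict(X[tr], y[tr], X[te]) == y[te]).mean()
print(f"Gaussian maximum likelihood accuracy: {acc_ml:.2f}")
print(f"1-nearest-neighbor accuracy:          {acc_nn:.2f}")
```

As the cited comparisons suggest, the relative ranking of the two rules depends on how well the per-class Gaussian assumption holds and on the size of the training set, not on either classifier being uniformly better.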