If you wish to obtain an impression of the distribution of, say, an estimator without relying on too many assumptions, you should repeat the estimation with different independent samples from the underlying distribution. Unfortunately, in practice most of the time only one sample is available, so we have to look for other solutions. New relevant data can only be generated by means of new experiments, which are often impossible to conduct in due time, or by assuming a distribution (see Chapter 6 for random number generation). If we have no indication of which distribution is adequate, we should beware of assuming just any one, e.g. the normal distribution. So what should we do? As a solution to this dilemma, resampling methods have been developed since the late 1960s. The idea is to sample repeatedly from the one original sample we have available. These repetitions are then used to estimate the distribution of the estimator under consideration. This way, we can at least be sure that the values in each resample can be realized by the data-generating process. In this chapter we will study how to best select repetitions from the original sample. After discussing various such methods, the ideas are applied to three kinds of applications: model selection, feature selection, and hyperparameter tuning.
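The core idea can be sketched in a few lines of code. The following is a minimal illustration (the function names and the toy data are my own, not from the text): we repeatedly draw resamples with replacement from the one available sample, recompute the estimator on each resample, and use the resulting values to approximate the estimator's distribution.

```python
# Minimal resampling sketch: approximate the sampling distribution
# of an estimator by resampling with replacement from one sample.
import random
import statistics

def resample_estimates(sample, estimator, n_rep=1000, seed=42):
    """Draw n_rep resamples (same size as `sample`, with replacement)
    and return the estimator value computed on each resample."""
    rng = random.Random(seed)
    n = len(sample)
    return [estimator([rng.choice(sample) for _ in range(n)])
            for _ in range(n_rep)]

# Example: only one sample of size 10 is observed; we still obtain
# an impression of the variability of the sample median.
sample = [2.1, 3.4, 1.9, 5.0, 4.2, 3.3, 2.8, 4.7, 3.9, 2.5]
reps = resample_estimates(sample, statistics.median)
print(statistics.mean(reps), statistics.stdev(reps))
```

Note that every resampled value comes from the original sample, so the procedure never produces values the data-generating process has not already realized; the spread of `reps` serves as an estimate of the estimator's variability.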