ABSTRACT

Optimization methods have always been a powerful tool for solving large-scale network problems. For large-scale networks, many of the associated problems share a common feature: the decision variable is so high-dimensional that standard methods based on full gradient computations become prohibitively expensive. In this setting, an appropriate way to approach these problems is through coordinate descent methods. Although coordinate descent methods were among the first optimization algorithms suggested for solving unconstrained minimization problems, until recently they had not received much attention. A detailed exposition of coordinate descent methods, together with their variants, extensions, and convergence properties, has also been given recently in a survey paper. This chapter provides strong arguments in favor of coordinate gradient descent methods, such as the simplicity of each iteration and the existence of a general convergence theory, both of which raise difficulties when coordinate descent methods based on exact minimization are used.
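To make the contrast with full-gradient methods concrete, the following is a minimal sketch of a randomized coordinate gradient descent iteration: each step evaluates a single partial derivative and updates one coordinate, instead of computing the whole gradient. The quadratic objective, the function name, and the step sizes are illustrative assumptions and are not taken from the chapter.

```python
# A minimal sketch of randomized coordinate gradient descent for the
# quadratic f(x) = 0.5 * x^T A x - b^T x (an illustrative choice, not
# the chapter's specific network problem).
import numpy as np

def coordinate_gradient_descent(A, b, num_iters=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = b.shape[0]
    x = np.zeros(n)
    # Per-coordinate Lipschitz constants of the partial derivatives:
    # for this quadratic, L_i = A[i, i].
    L = np.diag(A).copy()
    for _ in range(num_iters):
        i = rng.integers(n)                  # pick one coordinate at random
        grad_i = A[i, :] @ x - b[i]          # partial derivative only
        x[i] -= grad_i / L[i]                # cheap single-coordinate update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 50))
    A = M.T @ M + np.eye(50)                 # symmetric positive definite
    b = rng.standard_normal(50)
    x = coordinate_gradient_descent(A, b)
    print(np.linalg.norm(A @ x - b))         # residual should be small
```

Each iteration touches only one row of A, which is what makes such schemes attractive when the decision variable is too large for full gradient computations.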