ABSTRACT

This chapter reviews convex-optimization-based algorithms for tensor decomposition. There are several reasons to look into convex optimization for tensor decomposition. First, it allows us to prove worst-case performance guarantees. Although we might be able to give a performance guarantee for an estimator based on a non-convex optimization (see, e.g., [43]), the practical relevance of such a bound would be limited if we cannot obtain the optimum efficiently. Second, convex methods allow us to sidestep the tensor-rank-selection problem; in practice, misspecification of the tensor rank can significantly degrade performance, whereas choosing a continuous regularization parameter can be considered an easier task. Third, convex optimization allows us to use various efficient techniques developed in the mathematical programming community, such as proximity operators, the alternating direction method of multipliers (ADMM), and duality-gap monitoring, which enable us to apply these algorithms reliably in a variety of settings. The norms we propose can be used both for denoising of a fully observed noisy tensor and for reconstruction of a low-rank tensor from incomplete measurements. Of course, there are limitations to what we can achieve with convex optimization, which we discuss in Section 14.6. Nevertheless, we hope that the methods discussed here serve to connect tensor decomposition with statistics and (convex) optimization, fields that have been largely disconnected until recently, and contribute to a better understanding of the hardness and challenges of this area.
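As a schematic sketch of the two settings mentioned above (the symbols here are illustrative placeholders, not notation fixed by this abstract: \(\mathcal{Y}\) is a fully observed noisy tensor, \(\boldsymbol{y} = \mathfrak{X}(\mathcal{W}^{*}) + \boldsymbol{\epsilon}\) collects incomplete linear measurements under an operator \(\mathfrak{X}\), \(\lambda > 0\) is the regularization parameter, and \(\|\cdot\|_{\star}\) stands in for a convex low-rank-inducing tensor norm such as those proposed in the chapter), the corresponding estimators take the standard regularized forms
\[
\hat{\mathcal{W}} \in \operatorname*{argmin}_{\mathcal{W}} \; \tfrac{1}{2}\bigl\|\mathcal{Y} - \mathcal{W}\bigr\|_{F}^{2} + \lambda \|\mathcal{W}\|_{\star} \quad \text{(denoising)},
\]
\[
\hat{\mathcal{W}} \in \operatorname*{argmin}_{\mathcal{W}} \; \tfrac{1}{2}\bigl\|\boldsymbol{y} - \mathfrak{X}(\mathcal{W})\bigr\|_{2}^{2} + \lambda \|\mathcal{W}\|_{\star} \quad \text{(incomplete measurements)}.
\]
Both problems are convex whenever \(\|\cdot\|_{\star}\) is a norm, which is what makes the guarantees, the continuous tuning of \(\lambda\), and the ADMM-style solvers discussed above applicable.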