Simultaneously solving multiple related estimation tasks is known as multi-task learning in the machine learning literature. A flexible approach to multi-task learning is to solve a regularization problem in which a positive semidefinite multi-task kernel models the joint relationships between inputs and tasks. Since specifying an appropriate multi-task kernel in advance is not always possible, it is often desirable to estimate one from the data. In this chapter, we give an overview of a family of regularization techniques called output kernel learning (OKL) for learning a multi-task kernel that decomposes as the product of a kernel on the inputs and a kernel on the task indices. The kernel on the task indices is optimized jointly with the predictive function by solving a two-level regularization problem.
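To make the decomposition concrete, the following is a minimal sketch (not the chapter's method) of a separable multi-task kernel: the joint Gram matrix over all (input, task) pairs is the Kronecker product of an input Gram matrix with a task kernel matrix. The RBF input kernel, the identity task kernel, and all variable names are illustrative assumptions, not taken from the text.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    # Gaussian (RBF) Gram matrix between the rows of X (illustrative choice).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(X**2, axis=1)[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq)

def multitask_gram(X, L, gamma=1.0):
    # Separable multi-task kernel K((x, s), (x', t)) = k(x, x') * L[s, t],
    # realized as the Kronecker product of the input Gram matrix and the
    # task kernel matrix L (L must be positive semidefinite).
    Kx = rbf_gram(X, gamma)
    return np.kron(Kx, L)

# Toy usage: 3 inputs, 2 tasks; the identity task kernel treats tasks
# as independent, while off-diagonal entries in L would couple them.
X = np.array([[0.0], [1.0], [2.0]])
L = np.eye(2)
K = multitask_gram(X, L)
assert K.shape == (6, 6)
```

In OKL, the matrix `L` playing the role of the task kernel is not fixed in advance as above, but is itself estimated from data jointly with the predictive function.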