New Optimization Techniques for Sparse Approximations
A variety of problems in modern applications of optimization require the selection of a ‘sparse’ solution, that is, a vector with as few nonzero entries as possible. Such problems arise in very different application areas, including computational statistics, signal and image processing, compressed sensing, finance and machine learning, to mention just a few. Sparse approximation problems are often solved with dedicated and highly specialised first-order optimization methods.
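A representative instance of such a problem (an illustrative assumption; the talk itself does not commit to a single formulation) is the \ell_1-regularized least-squares problem

\[ \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \tau \|x\|_1, \]

where A is the measurement or design matrix, b the observed data, and \tau > 0 a regularization parameter controlling the sparsity of the solution x.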
In this talk I will argue that these problems may be solved very efficiently by more reliable optimization techniques which exploit (inexact) second-order information, provided that this is combined with appropriately chosen iterative techniques of linear algebra, such as, for example, methods from the Krylov-subspace family. Two particular classes of methods, the Newton Conjugate Gradient method and the Interior Point Method, will be interpreted as suitable homotopy-type approaches and will be applied to solve problems arising from compressed sensing, multi-period portfolio optimization, classification of data coming from functional Magnetic Resonance Imaging, restoration of images corrupted by Poisson noise, and classification via regularized logistic regression. In all these cases, the performance of the proposed methods will be compared against competitive first-order methods. Computational evidence will be provided to demonstrate that specialized second-order methods compare favourably with, and often outperform, the cutting-edge first-order methods.
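To illustrate the flavour of such a method, the sketch below (a minimal illustration, not the speaker's actual implementation) applies an inexact Newton Conjugate Gradient iteration to a smoothed \ell_1-regularized least-squares problem: each Newton system is solved only approximately by the Krylov-subspace (CG) method, which accesses the Hessian exclusively through matrix-vector products. The function name, the smoothing parameter eps and all solver settings are illustrative assumptions.

    # Minimal sketch (assumed example, not the speaker's code): inexact Newton-CG
    # for min_x 0.5*||A x - b||^2 + tau * sum_i sqrt(x_i^2 + eps^2),
    # a smooth surrogate of the l1-regularized least-squares problem.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def newton_cg_sparse_ls(A, b, tau=0.1, eps=1e-3, max_newton=50):
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(max_newton):
            r = A @ x - b
            w = np.sqrt(x**2 + eps**2)
            grad = A.T @ r + tau * x / w        # gradient of the smoothed objective
            if np.linalg.norm(grad) < 1e-6:
                break
            d2 = tau * eps**2 / w**3            # diagonal Hessian of the smoothed penalty
            # Hessian-vector product A^T A v + D v, wrapped as a LinearOperator so the
            # CG solver never needs the Hessian matrix explicitly.
            H = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + d2 * v)
            # Inexact Newton step: solve H p = -grad only approximately (capped CG iterations).
            p, _ = cg(H, -grad, maxiter=20)
            # Simple backtracking (Armijo) line search on the smoothed objective.
            f = 0.5 * r @ r + tau * np.sum(w)
            t = 1.0
            while True:
                x_new = x + t * p
                r_new = A @ x_new - b
                f_new = 0.5 * r_new @ r_new + tau * np.sum(np.sqrt(x_new**2 + eps**2))
                if f_new <= f + 1e-4 * t * (grad @ p) or t < 1e-8:
                    break
                t *= 0.5
            x = x_new
        return x

Gradually driving the regularization or smoothing parameter towards its target value across such iterations is what would give a scheme of this kind the homotopy-type interpretation referred to above.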