Published: Oct. 6, 2015

Learning distance preserving embeddings with application to rank and sparsity constrained metric learning

Bubacarr Bah

The Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin

Date and time: 

Tuesday, October 6, 2015 - 11:00am

Location: 

GRVW 105

Abstract: 

We propose a dimensionality reducing matrix design based on training data, with constraints on the matrix's Frobenius norm and number of rows. Our design criterion aims to preserve the distances between the data points in the dimensionality reduced space as much as possible, relative to their distances in the original data space. This approach can be considered a deterministic Bi-Lipschitz embedding of the data points. We introduce a scalable learning algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by leveraging game theoretic tools. We also provide a generalization characterization of our matrix based on the sample data. We use compressive sensing problems as an example application, where the Frobenius norm design constraint translates into the sensing energy.
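As a rough illustration of the design goal (this is not the AMUSE algorithm itself; the matrix here is a simple random stand-in and the energy budget is hypothetical), one can measure how well a dimensionality-reducing matrix with a fixed Frobenius norm preserves pairwise distances, i.e. its empirical Bi-Lipschitz constants on a point set:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 100, 20          # number of points, ambient dim, reduced dim
X = rng.standard_normal((n, d))

# A simple random design, rescaled so its Frobenius norm meets a budget
# (standing in for the "sensing energy" constraint in the abstract).
energy = np.sqrt(m)            # hypothetical Frobenius-norm budget
A = rng.standard_normal((m, d))
A *= energy / np.linalg.norm(A, "fro")

# Distortion ratios ||A(x_i - x_j)|| / ||x_i - x_j|| over all pairs;
# their min and max are the empirical Bi-Lipschitz constants of A on X.
ratios = []
for i in range(n):
    for j in range(i + 1, n):
        diff = X[i] - X[j]
        ratios.append(np.linalg.norm(A @ diff) / np.linalg.norm(diff))
ratios = np.array(ratios)
print(f"empirical Bi-Lipschitz bounds: [{ratios.min():.3f}, {ratios.max():.3f}]")
```

A data-dependent design would choose A to pull these two bounds as close together as possible, rather than leaving them to chance as a random projection does.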

We extend the above results to rank and sparsity constrained metric learning. Choosing a distance preserving measure or metric is fundamental to many signal processing algorithms, such as k-means, nearest neighbor search, hashing, and compressive sensing. In virtually all these applications, the efficiency of the signal processing algorithm depends on how fast we can evaluate the learned metric. Moreover, storing the chosen metric can create space bottlenecks in high dimensional signal processing problems. As a result, we consider data dependent metric learning with both rank and sparsity constraints. We propose a new non-convex algorithm and empirically demonstrate its performance on various datasets; as a side benefit, it is also much faster than existing approaches. The added sparsity constraints significantly improve the speed of multiplying with the learned metrics without sacrificing their quality.
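The storage and speed argument can be sketched in a few lines (the shapes and density below are hypothetical, and this is not the paper's learning algorithm, only the evaluation step it is optimizing for): a learned metric M = LᵀL with a sparse, low-rank factor L can be stored and applied without ever forming the dense d × d matrix M.

```python
import numpy as np
from scipy import sparse

d, r = 10_000, 20               # ambient dimension, rank constraint
rng = np.random.default_rng(1)

# Sparse r x d factor: only ~1% of entries are nonzero, so storing L
# takes ~nnz(L) numbers instead of the d^2 needed for a dense metric.
L = sparse.random(r, d, density=0.01, random_state=1, format="csr")

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Distance under the metric M = L.T @ L, evaluated as ||L(x - y)||:
# the multiplication cost scales with nnz(L), not with d^2.
dist = np.linalg.norm(L @ (x - y))
print(f"metric distance: {dist:.4f}, stored nonzeros: {L.nnz} vs d^2 = {d * d}")
```

This is why the rank constraint (small r) and the sparsity constraint (few nonzeros in L) translate directly into faster metric evaluations in the downstream algorithms named above.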

SPEAKER BIO: Dr. Bah is a Postdoctoral Fellow at the Institute for Computational Engineering and Sciences (ICES) and the Mathematics Department at the University of Texas at Austin, advised by Rachel Ward. Previously he was a postdoctoral fellow under Volkan Cevher at EPFL in Switzerland. He completed his PhD in applied and computational mathematics in 2012 under the supervision of Jared Tanner, his MSc at Oxford University, and his BSc in Mathematics and Physics at the University of The Gambia.

His research interests are compressed sensing, sparse approximation, random matrix theory, numerical analysis, mathematical signal processing, and machine learning. More at: https://www.ma.utexas.edu/users/bah/