EE Seminar: Nonparametric Canonical Correlation Analysis

(The talk will be given in English)

 

Speaker:     Dr. Tomer Michaeli
             EE, Technion, Haifa

 

Monday, December 5th, 2016
15:00 - 16:00

Room 011, Kitot Bldg., Faculty of Engineering

Nonparametric Canonical Correlation Analysis

Abstract

Canonical correlation analysis (CCA) is a classical representation-learning technique for finding correlated variables in multi-view data. The tool has found widespread use in various fields, including recent applications to natural language processing, speech recognition, genomics, and cross-modal retrieval. A key shortcoming of CCA is its restriction to linear mappings, whereas many real-world multi-view datasets exhibit highly nonlinear relationships. In recent years, several nonlinear extensions of the original linear CCA have been proposed, including kernel and deep-neural-network methods. These approaches significantly improve upon linear CCA in many practical applications, but they have three limitations. First, they are still restricted to families of projection functions that the user must specify (by choosing a kernel or a neural network architecture). Second, they are computationally demanding. Third, they often produce redundant representations.
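For concreteness, here is a minimal sketch of classical linear CCA via the SVD of the whitened cross-covariance matrix. This is the standard textbook formulation, not code from the talk; the regularization constant and variable names are illustrative.

import numpy as np

def linear_cca(X, Y, k, reg=1e-6):
    """Top-k canonical directions for paired views X (n x dx), Y (n x dy)."""
    n = X.shape[0]
    X = X - X.mean(axis=0)                        # center each view
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariance, view 1
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])  # regularized covariance, view 2
    Cxy = X.T @ Y / n                             # cross-covariance

    def inv_sqrt(C):                              # C^{-1/2} via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    # Singular values of the whitened cross-covariance are the canonical
    # correlations; the singular vectors give the linear projections.
    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(T)
    A = inv_sqrt(Cxx) @ U[:, :k]                  # projection for view 1
    B = inv_sqrt(Cyy) @ Vt[:k].T                  # projection for view 2
    return A, B, s[:k]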
In this work, we derive a closed form solution to the nonlinear CCA problem without any functional restrictions. As we show, the solution corresponds to the SVD of a certain operator associated with the joint density of the views. Thus, by estimating the population density from training data, we obtain a practical nonparametric CCA (NCCA) algorithm, which reduces to solving an eigenvalue system. Superficially, this is similar to kernel CCA, but importantly, NCCA does not require the inversion of any kernel matrix. We also derive a partially linear CCA (PLCCA) variant in which one of the views undergoes a linear projection while the other is nonparametric. Finally, we show why spectral representation learning methods may produce redundant representations, and propose a generic technique to prevent redundancy in those algorithms.
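The following rough sketch illustrates the flavor of such a density-based spectral approach, assuming Gaussian kernel density estimates of the joint and marginal densities. Bandwidth selection, normalization details, and the efficiency techniques of the actual NCCA algorithm are omitted, so this is an illustration under simplified assumptions rather than the authors' implementation.

import numpy as np
from scipy.stats import gaussian_kde

def ncca_sketch(X, Y, k):
    """Illustrative nonparametric CCA on paired samples X (n x dx), Y (n x dy)."""
    n = X.shape[0]
    px = gaussian_kde(X.T)                     # KDE of the marginal p(x)
    py = gaussian_kde(Y.T)                     # KDE of the marginal p(y)
    pxy = gaussian_kde(np.vstack([X.T, Y.T]))  # KDE of the joint p(x, y)

    px_vals = px(X.T)                          # p(x_i) at all training points
    py_vals = py(Y.T)                          # p(y_j) at all training points

    # Density-ratio matrix S_ij = p(x_i, y_j) / (p(x_i) p(y_j)),
    # a sample discretization of the operator whose SVD solves the problem.
    S = np.empty((n, n))
    for j in range(n):
        pairs = np.vstack([X.T, np.tile(Y[j:j + 1].T, (1, n))])
        S[:, j] = pxy(pairs) / (px_vals * py_vals[j])

    U, s, Vt = np.linalg.svd(S / n)
    # The leading singular pair corresponds to the trivial constant
    # functions; the next k components embed the training points.
    F = np.sqrt(n) * U[:, 1:k + 1]             # nonlinear features f(x_i)
    G = np.sqrt(n) * Vt[1:k + 1].T             # nonlinear features g(y_j)
    return F, G, s[1:k + 1]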
As we demonstrate on several test cases, our algorithms are memory-efficient, often run much faster than kernel CCA, perform better than it, and are comparable to deep CCA.

This is joint work with Weiran Wang, Karen Livescu and Yochai Blau.
