Parametric Dimension Reduction by Preserving Local Structure

Chien-Hsun Lai, Ming-Feng Kuo, Yun-Hsuan Lien, Kuan-An Su, Yu-Shuen Wang

View presentation: 2022-10-19T21:39:00Z
Exemplar figure caption: We extend a well-known dimension reduction method, t-distributed stochastic neighbor embedding (t-SNE), from non-parametric to parametric by training neural networks. Our method achieves high embedding quality while enjoying generalization. In addition, our method is highly efficient, thanks to the mini-batch network training.

Prerecorded Talk

The live footage of the talk, including the Q&A, can be viewed on the session page for Visual Analytics, Decision Support, and Machine Learning.

Keywords

Computing methodologies—Dimensionality reduction and manifold learning; Human-centered computing—Visualization toolkit

Abstract

We extend a well-known dimension reduction method, t-distributed stochastic neighbor embedding (t-SNE), from non-parametric to parametric by training neural networks. The main advantage of a parametric technique is its ability to generalize to new data, which is beneficial for streaming data visualization. While previous parametric methods either require network pre-training by a restricted Boltzmann machine or intermediate results obtained from the traditional non-parametric t-SNE, we found that recent network training techniques enable direct optimization of the t-SNE objective function. Accordingly, our method achieves high embedding quality while enjoying generalization. Thanks to mini-batch network training, our parametric dimension reduction method is also highly efficient. For evaluation, we compared our method to several baselines on a variety of datasets. Experimental results demonstrate the feasibility of our method. The source code is available at https://github.com/a07458666/parametric_dr.
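To make the idea concrete, the sketch below optimizes the t-SNE objective (the KL divergence between high-dimensional Gaussian affinities P and low-dimensional Student-t affinities Q) directly through a parametric map, as the abstract describes. This is an illustrative NumPy sketch, not the paper's implementation: it substitutes a single linear layer for the paper's neural network, uses a fixed Gaussian bandwidth instead of a perplexity search, and trains full-batch with a backtracking step size rather than the paper's mini-batch scheme. All names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs in 10-D (hypothetical stand-in).
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(5, 1, (20, 10))])

def affinities_high(X, sigma=2.0):
    # Gaussian joint probabilities P; real t-SNE tunes sigma per point
    # via a perplexity search, fixed here for brevity.
    D = np.square(X[:, None] - X[None]).sum(-1)
    P = np.exp(-D / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return np.maximum(P / P.sum(), 1e-12)

def affinities_low(Y):
    # Student-t (one degree of freedom) joint probabilities Q, as in t-SNE.
    D = np.square(Y[:, None] - Y[None]).sum(-1)
    num = 1.0 / (1.0 + D)
    np.fill_diagonal(num, 0.0)
    return np.maximum(num / num.sum(), 1e-12), num

def kl(P, Q):
    # The t-SNE objective: KL(P || Q).
    return float((P * np.log(P / Q)).sum())

P = affinities_high(X)

# "Network": one linear layer mapping 10-D to 2-D. The paper trains a deeper
# network on the same objective; a linear map keeps this sketch short.
W = rng.normal(0.0, 1e-3, (10, 2))
Y = X @ W
Q, num = affinities_low(Y)
loss0 = loss = kl(P, Q)

lr = 1.0
for step in range(200):
    # Standard t-SNE gradient with respect to the embedding Y ...
    G = 4.0 * (P - Q) * num
    dY = G.sum(1)[:, None] * Y - G @ Y
    # ... chained through the linear map to get the parameter gradient.
    dW = X.T @ dY
    # Backtracking step size so the objective never increases.
    while lr > 1e-12:
        W_try = W - lr * dW
        Y_try = X @ W_try
        Q_try, num_try = affinities_low(Y_try)
        loss_try = kl(P, Q_try)
        if loss_try < loss:
            W, Y, Q, num, loss = W_try, Y_try, Q_try, num_try, loss_try
            lr *= 1.2  # cautiously grow the step again
            break
        lr *= 0.5

print(loss < loss0)  # the KL objective decreased
```

Because the map is parametric, new points embed with a single forward pass (`x_new @ W` here), which is the generalization property the abstract highlights; the non-parametric t-SNE would instead have to re-optimize the whole embedding.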