Yen-Chen Lin

Research Scientist
Email: yenchenl [at] nvidia (dot) com

Twitter / Google Scholar / GitHub


I am a research scientist at NVIDIA. I am interested in generative AI.
Previously, I was a Ph.D. student at MIT working with Phillip Isola and Alberto Rodriguez.

Selected Publications

For a full list of my publications, please see here.
MIRA: Mental Imagery for Robotic Affordances
Lin Yen-Chen, Pete Florence, Andy Zeng, Jonathan T. Barron, Yilun Du, Wei-Chiu Ma, Anthony Simeonov, Alberto Rodriguez, Phillip Isola
CoRL 2022 / Video / Paper / Project Page
NeRF lets us synthesize novel orthographic views that work well with pixel-wise algorithms for robotic manipulation.
NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Tsung-Yi Lin, Alberto Rodriguez, Phillip Isola
ICRA 2022 / Video / Paper / Project Page / Slides / Code / Colab

Generating correspondences with Neural Radiance Fields (NeRF) enables robotic manipulation of objects (e.g., forks) that can't be reconstructed by RGB-D cameras or multi-view stereo.
iNeRF: Inverting Neural Radiance Fields for Pose Estimation
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Alberto Rodriguez, Phillip Isola, Tsung-Yi Lin
IROS 2021 / Paper / Project Page

Performing differentiable rendering with Neural Radiance Fields (NeRF) enables object pose estimation and camera tracking.
Debiased Contrastive Learning
Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, Stefanie Jegelka
NeurIPS 2020 (Spotlight) / Paper / Code

A debiased contrastive objective that corrects for the sampling of same-label datapoints without knowledge of the true labels.
Learning to See before Learning to Act: Visual Pre-training for Manipulation
Lin Yen-Chen, Andy Zeng, Shuran Song, Phillip Isola, Tsung-Yi Lin

Transferring pre-trained vision models to perform grasping results in better sample efficiency and accuracy.
Experience-embedded Visual Foresight
Lin Yen-Chen, Maria Bauza, Phillip Isola
CoRL 2019 / Paper / Project Page

Meta-learning video prediction models allows the robot to adapt to the visual dynamics of new objects.
Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGBD images
Maria Bauza, Ferran Alet, Lin Yen-Chen, Tomas Lozano-Perez, Leslie P. Kaelbling, Phillip Isola, Alberto Rodriguez
IROS 2019 / Paper / Project Page

A dataset for meta-learning dynamics models. It consists of 250 pushes for each of 250 objects, all recorded with an RGB-D camera and a high-precision tracking system.
Tactics for Adversarial Attack on Deep Reinforcement Learning Agents
Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, Min Sun
IJCAI 2017 / Paper / Project Page

Two tactics for adversarial attacks on deep RL agents: the strategically-timed attack and the enchanting attack.
Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos
Yen-Chen Lin*, Hou-Ning Hu*, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, Min Sun
CVPR 2017 (Oral Presentation)

An agent that learns to guide users where to look in 360° sports videos.
Tell Me Where to Look: Investigating Ways for Assisting Focus in 360° Video
Yen-Chen Lin, Yung-Ju Chang, Hou-Ning Hu, Hsien-Tzu Cheng, Chi-Wen Huang, Min Sun
CHI 2017

A study of how to assist users in focusing on targets while watching 360° videos.