I am a Research Scientist on the Brain team at Google. I am mainly interested in generative models and representation learning. Specifically, I work on efficient and scalable learning algorithms for deep generative models, with an emphasis on energy-based models (EBMs) but also covering other model families. I also work on representational models with implications for neuroscience. I obtained my Ph.D. from UCLA, advised by Song-Chun Zhu and Ying Nian Wu. Prior to that, I received my B.S. in Statistics from Peking University in China.
Contact: ruiqigao at ucla dot edu
Highlighted research themes:
(1) Maximum likelihood learning of deep generative models.
- Diffusion recovery likelihood of EBMs (Gao et al., ICLR 2021).
- Multi-grid learning of EBMs (Gao et al., CVPR 2018).
- Spatial-temporal top-down models (Xie*, Gao*, et al., AAAI 2020), (Xie*, Gao*, et al., AAAI 2019).
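As a toy illustration of theme (1) — not code from any of the papers above — the maximum-likelihood gradient for an EBM is the difference between the expected gradient of the (negative) energy under the data and under the model's own samples. The 1-D sketch below assumes a Gaussian-shaped EBM so the model can be sampled exactly; real EBMs require MCMC or, as in diffusion recovery likelihood, a sequence of easier conditional distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy EBM: p_theta(x) ∝ exp(-theta * x**2 / 2),
# i.e. a zero-mean Gaussian with variance 1/theta.
# ML gradient: E_data[d/dtheta log f] - E_model[d/dtheta log f],
# where d/dtheta (-theta * x**2 / 2) = -x**2 / 2.

data = rng.normal(0.0, 2.0, size=10_000)  # true variance 4, so true theta = 0.25

theta = 1.0
lr = 0.05
for _ in range(500):
    # Exact sampling here only because the toy model is Gaussian;
    # in general this step is MCMC from the current model.
    model_samples = rng.normal(0.0, 1.0 / np.sqrt(theta), size=10_000)
    grad = -0.5 * np.mean(data**2) + 0.5 * np.mean(model_samples**2)
    theta += lr * grad  # gradient ascent on the log-likelihood

print(theta)  # converges near 1 / Var(data) = 0.25
```

At the fixed point the model's expected x^2 matches the data's, which is exactly the moment-matching property of maximum likelihood for exponential-family EBMs.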
(2) Joint learning of various models.
- Contrastive learning of EBMs with flow-based models (Gao et al., CVPR 2020).
- Mixing MCMC of EBMs with flow-based models as backbones (Nijkamp*, Gao*, et al., arXiv 2020).
- Cooperative learning of EBMs with top-down models (Xie et al., AAAI 2019 & TPAMI).
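Theme (2)'s contrastive pairing can be illustrated with plain noise-contrastive estimation, the principle underlying flow-contrastive estimation (Gao et al., CVPR 2020): train a logistic classifier between data and samples from a tractable reference model, using the log-density ratio as the logit. The sketch below substitutes a fixed 1-D Gaussian for the learned flow and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(0.0, 2.0, size=20_000)   # true density N(0, 4)
noise = rng.normal(0.0, 3.0, size=20_000)  # tractable reference q = N(0, 9)

def log_q(x):
    # Exact log-density of the reference model (a flow would also provide this).
    return -0.5 * x**2 / 9.0 - 0.5 * np.log(2 * np.pi * 9.0)

# Unnormalized EBM log-density f(x) = -theta * x**2 / 2 + c;
# NCE estimates the normalizer via the extra parameter c.
theta, c = 1.0, 0.0
lr = 0.1
for _ in range(2000):
    logit_data = (-0.5 * theta * data**2 + c) - log_q(data)
    logit_noise = (-0.5 * theta * noise**2 + c) - log_q(noise)
    p_data = 1.0 / (1.0 + np.exp(-logit_data))
    p_noise = 1.0 / (1.0 + np.exp(-logit_noise))
    # Ascend the logistic (NCE) objective in theta and c.
    g_theta = np.mean((1 - p_data) * (-0.5 * data**2)) - np.mean(p_noise * (-0.5 * noise**2))
    g_c = np.mean(1 - p_data) - np.mean(p_noise)
    theta += lr * g_theta
    c += lr * g_c

print(theta, c)  # theta near 0.25; c near the true log-normalizer -0.5*log(8*pi)
```

In flow-contrastive estimation the reference is itself a flow trained jointly with the EBM, so the classification task stays hard and the EBM keeps improving; the fixed Gaussian here stands in for that learned noise model.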
(3) Representational models with implications for neuroscience.