Minkai Xu「徐民凯」

I am a Ph.D. student at Stanford Computer Science, advised by Stefano Ermon and Jure Leskovec. I'm affiliated with the Stanford AI Lab and the Stanford ML Group. My research is generously supported by the Sequoia Capital Stanford Graduate Fellowship.

Previously, I received my M.S. from Mila, advised by Jian Tang, and my B.S. (Summa Cum Laude) from SJTU, advised by Weinan Zhang. I have also spent time at the AI research labs of NVIDIA, Meta, Amazon, and ByteDance.

Email: minkai [at] cs.stanford.edu

Google Scholar   /   Twitter   /   Ins   /   Github   /   LinkedIn     

Research Topics

My general research area is machine learning, with an emphasis on generative models. I'm interested in developing scalable methods for a range of real-world problems, spanning vision, language, and science.

News

[Dec. 2023]  New!! We organized the "Generative AI and Biology" workshop at NeurIPS'23. Check out our website here!

Publications (* equal contribution)

An all-atom protein generative model
Alexander Chu,  Jinho Kim,  Lucy Cheng,  Gina El Nesr,  Minkai Xu,  Richard Shuai,  Po-Ssu Huang 
Proceedings of the National Academy of Sciences (PNAS), 121.27 (2024): e2311500121.

Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs
Ling Yang,  Zhaochen Yu,  Chenlin Meng,  Minkai Xu,  Stefano Ermon,  Bin Cui 
International Conference on Machine Learning (ICML), 2024.

Equivariant Graph Neural Operator for Modeling 3D Dynamics
Minkai Xu*,  Jiaqi Han*,  Aaron Lou,  Jean Kossaifi,  Arvind Ramanathan,  Kamyar Azizzadenesheli,  Jure Leskovec,  Stefano Ermon,  Anima Anandkumar 
International Conference on Machine Learning (ICML), 2024.

Cross-Modal Contextualized Diffusion Models for Text-Guided Visual Generation and Editing
Ling Yang,  Zhilong Zhang,  Zhaochen Yu,  Jingwei Liu,  Minkai Xu,  Stefano Ermon,  Bin Cui 
International Conference on Learning Representations (ICLR), 2024.

VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs
Ling Yang,  Ye Tian,  Minkai Xu,  Zhongyi Liu,  Shenda Hong,  Wei Qu,  Wentao Zhang,  Bin Cui,  Muhan Zhang,  Jure Leskovec 
International Conference on Learning Representations (ICLR), 2024.

MUDiff: Unified Diffusion for Complete Molecule Generation
Chenqing Hua,  Sitao Luan,  Minkai Xu,  Rex Ying,  Jie Fu,  Stefano Ermon,  Doina Precup 
Learning on Graphs Conference (LoG), 2023. 

Equivariant Flow Matching with Hybrid Probability Transport
Yuxuan Song*,  Jingjing Gong*,  Minkai Xu,  Ziyao Cao,  Yanyan Lan,  Stefano Ermon,  Hao Zhou,  Wei-Ying Ma 
Neural Information Processing Systems (NeurIPS), 2023. 

When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability
Sitao Luan,  Chenqing Hua,  Minkai Xu,  Qincheng Lu,  Jiaqi Zhu,  Xiao-Wen Chang,  Jie Fu,  Jure Leskovec,  Doina Precup 
Neural Information Processing Systems (NeurIPS), 2023. 

Scaling Riemannian Diffusion Models
Aaron Lou,  Minkai Xu,  Stefano Ermon 
Neural Information Processing Systems (NeurIPS), 2023. 

Geometric Latent Diffusion Models for 3D Molecule Generation
Minkai Xu,  Alexander Powers,  Ron Dror,  Stefano Ermon,  Jure Leskovec 
International Conference on Machine Learning (ICML), 2023.

FusionRetro: Molecule Representation Fusion via Reaction Graph for Retrosynthetic Planning
Songtao Liu,  Zhengkai Tu*,  Minkai Xu*,  Zuobai Zhang*,  Lu Lin,  Jian Tang,  Peilin Zhao,  Dinghao Wu 
International Conference on Machine Learning (ICML), 2023.

Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D
Bo Qiang*,  Yuxuan Song*,  Minkai Xu,  Jingjing Gong,  Bowen Gao,  Hao Zhou,  Wei-Ying Ma,  Yanyan Lan 
International Conference on Machine Learning (ICML), 2023.

Generative Coarse-Graining of Molecular Conformations
Wujie Wang,  Minkai Xu,  Chen Cai,  Benjamin Kurt Miller,  Tess Smidt,  Yusu Wang,  Jian Tang,  Rafael Gomez-Bombarelli 
International Conference on Machine Learning (ICML), 2022.
Computational chemistry; physics; E(3)-equivariance; graph neural networks.

GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation
Minkai Xu,  Lantao Yu,  Yang Song,  Chence Shi,  Stefano Ermon,  Jian Tang 
International Conference on Learning Representations (ICLR), 2022.
Oral Presentation [54/3391]
Geometric probabilistic models; Markov chains; SE(3)-equivariance; denoising diffusion.

Predicting Molecular Conformation via Dynamic Graph Score Matching
Shitong Luo*,  Chence Shi*,  Minkai Xu,  Jian Tang 
Neural Information Processing Systems (NeurIPS), 2021. 
Molecular 3D geometry generation; dynamic graph modeling; SE(3)-equivariance.

An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming
Minkai Xu,  Wujie Wang,  Shitong Luo,  Chence Shi,  Yoshua Bengio,  Rafael Gomez-Bombarelli,  Jian Tang 
International Conference on Machine Learning (ICML), 2021. 
End-to-end learning for 3D geometry generation; bilevel optimization.

Learning Gradient Fields for Molecular Conformation Generation
Chence Shi*,  Shitong Luo*,  Minkai Xu,  Jian Tang 
International Conference on Machine Learning (ICML), 2021.
Long Talk [top 3.0%]
Molecular 3D geometry generation; denoising score-matching; SE(3)-equivariance.

Learning Neural Generative Dynamics for Molecular Conformation Generation
Minkai Xu*,  Shitong Luo*,  Yoshua Bengio,  Jian Peng,  Jian Tang 
International Conference on Learning Representations (ICLR), 2021. 
Molecular 3D structure generation; neural ODE; energy-based models.

Reciprocal Supervised Learning Improves Neural Machine Translation
Minkai Xu,  Mingxuan Wang,  Zhouhan Lin,  Hao Zhou,  Weinan Zhang,  Lei Li 
arXiv preprint.
Learning NMT models with a co-EM framework; text generation; expectation-maximization algorithm.

Energy-Based Imitation Learning
Minghuan Liu,  Tairan He,  Minkai Xu,  Weinan Zhang 
International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2021.
Imitation learning with an energy function as the reward; reinforcement learning; energy-based models.

Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator
Yuxuan Song*,  Qiwei Ye*,  Minkai Xu*,  Tie-Yan Liu 
arXiv preprint.
Bridging the gap between GANs and Energy-Based Models; generative models; contrastive divergence.

Towards Generalized Implementation of Wasserstein Distance in GANs
Minkai Xu,  Zhiming Zhou,  Guansong Lu,  Jian Tang,  Weinan Zhang,  Yong Yu 
AAAI Conference on Artificial Intelligence (AAAI), 2021. 
A novel Sobolev duality of the Wasserstein distance; generative adversarial networks.

A Graph to Graphs Framework for Retrosynthesis Prediction
Chence Shi,  Minkai Xu,  Hongyu Guo,  Ming Zhang,  Jian Tang 
International Conference on Machine Learning (ICML), 2020. 
Retrosynthesis with variational methods; latent variable models; drug discovery.

GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation
Chence Shi*,  Minkai Xu*,  Zhaocheng Zhu,  Weinan Zhang,  Ming Zhang,  Jian Tang 
International Conference on Learning Representations (ICLR), 2020. 
Molecular graph generation and optimization; flow-based generative models; graph learning.

Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip
Yuxuan Song,  Minkai Xu,  Lantao Yu,  Hao Zhou,  Shuo Shao,  Yong Yu 
AAAI Conference on Artificial Intelligence (AAAI), 2020. 
Robust and efficient compression coding; information theory; implicit generative modeling.

Awards
  • Sequoia Capital Stanford Graduate Fellowship (7 fellows in CS), [Announcement], Stanford. 2022-2026
  • Best Bachelor Thesis (top 1%), Towards Generalized Wasserstein GANs [Dissertation][Video], SJTU. 2020
  • Mong Man Wai - Hong Kong Scholarship (~6,000 USD; 15 recipients among 6,000+ sophomore and junior students at SJTU), SJTU. 2018
  • Kwang-Hua Scholarship (sophomore ranking: 3rd/104), Kwang-Hua Foundation. 2018
  • Shanghai Scholarship (freshman ranking: 2nd/104), Shanghai Education Ministry. 2017
Talks
Professional Services
  • PC member/Reviewer: ICML'21-24, NeurIPS'21-23, ICLR'22-24, AAAI'21-23, TPAMI.




Updated Feb. 2024
Thanks to Jon Barron for this amazing template