About Me
I am an assistant professor in the Department of Computer Science at George Mason University. Before that, I was a postdoc at the Rafik B. Hariri Institute at Boston University, hosted by Francesco Orabona. I received my Ph.D. from the Department of Computer Science at The University of Iowa in August 2020, advised by Tianbao Yang. Before that, I studied at the Institute of Natural Sciences and the School of Mathematical Sciences at Shanghai Jiao Tong University. I have also spent time at industrial research labs, including IBM Research AI and Alibaba DAMO Academy. Here is my Google Scholar profile.
I am looking for self-motivated, fully funded PhD students with a solid math background and/or programming skills. If you have strong mathematical ability to work on the theoretical foundations of machine learning, or rich coding experience in machine learning applications (e.g., computer vision, NLP), please email me your CV and transcript, and apply here. Undergraduate and graduate student visitors are also welcome.
Research
My research interests span machine learning, optimization, learning theory, and deep learning. My goal is to design provably efficient algorithms for machine learning problems that also deliver strong empirical performance. In particular, I work on:
Mathematical Optimization for Machine Learning: I focus on designing provably efficient optimization algorithms for machine (deep) learning problems, such as AUC maximization, F-measure optimization, and Generative Adversarial Networks.
Statistical Learning Theory: I am interested in the sample complexity and computational complexity of modern machine learning problems.
Large-scale Distributed Learning: I design efficient, scalable learning algorithms for distributed intelligence under various constraints (e.g., communication, privacy).
Machine Learning Applications: adversarial attack/defense, model compression and quantization, lifelong learning, etc.
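As a rough illustration of one of the objectives above: AUC measures the fraction of correctly ranked positive/negative pairs, and since the 0-1 pair indicator is non-differentiable and non-decomposable, optimization approaches typically replace it with a smooth pairwise surrogate. The snippet below is my own minimal sketch (a pairwise squared-hinge surrogate), not the exact formulation used in any particular paper listed here:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs in which
    the positive example is scored strictly higher (ties count as 0.5)."""
    margins = scores_pos[:, None] - scores_neg[None, :]
    return np.mean((margins > 0) + 0.5 * (margins == 0))

def pairwise_auc_surrogate(scores_pos, scores_neg):
    """Pairwise squared-hinge surrogate for AUC maximization:
    a smooth upper bound that penalizes any negative example
    scored within margin 1 of (or above) a positive example."""
    margins = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(np.maximum(0.0, 1.0 - margins) ** 2)
```

For example, with `scores_pos = [2.0, 1.5]` and `scores_neg = [0.0, 1.0]`, every positive outranks every negative, so the empirical AUC is 1.0, while the surrogate is still positive because the pair (1.5, 1.0) has margin below 1. Minimizing the surrogate over model parameters (rather than the scores directly) is what makes the problem amenable to stochastic optimization.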
News
- (Jan 2023) One paper was accepted by ICLR 2023. Congratulations to my students!
- (Nov 2022) Two NeurIPS 2022 papers Communication-Efficient Distributed Gradient Clipping and Bilevel Optimization were selected as spotlight presentations (5% acceptance rate).
- (Sep 2022) Three papers were accepted by NeurIPS 2022. Congratulations to my students and co-authors.
- (May 2022) One paper was accepted by ICML 2022. Congratulations to my students and co-authors.
- (Feb 2022) We will give a tutorial at CVPR 2022 on "Deep AUC Maximization".
- (Feb 2022) Two papers were accepted by ALT 2022.
- (Sep 2021) One paper was accepted by NeurIPS 2021.
- (Aug 2021) Joined CS @ GMU as an assistant professor.
- (June 2021) The paper on nonconvex-nonconcave min-max optimization was accepted by JMLR.
- (Sep 2020) Two papers were accepted by NeurIPS 2020, with one Spotlight presentation (3% acceptance rate).
- (May 2020) One paper was accepted by ICML 2020.
- (Dec 2019) Two papers were accepted by ICLR 2020.
- (Sep 2018) Three papers were accepted by NeurIPS 2018.
- (May 2018) One paper was accepted by ICML 2018.
Selected Publications [Full List]
-
A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
Mingrui Liu, Zhenxun Zhuang, Yunwen Lei, Chunyang Liao.
Advances in Neural Information Processing Systems 35, 2022. (NeurIPS 2022) (Spotlight, 5% acceptance rate)
-
First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems
Mingrui Liu, Hassan Rafique, Qihang Lin, Tianbao Yang.
Journal of Machine Learning Research, 2021. (JMLR 2021)
-
Non-Convex Min-Max Optimization: Provable Algorithms and Applications in Machine Learning
Hassan Rafique, Mingrui Liu, Qihang Lin, Tianbao Yang.
Optimization Methods and Software, 2021.
-
Improved Schemes for Episodic Memory-based Lifelong Learning
Yunhui Guo*, Mingrui Liu*, Tianbao Yang, Tajana Rosing. (*: equal contribution)
Advances in Neural Information Processing Systems 33, 2020. (NeurIPS 2020) (Spotlight, 3% acceptance rate, top 4% submissions)
[Code]
-
A Decentralized Parallel Algorithm for Training Generative Adversarial Nets
Mingrui Liu, Wei Zhang, Youssef Mroueh, Xiaodong Cui, Jerret Ross, Tianbao Yang, Payel Das.
Advances in Neural Information Processing Systems 33, 2020. (NeurIPS 2020)
-
Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets
Mingrui Liu, Youssef Mroueh, Jerret Ross, Wei Zhang, Xiaodong Cui, Payel Das, Tianbao Yang.
8th International Conference on Learning Representations, 2020. (ICLR 2020)
-
Stochastic AUC Maximization with Deep Neural Networks
Mingrui Liu, Zhuoning Yuan, Yiming Ying, Tianbao Yang.
8th International Conference on Learning Representations, 2020. (ICLR 2020)
[Code]
-
Fast Stochastic AUC Maximization with O(1/n) Convergence Rate
Mingrui Liu, Xiaoxuan Zhang, Zaiyi Chen, Xiaoyu Wang, Tianbao Yang.
Proceedings of the 35th International Conference on Machine Learning, 2018. (ICML 2018)
[Supplement] [Bibtex] [Poster] [Code]
Last update: 01-20-2023