Hello! My name is Saurabh Garg and I am building next-generation AI models at Mistral AI. Previously, I completed my PhD in the Machine Learning Department at CMU, advised by Prof. Zachary Lipton and Prof. Siva Balakrishnan. I have also been lucky to collaborate with Prof. Zico Kolter and Prof. Aditi Raghunathan. My PhD was supported by the Bloomberg PhD Fellowship, the JP Morgan AI PhD Fellowship, and the Amazon Graduate Fellowship.

I did my undergrad at IIT Bombay, India, graduating in 2018 with a major and honors in CS and a minor in Applied Statistics. After that, I spent one amazing year at Samsung Headquarters, Korea.


Updates

Oct 2023: My Apple internship work is out now: TiC-CLIP: Continual Training of CLIP Models. Quite excited about this work! A short version will appear as an Oral at the NeurIPS Distribution Shifts Workshop 2023.
Sept 2023: Our work on (i) Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift; (ii) Online Label Shift: Optimal Dynamic Regret meets Practical Algorithms (as a Spotlight); and (iii) (Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy got accepted at NeurIPS 2023. See you in New Orleans!
Sept 2023: We are organizing the R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Foundation Models workshop at NeurIPS 2023.
May 2023: We will be presenting our work on Downstream Datasets Make Surprisingly Good Pretraining Corpora at ACL 2023.
May 2023: We will be presenting two papers at ICML 2023: (i) RLSbench; (ii) CHILS.
Jan 2023: Our work on (i) Deconstructing Distributions; and (ii) Understanding SGD Noise got accepted at ICLR 2023.
Sept 2022: Our work on (i) Domain adaptation under Open Set Label Shift, (ii) Unsupervised Learning under Latent Label Shift, and (iii) Characterizing Datapoints via Second-Split Forgetting got accepted at NeurIPS 2022.
July 2022: We are organizing Principles of Distribution Shift (PODS) workshop at ICML, 2022.
March 2022: Honored to receive the JP Morgan AI PhD Fellowship and Amazon Graduate Fellowship.
Feb 2022: Code for PU learning and RATT is out now.
Jan 2022: Our work investigating methods to predict target domain performance under distribution shift was accepted at ICLR 2022. [Arxiv link]
Sept 2021: Our work on learning from positive and unlabeled data was accepted at NeurIPS 2021 as a Spotlight! [Arxiv link]
May 2021: Two papers at ICML: (i) our work on obtaining generalization bounds with unlabeled data got accepted as a Long Talk at ICML 2021 [Paper]; (ii) our work on understanding heavy tails in PPO will appear as a Short Talk at ICML 2021 [Paper].
April 2021: Our work on obtaining generalization guarantees with unlabeled data will be presented at the RobustML Workshop at ICLR 2021 [Paper] [Poster].
April 2021: Our work on understanding the behaviour of gradients in PPO will be presented at the SEDL Workshop at ICLR 2021. [Paper] [Talk] [Poster].
Feb 2021: Excited to be interning with Hanie Sedghi and Behnam Neyshabur at Google Brain during Summer 21.
Feb 2021: New work on understanding the behaviour of gradients in PPO is out on arXiv.
Sept 2020: Our work on label shift got accepted at NeurIPS 2020 [Paper] [Poster].
July 2020: Our work on label shift estimation was accepted as an Oral at ICML UDL 2020 [Talk] [Full Paper].
April 2020: Our work on Neural Architecture for Question Answering was an invited Oral at ECIR 2020 [Talk].
June 2019: I will be joining the CMU ML PhD program in fall 2019.
April 2019: My B.Tech thesis titled "Estimating Uncertainty in MRF-based Image Segmentation: An Exact-MCMC Approach" got accepted at the Medical Image Analysis journal (2019)
Dec. 2018: Received the Excellence in Research Award from the CSE department, IIT Bombay
Nov. 2018: Presented my paper"Code-Switched Language models using Dual RNNs and Same-Source Pretraining" at EMNLP 2018, Brussels (poster)
Oct. 2018: Paper titled "Neural Architecture for Question Answering Using a Knowledge Graph and Web Corpus" got accepted at the Information Retrieval Journal
Sept. 2018: Moved to Suwon, South Korea and joined Samsung Research Korea as an Engineer
Sept. 2018: Presented my paper "Dual Language Models for Code Mixed Speech Recognition" at Interspeech 2018, Hyderabad (poster)
Aug. 2018: Graduated from IIT Bombay.
May 2018: Paper titled "Uncertainty Estimation in Segmentation with Perfect MCMC Sampling in Bayesian MRFs" got accepted at MICCAI 2018 (poster)
Dec 2018: Invited to spend two weeks at Microsoft Research India to work on Indian language technologies with Prof. Preethi Jyothi
May 2017: Internship @ Samsung Research Korea
May 2016: Internship at Purdue University, US, advised by Prof. Alex Pothen
July 2015: Changed branch from Electrical Engineering to Computer Science and Engineering
July 2014: Joined IIT Bombay