Atharva Amdekar

Graduate Student

Stanford University

About Me

Hello there! I am a second-year Master’s student at Stanford University’s Institute of Computational and Mathematical Engineering (ICME). I am broadly interested in Artificial Intelligence, specifically in building systems that can reliably and robustly learn to reason.

In 2022, I spent a wonderful summer interning as an Applied Scientist at Amazon, where I worked on permutation-invariant Large Language Models to refine the knowledge graph at the backend of the Amazon Catalogue System. In my first year at Stanford, I worked on a wide variety of projects, ranging from Deep Generative Models to domain-agnostic Self-Supervised Learning and Natural Language Processing.

Before joining Stanford, I worked as a Quantitative Researcher at a high-frequency trading firm in India, where I led the automation and backtesting of trading strategies using Statistical Machine Learning. In 2020, I graduated with a B.Tech after spending four wonderful years as an undergraduate at IIT Guwahati, majoring in Mathematics and Computer Science.

If you find my work interesting, or want to discuss some cool projects that you are working on, feel free to reach out at aamdekar at stanford dot edu.

Interests
  • Natural Language Processing
  • Computer Vision
  • Graph Neural Networks
Education
  • M.S., Computational and Mathematical Engineering, 2023

    Stanford University

  • B.Tech, Mathematics and Computer Science, 2020

    Indian Institute of Technology, Guwahati

Experience

Amazon
Applied Scientist Intern
Jun 2022 – Sep 2022 · Seattle, Washington
  • Owned an end-to-end service that leveraged state-of-the-art permutation-invariant Transformers and Graph Neural Networks to refine the knowledge graph that supports the Amazon Catalogue System.
  • Designed a model that was 2.5x more computationally efficient than the production model and achieved a significant gain on out-of-distribution tasks.

Stanford AI Laboratory
Graduate Research Assistant
Sep 2021 – May 2022 · Stanford, California
  • Collaborated with Prof. Stefano Ermon on diffusion models using score matching for representation learning in JAX.
  • Worked on domain-agnostic Self-Supervised Learning algorithms in PyTorch.

Stanford University
Master’s Student
Sep 2021 – Present · Stanford, California

iRage Capital
Quantitative Researcher
Jun 2020 – Jul 2021 · Mumbai, India

Indian Institute of Technology, Guwahati
B.Tech, Mathematics and Computer Science
Jul 2016 – Jun 2020 · Guwahati, India

Publications

(2022). MoCa: Cognitive Scaffolding for Language Models in Causal and Moral Judgment Tasks. Poster, ICML Workshop: Beyond Bayes.
