Undergraduate Student, Computer Science / Linguistics
The University of Texas at Austin
My research focuses on Natural Language Processing, particularly natural language generation and model interpretability. I'm currently working on improving our understanding of Reinforcement Learning from Human Feedback (RLHF) and on new ways to build robust, well-calibrated reward models. I've also worked on efficient decoding algorithms and on using explanations to predict large language models' out-of-distribution robustness.
Before getting into research, I had the chance to participate in quite a few hackathons, where I built a lot of fun and interesting projects.
Feel free to send me an email if you'd like to chat about RLHF, text generation, interpretability, or NLP in general! Likewise, if you're a high schooler or undergrad interested in getting into machine learning or research, I'm always happy to give advice if you feel I can be of any help.
[Feb. 2024] A Long Way to Go in RLHF @ UT Austin LIN 393 Seminar
[Nov. 2023] A Long Way to Go in RLHF @ IST & Unbabel Seminar
UT Austin TAUR Lab Undergraduate Research Assistant, Natural Language Processing. Fall 2021-Present.
Drishya.ai. ML Intern. Summer 2020 [Led a project on simulation and multi-agent optimization of renewable microgrids].
Reviewer: EMNLP 2022
UT Austin Directed Reading Program Mentor (Spring 2023)
Founder / Teacher - Katy HACK Initiative: I spent three years founding and running CS education programs in local elementary and junior high schools.
Volunteer Teacher - I spent a summer teaching English and computer fundamentals in a rural village in Gujarat.