Prasann Singhal
प्रसन्न सिंघल

Undergraduate Student, Computer Science / Linguistics
The University of Texas at Austin

prasanns [at] cs.utexas.edu   PrasannS   prasann_singhal

I am a third-year undergraduate student at UT Austin, advised by Greg Durrett, and a member of the TAUR Lab (Text Analysis, Understanding, and Reasoning).

My research focuses on Natural Language Processing, particularly natural language generation and model interpretability. I'm currently working on improving our understanding of Reinforcement Learning from Human Feedback (RLHF) and on new ways to create robust, well-calibrated reward models. I have also worked on efficient decoding algorithms and on using explanations to predict large language models' out-of-distribution robustness.

Before getting into research, I had the chance to participate in quite a few hackathons, where I built a lot of fun and interesting projects.

Feel free to send me an email if you'd like to chat about RLHF, text generation, interpretability, or NLP in general! Likewise, if you're a high schooler or undergraduate interested in getting into machine learning or research, I'm always happy to give advice if you feel I can be of any help.

Publications

(preprint) D2PO: Discriminator-Guided DPO with Response Evaluation Models code

Prasann Singhal, Nathan Lambert, Scott Niekum, Tanya Goyal, and Greg Durrett. arXiv 2024.

(preprint) A Long Way to Go: Investigating Length Correlations in RLHF code

Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. arXiv 2023.

EEL: Efficiently Encoding Lattices for Reranking code

Prasann Singhal, Jiacheng Xu, Xi Ye, and Greg Durrett. Proceedings of ACL 2023.

Assessing Out-of-Domain Language Model Performance from Few Examples

Prasann Singhal*, Jarad Forristal*, Xi Ye, and Greg Durrett. Proceedings of EACL 2023.

Invited Talks

[Feb. 2024] A Long Way to Go in RLHF @ UT Austin LIN 393 Seminar

[Nov. 2023] A Long Way to Go in RLHF @ IST & Unbabel Seminar

Experience

UT Austin TAUR Lab. Undergraduate Research Assistant, Natural Language Processing. Fall 2021–Present.

Drishya.ai. ML Intern. Summer 2020 [Led a project on simulation and multi-agent optimization of renewable microgrids].

Service/Teaching

Reviewer: EMNLP (2022)

UT Austin Directed Reading Program Mentor (Spring 2023)

Founder / Teacher, Katy HACK Initiative: I spent three years starting and running CS education programs in local elementary and junior high schools.

Volunteer Teacher: I spent a summer teaching English and computer fundamentals in a rural village in Gujarat.