Nihal Nayak


One-Page Resume | CV (Updated Feb 2026)

I am a Postdoctoral Fellow at Harvard University (SEAS) working with David Alvarez-Melis. My research focuses on efficiently building and adapting foundation models through data‑centric solutions.

I completed my Ph.D. in Computer Science at Brown University, where I worked with Stephen Bach. During my Ph.D., I studied zero-shot generalization: the ability of an intelligent system to generalize to new classes, tasks, and environments without human annotations. I introduced new learning methods and evaluation techniques for zero-shot generalization through synthetic datasets (Bonito), composition (CSP, CLIP Binding), and structured knowledge (ZSL-KG).

Email: nnayak [at] seas [dot] harvard [dot] edu

selected publications

  1. arXiv
    A Critical Look at Targeted Instruction Selection: Disentangling What Matters (and What Doesn’t)
    Nihal V. Nayak, Paula Rodriguez-Diaz, Neha Hulkund, Sara Beery, and David Alvarez-Melis
    arXiv preprint 2026
  2. ICLR
    Boomerang Distillation Enables Zero-Shot Model Size Interpolation
    Sara Kangaslahti, Nihal V. Nayak, Jonathan Geuter, Marco Fumero, Francesco Locatello, and David Alvarez-Melis
    In International Conference on Learning Representations (ICLR) 2026
  3. Findings
    Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
    Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
    In Findings of the Association for Computational Linguistics: ACL 2024
  4. ICLR
    Learning to Compose Soft Prompts for Compositional Zero-Shot Learning
    Nihal V. Nayak, Peilin Yu, and Stephen H. Bach
    In International Conference on Learning Representations (ICLR) 2023
  5. ICLR
    Multitask Prompted Training Enables Zero-Shot Task Generalization
    Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush
    In International Conference on Learning Representations (ICLR) 2022