Nihal Nayak


I am a Ph.D. candidate in Computer Science at Brown University. I work with Stephen Bach on zero-shot generalization in deep neural networks and, more broadly, on learning with limited labeled data.

Here is a summary of my recent research:

  • Synthetic Data Generation. Introduced Bonito, an open-source model that converts unannotated text from specialized domains into instruction-tuning datasets, enabling the adaptation of large language models without manual annotations (Findings of ACL 2024).

  • Compositionality. Introduced compositional prompt tuning, a new parameter-efficient prompt learning method that learns to decompose classes into sub-concepts and recomposes them at test time for improved zero-shot performance (ICLR 2023). Next, created a synthetic benchmark to systematically study compositionality in vision-language models; we find that they often fail to generalize to compositions that require binding (Findings of EACL 2024).

  • Structured Knowledge. Created ZSL-KG, a general-purpose zero-shot learning framework with a novel transformer graph convolutional network (TrGCN) that learns class representations from common-sense knowledge graphs (TMLR 2022).

CV (updated June 2024)

Email: nnayak2 [at] cs [dot] brown [dot] edu

news

Jun 5, 2024 Excited to share that Bonito was accepted to ACL Findings 2024.
May 3, 2024 Spotlight talk at NeNLP and invited talks on Bonito at Snorkel (Video) and Amy Greenwald’s research group.
Feb 27, 2024 Excited to share new preprint on adapting large language models to tasks in specialized domains using Bonito, an open-source model that converts raw, unannotated data into instruction tuning datasets.
Dec 31, 2023 Our work Does CLIP Bind Concepts? Probing Compositionality in Large Image Models was accepted to Findings of EACL 2024.

selected publications

  1. Findings
    Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
    Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
    In Findings of the Association for Computational Linguistics: ACL 2024
  2. Findings
    Does CLIP Bind Concepts? Probing Compositionality in Large Image Models
    Martha Lewis, Nihal V. Nayak, Peilin Yu, Qinan Yu, Jack Merullo, Stephen H. Bach, and Ellie Pavlick
    In Findings of the Association for Computational Linguistics: EACL 2024
  3. ICLR
    Learning to Compose Soft Prompts for Compositional Zero-Shot Learning
    Nihal V. Nayak, Peilin Yu, and Stephen H. Bach
    In International Conference on Learning Representations (ICLR) 2023
  4. TMLR
    Zero-Shot Learning with Common Sense Knowledge Graphs
    Nihal V. Nayak and Stephen H. Bach
    Transactions on Machine Learning Research 2022
  5. ICLR
    Multitask Prompted Training Enables Zero-Shot Task Generalization
    Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush
    In International Conference on Learning Representations (ICLR) 2022