Nihal Nayak


I am a Ph.D. candidate in Computer Science at Brown University. I work with Stephen Bach on learning with limited labeled data and zero-shot learning.

In summer 2022, I interned at ASAPP, where I worked with Clemens Rosenbaum and Ethan R. Elenberg on estimating the quality of clusterings.

Before my Ph.D., I worked as an NLP Engineer at Stride.AI. I also worked with Prof. H S Jamadagni at the Indian Institute of Science (IISc), first as an intern and then briefly as a Project Assistant.

CV (updated October 2022)

Email: nnayak2 [at] cs [dot] brown [dot] edu

news

Apr 1, 2023 New preprint out: Does CLIP bind concepts? Probing compositionality in large image models.
Jan 19, 2023 Our work on compositional soft prompting was accepted to ICLR 2023.
Oct 5, 2022 Excited to share a new preprint on evaluating clusterings with few labeled data points.
Jul 14, 2022 Our work, ZSL-KG, was accepted to TMLR.

selected publications

  1. ICLR
    Learning to Compose Soft Prompts for Compositional Zero-Shot Learning
Nihal V. Nayak, Peilin Yu, and Stephen H. Bach
    In International Conference on Learning Representations (ICLR) 2023
  2. TMLR
    Zero-Shot Learning with Common Sense Knowledge Graphs
    Nihal V. Nayak, and Stephen H. Bach
    Transactions on Machine Learning Research 2022
  3. ICLR
    Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush
    In International Conference on Learning Representations (ICLR) 2022