Shivam Shandilya

Research Fellow, Microsoft Research


Hi! I am currently a Research Fellow at Microsoft Research India (MSRI) in Bangalore. My research focuses on developing efficient models and frameworks for large language models (LLMs). I have worked on projects such as Task-Aware Prompt Compression and the Self-Assessing LLM framework, aiming to improve efficiency and performance across a range of NLP tasks.

Prior to my current role, I completed my B.Tech in Electrical and Electronics Engineering at Birla Institute of Technology, Mesra. As an undergraduate, I contributed to the PyZombis project under the Python Software Foundation through Google Summer of Code 2022. I also spent a summer as a research intern at CoEAMT, IIT Kharagpur.

My research interests lie broadly at the intersection of NLP and efficiency, and I am passionate about exploring ways to make language technologies more accessible and effective.

2024

  1. TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning
    Shivam Shandilya, Menglin Xia, Supriyo Ghosh, Huiqiang Jiang, Jue Zhang, and 2 more authors
    2024
  2. Unveiling Context-Aware Criteria in Self-Assessing LLMs
    Taneesh Gupta, Shivam Shandilya, Xuchao Zhang, Supriyo Ghosh, Chetan Bansal, and 2 more authors
    2024
  3. Streetwise Agents: Empowering Offline RL Policies to Outsmart Exogenous Stochastic Disturbances in RTC
    Aditya Soni, Mayukh Das, Anjaly Parayil, Supriyo Ghosh, Shivam Shandilya, and 6 more authors
    2024