Research Interests:

My research interests include improving LLMs for interactive decision making, empowering decision-making agents with language, and aligning LLMs with feedback.

My research experience spans deep learning, reinforcement learning (RL), and natural language processing (NLP). I’ve worked with simulators such as ScienceWorld, AI2-THOR, NetHack, and Minecraft to solve embodied tasks that require natural language understanding grounded in the environment. I’m familiar with Q-learning, policy gradient, and model-based methods for RL, and I’m experienced with language modeling, finetuning, in-context learning, RL for LLMs, and RLHF.


Research Papers:

Nottingham, Kolby, Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Sameer Singh, Peter Clark, and Roy Fox. “Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills.” Proceedings of the 41st International Conference on Machine Learning (2024).

ICML 2024

Nottingham, Kolby, Yasaman Razeghi, Kyungmin Kim, JB Lanier, Pierre Baldi, Roy Fox, and Sameer Singh. “Selective Perception: Optimizing State Descriptions with Reinforcement Learning for Language Model Actors.” North American Chapter of the Association for Computational Linguistics (2024).

NAACL 2024

Nottingham, Kolby, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Sameer Singh, and Roy Fox. “Do Embodied Agents Dream of Pixelated Sheep?: Embodied Decision Making using Language Guided World Modelling.” Proceedings of the 40th International Conference on Machine Learning (2023).

ICML 2023

Nottingham, Kolby, Alekhya Pyla, Sameer Singh, and Roy Fox. “Learning to Query Internet Text for Informing Reinforcement Learning Agents.” Reinforcement Learning and Decision Making Conference (2022).

RLDM 2022

Kirby, Robert, Kolby Nottingham, Rajarshi Roy, Saad Godil, and Bryan Catanzaro. “Guiding Global Placement With Reinforcement Learning.” arXiv preprint arXiv:2109.02631 (2021).

arXiv

Nottingham, Kolby, Litian Liang, Daeyun Shin, Charless C. Fowlkes, Roy Fox, and Sameer Singh. “Modular Framework for Visuomotor Language Grounding.” Embodied AI Workshop (2021).

workshop

Nottingham, Kolby, Anand Balakrishnan, Jyotirmoy Deshmukh, and David Wingate. “Using logical specifications of objectives in multi-objective reinforcement learning.” Human-AI Collaboration in Sequential Decision-Making (2021).

workshop


Research Experience:

University of California, Irvine

UCI NLP Group and Intelligent Dynamics (indy) Lab

September 2020 – Present

Brigham Young University

Perception, Control, and Cognition Lab

December 2018 – April 2020

  • Developed the deep RL environment framework Holodeck in Python and C++
  • Refactored sensor data retrieval from Unreal Engine for dynamic level generation
  • Built and maintained deep RL agent implementations
  • Designed an RL lab for a new deep learning course in PyTorch
Human Centered Machine Intelligence Lab

August 2018 – December 2018

  • Implemented the value iteration algorithm (a minimal sketch follows this list)
  • Designed MDP and POMDP environments
  • Ran resilience tests varying transition and observation probabilities
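
The value iteration and resilience-test items above can be illustrated with a minimal sketch. This is an illustrative example only, not the lab’s actual code: the two-state MDP, the “slip” transition-noise parameter, and all function names are hypothetical, chosen to show how varying transition probabilities changes the computed values.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        """Compute optimal state values for a finite MDP.
        P: transitions of shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
        R: rewards of shape (A, S), expected reward for action a in state s
        """
        V = np.zeros(P.shape[1])
        while True:
            Q = R + gamma * (P @ V)    # Bellman optimality backup, shape (A, S)
            V_new = Q.max(axis=0)      # greedy over actions
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new

    def two_state_mdp(slip=0.1):
        """Tiny hypothetical MDP whose transition noise ("slip") can be varied."""
        # Action 0 tries to stay in place, action 1 tries to switch states;
        # each succeeds with probability 1 - slip.
        P = np.array([[[1 - slip, slip], [slip, 1 - slip]],
                      [[slip, 1 - slip], [1 - slip, slip]]])
        R = np.array([[0.0, 1.0],   # staying in state 1 yields reward 1
                      [0.0, 0.0]])
        return P, R

    if __name__ == "__main__":
        # Resilience-style sweep: vary the transition probability and compare values.
        for slip in (0.0, 0.1, 0.3):
            P, R = two_state_mdp(slip)
            print(f"slip={slip}: V={value_iteration(P, R).round(2)}")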
Crowd Acoustics Project

February 2018 – August 2018

  • Worked with crowd noise team to design data features and labels
  • Built a data-labeling application in JavaScript for the gathered data
  • Led team that experimented with classification algorithms
  • First author on a presentation at the 2018 ASA conference

University of Southern California

Cyber Physical Systems: Verification, Intelligence, Design, and Analysis

May 2019 – August 2019

  • Prepared TurtleBot3 robots for simulation in Gazebo
  • Built multi-objective RL agents and environments
  • Wrote parser and interpreter for custom propositional logic
  • Submitted first-author work to ICLR 2020