NeurIPS 2022 in-person Workshop, December 2nd, New Orleans


Sponsors

This workshop is sponsored by:

Description

Language is one of the most impressive human accomplishments and is believed to be core to our ability to learn, teach, reason and interact with others [1; 2; 3]. Learning many complex tasks or skills would be significantly more challenging without language to communicate, and language is believed to have a structuring effect on human thought [4; 5; 51]. Written language has also given humans the ability to store information and insights about the world and to pass them across generations and continents. Yet the ability of current state-of-the-art reinforcement learning agents to understand natural language is limited.

Practically speaking, the ability to integrate and learn from language, in addition to rewards and demonstrations, has the potential to improve the generalization, scope and sample efficiency of agents [6]. For example, agents capable of transferring domain knowledge from textual corpora might be able to explore a given environment much more efficiently, or to perform zero- or few-shot learning in novel environments [7]. Furthermore, many real-world tasks, including personal assistants and general household robots, require agents to process language by design, whether to enable interaction with humans or simply to use existing interfaces [8; 9].

To support this field of research, we are interested in fostering the discussion around:

  • methods that can effectively link language to actions and observations in the environment [10; 11; 12; 13; 14; 15; 16; 17; 18];
  • research into language roles beyond encoding goal states, such as structuring hierarchical policies [19; 20; 21; 22], communicating domain knowledge [23; 24], or reward shaping [25; 26; 27] (a toy sketch of the latter follows this list);
  • methods that can help identify and incorporate outside textual information about the task, or general-purpose semantics learned from outside corpora [28; 29; 30];
  • novel environments and benchmarks enabling such research and approaching the complexity of real-world problem settings [31; 32; 33; 34; 35; 36].
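
As a toy illustration of reward shaping with language (cf. [25; 26; 27]), the Python sketch below adds a small language-derived bonus to a sparse environment reward. Everything in it, including the keyword-based instruction_score function and the coefficient, is a hypothetical stand-in rather than code from the cited papers, which learn such instruction-relevance models from data.

    # A toy sketch of language-based reward shaping, in the spirit of [25; 26; 27].
    # Everything here (the keyword matcher, the coefficient) is a hypothetical
    # stand-in: the cited papers learn instruction-relevance models from data.

    def instruction_score(instruction: str, action: str) -> float:
        """Score how well an action matches the instruction (toy keyword match)."""
        return 1.0 if action in instruction.split() else 0.0

    def shaped_reward(env_reward: float, instruction: str, action: str,
                      coef: float = 0.1) -> float:
        """Augment a sparse environment reward with a dense language bonus."""
        return env_reward + coef * instruction_score(instruction, action)

    # Usage: the sparse reward (0.0) gains a bonus because "jump" appears
    # in the instruction.
    print(shaped_reward(0.0, "jump over the skeleton then open the chest", "jump"))

In a real agent, the bonus would come from a learned model of instruction-trajectory relevance, and shaping terms must be designed carefully (e.g. as potential-based shaping) so that they do not change the optimal policy.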

The aim of the workshop on Language in Reinforcement Learning is to steer discussion and research of these problems by bringing together researchers from several communities, including reinforcement learning, robotics, natural language processing, computer vision and cognitive psychology. Since the last edition of LaReL at ICML 2020 [45; 46], we have seen a wealth of papers leveraging language to support or enhance cognitive functions in artificial agents: systematic generalization [37; 17], relational and causal learning [38], abstraction and planning [39; 40; 41], or goal imagination [42]. These results revive the old debate on the cognitive functions of language [43; 44] and let us foresee new ways for language to support artificial agents’ cognitive abilities beyond goal and reward specification [52]. We have also seen more work on transferring commonsense knowledge from pretrained large language models (LLMs) [47; 48] beyond linguistic tasks to sequential decision-making settings. Recent work has shown how LLMs can be used as planners [39; 49; 50], enabling learning in sparse-data settings and assisting out-of-distribution generalization to novel tasks.
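
To make the LLM-as-planner recipe concrete, here is a minimal, self-contained Python sketch in the spirit of [39; 49; 50]: prompt a language model for a numbered plan, then keep only the steps the environment can execute. The query_llm stub and its canned reply are hypothetical placeholders for a real pretrained model, and the exact-match grounding is a simplification of the embedding-based matching used in [49].

    # A minimal sketch of the LLM-as-planner idea (cf. [39; 49; 50]). The
    # query_llm stub and its canned reply are hypothetical placeholders for
    # a real pretrained language model.

    def query_llm(prompt: str) -> str:
        """Hypothetical LLM call; a real system would query a pretrained model."""
        return "1. walk to the kitchen\n2. open the fridge\n3. grab the milk"

    def plan(task: str, admissible_actions: list[str]) -> list[str]:
        """Ask the LLM for numbered steps, keep those the environment can run."""
        completion = query_llm(f"Task: {task}\nPlan, as numbered steps:")
        steps = [line.split(". ", 1)[1]
                 for line in completion.splitlines() if ". " in line]
        # Grounding: keep only executable steps. Real systems map free-form
        # steps to admissible actions, e.g. via embedding similarity [49].
        return [s for s in steps if s in admissible_actions]

    actions = ["walk to the kitchen", "open the fridge",
               "grab the milk", "sit down"]
    print(plan("get the milk", actions))  # -> the three executable steps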

In the wake of tremendous progress in multimodal models, NLP, and RL, this workshop will take a step back and focus the discussion on upcoming challenges in integrating language and sequential decision-making. We will prepare some hot questions and collect community-contributed questions through our website and social media; we will then focus the panel discussion on a selection of these questions and host an open breakout-rooms session where all attendees can contribute to the discussion of these open problems.

Some example questions:

  • How can knowledge stored in a pretrained language model be transferred to an RL policy?
  • How can we update a language model from feedback in an environment?
  • How can we build open-ended environments for language-using agents?
  • Can general language agents be trained from imitation learning alone, or is feedback from an environment required?
  • How can we specify rewards or collect data for language agents at scale?

References

[1] The development of categorization in the second year and its relation to other cognitive and linguistic developments, Gopnik and Meltzoff, 1987;
[2] Core knowledge, Spelke and Kinzler, 2007;
[3] Cognitive effects of language on human navigation, Shusterman et al., 2011;
[4] Words as invitations to form categories: Evidence from 12- to 13-month-old infants, Waxman, 1995;
[5] Linguistically modulated perception and cognition: The label-feedback hypothesis, Lupyan, 2012;
[6] A Survey of Reinforcement Learning Informed by Natural Language, Luketina et al., 2019;
[7] Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning, Hanjie et al., 2021;
[8] Vision and language navigation: Interpreting visually-grounded navigation instructions in real environments, Anderson et al., 2017;
[9] Robots that use language, Tellex et al., 2020;
[10] Reinforcement Learning for mapping instructions to actions, Branavan et al., 2009;
[11] Understanding natural language commands for robotic navigation and mobile manipulation, Tellex et al., 2011;
[12] Learning to interpret natural language navigation instructions from observation, Chen et al., 2011;
[13] Natural language communication with robots, Bisk et al., 2016;
[14] Listen, Attend, and Walk: neural mapping of navigational instructions to action sequences, Mei et al., 2016;
[15] Grounded language learning in a simulated 3D world, Hermann et al., 2017;
[16] Imitating interactive intelligence, Abramson et al., 2020;
[17] Language conditioned imitation learning over unstructured data, Lynch and Sermanet, 2020;
[18] Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning, DeepMind Interactive Agents Team, 2021;
[19] Modular Multitask Reinforcement Learning with Policy Sketches, Andreas et al., 2016;
[20] Language as an Abstraction for Hierarchical Deep Reinforcement Learning, Jiang et al., 2019;
[21] ELLA: Exploration through Learned Language Abstraction, Mirchandani et al., 2021;
[22] Improving Intrinsic Exploration with Language Abstractions, Mu et al., 2021;
[23] Grounding Language for Transfer in Deep Reinforcement Learning, Narasimhan et al., 2017;
[24] RTFM: Generalising to Novel Environment Dynamics via Reading, Zhong et al., 2020;
[25] Learning to Understand Goal Specifications by Modelling Reward, Bahdanau et al., 2018;
[26] Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation, Wang et al., 2018;
[27] Using Natural Language for Reward Shaping in Reinforcement Learning, Goyal et al., 2019;
[28] Learning to Win by Reading Manuals in a Monte-Carlo Framework, Branavan et al., 2012;
[29] What can you do with a rock? Affordance extraction via word embeddings, Fulda et al., 2017;
[30] Keep CALM and explore: language models for action generation in text-based games, Yao et al., 2021;
[31] Embodied Question Answering, Das et al., 2018;
[32] TextWorld: A Learning Environment for Text-based Games, Côté et al., 2018;
[33] Interactive Fiction Games: A Colossal Adventure, Hausknecht et al., 2019;
[34] BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning, Chevalier-Boisvert et al., 2019;
[35] ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks, Shridhar et al., 2019;
[36] Habitat 2.0: Training Home Assistants to Rearrange their Habitat, Szot et al., 2021;
[37] Grounded language learning fast and slow, Hill et al., 2020;
[38] Tell me why! Explanations support learning of relational and causal structure, Lampinen et al., 2021;
[39] Skill induction and planning with latent language, Sharma et al., 2021;
[40] Leveraging Language to Learn Program Abstractions and Search Heuristics, Wong et al., 2021;
[41] ALFWorld: Aligning Text and Embodied Environments for Interactive Learning, Shridhar et al., 2021;
[42] Language as a cognitive tool to imagine goals in curiosity-driven exploration, Colas et al., 2021;
[43] Thought and Language, Vygotsky, 1934;
[44] Language and Thought, Carruthers et al., 1998;
[45] https://larel-ws.github.io/
[46] https://icml.cc/virtual/2020/workshop/5748
[47] Language models are few-shot learners, Brown et al., 2020;
[48] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, Raffel et al., 2019;
[49] Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents, Huang et al., 2022;
[50] Pre-Trained Language Models for Interactive Decision-Making, Li et al., 2022;
[51] Evidence from an emerging sign language reveals that language supports spatial cognition, Pyers et al., 2010;
[52] Vygotskian Autotelic Artificial Intelligence: Language and Culture Internalization for Human-Like AI, Colas et al., 2022