Speaker Details

Anima Anandkumar

NVIDIA & Caltech

Bio:
Prof. Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, including the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator Awards from the DoD, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focuses on unsupervised AI, optimization, and tensor methods.

Keynote Title:
An Open-Ended Embodied Agent with Large Language Models.

Keynote Abstract:
We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement. Voyager interacts with GPT-4 via black-box queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent's abilities rapidly and alleviates catastrophic forgetting. Empirically, Voyager shows strong in-context lifelong learning capability and exhibits exceptional proficiency in playing Minecraft. It obtains 3.3x more unique items, travels 2.3x longer distances, and unlocks key tech-tree milestones up to 15.3x faster than the prior SOTA. Voyager can utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize.
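The third component above, iterative prompting, can be pictured as a refinement loop: generate a program, run it, and fold execution errors and self-verification results back into the next prompt. The sketch below is only an illustration of that loop with toy stand-ins; the function names and stubs are hypothetical and are not Voyager's actual API (the real system queries GPT-4 and executes code in Minecraft).

```python
def iterative_prompting(generate, execute, verify, max_rounds=4):
    """Refine a program using execution feedback and self-verification.

    generate(feedback) -> program   (stands in for an LLM query)
    execute(program)   -> (ok, result)  (stands in for the environment)
    verify(result)     -> bool      (stands in for self-verification)
    """
    feedback = ""
    for _ in range(max_rounds):
        program = generate(feedback)
        ok, result = execute(program)
        if ok and verify(result):
            return program      # success: could now be stored in a skill library
        feedback = result       # fold errors/feedback into the next prompt
    return None                 # gave up after max_rounds attempts

# Toy stand-ins: the "LLM" fixes its program once it sees an error message.
def toy_generate(feedback):
    return "fixed" if "error" in feedback else "buggy"

def toy_execute(program):
    return (True, "done") if program == "fixed" else (False, "error: bad plan")

print(iterative_prompting(toy_generate, toy_execute, lambda r: r == "done"))
# prints "fixed" after one round of error feedback
```

The loop terminates either by passing self-verification or by exhausting its round budget, which is the essential shape of feedback-driven program improvement described in the abstract.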

François Chollet

Google

Bio:
Dr. François Chollet is a software engineer and AI researcher at Google. He is best known for creating the Keras deep learning library. His interests include (i) understanding the nature of abstraction and developing algorithms capable of autonomous abstraction (i.e., general intelligence), (ii) democratizing the development and deployment of AI technology by making it easier to use and explaining it clearly, (iii) leveraging technology, in particular AI, to help people gain greater agency over their circumstances and reach their full potential, and (iv) understanding and simulating the early stages of human cognitive development (e.g., developmental psychology and cognitive robotics).

Keynote Title:
The missing rungs on the ladder to general AI.

Keynote Abstract:
We look at the state of reasoning capabilities in LLMs, investigate the origins of generalization in LLMs, and provide suggestions for how to bridge current gaps.

Jitendra Malik

Meta & UC Berkeley

Bio:
Prof. Jitendra Malik is the Arthur J. Chick Professor in the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. His group has worked on computer vision, computational modeling of biological vision, computer graphics, and machine learning. With broad interests across computer vision and machine learning, he is among the most experienced researchers in the vision community to situate recent progress in AI within the broader pursuit of machine intelligence.

Keynote Title:
Vision and Language for Long Range Video Understanding.

Elizabeth Spelke

Harvard University

Bio:
Prof. Elizabeth Spelke is the Marshall L. Berkman Professor of Psychology at Harvard University and an investigator at the NSF-MIT Center for Brains, Minds and Machines. Her laboratory focuses on the sources of uniquely human cognitive capacities, including capacities for formal mathematics, for constructing and using symbols, and for developing comprehensive taxonomies of objects. She probes the sources of these capacities primarily through behavioral research on human infants and preschool children, focusing on the origins and development of their understanding of objects, actions, people, places, number, and geometry. In collaboration with computational cognitive scientists, she aims to test computational models of infants’ cognitive capacities. In collaboration with economists, she has begun to take her research from the laboratory to the field, where randomized controlled experiments can serve to evaluate interventions, guided by research in cognitive science, that seek to enhance young children’s learning.

Keynote Title:
Three Foundations for Children’s Learning: Perception, Language, and Core Knowledge.

Jiajun Wu

Stanford University

Bio:
Jiajun Wu is an Assistant Professor of Computer Science at Stanford University, working on computer vision, machine learning, and computational cognitive science. Before joining Stanford, he was a Visiting Faculty Researcher at Google Research. He received his PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. Wu's research has been recognized through the AFOSR Young Investigator Research Program (YIP), the ACM Doctoral Dissertation Award Honorable Mention, the AAAI/ACM SIGAI Doctoral Dissertation Award, the MIT George M. Sprowls PhD Thesis Award in Artificial Intelligence and Decision-Making, the 2020 Samsung AI Researcher of the Year, the IROS Best Paper Award on Cognitive Robotics, and faculty research awards from JPMC, Samsung, Amazon, and Meta.

Keynote Title:
Concept Learning Across Domains and Modalities.