Recent advances in robot learning depend heavily on deep learning approaches, which typically require significant computing power, long training times and large amounts of data. While excellent at solving specific tasks, the solutions produced by deep learning architectures tend to be unpredictably brittle. Biological organisms such as insects, on the other hand, have evolved robust intelligence and fast learning with sparse data and limited computational power. Learning in biological systems is an active process that emerges from the interaction of evolved brains, bodies and behaviours: behavioural strategies and specialised sensors actively structure sensory input, while selective attention directs learning to the most salient information, massively reducing the search space.
This workshop will bring together experts from robotics, neuroethology, behavioural neuroscience, computational neuroscience, reinforcement learning and deep learning to discuss challenges and synergies, and to explore new directions towards rapid and robust robot learning driven by behavioural strategies for active sensing and selective attention mechanisms.
The workshop will consist of invited talks, spotlight presentations, a poster session and interdisciplinary panel discussions. We aim to improve communication across a diverse set of scientists at various stages of their careers. To promote dialogue regardless of seniority, we will encourage senior presenters to share their presentations with the PhD students or postdocs who have contributed to the research. To encourage critical discussion, we will also encourage all presenters to include a description of the limitations of their work.
The focus topics for our workshop include, but are not restricted to:
- Zero-shot, one-shot and few-shot learning in robots
- Active learning in animals and robots
- Active sensing
- Selective attention
The questions we aim to address in the workshop include:
- How can sensory information be curated via selective attention mechanisms?
- How can sensory information be structured via directed behaviour?
- How can multi-sensory integration contribute to rapid learning?
- How can learning be bootstrapped via selective attention?
- Queensland University of Technology: Robot navigation and perception
- Donders Institute for Brain, Cognition and Behaviour: Exploration via electrosense & touch
- University of Massachusetts: Time-aware machine learning
- Carnegie Mellon University: Human-like generalization
- University of Sussex: Insect-inspired visual learning
- University of Sheffield: Insect-inspired robot learning
- University of Southern Denmark: Zero-, one- and few-shot learning
- University of Texas: Embodied AI
09:00 – 09:05: Welcome by organizers
09:05 – 09:30: Invited Talk #1
09:30 – 09:55: Invited Talk #2
09:55 – 10:30: Spotlight Sessions (5 min / paper)
10:30 – 11:10: Coffee Break & Poster Sessions (40 min.)
11:10 – 11:35: Invited Talk #3
11:35 – 12:00: Invited Talk #4
12:00 – 12:30: Q&A + Panel Discussion (Moderator: Danish Shaikh)
Topics for Q&A:
- Curating sensory information via selective attention mechanisms
- Structuring sensory information via directed behaviour
12:30 – 14:00: Lunch (1.5 hrs.)
14:00 – 14:25: Invited Talk #5
14:25 – 14:50: Invited Talk #6
14:50 – 15:15: Spotlight Sessions (5 min / paper)
15:15 – 15:55: Coffee Break & Poster Sessions (40 min.)
15:55 – 16:20: Invited Talk #7
16:20 – 16:45: Invited Talk #8
16:45 – 17:15: Q&A + Panel Discussion (Moderator: Danish Shaikh)
Topics for Q&A:
- Contribution of multi-sensory integration in rapid learning
- Bootstrapping learning via selective attention
17:15 – 17:30: Awards and concluding remarks