Recent advances in robot learning depend heavily on deep learning approaches, which typically require significant computing power, long training times and large amounts of data. While excellent at solving specific tasks, the solutions produced by deep learning architectures tend to be unpredictably brittle. Biological organisms such as insects, on the other hand, have evolved robust intelligence and fast learning with sparse data and limited computational power. Learning in biological systems is an active process emerging from the interaction of evolved brains, bodies and behaviours. These systems employ a combination of behavioural strategies and specialised sensors that actively structure sensory input, while selective attention directs learning to the most salient information, massively reducing the search space.
This workshop will bring together experts from robotics, neuroethology, behavioural neuroscience, computational neuroscience, reinforcement learning and deep learning to discuss challenges and synergies, and to explore new directions towards rapid and robust robot learning driven by behavioural strategies for active sensing and selective attention mechanisms.
The workshop will consist of invited talks, spotlight presentations, a poster session and interdisciplinary panel discussions. We aim to improve communication among a diverse set of scientists at various stages of their careers. To promote dialogue regardless of seniority, we will encourage senior presenters to share their presentations with PhD students or postdocs who have contributed to the research. To foster critical discussion, we will also ask all presenters to include a description of the limitations of their work.
The focus topics for our workshop include, but are not restricted to:
- Zero-shot, one-shot and few-shot learning in robots
- Active learning in animals and robots
- Active sensing
- Selective attention
The questions we aim to address in the workshop include:
- How to curate sensory information via selective attention mechanisms?
- How to structure sensory information via directed behavior?
- How can multi-sensory integration contribute to rapid learning?
- How to bootstrap learning via selective attention?