IROS Workshop: Robotics Challenges for Machine Learning II

Date: September 22, 2008

Organizers

Russ Tedrake (MIT)
Nicholas Roy (MIT)
Jan Peters (Max Planck Institute for Biological Cybernetics)
Jun Morimoto (ATR)

Objectives and Topics

There is increasing interest in machine learning and statistics within the robotics community. At the same time, the learning community has increasingly turned to robots as motivating applications for new algorithms and formalisms. Rapid progress requires researchers from both disciplines to come together and agree on the challenges, problem formulations, and solution techniques. Specific themes of the workshop include:

  • learning models of robots, tasks or environments
  • learning plans and control policies by imitation and reinforcement learning
  • representations which facilitate learning, such as low-dimensional embeddings of movements
  • learning representations and task abstractions by unsupervised learning
  • probabilistic inference of task parameters from multi-modal sensory information
  • integration of learning into control architectures

This workshop will also serve to kick off the new IEEE Technical Committee (TC) on Robot Learning.

The workshop will include talks by a number of the top researchers in the field, who will articulate particular problems in robotics that will benefit from learning, as well as methods and progress to date. The workshop will also feature a peer-reviewed poster session, and many opportunities for discussion.

Intended Audience

The intended audience is robotics researchers who are actively engaged in machine learning research, or who are interested in exploring machine learning ideas in their future work. We would specifically like to encourage students to participate.

Workshop Schedule

September 22 (Monday), 09:00 to 19:00

Invited talks are 25 min + 5 min for questions/discussion

  • 09:00 - Introduction and Welcome (Russ Tedrake)
  • 09:05 - Russ Tedrake (MIT) - Learning Control at Intermediate Reynolds Numbers [pdf]
  • 09:30 - Ingmar Posner (Oxford) - Robot Learning in Urban Environments
  • 10:00 - Martin Riedmiller (University of Osnabrueck) - Learning on Real Robots - Methods and Applications [pdf]
  • 10:30 - Coffee break
  • 10:50 - Poster Spotlight 1 (Jan Peters)
  • 10:55 - Jun Morimoto (ATR) - Low-dimensional Feature Extraction for Policy Improvement
  • 11:25 - Tomohiro Shibata (NAIST) - Reinforcement Learning for Assisting Humans [pdf]
  • 11:55 - Dana Kulic (U Tokyo) - Incremental Learning of Full Body Motions [pdf]
  • 12:25 - Lunch
  • 14:00 - Poster Spotlight 2 (Jun Morimoto)
  • 14:05 - Nick Roy (MIT) - Bridging the gap between what humans mean and what robots see [pdf]
  • 14:35 - Chad Jenkins (Brown University) - Robot Learning from Multivalued Demonstration [pdf]
  • 15:05 - Manuel Lopes (Instituto Superior Técnico) - Learning and Development
  • 15:35 - Minoru Asada (Osaka University) - Cognitive Developmental Robotics: Seeking for the Principle of Development [pdf]
  • 16:05 - Coffee break - poster session setup
  • 16:20 - Poster Spotlight 3 (Nick Roy)
  • 16:25 - Jan Peters (Max Planck Institute for Biological Cybernetics)
  • 16:50 - Workshop wrap-up and transition to posters
  • 17:00 - Poster session setup
  • 17:10 - Poster session

Poster Session

We received a surprisingly large number of abstract submissions for the workshop. The following high-quality abstracts were accepted for presentation:

  1. Transfer Learning with Differently-abled Robots, by Lakshmanan Balaji and Balaraman Ravindran [pdf]
  2. Short- and Long-term Adaptation of Visual Place Memories for Mobile Robots, by Feras Dayoub and Tom Duckett [pdf]
  3. Conditional Random Fields for Outdoor Object Mapping, by Bertrand Douillard, Dieter Fox, and Fabio Ramos [pdf]
  4. Human Motion Prediction in a Human-Robot Joint Task, by Elena Gribovskaya, Aude Billard, and Cedric Bouzyd [pdf]
  5. Learning multi-objective robot control policies from demonstration, by Daniel H Grollman and Odest Chadwicke Jenkins [pdf]
  6. Invariant Feature Learning on a Mobile Robot, by Raia Hadsell and Yann LeCun [pdf]
  7. A finite and receding horizon neural controller in humanoid robotics, by Serena Ivaldi, Marco Baglietto, Giorgio Metta, Riccardo Zoppoli and Giulio Sandini [pdf]
  8. Autonomous on-line learning of reaching behavior in a humanoid robot, by Lorenzo Jamone, Lorenzo Natale, Giorgio Metta, and Giulio Sandini [pdf]
  9. First Steps in Building a Framework for Learning by Experimentation, by Alex Juarez, Timo Henne, Bjorn Kahl, Erwin Prassler, and Monica Reggiani [pdf]
  10. Learning to Model Articulated Objects in Unstructured Environments, by Dov Katz and Oliver Brock [pdf]
  11. Learning Non-Parametric Prediction and Observation Models for Bayesian Filtering via Gaussian Process Regression, by Jonathan Ko and Dieter Fox [pdf]
  12. On-line Human Motion prediction with Gaussian Process Dynamical Models for Human-Robot Interaction, by Takamitsu Matsubara, Sang-Ho Hyon, and Jun Morimoto [pdf]
  13. A Novel Framework for Learning Attention Control in a Multi-dimensional Sensory Space, by Maryam S. Mirian, Majid Nili Ahmadabadi, Hadi Firouzi, Babak N. Araabi, and Ronald R. Siegwart [pdf]
  14. 3D Probabilistic Representations for Vision and Action, by Justus H. Piater and Renaud Detry [pdf]
  15. Gaussian Processes for Robotic Regression Problems — Learning Local Smoothness and Scaling to Large Data Sets, by Christian Plagemann, Sebastian Mischke, Kristian Kersting, and Wolfram Burgard [pdf]
  16. Discretization of the State Space with a Stochastic Version of the Value Iteration Algorithm, by Cristina Pomares and Domingo Gallardo [pdf]
  17. A Two-level Model of Motor Learning, by Camille Salaun, Vincent Padois, Olivier Sigaud, and Anthony Truchet [pdf]
  18. Learning through Experience – Optimizing Performance by Repetition, by Angela Schoellig and Raffaello D’Andrea [pdf]
  19. Learning 2D Subspaces for User-Controlled Robot Grasping, by Aggeliki Tsoli and Odest Chadwicke Jenkins [pdf]