2019


Video support of the pre-print "Automated Generation of Reactive Programs from Human Demonstration for Orchestration of Robot Behaviors" (arXiv)

Playful was used to program an interactive dance on Softbank Robotics Pepper

Playful's dynamic behavior deployed for a public exhibition at the BMZ Innovation Forum, Berlin

A dynamic behavior developed using Playful and deployed on Softbank Robotics Pepper. It is built on top of NaoQi, ROS, OpenPose and PCL.

2018


Our work on machine ethics with Michael and Susan Anderson. This is a proof of concept for the deployment of ethically-capable robots in elderly care. When a patient refuses to take the medicine required for his/her health, what should the robot do? For the first time, Nao is able to reason ethically about the matter.

Playful was used for controlling the robot and bridging it to the ethical engine. Nao and the patient are tracked by a depth sensor (not seen in the video). The behavior is autonomous and reactive. No remote control or scripts were used.

The RGB stream from the head-mounted Xtion of Apollo (see video below). This corresponds to the robot's field of vision, as the Xtion is its only visual sensory input (the RGB stream is complemented with depth information).

Our compliant controller (see video below) in the context of a full interactive behavior. The full software stack is very modular, allowing us to rapidly create new robot behaviors.

More info on my blog.

For a longer video: https://www.youtube.com/watch?v=ivJ8ZdUPOPo

Compliant controller for smooth human-robot interaction, implemented on Max Planck's Apollo. Inverse kinematics is based on coordinate descent with a touch of online machine learning (using the SOMA algorithm, see here). Sensory information is provided by a head-mounted Xtion (not seen in the video).
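The actual controller also folds in online learning (SOMA) and runs on a 7-DOF arm; as a rough, hypothetical illustration of the coordinate-descent idea alone, here is a minimal planar cyclic-coordinate-descent sketch (the function names and the two-link arm are my own, not taken from the Apollo code):

```python
import math

def fk(angles, lengths):
    """Forward kinematics of a planar serial arm: returns the joint
    positions plus the end-effector position."""
    x = y = theta = 0.0
    points = [(x, y)]
    for a, l in zip(angles, lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        points.append((x, y))
    return points

def ccd_ik(angles, lengths, target, iterations=100, tol=1e-3):
    """Cyclic coordinate descent: repeatedly sweep the joints from tip
    to base, rotating each joint so the end effector swings toward the
    target. Each step optimizes one coordinate (joint) at a time."""
    angles = list(angles)
    for _ in range(iterations):
        ex, ey = fk(angles, lengths)[-1]
        if math.hypot(target[0] - ex, target[1] - ey) < tol:
            break  # close enough to the target
        for i in reversed(range(len(angles))):
            points = fk(angles, lengths)
            jx, jy = points[i]       # position of joint i
            ex, ey = points[-1]      # current end-effector position
            # rotate joint i by the angle between (joint -> effector)
            # and (joint -> target)
            a_eff = math.atan2(ey - jy, ex - jx)
            a_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += a_tgt - a_eff
    return angles
```

For example, a two-link arm of unit lengths reaching for (1.0, 1.0) converges in a few sweeps. The appeal of the approach is that each joint update is a cheap, closed-form rotation, with no Jacobian inversion required.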

The software stack:
  • Lower-level realtime control: SL running on a Xenomai-patched Ubuntu 14.04
  • Middleware: ROS Indigo, bridged to SL via ROS realtime
  • Behavior orchestration: Playful (see video below)

2017



An overview of Playful, a software framework for organizing Python primitives into dynamic behavior trees.
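Playful's own syntax is shown in the video; as a rough, hypothetical illustration of the underlying idea (Python primitives organized into a tree whose active branch is re-evaluated at every tick), a minimal sketch could look like this (the class names and API are mine, not Playful's):

```python
class Primitive:
    """A leaf behavior: runs its action whenever its condition holds."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition
        self.action = action

    def tick(self, state):
        if self.condition(state):
            self.action(state)
            return True
        return False

class Selector:
    """Inner node: ticks children in priority order until one runs.
    The tree is 'dynamic' because the branch that runs can change at
    every tick as the sensory state changes."""
    def __init__(self, *children):
        self.children = children

    def tick(self, state):
        for child in self.children:
            if child.tick(state):
                return True
        return False

# hypothetical usage: track a target when visible, otherwise idle
state = {"target_visible": True, "log": []}
tree = Selector(
    Primitive("track", lambda s: s["target_visible"],
              lambda s: s["log"].append("track")),
    Primitive("idle", lambda s: True,
              lambda s: s["log"].append("idle")),
)
tree.tick(state)  # the "track" branch runs while the target is visible
```

Ticking the tree in a loop gives the reactive flavor: when `target_visible` flips to False, the very next tick falls through to the idle primitive without any explicit transition code.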

No external sensor is used (sensory information is limited to the head-mounted RGB cameras).

Interfacing between Playful (see video above) and TiaGO (PAL Robotics).

The robot simply points to the cup. The achievement is that this was developed in only a week (from first contact with the robot to the release of this video). The long-term project is to implement ethical behavior in service robots, in collaboration with Michael and Susan Leigh Anderson.

I had the chance to perform this work directly at PAL Robotics' offices in Barcelona. It was a real pleasure to be their guest, not only because of their warm welcome, but also because I could see how seriously they take customer support. I am looking forward to working more with TiaGO (which should happen soon).

Insight into the ongoing project of implementing ethics in robots was presented at the AAAI Workshop 2016 (see here).


Primitives for head tracking and inverse kinematics.

The inverse kinematics code implements coordinate descent and is robot-independent (the same code is used on Max Planck's Apollo and PAL Robotics' TiaGO, see videos above). It works without modification on underactuated systems (such as Nao, in this video) and on overactuated systems (such as Apollo, see videos above).

The first video of Playful, a scripting language for organizing behavior primitives into dynamic behavior trees. A Kalman filter primitive is used to enhance robustness.
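The actual Playful primitive is not shown here; as an illustration of the kind of filtering involved, here is a minimal scalar Kalman filter (random-walk process model; the class name and noise values are hypothetical) that smooths noisy position measurements of a tracked target:

```python
class Kalman1D:
    """Scalar Kalman filter with a random-walk process model, for
    smoothing noisy position measurements of a tracked target."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=1e-1):
        self.x = x0   # state estimate (e.g. target position)
        self.p = p0   # estimate variance
        self.q = q    # process noise variance
        self.r = r    # measurement noise variance

    def update(self, z):
        # predict: state assumed constant, uncertainty grows
        self.p += self.q
        # correct: blend prediction and measurement via the Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Run as a primitive, `update()` would be called once per sensory frame; the filtered estimate feeds the tracking primitives instead of the raw, jittery detection, which is where the robustness comes from.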

2012


TDM was used to bridge existing primitives (see video right below) to the system for human-robot emotionally assisted interactions developed by my friend and colleague Anna Gruebler.

This work was published in Advanced Robotics (see here),
presented at the 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2011) (see here),
and an extension was presented at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013) (see here).

Nao performs a wide range of behaviors.

Because they are based on the same set of primitives, and because TDM makes it easy to organize primitives into coherent behaviors, each of these behaviors was programmed in very limited time.

This work was published in Robotics and Autonomous Systems (see here),
and related considerations on usability in robot programming were presented at the 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012) (see here).

Nao performs a quite complex behavior, including some planning and obstacle avoidance.

All the planning is done online, and the positions of the balls/target are unknown to the robot prior to execution.

This shows the effect of adding simple continuous motion optimization primitives to a behavior orchestrated by TDM (see the 2011 videos below). The primitive runs quietly in parallel with the primitives related to head tracking and kicking, and results in a very precise kick.

The boost in speed (compared to older videos) is mostly due to the upgraded video camera in the (at the time) latest release of Nao.

2011


Support video for work presented at the 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2011) (see here).

The paper presents concrete examples of behavior primitives and provides an insight on how these primitives can be combined to create robotic behaviors.

A behavior created using TDM. The objective of the robot is to bring the red and the green ball to the yellow pod. This behavior was obtained via association of simple primitives.

After a first scan of the table, the robot makes a plan for achieving its goal. You may notice that this plan fails (the red ball rolls to an unexpected place), yet the robot keeps going without stopping to replan. This is because some of the primitives are "cognitive" and perform continuous replanning.

First video of the TDM software (Targets-Drives-Means), whose objective is to make it easy to create behaviors based on associations of behavior primitives.

The challenge is that the final behavior must look "smooth", i.e. the transition from one behavior primitive to the next must be invisible to the viewer (in other words, the behavior should not look "robotic").