Robotic motion planning for active environmental manipulation
Robotic motion planning usually happens in static environments, and we typically do not reason about how we can (or need to) change the environment in order to reach certain places or fulfill certain tasks. In reality, this is an important aspect that cannot be neglected: think of grabbing a chair to reach an object high up, or building a bridge to walk over a gap to another place (see [1]). Another setting can be found in the construction industry, where scaffolding must be built to facilitate work higher up. In these cases, the robot intentionally extends its reachable space by building supports (bridges or makeshift staircases). Traditional motion planning techniques require explicitly representing and enumerating all the different surfaces on which a robot could move, which usually involves case-specific assumptions and engineering heuristics. [1] Atlas Gets a Grip | Boston Dynamics: https://www.youtube.com/watch?v=-e1_QhJ1EhQ
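To make the idea of planning over environment modifications concrete, the toy sketch below treats both motion and "build a support" as actions in a single search problem; the 1-D grid, cost values, and all function names are illustrative assumptions, not part of the project.

```python
# Toy sketch (assumed setup): a 1-D grid where some cells are gaps. The robot
# can step onto a traversable cell, or pay extra cost to place a support in an
# adjacent gap, which makes that cell traversable. The search state is
# (robot cell, set of supported cells); Dijkstra over this augmented space
# finds plans that interleave motion and construction.
import heapq
import itertools

GRID = ["floor", "floor", "gap", "gap", "floor", "goal"]  # toy 1-D world
MOVE_COST, BUILD_COST = 1.0, 5.0

def traversable(cell, supports):
    return GRID[cell] != "gap" or cell in supports

def plan(start=0):
    counter = itertools.count()                  # tie-breaker for the heap
    frontier = [(0.0, next(counter), start, frozenset(), [])]
    seen = set()
    while frontier:
        cost, _, pos, supports, path = heapq.heappop(frontier)
        if (pos, supports) in seen:
            continue
        seen.add((pos, supports))
        if GRID[pos] == "goal":
            return cost, path
        for nxt in (pos - 1, pos + 1):
            if not 0 <= nxt < len(GRID):
                continue
            if traversable(nxt, supports):       # ordinary motion
                heapq.heappush(frontier, (cost + MOVE_COST, next(counter),
                                          nxt, supports,
                                          path + [f"move to {nxt}"]))
            else:                                # modify the environment first
                heapq.heappush(frontier, (cost + BUILD_COST, next(counter),
                                          pos, supports | {nxt},
                                          path + [f"build support at {nxt}"]))
    return None

print(plan())  # plan that builds supports over the gap, then walks across
```

The point of the sketch is that the state captures both the robot's configuration and the environment's, so building a support genuinely grows the reachable space instead of being handled by case-specific heuristics.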
Reinforcement learning for robotic 3D bin packing
Many industrial assembly tasks require high accuracy when manipulating parts, e.g., when inserting an object somewhere, such as a box that we would like to put in a bin. The required tolerances can vary wildly. While industrial manipulators are generally accurate in a controlled setting, as soon as the environment is not as structured as we would like it to be, we need to leverage feedback policies to correct for inaccuracies in estimation and for unexpected/unmodeled effects. The setting of interest in this work is robotic bin packing, i.e., enabling a robot to pick objects (initially boxes/parcels) and, as the focus of this work, place them in a box, where previously placed objects may obstruct the placement.
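As a rough illustration of the learning setup, the sketch below defines a minimal Gym-style bin-packing environment with a heightmap observation and a drop-position action; the class name, observation layout, box sampling, and reward shaping are assumptions made for this example, not the project's actual interface.

```python
# Hypothetical minimal bin-packing environment (all names and reward terms are
# illustrative). Observation: a top-down heightmap of the bin plus the next box
# dimensions; action: the (x, y) cell at which to drop the box. A feedback
# policy would then be trained on top of this interface with an RL algorithm.
import numpy as np

class BinPackingEnv:
    def __init__(self, bin_size=(10, 10), seed=0):
        self.bin_size = bin_size
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.heightmap = np.zeros(self.bin_size, dtype=float)
        self.next_box = self._sample_box()
        return self._obs()

    def _sample_box(self):
        return tuple(self.rng.integers(1, 4, size=3))   # (width, length, height)

    def _obs(self):
        return {"heightmap": self.heightmap.copy(), "next_box": self.next_box}

    def step(self, action):
        x, y = action
        w, l, h = self.next_box
        if x + w > self.bin_size[0] or y + l > self.bin_size[1]:
            return self._obs(), -1.0, True, {}           # invalid placement
        footprint = self.heightmap[x:x + w, y:y + l]
        base = footprint.max()                           # box rests on highest point
        self.heightmap[x:x + w, y:y + l] = base + h
        wasted = (base - footprint).sum()                # volume trapped under the box
        reward = -0.1 * base - 0.01 * wasted             # favour low, flat packings
        self.next_box = self._sample_box()
        done = self.heightmap.max() > 10                 # bin considered full
        return self._obs(), reward, done, {}

env = BinPackingEnv()
obs = env.reset()
obs, r, done, _ = env.step((0, 0))   # drop the first box in the corner
print(r, done)
```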
Motion Imitation and Control for Humanoid Robot using Diffusion
To achieve effective and natural interaction, humanoid robots may need to closely imitate human motion, which encompasses walking, object manipulation, and environmental interactions. Motion capture data, which records human motion with high precision, serves as an excellent resource for training robotic systems to replicate human movements. Diffusion models are a class of generative models designed to handle multi-modal distributions, making them highly suitable for complex motion generation tasks. Recent state-of-the-art methods use diffusion to produce human-like motions for character animation or to imitate human expert data for controlling robotic arms. The goal of this project is to explore diffusion approaches for imitating motion data from humans to obtain control policies for humanoid robots.
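As a rough illustration of the diffusion approach, the sketch below shows a standard DDPM-style training step on flattened motion-capture windows; the network architecture, window length, joint count, and timestep embedding are placeholder assumptions, not the method the project will develop.

```python
# Hypothetical minimal DDPM-style training step on motion-capture windows.
# Each sample is a short motion clip flattened to a vector; the network is
# trained to predict the noise added at a random diffusion step (the standard
# denoising objective). Numbers and architecture are illustrative only.
import torch
import torch.nn as nn

T = 100                                         # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

window, joints = 16, 23                         # clip length and joint count (made up)
dim = window * joints

model = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                      nn.Linear(256, 256), nn.SiLU(),
                      nn.Linear(256, dim))      # predicts the added noise
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(x0):                          # x0: (batch, dim) motion windows
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(1)
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward noising
    t_emb = (t.float() / T).unsqueeze(1)                  # crude timestep embedding
    pred = model(torch.cat([xt, t_emb], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

fake_mocap_batch = torch.randn(32, dim)         # stand-in for real mocap windows
print(training_step(fake_mocap_batch))
```

At inference time, the trained denoiser would be run in reverse from noise to synthesize motion windows, which can then be tracked by a low-level controller on the robot.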