Publication | Closed Access
Learning Dense Visual Correspondences in Simulation to Smooth and Fold Real Fabrics
Citations: 53
References: 42
Year: 2021
Venue: Unknown
Topics: Realistic Rendering, Robotic Systems, Engineering, Dexterous Manipulation, Intelligent Robotics, Object Manipulation, Computer-aided Design, Fabric Manipulation Tasks, Image Analysis, Soft Robotics, Differentiable Rendering, Robotic Fabric Manipulation, Visual Computing, Dense Visual Correspondences, Robot Learning, Kinematics, Computational Geometry, Real-time Computer Graphics, Geometric Modeling, Initial Fabric Configuration, Machine Vision, Design, Motion Synthesis, Computer Science, Fold Real Fabrics, Computer Vision, Physically Based Animation, Natural Sciences, Robotic Manipulation, Robotics
Robotic fabric manipulation is challenging due to the infinite dimensional configuration space, self-occlusion, and complex dynamics of fabrics. There has been significant prior work on learning policies for specific fabric manipulation tasks, but comparatively less focus on algorithms which can perform many different tasks. We take a step towards this goal by learning point-pair correspondences across different fabric configurations in simulation. Then, given a single demonstration of a new task from an initial fabric configuration, these correspondences can be used to compute geometrically equivalent actions in a new fabric configuration. This makes it possible to define policies to robustly imitate a broad set of multi-step fabric smoothing and folding tasks. The resulting policies achieve 80.3% average task success rate across 10 fabric manipulation tasks on two different physical robotic systems. Results also suggest robustness to fabrics of various colors, sizes, and shapes. See https://tinyurl.com/fabric-descriptors for supplementary material and videos.
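The core mechanism the abstract describes is transferring a demonstrated action to a new fabric configuration by matching dense descriptors between images. A minimal sketch of that matching step, assuming descriptors have already been computed as per-pixel feature maps (the function name, array shapes, and nearest-neighbor metric here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def transfer_pick_point(demo_descriptors, new_descriptors, demo_pixel):
    """Transfer a demonstrated pick pixel to a new fabric configuration.

    demo_descriptors, new_descriptors: (H, W, D) float arrays of dense
        per-pixel descriptors for the demo and new images.
    demo_pixel: (row, col) where the action was demonstrated.

    Returns the (row, col) in the new image whose descriptor is the
    nearest neighbor (L2) of the demonstrated pixel's descriptor,
    i.e. a geometrically corresponding point on the fabric.
    """
    h, w, _ = new_descriptors.shape
    target = demo_descriptors[demo_pixel]  # descriptor at the demo pixel, shape (D,)
    # L2 distance from the target descriptor to every pixel's descriptor.
    dists = np.linalg.norm(new_descriptors - target, axis=-1)
    # Index of the best-matching pixel in the new image.
    return np.unravel_index(np.argmin(dists), (h, w))
```

A multi-step policy would repeat this lookup for each pick (and place) point of the demonstration, executing the geometrically equivalent action in the observed configuration.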