This research aims to develop and test perception and manipulation strategies that allow a robot to grasp and manipulate objects from a complex scene, e.g., an unstructured, self-occluding heap of reflective metallic parts in a manufacturing environment, or a heap of unknown, un-modelled and/or deformable waste materials in nuclear decommissioning or mixed-waste recycling. The project addresses the key challenges identified in the call, namely the grasping and manipulation of objects by robots using novel, hardware-independent, robust techniques composed of modularisable subtasks. General strategies will be developed that are reproducible across different hardware configurations; indeed, from the outset, the project focuses on robustness and reproducibility, the key concepts that connect all project objectives.

The fundamental scientific questions addressed in the project can be summarised as follows:

1) Robust visual data collection, segmentation and production of sets of graspable features in complex, difficult real scenes.
2) Grasp planning based on grasping visible features (instead of object models), and hardware-independent implementation of the grasping strategies.
3) Grasping and re-grasping strategies that best enable the desired post-grasp actions, including extrinsic dexterity, i.e., the exploitation of the environment or of the robot's dynamic capabilities.
4) Integration of all project components into an operational scheme implemented in the laboratory settings of all participants.

Regarding visual data collection and analysis (item 1), algorithms will be developed that operate in unstructured environments subject to uncertainty. The project will tackle difficult environments that are characteristic of a variety of industrial applications.
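To make item 1 concrete, a graspable feature can be as simple as a locally planar surface patch, extracted from a segmented point cloud, that fits within the gripper aperture. The sketch below is purely illustrative (the function names, the flatness threshold and the patch representation are assumptions, not the project's method): it scores each segmented patch by the ratio of its smallest to largest PCA eigenvalue, so that near-planar patches pass.

```python
import numpy as np

def patch_flatness(points):
    """Ratio of smallest to largest eigenvalue of the patch covariance:
    near zero means the 3-D point patch is locally planar."""
    centred = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centred.T))  # ascending order
    return eigvals[0] / max(eigvals[-1], 1e-12)

def find_graspable_patches(patches, gripper_width, flat_tol=0.05):
    """Keep segmented patches (dict of id -> Nx3 array) that are flat
    enough and whose smallest extent fits inside the gripper."""
    candidates = []
    for pid, pts in patches.items():
        extent = pts.max(axis=0) - pts.min(axis=0)
        if patch_flatness(pts) < flat_tol and extent.min() <= gripper_width:
            candidates.append(pid)
    return candidates
```

A full perception pipeline would of course add noise filtering, occlusion reasoning and learned feature detectors; the point here is only that candidate grasps are derived from observed geometry rather than from object models.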
We note that industrial benchmark datasets remain comparatively rare in the vision and robotics research communities, despite their clear economic importance and significant intellectual complexity. In item 2, the concept of graspable features will be developed and used to devise novel grasping strategies; means of evaluating the performance of these manipulation strategies will also be developed, so that the quality of the results can be assessed. By closing the perception-action loop around the detection of graspable features, the project will also provide tools for handling unknown objects in unknown environments. Item 3 follows directly from the concept of graspable features: a graspable feature may yield a sound temporary grasp, yet require re-grasping depending on the task to be performed. Finally, the integration of the project components will raise issues of implementation, real-time constraints and other practical limitations.
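The perception-action loop connecting items 1-3 can be caricatured as follows. All callables here (detect_features, plan_grasp, execute) are placeholders for the perception, planning and control modules the project will develop; the sketch only shows the control flow in which a temporary grasp is retried on a different feature when it does not support the task's post-grasp action.

```python
def grasp_with_regrasp(scene, task, detect_features, plan_grasp, execute,
                       max_attempts=3):
    """Closed-loop grasping on graspable features rather than object models.
    A feature may yield only a temporary grasp; if the executed grasp does
    not support the task, the loop re-perceives and re-grasps."""
    for _ in range(max_attempts):
        features = detect_features(scene)      # item 1: perception
        if not features:
            return None                        # nothing graspable yet
        grasp = plan_grasp(features, task)     # item 2: feature-based planning
        result = execute(grasp)
        if result.supports(task):              # item 3: re-grasp if needed
            return result
        scene = result.scene                   # the environment may have changed
    return None
```

In a real system the re-grasp branch could also invoke extrinsic dexterity, e.g., pushing a part against a fixture to expose a better feature before the next attempt.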
Experiments will be conducted in all participating research groups, initially using identical or similar equipment and then using different set-ups and configurations, in order to demonstrate generalisation, reproducibility and robustness. The perception aspect of the work will focus on visually complex, noisy and cluttered scenes; the manipulation aspect will focus on generality and reproducibility, by searching for graspable features rather than relying on object models. Finally, the project will generate a large amount of data, which will be logged, shared and made available to the international robotics research community as a set of public benchmark challenges, including training and testing data.
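For the shared benchmark data, one lightweight option is to log each grasp trial as one JSON line, so that records from different laboratories can be concatenated and filtered easily. The field names below are an illustrative schema only, not the project's final benchmark format.

```python
import json
import time

def log_trial(path, scene_id, features, grasp, outcome):
    """Append one grasp trial to a JSON Lines log file.
    outcome might be e.g. "success", "slip" or "regrasp"."""
    record = {
        "timestamp": time.time(),
        "scene_id": scene_id,
        "detected_features": features,
        "grasp": grasp,
        "outcome": outcome,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Raw sensor streams (images, point clouds) would be stored separately and referenced by scene_id, keeping the trial log small enough to version and share.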