DGLR-Publikationsdatenbank - Detail View

Author(s):
M. Badri, M. Gewehr, S. Klinkner
Abstract:
Planetary rovers have proven to be invaluable assets within space exploration missions, providing profound insights and expanding scientific knowledge. As the frontiers of space exploration continue to expand, there is a growing demand for rovers with advanced capabilities and a higher degree of autonomy. A key aspect of future rover generations involves their ability to autonomously manipulate objects in extraterrestrial environments, particularly in missions involving sample collection, analysis, or return. However, this task presents significant challenges due to the unknown nature of objects and the complex terrains encountered. To address these challenges, this paper investigates the potential of deep learning techniques to enhance autonomous robotic grasping in extraterrestrial environments. The proposed approach introduces an end-to-end grasp estimation system, enabling rovers equipped with robotic arms to autonomously execute grasping actions based solely on visual information provided by their on-board sensors. Transfer learning is employed, harnessing the power of pre-trained deep learning models from computer vision applications and pre-existing public grasping datasets. These models are then fine-tuned using a self-generated dataset containing objects relevant to manipulation tasks in space exploration missions. To bridge the gap between space exploration and deep learning research, a pipeline is introduced to automatically generate a large, labelled dataset of objects suitable for autonomous grasping in planetary exploration missions. Additionally, a 3D planetary robot simulation environment is developed as the core platform for generating and automatically labelling synthetic custom data, emulating conditions encountered in extraterrestrial environments. Preliminary results demonstrate the promising ability of the system to successfully grasp novel objects based only on RGB-D visual information, achieving a success rate of 85%. This approach enables future rovers equipped with human-like grasping abilities to operate without prior knowledge of target objects, while maintaining resource efficiency, thereby enhancing autonomous on-board decision-making in space missions.
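The abstract does not give implementation details, but the transfer-learning idea it describes can be illustrated with a rough sketch: an ImageNet-pre-trained CNN backbone adapted to 4-channel RGB-D input and fine-tuned to regress a planar grasp. The backbone choice (ResNet-18), the 4-parameter grasp output (x, y, angle, width), and all names below are illustrative assumptions, not the architecture used in the paper.

```python
# Hedged sketch: fine-tuning a pre-trained CNN for grasp regression from
# RGB-D input. Backbone, grasp parametrisation and layer handling are
# assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class GraspEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        # Transfer learning: start from an ImageNet-pre-trained backbone.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Adapt the first conv layer to accept a 4th (depth) channel.
        old = self.backbone.conv1
        self.backbone.conv1 = nn.Conv2d(4, old.out_channels,
                                        kernel_size=old.kernel_size,
                                        stride=old.stride,
                                        padding=old.padding,
                                        bias=False)
        with torch.no_grad():
            # Reuse the RGB filters; initialise the depth channel with their mean.
            self.backbone.conv1.weight[:, :3] = old.weight
            self.backbone.conv1.weight[:, 3:] = old.weight.mean(1, keepdim=True)
        # Regression head: planar grasp (x, y, rotation angle, gripper width).
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 4)

    def forward(self, rgbd):
        # rgbd: batch of stacked RGB-D images, shape (B, 4, H, W)
        return self.backbone(rgbd)


if __name__ == "__main__":
    model = GraspEstimator()
    dummy = torch.rand(2, 4, 224, 224)   # synthetic RGB-D batch
    print(model(dummy).shape)             # torch.Size([2, 4])
```

In such a setup the network would be fine-tuned on the automatically labelled synthetic dataset mentioned in the abstract, with grasp annotations coming from the simulation environment rather than manual labelling.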
Event:
Deutscher Luft- und Raumfahrtkongress 2023, Stuttgart
Publisher, Place:
Deutsche Gesellschaft für Luft- und Raumfahrt - Lilienthal-Oberth e.V., Bonn, 2024
Media Type:
Conference Paper
Language:
English
Format:
21.0 x 29.7 cm, 24 pages
URN:
urn:nbn:de:101:1-2024011713031313998666
DOI:
10.25967/610429
Keywords:
Space Robotics, Deep Learning, Autonomous Robotic Grasping, RGB-D Perception, Transfer Learning, Convolutional Neural Network, End-to-End Grasping, Object Manipulation
Availability:
Download - Please note the terms of use for this document: Copyright protected
Comment:
Citation:
Badri, M.; Gewehr, M.; Klinkner, S. (2024): Autonomous Robotic Grasping using Deep Learning Methods for Resource Efficient Space Robotics Applications. Deutsche Gesellschaft für Luft- und Raumfahrt - Lilienthal-Oberth e.V. (Text). https://doi.org/10.25967/610429. urn:nbn:de:101:1-2024011713031313998666.
Published on:
17.01.2024