Grasping Time and Pose Selection for Robotic Prosthetic Hand Control Using Deep Learning Based Object Detection
Hae-June Park, Bo-Hyeon An, Su-Bin Joo , Oh-Won Kwon, Min Young Kim*, and Joonho Seo*
International Journal of Control, Automation, and Systems, vol. 20, no. 10, pp.3410-3417, 2022
Abstract : This paper presents an algorithm to control a robotic prosthetic hand by applying deep learning (DL) to select a grasping pose and a grasping time from 2D images and 3D point clouds. The algorithm consists of four steps: 1) acquisition of 2D images and 3D point clouds of objects; 2) object recognition in the 2D images; 3) grasping pose selection; 4) choice of a grasping time and control of the prosthetic hand. Grasping pose selection is necessary when the algorithm detects multiple objects in the same frame and must decide which pose the prosthetic hand should use; the pose was chosen considering the object closest to the prosthesis. After the grasping pose was selected, the grasping time was determined by the moment at which the hand approached the selected target within an empirically determined distance threshold. The proposed method achieved 89% accuracy in grasping the intended object. The failures occurred because of slight inaccuracies in object localization, occlusion of target objects, and failures of the DL object detector. Work to resolve these shortcomings is ongoing. This algorithm will help to improve convenience for users of a prosthetic hand.
Keywords : Computer vision, grasping pose selection, grasping time selection, point cloud, 3D distance.
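The distance-based target selection and grasp-timing rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the centroid-based object position, and the threshold value are all assumptions.

```python
import math

# Assumed threshold: trigger the grasp when the hand is within this 3D
# distance of the target (the paper states the value was found empirically).
GRASP_DISTANCE_M = 0.10

def centroid(points):
    """Mean 3D position of an object's point-cloud segment."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def select_target(hand_pos, object_clouds):
    """Pose selection step: return (index, distance) of the detected
    object whose point-cloud centroid is closest to the prosthetic hand."""
    best = None
    for idx, cloud in enumerate(object_clouds):
        d = math.dist(hand_pos, centroid(cloud))
        if best is None or d < best[1]:
            best = (idx, d)
    return best

def should_grasp(distance, threshold=GRASP_DISTANCE_M):
    """Grasp-timing step: close the hand once within the threshold."""
    return distance < threshold
```

In use, the hand position and segmented object clouds would come from the depth sensor at each frame; once `should_grasp` returns true for the selected target, the controller would command the grasping pose chosen for that object.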