RANET: A Grasp Generative Residual Attention Network for Robotic Grasping Detection
Qian-Qian Hong, Liang Yang*, and Bi Zeng
International Journal of Control, Automation, and Systems, vol. 20, no. 12, pp. 3996-4004, 2022
Abstract: This paper presents a novel grasp generative residual attention network (RANET) for generating antipodal robotic grasps from multi-modal images with a pixel-wise method. To strengthen generalization to unknown objects, this paper proposes a new structure that differs from previous grasp generative networks in that it additionally integrates a coordinate attention mechanism and a symmetrical skip connection: the coordinate attention module emphasizes meaningful information in the feature map, while the symmetrical skip connection retains more fine-grained feature details. Moreover, a multi-atrous convolution module is included in the structure to capture more high-level information, and a hypercolumn feature fusion method is incorporated to exploit the complementary features of different layers. Evaluation on public datasets demonstrates state-of-the-art performance: 98.9% accuracy on the Cornell dataset at real-time speed (~17 ms), and 93.9% accuracy on the Jacquard dataset.
Keywords: Convolutional neural networks, deep learning, grasping detection, vision.
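Pixel-wise grasp generative networks of the kind the abstract describes typically output per-pixel maps for grasp quality, orientation (commonly encoded as the sine and cosine of twice the grasp angle, since antipodal grasps are symmetric under 180° rotation), and gripper width; the best grasp is read off at the highest-quality pixel. The sketch below illustrates this decoding step under those common conventions; the map names and encoding are assumptions, not RANET's exact heads, which the abstract does not specify.

```python
import numpy as np

def decode_best_grasp(quality, cos2theta, sin2theta, width):
    """Decode the top-scoring antipodal grasp from pixel-wise output maps.

    Assumed convention (common in grasp generative networks, e.g. GG-CNN style):
      quality   -- (H, W) grasp quality score per pixel
      cos2theta -- (H, W) cosine of twice the grasp angle
      sin2theta -- (H, W) sine of twice the grasp angle
      width     -- (H, W) gripper opening width per pixel
    Returns the pixel (row, col), grasp angle in radians, and width.
    """
    # Highest-quality pixel is taken as the grasp centre.
    idx = np.unravel_index(np.argmax(quality), quality.shape)
    # Recover the angle from the 2-theta encoding; halving undoes the
    # doubling used to make the representation rotation-symmetric.
    angle = 0.5 * np.arctan2(sin2theta[idx], cos2theta[idx])
    return idx, angle, width[idx]

# Illustrative example on tiny 4x4 maps: one confident grasp at (2, 1)
# with angle pi/4 (so sin(2*theta) = 1, cos(2*theta) = 0) and width 0.03.
q = np.zeros((4, 4)); q[2, 1] = 1.0
c = np.zeros((4, 4))
s = np.zeros((4, 4)); s[2, 1] = 1.0
w = np.zeros((4, 4)); w[2, 1] = 0.03
centre, angle, grip_width = decode_best_grasp(q, c, s, w)
```

In a full pipeline this pixel-level result would then be mapped back to camera and robot coordinates; only the image-space decoding is sketched here.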