Human–robot interfaces have many applications, including prosthetics and artificial wrists [1], manufacturing and industrial assembly lines [3], surgery in medical robotics [4], hand rehabilitation [5], assisted living and care [6], soft wearable robotics [7], territory patrolling [8], drone-based delivery and logistics [9], military robotics [10], smart agriculture [11], and student teaching [13]. However, achieving efficient object grasping and dexterous manipulation in robots remains an open challenge [15]. When designing anthropomorphic robot hands, i.e., robotic manipulators whose structure (joints and links) resembles a human hand, it is challenging to obtain feedback from the actuators, sensors, and mechanical parts of the manipulator about the shape, texture, and other physical characteristics of the grasped object. Grasping is considered one of the essential skills that robots need to master before they can be successfully adopted as a replacement for many manual operations. The grasping task is commonly implemented using rigid robotic hands, which require accurate control with tactile sensor feedback. Robotic manipulation becomes very demanding when a gripper must produce a feasible grasping plan under uncertainty, for example, when items occupy unknown positions or some of their properties (such as shape) are unknown to the robot.
Developing accurate robot grasping for a wide range of object shapes is a serious challenge for order delivery and home service robotics. An anthropomorphic robot hand has to grasp and lift items without prior knowledge of their weight and damping characteristics. Optimizing the reliability and range of such grasps is complex due to the inherent uncertainty in control, sensing, and tactile feedback [16]. The robotic hand control parameters depend on the geometric properties of the item, such as its shape and position, while grasping control is related to the structure of the gripper [17].
Currently, robots empowered by artificial intelligence algorithms can move objects by grasping them. Grasp detection based on neural networks can help robots precisely perceive their surroundings. For example, Alkhatib et al. [18] evaluated the grasp robustness of a three-fingered robotic hand based on the position and movement speed measured at each joint, achieving 93.4% accuracy in predicting grasping stability for unknown gripped items using inexpensive tactile sensors. Dogar et al. [19] considered the problem of searching for optimal robot configurations for grasping operations during a collaborative part-assembly task as a constraint satisfaction problem (CSP). They proposed an algorithm that simplifies the problem by dividing it into a sequence of atomic grasping actions and optimizes the plan by removing unnecessarily repeated grasps. Gaudeni et al. [20] suggested an innovative grasping strategy based on a soft modular pneumatic surface, which uses pressure sensors to assess the item’s pose and center of mass and to detect contact between the robot’s gripper and the grasped item. The strategy was validated on multiple items of different shapes and dimensions. Golan et al. [21] developed a general-purpose, variable-structure robotic hand that can adapt to fit a wide range of objects. The adaptation is achieved by rearranging the hand’s structure for the desired grasp so that previously unseen items can be grasped. Homberg et al. [22] designed a soft hand to effectively grasp and recognize object shapes based on internal state measurements. Internal sensors allow both the hand configuration and the grasped item to be detected. A clustering algorithm is adopted to recognize each grasped item in the presence of uncertainty about the item’s shape. Hu et al. [23] proposed using a trained Gaussian process classifier to determine the feasibility of a robot’s grasping points.
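To make the idea in [23] concrete, the sketch below trains a binary feasibility classifier over candidate grasp points using scikit-learn’s GaussianProcessClassifier. The feature layout (position plus approach angle), the kernel, and the data are hypothetical placeholders, not the setup used by Hu et al.

```python
# Minimal sketch of a grasp-point feasibility classifier in the spirit of [23].
# The feature layout and data are assumptions; Hu et al.'s inputs may differ.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Each candidate grasp point: (x, y, z, approach_angle) -- assumed features
X_train = np.array([
    [0.10, 0.02, 0.05, 0.0],
    [0.12, 0.00, 0.04, 1.2],
    [0.30, 0.15, 0.02, 0.5],
    [0.28, 0.14, 0.03, 2.0],
])
y_train = np.array([1, 1, 0, 0])  # 1 = grasp succeeded, 0 = grasp failed

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.1))
gpc.fit(X_train, y_train)

candidate = np.array([[0.11, 0.01, 0.05, 0.6]])
p_feasible = gpc.predict_proba(candidate)[0, 1]  # probability grasp is feasible
print(f"Feasibility: {p_feasible:.2f}")
```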
Ji et al. [24] linked vision and robot hand grasping control to attain reliable and accurate item grasping in complex cluttered scenes. By fusing sensor data, they obtained real-time grasping control capable of manipulating various items of unknown weight, stiffness, and friction. Kang et al. [25] developed an integrated gripper that combines an under-actuated gripper with a suction gripping system for grasping different items in various environments. Experiments with a diverse range of items under various grasping scenarios demonstrated its grasping capability. Kim et al. [26] developed a 14-degree-of-freedom (DoF) robotic hand with five fingers and a wrist based on a tendon-driven mechanism that minimizes friction and optimizes efficiency and back-drivability for a human-like payload and compact dexterity. Mu et al. [27] constructed a robot with a prototype end-effector for picking kiwifruit, whose artificial fingers were fitted with fiber sensors to find the best position for grabbing the fruit without damaging it. Neha et al. [28] performed grasping simulations of a four-fingered robotic hand in MATLAB to demonstrate that the developed robotic hand model can grasp items of different shapes and sizes. Zhou et al. [29] proposed an intuitive grasping control approach for a custom 13-DoF anthropomorphic soft robotic hand. The Leap Motion controller acquires the human hand’s joint angles in real time, and these angles are mapped to the robotic hand’s joints. The hand was demonstrated to attain good grasping performance for safe and intuitive interactions without strict accuracy requirements.
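The joint-angle mapping used in [29] can be sketched as a simple linear rescaling of each captured human joint angle into the corresponding servo command range. The ranges and function names below are illustrative assumptions, not the calibration used by Zhou et al.

```python
# Illustrative sketch of mapping captured human joint angles to hand servos.
# The joint limits below are assumed values, not those of the hand in [29].

HUMAN_RANGE = (0.0, 1.57)   # assumed flexion range of a finger joint (radians)
SERVO_RANGE = (0.0, 180.0)  # assumed servo command range (degrees)

def map_joint(angle_rad: float) -> float:
    """Linearly rescale a human joint angle into the servo command range."""
    lo_h, hi_h = HUMAN_RANGE
    lo_s, hi_s = SERVO_RANGE
    # Clamp first so noisy sensor readings stay within bounds
    angle_rad = max(lo_h, min(hi_h, angle_rad))
    return lo_s + (angle_rad - lo_h) * (hi_s - lo_s) / (hi_h - lo_h)

# Example: per-finger flexion angles as read from one Leap Motion frame (hypothetical)
human_angles = [0.3, 0.8, 1.1, 0.9, 0.2]
servo_commands = [map_joint(a) for a in human_angles]
print(servo_commands)
```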
Neural networks and deep learning have been successfully adopted to improve the control of robotic hands and object grasping. For example, Mahler et al. [16] trained a separate Grasp Quality Convolutional Neural Network (GQ-CNN) for each robot gripper. The grasping policy was trained on the Dex-Net 4.0 dataset to maximize efficiency. The approach was validated on a bin-picking task with up to 50 diverse heaps of previously unseen items. James et al. [30] used Randomized-to-Canonical Adaptation Networks (RCANs) to train a vision-based grasping reinforcement learning unit in a simulator with no real-world data and then transferred it to the physical world, achieving a 70% zero-shot grasping success rate on unknown items. Setiawan et al. [31] suggested a data-driven approach for controlling robotic fingers to assist users in bi-hand item manipulation. A trained neural network controls the robotic finger motion. The method was tested on ten bimanual tasks, such as operating a tablet, holding a large item, grasping a bottle, and opening a bottle cap. Song et al. [32] performed robotic grasp detection using region proposal networks. Experiments on the Cornell Grasp and Jacquard datasets demonstrated high grasp detection accuracy. Yu et al. [33] suggested a vision-based grasping method combining a deep grasping guidance network (DgGNet) with a MobileNetV2 recognition network that can recognize occluded items, while DgGNet calculates the best grasp action for the item and controls the manipulator movement.
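To make the grasp-quality idea concrete, the toy sketch below scores a depth-image patch for a candidate grasp with a small convolutional network. It is only a stand-in inspired by the GQ-CNN approach of [16]; the actual Dex-Net architecture, inputs (which also encode the grasp pose), and training procedure differ, and all layer sizes here are arbitrary assumptions.

```python
# Toy stand-in for a grasp-quality network in the spirit of GQ-CNN [16];
# the real Dex-Net model, inputs, and training setup are more involved.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_grasp_quality_net(patch_size: int = 32) -> tf.keras.Model:
    """CNN scoring a depth-image patch centered on a candidate grasp."""
    depth_patch = layers.Input(shape=(patch_size, patch_size, 1), name="depth_patch")
    x = layers.Conv2D(16, 5, activation="relu")(depth_patch)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    quality = layers.Dense(1, activation="sigmoid", name="grasp_quality")(x)
    return models.Model(depth_patch, quality)

model = build_grasp_quality_net()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# At execution time, candidate grasps are ranked by predicted quality and the
# highest-scoring grasp is executed -- this is the policy idea in [16].
```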
A typical limitation of previous implementations is their high cost, which constrains wide adoption by end-users. Our approach uses 3D printing technology and consumer-grade Leap Motion (Leap Motion Inc., San Francisco, CA, USA) sensors to develop an anthropomorphic multi-finger robotic hand that is affordable to end-users and efficient in small-scale grasping applications.
This article aims to develop and evaluate a robotic hand that can identify the shape of the item it is holding, for custom grasping tasks. We trained the developed robotic hand on several simple-shaped items using supervised artificial intelligence techniques and hand gesture data acquired by the Leap Motion device. This paper is an extended version of the paper presented at the ICCSA’2020 conference [34]. In [34], we used the Naïve Bayes algorithm to predict the shape of objects grasped by the robotic hand based on its fingers’ angular positions, achieving an accuracy of 92.1%. In this paper, we further improve this result by adopting a hybrid classifier that combines deep feature extraction using a transforming autoencoder with a Support Vector Machine (SVM) for object shape recognition.
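A minimal sketch of such a hybrid pipeline is shown below: an autoencoder is trained on finger-angle vectors, its encoder output serves as a deep feature vector, and an SVM classifies the shape. For brevity, a plain dense autoencoder stands in for the transforming autoencoder; the data shapes, feature dimensionality, and labels are illustrative assumptions, not the paper’s actual configuration.

```python
# Minimal sketch of the deep-feature + SVM pipeline; a plain dense autoencoder
# stands in for the transforming autoencoder, and the data layout (15 joint
# angles per sample, 3 shape classes) is an assumption for illustration.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.svm import SVC

N_ANGLES = 15    # assumed: 3 joint angles for each of 5 fingers
N_FEATURES = 8   # dimensionality of the learned feature vector

# Autoencoder: angles -> compact features -> reconstructed angles
inputs = layers.Input(shape=(N_ANGLES,))
code = layers.Dense(N_FEATURES, activation="relu", name="code")(inputs)
outputs = layers.Dense(N_ANGLES, activation="linear")(code)
autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# X: finger-angle vectors captured via Leap Motion; y: shape labels (synthetic here)
X = np.random.rand(200, N_ANGLES).astype("float32")
y = np.random.randint(0, 3, size=200)  # e.g., 0=ball, 1=box, 2=cylinder

autoencoder.fit(X, X, epochs=20, batch_size=16, verbose=0)

# The encoder half extracts deep features, which the SVM then classifies
encoder = models.Model(inputs, code)
features = encoder.predict(X, verbose=0)
svm = SVC(kernel="rbf")
svm.fit(features, y)

print("Predicted shape:", svm.predict(encoder.predict(X[:1], verbose=0)))
```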
5. Discussion and Conclusions
An anthropomorphic robotic manipulator’s design allows the robot to be used efficiently in applications such as object grasping and to operate in environments that are better suited to human manual work [39]. Although a human hand has many unique characteristics that allow it to grab and hold objects of diverse shapes, an anthropomorphic robotic hand can still be used for repetitive object grabbing and moving tasks, such as on industrial conveyors [40] or in medical laboratories [41]. Specifically, reliable grasping and holding with versatile manipulative abilities cannot be achieved with a generic gripper and requires a multi-fingered hand [42]. The designed hand is a low-cost alternative to other known 3D-printed robotic hands [43].
Several previous works have used the Leap Motion sensor to recognize grasping movements and control a robotic hand, either physical or virtual. Moldovan and Staretu [46] used Leap Motion to control a virtual robotic hand by recognizing a human hand’s grasping movements. However, no quantitative evaluation of the ball-grasping experiment was performed. Zhang et al. [47] used the Leap Motion controller and a ray-detection rendering method to generate tactile feedback. They used four shape types (cube, ball, cylinder, and pyramid) for recognition, but evaluated shape recognition using only a qualitative 10-point scale. Zhou et al. [29] also used the Leap Motion controller to capture a human hand’s joint angles in real time. The human hand joint angle positions were then mapped to the robotic hand to perform object grasping. However, they also made no attempt to recognize the shape of an object.
In this paper, the robotic hand was designed to execute human-like grasping of items of various simple shapes, such as balls or rectangular boxes. Using the robotic hand and the Leap Motion device’s data, we achieved a 94.4% accuracy of shape recognition, which improves on the results reported in our previous paper [34].