Article

Design and Evaluation of Anthropomorphic Robotic Hand for Object Grasping and Shape Recognition

by Rahul Raj Devaraja 1, Rytis Maskeliūnas 1 and Robertas Damaševičius 2,*
1 Department of Multimedia Engineering, Kaunas University of Technology, 51423 Kaunas, Lithuania
2 Department of Software Engineering, Kaunas University of Technology, 51423 Kaunas, Lithuania
* Author to whom correspondence should be addressed.
Submission received: 13 November 2020 / Revised: 19 December 2020 / Accepted: 21 December 2020 / Published: 22 December 2020
(This article belongs to the Special Issue Selected Papers from ICCSA 2020)

Abstract

We developed an anthropomorphic multi-finger artificial hand for fine-scale object grasping tasks that senses the grasped object’s shape. The robotic hand was created using a 3D printer and has a servo bed for stand-alone finger movement. The data containing the robotic fingers’ angular positions are acquired using the Leap Motion device, and a hybrid Support Vector Machine (SVM) classifier is used for object shape identification. We trained the designed robotic hand on a few simple convex-shaped items similar to everyday objects (ball, cylinder, and rectangular box) using supervised learning techniques and achieved a mean object shape recognition accuracy of 94.4%.

1. Introduction

Human–robot interfaces have many applications, including prosthetics and artificial wrists [1,2], manufacturing and industrial assembly lines [3], surgery in medical robotics [4], hand rehabilitation [5], assisted living and care [6], soft wearable robotics [7], territory patrolling [8], drone-based delivery and logistics [9], military robotics applications [10], smart agriculture [11,12], and student teaching [13,14]. However, achieving efficient object grasping and dexterous manipulation capabilities in robots remains an open challenge [15]. When designing anthropomorphic robot hands, i.e., robotic manipulators whose structure (joints and links) is similar to that of a human hand, it is challenging to obtain feedback from the actuators, sensors, and mechanical parts of the manipulator about the shape, texture, and other physical characteristics of the grasped object. Grasping is considered one of the must-have skills that robots need to master before they can successfully replace many manual operations. The robotic grasping task is commonly implemented using rigid robotic hands, which require accurate control with tactile sensor feedback. Robotic manipulation becomes very demanding when a gripper needs a feasible plan for solving grasping problems under uncertainties such as items occupying unknown positions or having properties (like shape) unknown to the robot.
Developing accurate robot grasping for a wide range of object shapes is a serious challenge for order delivery and home service robotics. The anthropomorphic robot hand has to grasp and lift items without prior knowledge of their weight and damping characteristics. Optimizing the reliability and range of such grasps is complex due to the inherent uncertainty in control, sensing, and tactile feedback [16]. The robotic hand control parameters depend on the geometric properties of the item, such as its shape and position, while grasping control is related to the structure of the gripper [17].
Currently, robots empowered by artificial intelligence algorithms can accomplish moving objects through grasping. Grasp detection based on neural networks can help robots precisely perceive their surrounding environment. For example, Alkhatib et al. [18] evaluated the grasp robustness of a three-fingered robotic hand based on the position and movement speed measured at each joint, achieving 93.4% accuracy in predicting grasping stability for unknown gripped items using inexpensive tactile sensors. Dogar et al. [19] considered the problem of searching for optimal robot configurations for grasping operations during a collaborative part assembly task as a constraint satisfaction problem (CSP). They proposed an algorithm that simplifies the problem by dividing it into a sequence of atomic grasping actions and optimizes it by removing unnecessarily repeated grasps from the plan. Gaudeni et al. [20] suggested an innovative grasping strategy based on a soft modular pneumatic surface, which uses pressure sensors to assess the item’s pose and center of mass and to recognize the contact between the robot’s gripper and the grasped item. The strategy was validated on multiple items of different shapes and dimensions. Golan et al. [21] developed a general-purpose variable-structure robotic hand that can adapt to fit a wide range of objects. The adaptation is ensured by rearranging the hand’s structure for the desired grasp so that previously unseen items can be grasped. Homberg et al. [22] designed a soft hand that effectively grasps and recognizes object shapes based on internal state measurements. The internal sensors allow the hand configuration and the item to be detected. A clustering algorithm is adopted to recognize each grasped item in the presence of uncertainty about the item’s shape. Hu et al. [23] proposed using a trained Gaussian process classifier to determine the feasibility of a robot’s grasping points.
Ji et al. [24] linked vision and robot hand grasping control to attain reliable and accurate item grasping in a complex cluttered scene. By fusing sensor data, real-time grasping control was obtained that provided the capability to manipulate various items of unknown weight, stiffness, and friction. Kang et al. [25] developed an integrated gripper that combines an under-actuated gripper with a suction gripping system for grasping different items in various environments. Experiments using a diverse range of items under various grasping scenarios were executed to demonstrate the grasping capability. Kim et al. [26] developed a 14 degrees of freedom (DoF) robotic hand with five fingers and a wrist with a tendon-driven mechanism that minimizes friction and optimizes efficiency and back-drivability for a human-like payload and compact dexterity. Mu et al. [27] constructed a robot with a prototype end-effector for picking kiwifruit, whose artificial fingers were fitted with fiber sensors to find the best position for grabbing the fruit without any damage. Neha et al. [28] performed grasping simulations of a four-fingered robotic hand in Matlab to demonstrate that the developed robotic hand model can grasp items of different shapes and sizes. Zhou et al. [29] proposed an intuitive grasping control approach for a custom 13-DoF anthropomorphic soft robotic hand. The Leap Motion controller acquires the human hand joint angles in real time, and these are mapped onto the robotic hand joints. The hand was demonstrated to attain good grasping performance for safe and intuitive interactions without strict accuracy requirements.
Neural networks and deep learning have been successfully adopted to improve the control of robotic hands and object grasping. For example, Mahler et al. [16] trained separate Grasp Quality Convolutional Neural Networks (GQ-CNNs) for each robot gripper. The grasping policy was trained on the Dex-Net 4.0 dataset to maximize efficiency while using a separate GQ-CNN for each gripper. The approach was validated on a bin-picking task with up to 50 diverse heaps of previously unseen items. James et al. [30] used Randomized-to-Canonical Adaptation Networks (RCANs) to train a vision-based grasping reinforcement learning unit in a simulator with no real-world data and then transferred it to the physical world, achieving 70% zero-shot grasping success on unknown items. Setiawan et al. [31] suggested a data-driven approach for controlling robotic fingers to assist users in bi-hand item manipulation. The trained neural network is used to control the robotic finger motion. The method was tested on ten bimanual tasks, such as operating a tablet, holding a large item, grasping a bottle, and opening a bottle cap. Song et al. [32] performed robotic grasp detection using region proposal networks. The experiments performed on the Cornell grasp and Jacquard datasets demonstrated high grasp detection accuracy. Yu et al. [33] suggested a vision-based grasping method based on a deep grasping guidance network (DgGNet) and the recognition network MobileNetv2 that can recognize an occluded item, while DgGNet calculates the best grasp action for the grasped item and controls the manipulator movement.
A typical limitation of previous implementations is their high implementation cost, which constrains wide adoption by end-users. Our approach uses 3D printing technology and a consumer-grade Leap Motion (Leap Motion Inc., San Francisco, CA, USA) sensor to develop an anthropomorphic multi-finger robotic hand that is affordable to end-users and efficient in small-scale grasping applications.
This article aims to develop and evaluate a robotic hand that can identify the shape of the item it is holding for custom grasping tasks. We trained the developed robotic hand on several simple-shaped items using supervised artificial intelligence techniques and the hand gesture data acquired by the Leap Motion device. This paper is an extended version of the paper presented at the ICCSA’2020 conference [34]. In [34], we used the Naïve Bayes algorithm to predict the shape of objects grasped by the robotic hand based on its fingers’ angular positions, achieving an accuracy of 92.1%. In this paper, we further improve that result by adopting a hybrid classifier based on deep feature extraction using a transforming autoencoder and a Support Vector Machine (SVM) for object shape recognition.

2. Methods

2.1. Formal Definition of a Problem

From the perspective of mechanics, the hand is a multi-link “mechanism”. The kinematic chain of the bone links of the human hand “mechanism” can be represented by the diagram shown in Figure 1 (for simplicity, the palm is represented as a plane). We can consider the robotic arm as a spatial mechanism with 27 degrees of freedom (DoF). The hinges of a robotic finger have 3, 2, and 2 degrees of freedom, respectively; therefore, the total number of DoFs is 7. Except for the thumb, each finger has two joints with one DoF each and one joint with two DoFs. In addition to a wide variety of hand movements, the hand and the fingers also have great mobility, flexibility, and a wealth of possible movements. Within the reach of the fingers, this provides a grip on an object of almost any shape and makes it possible to perform various actions on objects with the help of the fingers.
To construct the kinematic model, the manipulator is specified by a base coordinate system and a coordinate system for each link. The base coordinate system is called the “zero” coordinate system (x_0, y_0, z_0); it is the inertial coordinate system of the manipulator. For each link, a Cartesian orthonormal coordinate system (x_i, y_i, z_i), i = 1, 2, …, n, is defined on the axis of its joint, where n is equal to the number of DoFs of the robotic manipulator. When the electric drive sets the i-th joint in motion, the i-th link begins to move relative to the (i−1)-th link, and since the i-th coordinate system is attached to the i-th link, it moves with it, so that the n-th coordinate system moves together with the last, n-th link of the manipulator.
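To make the chaining of link coordinate systems concrete, the following is a minimal sketch of planar forward kinematics for a single finger treated as a serial chain of revolute joints; the link lengths, joint angles, and function names are illustrative assumptions, not measurements of the printed hand.

```python
# Minimal sketch: forward kinematics of a single finger modelled as a planar
# serial chain of revolute joints (MCP -> PIP -> DIP).
import numpy as np

def rot_trans(theta, link_length):
    """Homogeneous transform: rotate by theta about z, then translate by link_length along the new x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, link_length * c],
                     [s,  c, link_length * s],
                     [0,  0, 1.0]])

def fingertip_position(joint_angles, link_lengths):
    """Chain the per-link transforms; the last column gives the fingertip (x, y)."""
    T = np.eye(3)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_trans(theta, length)
    return T[:2, 2]

# Example: proximal, medial, distal phalanx lengths (cm) and joint angles (rad)
print(fingertip_position([0.3, 0.5, 0.2], [4.0, 2.5, 2.0]))
```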
When considering the anthropomorphic robotic fingers only, each finger is composed of three links (called phalanxes), except for the thumb, which has only two phalanxes. The three phalanxes are called proximal, medial, and distal, while their joints are the metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) joints, respectively. The MCP joints have two DoFs, while the PIP and DIP joints have only one DoF each (see Figure 2). The origin of the coordinate system is assigned to the center of the MCP joint. The angle of the MCP joint is denoted as φ, the angle of the PIP joint as ε, and the angle of the DIP joint as τ.
Given an object σ in ℝ³ whose surface defines its 3D shape S, a grasp by a robotic hand manipulator can be defined as g = (C, θ), where C = (x, y, z) ∈ ℝ³ are the coordinates of the robotic hand fingers, and θ = (φ, ε, τ) is the Euler angle vector representing the 3D orientation of the fingers.
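As an illustration only, the grasp tuple g = (C, θ) can be represented by a simple data structure; the class and field names below are hypothetical and are not taken from our implementation.

```python
# Minimal sketch of the grasp representation g = (C, theta) from Section 2.1.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Grasp:
    C: Tuple[float, float, float]       # finger coordinates (x, y, z) in R^3
    theta: Tuple[float, float, float]   # joint angles (phi, epsilon, tau) in degrees

g = Grasp(C=(12.3, 4.1, 7.8), theta=(35.0, 22.5, 10.0))
```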

2.2. Outline

The supervised prediction of object shape using the robotic hand has four steps: (1) data acquisition and labeling; (2) feature selection and dimensionality reduction; (3) classification model training; (4) object shape prediction. The steps are summarized in Figure 3 and explained below.
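A high-level sketch of these four steps is given below, assuming the angle data have already been exported to a feature matrix with labels; the scaling step merely stands in for the transforming autoencoder described in Section 2.3, and all names are illustrative.

```python
# High-level sketch of the four-step pipeline in Figure 3 (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def run_pipeline(angles: np.ndarray, labels: np.ndarray):
    # 1. Data acquisition and labeling (assumed already collected and labeled)
    X_train, X_test, y_train, y_test = train_test_split(
        angles, labels, test_size=0.2, stratify=labels, random_state=0)

    # 2. Feature selection / dimensionality reduction
    #    (the paper uses a transforming autoencoder; scaling stands in here)
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # 3. Classification model training (SVM with RBF kernel, as in Section 4.3)
    clf = SVC(kernel="rbf").fit(X_train, y_train)

    # 4. Object shape prediction
    return clf.predict(X_test), clf.score(X_test, y_test)
```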
For data acquisition, we employ the Leap Motion sensor and the hand motion theory described in detail in [35]. Note that here we recognize the motions of the anthropomorphic robotic hand rather than a human hand. The Leap Motion controller uses two infrared (IR) cameras and three IR LEDs. These cameras track IR light, which lies outside the light spectrum visible to the human eye.
The captured image data are sent to a personal computer (PC) to extract tracking information about the hand, its fingers, and the grasped object. The Leap Motion SDK has an inbuilt function that recognizes each finger of the hand. The angles between each finger’s proximal and intermediate bones are calculated (see “Raw data” in Figure 3) and used for further processing. The gripping tasks are then performed using the robotic hand, and the data are labeled based on the shape of the gripped object. All the finger data are captured and streamed for pre-processing (denoising and normalization). The collected dataset contains the angle values of each separate finger for 1200 instances of three differently shaped objects: Ball, Cylinder, and Rectangular Box (see Figure 4). The data attributes represent the three angles (φ, ε, τ) between the bones of the individual fingers.
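As an illustration of the angle extraction, the following minimal sketch computes the angle between two bone direction vectors such as those exposed by the Leap Motion SDK; the example vectors are stand-ins, not recorded sensor output.

```python
# Minimal sketch: angle (in degrees) between a finger's proximal and
# intermediate bone direction vectors.
import numpy as np

def bone_angle_deg(proximal_dir, intermediate_dir):
    """Angle in degrees between two 3D bone direction vectors."""
    a = np.asarray(proximal_dir, dtype=float)
    b = np.asarray(intermediate_dir, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(bone_angle_deg([0.0, -0.2, -1.0], [0.0, -0.6, -0.8]))  # approx. 25 degrees
```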

2.3. Architecture of the Classification Model

To implement the detector of robotic hand grasping actions, we use a transforming autoencoder [36], which is a subtype of the more general class of autoencoder neural networks [36]. The advantage of the transforming autoencoder is a compact representation of the input variables that is resistant to their variation. Next, we consider the architecture and operating principles of the transforming autoencoder. A transforming autoencoder is a neural network trained by the backpropagation method. It is based on the following principle: the output layer of the network is structurally equal to the input layer, and the input values are used as reference values for training the autoencoder; thus, the neural network learns to predict the same data that it receives at the input. The function encapsulated by such a network is, in the general case, trivial, but in the case of an autoencoder, an additional restriction is imposed on the network: the presence of a “bottleneck” in one of the intermediate (hidden) layers, i.e., a layer that has fewer neurons than the input layer. The neurons of such a layer thus form a compressed representation of the input data. Considering the use of non-linear activation functions and multiple autoencoder layers, such a representation can be both compact and accurate.
Unlike classical multilayer perceptrons, which have a homogeneous structure within a layer, the transforming autoencoder is a heterogeneous network consisting of several smaller networks. Each such network is called a capsule. All autoencoder capsules have the same structure, and each capsule contains one decision neuron, taking a value in the range [0,1], corresponding to the probability that the object is present in the input. The capsule encodes the spatial position of the object in a compact form corresponding to the selected representation coordinates. Thus, the network architecture (presented in Figure 5) allows one to obtain not only a compact object representation but also to assign an explicit semantic value to each element of the autoencoder code. Since the code generated by the transforming autoencoder represents the object positioning parameters, in cases where position information is available, it becomes possible to conduct explicit supervised learning by comparing the value approximated by the autoencoder with the given positioning values.
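The following is a minimal PyTorch sketch of a transforming autoencoder with capsules in the spirit of [36]; the layer sizes, number of capsules, and pose dimensionality are illustrative assumptions and do not reproduce our exact network.

```python
# Minimal sketch of a transforming autoencoder: each capsule has recognition
# units producing a pose code and a presence probability, a known
# transformation is added to the pose, and generation units reconstruct the
# input weighted by the presence probability.
import torch
import torch.nn as nn

class Capsule(nn.Module):
    def __init__(self, in_dim, hidden_dim, pose_dim):
        super().__init__()
        self.recognize = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.pose = nn.Linear(hidden_dim, pose_dim)      # compact code (positioning parameters)
        self.presence = nn.Linear(hidden_dim, 1)         # "decision neuron", probability in [0, 1]
        self.generate = nn.Sequential(nn.Linear(pose_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, in_dim))

    def forward(self, x, delta):
        h = self.recognize(x)
        pose = self.pose(h) + delta                      # add the known transformation to the pose
        p = torch.sigmoid(self.presence(h))
        return p * self.generate(pose)                   # contribution gated by presence probability

class TransformingAutoencoder(nn.Module):
    def __init__(self, in_dim=15, n_capsules=8, hidden_dim=32, pose_dim=3):
        super().__init__()
        self.capsules = nn.ModuleList(
            [Capsule(in_dim, hidden_dim, pose_dim) for _ in range(n_capsules)])

    def forward(self, x, delta):
        return sum(capsule(x, delta) for capsule in self.capsules)

# Example: 15 joint-angle inputs (3 angles x 5 fingers); a zero transformation
# means the network is trained to reproduce its own input.
model = TransformingAutoencoder()
x = torch.randn(4, 15)
reconstruction = model(x, torch.zeros(4, 3))
loss = nn.functional.mse_loss(reconstruction, x)
```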

3. Implementation of Robotic Hand

The robotic hand architecture used in this study is based on known robotic hand prototypes with a pulley–tendon transmission, whose fingers are moved by serial kinematic chains with revolute joints (e.g., see [37]). Such a design minimizes friction loss and improves performance and back-drivability. The robotic hand was 3D printed using an open-source design obtained from InMoov (http://inmoov.fr/) following the principles of pattern-based design [38]. The InMoov designs replicate the principal elements of human hand anatomy, such as bones, the ulna, joints, and tendons (Figure 6).
The system for recognizing the shape of held items consists of (1) servo actions, (2) an I/O interface, and (3) an algorithm suite (see Figure 7). As defined by the algorithm shown in Figure 7, the robotic hand first performs the gripping task on an object of unknown shape, and the pressure sensors register contact with the object. Then the servo motors controlling the movements of the hand tendons stop their action, and the spatial positions of the finger joints are recorded. The spatial positions of the fingers are transformed into angles (in degrees) and stored as a dataset. After data normalization, the data are sent to the spot-checking procedure, which performs data validity checking and removes measurements with corrupt (or unavailable) values. Then, the prediction of the shape of the gripped object is performed. If there are not enough data to make a reliable prediction, the gripping action is repeated.
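The following pseudocode-style sketch illustrates this control loop; the helper methods for the servos, pressure sensors, and classifier are hypothetical placeholders rather than the actual firmware interface.

```python
# Sketch of the control loop in Figure 7 (hypothetical hand/classifier interface).
import numpy as np

def grasp_and_classify(hand, classifier, max_attempts=3):
    for _ in range(max_attempts):
        hand.close_fingers_until_contact()       # servos stop when pressure sensors register contact
        angles = hand.read_joint_angles()        # joint positions converted to degrees
        angles = (angles - angles.mean()) / (angles.std() + 1e-9)   # normalization

        # Spot-checking: discard the attempt if any reading is corrupt or missing
        if np.any(~np.isfinite(angles)):
            hand.open_fingers()
            continue                             # repeat the gripping action

        return classifier.predict(angles.reshape(1, -1))[0]
    return None                                  # no reliable prediction obtained
```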

4. Data Collection and Results

4.1. Data Collection

The process of data collection and neural network training is summarized in Figure 8. As each finger has three bones joined together, three angular values are calculated for each finger. To recognize these bones and each finger of the hand, we use the in-built function provided by the Leap Motion SDK. Once all the finger angles are captured and stored against the hand holding the object, they are saved to a .csv file. The collected dataset covers three objects (ball, rectangular box, and cylinder), with grasps collected from 12 different people as an initial approximation for algorithm analysis. An example of the data collected for the different object grasping tasks is shown in Figure 9.
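As an illustration of the logging step, the sketch below appends one grasp sample (three angles per finger plus the object label) to a .csv file; the column names are illustrative and not the exact headers of our dataset.

```python
# Minimal sketch of per-grasp data logging to a .csv file.
import csv

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
ANGLES = ["phi", "epsilon", "tau"]
HEADER = [f"{f}_{a}" for f in FINGERS for a in ANGLES] + ["label"]

def append_sample(path, finger_angles, label):
    """finger_angles: dict mapping finger name -> (phi, epsilon, tau) in degrees."""
    row = [finger_angles[f][i] for f in FINGERS for i in range(3)] + [label]
    with open(path, "a", newline="") as fh:
        csv.writer(fh).writerow(row)

append_sample("grasps.csv",
              {f: (30.0, 20.0, 10.0) for f in FINGERS},
              "Ball")
```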

4.2. Analysis of Features

The data encoded using the autoencoder network are used as features characterizing the finger positions and joint bending. We evaluated the importance of the angular features for each type of shape using the two-sample t-test with a pooled variance estimate. The results of the feature ranking are presented in Figure 10. Note that different features are relevant for the recognition of different item shapes.
For recognizing the shape of the Ball object, the most relevant features are provided by the Thumb and Index fingers; for the Rectangular Box object, by the Ring and Thumb fingers; and for the Cylinder object, by the Thumb finger.
For example, the value distributions of the most informative features (according to the results of the feature ranking) are shown for the Middle and Pinky finger angles in Figure 11, the Ring and Index finger angles in Figure 12, and the Pinky and Thumb finger angles in Figure 13.
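For reference, the feature ranking described above can be reproduced with a one-vs-rest two-sample t-test using a pooled variance estimate, as in the minimal sketch below; the data in the example are random stand-ins, not our measurements.

```python
# Minimal sketch of feature ranking with a pooled-variance two-sample t-test
# (scipy's equal_var=True): larger |t| means a more relevant feature.
import numpy as np
from scipy.stats import ttest_ind

def rank_features(X, y, target_shape):
    """Return feature indices sorted by |t| for target_shape vs. the rest."""
    in_class, rest = X[y == target_shape], X[y != target_shape]
    t_stats, _ = ttest_ind(in_class, rest, axis=0, equal_var=True)  # pooled variance
    return np.argsort(-np.abs(t_stats))

# Example with random stand-in data: 1200 grasps, 15 angle features, 3 shapes
rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 15))
y = rng.choice(["Ball", "Cylinder", "Box"], size=1200)
print(rank_features(X, y, "Ball")[:5])   # five most discriminative features
```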

4.3. Evaluation of Results

Finally, we used the autoencoder features as an input to the Support Vector Machine (SVM) classifier with a radial basis function (RBF) kernel. The kernel has two hyperparameters, gamma and C; the best-fitting values were found using the grid search method. The classifier’s evaluation estimates how well the object shape recognition algorithm would work in a real-world environment.
To evaluate the classification performance quantitatively, we used 10-fold cross-validation. The mean accuracy of object shape recognition achieved is 94.4%. The confusion matrix of the results is given in Figure 14. Note that the Cylinder and Ball shapes are confused more often due to their similarity in shape.
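A minimal sketch of this evaluation procedure (RBF-kernel SVM, grid search over C and gamma, and 10-fold cross-validation) is given below; the grid values and stand-in data are illustrative assumptions, and the paper does not list the exact grid.

```python
# Sketch: RBF-kernel SVM with grid-searched C/gamma, scored by 10-fold CV.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.svm import SVC

def evaluate(features, labels):
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]},
        cv=5)
    grid.fit(features, labels)

    scores = cross_val_score(grid.best_estimator_, features, labels,
                             cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
    return grid.best_params_, scores.mean()

# Example with random stand-in features (autoencoder codes in the paper)
rng = np.random.default_rng(0)
print(evaluate(rng.normal(size=(300, 8)), rng.choice(3, size=300)))
```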

5. Discussion and Conclusions

An anthropomorphic robotic manipulator’s design allows the robot to be used efficiently in several applications, such as object grasping, that require the robot to operate in an environment better suited to human manual work [39]. Although the human hand has many unique characteristics that allow grabbing and holding objects of diverse shapes, the anthropomorphic robotic hand can still be used for repetitive object grabbing and moving tasks, such as on industrial conveyors [40] or in medical laboratories [41]. Specifically, the tasks of a multi-fingered hand that can grasp and hold reliably and has versatile manipulative abilities cannot be performed with a generic gripper [42]. The designed hand is a low-cost alternative to other known 3D printed robotic hands [43,44,45].
Several previous works have combined the Leap Motion sensor with recognition of grasping movements to control a robotic hand, either physical or virtual. Moldovan and Staretu [46] used Leap Motion to control a virtual robotic hand by recognizing a human hand’s grasping movements. However, no quantitative evaluation of the ball-grasping experiment was performed. Zhang et al. [47] used the Leap Motion controller and a ray detection rendering method to generate tactile feedback. They used four types of shape (cube, ball, cylinder, and pyramid) for recognition, but evaluated shape recognition using only a qualitative 10-point scale. Zhou et al. [29] also used the Leap Motion controller to capture a human hand’s joint angles in real time. The human hand joint angle positions were then mapped onto the robotic hand to perform object grasping. However, they also made no attempt to recognize the shape of the object.
In this paper, a robotic hand was designed to execute human-like grasping of items of various simple shapes, such as balls or rectangular boxes. Using the robotic hand and the data from the Leap Motion device, we achieved a 94.4% accuracy of shape recognition, which improves upon the results reported in our previous paper [34].

Author Contributions

All authors have contributed equally to this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yanco, H.A.; Norton, A.; Ober, W.; Shane, D.; Skinner, A.; Vice, J. Analysis of Human-robot Interaction at the DARPA Robotics Challenge Trials. J. Field Robot. 2015, 32, 420–444. [Google Scholar] [CrossRef]
  2. Bajaj, N.M.; Spiers, A.J.; Dollar, A.M. State of the art in artificial wrists: A review of prosthetic and robotic wrist design. IEEE Trans. Robot. 2019, 35, 261–277. [Google Scholar] [CrossRef]
  3. Lee, J.-D.; Li, W.-C.; Shen, J.-H.; Chuang, C.-W. Multi-robotic arms automated production line. In Proceedings of the 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018. [Google Scholar] [CrossRef]
  4. Beasley, R.A. Medical Robots: Current Systems and Research Directions. J. Robot. 2012, 2012, 401613. [Google Scholar] [CrossRef]
  5. Heung, K.H.L.; Tong, R.K.Y.; Lau, A.T.H.; Li, Z. Robotic glove with soft-elastic composite actuators for assisting activities of daily living. Soft Robot. 2019, 6, 289–304. [Google Scholar] [CrossRef] [PubMed]
  6. Malūkas, U.; Maskeliūnas, R.; Damaševičius, R.; Woźniak, M. Real time path finding for assisted living using deep learning. J. Univers. Comput. Sci. 2018, 24, 475–487. [Google Scholar]
  7. Kang, B.B.; Choi, H.; Lee, H.; Cho, K. Exo-glove poly II: A polymer-based soft wearable robot for the hand with a tendon-driven actuation system. Soft Robot. 2019, 6, 214–227. [Google Scholar] [CrossRef]
  8. Luneckas, M.; Luneckas, T.; Udris, D.; Plonis, D.; Maskeliunas, R.; Damasevicius, R. Energy-efficient walking over irregular terrain: A case of hexapod robot. Metrol. Meas. Syst. 2019, 26, 645–660. [Google Scholar] [CrossRef]
  9. Ivanovas, A.; Ostreika, A.; Maskeliūnas, R.; Damaševičius, R.; Połap, D.; Woźniak, M. Block matching based obstacle avoidance for unmanned aerial vehicle. In Artificial Intelligence and Soft Computing; Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J., Eds.; ICAISC 2018, Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10841. [Google Scholar] [CrossRef]
  10. Simon, P. Military Robotics: Latest Trends and Spatial Grasp Solutions. Int. J. Adv. Res. Artif. Intell. 2015, 4. [Google Scholar] [CrossRef] [Green Version]
  11. Zhang, B.; Xie, Y.; Zhou, J.; Wang, K.; Zhang, Z. State-of-the-art robotic grippers, grasping and control strategies, as well as their applications in agricultural robots: A review. Comput. Electron. Agric. 2020, 177. [Google Scholar] [CrossRef]
  12. Adenugba, F.; Misra, S.; Maskeliūnas, R.; Damaševičius, R.; Kazanavičius, E. Smart irrigation system for environmental sustainability in africa: An internet of everything (IoE) approach. Math. Biosci. Eng. 2019, 16, 5490–5503. [Google Scholar] [CrossRef] [PubMed]
  13. Burbaite, R.; Stuikys, V.; Damasevicius, R. Educational robots as collaborative learning objects for teaching computer science. In Proceedings of the IEEE International Conference on System Science and Engineering, ICSSE 2013, Budapest, Hungary, 4–6 July 2013; pp. 211–216. [Google Scholar] [CrossRef]
  14. Martisius, I.; Vasiljevas, M.; Sidlauskas, K.; Turcinas, R.; Plauska, I.; Damasevicius, R. Design of a neural interface based system for control of robotic devices. In Communications in Computer and Information Science; Skersys, T., Butleris, R., Butkiene, R., Eds.; Information and Software Technologies, ICIST 2012; Springer: Berlin/Heidelberg, Germany, 2012; Volume 319. [Google Scholar] [CrossRef]
  15. Billard, A.; Kragic, D. Trends and challenges in robot manipulation. Science 2019, 364. [Google Scholar] [CrossRef] [PubMed]
  16. Mahler, J.; Matl, M.; Satish, V.; Danielczuk, M.; DeRose, B.; McKinley, S.; Goldberg, K. Learning ambidextrous robot grasping policies. Sci. Robot. 2019, 4. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, C.; Zhang, X.; Zang, X.; Liu, Y.; Ding, G.; Yin, W.; Zhao, J. Feature sensing and robotic grasping of objects with uncertain information: A review. Sensors 2020, 20, 3707. [Google Scholar] [CrossRef] [PubMed]
  18. Alkhatib, R.; Mechlawi, W.; Kawtharani, R. Quality assessment of robotic grasping using regularized logistic regression. IEEE Sens. Lett. 2020, 4. [Google Scholar] [CrossRef]
  19. Dogar, M.; Spielberg, A.; Baker, S.; Rus, D. Multi-robot grasp planning for sequential assembly operations. Auton. Robot. 2019, 43, 649–664. [Google Scholar] [CrossRef] [Green Version]
  20. Gaudeni, C.; Pozzi, M.; Iqbal, Z.; Malvezzi, M.; Prattichizzo, D. Grasping with the SoftPad, a Soft Sensorized Surface for Exploiting Environmental Constraints With Rigid Grippers. IEEE Robot. Autom. Lett. 2020. [Google Scholar] [CrossRef]
  21. Golan, Y.; Shapiro, A.; Rimon, E.D. A variable-structure robot hand that uses the environment to achieve general purpose grasps. IEEE Robot. Autom. Lett. 2020, 5, 4804–4811. [Google Scholar] [CrossRef]
  22. Homberg, B.S.; Katzschmann, R.K.; Dogar, M.R.; Rus, D. Robust proprioceptive grasping with a soft robot hand. Auton. Robot. 2019, 43, 681–696. [Google Scholar] [CrossRef] [Green Version]
  23. Hu, J.; Sun, Y.; Li, G.; Jiang, G.; Tao, B. Probability analysis for grasp planning facing the field of medical robotics. Meas. J. Int. Meas. Confed. 2019, 141, 227–234. [Google Scholar] [CrossRef]
  24. Ji, S.; Huang, M.; Huang, H. Robot intelligent grasp of unknown objects based on multi-sensor information. Sensors 2019, 19, 1595. [Google Scholar] [CrossRef] [Green Version]
  25. Kang, L.; Seo, J.-T.; Kim, S.-H.; Kim, W.-J.; Yi, B.-J. Design and Implementation of a Multi-Function Gripper for Grasping General Objects. Appl. Sci. 2019, 9, 5266. [Google Scholar] [CrossRef] [Green Version]
  26. Kim, Y.-J.; Lee, Y.; Kim, J.; Lee, J.-W.; Park, K.-M.; Roh, K.-S.; Choi, J.-Y. RoboRay hand: A highly backdrivable robotic hand with sensorless contact force measurements. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014. [Google Scholar] [CrossRef]
  27. Mu, L.; Cui, G.; Liu, Y.; Cui, Y.; Fu, L.; Gejima, Y. Design and simulation of an integrated end-effector for picking kiwifruit by robot. Inf. Process. Agric. 2020, 7, 58–71. [Google Scholar] [CrossRef]
  28. Neha, E.; Suhaib, M.; Asthana, S.; Mukherjee, S. Grasp analysis of a four-fingered robotic hand based on matlab simmechanics. J. Comput. Appl. Res. Mech. Eng. 2020, 9, 169–182. [Google Scholar] [CrossRef]
  29. Zhou, J.; Chen, X.; Chang, U.; Pan, J.; Wang, W.; Wang, Z. Intuitive control of humanoid soft-robotic hand BCL-13. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Beijing, China, 6–9 November 2018; pp. 314–319. [Google Scholar] [CrossRef]
  30. James, S.; Wohlhart, P.; Kalakrishnan, M.; Kalashnikov, D.; Irpan, A.; Ibarz, J.; Bousmalis, K. Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 12619–12629. [Google Scholar] [CrossRef] [Green Version]
  31. Setiawan, J.D.; Ariyanto, M.; Munadi, M.; Mutoha, M.; Glowacz, A.; Caesarendra, W. Grasp posture control of wearable extra robotic fingers with flex sensors based on neural network. Electronics 2020, 9, 905. [Google Scholar] [CrossRef]
  32. Song, Y.; Gao, L.; Li, X.; Shen, W. A novel robotic grasp detection method based on region proposal networks. Robot. Comput. Integr. Manuf. 2020, 65. [Google Scholar] [CrossRef]
  33. Yu, Y.; Cao, Z.; Liang, S.; Geng, W.; Yu, J. A novel vision-based grasping method under occlusion for manipulating robotic system. IEEE Sens. J. 2020, 20, 10996–11006. [Google Scholar] [CrossRef]
  34. Devaraja, R.R.; Maskeliūnas, R.; Damaševičius, R. AISRA: Anthropomorphic Robotic Hand for Small-Scale Industrial Applications. In Proceedings of the 20th International Conference on Computational Science and its Applications, ICCSA 2020, Cagliari, Italy, 1–4 July 2020; pp. 746–759. [Google Scholar] [CrossRef]
  35. Vaitkevičius, A.; Taroza, M.; Blažauskas, T.; Damaševičius, R.; Maskeliūnas, R.; Woźniak, M. Recognition of American sign language gestures in a virtual reality using leap motion. Appl. Sci. 2019, 9, 445. [Google Scholar] [CrossRef] [Green Version]
  36. Hinton, G.E.; Krizhevsky, A.; Wang, S.D. Transforming Auto-Encoders//Artificial Neural Networks and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2011; pp. 44–51. [Google Scholar]
  37. Ospina, D.; Ramirez-Serrano, A. Sensorless in-hand manipulation by an underactuated robot hand. J. Mech. Robot. 2020, 12. [Google Scholar] [CrossRef]
  38. Damaševičius, R.; Majauskas, G.; Štuikys, V. Application of design patterns for hardware design. In Proceedings of the Design Automation Conference, Anaheim, CA, USA, 2–6 June 2003; pp. 48–53. [Google Scholar] [CrossRef]
  39. Jamil, M.F.A.; Jalani, J.; Ahmad, A.; Zaid, A.M. An Overview of Anthropomorphic Robot Hand and Mechanical Design of the Anthropomorphic Red Hand—A Preliminary Work. In Towards Autonomous Robotic Systems; Dixon, C., Tuyls, K., Eds.; TAROS 2015, Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9287. [Google Scholar] [CrossRef]
  40. Juočas, L.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R.; Woźniak, M. Multi-focusing algorithm for microscopy imagery in assembly line using low-cost camera. Int. J. Adv. Manuf. Technol. 2019, 102, 3217–3227. [Google Scholar] [CrossRef]
  41. Damaševičius, R.; Maskeliūnas, R.; Narvydas, G.; Narbutaitė, R.; Połap, D.; Woźniak, M. Intelligent automation of dental material analysis using robotic arm with Jerk optimized trajectory. J. Ambient Intell. Humaniz. Comput. 2020. [Google Scholar] [CrossRef]
  42. Neha, E.; Suhaib, M.; Mukherjee, S. Design Issues in Multi-finger Robotic Hands: An Overview. In Advances in Engineering Design; Prasad, A., Gupta, S., Tyagi, R., Eds.; Lecture Notes in Mechanical Engineering; Springer: Singapore, 2019. [Google Scholar] [CrossRef]
  43. Souhail, A.; Vassakosol, P. Low cost soft robotic grippers for reliable grasping. J. Mech. Eng. Res. Dev. 2018, 41, 88–95. [Google Scholar] [CrossRef]
  44. Khan, A.H.; Nower Khan, F.; Israt, L.; Islam, M.S. Thumb Controlled Low-Cost Prosthetic Robotic Arm. In Proceedings of the IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, India, 20–22 February 2019. [Google Scholar] [CrossRef]
  45. Andrews, N.; Jacob, S.; Thomas, S.M.; Sukumar, S.; Cherian, R.K. Low-Cost Robotic Arm for differently abled using Voice Recognition. In Proceedings of the 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23–25 April 2019. [Google Scholar] [CrossRef]
  46. Moldovan, C.C.; Staretu, I. An Anthropomorphic Hand with Five Fingers Controlled by a Motion Leap Device. Procedia Eng. 2017, 181, 575–582. [Google Scholar] [CrossRef]
  47. Zhang, Z.; Lu, X.; Hagihara, Y.; Yimit, A. Development of a high-performance tactile feedback display for three-dimensional shape rendering. Int. J. Adv. Robot. Syst. 2019, 16. [Google Scholar] [CrossRef]
Figure 1. Conceptual schematics of the robotic arm and its degrees of freedom. The joints of the robotic arm manipulator are shown by numbers.
Figure 2. Conceptual schematics of the anthropomorphic robot arm fingers.
Figure 3. Outline of methodology for training the robotic hand for object grasping tasks.
Figure 4. Structure of the object shape recognition dataset.
Figure 5. The architecture of the transforming autoencoder.
Figure 6. The 3D printed robotic hand: (a) interior and (b) exterior.
Figure 7. Control of robotic hand gripper action.
Figure 8. Training and prediction of robotic hand motion actions.
Figure 9. Example of data collected during grasping tasks. The angles for the proximal interphalangeal joint of each finger are shown.
Figure 10. The results of feature ranking using a two-sample t-test with pooled variance estimate criterion.
Figure 11. Value distribution of τ angle of the Thumb finger and ε angle of the Index finger (with 1, 2, and 3 σ confidence limits). Values are in angular units.
Figure 12. Value distribution of ε and φ angles of the Thumb finger (with 1, 2, and 3 σ confidence limits). Values are in angular units.
Figure 13. Value distribution of φ angle of the Ring finger and ε angle of the Thumb finger (with 1, 2, and 3 σ confidence limits). Values are in angular units.
Figure 14. Confusion matrix of object shape recognition results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
