Article

3D Contact Position Estimation of Image-Based Areal Soft Tactile Sensor with Printed Array Markers and Image Sensors

1 HRI (Human Robot Interaction) Research Center, Korea Institute of Robotics and Technology Convergence, Pohang-si, Gyeongsangbuk-do 37553, Korea
2 School of Future Automotive & IT Convergence, Kyungpook National University, Daegu 41566, Korea
3 Safety System R&D Group, Korea Institute of Industrial Technology, Daegu 42994, Korea
4 School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
5 Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang-si, Gyeongsangbuk-do 37673, Korea
6 Research Center for Neurosurgical Robotic System, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work as first authors.
Submission received: 9 June 2020 / Revised: 29 June 2020 / Accepted: 3 July 2020 / Published: 7 July 2020
(This article belongs to the Special Issue Intelligent Sensors and Computer Vision)

Abstract
Tactile sensors have been widely used and researched in various fields of medical and industrial applications. Gradually, they will also be used as new input devices and contact sensors for interactive robots. If a tactile sensor is to be applied to various forms of human–machine interaction, it needs to be soft to ensure comfort and safety, and it should be easily customizable and inexpensive. The purpose of this study is to estimate the 3D contact position on a novel image-based areal soft tactile sensor (IASTS) using printed array markers and multiple cameras. First, we introduce the hardware structure of the prototype IASTS, which consists of a soft material with printed array markers and multiple cameras with LEDs. Second, an estimation algorithm for the contact position is proposed based on image processing of the array markers and their Gaussian fittings. A series of basic experiments was conducted and the results were analyzed to verify the effectiveness of the proposed IASTS hardware and its estimation software. To ensure the stability of the estimated contact positions, a Kalman filter was developed. Finally, it was shown that the contact positions on the IASTS were estimated with an error small enough for soft haptic applications.

1. Introduction

Tactile sensors have many uses and are expected to be beneficial in various fields. They will gradually be used not only for medical or industrial purposes but also as input devices and contact sensors for robots; tactile sensors for dexterous manipulation with robotic arms were developed in [1,2,3]. Tactile sensors for measuring biological tissues, such as tumors, were reported in [4,5]. A soft tactile sensor was applied to provide tactile feedback for grasping with a prosthetic limb [6]. Certain tactile sensors function as input devices for intuitive human–computer interaction [7]. If a tactile sensor is to be applied to various forms of human–machine interaction, it is important not only that it be soft, for comfort and safety, but also that it be easily customizable and inexpensive. Recently, there have been several studies of tactile sensors that use soft materials. Some tactile sensors consist of semiconductor transducers and soft materials, such as silicone rubber and polydimethylsiloxane [8,9,10,11]. They directly convert the applied force into an electrical signal based on electrophysical properties, such as the piezoelectric effect. However, because the softness of the material affects the sensitivity of the sensor signal and its performance, the material must be carefully selected and manufactured. These sensors also require a dedicated production process, and their mass production cost is quite high even if prototypes can be made at a relatively low cost. Alternatively, there are force/tactile sensors that use elastic materials whose deformation is detected by optical sensors [12,13,14]. These sensors have several light sources and photo-detectors covered by a soft material such as silicone rubber; they convert the mechanical deformation caused by an external force into an electrical signal using the photo-detectors. However, the spatial resolution of such a sensor depends on the number of light sources and photo-detectors. A tactile sensor based on a conductive liquid-filled skin was also proposed [15,16]. This tactile sensor is capable of force, vibration, and temperature sensing, similar to human touch, and these sensory capabilities were incorporated into the device without placing a single sensor in the skin. However, it requires a specific conductive liquid, and the complexly fabricated materials and their stiffness are difficult to change when customization is required for a given application. Recent research has produced a tactile sensor capable of measuring force as a vector [17,18]. The sensor is a complete embedded system and can be manufactured in a thin and compact form. The contact area where external pressure is applied consists of several layers of silicone, one of which is filled with numerous particles and serves as the element that expresses the externally applied force. The sensor can measure the directional force, but it cannot measure the pressed depth or the contact position. In addition, despite the small sensing area, four cameras are needed, which is a considerable disadvantage.
We developed an image-based soft tactile sensor using a small camera and general soft materials with attached array markers, which reveal the deformations of the sensor surface [17,18,19]. Other researchers have also investigated image-based tactile sensors with soft materials [20,21,22,23,24,25,26]. The methods proposed in [20,21,22] require specific transparent elastic materials in which micro plastic beads serving as markers are precisely placed. The tactile sensor introduced in [23,24] must be filled with a specific fluid or water, which is carefully sealed with silicone rubber and a transparent plate. The tactile sensor proposed in [25,26,27] requires a fine array of pins molded on the inside of a silicone rubber skin. Most existing tactile sensors use organic compounds, such as silicone or synthetic rubber, for the contact area. In addition, they have in common that acquiring tactile information requires either a precise electrical circuit design or an expensive dedicated sensor. Because of this, most tactile sensors must be tuned and optimized for their environment, and their scalability may suffer when they are upgraded to improve performance or applied to various platforms.
The image-based tactile sensor that we propose has the following three advantages. First, various elastic materials, such as silicone rubber, textile, and artificial leather, can be used in the sensor as long as the markers can be attached or printed on them. Second, the shape, size, and texture are easily customized to the application's requirements by replacing the material. Third, it is inexpensive because no specific production processes or equipment are required. Beyond these hardware advantages, the proposed tactile sensor can be applied in various fields. First, it can serve as an input device capable of 3D input/output. General input devices, such as keyboards, mice, tablets, and joysticks, basically support two-dimensional input. The proposed sensor acts as an input/output device for a 3D environment: it can provide an immediate signal and receive feedback depending on the direction, force, or depth with which the user presses. Through this, a robot or device can be controlled intuitively, and in an emergency the user can be notified through a haptic function, such as applying a force in the reverse direction or generating a vibration. Second, it can be used to acquire human body shape data. In general, extensive equipment such as a motion capture system is used to extract human motion data. Such systems achieve high accuracy and provide various data, but if a person is lying down or leaning against a wall, the markers are occluded and measurement becomes impossible; it is also difficult to acquire 3D body shape data with a motion capture system. Pressure-sensor-based products can be used in these situations, but typically only pressure data are acquired, the elasticity is poor, and the resolution is low. If a person lies or sits on a platform made from a greatly enlarged version of the proposed tactile sensor, the body curvature can be captured as it is, owing to the fabric-like characteristics of the elastic contact surface. In addition, information on force, depth, and direction can be acquired based on the high resolution of the image sensor. This makes the sensor applicable to a wide range of fields.
In this study, a new type of areal soft tactile sensor (IASTS) using printed array markers and image sensors is proposed, together with a method for estimating the contact position and depth with the proposed sensor. Section 2 begins with an introduction to the two prototype models of the IASTS and describes the algorithms for image processing and feature extraction used to estimate the contact position and depth, the polynomial fitting algorithm, and the Gaussian curve fitting algorithm. Section 3 presents experiments that verify what is introduced in Section 2. Section 4 analyzes the experiments of Section 3, and Section 5 concludes the paper.

2. Materials and Methods

The IASTS proposed in this paper comes in two types of devices: a single-camera model and a dual-camera model. Figure 1 shows the two models of the IASTS proposed in this paper.
Figure 1a shows the single-camera model of the IASTS prototype. This is the initial prototype of the sensor proposed in this paper and is mainly used in environments with a small contact range; the contact recognition activation area is about 45 × 60 mm. The top of Figure 1a shows the prototype IASTS of the single-camera model, and the bottom of Figure 1a shows the internal structure of the sensor. When the user presses the contact area (3), the circular markers (4) attached to the lower surface of the upper plate move according to the pressed position or direction. The number of markers attached to the single-camera model is 117 (13 × 9) in total. The image sensor (1) located at the bottom acquires data by recording the circular markers (4) attached to the lower surface of the top plate. The material used for the contact area (3) does not let light pass through, so the inside of the sensor is completely dark; the light needed to record the circular markers (4) with the image sensor (1) is provided by the LEDs (2) attached to the side.
Figure 1b shows the dual-camera model of the IASTS prototype. The prototype has a size of 153 × 165 × 48 mm and is ergonomically designed with an oblique shape that supports the wrist. The contactable area of the sensor is 133 × 91 mm. Because the material of the contact portion is soft, the frame holding the contact portion could bend; to prevent this, a high-strength metal frame was used. To make the IASTS thin, the electronic components were optimally placed. The left side of Figure 1b shows the prototype of the dual-camera model, and the right side shows its model. The operating principle is similar to that of the single-camera model; however, by increasing the number of image sensors from one to two, it has the advantages of higher resolution and a wider recognition range. In addition, the number of markers increased by about 1.5 times compared with the single-camera model, the contact recognition area was expanded, and the thickness was reduced by about half. The total number of markers attached to the dual-camera model is 154 (14 × 11). It was also designed and manufactured ergonomically so as not to strain the wrist. The advantage of the proposed IASTS is that, despite the simple structure shown in Figure 1, it can easily be scaled in size and applied to various parts. The algorithm for acquiring the related data when a user presses the IASTS is described in the next section.

2.1. Contact Position and Depth Estimation Algorithm

The IASTS cannot acquire a signal directly from the contact surface like an established pressure or tactile sensor. To obtain the contact position and depth values, the motion of the circular markers attached to the inside is captured by an image sensor, and a series of data processing steps then yields the results. The sequence of data processing is shown in Figure 2.
First, an image containing all the markers is acquired using the image sensor. Second, the features of each marker are extracted through image processing. Third, the marker with the maximum depth value is tracked; to obtain the depth value of each marker, a polynomial fitting algorithm is applied. Finally, the precise contact position and depth values are obtained by combining the data of the tracked marker and its surrounding markers using a Gaussian fitting algorithm. The feature extraction through image processing, the polynomial fitting algorithm, and the Gaussian curve fitting algorithm are described in detail below.

2.2. Image Processing and Feature Extraction

Figure 3 shows the image processing used to extract features from the acquired image. First, the original input image acquired by the image sensor of the hardware shown in Figure 1 is subjected to preprocessing procedures to extract the features of the markers.
To shorten the processing time, the three red–green–blue channel images were converted into a single-channel image. The markers attached to this hardware were green; therefore, only the green channel was kept, giving good contrast between the background and the markers. The image noise was then suppressed by morphological operations, and binarization was performed to separate the background from the markers. For this, the Otsu algorithm was used to determine the threshold that turns the original image into a black-and-white image. The Otsu algorithm finds an appropriate threshold for the brightness of the input image: when the image pixels are divided into two classes based on the histogram distribution, the threshold is chosen to minimize the intra-class variance (equivalently, to maximize the inter-class variance). The internal structure of the IASTS is closed, so the lighting that illuminates the markers plays a critical role in imaging the marker shape and intensity; therefore, the accuracy of marker detection is increased by obtaining an optimal threshold with the Otsu algorithm. The binarized image is then subjected to a labeling process for all markers. When the preprocessing is complete, the area and coordinate values of all markers are obtained, as sketched below.
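The following is a minimal sketch of this preprocessing in Python with OpenCV, assuming a BGR input frame; the function name, kernel size, and minimum-area filter are illustrative choices rather than the exact implementation used in the prototype.

```python
import cv2

def extract_marker_features(frame_bgr, min_area=20):
    """Return area and centroid of each detected marker blob."""
    green = frame_bgr[:, :, 1]                       # keep only the green channel
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    green = cv2.morphologyEx(green, cv2.MORPH_OPEN, kernel)   # suppress small noise
    # Otsu's method picks the threshold that minimizes the intra-class variance
    _, binary = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Label connected components; each surviving blob is treated as one marker
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    markers = []
    for i in range(1, n):                            # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area:                         # discard tiny noise blobs
            markers.append({"area": float(area),
                            "cx": float(centroids[i][0]),
                            "cy": float(centroids[i][1])})
    return markers
```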
Unlike the single-camera model, the dual-camera model must combine two separate views. We used a simple method to do this: the two camera images are stitched together at the middle of their overlapping section, so that markers near the boundary do not fall outside either view when the outer markers are pressed. A sketch of this idea is given below.
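Assuming the overlap width between the two camera views has been measured beforehand, the midline stitching could look like the following sketch; the function name and the parameter overlap_px are hypothetical.

```python
import cv2

def combine_views(left_img, right_img, overlap_px):
    """Stitch two same-height views by cutting each at the middle of their overlap."""
    half = overlap_px // 2
    left_part = left_img[:, : left_img.shape[1] - half]   # drop half of the overlap from the left view
    right_part = right_img[:, half:]                       # drop the other half from the right view
    return cv2.hconcat([left_part, right_part])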
Figure 4 shows the results obtained based on the image processing.
Figure 4a shows the result for the single-camera model. The blue circles indicate the markers recognized by labeling, the blue dots indicate the marker positions, and the gray circles inside the blue circles indicate the real areas of the filtered markers. Figure 4b shows the result for the dual-camera model; as in the single-camera model, the red circles indicate the markers recognized by labeling, the blue dots the marker positions, and the gray circles inside the red circles the real areas of the filtered markers. When a contact occurs on the contact surface, data such as the movement direction, movement distance, and contact depth of each marker can be acquired from the features obtained in the image processing step.

2.3. The Polynomial Fitting Algorithm

To find the marker with the maximum depth value, we first need the depth information of each marker. Preprocessing is necessary to acquire the depth value from the feature data (area and coordinates) of each marker obtained in Section 2.2. In the case of the single-camera model IASTS, the distance to a marker in front of the camera cannot be obtained directly because only one camera sensor is used. Therefore, the depth value must be calibrated from the marker feature data; in this paper, this is done by applying a polynomial fitting algorithm.
Figure 5 shows the experimental environment for acquiring the data used in the polynomial fitting. In the figure, a structure covering the entire contact surface, labeled "flat steel plate," is attached to a three-axis control stage. By controlling the stage, the feature data of each marker are extracted as the depth changes in 1 mm increments. The polynomial fitting algorithm is then used to generate a set of polynomials relating the feature data to the depth value for each marker. Through several experiments, we concluded that a third-degree polynomial gives the best fit. When the marker feature data acquired in real time are evaluated with the generated polynomial set, the depth value of each marker can be derived, as sketched below.
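A minimal sketch of this calibration and evaluation step, assuming NumPy and per-marker arrays recorded during the 1 mm stage sweeps; the function names are illustrative.

```python
import numpy as np

def fit_depth_polynomials(areas, displacements, depths_mm):
    """Fit third-degree polynomials, as in Equations (1) and (2), for one marker."""
    coeff_area = np.polyfit(areas, depths_mm, deg=3)          # [a_a, b_a, c_a, d_a]
    coeff_disp = np.polyfit(displacements, depths_mm, deg=3)  # [a_c, b_c, c_c, d_c]
    return coeff_area, coeff_disp

def depth_from_features(area, displacement, coeff_area, coeff_disp):
    """Evaluate the calibrated polynomials on real-time feature data."""
    y_a = np.polyval(coeff_area, area)            # depth estimate from the marker area
    y_c = np.polyval(coeff_disp, displacement)    # depth estimate from the coordinate displacement
    return y_a, y_c
```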
The dual-camera model IASTS is equipped with two camera sensors and could, in principle, perform stereo calibration to obtain the distance to a marker in front of the cameras. However, in the dual-camera model proposed in this paper, the stereo calibration process is not performed because the purpose of the second camera is simply to increase the contactable area. Since the polynomial fitting algorithm is applied after performing the same experiment as for the single-camera model, the corresponding description for the dual-camera model is omitted.
Figure 6 and Figure 7 show the relationship between the marker area and the coordinate displacement when the contact surface of the IASTS is pressed. In the figures, the blue circles indicate the initial positions of the markers, the green circles the markers recognized as in contact, the red circles the markers recognized as not in contact, and the blue lines connecting the blue and green circles indicate the coordinate displacement from the initial marker position.
The image sensor is attached to the middle of the sensor. When the center of the contact surface is pressed as in Figure 6a, the camera shows the result in Figure 7a: markers located vertically above the camera lens rarely change position and only change their area. However, when the outer part of the contact surface is pressed as in Figure 6b, the camera shows the result in Figure 7b: as the distance from the camera's vertical axis increases, the position and the area change simultaneously. Therefore, when applying the polynomial fitting algorithm, polynomials for both cases must be generated.
Figure 8 shows the results of applying the polynomial fitting algorithm for depth based on data obtained using a 3-axis control stage. Figure 8a shows the result of third-degree polynomial fitting to the area, and Figure 8b shows the result of third-degree polynomial fitting to the coordinate displacement.
Equations (1) and (2) give the polynomial for each case to obtain the depth value. a_a, b_a, c_a, and d_a are the polynomial fitting coefficients for the marker area for each order, and a_c, b_c, c_c, and d_c are the polynomial fitting coefficients for the marker coordinate displacement for each order. x_a is the area value of the marker, x_c is the coordinate displacement value of the marker, y_a is the depth value obtained from the marker area, and y_c is the depth value obtained from the coordinate displacement of the marker.
a_a x_a^3 + b_a x_a^2 + c_a x_a + d_a = y_a,  (1)

a_c x_c^3 + b_c x_c^2 + c_c x_c + d_c = y_c,  (2)
Figure 9 shows a flow chart for tracking the marker with the maximum depth value. As shown in Figure 6 and Figure 7, the change in the marker area and the coordinate displacement depend on the contact position. Weights are used to account for this: the closer the actual contact position is to the vertical axis of the camera, the more weight is given to the change in the marker area, and the farther away it is, the more weight is given to the coordinate displacement of the marker.
Figure 10 shows the experimental environment configured to obtain the weights for the features of each marker. A control stage that can be moved along three axes was used to obtain the depth values estimated with Equations (1) and (2). A fingertip was attached to the contact portion, and depth values for area and displacement were obtained by pressing to each depth from the marker at the top left to the marker at the bottom right. The experiment was repeated dozens of times.
Equations (3) and (4) describe the process for obtaining the standard deviations of the depth estimates based on the area and the displacement of the marker. The standard deviation is used because it indicates the reliability of each feature when experiments are performed multiple times in the same environment under different conditions.
\sqrt{\frac{\sum_{i=0}^{n}(y_{ia} - y_r)^2}{n}} = \sigma_a,  (3)

\sqrt{\frac{\sum_{i=0}^{n}(y_{ic} - y_r)^2}{n}} = \sigma_c,  (4)
y_{ia} is the value of y_a in Equation (1) for the i-th experiment, y_{ic} is the value of y_c in Equation (2) for the i-th experiment, and y_r is the actual depth value. \sigma_a is the standard deviation for the area, and \sigma_c is the standard deviation for the displacement. Through this process, each marker has a standard deviation data set for area and displacement.
To obtain the weight of each marker, the standard deviations obtained in Equations (3) and (4) are converted into weights using Equation (5).
\frac{\sigma_c}{\sigma_a + \sigma_c} = w_a, \quad \frac{\sigma_a}{\sigma_a + \sigma_c} = w_c,  (5)

w_a y_a + w_c y_c = y,  (6)

w_a + w_c = 1,  (7)
w_a is the weight for the area change of the corresponding marker, and w_c is the weight for its coordinate displacement. Using these weights, the final depth value in Equation (6) can be obtained from the feature data of each marker input in real time. y_a and y_c are the results derived from Equations (1) and (2), and y is the final depth value of the corresponding marker. Equation (7) expresses the constraint that the two weights in Equation (6) sum to one. A sketch of this weighting is given below.
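A minimal sketch of Equations (3)–(7), assuming arrays of repeated calibration estimates are available for one marker; the function name and argument layout are illustrative.

```python
import numpy as np

def fuse_depth(y_ia, y_ic, y_r, y_a, y_c):
    """Fuse the area-based and displacement-based depth estimates of one marker.

    y_ia, y_ic: depth estimates from area / displacement over repeated experiments
    y_r:        actual depth value(s) set by the stage in those experiments
    y_a, y_c:   the two real-time estimates to be fused (Equations (1) and (2))
    """
    y_ia, y_ic = np.asarray(y_ia, dtype=float), np.asarray(y_ic, dtype=float)
    sigma_a = np.sqrt(np.mean((y_ia - y_r) ** 2))   # Equation (3)
    sigma_c = np.sqrt(np.mean((y_ic - y_r) ** 2))   # Equation (4)
    w_a = sigma_c / (sigma_a + sigma_c)             # Equation (5): less scatter -> more weight
    w_c = sigma_a / (sigma_a + sigma_c)             # w_a + w_c = 1 (Equation (7))
    return w_a * y_a + w_c * y_c                    # Equation (6): fused depth value
```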
When the depth value of each marker has been obtained from Equation (6), the marker with the maximum depth value, which is the last step in Figure 9, is tracked, so that the marker closest to the contact position is identified. The output shown in Figure 11 is displayed when an arbitrary position is pressed. The red circle at the bottom of Figure 11 represents the marker with the maximum depth value.

2.4. The Gaussian Curve Fitting Algorithm

In Section 2.3, the polynomial fitting algorithm was used to find the marker with the maximum depth value, but it is difficult to determine the precise contact location with this information alone. Therefore, to improve accuracy, a Gaussian curve fitting algorithm is applied using the data of the markers around that marker.
The reason for applying the Gaussian curve fitting algorithm is illustrated in Figure 12. When an arbitrary position on the contact surface is pressed, as shown on the left of Figure 12, and the surface is viewed from the side, as in the middle of Figure 12, the contact surface deforms into a recessed, bell-like shape. When this shape is flipped, as shown on the right of Figure 12, it resembles a Gaussian curve. Assuming that the circular markers attached to the contact surface are sample points on a Gaussian curve, Gaussian curve fitting can be performed using the coordinate values of each marker, and the peak point, which serves as the contact point, can be obtained. Without Gaussian curve fitting, the marker with the largest depth value can still be found, but if no marker lies exactly at the actual contact position, an error is likely to occur.
The maximum-depth marker is defined as the reference marker, and Gaussian curve fitting is performed separately for the horizontal and vertical axes, as shown in Figure 13. The markers up to two positions away from the reference marker in the up, down, left, and right directions are used.
f(x) = a \exp(-b(x - c)^2),  (8)

\ln(y) = \ln(a) - b(x - c)^2 = [\ln(a) - bc^2] + 2bcx - bx^2,  (9)

\ln(y) = d_1 + d_2 x + d_3 x^2,  (10)

b = -d_3, \quad c = -\frac{d_2}{2 d_3}, \quad a = \exp\left[d_1 - \frac{d_2^2}{4 d_3}\right],  (11)
Equation (8) is a general Gaussian function. x represents the coordinate of each point, and f(x) represents the depth value at that coordinate. When x = c, f(x) attains the maximum depth value a; therefore, the goal is to find a and c. To handle the variables inside the exponential, the logarithm of both sides is taken, yielding Equation (9). When \ln(y) in Equation (9) is fitted to a quadratic function of x, it can be expressed as Equation (10), and the coefficients d_1, d_2, and d_3 are related to a, b, and c by Equation (11).
\ln(y) = [d_1 \; d_2 \; d_3]\,[1 \; x \; x^2]^{T},  (12)

\begin{bmatrix} 1 & x_{i-2,j} & x_{i-2,j}^2 \\ 1 & x_{i-1,j} & x_{i-1,j}^2 \\ 1 & x_{i,j} & x_{i,j}^2 \\ 1 & x_{i+1,j} & x_{i+1,j}^2 \\ 1 & x_{i+2,j} & x_{i+2,j}^2 \end{bmatrix}^{-1} \begin{bmatrix} \ln(y_{i-2,j}) \\ \ln(y_{i-1,j}) \\ \ln(y_{i,j}) \\ \ln(y_{i+1,j}) \\ \ln(y_{i+2,j}) \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix},  (13)
To apply Equation (10) in the present three-dimensional environment, it is rewritten in matrix form as Equation (12) and extended to Equation (13), where the inverse is in practice a least-squares pseudo-inverse, since the matrix is 5 × 3. Here, x_{i,j} is the coordinate of the reference marker and y_{i,j} is its depth value; i is the horizontal index and j the vertical index. Equation (13) uses the information of the four markers located to the left and right of the reference marker to find the coefficients in the horizontal direction. To obtain the vertical coordinate, the value of i in Equation (13) is fixed and the coefficients are obtained by varying j. Once d_1, d_2, and d_3 are obtained from Equation (13), c and a can be calculated by substitution into Equation (11), and the contact position and its maximum depth value can be estimated. A sketch of this fit is given below.
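A minimal sketch of the log-quadratic fit of Equations (8)–(13) along one axis, assuming positive depth values for the reference marker and its four neighbours; the least-squares solve plays the role of the pseudo-inverse in Equation (13), and the function name is illustrative.

```python
import numpy as np

def gaussian_peak_1d(xs, ys):
    """Fit ln(y) = d1 + d2*x + d3*x^2 and recover the Gaussian peak (Equation (11))."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)                 # depth values, assumed positive
    A = np.column_stack([np.ones_like(xs), xs, xs ** 2])      # rows [1, x, x^2], as in Equation (13)
    d1, d2, d3 = np.linalg.lstsq(A, np.log(ys), rcond=None)[0]  # least-squares pseudo-inverse
    b = -d3                                          # Equation (11)
    c = -d2 / (2.0 * d3)                             # peak position = estimated contact coordinate
    a = np.exp(d1 - d2 ** 2 / (4.0 * d3))            # peak value = estimated maximum depth
    return c, a, b

# Applying the fit once along the horizontal axis (j fixed) and once along the
# vertical axis (i fixed) yields the estimated 2D contact position and depth.
```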

3. Results

3.1. Experiment of IASTS (Single-Camera Model)

Experiments and verification were performed using the single-camera model IASTS proposed in this paper and the three algorithms described in Section 2. The experiment was performed using the three-axis control stage equipped with the fingertip used in Figure 10. Figure 14 shows the results of estimating the contact position, marked with a blue dot, obtained by performing Gaussian fitting on the markers located around the marker with the largest depth value.
Table 1 lists the actual contact position, the contact position estimated through Gaussian fitting, the actual contact depth, and the estimated depth value for cases (a), (b), and (c).
The actual contact position and contact depth were set quantitatively using the three-axis control stage. Table 1 gives the numerical values of the contact positions estimated through the Gaussian fitting of Figure 14 together with the actual values; three experiments were conducted at each position. The average error obtained with the algorithm was within 3.4 mm in the XY position and 1.8 mm in the Z position.

3.2. Experiment of IASTS (Dual-Camera Model)

As in the single-camera model, the experiment and verification were conducted using the three-axis control stage equipped with the fingertip shown in Figure 10. However, the experimental method differs from that of the single-camera model; two types of experiments were conducted. Figure 15a shows the first experiment: after specifying arbitrary positions, the contact surface of the IASTS was pressed using the three-axis linear stage, and the results were compared and analyzed. The numbers (1) to (9) in Figure 15a specify the contact sequence and contact locations. Figure 15b shows the second experiment: after specifying an arbitrary position, the contact surface of the IASTS was pressed and released using the three-axis linear stage; the stage was then moved a predetermined distance, the surface was pressed and released again, and the results were compared and analyzed. According to the experimental plan in Figure 15a, a contact experiment was performed with the three-axis linear stage, and the coordinate data for estimating the contact position were obtained. In addition, the contact depth estimation data were obtained by pressing to depths of 1, 2, and 3 mm at each position in Figure 15a.
The results of estimating the contact position at each location are shown in panels (1) to (9) of Figure 16. Each image in Figure 16 shows the basic interface representing the contact location on the IASTS. For clarity, only the calculated contact position is marked with a red dot, and no other information is displayed.
The resulting data for Figure 16 are given in Table 2 and Table 3. Table 2 compares the actual and estimated contact position coordinates. The average error distance over the entire experiment was approximately 3.1 in the X axis and approximately 4.4 in the Y axis, and the error rate was approximately 3.56% over all cases. Table 3 compares the actual contact depth and the estimated contact depth. The average error over all estimated depths was approximately 0.06 mm, and the error rate was approximately 3.83% over all cases.
Figure 17 shows the results obtained according to the experimental plan in Figure 15b. It compares and analyzes the results when a contact is made after the stage is moved by a known distance from a specified location. In Figure 17a–d, the numbers 1, 2, and 3 indicate the order in which the IASTS contact surface was pressed, and the number near each dotted arrow indicates the distance traveled in millimeters. Table 4 lists the experimental results of Figure 17. The estimated contact coordinates of experiments (a), (b), (c), and (d) are separated into X and Y, and the actual moving distance of the linear stage is listed so that the pixel displacement per unit of actual movement can be obtained. The average coordinate change per 1 mm of movement was approximately 10.28 pixels.

4. Discussion

We conducted an experiment to verify the proposed contact position and depth estimation algorithm using two types of sensors (single- and dual-camera models).
First, the data obtained through the experiment in Figure 14 with the single-camera model IASTS were analyzed. The average error distance of the estimated contact position coordinates over the entire experiment was approximately 1.675 mm, with a standard deviation of approximately 1.08 mm. The average error of the estimated contact depth over the whole experiment was approximately 0.84 mm, with a standard deviation of approximately 0.64 mm. The single-camera model IASTS was thus verified to have a relatively small error range.
Second, the data obtained through the experiments in Figure 16 and Figure 17 with the dual-camera model IASTS were analyzed. For the experiment in Figure 16, the average error distance of the estimated contact position coordinates over the entire experiment was approximately 6.3 pixels, with a standard deviation of approximately 2.5 pixels. The average error of the estimated contact depth over the whole experiment was approximately 0.06 mm, with a standard deviation of approximately 0.04 mm. In the experiment of Figure 17, the average coordinate change per 1 mm of movement was approximately 10.28 pixels, with a standard deviation of 0.53. Assuming a coordinate change of 10 pixels per 1 mm, the average error distance of the experiment in Figure 16 corresponds to 0.63 mm, with a standard deviation of approximately 0.25 mm.
There were three reasons for the relatively small contact position error of the dual-camera IASTS compared with the single-camera IASTS. The first was the hardware refinement: the dual-camera IASTS was built on the experience gained with the single-camera IASTS, improving various factors such as the contact surface material, camera mounting position, lighting arrangement, and light intensity adjustment. The second was the number of markers covering the contact surface: the dual-camera IASTS has more markers than the single-camera IASTS, so the effective contact range is relatively wide. The third was the difference in resolution: the dual-camera IASTS uses two image sensors, and with the increased resolution the change of each marker can be measured and predicted more accurately.

5. Conclusions

This paper described the structure and principle of the single-camera and dual-camera IASTS and introduced the contact position and depth estimation algorithms. To verify the reliability of the proposed system, we conducted data acquisition experiments and analyzed the results. Various haptic systems can be built based on the proposed IASTS, and tactile representation in augmented reality/virtual reality environments is also possible. Additionally, the algorithms can be applied to haptic devices of various sizes by customizing the hardware. In future studies, we will develop a deep learning-based tactile position measurement algorithm to improve precision, and functions such as motion recognition and text recognition through specific gestures will be introduced.

Author Contributions

J.-i.L. wrote this paper, analyzed the experiments and data, and derived the results. S.L. created the idea and the experimental plan of this study. H.-M.O. and B.R.C. assisted with the experiments and the writing of this paper. K.-H.S. reviewed trends and academic content in related research fields. Finally, M.Y.K. (corresponding author) guided and managed the research subject and direction of the whole study. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the ICT R&D program of MSIP/IITP (2017-0-01724, Development of a soft wearable suit using intelligent information and meta-material/structure technology for fall prediction and prevention). This research was also supported by the BK21 Plus project funded by the Ministry of Education, Korea (21A20131600011), the KITECH R&D Program of the Development of Soft Robotics Technology for Human-Robot Coexistence Care Robots, and the National Research Foundation of Korea (NRF) grant funded by the MSICT, Korea (NRF-2016M2A2A4A04913462).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Sánchez-Durán, J.A.; Hidalgo-López, J.A.; Castellanos-Ramos, J.; Oballe-Peinado, Ó.; Vidal-Verdú, F. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands. Sensors 2015, 15, 20409–20435.
2. Aggarwal, A.; Kirchner, F. Object Recognition and Localization: The Role of Tactile Sensors. Sensors 2014, 14, 3227–3266.
3. Kampmann, P.; Kirchner, F. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors. Sensors 2014, 14, 6854–6876.
4. Kim, Y.; Obinata, G.; Kawk, B.; Jung, J.; Lee, S. Vision-based Fluid-Type Tactile Sensor for Measurements on Biological Tissues. Med. Biol. Eng. Comput. 2018, 56, 297–305.
5. Lee, J.H.; Nyun Kim, Y.N.; Park, H.J. Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization. Sensors 2015, 15, 6306–6323.
6. Jimenez, M.C.; Fishel, J.A. Evaluation of force, vibration and thermal tactile feedback in prosthetic limbs. In Proceedings of the 2014 IEEE Haptics Symposium (HAPTICS), Houston, TX, USA, 23–26 February 2014.
7. Muxfeldt, A.; Haus, N.; Cheng, A.; Kubus, J.D. Exploring tactile surface sensors as a gesture input device for intuitive robot programming. In Proceedings of the 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), Berlin, Germany, 6–9 September 2016.
8. Pradhono, T.; Somlor, S.; Schmitz, A.; Jamone, L.; Huang, W.; Kristanto, H.; Sugano, S. Design and Characterization of a Three-Axis Hall Effect-Based Soft Skin Sensor. Sensors 2016, 16, 491.
9. Alves de Oliveira, T.E.; Cretu, A.M.; Petriu, E.M. Multimodal Bio-Inspired Tactile Sensing Module for Surface Characterization. Sensors 2017, 17, 1187.
10. Maiolino, P.; Maggiali, M.; Cannata, G.; Metta, G.; Natale, L. A Flexible and Robust Large Scale Capacitive Tactile System for Robots. IEEE Sens. J. 2013, 13, 3910–3917.
11. Lamy, X.; Colledani, F.; Geffard, F.; Measson, Y.; Morel, G. Robotic skin structure and performances for industrial robot comanipulation. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Singapore, 14–17 July 2009.
12. Cirrillo, A.; Cirrillo, P.; Maria, G.D.; Natale, C.; Pirozzi, S. Improved Version of the Tactile/Force Sensor Based on Optoelectronic Technology. Procedia Eng. 2016, 168, 826–829.
13. Melchiorri, C.; Moriello, L.; Palli, G.; Scarcia, U. A new force/torque sensor for robotic application based on optoelectronic components. In Proceedings of the International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, 31 May–7 June 2014.
14. Missinne, J.; Bosman, E.; Van Hoe, B.; Van Steenberge, G.; Kalathimekkad, S.; Van Daele, P.; Vanfleteren, J. Flexible Shear Sensor Based on Embedded Optoelectronic Components. IEEE Photon. Technol. Lett. 2011, 23, 771–773.
15. Fishel, J.; Loeb, G. Sensing tactile microvibrations with the BioTac—Comparison with human sensitivity. In Proceedings of the IEEE International Conference on Biomedical Robotics and Biomechatronics, Rome, Italy, 24–27 June 2012.
16. Su, Z.; Fishel, J.; Yamamoto, T.; Loeb, G. Use of Tactile Feedback to Control Exploratory Movements to Characterize Object Compliance. Front. Neurorobot. 2012, 6, 1–9.
17. Sferrazza, C.; Wahlsten, A.; Trueeb, C.; Andrea, R.D. Ground Truth Force Distribution for Learning-Based Tactile Sensing: A Finite Element Approach. IEEE Access 2019, 7, 173438–173449.
18. Trueeb, C.; Sferrazza, C.; Andrea, R.D. Towards vision-based robotic skins: A data-driven, multi-camera tactile sensor. arXiv 2019, arXiv:1910.14526.
19. Lee, J.; Cho, B.; Choi, S.; Kwon, S.; Kim, M.; Lee, S. Image-based 3-axis force estimation of a touch sensor with soft material. In Proceedings of the 2016 16th International Conference on Control, Automation and Systems (ICCAS 2016), Gyeongju, Korea, 16–19 October 2016.
20. Lee, J.; Lee, S.; Cho, B.; Kwon, K.; Oh, H.; Kim, M. Contact Position Estimation Algorithm using Image-based Areal Touch Sensor based on Artificial Neural Network. IEEJ J. Ind. Appl. 2018, 6, 100.
21. Sato, K.; Kamiyama, K.; Nii, H.; Kawakami, N.; Tachi, S. Measurement of force vector field of robotic finger using vision-based haptic sensor. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, 22–26 September 2008.
22. Sato, K.; Kamiyama, K.; Kawakami, N.; Tachi, S. Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand. IEEE Trans. Haptics 2010, 3, 37–47.
23. Yamaguchi, A.; Atkeson, C. Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids 2016), Cancun, Mexico, 15–17 November 2016.
24. Ito, Y.; Kim, Y.; Obinata, G. Robust Slippage Degree Estimation Based on Reference Update of Vision-Based Tactile Sensor. IEEE Sens. J. 2011, 11, 2037–2047.
25. Ito, Y.; Kim, Y.; Nagai, C.; Obinata, G. Shape sensing by vision-based tactile sensor for dexterous handling of robot hands. In Proceedings of the 2010 IEEE International Conference on Automation Science and Engineering (CASE 2010), Toronto, Canada, 21–24 August 2010.
26. Lepora, N. Biomimetic Active Touch with Fingertips and Whiskers. IEEE Trans. Haptics 2016, 9, 170–183.
27. Cramphorn, L.; Ward-Cherrier, B.; Lepora, N. Addition of a Biomimetic Fingerprint on an Artificial Fingertip Enhances Tactile Spatial Acuity. IEEE Robot. Autom. Lett. 2017, 2, 1336–1343.
Figure 1. Two types of image-based areal soft tactile sensor (IASTS): (a) single-camera model; (b) dual-camera model.
Figure 2. The sequence of data processing to determine contact position and depth.
Figure 3. Input image processing diagram.
Figure 4. The result of image processing: (a) single-camera model; (b) dual-camera model.
Figure 5. The experimental environment based on a three-axis control stage for feature data extraction to perform the polynomial fitting algorithm.
Figure 6. Correlation of contact surface change with the camera sensor according to contact position: (a) when pressing the center of the contact surface; (b) when pressing the outer part of the contact surface.
Figure 7. Tendency of the marker features to change according to the contact location: (a) camera view when pressing the center of the contact surface; (b) camera view when pressing the outer part of the contact surface.
Figure 8. Area data along depth changes and its polynomial fitting: (a) result of third-order polynomial fitting to the area; (b) result of third-order polynomial fitting to the coordinate displacement.
Figure 9. Marker position estimation diagram from the polynomial fitting process.
Figure 10. Experimental environment for obtaining weighted data sets.
Figure 11. Maximum depth value marker tracking results using the polynomial fitting algorithm.
Figure 12. Basis for application of the Gaussian fitting algorithm.
Figure 13. Target areas for the Gaussian fitting algorithm.
Figure 14. Contact position estimation using a Gaussian fitting process: (a) results of contact position estimation when pressing the upper right; (b) results when pressing the bottom left; (c) results when pressing the upper left.
Figure 15. Experiment method of the IASTS: (a) first experiment rule; (b) second experiment rule.
Figure 16. Experiment 1 for proposed system and algorithm verification: (1) pressing the upper left corner; (2) pressing the upper right corner; (3) pressing the bottom right corner; (4) pressing the bottom left corner; (5) pressing the left; (6) pressing the right; (7) pressing the upper; (8) pressing the bottom; (9) pressing the center.
Figure 17. Experiment 2 for proposed system and algorithm verification: (a) first experiment path; (b) second experiment path; (c) third experiment path; (d) fourth experiment path.
Table 1. Actual contact information and estimated contact information (Unit: mm).

| Experiment | Real Contact X | Real Contact Y | Estimated Contact X | Estimated Contact Y | Error Distance X | Error Distance Y | Real Depth Value | Estimated Depth Value | Error Depth |
|---|---|---|---|---|---|---|---|---|---|
| (a) | 43 | 13 | 43.14 | 14.29 | 0.14 | 1.29 | 10 | 10.76 | 0.76 |
|  |  |  | 42.66 | 14.12 | 0.34 | 1.12 |  | 10.79 | 0.79 |
|  |  |  | 42.35 | 14.44 | 0.65 | 1.44 |  | 10.65 | 0.65 |
| (b) | 14 | 28 | 16.54 | 30.11 | 2.54 | 2.11 | 7 | 7.2 | 0.2 |
|  |  |  | 16.75 | 29.95 | 2.75 | 1.95 |  | 7.15 | 0.15 |
|  |  |  | 16.41 | 29.84 | 2.41 | 1.84 |  | 7.15 | 0.15 |
| (c) | 16 | 11 | 15.41 | 14.31 | 0.59 | 3.31 | 13 | 14.52 | 1.52 |
|  |  |  | 15.36 | 14.22 | 0.64 | 3.22 |  | 14.81 | 1.81 |
|  |  |  | 15.52 | 14.33 | 0.48 | 3.33 |  | 14.52 | 1.52 |
Table 2. Contact coordinate position comparison and error analysis (unit: pixel).

| Experiment | Real Contact X | Real Contact Y | Estimated Contact X | Estimated Contact Y | Error Distance X | Error Distance Y | Error Rate X (%) | Error Rate Y (%) |
|---|---|---|---|---|---|---|---|---|
| 1 | 107 | 70 | 104 | 76 | 3 | 6 | 2.80 | 8.57 |
| 2 | 841 | 70 | 846 | 77 | 5 | 7 | 0.59 | 10.00 |
| 3 | 841 | 580 | 846 | 590 | 5 | 10 | 0.59 | 1.72 |
| 4 | 107 | 580 | 115 | 590 | 8 | 10 | 7.48 | 1.72 |
| 5 | 107 | 325 | 116 | 322 | 9 | 3 | 8.41 | 0.92 |
| 6 | 842 | 325 | 845 | 321 | 3 | 4 | 0.36 | 1.23 |
| 7 | 475 | 70 | 480 | 79 | 5 | 9 | 1.05 | 12.86 |
| 8 | 475 | 580 | 467 | 590 | 8 | 10 | 1.68 | 1.72 |
| 9 | 475 | 325 | 479 | 320 | 4 | 5 | 0.84 | 1.54 |
Table 3. Contact depth comparison and error analysis (unit: mm).

| Experiment | Real Depth | Estimated Depth | Error Depth | Error Rate (%) |
|---|---|---|---|---|
| 1 | 1.00 | 0.89 | 0.11 | 11.00 |
|  | 2.00 | 1.91 | 0.09 | 4.50 |
|  | 3.00 | 2.89 | 0.11 | 3.67 |
| 2 | 1.00 | 0.85 | 0.15 | 15.00 |
|  | 2.00 | 1.89 | 0.11 | 5.50 |
|  | 3.00 | 2.92 | 0.08 | 2.67 |
| 3 | 1.00 | 1.09 | 0.09 | 9.00 |
|  | 2.00 | 2.11 | 0.11 | 5.50 |
|  | 3.00 | 3.13 | 0.13 | 4.33 |
| 4 | 1.00 | 1.07 | 0.07 | 7.00 |
|  | 2.00 | 2.08 | 0.08 | 4.00 |
|  | 3.00 | 3.08 | 0.08 | 2.67 |
| 5 | 1.00 | 1.03 | 0.03 | 3.00 |
|  | 2.00 | 2.01 | 0.01 | 0.50 |
|  | 3.00 | 2.98 | 0.02 | 0.67 |
| 6 | 1.00 | 1.06 | 0.06 | 6.00 |
|  | 2.00 | 1.98 | 0.02 | 1.00 |
|  | 3.00 | 2.97 | 0.03 | 1.00 |
| 7 | 1.00 | 1.05 | 0.05 | 5.00 |
|  | 2.00 | 2.04 | 0.04 | 2.00 |
|  | 3.00 | 3.01 | 0.01 | 0.33 |
| 8 | 1.00 | 1.03 | 0.03 | 3.00 |
|  | 2.00 | 2.02 | 0.02 | 1.00 |
|  | 3.00 | 3.04 | 0.04 | 1.33 |
| 9 | 1.00 | 0.98 | 0.02 | 2.00 |
|  | 2.00 | 1.97 | 0.03 | 1.50 |
|  | 3.00 | 2.99 | 0.01 | 0.33 |
Table 4. Analysis of contact location along the travel path.

| Experiment | Axis | 1st Contact Position (Pixel) | 2nd Contact Position (Pixel) | Moving Distance (mm) | Coordinate per 1 mm |
|---|---|---|---|---|---|
| (a) | X | 259 | 466 | +20 | 10.35 |
|  | Y | 215 | 377 | +15 | 10.80 |
| (b) | X | 712 | 220 | −49 | 10.04 |
|  | Y | 420 | 210 | −22 | 9.55 |
| (c) | X | 305 | 466 | +15 | 10.73 |
|  | Y | 503 | 341 | −16 | 10.13 |
| (d_1) | X | 845 | 744 | −9 | 11.22 |
|  | Y | 181 | 460 | +28 | 10.37 |
| (d_2) | X | 744 | 236 | −49 | 9.96 |
|  | Y | 460 | 219 | −25 | 9.64 |
