Recognition Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 35879

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. José María Martínez-Otzeta
Guest Editor
Computer Science and Artificial Intelligence, Gipuzkoa Centres/Faculty of Informatics, Universidad del Pais Vasco, 48940 Leioa, Spain
Interests: machine learning; computer vision; 3D vision; deep learning; video action recognition

Special Issue Information

Dear Colleagues,

Perception of the environment is an essential skill for robotic applications that interact with their surroundings. Along with perception often comes the ability to recognize objects, people, or dynamic situations. This skill is of paramount importance in many use cases, from industrial to social robotics. This Special Issue addresses fundamental or applied research in object recognition, activity recognition, and people recognition, focusing on robotic applications.

Dr. José María Martínez-Otzeta
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • recognition robotics
  • industrial robotics
  • social robotics
  • object recognition
  • people recognition
  • activity recognition

Published Papers (16 papers)


Editorial


4 pages, 173 KiB  
Editorial
Editorial for the Special Issue Recognition Robotics
by José María Martínez-Otzeta
Sensors 2023, 23(20), 8515; https://doi.org/10.3390/s23208515 - 17 Oct 2023
Viewed by 604
Abstract
Perception of the environment is an essential skill for robotic applications that interact with their surroundings [...]
(This article belongs to the Special Issue Recognition Robotics)

Research


29 pages, 10231 KiB  
Article
Human-Aware Collaborative Robots in the Wild: Coping with Uncertainty in Activity Recognition
by Beril Yalçinkaya, Micael S. Couceiro, Salviano Pinto Soares and Antonio Valente
Sensors 2023, 23(7), 3388; https://doi.org/10.3390/s23073388 - 23 Mar 2023
Cited by 2 | Viewed by 1566
Abstract
This study presents a novel approach to coping with human behaviour uncertainty during Human-Robot Collaboration (HRC) in dynamic and unstructured environments, such as agriculture, forestry, and construction. These challenging tasks, which often require excessive time and labour and are hazardous for humans, provide ample room for improvement through collaboration with robots. However, the integration of humans in the loop raises open challenges due to the uncertainty that comes with the ambiguous nature of human behaviour. Such uncertainty makes it difficult to represent high-level human behaviour based on low-level sensory input data. The proposed Fuzzy State-Long Short-Term Memory (FS-LSTM) approach addresses this challenge by fuzzifying ambiguous sensory data and developing a combined activity recognition and sequence modelling system using state machines and the LSTM deep learning method. The evaluation compares a traditional LSTM approach with raw sensory data inputs, a Fuzzy-LSTM approach with fuzzified inputs, and the proposed FS-LSTM approach. The results show that the use of fuzzified inputs significantly improves accuracy compared to traditional LSTM and that, while the fuzzy state machine approach provides results similar to those of the Fuzzy-LSTM approach, it offers the added benefit of ensuring feasible transitions between activities with improved computational efficiency.
(This article belongs to the Special Issue Recognition Robotics)
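
As a rough illustration of the fuzzification idea behind FS-LSTM (a sketch, not the authors' code), the snippet below maps raw sensor readings to triangular fuzzy membership degrees before feeding them to an LSTM classifier. The membership breakpoints, layer sizes, and activity count are assumptions, and the state-machine component of FS-LSTM is omitted:

```python
import torch
import torch.nn as nn

def triangular_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    left = (x - a) / (b - a + 1e-9)
    right = (c - x) / (c - b + 1e-9)
    return torch.clamp(torch.minimum(left, right), 0.0, 1.0)

def fuzzify(raw):
    """Map each raw reading to [low, medium, high] membership degrees."""
    low = triangular_membership(raw, -1.0, 0.0, 0.5)
    med = triangular_membership(raw, 0.0, 0.5, 1.0)
    high = triangular_membership(raw, 0.5, 1.0, 2.0)
    return torch.stack([low, med, high], dim=-1)

class FuzzyLSTM(nn.Module):
    def __init__(self, n_sensors=4, hidden=64, n_activities=5):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, raw_seq):               # (batch, time, n_sensors)
        fuzzy = fuzzify(raw_seq)              # (batch, time, n_sensors, 3)
        flat = fuzzy.flatten(start_dim=2)     # (batch, time, n_sensors*3)
        out, _ = self.lstm(flat)
        return self.head(out[:, -1])          # activity logits, last step

model = FuzzyLSTM()
logits = model(torch.randn(8, 50, 4))         # 8 windows, 50 steps, 4 sensors
```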

17 pages, 3523 KiB  
Article
A Self-Adaptive Gallery Construction Method for Open-World Person Re-Identification
by Sara Casao, Pablo Azagra, Ana C. Murillo and Eduardo Montijano
Sensors 2023, 23(5), 2662; https://doi.org/10.3390/s23052662 - 28 Feb 2023
Cited by 1 | Viewed by 1183
Abstract
Person re-identification, or simply re-id, is the task of identifying again a person who has been seen in the past by a perception system. Multiple robotic applications, such as tracking or navigate-and-seek, use re-identification systems to perform their tasks. To solve the re-id problem, a common practice consists of using a gallery with relevant information about the people already observed. The construction of this gallery is a costly process, typically performed offline and only once, because of the problems associated with labeling and storing new data as they arrive in the system. The galleries resulting from this process are static and do not acquire new knowledge from the scene, which limits the ability of current re-id systems to work in open-world applications. In contrast to previous work, we overcome this limitation by presenting an unsupervised approach to automatically identify new people and incrementally build a gallery for open-world re-id that adapts prior knowledge with new information on a continuous basis. Our approach compares the current person models with new unlabeled data to dynamically expand the gallery with new identities. We process the incoming information to maintain a small representative model of each person by exploiting concepts of information theory. The uncertainty and diversity of the new samples are analyzed to define which ones should be incorporated into the gallery. Experimental evaluation on challenging benchmarks includes an ablation study of the proposed framework, the assessment of different data selection algorithms that demonstrates the benefits of our approach, and a comparative analysis of the obtained results with other unsupervised and semi-supervised re-id methods.
(This article belongs to the Special Issue Recognition Robotics)
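
A minimal sketch of the general mechanism of incrementally growing an open-world re-id gallery: each new embedding either updates its closest person model or spawns a new identity. The similarity threshold and the FIFO pruning below are illustrative stand-ins for the paper's uncertainty- and diversity-based sample selection:

```python
import numpy as np

class Gallery:
    def __init__(self, new_id_threshold=0.5, max_samples_per_person=10):
        self.models = {}                      # person_id -> list of embeddings
        self.new_id_threshold = new_id_threshold
        self.max_samples = max_samples_per_person
        self._next_id = 0

    def _similarity(self, a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def update(self, embedding):
        """Assign the embedding to an existing identity or create a new one."""
        best_id, best_sim = None, -1.0
        for pid, samples in self.models.items():
            sim = max(self._similarity(embedding, s) for s in samples)
            if sim > best_sim:
                best_id, best_sim = pid, sim
        if best_id is None or best_sim < self.new_id_threshold:
            best_id = self._next_id           # unseen person: new identity
            self._next_id += 1
            self.models[best_id] = []
        samples = self.models[best_id]
        samples.append(embedding)
        if len(samples) > self.max_samples:   # keep a small person model;
            samples.pop(0)                    # the paper selects by uncertainty
        return best_id                        # and diversity instead of FIFO

g = Gallery()
pid = g.update(np.random.randn(128))          # 128-D feature from a re-id net
```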

17 pages, 547 KiB  
Article
A Study on the Role of Affective Feedback in Robot-Assisted Learning
by Gabriela Błażejowska, Łukasz Gruba, Bipin Indurkhya and Artur Gunia
Sensors 2023, 23(3), 1181; https://doi.org/10.3390/s23031181 - 20 Jan 2023
Cited by 4 | Viewed by 1871
Abstract
In recent years, there have been many approaches to using robots to teach computer programming. In intelligent tutoring systems and computer-aided learning, there is also some research showing that affective feedback to the student increases learning efficiency. However, the few studies on the role of incorporating an emotional personality into the robot in robot-assisted learning have found mixed results. To explore this issue further, we conducted a pilot study to investigate the effect of positive verbal encouragement and non-verbal emotive behaviour of the Miro-E robot during a robot-assisted programming session. The participants were tasked with programming the robot’s behaviour. In the experimental group, the robot monitored the participants’ emotional state via their facial expressions and provided affective feedback to the participants after each completed task. In the control group, the robot responded in a neutral way. The participants filled out a questionnaire before and after the programming session. The results show a positive reaction of the participants to the robot and the exercise. As the number of participants was small (the experiment was conducted during the pandemic), a qualitative analysis of the data was carried out. We found that the greatest affective outcome of the session was for students who had little experience or interest in programming before. We also found that the affective expressions of the robot had a negative impact on its likeability, revealing vestiges of the uncanny valley effect.
(This article belongs to the Special Issue Recognition Robotics)

25 pages, 5475 KiB  
Article
Lightweight Multimodal Domain Generic Person Reidentification Metric for Person-Following Robots
by Muhammad Adnan Syed, Yongsheng Ou, Tao Li and Guolai Jiang
Sensors 2023, 23(2), 813; https://doi.org/10.3390/s23020813 - 10 Jan 2023
Cited by 3 | Viewed by 1439
Abstract
Recently, person-following robots have been increasingly used in many real-world applications, and they require robust and accurate person identification for tracking. Recent works proposed the use of re-identification metrics for identification of the target person; however, these metrics suffer from poor generalization and from impostors in the nonlinear, multi-modal world. This work learns a domain-generic person re-identification metric to resolve real-world challenges and to identify the target person undergoing appearance changes when moving across different indoor and outdoor environments or domains. Our generic metric takes advantage of a novel attention mechanism to learn deep cross-representations that address pose, viewpoint, and illumination variations, while jointly tackling impostors and the style variations the target person randomly undergoes in various indoor and outdoor domains; thus, our generic metric attains higher recognition accuracy of target person identification in the complex multi-modal open-set world, attaining 80.73% and 64.44% Rank-1 identification in the multi-modal closed-set PRID and VIPeR domains, respectively.
(This article belongs to the Special Issue Recognition Robotics)
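
For reference, Rank-1 identification (the figure of merit quoted above) can be computed as sketched below: for each query embedding, the nearest gallery embedding must belong to the same person. Embeddings would come from the trained metric network; random vectors stand in here:

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    # L2-normalise so that the dot product equals cosine similarity
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T                             # (n_queries, n_gallery)
    best = gallery_ids[np.argmax(sims, axis=1)]
    return float(np.mean(best == query_ids))

queries = np.random.randn(100, 256)
gallery = np.random.randn(300, 256)
acc = rank1_accuracy(queries, np.random.randint(0, 50, 100),
                     gallery, np.random.randint(0, 50, 300))
```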

20 pages, 6999 KiB  
Article
Deep Instance Segmentation and Visual Servoing to Play Jenga with a Cost-Effective Robotic System
by Luca Marchionna, Giulio Pugliese, Mauro Martini, Simone Angarano, Francesco Salvetti and Marcello Chiaberge
Sensors 2023, 23(2), 752; https://doi.org/10.3390/s23020752 - 09 Jan 2023
Cited by 1 | Viewed by 2034
Abstract
The game of Jenga is a benchmark used for developing innovative manipulation solutions for complex tasks. Indeed, it encourages the study of novel robotics methods to successfully extract blocks from a tower. A Jenga game involves many traits of complex industrial and surgical manipulation tasks, requiring a multi-step strategy, the combination of visual and tactile data, and highly precise motion of a robotic arm to perform a single block extraction. In this work, we propose a novel, cost-effective architecture for playing Jenga with e.DO, a 6-DOF anthropomorphic manipulator manufactured by Comau, a standard depth camera, and an inexpensive monodirectional force sensor. Our solution focuses on a visual-based control strategy to accurately align the end-effector with the desired block, enabling block extraction by pushing. To this aim, we trained an instance segmentation deep learning model on a synthetic custom dataset to segment each piece of the Jenga tower, allowing for visual tracking of the desired block’s pose during the motion of the manipulator. We integrated the visual-based strategy with a 1D force sensor to detect whether the block could be safely removed by identifying a force threshold value. Our experimentation shows that our low-cost solution allows e.DO to precisely reach removable blocks and perform up to 14 consecutive extractions.
(This article belongs to the Special Issue Recognition Robotics)
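
A minimal sketch of the force-gated extraction logic described above: the arm keeps pushing while the 1D force reading stays under a threshold and aborts otherwise. The threshold value, step size, and sensor/robot interfaces are hypothetical stubs, not the paper's implementation:

```python
import random

FORCE_THRESHOLD_N = 2.5        # assumed safe-push threshold, in newtons
STEP_MM = 1.0                  # advance per control cycle

def read_force_sensor():
    """Stub for the monodirectional force sensor driver."""
    return random.uniform(0.0, 4.0)

def push_block(depth_goal_mm=30.0):
    pushed = 0.0
    while pushed < depth_goal_mm:
        force = read_force_sensor()
        if force > FORCE_THRESHOLD_N:
            return False       # block is load-bearing: stop safely
        pushed += STEP_MM      # the arm would be commanded one step here
    return True                # block can be extracted

print("extractable" if push_block() else "abort")
```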

17 pages, 11802 KiB  
Article
Collaborative 3D Scene Reconstruction in Large Outdoor Environments Using a Fleet of Mobile Ground Robots
by John Lewis, Pedro U. Lima and Meysam Basiri
Sensors 2023, 23(1), 375; https://doi.org/10.3390/s23010375 - 29 Dec 2022
Cited by 4 | Viewed by 1936
Abstract
Teams of mobile robots can be employed in many outdoor applications, such as precision agriculture, search and rescue, and industrial inspection, allowing an efficient and robust exploration of large areas and enhancing the operators’ situational awareness. In this context, this paper describes an active and decentralized framework for the collaborative 3D mapping of large outdoor areas using a team of mobile ground robots under limited communication range and bandwidth. A real-time method is proposed that allows the sharing and registration of individual local maps, obtained from 3D LiDAR measurements, to build a global representation of the environment. A conditional peer-to-peer communication strategy is used to share information over long-range and short-range distances while considering the bandwidth constraints. Results from both real-world and simulated experiments, executed in an actual solar power plant and in its digital twin representation, demonstrate the reliability and efficiency of the proposed decentralized framework for such large outdoor operations.
(This article belongs to the Special Issue Recognition Robotics)
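
A minimal sketch, under assumed ranges and an assumed map representation (an Nx3 point array), of a distance-conditional sharing policy of the kind described: full-resolution maps within short range, a downsampled copy at long range, nothing beyond the communication radius:

```python
import numpy as np

SHORT_RANGE_M = 30.0
LONG_RANGE_M = 150.0

def voxel_downsample(points, voxel=1.0):
    """Keep one point per occupied voxel (cheap bandwidth reduction)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def map_message(local_map, peer_distance_m):
    if peer_distance_m <= SHORT_RANGE_M:
        return local_map                      # full-resolution local map
    if peer_distance_m <= LONG_RANGE_M:
        return voxel_downsample(local_map, voxel=2.0)
    return None                               # out of communication range

cloud = np.random.rand(10000, 3) * 100.0      # stand-in for a 3D LiDAR map
msg = map_message(cloud, peer_distance_m=80.0)
```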

14 pages, 10369 KiB  
Article
Finding a Landing Site in an Urban Area: A Multi-Resolution Probabilistic Approach
by Barak Pinkovich, Boaz Matalon, Ehud Rivlin and Hector Rotstein
Sensors 2022, 22(24), 9807; https://doi.org/10.3390/s22249807 - 14 Dec 2022
Cited by 1 | Viewed by 1138
Abstract
This paper considers the problem of finding a landing spot for a drone in a dense urban environment. The conflicting requirements of fast exploration and high resolution are solved using a multi-resolution approach, by which visual information is collected by the drone at decreasing altitudes so that the spatial resolution of the acquired images increases monotonically. A probability distribution is used to capture the uncertainty of the decision process for each terrain patch. The distributions are updated as information from different altitudes is collected. When the confidence level for one of the patches becomes larger than a prespecified threshold, suitability for landing is declared. One of the main building blocks of the approach is a semantic segmentation algorithm that attaches probabilities to each pixel of a single view. The decision algorithm combines these probabilities with a priori data and previous measurements to obtain the best estimates. Feasibility is illustrated by presenting several examples generated by a realistic closed-loop simulator.
(This article belongs to the Special Issue Recognition Robotics)
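
A minimal sketch of the per-patch decision process: each terrain patch carries a landing-suitability probability that is updated with segmentation evidence gathered at each altitude until one patch exceeds a confidence threshold. The update rule, grid size, and numbers are illustrative assumptions:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.95

def bayes_update(prior, likelihood_suitable, likelihood_unsuitable):
    """Posterior probability that a patch is suitable for landing."""
    num = likelihood_suitable * prior
    den = num + likelihood_unsuitable * (1.0 - prior)
    return num / den

patches = np.full((8, 8), 0.5)                # flat prior over an 8x8 grid
for altitude in (120.0, 60.0, 30.0):          # descending passes
    # per-pixel probabilities from a semantic segmentation model would be
    # aggregated per patch here; random values stand in for them
    evidence = np.random.uniform(0.3, 0.9, patches.shape)
    patches = bayes_update(patches, evidence, 1.0 - evidence)
    if patches.max() > CONFIDENCE_THRESHOLD:
        print("land at patch", np.unravel_index(patches.argmax(), patches.shape))
        break
```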

17 pages, 7161 KiB  
Article
Vision-Based Detection and Classification of Used Electronic Parts
by Praneel Chand and Sunil Lal
Sensors 2022, 22(23), 9079; https://doi.org/10.3390/s22239079 - 23 Nov 2022
Cited by 5 | Viewed by 1869
Abstract
Economic and environmental sustainability is becoming increasingly important in today’s world. Electronic waste (e-waste) is on the rise and options to reuse parts should be explored. Hence, this paper presents the development of vision-based methods for the detection and classification of used electronic parts. In particular, the problem of classifying commonly used and relatively expensive electronic project parts such as capacitors, potentiometers, and voltage regulator ICs is investigated. A multiple object workspace scenario with an overhead camera is investigated. A customized object detection algorithm determines regions of interest and extracts data for classification. Three classification methods are explored: (a) shallow neural networks (SNNs), (b) support vector machines (SVMs), and (c) deep learning with convolutional neural networks (CNNs). All three methods utilize 30 × 30-pixel grayscale image inputs. Shallow neural networks achieved the lowest overall accuracy of 85.6%. The SVM implementation produced its best results using a cubic kernel and principal component analysis (PCA) with 20 features. An overall accuracy of 95.2% was achieved with this setting. The deep learning CNN model has three convolution layers, two pooling layers, one fully connected layer, softmax, and a classification layer. The convolution layer filter size was set to four and adjusting the number of filters produced little variation in accuracy. An overall accuracy of 98.1% was achieved with the CNN model.
(This article belongs to the Special Issue Recognition Robotics)
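
The abstract is specific enough to sketch the CNN layout in PyTorch: three convolution layers with 4 × 4 filters, two pooling layers, one fully connected layer, and a softmax, applied to 30 × 30 grayscale inputs. The filter counts and the class count below are assumptions:

```python
import torch
import torch.nn as nn

N_CLASSES = 3          # e.g., capacitor / potentiometer / voltage regulator

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=4), nn.ReLU(),   # 30x30 -> 27x27
    nn.MaxPool2d(2),                              # -> 13x13
    nn.Conv2d(16, 16, kernel_size=4), nn.ReLU(),  # -> 10x10
    nn.MaxPool2d(2),                              # -> 5x5
    nn.Conv2d(16, 16, kernel_size=4), nn.ReLU(),  # -> 2x2
    nn.Flatten(),
    nn.Linear(16 * 2 * 2, N_CLASSES),             # fully connected layer
    nn.Softmax(dim=1),                            # class probabilities
)

probs = model(torch.randn(1, 1, 30, 30))          # one grayscale part image
```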

15 pages, 7500 KiB  
Article
AIDM-Strat: Augmented Illegal Dumping Monitoring Strategy through Deep Neural Network-Based Spatial Separation Attention of Garbage
by Yeji Kim and Jeongho Cho
Sensors 2022, 22(22), 8819; https://doi.org/10.3390/s22228819 - 15 Nov 2022
Cited by 6 | Viewed by 1551
Abstract
Economic and social progress in the Republic of Korea resulted in an increased standard of living, which subsequently produced more waste. The Korean government implemented a volume-based trash disposal system that may modify waste disposal characteristics to handle vast volumes of waste efficiently. However, the inconvenience of having to purchase standard garbage bags on one’s own led to passive participation by citizens and instances of illegally dumping waste in non-standard plastic bags. As a result, there is a need for the development of automatic detection and reporting of illegal acts of garbage dumping. To achieve this, we suggest a system for tracking unlawful rubbish disposal that is based on deep neural networks. The proposed monitoring approach obtains the articulation points (joints) of a dumper through OpenPose and identifies the type of garbage bag through the object detection model You Only Look Once (YOLO) to determine the distance from the dumper’s wrist to the garbage bag and decide whether illegal dumping has occurred. Additionally, we introduce a method of tracking the IDs issued to the waste bags using a multi-object tracking (MOT) model to reduce the false detection of illegal dumping. To evaluate the efficacy of the proposed illegal dumping monitoring system, we compared it with other systems based on behavior recognition. The results validate that the suggested approach achieves a higher degree of accuracy and a lower percentage of false alarms, making it useful for a variety of upcoming applications.
(This article belongs to the Special Issue Recognition Robotics)
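
A minimal sketch of the decision rule described above: the wrist joint from the pose estimator is compared with the detected garbage-bag box, and an event is flagged when a non-standard bag is left behind. The coordinates, threshold, and single-frame check are illustrative simplifications of the full pipeline:

```python
import math

DUMP_DISTANCE_PX = 80.0        # assumed wrist-to-bag separation threshold

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def is_illegal_dumping(wrist_xy, bag_box, bag_is_standard):
    """Flag a non-standard bag released (wrist moved away) by the dumper."""
    if bag_is_standard:
        return False           # official volume-based bag: legal disposal
    cx, cy = box_center(bag_box)
    dist = math.hypot(wrist_xy[0] - cx, wrist_xy[1] - cy)
    return dist > DUMP_DISTANCE_PX   # bag left behind after being carried

# wrist from a pose estimator, bag box + class from a detector (values faked)
print(is_illegal_dumping((420.0, 310.0), (200, 260, 280, 360), False))
```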

21 pages, 10732 KiB  
Article
Application of Smoothing Spline in Determining the Unmanned Ground Vehicles Route Based on Ultra-Wideband Distance Measurements
by Łukasz Rykała, Andrzej Typiak, Rafał Typiak and Magdalena Rykała
Sensors 2022, 22(21), 8334; https://doi.org/10.3390/s22218334 - 30 Oct 2022
Cited by 2 | Viewed by 1130
Abstract
Unmanned ground vehicles (UGVs) are technically complex machines that operate in difficult or dangerous environmental conditions. In recent years, there has been an increase in research on so-called "following vehicles". This concept introduces a guide: an object that sets the route the platform should follow. The role of the UGV is then to reproduce that path. The article is based on the field test results of an outdoor localization subsystem using ultra-wideband technology. It focuses on determining the guide's route using a smoothing spline for constructing a UGV's path planning subsystem, which is one of the stages of implementing a "follow-me" system. It is shown that the use of a smoothing spline, owing to the implemented mathematical model, allows the guide's path to be recreated in the event of data loss lasting up to several seconds. The novelty of this article lies in the study of the influence of the smoothing parameter on the estimation errors of the guide's location.
(This article belongs to the Special Issue Recognition Robotics)
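
A small sketch of route reconstruction with a smoothing spline using SciPy; the smoothing factor s is the parameter whose influence the article studies, while the trajectory and noise model here are synthetic stand-ins for the UWB measurements:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 60.0, 120)               # timestamps, s
x_true = 0.5 * t                              # guide moves along x
y_true = 5.0 * np.sin(0.1 * t)                # and weaves in y
x_meas = x_true + np.random.normal(0, 0.3, t.size)  # UWB noise, m
y_meas = y_true + np.random.normal(0, 0.3, t.size)

s = 5.0                                       # smoothing factor (tunable)
spline_x = UnivariateSpline(t, x_meas, s=s)
spline_y = UnivariateSpline(t, y_meas, s=s)

t_dense = np.linspace(0.0, 60.0, 600)         # the smoothed path also
path = np.column_stack([spline_x(t_dense),    # bridges short data gaps
                        spline_y(t_dense)])
```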

11 pages, 2553 KiB  
Article
Improved Monitoring of Wildlife Invasion through Data Augmentation by Extract–Append of a Segmented Entity
by Jaekwang Lee, Kangmin Lim and Jeongho Cho
Sensors 2022, 22(19), 7383; https://doi.org/10.3390/s22197383 - 28 Sep 2022
Cited by 4 | Viewed by 1552
Abstract
Owing to the continuous increase in damage to farms caused by wild animals destroying crops in South Korea, various methods have been proposed to resolve these issues, such as installing electric fences and using warning lamps or ultrasonic waves. Recently, new methods have been attempted by applying deep learning-based object-detection techniques to a robot. However, for effective training of a deep learning-based object-detection model, overfitting or biased training should be avoided, and a large amount of training data is required. In particular, establishing a training dataset for specific wild animals requires considerable time and labor. Therefore, this study proposes an Extract–Append data augmentation method in which specific objects are extracted from a limited number of images via semantic segmentation and appended to numerous arbitrary background images. The study thus aims to improve the model’s detection performance by generating a rich dataset of wild animals with various background images, particularly images of water deer and wild boar, which currently cause the most problematic social issues. A comparison between the object detector trained using the proposed Extract–Append technique and that trained using existing data augmentation techniques showed that the mean Average Precision (mAP) improved by ≥2.2%. Moreover, further improvement in the detection performance of deep learning-based object-detection models can be expected, as the proposed technique can solve the issue of the lack of specific data that are difficult to obtain.
(This article belongs to the Special Issue Recognition Robotics)
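
A minimal sketch of the Extract–Append idea: an entity segmented from a source image (a binary mask, here assumed to come from a segmentation model) is cropped and pasted at a random position on an arbitrary background, yielding a new training image and its bounding box. Shapes and placement logic are assumptions:

```python
import numpy as np

def extract_append(src_img, mask, background, rng=np.random.default_rng()):
    """Paste the masked entity from src_img onto background at a random spot."""
    ys, xs = np.nonzero(mask)
    crop = src_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    crop_mask = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape[:2]
    H, W = background.shape[:2]
    top = rng.integers(0, H - h + 1)
    left = rng.integers(0, W - w + 1)
    out = background.copy()
    region = out[top:top + h, left:left + w]
    region[crop_mask > 0] = crop[crop_mask > 0]   # copy only entity pixels
    return out, (left, top, left + w, top + h)    # image + new bounding box

src = np.zeros((200, 200, 3), np.uint8); src[50:120, 60:140] = 180
m = np.zeros((200, 200), np.uint8); m[50:120, 60:140] = 1
aug, box = extract_append(src, m, np.full((480, 640, 3), 90, np.uint8))
```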

25 pages, 3259 KiB  
Article
A Bio-Inspired Endogenous Attention-Based Architecture for a Social Robot
by Sara Marques-Villarroya, Jose Carlos Castillo, Juan José Gamboa-Montero, Javier Sevilla-Salcedo and Miguel Angel Salichs
Sensors 2022, 22(14), 5248; https://doi.org/10.3390/s22145248 - 13 Jul 2022
Cited by 2 | Viewed by 2412
Abstract
A robust perception system is crucial for natural human–robot interaction. An essential capability of these systems is to provide a rich representation of the robot’s environment, typically using multiple sensory sources. Moreover, this information allows the robot to react to both external stimuli and user responses. The novel contribution of this paper is the development of a perception architecture based on the bio-inspired concept of endogenous attention, integrated into a real social robot. The architecture is defined at a theoretical level, to provide insights into the underlying bio-inspired mechanisms, and at a practical level, to integrate and test it within the complete architecture of a robot. We also define mechanisms to establish the most salient stimulus for the detection or task in question. Furthermore, the attention-based architecture uses information from the robot’s decision-making system to produce user responses and robot decisions. Finally, this paper also presents the preliminary test results from the integration of this architecture into a real social robot.
(This article belongs to the Special Issue Recognition Robotics)
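
A toy sketch of one ingredient such an architecture needs: selecting the most salient stimulus by blending bottom-up intensity with top-down (endogenous) task relevance. The stimulus fields and the weighting are illustrative assumptions, not the paper's actual mechanism:

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    source: str        # e.g. "vision", "audio", "touch"
    intensity: float   # normalised bottom-up strength, 0..1
    relevance: float   # top-down (endogenous) relevance to the task, 0..1

def most_salient(stimuli, endogenous_weight=0.7):
    """Blend bottom-up intensity with task-driven relevance; pick a winner."""
    def salience(s):
        return ((1 - endogenous_weight) * s.intensity
                + endogenous_weight * s.relevance)
    return max(stimuli, key=salience)

focus = most_salient([
    Stimulus("vision", intensity=0.4, relevance=0.9),   # user's face
    Stimulus("audio", intensity=0.8, relevance=0.2),    # background noise
    Stimulus("touch", intensity=0.1, relevance=0.1),
])
print(focus.source)    # attends to vision despite the louder audio
```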

25 pages, 5075 KiB  
Article
Benchmarking Object Detection Deep Learning Models in Embedded Devices
by David Cantero, Iker Esnaola-Gonzalez, Jose Miguel-Alonso and Ekaitz Jauregi
Sensors 2022, 22(11), 4205; https://doi.org/10.3390/s22114205 - 31 May 2022
Cited by 6 | Viewed by 5069
Abstract
Object detection is an essential capability for performing complex tasks in robotic applications. Today, deep learning (DL) approaches are the basis of state-of-the-art solutions in computer vision, where they provide very high accuracy albeit with high computational costs. Due to the physical limitations of robotic platforms, embedded devices are not as powerful as desktop computers, and adjustments have to be made to deep learning models before transferring them to robotic applications. This work benchmarks deep learning object detection models on embedded devices. Furthermore, some hardware selection guidelines are included, together with a description of the most relevant features of the two boards selected for this benchmark. Embedded electronic devices integrate a powerful AI co-processor to accelerate DL applications. To take advantage of these co-processors, models must be converted to a specific embedded runtime format. Five quantization levels applied to a collection of DL models are considered; two of them allow the execution of models on the embedded general-purpose CPU and are used as the baseline to assess the improvements obtained when running the same models with the three remaining quantization levels in the AI co-processors. The benchmark procedure is explained in detail, and a comprehensive analysis of the collected data is presented. Finally, the feasibility and challenges of the implementation of embedded object detection applications are discussed.
(This article belongs to the Special Issue Recognition Robotics)
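
A minimal sketch of the measurement at the core of such a benchmark: timing repeated inferences and reporting latency statistics. The predict callable is a stub standing in for a quantized model deployed through an embedded runtime, so the sketch stays self-contained:

```python
import time
import statistics

def benchmark(predict, input_data, warmup=10, runs=100):
    for _ in range(warmup):                   # let caches and clocks settle
        predict(input_data)
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        predict(input_data)
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return {"mean_ms": statistics.mean(latencies),
            "p95_ms": sorted(latencies)[int(0.95 * runs) - 1],
            "fps": 1000.0 / statistics.mean(latencies)}

stub_model = lambda x: sum(x) * 0.001         # placeholder for the real model
print(benchmark(stub_model, list(range(10000))))
```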

24 pages, 18598 KiB  
Article
Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction
by Dominykas Strazdas, Jan Hintz, Aly Khalifa, Ahmed A. Abdelrahman, Thorsten Hempel and Ayoub Al-Hamadi
Sensors 2022, 22(3), 923; https://doi.org/10.3390/s22030923 - 25 Jan 2022
Cited by 20 | Viewed by 4121
Abstract
This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were chosen based on a previous Wizard of Oz study, which showed a strong propensity for speech and pointing gestures. Based on these findings, we design and implement a new multi-modal system for contactless human-machine interaction based on speech, facial, and gesture recognition. We evaluate our proposed system in an extensive study with multiple subjects to examine the user experience and interaction efficiency. The evaluation shows that our method achieves usability scores similar to those of the entirely human-controlled robot interaction in our Wizard of Oz study. Furthermore, our framework’s implementation is based on the Robot Operating System (ROS), allowing modularity and extendability for our multi-device and multi-user method.
(This article belongs to the Special Issue Recognition Robotics)

Review


26 pages, 2731 KiB  
Review
RANSAC for Robotic Applications: A Survey
by José María Martínez-Otzeta, Itsaso Rodríguez-Moreno, Iñigo Mendialdua and Basilio Sierra
Sensors 2023, 23(1), 327; https://doi.org/10.3390/s23010327 - 28 Dec 2022
Cited by 20 | Viewed by 4596
Abstract
Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust estimation method for the parameters of a model contaminated by a sizable percentage of outliers. In its simplest form, the process starts with a sampling of the minimum data needed to perform an estimation, followed by an evaluation of its adequacy, and further repetitions of this process until some stopping criterion is met. Multiple variants have been proposed in which this workflow is modified, typically tweaking one or several of these steps for improvements in computing time or in the quality of the estimation of the parameters. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of the RANSAC family of methods, with special interest in applications in robotics.
(This article belongs to the Special Issue Recognition Robotics)
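
RANSAC in its simplest form, as summarized above, repeatedly fits a model to a minimal sample, counts inliers, and keeps the best hypothesis. The sketch below fits a 2D line from two-point minimal samples; the tolerance and iteration count are typical illustrative values:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05,
                rng=np.random.default_rng(0)):
    best_model, best_inliers = None, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            continue                              # degenerate minimal sample
        unit_normal = np.array([-d[1], d[0]]) / norm
        dists = np.abs((points - p1) @ unit_normal)   # point-to-line distance
        inliers = points[dists < inlier_tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_model, best_inliers = (p1, d / norm), inliers
    return best_model, best_inliers

rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 100)
pts = np.column_stack([xs, 2.0 * xs + 0.5]) + rng.normal(0.0, 0.01, (100, 2))
pts[::10] += 1.0                                  # 10% gross outliers
model, inliers = ransac_line(pts)
print(len(inliers), "inliers of", len(pts))
```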
