Assistive Robots for Healthcare and Human-Robot Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 39754

Special Issue Editors


Dr. Grazia D'Onofrio
Guest Editor
Fondazione Casa Sollievo della Sofferenza, Department of Medical Sciences, Complex Unit of Geriatrics, Viale Cappuccini 1, 71013 San Giovanni Rotondo (FG), Italy
Interests: geriatrics; neurocognitive disorders; psychological and behavioural symptoms; information and communication technologies; ambient assisted living

Dr. Daniele Sancarlo
Guest Editor
Fondazione Casa Sollievo della Sofferenza, Department of Medical Sciences, Complex Unit of Geriatrics, Viale Cappuccini 1, 71013 San Giovanni Rotondo (FG), Italy
Interests: genetics; pharmacology; cognition disorders; neurodegenerative diseases; memory; clinical neuropsychology; cognitive neuropsychology; executive function; cognitive neuroscience; learning and memory

Special Issue Information

Dear Colleagues,

Assistive technologies such as Assistive Robots (ARs) are being considered as enablers to support the caregiving process, potentially enhancing patient well-being and decreasing caregiver workload. However, research on person-centered care, multimodal interaction, multimodal data collection, and caregiver expectancy models still needs to be deepened to improve AR acceptability.

In light of these assumptions, the Human-Robot Interaction (HRI) field is devoted to understanding, designing, and assessing robotic systems used by human beings.

By definition, interaction implies communication. In light of this assumption, research in the HRI field is increasingly focused on the development of robots equipped with intelligent communicative abilities, in particular speech-based natural-language conversational abilities. These efforts relate directly to the research area of computational linguistics, generally defined as “the subfield of computer science concerned with using computational techniques to learn, understand, and produce human language content”. The advances and results in computational linguistics provide a foundational background for the development of so-called Spoken Dialogue Systems, i.e., computer systems designed to interact with humans using spoken natural language. The ability to communicate using natural language is a fundamental requirement for a robot that interacts with human beings, and spoken dialogue is therefore generally considered the most natural mode of social human-robot interaction. Sensing technologies play a key role in HRI, and new approaches, or novel applications of existing ones, could be truly significant in advancing this field and, consequently, all of its sub-fields.

The central focus of this Special Issue is to advance novel technologies applied in healthcare processes that have shown exceptional promise in HRI models, through the use of new sensors or of methodologies able to adapt, combine, or improve existing ones.

The first important question concerns the modalities a robot needs in order to sense a person's emotional state. The second concerns how to model the interaction between human and robot, not only at the haptic level but also at the emotional level.

Dr. Grazia D'Onofrio
Dr. Daniele Sancarlo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • development of new sensing methodologies to facilitate HRI
  • improvement of existing technologies in HRI
  • application of multimodal approaches in HRI
  • role of emotion detection in HRI
  • ethical aspects of HRI
  • value-sensitive design in care robotics
  • patient-centeredness
  • acceptability and usability assessment
  • impact of robot embodiment and how it affects interactions

Published Papers (13 papers)


Editorial


3 pages, 179 KiB  
Editorial
Assistive Robots for Healthcare and Human–Robot Interaction
by Grazia D’Onofrio and Daniele Sancarlo
Sensors 2023, 23(4), 1883; https://0-doi-org.brum.beds.ac.uk/10.3390/s23041883 - 08 Feb 2023
Cited by 1 | Viewed by 2052
Abstract
Assistive robots are still mostly prototypes that only remotely recall human interactive dynamics [...] Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)

Research


16 pages, 5004 KiB  
Article
A Care Robot with Ethical Sensing System for Older Adults at Home
by Jong-Wook Kim, Young-Lim Choi, Sang-Hyun Jeong and Jeonghye Han
Sensors 2022, 22(19), 7515; https://0-doi-org.brum.beds.ac.uk/10.3390/s22197515 - 03 Oct 2022
Cited by 11 | Viewed by 3155
Abstract
Many studies have explored emotional and mental services that robots can provide for older adults, such as offering them daily conversation, news, music, or health information. However, the ethical issues raised by using sensors for frail older adults to monitor their daily movements or their medication intake, for instance, are still being discussed. In this study, we develop an older adult-guided, caregiver-monitored robot, Dori, which can detect and recognize movement by sensing human poses in accordance with two factors from the human-centered artificial intelligence (HCAI) framework. To design the care robot’s services based on sensing movement during daily activities, we conducted focus group interviews with two groups—caregivers and medical staff—on the topic of care robot services not for patients but for prefrail and frail elderly individuals living at home. Based on their responses, we derived the focal service areas of cognitive support, emotional support, physical activity support, medication management, and caregiver management. We also found the two groups differed in their ethical judgments in the areas of dignity, autonomy, controllability, and privacy for services utilizing sensing by care robots. Therefore, the pose recognition technology adopted in the present work uses only joint coordinate information extracted from camera images and thus is advantageous for protecting human dignity and personal information. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
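
The paper's privacy argument rests on discarding camera frames and keeping only joint coordinates. As a rough illustration of what downstream movement detection on such data can look like, here is a minimal sketch; the keypoint layout, threshold, and synthetic data are assumptions for illustration, not the authors' Dori pipeline.

```python
# Minimal sketch of privacy-preserving movement detection from pose keypoints.
# Assumes keypoints were already extracted by some pose estimator; only the
# (x, y) joint coordinates are retained, never the camera frames themselves.
# The threshold and data below are illustrative assumptions.
import numpy as np

def movement_score(prev_joints: np.ndarray, curr_joints: np.ndarray) -> float:
    """Mean per-joint displacement between two frames of shape (n_joints, 2)."""
    return float(np.linalg.norm(curr_joints - prev_joints, axis=1).mean())

def detect_movement(keypoint_seq: np.ndarray, threshold: float = 5.0) -> list[bool]:
    """Flag frame pairs whose mean joint displacement exceeds `threshold` pixels."""
    return [
        movement_score(keypoint_seq[i - 1], keypoint_seq[i]) > threshold
        for i in range(1, len(keypoint_seq))
    ]

# Example: 100 frames of 17 COCO-style joints, synthetic random-walk "motion".
rng = np.random.default_rng(0)
seq = rng.normal(size=(100, 17, 2)).cumsum(axis=0)
flags = detect_movement(seq)
print(f"movement detected in {sum(flags)} of {len(flags)} frame pairs")
```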

26 pages, 6878 KiB  
Article
Investigating Methods for Cognitive Workload Estimation for Assistive Robots
by Ayca Aygun, Thuan Nguyen, Zachary Haga, Shuchin Aeron and Matthias Scheutz
Sensors 2022, 22(18), 6834; https://0-doi-org.brum.beds.ac.uk/10.3390/s22186834 - 09 Sep 2022
Cited by 6 | Viewed by 1779
Abstract
Robots interacting with humans in assistive contexts have to be sensitive to human cognitive states to be able to provide help when it is needed and not overburden the human when the human is busy. Yet, it is currently still unclear which sensing modality might allow robots to derive the best evidence of human workload. In this work, we analyzed and modeled data from a multi-modal simulated driving study specifically designed to evaluate different levels of cognitive workload induced by various secondary tasks such as dialogue interactions and braking events in addition to the primary driving task. Specifically, we performed statistical analyses of various physiological signals including eye gaze, electroencephalography, and arterial blood pressure from healthy volunteers and utilized several machine learning methodologies including k-nearest neighbor, naive Bayes, random forest, support-vector machines, and neural network-based models to infer human cognitive workload levels. Our analyses provide evidence for eye gaze being the best physiological indicator of human cognitive workload, even when multiple signals are combined. Specifically, the highest accuracy (in %) of binary workload classification based on eye gaze signals is 80.45 ± 3.15, achieved by using support-vector machines, while the highest accuracy combining eye gaze and electroencephalography is only 77.08 ± 3.22, achieved by a neural network-based model. Our findings are important for future efforts of real-time workload estimation in multimodal human-robot interactive systems, given that eye gaze is easy to collect and process and less susceptible to noise artifacts compared to other physiological signal modalities. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
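
The reported result (binary workload classification from gaze features with support-vector machines) follows a standard supervised-learning recipe. The sketch below shows that scaffold with scikit-learn; the features, data, and hyperparameters are placeholders, not the authors' dataset or settings.

```python
# Illustrative scaffold for binary workload classification from eye-gaze
# features using an SVM, evaluated with cross-validation. Random data stands
# in for real gaze measurements, so the printed score is chance level.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))      # e.g., fixation rate, saccade length, pupil size
y = rng.integers(0, 2, size=200)   # low vs. high workload labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```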

8 pages, 1660 KiB  
Article
A New Robotic Endoscope Holder for Ear and Sinus Surgery with an Integrated Safety Device
by Guillaume Michel, Philippe Bordure and Damien Chablat
Sensors 2022, 22(14), 5175; https://0-doi-org.brum.beds.ac.uk/10.3390/s22145175 - 11 Jul 2022
Cited by 2 | Viewed by 1763
Abstract
In the field of sinus and ear surgery, and more generally in microsurgery, the surgeon is faced with several challenges. The operations are traditionally carried out under binocular loupes, which allows the surgeon to use both hands for a microinstrument and an aspiration tool. More recently, the development of endoscopic otological surgery has made it possible to see areas that are difficult to access. However, the need to handle the endoscope means the surgeon can use only one instrument at a time. Thus, despite anaesthesia, patient motions during surgery can be very risky and are not that rare. Because the insertion zone in the middle ear or in the sinus cavity is very small, the mobility of the endoscope is limited to a rotation around a virtual point and a translation for the insertion of the camera. A mechanism with remote center motion (RCM) is a good candidate to achieve this movement and allow the surgeon to access the ear or sinus. Since only the translational motion along the main insertion axis is enabled, the ejection motion along the same axis is safe for the patient. A specific mechanism allows for inserting and ejecting the endoscope. In the insertion direction, the position is controlled and the velocity is limited; in the opposite direction, the energy stored in a spring allows for very quick ejection if the patient moves. A prototype robot using these new concepts is presented. Commercially available components are used to enable initial tests to be carried out on synthetic bones to validate the mobility of the robot and its safety functions. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
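
The safety concept relies on a preloaded spring converting stored elastic energy into a fast, unpowered ejection. A back-of-envelope calculation illustrates the idea; the spring constant, preload, and moving mass below are assumed values, not the paper's specifications.

```python
# Rough illustration of spring-driven ejection: stored elastic energy
# (1/2 k x^2) converts to kinetic energy (1/2 m v^2) of the carriage.
# All numbers are assumptions, not the prototype's parameters.
import math

k = 400.0   # spring constant [N/m] (assumed)
x = 0.05    # preload compression [m] (assumed)
m = 0.3     # moving mass of endoscope + carriage [kg] (assumed)

energy = 0.5 * k * x**2          # stored elastic energy [J]
v = math.sqrt(2 * energy / m)    # ejection speed if fully converted [m/s]
print(f"stored energy = {energy:.2f} J, ejection speed ~= {v:.2f} m/s")
```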

21 pages, 12553 KiB  
Article
Validation of an RF Image System for Real-Time Tracking Neurosurgical Tools
by Carolina Blanco-Angulo, Andrea Martínez-Lozano, Carlos G. Juan, Roberto Gutiérrez-Mazón, Julia Arias-Rodríguez, Ernesto Ávila-Navarro and José M. Sabater-Navarro
Sensors 2022, 22(10), 3845; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103845 - 19 May 2022
Cited by 4 | Viewed by 1910
Abstract
A radio frequency (RF)-based system for surgical navigation is presented. Surgical navigation technologies are widely used nowadays for aiding the surgical team with many interventions. However, the currently available options still pose considerable limitations, such as line-of-sight occlusion prevention or restricted materials and equipment allowance. In this work, we suggest a different approach based on a microwave broadband antenna system. We combine techniques from microwave medical imaging, which can overcome the current limitations in surgical navigation technologies, and we propose methods to develop RF-based systems for the real-time tracking of neurosurgical tools. The design of the RF system to perform the measurements is shown and discussed, and two methods (Multiply and Sum and Delay Multiply and Sum) for building the medical images are analyzed. From these measurements, a surgical tool position tracking system is developed and experimentally assessed in an emulated surgical scenario. The reported results are coherent with other approaches found in the literature, while overcoming their main practical limitations. The discussion of the results discloses some hints on the validity of the system, the optimal configurations depending on the requirements, and the possibilities for future enhancements. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
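
Delay Multiply and Sum (DMAS), one of the two beamformers the paper analyzes, forms each image pixel by delaying the channel signals and summing sign-preserving square roots of their pairwise products. Below is a compact single-pixel sketch under synthetic assumptions; geometry, sampling rate, and signals are placeholders, and a real system would use measured antenna data and integrate over a time window around the expected echo arrival.

```python
# Compact single-pixel sketch of Delay Multiply and Sum (DMAS) beamforming.
import numpy as np
from itertools import combinations

def dmas_pixel(signals, delays, fs):
    """signals: (n_antennas, n_samples); delays: per-antenna round-trip delay [s];
    fs: sampling rate [Hz]. Returns the DMAS intensity for one pixel."""
    n, n_samp = signals.shape
    idx = np.arange(n_samp)
    # Time-align each channel by its delay (linear interpolation between samples).
    aligned = np.array([np.interp(idx, idx - delays[i] * fs, signals[i])
                        for i in range(n)])
    # Pairwise multiply, take sign-preserving square roots, then sum (DMAS core).
    out = 0.0
    for i, j in combinations(range(n), 2):
        prod = aligned[i] * aligned[j]
        out += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)))
    return out

# Toy usage: 4 antennas of white-noise "echoes" with equal (zero) delays.
rng = np.random.default_rng(3)
print(dmas_pixel(rng.normal(size=(4, 512)), delays=np.zeros(4), fs=1e9))
```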

16 pages, 2717 KiB  
Article
Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project)
by Grazia D’Onofrio, Laura Fiorini, Alessandra Sorrentino, Sergio Russo, Filomena Ciccone, Francesco Giuliani, Daniele Sancarlo and Filippo Cavallo
Sensors 2022, 22(8), 2861; https://0-doi-org.brum.beds.ac.uk/10.3390/s22082861 - 08 Apr 2022
Cited by 9 | Viewed by 2963
Abstract
Background: Emotion recognition skills are predicted to be fundamental features in social robots. Since facial detection and recognition algorithms are compute-intensive operations, methods that can parallelize the algorithmic operations for large-scale information exchange in real time need to be identified. The study aims were to determine whether traditional machine learning algorithms could be used to assess each user's emotions separately, to relate emotion recognition to two robotic modalities (static or moving robot), and to evaluate the acceptability and usability of the assistive robot from an end-user point of view. Methods: Twenty-seven hospital employees (M = 12; F = 15) were recruited to perform the experiment, which showed 60 positive, negative, or neutral images selected from the International Affective Picture System (IAPS) database. The experiment was performed with the Pepper robot. In the experimental phase with Pepper in active mode, a concordant mimicry was programmed based on the type of image (positive, negative, or neutral). During the experimentation, the images were shown on a tablet on the robot's chest and via a web interface, lasting 7 s for each slide. For each image, the participants were asked to perform a subjective assessment of the perceived emotional experience using the Self-Assessment Manikin (SAM). After the participants used the robotic solution, the Almere model questionnaire (AMQ) and the system usability scale (SUS) were administered to assess the acceptability, usability, and functionality of the robotic solution. Analysis was performed on video recordings. The evaluation of three types of attitude (positive, negative, and neutral) was performed through two machine learning classification algorithms: k-nearest neighbors (KNN) and random forest (RF). Results: According to the analysis of emotions performed on the recorded videos, RF algorithm performance was better in terms of accuracy (mean ± sd = 0.98 ± 0.01) and execution time (mean ± sd = 5.73 ± 0.86 s) than the KNN algorithm. With the RF algorithm, the neutral, positive, and negative attitudes all had equally high precision (mean = 0.98) and F-measure (mean = 0.98). Most of the participants confirmed a high level of usability and acceptability of the robotic solution. Conclusions: RF algorithm performance was better in terms of accuracy and execution time than the KNN algorithm. The robot was not a disturbing factor in the arousal of emotions. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
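
The classifier comparison reported here (k-nearest neighbors vs. random forest) maps onto a few lines of scikit-learn. The sketch below shows the evaluation scaffold only; the features and labels are random placeholders, not the study's video-derived data, so the printed accuracies are meaningless chance-level numbers.

```python
# Scaffold for comparing KNN and RF classifiers on three attitude classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))      # placeholder for video-derived features
y = rng.integers(0, 3, size=300)    # negative / neutral / positive labels

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: accuracy = {acc:.2f}")
```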

11 pages, 1238 KiB  
Article
Robots in Eldercare: How Does a Real-World Interaction with the Machine Influence the Perceptions of Older People?
by Slawomir Tobis, Joanna Piasek, Miroslawa Cylkowska-Nowak and Aleksandra Suwalska
Sensors 2022, 22(5), 1717; https://0-doi-org.brum.beds.ac.uk/10.3390/s22051717 - 22 Feb 2022
Cited by 7 | Viewed by 2583
Abstract
(1) Background: Using autonomous social robots in selected areas of care for community-dwelling older adults is one of the promising approaches to address the problem of the widening care gap. We posed the question of whether a possibility to interact with the technology to be used had an impact on the scores given by the respondents in various domains of needs and requirements for social robots to be deployed in care for older individuals. (2) Methods: During the study, the opinions of older people (65+; n = 113; with no severe cognitive impairment) living in six social care institutions about a robot in care for older people were collected twice using the Users’ Needs, Requirements and Abilities Questionnaire (UNRAQ): after seeing a photo of the robot only and after a 90–150 min interaction with the TIAGo robot. (3) Results: Mean total scores for both assistive and social functions were higher after the interaction (p < 0.05). A positive correlation was found between opinion changes in social and assistive functions (r = 0.4842; p = 0.0000). (4) Conclusions: Preimplementation studies and assessments should include the possibility to interact with the robot to provide its future users with a clear idea of the technology and facilitate necessary customisations of the machine. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
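
The headline correlation (r = 0.4842 between opinion changes in social and assistive functions) is a plain Pearson correlation of pre/post score differences. Here is a minimal sketch of that computation, with made-up scores standing in for the UNRAQ data.

```python
# Pearson correlation between pre/post opinion changes in two domains.
# The synthetic scores below only illustrate the computation.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
pre_social, post_social = rng.normal(3, 1, 113), rng.normal(3.4, 1, 113)
pre_assist, post_assist = rng.normal(3, 1, 113), rng.normal(3.5, 1, 113)

delta_social = post_social - pre_social
delta_assist = post_assist - pre_assist
r, p = pearsonr(delta_social, delta_assist)
print(f"r = {r:.4f}, p = {p:.4f}")
```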

20 pages, 1359 KiB  
Article
Topic Break Detection in Interview Dialogues Using Sentence Embedding of Utterance and Speech Intention Based on Multitask Neural Networks
by Kazuyuki Matsumoto, Manabu Sasayama and Taiga Kirihara
Sensors 2022, 22(2), 694; https://0-doi-org.brum.beds.ac.uk/10.3390/s22020694 - 17 Jan 2022
Cited by 2 | Viewed by 2128
Abstract
Currently, task-oriented dialogue systems that perform specific tasks based on dialogue are widely used. Moreover, research and development of non-task-oriented dialogue systems are also actively conducted. One of the problems with these systems is that it is difficult to switch topics naturally. In this study, we focus on interview dialogue systems. In an interview dialogue, the dialogue system can take the initiative as an interviewer. The main task of an interview dialogue system is to obtain information about the interviewee via dialogue and to assist this individual in understanding his or her personality and strengths. In order to accomplish this task, the system needs to be flexible and appropriate in detecting topic switching and topic breaks. Given that topic switching tends to be more ambiguous in interview dialogues than in task-oriented dialogues, existing topic modeling methods that determine topic breaks based only on relationships and similarities between words are likely to fail. In this study, we propose a method for detecting topic breaks in dialogue to achieve flexible topic switching in interview dialogue systems. The proposed method is based on a multi-task learning neural network that uses embedded representations of sentences to understand the context of the text and utilizes the intention of an utterance as a feature. In multi-task learning, not only topic breaks but also the intention associated with the utterance and the speaker are targets of prediction. The results of our evaluation experiments show that using utterance intentions as features improves the accuracy of topic separation estimation compared to the baseline model. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
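
The described architecture, a shared encoder over sentence embeddings with prediction heads for topic break, utterance intention, and speaker, can be sketched in a few lines of PyTorch. The dimensions, head sizes, and the recurrent encoder choice below are assumptions, not the authors' exact network.

```python
# Minimal multi-task sketch: shared encoder over pre-computed sentence
# embeddings, with separate heads for topic break, intention, and speaker.
import torch
import torch.nn as nn

class MultiTaskTopicBreak(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, n_intents=10, n_speakers=2):
        super().__init__()
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.break_head = nn.Linear(2 * hidden, 2)        # topic break: yes/no
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        self.speaker_head = nn.Linear(2 * hidden, n_speakers)

    def forward(self, sent_embs):                          # (batch, seq, emb_dim)
        h, _ = self.encoder(sent_embs)                     # (batch, seq, 2*hidden)
        return self.break_head(h), self.intent_head(h), self.speaker_head(h)

model = MultiTaskTopicBreak()
dummy = torch.randn(4, 12, 768)                            # 4 dialogues, 12 utterances
breaks, intents, speakers = model(dummy)
# Training would sum cross-entropy losses over the three heads.
print(breaks.shape, intents.shape, speakers.shape)
```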

22 pages, 5284 KiB  
Article
Behavioral Data Analysis of Robot-Assisted Autism Spectrum Disorder (ASD) Interventions Based on Lattice Computing Techniques
by Chris Lytridis, Vassilis G. Kaburlasos, Christos Bazinas, George A. Papakostas, George Sidiropoulos, Vasiliki-Aliki Nikopoulou, Vasiliki Holeva, Maria Papadopoulou and Athanasios Evangeliou
Sensors 2022, 22(2), 621; https://0-doi-org.brum.beds.ac.uk/10.3390/s22020621 - 14 Jan 2022
Cited by 9 | Viewed by 2396
Abstract
Recent years have witnessed the proliferation of social robots in various domains including special education. However, specialized tools to assess their effect on human behavior, as well as to holistically design social robot applications, are often missing. In response, this work presents novel tools for analysis of human behavior data regarding robot-assisted special education. The objectives include, first, an understanding of human behavior in response to an array of robot actions and, second, an improved intervention design based on suitable mathematical instruments. To achieve these objectives, Lattice Computing (LC) models in conjunction with machine learning techniques have been employed to construct a representation of a child’s behavioral state. Using data collected during real-world robot-assisted interventions with children diagnosed with Autism Spectrum Disorder (ASD) and the aforementioned behavioral state representation, time series of behavioral states were constructed. The paper then investigates the causal relationship between specific robot actions and the observed child behavioral states in order to determine how the different interaction modalities of the social robot affected the child’s behavior. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
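
Lattice Computing builds on ordered structures such as intervals equipped with join and meet operations. The sketch below shows that generic machinery only; it is the substrate of, not a substitute for, the authors' behavioral-state model.

```python
# Basic lattice operations on intervals: join = smallest enclosing interval,
# meet = intersection. Generic LC machinery; values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def join(self, other: "Interval") -> "Interval":
        """Least upper bound: the smallest interval containing both."""
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def meet(self, other: "Interval") -> "Interval | None":
        """Greatest lower bound: the intersection, or None if disjoint."""
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        return Interval(lo, hi) if lo <= hi else None

# e.g., two observed ranges of an engagement score during a session
a, b = Interval(0.2, 0.5), Interval(0.4, 0.9)
print(a.join(b), a.meet(b))  # Interval(0.2, 0.9) Interval(0.4, 0.5)
```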

17 pages, 711 KiB  
Article
Pilots for Healthy and Active Ageing (PHArA-ON) Project: Definition of New Technological Solutions for Older People in Italian Pilot Sites Based on Elicited User Needs
by Grazia D’Onofrio, Laura Fiorini, Lara Toccafondi, Erika Rovini, Sergio Russo, Filomena Ciccone, Francesco Giuliani, Daniele Sancarlo and Filippo Cavallo
Sensors 2022, 22(1), 163; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010163 - 27 Dec 2021
Cited by 8 | Viewed by 3111
Abstract
Background: The Pilots for Healthy and Active Ageing (PHArA-ON) project aims to make smart and active living a reality for Europe's ageing population by creating a set of integrated and highly customizable interoperable open platforms with advanced services, devices, technologies, and tools. The aim of the present study was to determine the needs and preferences of older people and their caregivers for improving healthy and active aging and guiding the technological development of the PHArA-ON system. Methods: A pre-structured interview was administered to older adults, informal caregivers and professional caregivers (including social operators) taking part in the piloting sessions. Results: Interviews were carried out in the Umana Persone Social Enterprise R&D Network (UP) in Tuscany and the Ospedale Casa Sollievo della Sofferenza (CSS) in Apulia. A total of 22 older adults, 22 informal caregivers, 13 professional caregivers and 4 social operators were recruited. A prioritization analysis of services, according to the stakeholders' needs, determined two fundamental need categories: Health Management (i.e., stimulation and monitoring) and Socialisation (i.e., promoting social inclusion). Conclusions: The main scientific contributions of this study are the following: to design and evaluate technology in the context of healthy and active ageing, to acquire relevant knowledge on user needs in order to develop technologies that can handle the real-life situations of older people, to obtain useful insights about the attitude and availability of end-users in using technologies in clinical practice, and to provide important guidelines to improve the PHArA-ON system. Specific experimentation stages were also carried out to understand which kind of technology is more acceptable and to obtain feedback regarding the development priority related to the impact of the proposed services. Fruitful and continuous interaction with the different subjects involved in the development process of the system, as well as with stakeholders, enabled the implementation of a platform which can be further and easily integrated and improved. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)

17 pages, 5770 KiB  
Article
User Local Coordinate-Based Accompanying Robot for Human Natural Movement of Daily Life
by Hsiao-Kuan Wu, Po-Yin Chen, Hong-Yi Wu and Chung-Huang Yu
Sensors 2021, 21(11), 3889; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113889 - 04 Jun 2021
Cited by 1 | Viewed by 1893
Abstract
Considering the trend of aging societies, accompanying technology can help frail, elderly individuals participate in daily activities. The ideal accompanying robot should accompany the user in a proper position according to the activity scenarios and context; the prerequisite is that the accompanying robot should quickly move to a designated position and closely maintain it regardless of the direction in which the user moves. This paper proposes a user local coordinate-based strategy to satisfy this need. As a proof of concept, a novel “string-pot” approach was utilized to measure the position difference between the robot and the target. We implemented the control strategy and assessed its performance in our gait lab. The results showed that the robot can follow the user in the designated position while the user performs forward, backward, and lateral movements, turning, and walking along a curve. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
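
The core geometry of a user-local-coordinate strategy is a frame transform: the robot's goal is a fixed offset expressed in the user's frame, re-projected into world coordinates every control cycle. A minimal sketch follows, with an assumed offset and naming that is not the paper's.

```python
# Re-project a desired accompanying offset from the user's local frame into
# world coordinates. Offset and example values are illustrative assumptions.
import numpy as np

def goal_in_world(user_xy, user_heading, offset_local):
    """user_xy: (x, y) world position; user_heading: radians; offset_local:
    (x, y) in the user frame (x forward, y left)."""
    c, s = np.cos(user_heading), np.sin(user_heading)
    R = np.array([[c, -s], [s, c]])   # user frame -> world frame rotation
    return np.asarray(user_xy) + R @ np.asarray(offset_local)

# Example: user at (2, 1) facing 90 deg; robot stays 0.8 m to the user's right.
print(goal_in_world((2.0, 1.0), np.pi / 2, (0.0, -0.8)))   # -> [2.8 1.0]
```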

Review


21 pages, 8302 KiB  
Review
A Meta-Analysis on Remote HRI and In-Person HRI: What Is a Socially Assistive Robot to Do?
by Nan Liang and Goldie Nejat
Sensors 2022, 22(19), 7155; https://0-doi-org.brum.beds.ac.uk/10.3390/s22197155 - 21 Sep 2022
Cited by 7 | Viewed by 2412
Abstract
Recently, due to the COVID-19 pandemic and the related social distancing measures, in-person activities have been significantly reduced to limit the spread of the virus, especially in healthcare settings. This has led to loneliness and social isolation for our most vulnerable populations. Socially assistive robots can play a crucial role in minimizing these negative effects, namely by providing assistance with activities of daily living and through cognitive and physical stimulation. The ongoing pandemic has also accelerated the exploration of remote presence, ranging from workplaces to home and healthcare environments. Human–robot interaction (HRI) researchers have also explored the use of remote HRI to provide cognitive assistance in healthcare settings. Existing in-person and remote comparison studies have investigated the feasibility of these types of HRI in individual scenarios and tasks. However, no consensus on the specific differences between in-person HRI and remote HRI has been reached. Furthermore, to date, the exact outcomes of in-person HRI versus remote HRI with a physical socially assistive robot have not been extensively compared, and the influence of physical embodiment in remote conditions has not been addressed. In this paper, we investigate and compare in-person HRI versus remote HRI for robots that assist people with activities of daily living and cognitive interventions. We present the first comprehensive investigation and meta-analysis of these two types of robotic presence to determine how they influence HRI outcomes and impact user tasks. In particular, we address research questions regarding experience, perceptions and attitudes, and the efficacy of both humanoid and non-humanoid socially assistive robots with different populations and interaction modes. The use of remote HRI to provide assistance with daily activities and interventions is a promising emerging field for healthcare applications. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
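
A meta-analysis such as this one typically pools per-study effect sizes with inverse-variance weights. The sketch below shows the fixed-effect version of that computation on made-up numbers, not the paper's extracted data.

```python
# Fixed-effect inverse-variance pooling of per-study effect sizes.
# The study values below are illustrative assumptions only.
import numpy as np

effects = np.array([0.30, 0.55, 0.10, 0.42])     # per-study effect sizes (e.g., d)
variances = np.array([0.05, 0.08, 0.04, 0.06])   # their sampling variances

w = 1.0 / variances                              # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```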

16 pages, 9367 KiB  
Review
A Systematic Review of Research on Robot-Assisted Therapy for Children with Autism
by Amal Alabdulkareem, Noura Alhakbani and Abeer Al-Nafjan
Sensors 2022, 22(3), 944; https://0-doi-org.brum.beds.ac.uk/10.3390/s22030944 - 26 Jan 2022
Cited by 42 | Viewed by 8992
Abstract
Recent studies have shown that children with autism may be interested in playing with an interactive robot. Moreover, the robot can engage these children in ways that demonstrate essential aspects of human interaction, guiding them in therapeutic sessions to practice more complex forms of interaction found in social human-to-human interactions. We review published articles on robot-assisted autism therapy (RAAT) to understand the trends in research on this type of therapy for children with autism and to provide practitioners and researchers with insights and possible future directions in the field. Specifically, we analyze 38 articles, all of which are refereed journal articles indexed on Web of Science from 2009 onward, and discuss the distribution of the articles by publication year, article type, database and journal, research field, robot type, participant age range, and target behaviors. Overall, the results show considerable growth in the number of journal publications on RAAT, reflecting increased interest in the use of robot technology in autism therapy as a salient and legitimate research area. Factors such as new advances in artificial intelligence techniques and machine learning have spurred this growth. Full article
(This article belongs to the Special Issue Assistive Robots for Healthcare and Human-Robot Interaction)
