Review

Methodological Quality of User-Centered Usability Evaluation of Ambient Assisted Living Solutions: A Systematic Literature Review

by Rute Bastardo 1, Ana Isabel Martins 2, João Pavão 3, Anabela Gonçalves Silva 2 and Nelson Pacheco Rocha 4,*
1 UNIDCOM, Science and Technology School, University of Trás-os-Montes and Alto Douro, Quinta de Prado, 5001-801 Vila Real, Portugal
2 Center for Health Technology and Services Research, Health Sciences School, University of Aveiro, 3810-193 Aveiro, Portugal
3 INESC-TEC, Science and Technology School, University of Trás-os-Montes and Alto Douro, Quinta de Prado, 5001-801 Vila Real, Portugal
4 Department of Medical Sciences, IEETA-Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2021, 18(21), 11507; https://0-doi-org.brum.beds.ac.uk/10.3390/ijerph182111507
Submission received: 9 September 2021 / Revised: 20 October 2021 / Accepted: 28 October 2021 / Published: 1 November 2021
(This article belongs to the Special Issue Supportive Systems for Active and Healthy Aging)

Abstract: This study aimed to determine the methodological quality of user-centered usability evaluation of Ambient Assisted Living (AAL) solutions by (i) identifying the characteristics of the AAL studies reporting on user-centered usability evaluation, (ii) systematizing the methods, procedures and instruments being used, and (iii) verifying if there is evidence of a common understanding on methods, procedures, and instruments for user-centered usability evaluation. An electronic search was conducted on Web of Science, Scopus, and IEEE Xplore databases, combining relevant keywords. Then, titles and abstracts were screened against inclusion and exclusion criteria, and the full texts of the eligible studies were retrieved and screened for inclusion. A total of 44 studies were included. The results show a great heterogeneity of methods, procedures, and instruments to evaluate the usability of AAL solutions and, in general, the researchers fail to consider and report relevant methodological aspects. Guidelines and instruments to assess the quality of the studies might help improve the experimental design and reporting of studies on user-centered usability evaluation of AAL solutions.

1. Introduction

The worldwide population is ageing, and the related longer life expectancy represents an extraordinary challenge for public healthcare policies, due to changing patterns of disease, the demanding expectations of patients, and financial restrictions. The current ideal political paradigm, supported by concepts such as ageing in place [1] or active ageing [2,3], considers that older adults should continue living in the community rather than being forced to move to residential care units because of their cognitive and physical limitations.
The success of this approach depends not only on the characteristics of the individuals and their health conditions, influenced by different factors that interact with each other continuously and in subtle ways [4], namely physical, mental, and behavioral factors, but also on environmental factors (e.g., the living environment, the support of relatives, and the availability and accessibility of health care, social services, or community support [5]). Therefore, innovative solutions are required to guarantee the autonomy and independence of the increasing number of older adults within friendly environments.
Ambient Assisted Living (AAL) is one of the resources available to promote age-friendly environments that facilitate the maintenance of typical activities and values of middle age. The AAL paradigm refers to intelligent technologies, products, and services embedded in the physical environment. It aims to maintain the independence and general quality of life of individuals as they age, by providing secure and supportive environments, optimizing healthcare provision, namely in the presence of chronic diseases, promoting healthy lifestyles that positively impact physical and cognitive functioning, and facilitating social involvement and active participation in society [6].
AAL builds on the technological developments of the last decades that, among other possibilities, increased the capacity to develop and manufacture systems employing highly integrated and miniaturized smart components [7]. This remarkable development makes possible Weiser's vision [8] of ubiquitous computing by bringing computing devices into everyday life (e.g., integrating computing power and sensing features into anything, including everyday objects such as white goods, toys, or furniture), in such a way that users would not notice their presence. Moreover, AAL shares with ubiquitous computing the provision of effortless interaction, with context awareness [9] being an important feature that allows the environment to adapt to users' preferences.
AAL must have the capacity to properly distinguish the human beings present in the environment, recognize their individual roles, needs, preferences, and limitations, evaluate the situational context, provide different answers according to personal requirements and situational contexts, and anticipate desires and needs without conscious mediation. Therefore, the aggregation and processing of sensory data from different devices [7], to automatically change the environment, are crucial issues of the AAL paradigm [10,11].
Moreover, a distinctive characteristic of ubiquitous computing, and consequently of AAL, is the interaction with all kinds of elements through different types of interfaces. In addition to the well-known graphical user interfaces, other types of interfaces are being proposed that combine several input modalities, such as voice, haptic, gesture, or body movement interaction. These interfaces represent an increased diversity in terms of communication channels [12]. Since each independent channel is called a modality, the interaction might be unimodal or multimodal. Multimodal interaction together with context awareness implies high complexity in the implementation of user interaction mechanisms, but this complexity must be translated into simple and usable interfaces.
The term usability is related to the ability of a product or a service to help the user achieve a specific goal in a given situation while enjoying its use [13,14]. Good usability is usually associated with [15]: lower error rates, more intuitive products and systems, higher acceptance rates and decreased time and effort to attain a specific goal.
Usability evaluation is an important part of the overall development of user interaction systems, which consists of iterative cycles of design, prototyping, and validation. Most development processes focus entirely on adherence to technical specifications, which is one of the main reasons why some products or systems have failed to gain broad acceptance [15]. The introduction of user-centered methods aims to ensure the acceptance of the products and services being developed.
The literature describes several methods, procedures, and instruments to evaluate the usability of digital solutions [16]. Certain evaluations rely on usability experts (i.e., involving the inspection of the digital solution by experts to evaluate the various aspects of user interaction against an established set of principles of interface design and usability [17,18]), while others rely on end users (i.e., experiments involving end users to determine their perceptions [19]). These perceptions are gathered using different methods (e.g., test and inquiry) and techniques (e.g., interviews, think-aloud, and observation), which are usually combined [20] to perform a comprehensive evaluation.
Previous reviews aimed to study various aspects of AAL, including technological ecosystems [21], systems architectures [22], human activities’ recognition [23], acceptance in rehabilitation [24], questionnaires for user experience assessment [12], interventions [25,26], and bibliometric analysis [27]. However, the authors of this review were not able to identify studies systematically reviewing and evaluating the evidence on the quality of the user-centered usability evaluation of AAL solutions. Therefore, this systematic literature review aims to (i) identify the characteristics of the AAL studies reporting on user-centered usability evaluation, (ii) systematize the methods, procedures and instruments being used, and (iii) verify if there is evidence of a common understanding of methods, procedures, and instruments for user-centered usability evaluation.
The study intends to contribute to the quality of user-centered usability evaluation of AAL solutions by (i) reviewing the main research recently published, (ii) determining and discussing the usability evaluation methods, procedures, and instruments being used, (iii) determining the major methodological drawbacks, (iv) identifying good practices, and (v) promoting a common understanding of the methodological approaches.

2. Materials and Methods

This systematic review followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [28]. To perform the systematic literature review, the authors defined a review protocol with explicit descriptions of the methods to be used and the steps to be taken [29]: (i) the research questions; (ii) the search strategies; (iii) the inclusion and exclusion criteria; (iv) the screening procedures; (v) data extraction; (vi) methodological quality assessment; and (vii) synthesis and reporting.

2.1. Research Questions

Based on the analysis of the literature in the field of usability evaluation of digital solutions and previous work of the research team, a lack of consensus in the academic literature regarding the methods, procedures, and instruments being used for evaluating usability of AAL solutions was identified. To have a more in-depth knowledge of the practices on user-centered usability evaluation of AAL solutions, the following research question was formulated:
  • RQ1: What is the methodological quality of user-centered usability evaluation of AAL solutions?
This broad question was subdivided into three additional secondary research questions:
  • RQ2: What are the characteristics of the AAL studies reporting on user-centered usability evaluation in terms of study demographics, publication date, country of publication, purpose of the AAL reported solution and interaction modalities?
  • RQ3: What are the methods (e.g., test methods, inquiry methods or both), procedures (e.g., environment where the usability evaluation is conducted), and instruments being used (e.g., validated instruments or purposively developed instruments)?
  • RQ4: Do existing studies on user-centered usability evaluation of AAL solutions follow quality recommendations when assessed against the Critical Assessment of Usability Studies Scale (CAUSS) [30]?

2.2. Search Strategies

The resources chosen for the review were three electronic databases (i.e., Scopus, Web of Science, and IEEE Xplore). Boolean queries were prepared to include all the articles whose titles, abstracts, or keywords conformed to the conjunction (i.e., the AND Boolean operator) of the following expressions:
  • “AAL”, “ambient assisted living”, “ambient assisted technology”, “ambient assistive technology” or “ambient intelligence”;
  • “UX”, “user experience”, or “usability”;
  • “Evaluation” or “assessment”.
The expressivity of the search procedure depends on the database. As an example, the query expression to retrieve articles from the Scopus database was the following: TITLE-ABS-KEY ((AAL or “ambient assisted living” or “ambient assisted technology” or “ambient assistive technology” or “ambient intelligence”) and (UX or “user experience” or usability) and (evaluation or assessment)).
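The conjunction-of-disjunctions structure described above can be sketched programmatically. The snippet below is illustrative only (it is not the authors' actual tooling, and the helper names are hypothetical); it composes a Scopus-style query from the three expression groups, quoting multi-word phrases for exact matching.

```python
# Illustrative sketch: build a Boolean query as an AND of OR-groups.
# The function names (quote, build_query) are hypothetical helpers.

def quote(term: str) -> str:
    """Quote multi-word phrases so databases match them exactly."""
    return f'"{term}"' if " " in term else term

def build_query(groups):
    """Conjunction (AND) of disjunctions (OR) of search terms."""
    clauses = ["(" + " OR ".join(quote(t) for t in group) + ")"
               for group in groups]
    return " AND ".join(clauses)

groups = [
    ["AAL", "ambient assisted living", "ambient assisted technology",
     "ambient assistive technology", "ambient intelligence"],
    ["UX", "user experience", "usability"],
    ["evaluation", "assessment"],
]

# Wrap in the Scopus field code that searches titles, abstracts, keywords.
scopus_query = "TITLE-ABS-KEY (" + build_query(groups) + ")"
print(scopus_query)
```

Other databases (e.g., Web of Science) would use the same group structure with a different field-code syntax, which is why the review adapted the expression per database.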
The electronic literature search was performed in January 2021 and included all the references published before 31 December 2020.

2.3. Inclusion and Exclusion Criteria

References were included if they reported on user-centered usability evaluation of AAL solutions that might be used to support older adults by promoting secure and supportive environments, optimizing healthcare provision, promoting healthy lifestyles, and facilitating social involvement and active participation in the society [6].
References were excluded if they (i) did not have abstracts, (ii) were not written in English, (iii) reported on reviews, surveys, or market studies, (iv) were books, reported on workshops, or special issues announcements, (v) reported on studies whose primary objectives were not usability assessment, or (vi) reported on studies that were not relevant for the objective of this systematic review.

2.4. Screening Procedures

The analysis and selection of the studies were performed in three steps:
  • First step—the authors removed the duplicates, the articles without an abstract, and those not written in English;
  • Second step—the authors assessed all titles and abstracts for relevance and those clearly not meeting the inclusion and exclusion criteria were removed;
  • Third step—the authors assessed the full text of the remaining articles against the outlined inclusion and exclusion criteria and the final list of the studies to be considered for the review was created.
Throughout this entire process, all articles were analyzed by three authors and any disagreement between the authors was discussed and resolved by consensus.

2.5. Data Extraction

Concerning data extraction, the following information was registered in a data sheet prepared by the authors for each of the studies included in the review: (i) the demographics of the study (i.e., authors and respective affiliations, year and source of publication); (ii) the scope of the study; (iii) the purpose of the AAL solution being reported; (iv) details of the interaction technologies being used; (v) the methods, techniques, instruments and procedures applied to evaluate usability; (vi) the characteristics of the participants involved in the usability evaluation; and (vii) the outcomes being reported.

2.6. Methodological Quality Assessment

Three authors independently assessed the methodological quality of the included studies using the Critical Assessment of Usability Studies Scale (CAUSS) [30], a scale developed to assess the methodological quality of studies evaluating the usability of electronic health products and services. The CAUSS has 15 items that can be scored “yes” or “no”. This scale is both valid and reliable (Intraclass Correlation Coefficient—ICC = 0.81) [30]. Each study was assessed by at least two authors. The quality assessment was undertaken in two steps: first, three manuscripts were assessed by all three authors involved in this step of the review to foster a common understanding of the scale items; then, all the remaining manuscripts were independently assessed by two of the three authors. During both steps, disagreements were resolved by discussion and a final decision was achieved by consensus. The percentage of agreement between the assessors was calculated for each of the 15 items of the scale.
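The per-item percentage of agreement mentioned above is simply the share of studies on which both assessors gave the same dichotomous score. A minimal sketch (illustrative only; the ratings below are hypothetical, not the review's data):

```python
# Illustrative sketch: per-item percentage agreement between two
# assessors scoring a dichotomous ("yes"/"no") CAUSS item.

def percent_agreement(rater_a, rater_b):
    """Percentage of studies on which both assessors agreed."""
    assert len(rater_a) == len(rater_b), "one rating per study per rater"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical ratings for one CAUSS item across ten studies.
item_a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
item_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(percent_agreement(item_a, item_b))  # 90.0 (they disagree on one study)
```

Repeating this per item, across each pair of assessors, yields the 78–96% (and 71–100% per pair) ranges reported in the Results.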

2.7. Synthesis and Reporting

Based on the demographic data of the included studies, a synthesis of the studies' characteristics was prepared, which included: (i) the number of studies published in conference proceedings and in scientific journals; (ii) the distribution of the studies by year and the publication rate, which was calculated using RMS Least Square Fit; and (iii) the distribution of the studies by country. Since some studies involved multidisciplinary teams, the institutional affiliation of the first author of each study was considered to determine the number of studies per country.
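A least-squares fit of publications per year reduces to the ordinary least-squares slope of a straight line through the yearly counts; a positive slope indicates a growing publication rate. A minimal sketch, using hypothetical counts (not the review's data):

```python
# Illustrative sketch: least-squares trend line over yearly
# publication counts. The counts below are hypothetical.

def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

years = list(range(2008, 2021))              # 2008 .. 2020
counts = [1, 0, 1, 2, 1, 2, 3, 4, 3, 5, 5, 6, 5]  # hypothetical
slope = ols_slope(years, counts)
print(f"publication rate: {slope:+.2f} studies/year")
```

A slope above zero corresponds to the upward trend visible in Figure 2.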
The different AAL solutions described by the included primary studies were coded in terms of AAL domains and interaction modalities. Concerning the AAL domains, a tabular presentation was prepared, which considered four domains [6]: (i) secure and supportive environment; (ii) healthcare provision; (iii) healthy lifestyles; and (iv) social involvement and active participation. These domains were further divided into various purposes [6]: (i) daily living activities and falls prevention for secure and supportive environment; (ii) home monitoring, remote care, telerehabilitation, and medication management for healthcare provision; (iii) physical activity, cognitive activity, and physical and cognitive activity for healthy lifestyles; and (iv) social inclusion and participation in leisure activities for social involvement and active participation. In turn, concerning the interaction modalities, the classification considered both the traditional unimodal graphical user interface approach (i.e., visual interaction) and multimodal approaches (i.e., visual interaction together with voice, auditory, gesture, or other interaction modalities, such as immersive virtual reality or robots) [12].
In terms of usability evaluation, the number and mean age of the participants, as well as the testing environment were identified, and the procedures used in each study were classified into test and inquiry methods and respective techniques: the method of test includes techniques such as observation, performance or think aloud and the method of inquiry includes techniques such as interviews, scales, or questionnaires.
Finally, based on the results of the application of CAUSS, the authors performed an analysis of the usability evaluation methods, procedures, and instruments of the included studies and a tabular and narrative synthesis was prepared.

3. Results

3.1. Study Selection

Figure 1 presents the flowchart of the systematic review. A total of 5635 studies were retrieved from the initial search of the selected databases.
The first step yielded 3026 studies since 2639 studies were removed because they (i) were duplicated (i.e., 874 studies), (ii) did not have abstracts (i.e., 1734 studies), or (iii) were not written in English (i.e., 31 studies).
During the second step, one study was excluded because it was retracted and 2928 studies were removed, because they (i) reported on reviews, surveys, or market studies (i.e., 234 studies), (ii) were books, reported workshops, or were special issues announcements (i.e., 98 studies), or (iii) were not relevant for the objective of this systematic review, since they did not report user-centered usability evaluation of AAL solutions that might be used to support older adults (i.e., 2596 studies).
Finally, after the full text analysis (i.e., the third step), 53 studies were removed since they did not meet the inclusion and exclusion criteria.
Therefore, 44 studies [31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74] were included in this systematic review.

3.2. Demographics of the Included Studies

Of the included 44 studies, some reported on the same research projects: studies [32,33], studies [45,46,47] and studies [36,52,58] were respectively related to the European funded projects ALADIN, iStoppFalls and Robot-ERA, while [68,74] were related to a project funded by the European Commission and co-funded by the Swiss Confederation.
In terms of publication types, ten studies were published in conference proceedings [31,32,34,35,38,40,42,61,62,72] and 34 studies were published in scientific journals [33,36,37,39,41,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,63,64,65,66,67,68,69,70,71,73,74].
Concerning the publication years, the included studies were published between 2008 (i.e., one study [31]) and 2020 (i.e., five studies [70,71,72,73,74]). The diagram in Figure 2 demonstrates a trend towards an increasing number of publications: more than two-thirds of the studies (i.e., 30 studies [45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74]) were published in the last five years, and more than one-third (i.e., 15 studies [60,61,62,63,64,65,66,67,68,69,70,71,72,73,74]) were published in the last two years.
Figure 3 presents the distribution of the included studies by country. Europe has the highest contribution (i.e., 43 studies), while the remaining regions of the world made only residual contributions: together, North America, South America, and Asia contributed three studies.
As can be seen in Table 1, 16 studies (i.e., 36% of the included studies) reported on the involvement of multinational research teams.

3.3. Purpose of the Reported AAL Solutions

Table 2 presents the AAL domains and purposes of the included studies. The promotion of secure and supportive environments and the optimization of healthcare provision were the AAL domains with the highest number of studies, respectively 20 and 14 studies.
Of the 20 studies related to secure and supportive environments, 15 focused on AAL solutions to support daily living activities and, consequently, to increase the independence of older adults, while five focused on AAL solutions to prevent falls.
In turn, the 14 studies related to healthcare provision were focused on home monitoring (five studies), telerehabilitation (four studies), remote care (three studies) and medication management (two studies).
Moreover, eight studies reported on usability evaluation of AAL solutions to promote healthy lifestyles (i.e., physical and cognitive activities). Finally, two studies reported on the usability evaluation of AAL solutions to promote social involvement and active participation. One of the studies was focused on social inclusion, while the other reported the use of AAL solutions to promote the participation of older adults in leisure activities.

3.4. Interaction Modalities

Concerning the interaction modalities (Table 3), 11 studies reported on unimodal approaches based on visual interaction. In turn, the remaining studies reported on multimodal approaches based on different interaction technologies, namely visual interaction together with voice, auditory, gesture, or other interaction modalities, such as immersive virtual reality or robots.

3.5. Methodological Quality Assessment

According to Figure 4, only four items of the methodological quality scale were scored positively for more than 80% of the included studies (i.e., items 3, 4, 5, and 14). In contrast, three items were scored positively for only six or fewer (≤13%) of the included studies (i.e., items 8, 10, and 11). All the remaining items (i.e., eight items) were scored positively for 20 (44%) to 33 (72%) of the included studies. The percentage agreement between the two authors who performed the quality ratings of each item varied between 78% and 96% (Table 4). Moreover, the percentages of agreement between each of the three pairs of evaluators across the different items were similar and varied between 71% and 100%.

3.6. Detailed Analysis of Usability Evaluation Procedures

Details of the experiments reported by the included studies are presented in Table 5, namely the solution being evaluated, the usability assessment methods and techniques that were used, the number and average age of the participants, and the test environments.
A detailed analysis of usability evaluation procedures (Table 6) revealed that test methods were used in 25 studies and inquiry methods were used in 39 studies. These add up to more than the number of included studies (i.e., 44), as 20 (45%) studies were based on a multimethod approach (i.e., combined both test and inquiry methods). Among the different techniques of the test method, the most reported were observation (n = 13; 29%) and performance evaluation (n = 12; 27%). Regarding inquiry, the most reported techniques were questionnaires/scales (n = 35; 80%) and interviews (n = 13; 29%). Several studies combined two or more techniques of the same method. Of the 35 studies that used scales/questionnaires, 19 (54%) studies used at least one valid and reliable usability evaluation instrument, 13 (37%) used questionnaires developed by the authors of the included studies without any reference to their psychometric characteristics (e.g., validity and reliability) and three (9%) used instruments based on technology acceptance models (Table 7).
In terms of valid and reliable instruments (Table 7), the System Usability Scale (SUS) emerged as the most used instrument, being used in 16 studies (i.e., [43,45,54,56,58,59,61,64,65,66,67,68,69,70,71,74]). Other instruments being used, both alone or in combination with SUS, were the IBM Computer Usability Satisfaction Questionnaires (IBM-CUSQ) (n = 2), the ICF-Based Usability Scale (ICF-US) (n = 2), the Post-Study System Usability Questionnaire (PSSUQ) (n = 1), the Usefulness, Satisfaction and Ease of Use Questionnaire (USE) (n = 1), and the Human Robot Interaction Scale (HRI) (n = 1).
With regards to model-based instruments, the models used were the Technology Acceptance Model (TAM) (n = 2) and the Unified Theory of Acceptance and Use of Technology (UTAUT) (n = 1).
The number of participants per study varied from four (i.e., studies [37,42]) to 153 (i.e., study [45]). Two studies did not include any information related to the age of the participants [60,72]. In turn, six studies did not report the mean age of the participants, although in five of them participants were at least 65 years old [32,39,48,55,67], while in another study participants were at least 50 years old [66]. Considering the studies that reported the mean age of participants, in three of them [40,59,73] the mean age was below 60 years, while in the remaining studies it was above 60 years.
The environments where usability evaluation was conducted were diverse (Table 8). In 13 of the 44 included studies (30%), usability evaluation was conducted at the participants' homes; in 10 studies (23%) it was conducted at institutional sites such as day care centers or nursing homes; in 12 studies (27%) it was performed in a living lab; and in nine studies (20%) it was conducted at research labs.

4. Discussion

This systematic review aimed to synthesize existing evidence on user-centered usability evaluation of AAL solutions.
Concerning the characteristics of the AAL studies reporting on user-centered usability evaluation (i.e., the second research question), the domains and purposes of the AAL solutions described in the included publications are in line with the AAL program objectives and include the promotion of secure and supportive environments, the optimization of healthcare provision, the promotion of healthy lifestyles, and the promotion of social involvement and active participation [75]. Concerning the interaction modalities, most publications report on solutions with multimodal approaches based on different interaction technologies, namely visual, voice, auditory, and gesture interaction. Multimodality is a critical factor in the successful deployment of AAL solutions [76], enabling individuals with different needs, or the same individual in different contexts, to select specific interaction modes.
The results also show a growing interest in the usability evaluation of AAL solutions, reflected in the increasing number of publications over the last years. Most authors are affiliated with institutions based in Europe, which is a predictable result, as AAL emerged as an initiative of the European Union aiming to respond to the needs of the elderly population in Europe [75,77]. However, although the AAL program aims to create synergies between researchers based in different European countries, the research teams of most publications are affiliated with institutions from the same country, suggesting that this aim of the AAL program was not fully accomplished. Moreover, despite the strong investment of the European Commission and Member States, it seems that the AAL concept has little expression outside Europe.
Considering the methods, procedures, and instruments being reported (i.e., the third research question), more than half of the studies in this review used only one usability evaluation method, with the inquiry method being the most used. An important aspect of usability evaluation is the use of valid and reliable evaluation instruments (i.e., scales and questionnaires). However, in this review, many studies reported the use of ad hoc instruments. Examples of poor practices from a methodological quality point of view are the use of questionnaires developed purposively for a study without any attempt to assess their validity by an expert panel or against a gold standard, and without specifying the questions and the process of development (e.g., [35,36,38,39,41,49,55,63,73]); extracting some questions from previously validated instruments, thus compromising their validity and reliability (e.g., [53]); or assessing reliability but not validity (e.g., [52,60]). Ensuring validity means ensuring that an instrument assesses what it is supposed to assess, and ensuring reliability means ensuring that the instrument gives consistent results across repeated assessments. Although there might be reasons to develop or adapt a scale/questionnaire, its validity and reliability must be evidenced [77], which was not the case for the questionnaires used in 37% of the studies included in this review (i.e., [35,36,38,39,41,49,52,53,55,60,63,73]). The finding that SUS was the most commonly reported usability scale in the included studies (i.e., [43,45,54,56,58,59,61,64,65,66,67,68,69,70,71,74]) is in line with a previous review on user-centered usability evaluation [78], and it suggests that this is a widely accepted instrument, usually regarded as a gold standard in terms of usability evaluation.
In this review, most studies were conducted either in real environments (e.g., participants' homes or institutional sites such as day care centers or nursing homes) or in conditions that simulate the home environment (i.e., living labs). Only 12 studies were carried out in a laboratory context, but it is not possible to establish an association between the testing environment and the maturity level of the applications (e.g., the solution reported by [32] was in an early development stage and the usability evaluation was carried out in an institutional context), nor between the testing environment and the purposes of the applications being developed. For instance, the seven studies evaluating the usability of social robots were conducted in living labs [34,36,38,57,58], an institutional site [71], and a participant's home [49], while an application to support daily tasks [69] was evaluated in a laboratory context. Moreover, among the five applications aiming to prevent falls [45,46,47,48,66], only one was evaluated in a laboratory context [66], although all these applications had equivalent maturity levels. Similarly, among the four included telerehabilitation applications [44,56,62,63], one application [63] was evaluated in an institutional site, although its maturity level was identical to that of applications with identical purposes evaluated in a laboratory context (i.e., [44,56]). In turn, among the four applications proposing virtual physical training solutions [35,54,61,64], the one evaluated in a research laboratory (i.e., [61]) was in an early development stage when compared with the other three applications.
Thirty-two studies (73% of the included studies) were carried out in participants’ homes, institutional sites, or living labs. This suggests that the studies’ authors were concerned with bringing the evaluation closer to real conditions, even for applications in an early development stage (e.g., [32]).
A limited number of studies present usability evaluation by both users and experts. Combining these two types of evaluation is recommended as good practice, providing a comprehensive and complementary view of potential usability problems [79]. However, this result might be biased due to the focus of the present review on user-centered usability evaluation.
Whether the researcher conducting the usability evaluation received adequate training, or is external to the team that developed the AAL solution, is seldom reported by the studies included in this review. However, this information is of great relevance, as both the inexperience of the researcher and a potential conflict of interest might impact the results of the usability evaluation [80]. Usability evaluation involves close interaction between the researcher and the participants; its methods and procedures are complex and depend on this interaction, and therefore require experience and knowledge to be applied effectively, as well as independence to minimize the potential for unwantedly influencing participants [80].
Considering the first and primary research question (i.e., what is the methodological quality of user-centered usability evaluation of AAL solutions?), the results of this review suggest that careful attention needs to be paid to quality, as a considerable number of studies fail to report on pre-identified quality criteria. Nevertheless, these findings are aligned with those of previous reviews using the same quality scale, the CAUSS [30,78].
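The inter-reviewer agreement values reported in Table 4 correspond to simple percent agreement, i.e., the proportion of included studies on which both reviewers gave the same rating for a given CAUSS item. A minimal sketch of that computation, using hypothetical ratings rather than the review’s actual data:

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of cases in which two reviewers assigned the
    same rating to the same quality item across a set of studies."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and equally long")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical yes/no ratings of one quality item across ten studies
a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "yes", "yes"]
print(f"{percent_agreement(a, b):.0f}%")  # 80%
```

Note that percent agreement does not correct for chance agreement; chance-corrected coefficients such as Cohen’s kappa are stricter alternatives when ratings are imbalanced.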
Scientific knowledge is built on existing knowledge and new research to achieve a deeper understanding of a particular topic. As in any other scientific area, the generation of comprehensive knowledge related to the usability of AAL solutions depends on the methodological quality of the respective research studies. The lack of robust methodological approaches prevents the generalization of research results and the consolidation of the area.
Specifically, the results of this review indicate that the low methodological quality of user-centered usability evaluation of AAL solutions prevents not only the generalization of results to be used and deepened in further studies, but also the translation of the developed solutions into practice. In this respect, it should be pointed out that some AAL solutions might be considered medical devices, which means that their translation into daily use requires certification according to very strict regulatory frameworks that make the high methodological quality of all assessment procedures a mandatory requirement [81]. Moreover, usability evaluation is only one of the assessment steps that must be performed. For instance, once it has been demonstrated that an AAL product or service is usable, experimental studies involving end users are needed to assess the efficacy and efficiency of the proposed solutions. This means that unreliable results in terms of usability evaluation might have consequences for subsequent assessments.
Looking at the results of the AAL European strategy, despite the considerable investment made in creating market-ready products and services for older people, and the significant number of funded projects, only a residual number of solutions reached the market [27,82]. This has multiple causes, including the difficulty in tailoring AAL solutions to individual end users’ needs [82], which demands not only deep knowledge and comprehensive definitions of user requirements, but also robust methodologies to assess the feedback of end users. Furthermore, inefficiencies of the research process that facilitate the propagation of mistakes and promote useless and costly repetition represent huge barriers to the translation of results into innovation [83].
Based on the analysis of the included studies, it is possible to conclude that there is heterogeneity in the methods, procedures, and instruments for user-centered usability evaluation. The validity and reliability of scientific results are essential for their reproducibility and reusability, namely for creating market-ready solutions. Therefore, to facilitate the reproducibility and reusability of research results related to AAL solutions, the openness and transparency of all steps of the research process should be increased. Among other requirements, there is a need to comply with established methodological guidelines, to standardize study designs, and to use reporting checklists that ensure a detailed description of study methods and resulting data [83]. An example of guidelines that can be used to inform both study design and study reporting, in addition to the CAUSS, which was used in the present study as a guide for methodological quality assessment, was suggested by our team in a previous publication [80]. It proposes a guide for designing and reporting a user-centered usability evaluation study and includes aspects such as the characteristics of the person conducting the usability evaluation, the characteristics of the participants assessing the digital solution, the methods and techniques used, and the environment where the usability evaluation takes place. By using these guidelines at the design stage, authors ensure that methodological options are carefully considered in advance, and by using them at the reporting stage, authors improve the reproducibility and comparability of results across studies. Improving the reporting of usability evaluation studies will also facilitate, in the long term, the comparison of the ability of different usability procedures to detect usability problems.
The results of this review should be viewed against some limitations, mainly related to the search strategy. Defining the search keywords was problematic due to the many different terms employed by researchers. Moreover, although the databases that were considered (i.e., Scopus, Web of Science, and IEEE Xplore) are representative of the scientific literature, there are probably similar studies that were not indexed by these databases. Furthermore, restricting the search to English-language publications may have reduced the number of studies considered in the analysis. Finally, grey literature was not considered.
Despite these limitations, in terms of research methods, the authors tried to follow rigorous procedures for study selection and data extraction, so that the results of the evaluation of the methodological quality of user-centered usability evaluation of AAL solutions are relevant and might contribute to the quality of future studies.

5. Conclusions

An overall analysis of the results suggests that there is high heterogeneity among user-centered usability evaluation studies in terms of methods, procedures, and instruments. Furthermore, studies are of low methodological quality, as they fail to consider and report relevant methodological aspects. Therefore, as the main conclusion of this study, future AAL research must pay special attention to the design and reporting of studies on user-centered usability evaluation.
In this respect, guidelines and instruments to assess the quality of studies, such as the CAUSS [30], which was used for the methodological quality assessment of the included studies, should be considered when designing and reporting user-centered usability evaluations of AAL solutions.

Author Contributions

Conceptualization, N.P.R.; writing—original draft preparation, N.P.R.; writing—review and editing, A.I.M., A.G.S., J.P., R.B. and N.P.R.; investigation, A.I.M., A.G.S., J.P., R.B. and N.P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Fundação para a Ciência e a Tecnologia in the scope of the project UID/CEC/00127/2021 and project SH4ALL—Smart Health for All (POCI-01-0247-FEDER-046115), from Compete 2020, Lisboa 2020 and Portugal 2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sixsmith, A.; Sixsmith, J. Ageing in Place in the United Kingdom. Ageing Int. 2008, 32, 219–235. [Google Scholar] [CrossRef]
  2. World Health Organization. Active Ageing: A Policy Framework; WHO: Geneva, Switzerland, 2002. [Google Scholar]
  3. World Health Organization. A Glossary of Terms for Community Health Care and Services for Older Persons; WHO: Geneva, Switzerland, 2004. [Google Scholar]
  4. Glass, T.A.; McAtee, M.J. Behavioral science at the crossroads in public health: Extending horizons, envisioning the future. Soc. Sci. Med. 2006, 62, 1650–1671. [Google Scholar] [CrossRef]
  5. Lindgren, H.; Baskar, J.; Guerrero, E.; Nieves, J.C.; Nilsson, I.; Yan, C. Computer-Supported Assessment for Tailoring Assistive Technology. In Proceedings of the 6th International Conference on Digital Health Conference, Montréal, QC, Canada, 11–13 April 2016; ACM: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
  6. Jaschinski, C.; Allouch, S.B. Listening to the ones who care: Exploring the perceptions of informal caregivers towards ambient assisted living applications. J. Ambient Intell. Humaniz. Comput. 2019, 10, 761–778. [Google Scholar] [CrossRef]
  7. Cook, D.J.; Das, S.K. Pervasive computing at scale: Transforming the state of the art. Pervasive Mob. Comput. 2012, 8, 22–35. [Google Scholar] [CrossRef] [Green Version]
  8. Weiser, M. Hot topics: Ubiquitous computing. IEEE Comput. 1993, 26, 71–72. [Google Scholar] [CrossRef]
  9. Augusto, J. (Ed.) Handbook of Ambient Assisted Living: Technology for Healthcare, Rehabilitation and Well-Being; IOS Press: Amsterdam, The Netherlands, 2012. [Google Scholar]
  10. Costa, R.; Carneiro, D.; Novais, P.; Lima, L. Ambient assisted living. In Proceedings of the 3rd Symposium of Ubiquitous Computing and Ambient Intelligence, Salamanca, Spain, 22–24 October 2008; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
  11. Camarinha-Matos, L.; Vieira, W. Intelligent mobile agents in elderly care. Robot. Auton. Syst. 1999, 27, 59–75. [Google Scholar] [CrossRef]
  12. Díaz-Oreiro, I.; López, G.; Quesada, L.; Guerrero, L.A. UX Evaluation with Standardized Questionnaires in Ubiquitous Computing and Ambient Intelligence: A Systematic Literature Review. Adv. Hum. Comput. Interact. 2021, 5518722. [Google Scholar] [CrossRef]
  13. International Organization for Standardization. ISO 9241 Part 11—Guidance on Usability; International Organization for Standardization: Geneva, Switzerland, 1999. [Google Scholar]
  14. Nielsen, J. Usability Engineering; Morgan Kaufmann: San Francisco, CA, USA, 1994. [Google Scholar]
  15. Bevan, N.; Claridge, N.; Petrie, H. Tenuta: Simplified guidance for usability and accessibility. In Proceedings of the HCI International 2005, Las Vegas, NV, USA, 22–27 July 2005; Lawrence Erlbaum: Mahwah, NJ, USA, 2005. [Google Scholar]
  16. Morrissey, K. A review of ‘universal methods of design: 100 ways to research complex problems, develop innovative ideas, and design effective solutions’. Visit. Stud. 2014, 17, 222–224. [Google Scholar] [CrossRef]
  17. Dix, A.; Finlay, G.; Abowd, G.; Beale, R. Human-Computer Interaction; Prentice Hall: Hoboken, NJ, USA, 2004. [Google Scholar]
  18. Da Costa, R.P.; Canedo, E.D.; de Sousa, R.T.; de Oliveira, R.A.; Villalba, L.J. Set of usability heuristics for quality assessment of mobile applications on smartphones. IEEE Access 2019, 7, 116145–116161. [Google Scholar] [CrossRef]
  19. Bernsen, N.; Dybkjær, L. Multimodal Usability; Springer: London, UK, 2010. [Google Scholar]
  20. Martins, A.; Queirós, A.; Silva, A.; Rocha, N. Usability Evaluation Methods: A Systematic Review. Human Factors. In Software Development and Design; Mahmood, Z., Saeed, S., Eds.; IGI Global: Hershey, PA, USA, 2015; pp. 250–273. [Google Scholar]
  21. Marcos-Pablos, S.; García-Peñalvo, F.J. Technological ecosystems in care and assistance: A systematic literature review. Sensors 2019, 19, 708. [Google Scholar] [CrossRef] [Green Version]
  22. Garcés, L.; Oquendo, F.; Nakagawa, E.Y. Assessment of reference architectures and reference models for ambient assisted living systems: Results of a systematic literature review. Int. J. E-Health Med. Commun. 2020, 11, 17–36. [Google Scholar] [CrossRef]
  23. Sanchez-Comas, A.; Synnes, K.; Hallberg, J. Hardware for recognition of human activities: A review of smart home and AAL related technologies. Sensors 2020, 20, 4227. [Google Scholar] [CrossRef]
  24. Choukou, M.A.; Shortly, T.; Leclerc, N.; Freier, D.; Lessard, G.; Demers, L.; Auger, C. Evaluating the acceptance of ambient assisted living technology (AALT) in rehabilitation: A scoping review. Int. J. Med. Inform. 2021, 104461. [Google Scholar] [CrossRef]
  25. Grossi, G.; Lanzarotti, R.; Napoletano, P.; Noceti, N.; Odone, F. Positive technology for elderly well-being: A review. Pattern Recognit. Lett. 2020, 137, 61–70. [Google Scholar] [CrossRef]
  26. Nilsson, M.Y.; Andersson, S.; Magnusson, L.; Hanson, E. Ambient assisted living technology-mediated interventions for older people and their informal carers in the context of healthy ageing: A scoping review. Health Sci. Rep. 2021, 4, e225. [Google Scholar] [CrossRef]
  27. Puliga, G.; Nasullaev, A.; Bono, F.; Gutiérrez, E.; Strozzi, F. Ambient assisted living and European funds: A bibliometric approach. Inf. Technol. People 2020. [Google Scholar] [CrossRef]
  28. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Group, P. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2009, 8, 336–341. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Xiao, Y.; Watson, M. Guidance on conducting a systematic literature review. J. Plan. Educ. Res. 2019, 39, 93–112. [Google Scholar] [CrossRef]
  30. Silva, A.G.; Simões, P.; Santos, R.; Queirós, A.; Rocha, N.P.; Rodrigues, M. A scale to assess the methodological quality of studies assessing usability of electronic health products and services: Delphi study followed by validity and reliability testing. J. Med. Internet Res. 2019, 21, e14829. [Google Scholar] [CrossRef] [Green Version]
  31. Morandell, M.M.; Hochgatterer, A.; Fagel, S.; Wassertheurer, S. Avatars in assistive homes for the elderly. In Proceedings of the Symposium of the Austrian HCI and Usability Engineering Group, Graz, Austria, 20–21 November 2008; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
  32. Maier, E.; Kempter, G. AAL in the Wild–Lessons Learned. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, San Diego, CA, USA, 19–24 July 2009; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  33. Hazzam, A.; Kohls, N.; Plankensteiner, A.; Becker, U.; Ritter, W.; Maier, E.; Plischke, H.; Sauer, S.; Grigore, O.; Kempter, G. Implementing ambient assisting technologies in elder-care: Results of a pilot study. Synesis J. Sci. Technol. Ethics Policy 2011, 2, G27–G38. [Google Scholar]
  34. Werner, K.; Oberzaucher, J.; Werner, F. Evaluation of human robot interaction factors of a socially assistive robot together with older people. In Proceedings of the 6th International Conference on Complex, Intelligent, and Software Intensive Systems, Palermo, Italy, 4–6 July 2012; IEEE: Piscataway, NJ, USA, 2012. [Google Scholar] [CrossRef]
  35. Bleser, G.; Steffen, D.; Weber, M.; Hendeby, G.; Stricker, D.; Fradet, L.; Marin, F.; Ville, N.; Carré, F. A personalized exercise trainer for the elderly. J. Ambient Intell. Smart Environ. 2013, 5, 547–562. [Google Scholar] [CrossRef] [Green Version]
  36. Cavallo, F.; Limosani, R.; Manzi, A.; Bonaccorsi, M.; Esposito, R.; Di Rocco, M.; Pecora, F.; Teti, G.; Saffiotti, A.; Dario, P. Development of a socially believable multi-robot solution from town to home. Cogn. Comput. 2014, 6, 954–967. [Google Scholar] [CrossRef] [Green Version]
  37. Harjumaa, M.; Idigoras, I.; Isomursu, M.; Garzo, A. Expectations and user experience of a multimodal medicine management system for older users. J. Assist. Technol. 2014, 8, 51–56. [Google Scholar] [CrossRef]
  38. Cavallo, F.; Aquilano, M.; Bonaccorsi, M.; Limosani, R.; Manzi, A.; Carrozza, M.C.; Dario, P. Improving domiciliary robotic services by integrating the ASTRO robot in an AmI infrastructure. In Gearing up and Accelerating Cross-Fertilization between Academic and Industrial Robotics Research in Europe; Röhrbein, F., Veiga, G., Natale, C., Eds.; Springer: Cham, Switzerland, 2014; pp. 267–282. [Google Scholar] [CrossRef] [Green Version]
  39. Blasco, R.; Marco, Á.; Casas, R.; Cirujano, D.; Picking, R. A smart kitchen for ambient assisted living. Sensors 2014, 14, 1629–1653. [Google Scholar] [CrossRef] [PubMed]
  40. Ribeiro, V.S.; Martins, A.I.; Queirós, A.; Silva, A.G.; Rocha, N.P. AAL@ MEO: Interactive Digital-TV to Support Home Care. Stud. Health Technol. Inform. 2015, 217, 1024–1029. [Google Scholar] [CrossRef]
  41. Costa, S.E.; Rodrigues, J.J.; Silva, B.M.; Isento, J.N.; Corchado, J.M. Integration of wearable solutions in aal environments with mobility support. J. Med. Syst. 2015, 39, 184. [Google Scholar] [CrossRef]
  42. Dias, M.S.; Vilar, E.; Sousa, F.; Vasconcelos, A.; Pinto, F.M.; Saldanha, N.; Eloy, S. A living labs approach for usability testing of Ambient Assisted Living technologies. In Proceedings of the 4th International Conference Design, User Experience, and Usability: Design Discourse, Los Angeles, CA, USA, 2–7 August 2015; Springer: Cham, Switzerland, 2015. [Google Scholar]
  43. Sanchez-Morillo, D.; Crespo, M.; Leon, A.; Crespo Foix, L.F. A novel multimodal tool for telemonitoring patients with COPD. Inform. Health Soc. Care 2015, 40, 1–22. [Google Scholar] [CrossRef]
  44. Morán, A.L.; Ramírez-Fernández, C.; Meza-Kubo, V.; Orihuela-Espina, F.; García-Canseco, E.; Grimaldo, A.I.; Sucar, E. On the effect of previous technological experience on the usability of a virtual rehabilitation tool for the physical activation and cognitive stimulation of elders. J. Med. Syst. 2015, 39, 1–11. [Google Scholar] [CrossRef] [PubMed]
  45. Vaziri, D.D.; Aal, K.; Ogonowski, C.; Von Rekowski, T.; Kroll, M.; Marston, H.R.; Poveda, R.; Gschwind, Y.J.; Delbaere, K.; Wieching, R.; et al. Exploring user experience and technology acceptance for a fall prevention system: Results from a randomized clinical trial and a living lab. Eur. Rev. Aging Phys. Act. 2016, 13, 6. [Google Scholar] [CrossRef] [Green Version]
  46. Ogonowski, C.; Aal, K.; Vaziri, D.; Rekowski, T.V.; Randall, D.; Schreiber, D.; Wieching, R.; Wulf, V. ICT-based fall prevention system for older adults: Qualitative results from a long-term field study. ACM Trans. Comput.-Hum. Interact. 2016, 23, 1–33. [Google Scholar] [CrossRef]
  47. Ejupi, A.; Gschwind, Y.J.; Valenzuela, T.; Lord, S.R.; Delbaere, K. A kinect and inertial sensor-based system for the self-assessment of fall risk: A home-based study in older people. Hum.-Comput. Interact. 2016, 31, 261–293. [Google Scholar] [CrossRef]
  48. Pripfl, J.; Körtner, T.; Batko-Klein, D.; Hebesberger, D.; Weninger, M.; Gisinger, C. Social service robots to support independent living. Z. Gerontol. Geriatr. 2016, 49, 282–287. [Google Scholar] [CrossRef]
  49. Amaxilatis, D.; Chatzigiannakis, I.; Mavrommati, I.; Vasileiou, E.; Vitaletti, A. Delivering elder-care environments utilizing TV-channel based mechanisms. J. Ambient Intell. Smart Environ. 2017, 9, 783–798. [Google Scholar] [CrossRef] [Green Version]
  50. Teixeira, A.; Ferreira, F.; Almeida, N.; Silva, S.; Rosa, A.F.; Pereira, J.C.; Vieira, D. Design and development of Medication Assistant: Older adults centred design to go beyond simple medication reminders. Univers. Access Inf. Soc. 2017, 16, 545–560. [Google Scholar] [CrossRef]
  51. Goumopoulos, C.; Papa, I.; Stavrianos, A. Development and evaluation of a mobile application suite for enhancing the social inclusion and well-being of seniors. Informatics 2017, 4, 15. [Google Scholar] [CrossRef] [Green Version]
  52. Fiorini, L.; Esposito, R.; Bonaccorsi, M.; Petrazzuolo, C.; Saponara, F.; Giannantonio, R.; De Petris, G.; Dario, P.; Cavallo, F. Enabling personalised medical support for chronic disease management through a hybrid robot-cloud approach. Auton. Robot. 2017, 41, 1263–1276. [Google Scholar] [CrossRef] [Green Version]
  53. Orso, V.; Spagnolli, A.; Gamberini, L.; Ibanez, F.; Fabregat, M.E. Interactive multimedia content for older adults: The case of SeniorChannel. Multimed. Tools Appl. 2017, 76, 5171–5189. [Google Scholar] [CrossRef]
  54. Konstantinidis, E.I.; Bamparopoulos, G.; Bamidis, P.D. Moving real exergaming engines on the web: The webFitForAll case study in an active and healthy ageing living lab environment. IEEE J. Biomed. Health Inform. 2017, 21, 859–866. [Google Scholar] [CrossRef] [PubMed]
  55. Costa, C.R.; Anido-Rifón, L.E.; Fernández-Iglesias, M.J. An open architecture to support social and health services in a smart TV environment. IEEE J. Biomed. Health Inform. 2017, 21, 549–560. [Google Scholar] [CrossRef]
  56. Pedroli, E.; Greci, L.; Colombo, D.; Serino, S.; Cipresso, P.; Arlati, S.; Mondellini, M.; Boilini, L.; Giussani, V.; Goulene, K.; et al. Characteristics, usability, and users experience of a system combining cognitive and physical therapy in a virtual environment: Positive bike. Sensors 2018, 18, 2343. [Google Scholar] [CrossRef] [Green Version]
  57. Cavallo, F.; Limosani, R.; Fiorini, L.; Esposito, R.; Furferi, R.; Governi, L.; Carfagni, M. Design impact of acceptability and dependability in assisted living robotic applications. Int. J. Interact. Des. Manuf. 2018, 12, 1167–1178. [Google Scholar] [CrossRef]
  58. Di Nuovo, A.; Broz, F.; Wang, N.; Belpaeme, T.; Cangelosi, A.; Jones, R.; Esposito, R.; Cavallo, F.; Dario, P. The multi-modal interface of Robot-Era multi-robot services tailored for the elderly. Intell. Serv. Robot. 2018, 11, 109–126. [Google Scholar] [CrossRef] [Green Version]
  59. Cortellessa, G.; Fracasso, F.; Sorrentino, A.; Orlandini, A.; Bernardi, G.; Coraci, L.; De Benedictis, R.; Cesta, A. ROBIN, a telepresence robot to support older users monitoring and social inclusion: Development and evaluation. Telemed. E-Health 2018, 24, 145–154. [Google Scholar] [CrossRef]
  60. Yilmaz, Ö. An ambient assisted living system for dementia patients. Turk. J. Electr. Eng. Comput. Sci. 2019, 27, 2361–2378. [Google Scholar] [CrossRef]
  61. Chartomatsidis, M.; Goumopoulos, C. A Balance Training Game Tool for Seniors using Microsoft Kinect and 3D Worlds. In Proceedings of the 5th International Conference on Information and Communication Technologies for Ageing Well and e-Health, Heraklion, Greece, 2–4 May 2019; SciTePress: Setúbal, Portugal, 2019. [Google Scholar] [CrossRef]
  62. Couto, F.; de Lurdes Almeida, M.; dos Anjos Dixe, M.; Ribeiro, J.; Braúna, M.; Gomes, N.; Caroço, J.; Monteiro, L.; Martinho, R.; Rijo, R.; et al. Digi&Mind: Development and validation of a multi-domain digital cognitive stimulation program for older adults with cognitive decline. Procedia Comput. Sci. 2019, 164, 732–740. [Google Scholar] [CrossRef]
  63. Palestra, G.; Rebiai, M.; Courtial, E.; Koutsouris, D. Evaluation of a rehabilitation system for the elderly in a day care center. Information 2019, 10, 3. [Google Scholar] [CrossRef] [Green Version]
  64. Rebsamen, S.; Knols, R.H.; Pfister, P.B.; de Bruin, E.D. Exergame-Driven high-Intensity interval training in untrained community dwelling older adults: A Formative one group quasi-experimental feasibility trial. Front. Physiol. 2019, 10, 1019. [Google Scholar] [CrossRef] [Green Version]
  65. Delmastro, F.; Dolciotti, C.; La Rosa, D.; Di Martino, F.; Magrini, M.; Coscetti, S.; Palumbo, F. Experimenting Mobile and e-Health Services with Frail MCI Older People. Information 2019, 10, 253. [Google Scholar] [CrossRef] [Green Version]
  66. Money, A.G.; Atwal, A.; Boyce, E.; Gaber, S.; Windeatt, S.; Alexandrou, K. Falls Sensei: A serious 3D exploration game to enable the detection of extrinsic home fall hazards for older adults. BMC Med. Inform. Decis. Mak. 2019, 19, 85. [Google Scholar] [CrossRef]
  67. Wohlfahrt-Laymann, J.; Hermens, H.; Villalonga, C.; Vollenbroek-Hutten, M.; Banos, O. MobileCogniTracker. J. Ambient Intell. Humaniz. Comput. 2019, 10, 2143–2160. [Google Scholar] [CrossRef] [Green Version]
  68. Adcock, M.; Thalmann, M.; Schättin, A.; Gennaro, F.; de Bruin, E.D. A pilot study of an in-home multicomponent exergame training for older adults: Feasibility, usability and pre-post evaluation. Front. Aging Neurosci. 2019, 11, 304. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Gullà, F.; Menghi, R.; Papetti, A.; Carulli, M.; Bordegoni, M.; Gaggioli, A.; Germani, M. Prototyping adaptive systems in smart environments using virtual reality. Int. J. Interact. Des. Manuf. 2019, 13, 597–616. [Google Scholar] [CrossRef] [Green Version]
  70. Macis, S.; Loi, D.; Ulgheri, A.; Pani, D.; Solinas, G.; La Manna, S.; Cestone, V.; Guerri, D.; Raffo, L. Design and usability assessment of a multi-device SOA-based telecare framework for the elderly. IEEE J. Biomed. Health Inform. 2019, 24, 268–279. [Google Scholar] [CrossRef]
  71. Zlatintsi, A.; Dometios, A.C.; Kardaris, N.; Rodomagoulakis, I.; Koutras, P.; Papageorgiou, X.; Maragos, P.; Tzafestas, C.S.; Vartholomeos, P.; Hauer, K.; et al. I-Support: A robotic platform of an assistive bathing robot for the elderly population. Robot. Auton. Syst. 2020, 126, 103451. [Google Scholar] [CrossRef]
  72. Zhunio, C.S.; Orellana, P.C.; Patiño, A.V. A Memory Game for Elderly People: Development and Evaluation. In Proceedings of the Seventh International Conference on eDemocracy & eGovernment (ICEDEG), Buenos Aires, Argentina, 20–22 April 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  73. Brauner, P.; Ziefle, M. Serious motion-based exercise games for older adults: Evaluation of usability, performance, and pain mitigation. JMIR Serious Games 2020, 8, e14182. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Adcock, M.; Sonder, F.; Schättin, A.; Gennaro, F.; de Bruin, E.D. A usability study of a multicomponent video game-based training for older adults. Eur. Rev. Aging Phys. Act. 2020, 17, 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. European Union. Opinion of the European Economic and Social Committee on the Proposal for a Decision of the European Parliament and of the Council on the Participation by the Community in a Research and Development Programme Aimed at Enhancing the Quality of Life of Older People through the Use of New Information and Communication Technologies (ICT), Undertaken by Several Member States; European Commission: Brussels, Belgium, 2007. [Google Scholar]
  76. Turk, M. Multimodal interaction: A review. Pattern Recognit. Lett. 2014, 36, 189–195. [Google Scholar] [CrossRef]
  77. Inal, Y.; Wake, J.D.; Guribye, F.; Nordgreen, T. Usability evaluations of mobile mental health technologies: Systematic review. J. Med. Internet Res. 2020, 22, e15337. [Google Scholar] [CrossRef]
  78. Almeida, A.F.; Rocha, N.P.; Silva, A.G. Methodological Quality of Manuscripts Reporting on the Usability of Mobile Applications for Pain Assessment and Management: A Systematic Review. Int. J. Environ. Res. Public Health 2020, 17, 785. [Google Scholar] [CrossRef] [Green Version]
  79. Queirós, A.; Silva, A.; Alvarelhão, J.; Rocha, N.P.; Teixeira, A. Usability, accessibility and ambient-assisted living: A systematic literature review. Univers. Access Inf. Soc. 2015, 14, 57–66. [Google Scholar] [CrossRef]
  80. Silva, A.G.; Caravau, H.; Martins, A.; Almeida, A.M.; Silva, T.; Ribeiro, Ó.; Santinha, G.; Rocha, N.P. Procedures of User-Centered Usability Assessment for Digital Solutions: Scoping Review of Reviews Reporting on Digital Solutions Relevant for Older Adults. JMIR Hum. Factors 2021, 8, e22774. [Google Scholar] [CrossRef] [PubMed]
  81. European Commission. Regulation (EU) 2017/745 of the European Parliament and the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC; European Commission: Brussels, Belgium, 2017. [Google Scholar]
  82. Van Grootven, B.; Van Achterberg, T. The European Union’s Ambient and Assisted Living Joint Programme: An evaluation of its impact on population health and well-being. Health Inform. J. 2019, 25, 27–40. [Google Scholar]
  83. Lusoli, W. (Ed.) Reproducibility of Scientific Results in the EU: Scoping Report; Publications Office of the European Union: Brussels, Belgium, 2020. [Google Scholar] [CrossRef]
Figure 1. Systematic Reviews Flowchart.
Figure 2. Studies by year and publication rate (calculated using RMS Least Square Fit).
Figure 3. Distribution of the selected studies by country.
Figure 4. Number of studies that met each item, after consensus was reached between reviewers.
Table 1. Multinational teams.
References | Multinational Teams
[33] | Germany, United States of America, Italy, Austria, and Romania
[35] | Germany, Sweden, and France
[36] | Italy and Sweden
[37] | Finland and Spain
[39] | Spain and United Kingdom
[41] | Portugal and Spain
[47] | Austria, Australia, and Chile
[49] | Greece, Italy, and United Kingdom
[53,56] | Italy and Spain
[58] | United Kingdom, Italy, and Belgium
[63] | France and Greece
[64,68] | Switzerland and Sweden
[67] | Netherlands and Spain
[71] | Greece, Germany, Italy, and United Kingdom
Table 2. Domains and purposes of the AAL solutions reported by the included studies.
Domains | Purposes | References
Secure and supportive environment | Daily living activities | [31,32,33,34,36,38,39,42,49,57,58,59,60,69,71]
 | Falls prevention | [45,46,47,48,66]
Healthcare provision | Home monitoring | [41,43,65,67,70]
 | Telerehabilitation | [44,56,62,63]
 | Remote care | [40,52,55]
 | Medication management | [37,50]
Healthy lifestyles | Physical activity | [35,54,61,64,73]
 | Cognitive activity | [72]
 | Physical and cognitive activity | [68,74]
Social involvement and active participation | Social inclusion | [51]
 | Participation in leisure activities | [53]
Table 3. Interaction modalities and respective terminal equipment.
Interaction | Terminal Equipment | References
Visual interaction | Personal computer | [66]
 | Mobile (i.e., tablet or smartphone) | [60,62,65,72]
 | Mobile and personal computer | [41]
 | Mobile and interactive TV | [54]
 | Interactive TV | [32,33,40,49]
Visual and auditory interaction | Mobile | [37,51]
 | Interactive TV | [53,55,70]
Visual and voice interaction | Mobile | [43]
Visual, voice and auditory interaction | Mobile | [42,50,67]
 | Mobile and interactive TV | [39]
 | Enhanced communication agents | [31,35]
Visual and gesture interaction | Personal computer and WiiMote | [44]
 | Personal computer and Kinect | [61]
 | Interactive TV and wearable inertial sensors | [68,74]
 | Interactive TV, wearable inertial sensors and Kinect | [45,46,47]
 | Interactive TV and Kinect | [73]
 | Interactive TV and position sensors | [64]
 | Personal computer, RGB cameras and depth sensors | [63]
Other interaction modalities | Immersive virtual reality | [56,69]
 | Robots | [34,36,38,48,52,57,58,59,71]
Table 4. Level of agreement between the reviewers for each item of CAUSS.

| Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Agreement | 78% | 78% | 98% | 93% | 87% | 80% | 78% | 83% | 89% | 85% | 91% | 87% | 85% | 96% | 78% |
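The percent agreement reported in Table 4 is the proportion of studies for which both reviewers assigned the same rating to a given CAUSS item. A minimal sketch of this calculation is shown below; the ratings and the `percent_agreement` helper are illustrative assumptions, not the actual CAUSS assessments from the review.

```python
# Illustrative sketch: per-item percent agreement between two reviewers.
# The ratings below are hypothetical, not the review's actual CAUSS data.

def percent_agreement(ratings_a, ratings_b):
    """Share of studies on which both reviewers gave the same rating."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("both reviewers must rate the same studies")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical ratings of one CAUSS item across the 44 included studies
reviewer_1 = ["yes"] * 40 + ["no"] * 4
reviewer_2 = ["yes"] * 38 + ["no"] * 6

print(f"{percent_agreement(reviewer_1, reviewer_2):.0%}")  # → 95%
```

Note that simple percent agreement does not correct for chance agreement; measures such as Cohen's kappa are often reported alongside it for that reason.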
Table 5. Usability assessment design.

| Ref. | Solution Being Evaluated | P 1 | O 2 | T 3 | W 4 | I 5 | S 6 | Q 7 | C 8 | Number | Mean Age (Years) | I 9 | L 10 | P 11 | R 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [31] | Smart companion | - | - | - | x | - | - | - | - | 10 | 82 | x | - | - | - |
| [32] | Adaptive lighting application | - | x | - | - | - | - | - | - | 12 | - 13 | x | - | - | - |
| [33] | Adaptive lighting application | - | x | - | - | - | - | - | - | 12 | 71 | x | - | - | - |
| [34] | Social robot | - | - | x | - | - | - | x | - | 16 | 77 | - | x | - | - |
| [35] | Virtual physical training | - | - | - | - | x | - | x | - | 30 | 69 | x | - | - | - |
| [36] | Social robot | - | x | - | - | - | - | x | - | 25 | 74 | - | x | - | - |
| [37] | Medication management | - | x | - | - | x | - | - | - | 4 | 80 | x | - | - | - |
| [38] | Social robot | - | - | - | - | - | - | x | - | 15 | 74 | - | x | - | - |
| [39] | Smart kitchen | - | x | - | - | - | - | x | - | 63 | - 13 | - | x | - | - |
| [40] | Remote care | - | x | - | - | - | - | - | - | 30 | 58 | x | - | - | - |
| [41] | Home monitoring | - | - | - | - | - | - | x | - | 11 | 70 | - | - | x | - |
| [42] | Smart companion | x | - | - | - | - | x | x | - | 4 | 80 | - | x | - | - |
| [43] | Home monitoring | x | x | x | - | - | x | x | - | 25 | 76 | - | - | x | - |
| [44] | Telerehabilitation | - | x | - | - | - | - | x | - | 32 | 65 | - | - | - | x |
| [45] | Falls prevention | - | x | - | - | x | x | - | - | 153 | 73 | - | x | - | - |
| [46] | Falls prevention | - | - | - | - | x | - | - | - | 12 | 73 | - | - | x | - |
| [47] | Falls prevention | - | - | - | - | x | - | - | - | 62 | 74 | - | - | x | - |
| [48] | Falls prevention | - | - | - | - | x | - | - | - | 14 | - 13 | - | - | x | - |
| [49] | Social robot | - | - | - | - | x | x | x | - | 7 | 79 | - | - | x | - |
| [50] | Medication management | - | x | x | - | - | x | x | - | 10 | 70 | - | - | - | x |
| [51] | Application to promote social inclusion | x | - | - | - | x | - | x | - | 22 | 66 | - | - | x | - |
| [52] | Remote care | - | - | - | - | - | - | x | - | 23 | 73 | - | x | - | - |
| [53] | Leisure activities | x | - | - | - | - | - | x | - | 20 | 70 | - | - | x | - |
| [54] | Virtual physical training | - | - | - | - | - | x | - | x | 14 | 73 | - | x | - | - |
| [55] | Remote care | - | - | - | - | - | - | x | - | 62 | - 13 | - | - | x | - |
| [56] | Telerehabilitation | - | - | - | - | x | x | - | - | 5 | 70 | - | - | - | x |
| [57] | Social robot | - | - | - | - | - | - | x | - | 17 | 75 | - | x | - | - |
| [58] | Social robot | - | - | - | - | - | x | x | - | 82 | 78 | - | x | - | - |
| [59] | Social robot | - | - | - | - | - | x | - | - | 25 | 37 | - | - | - | x |
| [60] | Safety application | - | - | - | - | - | - | x | - | 44 | - 13 | - | x | - | - |
| [61] | Virtual physical training | x | - | - | - | x | x | - | - | 12 | 73 | - | - | - | x |
| [62] | Telerehabilitation | x | x | - | - | - | - | - | - | 4 | 72 | - | - | - | x |
| [63] | Telerehabilitation | - | - | - | - | x | - | x | - | 6 | 80 | x | - | - | - |
| [64] | Virtual physical training | x | - | x | - | - | x | - | - | 12 | 72 | x | - | - | - |
| [65] | Home monitoring | x | - | - | - | - | x | x | - | 10 | 80 | x | - | - | - |
| [66] | Falls prevention application | x | - | x | - | x | x | - | - | 15 | - 13 | - | - | - | x |
| [67] | Home monitoring | - | - | - | - | - | x | x | - | 26 | - 13 | - | - | x | - |
| [68] | Virtual physical and cognitive training | x | x | x | - | - | x | x | - | 21 | 74 | - | - | x | - |
| [69] | Support to daily tasks | - | - | - | - | x | x | - | - | 6 | 60 | - | - | - | x |
| [70] | Home monitoring | - | - | - | - | - | x | - | - | 19 | 73 | - | - | x | - |
| [71] | Robot | x | - | - | - | - | x | - | - | 104 | 74 | x | - | - | - |
| [72] | Cognitive game | - | - | - | - | - | - | x | - | 63 | - 13 | x | - | - | - |
| [73] | Virtual physical training | x | - | - | - | - | - | x | - | 135 | 46 | - | x | - | - |
| [74] | Virtual physical and cognitive training | - | x | x | - | - | x | x | - | 21 | 71 | - | - | - | x |

Notes: x = reported; - = not reported. Test methods: 1 performance; 2 observation; 3 think aloud; 4 Wizard of Oz. Inquiry methods: 5 interviews; 6 scales; 7 questionnaires; 8 card sorting. Test environments: 9 institutional site; 10 living lab; 11 participant's home; 12 research lab. 13 Mean age of the participants not reported.
Table 6. Usability evaluation methods.

| Methods | Studies |
| --- | --- |
| Exclusively test methods | [31,32,33,40,62] |
| Exclusively inquiry methods | [35,38,41,47,48,49,52,54,55,56,57,58,59,60,63,67,69,70,72] |
| Multimethod (test and inquiry methods) | [34,36,37,39,42,43,44,45,46,50,51,53,61,64,65,66,68,71,73,74] |
Table 7. Usability evaluation instruments.

| Nature of the Instruments | Studies |
| --- | --- |
| Validated scales and questionnaires | [34,42,43,45,50,54,56,58,59,61,64,65,66,67,68,69,70,71,74] |
| Ad-hoc scales and questionnaires | [35,36,38,39,41,49,52,53,55,60,63,73] |
| Scales and questionnaires based on technology acceptance models | [44,51,57,72] |
Table 8. Environments where the usability evaluations were conducted.

| Test Environment | Studies |
| --- | --- |
| Participant's home | [41,43,46,47,48,49,51,53,55,64,67,68,70] |
| Institutional site, such as a day care center or nursing home | [31,32,33,35,37,40,63,65,71,72] |
| Living lab | [34,36,38,39,42,45,52,54,57,58,60,73] |
| Research lab | [44,50,56,59,61,62,66,69,74] |
Bastardo, R.; Martins, A.I.; Pavão, J.; Silva, A.G.; Rocha, N.P. Methodological Quality of User-Centered Usability Evaluation of Ambient Assisted Living Solutions: A Systematic Literature Review. Int. J. Environ. Res. Public Health 2021, 18, 11507. https://0-doi-org.brum.beds.ac.uk/10.3390/ijerph182111507