Article

Differential Facial Articulacy in Robots and Humans Elicit Different Levels of Responsiveness, Empathy, and Projected Feelings

1 Department of Communication Science, Media Psychology Program, Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
2 Department of Computing and School of Design, The Hong Kong Polytechnic University, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
Submission received: 7 October 2020 / Revised: 6 November 2020 / Accepted: 10 November 2020 / Published: 13 November 2020
(This article belongs to the Section Medical Robotics and Service Robotics)

Abstract

Life-like humanoid robots are on the rise, aiming at communicative purposes that resemble humanlike conversation. In human social interaction, facial expression serves important communicative functions. We examined whether a robot’s face is similarly important in human-robot communication. Based on emotion research and neuropsychological insights on the parallel processing of emotions, we argue that greater plasticity in the robot’s face elicits higher affective responsivity, more closely resembling human-to-human responsiveness than a more static face. We conducted a between-subjects experiment of 3 (facial plasticity: human vs. facially flexible robot vs. facially static robot) × 2 (treatment: affectionate vs. maltreated). Participants (N = 265; M_age = 31.5) were measured for their emotional responsiveness, empathy, and attribution of feelings to the robot. Results showed less intense empathic and emotional responsivity toward the robots than toward the human, yet following similar patterns. Significantly different intensities of feelings and attributions (e.g., pain upon maltreatment) followed facial articulacy. Theoretical implications for underlying processes in human-robot communication are discussed. We theorize that precedence of emotion and affect over cognitive reflection, which are processed in parallel, triggers the experience of ‘because I feel, I believe it’s real,’ despite being aware of communicating with a robot. By evoking emotional responsiveness, the cognitive awareness of ‘it is just a robot’ fades into the background and appears not relevant anymore.

1. Introduction

“When dealing with people, remember you are not dealing with creatures of logic, but creatures of emotion.” Dale Carnegie (2012, p. 16), Emotional Intelligence.
Robots are being introduced in our society at a rapid pace, not only as industrial machines or housekeeping support such as lawn mowers and robotic vacuums but increasingly as humanlike and embodied ‘social entities’ that are built for communicative and social purposes. Various sources predict that robots “will permeate wide segments of daily life by 2025” [1], and 2016 was claimed as a pivotal year for the proliferation of humanoid social robots [2]. Particularly in view of aging societies [3,4] and shrinking financial and human resources to sustain sufficient levels of social welfare and healthcare [5], an increase in the need for supportive social robots is foreseen [6,7]. In other areas such as education and public service, social robots are on the rise as well [8,9,10]. Robots capable of exhibiting natural-appearing social qualities are particularly promising because human-to-human communication is the most intuitive and natural way of interaction. To investigate the social nature of social robots as driven by their communicative function, insights from mediated communication and media psychology research are pertinent [9,11].
Researchers argue that creating a feeling of relating to the robot in the way we normally do in face-to-face communication would facilitate interaction with communicative humanoid robots. For example, when users attributed ‘a mind’ to the robot, it was more easily accepted [7,12], and affective involvement or empathy is likewise considered key in social robotics research [13,14]. Previous work on computer-mediated communication showed that people tend to treat machine-mediated interaction as human-based. As argued in the empirically supported paradigms of Computers Are Social Actors (CASA) [15,16] and the Media Equation [17,18], people apply human schemata of communication to computers and other media. Studies in human-robot interaction often refer to these frameworks [19]. Building on the general principle that human schemata of communication are applied to mediated interaction, the current study goes further by detailing how facial articulacy and affective processes matter for exploiting the full potential of human-robot communication.
In the following, we outline the theoretical arguments that underpin our hypotheses. Then, we explain the methodology and experimental design to test those hypotheses, assuming differences in users’ responses to variations in facial articulacy of two humanoid robots and a human face. Results are then discussed in view of our theoretical framework and contextualized for their implications.

1.1. Theoretical Background

Various studies have shown that, in human social interaction, the face is a most powerful communication channel [20,21]. Facial expressivity serves an important function in communication as it enables the interaction partner to quickly infer information about the sender [22,23]. Through facial expressions, affective and emotional states are communicated, which has been studied extensively in interpersonal communication in psychology as well as with virtual characters or avatars [21,24,25,26]. For example, when a sender’s smile was morphed onto his or her avatar so that the avatar smiled more broadly than the sender originally did, receivers described the interaction experience with more positive affect and more social presence than in the ‘normal’ condition [27]. A recent review of computer-mediated communication (CMC) research on avatars asserts that an avatar’s qualities and features derive from behavioral realism, such as appropriate facial expressions, among others [28]. Combined, these results suggest that communicating affective and emotional states through facial expressions improves human-machine interaction.
Like Broadbent [9], we believe that affective and emotional responsiveness is quintessential to human-robot communication, for example, by providing social robots with ‘a life of their own’ and by making them emotionally ‘expressive’ [12,29]. Humanlike perceptual capacities would support humanlike communication, such as providing robots with functionalities to produce and recognize facial expressions, gestures, sounds, and words, and to interpret and respond to these as in social communication [14,30]. Today, many social robots are equipped with such capacities and are increasingly optimized for social interaction, although speech technologies and autonomous conversations remain a challenge [31,32]. Humanlike robots such as Nadine [33], Jia Jia [34], Erica [35], and Sophia [36] compete in their facial articulacy. They illustrate how designers strive to approach the feel of human-human communication, in which facial resemblance plays an essential role, when developing life-like humanoid social robots. A study comparing facial expressions of robot heads with human facial expressions found that the intended expressions in robot faces were adequately recognized [37]. To create the feel of humanlike communication, the face plays a fundamental role and communicates the emotional or affective state of the interlocutor. However, systematic studies testing such assumptions through users’ affective responsiveness are scarce in current robotics research.
Facial expressions are a primary means of conveying social information and observing another’s facial expression is essential in evoking an emotional and empathetic response [38,39,40,41]. Because empathy underlies adequate social communication [42,43,44], it may help if a robot can raise empathy in its human users. Previous research investigated whether humans feel empathy for social robots in similar ways as they do for human beings when in pain [45,46]. These studies, however, did not include facial expressions of humanoid robots. Rosenthal-von der Pütten’s study [45] compared a human dressed in black and filmed from the back with Pleo, a baby toy-animal robot. Suzuki’s study [46] compared cutting a human finger versus a robot finger with a knife. Therefore, the current study uniquely focused on the face and compared two types of humanoid robots that differed in level of facial articulacy with a human actor. Both the robots and human were filmed from the front, torsos only (see Method).
The question of users’ affective involvement is considered a central issue in social robotics research [13,14]. If robots have a detailed, humanlike face, this may help to transfer emotional responsiveness and empathy. Generally, researchers assume that emotional responsiveness contributes to humanness; lacking it, even an actual human being will be considered ‘less human,’ ‘not humane,’ and ‘robotic’ [47]. However, uncertainty exists about the extent to which a robot should look like a human. On the one hand, research indicates that emotionally expressive and understanding robots are considered more trustworthy and warm [48], and facilitate social interaction [30]. On the other hand, research found that in certain emotionally sensitive cases, users may prefer the robot [49]. Proponents of the uncanny valley [41,50] even suggest that when a robot approaches but does not achieve human-like perfection, the earlier built-up familiarity with and affinity toward the robot is negatively affected [51,52]. However, robot design studies highlight the impact of mechanical limitations in a robot’s face [53]; for example, the authors of [54] showed that a robot was incapable of reproducing the emotions of ‘fear’ and ‘disgust’ due to a lack of actuators in the face and the thickness of the silicone skin.
Whether robots are designed as more humanlike or mechanical, we argue that when a humanoid robot evokes emotional responsiveness in the user, the cognitive notions of ‘it is just a robot’ will become less salient. This can be explained from the way affect-based information is processed in the human brain. In line with current neuropsychological theorizing [55,56,57], we propose a dynamically intertwined parallel processing model [58,59] for interaction with social robots. Hence, elaborated in the following, we argue that robots capable of eliciting affective responsiveness are particularly promising in being ‘taken for real’, i.e., as a synthetic communication partner that feels like a human conversation partner.
The brain processes emotional or affective information through complementary routes, indicated as lower and higher pathways referring to the location in the brain areas [60,61,62]. The lower pathway reflects the fast processing of feelings and emotions associated with affective engagement, sensations, and arousing information, for example, in detecting a threat. It instigates an instantaneous response. This is often considered intuitive, automatic, or subconscious. The higher pathway is relatively slower and more reflective, reappraising the incoming information, regulating emotions as well as reflecting on ongoing processes in the lower pathway. Although the lower and higher pathways are debated in the literature, neuroscientists do acknowledge the relatively slower and faster processes that dynamically interact as one system [55,63]. Herein, the architecture for emotions is more about “process” and less about structure [57].
We interpret such parallel processing as occurring through coordinated networks of rapid support signals from lower-order thalamic inputs that coalesce with more sophisticated processing systems of the cortex [59]. This may include the processing of prediction errors, yet both types of input are ultimately processed by cortical circuits in the cerebral cortex [50,51,52,53,54,55,56,57,58,59,60,61,62,63]. Consequently, the higher and lower pathways should merely be seen as metaphors for the relatively slower and faster intertwined neural processing.
For example, an initial threat-avoidant response may be reappraised in realizing that the apparent threat was not real (e.g., when the initially felt shock was evoked by a horror movie). Such parallel and dynamically intertwined processing can explain why people may initially feel a fictional, virtual, or robotic encounter as real, or take information from media as real, because ‘it just feels real’ [59,64]. Several neural structures are (unconsciously) active during emotional face processing, even if that face is artificial [65]. Depending on the strength of processing the information via the more effortful cognitive processes (‘higher pathway’), an individual may reappraise the interaction as ‘non-human’ in hindsight. The emotional response seems to blur the borders between ‘fact and fake.’ The instantaneous response based on emotional or accompanying sensory feedback apparently takes (momentary) control precedence over cognitive reflection and biases subsequent information processing [58,59]. People may differ highly in this respect, which might depend on how they process affective or emotion-arousing content.
Applied to interacting with a social robot that elicits emotional responsiveness, the knowledge that one is interacting with a humanoid robot rather than a real human being may withdraw to the background, rendering the interaction more (psychologically) real [66,67,68]. A humanlike social robot may then more easily display natural-appearing social qualities. Thus, if a robot’s human resemblance already instigates processing this ‘artificial other’ as if it were a real human being at the intuitive level (i.e., ‘lower pathway’), this will facilitate social interaction. ‘Forgetting’ that it is just a robot may then raise empathy in the observer when the robot is maltreated or lead the observer to project feelings or attribute emotions to the robot when ‘it is in pain’ [69]. Obviously, a robot cannot feel pain; yet this knowledge momentarily disappears into the background and is overruled by the emotional or affective responsiveness that takes control precedence [59]. Greater facial plasticity in social robots may therefore help to instigate such instantaneous emotional responsivity, in particular because the face is essential in human-human communication, as argued above.

1.2. The Current Study and Hypotheses

The current study focused on how users respond to humanoid robots and extends prior work by investigating emotional responsiveness to different levels of a robot’s facial articulacy as compared to a human’s face. Based on the above review of the literature, we argue that greater plasticity in the robot’s face will instigate affective responses that resemble human-human communication. However, such look-alike responsiveness may differ in intensity according to the robot’s facial articulacy. The following hypotheses guide our research:
Hypothesis 1.
Greater plasticity in the robot’s face elicits responses that more closely resemble human-to-human responsiveness, resulting in higher levels of emotional responsiveness, corresponding to the portrayed emotion.
Hypothesis 2.
This holds in particular when the social entity (human/humanoid robot) is ‘in pain’ or maltreated, resulting in higher levels of empathy in the observer.
Hypothesis 3.
People maintain human schemata in attributing emotions to humanoid robots in a similar way as they do toward humans, in particular upon maltreatment, resulting in intensities corresponding to the facial articulacy.
These hypotheses were tested in an experimental design for differences in responses to variations in detailed facial articulacy of two humanoid robots and a human face. Human and robots were treated either in a harmful or in a friendly way to provide the assumed emotional context to the observer.

2. Materials and Method

2.1. Participants and Design

Participants were adults (N = 265; M_age = 31.5; SD = 12.7; 47% male) who were recruited voluntarily in waiting rooms and the library and completed the study on the research assistant’s tablet. Furthermore, a link was posted on various online platforms to reach a wide sample. The study was approved by the Institutional Review Board. Participants provided informed consent upon starting the questionnaire. About one third (33.3%) of participants had followed vocational training or professional education and 22.8% had completed or were following university-level education. Participants were randomly assigned to conditions. The three-level experimental factor ‘social entity’ contrasted the two humanoid robots and the human. Each of them was presented either in various conditions of maltreatment, accompanied by facial expressions of ‘being in pain,’ or being cuddled and treated affectionately. Thus, the design was a 3 (facial articulacy: human vs. facially flexible robot vs. facially static robot) by 2 (treatment: affectionate vs. maltreated) between-subjects design. The dependent variables were levels of observers’ emotional responsiveness, empathy, and emotion attribution (i.e., projection of emotions onto the social entity).

2.2. Materials

Facial articulacy was varied by a human actress versus two different humanoid robots: a 60 cm tall Robokind “Alice” with a flexible face (designed with Hanson’s Frubber™) versus the facially static “Nao,” 57 cm tall (SoftBank; in the study named “Zora,” i.e., female) (Figure 1). The emotional expressions were created by treating each of them in two different ways: a friendly way (e.g., being caressed, hugged, massaged) versus maltreatment (e.g., strangulation, suffocation, being slapped). Each treatment consisted of six actions, creating 12 variations of emotional expressivity (6 × pleasant vs. 6 × expressions of ‘in pain’), the same for each of the three social entities. To secure identical stimulus presentations, treatments were recorded, resulting in six video clips, one for each condition. Each clip briefly showed the six faces within one condition and lasted 1 min (images for conditions are provided in Appendix A).

2.3. Measures

All dependent measures were 5-point Likert type scales (1 = not (agree); 5 = (agree) a lot) to allow for reporting various intensities of emotional responsiveness and comparable answers for each item.
Emotional responsiveness was measured by asking participants how they felt watching the clip. Participants responded through the positive and negative affect scale (PANAS) [70], consisting of 20 items: 10 negative feelings (e.g., scared, afraid, upset, distressed, guilty) and 10 positive feelings (e.g., enthusiastic, interested, alert, strong, excited). Cronbach’s α = 0.88.
Empathy (i.e., state empathy) was measured through combining items from Rosenthal-von der Pütten et al. [45] and several items added from other scales to create a wider coverage of the concept (all items in Appendix B). In all, these were 14 items (e.g., compassion, sympathy, feeling with, feeling sorry). A principal component analysis showed two underlying dimensions that were highly correlated and therefore treated as one scale for the current study (Appendix B). After reverse-coding the contra-indicative items (negatively loading in Appendix B), Cronbach’s α = 0.83.
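The two-component structure referred to here is detailed in Appendix B (Table A1). The article does not specify the analysis software used; purely as an illustration, the following minimal Python sketch shows how unrotated principal component loadings over the 14 items could be obtained. The DataFrame empathy_items and its column names are hypothetical, and the original analysis may have used a rotated solution, which this sketch does not reproduce.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def pca_loadings(items: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
    """Unrotated principal component loadings for a set of Likert items."""
    pca = PCA(n_components=n_components).fit(items - items.mean())
    # Loadings = eigenvectors scaled by the square root of their eigenvalues.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    return pd.DataFrame(loadings, index=items.columns,
                        columns=[f"Component {i + 1}" for i in range(n_components)])

# empathy_items: hypothetical DataFrame with one column per empathy item (1-5 scores)
# print(pca_loadings(empathy_items))
```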
Attribution of feelings was measured with six items [cf. 44,45]: “X was in pain”; “X felt pleasure” (R); “X felt relaxed” (R); “X felt afraid”; “X suffered”; “X felt sad.” A higher score indicated stronger projection of negative feelings onto the social entity. Cronbach’s α = 0.72.
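Items marked (R) are contra-indicative and were reverse-coded before scale scores and reliabilities were computed. As an illustration only (not the authors’ actual code), a minimal Python sketch of that scoring step is given below; the DataFrame and column names are hypothetical.

```python
import pandas as pd

def reverse_code(items: pd.DataFrame, reversed_cols, low: int = 1, high: int = 5) -> pd.DataFrame:
    """Flip contra-indicative Likert items so that all items point in the same direction."""
    out = items.copy()
    out[reversed_cols] = (low + high) - out[reversed_cols]
    return out

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical usage for the attribution-of-feelings scale (alpha reported as 0.72):
# attribution = reverse_code(df[attr_cols], reversed_cols=["felt_pleasure", "felt_relaxed"])
# print(cronbach_alpha(attribution))
```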
Demographic measures were age, gender, level of education.

3. Results

Results of a two-way MANOVA with ‘facial articulacy’ (3 levels) and ‘treatment’ (2 levels) as independent factors and emotional responsiveness (positive and negative scales), empathy, and attribution of emotions as dependent variables showed main effects for both factors, Wilks’ Λ = 0.79, F(2, 258) = 33.57, p < 0.001; η_p² = 0.21. Main effects occurred both for type of treatment, F(3, 257) = 130.32, p < 0.001; η_p² = 0.603, and for type of facial articulacy, F(6, 514) = 4.69, p < 0.001; η_p² = 0.052. A significant interaction effect was also found, F(6, 514) = 5.72, p < 0.001; η_p² = 0.063.
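The article does not state which statistical package produced these tests. For orientation only, the sketch below shows how a comparable two-way MANOVA could be specified in Python with statsmodels; the DataFrame df and the column names (entity, treatment, pos_affect, neg_affect, empathy, attribution) are assumptions for illustration, not the authors’ actual variable names.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# df: one row per participant, assumed to be loaded beforehand, with
#   entity    in {"human", "alice", "nao"}
#   treatment in {"affectionate", "maltreated"}
# plus the four dependent scores as numeric columns.
manova = MANOVA.from_formula(
    "pos_affect + neg_affect + empathy + attribution ~ C(entity) * C(treatment)",
    data=df,
)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each effect
```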
Posthoc analyses of emotional responsiveness revealed that the participants reported more positive feelings (for themselves) after the pleasant treatment (M = 25.53, SD = 8.16) than after the maltreatment (M = 19.20, SD = 6.75), F(1, 259) = 22.32, p < 0.001; η_p² = 0.08, which is in accordance with expectations. These differences were found for both robots, Alice (M_Δ = 5.39, p = 0.001) and Nao/Zora (M_Δ = 4.42, p = 0.005), and for the human (M_Δ = 3.2, p = 0.05). Participants felt more negative emotions after the maltreatment (M = 19.92, SD = 8.02) than after the friendly treatment (M = 15.19, SD = 6.33), F(1, 259) = 28.12, p < 0.001; η_p² = 0.10. These differences were significant for Alice and the human, but for Nao/Zora, this effect was only marginally significant, F(1, 84) = 3.35, p = 0.071; η_p² = 0.038. Figure 2 highlights the most relevant differences. Table 1 reports the details of these results.
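The per-entity contrasts above amount to simple-effects tests of treatment within each social entity. One way to run such follow-up tests, again only as an illustrative sketch using the hypothetical column names introduced above, is:

```python
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Effect of treatment on negative affect, tested separately within each entity
for entity, sub in df.groupby("entity"):
    model = ols("neg_affect ~ C(treatment)", data=sub).fit()
    print(entity)
    print(anova_lm(model, typ=2))  # F test for treatment within this subgroup
```

The degrees of freedom reported above for Nao/Zora, F(1, 84) with n = 86, are consistent with this kind of within-group test.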
Thus, for either a human or a robot, participants’ emotional responsiveness was in accordance with how the social entity was treated, with maltreatment raising more negative feelings and a friendly treatment more positive emotional responsiveness. The results support our hypothesis 1 in that greater plasticity in the robot’s face (Alice) elicited responses that more closely resembled human-to-human responsiveness, resulting in higher levels of emotional responsiveness, corresponding to the portrayed emotion.

3.1. Empathy

Posthoc analyses showed a significant main effect for the factor facial articulacy, F(2, 259) = 8.45, p < 0.001, η_p² = 0.06, with the participants’ empathetic responses being highest for the human (M = 3.24, SD = 0.90), followed by empathy for Alice (M = 2.93, SD = 0.71) and for Nao/Zora (M = 2.86, SD = 0.76), regardless of the treatment condition. The type of treatment also differed significantly for the participants’ empathetic responses, F(1, 259) = 110.56, p < 0.001; η_p² = 0.30, with higher levels of empathy for the maltreatment condition (M = 3.44, SD = 0.81) than for the pleasant treatment (M = 2.56, SD = 0.54), regardless of social entity. A significant interaction effect also was found, F(2, 259) = 9.22, p < 0.001; η_p² = 0.07, indicating that a harmful treatment raised most empathy compared to a friendly treatment, more so for the human but also for each of the robots (higher than the midpoint of the scale). However, the level of empathy was significantly higher for the human compared to each of the robots (i.e., M_Δ Alice = 0.69, p < 0.001; M_Δ Nao = 0.72, p < 0.001). See Figure 3.
Thus, the results partly support our hypothesis 2 in that greater plasticity in the face (i.e., the human) upon maltreatment (i.e., expressivity upon being strangled, suffocated, etc.) elicited the highest level of empathy in the observer. The empathetic responses toward the robots, in particular upon maltreatment, resembled human-to-human responsiveness in that empathy for the robots was felt at moderate intensity (beyond the midpoint of the scale). However, the observers’ level of empathy did not significantly differ between the two robots (when analyzed together with the human).

3.2. Attribution of Feelings

Posthoc analyses showed a significant main effect of type of treatment on attribution of feelings to the social entity, F(1, 259) = 236.69, p < 0.001; η_p² = 0.60, indicating that the participants attributed significantly more negative feelings to the social entity after the harmful treatment than after the friendly treatment. The main effect for social entity, however, was not significant, but the interaction was significant, F(2, 259) = 14.41, p < 0.001; η_p² = 0.10. Hence, participants’ attribution of feelings differed for the human and robots depending on the type of treatment: Attribution of feelings in the maltreatment condition differed significantly between the human and the robots (see Figure 4). The human being was considered to be ‘in pain’ and ‘unhappy’ the most (M = 4.50, SD = 0.12, p = 0.01), followed by feelings attributed to Alice, which were more unhappy and ‘in pain’ than feelings attributed to Nao/Zora (i.e., Alice: M = 3.99, SD = 0.12; Nao: M = 3.67, SD = 0.12; p = 0.049). Thus, for the maltreatment condition, results support hypothesis 3: Participants projected feelings onto robots as they do onto a human, in accordance with the treatment and in higher intensities to the entity with higher facial articulacy. Particularly during maltreatment, the levels of attributed emotions followed the level of facial expressiveness: highest for the human, then Alice, and least so for Nao/Zora.

4. Discussion and Conclusions

The aim of the current study was to investigate emotional responsiveness to different levels of a robot’s facial articulacy as compared to a human’s face. We created variations in detailed facial articulacy of two humanoid robots and a human actress, who were treated either in a harmful or affectionate way. We expected that observers’ emotional responsiveness, in terms of feeling for, empathizing with, and attributing feelings to a humanoid robot, would generally resemble human-human communication. Furthermore, we expected that more detailed facial articulacy would correspondingly affect participants’ emotional responsiveness.
Results showed that participants did respond emotionally and showed empathy toward both robots and also endowed them with emotions, yet less intensely than they did toward the human. Furthermore, responses to the robots were in accordance with general expectations for humans who are maltreated and ‘in pain,’ raising in the observer more intense levels of negative emotions, empathy, and ‘feeling sorry’ than a positive and friendly treatment. Results further indicated stronger responsiveness toward more detailed articulacy in the robot’s face (i.e., robot Alice) for some types of emotional responsiveness, in particular when the robot was maltreated. Particularly when the robot was maltreated, people attributed ‘feeling pain’ to it, with intensities following its facial articulacy. Thus, people do respond affectively to humans and robots alike, particularly so when maltreated, and less so if the robot has limited facial expressions (i.e., more so if they are more like us). This is a first experimental study that directly compared facial articulacy of humanoid robots to a human in affecting users’ emotional responsiveness, showing that detailed facial articulacy does make a difference.
Our findings support and further extend previous work. The finding that people do respond affectively to humans and robots supports the idea of CASA and Media Equation that people apply similar schemata in computer-mediated communication, here human-robot communication, as in human-human-communication [15,17,68]. We found this specifically for users’ emotional responses to humanoid robots. We also saw that this only goes so far: Humans evoke more affective responses than robots do and less expressive robots exert less intense responses.
In line with previous research, our findings show that people empathize with robots in similar ways as they do with humans. Yet, although empathy toward the robots was beyond the midpoint of the scale, which is remarkable in itself, it was at a lower level than for the human actress. These results are in line with the neuropsychological findings of Suzuki et al. [46], showing that participants responded empathetically to humans as well as to robots whose fingers were cut, yet the intensity was lower for the robot. Therefore, we do not conclude that empathy and emotional responsiveness are the same for human and robot, although both were felt sorry for.
This is different from the Media Equation on which the study of Rosenthal-von der Pütten et al. [45] was based. Finding no differences between the human and the robot condition, they concluded that responses were the same. However, similar responsiveness and “comparable activation patterns” [45] cannot be deduced from a lack of effect [71], which moreover may be due to a lack of overall power. Furthermore, the robot in [45] was not a humanoid but represented an animal (Pleo), and the video clips in [45], as in Suzuki et al. [46], did not show suffering faces. This may also explain why no significant differences in empathy were found in those studies.
Whereas Suzuki et al. [46] found lower intensities for affective responses to robots, our study shows that higher facial plasticity positively modulates that effect. In line with our theoretical arguments in the Introduction of this paper, the levels of attributed emotions followed the level of facial expressiveness: highest for the human, then for facially articulate Alice, and least so for facially static Nao/Zora. Our study shows that in human-robot communication, a robot’s facial articulacy apparently enables the human partner to infer information about the sender [22,23], even if that sender is artificial. This is in accordance with human-human communication [38,39,41]. By attributing feelings onto the robots in accordance with their facial articulacy, the observers apparently inferred its affective and emotional states. Such affective processes, like empathy, support adequate social communication [42,43,44] and facilitate social interaction [30,48], also with a robot. In [12] as well, a robot with a more humanlike face display was perceived to have more mind and a better personality.
We also obtained indirect evidence for our notion of parallel processing of human-robot communication based on the neuropsychological processing of affective and emotional information, outlined in the Introduction. In communication, facial expressiveness serves to infer emotional states, among other things. Even if a face is artificial, several neural structures are active during processing of the ‘emotional’ face [65]. As said, the brain processes emotional, sensory, and affective information through coordinated networks of rapid support signals from lower-order thalamic inputs that coalesce with more sophisticated processing in the cerebral cortex [55,56,57,58,59]. These reflect relatively faster and slower processes, respectively, which dynamically interact as one system [55,59,63]. Hence, we argue that robots eliciting affective responsiveness are easily ‘taken for real,’ that is, as a synthetic communication partner that feels like a human conversation partner.
By evoking emotional responsiveness, the cognitive reflections of ‘it is just a robot’ become less salient. This is a key notion in our dynamically intertwined parallel processing theory of media to explain the natural feel of communicating with humanlike robots, based on current neuropsychological evidence [59]. In attributing feelings to the robots, such as pain and sadness when maltreated, the observer still may be aware that s/he observes a robot, knowing that a robot does not feel pain. However, this awareness is backgrounded because emotional responsiveness may take control precedence over ongoing higher order processes. The emotional response seems to blur the borders between ‘fact and fake.’ Thus, a robot’s greater facial plasticity seems to instigate an instantaneous emotional responsivity, in particular when treated harmfully. In hindsight, the user may reappraise the interaction as ‘non-human’ and report on the robotic nature. Future research should further test this theorizing, for example, through neuropsychological research. Individual differences in how people process affective or emotion-arousing encounters with robots then also could be studied.
As a limitation, we should mention that a video study may produce effects that differ from actual human-robot interaction. However, in conducting this type of research, it would be very hard to compare a human confederate with a robot in actual presence. The robot can repeat precisely the same act over and over again, whereas a human would hardly be capable of delivering the same performance across participants. This is even harder if that confederate has to act in different ways (e.g., acting happy vs. sad). Therefore, Woods, Walters, Koay, and Dautenhahn [72] recommend the use of video clips in cases where interaction is low, such as ours. Videotaped materials can be repeated multiple times without changing the contents and have been successfully applied in various human-robot studies [45,49]. Nevertheless, future research is required to further unravel the differences between video and actual interaction with a robot.
One could also counter that in general, Alice resembled the human actress relatively more than Nao/Zora, which could have influenced the results. Nonetheless, we did find that responses to the more humanoid robot more closely resembled responses to the human, at higher levels of emotional responsiveness, and in line with the portrayed emotion. It is most likely that this was caused by the greater plasticity of Alice’s face, probably precisely because her face looked more human.
For communicative purposes, it may be helpful that users can relate to a robot in ways we normally do in face-to-face communication. Affective involvement is considered central in social robotics research [13,14] and research showed that robots were more easily accepted when the user attributed ‘a mind’ to the robot [7,73]. These results bring about important knowledge for understanding human-robot communication as well as for the design and implementation of social robots in various walks of life. This also draws attention to how the humanoid faces are made and the importance of the materials and actuators [53]. In the not-too-distant future, social robots may be colleagues at work: Social robots are successful at autism therapy [74], they receive guests in the waiting area, help at hotel desks, work as museum guides, and entertain in theme parks [9]. They accompany senior citizens in elder homes [75], help them to live longer independently [76], and serve as tutors at schools [8].
Depending on the different social roles that a robot may take and different tasks, the robot may need more or less facial plasticity and human likeness to communicate effectively. For example, more facial articulacy may support the feel of companionship for lonely people [66] whereas in acquiring school tasks, too much expressiveness of the robot might be distracting [10,77]. Furthermore, a robot that needs help appears more acceptable than a robot that takes control [76]. Additionally, in certain situations a less expressive robot may be preferred in a clear-cut information-transmission task than an all too expressive human [49]. Therefore, different types of tasks, roles and functions for which the robot is designed may require different communicative features of the robot partner. Interestingly, oftentimes we may require such flexibility in human colleagues, yet the promising feature of humanoid robots is that we can shape their communicative skills in conjunction with the task we design them for.
In conclusion, the current study proposed a theoretical framework that contextualizes and explains the observers’ emotional responsiveness to humanoid robots. The results supported and extended previous research in showing that facial articulacy of robots elicits emotional responsiveness in the observer in similar (but not same) ways as in human-human-communication. The intensity of such emotional responsiveness is in accordance with the level of articulacy of the robot’s face, in particular when maltreated. When robots are designed for communicative purposes, for example, in healthcare, education, or professional service, the robot may have a face that shows emotionality, enabling users to affectively relate to the robot to a level appropriate to the task or goal. If the aim is to stimulate social interaction, robot’s facial expressiveness may facilitate the intuitive processing of the ‘artificial other’ as if it were just ‘another human being.’ By evoking emotional responsiveness, the cognitive awareness of ‘it is just a robot’ fades into the background and seems not relevant anymore.

Author Contributions

Conceptualization, E.A.K.; methodology, E.A.K. and J.F.H.; formal analysis, E.A.K.; investigation, E.A.K.; data curation, E.A.K.; writing—original draft preparation, E.A.K.; writing—review and editing, E.A.K. and J.F.H.; visualization, E.A.K., and J.F.H.; supervision, E.A.K.; funding acquisition, E.A.K. and J.F.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study is funded by the Netherlands Organization for Scientific Research (NWO Open Competition–Digitalization, grant number: 406.DI.19.005) and the VU Foundation Amsterdam (grant number: AB/rk/2019/100).

Acknowledgments

We are very grateful to Richelle de Rie for collecting the data and preparing the materials. We also thank the participants who voluntarily took part in our study without receiving any compensation.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Appendix A

Note. Images of the robots Alice and Nao (named Zora) and the human actress as stimulus materials, respectively, in panel A, the maltreatment condition, and in panel B, the affectionate condition.
Figure A1. Stimulus Materials: Maltreatment condition (panel A), affectionate condition (panel B).

Appendix B

Note. Items (translated from Dutch) and principal component analysis of the empathy scale (see ‘Measures’ in the Method section), ordered by highest factor loading.
Table A1. Factor loadings after Principal Component Analysis for Empathy scale.
Item                                                                 Factor Loadings (PCA)
                                                                        1          2
1. I could imagine that X liked being touched. (R)                  −0.848      0.343
2. I felt comfortable when I watched what happened to X (R)         −0.830      0.418
3. I thought it was nasty what happened in the clip.                 0.797
4. I felt sorry for X for what was going on.                         0.764
5. It was pleasant to me, to see what happened to X. (R)            −0.764      0.419
6. I pitied for X.                                                   0.757      0.361
7. I felt guilty when I watched what happened to X.                  0.599
8. I felt uncomfortable when I watched the clip.                     0.590
9. I wanted to comfort X.                                            0.565      0.517
10. I could involve myself well into the feeling situation of X.                0.780
11. I could feel with how X must feel.                                           0.773
12. I was interested in what happened to X.                                      0.747
13. I felt sympathy for X.                                                        0.746
14. Watching X in this situation left me cold. (R)                              −0.521
Note. Factor loadings < 0.30 are discarded. A minus-sign indicates a contra-indicative item and is reverse coded in the analyses.

References

  1. Pew Research Center. AI, Robotics, and the Future of Jobs. 2014. Available online: http://www.pewinternet.org/2014/08/06/future-of-jobs/ (accessed on 26 October 2018).
  2. Tobe, F. 2016 Will be a Pivotal Year for Social Robots, The Robot Report. Available online: https://www.therobotreport.com/news/2016-will-be-a-big-year-for-social-robots (accessed on 13 December 2015).
  3. Bongaarts, J.; Cavanaghi, S.; Jones, G.; Luchsinger, G.; McDonals, P.; Mbacké, C.; Sobotka, T. The power of choice. In Reproductive Rights and the Demographic Transition; State of World Population 2018; UNFPA: New York, NY, USA, 2018. [Google Scholar]
  4. European Commission. Record High Old-Age Dependency Ratio in the EU. Eurostat. 8.5. Available online: https://ec.europa.eu/eurostat/web/products-eurostat-news/-/DDN-20180508-1 (accessed on 5 December 2018).
  5. Allen, K.; Wearden, G. Eurozone Unemployment Hits New High, The Guardian. Available online: http://www.guardian.co.uk/business/2013/apr/30/eurozone-unemployment-record-high (accessed on 5 December 2018).
  6. Asaro, P.M. What Should We Want from a Robot Ethic? Int. Rev. Inf. Ethics 2006, 6, 9–16. [Google Scholar]
  7. Stafford, R.Q.; Macdonald, B.A.; Jayawardena, C.; Wegner, D.M.; Broadbent, E. Does the Robot Have a Mind? Mind Perception and Attitudes Towards Robots Predict Use of an Eldercare Robot. Int. J. Soc. Robot. 2014, 6, 17–32. [Google Scholar] [CrossRef]
  8. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Broadbent, E. Interactions with Robots: The Truths We Reveal About Ourselves. Annu. Rev. Psychol. 2017, 68, 627–652. [Google Scholar] [CrossRef] [Green Version]
  10. Kennedy, J.; Baxter, P.; Belpaeme, T. The Robot Who Tried Too Hard. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’15, Portland, OH, USA, 2–5 March 2015; Association for Computing Machinery (ACM): New York, NY, USA, 2015; pp. 67–74. [Google Scholar]
  11. Krämer, N.C.; Eimler, S.; Von Der Pütten, A.; Payr, S. Theory of companions. What can theoretical models contribute to applications and understanding of human—robot interaction? Appl. Artif. Intell. 2012, 25, 474–502. [Google Scholar] [CrossRef]
  12. Broadbent, E.; Kumar, V.; Li, X.; Sollers, J.; Stafford, R.Q.; Macdonald, B.A.; Wegner, D.M. Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived to Have More Mind and a Better Personality. PLoS ONE 2013, 8, e72589. [Google Scholar] [CrossRef]
  13. Damiano, L.; DuMouchel, P.; Lehmann, H. Artificial Empathy: An Interdisciplinary Investigation. Int. J. Soc. Robot. 2015, 7, 3–5. [Google Scholar] [CrossRef]
  14. Leite, I.; Pereira, A.; Mascarenhas, S.; Martinho, C.; Prada, R.; Paiva, A. The influence of empathy in human–robot relations. Int. J. Hum. Comput. Stud. 2013, 71, 250–260. [Google Scholar] [CrossRef]
  15. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  16. Hong, S.; Sundar, S.S. Social Responses to Computers in Cloud Computing Environment: The Importance of Source Orientation; ACM, CHI: Vancouver, BC, Canada, 7–12 May 2011. [Google Scholar]
  17. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  18. Nass, C.; Moon, Y.; Carney, P. Are People Polite to Computers? Responses to Computer-Based Interviewing Systems1. J. Appl. Soc. Psychol. 1999, 29, 1093–1109. [Google Scholar] [CrossRef]
  19. Bartneck, C.; Rosalia, C.; Menges, R.; Deckers, I. Robot Abuse: A Limitation of the Media Equation. In Abuse: The Darker Side of Human-Computer Interaction: An Interact 2005 Workshop; De Angeli, A., Brahnam, S., Wallis, P., Eds.; Interact: Rome, Italy, 12 September 2005; pp. 54–57. [Google Scholar]
  20. Ghiselin, M.; Ekman, P.; Gruber, H.E. Darwin and Facial Expression: A Century of Research in Review; Malor Books: Cambridge, MA, USA, 2006; ISBN 978-1-883536-88-6. [Google Scholar]
  21. Schindler, S.; Zell, E.; Botsch, M.; Kissler, J. Differential effects of face-realism and emotion on event-related brain potentials and their implications for the uncanny valley theory. Sci. Rep. 2017, 7, srep45003. [Google Scholar] [CrossRef]
  22. De Melo, C.M.; Carnevale, P.J.; Read, S.; Gratch, J. Reading people’s minds from emotion expressions in interdependent decision making. J. Pers. Soc. Psychol. 2014, 106, 73–88. [Google Scholar] [CrossRef] [Green Version]
  23. Russell, J.A.; Fernandez Dols, J.M. The Psychology of Facial Expression, 1st ed.; Cambridge University Press: Cambridge, UK, 1997; ISBN 978-0-521-58796-9. [Google Scholar]
  24. Konijn, E.A.; Van Vugt, H.C. Emotions in Mediated Interpersonal Communication: Toward Modeling Emotion in Virtual Humans. In Mediated Interpersonal Communication; Konijn, E.A., Utz, S., Tanis, M., Barnes, S., Eds.; Routledge: New York, NY, USA, 2008; pp. 100–130. [Google Scholar]
  25. Ochs, M.; Niewiadomski, R.; Pelachaud, C. Facial Expressions of Emotions for Virtual Characters. In The Oxford Handbook of Affective Computing; Calvo, R., D’Mello, S., Gratch, J., Eds.; Oxford University Press (OUP): New York, NY, USA; London, UK, 2015. [Google Scholar]
  26. Ravaja, N.; Bente, G.; Kätsyri, J.; Salminen, M.; Takala, T. Virtual Character Facial Expressions Influence Human Brain and Facial EMG Activity in a Decision-Making Game. IEEE Trans. Affect. Comput. 2018, 9, 285–298. [Google Scholar] [CrossRef] [Green Version]
  27. Oh, S.Y.; Bailenson, J.N.; Krämer, N.; Li, B. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments. PLoS ONE 2016, 11, e0161794. [Google Scholar] [CrossRef] [Green Version]
  28. Nowak, K.L.; Fox, J. Avatars and computer-mediated communication: A review of the definitions, uses, and effects of digital representations. Rev. Commun. Res. 2018, 6, 30–53. Available online: https://nbn-resolving.org/urn:nbn:de:0168-ssoar-55777-7 (accessed on 26 January 2019).
  29. Gockley, R.; Bruce, A.; Forlizzi, J.; Michalowski, M.; Mundell, A.; Rosenthal, S.; Sellner, B.; Simmons, R.; Snipes, K.; Schultz, A.; et al. Designing robots for long-term social interaction. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 1338–1343. [Google Scholar] [CrossRef] [Green Version]
  30. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
  31. Mavridis, N. A review of verbal and non-verbal human–robot interactive communication. Robot. Auton. Syst. 2015, 63, 22–35. [Google Scholar] [CrossRef] [Green Version]
  32. Vossen, P.; Báez, S.; Bajčetić, L.; Bašić, S.; Kraaijeveld, B. A communicative robot to learn about us and the world. In Proceedings of the Dialogue 2019, Moscow, Russia, 29 May–1 June 2019. [Google Scholar]
  33. Nadine is a Humanoid Robot ’Receptionist’ at Nanyang Technological University (NTU) in Singapore. Footage Shows her Having a Chat with her Creator, Nadia Thalmann. Available online: https://www.youtube.com/watch?v=cvbJGZf-raY (accessed on 5 November 2020).
  34. Jia Jia is a Humanoid Robot Produced by University of Science and Technology of China. Footage shows Its Launch Event in Hefei, Anhui Province, April 15. 2016. Available online: https://www.youtube.com/watch?v=ZFB6lu3WmEw (accessed on 5 November 2020).
  35. Erica is a Humanoid Robot Produced by the Science and Technology Agency, Osaka University, the Advanced Telecommunications Research Institute International, and Kyoto University. Footage Shows Erica Having a ‘Natural Conversation’. Available online: https://www.youtube.com/watch?v=iW3_Ft1t0mY; http://www.japantoday.com/category/lifestyle/view/erica-the-android-can-have-completely-natural-conversations (accessed on 5 November 2020).
  36. Sophia is a Humanoid Robot Equipped with Human-Like Conversation, Developed by David Hanson, Hanson Robotics (Hong Kong). Available online: http://www.hansonrobotics.com/robot/sophia/ (accessed on 5 November 2020).
  37. Mirnig, N.; Strasser, E.; Weiss, A.; Kühnlenz, B.; Wollherr, D.; Etscheligi, M. Can you read my face? A methodological variation for assessing facial expressions of robotic heads. Int. J. Soc. Robot. 2015, 7, 63–76. [Google Scholar] [CrossRef]
  38. Adolphs, R. Social cognition and the human brain. Trends Cogn. Sci. 1999, 3, 469–479. [Google Scholar] [CrossRef]
  39. Dimberg, U.; Andréasson, P.; Thunberg, M. Emotional Empathy and Facial Reactions to Facial Expressions. J. Psychophysiol. 2011, 25, 26–31. [Google Scholar] [CrossRef]
  40. Frith, C. Role of facial expressions in social interactions. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3453–3458. [Google Scholar] [CrossRef] [Green Version]
  41. Misselhorn, C. Empathy with Inanimate Objects and the Uncanny Valley. Minds Mach. 2009, 19, 345–359. [Google Scholar] [CrossRef]
  42. Dovidio, J.F.; Johnson, J.D.; Gaertner, S.L.; Pearson, A.R.; Saguy, T.; Ashburn-Nardo, L. Prosocial motives, emotions, and behavior: The better angels of our nature. In Prosocial Motives, Emotions, and Behavior; Mikulincer, M., Shaver, P.R., Eds.; American Psychological Association (APA): Worcester, MA, USA, 2010; pp. 393–408. [Google Scholar]
  43. Derksen, F.; Bensing, J.; Lagro-Janssen, A. Effectiveness of empathy in general practice: A systematic review. Br. J. Gen. Pract. 2013, 63, e76–e84. [Google Scholar] [CrossRef]
  44. Singer, T. Understanding others: Brain mechanisms of theory of mind and empathy. In Neuroeconomics: Decision Making and the Brain; Glimcher, P.W., Camerer, C.F., Poldrack, R.A., Fehr, F., Eds.; Academic Press: London, UK, 2008; pp. 251–268. [Google Scholar]
  45. Rosenthal-von der Pütten, A.M.; Schulte, F.P.; Eimler, S.C.; Sobieraj, S.; Hoffmann, L.; Maderwald, S.; Brand, M.; Krämer, N.C. Investigations on empathy towards humans and robots using fMRI. Comput. Hum. Behav. 2014, 33, 201–212. [Google Scholar] [CrossRef]
  46. Suzuki, Y.; Galli, L.; Ikeda, A.; Itakura, S.; Kitazaki, M. Measuring empathy for human and robot hand pain using electroencephalography. Sci. Rep. 2015, 5, 15924. [Google Scholar] [CrossRef] [Green Version]
  47. Ellis, H.D.; Lewis, M.B. Capgras delusion: A window on face recognition. Trends Cogn. Sci. 2001, 5, 149–156. [Google Scholar] [CrossRef]
  48. Gou, M.S.; Vouloutsi, V.; Grechuta, K.; Lallée, S.; Verschure, P. Empathy in Humanoid Robots; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2014; pp. 423–426. [Google Scholar]
  49. Hoorn, J.F.; Winter, S.D. Here Comes the Bad News: Doctor Robot Taking Over. Int. J. Soc. Robot. 2018, 10, 519–535. [Google Scholar] [CrossRef] [Green Version]
  50. Mori, M. The uncanny valley. Energy 1997, 7, 33–35. [Google Scholar] [CrossRef]
  51. Tinwell, A.; Grimshaw, M.; Nabi, D.A.; Williams, A. Facial expression of emotion and perception of the uncanny valley in virtual characters. Comput. Human Behav. 2001, 27, 741–749. [Google Scholar] [CrossRef]
  52. Hoorn, J.F. Theory of Robot Communication: I. The Medium is the Communication Partner. arXiv:cs 2018, arXiv:1812.04408. Available online: https://arxiv.org/ftp/arxiv/papers/1812/1812.04408.pdf (accessed on 30 January 2019).
  53. Aly, A.; Tapus, A. On designing expressive robot behavior: The effect of affective cues on interaction. SN Comput. Sci. 2020, 1, 1–17. [Google Scholar] [CrossRef]
  54. Vlachos, E.; Schärfe, H. Android emotions revealed. In International Conference on Social Robotics; Springer: Berlin/Heidelberg, Germany, 2012; pp. 56–65. [Google Scholar]
  55. Barrett, L.F. The theory of constructed emotion: An active inference account of interoception and categorization. Soc. Cogn. Affect. Neurosci. 2017, 12, 1–23. [Google Scholar] [CrossRef] [PubMed]
  56. Casey, B. Beyond Simple Models of Self-Control to Circuit-Based Accounts of Adolescent Behavior. Annu. Rev. Psychol. 2015, 66, 295–319. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. LeDoux, J.E.; Brown, R. A higher-order theory of emotional consciousness. Proc. Natl. Acad. Sci. USA 2017, 114, E2016–E2025. [Google Scholar] [CrossRef] [Green Version]
  58. Crone, E.A.; Konijn, E.A. Media use and brain development during adolescence. Nat. Commun. 2018, 9, 588–598. [Google Scholar] [CrossRef] [Green Version]
  59. Konijn, E.A.; Achterberg, M. Neuropsychological underpinnings of emotional responsiveness to media. In The International Encyclopedia of Media Psychology; van den Bulck, J., Sharrer, E., Ewoldsen, D., Mares, M.-L., Eds.; Wiley Publishers: Hoboken, NJ, USA, 2020. [Google Scholar]
  60. LeDoux, J. Low roads and higher order thoughts in emotion. Cortex 2014, 59, 214–215. [Google Scholar] [CrossRef]
  61. De Gelder, B.; Van Honk, J.; Tamietto, M. Emotion in the brain: Of low roads, high roads and roads less travelled. Nat. Rev. Neurosci. 2011, 12, 425. [Google Scholar] [CrossRef] [Green Version]
  62. Pessoa, L.; Adolphs, R. Emotion and the brain: Multiple roads are better than one. Nat. Rev. Neurosci. 2011, 12, 425. [Google Scholar] [CrossRef] [Green Version]
  63. LeDoux, J.E.; Hofmann, S.G. The subjective experience of emotion: A fearful view. Curr. Opin. Behav. Sci. 2018, 19, 67–72. [Google Scholar] [CrossRef] [Green Version]
  64. Konijn, E.A.; Van Der Molen, J.H.W.; Van Nes, S. Emotions Bias Perceptions of Realism in Audiovisual Media: Why We May Take Fiction for Real. Discourse Process. 2009, 46, 309–340. [Google Scholar] [CrossRef]
  65. Moser, E.; Derntl, B.; Robinson, S.D.; Fink, B.; Gur, R.C.; Grammer, K. Amygdala activation at 3T in response to human and avatar facial expressions of emotions. J. Neurosci. Methods 2007, 161, 126–133. [Google Scholar] [CrossRef] [PubMed]
  66. Van Kemenade, M.; Konijn, E.A.; Hoorn, J. Robots Humanize Care—Moral Concerns Versus Witnessed Benefits for the Elderly. Health Inf. J. 2015, 648–653. [Google Scholar] [CrossRef]
  67. Konijn, E.A.; Hoorn, J.F. Parasocial Interaction and Beyond: Media Personae and Affective Bonding. In The International Encyclopedia of Media Effects; Roessler, P., Hoffner, C., Zoonen, L.V., Eds.; Wiley: Hoboken, NJ, USA, 2017; pp. 1–15. [Google Scholar]
  68. Hoorn, J.F. Theory of Robot Communication: II. Befriending a Robot over Time. arXiv 2018, arXiv:1812.04406. Available online: https://arxiv.org/ftp/arxiv/papers/1812/1812.04406.pdf (accessed on 30 January 2019).
  69. Kahn, P.H.J.; Kanda, T.; Ishiguro, H.; Freier, N.G.; Severson, R.L.; Gill, B.T.; Ruckert, J.H.; Shen, S. “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Dev. Psychol. 2012, 48, 303–314. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Watson, D.; Clark, L.A.; Tellegen, A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J. Pers. Soc. Psychol. 1988, 54, 1063–1070. [Google Scholar] [CrossRef] [PubMed]
  71. Konijn, E.A.; Van De Schoot, R.; Winter, S.D.; Ferguson, C.J. Possible Solution to Publication Bias Through Bayesian Statistics, Including Proper Null Hypothesis Testing. Commun. Methods Meas. 2015, 9, 280–302. [Google Scholar] [CrossRef]
  72. Woods, S.N.; Walters, M.L.; Koay, K.L.; Dautenhahn, K. Methodological Issues in HRI: A Comparison of Live and Video-Based Methods in Robot to Human Approach Direction Trials. In Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN ’06), Hatfield, UK, 6–8 September 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 51–58. [Google Scholar]
  73. Marchetti, A.; Manzi, F.; Itakura, S.; Massaro, D. Theory of Mind and Humanoid Robots from a Lifespan Perspective. Zeitschrift für Psychologie 2018, 226, 98–109. [Google Scholar] [CrossRef]
  74. Kim, E.S. Robots for Social Skills Therapy in Autism: Evidence and Designs toward Clinical Utility Dissertation; Yale University: New Haven, CT, USA, 2013. [Google Scholar]
  75. Liang, A.; Piroth, I.; Robinson, H.; Macdonald, B.A.; Fisher, M.; Nater, U.M.; Skoluda, N.; Broadbent, E. A Pilot Randomized Trial of a Companion Robot for People with Dementia Living in the Community. J. Am. Med. Dir. Assoc. 2017, 18, 871–878. [Google Scholar] [CrossRef]
  76. Hoorn, J.; Konijn, E.; Germans, D.; Burger, S.; Munneke, A. The In-between Machine—The Unique Value Proposition of a Robot or Why we are Modelling the Wrong Things. In Proceedings of the 7th International Conference on Agents and Artificial Intelligence, Lisbon, Portugal, 10–12 January 2015. [Google Scholar]
  77. Konijn, E.A.; Hoorn, J.F. Robot tutor and pupils’ educational ability: Teaching the times tables. Comput. Educ. 2020, 157, 103970. [Google Scholar] [CrossRef]
Figure 1. Stimulus materials, neutral images for human actress, robot Alice and robot Nao/Zora, respectively.
Figure 2. Interaction effect between facial articulacy and treatment of social entity on emotional responsiveness of observers.
Figure 3. Interaction effect between facial articulacy and treatment of social entity on empathy of observers.
Figure 4. Interaction effect between type of treatment and social entity on attribution of negative feelings onto the social entity.
Table 1. Means (M) and standard deviations (SD) for each of the cells in the comparison of ‘facial articulacy’ and ‘treatment’, and the results of the two-factorial MANOVA with emotional responsiveness as dependent variable, separately for negative (a) and positive (b) feelings.

                        Friendly Treatment    Harmful Treatment
                 n        M        SD            M        SD           F        η_p²       p
a. Negative Feelings
Human            90     13.98     5.77         21.47     8.07        25.63     0.226     0.001
Alice            89     15.49     6.51         19.36     8.71         5.67     0.061     0.019
Nao/Zora         86     16.14     6.65         18.86     7.12         3.35     0.038     0.071
b. Positive Feelings
Human            90     24.29     7.88         21.09     7.08         4.11     0.045     0.05
Alice            89     23.64     9.08         18.25     6.04        10.834    0.111     0.001
Nao/Zora         86     22.63     7.52         18.21     6.81         8.16     0.089     0.005
