Article

Recommendation Agent Adoption: How Recommendation Presentation Influences Employees’ Perceptions, Behaviors, and Decision Quality

by Émilie Bigras, Pierre-Majorique Léger * and Sylvain Sénécal
Tech3Lab, HEC Montréal, Montréal, QC H3T 2A7, Canada
* Author to whom correspondence should be addressed.
Submission received: 16 September 2019 / Revised: 27 September 2019 / Accepted: 3 October 2019 / Published: 11 October 2019


Featured Application

Transparent design of artificial intelligence-based recommendation agents positively influences users’ performance and their adoption of those systems.

Abstract

The purpose of this paper is to report the results of a laboratory experiment that investigated how assortment planners’ perceptions, usage behavior, and decision quality are influenced by the way the recommendations of an artificial intelligence (AI)-based recommendation agent (RA) are presented. A within-subject laboratory experiment was conducted with twenty subjects. Participants’ perceptions of and usage behavior toward an RA while making decisions were assessed using validated measurement scales and eye-tracking technology. The results show the positive influence of a transparent RA whose explanations demand little cognitive effort to access and understand on assortment planners’ perceptions (i.e., source credibility, sense of control, decision quality, and satisfaction), usage behavior, and decision quality. These results suggest that designing RAs that are more transparent to their users brings perceptual and attitudinal benefits that influence both the adoption and the continued use of those systems by employees. This study contributes to filling the literature gap on RAs in organizational contexts, thus advancing knowledge in the human–computer interaction literature. The findings provide guidelines for RA developers and user experience (UX) designers on how best to create and present an AI-based RA to employees.

1. Introduction

Optimizing the composition and size of an inventory is critical for retailers to maximize their sales or gross margin [1]. To create an optimal assortment of products, both consumers’ needs and retailers’ constraints must be respected [2,3]. While making assortment decisions for retailers, assortment planners need to take into account qualitative and quantitative criteria spanning a great number of variables (e.g., past sales, retail trends, inventory, sales forecasts) [4]. To create an optimal assortment of products, assortment planners must examine these variables thoroughly and compare them according to their importance, which depends on the consumers’ needs and retailers’ constraints that must be respected. The large amount of information that assortment planners need to consider can negatively impact their decision quality [5], thus negatively affecting the current and future sales of retailers [1].
To reduce the risk associated with information overload, employees’ decision-making process can now be aided by artificial intelligence (AI)-based recommender systems [6]. Though AI-based recommendation agents (RAs) are now becoming more common in organizational contexts, past research on RAs has mainly focused on the online consumer context [7,8].
In this paper, we answer the following research question: How does the way recommendations of an AI-based RA are presented influence assortment planners’ perceptions (i.e., source credibility, sense of control, decision quality, and satisfaction), usage behaviors, and decision quality? Based on the literature on the consumer context, we expect that the perceptions, usage behavior, and decision quality of consumers and assortment planners will align even though their RA adoption process differs. This study contributes to filling the gap in the literature on RAs in organizational contexts and provides insights for RA developers and user experience (UX) designers on how to best present an AI-based RA to employees for them to consider and adopt its recommendations.

2. Background

This study builds upon past research on RAs focusing on the online consumer context. We reviewed the literature on transparency and cognitive effort to then assess their impact on users’ perceptions, usage behavior, and decision quality.

2.1. RA Transparency

An intelligent agent is transparent when the logical reasoning behind its recommendations is explained to its users [9]. In a cooperative problem-solving context, where a system supports the decision-making process of a user [10], explanations are crucial for users to consider the recommendations of a knowledge-based system in their decisions [11]. In an organizational context, understanding the logical reasoning behind the recommendations of an intelligent agent is crucial for employees, who must justify their decisions to their superiors [12]. Without perceiving the usefulness of these knowledgeable explanations, users will dismiss the recommendations presented to them throughout their decision-making process [13]. The details of the algorithm responsible for the recommendations of an RA can be exposed either completely or partially [14,15]. Exposing only part of the algorithm is usually done to protect proprietary details or to reduce its apparent complexity [14].

2.2. Cognitive Effort and RA Transparency

In the online consumer context, without the recommendations of an AI-based RA, users must gather and consider all the decision-relevant information throughout their decision-making process [16]. When this information load exceeds the user’s limited information-processing capacity, cognitive fatigue and confusion emerge [17]. Information overload is then reached, negatively affecting the efficiency and effectiveness of the users’ decisions [5]. The perceived cognitive effort required to process information while making a decision has been shown to diminish with the use of RAs [18]. However, adding explanations that expose the logical reasoning behind the recommendations of an RA increases the users’ cognitive effort [11]. Hence, without these explanations, the recommendations of an RA would be ignored by the users throughout their decision-making process; yet with these explanations, users could reach information overload, negatively affecting their decision-making process. Consequently, the need for transparency and the cognitive effort needed to understand and access these explanations must be carefully balanced.

2.3. Impact of Transparency and Cognitive Effort on Users’ Perceptions, Usage Behavior, and Decision Quality

Past research on RAs focusing on the online consumer context shows the impact of transparency and cognitive effort, when these two factors are not properly balanced, on users’ perceptions (i.e., source credibility, sense of control, decision quality, and satisfaction), usage behavior, and decision quality.
Source credibility. To perceive the recommendations of an AI-based RA as credible, users need to recognize these suggestions as believable [19]. Multiple dimensions have been shown to influence source credibility (e.g., trustworthiness, expertise, attractiveness) [20]. However, prior research on recommender systems’ perceived credibility has mainly focused on two key dimensions: trustworthiness and expertise [21,22]. An RA is perceived as trustworthy when its recommendations are recognized as reliable and honest [23]. It is considered an expert when users identify it as having the ability and the skills to recommend effectively [7,23]. Users’ perceived source credibility can thus increase with transparency [24]. Explaining the logical reasoning behind the recommendations of an RA establishes users’ trust and demonstrates the expertise behind these suggestions [16]. Furthermore, according to the literature, users’ trust also varies with the RA’s perceived ease of use (PEOU) [8], which is negatively affected when the users’ cognitive effort increases [23].
Sense of control. In riskier contexts (e.g., a car purchase), users who do not understand the recommendations of an RA have been shown to ignore these recommendations throughout their decision-making process [25]. Hence, without perceiving a sense of control over the RA’s recommendations, users will not consider these suggestions [26]. This feeling of control has been shown to be enhanced by transparency [15]. Although a transparent RA increases cognitive effort, transparency is critical for users to feel a sense of control over the RA’s recommendations [27]. In addition, according to the literature, partially explaining the logical reasoning behind the recommendations of an RA, through simple explanations, is more effective for users’ perceived sense of control than exposing the details of its algorithm completely [15,25].
Decision quality. Decision makers, in order to maximize the accuracy of their decisions (i.e., decision quality), must invest more cognitive effort [23]. However, decision makers are generally looking to maximize the accuracy of their decisions and minimize the cognitive effort invested [28]. Evaluating all the available alternatives in depth, through a complex decision-making process, is normally impossible for users [29]. Therefore, in order to reduce cognitive effort, decision makers will be willing to settle for a non-optimal choice [30]. Hence, users will adapt their decision-making strategies to their environment [31]. For instance, in an online context, shoppers could first begin their decision-making process by screening the available products (i.e., initial screening stage) [32]. They could then select the most relevant items and compare them in depth [32]. This could finally lead to the purchase of a product [32]. By including a decision aid (e.g., RA) throughout this decision-making process, users could be able to make more accurate decisions more easily [33].
When used in a decision-making process, RAs have been shown to increase decision quality and reduce cognitive effort [34,35]. Previous research on decision aids confirmed that decision makers can increase the depth of their evaluation, between the available alternatives, by consulting the recommendations of a decision aid, thus positively affecting decision quality [36]. With a transparent RA, users are able to perceive the usefulness of its recommendations through their decision-making process [13]. By considering these recommendations, both subjective and objective decision quality (i.e., the user’s perceived and actual performance) increases [35]. However, the cognitive effort associated with understanding the logical reasoning behind these recommendations (i.e., the increase in cognitive load) will negatively impact the user’s subjective and objective decision quality [35].
Satisfaction. Users perceive satisfaction with a decision-making process when their expectations are confirmed [37]. Therefore, the users’ actual performance with an RA must meet or exceed their expectations [23]. Prior research on decision support systems has shown the importance of information quality for users’ decision-making satisfaction [38,39]. Information quality is composed of multiple dimensions (e.g., information accuracy, information completeness, information relevance) [40]. Thus, understanding the logical reasoning behind the recommendations of an AI-based RA, through knowledgeable explanations, is essential for users to perceive the quality of these suggestions. Furthermore, these recommendations, when perceived as useful, have been demonstrated to positively influence user satisfaction with the RA and with the decision-making process involving the RA [23]. However, the added cognitive load of a transparent RA can decrease user satisfaction [18,23,35,39].
RA adoption and usage. To consider the recommendations of an AI-based RA in an online context, shoppers must perceive its suggestions as accurate, easy to use, satisfactory, trustworthy, and useful [8,23,35,41]. Thus, the first interaction of a user with an RA (i.e., pre-adoption) is crucial in influencing its adoption and usage [8,42]. A transparent RA is then essential for users to accept these recommendations throughout their decision-making process [24,43]. However, in an organizational context, employees have no choice in adopting their employer’s information system [44]. Nevertheless, the continued usage of that system mainly depends on its perceived usefulness and ease of use [45].
In the context of a transparent RA, a user experiencing information overload while consulting its knowledgeable explanations is expected to accept the RA’s recommendations [46]. Therefore, as a result of cognitive fatigue or confusion, a user would accept the RA’s recommendations without understanding the logical reasoning behind them [17]. The RA’s recommendations would then be consulted more frequently by the user due to their increased importance in the user’s decision-making process [47,48].

3. Hypotheses

This review of prior research clearly shows a gap in the existing research on RAs. Compared to consumers’ perceptions, usage behavior, and decision quality toward RAs in an online context, employees’ perceptions, usage behavior, and decision quality toward RAs in an organizational context have not been studied in depth. Building upon the preceding literature review, which mostly characterized the user as a consumer, we postulate that the perceptions, usage behavior, and decision quality of a consumer and an employee should align even though their RA adoption processes differ. Hence, the way recommendations are presented should influence the perceptions, usage behavior, and decision quality of an assortment planner similarly to those of a consumer.
In order to create an optimal assortment of products, assortment planners must invest considerable cognitive effort [4,5]. However, past research has shown that decision makers want to maximize the accuracy of their decisions while minimizing the cognitive effort invested [28]. Therefore, by considering the recommendations of an AI-based RA throughout their decision-making process, assortment planners could diminish the cognitive effort invested and maximize their decision accuracy [46]. However, without understanding the logical reasoning behind these recommendations, assortment planners are likely to dismiss these suggestions [11,13]. A transparent RA is then crucial for assortment planners to justify their assortment decisions to their superiors [12]. Yet, because of the cognitive effort assortment planners need to access and understand the explanations of a transparent RA, the accuracy of their decisions could be diminished [9]. Hence, a transparent RA demanding less cognitive effort seems to be the most suitable balance between transparency and cognitive effort.
Thus, we postulate that a transparent RA demanding less cognitive effort to access and understand its explanations (i.e., a transparent RA together with low cognitive effort) will have a greater positive influence on assortment planners’ perception regarding source credibility [8,24], control [15], decision quality [13,35], and satisfaction [18,23,35,39] than other RAs (i.e., a non-transparent RA together with low cognitive effort and a transparent RA together with high cognitive effort). We thus formulate the following hypothesis.
H1. A transparent RA demanding low cognitive effort will have a greater positive impact on assortment planners’ perception towards the RA’s credibility (H1a), their perceived sense of control (H1b), their decision quality (H1c), and their satisfaction regarding the RA (H1d) than other RAs.
In addition, based on the review of the literature, we suggest that a transparent RA together with low cognitive effort will have a greater positive influence on assortment planners’ objective decision quality than other RAs (i.e., a non-transparent RA together with low cognitive effort and a transparent RA together with high cognitive effort), consistent with Pereira (2001) [35]. We thus postulate the following hypothesis.
H2. A transparent RA demanding low cognitive effort will have a greater positive impact on assortment planners’ decision quality than other RAs.
Furthermore, based on Eppler and Mengis (2004) [17], Aljukhadar et al. (2012) [46], and Chen and Epps (2013) [47], we postulate that, throughout their decision-making process, assortment planners will consult the recommendations of a transparent RA demanding more cognitive effort to access and understand its explanations (i.e., a transparent RA together with high cognitive effort) more frequently than those of other RAs (i.e., a non-transparent RA together with low cognitive effort and a transparent RA together with low cognitive effort). We therefore postulate the following hypothesis.
H3. Throughout their decision-making process, assortment planners will consult the recommendations of a transparent RA coupled with high cognitive effort more frequently than those of other RAs.
Moreover, based on prior research showing that the first interaction of a user with an RA will determine its adoption and usage [8,42], we postulate that assortment planners will consult the explanations of a transparent RA more frequently and for a longer period of time at the beginning of their decision-making process rather than at the end [24,43]. We thus put forward the following hypothesis.
H4. Assortment planners will consult the explanations of a transparent RA more frequently (H4a) and for a longer period of time (H4b) at the beginning of their decision-making process rather than at the end.
Finally, based on Karahanna et al. (1999) [44] and Davis et al. (1989) [45], we suggest that explaining the logical reasoning behind the recommendations of an RA (i.e., a transparent RA) will have a greater positive influence on assortment planners’ intention to adopt and use an RA throughout their decision-making process compared to a non-transparent RA. We thus postulate the following hypothesis.
H5. A transparent RA will have a greater positive impact on assortment planners’ intention of adopting and using an RA throughout their decision-making process compared to a non-transparent RA.

4. Methodology

4.1. Experimental Setting

A within-subject laboratory experiment was performed with subjects who had to make assortment decisions using an experimental RA prototype for assortment planning developed by JDA (Scottsdale, AZ, USA). This prototype was developed with Axure RP 8 and was made available to the participants on a 1680 × 1050 resolution monitor. A total of twenty professionals (11 men and 9 women) participated in the study, and all participants had experience using application software to perform the product assortment task involved in this experiment. The average age of the sample was 26 years (standard deviation: 3.9 years). Each participant completed a consent form and received a $30 gift card as compensation. This project was approved by the Institutional Review Board (IRB) of our institution.
As illustrated in Figure 1, participants had to make assortment decisions for two similar fictitious scenarios that were counterbalanced. The context of these scenarios was the retail clothing industry (i.e., dresses (Scenario 1) and male upper-body clothing (Scenario 2)). Each scenario began with a practice task that did not include an RA, to familiarize participants with the assortment planning software. This practice task was followed, for each scenario, by three distinct conditions in a counterbalanced order. Each condition displayed a particular recommendation presentation designed around the two experimental factors (i.e., transparency and cognitive effort). Task 1 reflected a non-transparent RA and low cognitive effort condition (T1), Task 2 represented a transparent RA and low cognitive effort condition (T2), and Task 3 was a transparent RA and high cognitive effort condition (T3). The non-transparent RA and high cognitive effort condition was not part of the experiment since it is a type of RA that neither organizations nor employees would realistically use. The experiment lasted two hours on average, and no time constraint was imposed (each task took about 5 min).
For each task (3 tasks) of each scenario (2 scenarios), 24 different products were displayed (see Figure 2). From these 24 distinct products, participants needed to select a fixed number of products (6 or 7) to create an optimal assortment. The total number of products to be selected for each condition was stated in both scenarios. Furthermore, the products of each task were presented identically to the participants: an image, the product name (including its brand), and its product score (i.e., the RA’s recommendation). The RA’s recommendation for each product was generated using AI and varied between 0 and 100. To ensure the authenticity of the recommendations, product scores were generated from previous data from an analogous industry using JDA Luminate (Scottsdale, AZ, USA), an artificial intelligence platform for predicting market demand. Depending on its score, each RA recommendation was surrounded by a specific color (i.e., green > 66, 66 ≥ orange > 33, and red ≤ 33).
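The score-to-color rule above can be expressed as a small helper function. This is a hypothetical Python sketch for illustration only; the prototype’s actual implementation is not documented here.

```python
def score_color(score: float) -> str:
    """Map an RA recommendation score (0-100) to its highlight color.

    Thresholds follow the rule stated above:
    green > 66, 66 >= orange > 33, red <= 33.
    """
    if score > 66:
        return "green"
    elif score > 33:
        return "orange"
    return "red"
```

For example, a product score of 66 falls exactly on the orange boundary, while 33 falls on the red boundary.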
For T1 (non-transparent RA and low cognitive effort), the RA’s recommendations were the only information made available to the participants. For T2 (transparent RA and low cognitive effort) and T3 (transparent RA and high cognitive effort), the RA’s recommendations were also made available; however, participants could additionally access the partially exposed details of the algorithm responsible for the RA’s recommendations (i.e., explanations). By clicking on each product, participants could consult further product information (e.g., attributes, past sales, margin, and comparative products), which was included in the algorithm responsible for each RA recommendation. The cognitive effort necessary to access and understand these explanations differed between T2 and T3. For T2, the additional product information was made available through a modal window (see Figure 3). For T3, the explanations of each product score were made available on a separate page, which required additional navigation and cognitive effort from the participants (see Figure 4).
At the end of the experiment, a semi-structured interview was conducted (about 15 min). Throughout this interview, participants were asked to discuss their decision-making process. Two main subjects were addressed. The first focused on the strategies used by the participants to make an assortment decision. The second examined the participants’ understanding of the RA’s recommendations. All interviews were recorded and transcribed.

4.2. Measures

Self-reported measures. After each task, participants completed a questionnaire. This questionnaire measured participants’ perception of the RA in terms of source credibility (i.e., 10 items on a 7-point semantic differential scale to measure perceived trustworthiness (Cronbach’s α = 0.93) and expertise (Cronbach’s α = 0.94) [20]), control (i.e., a 5-point SAM scale to measure perceived dominance [49]), satisfaction (i.e., 3 items on a 10-point semantic differential scale to measure perceived satisfaction (Cronbach’s α = 0.97) [50]), and future usage as an aid (i.e., 3 items on a 7-point Likert scale to measure the intention to adopt and use the RA as a decision aid (Cronbach’s α = 0.94) [41]). The measurement scales were all adapted to the context of this study. Furthermore, the questionnaire also asked participants to rate their perceived task decision quality from 1 to 10.
Decision quality measures. The decision quality of the participants was measured for the first scenario only, as JDA was able to provide a predetermined optimal assortment for each condition of that scenario but not for both scenarios. With the help of the scenario’s guidelines and all the information made available to them, participants were led, for each task, toward a predetermined optimal assortment of products. The assortment selected by each participant for each condition was compared to the predetermined optimal assortment, thus evaluating the participant’s decision accuracy (i.e., decision quality). A decision quality score was created for each task and each participant by awarding one point per selected product included in the predetermined optimal assortment. This score was then transformed into a percentage for each task and each participant.
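The scoring rule above can be sketched as follows. This is an illustrative Python snippet: the product identifiers are hypothetical, and the assumption that the percentage is taken over the size of the optimal assortment (which equals the number of products participants had to select) is ours.

```python
def decision_quality(selected, optimal):
    """Decision quality score as a percentage: one point per selected
    product that appears in the predetermined optimal assortment,
    divided by the assortment size (assumed denominator)."""
    optimal_set = set(optimal)
    matches = sum(1 for product in selected if product in optimal_set)
    return 100.0 * matches / len(optimal)
```

For instance, a participant who selects two of the three products in a (hypothetical) optimal assortment would score about 66.7%.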
Behavioral measures. The behavioral intentions of a user in adopting and using the recommendations of an AI-based RA throughout a decision-making process can be investigated with questionnaires [8,41,44]. While questionnaires are helpful in capturing self-reported measures, they do not capture the actual conscious and unconscious behavior of a user throughout a decision-making task [51]. With eye-tracking technology, for example, the eye movements of a user can be monitored, thus capturing the user’s actual behavior (i.e., visual attention) throughout a decision-making process [52,53]. The objective adoption and usage of an RA can be determined by knowing where and at what a user is looking at any given time (i.e., fixation) [52,53]. The duration of an ocular fixation (i.e., gaze duration [48]) can be associated with the cognitive effort a user needs to process the information consulted [54]. When a specific element is frequently consulted by a user (i.e., total fixation count [48]), it can be characterized as important to the user [47]. Hence, eye-tracking technology was used in this study to capture the total fixation count and gaze duration of participants on specific elements throughout each decision-making process.
The eye movements of the participants were monitored at a 60 Hz sampling rate with a Smart Eye Pro system (Gothenburg, Sweden). For each participant, a gaze calibration was performed using a 9-point calibration grid. This process was repeated until sufficient accuracy was obtained (±2 degrees). The eye-tracking data analysis was conducted with the MAPPS 2016.1 software. Predetermined areas of interest (AOIs) were created to collect the gaze duration and the total fixation count for each AOI. These AOIs were generated for each RA recommendation and for each modal window or new page examined by the participants. An ocular fixation was defined as a gaze lasting 200 milliseconds or longer [55].
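The two behavioral metrics (total fixation count and gaze duration per AOI) can be illustrated with a minimal sketch, assuming fixation records are available as (AOI, duration) pairs. This is an illustrative stand-in, not the MAPPS pipeline itself; the AOI names are hypothetical.

```python
from collections import defaultdict

FIXATION_THRESHOLD_MS = 200  # minimum duration to count as a fixation [55]

def aoi_metrics(fixations):
    """Aggregate gaze duration (ms) and total fixation count per AOI.

    `fixations` is a list of (aoi_name, duration_ms) tuples; gaze samples
    shorter than the 200 ms threshold are discarded before aggregation.
    """
    duration = defaultdict(float)
    count = defaultdict(int)
    for aoi, ms in fixations:
        if ms >= FIXATION_THRESHOLD_MS:
            duration[aoi] += ms
            count[aoi] += 1
    return dict(duration), dict(count)
```

A usage example: a record of 150 ms on an AOI is filtered out, while two records of 250 ms and 300 ms on the same AOI yield a fixation count of 2 and a gaze duration of 550 ms.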
Data analysis. All statistical analyses were performed with SAS 9.4 and considered the within-subject design of this study. The psychometric and eye-tracking data collected for each condition of the two similar scenarios were aggregated in the analysis.
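For readers without SAS, a random intercept model of this kind can be approximated with open tooling. The following is an illustrative sketch using statsmodels’ MixedLM on synthetic data; the variable names, data, and effect size are hypothetical and do not reproduce the study’s analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic within-subject data: 20 participants x 3 conditions (T1-T3),
# with an artificial positive effect of condition T2 on the rating.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(20), 3),
    "condition": np.tile(["T1", "T2", "T3"], 20),
})
df["rating"] = rng.normal(5.0, 1.0, len(df)) + (df["condition"] == "T2")

# Linear regression with a random intercept per participant, analogous
# to the within-subject models used to compare condition means.
model = smf.mixedlm("rating ~ C(condition)", df, groups=df["participant"])
result = model.fit()
print(result.params)
```

The fixed-effect coefficients (e.g., `C(condition)[T.T2]`) estimate condition differences relative to T1, while the random intercept absorbs participant-level baseline differences.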

5. Results

H1 suggested that a transparent RA together with low cognitive effort (T2) would have a greater positive impact on participants’ perception regarding source credibility (H1a), control (H1b), decision quality (H1c), and satisfaction (H1d) than other conditions (T1 and T3). A linear regression with a random intercept model was performed to test the difference between the means of participants’ perceptions for each combination of tasks (see Table 1). The results provided strong support for H1b (T2 is greater than T1 and T3, respectively 0.3500, p = 0.0185; 0.3000, p = 0.0365). However, as illustrated in Table 1, H1a, H1c, and H1d were only partially confirmed. The results showed that exposing the logical reasoning behind the RA (i.e., a transparent RA) had a significant effect on the assortment planners’ perception towards the RA regarding source credibility (H1a), decision quality (H1c), and satisfaction (H1d) (T2 greater than T1, respectively 0.7055, p ≤ 0.0001; 0.4706, p = 0.0474; 0.6068, p = 0.0027). Furthermore, the increased cognitive effort necessary to access and process the explanations of a transparent RA had no impact on the assortment planners’ perception towards the RA regarding source credibility (H1a), decision quality (H1c), and satisfaction (H1d) (i.e., the difference between transparent RA and low cognitive effort (T2) and transparent RA and high cognitive effort (T3) was not statistically significant).
H2 stipulated that a transparent RA together with low cognitive effort (T2) would have a greater positive impact on the assortment planners’ objective decision quality than other conditions (T1 and T3). A linear regression with a random intercept model was performed to test the difference between the means of participants’ objective decision quality for each combination of conditions. As illustrated in Table 2, the results provided strong support for H2 (T2 is greater than T1 and T3, respectively 1.0500, p = 0.0001 and 0.2330, p = 0.0001).
H3 suggested that, throughout their decision-making process, assortment planners would consult the recommendations of a transparent RA coupled with high cognitive effort (T3) more frequently than those of other conditions (T1 and T2). A Poisson regression with a random intercept model was performed to test the difference between the least square means of the total fixation count on the RA’s recommendations for each condition. In support of H3, the results showed that participants exposed to the transparent RA and high cognitive effort condition (T3) consulted the RA’s recommendations more frequently throughout their decision-making process than when exposed to the two other conditions (i.e., the non-transparent RA and low cognitive effort (T1) and transparent RA and low cognitive effort (T2) conditions) (T3 is greater than T1 and T2, respectively 1.1617, p = 0.0054; 0.7471, p = 0.0225).
H4 stipulated that assortment planners would consult the explanations of a transparent RA (T2 and T3) more frequently (H4a) and for a longer period of time (H4b) at the beginning of their decision-making process rather than at the end. A Wilcoxon signed rank test with a one-tailed level of significance was performed to compare the difference between the first 25% and the last 25% of the frequency (i.e., total fixation count) and the period of time (i.e., gaze duration) with which the explanations of a transparent RA were consulted for each condition (T2 and T3). The results showed that the explanations of a transparent RA coupled with low cognitive effort (T2) were consulted more frequently (H4a) and for a longer period of time (H4b) at the beginning of the participants’ decision-making process, rather than at the end (respectively, p = 0.0137 and p = 0.0171). However, the explanations of a transparent RA associated with high cognitive effort (T3) were consulted more frequently (H4a), but not for a longer period of time (H4b), at the beginning of the participants’ decision-making process, rather than at the end (respectively, p = 0.0279 and p = 0.0552). Furthermore, a similar test comparing the first 40% and the last 40% of the frequency (i.e., total fixation count) and the period of time (i.e., gaze duration) with which the explanations of a transparent RA were consulted for each condition (T2 and T3) confirmed these results. Hence, H4a was supported and H4b was partially supported.
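A first-versus-last-quartile comparison of this kind can be illustrated with SciPy’s one-tailed Wilcoxon signed-rank test. The data below are synthetic and hypothetical, chosen only to show the shape of the test, not to reproduce the study’s values.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)

# Hypothetical per-participant fixation counts on the RA's explanations
# during the first 25% vs. the last 25% of a task (n = 20 participants).
first_quarter = rng.poisson(12, 20)
last_quarter = rng.poisson(6, 20)

# One-tailed Wilcoxon signed-rank test: are explanations consulted
# more frequently at the beginning than at the end?
stat, p = wilcoxon(first_quarter, last_quarter, alternative="greater")
print(f"W = {stat}, p = {p:.4f}")
```

With paired within-subject counts like these, the signed-rank test avoids assuming normally distributed differences, which suits fixation-count data.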
H5 suggested that a transparent RA (T2 and T3), compared to a non-transparent RA (T1), would have a greater positive impact on assortment planners’ intention to adopt an RA as an aid throughout their decision-making process. To test this hypothesis, a linear regression with a random intercept model was performed, comparing the difference between the means of assortment planners’ intention to adopt an RA for each combination of tasks. As illustrated in Table 3, the results suggest that assortment planners’ intention to adopt an RA as an aid increases when the logical reasoning behind the RA is partially exposed (i.e., a transparent RA) (T2 > T1 and T3 > T1; respectively, 1.0705, p = 0.0001; 0.9681, p = 0.0002). Therefore, H5 was supported.
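The random intercept accounts for repeated measures on the same participants. As a hedged sketch on simulated data, within-participant demeaning below yields a fixed-effects approximation to the random-intercept fit; it is not the authors’ exact model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj = 20
subj = np.repeat(np.arange(n_subj), 3)         # 20 participants x 3 tasks
cond = np.tile(np.arange(3), n_subj)           # 0 = T1, 1 = T2, 2 = T3
subj_eff = rng.normal(0.0, 1.0, n_subj)[subj]  # per-participant intercept
# Simulated adoption-intention ratings: T2 raises intention by 0.6 vs. T1
y = 4.0 + 0.6 * (cond == 1) + subj_eff + rng.normal(0.0, 0.3, len(subj))

D = np.column_stack([(cond == 1).astype(float), (cond == 2).astype(float)])

def demean(a):
    """Subtract each participant's mean, removing the per-subject intercept."""
    means = np.array([a[subj == s].mean(axis=0) for s in range(n_subj)])
    return a - means[subj]

beta, *_ = np.linalg.lstsq(demean(D), demean(y), rcond=None)
print(beta)  # beta[0]: estimated T2-vs-T1 effect, near the true 0.6
```

Because every participant completes all three tasks, demeaning within participants removes the individual intercepts exactly, isolating the condition contrasts.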

6. Discussion

Our results revealed that a transparent RA coupled with low cognitive effort had a greater positive impact on the participants’ perceived sense of control than the other RAs (i.e., the non-transparent RA and low cognitive effort and transparent RA and high cognitive effort conditions) (H1b). In addition, the results showed that the participants’ perceived RA credibility (H1a), decision quality (H1c), and satisfaction (H1d) were positively affected by a transparent RA and were not impacted by the cognitive effort needed to access and understand its explanations (i.e., low versus high cognitive effort). In contrast to their perceived decision quality, the participants’ objective decision quality was significantly higher when they were exposed to a transparent RA demanding less cognitive effort (i.e., the transparent RA and low cognitive effort condition) (H2). Furthermore, the recommendations of a transparent RA demanding more cognitive effort (i.e., the transparent RA and high cognitive effort condition) were consulted more frequently than the recommendations of the other conditions (i.e., the non-transparent RA and low cognitive effort and transparent RA and low cognitive effort conditions) (H3). Moreover, the explanations of a transparent RA demanding less cognitive effort were consulted more frequently (H4a) and for a longer period of time (H4b) at the beginning of the participants’ decision-making process rather than at the end; the explanations of a transparent RA demanding more cognitive effort were also consulted more frequently at the beginning, but not for a longer period of time. Finally, when exposed to a transparent RA, the participants’ intention to adopt an RA throughout their decision-making process increased (H5).
These findings contribute to filling the literature gap on RAs in organizational contexts, thus advancing knowledge in the human–computer interaction (HCI) literature. For example, past research on RAs in the online context showed that perceived usefulness, achieved through transparency, is crucial for the adoption and continuous usage of an RA by consumers [13]. The results of this study showed that the participants’ intention to adopt an RA throughout their decision-making process was significantly higher when they were exposed to a transparent RA, thus aligning with findings from the online consumer context. In an organizational context, employees must adopt their employer’s information system [44]; however, their decisions also need to be justified to their superiors [12]. Hence, this paper validates how the perceptions and usage behaviors of assortment planners are influenced by the way the recommendations of an AI-based RA are presented.
The results of this study provide insights for RA developers and UX designers on how to best present AI-based recommendations to employees so that employees consider and adopt these recommendations. The recommendations of an AI-based RA are based on an overwhelming amount of data [33]. Consequently, based on our findings, partially exposing the logical reasoning behind the recommendations of an RA through easily accessible explanations seems to be key. Such findings can contribute to the creation of best practices in UX design. However, how to best present these explanations remains a challenge for UX designers: a condensed visual representation, rightfully balancing transparency and cognitive effort, must be created [56]. In addition, after the experiment, participants shared insights on the strategies they used while making assortment decisions and their understanding of the RA’s recommendations. These interviews brought forward the concept of a customized RA (9 of 20 participants), which could be considered by RA developers and UX designers to enhance employees’ RA adoption and continuous usage. Specifically, those participants expressed interest in being able to make ad hoc adjustments to the scores to incorporate their tacit knowledge of the industry. To some extent, the results call for keeping employees “in the loop” by allowing them to make ongoing adjustments to AI-based recommendations. Such collaboration could help create a feeling of cooperation with the RA, thereby preserving trust over time.
This paper has several limitations that must be acknowledged. The organizational context of this study was based on the retail clothing industry; to generalize the results, future research should examine different industries. Furthermore, the results were based on a sample of twenty professionals, which could be characterized as a small sample size even though it is typical for NeuroIS research [57]. Hence, future research should replicate this study with a larger sample to confirm our findings. Finally, the techniques proposed by Léger et al. [58] and Courtemanche et al. [59] could be used to understand the emotional and cognitive states of an employee at the time of fixation on the recommendations of an RA.

7. Conclusions

When the decision-making process of an employee is aided by an AI-based RA, the risk associated with information overload decreases for both the employee and their employer. However, employees need to be inclined to use AI-based RAs to perform their daily work. Results from this study suggest that designing RAs with more transparency for users brings perceptual and attitudinal benefits that influence both the adoption and continuous use of those systems by employees. Yet, more research is needed to fully understand how RAs are used by employees and how employees are affected by RAs in an organizational context.

Author Contributions

É.B. was involved in experimental design, data collection, analysis, and paper write-up. P.-M.L. and S.S. were involved in experimental design, analysis, paper write-up, and revision.

Funding

This research was funded by The Natural Sciences and Engineering Research Council of Canada and Prompt Innov.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mantrala, M.K. Why is assortment planning so difficult for retailers? A framework and research agenda. J. Retail. 2009, 85, 71–83.
2. Amine, A.; Cadenat, S. Efficient retailer assortment: A consumer choice evaluation perspective. Int. J. Retail. Distrib. Manag. 2003, 31, 486–497.
3. Handelsman, M.; Munson, J.M. On integrating consumer needs for variety with retailer assortment decisions. ACR North Am. Adv. 1985, 12, 108–112.
4. Brijs, T. Using association rules for product assortment decisions: A case study. In KDD; Citeseer: Diepenbeek, Belgium, 1999.
5. Lurie, N.H. Decision making in information-rich environments: The role of information structure. J. Consum. Res. 2004, 30, 473–486.
6. Andrews, W. Predicts 2018: Artificial Intelligence; Gartner: Stamford, CT, USA, 2017; p. 343423.
7. Senecal, S.; Nantel, J. The influence of online product recommendations on consumers’ online choices. J. Retail. 2004, 80, 159–169.
8. Wang, W.; Benbasat, I. Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs. J. Manag. Inf. Syst. 2007, 23, 217–246.
9. Gregor, S.; Benbasat, I. Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Q. 1999, 23, 497–530.
10. De Greef, H.; Neerincx, M.A. Cognitive support: Designing aiding to supplement human knowledge. Int. J. Hum. Comput. Stud. 1995, 42, 531–571.
11. Gregor, S. Explanations from knowledge-based systems and cooperative problem solving: An empirical study. Int. J. Hum. Comput. Stud. 2001, 54, 81–105.
12. Heitmann, M.; Lehmann, D.R.; Herrmann, A. Choice goal attainment and decision and consumption satisfaction. J. Mark. Res. 2007, 44, 234–250.
13. Zanker, M. The influence of knowledgeable explanations on users’ perception of a recommender system. In Proceedings of the Sixth ACM Conference on Recommender Systems, Dublin, Ireland, 9–13 September 2012.
14. Gedikli, F.; Jannach, D.; Ge, M. How should I explain? A comparison of different explanation types for recommender systems. Int. J. Hum. Comput. Stud. 2014, 72, 367–382.
15. Vig, J.; Sen, S.; Riedl, J. Tagsplanations: Explaining recommendations using tags. In Proceedings of the 14th International Conference on Intelligent User Interfaces, Sanibel Island, FL, USA, 8–11 February 2009; pp. 47–56.
16. Pu, P.; Chen, L. Trust building with explanation interfaces. In Proceedings of the 11th International Conference on Intelligent User Interfaces, Sydney, Australia, 29 January–1 February 2006; pp. 93–100.
17. Eppler, M.J.; Mengis, J. The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. Inf. Soc. 2004, 20, 325–344.
18. Bechwati, N.N.; Xia, L. Do computers sweat? The impact of perceived effort of online decision aids on consumers’ satisfaction with the decision process. J. Consum. Psychol. 2003, 13, 139–148.
19. Fogg, B.J. How do users evaluate the credibility of Web sites? A study with over 2,500 participants. In Proceedings of the 2003 Conference on Designing for User Experiences, San Francisco, CA, USA, 6–7 June 2003.
20. Ohanian, R. Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and attractiveness. J. Advert. 1990, 19, 39–52.
21. Hyan Yoo, K.; Gretzel, U. The influence of perceived credibility on preferences for recommender systems as sources of advice. Inf. Technol. Tour. 2008, 10, 133–146.
22. Yoo, K.-H.; Gretzel, U. Creating more credible and persuasive recommender systems: The influence of source characteristics on recommender system evaluations. In Recommender Systems Handbook; Springer: Boston, MA, USA, 2011; pp. 455–477.
23. Xiao, B.; Benbasat, I. E-commerce product recommendation agents: Use, characteristics, and impact. MIS Q. 2007, 31, 137–209.
24. Sinha, R.; Swearingen, K. The role of transparency in recommender systems. In Proceedings of the CHI ’02 Extended Abstracts on Human Factors in Computing Systems, Minneapolis, MN, USA, 20–25 April 2002.
25. Herlocker, J.L.; Konstan, J.A.; Riedl, J. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, Philadelphia, PA, USA, 2–6 December 2000; pp. 241–250.
26. Swearingen, K.; Sinha, R. Beyond algorithms: An HCI perspective on recommender systems. In ACM SIGIR 2001 Workshop on Recommender Systems; Citeseer: Berkeley, CA, USA, 2001; Volume 13, pp. 1–11.
27. Konstan, J.A.; Riedl, J. Recommender systems: From algorithms to user experience. User Model. User Adapt. Interact. 2012, 22, 101–123.
28. Payne, J.W. The Adaptive Decision Maker; Cambridge University Press: Cambridge, UK, 1993.
29. Beach, L.R. Broadening the definition of decision making: The role of prechoice screening of options. Psychol. Sci. 1993, 4, 215–220.
30. Johnson, E.J.; Payne, J.W. Effort and accuracy in choice. Manag. Sci. 1985, 31, 395–414.
31. Payne, J.W. Contingent decision behavior. Psychol. Bull. 1982, 92, 382.
32. Häubl, G.; Trifts, V. Consumer decision making in online shopping environments: The effects of interactive decision aids. Mark. Sci. 2000, 19, 4–21.
33. Dellaert, B.G.; Häubl, G. Searching in choice mode: Consumer decision processes in product search with recommendations. J. Mark. Res. 2012, 49, 277–288.
34. Huseynov, F.; Huseynov, S.Y.; Özkan, S. The influence of knowledge-based e-commerce product recommender agents on online consumer decision-making. Inf. Dev. 2016, 32, 81–90.
35. Pereira, R.E. Influence of query-based decision aids on consumer decision making in electronic commerce. Inf. Resour. Manag. J. 2001, 14, 31–48.
36. Hoch, S.J.; Schkade, D.A. A psychological approach to decision support systems. Manag. Sci. 1996, 42, 51–64.
37. Kim, D.J.; Ferrin, D.L.; Rao, H.R. Trust and satisfaction, two stepping stones for successful e-commerce relationships: A longitudinal exploration. Inf. Syst. Res. 2009, 20, 237–257.
38. Bharati, P.; Chaudhury, A. An empirical investigation of decision-making satisfaction in web-based decision support systems. Decis. Support Syst. 2004, 37, 187–197.
39. Delone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30.
40. Bailey, J.E.; Pearson, S.W. Development of a tool for measuring and analyzing computer user satisfaction. Manag. Sci. 1983, 29, 530–545.
41. Komiak, S.Y.; Benbasat, I. The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Q. 2006, 30, 941–960.
42. Hengstler, M.; Enkel, E.; Duelli, S. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 2016, 105, 105–120.
43. Swearingen, K.; Sinha, R. Interaction design for recommender systems. Des. Interact. Syst. 2002, 6, 312–334.
44. Karahanna, E.; Straub, D.W.; Chervany, N.L. Information technology adoption across time: A cross-sectional comparison of pre-adoption and post-adoption beliefs. MIS Q. 1999, 23, 183–213.
45. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003.
46. Aljukhadar, M.; Senecal, S.; Daoust, C.-E. Using recommendation agents to cope with information overload. Int. J. Electron. Commer. 2012, 17, 41–70.
47. Chen, S.; Epps, J. Automatic classification of eye activity for cognitive load measurement with emotion interference. Comput. Methods Progr. Biomed. 2013, 110, 111–124.
48. Lai, M.-L. A review of using eye-tracking technology in exploring learning from 2000 to 2012. Educ. Res. Rev. 2013, 10, 90–115.
49. Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59.
50. Sirdeshmukh, D.; Singh, J.; Sabol, B. Consumer trust, value, and loyalty in relational exchanges. J. Mark. 2002, 66, 15–37.
51. Léger, P.-M. Precision is in the eye of the beholder: Application of eye fixation-related potentials to information systems research. J. Assoc. Inf. Syst. 2014, 15, 3.
52. Vayre, J.-S. Effet distracteur des agents de recommandation et stratégies de navigation des consommateurs. Revue d’Interaction Homme-Machine 2006, 7, 1–29.
53. Wook Chae, S.; Chang Lee, K. Exploring the effect of the human brand on consumers’ decision quality in online shopping: An eye-tracking approach. Online Inf. Rev. 2013, 37, 83–100.
54. Just, M.A.; Carpenter, P.A. Eye fixations and cognitive processes. Cogn. Psychol. 1976, 8, 441–480.
55. Rayner, K. Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 1998, 124, 372.
56. Nilsson, N.J. Principles of Artificial Intelligence; Morgan Kaufmann: Burlington, MA, USA, 2014.
57. Riedl, R.; Léger, P.-M. Fundamentals of NeuroIS; Studies in Neuroscience, Psychology and Behavioral Economics; Springer: Berlin/Heidelberg, Germany, 2016.
58. Leger, P.-M.; Riedl, R.; vom Brocke, J. Emotions and ERP information sourcing: The moderating role of expertise. Ind. Manag. Data Syst. 2014, 114, 456–471.
59. Courtemanche, F. Physiological heatmaps: A tool for visualizing users’ emotional reactions. Multimed. Tools Appl. 2018, 77, 11547–11574.
Figure 1. Experimental protocol.
Figure 2. Twenty-four products displayed per task.
Figure 3. Modal window for each product of task 2 (T2).
Figure 4. New page for each product of task 3 (T3).
Table 1. Participants’ perception results.

Measure             Hypothesis                  Result    Estimate   p-Value
Source credibility  H1a (T2 > T1 and T2 > T3)   T2 > T1   0.7055     <0.0001
                                                T2 > T3   0.0916     0.2554
Control             H1b (T2 > T1 and T2 > T3)   T2 > T1   0.3500     0.0185
                                                T2 > T3   0.3000     0.0365
Decision quality    H1c (T2 > T1 and T2 > T3)   T2 > T1   0.4706     0.0474
                                                T2 > T3   0.2991     0.0678
Satisfaction        H1d (T2 > T1 and T2 > T3)   T2 > T1   0.6068     0.0027
                                                T2 > T3   0.0964     0.3052

Note: Linear regression with a random intercept model. One-tailed p-values adjusted for multiple comparisons by the method of Holm.
Table 2. Participants’ decision quality results.

Measure           Hypothesis                 Result    Estimate   p-Value
Decision quality  H2 (T2 > T1 and T2 > T3)   T2 > T1   1.0500     0.0001
                                             T2 > T3   0.2330     0.0001

Note: Linear regression with a random intercept model. One-tailed p-values adjusted for multiple comparisons by the method of Holm.
Table 3. Participants’ intention to adopt the recommendation agent (RA).

Measure                                      Hypothesis                 Result    Estimate   p-Value
Intention to adopt the RA as a decision aid  H5 (T2 > T1 and T3 > T1)   T2 > T1   1.0705     0.0001
                                                                        T3 > T1   0.9681     0.0002

Note: Linear regression with a random intercept model. One-tailed p-values adjusted for multiple comparisons by the method of Holm.
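The Holm adjustment mentioned in the table notes can be reproduced in a few lines. The sketch below implements the standard step-down procedure; the raw p-values are illustrative, not taken from the study:

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjustment: sort p-values ascending, multiply the i-th
    smallest by (m - i), cap at 1, and enforce monotonicity via a running max."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        running_max = max(running_max, min(1.0, (m - rank) * p[idx]))
        adjusted[idx] = running_max
    return adjusted

# Two planned one-tailed contrasts (illustrative raw p-values)
print(holm_adjust([0.00005, 0.0002]))  # -> [0.0001, 0.0002]
```

The same adjustment is available in standard statistics packages (e.g., statsmodels’ `multipletests(..., method="holm")`), which a production analysis would likely use instead.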

Share and Cite

MDPI and ACS Style

Bigras, É.; Léger, P.-M.; Sénécal, S. Recommendation Agent Adoption: How Recommendation Presentation Influences Employees’ Perceptions, Behaviors, and Decision Quality. Appl. Sci. 2019, 9, 4244. https://doi.org/10.3390/app9204244
