Article

Towards a Sustainable News Business: Understanding Readers’ Perceptions of Algorithm-Generated News Based on Cultural Conditioning

1
Department of Culture & Tourism Contents, Kyung Hee University, Seoul 02447, Korea
2
Department of Advertising & PR, Daegu Catholic University, Daegu 38430, Korea
*
Author to whom correspondence should be addressed.
Sustainability 2021, 13(7), 3728; https://0-doi-org.brum.beds.ac.uk/10.3390/su13073728
Submission received: 11 February 2021 / Revised: 17 March 2021 / Accepted: 22 March 2021 / Published: 26 March 2021
(This article belongs to the Special Issue Social and New Technology Challenges of Sustainable Business II)

Abstract: The use of algorithms is beginning to replace human activities in the news business, and the presence of this technique will only continue to grow. The ways in which public news readers perceive the quality of news articles written by algorithms, and how this perception differs based on cultural conditioning, remain issues of debate. Informed by the heuristic-systematic model (HSM) and similarity-attraction theory, we attempted to answer these questions by conducting a 2 (author: algorithm vs. human journalist) × 2 (media: traditional media vs. online media) × 2 (cultural background: the US vs. South Korea) between-subjects experiment (N = 360), analyzed with a series of analysis of variance (ANOVA) tests culminating in a three-way ANOVA. Our findings revealed that participants perceived the quality of news articles written by algorithms to be higher than those written by human journalists. We also found that when news consumption occurs online, algorithm-generated news tends to be rated higher than human-written news in terms of quality perception. Further, we identified a three-way interaction effect of media types, authors, and cultural backgrounds on the quality perception of news articles. As, to the best of our knowledge, this study is the first to theoretically examine how news readers perceive algorithm-generated news from a cultural point of view, our research findings may hold important theoretical and practical implications.

1. Introduction

The rapid development of digital technology is driving changes in various fields of society, and the field of media is no exception to this evolution. In particular, news organizations are making efforts to improve the efficiency of news generation by employing news-generating algorithms to simulate natural language. More broadly, the scope of artificial intelligence (AI) is constantly expanding, and the technology is being used in areas ranging from marketing to finance, from transportation to agriculture, from healthcare to security, and from chatbots to artificial creativity and production in manufacturing [1,2]. Clearly, AI has started to significantly impact general media and news agencies under the name of “robot journalism” (also known as computational journalism, algorithm journalism, or machine-written journalism) [2,3]. Robot journalism, which has been introduced to, or recognized by, the general public since 2010, refers to the automatic generation of news articles using AI-based algorithms with no involvement of human journalists [2]. The Associated Press (AP), Forbes, and the Los Angeles Times, to name popular examples, have already made use of such technology [4].
At present, journalism industries view robot journalism as an opportunity to enhance news quality, accuracy, and message credibility since it allows news content to be produced faster, in multiple languages, and possibly with fewer mistakes and bias [2,4]. Furthermore, this innovative type of journalism has the potential to create a sustainable business model for news organizations. The consumption of news on digital media tends to be highly selective [4,5], and these changes in the media environment have produced significant challenges for news organizations [5]. From a marketing perspective, scholars encourage news producers to use algorithms to cover simple local reports because they may reduce costs and broaden news audience [2,5]. Furthermore, news organizations are now experiencing a loss of authority in creating news values and facing audiences’ skepticism about the truth. Automation advocates assert that robot journalism may not only be the future of journalism, it may also be beneficial to news media’s audiences, since algorithms allow for an unbiased account of facts [5]. Thus, understanding the current status of AI applications in news business and the audiences who use automated news content is a crucial part of achieving sustainable news business.
Nevertheless, research on algorithm-based, automated, or AI news is still scarce [6]. In its initial stage, research on AI-generated articles or robot journalism in the news media industry mainly focused on the area of technology development and the issue of whether AI-generated articles could replace human-written articles [2,5]. As robot journalism has been further developed, the scope of research has expanded to include various subjects such as the ethics of algorithms, legal issues (e.g., article copyright), potential harm and limitations, and feedback from news readers [2,5,7]. However, there has been little research on public news readers’ perceptions of, or responses to AI-generated news articles (See Table 1).
In particular, little empirical research has been conducted on cross-cultural settings and their effects on different media. Thus, as one way of helping to bridge this gap, this study aims to examine how public news readers perceive and evaluate automated news from a cultural perspective. Specifically, we focus on how the public’s perception of the quality of two types of articles (i.e., algorithm-generated news vs. human-written news) depends on the news medium and some aspects of culture. Accordingly, this study aims to answer the following questions:
(1)
How do public news readers perceive the quality of algorithm-generated news articles?
(2)
How does the public’s evaluation of the quality of algorithm-generated news articles (and those written by human journalists) vary according to the media outlets that publish the articles?
(3)
How does this vary according to different cultural backgrounds?
This study consists of two parts. First, we conducted a one-way analysis of variance (ANOVA) to assess whether the public’s perception of the quality of news articles differs according to the information they receive about the authors of such articles and a two-way ANOVA (treatment group × medium) to investigate whether this perception varies according to the medium through which readers access the news. We then examined this from a cross-cultural perspective to investigate how news users in the US and South Korea perceive algorithm-generated news articles differently from those written by human journalists by performing a three-way ANOVA (treatment group × medium × cultural background). By doing so, the present study is expected to provide a better understanding of readers’ perceptions of algorithm-generated news content in a cross-cultural setting, which is becoming an essential part of today’s news media landscape.

2. Theoretical Foundation

The current paper starts by introducing and defining “robot journalism” before discussing the Heuristic-Systematic Model (HSM) and similarity-attraction theory to explain how and why these frameworks can be applied to the field of robot journalism. Although similarity-attraction theory is typically used to explain why people build relationships with others who are similar to themselves, this study uses its key concept to examine readers’ perceptions of automated news content.

2.1. Artificial Intelligence (AI) and Changes in Journalism

The term “robot journalism” refers to automated news production using AI algorithms that generate narratives that sound as though they could have been written by humans [2,5]. Robot journalism is also considered a type of computational journalism, which, according to Young and Hermida [6], refers to “forms of algorithmic, social scientific and mathematical processes and systems for the production of news”. The term “robot” in this case refers to computer programs containing AI-based algorithms that follow specific instructions, in a specific order, to replicate the end products of journalism. Algorithms have been actively used in news domains because this technology provides a significant opportunity for journalists and news organizations. A key strength of robot journalism is its ability to automate repetitive and simple tasks (e.g., rebroadcasting or replication, bridging between traditional news outlets and social media, and the curation or aggregation of content from multiple sources) through the rapid and accurate analysis of data [5]. Moreover, it has been suggested that algorithms can be used for diverse journalistic processes, such as fact-checking and fake news verification, creating questions, and producing interactive news [2,5,7]. By handling such routine tasks, robot journalism can free individual journalists to spend more time on in-depth analysis, commentary, and investigative reporting. Consequently, the rise of writing algorithms in newsrooms not only makes it possible to generate and offer a wide range of quality news faster and at a larger scale, it can also help news organizations to provide accurate and structured news content with fewer errors (provided the underlying data are not wrong, biased, or incomplete).
More importantly, the advent of robot journalism based on AI-generated news articles represents a new phase of media transformation, as it has started to reshape fundamental aspects of news business [7]. In fact, traditional news organizations are keen to increase the value of their media assets to compete against digital media organizations. As part of this self-improvement strategy, traditional news media have made efforts to improve the efficiency of news delivery by using computational algorithms. Admittedly, a representative example of this phenomenon is the use of AI in the automation of news production. This advanced form of journalism, given the apparent economic benefits of providing opportunities to save manual operations costs, can be considered a sustainable business model for news organizations.

2.2. Reader Perception of Algorithm-Generated News

Various controversies have arisen regarding users’ perceptions of news articles written by AI algorithms and by human journalists. For example, researchers have reported that news articles written by AI algorithms are perceived to be as good as, or better than, those written by human journalists in terms of message credibility, because the automated decision-making processes of AI-based algorithms are not subject to the monitoring and correction associated with the journalistic norm of transparency [5,12,13]. Previous studies have also found algorithm-generated news to be more descriptive, accurate, informative, and objective, but also less interesting, than human-written news [6,7,14]. However, some of these studies have asserted that human-written news articles are perceived to be more coherent, well-written, and clear [14]. In addition, a recent study has raised doubts over the reliability and accuracy of AI-generated news articles [15]. Previous research has, however, been limited in its investigation of both positive and negative perceptions and acceptance of algorithm-generated news. First, there is a general lack of public understanding of the application of robot news. That is, perceptions or views regarding algorithm-generated news have mainly been gathered from academia and/or the news industry (i.e., journalists), not directly from news consumers. Little research has reported how robot journalism is perceived from the perspective of actual news audiences [5,7,16].
The majority of previous studies that explored the perceived quality of algorithm-generated news articles yielded similar results. However, both the public and journalists failed to recognize the differences between news reports written by algorithms and by journalists [10]. Researchers could not fully account for why readers’ perceptions of algorithm-generated news vary [10]. Admittedly, attitudes towards technology and innovation, as well as the criteria used to perceive or appreciate information content, may be affected by many factors. We assume that the stereotypical perceptions of, and differing attitudes toward, algorithms held by the public (i.e., the general news audience) and journalists might yield differing perceptions or evaluations of news written by algorithms and by human journalists. Thus, factors affecting readers’ perceptions and evaluations of news written by algorithms and human journalists need to be considered from a more complex perspective and, more importantly, from a reader’s perspective. Therefore, focusing on the public’s perception of news reports written by algorithms, the current study aims to determine whether public news readers’ evaluations of the quality of articles vary according to the manipulation of the author (i.e., algorithm-generated news vs. human-written news).

2.3. The Influence of Readers’ Attitudes towards Human Journalists

The overall image of journalists in many countries is quite masculine [7,10,17]. Researchers have found that journalists are perceived as cynical, aggressive, cocky, self-reliant, tough, and unsympathetic [13,14]. Similarly, according to Bridger [18], the portrayals of photojournalists in 59 films were also framed negatively. In line with these findings, the public holds negative stereotypes of journalists in South Korea and finds their credibility to be low [10]. A special report published by the South Korea Press Foundation indicated that, between 2006 and 2014, journalists’ credibility decreased from 3.00 to 2.68 on a 5.00-point scale, and the score for their ethical transparency also decreased from 2.95 to 2.77 [19]. The Korean public’s criticism of the news media has been growing since the Sewol ferry disaster in 2014, in which a ferry carrying 476 passengers sank off the coast of South Korea, resulting in 304 deaths, 250 of which were local high school students. The tragic accident was broadcast nationwide and shook the country. Intense criticism was directed at the South Korean government and media for the disaster response and for attempts to downplay the government’s culpability. In particular, the country’s mainstream broadcast media uncritically relayed the government’s claims that it had done everything in its power to rescue passengers, without checking the facts or questioning the government. An increasing number of Korean people call reporters “giregi”, a hybrid word that combines the Korean words for “reporter” and “trash”. Another statistical study on news consumption, conducted by the Reuters Institute in 2016, revealed that South Koreans’ trust in news organizations and journalists was only 17%, ranking 25th out of the 26 countries that participated in the survey [20]. These data imply that journalists in South Korea fall short of the public’s ethical standards [10].
It may thus be expected that the general public is less favorable towards human journalists’ work and more positive towards algorithms’ work. We expect that this assumption applies to both groups (i.e., US and Korean readers) and that it may be especially noticeable among South Korean readers. Thus, we hypothesize that readers’ perceptions of the quality of a news article will be higher when its author is stated to be an algorithm and lower when its author is stated to be a human journalist.
Hypothesis 1 (H1):
Public news readers perceive the quality of reports differently when they are told that articles have been written by journalists or generated by algorithms. That is, regardless of their nationality, news readers perceive the quality of algorithm-generated news to be higher than that of human-written news.

2.4. Media Outlets and the Perception of Algorithm-Generated News

News readers select certain news when they are convinced of its credibility [2,7]. In fact, from an information reliability perspective, a long tradition of research on news messages has mainly focused on the factors that influence the evaluation of message credibility [21,22,23]. Although various factors can influence the evaluation of message credibility in news articles, information processing theories such as the Heuristic-Systematic Model (HSM) reveal that humans evaluate messages through a combination of systematic processing, which involves the effortful evaluation of a message, and heuristic, cue-based processing, which renders judgments based on cognitive rules of thumb that help process information with relatively little effort [24]. The HSM is a model of information processing that originated from persuasion research in social psychology [25]. Studies in persuasion research have examined how received messages can change people’s attitudes. The HSM suggests that when people are being persuaded, they first establish the validity of the given message using a mix of efficient heuristic and systematic processing, with the precise balance determined by multiple factors such as the perceived importance of the decision outcome or risk, time pressure, and skill level.
Basically, people tend to be cognitive misers [26], relying more frequently on heuristic-based processing to make decisions, particularly when their motivation or ability to process information is relatively low [24,27,28]. In line with the notion of the “cognitive miser,” the HSM recognizes that people do not necessarily attempt to generate validity assessments with the highest possible accuracy or reliability [28,29]. In other words, people tend to limit their cognitive resources and investment of time if they lack motivation or capability [25,29]. Such situations are quite common in the online news environment. Thus, Choi [30] argues that readers do not put significant cognitive effort into news selection in an online news environment. In this case, the HSM assumes that news readers tend to rely on heuristics to determine the reliability of information. Moreover, in addition to mental shortcuts triggered by features of a message such as length or source attractiveness [27], the affordances of digital media that operate on the periphery of media can also activate heuristics that influence news readers’ evaluations of message credibility [31,32].
Meanwhile, news readers may prefer human journalists over AI algorithms due to the similarity-attraction effect [33]. Many studies in social psychology have revealed that individuals tend to prefer others who are similar to themselves based on factors such as appearance, opinion, or personality [31,33]. This principle of human-human communication through similarity-attraction is often generalizable to nonhuman actors [31]. In other words, the principle of similarity-attraction may cause news readers to hold negative perceptions of machine automation because their preference for communicators with human attributes (e.g., a humanlike appearance) can trigger the similarity-attraction effect [31,34,35]. Human journalists (relative to algorithms) can thus be perceived as more anthropomorphic, which is also expected to result in more favorable perceptions of the credibility of their messages. If this is the case, audiences’ preferences for human sources over algorithms may also be found among subscribers to traditional printed newspapers, who tend to be conservative and to conform to group norms [30]. It has also been confirmed that the credibility of human sources that readers have built up by subscribing to printed newspapers in the past has the potential to lead them to prefer human-written news via the indirect pathway of source anthropomorphism [31].
However, with the advent of software-generated content, many scholars report that journalistic content written by an algorithm is hardly discernible from human journalists’ work or is even more preferred [5,6,7,12,14]. Although scholars suggest that audiences respond favorably to automated content, a review of relevant literature shows that perceptions of algorithm-generated news are varied. Based on these insights, in the condition of online news consumption, we may expect that the way in which audiences’ perceptions of algorithm-generated news varies can be explained in part by media heuristics and differences in perceptions of source anthropomorphism, which would be expected to be more favorable for human-written news relative to algorithm-generated news [31]. Specifically, it is theorized that news readers’ favorable perceptions will be the greatest when they read news articles written by algorithm authors in the context of online news consumption, given its positive influence on the principle of similarity-attraction to digital news media (relative to traditional printed newspapers). To that end, the present study not only examines the effect of human and algorithm cues, but also the influence of the effect of media type: when news consumption occurs online. Thus, the following hypothesis is proposed:
Hypothesis 2 (H2):
When public news readers are told that reports have been written by journalists or generated by algorithms, they perceive the quality of news differently depending on the type of media. Specifically, public news readers will perceive the quality of algorithm-generated news to be higher than that of human-written news in online media than in traditional media.
In this study, we also examine how cultural diversity in groups affects news readers’ perception of the quality of news articles generated by algorithms. While there are hundreds of possible definitions of “culture”, Triandis [36] defines culture as consisting of “shared elements that provide the standards for perceiving, believing, evaluating, communicating, and acting among those who share a language, a historic period, and a geographic location”. Acknowledging the argument that “culture is communication and communication is culture” [37], it has been found that culture has a significant influence on how people perceive and respond to the news messages they encounter [7]. According to Hall [37], in a high-context culture (e.g., Korea), individuals prefer indirect communication and implicit expressions relying heavily on symbols and metaphors, while in a low-context culture (e.g., the US), individuals prefer direct communication and explicit expressions to convey meaning. Some scholars claim that Hall’s cultural framework may not provide deep insight into societies’ cultural and historical roots to explain cultural differences [7]. However, Hall’s work has provided an important methodological framework for researchers to explore how news readers interpret news information across cultures, and his model has demonstrated decent external validity and generalizability in studying cross-cultural differences.
Clearly, psychological thinking styles differ across cultures in the same manner as other cultural practices. According to Hall’s model, people in East Asia are characterized by having a higher degree of holistic thinking, which is defined as an orientation to the interrelationship between an object and its given context [7,37,38]. Meanwhile, Westerners are more likely to be characterized by an analytic thinking style and an orientation to the detachment of an object from its context [7,38]. In particular, the holistic/analytic thinking framework provides a validated theoretical framework to compare cultural differences in how news readers perceive the quality of algorithm-generated news articles in comparison with those written by human journalists. For example, based on Hall’s framework, a recent comparative study by Zheng et al. [7] reported cultural differences in US and Chinese news readers’ assessments of algorithm-generated news content: US readers preferred news reports written by human reporters over those generated by algorithms.
Therefore, employing a cross-cultural approach, this study explores how news readers in the US and South Korea perceive algorithm-generated news articles differently from those written by human journalists. In doing so, based on the framework of holistic/analytic thinking [39], the current study investigates whether cultural factors may play important roles in the processing of news quality perception across the media outlets (i.e., traditional or online media). Hence, the following research question is formulated:
RQ1
Do US and South Korean news readers perceive the quality of news articles differently when they are told that news articles have been written by journalists or generated by algorithms, depending on the type of media? Specifically, do South Korean news readers perceive the quality of algorithm-generated news to be higher than that of human-written news in online news media?

3. Methods

This study employed an experimental 2 (author: algorithm vs. journalist) × 2 (medium: traditional media vs. online media) × 2 (cultural background: the US vs. Korean readers) between-subjects design to investigate users’ perceptions of automated news. In particular, we examined possible cultural differences between US and Korean news readers when they evaluated the quality of news articles produced by different authors (i.e., algorithms or journalists). The SPSS 22 software package was used for data analysis.

3.1. Participants

We compared readers’ perceptions of the quality of algorithm-generated news in our US and South Korea samples because it is generally considered that people in these two countries represent distinctive Western and East Asian cultural values, respectively [7]. A total of 360 participants were recruited in the US and South Korea for this study from July 2020 to January 2021. The US participants (n = 179) were recruited through the Amazon Mechanical Turk (MTurk) system (Mage = 36, SD = 12.01). Among the US participants who disclosed their gender, more than half were male (n = 93, 52%) and the rest female (n = 86, 48%). The Korean panel sample (n = 181) was recruited by a major research firm (Macromill Embrain) in Seoul, Korea (Mage = 31, SD = 11.21); the majority of these participants were male (n = 91, 50.3%), with the rest female (n = 90, 49.7%). Participants were also asked to report their political ideology (1 = strong liberal, 5 = strong conservative) and party affiliation (1 = strongly democratic, 5 = strongly republican) for the purpose of compiling descriptive statistics about the sample. The US participants primarily identified as liberal (M = 2.87, SD = 1.21) and democratic (M = 2.80, SD = 1.16). South Korean participants also identified as liberal (M = 2.65, SD = 1.24) and democratic (M = 2.60, SD = 1.11).

3.2. The Stimuli and Procedure

Two news articles—one in the field of economics (titled “The Best Walmart Prime Day Deals 2020”) and one in the pro sports category (titled “Tom Brady and the Patriots Have to Part Ways Someday”)—were used as the stimuli; both were adapted from The New York Times. The news articles were edited and shortened so that they were of similar length, ranging from 110 to 140 words. We chose topics that were neither exceedingly exciting nor too boring for the participants, and we ensured that the content did not cover a specific religion, race, or culture and that the news reports were written for the general public. In addition, the articles were chosen to be old enough that none of the participants was likely to have read them before, thereby avoiding any pre-test effects [7].
Prior to data collection, the news articles were pre-tested to check whether the quality of the reports was perceived differently. In a pre-test with 40 people, the quality perception (clear, coherent, concise, and well-written) of the news articles was measured on a 5-point Likert scale developed by Sundar and Nass [40], and the results showed that the difference in the perceived quality of the two articles was not significant. To manipulate the media outlets, each of the two news articles was presented as published by one of two media outlets (i.e., online or traditional media). In this study, The New York Times was chosen as a representative of traditional media in the US, and HuffPost represented a popular online media outlet. We thus created four stimulus articles from the two news topics and the two media outlets. A total of eight conditions was then created by adding the byline: the author name “Adrian Brooks” (a unisex name) indicated that the article had been written by a human journalist, while “Algorithm Insights” indicated that the article had been generated by an algorithm. Each US participant was randomly assigned to one of these conditions.
The English news articles were then translated into Korean for the Korean participants by a bilingual speaker, and the final version was reviewed by a professional translator to ensure the accuracy of the translation, during which the cultural nuances were also checked to minimize the risk of distortion or misinterpretation. To achieve adequate external validity, the two media outlets were changed to outlets with which Korean news readers were familiar. The New York Times was retained to represent a traditional media outlet, as it publishes its Korean version online. However, the online media outlet, HuffPost, was changed to Naver News because, as Korea’s number one online portal, Naver uses its own algorithms to curate news articles, and many Koreans read almost every important news article on the Naver app or its website. As a result, for the Korean group, each of the two stimulus news articles had two versions, with either The New York Times Korea or Naver News as the media outlet. Therefore, a total of four experimental conditions was created from the two news articles and two news outlets. Then, as in the US group, each Korean participant was randomly assigned to read one of the eight conditions formed by crossing these with the two types of author (i.e., algorithm or human journalist). Additionally, Korean participants were informed that “Algorithm Insights” was a US media company that uses algorithms to generate news reports. Each participant was assigned to read two news articles and then asked to evaluate them. At the end, all the participants were asked to answer a few demographic questions concerning their age, gender, income, and education level.

3.3. Independent and Dependent Variables

Media. As mentioned above, the media outlets that published the news articles were manipulated as either a traditional or an online medium. For the US participants, The New York Times and HuffPost were selected to represent traditional and online media outlets, respectively. For the Korean participants, The New York Times Korea and Naver News were chosen to represent each type of media outlet.
Authors. The news writers were manipulated as either a human journalist named Adrian Brooks or algorithms called Algorithm Insights. The manipulation procedure was the same for both the US and the Korean participants.
Cultural background. Cultural background was measured by the participants’ nationality (i.e., US or Korean). According to studies on cultural differences, the East emphasizes relationships in a relational and social context, while the West values self-realization, personal perspectives, and an independent ego [36].
Perceived content quality. The quality of the news articles was measured with four items developed by Sundar and Nass [41]. Participants rated on a 7-point Likert scale (1 = describes very poorly, 7 = describes very well) how well four adjectives (clear, coherent, concise, and well-written) described the article they had just read (α = 0.97).
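The internal consistency reported above (α = 0.97) is Cronbach’s alpha over the four quality items. As an illustrative sketch only, the following Python snippet computes alpha for a synthetic matrix of 7-point ratings; the sample size, data-generating values, and seed are assumptions for demonstration, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 7-point ratings for four adjectives (clear, coherent,
# concise, well-written); illustrative only, not the study's raw data.
rng = np.random.default_rng(0)
base = rng.integers(2, 7, size=(40, 1))          # shared "true" rating per respondent
ratings = np.clip(base + rng.integers(-1, 2, size=(40, 4)), 1, 7).astype(float)
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))  # high alpha, since all items track the shared rating
```

Because every item here is the shared rating plus small noise, the items are strongly intercorrelated and alpha comes out high, mirroring the near-unity reliability reported for the four-adjective scale.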

4. Results

Before the main analyses, a series of ANOVA tests was conducted to examine whether the participants’ demographic variables, such as age, gender, income, and education level, had any impact on the findings. The results indicated that none of these demographics significantly affected the participants’ perceptions of the quality of the news articles. A manipulation check was also conducted prior to the main study.

4.1. Manipulation Check

Apart from the main survey, participants completed a manipulation check designed to assess whether they identified the author of the news stories as a human journalist or an algorithm, or whether they were unsure about the type of author. Using a separate sample (N = 245), a Chi-square test showed that 93% of participants assigned to the human journalist condition identified the author as a human reporter, while 94% of participants in the algorithm-generated news condition recognized the author as a writing algorithm (χ2 (2) = 58.71, p < 0.05). In other words, upon seeing “Adrian Brooks” in the byline of a news article, most participants correctly recognized that the article had been written by a human journalist, and they identified the author as an algorithm when they saw “Algorithm Insights”.
Another Chi-square test with a newly selected sample (N = 240) evaluated the effectiveness of the media outlet manipulation. Most participants in the HuffPost/Naver News condition correctly identified the media outlet as online media (94%), as did those in The New York Times/The New York Times Korea condition with traditional media (95%), χ2 (2) = 61.52, p < 0.05. Therefore, the manipulation of both author type and media outlet was successful.
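Both manipulation checks are Pearson chi-square tests on a contingency table of assigned condition by identified author (or outlet). A minimal sketch, using hypothetical counts chosen only to mirror the reported accuracy percentages, not the study’s raw data:

```python
import numpy as np

def chi2_stat(observed: np.ndarray) -> tuple[float, int]:
    """Pearson chi-square statistic and degrees of freedom for a table."""
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row @ col / observed.sum()   # counts expected under independence
    stat = ((observed - expected) ** 2 / expected).sum()
    df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    return float(stat), df

# Rows: assigned condition (human journalist, algorithm);
# columns: identified author (human, algorithm, unsure).
# Counts are hypothetical, picked to mirror the ~93%/94% accuracy figures.
table = np.array([[114.0, 5.0, 4.0],
                  [4.0, 115.0, 3.0]])
stat, df = chi2_stat(table)
print(df, stat > 5.99)  # 5.99 is roughly the df = 2, alpha = .05 critical value
```

A 2 × 3 table gives (2 − 1)(3 − 1) = 2 degrees of freedom, matching the reported χ2 (2) tests; with identification this accurate, the statistic far exceeds the critical value.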

4.2. The Analyses and Results

As suggested by previous studies in this field, we used an experimental design because this approach allows for the manipulation of variables and ensures internal validity [7,10]. We employed a 2 (author: algorithm vs. human journalist) × 2 (media: traditional media vs. online media) × 2 (cultural background: the US vs. South Korea) between-subjects design. To test H1 and H2, we conducted a series of ANOVA tests, which revealed a significant main effect of author type (i.e., journalist or algorithm) (F (1, 352) = 15.826, p < 0.05): both the US and Korean participants perceived the quality of news articles generated by algorithms (M = 4.13, SD = 0.95) to be higher than that of articles written by human journalists (M = 3.60, SD = 0.73). Therefore, H1 was fully supported. H2 predicted that news readers’ quality perceptions would differ according to the interaction of author and media. As predicted, the analysis revealed a significant interaction effect (F (1, 352) = 62.826, p < 0.05). Planned contrasts examining this interaction showed that when participants read the news in traditional media, they perceived the quality of articles written by human journalists (M = 3.96, SD = 0.57) to be higher than that of articles generated by algorithms (M = 3.53, SD = 0.87; F (1, 178) = 14.992, p < 0.05). However, when reading through online media, they perceived the quality of algorithm-generated news (M = 4.75, SD = 0.57) to be better than that of human journalists’ work (M = 3.26, SD = 0.68; F (1, 178) = 78.193, p < 0.05). This result indicates that the perceived quality of news articles written by journalists or algorithms varies depending on the medium of news dissemination (see Figure 1).
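The author × media crossover reported here can be reproduced in miniature. The sketch below simulates a balanced 2 × 2 dataset around the cell means reported above (3.96/3.53 in traditional media, 3.26/4.75 online) and computes two-way ANOVA F ratios by hand; the cell size, noise level, and seed are assumptions, so the resulting F values only approximate those in the paper.

```python
import numpy as np

def two_way_anova(y, a, b):
    """F ratios for a balanced 2 x 2 between-subjects design (factors coded 0/1)."""
    grand = y.mean()
    ma = np.array([y[a == i].mean() for i in (0, 1)])   # author marginal means
    mb = np.array([y[b == j].mean() for j in (0, 1)])   # media marginal means
    cell = np.array([[y[(a == i) & (b == j)].mean() for j in (0, 1)]
                     for i in (0, 1)])
    n_cell = len(y) // 4
    ss = {'author': 2 * n_cell * ((ma - grand) ** 2).sum(),
          'media': 2 * n_cell * ((mb - grand) ** 2).sum(),
          'interaction': n_cell * ((cell - ma[:, None] - mb[None, :]
                                    + grand) ** 2).sum()}
    sse = sum(((y[(a == i) & (b == j)] - cell[i, j]) ** 2).sum()
              for i in (0, 1) for j in (0, 1))
    mse = sse / (len(y) - 4)             # error df = N - number of cells
    return {k: v / mse for k, v in ss.items()}  # each 2-level effect has df = 1

rng = np.random.default_rng(1)
n = 45                                   # assumed participants per cell
a = np.repeat([0, 0, 1, 1], n)           # author: 0 = journalist, 1 = algorithm
b = np.tile(np.repeat([0, 1], n), 2)     # media: 0 = traditional, 1 = online
means = {(0, 0): 3.96, (0, 1): 3.26,     # journalist cell means (reported)
         (1, 0): 3.53, (1, 1): 4.75}     # algorithm cell means (reported)
y = np.array([means[(ai, bi)] for ai, bi in zip(a, b)]) + rng.normal(0, 0.7, 4 * n)
F = two_way_anova(y, a, b)
print(F['interaction'] > 3.9)  # the crossover yields a large interaction F
```

Because the journalist advantage in traditional media reverses into an algorithm advantage online, the interaction sum of squares is large relative to the error term, which is exactly the pattern behind the significant F (1, 352) interaction reported above.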
Next, to answer RQ1, this study examined whether US and South Korean news readers evaluate the quality of news articles differently according to both author and media outlet. As presented in Figure 2, a planned contrast revealed a significant three-way interaction (F (1, 352) = 77.514, p < 0.05). Specifically, when reading news articles in online media, the US participants perceived algorithm-generated news articles (M = 4.37, SD = 0.42) to be of higher quality than those written by journalists (M = 3.46, SD = 0.73; F (1, 87) = 18.90, p < 0.05). Meanwhile, when reading news articles in traditional media, the difference in quality perception between articles written by journalists (M = 3.63, SD = 0.59) and those generated by algorithms (M = 3.80, SD = 0.52) was not significant (F (1, 88) = 1.96, p > 0.05).
Like the US news readers, Korean participants perceived the quality of news articles generated by algorithms (M = 5.12, SD = 0.44) to be higher than that of articles written by journalists (M = 3.07, SD = 0.57) when the reports were published in online media (F (1, 89) = 57.32, p < 0.05). However, when reading news reports from traditional media outlets, the reverse trend was found: as Figure 2 shows, Korean participants perceived the quality of news articles written by journalists (M = 4.29, SD = 0.43) to be higher than that of articles generated by algorithms (M = 3.25, SD = 1.05; F (1, 89) = 37.13, p < 0.05).

5. Conclusions and Discussion

While there has been growing interest in algorithm-generated news reports from both academia and industry, how readers perceive and evaluate the quality of automated news remains a matter of debate and investigation. Furthermore, the ways in which people adapt and adjust to journalistic automation are a critical issue for a sustainable news business [1,5]. As discussed, several recent studies have examined readers’ responses to automated news content [6]. However, few studies have examined the public’s perceptions of algorithm-generated news, and the debate has not been settled. Moreover, little attention has been paid to cultural differences in readers’ perceptions of algorithm-generated news and to the impact of such differences in cross-cultural environments. Against this backdrop, the goal of the present study was to show how public news readers’ perceptual responses to reports driven by AI algorithms vary according to conditioning factors such as the news media outlet and cultural background. Thus, this study investigated readers’ perceptions of journalistic content produced by algorithms in terms of the influence of media type (i.e., traditional vs. online media) and examined how their perceptions of algorithm-generated news differed cross-culturally.
Overall, in line with our expectations, the results show that public news readers in the US and South Korea perceive algorithm-generated news reports differently from those written by human journalists. Readers from both countries perceived the quality of news articles written by algorithms to be higher than that of articles written by human journalists. In other words, no cultural difference was found between the two groups of news readers when they were not primed with the type of media they were assessing.
Interestingly, however, news readers who were primed to think that they were reading algorithm-generated news articles online perceived the quality of the algorithm’s work to be higher than that of human-written news. In contrast, when news readers were told that the articles they read were published in traditional media, they tended to rate a human journalist’s work higher than algorithm-generated articles in terms of quality. This may be explained by the ways in which the affordances of online media activate news readers’ heuristics, which influence their evaluations of message credibility [31,32]. It is also plausible that this result reflects a similarity-attraction effect: news readers’ preference for communicators with human attributes could trigger similarity attraction [31,34,35], so that in traditional media the quality of human journalists’ work (relative to algorithms’ work) was perceived more favorably.
Next, answering the research question posed in this study, we found a three-way interaction effect of media type (i.e., traditional vs. online media), author (i.e., algorithm vs. journalist), and cultural background (i.e., US vs. Korean readers) on the perceived quality of news articles. Specifically, US news readers rated algorithm-generated articles higher in the online condition but showed no significant difference between authors when the outlet was framed as a traditional medium. Korean news consumers, in contrast, showed a clear reversal across media types. Like the US public, Korean readers perceived algorithm-generated articles to be of higher quality than human-written ones when reading online. However, when reading articles in traditional media, Korean readers perceived news articles written by journalists to be of higher quality than those produced by algorithms, a pattern that differed from that of US news readers. One possible explanation is a cultural difference in thinking styles. We argue that differences in thinking styles can shape how people interpret and respond to news messages. To this extent, a holistic orientation (as compared with an analytic orientation) may help explain the quality perceptions of Korean news readers. As mentioned earlier, people in East Asia, including Korea, China, and Japan, are characterized by holistic thinking, while Westerners are characterized by analytic thinking [39,42].
In terms of the application of cultural differences, our results imply that the Korean participants’ holistic thinking was activated when they read news articles from a traditional media outlet (i.e., The New York Times Korea). As a result, holistic processing of the fit between author and media type led to a favorable response to the congruence between a human journalist and traditional media, and likewise between a news bot (i.e., an algorithm) and online news media.
The current study makes several theoretical contributions. First, its results contribute to a better understanding of the public’s perception of algorithm-generated content, which has become an essential part of the expanding world of digital media. Importantly, this study provided an empirical exploration of robot journalism in a distinctive cross-cultural setting: the US vs. South Korea. Although previous work in this field (e.g., Zheng et al. [7]) has addressed cultural differences between Western and East Asian countries through experimental studies, no theoretical explanation has been offered for the influence of news media and cultural factors on readers’ perceptions of algorithm-generated news. In the current experimental study, we provided theoretically guided insight into the underlying mechanisms of readers’ perceptions of robot journalism. By employing a cross-cultural approach, this study suggests that robot journalism can be perceived as credible. It is also noteworthy that individual properties such as cultural differences in thinking styles are key factors shaping news readers’ perceptions of message quality, which can lead to a more positive evaluation of robot journalism. For academia, these findings also suggest that future research should broaden its view to consider moderating factors contributing to news readers’ acceptance of and responses to news content generated by algorithms.
Further, the findings of this study provide insights into the diffusion of robot journalism and a new direction for a sustainable news business. Previous studies have mainly focused on users’ acceptance of technology-related services and reported that such acceptance relies heavily on technological factors such as ease of use, usefulness, and self-efficacy [2,4,5,12]. In the case of robot journalism, however, we found that media and cultural factors have a profound influence on readers’ evaluations of news reports written by AI algorithms and/or human journalists. These findings could be useful to news organizations considering the use of algorithms in news production. For example, news organizations with global networks may benefit relatively more from using robot journalism to operate online news platforms, especially in East Asian countries.

6. Limitations and Suggestions

Although the current study provides valuable insights into readers’ perceptions of algorithm-generated news content, it has several limitations. First and most importantly, the study’s external validity could have been stronger. For example, the study was conducted online, which may differ from the actual environments in which people access and consume the news. Thus, future research could enhance external validity by using real-world environments or by crafting stimuli that match the style and appearance of real media outlets, yielding more convincing data. Further, we used The New York Times as a stimulus, which may be a suitable fit for US participants but less relevant for South Korean participants; this could also weaken the study’s external validity. Furthermore, because different news articles were used for the two groups of participants, external validity is threatened: readers may question whether the detected differences were due to the stimulus content or topics. Future research should therefore use news articles from the same domain and ensure that participants perceive them as neither too exciting nor too insipid.
Second, although Amazon MTurk is a practical tool for accessing a sample of the US population, the survey data obtained from MTurk do not represent a true random sample. Accordingly, the results should be interpreted with caution, as they cannot be generalized to all US news readers. Third, it is necessary to test other potential moderating variables that may impact readers’ perceptions of robot journalism. For example, additional studies may examine individuals’ level of new media literacy, efficacy, or information and communication technology (ICT) knowledge as moderators of news readers’ perceptions of algorithm-driven reports. Despite these limitations, we believe that this study responds to the urgent need to establish a theoretical framework to explain public perceptions of and responses to robot journalism. We hope that its findings can provide meaningful foundational data for a future sustainable news business.

Author Contributions

Y.K. outlined the research ideas, collected and analyzed the data, and wrote the article; H.L. outlined the research ideas and wrote the article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by research grants from Daegu Catholic University in 2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yigitcanlar, T.; Cugurullo, F. The sustainability of artificial intelligence: An urbanistic viewpoint from the lens of smart and sustainable cities. Sustainability 2020, 12, 8548. [Google Scholar] [CrossRef]
  2. Kim, S.; Kim, B. A decision-making model for adopting AI-generated news articles: Preliminary results. Sustainability 2020, 12, 7418. [Google Scholar] [CrossRef]
  3. Carlson, M. News algorithms, photojournalism and the assumption of mechanical objectivity in journalism. Dig. Journal. 2019, 7, 1117–1133. [Google Scholar] [CrossRef]
  4. Graefe, A. Guide to Automated Journalism. 2016. Available online: http://towcenter.org/research/guide-to-automated-journalism/ (accessed on 24 December 2020).
  5. Hong, H.; Oh, H.J. Utilizing bots for sustainable news business: Understanding users’ perspectives of news bots in the age of social media. Sustainability 2020, 12, 6515. [Google Scholar] [CrossRef]
  6. Young, M.L.; Hermida, A. From Mr. and Mrs. outlier to central tendencies: Computational journalism and crime reporting at the Los Angeles Times. Dig. Journal. 2015, 3, 381–397. [Google Scholar] [CrossRef]
  7. Zheng, Y.; Zhong, B.; Yang, F. When algorithms meet journalism: The user perception to automated news in a cross-cultural context. Comput. Hum. Behav. 2018, 86, 266–275. [Google Scholar] [CrossRef]
  8. Kim, D.; Kim, S. Newspaper companies’ determinants in adopting robot journalism. Technol. Forecast. Soc. Chang. 2017, 117, 184–195. [Google Scholar] [CrossRef]
  9. Kim, D.; Kim, S. Newspaper journalists’ attitudes towards robot journalism. Telemat. Inform. 2018, 35, 340–357. [Google Scholar] [CrossRef]
  10. Jung, J.; Song, H.; Kim, Y.; Im, H.; Oh, S. Intrusion of software robots into journalism: The public’s and journalists’ perceptions of news written by algorithms and human journalists. Comput. Hum. Behav. 2017, 71, 291–298. [Google Scholar] [CrossRef]
  11. Wölker, A.; Powell, T.E. Algorithms in the newsroom? News readers’ perceived credibility and selection of automated journalism. Journalism 2018. [Google Scholar] [CrossRef] [Green Version]
  12. Jung, J.; Chan-Olmsted, S.; Park, B.; Kim, Y. Factors affecting e-book reader awareness, interest, and intention to use. New Media Soc. 2012, 14, 204–224. [Google Scholar] [CrossRef]
  13. Van der Kaa, H.; Krahmer, E. Journalist versus news consumer: The perceived credibility of machine written news. In Proceedings of the Computation + Journalism Conference, Columbia University, New York, NY, USA, 24–25 October 2014; Volume 24, pp. 24–25. [Google Scholar]
  14. Garrett, R.K.; Long, J.A.; Jeong, M.S. From partisan media to misperception: Affective polarization as Mediator. J. Commun. 2019, 69, 490–512. [Google Scholar] [CrossRef]
  15. Ho, J.H.; Lee, G.G.; Lu, M.T. Exploring the implementation of a legal AI bot for sustainable development in legal advisory institutions. Sustainability 2020, 12, 5991. [Google Scholar] [CrossRef]
  16. Keller, T.R.; Klinger, U. Social bots in election campaigns: Theoretical, empirical, and methodological implications. Political Commun. 2019, 36, 171–189. [Google Scholar] [CrossRef] [Green Version]
  17. Evensen, B.J. The drunken journalist: The biography of a film stereotype. J. Hist. 2001, 27, 43. [Google Scholar]
  18. Bridger, E. From the ridiculous to the sublime: Stereotypes of photojournalists in the movies. Vis. Commun. Q. 1997, 4, 4–11. [Google Scholar] [CrossRef]
  19. Korea Press Foundation. National Audience Survey; Korea Press Foundation: Seoul, Korea, 2014. [Google Scholar]
  20. Reuters Institute. Digital News Report. 2016. Available online: https://reutersinstitute.politics.ox.ac.uk/our-research/digital-news-report-2016 (accessed on 12 March 2021).
  21. Carter, R.F.; Greenberg, B.S. Newspapers or television: Which do you believe? Journal. Q. 1965, 42, 29–34. [Google Scholar] [CrossRef]
  22. McCroskey, J.C. Scales for the measurement of ethos. Speech Monogr. 1966, 33, 65–72. [Google Scholar] [CrossRef]
  23. Metzger, M.J.; Flanagin, A.J.; Eyal, K.; Lemus, D.R.; McCann, R.M. Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Ann. Int. Commun. Assoc. 2003, 27, 293–335. [Google Scholar]
  24. Chen, S. The heuristic-systematic model in its broader context. In Dual Process Theories in Social Psychology; Chaiken, S., Trope, Y., Eds.; Guilford: New York, NY, USA, 1999; pp. 73–96. [Google Scholar]
  25. Chen, S.; Chaiken, S. The Heuristic-Systematic Model in Its Broader Context; Guilford Press: New York, NY, USA, 1999. [Google Scholar]
  26. Moskowitz, G.B.; Skurnik, I.; Galinsky, A.D. The history of dual-process notions, and the future of preconscious control. In Dual-Process Theories in Social Psychology; Guilford Press: New York, NY, USA, 1999; pp. 12–36. [Google Scholar]
  27. Chaiken, S. Heuristic and Systematic Information Processing within and beyond the Persuasion Context. Unintended Thought; Guilford: New York, NY, USA, 1989; pp. 212–252. [Google Scholar]
  28. Todorov, A.; Chaiken, S.; Henderson, M.D. The heuristic-systematic model of social information processing. In The Persuasion Handbook: Developments in Theory and Practice; Sage Publications: Thousand Oaks, CA, USA, 2002; pp. 195–212. [Google Scholar]
  29. Luo, X.R.; Zhang, W.; Burd, S.; Seazzu, A. Investigating phishing victimization with the Heuristic–Systematic Model: A theoretical framework and an exploration. Comput. Secur. 2013, 38, 28–38. [Google Scholar] [CrossRef]
  30. Choi, J. How do users choose news in online news environment?—Investigating the predictors and consequences of using different news cues on online portal news sites. Korean J. Journal. Commun. Stud. 2018, 62, 143–169. [Google Scholar] [CrossRef]
  31. Waddell, T.F. Can an algorithm reduce the perceived bias of news? Testing the effect of machine attribution on news readers’ evaluations of bias, anthropomorphism, and credibility. Journal. Mass Commun. Q. 2019, 96, 82–100. [Google Scholar] [CrossRef]
  32. Sundar, S.S.; Jia, H.; Waddell, T.F.; Huang, Y. Toward a theory of interactive media effects (TIME): Four models for explaining how interface features affect user psychology. In Handbook of Psychology of Communication Technology; Sundar, S.S., Ed.; Wiley Blackwell: Boston, MA, USA, 2015; pp. 47–86. [Google Scholar]
  33. Byrne, D. An overview (and underview) of research and theory within the attraction paradigm. J. Soc. Pers. Relatsh. 1997, 14, 417–431. [Google Scholar] [CrossRef]
  34. Lee, H.; Cho, C.H. Uses and gratifications of smart speakers: Modelling the effectiveness of smart speaker advertising. Int. J. Advert. 2020, in press. [Google Scholar] [CrossRef]
  35. Nowak, K.L.; Rauh, C. Choose your “buddy icon” carefully: The influence of avatar androgyny, anthropomorphism and credibility in online interactions. Comput. Hum. Behav. 2008, 24, 1473–1493. [Google Scholar] [CrossRef]
  36. Triandis, H.C. The psychological measurement of cultural syndromes. Am. Psychol. 1996, 51, 407. [Google Scholar] [CrossRef]
  37. Hall, E.T. The Silent Language; Doubleday: New York, NY, USA, 1959. [Google Scholar]
  38. Nisbett, R. The Geography of Thought: How Asians and Westerners Think Differently and Why; Free Press: New York, NY, USA, 2004. [Google Scholar]
  39. Nisbett, R.E.; Peng, K.; Choi, I.; Norenzayan, A. Culture and systems of thought: Holistic versus analytic cognition. Psychol. Rev. 2001, 108, 291–310. [Google Scholar] [CrossRef] [Green Version]
  40. Kitayama, S.; Markus, H.R.; Matsumoto, H.; Norasakkunkit, V. Individual and collective processes in the construction of the self: Self-enhancement in the United States and self-criticism in Japan. J. Personal. Soc. Psychol. 1997, 72, 1245–1266. [Google Scholar] [CrossRef]
  41. Sundar, S.S.; Nass, C. Conceptualizing sources in online news. J. Commun. 2001, 51, 52–72. [Google Scholar] [CrossRef]
  42. Nisbett, R.E.; Masuda, T. Culture and point of view. Proc. Natl. Acad. Sci. USA 2003, 100, 11163–11170. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Perceived quality of news articles: Media × Author interaction.
Figure 2. Perceived quality of news articles: Cultural background × Media × Author interaction—US participants (a); Korean participants (b).
Table 1. Summary of related literature and research gap.

Thematic variable — Researcher — Contribution
Perceived news quality — Zheng et al. [7] — Explored how US and Chinese news users perceive the quality of algorithm-generated news reports and found that quality perception varies depending on nationality.
Perceived acceptability/adoption — Kim & Kim [2] — Claimed that readers’ perceived media reliability is a successful predictor of their news bot acceptance.
Perceived acceptability/adoption — Hong & Oh [5] — Revealed that news readers’ self-efficacy significantly impacted their news bot acceptance.
Perceived acceptability/adoption — Kim & Kim [8] — Identified newspaper companies’ determinants in adopting robot journalism.
Perceived attitude — Kim & Kim [9] — Identified journalists’ attitudes towards robot journalism and suggested that robots have limitations and the potential to harm journalism.
Perceived attitude — Jung et al. [10] — Compared the public’s and journalists’ perceptions of algorithm-generated news and found that evaluations varied by author cue.
Credibility — Wölker & Powell [11] — Found that readers perceived the credibility of human, algorithm, and combined news as equal.

Research gap:
  • Little attention has been paid to public news readers’ responses to robot journalism.
  • Only a few studies have examined the public’s perceptions of algorithm-generated news in cross-cultural environments.
  • Cultural differences in readers’ perceptions of algorithm-generated news, and their impact, are not theoretically understood.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Kim, Y.; Lee, H. Towards a Sustainable News Business: Understanding Readers’ Perceptions of Algorithm-Generated News Based on Cultural Conditioning. Sustainability 2021, 13, 3728. https://0-doi-org.brum.beds.ac.uk/10.3390/su13073728

