Perspective

A Brief Taxonomy of Hybrid Intelligence

by
Niccolo Pescetelli
Department of Humanities and Social Sciences, New Jersey Institute of Technology, Newark, NJ 07102, USA
Submission received: 20 July 2021 / Revised: 21 August 2021 / Accepted: 22 August 2021 / Published: 1 September 2021
(This article belongs to the Special Issue Improved Forecasting through Artificial Intelligence)

Abstract

As artificial intelligence becomes ubiquitous in our lives, so do the opportunities to combine machine and human intelligence to obtain more accurate and more resilient prediction models across a wide range of domains. Hybrid intelligence can be designed in many ways, depending on the role of the human and the algorithm in the hybrid system. This paper offers a brief taxonomy of hybrid intelligence, which describes possible relationships between human and machine intelligence for robust forecasting. In this taxonomy, biological intelligence represents one axis of variation, going from individual intelligence (one individual in isolation) to collective intelligence (several connected individuals). The second axis of variation represents increasingly sophisticated algorithms that can take into account more aspects of the forecasting system, from information to task to human problem-solvers. The novelty of the paper lies in interpreting recent studies in hybrid intelligence as precursors of a set of algorithms that are expected to become more prominent in the future. These algorithms promise to increase a hybrid system's resilience to a wide range of human errors and biases thanks to greater human–machine understanding. The paper ends with a short overview of directions for future research in this field.

1. Introduction

The last decade has seen major advances in computation power and its increased availability at low cost thanks to cloud computing. Advances in machine learning and statistical learning methods, together with the availability of big data, mobile devices, and social networks, have led to an explosion of artificial intelligence and its applications across a wide range of commercial, governmental, and social products and services. Many parallels exist between biological and artificial intelligence, and neuroscience and artificial intelligence can draw inspiration from each other to generate new theories and hypotheses [1,2,3,4]. However, as artificial intelligence builds momentum, it is also becoming apparent that artificial and biological intelligence rely on different, often only partially overlapping, mechanisms. Compared to biological intelligence, artificial intelligence requires much larger training sets to recognize patterns, is less robust to slight variations in the input, does not take into account prior experience, cultural norms, or situational and contextual information, and is less able to generalize to new problems and environments [5,6,7]. Thus, rather than using AI to replace human intelligence, researchers have started to investigate how to integrate human and artificial intelligence by creating hybrid human–artificial systems [8,9,10]. Hybrid intelligence is arguably a more effective and nuanced approach to solving complex problems that both humans and machines struggle to solve alone. According to one definition, hybrid intelligence (HI) is “... the combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them and achieving goals that were unreachable by either humans or machines [11]”.
As research on hybrid intelligence expands, so does the space of design choices available when building a hybrid system. One must consider several aspects of the problem space, the task, the sources of error and information, and the relation between the human and machine elements of the system. For example, a hybrid system aimed at forecasting rare events is likely to employ very different strategies than a system designed to predict frequent events. In this paper, I propose a brief taxonomy of hybrid human–machine systems to help researchers and forecasting practitioners navigate existing options and opportunities in this field. I use the term “taxonomy” in a weak sense, to indicate soft categories that cluster classes of algorithms observed in the literature. As befits a perspective paper, these classes should be interpreted as fuzzy categories rather than hard-bounded boxes. Instead of focusing exclusively on forecasting applications, I draw upon research in collective intelligence, human factors, and psychology. Understanding how other disciplines integrate human and machine intelligence can help forecasting researchers and practitioners by providing a blueprint that can be adapted to the specific necessities of their field. Compared to previous excellent reviews on the subject of hybrid intelligence [11,12], the present perspective focuses more on papers, results, and insights from the behavioral disciplines, cognitive science, and psychology. This lens can better support practitioners interested in designing human-aware algorithms. The papers discussed in this article were selected by searching various academic repositories using the terms “hybrid” and “human–machine”.

2. Biological Intelligence

Figure 1 shows the proposed taxonomy, representing the two main axes of variation. The first axis is composed of examples of biological (human) intelligence. It spans from individual problem-solvers to dyads to teams and networks of problem-solvers acting together to solve a specific task. Compared to individual problem-solvers, multiple problem-solvers can benefit from social learning and achieve greater accuracy [13]. Research on joint perceptual tasks has recently unveiled the conditions and cognitive functions underpinning accuracy improvements in joint decision-making [14,15,16,17]. These results are directly relevant to forecasting, where researchers have highlighted similar effects: human forecasting improves when forecasters can benefit from each other's estimations, arguments, evidence, and signals of confidence [18,19,20,21]. More recently, research in collective intelligence has expanded its focus of analysis from teams to networks of problem-solvers [22,23,24,25]. Although social processes can lead to collective losses as well as collective gains [26,27], under the right conditions the capacity for collective intelligence tends to scale with group size [28,29,30]. Hybrid human–machine systems in forecasting and decision-making can be built to amplify human intelligence, from individual users to networks of problem-solvers. Thus, an orthogonal axis of variation can be drawn for hybrid intelligence. In the rest of the paper, I describe the various levels of this second axis, as AI moves from being an assistant to human endeavor to progressively more sophisticated roles as peer, facilitator, and system-level operator.
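To make the scaling intuition concrete, the sketch below simulates the classic wisdom-of-crowds argument [29,30]: when individual estimates are independent and unbiased, their average gets closer to the truth as group size grows. The code is illustrative only; the true value, noise level, and group sizes are arbitrary assumptions.

```python
# A minimal wisdom-of-crowds simulation: independent, unbiased individual
# errors partially cancel when estimates are averaged, so the group's error
# shrinks as group size grows. All numbers are illustrative assumptions.
import random
import statistics

TRUE_VALUE = 100.0

def individual_estimate(noise_sd: float = 20.0) -> float:
    """One problem-solver's noisy but unbiased estimate of the true value."""
    return random.gauss(TRUE_VALUE, noise_sd)

def crowd_error(group_size: int, trials: int = 2000) -> float:
    """Mean absolute error of the averaged group estimate across many trials."""
    errors = []
    for _ in range(trials):
        group_mean = statistics.fmean(
            individual_estimate() for _ in range(group_size)
        )
        errors.append(abs(group_mean - TRUE_VALUE))
    return statistics.fmean(errors)

for n in (1, 5, 25, 125):
    print(f"group size {n:>3}: mean absolute error = {crowd_error(n):.2f}")
```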

3. Algorithms as Assistants

Many current artificial intelligence use cases can be categorized as external support to human intelligence. At this level, algorithms make it easier for humans to perform complex, lengthy, or repetitive tasks by automating processes that are more costly for humans to perform. The algorithm reduces the cognitive load of forecasters or increases the amount of relevant information available to the human problem-solver. For example, algorithms as assistants can help humans sort and find existing task-relevant information or present information in a way that is easier for humans to digest.
This category includes not just digital assistants such as Siri, Alexa, and Cortana [31,32,33,34,35], but also recommender systems [36,37], search engines [38,39,40,41,42,43], data visualization tools [44,45], argumentation mapping tools [46], image captioning [47], and translators [48,49,50]. These algorithms amplify human intelligence. For example, in forecasting, analysts and forecasters routinely use algorithms as assistants to filter relevant data and make accurate predictions [21,51,52]. Also fitting this category are algorithms that provide the technological medium where interaction between human problem-solvers occurs, such as platforms for crowdsourcing [18,51,53,54,55,56]. In this case, technology assists human interaction by providing the digital infrastructure that facilitates finding better solutions or amplifies collective human intelligence. However, the algorithm (or platform) does not actively try to influence the outcome of human interaction.
Designing algorithms as assistants does not necessarily require understanding the task that the human wants to perform, nor the biases that affect a human performing it. For example, when retrieving relevant web pages, a search engine does not need to know why users need the information they typed into the search bar, nor that users adapt what they read to their own political views or biases. The algorithm requires only knowledge of the information space where the task occurs (e.g., the network of existing web pages that need to be ranked and recommended). In particular, algorithms as assistants do not model how the information they provide is likely to interact with human behavior to produce collective gains or losses [57]. Algorithmic behavior and biases are known to interact with human biases [21,58,59], but designing algorithms as assistants does not require considering this interaction.
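To illustrate what operating only on the information space means in practice, the sketch below implements a toy PageRank-style ranker over a hypothetical four-page link graph. Nothing in it models the user, the task, or human biases; the link structure and damping factor are illustrative assumptions, not any production system's configuration.

```python
# Ranking an information space with no model of the user: a PageRank-style
# power iteration over a small, made-up link graph.
links = {                        # page -> pages it links to (hypothetical)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
DAMPING = 0.85                   # standard damping factor, assumed here

for _ in range(50):              # power iteration to (approximate) convergence
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# The ranking depends only on link structure, never on who is searching.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```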

4. Algorithms as Peers

The second level of the hybrid intelligence scale is the use of algorithms as human peers. This level is perhaps the first that comes to mind when we think of hybrid intelligence. Fueled by science fiction and the humanoid robots populating the collective imagination, this second level involves designing algorithms that reach human-like, or better-than-human, performance in previously human-only domains.
Examples of this type of algorithm include AI beating humans at games such as Atari video games, poker, chess, and Go [60,61,62,63], as well as the job displacement and replacement brought about by automation [64,65,66,67]. These algorithms are designed to mimic humans and to perform tasks that overlap with those performed by humans. A better-than-human algorithm may entirely replace human problem-solvers, while algorithms as peers can also populate hybrid human–algorithm teams to strengthen team resilience, performance, and scale [10,68].
In forecasting, human forecasters often partner with machine models performing the same forecasting tasks [69,70,71,72,73]. Assuming data about the problem being solved is available and machine-readable, artificial intelligence can help with feature identification and prediction [74]. Recent work on human-machine ensembles shows that aggregating human and machine forecasts can outperform each individual component [75].
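As a concrete illustration of such an ensemble, the sketch below combines one human and one machine forecast with weights inversely proportional to each source's recent mean squared error. This is a generic weighting scheme chosen for simplicity, not the anchor-attention method of [75], and all numbers are made up.

```python
# A toy human-machine ensemble: weight each source by the inverse of its
# recent mean squared error (illustrative scheme; all numbers are assumed).
import statistics

def inverse_error_weights(human_errors, machine_errors):
    """Normalized weights proportional to 1 / mean squared error."""
    h_mse = statistics.fmean(e ** 2 for e in human_errors)
    m_mse = statistics.fmean(e ** 2 for e in machine_errors)
    w_h, w_m = 1.0 / h_mse, 1.0 / m_mse
    total = w_h + w_m
    return w_h / total, w_m / total

def ensemble_forecast(human_pred, machine_pred, human_errors, machine_errors):
    w_h, w_m = inverse_error_weights(human_errors, machine_errors)
    return w_h * human_pred + w_m * machine_pred

# The machine has been more accurate lately, so its forecast gets more weight.
print(ensemble_forecast(0.70, 0.55,
                        human_errors=[0.20, 0.15, 0.25],
                        machine_errors=[0.05, 0.10, 0.08]))
```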
Designing algorithms as peers requires knowledge of the task and problem space, not just the information space where the task occurs. For instance, to play Go, AlphaGo needs to understand both task-relevant information (e.g., the board state) and the game's rules. The same distinction holds in forecasting: algorithmic assistants only need to present or visualize relevant information to a human forecaster, whereas forecasting models acting as peers must translate task-relevant information into sensible forecasts in the same way a human analyst would. They move from being a support to a (human) forecaster to being forecasters themselves. Importantly, however, designing algorithms as peers does not necessarily require modeling the human collaborator that is part of the hybrid system.

5. Algorithms as Facilitators

A slightly more sophisticated form of hybrid intelligence uses algorithms as facilitators of human intelligence. In this category, the algorithmic agent does not just try to solve the task as a human would, but uses a model of human behavior to maximize the chances of good human performance. While algorithms as peers act independently from human agents, algorithms as facilitators act as a function of people's behavior. What the algorithm does depends on the cognitive strategies and problem-solving abilities of its human counterparts. The algorithm corrects (either explicitly or implicitly) for human cognitive biases so as to improve the system's aggregate performance.
From medical to military domains, human decisions are known to be affected by a range of biases emerging from cognitive adaptations known as heuristics [76,77,78,79]. Although adaptive in a wide range of domains, these heuristics can, under specific circumstances, result in maladaptive or seemingly irrational behavior [80,81,82,83]. AI algorithms that correct or de-bias individuals and groups of human solvers thus act as facilitators. This is not a novel field of study: research on human factors and human–machine interaction has traditionally focused on designing technology and products that account for human error and human capability. For instance, in medical and military decisions, people are known to show a confirmation bias that can compromise the objectivity of a decision by neglecting or underestimating conflicting evidence. User interfaces, human factors, and human-centered design can successfully de-bias such decisions [84,85]. More recently, however, researchers have started to explore the idea of artificial players (e.g., bots) that play alongside human problem-solvers using strategies that, although not effective in isolation, are effective when aggregated with human strategies. For example, locally noisy autonomous agents embedded in a network of human problem-solvers can increase the likelihood of the social system finding a global solution in a network coloring task [86]. This task is known to have many local optima where human-only groups can get stuck. Thus, the addition of artificial agents employing a random noisy strategy facilitated the solution of the task, even though those same artificial agents could not have found the global solution by themselves [87]. It has recently been suggested that algorithms with problem-solving strategies different from humans' can improve overall performance by increasing the diversity, and thus the resilience, of social learning [88]. In another study, researchers embedded machines within human groups to moderate team conflict [89]. Algorithms as facilitators cannot solve the task by themselves but require interaction with human solvers.
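The sketch below distills the noisy-bot idea into a toy graph coloring simulation: most updates greedily pick the color that minimizes local conflicts, while designated noisy agents occasionally choose at random, helping the system leave locally stuck configurations. The graph, noise rate, and update scheme are simplifying assumptions of mine, not the design of the experiments in [86,87].

```python
# Toy graph coloring with noisy bots: greedy agents minimize local conflicts;
# noisy agents sometimes act at random. Parameters are illustrative assumptions.
import random

def conflicts(node, color, coloring, graph):
    """Number of neighbors of `node` currently sharing `color`."""
    return sum(coloring[nb] == color for nb in graph[node])

def solve(graph, colors, noisy_agents, steps=10_000, noise=0.3):
    coloring = {n: random.choice(colors) for n in graph}
    nodes = list(graph)
    for _ in range(steps):
        node = random.choice(nodes)
        if node in noisy_agents and random.random() < noise:
            coloring[node] = random.choice(colors)      # noisy random move
        else:                                           # greedy best response
            coloring[node] = min(
                colors, key=lambda c: conflicts(node, c, coloring, graph)
            )
        if all(conflicts(n, coloring[n], coloring, graph) == 0 for n in graph):
            return coloring                             # proper coloring found
    return None                                         # step budget exhausted

# Toy run on a 5-node ring; on such small graphs greedy agents usually suffice,
# but on harder graphs the noisy agents help escape locally stuck states.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(solve(ring, colors=["r", "g", "b"], noisy_agents={0}))
```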
Designing an algorithm as a facilitator requires knowledge of the information space and the task, as at the previous level (algorithms as peers). On top of that, however, it also requires knowledge of the human problem-solvers performing the task alongside the AI. It requires anticipating human pitfalls and biases to design systems that are resilient to systematic human failures. AI researchers are increasingly using human-centered design and social intelligence to design algorithms that can better anticipate and react to human errors, biases, and irrationalities [12,90,91,92]. Researchers in AI and hybrid intelligence (HI) have called for greater awareness of humans in the loop by modeling their mental states, recognizing their beliefs and intentions, giving cogent explanations on demand, and fostering trust [11,12,52,93]. The current research agenda in HI is often framed around four central properties required of such hybrid intelligent systems, namely being collaborative, adaptive, responsible, and explainable [11]. In line with this agenda, algorithms as facilitators move the role of algorithms from being merely tools (algorithms as assistants) or human replacements (algorithms as peers) to being collaborative team members that can model (implicitly or explicitly) their human partners. This hybrid interaction generates a complex dynamic system that can produce ideas, patterns, and solutions that neither humans nor algorithms could produce in isolation. Notice, however, that algorithms as facilitators still operate within the boundaries of the task being solved. This assumption is relaxed in the following category, namely algorithms as system-level operators.

6. Algorithms as System-Level Operators

The last level of this taxonomy includes a class of algorithms that I believe will become increasingly relevant in the future. These algorithms are what I call “system-level operators”. I believe this level of algorithmic interaction with human agents offers the most interesting opportunities for the future of hybrid intelligence, but it is currently underexplored compared to the other three. Although not enough work has been conducted in this area yet, I try here to offer a glimpse of how these kinds of algorithms might look in the future. I define an algorithm as a system-level operator if it can observe the full information–task–people–AI system and the reciprocal interactions between its elements. The algorithm uses this knowledge to act on the system and maximize the system's overall performance. These algorithms are not expected to operate as problem-solvers from within the task boundaries. Instead, they can change the interactions among problem-solvers to maximize overall aggregate performance. For instance, some algorithms in hybrid intelligence are designed to dynamically select crowdsourced human ideas and information to amplify the power of collective intelligence in real-world challenges [55] and open-ended problems [94]. In these systems, the algorithm actively controls the flow of information on the platform to promote good performance. It does not operate as an artificial problem-solver (i.e., it does not share the same action set as human players); thus, it does not replace a human or compensate for their biases from within the task. Instead, it acts as an ex machina system controller. In this regard, an algorithm acting as a system-level operator is dynamic and highly adaptive [11], with advantages in terms of adaptability and flexibility of design. Some preliminary research on this type of algorithm has recently been conducted. In collective intelligence, for example, one study compared different algorithms that dynamically manipulate the network structure of human problem-solvers [95]. In this work, the algorithm acts as an invisible hand that strategically controls the information flow of the system and the relations between human problem-solvers, tasks, and information to achieve greater performance.
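A schematic sketch of such an operator is given below: each round it rewires who observes whom, so that the least accurate solvers gain access to the most accurate ones. The rewiring rule is a deliberately simple assumption of mine, not the algorithm evaluated in [95]; the point is that the operator steers information flow without producing any forecast itself.

```python
# A toy system-level operator: rewire the observation network so that weak
# solvers observe strong ones. Rule, data, and names are illustrative.
def rewire(network, accuracy, k=2):
    """network: solver -> set of observed solvers; accuracy: solver -> score."""
    ranked = sorted(network, key=lambda s: accuracy[s], reverse=True)
    top, bottom = ranked[:k], ranked[-k:]
    for weak in bottom:
        for strong in top:
            if strong != weak:
                network[weak].add(strong)   # open a channel to a strong solver
    return network

network = {"s1": {"s2"}, "s2": {"s3"}, "s3": {"s1"}, "s4": {"s1"}}
accuracy = {"s1": 0.9, "s2": 0.6, "s3": 0.4, "s4": 0.3}
print(rewire(network, accuracy))   # s3 and s4 now also observe s1 and s2
```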
Furthermore, I suggest that this type of algorithm will also become more prominent in forecasting. Again, research so far has been sparse, but there are reasons to believe it will grow as interest in, and theory of, human–machine understanding develops [96]. In particular, one study used a dynamic ensemble method that allocates human and algorithmic forecasters so as to minimize expected error [70]. In that system, the algorithmic forecasters act as peers to the human forecasters, whereas the algorithm controlling their dynamic allocation is a system-level operator. I propose that one lens that should be adopted when designing algorithms to operate in hybrid environments is that of system control. More research is needed to create theory-driven methods that can dynamically change the parameters of the forecasting system as a function of the amount and type of data available, the uncertainty of the forecasts, and new evidence. A better understanding of the biases affecting each component of the hybrid system, human and machine alike, can help design more principled approaches to selecting algorithmic versus human decisions, resulting in more robust forecasting systems.
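As a minimal sketch of this system-control lens, consider a controller that routes each forecasting question to whichever source, the human pool or the machine model, currently has the lower exponentially weighted recent error. The mechanism is an illustrative assumption of mine, not the allocation method used in [70].

```python
# A toy system-level controller for dynamic human/machine allocation.
# Decay rate, starting values, and the error log are illustrative assumptions.
class DynamicAllocator:
    """Route each question to the source with the lower recent error."""

    def __init__(self, decay: float = 0.8):
        self.decay = decay
        self.error = {"human": 1.0, "machine": 1.0}   # neutral starting estimates

    def choose(self) -> str:
        return min(self.error, key=self.error.get)

    def update(self, source: str, abs_error: float) -> None:
        # Exponentially weighted moving average of each source's error.
        self.error[source] = (self.decay * self.error[source]
                              + (1 - self.decay) * abs_error)

allocator = DynamicAllocator()
for source, err in [("human", 0.3), ("machine", 0.1),
                    ("human", 0.4), ("machine", 0.05)]:
    allocator.update(source, err)
print(allocator.choose())   # -> "machine": the lower recent error wins
```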
Designing algorithms as system-level operators is challenging, but I believe it also offers the greatest promise for hybrid intelligence. As awareness of algorithmic and human bias grows, stakeholders from companies to policy makers will increasingly want algorithms that can produce resilient hybrid systems. On top of task and problem-solver knowledge (as in the previous levels), designing algorithms as system-level operators also requires knowledge of the interactions between task and problem-solvers over time. It requires sensing and flexibly adapting the parameters of the forecasting system to slight variations in the state of the world. In hybrid forecasting, dynamic model updating and performance monitoring allow hybrid systems to regulate their own behavior, e.g., to collect more evidence when uncertain, to self-correct after errors, and to learn from past mistakes. Although still underexplored, system-level AI promises greater flexibility in the emerging system's behavior and greater robustness in performance, because the system dynamically adapts both to human interactions within the system and to changes in the environment outside it [11,97].
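A compact sketch of such self-monitoring might look as follows; the disagreement threshold and evidence source are made-up placeholders, not a published system. When the forecasts in hand disagree too much, the system gathers more input before committing to an aggregate.

```python
# Self-monitoring in a hybrid loop: gather more evidence while uncertain.
# Threshold and evidence source are illustrative assumptions.
import random
import statistics

UNCERTAINTY_THRESHOLD = 0.15   # assumed tolerance on forecast disagreement

def hybrid_forecast(forecasts, fetch_more_evidence):
    """Aggregate forecasts, polling extra input while disagreement is high."""
    while statistics.stdev(forecasts) > UNCERTAINTY_THRESHOLD:
        forecasts.append(fetch_more_evidence())   # e.g., ask another forecaster
    return statistics.fmean(forecasts)

# Toy usage: extra opinions are drawn until the panel roughly agrees.
print(hybrid_forecast([0.2, 0.6, 0.4], lambda: random.gauss(0.4, 0.05)))
```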

7. Future Directions

Hybrid forecasting should not be viewed as a silver bullet for successful forecasting [98]. However, as machine learning techniques expand in scope and reach, artificial intelligence will become increasingly capable of complementing and augmenting biological and collective intelligence. On this view, the capacity for hybrid intelligence scales with the ability to take into account more dimensions of the forecasting system. This paper outlined soft boundaries along which artificial intelligence's ability to deliver better and more robust forecasting depends on the capacity of algorithms to incorporate knowledge of the problem space, the task, and the human forecasters acting in it. Importantly, the increasingly sophisticated levels of hybrid intelligence described here should not be seen as hard categorical boundaries; hybrid systems may show elements of different levels to various degrees. Finally, people's increasing awareness of how algorithmic and human biases are often intertwined can play a pivotal role in boosting the development of more transparent and robust AI, and better hybrid human–AI intelligence. Rather than restricting algorithmic design to a few stakeholders with niche skill sets, teams should diversify their expertise to include behavioral and social scientists alongside machine learning practitioners.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study as it is not an empirical work.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Hassabis, D.; Kumaran, D.; Summerfield, C.; Botvinick, M. Neuroscience-Inspired Artificial Intelligence. Neuron 2017, 95, 245–258. [Google Scholar] [CrossRef] [Green Version]
  2. Leudar, I.; McClelland, J.L.; Rumelhart, D.E.; The PDP Research Group. Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 1. Foundations. Vol. 2. Psychological and biological models. J. Child Lang. 1989, 16, 467–470. [Google Scholar] [CrossRef]
  3. Hebb, D.O. The first stage of perception: Growth of the assembly. In The Organization of Behavior; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1949; Volume 4, pp. 60–78. [Google Scholar]
  4. Whittington, J.C.; Bogacz, R. Theories of Error Back-Propagation in the Brain. Trends Cogn. Sci. 2019, 23, 235–250. [Google Scholar] [CrossRef] [Green Version]
  5. Mitchell, M. Artificial Intelligence: A Guide for Thinking Humans; Picador: London, UK, 2020. [Google Scholar]
  6. Guo, H.; Peng, L.; Zhang, J.; Qi, F.; Duan, L. Fooling AI with AI: An Accelerator for Adversarial Attacks on Deep Learning Visual Classification. In Proceedings of the 2019 IEEE 30th International Conference on Application-Specific Systems, Architectures and Processors (ASAP), New York, NY, USA, 15–17 July 2019; p. 136. [Google Scholar]
  7. Chang, C.-L.; Hung, J.-L.; Tien, C.-W.; Kuo, S.-Y. Evaluating Robustness of AI Models against Adversarial Attacks. In Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence; Association for Computing Machinery (ACM): New York, NY, USA, 2020; pp. 47–54. [Google Scholar]
  8. Chen, L.; Ning, H.; Nugent, C.D.; Yu, Z. Hybrid Human-Artificial Intelligence. Computer 2020, 53, 14–17. [Google Scholar] [CrossRef]
  9. Dekker, S.W.A.; Woods, D.D. MABA-MABA or Abracadabra? Progress on Human-Automation Co-ordination. Cogn. Technol. Work. 2002, 4, 240–244. [Google Scholar] [CrossRef]
  10. Sankar, S. Transcript of “The Rise of Human-Computer Cooperation”. Available online: https://www.ted.com/talks/shyam_sankar_the_rise_of_human_computer_cooperation/transcript?language=en (accessed on 10 July 2021).
  11. Akata, Z.; Balliet, D.; De Rijke, M.; Dignum, F.; Dignum, V.; Eiben, G.; Fokkens, A.; Grossi, D.; Hindriks, K.; Hoos, H.; et al. Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence. Computer 2020, 53, 18–28. [Google Scholar] [CrossRef]
  12. Kambhampati, S. Challenges of Human-Aware AI Systems. AI Mag. 2020, 41, 3–17. [Google Scholar] [CrossRef]
  13. Rendell, L.; Boyd, R.; Cownden, D.; Enquist, M.; Eriksson, K.; Feldman, M.W.; Fogarty, L.; Ghirlanda, S.; Lillicrap, T.; Laland, K.N. Why Copy Others? Insights from the Social Learning Strategies Tournament. Science 2010, 328, 208–213. [Google Scholar] [CrossRef] [Green Version]
  14. Bahrami, B.; Olsen, K.; Latham, P.E.; Roepstorff, A.; Rees, G.; Frith, C. Optimally Interacting Minds. Science 2010, 329, 1081–1085. [Google Scholar] [CrossRef] [Green Version]
  15. Sorkin, R.D.; Hays, C.J.; West, R. Signal-detection analysis of group decision making. Psychol. Rev. 2001, 108, 183–203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Pescetelli, N.; Rees, G.; Bahrami, B. The perceptual and social components of metacognition. J. Exp. Psychol. Gen. 2016, 145, 949–965. [Google Scholar] [CrossRef]
  17. Koriat, A. When Are Two Heads Better than One and Why? Science 2012, 336, 360–362. [Google Scholar] [CrossRef] [Green Version]
  18. Rosenberg, L.; Baltaxe, D.; Pescetelli, N. Crowds vs. swarms, a comparison of intelligence. In Proceedings of the 2016 Swarm/Human Blended Intelligence Workshop (SHBI), Cleveland, OH, USA, 21–23 October 2016; pp. 1–4. [Google Scholar]
  19. Tetlock, P.E.; Gardner, D. Superforecasting: The Art and Science of Prediction; Crown Publishers: New York, NY, USA, 2015. [Google Scholar]
  20. Silver, I.; Mellers, B.A.; Tetlock, P.E. Wise teamwork: Collective confidence calibration predicts the effectiveness of group discussion. J. Exp. Soc. Psychol. 2021, 96, 104157. [Google Scholar] [CrossRef]
  21. Pescetelli, N.; Rutherford, A.; Rahwan, I. Modularity and composite diversity affect the collective gathering of information online. Nat. Commun. 2021, 12, 1–10. [Google Scholar] [CrossRef] [PubMed]
  22. Almaatouq, A.; Noriega-Campero, A.; Alotaibi, A.; Krafft, P.M.; Moussaid, M.; Pentland, A. Adaptive social networks promote the wisdom of crowds. Proc. Natl. Acad. Sci. USA 2020, 117, 11379–11386. [Google Scholar] [CrossRef] [PubMed]
  23. Becker, J.; Brackbill, D.; Centola, D. Network dynamics of social influence in the wisdom of crowds. Proc. Natl. Acad. Sci. USA 2017, 114, E5070–E5076. [Google Scholar] [CrossRef] [Green Version]
  24. Mason, W.; Watts, D.J. Collaborative learning in networks. Proc. Natl. Acad. Sci. USA 2012, 109, 764–769. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Barkoczi, D.; Galesic, M. Social learning strategies modify the effect of network structure on group performance. Nat. Commun. 2016, 7, 13109. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Lorenz, J.; Rauhut, H.; Schweitzer, F.; Helbing, D. How social influence can undermine the wisdom of crowd effect. Proc. Natl. Acad. Sci. USA 2011, 108, 9020–9025. [Google Scholar] [CrossRef] [Green Version]
  27. Hahn, U.; Von Sydow, M.; Merdes, C. How Communication Can Make Voters Choose Less Well. Top. Cogn. Sci. 2019, 11, 194–206. [Google Scholar] [CrossRef]
  28. Kao, A.B.; Miller, N.; Torney, C.; Hartnett, A.; Couzin, I.D. Collective Learning and Optimal Consensus Decisions in Social Animal Groups. PLoS Comput. Biol. 2014, 10, e1003762. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Surowiecki, J. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations; Abacus: London, UK; Little, Brown Book Group: Boston, MA, USA, 2004; p. 271. [Google Scholar]
  30. De Condorcet, M. Essai sur L’application de L’analyse à la Probabilité des Decisions Rendues à la Pluralité des Vois; De l’Imprimerie Royale: Paris, France, 1785. [Google Scholar]
  31. Brill, T.M.; Munoz, L.; Miller, R.J. Siri, Alexa, and other digital assistants: A study of customer satisfaction with artificial intelligence applications. J. Mark. Manag. 2019, 35, 1401–1436. [Google Scholar] [CrossRef]
  32. Hennig, N. Siri, Alexa, and Other Digital Assistants: The Librarian’s Quick Guide; ABC-CLIO: Santa Barbara, CA, USA, 2018. [Google Scholar]
  33. Sarikaya, R. The technology powering personal digital assistants. In Proceedings of the Sixteenth Annual Conference of the International Speech Communication Association, Dresden, Germany, 6–10 September 2015; Available online: https://www.isca-speech.org/archive/interspeech_2015/i15_4002.html (accessed on 23 August 2021).
  34. Picard, C.; Smith, K.E.; Picard, K.; Douma, M.J. Can Alexa, Cortana, Google Assistant and Siri save your life? A mixed-methods analysis of virtual digital assistants and their responses to first aid and basic life support queries. BMJ Innov. 2020, 6, 26–31. [Google Scholar] [CrossRef]
  35. Boyd, M.; Wilson, N. Just ask Siri? A pilot study comparing smartphone digital assistants and laptop Google searches for smoking cessation advice. PLoS ONE 2018, 13, e0194811. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Ricci, F.; Rokach, L.; Shapira, B. Introduction to recommender systems handbook. In Recommender Systems Handbook; Springer: Boston, MA, USA, 2011; pp. 1–35. [Google Scholar]
  37. Koren, Y.; Bell, R. Advances in Collaborative Filtering. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Eds.; Springer: Boston, MA, USA, 2015; pp. 77–118. [Google Scholar]
  38. Epstein, R.; Robertson, R.E. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proc. Natl. Acad. Sci. USA 2015, 112, E4512–E4521. [Google Scholar] [CrossRef] [Green Version]
  39. Hannak, A.; Sapiezynski, P.; Kakhki, A.M.; Krishnamurthy, B.; Lazer, D.; Mislove, A.; Wilson, C. Measuring personalization of web search. In Proceedings of the 22nd international conference on World Wide Web-WWW ’13, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 527–538. [Google Scholar]
  40. Steiner, M.; Magin, M.; Stark, B.; Geiß, S. Seek and you shall find? A content analysis on the diversity of five search engines’ results on political queries. Inf. Commun. Soc. 2020, 1–25. [Google Scholar] [CrossRef]
  41. Dwork, C.; Kumar, R.; Naor, M.; Sivakumar, D. Page Rank Aggregation Methods: A Review. Int. J. Comput. Sci. Eng. 2018, 6, 976–980. [Google Scholar] [CrossRef]
  42. Schwartz, C. Web search engines. J. Am. Soc. Inf. Sci. 1998, 49, 973–982. [Google Scholar] [CrossRef]
  43. Damaschk, M.; Donicke, T.; Lux, F. Multiclass Text Classification on Unbalanced, Sparse and Noisy Data; Linköping University Electronic Press: Turku, Finland, 2019; pp. 58–65. [Google Scholar]
  44. Perkel, J.M. Data visualization tools drive interactivity and reproducibility in online publishing. Nat. Cell Biol. 2018, 554, 133–134. [Google Scholar] [CrossRef]
  45. Ali, S.M.; Gupta, N.; Nayak, G.K.; Lenka, R.K. Big data visualization: Tools and challenges. In Proceedings of the 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), Noida, India, 14–17 December 2016; pp. 656–660. [Google Scholar]
  46. Simari, G.; Rahwan, I. (Eds.) Argumentation in Artificial Intelligence; Springer: Boston, MA, USA, 2009. [Google Scholar]
  47. You, Q.; Jin, H.; Wang, Z.; Fang, C.; Luo, J. Image Captioning with Semantic Attention. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4651–4659. [Google Scholar]
  48. Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.V.; Norouzi, M.; Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K.; et al. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv 2016, arXiv:1609.08144. [Google Scholar]
  49. Varela-Salinas, M.J.; Burbat, R. Google Translate and deepl: Breaking Taboos in Translator Training. 2018. Available online: https://riuma.uma.es/xmlui/handle/10630/16310 (accessed on 20 June 2021).
  50. Nakazawa, T.; Yu, K.; Kawahara, D.; Kurohashi, S. Example-based machine translation based on deeper NLP. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT) 2006, Kyoto, Japan, 27–28 November 2006; Available online: https://www.isca-speech.org/archive/iwslt_06/papers/slt6_064.pdf (accessed on 23 August 2021).
  51. Rosenberg, L.; Pescetelli, N.; Willcox, G. Artificial Swarm Intelligence amplifies accuracy when predicting financial markets. In Proceedings of the 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 58–62. [Google Scholar]
  52. Keller, N.; Jenny, M.A.; Spies, C.A.; Herzog, S.M. Augmenting Decision Competence in Healthcare Using AI-based Cognitive Models. In Proceedings of the 2020 IEEE International Conference on Healthcare Informatics (ICHI), Oldenburg, Germany, 15–18 June 2020; pp. 1–4. [Google Scholar]
  53. Difallah, D.E.; Catasta, M.; Demartini, G.; Ipeirotis, P.G.; Cudré-Mauroux, P. The Dynamics of Micro-Task Crowdsourcing: The Case of Amazon MTurk. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015; pp. 238–247. [Google Scholar]
  54. Pickard, G.; Pan, W.; Rahwan, I.; Cebrian, M.; Crane, R.; Madan, A.; Pentland, A. Time-Critical Social Mobilization. Science 2011, 334, 509–512. [Google Scholar] [CrossRef] [Green Version]
  55. Stefanovitch, N.; Alshamsi, A.; Cebrian, M.; Rahwan, I. Error and attack tolerance of collective problem solving: The DARPA Shredder Challenge. EPJ Data Sci. 2014, 3, 13. [Google Scholar] [CrossRef] [Green Version]
  56. Ungar, L.; Mellers, B.; Satopää, V.; Tetlock, P.; Baron, J. The good judgment project: A large scale test of different methods of combining expert predictions. In Proceedings of the 2012 AAAI Fall Symposium Series, Arlington, VA, USA, 2–4 November 2012; Available online: http://people.uvawise.edu/pww8y/Supplement/-ConceptsSup/Orgs/DecMking/MassDecmking/GoodJudgmentProj/MethodsCombineExpertPredictions.pdf (accessed on 31 August 2021).
  57. Bak-Coleman, J.B.; Alfano, M.; Barfuss, W.; Bergstrom, C.T.; Centeno, M.A.; Couzin, I.D.; Donges, J.F.; Galesic, M.; Gersick, A.S.; Jacquet, J.; et al. Stewardship of global collective behavior. Proc. Natl. Acad. Sci. USA 2021, 118, e2025764118. [Google Scholar] [CrossRef]
  58. Pipergias Analytis, P.; Barkoczi, D.; Lorenz-Spreen, P.; Herzog, S. The Structure of Social Influence in Recommender Networks. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 2655–2661. [Google Scholar]
  59. Analytis, P.P.; Barkoczi, D.; Herzog, S.M. Social learning strategies for matters of taste. Nat. Hum. Behav. 2018, 2, 415–424. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  61. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef]
  62. Hassabis, D. Artificial Intelligence: Chess match of the century. Nature 2017, 544, 413–414. [Google Scholar] [CrossRef] [Green Version]
  63. Hsu, F.-H. Behind Deep Blue: Building the Computer that Defeated the World Chess Champion; Princeton University Press: Princeton, NJ, USA, 2002. [Google Scholar]
  64. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; W. W. Norton & Company: New York, NY, USA, 2014. [Google Scholar]
  65. Frank, M.R.; Sun, L.; Cebrian, M.; Youn, H.; Rahwan, I. Small cities face greater impact from automation. J. R. Soc. Interface 2018, 15, 20170946. [Google Scholar] [CrossRef]
  66. Frank, M.R.; Autor, D.; Bessen, J.E.; Brynjolfsson, E.; Cebrian, M.; Deming, D.J.; Feldman, M.; Groh, M.; Lobo, J.; Moro, E.; et al. Toward understanding the impact of artificial intelligence on labor. Proc. Natl. Acad. Sci. USA 2019, 116, 6531–6539. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Hancock, B.; Lazaroff-Puck, K.; Rutherford, S. Getting practical about the future of work. McKinsey Q. 2020, 2, 123–132. [Google Scholar]
  68. Licklider, J.C.R. Man-Computer Symbiosis. IRE Trans. Hum. Factors Electron. 1960, 14, 4–11. [Google Scholar] [CrossRef] [Green Version]
  69. Abeliuk, A.; Benjamin, D.M.; Morstatter, F.; Galstyan, A. Quantifying machine influence over human forecasters. Sci. Rep. 2020, 10, 1–14. [Google Scholar] [CrossRef]
  70. Miyoshi, T.; Matsubara, S. Dynamically Forming a Group of Human Forecasters and Machine Forecaster for Forecasting Economic Indicators. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 461–467. [Google Scholar]
  71. Huang, W.; Nakamori, Y.; Wang, S.-Y. Forecasting stock market movement direction with support vector machine. Comput. Oper. Res. 2005, 32, 2513–2522. [Google Scholar] [CrossRef]
  72. Beger, A.; Ward, M.D. Assessing Amazon Turker and automated machine forecasts in the Hybrid Forecasting Competition. In Proceedings of the 7th Annual Asian Political Methodology Conference, Kyoto, Japan, 5–6 January 2019; Available online: https://0-www-cambridge-org.brum.beds.ac.uk/core/membership/services/aop-file-manager/file/5c17e2e44e1070411cb0e7e6/APMM-2019-Andreas-Beger.pdf (accessed on 31 August 2021).
  73. Zellner, M.; Abbas, A.E.; Budescu, D.V.; Galstyan, A. A survey of human judgement and quantitative forecasting methods. R. Soc. Open Sci. 2021, 8, 201187. [Google Scholar] [CrossRef] [PubMed]
  74. Chong, E.; Han, C.; Park, F.C. Deep learning networks for stock market analysis and prediction: Methodology, data representations, and case studies. Expert Syst. Appl. 2017, 83, 187–205. [Google Scholar] [CrossRef] [Green Version]
  75. Huang, Y.; Abeliuk, A.; Morstatter, F.; Atanasov, P.; Galstyan, A. Anchor Attention for Hybrid Crowd Forecasts Aggregation. arXiv 2020, arXiv:2003.12447. [Google Scholar]
  76. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef]
  77. Gigerenzer, G. Why Heuristics Work. Perspect. Psychol. Sci. 2008, 3, 20–29. [Google Scholar] [CrossRef]
  78. Hertwig, R.; Gigerenzer, G. The “Conjunction Fallacy” Revisited: How Intelligent Inferences Look Like Reasoning Errors. J. Behav. Decis. Mak. 1999, 12, 275–305. [Google Scholar] [CrossRef]
  79. Gigerenzer, G.; Hoffrage, U. How to Improve Bayesian Reasoning without Instruction: Frequency Formats. Psychol. Rev. 1995, 102, 684–704. [Google Scholar] [CrossRef]
  80. Pescetelli, N.; Yeung, N. The role of decision confidence in advice-taking and trust formation. J. Exp. Psychol. Gen. 2021, 150, 507–526. [Google Scholar] [CrossRef]
  81. Pescetelli, N.; Yeung, N. The effects of recursive communication dynamics on belief updating. Proc. R. Soc. B 2020, 287, 20200025. [Google Scholar] [CrossRef]
  82. Kahneman, D. Thinking Fast and Slow; Farrar, Straus & Giroux: New York, NY, USA, 2011. [Google Scholar]
  83. Tversky, A.; Kahneman, D. Loss Aversion in Riskless Choice: A Reference-Dependent Model. Q. J. Econ. 1991, 106, 1039–1061. [Google Scholar] [CrossRef]
  84. Cook, M.B.; Smallman, H.S. Human Factors of the Confirmation Bias in Intelligence Analysis: Decision Support from Graphical Evidence Landscapes. Hum. Factors 2008, 50, 745–754. [Google Scholar] [CrossRef] [PubMed]
  85. Juvina, I.; LaRue, O.; Widmer, C.; Ganapathy, S.; Nadella, S.; Minnery, B.; Ramshaw, L.; Servan-Schreiber, E.; Balick, M.; Weischedel, R. Computer-Supported Collaborative Information Search for Geopolitical Forecasting. In Understanding and Improving Information Search: A Cognitive Approach; Fu, W.T., van Oostendorp, H., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 245–266. [Google Scholar]
  86. Shirado, H.; Christakis, N.A. Locally noisy autonomous agents improve global human coordination in network experiments. Nature 2017, 545, 370–374. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Yates, C.A.; Erban, R.; Escudero, C.; Couzin, I.; Buhl, J.; Kevrekidis, I.G.; Maini, P.K.; Sumpter, D.J.T. Inherent noise can facilitate coherence in collective swarm motion. Proc. Natl. Acad. Sci. USA 2009, 106, 5464–5469. [Google Scholar] [CrossRef] [Green Version]
  88. Brinkmann, L.; Gezerli, D.; von Kleist, K.; Müller, T.F.; Rahwan, I.; Pescetelli, N. Hybrid social learning in human-algorithm cultural transmission. SocArXiv. [CrossRef]
  89. Jung, M.F.; Martelaro, N.; Hinds, P.J. Using Robots to Moderate Team Conflict. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; pp. 229–236. [Google Scholar]
  90. Guszcza, J. Smarter Together: Why Artificial Intelligence Needs Human-Centered Design; Deloitte: London, UK, 2018. [Google Scholar]
  91. Merriam, G. If A.I. Only Had a Heart: Why Artificial Intelligence Research Needs to Take Emotions More Seriously. J. Artif. Intell. Conscious. 2021, 2150012. [Google Scholar] [CrossRef]
  92. Dautenhahn, K. A Paradigm Shift in Artificial Intelligence: Why Social Intelligence Matters in the Design and Development of Robots with Human-Like Intelligence. Comput. Vis. 2007, 4850, 288–302. [Google Scholar]
  93. Wenskovitch, J.; North, C. Interactive Artificial Intelligence: Designing for the “Two Black Boxes” Problem. Computer 2020, 53, 29–39. [Google Scholar] [CrossRef]
  94. Pescetelli, N.; Cebrian, M.; Rahwan, I. BeeMe: Real-Time Internet Control of Situated Human Agents. Computer 2020, 53, 49–58. [Google Scholar] [CrossRef]
  95. Burton, J.; Hahn, U.; Almaatouq, A.; Amin Rahimian, M. Algorithmically Mediating Communication to Enhance Collective Decision-Making in Online Social Networks. In Proceedings of the ACM Collective Intelligence Conference 2021, Pittsburgh, PA, USA, 29–30 June 2021. [Google Scholar]
  96. Kaufmann, R.; Gupta, P.; Taylor, J. An Active Inference Model of Collective Intelligence. Entropy 2021, 23, 830. [Google Scholar] [CrossRef] [PubMed]
  97. Johnson, M.; Shrewsbury, B.; Bertrand, S.; Wu, T.; Duran, D.; Floyd, M.; Abeles, P.; Stephen, D.; Mertins, N.; Lesman, A.; et al. Team IHMC’s lessons learned from the DARPA robotics challenge trials. J. Field Robot. 2015, 32, 192–208. [Google Scholar] [CrossRef]
  98. Supers vs. Hybrid Systems-Good Judgment. Available online: https://goodjudgment.com/resources/the-superforecasters-track-record/supers-vs-hybrid-systems/ (accessed on 10 July 2021).
Figure 1. A taxonomy of hybrid intelligence. Biological intelligence spans from individuals in isolation to larger groups and networks performing a task. Orthogonal to this axis, algorithms can assist decisions by offering increasingly higher levels of intelligence, from smart assistants (Alexa, Siri, and Cortana) to system-level operators.