Intelligence and the Future of Work: Assessment, Relevance and Application of Cognitive and Other Abilities in the Age of Artificial Intelligence

A special issue of Journal of Intelligence (ISSN 2079-3200).

Deadline for manuscript submissions: closed (31 January 2022)

Special Issue Editors


Guest Editor
Prof. Dr. Tomas Chamorro-Premuzic
University College London, London, UK
Interests: individual differences; consumer behaviour; talent management; entrepreneurship; creativity; social media psychology; personality assessment; psychometrics; experiential profiling

Guest Editor
Dr. Franziska Leutner
University College London, London, UK
Interests: individual differences; behavioural analytics; computational psychometrics; personality profiling; cognitive ability testing; game-based assessment; industrial and organisational psychology; entrepreneurship; creativity; values; consumer psychology

Special Issue Information

Dear Colleagues,

Human intelligence and individual differences in ability have long fascinated scholars and lay people alike. Decades of research and several meta-analyses illustrate the importance of cognitive and other human abilities in determining life outcomes such as career success, longevity, and relationships. 

The fields of Organizational Psychology and Human Resource Management, in particular, have benefited from this research, whereby measures of intelligence and ability have been used to inform recruitment and selection, employee development, and coaching. 

The application of intelligence and ability testing in these industrial–organizational contexts is not without problems and criticisms: testing can be tedious and impractical, intelligence tests produce undesired group differences, and research on the relationship between ability and life outcomes is robust for cognitive ability, but less consistent for other abilities such as emotional intelligence. 

As with other areas in the social sciences, digitalization and the rise of artificial intelligence have enabled new directions for research on intelligence in the industrial–organizational context. Notably, they have given rise to new assessment formats for the measurement of intelligence and ability, including game-based assessments.

The world of work is rapidly changing. Research on the relationship between intelligence and career success is based on increasingly outdated modes of working. At the same time, with artificial intelligence and automation taking over tasks that were long performed by humans as well as producing information that was previously inaccessible, the requirements for worker abilities are changing.

This Special Issue aims to highlight new developments in intelligence research and testing across three areas of investigation:

  1. Technology-driven testing of intelligence and abilities, for example, in the form of game-based psychometric assessments
  2. Human intelligence in the age of artificial intelligence, including
    1. research on human abilities relevant to the workplace of the future 
    2. investigating differences and similarities in human and artificial intelligence decision making
  3. New findings from the application of intelligence and ability testing in industrial–organizational psychology, including
    1. the relationship between abilities and job performance
    2. evaluations of the impact of ability testing on diversity and inclusion

Prof. Dr. Tomas Chamorro-Premuzic
Dr. Franziska Leutner
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

Article
Selecting for Learning Potential: Is Implicit Learning the New Cognitive Ability?
by Luke M. Montuori and Lara Montefiori
J. Intell. 2022, 10(2), 24; https://doi.org/10.3390/jintelligence10020024 - 15 Apr 2022
Abstract
For decades, the field of workplace selection has been dominated by evidence that cognitive ability is the most important factor in predicting performance. Meta-analyses detailing the contributions of a wide range of factors to workplace performance show that cognitive ability’s contribution is partly mediated by the learning of task-relevant skills and job-specific declarative knowledge. Further, there is evidence to suggest that this relationship is a function of task complexity, and partially mediated by learning performance in workplace induction and training activities. Simultaneously, evidence is mounting that stable individual differences in implicit learning exist, which are at least partially independent of traditional measures of intelligence. In this article, we provide an overview of recent advances in our understanding of implicit learning, outline some of the advantages offered by its measurement, and highlight some of the challenges associated with its adoption as a measure of interest.
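
As a rough illustration of what "partly mediated" means in this context, the sketch below decomposes a simulated ability → job knowledge → performance relationship into direct and indirect effects. The data, variable names, and effect sizes are invented for illustration and are not taken from the article.

```python
# Minimal mediation sketch (simulated data, not from the article):
# cognitive ability -> job-specific knowledge -> job performance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

ability = rng.normal(size=n)                        # general cognitive ability (z-scored)
knowledge = 0.6 * ability + rng.normal(size=n)      # job-specific declarative knowledge
performance = 0.3 * ability + 0.5 * knowledge + rng.normal(size=n)

# Total effect: performance regressed on ability alone
total = sm.OLS(performance, sm.add_constant(ability)).fit()

# Direct effect: performance regressed on ability, controlling for the mediator
X = sm.add_constant(np.column_stack([ability, knowledge]))
direct = sm.OLS(performance, X).fit()

c_total = total.params[1]       # total effect of ability
c_direct = direct.params[1]     # direct effect after controlling for knowledge
indirect = c_total - c_direct   # portion transmitted through learned knowledge

print(f"total={c_total:.2f} direct={c_direct:.2f} indirect={indirect:.2f}")
print(f"proportion mediated ≈ {indirect / c_total:.0%}")
```
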
Article
Understanding the Emotional Intelligence Discourse on Social Media: Insights from the Analysis of Twitter
by Shardul Shankar and Vijayshri Tewari
J. Intell. 2021, 9(4), 56; https://doi.org/10.3390/jintelligence9040056 - 24 Nov 2021
Cited by 8
Abstract
Social networks have created an information diffusion corpus that provides users with an environment where they can express their views, form a community, and discuss topics of similar or dissimilar interests. Even though there has been an increasing demand for emotional analysis of users on social media platforms, the field of emotional intelligence (EI) has been rather slow in exploiting the enormous potential of social media for the research and practice of the framework. This study therefore examined the role that the microblogging platform Twitter plays in enhancing the understanding of the EI community, building on a Twitter analytics framework based on natural language processing to further develop the insights of EI research and practice. An analysis was conducted on 53,361 tweets extracted using the hashtag emotional intelligence, through descriptive analytics (DA), content analytics (CA), and network analytics (NA). The findings indicated that emotional intelligence tweets are posted mostly by speakers, psychologists (or other medical professionals), and business organizations, among others, who use them for information dissemination, communication with stakeholders, and hiring. These tweets carry strong positive sentiments and sparse connectedness. The findings present insights into the use of social media for understanding emotional intelligence.
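
As a rough, self-contained illustration of the kinds of descriptive, content, and network analytics the abstract mentions, the sketch below scores a few invented example tweets for sentiment (VADER), counts hashtags, and measures the density of a hashtag co-occurrence graph. It is not the authors' pipeline, and the tweets and hashtags are placeholders.

```python
# Toy descriptive/content/network analytics on invented tweets (not the study's data).
from collections import Counter
import itertools

import networkx as nx
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

tweets = [
    "Emotional intelligence is the key skill for leaders #emotionalintelligence #leadership",
    "Hiring for #emotionalintelligence and #empathy, not just IQ",
    "Great workshop on #emotionalintelligence today #HR",
]

# Content analytics: VADER compound sentiment score in [-1, 1] per tweet
sia = SentimentIntensityAnalyzer()
for t in tweets:
    print(round(sia.polarity_scores(t)["compound"], 2), t[:50])

# Descriptive analytics: most frequent hashtags
hashtags = [w.lower() for t in tweets for w in t.split() if w.startswith("#")]
print(Counter(hashtags).most_common(3))

# Network analytics: hashtag co-occurrence graph and its density ("connectedness")
G = nx.Graph()
for t in tweets:
    tags = sorted({w.lower() for w in t.split() if w.startswith("#")})
    G.add_edges_from(itertools.combinations(tags, 2))
print("density:", round(nx.density(G), 2))
```
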

Article
Examining the Use of Game-Based Assessments for Hiring Autistic Job Seekers
by Colin Willis, Tracy Powell-Rudy, Kelsie Colley and Joshua Prasad
J. Intell. 2021, 9(4), 53; https://doi.org/10.3390/jintelligence9040053 - 3 Nov 2021
Cited by 6
Abstract
Although people with autism are protected under the Americans with Disabilities Act of 1990, there is little theoretical or practical effort to determine whether traditional pre-employment assessments unfairly impact autistic job seekers. Due to their lack of emphasis on social communication, game-based assessments (GBAs) may offer a way of assessing candidate ability without disadvantaging autistic candidates. A total of 263 autistic job seekers took one of two game-based assessment packages designed to measure cognitive ability. After comparing their results with those of 323 college-graduate job seekers in the general population, we found that performance on the GBAs was generally similar in both populations, although some small differences were detected. Implications for hiring decisions are discussed.
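
As a hedged illustration of the kind of two-group comparison the abstract describes, the sketch below runs a Welch t-test and computes Cohen's d on simulated score distributions with the same group sizes. The scores are simulated; the study's data and effect sizes are not reproduced here.

```python
# Simulated comparison of GBA scores for two applicant groups (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
autistic = rng.normal(loc=0.05, scale=1.0, size=263)  # simulated standardized GBA scores
general = rng.normal(loc=0.00, scale=1.0, size=323)

# Welch's t-test (does not assume equal variances)
t, p = stats.ttest_ind(autistic, general, equal_var=False)

# Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt(((len(autistic) - 1) * autistic.var(ddof=1)
                     + (len(general) - 1) * general.var(ddof=1))
                    / (len(autistic) + len(general) - 2))
d = (autistic.mean() - general.mean()) / pooled_sd

print(f"Welch t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```
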

Article
Systematizing Audit in Algorithmic Recruitment
by Emre Kazim, Adriano Soares Koshiyama, Airlie Hilliard and Roseline Polle
J. Intell. 2021, 9(3), 46; https://doi.org/10.3390/jintelligence9030046 - 17 Sep 2021
Cited by 9
Abstract
Business psychologists study and assess relevant individual differences, such as intelligence and personality, in the context of work. Such studies have informed the development of artificial intelligence (AI) systems designed to measure individual differences. This has been capitalized on by companies who have developed AI-driven recruitment solutions that include aggregation of appropriate candidates (Hiretual), interviewing through a chatbot (Paradox), video interview assessment (MyInterview), and CV analysis (Textio), as well as estimation of psychometric characteristics through image- (Traitify) and game-based assessments (HireVue) and video interviews (Cammio). However, driven by concerns that such high-impact technology must be used responsibly, given the potential for the algorithms used by these tools to produce unfair hiring outcomes, there is an active effort towards providing mechanisms of governance for such automation. In this article, we apply a systematic algorithm audit framework in the context of the ethically critical industry of algorithmic recruitment systems, exploring how audit assessments of AI-driven systems can be used to assure that such systems are being responsibly deployed in a fair and well-governed manner. We outline sources of risk for the use of algorithmic hiring tools, suggest the most appropriate opportunities for audits to take place, recommend ways to measure bias in algorithms, and discuss the transparency of algorithms.
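
As one concrete example of the kind of bias measurement the abstract refers to, the sketch below computes the adverse impact (selection-rate) ratio for two groups and checks it against the common four-fifths rule. This is a generic illustration with invented counts, not the specific metrics recommended in the article.

```python
# Adverse impact ratio for an algorithmic screening stage (invented counts).
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes for two applicant groups
ratio = adverse_impact_ratio(selected_a=30, total_a=100,   # group A: 30% pass rate
                             selected_b=45, total_b=100)   # group B: 45% pass rate
print(f"adverse impact ratio = {ratio:.2f}")
print("below four-fifths threshold" if ratio < 0.8 else "meets four-fifths threshold")
```
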