Learning from Psychometric Data

A special issue of Psych (ISSN 2624-8611). This special issue belongs to the section "Psychometrics and Educational Measurement".

Deadline for manuscript submissions: closed (15 October 2020) | Viewed by 13,627

Special Issue Editor


Guest Editor
Department of Economics and Statistics, University of Udine, 33100 Udine, Italy
Interests: measurement error models; multilevel modeling; item response models; Rasch modeling; equating; social network analysis

Special Issue Information

Dear Colleagues,

In recent years, technology has enhanced the possibility of collecting and analyzing high-dimensional datasets, meaning both a large number of observations and a large number of variables. The complexity of the data often requires elaborate psychometric models that involve many parameters, possibly leading to numerical instability of the estimates. This is especially true in small samples, due to the limited information available. In all cases, it is important to extract the relevant patterns and relations, disentangling them from the noise. In this respect, statistical learning approaches can be exploited to reduce the variability of the estimates or to attain sparse solutions, hence improving both the interpretability of the results and the predictive capacity of the model. This Special Issue of Psych aims to review the current state of the art in statistical learning methods for psychometric data, propose methodological advancements, compare methods available in the literature by simulation and/or application to real data, and describe interesting case studies. It aims to cover, without being limited to, the following areas:

  • Regularization;
  • Shrinkage methods;
  • Variable selection;
  • Classification;
  • Partitioning;
  • Ensemble methods;
  • Dimension reduction.
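The shrinkage idea that runs through several of these topics can be illustrated in miniature: a ridge-style penalty pulls an estimate toward zero, trading a little bias for lower variance in small samples. The one-predictor sketch below is purely illustrative and is not drawn from any of the published papers:

```python
def ridge_slope(x, y, lam):
    """One-predictor ridge estimate (no intercept): the ordinary
    least-squares slope is shrunk toward zero as the penalty lam grows."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)
```

With `lam = 0` this reduces to the ordinary least-squares slope; larger penalties give smaller, more stable estimates.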

Prof. Dr. Michela Battauz
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Psych is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Statistical learning
  • Machine learning
  • Data mining
  • Measurement
  • Psychometrics
  • Data analysis

Published Papers (5 papers)


Research

23 pages, 1352 KiB  
Article
Better Rating Scale Scores with Information–Based Psychometrics
by James Ramsay, Juan Li and Marie Wiberg
Psych 2020, 2(4), 347-369; https://0-doi-org.brum.beds.ac.uk/10.3390/psych2040026 - 15 Dec 2020
Cited by 5 | Viewed by 2996
Abstract
Diagnostic scales are essential to the health and social sciences, and to the individuals that provide the data. Although statistical models for scale data have been researched for decades, it remains nearly universal that scale scores are sums of weights assigned a priori to question choice options (sum scores). We propose several modifications of psychometric testing theory that together demonstrate remarkable improvements in the quality of rating scale scores. Our model represents performance as a space with a metric structure by transforming probability into surprisal or information. The estimation algorithm permits the analysis of data from tens or hundreds of thousands of test takers in a few minutes on consumer-level computing equipment. Standard errors of performance estimates are shown to be as small as a quarter of those of sum scores. Open access software resources are presented.
(This article belongs to the Special Issue Learning from Psychometric Data)
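The probability-to-surprisal transform the abstract refers to can be sketched in a few lines. The function name and the choice of base are illustrative only; see the paper and its open access software for the actual implementation:

```python
import math

def surprisal(p, base=2):
    """Surprisal (information) of an event with probability p,
    measured in bits when base is 2: rare responses carry more
    information than common ones."""
    return -math.log(p, base)
```

Halving a response probability adds one bit of surprisal, which gives the performance space its metric structure.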

23 pages, 710 KiB  
Article
Automated Test Assembly for Large-Scale Standardized Assessments: Practical Issues and Possible Solutions
by Giada Spaccapanico Proietti, Mariagiulia Matteucci and Stefania Mignani
Psych 2020, 2(4), 315-337; https://0-doi-org.brum.beds.ac.uk/10.3390/psych2040024 - 25 Nov 2020
Cited by 3 | Viewed by 2716
Abstract
In testing situations, automated test assembly (ATA) is used to assemble single or multiple test forms that share the same psychometric characteristics, given a set of specific constraints, by means of specific solvers. However, in complex situations, which are typical of large-scale assessments, ATA models may be infeasible due to the large number of decision variables and constraints involved in the problem. The purpose of this paper is to formalize a standard procedure and two different strategies—namely, additive and subtractive—for overcoming practical ATA concerns with large-scale assessments and to show their effectiveness in two case studies. The MAXIMIN and MINIMAX ATA methods are used to assemble multiple test forms based on item response theory models for binary data. The main results show that the additive strategy is able to identify the specific constraints that make the model infeasible, while the subtractive strategy is a faster but less accurate process, which may not always be optimal. Overall, the procedures are able to produce parallel test forms with similar measurement precision and contents, and they minimize the number of items shared among the test forms. Further research could be done to investigate the properties of the proposed approaches under more complex testing conditions, such as multi-stage testing, and to blend the proposed approaches in order to obtain the solution that satisfies the largest set of constraints.
(This article belongs to the Special Issue Learning from Psychometric Data)
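For tiny item pools, the MAXIMIN criterion can be illustrated by exhaustive search: select the fixed-length form whose minimum test information over a grid of ability points is largest. This is a toy sketch with 2PL items and hypothetical function names; realistic ATA problems, as in the paper, require mixed-integer programming solvers:

```python
import math
from itertools import combinations

def item_info(a, b, theta):
    """Fisher information of a 2PL item with discrimination a
    and difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def maximin_form(pool, length, thetas):
    """Exhaustively choose `length` items from `pool` (a list of (a, b)
    pairs) maximizing the minimum test information over the grid `thetas`."""
    best_form, best_val = None, float("-inf")
    for form in combinations(range(len(pool)), length):
        val = min(sum(item_info(*pool[i], th) for i in form)
                  for th in thetas)
        if val > best_val:
            best_form, best_val = form, val
    return best_form, best_val
```

Enumeration is exponential in the pool size, which is exactly why large-scale assemblies resort to dedicated solvers and to strategies like the additive and subtractive ones the paper proposes.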

19 pages, 362 KiB  
Article
Measurement of Inter-Individual Variability in Assessing the Quality of Life in Respondents with Celiac Disease
by Silvia Bacci, Daniela Caso, Rosa Fabbricatore and Maria Iannario
Psych 2020, 2(4), 296-314; https://0-doi-org.brum.beds.ac.uk/10.3390/psych2040023 - 23 Nov 2020
Viewed by 2194
Abstract
Quality of life of Celiac Disease (CD) patients is affected by constraints in their physical, social and emotional behaviour. Our objective is to assess differences in two relevant dimensions of the Celiac Quality of Life (CQoL) scale, Limitations due to the disease and Dysphoria (i.e., feelings of depression and discomfort), in relation to perceived social support and some individual and disease-related characteristics. The paper exploits suitable unidimensional Item Response Theory (IRT) models to analyse the two dimensions of the CQoL individually, and Multidimensional Latent Class IRT models for ordinal polytomous items to detect sub-populations of CD patients that are homogeneous with respect to the perceived CQoL. The latter methods make it possible to assign patients with similar characteristics to the same treatment, while at the same time allowing a more tailored approach to health promotion programmes. The analysis extracts the relevant patterns and relations among CD patients, distinguishing respondents who received the CD diagnosis in adolescence or adulthood rather than in childhood (the former perceive higher levels of Limitations and Dysphoria), patients with high perceived social support, a factor that positively influences motivation to engage in the management of CD-related distress and psychological well-being, and participants who are married or cohabiting, who report higher latent trait levels.
(This article belongs to the Special Issue Learning from Psychometric Data)

10 pages, 385 KiB  
Article
Regularized Estimation of the Four-Parameter Logistic Model
by Michela Battauz
Psych 2020, 2(4), 269-278; https://0-doi-org.brum.beds.ac.uk/10.3390/psych2040020 - 16 Nov 2020
Cited by 11 | Viewed by 3174
Abstract
The four-parameter logistic model is an Item Response Theory model for dichotomous items that limits the probability of giving a positive response to an item to a restricted range, so that even people at the extremes of a latent trait do not have a probability close to zero or one. Despite the literature acknowledging the usefulness of this model in certain contexts, the difficulty of estimating the item parameters has limited its use in practice. In this paper, we propose a regularized approach to the estimation of the item parameters based on the inclusion of a penalty term in the log-likelihood function. Simulation studies show the good performance of the proposal, which is further illustrated through an application to a real data set.
(This article belongs to the Special Issue Learning from Psychometric Data)
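The four-parameter logistic response function, with lower asymptote c and upper asymptote d, and the general idea of a penalized log-likelihood can be sketched as follows. The quadratic penalty shrinking c toward 0 and d toward 1 is an illustrative assumption; the paper specifies the actual penalty term and estimation procedure:

```python
import math

def p4pl(theta, a, b, c, d):
    """Four-parameter logistic model: the response probability is
    bounded away from 0 and 1 by the asymptotes c < d."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

def penalized_negloglik(params, ys, thetas, lam):
    """Negative log-likelihood of one item's binary responses ys at
    abilities thetas, plus an illustrative quadratic penalty pulling
    c toward 0 and d toward 1."""
    a, b, c, d = params
    nll = 0.0
    for y, th in zip(ys, thetas):
        p = p4pl(th, a, b, c, d)
        nll -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return nll + lam * (c ** 2 + (1.0 - d) ** 2)
```

Minimizing the penalized objective stabilizes the asymptote estimates, which is the source of the numerical difficulty the abstract mentions.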

11 pages, 832 KiB  
Article
Conditional or Pseudo Exact Tests with an Application in the Context of Modeling Response Times
by Clemens Draxler and Stephan Dahm
Psych 2020, 2(4), 198-208; https://0-doi-org.brum.beds.ac.uk/10.3390/psych2040017 - 30 Oct 2020
Cited by 4 | Viewed by 1962
Abstract
This paper treats a so-called pseudo-exact or conditional approach to testing assumptions of a psychometric model known as the Rasch model. Draxler and Zessin derived the power function of such tests. They provide an alternative to asymptotic or large-sample theory, i.e., chi-square tests, since they are also valid in small-sample scenarios. This paper suggests an extension and applies it in a research context investigating the effects of response times. In particular, the interest lies in examining the influence of response times on the unidimensionality assumption of the model. A real data example is provided which illustrates the application, including a power analysis of the test, and points to possible drawbacks.
(This article belongs to the Special Issue Learning from Psychometric Data)
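Conditional (pseudo-exact) Rasch tests condition on the row and column sums of the binary response matrix. A Monte Carlo flavour of this idea can be sketched by sampling matrices with fixed margins via 2x2 "checkerboard" swaps; this is an illustrative sketch with hypothetical function names, not the paper's actual test or power function:

```python
import random

def swap_step(m, rng):
    """One attempted 2x2 checkerboard swap on the 0/1 matrix m
    (modified in place): it preserves all row and column sums."""
    i, j = rng.sample(range(len(m)), 2)
    k, l = rng.sample(range(len(m[0])), 2)
    if m[i][k] == m[j][l] and m[i][l] == m[j][k] and m[i][k] != m[i][l]:
        m[i][k], m[i][l] = m[i][l], m[i][k]
        m[j][k], m[j][l] = m[j][l], m[j][k]

def conditional_pvalue(mat, stat, n_samples=2000, burn=200, seed=0):
    """Monte Carlo p-value of `stat` under the fixed-margins null,
    sampled by a swap chain started at the observed matrix."""
    rng = random.Random(seed)
    m = [row[:] for row in mat]
    obs = stat(mat)
    for _ in range(burn):
        swap_step(m, rng)
    hits = 0
    for _ in range(n_samples):
        for _ in range(10):  # thinning between retained samples
            swap_step(m, rng)
        if stat(m) >= obs:
            hits += 1
    return (hits + 1) / (n_samples + 1)
```

Because the margins are sufficient statistics for the Rasch model's person and item parameters, conditioning on them removes those nuisance parameters, which is what makes the tests valid in small samples.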
