Learning Analytics to Aid Formative Assessment—a Focus on the Role of Underlying Mathematical Models

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Engineering Mathematics".

Deadline for manuscript submissions: closed (30 August 2022) | Viewed by 6256

Special Issue Editor


Guest Editor
Department of Science Education, University of Copenhagen, 2100 Copenhagen, Denmark
Interests: complex networks; learning analytics; formative assessment; educational technology; inquiry-based education; scenario narratives

Special Issue Information

Dear Colleagues,

This Special Issue welcomes reports on studies that combine assessment for learning with learning analytics.

Assessment is an integral part of teaching and learning at any age and in any context, and it can serve two different purposes. Assessment for learning is a feedback process in which learners receive information that they can then use to improve. Assessment of learning, on the other hand, evaluates the present state relative to a reference state, e.g., the current skill level in relation to a desired level.

In learning analytics, researchers attempt to “exploit data generated in educational settings for purposes of optimizing learning and the environments in which it occurs” (Nouri et al., 2019). Learning analytics has been used to predict student success in educational systems and to assess various parts of learning processes after they have occurred. However, there is also potential for using learning analytics for formative purposes: to provide learners with information that they can use to improve.

Such an endeavor requires valid and robust underlying mathematical models. Using such models, it should be possible to tailor feedback to learners. For example, humans may use such models to become aware of regularities or patterns in learner behavior that would not otherwise be apparent. Another example is the design of automated systems that use mathematical models to provide appropriate feedback to learners, as sketched below. The feedback provided might be visual, auditory, haptic, or any combination of sensory modalities.
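
As a purely illustrative sketch, the Python snippet below shows one way a simple mathematical model could drive automated formative feedback: a logistic estimate of mastery probability is computed from a learner's recent responses, and a threshold rule selects a feedback message. The function names, constants, and messages are hypothetical assumptions, not drawn from any particular study.

```python
# Minimal, hypothetical sketch: a toy automated feedback loop in which a
# simple mathematical model (a logistic estimate of mastery probability)
# selects a formative feedback message. All thresholds and messages are
# illustrative assumptions.

import math


def mastery_probability(correct: int, attempts: int, difficulty: float = 0.0) -> float:
    """Estimate mastery with a logistic function of the observed success rate."""
    if attempts == 0:
        return 0.5  # no evidence yet
    success_rate = correct / attempts
    # Map the (difficulty-shifted) success rate onto the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-6.0 * (success_rate - 0.5 - difficulty)))


def formative_feedback(p_mastery: float) -> str:
    """Translate the model output into a (hypothetical) feedback message."""
    if p_mastery < 0.4:
        return "Revisit the worked examples before the next exercise set."
    if p_mastery < 0.7:
        return "You are on track; try the mixed-practice problems next."
    return "Strong progress; move on to the transfer tasks."


if __name__ == "__main__":
    p = mastery_probability(correct=7, attempts=10)
    print(f"Estimated mastery: {p:.2f} -> {formative_feedback(p)}")
```

In practice, any such model would need to be validated against learner data rather than fixed by hand, which is precisely the kind of work this Special Issue invites.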

As such, this Special Issue welcomes any study that uses a mathematical model to directly aid the provision of formative assessment to learners.

Dr. Jesper Bruun
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • assessment
  • formative
  • learning analytics
  • feedback
  • mathematical models

Published Papers (3 papers)


Research

19 pages, 1436 KiB  
Article
How Micro-Lectures Improve Learning Satisfaction and Achievement: A Combination of ECM and Extension of TAM Models
by Peijie Jiang, Tommy Tanu Wijaya, Mailizar Mailizar, Zulfah Zulfah and Astuti Astuti
Mathematics 2022, 10(19), 3430; https://doi.org/10.3390/math10193430 - 21 Sep 2022
Cited by 4 | Viewed by 2470
Abstract
This study aimed to examine the potential of micro-lectures as effective technology-based learning media in mathematics. It proposed the hypothesis that using micro-lectures affects learning satisfaction and achievement in mathematics. Data were collected using a questionnaire developed from the technology acceptance model (TAM) and the extended expectation confirmation model (ECM). Respondents comprised 233 students from six classes that used micro-lectures to learn mathematics for one semester at a public junior high school. The data were analyzed quantitatively using structural equation modeling assisted by SmartPLS 3.0 software. The results showed that perceived usefulness was the most significant factor in learning achievement, and student attitude towards micro-lectures was the strongest positive factor in learning satisfaction. Furthermore, the proposed model explained 76.9% and 77.3% of the variance in learning achievement and in satisfaction with using micro-lectures, respectively. These results indicate that using micro-lectures in mathematics lessons increases learning satisfaction and achievement, and they could assist schools, teachers, and local education ministries in planning, evaluating, and implementing micro-lectures in teaching and learning activities to improve education quality.

21 pages, 2080 KiB  
Article
Cognitive Trait Model: Measurement Model for Mastery Level and Progression of Learning
by Jaehwa Choi
Mathematics 2022, 10(15), 2651; https://doi.org/10.3390/math10152651 - 28 Jul 2022
Cited by 1 | Viewed by 1096
Abstract
This paper seeks to establish a framework that operationalizes cognitive traits as a proportion of a predefined mastery level, i.e., the highest level at which a learner is expected to successfully perform all of the relevant tasks of the target trait. This perspective allows cognitive trait levels to be used and interpreted as relative quantities (e.g., percentages) of the mastery level instead of relative standings (i.e., rankings) on an unbounded continuum. To facilitate this perspective, the paper presents an analytical framework with support on the [0, 1] trait continuum based on truncated logistic link functions. The framework offers a way to address the chronic question of “relative standings or magnitudes of learning outcome?” in measuring cognitive traits. The proposed framework is articulated relative to traditional models and is illustrated with both simulated and empirical datasets within a Bayesian framework, estimated with the Markov chain Monte Carlo method.
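
As a rough illustration only (not the author's specification), the Python sketch below shows how a logistic item response curve can be renormalized so that the latent trait is read as a proportion of a predefined mastery level on the bounded interval [0, 1]; the slope and location constants are assumptions.

```python
# Hypothetical sketch of a "truncated" logistic item response function on a
# bounded trait scale: theta is interpreted as a proportion of a predefined
# mastery level, so it lives on [0, 1] rather than an unbounded continuum.
# The constants a (slope) and b (location) are illustrative assumptions.

import numpy as np


def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))


def truncated_logistic_irf(theta, a=8.0, b=0.5):
    """P(correct | theta) with theta restricted to [0, 1].

    A two-parameter logistic curve is renormalized so that it spans exactly
    [0, 1] over the bounded trait range (a crude form of truncation).
    """
    theta = np.clip(theta, 0.0, 1.0)
    lo, hi = logistic(a * (0.0 - b)), logistic(a * (1.0 - b))
    return (logistic(a * (theta - b)) - lo) / (hi - lo)


if __name__ == "__main__":
    for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"theta = {theta:.2f}  P(correct) = {truncated_logistic_irf(theta):.3f}")
```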

19 pages, 1346 KiB  
Article
Learning Analytics and Computerized Formative Assessments: An Application of Dijkstra’s Shortest Path Algorithm for Personalized Test Scheduling
by Okan Bulut, Jinnie Shin and Damien C. Cormier
Mathematics 2022, 10(13), 2230; https://doi.org/10.3390/math10132230 - 25 Jun 2022
Cited by 5 | Viewed by 2038
Abstract
The use of computerized formative assessments in K–12 classrooms has yielded valuable data that can be utilized by learning analytics (LA) systems to produce actionable insights for teachers and other school-based professionals. For example, LA systems utilizing computerized formative assessments can be used for monitoring students’ progress in reading and identifying struggling readers. Using such LA systems, teachers can also determine whether progress is adequate as the student works towards their instructional goal. However, due to the lack of guidelines on the timing, number, and frequency of computerized formative assessments, teachers often follow a one-size-fits-all approach by testing all students together on pre-determined dates. This approach leads to rigid test scheduling that ignores the pace at which students improve their reading skills. In some cases, the consequence is testing that yields little to no useful data, while increasing the amount of instructional time that students miss. In this study, we propose an intelligent recommender system (IRS) based on Dijkstra’s shortest path algorithm that can produce an optimal assessment schedule for each student based on their reading progress throughout the school year. We demonstrated the feasibility of the IRS using real data from a large sample of students in grade two (n = 668,324) and grade four (n = 727,147) who participated in a series of computerized reading assessments. We also conducted a Monte Carlo simulation study to evaluate the performance of the IRS in the presence of unusual growth trajectories in reading (e.g., negative growth, no growth, and plateau). Our results showed that the IRS could reduce the number of test administrations required at both grade levels by eliminating test administrations in which students’ reading growth did not change substantially. In addition, the simulation results indicated that the IRS could yield robust results with meaningful recommendations under relatively extreme growth trajectories. Implications for the use of recommender systems in K–12 education and recommendations for future research are discussed.
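
As a purely illustrative sketch (it does not reproduce the paper's intelligent recommender system), the Python code below shows how Dijkstra's shortest path algorithm could select a low-cost sequence of assessment occasions when candidate test weeks are graph nodes and edge weights encode an assumed cost of testing plus information lost by waiting; the week labels and weights are hypothetical.

```python
# Hypothetical sketch: Dijkstra's shortest path algorithm applied to test
# scheduling. Nodes are candidate assessment weeks; edge weights are made-up
# costs combining testing burden and information lost by waiting.

import heapq


def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted directed graph.

    graph: dict mapping node -> list of (neighbor, non-negative weight).
    Returns (distances, predecessors) for path reconstruction.
    """
    dist = {source: 0.0}
    prev = {}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    return dist, prev


def reconstruct(prev, source, target):
    """Rebuild the cheapest path from source to target."""
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path))


if __name__ == "__main__":
    # Hypothetical candidate test weeks with assumed scheduling costs.
    weeks = {
        "week01": [("week05", 2.0), ("week09", 5.0)],
        "week05": [("week09", 1.5), ("week13", 4.0)],
        "week09": [("week13", 1.0)],
        "week13": [],
    }
    dist, prev = dijkstra(weeks, "week01")
    print("Suggested schedule:", reconstruct(prev, "week01", "week13"),
          "| total cost:", dist["week13"])
```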
