Artificial Intelligence and Education

A special issue of Education Sciences (ISSN 2227-7102).

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 26514

Special Issue Editor


Dr. Georgios N. Kouziokas
Guest Editor
School of Engineering, University of Thessaly, Volos, Greece
Interests: applied mathematics; artificial intelligence in education; artificial intelligence in public administration; artificial neural networks; data analysis; educational informatics; educational management; environment; geographical information systems; GIS in education; machine learning; predicting student performance; project based learning; remote sensing; technology in education; spatial analysis; university teaching; web-based learning systems

Special Issue Information

Dear Colleagues,

The application of artificial intelligence has increased rapidly in many scientific sectors, and the development of artificial intelligence-based educational techniques has advanced significantly in recent years. Implementing artificial intelligence and artificial neural networks in education encompasses many kinds of intelligent instructional and evaluation techniques, such as intelligent tutoring systems, intelligent assessment of student performance, intelligent virtual agents, talking robots, humanized chatbots, and other educational techniques based on artificial intelligence.

Utilizing artificial intelligence technologies in the classroom can be valuable to many kinds of learners, but most crucially to students with special needs, who can benefit from the more flexible educational solutions that artificial intelligence can provide. Artificial intelligence can be combined with other technologies (e.g., speech recognition) to develop artificial talking tutors that can communicate and interact with students during the learning process. Furthermore, intelligent tutoring systems (ITSs) can serve as e-learning systems based on artificial intelligence approaches that advance adaptive and personalized learning tailored to the individual characteristics of each student.

Artificial intelligence can improve teaching practices in the classroom in many ways, which is the main reason we need to further examine its involvement in educational processes.

This Special Issue aims to highlight the broad field of artificial intelligence applications in education, covering any type of artificial intelligence related to education, such as learning methodologies, intelligent tutoring systems, intelligent student guidance and assessment, intelligent educational chatbots, and artificial tutors, in order to advance and enrich the existing literature with new artificial intelligence approaches and methodologies in education.

We look forward to receiving your submissions of new research or review articles focusing on the application of artificial intelligence-based techniques in education.

Dr. Georgios N. Kouziokas
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Education Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence in education
  • artificial intelligence algorithms in education
  • artificial neural networks in education
  • artificial intelligence in student evaluation
  • assessing student performance using artificial intelligence
  • educational robotics
  • evaluation of artificial intelligence educational systems
  • Generalized Intelligent Framework for Tutoring
  • intelligent adaptive learning
  • intelligent agent-based learning environments
  • intelligent agents on the internet
  • intelligent chatbots in education
  • intelligent tutoring systems
  • intelligent virtual reality based learning systems
  • pedagogical artificial agents
  • spatial artificial intelligence in education

Published Papers (5 papers)


Research


18 pages, 3631 KiB  
Article
Extending Smart Phone Based Techniques to Provide AI Flavored Interaction with DIY Robots, over Wi-Fi and LoRa interfaces
by Dimitrios Loukatos and Konstantinos G. Arvanitis
Educ. Sci. 2019, 9(3), 224; https://0-doi-org.brum.beds.ac.uk/10.3390/educsci9030224 - 27 Aug 2019
Cited by 17 | Viewed by 4576
Abstract
Inspired by the mobile phone market boost, several low cost credit card-sized computers have made the scene, able to support educational applications with artificial intelligence features, intended for students of various levels. This paper describes the learning experience and highlights the technologies used to improve the function of DIY robots. The paper also reports on the students’ perceptions of this experience. The students participating in this problem-based learning activity, despite having a weak programming background and a confined time schedule, tried to find efficient ways to improve the DIY robotic vehicle construction and better interact with it. Scenario cases under investigation, mainly via smart phones or tablets, involved methods ranging from touch buttons to gesture and voice recognition, exploiting modern AI techniques. The robotic platform used generic hardware, namely Arduino and Raspberry Pi units, and incorporated basic automatic control functionality. Several programming environments, from MIT App Inventor to C and Python, were used. Apart from cloud-based methods to tackle the voice recognition issues, locally running software alternatives were assessed to provide better autonomy. Typically, scenarios were performed through Wi-Fi interfaces, while the whole functionality was extended by using LoRa interfaces to improve the robot’s controlling distance. Through experimentation, students were able to apply cutting-edge technologies to construct, integrate, evaluate and improve interaction with custom robotic vehicle solutions. The whole activity involved technologies similar to the ones making the scene in the modern agriculture era that students need to be familiar with, as future professionals.
(This article belongs to the Special Issue Artificial Intelligence and Education)
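
As a rough illustration of the voice-driven interaction described above, the following Python sketch maps cloud speech recognition output to single-character drive commands sent over a serial link to an Arduino-based vehicle. It is not code from the paper; the command words, serial port, and one-byte protocol are assumptions, and it presumes the speech_recognition and pyserial packages plus a working microphone.

    # Hypothetical sketch: cloud speech recognition mapped to DIY robot commands.
    # Not from the paper; command set, serial port, and protocol are assumptions.
    import serial                      # pyserial, for the Arduino link
    import speech_recognition as sr    # wrapper around cloud/offline recognizers

    COMMANDS = {"forward": b"F", "back": b"B", "left": b"L", "right": b"R", "stop": b"S"}

    def listen_and_drive(port="/dev/ttyACM0", baud=9600):
        recognizer = sr.Recognizer()
        link = serial.Serial(port, baud, timeout=1)
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            print("Say a command (forward, back, left, right, stop)...")
            audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio).lower()  # cloud recognizer
        except sr.UnknownValueError:
            print("Speech not understood")
            return
        for word, code in COMMANDS.items():
            if word in text:
                link.write(code)       # one-byte command read by the Arduino sketch
                print(f"Sent {word!r}")
                return
        print(f"No known command in: {text!r}")

    if __name__ == "__main__":
        listen_and_drive()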

22 pages, 2365 KiB  
Article
“I Didn’t Understand, I’m Really Not Very Smart”—How Design of a Digital Tutee’s Self-Efficacy Affects Conversation and Student Behavior in a Digital Math Game
by Betty Tärning and Annika Silvervarg
Educ. Sci. 2019, 9(3), 197; https://0-doi-org.brum.beds.ac.uk/10.3390/educsci9030197 - 24 Jul 2019
Cited by 10 | Viewed by 3590
Abstract
How should a pedagogical agent in educational software be designed to support student learning? This question is complex seeing as there are many types of pedagogical agents and design features, and the effect on different student groups can vary. In this paper we explore the effects of designing a pedagogical agent’s self-efficacy in order to see what effects this has on students’ interaction with it. We have analyzed chat logs from an educational math game incorporating an agent, which acts as a digital tutee. The tutee expresses high or low self-efficacy through feedback given in the chat. This has been performed in relation to the students’ own self-efficacy. Our previous results indicated that it is more beneficial to design a digital tutee with low self-efficacy than one with high self-efficacy. In this paper, these results are further explored and explained in terms of an increase in the protégé effect and a reverse role modelling effect, whereby the students encourage digital tutees with low self-efficacy. However, there are indications of potential drawbacks that should be further investigated. Some students expressed frustration with the digital tutee with low self-efficacy. A future direction could be to look at more adaptive agents that change their self-efficacy over time as they learn.
(This article belongs to the Special Issue Artificial Intelligence and Education)
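
Purely as a hypothetical sketch (not the game studied in the paper), the snippet below shows one way a digital tutee's configured self-efficacy condition could select its chat feedback; the condition names and phrasings are invented for illustration.

    # Hypothetical sketch of self-efficacy-conditioned tutee feedback; not from the study.
    import random

    FEEDBACK = {
        "low":  ["I didn't understand, I'm really not very smart...",
                 "I'm not sure I can get this right."],
        "high": ["I've got this! Let me try.",
                 "I'm sure I can answer that."],
    }

    def tutee_reply(self_efficacy: str, answered_correctly: bool) -> str:
        """Pick a chat line reflecting the tutee's designed self-efficacy condition."""
        line = random.choice(FEEDBACK[self_efficacy])
        outcome = "That worked!" if answered_correctly else "That went wrong."
        return f"{line} {outcome}"

    print(tutee_reply("low", answered_correctly=False))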

14 pages, 1056 KiB  
Article
Optimal Weighting for Exam Composition
by Sam Ganzfried and Farzana Yusuf
Educ. Sci. 2018, 8(1), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/educsci8010036 - 09 Mar 2018
Cited by 3 | Viewed by 5098
Abstract
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on a rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were 30 multiple choice questions worth 3 points, 15 true/false with explanation questions worth 4 points, and 5 analytical exercises worth 10 points. We describe a novel framework where algorithms from machine learning are used to modify the exam question weights in order to optimize the exam scores, using the overall final score as a proxy for a student’s true ability. We show that significant error reduction can be obtained by our approach over standard weighting schemes; i.e., for the final and midterm exams, the mean absolute error of prediction decreases by 90.58% and 97.70%, respectively, for the linear regression approach, resulting in better estimation. We make several new observations regarding the properties of the “good” and “bad” exam questions that can have an impact on the design of improved future evaluation methods.
(This article belongs to the Special Issue Artificial Intelligence and Education)
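
To make the weighting idea concrete, here is a minimal sketch that learns per-question weights by regressing simulated per-question scores against a proxy for ability, and compares the error against a uniform weighting baseline. It assumes scikit-learn 0.24 or later and synthetic data; it is not the authors' exact formulation.

    # Illustrative sketch: learn per-question weights so the weighted exam score
    # approximates a proxy for ability. Synthetic data; not the paper's pipeline.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n_students, n_questions = 100, 20
    X = rng.random((n_students, n_questions))                          # per-question scores in [0, 1]
    y = X @ rng.random(n_questions) + rng.normal(0, 0.1, n_students)   # proxy for true ability

    # Baseline: one uniform weight per question, scaled to the target's range.
    uniform_w = np.full(n_questions, y.mean() / X.sum(axis=1).mean())
    baseline_mae = mean_absolute_error(y, X @ uniform_w)

    # Learned weights: linear regression without intercept, non-negative coefficients.
    model = LinearRegression(positive=True, fit_intercept=False).fit(X, y)
    learned_mae = mean_absolute_error(y, model.predict(X))

    print(f"Uniform-weights MAE: {baseline_mae:.3f}  Learned-weights MAE: {learned_mae:.3f}")
    print("Per-question weights:", np.round(model.coef_, 2))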

Other


24 pages, 2192 KiB  
Case Report
The AI-Atlas: Didactics for Teaching AI and Machine Learning On-Site, Online, and Hybrid
by Thilo Stadelmann, Julian Keuzenkamp, Helmut Grabner and Christoph Würsch
Educ. Sci. 2021, 11(7), 318; https://0-doi-org.brum.beds.ac.uk/10.3390/educsci11070318 - 25 Jun 2021
Cited by 10 | Viewed by 5167
Abstract
We present the “AI-Atlas” didactic concept as a coherent set of best practices for teaching Artificial Intelligence (AI) and Machine Learning (ML) to a technical audience in tertiary education, and report on its implementation and evaluation within a design-based research framework and two actual courses: an introduction to AI within the final year of an undergraduate computer science program, as well as an introduction to ML within an interdisciplinary graduate program in engineering. The concept was developed in reaction to the recent AI surge and the corresponding demand for foundational teaching on the subject to a broad and diverse audience, with on-site teaching of small classes in mind and designed to build on the specific strengths of the lecturers in motivational public speaking. The research question and focus of our evaluation is to what extent the concept serves this purpose, specifically taking into account the necessary but unforeseen transfer to ongoing hybrid and fully online teaching since March 2020 due to the COVID-19 pandemic. Our contribution is two-fold: besides (i) presenting a general didactic concept for tertiary engineering education in AI and ML, ready for adoption, we (ii) draw conclusions from the comparison of qualitative student evaluations (n = 24–30) and quantitative exam results (n = 62–113) from two full semesters under pandemic conditions with the results of previous years (participants from Zurich, Switzerland). This yields specific recommendations for the adoption of any technical curriculum under flexible teaching conditions—be it on-site, hybrid, or online.
(This article belongs to the Special Issue Artificial Intelligence and Education)

32 pages, 13205 KiB  
Tutorial
Educational Stakeholders’ Independent Evaluation of an Artificial Intelligence-Enabled Adaptive Learning System Using Bayesian Network Predictive Simulations
by Meng-Leong HOW and Wei Loong David HUNG
Educ. Sci. 2019, 9(2), 110; https://0-doi-org.brum.beds.ac.uk/10.3390/educsci9020110 - 20 May 2019
Cited by 21 | Viewed by 6041
Abstract
Artificial intelligence-enabled adaptive learning systems (AI-ALS) are increasingly being deployed in education to address the learning needs of students. However, educational stakeholders are required by policy-makers to conduct an independent evaluation of the AI-ALS using a small sample size in a pilot study before that AI-ALS can be approved for large-scale deployment. Beyond simply believing the information provided by the AI-ALS supplier, there arises a need for educational stakeholders to independently understand the motif of the pedagogical characteristics that underlie the AI-ALS. Laudable efforts have been made by researchers to engender frameworks for the evaluation of AI-ALS. Nevertheless, those highly technical techniques often require advanced mathematical knowledge or computer programming skills. There remains a dearth in the extant literature of a more intuitive way for educational stakeholders—rather than computer scientists—to carry out the independent evaluation of an AI-ALS to understand how it could provide opportunities to educe the problem-solving abilities of the students so that they can successfully learn the subject matter. This paper proffers an approach for educational stakeholders to employ Bayesian networks to simulate predictive hypothetical scenarios with controllable parameters to better inform them about the suitability of the AI-ALS for the students.
(This article belongs to the Special Issue Artificial Intelligence and Education)
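
As a rough sketch of the kind of predictive what-if simulation the paper advocates (not the authors' actual model), the code below builds a toy Bayesian network with pgmpy and queries the probability of mastery under a hypothesized engagement level; the variables and probabilities are invented, and a recent pgmpy release is assumed (where the model class is named BayesianNetwork).

    # Toy Bayesian network for a "what-if" query; variables and CPDs are invented.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Engagement -> Mastery (both binary: 0 = low, 1 = high)
    model = BayesianNetwork([("Engagement", "Mastery")])
    cpd_eng = TabularCPD("Engagement", 2, [[0.4], [0.6]])
    cpd_mas = TabularCPD("Mastery", 2,
                         [[0.7, 0.2],   # P(Mastery=0 | Engagement=0, 1)
                          [0.3, 0.8]],  # P(Mastery=1 | Engagement=0, 1)
                         evidence=["Engagement"], evidence_card=[2])
    model.add_cpds(cpd_eng, cpd_mas)
    assert model.check_model()

    # Hypothetical scenario: what if the AI-ALS drives engagement high?
    infer = VariableElimination(model)
    print(infer.query(["Mastery"], evidence={"Engagement": 1}))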
