Conference Report

Waking Up In the Morning (WUIM): A Smart Learning Environment for Students with Learning Difficulties

by Polyxeni Kaimara 1,*, Ioannis Deliyannis 1, Andreas Oikonomou 2 and Emmanuel Fokides 3
1 Department of Audio and Visual Arts, Ionian University, 49100 Corfu, Greece
2 School of Pedagogical & Technological Education, 54639 Thessaloniki, Greece
3 Department of Primary School Education, University of the Aegean, 85132 Rhodes, Greece
* Author to whom correspondence should be addressed.
Submission received: 2 June 2021 / Revised: 10 July 2021 / Accepted: 12 July 2021 / Published: 16 July 2021

Abstract:
Effectiveness, efficiency, scalability, autonomy, engagement, flexibility, adaptiveness, personalization, conversationality, reflectiveness, innovation, and self-organization are some of the fundamental features of smart environments. Smart environments are considered good practice for formal and informal education; however, it is important to point out the pedagogical approaches on which they are based. Smart learning environments (SLEs) underline the flexibility of an eclectic pedagogy that places students at the center of any educational process and takes into account the diversity in classrooms. Thus, SLEs incorporate pedagogical principles derived from (1) traditional learning theories, e.g., behaviorism and constructivism, (2) contemporary pedagogical philosophy, e.g., differentiated teaching and universal design for learning, and (3) theories that provide specific instructions for educational design, e.g., the cognitive theory of multimedia learning and the gamification of learning. The innovative concept of transmedia learning is an eclectic pedagogical approach which, in addition to learning principles, blends all currently available media. WUIM is a transmedia program for training independent living skills, aimed primarily at children with learning disabilities, which emerged from the synthesis of pedagogical theories, traditional educational materials, and cutting-edge technologies such as augmented and virtual reality, together with art-based production methodologies. This paper outlines the development of WUIM, from the prototyping presented at the 4th International Conference in Creative Writing (2019) to the Alpha and Beta stages, including user and expert evaluations.

1. Introduction

Mapping the field of “smart environments” first requires clarification of the term “smart”. What exactly is a smart environment, and which features must an environment have to be considered smart? Reviewing the proposed definitions of smartness (intelligence), we conclude that features such as the ability to learn and the ability to adapt are frequently included [1]. In the context of learning, a set of indicators divided into three categories provides the framework for defining smart learning environments (SLEs): (a) necessary, (b) highly desirable, and (c) likely [2]. More specifically, the category “necessary” includes parameters such as effectiveness, efficiency, scalability, and autonomy. “Highly desirable” refers to SLEs that are characterized by engagement, flexibility, adaptiveness, and personalization. The category “likely” encompasses conversationality, reflectiveness, innovation, and self-organization. SLEs rely on smart technologies, both hardware and software. Smart hardware includes small, portable, and affordable devices such as smartphones/tablets, laptops, Google Cardboards, VR glasses, head-mounted devices (HMDs), electronic bags (e-bags), wearable devices, and sensors that support students and teachers anytime and anywhere, as well as bigger devices such as interactive whiteboards, smart tables, and cloud computing [3,4,5]. Respectively, adaptability and flexibility are the fundamentals of smart software. Smart software is used to control all kinds of learning systems, learning tools, online resources, and educational games that use social networking, learning analytics, visualization, virtual reality, etc. Developing SLEs is a process that can be described from different perspectives. An inclusive perspective comprises the availability of hardware/software and their use in content development, user interface (UI) design, interaction design (IxD), and ultimately user experience (UX) design [6]. Contemporary technologies such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and holograms bring together these features. These technologies are often treated as distinct; however, this is not always the case [7]. A pioneering contemporary trend is to spread content across multiple platforms, creating transmedia experiences.
SLEs support the design and development of innovative transmedia alternatives for students and teachers, while interactive games are an example of an innovative learning experience. The systems developed by our transdisciplinary design team include some or all of these cutting-edge technologies and use gamification techniques [7,8,9,10,11,12]. The principal pillar of a transmedia learning experience is interactivity. When designing educational projects, interactivity is defined by a four-dimensional dynamic system: learners, content, pedagogy, and context [13,14].
For the WUIM project, the four dimensions are analyzed as follows:
  • Learners: Students with special educational needs (SEN) and/or disability (D) (from now on SEND) and their typically developing (TD) peers.
  • Context: Inclusive settings, where students from various backgrounds and with different physical, cognitive, and psychosocial skills are welcomed by their neighborhood schools. The concept of an inclusive educational context (inclusive education means that all children are educated in the same classrooms and the same schools, providing real learning opportunities for groups that have traditionally been excluded, not only children with disabilities but also children from diverse cultural, religious, and ethnic backgrounds) is the result of successive discussions in international forums. The culmination of the international meetings was the “Statement and Framework for Action on Special Needs Education” approved by the World Conference on Special Needs organized by the Government of Spain in cooperation with UNESCO in Salamanca from 7 to 10 June 1994 [15].
  • Content: Derives from the functional domain of Activities of Daily Living (ADLs) and more specifically from the field of training in morning routines using traditional educational materials, and cutting-edge technologies such as AR, VR, and 360° videos.
  • Pedagogy: The synthesis of traditional learning theories such as behaviorism and constructivism and innovative teaching-learning approaches such as Differentiated Instruction (DI) [16], Cognitive Theory of Multimedia Learning (CTML) [17,18] and Universal Design for Learning (UDL) [19,20].
In such an educational environment, the transmedia storytelling process can be a valuable tool, a novel educational strategy, and a resource for deeper learning through learners’ participation, experimentation, and expression [21]. WUIM design and development encompass the most modern technologies currently available, such as video production with 360-degree cameras, AR, and virtual reality (VR), combined with gamification techniques and traditional educational puzzles, while providing technical and scientific know-how to the school community, i.e., teachers, students, and parents, to develop their own stories in the context of transmedia learning. The overall WUIM project includes three games: (1) WUIM-Puzzle, (2) WUIM-AR, and (3) WUIM-VR.
In this work, we present the development stage of the WUIM project, i.e., the Alpha and Beta stages, and the improvements made after the formative evaluation conducted by potential users of the applications. The original conference paper, “Transmedia storytelling meets Special Educational Needs students: a case of Daily Living Skills Training”, presented at the 4th International Creative Writing Conference (2019), describes the stages of design, content creation, and prototyping [22].

2. Theoretical and Conceptual Framework

2.1. Smart Learning Environments, Interactivity, and Gamification

Smart learning environments (SLEs) [7] help students learn not from but with their teachers while practicing life skills in collaboration with their classmates. For a learning environment to be called smart, some conditions should be met. An SLE is enriched with suitable digital material that draws its content from real-world problems and, at the same time, is adaptable, effective, efficient, and enjoyable, and can engage students and teachers anytime and anywhere [2,3]. Developing SLEs is a software engineering process that creates educational content combined with pedagogical principles [7]. The following figure visualizes the overall description of smart learning environments as rendered by Spector [23] (Figure 1). Designers and teachers collaborate to ensure that the learning materials provided are flexible and in line with students’ interests and needs, and that the platform that distributes them can supply immediate feedback, improve over time, monitor effort and student performance, offer personalized tutoring, suggest solutions, and report the outcomes. Through this process, students are empowered and not only learn but also create their own learning materials.
In the past, educational content was delivered through single-media systems, whereas later innovative systems are designed to offer adaptable interactive scenarios that are often applied across a variety of cutting-edge media, including AR and VR. VR and AR can be combined with haptic solutions and gamification to enhance learning and provide diverse experiences [24]. However, SLEs require not just technological support but also the transformation of theory into practice, taking into account the context in which the latter unfolds and, therefore, the teaching methods [25]. This transformation reflects Dewey’s philosophy of education, which stressed the connection between education and real-life experiences: “Life is a self-renewing process through action upon the environment… Continuity of life means continual readaptation of the environment to the needs of living organisms… The continuity of any experience, through renewing of the social group, is a literal fact… Education, in its broadest sense, is the means of this social continuity of life… is the necessity of teaching and learning for the continued existence of a society… In final account, then, not only does social life demand teaching and learning for its own permanence, but the very process of living together educates” [26] (pp. 7, 8, 10, 12). SLEs enhance the 21st-century fundamental life skills that students can draw from their everyday experiences.
Learning moves from a passive to a more interactive process, as defined by the ICAP taxonomy of four modes of educational activities. The ICAP framework is well suited to interpreting learning environments such as SLEs. ICAP is an acronym for interactive, constructive, active, and passive [27]. According to ICAP, which is the result of Chi and Wylie’s (2014) review of cognitive learning theories, learning is more effective if it progressively moves from passive to interactive knowledge-change processes, through active and constructive learner participation. For example, it is recognized that students can receive information passively by observing a video without doing anything else. In this case, video modelling is a single-media system, like other passive media such as traditional educational television, which is a closed, one-to-many information transmission system in which only a few people control the content [28]. Educational television programs cannot be personalized to meet the needs of each learner or to fit their prior knowledge and developmental level [29]. For more active participation, which can lead students to engagement and meaningful learning, students are encouraged to manipulate the video content by pausing, playing, fast-forwarding, rewinding, etc., and to discover those parts of the videos that are essential and relevant to their learning goals. Undoubtedly, active learning engages students cognitively, but constructive activities, such as explaining concepts in the video, integrating across multimedia resources, creating content by shooting video, and comparing and contrasting with prior knowledge or other materials, contribute to generating new knowledge. Finally, the interactive mode refers to both human–human and human–computer activities and occurs when a response is expected from either a human or a computer. Interaction is observed either during discussions between peers or between teacher and students regarding the educational content, or in a computer-based learning environment when students select a response from the options menu, the software adjusts the content provided according to their choice, and the students then respond to the computer’s response. Teachers’ roles as facilitators, collaborators, technicians, managers, and researchers are fundamental, besides their traditional roles as course designers and developers [30]. SLEs change teacher-centered approaches into a student-centered pedagogy in which the focus moves to the learners, who take an interactive role, i.e., a shift from traditional instruction, which reproduces knowledge, to interaction that encourages learners to discover, experiment, and construct their own learning experiences.
Thus, interactivity is considered a cornerstone of 21st-century skills, although it has its roots in antiquity, with games being among the oldest forms of interactivity [31]. Certainly, games are not a privilege reserved for children, but they are especially important to them. It is the sense of freedom that creates the conditions for effortless engagement with any activity that resembles a game [32]. The best way for children to complete their schoolwork is to turn it into a playful activity through gamification techniques [33]. Gamification refers to the utilization of game elements in an activity that is not a game in order to motivate, increase, and maintain the interest of those involved [34]. The dynamic elements of games are rules, goals, feedback, challenges, conflicts, interaction, achievements, narration, and competition. Competition does not have to involve an opponent; it can also take the form of problems or puzzles that the player/student tries to solve [35]. Gamification design is a complex process of transdisciplinary teamwork. Psychology, pedagogy, game design, and programming are involved, although gamification does not necessarily include digital technology [36]. When game elements are added to learning materials, social interaction is enhanced, performance is improved, and students are motivated to engage in a learning activity in which they would not otherwise participate because it would probably be tedious, challenging, or boring [6,36,37]. If the goal is to teach a valuable real-life skill (e.g., cooking), gamification-based rewards can be effective. However, rewards should be quickly replaced by more crucial elements, such as storytelling, plot, rules, freedom to explore other paths, and opportunities for reflection. This process is known as “meaningful gamification” [38]. Educational activities that incorporate meaningful gamification techniques constitute a student-centered approach that allows teachers and students to connect real-world problems or situations to the school environment. In a digital world, games can also take digital form. Due to constant technological advances, which continue to evolve at a high-speed pace, digital educational games are developed aiming not only at content knowledge but also at cultivating four super-skills, i.e., critical thinking, communication, collaboration, and creativity [39]. Given that some students may struggle to learn, others perform far beyond expectations, and others follow the typical course, the transformation of pedagogical approaches is required [16]. In diverse classrooms in which no student is considered “special”, collaboration and communication between teachers and students through creative activities enhance critical thinking and form the concept of Differentiated Instruction (DI).

2.2. From Differentiated Instruction, Cognitive Theory of Multimedia Learning and Universal Design for Learning to Transmedia Learning

Differentiated Instruction (DI) is the pedagogical approach that emerged from the growing trend of including students with SEND and their TD peers in general education [40]. However, it is neither identical to individual teaching, i.e., “one-to-one teaching”, nor equivalent to a method of producing individualized educational plans for each student, as determined by traditional individualized instruction, which is often implemented in special education settings. It is rather a personalized learning process according to the level of readiness, interests, and learning profile of each student. One of the most important differences between individualized and personalized learning is that the latter, supported by DI, assumes teamwork, without isolation or stigmatization of students with difficulties. Examples of differentiation of instruction relate to the content, process, product, and learning environment [41] (Table 1). Applying DI is beneficial for all students regardless of their learning styles, needs, and preferences.
Given that each student is unique and that students differ significantly in how they are motivated to participate in learning activities, perceive and understand information, navigate a learning environment, and express what they know, we conclude that there is no single means of presentation/representation, expression, engagement, and assessment that is best for all students [20]. This conclusion is the core of Universal Design for Learning, which resembles Differentiated Instruction and is rooted in the idea that there is no “one-size-fits-all” teaching method. The goal of Universal Design for Learning is to design and develop multimodal systems and educational materials to meet the needs of each student. Multimodal educational systems that employ modern technology have encompassed the principles of constructivism and the research field of human–computer interaction [18,42,43]. Multimodality is based on Montessori pedagogy, whose educational method takes into consideration all human senses (visual, auditory, tactile, gustatory, olfactory, and kinesthetic) and is addressed to all preschoolers regardless of their physical and cognitive capabilities [44]. The principles of Mayer’s Cognitive Theory of Multimedia Learning outline the guidelines for instructional design that utilizes multimodality to foster learning [17,18]. Multimedia learning occurs when people construct mental representations from material presented in two or more forms, e.g., words and pictures, as it has been shown that people learn more deeply, and the represented information is better recalled and retained, when it is encoded by words and pictures together rather than by words alone [45].
The theoretical framework and background of this conclusion is the dual-coding theory [46]. An example of the dual-coding theory is subtitling, which, in addition to the primary and obvious need it covers, namely translation, is also a valuable vocabulary learning tool (verbal and nonverbal/auditory–visual encodings) [47,48]. Moreover, subtitles for the deaf and hard of hearing (SDH) are necessary for people with hearing difficulties and are demanded by the deaf community and required by legislation [49] (article 67, Greek Law 4488/2017). Multimedia processing imposes less cognitive load than static illustrations, and multimedia content is more interesting, entertaining, and motivating than paper-based pictures and text [17,18]. Mayer’s Cognitive Theory of Multimedia Learning [17] includes 12 basic (Table 2) and nine advanced (Table 3) principles for the design of multimedia learning environments [14].
In sum, multimedia refers to the presentation of material, and learning refers to the learner’s knowledge construction: spoken words enter the human cognitive system through the auditory path (narration), written words and images enter through the visual path (on-screen text, illustrations, charts, graphs, photographs, or maps), or both paths are engaged dynamically through animation, videos, or interactive images. Figure 2 outlines the architecture of the human knowledge construction process as proposed by the Cognitive Theory of Multimedia Learning.
A new dimension of multimodal discourse with a specific narrative structure is transmedia storytelling (TS). TS exploits different languages, for example verbal and visual, and media such as cinema, comics, television, and video games to construct a narrative world, presenting a story in which each content element is distinct and contributes uniquely to a larger whole [21,51]. Multimedia focuses on the number of different types of content expression, while transmedia refers to a set of narrative and non-narrative elements that are systematically spread across many platforms and emphasizes the way content is distributed and flows across these platforms [52]. Although the term “transmedia” was coined by media scholar Marsha Kinder in 1991, as cited in Jenkins [53], in ancient Greece the equivalent term “ekphrasis” was used to describe skills that could simultaneously convey artistic content or a mythological narrative in different forms such as painting, sculpture, drama, oral poetry, writing, dance, performance, and pottery [54]. People have long managed to tell stories using various tools. Literary narration exploits the tools of both oral and written speech, theatre those of oral speech and dramatization, cinematic narration those of audiovisual media, etc. [55]. A story about a particular theme can be told by combining several types of media such as images, text, video clips, sound narration, and music [56].
An educational program, inside and outside the classroom, that uses transmedia storytelling techniques and utilizes smart devices to create immersive environments enables students and teachers to expand their learning experience [5,57]. The pedagogical approaches of transmedia learning are based on the learning theories of constructivism and connectivism and extend not only to the use of a variety of media but also to students’ interaction with the narration, through both digital and non-digital platforms [58,59]. “Older” educational platforms, such as print books, flashcards, and puzzles, can also be utilized in critical and creative ways in a learning environment. A transmedia learning model sets new challenges for students and teachers, requires a wider range of collaboration and communication skills, and underlines the active role of students in their knowledge construction. According to Fleming, transmedia learning is the best pedagogical method for the 21st century, as it comprehensively uses technology with consistent and positive learning outcomes, on the one hand by allowing content to flow seamlessly across platforms [58] and, on the other hand, by providing a framework for students to understand media, participate actively in the narrative expansion, and interact in increasingly complex ways as prosumers (prosumers are both consumers and producers; being a prosumer demands an active contribution to the content and therefore to knowledge construction) [60,61]. Transmedia learning is flexible and can happen anytime and anywhere, enabling students to participate and learn regardless of time and place, thanks to its capability to spread content across different mobile devices such as smartphones and tablets. Mobile storytelling is considered a part of transmedia storytelling [62].

3. WUIM: A Transmedia Project for Inclusive Educational Settings

Transmedia storytelling provides teachers with the opportunity to help their students think critically, identify with the material, and acquire, generalize, and maintain knowledge, supplying a value framework for constructivist educational pedagogy [63]. Transmedia learning incorporates transmedia storytelling techniques, ubiquitous technology capabilities, real-world experiences, and student-centered pedagogies, creating extremely productive and powerful learning experiences [58]. Especially in inclusive educational settings, transmedia learning combined with Differentiated Instruction, Cognitive Theory of Multimedia Learning, and Universal Design for Learning could be the unified methodology for promoting personalized learning, because it meets individual needs and adapts to students’ readiness, learning styles, and preferences, increasing their participation, engagement, and motivation and improving their different cognitive and functional skills [64]. Children with developmental disabilities often have difficulty performing daily living skills. Acquiring these skills can lead to increased independence, and therefore instruction focuses on the acquisition, generalization, and maintenance of those skills. In this paper, we present the design and development of WUIM, a transmedia educational project for the morning routine addressed to typically developing preschoolers and students of the first grades of elementary school and their peers with SEND. The goal of WUIM is to engage students in morning preparation activities performed at home. More specifically, the content of WUIM focuses on the activities that students have to complete from the moment they wake up (alarm clock) until they go to school [6,22,65,66]. WUIM extends to three types of applications:
  • “WUIM-Puzzles” encompasses traditional board games with flashcards and wooden block puzzles which constitute the toys of the whole project.
  • “WUIM-AR” is an AR game-like application that is combined with WUIM-Puzzles.
  • “WUIM-VR” is a VR simulation using gamification techniques.

3.1. User Experience Design

The basic axis of WUIM design was user experience (UX) and the seven factors that determine it [6,67]. At each design stage and for each of its components, the answers to the questions posed about usefulness, usability, findability, credibility, desirability, accessibility, and value justified the decisions made (Figure 3).
Therefore, each UX factor acted as a component of an evaluation checklist. The initial evaluation of the overall project by the interdisciplinary design team led to specific content development decisions. These decisions were confirmed by the subsequent formative evaluation by both users and experts (cf. Section 4, Formative User and Expert Evaluation). The following describes the factors that were taken into account when designing WUIM [6].
  • Usefulness: Quality, importance, and value of a system or product in serving users’ purposes. Although the transdisciplinary design team decided from the very outset that WUIM would consist of affordable, low-cost applications, this decision does not mean that the quality of materials and coding was overlooked.
  • Usability and utility are the two components of usefulness: A usable product provides effectiveness, efficiency, engagement, error tolerance, and ease of learning. However, even though a product is easy and pleasant to use, it might not be useful to someone if it does not meet his/her needs. WUIM was designed to be easy to use for both students and teachers. The primary goal was for WUIM to be an effective, efficient, engaging learning material that allows users to easily learn content, both in terms of activities and technology.
  • Findability: Ease of finding a product or system. WUIM is available online and for free.
  • Credibility: A product leads users to trust it, not only because it does the job it promised to do, but also because it can last for a reasonable amount of time, with accurate and appropriate information. In addition to the activities it provides, i.e., morning routine training, WUIM offers know-how to teachers and parents to develop their own material.
  • Desirability: Branding, image, identity, aesthetics, and emotional design, which are also elements of user interface design. During the WUIM design, special emphasis was given to each of these features. Thus, a short, striking, and easy-to-remember name was chosen. Attention was also paid to user interface design according to the principles of the Cognitive Theory of Multimedia Learning and the guidelines of Web Accessibility for Designers (https://webaim.org/ (accessed on 2 June 2021)) and Microsoft (https://docs.microsoft.com/en-us/windows/win32/uxguide/vis-fonts (accessed on 2 June 2021)) regarding the choice of font type, colors and size, background colors, and contrasts.
  • Accessibility: The experience provided to users, even if they have a disability, such as hearing, vision, movement, or learning difficulties. WUIM focuses on motor, learning, and hearing difficulties, as well as on students with visual impairments who are not totally blind.
  • Value: The ultimate goal of a product is the added value it provides to users. Beyond the essential educational content, the factors that shape the user experience guided the design and development of WUIM.
WUIM, depending on children’s cognitive or physical limitations, provides adaptation and parameterization regarding the different difficulty levels, materials, and help. Taking into account the UX factors and the pedagogical background of WUIM, we believe that it can be considered an SLE.

3.2. A Process from Educational Design to Development

Each of the four dimensions of interactivity guided the WUIM design (Figure 4). Students with special educational needs (SEN) and/or disability (D) and their typically developing (TD) peers are the target group (learners). The content derives from the functional domain of Activities of Daily Living (ADLs), using traditional educational materials and cutting-edge technologies. The educational context refers to inclusive settings. Finally, the pedagogy is an eclectic pedagogical method stemming from the synthesis of traditional learning theories, such as behaviorism and constructivism, and innovative teaching-learning approaches, such as Differentiated Instruction, Cognitive Theory of Multimedia Learning, and Universal Design for Learning.
The material was designed after studying the diverse characteristics of SEND students, the activities performed in a typical family home every morning, and the basic principles of student-centered pedagogical approaches. The design team’s goal was for the product to serve as good practice for facilitating the implementation of inclusive education. It is a fact that inclusive education is not applied to the extent required by an inclusive society [68]. A major obstacle to its implementation is the reluctant attitude of teachers, which is greatly affected by the lack of appropriate educational material. In terms of educational material, research on digital game implementation in schools concludes that teachers are also reluctant, holding a neutral attitude towards the utilization of digital games in their lessons [69,70,71,72,73,74,75,76,77]. The main obstacles mentioned by teachers are the lack of financial resources, teacher training, and professional development, and limited technological equipment [72]. Aware of these parameters, the transdisciplinary design team decided to develop WUIM in such a way that it would also provide know-how for the development of low-cost gaming applications using everyday devices.
WUIM is a transmedia educational project that flows across traditional educational materials, such as flashcards and puzzles, and cutting-edge technologies, such as AR and VR. The most characteristic schematic interpretation of transmediality is credited to Pratten [73]. Figure 5 is an adaptation of Pratten’s figure and illustrates the transmediality of WUIM [67]. Transmedia narratives consist of multiple distribution channels that contribute to the creation of a single experience. Each medium is combined by the audience like a piece of a puzzle to complete the story of the transmedia narrative. In our case, puzzles, AR, and VR are combined by children to create a comprehensive learning experience.
WUIM deals with a real-life situation: learning the morning routine at home. The morning routine is a key part of the functional skills that lead to independent living. WUIM includes hygiene-related activities that must be followed in a specific order, such as “I wash my hands after using the toilet” or “I brush my teeth after eating”. For the VR version, the content also relates to saving resources and time. For example, “I eat breakfast wearing pajamas” because there is a chance of getting dirty; in this case, “I change and I put on my clothes” afterwards. Otherwise, if “I eat wearing the clothes and get dirty, then I have to undress, put the clothes in the laundry and try to find and wear other clothes”. WUIM also includes rules of social behavior, such as “I greet my parent before I go to school”.
The educational design was based on the holistic, integrated, and experiential approach of the three domains of learning according to Bloom’s taxonomy [74]: cognitive (knowledge-based), practical—psychomotor (action-based), and affective (emotion-based). WUIM-Puzzle and WUIM-AR are based on behaviorist learning theory, which relies on the connections between stimuli and responses [6]. Learning is enhanced by task analysis, contiguity, practice, repetition, sequencing, reinforcement as extrinsic motivation, and feedback [75]. The task analysis method and chaining are basic training strategies for daily living activities within the Applied Behavior Analysis (ABA) approach [76]. In practice, learning is achieved through a systematic series of trial-and-error actions. However, WUIM-Puzzle also relies on directed discovery learning, which is a constructivist approach [77,78]. WUIM-VR is based on unstructured discovery learning. Bruner [77] pointed out that children can experience success or failure not as a reward or punishment, respectively, as indicated by behaviorism, but as an additional piece of information that will lead them to knowledge. In sum, for all three versions, the aim is for students to learn the correct sequence of activities from waking up in the morning to getting ready to leave the house and go to school. The ultimate goal of WUIM is for students to transfer the acquired morning routine skills to real life. Learning is achieved not through punishments (loss of lives or points) if children do not make the right choice but by enhancing intrinsic motivation through the dynamic elements of games and the unique features of augmented and virtual reality. This process does not discourage students and is one of the most effective ways to keep them playing, trying to figure out the correct combination of activities. Besides the learning outcomes, the objectives are to foster collaborative learning, according to Vygotsky’s social-constructivist approach [79], to motivate teachers and students to develop their own educational material through easy-to-use applications, and to develop positive attitudes towards both inclusive education and digital educational games.
The Unity and Vuforia platforms were used to develop the digital applications, and a 360-degree camera was used to create the content. The devices needed for the applications developed with digital technology are everyday devices such as smartphones and tablets, virtual reality glasses into which smartphones are fitted, headphones, and, if a child has head instability due to spasticity, controllers (Figure 6).

3.3. Game Development

3.3.1. Concept Definition and Visual Symbol Selection

The first step in developing content was to define the story and choose the symbols according to the fundamental concept, i.e., training children in the morning routine. Transmedia content development is based on the same story and practices shared symbols, which are distributed across different platforms.
With the permission of Tobii Dynavox for the Picture Communication Symbols® (PCS), we chose specific symbols from the Boardmaker collection that serve the story flow, focusing on personal hygiene rules and social norms: alarm clock, toilet, handwashing, breakfast, toothbrushing, dressing up, parent hugs, and walking to school (Figure 7).
Six activities are performed during the morning preparation (“alarm clock” is the trigger that starts the activities, and the activities are completed with the parent’s hug), taking into account the limitations of human information-processing capacity and the magical number seven, plus or minus two [6,81]. Common elements of all applications are the story, the use of PCS, and the goal, i.e., to complete all the appropriate steps to prepare for school. When children play with the puzzle, they are asked to place the pieces/activities in the correct sequence. The AR application uses the puzzle pieces as triggers/targets, and in VR the symbols operate as option buttons. The selection of the specific symbols (PCS) was based on the following:
  • Credibility and desirability: PCS are widely known in the field of special education, are recognized by most children with SEND, are very comprehensive, and meet the requirements of interface design. The narration of each symbol is very understandable. Besides, PCS can be used as a storyboard, i.e., a graphic layout that sequences illustrations and images.
  • Usability: PCS are effective and efficient and no extra training time is required.
  • Utility: PCS are easily printed and portable.
  • Findability: PCS are easy to find.
  • Accessibility: PCS are characterized by simplicity in depicting people and objects. Studies on the effect of the amount of detail on pictorial recognition memory have concluded that the simpler a picture, the lower the cognitive load, thus avoiding problems with concentration and distraction [82]. Besides, according to the instructions of AR development engines, picture simplicity supports their rapid response as AR triggers.

3.3.2. Puzzle Design-Production

For the puzzle design and production, we proceeded with a wooden shape puzzle design. The shapes were designed in two versions corresponding to two difficulty levels (Figure 8, Figure 9, Figure 10 and Figure 11). The first level contains only square shapes, so students have to place the correct block considering only the activity depicted on the symbol (Figure 8 and Figure 9). The second, easier level (Figure 10 and Figure 11) includes geometric shapes that guide students alongside the symbols (essentially, the game provides built-in gameplay that promotes constructive thinking) [83]. The two difficulty levels were determined based on the principle of equity, so that no student is excluded from the learning experience. Learning shapes is a side benefit [67].
To create the wooden puzzles, the shapes were designed with CorelDRAW® software and then cut with a CNC (computer numerical control) wood router by a carpenter (Figure 12a). The symbols were then printed on a vinyl sticker (Figure 12b).
Both levels provide material with handles (pins) to support students with fine motor skill difficulties (Figure 13). Figure 14 shows the complete traditional educational material: wooden boards-wedges that make up the WUIM-Puzzle.

3.3.3. Film Production

For film production, three basic steps were followed: pre-production, production, and post-production [84]. In the pre-production phase, the script, storyboard, shot list, and breakdown sheets were prepared. The script was developed with the program Fade In Professional Screenwriting Software©, a user-friendly application and the most advanced software used by professionals (https://www.fadeinpro.com/ (accessed on 2 June 2021)) (Figure 15). According to the script, there are two actors, a 10-year-old girl named Vicky, who is the main character, and her mother. Vicky (the game avatar), once she is out of bed, follows a series of events guided by everyday logic: first she goes to the bathroom/toilet and, after washing her hands, heads to the kitchen to have breakfast. When breakfast is over, she returns to the bathroom, where she brushes her teeth. She then goes to her bedroom, takes off her pajamas, puts on her clothes, and heads to the front door area, where she puts on her shoes, picks up her school bag, greets her mom, and opens the front door to go to school [67].
For the storyboard, we chose to proceed with a photo shoot so that, along with gathering the technical information needed for the video recordings, such as objects (props), lighting, costumes, characters, and sounds, the actors could rehearse their roles.
Subsequently, we proceeded with the shot list and the breakdown sheets. Shots were taken during the production phase with a 360° camera (Figure 16 and Figure 17).
During the post-production phase, we proceeded with the video editing, converting the 360° videos (Figure 18a) to standard videos to be compatible with the AR platform (Figure 18b). Finally, we added sounds and subtitles according to the guidelines of Web Accessibility for Designers (https://webaim.org/ (accessed on 2 June 2021)) (Figure 19). The videos were selected from a series of shots. The videos can also function independently, e.g., the tooth-brushing video, so they do not need to be fully integrated into an application but can serve as autonomous educational material for other learning activities as well.
In addition, a human pedagogical agent was used in the form of a filmed actor (Figure 20). The pedagogical agent’s role focuses on welcoming students, guiding them (tutorial), and providing gameplay instructions, help, oral rewards and reinforcement (e.g., “well done”, “congratulations”) when students find the appropriate picture/puzzle piece, or encouragement to continue the effort (Figure 21).

3.3.4. AR and VR Game Development

The WUIM AR and VR games could be described as simulations, since their content originates from real life and is based on interactive 360° videos that enhance presence and immersion. Human–computer interaction was developed with the Unity Game Engine and the Vuforia Engine, adopting a third-person perspective. Digital material development followed a parallel progression (Figure 22).
The steps followed for each technology are described below.
  • For the film production, three basic steps were followed: pre-production, production, and post-production. The pre-production phase included the scripting, the storyboard, the shot list, and the breakdown sheets. The production phase included the shooting. Post-production included video and audio editing, subtitles, voice-over, and sound design. Finally, the footage was processed so that, depending on the game technology, i.e., VR or AR, the videos could be integrated into the code.
  • For the VR game, we proceeded as follows: We set the winning conditions and other game parameters. Then, we defined the environment and user interface (UI) design according to the users’ characteristics. Next, we developed the code in the game engine and associated the videos with the scenes. After the game flow was checked against the game logic adopted by the design team, the VR game was tested on the devices (smartphones/tablets, desktop, HMDs).
  • For the AR game, we proceeded as follows: We set the winning conditions and other game parameters. The AR game is more directional than the VR version: its logic is serial, and its winning conditions differ from those of the VR game, which is freer compared to the AR. After defining the images that serve as the AR targets and editing them to be recognized by a tablet or smartphone camera according to the Vuforia specifications, they were integrated into the game development engine (Unity). After configuring the code and data and retesting on the devices, the application was ready for distribution.
Both AR and VR applications were developed using Unity as the underlying Game Engine which allows fully-featured applications to be implemented for free under educational licensing and provides all the basic functions needed to build a game and many minor functions to speed up the development process. Unity 3D Game Engine also offers a unified environment in which the game developer can manage game assets such as graphics, sounds, and code used to implement the game’s logic (scripts). In addition, the environment allows the creation of game levels, debugging, sound mixing, performance measurement (profiling), and many other tools useful for the development process. Furthermore, for WUIM-AR the Vuforia AR Engine was used due to its compatibility with Unity and similar licensing terms. Vuforia’s image training and recognition routines are used to identify the physical items through the handheld device camera, which are then linked to the content that includes 3D models and video displayed within the augmented reality application. WUIM-AR supports interaction through mobile devices (tablets and smartphones). Navigation is implemented through the user’s movement and game elements are detected by the sensor (camera or LiDAR sensor). Touch is used as a means of interacting with the content and interface environment.
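To illustrate how an image target can be linked to video content in this kind of Unity/Vuforia setup, the following minimal C# sketch shows a component that plays an overlay video when a target is found and stops it when the target is lost. It is not the project’s actual code; the OnTargetFound/OnTargetLost hooks are assumed to be wired to the AR engine’s target events (e.g., in the Unity Inspector), and the field names are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Minimal sketch: plays the overlay video linked to an image target.
// The methods below are assumed to be wired to the target's
// "found"/"lost" events (e.g., in the Unity Inspector); they are not
// part of the actual WUIM code base.
public class TargetVideoOverlay : MonoBehaviour
{
    [SerializeField] private VideoPlayer overlayPlayer; // plays the matching clip
    [SerializeField] private VideoClip overlayClip;     // video linked to this target

    // Called when the camera recognizes the physical puzzle piece.
    public void OnTargetFound()
    {
        overlayPlayer.clip = overlayClip;
        overlayPlayer.Play();
    }

    // Called when the target leaves the camera view.
    public void OnTargetLost()
    {
        overlayPlayer.Stop();
    }
}
```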
  • WUIM-AR Development
WUIM-AR is a target-based AR application, which has been fully developed, i.e., has reached the Beta stage. AR targets are visual cues that trigger a specific action determined by the program, displaying information through a camera application on a smartphone/tablet. WUIM-AR is interconnected with WUIM-Puzzle. The AR flow chart (Figure 23) summarizes the gameplay, the interfaces, and the settings provided to students, such as the tutorial, difficulty level, language, background, audio, subtitles, and help.
The pieces/symbols of WUIM-Puzzle are the triggers/targets for the AR to operate. The Vuforia Engine Developer Portal processed the images (PCS) and created a database containing the targets. The instructions provided by the Vuforia Engine specify that targets must have specific features, such as sharp edges, optimal image dimensions and aspect ratio, high image contrast, no repetitive patterns, a suitable format, and distributed textured areas. The software assesses the target quality. The best quality is scored with five stars, but three- or four-star quality is also satisfactory. Below three stars, the application may have difficulty recognizing a target, and at one star or zero, the target should be replaced. As already mentioned, WUIM-AR depends on the seven flashcards (alarm clock, toilet, handwashing, breakfast, toothbrushing, dressing up, and parent hugs) or the wooden blocks of WUIM-Puzzles. The Vuforia Engine Developer Portal created a database with all the targets (PCS) to be used in WUIM-AR, including images for both the game per se and the tutorials.
After creating the database, we proceeded with the development of the gaming experience using the Unity Game Engine. First, we created a list that represents a chain of targets; each target knows which target comes before it. The program also holds the videos and sounds of the pedagogical agent that are played when the student has difficulty finding the next target.
Following this, we imported the data in the Inspector window. When a target and/or a unique combination of targets is detected, the matching video is displayed.
The previously scanned target of the chain is stored in memory. When a target is recognized, the game engine checks whether the prior target is the same as the stored one. If so, the recognized target becomes the stored one, and the matching video and audio are displayed. If not, a beep sounds, combined with a short vibration, according to the user settings and depending on how many times the student has failed to find the right sequence. Settings such as difficulty level, subtitles, language, and graphics are stored in a file so that they remain the same even after the game is over.
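The target-chain check described above can be sketched in C# roughly as follows. This is a simplified illustration, not the WUIM source: the target names, the PlayerPrefs key, and the helper methods PlayMatchingVideo and ShowAgentHelp are hypothetical placeholders for the actual video playback and pedagogical-agent logic.

```csharp
using UnityEngine;

// Minimal sketch of the target-chain check: the previously recognized
// target is stored, and a newly recognized target is accepted only if it
// is the next one in the expected sequence.
public class TargetChainChecker : MonoBehaviour
{
    [SerializeField] private string[] chain =            // expected order of targets
        { "alarm", "toilet", "hands", "breakfast", "teeth", "dressing", "hug" };
    [SerializeField] private AudioSource feedbackAudio;  // beep on a wrong sequence
    [SerializeField] private AudioClip beepClip;

    private int storedIndex = -1;  // index of the previously recognized target
    private int failedAttempts;    // used to scale the agent's help

    // Called whenever a target (wooden block / flashcard) is recognized.
    public void OnTargetRecognized(string targetName)
    {
        int index = System.Array.IndexOf(chain, targetName);

        if (index == storedIndex + 1)
        {
            storedIndex = index;             // the recognized target becomes the stored one
            failedAttempts = 0;
            PlayMatchingVideo(targetName);   // hypothetical: show the overlay video
        }
        else
        {
            failedAttempts++;
            feedbackAudio.PlayOneShot(beepClip);
            if (PlayerPrefs.GetInt("vibration", 1) == 1)
                Handheld.Vibrate();          // short vibration, if enabled in settings
            ShowAgentHelp(failedAttempts);   // hypothetical: pedagogical agent prompt
        }
    }

    private void PlayMatchingVideo(string targetName) { /* play the linked clip */ }
    private void ShowAgentHelp(int attempts) { /* escalate help with attempts */ }
}
```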
Taking into account students’ preferences, needs, and learning profiles, the system offers several user interfaces to serve different kinds of users. An options menu and settings are provided for language (Greek or English), sounds, subtitles, font size, tutorials, help, and background colors, either for tablets (Figure 24) or for smartphones (Figure 25). Important aspects of the interface, which are related to feedback, flow, positive emotions, and motivation, and have a direct impact on learning, are the applications’ simplicity and aesthetics [85].
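A minimal sketch of how such settings could be persisted between sessions with Unity’s PlayerPrefs is given below; the key names and default values are illustrative assumptions rather than the actual WUIM implementation.

```csharp
using UnityEngine;

// Minimal sketch: persisting user settings between sessions with PlayerPrefs.
// Key names and defaults are illustrative, not the actual WUIM keys.
public static class WuimSettings
{
    public static void Save(string language, bool subtitles, int fontSize,
                            string backgroundColor, float volume)
    {
        PlayerPrefs.SetString("language", language);       // e.g., "el" or "en"
        PlayerPrefs.SetInt("subtitles", subtitles ? 1 : 0);
        PlayerPrefs.SetInt("fontSize", fontSize);
        PlayerPrefs.SetString("background", backgroundColor);
        PlayerPrefs.SetFloat("volume", volume);
        PlayerPrefs.Save();                                 // write to disk
    }

    public static string Language => PlayerPrefs.GetString("language", "el");
    public static bool Subtitles => PlayerPrefs.GetInt("subtitles", 1) == 1;
    public static int FontSize => PlayerPrefs.GetInt("fontSize", 32);
    public static string Background => PlayerPrefs.GetString("background", "light");
    public static float Volume => PlayerPrefs.GetFloat("volume", 1f);
}
```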
  • WUIM-VR development
WUIM-VR is a gamified simulation that has reached the Alpha stage. WUIM-VR is ideally played with smartphone head-mounted VR glasses. The game is structured in the rooms of a typical family house, based on escape room game pedagogy [67]. Escape room pedagogy relies on Vygotsky’s social-constructivist approach, which engages students in real-life activities enhancing role-playing, creativity, decision-making, communication, and critical thinking [86]. Students/players are allowed to navigate freely around the home, interact, and experiment by choosing any route they wish. During this non-linear tour, students learn smoothly from the results of their choices. It is common for children to make intentional mistakes out of curiosity, to discover what will happen if they choose an inappropriate action, especially if they know it will have no consequences. So, students are welcome to navigate, discover, and make mistakes. This is a problem-solving process. Figure 26 illustrates an overview of actions and transitions between rooms. There are three scenes (rooms), and the conditions are displayed on the connecting arrows. The only condition for a successful game-over is to fulfil the string: “WC→hand clean→food→teeth clean→clothes”.
A basic intention of VR development is to provide the know-how to teachers to set up VR experiences without special coding knowledge. This encourages collaboration and relieves the extra stress of lack of technological knowledge. It is also a good opportunity to work with the school’s computer science teacher. The VR experience comprises three parts: source material, configuration, and game engine. It is applied in a way that separates the roles of directors/filmmakers, video/audio editors, programmers, and writers of the transdisciplinary design team.
The source materials: the materials are videos, audio clips, and a model of the states, the transitions between them, and the variables to be tracked as the game unfolds. The storyboard and the breakdown sheets facilitated filming and recording, so that the props, agent audio clips, audio cues, transition videos, and static scene photos were filmed/captured in such a way that no re-shooting would be needed. All visual material was created with a 360-degree camera so that it can be imported directly into the Unity Game Engine, with varying levels of compression to allow for hardware limitations on the output devices.
The configuration (game flow): the complicated part of a VR scenario, especially an interactive one with an open-world slant, is predicting all the possible transitions between states in the game. This is deliberately separated from the design and source material creation and is handled by the game engine.
All media and an XML file representing the model of the experience were created first, using a standardized dictionary (an XSD is available). These data were fed into the Unity Game Engine, which supports transition videos in MP4 format, static images, and stereo audio.
Each scene contains an image or video in which the user interacts with hotspots, i.e., graphic icons superimposed on the skybox. Interaction involves looking at a hotspot in VR, which is then activated after a short wait and triggers an optional 3D video transition displayed around the player (Figure 27).
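Gaze-based hotspot activation of this kind can be sketched in Unity C# as follows: a ray is cast from the VR camera, and a hotspot fires after the gaze has rested on it for a short dwell time. The dwell duration, the Hotspot component, and its OnActivated method are illustrative assumptions, not the engine’s actual API.

```csharp
using UnityEngine;

// Minimal sketch of gaze-based hotspot selection with a dwell timer.
public class GazeHotspotSelector : MonoBehaviour
{
    [SerializeField] private Camera vrCamera;
    [SerializeField] private float dwellSeconds = 2f;  // "short wait" before activation

    private Hotspot currentHotspot;
    private float gazeTimer;

    private void Update()
    {
        Ray gaze = new Ray(vrCamera.transform.position, vrCamera.transform.forward);

        if (Physics.Raycast(gaze, out RaycastHit hit, 20f) &&
            hit.collider.TryGetComponent(out Hotspot hotspot))
        {
            if (hotspot != currentHotspot)
            {
                currentHotspot = hotspot;  // gaze moved to a new hotspot
                gazeTimer = 0f;
            }

            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellSeconds)
            {
                currentHotspot.OnActivated();  // e.g., start the transition video
                gazeTimer = 0f;
            }
        }
        else
        {
            currentHotspot = null;  // looking away resets the dwell timer
            gazeTimer = 0f;
        }
    }
}

// Placeholder hotspot component; in WUIM-VR this would trigger a scene transition.
public class Hotspot : MonoBehaviour
{
    public void OnActivated() { /* play the transition video, move to the next scene */ }
}
```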
Movement inside the VR space is not supported, so scenes are mostly about looking around and finding hotspots rather than interacting with the scene objects themselves. Scenes can also contain conditional narrator audio or feedback for activating hotspots. If a child has difficulty stabilizing the head (e.g., spastic cerebral palsy), he/she is allowed to use controllers. Attributes of the player, e.g., score, time, and clothing on or off, are also tracked. This system is limited to unstructured values, which can be “yes”/“no”, a number, or a string.
Game designers can specify conditions to define, add, or remove transitions based on boolean expressions using free variables or built-in flags, such as whether a scene has been played or which scene was the previous one. Besides, the transitions can change the value of free variables; e.g., a player can be shown different transitions depending on which entrance they use to enter a room (scene), or visiting one scene first may set a flag that changes the scene in a later, unrelated transition. The game engine dynamically loads an XML-encoded model of the script into the Unity application as scenes and a state machine.
The game engine is a state transition engine imported into Unity. It is written in C# and loads an XML file of scenes with attributes and transitions between those scenes. It then creates a VR environment based on the properties of each scene, e.g., initial camera position, background audio, narrator cues, and hotspots to re-create the dynamic system modelled in the XML file. The engine’s built-in functionality is handling of scenes, transitions, and playback of media based on the pseudo-script language contained in the configuration. It is as simple as possible to allow open-ended games to be visualized and experienced in VR. Finally, the Unity game engine integrates with the VR drivers necessary for output.
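The following C# sketch illustrates, under stated assumptions, how an XML-encoded model of scenes and transitions could be loaded into simple data structures for such a state-transition engine. The element and attribute names (“scene”, “transition”, “condition”, etc.) are hypothetical; the actual WUIM dictionary/XSD is not reproduced here.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Simple transition record: target scene, boolean condition, and the
// transition video shown around the player.
public class Transition
{
    public string To;
    public string Condition;  // boolean expression over free variables/flags
    public string Video;
}

// A scene node with its background media and outgoing transitions.
public class SceneNode
{
    public string Id;
    public string BackgroundMedia;
    public List<Transition> Transitions = new List<Transition>();
}

// Minimal sketch of loading the XML-encoded experience model.
public static class ExperienceModelLoader
{
    public static Dictionary<string, SceneNode> Load(string xmlPath)
    {
        var doc = XDocument.Load(xmlPath);
        return doc.Descendants("scene")
            .Select(s => new SceneNode
            {
                Id = (string)s.Attribute("id"),
                BackgroundMedia = (string)s.Attribute("media"),
                Transitions = s.Descendants("transition").Select(t => new Transition
                {
                    To = (string)t.Attribute("to"),
                    Condition = (string)t.Attribute("condition"),
                    Video = (string)t.Attribute("video")
                }).ToList()
            })
            .ToDictionary(n => n.Id);
    }
}
```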

3.4. Game Logic—Gameplay

The wooden puzzle and the AR application share the same gameplay, i.e., the way the game is played, while the VR version is more complex. The game mechanics, i.e., the connections of student actions with content and challenges (e.g., variable difficulty level, achievements, rewards), are different for each application.

3.4.1. WUIM-Puzzle and WUIM-AR Gameplay

WUIM-Puzzle and WUIM-AR are played with the same wooden boards and blocks. As mentioned, each wooden shape block indicates an activity. The students are asked to place the six wooden block pieces into the appropriate slots of the wooden board to complete the story. The wooden pieces are given mixed up, but always in the same mixed order. Since teachers know each student’s learning profile, they can give them the appropriate difficulty-level wooden board in advance. Thus, students place the wooden shape block pieces guided not only by the content but also by the shape. In this way, all students participate in the learning process without stigmatization or isolation if they are unable to complete the task. When the wooden shape block pieces are placed in the correct order according to the content, e.g., we brush our teeth after eating, students have the opportunity to employ smart devices such as smartphones and tablets to proceed with the AR application. The unique combination of wooden shape blocks is the trigger for the AR application to function by displaying the appropriate video (overlay) (Figure 28 and Figure 29).
If the wooden shape block pieces are not placed in the correct order, a “beep” sound is heard and/or the mobile device vibrates (according to the student’s interface preferences), and the human pedagogical agent prompts the students to try again. The pedagogical agent provides help progressively; after three unsuccessful attempts, the agent suggests to the students what to do. The role of the pedagogical agent can also be performed by a peer.
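The progressive help described above could be implemented along the following lines; this is a minimal sketch, and the audio clip fields and attempt thresholds are illustrative assumptions rather than the actual WUIM code.

```csharp
using UnityEngine;

// Minimal sketch: the pedagogical agent's feedback escalates with each
// unsuccessful attempt and, after the third, proposes the correct action.
public class PedagogicalAgentHelp : MonoBehaviour
{
    [SerializeField] private AudioSource agentVoice;
    [SerializeField] private AudioClip tryAgainClip;   // gentle "try again" prompt
    [SerializeField] private AudioClip hintClip;       // hint about the next activity
    [SerializeField] private AudioClip solutionClip;   // tells the student what to do

    public void OnWrongPlacement(int failedAttempts)
    {
        if (failedAttempts <= 1)
            agentVoice.PlayOneShot(tryAgainClip);   // first miss: prompt to retry
        else if (failedAttempts == 2)
            agentVoice.PlayOneShot(hintClip);       // second miss: stronger hint
        else
            agentVoice.PlayOneShot(solutionClip);   // third miss: propose the answer
    }

    public void OnCorrectPlacement(AudioClip rewardClip)
    {
        agentVoice.PlayOneShot(rewardClip);         // e.g., "Well done!"
    }
}
```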

3.4.2. WUIM-VR Gameplay

In WUIM-VR, students are involved in a 3D real-world house in which the story unfolds through a virtual reality headset. However, if a student refuses to wear the headset, he/she can play on a screen. The symbols/activities that are also used in the WUIM-Puzzle and WUIM-AR applications are displayed as VR buttons. Depending on the button selected by the students, the story is revealed by showing the next video. The VR version allows free navigation in the house without forcing students to choose specific routes. However, there is the possibility of adjusting the number of failed selections. To complete the game, five criteria must be met as successful transitions. Figure 30 demonstrates a route in which only two transitions (criteria) have been met, i.e., BEDROOM→BATHROOM→WC→FINISH. When the player selects “finish”, the system indicates that three criteria have not been met and proposes continuing the game.
There are two restrictions. According to hygiene rules, every time students choose the button "WC" they must then choose the button "HANDS"; otherwise, the avatar is not allowed to eat. The second restriction, also derived from hygiene rules, is that after eating students have to brush their teeth. The condition for "game over-well done" is that the students have guided the avatar to the WC, to wash her hands (HANDS), to eat (FOOD), to brush her teeth (TEETH), and to put on her clothes (CLOSET). Figure 31 illustrates that all five criteria are met, meaning that the students have completed all the activities that need to be performed at home before leaving for school.
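
A minimal sketch of how these two restrictions and the five completion criteria can be checked is given below. It is illustrative only: the button labels follow Figures 30 and 31, but the class and method names are assumptions, not the game's actual code.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the completion check and the two hygiene restrictions (illustrative only).
class WuimVrRules
{
    static readonly string[] Criteria = { "WC", "HANDS", "FOOD", "TEETH", "CLOSET" };
    readonly List<string> route = new List<string>();   // buttons chosen so far, in order

    // Returns false when a choice violates a hygiene rule; otherwise records it.
    public bool Choose(string button)
    {
        if (button == "FOOD" && route.LastIndexOf("HANDS") < route.LastIndexOf("WC"))
            return false;                                // must wash hands after the WC before eating
        route.Add(button);
        return true;
    }

    // "Game over-well done" only when all five criteria are met
    // and teeth were brushed after the (last) meal.
    public bool Finished()
    {
        foreach (var c in Criteria)
            if (!route.Contains(c)) return false;
        return route.LastIndexOf("TEETH") > route.LastIndexOf("FOOD");
    }

    static void Main()
    {
        var rules = new WuimVrRules();
        foreach (var b in new[] { "WC", "HANDS", "FOOD", "TEETH", "CLOSET" })
            Console.WriteLine($"{b}: {(rules.Choose(b) ? "ok" : "blocked")}");
        Console.WriteLine($"Finished: {rules.Finished()}");
    }
}
```
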
VR leads to greater immersion which, although it has been shown to support effective learning, can also lead to isolation when headsets are used [87]. To overcome this obstacle and to promote collaborative learning, students monitor their classmate's progress on a screen (tablet, computer, or even an interactive whiteboard) and provide help whenever their classmate in the VR environment needs it.

4. Formative User and Expert Evaluation

User experience is the most powerful indicator of a product's success. Consequently, when designing and developing a product/service, designers and developers need to keep end-users in mind. UX evaluation methods are applied at every stage of the design and development life cycle to improve the UX. Evaluation methods are quantitative and qualitative and, in the case of games, are classified into three distinct phases: (a) pre-game, (b) in-game, and (c) post-game [88,89]. Bernhaupt [90] suggests that UX evaluation methods can be classified into methods that involve users, experts, and user data. More specifically, UX evaluation methods comprise:
  • User-based methods, which include focus groups, interviews, questionnaires, experiments, observation, and bio-physiological measurements,
  • Expert-based methods such as heuristic evaluations,
  • Automated methods like telemetry analysis, and
  • Specialized methods for the evaluation of social game play or action games.
Evaluations by the users themselves are crucial for improving the products provided. User-based formative evaluations are fundamental even from the design and prototyping stage. Novel, more or less invasive psychophysiological player-testing methods, such as skin conductance (SC), facial electromyography (EMG), measurement of electrodermal activity (EDA), heart rate (HR), electroencephalography (EEG), and eye-tracking, along with traditional methods such as behavioral observation, the think-aloud protocol (players are asked to talk about what they are thinking as they play through the game), interviews, heuristic evaluation, focus groups, questionnaires, and game metrics, provide a comprehensive assessment of many game components [90,91,92].
Depending on the design and development stage of an application, qualitative user-based evaluation methods through observation and interviews, as well as quantitative methods through self-report measurements, structured or semi-structured interviews, usability tests, and questionnaires aimed at focus groups, are performed [93,94,95]. Questionnaires, behavioral observation, the think-aloud protocol, and interviews are the most frequently used non-invasive evaluation tools.
WUIM was evaluated by both children with disabilities and their therapists using questionnaires, structured interviews, the think-aloud protocol, and researchers' observations. The core purpose of the evaluation was to draw conclusions about the factors that determine the UX and the characteristics of SLEs. Thus, two main research questions were formulated:
  • Research Question 1 (RQ1): Does WUIM accomplish the seven criteria that shape the user experience? Is it useful, usable, findable, creditable, desirable, accessible, and valuable?
  • Research Question 2 (RQ2): Is WUIM a smart learning environment? In more detail, does it gather features such as effectiveness, efficiency, scalability, autonomy, engagement, flexibility, adaptiveness, personalization, conversationality, reflectiveness, innovation, and self-organization?

4.1. Participants

Evaluations were conducted by two focus groups. The first group consisted of students with SEND (end-users) and the second of their therapists (content experts). None of them had prior experience with AR or VR.
Eleven (N = 11) children with SEND formed the students' group. The group comprised two students with moderate intellectual disability (IQ 35–49, mental age from 6 to 9 years) [96], three students with severe intellectual disability (IQ 20–34, mental age from 3 to 6 years), three students with cerebral palsy (one of them with severe intellectual disability), one student with Down syndrome and severe intellectual disability, one student with ASD, severe intellectual disability, tactile defensiveness, and hyperactivity, and one student with ASD without intellectual disability.
The content experts' focus group consisted of seven (N = 7) therapists specialized in children with disabilities: a social worker, an occupational therapist, a speech therapist, a health visitor, a nurse, a member of the special auxiliary staff, and a physiotherapist assistant.

4.2. Data Collection

Throughout the development life cycle, the design team conducted repeated internal tests and evaluations based on the System Usability Scale (SUS) [97] and the Serious Game Evaluation Scale (SGES) (see Table A1 in Appendix A) [89,95,98,99,100]. The SGES is a five-point Likert-scale questionnaire comprising 33 statements, developed by a team of researchers that includes the authors of this article. The SGES simultaneously evaluates 11 factors that shape users' perceptions when playing games. The 11 factors are divided into four groups [89]:
  • Content: subjective adequacy of feedback, subjective adequacy of educational material, subjective clarity of learning objectives, subjective quality of the narration.
  • Technical characteristics: subjective usability/playability (functionality), subjective audiovisual experience/aesthetics, subjective realism.
  • User state of mind: presence/immersion, pleasure.
  • Characteristics that allow learning: subjective relevance to personal interests, and motivation.
Content experts completed the SGES, while the users were involved in a structured interview based on the SGES. There were two reasons why the research team conducted structured interviews: first, the children who participated in the evaluation could not read and write; second, to ensure that the results were measurable.
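
To make the scoring concrete, the following is a minimal, illustrative sketch (not the actual analysis code) of how a factor-group mean can be computed from five-point Likert answers, flipping reverse-scored statements (marked with * in Table A1) before averaging. The sample answers and the reverse-scoring pattern shown are hypothetical.

```csharp
using System;
using System.Linq;

// Illustrative sketch of scoring Likert answers for one SGES factor group.
class SgesScoring
{
    // Reverse-scored statements are flipped (1<->5, 2<->4) before averaging.
    static int Score(int answer, bool reversed) => reversed ? 6 - answer : answer;

    // Mean of a factor group, given the 1-5 answers of its statements and which of them are reversed.
    static double GroupMean(int[] answers, bool[] reversed) =>
        answers.Select((a, i) => Score(a, reversed[i])).Average();

    static void Main()
    {
        // Hypothetical example: a four-statement group whose last statement is reverse-scored.
        var answers  = new[] { 5, 4, 4, 2 };
        var reversed = new[] { false, false, false, true };
        Console.WriteLine($"Group mean: {GroupMean(answers, reversed):0.00}");   // prints 4.25
    }
}
```

Group means of this kind, averaged over participants, correspond to the scores reported later in Tables 4 and 5.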

4.3. Procedure

Prior to implementation, and following ethical principles and codes of conduct, research approval was obtained from the Research Ethics and Conduct Committee of Ionian University, the Greek Ministry of Education, the Scientific Council of the Center for Physical Medicine and Rehabilitation of the General Hospital of Florina, Greece, and the Center for Creative Employment of Children with Disabilities of the Municipality of Florina. In addition, written parental consent was obtained for the children's participation.
Both evaluation groups were initially informed of the researchers' intention to conduct the WUIM evaluation study. First, the research process was explained, that is, that they would play three games and then either complete a questionnaire or discuss with the researchers. They were also informed that they could stop playing whenever they wanted and that they were free to express their feelings and thoughts while playing.
  • First, the researchers gave the students the more difficult puzzles (square pieces). The students were not given additional information regarding the content and the narration, in order to examine whether they were able to place the activities in order without help. If difficulties were identified, they were given the easier version, the one with multiple shapes.
  • They were then given the tablets. The researchers told the children to press the symbol of "a face brushing its teeth" on the screen; this symbol is the WUIM-AR app start button (Figure 32). Afterwards, the researchers told the children to follow the pedagogical agent's instructions. After finishing the puzzle game and the augmented reality application, the participants completed the questionnaire or proceeded with the interview, respectively.
  • Then, they used the VR application. Due to the use of special headset devices, the process focused on how to fit the smartphones into the headsets, wear them, and start the game. The students were then allowed to navigate freely. When they finished the game, its evaluation followed.
The applications were evaluated both by each student individually (Figure 33) and during a collaborative team process (Figure 34), depending on the children's profiles. For example, a child on the autism spectrum with communication difficulties did not want to join the group of children.

5. Results

Due to the small sample, we did not statistically analyze the data collected from the questionnaires and structured interviews but proceeded with a trend analysis instead. We also drew conclusions from our observations and the recordings of the participants' think-aloud protocol.
In sum, the results of the trend analysis of the five-point Likert scale, ranging from "strongly disagree" (1) to "strongly agree" (5), are presented below (Table 4 and Table 5). All participants (students and experts) rated the applications with a score of 4 or 5, that is, agree or strongly agree. A comparison of the results shows that the children evaluated the applications more positively than the content experts.
These results were expected, as great excitement was observed during play, especially while the children were manipulating the devices. On the one hand, excitement is very promising because it motivates students; on the other hand, there is a risk that students become attached to the new media rather than to the technology-mediated learning experiences.
Although the research sample was too small to yield valid results, it seems that the overall VR-based experience was rated slightly better than the AR-based one in both groups (Table 6). A VR learning environment can be used as a scaffolding tool to support learning [24]. Equally, considering that AR enhances collaborative learning [101,102,103], a combination of both technologies with traditional board games can benefit students' learning.
Both research questions were also addressed during the structured interviews.
  • RQ1: Does WUIM accomplish the seven criteria that shape the user experience? Is it useful, usable, findable, creditable, desirable, accessible, and valuable?
  • User answers were positive for all seven factors that shape the UX. Indicatively, we summarize user comments recorded during the researchers' observation (think-aloud protocol):
    - "When I was playing the games, I did not hold a real toothbrush, but I held the wooden pieces and imagined that I was holding a real one" (Irini, a girl with moderate intellectual disability and a mental age of 7 years).
    - "I liked the tablet. I held a tablet for the first time in my life and it was easy" (Vasso, a girl with moderate intellectual disability and a mental age of 9 years).
    - "I did not feel stressed with the VR head-mounted glasses. On the contrary, I found them fascinating" (Panagiotis, a boy with spastic cerebral palsy, difficulty with fine motor skills, and without intellectual disability). In addition, the researchers observed that, as long as Panagiotis wore the HMD, his involuntary muscle spasms decreased significantly.
    - "I really like the technology, but I would like aid to hold the tablet more steadily" (Alexis, a boy with spastic cerebral palsy, difficulty with fine motor skills, and without intellectual disability).
  • RQ2: Is WUIM a smart learning environment? In more detail, does it gather features such as effectiveness, efficiency, scalability, autonomy, engagement, flexibility, adaptiveness, personalization, conversationality, reflectiveness, innovation, and self-organization?
  • When therapists were asked whether they considered WUIM effective, efficient, scalable, autonomous, engaging, flexible, adaptive, personalized, reflective, innovative, and self-organized, and whether they would apply similar training materials, they were initially hesitant. The therapists had no gaming experience, had not been introduced to either virtual or augmented reality, and had not been involved in content design and development processes with innovative technologies. When they were informed that no special programming knowledge is needed for content development, they showed particular interest in creating their own puzzles.
Regarding the UI and the technical equipment, both the children's remarks and our observations led us to supply special tablet cases, so that the tablets can be held by children who have difficulties with fine motor skills and/or spasticity. We also further increased the size of the buttons and letters (Figure 35).

6. Limitations and Future Work

The study has limitations that bear mentioning, the first being the small sample size. Then again, the study's primary objective was to evaluate the prototypes developed during the Alpha and Beta stages. Only children with SEND were included; therefore, it is unknown how effective WUIM is for teaching Activities of Daily Living (ADLs) to students without disabilities. The above limitations can serve as guidelines for future research. Larger sample sizes and students/children with and without SEND can help to more thoroughly evaluate the effectiveness of applications utilizing VR, AR, and 360-degree videos. Learning is a long-term process, which includes the acquisition, generalization, and maintenance of the content of any intervention; thus, follow-up and/or longitudinal studies will provide useful insights. Given that the study was conducted under controlled conditions, using WUIM in real-life conditions and measuring its effects there will determine the usefulness of the application. Finally, several studies are needed to identify whether the philosophy behind WUIM, both in terms of content and design, could function as a lever for successful inclusive education.

7. Conclusions

Nowadays, children with and without learning difficulties are immersed in technology and, through their engagement, they prepare themselves for real-life challenges, utilizing the benefits of simulations and, therefore, of experiential learning. Smart learning environments (SLEs) underline the flexibility of eclectic pedagogy that places students at the center of the educational process and takes into account the diversity in classrooms. The objectives of WUIM are, on the one hand, to cultivate collaboration between children and, on the other hand, to teach ADLs and the process of developing educational content with cutting-edge technologies. However, the fundamental goal of WUIM is to deliver innovative content based on transmedia learning and to help students with SEND and their typically developing peers learn ADLs playfully in an inclusive learning environment. This paper outlined the development of WUIM, from the prototyping presented at the 4th International Conference in Creative Writing to the Alpha and Beta stages, including user and expert evaluations. By detailing the content design and development process, we aimed to show that transmedia stories can be perceived as opportunities to create innovative learning scenarios and to support student-centered educational practices and collaborative skills. As mentioned, one of the objectives of WUIM is to give step-by-step instructions to teachers, even those without programming knowledge, so that they can develop their own game-based educational scenarios with their ICT colleagues and students. Teachers can create AR applications by following the instructions given at https://bit.ly/3ueeezB (accessed on 2 June 2021) (supplementary material).
In terms of the evaluations, the purpose was two-fold. The first goal was to examine the quality of the content, both pedagogically and technologically, in order to improve it, and the second was to identify whether children with disabilities could experience contemporary technologies and game logic. To that end, two research questions were considered: the first key question was whether WUIM fulfils the features that make up the user experience, and the second was whether WUIM can be classified as an SLE. Regarding the first question, the internal evaluations of the transdisciplinary design team and the experts' and children's comments, collected through the SUS and SGES questionnaires, structured interviews, and think-aloud protocols, led to the conclusion that the seven factors of UX, i.e., usefulness, usability, findability, creditability, desirability, accessibility, and value, are satisfied. Regarding the second question, and taking into account the characteristics of SLEs, i.e., adaptation, context-awareness, dynamic, formative feedback, students' empowerment, engagement in meaningful discourse, improvement over time, progress monitoring, personalization, recommendations for appropriate actions and choices, and tracking of performance and outcomes, we can say with confidence that WUIM is a strong SLE candidate.

Supplementary Materials

How to create augmented reality targets: https://bit.ly/3ueeezB.

Author Contributions

Conceptualization, P.K. and I.D.; methodology, P.K., I.D., A.O. and E.F.; investigation, P.K.; writing—original draft preparation, P.K.; visualization, P.K., and A.O.; writing—review and editing, I.D., A.O. and E.F.; supervision, I.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The educational project “Waking Up In the Morning—WUIM” was carried out with the assistance of Giorgos Miliotis, Special Technical Personnel, and the following students of the Department of Audio and Visual Arts, Ionian University, Corfu-Greece: Stavros Karakoutis (AR development, interfaces, user experience and programming), Evaggelia Koumantsioti (storyboard, photography and pedagogical agent), Aris Melachrinos (video and image editing), Evaggelos Pandis (recordings, sound and video editing), Marinos Pavlidis (director, video shooting and editing). We would also like to thank little Paraskevi Rizou, the avatar, for her acting performance and self-direction. Special thanks to the children, their parents and therapists from the Center of Physical Medicine and Rehabilitation (General Hospital of Florina-Greece), the Center for Creative Employment of Children with Disabilities (Municipality of Florina) and the Association of Parents and Guardians of Children with Disabilities, “Sundberg” (Regional Unit of Florina), who were willing to play and help us improve the applications. After all, nothing about children without children.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Serious Game Evaluation Scale (SGES).
Factor: Statements
Presence:
  I was deeply concentrated in the game
  If someone was talking to me, I couldn't hear him
  I forgot about time passing while playing the game
Enjoyment:
  I think the game was fun
  I felt bored while playing the game *
  It felt good to successfully complete the tasks in this game
Subjective learning effectiveness:
  I felt that this game can ease the way I learn
  This game made learning more interesting
  I will definitely try to apply the knowledge I learned with this game
Subjective narratives' adequacy:
  I was captivated by the game's story from the beginning
  I enjoyed the story provided by the game
  I could clearly understand the game's story
Subjective realism:
  There were times when the virtual objects seemed to be as real as the real ones
  When I played the game, the virtual world was more real than the real world
Subjective feedback's adequacy:
  I received immediate feedback on my actions
  I received information on my success (or failure) on the intermediate goals immediately
Subjective audiovisual adequacy:
  I enjoyed the sound effects in the game
  I think the game's audio fits the mood or style of the game
  I enjoyed the game's graphics
  I think the game's graphics fit the mood or style of the game
Subjective relevance to personal interests:
  The content of this game was relevant to my interests
  I could relate the content of this game to things I have seen, done, or thought about in my own life
  It is clear to me how the content of the game is related to things I already know
Subjective goals' clarity:
  The game's goals were presented at the beginning of the game
  The game's goals were presented clearly
Subjective ease of use:
  I think it was easy to learn how to use the game
  I imagine that most people will learn to use this game very quickly
  I felt that I needed help from someone else in order to use the game because it was not easy for me to understand how to control the game *
Subjective adequacy of the learning material:
  In some cases, there was so much information that it was hard to remember the important points *
  The exercises in this game were too difficult *
Motivation:
  This game did not hold my attention *
  When using the game, I did not have the impulse to learn more about the learning subject *
  The game did not motivate me to learn *
Note: * = item whose scoring was reversed.

References

  1. Legg, S.; Hutter, M. A Collection of Definitions of Intelligence. In Proceedings of the 2007 conference on Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006; IOS Press: Amsterdam, The Netherlands, 2007; Available online: https://0-dl-acm-org.brum.beds.ac.uk/doi/10.5555/1565455.1565458 (accessed on 20 May 2021).
  2. Spector, J.M. Conceptualizing the emerging field of smart learning environments. Smart Learn. Environ. 2014, 1, 2. [Google Scholar] [CrossRef] [Green Version]
  3. Kaimara, P.; Deliyannis, I. Why Should I Play This Game? The Role of Motivation in Smart Pedagogy. In Didactics of Smart Pedagogy; Daniela, L., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 113–137. [Google Scholar]
  4. Zhu, Z.-T.; Yu, M.-H.; Riezebos, P. A research framework of smart education. Smart Learn. Environ. 2016, 3, 4. [Google Scholar] [CrossRef] [Green Version]
  5. Kaimara, P.; Poulimenou, S.-M.; Oikonomou, A.; Deliyannis, I.; Plerou, A. Smartphones at schools? Yes, why not? Eur. J. Eng. Res. Sci. 2019, 1–6. [Google Scholar] [CrossRef]
  6. Kaimara, P.; Oikonomou, A.C.; Deliyannis, I.; Papadopoulou, A.; Miliotis, G.; Fokides, E.; Floros, A. Waking up in the morning (WUIM): A transmedia project for daily living skills training. Technol. Disabil. 2021, 33, 137–161. [Google Scholar] [CrossRef]
  7. Deliyannis, I.; Kaimara, P. Developing Smart Learning Environments Using Gamification Techniques and Video Game Technologies. In Didactics of Smart Pedagogy; Daniela, L., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 285–307. ISBN 9783030015510. [Google Scholar]
  8. Poulimenou, S.-M.; Kaimara, P.; Papadopoulou, A.; Miliotis, G.; Deliyannis, I. Tourism policies for communicating World Heritage Values: The case of the Old Town of Corfu in Greece. In Proceedings of the 16th NETTIES CONFERENCE: Access to Knowledge in the 21st Century the Interplay of Society, Education, ICT and Philosophy; IAFeS—International Association for eScience, Ed.; IAFeS: Wien, Austria, 2018; Volume 6, pp. 187–192. [Google Scholar]
  9. Poulimenou, S.-M.; Kaimara, P.; Deliyannis, I. Promoting Historical and Cultural Heritage through Interactive Storytelling Paths and Augmented Reality [Aνάδειξη της Ιστορικής Και Πολιτιστικής Κληρονομιάς μέσω Διαδραστικών Διαδρομών Aφήγησης και Επαυξημένη Πραγματικότητα]. In Proceedings of the 2nd Pan-Hellenic Conference on Digital Cultural Heritage-EuroMed, Volos, Greece, 1–3 December 2017; pp. 627–636. (In Greek). [Google Scholar]
  10. Kaimara, P.; Renessi, E.; Papadoloulos, S.; Deliyannis, I.; Dimitra, A. Fairy tale meets ICT: Vitalizing professions of the past [Το παραμύθι συναντά τις ΤΠΕ: ζωντανεύοντας τα επαγγέλματα του παρελθόντος]. In Proceedings of the 3rd International Conference in Creative Writing, Corfu, Greece, 6–8 October 2017; Kotopoulos, T.H., Nanou, V., Eds.; Postgraduate Programme ‘Creative Writing’—University of Western Macedonia: Corfu, Greece, 2018; pp. 292–318. ISBN 978-618-81047-9-2. (In Greek) [Google Scholar]
  11. Deliyannis, I.; Poulimenou, S.-M.; Kaimara, P.; Filippidou, D.; Laboura, S. BRENDA Digital Tours: Designing a Gamified Augmented Reality. Application to Encourage Gastronomy Tourism and local food exploration. In Proceedings of the 2nd International Conference of Cultural Sustainable Tourism, Maia, Portugal, 13–15 October 2020. [Google Scholar]
  12. Poulimenou, S.-M.; Kaimara, P.; Deliyannis, I. World Heritage Monuments Management Planning of in the light of UN Sustainable Development Goals: The case of the Old Town of Corfu. In Proceedings of the 4th International Conference on “Cities’ Identity through Architecture and Arts”, Pisa, Italy, 14–16 December 2020. [Google Scholar]
  13. Sims, R. An interactive conundrum: Constructs of interactivity and learning theory. Australas. J. Educ. Technol. 2000, 16, 45–57. [Google Scholar] [CrossRef] [Green Version]
  14. Kaimara, P.; Poulimenou, S.-M.; Deliyannis, I. Digital learning materials: Could transmedia content make the difference in the digital world? In Epistemological Approaches to Digital Learning in Educational Contexts; Daniela, L., Ed.; Routledge: London, UK, 2020; pp. 69–87. [Google Scholar]
  15. UNESCO. World Conference on Special Needs Education: Access and Quality. Final Report; United Nations Educational, Scientific and Cultural Organization & International Bureau of Education: Paris, France, 1994. [Google Scholar]
  16. Tomlinson, C.A.; Brighton, C.; Hertberg, H.; Callahan, C.M.; Moon, T.R.; Brimijoin, K.; Conover, L.A.; Reynolds, T. Differentiating Instruction in Response to Student Readiness, Interest, and Learning Profile in Academically Diverse Classrooms: A Review of Literature. J. Educ. Gift. 2003, 27, 119–145. [Google Scholar] [CrossRef] [Green Version]
  17. Mayer, R. Introduction to Multimedia Learning. In The Cambridge Handbook of Multimedia Learning; Mayer, R., Ed.; Cambridge University Press: Cambridge, UK, 2014; pp. 1–24. [Google Scholar]
  18. Moreno, R.; Mayer, R. Interactive Multimodal Learning Environments. Educ. Psychol. Rev. 2007, 19, 309–326. [Google Scholar] [CrossRef]
  19. Meyer, A.; Rose, D.H.; Gordon, D. CAST: Universal Design for Learning: Theory & Practice. Available online: http://udltheorypractice.cast.org/home?5 (accessed on 29 May 2021).
  20. CAST Universal Design for Learning Guidelines Version 2.2. Available online: http://udlguidelines.cast.org (accessed on 29 May 2021).
  21. Rodrigues, P.; Bidarra, J. Transmedia Storytelling as an Educational Strategy: A Prototype for Learning English as a Second Language. Int. J. Creat. Interfaces Comput. Graph. 2016, 7, 56–67. [Google Scholar] [CrossRef]
  22. Kaimara, P.; Deliyannis, I.; Oikonomou, A.; Miliotis, G. Transmedia storytelling meets Special Educational Needs students: A case of Daily Living Skills Training. In Proceedings of the 4th International Conference on Creative Writing Conference, 12–15 September 2019, Florina, Greece; Kotopoulos, T.H., Vakali, A.P., Eds.; Postgraduate Programme ‘Creative Writing’—University of Western Macedonia: Florina, Greece, 2021; pp. 542–561. ISBN 978-618-5613-00-6. [Google Scholar]
  23. Spector, J.M. The potential of smart technologies for learning and instruction. Int. J. Smart Technol. Learn. 2016, 1, 21. [Google Scholar] [CrossRef]
  24. Daniela, L.; Lytras, M.D. Editorial: Themed issue on enhanced educational experience in virtual and augmented reality. Virtual Real. 2019, 23, 325–327. [Google Scholar] [CrossRef]
  25. Ertmer, P.A.; Newby, T.J. Behaviorism, Cognitivism, Constructivism: Comparing Critical Features From an Instructional Design Perspective. Perform. Improv. Q. 2013, 26, 43–71. [Google Scholar] [CrossRef]
  26. Dewey, J. Democracy and Education: An Introduction to the Philosophy of Education; Macmillan: New York, NY, USA, 1922; ISBN 9780199272532. [Google Scholar]
  27. Chi, M.T.H.; Wylie, R. The ICAP Framework: Linking Cognitive Engagement to Active Learning Outcomes. Educ. Psychol. 2014, 49, 219–243. [Google Scholar] [CrossRef]
  28. Deliyannis, I. The Future of Television-Convergence of Content and Technology; Deliyannis, I., Ed.; IntechOpen: London, UK, 2019. [Google Scholar]
  29. Fisch, S.M. Children’s Learning from Educational Television; Routledge: New York, NU, USA, 2014; ISBN 9781410610553. [Google Scholar]
  30. Li, K.C.; Wong, B.T.-M. Review of smart learning: Patterns and trends in research and practice. Australas. J. Educ. Technol. 2021, 37, 189–204. [Google Scholar] [CrossRef]
  31. Salen, K.; Zimmerman, E. Rules of Play-Game Design Fundamentals; Massachusetts Institute of Technology: Cambridge, MA, USA, 2004; ISBN 0-262-24045-9. [Google Scholar]
  32. Klopfer, E.; Osterweil, S.; Salen, K. Moving Learning Games Forward, Obstacles Opportunities & Openness; The Education Arcade: Cambridge, MA, USA, 2009. [Google Scholar]
  33. Balducci, F.; Grana, C. Affective Classification of Gaming Activities Coming from RPG Gaming Sessions. In E-Learning and Games. Edutainment 2017; Tian, F., Gatzidis, C., El Rhalibi, A., Tang, W., Charles, F., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10345. [Google Scholar] [CrossRef] [Green Version]
  34. Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From game design elements to gamefulness. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments; ACM Press: New York, NY, USA, 2011; pp. 9–15. [Google Scholar]
  35. Prensky, M. Digital Game-Based Learning; Paragon House: St. Paul, MN, USA, 2007. [Google Scholar]
  36. Morschheuser, B.; Hassan, L.; Werder, K.; Hamari, J. How to design gamification? A method for engineering gamified software. Inf. Softw. Technol. 2018, 95, 219–237. [Google Scholar] [CrossRef] [Green Version]
  37. Plass, J.L.; Homer, B.D.; Kinzer, C.K. Foundations of Game-Based Learning. Educ. Psychol. 2015, 50, 258–283. [Google Scholar] [CrossRef]
  38. Nicholson, S. A User-Centered Theoretical Framework for Meaningful Gamification. In Proceedings of the GlLS 8.0 Games + Learning + Society Conference, Madison, WI, USA, 13–15 June 2012; Martin, C., Ochsner, A., Squire, K., Eds.; ETC Press: Madison, WI, USA, 2012; pp. 223–230. [Google Scholar]
  39. Daniela, L. Epistemological Approaches to Digital Learning in Educational Contexts; Daniela, L., Ed.; Routledge: London, UK, 2020; ISBN 9780429319501. [Google Scholar]
  40. Landrum, T.J.; McDuffie, K.A. Learning styles in the age of differentiated instruction. Exceptionality 2010, 18, 6–17. [Google Scholar] [CrossRef]
  41. Tomlinson, C.A. Differentiation of Instruction in the Elementary Grades; Report No. ED 443572; ERIC Clearinghouse on Elementary and Early Childhood Education: Champaign, IL, USA, 2000.
  42. Oviatt, S. Advances in robust multimodal interface design. IEEE Comput. Graph. Appl. 2003, 23, 62–68. [Google Scholar] [CrossRef]
  43. Jonassen, D.H. Technology as cognitive tools: Learners as designers. ITForum Pap. 1994, 1, 67–80. [Google Scholar]
  44. Montessori, M. The Montessori Method: Scientific Pedagogy as Applied Child Education in ‘The Children’s Houses’, with Additions and Revisions by the Author; Frederick A Stokes Company: New York, NY, USA, 1912. [Google Scholar]
  45. Tindall-Ford, S.; Chandler, P.; Sweller, J. When two sensory modes are better than one. J. Exp. Psychol. Appl. 1997, 3, 257–287. [Google Scholar] [CrossRef]
  46. Clark, J.M.; Paivio, A. Dual coding theory and education. Educ. Psychol. Rev. 1991, 3, 149–210. [Google Scholar] [CrossRef] [Green Version]
  47. Kanellopoulou, C.; Kermanidis, K.L.; Giannakoulopoulos, A. The Dual-Coding and Multimedia Learning Theories: Film Subtitles as a Vocabulary Teaching Tool. Educ. Sci. 2019, 9, 210. [Google Scholar] [CrossRef] [Green Version]
  48. Sadoski, M. A Dual Coding View of Vocabulary Learning. Read. Writ. Q. 2005, 21, 221–238. [Google Scholar] [CrossRef]
  49. Woodin, S. SUITCEYES (Smart, User-friendly, Interactive, Tactual, Cognition-Enhancer That Yields Extended Sensosphere) Scoping Report on Law and Policy on Deafblindness, Disability and New Technologies; United Kingdom: SUITCEYES-European Union’s Horizon 2020 Programme. 2020. Available online: https://www.hb.se/en/research/research-portal/projects/suitceyes-/ (accessed on 2 June 2021).
  50. Mayer, R.E. Applying the science of learning to medical education. Med. Educ. 2010, 44, 543–549. [Google Scholar] [CrossRef]
  51. Scolari, C.A. Transmedia Storytelling: Implicit Consumers, Narrative Worlds, and Branding in Contemporary Media Production. Int. J. Commun. 2009, 3, 586–606. [Google Scholar]
  52. Herr-Stephenson, B.; Alper, M.; Reilly, E.; Jenkins, H. T Is for Transmedia: Learning through Transmedia Play; USC Annenberg Innovation Lab and the Joan Ganz Cooney Center at Sesame Workshop: Los Angeles, CA, USA; New York, NY, USA, 2013. [Google Scholar]
  53. Jenkins, H. Wandering through the Labyrinth: An Interview with USC’s Marsha Kinder. Int. J. Transmedia Lit. 2015, 1, 253–275. [Google Scholar]
  54. López-Varela Azcárate, A. Transmedial Ekphrasis. From Analogic to Digital Formats. IJTL Int. J. Transmedia Lit. 2015, 45–66. [Google Scholar] [CrossRef]
  55. Lygkiaris, M.; Deliyannis, I. Aνάπτυξη Παιχνιδιών: Σχεδιασμός Διαδραστικής Aφήγησης Θεωρίες, Τάσεις και Παραδείγματα [Game Development: Designing Interactive Narrative Theories, Trends and Examples]; Fagottobooks: Athens, Greece, 2017; ISBN 9789606685750. [Google Scholar]
  56. Robin, B.R.; McNeil, S.G. What educators should know about teaching digital storytelling. Digit. Educ. Rev. 2012, 22, 37–51. [Google Scholar] [CrossRef]
  57. Fleming, L. Expanding Learning Opportunities with Transmedia Practices: Inanimate Alice as an Exemplar. J. Media Lit. Educ. 2013, 5, 370–377. [Google Scholar]
  58. Pence, H.E. Teaching with Transmedia. J. Educ. Technol. Syst. 2011, 40, 131–140. [Google Scholar] [CrossRef]
  59. Alper, M.; Herr-Stephenson, R. Transmedia Play: Literacy across Media. J. Media Lit. Educ. 2013, 5, 366–369. [Google Scholar]
  60. Jenkins, H. Transmedia Storytelling and Entertainment: An annotated syllabus. Continuum 2010, 24, 943–958. [Google Scholar] [CrossRef]
  61. Scolari, C.A.; Lugo Rodríguez, N.; Masanet, M. Transmedia Education. From the contents generated by the users to the contents generated by the students. Rev. Lat. Comun. Soc. 2019, 74, 116–132. [Google Scholar] [CrossRef] [Green Version]
  62. Nordmark, S.; Milrad, M. Tell Your Story About History: A Mobile Seamless Learning Approach to Support Mobile Digital Storytelling (mDS). In Seamless Learning in the Age of Mobile Connectivity; Springer: Singapore, 2015; pp. 353–376. [Google Scholar]
  63. Teske, P.R.J.; Horstman, T. Transmedia in the classroom: Breaking the fourth wall. In Proceedings of the 16th International Academic MindTrek Conference (MindTrek ’12); Association for Computing Machinery, Ed.; ACM Press: New York, NY, USA, 2012; pp. 5–9. [Google Scholar] [CrossRef]
  64. Aiello, P.; Carenzio, A.; Carmela, D.; Gennaro, D.; Tore, S.D.; Sibilio, M. Transmedia Digital Storytelling to Match Students’ Cognitive Styles in Special Education. Research Educ. Media 2013, 5, 123–134. [Google Scholar]
  65. Kaimara, P.; Miliotis, G.; Deliyannis, I.; Fokides, E.; Oikonomou, A.; Papadopoulou, A.; Floros, A. Waking-up in the morning: A gamified simulation in the context of learning activities of daily living. Technol. Disabil. 2019, 31, 195–198. [Google Scholar] [CrossRef] [Green Version]
  66. Kaimara, P.; Deliyannis, I.; Oikonomou, A.; Fokides, E.; Miliotis, G. An innovative transmedia-based game development method for inclusive education. Digit. Cult. Educ. 2021, in press. [Google Scholar]
  67. Soegaard, M. The Basic of User Experience Design; Interaction Design Foundation; Available online: https://www.interaction-design.org/ebook (accessed on 30 May 2021).
  68. United Nations General Comment No. 6 (2018) on Equality and Non-Discrimination; UN Committee on the Rights of Persons with Disabilities: Geneva, Switzerland, 2018.
  69. Kaimara, P.; Fokides, E.; Oikonomou, A.; Deliyannis, I. Undergraduate students’ attitudes towards collaborative digital learning games. In Proceedings of the 2nd International Conference Digital Culture and AudioVisual Challenges, Interdisciplinary Creativity in Arts and Technology, Corfu, Greece, 10–11 May 2019; pp. 63–64. [Google Scholar]
  70. Fokides, E.; Kaimara, P. Future teachers’ views on digital educational games [Oι απόψεις των μελλοντικών εκπαιδευτικών για τα ψηφιακά εκπαιδευτικά παιχνίδια]. Themes Sci. Technol. Educ. 2020, 13, 83–95. (In Greek) [Google Scholar]
  71. Kaimara, P.; Fokides, E. Future teachers’ views on digital educational games [Oι απόψεις των μελλοντικών εκπαιδευτικών για τα ψηφιακά εκπαιδευτικά παιχνίδια]. In Proceedings of the 2nd Pan-Hellenic Conference: Open Educational Resources and E-Learning, Korinthos, Greece, 13–14 December 2019; Jimoyiannis, A., Tsiotakis, P., Eds.; Department of Social and Education Policy of the University of Peloponnese & Hellenic Association of ICT in Education (HAICTE): Korinthos, Greece, 2019; p. 41. (In Greek) [Google Scholar]
  72. Kaimara, P.; Fokides, E.; Oikonomou, A.; Deliyannis, I. Potential Barriers to the Implementation of Digital Game-Based Learning in the Classroom: Pre-service Teachers’ Views. Technol. Knowl. Learn. 2021. [Google Scholar] [CrossRef]
  73. Pratten, R. Getting Started withTransmedia Storytelling: A Practical Guide for Beginners; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2015; ISBN 1515339165. [Google Scholar]
  74. Bloom, B.S. Taxonomy of educational objectives: The classification of educational goals. In Handbook I: Cognitive Domain; Engelhart, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R., Eds.; McKay: New York, NY, USA, 1956. [Google Scholar]
  75. Lowyck, J. Bridging Learning Theories and Technology-Enhanced Environments: A Critical Appraisal of Its History. In Handbook of Research on Educational Communications and Technology; Spector, J.M., Merrill, M.D., Elen, J., Bishop, M.J., Eds.; Springer: New York, NY, USA, 2014; pp. 3–20. ISBN 978-1-4614-3184-8. [Google Scholar]
  76. Pratt, C.; Steward, L. Applied Behavior Analysis: The Role of Task Analysis and Chaining. Available online: https://www.iidc.indiana.edu/pages/Applied-Behavior-Analysis (accessed on 25 May 2021).
  77. Bruner, J.S.; Watson, R. Child’s Talk: Learning to Use Language; Oxford University Press: New York, NY, USA, 1983; ISBN 198576137. [Google Scholar]
  78. Lillard, A.S. Playful learning and Montessori education. Am. J. Play 2013, 5, 157–186. [Google Scholar]
  79. Vygotsky, L. Mind in Society: The Development of Higher Psychological Processes, 2nd ed.; Cole, M., John-Steiner, V., Scribner, S., Souberman, E., Eds.; Harvard University Press: Cambridge, UK, 1978; ISBN 0-674-57628-4. [Google Scholar]
  80. Bach, H. Composing a Visual Narrative Inquiry. In Handbook of Narrative Inquiry: Mapping a Methodology; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2007; pp. 280–307. ISBN 1456564684. [Google Scholar]
  81. Miller, G.A. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 1994, 101, 343–352. [Google Scholar] [CrossRef]
  82. Pezdek, K.; Chen, H.C. Developmental differences in the role of detail in picture recognition memory. J. Exp. Child Psychol. 1982, 33, 207–215. [Google Scholar] [CrossRef]
  83. Ke, F. Designing and integrating purposeful learning in game play: A systematic review. Educ. Technol. Res. Dev. 2016, 64, 219–244. [Google Scholar] [CrossRef]
  84. Panagopoulos, I. Reshaping Contemporary Greek Cinema through a Re-Evaluation of the Historical and Political Perspective of Theo Angelopoulos’s Work. Ph.D. Thesis, University of Central Lancashire, Preston, UK, 2019. [Google Scholar]
  85. Murphy, C. Why Games Work and the Science of Learning. In Proceedings of the MODSIM World 2011 Conference and Expo ‘Overcoming Critical Global Challenges with Modeling & Simulation’; NASA Conference Publication: Virginia Beach, VA, USA, 2012; pp. 383–392. [Google Scholar]
  86. Fotaris, P.; Mastoras, T. Escape Rooms for Learning: A Systematic Review. In Proceedings of the 13th European Conference on Game Based Learning; Odense, Denmark, 3–4 October 2019; Eleaek, L., Ed.; Academic Conferences Ltd.: Reading, UK, 2019; pp. 235–243. [Google Scholar] [CrossRef]
  87. Dalgarno, B.; Lee, M.J.W. What are the learning affordances of 3-D virtual environments? Br. J. Educ. Technol. 2010, 41, 10–32. [Google Scholar] [CrossRef]
  88. Faizan, N.D.; Löffler, A.; Heininger, R.; Utesch, M.; Krcmar, H. Classification of Evaluation Methods for the Effective Assessment of Simulation Games: Results from a Literature Review. Int. J. Eng. Pedagog. 2019, 9, 19. [Google Scholar] [CrossRef] [Green Version]
  89. Fokides, E.; Atsikpasi, P.; Kaimara, P.; Deliyannis, I. Let players evaluate serious games. Design and validation of the Serious Games Evaluation Scale. Int. Comput. Games Assoc. ICGA 2019, 41, 116–137. [Google Scholar] [CrossRef]
  90. Bernhaupt, R. User Experience Evaluation Methods in the Games Development Life Cycle. In Game User Experience Evaluation. Human–Computer Interaction Series; Bernhaupt, R., Ed.; Springer: Cham, Switzerland, 2015; pp. 1–8. [Google Scholar]
  91. Mirza-Babaei, P.; Nacke, L.; Gregory, J.; Collins, N.; Fitzpatrick, G. How does it play better? Exploring User Testing and Biometric Storyboards in Games User Research. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2013; pp. 1499–1508. [Google Scholar]
  92. Nacke, L.; Drachen, A.; Göbel, S. Methods for Evaluating Gameplay Experience in a Serious Gamming Context. Int. J. Comput. Sci. Sport 2010, 9, 40–51. [Google Scholar]
  93. Almeida, P.; Abreu, J.; Silva, T.; Varsori, E.; Oliveira, E.; Velhinho, A.; Fernandes, S.; Guedes, R.; Oliveira, D. Applications and Usability of Interactive Television. In Communications in Computer and Information Science, Proceedings of the 6th Iberoamerican Conference, JAUTI 2017, Aveiro, Portugal, 12–13 October 2017; Abásolo, M.J., Abreu, J., Almeida, P., Silva, T., Eds.; Springer: Cham, Switzerland, 2018; Volume 813, pp. 44–57. [Google Scholar] [CrossRef]
  94. Bernhaupt, R.; Mueller, F. ‘Floyd’ Game User Experience Evaluation. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems—CHI EA ’16; ACM Press: New York, NY, USA, 2016; pp. 940–943. [Google Scholar]
  95. Kaimara, P.; Fokides, E.; Oikonomou, A.; Atsikpasi, P.; Deliyannis, I. Evaluating 2D and 3D serious games: The significance of student-player characteristics. Dialogoi Theory Prax. Educ. 2019, 5, 36–56. [Google Scholar] [CrossRef] [Green Version]
  96. World Health Organization. World Health Organization International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), 5th ed.; World Health Organization: Geneva, Switzerland, 2016. [Google Scholar]
  97. Brooke, J. SUS: A ‘quick and dirty’ usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor & Francis: London, UK, 1996; pp. 189–194. [Google Scholar]
  98. Fokides, E.; Atsikpasi, P.; Kaimara, P.; Deliyannis, I. Factors Influencing the Subjective Learning Effectiveness of Serious Games. J. Inf. Technol. Educ. Res. 2019, 18, 437–466. [Google Scholar] [CrossRef] [Green Version]
  99. Kaimara, P.; Fokides, E.; Plerou, A.; Atsikpasi, P.; Deliyannis, I. Serious Games Effect Analysis On Player’s Characteristics. Int. J. Smart Educ. Urban Soc. 2020, 11, 75–91. [Google Scholar] [CrossRef]
  100. Fokides, E.; Kaimara, P.; Deliyannis, I.; Atsikpasi, P. Development of a scale for measuring the learning experience in serious games. In Proceedings of the 1st International Conference Digital Culture and AudioVisual Challenges, Interdisciplinary Creativity in Arts and Technology, Corfu, Greece, 1–2 June 2018; Panagopoulos, M., Papadopoulou, A., Giannakoulopoulos, A., Eds.; CEUR-WS: Corfu, Greece, 2018; Volume 2811, pp. 181–186. [Google Scholar]
  101. Wen, Y. Augmented reality enhanced cognitive engagement: Designing classroom-based collaborative learning activities for young language learners. Educ. Technol. Res. Dev. 2021, 69, 843–860. [Google Scholar] [CrossRef]
  102. Alzahrani, N.M. Augmented Reality: A Systematic Review of Its Benefits and Challenges in E-learning Contexts. Appl. Sci. 2020, 10, 5660. [Google Scholar] [CrossRef]
  103. Badilla-Quintana, M.G.; Sepulveda-Valenzuela, E.; Salazar Arias, M. Augmented Reality as a Sustainable Technology to Improve Academic Achievement in Students with and without Special Educational Needs. Sustainability 2020, 12, 8116. [Google Scholar] [CrossRef]
Figure 1. Smart technologies for learning and teaching (source: adapted from Spector [23] (p. 28)).
Figure 2. Mayer's Cognitive Theory of Multimedia Learning [50] (p. 545).
Figure 3. User experience factors.
Figure 4. The four-dimensional dynamic system of interactivity.
Figure 5. WUIM transmedia educational project (source: adapted from Pratten [73] (p. 3)).
Figure 6. WUIM transmedia educational project (source: adapted from Pratten [80] (p. 2)).
Figure 7. 8-step morning routine using PCS.
Figure 8. Square wooden board.
Figure 9. Square wooden block.
Figure 10. Multiple shape wooden board.
Figure 11. Square wooden board.
Figure 12. (a) Puzzles were designed with CorelDRAW® software; (b) symbols are printed on a vinyl sticker.
Figure 13. (a) Sticking the stickers and putting pins; (b) wooden pieces with pins.
Figure 14. WUIM-Puzzle: the wooden block set consists of wooden boards and pieces.
Figure 15. Pre-production phase: script, storyboard, shooting list and breakdown sheets.
Figure 16. Production phase: while shooting.
Figure 17. Production phase: while shooting.
Figure 18. Post-production phase: video converting for the AR application (a) 360° video; (b) standard video.
Figure 19. Post-production phase: subtitle editing. Subtitling while the pedagogical agent speaks (Greek subtitle: Τι να κάνει η Βίκυ τώρα; English translation: What should Vicky do now?).
Figure 20. Post-production phase: shooting the pedagogical agent.
Figure 21. Post-production phase: color subtitles/background. Subtitling while the pedagogical agent speaks (Greek subtitle: Συγχαρητήρια! Oλοκλήρωσες με επιτυχία το παιχνίδι! English translation: Congratulations! You have successfully completed the game!).
Figure 22. Overview of the game development process.
Figure 23. WUIM-AR flowchart.
Figure 24. WUIM-AR UI for tablet.
Figure 25. WUIM-AR UI for smartphone.
Figure 26. Scene graph of the VR game.
Figure 27. A hotspot being activated via a rotating indicator.
Figure 28. Shooting triggers (AR application).
Figure 29. Overlay after brushing-teeth trigger recognition (AR application).
Figure 30. Game logic: two of the five criteria are met.
Figure 31. Game logic: all criteria are met.
Figure 32. (a) WUIM-AR start button; (b) WUIM-AR button/icon on screen.
Figure 33. Individual evaluation: (a) student with moderate intellectual disability; (b) student with cerebral palsy and severe intellectual disability.
Figure 34. Team evaluation: (a) students with intellectual disability; (b) students with cerebral palsy and without intellectual disability.
Figure 35. UI improvement: (a) before evaluation; (b) after evaluation.
Table 1. Examples of differentiated instruction (DI).
  Content: Simultaneous presentation with audio and visual media.
  Process: More time available for a student to complete a task, or encouragement of a more advanced student to look into an issue in greater depth.
  Product: Providing more expression options (creating a puppet show, writing a letter, painting, etc.).
  Learning environment: Provision of a place where students can work alone and quietly without disruptions or, instead, work collectively (collaborative learning).
Table 2. Basic principles of Multimedia Learning.
  1. Multimedia: People learn better from words and pictures than from words alone.
  2. Modality: People learn better from graphics and narrations than from on-screen (printed) text (people learn better from a multimedia message when the words are spoken rather than written).
  3. Redundancy: People learn better just with animation and narration (it might be difficult for some learners to understand, e.g., foreign language learners or certain auditory learning disabilities).
  4. Segmenting: People learn better when a multimedia lesson is presented in user-paced segments rather than as a continuous unit.
  5. Pre-training: People learn better from a multimedia lesson when they know the names and characteristics of the main concepts (they already know some of the basics).
  6. Coherence: People learn better when extraneous material is excluded rather than included.
  7. Signaling: People learn better when they are shown exactly what to pay attention to on the screen.
  8. Spatial contiguity: People learn better when corresponding words and pictures are presented near rather than far from each other on the page or screen.
  9. Temporal contiguity: People learn better when corresponding words and pictures are presented simultaneously rather than successively.
  10. Personalization: People learn better when the words of a multimedia presentation are in conversational style rather than formal style.
  11. Voice: People learn better when the words are spoken in a standard-accented (friendly) human voice rather than a machine voice or foreign-accented human voice.
  12. Image: People do not necessarily learn better when the speaker's image is on the screen.
Table 3. Advanced principles of Multimedia Learning.
  1. Guided discovery: People learn better when guidance is incorporated into discovery-based multimedia environments.
  2. Worked-out example: People learn better when they receive worked-out examples in initial skill learning.
  3. Collaboration: People can learn better with collaborative online learning activities.
  4. Self-explanation: People learn better when they are encouraged to generate self-explanations during learning.
  5. Animation and interactivity: People do not necessarily learn better from animation than from static diagrams.
  6. Navigation: People learn better in hypertext environments when appropriate navigation aids are provided.
  7. Site map: People can learn better in an online environment when the interface includes a map showing where the learner is in the lesson.
  8. Prior knowledge: Instructional design principles that enhance multimedia learning for novices may hinder multimedia learning for more expert learners.
  9. Cognitive aging: Instructional design principles that effectively expand working memory capacity are especially helpful for older learners.
Table 4. Results of WUIM-AR user-based and content expert-based evaluation.
  WUIM-AR Factor Groups                       Students with SEND    Therapists
  A. Content                                  4.8                   4.6
  B. Technical characteristics                4.6                   4.2
  C. State of mind                            4.3                   4.3
  D. Characteristics that allow learning      4.7                   4.6
Table 5. Results of WUIM-VR user-based and content expert-based evaluation.
  WUIM-VR Factor Groups                       Students with SEND    Therapists
  A. Content                                  4.9                   4.7
  B. Technical characteristics                4.5                   4.1
  C. State of mind                            4.8                   4.7
  D. Characteristics that allow learning      4.8                   4.7
Table 6. User evaluation comparison between WUIM-AR and WUIM-VR.
                        WUIM-AR    WUIM-VR
  Students with SEND    4.6        4.75
  Therapists            4.4        4.55
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
