Review

Digital Learning Environments in Higher Education: A Literature Review of the Role of Individual vs. Social Settings for Measuring Learning Outcomes

1 Knowledge Construction Lab, Leibniz-Institut für Wissensmedien, Schleichstraße 6, D-72076 Tübingen, Germany
2 Department of Psychology, Eberhard Karls University, Schleichstraße 4, D-72076 Tübingen, Germany
* Author to whom correspondence should be addressed.
Submission received: 27 February 2020 / Revised: 8 March 2020 / Accepted: 14 March 2020 / Published: 18 March 2020
(This article belongs to the Special Issue Learning Environments)

Abstract

Research on digital learning environments has traditionally applied either an individual perspective or a social perspective to learning. Based on a literature review, we examined to what extent individual or social perspectives determined the learning outcome variables that researchers have used as measurements in existing studies. We analyzed prototypical approaches to operationalizing learning settings (individual vs. social) published in peer-reviewed journals and identified their relation to several measures of learning outcomes. We rated n = 356 articles and included n = 246 articles in the final analysis. A total of 159 studies (64.6%) used an individual learning setting, while 87 studies (35.4%) used a social learning setting. As learning outcome measures, we observed self-reports, observable behavior, learning skills, elaboration, personal initiative, digital activity, and social interactions. The two types of learning settings differed regarding the measurement of elaboration and social interactions. We discuss the implications of our findings for future research and conclude that researchers should investigate further measures of learning outcomes in digital learning settings.

1. Introduction

The efficient use of digital learning environments in higher education is an important research topic from both a scientific and a practical perspective. Learning in digital learning environments is characterized by the provision of learning materials independent of time and location, and by broad access to those materials. Moreover, digital learning environments also support educational opportunities for all types of learners and provide digitally enhanced instruction [1,2,3]. Educational researchers from diverse disciplines have been trying to identify the success factors of learning with digital media in higher education for about two decades [4,5,6,7,8,9,10]. One central aim of higher education is to foster students’ potential for high-quality accomplishments [11,12,13,14] and to support them in applying their knowledge to future challenges in their professional lives [15,16]. Therefore, research on the use of digital learning environments in higher education should pay particular attention to learning outcomes as a prerequisite for evaluating learning success.
There are two main reasons why researchers and practitioners recommend the use of digital learning environments in higher education. First, in an increasingly digitalized world, education needs to be digital as well [17,18,19,20]. Students should be encouraged and empowered to use digital media for communication and collaboration as well as for learning and knowledge exchange in an appropriate way to become competent and proficient members of a knowledge society. Second, digital learning environments promise to make learning and teaching more effective, for example, by increasing learners’ motivation [14,21], adapting to students’ prior knowledge [16], or providing the possibility for mobile and ubiquitous learning [22,23].
However, the findings of existing studies on the impact of digital media on learning are ambiguous [24,25,26,27]. In general, influencing factors such as teachers [28,29], prior knowledge [30,31], or the novelty of the particular digital setting [32] seem to have greater effects on learning outcomes than the use of digital media per se. One reason for the marginal findings on the effects of digital media in these studies might be that they are highly heterogeneous with regard to the measurements and learning settings they applied. Therefore, the study presented here summarizes common measurements of variables that capture learning outcomes in existing empirical studies. This contributes to a common language for researchers to describe effects, based on a shared understanding of distinct learning outcomes. We also argue that the particular theoretical perspective that researchers and practitioners take toward learning with digital media may have an impact on how they design learning environments, how they operationalize relevant variables, and how they measure learning outcomes [26]. Research on digital learning environments has traditionally applied two perspectives of examining and understanding how people learn [33]: a cognitive, individual-oriented perspective that focuses on individual cognition, and a social, community-oriented perspective that focuses on distributed cognition and collaboration [34,35,36]. The cognitive perspective has been upheld mainly in psychology and cognitive science research, while the social perspective has been the dominant approach in the learning sciences for roughly 30 years now [37].
The objective of the study presented here was to examine how a cognitive perspective compared to a social perspective determined the dependent variables that researchers have used in existing studies. The goal of this approach is to comprehend the role that these theoretical perspectives play in the design of digital learning environments and the evaluation of learning outcomes.

2. Theoretical Perspectives on Learning and Research Question

Understanding the importance of these theoretical perspectives is one precondition for transferring scientific results into educational practice. In the following sections, we summarize the key ideas, concepts, and methods of the individual and social perspectives on digital learning environments and introduce our study idea.

2.1. Individual Perspectives on Learning

Individual perspectives deal with individual information processing and focus on individual thinking, including attention, mental representation, learning, memory, problem solving, and decision-making [38,39]. From this standpoint, learning can be described as selecting information and acquiring knowledge through the encoding, storage, and retrieval of information. Learning activities of a single learner would then involve, for example, content-specific examination of learning materials (e.g., leading to understanding), achieving a certain knowledge state (e.g., leading to a test result), or individually creating a previously defined product (e.g., leading to an essay or a work object). From this perspective, digital media can be used to adaptively provide learning content and instructions. Beyond studying how students learn, it is relevant to understand how learners can be instructed or supported [40,41,42].
Cognitive theorizing also takes meta-cognitive monitoring into account, as this is a major aspect of self-regulated learning [43]. Cognitive models often rely on training, problem solving, or computational thinking [44,45,46]. They emphasize strategies for instructing learners to understand new information, construct mental representations of knowledge, and integrate information into cognitive schemas [47]. Cognitive processes are mechanisms that induce learning depending on the mental capacity of learners [48,49,50]. With the development of digital media, learning environments can be designed to accommodate a learner’s current cognitive load.
To predict learning from a cognitive perspective, researchers have investigated the process of knowledge acquisition where learners create mental representations of their knowledge [51,52,53,54]. From a cognitive perspective, a learner’s memory and cognitive capacity [55,56,57,58], attention [59], or decision-making [60] are characteristics which determine learning. Research on individual learning with digital media, for example, indicated that dealing with digital learning material implies handling multimodality and interactivity [61,62], and that digital learning material is associated with specific e-tools [63] or virtual elements [32,64].

2.2. Social Perspectives on Learning

The social perspective postulates that learning is strongly influenced by the social environment in which it occurs [65]. This assumption is in line with the social constructivist theory developed by Vygotsky [65] and indicates that learners need to be actively engaged in their social environment. As a consequence, learning can be conceptualized as a cooperative or collaborative endeavor [66]. From this perspective, individual learning is socially mediated and not independent of the social context it is embedded in. On the contrary, the individual learners’ cognitive systems strongly interact with social systems [67]. Learning (as an individual process) and knowledge construction (as a collective process) depend on knowledge-related activities that arise through socio-cognitive conflicts between these two systems [36]. Thus, communication and social interaction may trigger learning and knowledge construction [68,69,70].
To solve a task for the first time, learners need scaffolding and support from peers or teachers [71,72,73]. When two or more people work together in computer-supported collaborative learning (CSCL), CSCL researchers tend not to focus on what happens in a single learner’s cognitive system. Rather, they take the interaction among people or within the CSCL environment into account [74]. Learners are part of a social context, and they learn how to act and interact within this situation. Such learners make important contributions in two respects: on the one hand, they internalize knowledge and develop as individuals; on the other hand, they react to other people and externalize their knowledge into the social context [36]. From a social perspective, learning activity and learning outcomes are strongly dependent on the interaction within the group.
Research on social learning in digital learning environments indicates that dealing with digital media implies the development of social awareness [75]. This involves making use of social media and social communication technology [76,77], and defining one’s role in a digital network [20,78]. Moreover, social awareness can be developed by using specific e-tools [79] or virtual elements [64,80] to influence learning in a social setting.

2.3. The Research Presented Here

To answer the question of how learning outcomes have been measured in empirical studies on learning with digital media in higher education, it is relevant for educational research and practice to understand what perspectives researchers have on learning and, as a consequence, how they tend to measure learning outcomes [19]. In the study presented here, we focused on individual and social approaches in existing empirical research in order to identify whether there is a relationship between the learning setting that researchers have chosen for their studies in higher education and the particular variables they have measured. Educational researchers tend to operationalize a learning setting in their research according to their own general perspective on learning, as described above. For this reason, we identified the learning setting of each study and considered which learning outcomes were measured. This approach provided an overview of existing empirical research, which can serve as a basis for further discussion. In identifying certain gaps in the research, it furthermore suggests where future studies need to focus.
We hypothesized that there would be a relationship between the general perspective and the particular learning outcomes, such that researchers use different dependent variables in their learning settings due to their particular perspectives. To examine the potential co-occurrence of learning perspective and learning outcomes on a solid empirical basis, we gathered data from published studies on digital learning environments in higher education. In the following section, we describe the method of our empirical procedure in detail.

3. Method

3.1. Searching the Literature: Article Selection

Following the procedure proposed by Cooper [81], we analyzed which learning outcomes previous research has captured in the context of digital learning environments in higher education. We used the standard Web of Science search (ISI Web of Knowledge, Clarivate Analytics). We selected the categories education, psychology, and computer science and identified four relevant topics for our search: (1) digital learning environments, (2) instructional design, (3) higher education, and (4) performance criteria. The first string ensured a neutral perspective which enabled us to find studies about digital learning environments. The second string focused on the instructional perspective, as we aimed to examine processes of learning and teaching. The third string restricted the search to higher education and academic performance [82]. Performance criteria refer to the proficiency of a learner or a group in a given task (e.g., individual or collaborative) and the resulting activities for accomplishing this task. Consequently, the fourth string aimed at identifying performance criteria that are frequently considered for evaluating achievement in higher education. The search procedure and the inclusion criteria for the articles considered for analysis are shown in Figure 1.
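The article does not reproduce the exact query syntax. As a purely illustrative sketch, a Web of Science topic query combining the four search strings might look as follows (the specific terms are hypothetical, not the authors’ actual strings):

```
TS=("digital learning environment*" OR "e-learning" OR "online learning")
AND TS=("instructional design" OR "instruction*")
AND TS=("higher education" OR universit*)
AND TS=("learning outcome*" OR "performance" OR "achievement")
```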

3.2. Gathering Information from Studies: Coding Guide

Our variables of interest were measures of learning outcome (seven categories, as described below) and learning setting (individual vs. social orientation). Independent raters assessed this information in all of the 356 articles and transferred it into an SPSS coding sheet (the rating procedure is described in Section 3.3). The availability of this information was a prerequisite for an article’s inclusion in further analyses. Articles without detailed information about these particular variables of interest were excluded.

3.2.1. Learning Setting: Individual vs. Social Orientation

In the studies we selected, we identified two different orientations in higher education learning settings that represented a researcher’s perspective: an individual orientation vs. a social orientation. Studies with an individual orientation supported individual learners in digital learning environments in creating mental representations and fostering knowledge acquisition [83,84]. This orientation implied that learning activities should be affected by individual cognitive, motivational, and behavioral aspects. A study design was coded as an individual setting if its abstract revealed that learners in individual learning scenarios were assigned an individual task that they fulfilled on their own.
In studies with a social orientation, learners’ participation in social systems and the collaborative application of learning materials were key aspects of learning. This was the case, for example, if two or more people worked together in a CSCL environment [85,86,87]. A study design was thus coded as representing a social setting if the study abstract indicated any kind of collaborative task, either accomplished in a group or through teamwork.

3.2.2. Measurements of Learning Outcomes

We created categories that indicated several different options for measuring particular learning outcomes in digital learning environments. We focused on the outcomes of learning processes, knowledge construction, or knowledge-related activities. By developing meaningful categories for the measurement of learning outcomes, we addressed the challenge of sorting through the current requirements of research on learning with digital media and gained first insights from the articles (i.e., records were screened with the inclusion criteria in mind). We took into account two theoretical approaches that describe learning processes in digital learning environments [20,52,88,89].
Chi and Wylie [52] proposed four types of engagement activities: passive, active, constructive, and interactive. Engagement can be interpreted as a continuum of growing learning processes with predefined learning materials. The authors describe typical materials and activities that enable learners to handle information within a digital learning environment safely and that lead to success in learning. Therefore, any attempt at measuring learning outcomes should include these considerations.
Wilson et al. [90] proposed the perspective of social networking to describe learning in digital communities. They proposed a hypothetical individual learner who was embedded in a social network and who fulfilled a certain role. This goes along with the assumption that the key to learning effectiveness is to create interaction, to encourage deep reflection, and to reach definitive conclusions [91]. The authors considered different levels of performance and provided suggestions about how to order skills and competencies. We considered this also to be a potential method to measure learning with digital media.
These approaches provide theoretical frameworks to describe what is important for research in learning with digital media and offer a basis for categorizing measurements of learning outcomes. We integrated subjective (i.e., self-reports) and objective measurements (i.e., observable behavior) as well as measurements of self-regulation and knowledge changes (i.e., learning skills, elaboration). Furthermore, we integrated the current need to measure learning with digital media that emerged from the reasoning above [91], that is, measurement of interaction on a personal, technological, and context-specific level (i.e., personal initiative, digital activity, and social interaction). For each category, we first provide a short description and theoretical assumption extracted from existing research. We then point out the role of each category for learning in digital learning environments in general and provide examples to underpin our categorizations, which we grouped into the superordinate categories of method, cognition, and activities (see Table 1).
1. Self-report
Self-reports reveal what individual learners think about their abilities, the learning material, the digital learning environment, or the learning outcomes they want to achieve [14,21,92]. Such data may be relevant for understanding the subjective side of learning, but they also carry the risk of containing biased information due to the fragile and subjective nature of the measurement.
Examples: Learners’ perceptions of their own attitude, satisfaction, or motivation; self-reported information may comprise personal relevance, commitment, self-efficacy, and perceived importance or beliefs.
2. Observable behavior
Observable behavior represents the objective behavior of a learner and evaluates learning outcomes in an action-oriented manner. This measurement focuses on the goals of learners and their intention to learn. It includes activities such as choice of lectures, persistence, or efficacy to complete a course [93,94].
Examples: Passing or not passing a course, course selection, choosing a field of study or a subject of specialization.
3. Learning skills
This category relies on models of self-regulation, for example, the model of self-regulated learning that comprises three layers of regulation processes [95]: regulation of the self, of the learning process, and of processing modes. At another level, according to Krathwohl [96], learning outcomes in this category refer to metacognitive knowledge, that is, knowledge about one’s own cognitive processes, reflection, or self-regulation.
Examples: Reading and listening skills, awareness of group processes, or writing skills involved in acquiring learning skills.
4. Elaboration
Elaboration refers to learners’ levels of cognitive processing; learners may apply a surface, deep, or achieving approach in the learning process [97,98]. Learning outcomes reflect these levels of processing via recall, comprehension, or application tasks [99,100,101].
Examples: Multiple choice tests, vocabulary tests, comprehension tests, or essays.
5. Personal initiative
Being proactive is a core characteristic for initiating interaction with digital media. Personal initiative helps learners begin learning in digital environments and take advantage of the educational opportunities of learning with digital media; a minimum of personal initiative is required to initiate learning [102,103].
Examples: Participation, attendance, access to email accounts, data about log-ins, quantity of sent emails, or number of contributions.
6. Digital activity
This category includes engagement in a digital learning environment and active contributions [104]. Digital activity can be measured as performance via log-files. This may include learners’ accessing information, their management, integration, and evaluation of information, their search inquiries, and their use of blogs or wikis.
Examples: Help-seeking behavior, network behavior, search behavior, or reflective and conscious usage of digital environments.
7. Social interaction
Since most digital learning environments integrate several instructional designs, and even individual learning settings include communication tools and situations, this category is not limited to a social learning setting. This indicator allows for considering studies that measure some kind of social interaction as a dependent variable. It includes activities that refer to any kind of cooperation or collaboration [105].
Examples: A dyad’s discussion, a presentation, or a discussion outcome on an individual and a group level.

3.3. Evaluating the Quality of Studies: Rating Procedure

Three raters (a research associate and two student assistants) were trained to identify the available variables of interest within the articles’ abstracts. They received written descriptions of the categories (Table 1), written descriptions of the learning settings (see Section 3.2.1), and verbal instructions about the categorization tasks and the coding sheet. Raters were trained to identify available dependent variables (i.e., measurements of learning outcomes) and to assign these variables to one of the seven categories of the coding guide. In cases in which the raters were unable to make a decision based on the abstract, they were instructed to examine the relevant information in the main text. In order to ensure a high level of inter-rater agreement, we only included ratings with full agreement among the raters.
The raters rated study design (0 = exclude article, 1 = individual, 2 = social) and measures of learning outcomes (0 = not available, 1 = self-report, 2 = observable behavior, 3 = learning skills, 4 = elaboration, 5 = personal initiative, 6 = digital activity, 7 = social interaction). An article’s abstract could reveal more than one measurement of learning outcomes; in this case, several codes were assigned.
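As a minimal sketch, the coding scheme described above could be represented for analysis as follows (Python; the names are ours for illustration, not taken from the authors’ SPSS coding sheet):

```python
# Hypothetical representation of the coding scheme; the labels follow the
# descriptions in the text, but the variable names are illustrative only.
LEARNING_SETTING = {0: "exclude article", 1: "individual", 2: "social"}

LEARNING_OUTCOME = {
    0: "not available",
    1: "self-report",
    2: "observable behavior",
    3: "learning skills",
    4: "elaboration",
    5: "personal initiative",
    6: "digital activity",
    7: "social interaction",
}

# An abstract can reveal more than one outcome measure, so each article
# maps to a single setting code and a list of outcome codes.
example_article = {"setting": 1, "outcomes": [1, 4]}  # individual; self-report + elaboration
```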
The rating procedure needed to distinguish two methodological approaches: on the one hand, we identified available unambiguous descriptions of one learning setting per article (i.e., individual or social); on the other hand, we identified dependent variables within these articles (i.e., measures of learning outcomes). We created word clouds to visualize word frequency using the website https://tagcrowd.com. We excluded filler words and used a maximum of 75 words in the word clouds.
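The authors generated the word clouds with https://tagcrowd.com; an equivalent offline sketch, assuming the third-party Python package wordcloud as a stand-in for that website, could look like this:

```python
# Offline word-cloud sketch with the "wordcloud" package (pip install
# wordcloud); the study itself used https://tagcrowd.com instead.
from wordcloud import WordCloud, STOPWORDS

# Hypothetical input file holding the extracted dependent-variable terms.
text = open("dependent_variable_terms.txt").read()

cloud = WordCloud(
    stopwords=STOPWORDS,  # exclude filler words
    max_words=75,         # cap at 75 words, as in the original procedure
).generate(text)

cloud.to_file("word_cloud.png")
```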

4. Results

We first present the findings regarding the learning settings. Then we provide the results of the dependent variables used in the articles, that is, the measurements of learning outcomes. Finally, we present the findings regarding our general research question.

4.1. Learning Settings

As described above, we identified n = 246 articles with a clear description of a learning setting (see Figure 1). The digital learning setting of 159 articles revealed an individual orientation, and 87 articles revealed a social orientation in their study design. Descriptions of the individual and social learning settings can be seen in Figure 2. These depictions show that studies with a social learning setting used more varied terms to describe variables: although more studies used an individual learning setting, the word cloud of the social settings includes more, and more diverse, words.
The n = 246 articles were published in 14 different peer-reviewed journals. The journals Computers & Education (76), BMC Medical Education (31), and Educational Technology & Society (27) provided most of the identified articles. Studies from journals such as Computers & Education and BMC Medical Education used more individual than social learning settings, whereas studies in The Internet and Higher Education and the Australasian Journal of Educational Technology described more social than individual settings (Table 2).

4.2. Measurements of Learning Outcomes

In total, raters identified n = 306 dependent variables measuring learning outcomes in the n = 246 articles. Self-report (128) and elaboration (113) were captured most frequently as dependent variables in individual as well as social learning settings, while the least used measurements were personal initiative (1) and observable behavior (9) (Table 3).
We provide examples of measures of learning outcomes from the identified articles. We also provide word clouds for the two most frequently used dependent variables, self-report (Figure 3) and elaboration (Figure 4). These visualizations depict the word frequencies of the dependent variables that were used to measure learning outcomes.
An example of a study using self-report to measure learning outcomes reported self-efficacy beliefs and intrinsic motivation [32]. Studies that measured learning outcomes through observable behavior used variables such as drop-out rates [106], number and duration of sessions [107], completion of exams [108], or class attendance [109]. Examples of measures in the category learning skills were intercultural communicative competence, intercultural awareness and intercultural knowledge [110], or learners’ reflection levels [111]. Examples of elaboration measures were exams on lecture content [112] or problem-solving activities [113]. We identified one measure of learning outcome that represented personal initiative on a social level: Shea et al. [114] examined learning presence through social network analysis and quantitative content analysis in a student public class discussion (i.e., personal initiative), as well as private products of knowledge construction (i.e., elaboration). For digital activity, we identified tracking systems [115] and search activities [116] as measures of learning outcomes. Studies rated within the category social interaction used measures such as team-learning outcomes [117], mutual feedback [118], or team discussions [119].

4.3. Measures of Learning Outcomes in Individual and Social Learning Settings

Across all categories, a chi-squared test showed that measures of learning outcomes differed significantly between the chosen learning settings, χ2(6, 306) = 25.89, p < 0.001 (Table 3 and Table 4). In social learning settings, researchers used elaboration significantly less frequently than in individual learning settings, binom(27, 113, prob = 1/3), p = 0.036. Elaboration was the preferred measure of learning outcomes for researchers with an individual research approach. We also found that social interaction was measured as a dependent variable in social as well as individual learning settings, but that these social interaction measures were used more frequently in social than in individual learning settings, binom(1, 11, prob = 1/3), p < 0.001. There were no significant differences for the other categories (Figure 5).
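A rough sketch of these binomial tests with SciPy, using only the counts reported above (the article does not specify test options such as sidedness, so this is an approximation, not the authors’ exact computation):

```python
# Sketch of the reported binomial tests with SciPy; the counts come from
# the text, and prob = 1/3 is the authors' reference proportion.
from scipy.stats import binomtest

# 27 of the 113 elaboration measures stem from social settings (reported p = 0.036).
p_elaboration = binomtest(27, n=113, p=1/3).pvalue

# 1 of the 11 social-interaction measures stems from an individual setting
# (reported p < 0.001).
p_interaction = binomtest(1, n=11, p=1/3).pvalue

# The omnibus comparison across all seven categories was a chi-squared test
# on the 7 x 2 frequency table (e.g., scipy.stats.chi2_contingency),
# reported as chi2(6, 306) = 25.89, p < 0.001.
print(p_elaboration, p_interaction)
```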

5. Discussion

In the study presented here, we focused on cognitive and social approaches in existing empirical research in order to identify whether there was a relationship between the learning settings and the measurements of learning outcomes that researchers applied. In each study, we identified the general design of the digital learning environment and, at the same time, considered the respective measures of learning outcomes that were used in that study. In total, there were more studies from researchers who used individual settings than studies with social settings. We also found that self-reports and elaboration were captured most often as measures of learning outcomes. We had hypothesized that there would be a relationship between the general perspective (individual vs. social learning setting) and the particular measurement of learning outcomes, which would indicate that researchers use different dependent variables depending upon their particular research approach. As hypothesized, we found that the measures of learning outcomes in studies with an individual learning setting differed from the measures in studies with a social learning setting. The comparison of studies with individual and social settings revealed that the measure elaboration was used relatively more often in studies with individual settings than in studies with social settings. The measure social interaction, in contrast, was used less often in studies with individual settings than in studies with a social approach.

5.1. Heterogeneous Evaluation of Learning Outcomes

This literature review has generated a broad variety of performance criteria as indicators for measuring learning outcomes in higher education. The results regarding learning outcomes with digital media in higher education showed that the classification of learning outcomes is not consistent across previous studies. Even for the selected and narrow context of higher education, the terminology for learning outcomes is heterogeneous. However, the database search produced many high-quality studies. There appear to be overall preferred variables for measuring learning outcomes, namely elaboration and self-reports.

5.2. Learning Outcomes in Learning Settings

Researchers with an individual perspective used elaboration relatively more often to evaluate learning than researchers with a social orientation. This is comprehensible, as learning activities in an individual learning setting often explicitly refer to recall, comprehension, or cognitive performance for predicting achievement or success. In addition, individual processes that are particularly relevant from an educational viewpoint are learning and memory processes in terms of selecting information and acquiring knowledge through the encoding, storage, and retrieval of information. Thus, individual learning is usually seen from a cognitive perspective [120]. Our analysis disclosed that elaboration was measured with a relative frequency of 42.16% in studies with an individual learning setting. Nevertheless, the analysis also found variables describing elaboration in studies that represented a social perspective, with a relative frequency of 26.47%. Overall, 113 out of 306 ratings (36.93%) indicated measures of elaboration. Elaboration thus appears to be a measure that is highly relevant for both individual and social approaches.
Self-report was also a very frequently used category: altogether, raters identified self-reports as outcome measures in 41.83% of the studies. Self-reports seem to be highly relevant for understanding what learners think about their abilities, the learning material, the digital learning environment, or the learning outcomes they want to achieve, irrespective of whether researchers used an individual or a social setting. In total, 241 out of 306 measurements belonged to the self-report or elaboration category. These two measurements do not seem to be particularly tailored to digital settings, however. Self-reports and elaboration are relatively traditional measures and do not really take digital characteristics into account.
We had expected to find more measures of learning outcomes in the activities category, which included personal initiative and digital activity, as these have a specifically digital focus. These activities are supposed to be highly relevant in the digital world, but only 30 out of 306 measurements belonged to these categories. Network-specific access to learning institutions in higher education, digital learning material, and digital communication tools as well as digital learning environments have become more and more prevalent, and the need for research in this context has grown. Therefore, the low frequency of variables that measured digital activity in both individual and social learning settings was surprising. We hope that future studies will take these variables into account more frequently.

5.3. Limitations

Our findings may have been affected by the selection of journals in our database search. We only included peer-reviewed studies, and peer-reviewed journals have specific aims and scopes. For example, BMC Medical Education focuses on training and the evaluation of performance, such as grades, which may promote research about the learning progress of individual learners. This focus on the individual level and the cognitive performance of learners could explain why there were more studies from an individual than from a social perspective (Table 2). At the same time, however, this is unfortunate, because social settings are of course also very important for medical training [121]. This applies, for example, to the field of doctor-patient communication [122,123] or to inter-professional cooperation [124]. Computers & Education is additionally interested in field studies with interventions in designs with pre- and post-tests of individual learners or control groups to compare digital vs. non-digital learning environments. Testing and comparing these pedagogical issues of digital technology as they pertain to individuals could also lead to the publication of more studies by researchers with an individual than with a social orientation. However, the focus on interventions should not prevent researchers from studying the role of social variables and settings in the future.
Furthermore, the gathered data resulted from identifying particular learning outcomes and did not take other potentially relevant aspects into account. For example, the data did not consider the discussions of the articles. Therefore, it could be a sensible next step to carry out more comprehensive analyses of empirical studies that would provide detailed descriptions of context, design, dependent and independent variables, statistics, and effects of the performance criteria in individual and social learning settings. Moreover, it could be promising to collect data about independent variables that tend to co-occur. Future studies could also analyze the structure of the studies and conduct statistical investigations to predict successful outcomes with digital media in higher education.

5.4. Balancing Perspectives

We argue for balancing research approaches that deal with learners’ development on an individual level as well as the development of the social context in higher education. So far, many educational researchers have tended to examine either an individual or a social learning setting in their research. We hope that our results can be used in future research that aims to bring both approaches together. Furthermore, the interrelatedness between individual and social processes sometimes makes it difficult to distinguish among various knowledge-related activities, and researchers need to disentangle influencing effects and variables of interest. We postulate that this is an iterative process in which researchers should reflect upon their own perspective in a responsible manner and make it transparent to others.
With respect to interdisciplinary research and collaboration projects, it could be promising to define commonalities between different research approaches. According to O’Connor and Allen [15], learning should take into account that learners aim to form an identity and to become community members. The authors emphasize that learning must be considered to be a relational phenomenon. As a result, it could be a goal of higher education to train students for a role within a professional network. From this point of view, it seems to be the social system and the individual learner who would benefit from an adequate fit of an educational system, a digital learning environment, and measurements of learning outcomes [20,29,90]. This could create a closer link between research and teaching, support the ongoing change process for institutional implementations of digital learning environments, and be a contribution to the ongoing challenge of adequately preparing teachers for digital teaching environments.

6. Conclusions

We provided an overview of empirical studies of digital environments in higher education and the learning outcomes they have measured. As hypothesized, results revealed overall that there was a relationship between the learning setting that was applied (individual vs. social orientation) and the variables used to measure learning outcomes. We found support for our categorizations of measuring learning outcomes in terms of self-reports, observable behavior, learning skills, elaboration, personal initiative, digital activity, and social interaction. The analysis revealed two particularly popular measures of learning outcomes (i.e., self-reports and elaboration).
Our approach of gathering and structuring a large amount of data from high-quality empirical studies may enable practitioners and scientists to rely on and refer to the results. We identified variables and categorized measures of students’ learning outcomes. The results provide first insights into frequently used measures. This study was a first step in the direction of investigating this research topic. In sum, the goal of describing measures of learning outcomes was achieved, and the chosen categorization for describing evaluations of individuals’ learning outcomes provides a foundation for further study. With respect to the properties of digital learning environments, future studies should try to elaborate on and potentially revise these categorizations.
For future research, we recommend more creative measurements of variables to evaluate learning outcomes in digital learning settings in the context of higher education. For this purpose, researchers need to carefully reflect upon their research subjects and study designs in digital learning environments and think about how to deal with measuring learning. Disentangling influencing effects and independent variables would be helpful to make interdisciplinary educational research sustainable for the future.

Author Contributions

Conceptualization, E.K. and J.K.; methodology, E.K.; software, E.K.; validation, E.K., J.M., U.C. and J.K.; formal analysis, E.K.; investigation, E.K.; data curation, E.K.; writing—original draft preparation, E.K.; writing—review and editing, E.K. and J.K.; visualization, E.K.; supervision, J.K. and U.C.; project administration, J.M.; funding acquisition, U.C. and J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Bundesministerium für Bildung und Forschung: 16DHL1010.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Chan, T.-W.; Roschelle, J.; Hsi, S.; Sharples, M.; Brown, T.; Patton, C.; Cherniavsky, J.; Pea, R.; Norris, C. One-to-one technology-enhanced learning: An opportunity for global research collaboration. Res. Pract. Technol. Enhanc. Learn. 2006, 1, 3–29. [Google Scholar] [CrossRef] [Green Version]
  2. Hartnett, M.; Brown, M.; Anderson, B. Learning in the digital age: How are the ways in which we learn changing with the use of technologies? In Facing the Big Questions in Teaching: Purpose, Power and Learning, 2nd ed.; St. George, S.B., O’Neill, J., Eds.; Cengage: Melbourne, Australia, 2014; pp. 116–125. [Google Scholar]
  3. Johnson, L.; Adams Becker, S.; Cummins, M.; Estrada, V.; Freeman, A.; Hall, C. NMC Horizon Report: 2016 Higher Education Edition; New Media Consortium: Austin, TX, USA, 2016. [Google Scholar]
  4. Mothibi, G. A meta-analysis of the relationship between e-learning and students’ academic achievement in higher education. J. Educ. Pract. 2015, 6, 6–9. [Google Scholar]
  5. Pea, R.D. The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. J. Learn. Sci. 2004, 13, 423–451. [Google Scholar] [CrossRef]
  6. Perelmutter, B.; McGregor, K.K.; Gordon, K.R. Assistive technology interventions for adolescents and adults with learning disabilities: An evidence-based systematic review and meta-analysis. Comput. Educ. 2017, 114, 139–163. [Google Scholar] [CrossRef] [PubMed]
  7. Schneider, M.; Preckel, F. Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychol. Bull. 2017, 143, 565–600. [Google Scholar] [CrossRef] [PubMed]
  8. Stepanyan, K.; Littlejohn, A.; Margaryan, A. Sustainable e-learning: Toward a coherent body of knowledge. J. Educ. Technol. Soc. 2013, 16, 91–102. [Google Scholar]
  9. Volery, T.; Lord, D. Critical success factors in online education. Int. J. Educ. Manag. 2000, 14, 216–223. [Google Scholar] [CrossRef] [Green Version]
  10. Wu, W.-H.; Wu, Y.-C.J.; Chen, C.-Y.; Kao, H.-Y.; Lin, C.-H.; Huang, S.-H. Review of trends from mobile learning studies: A meta-analysis. Comput. Educ. 2012, 59, 817–827. [Google Scholar] [CrossRef]
  11. Ellis, R.A.; Pardo, A.; Han, F. Quality in blended learning environments–Significant differences in how students approach learning collaborations. Comput. Educ. 2016, 102, 90–102. [Google Scholar] [CrossRef]
  12. Graham, C.R.; Woodfield, W.; Harrison, J.B. A framework for institutional adoption and implementation of blended learning in higher education. Internet High. Educ. 2013, 18, 4–14. [Google Scholar] [CrossRef]
  13. Hassanzadeh, A.; Kanaani, F.; Elahi, S. A model for measuring e-learning systems success in universities. Expert Syst. Appl. 2012, 39, 10959–10966. [Google Scholar] [CrossRef]
  14. Sun, P.-C.; Tsai, R.J.; Finger, G.; Chen, Y.-Y.; Yeh, D. What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Comput. Educ. 2008, 50, 1183–1202. [Google Scholar] [CrossRef]
  15. O’Connor, K.; Allen, A. Learning as the organizing of social futures. Natl. Soc. Stud. Educ. 2010, 109, 160–175. [Google Scholar]
  16. Yang, Y.Q.; van Aalst, J.; Chan, C.K.K.; Tian, W. Reflective assessment in knowledge building by students with low academic achievement. Int. J. Comput. Supported Collab. Learn. 2016, 11, 281–311. [Google Scholar] [CrossRef] [Green Version]
  17. Fraillon, J.; Ainley, J.; Schulz, W.; Friedman, T.; Gebhardt, E. Preparing for Life in a Digital Age: The IEA International Computer and Information Literacy Study International Report; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  18. Koehler, M.J.; Mishra, P.; Cain, W. What is technological pedagogical content knowledge (TPACK)? J. Educ. 2013, 193, 13–19. [Google Scholar] [CrossRef] [Green Version]
  19. Laurillard, D.; Alexopoulou, E.; James, B.; Bottino, R.M.; Bouhineau, D.; Chioccariello, A.; Correia, S.; Davey, P.; Derry, J.; Dettori, G. The Kaleidoscope Scientific Vision for Research in Technology Enhanced Learning; Position paper prepared for the European Commission, DG INFSO, under contract No. IST 507838; 2007. Available online: Telearn.archives-ouvertes.fr/hal-00190011/ (accessed on 18 November 2018).
  20. Wilson, M.; Scalise, K.; Gochyyev, P. Learning in digital networks as a modern approach to ICT literacy. In Assessment and Teaching of 21st Century Skills; Care, E., Griffin, P., Eds.; Springer: Washington, DC, USA, 2018; pp. 181–210. [Google Scholar]
  21. Muenks, K.; Miele, D.B. Students’ thinking about effort and ability: The role of developmental, contextual, and individual difference factors. Rev. Educ. Res. 2017, 87, 707–735. [Google Scholar] [CrossRef]
  22. Hatlevik, O.E.; Christophersen, K.-A. Digital competence at the beginning of upper secondary school: Identifying factors explaining digital inclusion. Comput. Educ. 2013, 63, 240–247. [Google Scholar] [CrossRef]
  23. Scherer, R.; Rohatgi, A.; Hatlevik, O.E. Students’ profiles of ICT use: Identification, determinants, and relations to achievement in a computer and information literacy test. Comput. Hum. Behav. 2017, 70, 486–499. [Google Scholar] [CrossRef] [Green Version]
  24. Al Zahrani, H.; Laxman, K. A critical meta-analysis of mobile learning research in higher education. J. Technol. Stud. 2016, 42, 2–17. [Google Scholar] [CrossRef]
  25. Bernard, R.M.; Borokhovski, E.; Schmid, R.F.; Tamim, R.M.; Abrami, P.C. A meta-analysis of blended learning and technology use in higher education: From the general to the applied. J. Comput. High. Educ. 2014, 26, 87–122. [Google Scholar] [CrossRef]
  26. Kirkwood, A.; Price, L. Examining some assumptions and limitations of research on the effects of emerging technologies for teaching and learning in higher education. Br. J. Educ. Technol. 2013, 44, 536–543. [Google Scholar] [CrossRef]
  27. Steenbergen-Hu, S.; Cooper, H. A meta-analysis of the effectiveness of intelligent tutoring systems on college students’ academic learning. J. Educ. Psychol. 2014, 106, 331–347. [Google Scholar] [CrossRef] [Green Version]
  28. Røkenes, F.M.; Krumsvik, R.J. Prepared to teach ESL with ICT? A study of digital competence in norwegian teacher education. Comput. Educ. 2016, 97, 1–20. [Google Scholar] [CrossRef] [Green Version]
  29. Scherer, R.; Siddiq, F.; Tondeur, J. The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Comput. Educ. 2019, 128, 13–35. [Google Scholar] [CrossRef]
  30. Connor, C.M.; Day, S.L.; Zargar, E.; Wood, T.S.; Taylor, K.S.; Jones, M.R.; Hwang, J.K. Building word knowledge, learning strategies, and metacognition with the Word-Knowledge e-Book. Comput. Educ. 2019, 128, 284–311. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Davis, D.; Chen, G.; Hauff, C.; Houben, G.-J. Activating learning at scale: A review of innovations in online learning strategies. Comput. Educ. 2018, 125, 327–344. [Google Scholar] [CrossRef] [Green Version]
  32. Thai, N.T.T.; De Wever, B.; Valcke, M. The impact of a flipped classroom design on learning performance in higher education: Looking for the best “blend” of lectures and guiding questions with feedback. Comput. Educ. 2017, 107, 113–126. [Google Scholar] [CrossRef]
  33. Hoadley, C. A short history of the learning sciences. In International Handbook of the Learning Sciences; Fischer, F., Hmelo-Silver, C.E., Eds.; Routledge: New York, NY, USA, 2018; pp. 11–23. [Google Scholar]
  34. Brown, J.S.; Collins, A.; Duguid, P. Situated cognition and the culture of learning. Educ. Res. 1989, 18, 32–42. [Google Scholar] [CrossRef]
  35. Danish, J.; Gresalfi, M. Cognitive and sociocultural perspective on learning: Tensions and synergy in the learning sciences. In International Handbook of the Learning Sciences; Fischer, F., Hmelo-Silver, C.E., Eds.; Routledge: New York, NY, USA, 2018; pp. 34–43. [Google Scholar]
  36. Kimmerle, J.; Moskaliuk, J.; Oeberst, A.; Cress, U. Learning and collective knowledge construction with social media: A process-oriented perspective. Educ. Psychol. 2015, 50, 120–137. [Google Scholar] [CrossRef] [Green Version]
  37. Cress, U.; Kimmerle, J. Collective knowledge construction. In International Handbook of the Learning Sciences; Fischer, F., Hmelo-Silver, C.E., Eds.; Routledge: New York, NY, USA, 2018; pp. 137–146. [Google Scholar]
  38. Anderson, J.R. The Architecture of Cognition; Psychology Press: New York, NY, USA, 2013. [Google Scholar]
  39. Anderson, J.R. ACT: A simple theory of complex cognition. Am. Psychol. 1996, 51, 355–365. [Google Scholar] [CrossRef]
  40. Hoadley, C.M. Learning and design: Why the learning sciences and instructional systems need each other. Educ. Technol. 2004, 44, 6–12. [Google Scholar]
  41. Mayer, R.E. The Cambridge Handbook of Multimedia Learning, 2nd ed.; Cambridge University Press: New York, NY, USA, 2014. [Google Scholar]
  42. Van Merriënboer, J.J.; Clark, R.E.; De Croock, M.B. Blueprints for complex learning: The 4C/ID-model. Educ. Technol. Res. Dev. 2002, 50, 39–61. [Google Scholar] [CrossRef]
  43. Winne, P.H.; Hadwin, A.F. Studying as self-regulated learning. Metacognition Educ. Theory Pract. 1998, 93, 27–30. [Google Scholar]
  44. Förster, M.; Weiser, C.; Maur, A. How feedback provided by voluntary electronic quizzes affects learning outcomes of university students in large classes. Comput. Educ. 2018, 121, 100–114. [Google Scholar] [CrossRef]
  45. Lotz, C.; Scherer, R.; Greiff, S.; Sparfeldt, J.R. Intelligence in action–Effective strategic behaviors while solving complex problems. Intelligence 2017, 64, 98–112. [Google Scholar] [CrossRef]
  46. Scherer, R. Learning from the past–the need for empirical evidence on the transfer effects of computer programming skills. Front. Psychol. 2016, 7, 1390. [Google Scholar] [CrossRef] [Green Version]
  47. Tsarava, K.; Moeller, K.; Ninaus, M. Training computational thinking through board games: The case of crabs & turtles. Int. J. Serious Games 2018, 5, 25–44. [Google Scholar]
  48. Chandler, P.; Sweller, J. Cognitive load theory and the format of instruction. Cogn. Instr. 1991, 8, 293–332. [Google Scholar] [CrossRef]
  49. Paas, F.; Renkl, A.; Sweller, J. Cognitive load theory and instructional design: Recent developments. Educ. Psychol. 2003, 38, 1–4. [Google Scholar] [CrossRef]
  50. Sweller, J. Cognitive load theory, learning difficulty, and instructional design. Learn. Instr. 1994, 4, 295–312. [Google Scholar] [CrossRef]
  51. Barsalou, L.W. Perceptual symbol systems. Behav. Brain Sci. 1999, 22, 577–660. [Google Scholar] [CrossRef] [PubMed]
  52. Chi, M.T.; Wylie, R. The ICAP framework: Linking cognitive engagement to active learning outcomes. Educ. Psychol. 2014, 49, 219–243. [Google Scholar] [CrossRef]
  53. Estes, Z.; Verges, M.; Barsalou, L.W. Head up, foot down: Object words orient attention to the objects’ typical location. Psychol. Sci. 2008, 19, 93–97. [Google Scholar] [CrossRef]
  54. Glenberg, A.M.; Gallese, V. Action-based language: A theory of language acquisition, comprehension, and production. J. Devoted Study Nerv. Syst. Behav. 2012, 48, 905–922. [Google Scholar] [CrossRef]
  55. Atkinson, R.C.; Shiffrin, R.M. Human memory: A proposed system and its control processes. In Psychology of Learning and Motivation. Advances in Research and Theory; Spence, K.W., Spence, J.T., Eds.; Academic Press: New York, NY, USA, 1968; Volume 2, pp. 89–195. [Google Scholar]
  56. Baddeley, A.D.; Hitch, G. Working Memory. In Psychology of Learning and Motivation; Bower, G.H., Ed.; Academic Press: New York, NY, USA, 1974; Volume 8, pp. 47–89. [Google Scholar]
  57. Clark, J.M.; Paivio, A. Dual coding theory and education. Educ. Psychol. Rev. 1991, 3, 149–210. [Google Scholar] [CrossRef] [Green Version]
  58. Paivio, A. Imagery and Verbal Processes; Psychology Press: New York, NY, USA, 2013. [Google Scholar]
  59. Higgins, E.T.; Silberman, I. Development of regulatory focus: Promotion and prevention as ways of living. In Motivation and Self-Regulation Across the Life Span; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  60. Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. Adv. Exp. Soc. Psychol. 1986, 19, 123–205. [Google Scholar]
  61. Moreno, R.; Mayer, R. Interactive multimodal learning environments. Educ. Psychol. Rev. 2007, 19, 309–326. [Google Scholar] [CrossRef]
  62. Strømsø, H.I. Multiple Models of Multiple-Text Comprehension: A Commentary. Educ. Psychol. 2017, 52, 216–224. [Google Scholar] [CrossRef]
  63. Coldwell, J.; Craig, A.; Goold, A. Using etechnologies for active learning. Interdiscip. J. Inf. Knowl. Manag. 2011, 6, 95–106. [Google Scholar]
  64. Bailenson, J.N.; Yee, N.; Blascovich, J.; Beall, A.C.; Lundblad, N.; Jin, M. The use of immersive virtual reality in the learning sciences: Digital transformations of teachers, students, and social context. J. Learn. Sci. 2008, 17, 102–141. [Google Scholar] [CrossRef] [Green Version]
  65. Vygotsky, L.S. Mind in Society; Harvard University Press: Cambridge, MA, USA, 1978. [Google Scholar]
  66. Doolittle, P.E. Vygotsky’s zone of proximal development as a theoretical foundation for cooperative learning. J. Excell. Coll. Teach. 1997, 8, 83–103. [Google Scholar]
  67. Cress, U.; Kimmerle, J. A systemic and cognitive view on collaborative knowledge building with wikis. Int. J. Comput. Supported Collab. Learn. 2008, 3, 105. [Google Scholar] [CrossRef] [Green Version]
  68. Aghaee, N.; Keller, C. ICT-supported peer interaction among learners in Bachelor’s and Master’s thesis courses. Comput. Educ. 2016, 94, 276–297. [Google Scholar] [CrossRef]
  69. Goggins, S.; Xing, W. Building models explaining student participation behavior in asynchronous online discussion. Comput. Educ. 2016, 94, 241–251. [Google Scholar] [CrossRef]
  70. Smet, M.D.; Keer, H.V.; Wever, B.D.; Valcke, M. Cross-age peer tutors in asynchronous discussion groups: Exploring the impact of three types of tutor training on patterns in tutor support and on tutor characteristics. Comput. Educ. 2010, 54, 1167–1181. [Google Scholar] [CrossRef]
  71. Damon, W.; Phelps, E. Critical distinctions among three approaches to peer education. Int. J. Educ. Res. 1989, 13, 9–19. [Google Scholar] [CrossRef]
  72. Dillenbourg, P. What do you mean by collaborative learning? In Collaborative-Learning: Cognitive and Computational Approaches; Dillenbourg, P., Ed.; Elsevier: Oxford, UK, 1999; pp. 1–19. [Google Scholar]
  73. Fischer, M.H.; Riello, M.; Giordano, B.L.; Rusconi, E. Singing Numbers in Cognitive Space—A Dual-Task Study of the Link Between Pitch, Space, and Numbers. Top. Cogn. Sci. 2013, 5, 354–366. [Google Scholar] [CrossRef] [Green Version]
  74. Engelmann, T.; Dehler, J.; Bodemer, D.; Buder, J. Knowledge awareness in CSCL: A psychological perspective. Comput. Hum. Behav. 2009, 25, 949–960. [Google Scholar] [CrossRef]
  75. Reis, R.C.D.; Isotani, S.; Rodriguez, C.L.; Lyra, K.T.; Jaques, P.A.; Bittencourt, I.I. Affective states in computer-supported collaborative learning: Studying the past to drive the future. Comput. Educ. 2018, 120, 29–50. [Google Scholar] [CrossRef]
  76. Eid, M.I.M.; Al-Jabri, I.M. Social networking, knowledge sharing, and student learning: The case of university students. Comput. Educ. 2016, 99, 14–27. [Google Scholar] [CrossRef]
  77. Sobaih, A.E.E.; Moustafa, M.A.; Ghandforoush, P.; Khan, M. To use or not to use? Social media in higher education in developing countries. Comput. Hum. Behav. 2016, 58, 296–305. [Google Scholar] [CrossRef]
  78. Buder, J.; Schwind, C.; Rudat, A.; Bodemer, D. Selective reading of large online forum discussions: The impact of rating visualizations on navigation and learning. Comput. Hum. Behav. 2015, 44, 191–201. [Google Scholar] [CrossRef]
  79. Erkens, M.; Bodemer, D. Improving collaborative learning: Guiding knowledge exchange through the provision of information about learning partners and learning contents. Comput. Educ. 2019, 128, 452–472. [Google Scholar] [CrossRef]
  80. Schneider, B.; Pea, R. Real-time mutual gaze perception enhances collaborative learning and collaboration quality. Int. J. Comput. Supported Collab. Learn. 2013, 8, 375–397. [Google Scholar] [CrossRef]
  81. Cooper, H. Research Synthesis and Meta-Analysis. A Step-by-Step Approach, 5th ed.; Sage: Thousand Oaks, CA, USA, 2017; Volume 2. [Google Scholar]
  82. Richardson, M.; Abraham, C.; Bond, R. Psychological correlates of university students’ academic performance: A systematic review and meta-analysis. Psychol. Bull. 2012, 138, 353–387. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Sfard, A. On two metaphors for learning and the dangers of choosing just one. Educ. Res. 1998, 27, 4–13. [Google Scholar] [CrossRef]
  84. Suping, S.M. Conceptual Change among Students in Science. ERIC Digest. Available online: Files.eric.ed.gov/fulltext/ED482723.pdf (accessed on 6 July 2018).
  85. Liu, C.-C.; Chen, Y.-C.; Diana Tai, S.-J. A social network analysis on elementary student engagement in the networked creation community. Comput. Educ. 2017, 115, 114–125. [Google Scholar] [CrossRef]
  86. Siqin, T.; van Aalst, J.; Chu, S.K.W. Fixed group and opportunistic collaboration in a CSCL environment. Int. J. Comput. Supported Collab. Learn. 2015, 10, 161–181. [Google Scholar] [CrossRef]
  87. Wang, X.; Kollar, I.; Stegmann, K.; Fischer, F. Adaptable scripting in computer-supported collaborative learning to foster knowledge and skill acquisition. In Connecting Computer-Supported Collaborative Learning to Policy and Practice: CSCL2011 Conference Proceedings; Spada, H., Stahl, G., Eds.; International Society of the Learning Sciences: Hongkong, China, 2011; Volume I, pp. 382–389. [Google Scholar]
  88. Binkley, M.; Erstad, O.; Herman, J.; Raizen, S.; Ripley, M.; Miller-Ricci, M.; Rumble, M. Defining twenty-first century skills. In Assessment and Teaching of 21st Century Skills; Care, E., Griffin, P., Eds.; Springer: Washington, DC, USA, 2012; pp. 17–66. [Google Scholar]
  89. Chi, M.T.; Adams, J.; Bogusch, E.B.; Bruchok, C.; Kang, S.; Lancaster, M.; Levy, R.; Li, N.; McEldoon, K.L.; Stump, G.S. Translating the ICAP theory of cognitive engagement into practice. Cogn. Sci. 2018. [Google Scholar] [CrossRef] [Green Version]
  90. Wilson, M.; Scalise, K.; Gochyyev, P. Rethinking ICT literacy: From computer skills to social network settings. Think. Ski. Creat. 2015, 18, 65–80. [Google Scholar] [CrossRef]
  91. Lorenzo, G.; Moore, J. Five pillars of quality online education. Sloan Consort. Rep. Nation 2002, 15, 9. Available online: www.understandingxyz.com/index_htm_files/SloanCReport-five%20pillars.pdf (accessed on 18 April 2017).
  92. Tondeur, J.; van Braak, J.; Ertmer, P.A.; Ottenbreit-Leftwich, A. Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educ. Technol. Res. Dev. 2017, 65, 555–575. [Google Scholar] [CrossRef]
  93. Cho, M.-H.; Heron, M.L. Self-regulated learning: The role of motivation, emotion, and use of learning strategies in students’ learning experiences in a self-paced online mathematics course. Distance Educ. 2015, 36, 80–99. [Google Scholar] [CrossRef]
  94. Nistor, N.; Neubauer, K. From participation to dropout: Quantitative participation patterns in online university courses. Comput. Educ. 2010, 55, 663–672. [Google Scholar] [CrossRef] [Green Version]
  95. Boekaerts, M. Self-regulated learning: Where we are today. Int. J. Educ. Res. 1999, 31, 445–457. [Google Scholar] [CrossRef]
  96. Krathwohl, D.R. A revision of bloom’s taxonomy: An overview. Theory Pract. 2002, 41, 212–218. [Google Scholar] [CrossRef]
  97. Craik, F.I.M.; Lockhart, R.S. Levels of processing: A framework for memory research. J. Verbal Learn. Verbal Behav. 1972, 11, 671–684. [Google Scholar] [CrossRef]
  98. Phan, H.P.; Deo, B. The revised learning process questionnaire: A validation of a western model of students’ study approaches to the south pacific context using confirmatory factor analysis. Br. J. Educ. Psychol. 2007, 77, 719–739. [Google Scholar] [CrossRef]
  99. Biggs, J.B.; Collis, K.F. Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome); Academic Press: New York, NY, USA, 2014. [Google Scholar]
  100. Bloom, B.S. Taxonomy of Educational Objectives. The Classification of Educational Goals. Handbook 1 Cognitive Domain; David McKay: Ann Arbor, MI, USA, 1956; pp. 20–24. [Google Scholar]
  101. Brabrand, C.; Dahl, B. Using the SOLO taxonomy to analyze competence progression of university science curricula. High. Educ. 2009, 58, 531–549. [Google Scholar] [CrossRef]
  102. Kahu, E.R. Framing student engagement in higher education. Stud. High. Educ. 2013, 38, 758–773. [Google Scholar] [CrossRef]
  103. Fredricks, J.A.; Blumenfeld, P.C.; Paris, A.H. School engagement: Potential of the concept, state of the evidence. Rev. Educ. Res. 2004, 74, 59–109. [Google Scholar] [CrossRef] [Green Version]
  104. Calvani, A.; Fini, A.; Ranieri, M.; Picci, P. Are young generations in secondary school digitally competent? A study on italian teenagers. Comput. Educ. 2012, 58, 797–807. [Google Scholar] [CrossRef] [Green Version]
  105. Kuh, G.D. The national survey of student engagement: Conceptual and empirical foundations. New Dir. Inst. Res. 2009, 2009, 5–20. [Google Scholar] [CrossRef]
  106. Delgado-Almonte, M.; Andreu, H.B.; Pedraja-Rejas, L. Information technologies in higher education: Lessons learned in industrial engineering. Educ. Technol. Soc. 2010, 13, 140–154. [Google Scholar]
  107. Maier, E.M.; Hege, I.; Muntau, A.C.; Huber, J.; Fischer, M.R. What are effects of a spaced activation of virtual patients in a pediatric course? BMC Med. Educ. 2013, 13, 45. [Google Scholar] [CrossRef] [Green Version]
  108. Hachey, A.C.; Wladis, C.; Conway, K. Prior online course experience and GPA as predictors of subsequent online STEM course outcomes. Internet High. Educ. 2015, 25, 11–17. [Google Scholar] [CrossRef]
  109. Susskind, J.E. Limits of powerpoint’s power: Enhancing students’ self-efficacy and attitudes but not their behavior. Comput. Educ. 2008, 50, 1228–1239. [Google Scholar] [CrossRef]
  110. Guillen-Nieto, V.; Aleson-Carbonell, M. Serious games and learning effectiveness: The case of it’s a deal! Comput. Educ. 2012, 58, 435–448. [Google Scholar] [CrossRef]
  111. Chen, N.S.; Wei, C.W.; Wu, K.T.; Uden, L. Effects of high level prompts and peer assessment on online learners’ reflection levels. Comput. Educ. 2009, 52, 283–291. [Google Scholar] [CrossRef]
  112. McKinney, D.; Dyck, J.L.; Luber, E.S. ITunes university and the classroom: Can podcasts replace professors? Comput. Educ. 2009, 52, 617–623. [Google Scholar] [CrossRef]
  113. Hou, H.T. A case study of online instructional collaborative discussion activities for problem-solving using situated scenarios: An examination of content and behavior cluster analysis. Comput. Educ. 2011, 56, 712–719. [Google Scholar] [CrossRef]
  114. Shea, P.; Hayes, S.; Smith, S.U.; Vickers, J.; Bidjerano, T.; Gozza-Cohen, M.; Jian, S.B.; Pickett, A.M.; Wilde, J.; Tseng, C.H. Online learner self-regulation: Learning presence viewed through quantitative content- and social network analysis. Int. Rev. Res. Open Distance Learn. 2013, 14, 427–461. [Google Scholar] [CrossRef]
  115. Hoskins, S.L.; Van Hooff, J.C. Motivation and ability: Which students use online learning and what influence does it have on their achievement? Br. J. Educ. Technol. 2005, 36, 177–192. [Google Scholar] [CrossRef] [Green Version]
  116. Zhou, M.M. SCOOP: A measurement and database of student online search behavior and performance. Br. J. Educ. Technol. 2015, 46, 928–931. [Google Scholar] [CrossRef]
  117. Kim, P.; Hong, J.S.; Bonk, C.; Lim, G. Effects of group reflection variations in project-based learning integrated in a web 2.0 learning space. Interact. Learn. Environ. 2011, 19, 333–349. [Google Scholar] [CrossRef]
  118. Barbera, E. Mutual feedback in e-portfolio assessment: An approach to the netfolio system. Br. J. Educ. Technol. 2009, 40, 342–357. [Google Scholar] [CrossRef] [Green Version]
  119. van Rooij, S.W. Scaffolding project-based learning with the project management body of knowledge (PMBOK (R)). Comput. Educ. 2009, 52, 210–219. [Google Scholar] [CrossRef]
  120. Scherer, R.; Siddiq, F.; Sánchez Viveros, B. The cognitive benefits of learning computer programming: A meta-analysis of transfer effects. J. Educ. Psychol. 2018. [Google Scholar] [CrossRef] [Green Version]
  121. Bientzle, M.; Griewatz, J.; Kimmerle, J.; Küppers, J.; Cress, U.; Lammerding-Koeppel, M. Impact of scientific versus emotional wording of patient questions on doctor-patient communication in an internet forum: A randomized controlled experiment with medical students. J. Med. Internet Res. 2015, 17, e268. [Google Scholar] [CrossRef] [Green Version]
  122. Bientzle, M.; Fissler, T.; Cress, U.; Kimmerle, J. The impact of physicians’ communication styles on evaluation of physicians and information processing: A randomized study with simulated video consultations on contraception with an intrauterine device. Health Expect. 2017, 20, 845–851. [Google Scholar] [CrossRef] [Green Version]
  123. Griewatz, J.; Lammerding-Koeppel, M.; Bientzle, M.; Cress, U.; Kimmerle, J. Using simulated forums for training of online patient counselling. Med. Educ. 2016, 50, 576–577. [Google Scholar] [CrossRef]
  124. Grosser, J.; Bientzle, M.; Shiozawa, T.; Hirt, B.; Kimmerle, J. Acquiring clinical knowledge from an online video platform: A randomized controlled experiment on the relevance of integrating anatomical information and clinical practice. Anat. Sci. Educ. 2019, 12, 478–484. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Search procedure and inclusion criteria for the articles considered for analysis.
Figure 2. Word clouds for individual (left) and social (right) learning settings.
Figure 3. Word cloud for the dependent variables in the category self-report.
Figure 4. Word cloud for the dependent variables in the category elaboration.
Figure 5. Relative frequency of measures of learning outcomes in individual and social learning settings.
Table 1. Measurements of learning outcomes in digital learning environments.

Category | Examples | Learning Outcome Is Evaluated on the Basis of …
Method
Self-report | Students report on their satisfaction, motivation, or attitude | … experience, perception, or values of a learner.
Observable behavior | Enrollment in or final completion of lectures or seminars | … intention, persistence, or effectiveness of a learner's behavior.
Cognition
Learning skills | Self-regulation, awareness, or writing skills | … meta-cognition.
Elaboration | Vocabulary tests or transfer tasks | … cognitive measurements.
Activities
Personal initiative | Number of contributions to discussions or frequency of use | … mere participation or pro-activeness of a learner.
Digital activity | Sourcing and searching behavior | … digital maturity level or active usage of digital tools.
Social interaction | Collaboration with peers or communication with professors | … social influence on activities of a learner.
Table 2. Learning settings (individual vs. social) represented in the journals considered for the analysis.

Journal | Individual | Social | Total
Advances in Health Sciences Education | 2 | 3 | 5
Assessment & Evaluation in Higher Education | 0 | 3 | 3
Australasian Journal of Educational Technology | 6 | 12 | 18
BMC Medical Education | 35 | 9 | 44
British Journal of Educational Technology | 13 | 11 | 24
Computers & Education | 79 | 23 | 102
Educational Technology & Society | 16 | 12 | 28
Educational Technology Research and Development | 8 | 0 | 8
Instructional Science | 5 | 1 | 6
Interactive Learning Environments | 11 | 6 | 17
International Review of Research in Open and Distance Learning | 6 | 2 | 8
Internet and Higher Education | 8 | 12 | 20
Journal of Computer Assisted Learning | 8 | 7 | 15
Journal of Science Education and Technology | 7 | 1 | 8
Total | 204 | 102 | 306
Table 3. Absolute frequency of measures of learning outcomes (n = 306) in individual and social learning settings.

Learning Setting | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total
Individual | 79 | 7 | 17 | 86 | 0 | 14 | 1 | 204
Social | 49 | 2 | 9 | 27 | 1 | 4 | 10 | 102
Total | 128 | 9 | 26 | 113 | 1 | 18 | 11 | 306

Note. Learning setting (individual vs. social) and measures of learning outcomes (1 = self-report, 2 = observable behavior, 3 = learning skills, 4 = elaboration, 5 = personal initiative, 6 = digital activity, 7 = social interaction).
Table 4. Relative frequency of measures of learning outcomes (n = 306) in relation to learning setting, in percent.

Learning Setting | 1 | 2 | 3 | 4 | 5 | 6 | 7
Individual | 38.73 | 3.43 | 8.33 | 42.16 | 0.00 | 6.86 | 0.49
Social | 48.04 | 1.96 | 8.82 | 26.47 | 0.98 | 3.92 | 9.80

Note. Learning setting (individual vs. social) and measures of learning outcomes (1 = self-report, 2 = observable behavior, 3 = learning skills, 4 = elaboration, 5 = personal initiative, 6 = digital activity, 7 = social interaction).
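Each percentage in Table 4 can be reproduced from Table 3 by dividing the count for a given outcome measure by the total number of measures observed in that learning setting (204 individual, 102 social). For example, for self-reports in individual settings:

$$\frac{79}{204} \times 100\% \approx 38.73\%$$

and, analogously, for social interaction in social settings, 10/102 ≈ 9.80%.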