Article

Visual Literacy Intervention for Improving Undergraduate Student Critical Thinking of Global Sustainability Issues

1 College of Natural and Health Sciences, Engineering and Mathematics, Bethune-Cookman University, Daytona Beach, FL 32114, USA
2 School of Education, Bethune-Cookman University, Daytona Beach, FL 32114, USA
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(23), 10209; https://doi.org/10.3390/su122310209
Submission received: 6 September 2020 / Revised: 4 December 2020 / Accepted: 4 December 2020 / Published: 7 December 2020
(This article belongs to the Special Issue Technology-Enhanced Learning, Open Science and Global Education)

Abstract

The promotion of global sustainability within environmental science courses requires a paradigm shift from knowledge-based teaching to teaching that stimulates higher-order cognitive skills. Non-major undergraduate science courses, such as environmental science, promote critical thinking in students in order to improve the uptake of scientific information and develop the rational decision making needed to make more informed decisions. Science, technology, engineering and mathematics (STEM) courses rely extensively on visuals in lectures, readings and homework to improve knowledge. However, undergraduate students do not automatically acquire visual literacy, and a lack of intervention from instructors could be limiting academic success. In this study, a visual literacy intervention was developed and tested in the face-to-face (FTF) and online sections of an undergraduate non-major Introduction to Environmental Science course. The intervention was designed to test and improve visual literacy at three levels: (1) elementary—identifying values; (2) intermediate—identifying trends; and (3) advanced—using the data to make projections or conclusions. Students demonstrated a significant difference in their ability to answer elementary and advanced visual literacy questions in both course sections in the pre-test and post-test. Students in the face-to-face course had significantly higher exam scores and higher median assessment scores compared to sections without a visual literacy intervention. The online section did not show significant improvements in visual literacy or academic success, likely due to a lack of reinforcement of visual literacy following the initial intervention. The visual literacy intervention shows promising results in improving student academic success and should be considered for implementation in other general education STEM courses.


1. Introduction

Global sustainability issues, including those at the nexus of food, water and energy concerns [1,2], capture attention and provide relevance in such a way that enhances learners’ motivations to learn [3,4]. A desired outcome of higher education is for students to develop the multidimensional and multifaceted human capability of critical thinking [5,6]. In one framework of the process of developing critical thinking, observation and inquiry are initial stages that lead to critical thinking abilities involving such interrelated cognitive constructs as interpretation, explanation, reasoning, evaluation, synthesis, reflection, judgment, metacognition and self-regulation [5]. Visual representations (for example, pictures of floods, disease outbreaks, hurricanes, pollution and drought) provide the interfaces and encourage the attention required for learner observations on global sustainability issues. Visual representations for knowledge transfer are increasingly relevant in the context of delivering effective, efficient and engaging online learning [4,7,8].
Global sustainability issues are often described as complex, multilayered and ill-defined problems requiring transdisciplinary solutions [9,10]. An ill-defined problem “does not allow a clear mapping of the initial problem space, and the method of achieving the solution is unclear” [11]. Thus, when confronted with global sustainability issues, a desired approach is insightful problem solving, characterized by “(i) mental impasse, followed by (ii) restructuring of the problem representation, which leads to (iii) a deeper understanding of the problem, and finally culminates in (iv) an ‘Aha!’ feeling of suddenness and obviousness of the solution” [12]. In problem solving with complex datasets, especially for the ill-defined problems encountered in global sustainability issues, visualizations can help to restructure or narrow the problem space. In academic settings, robust knowledge is an instructional priority and is defined as “deep (encoding typically includes critical features necessary); connected (knowledge is connected between problem solving steps, across problems or concepts in domain, and across domains) and coherent (knowledge is consistent and free of contradictions)” [13]. Visualizations such as concept (mind) maps, a type of connection visual [14], can enable learners to communicate the depth, connectedness and coherence of their knowledge of a topic [15].
The emphasis in this report is on a pilot course-based visual literacy intervention for improving undergraduate student critical thinking of global sustainability issues. Spector and Ma [5] define a developmental process of critical thinking from observation and inquiry to argumentation and reflection (Figure 1). Spector and Ma’s critical thinking framework has four dimensions: abilities (educational perspective), dispositions (psychological perspective), levels (epistemological perspective) and time [5].
According to the Association of College and Research Libraries’ Visual Literacy Competency Standards for Higher Education, visual literacy “is a set of abilities that enables an individual to effectively find, interpret, evaluate, use, and create images and visual media” [16,17]. Thus, visual literacy abilities overlap with the critical thinking abilities of interpretation, reasoning, evaluation, synthesis, reflection and judgment. Visualizations have become a central means of communication and knowledge transfer in print, television and social media. The ability to interpret and critically analyze visualizations is an essential skill for employment [18] and a core competency in US education for grades 9–10 (CCSS.ELA-LITERACY.RST.9-10.7) [19,20]. The goal of science education is to increase scientific literacy in all students, not only those interested in science, technology, engineering and mathematics (STEM) careers [21]. Visual literacy allows the public to be more informed when encountering medical information in graphical displays, improving their understanding of risks and benefits [22]. Furthermore, a critical eye is important when the public is presented with incomplete visualizations in the media, advertisements or politically motivated statistics geared toward convincing them to adopt or reject a viewpoint using one-sided information [23]. With the rise of pseudoscience, visual literacy provides students with skills to identify false and misinterpreted scientific data [24].
The challenges of 21st century society, including climate change, pollution, migration, educational disparities and resource management, require the collaboration of scientists, educators, politicians, economists and the public. Non-major science courses, such as environmental science, serve to (1) promote critical thinking in students in order to improve the uptake of scientific information; and (2) develop the rational decision making needed for making more informed decisions [25,26]. The promotion of global sustainability within environmental science courses requires a paradigm shift from knowledge-based teaching to teaching that promotes higher-order cognitive processes, including critical thinking, question asking, decision making and problem solving [27]. Visualizations are an essential component of science education and require instructors to develop students’ metacognitive skills [28]. Within science texts, the frequency of visualizations can reach 14 graphics per 10 pages, the same frequency observed in scientific journals [29]. Within a single ecology course, 800 visualizations were observed in the course lectures [29]. Despite this frequency of use, students struggle to understand the information within visualizations due to differences in experiential learning, incorrect prior knowledge and difficulties associated with the task [30,31].
Visual literacy is a core competency in K-12 education; however, the United States showed no significant improvement in 8th grade achievement on “Data representation, analysis, and probability” questions between 1995 and 1999, and the U.S. ranked 15th overall among surveyed countries in this content area (Mullis et al., 1999). The 21st century demands of visual literacy require students to attain high levels of graph comprehension [32,33]. K-12 curricula focus on seven types of data visualization (line charts, bar charts, stacked bar charts, pie charts, histograms, scatterplots and box plots); however, common graphics in news outlets, notably the most common visualization, choropleth maps, are not covered in the general curriculum [34]. Higher education must address these shortcomings to support visual literacy competencies [32,35]. While it has been listed in the past, visual literacy is not currently listed as an essential learning outcome in higher education [36,37]; however, associated constructs such as inquiry and analysis, critical and creative thinking and quantitative literacy are included [38]. For instructors to support the acquisition of visual literacy, they need to develop or adopt a framework for developing questions on visualizations that will identify student shortcomings and reinforce comprehension of the information within the visualizations [33,39,40,41].
Instructors expect that the information presented in visualizations provides easily accessible answers to questions for students. However, visualizations are not simplifications of information; they simultaneously present multiple elements and relationships, which can be difficult to read, absorb and interpret [42]. There are knowledge and skill gaps in visual literacy between undergraduate students and instructors that must be considered in course planning [43]. Visual literacy is a complex task and an important learned skill which must be emphasized in the classroom [21,36]. Visualizations contain a mixture of relevant and irrelevant information that can confuse or distract students from their task [42]. Even scientists have demonstrated difficulties in correctly interpreting graphs outside of their fields of expertise, and their interpretations of graphics differ significantly based on experiential learning [29]. The knowledge gained by the student observing the visualization depends on the quality of the visualization and the user’s knowledge, perception and cognition [44]. Instructors need to recognize these elements and provide opportunities for interactive exploration where students can engage with the visualization and further explore the data. Visualization interpretation requires three information sources: input data (the visualization), an interaction and pre-stored knowledge [45]. Course instructors must choose appropriate visualizations, teach students how to interact with the visualizations and increase students’ background knowledge of the topics. The focus of this research is a visual literacy intervention which provides a learning experience for students on how to interact with visualizations.
Three levels of comprehension (elementary, intermediate and advanced) that need to be considered in order to activate the process of visualization comprehension in students have emerged in the literature [33]. At an elementary level, students are expected to locate and read the data from a visualization, which requires the novice-level critical thinking ability of interpretation. At an intermediate level, students read between the data to integrate and interpret trends, which requires students to utilize the beginner and professional-level critical thinking abilities of explanation, reasoning and evaluation. At the advanced level, students read beyond the data and utilize the expert-level critical thinking abilities of synthesis, reflection and judgment [5]. A student’s ability to correctly answer questions is inversely related to the level of difficulty of the question, with intermediate and advanced questions posing greater challenges to students [33,46]. There is consensus that college students need additional support to develop visual literacy in order to be successful in college coursework and research activities [35,36,46,47,48].
While the importance of visual literacy in higher education and the factors impacting student performance are documented above, there are limited examples of visual literacy interventions in the literature [32,49]. The COVID-19 pandemic in 2020 has also created an urgent need for remote education options through visual interfaces and for visual literacy interventions that instructors can implement in their courses. Most research on visual literacy captures a moment in time rather than monitoring changes following an intervention [21].
The objectives of the research study reported here were:
  • Assess students’ elementary, intermediate and advanced visual literacy within an undergraduate general education Introduction to Environmental Science course.
  • Determine the changes in visual literacy following an intervention.
  • Track changes in visual literacy at midterms, utilizing a standardized assessment for the course.

2. Materials and Methods

The visual literacy intervention was designed for ES 130 Introduction to Environmental Science, a general education non-major course offered at Bethune-Cookman University, Florida, USA. The intervention was launched in two sections of the course taught by the same instructor in Spring 2019. One section was a face-to-face (FTF) honors section and the other was an online section. Only students who completed the entire intervention in each section were included in the analysis (n = 12). The intervention consisted of pre- and post-tests, a lecture and a homework assignment designed to test three levels of scientific visual literacy adapted from Friel et al. [33]: (1) elementary—can the student read the data? (2) intermediate—can the student read between the data? (3) advanced—can the student read beyond the data? Spector and Ma’s critical thinking framework [5] helped to explain our research findings and plan future studies.
The lecture introduced students to the importance of visualizations and defined five types of visualizations: one-dimensional visuals, two-dimensional visuals, maps, shape visuals and connection visuals [14]. Each of the visualization types was described and defined for the students. During the intervention, the instructor assisted students in approaching the visualizations utilizing components of the Novice’s Information Visualization Sensemaking (NOVIS) model [40]. Each visualization was displayed, and students were guided through the process of constructing a frame by identifying textual objects, such as the axis labels, and non-textual objects, such as color coding or shading. The instructor then guided the students through an exploration of the visualization by asking them to read the visuals, find a trend and project the data beyond the visuals, prompts designed to address the elementary, intermediate and advanced levels of visual comprehension. This activity involved retrieving information, recalling domain knowledge and personal experience and exploring the visualization [40]. Self-assessment, self-correction and self-explanation are metacognitive practices needed for successful learning. Thus, students in the FTF section participated in metacognitive-focused group discussion, engaging in self-assessment and self-correction of their responses to the visual literacy questions. After the guided exploration, an additional visual was presented for practice with the prompt words “read”, “trend” and “projection”. As a means to support critical thinking development through metacognitive practices, the students were asked in the group discussion to explain how they reached their answers. An objective of this self-explanation practice was to enable students who did not find the correct response to obtain it from their peers. Further, the course instructor was able to (1) ensure the students were using the correct skills to reach an answer rather than guessing; and (2) reinforce the visual literacy skills. It is important to note that group discussion was not utilized in the online section.
All visualizations used within the lecture were related to environmental science topics and included a pH scale, fish population demographics, ice coverage, heat maps, sustainable development, food webs and taxonomic trees. The lecture took one and a half 50-min class periods to complete with students in the face-to-face class. For the online course, the instructor recorded the lecture as two PowerPoint files. Students were encouraged to pause the recording to answer questions before advancing to hear the answers; however, there was no direct engagement between the instructor and students for the asynchronous lectures. There was no way to track which online students engaged with the lecture or watched it in its entirety.
The homework assignment was the same for the online and face-to-face sections; both had questions delivered in the Jenzabar online learning management system. The homework included a link to an online lesson on reading graphs [50]. The assignment consisted of three visualizations, a bar graph and two pie charts, with ten multiple choice questions and one true/false question. Each question utilized one of the prompts “read”, “trend” or “projection” from the lecture. Five of the homework questions were elementary, three were intermediate and three were advanced. The homework questions were autograded and students had unlimited attempts. Students were able to see their homework score after each attempt but not which questions were incorrect or what the correct answers were. Both class sections had one week to complete the online homework assignment.
The pre-test and post-test consisted of six visualizations (Figure 2) which were either generated by the instructor or acquired online using a Google search. The content of the visualizations was related to environmental science concepts but the questions and values were designed not to require previous knowledge about the topics. For instance, the students might have needed to be aware of what a seahorse is to answer the questions regarding density, but they did not need to know the difference between lined, dwarf and long-snout seahorses (Figure 2a).
The questions were also designed assuming students had basic knowledge of geography. In Figure 2d, the states and countries were not labeled, but the questions (Table 1) did require students to know their locations, on the assumption that these were geographic facts learned in primary and secondary school. One student did ask for assistance locating these geographic areas in the exam, suggesting that such labels should be considered in the future and cautioning against assumptions of prior knowledge when crafting questions. An additional assumption was that students could do basic addition or subtraction without the aid of a calculator (Table 1, questions 2, 3 and 8). In both the face-to-face and online sections, the pre-test was given to the students prior to any instruction on visualizations in the lecture or homework. The post-test was given three days after the lecture and homework were completed in the online class and one day after the homework was due in the face-to-face class.
In the pre-test and post-test, each visualization was accompanied by three questions, one at each of the three comprehension levels (elementary, intermediate and advanced), for a total of 18 questions (n = 6 for each comprehension level). Four questions were true/false and the other 14 were multiple choice. The pre-test and post-test were given as a Scantron quiz in the face-to-face honors section, while the online section took the test electronically as a timed quiz within the Jenzabar software. The face-to-face and online classes were statistically analyzed separately.
Only scores from students who completed the entire intervention, including the pre-test and post-test, lecture and homework, in each class section were included in the statistical analysis. We sought to determine whether there was a significant difference in the percentage of correct answers between the elementary, intermediate and advanced levels of visual literacy between the pre-test and post-test. Thus, the data were analyzed using a two-way repeated measures ANOVA according to Agresti [51]. Though the sample size was low (n = 6), the data met the assumption of sphericity for running the two-way repeated measures ANOVA, as indicated by Mauchly’s test of sphericity (p > 0.05) [52].
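The text does not specify software for this analysis; as a minimal sketch of an equivalent two-way repeated measures ANOVA in Python (the column names and synthetic scores below are illustrative assumptions, not the study’s data), one could use statsmodels:

```python
# Sketch of a two-way repeated measures ANOVA with two within-subject
# factors (test occasion and question level). All data here are synthetic.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 7):                      # low sample size, n = 6
    for test in ("pre", "post"):
        for level in ("elementary", "intermediate", "advanced"):
            rows.append({"subject": subject, "test": test, "level": level,
                         "score": rng.uniform(50, 100)})  # percent correct
data = pd.DataFrame(rows)

# Fit the repeated measures ANOVA; 'test' and 'level' are the two
# within-subject factors and 'score' is the dependent variable.
result = AnovaRM(data, depvar="score", subject="subject",
                 within=["test", "level"]).fit()
print(result)
```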
The midterm contained three different visualizations (Figure 3), with one question at each comprehension level for each visualization (Table 2). The questions consisted of seven multiple choice questions and two questions with an A or B option. The online section took a timed, proctored exam within the Jenzabar Learning Management System, while the face-to-face course used a Scantron exam with the instructor present. The results from each course section were analyzed separately. The percentage of correct answers for each question was analyzed using a one-way ANOVA in IBM SPSS Statistics 25 to determine whether there was a significant difference in correct answers between the elementary, intermediate and advanced levels of visual literacy questions on the midterm. Though the sample size was low (n = 6), the data met the assumption of homogeneity of variance for a one-way ANOVA, as indicated by Levene’s test (p > 0.05) [52].
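As a minimal sketch of the same comparison outside of SPSS (the percent-correct values below are illustrative placeholders, assuming three midterm questions per level), the Levene check and one-way ANOVA could be run with SciPy:

```python
# Sketch of the midterm analysis: Levene's test for homogeneity of
# variance, then a one-way ANOVA across the three question levels.
from scipy import stats

# Hypothetical percent-correct values for the three questions at each level.
elementary   = [100, 92, 95]
intermediate = [85, 78, 90]
advanced     = [60, 88, 55]

print(stats.levene(elementary, intermediate, advanced))    # variance check
print(stats.f_oneway(elementary, intermediate, advanced))  # one-way ANOVA
```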
To assess the transfer of visual literacy learning from the intervention to academic success, the exam scores for the FTF and online sections utilizing the visual literacy intervention were compared to those from a previous semester in which the FTF and online sections were taught without a visual literacy intervention. All course sections had five exams, including the final exam. All exam scores within each course type, FTF and online, were compared to those from the previous semester without visual literacy using an independent samples t-test in IBM SPSS Statistics 25. The data met the assumption of homogeneity of variance for an independent t-test, as indicated by Levene’s test (p > 0.05) [52].
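A comparable check and test in SciPy could look like the following hypothetical sketch (the exam scores are invented placeholders, not the study’s data):

```python
# Sketch of the between-semester exam comparison: Levene's test for
# homogeneity of variance, then an independent samples t-test.
from scipy import stats

with_vl    = [82, 75, 90, 68, 77]   # exam scores, section with intervention
without_vl = [61, 58, 66, 55, 63]   # exam scores, section without

print(stats.levene(with_vl, without_vl))     # variance check
print(stats.ttest_ind(with_vl, without_vl))  # equal-variance t-test
```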
The course assessment consisted of 16 content-based questions: fourteen multiple choice and two true/false. Three of the multiple choice questions related to two visualizations on the assessment; one question was intermediate and the other two were advanced. The same assessment was embedded in the final exam each semester. The percentage of correct responses from the class for each question was calculated, and the average percentage across the questions was reported against the assessment goal of an aggregate average of 70% or higher. Assessment scores within each course type, FTF and online, were compared to those from a previous semester without visual literacy using an independent samples t-test in IBM SPSS Statistics 25. The data met the assumption of homogeneity of variance for an independent t-test, as indicated by Levene’s test (p > 0.05) [52]. The critical thinking framework formulated by Spector and Ma was used to interpret the findings of the project [5].
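The aggregate benchmark itself is simple arithmetic; a small sketch with hypothetical per-question percentages (not the study’s values) illustrates the check:

```python
# Sketch of the assessment benchmark: average the per-question percent
# correct over the 16 questions and compare to the 70% goal.
question_pct_correct = [81, 64, 73, 70, 58, 77, 69, 72,
                        66, 75, 71, 68, 74, 63, 79, 70]  # illustrative values

average = sum(question_pct_correct) / len(question_pct_correct)
print(f"aggregate average: {average:.1f}% "
      f"({'meets' if average >= 70 else 'below'} the 70% benchmark)")
```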

3. Results

In each course, online and face-to-face, fourteen students completed the pre- and post-tests, online homework and midterm exam. All fourteen face-to-face students were present for the entire lecture; however, the online students’ engagement with the recorded lectures could not be tracked. The face-to-face section’s homework was completed in 1 to 6 attempts by the students, with an average of 2.5 ± 0.4 attempts. The online section’s homework was completed in 1 to 4 attempts with an average of 2.7 ± 0.3 attempts for the class. The average scores on the homework assignment were 95.5% ± 1.8 and 91.7% ± 1.4 for the face-to-face and online sections, respectively.
The percentage of correct responses in both the online and face-to-face sections did not change significantly between the pre- and post-tests (p > 0.05, Table 3). However, in both classes the scores on elementary visual literacy questions were significantly higher than on advanced visual literacy questions (p < 0.05, Figure 4). In the face-to-face honors section, 96.5% ± 1.6% of students were able to answer elementary level visual literacy questions correctly compared to an average of 63.2% ± 6.8% on advanced questions. The online class showed similar results, with 95.1% ± 1.9% correct on elementary level questions and 61.1% ± 4.7% on advanced questions. There were no significant differences in the percentage of correct scores between intermediate visual literacy questions and either elementary or advanced questions in either course (p > 0.05, Figure 4).
The midterm scores on elementary, intermediate and advanced visual literacy questions were not significantly different in the online or face-to-face sections (p > 0.05, Table 4). The variance on advanced level questions was higher compared to elementary and intermediate questions in both course sections, suggesting students were still struggling with advanced questions. The average scores on advanced questions for the midterm in the FTF and online courses were 80.6% ± 15.5 and 66.7% ± 22.0, respectively, compared to the intervention averages of 63.2% ± 6.8 in the FTF section and 61.1% ± 4.7 in the online section (Figure 5). While not statistically compared, the FTF students demonstrated improved scores on advanced level questions from the intervention to the midterm, while the online students demonstrated no improvement.
Exam scores were significantly higher in the visual literacy FTF course compared to the year before when the course was taught without a visual literacy intervention (p < 0.05, Figure 6A). The average exam score in the visual literacy FTF course was 78.9% ± 3.4 compared to 60.5% ± 2.5 in the section without a visual literacy intervention (Table 5). The average exam score in the online section with a visual literacy intervention was 70.3% ± 2.8, higher than the online section without a visual literacy intervention, which averaged 65.7% ± 5.6; however, the differences were not significant (p > 0.05, Figure 6B and Table 6).
The average course assessment scores did not differ significantly between FTF and online sections with or without a visual literacy intervention (p > 0.05, Figure 7); however, the FTF visual literacy section was the only section to reach the target assessment benchmark of a 70% average. The average assessment score in the FTF section with a visual literacy intervention was 70.5% ± 6.3 compared to 65.0% ± 7.0 without (Table 7). In the online sections, the average was 62.5% ± 8.1 with the intervention and 62.3% ± 7.5 without (Table 8).

4. Discussion

The goal of our research is to design and develop effective visual literacy educational interventions to promote critical thinking, insightful problem solving and robust knowledge of global sustainability issues. In this report, which emphasized promoting critical thinking, we presented findings from an undergraduate course-based visual literacy intervention that consisted of a pre-test, post-test, lecture and homework assignment designed to test three levels (elementary, intermediate and advanced) of scientific visual literacy. Our research provides resources such as questions, analyses and critical thinking educational constructs that educators can integrate into courses. Thus, our approach advances the integration of the Association of College and Research Libraries’ Visual Literacy Competency Standards for Higher Education into undergraduate instructional events [53].
The visual literacy intervention reported here had a significant effect on academic success measures through increased exam scores and higher assessment scores in the face-to-face section. Compared to the elementary level, the visual literacy assessment scores for intermediate and advanced level questions were lower. One explanation is that these types of questions require students to manipulate the information in the figure to make comparisons, undertake calculations, or generalize or predict trends based on the provided information, which is consistent with previous studies [33,34]. Furthermore, the questions at the intermediate and advanced levels require students to utilize domain-specific knowledge of environmental science to apply the higher-level critical thinking abilities of synthesis, reflection and judgment.
One week after the visual literacy intervention, no significant improvement in visual literacy was evident in the post-test; however, a significant transfer of learning was evident at midterms and in students’ overall exam scores in the FTF section. Students in the FTF section with the visual literacy intervention had a 17.4% improvement on advanced literacy questions from the intervention to the midterms. Average exam scores in the FTF section with the visual literacy intervention increased by 18.4% compared to a section without the intervention. While the assessment scores were not significantly higher with the visual literacy intervention, the section with the intervention was the only one to reach the department’s benchmark of an average score of 70%. The delay in intervention effects from the post-test to midterms is not unreasonable considering the time students need to develop advanced visual literacy abilities.
While the visual literacy questions in the pre-test and post-test were developed for students to answer without environmental science content knowledge, the ability to link visuals to prior knowledge has been identified as a critical element of student learning that can aid in understanding but also lead to misconceptions [30,54]. Knowledge extraction from visualizations is not an objective process but relies on a priori knowledge [44]. As students gain environmental science knowledge and experience interpreting visualizations throughout the semester, the knowledge they gain from the visualizations increases. Knowledge influences visual literacy by (1) shaping student comprehension goals by confirming or disconfirming relationships they expect to find, (2) allowing students to keep track of information in a visual to aid in mental computation and (3) helping students identify potential errors [55]. The visual literacy intervention in the FTF section of our study provided a framework for students to utilize when interpreting visuals, which led to an increased understanding of readings and homework and to academic success that grew throughout the semester.
The improvement in measures of academic success in the FTF section demonstrates the potential of this visual literacy intervention to facilitate the acquisition of sustainability knowledge in students who are non-majors in environmental science. We observed that students’ acquisition of knowledge on sustainability issues was coupled with increased performance of abilities at higher levels of critical thinking. This suggests that the critical thinking abilities learned during the visual literacy intervention were transferred to other learning outcomes of the course. One logical implication presents an area we have identified for further investigation: the effects of increased sustainability knowledge, visual literacy and critical thinking on students’ attitudes, behaviors and informed decisions with regard to sustainability topics outside of the classroom.
The same improvement in visual literacy and academic success observed in the FTF course was not evident in the online section, where the averages for elementary, intermediate and advanced visual literacy questions remained the same from the intervention to midterms. Additionally, the overall exam and assessment scores in the online section with the visual literacy intervention did not differ from an online section without the intervention. The differences in academic success measures between the online and FTF sections are less likely to be related to the course format or course content knowledge than to a lack of reinforcement of the visual literacy framework in the online course. In both sections, the same online readings, online homework and lectures were utilized. However, in the FTF course the instructor reinforced the visual literacy framework during lectures throughout the semester by encouraging students to read, identify trends and make predictions with visuals during each lecture. The online section utilized lectures which were prerecorded before the visual literacy framework was developed. Students in the online section received the framework at the intervention and it was not repeated in any additional coursework throughout the semester. The improvement in the FTF course versus the online course illustrates the importance of repetition and practice and the critical need for instructors to be trained in interventions for developing students’ visual literacy.
The online course format did present a challenge to the instructor’s engagement with students on visual literacy. The recorded PowerPoint used in the intervention was asynchronous and thus did not allow the instructor to answer questions or observe when students needed additional support. Students were encouraged to pause the recording to answer questions within the lecture and then to advance to see if they were correct. The lack of metacognitive discussions with the instructor in the online section effectively eliminated the self-explanation and self-correction activities, which are useful in developing the critical thinking abilities related to synthesis, reflection and judgment. The current lecture format provides no analytics showing how students engaged with the content, or whether they watched the lecture at all. Many online programs allow for embedded questions in videos and provide data on student engagement; these are better options for encouraging student participation and providing instructors with data on student knowledge in an asynchronous environment. Synchronous discussion boards and online meetings would also address these bottlenecks to visual literacy interventions in online formats.
Some limitations of the current study should be noted. The number of students in each course section was low (n = 12) and the time allocated to the intervention, which took place within a traditional university course, was limited. However, we argue that the study took place in intact learning environments. Additionally, the coverage of topics in the course, in terms of time spent on content, was constrained by the traditional conditions of the university implementation of the course under which the study took place. Prudent educational research begins with small, focused studies that provide scaffolding for larger studies. The modest outcomes of the present study show promise in terms of making deliberate modifications to courses within the constraints of the practice of university course offerings. These initial findings demonstrate the need to expand the research to additional sections and instructors in order to assess the intervention impacts on a larger sample size. The results of the current study have led to the adoption of the intervention into all ES 130 (Introduction to Environmental Science) sections by three faculty members for two years. It has also been adapted for COVID-19-related visuals for educators and communications majors. We are currently analyzing the data for future publications. Our current report presents course instructors with adaptable instructional strategies for visual literacy educational interventions.
Despite these limitations, there are clear indicators that this intervention was successful in improving visual literacy. The differences observed in student success between the online and FTF sections highlight the need for increased course time for this intervention to be successful. The findings presented here have led to faculty discussions on how to enhance the reinforcement of the visual literacy intervention. This reinforcement will involve (1) repeated use of the framework throughout the semester in lectures that use visuals; (2) offering additional practice questions on homework; and (3) monitoring visual literacy on exams.
Global sustainability topics covered in environmental science courses can be used to capture students’ attention, provide relevance and foster critical thinking development [56,57]. We expect that students will have an improved understanding of scientific knowledge and develop rational decision making when explicitly taught about the developmental framework of critical thinking. Instructors play a key role in assisting students in the interpretation and understanding of course materials in lectures; however, visuals in lectures and textbooks often lack the supporting resources that facilitate students’ understanding of the content [29]. Instructors’ assumptions that students have mastered visual literacy as a core competency in grades K-12 lead to a disparity between expectations and capacity. The visual literacy pre-test is an effective tool for identifying the level of visual literacy students have and where future instruction should be focused. This visual literacy assessment will ensure that students are equipped with the abilities needed to engage with sustainability topics. Instructors should analyze the pre-test results and provide students with increased practice on levels where they scored poorly. Practice can consist of additional homework questions or reinforcement during lectures. This study demonstrates the need for intervention reinforcement throughout the semester rather than a one-time lesson on visual literacy. Furthermore, with proper application, the intervention can improve visual literacy, promote advanced critical thinking skills and improve knowledge of sustainability topics.
Future adaptations of the visual literacy intervention will increase the quantity of visual literacy questions in homework assignments and specifically increase the quantity of advanced level questions when students are demonstrating lower scores. Due to the importance of a priori knowledge in students’ visual literacy, relevant background information, in the form of short summaries or links to news articles related to the visual, will be provided on homework assignments. Additionally, identifying when students have difficulties with visualization tasks is important for identifying students who need additional support and guidance in visual literacy [34]. The post-test will be moved later in the semester to allow time for students to develop advanced visual literacy and to accurately capture improvements. Students who participate in visual literacy interventions will be reassessed later in their programs for long-term retention of the visual literacy content.

5. Conclusions

The success of the visual literacy intervention in improving students’ advanced literacy and academic success measures suggests that increased emphasis on visual literacy in general education science courses is needed. Advanced visual literacy requires students to utilize content knowledge gained over time in the course and the advanced critical thinking skills of synthesis, reflection and judgment to read beyond what is displayed in the visual. Advanced visual literacy is a required skill for students to judge the validity of the sustainability visuals they will encounter in print, television and social media and to make informed decisions from the information communicated by the visual. Even with the modest sample size and limited intervention length used in this study, significant improvements in student success were demonstrated, including statistically significant improvements in exam scores and visual literacy and the meeting of assessment benchmarks. Additional reinforcement of visual literacy content by the course instructor throughout the semester can enhance these improvements. The evidence for the benefit of reinforcement is the difference observed between the online class, which had little reinforcement, and the face-to-face class, which received reinforcement throughout the semester. While this study’s intervention focused on sustainability themes within a non-major environmental science course, the intervention can be easily adapted for major and non-major STEM courses to establish a visual literacy framework. The implementation of visual literacy educational interventions within courses can support the development of advanced metacognitive skills in students, with the benefits of improved academic success and critical thinking outside the classroom.

Author Contributions

Conceptualization, S.E.K., S.R.-B., H.N.T. and R.D.I.; methodology, S.E.K., S.R.-B., H.N.T. and R.D.I.; validation, S.E.K., S.R.-B., H.N.T. and R.D.I.; formal analysis, S.E.K.; investigation, S.E.K., S.R.-B., H.N.T. and R.D.I.; resources, H.N.T. and R.D.I.; data curation, S.E.K.; writing—original draft preparation, S.E.K.; writing—review and editing, S.E.K., S.R.-B., H.N.T. and R.D.I.; visualization, S.E.K.; project administration, R.D.I. and H.N.T.; funding acquisition, R.D.I. and H.N.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Department of Education Title III Program at Bethune-Cookman University (P031B170091) and the National Science Foundation (EHR-1435186, EHR-1623371, EHR-1626602, CSE-1829717 and EHR-2029363). Bethune-Cookman University paid for the Article Processing Charge (APC).

Acknowledgments

We thank Dana Zeidler (Distinguished University Professor, Science Education, University of South Florida, USA) for comments that improved the manuscript. We acknowledge the administrative support for the research by the College of Natural and Health Sciences, Engineering and Mathematics as well as the School of Education at Bethune–Cookman University, Daytona Beach, FL 32114, USA.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

The sources of the images included in the visual literacy assessments are presented in Table A1.
Table A1. Website sources of images in Figure 2 and Figure 3. Accessed on 5 September 2020.

References

  1. Treemore-Spears, L.J.; Grove, J.M.; Harris, C.K.; Lemke, L.D.; Miller, C.J.; Pothukuchi, K.; Zhang, Y.; Zhang, Y.L. A workshop on transitioning cities at the food-energy-water nexus. J. Environ. Stud. Sci. 2016, 6, 90–103. [Google Scholar] [CrossRef]
  2. Wade, A.A.; Grant, A.; Karasaki, S.; Smoak, R.; Cwiertny, D.; Wilcox, A.C.; Yung, L.; Sleeper, K.; Anandhi, A. Developing leaders to tackle wicked problems at the nexus of food, energy, and water systems. Elem. Sci. Anthr. 2020, 8, 11. [Google Scholar] [CrossRef]
  3. Keller, J.M. Strategies for stimulating the motivation to learn. Perform. Instr. 1987, 26, 1–7. [Google Scholar] [CrossRef]
  4. Keller, J.M. First principles of motivation to learn and e3-learning. Distance Educ. 2008, 29, 175–185. [Google Scholar] [CrossRef]
  5. Spector, J.M.; Ma, S. Inquiry and critical thinking skills for the next generation: From artificial intelligence back to human intelligence. Smart Learn. Environ. 2019, 6, 8. [Google Scholar] [CrossRef]
  6. Erikson, M.G.; Erikson, M. Learning outcomes and critical thinking–good intentions in conflict. Stud. High. Educ. 2019, 44, 2293–2303. [Google Scholar] [CrossRef] [Green Version]
  7. Crawford, J.; Butler-Henderson, K.; Rudolph, J.; Malkawi, B.; Glowatz, M.; Burton, R.; Magni, P.; Lam, S. COVID-19: 20 countries’ higher education intra-period digital pedagogy responses. J. Appl. Learn. Teach. 2020, 3, 1–20. [Google Scholar]
  8. Spector, J.; Merrill, M.D. Editorial: Effective, efficient and engaging (E3) learning in the digital age. Distance Educ. 2008, 29, 123–126. [Google Scholar] [CrossRef]
  9. Brønn, C.; Brønn, P.S. Sustainability: A Wicked Problem Needing New Perspectives. In Business Strategies for Sustainability; Routledge: Abingdon, UK, 2018. [Google Scholar]
  10. Scholz, R.W. Transdisciplinarity: Science for and with society in light of the university’s roles and functions. Sustain. Sci. 2020, 15, 1033–1049. [Google Scholar] [CrossRef] [Green Version]
  11. Webb, M.E.; Little, D.R.; Cropper, S.J. Insight is not in the problem: Investigating insight in problem solving across task types. Front. Psychol. 2016, 7, 1424. [Google Scholar] [CrossRef] [Green Version]
  12. Sandkühler, S.; Bhattacharya, J. Deconstructing insight: EEG correlates of insightful problem solving. PLoS ONE 2008, 3, e1459. [Google Scholar] [CrossRef] [PubMed]
  13. Richey, J.E.; Nokes-Malach, T.J. Comparing four instructional techniques for promoting robust knowledge. Educ. Psychol. Rev. 2015, 27, 181–218. [Google Scholar] [CrossRef]
  14. Carter, M.G.; Hipwell, P.; Quinnell, L. A picture is worth a thousand words: An approach to learning about visuals. Aust. J. Middle Sch. 2012, 12, 4–15. [Google Scholar]
  15. Kinchin, I.M.; Möllits, A.; Reiska, P. Uncovering Types of Knowledge in Concept Maps. Educ. Sci. 2019, 9, 131. [Google Scholar] [CrossRef] [Green Version]
  16. American Library Association. ACRL Visual Literacy Competency Standards for Higher Education. Available online: http://www.ala.org/acrl/standards/visualliteracy (accessed on 1 September 2020).
  17. Arslan, R.; Nalinci, G.Z. Development of visual literacy levels scale in higher education. Turk. Online J. Educ. Technol. 2014, 13, 61–70. [Google Scholar]
  18. Murray, T.S.; Kirsch, I.S.; Jenkins, L. Adult Literacy in OECD Countries: Technical Report on the First International Adult Literacy Survey; US Department of Education, Office of Educational Research and Improvement: Washington, DC, USA, 1998.
  19. Davidson, R. Using infographics in the science classroom. Sci. Teach. 2014, 81, 34. [Google Scholar] [CrossRef]
  20. DeFauw, D.L.; Saad, K. Creating science picture books for an authentic audience. Sci. Act. 2014, 51, 101–115. [Google Scholar] [CrossRef]
  21. Glazer, N. Challenges with graph interpretation: A review of the literature. Stud. Sci. Educ. 2011, 47, 183–210. [Google Scholar] [CrossRef]
  22. Ancker, J.S.; Senathirajah, Y.; Kukafka, R.; Starren, J.B. Design features of graphs in health risk communication: A systematic review. J. Am. Med. Inform. Assoc. 2006, 13, 608–618. [Google Scholar] [CrossRef] [Green Version]
  23. Gal, I. Adults’ statistical literacy: Meanings, components, responsibilities. Int. Stat. Rev. 2002, 70, 1–25. [Google Scholar] [CrossRef]
  24. Cook, J.; Bedford, D.; Mandia, S. Raising climate literacy through addressing misinformation: Case studies in agnotology-based learning. J. Geosci. Educ. 2014, 62, 296–306. [Google Scholar] [CrossRef]
  25. Zoller, U. Teaching tomorrow’s college science courses—Are we getting it right? J. Coll. Sci. Teach. 2000, 29, 409. [Google Scholar]
  26. Grainger, S.; Mao, F.; Buytaert, W. Environmental data visualisation for non-scientific contexts: Literature review and design framework. Environ. Model. Softw. 2016, 85, 299–318. [Google Scholar] [CrossRef] [Green Version]
  27. Zoller, U. Science education for global sustainability: What is necessary for teaching, learning, and assessment strategies? J. Chem. Educ. 2012, 89, 297–300. [Google Scholar] [CrossRef]
  28. Gilbert, J.K. Visualization: A metacognitive skill in science and science education. In Visualization in Science Education; Springer: New York, NY, USA, 2005; pp. 9–27. [Google Scholar]
  29. Roth, W.-M.; Bowen, G.M. Complexities of graphical representations during lectures: A phenomenological approach. Learn. Instr. 1999, 9, 16. [Google Scholar] [CrossRef]
  30. Leinhardt, G.; Zaslavsky, O.; Stein, M.K. Functions, graphs, and graphing: Tasks, learning, and teaching. Rev. Educ. Res. 1990, 60, 1–64. [Google Scholar] [CrossRef]
  31. Eilam, B.; Gilbert, J.K. Science Teachers’ Use of Visual Representations; Springer: New York, NY, USA, 2014; Volume 8. [Google Scholar]
  32. Börner, K.; Bueckle, A.; Ginda, M. Data visualization literacy: Definitions, conceptual frameworks, exercises, and assessments. Proc. Natl. Acad. Sci. USA 2019, 116, 1857–1864. [Google Scholar] [CrossRef] [Green Version]
  33. Friel, S.N.; Curcio, F.R.; Bright, G.W. Making sense of graphs: Critical factors influencing comprehension and instructional implications. J. Res. Math. Educ. 2001, 32, 124–158. [Google Scholar] [CrossRef] [Green Version]
  34. Lee, S.; Kim, S.-H.; Kwon, B.C. VLAT: Development of a visualization literacy assessment test. IEEE Trans. Vis. Comput. Graph. 2016, 23, 551–560. [Google Scholar] [CrossRef]
  35. Hattwig, D.; Bussert, K.; Medaille, A.; Burgess, J. Visual literacy standards in higher education: New opportunities for libraries and student learning. Port. Libr. Acad. 2013, 13, 61–89. [Google Scholar] [CrossRef] [Green Version]
  36. Kędra, J.; Žakevičiūtė, R. Visual literacy practices in higher education: What, why and how? J. Vis. Lit. 2019, 38, 1–7. [Google Scholar] [CrossRef]
  37. Little, D.; Felten, P.; Berry, C. Liberal education in a visual world. Lib. Educ. 2010, 96, 44–49. [Google Scholar]
  38. Ben-Avie, M.; Kuna, K.; Rhodes, T. Assessment as a strategy, and not a stand-alone activity. AacU Peer Rev. 2019, 20, 4–6. [Google Scholar]
  39. Pfannkuch, M. Comparing box plot distributions: A teacher’s reasoning. Stat. Educ. Res. J. 2006, 5, 27–45. [Google Scholar]
  40. Lee, S.; Kim, S.-H.; Hung, Y.-H.; Lam, H.; Kang, Y.-A.; Yi, J.S. How do people make sense of unfamiliar visualizations?: A grounded model of novice’s information visualization sensemaking. IEEE Trans. Vis. Comput. Graph. 2015, 22, 499–508. [Google Scholar] [CrossRef] [PubMed]
  41. Galesic, M.; Garcia-Retamero, R. Graph literacy: A cross-cultural comparison. Med. Decis. Mak. 2011, 31, 444–457. [Google Scholar] [CrossRef]
  42. Dreyfus, T.; Eisenberg, T. On difficulties with diagrams: Theoretical issues. In Proceedings of the 14th Annual Conference of the International Group for the Psychology of Mathematics Education, Morelos, Mexico, 15–20 July 1990; pp. 27–36. [Google Scholar]
  43. Maltese, A.V.; Harsh, J.A.; Svetina, D. Data visualization literacy: Investigating data interpretation along the novice—expert continuum. J. Coll. Sci. Teach. 2015, 45, 84–90. [Google Scholar] [CrossRef]
  44. Van Wijk, J.J. The value of visualization. In Proceedings of the IEEE Visualization (VIS 05), Minneapolis, MN, USA, 23–28 October 2005; pp. 79–86. [Google Scholar]
  45. Chen, M.; Jaenicke, H. An information-theoretic framework for visualization. IEEE Trans. Vis. Comput. Graph. 2010, 16, 1206–1215. [Google Scholar] [CrossRef]
  46. Nolan, D.; Perrett, J. Teaching and Learning Data Visualization: Ideas and Assignments. Am. Stat. 2016, 70, 260–269. [Google Scholar] [CrossRef] [Green Version]
  47. Metcalfe, A.S.; Blanco, G.L. Visual Research Methods for the Study of Higher Education Organizations. In Higher Education: Handbook of Theory and Research; Paulsen, M.B., Perna, L.W., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 34, pp. 153–202. [Google Scholar] [CrossRef]
  48. Arneson, J.B.; Offerdahl, E.G. Visual Literacy in Bloom: Using Bloom’s Taxonomy to Support Visual Learning Skills. CBE Life Sci. Educ. 2018, 17, ar7. [Google Scholar] [CrossRef] [Green Version]
  49. Chevalier, F.; Henry Riche, N.; Alper, B.; Plaisant, C.; Boy, J.; Elmqvist, N. Observations and Reflections on Visualization Literacy in Elementary School. IEEE Comput. Graph. Appl. 2018, 38, 21–29. [Google Scholar] [CrossRef] [PubMed]
  50. Chamberlin, B.; Ulery, A. The Magic of Reading Graphs. Available online: https://scienceofsoil.com/modules/readingGraphs/readingGraphs.php (accessed on 9 March 2020).
  51. Agresti, A. Statistical Methods for the Social Sciences, 5th ed.; Pearson: Boston, MA, USA, 2018. [Google Scholar]
  52. Pallant, J. SPSS Survival Manual: A Step by Step Guide to Data Analysis Using IBM SPSS, 7th ed.; Routledge: London, UK, 2020; p. 378. [Google Scholar]
  53. Thompson, D.S.; Beene, S. Uniting the field: Using the ACRL visual literacy competency standards to move beyond the definition problem of visual literacy. J. Vis. Lit. 2020, 39, 73–89. [Google Scholar] [CrossRef]
  54. Shah, P. A model of the cognitive and perceptual processes in graphical display comprehension. AAAI Technical Report FS-97-03 1997, 94–101. Available online: https://www.aaai.org/Papers/Symposia/Fall/1997/FS-97-03/FS97-03-012.pdf (accessed on 9 March 2020).
  55. Shah, P.; Freedman, E.G. Bar and line graph comprehension: An interaction of top-down and bottom-up processes. Top. Cogn. Sci. 2011, 3, 560–578. [Google Scholar] [CrossRef] [PubMed]
  56. Thomas, I. Critical thinking, transformative learning, sustainable education, and problem-based learning in universities. J. Transform. Educ. 2009, 7, 245–264. [Google Scholar] [CrossRef]
  57. Hasslöf, H.; Lundegard, I.; Malmberg, C. Students’ qualification in environmental and sustainability education-epistemic gaps or composites of critical thinking. Int. J. Sci. Educ. 2016, 38, 259–275. [Google Scholar] [CrossRef]
Figure 1. A developmental approach to critical thinking by Spector and Ma [5]. Reproduced with open-access permission.
Figure 2. The six visualizations that composed the visual literacy pre-test and post-test. (a) Table; (b) one-dimensional figure; (c) bar graph; (d) choropleth; (e) pie chart; (f) flow chart. Parts b and d through f were obtained from an internet search. The website addresses for the sources of the figures are provided in Table A1 in Appendix A.
Figure 3. The three visualizations utilized in the midterm. (a) Table; (b) choropleth; (c) line graph. Figures were obtained from an internet search. The website addresses for the sources of the figures are provided in Table A1 in Appendix A.
Figure 4. Pooled results from the pre- and post-tests on visual literacy for the face-to-face section (A) and online section (B) of the Introduction to Environmental Science course. Students in both class sections demonstrated significantly higher scores on elementary visual literacy questions compared to advanced level questions (p < 0.05) but there was no significant difference in scores within each class section in the pre- and post-tests (p > 0.05).
Figure 5. Results for visual literacy questions on the midterm exam for the face-to-face section (A) and online section (B) of the Introduction to Environmental Science course. In both class sections, scores did not differ significantly among elementary, intermediate and advanced visual literacy questions (p > 0.05).
Figure 6. Average exam scores for the face-to-face section (A) and the online section (B) of the Introduction to Environmental Science course. Students in the FTF section with a visual literacy intervention had higher overall exam scores compared to those in the FTF section without a visual literacy intervention (p < 0.05). No significant difference was found in exam scores between online sections with or without a visual literacy intervention (p > 0.05).
Figure 7. Average assessment scores for the face-to-face section (A) and the online section (B) of the Introduction to Environmental Science course. No significant difference was found in average assessment scores between sections with and without a visual literacy (VL) intervention (p > 0.05). The FTF visual literacy section was the only section to meet the assessment benchmark of a 70% average.
Table 1. The test questions utilized in the pre-test and post-test for each of the six visualizations and the three comprehension levels.
Question Number | Visualization | Question | Comprehension Level
Q1 | Table | How many dwarf seahorses are at location 1? | elementary
Q2 | Table | Which location has more total seahorses than the other? | intermediate
Q3 | Table | T (A) or F (B): Location 1 is a better habitat than location 2 for all seahorse species. | advanced
Q4 | One-dimensional figure | Which wavelength of light would be green in color? | elementary
Q5 | One-dimensional figure | T (A) or F (B): The wavelength of red is longer and lower in frequency than the blue wavelength. | intermediate
Q6 | One-dimensional figure | The colors of plant leaves and flowers that you see are the colors of light that are reflected off the plant to your eyes. Which wavelength of light would be reflected by grass? | advanced
Q7 | Bar graph | Which month has the greatest amount of species 1? | elementary
Q8 | Bar graph | How much higher is the abundance of species 1 in December than in November? | intermediate
Q9 | Bar graph | T (A) or F (B): If species 1 abundance declines due to a high abundance of predators, then November would likely have the highest abundance of predators. | advanced
Q10 | Choropleth | What was the land surface temperature anomaly for most of Florida? | elementary
Q11 | Choropleth | Which region of the North American continent had the largest warming compared to the other locations? | intermediate
Q12 | Choropleth | Ice melting is likely the highest in which region of the map? | advanced
Q13 | Pie chart | What is the biggest source of pollution of the ocean? | elementary
Q14 | Pie chart | T (A) or F (B): The biggest source of pollutants comes from land-based pollution rather than air- or ocean-based pollution. | intermediate
Q15 | Pie chart | Which of the following policies would produce the biggest reduction in ocean pollutants? | advanced
Q16 | Flow chart | Which species is at the top of the food web? | elementary
Q17 | Flow chart | If there was an increase in owls, which species would be negatively impacted? | intermediate
Q18 | Flow chart | If an infection destroyed all grains in this system, which of the following would happen? | advanced
Table 2. The test questions utilized in the midterm exam for each of the three visualizations and the three comprehension levels.
Question Number | Visualization | Question | Comprehension Level
Q1 | Table | Which reservoir has the longest water residence time? | elementary
Q2 | Table | Does a reservoir that has vapor (A) or solid water (B) have the longest residence time? | intermediate
Q3 | Table | If a pollutant entered the water, which reservoir would it impact the most? | advanced
Q4 | Choropleth | What is the temperature of Daytona Beach? | elementary
Q5 | Choropleth | As you move from south to north, what happens to the temperature in the U.S.? | intermediate
Q6 | Choropleth | Global warming is expected to increase global temperatures by 2 °C by the year 2100. What would the color of Daytona be if this happens? | advanced
Q7 | Line graph | The graph above was created by measuring the amount of carbon dioxide in the background atmosphere at the Mauna Loa observatory in Hawaii, which correlates with the burning of fossil fuels, and by measuring the amount of carbon dioxide produced during three volcanic eruptions: Agung, El Chichón and Pinatubo. What was the amount of carbon dioxide produced by the Pinatubo eruption? | elementary
Q8 | Line graph | The carbon dioxide being measured at Mauna Loa is _____ over time. | intermediate
Q9 | Line graph | Is the amount of carbon dioxide increasing from fossil fuel production greater (A) or smaller (B) than the carbon dioxide from volcanic eruptions? | advanced
Table 3. Two-way repeated measures ANOVA summary table for the percentage of correct scores pre-test and post-test for each visual literacy comprehension level.
Class Section | Source | df | MS | F | p-Value
Face-to-face honors | Test | 1 | 7.716 | 0.085 | 0.783
Face-to-face honors | Comprehension level | 2 | 3381.6 | 7.736 | 0.009 *
Face-to-face honors | Test × comprehension level interaction | 2 | 13.50 | 0.245 | 0.787
Face-to-face honors | Within groups | 10 | 55.17 | |
Online | Test | 1 | 30.864 | 1.000 | 0.363
Online | Comprehension level | 2 | 3613.04 | 14.748 | 0.001 *
Online | Test × comprehension level interaction | 2 | 48.23 | 0.874 | 0.447
Online | Within groups | 10 | 55.170 | |
Note: MS = mean squares; p-values assume that Mauchly's test of sphericity was met; * denotes statistical significance at p < 0.05.
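Readers wishing to re-run a design of this form outside SPSS (the analysis package cited in [52]) could do so along the following lines. This is a minimal sketch rather than the study's own code: it applies the statsmodels AnovaRM class to a hypothetical long-format data set, and the column names student, test, level and score are placeholder labels, not names from the study. Note that statsmodels does not itself report Mauchly's test of sphericity; that check would need a separate tool such as the pingouin package.

```python
# Illustrative sketch only (not the study's code): a two-way repeated
# measures ANOVA on percent-correct scores, with test (pre/post) and
# comprehension level (elementary/intermediate/advanced) as
# within-subject factors, mirroring the structure of Table 3.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical data: each student has one score for every
# test x comprehension-level combination (fully crossed, balanced).
scores = pd.DataFrame({
    "student": [s for s in (1, 2, 3) for _ in range(6)],
    "test": ["pre", "pre", "pre", "post", "post", "post"] * 3,
    "level": ["elementary", "intermediate", "advanced"] * 6,
    "score": [83, 67, 50, 100, 67, 67,
              67, 50, 33, 83, 67, 50,
              100, 83, 67, 100, 83, 83],
})

# Both factors vary within each student, so both go in `within`.
res = AnovaRM(scores, depvar="score", subject="student",
              within=["test", "level"]).fit()
print(res.anova_table)  # F and p for test, level, and their interaction
```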
Table 4. One-way ANOVA summary table for percentage of correct scores for the midterm exam for each visual literacy comprehension level.
Class Section | Source | df | MS | F | p-Value
Face-to-face honors | Comprehension level | 2 | 54.01 | 0.179 | 0.840
Face-to-face honors | Within groups | 6 | 300.93 | |
Face-to-face honors | Total | 8 | | |
Online | Comprehension level | 2 | 378.09 | 0.590 | 0.583
Online | Within groups | 6 | 640.43 | |
Online | Total | 8 | | |
Note: MS = mean squares; p-values assume that Levene's test of homogeneity of variances was met.
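The one-way ANOVA in Table 4 compares percent-correct scores across the three comprehension levels within each class section; with three midterm questions per level, this yields the degrees of freedom shown above (2 between, 6 within, 8 total). A minimal sketch of the same computation with SciPy follows; the per-question scores are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch only (not the study's code): a one-way ANOVA across
# comprehension levels, as in Table 4. Each list holds hypothetical
# percent-correct values, one per midterm question at that level.
from scipy import stats

elementary   = [72.0, 80.0, 65.0]
intermediate = [68.0, 75.0, 61.0]
advanced     = [70.0, 58.0, 66.0]

# Levene's test checks the homogeneity-of-variance assumption cited in
# the note under Table 4.
lev_stat, lev_p = stats.levene(elementary, intermediate, advanced)

# One-way ANOVA: df between = 2, df within = 6, total = 8, as in Table 4.
f_stat, p_val = stats.f_oneway(elementary, intermediate, advanced)
print(f"Levene p = {lev_p:.3f}; F(2, 6) = {f_stat:.3f}, p = {p_val:.3f}")
```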
Table 5. Independent sample t-test exam scores for FTF sections with and without a visual literacy intervention (VL).
Class Section | N | Mean | SD | t | df | p-Value | Significance
Face-to-face honors with VL | 59 | 76.2 | 11.7 | −4.85 | 115 | 0.015 | *
Face-to-face honors without VL | 58 | 62.6 | 11.7 | | | |
* denotes statistical significance at p < 0.05, as in Table 3.
Table 6. Independent sample t-test exam scores for online sections with and without a visual literacy intervention (VL).
Class Section | N | Mean | SD | t | df | p-Value | Significance
Online with VL | 64 | 69.0 | 14.0 | −1.178 | 100 | 0.242 | NS
Online without VL | 38 | 65.5 | 14.9 | | | |
Table 7. Independent sample t-test assessment scores for FTF sections with and without a visual literacy intervention (VL).
Class Section | N | Mean | SD | t | df | p-Value | Significance
Face-to-face honors with VL | 16 | 70.5 | 25.5 | −0.586 | 30 | 0.562 | NS
Face-to-face honors without VL | 16 | 65.0 | 27.8 | | | |
Table 8. Independent sample t-test assessment scores for online sections with and without a visual literacy intervention (VL).
Class Section | N | Mean | SD | t | df | p-Value | Significance
Online with VL | 16 | 62.5 | 32.5 | −0.011 | 30 | 0.991 | NS
Online without VL | 16 | 62.4 | 29.9 | | | |
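Each comparison in Tables 5–8 is an independent sample t-test that can be recomputed from the reported group sizes, means and standard deviations alone. The sketch below is illustrative rather than the study's code: it re-runs the Table 5 comparison with SciPy's ttest_ind_from_stats, and because the published summary statistics are rounded, the recomputed t and p values need not match the tabulated ones exactly.

```python
# Illustrative sketch only: re-running the Table 5 t-test (FTF exam
# scores without vs. with the visual literacy intervention) from the
# published summary statistics. The group order (without VL first) is
# an assumption made to match the sign of the tabulated t; rounded
# inputs mean the result may differ from the tabulated t and p.
from scipy import stats

t_stat, p_val = stats.ttest_ind_from_stats(
    mean1=62.6, std1=11.7, nobs1=58,   # FTF honors without VL
    mean2=76.2, std2=11.7, nobs2=59,   # FTF honors with VL
    equal_var=True,                    # pooled variances, df = 115
)
print(f"t(115) = {t_stat:.2f}, p = {p_val:.4f}")
```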
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
