Article

Designing Feedback Systems: Examining a Feedback Approach to Facilitation in an Online Asynchronous Professional Development Course for High School Science Teachers

Graduate School of Education, University of Pennsylvania, Philadelphia, PA 19104, USA
*
Author to whom correspondence should be addressed.
Submission received: 8 December 2020 / Revised: 3 January 2021 / Accepted: 20 January 2021 / Published: 26 January 2021

Abstract

Many researchers have identified the need for a more holistic understanding of the role of feedback in supporting learning in online environments. This study explores how our design, development, and implementation of an online feedback facilitation system influenced high school science teachers’ learning in an asynchronous teacher professional development online course. We then describe teachers’ and facilitators’ (i.e., feedback providers’) perceptions of the effectiveness of the system’s features for supporting participants’ learning and engagement. Our work also responds to recent calls for developing a more nuanced understanding of how the complexity of feedback influences learning and the need for more qualitative research on online facilitators’ and learners’ experiences working with new technologies. Results demonstrated that, despite the difficulty of analyzing the complex variables influencing learners’ interactions with and perceptions of the feedback system, designing adaptive feedback systems that draw on the principles of design-based implementation research (DBIR) offers promise for enhancing such systems’ contributions to teacher learning.

1. Introduction

Developments in learning technologies have enabled new forms of interactions between learners and the various components of technological systems [1]. By introducing new possibilities for collaborative learning, these technological advancements have led to the emergence of fields such as computer-supported collaborative learning (CSCL), where the aim is to use technology to scaffold learning through collaboration [2]. The growing deployment of technologically mediated collaborative learning systems has demonstrated the importance of understanding how technological artifacts and structures can be leveraged to achieve the collaborators’ intended learning outcomes [3]. Accordingly, several studies have analyzed how the digital and educational design of technologically mediated collaborative learning systems can create conditions for collaborative knowledge construction [4]. Some of these studies showcased the affordances of technologies in illuminating participants’ behavior (e.g., displaying participants’ online activity), providing them with data-driven advice (e.g., recommending responses to participants based on their engagement), and offering them multiple development opportunities (e.g., supporting instructors in noticing participants’ challenges and offering them personalized opportunities to improve their situational interest) [5]. Most of these studies demonstrated the complexity of analyzing the relationship between computer support and collaborative learning due to the plethora of educational technologies and the presence of multiple dynamic, interconnected, and interdependent variables [4]. Through the lens of complexity theory, a CSCL system can be categorized as a complex adaptive system (CAS) that continuously self-organizes and evolves to optimize learning [6]. Within these systems, feedback guides the response to imbalances between learners’ current and desired states, helping learners reach their desired equilibrium for learning (i.e., their learning goal).
As we discuss in more detail later, we view feedback as a collaborative process of learning that is constructed through loops of dialogue and information [7]. Researchers and practitioners are still exploring different approaches for using it to best support learning. This is particularly the case for research on CSCL environments, such as online professional development courses, where researchers continue to explore approaches for offering effective online feedback. Firmin [8] and Gayoung [9] recommended viewing learning within these online environments holistically, and thus analyzing how feedback between learners, instructors, and learning support systems can advance learning [10]. A closer look at the literature on online feedback shows that many researchers have identified facilitators’ feedback as critical for encouraging academic self-efficacy and learner engagement [10]. Siemens [11] and Kasch [12] articulated a range of comprehensive actions and design considerations for supporting educators in fulfilling their instructional roles. Yet, there continues to be a lack of research that explores online facilitation in greater depth and offers insights into how it might look different in an online context [13]. The need for a more holistic understanding of the affordances and limitations of facilitating online learning environments has been recognized by several researchers studying online asynchronous teacher professional development (PD) [14,15]. Although some studies acknowledge the potential of online PD as an accessible, high-quality learning experience [16,17], these studies also note the importance of developing a more nuanced understanding of the complexity of online PD environments, where feedback plays a pivotal role in facilitating learning. Moreover, researchers have yet to comprehensively use a complex systems lens to guide the design and implementation of online feedback systems that aim to produce high-quality PD. Doing so would offer more insights into the effective use of a feedback system’s dynamic, interconnected variables to support learning. In this paper, we present a feedback system for facilitating participants’ learning and engagement in an online course designed to support their instruction of complex systems in high school science classrooms. We explore the framework guiding the design, development, and implementation of our feedback system. We then describe participants’ and facilitators’ perceptions of the effectiveness of the facilitation system in supporting participants’ learning and engagement. Our work also responds to recent calls for more qualitative research on the experiences of instructors and learners working with new technologies [13,18].

1.1. Learning as a Complex System of Feedback

Learning is increasingly recognized as a complex system of feedback where, as Sterman [19] noted in his seminal article, “our decisions alter the real world, we receive information feedback about the world and revise the decisions we make and the mental models that motivate those decisions” [20]. In this system, the relationship between learning and teaching is perceived as a dynamic process of knowledge exchange and relationship building rather than a one-way transmission of knowledge [21]. Feedback plays a critical role in nurturing our ability to close the gap between current and desired knowledge states by helping us identify flaws in our learning strategies [22]. It also contributes to our ability to self-regulate to reach or exceed our intended learning goals [23]. Despite these affordances, feedback processes within a complex system are sometimes impeded by ambiguous feedback, systematic misperceptions of feedback, insufficient modeling of our cognitive maps, ineffective social and emotional support, and inadequate scientific reasoning skills [24]. For example, feedback lacking specificity and relevance may cause confusion among learners and deter them from engaging with the learning community. Many learners also expect to receive feedback that humanizes them and acknowledges their social and emotional state. As a result, when these learners receive generic feedback that is devoid of social and emotional components, their perception of the quality of the learning experience tends to suffer [22,24]. Our ability to learn through complex feedback systems and overcome their impediments is critical for advancing human knowledge. In developing our online feedback system for supporting participants’ instruction of complex systems, we draw on two areas of research: research documenting the role of feedback in supporting learning and research documenting the value of online facilitation in helping participants achieve the intended learning goals.

1.1.1. Learning through Feedback Loops

To better understand the impact of feedback systems on learning, scholars who have written about the importance of feedback, such as John Dewey [25,26], have pointed to the feedback-loop character of learning as a guiding framework for supporting better learning outcomes. Argyris and Schön [27], in their seminal work on feedback loops, recommended viewing learning as an iterative cycle of dynamic “invention, observation, reflection, and action”, where feedback loops can provide helpful insights about how we learn [28]. Most of the research on the impact of feedback loops on learning focuses on two types of loops: the negative feedback loop and the positive feedback loop [29]. The negative feedback loop shown in Figure 1, also referred to as the single-loop learning model, refers to the process whereby we learn to reach our learning goals based on our current mental models. According to Sterman [30] and Carless [31], single-loop learning is defined as an attempt to “solve problems with minimal variation in method” and without reviewing the underlying assumptions driving the solution. Single-loop learning does not result in deep change to our mental models. As a first-order linear feedback process, a single loop produces stable convergence to a desired equilibrium. For example, when a student sets a specific learning goal, this goal is the desired equilibrium point that they will work to reach and maintain. In this case, the intended learning goal is reached and maintained through feedback loops between the goal recognized by the learner and the learner’s environment. If the environment makes it harder for the learner to reach the intended goal, the learner responds by increasing their effort until they return to the desired equilibrium.
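To make the single-loop dynamic concrete, the sketch below simulates a goal-seeking (negative) feedback loop in Python: a learner repeatedly compares their current state with a fixed learning goal and adjusts their effort in proportion to the gap. The variable names and the proportional adjustment rule are illustrative assumptions, not part of the study’s instruments.

```python
def single_loop_learning(goal, state=0.0, adjustment_rate=0.3, steps=20):
    """Simulate a first-order negative feedback loop: the learner adjusts
    effort in proportion to the gap between the goal and the current state,
    converging to the desired equilibrium (the learning goal)."""
    trajectory = [state]
    for _ in range(steps):
        gap = goal - state              # information feedback about the gap
        state += adjustment_rate * gap  # decision: adjust effort, goal unchanged
        trajectory.append(state)
    return trajectory

# The state converges smoothly toward the fixed goal (e.g., mastery = 1.0).
print([round(x, 3) for x in single_loop_learning(goal=1.0)])
```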
In contrast to single-loop learning, a positive feedback loop, also referred to as a double-loop learning model (shown in Figure 2), refers to a systems-thinking mental model that replaces the “reductionist, partial, narrow, short-term view” of the single-loop learning model with a “holistic, broad, long-term, dynamic view” [31,32]. Double-loop learning involves reframing our cognitive schema by creating new decisions, decision rules, and equilibria based on the feedback we receive. Accordingly, double-loop learning involves revising our mental models and redesigning the system itself rather than remaining confined to a specific set of rules. As noted by Scott [33], both positive and negative feedback loops remain unclosed when those receiving feedback fail to reach their respective goals. Much of the research on feedback loops has focused on the theoretical and practical considerations for using these loops to help attain a system’s goals, especially within the disciplines of management, science, technology, engineering, and mathematics. Nevertheless, more work is needed to advance our understanding of how feedback loops can influence participant learning in online learning environments [34]. Specifically, we need to examine how feedback loops can guide the design and implementation of online feedback systems to support learners’ uptake of feedback by closing feedback loops.
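By way of contrast, a hedged sketch of double-loop learning is shown below: when repeated single-loop adjustments fail to close the gap (e.g., the current strategy caps what effort alone can achieve), the learner revises the decision rule itself rather than simply trying harder. The ceiling parameter and the revision rule are hypothetical illustrations of the idea, not a model drawn from the study.

```python
def double_loop_learning(goal, ceiling=0.6, adjustment_rate=0.3, steps=30):
    """Single-loop adjustments operate under a constraint (ceiling) imposed by
    the current strategy; when progress stalls, the learner revises the strategy
    itself (the double loop), lifting the ceiling and redefining how the goal
    can be pursued."""
    state, strategy_revisions = 0.0, 0
    for _ in range(steps):
        gap = goal - state
        # Single loop: adjust effort, but the current strategy caps progress.
        state = min(state + adjustment_rate * gap, ceiling)
        # Double loop: if the gap persists despite maximal effort, question the
        # underlying assumptions and redesign the strategy.
        if gap > 0.05 and state >= ceiling - 1e-9:
            ceiling = goal            # revised mental model removes the constraint
            strategy_revisions += 1
    return state, strategy_revisions

final_state, revisions = double_loop_learning(goal=1.0)
print(final_state, revisions)  # converges near the goal after revising the strategy
```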

1.1.2. Understanding Online Facilitation

The rapid rise of online offerings has prompted researchers and practitioners to explore the affordances and limitations of online feedback in supporting learning. A few studies have shown that online facilitators’ ability to provide feedback that builds relationships, encourages active participation, and offers subject matter expertise can improve learning outcomes [35]. Furthermore, online facilitators’ feedback has shown promise for increasing the notoriously low course completion rates of most online courses, especially Massive Open Online Courses (MOOCs) [36]. Facilitation within massive courses such as MOOCs has also benefited from technological supports that offer instructors new ways to support the engagement of large numbers of diverse learners. Facilitation systems of feedback also have the potential to solve some of the collaboration and interactivity issues that are present in some online asynchronous PD [37]. Despite these promising results, more work is needed to understand how online facilitation systems of feedback operate within dynamic, complex online learning systems to overcome their limitations and leverage available resources to maximize learning [38].
Because online learning entails the physical separation of instructors and learners, the design decisions of instructional designers can greatly influence learner engagement and relationship building within online environments [39]. As such, analyzing the design of online feedback systems can help us develop a more holistic conception of how these systems can effectively contribute to participants’ online learning [40,41].

1.1.3. Understanding the Implementation of Online Facilitation Systems of Feedback

One aim of our study is to show how the different features of an online facilitation system of feedback can influence participants’ learning about complex systems in online PD. Despite the presence of various system components that offer feedback to online learners (e.g., peer feedback, automated feedback), our study focuses on the feedback offered through the online facilitation system that we designed specifically for supporting high school biology educators (hereafter referred to as participants) enrolled in an online professional development course. The facilitation system’s design was guided by Yang and Carless’s [42] “feedback triangle” framework shown in Figure 3. This framework views feedback as an interplay of structural, cognitive, and social-affective dimensions that aim to advance learning. The structural dimension refers to the instructional design of the feedback system and how feedback is organized. The cognitive dimension refers to how the content of the facilitator’s feedback supports participants’ understanding of the course content. The social-affective dimension refers to the interpersonal and relational exchange of feedback between the facilitator and the participant. As noted above, we view feedback as a collaborative process of learning; the goal of the co-constructive model of feedback is to build equitable relationships in which the experiences of both the expert participant and the novice participant are respected and appreciated. The facilitator’s role in this feedback model is to encourage dialogic engagement with their participants based on their common experiences [43]. Research by Gerard and colleagues [44] has shown that PD experiences that adopt co-constructivist feedback models are more likely to enable participants to enhance their comprehension of inquiry-based science learning [45]. Ultimately, our facilitation system aimed to use the feedback triangle’s three dimensions to support a co-constructive feedback model that offers participants double-loop learning opportunities.

1.2. Complex Facilitation System Framework

The framework guiding our facilitation system is depicted in Figure 3.
As discussed in the previous section, the framework has three main dimensions: structural, cognitive, and social-affective. These dimensions are aligned with the literature on best practices for offering feedback that aims to support participants’ instructional practices for teaching complex systems.

2. Methodology

2.1. Context

This study is part of a larger U.S. National Science Foundation project focused on developing curriculum and instruction to support learning about complex systems in high school biology classrooms. The project involved developing an online PD course designed to take participants approximately 40 h to complete over six weeks. As the second iteration of the online course, this publicly accessible course launched in May 2020 and ended in September of the same year. This iteration of the course included the first attempt to implement a structured online facilitation system. We designed and implemented the facilitation system following the literature on what constitutes high-quality feedback in online learning environments while effectively deploying the different components of the feedback triangle to support participant learning (as described above). The seven facilitators were expert high school biology teachers selected based on their cognitive and social-affective contributions to the course’s previous iterations. To enhance our facilitation system’s effectiveness and likelihood of scalable success, we relied on the four principles of design-based implementation research (DBIR) developed by Fishman and colleagues [46]. DBIR aims to challenge the “traditional” barriers between research, practice, and policymaking in a way that facilitates the design of educational experiences that are effective, sustainable, and scalable [47]. DBIR’s four principles include: a focus on the persistent problems of practice facing the various stakeholders engaging with the education system; a commitment to an iterative and inclusive process of collaborative design; a concern for using systematic inquiry to develop theories and knowledge related to implementation processes and student learning outcomes; and a concern for developing the capabilities of researchers, practitioners, and policymakers to influence sustainable change within systems [47]. See Appendix A, Table A1 for more details on how these principles were incorporated into our design process.

2.2. Pedagogical Method

Our facilitation system was guided by the feedback triangle framework. The framework’s structural dimension involved the instructional design of the facilitation system and its organization to support participant learning. Jung [10] discussed technology acceptance theory, which suggests that participants’ perceptions of the usefulness and ease of use of the structure and function of a learning support system can influence their willingness to continue their online learning experience. Drawing from this theory and other best practices for designing online facilitation structures [48], we developed a facilitation manual that described the facilitators’ pedagogical and administrative roles and responsibilities and the structure in which they were operating. We aligned the guide’s recommendations for high-quality feedback with the literature on best practices for using feedback to support participants’ needs [49,50]. The guide also included an overview of the components of the facilitation structure guiding the facilitation system. The first component involved attending facilitation training. Second, facilitators were asked to participate in biweekly facilitation meetings designed to give facilitators an opportunity to exchange knowledge, discuss improvements to the facilitation system, and nurture a learning community supportive of constructive facilitation. Third, facilitators were assigned a cohort of participants based on the facilitators’ experience and the participants’ preferences, which they communicated in a pre-PD survey. Fourth, participants were asked to attend three synchronous meetups to discuss course content and connect with other course participants. Fifth, facilitators were tasked with monitoring their cohort members’ activity every week and offering them support when needed. Facilitators were guided by Kizilcec et al.’s [51] engagement clustering methodology to help them decide on the appropriate actions needed to encourage participants within every engagement cluster. Sixth, template emails were shared with facilitators to help them encourage participants’ participation within each activity cluster. Participants who had not engaged for more than three weeks were offered the choice to continue within a cohort, transfer to another track, or complete the course at their own pace [52]. Seventh, facilitators held weekly online office hours, which were informal sessions that revolved primarily around being accessible to participants for support.
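The course data themselves are not reproduced here, but the sketch below illustrates, under assumed data, one way an engagement-clustering step in the spirit of Kizilcec et al. [51] could be approximated: weekly activity counts per participant are grouped with k-means, and each resulting cluster is mapped to a facilitation action. The feature choices, cluster count, and action labels are hypothetical, not the study’s actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical weekly engagement features per participant:
# [discussion posts, assignments submitted, logins]
engagement = np.array([
    [5, 2, 7],   # highly active
    [4, 2, 6],
    [1, 1, 2],   # partially active
    [2, 1, 3],
    [0, 0, 1],   # disengaged
    [0, 0, 0],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(engagement)

# Illustrative (assumed) mapping from cluster index to a facilitation action;
# in practice the mapping would be assigned after inspecting each cluster.
actions = {0: "send encouragement note",
           1: "offer re-engagement email",
           2: "invite to office hours"}
for participant, cluster in enumerate(kmeans.labels_):
    print(f"participant {participant}: cluster {cluster} -> {actions[cluster]}")
```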
Several instructional decisions that were not facilitation-specific ended up affecting the implementation of the facilitation system. First, because the course was divided into three suggested timing tracks, a track being a six-week period in which facilitators mentor a cohort of participants and support their completion of the PD, the facilitation system had to support participants’ transition from one track to another when needed. Second, because the course was hosted on the edX online platform, the facilitation system was constrained by the platform’s features. The organization of the course content and the required tasks were two other factors that influenced how the facilitation system operated.
Researchers have shown that the quality of the cognitive dimension of feedback can significantly influence participants’ academic self-efficacy and cognitive engagement [53]. More specifically, Darling-Hammond and colleagues [54] recommended several cognitive scaffolds for supporting participants in developing their practices for improving learning outcomes. These include: (1) offering participants expert support; (2) promoting participants’ active learning and meaning making; and (3) providing participants with time for feedback and reflection on practice. Accordingly, nurturing participants’ conviction in their cognitive abilities and encouraging them to acquire complex content knowledge were two of the main goals driving our facilitation system’s cognitive dimension. To guide our pursuit of these goals, we drew on the concept of adaptive expertise, defined by Baroody and Dowker [55] as “the ability to apply meaningfully learned procedures flexibly and creatively,” as opposed to routine expertise, which assumes that participants perform tasks without understanding them. Accordingly, we envisaged facilitators and participants as inquiring professionals working in different contexts that influence their teaching effectiveness and their ability to support their own learning and that of their participants [56]. Adaptive expertise influenced the operationalization of our facilitation system’s cognitive dimension in a variety of ways. First, when selecting our facilitators, we evaluated their cognitive contributions to the course through their discussion posts and their implementation of the course content to ensure that the facilitators had a deep understanding of the content and its implementation in different contexts and settings. Second, we offered facilitators training on how to provide high-quality feedback, and we had ongoing discussions on ways to improve the content of our feedback during the biweekly facilitation meetings. Third, facilitators who faced challenges with aspects of the course content were connected to other facilitators or subject matter experts who could respond to their inquiries. Fourth, we continuously engaged in discussions with our facilitators to ensure that the design and implementation of our facilitation system were not causing them or our participants a cognitive overload that would hinder their cognitive engagement within the course [57]. Fifth, synchronous meetings were used to encourage facilitators and participants to reflect on course-related practices and implementation strategies.
The social-affective dimension of the facilitation system was guided by the theory of connectivism, which explains learning as a network phenomenon of building and sustaining connections influenced by technology and socialization [58,59]. The goal of the social-affective dimension of the facilitation system was to leverage participants’ and facilitators’ social capital to promote knowledge distribution among members of the learning community. In our system, we viewed social capital as encompassing the complexity of human relational interactions that are mediated by technology within an online learning community [59,60]. Emotional engagement, defined as participants’ positive emotions towards the various actors and structures within the facilitation system, was also used to support the construction of knowledge and relationships between members of the course’s learning community [49]. Drawing on research that shows the importance of teaching presence for participant persistence and engagement in online environments [61,62], our system focused on supporting the accessibility and availability of facilitators to assist participants cognitively, socially, and emotionally. First, each facilitator was assigned a cohort of participants based on the facilitators’ experience and the participants’ preferences, which they communicated in a pre-PD survey. This process was designed to leverage the benefits of personalized mentorship in online spaces [63]. Second, facilitators shared cohort members’ contact information with the rest of the cohort to encourage members to interact within and beyond the course. Third, we encouraged facilitators to use their monitoring reports to send personalized guidance to their participants based on their engagement status, and we offered them templates for these emails. Given that the course ran during COVID-19, our facilitation team was fully aware of the importance of providing participants with the social and emotional support needed for engagement and learning, and of refraining from any activity that might lead them to feel shame or guilt over delays. Fourth, we recommended that facilitators send virtual taps on the shoulder to high-performing participants to recognize their input while encouraging them to continue their exemplary performance. Fifth, to encourage the formation of a learning community, we offered participants biweekly synchronous meetups to meet and share their experiences. Sixth, the facilitation system offered participants multiple modes for engaging with their facilitators via email, office hours, synchronous meetups, and discussion boards, with the goal of supporting participants’ different preferences for online social engagement.

2.3. Participants

Of the 180 participants who engaged with the course by posting at least once in the discussion forum, 74 completed a post-PD survey (out of the 76 who completed the course). Those who completed the post-PD survey had an average of 14.3 years of experience (range: 1 to 28 years); 13 identified as male, 59 as female, and 2 preferred not to indicate a gender. Table 1 shows the demographics of those who completed the course and a post-PD survey. To develop a deeper understanding of participants’ responses to the facilitation survey questions, we conducted semi-structured interviews with a sample of 10 participants about their experience with the facilitation system. To select our sample, we divided the survey participants into three satisfaction categories (i.e., low, moderate, and high). The low satisfaction category included any participant who rated more than one facilitation question as dissatisfactory. The moderate satisfaction category included participants who rated more than one question as neutral and no question as dissatisfactory. Finally, the high satisfaction category included participants who rated more than one question as satisfactory and no question as dissatisfactory. We then selected three participants from each satisfaction category for four of our seven facilitators. We also selected all the participants who had more than two dissatisfactory ratings. Our participants were working in schools that differed in type (e.g., public, private, etc.), resource level, and location (e.g., urban, rural, and suburban). Accordingly, we selected a sample that would allow us to control for school resource level, type, and location while including participants from different satisfaction categories. This sample included participants operating in high- and middle-resource public schools in suburban and rural settings within the United States. Of the 15 participants solicited for interviews, 10 responded and were subsequently interviewed. Table 2 includes more details on the demographics of our sample.
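To make the sampling rule explicit, the snippet below encodes the satisfaction categories as described above, assuming each participant’s six facilitation ratings (1–5) are reduced to counts of dissatisfactory (1–2), neutral (3), and satisfactory (4–5) responses; the rating thresholds, the ordering of the checks, and the example ratings are illustrative assumptions rather than the actual survey data.

```python
def satisfaction_category(ratings):
    """Classify a participant from their six facilitation ratings (1-5),
    assuming 1-2 = dissatisfactory, 3 = neutral, 4-5 = satisfactory."""
    dissatisfied = sum(r <= 2 for r in ratings)
    neutral = sum(r == 3 for r in ratings)
    satisfied = sum(r >= 4 for r in ratings)
    if dissatisfied > 1:
        return "low"
    if neutral > 1 and dissatisfied == 0:
        return "moderate"
    if satisfied > 1 and dissatisfied == 0:
        return "high"
    return "uncategorized"  # edge cases not covered by the three definitions

# Illustrative ratings for three hypothetical participants.
print(satisfaction_category([2, 2, 3, 4, 3, 4]))  # low
print(satisfaction_category([3, 3, 4, 4, 4, 4]))  # moderate
print(satisfaction_category([5, 4, 4, 5, 4, 3]))  # high
```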

2.4. Data Sources and Analysis

To investigate the impact of our facilitation system on participant learning, we followed a mixed methods evaluation that included collecting three data sources over a period of three months. A constant comparative method [64] was used to identify emerging themes in these data sources. To understand the participants’ perception of the effectiveness of the facilitation system, we conducted a post-PD survey that included six Likert-scale (1 = strongly disagree to 5 = strongly agree) questions. These questions focused on understanding how each participant evaluated the different dimensions of the feedback triangle guiding the facilitation system by asking them to rate their satisfaction with (1) their facilitator’s help in developing the participant’s understanding of the course content; (2) their facilitator’s accessibility and availability whenever they needed support; (3) their ability to benefit from the facilitators’ office hours; (4) the relationship they developed with their facilitator; (5) their facilitator connecting them to other participants; and (6) whether they would recommend their facilitator to future participants.
As for the semi-structured interviews, they lasted for approximately 30 min and included 20 questions, of which 11 were facilitation-related. The interview questions concerning facilitation aimed to explore how participants evaluated the structural, cognitive, and social-affective dimensions of the facilitation system of feedback. For example, participants were asked questions like (1) How did you find your experience interacting with your facilitator? And if you didn’t interact, what was the reason behind your lack of interaction? (2) Did your facilitator support you in developing a better understanding of the course content? If yes, in what ways? If not, why was that the case?
Interviews were digitally recorded, transcribed, and mined for themes that would highlight participants’ perceptions of the facilitation system’s affordances and limitations. Following the guidance of Campbell and colleagues [65] for coding complex interview data, we pursued a unitization strategy that focused on meaning units within a participant’s response rather than coding a participant’s full response to a question under one code. In this unitization process, the lead author unitized the responses based on his understanding of the theoretically motivated questions guiding the study, the context, and his own interpretation. After the unitization process was complete, a total of 174 utterances were coded into the following three categories: positive feedback, negative feedback, and other, which are described in Table 3. This categorization aimed to help us develop a better understanding of participants’ perceptions regarding the influence of feedback on their learning. To understand which dimension of the feedback system might have affected participants’ perceptions of the feedback system and the directionality of this impact, we created the six subcategories described in Table 4. The coding scheme guiding the categorization process drew from the theoretical frameworks of Yang and Carless [42] and Sterman and colleagues [30].
To establish inter-rater reliability, two researchers who are part of the research team but were not involved in the data analysis were trained on the coding scheme. The two raters independently coded 46 utterances, equivalent to 26% of the data. Alpha scores for each pair of raters ranged between 0.705 and 0.815. The remaining codes were assigned by another member of the research team.
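As a hedged illustration of this reliability step, the snippet below computes Krippendorff’s alpha for a pair of raters over nominally coded utterances using the open-source krippendorff package; treating the reported alpha as Krippendorff’s alpha is an assumption, and the toy codes shown are placeholders rather than the study’s data.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Hypothetical nominal codes for 10 utterances from two raters
# (1 = positive feedback, 2 = negative feedback, 3 = other).
rater_a = [1, 1, 2, 3, 2, 1, 2, 2, 3, 1]
rater_b = [1, 1, 2, 3, 2, 1, 2, 3, 3, 1]

# Raters as rows, coded units as columns; np.nan would mark uncoded units.
reliability_data = np.array([rater_a, rater_b], dtype=float)
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(round(alpha, 3))
```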
To better understand the themes emerging from the post-PD surveys and semi-structured interviews, we relied on the researcher field notes that we gathered from the twelve biweekly facilitation meetings. During these sessions, facilitators shared their experiences engaging with their participants and offered recommendations for improving the facilitation system. We informally analyzed these field notes for themes that allowed for a more holistic understanding of the data gathered from the participants.

3. Results

In this section, we present the themes that emerged regarding the influence of the facilitation system on facilitators’ and participants’ perceptions of learning and engagement within the course. Survey responses showed that online participants on average rated all six Likert-scale questions between 3.38 and 4.08, which indicates that participants had a relatively positive experience with the facilitation system but that some challenges limited their overall satisfaction. See Table 5 for the detailed survey results. Participants’ responses highlighted some preliminary themes that we sought to understand by conducting semi-structured interviews and analyzing researcher field notes.

3.1. Negative and Positive Impacts of the Structural Dimension of Feedback

3.1.1. Negative Responses to the Structural Dimension of Feedback

Even though participants’ survey responses show their relative satisfaction with the facilitation system, the analysis of participant interviews showed that responses in the category of negative structural feedback (59 responses) outnumbered responses in the category of positive structural feedback (28 responses). These responses indicate that the design choices for structuring the feedback process might have led to these negative views. The main factors participants cited as limitations of the structural dimension of feedback fall into three themes: a gap between participants’ needs and a facilitation system feature, multiple features adding to participants’ cognitive load and reducing the system’s ease of use, and a need for more synchronous engagement.
There are several examples of how participants felt that a specific facilitation feature did not meet their learning needs, with the most recurring feature being facilitators’ office hours. The following quote shows how a participant did not use office hours because they preferred communicating via email:
I did not use the office hours. I did not need them. I think I got everything I needed via email.
According to our facilitation field notes, facilitators felt that many of their participants might have felt too intimidated to use office hours because they had never met their facilitators in person. Driven by this feedback, we recommended that facilitators create their own introductory welcome videos. We also offered them the necessary training on how to create more personalized videos with the goal of mitigating or preventing social intimidation.
Another reason offered by the participants for their underutilization of office hours was their relative comfort with the course content, as illustrated in the following quote:
I didn’t feel like it would sort of add too much. I felt pretty comfortable with the progress that I was making already. If I felt like I was really struggling with something, that would have been an impetus for me to join, but I didn’t feel like it was necessary, so I didn’t do it.
In this example, the participant’s self-efficacy with the course content allowed him to navigate the course without feeling the need for office hours. This participant’s perspective aligns with our field notes on facilitators’ feedback about office hours. According to the facilitators, because the vast majority of online courses on edX, the course’s host platform, do not offer office hours and require minimal synchronous interactions, participants who were exposed to courses on this platform might not see the merit in attending them.
Some participants also felt that the structure of the facilitation system limited their ability to build social relationships with other participants. For example, when asked about whether her facilitator connected her to other participants in her cohort, the participant responded by saying the following:
She did tell in an email. I recall she did mention who all was in her groups, all the names of the people and that that would be people to look for in the discussion boards. And when I first saw that I was like, okay. And then I went into the discussion boards, and then I felt like it was very cumbersome to try to find names and look back and forth. I lost interest in that pretty quickly. But maybe again, if it was her with our cohort or a smaller group in a more isolated setting, I feel like that a lot more interaction, exchange would probably happen with the people in her cohort and probably with her too.
According to this participant, the current structure for connecting participants challenged her ability to identify other participants within her cohort. This process seems to have added to the participant’s cognitive load and diminished her interest in engaging with her cohort members. This quote may also indicate barriers to the facilitation system’s ease of use, which, as Jung [10] indicated, may directly hinder participants’ learning persistence and their ability to complete an online course.
Several participants also recommended offering more opportunities for synchronous engagement when asked about possible improvements for the current facilitation structure. This is highlighted in the quote below:
And I would hope that we would have more [synchronous meetups], rather than just having two or three. And maybe not make it mandatory, but at least if somebody wanted to join and interact with people, it’ll definitely be held.
In this example, the participant recommended offering more opportunities for building social capital, which refers to the resources that a participant can access through their social relationships [66]. The participant’s recommendation for more synchronous interactions might have stemmed from a feeling of isolation caused by the lack of “humanlike” interactions (e.g., the absence of visual cues in text-based communication), which is one of the challenges of asynchronous courses [67]. Thus, synchronous video-based meetings may help participants feel that the learning experience is more “real” [68]. This quote also shows the importance of offering participants multiple modes for building knowledge, depending on their engagement preferences.
Overall, the deficiencies in the feedback system’s structural dimension may have contributed to participants’ inability to close the feedback loop that can support their learning. That said, some of the participants who did not feel the need to interact with one of the system’s structural dimensions might have been following a single-loop learning model, where their perceptions of the usefulness of the system’s features were guided by the specific learning goals that they aimed to achieve regardless of the learning possibilities offered by the structure.

3.1.2. Positive Responses to the Structural Dimension of Feedback

In terms of participants’ perceptions of how the structural dimension of the facilitation system added to their learning and engagement, participants highlighted several positive structural features. For example, when asked about how to improve the facilitation system, one participant said the following:
I don’t feel like it needs to be improved. The only one question I had, I posted. And I like the idea that you could post a comment or a question, so that those questions were easily seen by the facilitators.
This participant favored the opportunity to highlight questions in a unique format that differentiates them from other posts. The researcher field notes showed that this feature was mutually beneficial to participants and facilitators: facilitators found that it reduced their cognitive load by giving them an efficient way of locating participant questions. The ease of use of this feature may also encourage participants to ask more questions, which may support the development of double-loop learning.
Some participants found the synchronous meetups to be helpful for their learning. The following two examples illustrate this point:
So those synchronous meetings really kind of give us the feel of the safety net, that if we have any questions or anything, there is somebody we can ask and we will definitely get our answers. So that’s the good aspect, I think.
And so [through the synchronous meetups] we did get a chance to experience all of you, in a way. And that, I think, was very helpful.
These quotes show that participants appreciated the synchronous personal interactions with the facilitation team, even when these interactions were virtual.
According to participants’ responses, offering participants three different tracks for completing the course seems to have allowed them more flexibility to engage with the course while dealing with the disruption caused by the pandemic. The following examples illustrate this point:
So, from a participant’s perspective, having multiple tracks is awesome because, again, I was sort of thrown into the middle of pandemic teaching in the spring, and that’s essentially why I forgot that I had signed up for this thing. So having the ability to go back and sort of restart on the third track there was really great.
I thought it was helpful. In fact, I felt it was extremely helpful because when I started off, I could tell that there were participants that really did have the time to get into it. And I just wasn’t able to put in the effort that I should have been able to put into it. When I shifted into that second track, I felt that I was able to make better connections with those participants in that track because I was able to focus on it more.
Having multiple tracks also allowed the facilitation team to adjust facilitation structures and practices based on participants’ responses to the post-PD surveys. Because only 20 (28%) of the participants who registered for the first track completed the course during its designated time, we began reviewing survey responses after the first two tracks to develop a better understanding of participants’ perceptions of the facilitation system, including their perspectives on having multiple tracks. After analyzing the responses of the 45 participants who completed the course within the first two tracks (Table 6), we discussed possibilities for improving these results during every subsequent facilitation meeting and changed some facilitation features to enhance the system’s effectiveness. Analyzing the responses of the participants who completed the course by the third track (Table 7) showed the possible effects of these revisions on participants’ perceptions of the facilitation system. The results revealed that participants’ positive perceptions increased significantly for the following factors: facilitators’ help in developing a better understanding of the course content (Q1), facilitators’ accessibility and availability for offering support (Q2), facilitators’ ability to connect participants to one another (Q5), and participants’ willingness to recommend their facilitator to future participants (Q6) (Table 8). For questions Q3 and Q4, the changes in participants’ responses were not statistically significant (p > 0.05).
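The paper does not name the statistical test used for these comparisons; as one plausible reading, the sketch below compares the Likert responses to a single question from the first two tracks with those from the third track using a Mann-Whitney U test, an ordinal-friendly choice, and reports whether p < 0.05. The response vectors are placeholders, not the study’s data.

```python
from scipy.stats import mannwhitneyu

# Placeholder Likert ratings (1-5) for one survey question.
tracks_1_2 = [3, 3, 4, 2, 3, 4, 3, 2, 4, 3, 3, 4]
track_3    = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]

# One-sided test: are third-track ratings higher than first/second-track ratings?
stat, p_value = mannwhitneyu(track_3, tracks_1_2, alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```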
Researcher field notes offered more information about the usefulness of some of the structural factors of the facilitation system. According to the facilitators, the biweekly facilitation meetings provided opportunities to build relationships with one another, share their knowledge and expertise, and review and adjust the structure of the facilitation system when needed. For example, after the third week, the facilitation team agreed to focus their attention on providing feedback on all capstone projects while continuing to offer feedback on other posts at their own discretion and when a participant asked for feedback. This process helped improve the efficiency of our feedback process.
According to the facilitators, the weekly monitoring report that they used to track participant progress allowed facilitators to tailor their feedback and communication to support the participants’ needs and engagement status.

3.2. Negative and Positive Impacts of the Cognitive Dimension of Feedback

Participants’ survey responses seemed to show their relative satisfaction with facilitators’ ability to support their understanding of the course content, with the mean of their responses being 3.88 and a standard deviation of 1.03. The analysis of the participants’ interview responses offered us an opportunity to develop a more nuanced understanding of their perceptions of the cognitive dimension of facilitators’ feedback. There were more positive responses (29) than negative responses (12) to the cognitive dimension of feedback.

3.2.1. Negative Responses to the Cognitive Dimension of Feedback

Participants’ main challenges with the cognitive dimension of the facilitation system’s feedback included feedback lacking substantive cognitive value and the absence of feedback on some of the participants’ online contributions.
The following excerpt illustrates how some participants felt that the content of the feedback they received did not add to their cognitive engagement with the course content:
Honestly, I feel like I did get feedback. Honestly, I don’t remember very well. I feel like if I did, it must’ve been just a little more generic, just a little more general comments about things. It’s nothing that I felt like I needed to change or alter what I did, or nothing that really added to what I had already planned. I guess nothing’s really, to be honest, nothing’s really sticking out in terms of what the feedback was on anything like that.
This example shows that the feedback the participant received from the facilitator was relatively generic and added so little to her content knowledge that she barely remembered receiving it.
Other participants felt that the cognitive dimension of the feedback they received included a mix of positive and negative contributions, as seen in the example below:
Well, like I said, it was hit or miss, you never know who was going to respond. It felt bad because I posted something late and nobody had answered me.
This example highlights one of the facilitators’ challenges with the facilitation system: monitoring and supporting participants’ engagement when participants were behind schedule in the course. According to the field notes, facilitators faced some technical issues using the platform to search for participants’ posts. Moreover, because the platform does not notify participants when they receive feedback on their posts, some participants were unaware of their facilitators’ feedback, as exemplified in the following quote from a facilitator:
One of my friends is enrolled in the course, and when I asked her about the feedback she received, she told me that she didn’t receive any. I was surprised, so I went and checked, and it turns out that she did receive feedback; it’s just that she didn’t get an email notifying her of its existence.
According to some facilitators, some of the participants’ negative feedback about the cognitive dimension might be due to the time it took facilitators to adapt their expertise by familiarizing themselves with the updates in the course curriculum and the new facilitation structure. Some facilitators also felt overwhelmed with the amount of feedback they needed to provide for the 180 participants engaging with the course. Guided by their recommendation, we agreed to focus the facilitators’ attention on offering feedback on participants’ Capstone assignments rather than on every participant post. The course had a total of four Capstone assignments that required participants to post about how they planned to implement some aspects of the course curriculum with their students. According to the facilitators, prioritizing feedback on these assignments helped them provide more substantive feedback on participants’ implementation plans and use their time efficiently to advance participants’ learning.

3.2.2. Positive Responses to the Cognitive Dimension of Feedback

Many participants seemed to have benefited from the cognitive dimension of the facilitation system’s feedback. Participants appeared to appreciate the quality of the facilitators’ feedback and their guidance for implementing the course content in their own classrooms. The following example highlights how some participants appreciated the facilitators’ cognitive contributions to the course:
[My facilitator], she was great. I don’t really have anything in particular. Like I’m trying to think about like how I would, if it were me in those shoes, like what would I do differently? And I really feel like she, she nailed it. Again, the quality of the ideas that she was sort of putting out there was excellent. Her pace in terms of when she was responding to the things that I put out there was really good.
This feedback seems to show how our facilitators might have benefited from our training on providing high-quality feedback. Other participants mentioned how their facilitators’ content knowledge expertise added to the cognitive dimension of their feedback. The following quote illustrates one of these examples:
I think those [the facilitator’s feedback] just inherently were probably better because they’ve done it before, and they know how to give good constructive feedback. Good feedback, real feedback. I think most of those were really good; they definitely had more information.
Participants’ responses also highlighted how they highly valued facilitators’ cognitive feedback on their implementation of the course content within their own classrooms and contexts, as shown in the example below:
I think she’s very willing to share and elaborate on the stuff that she does in her classroom and willing to offer it up to the population of participants out there that are trying to do that…and she definitely has a lot of valuable resources and ideas that I can tap into that if I need to. I’ll definitely take that moving forward into the school year.
Participants also seemed to appreciate how some facilitators moderated and encouraged some of the participants’ online discussions about the course content. The following example illustrates how one facilitator contributed to a discussion about a capstone assignment in which participants were required to post about how they planned to implement some aspects of the course curriculum with their own students:
Yes. So, I’m thinking about that one class participant that I mentioned to you that he and I had gone back and forth with some feedback on one of my capstones, and my facilitator actually jumped in on that. And it was just a really nice discussion among the three of us.
The researcher field notes offered helpful insights about some of the factors that contributed to the facilitators’ ability to develop the cognitive dimension of their feedback. Many facilitators noted the importance of the biweekly facilitation meetings in building a learning community that allowed them to develop their content knowledge expertise and pedagogical approach. Facilitators also appreciated having the facilitation manual to guide them in providing high-quality feedback.

3.3. Negative and Positive Impacts of the Social-Affective Dimension of Feedback

Participants’ responses to the survey question regarding their facilitators’ ability to connect them to other participants received the lowest average rating among the facilitation questions, with a mean of 3.54 and a standard deviation of 1.12. We used participants’ interview responses to develop a more holistic understanding of their perceptions of the effectiveness of the social-affective dimension of feedback.

3.3.1. Negative Responses to the Social-Affective Dimension of Feedback

The most recurring theme among participants’ negative perceptions of the facilitation system’s social-affective dimension was participants’ inability to identify or build a relationship with their facilitator. In the following quote, for example, a participant describes being unable to identify whether her facilitator was the person providing the feedback. The participant also did not know if the people interacting with her were part of her cohort:
I want to say I know one of the facilitators had commented on one of my posts. But I know that I’ve made a connection with another person in Ohio that was teaching using the modeling method too. But I don’t remember much beyond that. I know that there was two or three people, I just don’t know if they were part of my group, my section two group, or not.
This example highlights the absence of a social relationship between the participant and the facilitator. It also reveals a possible deficiency in the facilitators’ ability to connect participants to others within their cohort. Structural dimensions of the feedback system may have contributed to the participants’ inability to connect with other participants, as discussed earlier. The example below also illustrates a participant’s inability to remember who offered her feedback:
So, there was feedback, but I have no idea who those people were.
Another participant complained about her facilitator’s delay in sending the contact information of other participants, as seen below:
I guess the only thing I sort of wished was the email I got that said who was in my group. That came probably like week three. To have that information sooner would have been helpful. I might’ve made a bigger attempt to connect to that group.
According to the researcher field notes, all the facilitators highlighted their struggle with building relationships with their cohort members. Several facilitators even complained that some members of their cohorts never responded to their emails. Participants’ preferences and learning goals might have driven this lack of social interaction. For instance, some participants might have been interested only in completing the course rather than building relationships, as highlighted by one of the participants:
So, my priority in completing the course in what I was sort of looking for was definitely, I wouldn’t say completing the course, but I was trying to get sort of pedagogical ideas out of it. So, resources and ideas for how to teach this type of thing for my students, I didn’t put a priority on community building. So that was definitely not something that was like top of my list in terms of what I was doing.
Some participants also felt that the communication they received via email lacked a personal touch, as illustrated in the excerpt below:
Yeah, I’ve received some emails from my facilitator, but they seem to be sort of automatically generated. They might be personal, but they certainly didn’t seem so anyways.
This comment highlights how the discourse used in sending emails might impact participants’ perceptions of the facilitators’ social presence.

3.3.2. Positive Responses to the Social-Affective Dimension of Feedback

Our analysis of participants’ perceptions of the benefits of the social-affective dimension of feedback reveals that participants valued their facilitators’ presence throughout the course. Participants’ average rating of their facilitators’ accessibility and availability had the highest score among facilitation-related survey questions, with a mean of 4.08 and a standard deviation of 4.0. The same theme was prominent among interview responses. This observation is in line with the findings of Chang and colleagues [60] and Gurley [69], which confirmed the direct impact of teaching presence on online participants’ ability to complete a course. The examples below illustrate two participants’ experiences with their facilitators’ presence:
Like I said, the way it was this year, I definitely knew that I had a contact person I could reach out to and that there was a group of people who were giving me feedback and who were answering questions. So, I felt well-supported.
Having her reach out and say, “Hey, I’m here for you. If you have questions, here’s how to get a hold of me.” And that was perfect.
In both examples, the participants felt that their facilitators’ accessibility and presence supported their online learning experience. Another participant appreciated his facilitator’s communication approach and his efforts to build community by encouraging participants to complete the course alongside the same cohort:
He reached out once. I’d been on vacation and he just sent a reminder, not mean or anything just saying, hey, here’s our timeline. And he did it in an effort to kind of maintain that community within our session.
The researcher field notes included several examples of facilitators offering suggestions for strengthening social relationships among members of the learning community. For example, some facilitators shared communications that spurred more social interaction among participants. Other facilitators mentioned how they tried to connect participants who shared similar backgrounds by introducing them to each other and recommending that they look at each other’s posts.
Given that the course ran during a pandemic, facilitators consistently emphasized providing participants with the emotional support they needed to engage with the course.

4. Discussion

This study explored participants’ and facilitators’ perspectives on the effectiveness of an online facilitation feedback system in supporting their learning and engagement. We introduced an online facilitation system of feedback to support participants’ learning and instruction about complex systems; the facilitation system incorporated structural, cognitive, and social-affective dimensions of feedback. The findings show how the different dimensions of the system were perceived differently by various participants based on their needs and backgrounds. Viewed through a feedback-loop learning model, participants’ prior experience with the content and with similar online courses appears to have influenced their willingness to adjust their goals and engage with the system’s structural features beyond their initial single-loop learning goals. Other participants’ responses showed how the feedback system’s structural dimension supported their ability to build social capital by allowing for more synchronous and asynchronous personal interactions. This finding highlights the difficulty of categorizing participants’ perceptions, given how the various dimensions of a feedback system tend to be interdependent and interrelated. For example, if a feedback system supports social interactions, is it because of the system’s structure, the facilitators’ social skills, or some other element in the online environment?
Participants’ responses show an appreciation for their facilitators’ efforts to encourage their cognitive engagement, the quality of facilitators’ ideas and feedback, and facilitators’ expertise in implementing the course content within their own classrooms. Participants’ perceptions of the limitations of the cognitive dimension seem to have been influenced by the time it took facilitators to adapt their expertise to support participant learning. In other words, facilitators had to follow a double-loop learning model, in which they tried to develop a deeper understanding of the course content and the facilitation system in order to support their cohort members. To address this issue, facilitators recommended more training on how to provide content-specific feedback. As for the social-affective dimension of feedback, participants’ learning goals seem to have influenced their perceptions. Some participants appreciated their facilitators’ efforts to build relationships with them, while others did not feel the need to engage with their facilitators because they were mainly focused on learning the content and completing the course. That said, most of the participants’ responses indicate that they valued their facilitator “being there” both synchronously and asynchronously, a finding that aligns with the literature on fostering presence in online learning environments [13,70].

5. Contributions and Implications for Practice

Our results corroborate Ludvigsen and Steier’s [4] and Blum-Smith and colleagues’ [13] findings on the difficulty of analyzing the complex relationships between the dimensions of an online facilitation system and the various interconnected and interdependent variables influencing learners’ interactions and perceptions of this system. For example, several hidden variables, such as a limitation within a platform’s algorithm, might impact a participant’s perception of the feedback system. As a result, participants might not be able to develop a holistic understanding of how the feedback system influenced their learning outcomes, which makes it harder to evaluate the system’s effectiveness. That said, this limitation should not deter educators from finding ways to capitalize on technological advancements to develop a more nuanced understanding of how the different complex variables of a feedback system may influence participants’ learning and engagement [35]. As such, educators designing these systems will need to embrace the process of “ongoing negotiation” between the intended goals of these systems and the affordances for their enactment within a complex dynamic environment of diverse participants [13].
In our analysis of how the facilitation system might have impacted the feedback-loop character of participants’ learning [25], we found mixed results. More specifically, the system contributed to double-loop learning for some participants, single-loop learning for others, and in some cases, participants were unable to close the feedback loop and reach their intended learning goals. Importantly, it was clear that most of our participants engaged with the course with the aim of achieving specific learning outcomes. These predetermined short-term goals may have contributed to their interest in attaining single-loop learning while challenging their motivation to engage in efforts to improve their long-term learning strategies and achieve double-loop learning [25]. Supporting an online facilitation system’s advancement of both learning models may require the design of adaptive feedback systems that tailor their responses to participants’ diverse motivations, expectations, and backgrounds, with the ultimate goal of supporting deeper long-term learning [70,71]. Allowing participants more opportunities for collaboration and offering peer-to-peer feedback may also be critical for supporting participants’ ability to explore both learning models. For example, nurturing collaborative learning environments, such as collaborative inquiry learning, may allow participants access to their collaborators’ diverse pool of resources and expertise, offering them new insights that can scaffold their learning outcomes [72].
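To make the idea of an adaptive feedback system more concrete, the Python sketch below illustrates one possible selection rule of the kind the preceding paragraph describes. It is a hypothetical illustration, not part of the facilitation system we studied: the Participant attributes, the goal labels (e.g., "complete_course"), and the select_feedback function are assumptions introduced only for this example.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    goal: str                 # e.g., "complete_course" or "improve_pedagogy" (illustrative labels)
    posts_this_week: int
    replied_to_peers: bool

def select_feedback(p: Participant) -> str:
    """Return a feedback prompt tailored to the participant's goal and recent activity."""
    if p.posts_this_week == 0:
        # Structural nudge: re-engage the participant before content feedback can help.
        return f"Hi {p.name}, here is this week's timeline and where to post your model."
    if p.goal == "complete_course":
        # Single-loop support: help the participant close the immediate task loop.
        return "Your post meets the task goal; here is one resource to refine it."
    # Double-loop support: prompt reflection on the participant's own teaching strategy.
    prompt = "How might you adapt this activity for your own classroom, and why?"
    if not p.replied_to_peers:
        prompt += " A peer in your cohort shared a similar model; consider comparing notes."
    return prompt

# Example: a task-focused participant receives single-loop, task-level feedback.
print(select_feedback(Participant("Ana", "complete_course", posts_this_week=2, replied_to_peers=False)))

In a deployed system, rules of this kind would plausibly be informed by the participant engagement monitoring reports described in Appendix A and refined iteratively with facilitators rather than fixed in advance.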
Our results further suggest that following the principles of the DBIR framework may offer researchers and practitioners an opportunity to adapt and adjust these feedback systems in ways that respond to participants’ and facilitators’ different learning needs [62]. This was evident in how participants’ survey responses improved after we adjusted the system in response to their needs (Tables 6–8). Given our focus on analyzing the responses of the 74 participants who completed the course rather than the 180 who engaged with it, our results were limited by our inability to capture the perceptions of participants who did not complete the course or the post-PD survey.
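Because Table 8 reports only aggregate test statistics, readers may wish to sanity-check the track comparison from the rounded summary statistics in Tables 6 and 7. The short Python sketch below does so for Q1. It is not the analysis code used in this study; it assumes a Student’s (equal-variance) independent samples t-test, and, because it works from rounded table values, its output only approximates the values reported in Table 8.

from scipy import stats
import math

# Q1 summary statistics taken from Tables 6 and 7
# (tracks 1&2: n = 45; track 3: n = 29).
m12, sd12, n12 = 3.60, 1.07, 45
m3, sd3, n3 = 4.31, 0.81, 29

# Student's independent samples t-test computed directly from summary statistics.
t, p = stats.ttest_ind_from_stats(m3, sd3, n3, m12, sd12, n12, equal_var=True)

# Cohen's d based on the pooled standard deviation.
sd_pooled = math.sqrt(((n12 - 1) * sd12**2 + (n3 - 1) * sd3**2) / (n12 + n3 - 2))
d = (m3 - m12) / sd_pooled

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
# Yields approximately t = 3.05, p = 0.003, d = 0.73, in the same range as the
# t = 3.23, p = 0.002, d = 0.689 reported for Q1 in Table 8.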
One of our main goals in this paper was to develop an online facilitation system of feedback and provide promising evidence of its effectiveness in supporting participant learning. Despite the mixed results highlighted by participants’ and facilitators’ perceptions, this is an important pursuit, particularly because of the disruption caused by COVID-19 and the massive shift towards online learning, which might continue well beyond the pandemic. Clearly, there are many possibilities for improving learning outcomes through online facilitation; we encourage researchers and practitioners to build on our findings to strengthen online feedback systems and thus make online learning a more positive and rewarding experience for facilitators and participants alike.

Author Contributions

Conceptualization, A.M. and S.A.Y.; Data curation, J.-U.Y., T.R. and N.N.; Formal analysis, A.M., J.-U.Y., T.R. and N.N.; Funding acquisition, S.A.Y.; Investigation, S.A.Y.; Methodology, A.M., S.A.Y. and J.S.; Project administration, T.R. and K.M.; Writing—original draft, A.M.; Writing—review & editing, S.A.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a grant from the U.S. National Science Foundation, #1721003.

Institutional Review Board Statement

The study was conducted in accordance with human subjects guidelines and approved by the Institutional Review Board of the University of Pennsylvania (protocol 827647, approved on 8 June 2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

This research was funded by a grant from the U.S. National Science Foundation, #1721003.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. How the different principles were incorporated into our design process.

DBIR Element 1: A focus on understanding persistent problems of practice facing the various stakeholders engaging with the education system.
How it guided our design and implementation of the facilitation system: The facilitation system was guided by feedback from:
(1) The facilitation team, which included expert participants who were enrolled in the course in previous years;
(2) Our participants during the duration of the course; and
(3) The instructional team, which is the team responsible for curating the course content and structure.

DBIR Element 2: A commitment to an iterative and inclusive process of collaborative design.
How it guided our design and implementation of the facilitation system: The facilitation system went through a process of continuous iteration throughout the course. In this process, we relied on feedback from our facilitators and participants to adjust the facilitation system to meet their needs. For example, we decided to include more details in the participant engagement monitoring report based on facilitators’ feedback.

DBIR Element 3: A concern for using systematic inquiry to develop theories and knowledge related to implementation processes and student learning outcomes.
How it guided our design and implementation of the facilitation system: Our facilitation system was guided by the feedback triangle framework, where we viewed learning as a process of closing feedback loops. Throughout the course, we offered participants different features and multiple modes for engaging with the course to support their different needs and preferences.

DBIR Element 4: A concern for developing the capabilities of researchers, practitioners, and policymakers to influence sustainable change within systems.
How it guided our design and implementation of the facilitation system: The facilitation system was supported by a dedicated facilitation team that followed a structured process. The team had its own dedicated infrastructural resources and tools to guide its goals.

References

  1. Jeong, H.; Hmelo-Silver, C.E. Seven affordances of computer-supported collaborative learning: How to support collaborative learning? How can technologies help? Educ. Psychol. 2016, 51, 247–265. [Google Scholar] [CrossRef]
  2. Chen, J.; Wang, M.; Kirschner, P.A.; Tsai, C.C. The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Rev. Educ. Res. 2018, 88, 799–843. [Google Scholar] [CrossRef]
  3. Scardamalia, M.; Bereiter, C. Smart technology for self-organizing processes. Smart Learn. Environ. 2014, 1. [Google Scholar] [CrossRef] [Green Version]
  4. Ludvigsen, S.; Steier, R. Reflections and looking ahead for CSCL: Digital infrastructures, digital tools, and collaborative learning. Int. J. Comput. Collab. Learn. 2019, 14, 415–423. [Google Scholar] [CrossRef] [Green Version]
  5. Matuk, C.; Tissenbaum, M.; Schneider, B. Real-time orchestrational technologies in computer-supported collaborative learning: An introduction to the special issue. Int. J. Comput. Supported Collab. Learn. 2019, 14, 251–260. [Google Scholar] [CrossRef] [Green Version]
  6. Cabrera, D.; Cabrera, L.; Powers, E.; Solin, J.; Kushner, J. Applying systems thinking models of organizational design and change in community operational research. Eur. J. Oper. Res. 2018, 268, 932–945. [Google Scholar] [CrossRef]
  7. Zhou, J.; Dawson, P.; Tai, J.H.; Bearman, M. How conceptualising respect can inform feedback pedagogies. Assess. Eval. High. Educ. 2020, 46, 68–79. [Google Scholar] [CrossRef]
  8. Firmin, R.; Schiorring, E.; Whitmer, J.; Willett, T.; Collins, E.D.; Sujitparapitaya, S. Case study: Using MOOCs for conventional college coursework. Distance Educ. 2014, 35, 178–201. [Google Scholar] [CrossRef]
  9. Gayoung, L.E.; Sunyoung, K.E.; Myungsun, K.I.; Yoomi, C.H.; Ilju, R.H. A study on the development of a MOOC design model. Educ. Technol. Int. 2016, 17, 1–37. [Google Scholar]
  10. Jung, Y.; Lee, J. Learning engagement and persistence in massive open online courses (MOOCS). Comput. Educ. 2018, 122, 9–22. [Google Scholar] [CrossRef]
  11. Siemens, G.; Downes, S.; Cormier, D.; Kop, R. PLENK 2010–Personal Learning Environments, Networks and Knowledge. 2010. Available online: https://www.islandscholar.ca/islandora/object/ir%3A20500 (accessed on 17 October 2020).
  12. Kasch, J.; Van Rosmalen, P.; Kalz, M. A Framework towards educational scalability of open online courses. J. Univers. Comput. Sci. 2017, 23, 845–867. [Google Scholar]
  13. Blum-Smith, S.; Yurkofsky, M.M.; Brennan, K. Stepping back and stepping in: Facilitating learner-centered experiences in MOOCs. Comput. Educ. 2020, 160, 104042. [Google Scholar] [CrossRef] [PubMed]
  14. Salter, S.; Douglas, T.; Kember, D. Comparing face-to-face and asynchronous online communication as mechanisms for critical reflective dialogue. Educ. Action Res. 2017, 25, 790–805. [Google Scholar] [CrossRef]
  15. Parsons, S.A.; Hutchison, A.C.; Hall, L.A.; Parsons, A.W.; Ives, S.T.; Leggett, A.B. US teachers’ perceptions of online professional development. Teach. Teach. Educ. 2019, 82, 33–42. [Google Scholar]
  16. Webb, D.C.; Nickerson, H.; Bush, J.B. A comparative analysis of online and face-to-face professional development models for CS education. In Proceedings of the SIGCSE ‘17: The 48th ACM Technical Symposium on Computer Science Education, Seattle, WA, USA, 8–11 March 2017; pp. 621–626. [Google Scholar]
  17. Yoon, S.A.; Miller, K.; Richman, T.; Wendel, D.; Schoenfeld, I.; Anderson, E.; Shim, J. Encouraging collaboration and building community in online asynchronous professional development: Designing for social capital. Int. J. Comput. Supported Collab. Learn. 2020, 15, 351–371. [Google Scholar] [CrossRef]
  18. Turvey, K.; Pachler, N. Design principles for fostering pedagogical provenance through research in technology supported learning. Comput. Educ. 2020, 146, 103736. [Google Scholar] [CrossRef]
  19. Sterman, J.D. Learning from evidence in a complex world. Am. J. Public Health 2006, 96, 505–514. [Google Scholar] [CrossRef]
  20. Wang, Z.; Gong, S.Y.; Xu, S.; Hu, X.E. Elaborated feedback and learning: Examining cognitive and motivational influences. Comput. Educ. 2019, 136, 130–140. [Google Scholar] [CrossRef]
  21. Hokayem, H.; Gotwals, A.W. Early elementary students’ understanding of complex ecosystems: A learning progression approach. J. Res. Sci. Teach. 2016, 53, 1524–1545. [Google Scholar] [CrossRef]
  22. Scott, W.R.; Davis, G.F. Organizations and Organizing: Rational, Natural and Open Systems Perspectives, 1st ed.; Routledge: Abingdon, UK, 2015. [Google Scholar]
  23. Lizzio, A.; Wilson, K. Feedback on assessment: Students’ perceptions of quality and effectiveness. Assess. Eval. High. Educ. 2008, 33, 263–275. [Google Scholar] [CrossRef]
  24. Henderson, M.; Ryan, T.; Phillips, M. The challenges of feedback in higher education. Assess. Eval. High. Educ. 2019, 44, 1237–1252. [Google Scholar] [CrossRef]
  25. Tarrant, S.P.; Thiele, L.P. Practice makes pedagogy–John Dewey and skills-based sustainability education. Int. J. Sustain. High. Educ. 2016, 17, 54–67. [Google Scholar] [CrossRef]
  26. Pardo, A. A feedback model for data-rich learning experiences. Assess. Eval. High. Educ. 2018, 43, 428–438. [Google Scholar] [CrossRef]
  27. Schön, D.; Argyris, C. Organizational learning II: Theory, Method and Practice; Addison-Wesley Publishing Company: Boston, MA, USA, 1996. [Google Scholar]
  28. Cheng, M.T.; Rosenheck, L.; Lin, C.Y.; Klopfer, E. Analyzing gameplay data to inform feedback loops in The Radix Endeavor. Comput. Educ. 2017, 111, 60–73. [Google Scholar] [CrossRef] [Green Version]
  29. Wiliam, D. Feedback: Part of a system. Educ. Leadersh. 2012, 70, 30–34. [Google Scholar]
  30. Sterman, J.; Oliva, R.; Linderman, K.W.; Bendoly, E. System dynamics perspectives and modeling opportunities for research in operations management. J. Oper. Manag. 2015, 39, 40. [Google Scholar] [CrossRef]
  31. Carless, D. Feedback loops and the longer-term: Towards feedback spirals. Assess. Eval. High. Educ. 2019, 44, 705–714. [Google Scholar] [CrossRef]
  32. Sterman, J.D. Learning in and about complex systems. Syst. Dyn. Rev. 1994, 10, 291–330. [Google Scholar] [CrossRef] [Green Version]
  33. Scott, G.; Danley-Scott, J. Two loops that need closing: Contingent faculty perceptions of outcomes assessment. J. Gen. Educ. 2015, 64, 30–55. [Google Scholar] [CrossRef]
  34. Ramani, S.; Könings, K.D.; Ginsburg, S.; van der Vleuten, C.P. Feedback redefined: Principles and practice. J. Gen. Intern. Med. 2019, 34, 744–749. [Google Scholar] [CrossRef] [Green Version]
  35. Martin, F.; Wang, C.; Sadaf, A. Student perception of helpfulness of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses. Internet High. Educ. 2018, 37, 52–65. [Google Scholar] [CrossRef]
  36. Davis, D.; Jivet, I.; Kizilcec, R.F.; Chen, G.; Hauff, C.; Houben, G.J. Follow the successful crowd: Raising MOOC completion rates through social comparison at scale. In Proceedings of the LAK’17 conference proceeding: The Seventh International Learning Analytics & Knowledge Conference, Simon Fraser University, Vancouver, BC, Canada, 13–17 March 2017; pp. 454–463. [Google Scholar]
  37. An, Y. The effects of an online professional development course on participants’ perceptions, attitudes, self-efficacy, and behavioral intentions regarding digital game-based learning. Educ. Technol. Res. Dev. 2018, 66, 1505–1527. [Google Scholar] [CrossRef]
  38. Phirangee, K.; Epp, C.D.; Hewitt, J. Exploring the relationships between facilitation methods, students’ sense of community, and their online behaviors. Online Learn. 2016, 20, 134–154. [Google Scholar] [CrossRef]
  39. Zhu, M.; Bonk, C.J.; Sari, A.R. Instructor experiences designing MOOCs in higher education: Pedagogical, resource, and logistical considerations and challenges. Online Learn. 2018, 22, 203–241. [Google Scholar] [CrossRef]
  40. Hew, K.F. Student perceptions of peer versus instructor facilitation of asynchronous online discussions: Further findings from three cases. Instr. Sci. 2015, 43, 19–38. [Google Scholar] [CrossRef]
  41. Van Popta, E.; Kral, M.; Camp, G.; Martens, R.L.; Simons, P.R. Exploring the value of peer feedback in online learning for the provider. Educ. Res. Rev. 2017, 20, 24–34. [Google Scholar] [CrossRef]
  42. Yang, M.; Carless, D. The feedback triangle and the enhancement of dialogic feedback processes. Teach. High. Educ. 2013, 18, 285–297. [Google Scholar] [CrossRef] [Green Version]
  43. Barker, M.; Pinard, M. Closing the feedback loop? Iterative feedback between tutor and student in coursework assessments. Assess. Eval. High. Educ. 2014, 39, 899–915. [Google Scholar] [CrossRef]
  44. Gerard, L.F.; Varma, K.; Corliss, S.B.; Linn, M. Professional development for technology-enhanced inquiry science. Rev. Educ. Res. 2011, 81, 408–448. [Google Scholar] [CrossRef]
  45. Yoon, S.A.; Anderson, E.; Koehler-Yom, J.; Evans, C.; Park, M.; Sheldon, J.; Schoenfeld, I.; Wendel, D.; Scheintaub, H.; Klopfer, E. Teaching about complex systems is no simple matter: Building effective professional development for computer-supported complex systems instruction. Instr. Sci. 2017, 45, 99–121. [Google Scholar] [CrossRef] [Green Version]
  46. Fishman, B.J.; Penuel, W.R.; Allen, A.R.; Cheng, B.H.; Sabelli, N.O. Design-based implementation research: An emerging model for transforming the relationship of research and practice. Natl. Soc. Study Educ. 2013, 112, 136–156. [Google Scholar]
  47. Fishman, B.J.; Penuel, W.R. Design-Based Implementation Research. In International Handbook of the Learning Sciences; Routledge: Abingdon, UK, 2018; pp. 393–400. [Google Scholar]
  48. Hew, K.F.; Qiao, C.; Tang, Y. Understanding student engagement in large-scale open online courses: A machine learning facilitated analysis of student’s reflections in 18 highly rated MOOCs. Int. Rev. Res. Open Distrib. Learn. 2018, 19. [Google Scholar] [CrossRef]
  49. Nicol, D. From monologue to dialogue: Improving written feedback processes in mass higher education. Assess. Eval. High. Educ. 2010, 35, 501–517. [Google Scholar] [CrossRef]
  50. Resendes, M.; Scardamalia, M.; Bereiter, C.; Chen, B.; Halewood, C. Group-level formative feedback and metadiscourse. Int. J. Comput. Supported Collab. Learn. 2015, 10, 309–336. [Google Scholar] [CrossRef]
  51. Kizilcec, R.F.; Halawa, S. Attrition and achievement gaps in online learning. In Proceedings of the Second (2015) ACM Conference on Learning@ Scale, Vancouver, BC, Canada, 14–18 March 2015; pp. 57–66. [Google Scholar]
  52. Kizilcec, R.F.; Pérez-Sanagustín, M.; Maldonado, J.J. Recommending self-regulated learning strategies does not improve performance in a MOOC. In Proceedings of the third (2016) ACM conference on learning@ scale, Edinburgh, Scotland, UK, 25–26 April 2016; pp. 101–104. [Google Scholar]
  53. Yeşilyurt, E.; Ulaş, A.H.; Akan, D. Teacher self-efficacy, academic self-efficacy, and computer self-efficacy as predictors of attitude toward applying computer-supported education. Comput. Hum. Behav. 2016, 64, 591–601. [Google Scholar] [CrossRef]
  54. Darling-Hammond, L.; Hyler, M.E.; Gardner, M. Effective Teacher Professional Development; Learning Policy Institute: Palo Alto, CA, USA, 2017. [Google Scholar]
  55. Baroody, A.J.; Dowker, A. The Development of Arithmetic Concepts and Skills: Constructive Adaptive Expertise; Routledge: Abingdon, UK, 2013. [Google Scholar]
  56. Anthony, G.; Hunter, J.; Hunter, R. Prospective teachers’ development of adaptive expertise. Teach. Teach. Educ. 2015, 49, 108–117. [Google Scholar]
  57. Sweller, J. Cognitive load theory and educational technology. Educ. Technol. Res. Dev. 2020, 68, 1–6. [Google Scholar] [CrossRef]
  58. Goldie, J.G. Connectivism: A knowledge learning theory for the digital age? Med. Teach. 2016, 38, 1064–1069. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Chaker, R.; Impedovo, M.A. The moderating effect of social capital on co-regulated learning for MOOC achievement. Educ. Inf. Technol. 2020, 5, 1–21. [Google Scholar] [CrossRef]
  60. Li, Y.; Chen, H.; Liu, Y.; Peng, M.W. Managerial ties, organizational learning, and opportunity capture: A social capital perspective. Asia Pac. J. Manag. 2014, 31, 271–291. [Google Scholar] [CrossRef]
  61. Adamopoulos, P. What makes a great MOOC? An interdisciplinary analysis of student retention in online courses. In Proceedings of the International Conference on Information Systems, ICIS 2013, Milano, Italy, 15–18 December 2013. [Google Scholar]
  62. Chang, R.I.; Hung, Y.H.; Lin, C.F. Survey of learning experiences and influence of learning style preferences on user intentions regarding MOOCs. Br. J. Educ. Technol. 2015, 46, 528–541. [Google Scholar] [CrossRef]
  63. Rosé, C.P.; Ferschke, O. Technology support for discussion based learning: From computer supported collaborative learning to the future of massive open online courses. Int. J. Artif. Intell. Educ. 2016, 26, 660–678. [Google Scholar] [CrossRef] [Green Version]
  64. Glaser, B.G.; Strauss, A.L. Discovery of Grounded Theory: Strategies for Qualitative Research; Routledge: Abingdon, UK, 2017. [Google Scholar]
  65. Campbell, J.L.; Quincy, C.; Osserman, J.; Pedersen, O.K. Coding in-depth semistructured interviews: Problems of unitization and intercoder reliability and agreement. Sociol. Methods Res. 2013, 42, 294–320. [Google Scholar] [CrossRef]
  66. Putnam, R. Bowling Alone; Simon & Schuster: New York, NY, USA, 2000. [Google Scholar]
  67. Lowenthal, P.R.; Dunlap, J.C. Investigating students’ perceptions of instructional strategies to establish social presence. Distance Educ. 2018, 39, 281–298. [Google Scholar] [CrossRef]
  68. Ulker, U. Reading activities in blended learning: Recommendations for university language preparatory course participants. Int. J. Soc. Sci. Educ. Stud. 2019, 5, 83. [Google Scholar]
  69. Gurley, L.E. Educators’ preparation to teach, perceived teaching presence, and perceived teaching presence behaviors in blended and online learning environments. Online Learn. 2018, 22, 197–220. [Google Scholar]
  70. Hollands, F.M.; Tirthali, D. MOOCs: Expectations and reality. Center for Benefit-Cost Studies of Education; Teachers College, Columbia University: New York, NY, USA, 2014. [Google Scholar]
  71. Watson, S.L.; Loizzo, J.; Watson, W.R.; Mueller, C.; Lim, J.; Ertmer, P.A. Instructional design, facilitation, and perceived learning outcomes: An exploratory case study of a human trafficking MOOC for attitudinal change. Educ. Technol. Res. Dev. 2016, 64, 1273–1300. [Google Scholar] [CrossRef]
  72. Wang, H.Y.; Duh, H.B.; Li, N.; Lin, T.J.; Tsai, C.C. An investigation of university students’ collaborative inquiry learning behaviors in an augmented reality simulation and a traditional simulation. J. Sci. Educ. Technol. 2014, 23, 682–691. [Google Scholar] [CrossRef]
Figure 1. The single-loop learning (negative feedback-loop) model is bounded by prevailing mental models [32].
Figure 2. In the double-loop learning (positive feedback-loop) model, feedback can cause changes in mental models [32].
Figure 3. The feedback triangle framework. “The feedback triangle framework” reprinted from “The feedback triangle and the enhancement of dialogic feedback processes” by Yang & Carless, 2013, Teaching in Higher Education, 18:3, 285–297. Copyright (2013) by Taylor & Francis Ltd www.tandfonline.com [42].
Table 1. Demographics of the 74 participants who completed the course and a post-professional development (PD) survey.

Participant Demographics: Ethnicity
White: 49
Black: 1
Hispanic: 4
Asian or Pacific Islander: 12
Multiethnic: 2
Other: 5
Prefer not to say: 1
Table 2. The characteristics of the 10 participants who were selected for the semi-structured interviews.

Participant ID | Facilitator | Satisfaction Category | School Resource Level | School Location
1 | A | High | Middle Resource | Suburban
2 | A | Neutral | High Resource | Suburban
3 | B | High | High Resource | Suburban
4 | B | Low | High Resource | Suburban
5 | B | Low | High Resource | Suburban
6 | C | High | High Resource | Suburban
7 | C | Neutral | High Resource | Suburban
8 | D | Neutral | Middle Resource | Rural
9 | D | High | High Resource | Rural
10 | D | Low | High Resource | Suburban
Table 3. Categories for evaluating the effectiveness of the facilitation system.

Positive feedback: Positive feedback occurs when the feedback allows the participant an opportunity to close the feedback loop and develop an understanding of the course content. Within a facilitation system, examples of positive feedback include (1) a facilitator offering a participant feedback on the content of their post, and this feedback incentivizing the participant to learn more about the course content; and (2) a facilitator sharing with a participant the contact information of another participant, and the two connecting or interacting within or beyond the course.
Negative feedback: Negative feedback occurs when the feedback causes the output of the participant to decrease or prevents the participant from closing the feedback loop. Examples of negative feedback include (1) a feature of the facilitation system being disregarded by the participant, and (2) a facilitator’s feedback limiting the participant’s ability to engage with the course content and community.
Other: Participants’ feedback is categorized as “other” if it refers to the facilitation system in a way that does not fall into either of the other two categories.
Table 4. Subcategories for evaluating the effectiveness of the facilitation system.

Positive Structural Dimension
Definition: The structural dimension refers to disciplinary practices and institutional policies that determine how the feedback process is arranged and what resources are mobilized in providing feedback [42]. This dimension is deemed as facilitating positive feedback when participants favorably refer to a structural aspect of the facilitation system as the main factor supporting their judgment of this system. In these instances, the facilitator’s role in supporting the participants’ experiences is not highlighted by the participant beyond their performance of a structural practice.
Example: “I think the synchronous meetups were helpful because I think that’s where you all talked to us more about different things and when we split off into those little small groups it wasn’t necessarily with our specific cohort person.”

Positive Cognitive Dimension
Definition: The cognitive dimension of facilitators’ feedback refers to the facilitators’ specific contribution to a participant’s understanding of the course content and its related activities beyond the structural dimensions of the facilitation system. This dimension is deemed as facilitating positive feedback when participants favorably refer to a facilitator’s contribution as valuable to their cognitive processes. The cognitive dimension of feedback can influence a participant’s understanding of course content, task completion, and implementation strategies.
Example: “And there were a couple of things that my facilitator had responded to in some of my discussion posts that I actually, I would save them to use. One of the ones that she provided me with was “How science works” I believe, was the document. Most of it was through discussion posts and email. She did provide me with some, a lot of good resources that applied to what my response was in the discussion posts.”

Positive Social Dimension
Definition: The social dimension of the facilitators’ feedback refers to the facilitators’ specific contribution to a participant’s ability to connect with other members of the learning community and build relationships beyond the impact of the structural dimensions of the facilitation system. The social dimension also includes a participant’s ability to nurture their social capital through the support and presence of the facilitator. This dimension is deemed as facilitating positive feedback when participants favorably refer to a facilitator’s specific contribution to their ability to interact and/or build social relationships with other members of the learning community.
Example: “All my interactions were positive. I think she was just reaching out and it was clear that she wanted to make sure that everyone in the group got through it and at least knew each other’s emails. We all for sure knew from her that that was a group that we could contact and communicate with if we were struggling.”

Negative Structural Dimension
Definition: This dimension is deemed as promoting negative feedback when participants refer to a structural aspect of the facilitation system as the main factor influencing their unfavorable judgment of this system. In these instances, the facilitator’s role in hindering the participants’ learning experience is not highlighted by the participant beyond their performance of a structural practice.
Example: “I was just thinking that the other day, like yesterday. How come I have to go back in there and search for that topic and try to remember where I posted it, you know, get the answer to that question.”

Negative Cognitive Dimension
Definition: This dimension is deemed as promoting negative feedback when participants refer to a facilitator’s contribution as a reason for hindering their cognitive processes. The cognitive dimension of feedback can influence a participant’s understanding of course content, task completion, and implementation strategies.
Example: “She did respond to one or two that I recall. But that was about it. I didn’t feel like it was real significant or anything. But yeah, she did. I think it was one or two that she responded to”

Negative Social Dimension
Definition: This dimension is deemed as promoting negative feedback when participants refer to a facilitator’s specific contribution as a reason for their inability to interact and/or build social relationships with other members of the learning community.
Example: “Yeah. I would not say that I interacted much, if any with my facilitator. I certainly would not know my facilitator was. I did get two emails that seem to be sort of automatically generated. They might be personal, but they certainly didn’t seem so anyways.”
Table 5. Descriptive statistics for participants’ responses to the post-PD survey Likert-scale questions (1 = strongly disagree to 5 = strongly agree).

Q1: My facilitator helped me develop a better understanding of the course content.
Q2: My facilitator was accessible and available for me whenever I needed support.
Q3: I was able to benefit from the facilitators’ office hours.
Q4: I am satisfied with the relationship I developed with my facilitator.
Q5: My facilitator connected me with other participants.
Q6: I would recommend my facilitator to future participants.

Descriptive | Q1 | Q2 | Q3 | Q4 | Q5 | Q6
Mean | 3.88 | 4.08 | 3.38 | 3.78 | 3.54 | 3.99
SD | 1.03 | 4.00 | 0.87 | 1.02 | 1.12 | 0.99
Median | 4.00 | 4.00 | 3.00 | 4.00 | 3.00 | 4.00
Mode | 5.00 | 5.00 | 3.00 | 3.00 | 3.00 | 5.00
Range | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00
Table 6. Descriptive statistics for participants’ responses to the post-PD survey Likert-scale questions for the first 45 participants, who completed the course during the first two tracks (1 = strongly disagree to 5 = strongly agree).

Descriptive | Q1 | Q2 | Q3 | Q4 | Q5 | Q6
Mean | 3.60 | 3.93 | 3.27 | 3.64 | 3.31 | 3.78
SD | 1.07 | 1.07 | 0.89 | 1.09 | 1.16 | 1.02
Median | 4.00 | 4.00 | 3.00 | 3.00 | 3.00 | 4.00
Mode | 3.00 | 5.00 | 3.00 | 3.00 | 3.00 | 4.00
Range | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 | 4.00
Table 7. Descriptive statistics for participants’ responses to the post-PD survey Likert-scale questions for the 29 participants who completed the course by track 3 (1 = strongly disagree to 5 = strongly agree).

Descriptive | Q1 | Q2 | Q3 | Q4 | Q5 | Q6
Mean | 4.31 | 4.31 | 3.55 | 4.00 | 3.90 | 4.31
SD | 0.81 | 0.81 | 0.83 | 0.89 | 0.98 | 0.85
Median | 5 | 5 | 3 | 4 | 4 | 5
Mode | 5 | 5 | 3 | 3 | 3 | 5
Range | 2 | 2 | 2 | 2 | 3 | 2
Table 8. Results of independent samples t-tests for the post-PD survey questions, comparing tracks 1&2 with track 3.

Descriptive | Q1 | Q2 | Q3 | Q4 | Q5 | Q6
Mean (tracks 1&2 / track 3) | 3.60 / 4.31 | 3.93 / 4.31 | 3.27 / 3.55 | 3.64 / 4.00 | 3.31 / 3.90 | 3.78 / 4.31
SD (tracks 1&2 / track 3) | 1.07 / 0.81 | 1.07 / 0.81 | 0.89 / 0.83 | 1.09 / 0.89 | 1.16 / 0.98 | 1.02 / 0.85
Range (tracks 1&2 / track 3) | 4 / 2 | 4 / 2 | 4 / 2 | 4 / 2 | 4 / 3 | 4 / 2
t | 3.23 | 1.71 | 1.41 | 1.53 | 2.33 | 2.43
p-value | 0.002 * | 0.09 | 0.165 | 0.129 | 0.023 * | 0.018 *
Cohen’s d | 0.689 | 0.095 | 0.322 | 0.353 | 0.527 | 0.535
* p ≤ 0.05.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
