Article

Re-Thinking Peer Reviewing in the Virtual Context: The Roles of Giving and Receiving Online Feedback in L2 Spanish Classrooms

by
Emilia Illana-Mahiques
Department of Romance Studies, Cornell University, Ithaca, NY 14850, USA
Submission received: 1 April 2021 / Revised: 20 August 2021 / Accepted: 27 August 2021 / Published: 10 September 2021
(This article belongs to the Special Issue L2/HL Writing and Technology)

Abstract
This study explores learners’ online peer review practices during a four-week second language writing project. The project was developed in a multi-section Spanish writing course at the college level. The study investigates how college Spanish learners give online feedback to their peers, whether there is any relationship between the feedback roles they assume and their final performance, and the additional factors that may influence online peer reviewing practices. A total of 76 students participated in the study, all of whom received training prior to writing three drafts and giving and receiving feedback comments during two online peer review sessions. Descriptive statistical measures were used to analyze the types of online feedback students used most frequently. The comparative effects of giving comments were analyzed along with those of receiving comments by means of multiple regression analyses, in order to examine how these elements relate to students’ final performance as writers. Results support the learning-by-reviewing hypothesis, which argues that giving feedback to peers helps feedback-givers write better essays themselves. A follow-up analysis also shows that learning by reviewing online is most evident when giving specific types of online feedback, which students of all proficiency levels can learn how to do.

1. Introduction

1.1. Technology and the Future of Education

Digital technologies started an era of social transformation marked by constant changes in conventions, discourses, textual forms, and communities (Lotherington and Ronda 2014). As the world becomes hyper-connected, technology has not only become normalized in our daily environment but has also become a fundamental component of how people interact and communicate (Gaines 2019). Social reliance on technology has unquestionably increased in times of pandemic, reshaping more than ever our patterns of interacting, working, and thinking about what is possible, and introducing new technological practices that may persist permanently in our society and occupations (Garfin 2020).
Learning is among the human activities most shaped by the digital revolution. Access to knowledge and information is possible anytime, anywhere (UNESCO 2019). Digitization, screenification, automation, virtualization, and other forms of digital innovation (Choudhury 2016) are becoming the new pillars of education. Students, in turn, not only take an active role in their learning but also gain greater independence as they complete their tasks anywhere, at their preferred time, and on any device. Institutions and other organizations, therefore, need to rethink how, when, and where people learn while continuously raising the question of what it means to live, learn, and communicate in a digitally transformed future (Besseyre des Horts 2019; Choudhury 2016).

1.2. Rethinking How, When, Where, and What Students Learn: The Value of Peer Review

With technological tools changing the nature of L2 learning, it is essential to rethink the educational environments that best fit new ways of learning (Williamson and Hague 2009). Approaches to learning have changed dramatically in terms of how, when, where, and what students learn (Besseyre des Horts 2019).
First, the HOW was affected in that teaching methods moved from teacher-centered models to student-centered models. That is, they moved from having the instructor deliver the content, to placing the student and the learner interaction at the center of the learning process. This shift granted students greater responsibility for their own learning (Elbow [1973] 1998). Additionally, in the field of L2 learning, much attention has been given to dialogic, sociocultural, and collaborative approaches to learning, all of which are grounded in negotiating; learning from one another; and exchanging knowledge, competencies, and expertise between students (Lantolf and Thorne 2006). Second, the rapid diffusion of technology influenced the WHEN and WHERE of learning by dissolving the boundaries of time and place. Learning has become an extended and multi-step experience, and in-class sessions have transitioned to asynchronous tasks and activities that learners can complete online, in several steps, and at their own pace (Dubskikh et al. 2019), thus giving students full control of when and where to complete the tasks. Third, the digital era is also changing WHAT students learn. The learning curricula go beyond covering specific content to focus more on life-long strategies that lead learners to think critically, manage knowledge, and acquire tools that will help them find the information needed at a given time (Besseyre des Horts 2019).
Reconceptualizing what it means to be educated in the digital world pushes instructors to reflect on the practices that best respond to new learning demands (Oskoz and Elola 2014). A practice that is increasingly gaining recognition is online peer review, the digital process of working together in dyads or small groups to critique and provide online feedback on one another’s performance using electronic communications and technologies (Cho and MacArthur 2011; Costello and Crane 2016). Although the technique of peer reviewing is not new, it has adopted many forms over the years: it can be face-to-face or online, use written and/or oral mediums, and vary in design (e.g., number of reviews, instructor involvement, use of numerical rating, language of feedback comments) (Illana-Mahiques 2019a).
With respect to the points raised earlier, the technique of peer reviewing responds well to the recent social shifts of digital communication. First, peer review has been associated with active learning (Mendonça and Johnson 1994), since students engage in analyzing, reflecting, commenting, and revising the peer’s work. In turn, comments from peers encourage students to become more reflective and aware of the rhetorical structure of their own writing, often leading them to revisit, analyze, self-correct, and improve their written performance (Ferris and Hedgcock [1998] 2014).
Second, online peer review is also considered to promote collaborative learning (Storch 2013) and to provide learners with opportunities for an active, contextualized, and meaningful type of communication (De Guerrero and Villamil 2000). It contributes to learners’ autonomy and fits with current digital models of collaboration which, according to Lotherington and Ronda (2014), involve “crowdsourcing, or putting minds together to create a massive problem-solving collective” (p. 18). These skills not only are applicable to the L2 language context, but they are also transferable to many aspects of life.
Third, online peer review practices erase the difference between experts and novices, home and workplace, and real and virtual activities (Lotherington and Ronda 2014). These features become even more accentuated when the identity of the students is kept anonymous or when writer and reviewer do not even meet in person. In these cases, learners may develop a wider sense of community as they gain greater access to multiple perspectives (Ertmer et al. 2007).

1.3. Challenges of Online Peer Review

Although online peer feedback makes important contributions to the L2 learning process, it is not without its challenges. A number of difficulties that have been identified in the research include attending to learners’ learning styles, decreasing students’ computer anxiety (Matsumura and Hann 2004), overcoming lack of communication, and avoiding one-way interactions that either leave students’ feedback unaddressed (Guardado and Shi 2007) or require that learners passively accept the comments without further follow-up (Díez-Bedmar and Pérez-Paredes 2012). Learners’ interaction in the online setting also presents additional challenges, including conveying messages clearly, being able to communicate complex ideas, and understanding peers’ explanations, especially when they are not detailed and specific (Tunison and Noonan 2001). Another drawback is the logistics of the process, for instance, accepting the lack of oral interaction (Ho and Savignon 2007) as well as the time delay from giving feedback to receiving peers’ comments (Ertmer et al. 2007; Guardado and Shi 2007).
Aiming at reducing some of the difficulties of online peer feedback and further maximizing its effectiveness, several recommendations have been posed in research. First, the logistical complications of giving and receiving feedback should be reduced (Ertmer et al. 2007); second, training should be provided to all L2 writers as a means of helping them become effective online responders (Tuzi 2004); third, students’ anonymity may be embraced as a chance for students to free themselves from social pressure, be less afraid of pointing out problems, and provide more honest and critical comments (Guardado and Shi 2007; Ho and Savignon 2007); fourth, in FL settings, where learners share a common mother tongue, students could be allowed to use their L1 whenever they encounter difficulties expressing their ideas in the L2 (Ho and Savignon 2007; Yang et al. 2006); and fifth, whenever possible, instructors should have the opportunity to participate in the peer review activities (DiGiovanni and Nagaswami 2001).
Given the effectiveness of these strategies in compensating for the challenges of online peer review, this study took each of these into consideration when designing the peer review activities for the research. Particular attention was given to the training process, which is discussed further in the following section.

1.4. Needs and Challenges of Peer Review Training

Although the positive effects of implementing peer feedback sessions in the classroom have long been accepted, researchers emphasize the need for training students, arguing that peer feedback is beneficial only when students receive guidance on how to provide feedback to each other (Berg 1999b; Levi Altstaedter 2016; Min 2006; Sánchez-Naranjo 2019; Zhu 1995). Students who receive training give richer and more varied feedback than their untrained counterparts, and their overall comments (global and form-focused) are more likely to be viewed as helpful by their peers, who in turn make more and better revisions in response to peer feedback.
Giving appropriate peer review training to students, however, is in itself a highly challenging task, in part because a number of dimensions about feedback-giving are still poorly understood. For instance, there is little agreement on what constitutes quality feedback (Nelson and Schunn 2008) or on how the contextual and inherently social nature of peer review influences students’ comments (e.g., task set-up, classroom context, online versus face-to-face modes, etc.). Similarly, topics such as learners’ agency and their self-reported beliefs as feedback-givers (Storch and Wigglesworth 2010), the stances learners assume as givers of feedback (Lockhart and Ng 1995; Mangelsdorf and Schlumberger 1992), the quality of the feedback they provide (Allen and Katayama 2016; Allen and Mills 2016; Hu and Lam 2010), the criteria for these comments to be constructive and effective (Gielen et al. 2010), and the reviewer’s ability to not only be critical, reflective, and analytical (Mangelsdorf 1992) but also provide comprehensible comments that are informative, specific, and explanatory (Min 2005), have received little attention in previous research.
Beyond the challenges associated with training, the changing and complex nature of peer review makes it difficult to determine the effects of this practice on learning, especially in relation to the act of giving feedback. While many peer review studies have analyzed the role of received feedback and the subsequent performance of those learning to write (Kamimura 2006; Min 2006; Paulus 1999; Yang et al. 2006), empirical research on the usefulness of giving feedback to the reviewers’ own writing is still very limited (Cao et al. 2019; Yu 2019), especially in relation to the online context (Cassidy and Bailey 2018). Even more limited are the discussions that quantitatively compare the effects of giving feedback and receiving feedback in relation to students’ L2 writing performance (Lundstrom and Baker 2009) and, to my knowledge, no previous L2 study has conducted this analysis in the online context.
Responding to the existing gaps in the literature, this study takes a quantitative approach to explore the influence of both giving and receiving online feedback on students’ learning as measured in their final writing performance (Final Version). In addition, and given the lack of discussion about feedback-giving in the online context, this study further explores the effects of giving feedback in helping reviewers improve their own writing. The next sections elaborate on the literature developed thus far in regard to the feedback-giving perspective, emphasizing the need for more research on the peer reviewing roles.

1.5. Peer Review and the Benefits of the Feedback-Giving Role

In the field of L2 learning research, there is little research analyzing the beneficial effects of reviewing on one’s own writing (Lundstrom and Baker 2009). On the one hand, it seems logical that students in the role of feedback-giver develop analytical skills, that these skills allow them to evaluate their peers’ writing appropriately, and that these analytical abilities ultimately are transferred to their own writing. On the other hand, however, more research is needed to confirm these premises and to illuminate the connections between the processes involved in feedback-giving.
Despite the limited empirical research on the benefit of peer review to the feedback-giver, research focusing on (1) the reading-writing relationship (Beach and Liebman-Kleine 1986), (2) the learner’s self-reviewing and self-assessing skills (Barkaoui 2007), and (3) the impact of audience awareness on students’ own writing (Chen and Brown 2012), often supports the principles of the learning-by-reviewing hypothesis. This hypothesis was proposed by Cho and MacArthur’s (2011) first language (L1) study to refer to how “students may improve their own writing skills by engaging in peer review of writing” (p. 74). Because the term is relatively recent, it is worth reviewing how it relates to each of the three research areas just mentioned.
Many of the benefits associated with peer reviewing are rooted in the reading–writing relationship. Reviewing practices lead students to analyze writing from the reader’s perspective, which in turn provides opportunities for learning by observation. The term observational learning was coined by Bandura (1971) to refer to the process of acquiring information, a skill, or a behavior by watching others and creating symbolic mental representations that later act as the basis to replicate the observed activities, strategies, or behaviors. Applied to online peer-review activities, readers not only observe the peer’s performance as the object of evaluation, but the information obtained is used as input for subsequent writing or revision activities (Couzijn and Rijlaarsdam 2005). An example in the L2 learning context is the study by De Guerrero and Villamil (2000). The researchers, who analyzed the oral interaction of pairs during peer-review sessions, concluded that as knowledge became explicit, both writer and reader consolidated and reorganized their L2 knowledge.
Peer review also promotes self-review or self-generated explanations, a concept often referred to as “internal feedback” (Butler and Winne 1995). Applied to peer reviewing, Mory (2003) describes internal feedback as an inherent process of self-monitoring that links students’ past performance to their next successive task, motivating writers to reinterpret their task and their own engagement. The beneficial effects of encouraging students to reflect on their learning processes, either through reflective prompts or by a combination of reflection and suggestive feedback, have been demonstrated (van den Boom et al. 2007). As students are given the opportunity to develop critical evaluation skills, they may learn to identify logical gaps, detect problems, and diagnose inconsistencies in the argument of the text (Ferris and Hedgcock [1998] 2014), these being essential skills in reviewing and assessing their own work. In the field of L2 learning, Hyland and Hyland (2006) argue that developing these abilities is one of the main goals of the overall peer review process. As the researchers affirm, “the ultimate aim of any form of feedback should be to move students to a more independent role where they can critically evaluate their own writing and intervene to change their own process and products where necessary” (p. 92).
Research on revision in combination with increasing writer’s knowledge of their audience has also yielded positive effects in improving writing quality (Chen and Brown 2012; Holliway and McCutchen 2004; Midgette et al. 2008). In the area of L2 learning, Tsui and Ng (2000) found that writing for peers helps writers become more conscious that their texts need to satisfy the expectations of a given audience. Thus, as students learn to take into consideration the characteristics and demands of their audience, they gain a greater sense of ownership of the text (Tsui and Ng 2000) and are further encouraged to write and read their own work from the perspective of a reader (Cho and MacArthur 2011). This, in turn, may lead learners to actively and critically analyze their own work.
Overall, the perspectives of peer reviewing as a technique that provides student writers with the reader’s role, self-reviewing skills, and audience knowledge arise as strong arguments in support of the learning-by-reviewing hypothesis. However, despite the expectation that engaging in feedback-giving may lead reviewers to strengthen their writing skills and produce an essay of higher quality, more research is needed in the L2 context that confirms this premise. As Lundstrom and Baker (2009) point out, “the benefits of peer review to the reviewer, or the student giving feedback, has [sic] not been thoroughly investigated in second-language writing research” (p. 30). This study sheds light on this issue and explores the beneficial impact of giving feedback to the student who takes the reviewer role.

1.6. Learning-by-Reviewing Hypothesis

In response to the need for more empirical research that analyzes the benefits of giving feedback, various researchers have explored the contrasting effects of the role of feedback-giving to the role of feedback-receiving. Research conducted on this topic suggests that reviewing peer work may have a positive influence on students’ learning and their ability to improve their written performance (Cao et al. 2019; Lundstrom and Baker 2009; Yu 2019).
Prior research in the field explored students’ beliefs about and perspectives on the value of giving and receiving peer feedback. Yu (2019) looked at the master’s thesis drafts of three second-year EFL students, the comments these students gave one another, and the comments they received from four first-year students. Individual interviews were conducted before and after each of the two peer review sessions, and a stimulated recall was also carried out to gather further data on the reasoning behind their comments. Using qualitative methods of data analysis, the author found that giving feedback was perceived to be beneficial to all seven students, specifically in relation to their awareness of the thesis genre, their writing skills, their ability to seek external assistance, and their self-reflective and self-critical skills.
Additionally, Cao et al. (2019) collected and analyzed data from an undergraduate ESL class that worked in groups of three to comment on each other’s summary writing tasks. Their discussions were video-recorded to prompt questions for the individual stimulated recall sessions and interview sessions. In the stimulated recall, students reflected on the specific feedback they gave. Then in the interview, they elaborated on their awareness, experiences, and perceptions about the two roles (giving and receiving) and how these enhanced their learning. While a few students expressed negative opinions about the benefits of peer reviewing, most participants perceived that their learning benefited from both giving and receiving peer feedback. Specific factors, however, such as within-group differences in writing ability, were found to mediate students’ perceptions on how the two roles promoted their learning.
Apart from exploring students’ beliefs and perceptions, researchers have also extended this line of inquiry to quantitative studies. Using a quasi-experimental design, Lundstrom and Baker (2009) proposed a between-group design that divided the participants of nine different intact classes into two groups: a control group and an experimental group. Participants in the control group were referred to as receivers. They only received feedback from peers and were asked to rewrite the given texts using the feedback provided on the margins. Participants in the experimental group were referred to as givers. They only offered feedback and were asked to give their own suggestions to the texts provided. Effects of the activities were measured through pre-test/post-test design, which consisted of a 30-min in-class essay at the beginning and at the end of the semester. Analyses of the writing samples demonstrated that giving feedback was more beneficial to improving learner’s performance than receiving feedback. As the researchers affirm, “L2 writing students can improve their own writing by transferring abilities they learn when reviewing peer text” (p. 38).
Despite the methodological strength of this study, it is worth noting that their quasi-experimental design involved providing the same texts to everyone, and learners were asked to either review or revise them. This means that not only were the roles of givers and receivers isolated, but that there was no opportunity for interaction between participants. The sense of community was missing and the text used for the peer review session was not written by any member of the class. While this controlled for differences in students’ writing, having a simulated environment and working with a text that is not their own could have affected the performance of both givers and receivers. Hence, generalization of the results to other authentic, online, non-experimental settings may not be appropriate (Cohen et al. [2007] 2011).
Studies with higher ecological validity have been conducted in L1 research that confirm Lundstrom and Baker’s (2009) findings. Cho and Cho (2011) looked at the laboratory reports of 72 native English-speakers in a university physics course. Sources of data included the students’ essay, the types of comments given and received, and the score assigned to each of the drafts. Using correlational analyses, the researchers explored the relationship between the quality of the final version and the type of given comments and received comments. Results confirmed that reviewer’s comments significantly influenced their own revisions, while the effects of received comments were limited.
Lu and Law (2012) also used similar correlational analyses to analyze the work of 181 participants in a humanities project in a secondary school in Hong Kong. Among the main strengths of the study is the use of detailed instrumentation to classify the scope of the comments in two dimensions: cognitive and affective. Cognitive comments were further categorized as (1) identify problem [sic]; (2) suggestion; (3) explanation; and (4) comment on language. Affective comments included (5) negative and (6) positive comments. The coding scheme integrated specific criteria previously associated with feedback quality (Min 2005) and, more importantly, it allowed the researchers to explore more specific relationships regarding what types of feedback may influence learners’ performance. The results of the study coincide with Cho and Cho’s (2011) findings in that giving feedback, particularly suggestion comments and feedback that identifies problems, was beneficial for improving reviewers’ own writing. Additionally, receiving positive affective feedback from a peer was also found to be beneficial for writers when revising their essays.
From the studies reviewed above, it is evident that the role of feedback givers arises as an essential part of peer review practices. Correlational studies are also crucial to better understand the nature of peer review. Data are obtained from realistic, naturally occurring contexts, and the effects of giving comments can be analyzed along those of receiving comments, thus respecting the reciprocity that is inherent to peer review activities (Cho and Cho 2011).
Studies in the field that use correlation analyses, however, were only found in the L1 field and primarily in face-to-face contexts. Given that writing, commenting, and revising in the L1 may not be comparable to developing these activities in the target language, more research is needed that focuses on the L2 writing context. Similarly, given the challenges of dealing with a foreign language (e.g., proficiency), with learners from different cultures (Carson and Nelson 1996; Leki 1990), and/or with the online format of the peer reviewing practices (Tuzi 2004), the benefits cannot be assumed to be the same in the L2 online writing context.
Responding to the limited research on the topic (Cao et al. 2019; Yu 2019) and to the scarcity of studies that take a quantitative approach (Lundstrom and Baker 2009), and departing from experimental designs that may alter the peer review dynamic of simultaneously giving and receiving feedback (Cho and Cho 2011; Lu and Law 2012), this study explores both roles—online feedback-giver and online feedback-receiver—as they develop in conjunction and as an inseparable unit in the online context. The potential benefits associated with each role may, in turn, be influenced by specific factors. For example, the variable of students’ L2 proficiency has received considerable attention in the field of L2 peer review (Allen and Katayama 2016; Allen and Mills 2016). However, its role in determining whether or how students may benefit from adopting the feedback-giving and feedback-receiving roles has not yet been explored in previous literature. Therefore, this study also responds to this gap and explores some of the potential factors that may influence students’ online feedback-giving practices.
Overall, this study fills the need for more L2 research that analyzes the effects of online peer review to the giving and receiving roles, and how specific factors such as L2 proficiency may influence students’ ability to successfully perform each role. The research questions that guided this study are the following:
  • What types of comments do reviewers give on their peers’ drafts?
  • To what extent does the total number of given and received comments have an influence on the writers’ performance?
  • How does giving specific types of feedback relate to the final performance of the writers?
  • To what extent do the reviewers’ initial L2 writing performance and the quality of the drafts that they review relate to the type of comments that reviewers provide?

2. Materials and Methods

2.1. Participants

A total of 89 students were recruited from the third-year multi-section Spanish course entitled Spanish Language Skills: Writing, offered at a midwestern university. Of the 89 students initially recruited, 76 took part in the study. All participants were L2 learners of Spanish pursuing either a major or a minor in Spanish; all of them attended class regularly and completed all online submissions on time, including the peer reviewing tasks.

2.2. Procedures: Data Collection

Data were collected from multiple sources, including a background questionnaire, a pre-writing activity, students’ written essay (which included an Outline, a Draft, and a Final Version), and online written peer feedback.
First, the background questionnaire collected demographic information (e.g., age, major, total number of years studying Spanish) and other information regarding students’ experiences writing and peer reviewing in Spanish. Then, the pre-writing activity was administered. The activity was completed during class time, and all course sections met in the Language Media Center of the university, which was equipped with about 60 computers. Students were given 25 min to complete the pre-writing activity: writing a narrative based on a story illustrated in six panels. Their responses were submitted via Canvas, the learning management system (LMS) used at the university. Since the writing task was timed and allowed for differentiation between the writing abilities of the learners, students’ responses to this activity were considered valid indicators of their initial writing skills. This assessment served the purposes of this study but should not be confused with a more standard measure of language proficiency, such as the ACTFL Writing Proficiency Test.
After finishing the background questionnaire and the pre-writing activity, participants took part in an online L2 Spanish writing project focused on a personal narrative essay. A total of four weeks were dedicated to three goals: (1) teaching students about the genre, (2) giving them appropriate training, and (3) having them write and submit their essay (Outline, Draft, and Final Version) as well as the peer feedback.
Regarding the teaching sessions (1), and to avoid pedagogical differences between sections, the researcher, with the support of the instructor of each section, taught and implemented the same teaching module in all six sections of the course. The curriculum during the four weeks of the study was the same across all sections. Thus, the content, teaching style, materials, homework assignments, and the specific examples of the genre used for each phase of the essay (Outline, Draft, and Final Version) were the same across all six sections. This ensured that all students received the same instruction on how to write a personal narrative. The arrangement also allowed students to attend a different class section if, for any reason, they had to be absent from a meeting of their own class. Otherwise, they were asked to schedule an appointment with the researcher to go over the contents covered in class.
Students also received the same peer review and technology training (2). Each of these training processes (i.e., in technology and in peer reviewing) was woven into the teaching curriculum and, therefore, took place throughout the four weeks of the study. Students received training in peer reviewing for both the Outline and the Draft assignments. The training started with a class discussion exercise and, through other critical thinking activities, progressed to providing students with specific tools and resources that would help them give richer, more specific, constructive, and varied feedback. Specifically, the main training tasks included reading, reflecting on, and discussing the principles and values of peer review; practicing with the peer review guidelines for each assignment; discussing the range of different types of comments; and analyzing, commenting on, and discussing peer reviewing techniques on samples of personal narratives.
The technology training allowed students to become familiar with the platform, identify technological issues, and ask for help if they experienced other problems with the platform. The students learned the required procedures to access the peer’s essay and to respond to the peer review guidelines. They also saw demonstrations of the variety of resources offered in the peer review platform and further explored and practiced with them during class time. After training, no additional instructions were added to the peer review activities since students were already familiar with the platform.
Regarding the students’ submissions (3), the writing assignment consisted of narrating a personal experience, which could be a travel adventure, a memorable event, an accident, a life-changing experience, etc. For the assignment, students were asked to write an engaging, interesting, and suspenseful story that would incorporate the key elements of a personal narrative (e.g., title, presentation, rising action, climax, falling action). The specific features of the genre were discussed in class, as part of the content curriculum.
The sources of data obtained from the essay included two drafts, the Outline and Draft assignments, both of which were peer reviewed and revised to result in the Final Version assignment. In other words, the three main steps of the drafting process included: (1) writing the Outline or the Draft assignment; (2) completing a peer review activity using PeerMark software; and (3) making the necessary revisions and adjustments before submitting a newer version of their work. As illustrated in Figure 1, the same three-step procedure was followed in the Outline phase and the Draft phase. Then, students’ Final Versions were sent to the instructor for a grade.
Data collection ended with students’ submission of their Final Version, which was used as an indicator of their final performance. The grades that the instructors gave their students, however, were not used in this study. Setting the instructors’ grades aside ensured consistency in the assessment of students’ work, especially because each of the six instructors used their own version of the rubric and some graded more strictly than others: instructors often prefer an achievement curve, where all students obtain relatively high grades, over a normally distributed curve, where only a few students obtain very high or very low grades. Normally distributed scores, however, are preferable for regression analyses.

2.3. Data Management: Online Platforms and Task Arrangements

All draft submissions (Outline, Draft, and Final Version) and all feedback comments written to the assigned peers were supported by the Canvas platform, the online Learning Management System (LMS) used at the university. Integrated in Canvas, the external tool PeerMark (also known as Turnitin Feedback Studio) was used as the main software for students to carry out the online peer review sessions. The PeerMark platform and its embedded tools offer students scaffolding to provide effective feedback on both local and global issues (Li and Li 2018). Design features of the Turnitin platform (e.g., commenting tools, composition marks, highlighting) allow students to highlight problematic segments and add explanatory comments. Students can also respond to the instructor’s questions, which guide students on the main elements they need to focus on when giving comments (Illana-Mahiques 2019a; Li and Li 2018).
For this project, the peer review guidelines were presented in a checklist format to ensure that students would not miss any criterion. Because PeerMark did not allow for a checklist and required, instead, a minimum of two choices (multiple-choice format), the numbers 1 and 2 were used strategically to imitate a checklist format (i.e., 1 = no comment needed for this criterion, 2 = comment added in response to the criterion). The main purpose of this list of criteria was to guide students’ focus towards the main elements of a narration. Student responses to this checklist, however, were not relevant and were not part of the data collection. Thus, for the peer review, only the feedback comments that students wrote were considered part of the data collection. Figure 2 is an example of what students saw when they opened a peer’s assignment in PeerMark. Figure 3 shows PeerMark’s tools palette, which offers students additional functions, such as including quick marks or changing colors to highlight different areas of the peer’s draft.
Another advantage of PeerMark is that it can be embedded into various LMSs (e.g., Canvas, Blackboard, Sakai) as long as the institution supports the tool. This facilitates students’ work in that they do not have to navigate to another platform. Similarly, it also assists instructors in planning and managing the peer review activity, while all the data remain recorded within the LMS (Li and Li 2018). In this case, the software allowed the entire project to take place online and within the Canvas platform. As a result, class time could be spent on issues related to training and task management.
Regarding the structure of the peer review assignment, online sessions were set up to be asynchronous and anonymous. Students completed the tasks at different times and places, and they used their preferred computer. Reviewers’ identities were kept confidential to encourage students to comment more and to provide more honest feedback. To further ensure anonymity, assignments were arranged so that no two students reviewed each other’s paper. Additionally, to enable students to fully participate in the peer review interaction, students were asked to comment in their L1, so their L2 proficiency would not prevent them from fully expressing their ideas (Ho and Savignon 2007; Yang et al. 2006).
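The study does not detail the mechanism used to generate the review pairs, but the constraint described above (every student reviews a peer, nobody reviews their own draft or their own reviewer’s draft) can be illustrated with a minimal, hypothetical sketch. A random shuffle followed by a circular shift guarantees, for groups of three or more, that no student is paired with themselves and that no two students review each other:

```python
import random

def assign_reviewers(student_ids):
    """Randomly assign each student one peer draft to review.

    The shuffle makes the pairing unpredictable (supporting anonymity);
    the shift by one position guarantees no self-review and, for groups
    of three or more, no reciprocal pairs.
    """
    ids = student_ids[:]
    random.shuffle(ids)
    n = len(ids)
    # Student ids[i] reviews the draft written by ids[(i + 1) % n].
    return {ids[i]: ids[(i + 1) % n] for i in range(n)}

pairs = assign_reviewers(["s01", "s02", "s03", "s04", "s05"])
for reviewer, writer in pairs.items():
    print(f"{reviewer} reviews {writer}'s draft")
```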

2.4. Measures of L2 Writing

Measures of writing quality were applied to three of the students’ submissions: (1) the pre-writing activity, (2) the Draft assignment, and (3) the Final Version assignment. For each respective data set, (1) writers’ initial writing skills were measured in terms of the assessment score given to their pre-writing activity (an illustrated story); (2) the quality of the peer draft was measured by the assessment score given to the peer’s Draft assignment; and (3) the writing quality of students’ final performance was defined as the assessment score assigned to the writer’s Final Version. All assessments were conducted after the language writing project was finalized. This helped to avoid any possible bias and to maintain similar pedagogical practices across all sections.
The three aforementioned sets of data were assessed in terms of their quality, and two different holistic rubrics were used for their evaluation. A slightly simpler holistic rubric was used to assess the 25-min pre-writing activity (see Appendix A), whereas both the Draft and the Final Version assignments were assessed with a more detailed rubric that targeted in greater depth the qualities of a narrative (see Appendix B). The criteria included in both rubrics responded to the genre of the personal narrative. The rubrics not only consider global specifications, such as organization, content, and idea development, but also include local criteria related to language usage (e.g., grammar, sentence structure, vocabulary, and mechanics).
Reliability checks were carried out by comparing the researcher’s ratings with those of another expert rater. After practicing with the rubric, the researcher and the expert rater rated 65% of the data together, discussing possible differences until agreement was reached. The rest of the data were coded by the researcher independently. Interrater reliability was further ensured by training a third experienced rater, who rated a 20% sample of both the pre-writing assignment (r = 0.84, p < 0.01) and the Final Version assignment (r = 0.87, p < 0.01).
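Because interrater reliability is reported here as a Pearson correlation between two raters’ scores, the computation itself is simple; the sketch below shows the general form with invented scores (the actual ratings are not reproduced here).

```python
from scipy.stats import pearsonr

# Invented rubric scores for illustration only: the researcher's and the
# third rater's holistic scores on the same 20% sample of submissions.
researcher = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]
third_rater = [4, 3, 4, 2, 5, 3, 5, 4, 2, 2]

r, p = pearsonr(researcher, third_rater)
print(f"interrater reliability: r = {r:.2f}, p = {p:.4f}")
```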

2.5. Comment Analysis

Comments from the peer reviewers were segmented into feedback points. A feedback point can be defined as “a self-contained message on a single issue of peer writing” (Cho and Cho 2011, p. 634). Each feedback point was then classified using a coding scheme similar to that of Min (2005) and Lu and Law (2012). Specifically, the researchers’ scheme (Lu and Law 2012; Min 2005) was adjusted to create a typology that could classify all feedback points collected in this study. To increase validity, changes made to the categories were discussed with two other researchers in the field of L2 writing.
From the comments that the 76 participants gave on the Outline and Draft assignments, a total of 2318 feedback points were categorized into one of two dimensions: the affective dimension (praise/empathy, explanation of the praise) or the cognitive dimension (problem identification, suggestion, alteration, justification, and elaboration).
The affective dimension refers to general affective perceptions of peer drafts. Positive statements were coded as praise and empathy (P/E) when they appeared as quick formulaic expressions to encourage the writer (e.g., “good job”). Longer statements that explained and summarized the positive aspects of the text were coded as explanation of the praise (EP) (e.g., “I like how you provide a detailed description of the main characters to highlight their personality”).
The cognitive dimension takes a purposeful perspective, informing writers about issues to address in their essay. A feedback point was coded as problem identification (PI) if the comment located or pinpointed a specific problem. A feedback point was coded as suggestion (S) if it provided the student writer with specific advice or a solution to a problem. Feedback comments that addressed language issues (such as word choice or grammar) were tagged as alteration (A). Statements that clarified the reviewer’s rationale for giving a comment were coded as justification (J). Finally, a comment was coded as elaboration (E) when it asked the writer to add more details about a specific issue or event. Intercoder reliability was established with another coder, with whom 20% of the total data were coded together (r = 0.94, p < 0.01). Following the same coding model, the researcher coded the rest of the data independently using the adjusted feedback taxonomy (Lu and Law 2012; Min 2005) summarized in Figure 4.
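For readers who work with similar data, the taxonomy can be represented compactly as a data structure. The sketch below encodes the seven categories summarized in Figure 4; the example feedback point and all field names are invented for illustration, not drawn from the study’s data.

```python
# The two dimensions and seven categories of the adjusted taxonomy
# (Lu and Law 2012; Min 2005), as used in this study.
FEEDBACK_TYPES = {
    "affective": {
        "P/E": "praise/empathy (short formulaic encouragement)",
        "EP": "explanation of the praise",
    },
    "cognitive": {
        "PI": "problem identification",
        "S": "suggestion (advice or a solution)",
        "A": "alteration (language issues, e.g., word choice, grammar)",
        "J": "justification (rationale for a comment)",
        "E": "elaboration (request for more detail)",
    },
}

# A coded feedback point: a self-contained message on a single issue.
# All values below are hypothetical.
feedback_point = {
    "reviewer": "s07",
    "writer": "s12",
    "assignment": "Draft",
    "text": "I would move the climax closer to the end to build suspense.",
    "code": "S",
}
```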
Segmenting all comments from the Outline and Draft assignments into feedback points and then classifying them into categories provided an important data set that was used in the descriptive and the statistical analyses. Thus, for all the analyses conducted in this study, data on student feedback comments refer to the feedback points collected in both the Outline and the Draft assignments.

2.6. Statistical Data Analysis

The data analysis proceeded in several stages. To address Research Question 1, the types of comments that students made (both in the Outline and the Draft assignments) were analyzed in terms of frequency counts through descriptive statistics. For Research Questions 2 and 3, the specific comment types predicting final performance were measured using multiple regression analyses. Specifically, two regression models were used that considered final performance as the dependent variable. Model 1 is more general in that it takes into consideration the total amount of feedback given (independent variable 1) and the total amount of feedback received (independent variable 2). Model 2 is more specific in that it considers the specific types of feedback given. The independent variables (x1, x2, …, xp) correspond to the raw frequency of each of the feedback types given. Finally, for Research Question 4, additional factors related to giving and receiving feedback (i.e., students’ initial writing skills and the quality of the peer draft) were taken into consideration in terms of their impact on the reviewing stage. The effects of these two factors in predicting students’ use of specific types of comments were analyzed through a correlation analysis.
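As a rough illustration of the two models, the sketch below fits them with statsmodels on a hypothetical per-student data set; the file name and column names are assumptions made for the example, not the study’s actual materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with the Final Version score,
# total comments given and received, and per-type counts of comments given.
df = pd.read_csv("peer_review_data.csv")

# Model 1: does the total amount of feedback given/received predict
# final performance?
model1 = smf.ols("Final_Score ~ GAVE_Total + REC_Total", data=df).fit()

# Model 2: do the raw frequencies of the specific feedback types given
# predict final performance? (PE was later dropped because of
# multicollinearity; see Section 3.3.)
model2 = smf.ols("Final_Score ~ PI + S + A + J + E + EP", data=df).fit()

print(model1.summary())
print(model2.summary())
```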

3. Results

Participants (N = 76) gave their peers a total of 2318 feedback points, all of which were classified by type. The types of online feedback given and received, together with other relevant measures from the 76 students, were included in the analysis. Statistical analyses were employed to answer the four research questions. The following sections summarize the findings in relation to each of the research questions guiding the study.

3.1. Summary of Descriptive Statistics: Types of Online Comments Used

Table 1 shows the raw frequency (f), the percentage (pp), and the mean number per student (M) for each of the comment types provided by reviewers.
The most frequent feedback type was suggestion comments (f = 458, pp = 19.76, M = 6.03) and the second most frequent type was praise and empathy comments (f = 412, pp = 17.77, M = 5.42). The least frequent type of feedback was explanation of praise comments (f = 264, pp = 11.39, M = 3.47) and the second least frequent was problem identification (f = 270, pp = 11.65, M = 3.55).
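The derived quantities in Table 1 follow directly from the raw counts: pp is each frequency as a percentage of all 2318 feedback points, and M is the mean number of comments per student (N = 76). The short sketch below reproduces the reported values from the frequencies.

```python
# Raw frequencies as reported in Table 1.
counts = {
    "S   (suggestion)": 458,
    "P/E (praise/empathy)": 412,
    "E   (elaboration)": 329,
    "J   (justification)": 307,
    "A   (alteration)": 278,
    "PI  (problem identification)": 270,
    "EP  (explanation of praise)": 264,
}
N_STUDENTS = 76
total = sum(counts.values())  # 2318 feedback points

for ftype, f in counts.items():
    print(f"{ftype:30} f = {f}  pp = {100 * f / total:5.2f}  M = {f / N_STUDENTS:.2f}")
```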
Overall, and probably due to the influence of peer review training, the different types of comments were generally well distributed. Between the highest and the lowest frequencies, participants gave other types of comments, namely alteration (f = 278, pp = 11.99, M = 3.66), justification (f = 307, pp = 13.24, M = 4.04), and elaboration (f = 329, pp = 14.19, M = 4.33). It should be noted that among these three types of comments, alteration (correcting language errors) was not near the top of the list of comments most frequently used. Students indeed focused on content and on offering support and encouragement to their fellow writers.

3.2. Multiple Regression: Predictive Effects of the Online Peer Review Roles

Research Question 2 analyzes whether giving extensive online feedback, receiving extensive online feedback, or both can predict the quality of learners’ Final Version. A multiple regression analysis was used to address this research question. The regression analysis consisted of one dependent variable (y), namely the final score, and two independent variables, the total amount of online feedback received (x1) and the total amount of online feedback given (x2). After checking that the assumptions of the regression model were not significantly violated, the regression coefficients were calculated.
The results displayed in Table 2 show that, of the two independent variables entered in the model, only the total amount of online feedback given (x2 = GAVE_Total) was a significant predictor of the participants’ final score (y = Final_Score), with p = 0.01 (p < 0.05, two-tailed). The other predictor, the total amount of online feedback received (x1 = REC_Total), did not prove to be significant (p = 0.37). Overall, and as shown in Table 3, the model accounted for 12.8% (R Square) of the variance in the dependent variable.

3.3. Multiple Regression: Predictive Effects of the Online Feedback-Giving by Comment Types

As a follow-up to the results from the previous research question, Research Question 3 analyzes whether giving specific types of online comments can predict the quality of learners’ Final Version. Through a second multiple regression analysis, the writing quality of the participants’ Final Version (y), as measured by the rubric on students’ Final Version (see Appendix B), was used as the dependent variable, and the distributed tally of all the possible types of online comments given (x1, x2, …, xp) were used as the independent variables.
Adjustments were made to address multicollinearity, so that no pair of variables would have a high correlation coefficient (>0.60). Specifically, the variable praise and empathy (PE) was eliminated from the regression model because it was highly correlated with explanation of the praise (EP) comments. This adjustment was found appropriate for two reasons: first, to avoid bringing redundant information into the regression model and, second, to obtain more accurate statistical results without compromising the overall taxonomy, which still captured the affective dimension. After the adjustments, the regression analysis included the students’ final performance as the only dependent variable (y) and a total of six independent variables (PI, S, A, J, E, EP) corresponding to the types of online feedback given.
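A minimal sketch of this screening step, assuming the same hypothetical data frame and column names as in the Section 2.6 example: compute the pairwise correlations among the predictors, flag any pair above the 0.60 threshold, and drop one member of each flagged pair before refitting the model.

```python
import pandas as pd

df = pd.read_csv("peer_review_data.csv")  # hypothetical file, as above
predictors = df[["PE", "EP", "PI", "S", "A", "J", "E"]]

# Flag predictor pairs whose absolute correlation exceeds 0.60.
corr = predictors.corr()
flagged = [(a, b, round(corr.loc[a, b], 2))
           for a in corr.columns for b in corr.columns
           if a < b and abs(corr.loc[a, b]) > 0.60]
print(flagged)  # e.g., [('EP', 'PE', 0.72)] -> drop PE and refit Model 2
```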
As shown in Table 4, among all the feedback given, three variables made significant contributions to the model: identifying problems (PI) in the peer’s text (p = 0.05); justifying (J) a specific comment provided to the peer (p = 0.036); and giving affective feedback that explains a specific praise comment (EP) (p = 0.003). It seems reasonable that identifying a problem and justifying the comment engaged students in activities of higher cognitive demand. The strongest predictor of final performance, however, is explanation of praise (EP). From the giver’s perspective, it is possible that because students focus on and engage with what the peer did well, they are able to return to their own work and apply some of the same strategies they praised in their peer’s work. Overall, as shown in Table 5, the model accounted for 26.2% (R Square) of the variance in the dependent variable.

3.4. Correlation Analyses between Specific Factors and Types of Comments Reviewers Provide

To reveal the factors that may lead students to provide a specific type of feedback, a correlation analysis was carried out. Table 6 is a summary of the correlation analysis, which shows two types of pair relationships. The first column (Initial_Score) shows the relationship between each type of feedback given and students’ initial writing skills, as measured by the rubric on the pre-writing activity (see Appendix A). The second column (Peer_Draft_Score) shows the relationship between each type of feedback given and the quality of the peer Draft, as measured by the rubric on the writer’s Draft submission (see Appendix B). The strength of each relationship is measured in terms of the correlation coefficients.
The results from Table 6 show only two significant correlations between the different types of comments and the two aforementioned factors of interest: the reviewers’ initial writing skills and the quality of the peer’s Draft. The strongest correlation is between giving comments that identify a problem (PI) and the initial writing skills of the reviewer. The second strongest correlation is between giving suggestion comments (S) and the initial writing skills of the reviewer. In other words, the higher the initial L2 writing skills of the reviewers, the more problem identification (PI) and suggestion (S) comments they gave. The rest of the feedback types did not correlate significantly with either of the factors. That is, the quality of the peer’s Draft did not significantly trigger any particular type of feedback from the peer reviewer.
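Analytically, each cell of Table 6 is a Pearson correlation between one factor and the per-student count of one feedback type. A minimal sketch, again assuming the hypothetical file and column names used in the earlier examples:

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("peer_review_data.csv")  # hypothetical file, as above

# Correlate each factor with the per-student count of each feedback type.
for factor in ["Initial_Score", "Peer_Draft_Score"]:
    for ftype in ["PI", "S", "A", "J", "E", "PE", "EP"]:
        r, p = pearsonr(df[factor], df[ftype])
        marker = " *" if p < 0.05 else ""
        print(f"{factor} ~ {ftype}: r = {r:.2f}{marker}")
```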

4. Discussion

The purpose of this study was to explore the types of feedback learners used most and to analyze how taking on the roles of online feedback giver and receiver impacts learners’ writing performance. Additionally, the influence of other factors, namely the reviewers’ initial writing skills (assessed from the pre-writing activity) and the quality of the peer drafts (assessed from the peer’s Draft assignment), was also considered to explore in more detail the role of the online feedback giver.
Results in response to Research Question 1 showed that the type of comment most commonly used was suggestion, followed by praise and empathy comments. Conversely, the type of comment used the least was explanation of the praise, and the second least used was problem identification. These results indicate that students offer many suggestions, but the specific problem is not always pointed out. Similarly, students give many formulaic praise and empathy comments (e.g., “good job!”), but an explanation of what in particular is good and why is not always added.
These results inform our training practices, which could teach reviewers about giving more complete, varied, and elaborate comments. For example, training could emphasize the value of pointing out a problem before offering a suggestion, or the value of following up with a praise comment in order to explain what was done well and why. Offering reviewers a clear structure on how to write their comments has been supported in previous research (Min 2005; Nilson 2003). Especially in L2 peer reviewing, authors such as Min (2005) affirm that a step-by-step procedure “may appear rigid and unnecessary in response groups” but “it is crucial for EFL paired peer review since there is usually little time for reviewer and writer to discuss the written comments” (p. 296). This becomes even more relevant in the online setting, where there is no face-to-face interaction between writers and reviewers.
It is also worth noting that, while the number of alteration comments is relatively high, there is no evidence of students overusing this type of surface comment, which targeted error correction. This may be a result of the peer review training sessions, which prioritized commenting on content, not only on grammar. In fact, it is possible that the greater focus placed on content may have led students to give more suggestion comments. It is also possible that students prioritized this type of comment because they found it easier to make, were already familiar with its structure, or because it was often practiced in the training sessions, which encouraged students to be constructive and to give suggestions on their peers’ work.
Ultimately, and in regard to the overall classification of the online feedback types, descriptive statistics showed that the comments were well distributed among the seven different feedback types. Participants used a variety of online feedback types in a balanced manner, and the relative frequency of each of the categories ranged between 11% and 20%. Thus, despite the peer review being conducted in the online context, none of the feedback types was overused or underrepresented. Instead, students were able to balance out their tendency to comment on surface issues (Berg 1999b; Hu 2005; Levi Altstaedter 2016) and use the array of feedback types available to them (Illana-Mahiques 2019b). These findings may indicate students’ ability to transfer and apply what they learn from the training to their own online context.
Results in response to Research Question 2 showed that giving online feedback was a significant predictor of the students’ final scores, whereas receiving online comments did not significantly predict final scores. These findings are consistent with previous studies in the field that argue for the benefits of giving feedback (Cho and Cho 2011; Lu and Law 2012; Lundstrom and Baker 2009). Specifically, the findings support the idea that developing analytical skills and learning how to review others’ writing may ultimately lead students to become better writers, more capable of assessing, revising, and improving their own drafts. As Meeks (2017) emphasizes in the Eli Review blog, “giving feedback teaches students something about writing they can’t learn from drafting.” Cho and MacArthur (2011) make a similar argument in their learning-by-reviewing hypothesis, which argues that peer reviewing sets students up to carefully assess, judge, and revise their own work, ultimately writing better essays themselves. In the field of L2 learning, Lundstrom and Baker (2009) arrived at the same conclusion, claiming that giving feedback is better than receiving it, at least in terms of improving L2 learners’ final written performance. The present study extends these findings to the teaching of a language other than English (Spanish) and to the online context as the main space for reviewing and commenting on a peer’s text.
By exploring in greater depth the role of the feedback-giver, results from Research Question 3 identified three types of online comments that were significant in predicting reviewers’ scores in their own Final Version: problem identification (PI), justification (J), and explanation of the praise (EP). While the bulk of research to support these findings comes from the L1 literature (except for Lundstrom and Baker 2009), similar explanations may be used in interpreting the findings in the L2 online context and in relation to the three feedback types that appeared as significant.
Giving problem identification comments requires higher cognitive abilities, such as comparing and contrasting, evaluating, and thinking critically about the peer’s text. The purpose is to identify the parts of the text that are problematic, difficult to understand, confusing, misplaced, or missing necessary elements. As students develop the ability to see such problems in the texts they review, it is possible that they can transfer their skills as reviewers to themselves as writers, so that they can critically self-evaluate their own writing and make appropriate revisions (Lundstrom and Baker 2009). This finding is consistent with previous research showing that students can learn from critiquing, identifying problems, or providing weakness comments to peers (Cho and Cho 2011; Cho and MacArthur 2011; Lu and Law 2012). As Cho and Cho (2011) conclude, “by commenting on the weakness of peer drafts, reviewers can develop knowledge of writing constraints that then helps the reviewers to monitor and regulate their own writing” (p. 639).
Justification comments, which explain in greater detail why something needs to be changed, included, revised, or reorganized, also appeared to be significant in predicting writers’ performance. This type of comment allows reviewers to build on their perspective as readers and to articulate further reasons and explanations for the problem identification or suggestion comments. It also requires higher-order thinking, as justification comments ask reviewers not only to envision the draft that would result from incorporating the suggestions and addressing the problems, but also to articulate this process and encourage writers to picture a better version of their drafts. As Meeks (2017) states in her blog, “giving feedback is […] cognitively demanding because it asks reviewers to talk to writers about specific ways to transform the draft that is into the draft that could be.”
Explanation of the praise appeared as the strongest predictor of final performance. It refers to positive, non-formulaic, and non-generic comments that justify to the writer how or why a specific part of the text is well written. While the literature supporting the usefulness of positive comments is inconclusive (Cho and Cho 2011; Lu and Law 2012), this study aligns with Cho and Cho’s (2011) findings. As the authors affirm, through giving positive comments, “student reviewers may gain knowledge of effective writing strategies” (p. 639). This applies to the L2 online context as well. Not only are students able to perform well on the online platform, but as they analyze the features of good writing and become familiar with the writing criteria, they also become more capable of applying some of this knowledge to review and revise their own writing.
These findings strengthen the arguments presented earlier in the discussion about helping students write more detailed and complete comments. Thus, beyond motivating students to use an ample variety of comments, training could give students clear guidance in adding problem identification comments to other kinds of comments (e.g., suggestion comments), or in following up feedback comments with justification or explanation of the praise comments. These more complete and elaborate comments may not only benefit writers but, in agreement with the learning-by-reviewing hypothesis (Cho and MacArthur 2011), may also prompt reviewers to use higher cognitive skills that ultimately lead them to write better essays themselves.
In addition to training, other factors may influence students’ feedback-giving abilities. This issue was analyzed to address Research Question 4. The significant correlations of both problem identification comments and suggestion comments with reviewers’ initial writing skills showed that students with higher initial writing skills (as measured by the rubric in Appendix A) are more inclined to identify problems and to add suggestions or solutions to their comments. It is possible that factors such as self-confidence, self-perceived proficiency, assertiveness, and/or knowledge about the topic of the peer’s essay lead students with higher initial writing skills to point out problems and offer solutions. It is also possible that detecting and explaining problems requires more knowledge about writing than giving other types of comments. These findings are consistent with those of previous studies. Cho and Cho (2011) found that “the higher the level of writing possessed by the reviewers, the more they commented on problems regarding the micro- and macro-meanings of the peer draft” (p. 637), as opposed to focusing on surface issues.
Regarding the quality of the peer draft, the non-significant correlations show that student reviewers are able to give a variety of feedback types regardless of the quality of the Draft assignment (as measured by the rubric in Appendix B). These results are relevant in L2 peer reviewing. It might be expected that, because higher-skilled writers are able to identify more problems and give more suggestion comments, they would do so in a differentiated manner. That is, lower quality essays would receive less praise and more critiques (e.g., problem identification, suggestions) and, conversely, higher quality essays would receive more praise and fewer critiques (Cho and Cho 2011). However, based on the non-significant correlations, it seems that reviewers’ criteria for giving problem identification (PI) and suggestion (S) comments do not depend on the quality of the peer draft. That is, problem identification (PI) and suggestion (S) comments were given to both higher quality and lower quality Draft submissions.
A plausible explanation for these findings is that reviewers’ initial writing skills influence reviewing activities in a different way than the quality of the peer draft does. Initial writing skills were influential in giving problem identification and suggestion comments, but not in giving positive comments. In contrast, the quality of the peer draft was not important in eliciting any particular type of comment, not even problem identification or suggestion comments.
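The correlations behind this comparison (reported in Table 6) are ordinary two-tailed Pearson coefficients between per-reviewer comment counts and the two rubric scores. The sketch below is again illustrative only, under the same assumptions as the earlier snippet: a hypothetical peer_review.csv whose column names follow the variable labels in Table 6.

```python
# Sketch of Table 6-style correlations, assuming a hypothetical CSV with
# per-type comment counts (GAVE_PI, GAVE_S, ...) and the two rubric scores
# (Initial_Score for the reviewer, Peer_Draft_Score for the reviewed draft).
import pandas as pd
from scipy.stats import pearsonr

peer_review = pd.read_csv("peer_review.csv")  # hypothetical data file, as above

comment_types = ["GAVE_PI", "GAVE_S", "GAVE_A", "GAVE_J",
                 "GAVE_E", "GAVE_EP", "GAVE_PE"]
for score in ["Initial_Score", "Peer_Draft_Score"]:
    for ctype in comment_types:
        # Pearson r and its two-tailed p-value for each pairing
        r, p = pearsonr(peer_review[ctype], peer_review[score])
        print(f"{ctype} vs {score}: r = {r:.3f}, p = {p:.3f} (2-tailed)")
```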
Another interesting finding emerges from the interaction of the findings from Research Questions 3 and 4. The findings for Research Question 3 demonstrated that giving online comments such as justification (J) and explanation of the praise (EP) can help student reviewers improve their own drafts. The findings for Research Question 4 further demonstrated that reviewers’ ability to give these types of comments and learn from them is not fully determined by their initial skills. This conclusion is supported by previous studies on the benefits of giving feedback (Lundstrom and Baker 2009; Meeks 2016) as well as by the peer review literature on the effectiveness of peer review training (Berg 1999a, 1999b; Hu 2005; Levi Altstaedter 2016; Min 2005). As suggested in these studies, factors such as peer review training (Min 2005), student anonymity and engagement in the reviewing activity (Lundstrom and Baker 2009), and the frequency and intensity of the review sessions and length of the comments given (Meeks 2016) may also help explain students’ ability to benefit from the feedback comments they give.

5. Conclusions

This study has demonstrated the complexities involved in online peer reviewing, emphasizing the relationship between the role of the feedback-giver and their final essay scores after appropriate peer review training. Rather than focusing on how online feedback affects writers’ revisions, the study supported the learning-by-reviewing hypothesis: the idea that critiquing and giving constructive online feedback to peers helps feedback-givers write better essays themselves.
The findings obtained in the study also invite us to rethink the notion of feedback and peer reviewing in the online context. First, good-quality feedback should be judged not only in terms of its inputs, features, or conventions, but also in terms of identifiable impacts on learning (Gielen et al. 2010; Nelson and Schunn 2008). As shown in the study, the impact of online feedback-giving extends beyond the student receiving the comments to benefit both givers and receivers of online feedback. Second, online peer reviewing should not be understood as a mere process of transmitting information from the reviewer to the writer through a technological platform. Far from being a linear technique, online peer reviewing should be approached from a more organic, multilateral perspective that positions both reviewers and writers as active learners engaged in generating feedback loops. That is, the outputs, in this case the online feedback comments given to peers, circle back and become inputs for reviewers to improve their own essays. In this way, the online feedback comments that reviewers give to writers further feed the reviewers’ own learning, ultimately making them better writers.
This study also shows that learners’ initial writing skills, as assessed from the pre-writing activity, do not determine the online feedback comments students are able to give. Together with the previous results, these findings have important implications because they suggest that students’ ability to learn from the feedback they give does not depend only on their writing skills. Instead, other practices, such as training, increasing practice, and encouraging students to give longer and more complete feedback, can lead reviewers to learn from the feedback they give to their peers (Meeks 2016).
Finally, it should be mentioned that while the findings of this study help to advance the field of feedback-giving in the online peer review context, much research is still needed in this area. Research focused on better understanding the L2 online peer review context is scarce, and quantitative research that simultaneously analyzes the roles of giving and receiving feedback in a real-life setting is almost non-existent. Therefore, more research is needed that confirms these results and expands our still limited knowledge about students’ roles when engaged in L2 online peer reviewing practices.

6. Directions for Future Research

This research study answered some questions related to feedback-giving practices, but it also raised several new questions that could be explored in future L2 peer review research. One of the main findings of this study is that students are able to learn from the feedback they give. However, the specific variables that enhance students’ learning remain an area yet to be explored. The study considered the total number of comments given and received, as well as the varied types of comments given. Future studies may consider exploring additional variables, such as the length and richness of the comments, the language used to convey the feedback (i.e., L1 or L2), the tone employed by the reviewers (e.g., hedging), and the strategies used to combine different types of comments, and whether any of these variables help predict students’ performance in their revised drafts.
A related area of interest is analyzing the effects of individual differences or other mediating variables on reviewers’ feedback-giving abilities. Factors such as initial writing skills and the quality of the peer draft were explored in this study. Future research may consider other factors, such as students’ proficiency level, GPA, or scores obtained in previous L2 courses, and whether these influence the amount and types of feedback that reviewers give to their peers.
Taking a broader perspective, this study explored peer review practices during a four-week period, mainly collecting data on the personal narrative essay. Future researchers may extend their data collection to an entire semester, so that peer review is explored in other writing modalities (e.g., description, exposition, argumentation) or in other genres (e.g., fiction, poetry).
It should also be pointed out that the results obtained in the correlational analyses refer to peer review practices in which students participate as both givers and receivers of feedback. No generalizations should be made to contexts in which students only give or only receive feedback, or in which peer review is done in small groups. Nevertheless, because peer review sessions can be implemented in many ways, it is important for instructors and researchers to be aware of the potential that each arrangement may offer. Thus, more research may be conducted to explore these different contexts.
Finally, while the results in this study apply to asynchronous online peer review sessions, it cannot be assumed that similar findings will be obtained in synchronous online sessions or in traditional face-to-face settings. Future research may explore in what ways different modes shape the nature of peer review, whether the specific mode selected influences the roles of giving and receiving feedback, and what techniques may compensate for potential difficulties in either of the modes. Additionally, while online feedback and traditional review are very different (Ho and Savignon 2007; Tuzi 2004), this does not mean that online feedback practices (synchronous and asynchronous) cannot be valuable complements to face-to-face peer review activities. As emphasized in previous research (Tuzi 2004), the various forms of feedback should not be understood as isolated or mutually exclusive practices. Instead, and given the different benefits and characteristics associated with each mode, the practices may be combined in many ways. For example, Chang (2012) predicts positive benefits from combining the three peer review modes of face-to-face, synchronous, and asynchronous online interaction. However, more research is needed in this area to better understand how different combinations may impact peer review sessions.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of The University of Iowa (ID# 201706743).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

I would like to thank Judith Liskin-Gasparro for her valuable insights on earlier drafts of this article. Special thanks to Adam D. DeNoble for his help and support on editing the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Holistic Rubric for the Pre-Writing Activity.
Language, Grammar, Sentence Structure, and Word Choice

BEST (4):
  • Few errors in tense, pronouns and/or agreement, most of which do not interfere with meaning.
  • Sentences vary in length and structure, and flow naturally.
  • Demonstrates creativity and flexibility in vocabulary choices, all of which enhance meaning.

GOOD (3):
  • Some errors in tense, pronouns and/or agreement, some of which may interfere with meaning.
  • Sentences generally vary in length and structure, and generally flow naturally.
  • Generally demonstrates varied vocabulary, most of which enhances meaning.

AT the LEVEL (2):
  • Many errors in tense, pronouns and/or agreement, many of which obscure the comprehensibility of content.
  • Uses mostly repetitive sentence structure, which may lack complexity and interfere with flow.
  • Vocabulary choices may be vague, basic, inappropriate, or non-specific, and may impede comprehensibility.

BELOW the LEVEL (1):
  • Abundant errors in tense, pronouns and/or agreement, many of which interfere with meaning such that the text is difficult to comprehend.
  • Uses basic structures that do not demonstrate sentence mastery and disrupt flow.
  • Vocabulary choices may be incorrect, simplistic, and/or overly influenced by English.

Content

BEST (4):
  • Strong and engaging introduction, event sequence, and conclusion.
  • Accurate description of the panels, including many details and emotions. Results in a very creative and inventive story.

GOOD (3):
  • Clear and concise introduction, event sequence, and conclusion.
  • Accurate descriptions of the panels, but limited or non-evident details and emotions in the story. Results in a straightforward story.

AT the LEVEL (2):
  • A story that is too plain or too convoluted, which obscures the clarity of the introduction, event sequence, and/or conclusion.
  • Little accuracy in the description of the panels.

BELOW the LEVEL (1):
  • Confusing or missing introduction, event sequence, and/or conclusion.
  • A storyline that is difficult to follow.

Appendix B

Holistic Rubric for the Draft and Final Version assignments.
Criterion: Organization/Plot
Guiding questions: Is there a strong, vivid context that makes you feel engaged and oriented? Does the event sequence unfold logically and naturally? Does the conclusion follow the events of the story and reflect on the overall experience, while contributing to the story?

BEST (4):
  • Introduction: Purposefully engaging introduction, with a strong opening hook that makes me want to keep reading.
  • Body: Event sequence is pleasant to read and to follow.
  • Conclusion: Ending is creative, inventive, and descriptive; follows the events of the story and adds to its meaning.

GOOD (3):
  • Introduction: Generally interesting, clear, and concise introduction. First sentences may not be particularly hooking.
  • Body: Event sequence is generally easy to follow.
  • Conclusion: Ending is generally creative, inventive, and descriptive; it generally follows the events of the story and adds to the meaning of the story.

APPROPRIATE (2):
  • Introduction: A few ideas and details are not fully relevant or interesting.
  • Body: Event sequence is somewhat easy to follow, but key ideas or details are missing or may distract the reader from the main plot.
  • Conclusion: Ending is fairly creative, inventive, and descriptive; mostly follows the events of the story and adds to the meaning of the story, but may lack depth and originality.

FAIR (1):
  • Introduction: Too much or too little context, and not very interesting.
  • Body: Event sequence takes some effort to follow. Some ideas or details are convoluted, not well connected, or out of place.
  • Conclusion: Ending is somewhat creative, inventive, and descriptive, but it may not follow the events of the story or continue its meaning, or it may be too simple or confusing.

Criterion: Narrative Techniques
Guiding questions: Does the writer build up to the conflict and the climax by creating suspense and dramatizing the events? Are characters well developed, including information about their feelings and reactions?

BEST (4):
  • The writer effectively uses creative descriptions of actions, thoughts, and feelings to develop experiences and events that add to the story.
  • Dialogue, emotions, and sensory details are coherent and purposefully integrated.
  • The reader easily engages with the emotions, feelings, and reactions of the main character.

GOOD (3):
  • The writer generally uses creative descriptions of actions, thoughts, and feelings to develop experiences and events that relate to the story.
  • Dialogue, emotions, and sensory details are appropriately integrated, but the writer could add more scenes, or remove others that do not contribute to the story.
  • Generally easy to empathize with the emotions, feelings, and reactions of the main character.

APPROPRIATE (2):
  • The writer uses some descriptions of actions, thoughts, and feelings to develop experiences and events. Could add a few more details or remove others that do not add to the story.
  • Dialogue, emotions, and sensory details are evident, but they are out of place, convoluted, or put together artificially.
  • It takes some effort to empathize with the emotions, feelings, and reactions of the main character.

FAIR (1):
  • The writer uses few or non-relevant descriptions of actions, thoughts, and feelings, which distracts from other key details.
  • Some dialogue, emotions, and sensory details are confusing and meaningless.
  • Climax lacks suspense and dramatization.
  • It takes much effort to empathize with the emotions, feelings, and reactions of the main character.

Criterion: Grammar + Sentence Structure + Word Choice + Coherence and Cohesion
Guiding questions: Grammar: verb tense, pronouns, and agreement. Sentence structure: structure, length, and flow. Vocabulary: variation, preciseness, and complexity. Mechanics: dialogue. Coherence and cohesion: balance and connectivity at the sentence and paragraph level.

BEST (4):
  • Few errors in tense, pronouns and/or agreement, most of which do not interfere with meaning.
  • Sentences vary in length and structure, and flow naturally.
  • Demonstrates variation, creativity, and flexibility in vocabulary choices, all of which enhance meaning.
  • Almost no mechanical errors in dialogue.
  • Coherent and well-balanced text, with rich logical transitions between sentences.

GOOD (3):
  • Some errors in tense, pronouns and/or agreement, some of which may interfere with meaning.
  • Sentences generally vary in length and structure, and generally flow naturally.
  • Generally uses precise vocabulary, most of which enhances meaning.
  • Few mechanical errors in dialogue, which do not distract from flow.
  • Generally coherent and well-balanced text, but some ideas need better transitioning.

APPROPRIATE (2):
  • Many errors in tense, pronouns and/or agreement, many of which obscure the comprehensibility of content.
  • Uses mostly repetitive sentence structure, which may lack complexity and interfere with flow.
  • Vocabulary choices may be basic or non-specific, and may impede comprehensibility.
  • Some errors in dialogue mechanics may disrupt flow.
  • Coherent text, but the different parts are not well balanced.

FAIR (1):
  • Abundant errors in tense, pronouns and/or agreement, many of which interfere with meaning such that the text is difficult to comprehend.
  • Uses basic structures that do not demonstrate sentence mastery and disrupt flow.
  • Vocabulary choices may be vague, simplistic, and/or incorrect.
  • Errors in dialogue mechanics often disrupt flow.
  • Little or no evident coherence between parts, which obscures comprehensibility.

References

  1. Allen, David, and Akiko Katayama. 2016. Relative second language proficiency and the giving and receiving of written peer feedback. System 56: 96–106. [Google Scholar] [CrossRef]
  2. Allen, David, and Amy Mills. 2016. The impact of second language proficiency in dyadic peer feedback. Language Teaching Research 20: 498–513. [Google Scholar] [CrossRef]
  3. Bandura, Albert. 1971. Social Learning Theory. New York: General Learning Press, Available online: http://www.asecib.ase.ro/mps/Bandura_SocialLearningTheory.pdf (accessed on 22 February 2021).
  4. Barkaoui, Khaled. 2007. Teaching writing to second language learners: Insights from theory and research. TESL Reporter 40: 35–48. [Google Scholar]
  5. Beach, Richard, and JoAnne Liebman-Kleine. 1986. The writing/reading relationship: Becoming one’s own best reader. In Convergences: Transactions in Reading and Writing; Edited by Bruce T. Petersen. Urbana: National Council of Teachers of English, pp. 64–81. Available online: https://eric.ed.gov/?id=ED265568 (accessed on 22 February 2021).
  6. Berg, E. Cathrine. 1999a. The effects of trained peer response on ESL students’ revision types and writing quality. Journal of Second Language Writing 8: 215–41. [Google Scholar] [CrossRef]
  7. Berg, E. Cathrine. 1999b. Preparing ESL students for peer response. TESOL Journal 8: 20–25. [Google Scholar] [CrossRef]
  8. Besseyre des Horts, Charles-Henri. 2019. Learning in the digital age. In Human Learning in the Digital Era. Edited by UNESCO and NETEXPLO. Paris: UNESCO, pp. 112–15. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000367761.locale=en (accessed on 22 February 2021).
  9. Butler, Deborah L., and Philip H. Winne. 1995. Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research 65: 245–81. [Google Scholar] [CrossRef]
  10. Cao, Zhenhao, Shulin Yu, and Jing Huang. 2019. A Qualitative Inquiry into Undergraduates’ Learning from Giving and Receiving Peer Feedback in L2 Writing: Insights from a Case Study. Studies in Educational Evaluation 63: 102–12. [Google Scholar] [CrossRef]
  11. Carson, Joan G., and Gayle L. Nelson. 1996. Chinese students’ perceptions of ESL peer response group interaction. Journal of Second Language Writing 5: 1–19. [Google Scholar] [CrossRef]
  12. Cassidy, Richard, and Daniel Bailey. 2018. L2 Students’ perceptions and practices of both giving and receiving online peer-feedback. Multimedia-Assisted Language Learning 32: 11–34. [Google Scholar]
  13. Chang, Ching-Fen. 2012. Peer review via three modes in an EFL writing course. Computers and Composition 29: 63–78. [Google Scholar] [CrossRef]
  14. Chen, Julian C., and Kimberly L. Brown. 2012. The effects of authentic audience on English as a second language (ESL) writers: A task-based, computer-mediated approach. Computer Assisted Language Learning 25: 435–54. [Google Scholar] [CrossRef]
  15. Cho, Kwangsu, and Charles MacArthur. 2011. Student revision with peer and expert reviewing. Learning and Instruction 20: 328–38. [Google Scholar] [CrossRef]
  16. Cho, Young H., and Kwangsu Cho. 2011. Peer reviewers learn from giving comments. Instructional Science 39: 629–43. [Google Scholar] [CrossRef]
  17. Choudhury, Amit R. 2016. June 2nd. Futurist: More Changes in Next 20 Years Than Last 300. The Business Times [Blog Post]. Available online: https://www.businesstimes.com.sg/government-economy/futurist-more-changes-in-next-20-years-than-last-300 (accessed on 31 August 2021).
  18. Cohen, Louis, Lawrence Manion, and Keith Morrison. 2011. Research Methods in Education. London and New York: Routledge. First published 2007. [Google Scholar]
  19. Costello, Jane, and Daph Crane. 2016. Effective feedback in online learning. In Handbook of Research on Active Learning and the Flipped Classroom Model in the Digital Age. Edited by Jared Keengwe and Grace Onchwari. Hershey: IGI Global, pp. 212–31. [Google Scholar]
  20. Couzijn, Michel, and Gert Rijlaarsdam. 2005. Learning to write instructive texts by reader observation and written feedback. In Effective Learning and Teaching of Writing, 2nd ed. Edited by Gert Rijlaarsdam, Huub Van den Bergh and Michel Couzijn. Dordrecht and New York: Springer, pp. 209–40. [Google Scholar] [CrossRef]
  21. De Guerrero, María, and Olga S. Villamil. 2000. Activating the ZPD: Mutual scaffolding in L2 peer revision. The Modern Language Journal 84: 51–68. [Google Scholar] [CrossRef]
  22. Díez-Bedmar, María Belén, and Pascual Pérez-Paredes. 2012. The types and effects of peer native speakers’ feedback on CMC. Language Learning & Technology 16: 62–90. Available online: https://eric.ed.gov/?id=EJ972345 (accessed on 31 August 2021).
  23. DiGiovanni, Elaine, and Girija Nagaswami. 2001. Online peer review: An alternative to face-to-face? ELT Journal 55: 263–72. [Google Scholar] [CrossRef]
  24. Dubskikh, Angelina, Yulia Savinova, and Anna Butova. 2019. Virtual educational environment as one of the perspective technologies of e-learning in foreign language teaching. In Proceedings of the 15th International Scientific Conference eLearning and Software for Education, Bucharest, Romania, 11–12 April 2019; Retrieved from ProQuest Conference Papers & Proceedings Database. Bucharest: Carol I National Defence University, vol. 3, pp. 27–32. [Google Scholar]
  25. Elbow, Peter. 1998. Writing without Teachers, 2nd ed. Oxford and New York: Oxford University Press. First published 1973. [Google Scholar]
  26. Ertmer, Peggy A., Jennifer C. Richardson, Brian Belland, Denise Camin, Patrick Connolly, Glen Coulthard, Kimfong Lei, and Christopher Mong. 2007. Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication 12: 412–33. [Google Scholar] [CrossRef] [Green Version]
  27. Ferris, Dana R., and John Hedgcock. 2014. Teaching L2 Composition: Purpose, Process, and Practice, 3rd ed. New York and Abingdon: Routledge. First published 1998. [Google Scholar]
  28. Gaines, Brian R. 2019. From facilitating interactivity to managing hyperconnectivity: 50 years of human–computer studies. International Journal of Human-Computer Studies 131: 4–22. [Google Scholar] [CrossRef]
  29. Garfin, Dana Rose. 2020. Technology as a coping tool during the coronavirus disease 2019 (COVID-19) pandemic: Implications and recommendations. Stress and Health 36: 555–59. [Google Scholar] [CrossRef] [PubMed]
  30. Gielen, Sarah, Elien Peeters, Filip Dochy, Patrick Onghena, and Katrien Struyven. 2010. Improving the effectiveness of peer feedback for learning. Learning and Instruction 20: 304–15. [Google Scholar] [CrossRef]
  31. Guardado, Martin, and Ling Shi. 2007. ESL students’ experiences of online peer feedback. Computers and Composition 24: 443–61. [Google Scholar] [CrossRef]
  32. Ho, Mei-ching, and Sandra J. Savignon. 2007. Face-to-face and computer-mediated peer review in EFL writing. CALICO Journal 24: 269–90. Available online: https://0-www-jstor-org.brum.beds.ac.uk/stable/24147912 (accessed on 31 August 2021). [CrossRef] [Green Version]
  33. Holliway, David R., and Deborah McCutchen. 2004. Audience perspective in young writers’ composing and revision: Reading as the reader. In Revision Cognitive and Instructional Processes. Edited by Linda Allal, Lucile Chanquoy and Pierre Largy. Boston: Kluwer Academic Publishers, pp. 87–101. [Google Scholar] [CrossRef]
  34. Hu, Guangwei, and Sandra T. E. Lam. 2010. Issues of cultural appropriateness and pedagogical efficacy: Exploring peer review in a second language writing class. Instructional Science 38: 371–94. [Google Scholar] [CrossRef]
  35. Hu, Guangwei. 2005. Using peer review with Chinese ESL student writers. Language Teaching Research 9: 321–42. [Google Scholar] [CrossRef]
  36. Hyland, Ken, and Fiona Hyland. 2006. Feedback on second language students’ writing. Language Teaching 39: 83–101. [Google Scholar] [CrossRef] [Green Version]
  37. Illana-Mahiques, Emilia. 2019a. Deconstructing Peer Review in the Spanish Writing Classroom: A Mixed Methods Study. Unpublished Doctoral dissertation, The University of Iowa, Iowa City, IA, USA. [Google Scholar]
  38. Illana-Mahiques, Emilia. 2019b. Peer reviewing in L2 Spanish classrooms: Action research. In Research Approaches to Second Language Acquisition. Proceedings of the 2018 Second Language Acquisition Graduate Symposium. Edited by Antonio A. Perez Belda, Hadley Galbraith, Kevin Josephs, Angela Pico Pinto, Evelyn Pulkowski, Kezia Walker-Cecil and Caolimeng Wuxiha. Minneapolis: Center for Advanced Research on Language Acquisition, The University of Minnesota, pp. 1–23. Available online: https://carla.umn.edu/resources/working-papers/documents/Proceedings-2018-SLAGradStudentSymposium.pdf#page=8 (accessed on 22 February 2021).
  39. Kamimura, Taeko. 2006. Effects of peer feedback on EFL student writers at different levels of English proficiency: A Japanese context. TESL Canada Journal 23: 12–39. [Google Scholar] [CrossRef] [Green Version]
  40. Lantolf, James P., and Steven L. Thorne. 2006. Sociocultural Theory and the Genesis of Second Language Development. Oxford and New York: Oxford University Press. [Google Scholar]
  41. Leki, Ilona. 1990. Potential problems with peer responding in ESL writing classes. CATESOL Journal 3: 5–19. Available online: http://www.catesoljournal.org/wp-content/uploads/2016/11/CJ3_leki.pdf (accessed on 23 February 2021).
  42. Levi Altstaedter, Laura. 2016. Investigating the impact of peer feedback in foreign language writing. Innovation in Language Learning and Teaching 12: 1–15. [Google Scholar] [CrossRef]
  43. Li, Jinrong, and Mimi Li. 2018. Turnitin and peer review in ESL academic writing classrooms. Language Learning & Technology 22: 27–41. [Google Scholar]
  44. Lockhart, Charles, and Peggy Ng. 1995. Analyzing talk in ESL peer response groups: Stances, functions, and content. Language Learning 45: 605–51. [Google Scholar] [CrossRef]
  45. Lotherington, Heather, and Natalia Ronda. 2014. 2B or not 2B? From pencil to multimodal programming: New frontiers in communicative competencies. In Digital Literacies in Foreign and Second Language. Edited by Janel Pettes Guikema and Lawrence Williams. San Marcos: CALICO Monograph Series, pp. 9–28. [Google Scholar]
  46. Lu, Jingyan, and Nancy Law. 2012. Online peer assessment: Effects of cognitive and affective feedback. Instructional Science 40: 257–75. [Google Scholar] [CrossRef] [Green Version]
  47. Lundstrom, Kristi, and Wendy Baker. 2009. To give is better than to receive: The benefits of peer review to the reviewer’s own writing. Journal of Second Language Writing 18: 30–43. [Google Scholar] [CrossRef]
  48. Mangelsdorf, Kate, and Ann Schlumberger. 1992. ESL student response stances in a peer-review task. Journal of Second Language Writing 1: 235–54. [Google Scholar] [CrossRef]
  49. Mangelsdorf, Kate. 1992. Peer reviews in the ESL composition classroom: What do the students think? ELT Journal 46: 274–84. [Google Scholar] [CrossRef]
  50. Matsumura, Shoichi, and George Hann. 2004. Computer anxiety and students’ preferred feedback methods in EFL writing. The Modern Language Journal 88: 403–15. [Google Scholar] [CrossRef]
  51. Meeks, Melissa. 2017. December 14th. Post-Semester Reflection: Who Leveled Up? [Blog Post]. Available online: https://elireview.com/2017/12/14/who-leveled-up/ (accessed on 31 August 2021).
  52. Meeks, Melissa. 2016. “Making a Horse Drink”. Eli Review. (blog). November 10. Available online: https://elireview.com/2016/11/10/making-a-horse-drink/ (accessed on 31 August 2021).
  53. Mendonça, Cassia O., and Karen E. Johnson. 1994. Peer review negotiations: Revision activities in ESL writing instruction. TESOL Quarterly 28: 745–69. [Google Scholar] [CrossRef]
  54. Midgette, Ekaterina, Priti Haria, and Charles MacArthur. 2008. The effects of content and audience awareness goals for revision on the persuasive essays of fifth- and eighth-grade students. Reading and Writing 21: 131–51. [Google Scholar] [CrossRef]
  55. Min, Hui-Tzu. 2005. Training students to become successful peer reviewers. System 33: 293–308. [Google Scholar] [CrossRef]
  56. Min, Hui-Tzu. 2006. The effects of trained peer review on EFL students’ revision types and writing quality. Journal of Second Language Writing 15: 118–41. [Google Scholar] [CrossRef]
  57. Mory, Edna H. 2003. Feedback research revisited. In Handbook of Research on Educational Communications and Technology. Edited by David H. Jonassen. New York: Macmillan, pp. 745–83. [Google Scholar]
  58. Nelson, Melissa M., and Christian D. Schunn. 2008. The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science 37: 375–401. [Google Scholar] [CrossRef]
  59. Nilson, Linda B. 2003. Improving student peer feedback. College Teaching 51: 34–38. [Google Scholar] [CrossRef]
  60. Oskoz, Ana, and Idoia Elola. 2014. Integrating digital stories in the writing class: Toward a 21st-century literacy. In Digital Literacies in Foreign and Second Language. Edited by Janel Pettes Guikema and Lawrence Williams. San Marcos: CALICO Monograph Series, pp. 179–200. [Google Scholar]
  61. Paulus, Trena M. 1999. The effect of peer and teacher feedback on student writing. Journal of Second Language Writing 8: 265–89. [Google Scholar] [CrossRef]
  62. Sánchez-Naranjo, Jeannette. 2019. Peer review and training: Pathways to quality and value in second language writing. Foreign Language Annals 52: 612–43. [Google Scholar] [CrossRef]
  63. Storch, Noemi, and Gilian Wigglesworth. 2010. Students’ engagement with feedback on writing: The role of learner agency. In Sociocognitive Perspectives on Language Use and Language Learning. Edited by Rob Batstone. Oxford and New York: Oxford University Press, pp. 166–85. [Google Scholar]
  64. Storch, Noemi. 2013. Collaborative Writing in L2 Classrooms. Bristol and New York: Multilingual Matters. [Google Scholar]
  65. Tsui, Amy B. M., and Maria Ng. 2000. Do secondary L2 writers benefit from peer comments? Journal of Second Language Writing 9: 147–70. [Google Scholar] [CrossRef]
  66. Tunison, Scott, and Brian Noonan. 2001. On-line learning: Secondary students’ first experience. Canadian Journal of Education/Revue Canadienne de l’Éducation 26: 495–511. [Google Scholar] [CrossRef]
  67. Tuzi, Frank. 2004. The impact of e-feedback on the revisions of L2 writers in an academic writing course. Computers and Composition 21: 217–35. [Google Scholar] [CrossRef]
  68. UNESCO. 2019. Human Learning in the Digital Era. Paris: UNESCO and NETEXPLO, Available online: https://unesdoc.unesco.org/ark:/48223/pf0000367761.locale=en (accessed on 22 February 2021).
  69. van den Boom, Gerard, Fred Paas, and Jeroen J. G. van Merriënboer. 2007. Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction 17: 532–48. [Google Scholar] [CrossRef]
  70. Williamson, Ben, and Cassie Hague. 2009. Digital Participation, Digital Literacy, and School Subjects: A Review of the Policies, Literature and Evidence. Bristol: Futurelab, Available online: https://www.nfer.ac.uk/media/1772/futl08.pdf (accessed on 22 February 2021).
  71. Yang, Miao, Richard Badger, and Zhen Yu. 2006. A comparative study of peer and teacher feedback in a Chinese EFL writing class. Journal of Second Language Writing 15: 179–200. [Google Scholar] [CrossRef]
  72. Yu, Shulin. 2019. Learning from giving peer feedback on postgraduate theses: Voices from master’s students in the Macau EFL context. Assessing Writing 40: 42–52. [Google Scholar] [CrossRef]
  73. Zhu, Wei. 1995. Effects of training for peer response on students’ comments and interaction. Written Communication 12: 492–528. [Google Scholar] [CrossRef]
Figure 1. Summary of the L2 writing essay assignment.
Figure 2. Screen shot of a peer review activity in PeerMark.
Figure 3. Tools palette on the PeerMark platform.
Figure 4. Adjusted feedback taxonomy used to classify students’ comments (Lu and Law 2012; Min 2005).
Table 1. Descriptive statistics for the different types of feedback given.

Feedback Type                      N     Raw Frequency (f)   Percentage (%)   Mean (M)   Std. Dev. (SD)
Problem Identification (PI)        76    270                 11.65            3.55       3.316
Suggestion (S)                     76    458 a               19.76            6.03       3.425
Alteration (A)                     76    278                 11.99            3.66       4.216
Justification (J)                  76    307                 13.24            4.04       3.435
Elaboration (E)                    76    329                 14.19            4.33       3.635
Explanation of the praise (EP)     76    264 b               11.39            3.47       2.951
Praise and empathy (PE)            76    412                 17.77            5.42       4.014
a Most frequent type of feedback used by student reviewers. b Least frequent type of feedback used by student reviewers.
Table 2. Multiple regression analysis predicting students’ performance on the Final Version based on the roles that students assume (giving vs. receiving).

Model 1 a (ROLE) b     B (Unstand.)   Std. Error   Beta (Stand.)   t       Sig.
(Constant)             3.111          0.636                        4.892   0.000
REC_Total              0.017          0.018        0.105           0.902   0.370
GAVE_Total             0.041          0.015        0.308           2.646   0.010 *
a Dependent Variable: Students’ score in their final narrative performance (Final_Score). b Predictors: Total amount of online feedback given (GAVE_Total) and received (REC_Total). * Predictor variable is statistically significant at the significance level of p < 0.05.
Table 3. Summarized regression analysis of the explanatory variables of final performance.

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       0.358   0.128 a    0.104               2.008
a Predictors: GAVE_Total, REC_Total.
Table 4. Multiple regression analysis predicting students’ performance in the Final Version of the writing assignment.

Model 2 a (GAVE) b     B (Unstand.)   Std. Error   Beta (Stand.)   t        Sig.
(Constant)             3.889          0.513                        7.584    0.000
GAVE_PI                0.199          0.100        0.312           1.993    0.050 *
GAVE_S                 −0.114         0.100        −0.180          −1.144   0.257
GAVE_A                 −0.069         0.073        −0.128          −0.939   0.351
GAVE_J                 0.182          0.085        0.295           2.133    0.036 *
GAVE_E                 −0.101         0.076        −0.154          −1.334   0.187
GAVE_EP                0.254          0.082        0.353           3.101    0.003 *
a Dependent Variable: Students’ score in their final narrative performance (Final_Score). b Predictors: Types of feedback comments given (GAVE_EP, GAVE_PI, GAVE_E, GAVE_J, GAVE_A, GAVE_S). * Predictor variable is statistically significant at the significance level of p < 0.05.
Table 5. Summarized regression analysis of the giving-related explanatory variables of final performance.

Model      R       R Square   Adjusted R Square   Std. Error of the Estimate
2 (GAVE)   0.512   0.262      0.198 a             1.899
a Predictors: GAVE_EP, GAVE_PI, GAVE_E, GAVE_J, GAVE_A, GAVE_S.
Table 6. Correlation coefficients of predicting variables of Model 2 (types of feedback given) with reviewers’ initial writing ability and quality of the peer Draft.

Variable    Initial_Score r (Sig. 2-tailed)   Peer_Draft_Score r (Sig. 2-tailed)
GAVE_PI     0.506 ** (0.000)                  −0.074 (0.523)
GAVE_S      0.292 * (0.011)                   −0.067 (0.562)
GAVE_A      0.207 (0.073)                     0.066 (0.574)
GAVE_J      0.193 (0.094)                     0.028 (0.810)
GAVE_E      −0.199 (0.086)                    0.084 (0.470)
GAVE_EP     0.141 (0.226)                     0.115 (0.323)
GAVE_PE     0.136 (0.240)                     0.220 (0.056)
* Correlation is significant at the 0.05 level (2-tailed). ** Correlation is significant at the 0.01 level (2-tailed).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
