Article

Pedagogical Assessment in Higher Education: The Importance of Training

Center for Research in Education and Psychology, University of Évora, 7000 Évora, Portugal
* Author to whom correspondence should be addressed.
Submission received: 19 October 2023 / Revised: 7 December 2023 / Accepted: 11 December 2023 / Published: 18 December 2023
(This article belongs to the Special Issue Assessment and Evaluation in Higher Education—Series 3)

Abstract

The diversity of students reaching higher education, the skills required of the 21st-century citizen, the Bologna Declaration, and the pressure exerted by international organizations impose a pedagogical reconfiguration of teaching, learning, and assessment through the recognition of the pedagogical dimension as a component of teacher professional development. We present the results of a study conducted at a university in Portugal with the following objectives: identifying conceptions and practices of pedagogical assessment and determining the influence of pedagogical training on these conceptions and practices. An online questionnaire (pre- and post-test) was administered to 31 teachers who had taken part in a training course on pedagogical assessment. It was found that: nearly half of the teachers experience difficulties in pedagogical assessment, with fairness being the main issue; the most commonly used instruments are written tests, research assignments, and reports; around two-thirds of teachers change the way they assess students, with the nature of the curricular units being the most influential factor in this decision; and there has been a change in the concept of assessment, in which the strict idea of testing, measuring, and classifying students’ knowledge has been replaced by the gathering of information for decision-making about the teaching and learning process.

1. Introduction

The diversity of students who arrive at higher education from different nationalities, cultures, socioeconomic levels, and training paths, but also the need to develop skills to face the challenges and take advantage of the opportunities of a world in constant change, impose a reconfiguration of teaching, learning, and pedagogical assessment. Such reconfiguration represents a break with the transmissive paradigm, centered on knowledge, toward “differentiating teaching, learning and assessment methodologies, depending on the targeted skills, personal projects and student motivation” (p. 6) [1].
The so-called “21st-century skills” encompass cognitive, socio-emotional, and practical skills that enable people to adapt, innovate, and stand out in diverse professional, personal, and social contexts. The European Union’s agenda for higher education even highlights that high-level skills are needed today and that “People’s capabilities to be entrepreneurial, manage complex information, think autonomously and creatively, use resources, including digital ones, intelligently, communicate effectively and be resilient are more crucial than ever” (p. 2) [2].
The change in pedagogical paradigm in higher education, announced more than two decades ago with the creation of the European Higher Education Area resulting from the Bologna Process, continues to be present on policy agendas for higher education. Pressure from entities such as the Organisation for Economic Co-operation and Development (OECD) and, in Portugal, the Agency for Assessment and Accreditation of Higher Education (A3ES) has increased, which emphasizes the need for pedagogical training for higher education teachers.
However, its effective implementation has not been achieved, coming up against a strongly rooted professional culture that privileges the research dimension over the pedagogical dimension [3,4,5,6]. This culture is reinforced by the idea that pedagogical knowledge acquired for other educational contexts and levels of teaching can be transferred to teaching practices in higher education [7].
Even so, there is a greater awareness in Europe among the academic community about the importance of teaching [8] and the relevance of pedagogical training, especially for those who do not have this training for other levels of education [7]. In this context, several higher education institutions have adopted pedagogical training as a component of professional development and have undertaken different initiatives aimed at this teaching training.
This is the case, for example, for countries such as Austria, Ireland, Norway, and the Netherlands, which have established a nationwide strategy for teaching and learning in higher education. However, in a study conducted by the European University Association in 2018, of the 28 countries analyzed, only 7 have regulated teacher pedagogical training, and, in another 4, it is carried out despite not being a national requirement, while in the remaining 17 it results mainly from measures taken by each university.
These pedagogical development programs are generally carried out by higher education institutions themselves through their training centers and education faculties [9]. The results of Trends 2018 [10] suggest that teaching and non-teaching staff exchanges, as well as collaboration with other universities and participation in projects and initiatives, are an important resource and a good catalyst for improving teaching in general and developing teaching staff in particular, indicating that national and/or international exchanges are a promising avenue for higher education institutions seeking to develop and improve their teaching.
In Portugal, there are no central regulations for academic staff training, but there are guidelines defined by the Portuguese Agency for Assessment and Accreditation of Higher Education [11], with an emphasis on training and pedagogical innovation. Teaching enhancement depends on individual higher education institutions’ initiatives, the training programs are not mandatory, and the universities organize specific training units, courses, or workshops, showing commitment to improving teaching and learning. There is a growing recognition of the need to value and institutionalize spaces dedicated to teacher training in higher education and spaces for meeting and sharing teacher perspectives and educational cultures, which can have the effect of transforming pedagogy from traditional teaching practices to learning processes that involve student action in the construction of knowledge [12].
In any case, with or without the use of mobility, most of the university institutions analyzed in Trends 2018 offer training courses in this area, whether mandatory or optional, and encourage professional development through other means, such as the use of portfolios, peer feedback, teaching in teams, or through research focused on teaching and learning [10]. However, this area remains a priority, as research has shown that the pedagogical training of higher education teachers is fragile, pedagogical–didactic knowledge is generally superficial and poorly supported, and higher education institutions reveal different levels of commitment to the training of their teachers [12].
The way teachers teach and assess influences the way students learn [13], because “in general, there is a significant relationship between teaching, assessment practices and student learning” (p. 102) [14]. Placing the student at the center of the teaching and learning process, with the teacher as a mediator and facilitator of learning, requires more than changing teaching strategies. It is also necessary to change assessment practices so that they assume their regulatory role in teaching and learning through the mobilization of different instruments and techniques; formative assessment; the use of constructive feedback, self-assessment, and peer assessment [15]; and the use of different technological tools.
Assessment practices that emphasize results (essentially summative in nature, focused on content, and aimed at certifying student learning at the end of the year, semester, or term [16]) still prevail, supported by “ontological, epistemological and methodological foundations” that sustain “an assessment intrinsically associated with the production of measurements and classifications” (p. 141) [17].
It is important to keep in mind that pedagogical assessment, whether formative or summative, must be part of the everyday classroom through its integration into the teaching and learning process and has to be easily understood by students and committed to the curriculum and pedagogy [17]. Formative assessment is focused on the teaching and learning process and tends to be continuous and systematic, while summative assessment is about the results of that process and has a punctual nature. In this sense, formative assessment presupposes a high commitment to the teaching and learning process through the active participation of students in processes of self-assessment, hetero-assessment, self-regulation, and self-control, mediated by feedback from the teacher.
For this feedback to be effective, it must be systematic and occur during the process, being at the service of improving learning and promoting in students the development of self-assessment and self-regulation skills in their learning, as well as the ability to effectively overcome difficulties [18,19].
On the other hand, as the spectrum of learning that students develop is generally very broad, it is essential to gather information from a varied set of strategies and diversify the moments in which assessment takes place. It is this exchange of information, constituting a process of triangulation, that provides an assessment closer to reality, as it allows students to reveal their knowledge, skills, abilities, and attitudes more completely [20].
Therefore, the diversification of information-gathering processes also responds to the variety of students’ cognitive styles. That is, if the ways in which students process information and solve problems are different, the tasks to be developed, the information to be gathered, and the feedback to be distributed must also be different [20].
Research reveals that “we now have a sufficiently solid empirical basis to affirm that higher education students can learn more and better, with more depth and understanding, if assessment and teaching practices are modified” (p. 118) [21]. This is a necessary change for students to be able to carry out more meaningful learning and assume a more active and autonomous role in self-regulating their learning process [15]. In this context, training is a fundamental way to promote the necessary changes in the teachers’ and students’ roles regarding teaching, assessment, and learning processes, enhancing the reconfiguration of their pedagogical practices.
Realizing that not all initiatives are effective in changing teacher practices, Darling-Hammond, Hyler, and Gardner [22] analyzed a set of studies with a positive impact on improving teachers’ practices and student learning. As a result of this analysis, they proposed a set of main features of effective professional development: content focus, incorporating active learning, supporting collaboration, using models of effective practice, providing coaching and expert support, offering feedback and reflection, and having sustained duration.
The training described in this paper was not focused on specific curricular contents, as the theme developed, “Pedagogical Assessment”, has a transversal nature. It was, however, based on active learning, as the participants were engaged in isomorphic strategies that they could implement with their students. The teachers also had the opportunity to discuss, share, and reflect on their assessment practices, contrasting them with each other, with theoretical texts selected for this purpose, and with examples of good practice.
The tasks requested were diverse, individual, and collective, and detailed feedback was offered, synchronously or asynchronously. The sessions were spaced out in order to provide adequate time to reflect on new strategies that facilitate changes in practice.
As research related to the impact of the implementation of such programs is limited, this study aimed to describe the concepts and practices of pedagogical assessment among higher education teachers and understand the influence of pedagogical training on these concepts and practices.
For this purpose, we developed the following research questions: (1) What do teachers think about pedagogical assessment? (2) What are the main difficulties experienced by teachers in pedagogical assessment? (3) What instruments and tasks are most commonly used by teachers in pedagogical assessment? (4) What are the main reasons for opting for different student assessment instruments and tasks?
In order to answer these research questions, the study gathered data through a questionnaire survey of higher education teachers and subsequent statistical and content analyses, as described in detail in the following section. Afterward, we present the findings according to the order established for the research questions, and, finally, we provide the study’s final considerations.

2. Materials and Methods

From a methodological point of view, this is an exploratory study, quantitative in nature, that used the descriptive survey technique through a questionnaire administered to teachers from a public university in Portugal who participated in a training course focused on pedagogical assessment. The questionnaire survey was chosen because it is widely used in research, owing to its structured nature and to the automation of the statistical processing of data when carried out with the support of specific software [23].
The questionnaire was administered at two moments: before the start of the course (pre-test) and after its completion (post-test).

2.1. The Training

The course, entitled “Grounding and Improving Pedagogical Assessment in Higher Education” and aimed at university teachers, began in 2021 and adopted a training model based on the idea of a process that develops in a reflective spiral of successive cycles of planning, action, and assessment of the result of the action [24]. Two trainers with experience in educational evaluation facilitated the course.
Three editions of the course were held, the first two lasting 14 h and the third lasting 16 h. In all editions, the training took place entirely in a virtual environment, with synchronous (on the Zoom platform) and asynchronous (on the Moodle platform) sessions lasting one hour each. Throughout the sessions, different types of tasks were proposed, group or individual (in synchronous and asynchronous sessions), for which immediate feedback (self-correction), written or oral, was provided by the researchers/trainers in the synchronous sessions.
During the sessions, data were gathered through observation and documental analysis in order to regulate teaching and learning. The observation in the synchronous sessions, as a data-gathering technique, had two purposes: (i) mutual observation of the trainers to regulate their action; and (ii) observation of the teachers/trainees, during which the trainers took on the role of participant observer, facilitating the tasks, providing feedback, leading the debates through questioning, and promoting the emergence and sharing of knowledge, conceptions, and practices in the search for solutions to solve problems and improve practices.
Documental analysis was another technique used to gather data from the tasks carried out by the trainees (e.g., text analyses, critical comments, and answers to self-assessment questionnaires).
The assessment dimension was always present, supported by reflection on the data gathered, leading to informed decision-making regarding planning and the distribution of constructive feedback to support learning.
The data gathered during the training were only used to regulate teaching and learning. Only the data from the questionnaire applied before and after the training were analyzed for this study.
For this data-gathering, participating teachers responded to a questionnaire, available online on the LimeSurvey platform, at the beginning and end of the course. By responding to the questionnaire, teachers gave their informed consent, agreeing to participate in the study, understanding that their participation was voluntary, and being aware that the data would be gathered anonymously, in accordance with the rules of the Data Protection Commission.

2.2. The Instrument

The questionnaire used in the study was developed by the authors and consists of three parts: Part I, Personal Data, with questions for the academic and socio-professional characterization of the respondents; Part II, Assessment Practices, with questions about assessment practices; and Part III, Conceptions in the Scope of Pedagogical Assessment, with questions about conceptions of pedagogical assessment.
In this article, we present the socio-professional characterization data; the data gathered from the questions in Part II (1. What does assess mean to you? 2. What does classify mean to you? 3. Do you feel any constraints/difficulties in the context of pedagogical assessment? 3.1. If you answered affirmatively, state the constraints/difficulties. 4. What instruments and tasks do you usually use in pedagogical assessment? 5. In general, do you always assess your students in the same way? 5.1. If you answered negatively, what factors justify these changes? 6. Indicate how often you perform the following activities: 6.1. I use different instruments and tasks to assess students); and the data gathered from question 7 of Part III (Please indicate your level of agreement with the following statements: 7.1. The main purpose of pedagogical assessment is the classification of learning; 7.2. The main purpose of pedagogical assessment is student learning; 7.3. The diversity of tasks and assessment instruments contributes to a fairer assessment).
The statements in question 6 have answer options on a four-point Likert-type frequency scale (1 = Never; 2 = Rarely; 3 = Often; and 4 = Always). The answer options for the statements in question 7 are arranged on a four-point Likert-type agreement scale (1 = Completely disagree; 2 = Disagree; 3 = Agree; and 4 = Completely agree). Likert-type scales are particularly useful for measuring attitudes, perceptions, and opinions [25] and, in this sense, have made it possible to gather more sensitive and responsive data on teachers’ conceptions of pedagogical assessment.
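As an illustration of the data coding these scales imply, the minimal sketch below maps the label options to their numeric scores; the function and variable names are ours, not part of the study’s materials (Python).

# Numeric coding of the two four-point Likert-type scales described above.
# Labels follow the questionnaire; all names are illustrative assumptions.
FREQUENCY_SCALE = {"Never": 1, "Rarely": 2, "Often": 3, "Always": 4}
AGREEMENT_SCALE = {"Completely disagree": 1, "Disagree": 2,
                   "Agree": 3, "Completely agree": 4}

def code_responses(raw, scale):
    """Map label answers exported from the survey platform to 1-4 scores."""
    return [scale[answer] for answer in raw]

# Hypothetical answers to statement 7.3
print(code_responses(["Agree", "Completely agree", "Agree"], AGREEMENT_SCALE))  # [3, 4, 3]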
Regarding the open-answer questions, by enabling teachers to describe their opinions on what assessment and grading are and what constraints and difficulties they experience in the field of pedagogical assessment, misconceptions and flaws in these concepts could be analyzed and it was also possible to understand in more detail what real difficulties these teachers face in the field of pedagogical assessment.
In order to guarantee the validity of the use of the questionnaire results, it was first submitted to a panel of experts in the field of assessment, who analyzed the representativeness, relevance, and quality of the items that compose the instrument. In terms of reliability, we analyzed the internal consistency using Cronbach’s alpha coefficient for parts II and III. The alpha values observed were 0.79 and 0.82, respectively, indicating a high degree of reliability of the questionnaire.
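As a reproducibility aid, the following minimal sketch computes Cronbach’s alpha as reported above, assuming a respondents-by-items matrix of 1–4 scores; the data shown are invented for illustration (Python).

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical matrix: 5 respondents x 4 items, scored 1-4
demo = np.array([[3, 4, 3, 4],
                 [2, 2, 3, 2],
                 [4, 4, 4, 3],
                 [1, 2, 2, 2],
                 [3, 3, 4, 4]])
print(round(cronbach_alpha(demo), 2))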

2.3. Participants’ Characterization

The sample, selected by convenience [26,27], consists of 31 university teachers from a public university in Portugal who participated in the pedagogical training course “Grounding and Improving Pedagogical Assessment in Higher Education”. It is worth highlighting that this is an exploratory study and that, due to its size and characteristics, the sample is not representative. In this sense, the results are not generalizable to the population of professors at the university where the study was carried out.
Among the 31 teachers who enrolled in the course, 81% were female and 19% were male. The majority were between 50 and 59 years old (45.2%), with an average of approximately 20 years of service (SD = 10.80). Regarding the Schools they belong to, 45% of teachers teach at the School of Science and Technology, 42% teach at the School of Social Sciences, and 13% teach at the School of Arts.
Regarding whether or not they had prior pedagogical training, only 10 teachers (32%) had completed some pedagogical training. Of these, 36% completed it in Initial Training, 46% in Continuing Training, and 18% in both.

2.4. Data Analysis Procedure

Quantitative data were analyzed with statistical procedures in SPSS v. 27. Central tendency and dispersion analyses were performed and, given the nature of the sample, the Mann–Whitney test was applied to compare the agreement levels for research questions 2, 3, and 4 between the groups of teachers who did and did not have pedagogical training before the course. The open-answer questions were analyzed using the content analysis technique [28], and categories were created with recording units that were quantified, allowing for the use of descriptive statistics procedures.
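The analyses described above were run in SPSS v. 27; for readers working with open tools, an equivalent Mann–Whitney comparison can be sketched as follows. The scores below are hypothetical stand-ins, not the study’s data (Python).

import numpy as np
from scipy import stats

# Hypothetical four-point agreement scores for the two groups
# (10 teachers with prior pedagogical training, 21 without, as in the sample).
with_training = np.array([2, 1, 2, 2, 3, 2, 1, 2, 2, 2])
without_training = np.array([3, 3, 4, 2, 3, 3, 2, 3, 3, 2, 3,
                             3, 2, 3, 4, 3, 3, 2, 3, 3, 3])

u_stat, p_value = stats.mannwhitneyu(with_training, without_training,
                                     alternative="two-sided")
print(f"U = {u_stat:.2f}, p = {p_value:.4f}")
print("Medians:", np.median(with_training), "vs", np.median(without_training))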

3. Results and Discussion

In order to satisfy the objectives of the study, we will describe the results logically and sequentially, according to the order of the research questions.

3.1. What Do Teachers Think about Pedagogical Assessment?

3.1.1. What Does Assess Mean to You?

The content analysis of this item originated four categories of analysis: “gather information for decision-making about the teaching and learning process”; “guide teaching and learning”; “determine the acquisition of knowledge/skills”; and “test/measure/grade/gauge the level of knowledge/skills”.
The results showed that, before the course, the largest share of teachers (42%) indicated that assessing is to “test/measure/grade/gauge the level of knowledge/skills”. After the course, we observed an inversion in the percentages of the categories, with the most frequent answer becoming to “gather information for decision-making about the teaching and learning process” (45%) (Figure 1).
Before the course, the majority of teachers expressed assessment conceptions in line with the first two historical phases: the measurement, or psychometric phase, and the description, or Tylerian phase [29]. In the measurement phase, characterized by effectiveness and testing in teaching, the main objective of the assessment was to measure students’ knowledge. In the description phase, the objective ceases to be the measurement of knowledge and becomes the comparison of the objectives that were previously defined with the results obtained by the students, still having the result as the main purpose [29].
However, after the course, we observed an apparent change in the conception of assessment, which came to be identified by 45% of teachers with “gather information for decision-making about the teaching and learning process” and by 23% with “guide teaching and learning”. Both conceptions are more consistent with the contemporary concept of assessment, which can be defined succinctly as the gathering of information to make good educational decisions and to promote effective teaching and learning [30,31,32].
It is necessary to highlight that, even after the training course, one-third of the teachers (32%) maintained the concept that assessing is to “determine the acquisition of knowledge/skills” (19%) and to “test/measure/grade/gauge the level of knowledge/skills” (13%). Although these purposes are part of the assessment process [30], they should not be the main purpose of the assessment. Assessing goes far beyond the simple act of assigning ratings since assessment is a process, not an end in itself. These are conceptions that need to be questioned and analyzed for a better understanding of the meaning of pedagogical assessment.
When analyzing the difference between the responses of teachers who have and do not have pedagogical training before the course, we observed that, for the group without training, the most frequent answer was to “determine the acquisition of knowledge/skills”, while for the group with training, it was to “test/measure/grade/gauge the level of knowledge/skills”. After the course, teachers from both groups responded more frequently that assessing is to “gather information for decision-making about the teaching and learning process” (Figure 2 and Figure 3).
Although, before the training course, teachers with pedagogical training were fewer than those without it, they were expected to hold conceptions more coherent with the contemporary concept of assessment, which is assumed as
[…] a process through which teachers and students gather, analyse, interpret, discuss, and use information relating to student learning (evidence of learning) with a view to a variety of purposes, such as: (a) identifying the most and least achieved aspects of students in what concerns their learning; (b) monitoring the progress of students’ learning towards performance levels that are considered desirable; (c) distributing quality feedback to support students in their learning efforts; (d) assigning grades; and (e) distributing feedback to parents and guardians.
(p. 6) [30]
This fact corroborates the idea, already presented by Stiggins [33] and reinforced by several contemporary authors [32,34,35], that teachers have difficulties with the procedures that involve pedagogical assessment and that one of the main reasons is the quality of the initial teacher training courses. According to Pastore and Andrade [34], these courses approach assessment superficially, and, in many cases, contact with concepts and practices of pedagogical assessment only happens in some psychology or methodology classes.
After the training course, an apparent conceptual change was observed in both groups of teachers, especially in the group of teachers without pedagogical training, indicating the probable influence of the training course on this conceptual change.

3.1.2. What Is the Purpose of Pedagogical Assessment?

Regarding the degree of agreement with statement 7.1., “The main purpose of pedagogical assessment is the classification of learning”, the results showed that, before the course, 55% of teachers agreed, 3% completely agreed, 32% disagreed, and 10% completely disagreed with the statement; that is, for the majority of teachers (58%), agreement outweighed disagreement.
However, when we analyze the difference between the average level of agreement of teachers with and without pedagogical training, we observe that the degree of agreement of teachers without training (Mdn = 3.00) is significantly higher (U = 159.00; p < 0.05; r = 0.53) than that of teachers with pedagogical training (Mdn = 2.00). Among teachers without pedagogical training, 72% agreed or totally agreed with the statement, while 70% of the teachers with pedagogical training disagreed or totally disagreed with it.
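For reference, the effect size r reported alongside the U statistic is conventionally derived from the normal approximation of the Mann–Whitney test; this is a standard formula rather than one spelled out in the article, and SPSS additionally corrects the variance term for ties, which matters with four-point scales:

\[ z = \frac{U - \dfrac{n_1 n_2}{2}}{\sqrt{\dfrac{n_1 n_2 (n_1 + n_2 + 1)}{12}}}, \qquad r = \frac{|z|}{\sqrt{n_1 + n_2}} \]

where n_1 and n_2 are the sizes of the two groups being compared.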
This fact shows that, overall, teachers without pedagogical training are the ones who agree with the statement, perhaps demonstrating the importance of pedagogical training courses on certain teachers’ conceptions of assessment.
After the course, no statistical difference was found (p > 0.05) between the mean degree of agreement of teachers without and with training, since 95% of teachers without training and 100% of teachers with pedagogical training have come to disagree or completely disagree with the statement that the main purpose of pedagogical assessment is to classify learning.
Considering that the conception of assessment contained in the statement runs counter to current assessment perspectives, which are more focused on learning than on scoring students [32,36,37,38], this change in conception, observed especially in the group of teachers without prior pedagogical training, demonstrates a positive impact of the training course in question [39]. According to Fialho et al. [39], since the 1990s, several studies have highlighted the need to prioritize assessment practices whose main objective is student learning and not just the classification of learning.
Regarding the degree of agreement with statement 7.2., “The main purpose of pedagogical assessment is student learning”, the results showed that, before the course, 86% of teachers without training and 90% of teachers with training, representing 87% of the total, agreed or completely agreed with the statement. After the course, this percentage rose to 100% for both groups. No statistical difference (p > 0.05) was found between the average responses of teachers with and without training before or after the course.
Unlike what was observed in the pre-test with the results of the item stating that the main purpose of pedagogical assessment is to classify learning, in which the majority of teachers, especially teachers without pedagogical training, agreed with the statement, the level of agreement for the statement “the main purpose of pedagogical assessment is student learning” was high in both the pre-test and post-test. In this case, the teachers’ conception is in line with the most current perspectives on pedagogical assessment, which consider that the aim of assessment is to promote effective teaching and learning [31].
Comparing the results of statements 7.1. and 7.2. before the course, some contradiction was noticed, especially among the responses of teachers without training. Since both statements address the main purpose of assessment (statement 7.1. identifies it as the classification of learning; statement 7.2., as student learning), opposite response patterns were expected: if the main purpose of assessing is the classification of learning, it cannot also be student learning, and vice versa.
However, we observed high agreement in the responses of teachers without training to both statements. In this sense, we can infer that, before the course, teachers, especially this group, did not have a clear concept regarding the fundamental purpose of pedagogical assessment.
On the other hand, such a contradiction was not observed after the training course, since practically all teachers disagreed that the main purpose of pedagogical assessment is the classification of learning, and all agreed that the main purpose of assessing is student learning. Once again, we note the importance of the training course in changing teachers’ conception of the goal of pedagogical assessment, which now reflects only the current perspective that the main purpose of pedagogical assessment is the gathering of information for decision-making aimed at improving student learning [32]. These decisions might include, for example, planning and conducting instruction, providing feedback to students, diagnosing learning difficulties, and also, but not as the main objective, classifying learning and academic progress [38].

3.1.3. What Does Classify Mean to You?

Regarding the question “What does classify mean to you?”, the majority of teachers answered, before (71%) and after the course (77%), that classifying is “attributing a qualitative/quantitative value to the student’s performance/task”.
The results show conformity in the concept of classification, which boils down to attributing a value to students’ performance or task. This concept is in accordance with that presented by Fernandes [30], in which classification is defined as a set of techniques and procedures that, through algorithms or other procedures, enable the calculation or determination of students’ grades or weightings.
When analyzing the difference between the responses of teachers with and without pedagogical training, it appears that, although both groups have the highest response frequency in the category “attributing a qualitative/quantitative value to the student’s performance/task”, the percentage of teachers without pedagogical training is higher than that of teachers with pedagogical training before and after the training course. Unlike teachers without pedagogical training, 20% of teachers with training indicated, before and after the course, that to classify is to assess.
This fact, also noted earlier in the analysis of the question “What does assess mean to you?”, reveals a very widespread conceptual mistake, in which assessment is confused with classifying and grading [37]. From this perspective, assessment is conceived as the attribution of a number that, within a certain scale, supposedly measures student learning.
Fernandes [37] notes that “this is one of the mistakes that has most contributed to the deviation of assessment from its main purpose: helping students and teachers to learn and teach better!” (p. 26). Therefore, we emphasize that it is essential to understand that, although classifying and grading are part of pedagogical assessment, they are concepts of a markedly different nature, purpose, and pedagogical insertion, even though they might have, theoretically, the common purpose of contributing to students learning better [30].

3.2. What Are the Main Difficulties Experienced by Teachers in the Context of Pedagogical Assessment?

Regarding adversities in the context of pedagogical assessment, around half of the teachers (48%) indicated that they felt some constraint and/or difficulty. Among them, the issue of fairness in the practice of assessment stands out.
When comparing the responses of teachers with and without pedagogical training, it is observed that around half of the teachers in both groups feel some constraint and/or difficulty: 50% and 48%, respectively. Ensuring fairness is the greatest difficulty identified in both groups, mentioned by 50% of the teachers without and by 60% of the teachers with pedagogical training among those who reported difficulties.
In general, the results support those published in the literature in this field of research, which confirms that many teachers are not effectively prepared to integrate assessment into their daily pedagogical practice [34]. Among the different adversities found in the literature, there are: the inefficiency of prior training [40]; the difficulties in translating effective assessment strategies into practice [41]; and the scarcity of studies focused on pedagogies and approaches adopted to develop teachers’ knowledge, skills, dispositions, and attitudes related to assessment literacy [34].
Specifically, regarding teachers in training, Kruse et al. [42] add that studies show they express little or superficial knowledge of assessment methods and that they are able to discuss the basic principles and methods of assessment but unable to apply them in the classroom. They also add that teachers reveal difficulties with the selection of quality assessment instruments, with the fairness and impartiality of assessments, with the interpretation of results, with the assessment of higher-level cognitive skills, and with the use of different assessment instruments and formative feedback [42].
For DeLuca and Lam [43], teachers in training and in-service often seem to lack the ability to “articulate significant connections between assessment intentions, theories, and practices” (p. 18). This reality contrasts with the fact that teachers are expected to meet the highest expectations regarding student learning, the choice and development of assessment strategies and instruments, and the administration, scoring, and communication of test results [44].
Regarding fairness in assessment, the difficulty most indicated by teachers in the present study, Zoeckler’s [45] research revealed that teachers have difficulties assessing students fairly. According to the author, the moral aspects of teachers’ assessment work, although generally left unexplained, play an important role in the assessment practices adopted; the main argument concerns the influence of teachers’ values and beliefs when assessing and giving feedback to students.
The Standards for Educational and Psychological Testing define a fair assessment as an assessment that is responsive to “individual characteristics and testing contexts so that test scores will yield valid interpretations for intended uses” (p. 50) [46]. In this sense, fairness is a fundamental attribute of the validity of any assessment [32].
It is worth noting that there is no single direct path to ensuring a fair assessment. However, some characteristics, such as transparency, equitable treatment, critical reflection, and the classroom environment, can contribute to a fairer assessment [46]. Herman and Cook [47] add that a teacher can guarantee a fair assessment when the results reflect the same construct and have the same meaning for all respondents, without favoring or disfavoring any of them due to characteristics irrelevant to what is being assessed.

3.3. What Instruments and Tasks Are Most Commonly Used by Teachers in Pedagogical Assessment?

The results of the analysis of the question “What instruments and tasks do you usually use in pedagogical assessment?” revealed that the instruments and tasks most commonly used by teachers in pedagogical assessment are written tests (77%), research work conducted in groups (74%) and individually (58%), and reports (45%). On the other hand, the least-used instruments and tasks were projects (3%), problem-solving (3%), reading and worksheets (3%), hetero-assessment (3%), group essays (3%), and portfolios (3%).
This substantial preference for written tests (frequency tests and exams) is probably supported by the idea that they are the best way to find out what students know and are capable of doing [4,48]. In fact, well-constructed tests can provide opportunities for students to demonstrate the knowledge they have acquired, create moments of learning and reflection on the work that has been developed, regulate the teaching and learning process, and guarantee the gathering of information about what students know and are able to do [37]. However, despite these advantages, tests have several limitations, as they tend to: assess a limited number of curricular objectives and skills; focus more on comparing results than on student progression; fragment knowledge; center the assessment on objectives that require less cognitive elaboration, such as memorization; and aggregate results in order to produce a global classification [37].
Given all these factors, the privileged use of tests is insufficient for assessing students’ skills. We also emphasize that no assessment instrument is self-sufficient and capable of assessing everything a student knows or is capable of doing. Furthermore, it is necessary to consider that assessment is not an exact science and, therefore, regardless of the information-gathering instrument used, there is always a high probability of making some type of error [20].
When we analyzed the results by groups with and without pedagogical training, we found, for both groups, a distribution very close to the general analysis. Exceptions are observed in instruments and tasks used only by teachers without pedagogical training (participation/performance in classes, projects, problem-solving, reading and worksheets, and hetero-assessment) and only by teachers with pedagogical training (reflection/review, case studies, group essays, and the portfolio).
This indicates that there are no differences between teachers with and without pedagogical training in their preference for greater use of written tests, research work, and reports (Figure 4).
In addition, regarding the frequency of use of different instruments and tasks in student assessment, 45% of teachers answered “Often” and another 45% answered “Always”, indicating the use of several instruments and tasks in the pedagogical assessment of students, with no statistical difference between teachers with and without pedagogical training (p > 0.05). Comparing this result with the answers to the question “What instruments and tasks do you usually use in pedagogical assessment?”, a certain coherence is observed, as a large number of instruments and tasks were mentioned.
However, given that most of the instruments and tasks mentioned are used by less than 30 percent of teachers, it appears that the majority of these are used by only one or two teachers. Considering that 100% of teachers agree (26%) or totally agree (74%) that “The diversity of tasks and assessment instruments contributes to a fairer assessment”, it can be assumed, in line with the results of previous studies [21], that effective diversification and its consequent contribution rest, in general, on four instruments: written tests, group research work, individual research work, and reports.
Diversification is important and necessary to multiply the information gathered about student learning [29,49], since only then is it possible to determine with some accuracy and comprehensiveness what students know and what they are able to do in different situations, contexts, and conditions [50,51]. For Fernandes [20], such diversification must be carried out through a wide spectrum of instruments, including, for example, reports, texts of a different nature, observations, problem resolutions, performances, and assorted products.

3.4. What Are the Main Reasons for Opting for Different Student Assessment Instruments and Tasks?

When asked, “In general, do you always assess your students in the same way?”, around two-thirds of teachers (61%) answered that they vary the way they assess students, with no statistical differences between teachers with or without pedagogical training (p > 0.05). For these teachers, the main factors that justify the option of diversifying assessment instruments and tasks are the nature of the curricular units (55%) and the type of content (42%), while the factors that least support this decision are educational policies (7%), the number of classes (10%), and the duration of classes (10%).
Different typologies of curricular units, such as theoretical teaching, theoretical–practical teaching, practical and laboratory teaching, fieldwork, seminars, internships, and tutorial guidance, seem to determine the teaching strategies and, therefore, tend to have an influence on the selection of assessment instruments and tasks. This fact is consistent with the results of Cid et al. [4], according to whom assessment tasks are directly linked to the nature of the curricular units and the type of lessons, with practical lessons using a greater diversity of tasks and instruments.
Comparing the answers of the groups of teachers with and without training, we observed that, for the first group, the nature of the curricular units and the number of students per class were the factors that most justified the diversification of the way of assessing. On the other hand, for the second group, the factors were the nature of the curricular units and the type of content (Figure 5).
The diversification of assessment instruments according to content might be linked to what is intended to be assessed, i.e., the assessment aims and, more specifically, the cognitive domains that are supposed to be assessed, such as remember, understand, apply, analyze, evaluate, and create [52]. Even though there are no specific and exclusive instruments for each of these cognitive domains, there are instruments that are more suited to assessing each of these domains.
As for the rationale for the number of students per class, this might be linked to issues related to the sort of task the students will be asked to do or perform, the time taken to complete it, the time taken to correct it, and the difficulty/ease of giving feedback to the students.
According to Depresbiteris and Tavares [29], the diversification of assessment instruments is supported by the need to analyze student learning from different angles and dimensions. Fernandes [20] adds the subjectivity associated with all assessment processes as a robust reason for putting the diversification of assessment instruments into practice, along with theories of learning and the psychology of learning, including the multiple intelligences proposed by Howard Gardner.
However, Depresbiteris and Tavares [29] warn that, although the diversification of instruments is important, it is not enough. The authors claim that it is necessary to prevent its adoption from having a random character, given that “the assessment has theoretical and practical components and has a methodical and pedagogical character that configure its actions as intentional, aimed at what one wants to achieve” (p. 16).
Finally, it is worth highlighting that, although the characteristics of the students were indicated by only one-third of the teachers as a reason for opting for different assessment instruments and tasks, Fernandes [20] states that diversification is necessary for inclusion, since the diversification of information-gathering processes must take into account the diversity of students. Therefore, the “tendency to use a given process rather than others reduces the sensitivity of assessments to such diversity” (p. 12).

4. Conclusions

In order to describe the conceptions and practices of pedagogical assessment among higher education teachers and understand the influence of pedagogical training on these attributes, a pre- and post-test questionnaire was applied to 31 university teachers who participated in the pedagogical training course “Grounding and Improving Pedagogical Assessment in Higher Education”.
Regarding teachers’ conceptions of pedagogical assessment, the results revealed an apparent change after the course, in which the majority of teachers no longer identified it as an activity to “test/measure/grade/gauge the level of knowledge/skills” and came to recognize it as a process to “gather information for decision-making about the teaching and learning process” and to “guide teaching and learning”. We highlight, however, that one-third of teachers maintained the idea that assessing is to “determine the acquisition of knowledge/skills” or to “test/measure/grade/gauge the level of knowledge/skills”.
Regarding the purpose of pedagogical assessment, we observed that, before the training course, there was some contradiction in the teachers’ conceptions, especially among those without pedagogical training, since most of them agreed simultaneously with the statements “the main purpose of pedagogical assessment is the classification of learning” and “the main purpose of pedagogical assessment is student learning”. After the course, however, we observed a considerable change in the teachers’ conceptions, since all of them came to disagree that the main purpose of pedagogical assessment is the classification of learning and to agree that, in fact, the main purpose is student learning.
As for teachers’ conception of what classifying means, we found that, before and after the course, the majority identified it as the attribution of a quantitative and/or qualitative value to students’ performance and/or task. It is worth mentioning that, even after the training course, some teachers still indicated that to classify is to assess, revealing a widespread misunderstanding that conflates the concepts of assessing and classifying.
Concerning the main difficulties experienced by teachers within the scope of pedagogical assessment, the results showed that nearly half of the participating teachers feel some constraint within the realm of pedagogical assessment. Among the main adversities, fairness in assessment was the most cited issue, corroborating the results of several studies that demonstrate that teachers experience difficulties in assessing students fairly.
Finally, regarding the instruments and tasks used by teachers in pedagogical assessment, we observed, as expected, that written tests are, along with research work and reports, the most commonly used instruments to assess students. Nevertheless, around two-thirds of teachers indicated that they diversify the way they assess students, mainly according to the nature of the curricular units and the type of content.
In summary, the study revealed an apparent change in conceptions regarding assessment and classification and the purpose of pedagogical assessment. It demonstrated that around half of the teachers expressed difficulty in assessing their students, especially regarding fairness in assessment, and that written tests are the most frequently used instruments to assess students. Lastly, the results supported the importance of the training course in pedagogical assessment in replacing erroneous and outdated concepts of assessment with concepts more congruent with current theories of pedagogical assessment.
One limitation of this study is that its results cannot be extrapolated to the population of university teachers, since a limited number of teachers participated in the training course and, consequently, in the study. It is therefore suggested that new editions of the course be carried out and that the results continue to be published in order to obtain more robust data on the conceptions and practices of university teachers in the scope of pedagogical assessment.
Nevertheless, the exploratory study presented in this article contributes to the recognition of the importance of pedagogical training in general and assessment training specifically. The results presented in this article provide an opportunity for further studies, for example, to see if the teachers’ practice corresponds to the results obtained in the questionnaire and, above all, to see if the apparent change in the conceptions of the teachers who took part in the training course has led to changes in their practice.

Author Contributions

Conceptualization, I.F., M.C. (Marília Cid) and M.C. (Marcelo Coppi); methodology, I.F., M.C. (Marília Cid) and M.C. (Marcelo Coppi); software, M.C. (Marcelo Coppi); validation, I.F., M.C. (Marília Cid) and M.C. (Marcelo Coppi); formal analysis, I.F., M.C. (Marília Cid) and M.C. (Marcelo Coppi); investigation, I.F., M.C. (Marília Cid) and M.C. (Marcelo Coppi); data curation, M.C. (Marcelo Coppi); writing—original draft preparation, I.F. and M.C. (Marília Cid); writing—review and editing, I.F. and M.C. (Marília Cid); supervision, I.F. and M.C. (Marília Cid). All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by National Funds through the FCT—Foundation for Science and Technology, I.P., within the scope of the Research Grant with reference UI/BD/151034/2021 and the project UIDB/04312/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Almeida, L.; Gonçalves, S.; do Ramos, Ó.J.; Rebola, F.; Soares, S.; Vieira, F. Inovação Pedagógica no Ensino Superior. Cenários e Caminhos de Transformação; Agência de Avaliação e Acreditação do Ensino Superior: Lisboa, Portugal, 2022. [Google Scholar]
  2. European Commission. Communication from the Commission to the European Parliament. The Council, the European Economic and Social Committee and the Committee of the Regions on a Renewed EU Agenda for Higher Education. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2017:247:FIN (accessed on 22 August 2023).
  3. Chaleta, E.; Quaresma, P.; Fialho, I.; Sebastião, L.; Leal, F.; Rato, L.; Cid, M.; Borralho, A.; Saraiva, M. Concepções Sobre o Que é Aprender Em Professores Universitários Das Áreas de Ciências Sociais e de Ciências e Tecnologia. In Ensinar, Avaliar e Aprender no Ensino Superior: Perspetivas Internacionais; Cid, M., Rajadell-Puiggròs, N., Santos Costa, G., Eds.; Centro de Investigação em Educação e Psicologia—Universidade de Évora: Évora, Portugal, 2020; pp. 37–63. [Google Scholar]
4. Cid, M.; Fialho, I.; Borralho, A.; Fernandes, D.; Rodrigues, P.; Melo, B. A Avaliação Nas Práticas Curriculares Em Quatro Universidades Portuguesas. In Avaliação, Ensino e Aprendizagem no Ensino Superior em Portugal e no Brasil: Realidades e Perspetivas; Fernandes, D., Borralho, A., Barreira, C., Monteiro, A., Catani, D., Cunha, E., Alves, M.P., Eds.; Educa—Instituto de Educação da Universidade de Lisboa: Lisboa, Portugal, 2014; pp. 615–648. [Google Scholar]
  5. Fialho, I.; Cid, M. Fundamentar e Melhorar a Avaliação Pedagógica No Ensino Superior. Um Processo Formativo Sustentado Na Investigação-Ação Em Contexto Digital. In Portas que o Digital Abriu na Investigação em Educação; Nobre, A., Mouraz, A., Duarte, M., Eds.; Universidade Aberta: Lisboa, Portugal, 2021; pp. 151–173. [Google Scholar] [CrossRef]
  6. Pereira, F.S.; Leite, C. O Processo de Bolonha Na Sua Relação Com a Agenda Da Qualidade—Uma Análise Focada No Perfil Dos Docentes Que Asseguram Os Cursos de Educação Básica. TMQ–Tech. Methodol. Qual. Número Espec.–Process. Bolonha 2020, 135–150. [Google Scholar]
  7. De Almeida, M.M. Formação Pedagógica e Desenvolvimento Profissional No Ensino Superior: Perspetivas de Docentes. Rev. Bras. Educ. 2020, 25, 1–22. [Google Scholar] [CrossRef]
  8. Sursock, A. Trends 2015: Learning and Teaching in European Universities; European University Association: Brussels, Belgium, 2015. [Google Scholar]
  9. Bunescu, L.; Gaebel, M. National Initiatives in Learning and Teaching in Europe—A Report from the European Forum for Enhanced Collaboration in Teaching (EFFECT) Project; European University Association: Brussels, Belgium, 2018. [Google Scholar]
  10. Gaebel, M.; Zhang, T. Trends 2018 Learning and Teaching in the European Higher Education Area; European University Association: Brussels, Belgium, 2018. [Google Scholar]
  11. A3ES. Manual de Avaliação Institucional Do Ensino Superior 2022; Agência de Avaliação e Acreditação do Ensino Superior: Lisboa, Portugal, 2022. [Google Scholar]
  12. Xavier, A.R.C.; Leite, C. Sentidos Pedagógicos Do Processo de Bolonha—Uma Análise a Partir Dedocumentos de Constituição Do Espaço Europeu de Ensino Superior. Currículo Front. 2023, 23, e1962. [Google Scholar] [CrossRef]
  13. Gibbs, G. Uso Estratégico de La Evaluación En El Aprendizaje. In Evaluar en la Universidad—Problemas y Nuevos Enfoques; Brown, S., Glasner, A., Eds.; Narcea: Madrid, Spain, 2003; pp. 61–75. [Google Scholar]
  14. Fernandes, D. Práticas de Ensino e de Avaliação de Docentes de Quatro Universidades Portuguesas. In Avaliação, Ensino e Aprendizagem no Ensino Superior em Portugal e no Brasil: Realidades e Perspetivas; Fernandes, D., Borralho, A., Barreira, C., Monteiro, A., Catani, D., Cunha, E., Alves, M.P., Eds.; Educa–Instituto de Educação da Universidade de Lisboa: Lisboa, Portugal, 2014; pp. 97–136. [Google Scholar]
  15. Fialho, I.; Chaleta, E.; Borralho, A. Práticas de Avaliação Formativa e Feedback, No Ensino Superior. In Ensinar, Avaliar e Aprender no Ensino Superior: Perspetivas Internacionais; Cid, M., Rajadell-Puiggròs, N., Santos Costa, G., Eds.; Centro de Investigação em Educação e Psicologia da Universidade de Évora: Évora, Portugal, 2020; pp. 65–92. [Google Scholar]
  16. Cid, M.; Fialho, I. Critérios de Avaliação. Da Fundamentação à Operacionalização. In TurmaMais e Sucesso Escolar—Contributos Teóricos e Práticos; Fialho, I., Salgueiro, H., Eds.; Centro de Investigação em Educação e Psicologia da Universidade de Évora: Évora, Portugal, 2011; pp. 109–124. [Google Scholar]
  17. Fernandes, D. Para Um Enquadramento Teórico Da Avaliação Formativa e Da Avaliação Sumativa Das Aprendizagens Escolares. In Avaliar para Aprender no Brasil e em Portugal: Perspectivas Teóricas, Práticas e de Desenvolvimento; Ortigão, M.I.R., Fernandes, D., Pereira, T.V., Santos, L., Eds.; Editora CRV: Curitiba, Brazil, 2019; pp. 219–239. [Google Scholar]
  18. Borralho, A.; Cid, M.; Fialho, I. Avaliação Das (Para as) Aprendizagens: Das Questões Teóricas Às Práticas de Sala de Aula. In Avaliar para Aprender no Brasil e em Portugal: Perspectivas Teóricas, Práticas e de Desenvolvimento; Ortigão, M.I., Fernandes, D., Pereira, T., Santos, L., Eds.; Editora CRV: Curitiba, Brazil, 2019; pp. 219–240. [Google Scholar]
  19. Brookhart, S.M. How to Give Effective Feedback to Your Students; ASCD: Alexandria, Egypt, 2008. [Google Scholar]
  20. Fernandes, D. Diversificação dos Processos de Recolha de Informação (Fundamentos); Universidade de Lisboa: Lisabon, Portugal, 2021. [Google Scholar]
  21. Fernandes, D. Práticas de Avaliação de Dois Professores Universitários: Pesquisa Utilizando Observações e Narrativas Das Atividades Das Aulas. Educ. Rev. 2015, 1, 109–135. [Google Scholar] [CrossRef]
  22. Darling-Hammond, L.; Hyler, M.E.; Gardner, M. Efective Teacher Professional Development; Learning Police Institute: Palo Alto, CA, USA, 2017. [Google Scholar]
  23. Santos, J.R.; Henriques, S. Inquérito por Questionário [em Linha]: Contributos de Conceção e Utilização em Contextos Educativos; Universidade Aberta: Lisboa, Portugal, 2021. [Google Scholar]
  24. Kemmis, S.; McTaggart, R. Como Planificar la Investigacion-Accion; Editorial Laertes: Barcelona, Spain, 1992. [Google Scholar]
  25. Cohen, L.; Manion, L.; Morrison, K. Research Methods in Education, 6th ed.; Routledge Publishers: Oxford, UK, 2007. [Google Scholar]
  26. Ghiglione, R.; Matalon, B. O Inquérito—Teoria e Prática; Celta Editora: Oeiras, Portugal, 1992. [Google Scholar]
  27. Hill, M.M.; Hill, A. Investigação Por Questionário; Edições Sílabo: Lisboa, Portugal, 2005. [Google Scholar]
  28. Bardin, L. Análise de Conteúdo; Edições 70: Lisboa, Portugal, 1977. [Google Scholar]
  29. Depresbiteris, L.; Tavares, M.R. Diversificar é Preciso...: Instrumentos e Técnicas de Avaliação de Aprendizagem; Senac São Paulo: São Paulo, Brazil, 2009. [Google Scholar]
  30. Fernandes, D. Avaliação Pedagógica, Classificação e Notas: Perspetivas Contemporâneas. Folha de Apoio à Formação-Projeto de Monitorização, Acompanhamento e Investigação em Avaliação Pedagógica (MAIA); Ministério da Educação/Direção-Geral da Educação: Lisboa, Portugal, 2021.
  31. Kane, M.T.; Wools, S. Perspectives on the Validity of Classroom Assessments. In Classroom Assessment and Educational Measurement; Brookhart, S.M., McMillan, J.H., Eds.; Routledge: New York, NY, USA, 2020; pp. 11–26. [Google Scholar]
  32. Popham, W.J. Classroom Assessment: What Teachers Need to Know, 8th ed.; Pearson: Los Angeles, CA, USA, 2017. [Google Scholar]
  33. Stiggins, R.J. The Unfulfilled Promise of Classroom Assessment. Educ. Meas. Issues Pract. 2001, 20, 5–15. [Google Scholar] [CrossRef]
  34. Pastore, S.; Andrade, H.L. Teacher Assessment Literacy: A Three-Dimensional Model. Teach. Teach. Educ. 2019, 84, 128–138. [Google Scholar] [CrossRef]
  35. Popham, W.J. Assessment Literacy Overlooked: A Teacher Educator’s Confession. Teach. Educ. 2011, 46, 265–273. [Google Scholar] [CrossRef]
  36. Coppi, M. Estratégias Para a Coleta de Evidências de Validade de Avaliações de Sala de Aula. Meta Avaliação 2022, 14, 826–849. [Google Scholar] [CrossRef]
  37. Fernandes, D. Para Uma Fundamentação e Melhoria Das Práticas de Avaliação Pedagógica; Ministério da Educação/Direção-Geral da Educação: Lisboa, Portugal, 2021.
  38. Russel, M.K.; Airasian, P.W. Avaliação Em Sala de Aula: Conceitos e Aplicações, 7th ed.; AMGH: Porto Alegre, Brazil, 2014. [Google Scholar]
  39. Fialho, I.; Cid, M.; Coppi, M. Grounding and Improving Assessment in Higher Education: A Way of Promoting Quality Education. Front. Educ. 2023, 8, 1143356. [Google Scholar] [CrossRef]
  40. DeLuca, C.; Chavez, T.; Bellara, A.; Cao, C. Pedagogies for Preservice Assessment Education: Supporting Teacher Candidates’ Assessment Literacy Development. Teach. Educ. 2013, 48, 128–142. [Google Scholar] [CrossRef]
  41. Volante, L.; Fazio, X. Exploring Teacher Candidates’ Assessment Literacy: Implications for Teacher Education Reform and Professional Development. Can. J. Educ. 2007, 30, 749–770. [Google Scholar] [CrossRef]
  42. Kruse, L.; Impellizeri, W.; Witherel, C.E.; Sondergeld, T.A. Evaluating the Impact of an Assessment Course on Preservice Teachers’ Classroom Assessment Literacy and Self-Efficacy. Mid-West. Educ. Res. 2020, 32, 107–132. [Google Scholar]
  43. DeLuca, C.; Lam, C.Y. Preparing Teachers for Assessment within Diverse Classrooms: An Analysis of Teacher Candidates’ Conceptualizations. Teach. Educ. Q. 2014, 41, 3–24. [Google Scholar]
  44. Williams, J.C. “Assessing without Levels”: Preliminary Research on Assessment Literacy in One Primary School. Educ. Stud. 2015, 41, 341–346. [Google Scholar] [CrossRef]
  45. Zoeckler, L.G. Moral Aspects of Grading: A Study of High School English Teachers’ Perceptions. Am. Second. Educ. 2007, 35, 83–102. [Google Scholar]
  46. AERA; APA; NCME. Standards for Educational and Psychological Testing; American Educational Research Association: Washington, DC, USA, 2014. [Google Scholar]
  47. Herman, J.; Cook, L. Fairness in Classroom Assessment. In Classroom Assessment and Educational Measurement; Routledge: New York, NY, USA, 2020; pp. 243–264. [Google Scholar]
  48. Barreira, C.; Bidarra, G.; Vaz-Rebelo, M.P.; Monteiro, F.; Alferes, V. Perceções de Docentes e Estudantes de Universidades Portuguesas Sobre Ensino, Aprendizagem e Avaliação. In Avaliação, Ensino e Aprendizagem no Ensino Superior em Portugal e no Brasil: Realidades e Perspetivas; Fernandes, D., Borralho, A., Barreira, C., Monteiro, A., Catani, D., Cunha, E., Alves, M.P., Eds.; Educa–Instituto de Educação da Universidade de Lisboa: Lisboa, Portugal, 2014; pp. 309–325. [Google Scholar]
  49. Sperrhake, R.; Piccoli, L. Instrumentos Para Avaliação Formativa Da Alfabetização: Princípios Conceituais e Metodológicos. Em Aberto 2020, 33, 47–67. [Google Scholar] [CrossRef]
  50. Fernandes, D. Avaliação Das Aprendizagens: Desafios Às Teorias, Práticas e Políticas; Texto Editores: Lisboa, Portugal, 2005. [Google Scholar]
  51. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 6th ed.; Pearson: London, UK, 2006. [Google Scholar]
  52. Anderson, L.W.; Krathwohl, D.R.; Airasian, P.W.; Cruikshank, K.A.; Mayer, R.E.; Pintrich, P.R.; RATHS, J.; Wittrock, M.C. A Taxonomy for Learning, Teaching and Assessing: A Revison of Bloom’s Taxonomy of Educational Objectives; Addison Wesley Longman: New York, NY, USA, 2001. [Google Scholar]
Figure 1. Distribution of the percentage of teachers according to the answers to the question “What does assess mean to you?”. Source: Prepared by the authors.
Figure 2. Distribution of the percentage of teachers without pedagogical training according to the answers to the question “What does assess mean to you?”, before and after the training course. Source: Prepared by the authors.
Figure 3. Distribution of the percentage of teachers with pedagogical training according to the answers to the question “What does assess mean to you?”, before and after the training course. Source: Prepared by the authors.
Figure 4. Distribution of the percentage of instruments and tasks most used by teachers with and without training in pedagogical assessment. Source: Prepared by the authors.
Figure 5. Distribution of the percentage of the main reasons for opting for different assessment instruments and tasks by group of teachers with and without pedagogical training. Source: Prepared by the authors.