Research article

Beliefs and understandings of assessment theories and terminologies by university lecturers

Authors
  • Maddalena Taras (Associate Professor, Faculty of Education and Society, University of Sunderland, Sunderland, UK)
  • Irma Molina (Director of Research, Faculty of Education, Sergio Arboleda University, Bogotá, Colombia)
  • Katherina Gallardo (Professor, School of Humanities and Education, Tecnológico de Monterrey, Monterrey, México)
  • Juan Carlos Morales-Piñero (Assistant Professor, Faculty of Economics, Universidad Militar Nueva Granada, Bogotá, Colombia)

Abstract

This study investigated the beliefs and practices of university lecturers around assessment theories and their links with teaching practices. Given the discrepancies in the literature, the research explored the gap in theoretical understandings of summative and formative assessment and the relationship between them, as well as the relationship with lecturers’ own practices. In total, 109 lecturers from a Colombian institution of higher education participated. An instrument consisting mainly of quantitative questions, with some qualitative questions, which was originally designed to investigate these beliefs and practices among higher education lecturers in England, was used. The translation, validation and reliability processes of the instrument were carried out. Thus, an exploratory factorial study was performed to establish the dimensions of the instrument, determining seven factors. The results indicate discrepancies between what lecturers understand by formative and summative assessment and their practices in the classroom. This coincides with the theoretical contradictions also found in the analysis of specialised literature. It is concluded that a detailed understanding of the discrepancies between beliefs and practices would be valuable in redesigning the ways in which the training of lecturers’ assessment competencies is approached.

Keywords: assessment literacies, lecturer beliefs, higher education, formative and summative assessment

How to Cite: Taras, M., Molina, I., Gallardo, K. and Morales-Piñero, J.C. (2023) ‘Beliefs and understandings of assessment theories and terminologies by university lecturers’. London Review of Education, 21 (1), 36. DOI: https://doi.org/10.14324/LRE.21.1.36.

Rights: 2023, Maddalena Taras, Irma Molina, Katherina Gallardo and Juan Carlos Morales-Piñero.

Published on 22 Nov 2023
Peer Reviewed

Introduction

For many years, assessment of learning has been one of the most important elements of pedagogical practice, and recently it has been recognised as an integral part of learning and teaching (UNESCO, 2015). As such, it transcends disciplines and subject areas, although processes are naturally adapted and adopted in specific contexts. While pedagogic and assessment theories may specifically be considered to be the specialist domain of educationalists, it is recognised that, in any context, personal beliefs, epistemologies and ontologies impact on processes and practices (Cope and Kalantzis, 2015; Dysthe, 2008). For example, if an individual believes that teachers are experts and the fount of all knowledge, then teacher-led pedagogies might logically follow. If, however, learner- and learning-led pedagogies are part of a teacher’s beliefs, then it is more likely that, given a choice, these pedagogies will be predominant in practice. Since assessment is increasingly seen as inseparable from learning and teaching, it is also logical that discourses and working theories (that is, how programmes, courses and modules are framed and presented) also depend on what filters down from more formalised presentations of theories of assessment. Terms such as summative and formative assessment are commonly found in the education field. It could be argued that what they mean to individuals and teams of educationalists is critical to influencing pedagogies and concepts such as inclusion, both social and educational, and ethical and shared practices in assessment (Hargreaves, 2005; MacLellan, 2001; Pedder and Opfer, 2011). Given this situation, although such theories are generally conceived as specific to specialist domains of education, an understanding of the key terms has become the basis of daily discourse.
How these terms relate to each other (that is, how they interrelate in theory) inevitably has important consequences for practices, and, importantly, for how and where students fit into these practices. Therefore, this article goes as far as to argue that a clearer and better understanding of terms such as summative and formative assessment, and of how they relate, goes some way to clarifying how teachers’ pedagogies are supported by assessment, and can therefore be of great benefit to students.

This research arose out of two deficits identified by two of the researchers – one in England and one in Mexico – connected by their shared interest in all things concerning assessment. First, repeated discussions revealed a shared belief that university lecturers across subjects and disciplines tended to reflect superficial and uncoordinated understandings, particularly of summative and formative assessments. Perhaps challenging lecturer beliefs would trigger a desire to know more about how their pedagogies may be impacted by those beliefs. A questionnaire developed in England, and used across disciplines, had revealed such a gap in clear understanding of summative and formative assessments and the relationship between them.

The second deficit was that no similar research had been undertaken in a Latin American context, and we were concerned that a questionnaire designed for lecturers in England might not be suitable for lecturers in South America because of divergent social and educational influences. This was despite our conviction that assessment discourses and practices were becoming more universally shared through shared access to research.

Therefore, our aim was twofold: (1) to trial the questionnaire to ascertain if the gaps in understanding identified in England were similarly to be found at a Colombian higher education provider and (2) to test the suitability of the instrument in the Latin American context. Both aspects are important groundwork for being able to introduce the instrument into different language and cultural contexts. This instrument collects both quantitative and qualitative data, although it is weighted towards the quantitative. One reason for this is that, generally, quantitative research is used to test relationships among variables, and possibly to predict these (Creswell and Guetterman, 2019; Wallwey and Kajfez, 2023). Another reason is that this nascent area of research has little precedent in the UK, and none in Latin America; quantitative research therefore has the advantage of providing a generalisable overview.

Researchers traditionally use ‘quantitative methods to measure intervention and/or implementation outcomes and qualitative methods to understand process’ (Palinkas et al., 2011: 48). In this way, quantitative research is more rigorous for confirmation, while qualitative research is more helpful in preliminary exploration.

In many countries, quantitative research is still prized above qualitative research, and this is likely to make the instrument more usable and valuable as a research tool, particularly in a new area of research. Thus, the greater focus and emphasis on the quantitative aspects of the instrument in this article are not necessarily a disadvantage for promoting its use outside the original limited and limiting context.

Similarly, the choice of respondents provides an example of a cross-disciplinary group in which contexts and experiences of assessment are likely to be diverse.

Contradictory and conflicting discourses and beliefs in assessment theories, particularly around formative and summative assessment, have been in circulation since the end of the twentieth century. One potential consequence of this is that, at the chalkface, lecturers work with incoherent beliefs, which also impact on empirical research.

For this reason, we carried out a study that explored beliefs and understandings of the theory of assessment by university lecturers. We used a questionnaire originally designed to investigate the beliefs and understandings of university lecturers in England (Taras, 2008). In addition, to strengthen our analysis of the results, we critically reviewed the literature, both anglophone and Hispanic, on the theory and practice of assessment. This analysis allowed us to establish a bridge between theoretical discourses and how they are transferred to practices in the classroom. The results could point the way for lecturers and administrators to improve pedagogy.

Changing beliefs about assessment

In recent years, concepts of assessment that support learning, rather than being merely instrumental for accreditation and validation, have increasingly become the norm. However, there is little research that focuses on these developments and on how this paradigm shift empowers students (Davies and Taras, 2018; Tan, 2004; Taras, 2008, 2023). The discourses and practices of student-centred assessment processes to achieve collaboration and empowerment are still of limited scope. These anomalies in conceptual thoughts and practices in assessment, as compared to developments in learning and teaching, have created unaligned, ambiguous and fragmented frameworks in teachers’ understandings of assessment theories and principles, which relate to similarly unaligned, ambiguous and fragmented frameworks for practice (Davies and Taras, 2016; Taras and Davies, 2017).

Researchers across the anglophone world have agreed on the paradigm shift, even if they have chosen different vocabularies to express this (Lau, 2015). This paradigm shift has reflected changes in learning and teaching across educational practices, where learner- and learning-centredness have become dominant discourses. Transfer to learner- and learning-centred discourses and practices in assessment has been much more restrained and constrained, with the most commonly accepted processes still teacher-led and, importantly, teacher-controlled.

Representations and discussions about summative and formative assessment are symptomatic of these problems. These anomalies perpetuate confusion in understandings of both theory and practice (Davies and Taras, 2016; Taras, 2009; Taras and Davies, 2017). As Lau (2015) points out, the paradigm shift was never completed, and many of the old principles remain alongside the new ones, creating ambiguity and contradictions. Lau (2015) relies on the work of Biggs (1998), Taras (2005) and Barnett (2007) to evaluate the main arguments on the need to integrate and use summative and formative assessment together to support assessment.

Changing beliefs about learning and assessment have gone hand in hand with changes in discourses, paradigms and practices. Regarding learning, if learners learn by doing, then learners have increasingly been involved in all aspects of pedagogic practices, with teacher support and guidance. Regarding assessment, this is less easily realised. Despite discourses of assessment for learning, which have found resonance in widely different educational settings, much of the basis for practices, including self-assessment practices, is teacher-led, teacher-controlled and teacher-centred (Black et al., 2003; Black and Wiliam, 2018). The main reason for this is that formative assessment is based on teachers telling students, with the formation being students demonstrating that they have followed teachers’ instructions. Further complications arise because the assessment for learning distinctions between formative and summative assessment are based on functions of assessment (that is, the uses to which assessment may be put). Since the result of any assessment can have many and multiple uses and functions, and, importantly, cannot be limited, stating that any assessment has a formative function does not make it so. Having a teacher-led assessment makes it an instruction and an order. This process is not conducive to learning- and learner-centredness. The other element of the theory surrounding assessment for learning is that formative and summative assessment have different functions and are therefore separate entities and processes (Black et al., 2003; Black and Wiliam, 2018), so that to have both formative and summative assessment functions, teachers must repeat assessments, which, interestingly, is something teachers refused to do in the study by Black et al. (2003).

Another assessment theory, which contrasts significantly with this, integrates Scriven (1967) and Sadler (1989), where assessment is based on processes (that is, on what is done), and formative and summative assessment are aspects of one process. Summative assessment is a summation of an assessment at any one time, and it produces information, either in the form of comments or in the form of letter and/or number grades. This information may then be used and become formative assessment, because the work has been reformed by this information. If teachers provide the information to students, and, importantly, if students use it, then it becomes formative assessment by students. If students provide the information on their own work, and then use it to update their work, then it is student self-assessment. Thus, formative assessment and student self-assessment are similar in that they both require use of information; they differ in who provides the information to whom. This process-based approach is also vindicated when considering and understanding concepts of feedback. Since the 2010s, developments in the assessment literature have focused on feedback, which is a key product of assessment, and which is considered to be of primary importance in supporting learning. However, failure to contextualise feedback within theories of assessment or learning, and particularly its relationship with summative and formative assessment, continues to cause lack of clarity in what teachers do. Providing processes which produce ethical and transparent feedback, and understanding how these processes lead to inclusive feedback and assessment, becomes difficult if the basic assessment literacies are not clarified and shared, as is demonstrated in the results of this study.

In their review, Nieminen et al. (2023) analysed how theory features in assessment and feedback research and found that in the anglophone literature only 21 empirical articles out of 56 used theory. Thus, they came to the same conclusions as Ashwin (2012) and Tight (2014) in their reviews. These reviews discuss in detail the negative impacts on practices of theoretical deficits, again confirming our own findings:

While 21 studies explicitly drew on theories, we identified 29 as utilising something like theories. First, multiple studies mentioned broader learning theories such as socio-cultural perspectives (Adalberon, 2020) or social constructivism (Carless, 2020; Kilgour et al., 2020), albeit treating such ideas lightly.

(Nieminen et al., 2023: 83–4; emphasis in the original)

As Nieminen et al. (2023: 84) point out, a learning theory is always used implicitly in a learning context:

Often, the studies were guided by concepts that lacked sufficient complexity to be theories. Examples of this were feedback literacy (Han and Xu, 2020; Molloy et al., 2020), academic self-concept (Simonsmeier et al., 2020) and heuristics and biases (McQuade et al., 2020).

Perhaps the most shocking aspect in the literature is that theories of assessment are rarely acknowledged as being important or even existing, as demonstrated in Nieminen et al. (2023).

Following the analysis described above, it was deemed appropriate to carry out a study that would allow the understanding of the beliefs of lecturers in a Latin American context, starting from the theory of summative and formative assessment, and relating them to their assessment practices. It is important to understand Latin American educational systems and educators, where lecturers have for the most part followed European and American assessment theory and literature. There have been no studies scrutinising assessment beliefs and practices in the Latin American context. There are also no proposals for a more contextualised understanding of what assessment should be, considering the environment and culture, and the specific needs in the development of these societies. Thus, this study investigates the assessment literacies of university lecturers in Colombia. As in previous research, this work postulates that the non-aligned theories that dominate research discourses could have an impact on the understanding and literacy of lecturers in summative and formative assessment, how they interrelate with each other and how they both relate to other key assessment terms.

Assessment trends in the Latin American context

It is important to indicate that these changes are situated within teaching practices for the development of assessment competencies, which are not always aligned with, or related to, the theoretical foundations that inform good practice (Davies and Taras, 2016, 2018; Gallardo, 2020; Taras and Davies, 2017). It is generally accepted that the development of competencies in the assessment of learning requires a clear, coherent and explicit understanding of theories, practices and empirical research and, more importantly, a greater knowledge of how the three intertwine, interact and support each other in the classroom. Thus, it is possible to affirm that teachers should be clear and explicit about their epistemologies and theories to provide a united and coordinated front to support student learning. Individual and personal characteristics and styles do not detract from the requirement of having basic shared principles and practices that are backed by theory and informed by empirical research.

To summarise, Taras (2005, 2023) and Lau (2015) have noted a series of anomalies and misalignments in the discourses of assessment theories. On the one hand, the work of Taras (2005, 2023) focuses on assessment discourses within the literature of compulsory education. On the other, Lau (2015) examines the phenomenon in higher education. Both authors reach similar conclusions about the creation of dichotomies in the pedagogic discourses, which has resulted in an ambiguous understanding of assessment, especially in the relationship between summative and formative assessment, and how assessment can be used efficiently to support learning.

In Latin America, the situation may not be so different, so far as detecting ambiguities around more specific concepts and practices in terms of assessment is concerned. Gallardo Cordova and Valenzuela González (2014) studied the practices of basic education lecturers in the state of Tabasco (Mexico) regarding assessment processes. One of the weaknesses detected was precisely the ambiguity between what lecturers wanted to assess and the assignments designed to collect this information. The ambiguity of understandings of principles and theory was revealed by a series of technical deficiencies when designing instruments for data collection. That is, the lack of prior knowledge presented itself as an obstacle to the task of preparing the system of assessment. Martínez Reyes (2015) carried out a study in Costa Rica with university lecturers, most of whom maintained that assessment is mainly focused on verifying the acquisition of knowledge. A very low percentage indicated that assessment also has the function of improving teaching practices. Hargreaves (2005) found comparable results with UK secondary school teachers. Likewise, Muñoz Olivero et al. (2016) carried out a qualitative study to explore whether assessment practices focus on student learning. The results confirm that assessment continues to focus on grades, and confirm the continuity of traditional practices by lecturers. In addition, the study discusses how lecturers might implement new assessment practices focused on learning.

Method

The methodological approach of this study is exploratory, based on a survey with a convenience sampling applied between May and June 2019. Mixed methods (Plano Clark and Ivankova, 2016) were used to collect and analyse quantitative and qualitative data. On the one hand, an exploratory factor analysis was used to identify the most relevant factors associated with the questionnaire on assessment beliefs, which will be discussed later, and, on the other, a qualitative analysis of the discourses was applied, supported by ATLAS.ti software.

The procedure for this research was as follows: the research team was interested in obtaining a deeper understanding of summative and formative assessment concepts among Latin American university lecturers, and it was decided to begin with the translation of the questionnaire from English to Spanish. Subsequently, an academic network was formed to promote the application of the questionnaire in various countries of Latin America to highlight beliefs and practices in assessment, starting with Colombia.

The first step was content validation by three experts (see below). This validation process focused on the clarity of the neutral questions, which explored understandings of formative and summative assessment, and student self-assessment and their relationship. It then scrutinised the quality of the Spanish. Subsequently, the questionnaire was programmed into Google Forms, organised into two sections, as in the original instrument. It was applied for the first time in an educational setting in Colombia, respecting ethical procedures for lecturers’ participation. Consent was obtained to use the information for scientific research purposes. The data were processed using the computer programs SPSS, Excel and ATLAS.ti.

Instrument

The questionnaire on assessment beliefs (Davies and Taras, 2016, 2018; Taras, 2008; Taras and Davies, 2017) was used to gather information on beliefs and assessment practices. The influences for the design of this questionnaire, developed between 2004 and 2006, come from five sources. The first is the work of Scriven (1967), who first distinguished between formative and summative assessment. The second is Ramaprasad (1983), whose definition of feedback is in general use and clarifies the gap between the current level of performance in comparison with the required level. The third article is by Sadler (1989), who uses Ramaprasad’s definition to develop a theory of formative assessment. The fourth source is the work of Black and Wiliam (2004) and Wiliam and Black (1996), highlighting the deficits and contradictions in assessment theories. The fifth is Taras (2005), who coordinates theoretical relationships between formative and summative assessment and self-assessment. The studies by MacLellan (2001) and Hargreaves (2005) further vindicated Taras’s study by demonstrating the existence of contradictions in the literature on assessment, both in compulsory education and in higher education.

These contradictions create conflict between the theoretical beliefs and the assessment practices of lecturers. This conflict drove the design of the questionnaire that was applied in this study. Taras’s theoretical work and the questionnaire on summative and formative assessment were two important milestones to clarify and strengthen fundamental aspects in this discipline.

The questionnaire went through a process of content and construct validation in which educational researchers from Colombia, Argentina and Mexico participated. (A researcher from Argentina participated only in the validation; researchers from Colombia and Mexico participated in the entire process.) The Spanish version of the instrument includes 43 items, in addition to a section to collect demographic and contextual data. The level of internal consistency was estimated using the KR-20 statistic, obtaining a coefficient of 0.97, which indicates the high reliability of the instrument. To evaluate the relevance of factor analysis, the Kaiser–Meyer–Olkin (KMO) sampling adequacy statistic was calculated to check whether the partial correlations between the variables were small enough.

In addition, the Bartlett sphericity test was applied to test the null hypothesis that the correlation matrix was an identity matrix. The KMO index obtained was 0.596, and the significance (p-value) of the Bartlett sphericity test was 0.000; together, these results support the use of factor analysis in the study.
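The authors computed these statistics in SPSS. As an illustrative sketch only (not the authors’ actual pipeline), the same three checks can be reproduced for a binary item matrix using numpy and scipy; all function names below are our own:

```python
import numpy as np
from scipy.stats import chi2


def kr20(X):
    """Kuder-Richardson 20 reliability for a binary (0/1) item matrix X (n x k)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                     # proportion answering 'yes' per item
    q = 1.0 - p
    total_var = X.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)


def kmo(X):
    """Kaiser-Meyer-Olkin sampling adequacy: compares correlations with partials."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    Rinv = np.linalg.inv(R)
    # partial correlations from the inverse of the correlation matrix
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    P = -Rinv / d
    np.fill_diagonal(R, 0.0)  # keep off-diagonal entries only
    np.fill_diagonal(P, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())


def bartlett_sphericity(X):
    """Bartlett's test of the null hypothesis that the correlation matrix is identity."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)  # chi-square statistic and p-value
```

A KMO near 1 and a near-zero Bartlett p-value (as reported above) indicate that the items share enough correlation for factor analysis to be meaningful.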

The instrument consists of 43 questions related to formative and summative assessment, of which 8 are open. Table 1 shows the items per category.

Table 1

Classification of items according to the type of assessment in the questionnaire

Category Open questions Closed questions
Formative assessment 3 18
Summative assessment 3 14
Formative and summative assessment 2 3

Results

The results begin with a descriptive analysis presenting the trends of the answers, followed by an exploratory factor analysis evaluating the questionnaire. The qualitative results are then presented, which evaluate in greater depth the replies to the open questions.

This study was carried out at a private university with headquarters in two cities in Colombia. A total of 109 university lecturers participated, 72 male and 37 female. These lecturers belong to four subject areas: social and human sciences (72); sciences and engineering (12); health and sports sciences (11); and economic and administrative sciences (5). The remaining 9 did not identify with any of these areas. The number of years of experience in university teaching among the participants was: 16 or more years (45); from 11 to 15 years (18); from 6 to 10 years (23); and from 1 to 5 years (23).

The questionnaire was completed using a binary classification of ‘yes’ or ‘no’ for the closed-ended questions. This enabled the analysis of the level of agreement or disagreement among the lecturers regarding the questions and statements in the instrument.

The results in Table 2 show that the questions with the highest agreement among lecturers (over 90 per cent) are those that focus on recognising the importance of formative assessment in three ways: as a tool for monitoring the assessment process; as something lecturers are willing to use in the classroom; and as a source of feedback. The same agreement is not observed when lecturers are asked whether summative and formative assessment evaluate the product or the process. Logically, all assessment is a process, and this process itself can be evaluated. In addition, each process results in a product that can be evaluated. Therefore, it is surprising that lecturers affirmed that formative assessment evaluates the process (0.94) far more often than that it evaluates the product (0.68); likewise with beliefs about summative assessment evaluating the process (0.64) and the product (0.86). However, in other responses, around 85 per cent of lecturers agreed that formative assessment can be used to assess activities for final grading, indicating a lack of coherence and clarity regarding the purpose of formative assessment.

Table 2

Descriptive statistics

Item Variable Mean* Deviation
II.14 Formative assessment provides useful feedback. 0.99 0.095
I.6 Do you apply formative assessment activities with your students? 0.96 0.187
II.8 Formative assessment encompasses the assessment of processes. 0.94 0.244
II.12 Formative assessment allows for the assessment of learning. 0.94 0.244
I.7 Do you apply formative assessment activities to assign tasks to your students? 0.93 0.260
II.1 Summative assessment can be used to grade at the end of a course. 0.91 0.288
II.5 Summative assessment encompasses the assessment of products. 0.86 0.343
II.9 Summative assessment enables the assessment of activities for final grading. 0.86 0.343
II.10 Summative assessment allows for the assessment of learning. 0.85 0.362
II.11 Formative assessment allows for the assessment of activities for final grading. 0.85 0.362
I.15 Are formative assessment activities related to summative assessment activities? 0.79 0.407
II.2 Formative assessment can be used to grade at the end of a course. 0.79 0.407
II.3 Summative assessment can be used to put the mid-term grade in a course. 0.78 0.414
II.4 Formative assessment can be used to put the mid-term grade in a course. 0.77 0.425
II.20 Students focus on summative assessment. 0.77 0.425
II.15 Summative and formative assessments require different processes. 0.76 0.431
I.9 Do you integrate activities from summative and formative assessment? 0.74 0.441
II.13 Summative assessment provides useful feedback. 0.72 0.451
II.7 Formative assessment encompasses the assessment of products. 0.68 0.467
I.17 Do your students do self-assessment? 0.66 0.477
II.6 Summative assessment encompasses the assessment of processes. 0.64 0.482
II.18 Students understand what summative assessment is. 0.59 0.493
I.18 Do you present self-assessment as a formative assessment activity? 0.56 0.499
I.8 Do you keep the activities of summative and formative assessment separate? 0.54 0.501
II.16 Summative and formative assessments require similar processes. 0.48 0.502
II.19 Students understand what formative assessment is. 0.45 0.500
I.20 Is self-assessment used in both summative and formative assessment? 0.43 0.498
I.19 Do you present self-assessment as a summative activity? 0.34 0.477
II.21 Students focus on formative assessment. 0.27 0.446
  • Note: * Mean is the proportion of affirmative answers; N = 109 observations.
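Since every closed item is binary, the Deviation column in Table 2 follows (up to rounding of the reported means) directly from the Mean column: a 0/1 item with proportion p of ‘yes’ answers has sample standard deviation sqrt(p(1 − p) · n/(n − 1)). A quick sketch (the function name is our own) confirms the pattern:

```python
import math


def binary_sd(p, n=109):
    """Sample standard deviation of a 0/1 item with proportion p of 'yes' answers."""
    return math.sqrt(p * (1 - p) * n / (n - 1))
```

For example, binary_sd(0.77) gives roughly 0.423, close to the 0.425 reported for items I.15 and II.2; the small differences arise because the published means are rounded to two decimal places.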

The greatest agreement in the negative response is found when lecturers affirm that their students do not focus on formative assessment. Another point to highlight within the results with less agreement is related to the perception of self-assessment. Although it is reported that 66 per cent of lecturers use it, it is associated with formative assessment in 56 per cent of cases.

In general, quantitative data analysis enables us to see certain strengths of lecturers in recognising the usefulness and uses of assessment; however, when investigating the differences in types of assessment, conceptual and practical confusions are observed, especially with regard to formative assessment.

Exploratory factor analysis

Following Morales Vallejo (2013), a first phase was carried out with a principal component analysis, considering the total variance, both common and specific to each variable. From this stage, seven factors were identified that explain 62.85 per cent of the total variance of the data, which represents an adequate value for these models. In the second phase, an orthogonal rotation of the components was carried out to obtain a simpler structure that is easier to interpret. For this, the varimax method was used, given the low correlation between the items. The seven factors result from several iterations in which items that formed factors with fewer than three elements were excluded, following the approach of Lloret-Segura et al. (2014), until the final configuration of seven factors was obtained. Five items were excluded because they were not relevant to explaining the variance in the model.

Although the seven factors explain only 62.85 per cent of the variance of the data, the resulting grouping is consistent with the characteristics of the questions, which allows for a clear definition of the factors. This result is very important, bearing in mind that one of the reasons for using exploratory factor analysis is to assess the validity of the scales or questionnaires used to measure theoretical constructs, making it possible to identify whether the observed variables are grouped according to the expected dimensions, and whether those dimensions are consistent with the underlying theories.

Through exploratory factor analysis, it is also possible to discover underlying factors that may not be directly observable, but that influence the relationships between variables. These latent factors can provide a deeper understanding of the interactions between variables. In our case, seven factors were identified that are interpreted below. Factor 1 explains 11.24 per cent of the variance and focuses on knowing the uses of self-assessment in formative processes. Factor 2 explains 9.85 per cent of the variance and captures lecturers’ knowledge about summative assessment. Factor 3 explains 8.84 per cent of the variance and defines lecturers’ knowledge about formative assessment. Factor 4 explains 8.74 per cent of the variance and defines whether lecturers use formative assessment with students. Factor 5 explains 8.58 per cent of the variance and represents lecturers’ perception of the understanding and use of formative assessment and summative assessment by students. Factor 6 explains 8.31 per cent of the variance and captures the information about whether lecturers understand the differences and similarities between formative assessment and summative assessment. Finally, Factor 7 explains 7.28 per cent of the variance, and refers to the use that lecturers make of summative assessment in classes.
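The two-phase procedure described above (principal components, then varimax rotation) can be sketched in a few lines of Python. The data and dimensions below are invented for illustration (109 hypothetical respondents, 30 hypothetical items, matching only the sample size reported in the study), and the varimax routine is a standard textbook implementation, not the authors' actual analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Likert-style responses: 109 respondents x 30 items
# (illustrative random data, not the study's real dataset).
X = rng.normal(size=(109, 30))

# Phase 1: principal components on the correlation matrix.
R_corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R_corr)
order = np.argsort(eigvals)[::-1]          # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 7                                       # retain seven factors, as in the study
explained = eigvals[:k] / eigvals.sum() * 100   # per-factor % of total variance
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

# Phase 2: orthogonal (varimax) rotation for a simpler structure.
def varimax(L0, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a loading matrix to maximise the varimax criterion."""
    p, k = L0.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = L0 @ R
        u, s, vt = np.linalg.svd(
            L0.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        )
        R = u @ vt                           # orthogonal rotation matrix
        if s.sum() < d * (1 + tol):          # stop when improvement is tiny
            break
        d = s.sum()
    return L0 @ R, R

rotated, R = varimax(loadings)
```

Because the rotation is orthogonal, each item's communality (its row sum of squared loadings) is unchanged; only the distribution of loadings across factors becomes simpler to interpret.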

Qualitative analysis

Qualitative analysis was conducted to understand the trends in lecturers’ beliefs and activities regarding formative and summative assessment. To this end, information was collected from the six open questions contained in the instrument. To establish what is understood by formative and summative assessment, two identical questions were asked at the beginning and at the end of the questionnaire. These were intended to reveal whether, after the reflective process that responding to the questionnaire promoted, participants expressed the same definitions of formative and summative assessment at the end as at the beginning, or different ones. In addition, two further questions were asked to understand (1) whether lecturers find a relationship between formative assessment and the development of processes and (2) whether formative assessment allows the assessment of learning. In total, six open questions were analysed. The analysis focused on the frequency of the terms (see Table 3) used to define summative assessment in the first and last questions.

Table 3

Definition of summative assessment, at the beginning and at the end of the questionnaire

Most used terms Frequency
Assessment 9
Quantification 6
Grade 5
Final 5
Learner 4
Sum 4
Accumulation 3
Acquisition 3
Knowledge 3
Teaching 3
Process 3
Results 3

After the analysis, a group definition of summative assessment was constructed from the most frequent terms (see Table 3). This definition was as follows: Summative assessment is considered an accumulative assessment process that quantifies the acquisition of knowledge imparted through teaching. It is applied at the end and allows the collection of learning outputs.

Table 4

Definition of formative assessment, at the beginning and at the end of the questionnaire

Most used terms Frequency
Assessment 9
Learning 5
Teaching 4
Seek 4
Students 3
Competencies 3
Processes 2
Weaknesses 1

The analysis also revealed certain differences between the terms used in the first definition and those no longer present in the last. These terms were: competence, content, evidence, exam, achievement, work, value, verify. The definition obtained was as follows: Formative assessment is considered an assessment process that has repercussions on teaching. It seeks to implement processes that encourage students to develop their competencies and detect their weaknesses. No reference was made to the term feedback or to any specific type of written academic product.
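A term-frequency tally of this kind can be reproduced with a short sketch. The answers and stopword list below are invented illustrations, not the study's actual responses or its coding procedure:

```python
from collections import Counter
import re

# Invented open-ended answers for illustration only.
answers = [
    "Summative assessment quantifies learning with a final grade.",
    "It is a final assessment that sums the results of the process.",
    "An assessment of the knowledge acquired, expressed as a grade.",
]

# Minimal illustrative stopword list; a real analysis would use a fuller one.
stopwords = {"a", "an", "the", "of", "is", "it", "that", "with", "as", "and"}

tokens = []
for answer in answers:
    # Lowercase and keep alphabetic tokens (including Spanish accented letters).
    tokens += [w for w in re.findall(r"[a-záéíóúñ]+", answer.lower())
               if w not in stopwords]

freq = Counter(tokens)
for term, count in freq.most_common(5):
    print(term, count)
```

The resulting counts correspond to the kind of frequency tables shown above (Tables 3 and 4), from which a shared definition can then be drafted using the most common terms.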

Discussion

Far from losing relevance amid the circumstances that complicated the design and application of assessment during the Covid-19 pandemic, assessment is undoubtedly taking on greater importance as an essential contribution of universities. In fact, Krishnamurthy (2020) presents the university as a powerhouse for assessment in a world where learning can come from many directions, stating that students learn from each other, from algorithmic systems and from public information. Universities will continue to play a powerful role as evaluators of learning.

However, if universities are truly leaders and developers of thinking for a population with civic responsibilities, then they must be much more than gatekeepers of the knowledge economy and measurers of outcomes. Their principal responsibilities include developing assessment knowledge as a basis for critical understanding. The United Nations Convention on the Rights of the Child includes two key principles which must guide education throughout life:

1. States Parties shall assure to the child who can form his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.

(OHCHR, 1990: Article 12.1)

No one, child or adult, may develop their own views unless assessment is used to scrutinise what is available to them. Assessment is about understanding the world and society around us. The literature supports formative assessment as inherently subsuming self-assessment as a means for learners to become self-regulated learners and monitor their own development (Panadero, 2017; Sadler, 1989; Taras, 2023). The second key principle states:

2. For this purpose, the child shall in particular be provided the opportunity to be heard in any judicial and administrative proceedings affecting the child, either directly, or through a representative or an appropriate body, in a manner consistent with the procedural rules of national law.

(OHCHR, 1990: Article 12.2)

Even in such a specialist area as the law, providing people with a voice that is heard requires an understanding of real choices, which comes only from expertise in assessment. In general, assessment is an informed opinion, and, in universities, it is the most valuable skill. Assessment cannot exclude learners or the assessed at any level without creating injustices.

This scenario gives greater relevance to lecturer training in assessment processes, even more so when the results of this study show the importance that lecturers attach, above all, to formative assessment. It can be said that formative assessment can, as Hansen (2020) and Atienza et al. (2023) affirm, foster active dialogue between lecturers and students and greater alignment with learning outcomes. However, this study also confirms that there is confusion regarding the conception of formative and summative assessment, as well as regarding the relationship between them.

From the qualitative analysis, the results related to summative assessment make it possible to infer that lecturers’ beliefs are based on summative assessment measuring learning at the end of a certain period. In addition, some task examples stand out, notably, the examination. However, beliefs about formative assessment, and how it is used, are seen as diffuse and weak in comparison to the clarity with which the participants defined and used summative assessment. These findings coincide with what Martínez Reyes (2015) reported. Therefore, it can be confirmed that in this case, too, lecturer thinking focuses on summative assessment as a means of measuring and grading.

The data reveal lecturers’ beliefs and their understanding of assessment theories. The first five affirmative answers show a high level of agreement: for the most part, lecturers recognise the usefulness of formative assessment, which provides useful information to support learning. This would seem to indicate coherence about beliefs in formative assessment. This also coincides with what has been found in other studies in this regard (Davies and Taras, 2018; Lau, 2015), as well as in Latin American studies (Gallardo Cordova and Valenzuela González, 2014).

The next four answers with more agreement (over 85 per cent) are related to summative assessment. These reveal that beliefs about its use are closely linked to accreditation of a course, evaluation of a product and providing final learning grades. These results also coincide with what has been found in other studies in this regard (Davies and Taras, 2016).

It is important to note that, when some results are contrasted, lecturers agree that formative assessment evaluates the process and summative assessment the product, and, in general, that both evaluate learning. Likewise, summative and formative assessment differ little in their reported use for end-of-course assessment for accreditation, although formative assessment is used less for this purpose. Data on other aspects of the relationship between summative and formative assessment show less agreement.

A pertinent question might be: what is the thinking behind the disparity between the processes of formative assessment (0.94) and summative assessment (0.64), when technically both can evaluate the product and the process? Since Scriven’s (1967) distinction between summative and formative assessment, discourses have focused on the use of formative assessment to support learning, and because learning is generally understood as an ongoing process, this perceived link has strengthened over time. The terms formative and summative also have literal meanings, which more readily link summative assessment to the product and formative assessment to the process. The perceived differences between them highlighted by these data do not accurately reflect assessment theory (Taras, 2005, 2023).

The last questions related to lecturers’ perceptions regarding student work in formative assessment recorded the lowest agreements. The results present a contradiction compared to the first five items, where there is greater agreement as to the high importance of formative assessment. This indicates that, although lecturers use and value formative assessment, they also believe that students do not understand it, and, worse still, do not even focus on it. This finding coincides with what Davies and Taras (2016) found in other studies.

Conclusions

From the findings and discussion, it can be concluded that examining lecturers’ theoretical beliefs about assessment, particularly summative assessment, formative assessment and student self-assessment, reveals considerable differences in beliefs and understandings between different subjects and disciplines.

This study found that the perceived differences between summative and formative assessment were a symptom of the problems created by discrepancies between theoretical understanding and individual and communal assessment practices and beliefs. Another point to note is that students are excluded from dialogue and participation in assessment, even though pedagogical discourses place learners at the core of learning processes. These discrepancies are perhaps the most important obstacle to be overcome within education. The self-regulation and assessment literature clearly notes that learners cannot learn or develop without assessment and self-assessment expertise. Excluding learners from assessment is a means of control that negates the principles and powers to learn.

Likewise, the lecturers’ understanding of formative assessment and classroom practices reflects that it is of high importance to them. However, they do not seem to consider students as an integral part of either formative or summative assessment processes.

Again, lecturers are considered to be the doers, and learners the done-to, in contradiction to learning and assessment theories. While pedagogies may include learners as central agents in their own development, the results of this research, like those of previous similar studies, continue to place assessment as the sole prerogative of teachers and lecturers, again contradicting research and annulling teachers’ developmental practices.

From the results, it is understood that formative assessment is seen as essentially focused on the process (and less on the product), while with summative assessment the opposite is true. In principle, however, both formative and summative assessment can assess the process and the product equally.

From the structure of the questionnaire and its translation, it can be affirmed that the instrument allows the collection of relevant information about the beliefs and practices of university lecturers. However, these results are only a first sample of the range of thinking and practices that may be found in other educational scenarios. It is here that an instrument allowing researchers to collect data with an acceptable level of reliability becomes valuable and useful in a variety of contexts, especially when it has been proven to do so across cultural and language contexts.

Apart from the logistics of data collection for future inquiries, perhaps the most salient implication for educators across roles is that assessment can no longer be relegated to the limited and limiting purpose of the traditional measurement tool, or remain in the sole control and domain of teachers and lecturers. For many decades, discourses have repeated that assessment is crucial to learning, yet this can only be so if assessment is central to learners’ toolkit of skills, comparable to that of lecturers, so that they can monitor their progress and learn.

The relationship between assessment and learning has implications outside teaching in every social context. The United Nations Convention on the Rights of the Child lays the foundations on which to build human rights. Articles 12.1 and 12.2 clearly require the informing and education of all, with an informed voice with which to present opinions, even in the most specialised of contexts (OHCHR, 1990). Yet anything that involves assessment seems immune to this basic right. Students are required to fight harder if they disagree with the grade awarded to them, when it is incumbent on assessors to explain and justify grades by educating students in assessment processes, criteria and standards, so that there is a common forum of understanding.

When someone applies for a job or for promotion, there is often the disclaimer that the applicant has no right to question or reply, even if the response provides inaccurate information. Assessment has become the tool of power to either veto and close discussion or prevent the voicing of an opinion that is contrary to the status quo or challenges those with power.

For justice and respect, accurate, shared assessment, as evident in learning and assessment theories, must be the required norm and not the exception.

In future studies, it would be enlightening to carry out a confirmatory factor analysis with lecturer populations in other educational scenarios in Colombia or in other Latin American countries, in order to explore thinking on theories of assessment further. In addition, comparisons between the data from England and Colombia would be of interest to teachers and education leaders alike. Finally, comparative studies of theories about assessment, initially across Europe, and subsequently between Europe and Latin America, would provide a pooling of ideas and invaluable thinking on such a central and all-encompassing area as assessment.

Declarations and conflicts of interest

Research ethics statement

Not applicable to this article.

Consent for publication statement

Not applicable to this article.

Conflicts of interest statement

The authors declare no conflicts of interest with this work. All efforts to sufficiently anonymise the authors during peer review of this article have been made. The authors declare no further conflicts with this article.

References

Ashwin, P.. (2012).  ‘How often are theories developed through empirical research into higher education?’.  Studies in Higher Education 37 (8) : 941–55, DOI: http://dx.doi.org/10.1080/03075079.2011.557426

Atienza, R.; Valencia-Peris, A.; López-Pastor, V.. (2023).  ‘Formative assessment and pre-service teacher education: Previous, current and prospective experiences’.  Cultura, Ciencia y Deporte 18 (55) : 133–56, DOI: http://dx.doi.org/10.12800/ccd.v18i55.1914

Barnett, R.. (2007).  ‘Assessment in higher education: An impossible mission?’.  Rethinking Assessment in Higher Education: Learning for the longer term. Boud, D., Falchikov, N. (eds.),   London: Routledge, pp. 29–40.

Biggs, J.. (1998).  ‘Assessment and classroom learning: A role for summative assessment?’.  Assessment in Education: Principles, policy & practice 5 (1) : 103–10, DOI: http://dx.doi.org/10.1080/0969595980050106

Black, P.; Harrison, C.; Lee, C.; Marshall, B.; Wiliam, D.. (2003).  Assessment for Learning: Putting it into practice. Maidenhead: Open University Press–McGraw Hill Education.

Black, P.; Wiliam, D.. (2004).  ‘The formative purpose: Assessment must first promote learning’.  Yearbook of the National Society for the Study of Education 103 (2) : 20–50, DOI: http://dx.doi.org/10.1111/j.1744-7984.2004.tb00047.x

Black, P.; Wiliam, D.. (2018).  ‘Classroom assessment and pedagogy’.  Assessment in Education: Principles, policy and practice 25 (6) : 551–75, DOI: http://dx.doi.org/10.1080/0969594X.2018.1441807

Cope, B.; Kalantzis, M.. (2015).  ‘An introduction to the pedagogy of multiliteracies’.  A Pedagogy of Multiliteracies: Learning by design. Cope, B., Kalantzis, M. (eds.),   Basingstoke: Palgrave Macmillan, pp. 1–36.

Creswell, J.W.; Guetterman, T.C.. (2019).  Educational Research: Planning, conducting, and evaluating quantitative and qualitative research. 6th ed. Upper Saddle River, NJ: Pearson.

Davies, M.; Taras, M.. (2016).  ‘A comparison of assessment beliefs of science and education lecturers in a university’.  Multidisciplinary Journal of Educational Research 6 (1) : 77–99, DOI: http://dx.doi.org/10.17583/remie.2016.1766

Davies, M.; Taras, M.. (2018).  ‘Coherence and disparity in assessment literacies among higher education staff’.  London Review of Education 16 (3) : 474–90, DOI: http://dx.doi.org/10.18546/LRE.16.3.09

Dysthe, O.. (2008).  ‘The challenges of assessment in a new learning culture’.  Balancing Dilemmas in Assessment and Learning in Contemporary Education. Havnes, A., McDowell, L. (eds.),   New York: Routledge, pp. 213–24.

Gallardo, K.. (2020).  ‘Competency-based assessment and the use of performance-based evaluation rubrics in higher education: Challenges towards the next decade’.  Problems of Education in the 21st Century 78 (1) : 61–79, DOI: http://dx.doi.org/10.33225/pec/20.78.61

Gallardo Cordova, K.; Valenzuela González, J.R.. (2014).  ‘Evaluación del desempeño: Acercando la investigación educativa a los docentes’ [Performance evaluation: Bringing educational research closer to teacher].  Revalue 3 (2) : 1–21.

Hansen, G.. (2020).  ‘Formative assessment as a collaborative act. Teachers’ intention and students’ experience: Two sides of the same coin, or?’.  Studies in Educational Evaluation 66 (7) : 100904. DOI: http://dx.doi.org/10.1016/j.stueduc.2020.100904

Hargreaves, E.. (2005).  ‘Assessment for learning? Thinking outside the (black) box’.  Cambridge Journal of Education 35 (2) : 213–24, DOI: http://dx.doi.org/10.1080/03057640500146880

Krishnamurthy, S.. (2020).  ‘The future of business education: A commentary in the shadow of the Covid-19 pandemic’.  Journal of Business Research 117 (5) : 1–5, DOI: http://dx.doi.org/10.1016/j.jbusres.2020.05.034

Lau, A.M.. (2015).  ‘Formative good, summative bad? A review of the dichotomy in assessment literature’.  Journal of Further and Higher Education 40 (4) : 509–25, DOI: http://dx.doi.org/10.1080/0309877X.2014.984600

Lloret-Segura, S.; Ferreres-Traver, A.; Hernández-Baeza, A.; Tomás-Marco, I.. (2014).  ‘El análisis factorial exploratorio de los ítems: Una guía práctica, revisada y actualizada’ [Exploratory item factor analysis: A practical guide revised and updated].  Anales de Psicologia 30 (3) : 1151–69, DOI: http://dx.doi.org/10.6018/analesps.30.3.199361

MacLellan, E.. (2001).  ‘Assessment for learning: The differing perceptions of tutors and students’.  Assessment & Evaluation in Higher Education 26 (4) : 37–41, DOI: http://dx.doi.org/10.1080/02602930120063466

Martínez Reyes, N.. (2015).  ‘Las creencias de los profesores universitarios sobre evaluación del aprendizaje’ [University professors’ beliefs about learning evaluation].  Diálogos 12 : 45–66, DOI: http://dx.doi.org/10.5377/dialogos.v0i12.2193

Morales Vallejo, P.. (2013).  El Análisis Factorial en la Construcción e Interpretación de Tests, Escalas y Cuestionarios. [Factor analysis in the construction and interpretation of tests, scales and questionnaires] , Universidad Pontificia Comillas, Madrid. Accessed 30 September 2023. https://docplayer.es/11440055-El-analisis-factorial-en-la-construccion-e-interpretacion-de-tests-escalas-y-cuestionarios.html .

Muñoz Olivero, J.; Villagra Bravo, C.; Sepúlveda Silva, S.. (2016).  ‘Proceso de reflexión docente para mejorar las prácticas de evaluación de aprendizaje en el contexto de la educación para jóvenes y adultos (EPJA)’ [Teachers’ reflection process to improve learning assessment practices in the context of education for young people and adults (EPJA)].  Folios 1 (44) : 77–91, DOI: http://dx.doi.org/10.17227/01234870.44folios77.91

Nieminen, J.H.; Bearman, M.; Tai, J.. (2023).  ‘How is theory used in assessment and feedback research? A critical review’.  Assessment & Evaluation in Higher Education 48 (1) : 77–94, DOI: http://dx.doi.org/10.1080/02602938.2022.2047154

OHCHR (Office of the High Commissioner for Human Rights). (1990).  Convention on the Rights of the Child, (General Assembly Resolution 44/25). Accessed 30 September 2023. https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-rights-child .

Palinkas, L.A.; Aarons, G.A.; Horwitz, S.; Chamberlain, P.; Hurlburt, M.; Landsverk, J.. (2011).  ‘Mixed method designs in implementation research’.  Administration and Policy in Mental Health and Mental Health Services Research 38 : 44–53, DOI: http://dx.doi.org/10.1007/s10488-010-0314-z

Panadero, E.. (2017).  ‘A review of self-regulated learning: Six models and four directions for research’.  Frontiers in Psychology 8 : 422. DOI: http://dx.doi.org/10.3389/fpsyg.2017.00422

Pedder, D.; Opfer, V.D.. (2011).  ‘Conceptualizing teacher professional learning’.  Review of Educational Research 81 (3) : 376–407, DOI: http://dx.doi.org/10.3102/0034654311413609

Plano Clark, V.L.; Ivankova, N.V.. (2016).  ‘Why a guide to the field of mixed methods research? Introducing a conceptual framework of the field’.  Mixed Methods Research: A guide to the field. Plano Clark, V.L., Ivankova, N.V. (eds.),   Los Angeles: Sage, pp. 3–30, DOI: http://dx.doi.org/10.4135/9781483398341.n4

Ramaprasad, A.. (1983).  ‘On the definition of feedback’.  Behavioral Science 28 (1) : 4–13, DOI: http://dx.doi.org/10.1002/bs.3830280103

Sadler, D.R.. (1989).  ‘Formative assessment and the design of instructional systems’.  Instructional Science 18 : 119–44, DOI: http://dx.doi.org/10.1007/BF00117714

Scriven, M.. (1967).  ‘The methodology of evaluation’.  Perspectives of Curriculum Evaluation. Tyler, R., Gagne, R., Scriven, M. (eds.),   Chicago: Rand McNally, pp. 39–83.

Tan, K.H.K.. (2004).  ‘Does student self-assessment empower or discipline students?’.  Assessment and Evaluation in Higher Education 29 (6) : 651–62, DOI: http://dx.doi.org/10.1080/0260293042000227209

Taras, M.. (2005).  ‘Assessment – summative and formative – Some theoretical reflections’.  British Journal of Educational Studies 53 (4) : 466–78, DOI: http://dx.doi.org/10.1111/j.1467-8527.2005.00307.x

Taras, M.. (2008).  ‘Summative and formative assessment: Perceptions and realities’.  Active Learning in Higher Education 9 (2) : 172–92, DOI: http://dx.doi.org/10.1177/1469787408091655

Taras, M.. (2009).  ‘Summative assessment: The missing link for formative assessment’.  Journal of Further and Higher Education 33 (1) : 37–41, DOI: http://dx.doi.org/10.1080/03098770802638671

Taras, M.. (2023).  ‘Exploring (fundamentals of) student self-assessment’.  Student Self-Assessment: An essential guide for teaching, learning and reflection at school and university. Taras, M., Wong, H.M. (eds.),   New York: Routledge.

Taras, M.; Davies, M.S.. (2017).  ‘Assessment beliefs of higher education staff developers’.  London Review of Education 15 (1) : 126–40, DOI: http://dx.doi.org/10.18546/LRE.15.1.11

Tight, M.. (2014).  ‘Discipline and theory in higher education research’.  Research Papers in Education 29 (1) : 93–110, DOI: http://dx.doi.org/10.1080/02671522.2012.729080

UNESCO (United Nations Educational, Scientific and Cultural Organization). (2015).  Catalogue of Learning Assessments: A public resource to learning assessments around the world, Institute of Statistics. Accessed 30 September 2023. https://unesdoc.unesco.org/ark:/48223/pf0000232998 .

Wallwey, C.; Kajfez, R.L.. (2023).  ‘Quantitative research artifacts as qualitative data collection techniques in a mixed methods research study’.  Methods in Psychology 8 : 100115. DOI: http://dx.doi.org/10.1016/j.metip.2023.100115

Wiliam, D.; Black, P.. (1996).  ‘Meanings and consequences: A basis for distinguishing formative and summative functions of assessment?’.  British Educational Research Journal 22 (5) : 537–48, DOI: http://dx.doi.org/10.1080/0141192960220502