British Educational Research Association

Review of Education

Formative assessment and feedback for learning in higher education: A systematic review

Corresponding Author

Rebecca Morris

Department of Education Studies, University of Warwick, Coventry, UK

Correspondence

Rebecca Morris, Department of Education Studies, University of Warwick, Coventry, UK.

Email: [email protected]

Thomas Perry

School of Education, University of Birmingham, Birmingham, UK

Lindsey Wardle

School of Education, Durham University, Durham, UK

Abstract

Feedback is an integral part of education and there is a substantial body of trials exploring and confirming its effect on learning. This evidence base comes mostly from studies of children of compulsory school age; there is very little evidence to support effective feedback practice in higher education, beyond the frameworks and strategies advocated by those claiming expertise in the area. This systematic review aims to address this gap. We review causal evidence from trials of feedback and formative assessment in higher education. Although the evidence base is currently limited, our results suggest that low-stakes quizzing is a particularly powerful approach and that there are benefits to forms of peer and tutor feedback, although these depend on implementation factors. There was mixed evidence for praise, grading and technology-based feedback. We organise our findings into several evidence-grounded categories and discuss next steps for the field and for evidence-informed feedback practice in universities.

Context and implications

Rationale for this study

To gain a better understanding of effective formative assessment and feedback approaches in higher education (HE). To promote a more evidence-informed approach to teaching and learning in universities.

Why the new findings matter

The findings highlight a small number of promising strategies for formative assessment and feedback in HE. They also draw attention to a lack of (quality) evidence in this area overall.

Implications for policy-makers and practitioners

Universities and their regulators/funders should encourage and support more high-quality research in this important area. Researchers in the field also need to work towards more ambitious, higher-quality studies which are likely to provide robust, causal conclusions about academic effectiveness (or other outcomes). Those involved in teaching and learning in universities should draw on the findings to adopt evidence-informed approaches to formative assessment and feedback, and to challenge approaches which do not appear to be founded on strong evidence. Students could be made more aware of teaching and learning approaches that are likely to support their academic progress.

INTRODUCTION

Formative assessment and feedback are fundamental aspects of learning. In higher education (HE), both topics have received considerable attention in recent years with proponents linking assessment and feedback—and strategies for these—to educational, social, psychological and employability benefits (Gaynor,  2020 ; Jonsson,  2013 ; van der Schaaf et al.,  2013 ). On a practice and policy level there is widespread agreement that formative assessment and feedback should feature substantially within course design and delivery (Baughan,  2020 ; Carless & Winstone,  2019 ; OfS,  2019a ). However, beyond this general expectation, it is less clear where the strength of evidence lies and what the most effective approaches and elements may be for HE students’ learning (Boud & Molloy,  2013 ; Evans,  2013 ).

This systematic review examines the research evidence on the impact of formative assessment and feedback on university students’ academic performance. It is the first international systematic review focusing on assessment and feedback in HE and presenting a comprehensive overview of causal evidence available in the field. Unlike other studies in this area, our review (a) employs a broad conceptualisation of formative assessment and feedback, including research across a range of different aspects of these pedagogical features, and (b) combines this with a rigorous quality appraisal process for identifying the most trustworthy, robust studies on which to base judgements about effective strategies.

There are currently over 200 million students enrolled in HE courses internationally, and this number is expected to continue to grow substantially in coming years (Calderon, 2018). Given this scale and the importance of feedback and formative assessment for learning, this systematic review has wide and significant implications for the field and for practice. We indicate approaches and strategies where there appears to be some evidence of effectiveness, while also highlighting the overall lack of high-quality, causal evidence available in this field. Implications of this for practitioners and policymakers seeking to work within an evidence-informed sector are also discussed.

This article proceeds as follows: in the two subsequent sections, we outline definitions of formative assessment and feedback and existing practices relating to them in HE. The methods section sets out our systematic review approach, including search terms, eligibility criteria, and details of the quality appraisal and analysis process. We then summarise the studies providing causal evidence, organised into categories grounded in the data and reported through a narrative synthesis. Finally, we discuss the implications of our results for HE feedback and formative assessment research and practice, providing recommendations for the development of the field.

Definitions and types of formative assessment and feedback

There is no single definition of either 'formative assessment' or 'feedback'. Nevertheless, there is agreement that feedback is an integral element of a wider framework of formative assessment (Wiliam, 2018) and that both are concerned with the gathering and provision of information about a student's current performance or understanding to benefit students' learning. Black and Wiliam (1998), for example, describe formative assessment as including 'all those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they [the students] are engaged' (Black & Wiliam, 1998, p. 8). As Sadler (1989) notes in earlier work, this transfer of information is not just between teachers and students. He argues that both peer and self-assessment can be important vehicles for providing feedback on students' existing performance and steps for moving forward.

It is this notion of addressing a 'gap' between students' current level of understanding and their desired level which typically forms a basis for definitions of feedback in education (Hattie & Timperley, 2007; Sadler, 1998). For some, however, simply using or storing this information to acknowledge a gap is not enough; it must be utilised in a way that alters the gap, and ultimately has an impact on students' learning, if it is to be called 'feedback' (Ramaprasad, 1983; Wiliam, 2011). For these intertwined processes of formative assessment and feedback to occur and work effectively, teachers are required to root them firmly within their pedagogical practices. Kluger and DeNisi (1996), in their seminal review, for example, stress that how students respond to or act on feedback is more important than the type of feedback received. In order for this kind of response or action to happen, teachers therefore have to plan and embed opportunities for formative assessment and feedback activities into their curricula and teaching (Speckesser et al., 2018; Wiliam, 2018). Recent work by Carless and Winstone (2019) has indicated the importance of 'feedback culture' within HE. They describe the value of learning-focused models of feedback (as opposed to a one-way transmission model) whereby students are encouraged to actively engage with and implement the feedback they receive.

Hattie and Timperley (2007) identify four types of feedback, focusing on: (a) the task, (b) the process, (c) self-regulation, and (d) the individual. They argue that these have different purposes and variable impacts on students' learning and, as a result, require different strategies for effective implementation. Most feedback is either verbal or written. Verbal feedback is frequently placed within the context of dialogue; from this perspective, feedback is seen as a 'move' within a dialogic teaching and learning approach (Hennessy et al., 2016; Perry et al., 2020). Such feedback can range from a simple judgement of correctness to identifying part of an answer that could be developed or improved, referring back to prior contributions, or inviting opinions and ideas. Written feedback can take the form of corrections, marks, written comments, questions, targets and approaches designed to stimulate written dialogue. It is more typically focused on providing corrective and further information to develop student understanding rather than to inform teaching.

An increasingly important strand of educational thinking is emerging from cognitive science, which relates to understanding cognitive processes involved with memory and learning. Concepts such as working memory, long-term memory and cognitive load (Kirschner et al.,  2006 ; Sweller et al.,  2011 ) are influential in explaining how the human mind engages with, processes and retains information. Despite considerable interest in this work within the field of education, Wiliam ( 2018 ) points out that relatively few studies of feedback acknowledge these principles of cognitive science and instead tend to focus on shorter-term performance objectives linked to modes of feedback delivery rather than examining the deeper, longer-term processes of memory gain and learning (see Soderstrom & Bjork,  2015 , for further discussion of the dissociation of learning and performance). Although much of the evidence base is currently derived from laboratory studies rather than ‘real world’ (i.e. ecologically valid) teaching and learning settings, cognitive science is providing a renewed emphasis on certain teaching strategies, including feedback strategies such as quizzing and frequent testing, which are rooted in evidence around recall and retrieval practice (Weinstein & Sumeracki,  2018 ). Cognitive science is likely to continue to offer theoretical bases and relevant evidence to develop understanding of feedback.

Evidence-informed formative assessment and feedback practice

Systematic reviews and meta-analyses, mostly conducted with compulsory school-age children, report relatively high average effect sizes (d ≈ 0.4–0.8), albeit with large variation, ostensibly linked to a myriad of different forms of feedback, quality of implementation, and the teaching and learning context (EEF, 2018; Hattie & Timperley, 2007; Kluger & DeNisi, 1996; Klute et al., 2017; Wisniewski et al., 2020). This evidence base tends to identify corrective feedback as more useful than praise, punishment or rewards for improving students' ability to learn new skills and complete tasks effectively (Hattie & Timperley, 2007; Kluger & DeNisi, 1996). Studies have highlighted that the more information included within feedback, the more beneficial it is, and that the provision of comments is more helpful than simply sharing grades or marks (Hattie & Timperley, 2007; Wisniewski et al., 2020). Some reviews have examined the significance of the agents delivering the formative assessment and feedback: Klute et al. (2017) find that feedback directed by agents other than the student (i.e. a teacher or computer program) is more effective. Wisniewski et al. (2020) also tentatively highlight the effectiveness of peer feedback but note the small number (n = 8) of studies upon which they base this claim.
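For reference, the effect sizes cited here are standardised mean differences. One common formulation (not necessarily the exact estimator used in each review cited) is Cohen's d:

$$
d = \frac{\bar{X}_{1} - \bar{X}_{2}}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
$$

On this scale, the reported range of roughly 0.4 to 0.8 corresponds to moderate-to-large effects under Cohen's conventional benchmarks.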

Some authors have suggested that written feedback may be more effective than oral feedback (Biber et al., 2011). However, the more recent meta-analysis by Wisniewski et al. (2020) found no evidence to support this claim. Unfortunately, research on the effects of written feedback is fairly limited and generally of low quality: studies of written feedback at compulsory school level have concluded that although practitioners are frequently expected to spend extensive amounts of time providing detailed, written responses to their students' work, there is little evidence to suggest that this is effective in improving performance (Elliott et al., 2016).

As noted above, there are links between techniques derived from cognitive science and feedback. Research examining the impacts of quizzing and frequent testing is often rooted in the cognitive science literature, drawing upon theories of active recall and retrieval. The acts of recalling and retrieving information, often known as the 'testing effect', are believed to support the long-term memorisation (and thus the learning) of that information (Dunlosky et al., 2013; Roediger & Karpicke, 2006). Advocates of quizzing and testing as formative assessment tools, though, point to benefits beyond remembering facts or key pieces of information. A 'feedback effect', they argue, can also support the development of conceptual understanding, owing to the opportunities that testing/quizzing provide to practise, develop and address errors or misconceptions when they occur (McDaniel et al., 2015; Vojdanoska et al., 2010). The extent to which this is possible depends upon the design and implementation of the quizzes/tests, and the contexts within which research is carried out. As with findings from cognitive science in general, much of the evidence on testing and quizzing is based upon trials conducted in laboratory settings. Although there have been some studies situated within 'real life' educational settings, there are few which are methodologically robust and even fewer involving post-compulsory educational institutions (Greving & Richter, 2018).

High-quality evidence focusing on the impact of feedback on student academic performance in HE contexts is relatively thin compared to that found at school level. This raises questions about (a) what the evidence at HE level reveals, and (b) the extent to which the evidence about compulsory school-age feedback applies to HE. A recent review of HE variables which influence student attainment highlighted the potential value of different forms of formative assessment and feedback (Schneider & Preckel, 2017). Panadero and Alqassab's (2019) systematic review of anonymous peer feedback included studies focusing on both school-age and higher education students, and tentatively suggests more positive impacts for those experiencing this approach at university.

Other reviews focusing more specifically on feedback in HE have tended to take a more conceptual and perspectives-based approach to understanding these issues. Evans (2013) set out to 'comprehensively explore the nature of assessment feedback within the specific and current contexts of HE' (Evans, 2013, p. 74), acknowledging also that the studies included within her review often draw causal conclusions where the research design or correlational findings do not warrant this. This study built upon earlier influential reviews such as that by Nicol and Macfarlane-Dick (2006), which sought to synthesise and reconceptualise the evidence in order to develop a more student-centred approach to feedback, moving away from it being viewed as merely an act of transmission from teacher to student (see also Carless & Winstone, 2019, for further discussion of this theoretical distinction). The authors present a model and seven principles of 'good feedback' for the development of students' self-regulation of their performance. Although plausible and potentially useful, there is value in evaluating these broad principles, testing the impact of preferred and advocated strategies on students' actual progress and performance.

In summary, there is a considerable lack of research examining the impact of feedback and formative assessment on student learning in HE. To date, there has been no comprehensive study of this important area, presenting challenges for practitioners, institutions and policymakers who wish to adopt evidence-informed feedback and formative assessment practices. Our systematic review addresses this significant gap in the knowledge base and provides important recommendations for those working in HE settings and those researching in this field. It addresses three research questions:

  • What is the evidence of impact on student performance of formative assessment and feedback practices in HE?
  • What and how strong is the evidence of impact for different approaches to feedback?
  • What does the evidence suggest about principles for effective feedback and its implementation?

METHODS

For the purposes of the systematic review, we considered educational performance to refer specifically to university students' attainment in assessments of academic performance. This may refer to their attainment in the subject that they were studying but could also include performance in other, more generic academic skills, for example, essay writing where this has been assessed. We excluded other academic-related or wider outcomes such as attendance, progression, engagement with learning or enjoyment. Although these are important, and may well be linked to good assessment practice in HE, they were beyond our purview.

To identify all potentially relevant studies we searched the following electronic databases: Applied Social Sciences Index and Abstracts (ASSIA); British Educational Index, Educational Abstracts, ERIC (via Scopus); ProQuest dissertations and theses; ProQuest Central (Education, Psychology, Social Sciences, UK & Ireland); Social Sciences Abstracts; ACER; PsychInfo and PsychAbstracts; Ingenta Connect; and Web of Science. In addition, we carried out systematic searching using Google Scholar, retrieving the first 100 results following the searches with each of our criteria. Studies collated from additional hand searches, personal knowledge or that had been 'mined' from other reports were also included at this early stage. The following search string was used:

(Feedback OR assessment*) AND (“Higher education” OR “university student*” OR “college student*” OR “postgraduate” OR “undergraduate”) AND (Trial OR experiment* OR “random*” OR RCT OR “regression discontinuity” OR “causal” OR quasi-experiment*)
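To make the logic of this string concrete, the short sketch below (ours, purely illustrative) applies the same three AND-ed OR-groups to a record's title/abstract text; real database searches additionally handle stemming and the * wildcards, which simple substring matching only approximates:

```python
# Illustrative check of whether a record's text satisfies the Boolean
# search string above. Substring matching stands in for database
# stemming/wildcards; the example title is hypothetical.
FEEDBACK_TERMS = ["feedback", "assessment"]
POPULATION_TERMS = ["higher education", "university student", "college student",
                    "postgraduate", "undergraduate"]
DESIGN_TERMS = ["trial", "experiment", "random", "rct",
                "regression discontinuity", "causal", "quasi-experiment"]

def matches(text: str) -> bool:
    """True if the text contains at least one term from every group."""
    t = text.lower()
    return all(any(term in t for term in group)
               for group in (FEEDBACK_TERMS, POPULATION_TERMS, DESIGN_TERMS))

print(matches("A randomised trial of peer feedback for undergraduate writing"))  # True
```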

For each database, and where possible, we searched for these terms in titles, abstracts, and keywords. Searches were limited to publications in the English language and those published from the year 2000 onwards up until the search date of May 2019. After identification, all texts were downloaded into a reference manager. Following the removal of all duplicates, a total of 12,599 studies were included within this first stage. Screening of all titles was then completed to check for subject/topic relevance; following exclusion of irrelevant studies, we were left with 3290 records. The next stage of screening involved checking titles and abstracts and the application of our eligibility criteria to each piece (Table  1 ).

Inclusion criteria

  • Feedback/formative assessment of any medium (e.g. face-to-face/online), any format (e.g. marks/comments), any source (peer/self/tutor/technology) and any focus (feedback/'feedforward'/multi-directional)
  • Higher education setting, including HE courses in Further Education colleges and, for example, the American 'college'; postgraduate and undergraduate students
  • Outcomes testing a defined area of academic knowledge, e.g. written English tests, written exams, dissertations, in-class quizzes
  • At least two groups (i.e. intervention/control; pre/post intervention; within-subject design, etc.)

Following this process, 188 studies met the full eligibility criteria on inspection of full texts. Next, a process of information extraction for mapping was implemented to identify key details about each study, such as geographical region, subject area, type and source of feedback/formative assessment, and year. Alongside this overview data extraction, we conducted a quality appraisal of each study, targeted at identifying causal evidence of impact. An evidence 'sieve' (Gorard et al., 2017) was used as a coding framework for this, requiring details on: study design, size, sample attrition, outcome quality, and threats to validity. Based upon these design and methodological elements, a 'quality' rating of 1* (lowest quality) through to 4* (highest quality) was given to each study (see Gorard et al., 2017, for full details on the application of this tool). As a result, 27 studies were rated 3* and one was rated 4*; these 28 studies were retained for in-depth analysis in our narrative synthesis (see below). The remainder were mostly 2* (150), with a small number of 1* pieces (10). Our search terms had, by design, removed many studies that did not provide causal evidence and would otherwise have been rated 1*. The full coding spreadsheet of included studies is available upon request from the authors.
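To illustrate how such a sieve-based rating can operate, the sketch below encodes one simple reading of the approach, in which the overall rating is capped by a study's weakest element; the criterion names follow the dimensions listed above, but the scores and the minimum rule are our illustration rather than a published implementation:

```python
# Illustrative sketch of an evidence 'sieve' rating (after Gorard et al., 2017).
# Each study is scored 1-4 on each methodological dimension; the overall
# rating is taken as the minimum, so a study is only as strong as its
# weakest element. Example scores are hypothetical.
CRITERIA = ["design", "size", "attrition", "outcome_quality", "threats"]

def sieve_rating(scores: dict[str, int]) -> int:
    """Return the overall 1*-4* rating as the minimum across criteria."""
    return min(scores[criterion] for criterion in CRITERIA)

example = {"design": 4, "size": 3, "attrition": 4,
           "outcome_quality": 3, "threats": 4}
print(sieve_rating(example))  # -> 3, i.e. a 3* study
```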

In responding to our research question on the causal impact of formative assessment and feedback in higher education, we carried forward for narrative synthesis only the 28 papers rated 3* or 4* for relevance and causal evidence.

Throughout each stage of the above process, checks were undertaken to ensure the quality, consistency and reliability in our judgements on the studies. During screening, each member of the research team took the same sample of titles/abstracts, comparing and discussing these with each other prior to continuation. For the quality appraisal stage, the authors checked inter-rater agreement by working with the same sample of studies to begin with. Borderline judgements were flagged for a second opinion, and these were discussed between the research team. Following this, the project leads also checked a random selection of studies and judgements prior to the final synthesis stages. Figure  1 provides a PRISMA diagram overview of the overall screening process.

FIGURE 1 PRISMA diagram of the screening process

An overview of the characteristics of the 188 eligible studies is provided in Table  2 .

Area Category Frequency %
Feedback source Mixture 26 13.8
Peer 25 13.3
Self 8 4.3
Technology 53 28.2
Tutor 76 40.4
Subject area Arts and humanities 6 3.2
(T)EFL 54 28.7
Medical sciences 15 8.0
Mixed or other 9 4.8
Physical sciences, mathematics, engineering, technology 50 26.6
Social sciences 54 28.7
Year 2001–2005 8 4.3
2006–2010 37 19.7
2011–2015 82 43.6
2016–2019 61 32.5
Region Africa 1 0.5
Asia 24 12.8
Central and South America 4 2.1
Europe 46 24.5
Middle East 27 14.4
North America 80 42.6
Oceania 6 3.2
Education level Postgraduate 10 5.3
Undergraduate 175 93.1
Across both or other 3 1.6

Table  3 also provides an overview of the quality ratings and the criteria used to determine these. Following this, we go on to present a narrative synthesis of the 28 highest-quality studies.

Area Category Freq. %
Overall quality rating 1 (lowest) 10 5.3
2 150 79.8
3 27 14.4
4 (highest) 1 0.5
Findings Unknown/unclear 6 3.2
Negative 2 1.1
Neutral 57 30.3
Positive 123 65.4
Study size Trivial/unclear 1 0.5
Under 15 per comparison group (v. small) 11 5.9
15–49 per CG (small) 114 60.6
50–99 per CG (medium) 38 20.2
100+ per CG (large) 24 12.8
Attrition Huge (50%+) or not reported 10 5.3
High (30–50%) 9 4.8
Moderate (15–30%) 15 8.0
Some (5–15%) 22 11.7
Minimal (0–5%), no evidence of impact on findings 132 70.2
Design strength Weak (flawed/biased comparison) 45 23.9
Moderate (comparison made) 126 67.0
Strong (RCT/strong quasi-experiment) 17 9.0
Outcome quality Uninformative 3 1.6
Poor (issues of validity or appropriateness) 6 3.2
Moderate (tutor assessed; little QA or standardisation/implementation) 135 71.8
Strong (quality assured, standardised, moderated) 43 22.9
V. strong (standardised, pre-specified, independent) 1 0.5
Threats to validity No evidence of limitations 47 25.0
Small limitations (will not affect results) 82 43.6
Moderate limitations (might affect results) 58 30.9
Serious limitations (probably substantial skew of results) 1 0.5

From these 28 studies, we identified the main topics and questions covered in each paper and then created five general thematic categories relating to the type, medium and delivery of feedback: (1) Content, detail and delivery; (2) Timing and spacing; (3) Quizzing and testing; (4) Peers; (5) Technology. See Appendix S1 for a table mapping all studies included in the detailed review against the general thematic areas. Below we provide a narrative synthesis of the studies in each thematic area, providing a description of each study and an overall summary of evidence within the theme. A small number of papers were identified as being relevant to more than one theme, but we report each within the theme to which it was most strongly aligned. Reporting each paper individually ensures that the full range of evidence from our relatively small number of remaining studies is presented openly and transparently for the reader. This approach also serves to highlight the breadth and diversity of studies here, and the challenges that this presents for developing a robust synthesis upon which to draw firm conclusions.

Content, detail and delivery

This section summarises high-quality studies focusing on the content and delivery of feedback and formative assessment. This includes research that examines a range of issues such as whether students receive feedback (or not), as well as the level of detail, amount and content of formative assessment tasks and feedback.

The strongest study in this section, and the only 4*-rated piece within our review, is a natural experiment which examined the effect of providing feedback on past exam performance on future performance (Bandiera et al., 2015). The study used student data from one-year Master's courses at a large UK university. Some departments provided students with feedback on their module exam performance (in the form of their exam scores) immediately following these assessments across the year; a number of other departments did not do this, and only informed students of their exam performance at the end of their course (after all assessments had been completed). The researchers found that the provision of feedback had a positive effect on students' subsequent test scores, with the mean impact corresponding to 13% of a standard deviation in test scores. The impact of the feedback was stronger for more able students and for students who had less information to start with about the academic environment, whereas no subset of individuals was found to be discouraged by feedback. This study indicates the importance and potential impact of providing timely information to students on their individual performance.

De Paola and Scoppa ( 2011 ) evaluated the impact of including an additional intermediate exam and providing students with information about their results prior to the final exam. Students in a control group took the final exam at the end of the module (without the additional mid-module exam). Participants were 344 students taking economics classes as part of a Business and Administration degree at a university in Italy. Half of the students were randomly allocated to the treatment group (mid-term exam) and half to the control group (final exam only). The results show that students undertaking the intermediate exam perform better both in terms of the probability of passing the exams and of grades obtained. High ability students appear to benefit more from the treatment. The design of the experiment also allowed the authors to understand whether this impact was due to ‘workload division or commitment’ effects or from ‘feedback provision’ effects. They found that the estimated treatment impact was due exclusively to the first effect, whereas the feedback provision had no positive effect on performance.

A number of studies within this section focus on the amount and/or type of feedback provided to students. This might include whether students receive feedback or not, the level of detail provided, or the use of written feedback and scores/grades. Lipnevich and Smith (2009), for example, examined the effects of providing no feedback versus detailed feedback to a large cohort of psychology students at a US university. Additionally, those provided with detailed feedback were led to believe that it was either provided by the course instructor or computer generated. These conditions were also crossed with the receipt of a numerical grade (or not) and a statement of praise (or not). All students were required to write a single-question essay at the beginning of their course. Detailed feedback on the essay, specific to each individual's work, was found to be strongly related to student improvement in essay scores, with the influence of grades and praise providing more mixed results: receipt of a tentative grade depressed performance, although this effect was ameliorated if accompanied by a statement of praise. Overall, detailed, descriptive feedback was found to be most effective when given alone, unaccompanied by grades or praise. The perceived source of the feedback (the computer or the instructor) had little impact on the results.

Butler et al. ( 2008 ) examined the effect of immediate feedback compared with no feedback (until after completion of the post-test). Their experiment looked at feedback on regular online tests set as homework, rather than on a single formative task completed in class. Five sections of a mathematics course at a US university (total participants n  = 373) were randomly allocated to either an immediate feedback or no feedback condition. Students in the immediate feedback group received information straight after completing each quiz. This meant that they could see their score and which items were answered incorrectly. Correct answers were not given to encourage the students to seek support with understanding their errors. The control group received no feedback (either scores or details of correct/incorrect responses) during the series of online quizzes; instead, they only found out this information after the end of the experiment. Results showed that students who received immediate feedback on quizzes had higher quiz and final test averages than those in the control group.

Heckler and Mikula (2016) investigated levels of feedback complexity, studying the effects of 'knowledge of correct response' (KCR) feedback and 'elaborated feedback' (a general explanation) both separately and combined. Their study included 450 physics students learning about vector mathematics. Their findings indicated that elaborated feedback was most effective, especially for students with lower prior knowledge and lower course grades; in contrast, KCR feedback was less effective for these students. Combining both kinds of feedback had no additional impact on students' performance compared to elaborated feedback alone. In a similar study, Petrović et al. (2017) also examined the impact of providing KCR or elaborated feedback (EF), in comparison with a control group who received no formative assessments or feedback. Participants were three consecutive cohorts of students on a digital processing course at the University of Zagreb (control n = 70; KCR n = 34; EF n = 35). As the authors hypothesised, the results, based upon three summative assessments across the module, showed considerably higher performance for the two experimental feedback groups compared with the control group. Further analysis also showed that those in the EF group performed better than those in the KCR group in the summative assessments. Although there was no difference between the two experimental groups on the formative assessments, the authors suggest that the more detailed feedback is likely to have supported improved performance on the more complex tasks required as part of the summative assessments.

Two other 3* studies focused predominantly on the content of the feedback provided. The first, an experiment by Scalise et al. (2018) with chemistry students at a single US university, included two treatment groups: the first received additional conceptual questions in their online homework and the second received these questions plus differentiated answer feedback. Students receiving these interventions were compared with a business-as-usual group who received the usual online homework and feedback for the course. Both treatment groups showed increased gains in learning outcomes over the comparison group. However, there were no differences between the two intervention groups, suggesting that the additional differentiated answer feedback may not have impacted performance any more than the use of conceptual questions on their own.

Like the above study which used additional conceptual questions to promote learning, Lee ( 2011 ) examined the use of learning strategy prompts and metacognitive feedback on students’ outcomes. In this doctoral study, 261 undergraduate Education students were randomly allocated to three groups. One intervention group received learning strategy prompts, written statements which directed students to use different learning strategies when studying instructional material. A second intervention group received the learning prompts plus metacognitive feedback—information given to learners about their decisions regarding which cognitive strategies to use and how to use them. The third group acted as a comparison group. Two criterion tests measuring recall and comprehension served as post-tests. The study found that the participants who were given learning strategy prompts with metacognitive feedback scored significantly higher in the recall and comprehension tests after controlling for their prior domain knowledge. Those who only received the prompts (without the metacognitive feedback) scored no higher than the control group.

In a study with a different focus to those above, Mikheeva et al. ( 2019 ) investigated the role of politeness when giving instructions and feedback. In an online mathematics course at a German university, 277 students were randomly assigned to four groups: polite instructions and polite feedback ( n  = 64); direct instructions and polite feedback ( n  = 90); polite instructions and direct feedback ( n  = 57) and direct instructions and direct feedback ( n  = 66). Directness and politeness were characterised by factors such as numbers of words and vocabulary choices, and both instructions and feedback were provided online and in written form. Findings showed that politeness in instructions did not have an impact on outcomes, whereas receiving polite feedback did positively influence students’ scores in the chapter tests and final post-tests.

As the above summaries highlight, there is considerable variation within this theme. The nature of these studies and their contexts is diverse; however, some overarching conclusions can still be drawn. Perhaps unsurprisingly, we see evidence supporting the use of simple feedback (as opposed to no feedback) (Bandiera et al., 2015; Butler et al., 2008; Lipnevich & Smith, 2009; Petrović et al., 2017). In some settings, more detailed individual feedback is also shown to be effective, perhaps particularly for those with lower starting points in terms of attainment (Heckler & Mikula, 2016) and when completing more complex tasks (Petrović et al., 2017). Evidence around the use of grades and praise is more mixed, though (Lipnevich & Smith, 2009), and the study by De Paola and Scoppa (2011) indicates that including an additional assessment point may improve students' outcomes, but that this impact is not attributable to the feedback provided. There is little information about the influence of the source of feedback (i.e. computer or instructor), although the findings from the studies here indicate that both can be effective. In terms of delivery, Mikheeva et al.'s (2019) research suggests the importance of politeness in feedback provision. Work by Lee (2011) and Scalise et al. (2018) also points to potential promise for feedback activity which encourages students to spend time thinking more deeply about their work (e.g. via metacognitive strategies).

Timing and spacing

The timing of feedback provided following formative assessment activities emerged as one theme within the higher-quality studies. This tended to overlap both with issues raised in the section above (e.g. at what point feedback was provided) and with the studies focusing on quizzing/testing, where there was an emphasis on frequent retrieval-based tasks to assess and feed back on learning. Here we discuss the two studies that foreground the timing of feedback (immediate versus delayed) and its effect on student attainment.

Both studies consider the role of feedback timing during online formative assessment activities, examining the effect of giving feedback immediately (i.e. as students respond to each question item) or with a delay (i.e. following completion of the task). Van der Kleij et al. (2012) conducted a study with economics students at a university in the Netherlands. They randomly allocated students (n = 152) from nine classes to three different feedback condition groups. Following a formative assessment task involving an online, multiple choice question (MCQ) test, students received either immediate knowledge of correct response (KCR) and elaborated feedback; delayed KCR and elaborated feedback; or delayed knowledge of results (KR) with no additional feedback. An online summative assessment, used as a post-test, was administered immediately after the formative task. Findings indicate no significant difference in post-test achievement between the feedback conditions.

In a similar study, Gaona et al. (2018) also considered the impact of immediate feedback provided on short-answer online quizzes. Their research, a quasi-experiment involving 5507 mathematics students across four university campuses in Chile, provided feedback on each question, including whether the response given was correct/incorrect, plus a step-by-step account of how to solve the question. One group of students received this feedback immediately after responding to each question (immediate), whereas the other group had to complete and submit the whole quiz before receiving the feedback on each question (deferred). Findings from the study indicate that the Grade Point Average (GPA) was lower overall for students who received immediate feedback. However, the authors urge caution in interpreting this, pointing to the fact that students were allowed unlimited attempts at each quiz, and where they answered a question incorrectly, they were likely to start the quiz again. Further analyses show that students receiving immediate feedback spent longer on the quizzes, took more attempts and achieved slightly higher maximum ratings. The authors suggest that these potentially positive outcomes need to be considered alongside the inefficiency and limited individual academic gain of this approach for students.

The findings from the studies above indicate a fairly unclear picture in relation to the value of immediate versus delayed feedback. This is echoed in a number of the 2* papers exploring issues of timing as well, signalling a need for further work in this area, and across different contexts and subject disciplines.

Quizzing and testing

Eight of the 28 higher-quality studies focused on quizzing or frequent testing, and its impact on student attainment. The majority of these include participants and content from science or maths-based subjects.

Four studies examine the impact of using quizzing/tests compared with either not using them or using alternative approaches. Peterson and Siadat (2009) evaluated the effect of frequent, cumulative, time-restricted multiple-choice quizzes with immediate constructive feedback on the achievement of mathematics students at a college in Chicago, USA. Students were in groups which received either weekly or twice-weekly quizzes as formative assessment, or in a control group which received no formative assessment. After four months, the results indicated that both quizzing groups performed better in their summative examinations than the control group; doing the quizzes twice a week rather than once appeared to have no additional benefit in terms of performance. In a similar study by Domenech, Blazquez, de la Poza and Munoz-Miquel (2015), students of microeconomics at a Spanish university participated in 10 short, handwritten, in-class tests across the course of one semester. These were cumulative and alternated between MCQ and problem-based, essay tests. To provide immediate feedback, suggested responses were given to students straight after each test and marks were made available on the day of the test. When compared with groups not participating in the frequent testing approach, the findings indicate stronger performance on the final module exam for the testing group (an increase of 9.7 percentage points when control variables were included in the regression).

Pennebaker, Gosling and Ferrell (2013) report the findings from a quasi-experiment examining the academic performance of students taking daily online, in-class quizzes which provided immediate and personalised feedback. Psychology students (n = 901) completed 26 short (10-minute, eight MCQ items) tests during one semester; these contributed 86% of the final grade for the module. Student performance was compared with the same data for classes previously taught by the same instructor (n = 935) but which had not used the frequent quizzing approach. Instead, this comparison group had completed four longer, written exams spread through the course of the term. Findings indicate a somewhat mixed picture. Students in the frequent testing group received lower grades overall than their predecessors in the control group. However, the authors posit that this is at least in part due to inflated (upwardly curved) grades given to these earlier cohorts. Further analyses, including comparing results from the same questions used year-on-year, suggest that the experimental group's grades were higher by 0.59 of a letter grade. Using this as a constant, they go on to argue that, when this is factored in, students in the intervention group performed better in their final assessment and in other classes too. However, the challenges with the outcome measures mean that these results need to be interpreted cautiously.

A recent doctoral study by Sartain ( 2018 ) examined the effect of frequent testing on the exam scores of undergraduate nursing students at a US university. Four cohorts of students ( n  = 440) were allocated to either quizzing or non-quizzing groups with two cohorts per group. The non-quizzing group were required to undertake traditional unit exams and a comprehensive final assessment; the quizzing group were required to complete these as well but also had the addition of quizzes as part of their required coursework. One cohort within the quizzing group received instructions and information about the value of quizzing; the other did not. Analyses suggest that quizzing is linked to a positive impact on both unit and final exam scores, and that this was particularly the case for lower and middle achievers. There was no difference in attainment between the quizzing group who received the additional information on quizzing and the group that did not. The authors argue, therefore, that quizzing is an effective tool to help improve students’ grades, regardless of whether students are made aware of its benefits or not.

Dobson et al. ( 2015 ) examined the extent to which testing—along with the reading of material—promoted greater recall and improved performance. Kinesiology students ( n  = 88) studied information relating to skeletal muscles, varying by three levels of familiarity (familiar, mixed information, unfamiliar). All students used both the repeated reading approach (R-R-R-R) and the read-test approach (R-T-R-T). The first studying strategy required students to read through a set of information on muscles four consecutive times. The second strategy asked students to first read through the information for 2 min and then spend 2 min testing themselves (through free recall) and repeat this process once. During the testing portions of the R-T-R-T strategy, students were unable to see the muscle information. Participants used the two strategies to study six sets of muscles in a sequential order and during just one studying session. Learning was evaluated via free recall assessments administered immediately after studying and again after a one-week delay and a three-week delay. Across those three assessments, the read-only strategy resulted in mean scores of 29.3, 15.2 and 5.3 for the familiar, mixed and unfamiliar information, respectively, whereas the testing-based strategy produced scores of 34.6, 16.9 and 8.3, respectively. The results indicate that the testing-based strategy produced greater recall immediately and with a three-week delay, regardless of the participants' level of familiarity with the muscle information.

Through two experiments at a US university, McDaniel et al. (2015) also examined the effects of different sequences of testing and studying. For the first experiment, participants (n = 85) read a research methods text. Two days later they were assigned either to: a first condition that involved repeatedly restudying the material three times (SSS); a second condition where they engaged in a test-restudy-test sequence (TST); or a third condition where they were tested on the studied material three times (TTT). All participants then received a final test five days later. Findings showed that both the TST and TTT conditions produced better final performance than the SSS condition; however, TST was not better than TTT. In the second experiment (n = 124), the TST condition was altered so that after the first test, correct/incorrect feedback was provided, and the test and feedback were available during the study phase. With this protocol, TST produced better learning and retention than did TTT or SSS. The authors highlight the correct/incorrect feedback given to participants after the first test as the 'critical modifier' here: it provided students with guidance on which areas of study they needed to revisit before the second test to improve their performance.

A study by Rezaei (2015) examines the impact of frequent quizzing on both an individual and a collaborative basis. The study included 288 research methods students at a university in California, USA. It compared groups of students taking the course between 2009 and 2014, all taught by the same instructor but using different assessment methods. The first group (control) followed the traditional approach of a mid-term test, final exam and research project. In the second group, the instructor also provided short (20-item), open-book online quizzes after each lecture. The third group completed all of the same elements except that they were encouraged to take their quizzes in pairs. The findings indicate that the regular quizzing had a substantial positive impact on final grades compared to the no-quizzing condition. The authors note that there appeared to be a positive short-term effect (through improvements in the quizzes) and a longer-term effect too, as evidenced in the end-of-term exam. The group allowed to take their quizzes in pairs also went on to perform significantly better than both the control group and the individual quizzing group, highlighting the potential promise of this kind of collaborative learning.

In two experiments on an educational psychology course, Vogler and Robinson ( 2016 ) also examine the effect of collaborative formative assessment. Their team-based testing (TBT) approach allowed students to work together to develop a consensus around test responses in three separate tests, answering until they were correct. As a comparison the students took another three tests individually with feedback. Students were then tested on this content two weeks later and again after two months. Results indicated that the TBT students scored higher when retested two months later than those who took the test individually.

The studies summarised above indicate considerable promise for quizzing and testing approaches. Evidence is presented for the benefits of using quizzing/testing within HE classrooms. Moreover, including tests before and after studying content, as well as asking students to complete them collaboratively, also appears to be a promising approach. Quizzing and testing is one of the more prevalent areas of assessment and feedback research that we found through our review. Although only a small number of studies were rated as 3* and summarised in this section, it is worth noting that positive findings were apparent from a number of 2* studies too. This evidence adds to the broader picture regarding this approach and its impact, and supports the suggestion that quizzing/testing is a 'good bet' for supporting student learning and attainment in HE. What is less clear from the studies here, however, are the mechanisms that might underpin the effectiveness of quizzes/low-stakes testing. We do not know, for example, whether it is the act of participating in these activities (i.e. the process of retrieval and recall that they require) or the feedback provided as a result of them that drives students' improved learning and outcomes (see, e.g., Halamish and Bjork (2011), who conduct a series of experiments examining the former). We return to the question of testing and the role of feedback within it as an operative mechanism in the final section of the article.

Peers

This section focuses on formative assessment activities or feedback which involve students working together to understand and develop their learning. Our review found five 3* studies relating to peer assessment or feedback. These examined the impacts of engaging with different kinds of peer review or feedback activities, including the use of ratings and qualitative feedback, the provision of peer review training, and anonymous versus identifiable peer review. Overall, the studies in this area point to some potentially promising findings for strategies to support students' academic attainment. We discuss each one in more detail below.

Xiao and Lucking (2008) conducted a quasi-experiment examining the impact of peer assessment on students' writing performance on a foundation teacher education course. A total of 232 online and campus students were divided into two groups: one received ratings (in the form of numerical scores) on different aspects of their peers' writing; the other group received ratings and detailed qualitative feedback. Using the interactive software available on the Wiki online platform, four students were designated to assess each student's assignment. Following this first round of peer assessment, students were advised to rework their drafts and resubmit them. A further round of scoring then took place, with multiple students assessing each piece of work. Final grades (and those used as the outcome measure of the trial) were awarded by instructors. Prior to the written tasks and assessment process, all students received a short briefing on peer assessment and the opportunity to practise scoring. Findings indicate that students in the scoring plus detailed written feedback group gained a small but significant improvement in their writing compared to the group who received peer scores alone.

In a subsequent doctoral study, Xiao (2011) sought to examine the effects of peer-assessment skill training on students' writing performance. A quasi-experimental design was employed, including 473 foundation education students. Students from the first semester of the course (Group A, Fall semester 2007) formed the comparison group; they completed tasks as usual, using peer assessment but with no in-depth peer assessment skill training. A second group (Group B, Spring semester 2008) received principle-based peer assessment training, including two weeks of instruction. Principle-based peer assessment focused on the rationale for the approach, assessment criteria, and ways to give effective feedback and judge peer performance. A third group (Group C, Fall semester 2008) received target-criteria peer assessment training, also including two weeks of instruction. This involved the same elements as the principle-based approach but was more closely integrated into the course content, more closely linked to the major assignment, and required students to complete peer assessment skill-focused exercises outside the classroom. Using a similar Wiki article approach to the study above, students' pre- and post-scores in each group were compared. Findings show that students in both Groups B and C (who received in-depth peer assessment training) outperformed those in Group A. There were no differences, however, between the two intervention groups, indicating that the more course-focused, target-based approach was no more effective than the more generic, principles-based approach.

Zhang's (2018) study also considers the impact of peer feedback on writing performance. This doctoral study included 198 English-major students at a Chinese university. Eight intact classes were randomly assigned either to receive traditional, instructor-led feedback (control group) or peer feedback, including training on how to use and generate peer feedback (intervention group). The four classes in the intervention group did not receive instructor feedback for the 15 weeks of the study. Students completed initial assessments and a number of draft tasks, with requirements to improve these following feedback from either the instructor (control group) or their peers (intervention group). For the intervention group, peer feedback was delivered both orally and in writing. The author notes a difference in writing ability and English language proficiency between the two groups at the outset. However, even when taking these variables into account, they report a greater improvement for the treatment group. Analyses indicated that the quality of feedback that students received from each other was associated with their subsequent final grades; this was particularly the case when students had the opportunity to reflect upon the feedback they had received from peers. Although potentially promising, caution is urged here due to the large effect size for academic performance, perhaps influenced by the differences between groups at the outset and the fact that this was a relatively short, intensive intervention.

A study by Crowe et al. (2015) tested the effect of in-class student peer review in a quantitative research methods course. Across four sections of the course (170 students in total), two sections incorporated in-class peer review and two did not. For the two sections with peer review, content scheduled for the days on which peer review was used in class was delivered through an online course management system. Although the peer review activities took place in class, with the tutor present, the authors do not describe students being explicitly trained in peer review approaches. The findings show that in-class peer review did not improve final grades or final performance on learning outcomes for the module. Nor did it affect the difference in performance between the drafts and final assignments that measured student learning objectives. Crucially, the authors also note the substantial amount of time that in-class peer review took, which meant that class delivery/teaching time was reduced, with a potentially negative impact on students' ability to access and engage with the full module content.

A final 3* study, by Lu and Bol (2007), considered the effect of anonymous versus identifiable online peer review on writing performance. Participants were 92 undergraduate freshmen in four English composition classes enrolled in the Fall semesters of 2003 and 2004. The same instructor taught all four classes, and in each semester one class was assigned to the anonymous e-peer review group and the other to the identifiable e-peer review group. All other elements, including course content, assignments, demands and classroom instruction, remained constant. Students completed eight e-peer reviewed written assignments through the term. Those in the anonymous group received feedback from two unidentifiable peers; those in the identifiable group worked in groups of three and reviewed each other's work. In both groups, reviewers provided suggested scores for the work, completed some editing and made suggestions for improvement. The results from both semesters showed that students participating in anonymous e-peer review performed better on the writing performance task. These students tended to provide more critical comments per draft and slightly lower scores than their colleagues in the identifiable group.

High-quality evidence on the impact of peer assessment and feedback is fairly limited. This section, however, does highlight some promising findings and should be read in conjunction with the subsection above, which indicates the potential benefits of collaborative quizzing and testing. Providing training for peer assessment appears to be useful in terms of promoting attainment, and Lu and Bol's (2007) study also indicates the possibilities for anonymised peer feedback. In addition to the research discussed here, there were a further eighteen 2* papers focusing on peer assessment and feedback. These were largely small-scale studies and mostly, like the studies above, focused on improving students' writing, often in English as a Foreign Language or social science settings. The majority of these (n = 12 studies) report some positive findings and also highlight other benefits such as student engagement. Again, this suggests some degree of promise, albeit with the need to consider the complexities of implementing peer feedback effectively and the potential cost of substituting instructional time for peer feedback (Crowe et al., 2015; Evans, 2013).

Technology-based feedback

This section describes the higher-quality (3*) studies which have an emphasis on the use of technology in providing feedback to students. Studies included here focus on issues related to web-based feedback compared with paper or face-to-face feedback; the use of different web-based feedback systems for providing personalised performance information; and the use of technology such as video podcasting for providing feedback. We acknowledge that these are not the only technology-related studies included in the review. The use of online approaches for formative assessment and feedback can also be found in each of the other subsections; however, the studies described in this section are those that foreground the technology use and where it is the technology itself that is being explicitly assessed for impact.

Mitra and Barua (2015) examined the impact of online formative assessment and feedback versus a paper version combined with face-to-face feedback. The authors conducted a quasi-experimental trial with two groups of medical students in a single Malaysian university. The control group (n = 102) undertook a single paper-based formative MCQ test relating to the musculoskeletal module of their course and received whole-group, face-to-face feedback on their performance. The experimental group (n = 65) instead received three web-based formative MCQ tests across the same five-week module and received automated online feedback. Although students in the experimental group appeared to do better in the formative tests, there was no difference in overall performance in the final summative assessment (taken by all students in the study).

In a further study, Richards-Babb et al. (2018) examined the use of an adaptive web-based feedback system for setting and responding to chemistry students' homework. This system provided a more personalised approach to the completion of homework tasks and tests, giving students response-specific feedback on their work. This approach was compared with a traditional-responsive system (also online) in which students were required to work through the same set of questions in the same order, regardless of their current level of mastery of the subject; feedback in this approach also emphasised the need to correct mistakes. Using propensity score matching (n = 6114 pairs) to create comparable groups, the authors compared the outcomes of students in the adaptive-responsive cohorts with those in the earlier traditional-responsive cohorts. The findings indicate that the adaptive system increased the likelihood of achieving a higher final grade, particularly for students with average or below-average prior attainment. Despite these potentially positive results, an accompanying attitudes survey showed that students reported less favourable attitudes towards the adaptive system than towards the traditional-responsive approach. This highlights a potential trade-off that HE lecturers sometimes face: a strategy that may support increased learning is not necessarily going to be received positively by students, particularly perhaps if it requires additional work or effort. Equally, approaches which focus on providing engagement and enjoyment will not necessarily provide the best opportunities to maximise learning.
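For readers unfamiliar with the matching approach used in studies of this kind, the sketch below illustrates, on synthetic data, the general logic of 1:1 nearest-neighbour propensity score matching. It is a minimal illustration of the technique only, not a reconstruction of Richards-Babb et al.'s analysis; all variable names (prior_gpa, math_score, adaptive, final_grade) are hypothetical.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic cohort data, for demonstration only.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "prior_gpa": rng.normal(3.0, 0.5, n),
    "math_score": rng.normal(70, 10, n),
})
# Selection into the 'adaptive' cohort depends on prior attainment,
# so a naive group comparison would be biased.
p_treat = 1 / (1 + np.exp(-2 * (df["prior_gpa"] - 3.0)))
df["adaptive"] = (rng.random(n) < p_treat).astype(int)
df["final_grade"] = (60 + 5 * df["prior_gpa"] + 3 * df["adaptive"]
                     + rng.normal(0, 5, n))

# 1. Estimate each student's propensity to be in the adaptive cohort.
covariates = ["prior_gpa", "math_score"]
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["adaptive"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated student to the control student with the nearest
#    propensity score (1:1 nearest-neighbour matching, with replacement).
treated = df[df["adaptive"] == 1]
control = df[df["adaptive"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare mean outcomes across the matched pairs.
effect = treated["final_grade"].mean() - matched_control["final_grade"].mean()
print(f"Estimated effect of the adaptive system on final grade: {effect:.2f}")

The key idea is that, after matching on the estimated probability of receiving the adaptive system, treated and matched control students are comparable on the observed covariates, so the remaining difference in outcomes is a more credible estimate of the system's effect than a raw group comparison.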

Chen (2011) examined the impact of an online personalised diagnosis and feedback tool which provides information to students on their learning paths. Computer programming students at a university in Taiwan (n = 145) were randomly allocated either to an experimental group (n = 72), who received the personalised online system following the completion of a formative test, or to a control group (n = 73), who received just their test scores and no further feedback or engagement with the web platform. The personalised feedback system uses an algorithm (known as Pathfinder) to give students detailed information on the 'knowledge pathway' taken during the test and to indicate misconceptions that occurred. Comparisons of post-test scores show a mean of 58.9 (SD = 15.5) for the control group and 68.2 (SD = 14.75) for the experimental group. However, the authors urge caution: despite this potentially promising result, the experiment examined only a single episode of using the online feedback tool.
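To put these raw scores into context, a standardised mean difference can be computed from the reported means and standard deviations. This is our own back-of-the-envelope calculation, not a figure reported by the author, and it assumes the two groups (n = 72 and n = 73) can be weighted equally:

$$d = \frac{\bar{x}_{E} - \bar{x}_{C}}{\sqrt{(s_{E}^{2} + s_{C}^{2})/2}} = \frac{68.2 - 58.9}{\sqrt{(14.75^{2} + 15.5^{2})/2}} \approx \frac{9.3}{15.1} \approx 0.61$$

By conventional benchmarks this would be a medium-to-large effect, which underlines why the author's caveat matters: effects of this size from a single episode of use often shrink under replication.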

A final study in this section reports two experiments involving video-based feedback (Leton et al., 2018). The first experiment tested the impact of providing knowledge of correct responses (KCR) (i.e. ticks/crosses) coupled with more detailed video podcast feedback, compared with KCR alone. The second experiment then compared the KCR + video podcast condition with KCR + illustrated text feedback (i.e. text-based explanations for the questions/responses). Participants in the first experiment were 44 engineering students taking a statistics course at a university in Madrid, Spain. After attending one theoretical and one practical lecture, students completed an online MCQ test using the Siette web platform and received either KCR or KCR + video feedback. Results indicated that those in the experimental group achieved higher results in the post-test assessment. However, by this point participant numbers were small, with just 16 remaining in the intervention group and 19 in the control group. The second experiment, undertaken in the following year, included more students (n = 112), allocated either to the KCR + video feedback condition or to the KCR + illustrated text feedback condition. The results showed no difference in post-test performance between these two groups, and also no difference in students' attitudes towards the different feedback methods.

The findings here indicate a rather mixed picture in relation to the use of online or video-based feedback. Where there are positive outcomes, these are often caveated by implementation or methodological issues. Moreover, there are challenges in determining the extent to which any impact (positive or negative) is associated with the use of technology as a mode for delivering feedback or as a strategy for generating and providing formative assessment and feedback (as seen, for example, with online quizzing). There were a further 42 studies with a 2* rating which use technology in some way for the provision of feedback; as with the studies reported above, however, findings from these are very mixed. The studies indicate real enthusiasm for employing learning technologies for feedback provision but little in the way of strong theoretical or empirical grounds on which to test effectiveness. These issues, plus the heterogeneous nature of the various technology-focused studies, make it difficult to draw any firm conclusions about the benefits of using these kinds of approaches to deliver formative assessment and feedback in university.

DISCUSSION

This review has examined the impact of formative assessment and feedback on HE learning. The study set out to understand and summarise the evidence for these strategies and their impact on student performance. We identified 28 robust studies providing satisfactory causal evidence testing a form or quality of formative assessment or feedback. In this section we discuss the findings from these studies and present conclusions on the strength of this evidence and the potential implications for policymakers, practitioners and researchers in the field.

In line with previous research, the evidence from our review provides support for the use of formative assessment and feedback in promoting attainment in HE. This will be reassuring for those HE lecturers who seek to base their practice on evidence-informed approaches to teaching and learning. Yet, despite this unsurprising high-level finding, we still know relatively little about the types, modes and features of these approaches that are likely to be most effective. The studies included in this review point towards some potentially promising strategies, including, for example, quizzing/testing and peer feedback. However, the limited and patchy nature of the research, plus the lack of methodological robustness in many of the studies, means that it is difficult to offer firmer conclusions. Of the 188 records included in the final extraction and quality rating processes, 126 rely on very small samples, usually drawn from a single department or a single lecturer within one institution. Often the studies appear opportunistic in nature, rather than being designed deliberately and with methodological rigour as a central consideration. This is perhaps partly related to the nature of HE teaching and research responsibilities for lecturers, and possibly linked to the challenges of gaining funding for more ambitious trials. Nevertheless, to provide a stronger evidence base, our review points to the need for a much more systematic and scaled approach to examining these vital areas of teaching and learning within HE. We discuss the possibilities for this further below.

The evidence relating to quizzing and testing suggests that embedding these approaches as a way of retrieving knowledge and identifying misconceptions or errors (for both student and teacher) can be beneficial. We find that the majority of studies reporting the use of these approaches are set in science or mathematics subjects. This is perhaps because such strategies typically focus on the recall of 'facts' or key pieces of knowledge, often associated with more technical learning. There were no studies in this review, across any quality rating, which tested the use of quizzing/testing within arts and humanities subjects. In addition, quizzing/testing approaches are arguably more tightly focused and more easily defined or operationalised than some other formative assessment and feedback approaches. This makes them more straightforward and attractive for the kinds of causal designs that we were looking for in this review. But while there may be more studies focusing on these strategies, most with positive findings, the investigations still do little to probe whether it is the quiz (i.e. the process of retrieval) or the feedback received as part of it that is the mechanism supporting improved attainment. Although some of the studies (in this section, and across the systematic review as a whole) have strong theoretical foundations, many do not. Similarly, a number of the studies with 2* and 3* ratings have low ecological validity (e.g. laboratory studies), again making it difficult for HE lecturers to find rich evidence that is relevant to their own setting. Although this is certainly a promising area of research to inform teaching, there is a need to continue developing a more comprehensive evidence base from which to work.

Through our themes, we have begun to piece together a framework based on causal evidence. As we note above, this is necessarily limited and partial owing to the lack of research in this area, and further empirical studies are needed. When we compare the extent of the evidence on HE with that at school level, we find considerable disparity. Reviews and meta-analyses of research involving compulsory school-age pupils strongly suggest the importance of formative assessment and feedback for supporting student progress and attainment. These findings, and the extent of the evidence upon which they are based, are not reflected in the HE literature. This seems curious given the size of the sector, the great pressure on universities to innovate, and the fact that there is often money available for teaching and research initiatives. Unfortunately, though, what appears to happen—based upon the published work that we have assessed through this review—is the development of myriad new strategies, which are then rationalised and advocated rather than rigorously piloted, tested and (if successful) scaled. The approaches are frequently not rooted in strong existing evidence, and often focus on outcomes other than academic progress, such as student satisfaction, enjoyment or engagement. Small-scale evaluations of these approaches are sometimes carried out by those who develop them (and who are therefore invested in highlighting positive findings), but these are rarely designed with a view to making strong causal claims which add to and build on the existing knowledge base. Additional methodological issues arise when we consider the measures used to assess student attainment and progress. The majority of the studies included here used tutor-devised assessments rather than more standardised approaches. This is not particularly surprising, given that university assessment more broadly tends to be designed and implemented by tutors; externally assessed standardised tests or exams, as we see in the school sector, are much less common. This has potential implications for the reliability and validity of the results obtained through the studies included here, and also perhaps makes it more challenging to run multi-site trials with multiple universities using the same standardised pre/post-tests.

There are a number of key issues here. The first is the extent to which those teaching in HE are expected to undertake and publish research themselves, and the support available for doing so. Work from the school sector has highlighted the benefits and possibilities of engaging teacher practitioners in the development of a more evidence-informed system (Churches et al., 2020). In the context of universities, where teaching staff are often required to carry out research, the studies described in this review are likely to be useful and potentially informative. But a tension lies in the fact that this approach is not conducive to developing the broader, stronger evidence base that can be relied upon to inform teaching and learning policy on a larger scale. We are of the view that HE teaching is better advanced through the identification, testing and development of a set of key principles for effective HE teaching and learning, which lecturers can master and contextualise (in relation to subject and institutional context), than through the pursuit of novel, 'innovative' approaches accompanied by small-scale studies of their impact. Indeed, the higher-quality studies reported in this review have largely focused on the fundamentals of teaching and learning, such as the detail, timing, quality and delivery of feedback; these studies, and this review, are an important first step in building this HE-level evidence base. As noted above, the growing body of work emanating from pure and applied cognitive science holds promise for developing and explicating this evidence base (Agarwal et al., 2012; Churches et al., 2020). There also appears to be great value in developing this evidence base side-by-side with that for compulsory school-age pupils, both for feedback and formative assessment and for teaching and learning more generally. While these are very different contexts, many of the fundamentals—including the value of high-quality communication, relationships and subject knowledge—are likely to remain important.

One of the key differences between the contexts of universities and schools lies in the aims and purposes of teaching and learning. Put simply, schools are usually expected to prioritise young people's academic progress. Alongside other aims, such as promoting children's safety, well-being and social outcomes, they are measured using performance outcomes (i.e. exam grades) and are held to account on this basis. In HE, this is less the case. Student attainment is not used as the main measure of university 'success'; instead, a wider range of factors, including student satisfaction and progression, are collated via instruments such as the National Student Survey (in the UK) and included as component measures in the Teaching Excellence Framework or university league tables. Within the current quasi-marketised system of HE, high value is placed upon students' perceptions of their experience, course satisfaction and value-for-money (Furedi, 2011), as this is what is measured and used for accountability purposes (OfS, 2019a, 2019b). There is arguably little incentive or opportunity to develop teaching and learning strategies focused on improving academic outcomes. In the UK, assessment and feedback as specific areas have typically received lower scores from students than other areas of university life (OfS, 2019a). This has led to universities being encouraged to enhance provision in this area (Nicol, 2010; OfS, 2019b). Although this focus on improvement is to be welcomed, the tension remains: universities are encouraged to improve students' feelings of satisfaction with these areas, rather than to embed approaches that may also contribute to academic progress and performance.

Finally, we return to the issue of evidence-informed teaching and learning. If universities (and the teachers in them) wish to provide the best opportunities for their students to achieve and reach their academic potential, then it is vital that policies and practices are focused on evidence-informed approaches. The government, regulators (such as the Office for Students in England) and other strategic organisations in the sector could take a stronger role in supporting this stance and in investing resources. Tools and resources could be developed, similar to the school-based Teaching and Learning Toolkit (EEF, 2020) in England or the What Works Clearinghouse (WWC, 2020) in the USA, to inform staff of useful strategies. Moreover, the training of university teaching staff should model and foster the use of evidence to inform practice. That is not to say that there should be a 'one best way' approach to teaching in HE: practitioner autonomy and professional judgement are important elements of teaching in the university sector. However, we do think there is an argument for widely sharing and promoting effective practices that could enhance students' opportunities to learn. Crucially, though, we would also suggest that more research evidence is needed upon which to draw. Without this, being 'evidence-informed' is much more challenging, as we do not know what the 'best bets' are (Elliot Major & Higgins, 2019) and have a limited pool of information on which to base decisions. National bodies such as the OfS and Universities UK could play a vital and pioneering role in promoting, commissioning and funding larger-scale, methodologically rigorous and independent research studies in key areas of teaching and learning. Universities could also be encouraged and incentivised to participate in these, engaging both practitioners and students in the pursuit of evidence-informed practice and genuinely impactful research.

LIMITATIONS OF THE REVIEW

Although this systematic review is robustly designed, reports its findings fully and transparently, and synthesises results and conclusions across a number of key areas of formative assessment and feedback, like all studies of this kind it has limitations. The most significant relates to the parameters of the review: our search terms and inclusion/exclusion criteria for date, language and research design could have resulted in useful studies—which might have contributed to our knowledge and understanding—being excluded. We also acknowledge the potential publication bias revealed by our review (Torgerson, 2006). Nearly two thirds of our eligible studies (n = 123) reported positive results, whereas only two (1.1%) reported negative outcomes. Despite seeking to minimise possible publication bias by including unpublished and 'grey' material, it would still appear that positive findings relating to feedback in HE are more likely to be shared. We take this into account when discussing the studies and drawing overall conclusions, particularly regarding the need for more high-quality, larger-scale trials in this area.

CONCLUSION

Those teaching in HE care about learning and the achievements of their students. Although formative assessment and feedback appear to be valuable approaches to supporting student performance, at present not enough is known about the specific strategies that are most effective. Our review contributes to a strong moral and academic case for an evidence-informed approach to teaching and learning in universities. For this to happen, the HE sector should learn lessons from recent movements towards evidence-use in the compulsory schooling sector. Ambition and commitment are needed, but we are optimistic that this could lead to a stronger research base for practitioners to work with, and to improved learning opportunities and outcomes for students.

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest.

ETHICAL APPROVAL

As this research is based on a systematic review of published studies, ethical approval is not applicable.

Open Research

Data availability statement

The database used for the collation and coding of studies included within this review is available upon request from the authors.

Supporting Information

Word document, 24.6 KB

Please note: The publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.

  • Agarwal, P. K., Bain, P. M., & Chamberlain, R. W. (2012). The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist. Educational Psychology Review, 24(3), 437–448. https://doi.org/10.1007/s10648-012-9210-2
  • Bandiera, O., Larcinese, V., & Rasul, I. (2015). Blissful ignorance? A natural experiment on the effect of feedback on students' performance. Labour Economics, 34, 13–25. https://doi.org/10.1016/j.labeco.2015.02.002
  • Baughan, P. (2020). On your marks: Learner-focused feedback practices and feedback literacy. York: Advance HE. https://www.advance-he.ac.uk/knowledge-hub/your-marks-learner-focused-feedback-practices-and-feedback-literacy
  • Biber, D., Nekrasova, T., & Horn, B. (2011). The effectiveness of feedback for L1-English and L2-writing development: A meta-analysis. ETS Research Report Series, 2011(1), 1–110. https://doi.org/10.1002/j.2333-8504.2011.tb02241.x
  • Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. London, UK: King's College School of Education.
  • Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. https://doi.org/10.1080/02602938.2012.691462
  • Butler, M., Pyzdrowski, L., Goodykoontz, A., & Walker, V. (2008). The effects of feedback on online quizzes. International Journal for Technology in Mathematics Education, 15(4), 131–136.
  • Calderon, A. (2018). Massification of higher education revisited. http://cdn02.pucp.education/academico/2018/08/23165810/na_mass_revis_230818.pdf
  • Carless, D., & Winstone, N. (2019). Designing effective feedback processes in higher education: A learning-focused approach. London, UK: Routledge.
  • Chen, L.-H. (2011). Enhancement of student learning performance using personalized diagnosis and remedial learning system. Computers & Education, 56(1), 289–299. https://doi.org/10.1016/j.compedu.2010.07.015
  • Churches, R., Dommett, E. J., Devonshire, I. M., Hall, R., Higgins, S., & Korin, A. (2020). Translating laboratory evidence into classroom practice with teacher-led randomized controlled trials—A perspective and meta-analysis. Mind, Brain, and Education, 14(3), 292–302. https://doi.org/10.1111/mbe.12243
  • Crowe, J., Silva, T., & Ceresola, R. (2015). The effect of peer review on student learning outcomes in a research methods course. Teaching Sociology, 43(3), 201–213. https://doi.org/10.1177/0092055X15578033
  • De Paola, M., & Scoppa, V. (2011). Frequency of examinations and student achievement in a randomized experiment. Economics of Education Review, 30(6), 1416–1429. https://doi.org/10.1016/j.econedurev.2011.07.009
  • Dobson, J. L., Linderholm, T., & Yarbrough, M. B. (2015). Self-testing produces superior recall of both familiar and unfamiliar muscle information. Advances in Physiology Education, 39(4), 309–314. https://doi.org/10.1152/advan.00052.2015
  • Domenech, J., Blazquez, D., de la Poza, E., & Munoz-Miquel, A. (2015). Exploring the impact of cumulative testing on academic performance of undergraduate students in Spain. Educational Assessment, Evaluation and Accountability, 27(2), 153–169. https://doi.org/10.1007/s11092-014-9208-z
  • Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266
  • EEF (2018). Feedback review – Part of the teaching and learning toolkit. https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit/feedback/
  • EEF (2020). Teaching and learning toolkit. https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit/. Accessed 18 September 2020.
  • Elliot Major, L., & Higgins, S. (2019). What works? Research and evidence for successful teaching. London, UK: Bloomsbury.
  • Elliott, V., Baird, J. A., Hopfenbeck, T., Ingram, J., Thompson, I., Usher, N., & Zantout, M. (2016). A marked improvement? A review of the evidence on written marking. London, UK: EEF.
  • Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120. https://doi.org/10.3102/0034654312474350
  • Furedi, F. (2011). Introduction to the marketisation of higher education and the student as consumer. In The marketisation of higher education and the student as consumer (pp. 15–22). Abingdon, Oxon: Routledge.
  • Gaona, J., Reguant, M., Valdivia, I., Vasquez, M., & Sancho-Vinuesa, T. (2018). Feedback by automatic assessment systems used in mathematics homework in the engineering field. Computer Applications in Engineering Education, 26(4), 994–1007. https://doi.org/10.1002/cae.21950
  • Gaynor, J. W. (2020). Peer review in the classroom: Student perceptions, peer feedback quality and the role of assessment. Assessment & Evaluation in Higher Education, 45(5), 758–775. https://doi.org/10.1080/02602938.2019.1697424
  • Gorard, S., See, B. H., & Siddiqui, N. (2017). The trials of evidence-based education: The promises, opportunities and problems of trials in education. Milton Park, UK: Taylor & Francis. https://doi.org/10.4324/9781315456898
  • Greving, S., & Richter, T. (2018). Examining the testing effect in university teaching: Retrievability and question format matter. Frontiers in Psychology, 9, 2412. https://doi.org/10.3389/fpsyg.2018.02412
  • Halamish, V., & Bjork, R. A. (2011). When does testing enhance retention? A distribution-based interpretation of retrieval as a memory modifier. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(4), 801. https://doi.org/10.1037/a0023219
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
  • Heckler, A. F., & Mikula, B. D. (2016). Factors affecting learning of vector math from computer-based practice: Feedback complexity and prior knowledge. Physical Review Physics Education Research, 12(1), 010134. https://doi.org/10.1103/PhysRevPhysEducRes.12.010134
  • Hennessy, S., Rojas-Drummond, S., Higham, R., Márquez, A. M., Maine, F., Ríos, R. M., García-Carrión, R., Torreblanca, O., & Barrera, M. J. (2016). Developing a coding scheme for analysing classroom dialogue across educational contexts. Learning, Culture and Social Interaction, 9, 16–44. https://doi.org/10.1016/j.lcsi.2015.12.001
  • Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76. https://doi.org/10.1177/1469787412467125
  • Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
  • Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254. https://doi.org/10.1037/0033-2909.119.2.254
  • Klute, M., Apthorp, H., Harlacher, J., & Reale, M. (2017). Formative assessment and elementary school student academic achievement: A review of the evidence. Institute of Education Sciences, US Department of Education. https://ies.ed.gov/ncee/edlabs/regions/central/pdf/REL_2017259.pdf
  • Lee, H. W. (2011). The effects of generative learning strategy prompts and metacognitive feedback on learners' self-regulation, generation process, and achievement. Dissertation Abstracts International Section A: Humanities and Social Sciences, 71(12-A), 1–180. https://etda.libraries.psu.edu/files/final_submissions/2268
  • Leton, E., Molanes-Lopez, E. M., Luque, M., & Conejo, R. (2018). Video podcast and illustrated text feedback in a web-based formative assessment environment. Computer Applications in Engineering Education, 26(2), 187–202. https://doi.org/10.1002/cae.21869
  • Lipnevich, A. A., & Smith, J. K. (2009). Effects of differential feedback on students' examination performance. Journal of Experimental Psychology: Applied, 15(4), 319. https://doi.org/10.1037/a0017841
  • Lu, R., & Bol, L. (2007). A comparison of anonymous versus identifiable e-Peer review on college student writing performance and the extent of critical feedback. Journal of Interactive Online Learning, 6(2), 100–115.
  • McDaniel, M. A., Bugg, J. M., Liu, Y. Y., & Brick, J. (2015). When does the test-study-test sequence optimize learning and retention? Journal of Experimental Psychology: Applied, 21(4), 370–382. https://doi.org/10.1037/xap0000063
  • Mikheeva, M., Schneider, S., Beege, M., & Günter, D. R. (2019). Boundary conditions of the politeness effect in online mathematical learning. Computers in Human Behavior, 92, 419–427. https://doi.org/10.1016/j.chb.2018.11.028
  • Mitra, N. K., & Barua, A. (2015). Effect of online formative assessment on summative performance in integrated musculoskeletal system module. BMC Medical Education, 15, 7. https://doi.org/10.1186/s12909-015-0318-1
  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501–517. https://doi.org/10.1080/02602931003786559
  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
  • OfS (2019a). Student satisfaction rises but universities should do more to improve feedback. https://www.officeforstudents.org.uk/news-blog-and-events/press-and-media/student-satisfaction-rises-but-universities-should-do-more-to-improve-feedback/
  • OfS (2019b). English higher education 2019: The Office for Students annual review. https://www.officeforstudents.org.uk/annual-review-2019/a-high-quality-student-experience/
  • Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 44(8), 1253–1278. https://doi.org/10.1080/02602938.2019.1600186
  • Pennebaker, J., Gosling, S., & Ferrell, J. (2013). Daily online testing in large classes: Boosting college performance while reducing achievement gaps. PLoS One, 8(11), e79774. https://doi.org/10.1371/journal.pone.0079774
  • Perry, T., Davies, P., & Brady, J. (2020). Using video clubs to develop teachers' thinking and practice in oral feedback and dialogic teaching. Cambridge Journal of Education, 50(5), 615–637. https://doi.org/10.1080/0305764X.2020.1752619
  • Peterson, E., & Siadat, M. V. (2009). Combination of formative and summative assessment instruments in elementary algebra classes: A prescription for success. Journal of Applied Research in the Community College, 16(2), 92–102.
  • Petrović, J., Pale, P., & Jeren, B. (2017). Online formative assessments in a digital signal processing course: Effects of feedback type and content difficulty on students learning achievements. Education & Information Technologies, 22(6), 3047–3061. https://doi.org/10.1007/s10639-016-9571-0
  • Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13. https://doi.org/10.1002/bs.3830280103
  • Rezaei, A. R. (2015). Frequent collaborative quiz taking and conceptual learning. Active Learning in Higher Education, 16(3), 187–196. https://doi.org/10.1177/1469787415589627
  • Richards-Babb, M., Curtis, R., Ratcliff, B., Roy, A., & Mikalik, T. (2018). General chemistry student attitudes and success with use of online homework: Traditional-responsive versus adaptive-responsive. Journal of Chemical Education, 95(5), 691–699. https://doi.org/10.1021/acs.jchemed.7b00829
  • Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17, 249–255. https://doi.org/10.1111/j.1467-9280.2006.01693.x
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714
  • Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy & Practice, 5(1), 77–84. https://doi.org/10.1080/0969595980050104
  • Sartain, A. F. (2018). The frequency of testing and its effects on exam scores in a fundamental level baccalaureate nursing course (Ed.D.). The University of Alabama, Ann Arbor, MI.
  • Scalise, K., Douskey, M., & Stacy, A. (2018). Measuring learning gains and examining implications for student success in STEM. Higher Education Pedagogies, 3(1), 183–195. https://doi.org/10.1080/23752696.2018.1425096
  • Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565. https://doi.org/10.1037/bul0000098
  • Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176–199. https://doi.org/10.1177/1745691615569000
  • Speckesser, S., Runge, J., Foliano, F., Bursnall, M., Hudson-Sharp, N., Rolfe, H., & Anders, J. (2018). Embedding formative assessment: Evaluation report and executive summary. London, UK: EEF.
  • Sweller, J., Ayres, J., & Kalyuga, S. (2011). Cognitive load theory. New York, NY: Springer-Verlag. https://doi.org/10.1007/978-1-4419-8126-4
  • Torgerson, C. (2006). Publication bias: The Achilles heel of systematic reviews? British Journal of Educational Studies, 54(1), 89–102. https://doi.org/10.1111/j.1467-8527.2006.00332.x
  • van der Kleij, F. M., Eggen, T. J., Timmers, C. F., & Veldkamp, B. P. (2012). Effects of feedback in a computer-based assessment for learning. Computers & Education, 58(1), 263–272. https://doi.org/10.1016/j.compedu.2011.07.020
  • Van der Schaaf, M., Baartman, L., Prins, F., Oosterbaan, A., & Schaap, H. (2013). Feedback dialogues that stimulate students' reflective thinking. Scandinavian Journal of Educational Research, 57(3), 227–245. https://doi.org/10.1080/00313831.2011.628693
  • Vogler, J. S., & Robinson, D. H. (2016). Team-based testing improves individual learning. Journal of Experimental Education, 84(4), 787–803. https://doi.org/10.1080/00220973.2015.1134420
  • Vojdanoska, M., Cranney, J., & Newell, B. R. (2010). The testing effect: The role of feedback and collaboration in a tertiary classroom setting. Applied Cognitive Psychology, 24(8), 1183–1195. https://doi.org/10.1002/acp.1630
  • Weinstein, Y., & Sumeracki, M. (2018). Understanding how we learn: A visual guide. London, UK: Routledge. https://doi.org/10.4324/9780203710463
  • Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001
  • Wiliam, D. (2018). Feedback: At the heart of – but definitely not all of – formative assessment. In A. Lipnevich & J. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 3–28). Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/9781316832134.003
  • Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, 3087. https://doi.org/10.3389/fpsyg.2019.03087
  • WWC (2020). What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/. Accessed 18 September 2020.
  • Xiao, Y. (2011). The effects of training in peer assessment on university students' writing performance and peer assessment quality in an online environment. https://digitalcommons.odu.edu/teachinglearning_etds/44/. Accessed 18 September 2020.
  • Xiao, Y., & Lucking, R. (2008). The impact of two types of peer assessment on students' performance and satisfaction within a Wiki environment. The Internet and Higher Education, 11(3–4), 186–193. https://doi.org/10.1016/j.iheduc.2008.06.005
  • Zhang, X. (2018). An examination of the effectiveness of peer feedback on Chinese university students' English writing performance (Ph.D.). Oakland University, Ann Arbor, MI.


Formative assessment (IIEP Learning Portal issue brief)

This brief explains how formative assessment can contribute to improving learning and what recurring challenges affect its implementation. It then provides policy recommendations that may help educators and policy-makers overcome these obstacles.

Formative assessment, often referred to as 'assessment for learning', classroom assessment, or continuous assessment, encompasses 'all those activities undertaken by teachers, and/or by students which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged' (Black and Wiliam, 1998: 7–8). Whether formal or informal, formative assessments can take various forms, such as quizzes and tests, written essays, self/peer assessment, oral questioning, learning logs, and so on. While traditionally opposed to summative assessment or 'assessment of learning', which is used to 'certify or select learners in a given grade or age for further schooling' (UNESCO, 2019: 16), the distinction has become blurred, with a growing number of hybrid assessments mixing both purposes. Additionally, although generally low-stakes, formative assessments can count towards students' final grades. Thus, classifying an assessment as formative should take into account both its characteristics and the use made of the information it generates (Dunn and Mulvenon, 2009).

During the COVID-19 crisis, formative assessments gained more relevance owing to uncertainty about whether students were acquiring the necessary skills. With summative and high-stakes examinations often cancelled or postponed, formative assessments may provide better options for measuring learner progress (Bawane and Sharma, 2020). Although the education sector globally was unprepared for the crisis, some countries managed to find alternative modes of formative assessment through innovative means. For instance, in the United Arab Emirates, a smart measurement policy enabled the assessment of students' academic performance using artificial intelligence (IIEP-UNESCO, 2020).

What we know

Evidence about the benefits of formative assessments for learning is mixed. A review of the literature in Clarke (2012) suggests that they can yield promising learning gains (especially for low achievers) if frequent and of high quality. Meaningful feedback is central to the effectiveness of formative assessments (OECD, 2005a; Muskin, 2017). Hill argues that 'when used to provide feedback on a daily basis to both teacher and students', they are 'one of the most powerful interventions ever recorded in educational research literature' (Hill, 2013: 65). To be effective, feedback needs to be based on sound data, performed well (Hill, 2013), and followed by appropriate corrective measures (Allal and Mottier Lopez, 2005). However, Browne (2016) adds a nuance: while research clearly points to the inefficient implementation of formative assessments in sub-Saharan Africa and South Asia, the only rigorous experimental study conducted in these regions found no positive effects on learning even with appropriate implementation. Moreover, some authors raise methodological or definitional concerns about the literature supportive of formative assessments (see, for example, Dunn and Mulvenon, 2009; Bennett, 2011).

Nevertheless, if 'valid, timely, constructive, and specific to the learning needs of the child', formative assessments can be particularly helpful in advancing teaching and learning (READ, 2020: 3). By providing feedback to teachers and students, they can help educators plan instructional activities (Allal and Mottier Lopez, 2005), including differentiated instruction (OECD, 2008), and enable adjustment and remediation targeted at a student or group of students (Muskin, 2017). They may also help identify areas for improvement in teacher professional development, and may be crucial for teachers in motivating and engaging their students (Muskin, 2017).

Challenges 

Many education systems are moving towards more formative assessment, acknowledging the limitations of high-stakes examinations (e.g. the limited range of skills assessed and techniques used). However, implementation in classrooms remains problematic, especially in developing contexts.

Teaching conditions

Poor teaching conditions may hinder the effective implementation of formative assessments. Large class sizes may make it difficult for teachers to provide individualized attention to their students (Browne, 2016). Moreover, fears that formative assessments might be time-consuming and resource-intensive, especially alongside extensive curriculum requirements, contribute to their perception as an 'administrative burden' for teachers (OECD, 2005b; Browne, 2016). Teachers may comply with policies but not use assessment results to improve teaching or learning (Browne, 2016).

School- and system-level support

Although policy changes have initiated a shift towards formative assessment in Africa, minimal institutional support, such as additional teacher training and materials, has been provided to effect this shift (Browne, 2016).

Moreover, school culture may not always be supportive of formative assessments. In many countries, the focus remains on more visible summative assessments conducted for accountability purposes (OECD, 2005a; Browne, 2016). Additionally, school directors, inspectors, or the wider system may not grant teachers enough freedom to make decisions based on assessment results by adjusting their teaching methods and moving away from traditional teaching practices (Muskin, 2017). Teachers’ autonomy is all the more imperative as the current pandemic creates unprecedented situations in which teachers’ ability to adapt and innovate is essential (UNICEF, 2021).

Lack of trained teachers

In some countries, many teachers need capacity development in test construction, administration, record-keeping of test marks, and assessment of soft skills (Muskin, 2017). Consequently, teachers may use poorly constructed tests or may copy tests from textbooks (Kellaghan and Greaney, 2004). However, Browne (2016) notes that even when trained and equipped with adequate resources, teachers may return to previous practices if they lack confidence, do not understand the purpose of formative assessments, or are not encouraged by a supportive school culture.

Inclusion and equity

Formative assessments are central to the teaching-learning process. They can help improve student outcomes if part of a fair, valid, and reliable process of gathering, interpreting and using information generated throughout the student learning process (Global Education Monitoring Report Team, 2020).

Equity concerns are at the center of the debate between proponents of formative and summative assessments. Arguments against formative assessments include that they can penalize disadvantaged students, for instance because of patronage risks or potential biases in teacher assessments linked to gender, ethnicity, or socio-economic background (Kellaghan and Greaney, 2004; Bennett, 2011; IIEP-UNESCO, 2020).

However, formative assessments can foster equity and inclusion if they employ a variety of assessment methods that take into account the diversity of students' abilities (Muskin, 2017) and if teachers are aware of, and address, any potential preconceptions they might have (OECD, 2005a).

Students with disabilities may require alternative forms of assessment. They are more likely to access the curriculum in inclusive environments when teachers use a universal design approach and are capable of modifying, adapting, or accommodating their assessment plans to meet students' needs (Manitoba Education, Citizenship, and Youth, 2006; Wagner, 2011). Accommodations may include extra time to complete assignments, the use of scribes, oral instruction, and so on.

Policy and planning

Linking formative assessments to sector planning

Whereas summative assessments often dominate the political debate on education (OECD, 2008), it is not evident how formative assessments can inform sector planning. An OECD study points to 'a lack of coherence between assessments and evaluations at the policy, school and classroom levels' as a major barrier to wider practice (OECD, 2005b: 4). This means that information gathered at regional or national levels is often judged unhelpful for informing classroom practice; vice versa, classroom-based assessments may be perceived as irrelevant to policy-making. This may also stem from the fact that, in the absence of standardization within or across schools, formative assessment data cannot be aggregated into system-level information in the way large-scale standardized assessments are (World Bank, 2018).

However, the importance of classroom-level variables in student learning variations still makes it necessary to look ‘inside the black box’ of classroom practice (OECD, 2005a: 88). International organizations such as OECD and UNESCO advocate for a better alignment between, or combination of, formative and summative assessments (OECD, 2005a; Muskin, 2017). For instance, in Uruguay, large-scale national assessment results were used for formative purposes to advance both student learning and in-service teacher training (Ravela, 2005). Additionally, the Early Grade Reading Assessment (EGRA), a ‘hybrid assessment’, offers an example of how a large-scale assessment, whose data inform decision-makers, can also help identify the need for early instruction improvement in classrooms (Wagner, 2011; IIEP-UNESCO, 2019).

Investing in teacher training 

Investments in initial and in-service training, as well as materials for formative assessments, are essential for teachers’ confidence and the effective implementation of formative assessments (OECD, 2005a; Muskin, 2017), especially in regions such as sub-Saharan Africa where they are relatively new (Browne, 2016). Ensuring teachers understand the purpose of formative assessments is key to fostering their ownership of these pedagogical changes (Browne, 2016). Such efforts, combined with the provision of tools and incentives to use the results of formative assessments, proved effective in Malawi, Liberia and India (World Bank, 2018).

Strengthening schools and the education system’s support

Schools play a major role in stimulating and guiding teachers while conducting and using formative assessments. For instance, the Framework for Improving Student Outcomes (FISO) implementation guide of the state of Victoria, Australia, encourages schools to obtain school-wide agreement on the use of formative assessments and to establish consistent processes for analyzing the data generated. 

Implementing formative assessment requires a system which follows up, monitors the quality of assessment practices, and supports teachers when needed (Browne, 2016; World Bank, 2018). It is also important that teachers are not overwhelmed with assessments while they juggle dense curricula. Some countries, such as Morocco, have dedicated time in the calendar for continuous assessments, while others, such as Tanzania, have simply opted for a dramatic simplification of the curriculum (Muskin, 2017). The COVID-19 crisis has rendered the latter option relevant, as UNICEF recommends prioritizing some curriculum components and identifying those that are currently unachievable (UNICEF, 2021).

Creating a culture of evaluation

Instilling a culture of evaluation throughout the system is crucial. It signifies that ‘teachers and school leaders use information on students to generate new knowledge on what works and why, share their knowledge with colleagues, and build their ability to address a greater range of their students’ learning needs’ (OECD, 2005a: 25). Moreover, teachers are more likely to conduct formative assessments if schools and education systems alike encourage them to innovate, for example through peer support or pilot projects which test new assessment methods (OECD, 2005a).

Plans and policies

  • Liberia: National learning assessment policy (2021)
  • Zambia: National learning assessment framework (2017)
  • READ (Russian Education Aid for Development). 2020. ‘Formative Assessment and Student Learning: How to Ensure Students Continue to Learn Outside of the Classroom’. Newsletter 13 .
  • Soland, J.; Hamilton, L. S.; Stecher, B. M. 2013. Measuring 21st Century Competencies: Guidance for Educators.  Asia Society and RAND Corporation.

Allal, L.; Mottier Lopez, L. 2005. 'Formative assessment of learning: A review of publications in French'. In: Formative Assessment: Improving Learning in Secondary Classrooms (pp. 241–264). Paris: OECD Publishing.

Bawane, J.; Sharma, R. 2020. Formative Assessments and the Continuity of Learning during Emergencies and Crises. NEQMAP 2020 Thematic Review. Paris: UNESCO.

Bennett, R. E. 2011. 'Formative assessment: A critical review'. Assessment in Education: Principles, Policy & Practice 18(1): 5–25.

Black, P.; Wiliam, D. 1998. 'Assessment and classroom learning'. Assessment in Education: Principles, Policy & Practice 5(1): 7–74.

Browne, E. 2016. Evidence on Formative Classroom Assessment for Learning. K4D Helpdesk Report. Brighton: Institute of Development Studies.

Clarke, M. 2012. What Matters Most for Student Assessment Systems: A Framework Paper. Washington, DC: World Bank.

Dunn, K. E.; Mulvenon, S. W. 2009. 'A critical review of research on formative assessment: The limited scientific evidence of the impact of formative assessment in education'. Practical Assessment, Research & Evaluation 14(7): 11.

Global Education Monitoring Report Team. 2020. Global Education Monitoring Report, 2020: Inclusion and Education: All Means All. Paris: UNESCO.

Hill, P. W. 2013. 'The role of assessment in measuring outcomes'. In: M. Barber and S. Rizvi (eds), Asking More: The Path to Efficacy. London: Pearson.

IIEP-UNESCO. 2019. 'Student learning assessments'. IIEP Policy Toolbox.

———. 2020. 'Will we ever go back to normal when it comes to student assessments?' Education for Safety, Resilience and Social Cohesion. Last accessed 10 June 2021.

Kellaghan, T.; Greaney, V. 2004. Assessing Student Learning in Africa. Directions in Development. Washington, DC: World Bank.

Manitoba Education, Citizenship, and Youth (Canada). 2006. Rethinking Classroom Assessment with Purpose in Mind: Assessment for Learning, Assessment as Learning, Assessment of Learning. Manitoba Education, Citizenship, and Youth.

Muskin, J. A. 2017. Continuous Assessment for Improved Teaching and Learning: A Critical Review to Inform Policy and Practice. Current and Critical Issues in Curriculum, Learning and Assessment, 13. Geneva: UNESCO International Bureau of Education.

OECD (Organisation for Economic Co-operation and Development). 2005a. Formative Assessment: Improving Learning in Secondary Classrooms. Paris: OECD.

———. 2005b. Formative Assessment: Improving Learning in Secondary Classrooms. Policy brief. Paris: OECD.

———. 2008. Assessment for Learning: The Case for Formative Assessment. OECD/CERI International Conference 'Learning in the 21st Century: Research, Innovation and Policy'.

Ravela, P. 2005. 'A formative approach to national assessments: The case of Uruguay'. Prospects 35(1): 21–43.

READ (Russian Education Aid for Development). 2020. 'Formative assessment and student learning: How to ensure students continue to learn outside of the classroom'. Newsletter 13.

UNESCO. 2019. The Promise of Large-Scale Learning Assessments: Acknowledging Limits to Unlock Opportunities. Paris: UNESCO.

Wagner, D. A. 2011. Smaller, Quicker, Cheaper: Improving Learning Assessments for Developing Countries. Paris: IIEP-UNESCO.

World Bank. 2018. Learning to Realize Education's Promise. World Development Report 2018. Washington, DC: World Bank.


Formative Assessment and Feedback Strategies

  • Reference work entry
  • First Online: 17 December 2022
  • pp 1359–1386

Susanne Narciss and Joerg Zumbach

Part of the book series: Springer International Handbooks of Education (SIHE)

Formative assessment and formative feedback strategies are very powerful factors for promoting effective learning and instruction in all educational contexts. Formative assessment, as a superordinate term, refers to all activities that instructors and/or learners undertake to obtain information about teaching and learning that is used in a diagnostic manner. Formative feedback is a core component of formative assessment. If well designed and implemented as part of a formative feedback strategy, it provides students and teachers with information on the current state of learning, in order to help regulate learning and instruction further towards the learning standards aimed for. This chapter presents the issues in, as well as selected approaches for, designing formative assessment and feedback strategies. Based on recent meta-analyses and literature reviews, it summarizes core theoretical and empirical findings on the conditions and effects of formative assessment and feedback in (higher) education. Furthermore, it discusses challenges and implications for applying the current insights and strategies for effective formative assessment and feedback in higher education. Finally, suggestions on helpful resources are provided.



Author information

Authors and affiliations

Susanne Narciss: School of Science - Faculty of Psychology, Psychology of Learning and Instruction, Technische Universitaet Dresden, Dresden, Sachsen, Germany

Joerg Zumbach: Department of Educational Research, University of Salzburg, Salzburg, Austria

Corresponding author: Susanne Narciss

Editor information

Editors and affiliations

Douglas A. Bernstein: Department of Psychology, University of South Florida, Bonita Springs, FL, USA

Giuseppina Marsico: Department of Human, Philosophical and Educational Sciences (DISUFF), University of Salerno, Fisciano, Italy



Copyright information

© 2023 Springer Nature Switzerland AG

About this entry

Cite this entry

Narciss, S., Zumbach, J. (2023). Formative Assessment and Feedback Strategies. In: Zumbach, J., Bernstein, D.A., Narciss, S., Marsico, G. (eds) International Handbook of Psychology Learning and Teaching. Springer International Handbooks of Education. Springer, Cham. https://doi.org/10.1007/978-3-030-28745-0_63

DOI: https://doi.org/10.1007/978-3-030-28745-0_63

Published: 17 December 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-28744-3

Online ISBN: 978-3-030-28745-0

Formative Assessment

Assessment comes in two forms: formative and summative. Formative assessment occurs during the learning process, focuses on improvement (rather than evaluation), and is often informal and low-stakes.

Adjustments in Instruction

Formative assessment allows instructors to gain valuable feedback—what students have learned, how well they can articulate concepts, what problems they can solve. Instructors can then make changes to increase effectiveness, which can lead to substantial learning gains (Black and Wiliam, 1998).

The Problem of Student Over-Confidence

Formative assessment also helps students accurately assess their own knowledge, which is crucial for learning. Especially for lower-performing students, a significant gap exists between what students think they know and what they actually know (Bell and Volckmann, 2011). This reflects what psychologists call the Dunning-Kruger effect: the less competent or skilled an individual is, the more likely he or she is to be overconfident in his or her abilities (Kruger and Dunning, 1999). Overconfidence has a strong negative effect on learning: students who are overconfident have significantly smaller normalized learning gains than students who are more realistic in their self-assessments (Mathabathe and Potgieter, 2014).
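The "normalized learning gains" mentioned above are commonly computed as Hake's normalized gain, g = (post - pre) / (100 - pre), i.e., the fraction of the possible improvement a student actually achieved. This formula is a standard convention in the education research literature rather than something spelled out on this page, so the following is a minimal Python sketch assuming percentage scores:

```python
# Minimal sketch: Hake's normalized learning gain, g = (post - pre) / (100 - pre),
# for pre/post test scores expressed as percentages. The scores below are hypothetical.

def normalized_gain(pre: float, post: float) -> float:
    """Fraction of the possible improvement actually achieved."""
    if pre >= 100:
        return 0.0  # no room left to improve
    return (post - pre) / (100 - pre)

# Two students with the same post-test score can have very different gains
# depending on their baseline; a decline shows up as a negative gain.
print(normalized_gain(pre=40, post=70))  # 0.5 -> gained half of what was possible
print(normalized_gain(pre=80, post=70))  # negative -> performance declined
```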

Classroom Assessment Techniques (CATs)

Classroom Assessment Techniques are a specific set of formative assessments designed to give the instructor and the students a clear picture of what they know. The term CATs was popularized by Angelo and Cross's book Classroom Assessment Techniques: A Handbook for College Teachers. The following are a few of the most popular CATs that, because of their simplicity and flexibility, can be used in almost any subject:

  • Minute Paper. Ask students: What was the most important thing you learned during this class? What important question remains unanswered?
  • Muddiest Point. Ask students: What was the muddiest point of the class? What made this point so difficult to comprehend?
  • One-Sentence Summary. Choosing a single topic addressed during a class session, ask the students to answer the question: who does what, to whom, when, where, why, and how?
  • Student-Generated Test Questions. Have students generate test questions and practice answering their own questions thoroughly.

Additionally, integrating student response systems such as Clickers in the classroom has been shown to increase students' ability to assess their learning, the number of pages students read before class, their overall understanding of the material, and their exam scores (Hedgcock & Rouwenhorst, 2014).

For a pre-constructed assessment worksheet, see the Fast Feedback Form, or find additional CATs here: https://vcsa.ucsd.edu/_files/assessment/resources/50_cats.pdf

Retrieval Practices Enhance Learning

Formative assessment can also help students learn material. Although students may prefer “cramming” before an exam by re-reading texts and notes, they remember more and have a deeper understanding of material when they must mentally retrieve it regularly, at spaced intervals interleaved with other unrelated material. From short writing exercises to low-stakes quizzes to answering polling questions (e.g., with Clickers), formative assessment can facilitate these retrieval practices that enhance learning (Brown, Roediger III, & McDaniel, 2014).
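Purely as an illustration of "spaced intervals interleaved with other unrelated material" (the source describes the principle, not a specific schedule), here is a short Python sketch of an expanding, interleaved review calendar; the topics, dates, and interval lengths are hypothetical:

```python
# Illustrative sketch (hypothetical schedule): expanding review intervals with
# staggered topics, so each retrieval session mixes unrelated material.
from datetime import date, timedelta

topics = ["limits", "derivatives", "integrals"]
intervals_days = [1, 3, 7, 14]  # expanding spacing between retrieval attempts

start = date(2024, 1, 8)
schedule: dict[date, list[str]] = {}
for i, topic in enumerate(topics):
    day = start + timedelta(days=i)  # stagger starts so sessions interleave
    for gap in intervals_days:
        day += timedelta(days=gap)
        schedule.setdefault(day, []).append(topic)

for day in sorted(schedule):
    print(day, "retrieve:", ", ".join(schedule[day]))
```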

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.

Bell, P., & Volckmann, D. (2011). Knowledge surveys in general chemistry: Confidence, overconfidence, and performance. Journal of Chemical Education, 88(11), 1469-1476.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7-74.

Brown, P., Roediger III, H., & McDaniel, M. (2014). Make it stick: The science of successful learning. Cambridge, MA: Belknap Press.

Hedgcock, W., & Rouwenhorst, R. (2014). Clicking their way to success: Using student response systems as a tool for feedback. Journal for Advancement of Marketing Education, 22(2), 16-25.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Mathabathe, K. C., & Potgieter, M. (2014). Metacognitive monitoring and learning gain in foundation chemistry. Chemistry Education Research and Practice, 15(1), 94-104.


Formative Assessment of Teaching

What is formative assessment of teaching?

How do you know if your teaching is effective? How can you identify areas where your teaching can improve? What does it look like to assess teaching?

Formative Assessment

Formative assessment of teaching consists of different approaches to continuously evaluate your teaching. The insight gained from this assessment can support revising your teaching strategies, leading to better outcomes in student learning and experiences. Formative assessment can be contrasted with summative assessment, which is usually part of an evaluative decision-making process. The table below outlines some of the key differences between formative and summative assessment: 

Evaluation of Teaching

  • Purpose. Formative: gather evidence of teaching to guide the instructor towards growth and improvement. Summative: gather evidence of teaching to make a decision about the instructor being evaluated.
  • Goal. Formative: to reveal the instructor's current strengths and areas for improvement. Summative: to judge the instructor's case for promotion, tenure, or another decision of consequence.
  • Analogy. Formative: a check-in that allows you to adjust and correct your actions. Summative: a final exam in a course where your performance is judged.
  • Relationship. Formative: may generate pieces of evidence over time that can later be used as part of a summative assessment. Summative: may use approaches similar to formative assessment, but with a different purpose and audience.

By participating in formative assessment, instructors connect with recent developments in the space of teaching and learning, as well as incorporate new ideas into their practice. Developments may include changes in the students we serve, changes in our understanding of effective teaching, and changes in expectations of the discipline and of higher education as a whole.

Formative assessment of teaching ultimately should guide instructors towards using more effective teaching practices. What does effectiveness mean in terms of teaching?

Effectiveness in Teaching

Effective teaching can be defined as teaching that leads to the intended outcomes in student learning and experiences. In this sense, there is no single perfect teaching approach: what effective teaching looks like will depend on the stated goals for student learning and experiences. A course that aims to build student confidence in statistical analysis and a course that aims to develop student writing could use very different teaching strategies, and both still be effective at accomplishing their respective goals.

Assessing student learning and experiences is critical to determining if teaching is truly effective in its context. This assessment can be quite complex, but it is doable. In addition to measuring the impacts of your teaching, you may also consider evaluating how well your teaching aligns with best practices for evidence-based teaching, especially in your disciplinary and course context, or with your intended teaching approach. The outline below summarizes these three approaches to assessing the effectiveness of your teaching:

Evidence of Effective Teaching

  • Student Learning Experiences. Guiding question: Does my current course design or teaching strategy lead to students able to demonstrate my stated learning outcomes? Rationale: measures of student learning are the most authentic and accurate metrics for teaching efficacy; effective teaching will increase student learning from before to after a course, and to a higher extent than less effective methods. Example evidence: direct evaluation of student work through papers, projects, assignments, and exam questions; student surveys for intended experiences or changes in student beliefs/attitudes.
  • Alignment with Best Practices. Guiding question: Does my current course design or teaching strategy align with what is recommended in my context (e.g., student level, class format/size, discipline)? Rationale: research has identified several strategies more likely to be effective at accomplishing certain student outcomes; certain instructional formats/approaches may help accomplish particular skill learning objectives. Example evidence: evaluation of course design components using instructor rubrics; evaluation of live teaching practice using classroom observation protocols.
  • Alignment with Intention. Guiding question: Does my current course design or teaching strategy align with my teaching philosophy and values? Rationale: the planned teaching approach may not actually be reflected in practice; observations and student experiences can reveal a mismatch between reality and intentions. Example evidence: student surveys for perceptions of class environment or instructor practice; evaluation of live teaching practice using classroom observation protocols.

What are some strategies that I might try? 

There are multiple ways that instructors might begin to assess their teaching. The list below includes approaches that may be done solo, with colleagues, or with the input of students. Instructors may pursue one or more of these strategies at different points in time. With each possible strategy, we have included several examples of the strategy in practice from a variety of institutions and contexts.

Teaching Portfolios

Teaching portfolios are well-suited for formative assessment of teaching, as the portfolio format lends itself to documenting how your teaching has evolved over time. Instructors can use their teaching portfolios as a reflective practice to review past teaching experiences, what worked and what did not.

Teaching portfolios consist of various pieces of evidence about your teaching such as course syllabi, outlines, lesson plans, course evaluations, and more. Instructors curate these pieces of evidence into a collection, giving them the chance to highlight their own growth and focus as educators. While student input may be incorporated as part of the portfolio, instructors can contextualize and respond to student feedback, giving them the chance to tell their own teaching story from a more holistic perspective.

Teaching portfolios encourage self-reflection, especially with guided questions or rubrics to review your work. In addition, an instructor might consider sharing their entire teaching portfolio or selected materials for a single course with colleagues and engaging in a peer review discussion. 

Examples and Resources:

Teaching Portfolio - Career Center

Developing a Statement of Teaching Philosophy and Teaching Portfolio - GSI Teaching & Resource Center

Self Assessment - UCLA Center for Education, Innovation, and Learning in the Sciences

Advancing Inclusion and Anti-Racism in the College Classroom Rubric and Guide

Course Design Equity and Inclusion Rubric

Teaching Demos or Peer Observation

Teaching demonstrations or peer classroom observation provide opportunities to get feedback on your teaching practice, including communication skills or classroom management.

Teaching demonstrations may be arranged as a simulated classroom environment in front of a live audience who take notes and then deliver summarized feedback. Alternatively, demonstrations may involve recording an instructor teaching to an empty room; this recording can then be subjected to self-review or peer review. Evaluation of teaching demos often focuses on the mechanics of teaching, especially for a lecture-based class (e.g., pacing of speech, organization of topics, clarity of explanations).

In contrast, instructors may invite a colleague to observe an actual class session to evaluate teaching in an authentic situation. This arrangement gives the observer a better sense of how the instructor interacts with students, both individually and in groups, including their approach to answering questions or facilitating participation. The colleague may take general notes on what they observe or evaluate the instructor using a teaching rubric or other structured tool.

Peer Review of Course Instruction

Preparing for a Teaching Demonstration - UC Irvine Center for Educational Effectiveness

Based on Peer Feedback - UCLA Center for Education, Innovation, and Learning in the Sciences

Teaching Practices Equity and Inclusion Rubric

Classroom Observation Protocol for Undergraduate STEM (COPUS)

Student Learning Assessments

Student learning can vary widely across courses or even between academic terms. However, having a clear benchmark for the intended learning objectives and determining whether an instructor’s course as implemented helps students to reach that benchmark can be an invaluable piece of information to guide your teaching. The method for measuring student learning will depend on the stated learning objective, but a well-vetted instrument can provide the most reliable data.

Recommended steps and considerations for using student learning assessments to evaluate your teaching efficacy include:

  • Identify a small subset of course learning objectives to focus on; it is more useful to evaluate one objective accurately than to evaluate many objectives inaccurately.
  • Find a well-aligned and well-developed measure for each selected course learning objective, such as vetted exam questions, rubrics, or concept inventories.
  • If relevant, develop a prompt or assignment that will allow students to demonstrate the learning objective, to then be evaluated against the measure.
  • Plan the timing of data collection to enable useful comparison and interpretation:
    • Do you want to compare how students perform at the start of your course with the same students at the end of your course?
    • Do you want to compare how the same students perform before and after a specific teaching activity?
    • Do you want to compare how students in one term perform with students in the next term, after changing your teaching approach?
  • Implement the assignment/prompt and evaluate a subset or all of the student work according to the measure.
  • Reflect on the results and compare student performance measures:
    • Are students learning as a result of your teaching activity and course design?
    • Are students learning to the degree that you intended?
    • Are students learning more when you change how you teach?

This process can be repeated as many times as needed, or restarted to focus on a different course learning objective; a minimal sketch of one such pre/post comparison follows.
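To make the comparison and reflection steps concrete, here is a minimal sketch of one pre/post comparison in Python. The scores and sample size are hypothetical, and a paired t-test plus mean gain is just one reasonable way to operationalize "compare student performance measures"; the page does not prescribe a particular statistic:

```python
# Minimal sketch (hypothetical data): comparing pre- and post-course scores
# on one learning objective for the same students, using a paired t-test.
import numpy as np
from scipy import stats

# Percentage scores for the same 8 students before and after the course.
pre = np.array([45, 52, 38, 60, 41, 55, 48, 50], dtype=float)
post = np.array([68, 71, 55, 80, 60, 74, 66, 69], dtype=float)

t_stat, p_value = stats.ttest_rel(post, pre)  # paired comparison
mean_gain = (post - pre).mean()

print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```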

List of Concept Inventories (STEM)

Best Practices for Administering Concept Inventories (Physics)

AAC&U VALUE Rubrics

Rubric Bank | Assessment and Curriculum Support Center - University of Hawaiʻi at Mānoa

Rubrics - World Languages Resource Collection - Kennesaw State University

Student Surveys or Focus Groups

Surveys or focus groups are effective tools to better understand the student experience in your courses, as well as to solicit feedback on how courses can be improved. Hearing student voices is critical as students themselves can attest to how course activities made them feel, e.g. whether they perceive the learning environment to be inclusive, or what topics they find interesting.

Some considerations for using student surveys in your teaching include:

  • Surveys collect individual and anonymous input from as many students as possible.
  • Surveys can gather both quantitative and qualitative data.
  • Surveys that are anonymous avoid privileging certain voices over others.
  • Surveys can enable students to share sensitive experiences that they may be reluctant to discuss publicly.
  • Surveys that are anonymous may be prone to negative response bias.

Survey options at UC Berkeley include customized course evaluation questions or anonymous surveys on bCourses, Google Forms, or Qualtrics.

Some considerations for using student focus groups in your teaching include:

  • Focus groups leverage the power of group brainstorming to identify problems and imagine possible solutions.
  • Focus groups can gather rich and nuanced qualitative data.
  • Focus groups with a skilled facilitator tend to elicit more moderated responses, given the visibility of the discussion.
  • Focus groups take planning, preparation, and dedicated class time.

Focus group options at UC Berkeley include scheduling a Mid-semester Inquiry (MSI) facilitated by a CTL staff member.

Instructions for completing question customization for your evaluations as an instructor

Course Evaluations Question Bank

Student-Centered Evaluation Questions for Remote Learning

Based on Student Feedback - UCLA Center for Education, Innovation, and Learning in the Sciences

How Can Instructors Encourage Students to Complete Course Evaluations and Provide Informative Responses?

Student Views/Attitudes/Affective Instruments - ASBMB

Student Skills Inventories - ASBMB

How might I get started?

Self-assess your own course materials using one of the available rubrics listed above.

Schedule a teaching observation with CTL to get a colleague’s feedback on your teaching practices and notes on student engagement.

Schedule an MSI with CTL to gather directed student feedback with the support of a colleague.

Have more questions? Schedule a general consultation with CTL or send us your questions by email ([email protected])!

References:

Evaluating Teaching - UCSB Instructional Development

Documenting Teaching - UCSC Center for Innovations in Teaching and Learning

Other Forms of Evaluation - UCLA Center for Education, Innovation, and Learning in the Sciences

Evaluation Of Teaching Committee on Teaching, Academic Senate

Report of the Academic Council Teaching Evaluation Task Force

Teaching Quality Framework Initiative Resources - University of Colorado Boulder

Benchmarks for Teaching Effectiveness - University of Kansas  Center for Teaching Excellence

Teaching Practices Instruments - ASBMB


Formative vs. summative assessment: impacts on academic motivation, attitude toward learning, test anxiety, and self-regulation skill

Seyed M. Ismail

1 College of Humanities and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia

D. R. Rahul

2 School of Science and Humanities, Shiv Nadar University Chennai, Chennai, India

Indrajit Patra

3 NIT Durgapur, Durgapur, West Bengal, India

Ehsan Rezvani

4 English Department, Isfahan (Khorasgan) Branch, Islamic Azad University, Isfahan, Iran

Associated Data

The data that support the findings of this study are available from the corresponding author upon reasonable request.

As assessment plays an important role in the process of teaching and learning, this research explored the impacts of formative and summative assessment on the academic motivation, attitude toward learning, test anxiety, and self-regulation skill of EFL students in Iran. To fulfil the objectives of this research, 72 Iranian EFL learners were chosen through convenience sampling and assigned to two experimental groups (a summative group and a formative group) and a control group. The groups first took pre-tests of test anxiety, motivation, and self-regulation skill. One experimental group was then trained following the rules of formative assessment, and the other experimental group was taught according to summative assessment; the control group was instructed without any preplanned assessment. After a 15-session treatment, post-tests of test anxiety, motivation, and self-regulation skill were administered to all groups to assess the impacts of the instruction. Lastly, an attitude questionnaire was administered to both experimental groups to examine their attitudes towards the impacts of formative and summative assessment on their English learning. The outcomes of one-way ANOVA and Bonferroni tests revealed that both summative and formative assessments were effective, but the formative one was more effective for academic motivation, test anxiety, and self-regulation skill. The findings of a one-sample t-test indicated that the participants had positive attitudes towards summative and formative assessments. Based on the results, it can be concluded that formative assessment is an essential part of teaching that should be used in EFL instructional contexts. The implications of this study can help students to detect their own weaknesses and target areas that need more effort and work.
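As a reader aid, the analysis named in the abstract (a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons across three groups) can be sketched in a few lines of Python. The group names and scores below are hypothetical placeholders, not the study's data, and scipy's generic functions stand in for whatever statistical package the authors actually used:

```python
# Minimal sketch (hypothetical data): one-way ANOVA across three groups,
# followed by Bonferroni-corrected pairwise t-tests, mirroring the kind of
# analysis described in the abstract. These are NOT the study's data.
import numpy as np
from itertools import combinations
from scipy import stats

groups = {
    "formative": np.array([78, 82, 75, 88, 80, 79, 85, 81], dtype=float),
    "summative": np.array([70, 74, 69, 77, 72, 71, 75, 73], dtype=float),
    "control":   np.array([62, 65, 60, 68, 63, 61, 66, 64], dtype=float),
}

# Omnibus test: do the group means differ at all?
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pairwise follow-up with Bonferroni correction: multiply each raw p-value
# by the number of comparisons (capped at 1.0).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p_raw = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p_raw * len(pairs), 1.0)
    print(f"{a} vs {b}: t = {t:.2f}, Bonferroni-adjusted p = {p_adj:.4f}")
```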

Introduction

In teaching and learning, assessment is defined as a procedure applied by instructors and students during instruction through which teachers provide the feedback needed to modify ongoing learning and teaching and so develop learners' attainment of planned instructional aims (Robinowitz, 2010). According to Popham (2008), assessment is an intended procedure in which evidence of learners' status is utilized by educators to adjust their ongoing instructional processes, or applied by learners to change their present learning strategies. Assessment intends to improve learning and is used to reduce the gap between students' present instructional situation and their target learning objectives (Heritage, 2012).

Two types of assessment are formative and summative. According to Glazer (2014), summative assessment is generally applied to give learners a numerical score with limited feedback. Therefore, summative assessment is commonly used to measure learning and is rarely used for learning. Educators can make summative assessment more formative by giving learners the opportunity to learn from exams; this would mean supplying pupils with feedback on exams and making use of the teaching potential of exams. Wininger (2005) proposed an amalgamation of assessment techniques between summative assessment and formative assessment, referred to as formative-summative assessment. Based on Wininger, formative-summative assessment is used to review the exam with examinees so they can get feedback on comprehension. It occurs in two primary forms: using a mock exam before the final, or using the final exam before the retake.

Formative assessment allows for feedback which improves learning, while summative assessment measures learning. Formative assessment refers to frequent, interactive assessment of students' development and understanding to recognize their needs and adjust teaching appropriately (Alahmadi et al., 2019). According to Glazer (2014), formative assessment is generally defined as tasks that allow pupils to receive feedback on their performance during the course. In the classroom, teachers use assessments as a diagnostic tool at the end of lessons or units. In addition, teachers can use assessments for teaching, by identifying student misconceptions and bridging gaps in learning through meaningful feedback (Dixson & Worrell, 2016). Unfortunately, numerous instructors consider formative assessments merely as a tool to measure students' learning, missing out on their teaching potential. Testing and teaching can be one and the same, as will be discussed further in this research (Remmi & Hashim, 2021).

According to Black et al. ( 2004 ), using formative tests for formative purposes improves classroom practice whereby students can be encouraged in both reflective and active review of course content. In general terms, formative assessment is concerned with helping students to develop their learning (Buyukkarci & Sahinkarakas, 2021 ). Formative assessment can be considered as a pivotal and valid part of the blending of assessment and teaching (Ozan & Kıncal, 2018 ). Formative assessment helps students gain an understanding of the assessment process and provides them with feedback on how to refine their efforts for improvement. However, in practice, assessment for learning is still in its infancy, and many instructors still struggle with providing productive and timely feedback (Clark, 2011 ).

Using these assessments can positively affect students' test anxiety. Test anxiety signifies the extent to which students experience apprehension, fear, uneasiness, panic, tension, and restlessness even when thinking of forthcoming tests or exams (Ahmad, 2012). Anxiety can also be regarded as a product of hesitation about imminent events or situations (Craig et al., 2000). Test anxiety is the emotional reaction or state of stress that arises before exams and persists throughout the exam period (Sepehrian, 2013). Anxiety is commonly connected to threats to self-efficacy and to appraisals of circumstances as threatening, or arises as a reaction to an ongoing source of stress (Pappamihiel, 2002).

The other variable which can influence the outcomes of tests or testing sessions in EFL settings is students' attitudes towards English culture, the English language, and English people. Kara (2009) stated that attitudes about learning, together with beliefs and opinions, have a significant impact on learners' behaviors and consequently on their performances. Learners who hold desirable beliefs about language learning tend to develop more positive attitudes toward it. On the other hand, undesirable beliefs can result in negative attitudes, class anxiety, and low cognitive achievement (Chalak & Kassaian, 2010; Tella et al., 2010). Attitudes towards learning can be positive or negative. Positive attitudes can develop learning, while negative attitudes can become barriers to learning, arising when students have difficulties in learning or simply feel that what is presented to them is boring. While a negative attitude toward learning can lead to poor student performance, a positive attitude can result in appropriate and good performance (Ellis, 1994).

Woods (2015) says that instructors should regularly utilize formative assessment to advance learners' self-regulation skills and boost their motivation. Motivation refers to the reasons why people behave differently in different situations, and is considered in terms of the intensity and direction of students' efforts: the intensity of effort is the extent to which students try to reach their objectives, and the direction of effort concerns the objectives that students intend to reach (Ahmadi et al., 2009; Paul & Elder, 2013). Motivation is an inborn phenomenon influenced by four factors: aim (the aim of behaviors, purposes, and tendencies), instrument (the instruments used to reach objectives), situation (environmental and outer stimulants), and temper (the inner state of the organism). To reach their goals, people first need the essential incentives; for instance, academic achievement motivation is significant to scholars (Firouznia et al., 2009).

Wiliam (2014) also asserts that self-regulated learning can be a crucial part of productive formative assessment, in terms of the techniques for explaining, sharing, and understanding instructional goals and of students' success in, and responsibility for, their own learning. Self-regulation skill requires learners to dynamically utilize their cognitive skills; to work towards their learning aims; to receive support from their classmates, parents, and instructors when needed; and, most significantly, to be responsible for their own learning (Ozan & Kıncal, 2018). This research aimed to explore the impacts of summative and formative assessment on Iranian EFL learners' academic motivation, attitude toward learning, test anxiety, and self-regulation skill. The study is significant, and novel, in that it compared the effects of these two kinds of assessment on four affective variables simultaneously.

Review of the literature

In the field of teaching English as a foreign language, several researchers and experts have defined the term "assessment" as a pivotal component of the teaching process. According to Brown (2003), assessment is a process of collecting data about learners' capabilities to conduct learning tasks; that is, assessment is the way instructors gather data about their methods and their pupils' improvement. Furthermore, assessment is an inseparable component of teaching, since it is impossible to think of teaching without assessment. Brown (2003) also defined assessment in relation to testing: the difference between them is that the latter occurs at an identified point in time while the former is an ongoing process that occurs regularly.

Other scholars explained the meaning of assessment by distinguishing it from evaluation. Regarding the difference between the two, Nunan (1992) asserted that assessment refers to the procedures and processes whereby teachers determine what students can do in the target language, and added that evaluation refers to a wider range of processes that may or may not include assessment data. In this way, assessment is process-oriented while evaluation is product-oriented. Palomba and Banta (1999) defined assessment as "the systematic collection, review, and use of information about educational programs undertaken to improve learning and development" (p. 4). All in all, assessing students' performances means recognizing and gathering information, receiving feedback, and analyzing and modifying the learning processes; the main goal is to overcome barriers to learning. Assessment is then used to interpret the performances of students, develop learning, and modify teaching (Aouine, 2011; Ghahderijani et al., 2021).

Two types of assessment are formative and summative. Popham (2008) said that it is not the nature of a test that makes it summative or formative but the use to which the test's outcomes will be put. That is to say, the summative-formative distinction is not merely a typology but a matter of purpose. Summative assessment has accordingly been defined by reference to certain criteria. Cizek (2010) suggests two: (1) it is conducted at the end of some unit, and (2) its goal is mainly to characterize the performance of students or systems. Its major goal is to obtain a measurement of attainment to be used in making decisions.

Through Cizek's definition, a summative assessment seeks to judge learners' performance in every single course; providing diagnostic information is not what this type of assessment is concerned with. Significantly, the judgments made about students, teachers, or curricula are meant to grade, certify, evaluate, and research the effectiveness of curricula, and these are the purposes of summative assessment according to Cizek (2010).

According to Black and Wiliam ( 2006 ), summative assessment is given occasionally to assess what pupils know and do not know. This type of assessment is done after the learning has been finalized and provides feedback and information that summarize the learning and teaching process. Typically, no more formal learning is occurring at this stage, other than incidental learning that may happen via completing the assignments and projects (Wuest & Fisette, 2012 ). Summative assessment measures what students have learned and mostly is conducted at the end of a course of instruction (Abeywickrama & Brown, 2010 ; Liu et al., 2021 ; Rezai et al., 2022 ).

For Woods ( 2015 ), the summative assessment provides information to judge the general values of the instructional programs, while the outcomes of formative assessment are used to facilitate the instructional programs. Based on Shepard ( 2006 ), a summative assessment must accomplish its major purpose of documenting what learners know and can do but, if carefully created, should also efficaciously fulfill a secondary objective of learning support.

Brown ( 2003 ) claimed that summative assessment aims at measuring or summarizing what students have learned. This means looking back and taking stock of how well that students have fulfilled goals but does not essentially pave the way to future improvement. Furthermore, the summative assessment also known as assessment of learning is clarified by Spolsky and Halt ( 2008 ) who state that assessment of learning is less detailed, and intends to find out the educational programs or students’ outcomes. Thus, summative assessment is applied to evaluating different language skills and learners’ achievements. Even though summative assessment has a main role in the learners’ evaluation, it is not sufficient to know their advancement and to detect the major areas of weaknesses, and this is the essence of formative assessment (Pinchok & Brandt, 2009 ; Vadivel et al., 2021 ).

The term 'formative assessment' has been in use for years and has been defined by many researchers. A clear definition is provided by Brown (2003), who states that formative assessment refers to evaluating learners in the process of 'forming' their skills and competencies in order to help them continue that growth process. It has also been described as comprising all those activities undertaken by instructors or by their learners that supply information to be used as feedback to adjust the teaching and learning activities in which they are engaged (Fox et al., 2016).

Formative assessment aims to obtain immediate feedback on students' learning, through which their strengths and weaknesses can be diagnosed. More comprehensively, Wiliam (2011) suggests that classroom practices are formative to the extent that evidence about students' achievement is elicited, interpreted, and used by instructors, students, or their classmates to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of that evidence.

This definition makes the active participation of both students and teachers a key component in developing students' performance. Assessment for learning, as its name suggests, is aimed at assessing learners' progress (McCallum & Milner, 2021). It is therefore about gathering data on learners' achievement in order to recognize their progress, needs, and capabilities, as well as their weaknesses and strengths, before, during, and after a course, so as to develop students' learning and achievement (Douglas & Wren, 2008).

Popham (2008) further considered formative assessment a strategic procedure in which educators or pupils use assessment-based evidence to adjust what they are currently doing; it is a planned process, not one that occurs at random. Formative assessment is thus an ongoing procedure that provides learners with constructive, timely feedback, helping them achieve their learning goals and enhance their achievement (Vogt et al., 2020). It is a helpful technique that can offer students formative support by exploiting the interactions between assessment and learning (Chan, 2021; Masita & Fitri, 2020).

Cizek (2010) also presented criteria for formative assessment. In his view, formative assessment attempts to identify students' current levels, whether high or low; to help educators plan subsequent instruction; and to make it easier for students to continue their own learning, review their work, and evaluate themselves. For Cizek, formative assessment is a sufficient tool for making learners responsible for their own learning and for helping learners and teachers gain proficiency in the teaching-learning process. In short, formative assessment is a goal-oriented process directed at specific objectives.

Tahir et al. (2012) stated that formative assessment is a diagnostic use of assessment that provides feedback to instructors and learners throughout the instructional process. Marsh (2007) claimed that formative tests are a strategy designed to identify students' learning problems and provide remediation that improves the performance of the majority of learners; for an assessment to count as formative, the information it yields must actually be used by the learners. The Assessment Reform Group (ARG) (2007) explains formative assessment as the process of seeking and interpreting evidence that allows instructors and their students to decide where the students are in their learning, where they need to go, and how best to get there. Kathy (2013) likewise argued that formative tests aim to analyze students' learning problems in order to improve their academic attainment.

The theory underpinning our study is sociocultural theory, which holds that knowledge is generated cooperatively within social contexts. It views learning as a condition in which learners construct their own meanings from the materials and content delivered to them, rather than trying to memorize information (Vygotsky, 1978). On this view, learning occurs successfully when teachers and students interact more with each other.

Several empirical studies are reported here. Alahmadi et al. (2019) examined whether formative speaking assessment affected learners' performance on a summative test; they also aimed to observe students' learning and to provide useful feedback that educators could apply to improve learners' achievement and help them detect their weaknesses and strengths in speaking. Their results indicated that formative assessment helped Saudi learners solve the problems they encountered in speaking tests.

Mahshanian et al. (2019) highlighted the significance of summative assessment in conjunction with teacher-based (formative) assessment for learners' performance. They selected 170 advanced-level EFL students and grouped them according to the kind of assessment they had received. The participants took exams for two main reasons: first, a general proficiency test placed the students at different levels of proficiency; second, to compare students' development under different kinds of assessment over a 4-month learning period, a course achievement test was administered as both pre-test and post-test. Scores on the achievement test were analyzed and compared using ANCOVA, ANOVA, and t-tests. The outcomes suggest that a combination of summative and formative assessment can yield better achievement for EFL students than either type of assessment alone.

Imen (2020) attempted to determine the effects of formative assessment on EFL learners' writing skills, specifically on the writing of first-year master's students at Abdel Elhamid Ibn Badis University in Mostaganem. The study also sought to expose an essential issue: the lack of implementation of formative assessment in writing classrooms. To verify the hypotheses, two data-collection tools were used, a teachers' questionnaire and a students' questionnaire. The findings revealed that formative assessment was not widely used in teaching and learning writing skills at the University of Mostaganem, and the results of both questionnaires indicated that if students were assessed formatively, their writing skills could be greatly enhanced.

Ashdale (2020) examined the influence of a particular formative assessment, called Progress Trackers, by comparing a control group that did not receive the Progress Tracker with an experimental group that did. The findings revealed no substantial differences between the experimental and control groups on the pre-test and post-test scores. While not statistically significant, the experimental group showed a larger proportion of learners with at least a 60% gain in achievement. The lack of significant differences could reflect either the ineffectiveness of the formative assessments or an inability to exclude other factors in the classroom context, including other formative assessments used in both groups, content delivery, and the implementation of the formative assessments.

Persaud Singh and Ewert (2021) investigated the effects of quizzes and mock exams as formative assessment on working adult learners' achievement using a quasi-experimental quantitative design. One experimental group received both quizzes and mock exams, another received mock exams only, and a control group received neither. The data were analyzed using t-tests and ANOVA. The findings indicated noticeable differences in achievement for the groups receiving formative assessment compared with the control participants; the 'mock exam' group slightly outperformed the 'quizzes and mock exam' group.

Al Tayib Umar and Abdulmlik Ameen (2021) traced the effects of formative assessment on Saudi EFL students' achievement in medical English, and also examined teachers' and students' attitudes toward formative assessment. The participants were 98 students selected from the Preparatory Year learners at a Saudi university and assigned to an experimental and a control group. The experimental students took their English for Specific Purposes (ESP) courses under formative assessment techniques, whereas the control group was taught their ESP courses under traditional assessment rules; the experimental group's teachers received intensive training, in Saudi Arabia and abroad, on applying formative assessment principles in the classroom. At the end of the 120-day experiment, both groups sat the end-of-term examination designed for all candidates in the Preparatory College, and the final-exam grades of the two groups were compared. The performance of the experimental group was meaningfully higher than that of the control group, and instructors' and students' attitudes towards formative assessment were positive.

Hamedi et al. (2022) investigated the effects of formative assessment delivered through the Kahoot application on Iranian EFL students' vocabulary knowledge and burnout levels. The study involved 60 participants in an experimental and a control group. The results indicated that formative assessment had significant effects on Iranian EFL students' vocabulary knowledge.

In conclusion, the studies above confirm the positive effects of summative and formative assessment on language learning. Yet there is little research comparing the effects of summative and formative assessment on Iranian EFL learners' academic motivation, attitude toward learning, test anxiety, and self-regulation skill. Most studies in the assessment domain have examined effects on the main skills (reading, speaking, writing, and listening) and have paid little attention to psychosocial variables; this research therefore posed two questions to address the gap.

  • RQ1. Does using formative and summative assessments positively affect Iranian EFL learners’ test anxiety, academic motivation, and self-regulation skill?
  • RQ2. Do Iranian EFL learners present positive attitudes toward learning through formative and summative assessments?

Methodology

Design of the study and participants

The participants of this research were 72 male Iranian EFL students who had been studying English since 2016. They were selected through convenience sampling, screened by the Preliminary English Test (PET), from the Parsian English language institute in Ahvaz, Iran. Their general English proficiency was intermediate and their average age was 21. The participants were divided into two experimental groups (summative and formative) and a control group.

Instruments

To homogenize the participants in terms of general English proficiency, we administered a version of the PET taken from the book PET Practice Tests (Quintana, 2008). Owing to practical limitations, only the reading, grammar, and vocabulary sections were used. We piloted the test on a comparable group and allotted 60 minutes for answering all items. Its validity was confirmed by a panel of English experts and its reliability was .91.

Britner and Pajares' (2006) Science Anxiety Scale (SAS) was used to assess the participants' test anxiety, with some items reworded to suit the measurement of test anxiety. The instrument's 12 items (e.g., 'I am worried that I will get weak scores in most of the exams') were answered on a 6-point scale ranging from certainly false to certainly true. Based on Cronbach's alpha, the reliability of the anxiety measure was .79.

The Self-Regulatory Strategies Scale (SRSS), developed by Kadıoğlu et al. (2011), was used to assess the participants' self-regulation skills. The SRSS is a 6-point Likert instrument (never, seldom, occasionally, often, frequently, constantly) consisting of 29 statements across eight dimensions. Cronbach's alpha showed the reliability of the SRSS to be .82.

We used Gardner's (2004) Attitude/Motivation Test Battery (AMTB) to evaluate the respondents' English learning motivation. This instrument has 26 items, each with six response options: Highly Disagree, Moderately Disagree, Somewhat Disagree, Somewhat Agree, Moderately Agree, and Highly Agree. Cronbach's alpha for the motivation questionnaire was .87. The motivation questionnaire, the SAS, and the SRSS were used as both the pre-tests and the post-tests of the study.

The last tool was an attitude questionnaire examining the participants' attitudes towards the effectiveness of summative and formative assessment for their English learning. The researchers developed its 17 Likert-scale items themselves; responses ranged from 1 to 5 (highly disagree, disagree, no idea, agree, highly agree), and the instrument's reliability was .80. The validity of all the above tools was substantiated by a group of English specialists.
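All of the reliability figures reported above are Cronbach's alpha coefficients. For readers who want to reproduce such estimates, a minimal sketch in Python follows; the data here are simulated purely for illustration and are not the study's questionnaire responses:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for a matrix with one row per respondent, one column per item."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    # Simulated data: 72 respondents, 12 items sharing a common trait,
    # loosely mimicking the structure of the 12-item anxiety scale.
    rng = np.random.default_rng(0)
    trait = rng.normal(0, 1, size=(72, 1))
    responses = trait + rng.normal(0, 1, size=(72, 12))
    print(round(cronbach_alpha(responses), 2))  # high alpha, since the items correlate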

Data collection

To begin the study, the PET was administered to 96 EFL learners, and 72 intermediate participants were selected from among them. As stated above, these participants were divided into two experimental groups (summative and formative) and one control group. The pre-tests of test anxiety, motivation, and self-regulation were then administered to all groups. After the pre-testing process, the treatment was delivered, with each group receiving different instruction.

One experimental group was taught according to the principles of formative assessment. In this formative group, the teacher (the researcher) helped the students take part in evaluating their own learning through self- and peer-assessment, and the teacher's comprehensive, descriptive elicitation of information about students' learning, and feedback on it, were central. There were no tests at the end of the term; the teacher was flexible about students' mistakes and provided constructive feedback, including metalinguistic clues, elicitation, correction, repetition, clarification requests, and recasts.

In the summative class, the teacher assessed the students' learning with mid-term and final exams and provided no elaborative feedback; his feedback was limited to yes/no and true/false responses. The control group received neither formative-based nor summative-based instruction: their teacher taught without any preplanned assessment, and they finished the course without formative or summative assessment. After the treatment, the post-tests of test anxiety, motivation, and self-regulation were given to all groups to assess the influence of the intervention. Finally, the attitude questionnaire was distributed to both experimental groups to elicit their opinions about the impact of summative and formative assessment on their English learning.

The whole study lasted 23 sessions of 50 minutes each: one session for the PET, three for the pre-tests, 15 for the treatment, three for the post-tests, and a final session for the attitude questionnaire.

Data analysis

Having collected the data through the procedures above, we took several statistical steps to answer the research questions. First, the data were analyzed descriptively to compute group means. Second, one-way ANOVAs with Bonferroni post-hoc tests were used to analyze the data inferentially. Third, a one-sample t-test was used to analyze the attitude questionnaire data.
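To make this pipeline concrete, the same three steps can be sketched in Python. This is an illustrative reconstruction with simulated scores, not the authors' actual analysis (which was presumably run in a package such as SPSS); the group means used to generate the data are hypothetical:

    import numpy as np
    from itertools import combinations
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical post-test scores, one array per group (n = 24 each).
    scores = {
        "control":   rng.normal(30, 11, 24),
        "summative": rng.normal(38, 11, 24),
        "formative": rng.normal(49, 10, 24),
    }

    # Step 1: descriptive statistics for each group.
    for name, x in scores.items():
        print(f"{name}: M = {x.mean():.2f}, SD = {x.std(ddof=1):.2f}")

    # Step 2: one-way ANOVA, then Bonferroni-corrected pairwise t-tests
    # (an approximation of SPSS's Bonferroni post-hoc procedure).
    f_val, p_val = stats.f_oneway(*scores.values())
    print(f"ANOVA: F = {f_val:.2f}, p = {p_val:.4f}")
    pairs = list(combinations(scores, 2))
    for a, b in pairs:
        _, p = stats.ttest_ind(scores[a], scores[b])
        print(f"{a} vs {b}: Bonferroni-adjusted p = {min(1.0, p * len(pairs)):.4f}")

    # Step 3: one-sample t-test for the attitude questionnaire,
    # tested against 0 as in Table 16 (17 hypothetical item means).
    attitude = rng.normal(4.5, 0.3, 17)
    t_val, p_val = stats.ttest_1samp(attitude, popmean=0)
    print(f"one-sample t = {t_val:.2f}, p = {p_val:.4f}")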

Results and discussion

After confirming the normality of the data distributions with the Kolmogorov-Smirnov test, we ran several one-way ANOVAs and report their results in the following tables.
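The normality check itself can be sketched as follows (simulated data again; note that plugging in the sample's own mean and SD makes this strictly the Lilliefors variant of the Kolmogorov-Smirnov test, so the p-value is approximate):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(28, 11, 72)  # hypothetical pooled pre-test scores

    # Compare the sample with a normal distribution fitted to it;
    # p > .05 means no evidence of departure from normality.
    d_stat, p_val = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"D = {d_stat:.3f}, p = {p_val:.3f}")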

As Table 1 shows, the mean scores of the three groups are almost identical: they obtained nearly equal scores on the test anxiety pre-test, so the groups were at the same anxiety level before the instruction. This is verified by the one-way ANOVA in the next table.

Table 1. Descriptive statistics of all groups on the test anxiety pre-tests

Group        N    Mean    SD      SE     95% CI lower   95% CI upper   Min     Max
Control      24   27.70   11.37   2.32   22.90          32.51          14.00   49.00
Summative    24   28.91   11.89   2.42   23.89          33.93          13.00   50.00
Formative    24   28.41   10.93   2.23   23.79          33.03          14.00   49.00
Total        72   28.34   11.25   1.32   25.70          30.99          13.00   50.00
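As a quick check on Table 1, each standard error is SD/sqrt(n) and each confidence interval is the mean plus or minus roughly t(.975, 23) = 2.07 standard errors; for the control group, up to rounding:

    \[ SE = \frac{SD}{\sqrt{n}} = \frac{11.37}{\sqrt{24}} \approx 2.32, \qquad 27.70 \pm 2.07 \times 2.32 \approx [22.90,\ 32.51] \]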

According to the Sig value in Table 2, there is no noticeable difference in test anxiety among the three groups: they were at the same anxiety level at the outset of the study, and the inferential statistics show that all participants had an equal amount of anxiety before receiving the treatment.

Table 2. Inferential statistics of all groups on the test anxiety pre-tests

Source           Sum of squares   df   Mean square   F     Sig.
Between groups   17.69            2    8.84          .06   .93
Within groups    8980.62          69   130.15
Total            8998.31          71

As Table 3 shows, the groups' mean scores differ on the anxiety post-test: the experimental groups obtained better scores than the control group. This claim is substantiated by the one-way ANOVA in the next table.

Table 3. Descriptive statistics of all groups on the test anxiety post-tests

Group        N    Mean    SD      SE     95% CI lower   95% CI upper   Min     Max
Control      24   29.95   11.08   2.26   25.27          34.63          14.00   51.00
Summative    24   37.91   10.80   2.20   33.35          42.47          19.00   60.00
Formative    24   49.50   10.37   2.11   45.11          53.88          23.00   62.00
Total        72   39.12   13.33   1.57   35.99          42.25          14.00   62.00

Table 4 shows that the Sig value (.00) is below .05; accordingly, there is a noticeable difference among the three groups' test anxiety post-tests. The groups ended the study at different anxiety levels, with the experimental groups outperforming the control group on the post-test.

Table 4. Inferential statistics of all groups on the test anxiety post-tests

Source           Sum of squares   df   Mean square   F       Sig.
Between groups   4635.08          2    2317.54       20.02   .00
Within groups    7986.79          69   115.75
Total            12,621.87        71
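As a quick arithmetic check, the F ratio in Table 4 is simply the ratio of the two mean squares, each a sum of squares divided by its degrees of freedom:

    \[ F = \frac{SS_{between}/df_{between}}{SS_{within}/df_{within}} = \frac{4635.08/2}{7986.79/69} = \frac{2317.54}{115.75} \approx 20.02 \]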

Table 5 compares the groups' test anxiety levels pairwise. There are remarkable differences between the anxiety post-tests of the control group and both experimental groups, and the formative group outperformed both the control and summative groups; it showed the best performance of the three.

Table 5. Multiple comparisons by Bonferroni test (test anxiety)

(I) group    (J) group    Mean difference (I−J)   SE     Sig.   95% CI lower   95% CI upper
Control      Summative    −7.95                   3.10   .03    −15.57         −.33
Control      Formative    −19.54                  3.10   .00    −27.16         −11.92
Summative    Control      7.95                    3.10   .03    .33            15.57
Summative    Formative    −11.58                  3.10   .00    −19.20         −3.96
Formative    Control      19.54                   3.10   .00    11.92          27.16
Formative    Summative    11.58                   3.10   .00    3.96           19.20

Note. The mean differences are significant at the .05 level.
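For reference, the Bonferroni column here (and in Tables 10 and 15 below) is conventionally computed by multiplying each raw pairwise p-value by the number of comparisons, capped at 1; with three groups there are three pairs:

    \[ p_{adj} = \min\bigl(1,\; m \cdot p_{raw}\bigr), \qquad m = \binom{3}{2} = 3 \]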

As Table 6 shows, the three groups' performances on the self-regulation pre-test are almost identical; their mean scores are nearly equal. A one-way ANOVA was used to check the groups' performances on the self-regulation pre-tests.

Table 6. Descriptive statistics of the three groups on the self-regulation pre-tests

Group        N    Mean    SD      SE     95% CI lower   95% CI upper   Min     Max
Control      24   77.54   17.02   3.47   70.35          84.73          39.00   99.00
Summative    24   78.20   16.22   3.31   71.35          85.06          41.00   101.00
Formative    24   76.83   16.78   3.42   69.74          83.92          39.00   98.00
Total        72   77.52   16.45   1.93   73.66          81.39          39.00   101.00

Table 7 shows the inferential statistics for the self-regulation pre-tests. As Sig (.96) is above .05, the differences among the three groups are not significant: all three groups had the same level of self-regulation at the outset of the study.

Table 7. Inferential statistics of the three groups on the self-regulation pre-tests

Source           Sum of squares   df   Mean square   F     Sig.
Between groups   22.69            2    11.34         .04   .96
Within groups    19,203.25        69   278.30
Total            19,225.94        71

The mean self-regulation post-test scores of the control, summative, and formative groups are 80.12, 130.04, and 147.25, respectively (Table 8). At first glance, both experimental groups outperform the control group, as their mean scores are much higher than the control group's.

Table 8. Descriptive statistics of the three groups on the self-regulation post-tests

Group        N    Mean     SD      SE     95% CI lower   95% CI upper   Min      Max
Control      24   80.12    17.14   3.50   72.88          87.36          47.00    114.00
Summative    24   130.04   10.44   2.13   125.62         134.45         109.00   146.00
Formative    24   147.25   27.19   5.55   135.76         158.73         39.00    167.00
Total        72   119.13   34.52   4.06   111.02         127.25         39.00    167.00

The results indicate significant differences between the groups' self-regulation post-tests in favor of the experimental groups (Table 9): the three groups performed differently, and the summative and formative groups outperformed the control group.

Table 9. Inferential statistics of the three groups on the self-regulation post-tests

Source           Sum of squares   df   Mean square   F       Sig.
Between groups   58,348.52        2    29,174.26     76.60   .00
Within groups    26,278.08        69   380.84
Total            84,626.61        71

The outcomes in Table 10 indicate that both experimental groups performed better than the control group on the self-regulation post-test, and that the formative group performed best of all; the treatment had the greatest effect on the formative group.

Table 10. Multiple comparisons by Bonferroni test (self-regulation)

(I) group    (J) group    Mean difference (I−J)   SE     Sig.   95% CI lower   95% CI upper
Control      Summative    −49.91                  5.63   .00    −63.73         −36.09
Control      Formative    −67.12                  5.63   .00    −80.94         −53.30
Summative    Control      49.91                   5.63   .00    36.09          63.73
Summative    Formative    −17.20                  5.63   .01    −31.03         −3.38
Formative    Control      67.12                   5.63   .00    53.30          80.94
Formative    Summative    17.20                   5.63   .01    3.38           31.03

On the motivation pre-test, the mean scores of the control, summative, and formative groups are 90.33, 91.75, and 92.45, respectively (Table 11). Accordingly, the three groups had an equal degree of motivation before the treatment.

Table 11. Descriptive statistics of the three groups on the motivation pre-tests

Group        N    Mean    SD      SE     95% CI lower   95% CI upper   Min     Max
Control      24   90.33   25.08   5.11   79.74          100.92         50.00   149.00
Summative    24   91.75   22.08   4.50   82.42          101.07         55.00   128.00
Formative    24   92.45   21.69   4.42   83.29          101.62         55.00   129.00
Total        72   91.51   22.69   2.67   86.18          96.84          50.00   149.00

Table 12 presents the inferential statistics for the motivation pre-tests. Since Sig (.94) is greater than .05, no difference is observed among the groups: the students of the three groups had the same amount of motivation before receiving the treatment.

Table 12. Inferential statistics of the three groups on the motivation pre-tests

Source           Sum of squares   df   Mean square   F     Sig.
Between groups   56.19            2    28.09         .05   .94
Within groups    36,519.79        69   529.27
Total            36,575.98        71

As shown in Table 13, the mean motivation post-test scores of the summative and formative groups are 115.79 and 127.83, respectively, while the control group's mean is 92.87. The experimental participants thus appear to outperform the control participants on the motivation post-test.

Table 13. Descriptive statistics of the three groups on the motivation post-tests

Group        N    Mean     SD      SE     95% CI lower   95% CI upper   Min      Max
Control      24   92.87    20.99   4.28   84.00          101.74         60.00    129.00
Summative    24   115.79   13.50   2.75   110.09         121.49         99.00    140.00
Formative    24   127.83   12.51   2.55   122.54         133.11         100.00   150.00
Total        72   112.16   21.58   2.54   107.09         117.23         60.00    150.00

Table 14 gives the inferential statistics for the motivation post-tests. The Sig value (.00) is below .05, so the differences between the groups are significant: the experimental groups outperformed the control group after the instruction, and this improvement can be ascribed to the treatment.

Table 14. Inferential statistics of the three groups on the motivation post-tests

Source           Sum of squares   df   Mean square   F       Sig.
Between groups   15,138.08        2    7569.04       29.12   .00
Within groups    17,933.91        69   259.91
Total            33,072.00        71

The motivation post-test means are compared pairwise in Table 15. There are noticeable differences between the post-tests of all groups, and the formative participants performed best, suggesting that formative assessment is more effective than summative assessment in EFL classes.

Table 15. Multiple comparisons by Bonferroni test (motivation)

(I) group    (J) group    Mean difference (I−J)   SE     Sig.   95% CI lower   95% CI upper
Control      Summative    −22.91                  4.65   .00    −34.33         −11.49
Control      Formative    −34.95                  4.65   .00    −46.37         −23.53
Summative    Control      22.91                   4.65   .00    11.49          34.33
Summative    Formative    −12.04                  4.65   .03    −23.46         −.62
Formative    Control      34.95                   4.65   .00    23.53          46.37
Formative    Summative    12.04                   4.65   .03    .62            23.46

As depicted in Table 16, the t-value is 63.72 with df = 16 and Sig = .00, which is below .05. This implies that the Iranian students held positive attitudes towards the effectiveness of summative and formative assessment for their language learning improvement.

Table 16. One-sample test of the attitude questionnaire (test value = 0)

          t       df   Sig. (2-tailed)   Mean difference   95% CI lower   95% CI upper
Scores    63.72   16   .000              4.52              4.37           4.67
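The statistic in Table 16 is a standard one-sample t against the test value of 0; with df = 16, the analysis evidently treats the 17 questionnaire items as cases, and back-solving from the reported values gives an item-level standard deviation of roughly 0.29:

    \[ t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} = \frac{4.52 - 0}{s/\sqrt{17}} = 63.72 \;\Rightarrow\; s \approx 0.29 \]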

Briefly, the results indicate that both experimental groups performed better than the control group on the post-tests, with the formative group performing best of the three. Additionally, the participants held positive attitudes towards the effectiveness of both formative and summative assessment for their language learning development.

The analysis showed that all three groups were at the same levels of test anxiety, motivation, and self-regulation at the outset of the research but performed differently at the end: both experimental groups outdid the control group on the post-tests, and the formative group performed best of the three. Although both types of assessment were effective for EFL learners' test anxiety, motivation, and self-regulation, formative assessment was the most effective. The findings also indicated that both experimental groups held positive attitudes toward the implementation of summative and formative assessment in EFL classes.

The findings of this study are supported by Persaud Singh and Ewert (2021), who inspected the impact of formative assessment on adult students' language improvement and found meaningful differences in language achievement in favor of the formative participants. They are also supported by Alahmadi et al. (2019), who explored the effects of formative speaking assessment on EFL learners' performance in speaking tests and showed that it helped Saudi EFL learners solve the problems they encountered.

In addition, our findings accord with Mahshanian et al. (2019), who confirmed that combining summative and formative assessment can result in better achievement in English language learning. They also lend support to Buyukkarci and Sahinkarakas (2021), who verified the positive effects of formative assessment on learners' language achievement, and agree with Ounis (2017), who reported that formative assessment facilitated and supported students' learning. The findings are further supported by sociocultural theory, which emphasizes the role of social interaction between students and their teachers in the classroom: on this view, learning is mainly a social process and students' cognitive functions develop through their interactions with those around them.

Furthermore, our results agree with those of Imen (2020), who found that formative assessment develops EFL students' writing skills. They are also consistent with Ozan and Kıncal (2018), who investigated the influence of formative assessment on students' academic achievement, attitudes toward lessons, and self-regulation skills and found that the experimental class receiving formative assessment practices showed better academic performance and more positive attitudes than the control class.

Regarding the participants' positive attitudes towards formative and summative assessment, our results are in line with Tekin (2010), who found that formative assessment practices meaningfully improved students' attitudes toward mathematics learning. In addition, King (2003) asserted that formative assessment enhanced learners' attitudes toward science classes, and Hwang and Chang (2011) showed that formative assessment markedly boosted students' attitudes and interest in local culture courses.

One explanation for the formative group's outperformance of the other two groups is that they received much more input: they were given different kinds of feedback and took more exams during the semester, which may account for their success in language achievement. This is in line with Krashen's (1981) input hypothesis, which holds that students learn more when they are exposed to more input.

Another possible explanation is that formative assessments are not graded, which takes anxiety away from those being assessed and frees them from the belief that they must get everything right. Instead, such assessments serve as practice through which students receive help along the way before the final tests. Teachers check for understanding when students are struggling during a lesson and address these issues early rather than waiting until the end of the unit; they then have less reteaching to do at the end, because many problems with mastery have already been addressed. These advantages may account for our findings.

In addition, monitoring the students' learning through formative assessment may be a further justification for our results: monitoring the learning process gives teachers the opportunity to provide constructive feedback that improves their students' language learning. When teachers continuously monitor students' growth and modify instruction to ensure constant development, progress towards meeting the standards on summative assessments becomes easier and more predictable. By knowing precisely what their students know before and during instruction, teachers have much more power to improve students' mastery of the subject matter than if they find out only after a lesson or unit is complete.

The value of formative assessment lies in the critical information about student comprehension that it provides throughout the learning process, in the chance it gives educators to offer quick, efficient, action-oriented feedback, and in the chance to alter their own practice so that every learner has the opportunity to learn and re-learn the material. Learners whose academic performance falls at the extreme ends of the normal curve, those who are struggling and those who excel academically, benefit the most from formative evaluation: their learning requirements are often unique and highly specialized, and meeting them requires up-to-date data. In addition, using frequent formative evaluation to remediate the learning gaps brought about by COVID-19 ensures that educators can provide remediation promptly.

A further justification lies in the way formative assessment helps students detect their own weaknesses and strengths and target the areas that need more effort and work. All of the positive points enumerated for formative assessment can explain the results obtained in the current research.

Moreover, the better performance of the assessment groups may be due to several factors. First, consistently evaluating students' progress keeps learning objectives at the forefront: learners have a distinct goal to strive towards, and instructors can clear up misconceptions before learners get off track. Second, the process of formative assessment enables instructors to gather information that reveals their students' needs; with a clear grasp of what it takes for their students to succeed, they can design challenging educational environments that push every learner to their full potential. Third, the primary role of formative assessment in enhancing academic achievement is to provide both learners and instructors with frequent feedback on progress toward their objectives; formative assessment lets learners bridge the gap between their existing knowledge and their learning goals (Greenstein, 2010). Fourth, formative assessment increases motivation: it entails setting learning objectives and monitoring progress towards them, and when learners have a clear idea of where they are going, their performance improves dramatically. Fifth, students must see a purpose in the work assigned to them in the classroom; connecting learning objectives with real-world problems and situations draws students into instructional activities and feeds their natural curiosity about the world. Sixth, an in-depth examination of the data gathered through formative assessment lets educators investigate their own teaching methods and identify which are successful and which are not; strategies that work for one group of learners may not work for another. Lastly, students become self-regulated when they are given the tools to set, track, and ultimately achieve their own learning objectives, and they may develop into self-reliant thinkers when exposed to models of high-quality work and given adequate time to reflect on and refine their own work.

The positive effects of formative and summative assessment on students' motivation are supported by Self-Determination Theory (SDT), a theory that provides a way of understanding human motivation in any context (Ryan & Deci, 2000). SDT attempts to understand motivation beyond the simple intrinsic/extrinsic model: it suggests that motivation ranges from fully intrinsic motivation, characterized by fully autonomous behavior undertaken 'for its own sake', to fully extrinsic motivation, characterized by fully heteronomous behavior instrumentalized to some other end.

In this study, the self-regulatory skills of the students in the experimental groups (EGs), where formative assessment practices were applied, differed significantly from those of the students in the control group (CG), where they were not; formative assessment procedures thus improved students' self-regulation. Similar findings were reported in the experimental research of Xiao and Yang (2019), which compared the self-regulation abilities of EG and CG learners in secondary school and found a substantial difference in favor of the former. Qualitative findings in the literature indicate that learners engage in a variety of cognitive techniques and self-regulatory learning practices, acknowledge that they are integral to their own learning, and accept personal responsibility for their progress; teachers report that learners' ability to self-regulate improves as formative assessment fosters ongoing, meaningful dialogue focused on learning effort and performance. Students' progress in self-regulation and metacognitive abilities, and their growth against educational standards, may be supported by rising success in diagnostic examinations thanks to formative assessment (DeLuca et al., 2015). Woods (2015) examined the link between formative assessment and self-regulation, highlighting that teachers who use formative assessment strategies need to understand learners' self-regulatory learning processes to make appropriate classroom decisions, and recommending that educators use formative assessment regularly to foster learners' self-regulation and boost their motivation. Wiliam (2014) likewise asserted that self-regulated learning can be an important component of effective formative assessment, in relation to explaining, sharing, and understanding learning goals and success criteria and to students taking responsibility for their own learning.

It is vital to note that learners who have developed self-regulation skills employ their cognitive abilities, work toward their learning objectives, seek appropriate support from peers, adults, and authority figures, and, most significantly, accept personal accountability for their academic success. Learners' self-regulation abilities therefore bear directly on learning-oriented formative assessment and on applications designed to eliminate learning deficiencies. Self-regulation takes time and practice to acquire, but it can be developed with the right tools and a sustained strategy. Formative assessment techniques were shown to boost learners' self-regulation, although this effect appears small when our findings are combined with those in the literature; this may be because, although formative assessment procedures were implemented for an academic year, they were limited to one classroom context, and students' self-regulation develops and evolves over time.

The findings of this research can increase students' knowledge of the two types of assessment and may encourage them to ask their teachers to assess their performance formatively during the semester. The findings can also help instructors implement more formative-based assessment and feedback in their classes, and they highlight the importance of frequent input, feedback, and testing. A careful analysis of formative assessment data permits teachers to inspect their instructional practices and understand which are producing positive results and which are not; practices effective for one group of students may not be effective for another. The implications of this research can help students compensate for their deficiencies by taking responsibility for their own learning rather than merely chasing good grades; in this respect, formative assessment helps students manage negative variables such as examination pressure and grading.

Using formative assessment helps teachers gather information that reveals students' needs; once teachers understand what students need in order to succeed, they can create a learning environment that challenges each learner to grow. Providing students and teachers with regular feedback on progress towards their aims is the major function of formative assessment in increasing academic accomplishment: it can help students close the gap between their present knowledge and their learning objectives, gives them evidence of their progress so they can actively monitor and adjust their own learning, lets them track their educational objectives, and enables them to gauge their learning at a metacognitive level. As students are among the main agents of the teaching-learning process, instructors should share the learning objectives with them; such sharing can develop students' basic knowledge and higher-order cognitive processes such as application and transfer (Fulmer, 2017). If learners know what they are expected to learn in a lesson, they will concentrate more on those areas. Formative assessment makes teaching more effective by guiding learners toward learning objectives, identifying learning needs, modifying teaching accordingly, and increasing teachers' awareness of efficient teaching methods. Lastly, our findings may help materials developers build more formative assessment activities into EFL textbooks.

In conclusion, this study demonstrated the positive impact of formative assessment on Iranian EFL students' academic motivation, attitude toward learning, test anxiety, and self-regulation. Teachers are therefore strongly recommended to use formative assessment in their classes to help students improve their language learning: it allows teachers to modify instruction according to the results, and such modifications and improvements can generate immediate benefits for students' learning.

A further conclusion is that formative assessment enables teachers to provide continuous feedback, which allows students to be part of the learning environment and to develop the self-assessment strategies that help them understand their own thinking processes. All in all, providing frequent feedback during the learning process is an efficient technique for motivating and encouraging students to learn a language more successfully: by assessing students during the lesson, teachers can help them improve their skills and see whether they are progressing. Formative assessment is thus an essential part of teaching that should be used in EFL instructional contexts.

As we could not include many participants, we recommend that future researchers recruit larger samples to increase the generalizability of their results. We worked only with male EFL learners, so subsequent studies should include both genders. We could not gather qualitative data to enrich our results; future researchers are advised to collect both quantitative and qualitative data to strengthen the validity of their findings. Future studies could also examine the effects of summative and formative assessment on language skills and sub-skills, and investigate the effects of other types of assessment on language skills and sub-skills as well as on the psychological variables involved in language learning.

Acknowledgements

Not applicable.

Abbreviations

EFL: English as a foreign language
ANOVA: Analysis of variance
PET: Preliminary English Test
SAS: Science Anxiety Scale
SRSS: Self-Regulatory Strategies Scale
AMTB: Attitude/Motivation Test Battery
SDT: Self-Determination Theory
EG: Experimental group
CG: Control group

Authors’ contributions

All authors contributed equally. All authors read and approved the final manuscript.

Authors’ information

Seyed M. Ismail is an assistant professor at Prince Sattam Bin Abdulaziz University, Saudi Arabia. His research interests are teaching and learning, testing, and educational strategies. He has published many papers in various journals.

D. R. Rahul is an assistant professor at the School of Science and Humanities, Shiv Nadar University Chennai, Chennai, India. He has published several research papers in national and international language teaching journals.

Indrajit Patra is an independent researcher. He received his PhD from NIT Durgapur, West Bengal, India.

Ehsan Rezvani is an assistant professor in Applied Linguistics at Islamic Azad University, Isfahan (Khorasgan) Branch, Isfahan, Iran. He has published many research papers in national and international language teaching journals.

Funding

We did not receive any funding at any stage.

Availability of data and materials

Declarations

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Seyed M. Ismail, Email: [email protected] .

D. R. Rahul, Email: rahuldrnitt@gmail.com.

Indrajit Patra, Email: ipmagnetron0@gmail.com.

Ehsan Rezvani, Email: [email protected] .

  • Abeywickrama P, Brown HD. Language assessment: Principles and classroom practices. Pearson Longman; 2010.
  • Ahmad S. Relationship of academic SE to self-regulated learning, SI, test anxiety and academic achievement. International Journal of Education. 2012;4(1):12–25. doi:10.5296/ije.v4i1.1091.
  • Ahmadi S, Namazizadeh M, Abdoli B, Seyedalinejad A. Comparison of achievement motivation of football players between the top and bottom teams of the Football Premier League. Olympic Quarterly. 2009;17(3):19–27.
  • Al Tayib Umar A, Abdulmlik Ameen A. The effects of formative evaluation on students' achievement in English for specific purposes. Journal of Educational Research and Reviews. 2021;9(7):185–197. doi:10.33495/jerr_v9i7.21.134.
  • Alahmadi N, Alrahaili M, Alshraideh D. The impact of the formative assessment in speaking test on Saudi students' performance. Arab World English Journal. 2019;10(1):259–270. doi:10.24093/awej/vol10no1.22.
  • Aouine A. English language assessment in the Algerian middle and secondary schools: A context evaluation. 2011.
  • Ashdale M. The effect of formative assessment on achievement and motivation. 2020.
  • Assessment Reform Group. Assessment for learning. 2007.
  • Black P, Harrison C, Lee C, Marshall B, Wiliam D. Assessment for learning: Putting it into practice. Open University Press; 2004.
  • Black P, Wiliam D. Assessment for learning in the classroom. Assessment and Learning. 2006;5:9–25.
  • Britner SL, Pajares F. Sources of science SE beliefs of middle school students. Journal of Research in Science Teaching. 2006;43(5):485–499. doi:10.1002/tea.20131.
  • Brown HD. Language assessment principles and classroom practices. Oxford University Press; 2003.
  • Buyukkarci K, Sahinkarakas S. The impact of formative assessment on students' assessment preferences. The Reading Matrix: An International Online Journal. 2021;21(1):142–161.
  • Chalak A, Kassaian Z. Motivation and attitudes of Iranian undergraduate EFL students towards learning English. GEMA Online Journal of Language Studies. 2010;10(2):37–56.
  • Chan KT. Embedding formative assessment in blended learning environment: The case of secondary Chinese language teaching in Singapore. Education Sciences. 2021;11(7):360. doi:10.3390/educsci11070360.
  • Cizek GJ. An introduction to formative assessment: History, characteristics, and challenges. In: Andrade HL, Cizek GJ, editors. Handbook of formative assessment. Routledge; 2010. pp. 3–17.
  • Clark I. Formative assessment: Policy, perspectives and practice. Florida Journal of Educational Administration & Policy. 2011;4(2):158–180.
  • Craig KJ, Brown KJ, Baum A. Environmental factors in the etiology of anxiety. 2000.
  • DeLuca C, Klinger D, Pyper J, Woods J. Instructional rounds as a professional learning model for systemic implementation of Assessment for Learning. Assessment in Education: Principles, Policy & Practice. 2015;22(1):122–139. doi:10.1080/0969594X.2014.967168.
  • Dixson DD, Worrell FC. Formative and summative assessment in the classroom. Theory Into Practice. 2016;55(2):153–159. doi:10.1080/00405841.2016.1148989.
  • Douglas G, Wren D. Using formative assessment to increase learning. Virginia Beach City Public Schools; 2008.
  • Ellis R. The study of second language acquisition. Oxford University Press; 1994.
  • Firouznia S, Yousefi A, Ghassemi G. The relationship between academic motivation and academic achievement in medical students of Isfahan University of Medical Sciences. Iranian Journal of Medical Education. 2009;9(1):79–84.
  • Fox J, Haggerty J, Artemeva N. Mitigating risk: The impact of a diagnostic assessment procedure on the first-year experience in engineering. In: Read J, editor. Post-admission language assessment of university students. Springer; 2016. pp. 43–65.
  • Fulmer SM. Should we share learning outcomes/objectives with students at the start of a lesson? 2017.
  • Gardner RC. Attitude/Motivation Test Battery: International AMTB research project. The University of Western Ontario; 2004.
  • Ghahderijani BH, Namaziandost E, Tavakoli M, Kumar T, Magizov R. The comparative effect of group dynamic assessment (GDA) and computerized dynamic assessment (C-DA) on Iranian upper-intermediate EFL learners' speaking complexity, accuracy, and fluency (CAF). Language Testing in Asia. 2021;11:25. doi:10.1186/s40468-021-00144-3.
  • Glazer N. Formative plus summative assessment in large undergraduate courses: Why both? International Journal of Teaching and Learning in Higher Education. 2014;26(2):276–286.
  • Greenstein L. What teachers really need to know about formative assessment. ASCD; 2010.
  • Hamedi A, Fakhraee Faruji L, Amiri Kordestani L. The effectiveness of using formative assessment by Kahoot application on Iranian intermediate EFL learners' vocabulary knowledge and burnout level. Journal of New Advances in English Language Teaching and Applied Linguistics. 2022;4(1):768–786.
  • Heritage M. Formative assessment: Improving teaching and learning. 2012.
  • Hwang GJ, Chang HF. A formative assessment-based mobile learning approach to improving the learning attitudes and achievements of students. Computers and Education. 2011;56:1023–1031. doi:10.1016/j.compedu.2010.12.002.
  • Imen. The impact of formative assessment on EFL students' writing skill. 2020.
  • Kadıoğlu C, Uzuntiryaki E, Çapa-Aydın Y. Development of Self-Regulatory Strategies Scale (SRSS). Eğitim ve Bilim. 2011;36(160):11–23.
  • Kara A. The effect of a 'learning theories' unit on students' attitudes towards learning. Australian Journal of Teacher Education. 2009;34(3):100–113. doi:10.14221/ajte.2009v34n3.5.
  • Kathy D. 22 essay assessment technique for measuring in teaching learning. 2013.
  • King MD. The effects of formative assessment on student self-regulation, motivational beliefs, and achievement in elementary science (Doctoral dissertation). 2003.
  • Krashen S. Second language acquisition and second language learning. Pergamon Press; 1981.
  • Liu F, Vadivel B, Mazaheri F, Rezvani E, Namaziandost E. Using games to promote EFL learners' willingness to communicate (WTC): Potential effects and teachers' attitude in focus. Frontiers in Psychology. 2021:4526.
  • Mahshanian A, Shoghi R, Bahram M. Investigating the differential effects of formative and summative assessment on EFL learners' end-of-term achievement. Journal of Language Teaching and Research. 2019;10(5):1055–1066. doi:10.17507/jltr.1005.19.
  • Marsh CJ. A critical analysis of the use of formative assessment in schools. Educational Research for Policy and Practice. 2007;6(1):25–29. doi:10.1007/s10671-007-9024-z.
  • Masita M, Fitri N. The use of Plickers for formative assessment of vocabulary mastery. Ethical Lingua: Journal of Language Teaching and Literature. 2020;7(2):311–320. doi:10.30605/25409190.179.
  • McCallum S, Milner MM. The effectiveness of formative assessment: Student views and staff reflections. Assessment and Evaluation in Higher Education. 2021;46(1):1–16. doi:10.1080/02602938.2020.1754761.
  • Nunan D. Research methods in language learning. CUP; 1992.
  • Ounis A. The assessment of speaking skills at the tertiary level. International Journal of English Linguistics. 2017;7(4):95–113. doi:10.5539/ijel.v7n4p95.
  • Ozan C, Kıncal RY. The effects of formative assessment on academic achievement, attitudes toward the lesson, and self-regulation skills. Educational Sciences: Theory and Practice. 2018;18:85–118.
  • Palomba CA, Banta TW. Assessment essentials: Planning, implementing, and improving assessment in higher education. Jossey-Bass; 1999.
  • Pappamihiel NE. English as a second language students and English language anxiety: Issues in the mainstream classroom. ProQuest Education Journal. 2002;36(3):327–355.
  • Paul R, Elder L. Critical thinking: Tools for taking charge of your professional and personal life. Pearson Education; 2013.
  • Persaud Singh V, Ewert D. The effect of formative assessment on performance in summative assessment: A study on business English students in a language training center. 2021.
  • Pinchok N, Brandt WC. Connecting formative assessment research to practice: An introductory guide for educators. Learning Point; 2009.
  • Popham WJ. Classroom assessment: What teachers need to know. 5th ed. Prentice Hall; 2008.
  • Quintana J. PET practice tests. Oxford University Press; 2008.
  • Remmi F, Hashim H. Primary school teachers' usage and perception of online formative assessment tools in language assessment. International Journal of Academic Research in Progressive Education and Development. 2021;10(1):290–303. doi:10.6007/IJARPED/v10-i1/8846.
  • Rezai A, Namaziandost E, Miri M, Kumar T. Demographic biases and assessment fairness in classroom: Insights from Iranian university teachers. Language Testing in Asia. 2022;12(1):1–20. doi:10.1186/s40468-022-00157-6.
  • Robinowitz A. From principles to practice: An embedded assessment system. Applied Measurement in Education. 2010;13(2):181–208.
  • Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist. 2000;55(1):68–95. doi:10.1037/0003-066X.55.1.68.
  • Sepehrian A. Self-efficacy, achievement motivation and academic procrastination as predictors of academic achievement in pre-college students. Proceeding of the Global Summit on Education. 2013;6:173–178.
  • Shepard LA. Classroom assessment. In: Brennan RL, editor. Educational measurement. 4th ed. American Council on Education/Praeger; 2006. pp. 623–646.
  • Spolsky B, Hult FM. The handbook of educational linguistics. Blackwell; 2008.
  • Tahir M, Tariq H, Mubashira K, Rabbia A. Impact of formative assessment on academic achievement of secondary school students. International Journal of Business and Social Science. 2012;3(17). http://myflorida.com/apps/vbs/vbs_www.ad.view_ad?advertisement_key_num=107800
  • Tekin EG. Matematik eğitiminde biçimlendirici değerlendirmenin etkisi [Effect of formative assessment in mathematics education]. 2010.
  • Tella J, Indoshi FC, Othuon LA. Relationship between students’ perspectives on the secondary school English curriculum and their academic achievement in Kenya. Research. 2010; 1 (9):390–395. [ Google Scholar ]
  • Vadivel B, Namaziandost E, Saeedian A. Progress in English language teaching through continuous professional development—teachers’ self-awareness, perception, and feedback. Frontiers in Education. 2021; 6 :757285. doi: 10.3389/feduc.2021.757285. [ CrossRef ] [ Google Scholar ]
  • Vogt K, Tsagari D, Csépes I, Green A, Sifakis N. Linking learners’ perspectives on language assessment practices to teachers’ assessment literacy enhancement (TALE): Insights from four European countries. Language Assessment Quarterly. 2020; 17 (4):410–433. doi: 10.1080/15434303.2020.1776714. [ CrossRef ] [ Google Scholar ]
  • Vygotsky LS. Mind in society: The development of higher psychological processes. Harvard University Press; 1978. [ Google Scholar ]
  • Wiliam D. Embedded formative assessment. Solution Tree; 2011. [ Google Scholar ]
  • Wiliam D. Formative assessment and contingency in the regulation of learning processes . 2014. [ Google Scholar ]
  • Wininger SR. Using your tests to teach: Formative summative assessment. Teaching of Psychology. 2005; 32 (3):164–166. doi: 10.1207/s15328023top3203_7. [ CrossRef ] [ Google Scholar ]
  • Woods, N. (2015). Formative assessment and self-regulated learning. The Journal of Education Retrieved from https://thejournalofeducation.wordpress.com/2015/05/20/formative-assessment-and-self-regulated-learning/ .
  • Wuest DA, Fisette JL. Foundations of physical education, exercise science, and sport. 17. McGraw-Hill; 2012. [ Google Scholar ]
  • Xiao Y, Yang M. Formative assessment and self-regulated learning: How formative assessment supports students' self-regulation in English language learning. System. 2019; 81 :39–49. doi: 10.1016/j.system.2019.01.004. [ CrossRef ] [ Google Scholar ]

Formative and Summative Assessment

Assessment helps instructors and students monitor progress towards achieving learning objectives. Formative assessment is used throughout an instructional period to surface and address misconceptions, struggles, and learning gaps. Summative assessments evaluate learning, knowledge, proficiency, or success at the conclusion of an instructional period.

Below you will find descriptions of formative and summative assessment, along with a diagram, examples, recommendations, and strategies/tools for next steps.

Descriptions

Formative assessment (Image 1, left) refers to tools that identify misconceptions, struggles, and learning gaps along the way and assess how to close those gaps. It includes practical tools for helping to shape learning. It can even bolster students’ ability to take ownership of their education when they understand that the goal is to improve learning, not to apply final marks (Trumbull and Lash, 2013). It can include students assessing themselves, peers, or even the instructor, through writing, quizzes, conversation, and more. Formative assessment occurs throughout a class or course and seeks to improve student achievement of learning objectives through approaches that can support specific student needs (Theall and Franklin, 2010, p. 151). In the classroom, formative assessment centers on practice and is often low-stakes. Students may or may not receive a grade.

In contrast, summative assessments (Image 1, right) evaluate student learning, knowledge, proficiency, or success after an instructional period, such as a unit, course, or program. Summative assessments are almost always formally graded and often heavily weighted (though they do not need to be). Summative assessment can be used to significant effect in conjunction and in alignment with formative assessment, and instructors can consider a variety of ways to combine these approaches.

Image 1. The when, why, and how of formative and summative assessment:

  • Formative (help students learn and practice): when – throughout the course; why – to identify gaps and improve learning; how – via approaches that support specific student needs.
  • Summative (assess student performance): when – at the end of an instructional period; why – to collect evidence of student knowledge, skills, or proficiency; how – via a cumulative assessment or exam.

Examples of Formative and Summative Assessments

Formative: Learn and practice

  • In-class discussions
  • Clicker questions (e.g., Top Hat)
  • 1-minute reflection writing assignments
  • Peer review
  • Homework assignments

Summative: Assess performance

  • Instructor-created exams
  • Standardized tests
  • Final projects
  • Final essays
  • Final presentations
  • Final reports
  • Final grades

Formative Assessment Recommendations

Ideally, formative assessment strategies improve teaching and learning simultaneously. Instructors can help students grow as learners by actively encouraging them to self-assess their skills and knowledge retention, and by giving clear instructions and feedback. Seven principles (adapted, with additions, from Nicol and Macfarlane-Dick, 2006) can guide instructor strategies:

1. Keep clear criteria for what defines good performance

Instructors can explain the criteria for A–F graded papers and encourage student discussion and reflection about these criteria (accomplished through office hours, rubrics, post-grade peer review, or exam/assignment wrappers). Instructors may also hold class-wide conversations on performance criteria at strategic moments throughout the term.

2. Encourage students' self-reflection

Instructors can ask students to utilize course criteria to evaluate their own or peers’ work and share what kinds of feedback they find most valuable. Also, instructors can ask students to describe their best work qualities, either through writing or group discussion.

3. Give students detailed, actionable feedback

Instructors can consistently provide specific feedback tied to predefined criteria, with opportunities to revise or apply feedback before final submission. Feedback may be corrective and forward-looking, rather than just evaluative. Examples include comments on multiple paper drafts, criterion discussions during 1-on-1 conferences, and regular online quizzes.

4. Encourage teacher and peer dialogue around learning

5. Promote positive motivational beliefs and self-esteem

Students will be more motivated and engaged when assured that an instructor cares for their development. Instructors can design assignments that allow for rewrites/resubmissions to promote learning development. These rewrites might utilize low-stakes assessments, or even anonymous automated online testing that (if appropriate) allows for unlimited resubmissions.

6. Provide opportunities to close the gap between current and desired performance

Related to the above: instructors can improve student motivation and engagement by making visible any opportunities to close gaps between current and desired performance. Examples include opportunities for resubmission, specific action points for writing or task-based assignments, and sharing the study or process strategies an instructor would use to succeed.

7. Collect information to help shape teaching

Instructors can collect information from students to provide targeted feedback and instruction. Students can identify where they are having difficulties, either on an assignment or test, or in written submissions. This approach also promotes metacognition, as students reflect upon their own learning.

Instructors may find various other formative assessment techniques through  CELT’s Classroom Assessment Techniques .

Summative Assessment Recommendations

Because summative assessments are usually higher-stakes than formative assessments, it is especially important to ensure that the assessment aligns with the instruction’s goals and expected outcomes. 

1. Use a Rubric or Table of Specifications

Instructors can use a rubric to provide expected performance criteria for a range of grades. Rubrics will describe what an ideal assignment looks like, and “summarize” expected performance at the beginning of the term, providing students with a trajectory and sense of completion. 

2. Design Clear, Effective Questions

If designing essay questions, instructors can ensure that questions meet criteria while allowing students the freedom to express their knowledge creatively and in ways that honor how they digested, constructed, or mastered meaning.

3. Assess Comprehensiveness

Effective summative assessments allow students to consider the totality of a course’s content, make deep connections, demonstrate synthesized skills, and explore the more profound concepts that underpin a course’s ideas and content.

4. Make Parameters Clear

When approaching a final assessment, instructors can ensure that parameters are well defined (length of assessment, depth of response, time and date, grading standards), that the knowledge assessed relates clearly to the content covered in the course, and that students with disabilities are provided the required space and support.

5. Consider Anonymous Grading

Instructors may wish to know whose work they grade so they can provide feedback that speaks to a student’s term-long trajectory. But to give a genuinely unbiased summative assessment, they can also consider a variety of anonymous grading techniques (see the “hide student names in SpeedGrader” Canvas guide).

Explore Assessment Strategies and Tools

Instructional Strategies

CELT’s online resources are organized to help an instructor sequentially work through the teaching process.

Learning Technology

A listing of applications that meet ISU’s security, accessibility, and purchasing standards.

Academic Integrity

Explore the following approaches and methods, which emphasize prevention and education.

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
  • Theall, M., & Franklin, J. L. (2010). Assessing teaching practices and effectiveness for formative purposes. In K. J. Gillespie & D. L. Robertson (Eds.), A guide to faculty development. San Francisco, CA: Jossey-Bass.
  • Trumbull, E., & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco: WestEd.

Formative and Summative Assessment, by the Center for Excellence in Learning and Teaching (CELT) at Iowa State University, is licensed under Creative Commons BY-NC-SA 4.0. This work is a derivative of Formative and Summative Assessment developed by the Yale University Poorvu Center for Teaching and Learning (retrieved on June 23, 2020) from https://poorvucenter.yale.edu/Formative-Summative-Assessments.



Teachers' Essential Guide to Formative Assessment


How can I use formative assessment to plan instruction and help students drive their own learning?


What is formative assessment?

A formative assessment is a teaching practice (a question, an activity, or an assignment) meant to gain information about student learning. It's formative in that it is intentionally done for the purpose of planning or adjusting future instruction and activities. Just as we consider our formative years when drawing conclusions about ourselves, a formative assessment is where we begin to draw conclusions about our students' learning.

Formative assessment moves can take many forms and generally target skills or content knowledge that is relatively narrow in scope (as opposed to summative assessments, which assess broader sets of knowledge or skills). Common examples of formative assessments include exit tickets, fist-to-five check-ins, teacher-led question-and-answer sessions or games, completed graphic organizers, and practice quizzes.

In short, formative assessment is an essential part of all teaching and learning because it enables teachers to identify and target misunderstandings as they happen, and to adjust instruction to ensure that all students are keeping pace with the learning goals. As described by the NCTE position paper Formative Assessment That Truly Informs Instruction , formative assessment is a "constantly occurring process, a verb, a series of events in action, not a single tool or a static noun."

What makes a good formative assessment?

As mentioned above, formative assessments can take many forms. The most useful formative assessments share some common traits:

  • They assess skills and content that have been derived from the backward planning process. They seek to assess the key learning milestones in the unit or learning sequence.
  • They are actionable. They are designed so that student responses either clearly demonstrate mastery of the skills and content, or they show exactly where mastery is lacking or misunderstanding is occurring.
  • When possible, they are student-centered. Using an assessment where students measure themselves or their peers, or where they're prompted to reflect on their results, puts students in charge of their own learning. It allows students to consider their own progress and determine positive next steps. Unfortunately, student-centered formative assessments don't always yield the easiest and most actionable information for teachers, so their benefits have to be weighed against other factors.

How should I use formative assessment results?

Formative assessments are generally used for planning future instruction and for helping students drive their own learning. In terms of future instruction, how you use assessment data depends mostly on what kind of results you get.

  • If 80% or more demonstrate mastery, you'll likely want to proceed according to plan with subsequent lessons. For individual students not demonstrating mastery, you'll want to find ways to provide extra support. This might mean a differentiated assignment, a guided lesson during independent work time, or support outside of class.
  • If between 50% and 80% demonstrate mastery, you'll need to use class time for structured differentiation. You'll need to build this into the next lesson(s) if it isn't already planned. This means different activities or guided instruction for different groups of students. Students who've demonstrated mastery could engage in an extension activity or additional practice, or serve as support for other students. Students still attempting mastery could receive additional guided practice or additional instructional materials like multimedia resources or smaller "chunks" of content.
  • If fewer than 50% demonstrate mastery, you'll need to do some whole-class reteaching. There are many approaches and concrete strategies for reteaching. Check out this article from Robert Marzano as well as this blog post from BetterLesson for ideas.

The above recommendations are general rules of thumb, but your school or district may have specific guidelines to follow around teaching and reteaching. Make sure to consult them first.
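As a minimal sketch, those rules of thumb can be written as a single decision function. The thresholds come from this article; the function name and messages are illustrative only:

```python
def plan_next_steps(mastery_rate: float) -> str:
    """Map a class's mastery rate (0.0-1.0) onto the article's rules of
    thumb. The thresholds come from the text; the wording is illustrative."""
    if mastery_rate >= 0.80:
        return ("Proceed as planned; arrange extra support for the "
                "individual students below mastery.")
    if mastery_rate >= 0.50:
        return ("Build structured differentiation into the next lesson(s): "
                "extension work for students at mastery, guided practice "
                "for the rest.")
    return "Reteach the skill or content to the whole class."

# Example: 18 of 30 students demonstrated mastery on an exit ticket.
print(plan_next_steps(18 / 30))  # 0.6 -> structured differentiation
```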

Also, it's important to remember that building differentiation into the structure of your class and unit design from the beginning is the best way to make use of formative assessment results. Whether this means a blended or flipped classroom or activity centers, structuring in small-group, student-directed learning activities from the outset will make you more willing—and better prepared—to use formative assessment regularly and effectively in your class.

How do I know what type of formative assessment to use?

This is perhaps the most difficult question when it comes to formative assessment. There are so many different methods (just check out this list from Edutopia) that it's easy to get lost in the sea of options. When it comes to choosing, the most important question is: What type of skill or content are you seeking to measure?

  • Content knowledge ("define," "identify," "differentiate") is generally the easiest to assess. For less rigorous objectives like these, a simple fist-to-five survey or exit ticket can work well. An edtech tool can also work well here, as many of them can score and aggregate multiple-choice responses automatically.
  • Higher-order thinking skills ("analyze," "synthesize," "elaborate") are generally more difficult and time-consuming to assess. For this, you'll likely use a different question type than multiple choice and need to allow more time for students to work. A good option here is to have students do a peer assessment using a rubric, which has the double benefit of allowing them to reflect on their own learning and cutting down the time you need to spend assessing the work. This can be done through an LMS or another project-based learning app, or through old-school paper and pencil; it just depends on your preference. Because students (and adults, too) often don't know what they don't know, self-assessments may be less accurate and less actionable for these types of skills.
  • Process-oriented skills ("script," "outline," "list the steps") also tend to be more difficult to assess. Graphic organizers can work well here, allowing teachers (or peer reviewers) to see how students arrived at their results. STEM apps for higher-order thinking and coding apps can also make this assessment information more accessible.

What are the benefits of using an edtech tool for formative assessment?

As mentioned above, one of the big benefits of using a tool for formative assessment is that it allows teachers to use their time more efficiently. Apps like Quizlet and Formative use a quiz format to provide real-time feedback to both students and teachers, and (in their premium versions) provide aggregate qualitative and quantitative assessment data. Other apps, like Kahoot! or Quizizz, provide these features with the added engagement of game-based competition. Apps like Flip (video-based) and Edulastic (tracks against standards) provide assessment data with other additional perks. Check out our list of top tech tools for formative assessment to see a range of options.

Finally, if you're already regularly teaching with technology , using an edtech tool fits seamlessly into the daily activities your students already know how to do. It can be an independent activity that students do as part of a blended classroom, or an outside-of-class activity that's part of a flipped classroom. In this context, both students and teachers will get the most out of the time-saving and student-centered benefits that edtech tools provide.

As an education consultant, Jamie created curriculum and professional development content for teachers. Prior to consulting, Jamie was senior manager of educator professional learning programs at Common Sense and taught middle school English in Oakland, California. For the 2016–2017 school year, Jamie received an Excellence in Teaching award and was one of three finalists for Teacher of the Year in Oakland Unified School District. While teaching, Jamie also successfully implemented a $200,000 school-wide blended-learning program funded by the Rogers Family Foundation and led professional development on a wide range of teaching strategies. Jamie holds a bachelor's degree in philosophy from Eugene Lang College and a master's degree in philosophy and education from Teachers College at Columbia University. Jamie currently lives in São Paulo, Brazil with his 4-year-old son, Malcolm, and his partner, Marijke.



Center for the Advancement of Teaching Excellence

Formative Assessments

Nicole Messier, CATE Instructional Designer. February 4, 2022.

WHAT?

Formative assessments occur before, during, and after a class session, and the data collected are used to inform improvements to teaching practices and/or student learning and engagement.

  • Formative assessments are beneficial to instructors by helping them to understand students’ prior knowledge and skills, students’ current level of engagement with the course materials, and how to support students in their progression to achieve the learning objectives.
  • Formative assessments are beneficial to students by providing them with immediate feedback on their learning as well as opportunities to practice metacognition, which is an awareness of one’s own knowledge and thinking processes as well as an ability to self-monitor one’s learning path (e.g., self-assessment of learning) and adapt or make changes to one’s learning behaviors (e.g., goal setting).

Formative assessments can be viewed through two broad assessment strategies: assessments for learning and assessments as learning.

  • Assessment for learning (AfL) provides the instructor an opportunity to adapt their teaching practices to support current students’ needs through the collection of data as well as provide practice, feedback, and interaction with the students.
  • Assessment as learning (AaL) provides student ownership of learning by utilizing evidence-based learning strategies, promoting self-regulation, and providing opportunities for reflective learning.


Want to learn more about these assessment strategies? Please visit the Resources Section – CATE website to review resources, teaching guides, and more.

Non-Graded Formative Assessments (AfL & AaL)

Non-graded formative assessments can be used to examine current students’ learning and provide an opportunity for students to self-check their learning.

  • Before class, questions can provide students with an opportunity to self-assess their learning as well as provide instructors with information for adapting their instruction.
  • During class, questions can provide a platform for discussion, interaction, and feedback.
  • After class, questions can provide students with opportunities to reflect, self-assess, and use retrieval practice .
  • Video questions to gauge understanding of content in the video.
  • Think-pair-share – asking students to turn to their neighbor in class or small breakout groups in an online discussion and share their thoughts, ideas, or answers to a topic or question.
  • Muddiest point – asking students to identify a topic or theme that is unclear, or that they do not have confidence in their knowledge yet.
  • Three-minute reflection – asking students to pause and reflect on what they have learned during class (e.g., shared in a survey tool like Google Form , or in a discussion tool like Acadly ).
  • Asynchronous online sharing and brainstorming using Blackboard discussion boards or EdTech tools like Jamboard or Padlet.

Polling and video questions can be designed as assessment for learning (AfL) by gathering data for instructors to adapt their lectures and learning activities to meet students where they are or to provide opportunities for students to reflect on their learning. In-class activities such as think-pair-share and muddiest point or asynchronous sharing can be designed as assessment as learning (AaL) by providing opportunities for students to self-assess their learning and progress.

Example 1 - Polling Questions

An instructor wants to determine if students understand what is being discussed during the lecture and decides to create an opportunity for students to reflect and self-assess. The instructor designs a Likert-scale poll where students are asked to rank their understanding of concepts from 1 – extremely muddy (no understanding of the concept) to 5 – ready to move on (a clear understanding of the concept). Based on student responses, the instructor decides to revisit a muddy concept in the next class and provides additional resources on that concept via the course site to support student learning.

The instructor also encourages students to revisit concepts that they scored a three or lower on and write down questions about the concepts to share before the next class. The instructor decides to continue using the poll and the collection of questions on important concepts in the upcoming units. The instructor will utilize these questions throughout the term to support student learning.

This formative assessment example demonstrates assessment for learning (AfL) and assessment as learning (AaL) by collecting data to adapt instruction as well as providing students with the opportunity to self-assess.
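For instructors who export poll results, the decision rule in Example 1 can be automated in a few lines. A minimal sketch, assuming hypothetical concepts, ratings on the 1–5 scale above, and the "three or lower" cut-off:

```python
from statistics import mean

# Hypothetical export of the Likert poll in Example 1: per-concept ratings
# on the 1 (extremely muddy) to 5 (ready to move on) scale.
ratings = {
    "concept A": [2, 3, 1, 2, 4, 2],
    "concept B": [4, 5, 4, 3, 5, 4],
}

REVISIT_AT_OR_BELOW = 3  # mirrors the instructor's "three or lower" rule

for concept, scores in ratings.items():
    avg = mean(scores)
    action = "revisit next class" if avg <= REVISIT_AT_OR_BELOW else "move on"
    print(f"{concept}: mean {avg:.1f} -> {action}")
```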

Polling questions can also be used to verify that pre-class work was completed, as a knowledge check while taking attendance, as a quick confirmation of understanding while lecturing, or as an exit poll before leaving class (on-campus or synchronous online).

Non-graded formative assessments can be adapted to provide extrinsic motivation by awarding students credit if they achieve a certain percentage of correct answers (e.g., students complete at least 70% of the questions correctly to receive full credit). This type of extrinsic motivation shifts the focus from the students’ ability to answer the questions correctly to promoting self-assessment, practice, and goal setting.

Graded Formative Assessments (AfL & AaL)

Just like non-graded formative assessments, graded formative assessments can be used to examine current students’ learning and provide an opportunity for students to gauge their learning. Graded formative assessments should provide students with opportunities to practice skills, apply knowledge, and self-assess their learning.

  • One-minute essay – asking students to write down their thoughts on a topic at the end of a lecture.
  • Concept map – asking students to create a diagram showing relationships between concepts.
  • Authentic assessments – an assessment that involves a real-world task or application of knowledge instead of a traditional paper.
  • Reflections, journals, self-assessment of previous work
  • Discussion forums – academic discussions focused on a topic or question.
  • Group work or peer review
  •  Video questions using EdTech tools like Panopto or Echo360 .

Formative assessments like in-class work, written assignments, discussion forums, and group work can be graded with a rubric to provide individualized feedback to students. Video questions using EdTech tools like Panopto or Echo360 and quizzes using Blackboard Tests, Pools, and Surveys can be automatically graded with immediate feedback provided to students.
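As a platform-independent illustration of automatic grading with immediate feedback, the sketch below checks a single multiple-choice response and returns an explanatory message; the question, answer key, and feedback text are invented:

```python
# A minimal, platform-independent sketch of auto-graded questions with
# immediate, item-level feedback. All content here is hypothetical.
quiz = [
    {"prompt": "Formative assessment is primarily used to...",
     "options": ["assign final course grades",
                 "adjust teaching and support learning in progress"],
     "answer": 1,
     "explanation": "Formative assessment informs teaching and learning "
                    "while a course is still underway."},
]

def grade_response(item: dict, choice: int) -> str:
    """Return immediate feedback for one multiple-choice response."""
    prefix = "Correct." if choice == item["answer"] else "Not quite."
    return f"{prefix} {item['explanation']}"

print(grade_response(quiz[0], 0))  # immediate feedback on a wrong answer
```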

Example 2 - Written Assignment

An instructor decides to create four formative written assessments to measure student learning and provide opportunities for students to self-assess and self-regulate their learning. These written assignments are designed to assess each of the learning objectives in the course. Students are required to find new evidence by performing research based on the aligned learning objective(s) in each assignment. In the first written assignment, students are provided with a rubric to self-assess their work and submit their self-assessment and work. The instructor provides personalized feedback using the rubric on their work and self-assessment. In the second and third written assignments, students are asked to submit their work and provide a review of their peers’ work using a rubric. The instructor provides feedback on the peer review only. In the fourth assignment, the students are asked to select one of the previous pieces of work and make revisions as well as write a reflection on the knowledge and skills that were developed by completing a self-assessment and two peer reviews.

This formative assessment example demonstrates the importance of feedback in improving student performance and learning. This example could come from a writing, research, or humanities course where students are expected to produce narrative, argumentative, persuasive, or analytical essays. These written assignments could also be in major coursework and be more authentic (involves a real-world task or application of knowledge instead of a traditional paper), for example, developing a memo, proposal, blog post, presentation, etc. 
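To make the pattern in Example 2 explicit, the four-assignment sequence can be written out as data showing who assesses each submission and what the instructor gives feedback on. A rough sketch with invented field names:

```python
# Example 2's sequence as data, so the pattern of self-assessment,
# peer review, and instructor feedback is explicit. Field names invented.
assignment_sequence = [
    {"assignment": 1, "student_submits": ["work", "self-assessment (rubric)"],
     "instructor_feedback_on": ["work", "self-assessment"]},
    {"assignment": 2, "student_submits": ["work", "peer review (rubric)"],
     "instructor_feedback_on": ["peer review"]},
    {"assignment": 3, "student_submits": ["work", "peer review (rubric)"],
     "instructor_feedback_on": ["peer review"]},
    {"assignment": 4, "student_submits": ["revision of earlier work", "reflection"],
     "instructor_feedback_on": ["revision", "reflection"]},
]

for step in assignment_sequence:
    print(step["assignment"], "->", ", ".join(step["instructor_feedback_on"]))
```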

Formative assessments are used to provide opportunities for practice, feedback, and interaction, ensuring students are active learners instead of passive recipients of information. In an active learning environment, student engagement, motivation, and outcomes improve through the implementation of formative assessments. Students participate in meaningful learning activities and assessments that promote self-regulation, provide practice, and reinforce skills.

Want to learn more about active learning strategies? Please visit the  Resources Section – CATE website to review resources, teaching guides, and more.

WHY?

Why develop formative assessments in your course?

Since the late 90s, Paul Black and Dylan Wiliam have been challenging the view that summative assessment is the best way to measure learning and support student success. Black and Wiliam’s research on formative assessment and student achievement started the shift from a summative focus to a more balanced view of assessment for student success.

Studies have shown that students who participate in formative assessments perform better overall and achieve higher scores than students who do not (Robertson, 2019).

Impact on Students

Students who participate in formative assessments develop and improve several essential skills (Koka, 2017) including:

  • Communication skills
  • Collaboration skills
  • Problem-solving skills
  • Metacognition
  • Self-regulation skills

Student involvement, self-reflection, and open communication between faculty and students during formative assessments are vital to student success (Koka, 2017). Effective formative assessments include (Black, 2009):

  • “Clarifying and sharing learning intentions and criteria for success,
  • Engineering effective classroom discussions and other learning tasks that elicit evidence of student understanding,
  • Providing feedback that moves students forward,
  • Activating students as instructional resources for one another,
  • Activating students as the owners of their own learning.”

Use of EdTech Tools

Studies have shown that using EdTech tools for formative assessments improves the immediacy of scores and feedback to students. Student wait time and faculty workload are dramatically reduced by the utilization of EdTech tools (Robertson, 2019). The use of EdTech tools for formative assessments also improves student satisfaction, enjoyment, and engagement (Grier, 2021; Mdlalose, 2021). EdTech tools can be used for synchronous and asynchronous formative assessments; however, synchronous formative assessments can allow the instructor to clarify misconceptions and help foster more engagement during discussions to create a learning community (Mdlalose, 2021).

In a study and literature review, Robertson, Humphrey, and Steele (2019) identified the elements needed for formative assessment tools to be effective: timeliness of feedback; elaborative feedback from the instructor; personalized feedback for students; reusability (reusing existing questions or content); accessibility (whether use of the tool excludes some students); interface design (how easy it is to implement); interaction (whether it improves the frequency of interactions between student and instructor); and cost (funded by the institution or a personal expense). These elements should be taken into consideration as you determine which EdTech tool(s) to use for formative assessments.
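As a rough sketch, those elements can be treated as a comparison checklist when shortlisting tools. Only the criteria below come from the review; the scoring scheme, the example tools, and their yes/no ratings are invented:

```python
# The effectiveness elements from Robertson, Humphrey, and Steele (2019),
# arranged as a simple comparison checklist. Tools and ratings are invented.
CRITERIA = [
    "timeliness of feedback", "elaborative feedback", "personalized feedback",
    "reusability", "accessibility", "interface design", "interaction", "cost",
]

def rank_tools(ratings: dict) -> list:
    """Order candidate tools by how many criteria they satisfy."""
    return sorted(ratings, key=lambda tool: sum(ratings[tool].values()),
                  reverse=True)

ratings = {
    "Tool A": dict.fromkeys(CRITERIA, True),
    "Tool B": {**dict.fromkeys(CRITERIA, True),
               "accessibility": False, "cost": False},
}
print(rank_tools(ratings))  # ['Tool A', 'Tool B']
```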

Feedback & Formative Assessments

A critical component of any formative assessment is the timeliness of feedback. Studies have shown that it is the immediacy of feedback that is most beneficial to student learning (Robertson, 2019). As you begin to design formative assessments or select an EdTech tool to develop one, make sure to determine how you will provide feedback to students.

Reflect on the following questions regarding feedback and formative assessments:

  • How will you ensure that feedback to students is timely?
  • How will you design multiple opportunities for feedback interactions with you and/or among peers?
  • How will you distribute feedback interactions throughout the course?
  • How will you provide personalized feedback to students?

Want to learn more about grading and feedback? Please visit the Resources Section – CATE website to review resources, teaching guides, and more.

HOW?

How do you start designing formative assessments?

First, review your course outcomes and learning objectives to ensure that the formative assessments you develop are aligned with them. Formative assessments can help measure student achievement of learning objectives, provide students with actionable feedback, and give the instructor data for decisions about current teaching and instruction practices.

So how do you determine what type of formative assessment to design? Or the frequency and distribution of formative assessments in your course? Let’s dive into some of the elements that might impact your design decisions, including class size, discipline, modality, and EdTech tools .

Class Size

Formative assessments can be designed and implemented in any course size from small seminar courses to large lecture courses. The size of the class will influence the decisions that instructors make regarding the use of EdTech tools to deliver formative assessments.

Small Class Size

  • May allow for more formative assessments distributed throughout the course.
  • May allow for more immediacy of feedback and descriptive, personalized, or dialogic feedback from the instructor.

Large Class Size

  • May require instructors to utilize EdTech tools to deliver formative assessments that are distributed throughout the course.
  • May require instructors to utilize EdTech tools to deliver timely, consistent, and helpful feedback to students.

Discipline

Formative assessments can be implemented in any type of course or program. A few considerations when developing formative assessments:

  • To understand students’ prior knowledge and skills.
  • As learning for students to reflect and self-regulate their learning.
  • To measure achievement of learning objectives.
  • To collect data to make decisions about teaching and instruction.

In undergraduate general education coursework, instructors should consider using formative assessments to understand student goals and motivations for taking a course and how to support their goals (future learning and connection to future career) and sustain their engagement in a course that may not be directly or obviously related to the major program of study. In major coursework, instructors might want to consider using formative assessments to reinforce knowledge and practice skills needed for summative assessments and external accreditation or licensure exams.

Modality

The modality of your course will influence the planning and delivery of formative assessments. Formative assessments can be designed for both synchronous and asynchronous delivery for any course modality.

Synchronous formative assessments (during scheduled classes) can be administered in on-campus, online synchronous, hybrid, and synchronous distributed courses; for example, in-class polls or surveys created with an EdTech tool like Acadly or iClickers.

Asynchronous formative assessments (outside of scheduled classes) can be administered in any type of course; however, they are vital in online asynchronous courses for measuring and reinforcing learning. For example, weekly or unit quizzes built with Blackboard Tests, Pools, and Surveys can reinforce student learning of the content.

Formative Assessment Tools

EdTech tools can help to reduce faculty workload by providing a delivery system that reaches students before, during, and/or after class sessions.

Below are EdTech tools that are available to UIC faculty to create and/or grade formative assessments for and as learning.

Video and Questions Tools

  • VoiceThread

Asynchronous formative assessment tools like videos with questions can help you provide opportunities for students to self-assess learning, receive feedback, and practice.

Questions, Surveys, and Polling Tools

  •   iClickers
  • Blackboard surveys and quizzes
  • Google forms
  • Poll Everywhere

Question or polling tools can be administered synchronously to check understanding during a lecture in on-campus or online synchronous courses. Many of these tools can also be used asynchronously by providing a link in the course materials or announcements in the learning management system (LMS) – Blackboard .

Assessment Creation and Grading Tools

  • Blackboard assignments drop box and rubrics
  • Gradescope

Assignments and scoring rubrics can be created in Blackboard for students to practice skills, receive feedback, and make revisions. Formative assessments can be created within Gradescope, or you can score in-class work using AI technology to reduce grading time, provide consistency in grading, and give general as well as personalized feedback to students.

Want to learn more about these formative assessment tools? Visit the EdTech section on the CATE website to learn more.

GETTING STARTED

The following steps will support you as you examine current formative assessment practices through the lens of assessment for learning (AfL) and assessment as learning (AaL) and develop new or adapt existing formative assessments.

  • The first step is to consider creating an outline of the course and determining when each learning objective is covered and should be assessed.
  • The second step is to determine the purpose of the formative assessment:
    • To collect data for decision-making about teaching and instruction (AfL).
    • To provide students opportunities for practice and feedback (AfL and AaL).
    • To promote self-regulation and reflective learning by students (AaL).
    • To provide differentiation for students to improve individual learning and performance (AfL).
  • The third step is to make design decisions (captured as a single record in the sketch after this list):
    • Format: in-class work, question(s), written assignment, etc.
    • Delivery: paper and pencil, Blackboard, EdTech tool, etc.
    • Feedback: general (how to improve performance), personalized (student-specific), etc.
    • Scoring: graded, non-graded, participation points, or extra credit.
  • The fourth step is to review the data collected from the formative assessment(s) and reflect on their implementation to inform continuous improvements for equitable student outcomes.
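One way to hold these design decisions together is as a single record per assessment. Below is a minimal sketch; the class name, field names, and example values are all invented for illustration:

```python
from dataclasses import dataclass

# A sketch of the step-three design decisions as one record per assessment.
# All names and example values here are hypothetical.
@dataclass
class FormativeAssessmentPlan:
    learning_objective: str  # which objective it measures (step one)
    purpose: str             # AfL, AaL, or both (step two)
    format: str              # in-class work, question(s), written assignment...
    delivery: str            # paper and pencil, Blackboard, EdTech tool...
    feedback: str            # general or personalized
    scoring: str             # graded, non-graded, participation, extra credit

plan = FormativeAssessmentPlan(
    learning_objective="Explain diffusion across a membrane",
    purpose="AfL and AaL",
    format="exit-ticket questions",
    delivery="Blackboard quiz",
    feedback="automatic, item-level feedback",
    scoring="non-graded",
)
print(plan)
```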

HOW TO USE/CITE THIS GUIDE


  • This work is licensed under Creative Commons Attribution-NonCommercial 4.0 International.
  • This license requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only.

Please use the following citation to cite this guide:

Messier, N. (2022). “Formative assessments.” Center for the Advancement of Teaching Excellence at the University of Illinois Chicago. Retrieved [today’s date] from https://teaching.uic.edu/resources/teaching-guides/assessment-grading-practices/formative-assessments/

  • CC BY-NC 4.0 Deed

ADDITIONAL RESOURCES

Academic Planning Task Force. (2020). Guidelines for Assessment in Online Learning Environments .

Clifford, S. (2020). Eleven alternative assessments for a blended synchronous learning environment. Faculty Focus.

Crisp, E. (2020). Leveraging feedback experiences in online learning. EDUCAUSE

Dyer, K. (2019). 27 easy formative assessment strategies for gathering evidence of student learning. NWEA .

Gonzalez, J. (2020). 4 laws of learning (and how to follow them). Cult of Pedagogy .

Weinstein, Y., Sumeracki, M., Caviglioli, O. (n.d.). Six strategies for effective learning. The Learning Scientists .

Agarwal, P. (n.d.) Retrieval practice website

Hattie, J. (n.d.) Visible Learning website

Weinstein, Y., Sumeracki, M., Caviglioli, O. (n.d.). The Learning Scientists. 

Wiliam, D. (n.d.) Dylan Wiliam’s website

REFERENCES

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5–31. doi:10.1007/s11092-008-9068-5

Earl, L. M., & Katz, S. (2006). Rethinking classroom assessment with purpose in mind: Assessment for learning, assessment as learning, assessment of learning. Winnipeg, Manitoba: Crown in Right of Manitoba.

Grier, D., Lindt, S., & Miller, S. (2021). Formative assessment with game-based technology. International Journal of Technology in Education and Science, 5, 193–202. doi:10.46328/ijtes.97

Koka, R., Jurane-Bremane, A., & Koke, T. (2017). Formative assessment in higher education: From theory to practice. European Journal of Social Sciences Education and Research, 9(1), 28–34. doi:10.26417/ejser.v9i1.p28-34

Mdlalose, N., Ramaila, S., & Ramnarain, U. (2021). Using Kahoot! as a formative assessment tool in science teacher education. International Journal of Higher Education, 11(2), 43–51. doi:10.5430/ijhe.v11n2p43

Robertson, S., Humphrey, S., & Steele, J. (2019). Using technology tools for formative assessments. Journal of Educators Online, 16(2).

Weinstein, Y., Sumeracki, M., & Caviglioli, O. (2019). Understanding how we learn: A visual guide. Routledge.


The Research Base for Formative Assessment


By Mary Ryerse and Susan Brookhart

Formative assessment is at the forefront of many education conversations and, at present, many accept intuitively that it’s an important part of the learning process.

Yet, how do we know formative assessment actually works? In this blog, we unpack some of the research base underlying the practice of formative assessment.

For those less familiar with the practice, it is important to note that formative assessment is a process in which students and teachers work together to improve learning. Both students and teachers are active participants in the process as they generate, interpret, and use evidence of learning to 1) aim for learning goals, 2) apply criteria to the work they produce, and 3) decide on next steps.

To summarize the process, there is a formative learning cycle which encourages students to repeatedly ask these three questions:

  • Where am I going?
  • Where am I now?
  • Where to next?

Further, formative assessment is not a particular kind of test, or marks or grades, but rather an ongoing practice.

Below, we have synthesized key information about research behind formative assessment’s effectiveness.

Foundational Research Base for Formative Assessment

The original research base on formative assessment is most typically traced back to the 1998 publication Assessment and Classroom Learning (Black & Wiliam, 1998), the first widely cited review of literature on formative assessment in the English language. The researchers found "firm evidence" that formative assessment can work, but also noted that there was not much formative assessment happening in conventional teaching practices (more on that later).

Much of the research measures whether a particular practice is working by the “effect size.”

Definition: "Effect size is the difference between treatment group and comparison group expressed in standard deviation units." Generally speaking, the higher the effect size, the stronger the evidence of impact.

In the Black and Wiliam review, the authors cited prior studies reporting effect sizes that ranged from 0.40 to 0.70 for formative assessment practice, a relatively strong indicator. Evidence for those numbers came mostly from a 1986 review of formative assessment in special education titled Effects of Systematic Formative Evaluation: A Meta-Analysis (Fuchs & Fuchs, 1986).
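As a worked example of that definition, the standardized mean difference (Cohen's d with a pooled standard deviation) can be computed directly; the score data below are invented and chosen so the result lands near that range:

```python
from statistics import mean, stdev

def effect_size(treatment: list, comparison: list) -> float:
    """Standardized mean difference: how many pooled standard deviations
    the treatment group outperforms the comparison group."""
    n1, n2 = len(treatment), len(comparison)
    s1, s2 = stdev(treatment), stdev(comparison)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Invented test scores for two hypothetical groups of six students.
treatment = [78, 85, 90, 74, 88, 81]
comparison = [75, 82, 87, 71, 84, 78]
print(round(effect_size(treatment, comparison), 2))  # -> 0.52
```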

In an effort to increase classroom practice, Black and Wiliam also created a practitioner's summary, Inside the Black Box: Raising Standards through Classroom Assessment, to go along with the original publication.

More Recent Reports and Critiques

A number of studies followed Black and Wiliam's foundational research. These more recent reports articulated a need for more research, offered critiques of existing research, and called for content-area-specific research.

A call for research. More recently, Formative Assessment: A Meta-Analysis and a Call for Research (Kingston & Nash, 2011) estimated the effect size in the 0.20 to 0.30 range. Even so, the findings point to a meaningful level of impact.

A critical review.  There is some criticism of research that lumps all formative assessment together because formative assessment is so complex. It encompasses many different practices and usually takes place in a context of other changes, as well (for example, a change to more student-centered learning in general). Bennett’s Formative Assessment: A Critical Review  (2011) is the most widely cited and best articulated of the critiques.

A look across content areas. It may be that, in the future, the best evidence of the effects of formative assessment on learning will be accumulated through studies within content areas, which removes some of the complexity across a group of studies. Formative Assessment and Writing (Graham, Hiebert, & Harris, 2015), a review of formative assessment specifically assessing its effects on students learning to write, found average effect sizes of 0.87, 0.62, 0.58, and 0.38 for feedback from adults, self, peers, and computers, respectively. These effect sizes point to the importance of emphasizing content-area-specific feedback and assessment.

Closely Connected: Effects of Feedback

Evidence for the effects of formative assessment also comes from the much larger literature on the effects of feedback, which is one of the foundational aspects of formative assessment. Recent reviews of the feedback literature include the following:

  • The Power of Feedback  (Hattie & Timperley,  2007). Summarizing previous meta-analyses of the effects of feedback, this review found an overall effect size of 0.79, which placed it among the top 5 or 10 influences of any kind on achievement.
  • Effects of Feedback in a Computer-Based Learning Environment on Students’ Learning Outcomes  (Van der Kleij, Feskens, & Eggen, 2015). This review reported effect sizes of 0.49 for elaborated feedback (feedback that includes explanations, additional material, and/or suggestions for next steps) in the context of computer-based instruction.
  • Focus on Formative Feedback  (Shute, 2008). This report provided a more descriptive review of the literature on task-level, formative feedback, and four summary tables of recommendations for practice based on the research reviewed.

Policy Brief

It has been demonstrated that students become active agents in their educational process as they learn how to use feedback, set goals, monitor their own progress and select strategies that move their own learning forward. According to Formative Assessment: Improving Learning in Secondary Classrooms, formative assessment practice has been shown to be highly effective in raising the level of student attainment, increasing equity of student outcomes and improving students' ability to learn.

The above-mentioned policy brief is an executive-summary-style brief of a larger study (OECD, 2005) that was part of OECD’s “What Works in Innovation in Education” initiative. The report reviews international research as well as OECD’s case study findings, presents case studies from several schools in eight participating countries, and includes English, French, and German literature reviews on formative assessment in their respective contexts and research traditions.

The report also found that schools using formative assessment show not only general gains in academic achievement but also particularly high gains for previously underachieving students.

Teachers are central to helping students accomplish the above and move learning forward. According to the same OECD report, teachers who engage in formative assessment report a changed classroom culture, greater clarity about goals, more varied instructional practices, and more positive interactions with students.

Given that the more general research has revealed positive effects of formative assessment, more specific research seems likely to further refine and improve the practice.

For more, see:

  • Scaling Formative Assessment: The How I Know Project
  • Keys to Success for Formative Assessment: A Professional Learning Guide
  • Reflections on How I Know

Susan Brookhart is a formative assessment expert who speaks, writes and consults through Brookhart Enterprises LLC (susanbrookhart.com).



Formative Assessment: Pros and Cons You Need to Know


A formative assessment occurs before or during a unit or lesson to inform the student and instructor of progress. It allows adjustments to be made along the way to help ensure the learning outcomes are achieved.

Formative assessment allows you to measure and track students' progress in real time and change the course curriculum and instruction as necessary.

How is Formative Assessment Formalized as a Practice?

Formative assessment is a method of assessing students' understanding of course material. It is an ongoing assessment strategy that can include a multitude of activities, all aimed at building a picture of the learner's progress toward the learning objectives. Formative assessment activities should be chosen to fit the course material and the attributes of learners and teacher; they might include quick-fire questions, one-minute reflective writing assignments, in-class discussions, or classroom polls. Any activity or task that helps the teacher and student understand the learner's progress throughout the course counts as formative assessment, whether it prompts the teacher to circle back on a concept or identifies specific students who need reteaching.

Students will face challenges in their studies—they may struggle to understand a subject or grasp a concept. It is nearly impossible for a teacher to notice the struggle of every student and provide the necessary support without using formative assessment.

Because formative assessments are considered part of the learning process, they do not require the same graded evaluation as summative assessments (such as end-of-unit exams). Instead, they give pupils a chance to show what they know at that point on the path to mastery, much like a homework assignment. They help teachers check for comprehension along the way, inform decisions about future instruction, and provide students with comments for improvement.


Why is Formative Assessment Vital?

Without formative assessment, there is a chance the lesson or unit will proceed with students lacking understanding of critical components of the learning outcomes. Valuable learning time will be lost, and the unit may fail to achieve the intended outcomes.

This sort of ongoing evaluation is especially crucial for programs aimed at behavioral change and community engagement. Formative assessment helps the teacher deal with unexpected events and respond to emergent needs.

Even when informal monitoring and practical input are hard to come by, formative assessment can be used to improve the unit's execution and increase the odds of achieving the learning outcomes. It also makes the many process changes along the way easier to understand.

Formative assessment can uncover solid evidence of what works, what doesn't, and why. As teachers gather this knowledge and refine their plans, the unit becomes more likely to succeed.

Advantages of Formative Assessment

Below are some advantages to consider when planning formative assessments.

1. Aids in the development of skills

The primary goal of formative assessment is to assist learners in developing competencies. Teachers can use this type of evaluation to determine an individual's learning needs and guide them toward their learning objectives.

This method identifies an individual's obstacles and challenges so that appropriate solutions can be developed to overcome them. The evaluation also informs the planning of the next lesson or task.

2. Examining student work

Students' approach to their work, including how they handle solo work, group work, and hands-on tasks, can reveal a lot of information, particularly if pupils are expected to explain their reasoning as they work. When teachers spend time analyzing students' work, they learn about:

  • Current comprehension, student attitudes, and skills developed concerning the subject topic
  • Teaching styles, strengths, and shortcomings
  • Any additional or specialized support required

Based on such analysis of students' classroom work, teachers can adapt their instruction to be more effective in the future.

3. Questioning techniques

Questioning techniques can be used with individuals, small groups, or the entire class. Asking students to respond to well-thought-out, higher-order questions like "why" and "how" is an effective formative assessment strategy. Higher-order questions require students to think more profoundly and assist the teacher in determining the degree and scope of their comprehension.

Giving pupils "wait time" to respond is another questioning tactic used in formative assessment. Studies have indicated that when thoughtful questions are paired with appropriate wait time, student engagement in classroom conversation increases.


4. Documentation

The next significant advantage of formative assessment is that it documents the learning process. This record of challenges and outcomes from the early and middle stages of the process can become an integral component of collaboration between teachers.

5. Complex instructional strategies are developed and refined

Formative assessment is practical for a variety of interventions, and it is particularly valuable for developing and refining complex instructional strategies in which multiple components are implemented at the same time.

Disadvantages of Formative Assessment

When it comes to formative assessment, there are a few drawbacks to consider.

1. Time-consuming and resource-intensive

Whether it's done monthly, weekly, or daily, formative assessment can be a time- and resource-intensive process because it requires frequent data collection, analysis, reporting, and refinement of the implementation plan to ensure success.

2. Requires assessment expertise

Conducting formative assessment thoroughly requires professionally qualified teachers who can analyze the criteria for mastery and create appropriate measures of student progress. Training in formative assessment can support the process and guide the development of quality assessments.

3. Creates complexity challenges

Formative assessment presents a variety of methodological issues because it requires ongoing, timely analysis and refinement to evaluate the impact of instruction. Furthermore, outcomes can only be measured after a strategy has been executed. Another drawback is the difficulty of determining the specific intervals at which to evaluate a strategy's success.

4. Evaluators must maintain objectivity

Because the intervention is shaped by constant feedback, teachers must take care to maintain their objectivity. There should be a consistent plan for keeping the distance needed for impartiality while still providing thorough, formative input.

Formative Assessment Implementation

Any formative assessment implementation plan is more robust when developed and implemented collaboratively by an instructional team. Collaboration between teachers, especially those who share common courses and curriculum, provides an opportunity for significant professional development and improved instructional effectiveness. The implementation team's plan should consider the following questions:

  • What should the students know and be able to demonstrate? (Learning outcomes)

Formative assessment should be firmly rooted in the learning outcomes intended by the curriculum. Once the learning outcome (target) is identified, success criteria are created to document what the student will be able to do to demonstrate mastery of the outcome. A pre-assessment is often helpful for understanding what each student has already comprehended before the unit's onset.

  • How will I know they are making progress and on-target to achieve the learning outcomes? (Demonstration of competencies)

Determining the formative assessment activities that will provide accurate and precise information about student understanding is a critical component of the assessment plan. The assessments should be tightly aligned with the intended learning outcomes and the success criteria established for mastery.

  • At which stages of the learning is it most critical to check progress and adjust instruction? (Key checkpoints)

Create a map of the unit or lesson's progression and establish the essential understandings each student should achieve along the way. Consider points in the learning where a misunderstanding is likely, based on the teacher's experience with other students, and where a misunderstanding could create a significant obstacle later in the progression.

  • What are some common misunderstandings teachers should expect, and how would they address them if they occur? (Strategy tool bag)

Responding to a classroom full of students learning at different rates is complex. Preparing instructional strategies for common misunderstandings during the formative assessment planning process can improve instructional agility and reduce the pressure on teachers to make significant, difficult decisions on the spot.

  • When will our collaborative team convene to reflect upon the process, share strategies, and analyze student work?

Formative assessment is as complicated as it is essential. Establishing a timeframe and setting the expectation of continued support from the instructional team is critical to successful instruction and student learning.

The impact of formative assessment on teaching and learning is significant. Providing formative feedback and evaluation is listed as one of the top influences on student achievement in John Hattie's work on Visible Learning. Students can direct the teacher's attention to areas where they need support, while teachers use the information from formative assessments to enhance their instructional techniques. Formative assessment is also an excellent way to get more students participating in class, because they are given varied quick-fire activities as the course advances.


What Is Formative Assessment and How Should Teachers Use It?

Check student progress as they learn, and adapt to their needs.


Assessments are a regular part of the learning process, giving both teachers and students a chance to measure their progress. There are several common types of assessments, including pre-assessment (diagnostic) and post-assessment (summative). Some educators, though, argue that the most important of all are formative assessments. So, what is formative assessment, and how can you use it effectively with your students? Read on to find out.

What is formative assessment?

Image: Frayer model describing the characteristics of formative assessment (source: KNILT).

Formative assessment takes place while learning is still happening. In other words, teachers use formative assessment to gauge student progress throughout a lesson or activity. This can take many forms (see below), depending on the teacher, subject, and learning environment. Here are some key characteristics of this type of assessment:

Low-Stakes (or No-Stakes)

Most formative assessments aren’t graded, or at least aren’t used in calculating student grades at the end of the grading period. Instead, they’re part of the daily give-and-take between teachers and students. They’re often quick and used immediately after teaching a specific objective.

Planned and Part of the Lesson

Rather than the quick check-for-understanding questions many teachers ask on the fly, formative assessments are built into a lesson or activity. Teachers consider the skills or knowledge they want to check on and use one of many methods to gather information on student progress. Students can also use formative assessments among themselves for self-assessment and peer feedback.

Used to Make Adjustments to Teaching Plans

After gathering this evidence, teachers use it to adjust their lessons or activities as needed. Students who self-assess then know which areas they still need help with and can ask for assistance.

How is formative assessment different from other assessments?

Image: Chart comparing formative and summative assessment (source: Helpful Professor).

There are three general types of assessment: diagnostic, formative, and summative. Diagnostic assessments are used before learning to determine what students already do and do not know. Think pre-tests and other activities students attempt at the beginning of a unit. Teachers may use these to make some adjustments to their planned lessons, skipping or just recapping what students already know.

Diagnostic assessments are the opposite of summative assessments, which are used at the end of a unit or lesson to determine what students have learned. By comparing diagnostic and summative assessments, teachers and learners can get a clearer picture of how much progress they’ve made.

Formative assessments take place during instruction. They’re used throughout the learning process and help teachers make on-the-go adjustments to instruction and activities as needed.

Why is formative assessment important in the classroom?

These assessments give teachers and students a chance to be sure that meaningful learning is really happening. Teachers can try new methods and gauge their effectiveness. Students can experiment with different learning activities without fear that they'll be punished for failure. As Chase Nordengren of NWEA puts it:

“Formative assessment is a critical tool for educators looking to unlock in-depth information on student learning in a world of change. Rather than focusing on a specific test, formative assessment focuses on practices teachers undertake during learning that provide information on student progress toward learning outcomes.”

It’s all about increasing your ability to connect with students and make their learning more effective and meaningful.

What are some examples of formative assessment?

Image: Chart showing what formative assessment is and what it isn't (source: Writing City).

There are so many ways teachers can use formative assessments in the classroom! We've highlighted a few perennial favorites below, but you can find a big list of 25 creative and effective formative assessment options here.

Exit Tickets

At the end of a lesson or class, pose a question for students to answer before they leave. They can answer using a sticky note, online form, or digital tool.
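
If students answer through an online form, the exported responses can be turned into a quick picture of class understanding in a few lines of code. The sketch below is illustrative only: it assumes a hypothetical exit_tickets.csv export with "student" and "answer" columns, so adapt the file and column names to whatever your form tool actually produces.

```python
# Illustrative sketch: tally exit-ticket answers exported from an online form.
# Assumes a hypothetical exit_tickets.csv with "student" and "answer" columns.
import csv
from collections import Counter

def tally_exit_tickets(path: str) -> Counter:
    """Count how many students gave each answer to the exit-ticket question."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Normalize answers so "Photosynthesis" and "photosynthesis " match.
            counts[row["answer"].strip().lower()] += 1
    return counts

if __name__ == "__main__":
    counts = tally_exit_tickets("exit_tickets.csv")
    total = sum(counts.values())
    # Most common answers first: a quick read on where the class stands.
    for answer, n in counts.most_common():
        print(f"{answer}: {n}/{total} ({n / total:.0%})")
```

A summary like this makes it easy to spot at a glance whether a misconception is shared by a handful of students or most of the class, which is exactly the adjustment signal formative assessment is meant to provide.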

Kahoot Quizzes

Kids and teachers adore Kahoot! Kids enjoy the gamified fun, while teachers appreciate the ability to analyze the data later to see which topics students understand well and which need more time.

Flip Videos

We love Flip (formerly Flipgrid) for helping teachers connect with students who hate speaking up in class. This innovative (and free!) tech tool lets students post selfie videos in response to teacher prompts. Kids can view each other's videos, commenting and continuing the conversation in a low-key way.

What is your favorite way to use formative assessments in the classroom? Come exchange ideas in the WeAreTeachers HELPLINE group on Facebook.

Plus, check out the best tech tools for student assessment.


