Formative Assessments versus Summative Assessments

Based on Common Formative Assessment and Embedded Formative Assessment

Defining Formative and Summative

Whenever I do an assessment workshop with a group of teachers, I always start with some definitions so that we’re all talking about the same thing. Almost every teacher I’ve worked with is fairly confident about the difference between a formative and a summative assessment.

There are some terrific metaphors that people use. For example, formative assessment occurs when the cook tastes the soup in the kitchen, whereas summative assessment occurs when the patron tastes the soup in the restaurant. Or, formative assessment occurs when I go to the doctor for a physical, whereas summative assessment occurs when I get an autopsy.

What these metaphors help us see is that summative assessment is the final performance, while formative assessment happens while the student is still learning the concepts and the teacher can still provide extra time and support to ensure all students learn them. It is the formative piece of the process that I like to examine: how to design and write assessments to make them more diagnostic.

I find Dylan Wiliam’s (2011) definition of formative assessment especially helpful in this regard. He says, “An assessment functions formatively to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers to make decisions about next steps in instruction that are likely to be better, or better founded, than the decisions they would have made in the absence of evidence.”

Designing and Writing Effective Formative Assessments

This definition, then, requires the assessment data to include not just which students need extra help but also specific details about what kind of help they need. To do this, the student responses must provide insight into students’ thinking and understanding around specific learning targets, rather than standards.

Consider, for example, this second-grade reading standard: “Ask and answer such questions as who, what, where, when, why, and how to demonstrate understanding of key details in a text” (NGA & CCSSO, 2010). In addition to being able to ask questions as well as answer questions about informational text, students need to know what “who, what, where, when, why, and how” questions are. They must also be able to read and comprehend second-grade text and identify the key details it contains. A formative assessment is designed to identify which of these smaller skills the student might need help with.

When a team designs a formative assessment, then, they must first unwrap the standard into learning targets and determine the expected proficiency level of those targets using a common language such as Webb’s Depth of Knowledge (Webb, 2005). The team then discusses which of the learning targets should be assessed during the learning and how they want to assess them.

In our work, we’ve found that these assessments should be short and focused on only a few learning targets so that the team can respond to students who need help on each of these targets. We’ve also found that when teams use constructed response questions, they often have better insight into what misunderstandings or misconceptions a student has about a target.

How to Use Formative Assessment Data

When teams write quality formative assessments, the data they get back leads them to effective responses. For example, if a team chooses two learning targets to assess and uses one constructed response question for each of these, the resulting data analysis occurs around each of those targets. So, the team starts by identifying which students were proficient on target 1 and which were not proficient on that target. Then they use student work to see if the students who weren’t proficient all had the same misunderstanding or made the same mistake. If so, they plan their response in a way to help students overcome that misunderstanding. If these students had different misunderstandings, they respond to the students in each of these groups differently. They then move on to the second learning target and do the same thing. Thus, they are responding student by student, target by target.

Designing and Writing Summative Assessments

Contrast this to how a team would design and write a summative assessment. They would first discuss what it would look like to be proficient on the standards they’re assessing. Consider this sixth-grade science standard: “Develop and use a model to describe the function of a cell as a whole and ways the parts of cells contribute to the function” (NGSS Lead States, 2013). To show proficiency on this standard, the student must develop a model for this concept. A summative assessment, then, would require students to complete this task. When teachers supply students with a model, they are short-circuiting the thinking that students need to engage in to be proficient.

In this case, the summative assessment might include having students create such a model. We recommend that teams begin this conversation as they unwrap the standard and remain especially conscious that not all summative assessments have to be a final test. Summative assessments are designed, therefore, to show that students can put all of the smaller learning targets together to be proficient on the standard.

Using Summative Assessment Data Formatively

When teachers ask us if they can use summative assessment data in a formative way, the answer is a bit more complex. Most state assessments are designed around standards and don’t tell us in enough detail where student learning is breaking down. We get back a list of students who are below proficiency but little detail about why. Unfortunately, we have seen many schools design their interventions around the cut points of their state test. They have a group of students far below proficiency, another group approaching proficiency, a group who are proficient, and a group who are beyond proficient. The problem is that the students in the far-below-proficiency group don’t all have the same needs. This data helps teams identify students who need time and support but cannot tell them what that time and support should look like.

Putting It All Together

High-performing teams value both formative and summative data and are clear about what decisions they can make using both. Formative assessments are most helpful in identifying students who are having difficulty on this year’s essential standards while they are still being taught. In the RTI process, this is Tier 1. Teams provide immediate help while these learning targets are still being taught. If students are still unable to master these targets or are unable to put them together to master the standard, the students are moved into Tier 2 support. At the same time, high-performing teams are identifying students who have more significant learning issues because they haven’t learned last year’s essential standards—or even those from the year before. These students are provided intensive (Tier 3) support. An effective intervention system also allows these students to access the other two levels of support if needed.

When teams have good data and use it effectively, they are able to diagnose student learning needs. Just as all students in Tier 1 don’t have the same learning issues, neither do the students in Tier 2 or Tier 3. Effectively identifying these needs starts with understanding how each type of assessment is designed and how the data should be used. High-performing teams learn together about the assessments they use and write, and how they can continually get better at identifying and responding to student needs.



Bailey, K., & Jakicic, C. (2012). Common formative assessment: A toolkit for professional learning communities at work. Bloomington, IN: Solution Tree Press.

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Authors. Accessed January 4, 2013.

NGSS Lead States. (2013). Next generation science standards: For states, by states. Washington, DC: National Academies Press.

Webb, N. L. (2005). Web alignment tool. Madison, WI: Wisconsin Center for Education Research, University of Wisconsin–Madison. Accessed September 9, 2018.

Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.
