Performance Tasks in SBG

I was given a good challenge by our secondary curriculum coordinator a couple weeks back. He wanted to know how we get in front of standards-based grading being reduced to collections of isolated skills. In other words, we're doing well tracking our essential standards over time, but those are more or less in isolation (not taking into account any spiralling or scaffolding happening).

File this under stream-of-consciousness rambles. I have three thoughts percolating:

  • Curriculum is skills, knowledge, and dispositions. It seems that performance tasks should focus on skills and dispositions more than content because they're the "connective tissue," as it were, to context outside of the classroom.
  • Some kind of measurement tool is needed, but what is the scope? Is it defined by the district? Or are those skills and dispositions different based on content area? Or even by classroom?
  • Showing application or transfer of information is difficult because you have to make something novel rather than simply report on learning.

In terms of how to do this...well, I haven't quite made it there yet. I have a feeling that this would be a good place for a single-point rubric (because those are the new 🔥🔥🔥 right now) because of the flexibility they provide.

Another tack would be to write new performance standards which combine the individual standards, but that's another level of organization to add on top of unpacking the current content material. It could work with a larger group together at the district level, but consensus becomes the challenge.

If you're a teacher using SBG, what thoughts do you have? How do you make sure students are forming holistic understanding and not simply accruing a collection of ideas?

Other reading

RedesignU (which I need to investigate more) has a curated list of reading that led me to this helpful policy guide for SBG at a larger level. It includes some guiding questions on performance tasks which were interesting.

CompetencyWorks has a really short article that was thin on material but had a good bulleted list of performance task criteria for the SBG classroom that made me think about what would be included.

Here's another CompetencyWorks article, which is quite dated but raises some good points about the interoperability of various SBG tracking systems and the challenges of getting a wide-angle lens on student growth. Designing performance tasks involves content, and helping students navigate that process (it's personal, remember?) means we need information to work from. The systems have improved, but it is still difficult to build a full working system on the fly.


Pool Edge by Theen ... is licensed under CC BY-NC-SA

No, Retakes Are Not a Bad Idea

There's an Edutopia article cycling around which argues against allowing retakes on assessments. The author builds a case focused on motivation, mental health, and teacher effectiveness.

Instead of doing a large breakdown here, I annotated the original article with my pushback in context.

On a semi-related note, if you don't have the hypothes.is extension installed, you should grab it and leave replies on the original article. Comments are also welcome here.

What is a Grade?

an exit sign hanging on a wall.

I had the pleasure of working with about 15 people yesterday on moving to standards-based grading next year. We started off with a long discussion about what grades are and what they mean. It's easy to get into what they should be, but I wanted to make sure we all had a solid understanding of what grades actually do in most of our classrooms.

I had a couple of guiding questions and one that generated the most interesting response was the following:

A student rarely comes to class and when they do, work isn't turned in. At the end of the semester, that student easily passes the final exam. Does that student pass your class?

Lots of eyebrows furrowed.

There was some uneasy looking around.

About half said yes, the other half said no.

Now, there are major assumptions here. Is the test valid and reliable (standards-aligned)? How did the teacher intervene? Did a student show growth before taking the test in some other way?

All issues aside, the root of the question forces us to consider whether a grade in our class represents learning or compliance.

I also wonder why we're more accepting of the inverse situation: a student who has not taken the class but passes the final is allowed to skip the course (or is given credit, etc.).

If we're comfortable with allowing students to skip a class (be given credit) by testing out, we should be just as comfortable allowing a student who "shows no effort" to be given credit for hitting the same benchmark. The difference is our perception of that student.

Challenging our biases is important, particularly long-held assumptions that dictate our perceptions about "good" vs "bad" students. Grades are the output of those biases in many cases.

What do you think?


The featured image is Br... flickr photo by Peter Schüler shared under a Creative Commons (BY-NC-SA) license

Methods of Calculating Grades in SBG

A barn with windmills on a prairie.

I'm prepping a full-day workshop on standards-based grading for about 20 teachers in a couple of weeks. One major part of the day will be centered on converting an SBG report to a 100-point scale letter grade, mostly because we just have to.

Here are some of the methods I've come across, which have all (in one way or another) informed my own method, which is last in this post.

Equalized Weighting

I saw this calculation method first from Frank Noschese in his KISSBG blog post. He glosses over it in the post body, but the comments below get into some of the details. Here's the formula:

50 + 50 * (earned/total)

At first, the additional 50 points look like a bonus, which feels weird. In reality, this wipes out the 0-50 F range. Now, each letter grade roughly corresponds to a 10-point spread:

  • F: 50 - 60
  • D: 61 - 70
  • C: 71 - 80
  • B: 81 - 90
  • A: 91 - 100

It's an equalizer, not a bonus.
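
To make the math concrete, here's a minimal sketch of that conversion in Python. The function name and the sample numbers are mine, not Frank's; the formula is the one above.

    def equalized_grade(earned, total):
        """Convert a standards ratio to a 0-100 score, compressing the F range to 50-60."""
        if total <= 0:
            raise ValueError("total must be positive")
        return 50 + 50 * (earned / total)

    # Sample conversions: 5/10 -> 75.0 (C), 8/10 -> 90.0 (B), 10/10 -> 100.0 (A)
    for earned in (5, 8, 10):
        print(earned, equalized_grade(earned, 10))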

Reflective Grading

Shifting away from assigning arbitrary points is a big piece of standards-based grading. Laura Gibbs, Kathryn Byars, and Ken Bauer are the three names that jumped out at me in this area. Feedback is the main driver. Work is given feedback and only feedback. The focus between teacher and student is on demonstration, not on points or numbers.

For assessment, students reflect on and provide evidence of proficiency on each standard. Laura, Kathryn, and Ken all did this differently, but the main flavor is the same. Take a look at Kathryn's helpful Google slides, Laura's deep-dive book chapter and Ken's various blog posts. This is by far the most flexible, fuzzy, and subjective method of reporting.

Standard Purism

The most "pure" method of standards-based grading removes all items from the gradebook except for the standards. The methods of grading these varies. Some use a straight average of binary items (pass/fail). Others put each standards on some kind of rubric scale and give an average.

The main benefit of this structure is that practice work (homework, classwork, etc) is excluded. If a student forgets or decides not to do an assignment, their grade is not affected because it is practice.

On the other hand, this opens the door for assignments to be completely optional. This is a detriment, in my opinion, because students may not have the self-awareness or diligence to do independent work otherwise. Additionally, if a student skips a test or quiz because it doesn't go in the gradebook, it can set up an awkward situation where a student is racing to prove standards at the end of the year.

Some kind of blend

I ended up blending several of these ideas into a system I like. I used components of KISSBG (binary yes/no for standards) with a weighted course average to calculate the final grade.

Category weights:

  • Classwork: 20%
  • Standards: 80%

In my gradebook, any classwork/practice was lumped together into one category. Homework, tests, quizzes, etc, all contributed to 20% of the total course grade.

Standards were individual assignments worth one point. They were assessed over time on a four-point rubric:

  • Exceeds Expectations: 4
  • Meets Expectations: 3
  • Approaches Expectations: 2
  • Does Not Meet Expectations: 1
  • No Evidence: 0

The cutoff for toggling a 1/1 in the gradebook was a rubric score of 3. This meant the student had demonstrated proficiency in the concept in that situation. A 4 was given if the student could connect different related ideas...showing the relationships between standards.

Rubrics were used on every assignment, and that aggregate score was used to determine the gradebook 1 or 0. Over time, patterns emerged and students were able to track their growth or decline in Canvas (more on that another time). I rarely graded classwork assignments in depth...if it was turned in, I often gave full credit just for having it done. The rubric feedback was the important piece, and I tried to put the focus on learning from those pieces.
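
For what it's worth, here's a rough sketch of how that 80/20 calculation plays out in code. This is my reconstruction rather than the actual gradebook logic; the function name, cutoff constant, and sample scores are all illustrative.

    PROFICIENCY_CUTOFF = 3  # rubric score needed to toggle a standard to 1 (pass)

    def blended_grade(classwork_pct, rubric_scores):
        """Weighted course grade: 20% classwork average, 80% standards.

        Each standard toggles to 1 if its rubric score meets the cutoff,
        otherwise 0; the toggles are averaged into the standards percentage.
        """
        if not rubric_scores:
            raise ValueError("need at least one assessed standard")
        toggles = [1 if score >= PROFICIENCY_CUTOFF else 0 for score in rubric_scores]
        standards_pct = 100 * sum(toggles) / len(toggles)
        return 0.20 * classwork_pct + 0.80 * standards_pct

    # 90% on classwork plus rubric scores on five standards (3 of 5 at proficiency) -> 66.0
    print(blended_grade(90, [4, 3, 3, 2, 1]))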


Is there a best method? I don't think so. It really depends on your group of students and situational context. In 2012, I used a more reflective approach. In 2016, I was using more of the 80/20 split with some reflection thrown in. Both were equally valid, and I felt good about the grades I ended up reporting.

What others would you suggest? Leave a comment below.


Managing Standards Based Grades in Canvas LMS

I'm trying to make standards-based grading more approachable for my teachers. When I was teaching full time, I held to Frank Noschese's Keep It Simple philosophy. Single standards correlate to single assignments that are scored as pass/fail. Now, I averaged these out on a weighted scale to calculate a 0-100 grade, but that's for another post.

Using Canvas, I was able to set up a functional reassessment strategy to aggregate demonstrations of proficiency.

The Learning Mastery Gradebook in Canvas does not translate anything into the traditional gradebook. This meant that every week or so, I would have to open the Mastery report alongside the traditional gradebook and update scores line by line. This was tedious and prone to error.

Using the Canvas API and a simple relational database, I put together a Python web app to do that work for me. The idea is that a single outcome in a Canvas course is linked with a single assignment to be scored as a 1 or 0 (pass/fail) when a mastery threshold is reached.

The app

Users log in with their existing Canvas account. They're then shown a list of active courses along with the number of students and how many Essential Standards are currently being assessed (i.e., linked to an assignment).

The teacher dashboard

In the Course view, users select which grading category will be used for the standards. Outcomes are pulled in from the course and stored via their ID number. Assignments from the selected group are imported and added to the dropdown menu for each Outcome.

The alignment menu

Users align Outcomes to the Assignment they want to be updated in Canvas when the scores are reconciled. This pulls live from Canvas, so the Outcomes and Assignments must exist prior to importing.

As Assignments are aligned, they're added to the score report table.

Score reporting per assignment

Right now, it defaults to a 1 or 0 (pass/fail) if the Outcome score is greater than or equal to 3 (out of 4). All of the grade data is pulled at runtime - no student information is ever stored in the database. The Outcome/Assignment relationship that was created tells the app which assignment to update for which Outcome.

When scores are updated, the entire table is looped. If an Outcome has reached a 3 or higher, the associated Assignment is toggled to a 1. The same is true for the inverse: if an Outcome falls below a 3, the Assignment is toggled back to a 0.
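
For the curious, here's a rough sketch of what that reconciliation loop could look like against the Canvas REST API (outcome rollups in, submission grades out). The base URL, token, and alignment dictionary are placeholders; the real app pulls alignments from its database and handles login, pagination, and error cases.

    import requests

    BASE_URL = "https://example.instructure.com/api/v1"  # placeholder Canvas instance
    HEADERS = {"Authorization": "Bearer <canvas-token>"}  # placeholder API token
    CUTOFF = 3  # Outcome score (out of 4) required to toggle the Assignment to 1

    # Hypothetical alignment map: Outcome ID -> Assignment ID (stored in the app's database)
    ALIGNMENTS = {101: 5551, 102: 5552}

    def reconcile(course_id):
        """Toggle each aligned Assignment to 1 or 0 based on each student's Outcome rollup."""
        rollups = requests.get(
            f"{BASE_URL}/courses/{course_id}/outcome_rollups",
            headers=HEADERS,
        ).json()
        for rollup in rollups["rollups"]:          # one rollup per student
            student_id = rollup["links"]["user"]
            for score in rollup["scores"]:         # one entry per assessed Outcome
                outcome_id = int(score["links"]["outcome"])
                if outcome_id not in ALIGNMENTS:
                    continue
                grade = 1 if score["score"] >= CUTOFF else 0
                requests.put(
                    f"{BASE_URL}/courses/{course_id}/assignments/"
                    f"{ALIGNMENTS[outcome_id]}/submissions/{student_id}",
                    headers=HEADERS,
                    data={"submission[posted_grade]": grade},
                )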

I have mixed feelings about dropping a score, but the purpose of this little experiment is to make grade calculations and reconciliation between Outcomes and Assignments much smoother for the teacher. It requires a user to run it (no automatic updates), so grades can always be updated manually by the teacher in Canvas. Associations can also be removed at any time.

As always, the source for the project is on GitHub.

Reassessing in Standards Based Grading

I'm helping several teachers move toward standards-based grading practices this year. We work a lot on philosophy - why they'd want to use this grading mechanism over traditional scores, how to support learning, and the language of SBG in general with students - before we get into the how-to. That helps make sure everyone is in the right frame of mind.

Once they're ready to start, that's where the how-to work comes in. I know what I think about how to set up a class, but there is no gold standard when it comes to actually running the class. If you're looking to start, allow me to redirect you to Frank Noschese and his excellent blog as well as pretty much anything written by Rick Wormeli.

Today's post started as an email asking how I handled retests in my class. The following is more or less what I wrote back, with some edits for clarity and more general application.


I’m trying to up my standards based grading game. We briefly talked about this last semester, but I’m wondering...how can I most efficiently update students’ grades to show mastery when I’m having them do test corrections? Ideas welcome!!

This came in an email

Do you do paper-and-pencil corrections? How are you building your tests? I ask because there are a few ways you could consider, but each kind of depends on your own style and class processes.

Grading paper-and-pencil corrections

When I did this, it was usually something like:

  • a) write out the wrong answer,
  • b) write the correct answer,
  • c) explain why it is right,
  • d) give a reference to the right answer,
  • e) identify which standard/outcome it relates to.

So, they would go through the material, evaluate their responses, and then find the right answer and justify it. I was mainly concerned with the justification of the response, not so much that they found the right answer. I would grade their mastery on that justification, bumping them up or down a little bit.

To track it, you could download the MagicMarker (iOS only) app and mark them on Outcomes as if you were talking to them in class. It aggregates those scores into the Canvas Learning Mastery grade book and then you can evaluate the overall growth rather than give credit based on that one assessment.

Question Banks

This is definitely the most time consuming to set up, but once it's set up, you're golden. Getting questions in standards-referenced banks allows you to build out Quizzes that pull randomly, so you can give a retake or another attempt that updates those Learning Mastery grade book results. This is what I tended to do instead of paper/pencil once I had everything going.

Students would get their results and then focus on any standards that were less than a three in their Learning Mastery grades (out of four total). There'd be some kind of work involved so they weren't blindly guessing, but then they could take the test again because the questions were likely to be different with the bank setup.

Set up banks based on standard and then file questions in there. When you build the Quiz, you use Add new question group rather than Add question in Canvas. You can link the question group to a question bank and specify how many items to pull at X number of points.

Student defense and other evidence

This one is probably my favorite: just giving students a chance to plead their case...a verbal quiz, essentially. I'd use MagicMarker while we were talking to keep track of their demonstration. I would ask them to show me work we'd done, explain how they know what they know, and then prod them with more questions.

I typically did this if they were having trouble demonstrating understanding in other ways. I wanted to remove test anxiety or reading comprehension from the equation, but this was typically the last option for those kids. I'd then work with them to get over those test-taking humps (granted, this was more important to do in the AP class because they had to take the test and I needed them to be ready for it).


I think all of this boils down to getting more data into Canvas (or your LMS if you can)...try not to rely on a single demonstration to judge understanding. My goal was to have students show mastery on standards by the end of the semester. So, if they're not getting one of them now, it still goes in as a zero, but it serves as a reminder that they still have to demonstrate that standard. I was updating grades on the last day of the semester for my students. It's a weird way for them to think, and it'll take some prodding by you so they don't forget that a zero can always convert to full credit. Usually what happens is a later unit will give them more context for whatever they're struggling with, and cycling back after more scaffolding is more effective than trying to drill the issue immediately, if that makes sense.


If you're not using Canvas, there may be similar systems in your LMS that will help you track growth. I also have a Google Sheet template that you can use to track student growth. Shoot me an email if you'd like that and I'd be happy to send it along.

Fair Isn’t Always Equal: Three Grading Practices to Avoid

Remember, though, that grades should not be used as rewards. Nor should they be used as affirmation, compensation, or validation. Grades should represent an honest report of evidence at this moment in time, nothing more. If we make them something more than that, we undermine the student’s maturation and any useful purpose for grading.

Source: Fair Isn't Always Equal: Three Grading Practices to Avoid

I'm working with several teachers on moving toward standards-based grading and we're starting to have conversations about grades themselves. I ask how they feel about zeroes, extra credit, completion, and makeup work. This article is a great primer and/or followup to those initial meetings.