Performance Tasks in SBG

I was given a good challenge by our secondary curriculum coordinator a couple of weeks back. He wanted to know how we get in front of standards-based grading being reduced to collections of isolated skills. In other words, we're doing well tracking our essential standards over time, but we track them more or less in isolation, without accounting for any spiraling or scaffolding that's happening.

File this under stream-of-consciousness rambles. I have three thoughts percolating:

  • Curriculum is skills, knowledge, and dispositions. It seems that performance tasks should focus on skills and dispositions more than content because they're the "connective tissue," as it were, to contexts outside the classroom.
  • Some kind of measurement tool is needed, but what is the scope? Is it defined by the district? Or are those skills and dispositions different based on content area? Or even by classroom?
  • Showing application or transfer of information is difficult because students have to create something novel rather than simply report on their learning.

In terms of how to do this...well, I haven't quite made it there yet. I have a feeling that this would be a good place for a single-point rubric (because those are the new 🔥🔥🔥 right now) given the flexibility they provide: only the target criteria are listed, with open space to note where a student falls short or goes beyond.

Another tack would be to write new performance standards that combine the individual standards, but that's another layer of organization on top of unpacking the current content. It could work with a larger group at the district level, but consensus becomes the challenge.

If you're a teacher using SBG, what thoughts do you have? How do you make sure students are forming holistic understanding and not simply accruing a collection of ideas?

Other reading

RedesignU (which I need to investigate more) has a curated reading list that led me to this helpful policy guide for SBG at a larger scale. It includes some interesting guiding questions on performance tasks.

CompetencyWorks has a really short article that was thin on material but had a good bulleted list of performance task criteria for the SBG classroom, which got me thinking about what a strong task would include.

Here's another CompetencyWorks article, which is quite dated but raises some good points about the interoperability of various SBG tracking systems and the challenges of getting a wide-angle lens on student growth. Designing performance tasks involves content, and helping students navigate that process (it's personal, remember?) means we need information to work from. The systems have improved, but it is still difficult to build a full working system on the fly.


Pool Edge by Theen ... is licensed under CC BY-NC-SA

A Case for Better Course Design

Campus Technology published an article last week about a biomed course that saw mixed results from flipped instruction. The full research article is open access (CC-BY 4.0) and available to read. I've read and annotated the original article, and I'm going to distill a couple of points from both the published report and the CT article.

The Report

The authors state right up front that there “were no statistically significant differences in examination scores or students’ assessment of the course between 2015 (traditional) and 2016 (flipped).” Campus Technology (and other publications) often latch on to the grade implications rather than qualitative student feedback on the efficacy of flipping. To the researchers’ credit, they do recognize the higher retention and application that students reported on feedback surveys.

The biggest red flag for me was in the definition of flipping. As Robert Talbert regularly points out, many research articles limit flipping to “video at home, discussion in class.” The article elaborates on the at-home experience in the methods section. From the article,

Students were introduced to new material each week by completing assigned readings from textbooks and journal articles, then by watching recorded lectures given by faculty experts at MSPH on one of 10 core epidemiology topics. Next, students completed short online graded assessments of their understanding of the new concepts presented in these media based on the Just-in-Time Teaching (JiTT) pedagogy…

Students were also able to submit questions to instructors prior to the in-person meeting, to be addressed at the start of the session. The article also notes that doctoral students and instructors monitored questions via email and office hours between in-person meetings.

So, students watched a lecture (with no discussion of its format, length, or content), read some articles, and then began to apply the material in preparation for the in-person session. More on this later.

Students reported confidence in their learning and in their ability to apply the material, with a slight (though not statistically significant) increase in the flipped cohort (84.1%) over the traditional cohort (80.6%).
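As a back-of-the-envelope check on what “not statistically significant” means for rates like these, a two-proportion z-test is the usual tool. The sketch below uses hypothetical cohort sizes (n = 90 each) because the enrollment numbers aren’t in front of me; it illustrates the calculation, not a re-analysis of the study.

  # Sketch: two-proportion z-test on the reported confidence rates.
  # NOTE: the cohort sizes (n) are HYPOTHETICAL placeholders -- the
  # study's actual enrollment numbers would be needed for a real result.
  from math import sqrt, erfc

  p_flipped, n_flipped = 0.841, 90   # flipped cohort (hypothetical n)
  p_trad, n_trad = 0.806, 90         # traditional cohort (hypothetical n)

  # Pooled proportion under the null hypothesis of no difference
  pooled = (p_flipped * n_flipped + p_trad * n_trad) / (n_flipped + n_trad)
  se = sqrt(pooled * (1 - pooled) * (1 / n_flipped + 1 / n_trad))

  z = (p_flipped - p_trad) / se
  p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value

  print(f"z = {z:.2f}, p = {p_value:.2f}")   # with n=90 each: z ~ 0.62, p ~ 0.54

With these made-up numbers, the cohorts would need to be roughly ten times larger before a 3.5-point gap cleared conventional significance thresholds, which is a useful reminder of how little a difference that size tells us on its own.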

Campus Technology’s Interpretation

The opening sentence proclaims:

A study at Columbia University’s Mailman School of Public Health found that in a health science course following the flipped classroom model, there were no statistically significant differences in test scores or students’ assessments of their course, compared to a traditional lecture course.

They do not note that the study took place over two years (with two different groups of students), but they do report the positive impacts students attributed to the freedom to watch lectures when they wanted to (improved flexibility). CT also included an insightful quote from one of the authors about the lack of time to process information in the traditional design (where discussion came immediately after the lecture), but also that flipping doesn’t allow for “[direct engagement] with the lecturers.”

The Bigger Picture

The research study and the ensuing report highlight two things for me:

  1. Grades are often the motivating factor when flipped classrooms are studied, which limits discussion of student impact, and
  2. the perceived importance of course design is negligible when studies are conducted or reported.

Students reported higher satisfaction with the class due to the flexibility and because they felt more confident in the material. Time to process information is important, and they were better able to contribute to discussions after having time to think through the lecture. But all the CT article focused on was the grade. It isn’t a secret that few practitioners (K-12 or higher ed) actually read the reports unless they’re actively planning their own study. News outlets and blogs have a responsibility to include gains beyond the final exam score.

How did students grow beyond the test? What improvements did instructors see in the cohort? These are important factors that should be included in follow-up interviews if not in the research report itself. The researchers did have the six instructors fill out surveys, but those results were not reported alongside the student feedback.

Secondly, course design is critical if we want to improve student performance. Several of the citations were quite old (early-to-mid 2000s) and in a similar vein, looking at student exam scores rather than course design and teaching methodology (granted, several of the cited articles were paywalled, so I couldn’t do a full evaluation of each).

If we simply bottle our courses and reverse when the interaction happens, why would we expect student improvement on exams? This article shows that the course is consistent, if nothing else, with no change in student exam performance. How would it have changed if students had explored material before the lecture, as in Ramsey Musallam’s or Dan Meyer’s work? How would students have benefited from interactive items at the beginning of the discussion period rather than a rehash of the lecture from the instructor?

While the research makes some interesting points, it is far from conclusive on the efficacy of flipping. The authors make concessions at the end, but we need to keep pushing the discussion away from a particular technology solution and toward analyzing our instructional methods as the real turning point in student learning.


Featured image is Lecture Hall, Chairs flickr photo by Dustpuppy72 shared under a Creative Commons (BY-NC-ND) license