My AI-driven Grading Approaches


I'd had this article starred in my RSS reader, waiting to sit down and read it carefully. Robert is a math professor I've followed for many years because of his thoughtful approaches to grading and assessment. AI in education is everywhere, and I'm constantly either battling students' use of AI assistants to complete work or wrestling with the discrepancy between "everything done" and poor performance on assessments.

It is nearly impossible to verify independently whether student work has been created by an AI versus the student, or if it’s a combination, how much is attributable to the student. So it’s only a perception of rampant cheating; the real extent is unknowable unless students tell me.

This is the rub - in chemistry, so much of what we do is based on skill development. These small skills - using the periodic table, for instance - prop up skills we develop later in the course. For work done independently (i.e., not in the classroom with me around), I have to go on faith that students have done authentic work. Robert's grading standards call this a "good-faith effort." That's language I'll be adopting.

The point is that if the work shown on something students complete independently isn't good-faith, done as a human being, I am unable to do my job effectively and we're working against each other. I've tried hard to drive that point home with my 10th graders this year.

In the end, Robert minimizes the weight of homework and independent work in the grade and reinforces the importance of being able to do the work on in-class tests. I've come to almost the same conclusion, and I think I'm going to press harder into this model next year by further reducing the value of independent work in the gradebook. Says Robert:

Since everything is done in person, AI use is not really an issue; students can use AI to complete their homework, but I don’t think many do, because they know eventually they’ll be accountable for doing it “live”.

He also notes that he went hard into a no-tech policy in class this year. I've also started being more assertive about calling out phone use, but since I didn't set a policy at the start of the year, it's pretty tough to change tack now. Next year, I think it's going to be "phones in bags" from day one. I can see clear cognitive improvements when phones aren't around; you can almost map grades onto student phone use in class. Setting a blanket policy will help build a culture of "let's be here now" that is so important for learning.

He ends with a provocative question:

How do you assess upper-level cognitive skills while mitigating AI risks? ... On the one hand, tasks that get to those levels seem especially vulnerable to being “hacked” by generative AI [...] On the other hand, there should be ways to assess those higher levels with items that are uniquely “hardened” against AI.

Again, this is the rub. I used to use homework to build out base skills so we could develop critical thinking and application skills in the classroom. Now that generative AI can simply do the skill-building tasks for them, students arrive unable to even begin the critical thinking work in the room. I need to spend real time on my skill-aligned tasks this summer so I can try some new strategies in the fall.

[Original link]

Published: 2025-04-17 | Category: Links | Tags: ai, teaching, assessment, robert talbert

