Monday, September 19, 2016

Test Scores with SBG vs. Traditional

I recently graded the first unit test of the year. I was hesitant because of how we are changing things this year. We aren't doing anything so revolutionary or contrarian that I should expect students to do poorly in the new system, but I was nonetheless cautious going into the first test.

The Changes

Last year was my first full year teaching and, of course, my first year going through the curriculum that we use at Wheaton North. I followed a pretty traditional method of scoring student work: some completion grades, plus a lot of arbitrary points assigned to various problems, worksheets, quizzes, labs, and tests.

This year, I'm starting a two-year implementation of SBG (standards-based grading), focusing more on the feedback component this year with the hope that it'll be full-fledged next year. Because the emphasis is on feedback, I'm not grading summative tests (unit tests) with an SBG model. What I have done is add the learning targets to each of the student worksheets and labs and then actually use them to grade students throughout the unit. For the most part, all of the learning experiences have remained the same: same notes, worksheets, labs, and activities. Most importantly (for this blog post), the tests are also largely the same.

The (Preliminary) Results

After grading my students' tests, I wanted to compare their results this year with what last year's students got on the same test under the more traditional feedback system. Here they are:
  • 2016 Unit 1 Test Average (SBG): 83.7%
  • 2015 Unit 1 Test Average (Trad.): 74.4%
I could go on and on here about all the small differences and variables that affect the students' grades (there's a big difference between a first-year teacher and a second-year teacher). I'll go ahead and assume you don't want to hear about standard deviations, t-tests, and null hypotheses (and we'll also assume that those terms are about the extent of my statistical vocabulary).
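(If you are the kind of reader who does want the stats, here's a rough sketch of how the two years could be compared with a two-sample t-test in Python. The score lists are made-up placeholders just to show the shape of the calculation, not my actual gradebook data.)

    # Rough sketch: compare two years of unit test scores with Welch's t-test.
    # The score lists are made-up placeholders, NOT actual gradebook data.
    from scipy import stats

    scores_2015 = [68, 72, 75, 80, 74, 71, 77, 79, 70, 78]  # hypothetical traditional-feedback year
    scores_2016 = [85, 82, 88, 79, 84, 86, 81, 90, 83, 79]  # hypothetical SBG-feedback year

    # Welch's version doesn't assume the two classes have equal variances.
    t_stat, p_value = stats.ttest_ind(scores_2016, scores_2015, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")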

What the Results DON'T Mean

Some agenda-pushing, SBG-obsessed educators may take these numbers and declare from the rooftops that this is "proof" that SBG is better than a more traditional grading system, and that we all must immediately drop what we are doing and take the plunge into re-writing our assessments. I don't think this is strong evidence for such a conclusion, and it's certainly not "proof!" Test scores are helpful, but they're not the whole story.

What the Results DO Mean

The test scores tell me a couple different things:

  1. I'm not harming my kids. I've had thoughts that removing "points" from the lexicon of my classroom might make my grading practices invalid, that is, not representative of what students know and can do. I arbitrarily assigned a "B" as the grade for level 3 on my rating system. Would this skew students' grades up or down? Since the test is graded the same way it was last year, I think I can say that I'm not doing damage.
  2. Detailed feedback is helpful. Seriously, who doesn't already know this? We've all heard feedback that is too vague to be helpful. "This doesn't seem to be a good 'fit' for you." Thanks, I'll keep that in mind in the future. SBG gives students more detailed feedback. Instead of losing 2 points, you get a 3/5 in "drawing conclusions from a graph" and a 5/5 in "mathematically determining the density of an object."
  3. My students learned more this year. For all the possible explanations for why the scores are the way they are, and whether the difference is significant, let's remember the important point: my students this year are doing better than my students last year. I'm not saying SBG is the whole reason for the difference from one year to the next, but I am pretty pleased that there is a 9.3 percentage point increase. Now that's something to write home (or write a blog post) about!
