Designing assessment rubrics: 10 mistakes to avoid

I’ve spent hundreds of hours reviewing hundreds of lesson plans for the concorso oral exam.

And what has that taught me?

Well, loads of things. The main one is that teachers are such a creative bunch.

But another thing, which I will focus on in this guide, is that while assessment rubrics are absolutely fundamental in the oral exam, there are recurrent errors you need to be aware of.

So, in this guide, I’m going to put the knowledge I’ve collected over the years to use and tell you about 10 errors you should avoid when designing your assessment rubrics for the concorso.

Avoid these and you’ll be in with a chance of a high score at the oral exam!

First things first: what are assessment rubrics?

The first thing we need to tackle is: what are assessment rubrics? And what are their 3 main components, i.e. criteria, levels and descriptors?

Watch our explainer video to find out:

Now that you know what assessment rubrics are, let’s move on to 10 errors you need to avoid to get top marks in your oral exam!

1. Too many criteria

When you plan your assessment rubric, the first thing you need to think about is what your students will be assessed on. In other words, you need to identify the criteria on which you will assess your students.

Say that you’re assessing your students on the speaking task of “telling their partner about their best and worst subject”. You may assess the students’ fluency, grammatical and lexical accuracy, level of participation in the task or ability to manage an interaction.

That’s all well and good, but a mistake I’ve seen far too many times is including too many criteria. Yes, there is such a thing as too many criteria: we are working with school-aged students (and, by extension, their parents, who often want to know the ins and outs of how their child is assessed) and they need to be able to understand and use their assessment rubrics.

Most of the time, 3-4 criteria per task will be more than enough. Your assessment also needs to be valid, which brings me to…

2. Irrelevant criteria

If you’re assessing listening skills, your criteria need to be related to… listening, and little else. I always give the example of a listening test whose purpose is to assess the students’ ability to understand the key points of a talk (which is, among other things, a CEFR statement), but whose assessment then takes into account the students’ spelling or grammatical accuracy in their written answers. Your criteria need to be relevant and valid. Which, again, brings me to…

3. A lack of alignment between lesson aims and assessment criteria

Pro tip: after you’ve finished writing your lesson plan, go back to your lesson aims. Are your procedures in line with your stated aims? Are your assessment criteria in line with your lesson aims? I often see lesson plans stating language- and content-related lesson aims as well as broader aims about, for example, developing digital or citizenship competences. I have no problem with that, but: if you have an aim, you also need a way to assess whether that aim has been reached.

4. Criteria inaccessible to students

Drawing on CEFR statements or IELTS descriptors is okay. However, are our students going to understand the criterion “phonological control”? They likely won’t. Make sure your criteria are concepts the students can comprehend.

5. Descriptors written in inaccessible language

This also applies to your descriptors. Are students going to understand the descriptor “can employ the full range of phonological features in the target language with a high level of control – including prosodic features such as word and sentence stress”? I think you know the answer to that. Grade your language for the CEFR level of your students: if you’re unsure whether your language is too difficult, run it through Text Inspector and then simplify it with Rewordify.

6. Descriptors tackling different things

So let’s say that we have the criterion “fluency” for a speaking activity. There are 3 levels (needs work, good, excellent). These are the descriptors for the 3 levels:

  • Needs work: the student cannot use the language quickly and link their ideas well.
  • Good: the student speaks mostly without stopping, with good interaction patterns and reformulation strategies.
  • Excellent: the student speaks well, without stopping and uses their full creativity.

There are various issues with these descriptors, but the main one is: they tackle different things. The first one deals with how quickly the student speaks and their ability to link their ideas; the second one adds interaction patterns and reformulation strategies; the third one adds creativity. Choose one or two aspects to focus on and then grade them differently across the levels: for example, keep the focus on pausing and linking ideas throughout, from “pauses often and rarely links ideas” (needs work) to “hardly ever pauses and links ideas smoothly” (excellent). To see an example of good descriptors, graded for different levels, watch webinar 3 in the on-demand webinar course.

7. Descriptors that are too long

Have you ever seen a student spend more than 20 seconds looking at their grade? Exactly. Overly long and convoluted descriptors will not help students engage with the assessment and, to make things worse, they increase the chances of error no. 6. Stick to one or two short, simple sentences per descriptor.

8. Demotivating labels

“Poor, insufficient, inadequate”: whatever your views on summative assessment, try to phrase your labels in a way that doesn’t only highlight the student’s shortcomings but also the fact that they are on the way to developing (i.e. not all hope is lost), like “improvement needed”, “in progress”, “under construction”.

9. Going from right to left

I’m sure this is just an oversight due to the tiredness that we all rightfully feel after studying so hard for our exams. However, when you’re positioning your levels on your assessment grid, make sure they go from the lowest to the highest from left to right, not vice versa.

10. No time allocated for assessment rubrics

The vast majority of lesson plans I’ve reviewed have included at least one of teacher assessment, peer assessment and self-assessment grids (yay, because committees seem to like these). However, the vast majority of those same lesson plans also failed to allocate any time in the description of procedures to explaining and using these assessment grids. This is of course especially important if you’re doing peer and self-assessment (those take *time* for students to understand and complete), but it also applies to plain teacher assessment: you want to explain the criteria to your students before they do their task (so they know what they’re being assessed on) and then allocate some time to explain and discuss their grades with them.

To learn more about how to design assessment rubrics, with plenty of practical examples, watch webinar 3 in our on-demand webinar course!

I hope these 10 don’ts have been useful! Which one are you going to pay special attention to? Let me know in the comments!
