Interpreting Student Ratings of Instruction

Last week, those of us who administered IDEA Student Ratings of Instruction in our Fall courses received the results in our campus mailboxes. If you’re anything like me, there’s always a mixture of anticipation and anxiety when opening the packet: I think I did well; they seemed to like the class; I hope no one flamed me, though. And no matter how many times we’re told not to take things personally, there’s always that one comment we can’t seem to take any other way. In my case, it’s a good bet that even if every comment but one sings the praises of the course and my instruction, I’m going to obsess over the one that didn’t. Because teaching is an endeavor that’s so tied up in our identity, it seems hard to react any other way.

But if we let ourselves get wrapped up in negativity, or lose a sense of proportion when looking at student ratings of our instruction, we might miss the chance to put that data to work for us. The point of these ratings[1] is to help us determine what we’re doing that helps student learning, and what practices of ours might be getting in the way of that learning. Student ratings of instruction are an essential tool for our professional development: there are always things we could be doing better, and it’s always good to know which practices are working so that we can keep employing them.

So what do we do after opening the IDEA packets? Where do we begin with the data? Do we skip right to the student comments, or do we pay attention to the numbers, distributions, and comparisons? There is a wide range of data in each IDEA summary report, and it can be a little bewildering to go through if you’re not familiar with what each section does. Fortunately, you have options. The CETL Director has been trained by IDEA to help faculty use the summary reports and individual data, and one of the services CETL offers is a consultation to walk through your data. To set one up, simply call or email Kevin, or use this site’s contact form.

If you’re looking for an answer to a specific question, or want more information about a particular portion of your IDEA data, the IDEA website has a collection of helpful resources. The “Notes on Instruction” are useful for identifying the specific practices associated with the various teaching styles in the IDEA forms. IDEA also provides material related to each of the learning objectives that faculty can select, available at this link. Finally, there is a series of IDEA Papers on various areas of pedagogy, from classroom practice to course design, that can be accessed at this page (sorted by category). As we transition to a digitally administered IDEA form, there will also be more materials available online for interpreting individual results.

As for using data from student ratings of instruction more generally, there are a number of things to think about as we fold the insights from that data into formative development of our own teaching. We should ask ourselves what’s significant, what’s an outlier, what’s in our control, and what might not be. Here are some good resources to serve as food for thought as you interpret this student feedback:

Elizabeth Barré’s research on student ratings is top-notch, and she does a really good job putting what we know about “student evaluations” in perspective. See her original meta-study from 2015, “Do Student Evaluations Really Get an F?,” and her 2018 follow-up, “Research on Student Ratings Continues to Evolve. We Should Too.”

The Vitae section of The Chronicle of Higher Education has published some useful columns on working with student ratings, particularly in assessing how you might retain or modify specific pedagogical practices, which you can find here and here.

Finally, we should all be aware of what student ratings cannot do, and of some of the ways in which they are structurally problematic. In particular, several studies have identified significant gender inequities in standard types of student ratings, and there is reason to believe they can discriminate against adjunct faculty as well. With these things in mind, it’s incumbent upon chairs, deans, and members of the promotion and tenure committee to understand these limits, and it’s useful for each of us to consider the same things. Student ratings can be useful tools for formative feedback and reflection on our teaching. But they aren’t the last word on our pedagogy, nor are they an infallible measure of “quality teaching.” That said, they can illuminate trends, and they can at least tell us if and how our students see themselves learning, and that’s not nothing.

Have a question? Need help with a teaching and learning issue? Come by the CETL, or contact us to schedule a consultation today!

Finally, sometimes imposter syndrome can hit pretty hard:

[image]

[1] Note that I’m not using the phrase “course evaluations.” There’s a reason for that: students aren’t in a position to “evaluate” our courses. They aren’t subject-matter experts, and they can’t assess whether we’re following pedagogical or disciplinary best practices. What they can do, though, is tell us whether they felt they made progress toward the course goals, and whether they felt we helped them do so. Thus, “student ratings of instruction” is the more useful, and more accurate, designator.
