Tips for Reviewing Student Evaluation Data

How we as faculty and staff view and use student evaluation of teaching (SET) data can add to its value. First, we should acknowledge that SETs are just one measure of instructional effectiveness; peer review/classroom observation and reflection on our own pedagogy and practice are also highly effective ways to assess the quality of teaching.

The Office of Teaching & Learning offers these tips for reviewing SET ratings and comments:

Seek out themes: Themes arise across quantitative and qualitative data sets and across courses and semesters. Consistently constructive feedback on an aspect of instruction or a course deserves attention. The Office of Teaching & Learning is available to help interpret this data.

Disregard feedback with little or no meaning: While it is always helpful to know we’re doing good work, “This is the best class I have ever taken” doesn’t provide much useful content beyond confirmation of value. At the opposite end of the spectrum, “This is the worst class I have ever taken” doesn’t tell us anything that might be helpful. (As an aside, extremely unprofessional student feedback, which we know is fortunately rare at CVM, should be reported to the Associate Dean of Professional Programs.)

When reviewing ratings for courses and individual instructors, consider what aspect of teaching and learning each statement references. For example, if a course regularly scores “low” on “Exams or other assessments measured my learning/understanding,” the course team may want to review the learning outcomes for each lecture and whether those align with assessment questions. In other words, are we testing on what we have told students we expect them to learn?

Know what variables may have an effect on student ratings. Generally, we overestimate the influence of instructor personality and easy grading on student ratings. The non-profit group IDEA has reviewed research on student ratings and found:

  • Instructor age and teaching experience don’t have an effect on ratings. (Lower ratings for newer teachers are often a reflection of learning how to teach or refining course design.)
  • Instructor personality characteristics don’t have much effect on ratings. (Behaviors an instructor exhibits when teaching, such as expressiveness, may have an effect.)
  • Research productivity of faculty doesn’t have an effect on ratings.
  • There’s a weak connection between expected grades and student ratings.
  • Course workload and difficulty are correlated with student ratings, but not in the way we might think. Students give somewhat higher ratings to difficult courses.
  • For more information, visit http://ideaedu.org/wp-content/uploads/2014/11/idea-paper_50.pdf.

Acknowledge that student evaluations are measures of student PERCEPTIONS, while at the same time understanding that students have spent an entire semester experiencing us and our course teams.

Consider the most important practices we undertake that affect or can improve student ratings. They include:

  • Planning and designing courses well in advance and as a team (when applicable).
  • Organizing our courses and material logically and purposefully.
  • Setting and maintaining clear expectations regarding workload and deadlines throughout the semester.
  • Identifying particularly difficult concepts and material and exploring a variety of approaches for their instruction (explaining or showing real-world application or providing additional learning resources in Carmen, for example).
  • Testing fairly and effectively based on well-defined learning outcomes. (Testing frequently at regular intervals aids in retention.)

Finally, remember that the most useful time for extensive review of longitudinal SET data is when refining or redesigning a course for its next offering.
