Conversation Focuses on Answering Tough Questions about Grading and Feedback

During a Wednesday, Sept. 6, presentation, Dr. Julie Byron and Melinda Rhodes-DiSalvo, Ph.D., sat down with a group of faculty to wrestle with “Answers to Tough Questions about Grading and Student Feedback.” A few highlights follow.

1. What about grade inflation? More important, how do I know if I’m “too hard” or “too easy”?

In a competency-based medical education model, the focus is on mastery, regardless of the grade designated to represent that mastery. Designing tests or experiences that measure meaningful growth in competency can be more difficult than measuring performance comparatively (“Did she perform better or worse than he did?”).

Faculty can use item analysis as one way of determining whether test questions are strong indicators of student learning. Still, faculty should be careful about relying too heavily on item analysis. If a learning objective requires students to memorize terms, for example, and all students successfully do so, a multiple-choice question on those terms might be flagged as “too easy” even though it accurately reflects mastery.

Faculty may want to look at their ExamSoft reports for these indicators:

  • Item Difficulty Index (p-value): Values range from 0.00 to 1.00. A high value indicates that most students answered the item correctly (the item was mastered, or was not very difficult or discriminating); a low value indicates a difficult or highly discriminating item. Look for extreme values as a signal that you may want to review a question.
  • Upper Difficulty Index (Upper 27%): When the value approaches 1.00, your high performers scored well on the item. At 0.50 or below, many of your high performers missed the question, so you may want to review it.
  • Discrimination Index: This index compares the upper and lower 27% of performers. A value of 0.30 or above indicates good discrimination; values from 0.10 to 0.29 indicate fair discrimination and may merit a look. A negative value flags the item as flawed.
  • Point Biserial Correlation Coefficient: Values range from -1.00 to 1.00. At 1.00, exam takers who did well overall also did well on the particular question. You will want to review items with negative values.
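The four indicators above are straightforward to compute from raw response data. The sketch below is a minimal illustration in Python, assuming a list of 0/1 scores for one exam item plus each student's total exam score; the function name and toy data are hypothetical and do not reflect ExamSoft's actual implementation.

```python
# Minimal sketch of the four item-analysis indicators described above.
# Inputs: item_correct[i] is 1 if student i answered this item correctly,
# else 0; total_scores[i] is that student's total exam score.

def item_statistics(item_correct, total_scores, tail=0.27):
    n = len(item_correct)

    # Item Difficulty Index (p-value): proportion answering correctly.
    p_value = sum(item_correct) / n

    # Rank students by total score and take the top/bottom 27%.
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    k = max(1, round(n * tail))
    p_upper = sum(item_correct[i] for i in ranked[:k]) / k  # Upper Difficulty Index
    p_lower = sum(item_correct[i] for i in ranked[-k:]) / k

    # Discrimination Index: upper-group minus lower-group difficulty.
    discrimination = p_upper - p_lower

    # Point-biserial correlation between the 0/1 item score and total score.
    mean_total = sum(total_scores) / n
    sd_total = (sum((s - mean_total) ** 2 for s in total_scores) / n) ** 0.5
    if sd_total == 0 or p_value in (0.0, 1.0):
        point_biserial = 0.0  # undefined when there is no variance
    else:
        mean_correct = (sum(s for s, c in zip(total_scores, item_correct) if c)
                        / sum(item_correct))
        point_biserial = ((mean_correct - mean_total) / sd_total
                          * (p_value / (1 - p_value)) ** 0.5)

    return p_value, p_upper, discrimination, point_biserial


# Toy example: ten students; the five strongest all answer the item correctly.
p, upper, disc, rpb = item_statistics(
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    [95, 90, 88, 85, 80, 70, 65, 60, 55, 50])
print(p, upper, disc, round(rpb, 2))  # strong item: p = 0.5, discrimination = 1.0
```

In this toy data the item discriminates perfectly (the upper group always answers correctly, the lower group never does), so the indices come out near their ideal values; real items typically fall somewhere in between.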
In addition, course teams might consider assessments other than high-stakes midterms and finals to measure student learning. Methods of addressing the workload associated with written assignments or essay questions include grading with rubrics, allowing for group submissions, and allowing students to peer review one another with rubrics.

For further information, ExamSoft has created this primer on item analysis, and the Office of Teaching & Learning would be happy to assist with assignment and question development.

2. What do I do in the face of increasing student demands for content to be delivered in one form or another?

It’s impossible to meet the preferences of all students. Faculty should feel comfortable presenting content and materials in a manner that best supports student achievement of learning objectives. (Recommendations on electronic notes and presentations can be found in this update.)

3. I have difficult feedback on performance to deliver. How do I do that?

  • Establish a climate of trust and respect.
  • Provide feedback on a one-on-one basis.
  • Provide feedback with the intention of helping the student grow.
  • Focus on the behavior to describe the performance issue.
  • Use “I noticed” or “the perception is that …” as part of the conversation.
  • Be timely in providing the feedback.
  • Ask the student to take notes during the consultation.
  • Concentrate on one or two skills/issues.
  • Work with the student to determine solutions to the identified problem.
  • Ask the student to reiterate those solutions and how he or she will implement them.
  • Invite the student to provide you with feedback on the issue.
  • Invite the student to report back on progress.
4. I practically gave the students all the information for the test, and they still bombed it. What’s wrong?
  • The best (but not always surefire) way to avoid this scenario is to prepare students in advance. To reduce student anxiety and apprehension, explain your grading criteria. Consider showing students sample questions, having them respond to sample questions in class, or assigning practice quizzes. (It is, of course, up to the students to take advantage of these opportunities.)
  • Model how you would approach a question or solve a problem. Or better yet ask the students to do this.
  • Use Top Hat to identify areas of confusion in advance or as you teach. These “muddiest points” can also be identified by reviewing previous assessments. (What concepts consistently challenge students?)
  • State clearly what you expect students to demonstrate. This might be problem solving, or evaluating and applying conceptual models or methodologies, or analyzing data.
  • Following the exam, identify problems that cropped up repeatedly. Suggest strategies that students might adopt to deal with these problems.
  • Have all team members review exam questions as a group to ensure material on the exam is material the team believes is critical for students to master.
  • Debrief the students after the test. (This is worth the time.)
  • Provide opportunities for students to fail, learn, and succeed.

