Rubrics add transparency, consistency, and efficiency to grading

Definition

Rubrics are scoring guides consisting of specific, pre-established performance criteria, in which each level of performance is described in order to contrast it with performance at other levels.

Uses

Rubrics:

  • Provide a basis for feedback on and grading of student work.
  • Help students understand the targets for their learning.
  • Help students learn standards of quality for a particular assignment.
  • Help students make dependable judgments about their own work that can inform improvement.

Types

There are two types of rubrics: analytic and holistic. An analytic rubric assesses multiple criteria separately, describing the levels of performance for each criterion. A holistic rubric yields a single overall score, assessing the whole work or product with multiple criteria considered together.

Features

Rubrics contain three essential features: evaluation criteria, quality definitions, and a scoring strategy. They generally exist as tables comprising a description of the task being evaluated, row headings outlining the criteria being evaluated, column headings identifying the different levels of performance, and a description of the level of performance in the boxes of the table.

When writing a rubric, an instructor or teaching team should set the scale, define the ratings, and develop descriptions of what performance looks like at each level.
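
To make the table structure and scoring strategy concrete, here is a minimal sketch in Python; the criteria, levels, point values, and descriptions are hypothetical, not drawn from the sources cited below.

    # A hypothetical analytic rubric: each criterion (a row in the table) maps
    # each performance level (a column) to a descriptor; levels carry points.
    LEVELS = {"Developing": 1, "Competent": 2, "Exemplary": 3}

    rubric = {
        "Organization": {
            "Developing": "Ideas appear in no clear order.",
            "Competent": "Most ideas follow a logical order.",
            "Exemplary": "Ideas follow a logical order that strengthens the argument.",
        },
        "Use of evidence": {
            "Developing": "Claims are rarely supported.",
            "Competent": "Most claims are supported by relevant evidence.",
            "Exemplary": "All claims are supported by well-chosen evidence.",
        },
    }

    def total_score(ratings):
        """Sum the points for one student's ratings, e.g.
        {"Organization": "Competent", "Use of evidence": "Exemplary"}."""
        return sum(LEVELS[level] for level in ratings.values())

    print(total_score({"Organization": "Competent", "Use of evidence": "Exemplary"}))  # 5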

Scales

Examples of three-level scales:

  • Weak, Satisfactory, Strong
  • Beginning, Intermediate, High
  • Weak, Average, Excellent
  • Developing, Competent, Exemplary
  • Low Mastery, Average Mastery, High Mastery

Examples of four-level scales:

  • Unacceptable, Marginal, Proficient, Distinguished
  • Beginning, Developing, Accomplished, Exemplary
  • Needs Improvement, Satisfactory, Good, Accomplished
  • Emerging, Progressing, Partial Mastery, Mastery
  • Inadequate, Needs Improvement, Meets Expectations, Exceeds Expectations
  • Poor, Fair, Good, Excellent

Examples of five-level scales:

  • Poor, Minimal, Sufficient, Above Average, Excellent
  • Novice, Intermediate, Proficient, Distinguished, Master
  • Unacceptable, Poor, Satisfactory, Good, Excellent

Student Perceptions of Rubrics

Studies of students’ responses to rubric use suggest that graduate and undergraduate students value rubrics because they clarify the targets for their work, allow them to regulate their progress, and make grades or marks transparent and fair.

Rubrics enable them to engage in important processes, including:

  • Identifying critical issues in an assignment, thereby reducing uncertainty and doing more meaningful work.
  • Determining the amount of effort needed for an assignment.
  • Evaluating their own performance in order to get immediate feedback, especially on weaknesses.
  • Estimating their grades prior to the submission of assignments.
  • Focusing their efforts so as to improve performance on subsequent assignments.

Tip based on these perceptions: Provide rubrics with the assignment description, as well as an example of graded or evaluated work.

Faculty Perceptions of Rubrics

In a review of 20 studies on rubrics, three studies report positive instructor perceptions of rubrics as scoring guides. In these cases, rubrics provided an objective basis for evaluation.

One striking difference between students’ and instructors’ perceptions of rubric use is related to their perceptions of the purposes of rubrics. Students frequently referred to them as serving the purposes of learning and achievement, while instructors focused almost exclusively on the role of a rubric in quickly, objectively and accurately assigning grades. Instructors’ limited conception of the purpose of a rubric might contribute to their unwillingness to use them.

Rubrics require quite a bit of time on the part of the instructor or teaching team to develop.

Rubric Reliability

The types of reliability that are most often considered in classroom assessment and in rubric development involve rater reliability. Reliability refers to the consistency of scores that are assigned by two independent raters (inter‐rater reliability) and by the same rater at different points in time (intra‐rater reliability).

The literature most frequently recommends two approaches to inter‐rater reliability: consensus and consistency. While consensus (agreement) measures whether raters assign the same score, consistency measures the correlation between raters’ scores.
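
As a concrete illustration of the distinction, here is a short Python sketch; the scores are invented, and Pearson correlation stands in for whatever consistency estimate a particular study might use.

    # Hypothetical scores two raters assigned to the same ten assignments
    # on a 1-4 rubric scale.
    rater_a = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
    rater_b = [3, 4, 3, 3, 2, 4, 3, 2, 4, 2]

    # Consensus: how often do the raters assign exactly the same score?
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

    # Consistency: do the raters rank the work similarly, even if their
    # scores differ? (Pearson correlation, computed without libraries.)
    n = len(rater_a)
    mean_a, mean_b = sum(rater_a) / n, sum(rater_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(rater_a, rater_b))
    var_a = sum((a - mean_a) ** 2 for a in rater_a)
    var_b = sum((b - mean_b) ** 2 for b in rater_b)
    consistency = cov / (var_a * var_b) ** 0.5

    print(f"Exact agreement: {agreement:.0%}")      # consensus estimate
    print(f"Score correlation: {consistency:.2f}")  # consistency estimate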

Several studies have shown that rubrics can allow instructors and students to reliably assess performance.

Of the four papers found that discuss the validity of the rubrics used in research, three focused on the appropriateness of the language and content of a rubric for the population of students being assessed. The language used in rubrics is considered one of the most challenging aspects of rubric design.

As with any form of assessment, the clarity of the language in a rubric is a matter of validity because an ambiguous rubric cannot be accurately or consistently interpreted by instructors, students or scorers.

Teaching tips from reliability research: Scorer training is the most important factor in achieving reliable and valid large-scale assessments. Before using a rubric, a teaching team should practice grading assignments together to ensure the rubric is clear.

https://www.brown.edu/sheridan/teaching-learning-resources/teaching-resources/course-design/classroom-assessment/grading-criteria/rubrics-scales

https://uwf.edu/media/university-of-west-florida/…/documents/rubric-template.docx

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448.

Competency-Based Medical Education in a Nutshell

Competency-based education in human medicine moves away from the idea that competence is a function of time spent on a rotation. The fundamental premise is the Day 1 test: on Day 1, what can students do without supervision?

This is an outcomes-based approach to the design, implementation, assessment and evaluation of an education program using competencies. It asks:

  • What are the abilities needed of graduates?
  • How can we sequence from novice to expert?
  • How can we enhance teacher-trainee interaction?
  • What learning activities are really needed?
  • How can we use best practices in assessment?

Competency-based principles include:
  • A focus on outcomes: graduate abilities
  • Ensuring progression of competence
  • Viewing time as a resource, not a framework
  • Promoting learner-centeredness
  • Demanding greater transparency and utility in the program/curriculum

Competency-Based Veterinary Education Framework Introduced at AAVMC

A new Competency-Based Veterinary Education (CBVE) framework unveiled Saturday at the 2018 conference of the Association of American Veterinary Medical Colleges (AAVMC) represents three years of intensive work and lays the groundwork for schools to develop competency-based curricula.

It is probably the most significant work of the organization to date and facilitates the shift from faculty-centered teaching to student-centered learning, said Chief Executive Officer Andy Maccabe. “We don’t consider this product to be a perfect, final product,” he added. Instead, it will be updated and revised as educators implement it, and all schools are invited to adapt the framework.

The framework encompasses 9 Domains of Competence with corresponding competencies. These include:

  1. Clinical Reasoning and Decision-making
  2. Individual Animal Care and Management
  3. Animal Population Care and Management
  4. Public Health
  5. Communication
  6. Collaboration
  7. Professionalism and Professional Identity
  8. Financial and Practice Management
  9. Scholarship

In addition, the framework is accompanied by Entrustable Professional Activities (EPAs): essential tasks that veterinary medical students can be trusted to perform with limited supervision in a given context, and in accordance with regulatory requirements, once sufficient competence has been demonstrated. The eight EPAs are:

  1. Gather a history, perform an examination, and create a prioritized differential diagnosis list
  2. Develop a diagnostic plan and interpret results
  3. Develop and implement a management/treatment plan
  4. Recognize a patient requiring urgent or emergent care and initiate evaluation and management
  5. Formulate relevant questions and retrieve evidence to advance care
  6. Perform a common surgical procedure on a stable patient, including pre-operative and post-operative management
  7. Perform general anesthesia and recovery of a stable patient, including monitoring and support
  8. Formulate recommendations for preventive healthcare

The EPAs are accompanied by descriptions of activities, relevant domains, and elements within the activity. AAVMC conference attendees from our college look forward to sharing more information upon their return.

 

CBE (competency-based education) has a “tyranny of utility.” … It has to be highly applicable. All learning activities are connected to “a golden thread” through the curriculum or they are “selective electives.”
– Jason R. Frank, CanMEDS, Royal College of Physicians and Surgeons of Canada

Tips for Writing Good Multiple-Choice Questions

  • Base each item on an educational or instructional objective of the course, not trivial information.
  • Try to write items in which there is one and only one correct or clearly best answer.
  • The phrase that introduces the item (stem) should clearly state the problem.
  • Test only a single idea in each item.
  • Be sure wrong answer choices (distractors) are at least plausible.
  • Incorporate common errors of students in distractors.
  • The position of the correct answer should vary randomly from item to item.
  • Include from three to five options for each item.
  • Avoid overlapping alternatives.
  • The length of the response options should be about the same within each item (preferably short).
  • There should be no grammatical clues to the correct answer.
  • Format the items vertically, not horizontally (i.e., list the choices vertically).
  • The response options should be indented and in column form.
  • Word the stem positively; avoid negative phrasing such as “not” or “except.” If this cannot be avoided, the negative words should always be highlighted by underlining or capitalization: Which of the following is NOT an example ……
  • Avoid excessive use of negatives and/or double negatives.
  • Avoid the excessive use of “All of the above” and “None of the above” in the response alternatives. In the case of “All of the above”, students only need to have partial information in order to answer the question. Students need to know that only two of the options are correct (in a four or more option question) to determine that “All of the above” is the correct answer choice. Conversely, students only need to eliminate one answer choice as implausible in order to eliminate “All of the above” as an answer choice. Similarly, with “None of the above”, when used as the correct answer choice, information is gained about students’ ability to detect incorrect answers. However, the item does not reveal if students know the correct answer to the question.

From Writing Good Multiple Choice Questions by Dawn M. Zimmaro, Ph.D.

How Many Options Should a Multiple-choice Question Have? Maybe Just 3

Exactly how many options should a multiple-choice question have? The answer has varied over the years, but one meta-analysis suggests fewer than many of us currently use. As recently as 2002, researchers suggested we use “as many plausible distractors as feasible,” but that may mean just 3, according to Michael C. Rodriguez in “Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years of Research.”

Rodriguez writes, “I would support this advice by contributing the concern that in most cases, only three are feasible. Based on this synthesis, MC items should consist of three options, one correct option and two plausible distractors. Using more options does little to improve item and test score statistics and typically results in implausible distractors. The role of distractor deletion method makes the argument stronger. Beyond the evidence, practical arguments continue to be persuasive.

  1. Less time is needed to prepare two plausible distractors than three or four distractors.
  2. More 3-option items can be administered per unit of time than 4- or 5-option items, potentially improving content coverage.
  3. The inclusion of additional high quality items per unit of time should improve test score reliability, providing additional validity-related evidence regarding the consistency of scores and score meaningfulness and usability.
  4. More options result in exposing additional aspects of the domain to students, possibly increasing the provision of context clues to other questions (particularly if the additional distractors are plausible).”
We may not feel comfortable moving from 4 or 5 options to 3, but the message is clear: there’s no reason to spend valuable faculty time and energy on developing non-plausible distractors, and using more than 5 options does NOT improve a question.

AAVMC to Unveil Competency Framework for Veterinary Medicine

For the past two years, a group of respected educators from the Association of American Veterinary Medical Colleges member institutions has been working to develop a competency framework for veterinary medicine that aligns with approaches used by other health professions.

The Office of Teaching & Learning became aware of this work when it began and participated in an activity designed to provide feedback to the working group, examining a very early draft of the competencies during a session at the 2016 Veterinary Educators Collaborative.

The finalized framework will be introduced during a plenary session at the 2018 AAVMC annual meeting in March.

Council for Professional Education Chair Tod Drost and OTL Director Melinda Rhodes-DiSalvo will be present at the meeting and are excited to share what they learn with colleagues at CVM and to consider how the framework might assist in advancing educational excellence. In essence, the framework will respond to the questions “What does the public expect a graduate veterinarian to be able to do?” and “How do you actually assess students’ competencies in these areas?”

According to AAVMC: “The framework will be introduced as a ‘best practices’ model which all member institutions are welcome to adopt or consult as they modify existing curricula or develop new ones. While no action will be taken to adopt the program as an official standard for evaluating educational outcomes, the body of work represents the most substantial effort ever undertaken in this area of academic veterinary medicine and is expected to serve as a valuable tool to guide curricular development, refinement and outcomes assessment.”

The project was led by Associate Deans for Academic Affairs Dr. Jennie Hodgson of the Virginia-Maryland College of Veterinary Medicine and Dr. Laura Molgaard of the University of Minnesota College of Veterinary Medicine.

AAVMC’s website has additional information about the process and working group.

How to Increase the Value of Tests

  • Incorporating frequent quizzes into a class’s structure may promote student learning. These quizzes can consist of short-answer or multiple-choice questions and can be administered online or face-to-face. … Providing students the opportunity for retrieval practice—and, ideally, providing feedback for the responses—will increase learning of targeted as well as related material.

  • Providing “summary points” during a class encourages students to recall and articulate key elements of the class. Setting aside the last few minutes of a class to ask students to recall, articulate, and organize their memory of the content of the day’s class may provide significant benefits to their later memory of these topics. Whether this exercise is called a minute paper or the PUREMEM (pure memory, or practicing unassisted retrieval to enhance memory for essential material) approach, it may benefit student learning.

  • … Pretesting students’ knowledge of a subject may prime them for learning. By pretesting students before a unit or even a day of instruction, an instructor may help alert students both to the types of questions that they need to be able to answer and the key concepts and facts they need to be alert to during study and instruction.

  • Finally, instructors may be able to aid their students’ metacognitive abilities by sharing a synopsis of these observations. … Adding the potential benefits of pretesting may further empower students to take control of their own learning, such as by using example exams as primers for their learning rather than simply as pre-exam checks on their knowledge.

Tips For Giving Feedback in the Clinical Environment

  1. Establish a respectful learning environment.
  2. Communicate goals and objectives for feedback.
  3. Base feedback on direct observation.
  4. Make feedback timely and a regular occurrence.
  5. Begin the session with the learner’s self-assessment.
  6. Reinforce and correct observed behaviours.
  7. Use specific, neutral language to focus on performance.
  8. Confirm the learner’s understanding and facilitate acceptance.
  9. Conclude with an action plan.
  10. Reflect on your feedback skills.
  11. Create staff-development opportunities.
  12. Make feedback part of institutional culture.

Various Uses of the Term “Assessment”

Q: I keep on hearing about assessment. Assessment as in a test, assessment as in learning outcomes, assessment as in course assessment, assessment as in program assessment. What’s up?

A: The term “assessment,” as commonly used at the college, refers to a midterm or final exam, or another assignment designed to test students’ acquisition of foundational knowledge or ability to reason clinically. In this sense, “assessment” is another word for examination.

Assessment can also refer to a plan or structure established for constant course and program improvement, and a way to ensure a college is collectively doing what it says it does — that students learn what we say they will during their time with us.

Because examinations are designed to test student knowledge, it would seem reasonable to equate the exam with achievement of learning outcomes. This assumes that examination questions are aligned with (directly related to) the stated outcomes for a lecture, course, or program.

“There is often confusion over the difference between grades and learning assessment, with some believing that they are totally unrelated and others thinking they are one and the same. The truth is, it depends. Grades are often based on more than learning outcomes. Instructors’ grading criteria often include behaviors or activities that are not measures of learning outcomes, such as attendance, participation, improvement, or effort. Although these may be correlated with learning outcomes, and can be valued aspects of the course, typically they are not measures of learning outcomes themselves.

“However, assessment of learning can and should rely on or relate to grades, and so far as they do, grades can be a major source of data for assessment.” (http://www.cmu.edu/teaching/assessment/howto/basics/grading-assessment.html#scoringparticipation)
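
One way to picture how grades can feed assessment is to tag each exam item with the learning outcome it is aligned to and then aggregate item-level results by outcome rather than only by student. A minimal sketch in Python follows; the outcomes, item mapping, and scores are all hypothetical.

    from collections import defaultdict

    # Hypothetical alignment of exam items to stated learning outcomes.
    item_to_outcome = {
        "Q1": "Interpret laboratory results",
        "Q2": "Interpret laboratory results",
        "Q3": "Develop a treatment plan",
        "Q4": "Develop a treatment plan",
    }

    # Hypothetical item-level results (1 = correct, 0 = incorrect) per student.
    results = {
        "Student A": {"Q1": 1, "Q2": 1, "Q3": 0, "Q4": 1},
        "Student B": {"Q1": 1, "Q2": 0, "Q3": 0, "Q4": 0},
    }

    # Aggregate by outcome: percentage of aligned items answered correctly.
    correct, total = defaultdict(int), defaultdict(int)
    for answers in results.values():
        for item, score in answers.items():
            correct[item_to_outcome[item]] += score
            total[item_to_outcome[item]] += 1

    for outcome in total:
        print(f"{outcome}: {correct[outcome] / total[outcome]:.0%} of aligned items correct")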

When deciding on what kind of assessment activities to use, it is helpful to keep in mind the following questions:

  • What will the student’s work on the activity (multiple choice answers, essays, project, presentation, etc.) say about their level of competence on the targeted learning objectives?
  • How will the instructor’s assessment of their work help guide students’ practice and improve the quality of their work?
  • How will the assessment outcomes for the class guide teaching practice?


Conversation Focuses on Answering Tough Questions about Grading and Feedback

During a Wednesday, Sept. 6, presentation, Dr. Julie Byron and Melinda Rhodes-DiSalvo, Ph.D., along with a group of faculty, sat down to wrestle with “Answers to Tough Questions about Grading and Student Feedback.” A few highlights follow.
