How to become “test wise”

Texas A&M’s College of Medicine has a great website filled with test-taking tips. “What is Test Wiseness? It is a subject’s capacity to utilize the characteristics and formats of test and/or test-taking situations to receive a high score (Hyde 1981, 3). These are skills that can allow you to perform well in any testing situation and to know what to do before, during and after the test. Research tells us test-wise people have improved attitudes toward testing, have less test anxiety and achieve better grades (Vattanapath and Jaiprayoon 1999). Sweetnam (2003) found that even students familiar with the content may do poorly because they lack test-taking skills.”

We encourage you to take a look!

Keystone study strategies outlined

“Three keystone study strategies” outlined in the book Make It Stick: The Science of Successful Learning can become habits and help you structure the remainder of your time this spring semester.

1. Practice Retrieving New Learning from Memory: “Retrieval practice” means self-quizzing. “Retrieving knowledge and skill from memory should become your primary study strategy in place of rereading.” You can do this by stopping during a study or review session to ask yourself questions.

  • What did I just review?
  • What vocabulary/terminology/concepts are new to me?
  • What are the most important points or ideas?
  • How do these important points relate to what I already know?

“The familiarity with a text that is gained from rereading creates illusions of knowing, but these are not reliable indicators of mastery of material,” the authors write. “… By contrast, quizzing yourself on the main ideas and the meanings behind the terms helps you to focus on the central precepts rather than on peripheral material or on a professor’s turn of phrase. Quizzing provides a reliable measure of what you’ve learned and what you haven’t yet mastered.”

2. Space Out Your Retrieval Practice: “Spaced practice means studying information more than once but leaving considerable time between practice sessions.” As you might guess, cramming for an exam doesn’t fit this model. To implement this technique, establish a self-quizzing schedule. The authors suggest first quizzing yourself soon after your first encounter with the material, then again several days later. “Over the course of a semester, as you quiz yourself on new material, also reach back to retrieve prior material and ask yourself how that knowledge relates to what you have subsequently learned.”

In addition, “another way of spacing retrieval practice is to interleave the study of two or more topics, so that alternating between them requires that you continually refresh your mind on each topic as you return to it.”

The takeaway: “Lots of practice works, but only if it’s spaced.”
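
The spacing idea above can be sketched as a simple expanding-interval planner. The intervals and function name below are illustrative assumptions, not a schedule prescribed by the book:

```python
from datetime import date, timedelta

# Illustrative expanding intervals (days after the first study session);
# the book suggests quizzing soon after first exposure, then days later,
# then reaching back over the rest of the semester.
INTERVALS = [1, 3, 7, 14, 30]

def retrieval_schedule(first_study: date, intervals=INTERVALS):
    """Return the dates on which to self-quiz a topic."""
    return [first_study + timedelta(days=d) for d in intervals]

sessions = retrieval_schedule(date(2018, 3, 5))
for s in sessions:
    print(s.isoformat())
```

Each topic gets its own schedule, so on any given day you review whichever topics come due — which naturally produces the spacing the authors describe.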

3. Interleave the Study of Different Problem Types: “If you find yourself falling into single-minded, repetitive practice of a particular topic or skill, change it up: mix in the practice of other subjects, other skills, constantly challenging your ability to recognize the problem type and select the right solution.”
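
A minimal sketch of interleaving, assuming hypothetical topic pools: instead of practicing one topic in a block, problems from several topics are shuffled together, so each item forces you to recognize the problem type before solving it:

```python
import random

# Hypothetical pools of practice problems, keyed by topic.
pools = {
    "anatomy": ["a1", "a2", "a3"],
    "pharmacology": ["p1", "p2", "p3"],
    "physiology": ["y1", "y2", "y3"],
}

def interleaved_session(pools, seed=0):
    """Mix problems from all topics together instead of blocking by topic."""
    problems = [(topic, p) for topic, items in pools.items() for p in items]
    random.Random(seed).shuffle(problems)  # seeded for a reproducible order
    return problems

session = interleaved_session(pools)
```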

Make It Stick also recommends other study strategies, including:

  • Elaboration or “finding additional layers of meaning in new material.”
  • Generation or attempting to answer a question before being shown an answer.
  • Reflection or “the act of taking a few minutes to review what has been learned in a recent class or experience and asking yourself questions” about the material and your acquisition/mastery of the material.
  • Calibration or “the act of aligning your judgments of what you know and don’t know with objective feedback so as to avoid being carried off by the illusions of mastery that catch many learners by surprise at test time.”
  • Mnemonic Devices or “tools … for creating mental structures that make it easier to retrieve what you have learned.”

From chapter 8 of Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning.

Rubrics add transparency, consistency, and efficiency to grading


Rubrics are scoring guides with specific pre-established performance criteria, in which each level of performance is described so it can be contrasted with performance at other levels.

Rubrics can be used to:

  • Provide feedback and grade student work.
  • Help students understand the targets for their learning.
  • Help students learn standards of quality for a particular assignment.
  • Help students make dependable judgments about their own work that can inform improvement.


There are two types of rubrics: analytic and holistic. An analytic rubric scores more than one criterion or content area separately, each at different levels of performance. A holistic rubric yields a single overall judgment, assessing the whole work or product while considering multiple factors together.


Rubrics contain three essential features: evaluation criteria, quality definitions, and a scoring strategy. They generally exist as tables comprising a description of the task being evaluated, row headings outlining the criteria being evaluated, column headings identifying the different levels of performance, and a description of the level of performance in the boxes of the table.

When writing a rubric, an instructor or teaching team should set the scale, define the ratings, and develop descriptions of what performance looks like at each level.
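
The table structure described above can be sketched as data: criteria as rows, performance levels as columns, and a description in each cell. The criteria, levels, descriptions, and point values here are hypothetical:

```python
# A minimal analytic-rubric sketch: criteria (rows) x levels (columns),
# with a description in each cell and points attached to each level.
LEVELS = ["Developing", "Competent", "Exemplary"]   # column headings
POINTS = {"Developing": 1, "Competent": 2, "Exemplary": 3}

rubric = {  # row heading -> description of performance at each level
    "Organization": {
        "Developing": "Ideas are hard to follow.",
        "Competent": "Mostly logical order with minor lapses.",
        "Exemplary": "Clear, logical flow throughout.",
    },
    "Evidence": {
        "Developing": "Claims are unsupported.",
        "Competent": "Most claims are supported.",
        "Exemplary": "All claims are well supported.",
    },
}

def score(ratings):
    """Sum points for a dict of criterion -> chosen performance level."""
    return sum(POINTS[level] for level in ratings.values())

total = score({"Organization": "Exemplary", "Evidence": "Competent"})  # 5 of 6
```

Writing the descriptions in each cell is the hard part of rubric design; the scoring itself is just a lookup and a sum.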


Examples of three-level scales:

  • Weak, Satisfactory, Strong
  • Beginning, Intermediate, High
  • Weak, Average, Excellent
  • Developing, Competent, Exemplary
  • Low Mastery, Average Mastery, High Mastery

Examples of four-level scales:

  • Unacceptable, Marginal, Proficient, Distinguished
  • Beginning, Developing, Accomplished, Exemplary
  • Needs Improvement, Satisfactory, Good, Accomplished
  • Emerging, Progressing, Partial Mastery, Mastery
  • Inadequate, Needs Improvement, Meets Expectations, Exceeds Expectations
  • Poor, Fair, Good, Excellent

Examples of five-level scales:

  • Poor, Minimal, Sufficient, Above Average, Excellent
  • Novice, Intermediate, Proficient, Distinguished, Master
  • Unacceptable, Poor, Satisfactory, Good, Excellent

Student Perceptions of Rubrics

Studies of students’ responses to rubric use suggest that graduate and undergraduate students value rubrics because they clarify the targets for their work, allow them to regulate their progress, and make grades or marks transparent and fair.

Rubrics enable them to engage in important processes, including: identifying critical issues in an assignment, thereby reducing uncertainty and doing more meaningful work; determining the amount of effort needed for an assignment; evaluating their own performance in order to get immediate feedback, especially on weaknesses; estimating their grades prior to submitting assignments; and focusing their efforts so as to improve performance on subsequent assignments.

Tip based on perceptions: Provide rubrics with the assignment description, as well as an example of a graded or evaluated work.

Faculty Perceptions of Rubrics

In a review of 20 studies on rubrics, three studies report positive instructor perceptions of rubrics as scoring guides. In these cases, rubrics provided an objective basis for evaluation.

One striking difference between students’ and instructors’ perceptions of rubric use is related to their perceptions of the purposes of rubrics. Students frequently referred to them as serving the purposes of learning and achievement, while instructors focused almost exclusively on the role of a rubric in quickly, objectively and accurately assigning grades. Instructors’ limited conception of the purpose of a rubric might contribute to their unwillingness to use them.

Rubrics require quite a bit of time on the part of the instructor or teaching team to develop.

Rubric Reliability

The types of reliability that are most often considered in classroom assessment and in rubric development involve rater reliability. Reliability refers to the consistency of scores that are assigned by two independent raters (inter‐rater reliability) and by the same rater at different points in time (intra‐rater reliability).

The literature most frequently recommends two approaches to inter‐rater reliability: consensus and consistency. While consensus (agreement) measures if raters assign the same score, consistency provides a measure of correlation between the scores of raters.
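
These two measures are straightforward to compute. A minimal sketch with hypothetical scores from two raters, taking consensus as the proportion of identical scores and consistency as the Pearson correlation:

```python
from math import sqrt

rater_a = [4, 3, 5, 2, 4, 3, 5, 1]  # hypothetical scores from rater A
rater_b = [4, 3, 4, 2, 4, 2, 5, 1]  # hypothetical scores from rater B

# Consensus (agreement): proportion of items given the same score.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def pearson(xs, ys):
    """Consistency: Pearson correlation between two raters' scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

consistency = pearson(rater_a, rater_b)
print(f"agreement={agreement:.2f}, consistency={consistency:.2f}")
```

Note that the two measures can disagree: a rater who is systematically one point harsher than a colleague can show low agreement but perfect correlation.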

Several studies have shown that rubrics can allow instructors and students to reliably assess performance.

Of the four papers found that discuss the validity of the rubrics used in research, three focused on the appropriateness of the language and content of a rubric for the population of students being assessed. The language used in rubrics is considered to be one of the most challenging aspects of its design.

As with any form of assessment, the clarity of the language in a rubric is a matter of validity because an ambiguous rubric cannot be accurately or consistently interpreted by instructors, students or scorers.

Teaching tips from reliability research: Scorer training is the most important factor for achieving reliable and valid large-scale assessments. Before using a rubric, a teaching team should practice grading assignments together to ensure rubric clarity.

Rubric template: …/documents/rubric-template.docx

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.

Competency-Based Medical Education in a Nutshell

Competency-based education in human medicine moves away from the idea that competence is related to time on a rotation. The fundamental premise is the Day 1 test. On Day 1, what can students do with no supervision?

This is an outcomes-based approach to the design, implementation, assessment and evaluation of an education program using competencies. It asks:

  • What abilities are needed of graduates?
  • How can we sequence from novice to expert?
  • How can we enhance teacher-trainee interaction?
  • What learning activities are really needed?
  • How can we use best practices in assessment?

Competency-based principles include:

  • A focus on outcomes: graduate abilities
  • Ensuring progression of competence
  • Viewing time as a resource, not a framework
  • Promoting learner-centeredness
  • Demanding greater transparency and utility in the program/curriculum

Competency-Based Veterinary Education Framework Introduced at AAVMC

A new Competency-Based Veterinary Education (CBVE) framework unveiled Saturday at the 2018 conference of the Association of American Veterinary Medical Colleges (AAVMC) represents three years of intensive work and lays the framework for schools to develop a competency-based curriculum.

It is probably the most significant work of the organization to date and facilitates the shift from faculty-centered teaching to student-centered learning, said Chief Executive Officer Andy Maccabe. “We don’t consider this product to be a perfect, final product,” he added. Instead, it will be updated and revised as educators implement it, and all schools are invited to adapt the framework.

The framework encompasses 9 Domains of Competence with corresponding competencies. These include:

  1. Clinical Reasoning and Decision-making
  2. Individual Animal Care and Management
  3. Animal Population Care and Management
  4. Public Health
  5. Communication
  6. Collaboration
  7. Professionalism and Professional Identity
  8. Financial and Practice Management
  9. Scholarship

In addition, the framework is accompanied by Entrustable Professional Activities (EPAs): essential tasks that veterinary medical students can be trusted to perform with limited supervision in a given context, consistent with regulatory requirements, once sufficient competence has been demonstrated. The 8 EPAs include:

  1. Gather a history, perform an examination, and create a prioritized differential diagnosis list
  2. Develop a diagnostic plan and interpret results
  3. Develop and implement a management/treatment plan
  4. Recognize a patient requiring urgent or emergent care and initiate evaluation and management
  5. Formulate relevant questions and retrieve evidence to advance care
  6. Perform a common surgical procedure on a stable patient, including pre-operative and post-operative management
  7. Perform general anesthesia and recovery of a stable patient, including monitoring and support
  8. Formulate recommendations for preventive healthcare

The EPAs are accompanied by descriptions of activities, relevant domains, and elements within the activity. AAVMC conference attendees from our college look forward to sharing more information upon their return.


CBE (competency-based education) has a “tyranny of utility.” … It has to be highly applicable. All learning activities are connected to “a golden thread” through the curriculum or they are “selective electives.”
– Jason Frank, CanMEDS, Royal College of Physicians and Surgeons of Canada

Faculty and Staff Development Session Focuses on Pedagogies of Inclusion

Six faculty joined Office of Teaching and Learning staff to discuss pedagogies of inclusion during a Thursday morning event on “Inclusive Pedagogies.” The conversation was rewarding and lasted well past the session, with instructors sharing how they approach engaging as many students as possible.

Inclusive pedagogy is a method of teaching that incorporates dynamic practices and learning styles, multicultural content, and varied means of assessment, with the goal of promoting student academic success as well as social, cultural, and physical well-being; it often reflects the strategies we know work to engage all students.

All instructors are urged to begin assessing the assumptions they hold about experience, knowledge, ability, identity, and viewpoints.

Tips and takeaways from the session included the following, among others:

  • Recognize any biases or stereotypes you may have absorbed.
  • Rectify any language patterns or case examples that exclude or demean any groups.
  • Attend to student identities and seek to change the ways systemic inequities shape dynamics in teaching-learning spaces, affect individuals’ experiences of those spaces, and influence course and curriculum design.
  • If discriminatory remarks are made in your class, it is your responsibility to interrupt them and point them out as such. If you do not, students may think that you either approve of or are unaware of the impact of the comment or behavior.
  • Do not assume that all students will recognize cultural, literary or historical references familiar to you.
  • Convey the same level of respect and confidence in the abilities of all your students.
  • In class discussion, be wary of unfair patterns of communication (e.g., men interrupting women, a white student getting credit for a student of color’s idea) and ensure fair access to class discussion for all students.
  • In courses in which class discussion is important, consider calling upon students rather than only relying on volunteers. Some students may be willing to participate but may not volunteer, for cultural or personal reasons.
  • Consider who comprises panels of experts or guest lecturers.
  • Use Universal Design for Learning (UDL) when preparing activities, materials, and presentations.

Tips for Writing Good Multiple-Choice Questions

  • Base each item on an educational or instructional objective of the course, not trivial information.
  • Try to write items in which there is one and only one correct or clearly best answer.
  • The phrase that introduces the item (stem) should clearly state the problem.
  • Test only a single idea in each item.
  • Be sure wrong answer choices (distractors) are at least plausible.
  • Incorporate common errors of students in distractors.
  • The position of the correct answer should vary randomly from item to item.
  • Include from three to five options for each item.
  • Avoid overlapping alternatives.
  • The length of the response options should be about the same within each item (preferably short).
  • There should be no grammatical clues to the correct answer.
  • Format the items vertically, not horizontally (i.e., list the choices vertically).
  • The response options should be indented and in column form.
  • Word the stem positively; avoid negative phrasing such as “not” or “except.” If this cannot be avoided, the negative words should always be highlighted by underlining or capitalization: Which of the following is NOT an example ……
  • Avoid excessive use of negatives and/or double negatives.
  • Avoid the excessive use of “All of the above” and “None of the above” in the response alternatives. In the case of “All of the above”, students only need to have partial information in order to answer the question. Students need to know that only two of the options are correct (in a four or more option question) to determine that “All of the above” is the correct answer choice. Conversely, students only need to eliminate one answer choice as implausible in order to eliminate “All of the above” as an answer choice. Similarly, with “None of the above”, when used as the correct answer choice, information is gained about students’ ability to detect incorrect answers. However, the item does not reveal if students know the correct answer to the question.

From Writing Good Multiple Choice Questions by Dawn M. Zimmaro, Ph.D.
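
The tip about varying the position of the correct answer randomly can be automated when assembling items. A minimal sketch (the stem, options, and function name are made up for illustration):

```python
import random

def shuffle_item(stem, correct, distractors, rng=random):
    """Return the item's options in random order plus the correct option's index."""
    options = [correct] + list(distractors)
    rng.shuffle(options)  # correct answer lands in a random position
    return stem, options, options.index(correct)

stem, options, answer_index = shuffle_item(
    "Which strategy does retrieval practice replace?",
    "Rereading",
    ["Highlighting", "Summarizing"],
)
```

Passing a seeded `random.Random` as `rng` makes the ordering reproducible, which is useful when regenerating the same exam form.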

How Many Options Should a Multiple-choice Question Have? Maybe Just 3

Exactly how many options should a multiple-choice question have? The answer has varied over the years, but one meta-analysis suggests fewer than many of us currently use. As recently as 2002, researchers suggested we use “as many plausible distractors as feasible,” but that may mean just 3, according to Michael C. Rodriguez in “Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years.” 

Rodriguez writes, “I would support this advice by contributing the concern that in most cases, only three are feasible. Based on this synthesis, MC items should consist of three options, one correct option and two plausible distractors. Using more options does little to improve item and test score statistics and typically results in implausible distractors. The role of distractor deletion method makes the argument stronger. Beyond the evidence, practical arguments continue to be persuasive.

  1. Less time is needed to prepare two plausible distractors than three or four distractors.
  2. More 3-option items can be administered per unit of time than 4- or 5-option items, potentially improving content coverage.
  3. The inclusion of additional high quality items per unit of time should improve test score reliability, providing additional validity-related evidence regarding the consistency of scores and score meaningfulness and usability.
  4. More options result in exposing additional aspects of the domain to students, possibly increasing the provision of context clues to other questions (particularly if the additional distractors are plausible).”

We may not feel comfortable moving from 4 or 5 options to 3, but the message is clear: there’s no reason to spend valuable faculty time and energy developing non-plausible distractors, and using more than 5 options does NOT improve a question.

AAVMC to Unveil Competency Framework for Veterinary Medicine

For the past two years, a group of respected educators from the Association of American Veterinary Medical Colleges member institutions has been working to develop a competency framework for veterinary medicine that aligns with approaches used by other health professions.

The Office of Teaching & Learning became aware of this work when it began and participated in an activity designed to provide feedback to the group working on this project, specifically examining a very early draft of the competencies during a session at the 2016 Veterinary Educators Collaborative.

The finalized framework will be introduced during a plenary session at the 2018 AAVMC annual meeting in March.

Council for Professional Education Chair Tod Drost and OTL Director Melinda Rhodes-DiSalvo will be present at the meeting and are excited to share what they learn with colleagues at CVM and to consider how the framework might assist in advancing educational excellence. In essence, the framework will respond to the questions “What does the public expect a graduate veterinarian to be able to do?” and “How do you actually assess students’ competencies in these areas?”

According to AAVMC: “The framework will be introduced as a ‘best practices’ model which all member institutions are welcome to adopt or consult as they modify existing curricula or develop new ones. While no action will be taken to adopt the program as an official standard for evaluating educational outcomes, the body of work represents the most substantial effort ever undertaken in this area of academic veterinary medicine and is expected to serve as a valuable tool to guide curricular development, refinement and outcomes assessment.”

The project was led by Associate Deans for Academic Affairs Dr. Jennie Hodgson of the Virginia-Maryland College of Veterinary Medicine and Dr. Laura Molgaard of the University of Minnesota College of Veterinary Medicine.

AAVMC’s website has additional information about the process and working group.

Getting Through Finals: Try Time Management That Works Because You Commit to It

If you’re seeking strategies for making it through final exams, you may want to spend some time this weekend (and then every Sunday until the semester is over) to develop a very specific weekly calendar and schedule each hour (or half hour) to include lectures, labs, work, extracurricular activities, social time, intense study sessions, grocery shopping, laundry and sleep time — just about everything you do. These calendar items are appointments you commit to keeping.

This calendar should include sessions of intense self-testing/study time for each class of the day and preparation for upcoming exams. Copying notes verbatim, passively rereading notes, or simply viewing presentation slides is the opposite of intense study/self-testing.

[Image: sample planned student calendar]