Feeling over-objectified

[Image: plot table from A Taxonomy for Learning, Teaching, and Assessing]

I believe wholeheartedly in the importance of designing courses around meaningful learning objectives.

Online educators, though, often seem to be obsessed with enforcing a very behavioristic system of measurable objectives and sub-objectives using Bloom’s taxonomy. (This comes up in the otherwise laudable standards of Quality Matters. They are extremely specific about how to use learning objectives, even though the matter isn’t at all settled or clear in the research they rely on.)

Before Christmas I borrowed A Taxonomy for Learning, Teaching, and Assessing from the library. This book presents the well-known “revised” Bloom’s taxonomy.

What I found was a detailed guide that seems like it would be very useful for institutions or school districts that are trying to calibrate curriculum assessment on a wide scale. I found less there for individual course designers or for students.

In the absence of a strict institutional assessment regime, I remain convinced that straightforward, general, vernacular-English learning objectives are better than Bloom's-style objectives, for a few reasons.

We can get a false impression of rigor from the six levels

If every objective were calibrated to have identical scope and depth, you might reasonably say that “Design…” was more rigorous than “Clarify…” But what if the design task is fairly mechanical and the clarify task is extremely thorny and subtle?

To some extent, the levels abstractly make sense on a scale of rigor (remembering is lower than applying, which is lower than creating a brand-new structure). Two caveats, though:

  1. That scale has no basis in research on learning or cognition (which isn’t all that settled anyway), and
  2. The levels don’t relate in any sort of sequence, even if it seems like they do (someone can apply something without understanding it, certainly without recalling it directly, and which of those is more difficult is a judgment call in a given situation).

Tinkering with verbs doesn’t make something fundamentally more measurable or meaningful

The Taxonomy book goes on at great length about the subtleties of placing a task at the appropriate level, and I don’t find too much fault with their suggestions—it’s all fairly consistent. But I don’t think that most instructors or instructional designers want to take the time to plot objectives according to the book’s levels, sub-levels, and sub-sub-levels. More importantly, what’s the use of assigning that subtle meaning when a layperson (a student, for example) couldn’t possibly perceive the distinctions?

One of the primary tasks in the Applying the Quality Matters Rubric course I took last fall was to judge whether course objectives were measurable or not. My classmates would pounce on the fictitious sample instructor’s objectives:

“‘Understand strategies for overcoming public speaking anxiety’??? This instructor doesn’t know anything about objectives! We can’t even review this course because the objectives are completely opaque and unmeasurable…

“But we can change the verb to describe, and then everything’s fixed.”

Seriously? First, if you know exactly what the measurable version should be, was there really a problem in the first place? The objection is purely about wording, not about meaning. Second, to a student or a sane instructor, isn’t the first version fairly clear? No, it’s not 100% clear what understanding entails, but isn’t describe just as vague? Can’t you describe in different ways, at different depths?

It’s all too neat

The way we use objectives suggests that learning is easily planned, sequential, neatly packaged, and identical from student to student. Certainly the objectives provide direction—and an aimless, directionless college course is probably a bad thing. Has anyone ever learned anything in careful order like that, though? And has anyone ever had a great college course that never changed gears or veered into unexpected areas?

I think Bloom’s over-promises and suggests that we understand learning better than we do. Learning is a complex system, differing from person to person, and it includes all sorts of non-cognitive elements (motivation, prior knowledge, and so on).

Saying that a student met a learning objective is a big claim.

  • It requires that your objective, materials, and assessment were perfectly planned and aligned.
  • It requires that the proxy measurement of that assessment was highly correlated with the behavior described in the objective. (Are all assessments so authentic and transferable to the real world?)
  • It requires that each learning objective is evaluated individually, objectively, without interference from grading scales, curves, and so on. (Does a D mean you met the objective? If you got a penalty for turning in the project late, does that mean you met the objective to a lesser extent? How often do graduate students get Fs?)

What, then?

Let’s create our courses around clear, meaningful learning outcomes. Let’s focus on providing an overabundance of resources and practice for reaching those outcomes—not just a carefully prescribed sequence that hews exactly to our objectives. Let’s share these outcomes with students in a transparent, easy-to-grasp way, without bombarding them with micro-objectives.

If we give students a few clear, strong course outcomes, they’ll begin to understand the connection between the outcomes and the activities in the course. We’ll also be leaving room for the course to take different paths, for students to learn in the messy, serendipitous way that people really learn.

And let’s judge their learning based on those outcomes, but with the understanding that the data will give us no more than hints. And let’s use that data to try to make the next time go better.
