I’m writing this blog post from my “visiting professor lodging” in Prague and have been pondering the differences between e-learning here (that is, Central Europe) and in the United States. It occurs to me that the differences I’ve encountered reflect minor characteristics, but there are some interesting parallels between categorizing e-learning geographically and designating e-learning as separate and distinct from more traditional forms of instruction.
I’ve long argued that e-learning (and before e-learning, any form of distance education) should be evaluated rigorously, using criteria based on what we know about human learning and instructional design. This call for serious scrutiny of online courses is not at all unusual; we hear it from educational administrators, accrediting bodies, and department chairs, for example. What we don’t typically hear, however, is the second part of my argument: that all courses – face-to-face, blended, online, and everything in between – should be evaluated rigorously with less regard for the instructional environment than for factors that are more likely to influence learning and achievement.
Last week I spoke at a conference in Bratislava, Slovakia about how to recognize quality in online courses. I spent some time on how to select evaluation criteria, how to create a rubric, and other topics that you’d expect to hear in a talk like this. What I hoped the conference attendees also walked away with, though, was food for thought about evaluating e-learning separately from other types of instruction. I am beginning to suspect that every time we segregate online coursework into a separate category for evaluation purposes, we inadvertently shoot ourselves in the foot.
One of the obstacles we face when proposing any type of unfamiliar educational practice is that humans tend to resist change and innovation, and not only resist, but also engage in a relatively predictable set of decision-making strategies about that innovation. One of the most influential of these involves the idea of relative advantage: we compare the innovation with our current practices to determine whether it makes sense even to consider changing. After all, if a new way of doing something isn’t any better than what we’re already doing, why bother? So, if e-learning is proposed as a significantly different form of instruction, it’s likely to face resistance simply because it’s difficult to muster valid evidence of its superiority in terms of learning and achievement (and with good reason).
In situations where we manage to overcome this obstacle and e-learning is implemented, the skeptics who remain unconvinced are often the first to call for rigorous evaluation and careful scrutiny of the new courses. Why is that a problem? By segregating e-learning in this way, we’re encouraging the idea that it is different enough from traditional practice to require “special treatment,” and it’s a very small leap from “special” to “inferior.” (I wish I had a solid explanation for why I can’t argue that “special” might just as likely lead to “superior,” but I feel sure that it doesn’t.)
So, how to discourage this special status? One strategy is to have a common set of evaluation criteria that are applied to all courses regardless of delivery format. If you look carefully at the Blackboard Exemplary Course Program rubric, for example, those criteria could, with minor changes in wording, be used for a face-to-face course. Factors such as providing self-assessment activities or presenting content in an organized manner are hardly exclusive to online instruction, but these are what make a difference in teaching and learning – not whether students watch a video streamed online versus sitting in a classroom.
So, the topic I had planned to write this blog post about – the differences between e-learning here and there – didn’t turn out quite as I’d planned, but the point is the same: maybe we need to focus more on the similarities that influence learning and less on the minor differences that lead to separation.