Many years ago, a good friend of mine (someone I consider a master teacher) pulled me aside. He was always on the lookout for ways to better engage his students. He had tried using online discussion boards, but every time he used them, he felt like a failure. He had heard that discussion boards were a valuable tool, but he just couldn’t seem to use them effectively.
“Are your students engaged in the classroom?” I asked.
“Are they performing well on assignments and on tests?”
“Then don’t use discussion boards,” I said.
In higher education, educational technologists, faculty developers, and instructional designers often equate “more LMS use” with “better LMS use,” as if adopting more tools were a sign of teaching maturity. As someone who has spent years supporting faculty technology adoption, I know the consequences of this assumption first-hand: for fear of being judged, many faculty seek help with the LMS only reluctantly, if they do so at all.
Until recently, we haven’t had much more than anecdotal evidence and common sense to support our assumptions about the relationship between teaching maturity and LMS adoption. In late 2016, however, researchers at Blackboard conducted the first large-scale empirical study of how instructors actually use the LMS. Looking at an anonymized dataset representing 70,000 courses from 927 institutions, with 3,374,462 unique learners using Blackboard Learn during Spring 2016 in North America, a team led by Dr. John Whitmer found that faculty tend to use the LMS in one of five ways (which they called archetypes). These archetypes range from using the LMS as little more than a content repository (‘Supplemental’) to using all of its bells and whistles (‘Holistic’).
Table 1: Course archetype frequency distribution
The team found that the average amount of time students spent in the LMS increased with each subsequent archetype. Perhaps most striking, however, was that, taken in aggregate, they found no statistically significant difference in student achievement across course archetypes. These findings mean there is not necessarily a single ‘better’ or ‘worse’ way of using the LMS. Instead, there are only better and worse ways of using it in particular contexts, with particular students, by particular instructors, to meet particular teaching and learning goals.
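The aggregate comparison described above boils down to grouping course outcomes by archetype and comparing group means. Here is a minimal sketch of that idea in plain Python; the archetype names come from the study, but the GPA values and sample sizes below are entirely made up for illustration:

```python
from statistics import mean
from collections import defaultdict

# Hypothetical course records: (archetype, mean course GPA).
# Archetype names follow the study; the numbers are invented.
courses = [
    ("Supplemental", 3.1), ("Supplemental", 2.9),
    ("Evaluative", 3.0), ("Evaluative", 3.2),
    ("Holistic", 3.1), ("Holistic", 3.0),
]

# Group outcomes by archetype, then compare group means.
by_archetype = defaultdict(list)
for archetype, gpa in courses:
    by_archetype[archetype].append(gpa)

for archetype, gpas in by_archetype.items():
    print(f"{archetype}: mean GPA {mean(gpas):.2f} (n={len(gpas)})")
```

In a real analysis you would also run a significance test (the study reports no statistically significant difference between groups), but the grouping step is the core of the comparison.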
When the findings were released, I thought of my friend, and the many conversations I have had with faculty over the years. I thought about the power this information has to reshape interactions between faculty and academic technologists. I thought about the opportunity to stop talking about ‘using the LMS more,’ and to start talking about ‘using the LMS well.’
Since the release of this research, we have built course archetypes into Analytics for Learn (A4L). With course archetype classification now included as a core feature of A4L, I thought I’d take a look at the distribution as it appeared in an anonymized test data set used by permission from one of our partners. The results were very interesting.
Just as our researchers found, when we take this data set as a whole, we see a similar distribution of course archetypes but no significant difference in outcomes between them.
FIGURE 1: Mean GPA by Course Archetype (all departments). Screenshot from Analytics for Learn
When we look at specific departments, however, things get much more interesting. In the Department of History, on the one hand, students seem to perform best in courses that follow a more evaluative instructional design pattern, making effective use of tests, quizzes, and assignments. Courses adopting a more holistic pattern in this context see the worst outcomes, by a large margin.
FIGURE 2: Mean GPA by Course Archetype in the History Department. Screenshot from Analytics for Learn
In the Business Department, on the other hand, the holistic approach sees significantly better results than any of the other archetypes.
FIGURE 3: Mean GPA by Course Archetype in the Business Department. Screenshot from Analytics for Learn
If I were a chair or dean at this school, my next set of questions might include:
- Do these patterns hold at every course level, or do different strategies work better for freshmen than for seniors?
- What are the most effective patterns in gateway courses?
- Are there certain populations of students for whom particular patterns work better compared to others?
With Analytics for Learn, it becomes possible to ask and answer these questions, and many more, simply by filtering, slicing, and changing variables.
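Mechanically, the kind of slicing a chair or dean would do amounts to grouping outcomes by one more dimension at a time. A4L does this interactively; the sketch below shows the underlying idea with entirely hypothetical departments and numbers (only the department and archetype names echo the examples above):

```python
from statistics import mean
from collections import defaultdict

# Hypothetical (department, archetype, mean course GPA) records.
records = [
    ("History", "Evaluative", 3.4), ("History", "Holistic", 2.6),
    ("Business", "Evaluative", 2.9), ("Business", "Holistic", 3.5),
]

# Slice by department, then by archetype within each department.
sliced = defaultdict(lambda: defaultdict(list))
for dept, archetype, gpa in records:
    sliced[dept][archetype].append(gpa)

for dept, groups in sliced.items():
    for archetype, gpas in groups.items():
        print(f"{dept} / {archetype}: mean GPA {mean(gpas):.2f}")
```

Adding further dimensions such as course level or student population is just a matter of extending the grouping key, which is why the questions above are cheap to ask once the data is in place.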
It is important to note that these results come from a single school, and so are not generalizable. But they highlight the potential impact that examining course archetypes by department can have on how we understand and optimize for student success. The more we learn about teaching and learning, the more we learn about how complex it really is, so it is important neither to overgeneralize nor to reduce these relationships to something simplistic. The power of Analytics for Learn comes from the fact that it helps institutions identify where they should be looking for deeper relationships between complex factors, and where they should not. Where interesting relationships are observed, it empowers institutions to investigate further and develop the kind of nuanced understanding necessary to make a significant impact on teaching, learning, and student success.
What would YOU do if you knew that your choice of instructional design pattern could mean the difference between passing and failing for some students?
View the text-only version of the above graphs here.