By far, my favorite thing about our BbWorld conference is having face-to-face conversations with customers.  It’s the thing I most look forward to, and after our conference in New Orleans a couple of weeks ago, I came away noticing a significant but subtle trend.  It had to do with the ever-thorny world of ethics and analytics.

First, here are some of my foundational views on analytics for context:

  • I believe that there are biases in analytics, whether one intends them to be there or not
  • I believe that openness, collaboration, and sharing are the way to go with higher ed analytics
  • I have always been against hyperbole and black boxes
  • Ethics conversations around analytics are vital, but they need to be balanced with progress

Given these points, here’s what I saw in some discussions I had with analytics practitioners.  There seemed to be a shift away from “we’re talking about analytics but grappling with the ethical questions” to “we understand there are significant ethical considerations, but we are motivated to implement a solution.”  Let’s see if I can codify this shift graphically:

[Figure: a 2×2 diagram plotting a school’s culture and tolerance for experimentation against the mission-critical nature of the project.]

If you are looking to implement something like a learner analytics project, you need to look at two things.  On the vertical, what’s the culture and tolerance for experimentation at your school?  Do you pilot tools left and right, or does it take an “act of IRB” to get anything done?  On the horizontal, we examine the mission critical nature of the project.  Using analytics for campus parking is one thing.  Using analytics to determine who gets accepted to your school is something altogether different.

Combining these two factors returns a somewhat obvious decision matrix.  If it’s not mission critical, then go ahead and experiment.  If it is, you should be a bit more deliberate.  So how does this matrix fit into what I was hearing from customers at BbWorld?  I’ll show you:
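The decision matrix above can be sketched in a few lines of code. This is purely an illustration I’ve added (the function name and return labels are my own invention, not anything from the post): it encodes the rule that non-mission-critical projects are fair game for experimentation, while mission-critical ones call for more deliberation, tempered by the school’s culture.

```python
def recommend(tolerance_high: bool, mission_critical: bool) -> str:
    """Toy sketch of the 2x2 decision matrix.

    tolerance_high:   does the school pilot tools left and right?
    mission_critical: does the project affect high-stakes outcomes
                      (e.g., admissions) rather than, say, parking?
    """
    if not mission_critical:
        # Low stakes: experimenting is cheap, even at cautious schools.
        return "experiment"
    if tolerance_high:
        # High stakes at an experimental school: pilot, but deliberately.
        return "pilot deliberately"
    # High stakes at a cautious school: proceed slowly and deliberately.
    return "be deliberate"

print(recommend(tolerance_high=True, mission_critical=False))
```

The point of the sketch is just that the horizontal axis (mission-criticality) dominates the decision; culture only modulates *how* carefully you proceed once the stakes are high.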

[Figure: the same 2×2 diagram of culture and tolerance for experimentation versus the mission-critical nature of the project.]

My conversations with customers who were closely involved with analytics indicated a shift from a more conservative approach to a more experimental one.  It’s not that schools don’t care about the ethics or the outcomes; it’s that they think the positives outweigh the negatives.  One school was very overt with faculty.  The administration said, “We are not using analytics to determine who we are going to drop from a class.  We want to use it to help you do a better job at talking to and retaining all students.”

To me, this is a big shift.  Previous conversations around ethics of analytics would hit points like:

  • The ethics of knowing (if I know a student is at risk, am I obligated to act?)
  • The innate biases of models
  • The ethics of experimenting with students (e.g., piloting software on a subgroup)

What I’m trying to visualize in this 2×2 diagram is that institutions are starting to focus more on the other side of the equation – the benefits.  The intervention can be classified as non-mission critical.  If we think a student is at risk, we won’t automatically drop them from a class or disenroll them from the institution.  We’ll just give them a call.  If it turns out our model was wrong and the student isn’t at risk, then no harm done.  As this lens changes, the potential benefits (helping students retain, improve, and engage) become more salient, and schools are more apt to try an analytics implementation.

We have previously made the case that analytics in higher ed has fallen into the trough of disillusionment, and that this is a good thing: it will move us past the hyperbole and into implementations.  Maybe the ethics discussion is moving past the theoretical and toward the practical as well?
