This is a joint blog post by Van Davis and John Whitmer.

Artificial Intelligence (AI) is transforming the ways that people interact with the world and with one another. From predicting traffic patterns to recommending product purchases to identifying good “matches” between individuals, people increasingly rely on algorithms to help them make decisions in their daily lives. As higher education institutions incorporate more technologies into administrative and teaching/learning processes, they are collecting tremendous amounts of data about students and faculty, including behavioral data about physical and cognitive interactions as well as learning outcomes. As a result, institutions are beginning to explore the use of AI with this data to understand, model, and predict student persistence, retention, and career success.

However, the opportunities afforded by AI are counterbalanced by significant risks to some of higher education’s core values, such as access, equity, and self-determination. For example, many AI techniques are opaque to human review and inspection, not for nefarious reasons but simply because of the nature of the calculations used to create them. Even if the algorithms were made “transparent,” they could not be understood by anyone except a statistician (and maybe not even then!). Further, the historical data used for modeling and learning may encode societal biases, inadvertently reproducing the very inequities these approaches are meant to overturn. To further complicate things, our existing legal and policy frameworks are ill-equipped to address the complexities raised by the emergence of AI applications in higher education.

At Blackboard, we believe that AI and the transparent use of educational data hold enormous promise to help students succeed. Whether it is providing instructors and advisors with insights that help them identify and assist struggling students, powering adaptive content that serves as a “smart” tutor offering students on-demand assistance, or helping administrators better understand enrollment trends, AI, and the data and analytics that drive it, can help students and institutions succeed.

But we are also mindful that AI is still relatively young in its implementation. We know that the results of analytics are only as good as the data and the programming fed into the algorithms. We know that just as institutional decision making should be transparent, the means by which AI makes decisions should also be transparent; where that is not feasible, new processes need to be developed to provide a similar level of human oversight. This is neither a trivial nor an easy issue, and it is one to which we must dedicate significant effort. We know that the development of and adherence to strong privacy guidelines is critical, and, above all else, that we as educators must have a vigorous conversation about the appropriate use of AI and the interventions associated with it.

As such, Blackboard is proud to announce that we are launching a global project to develop an accepted set of ethical standards for the design and application of AI in education. To ensure that learning is optimized and that learners are protected from unnecessary harm, Blackboard will convene a group of global leaders to identify educational opportunities provided by AI, explore related ethical issues, and make recommendations for the ethical and legal application of these new approaches.

As a critical first step, we will convene a group of global higher education institutions, law firms, and non-profit organizations that are actively engaged in using and developing analytics and AI applications associated with student learning and success. Based on the conversations at the first convening, participants will create an initial framework for a more productive approach to AI and ethics, which will be disseminated to the larger education community. In addition, the project’s partners will collaborate on a white paper on the ethics of analytics and AI in higher education, which will include proposed standards and best practices.

Confirmed participants in the project include:

  • Andrew Cormack, Chief Regulatory Adviser, Jisc
  • Dennis Bonilla, Executive Dean, University of Phoenix
  • Elana Zeide, Visiting Assistant Professor at Seton Hall University School of Law, Affiliate at Princeton University’s Center for Information Technology, and Visiting Fellow at Yale School of Law’s Information Society Project
  • Iris Palmer, Senior Policy Analyst, Education Policy Program, New America
  • John Fritz, Associate Vice President, Instructional Technology, University of Maryland, Baltimore County
  • Joshua A. Kroll, Postdoctoral Scholar, UC Berkeley School of Information
  • Mark Watts, Partner, Bristows LLP
  • Oriol Pujol, Vice Rector of Digital Transformation, University of Barcelona
  • Phillip Long, CINO Project 2021 and Associate Vice Provost, University of Texas at Austin
  • Roger Ford, Associate Professor of Law, University of New Hampshire School of Law
  • Wendy Colby, Unit Vice President, Global Strategic Product Management & Innovation, Laureate Education

At Blackboard, our mission is to partner with the global education community to enable learner and institutional success through innovative technologies and services. We believe that by helping institutions better understand the ethical, legal, pedagogical, and technical complexities surrounding AI, we can all improve student success. It’s just one piece of how we work to be education’s partner in change.

For more information on this initiative, contact Dr. Van Davis at van.davis@blackboard.com.
