R3 1.18 September 28, 2023 Generative AI isn’t just like calculators in class – so now what?
This issue of R3 is another focused on AI in education. It’s got a title that will ring a bell with anyone who’s been following various metaphors and analogies used to try to get a handle on what AI means for teaching: It’s Not Like a Calculator, So What Is the Relationship Between Learners and Generative Artificial Intelligence? This one was kindly passed along to me by a LinkedIn connection, who noted the common ground between this article’s take on how we think about AI and what I put in my last book about technology and memory. I do find endless fascination in the interplay between minds and machines, and as we all sort out what AI is going to do for learning, I think it makes sense to balance bigger philosophical questions with practical advice for teachers.
I also want to mention a few other resources I’ve been reading and referencing over the last few weeks. This in-depth article on teaching quality in higher ed, by Beth McMurtrie over at the Chronicle, takes on this familiar topic from an interesting angle: the disparity between the high value placed on college teaching by the general public, and the devaluation of it within colleges themselves. I’m quoted in it, along with many more expert folks in the college pedagogy space. I’ve also been drawing a lot of inspiration from Peter Felten and Leo Lambert’s 2021 book Relationship-Rich Education: How Human Connections Drive Success in College, especially in how they talk about putting the concept of “relentless welcome” into place to drive success and address inequities. Lastly, I’ve been mulling over how to apply some of the good ideas about supporting student mental health through teaching from Improving Learning and Mental Health in the College Classroom (Eaton, Hunsaker, & Moon), which came out earlier this year. Full disclosure that I am the lead editor on that one, but it’s a topic that is rightfully on many people’s minds these days and so I hope it does get added to some reading lists.
Citation:
Lodge, J. M., Yang, S., Furze, L., & Dawson, P. (2023). It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? Learning: Research and Practice, 00(00), 1–8.
DOI:
https://doi.org/10.1080/23735082.2023.2261106
Paywall or Open:
Paywall
Summary:
This essay argues for a conceptually sophisticated approach to AI, informed by theories of technology and cognition. Much is still unclear about exactly how generative AI will change learning, or cognition generally, but whatever these impacts turn out to be, they are sure to be more complex than the calculator analogy would suggest. The authors conclude by emphasizing the unique and far more open set of possibilities for educational AI, including its capability to build novel tools for learning.
Key Concepts:
One major take-home is that generative AI can and probably should be considered in light of other theoretical work on how people use technology to think and to learn. Concepts like offloading (delegating certain cognitive functions to technology) and technology as an extender of cognition or force multiplier can spark insights and suggest new lines of research.
The authors float the possibility that AI could even function as a social learning partner, which is important because then it could accelerate learning in some of the ways that traditional collaborative learning does. They also observe that unlike other familiar educational technologies, such as adaptive courseware, AI requires more direction and self-regulation on the part of the learner.
Choice Quote from the Article:
“At the core of the idea of generative AI as an extension of the mind is that it is somewhat like a prosthesis, extending and enhancing in ways that go well beyond what either a human or machine could do alone. In this way, we would see this type of use as being focussed on relatively mundane aspects of the learning process, similar to the possibilities provided by offloading, but more integrated into the thinking and learning processes students engage in. Importantly, we see a subtle but critical distinction between offloading and extending here. In the case of offloading, mundane work is being given to the machine. On the other hand, through extension, the machine amplifies what would be possible by a human alone. For example, a calculator can assist with mundane calculations that could be performed by humans (i.e., cognitive offloading), whereas image-creating AI tools (such as Midjourney) can potentially greatly enhance and build on human capabilities for creativity.”
Why it Matters:
I’ve written before about the power of metaphors and analogies to shape how we think about complicated topics (e.g., here, about ungrading, and here, about the need to avoid corporate-speak when talking to faculty). I do think that metaphors matter, especially in the initial stages of constructing conceptual understanding. Thus, it makes sense for anyone in a leadership or development role with respect to AI to reflect on the metaphors they’re using, and perhaps to think twice about the “it’s like calculators were!” opener when speaking to faculty.
Many of us in faculty development are having these talks now. While these discussions can, and maybe should, concentrate on practical strategies, I think it’s also good practice to touch on the more abstract questions of what goes on in the mind when we work with a machine in any capacity. This is where aspects of this article, such as Figure 1, will provide inspiration and perhaps depth to the presentations and meetings we are having now.
Previously in this newsletter I’ve written about cognitive offloading as a potentially useful framework for understanding how people will adapt to using generative AI in their day-to-day lives. I expect there will be more investigations of how this process works specifically in the case of AI. Overall, I was heartened to read something that encourages us to go deeper in pondering the implications of advanced AI for how we learn and think – beyond superficial metaphors, and definitely beyond the idea that these tools matter mostly as a new way for students to cheat.
Most Relevant For:
Scholars interested in the relationship between technology and human psychology; instructional designers and leaders who want to look ahead to where developments in AI may take us; faculty professional development staff involved in creating presentations and workshops on AI.
Limitations, Caveats, and Nagging Questions:
As the article acknowledges, these are all emerging areas where there’s not a whole lot of data yet. And in the effort to move the AI discussion beyond cheating concerns, we shouldn’t forget that academic integrity remains an area of massive practical importance for faculty. This essay is a thought-provoking read that would pair well with more applied, practically oriented articles, but it probably wouldn’t be a good entrée to the topic for novices (and in fairness, that is not at all the stated aim of the piece).
If you liked this article, you might also appreciate:
Grinschgl, S., Papenmeier, F., & Meyerhoff, H. S. (2021). Consequences of cognitive offloading: Boosting performance but diminishing memory. Quarterly Journal of Experimental Psychology, 74(9), 1477–1496. https://doi.org/10.1177/17470218211008060
Miller, M.D. (2022). Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World. West Virginia University Press.
Mollick, E. R., & Mollick, L. (2023). Assigning AI: Seven Approaches for Students, with Prompts. SSRN Electronic Journal, 1–46.
File under: Technology; ChatGPT; generative AI; cognitive offloading