R3 1.11 June 8, 2023 Artificial Intelligence, Cognitive Offloading, and the Future of Learning
Using the offloading concept as a guide to how we might begin dealing with ChatGPT.
Like most people in higher education today, I’m somewhere between interested in and obsessed with the latest developments in generative artificial intelligence. With that in mind, AI is a topic I’ll be returning to throughout my summer reading, and here in this newsletter as often as I can. As someone who writes and does research about educational technology, I’ve already started putting out thoughts on the implications of tools like ChatGPT for education (for example, in this article and in this podcast). But my goal now is to move beyond these first impressions toward a deeper understanding, and that’s what I hope to achieve over the next few months.
Not surprisingly, there isn’t much yet out there as far as formal scholarly publications on ChatGPT and similar innovations, at least in the areas of psychology and education where I tend to do most of my reading. But there is enough on AI in general to begin laying a foundation, and that is where this issue’s spotlight article fits in. It’s by a scholar whose work I admire, one who has written extensively on cognitive offloading (a phenomenon that figures prominently in my latest book, Remembering and Forgetting in the Age of Technology). This particular piece actually predates the debut of ChatGPT in November 2022, but even so, I feel that it forms a nice bridge between previous work in cognition and technology and the current explosion of interest in AI.
Citation:
Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5(July), 1–6.
DOI:
https://doi.org/10.3389/frai.2022.908261
Paywall or Open:
Open
Summary:
This essay reviews the cognitive offloading concept, breaking existing research down along the two main questions researchers have focused on so far. These include 1) what factors determine the likelihood of offloading, and 2) the consequences of offloading for performance on different types of cognitively demanding tasks. Offloading involves externalizing cognitive processes through technology; for example, using turn-by-turn GPS replaces mental navigation, smartphone-based calendars replace remembering appointments, and digital shopping lists replace memorized ones. Offloading in turn fits under the more general concept of “distributed cognition,” whereby thought processes tap into external sources of information and computation as well as internal cognitive resources. The article reviews positive and negative impacts of offloading along with other research findings, leading up to a discussion of avenues for future research and unanswered questions about AI. There are also preliminary recommendations for practice.
Key Take-Home Points, Predictions, and Advice:
Offloading tends to improve immediate performance (e.g., a shopping list will be followed more accurately when you can rely on your phone), but over time, offloading can hinder performance. This does not happen because of a global decrease in cognitive capability, but rather because of a combination of factors such as shallower processing of to-be-remembered information, slower acquisition of new skills, less-accurate metacognition, and overconfidence. On the flip side, it may also be possible to combine human and digital cognition in ways that capture the best of both worlds, in line with the “enhancement” or Transhumanism views of technology.
Based on what we know about offloading so far, we would predict that people will be most likely to offload mental tasks onto AI when they believe that the technology is more capable than they are at a given task, when AI is easy to use and highly accessible, and when they have a high degree of trust in the technology. This last point is likely to become particularly important as AI begins to enable offloading of high-stakes tasks such as driving. Furthermore, personality factors may determine who opts to rely on AI. Research has uncovered interesting connections between the personality traits of openness to experience and neuroticism and the willingness to turn a task over to technology. This might be because these particular traits correlate with how trusting a person is in general.
Choice Quote from the Article:
Especially regarding education, it must be discussed whether there should be a “ban” of cognitive offloading due to potential detrimental effects thereof or whether students need to learn how to properly use technical tools without causing harm for their cognition (cf. Bearman and Luckin, 2020; Dawson, 2020). In line with these authors, we advocate to teach students how to use technical devices so that they satisfy their needs but to not (unintentionally) harm cognition. For instance, students need to learn how to differentiate between their own knowledge and externally stored knowledge, so that the effect of inflated knowledge is avoided. Furthermore, students should be made aware of their offloading behavior and that they won’t be able to access their technical tools in critical situations such as during exams.
Why it Matters:
Cognitive offloading is the evidence-based, grounded-in-reality counterpart to ominous cultural narratives about how technology “rewires” our brains. It provides a powerful framework for understanding how and why we actually do share thinking with our digital devices. In this way, it might be the best platform currently available for developing a new scholarly understanding of the psychology of generative AI. This particular article offers a wonderfully well-researched and still concise review of the offloading concept.
The article also reveals some surprising parallels between how individuals view their fellow humans and how they view technology. Trust is key in both cases: people who are more inclined to trust another person are also more inclined to trust technological aids such as automated driving systems. I predict that we will see the trust factor and related personality traits play out in some increasingly noticeable ways, with some individuals soundly rejecting technologies like ChatGPT and others gravitating towards them. Perhaps this could also be the basis for investigating any generational or other demographic divisions that might emerge along the way, for example if it turns out that faculty are far less enthused about engaging with ChatGPT than their students are.
This brings us back to the connections to education. There are not many concrete suggestions at the level of pedagogical practice (“do this, not that”) in the article. However, I think it makes a good case for developing student metacognition by wrapping AI into a larger set of lessons about offloading, with the goal of getting students to make deliberate and informed choices about what technology they use, when, and why. I also appreciated one big concluding point from the article: that psychological science has a lot to contribute to the wider cultural discussion about AI but so far hasn’t had much of a voice. We’ve seen this dynamic before in discourse about technology and education, and so this article should offer another push to researchers to keep generating solid research and ensuring it gets out into the public sphere.
Most Relevant For:
Instructional designers preparing to handle faculty questions about generative AI; cognitive psychologists; educational psychologists; leaders responsible for policy and practice involving educational technology
Limitations, Caveats, and Nagging Questions:
This article doesn’t – and indeed, maybe no article currently could – provide a comprehensive set of predictions about how offloading will play out with generative AI. Just as social media has a different and greatly amplified set of impacts compared to earlier forms of computer-mediated interactions, generative AI is likely to touch off a different set of concerns. I’ve argued that we’re still probably seeing some degree of hype and even moral panic associated with ChatGPT in particular, and yet, I do think there may turn out to be some completely new ways in which human cognition wraps itself around this new set of tools.
Of course, I still have questions about what the impacts will be, and reading this article helped me articulate those much more clearly. What sorts of cognitive processes will most people choose to offload to AI? Will we still see the usual pattern of highly localized, situation-specific decrements to performance coupled with no lasting, global decrements to overall cognitive capabilities? Or will more global impacts emerge? What should faculty share with students as far as evidence-based, sensible advice about ChatGPT? How should they shape their AI-related course policies, based on the research as well as practical considerations?
If you liked this article, you might also appreciate:
Cecutti, L., Chemero, A., & Lee, S. W. S. (2021). Technology may change cognition without necessarily harming it. Nature Human Behaviour. https://doi.org/10.1038/s41562-021-01162-0
Finley, J. R., Naaz, F., & Goh, F. W. (2018). Memory and technology: How we use information in the brain and in the world. Springer.
Grinschgl, S., Papenmeier, F., & Meyerhoff, H. S. (2021). Consequences of cognitive offloading: Boosting performance but diminishing memory. Quarterly Journal of Experimental Psychology, 74(9), 1477–1496. https://doi.org/10.1177/17470218211008060
Miller, M. D. (2022). Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World. West Virginia University Press.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 476–478.
Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learning Environments. https://doi.org/10.1080/10494820.2023.2209881
File under:
Technology; ChatGPT; generative AI; cognitive offloading; memory