R3 2.19 December 17, 2024: Year in Review
Trends, themes, and looking ahead to what 2025 might be like
Welcome to this year’s final issue of the R3 Newsletter! I probably won’t have a new post go out until about mid-January - apologies for the hiatus, but it’s all in service of another big writing project with a deadline that’s coming up soon. (I’ll be sharing more news about that one in 2025.)
Looking back over the last year, as we do around this time, one big highlight was the publication of my short book on how to remember students’ names. That book was genuinely fun to write, partly because it took me right down memory lane - back to psycholinguistics and the language-related research that I used to do at the very beginning of my academic career. I also liked being able to take on a really practical problem, one where taking a few specific steps can lead to immediate and noticeable payoffs. Lastly, as I've launched the book into the world, it's occurred to me that successfully tackling this one issue via an evidence-based, growth-oriented approach might lead some faculty to try something similar in other areas of teaching.
This year also saw the launch of one of the first books that I've worked on as an editor, Liz Norell’s brand-new work The Present Professor: Authenticity and Transformational Teaching. It is a beautifully written guide to classroom presence and techniques for achieving that presence through developing our own self-understanding, self-compassion, and other aspects of intrapersonal wisdom. Norell’s book is packed with encouragement, self-assessments, and suggestions for practice. I think it would be an inspiring work to share with faculty at the outset of what is shaping up to be a challenging new year. It’s available here: https://www.oupress.com/9780806194691/the-present-professor/, here: https://www.amazon.com/Present-Professor-Authenticity-Transformational-Teaching/dp/0806194685, and wherever else books are sold.
In looking back over all the R3 posts from this last year, a few recurring themes emerge that get me thinking about what is in store for 2025. Research in what I would call the core topics of learning sciences - things like retrieval practice, study schedules, and the best ways to set up and use study aids - is still going strong, and I think these will all continue to be relevant topics, especially in conjunction with questions about how to select and use technologies for teaching.
Of course, the rapid spread of generative AI tools is still top of mind for those of us working in higher education in any capacity, and doubly so for those of us interested in educational technology. I covered this topic less this year than I did in 2023’s R3 Newsletter, but that doesn’t mean it’s dwindling in importance. I’d chalk it up to this being an awkward moment in the development of scholarship on AI in education, for the simple reason that the research base hasn’t had time to take root and grow. When I find myself impatient for there to be more formal academic scholarship on AI in teaching, I'm reminded of just how lengthy a process it is to design, conduct, analyze, and write up traditional social sciences-style research. That doesn't even get into peer review, publication lag, and all the other time-consuming hurdles that lie between making an important discovery and getting it to the people who can put it to use. This isn’t a criticism, exactly; that ultra-deliberative process, with all of its different checkpoints, stress-testing of ideas, and stringent quality criteria, is what gives formal scholarship its enduring value.
Furthermore, while we consumers of the research are waiting for more substantive scholarship to come along, we’re simultaneously drowning in a torrent of general information and advice: what AI tools are and how they work, ideas for new assignments and activities, policy templates, and so on. New articles are of course coming out every single day, but many of them are more along the lines of editorializing, making predictions (oftentimes extremely general ones), or simply compiling highlights from that same torrent of resources. Here again, this isn’t a criticism per se. I just haven't been able to pin down enough of the sort of material that will take us from theory to practice, providing do-this-not-that guidance that is grounded in some type of hard evidence. In fairness, I can also attest to the challenge of writing about AI at this point in time, having recently submitted the final draft of a chapter for an AI-focused book coming out in January. It feels surreal to consider the developments that have happened in the year since I first wrote the rough draft, and of course I worry about whether the work will still be relevant.* For pretty much any other academic topic, one year would be no big deal, but not so for AI - another challenge for scholars looking to make their contributions through books.
One exception worth mentioning is Teaching with AI: A Practical Guide to a New Era of Human Learning, by José Antonio Bowen and C. Edward Watson. This book references an admirable amount of scholarship, considering the early state of the game. The best part, though, is the hefty amount of detailed instruction on how exactly to carry out various applications of AI in higher education, coupled with a thoughtful discussion of academic dishonesty and all the other problems with current AI tools that by now we're all well acquainted with. This work is a promising start, but I keep coming back to the overall conclusion that this is an acutely challenging time for those of us who write, do research, and disseminate research findings in this space.
This brings us to what I see as the second overarching theme in all the research I've reviewed this year in this newsletter: student motivation and metacognition. Purists will rightfully object here, I know, but I cannot help but see these as related issues, even though technically they are two separate things. Both are mechanisms that make it possible for students to be in control of and engaged in their own learning, and that control and engagement have an overriding influence on their success. Going back over 2024’s R3 posts, I am struck by just how many of them do touch on engagement and student autonomy in one form or another: promoting growth mindset, persuading students to improve their study strategies, student autonomy (here and here), emphasizing transferable skills to engage lower-performing students, and a few more at the intersection of learning, memory, reasoning, and motivation.
Granted, there's a bit of a feedback loop involved in the emergence of those running themes, given that my current sense of the most interesting and important issues shapes what I choose to read and review in the first place. But it's undeniable that some really compelling work in this area has come out in the last couple of years. I can also share that in the conversations I’ve had with dozens of leaders and faculty on campus visits over the past year, what I've heard referenced time and again are concerns about getting students to fully engage in what they’re learning, to take charge of their own academic goals, and to adopt better ways to study and stick with those plans over time.
I would also argue that motivation/metacognition itself ties back to the question of how to best deal with AI in education. Although no one would say that all AI woes would magically disappear if only our students were totally invested in their own educations, I think we instructors will never be able to coexist peacefully and productively with the new AI tools if our students aren’t on board with our academic agendas. Now more than ever, students have to want to acquire the skills and knowledge that we have to offer, and they have to be willing to put in the hard and sustained effort to make that happen. As I heard the brilliant educational technologist Dave Cormier say in a talk about AI, “If students are going to do work while we are not watching, they will have to actually care about it.” And, I’d add, getting them to care is a meeting-in-the-middle exercise that requires effort and perspective-taking on all sides.
AI has sucked up so much oxygen that it's hard to forecast what other technologies are going to be emerging or changing in the near future, especially those that aren't explicitly AI-focused. I think there's still plenty of space for technologies that allow instructors to take specific skills, content, or learning principles they want to target and reinforce just those things in the context of the overall course design. We’ve seen this happen with retrieval practice, where a wealth of options have sprung up to support quizzing and self-quizzing. Now, I think the time is ripe for simple but highly specialized technologies that make it easy to put other principles into practice, or that target discipline-specific content or skills that students tend to struggle with the most. I’m working on a few ideas of my own, but I hope to see more tools being developed that fit that description.
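To make that idea a bit more concrete, here is a minimal sketch (in Python, with every name hypothetical) of the kind of logic a simple, highly specialized self-quizzing tool could be built around: a Leitner-style scheduler that spaces retrieval practice out over growing intervals. It illustrates the learning principle only; it isn’t a description of any existing product.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Review intervals (in days) for each Leitner box: a correct answer moves
# a card to the next box (a longer wait); a miss sends it back to box 0.
BOX_INTERVALS = [1, 3, 7, 14, 30]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0
    due: date = field(default_factory=date.today)

def grade(card: Card, correct: bool) -> None:
    """Reschedule a card after one retrieval attempt."""
    card.box = min(card.box + 1, len(BOX_INTERVALS) - 1) if correct else 0
    card.due = date.today() + timedelta(days=BOX_INTERVALS[card.box])

def due_today(deck: list[Card]) -> list[Card]:
    """Quiz only the cards whose spaced-review date has arrived."""
    return [c for c in deck if c.due <= date.today()]

# Hypothetical usage: pull today's due cards, quiz, then reschedule.
deck = [Card("What memory process does self-quizzing exercise?",
             "retrieval practice")]
for card in due_today(deck):
    print(card.prompt)          # learner attempts recall before checking
    grade(card, correct=True)   # successful recall earns a longer interval
```

What strikes me about a sketch like this is how little machinery the underlying principle actually requires; the real work, for any tool in this space, lies in the discipline-specific content and in fitting the practice into the overall course design.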
All of this leads into the last set of trends that I'm contemplating. Much boils down to this: Change really is coming. To be clear, I don't say that lightly, or often. I’ve built my approach to educational technology on taking the long view, sidestepping the developments and supposed disruptions that come and go every year. And having done this kind of work for 15+ years now, I’ve heard way too many variations on the idea that students/technology/the economy/the political climate/demographics are changing so much and so fast that higher ed will finally have to change along with it. It’s become familiar background noise, the elevator music of college teaching. But that background hum really does seem to be building into an impossible-to-ignore racket, and it’s my dearly held hope that those of us doing the hard work in the middle of it all will be in charge of designing the changes to come.
It’s always hard to separate predictions from hopes and fears. What we wish for, and what we are most worried about, both inevitably color how we interpret the limited information that we have about what the future will look like. I know I’m hardly immune to such biases, so when envisioning the year to come, I’ve gotten in the habit of asking myself: is this something that I think will definitely happen, could possibly happen, or is happening right now? I also apply my time test when perusing the long lists of what in-the-know leaders say is coming down the pike for higher ed - are these leaders describing things they know to be true right now, things they are predicting based on what they know, or are they merely relating hopes and fears of the moment? For my part, all I will predict (for now) is this: AI will continue to spur change, yes, but only as one part of a larger realignment, one centered around what students come to college to get, what we faculty want them to get, and the teaching practices that help us connect one to the other.
*That said, I’m still looking forward to seeing all of the chapters in their completed form, and hope that mine offers some useful wisdom on what to do and not to do in AI-focused faculty professional development offerings. I’ll let you know when it’s out, and I look forward to hearing what you think of it! Here’s the full citation for it:
Miller, M. D. (2025). Generative AI as a challenge to faculty development: Ugly advice at the dawn of generative AI. In K. Pulk & R. Koris (Eds.), Generative AI in higher education: The good, the bad, and the ugly. Edward Elgar.