The Athabasca University Doctor of Education in Distance Education
Each year I do a webinar with doctoral students in the Doctor of Education in Distance Education at Athabasca University. Because I am now retired I haven’t been keeping up with all the latest developments in distance education, so I thought I would focus on a topic that is highly relevant to doctoral students but is less subject to the vagaries of contemporary developments: research methods.
So I entitled the webinar: ‘Researching digital learning: relevance, methods and likely outcomes’.
My large language model AI tool
Thanks to my colleague Ron Owston, I now have an AI chatbot (called Tonybots) that is trained on the 2,600 articles on my website (see the banner above that says: Search my site with AI).
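For readers curious about the mechanics: Tonybots’ internals are not public, but a common way to build this kind of site-specific bot is retrieval-augmented generation (RAG), where a general-purpose model is not really ‘trained on’ the articles at all. Instead, the most relevant posts are retrieved and fed to the model each time a question is asked. Here is a minimal sketch of that pattern, assuming the OpenAI API; load_posts() and the model names are illustrative assumptions, not Tonybots’ actual design.

```python
# Hypothetical sketch of a RAG-style site chatbot; Tonybots' real
# internals are not public. Assumes the OpenAI Python client.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Turn a list of texts into embedding vectors for similarity search."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

posts = load_posts()         # hypothetical helper: the ~2,600 post texts
post_vectors = embed(posts)  # the index is built once, offline

def ask(question, k=5):
    """Answer a question using the k most similar posts as context."""
    q_vec = embed([question])[0]
    # cosine similarity between the question and every indexed post
    sims = post_vectors @ q_vec / (
        np.linalg.norm(post_vectors, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(posts[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the blog excerpts provided."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

The point worth holding onto for later: in a design like this, the bot knows at answer time exactly which posts it retrieved.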
In the past I would have spent a good deal of time thinking about the structure of my talk, but out of curiosity I thought I would ask Tonybots to suggest the topics to be discussed, by putting this question to it:
What kinds of research lead to the most significant results in digital learning?
Here’s the response I got:
The most significant results in digital learning research often come from studies that focus on a variety of factors beyond just the use of technology. These include:
- Pedagogical approaches: Research that examines how teaching methods and instructional design influence learning outcomes in digital environments can yield important insights.
- Learning outcomes: Studies that define and measure specific learning outcomes, rather than just comparing digital learning to traditional methods, can provide more nuanced understanding of what works.
- Contextual factors: Research that considers the context in which digital learning occurs, including the subject matter, the characteristics of the learners, and the goals of the instruction, can lead to more meaningful results.
- Technology implementation: Studies that look at the fidelity of technology implementation and how technology is integrated into the learning experience can reveal the conditions under which digital learning is most effective.
- Comparative studies: Well-conducted comparative studies that look at digital learning versus traditional learning can be useful, but they need to control for variables such as teacher effectiveness and instructional goals.
- Longitudinal studies: Research that follows learners over time can provide insights into the long-term effects of digital learning on knowledge retention and skill development.
- Large-scale and mixed-method studies: Quantitative studies with large samples can identify trends and patterns, while qualitative research can explore the nuances and provide a deeper understanding of the digital learning experience.
In summary, research that takes a holistic approach, considering multiple factors and using a variety of research methods, tends to lead to the most significant and actionable results in the field of digital learning.
My reaction
Initially: wow! These are more or less the topics I would have covered anyway. Well done, Tonybots.
But then I thought, ‘Why do I need to give this webinar when the students could more easily have just asked Tonybots?’ Then it hit me: Tonybots provided a good summary but did not really get to the issues around each of these approaches to research. In particular, it did not deal with my detailed personal views on each of these methods, even though it was drawing on all the posts on my website.
So in the actual webinar we dealt with some of these issues, such as:
- the general failure of comparative research in this field to find statistically significant differences, because the differences within a condition (for instance, face-to-face teaching) are usually larger than the differences between the conditions. For example, the quality of the instructors often matters more than the mode of delivery. However, this does not mean there are no differences: if you could ‘match’ the quality of the instructors, you might find a significant difference, all other factors being equal (which they usually are not). The toy simulation below illustrates the problem.
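To make the point concrete, here is a small simulation (the numbers are invented purely for illustration): each mode of delivery is given a small true effect, but instructor quality varies widely within each mode, and a standard t-test then usually fails to detect the between-mode difference.

```python
# Toy illustration of the 'no significant difference' problem: the
# within-condition spread (instructor quality) swamps the small
# between-condition effect (mode of delivery). Invented numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
quality = rng.normal(0, 10, size=(2, 30))  # 30 instructors per mode, sd = 10
mode_effect = np.array([[0.0], [2.0]])     # small true advantage for mode 2
scores = quality + mode_effect             # observed outcomes

t, p = stats.ttest_ind(scores[0], scores[1])
print(f"p = {p:.2f}")  # typically well above 0.05: the real gap is drowned out
```

Matching instructor quality across conditions (for example, having the same instructors teach in both modes and comparing paired results) strips out the within-condition variance and makes the mode effect detectable; that is exactly the ‘matching’ point above.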
Tonybots did not get anywhere close to identifying such issues in the depth or detail required. [If you are interested in the ‘nuances’ of different research methods, the webinar is available here: https://www.youtube.com/watch?v=MvKSoOX-DeA]
Dubious referencing
I also asked Tonybots to identify the sources on which it drew: in other words, the blog posts on my site from which it built its response. This is what I got:
‘Making Digital Learning Work’: why faculty and program directors must change their approach | Tony Bates
Bailey, A. et al. (2018) Making Digital Learning Work. Boston, MA: The Boston Consulting Group/Arizona State University
Now, I also have a general search engine on the site. I typed in ‘research methods’ and got 22 hits, none of which were the two references Tonybots gave me. And when I went to ‘Making Digital Learning Work’, neither that post nor the Bailey report was about research methods.
This is a general problem with large language model AI: it is not at all clear how it gets its information, and when it does give references, they are usually not relevant to the issue. It is almost as if the designers deliberately want to hide the sources on which the models are drawing (possibly, I suspect, to avoid copyright issues).
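It does not have to be this way. In a retrieval-based design like the sketch earlier in this post, the pipeline knows exactly which posts it retrieved for a given answer, so it could report that retrieval log directly instead of generating citations as free text. A hedged extension of that sketch (post_urls is an assumed list of URLs parallel to posts):

```python
def ask_with_sources(question, k=5):
    """Like ask() above, but also return the posts actually retrieved."""
    q_vec = embed([question])[0]
    sims = post_vectors @ q_vec / (
        np.linalg.norm(post_vectors, axis=1) * np.linalg.norm(q_vec))
    top = np.argsort(sims)[-k:]  # indices of the posts actually used
    answer = ask(question, k)    # same answer as before
    # the retrieval log, not model-generated citations
    return answer, [post_urls[i] for i in top]
```

That the citations Tonybots gave me were wrong suggests, plausibly, that they were generated as text by the model rather than read off a retrieval log like this one.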
In conclusion
I draw several lessons from this:
- the bot was great for providing me quickly with a structure for the webinar – the main topics to be covered – and was accurate as far as it went;
- it did not deal with the main issues or challenges around each research method or approach. Further questioning of the bot might have resulted in more nuance, but it was easier for me to deal with these in person: in other words, the bot’s response was broad but not deep;
- large language models such as ChatGPT and Tonybots are parasites: they depend on initial human input. In this sense they are backward-looking, drawing on what has previously been recorded, usually by humans. They do not, though, enable you to look forward. Innovation and creative thinking are still very much human activities, at least at the moment;
- educators need to keep experimenting with AI, but that requires a good understanding of how the models work, and unfortunately that is not transparent except at a very general level. In education the devil is in the details, and those details are what AI currently tends to miss, although I am sure it will get better;
- for AI to become more accepted in education, it needs to be much more accurate and transparent in revealing its sources; the current opacity undermines its credibility and validity for educational use.
I am wondering whether any of my readers have had similar or different experiences in using AI in teaching. If so, please share them in the comment box at the end of this post.