Carey, T., & Trick, D. (2013). How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. Toronto: Higher Education Quality Council of Ontario.
Tom Carey is one of the authors of the above study, and, in an example of reflective practice at its best, he has kindly provided his thoughts about the report now that it is finished. The reflection is in two parts; the second part will follow tomorrow.
Tom Carey:
Tony’s last post presented a summary of and response to a report authored by David Trick and me for the Higher Education Quality Council of Ontario (HEQCO): How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. In parallel, I prepared this guest post with my reflections on what surprised me during the process of researching and writing the Environmental Scan of emerging developments in online learning. Tomorrow’s guest post will reflect on what we did not or could not include in the final version – partly because we still had a lot of sense-making to do by the time we hit our limits for pages and time (looking for help here, folks).
David may want to chime in on the surprises and omissions of the Literature Review, where he so capably took the lead. Tony was also involved as an expert advisor, along with George Siemens and Ed Walker – our thanks again to all of you. We hope that our complementary perspectives will spark further dialogue about these important issues; there have already been comments from Terry Anderson and visual notes by Giulia Forsythe.
Results that Surprised
My colleague Carl Bereiter pointed out to me long ago that the most interesting question to ask about a research project concerns the “results that surprise” – I think this was his opening query at every Ph.D. thesis defense. As someone with a long involvement in online learning, OER and distance education, I offer my own list of surprises:
We didn’t end up writing a research report: I’d better clarify that for our helpful partners at HEQCO, who sponsored the report and supported us in the process. We did write a Research Report, as stated in the contract, but we found that our decisions about content and tone were shaped equally by the mandate for a Research Report, the opportunity for a Teachable Moment, and a growing sense that a Call for Collective Action through collaboration across institutions was needed.
The Teachable Moment came in part from the clamour about MOOCs, where the participation of prominent institutions had caused a sudden jump in attention – and perhaps even credibility – for online learning amongst some academic and political leaders. We tried to seize that moment, however brief it might be, to make the case for online learning as an ally in the challenges faced by higher education across the world: how to educate more students, with deeper and more lasting learning outcomes, in the face of fiscal constraints driven by demographic and economic factors over which we have no control. (More on the Call for Collective Action below…)
It seemed to help if we presented MOOCs as a cumulative development: as we worked on the report, we concluded that one way to make sense of MOOCs was to consider them as the aggregation of several other developments with longer histories and more evidence of success. The report positions MOOCs (the instructionist type, at least) as building on the other developments in online learning that we highlighted: Affordable Online Texts, Adaptive Systems, Learning Analytics and a variety of approaches to optimize student-instructor interactions. Framing MOOCs this way, as leveraging all of these advances at once to drive down the marginal cost of additional students, seemed to reduce the magic of offering courses without charge. We make no claim that this is a complete portrayal of the MOOC phenomenon as of May 2013 – it certainly is not – and we leave it to readers to assess how helpful that particular perspective may be in diminishing the image of MOOCs as a totally new (and alien?) invention.
There are still open questions about ‘significant difference’: as someone who had institutional oversight for online learning, I was already familiar with the body of research demonstrating that online learners could do at least as well as their counterparts in traditional courses. There were some caveats, of course, and David summarized very clearly what the research said about which kinds of students might be best able to take advantage of online learning. While data at the course level was available, we did not find similar evidence of ‘no significant difference’ at the program level…or for deeper learning outcomes that do not lend themselves to large-scale objective measures that can be replicated across different student cohorts.
Of course, a big part of that lack of data derives from the difficulty of measuring such outcomes within a program – let alone across different ways to offer and support programs over time. It happened that, as we were wrestling with these issues, I was also working on program-level outcomes with several professional schools at a university on Canada’s west coast. Many of the outcomes we were trying to define and assess, such as ‘professional formation’ and ‘epistemic fluency’ (a term I learned from Peter Goodyear of the University of Sydney), struck me as requiring interactions quite different from any of the course examples where ‘no significant difference’ results had appeared. Steve Gilbert of the TLT Group shared a label for this: The Learning That Matters Most. I describe below how this evolved into The Learning That Scales Least (at least as far as our current evidence shows).
The concluding Call to Collective Action was about scalability much more than technology: my thanks go out to the members of the Ontario Universities Council on E-Learning for getting on to me about this when I presented the report’s conclusions to them earlier this month. They convinced me to soften the focus on online learning in the concluding recommendations, in favour of a more level playing field around the evidence – at this moment and subject to change – about scaling up teaching and learning environments. If I were writing that part of the report now, my recommendation for the collaborative action required across the colleges and universities in the province would be something like this (but more concise, clearer, etc.):
We need to work together on understanding and leveraging emerging developments to scale up learning experiences ‒ wherever appropriate ‒ so that the resulting gains in productivity will allow us to sustain and advance the student outcomes requiring other kinds of learning experiences (that are not readily scaled).
The “collective” part of this conclusion came out of our difficulty in making sense of the emerging developments that we had highlighted: to paraphrase race car driver Mario Andretti, we concluded that “if you think you know what is happening, you’re not going deep enough”. We didn’t put anything that folksy in the report, but we did conclude that we couldn’t get much further on definitive statements about “where are MOOCs going” and the like. If we couldn’t make sense of emerging directions, perhaps we could “make sense of making sense”: outline a course of action in which institutions could collaborate to share the risks of the necessary investment in systematic experimentation. (There is more on this Call to Action in tomorrow’s guest post on Themes We Couldn’t Include…).
Thanks, Tom!