Before drawing up my outlook for 2013, I want to discuss the important topic of prediction in online learning: in particular, how predictions are made and what value they may have. Nate Silver’s excellent book (references are at the end of this article) looks at prediction in a number of fields: weather forecasting (excellent up to three days, useless after eight days), economic forecasting (hopeless by both media pundits and professional economists), baseball players’ performance (pretty good and improving), earthquakes (bad for major quakes, but promising for lesser quakes), poker, and a number of other areas. He also has some interesting reflections on big data. Unfortunately, though, he doesn’t discuss prediction in online learning, so I’ll try to help out with this!
Factors associated with reliable predictions
Silver’s book is valuable because he sets out some of the factors associated with good prediction (or forecasting):
- Well understood and empirically supported theory about what drives the field under inquiry (excellent in weather forecasting and earthquakes; poor in politics and economics)
- Large, reliable sets of relevant data and the ability to crunch large data sets
- Relatively stable movement within the data (i.e. not too much ‘noise’ or randomness)
- Eliminating, or accounting for as far as possible, the unknowns
- Above all, a probabilistic approach to prediction that takes account of uncertainty.
Factors associated with online learning
The problem for online learning is that few of these factors exist. In terms of theory, we do have some empirically supported theories about what makes for effective online learning (e.g. Linda Harasim’s Learning Theory and Online Technologies) and some standards for best practices. However, these are often ignored or not put into practice in the field, and, more importantly, we lack good, empirically based theories of organizational decision-making in post-secondary education. This makes applying what theory we have to understanding data, and to looking for the signal in the noise, particularly hazardous for online learning.
The situation is even worse with regard to data. Weather forecasting data is detailed, localized, and goes back over 60 years. Online learning itself is barely 20 years old (at least as we now know it), and is continually changing (as is the weather, of course, but at least meteorologists know why the weather changes).
We have very little data on what is actually happening in online learning, with over-reporting in some areas (e.g. MOOCs) and under-reporting in others (e.g. for-credit programs). We are almost entirely dependent on the Sloan/Babson annual surveys for online learning enrollments and the Kenneth Green survey for IT developments on campuses, both covering just the USA. These surveys are invaluable, especially because they use a consistent methodology from year to year, enabling comparisons to be made, but they depend on the voluntary participation of selected staff within institutions, which tends to bias results toward over-reporting online activities. In Canada, we have nothing, except a 2010 survey in Ontario which is unlikely to be repeated. So the statistical basis for reliable prediction in online learning just isn’t there.
With regard to ‘unknowns’ in online learning, they are everywhere, but of course not visible until they hit you. MOOCs are a good example of something suddenly jumping out of the bushes at you. But we have had other scares as well, such as for-profit universities. Some of these scares or unknowns quickly become very real in online learning, while others disappear almost as quickly as they came.
The last factor, though, a more probabilistic approach, is one we can apply to online learning. Silver makes the distinction between hedgehogs and foxes. Hedgehogs are pundits who have a strong view on everything, hold a ‘biased’ or strongly ideological position, and tend to make statements with a high degree of certainty, yet are frequently and routinely completely wrong. Foxes tend to be more cautious in their statements and more equivocal in their predictions, but in the long haul have a better track record of accurate prediction. Foxes take a more probabilistic approach, recognizing degrees of uncertainty in their predictions (not necessarily in mathematical terms).
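To make the fox’s advantage concrete, here is a minimal sketch (my own illustration in Python, not something from Silver’s book). It scores a hedgehog, who always forecasts with 100% certainty, against a fox, who reports a calibrated probability, using the Brier score: the mean squared difference between the forecast probability and the 0/1 outcome, where lower is better.

```python
import random

random.seed(42)

def brier(forecasts, outcomes):
    """Mean squared difference between forecast probability and the 0/1 outcome."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Simulate 10,000 events, each with its own true probability of happening.
true_probs = [random.random() for _ in range(10_000)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

# The 'hedgehog' commits fully every time: 100% yes or 100% no.
hedgehog = [1.0 if p > 0.5 else 0.0 for p in true_probs]
# The 'fox' reports a calibrated probability instead of a certainty.
fox = true_probs

print(f"hedgehog Brier score: {brier(hedgehog, outcomes):.3f}")  # around 0.25
print(f"fox Brier score:      {brier(fox, outcomes):.3f}")       # around 0.17 (lower is better)
```

The hedgehog sounds more decisive, but over many events the fox’s hedged forecasts are measurably more accurate.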
Timing as a factor in online learning predictions
A particular problem with prediction in online learning is timing. The Horizon reports deal with this by offering one-, three- and five-year projections, which is a more probabilistic approach, but I would argue the reports are more hedgehog than fox, because they focus mainly on technology rather than pedagogy, and usually do not hedge their bets. Jon Baggaley, in a forthcoming analysis of the Horizon reports, also shows how unreliable their predictions have been.
In online learning, technology moves faster than people, and people move faster than organizations. So where you see changes in individuals, it may be another ten years before that filters through to true organizational adoption. Also, when does a prediction become true? Let’s take hybrid learning. Does 100 instructors moving to hybrid learning constitute a ‘trend’? Probably not; but if 100 institutions moved in that direction within a year, that would be more significant. So as well as timing, the level of analysis matters too.
Why prediction in online learning is still necessary
Audrey Watters, who is my favourite blogger on online learning and educational technology, has also read Nate Silver’s book, and is aware of many of the problems I have just laid out. For these reasons, she has decided not to make any predictions for 2013. She is no doubt wiser than me, but I think it’s a pity she’s opting out. She is in a much better position than most of us to make predictions about online learning because she has a very broad overview, a full picture, of what is going on, even if the details are not always clear.
The fact is, we have to make predictions every day of our lives: is it going to rain (so I take an umbrella)? Will the stock market go down (so I won’t invest $5,000)? Will my house still have enough equity when I need to go into an old folks’ home (yes, so I’ll go on holiday this year)? Will my bosses want to do MOOCs (yes, so I’d better be prepared)? And we always have to make these predictions with biases, less than perfect data, and lots of nasty unknowns lurking in the garden.
Silver’s book in fact does not argue against making predictions, but for making them as well as possible. You do the best you can, and take a probabilistic approach. (If I take the umbrella and it doesn’t rain, no big deal; I’ll wait two months before reconsidering my investment; I’ll choose a budget holiday; I’ll suggest a way to do MOOCs that enhances their quality.) We will all have to make some predictions, some intelligent guesses, as to what’s going to happen in online learning this year, so we can at least be prepared.
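To show what a probabilistic approach looks like in practice, here is the umbrella decision as a toy expected-cost calculation. The probability and the costs are invented for illustration:

```python
# Toy decision under uncertainty: take the umbrella or not?
# The probability and costs below are invented for illustration only.
p_rain = 0.3        # forecast probability of rain
cost_carry = 1.0    # nuisance of carrying an umbrella all day
cost_soaked = 10.0  # cost of being caught in the rain without one

expected_cost_take = cost_carry            # you pay the nuisance whether or not it rains
expected_cost_skip = p_rain * cost_soaked  # you pay only if it actually rains

# With these numbers: 1.0 vs 3.0, so take the umbrella even at a 30% chance of rain.
print("take umbrella" if expected_cost_take < expected_cost_skip else "leave it at home")
```

The point is not the particular numbers, but that a hedged forecast ("30% chance of rain") is still actionable: you weigh the cost of being wrong in each direction.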
Go for it, baby
I will be making some predictions, even though I don’t have all the data I’d like (you never do, even in meteorology), and even though I have my biases and prejudices. However, I do have a lot of experience in online learning, which provides at least some sort of theoretical framework for analysis; I get to see what’s happening in about 10-15 universities and colleges a year (not enough, but more than many); I read a lot of the research literature on online learning; and I cover a huge amount of news and developments for my blog. So you decide whether or not my predictions are likely to be better than yours. At least you can make a comparison. (Silver points out that the average of multiple sources of predictions is usually more accurate than any single source, so let’s all share.)
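That last point about averaging is easy to demonstrate. Here is a minimal simulation (my own, with invented numbers): ten forecasters each estimate the same quantity with their own bias and noise, and the averaged forecast usually beats the typical individual forecast:

```python
import random
import statistics

random.seed(7)

true_value = 50.0  # the quantity everyone is trying to forecast

# Ten forecasters, each with their own systematic bias and random noise.
predictions = [
    true_value + random.uniform(-10, 10) + random.gauss(0, 5)
    for _ in range(10)
]

avg_individual_error = statistics.mean(abs(p - true_value) for p in predictions)
ensemble_error = abs(statistics.mean(predictions) - true_value)

print(f"average individual error:   {avg_individual_error:.2f}")
print(f"error of averaged forecast: {ensemble_error:.2f}")  # typically much smaller
```

Individual biases tend to cancel each other out in the average, which is exactly why sharing predictions across many observers is worth doing.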
So, yes, you will get an outlook for 2013 for online learning from me. I will make some firm predictions, but I will use a one- to five-year horizon, and there will be caveats, and the unknowns will still jump out at you during the year – but at least you’ll have an umbrella to fend them off, and you can then blame me if it all goes wrong.
Silver, N. (2012) The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York: The Penguin Press
Watters, A. (2013) Why I’m not making Ed-Tech predictions for 2013, Hack Education, January 1
Baggaley, J. (in press) Shifting Horizons, Distance Education
Wow, I am incredibly humbled that you’d say such nice things about me, because I admire so much the work you do.
And while sure, I would like to think that I do have a pretty good handle on the ed-tech industry (as it grows in the U.S., at least), I worry about posts such as this one (http://www.huffingtonpost.com/kevin-ducoff/higher-education-tech_b_2278751.html) in the Huffington Post that seem little more than fluffing for investors’ favorite startups du jour. It’s enough to make me at least offer a cautionary post about the predictions that we’re going to see made (yours aren’t what I’m worried about, I should be clear).
Indeed, one of the people whose predictions I most enjoy reading is DemandMedia’s John Battelle (see: http://battellemedia.com/archives/2013/01/predictions-from-last-year-how-i-did-2012-edition.php). He knows what he’s talking about… But then I have to pause and wonder what to make of someone who’s so deeply invested (literally) in the technology space “predicting” the future. What is wishful thinking (I tend to make these sorts of predictions), and what is shaping the markets of the future?
Of course, the “models” I have for the future are biased in my own way. I recognize that… And perhaps I was just a little reluctant to start the new year with a post titled “2013: I Predict Ed-Tech Goes to Hell” 🙂
Many thanks, Audrey.
I guess everyone who runs a blog has some axe to grind – I certainly have several! Ideally, we should all be objective, but I guess post-modernists would argue there is no true objectivity. The great thing about the Internet is that it does allow a million voices to be heard. All we have to do is find the signal in the noise! That’s where you provide a great service.
I will continue to look forward to your posts throughout the year and have a great 2013!
I’m all in favor of striking a cautionary note when it comes to predictions, but I wouldn’t be so quick to dismiss the “educational technology” industry.
As Phil Hill has pointed out, these “School-as-a-Service” providers partner with schools to help them transition from F2F to online. I’m not arguing for or against this latest phenomenon (outsourcing online curriculum and content production, marketing, and student services), but it’s a significant shift away from a university attempting to do everything in house.
I’ve developed online courses, degree programs, and entire online initiatives, and the increased interest from the investment community and the MOOC phenomenon is changing the online landscape. (I agree with Tony’s assessment that MOOCs, from a pedagogical perspective, are sorely lacking, but they’re fomenting interest in all things online and might catalyze change in higher ed. That’s a pretty big “might,” though.)
Tony, thank you for this post, which I had to respond to. I would like to ban the word prediction because it implies certainty and comfort, and when thinking about the future, we cannot assume either. Your explanation of why you will still make ‘firm’ predictions that come with a raft of caveats, including ‘don’t blame me’, is an indicator of this: you’ll make some assertions based on your experience in 2013, and that’s what we need and expect from you; I don’t need you to predict anything. A couple of specific comments:
(i) MOOCs didn’t jump out of the bushes at anyone; the signs they were coming had been around since the term was coined in 2008 by Dave Cormier. In the four years since then, there’s been much on the horizon to tell us what was possible (note: possible, not predictable), if we were looking. The skill we need to better understand the future of any educational field is the ability to look, to watch for signals of change and track them over time. If you are looking, you’ll see change coming, you’ll be able to respond early, and you won’t be surprised by MOOCs hiding in the bushes; you’d have been tracking their path beside the bushes.
(ii) predictions are based on data about the past and the present, and since there are no future data or facts, what we are doing is making assumptions about whether the data we have can tell us what’s going to happen. With the weather, we can infer that if there are clouds in the sky and the weather bureau says rain, it’s likely to rain (not that it will rain), and so we take an umbrella as a precaution. That’s what we need to do with online learning and all other things educational: interrogate the data we have and explore the many outcomes that are possible in the future, rather than putting our eggs in one basket with prediction.
Prediction is saying ‘this will happen’, rather than ‘this might happen, and we also need to keep an eye on this and this’. I’m arguing for a watch list: what do we need to pay attention to over the next year or two, and what conversation should we have about the implications if those things start to strengthen this year? I’d be looking outside online learning too, since what is happening in the broader social, technological, political and other systems will have an influence on the future of online learning. That approach, of course, is not as much fun as predictions!
I do think we waste time and mental energy on crafting predictions. That some of us are asking whether it’s worth the bother is a good thing, though, because it suggests we are recognising that predictions don’t actually help us prepare for the future. They do give us good fodder for ridicule some years hence when we look back at them, though (e.g. the remark attributed to Bill Gates that 640K of memory ought to be enough for anybody).
This may be semantics, but language matters. Perhaps we just need a new word – possibilities? But then that doesn’t give us the comfort or the certainty that we humans appear to crave.
Thanks, Maree
Semantics do matter. I think Nate Silver is correct that forecast is a better term than prediction. Certainly some degree of humility (which I’m not known for) is necessary in any forecasts about online learning.
In terms of MOOCs jumping out of the bushes, it depends where you were located. It certainly made a few Presidents jump (e.g. the President of the University of Virginia). But you’re right: those in ‘the business’ could have seen it coming. What was less expected was that it would be Stanford and MIT driving it.
Lastly, predictions are heavily tied up with agendas: pushing a particular position that supports one’s ideology, or a backdoor way of trying to twist the future to particular interests. But we still have to make assumptions about the future, whatever we do. I enjoy seeing what other people think may happen, even though I often don’t agree with them. I hope people will treat my forecasts in the same way.
Hi Tony. It is semantics, and you are right that forecast is a better term than prediction. I can see that predictions can be tied up with particular agendas, but this is another reason why we should stop doing them. You are correct that we all make assumptions about the future, and, as you did in your post, it’s important to lay those assumptions bare so the quality of forecasts can be assessed. Any ideas about the future that are good and credible are valuable for the ongoing conversation.
Writing this, I do wonder if my issue is that predictions (in the sense of ‘I know this will happen’) tend to shut down the conversation: the seeking out of lots of opinions about what’s happening and why, and the keeping of that conversation going. We end up deferring to the expert’s opinion via their predictions, which has the effect of closing off other ideas.
And I did enjoy your subsequent post on your forecasts 🙂