Colvin, K. et al. (2014) 'Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class', The International Review of Research in Open and Distance Learning, Vol. 15, No. 4
Why this paper?
I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:
- it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
- as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
- the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
- I had to read this paper carefully for my book, 'Teaching in a Digital Age', but for reasons of space I cannot go into detail on it there, so I might as well share my full analysis with you.
What was the course?
8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed Introductory Newtonian Mechanics in the fall. In other words, it is based on a second-chance course for MIT campus students.
The online version was offered in the summer semester as a free, open access course through edX, aimed particularly at high school physics teachers but open to anyone else interested. The course consisted of the following components:
- an online eText, especially designed for the course
- reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
- an online discussion area/forum
- mainly multiple-choice online tests and ‘quizzes’, interspersed weekly throughout the course.
Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were ‘passing’ or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course. Of these, 1,030 students earned certificates.
Thus the study measured only the learning of the most successful online students (in terms of completing the online course).
Methodology (summary)
The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that can generally be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.
Two methods were used to test learning: a comparison between each student’s pre-test and post-test scores, to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT), which does not show absolute learning (as measured by pre-/post-testing) but rather improvement relative to the ‘class average’.
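For readers unfamiliar with how such gains are usually reported in physics education research, here is a minimal sketch of the standard ‘normalized gain’ calculation, which is how I read the gain figures in this paper; the function name and the sample scores below are my own, for illustration only:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: the fraction of the possible
    improvement (100 - pre-test score) that was actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical student: 50% on the pre-test, 65% on the post-test.
# They have closed 30% of the gap to a perfect score.
print(normalized_gain(50, 65))  # 0.3
```

On this reading, the ‘average gain of 0.3’ I discuss below means that, on average, students closed about 30% of the gap between their pre-test score and a perfect score. The IRT analysis, by contrast, estimates each student’s skill from their pattern of right and wrong answers, and so reports standing relative to the class rather than an absolute gain.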
Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance across various ‘cohorts’ within the MOOC, such as:
- physics teachers
- not physics teachers
- physics background
- no physics background
- college math
- no math
- post-graduate qualification
- bachelor degree
- no more than high school
Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:
- four hours of instruction each week in which staff interacted with small groups of students (a flipped classroom),
- staff office hours,
- help from fellow students,
- available physics tutors,
- the MIT library.
Main results (summary)
- gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, although this result is hedged with some considerable qualifications (‘more studies on MOOCs need to be done to confirm this’).
- in spite of the extra instruction that the on-campus students had, there was no evidence of positive weekly relative improvement of the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse).
- there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts.
Conclusions
This is a valuable research report, carefully conducted and cautiously interpreted by the authors. Nevertheless, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:
It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.
To this I would add that the design of this MOOC was somewhat different from many other xMOOCs, in that it was based on online texts specially designed for the MOOC, rather than on video lectures.
I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 of those who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn’t mean a lot on its own. There is some mention of the difference being between a B and a B+, but I may have misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better. It would have been more helpful to have graded students on the pre-test and then compared those grades with their grades on the post-test; we could then see, for instance, whether gains were in the order of at least one grade better, as sketched below.
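To make that suggestion concrete, here is a minimal sketch of such a grade-based comparison. The grade bands, the function names and the sample scores are all hypothetical, since the paper does not give MIT’s actual grading scheme for 8.MReV:

```python
GRADE_ORDER = ["F", "D", "C", "B", "A"]

def letter_grade(pct: float) -> str:
    """Map a percentage score to a letter grade (hypothetical bands)."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

def improved_by_at_least_one_grade(pre_pct: float, post_pct: float) -> bool:
    """True if the post-test grade is at least one band above the pre-test grade."""
    return GRADE_ORDER.index(letter_grade(post_pct)) > GRADE_ORDER.index(letter_grade(pre_pct))

# Hypothetical student: 65% on the pre-test (a D), 82% on the post-test (a B).
print(improved_by_at_least_one_grade(65, 82))  # True
```

Reporting the proportion of students who improved by at least one grade would, I think, be far easier for teachers and learners to interpret than an average gain of 0.3.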
Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.
Excellent analysis, thanks Tony!
I’ve added a few of my own thoughts on this at https://landing.athabascau.ca/bookmarks/view/733402/learning-in-an-introductory-physics-mooc-all-cohorts-learn-equally-including-an-on-campus-class-colvin-the-international-review-of-research-in-open-and-distance-learning – in brief, those ‘considerable qualifications’ are indeed very considerable. I would love to see comparisons between this kind of MOOC and those simply using a textbook or other online resources without the framework of the course, but I cannot for the life of me think of a way of easily doing this.
Jon
Many thanks, Jon. As always, you have done an excellent post which will be valuable to anyone doing research in online learning/MOOCs.
Tony
A big chance was missed.
This is a comparison of apples with pears, so the results are meaningless.
I have been following online developments for 20 years now.
In all those years I never liked any course at all until I took the Circuits course from MIT in March 2012.
Since then there has been greatly improved technology and method in learning. MIT and Stanford have huge labs to do research. Stanford does not want just to put Word documents online; it talks of ‘research-driven courses’. I loved it.
I wish someone like you would do several research comparisons with groups of students of the same age and the same IQ level, maybe even the same SAT scores, from the same geographical area. One group takes the online course, the other the classical in-class one. The subject must be the same. One study alone is not enough either.
As an engineer, I want to prove that good, advanced online learning is better than in-class learning.
For now I just believe that ‘online is better than f2f’. Really, this is just a belief, which cannot yet be proved.
But recently I have had more hope for online learning, seeing the research done by Stanford and MIT.
It is not static; it improves every day. Classical face-to-face education has not improved in the 2,500 years since Aristotle.
I have been a supporter of online learning since 1995. In 1996 I even suggested to Babson College that it put its whole MBA program on CD-ROM, and later on the internet.
Tony, I am glad you follow developments in online learning.
Please let us know when you finish your book.
Best regards.