I’m slogging my way through the special issue on learning analytics in Distance Education, a subscription-only journal, because what is happening in this field is important but mostly wrong in its approach to teaching and learning.
This is the fourth post reviewing articles in the journal Distance Education, Vol. 40, No.3. The other three were:
- analytics and learning design at the UKOU.
- analytics and personality traits in a high school in China
- analytics and gamification in an undergraduate course at a Hong Kong university.
As always, if you find these posts of interest, please read the original articles. Your conclusions will almost certainly be different from mine.
The article
Slater, S. and Baker, R. (2019) ‘Forecasting future student mastery’, Distance Education, Vol. 40, No. 3
The aim of the study
The study proposes a method to predict the point at which a student will reach skill mastery within an adaptive learning system, based on ‘current approaches to estimating student knowledge’.
Method
Data were obtained from 22,000 students who used ASSISTments, a free online tool for learning and testing mathematics. ASSISTments provides formative assessment and student support and assistance. The data set analysed consisted of problems and skills used in ‘skill builder’ problem sets, where students complete a set of problems involving the same skill and can only advance when they get three consecutive correct answers. The authors do not provide information on the age or the location of the students in the dataset, or what the context of their teaching or learning was outside the ASSISTments tool.
Knowledge in this study, then, is measured by students’ correct or incorrect answers on specific knowledge components (KCs), using two different methods:
- Bayesian Knowledge Tracing (BKT);
- Performance Factors Analysis (PFA).
These concepts need explanation because, for me, they suggest a serious misunderstanding of the learning process, yet a huge edifice of research is being built on this weak foundation.
BKT
I will quote from the article:
With BKT, student knowledge of a given KC [knowledge component] is assumed to be either known or unknown, and the likelihood of a student being in either state is inferred by their pattern of correct and incorrect answers.
There are four parameters, each a probability, describing student knowledge of a KC:
- initial knowledge (i.e. they ‘knew’ the KC before exposure to testing)
- learning (i.e. knowledge achieved between initial and later testing)
- guessing (a correct answer is given although the KC has not been learned)
- slip (an incorrect answer is given although the KC has been learned)
For each first attempt a student makes at a new problem, a set of equations based on Bayes’ theorem is used both to calculate the probability that they will answer that question correctly and to update the probability that the student knows the skill.
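To make this a little more concrete, here is a minimal sketch in Python of the standard BKT update step. This is my own illustration, not code from the article, and the parameter values are placeholders rather than fitted values:

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """One step of standard Bayesian Knowledge Tracing.

    p_know  : current probability that the student knows the KC
    correct : True if the student's first attempt was correct
    p_guess : probability of a correct answer without knowing the KC
    p_slip  : probability of an incorrect answer despite knowing the KC
    p_learn : probability of learning the KC between opportunities
    (values above are illustrative only, not taken from the article)
    """
    if correct:
        # posterior probability of knowing the KC, given a correct answer
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        # posterior probability of knowing the KC, given an incorrect answer
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # allow for learning before the next opportunity
    return posterior + (1 - posterior) * p_learn


def bkt_predict_correct(p_know, p_guess=0.2, p_slip=0.1):
    # probability that the next first attempt will be correct
    return p_know * (1 - p_slip) + (1 - p_know) * p_guess
```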
Got it? If not, hang in there. But note that the terms ‘knowledge component’, ‘skill’ and ‘problem’ are often used interchangeably in this article.
PFA
This method is even more opaque.
PFA models student performance using a logistic regression equation with two variables – the cumulative number of correct and incorrect answers that the student has produced thus far for the current skill. It also uses three parameters, typically fit for each skill (a sketch of the resulting formula follows the list below):
- the degree to which a correct answer is associated with better future performance
- the degree to which an incorrect answer is associated with better future performance
- the overall ease/difficulty of the knowledge component to be learned.
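For comparison, here is an equally minimal sketch of a PFA prediction. Again this is my own illustration rather than the authors’ code, and the parameter values are placeholders; in PFA they are fitted separately for each skill:

```python
import math

def pfa_predict_correct(successes, failures, beta=0.0, gamma=0.3, rho=-0.1):
    """Performance Factors Analysis: probability of a correct answer on a skill.

    successes : cumulative number of prior correct answers on this skill
    failures  : cumulative number of prior incorrect answers on this skill
    beta      : overall ease/difficulty of the knowledge component
    gamma     : weight given to each prior correct answer
    rho       : weight given to each prior incorrect answer
    (values above are illustrative only, not taken from the article)
    """
    m = beta + gamma * successes + rho * failures
    return 1.0 / (1.0 + math.exp(-m))
```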
Data analysis
The researchers analysed 180,000 unique problems across 478 identified skills. To forecast knowledge mastery, the researchers grouped the data into nearly 200,000 ‘unique student-skill pairs, one string of records for each unique student’s first attempt at each problem within each unique skill‘. Pairs where a student did NOT reach mastery were eliminated from the analysis. The goal was to identify how quickly future student mastery could be predicted.
The method becomes even more opaque at this point, but basically it involved calculating how many attempts students made at a problem before they got the answer right, and then using this data to see whether BKT or PFA better predicted how many attempts each student would need to reach mastery.
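The article does not spell out the forecasting procedure in a form I can reproduce exactly, but the general idea can be sketched: roll a fitted model forward and count how many more opportunities it expects a student to need before some mastery criterion is met. The sketch below reuses the bkt_update function above; the 0.95 threshold and the assumption that future first attempts are correct are my own simplifications, not the authors’ method:

```python
def forecast_opportunities_to_mastery(p_know, threshold=0.95, max_steps=50,
                                      p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """Roll a fitted BKT model forward and count how many more opportunities
    the model expects the student to need before its knowledge estimate
    crosses a mastery threshold. A sketch of the general idea only; the
    threshold and the optimistic assumption that future first attempts are
    correct are simplifications, not the authors' procedure."""
    for step in range(1, max_steps + 1):
        # optimistically assume the next first attempt is correct
        p_know = bkt_update(p_know, correct=True,
                            p_guess=p_guess, p_slip=p_slip, p_learn=p_learn)
        if p_know >= threshold:
            return step
    return None  # mastery not forecast within max_steps opportunities
```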
Results
- BKT was pretty useless at forecasting student mastery.
- PFA was better, able to predict the number of opportunities a student requires to reach mastery to within 2-3 opportunities (in plain English, I think this means it could predict how many test attempts a student would need before getting the answer right, to within 2-3 attempts).
- Neither model was fully able to account for more rapid shifts in student performance, due to ‘eureka’ or ‘aha’ moments.
My comments
Why have I gone to so much trouble to analyse and comment on a study that has almost no practical use for teachers or students? Why not write off this article as just another poor academic paper whose focus is so narrow that it is of interest only to those in the specialist field of prediction through learning analytics? Basically because their conception of teaching and learning is both wrong and dangerous.
Even acknowledging that this study attempted to forecast student mastery in a very narrow, specific, quantitative field (problem solving in mathematics), the study reveals all kinds of misconceptions about the learning process.
First, knowledge components (in this case mathematical problems) are not isolated from one another. It is a fundamental misconception to think of knowledge as isolated chunks of information, each of which requires separate processing and testing. The study looks only at how students do in tests in ASSISTments. It does not examine the actual teaching that students have been exposed to before taking the tests, or what they do between test attempts.
Second, as the authors ruefully acknowledge, ‘learning is not always a smooth and gradual process’. Learning is developmental. Our understanding of a concept continually grows and develops. This means that knowledge is not like computing: on or off – you either know something or you don’t. You may not know enough about something to answer a specific question correctly, but there is usually some prior knowledge that is relevant and on which one can build when faced with a new situation.
Third, ‘mastery’ demonstrated on a short run of near-identical test items is not a very helpful way of testing knowledge. Mastery is defined here as good enough – three consecutive correct answers – but true knowledge continues to grow. A better test of knowledge is how it is applied in different contexts and different situations. You can do this up to a point with lots of problems in math but it doesn’t work in more qualitative subject disciplines. Also imagine how discouraging it can be for a student to take more and more tests and keep failing them.
But at the end of the day, it is the goal of the study that is wrong. If the study had been successful, and the algorithms – BKT or PFA or some other statistical technique – were able to measure after a student’s first or second attempt at solving a problem how many further attempts would be needed, how would this information be used? And what about the 30,000-odd student-skill pairs that never reached mastery? What practical or pragmatic decisions would be made as a result of applying such algorithms? They do not tell us what the problem is, or what alternative approaches to teaching and learning might be more appropriate – just route the poor kid through more tests and problems.
What this kind of learning analytics is doing is what I call theory-free analysis of learning, where the hope is that somehow the statistical analyses will eventually lead to effective and automated learning. We won’t know why the method works, only that it does. It is, though, a very short step from there to saying that unless we can identify learning through statistical analyses, learning analytics or artificial intelligence, then learning cannot be happening.
This is not a rant against learning analytics in general. When properly used, and tied to theories of learning, learning analytics – or, more accurately, statistical analyses – can be very useful, as we saw in the study on gamification. It was therefore even more disappointing for me to see two members of the School of Education at a prestigious U.S. university taking such a mindless statistical approach to learning in this article. However, this is not a one-off. I have been reading many papers like this recently, which is why I think we should be resisting this approach to education. But try to read this article yourself – and good luck.
Up next
The use of sentiment analysis to study user agreements and privacy language in MOOCs. This is a much more interesting article.