The Science Behind Practice Questions
Why practice questions work better than re-reading
Most people prepare for exams by re-reading notes and highlighting textbooks. It feels productive. The material seems familiar. But decades of research in cognitive psychology tell a different story: actively answering practice questions produces dramatically better long-term retention than any form of passive review.
This finding — known as the “testing effect” — is one of the most robust results in all of learning science.
The numbers are striking
Seven major meta-analyses, together covering hundreds of experiments (one synthesis alone spanning more than 48,000 students), consistently show that practice testing outperforms re-study:
| Study | Sample | Effect size |
|---|---|---|
| Rowland (2014) | 61 experiments, 159 effect sizes | g = 0.50 |
| Adesope, Trevisan & Sundararajan (2017) | 188 experiments, 272 effect sizes | g = 0.61 |
| Yang, Luo, Vadillo, Yu & Shanks (2021) | 222 studies, 48,478 students | g = 0.50 |
| Bangert-Drowns, Kulik & Kulik (1991) | 35 field studies | d = 0.54 |
| Phelps (2012) | 177 studies, 640 effect sizes | d = 0.55–0.88 |
| Pan & Rickard (2018) | 192 effect sizes | d = 0.40 (transfer) |
| Schwieren, Barenberg & Dutke (2017) | Psychology classrooms | d = 0.56 |
An effect size of g = 0.50 means that practice testing moves an average student from the 50th percentile to roughly the 69th percentile — a meaningful advantage when every mark counts.
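The percentile conversion above can be checked directly: under the standard assumption of normally distributed scores with equal variance (Cohen's U3), the percentile is just the standard normal CDF evaluated at the effect size. A minimal sketch:

```python
from math import erf, sqrt

def percentile_after_shift(g: float) -> float:
    """Percentile that the average treated student reaches within the
    control distribution, assuming normal scores with equal variance
    (Cohen's U3): Phi(g) * 100."""
    phi = 0.5 * (1 + erf(g / sqrt(2)))  # standard normal CDF at g
    return phi * 100

print(round(percentile_after_shift(0.50), 1))  # ~69.1: the "50th to 69th percentile" figure
print(round(percentile_after_shift(0.61), 1))  # Adesope et al.'s g = 0.61 -> ~72.9
```

The same conversion applies to any of the effect sizes in the table, which is why even the smallest entries represent a practically meaningful advantage.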
Re-reading and highlighting don't work
A landmark 55-page review by Dunlosky, Rawson, Marsh, Nathan & Willingham (2013), published in Psychological Science in the Public Interest and cited over 14,000 times, evaluated ten common learning strategies. The verdict was clear:
- High utility: Practice testing and distributed (spaced) practice
- Low utility: Re-reading, highlighting, summarisation
The strategies most students rely on are among the least effective. The strategy that feels hardest — answering questions and struggling with retrieval — works best.
The forgetting gap is dramatic
In a foundational study by Roediger & Karpicke (2006), students who studied a passage once and then took three practice tests forgot only 10% of the material over one week. Students who studied the same passage four times forgot 52% — more than a fivefold difference in forgetting.
Karpicke & Roediger (2008), publishing in Science, found an even more dramatic result: students who continued practising with tests recalled 80% of material after one week, while those who only re-studied recalled 33–36% — despite identical performance during learning.
MCQs specifically enhance learning
Multiple-choice questions are sometimes dismissed as superficial, but the research shows they are an effective retrieval practice tool — when designed well.
Little, Bjork, Bjork & Angello (2012) demonstrated that MCQs with competitive, plausible distractors trigger genuine retrieval processes. Learners don't just recognise the right answer — they actively recall why each alternative is correct or incorrect, strengthening memory for both tested and related material.
The meta-analyses confirm this. Adesope et al. (2017) found MCQ practice tests produced an effect size of g = 0.70 — actually larger than short-answer tests (g = 0.48) in their analysis.
The key requirement is feedback. Greving & Richter (2018) found that MCQs without feedback showed no significant testing effect, while MCQs with explanatory feedback consistently produced robust learning gains. That's why every question on our platform includes a detailed explanation.
Spacing multiplies the benefit
The testing effect becomes even more powerful when practice is spaced over time rather than crammed into a single session.
Cepeda, Pashler, Vul, Wixted & Rohrer (2006), in a major meta-analysis covering 317 experiments, found that spaced practice outperformed massed practice in 259 of 271 comparisons.
When spacing is personalised using adaptive algorithms, the gains are even larger. Lindsey, Shroyer, Pashler & Mozer (2014) found that adaptive spaced review improved exam scores by 16.5% compared to massed study — and for the earliest, most forgettable material, scores improved by an average of two letter grades.
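The intuition behind these gains can be illustrated with a toy exponential forgetting model. This is an illustrative sketch only, not the adaptive algorithm Lindsey et al. actually used: the assumption that a longer gap before a review produces a bigger boost to memory stability is a stand-in for the "desirable difficulty" of harder retrieval.

```python
import math

def recall(stability: float, elapsed_days: float) -> float:
    """Probability of recall under simple exponential forgetting."""
    return math.exp(-elapsed_days / stability)

def retention_at_exam(review_days, exam_day):
    """Toy spacing model: each review strengthens the memory, and a
    review after a longer gap strengthens it more (an illustrative
    assumption, not a fitted model of real learners)."""
    stability, last = 1.0, 0.0
    for day in review_days:
        stability *= 1.0 + (day - last)  # bigger gap -> bigger boost
        last = day
    return recall(stability, exam_day - last)

massed = retention_at_exam([0, 0, 0], exam_day=7)  # three reviews in one sitting
spaced = retention_at_exam([0, 2, 5], exam_day=7)  # same three reviews, spread out
print(f"massed: {massed:.2f}, spaced: {spaced:.2f}")
```

Running this, the spaced schedule retains far more at exam time than the massed one, even though both involve exactly three reviews — the qualitative pattern the meta-analytic evidence shows at scale.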
It works for professional exams
This isn't just laboratory science. Research with professional learners confirms the findings translate to real-world exam preparation.
Larsen, Butler & Roediger (2009) conducted a randomised controlled trial with medical residents at Washington University. On a test more than six months later, residents who had practised with repeated quizzing scored 13 percentage points higher than those who re-studied the same material (Cohen's d = 0.91 — a large effect).
Even getting questions wrong helps
Perhaps the most counterintuitive finding: making errors during practice actually improves learning, as long as you receive feedback afterwards.
Kornell, Hays & Bjork (2009) demonstrated that unsuccessful retrieval attempts — even on questions guaranteed to be unanswerable — improved subsequent learning of the correct answers. The effortful search activates the mental networks needed to encode the answer when it arrives.
So if you're getting questions wrong during practice, that's not failure — it's learning.
Why passive study feels effective but isn't
The reason most learners prefer re-reading over practice questions comes down to a well-documented cognitive illusion. When you re-read material, the information feels familiar and processing feels fluent. Your brain interprets that fluency as evidence of learning. But familiarity is not the same as the ability to recall information under exam conditions.
Psychologists Robert and Elizabeth Bjork call this principle “desirable difficulties”: conditions that make learning feel harder during practice actually produce stronger, more durable memories. Practice testing is harder than re-reading — and that difficulty is precisely what makes it effective.
The bottom line
The evidence is not a close call. Across seven meta-analyses, hundreds of experiments, and populations from school students to medical professionals, practice testing is one of only two learning strategies rated “high utility” by researchers. It works across all age groups, all subject matter, and all exam formats. It works even better with spaced repetition, detailed feedback, and well-designed questions.
That's exactly what this platform delivers.
References
- Adesope, O. O., Trevisan, D. A., & Sundararajan, N. (2017). Rethinking the use of tests: A meta-analysis of practice testing. Review of Educational Research, 87(3), 659–701. DOI
- Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C.-L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 61(2), 213–238. DOI
- Bjork, R. A., & Bjork, E. L. (2020). Desirable difficulties in theory and practice. Journal of Applied Research in Memory and Cognition, 9(4), 475–479. DOI
- Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks. Psychological Bulletin, 132(3), 354–380. DOI
- Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques. Psychological Science in the Public Interest, 14(1), 4–58. DOI
- Greving, S., & Richter, T. (2018). Examining the testing effect in university teaching. Educational Psychology Review, 30, 1–24. DOI
- Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968. DOI
- Kornell, N., Hays, M. J., & Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(4), 989–998. DOI
- Larsen, D. P., Butler, A. C., & Roediger, H. L. (2009). Repeated testing improves long-term retention relative to repeated study. Academic Medicine, 84(9), 1218–1225. DOI
- Lindsey, R. V., Shroyer, J. D., Pashler, H., & Mozer, M. C. (2014). Improving students' long-term knowledge retention through personalized review. Psychological Science, 25(3), 639–647. DOI
- Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges. Psychological Science, 23(11), 1337–1340. DOI
- Pan, S. C., & Rickard, T. C. (2018). Transfer of test-enhanced learning. Journal of Educational Psychology, 110(5), 710–728. DOI
- Phelps, R. P. (2012). The effect of testing on student achievement. International Journal of Testing, 12(1), 21–43. DOI
- Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning. Psychological Science, 17(3), 249–255. DOI
- Schwieren, J., Barenberg, J., & Dutke, S. (2017). The testing effect in the psychology classroom. Applied Measurement in Education, 30(4), 307–316. DOI
- Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning. Psychological Bulletin, 147(4), 399–435. DOI