
Using Science to Design This Course

One of the many reasons I left the University of Toronto to work full-time revising this course was to explore two areas I've been curious about for years: educational technology and the science of learning. With my daughter a year away from kindergarten, the latter is no longer academic [1]. In this post and some that follow, I'd like to talk about what I'm finding and how it's influencing the design of this course. If you want to know more, please subscribe to Mark Guzdial's excellent and thought-provoking blog about computer science education.

Our experiments with screencasts and online tutoring are still in their early days (i.e., we have put less than three hours of the former online, and haven't started the latter), so I don't have a lot to report. By October or November, though, I hope to be able to speak more knowledgeably about how cost-effective asynchronous web-based instruction is compared to more traditional modes [2].

Nothing I've learned about ed tech so far has really surprised me. What I've been learning about learning definitely has: there's a lot more science there than I ever suspected [3]. For example, CS educators have spent years arguing over "objects first" versus "objects later" as teaching strategies. Turns out it doesn't matter: outcomes are the same either way [ES09]. Similarly, I've "known" for years that using a debugger helps people learn how to program. Turns out I was wrong, at least in introductory courses [BS10].

At a higher level, there's lots of evidence now showing that novices learn more from worked examples than from working through problems on their own. The theoretical basis for this comes from cognitive load theory, which can, with some work, be translated into concrete course design [CB07]. I'm still digesting this literature, but I would probably never have discovered ideas like fading worked examples without it. More importantly, I wouldn't have been able to distinguish those ideas from others that sound equally plausible, but aren't backed up by evidence [4].

How does this translate into course material? To be honest, I'm not sure yet: "lots of worked examples" is obvious, but other questions, in particular how to handle self-assessment, are still open. If you know of something with evidence behind it that I should read, I'd welcome a pointer.

References

[CB07] Michael E. Caspersen and Jens Bennedsen: "Instructional Design of a Programming Course: A Learning Theoretic Approach". ICER'07, 2007.

[ES09] Albrecht Ehlert and Carsten Schulte: "Empirical Comparison of Objects-First and Objects-Later". ICER'09, 2009.

[BS10] Jens Bennedsen and Carsten Schulte: "BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs". ACM Transactions on Computing Education, 10(2), Article 8, June 2010.

Footnotes

[1] Academic (adj.): ...not expected to produce an immediate or practical result.

[2] Did I really just use the word "mode"? Br...

[3] If you're interested, a good place to start is the CWSEI site or, again, Mark Guzdial's blog.

[4] Ernst and Singh's Trick or Treatment is a great book about evidence-based thinking in medicine; I'd happily buy half a dozen copies of something similar about education to give to friends and family.