I spoke with three different people about curriculum design last week, so this seems like a good time to summarize what I know about doing that properly, how we've actually been doing it for Software Carpentry, why the two are different, and what we hope to do in future. To set the stage, I need to talk about the medical program at McMaster University in Ontario.
McMaster is one of Canada's younger universities, and prides itself on its practical bent. When it set out to create a medical school, it surveyed hundreds of practicing physicians to find out what they remembered and used five years or more after leaving school, then put that, and only that, into its problem-based curriculum. The idea was that if doctors who'd had time to settle into their practices weren't actually using something, there was no point teaching it in the first place.
This evidence-based approach to curriculum design is sensible, practical, has solid theoretical and empirical foundations, and has been used successfully by a wide variety of organizations. It therefore elicited sneers and protests from other schools and the provincial medical association, primarily because the resulting program was only three years long rather than four. McMaster pressed ahead, though, and studies of their graduates have repeatedly shown that they are just as good at their jobs as anyone else.
In a sane world, this would have led other universities to re-design their programs. In our world, though, that hasn't happened. The curricula of most Ontario medical schools, and indeed of most other university programs, are still "designed" according to the following syllogism:
"Important to whom?" is occasionally asked, but rarely answered except through anecdotes. Some universities do track the careers students pursue after graduation, but as far as I can tell, they never look at what knowledge and skills students actually use in those jobs. Meanwhile, university faculty aren't exposed to programs designed this way, so they don't think to look for that information when it's their turn to update their department's courses. As for funding agencies, good luck getting them to care about anything related to training...
So here's how we should design the curriculum for a two-day Software Carpentry bootcamp:
Step 2 is the most important of these. If you ask people to tell you what they do, they over-report the relatively rare activities that gave them trouble or required them to stop and think, and under-report the routine tasks that actually consume most of their time. The only way to get an accurate profile of where their time actually goes is to have a third party observe it. The "couple of days" part is important too: if you watch someone for an hour, all you see is them being self-conscious. You have to watch for an extended period so that you fade into the background, and also to get a representative sample of actual tasks.
We've never done this for Software Carpentry, primarily because we've never found someone willing to back the necessary study. It wouldn't take a lot—two dozen subjects for two days each, an equal amount of time to categorize and re-categorize the observations, plus setup and travel, works out to about six months of full-time effort—but time and again, people have thought they could skip this step and magically land on the right result anyway. (Ironically, some of these people would describe themselves as data scientists, and could explain at length why basing decisions on personal experience and "it's obvious to me" is a bad idea...)
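The "about six months" figure above can be sanity-checked with a quick back-of-the-envelope calculation. Everything here is an assumption drawn from the numbers in the paragraph; the 30-day allowance for setup and travel is my own guess, not something the post specifies:

```python
# Rough effort estimate for the observational study described above.
# All figures are assumptions taken from (or added to) the text.

subjects = 24                       # "two dozen subjects"
days_observing = subjects * 2       # two days of shadowing per subject
days_analyzing = days_observing     # "an equal amount of time" to categorize
days_overhead = 30                  # assumed allowance for setup and travel

total_days = days_observing + days_analyzing + days_overhead
WORK_DAYS_PER_MONTH = 21            # typical full-time working month

months = total_days / WORK_DAYS_PER_MONTH
print(f"{total_days} working days is roughly {months:.1f} months of full-time effort")
```

With those assumptions the total comes out to about six months, which matches the estimate in the text.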
What we've done as a substitute is much less rigorous, but has still led to major changes in what we do. Since May 2012, I've asked roughly 40 alumni of past bootcamps what practices they've actually adopted from those we've taught. Here's what I've learned:
I had hoped to put some statistical meat on these anecdotal bones by now, but there have been several hiccups along the way. As soon as we get institutional approval, though, I want to contact several hundred people and ask them which of the things we taught them have become part of their daily routine, and what else they've learned on their own that we, or someone, should have taught them.
What I really want to do, though, is persuade scientists to practice what they preach. We complain loud and long about idiots who think that paying attention to evidence is nothing more than a matter of personal taste when it comes to climate change, evolution, and other facts; we have no right to act the same way when it comes to education. If we're going to ask our students to do their best when they come into our classrooms, we have an obligation to do our best as well.
Originally posted 2013-10-14 by Greg Wilson in Teaching, Lectures.