In January 2012, John Cook posted this to his widely-read blog:
In a review of linear programming solvers from 1987 to 2002, Bob Bixby says that solvers benefited as much from algorithm improvements as from Moore's law: "Three orders of magnitude in machine speed and three orders of magnitude in algorithmic speed add up to six orders of magnitude in solving power. A model that might have taken a year to solve 10 years ago can now solve in less than 30 seconds."
A million-fold speedup is pretty impressive, but faster hardware and better algorithms are only two sides of the triangle. The third is development time, and while it has improved since 1987, the speedup is measured in small percentages, not orders of magnitude. For most scientists, getting the code to do the right thing is now a bigger bottleneck than its running time.
That's where Software Carpentry comes in. We teach scientists (usually grad students, since they have both the need and the time) what they ought to know before they start working on large programs, creating web services, or any other leading-edge work. Our hope is that if we give people basic skills, they'll be better able to take advantage of more sophisticated things.
Between January 2012 and July 2013, over 100 volunteers will have run 92 two-day workshops for over 3000 scientists, each following a broadly similar two-day curriculum.
Our real aim isn't to teach Python, Git, or any other specific tool: it's to teach computational competence. What we've found, though, is that we can't do this in the abstract: people won't show up, and if they do, they won't understand. We try hard to start with the particular, to show them that yes, this stuff actually is useful, and only then bring in more general material.
We nominally aim for 40 people per workshop, and are always grateful for local helpers to wander the room and answer questions during practicals. We find workshops go a lot better if people come in groups, e.g., 4-5 people from one lab, half a dozen from another department or institute, etc., so that they are less inhibited about asking questions, and can support each other afterward. (It also produces much higher turnout from groups that are usually under-represented in computing, such as women and minority students.) We use live coding rather than slides: it's more convincing, there's more lateral knowledge transfer (i.e., people learn more than we realized we were teaching them by watching us work), and it makes instruction a lot more responsive.
Our instructors are all volunteers, so the only cost to host sites is travel and accommodation. All but a handful of our instructors are working scientists themselves; that, plus live coding instead of slides, ensures that attendees get lots of "how" as well as "what".
We also run an online training course for would-be instructors. It takes 2-4 hours/week of their time for 12 weeks, and introduces them to the basics of educational psychology, instructional design, and how these things apply to teaching programming. It's necessarily very shallow, but it's still more than most university faculty ever get...
Results have been very good: we had two independent evaluations done last spring (one by Prof. Julie Libarkin at Michigan State University, the other by Dr. Jorge Aranda at the University of Victoria), and both found that people actually are learning useful things. What we're struggling with now is showing that this translates into them doing more and/or better science (the holy trinity of "novelty, efficiency, and trust").
Not everything is working, though, and some things that are working aren't working well.
We're pausing to catch our breath from mid-July to the end of August this year, during which time we'll clean up our GitHub repositories, finish the instructors' guide, and generally get ready for another round of boot camps in the fall. Before then, though, we will have some exciting news to announce, so please keep an eye on this blog—we think you'll like what you see.
Originally posted 2013-05-24 by Greg Wilson in Opinion.