Over the last few days, there have been four related discussion threads on the Software Carpentry mailing lists about what we use, what we teach, and how we teach it. Together, they highlight what we're doing well and where we need to do better.
There was discussion on:
1. IDE vs. IPython Notebook vs. command-line/text editor for teaching Python
2. IPython Notebook vs. Markdown for lecture notes (specifically for teaching Python, but by implication for other topics as well)
3. Tools themselves imposing disruptive cognitive loads: whether the way they work hides (or merges) too many "fundamental" concepts or levels, and the ever-present question of how this relates to learners' backgrounds (and how, or whether, we can tailor things to a particular audience, including how we could assess this empirically)
Notice that 1 is really a specific example of 3, i.e., which tool strikes the best balance of power, familiarity, and understanding of fundamentals? Interestingly, though, all the responses (I think) to 1 were about which IDE worked best, not about the type of tool.
2 is in many ways a flavour of 3, but where the learners/users are the SWC contributors. As such, people's backgrounds and the use cases for the tools obviously mean that things like diff handling become important (but, interestingly, most of the discussion was really about familiarity, because that's the main barrier to entry).
1 and 2 also implicitly raise the question of whether we should teach one consistent toolset. Here, there is a tension between SWC students gaining mastery of a well-defined set of tools and concepts versus their understanding the broader landscape of tools and concepts.
This highlights three related tensions in many kinds of teaching (not just Software Carpentry):
- Fundamentals vs. authentic tasks. On the one hand, people need fundamental concepts in order to know how to use specific tools and practices properly. On the other hand, people come to classes like ours because they want to be able to actually do things, and the sooner they're able to accomplish authentic tasks, the more likely they are to learn and to continue learning.
- Foxes vs. hedgehogs. As the saying goes, "The fox knows how to do many things; the hedgehog knows how to do one thing well." If we concentrate on a handful of tasks with lots of reinforcement (as Bennet Fauber stressed), learners will leave knowing how to do a few things well—but only a few. A broad overview of the full landscape of tools and techniques may give them a better understanding of how things fit together, but may also just confuse them, and in either case they will leave having actually mastered fewer things.
- Necessary losses. Our teaching is necessarily less than perfect for any particular learner because we don't have the resources to customize each lesson individually, and even if we did, we can't know in advance what "perfect" would actually be. As David Martin said, there is no globally right tool for teaching: what's best depends on the students and the intended learning outcomes, and whatever we pick cannot be the best choice for people who have different backgrounds and want different outcomes.
Our lessons try to strike a balance between the practical and the conceptual by advertising the first and smuggling in the second. For example, our introduction to the Unix shell only covers 15 commands in three hours. Along the way, though, it shows people how a filesystem is organized, how to get the computer to repeat things so that they don't have to, and (hopefully) why lots of small single-purpose tools that work well together are better than one large tool that tries to do everything. Similarly, the real goal of our introduction to Python isn't really to teach people the syntax of Python: it's to convince them that they ought to write short, readable, testable, reusable functions, and that those are actually all the same thing.
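To make that last point concrete, here is a minimal, hypothetical sketch of the kind of function we hope learners leave able to write. The function name and data are illustrative only, not taken from any actual lesson; the point is that because the function is short and side-effect-free, "readable", "testable", and "reusable" really are the same property.

```python
def mean_of_positives(values):
    """Return the mean of the positive numbers in `values`.

    Raises ValueError if `values` contains no positive numbers.
    """
    positives = [v for v in values if v > 0]
    if not positives:
        raise ValueError("no positive values")
    return sum(positives) / len(positives)


# Because the function is small and has no side effects, checking it
# is a one-liner, and reusing it in another script is just an import.
assert mean_of_positives([1, 2, 3]) == 2.0
assert mean_of_positives([-1, 4]) == 4.0
```

A function like this can be tested at the interactive prompt, in an IPython Notebook, or from a plain script, which is exactly why the choice of tool matters less than the habit it instills.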
Stuart goes on to ask whether the Software Carpentry brand consists of "just" our topics and teaching style, or is tied to the specific tools we use. I want to say it's the former, but in practice, you can't teach programming without picking a language and an editor. Since we don't have the resources to build and maintain lessons covering all combinations of all possibilities, we have to make choices, which effectively means we're advocating some things over others.
Meanwhile, Tim McNamara re-raises the perennial question of assessment:
There seem to be many statements that follow this format: "in my experience, X is better for learners". The problem is that for each X, not-X is also strongly supported. The key point for me is that we're making inferences about outcomes (e.g., learner growth/satisfaction) on the basis of anecdotal evidence (personal experience).
It strikes me that Software Carpentry is in a unique position to actually test and validate which tools work best. We run many bootcamps, have people who know experimental design, and have increasingly strong tooling to support experiments.
I couldn't agree more, but as Titus Brown said in reply:
we have had a hard time getting the necessary funding to do good assessments for Software Carpentry. I think we need to support someone trained, focused, and willing to engage long term (~2+ years) with this as their primary project.
We've learned the hard way that assessing the impact of the various things we're doing is neither easy nor cheap. We don't know how to measure the productivity of programmers, or the productivity of scientists, and the unknowns don't cancel out when you put the two together. Jory Schossau has done valuable service by analyzing survey data and interviewing bootcamp participants (see this pre-print for results to date), but if we want to get the right answer, we're going to have to find a backer.
As this discussion was going on, Ethan White tweeted:
Next time you think about getting involved in a debate about programming languages or text editors, go build something cool instead.
I think this thread shows that there's a useful third path. In it, a dozen instructors who have been teaching R compare notes about what they're doing and how it's working. I really enjoyed the exchange of specifics, and I think that more discussions like this will do a lot to strengthen our instructor community.
To wrap up, here are my take-aways from this past week:
- I need to do a much better job in the instructor training course of introducing people to our existing material, the tools we use, and the "why" behind both. This probably means adding at least one more two-week cycle to the training; I hope that's a cost potential instructors will be willing to bear.
- I need to foster more discussions like the one about how we teach R, and turn those discussions into training material so we don't have to repeat them every twelve months. To get started on that, I'm going to prod people who've taught this year to complete our post-bootcamp instructor survey, and try to get discussions going on the discussion list about how we all actually teach the shell, Python, and Git.
- We need to find resources to do assessment properly.
- I need to do a better job of monitoring the mailing list (and get someone else to keep an eye on it when I'm not available). Some of the discussion was less respectful than our code of conduct requires, and the onus is on us to make sure that everyone always feels welcome.
We've come a long way since our first workshop at Los Alamos sixteen years ago—if all goes well, we'll reach our ten thousandth learner some time in the next twelve months—but we've still got a lot to learn. We'd welcome your suggestions about what to try next.