Somewhere in The Age of Uncertainty, Galbraith wrote that what made Das Kapital and the Bible great books was that they were so large, and so full of contradictions, that everyone could find support in them for anything they wanted. I have felt the same way about the phrase "computational thinking" ever since I attended a workshop at Microsoft Research in September 2007. In one of the breakout sessions, six of us tried to operationalize our understanding of the term by coming up with questions for a quiz that could be given to someone to determine if he or she was thinking computationally. It quickly became clear that we meant very different things when we used those two words. It was also clear (to me at least) that this ambiguity was socially very useful, since (to switch metaphors) it allowed people to attend the same church while disagreeing on the nature of salvation. It's not a polite fiction per se, but rather a—um, damn, I don't know the word—a thing that no one looks at closely because doing so would cause discomfort or friction.
Eventually, though, things do have to be looked at closely. In this case, it's the productivity of scientific programmers. Based on feedback from people who've taken it, I believe that Software Carpentry significantly increases how much scientists can do with computers, but I don't have anything that would pass muster as "proof". I'm actually not even sure what form such proof would take, since I don't know how to measure the productivity of programmers of any other kind either—not in any reasonable amount of time, anyway. (Waiting to see whether alumni produce more papers would take at least a couple of years, maybe more.) If someone could figure out how to measure computational thinking ability, on the other hand, before-and-after testing might be good enough. Any thoughts?