A lot goes on behind the scenes here at software-carpentry.org:
- The site itself is WordPress with a partly customized theme. We use the blog for posts like this one, and pages (over a hundred of them) for lecture topics. We used to manage work items with Trac, but nobody kept it up to date; these days, we use a WordPress to-do list plugin for the same purpose, with just as little result.
- Our videos are hosted on YouTube—we used to store them locally, but performance improved a lot when we offloaded.
- We manage our mailing lists and version control repositories through the Dreamhost control panel, which actually delegates mailing list management to Mailman.
- The calendar and map are hosted by Google.
- We do event registration through EventBrite.
- We currently use BlueJeans and Skype for web conferencing, but both have been plagued with technical and social difficulties: people need the right Skype client for their OS, and there are the usual problems with unmuted microphones, unintelligible audio, feedback loops, and so on. Forget flying cars: I'll believe the future has arrived when we can make this work...
This analysis leaves me feeling a bit conflicted. When I think about what we should teach researchers about the web, I have three requirements:
- They should be able to build solutions to problems they actually have.
- They shouldn't create egregious security holes.
- They should be able to debug things on their own when they go wrong.
Since people can only debug things they understand, #3 depends on them understanding how the web works. One test of that is whether they recognize that they shouldn't have to log in and out of different sites in order to move information around manually. But if we don't have a solution to that problem (yet), are we really doing them a favor by pointing out that it actually doesn't have to hurt this much?
(Tweaking code more or less randomly until it appears to work doesn't count as "debugging" in my book.)
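To make the "moving information around" point concrete, here is a minimal sketch of the kind of web literacy I have in mind. A researcher who understands HTTP can replace log-in-and-copy-paste with one programmatic request. The toy server below stands in for any site that publishes data at a URL; the workshop list and every name in the code are made up for illustration, not taken from any real service.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EventHandler(BaseHTTPRequestHandler):
    """A pretend 'site' that serves a workshop list as JSON."""
    def do_GET(self):
        body = json.dumps({"workshops": ["Toronto", "Warsaw"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def fetch_workshops(port):
    """Client side: one GET request replaces a manual copy-and-paste."""
    with urlopen(f"http://localhost:{port}/workshops.json") as resp:
        return json.load(resp)["workshops"]

# Port 0 asks the OS for any free port, so the demo never collides.
server = HTTPServer(("localhost", 0), EventHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
workshops = fetch_workshops(server.server_address[1])
print(workshops)
server.shutdown()
```

Someone who can read this sketch also has a fighting chance of debugging it when it breaks, because every step (request, status code, headers, body) is visible rather than hidden behind a browser.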