Teaching basic lab skills
for research computing

Assessing Assessment

Jason Williams and I met with two consultants at the University of Texas at the end of September to get feedback on Software Carpentry's post-workshop survey. They gave us detailed suggestions for improving six of the questions, and felt the rest were OK as they are. The feedback is given below; even without the whole questionnaire (which we will post shortly), we hope it's helpful.

  1. Change "I learned valuable skills" to "I learned skills that I can use in my research" because valuable is too vague and subjective.

  2. "The instructors/helpers were effective" is vague. Maybe add an "(e.g. knowledgeable, good communicators)" or make multiple specific questions.

  3. There were a few suggestions/options for improving the question, "Before the workshop, did you feel any of the following topics were intimidating?"

    1. Change to a sliding scale response to allow finer resolution between intimidating and not intimidating.

    2. Change the response to a Likert scale and rephrase the question to something like, "For each topic, how well do you identify with the statement 'Before the workshop, this topic was very intimidating.'" Or, you could use a "describes my feelings" scale or agree-disagree scale.

    3. Also, students may not be able to accurately self-report their pre-workshop attitudes after taking the workshop. This question could be moved to a pre-assessment survey.

  4. There were a few suggestions for, "After taking this workshop, which statement best reflects how you feel about learning more on the following subjects?"

    1. Change the question to, "After taking the workshop, how interested are you in learning more about the following topics?"

    2. New responses could be something like 'uninterested/not relevant', 'uninterested/already know enough', 'slightly uninterested', 'neither uninterested nor interested', 'slightly interested', 'interested'.

  5. For the second skills question, we might be able to measure both what was covered in the workshop and what skills students gained by using a side-by-side question format. For example:

    1. The question would be, "For the following tasks, please rate the quality of instruction and how confident you are in your ability to perform the task."

    2. Scale for the quality-of-instruction part: 'poor' to 'excellent', or 'below average' to 'above average'.

    3. Scale for the task-performance part: 'no chance' to 'good chance', or 'easy' to 'difficult'.

    4. Change "sex" to "gender" for demographic questions.
