Each semester, my mid-term exam contains a single survey question. The question has asked about student use of Blackboard, student confidence, and student awareness of a relevant economic development, among other topics. A major purpose of the question is to gain insight into student effort.
No potential credit is listed for the survey question, no penalty is indicated for a lack of response, and the question is not even mentioned when the exams are handed out. Answering it is therefore completely voluntary.
In theory, responding to a strictly voluntary task with no certain reward should serve as a proxy for student effort, and increased student effort should yield some return in terms of student performance.
Students who answered the question have typically outperformed those who did not. The mean score of respondents has ranged from 0.6 to 0.8 standard deviations above the mean score of non-respondents. On the just-concluded Spring 2013 mid-term exam, respondents held a mean 4.7-point (0.64σ) advantage. In addition, 67% of those who scored below 70 on the mid-term failed to answer the question, while 100% of those who scored 80 or above answered it.
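The respondent advantage above is expressed as a standardized mean difference (the gap in mean scores divided by a pooled standard deviation). A minimal sketch of that calculation, using made-up score lists rather than the actual class data:

```python
from statistics import mean, stdev

def standardized_gap(respondents, non_respondents):
    """Standardized mean difference between two groups of exam scores,
    using the pooled standard deviation (Cohen's d style)."""
    m1, m2 = mean(respondents), mean(non_respondents)
    s1, s2 = stdev(respondents), stdev(non_respondents)
    n1, n2 = len(respondents), len(non_respondents)
    # Pool the two group variances, weighted by degrees of freedom
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical scores for illustration only, not the real exam results
answered = [82, 78, 88, 91, 75, 84]
skipped = [70, 65, 74, 80, 68]
gap = standardized_gap(answered, skipped)  # positive if respondents score higher
```

With real data, a gap of 0.64 would correspond to the Spring 2013 result of a 4.7-point mean advantage against a pooled standard deviation of roughly 7.3 points.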
To further examine whether the survey question is useful for judging student effort, I matched the responses to student completion of the course's major deliverable to date: an outline for a term paper worth 20% of the course grade. Of the students who did not answer the question, 57% had either not turned in their outline or had turned it in late, and 75% of those students had also failed to complete a case assigned as homework. In contrast, every student who answered the survey question had turned in the outline on time.
It should be noted that the sample sizes are still fairly small, so a degree of uncertainty remains. However, the persistence of the results suggests the approach has enough merit to incorporate into subsequent classes. Beginning next semester, it will be folded into the diagnostic exam I give at the start of each semester. Even if some students who devote substantial effort are flagged for intervention, there is no real drawback: extra work will only facilitate their learning, while those truly at risk will benefit from the extra attention.
Finally, a student's attendance might be viewed as a simpler alternative. In my opinion, it is not. Attendance is largely a passive activity that requires minimal effort; by itself, it only ensures that a student is exposed to course content. It does not guarantee that the student will make the effort required to transform class lectures and discussions into lasting knowledge. Evidence of initiative, on the other hand, suggests that a student is probably making an active effort to learn, and that active effort makes it more likely that course content becomes knowledge.
In sum, the survey question appears to be a potentially good proxy for student effort. Student responsiveness to the question therefore offers an opportunity for intervention, even when the need for intervention is not apparent from the mid-term grade alone. The consistency of the outcomes across classes suggests the approach could be used to identify and target at-risk students at the beginning of the semester, rather than after the mid-term. If that approach works, students should realize more of their learning potential by the end of the semester.