Here is a response on evaluations for those non-Dean-Dad-reading folk:
I like the idea of grade tendency reports. As an instructor, I would like to know how my grading compares with that of my peers. Grade inflation is a concern among part-timers as well.
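For illustration, here is a minimal sketch of what such a report might compute. The instructor names and grades are invented; a real report would pull section grades from the registrar:

```python
from statistics import mean

# Hypothetical grades (on a 4.0 scale) awarded in comparable sections.
# Real data would come from institutional records; these names are invented.
section_grades = {
    "Instructor A": [4.0, 3.7, 3.3, 3.0, 2.7, 4.0],
    "Instructor B": [3.0, 2.7, 2.3, 3.3, 2.0, 3.0],
    "Instructor C": [3.7, 3.3, 3.0, 2.7, 3.3, 3.0],
}

dept_mean = mean(g for grades in section_grades.values() for g in grades)

for name, grades in sorted(section_grades.items(),
                           key=lambda kv: mean(kv[1]), reverse=True):
    m = mean(grades)
    # A large positive deviation may hint at relatively lenient grading.
    print(f"{name}: section mean {m:.2f} "
          f"({m - dept_mean:+.2f} vs. department mean {dept_mean:.2f})")
```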
I wonder, though, about Chris's idea of tracking student performance before or after a given course. He seems to want to find evidence of the life-changing teachers, which is laudable but statistically problematic. Say I am teaching an intro composition course. My pool of students will most likely be entering freshmen, who have no prior data against which to correlate post-class performance.
OK, say I have a sophomore-level course. The numbers would also be skewed, because those who move on to their junior year are by definition the more successful students, so the overall university attrition rate would have to be factored in, as the sketch below suggests.
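To make that survivorship problem concrete, here is a small sketch with made-up numbers, purely to show the direction of the bias (the GPAs and cohort are hypothetical):

```python
# Hypothetical before/after GPAs for a sophomore-level course.
# Students who later left the university have no "after" value.
students = [
    {"before": 3.2, "after": 3.4},
    {"before": 3.0, "after": 3.1},
    {"before": 3.5, "after": 3.6},
    {"before": 2.2, "after": None},  # left before junior year
    {"before": 1.9, "after": None},  # left before junior year
]

observed = [s for s in students if s["after"] is not None]

# Naive measure: compare before/after only among students still enrolled.
naive_gain = (sum(s["after"] for s in observed) / len(observed)
              - sum(s["before"] for s in observed) / len(observed))

cohort_before = sum(s["before"] for s in students) / len(students)
survivor_before = sum(s["before"] for s in observed) / len(observed)

print(f"retention rate: {len(observed) / len(students):.0%}")
print(f"naive before-to-after gain (survivors only): {naive_gain:+.2f}")
# Survivors entered stronger than the full cohort, so a survivors-only
# comparison flatters the course unless attrition is factored in.
print(f"mean prior GPA, full cohort vs. survivors: "
      f"{cohort_before:.2f} vs. {survivor_before:.2f}")
```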
Jumping topics: what is needed, it seems, is better metrics.
Student evaluations often don't ask specific, concrete questions.
* Did the course follow the syllabus? (getting what you signed up for)
* Were the reading load and assignments spaced evenly through the semester? (consideration of student lives and respect for their time)
* Was the instructor accessible for questions? Specifically: were office hours held as posted, e-mails answered, a phone number given, IM availability offered, etc.?
* Were the course objectives met? (which assumes that a course has demonstrable objectives)
* Did the student feel respected as a professional? (subjective, but indicative of the instructor's orientation toward the learner--the more successful instructors treat the student with respect, even while holding them to a high standard of work)
This list, of course, may be added to and expanded, but it seeks to move the questions from impression-based ones ("The instructor used effective teaching methods.") to objective and demonstrable facts; a rough sketch of how such items might be tallied follows.
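For illustration only, here is a minimal sketch of tallying concrete yes/no items like those above. The question wording follows my list; everything else (the sample responses, the function name) is invented:

```python
# Hypothetical evaluation form built from concrete, verifiable items
# rather than impression-based Likert prompts.
QUESTIONS = [
    "Did the course follow the syllabus?",
    "Were readings and assignments spaced evenly through the semester?",
    "Was the instructor accessible (office hours, e-mail, phone, IM)?",
    "Were the stated course objectives met?",
    "Did you feel respected as a professional?",
]

def tally(responses):
    """responses: one list of True/False answers per student, one per question."""
    counts = [0] * len(QUESTIONS)
    for answers in responses:
        for i, yes in enumerate(answers):
            counts[i] += yes
    n = len(responses)
    for question, c in zip(QUESTIONS, counts):
        print(f"{c}/{n} yes  - {question}")

# Made-up sample: three students answering the five items.
tally([
    [True, True, True, True, True],
    [True, False, True, True, True],
    [False, True, True, False, True],
])
```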
Here is an evaluation resource that begins to approach what I am calling for.