Student Evaluations Aren't Really the Point
When stripped of the academic haughtiness, professors are employed as intellectual workers. They have a commodity that they are selling (or if you prefer they are tradesmen applying their skills). This is neither a complaint nor a criticism. It is just a way of looking at the problem.
The problem is that, in the present state, there is little to no accountability. Whether the topic is evaluations, adjuncts, tenure, etc., the REAL topic is accountability: whether a worker is held accountable at all, and how and when (if ever, once tenured).
Student evaluations are used by deans as one data point about a class. That is right and good. One should not judge a class on anonymous evaluations alone, but one can use the aggregate evals to establish a baseline of feedback over consecutive years. That baseline can then be used to flag deviant semesters. Still good.
BUT, and the adjunct in me gets riled here, when faculty are not held accountable for the base measures of their job, the entire system is faulty, corrupt, and abominable. What are these base measures?
- Quality of materials
- Time in class
- Real availability out of class
- Ability to communicate both knowledge and knowledge skills.
The first part of the list is relatively easy to measure. If a college requires (as I think it should) all materials to be accessible online, then a quick look by admin can determine the depth and breadth of those materials. Do they cover the learning objectives? Are they complete?
Time in and out of class is also relatively easy to control. Require profs to post open office hours. Then have a hired gun (a work-study student?) drop by to check on posted hours. If no prof is there, check with the department desk. If there is no plausible reason (sick kid, etc.), then issue a demerit (a write-up) of some sort that carries true consequences (read: monetary).
The last item is perhaps the toughest to implement. We all know the best profs/teachers are the ones who are able to transcend the material and present it in a way that “brings it alive.” I think this really means that they are able to communicate skills on how to approach, compile and organize the knowledge. That is, they give the facts as well as the meaning. But, how does one test this?
My undergrad had a method of ensuring a baseline of writing skills, and I think it could serve as a model here. Each year, at the end of the Fall semester, all of the ENG101 kids had to write an essay on an established topic (different each year). They were to produce an organized, coherent, and plausible essay in about two hours. The papers were then read by ALL of the department, each paper getting two readings on a scale of 1-5. If the two scores differed by more than one point (a 2 and a 4, or a 1 and a 4, for example), the paper got a third reading, without the third reader being told what the previous two scores were. I worked in the department for four years, and I never saw a third reading fail to agree with one of the previous scores.
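The double-reading rule is simple enough to state as code. A minimal sketch (the function name is mine, and it assumes integer scores on the 1-5 scale described above):

```python
def needs_third_reading(score_a: int, score_b: int) -> bool:
    """Each essay gets two independent readings on a 1-5 scale.
    A gap of more than one point sends the paper to a third reader,
    who is never told the first two scores."""
    return abs(score_a - score_b) > 1

# A (3, 4) pairing is close enough; a (1, 4) pairing is not.
print(needs_third_reading(3, 4))  # False
print(needs_third_reading(1, 4))  # True
```

The point of the blind third reading is that its agreement with one of the first two scores validates the scale itself, which is why the department could trust the aggregate results.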
What the exit test ensured was that all Freshmen would have a modicum of writing talent OR would receive the necessary remediation. Good for the students. It also ensured that all of the faculty contributed to the advancement of student writing (good for the student and the department). And the reading, once begun, ran faster than one might think. Any comp instructor would agree: after a while, you just know where on the scale a piece of writing falls.