
Student evaluations seen as beneficial

Departments use varied methods to ensure fair assessment of teaching quality in tenure review

Amid the rising tide of grade inflation, some faculty members have voiced concerns that student course evaluations may incentivize professors seeking tenure and promotion to award higher grades. But others dispute that the grade a student receives affects his or her evaluation of a faculty member, citing the variety of other ways teaching quality is assessed as safeguards against bias in performance evaluation.

Student course evaluations partly factor into determining faculty members’ salaries and their credentials for tenure and promotion, said Dean of the Faculty Kevin McLaughlin P’12.

A junior faculty member may “be an easy grader, shorten class meetings and give easier coursework” to earn favorable student course evaluations and better the odds of receiving tenure, said Stephen Nelson, a higher education expert and senior scholar in the Leadership Alliance at Brown.

A student who is performing poorly in the course will “automatically blame the faculty member,” Nelson said.

But some faculty members and administrators disagreed, saying that grades do not necessarily affect course evaluations.

“I don’t think that you can get better evaluations by just giving out good grades,” said Rashid Zia, assistant professor of engineering. “Students mainly care about what they get out of the class.”

McLaughlin also said his experience does not bear out these claims. “Students by and large do not give better evaluations just because they get better grades,” McLaughlin said.

The University uses a range of methods to assess teacher performance, reducing the danger that negative student course evaluations motivated by poor grades will affect a faculty member’s candidacy for tenure or promotion, administrators and faculty members said.

“Course evaluations are just one measure of a professor’s effectiveness in teaching,” wrote Assistant Professor of History Linford Fisher in an email to The Herald. “The University is increasingly relying upon peer classroom observations, which, when combined with student evaluations, can provide a more robust sense of teaching effectiveness.”

McLaughlin said individual departments also have their own systems for evaluating teaching, which must be approved by the Tenure, Promotion and Appointments Committee and the dean of the faculty.

“We want departments to have multiple methods of evaluations, such as peer review, evaluations of syllabi (and) textbook selections,” McLaughlin said.

The Department of Comparative Literature has “an independent process of peer evaluation by tenured faculty of junior faculty that has been in place for at least 30 years,” said Karen Newman, professor of comparative literature and chair of the department.

“After visiting the class, the senior faculty member writes a review, meets with the junior faculty member to discuss it, and it then goes into his or her file,” Newman added.

In recent years, the Department of History has begun to emphasize qualitative answers rather than numerical ratings, Fisher wrote.

Several administrators and faculty members said moving the student course evaluation system online has increased its effectiveness.

The shift to an online evaluation system has provided “more standardization because now there is a baseline of overall assessment of an instructor or class,” McLaughlin said, adding that administrators can now compare evaluations across departments.

Zia said the online evaluation system is “definitely better than the paper system,” adding that it encourages a higher participation rate, which allows for more feedback.

“Some people don’t go to class, and we cannot get their feedback” through course evaluations distributed in class, Zia said. “But why they don’t go to class would actually be good feedback.”

In the fall 2013 student evaluations for ENGN 0510: “Electricity and Magnetism,” a requirement for engineering concentrators, one student indicated that he attended fewer than half of the classes because lectures were recorded and available online, Zia said.

Though student course evaluations may help motivate teachers to improve, McLaughlin said he does not know how much attention full professors pay to student course evaluations. “I guess eventually everyone reads their evaluations,” McLaughlin said.

Despite concerns among some faculty members about the accuracy of student course evaluations, McLaughlin said “students provide valuable information” that should be considered in reaching tenure and promotions decisions.

Student feedback may be an “imperfect” form of evaluation, Zia said, but it should be part of tenure review and “can be balanced with other assessments such as peer reviews.”

“I read (my evaluations) carefully at the end of each semester, trying to find ways to make improvements and tweaks for the next time I offer the course,” Fisher wrote.

In addition to the University’s official course evaluations, the Critical Review, a student organization, distributes its own paper evaluation forms to gather student opinion at the end of each semester.

The Department of English abandoned the Critical Review in 2010 when McLaughlin was the chair of the department.

“I didn’t want instructors to use class time to have students fill out two sets of forms” since the University already has an evaluation system, McLaughlin said. “Class time should be used for instruction.”
