Nearly everyone has personal experience with, and opinions about, student evaluation surveys.
You, your parents, and probably your grandparents have all been tortured with these seemingly pointless forms: student evaluations of teaching, student ratings, course evaluations, teacher feedback, whatever we choose to call them.
Few things have caused more of a commotion in the academic community than these evaluations. Students despise them and teachers often fear them. So what are they actually good for? Let’s start with a little history to get some perspective.
Student evaluations of teaching (SETs) made their first public appearance in a higher education setting close to 100 years ago, in 1920 at the University of Washington. After that, usage spread rapidly across North America, reaching nationwide adoption by the end of the 1960s. The procedure was initially introduced to fill three different needs: 1) Students wanted a say in how teaching was practiced. 2) Administrators wanted to improve teaching for the sake of accountability and public relations. 3) Junior faculty wanted a way of advancing their careers beyond the number of academic publications alone.
Since their introduction, SETs have been the subject of ongoing and intense debate regarding their reliability, validity, and accountability. Many questions have been raised over and over during the last hundred years:
- Are students really the best judges as to how well knowledge is transmitted?
- Aren’t SETs just a measure of how popular the teacher is?
- Won’t results be biased?
- How do we interpret results and make effective change in teaching?
- Everyone knows that challenging courses cause lower SET scores, so how can the scores measure teacher effectiveness?
100 years of use have resulted in 100 years of research material, and there are numerous scientific reports arguing both for and against the usefulness of student evaluations of teaching. In fact, SETs and course evaluations have been the single most researched topic with respect to teaching in higher education. Almost a full century of studies have debated this controversial subject in academic publications. A search in the Education Resources Information Center database alone produces 3,477 references to “student evaluation of teacher performance”.
After a few years of relative calm in the debate, Philip Stark and Richard Freishtat, researchers at the University of California, published an article in 2014 titled “An Evaluation of Course Evaluations”. Their work dropped like a bombshell and gained massive publicity in the teaching community.
Stark and Freishtat looked at evaluations from a statistical standpoint and argued, as many before them have, that SETs are rarely effective as a tool for measuring teaching effectiveness.
A storm of blog posts followed, citing the article and stirring up contempt for SETs, with many advocating their immediate abolition. Titles describing evaluations as one of the worst things ever to happen to education succeeded one another, each more inventive than the last. How about these, for example?
“Student Course Evaluations Get an F”
“Better Teachers Receive Worse Student Evaluations”
“Best Way for Professors to Get Good Student Evaluations? Be Male.”
“Student Evaluations: Feared, Loathed, and Not Going Anywhere”
In the wake of this blog movement, the attitude towards course assessments has been poor. To sum up some of the voices: “There’s no need to ask students what they think. They can’t provide us any answers”, “Students aren’t teaching experts and can’t judge whether a teacher’s methods are effective or not”, “Student surveys should be abolished”, “Students aren’t able to provide meaningful feedback”.
Why, then, do at least 97% of the world’s universities still rely on this seemingly worthless method of assessing courses and teachers? How can this lousy practice still be in use? Surely anything must be better than SETs! Why are we stuck in the Middle Ages? Are we going to bring back flogging as well? How come students’ voices can’t tell us anything about the quality of education? 100 years of research, and we still use the exact same system?
Just as in so many other areas of educational and academic practice, progress and the development of procedures have been slow. Despite all these years of research, many schools still evaluate both teachers and courses based on outdated values and measures.
But do these blog posts really tell the whole truth?
The social sciences are incredibly complex, and it is not unusual for two studies to produce contradictory results. That’s why it’s important to look at the arguments from all directions and not rely on just one study. Of course, provocative titles and bold claims about worthless but widely used methods make for compelling headlines. Flamboyant studies generally have a way of becoming more visible and popularized than less sensational research, as many tend to share them widely and freely.
Complexity is added by the fact that no two educators are identical, evaluation tools differ widely in quality and validity, and each class is composed of a different mixture of students, each with different backgrounds and preconditions.
A definitive truth can therefore be hard to produce, but we can always examine what scholars have had to say over the last 100 years to come a bit closer.
Follow us on this journey towards a universal understanding of the many factors that impact how student evaluations of teaching are perceived in higher education. The next part will cover what purpose SETs fill in educational settings and what some of the known issues are.
This post first appeared here.