We asked the ~500 students in my Tools in Data Science course in Jan 2024 to create data visualizations.
They then evaluated each other’s work. Each person’s work was evaluated by 3 peers. The evaluation was on 3 criteria: Insight, Visual Clarity, and Accuracy (with clear guidance on how to evaluate each).
I was curious to see what we could learn about student personas from their evaluations.
15% are lazy. Or they want to avoid conflict. They gave every single person full marks.
4% are lazy but smart. They gave everyone the same marks, but around 80%, not 100%. A safer strategy.
10% are extremists. They gave full marks to some and zero to others. Maybe they have strong or black-and-white opinions. In a way, this offers the best opportunity to differentiate students, provided the judgement is unbiased.
8% are mild extremists. They gave marks covering an 80% spread (e.g., 0% to some and 80% to others, or 20% to some and 100% to others).
3% are angry. They gave everyone zero marks. Maybe they’re dissatisfied with the course, the evaluation, or something else. Their scoring was also the most different from their peers’.
3% are deviants. They gave marks that were very different from others’. (We’re excluding the angry ones here.) 3 were positive, i.e. gave far higher marks than their peers, while 11 were negative, i.e. gave far lower marks than their peers. Either they perceive the work very differently from others or they are marking randomly.
This leaves ~60% of the group who provided a balanced distribution: a reasonable spread of marks, not too different from their peers.
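For the curious, here is a minimal sketch (in Python with pandas) of how such a classification could be done. It assumes a hypothetical table of peer evaluations with one row per (evaluator, submission) pair and marks normalized to 0–100; the file name, column names, and thresholds are illustrative, not the actual analysis.

```python
import pandas as pd

# Hypothetical input: one row per evaluation, with columns
# evaluator, submission, mark (0-100). Real data and thresholds may differ.
scores = pd.read_csv("peer_evaluations.csv")

# Average mark each submission received, used to measure how far an evaluator
# deviates from peers (for simplicity, their own mark is included in the mean).
submission_mean = scores.groupby("submission")["mark"].transform("mean")
scores["deviation"] = scores["mark"] - submission_mean


def classify(marks_given: pd.DataFrame) -> str:
    """Assign a persona to one evaluator based on the marks they gave."""
    marks = marks_given["mark"]
    spread = marks.max() - marks.min()

    if (marks == 100).all():
        return "lazy (full marks to everyone)"
    if (marks == 0).all():
        return "angry (zero marks to everyone)"
    if marks.nunique() == 1:
        return "lazy but smart (same mark to everyone)"
    if spread == 100:
        return "extremist (full marks to some, zero to others)"
    if spread >= 80:
        return "mild extremist (80%+ spread)"
    if abs(marks_given["deviation"].mean()) >= 30:  # illustrative threshold
        return "deviant (far from peer consensus)"
    return "balanced"


personas = scores.groupby("evaluator")[["mark", "deviation"]].apply(classify)
print(personas.value_counts(normalize=True).mul(100).round(1))
```

With only 3 evaluations per submission, a better deviation measure would exclude the evaluator’s own mark from the submission mean, but the idea is the same.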
Since this is the first time that I’ve analyzed peer evaluations, I don’t have a baseline to compare against. But personally, what surprised me most was the presence of the (small) angry group, and that there were so many extremists (with a spread of 80%+), which is actually helpful for distinguishing capability.