In the past few years, we’ve seen a lot of rapid changes in the working world: easier access to data for all aspects of business, increased emphasis on employee experience and happiness, transparent reporting on diversity and inclusion efforts, ever more sophisticated emoji to liven up your company Slack channel. But for all the advances we’ve made, one area tends to remain squarely in the past: the performance review process.
At most companies, performance reviews involve the same basic components: a manager is asked to rate their direct reports, and the results of this process lead to compensation and promotion decisions.
You may have seen a few variations on this theme: Some companies ask more people to be involved in the review process. Some use elaborate points systems. Some even give employees the chance to rate their peers and managers in return. But most of these processes neglect a simple but crucial fact: They rely on human judgment, and humans are extremely susceptible to all sorts of biases.
Bias is not inherently a bad thing—it’s basically the brain’s way of taking a shortcut to save bandwidth. Unfortunately, when we make biased decisions in the context of a performance review, it can have a real impact on someone’s chances of promotion, raises, and professional development in the long term.
Let’s take a look at five of the most common types of bias that can impact performance reviews.
1. Idiosyncratic rater effect
The idiosyncratic rater effect refers to the fact that people tend to rate another person’s skills based on their own strengths and weaknesses. For example, a manager with strong presentation skills will rate their direct report’s presentation skills against their own (rather than comparing the employee to peers or assessing their improvement over time).
A study published in the Journal of Applied Psychology in 2000 revealed that 62% of the variance in the ratings could be accounted for by individual raters’ peculiarities of perception. Actual performance accounted for only 21% of the variance.
“When we look at a rating we think it reveals something about the ratee, but it doesn’t, not really. Instead it reveals a lot about the rater,” writes HR consultant Marcus Buckingham in the Harvard Business Review.
2. Central tendency error
Central tendency error occurs when reviewers use a numbered scale to assess an employee’s abilities or performance. Most reviewers will tend to place whoever they’re evaluating in the middle of the scale and avoid giving a high or low score. This means that the average score may not reflect an employee’s true performance and will not give employees an accurate assessment of their strengths and areas for improvement. This can especially hurt employees’ chances for development in the long term: if they think they’re satisfactory in all areas, they may struggle to identify which areas to focus on improving.
3. Recency bias
Recency bias—or the likelihood of reviewers to focus more on events from the recent past—is especially troublesome when performance reviews take place on an annual basis. This type of bias means that reviewers will probably neglect an employee’s performance during the first and second quarters in favor of focusing on projects that were completed more recently. And, of course, employees are susceptible to this type of bias themselves when reflecting back on their performance over the past year.
4. Confirmation bias
The next point should come as no surprise if you’ve ever gotten into an argument with a family member whose political views are strongly opposed to your own. People love to be proven right. So much so, in fact, that they will ignore or forget information that contradicts their opinion and be more likely to notice and remember information that validates it. This type of bias is called “confirmation bias” and it can be tricky to avoid. That’s why Warren Buffett actually chose a board member who fundamentally disagrees with his way of doing business. Yes, that’s right—he has someone on his board whose job is to disagree with him!
5. Gender bias
Gender bias refers to the way behavior is perceived based on gender stereotypes and can have serious implications when it comes to evaluating and advancing employees.
There are a number of studies showing that the feedback women receive in the workplace tends to be biased, inconsistent, and vague. Research by Paola Cecchi-Dimeglio shows that women are 1.4 times more likely to receive critical, subjective feedback rather than positive feedback or critical, objective feedback. Her research also reveals a double standard: the same trait can be rated positively in a man and negatively in a woman. For example, a man might be praised for his “careful thoughtfulness” while a woman is criticized for “analysis paralysis.” Another study published in Fortune showed that in tech company reviews, women were much more likely than men to receive negative personality criticism.
And research by Shelley Correll and Caroline Simard demonstrated that women tended to get vaguer feedback that was less tied to business outcomes. It’s not just women who are affected, either—it’s especially difficult to give critical feedback across any dimension of difference, such as gender, race, or age.
What comes next?
Now that you’ve seen some of the most common ways that bias can affect the review process, you might be wondering what comes next. How can you make changes to your existing process to mitigate the biases that come into play? One of the best solutions is to create a structured approach to reviews and feedback. Creating consistent questions and prompts will help ensure that all employees are assessed in the same manner. And collecting (and referring back to) feedback more frequently can help reduce recency bias. Google has also created a handy checklist to help you consider how to unbias your performance reviews.
And if you’d like to dive even further into this topic, check out our eBook, “Measuring Performance with Objective Evaluations.”