"Was my contribution fairly reviewed?" A framework and an empirical study of fairness in Modern Code Reviews
Modern code reviews improve the quality of software products. Although modern code reviews rely heavily on human interaction, little is known about whether they are performed fairly. Fairness plays a role in any process in which decisions that affect others are made. When a system is perceived to be unfair, it negatively affects the productivity and motivation of its participants.
In this paper, using fairness theory, we create a framework that describes how fairness affects modern code reviews. To demonstrate its applicability, and the importance of fairness in code reviews, we conducted an empirical study that asked developers of a large industrial open source ecosystem (OpenStack) about their perceptions of fairness in their code review process. Our study shows that, in general, the code review process in OpenStack is perceived as fair; however, a significant portion of respondents perceive it as unfair. We also show that the variability in how reviewers prioritize code reviews signals a lack of consistency and the existence of bias (potentially increasing the perception of unfairness).
The contributions of this paper are: we propose a framework, based on fairness theory, for studying and managing social behaviour in modern code reviews; we provide support for the framework through the results of a case study of a large industry-backed open source project; we present evidence that fairness is an issue in the code review process of a large open source ecosystem; and we present a set of guidelines for practitioners to address unfairness in modern code reviews.