While there has been much discussion of racial disparities and biases in debates about algorithmic fairness, there has been far less discussion of how race and racial categories are themselves conceptualized in these debates. This paper fills that gap in the literature, explaining what it means to think of race as a structural, institutional, and relational phenomenon, and why doing so can help us recognize the structural aspects of algorithmic unfairness.
• What is the “operationalization” of race, and how does it differ from the use of racial variables?
• What are the different ways that race can be and has been conceptualized, and why does this matter for algorithmic fairness?
• What should AI researchers and practitioners be doing to ensure that the algorithmic systems they build are anti-racist?
• Algorithmic fairness must involve thinking about how race is conceptualized and operationalized in data, and the historical and social context surrounding that operationalization
• Algorithmic systems that are “fair” under one conceptualization of race can be unfair under another
• AI researchers and practitioners must take a more proactive role in understanding and reporting how their algorithmic systems encode race, and ensuring that this encoding is appropriately used