Want Less-Biased Decisions? Use Algorithms.

A quiet revolution is taking place. In contrast to much of the press coverage of artificial intelligence, this revolution is not about the ascendance of a sentient android army. Rather, it is characterized by a steady increase in the automation of traditionally human-based decision processes throughout organizations all over the country. While advancements like AlphaGo Zero make for catchy headlines, it is fairly conventional machine learning and statistical techniques — ordinary least squares, logistic regression, decision trees — that are adding real value to the bottom line of many organizations. Real-world applications range from medical diagnoses and judicial sentencing to professional recruiting and resource allocation in public agencies.
Is the rise of algorithmic decision making a good thing? There seems to be a growing cadre of authors, academics, and journalists who would answer in the negative. At the heart of this work is the concern that algorithms are often opaque, biased, and unaccountable tools being wielded in the interests of institutional power. These critiques and investigations are often insightful and illuminating, and they have done a good job of disabusing us of the notion that algorithms are purely objective.

But there is a pattern among these critics: they rarely ask how well the systems they analyze would operate without algorithms. And that is the most relevant question for practitioners and policy makers: How do the bias and performance of algorithms compare with those of human beings? It's no secret that algorithms are biased. But the humans they are replacing are significantly more biased. After all, where do institutional biases come from if not the humans who have traditionally been in charge?