We Need Transparency in Algorithms, But Too Much Can Backfire

In 2013, Stanford professor Clifford Nass faced a student revolt. Nass's students claimed that those in one section of his technology interface course received higher grades on the final exam than counterparts in another. Unfortunately, they were right: two different teaching assistants had graded the two sections' exams, and one had been more lenient than the other. Students with similar answers had ended up with different grades.
Companies and governments increasingly rely on algorithms to make decisions that affect people's lives and livelihoods, from loan approvals to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from product recommendations to dating matches. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But transparency can backfire and cause confusion if not implemented carefully. Fortunately, there is a smart way forward.

Users should be able to demand the data behind the algorithmic decisions made for them, including in recommendation systems, credit and insurance risk systems, advertising programs, and social networks. This tackles "intentional concealment" by corporations. But it doesn't address the technical challenges associated with transparency in modern algorithms. Here, a movement called explainable AI (xAI) might be helpful. xAI systems work by analyzing the various inputs used by a decision-making algorithm, measuring the impact of each input individually and in groups, and finally reporting the set of inputs that had the biggest impact on the final decision.
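To make the xAI idea concrete, here is a minimal sketch of one common approach: replace each input with a neutral baseline value, measure how much the model's score moves, and rank inputs by that impact. The `loan_score` model and all its weights are hypothetical, invented purely for illustration; real xAI tools use more sophisticated methods, and measuring grouped inputs would extend the same loop to combinations.

```python
def loan_score(applicant):
    # Hypothetical scoring model, used only for illustration.
    return (0.5 * applicant["income"]
            + 0.3 * applicant["credit_history"]
            - 0.2 * applicant["debt"])

def input_impacts(model, applicant, baseline):
    """Estimate each input's individual impact: how far the score moves
    when that input is swapped for its baseline value."""
    original = model(applicant)
    impacts = {}
    for name in applicant:
        perturbed = dict(applicant)
        perturbed[name] = baseline[name]  # neutralize one input at a time
        impacts[name] = abs(original - model(perturbed))
    # Report inputs ranked by impact, biggest first.
    return sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)

applicant = {"income": 80, "credit_history": 60, "debt": 40}
baseline = {"income": 50, "credit_history": 50, "debt": 50}
ranking = input_impacts(loan_score, applicant, baseline)
# Here income dominates the decision, so an xAI report would surface it first.
```

The same loop run over pairs or groups of inputs would capture interaction effects, which is what the "individually and in groups" measurement refers to.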