When — and Why — You Should Explain How Your AI Works
Four scenarios in which companies should be prepared to explain an algorithm’s predictions.
August 31, 2022
Summary.
AI adds value by identifying patterns so complex that they can defy human understanding. That can create a problem: AI can be a black box, which often renders us unable to answer crucial questions about its operations. That matters more in some cases than others. Companies need to understand what it means for AI to be “explainable” and when it’s important to be able to explain how an AI produced its outputs. In general, companies need explainability in AI when: 1) regulation requires it, 2) it’s important for understanding how to use the tool, 3) it could improve the system, and 4) it can help determine fairness.

“With the amount of data today, we know there is no way we as human beings can process it all… The only technique we know that can harvest insight from the data is artificial intelligence,” IBM CEO Arvind Krishna recently told the Wall Street Journal.