Building Transparency into AI Projects

In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To “prove” it was human, the company trained the AI to insert “umms” and “ahhs” into its speech: for instance, “When would I like the reservation? Ummm, 8 PM please.”
As algorithms and AIs become ever more embedded in people’s lives, there is a growing demand for transparency about when an AI is used and what it’s being used for. That means communicating why an AI solution was chosen, how it was designed and developed, on what grounds it was deployed, how it’s monitored and updated, and the conditions under which it may be retired. Building in transparency has four specific benefits: 1) it decreases the risk of error and misuse, 2) it distributes responsibility, 3) it enables internal and external oversight, and 4) it expresses respect for people. Transparency is not an all-or-nothing proposition, however. Companies need to strike the right balance on how transparent to be with which stakeholders.