A Practical Guide to Building Ethical AI

Companies are leveraging data and artificial intelligence to create scalable solutions, but they are also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs faces a regulatory probe over an AI algorithm that allegedly discriminated against women by granting them smaller credit limits than men on their Apple Cards. And Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.
Companies are quickly learning that AI doesn't just scale solutions; it also scales risk. In this environment, data and AI ethics are business necessities, not academic curiosities. Companies need a clear plan for the ethical quandaries this new technology introduces. To operationalize data and AI ethics, they should:

1. Identify the existing infrastructure that a data and AI ethics program can leverage.
2. Create a data and AI ethical risk framework tailored to their industry.
3. Change how they think about ethics by taking cues from successes in health care.
4. Optimize guidance and tools for product managers.
5. Build organizational awareness.
6. Formally and informally incentivize employees to play a role in identifying AI ethical risks.
7. Monitor impacts and engage stakeholders.