Selecting the Right AI/ML Problems

Ethical Implications of the Problem

In addition to evaluating the fundamental problem-solving capabilities of AI/ML techniques, practitioners should also consider up front the ethical implications of a machine learning project.

Ethical AI is a nuanced, complex, and emerging discipline, which means there are few concrete guidelines for companies to follow today. Some technology companies are crafting their own AI principles, while others are hiring chief ethics officers to set guidelines and steer organizations toward responsible actions.

AI systems face several critical ongoing ethical challenges. While a detailed treatment and analysis of AI ethics is outside the scope of this publication, managers need to be mindful of a few key themes as they move forward in this evolving and sometimes controversial area.

Fairness and Bias

The most important and frequently occurring ethical issue with enterprise AI/ML systems involves the management of fairness and bias. AI/ML algorithms fueled by big data are driving decisions about health care, employment, education, housing, and policing even as an ever-growing body of evidence shows that AI algorithms can be biased. Even models developed with the best of intentions may inadvertently exhibit discriminatory biases against historically disadvantaged groups, perform relatively worse for certain demographics, or promote inequality.

Enterprises must be concerned not only with statistical bias in their AI models (for example, selection, sampling, inductive, and reporting bias) but also with ethical fairness. Discrimination in the statistical sense – drawing distinctions in data to make predictions and classifications – is the very point of machine learning; the concern is with using statistical discrimination as the basis for unjustified differentiation. Differentiation may be unjustified because a feature is practically irrelevant (for instance, incorporating race or gender in prediction tasks such as employment) and/or morally irrelevant despite being statistically relevant (such as factoring in disability). Where race is suspected of causing unjustified bias, the “easy fix” – removing race as a feature – does not work, because race may be correlated with other features, such as ZIP code, that have knowingly or unknowingly been included in the model. Instead, the better practice is to explicitly include race as a feature and then correct for bias, as illustrated in the sketch below.
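As a minimal sketch of this practice – assuming hypothetical applicant data with a model score column `score` and a protected attribute column `race` – the following Python snippet keeps the protected attribute and applies a simple post-processing correction, choosing per-group decision thresholds so that each group is selected at the same rate. A real deployment would use a deliberately chosen fairness criterion (discussed below) rather than this illustrative one.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical scored applicants: `score` is the model's predicted probability of a
# positive outcome; `race` is retained explicitly so that bias can be corrected.
df = pd.DataFrame({
    "score": rng.random(1000),
    "race": rng.choice(["group_a", "group_b"], size=1000),
})

TARGET_SELECTION_RATE = 0.30  # desired share of positive decisions in each group

def per_group_thresholds(frame, rate):
    """Choose a decision threshold per group so each group's selection rate
    matches the target (a demographic-parity-style post-processing correction)."""
    return {
        group: np.quantile(sub["score"], 1.0 - rate)
        for group, sub in frame.groupby("race")
    }

thresholds = per_group_thresholds(df, TARGET_SELECTION_RATE)
df["decision"] = df["score"] >= df["race"].map(thresholds)

# Both groups now receive positive decisions at (approximately) the same rate.
print(df.groupby("race")["decision"].mean())
```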

In a famous example from 2018, machine learning practitioners at Amazon.com were experimenting with a new AI/ML-based recruiting system to help streamline their hiring process. No matter what the data science team did, the algorithm’s results were biased against women candidates, in spite of the significant care Amazon’s data scientists had taken to strip gender-related information from resumes. The bias arose because the historical training labels of successful hires were themselves skewed toward men. AI/ML algorithms are very good at identifying “successful” outcomes – in this case, male hires – and despite the best efforts of the data scientists, the algorithms identified and latched onto a range of features highly correlated with gender, essentially learning to predict whether a candidate would be a male hire. Amazon therefore had to stop using AI/ML algorithms for this purpose. The figure below is from a news article discussing this challenge.

Figure 23: Amazon scraps AI/ML recruiting tool because of gender bias

Other questions around bias often relate to the use of AI in other human resources (HR) decisions – for example, whether to promote people or what salaries to recommend – and in situations where AI agents determine eligibility for loans or access to health care.

Unfairness in ML systems is primarily caused by human bias inherent in historical training data, and ML models are prone to amplifying such biases. No consensus on an ideal definition of fairness exists today. Rather than attempting to resolve questions of fairness within a single technical framework, the approach should be to educate the people building ML models to examine critically the many ways machine learning affects fairness. Several fairness criteria have been developed to quantify and correct for discriminatory bias in classification tasks, including demographic parity, equal opportunity, and equalized odds. Selecting the right type of fairness matters: the wrong metric can lead to harmful decisions and risks propagating systematic discrimination at scale. When building fair ML models, it is crucial to understand when to use each fairness metric and what to consider when applying it. There is usually a trade-off between model performance and fairness.
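To make these criteria concrete, the short Python sketch below (using illustrative NumPy arrays, not real data) computes the gaps each criterion asks to be small: demographic parity compares selection rates across groups, equal opportunity compares true positive rates, and equalized odds compares both true and false positive rates.

```python
import numpy as np

def group_rates(y_true, y_pred, group, value):
    """Selection rate, true positive rate, and false positive rate for one group."""
    mask = group == value
    yt, yp = y_true[mask], y_pred[mask]
    selection_rate = yp.mean()
    tpr = yp[yt == 1].mean() if (yt == 1).any() else np.nan
    fpr = yp[yt == 0].mean() if (yt == 0).any() else np.nan
    return selection_rate, tpr, fpr

def fairness_gaps(y_true, y_pred, group, groups=("a", "b")):
    """Absolute gaps between two groups for three common fairness criteria."""
    sr_a, tpr_a, fpr_a = group_rates(y_true, y_pred, group, groups[0])
    sr_b, tpr_b, fpr_b = group_rates(y_true, y_pred, group, groups[1])
    return {
        "demographic_parity_gap": abs(sr_a - sr_b),     # selection rates should match
        "equal_opportunity_gap": abs(tpr_a - tpr_b),    # true positive rates should match
        "equalized_odds_gap": max(abs(tpr_a - tpr_b),   # TPR *and* FPR should match
                                  abs(fpr_a - fpr_b)),
    }

# Illustrative labels, predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(fairness_gaps(y_true, y_pred, group))
```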

Safety

The safety and reliability of AI systems is another critical issue. The key question here revolves around whether we trust the AI system to make reliable and appropriate decisions. Safety considerations are often thought about in the specific context of physical or real-world systems. But even information systems can have broader and cascading effects on human, economic, social, or environmental safety.

An often-cited example of AI safety centers on the ethical considerations around self-driving cars. For example, in the case of an emergency trade-off, what decisions should the car make? Should it seek to protect its passengers or should it seek to protect others?

While the self-driving car example may feel theoretical or contrived, other safety considerations are driving concrete decisions today. For example, dating back to a directive in 2012, the U.S. government has attempted to set guidelines around the use of autonomous and semi-autonomous weapons systems, seeking to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” This is an evolving space. There is no international consensus on the use of autonomous AI technologies and individual nations may make differing decisions.

Explainability and Transparency

Another frequent AI ethics concern centers on explainability, also referred to as interpretability. AI algorithms are often perceived as black boxes that make inexplicable decisions. Unlike traditional software, it may not be possible to point to any “if/then” logic to explain a software outcome to a business stakeholder, regulator, or customer. This lack of transparency can lead to significant losses if AI models – misunderstood and improperly applied – are used to make bad business decisions, and it can also result in user distrust and refusal to use AI applications.

Certain use cases – for instance, leveraging AI to support a loan decision-making process – may be reasonable financial services applications if properly vetted for bias. But the financial services institution may require that the algorithm be auditable and explainable in order to pass regulatory inspections or tests and to maintain ongoing control over the decision support agent.

In fact, the European Union’s General Data Protection Regulation (Regulation (EU) 2016/679), adopted in 2016, gives consumers the “right to explanation of the decision reached after such assessment and to challenge the decision” if it was affected by AI algorithms.

Given our current understanding, certain classes of algorithms, including more traditional machine learning algorithms, tend to be more readily explainable while being potentially less performant; others, such as deep learning systems, are more performant but remain much harder to explain. In such cases, it is recommended to deploy the AI model alongside a second “interpreter module” that can deduce which factors the AI model considered important for any particular prediction. For the more technical reader, these might include model-agnostic approaches like LIME and SHAP (Shapley values) or model-specific approaches like tree interpreters. Improving our ability to explain AI systems remains an area of active research.
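As a rough illustration of what an interpreter module does – not LIME or SHAP themselves, but the underlying intuition – the sketch below assumes a scikit-learn-style classifier with a `predict_proba` method. It attributes a single prediction by replacing each feature with a baseline value and measuring how much the predicted probability moves.

```python
import numpy as np

def explain_prediction(model, x, baseline):
    """Crude local attribution: replace each feature of `x` with its baseline
    value and report how much the model's positive-class probability changes."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    original = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = {}
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline[i]          # "remove" feature i
        p = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        contributions[i] = original - p     # positive: feature i pushed the score up
    return original, contributions

# Hypothetical usage, assuming a fitted classifier `clf` and training data `X_train`:
#   baseline = X_train.mean(axis=0)
#   score, attributions = explain_prediction(clf, X_test[0], baseline)
```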

Auditability

Traditional software is relatively static. Once an application is released to production, occasional enhancements and upgrades are rolled out over time and carefully tracked through DevOps processes and source control. AI systems are far more dynamic: significant changes can arise with minimal notice, and ML models may continuously evolve. When developing an AI application, thousands of models, each with different parameters and dependencies, may be developed, tested, deployed, and used in parallel to adjust dynamically to changing data and business needs. With all of this complexity, auditing system outcomes and tracing the many variants – past and present – of ML models can become an overwhelming task.

Smart ML model management is the necessary antidote, enabling auditability of AI systems. A smart ML model management framework lets users track the variety of ML models deployed in production – current and past – as well as the challenger models being evaluated against them. For each of these ML models, the framework captures the algorithm, libraries, and parameters, along with the times the model was deployed.

In conjunction with ML model management, ML results and associated data are tagged to allow end-to-end traceability. This is key to establishing data lineage for the thousands of results being generated, and it increases users’ ability to trace machine learning results back to the specific models and parameters that produced them.
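The metadata such a framework needs to capture can be sketched as a pair of simple records. The Python dataclasses below are a hypothetical illustration (not C3 AI’s actual schema): one record per model version and one per result, so that any prediction can be traced back to the exact model, parameters, libraries, and data that produced it.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ModelVersion:
    """Audit record for one deployed or challenger model version."""
    model_id: str
    algorithm: str                 # e.g. "gradient_boosted_trees"
    hyperparameters: dict
    library_versions: dict         # e.g. {"scikit-learn": "1.4.2"}
    training_data_ref: str         # pointer to the exact training data snapshot
    deployed_at: datetime
    retired_at: Optional[datetime] = None

@dataclass
class PredictionRecord:
    """Every result is tagged with the model version that produced it,
    giving end-to-end lineage from outcome back to model and data."""
    prediction_id: str
    model_id: str
    inputs: dict
    output: float
    produced_at: datetime = field(default_factory=datetime.now)
```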

For example, auditability is a mandatory requirement for one C3 AI customer using AI for loan decision support. Not only does this Fortune 100 enterprise need to be able to recall AI lending decisions immediately for regulators, but the bank must also be able to identify the specific models and parameters in use at the time of each recommendation. To meet these requirements, the institution uses C3 AI’s out-of-the-box ML model management capabilities to capture all models developed and deployed in production.

Behavioral Manipulation

Behavioral manipulation by AI agents is among the most widespread and most concerning of all ethical AI issues. Occurring primarily in consumer-facing AI, it spans a broad spectrum of concerns, from targeted marketing and behavioral nudges to “fake news” and social manipulation. Practices, policies, and guidelines around behavioral manipulation remain fragmented and uncertain.

A full treatment of this topic is significantly beyond the scope of this publication, but managers working with AI, particularly consumer-facing AI, should consider carefully the behavioral ethics of algorithms prior to implementation.