[Header image: David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0]
“Establishing the mapping from the client’s domain to a statistical question is one of the most difficult parts of statistical analysis.” - David Hand, Deconstructing Statistical Questions
If an organization wants to use AI, business problems must be translated into questions that data science can answer. This translation is often a negotiated and evolving process, involving framing, reframing, discussion and debate between various actors. It is also work that remains largely invisible, and ethical issues can arise at many points along the way.
This post is a summary and review of an ethnographic study in a paper entitled Problem Formulation and Fairness. The paper illustrates the various actors, points of view, assumptions and practical barriers involved in turning a business question into one that statistical techniques can answer. In doing so, it sheds light on how problem formulation is entangled with ethics in data science. It also holds key lessons for understanding how ethical issues arise in AI at the earliest stages, before AI technologies are even applied.
The case study is about CarCorp, a company that gathers sales leads on people who need special financing for a car. These people typically have low credit scores or limited credit history. CarCorp’s clients are auto dealers who pay to receive these leads. The manager of CarCorp wanted to improve the quality of the leads sent to the dealers. The goal was to prevent ‘churn’, that is, dealers leaving CarCorp. CarCorp believed that if dealers received better quality leads*, they would be more likely to stick with CarCorp as a source of leads.
What is a quality lead?
Understanding what a quality lead is, and how to measure one, was not a straightforward exercise. Different stakeholders had different opinions. The one thing they did agree on was that “good” leads provided business to the dealer, defined as the likelihood of a lead being able to finance or purchase a car from that particular dealer.
Under this scenario, lead quality = lead financeability. But since financeability differed by dealer, this was further refined: lead quality = lead financeability by dealer. This is a data science matching problem: ensure each lead is matched with the dealer most likely to finance them. However, after investigating data sources, the data science team found there was not enough data about dealer financing decisions; few dealers shared their approval data with CarCorp. Now what?
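As a rough illustration of the matching framing, here is a minimal sketch in Python. The dealers, credit bands and approval rates are all invented for illustration; as the paper makes clear, CarCorp lacked precisely this kind of approval data.

```python
# Hypothetical sketch of the matching formulation: route each lead to the
# dealer most likely to finance it. All dealers, bands and rates below are
# invented; CarCorp did not actually have this approval data.
APPROVAL_RATE = {
    # (dealer, credit_band) -> observed share of leads financed
    ("dealer_a", "under_500"): 0.10,
    ("dealer_a", "over_500"): 0.55,
    ("dealer_b", "under_500"): 0.30,  # e.g. a subprime specialist
    ("dealer_b", "over_500"): 0.45,
}

def match_lead(credit_band: str) -> str:
    """Pick the dealer with the highest historical approval rate."""
    dealers = {dealer for dealer, _ in APPROVAL_RATE}
    return max(dealers, key=lambda d: APPROVAL_RATE[(d, credit_band)])

print(match_lead("under_500"))  # dealer_b
```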
The business team and data science team at CarCorp had a series of discussions and debates about the available data - its quality and reliability - as well as the possibility of collecting new data. Through this process, they determined that having a lead’s credit score would be very helpful. However, they were not authorized to collect it under the Fair Credit Reporting Act, because CarCorp is not making a lending decision; it is just gathering leads.
Some credit information was self-reported to CarCorp by the lead. In other cases, the information was available from third parties in the form of ranges; for example, CarCorp might know that a lead had a credit score between 476 and 525. CarCorp had this data for approximately 10% of its leads. Credit scores proved helpful for determining financeability, and the business team wondered whether a prediction could be made to fill in the missing credit scores for the other 90% of leads.
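A minimal sketch of that idea, assuming invented features, labels and toy data (the paper does not describe CarCorp’s actual features or model): train on the roughly 10% of leads with a known score range, then predict a coarse score band for the rest.

```python
# Hypothetical sketch: use the ~10% of leads that have a reported
# credit-score range to train a model, then predict a (coarse) score band
# for the other 90%. Features, labels and data are all invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy features for 1,000 leads, e.g. [income, down payment, car price].
X = rng.normal(size=(1_000, 3))
known = rng.random(1_000) < 0.10                      # ~10% have a reported range
bands = np.where(X[known, 0] > 0, "higher", "lower")  # toy band labels

model = RandomForestClassifier(random_state=0).fit(X[known], bands)
predicted = model.predict(X[~known])                  # fill in the missing 90%
print(f"{(predicted == 'higher').mean():.0%} of unknown leads predicted 'higher'")
```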
Now the problem formulation had shifted: a financeable lead = a lead with a higher credit score. But the team hit a new barrier. The credit score ranges were reported differently by each source; for example, some sources used 476-525 and 526-575, while others used 451-500 and 501-550. How could CarCorp reconcile these differences to make the data consistent?
Pre-existing market data and word of mouth confirmed that 500 is the crucial threshold for a credit score: someone over 500 is deemed a better risk, while someone under 500 makes for a riskier bet. Thus, the problem shifted again, this time to classification: is the score over 500 or under 500?
The only ranges that were tricky to classify were those spanning the 500 threshold, such as 476-525; the other ranges were easy to classify as clearly under or over. This led to a new problem: determining which people in these threshold-crossing ranges had scores over 500 and which had scores under 500 proved difficult, and model performance was poor.
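The reconciliation step can be made concrete with a small sketch. The range formats are the ones quoted above; the helper function and its labels are mine, not CarCorp’s.

```python
# Hypothetical sketch of reconciling inconsistently binned credit-score
# ranges against the 500 threshold. The helper and labels are invented.
def classify_range(low: int, high: int, threshold: int = 500) -> str:
    """Label a reported range relative to the threshold.

    Returns "under" / "over" when the whole range falls on one side,
    and "ambiguous" when the range straddles the threshold.
    """
    if high <= threshold:
        return "under"
    if low > threshold:
        return "over"
    return "ambiguous"

# Ranges as reported by two different sources (quoted in the post):
for low, high in [(451, 500), (501, 550), (476, 525), (526, 575)]:
    print(f"{low}-{high}: {classify_range(low, high)}")
# 451-500: under, 501-550: over, 476-525: ambiguous, 526-575: over
```

Only the ambiguous ranges require a further prediction, and that prediction is exactly where the model struggled.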
One of the data scientists asked whether they could purchase some data from Experian, a credit reporting service. They hypothesized this might improve the model, but they could not be sure until they purchased the data. However, CarCorp management did not want to purchase the data unless it was sure the data would help - a classic chicken-and-egg scenario. The various actors were deadlocked, and the project was eventually halted after months of inaction. Different actors had different explanations for why the project failed; one business analyst felt that perhaps the business team wanted magic from data science.
The Evolving Problem
As this case makes clear, problem formulation is negotiated, iterative and evolving. The high-level goal of improving lead quality, driven by CarCorp’s even higher-level goal of reducing churn, was translated in different ways as the project evolved and as practical hurdles were encountered. These hurdles created new problems that had to be solved before the high-level business problem could be addressed.
We can see from this example how a “chain of proxies” is created from the initial goal: from lead quality, to lead financeability by dealer, to credit score prediction (p. 8). At each turn, various actors redefine the practical terms of how best to solve the business problem in light of their new understanding and the constraints faced at each stage of the process. This is where ethical choices are also encountered.
Consider the ethical implications for fairness of framing lead financeability as predicting someone’s credit score. Let’s say the team did purchase that Experian data, and let’s also assume the new data fixed their model issues to an acceptable level. Now they can classify leads as over or under 500, and ship the ‘financeable leads’ - which they equate with ‘quality leads’ - off to the appropriate dealers. Presumably, the less financeable leads (those under 500) don’t get shipped at all, or perhaps they are sent to the dealer but flagged as lower quality. Either way, the dealer never receives that lead, or forms a poor impression of that person before even running an actual credit check. We should also note that credit scores themselves are not as objective as we might think, but that is a whole other topic!
We can see how this series of circumstances might play out: a lead submits their information → a credit score is predicted → the lead is classified as over or under 500 → under-500 leads are dropped or flagged as lower quality → the dealer never sees the lead, or pre-judges it before any actual credit check.
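A minimal sketch of that routing logic follows. The cutoff handling and the flagging behaviour are my assumptions for illustration, not a documented CarCorp system.

```python
# Hypothetical sketch of the routing logic described above. The cutoff,
# function name and flagging behaviour are illustrative assumptions.
def route_lead(predicted_score: int, cutoff: int = 500) -> str:
    """Decide what happens to a lead based on a *predicted* score."""
    if predicted_score > cutoff:
        return "ship as quality lead"
    return "drop (or flag as lower quality)"

# A prediction error here is consequential: a lead whose true score is 510
# but is predicted at 490 silently disappears, or arrives pre-judged.
print(route_lead(490))  # drop (or flag as lower quality)
print(route_lead(510))  # ship as quality lead
```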
If we are going to mitigate ethical issues, we need to identify and respond to them in the context of the practical decisions made at various stages in the process. They are not always obvious from examining the high-level business problem alone.
Stakeholder Perspectives
CarCorp centers its own need - retaining its dealer customers - as the driver for quality leads. This isn’t unusual; businesses frame problems from their own position. However, it does have impacts on the other stakeholders. For example, someone in need of special financing who submits their information as a lead will have a different perspective and framing of the problem: they are concerned with successfully obtaining financing on the best possible terms. The dealers might also frame the problem slightly differently, such as financing the most deals at an acceptable level of risk, as defined by the dealer.
These goals, and what is at stake for each stakeholder, can be summarized as follows:

Stakeholder | Goal | What is at stake
CarCorp | Send quality leads that keep dealers as paying customers | Dealer churn and lead revenue
Lead (car buyer) | Successfully obtain financing on the best possible terms | Access to a vehicle and the terms of the loan
Dealer | Finance the most deals at an acceptable level of risk | Sales volume and risk exposure
The need to surface ethical choices
As noted in the paper, these ethical issues remain hidden. They are entangled in practical issues such as “business requirements, the choice of proxies, the nature of the algorithmic task, and the availability of data” (p. 9). Being able to recognize the ethics embedded in these choices, and to make intentional decisions based on ethical considerations, is a skill that is missing from the process. It is not as simple as assessing a high-level business problem for its ethical implications; instead, ethics needs to be embedded within the evolving process of negotiating problem formulation in data science. These are the same kinds of decisions that go into AI systems.
Resources
Samir Passi and Solon Barocas. 2019. Problem Formulation and Fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 39–48. https://doi.org/10.1145/3287560.3287567
*A complete sidebar, but whenever I think about quality leads I can’t help but recall the movie Glengarry Glen Ross.
By Katrina Ingram, CEO, Ethically Aligned AI
________
Sign up for our newsletter to have new blog posts and other updates delivered to you each month! Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com
© 2023 Ethically Aligned AI Inc. All rights reserved.