Achieving both accuracy and fairness in machine learning systems used for social decision making is possible, but designing such systems requires venturing off the simple and obvious paths.
Image credit: Falaah Arif Khan
CMU Researchers Dispel Theoretical Assumption About ML Trade-Offs in Policy Decisions
Carnegie Mellon University researchers are challenging a long-held assumption that there is a trade-off between accuracy and fairness when using machine learning to make public policy decisions.
As the use of machine learning has increased in areas such as criminal justice, hiring, health care delivery and social service interventions, concerns have grown over whether such applications introduce new inequities or amplify existing ones, especially for racial minorities and people with economic disadvantages. To guard against this bias, practitioners make adjustments to the data, labels, model training, scoring systems and other aspects of the machine learning system. The underlying theoretical assumption is that these adjustments make the system less accurate.
A CMU team aims to dispel that assumption in a new study, recently published in Nature Machine Intelligence. Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in ML; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and found the trade-off was negligible in practice across a range of policy domains.
“You actually can get both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”
Ghani and Rodolfa focused on situations where in-demand resources are limited, and machine learning systems are used to help allocate those resources. The researchers looked at systems in four areas: prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce reincarceration; predicting serious safety violations to better deploy a city’s limited housing inspectors; modeling the risk of students not graduating from high school in time to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.
In each context, the researchers found that models optimized for accuracy — standard practice in machine learning — could effectively predict the outcomes of interest but exhibited considerable disparities in their intervention recommendations. However, when the researchers applied fairness-targeted adjustments to the models' outputs, they found that disparities based on race, age or income — depending on the situation — could be removed without a loss of accuracy.
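One common form of the output adjustment described above is post-processing with group-specific thresholds: instead of selecting everyone above a single score cutoff, each group's top scorers are selected separately. The sketch below is a minimal illustration on synthetic data — the group bias, score distributions, and proportional selection rule are all assumptions for demonstration, not the study's actual data or method.

```python
import numpy as np

# Synthetic illustration only: risk scores for two groups, where group 1's
# scores carry a spurious upward bias (e.g. from biased historical labels).
rng = np.random.default_rng(42)
n = 1000
group = rng.integers(0, 2, size=n)
score = rng.random(n) + 0.2 * group   # biased scores in [0, 1.2)

k = 200  # limited intervention slots to allocate

def sel_rate(g, picked):
    """Fraction of group g selected for intervention."""
    members = np.where(group == g)[0]
    return np.mean([i in picked for i in members])

# Standard practice: one global threshold, i.e. take the k highest scores.
global_pick = set(np.argsort(score)[-k:])
rate0, rate1 = sel_rate(0, global_pick), sel_rate(1, global_pick)

# Post-processing adjustment: group-specific thresholds, taking each group's
# top scorers in proportion to its size, so selection rates equalize.
fair_pick = set()
for g in (0, 1):
    idx = np.where(group == g)[0]
    k_g = round(k * len(idx) / n)               # group's share of the k slots
    fair_pick.update(idx[np.argsort(score[idx])[-k_g:]])
fair0, fair1 = sel_rate(0, fair_pick), sel_rate(1, fair_pick)

print(f"global threshold: group0 {rate0:.2f}, group1 {rate1:.2f}")
print(f"per-group pick:   group0 {fair0:.2f}, group1 {fair1:.2f}")
```

Under the global threshold, the biased-up group is selected at a far higher rate; the per-group rule brings the two rates together while still prioritizing the highest-risk individuals within each group.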
Ghani and Rodolfa hope this research will start to change the minds of fellow researchers and policymakers as they consider the use of machine learning in decision making.
“We want the artificial intelligence, computer science and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” Rodolfa said. “We hope policymakers will embrace machine learning as a tool in their decision making to help them achieve equitable outcomes.”
Original Article: Machine Learning Can Be Fair and Accurate