Black box machine learning models can be dangerous for high-stakes decisions. They often rely on untrustworthy databases, and their predictions are difficult to troubleshoot, explain and error-check in real time. Their use raises serious ethics and accountability issues.
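The article's alternative to black boxes is inherently interpretable models. As a purely illustrative sketch (not the author's method), the hypothetical example below trains a depth-limited scikit-learn decision tree on synthetic data: unlike a black box, its complete decision logic can be printed and audited by hand.

```python
# Illustrative sketch only: a shallow decision tree whose full rule set
# can be inspected, standing in for the interpretable models the article
# advocates. The dataset is synthetic, not a real high-stakes application.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a tabular decision problem (e.g. risk scoring).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Limiting depth keeps every prediction traceable to a handful of rules.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# The entire model fits on screen; each decision path can be checked by hand.
print(export_text(model, feature_names=[f"feature_{i}" for i in range(5)]))
```

A model this small can be reviewed by domain experts before deployment, which is exactly the kind of troubleshooting and error-checking the article argues black boxes prevent.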
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Rudin, C. Why black box machine learning should be avoided for high-stakes decisions, in brief. Nat. Rev. Methods Primers 2, 81 (2022). https://doi.org/10.1038/s43586-022-00172-0