
Why black box machine learning should be avoided for high-stakes decisions, in brief

Black box machine learning models can be dangerous for high-stakes decisions. They often rely on untrustworthy databases, and their predictions are difficult to troubleshoot, explain and error-check in real time. Their use leads to serious ethics and accountability issues.
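To make the troubleshooting claim concrete, below is a minimal, hypothetical sketch (not from the article itself, whose full text is not shown here) contrasting an interpretable model with a black box. It assumes scikit-learn and uses a built-in dataset as a stand-in for a real clinical database. The sparse logistic regression exposes a short list of weighted, named features that a domain expert can check term by term; the random forest reaches similar accuracy on this data but offers no comparably auditable explanation for any single prediction.

```python
# Hypothetical sketch (not from the article): scikit-learn is assumed to be
# installed; the dataset is a stand-in for a real clinical database.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Sparse (L1-penalized) logistic regression: each prediction is a weighted
# sum of a handful of named features, so every term can be checked against
# the source record by a human reviewer.
glass_box = LogisticRegression(
    penalty="l1", C=0.1, solver="liblinear", max_iter=5000
).fit(X_train, y_train)
for name, w in zip(data.feature_names, glass_box.coef_[0]):
    if w != 0.0:
        print(f"{name:>25s}: {w:+.3f}")

# A 300-tree random forest: comparable accuracy here, but no single
# prediction can be traced to a short, human-checkable explanation.
black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

print("glass box accuracy:", round(glass_box.score(X_test, y_test), 3))
print("black box accuracy:", round(black_box.score(X_test, y_test), 3))
```

When the interpretable model makes a surprising prediction, its nonzero coefficients point directly at the responsible inputs, which is exactly the kind of error-checking the abstract argues black boxes cannot support.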



Author information

Corresponding author

Correspondence to Cynthia Rudin.

Ethics declarations

Competing interests

The authors declare no competing interests.


About this article


Cite this article

Rudin, C. Why black box machine learning should be avoided for high-stakes decisions, in brief. Nat. Rev. Methods Primers 2, 81 (2022). https://doi.org/10.1038/s43586-022-00172-0

