DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

DeepCOMBI : explainable artificial intelligence for the analysis and discovery in genome-wide association studies. / Mieth, Bettina; Rozier, Alexandre; Rodriguez, Juan Antonio; Höhne, Marina M. C.; Görnitz, Nico; Müller, Klaus-Robert.

In: NAR Genomics and Bioinformatics, Vol. 3, No. 3, lqab065, 2021.


Harvard

Mieth, B, Rozier, A, Rodriguez, JA, Höhne, MMC, Görnitz, N & Müller, K-R 2021, 'DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies', NAR Genomics and Bioinformatics, vol. 3, no. 3, lqab065. https://doi.org/10.1093/nargab/lqab065

APA

Mieth, B., Rozier, A., Rodriguez, J. A., Höhne, M. M. C., Görnitz, N., & Müller, K-R. (2021). DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies. NAR Genomics and Bioinformatics, 3(3), [lqab065]. https://doi.org/10.1093/nargab/lqab065

Vancouver

Mieth B, Rozier A, Rodriguez JA, Höhne MMC, Görnitz N, Müller K-R. DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies. NAR Genomics and Bioinformatics. 2021;3(3). lqab065. https://doi.org/10.1093/nargab/lqab065

Author

Mieth, Bettina ; Rozier, Alexandre ; Rodriguez, Juan Antonio ; Höhne, Marina M. C. ; Görnitz, Nico ; Müller, Klaus-Robert. / DeepCOMBI : explainable artificial intelligence for the analysis and discovery in genome-wide association studies. In: NAR Genomics and Bioinformatics. 2021 ; Vol. 3, No. 3.

Bibtex

@article{af42dabd829c438a8a57e8bbcf75466b,
title = "DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies",
abstract = "Deep learning has revolutionized data science in many fields by greatly improving prediction performances in comparison to conventional approaches. Recently, explainable artificial intelligence has emerged as an area of research that goes beyond pure prediction improvement by extracting knowledge from deep learning methodologies through the interpretation of their results. We investigate such explanations to explore the genetic architectures of phenotypes in genome-wide association studies. Instead of testing each position in the genome individually, the novel three-step algorithm, called DeepCOMBI, first trains a neural network for the classification of subjects into their respective phenotypes. Second, it explains the classifiers' decisions by applying layer-wise relevance propagation as one example from the pool of explanation techniques. The resulting importance scores are eventually used to determine a subset of the most relevant locations for multiple hypothesis testing in the third step. The performance of DeepCOMBI in terms of power and precision is investigated on generated datasets and a 2007 study. Verification of the latter is achieved by validating all findings with independent studies published up until 2020. DeepCOMBI is shown to outperform ordinary raw P-value thresholding and other baseline methods. Two novel disease associations (rs10889923 for hypertension, rs4769283 for type 1 diabetes) were identified. ",
author = "Bettina Mieth and Alexandre Rozier and Rodriguez, {Juan Antonio} and H{\"o}hne, {Marina M. C.} and Nico G{\"o}rnitz and Klaus-Robert M{\"u}ller",
note = "Publisher Copyright: {\textcopyright} 2021 The Author(s) 2021. Published by Oxford University Press on behalf of NAR Genomics and Bioinformatics.",
year = "2021",
doi = "10.1093/nargab/lqab065",
language = "English",
volume = "3",
journal = "NAR Genomics and Bioinformatics",
issn = "2631-9268",
publisher = "Oxford University Press",
number = "3",

}

RIS

TY - JOUR

T1 - DeepCOMBI

T2 - explainable artificial intelligence for the analysis and discovery in genome-wide association studies

AU - Mieth, Bettina

AU - Rozier, Alexandre

AU - Rodriguez, Juan Antonio

AU - Höhne, Marina M. C.

AU - Görnitz, Nico

AU - Müller, Klaus-Robert

N1 - Publisher Copyright: © The Author(s) 2021. Published by Oxford University Press on behalf of NAR Genomics and Bioinformatics.

PY - 2021

Y1 - 2021

N2 - Deep learning has revolutionized data science in many fields by greatly improving prediction performances in comparison to conventional approaches. Recently, explainable artificial intelligence has emerged as an area of research that goes beyond pure prediction improvement by extracting knowledge from deep learning methodologies through the interpretation of their results. We investigate such explanations to explore the genetic architectures of phenotypes in genome-wide association studies. Instead of testing each position in the genome individually, the novel three-step algorithm, called DeepCOMBI, first trains a neural network for the classification of subjects into their respective phenotypes. Second, it explains the classifiers' decisions by applying layer-wise relevance propagation as one example from the pool of explanation techniques. The resulting importance scores are eventually used to determine a subset of the most relevant locations for multiple hypothesis testing in the third step. The performance of DeepCOMBI in terms of power and precision is investigated on generated datasets and a 2007 study. Verification of the latter is achieved by validating all findings with independent studies published up until 2020. DeepCOMBI is shown to outperform ordinary raw P-value thresholding and other baseline methods. Two novel disease associations (rs10889923 for hypertension, rs4769283 for type 1 diabetes) were identified.

AB - Deep learning has revolutionized data science in many fields by greatly improving prediction performances in comparison to conventional approaches. Recently, explainable artificial intelligence has emerged as an area of research that goes beyond pure prediction improvement by extracting knowledge from deep learning methodologies through the interpretation of their results. We investigate such explanations to explore the genetic architectures of phenotypes in genome-wide association studies. Instead of testing each position in the genome individually, the novel three-step algorithm, called DeepCOMBI, first trains a neural network for the classification of subjects into their respective phenotypes. Second, it explains the classifiers' decisions by applying layer-wise relevance propagation as one example from the pool of explanation techniques. The resulting importance scores are eventually used to determine a subset of the most relevant locations for multiple hypothesis testing in the third step. The performance of DeepCOMBI in terms of power and precision is investigated on generated datasets and a 2007 study. Verification of the latter is achieved by validating all findings with independent studies published up until 2020. DeepCOMBI is shown to outperform ordinary raw P-value thresholding and other baseline methods. Two novel disease associations (rs10889923 for hypertension, rs4769283 for type 1 diabetes) were identified.

U2 - 10.1093/nargab/lqab065

DO - 10.1093/nargab/lqab065

M3 - Journal article

C2 - 34296082

AN - SCOPUS:85119670229

VL - 3

JO - NAR Genomics and Bioinformatics

JF - NAR Genomics and Bioinformatics

SN - 2631-9268

IS - 3

M1 - lqab065

ER -

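The abstract above outlines DeepCOMBI's three-step procedure: train a neural network to classify subjects by phenotype, score each SNP's importance with layer-wise relevance propagation (LRP), and run multiple hypothesis testing only on the most relevant loci. The sketch below is a rough, self-contained illustration of that idea, not the authors' published implementation; the single-hidden-layer scikit-learn MLP, the epsilon-rule LRP, the chi-square association test and the top-5% candidate cutoff are all hypothetical choices made for the example.

```python
# Illustrative sketch only -- NOT the authors' DeepCOMBI code.
# Mimics the three-step idea from the abstract:
#   (1) train a phenotype classifier on SNP data,
#   (2) score per-SNP importance via layer-wise relevance propagation (LRP),
#   (3) test only the top-scoring SNPs for association.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy data: n subjects x p SNPs, genotypes coded 0/1/2, binary phenotype.
n, p = 500, 1000
X = rng.integers(0, 3, size=(n, p)).astype(float)
causal = [10, 250, 777]                       # hypothetical truly associated SNPs
y = (X[:, causal].sum(axis=1) + rng.normal(0, 1, n) > 4).astype(int)

# Step 1: train a small ReLU network to classify subjects by phenotype.
clf = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    max_iter=500, random_state=0).fit(X, y)

# Step 2: LRP (epsilon rule) through the trained MLP.
def lrp_epsilon(clf, X, eps=1e-6):
    """Propagate output relevance back to the input features."""
    # Forward pass, storing the activations of every layer.
    acts = [X]
    for i, (W, b) in enumerate(zip(clf.coefs_, clf.intercepts_)):
        z = acts[-1] @ W + b
        if i < len(clf.coefs_) - 1:           # hidden layers use ReLU
            z = np.maximum(z, 0.0)
        acts.append(z)
    R = acts[-1]                              # start from the output scores
    # Backward pass: redistribute relevance layer by layer.
    for layer in range(len(clf.coefs_) - 1, -1, -1):
        W, a = clf.coefs_[layer], acts[layer]
        z = a @ W + clf.intercepts_[layer]
        z = z + eps * np.sign(z)              # epsilon stabilizer
        s = R / z
        R = a * (s @ W.T)
    return R

relevance = np.abs(lrp_epsilon(clf, X)).mean(axis=0)   # per-SNP importance score

# Step 3: chi-square association tests on the most relevant SNPs only
# (hypothetical top 5%), with Bonferroni correction over the reduced set.
k = int(0.05 * p)
candidates = np.argsort(relevance)[-k:]
pvals = {}
for j in candidates:
    table = np.array([[np.sum((X[:, j] == g) & (y == c)) for g in (0, 1, 2)]
                      for c in (0, 1)]) + 1   # +1 avoids empty cells
    pvals[j] = chi2_contingency(table)[1]

hits = sorted(j for j, pv in pvals.items() if pv < 0.05 / k)
print("selected SNPs:", hits)
```

Because the candidate set is much smaller than the full SNP panel, the multiple-testing correction in the last step is applied over k instead of p hypotheses, which is the source of the power gain the abstract describes.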