Thanks to advances in technology, patients now have access to advanced diagnostics and novel treatment options, both of which are transforming the healthcare industry. For example, a Harvard AI team has shown that it can identify cancer cells in breast tissue samples with 92% accuracy. Another study, from the University of Heidelberg, found that deep learning can identify 95% of melanomas, compared with 86% for dermatologists.
At Benevolent, we use the power of artificial intelligence to unlock new areas of knowledge and to employ that knowledge to remake the way medicines are discovered and developed. In our ALS drug programme, our Benevolent Platform™ discovered novel compounds not previously identified in ALS, including one that delayed symptom onset when tested in the gold-standard disease model.
Our team’s strength comes from a unique, integrated approach in which AI augments our scientists’ capabilities to process data and to reason in thousands of dimensions simultaneously. We innovate by enhancing our scientists’ ability to discover, develop and test new medicines.
At NeurIPS this year, we will present some of our latest research, authored by our AI and scientific teams, who have embraced this collaborative and integrated approach in their day-to-day work.
Patient stratification using omics data is one of the great promises, as well as challenges, of precision medicine. The ability to identify endotypes in heterogeneous diseases like Amyotrophic Lateral Sclerosis (ALS) would facilitate the development of targeted treatments for patients with shared underlying disease mechanisms. Endotyping is usually approached with unsupervised machine learning (ML) methods that produce latent representations of the input data. The recently introduced Bayesian biclustering model BicMix allows decomposition of gene expression matrices into sparse and dense latent factors and loadings, which can capture both dataset-wide and subgroup-specific co-expression patterns. However, systematic biological interpretation of these latent representations is especially challenging because the ground truth is poorly defined or altogether unknown. We have developed an evaluation strategy that allows assessment of the biological relevance of latent factors in terms of association with common genetic variation (eQTLs), known clinical covariates (time to death, C9orf72 mutation status) and confounding variables (GC mean, RIN, gene paralogy). To stratify ALS cases and identify networks of co-expressed genes representing specific disease mechanisms, we fit multiple BicMix models using 483 post-mortem RNA-seq samples obtained from various brain and spinal cord regions of 98 ALS patients and 18 controls in the TargetALS dataset. In our evaluation pipeline we select for biologically meaningful latent representations that 1) lack confounding; 2) are absent in healthy brain tissue; 3) are biologically robust (Protein-Protein Interaction (PPI) connectivity, differential connectivity); 4) are enriched for relevant gene sets and clinical endpoints; and 5) point to more homogeneous patient subgroups (assessed by power to detect differential expression).
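BicMix's Bayesian inference is beyond a short example, but the underlying idea, decomposing an expression matrix into loadings times factors with sparsity picking out subgroup-specific structure, can be sketched with an off-the-shelf sparse dictionary learner. Here scikit-learn's `DictionaryLearning` stands in for the actual model, and the toy data and parameters are illustrative only:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Toy expression matrix: 60 samples x 40 genes, built from 3 latent factors.
true_loadings = rng.normal(size=(60, 3))
true_factors = rng.normal(size=(3, 40))
X = true_loadings @ true_factors + 0.1 * rng.normal(size=(60, 40))

# Decompose X ~ L @ F with an L1 penalty encouraging sparse loadings.
# BicMix instead uses Bayesian shrinkage priors that mix sparse and
# dense components; this is only a stand-in for that inference.
model = DictionaryLearning(n_components=3, alpha=0.1,
                           transform_algorithm="lasso_lars",
                           transform_alpha=0.1, random_state=0)
L = model.fit_transform(X)   # per-sample loadings (sparse)
F = model.components_        # gene-level factors

reconstruction_error = np.linalg.norm(X - L @ F) / np.linalg.norm(X)
```

In the study itself, sparsity in the loadings is what isolates subgroup-specific (endotype-like) co-expression, while dense components absorb dataset-wide variation.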
The final set of latent factors identifies ALS cases with many desirable endotype properties, including an enrichment for ALS family history (t-test = -9.17, p-value = 2.03 × 10⁻¹⁹) and differential survival time compared to cases not loaded onto those latent factors (t-test = 5.52, p-value = 4.16 × 10⁻⁸). The identified latent factors potentially correspond to specific disease mechanisms or endotypes, and further work is underway to replicate these results in an independent held-out dataset, with further confirmation using in vitro assays. Our study reveals the challenges of interpreting latent representations derived from high-dimensional omics data; however, it also highlights the great potential for such approaches to make an impact in the clinic. Being able to assess the validity and relevance of latent representations could lead to better validation assay design, resulting in targeted therapies for specific patient populations.
A focus of precision medicine for drug discovery is developing treatments that target patient-specific disease mechanisms. A common approach to identifying these biological processes and their defects in disease is to detect co-expression modules from high-dimensional gene expression data, and many methods have been proposed for module detection. Evaluating and comparing these methods remains an ongoing challenge due to the lack of comprehensive ground-truth datasets, so existing benchmarks compare observed modules to sets of known modules derived from external sources of information such as gene ontologies and experimental results. These benchmarks provide a guide to parameter inference and model selection across the suite of unsupervised machine learning models. However, in drug discovery and other biomedical domains, the necessary additional steps of prioritising and annotating the proposed gene modules are not captured by these generic benchmarks. Furthermore, such benchmarks can mask a method's ability to identify disease-relevant modules, since a significant part of the score comes from modules that are not relevant: the known modules are derived from a heterogeneous mix of diseases and healthy tissue samples, and so contain many irrelevant examples for any particular disease being studied. In short, there is no guarantee that a method performing well on such benchmarks would be effective at discovering the much smaller subset of mechanisms active in a given disease. Here we propose a method for disease-specific benchmarking in which the known modules are weighted using a disease relevance score based on known gene-disease associations obtained from large public databases, augmented with additional relations extracted from the biomedical literature. This disease relevance score addresses the issue of module relevance while simultaneously exposing the full module discovery and prioritisation pipeline to the evaluation exercise.
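The weighting idea can be illustrated with a small sketch: each known module contributes its best overlap with any predicted module, scaled by a disease-relevance weight, so recovering a disease-relevant module counts far more than recovering an irrelevant one. The Jaccard matching, the gene names and the example weights below are our illustrative choices, not the paper's exact scoring:

```python
def weighted_module_recovery(predicted, known, relevance):
    """Score predicted gene modules against known modules, weighting each
    known module by its disease-relevance score. `predicted` is a list of
    gene sets; `known` maps module names to gene sets; `relevance` maps
    module names to non-negative weights."""
    weight_sum = sum(relevance[name] for name in known)
    total = 0.0
    for name, ref in known.items():
        # Best Jaccard overlap between this known module and any prediction.
        best = max((len(ref & mod) / len(ref | mod) for mod in predicted),
                   default=0.0)
        total += relevance[name] * best
    return total / weight_sum

# Recovering the disease-relevant module dominates the score even though
# the irrelevant "ribosome" module is missed entirely.
known = {"inflammation": {"TNF", "IL6", "IL1B"}, "ribosome": {"RPL3", "RPS6"}}
relevance = {"inflammation": 1.0, "ribosome": 0.1}   # hypothetical weights
predicted = [{"TNF", "IL6", "IL1B"}, {"BRCA1"}]
score = weighted_module_recovery(predicted, known, relevance)
```

An unweighted benchmark would penalise this prediction for missing the ribosome module; the weighted score reflects that only the inflammation module matters for the disease under study.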
We show on The Cancer Genome Atlas (TCGA) dataset that this disease-specific view of benchmarking produces greater performance differentiation among common module detection algorithms. We also show that the ranking of these algorithms varies across three TCGA cancer types: glioblastoma multiforme, non-small cell lung cancer and gastrointestinal adenocarcinomas. The ability to benchmark module detection algorithms in the contexts in which they are typically employed will enable researchers to select the best method for a given disease of interest, tune its hyperparameters to maximise efficacy, and guide the development of new algorithms.
Incorrect drug target identification is a major obstacle in drug discovery. Only 15% of drugs advance from Phase II to approval, with ineffective targets accounting for over 50% of these failures (Thomas et al., 2016; Harrison et al., 2016; Arrowsmith et al., 2013). Advances in data fusion and computational modeling have independently progressed towards addressing this issue. Here, we capitalize on both approaches with Rosalind, a comprehensive gene prioritization method that combines heterogeneous knowledge graph construction with relational inference via tensor factorization to accurately predict disease-gene links. Rosalind demonstrates an increase in performance of 18%-50% over five comparable state-of-the-art algorithms. On historical data, Rosalind prospectively identifies 1 in 4 therapeutic relationships eventually proven true. Beyond efficacy, Rosalind is able to accurately predict clinical trial successes (75% recall at rank 200) and distinguish likely failures (74% recall at rank 200). Lastly, Rosalind predictions were experimentally tested in a patient-derived in vitro assay for rheumatoid arthritis (RA), which yielded 5 promising genes, one of which is unexplored in RA. Overall, Rosalind provides a flexible, improvable approach to gene prioritization that is able to produce clinically relevant predictions at scale. This could allow it to generate hypotheses for a wide range of diseases with unmet need, and help slow the trend of declining productivity in drug discovery.
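The abstract does not specify Rosalind's factorization model, so the following is a generic DistMult-style sketch of relational inference on a toy gene-disease knowledge graph: triples are scored by a trilinear product of embeddings, trained so that observed associations score higher than corrupted ones. All entity names, the training setup and hyperparameters are illustrative assumptions, not Rosalind's actual graph or model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy knowledge graph over genes and diseases (names are illustrative).
entities = ["geneA", "geneB", "geneC", "diseaseX", "diseaseY"]
relations = ["associated_with"]
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim = 8
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    # DistMult trilinear score: sum_k E[h,k] * R[r,k] * E[t,k].
    return float(np.sum(E[e_idx[h]] * R[r_idx[r]] * E[e_idx[t]]))

# Observed associations (label 1) and a corrupted triple (label 0).
triples = [("geneA", "associated_with", "diseaseX", 1.0),
           ("geneB", "associated_with", "diseaseY", 1.0),
           ("geneC", "associated_with", "diseaseX", 0.0)]

lr = 0.5
for _ in range(200):  # plain SGD on a logistic loss over triple scores
    for h, r, t, y in triples:
        hi, ri, ti = e_idx[h], r_idx[r], e_idx[t]
        s = np.sum(E[hi] * R[ri] * E[ti])
        g = 1.0 / (1.0 + np.exp(-s)) - y          # d(log-loss)/d(score)
        gh, gr, gt = g * R[ri] * E[ti], g * E[hi] * E[ti], g * E[hi] * R[ri]
        E[hi] -= lr * gh
        R[ri] -= lr * gr
        E[ti] -= lr * gt
```

Ranking candidate genes for `diseaseX` by `score(gene, "associated_with", "diseaseX")` should then place `geneA` above `geneC`; gene prioritization at Rosalind's scale applies the same idea to a full biomedical knowledge graph with many relation types.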