High-throughput standard-of-care medical images such as CT, PET, or MRI are now routinely used in oncology to detect lesions, plan treatments, and follow up on the disease. Furthermore, the increasing adoption of electronic patient records, together with the widespread use of PACS, has made heterogeneous patient data available, spanning different spatial and temporal scales, modalities, and functionalities. The quantitative and heterogeneous nature of these data allows us to go beyond the analysis of a single modality and move towards multimodal (deep) learning, looking for correlations between different sources of data for diagnostic, prognostic, or predictive endpoints.
In this lecture we will give an overview of this exciting research topic, focusing on how we can learn from multimodal data, discussing how to improve trust and transparency, and presenting some results of our research.
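To make the multimodal-learning idea concrete, here is a minimal sketch of late (decision-level) fusion, one common way to combine heterogeneous sources. Everything in it (the feature values, weights, linear scorers, and the two modalities chosen) is an illustrative assumption, not the models covered in the lecture:

```python
def modality_score(features, weights, bias):
    """Linear score for one modality (a stand-in for a trained per-modality model)."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def late_fusion(scores, fusion_weights):
    """Combine per-modality scores with a normalized weighted average."""
    total = sum(fusion_weights)
    return sum(s * w for s, w in zip(scores, fusion_weights)) / total

# Hypothetical per-patient features from two sources (e.g. CT-derived
# image features and electronic-record variables), already normalized.
ct_features = [0.8, 0.1, 0.5]
ehr_features = [0.3, 0.9]

ct_score = modality_score(ct_features, [0.4, -0.2, 0.6], 0.1)
ehr_score = modality_score(ehr_features, [0.5, 0.2], -0.1)

# Fuse the two independent predictions into a single endpoint score.
fused = late_fusion([ct_score, ehr_score], fusion_weights=[0.6, 0.4])
```

Late fusion keeps each modality's model independent, which is convenient when patients have missing modalities; early fusion (concatenating features before learning) is the main alternative discussed in this setting.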