Multimodal and multicontrast image fusion via deep generative models

Dimitri, Giovanna Maria; Spasov, S.; Duggento, A.; Passamonti, L.; Lió, P.; Toschi, N.
2022-01-01

Abstract

Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses such as depression and anxiety disorders, or for behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data (i.e. the set of images resulting from functional/metabolic (e.g. functional magnetic resonance imaging, functional near-infrared spectroscopy, or positron emission tomography) and structural (e.g. computed tomography, T1-, T2-, PD- or diffusion-weighted magnetic resonance imaging) acquisitions) carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures which are not learned as part of model training, and are consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, which poses formidable computational challenges. In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and in separable convolutional blocks (which yield a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities at the voxel level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability and minimal information loss. As proof of concept, we test our architecture on the well-characterized Human Connectome Project database (n = 974 healthy subjects), demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map to different phenotypic information (including organic, neuropsychological and personality variables) that was not included in the embedding creation process. The ability to extract meaningful and separable phenotypic information from brain images alone can aid in creating multi-dimensional biomarkers able to chart spatio-temporal trajectories which may correspond to different pathophysiological mechanisms that remain unidentifiable with traditional data analysis approaches. In turn, this may aid in predicting disease evolution as well as drug response, hence supporting mechanistic disease understanding and empowering clinical trials.
2022
Dimitri, G.M., Spasov, S., Duggento, A., Passamonti, L., Lió, P., Toschi, N. (2022). Multimodal and multicontrast image fusion via deep generative models. INFORMATION FUSION, 88, 146-160 [10.1016/j.inffus.2022.07.017].
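
The architecture described in the abstract rests on two concrete ingredients: depthwise-separable 3D convolutions, which drastically reduce parameter counts, and voxel-wise fusion of multiple co-registered modalities into a low-dimensional latent embedding. The sketch below (in PyTorch) is only meant to illustrate how these two ideas fit together; it is not the authors' implementation, and all module names, layer sizes and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise 3D convolution followed by a 1x1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=2, padding=1):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=padding, groups=in_ch, bias=False)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MultimodalEncoder(nn.Module):
    """Fuses voxel-aligned modalities (stacked as channels) into a latent vector."""
    def __init__(self, n_modalities=2, latent_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            SeparableConv3d(n_modalities, 16), nn.ReLU(),
            SeparableConv3d(16, 32), nn.ReLU(),
            SeparableConv3d(32, 64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.to_latent = nn.Linear(64, latent_dim)

    def forward(self, x):  # x: (batch, n_modalities, depth, height, width)
        return self.to_latent(self.features(x))

# Parameter count of a standard 3x3x3 convolution versus its separable
# counterpart: this is where the large (roughly 20-fold) saving comes from.
std = nn.Conv3d(32, 64, kernel_size=3, bias=False)
sep = SeparableConv3d(32, 64)
n_std = sum(p.numel() for p in std.parameters())   # 64*32*27 = 55296
n_sep = sum(p.numel() for p in sep.parameters())   # 32*27 + 64*32 = 2912
print(f"standard: {n_std}, separable: {n_sep}, ratio: {n_std / n_sep:.1f}x")  # ~19x

# Example: two co-registered modalities (e.g. a T1-weighted volume and a
# diffusion-derived map) for a batch of 4 subjects, reduced to 32-dimensional
# embeddings that could subsequently be clustered into subject strata.
volumes = torch.randn(4, 2, 64, 64, 64)
embeddings = MultimodalEncoder(n_modalities=2, latent_dim=32)(volumes)
print(embeddings.shape)  # torch.Size([4, 32])

In the paper the embeddings are produced by a generative (encoder-decoder) model trained to reconstruct the input volumes; the deterministic encoder above is only a sketch of how separable 3D convolutions and channel-wise stacking of modalities can be combined.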
Files in this record:

File: 1-s2.0-S1566253522000720-main.pdf (not available)
Type: Publisher's PDF
License: NOT PUBLIC - Private/restricted access
Size: 8.19 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1213854