Bonechi, S., Andreini, P., Mecocci, A., Giannelli, N., Scarselli, F., Neri, E., et al. (2021). Segmentation of Aorta 3D CT Images Based on 2D Convolutional Neural Networks. Electronics, 10(20). doi: 10.3390/electronics10202559.
Segmentation of Aorta 3D CT Images Based on 2D Convolutional Neural Networks
Simone Bonechi; Paolo Andreini; Alessandro Mecocci; Franco Scarselli; Eugenio Neri; Monica Bianchini; Giovanna Maria Dimitri
2021-01-01
Abstract
The automatic segmentation of the aorta can be extremely useful in clinical practice, speeding up the diagnosis of numerous pathologies, such as aneurysms and dissections, and enabling the rapid reconstructive surgery that is essential to saving patients’ lives. In recent years, the success of Deep Learning (DL)-based decision support systems has increased their popularity in the medical field. However, their effective application is often limited by the scarcity of training data. In fact, collecting large annotated datasets is usually difficult and expensive, especially in the biomedical domain. In this paper, an automatic method for aortic segmentation, based on 2D convolutional neural networks (CNNs) and taking 3D CT (computed tomography) scans as input, is presented. For this purpose, a set of 153 CT images was collected, and a semi-automated approach was used to obtain their 3D annotations at the voxel level. Although less accurate, the use of a semi-supervised labeling technique instead of full supervision proved necessary to obtain enough data in a reasonable amount of time. The 3D volume was analyzed using three 2D segmentation networks, one for each of the three CT views (axial, coronal, and sagittal). Two different network architectures, U-Net and LinkNet, were used and compared. The main advantages of the proposed method lie in its ability to work with a small amount of data, even with noisy targets. In addition, analyzing 3D scans as stacks of 2D slices allows them to be processed even with limited computing power. The results obtained are promising and show that the neural networks employed can provide accurate segmentation of the aorta.
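The per-view decomposition described in the abstract, where a 3D CT volume is fed to three 2D networks as axial, coronal, and sagittal slices, can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing: the function name `slices_along_view` and the axis convention (axis 0 = axial, 1 = coronal, 2 = sagittal) are assumptions for this example.

```python
import numpy as np

def slices_along_view(volume: np.ndarray, view: str) -> list:
    """Return the 2D slices of a 3D volume along one anatomical view.

    Assumed axis convention: axis 0 enumerates axial slices,
    axis 1 coronal slices, and axis 2 sagittal slices.
    """
    axis = {"axial": 0, "coronal": 1, "sagittal": 2}[view]
    # np.take with a scalar index drops the chosen axis, yielding 2D arrays.
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

# A toy volume standing in for a CT scan: 4 axial slices of 5x6 pixels.
ct = np.zeros((4, 5, 6))
print(len(slices_along_view(ct, "axial")))         # 4 slices, each 5x6
print(slices_along_view(ct, "coronal")[0].shape)   # (4, 6)
print(slices_along_view(ct, "sagittal")[0].shape)  # (4, 5)
```

Each list of slices would then be passed to the corresponding 2D segmentation network (U-Net or LinkNet in the paper), so that no 3D convolutions are needed and memory usage stays low.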
File: electronics-10-02559.pdf (open access)
Type: publisher PDF
License: Creative Commons
Size: 2.78 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1166448