Andreini, P., Dimitri, G.M. (2022). Deep Semantic Segmentation Models in Computer Vision. In ESANN 2022 (pp.305-314) [10.14428/esann/2022.ES2022-5].

Deep Semantic Segmentation Models in Computer Vision

Andreini, Paolo; Dimitri, Giovanna Maria
2022-01-01

Abstract

Recently, deep learning models have had a huge impact on computer vision applications, in particular on semantic segmentation, where many challenges remain open. For example, the lack of large annotated datasets implies the need for new semi-supervised and unsupervised techniques. This problem is particularly relevant in the medical field, due to privacy issues and the high cost of image tagging by medical experts. The aim of this tutorial overview paper is to provide a short overview of recent results and advances in deep learning applications in computer vision, particularly regarding semantic segmentation.
2022
9782875870841
Files in this record:

File: ES2022-5.pdf (open access)
Type: Publisher's PDF
License: PUBLIC - Public with Copyright
Size: 1.35 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1216797