MANTRA: Memory Augmented Networks for Multiple Trajectory Prediction

Federico Becattini
2020-01-01

Abstract

Autonomous vehicles are expected to drive in complex scenarios with several independent, non-cooperating agents. Path planning for safely navigating such environments cannot rely solely on perceiving the present location and motion of other agents; it requires predicting these variables sufficiently far into the future. In this paper we address the problem of multimodal trajectory prediction by exploiting a Memory Augmented Neural Network. Our method learns past and future trajectory embeddings using recurrent neural networks and exploits an associative external memory to store and retrieve such embeddings. Trajectory prediction is then performed by decoding in-memory future encodings conditioned on the observed past. We incorporate scene knowledge into the decoding state by learning a CNN on top of semantic scene maps. Memory growth is limited by learning a writing controller based on the predictive capability of the existing embeddings. We show that our method natively performs multimodal trajectory prediction, obtaining state-of-the-art results on three datasets. Moreover, thanks to the non-parametric nature of the memory module, we show how, once trained, our system can continuously improve by ingesting novel patterns.
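
To make the encode-store-retrieve-decode pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a memory-augmented trajectory predictor in the same spirit: GRU encoders produce past and future embeddings, past embeddings act as keys and future embeddings as values in an associative memory, and a decoder turns each retrieved future encoding, conditioned on the observed past, into one candidate trajectory. All names, dimensions, and the simplified write rule are assumptions for illustration (the paper learns a writing controller and a scene CNN, both omitted here); this is not the authors' implementation.

```python
# Hypothetical sketch of a MANTRA-style memory-augmented trajectory predictor.
# Dimensions, module names, and the naive write rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryTrajectoryPredictor(nn.Module):
    def __init__(self, feat_dim=48, hidden=48, horizon=40):
        super().__init__()
        self.horizon = horizon
        # Encoders map 2D (x, y) trajectories to fixed-size embeddings.
        self.past_encoder = nn.GRU(2, feat_dim, batch_first=True)
        self.future_encoder = nn.GRU(2, feat_dim, batch_first=True)
        # Associative memory: keys are past embeddings, values are future embeddings.
        self.mem_keys = []    # list of (feat_dim,) tensors
        self.mem_values = []  # list of (feat_dim,) tensors
        # Decoder reconstructs a future trajectory from [past, retrieved future].
        self.decoder = nn.GRU(2 * feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def encode_past(self, past):
        _, h = self.past_encoder(past)            # past: (B, T_obs, 2)
        return h.squeeze(0)                       # (B, feat_dim)

    def write(self, past, future):
        # Naive write rule: store every training sample. The paper instead learns
        # a controller that writes only when existing memories predict poorly,
        # which keeps memory growth bounded.
        with torch.no_grad():
            keys = self.encode_past(past)
            _, vals = self.future_encoder(future)
            for k, v in zip(keys, vals.squeeze(0)):
                self.mem_keys.append(k)
                self.mem_values.append(v)

    def forward(self, past, top_k=5):
        assert self.mem_keys, "populate the memory via write() before predicting"
        q = self.encode_past(past)                                   # (B, feat_dim)
        keys = torch.stack(self.mem_keys)                            # (M, feat_dim)
        values = torch.stack(self.mem_values)                        # (M, feat_dim)
        # Cosine similarity between the observed past and all stored keys.
        sim = F.cosine_similarity(q.unsqueeze(1), keys.unsqueeze(0), dim=-1)
        _, idx = sim.topk(min(top_k, len(self.mem_keys)), dim=-1)    # (B, K)
        preds = []
        for j in range(idx.shape[1]):
            retrieved = values[idx[:, j]]                            # (B, feat_dim)
            state = torch.cat([q, retrieved], dim=-1)                # (B, 2*feat_dim)
            # Feed the joint encoding at every decoding step.
            seq = state.unsqueeze(1).repeat(1, self.horizon, 1)
            h, _ = self.decoder(seq)
            preds.append(self.out(h))                                # (B, horizon, 2)
        return torch.stack(preds, dim=1)                             # (B, K, horizon, 2)
```

Retrieving the top-K most similar keys and decoding each retrieved future separately is what gives the predictor its multimodal output: each memory entry yields one plausible future for the same observed past.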
Year: 2020
ISBN: 978-1-7281-7168-5
Marchetti, F., Becattini, F., Seidenari, L., Del Bimbo, A. (2020). Mantra: Memory augmented networks for multiple trajectory prediction. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp.7141-7150). IEEE Computer Society [10.1109/CVPR42600.2020.00717].
Files for this record:

Marchetti_MANTRA_Memory_Augmented_Networks_for_Multiple_Trajectory_Prediction_CVPR_2020_paper.pdf
Type: Post-print
License: NOT PUBLIC - Private/restricted access
Size: 2.6 MB
Format: Adobe PDF

MANTRA_Memory_Augmented_Networks_for_Multiple_Trajectory_Prediction.pdf
Type: Publisher's PDF
License: NOT PUBLIC - Private/restricted access
Size: 926.45 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1224522