
Mulamba Ke Tchomba, M., Mandi, J., Diligenti, M., Lombardi, M., Bucarey Lopez, V., & Guns, T. (2021). Contrastive Losses and Solution Caching for Predict-and-Optimize. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 2833-2840). International Joint Conferences on Artificial Intelligence [10.24963/ijcai.2021/390].

Contrastive Losses and Solution Caching for Predict-and-Optimize

Michelangelo Diligenti
2021

Abstract

Many decision-making processes involve solving a combinatorial optimization problem whose uncertain input can be estimated from historical data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. Second, we address a major bottleneck of all predict-and-optimize approaches, i.e., the need to frequently recompute optimal solutions at training time. This is done via a solver-agnostic solution caching scheme, replacing optimization calls with a lookup in the solution cache. The method is formally based on an inner approximation of the feasible space and, combined with a cache lookup strategy, provides a controllable trade-off between training time and accuracy of the loss approximation. We empirically show that even a very slow cache growth rate is enough to match the quality of state-of-the-art methods, at a fraction of the computational cost.
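The solution-caching idea in the abstract can be illustrated with a minimal sketch: with small probability the true solver is called and its solution added to the cache; otherwise the "solution" is the best cached feasible point under the current predicted cost vector (an inner approximation of the feasible region). The class name, the `p_solve` parameter, and the minimization convention below are illustrative assumptions, not the authors' exact implementation.

```python
import random

class SolutionCache:
    """Illustrative sketch of solver-agnostic solution caching.

    Details (names, minimization convention, growth schedule) are
    assumptions for illustration, not the paper's exact algorithm.
    """

    def __init__(self, solver, p_solve=0.05):
        self.solver = solver      # exact oracle: cost vector -> feasible solution
        self.p_solve = p_solve    # probability of calling the real solver (cache growth rate)
        self.cache = []           # feasible solutions collected so far

    def solve(self, cost):
        # Occasionally call the true solver and grow the cache.
        if not self.cache or random.random() < self.p_solve:
            sol = self.solver(cost)
            if sol not in self.cache:
                self.cache.append(sol)
            return sol
        # Otherwise, replace the optimization call with a cache lookup:
        # return the cached solution with the lowest predicted cost.
        return min(self.cache, key=lambda s: sum(c * x for c, x in zip(cost, s)))
```

A small `p_solve` keeps training cheap: most iterations cost only a linear scan over the cache, while the cache still slowly accumulates diverse feasible solutions.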
ISBN: 978-0-9992411-9-6
Files in this record:
0390.pdf — Publisher's PDF, Adobe PDF format, 540.06 kB. Access restricted (non-public license); a copy may be requested.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11365/1157194