This paper presents a fully asynchronous and distributed approach for tackling optimization problems in which both the objective function and the constraints may be nonconvex. In the considered network setting, each node is activated by a local timer and has access only to a portion of the objective function and to a subset of the constraints. In the proposed technique, based on the method of multipliers, each node performs, when it wakes up, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Nodes determine when to switch from the descent step to the ascent step through an asynchronous distributed logic-AND, which detects when all the nodes have reached a predefined tolerance in the minimization of the augmented Lagrangian. It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global augmented Lagrangian. This allows one to extend the properties of the centralized method of multipliers to the considered distributed framework. Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural network.

Farina, F., Garulli, A., Giannitrapani, A., Notarstefano, G. (2019). A distributed asynchronous method of multipliers for constrained nonconvex optimization. AUTOMATICA, 103, 243-253 [10.1016/j.automatica.2019.02.003].
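The per-node behavior described in the abstract (a descent step on a local augmented Lagrangian while the primal tolerance is not met, then an ascent step on the local multiplier) can be illustrated with a minimal single-node sketch. Everything below is an illustrative assumption, not taken from the paper: the toy local objective, the single inequality constraint, the step size, the penalty parameter, and the tolerance; the paper's distributed logic-AND over all nodes is replaced here by a local convergence flag.

```python
import numpy as np

def local_al_grad(x, lam, rho):
    # Toy local problem (assumption): minimize f_i(x) = ||x - c||^2
    # subject to g_i(x) = x[0] - 1 <= 0.
    # Standard augmented Lagrangian for inequalities: the gradient in x is
    # grad f + max(0, lam + rho*g) * grad g.
    c = np.array([2.0, 0.0])
    grad_f = 2.0 * (x - c)
    g = x[0] - 1.0
    grad_g = np.array([1.0, 0.0])
    mult = max(0.0, lam + rho * g)
    return grad_f + mult * grad_g, g

TOL, STEP, RHO = 1e-6, 0.1, 1.0
x, lam = np.zeros(2), 0.0
primal_done = False  # stands in for the distributed logic-AND of the paper
for _ in range(10000):
    if not primal_done:
        grad, _ = local_al_grad(x, lam, RHO)
        x -= STEP * grad                      # descent on local augmented Lagrangian
        if np.linalg.norm(grad) < TOL:
            primal_done = True
    else:
        _, g = local_al_grad(x, lam, RHO)
        lam = max(0.0, lam + RHO * g)         # ascent on local multiplier
        primal_done = False                   # restart primal minimization
```

For this toy data the iterates approach the constrained minimizer x = (1, 0) with multiplier lam = 2; the point of the sketch is only the alternation between primal descent and dual ascent governed by a convergence test.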

A distributed asynchronous method of multipliers for constrained nonconvex optimization

Farina, Francesco; Garulli, Andrea; Giannitrapani, Antonio; Notarstefano, Giuseppe
2019-01-01

Files in this item:

File: 1-s2.0-S0005109819300469-main.pdf (not available)
Description: Main article
Type: Publisher's PDF
License: NOT PUBLIC - Private/restricted access
Size: 1.37 MB
Format: Adobe PDF

File: 1075032_postprint.pdf (open access)
Description: https://doi.org/10.1016/j.automatica.2019.02.003
Type: Post-print
License: Creative Commons
Size: 571.3 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11365/1075032