Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy

Falaschi, Moreno; Palamidessi, Catuscia; Romanelli, Marco
2020-01-01

Abstract

This paper shows how to combine the power of machine learning with the flexibility of constraints. More specifically, we show how machine learning models can be represented as first-order logic (FOL) theories, and how to derive these theories. The advantage of this representation is that it can be augmented with additional formulae expressing constraints on the data domain: for instance, new knowledge, the capabilities of potential attackers, or fairness desiderata. We consider various kinds of learning algorithms (neural networks, k-nearest neighbours, decision trees, and support vector machines), and for each of them we show how to infer the FOL formulae. We then focus on one particular application domain, namely security and privacy. The idea is to represent the capabilities and goals of the attacker as a set of constraints, and then use a constraint solver (more precisely, a satisfiability-modulo-theories solver) to check satisfiability. If a solution exists, an attack is possible; otherwise, the system is safe. We illustrate the approach with examples from different areas of security and privacy: a side-channel attack on a password checker, a malware attack on smart health systems, and a model-inversion attack on a neural network.
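To make the satisfiability-checking idea concrete, here is a minimal sketch (not taken from the paper) using the Z3 SMT solver's Python bindings: a toy decision tree is encoded as a first-order formula, the attacker's goal is added as an extra constraint, and the solver decides whether an input realising the attack exists. The tree's thresholds, feature bounds, and attacker goal below are all hypothetical.

# Minimal sketch of the abstract's idea, assuming the Z3 SMT solver
# (pip install z3-solver). The decision tree and attacker goal are
# hypothetical, not the paper's actual case studies.
from z3 import Real, If, Solver, sat

x1, x2 = Real("x1"), Real("x2")

# FOL encoding of a toy trained decision tree over two features:
# if x1 <= 3 then (if x2 <= 1 then class 0 else class 1) else class 1
predicted_class = If(x1 <= 3, If(x2 <= 1, 0, 1), 1)

s = Solver()
s.add(x1 >= 0, x1 <= 10, x2 >= 0, x2 <= 10)  # data-domain constraints
s.add(predicted_class == 0)                  # attacker's goal: force class 0

if s.check() == sat:
    # A satisfying assignment is a concrete input realising the attack.
    print("attack possible, witness:", s.model())
else:
    print("system safe: no input satisfies the attacker's constraints")

Other model classes would be encoded along the same lines; for example, a ReLU unit of a neural network can be written as If(z > 0, z, 0), after which the same satisfiability query captures model-inversion-style goals.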
Year: 2020
ISBN: 978-3-95977-171-9
Falaschi, M., Palamidessi, C., Romanelli, M. (2020). Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy. In Recent Developments in the Design and Implementation of Programming Languages. Dagstuhl: Schloss Dagstuhl – Leibniz-Zentrum für Informatik. DOI: 10.4230/OASIcs.Gabbrielli.11.
Files in this record:

OASIcs-Gabbrielli-11.pdf
Access: open access
Type: publisher's PDF (editorial version)
Licence: Creative Commons
Size: 843.13 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1124702