
Moral judgements of errors by AI systems and humans in civil and criminal law

Parlangeli, Oronzo; Curro', Francesco; Palmitesta, Paola; Guidi, Stefano
2023-01-01

Abstract

The evaluation of the use of Artificial Intelligence (AI) in legal decisions still raises unresolved questions. These concern the perceived seriousness of the errors that may be committed, the distribution of responsibility among the different decision-makers (human or artificial), and the evaluation of an error with respect to its benevolent or malevolent consequences for the person sanctioned. Above all, it appears relevant to assess the possible relationships between these variables. To this aim, we conducted a study through an online questionnaire (N = 288) in which participants considered different scenarios where a decision-maker, human or artificial, made an error of judgement concerning offences punishable by a fine (Civil Law infringement) or by years in prison (Criminal Law infringement). We found that humans who delegate to AIs are blamed less than humans deciding alone, although the effect of the decision-maker was subtle. In addition, people consider an error more serious when committed by a human being if a sentence for a crime under the penal code is mitigated, and when committed by an AI if a penalty for an infringement of the civil code is aggravated. The mitigated seriousness judgements for joint AI-human errors suggest the potential for strategic scapegoating of AIs.
Parlangeli, O., Curro', F., Palmitesta, P., Guidi, S. (2023). Moral judgements of errors by AI systems and humans in civil and criminal law. BEHAVIOUR & INFORMATION TECHNOLOGY, 1-11 [10.1080/0144929X.2023.2283622].
Files in this product:

Moral judgements of errors by AI systems and humans in civil and criminal law.pdf (not available)

Type: Publisher's PDF
Licence: NON-PUBLIC - Private/restricted access
Size: 1.9 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1251555