
Yoo, D. (2023). The Ethics of Artificial Intelligence from an Economics Perspective: Logical, theoretical, and legal discussions in autonomous vehicle dilemma [10.25434/yoo-dae-hyun_phd2023].

The Ethics of Artificial Intelligence from an Economics Perspective: Logical, theoretical, and legal discussions in autonomous vehicle dilemma

YOO, DAE-HYUN
2023-01-01

Abstract

The development of artificial intelligence (AI) systems and their impact on our societies raise ethical concerns, and the ethical problems involved are not readily explainable to people. The persistent problem of autonomous vehicle (AV) ethics is how to handle dilemma situations in which the moral values of different people and competing ethical principles come into conflict. There is no consensus on which ethical choices should be made, or which moral principles should be embedded, to guide an AV's decisions in such situations. Existing lists of ethical principles and AI ethics guidelines say nothing about what to do when principles come into conflict with one another, and there is little research on the tangible implementation of ethical values in the field of AI. The notion of a floating conclusion is used to conceptualize conflicting propositions and is extended to the AV ethical dilemma; a floating-conclusion approach enables conflicting moral values to be implemented independently in AV settings. A new type of reasoner in the AV decision process is proposed to support the existence of two distinct types of AV. This reasoner reconciles the moral values and the personal self-interest of traffic participants by embedding different moral preferences in each of the two AV types. A static Bayesian game model is then used to design incentives for a mechanism that accommodates heterogeneous and inconsistent moral preferences in an AV decision dilemma, prevents moral-hazard behavior by human traffic participants, and improves transportation efficiency in mixed traffic. Embedding multiple moral values, rather than a single one, allows policymakers to develop feasible, practical, and effective mechanism designs for smooth human-AI collaboration. This dissertation gives a meaningful account of logic and ethical decision making in AI systems.
It contributes to the body of ethics considerations in human-AI interaction, specifically in the underexplored area where the ethical principles and moral values of the different participants in that interaction conflict with each other.
File in this product:
phd_unisi_095520.pdf (Adobe PDF, 978.45 kB)
Description: The doctoral dissertation
Type: Post-print
Licence: Public with copyright
Open Access from 04/06/2024

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1233836