With the rapid progress of technology, the decision-making and behavior of artificial intelligence have begun to shift from external specification to internal development. Intelligent agents now possess varying degrees of adaptive, decision-making, and behavioral capability, and their autonomy continues to grow. On moral grounds, artificial intelligence that decides and acts autonomously has come to be regarded as a moral agent. How traditional morality can play a role in autonomous intelligent technologies has therefore become a problem that must be confronted. The three main theories of normative ethics (consequentialism, deontology, and virtue ethics) each offer potential solutions. This article draws on these normative ethical theories to construct an artificial intelligence system capable of moral decision-making, ensuring that the autonomous reasoning of artificial intelligence is constrained by human social morality and values, remains consistent with human values, and bears the “responsibility” for its decisions.