Advance

Normative Ethics, Human Rights, and Artificial Intelligence

preprint
posted on 30.03.2022, 20:14 by Sanghamitra Choudhury, Shailendra Kumar

At some point in the future, nearly all jobs currently performed by humans may be performed by autonomous machines using artificial intelligence. There is little doubt that this will increase precision and comfort and save time, but it will also introduce many ethical, social, and legal difficulties. These difficulties offer an opportunity to revisit some of the basic, time-tested normative moral theories advanced by modern philosophers. These moral philosophies could offer significant advantages to the many players in AI, namely producers and consumers: customers could use them to inform purchase decisions about AI machines, while manufacturers could use them to write sound ethical algorithms for their machines. To handle the ethical difficulties that may arise from the use of these machines, the manuscript summarises the important and pertinent normative theories and arrives at a set of principles for writing algorithms for the manufacture and marketing of artificially intelligent machines. These normative theories are simple to understand and apply, and they do not require a deep understanding of difficult philosophical or religious notions. They hold that right and wrong may be determined merely by applying reason, and that arriving at a logical conclusion does not necessitate a thorough grounding in philosophy or religion. Another goal of the manuscript is to investigate whether artificial intelligence can be trusted to enforce human rights, and whether it is right to code all AI machines with one uniform moral code, particularly when they will be doing different jobs for different parties. Could the diversity of moral principles be used as a marketing strategy, and could humans be allowed to choose the moral codes for their machines?

Declaration of conflicts of interest

None

Corresponding author email

theskumar7@gmail.com

Lead author country

India
