Normative Ethics, Human Rights, and Artificial Intelligence
- Sanghamitra Choudhury
- Shailendra Kumar
Abstract
At some point in the future, nearly all jobs currently performed by
humans will be performed by autonomous machines using artificial
intelligence. There is little doubt that this shift will improve
precision and comfort and save time, but it will also introduce many
ethical, social, and legal difficulties. These difficulties offer an
opportunity to revisit some of the basic, time-tested normative moral
theories advanced by modern philosophers.
These moral philosophies could offer significant advantages to the main
players in AI, namely producers and consumers. Customers could use them
to inform purchasing decisions about AI machines, while manufacturers
could use them to write sound ethical algorithms for those machines. To
address the ethical difficulties that may arise from the use of these
machines, the manuscript will summarise the pertinent normative theories
and arrive at a set of principles for writing algorithms for the
manufacture and marketing of artificially intelligent machines. These
normative theories are simple to understand and apply, and they do not
require a deep grasp of difficult philosophical or religious notions:
they hold that right and wrong can be determined through reasoning
alone, without a thorough understanding of philosophy or religion.
Another goal of the manuscript is to investigate whether artificial
intelligence can be trusted to enforce human rights and whether it is
right to code all AI machines with one uniform moral code, particularly
when they will be performing different jobs for different parties. Could
the diversity of moral principles be used as a marketing strategy, and
could humans be allowed to choose the moral codes for their machines?