JeeLoo Liu writes:
In continuation of my "Confucian Robotic Ethics" project, I have now embarked on a research project on incorporating Confucian virtues into the design of ethical AI for social robots. To do so, I first need to understand which virtues people regard as more important when two virtues (such as loyalty and humaneness, honesty and loyalty, or obedience and righteousness) conflict with each other in various moral dilemmas.
I would like to know what you think future autonomous robots should do in situations involving (1) robot-assisted suicide, (2) whether a robot should lie, (3) what principle rescue robots should use to decide whom to rescue first, and (4) whether robots should obey human orders when such orders violate moral principles. These four sets of scenarios have 15 short questions each, and they are philosophically challenging. Each set takes about 10 minutes, and you have the choice to continue to the next set or exit.
I hope you will find it interesting to answer them. Also, please help spread the word: post it on social media, and encourage your students, friends, and family members to take the survey as well.
The survey is completely anonymous and is conducted in three languages: English, Spanish, and Chinese (both traditional and simplified). The main site is http://www.fullerton.edu/ethical-ai
Distinguished Faculty of H&SS
Department of Philosophy
Cal State University, Fullerton
Fullerton, CA 92834
In the introduction to this post, mention was made of origins. Those figure prominently in my own thinking on several topics, and your notes on conflicting priorities are illustrative. AI is not my area of expertise, but the moral implications are real. As to choices about rescue, the old trolley problem is a classic among thought experiments. I wish you success and would be privileged to answer your survey as best I can.