When Spiritual Robots Fail: AI-based robots, spirituality, and moral responsibility for bad outcomes
Abstract
The use of robots for spiritual purposes represents a novel and still little-researched field within technology ethics and debates on moral responsibility. This article examines how moral responsibility should be assessed when AI-based robots contribute to spiritually significant bad outcomes. It argues that spirituality introduces a distinct normative dimension into responsibility attribution that has so far been largely neglected in debates on AI ethics. The analysis first conceptualizes spirituality as an individual orientation expressed in the search for meaning, transcendence, personal growth, or connection to others, nature, or the sacred. It then examines current forms of spiritual robots, including SanTO, BlessU2, NAO, and Xian’er, and their roles in addressing spiritual concerns. Finally, three illustrative scenarios of spiritually significant bad outcomes are analyzed: a lack of guidance (harm), too much guidance (manipulation), and too little guidance (loss of trust). Based on these cases, the analysis supports a differentiated account that combines backward-looking responsibility, where conditions such as control, knowledge, and intention are met, with a forward-looking collective responsibility for the design and governance of spiritual robots.
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
The author(s) retain copyright without any restriction.
LIMINA provides open access to its content immediately upon publication. By submitting a contribution, the author(s) agree to the terms of use of the CC BY licence.