When Spiritual Robots Fail: AI-based robots, spirituality, and moral responsibility for bad outcomes

Mario Kropf

Abstract

The use of robots for spiritual purposes represents a novel and still little-researched field within technology ethics and debates on moral responsibility. This article examines how moral responsibility should be assessed when AI-based robots contribute to spiritually significant bad outcomes. It argues that spirituality introduces a distinct normative dimension into responsibility attribution, one that has so far been largely neglected in debates on AI ethics. The analysis first conceptualizes spirituality as an individual orientation expressed in the search for meaning, transcendence, personal growth, or connection to others, nature, or the sacred. It then examines current forms of spiritual robots, including SanTO, BlessU2, NAO, and Xian'er, and their roles in addressing spiritual concerns. Finally, three illustrative scenarios of spiritually significant bad outcomes are analyzed: a lack of guidance (harm), too much guidance (manipulation), and too little guidance (loss of trust). Based on these cases, the analysis supports a differentiated account that combines backward-looking responsibility, applicable where conditions such as control, knowledge, and intention are met, with forward-looking collective responsibility for the design and governance of spiritual robots.

Article Details

How to Cite
Kropf, M. (2026). When Spiritual Robots Fail: AI-based robots, spirituality, and moral responsibility for bad outcomes. LIMINA - Grazer Theologische Perspektiven, 9(1), 221–246. Retrieved from https://www.limina-graz.eu/index.php/limina/article/view/299
Section
Artikel zum Schwerpunktthema (articles on the thematic focus)