Analysis of the Moral Obligations of AI Developers Through the Principle of Explainability from the Perspective of Kantian Deontological Ethics: A Qualitative Study
Abstract
The proliferation of "black box" Artificial Intelligence systems creates a significant ethical void regarding accountability and user autonomy, fundamentally challenging the right of individuals to understand decisions affecting their lives. This study analyzes the moral obligations of AI developers to implement Explainability (XAI) through the rigorous normative framework of Kantian deontological ethics. Employing a qualitative research design based on conceptual analysis, the study draws on secondary data from Kant's foundational texts and contemporary literature on algorithmic transparency, applying the Categorical Imperative as the primary lens. The findings conclude that the deployment of non-explainable AI constitutes a direct violation of Kant's Formula of Humanity, as it reduces users to mere means for achieving computational goals rather than treating them as autonomous, rational agents. The practice likewise fails the Formula of Universal Law, since a maxim of opacity in decision-making cannot be universalized without contradiction. Consequently, the study asserts that Explainability is a non-negotiable moral duty for developers: predictive accuracy cannot ethically justify the erosion of human autonomy, and AI development must therefore shift from a paradigm of utilitarian efficiency to one of deontological adherence.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.