Explainable Decision Support Systems (2010–2024): A Bibliometric Review of Intellectual Structure
Abstract
This study provides a comprehensive bibliometric analysis of Explainable Decision Support Systems (EDSS) research published between 2010 and 2024, with the objective of mapping its intellectual structure, thematic evolution, and emerging trends. Data were collected from the Scopus database and analyzed using quantitative bibliometric techniques, including performance analysis and science mapping, supported by visualization in VOSviewer. The results indicate significant growth in EDSS publications, particularly after 2019, driven by the increasing demand for transparency and accountability in artificial intelligence-based decision-making. Network and density analyses reveal that core research themes center on machine learning, decision support systems, and artificial intelligence, while emerging topics include interpretability, trust, and human-centered design. Co-authorship and co-citation analyses highlight the interdisciplinary nature of the field, with strong contributions from domains such as healthcare, industry, and data science. Furthermore, the findings demonstrate a shift from purely technical model development toward the integration of explainability, ethics, and user interaction in decision-making systems. This study contributes a structured overview of EDSS research and identifies key directions for future work, particularly the development of standardized evaluation frameworks and the expansion of applications beyond dominant domains such as healthcare.
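The science-mapping step described above rests on keyword co-occurrence: two terms are linked whenever they appear in the same record, and link weights drive the network and density views in VOSviewer. A minimal sketch of that counting step is shown below; the toy records and keyword lists are illustrative stand-ins for the semicolon-separated "Author Keywords" field of a Scopus export, not the study's actual data.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often each unordered pair of keywords appears together
    in the same record (the link weights of a co-occurrence map)."""
    pairs = Counter()
    for keywords in records:
        # Normalise case and deduplicate within a record, then count
        # each pair at most once per record.
        kws = sorted({k.strip().lower() for k in keywords})
        pairs.update(combinations(kws, 2))
    return pairs

# Toy stand-in for keyword lists parsed from a bibliographic export.
records = [
    ["Machine Learning", "Decision Support Systems", "Explainability"],
    ["machine learning", "Interpretability", "Trust"],
    ["Decision Support Systems", "Machine Learning"],
]

links = cooccurrence(records)
print(links[("decision support systems", "machine learning")])  # 2
```

Tools such as VOSviewer then layout and cluster this weighted network; the counting itself is no more than the pair tally above.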
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.