Explainable and Ethical Artificial Intelligence

Artificial Intelligence (AI) has become part of our daily lives and has achieved remarkable advances in areas such as facial recognition, medical diagnosis, and self-driving cars. AI promises extensive benefits for economic and social development, as well as improvements in human well-being, security, and safety.

However, AI systems can have unintended negative consequences when deployed in the real world. In particular, they may lead to breaches of privacy, safety, security, compliance, and governance. Such issues commonly arise during development and procurement, often due to rushed timelines, insufficient technical understanding, and inadequate quality assurance. Moreover, the “opaque” nature of machine learning, in which the trained model is not open to human examination, makes such systems unsuitable for safety-critical domains such as healthcare, banking, law, and autonomous systems. Although a growing number of institutions are developing ethical AI principles and standards to mitigate these risks, these measures alone are not sufficient to ensure the responsible and trustworthy use of AI. Explainable AI (XAI) methods address these issues by producing human-interpretable representations of AI models and rationales for their judgments, thereby increasing confidence in their use. Key elements of XAI include transparency, trustworthiness, interpretability, and reasoning.

One promising avenue is the integration of symbolic AI with machine learning in so-called neurosymbolic approaches. Symbolic AI brings interpretability through logic and knowledge representation, while machine learning offers strong predictive power. In this context, Semantic Web technologies, and particularly ontologies, are emerging as key tools for building AI systems that are both effective and understandable. Moreover, identifying the causal attributes that drive a system's decisions is another promising route to more faithful and actionable explanations. Our work focuses on exploring these methods to advance the development of truly explainable AI.
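
To make the neurosymbolic idea concrete, the sketch below pairs a statistical classifier with symbolic domain rules that stand in for ontology-backed constraints: the model's prediction is checked against the rules, and any violation is reported as a human-readable explanation. The rules, labels, and toy model are hypothetical placeholders chosen for illustration, not the systems developed in this work.

    from dataclasses import dataclass
    from typing import Callable, Dict, List


    @dataclass
    class Rule:
        """A symbolic constraint: when `condition` holds, only `allowed_labels` are valid."""
        name: str
        condition: Callable[[Dict[str, float]], bool]
        allowed_labels: List[str]


    def toy_model(features: Dict[str, float]) -> str:
        """Stand-in for a trained statistical classifier (hypothetical)."""
        return "high_risk" if features["score"] > 0.5 else "low_risk"


    # Hypothetical domain rule, standing in for knowledge that would normally
    # come from an ontology: patients under 18 must not be labelled high risk
    # by this particular screening model.
    RULES = [
        Rule(
            name="minors_are_low_risk",
            condition=lambda f: f["age"] < 18,
            allowed_labels=["low_risk"],
        ),
    ]


    def predict_with_explanation(features: Dict[str, float]) -> Dict[str, object]:
        """Combine the statistical prediction with symbolic consistency checks."""
        label = toy_model(features)
        violated = [
            r.name
            for r in RULES
            if r.condition(features) and label not in r.allowed_labels
        ]
        explanation = (
            "Prediction is consistent with all domain rules."
            if not violated
            else "Prediction conflicts with rule(s): " + ", ".join(violated)
        )
        return {"prediction": label, "consistent": not violated, "explanation": explanation}


    if __name__ == "__main__":
        print(predict_with_explanation({"age": 15.0, "score": 0.8}))

In a full system, the hand-written rules would be replaced by constraints derived from a domain ontology, and the toy model by a trained classifier; the point of the pattern is that every flagged prediction comes with a named rule a human can inspect.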


Research Highlights

  • Defined key terminology for Explainable AI (XAI) and clarified what constitutes an explanation.

  • Established the mathematical and formal foundations of XAI in specific domains, including Natural Language Processing (NLP) and mental healthcare.

  • Developed an ontology-based classifier that detects and explains classification errors.

  • Introduced counterfactual explanations for ontologies, making AI reasoning understandable to both experts and non-experts (see the illustrative sketch after this list).

  • Built knowledge-infused detection systems for the explainable assessment of mental disorders and their potential contributors.

  • Validated these contributions through diverse case studies and user evaluations.
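
As a companion to the counterfactual highlight above, the following sketch shows the generic counterfactual-explanation idea on tabular inputs: search for the smallest change to an instance that flips the model's decision. It is a simple exhaustive search over hypothetical candidate values, not the ontology-based counterfactual method referenced in the highlights; the loan model, feature names, and candidate values are all illustrative assumptions.

    from itertools import product
    from typing import Callable, Dict, List, Optional


    def counterfactual(
        model: Callable[[Dict[str, float]], str],
        instance: Dict[str, float],
        candidates: Dict[str, List[float]],
        desired_label: str,
    ) -> Optional[Dict[str, float]]:
        """Return the candidate assignment with the fewest changed features that
        the model maps to `desired_label`, or None if no candidate succeeds."""
        feature_names = list(candidates)
        best, best_changes = None, None
        for values in product(*(candidates[name] for name in feature_names)):
            trial = dict(instance)
            trial.update(zip(feature_names, values))
            if model(trial) != desired_label:
                continue
            changes = sum(trial[name] != instance[name] for name in instance)
            if best is None or changes < best_changes:
                best, best_changes = trial, changes
        return best


    if __name__ == "__main__":
        # Hypothetical toy model: approve a loan when income sufficiently exceeds debt.
        approve = lambda x: "approved" if x["income"] - 0.5 * x["debt"] > 40 else "rejected"
        applicant = {"income": 30.0, "debt": 10.0}  # currently rejected
        cf = counterfactual(
            approve,
            applicant,
            candidates={"income": [30.0, 50.0], "debt": [0.0, 10.0]},
            desired_label="approved",
        )
        print("Counterfactual:", cf)  # {'income': 50.0, 'debt': 10.0}: one changed feature

On the toy loan example, the search reports that raising the applicant's income (a single changed feature) is the smallest change that yields an approval, which is the kind of contrastive, human-readable answer counterfactual explanations aim to provide.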
