Explainable and Ethical Artificial Intelligence
Artificial Intelligence (AI) has become part of our daily lives and has achieved remarkable advances in areas such as facial recognition, medical diagnosis, and self-driving cars. AI promises extensive benefits for economic and social development, as well as improvements in human well-being, security, and safety.
One promising avenue is the integration of symbolic AI with machine learning in so-called neurosymbolic approaches: symbolic AI brings interpretability through logic and knowledge representation, while machine learning offers strong predictive power. In this context, Semantic Web technologies, and ontologies in particular, are emerging as key tools for building AI systems that are both effective and understandable. Moreover, identifying the causal attributes behind AI-driven decisions has great potential to make those decisions more transparent and reliable. Our work explores these methods to advance the development of truly explainable AI.
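To make the neurosymbolic idea concrete, here is a minimal sketch, not our actual system: a hypothetical statistical classifier proposes a label, and a hand-coded ontology fragment (here, a plain dictionary standing in for OWL axioms) either accepts the prediction or explains why it conflicts with background knowledge. All names (`ONTOLOGY`, `ml_predict`, `check_prediction`) and the toy axioms are illustrative assumptions.

```python
# Hypothetical ontology fragment: each class lists attributes that any
# instance of that class must have (a crude stand-in for OWL axioms).
ONTOLOGY = {
    "Bird": {"has_feathers", "lays_eggs"},
    "Mammal": {"has_fur", "produces_milk"},
}

def ml_predict(features: set) -> str:
    """Stand-in for a learned model: a trivial overlap-based scorer."""
    return max(ONTOLOGY, key=lambda cls: len(ONTOLOGY[cls] & features))

def check_prediction(features: set):
    """Accept the ML label only if it satisfies the class's necessary
    conditions; otherwise return the violated axioms as a
    human-readable explanation."""
    label = ml_predict(features)
    missing = ONTOLOGY[label] - features
    explanation = [f"{label} requires '{m}', which is absent" for m in missing]
    return label, explanation

if __name__ == "__main__":
    label, why = check_prediction({"has_feathers", "has_fur"})
    print(label, why or "consistent with the ontology")
```

The design point is that the symbolic layer does not improve the prediction itself; it turns a disagreement between the model and the knowledge base into an explicit, inspectable explanation.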
Research Highlights
- Defined key terminology for Explainable AI (XAI) and clarified what constitutes an explanation.
- Established the mathematical and formal foundations of XAI in specific domains, including Natural Language Processing (NLP) and mental healthcare.
- Developed an ontology-based classifier that detects and explains classification errors.
- Introduced counterfactual explanations for ontologies, making AI reasoning understandable to both experts and non-experts (see the sketch after this list).
- Built knowledge-infused detection systems for the explainable assessment of mental disorders and their potential contributors.
- Validated these contributions through diverse case studies and user evaluations.
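As a toy illustration of counterfactual explanations over an ontology, and under stated assumptions rather than as our published method: suppose class membership is defined by alternative sufficient attribute sets (a stand-in for an OWL class defined as a disjunction of conjunctions). A counterfactual then answers "what is the smallest change that would flip the classification?". The class name `EligiblePatient`, the attribute names, and the brute-force search are all hypothetical.

```python
from itertools import combinations

# Hypothetical class definition: an individual is an "EligiblePatient"
# if it satisfies at least one of these attribute conjunctions.
DEFINITIONS = [
    {"over_18", "has_referral"},             # referred adult
    {"over_18", "consent_given", "urgent"},  # urgent consenting adult
]
ALL_ATTRS = set().union(*DEFINITIONS)

def is_member(attributes: set) -> bool:
    """Membership test: any sufficient condition fully satisfied."""
    return any(defn <= attributes for defn in DEFINITIONS)

def counterfactual(attributes: set):
    """Search for the smallest set of attributes to add so that the
    individual becomes a member, trying changes of size 1, 2, ..."""
    if is_member(attributes):
        return set()  # already a member; nothing to change
    candidates = ALL_ATTRS - attributes
    for size in range(1, len(candidates) + 1):
        for change in combinations(sorted(candidates), size):
            if is_member(attributes | set(change)):
                return set(change)
    return None

if __name__ == "__main__":
    patient = {"over_18"}
    print(counterfactual(patient))  # -> {'has_referral'}
```

The minimal change ({'has_referral'}) doubles as an explanation that non-experts can act on: it states what would have to be different for the classification to change, rather than how the classifier works internally.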