Our main objectives
In the FairXCluster project, we will focus on two fundamental challenges in AI research, namely explainability and fairness. We will address these challenges in the context of clustering, a cornerstone AI task, through the use of counterfactual explanations (CFEs).
Explainable AI is an essential facet of modern artificial intelligence development, addressing the critical need for transparency and understandability in AI systems. As AI models, particularly deep learning algorithms, become increasingly complex and integrated into high-stakes domains such as healthcare, finance, and autonomous vehicles, the ability to understand and interpret their decisions becomes paramount. Explainable AI seeks to make the decision-making processes of these algorithms transparent, providing insight into how and why certain outcomes are reached. This transparency not only fosters trust among users and stakeholders, but also enables developers and researchers to more effectively diagnose and refine AI models to ensure they meet ethical standards, legal compliance, and social acceptance.
Counterfactual explanations delve into the realm of "what might have been" to shed light on how a machine learning (ML) model arrives at its decisions. At their core, these explanations explore alternative scenarios by suggesting minimal adjustments to the original input that would lead the ML system to a different outcome. They operate on the principle of identifying and modifying the critical variables that would change the model's decision, without relying on specific examples. This approach gives users a straightforward, tangible way to understand the model's decision-making process. By focusing on the changes necessary to achieve a different outcome, counterfactual explanations not only illuminate how ML models make decisions, but also empower users to navigate, and potentially influence, the outcomes of these systems in future interactions. Notably, this method also contributes to the fairness of AI systems: because decisions are made transparently, they can be scrutinized for bias or inaccuracies, promoting equitable treatment across all users.
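The idea of "minimal adjustments that flip the outcome" can be sketched concretely. The following is a minimal illustration, not part of the FairXCluster method itself: it assumes a toy linear classifier with hypothetical weights (a stand-in "credit scoring" model) and performs a simple greedy search that nudges the most influential feature until the prediction changes.

```python
import numpy as np

# Hypothetical toy model: a linear classifier over three standardized
# features (income, debt, savings). Weights are illustrative only.
WEIGHTS = np.array([0.6, -0.4, 0.8])
BIAS = -0.5

def predict(x: np.ndarray) -> int:
    """Return 1 (e.g. 'approve') if the linear score is positive, else 0."""
    return int(WEIGHTS @ x + BIAS > 0)

def counterfactual(x: np.ndarray, step: float = 0.05,
                   max_iter: int = 1000) -> np.ndarray:
    """Greedy counterfactual search: repeatedly nudge the single most
    influential feature toward the decision boundary until the model's
    prediction flips, keeping the change to the input small and sparse."""
    target = 1 - predict(x)          # the alternative outcome we want
    cf = x.copy()
    for _ in range(max_iter):
        if predict(cf) == target:
            return cf
        # Push the score toward the target class along the feature
        # with the largest weight magnitude.
        direction = 1.0 if target == 1 else -1.0
        i = int(np.argmax(np.abs(WEIGHTS)))
        cf[i] += direction * np.sign(WEIGHTS[i]) * step
    raise RuntimeError("no counterfactual found within max_iter")

# A rejected applicant: score = 0.12 - 0.20 + 0.08 - 0.5 < 0
applicant = np.array([0.2, 0.5, 0.1])
cf = counterfactual(applicant)
```

Here the counterfactual differs from the original input in a single feature, which is exactly what makes such explanations actionable: "had your savings been this much higher, the decision would have been different."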