Assessing and Enforcing Fairness in the AI Lifecycle

Roberta Calegari, Gabriel G. Castañé, Michela Milano, Barry O’Sullivan
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), pp. 6554-6562
IJCAI Organization
August 2023

A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness.
The current literature on fairness is broad, and the learning curve required to determine where to apply existing metrics and techniques for bias detection or mitigation is steep.
This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation across the AI lifecycle.
Gaps and challenges identified during the development of this work are also discussed.

Keywords: AI Ethics, Trust, Fairness
Funding project
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems (01/11/2022–31/10/2025)
Serves as
reference publication for presentations
Assessing and Enforcing Fairness in the AI Lifecycle (Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), 23/08/2023) — Roberta Calegari (Roberta Calegari, Gabriel Gonzalez Castane, Michela Milano, Barry O'Sullivan)
XAI: Current Frontiers and the Path Ahead Towards Trustworthy AI (XLoKR 2023 - Explainable Logic-Based Knowledge Representation, KR 2023, 02/09/2023) — Roberta Calegari (Roberta Calegari)