Enhancing Trust by Ensuring Fairness in Artificial Intelligence Systems in Healthcare
Luka Poslon - Review Article
Enhancing Trust by Ensuring Fairness in Medical AI
Summary
The paper addresses contemporary challenges of fairness in the context of applying artificial intelligence (AI) in healthcare. Beyond the challenge of conceptualising fairness, it also presents the importance of explainability and of explainable AI, through which trust in the application of AI in healthcare is secured. Unwelcome examples of AI use in healthcare remind us that an AI system may propose unfair and biased prediction outcomes, which can further deepen social inequalities along racial, ethnic, or other lines. It is therefore necessary to develop AI systems that mitigate bias and the consequences of discrimination, which can adversely affect decision-making processes. Despite tools such as "AI Fairness 360" and the "What-If Tool", which provide technical solutions for reducing bias in AI systems, additional effort must be invested in developing AI that can explain its predictions while meeting ethical norms. A solution for mitigating unfairness and bias can therefore be found in the AI system called TWIX, which, thanks to its explainability capabilities, ensures fairer predictions. Future research should be directed towards developing similar AI systems with the ability to explain their predictions, in the interest of fairer decision-making.
Abstract
The paper addresses current issues of fairness in the context of the application of artificial intelligence (henceforth: AI) in healthcare. Along with the difficulties in conceptualising fairness, the paper demonstrates the significance of explainable AI (henceforth: xAI), since xAI helps to mitigate bias and build trust in the use of Medical AI. Case studies of AI's use in healthcare serve as a reminder that an AI system may produce unfair and biased prediction results, which might exacerbate societal disparities based on race, ethnicity, or other factors. Thus, AI systems that mitigate bias and minimize the effects of discrimination—which can negatively impact the decision-making process—must be developed. Even with technological solutions for reducing bias in AI systems, such as the What-If Tool and AI Fairness 360, additional effort is required to create AI that can provide explanations for its predictions in order to adhere to ethical norms. Furthermore, explainability is especially important in healthcare in situations requiring informed consent. This was highlighted in the most recent Artificial Intelligence Act regulation, which places a strong emphasis on explainability, transparency, and human oversight. This paper presents technical solutions that can significantly reduce bias by enabling explainable predictions, which positively impacts clinical outcomes and trust in the technology. By proposing the development of AI systems like TWIX, which improve trust and mitigate bias by providing explanations, the paper aims to contribute to the development of future AI systems that are trustworthy and ethical. The AI system TWIX, which ensures fairer predictions thanks to its explainability capabilities, thus offers a way to reduce bias and discrimination. Future research should focus on developing similar AI systems with predictive explanatory abilities in order to promote fair and trustworthy decision-making.
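To make the notion of a measurable fairness criterion concrete, the sketch below computes the statistical parity difference, one common group-fairness metric also exposed by toolkits such as AI Fairness 360. The data, group labels, and clinical framing are purely hypothetical illustrations, not results from the paper; a real audit would run such metrics on actual model outputs.

```python
# Minimal sketch of one group-fairness check: the statistical parity
# difference between an unprivileged and a privileged group.
# A value of 0 means both groups receive positive predictions at the
# same rate; negative values mean the unprivileged group is favoured less.

def statistical_parity_difference(predictions, groups, unprivileged, privileged):
    """P(pred = 1 | unprivileged) - P(pred = 1 | privileged)."""
    def positive_rate(group):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        return sum(preds) / len(preds)
    return positive_rate(unprivileged) - positive_rate(privileged)

# Hypothetical triage-model outputs (1 = patient referred to a care programme)
preds = [1, 0, 0, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

spd = statistical_parity_difference(preds, groups, unprivileged="A", privileged="B")
print(f"statistical parity difference: {spd:+.2f}")  # group A: 0.50, group B: 0.75
```

Here group A receives positive predictions at a rate of 0.50 versus 0.75 for group B, giving a disparity of -0.25, which a bias-mitigation step would then try to shrink towards zero.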
