
FAT Decision Support

Fair and Accountable Decision Support
Project Type: Foundational Research
Funded by: Haushalt (institutional funds)
Start: 2017
End:
Project Manager: Prof. Dr. rer. nat. Melanie Herschel
Staff: Oppold, Sarah
Contact:
Brief Description

Machine learning models are commonly used for decision support. Ideally, the decisions should be impartial, unbiased, and fair. However, machine learning models are far from perfect, e.g., due to bias introduced by imperfect training data or wrong feature selection. While efforts are being made, and should continue to be made, to develop better models, we also acknowledge that we will continue to rely on imperfect models in many applications. But what if we could provably rely on the “best” model for an individual or a group of individuals and transparently communicate the risks and weaknesses that apply? In light of this question, we propose a system framework that optimizes the choice of model for specific subgroups of the population or even individual persons, relying on metadata sheets for data and models. At the same time, to achieve transparency, the framework captures data to explain the choices made and the results of the model at different scales to different stakeholders.
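The per-subgroup model selection can be sketched in a few lines of Python. This is a minimal illustration only, assuming metadata sheets record one quality score per model per subgroup; the ModelSheet structure, the select_model function, and the subgroup labels are hypothetical names for this sketch, not part of the project's actual implementation.

from dataclasses import dataclass

@dataclass
class ModelSheet:
    """Hypothetical metadata sheet: per-subgroup quality scores for one model."""
    name: str
    subgroup_scores: dict  # e.g. {"age<30": 0.91, "age>=30": 0.78}

def select_model(sheets, subgroup):
    """Pick the model whose metadata sheet reports the best score for the subgroup."""
    scored = [(s.subgroup_scores.get(subgroup, 0.0), s) for s in sheets]
    best_score, best = max(scored, key=lambda t: t[0])
    # Return the score alongside the choice so the decision can be explained.
    return best.name, best_score

sheets = [
    ModelSheet("logreg", {"age<30": 0.91, "age>=30": 0.78}),
    ModelSheet("gbm",    {"age<30": 0.85, "age>=30": 0.88}),
]
print(select_model(sheets, "age>=30"))  # ('gbm', 0.88)

Returning the score together with the chosen model is what supports the transparency goal: the same record can be surfaced at different levels of detail to developers, auditors, or the person affected by the decision.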

[Figure: Sample Model Ensemble with metadata sheets (MDS)]