How the board of directors got their start with adversarial debiasing

Time: Tuesday 11-Aug-2020 22:30



Motivation / Abstract
Designing governance systems for AI is challenging on multiple fronts. Machine learning models are inscrutable to the best of us, yet it is frequently non-technical members of senior management who set budget, scope, and quality targets, with final sign-off on product release. I work through a governance use case of setting and enforcing policy for protected category labels (e.g. race, age, and gender). This demonstrates the need for a difficult conversation between data scientists and senior management covering bias mitigation techniques, standards, regulations, and business strategy. I propose a solution based on the notion of a multi-layer policy with adaptive verification subprocesses. Using this construct, I show how oversight committees can genuinely work hand-in-hand with data scientists to bring responsible AI systems into production.
Questions Discussed
- How can we define “algorithmic fairness”, and how does that differ from our everyday understanding of fairness?
- Who should decide which fairness definition to use? How can engineering teams contribute?
- What is the economic incentive for fairness?
Key Takeaways
- There are many algorithmic definitions of fairness; which one to use still depends on proper debate and human involvement, and we cannot yet leave the fairness judgement entirely to machines (see the metric sketch after this list).
- There are ways to think about incentives for fairness, such as cost avoidance (reputational damage, legal action, ...) or more positive drivers (reaching a wider customer base), but the business case for companies is not always clear, so legislation and regulation are still needed to protect users' interests.
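
To make the first takeaway concrete, here is a minimal illustrative sketch (not material from the talk) of two of the many competing fairness definitions, computed for binary predictions and a binary protected attribute: the demographic parity difference and the equalized odds difference. The data, function names, and group encoding are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the best- and worst-treated group."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    gaps = []
    for label in (0, 1):  # label == 0 gives the FPR gap, label == 1 the TPR gap
        mask = (y_true == label)
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy example: predictions for applicants from two groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equalized odds difference:", equalized_odds_difference(y_true, y_pred, group))
```

Note that the two metrics can disagree on the same predictions, which is exactly why choosing a definition is a policy decision requiring human debate rather than a purely technical one.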
Stream Categories:
- AI Products