Designing governance systems for AI is challenging on multiple fronts. Machine learning models are inscrutable even to experts, yet it is frequently non-technical members of senior management who set budget, scope, and quality targets and hold final sign-off on product release. I work through a governance use case of setting and enforcing policy on protected category labels (e.g., race, age, and gender). This use case demonstrates the need for a difficult conversation between data scientists and senior management covering bias-mitigation techniques, standards, regulations, and business strategy. I propose a solution based on the notion of a multi-layer policy with adaptive verification subprocesses. Using this construct, I show how oversight committees can work hand in hand with data scientists to bring responsible AI systems into production.