Brass Tacks Survey Tools
Can we complete this sentence using words already in our values statement: “AI will help us become more _______ as a company”?
Would our founder recognise this AI vision as consistent with why the company was started?
Have we defined what AI success looks like in terms of who we’re becoming, not just what we’re implementing?
Have we articulated which decisions must always remain human—not because AI can’t make them, but because we should?
Does our AI story position employees as heroes gaining new powers, rather than problems being solved?
Can a frontline employee retell our AI narrative in a way that makes them proud, not anxious?
Have we been honest about what we don’t know yet, rather than overselling certainty?
Have we named our agents in ways that clarify their role—neither disguising them as humans nor dismissing them as “just tools”?
3.1 Are skeptics being treated as people with legitimate concerns, not obstacles to overcome?
3.2 Have we started with believers and let success earn the skeptics, rather than mandating compliance?
3.3 Is there genuine dialogue happening—where leadership might change course based on feedback?
3.4 Have the people whose work agents will perform been genuinely consulted—not just informed—about how their roles will evolve?
4.1 Are we giving people the chance to learn before we expect them to perform?
4.2 Are we asking people to change how they work, rather than questioning whether they can keep working?
4.3 Have we made the first step easy enough that anyone can take it without fear of failure?
4.4 Are we developing people to work with agents (directing, supervising, correcting), not just making way for agents?
5.1 Are we measuring outcomes we’d care about even if AI didn’t exist? (Customer satisfaction, quality, employee engagement—not “AI adoption rates”)
5.2 Do our metrics reinforce our values, or just our efficiency?
5.3 Have we defined success in terms that a customer or employee would recognise as meaningful?
5.4 Are we tracking agent errors and overreach, not just agent productivity?
5.5 Do we measure how well humans and agents work together, not just how much agents accomplish alone?
6.1 Are we embedding AI into existing management rhythms, rather than creating parallel processes that fragment attention?
6.2 Is AI becoming part of how we already develop people, not a separate initiative competing for mindshare?
6.3 Will AI feel like an evolution of how we work, rather than a disruption imposed from outside?
6.4 Is there a clear human accountable for every agent’s actions—someone who can be asked, “Why did this happen?”
6.5 Do we have escalation paths for when agents encounter situations beyond their competence—and do agents know to use them?