A few games people play when it’s time to set goals:
Game #1
Set easy goals that are super attainable within the limits of current behavior. In this game, every goal is always met and nothing ever changes.
Game #2
Set goals that are more ambitious, but are less likely to be met. Real progress can be “hidden” behind partial progress towards unmet goals. But that is OK so long as everyone is playing the same game.
Game #3
Set goals at the theoretical limit. Error should be 0%. No one should be harmed. Everyone must be free.
Game #4
People loiter about, trying to figure out what kind of goal one is meant to set from various context clues, possibly inscrutable or obscured. Then, they play the appropriate game from the list above. Whatever you do, don’t guess wrong! Do what everybody else does, but a little more, but not too much!
Discussion
Of these, game #4 is the one I see played most frequently.
But as far as I’m concerned, game #3—setting goals at the theoretical limit—is the only one worth playing. Why? Because I never want errors to be OK. No one should be harmed. I’m in this for the liberation of all beings.
Cue the voice from the back of the room. Who are we kidding, it comes from the front of the room. This voice is heard to proclaim: “That’s not realistic!”
So yes, as a pragmatic concern, a lot of times people do want to see tidy little goals attained relentlessly, quarter after quarter. When I find myself in that scenario, I’ll play a new game:
Game #5
Near and far—Set a goal at the theoretical limit, describing the world we should have. Then, set a target that would inch the system’s performance toward the theoretical limit. This target is what we’ll commit to going after right now.
Three benefits of this game:
- First, it creates space for small, incremental achievement. It is the zone of daily kaizen, of ongoing practice, and of continuous improvement.
- Second, it allows for huge, transformational shifts—so long as people demonstrate improvements that move things closer to the goal, and there is a supportive management framework.
- And third, it brings trade-offs into view.
Revealing trade-offs
Suppose we want a particular error rate to be 0%, and it’s at 50% right now. Half of the time, we do a thing and something screws up.
It’s likely that we can find small, reversible changes that bring that error rate down from 50%. Just ask the people who do the work every day; they can tell you what to fix. Listen to them, and do what they suggest. This is where a lot of projects are.
Now suppose that things have gotten better, but are still not good enough. The error rate is now 5%. One out of twenty times, something screws up. We are in the zone where not everything we try will actually prevent errors. Or we might identify changes that would bring the error rate down further, but are not “worth it”, and that is where the discussion happens:
- What is acceptable harm, according to the people who sign the checks? At some point, further improvements will be deemed not “worth it.”
- What amount of pain, damage, or loss is an organization willing to actively produce in its regular workings?
Play this game, set goals at the theoretical limit, and eventually you’ll find out what the organization really tolerates.