It’s important to go back and read the old stuff sometimes. When ideas get popularized, often parts get left behind that are just as interesting as the parts that catch on. Today we’ll take a look at Sim Sitkin’s “Learning through failure”, from Organizational Learning (1996), which introduced the concept of “intelligent failure”.
Though talk of “failing fast” is now commonplace, much of the nuance of Sitkin’s original argument is missing from today’s discussions. His “strategy of small losses” is less about treating failure as feedback when it happens and more about deliberately generating failure as a proactive learning mechanism. Key here is the fact that a lack of sufficient failure (that is, a steady stream of unequivocally positive feedback) tends to produce suboptimal results.
Sitkin’s argument is twofold:
1) Failure has benefits that need to be leveraged; and conversely,
2) Success has liabilities that need to be managed.
To focus on rewarding success is to prioritize reliability over adaptability and short-term wins over long-term resilience. Success can foster efficiency in the short term—given environmental conditions don’t change. It can also, however, stifle innovation, strengthen the status quo, make managers overconfident, and create a maladaptive homogeneity of personnel, process, information, and choices in response to emerging problems.
To circumvent this, teams should experiment with methods, ideas, and approaches. Inherent to this process is, of course, failure. Only astrologers never fail. If that failure isn’t rewarded, however, then you will not learn from it, because you are incentivizing people to hide it. I see this with executives who want to know which work is “green”, “yellow”, and “red”, never questioning why every indicator is green…until it isn’t.
This is “success theater”. When it’s rewarded, you punish continuous improvement and prop up a fragile system centered on what has traditionally worked. As Sitkin stresses, failing—and the experience of failing—creates a felt need for corrective action, fueling a drive to search, innovate, and take risks that wouldn’t otherwise be felt and likely cannot otherwise be replicated.
Traditional factory-based thinking makes this worse. Here, you’re not looking to drive variability out of a process to maximize “efficiency”. In fact, it’s quite the opposite, as Reinertsen makes clear in The Principles of Product Development Flow. The innovator seeks to add variability to the system and increase the range of outcomes. If you reduce variability, then you strip innovation from the system.
Premature success limits knowledge. If you succeed before accumulating a set of intelligent failures, then you are deprived of the insights those failures would have provided. You will take your success and move forward without that additional information, and perhaps with fewer paths forward. Obviously, however, not all failures are created equal. So, what makes some “intelligent”? For Sitkin, intelligent failures (1) result from thoughtfully planned actions, (2) have uncertain outcomes, (3) are modest in scale, (4) are executed and responded to with alacrity, and (5) occur in domains familiar enough to permit effective learning.
To position themselves to benefit from intelligent failures, organizations must do four things:
1. Increase focus on process. Focus on the process of generating diverse and informative outcomes, not on whether a team “succeeds or fails”. Smaller-scale actions allow more teams to experiment independently, more quickly generating a distribution of outcomes that contains a sufficient amount of failure.
Goals need to be well balanced. Unchallenging goals produce distributions of small, predictable successes, which are not informative. With modestly challenging goals, more information will be gained by purposively pursuing intelligent failures, a practice Sitkin calls the “strategy of small losses”.
Action and learning should be somewhat decoupled. By speeding up action and feedback while slowing down plan revisions, sample sizes are increased, which builds in a safeguard against making adjustments based on unreliable observations; the sketch after this list illustrates the effect. (This point is perhaps somewhat at odds with Scrum practice.)
2. Legitimize intelligent failure. It must be monetarily incentivized. If people cannot point to clear evidence of the positive effect of intelligent failure on career mobility and rewards, then no one is going to take claims that the org “values risk taking” seriously—nor should they, because it doesn’t. Organizations cannot expect to foster innovation via intelligent failures if the individuals providing them must pay a price for doing so.
Rushing to judge failures should be resisted; after all, what looks like a failure today may be recognized as a critical contribution tomorrow. Publicly recognizing individuals who intelligently fail and urging successful executives to share their own stories of intelligent failure will show commitment to strategic failure, risk taking, constructive experimentation, and innovation. Such public recognition also legitimizes failure in orgs where the very concept might feel foreign.
3. Change the culture. Employee training should include material on risk taking and the importance and value of failure and surprise. If an organization is serious about innovation, then intelligent failure must be viewed as a strategic asset. This requires the corporate culture to shift its thinking on failure in ways that might seem ironic.
For instance, teams might actually be penalized for not failing enough. Teams not producing a large enough “scrap pile” of intelligent failures may not be sufficiently taking risks, dealing with failure, and learning from it. They are likely not experimenting and learning their way forward, but rather building and claiming success regardless of real outcomes. Such teams are likely risk averse, stagnant, and failing to continuously improve. Calling such a team “high performing” may be rewarding people for playing it safe and sticking with what’s already known to work. In some contexts, an absence of failure should signal the need to remove risk-averse routines.
4. Emphasize failure management systems rather than individual outcomes. Strategic failure must be implemented at the organizational level. Individuals on their own will not generate a sufficiently large and varied range of failures to produce optimal organizational learning. Left to themselves, they will tend toward safe successes or predictable failures, neither of which is very informative. It might even be necessary to purposively expose employees to small doses of failure and then reward them for handling those doses well. This inoculates employees against their hesitance to take risks and better enables them to handle and learn from their failures.
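To make the sample-size argument under item 1 concrete, here is a minimal Python sketch. It is my illustration, not Sitkin’s: the 0.6 true success rate, the 0.5 abandonment threshold, and the batch sizes are all hypothetical numbers. A team that revises its plan after every small batch of noisy observations will frequently abandon a genuinely good approach; batching more trials before revising guards against that.

```python
import random

# Hypothetical setup (not from Sitkin): a team tries an approach whose
# true success rate is 0.6 and abandons it whenever an observed batch
# of trials dips below 50% success. Small batches let noise alone
# trigger that revision rule.
random.seed(42)

TRUE_RATE = 0.6   # the approach is genuinely better than a coin flip
THRESHOLD = 0.5   # revision rule: abandon if the observed rate falls below this
RUNS = 10_000     # simulated batches per batch size

def false_abandon_rate(batch_size: int) -> float:
    """Fraction of batches whose noisy observed success rate would
    (wrongly) trigger abandoning a genuinely good approach."""
    wrong = 0
    for _ in range(RUNS):
        successes = sum(random.random() < TRUE_RATE for _ in range(batch_size))
        if successes / batch_size < THRESHOLD:
            wrong += 1
    return wrong / RUNS

for n in (3, 10, 30):
    print(f"revise every {n:2d} trials -> wrongly abandon ~{false_abandon_rate(n):.0%} of the time")
```

With these numbers, revising after every 3 trials abandons the better approach roughly a third of the time; waiting for 30 trials cuts that to around a tenth. Fast action plus slow revision is simply a way of buying a larger sample before letting the data change your mind.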
In sum, what matters is the process, not specific outcomes in isolation. Intelligent failure, stated another way, suggests that being right the first time is a risky strategy. The number of successful innovations can only be increased by purposefully increasing both the number and diversity of failures in the overall outcome distribution. There is an analogy in the world of peer-reviewed publication. If a paper’s research idea is interesting and its methodology is sound, then the result of the experiment should be irrelevant—it’s equally informative either way. And yet only papers that “find something” tend to get published, turning the literature itself into an incomplete dataset.
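The “not informative” and “equally informative either way” claims have a tidy information-theoretic reading. This is my gloss, not Sitkin’s: the expected information an experiment yields is the Shannon entropy of its outcome, which peaks when success and failure are equally likely and vanishes when the result is a foregone conclusion.

```python
import math

def info_bits(p_success: float) -> float:
    """Shannon entropy of a binary outcome: the expected information
    (in bits) an experiment yields before you know its result."""
    if p_success in (0.0, 1.0):
        return 0.0  # a guaranteed result teaches you nothing
    q = 1.0 - p_success
    return -(p_success * math.log2(p_success) + q * math.log2(q))

for p in (0.99, 0.9, 0.7, 0.5):
    print(f"P(success) = {p:.2f} -> {info_bits(p):.2f} bits per experiment")
# P(success) = 0.99 -> 0.08 bits per experiment
# P(success) = 0.50 -> 1.00 bits per experiment
```

Unchallenging goals live on the first line: success is nearly certain, so each result carries almost no information. A sound experiment with a genuinely uncertain outcome sits near the last line, and it delivers that full bit whether it succeeds or fails, which is exactly why the unpublished null results matter.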
Now, does this mean you should really try to fail? The answer, unfortunately, is yes and no. I encourage you to be more “designerly” and less “factory-esque”, but the onus of leveraging strategic failure does not fall on you. It falls on executives. No amount of writing about best practices circumvents the law that what is incentivized is policy. If executives tout the importance of risk taking while rewarding what they consider success and punishing failure—even if only by not rewarding it—then they incentivize the opposite of what they verbally claim.
Until next time.