Part of our jobs is making decisions that impact the lives of coworkers, users and customers, not to mention the bottom line. These decisions are made in various states of uncertainty, which is why we seek data. Often, however, we want data even when it will not produce better decisions. We want it for psychological reasons.
Such behavior can cost organizations a lot of money. How often do we desire data simply as a form of hand-holding? How much time and money are spent gathering, processing, packaging, and communicating data that then don’t change or improve the decisions made? And how much delay cost is incurred by waiting for data when, as we’ll see below with the “disjunction effect”, their real purpose is not information?
In most organizations there are strong ingrained biases about different forms of research, however unfounded they may be. Qualitative findings tend to be dismissed out of hand, surveys incredibly overused and trusted, and any results smacking of “quant” credulously treated as sacrosanct. It is almost ritualistic the way orgs guard against challenging the “numbers”, perhaps suspecting the illusion of objectivity they enjoy is too important to let slip.
This ignores what every expert in research methodology knows full well: quantitative data are only ever as good as the qualitative decisions made to produce them, and this is true regardless of the cost. Every quant result is the product of a long chain of choices, from how variables were selected, to how they were operationally defined, to the research protocol and/or experimental design, to how data were collected, cleaned, and sampled, not to mention the choice of analysis used.
There is an odd irony here: while executives run around shouting that everyone should “learn to code”, apparently no one should be learning how to actually interpret data! Such research illiteracy only further perpetuates waste. After all, if a certain proportion of data are waste, then so is the research that produced them. And that is before counting the added cost of the suboptimal decisions that result.
As many researchers in the corporate world know full well, many if not most research requests can be described as a transactional game, as defined by Berne in his classic, Games People Play. This means there is a con and gimmick (which together form the bait), an ulterior motive (the real reason for the interaction), and a cross-up (when the truth comes out). The setup here (the bait) is the invitation to use research expertise supposedly to resolve an open question.
The ulterior motive is typically to procure marketing for a decision that has in fact already been made while simultaneously putting on a show that executives around here are “data-driven”. The cross-up is when the stakeholder pivots to attacking the researcher for failing to proffer the results being shopped for. Here the stakeholder blame-shifts, denigrating the researcher for not recognizing the game that everyone is supposedly knowingly playing.
Learning to sniff out this trap and turn down the game up front is therefore a good way to reduce waste. We can frame this in terms of Savage’s Sure Thing Principle, the violation of which is called the “disjunction effect”. The Sure Thing Principle states that if we would choose A whether X turns out to be true or false, then it is irrational to wait to learn X before choosing A. The disjunction effect is when we nonetheless feel we must know the value of X before proceeding.
Here, if Decision A is made whether Information X is found or isn’t (~X), then X is irrelevant to the decision-making process and spending money on it is waste. There will be no small amount of such waste in orgs with HiPPO (Highest-Paid Person’s Opinion) prioritization, where the “research” done is so often theater.
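The logic above can be sketched as a simple decision-relevance check: research on X is only worth buying if at least two possible findings would lead to different actions. This is an illustrative sketch only; the decision rule and the possible findings below are hypothetical, not drawn from any real study.

```python
def information_is_decision_relevant(decide, possible_findings):
    """Return True only if the decision differs across possible findings.

    decide: a function mapping a possible research finding to the action
            we would take if that finding came back.
    possible_findings: the findings the research could plausibly return.
    """
    actions = {decide(finding) for finding in possible_findings}
    return len(actions) > 1

# Hypothetical example: a launch decision that ignores the survey result.
decide = lambda survey_result: "launch"  # same action no matter what comes back
print(information_is_decision_relevant(decide, ["positive", "negative"]))  # False
```

When this check returns False, the Sure Thing Principle says the study is pure waste: the decision is already a sure thing, and any research commissioned anyway is theater.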
For your own toolkit, here are some common “tells” to look for. These tells (or “smells”) will help signal you’re engaged in research theater and it might be time to push back.
- The perceived usefulness of data seems based on its alignment with a particular view, or the message seems to be that the research should continue until the results improve.
- A result with less methodological validity is favored over your own, or your results are countered with others and no one can or will explain how they were generated.
- Legit qualitative research results are discounted for not having a “representative sample size”.
- People object to your findings while simultaneously demonstrating their own research and statistical illiteracy. (This is common when it comes to notions of statistical “significance”, a concept there are many false beliefs about.)
- People tout an interesting single result that contradicts all research done to date. (There are good odds the single result is not replicable; this is known as “Twyman’s Law”.)
- The stakeholder wants you to just ask people (whether users or “super users”) what to build and call it “research”, or wants you to ask people to predict their own future behavior and then base decisions on it.
- Alternative analyses and interpretations of the same data are discounted without good reason, glaring confounds are ignored, or relevant variables are excluded when including them alters the results.
- A particular result is held up and treated as immune to critical thinking and logic.
Remember, you have a right to vet your stakeholders. Interview prospective clients, and if the requested research does not look like it will add value, if the results will not actually be used to alter decisions, then why do it? You would only be setting yourself up to be scapegoated. Avoid research theater.
Let’s close with a tweet from Erika Hall, who inspired this post.
Until next time.
If you’re interested in coaching, contact me.
You can also visit my website for more information.