This article continues the discussion from Product Anonymous back in June. Full credit goes to the team and the attendees for providing key steps, insight and critical analysis.
In the last post we identified more alternatives that might address our key issue. Step #3 in the process is to evaluate those alternatives.
You’ve got your problem identified, and you have alternatives A through Z. How are you going to evaluate them? You probably have a gut feel already, but how can you do this more rigorously?
“The great thing about fact-based decisions is that they overrule the hierarchy.”
Jeff Bezos
Get them on the table
List all the alternatives together – preferably on one page. People find it easier to compare options when they can scan everything at a glance. With more than one page, you risk people forgetting details as they flip back and forth.
Clarify each alternative so that everyone is sure they are talking about the same thing. You might describe each one verbally and highlight the differences, or document each alternative with a detailed description. Either way, it is better to find any misunderstandings early.
Decisions are limited by assumptions. Without proper attention, humans tend to make poor assumptions – if we even realise we are making them at all. Call out the assumptions to the team or in the document. Test them, gather reactions and feedback, and clarify where necessary. Perhaps you’ll discover an assumption was wrong. Again, it is better to find any misunderstandings early.
Finding a valid way to compare alternatives can be hard
Comparisons can be complicated, as you are rarely comparing apples with apples. There are a few different ways to compare alternatives, and choosing among them can be as complicated as the original list of alternatives. But if you need a decision, you first need a valid way of comparing your options.
Note that evaluating alternatives is another opportunity to engage with your stakeholders. This could be formal meetings, or just a coffee with some of the stakeholders to gain their insight. It has the potential to open up new ways of evaluating, build a better decision, create support, and unearth hidden ‘gotchas’ and opposition.
Cut back the options
More options mean more attention, more short-term memory usage and more multitasking between different evaluation methods. Attention and willpower are both exhaustible resources, and too many options can be quite draining – possibly leading to analysis paralysis.
For example, what does the product strategy, corporate strategy or vision say about your alternatives? If some alternatives don’t align with the overall goal, perhaps you should rule them out of the current list. If those alternatives are compelling but conflict with the strategy, you may have a lot of work ahead to pivot and change the product or corporate strategy. That might be perfectly valid, but it means even more work.
Make a first pass and cut down the number of options to something manageable – perhaps three to five of the best options. Make sure the status quo is one of the options. Identify the options that are being discarded, and be clear why they are being excluded.
Optimise for one thing
Ideally, work out the one thing you are trying to optimise (people, costs, adoption, etc.) and stick with it. If you try to combine different variables and comparison methods you risk sliding into analysis paralysis: when do you optimise for X, and when for Y? Sort and list your remaining alternatives using only your chosen optimisation, as in the sketch below.
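As a minimal sketch of what that looks like in practice – the option names and ‘adoption’ figures below are entirely hypothetical – picking a single metric and sorting on it can be this simple:

```python
# Hypothetical alternatives scored on one chosen metric: projected adoption.
alternatives = {
    "Option A": 1200,   # projected monthly active users (made-up figures)
    "Option B": 950,
    "Status quo": 400,
}

# Rank by the single metric we chose to optimise - nothing else.
ranked = sorted(alternatives.items(), key=lambda kv: kv[1], reverse=True)
for name, adoption in ranked:
    print(f"{name}: {adoption}")
```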
Dan Ariely and relativity
Dan Ariely has written a lot about behavioural economics, especially how people don’t know what they want unless they see it in context. Watch his TED talk to see how adding a third, decoy option can make one option more compelling than another. In his example it is hard to compare a holiday in Paris against a holiday in Rome. But add a third option that is clearly worse than the Rome holiday (Rome without coffee), and the Rome option suddenly becomes more appealing than Paris.
The way out of this irrational behaviour is to take a more scientific approach, and try to be more objective.
Quantitative versus Qualitative?
When evaluating the alternatives, some people will want a quantitative evaluation: numerical, data-driven analysis. This can range from counting events to calculating Return On Investment (ROI), applying weighted averages, estimating opportunity cost, or even conjoint analysis (my favourite).
The reasoning is that numbers don’t lie, so an objective decision can be made. Unfortunately, quantitative data usually has a human element and can mislead, intentionally or not. As the saying goes: ‘lies, damn lies and statistics’. The reality is you can still choose the statistic or assumption that supports your cause. And if someone doesn’t want to believe your numbers, they can always dispute whether the data source is relevant or whether the weighting is correct.
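To make the weighted-average idea concrete, here is a minimal scoring sketch. The criteria, weights and scores are all assumptions invented for illustration – and notice that whoever picks the weights is already shaping the answer:

```python
# Illustrative weighted-average scoring. Weights and scores are assumptions.
weights = {"cost": 0.4, "adoption": 0.4, "risk": 0.2}   # must sum to 1.0

# Hypothetical scores per alternative on a 1-10 scale (higher is better).
scores = {
    "Option A":   {"cost": 6, "adoption": 9, "risk": 5},
    "Option B":   {"cost": 8, "adoption": 5, "risk": 7},
    "Status quo": {"cost": 9, "adoption": 2, "risk": 9},
}

def weighted_score(option):
    # Multiply each criterion score by its weight and sum.
    return sum(weights[c] * scores[option][c] for c in weights)

for option in scores:
    print(f"{option}: {weighted_score(option):.1f}")
```

Shift a couple of weights and the ranking can change – which is exactly why the sensitivity analysis described further down is worth doing.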
Quantitative data can demonstrate correlation, but it can rarely prove causation – and even qualitative data can only suggest causation. My favourite example is the tongue-in-cheek theory that the decline in pirate numbers is causing global warming.
While quantitative data can be compelling, it won’t be the whole answer.
The decision may require a more qualitative evaluation: what is the impact on people, is it important to the company, what is the customer feedback? My personal favourite is the ‘jobs to be done’ analysis: what was the customer hiring the product to do?
Qualitative data will always be disputable because it isn’t ‘hard data’, but it is often necessary when sample sizes are small or when people are directly involved.
The plural of anecdotes is not data
Frank Kotsonis / Roger Brinner.
The plural of anecdotes is data.
Raymond Wolfinger
Apparently both are quotes (and self-referential proof that quotes are also not data).
With only a handful of anecdotes, the sample is too small to be statistically significant. But a sample size that is too small doesn’t mean the anecdotes should be ignored entirely. Given a large enough set of inputs, anecdotes start to become data in which overall patterns can be identified.
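As a rough, hypothetical illustration of why sample size matters, here is a normal-approximation confidence interval for a proportion (the approximation is itself crude at tiny sample sizes, which only reinforces the point):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# 4 of 5 anecdotes mention the same problem: the interval is enormous...
print(proportion_ci(4, 5))      # roughly (0.45, 1.0)

# ...but 400 of 500 survey responses pin it down much more tightly.
print(proportion_ci(400, 500))  # roughly (0.76, 0.84)
```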
Sensitivity analysis
You can perform additional sensitivity analysis on your alternatives to assess whether your comparison is solid. One example: adjust the weights in your weighted averages by +/- 10% and check whether the outcome changes (sketched below). If the winner flips, your answer is too dependent on the fine details of the weighting and the input data, and you don’t have a stable, reliable result.
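A minimal sketch of that +/- 10% check, reusing the hypothetical weights and scores from the earlier scoring example – all values are illustrative:

```python
weights = {"cost": 0.4, "adoption": 0.4, "risk": 0.2}
scores = {
    "Option A": {"cost": 6, "adoption": 9, "risk": 5},
    "Option B": {"cost": 8, "adoption": 5, "risk": 7},
}

def winner(w):
    total = sum(w.values())   # renormalise so perturbed weights still sum to 1
    return max(scores, key=lambda o: sum(w[c] / total * scores[o][c] for c in w))

baseline = winner(weights)
stable = True
for criterion in weights:
    for factor in (0.9, 1.1):   # perturb each weight by +/- 10%
        perturbed = {**weights, criterion: weights[criterion] * factor}
        if winner(perturbed) != baseline:
            stable = False
            print(f"Winner changes when {criterion} is scaled by {factor}")

print(f"Baseline winner: {baseline}; stable under +/- 10%: {stable}")
```

If any perturbation changes the winner, the ranking is fragile and the weighting deserves another look before you rely on it.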
Compare
Use a combination of techniques: narrow the list, perform some quantitative analysis, and do some qualitative research. The list should now be manageable for the next stage – decision time.
At the end of this evaluation you may even have a document. For a large enough decision, team or project, this becomes something of a strategy document. Circulating it as a draft or straw man helps people get on the same page and build consensus, and gives them a chance to comment further if they’d like.
Now… a final question. Is that gut-feel answer suddenly the best alternative? If so, question whether you are evaluating objectively – you may be suffering from confirmation bias, where people favour information that confirms their existing beliefs.
Hopefully you now have options A through Z cut back to a manageable shortlist of fewer than five. The next step is deciding between them.
Have you got any suggestions on ways to evaluate alternatives? Please feel free to comment below to add to the discussion.
Read part 4 on identifying alternatives or go forward to part 6 on making your decision.
Steve is a Product Development Manager at Telstra Wholesale. The views expressed in this post are his only and do not necessarily reflect the views of Telstra.