Scott Sehlhorst wrote a good post entitled Successful Products: Lucky or Intentional? about my comment on Phil Myers' post entitled Chasing Outcomes. Did you get that?
Phil had summed up his Chasing Outcomes post with the following line:
At the end of the day, it's simple. Create a product or service that your buyers want to buy and the rest takes care of itself.
I argued that things don’t take care of themselves and asked for other people’s comments.
Scott took my comment that product success is not easy, analysed Phil's post, and then provided guidance on how to increase the likelihood of product success. He lays out 7 steps that should be followed, ending with this note:
Note: Step 7 should occasionally replace step 6, so that you stay focused on your market, and not just an out-of-date snapshot of what used to be important to your customers.
This is a good list that describes a solid, standard process for developing products. It basically says one should do their homework before implementing solutions to problems, and then iterate on the solution based on feedback from the market.
And while this should reduce the chances of failure, and help develop something that people want, it doesn't mean that there will be product success or that anything will take care of itself.
There are several reasons for this.
First, even if you follow the steps above, none will be done perfectly, and there will be aspects of the market, the problem, your solution etc. that are not fully taken into account or addressed. It may simply be because there are things you cannot know for certain, because your resources or budget don't permit more accurate research, or for other reasons. Regardless, your research and your efforts will not be perfect at each step. Think of these imperfections as "error bars" associated with the decisions made at each step.
And remember, decisions are what we make when we have imperfect information. When we have ALL the information needed, we are no longer making a decision, but a calculation. Now, when do we have ALL the information? Well, almost never, and if we ever do, it's almost certainly after the market or opportunity has passed us by. And keep in mind that the error bars from these decisions can compound over the product development process.
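To make that compounding concrete, here's a minimal sketch. The numbers are purely illustrative assumptions (7 steps, each capturing roughly 90% of market reality); they don't come from Scott's or Phil's posts:

```python
# Purely illustrative: how small per-step "error bars" compound across a
# multi-step product process. The 7 steps and the 90% per-step accuracy
# are assumptions for the sake of the example, not figures from the posts.

per_step_accuracy = 0.90  # assume each step captures ~90% of market reality
steps = 7                 # assume a 7-step process like Scott's list

compounded = per_step_accuracy ** steps
print(f"Compounded fidelity after {steps} steps: {compounded:.0%}")
# Roughly 48% -- even modest per-step imperfection leaves a sizeable delta
# between what you build and what the market actually needs.
```

The exact figures don't matter; the point of this simple model is that small imperfections multiply rather than average out.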
In addition, the marketplace is not static. Customer needs or preferences may change. New competitors may appear. Economic conditions may change. The market may not develop as expected. Thus the solution you provide may no longer address the market need the way you thought. Now step 7 wisely says you should revisit step 1 and step 2.
While this is going to help reduce some of the error in the solution, it will not eliminate it. Thus no matter how hard you try, there will always be a delta between what you offer and what the market needs. That delta could be small or it could be large, but it will always be there.
Remember McPizza? McDonald's can be accused of a lot of things, but not of rushing this product to market. They did a lot of research and test marketing. They had a well-defined target audience. They advertised and promoted heavily. It was hard not to know about their pizza.
Yet it failed miserably. Why? In short, people didn't associate McDonald's with pizza, and they could get better pizza elsewhere. So why didn't any of the market research turn up this insight before McDonald's invested untold millions in pizza ovens and marketing for the nationwide launch?
And this wasn't the only failed product from McDonald's. Remember McRib? Yum!!! And how about the Arch Deluxe? And McDonald's isn't the only company with these kinds of failures.
Here's a great recent article from the LA Times about Procter & Gamble — the company where modern Brand Management was developed. While part of the article refers to a new book by P&G Chairman and CEO A.G. Lafley, there is a great line in the article that is relevant to this topic.
The book’s strongest message comes not from successes such as Olay, Febreze and Swiffer but from failures, such as Fit, a fruit and vegetable sterilizing wash launched in 2000 and sold three years later at a loss of $50 million.
After all the changes, the company still has a success rate of 60% on innovations. But that "is as high as P&G wants to go. Any higher would be playing it too safe," the authors say.
Even at P&G, they view a failure rate of 40% as acceptable. Later in the article:
He also insists on disciplined control of innovation that weeds out failures before they become painfully big. Each innovation team must, from the start, identify the issue that represents the biggest threat to its success and explain how to deal with it.
… There will be plenty of failure to go round — and also, probably, innumerable new versions of Whitestrips.
So, what can be concluded from this? Bringing successful new products to market, even for large, well-established companies, takes a real, disciplined process. Even with a disciplined approach, there is no guarantee of success. In fact, innovation has to be measured and carefully managed. And for those products that make it to market but don't succeed, there needs to be enough discipline in the company to know when to kill them or sell them off, and to focus efforts and resources on more lucrative or potentially rewarding ventures.
Saeed
P.S. There is a good article called “Why Startups Fail” that covers some of the things I’ve talked about here.