“Spidey-sense” instincts are good, hard data is way better
For a sport known for its statistics, Major League Baseball has always been a game managed by instinct. Managers sized up the situation, discussed strategy and made calls on batting order, pitching changes, fielding positions and more. That was the norm until Billy Beane became general manager of the Oakland Athletics in the late 1990s and changed the fundamental assumptions of how teams should be managed. The story is described in detail in the book Moneyball.
In short, Beane started using hard metrics in virtually all aspects of managing the team, from scouting for and recruiting undervalued players to deciding on what plays to run for any given situation of the game.
The result: a great winning record with one of the lowest payrolls in baseball. In 2002, for example, the Athletics tied the mighty Yankees for the most wins in the American League, with 103 wins apiece. The high-flying Yankees had a total payroll of $126M that year; the Athletics' payroll was only $41M. Today many teams have picked up on Beane's success, and his techniques are spreading from baseball to other professional sports.
Software product management today is very much like baseball was before Billy Beane. Even though we have the ability to be metrics driven, we rarely collect, assimilate and mine hard analytical data in making product direction and functionality decisions. Some product managers do run web surveys and the like to collect data, but rarely have I seen a closed-loop process that uses metrics to drive significant aspects of requirements gathering and feature prioritization. [NOTE: That last link is a shameless plug for you to read an article I wrote on the subject — it’s not boring, honest!]
Granted, it’s not always easy to get statistically valid sets of this data. In a startup or small company with few customers, the sample size may be too small to be meaningful, but that shouldn’t stop us from trying where possible. In larger companies, with larger customer bases and more resources, using analytical data to support and drive product decisions should be the norm. Sadly, that’s not the case.
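To make the "statistically valid" point concrete: for a simple yes/no survey question, the standard sample-size formula for estimating a proportion puts a floor on how many responses you need before the percentages mean much. A minimal sketch (the function name and defaults are mine, not from this post):

```python
import math

def min_sample_size(margin_of_error=0.05, z=1.96, proportion=0.5):
    """Minimum responses needed so a survey proportion is accurate to
    within margin_of_error at the confidence level implied by z
    (z=1.96 is ~95% confidence). proportion=0.5 is the worst case,
    i.e. maximum variance, so it gives a conservative floor."""
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

print(min_sample_size())       # ±5% at 95% confidence -> 385 responses
print(min_sample_size(0.10))   # relaxing to ±10% -> 97 responses
```

The takeaway matches the point above: a startup with 20 customers can't get a ±5% read from a survey, but a larger company easily can, and a looser margin of error still beats pure gut feel.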
If Product Management is truly as strategic and important as we claim it is, then why do we practice it in ways that make it look ad hoc and haphazard? I can’t tell you how many times I’ve seen major product decisions made based on anecdotal evidence and personal opinions. Statements like “This feels like the right direction to go” are spoken far too often in strategic planning meetings.
And let’s be very clear: there are hard costs to these decisions in terms of engineering investment, marketing campaigns, sales efforts, support calls etc. That doesn’t even include lost, or at best delayed, revenue from incorrect prioritization of functionality.
Why is it that we hold groups such as engineering, marketing and sales to a higher standard than we hold ourselves? If development teams planned their schedules on instinct, marketing teams managed their programs the same way, or sales teams projected their targets on gut feel, we’d replace the leaders of those teams instantly.
But when it comes to product decisions, many are made without any significant effort to collect and analyze hard data. Many excuses are given for this lack of analytical rigor, but the fact that others aren’t explicitly asking you for hard data is no excuse not to gather it. Set up a process to get the hard data needed to support or justify decisions. Not only will others be pleasantly surprised, they’ll also have a hard time refuting your recommendations.
Saeed
Part 2 – Be technical without becoming a technologist
Part 4 – The 4 Cs of Leadership