Weeknotes 338

This week I did:

Quicker validation

We tried a couple of experiments this week to speed up how we validate that we’re meeting a user need. There are two parts: getting a solution in front of real users quickly (in hours rather than days or weeks), and setting the measures that tell us whether the solution is meeting the core user need. If we get a positive result from the experiment, then we’re in a good position to improve the solution, and if not, then we’ve learned quickly without a lot of effort. Shifting thinking from ‘right first time’ to ‘right over time’ means accepting a lot more uncertainty and imperfection in order to learn more quickly, but in lots of cases it’s definitely worth it.

Will technology create a better future?

The answer is by no means certain. The tech-optimist perspective isn’t rationally defensible, but there is some suggestion that an agency-based perspective could be a good way to go: technology will make things better only if we actively decide to make it do so.

And I read:

Calculated Risk: A Framework for Evaluating Product Development

This article in MIT Sloan Management Review from 2002 talks about the limitations of traditional financial risk management approaches and introduces a framework for considering market risk, technical risk and user risk. I wonder if this is the source of the thinking about the ‘four big risks’ of value, desirability, feasibility and viability. I like tracing ideas back to their source (like how boundary objects are the source idea for user stories); it helps to understand how ideas change over time.

Learning reviews

Apart from doubleloop looking like a very cool business data visualisation tool, this post on conducting learning reviews is well worth a read. In general, a learning review asks, ‘what assumptions did you have eight weeks ago and what do you think about them now?’ If retros look back at how the team worked and what they could improve, and traditional reviews look back at the work completed, this kind of thinking (and tool) reviews assumptions and hypotheses about what drives metrics. Doing all three regularly feels like it might help teams improve on the how, the what and the why of work.

Bringing Scientific Thinking to Life

Improving rational, objective thinking is an essential underpinning for good practice and process, especially for product managers.

And thought about:

What makes products successful

I’ve been thinking about what the biggest factors in the success of a product are. I haven’t found any good research to answer the question yet, but it seems that one of the biggest factors is probably how well a product fits into the market and culture, and one of the smallest factors is the process the product team uses to create and manage the product. Interesting, then, how much product managers talk about tools and techniques and how little they talk about ongoing horizon-scanning, market analysis, and thinking about what is changing in society that might affect their product. Obviously a product team needs the capabilities to act on changes and opportunities, but it has to be able to see and understand them first.

Delivery management and product management

Been thinking a bit more about how delivery and product compare, contrast and work together. They are both about behaviour change: delivery for internal teams and product for external users. If delivery’s overall purpose (for want of a better term) is ‘facilitation’, then product’s is ‘validation’.