Weeknotes 301

Did this week:

Safe measures

Been thinking about the problem of measurement for improvement in a safe way. It’s easy for measures to feel like criticism, so how can we have measures that are clearly of the system and not of the person and the work?

I also worked on how the four indicators of high-performing organisations from Team Topologies, aimed at achieving a faster flow of change, can be used to measure product development process performance.

Virtual citizen

This week’s irregular ideas newsletter was about becoming a virtual citizen and whether nationality has to be tied to locality, and about what might happen if there was a marketplace for changing nationality.

Bought a new phone

My phone stopped charging, and as I use my phone as a hotspot to connect my laptops to the internet I needed to replace it quickly. The interesting thing for me was putting the response plans that I wrote ages ago into practice. I spent some time doing a risk assessment of my life to think about what things could go wrong and what I could do if they happened. A phone not working is a minor one, but it’s good to know that planning was worth it.

Read:

Progressive Organizational Structures

Organisational structures are fascinating. What is the best way to organise a group of people to work on lots of different things but all towards the same goal? Corporate Rebels have collected ten organisational structures that are rooted in practice. Lots to think about.

10 principles for making collective progress

This review of existing approaches and principles for making progress towards shared visions for social change is fantastic. It’s great to see Collective Impact as the first approach on the list, and that a network approach to systems change is included.

Thought about:

Product’s iron triangle

Project management has the iron triangle of scope, time and budget, so I was wondering what the product version of this might be. Paul Brown suggested the Mobius loop. I need to read more to understand its use, but it seems like exactly what I was imagining the product version would be. Being a loop implies the ‘never finished’ nature of modern products and connects user, outcome, delivery and measurement.

Failing on purpose

There’s a lot of thinking in tech and product about failing fast, learning from failure, etc., but I was wondering why, despite recognising failure as a learning opportunity, we still try so hard to avoid it. Wouldn’t we learn more and better if we welcomed failure, even failed on purpose? If we see a potential failure, shouldn’t we allow it to fail for the learning opportunity it presents?

Understanding scale

I think scale might be one of the hardest things for human brains to grasp: from the smallest to the largest known scales in physics, to the way the scale of networks has such a significant effect on behaviours, and to how difficult it is to know if the understanding you have is accurate.

Measuring product development performance

What do we want to achieve? 

We want our product development process to achieve a fast flow of change, so that we can deliver value to our users effectively. We can define that value as solutions that solve, or contribute towards solving, problems that our users are facing. We can deliver value through the work we make available for our users, whether it’s a large piece of project work or a small piece of content or process change. 

How are we going to achieve it? 

We have to define what work we want to measure. We probably don’t want to measure all the work that we deliver but could start with the new work that goes through our product development process. We can decide to add other types of work later. 

Based on the Team Topologies approach, there are four measures that are indicators of organisational performance in delivering value to users: 

  • Lead time – How long does it take us to go from starting work to making it live? 
  • Deployment frequency – How often does work go live? 
  • Change fail percentage – How much work goes live that doesn’t solve the problem it set out to? 
  • Mean time to resolution – How quickly is work that doesn’t solve the problem fixed? 

How long does it take us to go from starting work to making it live? (Lead time) 

How might we measure it? 

  1. Record the start date. 
  2. Record the live date. 
  3. Count the number of days between them for each project, then divide the total by the number of projects to get an average lead time. 
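The steps above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation; the function name and the example dates are hypothetical, and it assumes each project's start and live dates are recorded as date pairs:

```python
from datetime import date

def average_lead_time(projects):
    """Average number of days from start date to live date across projects."""
    days = [(live - start).days for start, live in projects]
    return sum(days) / len(days)

# Hypothetical example: (start date, live date) per project
projects = [
    (date(2023, 1, 2), date(2023, 1, 16)),   # 14 days
    (date(2023, 2, 1), date(2023, 2, 11)),   # 10 days
]
print(average_lead_time(projects))  # 12.0
```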

What could we do with the measurements? 

  • Decide whether the benchmarked average lead time is where we want it to be. 
  • Decide whether we want to reduce lead time. 

How often does work go live? (Deployment frequency) 

How might we measure it? 

  1. Record the live date of all projects. 
  2. Count the number of days between each consecutive go-live date, then divide the total by the number of gaps to get an average deployment frequency (the average number of days between go-lives). 
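A minimal sketch of that calculation (function name and dates are hypothetical), assuming go-live dates are recorded and the result is expressed as the average number of days between deployments:

```python
from datetime import date

def average_days_between_deployments(live_dates):
    """Average gap in days between consecutive go-live dates."""
    ordered = sorted(live_dates)
    gaps = [(later - earlier).days for earlier, later in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps)

live_dates = [date(2023, 1, 16), date(2023, 2, 11), date(2023, 3, 1)]
print(average_days_between_deployments(live_dates))  # (26 + 18) / 2 = 22.0
```

A lower number here means work is going live more often.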

What could we do with the measurements? 

  • Decide whether the frequency at which we make new projects live is where we want it to be. 
  • Decide whether we want to deploy more frequently. 

How much work goes live but doesn’t solve the problem it set out to? (Change fail percentage) 

How might we measure it? 

  1. Set goals and measurements during the project. 
  2. Regularly measure live products/services against the goals. 
  3. When a product/service is evidenced to not be achieving its goal, request work to fix it.  
  4. Record the date of the request. 
  5. Divide the number of products/services not achieving their goals by the total number live to get a percentage. 
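The percentage in the final step is a simple ratio; a minimal sketch, with a hypothetical function name and made-up counts:

```python
def change_fail_percentage(total_live, not_achieving_goal):
    """Share of live products/services that are not achieving their goals."""
    return 100 * not_achieving_goal / total_live

# Hypothetical example: 3 of 20 live products/services are missing their goals
print(change_fail_percentage(20, 3))  # 15.0
```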

What could we do with the measurements? 

  • Decide whether the percentage of products/services going live and not achieving their goals is where we want to be. 
  • Decide whether we want to reduce our change fail percentage. 

How quickly is work that doesn’t solve the problem fixed? (Mean time to resolution) 

How might we measure it? 

  1. Record the date of the request for work to fix. 
  2. Record the date the new solution goes live. 
  3. Count the number of days between them for each fix, then divide the total by the number of fixes to get the mean time to resolution. 
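This mirrors the lead time calculation, but over fix requests rather than project starts. A minimal sketch (function name and dates are hypothetical), assuming each fix's request and resolution dates are recorded as pairs:

```python
from datetime import date

def mean_time_to_resolution(fixes):
    """Average days from a fix being requested to the new solution going live."""
    days = [(resolved - requested).days for requested, resolved in fixes]
    return sum(days) / len(days)

# Hypothetical example: (request date, resolved date) per fix
fixes = [
    (date(2023, 3, 1), date(2023, 3, 8)),    # 7 days
    (date(2023, 4, 10), date(2023, 4, 13)),  # 3 days
]
print(mean_time_to_resolution(fixes))  # 5.0
```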

What could we do with the measurements? 

  • Decide whether the average number of days we take to solve problems is where we want to be. 
  • Decide whether we want to reduce our mean time to resolution. 

What behaviours can we expect these measures to drive? 

Measuring, and responding to measurements, always drives behaviour change in a system.

If we seek to improve on all of these metrics, we can expect to see work reduce in size and scope to tackle smaller, more specific problems faster.