Weeknotes 466

I did:

Took the week off but some good stuff still happened:

  • Some work that seemed not to deliver much value was stopped. Yay for considering value and not doing work just because it’s on the plan.
  • Chatted about quarterly planning and what problems it’s really trying to solve (predict delivery, reduce dependencies, drive outcomes, optics of management, etc.).
  • Wrote up some notes on the implications of our shift to a more continuous way of working (because cadence-based ways of working are flow killers).
  • Considered what kind of tracking data we’d need to support a north star metric.

I read:

When Cross-Functional Teams Still Need Too Much Coordination

A cross-functional team might “own the UI for checkout and maybe some business logic. But every meaningful change requires coordinating with five other teams. This happens because team boundaries were drawn around what seemed like logical business domains, but without understanding the actual system dependencies. The boundaries look clean on the org chart, but create coordination hell in practice.”

“Most coordination problems arise when companies attempt to standardize their teams’ work processes, rather than acknowledging that different types of teams require distinct interaction patterns.”

“The solution isn’t better coordination, it’s better boundaries.”

Transformation Happens At The Speed Of Trust

Successful digital transformation needs these eleven types of trust.

Important, then, to question organisational behaviours that signal a lack of trust or erode and undermine trust. For me, heavy management processes do exactly that. They suggest that teams aren’t trusted and that managers haven’t been given the space or skills to help teams build trust with each other. The obvious go-to solution is to implement a control-focused process that demands teams report more. The better approach would be for managers to coach teams in how to build trust within their own team, with other teams, and with stakeholders.

Pulp friction

My personal experience of the NHS is also, as Ralph describes, one of friction, poor coordination and opaque decision-making, all underpinned by an attitude that patients are an inconvenience. It feels to me like the natural result of large organisations that are heavy on process (which enforces learned helplessness) and implicit about culture (in this case that doctors know everything, patients know nothing).

I thought about:

Probability of achieving outcomes

How might we figure out the probability of achieving an outcome?

Most probability analysis seems to rely on having historic data or knowing the range of possible results, which isn’t easy with outcomes. But maybe we could look back at previous work and ask questions like, “Have we been able to change this user group’s behaviour in the past?”, “What type of behaviours have we and haven’t we been able to affect?” and “How much have we been able to change behaviours, at what scale and for how long?” Maybe getting a sense of how successful we’ve been at changing similar behaviours for similar users to a similar degree could help us predict how successful we’ll be next time.

And then looking outside, I think there’s also something about understanding the nature of the behaviours we’re seeking to change that affects how likely we are to change them. Behaviours that are part of a deep-rooted belief will be really hard to change, whereas behaviours that are more mundane should be easier. Considering the systems and environmental context might also be important for understanding how likely users are to change.

So, maybe by understanding users’ propensity for change plus our ability to cause change, we can come up with a probability of achieving an outcome.
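To make that a bit more concrete, here’s a minimal sketch of how those two factors might combine, assuming each can be scored on a 0–1 scale. The scores and the simple multiplication are illustrative assumptions, not a validated model.

```
# Illustrative sketch: combine two subjective 0-1 scores into a rough
# probability of achieving an outcome. The multiplication is an assumption,
# not a validated model.

def outcome_probability(propensity_for_change: float, ability_to_cause_change: float) -> float:
    """Rough probability that an outcome is achieved.

    propensity_for_change: how open users are to changing this behaviour
        (0 = tied to a deep-rooted belief, 1 = mundane habit).
    ability_to_cause_change: how successful we've been at shifting similar
        behaviours for similar users before (0 = never, 1 = always).
    """
    return propensity_for_change * ability_to_cause_change

# Example: a fairly mundane behaviour (0.7) we've previously shifted with
# mixed success (0.5) comes out at roughly a 35% chance.
print(outcome_probability(0.7, 0.5))  # 0.35
```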

Another little experiment I’ve been doing is to start with the assumption that we’ve got a 50% chance of achieving the outcome; every time a challenge gets in the way I take a few percent off, and every time we overcome a challenge I add a few percent back. It keeps a (very subjective) running total of how likely we are to achieve the outcome by considering factors that come up along the way, rather than relying on historic data.
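Something like this, if the “few percent” is assumed to be a fixed 3% step and the estimate is clamped between 0 and 100% (both choices are arbitrary):

```
# Minimal sketch of the running-total experiment: start at 50% and nudge
# the estimate as challenges appear or are overcome. The step size and
# clamping bounds are assumptions.

def update_estimate(estimate: float, challenge_overcome: bool, step: float = 0.03) -> float:
    """Nudge the current probability estimate up or down by a few percent."""
    estimate += step if challenge_overcome else -step
    return min(max(estimate, 0.0), 1.0)  # keep it between 0% and 100%

estimate = 0.5  # assume a 50% chance to start with
for overcame in [False, False, True, False, True, True]:  # events as they happen
    estimate = update_estimate(estimate, overcame)
print(f"current estimate: {estimate:.0%}")  # 50% here: three down, three up
```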

Supply and demand

I got a bit obsessed with supply and demand curves, trying to figure out whether economic models can be used to prioritise work, which is a kind of supply and demand problem. There’s always more demand for work than the team can supply, and that creates a shortage. In a normal competitive market this would cause the price of work to rise, but an internal market with no invisible hand goes straight to a market-failure state that requires ‘regulation’ to control the demand. That regulation is the prioritisation activities organisations do to try to balance supply and demand. Economics is cool.

Right-sizing

I’ve been wondering if it’s possible to apply the approach used for right-sizing a feature to right-sizing a project, albeit loosely. We’d need data about completed projects: what the scope was, how long it took, how many people worked on it, how much it cost, and so on. Then you’d be able to look at a potential new project and say it looks a bit like a project we did three years ago that took us four months, so we think we can complete it in four to six months; but if you want it finished sooner, we could reshape it to look like another project we did a while ago that took one month.
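A very rough sketch of what that comparison might look like, assuming we keep simple records of scope, team size and duration for past projects. The fields, the similarity scoring and the 1.5× upper bound on the range are all placeholders, not a real model.

```
# Hedged sketch of right-sizing a project by analogy with past projects.
# Record fields, similarity scoring and the 1.5x range are assumptions.

from dataclasses import dataclass

@dataclass
class PastProject:
    name: str
    scope_points: int   # rough measure of scope, however the team sizes it
    team_size: int
    months_taken: float

def estimate_duration(new_scope: int, new_team: int, history: list[PastProject]) -> tuple[str, float, float]:
    """Find the most similar past project and return a duration range."""
    closest = min(history, key=lambda p: abs(p.scope_points - new_scope) + abs(p.team_size - new_team))
    return closest.name, closest.months_taken, closest.months_taken * 1.5

history = [
    PastProject("checkout rebuild", scope_points=40, team_size=6, months_taken=4.0),
    PastProject("reporting MVP", scope_points=10, team_size=4, months_taken=1.0),
]
name, low, high = estimate_duration(new_scope=35, new_team=5, history=history)
print(f"Looks a bit like '{name}': expect {low:.0f} to {high:.0f} months")
```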

Downside of long-lived teams

The old way was to treat work as if it were on a conveyor belt. It would come to a team, they’d do their bit, and the work would move on to the next team. We learned that hand-offs between teams cause considerable knowledge loss, which leads to more failures and poorer-quality products.

So the solution was to create long-lived, cross-functional teams that move with the work, maintain knowledge as they go, and create better products. But that created another problem: professional development. Working with one team on one product for a long time means individuals don’t get the repeated practice that improves skills. How can we expect product managers to be good at vision setting if they only do it once a decade? How can we expect product managers to be good problem solvers if they don’t face all kinds of problems in all kinds of situations?

You can’t do OKRs without a strategy

I think OKRs are brilliant. They are so suited to product work, which is all about changing behaviours. But they only work if they are being used to deploy a strategy. If you don’t have a strategy then OKRs aren’t going to work, and that’s why they sometimes end up as a short-term to-do list.