Weeknotes 455
I did:
Quiet week, with lots of people out for the Easter holidays and me only in for three days, so I decided to focus mostly on my efforts to help product managers identify new opportunities, including:
- Joined up some different threads between product-specific opportunity exploration that sits within a team’s ability to prioritise, and the kind of opportunities that require leadership prioritisation and commitment before being assigned to a team.
- Created a copilot chatbot to help product managers assess opportunities.
- Started writing a presentation for our product community of practice, which I think will be about the intersection of ‘PMs assessing opportunities’ and ‘using AI in product practice’, with the chatbot as the live example.
- Talked about testing approaches and how we start with best practices that the team can revise to fit their needs. There’s lots more to do around how teams should interact when they both need to test their work, governance, increasing people’s knowledge of testing, etc.
- Picked up some slow-moving work, which made me think about what mechanisms drive our focus and how we lean into our ability to switch between things.
- Spoke to one of our UX designers and information architecture specialists. It was a high-energy chat and I learned lots.
- Chatted to a product manager colleague about what making our products AI-first might look like in the near future.
The numbers
Time spent in meetings: 360 minutes.
Number of tasks completed: 22.
(Three-day week)
Experiments with short prompts
I blogged about my little experiment with short prompts. The idea is that rather than having to copy and paste an entire prompt into a chatbot, you can tell it to go to a webpage that has the full prompt, and it follows those instructions. It should provide a better user experience for finding out more about a topic after a workshop, or for using prompt libraries. The next experiment is to make the prompt on the web page more specific and see if that gets better results (thanks to Adam Gillett for his advice).
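To make that concrete, a short prompt might look something like this (the URL and wording are made up for illustration, not the actual prompt from the post):

```
Go to https://example.com/prompts/opportunity-assessment and follow the
instructions on that page, using our conversation so far as context.
```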
I read:
The Cybernetic Teammate
This paper examines how AI changes collaboration in teams. It shows that AI significantly enhances performance, to the point where individuals with AI matched the performance of teams without AI, demonstrating that AI can effectively replicate certain benefits of human collaboration.
The tool or teammate question is an interesting one. What does it mean to treat AI as a teammate? Maybe the AI holds most of the power and does most of the work (because it has all the information and capability) and humans are its assistants, helping to keep it on track. Maybe it’s like pair programming, where human and AI are pretty equal. Maybe it’s more like a digital assistant/infinite intern, where the AI is subordinate to the human. Maybe it’s more tool-like and less teammate-like, where AI is used to perform a task but gets no recognition. It seems likely we will need to “rethink the very structure of collaborative work” to figure out how humans and AI work together, and how humans work with other humans when they no longer have specialist knowledge or contextual decision-making authority.
A2A
A little while ago I wrote about the need for product teams to think about building their products so agents can use them. The A2A protocol looks like a good step towards that, as it allows agents to interact with each other. Along with Anthropic’s Model Context Protocol, which allows agents to interact with data systems, I can start to see a route for product teams to figure out how to make their products agent-ready.
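To make that a little more concrete, here’s a rough Python sketch of what discovering another agent under A2A might look like. The URL is made up and the field names reflect my reading of the spec, so treat it as illustrative rather than definitive.

```python
# Illustrative only: fetch a hypothetical product's A2A Agent Card (a JSON
# description of what the agent offers) and list its advertised skills.
import requests

BASE_URL = "https://product.example.com"  # made-up product endpoint

# A2A agents describe themselves in an Agent Card served at a well-known path.
card = requests.get(f"{BASE_URL}/.well-known/agent.json", timeout=10).json()

print(card.get("name"), "-", card.get("description"))
for skill in card.get("skills", []):
    print("can do:", skill.get("name"))
```

The interesting product question is less about the plumbing and more about what skills a product would choose to advertise to other agents.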
Duolingo
This post poses an interesting question: are engagement and pedagogy so opposed that you have to choose one over the other, or is there a middle ground for a product that engages users and helps them learn?
Four week sprints
Kelly Lee wrote about an experiment her team did running four-week sprints rather than two-week ones. She shares lots of interesting insights into the things the team needed to do to make four-week cycles work.
I used to believe in the benefits of fixed timeboxes, but over the years I’ve seen more and more situations where they’re more of a hindrance than a help. Fixing the release cycle and then adjusting the work to match seems backwards to me. It seems arbitrary, and it doesn’t match the messiness of cross-functional teams’ work. Some work takes a short amount of time, some work takes longer. Ship when done.
I thought:
Agile leadership
If we accept that agile is the steering wheel, not the accelerator, then the first place in an organisation to try out an agile transformation is with leaders, not with teams.
So the experiment for leaders, to help them figure out if they are ready for an agile transformation, is to make decisions on the day they are asked. If they think they don’t have enough information to make the decision, they still have to make it, but they iterate on the information people bring them until they always get what they need to make decisions quickly. If they learn they made the wrong decision, they make a different one (that’s the steering). If they can’t do this, they know other parts of the organisation aren’t going to become more agile either.
Good process, bad process
I’ve been thinking about where process is good and where it’s bad. When I say “process”, I mean a sequential set of actions designed to achieve a set goal. The scientific method is a process, and so is the double diamond. Some processes might be better at dealing with uncertainty than others (OODA, for example), but all processes have three things in common: multiple actions, performed sequentially, to achieve a goal.
With that definition in mind, process might not be helpful where the actions can’t be predicted ahead of time or the goal is unknown, basically in situations of uncertainty. In chaotic circumstances, the best approach is to take a step, look around, and choose the next step based on what you see.