Weeknotes 300

This week I did:


I’ve been using an inductive, ‘practice into principles’ approach: taking some of the work I’ve been doing recently and shaping it into standards we can use to evaluate products. It’s interesting how different aspects of standardisation are at different stages of maturity and adoption. For example, password security has good practice guidelines from the NCSC, but assessing the environmental impact of a website has lots of emerging tools and no agreed standard yet. Nothing is ever finished, everything evolves, so if you’re referencing these things you need to be able to evolve with each of them.

When future shock is the norm

This week’s Irregular Ideas newsletter explores what effects virtual reality might have on our relationship with technology and our ability to handle so much change.

Too many domain names?

I listed all the domain names I own. 52. That’s about the normal amount, right? Starting a new idea with a domain name gives the idea branding and identity, and whatever you do with that domain name, whether redirecting it to a Notion workspace or building a website, the previous links always go where you want them to.

Open benches

I listed my first memorial bench on Open Benches. It’s very cool (the project, not the bench).


I updated my projects page to include some of my more recent projects and remove the images. I also started trying to standardise each of the project pages a bit more.

And thought about:


I’ve spent quite a lot of time this week reflecting on how I’m working towards my goals, looking back over the hypothesis mapping I did a while ago, and trying to find criteria for judging whether the things I’m doing are likely to increase the chances of achieving those goals.

I read:

The Evolution of Learning to Edutainment 3.0

This presentation about the future of learning is very cool (except for its focus on pedagogy rather than andragogy).

Who gets to choose?

There is a lot of talk about algorithmic bias. But who gets to decide which moral intuitions, which values, should be embedded in algorithms to overcome that bias? That seems like an impossible question to me. Algorithms can’t be fair because humans don’t know how to be fair. The answer, then, is to build algorithms of different moral persuasions and let them fight it out in constant, real-time battles, just like the human moral landscape.

A reading list about brighter futures

This is a wonderful, inspiring list of stories about the future.