What technology does to expertise

My car broke down. Just a flat battery needing a jump start. The mobile mechanic connected a device to the battery and an app on his phone showed that the battery needed to be replaced. A few years ago a mechanic might have used a multimeter to measure the charge and used their expertise to interpret the results. Today, the expertise for understanding the battery sits with the device and its developers. The mechanic no longer needs that expertise.

Technology shifts expertise.

Carl Frey and Michael Osborne, from the University of Oxford, study the future of work and machine learning. They suggested that jobs involving routine work are likely to be substituted by automation technologies. This is technology shifting the expertise that used to be required to do a job away from those who do the work. Fixing car batteries is just one of those jobs.

The shift occurs within organisations as well as across entire economies. Communication used to take large typing pools writing the same memo on lots of pieces of paper to send across the company. Today, it takes a few highly skilled people to maintain the systems that allow anyone to type an email and communicate with millions of people. This expertise shift shows as technology increasingly replaces medium skilled work, and people are pushed towards either low skilled work or highly skilled work.

These two extremes are what Goos and Manning, a research assistant and professor at the London School of Economics, called ‘lousy jobs’ and ‘lovely jobs’. ‘Lovely jobs’ are those that require creative thinking and the ability to confront novel situations successfully. Automation will complement those lovely jobs in “performing non-routine problem solving and complex communications tasks” but is unlikely to replace them. ‘Lousy jobs’ are those that aren’t worth automating because they can be done cheaply by low skilled workers. A machine can diagnose a battery, and maybe one day drive the van, but opening the bonnet and attaching the device might need a human in the loop for a while yet.

But, as technology progresses and becomes more advanced, the expertise shift increases. The higher skilled workers become fewer as the work becomes more specialised. The lower skilled workers become more numerous as the routine work that isn’t easy to automate is all that is left. The result is a power law distribution of wages and worker numbers: a small number of highly specialised people command ever higher wages, while a proportionally larger number of low skilled workers earn far less.

This is the future of work. No middle ground. Increasingly fewer highly skilled specialists doing creative work, and more and more lower skilled workers doing what isn’t worth automating.

Will technology create a better future?

“I really do believe when ingenuity gets involved, when invention gets involved, when people get determined and when passion comes out, when they make strong goals — you can invent your way out of any box. That’s what we humans need to do right now. I believe we’re going to do it. I’m sure we’re going to do it,” said Jeff Bezos, then CEO of Amazon. He was being interviewed by journalist Brad Stone about climate change and whether technology might solve the problem.

Jeff is a techno-optimist. He believes that technology plays a key role in ensuring that the good prevails over the bad, that technology can solve even wicked problems like climate change, and that the future will be better because of technology. He’s not alone. Lots of people believe that technology will provide humanity with a better future, even if they don’t call themselves techno-optimists.

Others aren’t so sure. Much of the academic debate about the impacts of technology on society is more pessimistic. It highlights the ethical harms and unanticipated effects of technology on the environment, social norms and personal wellbeing.

But the techno-optimists aren’t dissuaded. They look back at the history of technological development and the benefits it has brought the modern world, things like electricity, vaccinations, the Internet, and assume that the future will be like the past. But that’s all it is: an assumption, a guess, a prediction about an unknown future. Techno-optimists like Jeff aren’t basing their opinion about the future on evidence.

That’s an important point. Obvious, but important. None of us know the future. None of us actually know for certain that technology can or will make things better or worse.

So, how can we think about the impact of technology in the future?

Techno-optimism isn’t really a single view that just believes any and all technology will make things better. In fact, there are lots of dimensions to consider. Are we talking about technology making things better soon or far in the future? Is it about low probability or highly likely effects? Do we think of technology as simply the instrument that helps do some task, or as the institutions and cultural processes that surround it?

One interesting dimension is the personal/impersonal effects of technology. The personal kind is where technology makes life better for the individual, and the impersonal kind is where technological progress makes things generally better for the whole of humanity.

The philosopher Lisa Bortolotti developed an agency-based theory of optimism which applies to technology at the personal and impersonal level. Bortolotti says that optimism is not simply cultivating positive beliefs about yourself and your goals and expecting your beliefs to lead to better outcomes. Believing that everything will be okay and that you can sit back and enjoy the ride doesn’t work. You have to do something. Belief and action are both necessary. Having positive agency-related beliefs helps with action by motivating you to make realistic plans to achieve good outcomes. We are agents. We can take an active role in producing a specified effect.

So, when we talk about the impact of technology in the future, whether we’re talking about how we each use technology to make our own lives better, or how our society decides about the direction of technology, we can choose to make the future better. We can do things that make the future better.

Technology can create a better future if we choose to make better use of technology. Techno-optimism is a self-fulfilling prophecy.

Garbage in, garbage out

“If you put into the machine wrong figures, will the right answers come out?” The members of Parliament asking such a question clearly don’t understand, thought Babbage. Charles Babbage was a mathematician, philosopher, inventor and mechanical engineer. He was the originator of the concept of a digital programmable computer. He knew that poor inputs led to poor outputs.

A hundred or so years later, William D. Mellin and his fellow US Army mathematicians working with computers had the same insight. Put nonsense into a computer programme and you’ll get nonsense out. ‘Garbage in, garbage out’, they called it.

Computers and programmes have changed a lot since then, but GIGO, as it’s known, persists. In fact, we recognise it outside of computers too. Garbage in, garbage out affects every process. Anything that takes an input, transforms it, and produces an output is affected by GIGO.

Cooking a meal, manufacturing a chair, calculating a budget, making a decision. All will give the wrong answers if you have the wrong inputs.
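To make that concrete, here is a minimal sketch of GIGO with the budget example (the figures are invented for illustration): the calculation below is perfectly sound, yet a single mistyped input quietly produces the wrong answer.

```python
# A sound process: subtract total expenses from income.
def monthly_budget(income, expenses):
    return income - sum(expenses)

good_inputs = [1200, 300, 150]  # rent, food, travel
bad_inputs = [1200, 300, 15]    # the same figures, with one typo: 15 instead of 150

print(monthly_budget(2500, good_inputs))  # 850 -- the right answer
print(monthly_budget(2500, bad_inputs))   # 985 -- the wrong answer, from the same correct process
```

The process never complains. The garbage was in the input, and it came straight out in the output.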

Why do we tend to focus on the process, then? Better recipes, faster production methods, more advanced spreadsheet formulas, specialist analysis tools. However efficient and effective the process, poor inputs lead to poor outputs. Better processes just create garbage more efficiently, with less waste, but it is still garbage.

So, we also need to focus on the inputs. Fresher ingredients, more robust materials, better information. Better quality inputs create less garbage. In cooking food or making chairs the benefits are obvious. And so are the costs. Better inputs cost more. But is the same true for information?

Reading a good book costs the same as reading a bad book. Listening to a well-researched podcast takes the same time as listening to an uninformed opinion piece.

The problem with information is there’s no way to judge its quality until after we’ve consumed it. And by then, if it’s bad, it’s too late. Perhaps Mellin and the other mathematicians had the same problem. They only knew that their computers had produced garbage outputs after they had processed the garbage inputs. We face the same problem.

How do we improve the quality of the information we consume? How do we get better inputs so that we can get better outputs?

Perhaps only through active intentional learning so that we consider the usefulness of what we read and watch and listen to. Perhaps only by honing our ability to pre-judge similar things in the future. Perhaps only by questioning and critiquing our information inputs. Mellin’s computers couldn’t do that. Computers don’t care about their outputs. But we do.

It’s an unreliable, flawed and fallible way of improving the quality of what we consume. But it’s worth doing. We need good thinking processes, using good information inputs.

Good in, good out. GIGO.

Humans out of the loop

“How does GPT-3 work?”, I typed into OpenAI’s playground website.

“GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence (AI) system developed by OpenAI, a research laboratory based in San Francisco. It is a large-scale language model that uses deep learning to produce human-like text. GPT-3 is trained on a massive amount of text from the internet and is designed to generate human-like text when given a prompt. This is achieved by predicting the next word in a sentence based on the words that precede it. GPT-3 can generate text from scratch, complete tasks like question-answering, and generate text that is almost indistinguishable from human-written text.”, GPT-3 replied.

Good answer. Makes sense, understandable. It reads like it could have been written by a copywriter. Perhaps GPT-3 just copied it from a website, I thought. So I googled it. I searched for that exact paragraph. But, according to Google, there are no matching search results. That paragraph has never existed before, no one has ever written it and put it on the internet for GPT-3 to copy, it’s new.

GPT-3 used how it works to write an explanation about how it works. That’s a bit of a loop.

Generative AI, whether it’s making text, images, music, or code, doesn’t really create something new. As GPT-3’s explanation says, it has been trained on text found on the internet. It takes all of that information and regurgitates it. This means that it can only create based on what already exists. AI generated images are only possible because there are lots of images on the Internet for the AI to learn from. Same with music. There’s lots available to learn from.
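The mechanism in GPT-3’s own explanation, predicting the next word from the words that precede it, can be sketched at toy scale. The little bigram counter below is nothing like GPT-3’s real architecture or scale, and the training sentence is invented, but it shows the point: a model like this can only ever recombine what it was trained on.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training set".
training_text = "the cat sat on the mat . the dog sat on the rug ."

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short sentence, one predicted word at a time.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the cat"
```

GPT-3 replaces the word counts with a neural network trained on vastly more text, but the loop is the same: learn from what already exists, then predict what comes next.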

The more generative AI is used, the more AI generated material there will be, and the more AI will learn from it. Eventually, everything on the internet will have been created by AI that has been trained on everything on the Internet that was created by AI. That’s a bigger version of the same loop that GPT-3 used to explain how it works.

But hang on. Isn’t that exactly how human culture was created? Music referencing history, art reacting to politics, writers influenced by other writers, all the way back through human history. Maybe the last time anyone did anything original was when some prehistoric person made a meaningful mark on a cave wall. Since then, everything has built on what went before it.

So, using AI to write and to make images and music is to continue developing human culture, at an increasing scale and increasing pace, but a continuation nonetheless. It’s like using a calculator to do sums. It makes it easier but it doesn’t change how mathematics works. Generative AI is not discontinuous change for human culture, it’s the natural next step.

These self-referential, self-reinforcing loops of AI learning from what AI created, repeated over time, eventually remove humans from creating human culture. We become the consumers, no longer having input into our own culture. When AI can create a thousand similar images in a second, what hope does one painting that takes a person weeks to create have of influencing mass culture? And when culture is wholly and completely caught in a self-referencing loop that uses what already exists, it prevents anything different from arising.

But what humans do that AI doesn’t, what is absolutely core to human nature, is seeking novelty, reacting against things, or just being plain awkward.

When we think about leaps in human culture, where things have seemed new to us, they have always been a reaction to what has gone before or a merging of things. Impressionist painters reacted against Romanticism, the mainstream art of its day. Jazz music came out of entwining American and European classical music with African folk songs.

Culture builds on what went before, but when humans build culture they do it in messy, tangential, reactionary ways. When AI builds culture it optimises for efficiency, sameness and incremental change.

Taking humans out of the culture creating loop leads to a very inhuman human culture.

A clockwork butterfly

Presenting to the American Association for the Advancement of Science in December 1972, MIT meteorology professor Edward Lorenz asked, “Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”

Over a decade earlier Lorenz had observed how small and seemingly insignificant changes in the starting conditions of his weather modelling caused outsized and unpredictable outcomes. This fascinated him and formed the basis for a branch of mathematics known as chaos theory. The question he asked overturned an understanding of how the world works that had stood for three hundred years.
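You can see what Lorenz saw in a few lines of code. This is a rough sketch, not his original weather model: it steps the Lorenz equations he later published forward in time from two starting points that differ by one part in ten million, and the two runs end up nowhere near each other.

```python
# The Lorenz equations, stepped forward with a simple (crude) Euler method.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x0, steps=5000):
    x, y, z = x0, 1.0, 1.05
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

print(run(1.0))        # one starting condition...
print(run(1.0000001))  # ...and the same, nudged by one part in ten million
```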

Seventeenth and eighteenth century scientists, people like Sir Isaac Newton and Pierre-Simon Laplace, believed in a clockwork universe. They argued that unpredictability has no place in the universe, and that if we knew all the physical laws of nature, then “nothing would be uncertain and the future, as the past, would be present to our eyes.”

These different ways of understanding how one thing leads to another present a bit of a conundrum.

If I drop a plate on the floor, it breaks. That’s predictable. Cause and effect. But the way the plate shatters, the size and shape of the broken parts, is unpredictable and unrepeatable. No two plates ever break in exactly the same way.

If you tap the ‘a’ key on your keyboard, the ‘a’ character appears on your screen. That’s predictable. But who reads what you wrote, what effect it has on them, what it causes them to think, to feel, to do, that is unpredictable. You can never know the effect you have on people. Lots of responsibility, huh?

If a deep learning computer model is trained on 84,743 photos of retinas, it can identify a person’s sex with a high degree of accuracy. That wasn’t predicted. Scientists had no idea that it might be possible to identify sex from the retina. But the data set only contained photos from a generally healthy Caucasian population, which is not exactly a diverse data set. How might such a limited data set affect how algorithmically identifying sex happens in the future? Who has responsibility then?

Our understanding of technology is increasingly moving from Newton’s predictable world to Lorenz’s unpredictable world. Unpredictable outcomes aren’t an inherently bad thing, they are part of how the world works. The problem is when we assume predictability for unpredictable things. Then we get surprised by the outcomes. Rather than accepting we don’t know what effect new and emerging technology might have, we assume we can predict it. Not only is our prediction wrong, but so is our thinking about predictability.

Does the flap of a clockwork butterfly’s mechanical wings in Brazil set off an unpredictable algorithm in Texas?

Copy and paste

Copy and paste is probably the most impactful productivity tool in the modern workplace.

This simple, elegant solution has prevented millions of hours of retyping, reduced errors and made writing and data entry on computers far more efficient. But not just efficient, actually usable. Imagine if it didn’t exist. Imagine if every time you wanted to move a piece of text from one application to another you had to retype it. Imagine how much more time you’d have spent throughout your life tapping away on a keyboard.

Copy and paste has been part of human/computer interaction for almost fifty years. It was invented in 1973 by Larry Tesler along with Tim Mott. You probably haven’t heard of Tesler, but he was one of the Silicon Valley mavericks of the seventies. A contemporary of Steve Jobs, a free-spirited anti-war counter culturist, a humble computer scientist who changed the world.

The key to Larry’s thinking behind copy and paste is the idea of modeless interaction. Modeless systems always have the same result from the same user input. That means, whether you copy and paste in a Google document, an Excel spreadsheet, or an Internet browser, you can expect it to always work in the same way. The simplicity, predictability and reliability of how people interact with computers owe a lot to this idea. It set the tone for every interaction we have with any computerised device, from laptops to phones, to supermarket checkouts and parking payment machines. We expect them all to interact with us in simple, predictable, reliable ways.
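As a rough illustration of modelessness (a toy sketch, not how any real operating system implements its clipboard), the two operations below mean the same thing in every ‘application’: one shared clipboard, no modes to switch between, the same input always producing the same result.

```python
# One shared clipboard, visible to every application.
clipboard = {"content": ""}

class Application:
    def __init__(self, name, text=""):
        self.name = name
        self.text = text

    def copy(self, start, end):
        # Copy always means the same thing: put the selection on the clipboard.
        clipboard["content"] = self.text[start:end]

    def paste(self, position):
        # Paste always means the same thing: insert the clipboard at the cursor.
        self.text = self.text[:position] + clipboard["content"] + self.text[position:]

# The same user actions behave identically whichever application they happen in.
doc = Application("word processor", "Dear all, the meeting is at 10am.")
note = Application("notes app", "Meeting time: ")

doc.copy(28, 32)            # copy "10am" from the document
note.paste(len(note.text))  # paste it into the note
print(note.text)            # Meeting time: 10am
```

Whether the selection comes from the document or the note, copy and paste behave the same way; that sameness is the modelessness the idea is named for.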

Not having to start from scratch every time, as a designer creating those interactions and as a user learning how to use a piece of technology, establishes strong foundations. It means always being able to build on what went before because what went before can be easily taken into a new context. Copy and paste is just one of those ratchet mechanisms. It enables progress to occur, work to become more efficient, ideas to build and grow. It enabled far more than just copying and pasting.

Nowadays, copy and paste has almost become an insult, a suggestion that something lacks originality. And yet, Larry’s original thinking gave us a lot to copy, and good reasons for pasting.

The soft power of civil society

The North Wind and the Sun had a difference of opinion. The North Wind boasted of great strength. But the Sun argued that there was power in gentleness.

“We shall have a contest,” the North Wind bellowed. They looked down upon a man wearing a warm winter coat. “As a test of strength,” said the North Wind, “let us see which of us can take the coat off of that man. I’ll go first.”

The Wind blew so hard, the birds clung to the trees. The world was filled with dust and leaves. But the harder the wind blew, the tighter the shivering man clung to his coat.

Then, the Sun came out from behind a cloud, and the man turned down his collar. The Sun warmed the air and the frosty ground, and the man unbuttoned his coat. The Sun grew brighter and brighter, and soon the man felt so hot, he took off his coat and sat down in a shady spot.

This kind of power is everywhere. Soft power – getting others to want the outcomes that you want – co-opts people rather than coerces them. Hard power is easy to recognise, it typically relies on carrots and sticks, rewards and punishments, coercing others. But soft power is the ability to change what others do by changing what they want.

Joseph Nye coined the term “soft power” in 1990, as the Cold War was coming to an end. In the same year, McDonald’s opened a restaurant at Pushkin Square in Moscow, in one of the most famous examples of soft power used on a global scale. At the time, there was a belief that if America exported enough fast food, blue jeans, and rock ‘n’ roll around the world, then everywhere else would adopt American cultural ideals and values. Exporting American culture to other countries changed what the people of those countries did by changing what they wanted.

Of course, soft power existed before 1990, it just didn’t have a name. Civil society organisations have always used soft power to achieve their aims. They have no means of coercing or paying people to do something, so the only option is a soft power approach.

The Keep Britain Tidy campaign used soft power to get people to want what Keep Britain Tidy wanted, and to change their behaviour to achieve it. Starting in 1955, the campaign encouraged people to put their rubbish in bins. Throughout the decades the campaign has exercised soft power in many ways.

When Nye explains the concept of soft power, he talks about ‘attraction and persuasion’. Making the outcome attractive is as important as persuading people to want to achieve it. Wanting our local communities and open spaces to be free of rubbish sounds like an attractive outcome, but sometimes it’s not enough.

The Bin It for Good campaign incentivised people to put their litter in a bin by offering another benefit to the local community. Bins were transformed into charity collection tins with a new local charity or cause each month. The more litter that went into the bins, the more money the featured charity or cause received. Not only was the idea of cleaner streets attractive to people, but their litter becoming a donation to charity was even more attractive. In three years, Bin It for Good prevented an estimated 34,314 items of litter from being dropped on the ground, and raised more than £10,800 for participating charities.

The International Tidy Man symbol was more subtle but long-lasting. It appeared on many crisp packets, soft drink cans, and chewing gum packets to remind people to put the packaging in the bin. It was the persuasion of soft power, the constant prompt to do the right thing.

Interestingly, there were also hard power elements to the campaign. When littering became an offence with a fine, suddenly there was a ‘stick’ for beating those who littered. But it wasn’t the civil society organisation that was wielding that power. They influenced the holders of that hard power to act on their behalf, making it another great example of using soft power.

A survey by Keep Britain Tidy in 2012 showed that seven out of ten people would feel guilty about dropping litter. What seemed like an inconsequential thing became something to feel guilty about in less than sixty years. The campaign achieved a generational shift in attitudes towards litter. That is soft power in action.

This is how all civil society operates. Hard power is for the state. Soft power is for those organisations that want to influence and persuade. It is by successfully exercising their soft power that civil society organisations bring about change in society.

Civil society is the sun that shines down on us, gently warming us until we take off our coats.


Perpetual motion machines

Bhāskara II was a smart man. He has been called the greatest mathematician of medieval India. Also known as Bhāskarāchārya, he reached an understanding of number systems and equations that was several centuries ahead of European mathematical knowledge. He understood zero and negative numbers, quadratic equations, spherical trigonometry, astronomy, cosmography and geography. His works represent the peak of mathematical and astronomical knowledge in the 12th century.

He also had an idea for a machine that, once in motion, would go on forever. This machine, known as Bhāskara’s wheel, would have curved spokes filled with mercury. As the wheel spun, the mercury would flow to the other side of the wheel, overbalancing it and keeping the wheel spinning.

We don’t know if he believed that such a machine was physically possible or whether it was an interesting mathematical conundrum, an intellectual challenge, but we do know that perpetual motion machines are impossible. The laws of thermodynamics tell us that.

It’s a nice idea, isn’t it? Getting more out than you put in.

Really, a perpetual motion machine is an attempt to break unbreakable rules. In a way, it’s a test of the system. Any rule that can be broken is one that has been arbitrarily applied. Someone made it up. A rule that can’t be broken is a fundamental system rule. How can we know which are which unless we test them?

This is what our modern software technologies do. It’s what we expect of them. They try to find ways of breaking the rules to get more out than we put in.

If you’d searched the web in 1992, you would have had to use Aliweb, the first ever search engine developed by Martijn Koster at Nexor. Today, you’d probably use Google. You’d get a hundred million search results in less than a second. That’s the promise of the magic data machine of search. It’ll give us all the answers, if we give it all our data. It doesn’t mention the cost. It doesn’t mention the privacy issues, and the influence it has on people’s opinions, and the shaping of our view of history, and the shaping of the future.

If you’d bought one bitcoin in April 2011 you would have paid $1. In November 2021, when bitcoin was at its record high, it would have been worth $69,044. Today, if you’d managed to avoid investing in a scam or being hacked, that bitcoin would be worth $16,644. That’s the promise of the magic money machine of cryptocurrencies. It’ll make us rich, if we give it all our money. It doesn’t mention the cost. It doesn’t mention the crashes, and the hacks, and the carbon emissions, and the people who lost their life savings.

All modern software technology promises it can break the rules. It promises we’ll get more out than we put in. That’s the false promise of the perpetual motion machine, a promise that an impossibility can be overcome. But, even though we know these things are impossible, we still keep trying to do them.

Bhāskarāchārya means Bhāskara the teacher. If his perpetual motion machine taught us anything, it should be that we can’t get out more than we put in.

How complex systems succeed

Dr. Richard Cook was a system safety researcher. He researched and wrote about how complex systems fail. The kinds of systems he looked at were things like surgical operations, commercial aeroplanes, power stations. All systems that are obviously complex and which you wouldn’t want to fail.

What we’ve come to realise more recently is that everything is a complex system. Every aspect of our lives is interwoven into complex systems. Food, manufacturing, banking, education, health, economy, transport, climate. When anyone talks about the polycrisis we are facing, they are talking about how crises in many of these systems become entangled and produce harms greater than the sum of those the crises would produce in isolation. Being interconnected makes things worse. And yet, despite how failure prone these systems are, they seem to keep working. Why?

How do complex systems succeed?

Let’s consider a particular kind of complex system: a fictional social media platform. Let’s call it Mutter. It starts out small, just a few people using it to message people they know. It grows. It grows more, and within four years fifty million Mutts are being Muttered every day. News broadcasters find stories in real time and share them with a large audience. Media companies use it to create a buzz about TV shows. Advertisers promote every kind of product and service. Communities spring up where people share knowledge and learn from each other. Politicians use it to communicate. Activists highlight and fight oppression. Bullies and bigots attack people. Businesses are built and customers serviced. The more interwoven into life it becomes, the more it is depended upon. It’s no longer just a social media platform, it’s a complex system interwoven into modern digital life.

It could have failed at any point. But it didn’t. It still could. But maybe Cook can help us understand a bit about why it won’t.

All systems are flawed, but they run nonetheless – Catastrophe requires multiple failures; individual flaws and single point failures are not enough to bring down the system. Not just technical flaws like the Remutt button not working, but flaws in how people interact, how some messages get more attention than they should, how things that shouldn’t be on Mutter, are. And, because the system is wider than just what happens on the platform, there are flaws in how a business’s customer service team isn’t able to help customers, how people are affected by barrages of hate, how adverts targeted inappropriately might create distress. Mutter continues to run successfully despite these many flaws because people find a way around them. People are the most adaptable element of complex systems, and so multiple small failures over time that are learned from and improved on create a more successful system overall.

Every action is a gamble, but with unpredictable consequences – The consequences of small actions are impossible to predict, but because of the size and complexity of the system, every new change introduces new and unexpected ways for the system to succeed. The effect a Mutt sent by a politician has on the stock market, is a gamble. A change to Mutter’s algorithm that sees posts about dogs getting more visibility than cats, is a gamble. Hiring more people than Mutter can afford to pay in the hope they’ll be able to increase income, is a gamble. Communities using Mutter to connect, is a gamble. Businesses investing in building an audience, is a gamble. Dealing in uncertainties doesn’t make the system any less resilient, in fact the more of those unpredictable interactions going on the more likely the system is to succeed.

Systems defend themselves, but in well established ways – The high consequences of failure lead to the construction of multiple layers of defences. For Mutter, because of how interwoven it is into so many things, the defences are well established and well proven. They include laws on how people can be fired, the essential institutional knowledge contained in people’s heads, the backup servers, the commitment of organisations, the number of people using the platform. All of these things exist to protect the system and ensure its continued success.

So, because Mutter has existed for long enough to become interwoven into complex political, commercial, social systems, learned how to deal with many small failures and accept the uncertainty of the consequences it creates, and has multiple layers of protection, it becomes extremely resilient.

The only measure of success for a complex system is that it continues to function. Mutter will.

Be more fox

Can you explain why the world is the way it is? How things like the economy work? What things affect how your life turns out? No? Not surprising really.

We all rely on our mental model of the world to understand how it works. It’s how we comprehend the risks we face in the world; emotional, social, financial, physical risks. We want how we perceive the world to be as close to how it actually works as possible so we can make better, safer, decisions.

If we believed that all drivers pay attention to pedestrians and do all that they can to keep them safe, we might not look both ways before crossing the road. If we believed that cryptocurrencies can make us rich, we might invest our life savings in bitcoin. The things we believe about how things work in the world affect the decisions we make.

But we don’t often examine those beliefs or consider how we created them.

Sir Isaiah Berlin, a Russian-British social and political theorist and philosopher, wrote of the metaphor of the fox and the hedgehog. He took the idea from the Greek poet Archilochus who said: ‘The fox knows many things, but the hedgehog knows one big thing.’ These are two ways of looking at the world, of explaining what causes things to be the way they are.

Foxes, so the metaphor goes, know a bit about lots of things. They consider lots of often unrelated and even contradictory ideas and pieces of information as they build up their view of the world. They don’t attempt to make it all fit into a single coherent picture and are sceptical about big explanations. They know that when something happens, it’s because lots of things happened, and the explanation is never simple. Foxes succeed in creating an understanding of how the world works by accepting how complex and uncertain things are.

Hedgehogs, on the other hand, have only one way to deal with the world. They are invested in one big idea, or a single explanation of how things work. They are the experts. When a hedgehog looks at the world they only take in the information that fits their worldview, and they ignore anything else. Hedgehogs win big when the world turns out to actually work the way their big idea said it would, but this is rare.

It isn’t what they think that separates the fox and the hedgehog, it’s how they think.

There might have been a time when our world was slow-moving enough for one big idea about how things worked to be close enough to the reality. That was the time of the hedgehogs. Then, it made sense to have a particularly static view of the world. It made sense to be biased towards what we already believed and be quick to discount information that didn’t fit. That worldview had conviction and certainty. It felt safe.

But that’s not the world we find ourselves in today.

It’s easy to fall into the trap of the hedgehog. But sticking to a single narrative about the way the world works, especially as our world becomes ever more complex, interconnected and uncertain, makes it almost impossible for our understanding to be even close to how things actually are.

Berlin recognised that thinking about ways of seeing the world as two broad classifications is an over-simplification and, if pressed too hard, becomes absurd. That’s the fox speaking. It says, foxes and hedgehogs might provide an informative way of looking at how we think about the world, but don’t take it as the one big idea to hang everything on. If it’s useful, use it. And when it isn’t, go looking for other points of view.

Being a fox is hard. Being pragmatic about our beliefs, examining our opinions, looking for evidence that our view of the world is incomplete, these are hard things to do. Questioning things, especially things that seem to fit what you already believe, and even more so if everyone around you agrees, risks confusion and uncertainty. But it is essential to continually reassess and recalibrate our understanding of how the world works if we want a worldview that helps us deal with the modern world.

Be more fox.