The weaponisation of digital: How AI scales inequalities in society and why charities need to up their digital game to fight back

Within the lifetime of someone born today…

Algorithm says no – From intentional attacks to the unintended consequences of Artificial Intelligence

AI has the potential to reshape society because it is a general-purpose technology that will be applied in all industries and across all areas of life, with unpredictable second- and third-order effects.

Can machines think? – the origins of the idea of thinking machines

When Alan Turing first conceived of thinking machines, the word ‘computer’ referred to the human operator – the person using the machine to compute things. Over time, as machines became able to compute in ways that human operators couldn’t even follow, we dropped the notion of the human being part of the machine, and the machine itself became known as ‘the computer’. The trend of separating human from machine has continued, with computers now controlling a myriad of things without any human operation being required.

In 1950, he posed what became his most famous question: “Can machines think?”. Turing’s test to answer it was based on an imitation game, in which a questioner puts written questions to two hidden players and tries to work out which is the man and which is the woman. In Turing’s version, a machine takes the place of one player. If the questioner cannot reliably tell the machine’s answers from the human’s, the machine could be said to be thinking.

“We can only see a short distance ahead, but we can see plenty that needs to be done.”

Alan Turing, OBE

Turing thought that by the year 2000 the average interrogator would have less than a 70% chance of identifying the machine correctly. Today, in 2020, a machine has still not passed the Turing Test, and we may have to accept that it simply isn’t the best way to measure machine intelligence. But in many ways it isn’t a machine passing the test that matters; it is that humans keep asking the question. Even seventy years later, we’re still asking, ‘can machines think?’

Asking whether machines can think like humans or be smarter than humans is to ask the wrong question. Machines will think like machines and be smart in machine ways. Humans are good at being humans. Machines are good at being machines.

So much of the discussion about whether AI can truly exist comes from the expectation that machines should think, imagine, feel, reason, and behave like humans. This anthropocentric viewpoint merely highlights the problems we will face if we continue to expect this of the machines we create. AI will not think like humans because it is not human. The question is whether the results of machine thinking can reliably match the results of human thinking, so that even if machine and human arrive at the same answer in different ways, we can be confident the machine will reach an answer humans would be willing to accept. This takes us into the ethics of technology: what do we expect of our machines, and how do we want them to treat us?

Can machines be conscious?

This is perhaps the most daunting question about machines. Consciousness is widely regarded as a trait unique to living things – broadly, a state of being aware of your own existence. The question about machines can be more appropriately asked as: “Can machines become the subject of their own thoughts?”

Consciousness has implications. Might a machine that is aware of its own existence take steps to protect itself from threats? Should a conscious machine have rights?

Olafenwa and Olafenwa point out that machine consciousness could never be like human consciousness because we are motivated and informed by our subconscious – thoughts and feelings we aren’t aware of and don’t understand – whereas a machine could never be affected by data it wasn’t aware of.


It’s hard to talk about a future with AI without talking about Ray Kurzweil (or maybe it would just be rude). Kurzweil is an inventor and futurist, and populariser of the term ‘The Singularity’ in the context of Artificial Intelligence. The term is used variably to mean anything from the point in time at which a general AI is created to the point where humans merge with AI to become transhuman, super-intelligent beings.

Even more interesting is Kurzweil’s contribution to thinking about how we get there – what it takes for AI to become a reality. Technical innovation is subject to what he calls ‘The law of accelerating returns’. The law says that as change happens, the results of the change cause more change, which results in even more change, and so on. As change accelerates, so do the benefits of the change.

Kurzweil quantifies these compounding increases by stating that the rate of technical innovation doubles every ten years. Starting today, we can expect ten years’ worth of innovation, at today’s rate of change, to occur over the next ten years. Today’s rate of innovation is the constant we use to appreciate the rate of change. Ten years from now the rate of innovation will be double what it is today, meaning that between ten and twenty years from today, a gap of ten years, we can expect twenty years’ worth of innovation. Ten years on from then, the rate will have doubled again to four times today’s rate. Forty years from our start point, technology is being innovated at sixteen times the rate of today. One hundred years from today, the rate of technical innovation will be over a thousand times (2^10) what it is today.

Summing the decades, that century delivers roughly ten thousand years’ worth of innovation measured at today’s rate. If the rate of innovation were to remain constant, it would take around ten thousand years to reach that level of technology. Because of the law of accelerating returns, it will take one hundred.

If this seems hard to believe, we can try a short thought experiment. If we showed all of the technical innovations of the last hundred years to someone who lived a hundred years ago and asked them, “Do you think all of this can be created in the next 100 years, or the next 10,000 years?”, which timescale do we think they are more likely to choose? Here are just some of the inventions of the last hundred years: the Internet, the World Wide Web, humans on the moon, a network of satellites surrounding planet Earth, a man-made spacecraft leaving the solar system, computers, mobile phones, self-driving cars, robots that build cars, microwave ovens, digital cameras, solar and wind power, 3D printing, AR, VR, aerosol spray cans, PTFE, injection moulding, ballpoint pens, GPS, transistors, semiconductors, jet engines, helicopters, kidney dialysis machines, defibrillators, and alkaline and lithium batteries.
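Kurzweil’s compounding can be sanity-checked with a few lines of Python. This sketch assumes only the premise that the rate of innovation doubles every ten years; the function names are illustrative, not Kurzweil’s own.

```python
# A quick check of the "law of accelerating returns", assuming the rate
# of innovation doubles every ten years.

def rate_multiplier(years: int, doubling_period: int = 10) -> float:
    """Innovation rate relative to today, `years` years from now."""
    return 2 ** (years / doubling_period)

def equivalent_progress(years: int, doubling_period: int = 10) -> int:
    """Total progress over `years`, measured in years of today's-rate
    innovation. Each decade runs at double the previous decade's rate."""
    decades = years // doubling_period
    return doubling_period * sum(2 ** d for d in range(decades))

print(rate_multiplier(20))        # 4.0 – four times today's rate in twenty years
print(rate_multiplier(100))       # 1024.0 – about a thousand times today's rate
print(equivalent_progress(100))   # 10230 – roughly ten thousand years of progress
```

The geometric sum is what makes the century so startling: the final decade alone contributes more progress than the first nine combined.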

Kurzweil, and many other AI thinkers, cite the law of accelerating returns when explaining why we can expect general AI to be developed in the near future. Expert expectations range from the year 2040 to 2075.

The fulcrum of the future

Bostrom and the Future of Humanity Institute  

“This will be the most important thing this species will have ever done on this planet; giving birth to this new level of intelligence”

Nick Bostrom, Professor at the University of Oxford and Director of the Future of Humanity Institute

the research for the next five years.

Artificial Intelligence is coming. There is nothing we can do to prevent it, but we can prepare for it.

Understanding the threats

We can group the threats posed by AI into four types.

  • Direct impact of intentional threats – hackers, criminal gangs, nation states & cyber armies.
  • Direct impact of unintended threats – governments.
  • Indirect impact of intentional threats.
  • Indirect impact of unintended threats.

The bad actors – Direct impact of intentional threats

The bad guys – from individual hackers to criminal gangs, cyber-terrorists and nation states conducting cyber wars – are those who will use AI to intentionally do harm.

The FHI report on the malicious use of AI describes the effect AI will have as follows:

  • Expand existing threats – expanding the set of actors capable of carrying out an attack, the rate at which they can carry it out, and the set of plausible targets. The report describes how AI can be used to deliver hyper-targeted spear phishing attacks.
  • Introduce new threats – AI will be able to conduct, and be susceptible to, attacks that humans aren’t. It will be feasible for AI to use speech synthesis to imitate people in spear phishing and impersonation attacks, to conduct drone swarm attacks that no team of human operators could control, and to develop and deploy computer viruses that infect and control other AI systems. Imagine the damage and loss of life that could result from a virus taking control of the AI system that controls a city’s train network, air traffic control at a major airport, or autonomous weapon systems.
  • Alter the typical character of threats – “attacks supported and enabled by progress in AI to be especially effective, finely targeted, difficult to attribute, and exploitative of vulnerabilities in AI systems…  the properties of efficiency, scalability, and exceeding human capabilities suggest that highly effective attacks will become more typical”

Types of attacks

  • Automated social engineering attacks – hyper-targeted phishing that uses fake video and speech impersonation to convince the target to do things like hand over login credentials.
  • Identifying and exploiting system vulnerabilities – probing other systems, AI and not, for opportunities to gain access.
  • Denial of service attacks – overwhelming systems until they crash or become useless. Overwhelming Google Maps with false traffic information, to convince the system that every single road is at a standstill, would cause chaos on the roads without even needing to attack Google successfully. The attack could happen where security is less well enforced, i.e. in the devices that communicate their position and speed to Google Maps.
  • Criminal enterprise pipeline automation – money laundering, ransom negotiation, supply and distribution, law enforcement counter measures, all could be automated by criminals, distancing themselves even further from their crimes and making them even harder to bring to justice.
  • Data poisoning – feeding an AI system corrupted or misleading training data so that it learns the wrong behaviour, opening the door to corporate sabotage.
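To see why data poisoning works, here is a toy illustration (not from any of the reports above): a minimal nearest-centroid classifier trained on made-up one-dimensional data, where an attacker injects malicious points mislabelled as safe to drag the decision boundary towards them.

```python
# Toy data poisoning demo: mislabelled training points shift a simple
# nearest-centroid classifier's boundary until it misclassifies.

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """samples: list of (value, label) pairs, labels 'safe' or 'malicious'."""
    safe = [v for v, y in samples if y == "safe"]
    bad = [v for v, y in samples if y == "malicious"]
    c_safe, c_bad = centroid(safe), centroid(bad)
    return lambda v: "safe" if abs(v - c_safe) < abs(v - c_bad) else "malicious"

clean = [(v, "safe") for v in (1, 2, 3)] + [(v, "malicious") for v in (8, 9, 10)]
clf = train(clean)
print(clf(7))    # 'malicious' – the clean model classifies this correctly

# The attacker injects malicious-looking points mislabelled as 'safe'
poisoned = clean + [(v, "safe") for v in (9, 9, 9, 9)]
clf2 = train(poisoned)
print(clf2(7))   # 'safe' – the poisoned centroid has pulled the boundary over
```

Real attacks target far more complex models, but the mechanism is the same: the model faithfully learns whatever the training data says, including the lies.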

To prevent the bad actors from triumphing, defenders have to succeed in foiling every attack, every time; for the attackers to triumph, they only have to succeed once. The odds are stacked in favour of the bad guys.

The benevolent digital dictators – Direct impact of unintentional threats

When governments and businesses adopt AI to achieve efficiencies and introduce new capabilities, they can unintentionally harm the very people those systems are meant to serve.

Direct impact means the effect is felt by those who were the intended users or subjects of the AI system.

These are examples of how AI can have an impact on those who directly interact with it, causing unintended consequences. The AI may have beneficial aims but still produce harmful outcomes:

  • Governments processing benefit applications.
  • CCTV surveillance.


The blindsides – Indirect impact of intentional threats

Using AI systems in ways they weren’t intended, to deliberately harm or cause damage – such as compromising a commercial fleet of autonomous vehicles to use them in terrorist attacks.

Unknown millions of people around the world access child pornography, yet we hear stories of law enforcement identifying two hundred people through their credit card details. To think that law enforcement technologies will catch up with criminal activities is ludicrous. It’s not even the tip of the iceberg; it’s a snowflake on the tip of the iceberg. When criminals use AI technologies to protect their systems and networks, more children will be exploited and abused.

The existential crises – Indirect impact of unintentional threats

Environmental damage, economic collapse, the annihilation of the human species.

AI will become such a powerful weapon because it is a general purpose technology.

Bionics augment the human body but contain no feedback into the nervous system; a bionic person cannot feel their bionic legs (Herr, 2018).