Most of the stories we hear about AI tend to involve the emergence of some kind of non-human super-intelligence, the loss of human jobs on a huge scale, or Armageddon as the robots wake up and eliminate us.
A far more likely scenario involves ubiquitous AI softly changing how humans relate to each other and perhaps making us question what it means to be human. It’s far too early to rush to any definitive conclusions, but the way in which children shout commands at digital assistants may be a foretaste of how humans might treat each other in the future. So too might the way in which people are slowly starting to treat semi-intelligent machines as friends, confidants and even therapists, possibly to the exclusion of human equivalents.
But how realistic is it that AI will create a new class of social actors that will stunt our emotional development or erode deep human connections, perhaps making our relationships more transactional, more narcissistic and less reciprocal? Could AI result in a retreat from real intimacy? Given the commercial imperatives driving many of these technologies, these are important questions.
Given how readily individuals already form strong emotional attachments to inanimate objects, it’s entirely possible that, someday soon, people will start preferring the company of machines to that of other people. Then it’s just a small hop to preferring robot mates to human partners and ultimately, perhaps, to people having sex with machines rather than with each other. This sounds like sci-fi, but it’s actually happening. In 2018, a Japanese man ‘married’ an AI-powered hologram, and there are already companies creating robots for sexual gratification.
Self-driving cars might seem a long way removed from such things, but even here AI is likely to change human behaviour. Driving, apart from being a skill, requires high levels of cooperation between different drivers. What happens if this is removed? Could such skills and cooperation disappear? Could self-driving cars make driving more dangerous because human drivers start to pay less attention?
What other externalities, or spill-over effects of AI, have not been thought about?
Ref: The Atlantic (US), 4.19, ‘How AI will rewire us’ by N. Christakis.
The speed at which facial recognition technology (FRT) is being rolled out in countries such as the UK is a cause for concern. In China, both the rollout and its acceptance are positively alarming. An AI-powered technology that can catch jaywalkers can also enforce an Orwellian level of surveillance.
Let’s consider a few of the issues. First, FRT doesn’t work terribly well at the moment, with countless instances of individuals being wrongly identified. If you are black rather than white, the error rates are even worse. When Amazon’s FRT system was tested on photographs of members of Congress, 28 of them were incorrectly identified as criminals. Such errors can lead to miscarriages of justice, but also to an erosion of trust between the police and the public. Despite this, there has hardly been a national, let alone international, conversation about FRT.
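To see why wrongful identifications are not a marginal glitch, it helps to think about base rates. The sketch below uses purely hypothetical numbers (an invented crowd size, watchlist and accuracy figures, not Amazon’s or any police force’s actual performance) to show how, when a live scanner is pointed at a large crowd, false matches can vastly outnumber true ones even if the system is ‘right’ 99 per cent of the time.

```python
# Illustrative base-rate arithmetic for live facial recognition.
# Every number here is a hypothetical assumption, not a measured figure.

crowd_size = 100_000          # people scanned at a large event
on_watchlist = 10             # of whom are genuinely wanted
true_positive_rate = 0.90     # assumed chance a wanted face is flagged
false_positive_rate = 0.01    # assumed chance an innocent face is flagged

true_matches = on_watchlist * true_positive_rate
false_matches = (crowd_size - on_watchlist) * false_positive_rate
precision = true_matches / (true_matches + false_matches)

print(f"Expected true matches:  {true_matches:.0f}")
print(f"Expected false matches: {false_matches:.0f}")
print(f"Chance a flagged person is actually wanted: {precision:.1%}")
```

With those assumptions, fewer than one in a hundred of the people flagged would actually be on the watchlist, which is why a headline accuracy figure says very little about how a system behaves once it is used to scan everybody.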
Of course, it’s important to distinguish between two uses of FRT. One is face matching using static images; the other is live scanning. The former is fine if it’s properly regulated and overseen (which generally it’s not). The latter isn’t, because using it is a form of search, similar to allowing the police to collect DNA samples unfettered or to search people’s houses en masse. The right to anonymity is a fundamental right that underpins our psychological sense of liberty. If people realise they’re being watched all of the time, it will fundamentally change how they behave, and it will benefit governments intent on taking more control in the name of safety and security.
Moreover, FRT is merely the beginning. It’s been alleged that, in China, CCTV cameras are being linked not only to FRT, potentially giving the government a real-time picture of where everyone in the country is, but also to emotion-sensing technology that allows the government to assess what mood everyone is in.
Ref: Daily Telegraph (UK) Technology Intelligence, 14.5.19, ‘Big Brother tech that thinks it’s seen your face before’ by L. Dodds and N. Bernal.
Are you WEIRD? That’s someone from a country that’s Western, Educated, Industrialised, Rich and Democratic. Most of the products people consume, especially digital products, are made by WEIRD people. Furthermore, 25 per cent of people working in tech are young men, and they’re mostly white. Within AI, the bias towards white men is even greater.
So what? Surely tech is neutral? No! Any technology is biased by the experiences and beliefs of the people behind it, even if they are totally unaware of these biases.
If most digital products are designed by men, they could well be biased against women, and so on. Moreover, far from simply picking up the conscious or unconscious biases of their creators, machine-learning systems can significantly amplify such biases. Given that technologies, and especially digital technologies, are running significant parts of our daily lives and society as a whole, this is a matter of huge concern.
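As a toy illustration of that amplification effect, here is a minimal sketch using entirely synthetic data and a deliberately simplified model (the 60/40 split, the noisy ‘merit’ score and the hiring-style framing are all invented assumptions). A modest historical skew in the training data becomes a much larger gap in the model’s decisions.

```python
# Toy bias-amplification demo: synthetic data only, illustrative numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                      # 0 = group A, 1 = group B
# Historical outcomes are modestly skewed: ~60% positive for A, ~40% for B.
label = rng.binomial(1, np.where(group == 0, 0.6, 0.4))
# A weak, noisy 'merit' score that only faintly predicts the outcome.
score = label + rng.normal(0.0, 3.0, n)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g, name in ((0, "group A"), (1, "group B")):
    hist = label[group == g].mean()
    decided = pred[group == g].mean()
    print(f"{name}: historical positive rate {hist:.0%}, "
          f"model decides positive for {decided:.0%}")
```

Because the ‘merit’ signal is so weak, the model leans on group membership and pushes its decisions for the two groups towards opposite extremes: the 60/40 historical gap typically becomes something like 90/10. That is amplification, and it happens without anyone writing a single prejudiced line of code.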
How do we solve this? Making teams of engineers more diverse, not only in terms of age, ethnicity and gender, but geography and experience too, will help. So too will simply being aware of any potential biases that may be infecting design and data sets. (See ‘explainable AI’ as a way forward on this.)
Ref: Financial Times (UK), 9.3.18, ‘Tech’s sexist algorithms and how to fix them’ by H. Kucher. www.ft.com Links: See ‘explainable AI’; also Technically Wrong by Sara Wachter-Boettcher.
To some extent, the history of modern civilization is the history of materials. Think of how glass, concrete and steel have influenced the look and feel of cities since their invention. So, what materials might be next?
Graphene, the first 2D material, is the first of many new materials that will radically change not only the physical infrastructure of our cities, but many elements of our daily lives too. Graphene falls under the broad classification of nanomaterials: materials less than 100 nanometres thick (a human hair is 80,000 nanometres thick). Graphene is a single layer of carbon atoms packed into a honeycomb lattice, and the result is a material that’s stronger than steel, more flexible than rubber and conducts both heat and electricity.
What can you do with such materials? Applications are almost endless, but include solar cells, pollution filters, sensors, robotics, medicine and so on.
One of the very latest nanomaterials is molybdenum disulphide. In its common form this is a lubricant for engines and an additive in Teflon and nylon, but, thinned down to a single layer of atoms, it becomes hugely useful in solar-cell arrays and fibre-optic networks.
Two other new nanomaterials are silicene and phosphorene, which are hugely malleable semiconductors. Most developments in computing and electronics over the past few decades have been around miniaturisation, so such new materials could be hugely useful in creating smaller, faster, and cheaper computing. Computers that boot instantly and save data in nanoseconds could be just two applications. Another use might be to print electronic circuits and displays on packaging, so that paper, cardboard or plastic can warn of spoilage or indicate temperature. You might even be able to ask a cereal packet, or birthday card, to play a short video. More serious applications could be wearable health-tech, self-cleaning surfaces, materials that indicate damage, or even skin repair.
Going further into the future, nanomaterials will almost certainly feature in batteries that recharge much faster and last much longer and perhaps protective coatings that last for 100 years or more.
Ref: Strategy + Business (US) 26.7.17, ‘The next big technology could be nanomaterials’ by J. Rothfeder. www.strategy-business.com
The key argument in favour of autonomous vehicles, such as driverless cars, is generally that they will be safer. In short, computers make better decisions than people, and fewer people will be killed on the world’s roads if we remove human control from the equation. Meanwhile, armed robots on the battlefield are widely seen as a very bad idea, especially if these robots are autonomous and make life-or-death decisions unaided by human intervention. Isn’t this a double standard? Why can we delegate life-or-death decisions to a car, but not to a robot in a conflict zone?
You might argue that killer robots are designed to kill people whereas driverless cars are not, but should such a distinction matter? In reality, driverless cars might kill far more people by accident than killer robots do on purpose, simply because there will be many more of them operating in close proximity to people. If we allow driverless vehicles to make instant life-or-death decisions, surely we must allow the same for military robots? And why not police robots with the same life-or-death potential?
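As a back-of-the-envelope illustration of that exposure argument (every figure below is invented purely for the sake of the arithmetic), a huge fleet of almost-always-safe cars can still account for more deaths than a small number of far more dangerous machines.

```python
# Hypothetical exposure arithmetic: all figures are invented for illustration.

driverless_cars = 50_000_000          # assumed future fleet size
fatal_errors_per_car_per_year = 1e-4  # one fatal mistake per 10,000 car-years

combat_robots = 1_000                 # assumed number of deployed armed robots
fatalities_per_robot_per_year = 1.0   # assumed average deaths caused by each

print("Expected deaths from cars:  ", driverless_cars * fatal_errors_per_car_per_year)
print("Expected deaths from robots:", combat_robots * fatalities_per_robot_per_year)
```

On those made-up numbers, each robot is ten thousand times more lethal than each car, yet the cars cause five times as many deaths, simply because there are so many more of them in constant contact with people.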
My own view is that no machine should be given the capacity to make life-or-death decisions involving humans. AI is smart and getting smarter, but no AI is even close to being able to understand the complexities, nuances or contradictions that can arise in any ordinary human situation.
Ref: New Scientist (UK) 11.11.17, ‘Lethal Logic’ by D. Hambling.
AI will soon be smarter than people, at which point it will take over and possibly eradicate us. Way before this happens, AI will steal most of our jobs and those of our children.
Maybe not. At the moment, computers lack even the most basic essence of intelligence. Beyond performing a handful of specialist tasks, they are unable to understand the physical world well enough to make precise predictions about basic elements of it. In other words, computers lack common sense, and there is no clear strategy in sight for giving it to them.
Moreover, how does a rational, rule-based machine deal with abstract ideas and beliefs: ideas and beliefs that must be understood if a machine is to intuit useful truths that are not directly expressed? Dealing with irrational, emotional human beings is one problem. Dealing with a world where not all knowledge is codified or digitalised is another. For these reasons alone, true AI — that is, broad or generally useful AI — would appear to be a very long way off indeed.
Furthermore, maybe AI isn’t even the right problem for us to be worrying about. For true AI to work (indeed, for the Internet of Things or anything else digital that’s remotely useful to work) we need near-perfect security, guarantees about privacy, and agreements about who owns the data these systems create.
Even more importantly, before we address any of this we need to discuss, and either defend or challenge, the way in which AI, in the guise of automation, is fuelling a future in which value is measured only in the short term and profits are concentrated in a handful of big tech behemoths that seem accountable to nobody but themselves.
Ref: MIT Technology Review (US), Vol 121, No 1, ‘The Great AI Paradox’ by B. Bergstein. www.technologyreview.com