Top Stories for Issue 38: April 2016

1. Brief encounters and temporal experiences

You don’t get to write headlines like this one too often. And you don't get to write about articles written over three years ago either, because we somehow equate new with value whereas old seems rather out of date and even worthless. But sometimes the oldies are the goodies and many more recent articles aren’t up to much.

A professor of American art history at Harvard University has a few things to say about time, patience and immersion that may be pertinent to our accelerated and increasingly instant, digital culture.

Her view is that deceleration, patience and what she calls “immersive attention” are sadly lacking in our modern culture – or “nature”, as she calls it. She says spending a painfully long time looking at a single work of art can pay dividends.

How can you possibly look at a painting for three hours? Is there enough to see? Such questions rather miss the point. But it’s an easy point to miss because we assume there’s nothing much to be said about looking or seeing, just because it’s so easy and so innate. That’s possibly why we generally don’t look at things for very long.

Apparently, the average time visitors to the Louvre Museum in Paris spend looking at the most famous painting in the world (the Mona Lisa) is 15 seconds. Another museum reckons visitors look at less famous paintings for less than two seconds. What are these people seeing? More importantly, what might they be missing?

First, there is often an enormous amount of information in a painting. Older pictures, in particular, are ‘time batteries’. But much is hidden to the casual and impatient observer. Second, it is often assumed that vision is immediate and the only things worth seeing or knowing – about anything – are immediately obvious and observable.

But this is clearly untrue. Just because you have seen something doesn’t necessarily mean you have looked at it. Looking at art critically teaches us the power of critical attention and patient investigation.

Third, delays are not always negative. Insight can arise from extended periods of delay or apparent nothingness, much the same as ideas often spring from apparent idleness. Therefore, periods of disconnection from the modern world, along with variations in the pace that information is consumed, can pay rich rewards for our productivity.

All of this, of course, is an old idea. The idea that patience is a virtue or that time is directly related to the mastery of skills sounds almost quaint. Suggesting that access or, heaven forbid, ‘entertainment’ is not synonymous with learning sounds like suffering or even loss of control.

But what if we flipped all this on its head? What if we started to see disconnection, waiting and patience, in particular, as the ultimate form of control? What if we started to teach patience as a strategic art? How long does it take to look at a work of art? In some cases, it’s a lifetime.


2. Why do we dislike work?

A recent global survey by Gallup found that almost 90 per cent of workers are either “actively disengaged” or “not engaged” with their work. This may not seem like a big deal, but think about it for a moment.

Nine out of every ten people around the world are doing something they don’t especially like, in a place, or with people, they don’t especially like. That’s a huge share of their waking lives spent doing something they’d rather not be doing.

The survey result could have a perfectly acceptable explanation. Perhaps it’s just human nature. Maybe we’re naturally lazy or easily bored. Or maybe we’re doers who like to be physically busy, and we’re not made for sitting in offices typing information into screens.

Barry Schwartz, writing in the New York Times, thinks the de-skilling of work caused by automation and, to some extent, artificial intelligence is boosting economic efficiency, but workers are paying a high price for it. This is especially so when compensation becomes the only measure that seems to matter.

The quest for economic efficiency has drained work of meaning and personal satisfaction. In short, work has become too easy. Given the chance, Schwartz argues, workers will readily accept more work for less pay if their work becomes more engaging and allows them to exercise more discretion and control over whatever it is they do. It also means giving people the opportunity to learn and grow, and to work alongside people they respect and who, in turn, respect them. Above all, it means employers explaining how an employee’s work can help make the lives of others a bit better.

Cleaning toilets, for example, can be nasty, routine and monotonous but, done well, it can make a small difference to the lives of customers.

None of this thinking is new, but the aspiration to make all work something that creates joy or pride may be. We too easily forget that money is not the essence of human motivation, and that meaningless work translates directly into poor quality and performance for both the individual and the institution.


3. War in the robotic age

Writers have been in love with thinking robots since at least the 1920s, but fiction is fast becoming fact. Researchers around the world are designing robots that can do diverse tasks, such as driving vehicles, picking up wounded soldiers and looking after kindergarten kids and the elderly.

While most robots are still fairly dumb, this could change, particularly in the area of warfare.

Peter Singer, a high-profile Washington-based military futurist, has been predicting robotic warriors for decades. His vision is coming to fruition with the development of autonomous machines – or robots that think, if you prefer. This, according to Singer, will change how we think about and wage war, especially ground warfare.

Swarms of low-cost, disposable drones that could overwhelm an enemy’s defences overturn some of our assumptions about war: that war is expensive and that soldiers don’t want to die. Robots, especially semi-autonomous robots, allow armies to do more with less, and it’s much the same story with unmanned fighter aircraft, self-driving warships and submarines.

This probably sounds a long way off, but remember that ‘robots’ and thinking machines are already among us. The life-saving airbags in your car are, in effect, an autonomous system, and so too is increasingly common self-parking technology.

Within the military, the US Air Force already deploys long-range missiles with an autonomous navigation system. Israel uses drones that can hover over potential targets for hours looking for missile launches, then engage automatically.

A paper published by the Center for a New American Security (CNAS) says unmanned and autonomous systems will play a central role in future conflict and the Pentagon is funding developments in this area.

But why is this happening now? One answer is the evolving nature of technology. Greater computing power, along with advances in the size and power of both sensors and cameras, has made robotic weapons a practical proposition. Another reason is that politicians are becoming warier of sending troops into battle in case some of those troops don’t come home.

Robots are the ideal solution from both a cost and political standpoint. A bigger worry, though, is what happens when weapons start to make life or death decisions without human oversight.

Jody Williams, who won the Nobel Peace Prize for her campaign against landmines (another autonomous weapon, in a sense), has launched the Campaign to Stop Killer Robots. Williams questions where society is heading “if some people think it’s OK to cede the power of life and death of humans over to a machine.”

Moreover, how could machines tell the difference between civilians and military personnel and who should be held responsible in the event of an atrocity?

These are good questions, but it's already hard sometimes to distinguish between civilians and soldiers and can be equally hard to work out who is culpable when things go wrong. This angst links to a broader societal angst about artificial intelligence, robots and autonomous systems in general.

The common worry is that thinking machines have the potential to steal our jobs and perhaps even our minds. But most robots are currently fairly stupid and, in the case of war, still largely dependent upon human action. Hopefully, humans are still smart enough to figure out the best future is one where humans and machines work together, with each focused on what they do best.


4. Addressing an unseen problem

This story just goes to show how wrong one can be and the power of really small ideas. What3words is a UK-based digital start-up that has divided the Earth’s surface into 57 trillion 3x3 metre squares and given each one a unique three-word address built from everyday words such as ‘stylish’, ‘water’ and ‘overheated’.
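To make the idea concrete, here is a minimal sketch of how such a scheme can work. To be clear, what3words’ actual algorithm and word list are proprietary; the word list, grid arithmetic and cell numbering below (`WORDS`, `CELL_M`, `COLS`, `cell_index`, `three_words`) are all invented for illustration. The idea is to snap a latitude/longitude to a roughly 3-metre grid cell, number the cell, then re-encode that number as digits in a ‘word’ alphabet:

```python
import math

# Toy word list for illustration; a real scheme needs tens of thousands of words.
WORDS = ["stylish", "water", "overheated", "table", "index",
         "home", "raft", "daring", "pretzel", "atlas"]

CELL_M = 3.0               # grid cell size in metres
M_PER_DEG_LAT = 111_320.0  # rough metres per degree of latitude
COLS = 2**27               # assumed number of columns used to number cells

def cell_index(lat, lon):
    """Snap a lat/long to the (row, col) of its ~3 metre grid cell."""
    row = int((lat + 90.0) * M_PER_DEG_LAT / CELL_M)
    # A degree of longitude shrinks towards the poles, so scale by cos(lat).
    m_per_deg_lon = M_PER_DEG_LAT * max(math.cos(math.radians(lat)), 1e-9)
    col = int((lon + 180.0) * m_per_deg_lon / CELL_M)
    return row, col

def three_words(lat, lon):
    """Encode the grid cell as a dot-separated three-word address."""
    row, col = cell_index(lat, lon)
    n = row * COLS + col                    # single cell number
    base = len(WORDS)
    first, n = WORDS[n % base], n // base   # peel off three base-`base` digits
    second, n = WORDS[n % base], n // base
    third = WORDS[n % base]
    return f"{first}.{second}.{third}"
```

With a toy list this short the addresses collide constantly; uniquely naming 57 trillion squares needs a list of roughly 40,000 words, since 40,000³ ≈ 64 trillion three-word combinations.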

At first we thought this was one of the silliest ideas we’d ever come across. The explanation for how the idea came about also struck us as pretty daft. One of the founders – Chris Sheldrick – used to work in the music events business and once had a problem with a delivery driver who went to the wrong place due to over-reliance on GPS.

Our initial reaction was to teach people to read proper maps. But then it hit us. GPS has its problems, but so do maps, regardless of whether they are on screens or on paper. Developing countries and remote rural areas can also be poorly catered for, and this can have negative economic impacts and even life-threatening consequences.

Being able to very accurately locate someone – or something – using a simple combination of three words does solve some problems beyond having a pizza delivery arrive on time.

For example, the Red Cross has started to trial this technology to mark contaminated water locations during a cholera outbreak. Marking a spot with three unique words can also help deliver mail in Rio de Janeiro – a city of some 11.5 million people where many properties in the favelas have no formal address – or reach the hundreds, if not thousands, of patients needing urgent medical deliveries in South Africa’s townships.

First responders and emergency services are obvious potential beneficiaries of this idea, but so too are companies such as UPS, Royal Mail or even Amazon, which need to deliver packages to people in remote rural locations or in dense urban areas containing multiple homes and businesses.

Future plans include indoor mapping and possibly adding height as a parameter. This could be especially useful if drone deliveries go mainstream and people want pizza delivered through an open window. Pizza is hardly the best use of the technology, though; its other applications are world-changing.


5. Anti-cash rhetoric

The question of whether physical money will disappear keeps ebbing and flowing, depending on the economy and our need for physical security to handle unforeseen emergencies. It does seem fairly inevitable that coins will disappear over the coming decades, but the abolition of paper currency could take far longer.

People who are old enough to have witnessed economic collapse and financial panic tend to like cash as a hedge against uncertainty. If interest rates are near zero, then the appeal of keeping money in a bank or savings account starts to evaporate. This is perhaps why recent demand for 20 and 50 pound notes in the UK has risen.

On the other hand, businesses, and especially governments, loathe cash. It’s liable to be stolen, costs money to handle, and its existence limits governments’ ability to control the money supply – and therefore their citizens.

By some estimates, around 75 per cent of cash is used for untraceable, often illegal, transactions, so tax authorities would love to get rid of the loathsome encumbrance too. If cash disappears, governments can watch individuals more closely, and there’s no limit to what they can do with negative interest rates.

In a recent speech, the chief economist of the Bank of England, Andy Haldane, essentially proposed the abolition of paper currency as a solution to the problem of monetary control.

At the moment, money is cheap. We’ve seen capital controls discussed on the fringes of Europe, and negative interest rates, under which savers have to pay institutions to hold their money. At the same time, the European Central Bank is doing what is effectively the opposite: paying people to borrow from it.
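To see what a negative rate means in practice, here is a toy calculation – the figures are invented for illustration, not actual central-bank or deposit rates – comparing a 10,000 deposit compounded annually for ten years at minus 0.5 per cent and at plus 2 per cent:

```python
def balance(principal, annual_rate, years):
    """Compound a deposit once a year at the given rate."""
    return principal * (1 + annual_rate) ** years

# At -0.5% a year the saver effectively pays the bank: the deposit shrinks.
shrinking = balance(10_000, -0.005, 10)
# At +2% a year the same deposit grows instead.
growing = balance(10_000, 0.02, 10)
```

At minus 0.5 per cent the saver ends up with roughly 9,511 of the original 10,000; at 2 per cent the same deposit grows to roughly 12,190.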

It’s all a bit of a mess and adds to the potential volatility and uncertainty across much of the world.

Once most money becomes digital, we all become entirely dependent upon the whim of government and at the mercy of bank failures, cyber attacks and systemic failures within the financial system. The end of cash is also the end of anonymity.

So here at What’s Next, our advice is to cash out now and stick a little something under the bed in case things really start to wobble.
