Work, business & professional services


Predictive workforce analysis


After WW2, when America last faced a war for talent, HR (then called Personnel) enthusiastically embraced a series of human behaviour, aptitude, intelligence and medical tests to ensure the suitability of potential recruits. But what was so widespread in corporate America in the 1950s and 60s had all but disappeared by the 1990s. This was because frequent job-hopping meant that it became less important, and less economical, to test people when most of them would soon leave.

A recent study by the Corporate Executive Board found that almost 25% of new hires are gone within 12 months of accepting a job. A heightened focus on short-term financial results also led to the abandonment or restriction of training and assessment schemes that would only bear fruit over the longer term. Between the 1990s and about 2010, most hiring was informal, but now science is back.

Thinking scientifically about how companies hire (and fire) people, variously known as workforce analytics, workforce science, analytic assessment and people analytics, is back in favour thanks in part to Big Data. Its supporters believe it can tell HR departments not only whom to hire and fire, but what a person’s future potential and monetary value might be. Ironically, data about whether these techniques really work are almost impossible to come by, and some of the tools are more than a little creepy.

For example, if your boss encourages you to play Dungeon Scrawl or Wasabi Waiter but doesn’t say why, you should be worried. These are games developed by a team of psychologists, neuroscientists and data geeks to work out human potential, and they do so by collecting abundant data about things you are barely conscious of doing.

Some of this is acceptable, especially if you are trying to decide which of two similar people to hire. But imagine if human intuition were removed from the equation entirely. Or what if people were no longer interviewed face to face but questioned solely by a machine? Even the tests that currently exist could end up ruling out whole classes of people, simply because the data say these groups are too risky.

The most unsettling prospect is not the use of transparent tests and data to hire and fire, but the use of covert data to monitor people across their entire working lives. Bloomberg, for example, allegedly logs every keystroke of every one of its employees, along with their comings and goings from the office, throughout their careers.

Harrah’s Las Vegas casino allegedly tracks every smile and grimace of each of its card dealers and waiting staff. Other sinister tracking technologies include ‘badges’ that monitor where people go and whom they interact with, and record their conversations too. It would be bad enough if a human being interpreted these data, but in some cases the employer simply ends up with an automated report: the human who generated the data is reduced to a set of numbers.

Having a better idea of who is talented and where and when people do their best work isn't a bad thing at all. But people can and do change and people should also be given some level of privacy, even at work. It would be interesting to know whether corporate executives would be happy to submit to the same type of surveillance.

Ref: The Atlantic (US) December 2013, ‘They’re watching you at work’ by D. Peck. www.theatlantic.com
Book link: Big Data: A revolution that will transform how we live, work and think by Viktor Mayer-Schönberger
Search terms: Datafication, workforce analytics, people analytics, gamification, and prediction
Trend tags: Big Data

Making old mistakes with new data


Big Data is a big deal and a buzzword worth a lot of money. But we don’t need more data: we need more good questions.

In 2009, Google hit the headlines because it developed a quick, accurate and virtually free way of analysing vast amounts of search data to predict outbreaks of flu in near real-time. The same task previously took the US Centers for Disease Control and Prevention (CDC) around seven days and cost taxpayers a lot of money.

Big Data refers to large, sometimes staggering sets of data. The Large Hadron Collider’s computers, for instance, hold 15 petabytes of data, the equivalent of around 15,000 years’ worth of music files. Facebook, Amazon and Google hold even larger amounts. Big Data is characterised by being mostly unstructured (messy) and by being cheap to capture relative to its size.
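As a rough sanity check on that comparison, the arithmetic can be sketched in a few lines of Python (the 2 MB-per-minute figure for compressed audio is our illustrative assumption, not the article's):

    # Back-of-envelope check: how many years of music fit in 15 petabytes?
    # Assumes ~2 MB per minute of compressed audio (roughly 256 kbps MP3),
    # an illustrative figure, not one given in the article.
    PETABYTE = 10 ** 15                     # bytes
    BYTES_PER_MINUTE = 2 * 10 ** 6          # assumed compressed-audio rate
    minutes = 15 * PETABYTE / BYTES_PER_MINUTE
    years = minutes / (60 * 24 * 365)
    print(f"~{years:,.0f} years of continuous music")  # ~14,269: in the region of 15,000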

These data, in short, are the stuff that spews out, often without our knowledge, when we walk around with our phones switched on, buy books online, travel to work (see story above), drive a car, buy a train ticket or go to sleep. While Big Data could offer huge, useful advances, there are a few big traps. According to the cheerleaders of Big Data, many of the old rules no longer apply.

The first new rule is that all data can now be captured, making traditional sampling techniques and research obsolete. Second, causation no longer matters, because correlation tells you all you need to know. Third, there is no need to develop models or hypotheses because, as Wired magazine once put it: “with enough data, the numbers speak for themselves”.
Finally, Big Data analysis supposedly produces amazingly accurate predictions. David Spiegelhalter, Professor of the Public Understanding of Risk at Cambridge University, claims this is “complete bollocks” (don’t you love it when academics talk like that?).

So should Facebook, Amazon, Tesco and the like be worried about their business models? Not if they are open-minded and keep a handful of old rules in mind.

First, causality cannot be discarded. Caring more about correlation than causation is storing up big trouble, because if you have no idea what is lurking behind a correlation, you will have no idea when such a correlation might break down.
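A minimal illustration of the danger, sketched in Python (our own toy example, not from the article): two completely independent random walks will often show a strong correlation over any fixed window, a pattern with nothing behind it and no reason to persist.

    import numpy as np

    # Two independent random walks share no causal link, yet over a fixed
    # window they frequently correlate strongly: a spurious pattern that
    # can break down at any moment.
    rng = np.random.default_rng(7)
    walk_a = rng.normal(size=2000).cumsum()
    walk_b = rng.normal(size=2000).cumsum()
    r = np.corrcoef(walk_a, walk_b)[0, 1]
    print(f"correlation between two unrelated series: {r:+.2f}")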

Second, size isn’t everything and it is especially dangerous to think you have captured all the data. Traditional opinion polls still work when sampling error is recognised: the larger the sample, the smaller the error.

Yet expanding the data in the hope of a better, or even perfect, answer misses the point about sampling bias. Sampling error occurs when, by chance, a sample does not accurately reflect the population. Sampling bias occurs when a supposedly random sample isn’t random at all (or doesn’t cover the whole population), even though you might think it is.
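The difference is easy to demonstrate with a toy poll in Python (a hypothetical simulation of ours, not from the article): a genuinely random sample homes in on the right answer as it grows, while a biased sample converges, just as confidently, on the wrong one.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy electorate of one million; true support is exactly 50%.
    # A fifth of the population is easy to reach online, and that subgroup
    # leans towards the measure (65% support), so the rest support it at
    # 46.25% to keep the overall rate at 50%.
    N = 1_000_000
    online = rng.random(N) < 0.20
    supports = rng.random(N) < np.where(online, 0.65, 0.4625)
    online_pool = np.flatnonzero(online)

    for n in (100, 10_000, 100_000):
        random_poll = supports[rng.choice(N, size=n, replace=False)].mean()
        biased_poll = supports[rng.choice(online_pool, size=n, replace=False)].mean()
        print(f"n={n:>7}: random poll {random_poll:.3f}, online-only poll {biased_poll:.3f}")

    # Sampling error: the random poll drifts towards 0.500 as n grows.
    # Sampling bias: the online-only poll settles near 0.650; extra data
    # just makes the wrong answer more precise.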

For example, some people suggest Twitter gives you an instant picture of what people are thinking or doing, but clearly not everyone is on Twitter, and those who are may be of a particular type. According to Pew, in 2013 US Twitter users were disproportionately young, urban or suburban, and black.

Or consider Street Bump. This is a very clever way of using the accelerometer found in almost all smartphones to work out where the potholes are in roads. It works simply by people driving around with their phones switched on. But what Street Bump actually measures is potholes in places where people own smartphones and drive cars, which skews its map towards affluent areas.
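The detection step itself is simple enough to sketch (a hypothetical simplification of ours; the article does not describe Street Bump's actual algorithm): flag moments where vertical acceleration jolts well beyond ordinary road vibration, and record where they happened.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        lat: float
        lon: float
        accel_z: float  # vertical acceleration (m/s^2), gravity removed

    # Hypothetical threshold: jolts beyond ~4 m/s^2 are unlikely to be
    # ordinary road vibration, so treat them as candidate potholes.
    JOLT_THRESHOLD = 4.0

    def candidate_potholes(readings):
        return [(r.lat, r.lon) for r in readings if abs(r.accel_z) > JOLT_THRESHOLD]

    trip = [Reading(42.3601, -71.0589, 0.6),
            Reading(42.3605, -71.0592, 5.2),  # a jolt: probable pothole
            Reading(42.3610, -71.0597, 0.4)]
    print(candidate_potholes(trip))           # [(42.3605, -71.0592)]

Note that nothing in this logic corrects for who is driving around with a smartphone, which is exactly the sampling-bias problem above: the detector can be sound while the data feeding it are skewed.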

A father once complained to Target, the US retailer, because the company was sending his teenage daughter coupons for baby products. He was furious. But Target was right: she was pregnant and her father didn’t know. This story doesn’t mention the fact that many women who were not pregnant received the same coupons. We hear about the hits but not the misses. The algorithm isn’t infallible but, because it is neither transparent nor open to external review, genuine patterns are sometimes missed while spurious ones slip through.
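The hits-and-misses problem is a base-rate effect, easy to see with invented but plausible numbers (all figures below are our illustrative assumptions, not Target's):

    # Illustrative base-rate arithmetic: every figure here is invented.
    shoppers = 1_000_000
    pregnancy_rate = 0.02        # 2% of mailed shoppers actually pregnant
    sensitivity = 0.90           # share of pregnant shoppers correctly flagged
    false_positive_rate = 0.05   # share of non-pregnant shoppers wrongly flagged

    hits = shoppers * pregnancy_rate * sensitivity                        # 18,000
    false_alarms = shoppers * (1 - pregnancy_rate) * false_positive_rate  # 49,000
    precision = hits / (hits + false_alarms)
    print(f"Of those sent baby coupons, only {precision:.0%} are pregnant")  # 27%

Even a fairly accurate algorithm, applied to a population that is mostly not pregnant, sends most of its coupons to the wrong people; we just never hear those stories.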

Nobody is saying Big Data isn’t a big thing. But it can be invasive, and it raises numerous ethical questions about who owns, or should profit from, the ‘exhaust data’ or ‘digital shadows’ we produce without realising it. So although Big Data has arrived, big insights will not automatically follow, and there is a danger that we will ignore small data and repeat age-old mistakes on a much bigger scale.

Ref: Financial Times Magazine (UK) 29-30 March 2014, ‘Big data: are we making a big mistake?’ by T. Harford. www.ft.com/magazine
Book link: Big Data by Viktor Mayer-Schönberger.
See also: ‘Why Most Published Research Findings Are False’ by John Ioannidis (2005 paper).
Search terms: big data, ethics, sample size, sampling error, Target, Facebook, Twitter, random sample
Trend tags:

Alice through the Google Glass


Google Glass was launched in the US and the UK this year, but the product (smart eyewear) seems to solve a problem that doesn’t clearly exist. In the workplace, however, things may be different. Anyone sitting at a desk already has a screen in front of them, but according to Wearable Intelligence, a US software firm, 80% of workers are not deskbound, so Glass could come in handy.

There are still privacy issues at work (consumers certainly worry about them) but, unless you are dealing directly with customers or with sensitive information (as doctors and financial advisers do, for example), these matter less.

Surgeons, who have been enthusiastic adopters of iPads, can use Glass while performing operations, for example. Their hands may be full and covered in blood, but they can still look at imagery (X-rays and scans) to guide their instruments. Similarly, technicians with dirty hands may find a hands-free display invaluable, especially if they have to navigate checklists while doing maintenance or repairs. People in factories, engineering, agriculture and security, especially police officers looking for video, photographic or verbal evidence, might be a ready market too.

Perhaps we will see various Glass spin-off products with unique features – full eye protection, safety cords and longer battery life for lumberjacks stuck up trees all day, for example.

Ref: International New York Times (US) 8 April 2014, ‘Google Glass moves into the workplace’ by C. Cain Miller.
Search terms: Google Glass, smart eyewear, wearables, hands-free, workplace.
Trend tags: Wearables

More employment or less?


During the 1930s, JM Keynes, the great economist, worried about mechanisation destroying jobs. With hindsight, the US and many other nations ended up with the opposite problem: WW2, the postwar period and new technologies created such a huge demand for labour that there were shortages.

In Britain incomes tripled between 1570 and 1875, and tripled again between 1875 and 1975. Industrialisation did not eliminate labour; it merely shifted it from one region or profession to another. In 1500 roughly 75% of people in the UK worked in agriculture, compared with 2% now, yet we manage to produce more food.

Technologically-induced unemployment is back on the cards and many commentators argue the robots are coming. Yet robots have been around since the early 1970s. They did create unemployment, especially in low-skill and no-skill assembly and manufacturing, but to date we have managed (with some exceptions) to generate new jobs, many of which couldn’t have been dreamed of or described 10 years ago, let alone 25 or 50 years ago.

The pessimistic argument in 2014 is that this time it’s different. This time skilled workers are under threat, and that, dear reader, could mean you and me. A study by Carl Benedikt Frey and Michael Osborne at Oxford University, for example, found that 47% of current US jobs are at risk from automation.

If you are very good at your job, and your job is difficult to automate, you will probably earn more rather than less in the future. But if you have no skill, a low skill or an outmoded skill, it could be a very different story. In the US, real wages haven’t moved upwards in four decades, while in the UK and Germany, where employment is reaching new peaks, wages have been similarly flat.

Much of the employment in depressed regions and industries is part-time or on zero-hours contracts, or is what David Graeber, an anthropologist at the London School of Economics, refers to as “BS jobs” (another straight-talking academic). Other employment is classified as “insecure” and offers very little stability. Furthermore, most unemployment statistics quoted by politicians are plain wrong.

So are the pessimists right? History, economics and the history of technology suggest a mixed and somewhat messy picture. There will be a race between education and technology, with people trying to acquire new skills before their old ones expire. The safest jobs will be those that smart machines find hard or impossible to replace. Those requiring a high degree of creativity or human empathy (counsellors, for example) would be at the top of the list.

Teachers, nurses and managers who motivate and inspire would be on the list too, along with anyone with skills complementary to machine intelligence. (As for the safest job of all, how about a career in the clergy?)

As smart machines become cheaper, they may challenge what it means to be human and force us to place more value on human interactions. Jobs that are not human-facing and involve routine information processing or repetitive physical tasks (especially manufacturing that can be outsourced) are already at risk. This may include brainwork in the information economy, because logical thinking is what machines are very good at: searching legal material for precedents, for example.

Nevertheless, just because a job can easily be automated or outsourced doesn't necessarily mean it will be. Governments may decide to protect certain white collar or service sector jobs, because large groups of people who are articulate, connected and idle could cause trouble.

As Thomas Piketty, an economist, argues: “The rise of the middle-class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics.”

So what’s next? The most likely scenario, in our view, is a highly disruptive period of economic growth, with most income going to a detached elite. Most growth won’t show up in productivity or GDP figures, precisely because it won’t be based on human labour. There will be fewer people in the workforce, partly because of ageing, partly because of robots, and partly because companies looking for efficiencies tend to shed labour.

Ref: The Economist (UK) 18 January 2014, ‘The future of jobs: the onrushing wave’ (print edition). www.economist.com
Also New Scientist (UK) 26 April 2014, ‘Automatic for the people’ by N. Firth. www.newscientist.com
Search terms: labour, earnings, elite, robots, automation, skills, creativity.
Trend tags:

Three scenarios for the future of work


According to Andrew McAfee, an IT commentator, the current wave of automation, together with a forthcoming wave of robotics, implies three possible scenarios.

The first is that automation will hit employment figures hard but, as with the industrial revolution, will ultimately result in more jobs, not fewer. In the short term, established jobs and professions will suffer, but over the longer term they will be replaced rather than eliminated.

The second scenario is that the technological tide will keep rising: each successive wave will require people to be retrained and those who aren’t could go under. This is a much more volatile scenario where planning ahead will be difficult.

The third scenario is one where the economy simply doesn’t need much human labour: a ‘labour-lite’ scenario. What will humans do instead of work, and what, in place of work, will provide income, identity, community, purpose and, most of all, dignity?

Of course there’s a fourth scenario too. This is where the owners of the technology (and the capital) live in feudal luxury while everyone else toils away in lowly paid positions, servicing both the androids and the elite.

Ref: New Scientist (UK) 26 April 2014, ‘Automatic for the people’ by A. McAfee and N. Firth.
Search terms: technology, jobs, unemployment, retraining, labour-lite, automation.
Trend tags: