Science, technology & design
Crowdsourced delivery systems
This is a lovely idea. Eric Horvitz, a researcher at Microsoft Research in Seattle, has dreamed up a plan to use crowdsourcing to courier packages in the US. The idea – provisionally called TwedEx – uses an algorithm linked to aggregated location data from tweeters in New York. Crowdsourced delivery systems that hire strangers found on the internet already exist, but this idea is different because it taps into people’s existing journeys and routines.
The sender of a package simply needs to identify who they are, using their Twitter handle for example (or agree to have their smartphone tracked), and the crowd does the rest. Each ‘citizen courier’ in the mail chain is paid a small sum and, even if the recipient is moving around, the package can still find them, as long as they and everyone in the forward mail chain continue to broadcast their whereabouts.
If people in the chain can be persuaded to linger at agreed transfer points, the idea works across a bigger geographical area. Experiments have shown it is possible to get a package from New York to San Francisco in around five hours. However, citizen distribution may prove especially useful in remote developing regions, for example for delivering vaccines.
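By way of illustration, the sketch below shows one plausible hand-off rule for such a system: at each transfer point, pass the package to the nearby traveller whose existing journey ends closest to the recipient’s most recently broadcast position. This is an assumption on my part, not Horvitz’s actual algorithm, and all the names and data in it are hypothetical.

```python
# Illustrative sketch only: a greedy hand-off rule for a 'citizen courier' chain.
# All names and data structures here are hypothetical; this is not Horvitz's algorithm.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Traveller:
    handle: str          # e.g. a Twitter handle used to identify the courier
    position: tuple      # (lat, lon) from their latest location broadcast
    heading_to: tuple    # (lat, lon) where their existing journey ends

def km_between(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def next_courier(package_pos, recipient_pos, travellers, pickup_radius_km=2.0):
    """Pick the nearby traveller whose existing journey ends closest to the
    recipient's most recently broadcast position (greedy, one hop at a time)."""
    nearby = [t for t in travellers if km_between(t.position, package_pos) <= pickup_radius_km]
    if not nearby:
        return None
    return min(nearby, key=lambda t: km_between(t.heading_to, recipient_pos))

# Re-run next_courier() at each hand-off point as fresh location broadcasts
# arrive, so the chain can follow a recipient who is moving around.
travellers = [
    Traveller("@alice", (40.75, -73.99), (40.71, -74.01)),
    Traveller("@bob", (40.74, -73.98), (40.78, -73.96)),
]
print(next_courier((40.75, -73.99), (40.71, -74.00), travellers).handle)
```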
Ref: New Scientist (UK) 18 May 2013, ‘You’ve got chainmail’, by H. Hodson.
Source integrity: *****
Search words: Internet, crowdsourcing, digitalisation, delivery, aggregation
Trend tags: Digitalisation
Self-monitoring and medical diagnosis
Star Trek has given us many wonderful visions of the future, including teleportation, handheld wireless communication devices and (from 1966) the medical tricorder. This last was a handheld gadget used by Dr McCoy to sense, analyse and diagnose illness or injury. Now a number of organisations are competing for a $US10 million prize to build such a device.
The Tricorder X Prize is backed by Qualcomm, a wireless communications company, and its aim is to help ordinary people diagnose 15 common conditions, from diabetes to pneumonia, using a mobile device. It sounds like a great idea but, as with many new technologies, the tech is running ahead of the regulations. Nevertheless, it seems inevitable that mobile health diagnosis and home-based or self-monitoring are coming, whether the medical profession and the FDA like it or not.
Part of the problem is demographics. As doctors retire in America, for instance, the US could be short of 90,000 doctors as early as 2020. Then there’s the cost. As populations age, in the US and elsewhere, chronic illnesses will become more prevalent and costs of treatment will rise. We will need a way of reducing the cost of care.
This is precisely where diagnostic devices could help. In developing countries too, the lack of doctors and hospitals in remote regions offers mobile healthcare a crucial role. A multi-purpose mobile diagnostic tool is a possibility but, in the shorter term, there is a simpler solution: the smartphone.
In 2014, the number of mobile phones will probably exceed the number of people. Before too long, almost every person on earth could own a smartphone, which could then be turned into a medical device. Cellscope is a device that turns a smartphone into a microscope, used for retinal scans or to detect pathogens by analysing images of slides holding samples of blood or sputum. The US company behind Cellscope is already looking at ‘digital first aid kits’. These can look into a child’s ear to detect infections, while another gizmo, Scout, can be used to measure vital signs such as heart rate, pulse and temperature.
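To give a flavour of what home-based self-monitoring might look like in practice, here is a minimal sketch of a rule-based check that a hypothetical ‘digital first aid kit’ app could run over self-recorded vital signs. The thresholds are rough textbook ranges for resting adults, purely for illustration; nothing here reflects how Cellscope or Scout actually work, and it is certainly not medical advice.

```python
# Illustrative sketch only: a rule-based check of the kind a hypothetical
# 'digital first aid kit' app might run on self-monitored vital signs.
# Thresholds are rough textbook ranges for resting adults, not medical advice.

ADULT_RANGES = {
    "heart_rate_bpm": (60, 100),
    "temperature_c": (36.1, 37.5),
    "resp_rate_per_min": (12, 20),
}

def flag_readings(readings):
    """Return human-readable flags for readings outside typical adult ranges."""
    flags = []
    for name, value in readings.items():
        low, high = ADULT_RANGES.get(name, (None, None))
        if low is None:
            continue  # unknown measurement: ignore rather than guess
        if value < low:
            flags.append(f"{name} low ({value} < {low})")
        elif value > high:
            flags.append(f"{name} high ({value} > {high})")
    return flags

print(flag_readings({"heart_rate_bpm": 118, "temperature_c": 38.4}))
# -> ['heart_rate_bpm high (118 > 100)', 'temperature_c high (38.4 > 37.5)']
```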
It seems too obvious to be true. So far, the only piece of medical equipment in almost every home is a thermometer, and it can’t be too hard to turn phones into ‘digital first aid kits’ of similar ubiquity. Unfortunately, even if regulatory agencies such as the FDA are brought onside, the medical profession can be territorial, even paternalistic. The last thing this profession needs, according to some, is empowered and informed patients who turn medical professionals into data entry clerks. On the other hand, if patients can be trusted to monitor and diagnose themselves up to a point, this could free up doctors and nurses to concentrate on the higher-value end of their profession. Time and technology will tell.
Ref: The Economist (UK) Technology Quarterly 1 December 2012, ‘The dream of the medical tricorder’ (Anon). Links: The Creative Destruction of Medicine by Eric Topol.
Source integrity: *****
Search words: Mobile health, self-diagnosis, medicine 2.0, digital health
Trend tags: Digitalisation
3D printing: Disruptive tech or just a load of plastic junk?
According to fans of the technology, 3D printers are about to change life as we know it. 3D printing (3DP, also known as additive manufacturing or, rather cynically, as ‘piracy machines’) will change how we shop and how we live. We will soon be printing physical possessions just as we now download music or photographs. This is all theoretically possible, but it’s more likely that 3DP, like Facebook, will fall from favour over the next few years.
Beyond the hype, the first problem is speed. 3D printing is slow, although for low-volume and one-off manufacturing this is less of a problem. Second is gravity: many complex models require support material so they do not collapse during printing. Third, material costs can be high. Bespoke powders for 3D printers can cost $US80 a kilo, although the machines themselves (like razors and 2D office printers) are typically very cheap. It is also difficult to print an object in more than one material, although this may change in the future.
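To put the material cost issue in perspective, a quick back-of-envelope calculation helps. The $US80-a-kilo powder figure comes from the text above; the part mass and the waste allowance below are illustrative assumptions of my own.

```python
# Back-of-envelope material cost for a single printed part.
# The $US80/kg powder price is from the text; the part mass and the
# support/waste allowance are illustrative assumptions.
powder_price_per_kg = 80.0   # $US per kg of bespoke printing powder
part_mass_kg = 0.25          # hypothetical 250 g part
waste_factor = 1.3           # assume ~30% extra powder for supports and failed layers

material_cost = powder_price_per_kg * part_mass_kg * waste_factor
print(f"Material cost: ${material_cost:.2f}")   # -> Material cost: $26.00
```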
Microchips and batteries, for example, still have to be added by hand, which makes printable phones look unrealistic. However, we may soon be printing batteries along with most electrical components directly into an end product. It is possible to print transistors, which in theory means printing logic circuits, but printing billions of tiny transistors found in microprocessors and other chips will be a challenge.
Overall, the idea of a 3D printer in every home does seem unrealistic. Even so, we could simply use local print shops instead or access printing services online and have objects delivered by post in the usual manner. This could be especially handy for hard-to-find spare parts or one-off designs. Personalised medical implants, for example, could suit a 3DP model. GE, the world’s largest manufacturer, is already heavily investing in 3DP technology, as are Airbus, Boeing, Ford and Siemens. This is partly because 3DP adds flexibility and resilience to manufacturing models, especially in the face of ageing workforces and labour shortages.
This still leaves the question of exactly what we might want to print and how easy it might be to print it. A broken kettle handle, for example, is much harder to design than you might imagine, especially for people used to thinking in 2D. There is also the issue of copyright and patent infringement (someone, somewhere designed the original kettle handle). Maybe, instead of printing in 3D, we will copy in 3D using 3D scanners and send the data off to an online 3DP service.
3DP still raises some rather curly questions about what and when we buy things and how quickly we should get rid of them. Perhaps the biggest impact of 3DP will be less about what we end up making and more about how the technology changes our mindsets about the physical world. I would be interested to know whether it’s possible to unprint something – so it returns to its original raw materials (concrete, for instance). 3DP may also end up changing the world in areas we don’t expect. In dentistry, for example, the use of 3DP to make teeth or models of patients’ faces and jaws is already reasonably common.
Ref: The Economist (UK) 7 September 2013, ‘3D printing scales up’ (Anon); New Scientist (UK) 15 December 2012, ‘Absolutely fabricated’ by M. Campbell; Financial Times Magazine (UK) 26-27 January 2013, ‘3D printing shows its teeth’ by C. Cookson; The Economist (UK) Technology Quarterly 1 December 2012, ‘The PC all over again’.
Source integrity: Various
Search words: 3D printing, 3DP, additive manufacturing, fabbing
Trend tags: Digitalisation
Paper versus screens
Does the technology we use to read change the way we read? Since the 1980s, researchers have been looking at the differences between reading on paper and reading on screens. Before 1992, most studies concluded that people using screens read words more slowly and remembered less about what they had read. Since 1992, with a heady proliferation of screens, a more mixed picture has emerged.
The most recent research suggests people prefer to use paper when they need to concentrate, but even this may be changing. In the US, 20% of all books sold are now e-books and digital reading devices have developed significantly over the last 5-10 years. Nevertheless, it appears digital devices stop people from navigating effectively and may inhibit comprehension. Screens drain more of our mental resources and make it harder to remember what we’ve read. Screens are plainly useful, but more needs to be done to appreciate the advantages of paper and to limit the digital downsides of screens.
One problem is topography. Paper books contain two domains – a right-hand and a left-hand page – with which readers can orientate themselves. Paper books offer a sense of physical progression, which allows the reader to know where they are and to form a coherent mental picture of the whole text. Digital pages are more ephemeral. They literally vanish once they have been read and it is difficult to see a page or a passage in the context of the larger text. Some research (Anne Mangen, University of Stavanger) claims this is precisely why screens often impair comprehension.
It has been suggested that operating a digital device is more mentally taxing than reading a book because screens shine light directly into a reader’s face, causing eyestrain. A study by Erik Wastlund at Karlstad University found that people taking a comprehension test on-screen showed higher levels of stress and tiredness than people taking the same test on paper. It is rarely recognised, too, that people bring less mental effort to screens in the first place. A study by Ziming Liu at San Jose State University found people reading on screens use a lot of shortcuts and spend time browsing or scanning for things not directly linked to the text. They are more distracted.
Another piece of research (Kate Garland, University of Leicester) also emphasises that people reading on a screen rely much more on remembering the text than people reading on paper, who concentrate on understanding what the text means. This distinction between remembering and knowing is especially critical in education. Research by Julia Parrish-Morris and colleagues (now at the University of Pennsylvania) found that three- to five-year-old children reading stories from interactive books spent much of their time being distracted by buttons, easily losing track of the narrative and what it meant.
Clearly screens offer considerable advantages. Convenience or fast access to information is one. For older or visually impaired readers, the ability to change font size is another. But it is precisely the simplicity and uncomplicated nature of paper that makes it so special. Paper does not draw attention to itself. It does not contain hyperlinks or other forms of distraction and its tactile and sensory nature is not only pleasing but actually allows us to navigate and understand the text. (And it smells nice.)
Ref: Scientific American (US) November 2013, ‘Why the brain prefers paper’ by F. Jabr. www.scientificamerican.com Links: ‘Hamlet’s Blackberry: Why paper is eternal’ by William Powers (2006).
Source integrity: *****
Search words: Paper, screens, reading, education, learning
Trend tags: Digitalisation
Machine assessment (the computer says no)
You may already be aware that employers use software to electronically read and grade job applications; similar forms of machine intelligence have been used for years in education to grade multiple-choice exams. A relatively new development is using artificial intelligence to instantly grade essays and to provide feedback about how essays can be improved.
An organisation called EdX, set up by Harvard University and the Massachusetts Institute of Technology (MIT), is making software that does just this freely available on the internet. A dozen US universities have already started to use the software. As you might imagine, this is causing spirited debate within education circles.
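For readers wondering what ‘machine grading’ actually involves, the sketch below shows one common, much-simplified approach: extract shallow features (length, sentence length, vocabulary) from essays that humans have already graded, fit a simple statistical model, then score new essays with it. This is a toy illustration of the general technique, not a description of the EdX software, and the training essays and grades are entirely made up.

```python
# Illustrative sketch only: shallow-feature automated essay scoring.
# Not the EdX system; training data below is invented for demonstration.
import re
import numpy as np

def features(essay):
    """Extract a few shallow features from an essay."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    avg_sentence_len = n_words / max(len(sentences), 1)
    vocab_richness = len(set(words)) / max(n_words, 1)
    return [1.0, n_words, avg_sentence_len, vocab_richness]  # 1.0 = intercept term

# A few human-graded training essays (text, grade out of 10) -- entirely made up.
training = [
    ("Short and vague. It was fine.", 3),
    ("The essay develops a clear argument with evidence and a conclusion. "
     "Each paragraph supports the thesis with a distinct example.", 7),
    ("A thorough, well organised discussion citing several sources, weighing "
     "counterarguments carefully before reaching a reasoned judgement.", 9),
]

X = np.array([features(text) for text, _ in training])
y = np.array([grade for _, grade in training])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit of grade ~ features

def machine_grade(essay):
    """Score a new essay with the fitted weights."""
    return float(np.dot(features(essay), weights))

print(round(machine_grade("A clear argument, supported by evidence, leading to a conclusion."), 1))
```

The critics’ point is easy to see here: nothing in such features measures reasoning, evidence or truthfulness, only proxies for them.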
Les Perelman, a researcher at MIT, says no attempt has been made to statistically compare machine grading of essays with that of human graders. Mr Perelman, with others, has set up a group called Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment and has recruited the likes of Noam Chomsky to support its mission.
As the group’s website, humanreaders.org, states: “Computers cannot ‘read’. They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, veracity, among others.” Others disagree. Defenders argue that, although not perfect, machine grading has a place in education; moreover, in many educational settings teachers cannot give meaningful one-to-one tuition or criticism.
Most of the critics come from prestigious educational establishments and do not appreciate how the rest of the educational world works – or does not work. In an age of MOOCs – Massive Open Online Courses – it appears online assessment is here to stay, whether people like it or not.
Ref: International Herald Tribune (US) 6-7 April 2013, ‘Can essays be graded by artificial intelligence?’ by J. Markoff.
http://humanreaders.org
Source integrity: *****
Search words: Education, grading, marking, AI
Trend tags: Digitalisation