Do you act the way your computer (or rather, its designer) thinks you ought to?


Since everyone from office workers to police officers uses computers nowadays, their design has a major impact on our lives. Successful systems design requires understanding exactly how work is carried out, including the often unnoticed human expertise exercised every day in what we call unskilled jobs.

Ethnography uses systematic data capture and rigorous analysis to uncover what people actually do, rather than just what they say they do. Xerox, a pioneer in applying this research to technology innovation, uses ethnography to identify unmet needs, recommend practices that work and avoid design constraints that impede how work is done.

When carried out prior to design, ethnographic study reveals the real contingencies of the work and helps us design systems which can handle that complexity; post-design ethnography can make visible the often implicit, taken-for-granted models underlying systems design.

In her ground-breaking ethnographic study, Plans and Situated Actions, Suchman [i] demonstrated what happens when design bears little resemblance to how people act. In the study, even engineers had major problems using early photocopiers, because the system design embodied cognitive principles about how people plan and act which bore little relation to how users actually interacted with these machines.

In another study, Whalen and Vinkhuyzen [ii] provide an eloquent dismissal of the founding principles underlying the implementation of computerized “expert systems” in a call centre. These systems were intended to allow unskilled call agents to answer customer problems in place of trained experts. However, the so-called expert systems could not respond to the contingencies of the interaction the way human experts could, leaving the untrained agents little option but to escalate their calls. Most of us have had at least one customer service interaction which ended in frustration because the representative seemed unable or unwilling to deal with our request. Typically this is caused by poorly designed workflow systems rather than by genuinely unhelpful people.

Human skill versus computing power

Whilst automation and the pursuit of cost reduction are not inherently bad, the danger comes from an oversimplified model of low-skilled work: just because workers do not need advanced education or long apprenticeships does not mean a computer can do their work better. There is a fundamental difference between human skill and computing power: we’re good at interpretation, improvisation and interaction; computers are good at large-scale number crunching.

If ethnographers ruled the world of work, it would be a very different place. Automating manual tasks that computers can do as well as or better than humans makes sense. It’s important to recognize, however, that the main use for computers should be to enable people to capitalise on their expertise, whether that be highly refined professional knowledge or mundane, everyday sense-making. We can do this by designing systems which 1) enable people to use their skills and reasoning to deal with the problems which routinely crop up, rather than restricting them to rigid workflows, or 2) combine human and digital input to create output that neither could produce alone, e.g. crowdsourced image categorisation, where people identify the semantic parts of an object (e.g. the wing, beak, etc. of a bird) and the machine applies rules to identify exactly which object it is (e.g. this bird is a speckled hummingbird). Call centres provide a good illustration of what can go wrong, and right, with computer system design.
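
To make that division of labour concrete, here is a minimal sketch in Python. The part labels, rules and species names are hypothetical, invented purely for illustration rather than taken from any real crowdsourcing system: the idea is simply that humans supply the perceptual judgements and the machine does the rule matching.

    # Minimal, hypothetical sketch of human/machine image categorisation.
    # Crowd workers supply part-level observations (the human judgement);
    # the machine matches them against rules to name the species.

    # Labels a crowd worker might supply for one bird photo.
    human_labels = {"beak": "long and thin", "wing": "speckled", "size": "tiny"}

    # Rules the machine applies: each species is a set of required part labels.
    SPECIES_RULES = {
        "speckled hummingbird": {"beak": "long and thin", "wing": "speckled", "size": "tiny"},
        "house sparrow": {"beak": "short and stout", "wing": "brown", "size": "small"},
    }

    def identify(labels):
        """Return the first species whose rule matches every supplied label."""
        for species, rule in SPECIES_RULES.items():
            if all(labels.get(part) == value for part, value in rule.items()):
                return species
        return "unknown - escalate to a human expert"

    print(identify(human_labels))  # prints: speckled hummingbird

Neither side could produce this output alone: the machine cannot see that a wing is speckled, and the crowd worker need not know every species; it is the combination that yields the categorisation.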

Agency and expertise

Time and again, ethnographic studies have shown the importance of agency (the capacity of a human to act) and highlighted the myriad human skills which go into even mundane work. For example, we compared two parallel government processes, each consisting of a call centre and a processing centre. In one, the teams were co-located, and workers could circumvent the rigid workflow through face-to-face communication to ensure vulnerable citizens got the benefits they were due without delay. In the other, the teams were distributed, had no agency to step outside the workflow and could only communicate through official channels. There, the call centre could not answer many queries, resulting in caller frustration, and it was necessary to employ someone full-time to follow up on citizens’ complaints, most of which could have been avoided.

In another example, we worked with a customer service company that wondered why customers chose to call the call centre when there were online resources to help them solve their problems. By putting the same resources its call centre agents used online, the company hoped customers would choose to solve problems themselves, making it possible to remove the middleman.

By studying both the call centre agents and customers using the online resources, we identified all the extra work the agents did to solve customers’ problems: from persuading unwilling customers to troubleshoot in the first place, to translating the customer’s language into that of the online system.

When customers used the online system alone, we found that even when a search returned the right result, they often did not realize it was the right result. Call centre agents, on the other hand, were skilled at making the same search result relevant to the customer’s problem, using their semantic skills to guide the customer through the problem-solving process. The company realized that simply removing the agents was not an option.

These examples point to an important two-fold lesson. First, even low-skilled workers are often engaged in semantic work which is not apparent at first glance but which is almost impossible for systems to replicate. Second, tight control through rigid workflows is rarely optimal, as it tends to stop workers from using their skills effectively. Design should support and enhance human expertise, rather than attempting to automate and control it.

About the author: 

Jacki O’Neill is a principal scientist at Xerox Research Centre Europe. Her main area of interest is the design of useful, usable and innovative computer systems, through both the detailed understanding of work practices and a consideration of the interaction of the social and the technical in prototyping and development work.

[i] Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.
[ii] Whalen, J., & Vinkhuyzen, E. (2000). Expert systems in (inter)action: Diagnosing document machine problems over the telephone. In Workplace Studies: Recovering Work Practice and Informing Systems Design (pp. 92-140).