iConference 2018 Automation Workshop Summary

This article was originally published on The Health Automation Project website.


On March 25, 2018, Dr. Matt Willis and Professor Eric T. Meyer organized a workshop on Automation and the next wave of computerisation: Sociotechnical approaches to automation, robots, machine learning and artificial intelligence. We had sixteen attendees, ranging from PhD students to tenured faculty, heads of department, computer scientists, and industry professionals. The goal of this workshop was to create a space for socio-technical inquiry into currently trending technologies: automation, machine learning, robotics, intelligent machines, and other technologies that we anticipate will bring profound social changes. In particular, we are interested in how research methods can be designed and applied to these technologies to better understand their social effects or potential changes, and in clarifying the challenges and opportunities that social science researchers face in studying these social and technical phenomena.

Most workshop attendees wrote a short position paper. These position papers were to elucidate both the participants' interest in, as well as experience or current research on, the workshop theme: computerisation and automation. We were particularly interested in how these technologies change the nature of work and social life, and in the potential challenges these technologies will create. Position papers were written several weeks before the workshop and then shared with all workshop participants to establish different perspectives in the workshop and make visible everyone's interests and experience.

We designed the workshop in two parts. First, workshop participants presented three case studies about current or ongoing research or development related to the workshop theme. These case studies were selected from the position papers submitted earlier, along with work from our own project concerning the future of healthcare and automation in NHS primary care. Each case study served as a short conversation starter that contributed to the theme of the workshop and built momentum toward the second half. Time for questions and discussion was allotted after each case study presentation. The second part of the workshop focused on small group discussions, each addressing a different high-level question previously identified and included in the workshop abstract. Each question was posed in a manner that elicited a pro-and-con style response. After individual group discussion, the groups met and shared their discussion points with the rest of the workshop attendees.

The first case study was presented by one of the organizers (and authors of this post) Dr. Matt Willis, of the Oxford Internet Institute at the University of Oxford. The case study focused on an overview of a project that investigates the potential opportunities and challenges to the implementation of automation technologies in NHS primary care. Typically seen as a threat to workers in many sectors of the economy, automation technologies are perceived as an opportunity in healthcare. Specifically, in NHS primary care and general practice services it is hoped that automation can address pressures of reduced funding, staff shortages, skill shortages, increased paperwork processing, and increased patient demand for appointments.

To address the question of what can and cannot be automated in general practice, Matt Willis gathered data from fieldwork in five primary care clinics, with more fieldwork planned in the future. During the fieldwork he observes and interviews every type of occupation in primary care and documents their work at the task level. From this data the project team plans to estimate each task's probability of being automated by existing technologies. The project team consists of Dr. Willis, Dr. Angela Coulter, and Professor Meyer on the ethnographic and general practice fieldwork team, with Dr. Paul Duckworth, Dr. Carl Frey, and Professor Mike Osborne on the machine learning and quantitative analysis team.
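To make the task-level approach concrete, here is a minimal Python sketch of how task-level automatability estimates might be aggregated into an occupation-level score. This is an illustration only, not the project's actual model; the task names, hours, and probabilities below are entirely hypothetical.

```python
# Illustrative sketch: combine task-level automatability estimates into
# an occupation-level score, weighting each task by the share of working
# time it occupies. All task names and probabilities are hypothetical.

def occupation_automatability(tasks):
    """tasks: list of (task_name, hours_per_week, p_automatable)."""
    total_hours = sum(hours for _, hours, _ in tasks)
    return sum(hours * p for _, hours, p in tasks) / total_hours

receptionist_tasks = [
    ("book routine appointments", 15, 0.9),
    ("process repeat prescriptions", 10, 0.8),
    ("triage urgent phone requests", 8, 0.3),
    ("reassure anxious patients in person", 7, 0.1),
]

score = occupation_automatability(receptionist_tasks)
print(f"Estimated share of automatable work: {score:.2f}")
```

Note how the weighting matters: an occupation can contain highly automatable tasks yet retain a substantial non-automatable core, which is exactly the reconfiguration question the fieldwork raises.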

The discussion during this first case study focused on what can and cannot be automated; specifically, how automated systems do not or cannot capture the uniquely human quality of certain parts of work that may at first be perceived as routine and therefore appropriate to automate. For example, secretaries with considerable experience in understanding how the NHS is organized, and in booking appointments in other parts of the healthcare system, knew whom to call or how to make an appointment happen. They would apply social relationships and institutional logics that are only learned through experience working in different parts of very large and complex organisations. This was the difference between a machine processing someone's request and the patient waiting weeks, or the secretary applying this expertise and the appointment happening within days. This was just one example of the skilled craftwork of humans that automated systems cannot replicate. It also raised the question of how occupations in healthcare should be designed in the future, given that significant portions of certain kinds of healthcare work can be automated. Is more time made for different occupations to gain, apply, and cultivate this kind of skilled craftwork and expertise? Or does this kind of knowledge disappear in favour of efficient, mass information processing?

The second case study was presented by Dr. Dobrica Savić, Head of the Nuclear Information Section at the International Atomic Energy Agency (IAEA) in Vienna, Austria. In contrast to the first case study, which explored the relationship between humans and machines in a specific area of work, the second case study presented an example of how automation can be applied to address a specific problem. Dobrica discussed the International Nuclear Information System (INIS) and how it hosts one of the world's largest collections of published information on the peaceful uses of nuclear science and technology. INIS is a literature repository with a thesaurus, classification system, index, and other metadata comprising thousands of terms and descriptions in multiple languages. Although parts of this system already automate information processing, the system also relies on the labour of scientists and other specialists to index documents and assign the most appropriate and meaningful descriptions, in different languages, to the millions of documents in INIS. This process takes time, as deciding on terminology and standardising the index terms often requires conversations to settle on terms. Applying automation to the problem of slow human indexing presents an interesting case study because it targets the very human skill often cited as the differentiating factor in the current wave of automation: knowledge work. While humans have been automating work for as long as we have been using tools, what separates this current trend from previous waves of automation is that, for the first time, educated, skilled, and knowledge-based specialisations are at risk of automation. The benefits of automating indexing are clear: consistency of index terms, predictability, efficiency gains in other parts of the system processing the information, savings on human labour costs, and increased search accuracy.
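As an illustration of the general technique behind automated index-term suggestion (not the INIS pipeline itself), the following sketch ranks candidate terms for a document by TF-IDF against a small collection, using only the Python standard library. The sample abstracts are invented.

```python
# Minimal TF-IDF index-term suggestion sketch (illustrative only).
import math
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase, strip punctuation, drop short words.
    return [w.strip(".,;:").lower() for w in text.split() if len(w) > 3]

def suggest_terms(documents, doc_index, top_n=3):
    """Rank terms in one document by TF-IDF against the collection."""
    tokenized = [tokenize(d) for d in documents]
    n_docs = len(tokenized)
    df = Counter()                      # document frequency of each term
    for doc in tokenized:
        df.update(set(doc))
    tf = Counter(tokenized[doc_index])  # term frequency in this document
    scores = {
        term: (count / len(tokenized[doc_index])) * math.log(n_docs / df[term])
        for term, count in tf.items()
    }
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_n]]

abstracts = [
    "Thermal analysis of reactor coolant flow in pressurised water reactors.",
    "Radiation shielding materials for medical isotope production facilities.",
    "Coolant pump reliability and reactor safety margins under load.",
]
print(suggest_terms(abstracts, 0))
```

A production indexing system would of course add a controlled vocabulary, multilingual thesaurus matching, and supervised classification; the point here is only that distinctive terms can be surfaced statistically, leaving specialists the confirmatory role described below.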

Again, in this case study we also see how automation can change the role of specialists. The first case study showed how vulnerable jobs such as receptionists and secretaries can have their roles reconfigured if certain tasks within those occupations are automated. In the second case study we see how specialist knowledge is outpaced by the application of machine learning to knowledge management and indexing. While receptionists may take on new tasks or shift toward more face-to-face, patient-oriented work, nuclear specialists working on indexing and terminology in the INIS system may shift to a confirmatory role rather than an expert role. Instead of experts directly supplying technical descriptions and definitions to the system, this task is automated and the specialists move to confirming, or looking for errors in, the machine's work.

The third case study was presented by Jun Zhang, a PhD student at the University of Sheffield. Jun's work concerns the analysis of deploying an integrated smart transportation application within the context of Chinese cities. This is an interesting case because the lack of data sharing from customers to commercial transportation companies limits the types, quality, and features of services that customers are offered. There are many information islands between companies and the Chinese Government, and none of this information is shared to improve customer experiences. This presents a challenge given the many proposals and concepts of smart cities and the smart transportation infrastructure that is typically a feature of them. These are clearly challenges of system interoperability, but this case is also an exemplar of many other systems in different contexts that pose a challenge to techno-utopian views because of issues like interoperability.

This case study also presents a view of automation that stands apart from the previous two case studies: in the future smart cities that we will interact with in day-to-day living, automation becomes a necessary technology to make other features of the infrastructure work. In this view, automation does not impact a job or shift an occupation's skills; it enables new forms of civil engineering and interaction between people and the city, and between citizens and the information the city generates. Typically, when we think about automation we think about impacts on the labour force. This case study, however, shows how automation will be a key technology in the design of future smart cities, often automating work that was never done by a human to begin with.

After these three case studies and associated discussions, the workshop was split into two groups to each attack one of the following questions:

Group 1. What are methodological challenges and innovations to studying this phenomenon?

Group 2. What are the social challenges and impacts of studying this phenomenon?

Each question was phrased in a pro/con framework, provoking groups to discuss first the challenges and problems raised by the question, and then the opportunities, impacts, or innovations.

Group 1 consisted of the following members: Matthew Willis (Oxford University), Jeremy Foote (Northwestern University), Ansgar Koene (University of Nottingham), Rahmi Rahmi (University of Tsukuba), Robert Jäschke (Humboldt University Berlin), Timothy John-Gollins (National Records of Scotland), Liliana Sepulveda Garcia (University of Sheffield), Jun Zhang (University of Sheffield).

This group approached the question from a social science perspective. One of the main challenges they identified in studying automation and computerisation, and indeed in all research on technology, is how thoroughly you need to understand the technology, and how you accurately represent it. As researchers interested in both the technology and the social components of the technology (in this case, how automation technologies can shape social structures), it is clear that the technology needs to be 'unpacked', or described in enough detail that readers clearly understand the functionality pertinent to the research. However, several examples arose during the discussion where the level of detail and rigour was beyond what was needed, or practical, for the associated research. Clearly, there is a balance to be struck between doing the work to understand and accurately convey the technology under scrutiny and keeping in mind the research questions and aims of the project.

The first challenge pairs with the second challenge that was discussed. Technological systems can become remarkably complex, especially those that rely on large data sets, proprietary corporate technologies, multiple development teams, and assemblages of other smaller technologies and components that form a new, coherent whole. For example, the act of searching for information on Google is an easy and straightforward process, but behind it the back end of Google Search comprises entire farms of servers, multiple development teams, the proprietary PageRank algorithm, and other digital components and corporate assets. What adds to the challenge for social scientists working in this area is that not only are technical systems becoming more complex, but social systems are also complex. At times this dual complexity can be a challenge for any clearly worded, straightforward research question. Some of the complexities that came out during the discussion include unique cultural contexts and the cultural competency of the researcher, e.g. when studying technology use in other cultures, and how different people can perceive or interact with technologies differently based on their experience, background, goals, and a myriad of other factors. Lastly, some concerns arose around applying the correct method to research in this area. Although this is a general question that applies to all scientific inquiry, it evokes additional questions about epistemology, contribution, approach, ability to collect data, and how the research questions are written.

The first group then turned the discussion toward innovations and opportunities for methods. The first conversation centred on the potential methodological innovations that may arise from studying currently trending automation technologies. While borrowing methods is nothing new to this area of research, innovation can nevertheless be found in adapting methods from other areas of inquiry, specifically outside the traditional fields that study technology. What can we learn from methods used in anthropology, sociology, or biology? One participant also noted that the methods you choose might be influenced by those around you. This observation pertains most notably to graduate students venturing out on their theses and dissertations, who will be influenced, or best supported, by their advisors and peers. This makes sense, as some advisors and mentors may be unfamiliar with, or disagree with, certain methods.

After discussing the possibilities of adapting methods from other fields, the second discussion on innovations turned to developing novel methods that pair with emerging computational techniques. The prospect of using machine learning to automatically code text qualitatively is not a new idea, and it has been tried previously with limited success. Still, this approach of 'automating' qualitative coding provides an interesting line of thought on the synergy between the work of qualitative researchers and the continued development of recent computational methods. The idea also mirrors a theme of the workshop: the connection between non-automatable human processes and automatable routine work. Rather than using the machine to automate coding, how can it be used to inform coding, or to act as an intercoder reliability heuristic? Through this discussion we imagine machine learning techniques serving not as a replacement for qualitative analysis but as a complement, or an extra cycle, in the qualitative coding process. There is no single approach to this idea. Qualitative analysts could complete a preliminary pass of their data and use those labels as training data for a machine learning algorithm, an area of prior work as previously cited. Different machine learning approaches could be used to help pre-process and categorise data at the start of the qualitative analysis process. Or, most interestingly, machine learning algorithms could be informed by different social theories given the research questions, helping apply theoretical frameworks to data and interpretation. Whatever the approach, new computational tools offer the potential to develop unique complementary skills for qualitative researchers.
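One concrete version of the intercoder-reliability idea is to treat the machine as a second coder and compute a chance-corrected agreement statistic between its labels and the human's. The sketch below computes Cohen's kappa in plain Python; the code labels and sequences are invented for illustration.

```python
# Treat machine-suggested labels as a "second coder" and measure
# chance-corrected agreement with the human coder via Cohen's kappa.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Agreement expected by chance, from each coder's label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

human =   ["care", "admin", "admin", "care", "triage", "admin"]
machine = ["care", "admin", "care",  "care", "triage", "admin"]
print(f"kappa = {cohens_kappa(human, machine):.2f}")
```

A low kappa on a batch of segments could flag codes whose definitions need revisiting, which is closer to the "extra cycle" role discussed above than to replacing the analyst.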

Now we turn to the next group in the workshop. Group 2 consisted of the following members: Kate Marek (Dominican University), Philip J. Reed (University of Washington), Eric Meyer (University of Oxford), Virginia Pow (University of Alberta), Andrew Cox (University of Sheffield), Abdulaziz Almanea (University of Sheffield), Dobrica Savic (International Atomic Energy Agency), and Jiaxin An (Peking University).

This group tackled the question: What are the social challenges and impacts of studying this phenomenon? They identified four areas of impact and challenge of automation on society: in academia, on jobs, on society at large, and on the individual. The conversation was directed in part by news of the recent Cambridge Analytica scandal, and discussed the disruptive role of technology in ways that are not always desired, such as putting democracy at risk, devaluing human labour, or exacerbating existing social challenges. While technology has created problems, the group suggested technology can also fix them. To this end, they noted that some stories about disruption can be success stories, such as the Internet becoming a public utility or a human right.

Another important stream of this group's discussion was the role of the academic: how can we be more active in this space beyond reading papers, understanding the technology, and writing for each other? Being more public-facing in our writing is one strategy, taking the social and technological implications of emerging technologies and writing for a mainstream audience. Some scholars clearly do this already, but the smallness of those communities, and the different ways people consume media, may require more academics to write for a general public.

When we look at the area of jobs, there is a clear need to move beyond the utopian / dystopian dichotomy in which automation has either come to save us all from the drudgery of things like driving cars (a perspective that harkens back to the 1950s and the way that domestic technologies would free us all), or conversely to impoverish us all as robots take all our jobs and leave us destitute and goal-less (unless of course you are one of the lucky few who own the robots). As with previous technological disruptions (printing, steam, industrialization, information age, etc.), it seems fairly inevitable that there will be disruptions in the status quo, but that new opportunities for (human) work and innovation will emerge as work and organizations are reconfigured. This is one area that academic engagement in the public discourse about automation can add nuance and evidence to the sometimes overly black-and-white views espoused in public outlets.

At the level of society and the roles of individuals in society, we seem to be at an inflection point, where the technologies of automation have either made a step-change or appear to be about to do so. It is at these inflection points that decisions will be made, and the results will have lasting consequences once we have established a path. As academics, we have evidence to bring to the conversation about how automation has, can, should, and will shape our socio-technical worlds. However, we might need to step outside our comfort zones to engage in these more public debates. It seems, however, important that we challenge ourselves to do so.

The workshop organisers, Matt and Eric, clearly see this as the beginning of a conversation; indeed, an active conversation that is taking place across society. We echo the discussions about the role of academics in articulating and discussing these technologies that will be part of the social conversation for years to come. It is not enough to make technical progress in developing automated and machine learning technologies; we must also make progress in new ways of social organising, in policy, and in rethinking work in light of these advancements.
