The increasing economic demands of the growing and ageing population on our already overburdened healthcare system make our current model of care unsustainable.1 Novel ways of delivering care are needed, and consequently, there is growing interest in using artificial intelligence (AI) to aid medical decision-making.2,3 However, the impact of AI on the future landscape of medicine remains unclear. We briefly explore how, in the coming decades, the traditional role of the doctor will be challenged by AI in a) autonomously performing diagnosis, and b) autonomously making treatment decisions.
Artificial intelligence in other industries
Driven by the economic benefits of tireless labour, machines have been replacing human workers since the industrial revolution.3 Historically, tasks such as manufacturing have been most susceptible to automation. However, due to recent advances in computing, such as machine learning,4 cognitive tasks, such as decision-making, are becoming increasingly susceptible to automation through AI.3 In fact, given the exponential nature of technological advances, almost half of current jobs in the US are considered at ‘high risk’ of technological unemployment over the next one to two decades.3 A range of industries are being affected by AI, from technologies such as self-driving cars,5 through to software that writes plain English news stories from structured data.6
Artificial intelligence in medicine: automated diagnosis and treatment decisions
Turning to the healthcare industry, to what extent will AI be able to carry out the cognitive tasks traditionally performed by doctors?
The British Medical Association states that diagnosis “largely differentiates doctors from other health professionals.”7 However, this ‘unique’ task of diagnosis ultimately amounts to a pattern-recognition algorithm. Information is gathered, synthesised, and compared with predefined categories we call diseases. If a patient’s pattern of symptoms, signs and test results matches that of a known disease, then we classify and treat them accordingly. Clearly, this process could be performed by an appropriate AI.
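The steps above, gathering findings and comparing them with predefined categories, can be caricatured in a few lines of code. This is a deliberately toy sketch, not a clinical tool: the disease ‘profiles’ and the overlap score are invented purely for illustration.

```python
# Toy illustration of diagnosis as pattern matching: compare a patient's
# findings against predefined disease profiles and return the best match.
# The profiles below are invented for illustration only.

DISEASE_PROFILES = {
    "influenza": {"fever", "cough", "myalgia", "fatigue"},
    "pneumonia": {"fever", "cough", "dyspnoea", "crackles"},
    "migraine": {"headache", "photophobia", "nausea"},
}

def match_diagnosis(findings):
    """Score each disease by its overlap with the findings (Jaccard index)."""
    def jaccard(profile):
        return len(findings & profile) / len(findings | profile)
    return max(DISEASE_PROFILES, key=lambda d: jaccard(DISEASE_PROFILES[d]))

patient = {"fever", "cough", "crackles"}
print(match_diagnosis(patient))  # pneumonia has the greatest overlap
```

A real diagnostic AI replaces the hand-written profiles and overlap score with statistical models learned from data, but the underlying logic, matching an observed pattern to a known category, is the same.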
Indeed, IBM has already created an AI, known as Watson, that is able to perceive, ‘understand’, and make decisions based on natural language. In addition to defeating the champions of Jeopardy! (an American television game show in which contestants are presented with general-knowledge clues in the form of answers and must phrase their responses in the form of questions), it is used at Memorial Sloan Kettering Cancer Center to aid diagnosis and produce management plans for oncology patients.3 In contrast to humans, who can learn only from personal experience, Watson synthesises information from millions of medical reports, patient records, clinical trials and medical journals. Furthermore, Watson does not eat, sleep, go on holiday, or get sick.
According to principal investigator David Ferrucci, Watson is already “out-diagnosing” medical residents in certain situations.8 Similarly, Isabel—a web-based clinical decision support system (CDSS)—suggested the correct diagnosis in 96% of 50 consecutive cases published in the New England Journal of Medicine.9 This is comparable with human doctors, who have been shown to make the correct diagnosis in 95% of outpatients.10
Notably, medical specialties that utilise images for diagnosis are particularly amenable to appropriation by AI. This is exemplified by an algorithm that ‘learned’ from a database of normal and abnormal images to diagnose and classify diabetic retinopathy as accurately as human doctors.11 Similarly, when applied to a dataset of 340 brain magnetic resonance images, an algorithm developed at the University of Malaya classified images as either ‘healthy’ or ‘diseased’ with 100% accuracy.12 Even aspects of the physical examination can be performed by AI, with a computer-vision algorithm classifying a group of 55 patients as either ‘healthy’ or ‘Parkinson’s disease’ based on automated analysis of handwriting with 79% accuracy.13
Although these solutions are intended as physician assistants rather than physician substitutes, the findings have huge implications for doctors: diagnosis, our defining role, could soon be performed better, faster and more cheaply by AI. If nothing else, they suggest that AI could substitute for human diagnosis in ‘visual’ medical specialties such as radiology, pathology, dermatology and ophthalmology in the very near future.
Following diagnosis, the doctor and patient must decide on appropriate treatment. This process relies on the doctor applying their clinical acumen to a particular problem, in combination with available evidence and patient preferences.14 As with diagnosis, the process is largely algorithmic. As a result, there is growing use of treatment CDSSs that range from simple information resources, to ‘intelligent’ algorithms that suggest patient-specific evidence-based treatment recommendations.15 Antimicrobial Resistance Utilisation and Surveillance Control (ARUSC) is an example of an ‘intelligent’ antibiotic CDSS that is fully integrated with the electronic health record. In a recent prospective cohort study in Singapore, use of ARUSC halved mortality rates in patients who were initially started on empiric antibiotics.16 Similarly, Watson is currently making useful patient-specific treatment suggestions to leading oncologists.3 Such findings suggest that, when making treatment decisions, humans and machines combined are superior to humans alone.
Where does this leave the doctor?
As these systems become more intelligent, diagnosis and routine treatment decisions could, in principle, be performed independently by AI. As a result, the human clinician would only need to perform tasks that remain beyond the capability of AI, such as communicating with patients, performing procedures, or making the final treatment decision together with the patient. Therefore, the clinician does not necessarily need to be a doctor. The cognitive tasks, which require many years of medical school training and decades of clinical experience, would no longer be the role of the doctor. This shift would be more apparent in the hospital setting, where there is a greater emphasis on the diagnostic process, than in primary care, where the relationship between doctor and patient is often more important.
However, in both community and hospital settings, health professionals requiring less intensive training than doctors, such as clinical nurse specialists, could be trained to ‘fill the gaps’ where AI remains less capable, for instance in history-taking, physical examination or basic procedures. Indeed, it has been shown that with appropriate training, nurse practitioners are comparable to physicians when treating patients in primary care.17 There may be a role for a small number of doctors to oversee these processes, but the current role of the doctor as an expensive problem solver would become largely redundant.
Over the coming years, AI will challenge the traditional role of the doctor. Human doctors make errors simply because they are human, with an estimated 400,000 deaths associated with preventable harm in the US per year.18 Furthermore, the relentless growth of first-world healthcare demands in an economically constrained environment necessitates a new solution. Therefore, for a safe, sustainable healthcare system, we need to look beyond human potential towards innovative solutions such as AI. Initially, this will involve using task-specific AI as adjuncts to improve human performance, with the role of the doctor remaining largely unchanged. In the longer term, however, AI is likely to outperform doctors consistently in most cognitive tasks. Humans will still be an important part of healthcare delivery, but in many situations less expensive, fit-for-purpose clinicians will assume this role, leaving the majority of doctors without employment in the role that they were trained to undertake.