"Move fast and break things" (just not patients)
Last month, the Academy of Medical Royal Colleges (the AoMRC) published a report commissioned by NHS Digital: "Artificial Intelligence in Healthcare". The report is the product of interviews with leading thinkers and practitioners from the worlds of artificial intelligence (AI), medicine and science.
The AoMRC's report highlights the predicaments facing companies that produce new devices harnessing AI. In the foreword, Professor Carrie MacEwen (Chair of the AoMRC) refers to the tension between the tech mantra "move fast and break things" and the principle long associated with the Hippocratic Oath: "first, do no harm."
The public's desire for innovation in healthcare, but with as little risk as possible, is nothing new. Regulators and the courts have grappled with this tension for decades, trying to determine how much risk to tolerate when allowing new products onto the market.
Below are three particularly critical issues covered in the AoMRC's report that create dilemmas for industry and clinicians, together with our suggestions for mitigating the risks:
Issue 1. AI depends on human programming
The dilemma
The report recognises the power of AI. Algorithms can standardise treatment in line with up-to-date guidelines, raise minimum standards quickly and reduce unwarranted variations in treatment. However, decision-making tools may be programmed on the basis of incorrect assumptions or incomplete information about patients, or about whole populations. If that happens, significant numbers of patients across an entire healthcare system could be harmed within a short timeframe.
Mitigating the risks
Manufacturers can work with hospitals and regulators to establish a framework for the safe development of AI in healthcare. This might cover how clinicians should be trained, how new devices or techniques should be put through clinical trials, and the best methods for capturing patient information and conducting follow-up reviews of patient populations.
Issue 2. Many different professions are involved in creating AI
The dilemma
The report emphasises that people other than doctors, particularly computer programmers and technology companies, will become integral to the provision of healthcare. This is generally a good thing, as wider expertise is brought to bear on treating disease. However, few clinicians understand how AI works, few computer programmers have medical qualifications, and source code may be protected as intellectual property. If AI proves defective, it may be difficult to collate the evidence needed to determine responsibility for avoidable injuries.
Mitigating the risks
It will be mutually beneficial for manufacturers and hospitals to work out, in advance, where liability should lie in the event of injuries caused by AI. The allocation of liability should be set out clearly in contracts. All parties should take responsibility for identifying how and where injuries could arise and how they can collectively limit the risks. For example, manufacturers could provide training on new equipment, and hospitals could ensure that only clinicians who have received that training are permitted to use the AI.
Issue 3. AI is already in the hands of the public
The dilemma
The report refers to the plethora of apps offering advice directly to patients, without the involvement of a doctor. It notes that apps could, potentially, prescribe treatments. On the one hand, more people could access healthcare far more quickly and cost-effectively than they do now; those who struggle to make an appointment with an overstretched GP may benefit from AI solutions. On the other hand, this approach may prove dangerous if patients misunderstand the data generated by their smartphones, or if users are young, elderly or otherwise vulnerable.
Mitigating the risks
Manufacturers should set out the limits of the AI, emphasising the need for users to exercise their own judgement and common sense. Disclaimers should warn against unreasonable reliance on AI. Manufacturers could also consider limiting access to some of their products, perhaps by making them available only on prescription.
The AoMRC report provides an up-to-date assessment of the opportunities and risks presented by AI. Clinicians, manufacturers and hospitals will need to work together to ensure that patients benefit from new products, moving as quickly as is safe for patients and breaking nothing but technological boundaries.