Technology has been used in healthcare throughout its history – devices ranging from stethoscopes to heart rate monitors are ubiquitous and unremarkable. So why is healthcare technology such an important topic right now?
Healthcare technology has become a hot topic in recent years because of the emergence of potentially game-changing innovations. Previous advancements have generally come in the form of tools that doctors and nurses can use. In contrast, modern technological developments are making it possible for medical staff to be replaced, in some circumstances, by robotics or artificial intelligence systems. For example, in the context of diagnostics, there are already technologies that analyse medical images, such as checking radiological images for tumours, with the same accuracy as a human expert. Similarly, machine-learning early warning systems that alert clinicians to patients at risk of deterioration can already outperform current clinical practice. A shift from a model in which clinicians use technology as a tool, to one in which clinicians are to some extent replaced by it, has profound implications for how medical care is regulated and where legal liability falls when something goes wrong.
Why is Artificial Intelligence such an important development in healthcare? What AI developments do you think will have the greatest impact on healthcare services?
The use of Artificial Intelligence in the healthcare context has great promise, particularly in the field of diagnostics. We already have emergent diagnostic AI tools that are performing at the same level as human clinicians. But AI has the potential to make diagnostic predictions in a way no human could. Machine learning is a technology that allows computers to learn directly from examples and experience in the form of data. One application of machine learning in the healthcare context is to provide a computer programme with a large set of patient data, where the health outcomes for those patients are already known. The programme then identifies patterns in that data, developing a complex understanding of the relationship between certain factors and health outcomes for patients. When that programme is provided with data about a new patient, whose diagnosis and/or prognosis is unknown, it uses what it has “learned” to make a prediction. As computers are able to process ever larger data sets, AI tools have the potential to make diagnostic and prognostic predictions that are more accurate than those made by clinicians. Similarly, machine learning can be used to devise treatment plans for patients. And this technology isn’t years away; it’s being developed and used now.
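To make that learning process concrete, the sketch below shows, in purely illustrative Python, a model being fitted to historical patient records with known outcomes and then producing a prediction for a new patient. The feature names, figures and the choice of scikit-learn’s LogisticRegression are invented for the example and do not correspond to any real clinical system.

```python
# Purely illustrative: a toy supervised-learning model for diagnosis.
# Features, data and model choice are hypothetical, not a real clinical tool.
from sklearn.linear_model import LogisticRegression

# Historical patients: each row is (age, systolic blood pressure, biomarker level);
# each label records whether the condition was later confirmed (1) or not (0).
X_train = [
    [54, 130, 2.1],
    [61, 145, 3.4],
    [47, 118, 1.2],
    [70, 160, 4.0],
    [35, 110, 0.9],
    [66, 150, 3.8],
]
y_train = [0, 1, 0, 1, 0, 1]

# "Learning" here means fitting parameters to the patterns in the historical data.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new patient whose diagnosis is unknown: the output is a probability that a
# clinician might treat as one factor among many in reaching their own decision.
new_patient = [[58, 142, 2.9]]
probability = model.predict_proba(new_patient)[0][1]
print(f"Predicted probability of the condition: {probability:.2f}")
```

In practice the training sets are vastly larger and the models far more complex, which is precisely why their internal workings can be difficult for a clinician to scrutinise.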
What are the legal implications of the use of Artificial Intelligence in diagnostics? Who will be liable if an algorithm provides an incorrect diagnosis or generates an incorrect treatment plan?
The increased use of AI tools in healthcare will place significant pressure on existing approaches to legal liability in cases where things go wrong. A conventional claim in medical negligence depends for its success on a finding that the doctor, or other medical professional, has breached their duty of care to the patient by falling below the standard of reasonable care expected of someone in their position. The doctor made the wrong diagnosis or prescribed the wrong treatment because they did not take reasonable care. But what happens where the diagnosis or treatment plan is generated by a software programme, upon which the doctor relies?
At present, most AI systems are being designed to support clinical decision-making, rather than to replace it. That is, clinicians are meant to take the output generated by the programme and consider it as one factor amongst many in reaching their own decision. In these circumstances, ordinary principles of clinical negligence can still apply. It is the clinician herself who exercises the final judgment and, in doing so, can fall below the reasonable standard expected of her. An interesting question which will necessarily arise is what use a reasonable doctor would make of the information provided by an AI system.
This raises a host of tricky questions. In circumstances where an AI system is more accurate than human clinicians, would it ever be reasonable for a doctor to depart from the system’s prediction? Where an AI system is so complex that its internal workings cannot be understood by a doctor, is there any meaningful way that a clinician can decide whether to accept or reject a prediction? In these circumstances, it will be difficult to say that an individual doctor would be negligent in relying on an AI system. Instead it may be asked whether the NHS Trust, or private healthcare provider, was negligent in deciding to deploy that AI system in that particular context. The relevant duty of care might be at an institutional level. As such, scrutiny would focus on whether adequate care was taken in determining whether AI was appropriate in the particular context and in sourcing an effective and safe system. It is hoped and expected that a new regulatory framework for AI products will emerge to assist healthcare institutions in making these important decisions.
Beyond the hospital itself, the provider of the AI system may also be liable, potentially in negligence or product liability. However, the application of existing legal structures to international tech companies is bound to prove complex and challenging. There will be particular difficulties in proving what went wrong and why, not least because tech companies are loath to share details of their code or the data sets on which a system has been trained.
What about those left behind – doctors and hospitals who cannot afford to acquire the very latest healthcare tech? At what point does a piece of healthcare tech become essential to discharge your clinical duty of care?
Another way in which a doctor or healthcare institution could be exposed to liability is if they fail to use AI systems when doing so is required to discharge their duty of care. How will the Bolam/Bolitho test be adapted to accommodate different rates of adoption of emergent technology? At what stage will it become Bolam irresponsible or unreasonable for “late adopters” to decline to use emerging technologies? Given the perceived benefits of AI technology, at what point would refusal to adopt it, or to follow its guidance, become Bolitho illogical? To what extent can healthcare institutions rely upon cost or resource arguments to avoid investing in healthcare technology? Is this a matter of clinical judgment to which the Bolam test applies or is there a different test? Are decisions about resourcing matters for the Courts at all?
What about problems with bias in AI systems?
One critical weakness in machine learning is that it will replicate, and sometimes magnify, existing biases which are present in the data sets on which it is trained. This is of particular importance in the medical context, given known discrepancies in health outcomes along gender and racial lines. For example, clinical trials often fail to include enough women and do not adequately investigate how a drug might affect women’s bodies differently. Women’s self-reported pain is more likely to be questioned by doctors, and pain conditions in women are frequently misdiagnosed.
AI can be both a positive and a negative in this context. On the one hand, a carefully designed programme might provide more accurate clinical predictions and go some way to overcoming potential biases in human decision-makers. On the other, an AI tool trained on a data set that is skewed along gender lines will replicate this imbalance. It is plausible that an AI diagnostic tool might be more accurate than a human clinician overall, but distribute those benefits unevenly, with minimal improvement in detecting a condition in women but a large improvement for men.
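The point about unevenly distributed accuracy can be illustrated with a simple check of the kind of bias testing mentioned in the following paragraph: comparing a system’s performance separately for different patient groups. The sketch is hypothetical, with the groups, predictions and outcomes invented purely to show the shape of such a test.

```python
# Illustrative bias check: compare accuracy for two patient groups separately.
# All figures are invented for demonstration only.
def accuracy(predicted, actual):
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical held-out test results, split by recorded sex.
results = {
    "female": {"predicted": [1, 0, 0, 0, 1, 0], "actual": [1, 1, 0, 1, 1, 0]},
    "male":   {"predicted": [1, 0, 1, 0, 1, 1], "actual": [1, 0, 1, 0, 1, 1]},
}

for group, data in results.items():
    print(f"{group}: accuracy {accuracy(data['predicted'], data['actual']):.2f}")

# Here the female group fares markedly worse (0.67 vs 1.00): a respectable
# overall figure can mask an uneven distribution of benefit between groups.
```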
Where a woman suffers harm as a result of negligent treatment by a clinician who was unassisted by AI, she can bring a civil claim. If women are more likely to suffer negligent treatment, then we can expect this to be reflected in a larger number of successful negligence claims brought by women. Where an AI system has been adopted by a hospital for use, things will be more complex. As discussed above, a doctor might not be negligent for relying on a complex algorithm that, overall, improves outcomes. However, a hospital that chooses to use an AI system which distributes healthcare gains unevenly could arguably be in breach of its duty of care. Traditionally, negligence law has not concerned itself with questions of discrimination. Perhaps these issues will now arise, given the practical possibility of testing AI systems for bias. More plausibly, this could be addressed by a new bespoke legal regime, more suited to these emergent technologies.
Might there be a new legal regime developed to deal with AI systems in healthcare?
The European Parliament’s Legal Affairs (JURI) Committee has considered the question of how liability for damage caused by advanced robots and AI should be determined. They suggest a new regime under which it is compulsory for all those using these technologies to be insured. Liability for harm resulting from use of AI could be determined strictly or could be apportioned between different actors on a risk-management basis. The latter approach would attach liability to those who were able to minimise risks posed by the AI system in question but failed to do so, for example, the programmer or the user/owner, depending on the circumstances. The BSB has expressed strong support for compulsory insurance to cover owners and/or operators of AI. They argue that a direct right of action against the insurer should be available to an innocent third party harmed by an accident involving AI, with liability being strict. As to whom the insurer should then have recourse, the BSB is less clear, aside from rejecting the need for a harmonised EU approach. The Law Commission of England and Wales has already started a project looking into automated vehicles. It seems likely AI will be on the agenda in years to come. We will have to watch this space.