This thesis analyses the interaction between innovation and law through the example of medical artificial intelligence (AI) and the informed consent requirements of the United Kingdom and California. Specifically, it asks: ‘Can the common law doctrine of informed consent ensure adequate protection of patient autonomy as artificial intelligence is introduced into medicine?’ More widely, this analysis will be used to challenge three prevalent assumptions in the law and technology field: (1) that law is primarily an instrument to realise extra-legal standards related to innovation; (2) that legal solutions, as well as legal thinking, lag far behind technological developments; and (3) that extra-legal regulatory options, such as the adaptation of a system’s architecture, offer comparably fast and effective solutions.
A wide variety of clinical AI applications are currently being developed and approved for use in many health care systems. AI is defined here as technology capable of accomplishing the kinds of tasks that human experts have previously solved through their knowledge, skills and intuition. In particular, the machine learning (ML) approach has enabled the development of clinical AI with such capabilities.
The characteristics of such intelligent devices are described and relevant case studies are delineated. Above all, the different qualities of interaction between AI and patient, including different degrees of automation, are classified. Many intelligent medical devices will merely complement human knowledge: they will not directly diminish the expertise that medical professionals contribute to the diagnosis, prognosis or treatment of the patient. Other products will partially replace this expertise; for example, less skilled professionals will be able to perform tasks that previously required expert knowledge. In a very limited number of cases, AI will even determine parts of the clinical decision-making process, for instance by contributing a surprising insight that must then be taken into account, or by triaging certain findings and thereby directing human professionals’ attention.
Further distinctive factors are associated with ML-driven medical devices. These include (1) shortcomings in the evaluation of their performance, especially when they are applied to diverse groups and/or environments; (2) an incomplete understanding of how they function in individual cases, which will persist despite emerging technological solutions; (3) a consequent reliance by human decision makers on general knowledge about the technology when integrating it into a collaborative form of decision-making; and (4) the devices’ influence on joint decision-making through the exploitation of subconscious human biases.
As a result, involving AI in clinical care threatens to undermine the ability of patients to make judgments that are tailored to their personal circumstances and beliefs. In short, it may infringe the principle of patient autonomy. What this standard requires of AI-assisted medical decision-making, which for the first time allows non-human actors to apply expertise to individual cases, is a pressing question.
The thesis examines how the legal standards of the United Kingdom and the United States (through a case study of California specifically) respond to this problem. In doing so, a method is developed for capturing the inherent dynamics of these standards and for undertaking a forward-looking assessment of the common law’s adaptability. First, in line with the functional method of comparative law, the pre-legal problem, the tertium comparationis, is delineated: the violation of patient autonomy. To determine which AI features contribute to this violation, a procedural notion of autonomy is applied, one that focuses on the process by which individuals make their decisions. Specifically, it associates autonomous decisions with the individual's values and beliefs and with the exercise of rationality.
This enables the identification of four AI challenges to patient autonomy: (1) uncertainty in the use of clinical ML that justifies analogies to innovative treatments with generic risk-related characteristics; (2) the making of some normatively important decisions relatively independently, i.e. without meaningful patient involvement; (3) an interference with the patient's ability to make rational evaluations in the collaborative medical decision-making process; and (4) the non-obvious substitution of human expertise by AI-based information, posing an epistemological problem for the patient.
As a next step, the relevance of this concept of autonomy, and of the resulting problems, within the two legal systems is discussed. It is determined that the procedural understanding of autonomy represents a defensible concretisation of patient autonomy in the medical law of both countries. Further, this concept can be constructed as a legal principle. This explains, among other things, how autonomy-based reasoning interacts with other legal norms, meets the doctrinal requirements of those norms and nonetheless creates a flexibility that lends dynamism to the law’s capacity to guide action.
The derived principles are then used to examine the extent to which individual norms requiring patient consent can meet the challenges of medical AI. In the United Kingdom and California, these are specifically the common law tort claims of ‘battery’ and ‘negligence’. In doing so, the developmental possibilities under the autonomy principle, as well as the limits of those possibilities, are identified in light of the AI challenges. As a result, a forward-looking, nuanced picture emerges of how the law can deal with the new technological problems of medical ML applications.
In both countries, it is foreseeable that the autonomy principle will be able to bring about the necessary changes in the design and interpretation of specific norms only to a certain extent; the adaptability of the common law will reach its limits. At this point, the thesis discusses how complementary statutory regulation could build upon the insights of the preceding analysis, so that it complies with the principle of patient autonomy as well as with the distinctive doctrinal requirements of the respective systems.
In a final section, the thesis returns to the three basic assumptions concerning the relationship between law and technology discussed at the beginning. The methodology developed and applied illustrates the proactive dynamics inherent in the law, which moves with societal developments and also guides them in a variety of ways. It is a misunderstanding to conceptualise the law only as a system of specific regulations that constitute either stringent hurdles to innovation or, at times, its limited facilitators, and that must be periodically adapted to the new ‘reality’. Moreover, the touted benefits of extra-legal solutions did not materialise in the case of clinical AI: neither immediate protection of patient autonomy nor inherent, faster adaptability was a hallmark of any of the analysed technological solutions.