by John Fox

As with stocks, shares and the gold market, the perceived value of Artificial Intelligence (AI) waxes and wanes. Stanley Kubrick’s ‘2001: A Space Odyssey’ captured everyone’s imagination in 1968; Japan’s Fifth Generation project inflated an enthusiastic bubble in the early 1980s, only for it to be punctured by the end of that decade; the defeat of world chess champion Garry Kasparov by the supercomputer ‘Deep Blue’ and the extended operation of NASA’s ‘Deep Space 1’ probe by an “autonomous agent” regenerated funders’ interest, while Spielberg’s ‘A.I. Artificial Intelligence’ renewed public appetite, and expectations. As with all markets, however, AI bulls and AI bears tell the most convincing stories by turns. The bears, who say we are far from constructing anything truly intelligent, seem presently in the ascendant, pointing out that few automata are truly autonomous and most “robots” are largely pre-programmed or remotely controlled. Avatar, the film which most recently captured the public imagination as ‘A.I.’ did a decade before, focused on devices that project and amplify human capabilities rather than act independently.

Yet, as with markets, the swing of short-term sentiment between bulls and bears isn’t the whole story; despite constant turbulence, the long-term trends in the gold price and in AI are solidly “up”. In the world of stocks and shares we are seeing the emergence of autonomous systems in high-frequency (and in some cases predatory) trading, which are already reputed to be making serious money in global markets. A rather different domain for AI applications is healthcare (see Link 1). In the ‘Safe and Sound’ project we showed how AI systems could populate the future digital economy in healthcare; a video presents a benign future of human and artificial agents cooperating for the benefit of patients, clinicians and medical research (see Link 2).

The narrative follows a fictitious patient through her cancer “journey”, showing how many different tasks and medical services can be automated and choreographed using AI technologies. A key station on this journey is the multi-disciplinary meeting, where all the members of the clinical team discuss each patient to decide what to recommend. The photograph below shows a multi-disciplinary meeting at the Royal Free Hospital in London, where the team reviews each patient’s history, imaging, lab results and so on. The screens in the meeting room show a system called MATE, which summarises the data and assists in most decisions (e.g. diagnosis, risk assessment and prognosis, test selection and treatment recommendation) taken by the team. (The MATE system was developed by my colleagues Vivek Patkar MRCS and Dionisio Acosta PhD, working with Mo Keshtgar FRCS, who leads the breast cancer team at the RFH. We are grateful to Mr. Keshtgar for his enthusiastic support for this project and for his leadership in organising the first clinical trial anywhere of this kind of AI technology.)

MATE does not operate autonomously. For obvious professional and ethical reasons all decisions remain the responsibility of the clinical team. However, it uses the same AI technology (see Link 3) developed in the Safe and Sound project, so that with little more than a “flick of a switch” any decision, or care plan, or even the whole system, could operate without human supervision. The simplicity of this change deserves attention.
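The “flick of a switch” can be pictured as little more than a configuration flag that decides whether a recommendation is queued for the clinical team or enacted directly. The sketch below is purely illustrative and assumes nothing about MATE’s internals; all class and function names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    ADVISORY = "advisory"      # recommendations go to the clinical team for review
    AUTONOMOUS = "autonomous"  # recommendations are enacted without supervision

@dataclass
class Recommendation:
    decision: str  # e.g. "treatment"
    action: str    # e.g. "schedule adjuvant therapy"

def handle(rec: Recommendation, mode: Mode) -> str:
    """Route a recommendation according to the configured mode.

    The decision logic upstream is identical in both modes; only the
    final routing step differs -- which is what makes the switch so simple,
    and so consequential.
    """
    if mode is Mode.ADVISORY:
        return f"Proposed to team for review: {rec.action}"
    return f"Enacted without supervision: {rec.action}"

rec = Recommendation(decision="treatment", action="schedule adjuvant therapy")
print(handle(rec, Mode.ADVISORY))    # same engine, same recommendation...
print(handle(rec, Mode.AUTONOMOUS))  # ...only the mode differs
```

The point of the sketch is that nothing in the reasoning machinery has to change: the ethical weight of the system rests on one routing decision at the very end of the pipeline.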

Looking after cancer patients demands a large slice of every healthcare budget, and multi-disciplinary meetings require the time and expertise of many highly paid professionals, as is evident from the picture. If evidence emerges that decision-making and care planning could run without human supervision, yet remain effective and safe, the pressure to deploy systems in this way would surely grow. (The idea that responsibility for landing a plane full of people could be delegated to an automated system was once outrageous; today we take autolanders for granted, not least because they can get aircraft down safely in conditions that human pilots might not.) The ethical implications of this are obvious.

Safe, sound and ethical?
Medical decision-making has been an important setting for the discussion of ethical questions in professional practice, in which the following are taken as axiomatic:

  • Beneficence: do good.
  • Non-maleficence: do no harm.
  • Distributive justice: be fair.
  • Patient autonomy: respect patient self-direction.

If systems like MATE could be rolled out in an autonomous mode, should they be? If not, why not? As we develop the autonomous systems of the future we are likely to consider questions like those traditionally discussed in medicine. However, additional ethical principles are also likely to be needed before widely rolling out such systems could be considered. For example:

  • Personalisation: a system must be able to engage in natural and cooperative interactions with its users and accommodate the user’s personal goals and preferences.
  • Accountability: an understandable rationale must be available for all recommendations, at whatever level of detail the user may reasonably require.
  • Controllability: the user must be able to modify the system’s assumptions and goals, and the system must adapt appropriately and safely to such changes.
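The three additional principles above are not just policy statements; they translate into concrete software obligations. The following sketch is a hypothetical illustration of what those obligations might look like in code (the clinical details and thresholds are invented for the example, not drawn from MATE or any real guideline).

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    # Personalisation: the user's goals and preferences travel with every query.
    avoid_overnight_stays: bool = True

@dataclass
class Recommendation:
    action: str
    rationale: list  # Accountability: each step of the argument is recorded.

class DecisionEngine:
    def __init__(self):
        # Controllability: assumptions are explicit and user-modifiable,
        # not buried inside the inference code. (Values here are invented.)
        self.assumptions = {"tumour_grade": 2}

    def recommend(self, prefs: Preferences) -> Recommendation:
        rationale = [f"assumed tumour_grade = {self.assumptions['tumour_grade']}"]
        if self.assumptions["tumour_grade"] >= 3:
            action = "inpatient chemotherapy"
        elif prefs.avoid_overnight_stays:
            action = "outpatient radiotherapy"
            rationale.append("patient preference: avoid overnight stays")
        else:
            action = "inpatient radiotherapy"
        return Recommendation(action, rationale)

    def revise(self, key: str, value) -> None:
        # Controllability: changing an assumption forces a fresh evaluation,
        # never a silent continuation with a stale plan.
        self.assumptions[key] = value

engine = DecisionEngine()
rec = engine.recommend(Preferences())
print(rec.action)     # preference-aware recommendation
print(rec.rationale)  # human-readable justification, step by step

engine.revise("tumour_grade", 3)        # user overrides an assumption...
print(engine.recommend(Preferences()).action)  # ...and the plan changes accordingly
```

Each principle maps to a visible feature: preferences are inputs, every recommendation carries its rationale, and assumptions can be revised with the system adapting in response.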

If autonomous systems are rolled out without considering such issues then, as with automated markets, those without technical skills will be excluded from many benefits and the potential for disastrous failures and abuse will grow.

Links:
[1] Drawing on a long-term R&D programme on the application of AI in safety-critical applications, starting with the RED project (http://www.comp.lancs.ac.uk/computing/resources/scs/App-A.pdf, p. 22) and culminating in Safe and Sound: Artificial Intelligence in Hazardous Applications, J. Fox and S. Das, AAAI Press and MIT Press, 2000.

[2] Safe and Sound http://www.clinicalfutures.org.uk
was a collaboration between Oxford University, Edinburgh University, Imperial College/St. Mary’s Healthcare and UCL/Royal Free Hospital, video at http://www.clinicalfutures.org.uk/video/final

[3] The PROforma agent modelling language (Das et al., JETAI, 1997; Fox et al., AI Communications, 2003; Sutton and Fox, J. Am. Med. Informatics, 2003) and the Tallis application development platform:
http://www.cossac.org/technologies/proforma
http://www.openclinical.org/gmm_proforma.html

COSSAC - Interdisciplinary Research Collaboration in Cognitive Science & Systems Engineering: http://www.cossac.org

Please contact:
John Fox
Department of Engineering Science, University of Oxford and
University College London/Royal Free Hospital, UK

Next issue: January 2019
Special theme:
Transparency in Algorithmic Decision Making