How Theme Park, Space Invaders and Go have paved the way for exponential healthcare

I often imagine my retired self looking back at this point in my career, marvelling at how primitive it all was. By that stage, hospital fax machines, handwritten patient notes, stethoscopes, ‘bleeps’ and other relics of a time gone by will be collecting dust in the Ancient Medical History Museum. I’ll be a regular visitor at the museum, chuckling at my old life. But I’ll do so remotely from the comfort of my own home with a virtual reality headset on. And I’ll be living on a spaceship.

It’s 2017, and we are standing on the cusp of the most dynamic and transformative era in healthcare history: the digital health era. New technology is poised to rampage through the status quo, radically changing the roles of both clinician and patient. Our industry is ripe for disruption, and one technology promises the most exponential change – artificial intelligence (AI).

But we’ve been hearing this for ages, right? Is anywhere in the NHS applying this technology or is it all just hot air?

First, what is AI?

It’s frustratingly hard to define. Google and YouTube produce hundreds of conflicting explanations. The reason: intelligence itself is a confusing concept.

In simple terms, AI ‘agents’ are machines that utilise a wide range of computer science applications to solve problems previously thought only possible using ‘natural’ human cognition. There are two distinct categories – ‘narrow’ and ‘general’ AI.

Narrow AI agents tackle specific tasks – for example, forecasting the weather, playing Jeopardy! or operating a driverless car. The machine is pre-programmed with all the knowledge it could possibly need to complete its task. It could therefore be argued that the ‘intelligence’ demonstrated by narrow AI resides in the brain of the human developer.

Narrow AI is already ubiquitous in the average person’s daily life via Apple’s Siri, Facebook’s friend suggestions or Netflix’s film recommendations. It has rapidly become an indispensable part of modern digital infrastructure, but each individual application is limited to its specific function.

A much greater, more complex challenge is creating general AI, sometimes described as ‘true’ AI. This is when the machine can attempt a broad range of tasks without task-specific pre-programming, interpreting and operating in its environment in much the way a human being does. At the core of general AI is the ability to learn from scratch – ‘machine learning’.

Who are DeepMind?

General AI has remained elusive for the scientific community until relatively recently. Enter British AI company DeepMind.

The start-up was founded in 2010 by Londoners Mustafa Suleyman, Shane Legg, and Demis Hassabis (child chess prodigy and, at 17, co-creator of the wildly successful simulation game Theme Park) with the simple but ambitious mission to “solve intelligence, and use it to make the world a better place” (1).

DeepMind first hit the headlines when they designed a single algorithm that taught itself how to play and subsequently dominate forty-nine different Atari 2600 video games (2).

They followed up with a high-profile breakthrough in 2016, when their AlphaGo algorithm defeated 18-time world champion Lee Se-dol at the ancient Chinese game of Go, winning 4–1 in a five-game series (3, 4).

At first glance, AlphaGo seems a narrow application of the technology, with a strikingly similar feat famously achieved nearly two decades earlier by IBM’s Deep Blue computer when it defeated chess grandmaster Garry Kasparov (5). On closer inspection, AlphaGo and Deep Blue are very different digital creatures.

Chess is a game that can be attacked with logic and raw computation. Deep Blue was pre-programmed with human chess knowledge and defeated Kasparov by ‘brute force’, searching and evaluating vast numbers of possible positions every second. Definitively narrow AI.

Go is also a board game, but there are more possible moves than there are atoms in the universe, making it arguably the most complex game ever devised (6). With such an unfathomable number of different possible strategies, the game goes beyond simple logic, and boils down to a battle of gut feeling and instinct. It would be impossible to pre-programme AlphaGo with every potential scenario. Instead, the machine self-learnt the game from raw experience (hundreds of hours of ‘practice’ game time), approximated expert intuition and creativity, and defeated the world’s best.

DeepMind’s algorithms uniquely combine two neuroscience-inspired machine learning strategies. The first is called ‘deep learning’. A grossly oversimplified explanation: artificial neural networks ingest vast quantities of raw input data, with layers of ‘neurones’ interacting as they process the information, gradually refining the observations made and eventually producing abstract representations (7, 8).
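To make the ‘layers of neurones’ idea concrete, here is a toy sketch in Python. The layer sizes, random weights and pixel-patch input are purely illustrative assumptions for this example – nothing here reflects DeepMind’s actual architectures, and a real network would also be *trained*, not just run forward:

```python
import random

random.seed(0)

def relu(x):
    # Non-linearity that lets stacked layers build up abstract features
    return [max(0.0, v) for v in x]

def dense(weights, bias, x):
    # One layer of 'neurones': each neurone weighs every input it sees
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def random_layer(n_out, n_in):
    # Illustrative random weights; training would adjust these
    return ([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# Raw input: e.g. a flattened 8x8 patch of pixel intensities
x = [random.random() for _ in range(64)]

W1, b1 = random_layer(32, 64)   # first layer: low-level features
W2, b2 = random_layer(10, 32)   # second layer: more abstract features

hidden = relu(dense(W1, b1, x))
output = dense(W2, b2, hidden)  # e.g. ten abstract 'class scores'
print(len(output))  # 10
```

Each layer only ever sees the previous layer’s output, which is why the representations become progressively more abstract as data flows through the stack.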

The second strategy is ‘reinforcement learning’. Essentially, the machine observes and interacts with its defined environment (e.g. firing the laser cannon in Space Invaders, or making moves in Go), learns by trial and error to maximise rewards and minimise penalties, gradually deepening its understanding of the environment and moving itself closer to the overall goal (e.g. a perfect game score) (7, 9).
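The trial-and-error loop can be sketched with the simplest textbook form of reinforcement learning, tabular Q-learning, on a made-up five-cell corridor (the environment, reward values and learning parameters are all invented for illustration – DeepMind’s agents combine this idea with deep neural networks):

```python
import random

# Toy environment: a corridor of 5 cells; the agent starts at cell 0
# and receives a reward only for reaching cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or move right

# Q-table: the agent's (initially blank) understanding of its environment
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Trial and error: mostly exploit current knowledge, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Nudge the estimate toward reward plus discounted future value
        Q[(s, a)] += alpha * (reward
                              + gamma * max(Q[(s_next, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s_next

# After training, the greedy policy at each cell should point right
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Early episodes are essentially random flailing; the reward at the goal then propagates backwards through the Q-table, episode by episode, which is the ‘gradually deepening understanding’ described above.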

Whilst Atari games and Go are relatively simple environments, they have provided the perfect platform for demonstrating definitively general-purpose AI that truly mimics human-style learning – observation, reflection, refinement of a mental model, and improved action. The first ‘look’ the machine has at a game of Go is directly comparable to a baby opening its eyes for the first time.

Naturally, DeepMind’s work hasn’t gone unnoticed by the tech giants. In 2014, before the company had started generating revenue, it was acquired by Google for £400 million (10). Oh, and Elon Musk is an investor (11).

DeepMind’s top priority? Healthcare.

The Ophthalmologist with a lightbulb moment

“I was a nerd. I loved computer games,” says Dr. Pearse Keane, Consultant Ophthalmologist and Clinical Scientist Award recipient. “When I was a kid growing up in Dublin, I used to sneak over to my neighbour’s house to play on their Atari 2600. I wanted one, but my parents wouldn’t allow it. I saved up and bought my first computer, which was a Commodore 64. I started to learn to programme a tiny bit then, but to my great regret I didn’t pursue it and just went down the video games road. I remember buying Theme Park.”

As the son of a doctor, and a high achiever in the academic arena, Dr. Keane was always destined for medical school. He graduated in 2002, and then embarked on a 13-year journey through rigorous clinical and academic ophthalmology training. He never lost his passion for technology but struggled to fit it into his busy schedule.

“I would have loved to have had the time to do a Masters in machine learning or build some software development skills,” he says. “The long path and hierarchical nature of medicine is a barrier to that”.

Today he works at Moorfields Eye Hospital in London and subspecialises in retinal disease, with 70% of his working week spent doing research. His academic focus is retinal imaging via optical coherence tomography (OCT – like ultrasound, but light waves are measured as opposed to the echoes of sound waves). OCT is quick, non-invasive, and produces extremely high-resolution retinal images that resemble histopathology slides.

Ophthalmology is a specialty drowning in referrals. Eye clinic appointments constitute 10% of hospital appointments across the entire NHS (roughly 10 million in total), second only to orthopaedics (12). Referrals have increased by a third in the last five years, and it’s only going to get worse now that OCT scanners have been rolled out in most high-street opticians across the UK (13). In most centres the expertise to accurately read OCT scans is simply not available, so even the tiniest deviation from normal means referral to a retinal specialist.

According to Dr. Keane “it’s as if every GP in the country was given an MRI scanner, told to scan every cough or headache that comes in, but not given the training to read MRIs”. Swathes of false positive referrals crowd clinics, and increase the risk that patients with genuine sight-threatening retinal disease, like wet age-related macular degeneration (AMD), don’t get seen in a timely fashion.

“You have patients who have lost sight in one eye completely from wet AMD, and then they develop the early signs of it in their other eye,” he explains. “They’re given an urgent appointment for 6 weeks later. You can imagine how stressful that must be.”

In July 2015, Dr. Keane read an interview in Wired magazine profiling DeepMind (14). “Mustafa Suleyman – one of DeepMind’s co-founders and its Head of Applied AI – mentioned an interest in using AI for healthcare,” he says. “That was when I had my lightbulb moment. I tracked him down and sent him a message on LinkedIn. To my joy, he replied in a day or so, and within a few days I was meeting him for coffee.

“My idea was simply that we should apply AI, in particular deep learning, to OCT scans, and triage the scans,” he continues. “So, if you develop a sight-threatening disease you can get in front of a retinal specialist within days rather than months. Similarly, if you have nothing serious wrong with you, you don’t get falsely referred in, with all the anxiety associated with that”.

In July 2016, the collaboration between Moorfields and DeepMind was formally announced to the media, generating the kind of fanfare one might expect when the NHS partners with a commercial powerhouse like Google (15). However, research in earnest hadn’t yet begun, and the year preceding the announcement was spent meticulously packaging approximately one million anonymised, historical OCT scans so that they could be reliably fed to DeepMind’s algorithm. The biggest obstacles were predictable NHS issues: poor labelling, awkward file formatting, and images coming from multiple different types of scanner. And, naturally, there was a lengthy ethics approval process.

“We really put a lot of effort into being as transparent as we could” says Dr. Keane. “We have a section of the Moorfields website dedicated to the collaboration, with questions and answers, interviews, and how to opt out if you are a patient (16). We’ve also published our study protocol prior to starting research, which is slightly alien as normally you wait until you have the results (17).”

Research is well under way, and Dr. Keane describes the preliminary results as “very encouraging… even exciting”. Deep learning is applied in much the same way as in the Atari and Go research. With Space Invaders, the AI agent was rewarded every time it fired its laser cannon and hit an alien. With retinal disease, the AI agent labels segments of OCT scans that have been pre-labelled by ophthalmologists, and gets a game-like reward if it is accurate (18).
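The scoring idea is simple to illustrate. In this hypothetical sketch, the segment labels (“fluid”, “drusen”, “normal”) and the predictions are invented for the example – the actual label set and scoring scheme used in the Moorfields research are not public detail here:

```python
# Expert ophthalmologists' pre-labels act as the 'ground truth'
expert_labels = ["fluid", "drusen", "normal", "normal", "fluid"]
# The algorithm's predictions for the same five OCT segments
model_labels  = ["fluid", "normal", "normal", "normal", "fluid"]

# Game-like reward: one point for every segment labelled correctly
reward = sum(1 for m, e in zip(model_labels, expert_labels) if m == e)
accuracy = reward / len(expert_labels)
print(reward, accuracy)  # 4 0.8
```

Just as with the alien-hitting laser cannon, the reward signal is all the algorithm needs: it adjusts itself to earn more points, which here means agreeing with the experts more often.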

“What we hope to do is to have an algorithm able to achieve expert performance,” says Dr. Keane. “In other words, an algorithm that can look at an OCT and diagnose any retinal disease that a retinal specialist from Moorfields could diagnose.”

The collaboration hopes to publish its results by the close of 2017 and establish proof of concept. The aim after that would be a full, prospective randomised controlled trial to properly validate the algorithm.

Why is this a big deal?

If the results live up to the hype and prove that a machine can be rapidly trained to demonstrate human-level expertise at interpreting highly specialised imaging, it will be a monumental moment in NHS history.

But it’s not just happening at Moorfields. Similar AI-based projects are underway elsewhere in the NHS and around the world. DeepMind are also collaborating with University College London Hospital in research that explores radiotherapy planning for head and neck cancers (19), as well as developing an app with the Royal Free Hospital that identifies patients at risk of acute kidney injury (20). IBM’s Watson is being piloted as an interactive tool for improving patient engagement and understanding at Alder Hey Children’s Hospital in Liverpool (21), whilst researchers at Stanford University have trained a deep learning algorithm to diagnose skin cancers as accurately as dermatologists (22).

One can imagine as the evidence-base snowballs and clinicians increasingly ‘trust’ AI, researchers will loosen their grip on what the machines are ‘allowed’ to learn. They will excel beyond mere expert-level diagnostics, and start shedding light on new patterns and pathophysiological associations in much the same way that AlphaGo introduced new strategies to a game thousands of years old. This technology has the potential to de-shroud the countless mysteries of modern medicine, and transform what we can achieve as treating physicians, as well as what we can survive from as ailing patients.

Where I work in the Emergency Department, I see multiple opportunities where AI could pounce and turbocharge our clinical performance. More accurate interpretation of emergency radiology and electrocardiography seems like the lowest-hanging fruit. We could use it to identify deteriorating patients earlier through continuous monitoring of vital signs in combination with other data sets such as blood results and past medical history. It could augment effective pharmacological decision-making, including antibiotic stewardship. Perhaps there is scope for AI-assisted front-of-house triage, along with a reliable ‘Dr. Google’ algorithm that reassures well patients at home and eases pressure on primary and secondary care (23).

The human brain requires oxygen. AI requires data. With 90% of the world’s data generated in the last two years, this technology is quintessentially ‘exponential’. Data-driven healthcare brings with it new obstacles, not least digitising the archaic NHS documentation systems, along with the massive ethical hurdles of patient data-sharing and accountability. The teething process will prove tedious, and there will be those amongst us who refuse to submit to the technology, but if and when the evidence base is sufficiently built, the ethical principle of non-maleficence will give us no option but to fully embrace our digital future.

And judging by the success of DeepMind’s previous exploits, along with the look I saw in Dr. Keane’s eyes as we chatted, the pending evidence base is a foregone conclusion.

Robert Lloyd
@PonderingEM

References

  1. https://deepmind.com
  2. Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning. Nature 2015;518:529–33.
  3. Silver D, Schrittwieser J, Simonyan K, et al. Mastering the game of Go without human knowledge. Nature 2017;550(7676):354–359.
  4. Burgess M. Google’s DeepMind wins historic Go contest 4-1. Wired 2016 Mar 15.
    http://www.wired.co.uk/article/alphago-deepmind-google-wins-lee-sedol
  5. How a computer beat the best chess player in the world – BBC http://www.bbc.co.uk/news/av/world-us-canada-39888639/how-a-computer-beat-the-best-chess-player-in-the-world
  6. Google AlphaGo computer beats professional at world’s most complex board game Go. The Independent. http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-alphago-computer-beats-professional-at-worlds-most-complex-board-game-go-a6837506.html
  7. What is Machine Learning?
    https://bdtechtalks.com/2017/08/28/artificial-intelligence-machine-learning-deep-learning/
  8. Deep Learning – Wikipedia
    https://en.wikipedia.org/wiki/Deep_learning
  9. Public Lecture with Google DeepMind’s Demis Hassabis. Royal Television Society. YouTube.
    https://www.youtube.com/watch?v=0X-NdPtFKq0&t=1212
  10. Google buys UK artificial intelligence start-up DeepMind. BBC. http://www.bbc.co.uk/news/technology-25908379
  11. Elon Musk says he invested in DeepMind over ‘Terminator’ fears. The Guardian. https://www.theguardian.com/technology/2014/jun/18/elon-musk-deepmind-ai-tesla-motors
  12. Harnessing Deep Learning to unlock new insights from Ocular Health. Pearse Keane, lecture. Innovatemedtec. YouTube. https://www.youtube.com/watch?v=54r3GQmd0qM
  13. OCT rollout in every Specsavers announced. Optometry today. https://www.aop.org.uk/ot/industry/high-street/2017/05/22/oct-rollout-in-every-specsavers-announced
  14. DeepMind: inside Google’s superbrain. Wired.
    http://www.wired.co.uk/article/deepmind
  15. Google’s DeepMind to analyse one million NHS eye records to detect signs of blindness. The Telegraph.
    http://www.telegraph.co.uk/technology/2016/07/05/googles-deepmind-to-analyse-1-million-nhs-eye-records-to-detect/
  16. DeepMind Health Research Partnership. Moorfields Eye Hospital website. http://www.moorfields.nhs.uk/landing-page/deepmind-health-research-partnership
  17. Automated analysis of retinal imaging using machine learning techniques for computer vision. F1000 Research.
    https://f1000research.com/articles/5-1573/v1
  18. The computer will assess you now. BMJ 2016;355:i5680.
  19. DeepMind and University College London Hospitals NHS Foundation Trust.
    https://deepmind.com/applied/deepmind-health/working-nhs/health-research-tomorrow/health-uclh/
  20. DeepMind and the Royal Free.
    https://deepmind.com/applied/deepmind-health/working-nhs/how-were-helping-today/royal-free-london-nhs-foundation-trust/
  21. Alder Hey Children’s Hospital set to become UK’s first ‘cognitive’ hospital. Press release, 11 May 2016.
    http://www.alderhey.nhs.uk/alder-hey-childrens-hospital-set-to-become-uks-first-cognitive-hospital/
  22. Esteva A, Kuprel B, Novoa R, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–118.
  23. Doctor AI will see you now. Student BMJ.
    http://student.bmj.com/student/view-article.html?id=sbmj.i6528
