There’s a lot more to artificial intelligence than what goes on inside your smartphone. Professor Geraint Rees, recently appointed Pro-Vice-Provost (AI), shares the future of machine learning at UCL

Photography Stocksy

UCL has been working on artificial intelligence for about 50 years. Ever the innovator, the university this year launched its AI for People and Planet strategy and appointed its first Pro-Vice-Provost for AI, Professor Geraint Rees. As Dean of the UCL Faculty of Life Sciences and Founding Chair of the UCL AI Steering Group, he is a key innovator in using AI to transform healthcare. “There are two types of AI: narrow and general”, explains Professor Rees. “Narrow AI is things like AlphaGo, a computer program that can play Go better than anyone in the world, but unfortunately it can’t order your groceries or assist you in tying your shoelaces!” While a voice-activated speaker has more of the building blocks of machine learning that make it quite useful to many people, it’s still not all-purpose. “Artificial general intelligence is the kind that you might expect from having a conversation with someone to solve all sorts of problems,” says Professor Rees.

By looking at the bigger picture, general AI addresses real-world situations. It means developers and experts working together to design systems that have a positive impact in the real world. “Co-creation is something that UCL is particularly good at because we have a full range of expertise, from health, to ethics, to banking,” says Professor Rees, “and we’re thinking hard about how to connect them. Hopefully we can play a role in helping to choose the right problems, as well as helping solve those problems.” UCL’s AI for People and Planet strategy reinforces its focus on research and innovation that benefits societies around the world.

So how is UCL bringing AI into the real world?


“If you have a problem with your eyes, you go to an emergency ophthalmologist and have an Optical Coherence Tomography (OCT) scan of your retina. It’s a very common investigation but the scan is quite hard to interpret because it’s black and white and quite indistinct,” says Professor Rees. It’s invaluable in distinguishing a genuine emergency that needs treatment now from issues that could wait, but it requires expert knowledge and time to interpret. Dr Pearse Keane, a Clinician Scientist at UCL’s Institute of Ophthalmology and Consultant Ophthalmologist at Moorfields Eye Hospital, has worked in collaboration with AI experts at DeepMind Health, founded by UCL alumni Demis Hassabis, Shane Legg and Mustafa Suleyman, to create an AI algorithm that accurately interprets the OCT scan, overlaying a diagram to highlight the abnormalities and make an accurate referral. “The performance the AI system achieved was equivalent or even superior to that of highly trained experts at Moorfields,” says Professor Rees. “It speeds up the process, copes with an increasing number of scans and frees up the clinicians to focus on the patient.” The Institute of Ophthalmology is putting the technology into clinical trials and implementing it in clinical practice.


A good example of the real-world use of AI was developed by then PhD student, now Senior Research Associate Dr Amy Nelson and Dr Parashkev Nachev of UCL’s Institute of Neurology. It tackles the thorny problem of people who miss their NHS appointments. The present methods for solving this problem are calling, texting or writing a reminder to every patient before their appointment. This is both inefficient and expensive. “Amy asked the very simple question,” says Professor Rees: “If we took the information we already have in NHS computers about patients who are going to turn up for their appointment (her study case was with scan appointments), could we use a deep-learning algorithm to better predict who would not turn up and then focus on helping them?” The team found that you could predict quite accurately who would not turn up. “It’s applying this highly sophisticated technique to a very practical problem,” says Professor Rees, “and exemplifies the idea of finding the right use cases; practical examples that you can implement in the real world.” This AI solution is now being implemented at UCL’s partner hospitals and has also been picked up by the Health Secretary, Matt Hancock, to be part of NHSX, a new unit that will oversee the digital transformation of the health and care system.

Bats are a tricky mammal to observe, but each bat species has its own distinctive call which it uses to navigate and find prey


One of Professor Rees’s favourite applications of AI is a system for monitoring bat populations, developed by Kate Jones, Professor of Ecology and Biodiversity at UCL. The nocturnal lifestyle of bats makes them a tricky mammal to observe; however, each bat species has its own distinctive call which it uses to navigate and find prey – echolocation. “Professor Jones is mapping bats by getting people to download an app to their phone,” says Professor Rees. “They then stick the phone (securely and safely) to the outside of their car window and record bat sounds as they drive around. An artificial intelligence algorithm matches the bat sounds to their species and the phone provides GPS data so we can create a bat map.” Applications like this are vital for conservation biology in order to understand the status of bat and other declining wildlife populations.


Looking to the future of AI, Professor Rees expects that machine learning will make some fundamental scientific advances. He cites protein folding as the next breakthrough. “Genetic code is translated by the machinery inside us into proteins. The genetic code is essentially a long list of instructions, which creates the proteins. But proteins aren’t long. They fold up after the translation, so one of the unsolved challenges is, what shape do these proteins make?” By using AI and deep-learning algorithms, DeepMind is making good progress. “It’s not completely solved,” admits Professor Rees, “but it’s a fundamental problem in biology and once you can open out a protein structure, there are huge applications in terms of the design of drugs, because the drugs often target the proteins.”


The brave new world of AI brings with it issues and challenges. Professor Rees thinks that immediate problems, such as the direction that social media is going in, polarisation in society, and data breaches and abuses, contribute to public wariness of AI. “Take facial recognition,” says Professor Rees. “It works quite well when used internally in airports but not out there in the wild. Before it starts being used everywhere, we need to have a conversation with policy makers and citizens to understand what the limits are. The technology itself isn’t intrinsically good or evil, but while investing in all these advances in machine behaviour, we also have to concentrate on understanding human behaviour.”

Here are his hot AI topics.


“We live in a society that is very individual and personalised, but many of us are worried that society has gone a bit too far towards the individual and that we need to think about the collective,” says Professor Rees. He cites MMR vaccinations as a good example. “We have this tension between an individual who says, erroneously, that they don’t believe that vaccines are safe and then suddenly we have these community effects, like the recent measles outbreak.” Understanding and balancing the effects of a technology at both individual level and population level is vital. “To what extent do we want the technology to enable us all to jump into driverless taxis?” asks Professor Rees. “If we all did, then we’d find we’re all stuck in a traffic jam, because every single person has a driverless taxi. In that extreme case we haven’t optimised at the level of actually getting anywhere!” These are important tensions to think about and understand. There may be benefits at a population level of doing something that aren’t reducible to the individual. “The herd immunity that we get from our children being vaccinated, which eliminates measles, is a population effect. That’s going to be more important, and I don’t think we’ve started to scratch the surface of that within AI,” admits Professor Rees. He sees too many tech companies aiming at individuals. “They’re not yet thinking as much about how we benefit a population.”


Among the building blocks of most artificial intelligence systems are machine-learning algorithms that typically ingest vast amounts of data and then discover patterns, hidden or apparent. “The danger is that there is a sort of garbage in, garbage out,” says Professor Rees. “If you ingest biased data then you get a biased answer.” He recalls a well-known internet company that decided to speed up its hiring process by developing an AI agent to recruit software engineers. Unfortunately, at present most software engineers are men, so when the agent ingested all the data it inevitably concluded that it was easier to select just men. The intelligent humans using the algorithm immediately realised that this was wrong and stopped using it. It’s an example of how you can replicate in silico a bias that exists in our society. “It would be terrible if we were to use technology just to replicate those biases,” says Professor Rees, “but how do we work out if an algorithm has a bias and how do we measure a bias? Could we work out that something was biased and develop an algorithm that counter-biased it?” It’s a different way of thinking about AI and, from an ethical perspective, could be used to remedy existing real-world biases.

In 2018 UCL co-launched the Institute for Ethical Artificial Intelligence in Education to examine the assumptions about human behaviour that underlie current AI development and how social values are manifested in AI design. The IEAIE are looking at how ethical frameworks can be grounded in responsible innovation and integrated with our assumptions to transform how AI innovators make decisions when designing for educational AI.


It’s fair to say that most people would expect an AI system to be explainable. If a diagnostic algorithm recommends a treatment or surgery, then you would want to know why. “Often algorithms can feel like black boxes; data goes in and a decision comes out,” says Professor Rees. “If your ‘why’ cannot be answered, you might have less trust in the algorithm.” However, Professor Rees suggests that something similar happens when you see a doctor. They too take in huge amounts of information about you and duly recommend a course of treatment. “You can then ask them why, but the answer they give you wouldn’t necessarily be complete or accurate. As a psychologist, I know that the stories we tell ourselves about why we make particular decisions are not always as accurate as we think they are,” says Professor Rees. He thinks that the way forward is to design AI systems that have a degree of explainability – and to think carefully about what counts as a good explanation.


In 2002, the British Medical Journal asked its readers what makes a good doctor. “Setting aside competency, the words used included ‘compassion’, ‘empathy’, ‘respect’ and ‘dignity’. They’re some of the deepest human words,” says Professor Rees, “and some of the most difficult to replicate with artificial general intelligence. So one idea about how AI systems might coexist with us is that they give us time to care.” People say that a problem with seeing your GP is that technology gets in the way. The GP is busy typing into the computer because that’s the process, and there’s appointment-making software and digital admin that gets in the way. “What if we had a healthcare system where time was freed up for the person looking after you to care and to inquire about your symptoms?” asks Professor Rees.

“What if the AI technology were in the air, listening and plucking out phrases like ‘We’ll see you in six months’ time’ and that it was all automatically recorded, scheduled in your diary and texted to your phone without it needing to take up valuable time in your consultation slot? What if it was all recorded electronically and automatically put on your patient health record rather than the doctor or some healthcare professional having to type while they were talking to you? I think most people would intuitively see that as a good thing, and I think we can empirically demonstrate that it’s a good thing in terms of better health, better care and better outcomes. It all comes back to our fundamental idea of designing AI for people and planet rather than as some sort of replacement for humans.”

Professor Geraint Rees is the Pro-Vice-Provost (AI) and Dean of UCL Faculty of Life Sciences. For more information about AI at UCL, visit

Portico Issue 6. 2019/20