Authors: Michael Bowen, Director of Research, and Daniel Hardiman-McCartney MCOptom, Clinical Adviser
Date: 20 July 2018
This year we have seen the publication of some truly remarkable research relating to computers’ abilities to make independent predictions from a fundus image. Recent research has demonstrated the detection of diabetic retinopathy and of cardiovascular risk factors from fundus images, both impressive feats, but perhaps not surprising given that humans have had such abilities for decades. However, recently published results relating to the prediction of a person’s age, sex, smoking status and, now, refractive error have left clinicians both amazed and completely confused as to how such predictions have been made. This technology is significant: it can act as a ‘force for good’, helping to save sight once deployed, while at the same time it is unnerving for optometrists, since such innovation will in the foreseeable future disrupt the practice of optometry.
The term artificial intelligence (AI) has been around since the mid-1950s but is often used quite loosely. ‘Machine Learning’, ‘Deep Learning’ and ‘Big Data’ are commonly used interchangeably, resulting in confusion; it is important to distinguish between them. In brief, AI is the concept of machines being able to carry out tasks in a way humans would consider ‘smart’. Machine learning is an application of AI, based on the idea that we should be able to just give computers access to the data and let them learn for themselves.
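The idea above, that we give computers data and let them learn for themselves rather than hand-coding rules, can be illustrated with a deliberately tiny sketch. All of the data and names below are invented for illustration; this is not code from any of the research discussed.

```python
# A minimal sketch of the machine-learning idea: instead of writing
# a classification rule by hand, we give the program labelled data
# and let it derive the rule itself (here, the mean of each class).

def train_centroids(examples):
    """Learn the mean (centroid) of each class from labelled data."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Assign the class whose learned centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Hypothetical labelled measurements (value, class):
data = [(1.0, "A"), (1.2, "A"), (0.8, "A"),
        (3.0, "B"), (3.3, "B"), (2.9, "B")]

model = train_centroids(data)   # the program 'learns' from the data
print(predict(model, 1.1))      # → A
print(predict(model, 3.1))      # → B
```

The point of the sketch is only that the decision rule is never written down by the programmer; it is extracted from the examples, which is the essence of the machine-learning idea described above.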
It is an advanced form of machine learning, called Deep Learning, that has recently enabled some quite remarkable predictions from fundus images, such as sex and refractive error. Deep learning is a form of machine learning that makes use of artificial neural networks. An artificial neural network is a computer system designed to classify information in the same way a human brain does. The development of artificial neural networks has been key to teaching computers to think and understand the world in the way humans do, while retaining the innate advantages they hold over us, such as speed, accuracy and lack of bias. Based on a system of probability and the data fed to it, such a system is able to make statements, decisions or predictions with a degree of certainty. A feedback loop enables ‘learning’: by being told whether its decisions are right or wrong, the system can modify the approach it takes in the future.
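The feedback loop described above can be sketched with the simplest possible artificial neuron, a perceptron: it makes a prediction, is told whether it was right or wrong, and nudges its internal weights accordingly. The task (learning the logical AND function), the learning rate and the number of passes are all illustrative assumptions, not details from the research discussed; real deep-learning systems chain many thousands of such units.

```python
# A single artificial 'neuron' trained by feedback: each wrong answer
# produces an error signal that adjusts the weights, so the neuron's
# future decisions improve.

def step(x):
    return 1 if x >= 0 else 0

def train(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            predicted = step(w0 * x0 + w1 * x1 + bias)
            error = target - predicted        # the feedback signal
            w0 += lr * error * x0             # modify the approach
            w1 += lr * error * x1             # taken in the future
            bias += lr * error
    return w0, w1, bias

# Learn the logical AND function purely from labelled examples:
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train(samples)

for (x0, x1), target in samples:
    assert step(w0 * x0 + w1 * x1 + bias) == target
```

No rule for AND was ever programmed: the weights start at zero and are shaped entirely by the right/wrong feedback, which is the ‘learning’ loop the paragraph above describes.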
The UK is at the forefront of using AI within eye health. Ophthalmologist Pearse Keane leads the research collaboration between Moorfields and Google DeepMind. This work is currently focused on diabetic retinopathy and AMD, and in particular on the ability of health systems to deal with the large volumes of complex data now accompanying referrals in the form of digital fundus images and OCT scan data. Although the projects are still in their early stages, Keane recently spoke at the Royal Society’s Science+ event, where he pointed out that the AI system was already able to distinguish between male and female fundus images, based on the image alone, and none of the eye health professionals involved currently understand how this is being achieved. Deep learning technology cannot necessarily communicate how it arrived at its findings.
It is important to note that numerous, significant challenges remain before we have genuine AI (machines that are conscious and self-aware), or even systems sufficiently reliable to provide professional-level diagnosis. Stark evidence that we have a way to go is offered by the application of Watson, IBM’s flagship system for AI in healthcare, to oncology imaging. Despite studies showing that Watson routinely performed better than humans in terms of sensitivity and specificity of tumour detection and identification, researchers and clinicians remain too distrustful of the system for it to be used effectively. In part, this may be linked to the fact that IBM has never allowed an independent study of Watson. It may also reflect current socio-cultural attitudes or a lack of general awareness of how Watson works.
Such an example suggests that if AI is to be effectively utilised at the front line, people will continue to be key. For the present, practitioners would be well advised to follow the advances of AI in ophthalmology closely. Despite the recent successes of AI (most recently, a version of Watson taking part in a debate), the role of the clinician may be essential in providing an interface with the new technology, and important in communicating its findings. The skills of future optometrists may be increasingly focused on communication and compassion.