By Jeffrey R. Willis, Associate Medical Director at Genentech
In a proof-of-concept study, our team explored whether deep learning could estimate macular thickness, the key measure of diabetic macular oedema (DME), from standard retinal photographs.
Tragically, many people lose their sight to DME in the prime of their lives, making it harder for them to work and care for themselves. The rising prevalence of diabetes translates directly into new cases of DME: by 2045 the number of people with diabetes is projected to reach 629 million worldwide, and roughly 10 percent of them will have vision-threatening eye disease.
The best way to prevent vision loss from DME is through regular eye exams, but an estimated 60 percent of people with diabetes don’t get them. The exams use a technique called colour fundus photography (CFP), which takes a two-dimensional image of the retina.
Although CFP provides valuable information, the gold standard for diagnosing DME and determining the need for treatment is optical coherence tomography (OCT), which takes a three-dimensional measurement of the macula, the central part of the retina that thickens as DME progresses. A macular thickness of 250 microns is considered the threshold for the condition, and at 400 microns many ophthalmologists recommend starting treatment. However, OCT is often unavailable in screening programmes due to cost and technical limitations.
Our team decided to explore whether we could use deep learning to teach computers to estimate macular thickness from CFP images, making DME diagnosis easier for patients and ophthalmologists. Currently, CFP images are interpreted by specialists who, over years of practice, develop the ability to gauge the retina’s thickness from the features they see on its surface, but who still rely on OCT for confirmation and measurement. Our team wanted to build a similar ability into an automated system.
For people with DME who have fluid leaking into the retina, bright spots on colour fundus photographs (hard exudates) serve as an indirect sign of macular thickening, making the condition easier to see and diagnose.
In deep learning, a computer trains itself to detect patterns and relationships in a set of training data, using hundreds of layers of analysis that each pick up different relevant features in an image without any guidance from a user. The system then applies its knowledge to novel input data of the same type. In this case, we gave our computers a large set of CFP and OCT data from participants in two large DME clinical trials to train on.
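The supervised setup described above can be illustrated with a toy example. This is a hypothetical sketch, not the study's actual model: a deep network would learn features from raw CFP pixels, whereas here a single synthetic feature stands in for them, and a simple linear model is fitted by gradient descent to paired "image feature / OCT thickness" examples.

```python
import random

# Hypothetical illustration of the supervised setup: each training example
# pairs an image-derived feature (a stand-in for what a deep network would
# extract from a CFP photo) with an OCT-measured macular thickness in microns.
random.seed(0)

# Synthetic data: thickness ≈ 300 + 40 * feature, plus measurement noise.
features = [random.gauss(0.0, 1.0) for _ in range(500)]
thickness = [300.0 + 40.0 * x + random.gauss(0.0, 5.0) for x in features]

w, b = 0.0, 0.0   # model parameters: slope and intercept
lr = 0.05         # learning rate
n = len(features)

# Training loop: repeatedly nudge w and b downhill on the squared error
# between predicted and OCT-measured thickness.
for _ in range(2000):
    grad_w = sum((w * x + b - y) * x for x, y in zip(features, thickness)) / n
    grad_b = sum((w * x + b - y) for x, y in zip(features, thickness)) / n
    w -= lr * grad_w
    b -= lr * grad_b

rmse = (sum((w * x + b - y) ** 2 for x, y in zip(features, thickness)) / n) ** 0.5
print(f"learned slope {w:.1f}, intercept {b:.1f}, RMSE {rmse:.1f} microns")
```

After training, the prediction error shrinks towards the noise level of the synthetic measurements; a real CNN does the same thing at vastly larger scale, with the feature extraction itself learned from the pixels.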
The deep learning system examined a total of 17,997 CFP images from about 700 participants and compared them with the corresponding OCT thickness measurements. The best model we developed with this training set predicted macular thickness greater than the 250 micron threshold with 97 percent accuracy, an impressive level of performance. Deep learning could even reliably predict the actual OCT measurement of macular thickness from a CFP image, provided the photograph was of sufficient quality.
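In screening terms, the regression output is turned into a yes/no call at the 250 micron threshold, and accuracy is simply the fraction of images where the model's call matches the OCT ground truth. A small sketch with made-up numbers (not the study's data):

```python
# Hypothetical predicted vs OCT-measured macular thickness, in microns.
predicted = [240, 310, 255, 198, 405, 260, 245, 330]
measured  = [235, 320, 248, 210, 390, 270, 250, 335]

THRESHOLD_UM = 250  # thickness above this is considered DME

# Convert each measurement into a binary "thickened / not thickened" call.
pred_calls = [t > THRESHOLD_UM for t in predicted]
true_calls = [t > THRESHOLD_UM for t in measured]

# Accuracy: fraction of images where the model's call agrees with OCT.
accuracy = sum(p == t for p, t in zip(pred_calls, true_calls)) / len(measured)
print(f"accuracy: {accuracy:.0%}")
```

Note that an accuracy figure alone hides which way the errors fall; in practice screening programmes also weigh sensitivity (missed DME) against specificity (unnecessary referrals).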
This initial finding surpassed our expectations, and we wanted to learn more about how it happened. When we looked into it, we were thrilled to find that the computer was focusing on the same parts of the images that specialists have relied on for years, such as the contours and calibre of blood vessels.
Before drawing firm conclusions, we still need to validate our system on independent datasets. But presuming it performs well, this tool could be of tremendous value to ophthalmologists as they treat people with diabetes and DME. Once people with DME begin treatment, for example, many of them have to be seen every four weeks for OCT testing to ensure that their condition is not progressing. AI might enable people to use a cell phone camera to monitor their retinal tissue in real time, making it much easier for doctors to keep track of their patients’ need for and response to treatment. We could even envision an app to assess whether treatment is working. Such an innovation would not only be more convenient for patients, but would also make them much more active participants in their own care. For ophthalmologists, the ability to estimate macular thickness from CFP would make it easier to identify the most urgent cases and treat them quickly and appropriately.
One important lesson of this experiment was the value of having a large clinical trial dataset to train our system on. Machine learning, which encompasses deep learning as well as other techniques that computers use to develop knowledge bases for data analysis, depends on robust, high-quality and representative training data for success. That is an asset that an organisation like ours possesses in abundance, in the form of lab measurements, clinical trial data and real-world information.
And using this data to support diagnosis is just the beginning: there may be clues in CFP images that help AI personalise DME care by predicting which people will progress most quickly or respond well to treatment. Other sources of data associated with trials could be leveraged as well, including medical history, genomics and other information. In the end, we hope this data-driven approach will produce a much better understanding of DME, diagnostic improvements that deliver needed treatments faster and, ultimately, preserved vision for people with diabetes.
This is the first manuscript published as part of Roche/Genentech’s Ophthalmology Personalised Healthcare initiative, which aims to combine meaningful large-scale data and AI technology to predict and prevent ocular conditions and preserve vision. The study adds to the growing literature about the use of AI in ophthalmology. It also sheds light on how Roche/Genentech can utilise its vast clinical trial database to develop AI algorithms to predict the presence of disease, risk of disease progression, and response to treatment; all of which could be supplied to ophthalmologists to deliver higher quality personalised healthcare.
This article first appeared on