The blood vessels in the eye reflect the state of the whole circulatory system.

The idea behind using a neural network for image recognition is that you don't have to tell it what to look for in an image. You don't even need to care about what it looks for. With enough training, the neural network should be able to pick out details that allow it to make accurate identifications.

For things like figuring out whether there's a cat in an image, neural networks don't offer much, if any, advantage over the actual neurons in our visual system. But where they can potentially shine is in cases where we don't know what to look for. Images may carry subtle information that a human doesn't know how to read but that a neural network could pick up on with the appropriate training.

Now, researchers have done just that, getting a deep-learning algorithm to identify risks of heart disease using an image of a patient's retina.

The idea isn't quite as nuts as it might sound. The retina has a rich collection of blood vessels, and it's possible to detect problems there that also affect the circulatory system as a whole; things like high cholesterol or elevated blood pressure leave a mark on the eye. So a research team consisting of people at Google and Verily Life Sciences decided to see just how well a deep-learning network could do at figuring those out from retinal images.

To train the network, they used a total of nearly 300,000 patient images tagged with information relevant to heart disease like age, smoking status, blood pressure, and BMI. Once trained, the system was set loose on another 13,000 images to see how it did.
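What the researchers describe is a standard holdout evaluation: train on one set of images, then test on images the network has never seen. As a rough illustration of that bookkeeping (the function name and record format here are made up, not taken from the study), the split looks like this:

```python
import random

def train_eval_split(records, eval_size, seed=0):
    """Shuffle patient records and hold out eval_size of them for testing.

    The model never sees the held-out records during training, so its
    performance on them estimates how it would do on new patients.
    """
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    return shuffled[eval_size:], shuffled[:eval_size]

# Hypothetical scale matching the study: ~300,000 images for training,
# ~13,000 held out for evaluation.
records = list(range(313_000))
train, evaluation = train_eval_split(records, eval_size=13_000)
print(len(train), len(evaluation))  # → 300000 13000
```

The key point is that the 13,000 evaluation images play no part in training, so the accuracy numbers reported on them aren't inflated by memorization.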

Simply by looking at the retinal images, the algorithm was typically able to get within 3.5 years of a patient's actual age. It also did well at estimating the patient's blood pressure and body mass index. Given those successes, the team then trained a similar network to use the images to estimate the risk of a major cardiac problem within the next five years. It ended up having similar performance to a calculation that used many of the factors mentioned above to estimate cardiac risk—but the algorithm did it all from an image, rather than some tests and a detailed questionnaire.
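The "within 3.5 years" figure is an average error between predicted and actual ages: a mean absolute error. A minimal sketch of that metric, using made-up predicted and actual ages (these numbers are illustrative, not from the study):

```python
def mean_absolute_error(predictions, targets):
    """Average absolute difference between predicted and true values."""
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(predictions)

# Hypothetical predicted vs. actual ages for five patients.
predicted = [52.1, 47.3, 61.0, 38.9, 70.2]
actual    = [55.0, 44.0, 63.5, 40.0, 74.0]

print(round(mean_absolute_error(predicted, actual), 2))  # → 2.72
```

A mean absolute error of 3.5 years means that, averaged over the test set, the algorithm's age estimate missed the true age by about 3.5 years in either direction.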

The neat thing about this work is that the algorithm was set up so it could report back what it was focusing on in order to make its predictions. For things like age, smoking status, and blood pressure, the software focused on features of the blood vessels. Training it to predict gender caused it to focus on specific features scattered throughout the eye, while body mass index ended up with no obvious focus, suggesting that signals of BMI are spread throughout the retina.
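The article doesn't spell out how the network reports what it focuses on (the study used a form of soft attention), but a common related technique for probing any image model is occlusion sensitivity: mask each region of the image and measure how much the prediction changes. A toy sketch of that idea, where the "model" and the image are stand-ins rather than the study's actual network:

```python
import numpy as np

def occlusion_saliency(model, image, patch=2):
    """Mask each patch of the image and record how much the model's
    output changes; a larger change marks a more important region."""
    base = model(image)
    h, w = image.shape
    saliency = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            saliency[i:i + patch, j:j + patch] = abs(model(masked) - base)
    return saliency

# Toy "model": responds only to the top-left quadrant of the image.
toy_model = lambda img: img[:4, :4].sum()

image = np.ones((8, 8))
sal = occlusion_saliency(toy_model, image)
# The saliency map highlights only the region the model depends on.
print(sal[:4, :4].sum() > 0, sal[4:, 4:].sum() == 0)  # → True True
```

The same principle underlies the study's finding: regions whose occlusion (or attention weight) matters for the blood-pressure prediction turn out to be the blood vessels, while BMI shows no such concentrated region.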

The researchers say that even a 300,000-image training set is small for a deep-learning algorithm, so they think they could do better given more data to work with. And improvement is needed: they note that matching the diagnostic calculation isn't all that impressive, since the calculation itself has a large uncertainty. With some improvement, though, the algorithm could be a useful diagnostic tool, since retinal images are often taken to screen for eye problems associated with diabetes—which, in turn, is often associated with cardiac disease.