An artificial intelligence-enabled radar holds promise for providing better visual functional assessment of patients with glaucoma compared to currently available tools, according to Siamak Yousefi, PhD.
The glaucoma radar is being developed for use in clinical practice and glaucoma research as a personalized staging and monitoring tool for glaucoma assessment. It provides three layers of knowledge about the disease: the severity of global visual functional loss, the extent of visual functional loss in hemifields, and local patterns of visual functional loss. It can identify rapidly and slowly progressing eyes and retains data from thousands of past glaucoma patients.
According to Dr. Yousefi, assistant professor, Department of Ophthalmology and Department of Genetics, Genomics, and Informatics, University of Tennessee, Memphis, TN, visual function assessment of patients with glaucoma is typically done using standard automated perimetry.
“While this technique is well established, most of the methods for longitudinal visual field data analysis have a number of limitations,” he said. “The algorithms rely on traditional paradigms such as linear progression, but glaucoma progression may be nonlinear, especially at the later stages of disease. In addition, they generate a binary outcome of progression or no progression, adopt ad-hoc rules to identify progression, and lack advanced visualization and interpretation.”
Dr. Yousefi explained that the artificial intelligence radar is an advanced computational tool that can provide more detailed analyses.
“It can generate an informative, objective outcome with multiple layers of glaucoma knowledge that may enable development of more interpretable models,” he said.
The radar was developed using more than 13,000 visual fields from more than 8,000 subjects. Principal component analysis was applied to linearly reduce the number of dimensions and extract the global characteristics of the visual fields, and manifold learning was applied to identify the local patterns of visual field loss.
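The two-stage reduction described above can be sketched roughly as follows. This is a minimal illustration, not Dr. Yousefi's implementation: the data are synthetic, and t-SNE is an assumed stand-in for the unnamed manifold-learning method.

```python
# Hypothetical sketch: PCA for global characteristics of visual fields,
# then a manifold-learning embedding for local structure. Synthetic data;
# t-SNE is an assumed choice (the article does not name the method).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in data: 500 synthetic "visual fields", each with 54 test points
# (the layout of a standard 24-2 perimetry grid).
fields = rng.normal(loc=-2.0, scale=5.0, size=(500, 54))

# Linear reduction: keep components capturing global patterns of loss.
pca = PCA(n_components=10)
global_features = pca.fit_transform(fields)

# Nonlinear embedding: map fields into a 2-D "cloud" preserving local structure.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(global_features)
print(embedding.shape)
```

On real perimetry data, each embedded point would correspond to one visual field, so nearby points in the cloud represent similar patterns of loss.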
The map or “cloud” of datapoints from the 13,000+ visual fields was then annotated by first identifying very dense regions of the map and then applying unsupervised clustering, which identified 32 nonoverlapping clusters representing visual fields at different levels of severity.
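The clustering step might look something like the sketch below. This is an illustrative assumption: the article does not specify the algorithm, so k-means is used as a stand-in, applied to a synthetic 2-D cloud in place of the embedded visual fields.

```python
# Hypothetical sketch: partition the 2-D cloud of embedded visual fields
# into 32 nonoverlapping clusters. KMeans is an assumed stand-in; the
# article does not name the clustering algorithm used.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
cloud = rng.normal(size=(2000, 2))  # stand-in for the embedded fields

kmeans = KMeans(n_clusters=32, n_init=10, random_state=0)
labels = kmeans.fit_predict(cloud)

# Each point receives exactly one label, so clusters are nonoverlapping.
print(len(np.unique(labels)))
```

With real data, each cluster could then be annotated with the mean deviation of its member fields, giving the severity levels the radar displays.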
The 32 clusters on the radar “dashboard” had different mean deviations of the visual fields, corresponding to patterns of global visual functional severity, with normal eyes located on the right side of the dashboard and eyes with severe visual functional loss at the left and bottom-left corner.