miophthalmology


AI in Cataract and Refractive Surgery: The Implications and Outcomes

Dr Matt Russell explores how artificial intelligence is transforming cataract and refractive surgery, its current applications, future potential, and ethical implications.

WRITER Dr Matt Russell

Artificial intelligence (AI) is no longer a futuristic concept – it is a transformative force already reshaping modern medicine. AI has its origins in the 1950s with early machine learning algorithms, but it has since evolved into sophisticated deep learning (DL) models capable of surpassing human expertise in specific diagnostic tasks. Ophthalmology, with its reliance on imaging and structured data, stands at the forefront of this revolution. Key milestones include the United States Food and Drug Administration’s (FDA) approval of autonomous AI devices for diabetic retinopathy screening (such as EyeArt1 and IDx-DR2), as well as the successful integration of glaucoma progression tools.3

Additionally, AI has the potential to transform how eye care professionals manage patients undergoing cataract and refractive surgery, where precision and meeting patients’ increasingly high expectations are paramount. It is being used to reshape every stage of the surgical journey, from biometry tools to intraoperative guidance systems, to predictive analytics in postoperative care.

Keeping up with AI advancements is critical to understand how the delivery of surgical eye care is set to rapidly change over the next decade.

THE FOUNDATIONS OF AI IN OPHTHALMOLOGY

AI refers to computational systems capable of performing tasks that typically require human intelligence. These include reasoning, learning, pattern recognition, and language processing. AI technologies in ophthalmology are largely underpinned by machine learning (ML) and deep learning (DL), which can process vast datasets to recognise patterns and deliver predictive insights.

ML systems are often categorised by their learning approach. Supervised learning involves training algorithms on labelled datasets – for example, predicting postoperative visual outcomes based on preoperative biometry. Unsupervised learning, by contrast, identifies hidden patterns in unlabelled data, such as patient clusters with similar corneal profiles. Reinforcement learning allows systems to optimise decision-making strategies through trial and error and has potential applications in surgical robotics.
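The supervised case can be made concrete with a minimal sketch: fitting an ordinary least squares line to labelled pairs of preoperative biometry and postoperative refraction. Every number below is synthetic and purely illustrative, not drawn from any clinical dataset.

```python
# Minimal illustration of supervised learning: fit a straight line
# (ordinary least squares) to labelled training pairs of preoperative
# axial length (mm) -> postoperative refractive error (D).
# All values are synthetic and for illustration only.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

axial_lengths = [22.5, 23.0, 23.5, 24.0, 24.5, 26.0]   # labelled inputs
postop_errors = [0.30, 0.18, 0.10, 0.00, -0.10, -0.45]  # labelled outcomes (D)

slope, intercept = fit_line(axial_lengths, postop_errors)

def predict(axial_length):
    """Predict postoperative refractive error for an unseen eye."""
    return slope * axial_length + intercept
```

The point is the workflow, not the model: labelled outcomes supervise the fit, and the learned relationship is then applied to unseen eyes. Real systems replace the straight line with far richer models and many more variables.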

DL models often employ convolutional neural networks (CNNs), which are highly effective at interpreting ophthalmic images like optical coherence tomography (OCT), fundus photographs, and corneal topographies. Meanwhile, large language models (LLMs) such as GPT (Generative Pre-trained Transformer) are revolutionising the documentation and communication side of care, powering natural language platforms that transcribe, document, and summarise patient encounters, generating reports and medical record entries automatically. LLMs even have the potential to create sophisticated AI agents that support patients throughout their care.

Several AI tools have already received FDA approval, marking important milestones in clinical acceptance. IDx-DR was the first autonomous AI system approved to detect diabetic retinopathy without human oversight.2 EyeArt by Eyenuk is another FDA-approved tool that automates diabetic retinopathy screening with high sensitivity and specificity.1 iPredict by iHealthScreen has been approved by the Therapeutic Goods Administration (TGA) for early detection of diabetic retinopathy, age-related macular degeneration, and glaucoma suspect status using retinal photography.4 These approvals underscore the potential of AI to decentralise care and expand access to critical screenings.

Validated AI tools are also used to track glaucoma progression (e.g., Cambridge Ophthalmology’s DARC technology),5 detect keratoconus, and stratify cataract surgical risk. These successes have spurred the development of AI for anterior segment diagnostics, intraocular lens (IOL) calculations, surgical workflow optimisation, and patient education.

CORNEAL REFRACTIVE SURGERY: AI’S EXPANDING ROLE

Advancements in AI are transforming how candidates for refractive surgery are evaluated. Laser vision correction (LVC) procedures – laser-assisted in situ keratomileusis, photorefractive keratectomy, and small incision lenticule extraction (LASIK, PRK, SMILE) – and implantable collamer lens (ICL) surgeries offer safe and effective vision correction, but careful patient selection is critical to avoid complications. Identifying patients at risk of adverse outcomes and matching potential candidates to the optimal treatment is complex and nuanced. Traditionally, surgeons rely on risk scoring systems (e.g. Randleman ectasia score)6 and subjective judgement to screen for conditions like keratoconus, thin corneas, or severe dry eye that contraindicate surgery. AI promises to enhance this screening by amalgamating and analysing complex diagnostic data such as corneal topography/tomography, epithelial thickness maps, and biomechanical metrics to detect subtle warning signs and provide decision support. Recent studies show that AI models can outperform traditional methods for assessing patient suitability for refractive surgery.7


“Advancements in AI are transforming how candidates for refractive surgery are evaluated”


One of the most critical screening tasks is identifying corneas at risk of post-LASIK ectasia – often associated with undiagnosed subclinical keratoconus (KC) or biomechanical weakness. AI algorithms have shown exceptional accuracy in detecting keratoconus and high-risk corneas by analysing corneal imaging data. Unlike the conventional Randleman score or Belin-Ambrósio indices that rely on fixed cut-offs, AI can mine nuanced patterns from topography and tomography maps, pachymetry distributions, and posterior corneal elevation that may elude human observers. For example, Lopes et al.7 developed an AI-enhanced tomographic assessment that improved early ectasia detection beyond subjective expert evaluation. Similarly, Santhiago et al.8 introduced an ‘ectasia risk model’ based on machine learning, which avoids arbitrary cut-offs and more accurately flags high-risk eyes compared to prior scoring systems. In practice, this means AI can provide a continuous risk probability rather than a binary pass/fail, catching borderline cases for closer scrutiny.


“AI algorithms have shown exceptional accuracy in detecting keratoconus and high-risk corneas by analysing corneal imaging data”


Corneal topography and tomography are fundamental inputs for these models. Many efforts use Scheimpflug tomography (e.g. Oculus Pentacam) data, which provides anterior and posterior corneal maps and pachymetry. Studies have applied random forests, support vector machines, and neural networks to Pentacam-derived parameters with considerable success. In a review of 62 ML studies for KC diagnosis (spanning 2015–2024), random forest was the most common algorithm (27%), followed by deep CNNs (16%), and Pentacam was the leading imaging modality (used in 56% of studies).9 This reflects how well established Scheimpflug data has become in training AI to distinguish normal corneas from forme fruste and clinically significant keratoconus. For instance, Yoo et al.10 reported that a random forest model combining multiple Pentacam indices improved ectasia risk stratification, suggesting better sensitivity than the Randleman score.
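The voting principle behind such tree ensembles can be sketched in miniature: many simple "trees" (here, one-rule decision stumps), each trained on a bootstrap resample of labelled eyes, vote on a new case. This is a didactic stand-in, not a clinical model – the two Pentacam-style features (maximum keratometry in dioptres, thinnest pachymetry in microns) and all labels are invented.

```python
import random

# Toy "forest" of decision stumps illustrating the random-forest idea:
# bootstrap resampling + majority voting. All data are synthetic.

def train_stump(sample):
    """Pick the (feature, threshold, polarity) rule with fewest errors."""
    best, best_err = (0, 0.0, True), len(sample) + 1
    for f in range(len(sample[0][0])):
        for x, _ in sample:
            for gt in (True, False):
                err = sum(((xi[f] > x[f]) == gt) != yi for xi, yi in sample)
                if err < best_err:
                    best, best_err = (f, x[f], gt), err
    return best

def stump_predict(stump, x):
    f, threshold, gt = stump
    return (x[f] > threshold) == gt

def train_forest(data, n_trees=15, seed=0):
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def forest_predict(forest, x):
    # majority vote across all stumps
    return sum(stump_predict(s, x) for s in forest) > len(forest) / 2

# synthetic labelled eyes: ((Kmax D, thinnest pachymetry um), keratoconus?)
eyes = [((43.0, 550), False), ((44.1, 540), False), ((42.5, 560), False),
        ((45.0, 530), False), ((48.5, 480), True), ((50.2, 460), True),
        ((47.8, 470), True), ((52.0, 440), True)]

forest = train_forest(eyes)
```

Real implementations use full decision trees over dozens of tomographic indices, but the ensemble-of-weak-learners structure is the same.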

Deep learning on topography maps has also been explored. CNNs can directly analyse Placido ring images or curvature maps. These have achieved high accuracy in classifying keratoconus versus normal corneas. Alió del Barrio et al. recently developed an artificial neural network (ANN) for automated keratoconus detection using a combined Placido plus OCT device (MS-39).11 Remarkably, the network reached 98.6% accuracy (96% precision, 97.9% recall) for keratoconus detection, and even for keratoconus suspect eyes (early disease), it attained 98.5% accuracy. The inclusion of corneal OCT-derived epithelial and stromal thickness maps was key – the authors noted that adding these new OCT indices significantly improved detection of subclinical cases. This ANN has been implemented in the MS-39 device’s software, meaning clinicians can presently obtain an AI keratoconus risk readout during screening. These results underscore that integrating multimodal corneal data (topography, tomography, and epithelial mapping) increases sensitivity in detecting subtle ectatic changes.

Ocular surface health is another pillar of refractive surgery screening. Moderate dry eye disease (DED) can both disqualify a patient from LASIK/PRK and present as a common postoperative complaint. Identifying patients with significant dry eye or meibomian gland dysfunction preoperatively is crucial for counselling and treatment before surgery. AI is beginning to play a role here by analysing clinical data and ocular surface images. Large databases and deep learning can synthesise patient history, symptoms, and examination findings to flag pre-existing mild DED before it becomes obvious. For example, AI has the capacity to correlate subtle corneal staining patterns or tear film metrics with reported symptoms to predict who is likely to have refractory dry eye after LASIK.12

One practical application of AI in ocular surface disease is in meibography – infrared imaging of the meibomian glands. Researchers have trained deep learning models to automatically segment meibomian glands and quantify dropout (gland loss) in meibography images. In a 2022 study published by Saha and colleagues in The Ocular Surface, a deep CNN analysed 1,600 meibography images, achieving 73% accuracy in classifying meibomian gland grades (meiboscores) on a validation set, compared to 53% by human experts.13 The AI also provided rapid, quantitative measures of gland area and even removed image artefacts, yielding a “fully automated, fast quantitative evaluation… sufficiently accurate for diagnosing dry eye disease”. Such AI-driven meibomian gland analysis can be integrated into refractive surgery workups to objectively identify patients with advanced meibomian gland dysfunction who might experience worse dry eye after a corneal procedure.

Beyond imaging, machine learning models have been developed that take into account lifestyle factors, blink rates, tear osmolarity, and questionnaire scores to predict dry eye severity.12 These could enable an overall ‘dry eye risk score’ for surgical candidates. In clinical practice, we are starting to see AI-assisted dry eye diagnostics (e.g., AI algorithms in topographers that assess tear film break-up patterns on the Placido rings).14


“Retrieval-Augmented Generation (RAG) systems can generate personalised consent documents or educational leaflets by combining patient-specific data from the EMR with verified clinical knowledge sources”


For refractive surgeons and clinics, AI enhances dry eye screening by adding objectivity and early detection. It can identify borderline cases (e.g., mild ocular surface disease) and quantify them, so that appropriate interventions (tear supplements, meibomian gland dysfunction therapy) can be implemented before treatment to improve outcomes. Nonetheless, these AI systems for dry eye are in earlier stages of adoption compared to keratoconus detection. Limitations include variability in imaging (e.g., different meibography machines) and the multifactorial nature of DED, which can be hard to capture fully with an algorithm. Still, as data grows, we expect AI-based dry eye assessment to become a standard part of refractive surgery workups, ensuring patients are not just optically suitable but also have a healthy ocular surface for surgery.

The MS-39 (CSO), mentioned previously, is a state-of-the-art multimodal anterior segment analyser that combines Placido-based topography with high-resolution anterior segment OCT.14 This hybrid imaging enables detailed epithelial thickness mapping, pachymetry profiling, and posterior corneal elevation analysis. When integrated with AI algorithms, the MS-39 becomes a powerful tool for detecting subtle epithelial compensation patterns – a key feature of early ectatic change that may go unnoticed with standard tomography alone. AI-assisted interpretation of MS-39 data can enhance diagnostic sensitivity and assist in surgical planning by flagging borderline candidates or suggesting additional biomechanical testing. This allows clinicians to proactively modify surgical approaches, such as opting for surface ablation techniques or excluding at-risk patients altogether.

Refractive surgery, with its precision demands and high patient expectations, is ideally suited for AI optimisation. AI systems now integrate topographic, biomechanical, epithelial, and subjective patient data to assess surgical suitability. Platforms such as the CSO AI Module for MS-39 aggregate inputs from corneal tomography, epithelial thickness maps, corneal hysteresis metrics, and meibography to construct predictive profiles for procedural success and complication risk.14 These systems are capable of flagging patients with early subclinical keratoconus, high dry eye potential, or borderline biometric parameters that might contraindicate LASIK or SMILE. Additionally, models developed by Kundu et al.,15 now integrated into decision support systems, are being trialled to combine ocular surface parameters and neural net-derived quality-of-vision predictions to personalise refractive modality selection. These tools enhance shared decision making, reduce enhancement rates, and increase the likelihood of spectacle independence while maintaining patient safety.

AI-driven nomograms are adjusting treatment profiles based on long-term clinical and biometric data, reducing risks of regression, overcorrection, and variability between surgeons.16 These models, like the ones integrated into the Schwind Cam software17 and Alcon’s Contoura Vision platform,18 dynamically adjust treatment parameters by analysing outcomes from thousands of previous procedures. For instance, they incorporate individualised ablation profiles, epithelial thickness changes, and corneal biomechanics to customise treatment plans for each eye. In complex cases, such as enhancements, irregular astigmatism, or post-radial keratotomy corneas, these adaptive AI-enhanced nomograms offer increased predictability and fewer subjective surprises post-operatively. Research published by Ang and colleagues19 demonstrated that AI-optimised ablation parameters can lead to a higher percentage of eyes achieving uncorrected 6/6 vision and lower levels of induced higher-order aberrations – benefits that are particularly valuable in complex corneas or enhancement cases.


“AI systems now integrate topographic, biomechanical, epithelial, and subjective patient data to assess surgical suitability”


AI IN CATARACT AND REFRACTIVE LENSECTOMY SURGERY

In preoperative refractive lensectomy and cataract surgery assessment, AI-driven tools analyse a wide range of biometric inputs, including axial length, anterior chamber depth, lens thickness, white-to-white distance, corneal curvature, and posterior corneal astigmatism. Devices like the IOLMaster 700 (ZEISS) and Lenstar LS 900 (Haag-Streit), when paired with AI platforms such as ZEISS Veracity or Alcon’s SmartCataract planning system, enable personalised IOL selection, toric axis alignment, and prediction of effective lens position. Some systems also integrate patient lifestyle questionnaires, contrast sensitivity data, and neural net models to simulate postoperative visual performance under photopic and mesopic conditions. This holistic preoperative planning improves refractive predictability and enables better-informed discussions with patients about the trade-offs of monofocal, toric, extended depth of focus (EDOF), and multifocal lens choices.

AI-driven IOL calculators such as Hill-RBF 3.0,20 the Kane formula,21 and PEARL-DGS22 represent a major leap forward from traditional methods like SRK/T, Hoffer Q, or Holladay 1. Traditional formulas rely heavily on linear regression models using a limited set of variables – axial length (AL), keratometry (K), and anterior chamber depth (ACD) – to estimate effective lens position (ELP). These models perform well in average eyes but are inaccurate in 20% of seemingly normal eyes, and struggle in atypical scenarios, such as post-refractive surgery patients or those with extreme axial lengths.

In contrast, AI-based formulas utilise non-linear models trained on tens or hundreds of thousands of surgical cases. Hill-RBF uses radial basis functions to match a patient’s biometric profile with similar outcomes in a reference dataset.20 The PEARL-DGS algorithm factors in posterior corneal curvature and uses dynamic ELP predictions.22,23 The Kane formula integrates theoretical optics with AI-derived adjustments, incorporating additional biometric variables like lens thickness and central corneal thickness.21 It has consistently outperformed legacy formulas in multiple comparative studies.21
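The pattern-matching principle behind calculators like Hill-RBF can be illustrated with kernel-weighted interpolation over a reference set of past cases: biometrically similar eyes influence the prediction more. This is a simplified stand-in for the proprietary network – the reference cases, the Gaussian kernel, and its width are all arbitrary and synthetic.

```python
import math

# Kernel-weighted interpolation over past cases, illustrating the
# "match the query eye to similar outcomes" idea behind RBF-style
# IOL calculators. All cases are synthetic.

cases = [
    # (axial length mm, mean keratometry D) -> implanted IOL power (D)
    ((22.0, 45.0), 24.5),
    ((23.0, 44.0), 22.0),
    ((24.0, 43.5), 20.0),
    ((25.0, 43.0), 18.0),
    ((26.5, 42.5), 15.0),
]

def rbf_predict(query, cases, width=1.0):
    """Gaussian-kernel weighted average of past IOL powers."""
    weighted = []
    for features, power in cases:
        d2 = sum((q - f) ** 2 for q, f in zip(query, features))
        weighted.append((math.exp(-d2 / (2 * width ** 2)), power))
    total = sum(w for w, _ in weighted)
    return sum(w * p for w, p in weighted) / total
```

A query eye of (23.5 mm, 43.8 D) sits between the second and third reference cases, so the predicted power falls between their 22.0 D and 20.0 D outcomes – nearby cases dominate the weighted average.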

Clinical studies show that AI-augmented IOL planning results in significantly better refractive accuracy and lower residual error rates compared to traditional calculation methods. For instance, the Kane formula has demonstrated mean absolute error (MAE) improvements of up to 15–20% over SRK/T and Holladay 1 in normal eyes, and even more in long or post-refractive eyes.21 In a large multicentre retrospective study, Hill-RBF 3.0 achieved over 85% of cases within ±0.50D of target refraction.20 Similarly, the PEARL-DGS model, incorporating posterior corneal curvature and machine learning adjustments, improved prediction accuracy in post-LASIK eyes.24 These consistent improvements have translated into enhanced surgical efficacy, fewer enhancement procedures, and greater patient satisfaction. As these outcomes become widely recognised, multifocal IOL utilisation may become far more widely adopted, supported by greater predictability and informed patient selection through AI modelling.
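For readers unfamiliar with these outcome metrics, mean absolute error (MAE) and the percentage of eyes within ±0.50 D of target are computed from predicted versus achieved refraction, as in this short sketch with synthetic values:

```python
# How MAE and the proportion within +/-0.50 D are calculated from
# predicted vs achieved spherical equivalent (D). Values are synthetic.

predicted = [-0.10, 0.00, -0.25, 0.15, -0.50]
achieved  = [-0.20, 0.25, -0.30, 0.00, -0.10]

errors = [a - p for p, a in zip(predicted, achieved)]
mae = sum(abs(e) for e in errors) / len(errors)
within_half_d = sum(abs(e) <= 0.50 for e in errors) / len(errors)
```

Lower MAE and a higher within-±0.50 D fraction are what the comparative formula studies cited above are reporting.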

One notable example of how AI could transform lens-based surgery would be a multifocal IOL suitability model integrating anterior and posterior segment imaging, contrast sensitivity thresholds, and personality indices derived from patient questionnaires. Similar models are being deployed via platforms such as the Voptica Visual Adaptive Optics (VAO) Simulator and Alcon’s Clareon Decision support suite. Voptica’s system uses ray-tracing and AI-powered vision simulation tools to assess potential postoperative dysphotopsia while integrating patient lifestyle preferences through structured questionnaires. The Alcon suite includes predictive scoring for trifocal suitability based on biometric variability, expected neuroadaptation time, and sensitivity to visual phenomena. These models could potentially improve multifocal IOL patient selection accuracy, reduce dissatisfaction-related explantation, and support more personalised preoperative counselling – helping identify optimal candidates for presbyopia-correcting lenses and recommending the most appropriate trifocal, EDOF, or monovision strategies.

INTRAOPERATIVE AI AND REAL-TIME SURGICAL GUIDANCE

AI is entering the operating theatre as a digital assistant. Imagine a CNN-powered platform analysing surgical video feeds in real time, alerting surgeons if the capsulorhexis deviates or the phaco probe risks breaching the posterior capsule. A recent study published in the British Journal of Ophthalmology demonstrated the potential of intraoperative AI at Moorfields Eye Hospital, identifying key surgical steps and complications as they occur.25 The system used deep CNNs to segment and label intraoperative video frames during phacoemulsification and was able to distinguish between stages like capsulorhexis, hydrodissection, nucleus rotation, and IOL implantation with high accuracy.

Real-time AI analysis enhances safety and enables automated documentation. Region-based CNNs (R-CNNs) for detecting posterior capsule rupture or zonular instability have been explored with promising results. These models analyse intraoperative video data by identifying object boundaries and tracking surgical instrument movement in real time,26 and could theoretically alert a surgeon to developing intraoperative complications.27 Such systems are being integrated into platforms like Touch Surgery Enterprise and Theator in general surgery and neurosurgery,28 which support real-time video annotation and risk flagging. There is certainly potential to apply this technology to ophthalmic surgery, not only to enhance intraoperative vigilance, but also to generate annotated datasets for quality improvement, surgeon benchmarking, and credentialing purposes. These technologies are also being used in surgical training programs, where AI scores surgical performance and identifies areas for improvement.28,29

POSTOPERATIVE MONITORING

AI doesn’t stop at surgery. Post-operatively, AI tools can assess trends in vision recovery, visual acuity, aberrometry, symptom progression, contrast sensitivity, and patient-reported outcomes to identify patients at risk of suboptimal outcomes or needing enhancements.28,30 These systems could be particularly valuable in managing patients with multifocal or EDOF lenses, where neuroadaptation can be unpredictable. AI could stratify patients into expected adaptation trajectories based on historical cases, helping clinicians identify outliers who may benefit from targeted neuroadaptation training, additional counselling, or early consideration for IOL exchange.

GENERATIVE AI, AUTOMATED DOCUMENTATION, AND AI CLINICAL ASSISTANTS

Large language models like GPT are transforming documentation. Systems like Heidi Health convert voice conversations into structured clinical notes, discharge summaries, and referrals in real time. Heidi Health uses a combination of natural language processing (NLP) and large language models to transcribe and interpret doctor-patient and technician-patient dialogues, producing context-aware documentation tailored to ophthalmology workflows. Its ophthalmology-specific templates include cataract assessments, IOL consent, refractive screening summaries, and postoperative LASIK follow-up. The platform integrates directly with leading electronic medical records (EMRs) and includes features such as AI-generated action plans, automated coding, medication reconciliation, and flagged safety alerts. Users can review and edit generated content before approval, ensuring clinical accuracy while reducing administrative time by up to 80%.31


“As artificial intelligence becomes deeply integrated into cataract and refractive care, robust ethical oversight and clinical governance are essential”


This reduces administrative load and improves note quality. Retrieval-Augmented Generation (RAG) systems can generate personalised consent documents or educational leaflets by combining patient-specific data from the EMR with verified clinical knowledge sources. For instance, systems like Abridge and Nabla Copilot use large language models with RAG workflows to extract relevant patient history and procedural details, automatically populating consent forms with tailored information on risks, benefits, and alternatives. These documents maintain legal compliance through templated language reviewed by regulatory experts and reduce variability by standardising consent discussions. Clinics using RAG systems report reductions in medico-legal incidents and improved patient comprehension scores in informed consent evaluations.32
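A RAG consent workflow can be sketched schematically: retrieve verified knowledge snippets relevant to the booked procedure, then combine them with patient-specific EMR fields into a prompt for a language model. The knowledge base, EMR fields, and keyword lookup below are hypothetical placeholders for a real embedding-based retriever and a real LLM call.

```python
# Schematic RAG consent workflow: retrieval of verified source material,
# then prompt assembly grounded in that material. All content, field
# names, and the keyword-based retriever are hypothetical placeholders.

KNOWLEDGE_BASE = {
    "LASIK": "Risks include dry eye, glare and halos, and rare flap complications.",
    "cataract": "Risks include infection, posterior capsule rupture, and refractive surprise.",
}

def retrieve(procedure: str) -> str:
    # real systems use embedding similarity search over a vetted corpus;
    # a keyword lookup stands in for that here
    return KNOWLEDGE_BASE.get(procedure, "")

def build_consent_prompt(emr_record: dict) -> str:
    """Combine patient-specific EMR data with retrieved verified text."""
    evidence = retrieve(emr_record["procedure"])
    return (
        f"Draft a consent document for {emr_record['name']} "
        f"undergoing {emr_record['procedure']}.\n"
        f"Use only the following verified source material:\n{evidence}"
    )

prompt = build_consent_prompt({"name": "Jane Citizen", "procedure": "LASIK"})
```

Grounding the generation step in retrieved, vetted text – rather than letting the model draft risks from its own weights – is what gives RAG its consistency and medico-legal appeal.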

ACCOUNTABILITY INTO THE FUTURE

As artificial intelligence becomes deeply integrated into cataract and refractive care, robust ethical oversight and clinical governance are essential. Clinicians remain accountable for all decisions and outcomes – even when assisted by AI tools. Whether supporting IOL calculations, keratoconus screening, or automating documentation, AI must be treated as a decision-support adjunct rather than a substitute for professional judgement. Medicolegal frameworks in Australia reinforce this, requiring clinicians to document the use of AI, incorporate its outputs judiciously, and ensure informed consent processes reflect its role in decision making.

One of the greatest challenges is the ‘black box’ nature of many AI systems – particularly deep learning models – where decision pathways are not easily explainable. This can undermine trust and regulatory acceptance. Emerging tools like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) aim to address this by making AI outputs more transparent through interpretive overlays and saliency maps.

Equally, concerns around bias and health equity persist. Models trained on non-representative datasets risk poor performance in diverse populations, including Aboriginal, Torres Strait Islander, and Southeast Asian patients. Local validation and hybrid training frameworks, such as those explored by the Centre for Eye Research Australia (CERA) and the Australian Institute of Health Innovation (AIHI), are critical to ensure fairness and safety.

Nationally, regulatory bodies like the TGA, the Australian Commission on Safety and Quality in Health Care (ACSQHC), the Royal Australian and New Zealand College of Ophthalmologists (RANZCO), and Optometry Australia (OA) are developing principles-based guidelines for AI deployment in eye health – emphasising explainability, clinician oversight, patient-facing transparency, and continuous monitoring.33 As these frameworks mature, optometrists will play a pivotal role in ensuring ethical AI integration, bridging technology and patient understanding.
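The Shapley-value idea behind SHAP, mentioned above, can be made concrete with a toy, exactly computed example: a feature's contribution to a prediction is its average marginal effect across all coalitions of the remaining features. The risk function, feature names, and baseline values below are entirely hypothetical – real SHAP implementations approximate this calculation for large models.

```python
import math
from itertools import combinations

# Exact Shapley values for a tiny, hypothetical two-feature risk model.
# "Absent" features are replaced by population-baseline values.

BASELINE = {"kmax": 44.0, "pachy": 540.0}  # hypothetical average inputs

def risk(kmax, pachy):
    # hypothetical ectasia-risk score: steeper, thinner corneas score higher
    return 0.05 * (kmax - 44.0) + 0.002 * (540.0 - pachy)

def shapley(features):
    names = list(features)
    n = len(names)
    contributions = {}
    for name in names:
        others = [m for m in names if m != name]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                def evaluate(present):
                    args = {m: (features[m] if m in present else BASELINE[m])
                            for m in names}
                    return risk(**args)
                # standard Shapley weight for a coalition of size r
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                total += weight * (evaluate(set(subset) | {name})
                                   - evaluate(set(subset)))
        contributions[name] = total
    return contributions
```

For an eye with Kmax 48.0 D and pachymetry 480 µm, the contributions sum exactly to the gap between that eye's risk score and the baseline score – the additivity property that makes SHAP outputs interpretable per patient.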

By 2030, AI will permeate every level of optometric care, from diagnostics and workflow automation to patient education, empowering clinicians to focus on human connection and personalised care. Far from replacing the practitioner, AI will amplify their effectiveness, provided we embed transparency, trust, and clinical accountability at its core.

Dr Matt Russell MBChB FRANZCO is a specialist ophthalmologist with international training in refractive surgery, cataract surgery, and medical retinal disease. He has also served as a Clinical Assistant Professor at the University of British Columbia in Canada.

Dr Russell started in private practice in 2007 with Vision Eye Institute in Brisbane and has since gone on to found OKKO Eye Specialist Centre and VSON Vision Correction Specialists in Auchenflower, Queensland.

To earn your CPD hours from this activity, visit mieducation.com/ai-in-cataract-and-refractive-surgery-the-implications-and-outcomes.

References available at mieducation.com.