1. Core Vision
Our overall research vision is to build an "End-to-End Clinical Reasoning Pipeline" that goes beyond simple classification in medical imaging: quantifying disease, generating structured clinical representations, and connecting them to LLM-based clinical reasoning.
This vision encompasses the following core objectives:
- Medical images → Lesion segmentation → Quantification → Prediction models based on quantitative indicators
- Quantitative indicators + Images + Text reports → Multimodal LLM-based AI Doctor Assistant
- Safety and reliability enhancement: Model verification (Formal Verification), safety neuron analysis, mechanistic interpretability
2. Research Theme A: Medical Image Quantification & Disease Modeling
This research line focuses on generating continuous biomarkers that quantify disease progression and are clinically interpretable, moving beyond traditional CNN-based classification.
A.1. Ophthalmology (Image-Based Quantification)
- Epiretinal Membrane (ERM), Diabetic Retinopathy, Macular Disease, etc.
- From fundus photographs and OCT B-scans:
  - Lesion segmentation (membrane, retinal layers, cystic regions)
  - Derivation of quantitative indicators such as thickness maps, curvature, and reflectance profiles
- Disease staging and progression-prediction models based on these quantitative indicators
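To make the quantification step concrete, here is a minimal sketch of deriving a thickness map from a binary layer segmentation of an OCT B-scan. The function name, mask layout, and the axial-resolution value are illustrative assumptions, not part of any fixed pipeline API.

```python
# Sketch: retinal thickness map from a binary layer segmentation of an
# OCT B-scan. For each A-scan column, thickness = (# segmented pixels)
# * axial resolution. Names and resolution values are assumptions.

def thickness_map(layer_mask, axial_res_um=3.9):
    """layer_mask: 2D list of 0/1; rows = depth, cols = A-scan positions.

    Returns one thickness value (micrometers) per A-scan column.
    """
    if not layer_mask:
        return []
    n_cols = len(layer_mask[0])
    return [
        sum(row[c] for row in layer_mask) * axial_res_um
        for c in range(n_cols)
    ]

# Toy 4x3 mask: columns contain 2, 3, and 0 layer pixels respectively.
mask = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(thickness_map(mask, axial_res_um=4.0))  # [8.0, 12.0, 0.0]
```

Downstream staging models would consume such per-column profiles (or 2D maps aggregated over B-scans) rather than raw pixels.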
A.2. Gait / Orthopedics (Orthopedic Gait Analysis)
- Markerless video (pose estimation) → Biomechanical features → Gait anomaly quantification
- Diagnosis and progression models utilizing quantitative features instead of clinical grading
- Extendable to pediatric and elderly balance assessment, among other populations
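The "quantitative features instead of clinical grading" step can be illustrated with a standard gait-asymmetry measure. This sketch assumes step durations have already been extracted from markerless keypoint trajectories; the formula is the common Robinson symmetry index, and the toy values are illustrative.

```python
# Sketch: gait asymmetry from pose-estimation-derived step durations.
# The step-duration lists are assumed to come from an upstream
# markerless keypoint pipeline (not shown).

def symmetry_index(left_steps, right_steps):
    """Robinson symmetry index: |L - R| / (0.5 * (L + R)) * 100, in %.

    Values near 0 indicate symmetric gait; larger values flag asymmetry.
    """
    l = sum(left_steps) / len(left_steps)
    r = sum(right_steps) / len(right_steps)
    return abs(l - r) / (0.5 * (l + r)) * 100.0

left = [0.52, 0.50, 0.51]   # left step durations (s), illustrative
right = [0.60, 0.58, 0.62]  # right step durations (s), illustrative
si = symmetry_index(left, right)
print(round(si, 1))  # 16.2
```

Such continuous indices can feed progression models directly, without intermediate ordinal clinical grades.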
A.3. Multi-modal Structured Data Integration
Integration of images, quantitative features, EMR, lab values, etc.
Final goal: Building a disease progression world model.
3. Research Theme B: Domain-Specialized Medical LLMs (Ophtimus-V2 Series)
A research line on ophthalmology-specific LLMs (Ophtimus-V2-Tx) developed in-house by the research team.
B.1. Clinical Reasoning Models
- Fine-tuning based on case reports
- Learning "clinical knowledge pathways" connecting symptoms–images–diagnosis–treatment
- Experiments with LoRA and structured LoRA to reduce hallucination and enhance safety
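The core LoRA mechanism referenced above can be shown in a few lines: the frozen weight W is augmented by a trainable low-rank product B·A scaled by alpha/r. This is a toy pure-Python sketch of the idea; actual fine-tuning experiments would use a library such as PEFT, and all shapes and values here are assumptions.

```python
# Minimal sketch of the LoRA forward pass: y = W x + (alpha / r) * B (A x),
# with W frozen and only the low-rank factors A, B trained. Toy shapes.

def matvec(M, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    base = matvec(W, x)                    # frozen base projection
    low_rank = matvec(B, matvec(A, x))     # rank-r trainable update
    return [b + (alpha / r) * d for b, d in zip(base, low_rank)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 base weight (identity here)
A = [[1.0, 1.0]]              # r x d_in, rank r = 1
B = [[0.5], [0.0]]            # d_out x r
x = [2.0, 3.0]
print(lora_forward(W, A, B, x))  # [4.5, 3.0]
```

"Structured LoRA" variants constrain where (which layers or directions) such low-rank updates are allowed, which is the lever this line uses for safety.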
B.2. Multi-modal Input Extension
- Fundus / OCT (B-scan) embedding + Structured quantification + Textual description
- Integration with medical world models (Theme D) to link in a disease-progression simulator
B.3. Safety & Trustworthiness
- "Safety Neurons" analysis
- Mechanistic interpretability (circuit-level patterns in reasoning)
- Detection and unlearning of clinically harmful outputs
4. Research Theme C: Formal Verification + AI Safety for Medical AI
An independent research line combining formal methods and AI safety to ensure reliability and regulatory compliance (e.g., medical device approval) for medical AI.
C.1. Verified Environment Models
- Medical process models based on Timed Automata
- Verification of safety constraints through model checking (PCTL, CTL, TCTL)
- Providing control shields to prevent reinforcement learning or AI inference from violating these constraints
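The control-shield idea above amounts to a runtime filter: every action proposed by a learned policy is checked against a safety predicate that was verified offline (e.g., an invariant of a timed-automaton process model), and unsafe actions are replaced by a verified fallback. The dosing scenario, names, and bounds below are illustrative assumptions.

```python
# Sketch of a runtime control shield. The safety predicate stands in for
# a constraint proven by model checking; the scenario is illustrative.

def is_safe(state, action):
    """Safety constraint: cumulative dose must never exceed a verified bound."""
    MAX_CUMULATIVE_DOSE = 10.0  # assumed bound from the verified model
    return state["cumulative_dose"] + action["dose"] <= MAX_CUMULATIVE_DOSE

def shield(state, proposed_action, fallback_action):
    """Pass safe actions through; substitute a verified fallback otherwise."""
    return proposed_action if is_safe(state, proposed_action) else fallback_action

state = {"cumulative_dose": 9.5}
risky = {"dose": 2.0}
safe_fallback = {"dose": 0.0}
print(shield(state, risky, safe_fallback))                       # {'dose': 0.0}
print(shield({"cumulative_dose": 1.0}, risky, safe_fallback))    # {'dose': 2.0}
```

The same wrapper pattern applies whether the proposer is an RL policy or an LLM-driven inference step: the shield, not the learner, carries the safety guarantee.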
C.2. Verified AI Controllers
- Enforcing safety properties in Medical AI inference pipelines
- Analysis to verify "when and what inputs can lead to dangerous outputs"
- Verification-aware fine-tuning or pruning
C.3. Trustworthy Data & Contamination Check
- Detecting LLM-assisted cheating in crowd annotation (via peer prediction)
- Ensuring reliability of medical data labels
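A peer-prediction-style check can be sketched in a few lines: an annotator is scored on how much more they agree with a peer on the same items than on deliberately mismatched items, which is the core intuition behind correlated-agreement mechanisms. The data, the shift-based baseline, and the interpretation below are toy assumptions, not a full mechanism.

```python
# Sketch: flagging low-effort or copied annotations via a
# peer-prediction-style score (agreement beyond a mismatched baseline).

def agreement(a, b):
    """Fraction of items on which two label lists agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def peer_score(labels_a, labels_b):
    """Same-item agreement minus agreement on cyclically shifted items."""
    baseline = agreement(labels_a, labels_b[1:] + labels_b[:1])
    return agreement(labels_a, labels_b) - baseline

careful = [1, 0, 1, 1, 0, 1]
peer    = [1, 0, 1, 0, 0, 1]
lazy    = [1, 1, 1, 1, 1, 1]  # constant labels: no item-level signal

print(round(peer_score(careful, peer), 2))  # 0.67 (informative agreement)
print(round(peer_score(lazy, peer), 2))     # 0.0 (no signal beyond chance)
```

Uninformative strategies (constant labels, or labels copied from a shared LLM rather than from the item) earn no agreement beyond the shifted baseline, which is what the detection targets.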
5. Research Theme D: Medical World Models & Embodied AI
A research direction directly aligned with core trends at NeurIPS 2025 ("World Models", "Embodied AI for Healthcare").
D.1. Disease Progression World Model
- Generative world models that model Retina / ERM progression dynamics
- Temporal latent dynamics based on OCT/B-scan sequential images
- Enabling counterfactual simulation such as "If a patient's condition is X, how would the OCT change after 6 months?"
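The counterfactual query above can be made concrete with a toy latent-dynamics rollout: a learned world model would evolve a latent disease state forward under different treatment inputs and decode each trajectory back to imaging. Here the latent is a scalar and the dynamics are linear; all coefficients and the treatment encoding are illustrative assumptions.

```python
# Sketch: counterfactual rollout in a toy latent progression model.
# z stands in for a learned latent (e.g., an ERM-severity code from OCT
# sequences); z' = a*z + b*u replaces the learned dynamics network.

def rollout(z0, treatment, steps, a=1.1, b=-0.3):
    """Roll the latent forward: untreated severity grows (a > 1),
    treatment u pushes it down (b < 0). Returns the full trajectory."""
    traj = [z0]
    for _ in range(steps):
        traj.append(a * traj[-1] + b * treatment)
    return traj

# Counterfactual pair: same initial state, with vs. without treatment.
untreated = rollout(1.0, treatment=0.0, steps=6)
treated = rollout(1.0, treatment=1.0, steps=6)
print(round(untreated[-1], 2), round(treated[-1], 2))
```

In the full system, each latent state would additionally be decoded into a predicted OCT appearance, so the "how would the OCT look after 6 months" question is answered in image space.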
D.2. Multi-modal Clinical Simulator
- Including images, quantitative biomarkers, text reports, treatment history
- Providing structured clinical simulation contexts to LLMs
- Strengthening LLM-based clinical decision support
D.3. Reinforcement Learning in Verified Clinical Simulation
- For cases where direct learning from real-world medical settings is prohibited
- Application of safe RL based on verified world models
- Extendable to research on treatment planning or screening policy optimization
6. Research Theme E: Foundations for AI-Driven Clinical Decision Support
Integrating all the above axes (A-D) to support the ultimate goal of medical AI: automated clinical reasoning.
E.1. Image → Biomarker → Reasoner → Recommendation
- Building fully end-to-end connectable pipelines
- Linking image-based quantification to input structures for LLM reasoning
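The wiring of this pipeline can be sketched as three composable stages. The stage functions below are stubs with assumed names and outputs; in the real system they would be the segmentation/quantification models and the Ophtimus-V2 reasoner.

```python
# Sketch of the end-to-end wiring: image -> biomarkers -> structured
# context -> LLM prompt. All stage names and values are illustrative.

def quantify(image_path):
    """Stub biomarker extraction standing in for segmentation + measurement."""
    return {"central_thickness_um": 412, "erm_present": True}

def build_context(biomarkers, report_text):
    """Merge quantitative indicators with the textual report."""
    return {"biomarkers": biomarkers, "report": report_text}

def to_prompt(context):
    """Render the structured context as an LLM input."""
    lines = ["[Quantitative findings]"]
    for k, v in sorted(context["biomarkers"].items()):
        lines.append(f"- {k}: {v}")
    lines += ["[Report]", context["report"], "[Task] Suggest a management plan."]
    return "\n".join(lines)

prompt = to_prompt(build_context(quantify("oct_bscan.png"), "Mild metamorphopsia."))
print(prompt)
```

Keeping the LLM input structured (named biomarkers rather than free-form numbers) is what makes the downstream reasoning auditable.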
E.2. Multi-lingual / Multi-institution Generalization
- Based on collaborations with institutions in Korea, the USA (UPenn), and elsewhere
- Conducting research on robustness and distribution shift
E.3. Regulatory-readiness
- Reliability assessment metrics (sensitivity, specificity, handling of false-negative-critical tasks)
- Enabling medical AI documentation with "Safety case" structures
7. Overall Theme Summary (One-page Executive Summary)
Our Medical AI research focuses on building the following integrated research ecosystem, going beyond simple image classification.
- Disease Quantification Technology
- Image-based lesion analysis, quantification, progression modeling
- Development of Clinical Domain-Specialized LLMs (Ophtimus-V2-Tx)
- Ophthalmology-specific reasoning models
- Multi-modal processing (OCT/Fundus + EMR + biomarkers)
- Application of AI Safety & Formal Verification
- Ensuring safety constraints for medical AI
- Verified environment + verified inference
- Clinical Simulation Based on World Models
- Disease progression simulation
- Foundation for LLM's clinical decision reasoning
- Building Comprehensive Medical Decision Support Systems
- End-to-end from Data → Image → Quantification → LLM → Decision