AiX Lab – Formal Methods Research Themes

Building Safe Intelligent Systems based on Verified Environment + Verified Learning

1. Core Vision

Our overall research vision is to build an "End-to-End Verified System Pipeline" for cyber-physical systems (CPS) and intelligent systems: one that goes beyond one-off offline verification by connecting formal verification, automatic code generation, and runtime monitoring.

This vision encompasses the following core objectives:


FM-U1. Verified Environment Modeling & Execution

This research axis focuses on precisely formalizing the environment with which intelligent systems interact, and on ensuring that those systems operate only within verified environments. It makes the temporal and logical constraints of the environment explicit and carries safety guarantees through to the execution stage.

FM-U1.1. Timed Automata-based Environment & Mission Modeling

We model the temporal, ordering, and logical constraints of real-world environments as Timed Automata (TA) to define safety envelopes that systems must never violate. The TA models are then automatically checked for safety, liveness, deadline, and related properties using model checkers such as UPPAAL.
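
As a toy illustration of the safety-envelope idea (this is not UPPAAL syntax, and all names and numbers here are illustrative assumptions), a single-clock TA with a location invariant and a guarded edge might be sketched in Go as:

```go
package main

import "fmt"

// Location of a single-clock timed automaton; Invariant is an upper
// bound on the clock x that must hold while the automaton stays here.
type Location struct {
	Name      string
	Invariant float64 // invariant: x <= Invariant
}

// Edge with a guard interval on the clock; taking it resets x if Reset.
type Edge struct {
	From, To         string
	GuardLo, GuardHi float64 // guard: GuardLo <= x <= GuardHi
	Reset            bool
}

// TA is a toy timed automaton: locations, edges, current location, clock x.
type TA struct {
	Locs  map[string]Location
	Edges []Edge
	Cur   string
	X     float64
}

// Delay lets time pass by d; it is rejected if the location invariant
// (the safety envelope) would be breached during the delay.
func (a *TA) Delay(d float64) error {
	if a.X+d > a.Locs[a.Cur].Invariant {
		return fmt.Errorf("invariant x <= %v violated in %s", a.Locs[a.Cur].Invariant, a.Cur)
	}
	a.X += d
	return nil
}

// Take fires an edge to the given location if its guard holds now.
func (a *TA) Take(to string) error {
	for _, e := range a.Edges {
		if e.From == a.Cur && e.To == to && a.X >= e.GuardLo && a.X <= e.GuardHi {
			a.Cur = to
			if e.Reset {
				a.X = 0
			}
			return nil
		}
	}
	return fmt.Errorf("no enabled edge %s -> %s at x=%v", a.Cur, to, a.X)
}

func main() {
	// "safe" must be left within 5 time units (a deadline property a model
	// checker would verify); "done" is effectively unconstrained.
	a := &TA{
		Locs: map[string]Location{
			"safe": {Name: "safe", Invariant: 5},
			"done": {Name: "done", Invariant: 1e9},
		},
		Edges: []Edge{{From: "safe", To: "done", GuardLo: 2, GuardHi: 5, Reset: true}},
		Cur:   "safe",
	}
	fmt.Println(a.Delay(3) == nil)     // delay inside the envelope is allowed
	fmt.Println(a.Delay(4) == nil)     // would breach x <= 5: rejected
	fmt.Println(a.Take("done") == nil) // guard 2 <= x <= 5 holds at x = 3
}
```

In a real pipeline the invariants and guards would come from the verified UPPAAL model rather than being written by hand.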

FM-U1.2. TADA-based Semantic Normalization + Automatic Code Generation

We normalize the ambiguous temporal semantics of TA using TADA (Timed Automata with Disjoint Actions), so that model semantics and execution semantics correspond one-to-one. TADA cleanly separates delay transitions from action transitions and makes every boundary condition (invariant violations, clock equalities such as x==n, etc.) an explicit state, which improves execution safety.

From the normalized TADA models we automatically generate Go code, together with a Runtime Safety Monitor that detects and blocks safety violations in real time during execution (shielding).
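
The shape of such a runtime monitor can be pictured as follows. This is a hand-written minimal sketch, not generated code: the `Monitor`, `State`, and `Action` types, the safety rules, and the one-step plant model are all illustrative assumptions.

```go
package main

import "fmt"

// State observed by the monitor at runtime.
type State struct {
	Speed float64 // m/s
	Dist  float64 // distance to obstacle, m
}

// Action proposed by the controller.
type Action struct {
	Name  string
	Accel float64 // m/s^2
}

// Monitor holds safety predicates extracted from the verified model.
// Each predicate must hold in the state reached after applying an action.
type Monitor struct {
	Dt    float64
	Rules []func(State) bool
}

// step is an assumed one-step plant model used to predict the next state.
func (m Monitor) step(s State, a Action) State {
	v := s.Speed + a.Accel*m.Dt
	return State{Speed: v, Dist: s.Dist - v*m.Dt}
}

// Shield returns the proposed action if its predicted successor is safe;
// otherwise it blocks it and substitutes the fallback (e.g. braking).
func (m Monitor) Shield(s State, proposed, fallback Action) (Action, bool) {
	next := m.step(s, proposed)
	for _, ok := range m.Rules {
		if !ok(next) {
			return fallback, false // violation detected: block and substitute
		}
	}
	return proposed, true
}

func main() {
	m := Monitor{
		Dt: 0.1,
		Rules: []func(State) bool{
			func(s State) bool { return s.Dist > 1.0 },    // keep clearance
			func(s State) bool { return s.Speed <= 10.0 }, // speed envelope
		},
	}
	s := State{Speed: 9.9, Dist: 5.0}
	brake := Action{Name: "brake", Accel: -5}
	// Accelerating would push the predicted speed past the envelope,
	// so the shield substitutes the braking fallback.
	a, ok := m.Shield(s, Action{Name: "accelerate", Accel: 2}, brake)
	fmt.Println(a.Name, ok)
}
```

The key design point is that the monitor checks the *predicted successor* state, so a violation is blocked before it can occur rather than merely logged afterwards.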

FM-U1.3. Verified Simulation Environment

On top of TA/TADA-verified environments, we construct an execution environment in which Vision, LLM, RL, and Control modules can act only within the safety constraints. Dangerous behaviors are blocked immediately, structurally preventing the AI from forming incorrect policies during learning and inference.


FM-U2. Verified World Models, Embodied AI, and Safe Reinforcement Learning

This research axis integrates model-checking-based safety into Embodied AI, World Models, and Reinforcement Learning so that they neither generate incorrect worlds nor learn dangerous policies.

FM-U2.1. Verified World Models

We verify that the states and transitions generated by World Models violate no physical or logical constraints, ensuring that incorrect dynamics or unrealistic scenarios do not contaminate the AI's policy learning.

FM-U2.2. Safe RL via Verified Environment + Formal Shields

Dangerous behaviors that arise during RL exploration and policy learning are blocked immediately by the Verified Environment and Formal Safety Shields. The result is a Safe Learning Framework in which RL learns only within the safety constraints.
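
For a discrete action space, one common shielding form is an action mask: the learner only ever samples from actions whose successor stays inside the verified safe set. The grid world, hazard cell, and function names below are illustrative assumptions.

```go
package main

import "fmt"

// Pos is a grid-world position; the verified safe set excludes hazards.
type Pos struct{ X, Y int }

var hazards = map[Pos]bool{{2, 0}: true}

// safe is the formally verified safe set: inside a 3x3 grid, not a hazard.
func safe(p Pos) bool {
	return p.X >= 0 && p.X < 3 && p.Y >= 0 && p.Y < 3 && !hazards[p]
}

var moves = map[string]Pos{
	"up": {0, 1}, "down": {0, -1}, "left": {-1, 0}, "right": {1, 0},
}

// SafeActions masks the action set: only actions whose successor stays in
// the safe set are exposed to the learner, so exploration never leaves it.
func SafeActions(p Pos) []string {
	var out []string
	for name, d := range moves {
		if safe(Pos{p.X + d.X, p.Y + d.Y}) {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	// From (1,0), "down" leaves the grid and "right" enters the hazard,
	// so only two of the four actions survive the mask.
	fmt.Println(len(SafeActions(Pos{1, 0})))
}
```

Because the mask is applied before sampling, the agent never even experiences an unsafe transition, rather than being penalized for one after the fact.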

FM-U2.3. Counterfactual Simulation under Formal Constraints

We impose formal constraints on the counterfactual scenarios generated by World Models and Embodied AI so that only realistic and safe simulations are produced, preventing the AI from learning incorrect assumptions or dangerous scenarios.
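
As a toy illustration of filtering counterfactuals against a temporal constraint (event names, the `precedes` constraint, and the `Admit` function are all illustrative assumptions):

```go
package main

import "fmt"

// Scenario is a finite event trace proposed by the world model as a
// counterfactual (e.g. "what if the pedestrian had crossed earlier?").
type Scenario []string

// precedes builds a simple temporal constraint: every occurrence of b
// must be preceded by an a (e.g. "stop" may only follow "brake").
func precedes(a, b string) func(Scenario) bool {
	return func(s Scenario) bool {
		seen := false
		for _, e := range s {
			if e == a {
				seen = true
			}
			if e == b && !seen {
				return false
			}
		}
		return true
	}
}

// Admit keeps only scenarios satisfying all formal constraints, so the
// learner never trains on physically or logically impossible rollouts.
func Admit(cands []Scenario, cons []func(Scenario) bool) []Scenario {
	var out []Scenario
	for _, s := range cands {
		ok := true
		for _, c := range cons {
			if !c(s) {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	cons := []func(Scenario) bool{precedes("brake", "stop")}
	cands := []Scenario{
		{"cruise", "brake", "stop"},
		{"cruise", "stop"}, // impossible: stopping without braking
	}
	fmt.Println(len(Admit(cands, cons))) // only the consistent scenario survives
}
```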


Executive Summary

FM-U1 formally models environments, verifies them, and connects them to execution, ensuring that systems operate only within safe environments regardless of which AI algorithm is used.

FM-U2 integrates model checking into the learning and simulation stages so that Embodied AI, World Models, and RL do not generate dangerous states or policies, realizing Verified Learning.

The combination of these two technologies forms a Verified Intelligent System where the entire loop of Perception → Reasoning/Planning → Control → Execution → Learning is equipped with safety guarantees.