AiX Lab – Formal Methods Research Themes
Building Safe Intelligent Systems based on Verified Environment + Verified Learning
1. Core Vision
Our overall research vision is to build an "End-to-End Verified System Pipeline" that goes beyond offline verification alone, connecting formal verification, automatic code generation, and runtime monitoring for cyber-physical systems (CPS) and intelligent systems.
This vision encompasses the following core objectives; a minimal code sketch of the pipeline follows the list:
- Timed Automata modeling → UPPAAL-based formal verification → TADA normalization → Automatic Go code generation
- Verified models + auto-generated code + Runtime Safety Monitor → systems with real-time safety guarantees
- Safety and reliability enhancement: Model checking, temporal constraint verification, runtime violation detection and blocking (shielding)
- Verified World Models: Verifying that states and transitions generated by World Models do not violate physical or logical constraints
- Safe Reinforcement Learning: Blocking dangerous behaviors through Verified Environment + Formal Safety Shields, providing a safe learning framework
- Counterfactual Simulation under Formal Constraints: Generating only realistic and safe simulations under formal constraints
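As a minimal, hypothetical sketch of the pipeline's overall shape (the type and function names here are illustrative stand-ins, not the actual generator's API), the stages can be viewed as a chain that refuses to emit code for any model that fails verification:

    package pipeline

    // Illustrative pipeline shape: model -> verify (UPPAAL) -> normalize
    // (TADA) -> generate Go implementation + runtime safety monitor.

    // TAModel is a stand-in for a parsed Timed Automata model.
    type TAModel struct {
        Locations []string
        Clocks    []string
    }

    // TADAModel is a stand-in for the normalized (disjoint-action) form.
    type TADAModel struct{ TAModel }

    // Artifacts bundles the generated implementation and its monitor.
    type Artifacts struct {
        Source  []byte // generated Go code
        Monitor []byte // generated Runtime Safety Monitor
    }

    // Build chains the stages; code is generated only for verified models.
    func Build(
        m TAModel,
        verify func(TAModel) error,
        normalize func(TAModel) TADAModel,
        generate func(TADAModel) Artifacts,
    ) (Artifacts, error) {
        if err := verify(m); err != nil {
            return Artifacts{}, err // unverified models never reach codegen
        }
        return generate(normalize(m)), nil
    }

The point of the shape is that verification is not a side activity: it is a gate that the rest of the pipeline cannot bypass.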
FM-U1. Verified Environment Modeling & Execution
This research axis precisely formalizes the environments that intelligent systems interact with, and ensures that systems execute only within verified environments. It makes the environment's temporal and logical constraints explicit and carries safety guarantees through to the execution stage.
FM-U1.1. Timed Automata-based Environment & Mission Modeling
We model the timing, ordering, and constraints of real-world environments as Timed Automata (TA) to define a safety envelope that the system must never violate. TA models are then checked automatically for safety, liveness, and deadline properties using model checkers such as UPPAAL.
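For concreteness, the properties checked at this stage can be written in standard UPPAAL query syntax; the sketch below carries a few such queries as Go strings (the process, location, and clock names are illustrative, not taken from an actual model):

    package verification

    // Queries lists example properties in standard UPPAAL query syntax.
    // Process/location/clock names (Train, Gate, Mission, x) are illustrative.
    var Queries = []string{
        "A[] not deadlock",                           // safety: no deadlock anywhere
        "A[] (Train.Crossing imply Gate.Closed)",     // safety envelope over locations
        "A[] (Mission.Waiting imply Mission.x <= 5)", // deadline: bounded waiting time
        "E<> Mission.Done",                           // reachability: mission can finish
        "Mission.Start --> Mission.Done",             // liveness: start leads to completion
    }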
FM-U1.2. TADA-based Semantic Normalization + Automatic Code Generation
We normalize the ambiguous temporal semantics of TA using TADA (Timed Automata with Disjoint Actions) so that model semantics and execution semantics correspond one-to-one. TADA strictly separates time-transitions from action-transitions, and reifies all boundary conditions (invariant violations, clock boundaries such as x == n, etc.) as explicit states, strengthening execution safety. Normalized TADA models are automatically translated to Go code, together with a Runtime Safety Monitor that detects and blocks safety violations (shielding) in real time during execution.
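A minimal sketch of the execution discipline such generated code might follow (the state space, clock, and actions are hypothetical; the real generator's output will differ):

    package tada

    import (
        "errors"
        "fmt"
        "time"
    )

    // State enumerates discrete locations; note that the boundary condition
    // (invariant violation) is reified as an explicit state, per TADA.
    type State int

    const (
        Idle State = iota
        Active
        InvariantViolated
    )

    // Automaton is a hypothetical generated executor for one TADA process.
    type Automaton struct {
        state State
        x     time.Duration // clock x
        bound time.Duration // invariant: x <= bound while Active
    }

    // Delay is the time-transition: it only advances clocks and detects
    // invariant boundaries; it never fires a discrete action.
    func (a *Automaton) Delay(d time.Duration) {
        a.x += d
        if a.state == Active && a.x > a.bound {
            a.state = InvariantViolated // monitor blocks all later actions
        }
    }

    // Fire is the action-transition: it only switches discrete state and
    // resets clocks, keeping it disjoint from the time-transition above.
    func (a *Automaton) Fire(action string) error {
        if a.state == InvariantViolated {
            return errors.New("shield: action blocked after invariant violation")
        }
        switch {
        case a.state == Idle && action == "start":
            a.state, a.x = Active, 0 // clock reset on the edge
        case a.state == Active && action == "finish":
            a.state = Idle
        default:
            return fmt.Errorf("no enabled edge for %q", action)
        }
        return nil
    }

Keeping Delay and Fire disjoint is what lets the model's verified semantics and the code's runtime behavior correspond one-to-one.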
FM-U1.3. Verified Simulation Environment
On top of TA/TADA-verified environments, we construct an execution environment in which Vision, LLM, RL, and Control modules can act only under the verified safety constraints. Unsafe actions are blocked immediately, structurally preventing the AI from forming incorrect policies during learning and inference.
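The enforcement point can be pictured as a thin wrapper through which every module's proposed action must pass. A minimal sketch, assuming a shield predicate derived from the verified model (all names here are hypothetical):

    package env

    // Action is whatever a Vision/LLM/RL/Control module proposes to execute.
    type Action struct{ Name string }

    // Shield answers whether an action is permitted in the current verified state.
    type Shield interface {
        Allowed(a Action) bool
    }

    // VerifiedEnv is the only path from module outputs to actuators.
    type VerifiedEnv struct {
        shield Shield
    }

    // Step executes the proposed action only if the shield permits it;
    // otherwise a designated safe fallback is substituted, so unsafe
    // proposals never reach the actuator.
    func (e *VerifiedEnv) Step(proposed, fallback Action) Action {
        if e.shield.Allowed(proposed) {
            return proposed
        }
        return fallback
    }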
FM-U2. Verified World Models, Embodied AI, and Safe Reinforcement Learning
This research axis integrates model-checking-based safety guarantees so that Embodied AI, World Models, and Reinforcement Learning neither generate incorrect worlds nor learn dangerous policies.
FM-U2.1. Verified World Models
We verify that the states and transitions generated by World Models do not violate physical or logical constraints, ensuring that incorrect dynamics and unrealistic scenarios do not contaminate the AI's policy learning.
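One way to picture this check: every (state, next-state) pair the world model proposes is screened against explicit constraints before it can enter a training rollout. A minimal sketch with a made-up kinematic constraint:

    package worldmodel

    import "math"

    // WMState is a toy world-model state; real states would be richer.
    type WMState struct {
        Position float64
        Velocity float64
        Dt       float64 // time step to the next state
    }

    // Constraint is a physical or logical rule over a single transition.
    type Constraint func(cur, next WMState) bool

    // MaxSpeed rejects transitions implying physically impossible motion.
    func MaxSpeed(limit float64) Constraint {
        return func(cur, next WMState) bool {
            return math.Abs(next.Position-cur.Position) <= limit*cur.Dt
        }
    }

    // ValidTransition admits a generated transition only if every
    // constraint holds, keeping bad dynamics out of policy learning.
    func ValidTransition(cur, next WMState, cs []Constraint) bool {
        for _, c := range cs {
            if !c(cur, next) {
                return false
            }
        }
        return true
    }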
FM-U2.2. Safe RL via Verified Environment + Formal Shields
Dangerous actions that arise during RL exploration and policy learning are blocked immediately by the Verified Environment and the Formal Safety Shields. This yields a Safe Learning Framework in which RL can learn only within the safety constraints.
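A common shielding pattern, sketched here under the assumption that the shield predicate is derived from the verified environment model (the integer state/action encoding is illustrative): the agent ranks actions by preference, and the shield executes the first safe one.

    package saferl

    // SafeFn reports whether executing action a in state s stays within
    // the verified safety envelope.
    type SafeFn func(s, a int) bool

    // ShieldedAction returns the agent's highest-ranked safe action.
    // If no proposed action is safe, it returns fallback (in a real
    // system, an action from a verified safe controller).
    func ShieldedAction(s int, ranked []int, safe SafeFn, fallback int) int {
        for _, a := range ranked {
            if safe(s, a) {
                return a // exploration proceeds, but only inside the envelope
            }
        }
        return fallback
    }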
FM-U2.3. Counterfactual Simulation under Formal Constraints
We impose formal constraints on the counterfactual scenarios generated by World Models and Embodied AI so that only realistic and safe simulations are produced, preventing the AI from learning from incorrect assumptions or dangerous scenarios.
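Sketched as a filter (the trajectory representation and predicates are placeholders): candidate counterfactual rollouts are admitted into training only if they satisfy every formal constraint.

    package counterfactual

    // Trajectory is a placeholder for a generated counterfactual rollout.
    type Trajectory []float64

    // Pred is a formal constraint over a whole trajectory.
    type Pred func(Trajectory) bool

    // AdmitSafe keeps only the rollouts that satisfy all constraints, so
    // unrealistic or unsafe counterfactuals never enter the training set.
    func AdmitSafe(candidates []Trajectory, constraints []Pred) []Trajectory {
        var admitted []Trajectory
        for _, t := range candidates {
            ok := true
            for _, p := range constraints {
                if !p(t) {
                    ok = false
                    break
                }
            }
            if ok {
                admitted = append(admitted, t)
            }
        }
        return admitted
    }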
Executive Summary
FM-U1 formally models and verifies the environment and carries those guarantees through to execution, ensuring that systems operate only within safe environments regardless of which AI algorithm is used.
FM-U2 integrates model checking into the learning and simulation stages so that Embodied AI, World Models, and RL do not generate dangerous states or policies, realizing Verified Learning.
Together, these two technologies form a Verified Intelligent System in which the entire
Perception → Reasoning/Planning → Control → Execution → Learning
loop carries safety guarantees.