AI-Defined Safety-Critical Systems: A New Paradigm

The End of Software-Designed Systems?

For decades, safety-critical systems have followed the same pattern: human engineers write requirements, human engineers design architectures, human engineers write code, and human assessors verify everything. The process works — it has kept railways, aircraft, and nuclear plants safe — but it's slow, expensive, and limited by human cognitive bandwidth.

What if there were a different way?

AI-Defined: What It Means

AI-Defined doesn't mean AI-generated. It means AI acts as a co-engineer — not replacing human judgment, but augmenting it at every stage of the development lifecycle:

  • Requirements extraction: Parsing 42 specification documents into 2,527 structured, traceable requirements
  • Architecture design: Proposing layered safety boundaries that separate SIL4 core logic from I/O wiring
  • Implementation: Writing Rust code with #[requirement("REQ-026-...")] annotations that maintain traceability by construction
  • Test generation: Creating test harnesses that execute 69,857 SS-076 test steps against a real EVC implementation
  • Documentation: Generating 80 CENELEC-compliant documents as code, automatically linked to the implementation
  • Safety analysis: Identifying 152 hazards across 6 subsystems and deriving 441 safety requirements

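To make the traceability-by-construction idea concrete, here is a single-file sketch. The requirement ID and function are hypothetical, and the project's actual #[requirement] attribute is a procedural macro; a plain constant stands in for its generated metadata so the example compiles on its own:

```rust
// Hypothetical sketch: traceability by construction. The real project uses a
// #[requirement("REQ-026-...")] procedural macro; here a constant stands in
// for the generated metadata, and the requirement ID is invented.

/// REQ-026-EX-1 (hypothetical ID): the ceiling-speed check shall flag any
/// measured speed above the permitted limit.
const IMPLEMENTS: &str = "REQ-026-EX-1";

/// Returns true when the measured speed exceeds the permitted speed.
fn over_speed(measured_kmh: u32, permitted_kmh: u32) -> bool {
    measured_kmh > permitted_kmh
}

fn main() {
    // A coverage report can be assembled from metadata like IMPLEMENTS,
    // linking each requirement ID to the code that implements it.
    println!("{} -> over_speed", IMPLEMENTS);
    assert!(over_speed(161, 160));
    assert!(!over_speed(160, 160));
}
```

Because the tag lives next to the code it covers, renaming or deleting the function breaks the trace at compile or report time rather than in a manually maintained matrix.
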
The human remains the authority. The AI is the force multiplier.

The SS026 Demonstration

The SS026 project implements the complete ERTMS/ETCS system — the European Train Control System that governs railway signalling across Europe. This isn't a toy demo. It's:

  • 55,000 lines of Rust across 19 crates in 6 architectural layers
  • SIL4 integrity: 13 no_std crates with zero heap allocation, zero unsafe blocks, zero unwrap/expect in production code
  • Baseline 3, Release 2 (v3.6.0): The current standard, covering Levels 0, 1, 2, and 3
  • 6 complete subsystems: EVC, RBC, LEU, DMI, JRU, and Euroradio
  • 99.6% requirements coverage: 2,516 of 2,527 requirements fully covered with implementation and tests

All developed in collaboration between a railway signalling engineer and Claude.

Why Rust for SIL4?

The choice of Rust isn't accidental. SIL4 demands:

  • No undefined behavior: ownership system, borrow checker
  • Deterministic execution: no_std, zero heap allocation
  • Memory safety: compile-time guarantees, no GC
  • No hidden panics: Result<T, E> everywhere, #![deny(unsafe_code)]
  • Traceable to requirements: #[requirement] procedural macros

The language's type system becomes a safety argument. The compiler is the first line of defense.
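
What "no hidden panics" looks like in practice can be sketched in a few lines. This is illustrative code, not taken from the project: every failure is a value in the type system, handled at the call site, with no unwrap/expect and no heap allocation:

```rust
#![forbid(unsafe_code)]
// Illustrative sketch (not project code): a fallible check that cannot hide
// a panic. Failures are values, so the caller is forced to handle them.

#[derive(Debug, PartialEq)]
enum SupervisionError {
    Overspeed { excess_kmh: u16 },
}

/// Compares the current speed against the permitted ceiling.
fn supervise(current_kmh: u16, permitted_kmh: u16) -> Result<(), SupervisionError> {
    if current_kmh <= permitted_kmh {
        Ok(())
    } else {
        Err(SupervisionError::Overspeed {
            excess_kmh: current_kmh - permitted_kmh,
        })
    }
}

fn main() {
    assert_eq!(supervise(140, 160), Ok(()));
    assert_eq!(
        supervise(165, 160),
        Err(SupervisionError::Overspeed { excess_kmh: 5 })
    );
}
```

A caller that ignores the Result gets a compiler warning; there is no code path that aborts the process behind the safety case's back.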

Honesty as a Feature

Perhaps the most important principle in the SS026 project:

"An honest 80% is worth more than a dishonest 100%"

Every metric in this project is real. When we say 99.6% coverage, we show the 11 uncovered requirements. When we say 100% test pass rate, we document exactly what the tests do and don't verify. When we say SIL4, we explain that no independent assessment has been performed.

This honesty isn't a weakness — it's a feature. It's what makes the project credible and useful as a reference for how AI-defined safety-critical systems should work.

What Comes Next

AI-defined safety-critical systems are still in their infancy. The SS026 project demonstrates that it's possible to maintain SIL4 discipline while leveraging AI for productivity. The next steps:

  1. Independent assessment: Having a real safety assessor evaluate the artifacts
  2. Hardware integration: Connecting to real train interfaces via STM boundaries
  3. Formal methods: Applying model checking to the supervision state machine
  4. Community: Open-sourcing the approach so others can build on it
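
On the formal-methods point: what makes a state machine amenable to model checking is a small, closed state space and a total transition function. The sketch below is a heavily simplified, hypothetical mode machine (real ETCS mode transitions are far richer), but it shows the shape an exhaustive checker needs:

```rust
// Hypothetical, heavily simplified supervision state machine. A small enum
// of modes and a total transition function mean every (mode, event) pair
// can be enumerated and asserted against -- the shape model checkers need.

#[derive(Clone, Copy, Debug, PartialEq)]
enum Mode {
    StandBy,
    FullSupervision,
    TripActive,
}

#[derive(Clone, Copy, Debug)]
enum Event {
    MovementAuthority,
    OverspeedTrip,
    TripAcknowledged,
}

/// Total transition function: defined for every (mode, event) pair.
fn step(mode: Mode, event: Event) -> Mode {
    match (mode, event) {
        (Mode::StandBy, Event::MovementAuthority) => Mode::FullSupervision,
        (Mode::FullSupervision, Event::OverspeedTrip) => Mode::TripActive,
        (Mode::TripActive, Event::TripAcknowledged) => Mode::StandBy,
        (m, _) => m, // every other event leaves the mode unchanged
    }
}

fn main() {
    // Exhaustive safety property: a trip can only be left via acknowledgement.
    for event in [Event::MovementAuthority, Event::OverspeedTrip] {
        assert_eq!(step(Mode::TripActive, event), Mode::TripActive);
    }
    assert_eq!(step(Mode::TripActive, Event::TripAcknowledged), Mode::StandBy);
}
```

Because `match` must cover every case, the compiler already enforces totality; a model checker then verifies temporal properties over the same closed transition relation.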

The question isn't whether AI will transform safety-critical development. It's how fast the industry will adopt the new paradigm.


This post is part of the SS026 project journal. The project is a collaborative effort between a railway signalling engineer and Claude, Anthropic's AI assistant.