The AI Safety Landscape: Why Detection Is Not Enforcement

Stephen Calhoun
Feb 04, 2026

Everyone is building dashboards, scanners, and firewalls. Here is why none of them solve the liability problem.

If you look at the AI safety market right now, it feels crowded. You see logos for governance platforms, red-teaming tools, observability dashboards, and prompt injection filters.

It is easy to assume the problem is "solved."

But if you are a developer trying to get liability insurance or pass a clinical safety review, you quickly realize something terrifying: None of these tools actually stop the AI from making a mistake in production.

The market is full of tools that watch AI, test AI, or manage policies for AI. But almost no one is building the runtime enforcement layer that actually controls the AI.

Here is the hard, honest breakdown of where SASI fits in a landscape of partial solutions.

1. The Governance Platforms (e.g., Credo AI)
What they do well: These platforms are excellent at "Governance, Risk, and Compliance" (GRC). They help you define policies, manage documentation, and generate reports for regulators. They treat safety as a systems and process problem.

Why they aren’t SASI: They live above the execution layer, not in it. Credo AI helps you write the law; it doesn’t hire the police officer. They are governance dashboards, not runtime governors. They don't provide the fail-closed, real-time gating that stops a dangerous message milliseconds before it reaches a user.

The Difference: Credo is your compliance binder. SASI is your Hard Safety Enforcement Layer.

2. The Red-Teamers & Scanners (e.g., Robust Intelligence)
What they do well: They are fantastic at "AI QA." They hammer your model with thousands of attacks to find weaknesses before you launch. They are the scanners that tell you where the holes are.

Why they aren’t SASI: Detection is not enforcement. Knowing your model fails 2% of the time doesn't help you when it fails live with a user in crisis. These tools are scanners, not brake systems.

The Difference: Robust Intelligence tells you your brakes might fail. SASI is the dead man's switch that stops the car automatically if they do.

3. The Security Firewalls (e.g., Lakera Guard)
What they do well: They are excellent at stopping hackers. They block prompt injections and jailbreak attempts with clean APIs.

Why they aren’t SASI: They are narrow. A firewall stops an attack, but it doesn't understand clinical safety, emotional nuance, or role integrity. A prompt injection filter won't catch a wellness bot slowly drifting into giving dangerous medical advice—that’s not a "hack," that’s a safety failure.

The Difference: Lakera stops hackers. SASI stops harm, including the failures a firewall never flags: ambiguity, role drift, and clinical risk, via Mode Integrity Enforcement.

4. The Developer Frameworks (e.g., NeMo Guardrails, Microsoft Guidance)
What they do well: They give developers tools to build better apps. They offer code libraries to structure conversation flows.

Why they aren’t SASI: They live inside the application code. That creates a conflict of interest no insurer can accept: if the safety rails are just code written by the same developer who is trying to ship the feature, they are not independent controls. They are self-attestation.

The Difference: NeMo is a library you import. SASI is an independent Canonical Evidence Model that insurers can trust because it exists outside your codebase.

The SASI Distinction: Runtime Enforcement
The reason the market feels fragmented is that everyone is solving a piece of the puzzle, but avoiding the hardest part: taking responsibility for the decision.

Observability tools (HoneyHive, Arize) report problems after they happen.

Moderation APIs (OpenAI Moderation, Azure AI Content Safety) provide signals but don't own the decision.

SASI is different because it is a Runtime Enforcement Layer.

  • Independent: We sit between the app and the user.
  • Fail-Closed: If we detect a crisis or a policy violation, we block it. If the check itself errors out, we still block. Uncertainty never resolves to a pass.
  • Insurer-Facing: We generate Regulator-Readable Outputs designed for audit, not just debugging.
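
To make "fail-closed" concrete, here is a minimal sketch of the pattern in Python. Every name in it is hypothetical (this is not SASI's actual API); the point is the control flow: a reply reaches the user only if the safety check explicitly allows it, and any error in the check resolves to a block, never a silent pass.

```python
# Minimal sketch of a fail-closed enforcement gate. All names here
# (Decision, enforce, policy_check) are hypothetical, not SASI's API;
# they only illustrate the pattern described above.
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    reason: str  # regulator-readable rationale, retained for audit

def enforce(candidate_reply: str, policy_check) -> Decision:
    """Gate a model reply before it reaches the user."""
    try:
        if policy_check(candidate_reply):
            return Decision(True, "passed runtime policy check")
        return Decision(False, "policy violation detected")
    except Exception as exc:  # classifier down, timeout, bad input...
        # The defining property: uncertainty resolves to a block.
        return Decision(False, f"fail-closed on enforcement error: {exc}")

# Usage: the app never sends model output to the user directly.
draft = "model output here"
decision = enforce(draft, policy_check=lambda text: "harm" not in text)
reply_to_user = draft if decision.allowed else "[message withheld]"
```

A fail-open design would do the opposite: an error in the checker lets the raw reply through. That one branch is the difference between a dashboard and a governor.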

The One-Liner
If you need to explain this to an investor or a buyer who asks, "Why not just use Llama Guard?", here is the answer:

Most AI safety companies either detect problems or manage policies. SASI is built to enforce safety at runtime in a way insurers and regulators can trust, without turning app teams into safety companies.

The category isn't crowded. It's empty. Everyone is watching the car; SASI is the only one willing to grab the steering wheel.

Read the next article in this series.