
The Compliance Hub - Regulatory & Insurance Alignment

"Safety is no longer a self-attestation. It is a forensic requirement."

The Cost of Manual Compliance
The projections below estimate the annual manual labor required to maintain compliance for an AI application deployed without middleware. Calculations assume a standard release cycle of 1 to 2 updates per week.

Who Gets Fined — The Chatbot Platform or the Business Using It?

In most cases, the deployer (the business that embedded the chatbot on its website) bears primary legal liability. Regulators treat the deployer as the responsible party because it chose to use the tool, it controls the deployment context, and it has a direct relationship with the user.

Key points every operator needs to understand:

  • You cannot outsource your compliance obligation to your chatbot vendor. A healthcare clinic, school, or mental health nonprofit that deploys a no-code chatbot owns the compliance requirement — regardless of what the platform's terms of service say.
  • The chatbot platform may share liability in cases where the platform's own architectural decisions caused the violation — for example, transmitting unredacted Social Security numbers to a third-party AI backend. But regulators typically pursue the deployer first.
  • "We didn't know the chatbot was doing that" is not a legal defense. Under HIPAA, COPPA, the Fair Housing Act, and the EU AI Act, liability is strict or negligence-based — intent does not determine exposure.
  • Fines scale with the number of users affected. Under Colorado SB 24-205, each affected consumer is a separate violation, and each violation carries up to $20,000 under the Colorado Consumer Protection Act. A single non-compliant chatbot deployment touching 500 users therefore creates up to $10 million in potential exposure (500 × $20,000).
  • Any penalties collected under NY GBL Art. 47 go directly to New York's suicide prevention fund — giving the AG additional political motivation to enforce.

Global Regulatory Mapping
SASKI is engineered to map directly to the requirements of the world’s most stringent regulatory frameworks and underwriting standards.

🇺🇸 US Healthcare & Privacy (HIPAA / FDA)

For clinical and telehealth applications, SASKI provides the deterministic "Hard Governance" floor required for patient safety and liability defense.

HIPAA Safe Harbor: Automated, pre-LLM redaction of all 18 PHI identifiers ensures private data never becomes model training data, backed by mandatory 7-year audit retention.
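
To make the pre-LLM redaction step concrete, the sketch below shows the general pattern: a middleware pass that scrubs a few of the 18 Safe Harbor identifier categories before a prompt ever reaches a third-party model. The patterns and function names are illustrative assumptions, not SASKI's actual implementation.

    import re

    # Illustrative patterns covering a few of the 18 HIPAA Safe Harbor
    # identifier categories; a production redactor must cover all 18,
    # including names, geographic subdivisions, and biometric identifiers.
    PHI_PATTERNS = {
        "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "DOB":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    }

    def redact_phi(text: str) -> tuple[str, list[str]]:
        """Replace PHI with typed placeholders; report which categories fired."""
        found = []
        for label, pattern in PHI_PATTERNS.items():
            if pattern.search(text):
                found.append(label)
                text = pattern.sub(f"[REDACTED-{label}]", text)
        return text, found

    # Redaction happens BEFORE the prompt leaves the middleware, so the
    # third-party LLM never sees the raw identifiers.
    safe, hits = redact_phi("SSN 123-45-6789, DOB 01/02/1984")
    print(safe)   # SSN [REDACTED-SSN], DOB [REDACTED-DOB]
    print(hits)   # ['SSN', 'DOB']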

FDA 524B Readiness: Generates the tamper-evident receipts, decision traces, and reconstruction artifacts required for FDA post-market cybersecurity audits, supporting compliance for any AI-enabled device software function.
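
"Tamper-evident" typically means each audit receipt is cryptographically chained to its predecessor, so editing any historical record invalidates every hash after it. The sketch below shows one common construction of that property; it is an assumption about the general technique, not SASKI's actual receipt format.

    import hashlib, json, time

    def make_receipt(prev_hash: str, decision: dict) -> dict:
        """Append-only audit receipt chained to the previous receipt's hash."""
        body = {"ts": time.time(), "decision": decision, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        return body

    def verify_chain(receipts: list[dict]) -> bool:
        """Recompute every hash; any edit to a past receipt breaks the chain."""
        prev = "GENESIS"
        for r in receipts:
            body = {k: v for k, v in r.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["hash"] != recomputed or body["prev"] != prev:
                return False
            prev = r["hash"]
        return True

Because each hash commits to the one before it, an auditor who trusts only the most recent hash can still detect a retroactive edit anywhere in the log.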

🇪🇺 EU AI Act Compliance

SASKI helps developers navigate the complexities of the EU AI Act, particularly regarding emotion recognition and transparency.

Article 5 Alignment: Features a jurisdiction-aware "EU-Compliant Mode" that disables emotional state detection in specific contexts (like Student Mode) while maintaining strictly governed crisis detection.
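
One way to picture a jurisdiction-aware mode is a policy table resolved per request, with the crisis-detection path hard-wired on. The structure and names below are hypothetical, offered only to illustrate the idea.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GovernancePolicy:
        emotion_detection: bool
        crisis_detection: bool  # intended to stay True in every jurisdiction

    # Hypothetical policy table keyed by (jurisdiction, mode).
    POLICIES = {
        ("EU", "student"): GovernancePolicy(emotion_detection=False, crisis_detection=True),
        ("EU", "general"): GovernancePolicy(emotion_detection=False, crisis_detection=True),
        ("US", "student"): GovernancePolicy(emotion_detection=True,  crisis_detection=True),
    }

    def resolve_policy(jurisdiction: str, mode: str) -> GovernancePolicy:
        # Unknown contexts default to the most restrictive posture.
        return POLICIES.get(
            (jurisdiction, mode),
            GovernancePolicy(emotion_detection=False, crisis_detection=True),
        )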

Article 13 Transparency: Provides structured explanation components to help partners generate required transparency documentation and prove explainability to external auditors.

Children & Education (COPPA / FERPA)

SASKI's child and student modes are designed to provide absolute boundary enforcement for the most vulnerable users.

COPPA Compliance: Enforces maximum PII redaction (including school and location data) at the system level, which cannot be disabled by application administrators.
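
"Cannot be disabled by application administrators" implies the redaction floor is enforced above the tenant-configuration layer, so admin settings can tighten it but never loosen it. A minimal sketch of that merge rule, with hypothetical setting names:

    # System-level floor, applied after any admin-supplied settings.
    SYSTEM_FLOOR = {"redact_pii": True, "redact_school": True, "redact_location": True}

    def effective_settings(admin_settings: dict) -> dict:
        merged = dict(admin_settings)
        for key, required in SYSTEM_FLOOR.items():
            # The floor always wins: an admin "False" is overridden.
            merged[key] = merged.get(key, required) or required
        return merged

    print(effective_settings({"redact_location": False, "tone": "friendly"}))
    # {'redact_location': True, 'tone': 'friendly', 'redact_pii': True, 'redact_school': True}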

FERPA Alignment: Provides 7-year cryptographic audit logs and academic concern flags to maintain educational integrity and institutional compliance.

Insurable AI Infrastructure

Cyber and E&O (Errors & Omissions) underwriters are increasingly demanding independent verification of AI boundaries before issuing liability policies. SASKI provides the structural proof carriers require:

Deterministic Governance: Cryptographic proof that your safety and compliance logic is completely independent of the "black box" LLM.

Tamper-Evident Auditability: Every decision includes a forensic decision tree path and action rationale to prove exactly why a specific safety action was triggered or overridden.
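
A decision trace of this kind might look like the hypothetical record below: the ordered list of rules evaluated, the action taken, and the rationale, ready to be folded into a tamper-evident receipt like the one sketched earlier. All field names are illustrative, not SASKI's actual schema.

    # Hypothetical decision trace; field names are illustrative.
    decision_trace = {
        "request_id": "req-0042",
        "path": [  # every rule evaluated, in order
            {"rule": "jurisdiction_check",  "result": "EU"},
            {"rule": "emotion_detection",   "result": "skipped (EU mode)"},
            {"rule": "crisis_keyword_scan", "result": "match"},
        ],
        "action": "crisis_referral",
        "rationale": "crisis_keyword_scan matched; referral overrides the normal response",
        "override": None,  # populated when a human overrides the automated action
    }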

Model-Agnostic Insurability: Your liability profile and governance floor stay constant even if you switch model providers, protecting your enterprise risk posture over time.

The Operational ROI of Deterministic Governance

This table illustrates the direct resource efficiency gained by implementing SASKI's automated middleware. By shifting critical compliance workflows—such as safety prompting, PII redaction, and audit generation—from manual engineering tasks to a real-time deterministic layer, teams eliminate over 800 hours of manual overhead per year. SASKI transforms regulatory compliance from a labor-intensive bottleneck into a seamless, automated infrastructure.


US Federal Laws

Federal law sets the compliance floor for AI deployments across all fifty states. These requirements apply regardless of where your company is headquartered or where your platform was built. 

(Statutory citations and text for each law are linked in the references section below.)

US State Laws

In the absence of federal AI legislation, states have moved quickly to fill the gap. These laws apply based on where your users are located, not where your business is based. A chatbot deployed from Singapore or South Africa must comply with California law if it serves California residents.

Algorithmic Decision Compliance

These laws apply when an AI system moves beyond answering questions and begins making or substantially influencing decisions that affect people's access to jobs, housing, credit, or services. A no-code chatbot crosses this threshold when deployed in HR intake, tenant screening, loan qualification, or insurance eligibility workflows — even without formal decision-making architecture built in.

(Statutory citations and text for each law are linked in the references section below.)

EU and International

The EU's regulatory framework applies to any organization worldwide that offers services to EU residents or processes their personal data. These are not regional rules. They follow the user, not the company.

US FEDERAL LAWS

The following federal laws apply across all US states. Any AI chatbot deployed in a healthcare, children's education, or mental health context that handles personal information is subject to these requirements regardless of which state the company or user is located in.

HIPAA — Health Insurance Portability and Accountability Act
HIPAA requires that any electronic health information — including names, dates of birth, Social Security numbers, insurance details, and medical history — be protected with appropriate technical safeguards both in transit and in storage. When an AI chatbot collects or transmits this type of information without encryption, redaction, or access controls, the deploying organization is directly exposed to federal enforcement. Penalties are tiered based on how serious the violation was and whether the organization knew about it. An organization that knowingly allows patient data to flow unprotected faces the highest penalties. Enforcement is handled by the HHS Office for Civil Rights.

COPPA — Children's Online Privacy Protection Act
COPPA requires that any website or online service directed at children under 13 — or that knowingly collects information from children — obtain verified parental consent before collecting personal data, maintain a written security program, and delete children's data when it is no longer needed. AI chatbots deployed on EdTech platforms, school websites, or children's apps are directly covered. The updated COPPA Rule's compliance deadline passed on April 22, 2026, and enforcement is now fully active. Violations are assessed per incident per day, making non-compliance extremely costly for platforms that interact with large numbers of young users.

US STATE LAWS

The US has no single federal AI law. Instead, a growing patchwork of state laws creates specific obligations depending on where a user is located — not where the company is based. A chatbot deployed in California must comply with California law even if the company is headquartered in Singapore or South Africa.

California SB 243 — Companion Chatbot Safety Act
This law requires any AI chatbot that engages users in emotionally supportive, human-like, or sustained personal conversation to implement a protocol for detecting expressions of suicidal ideation and immediately referring users to crisis services such as the 988 Suicide and Crisis Lifeline. It also requires clear disclosure to users that they are interacting with an AI, not a human. The law explicitly covers mental health, wellness, and emotional support contexts. Any person harmed by a violation can sue directly in court without waiting for government action, making this one of the most immediately enforceable AI safety laws in the US.
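
At the code level, the statute's required "protocol" reduces to a deterministic pre-send check that outranks the model's output. The sketch below is deliberately naive (a keyword list alone would never suffice in production, where classifiers and clinically reviewed protocols are needed), but it shows the shape of the control:

    CRISIS_MARKERS = ("want to die", "kill myself", "end my life")  # illustrative only

    REFERRAL = (
        "If you are thinking about suicide or self-harm, you can call or text 988 "
        "(Suicide and Crisis Lifeline) to reach a trained counselor right now."
    )

    def guard_response(user_message: str, llm_response: str) -> str:
        """Deterministic pre-send check: crisis referral outranks the LLM output."""
        if any(marker in user_message.lower() for marker in CRISIS_MARKERS):
            return REFERRAL
        return llm_response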

California AB 3030 — Healthcare AI Disclosure Law
This law requires hospitals, clinics, medical groups, and licensed healthcare providers that use generative AI to communicate patient clinical information to display a prominent disclosure throughout any chat-based interaction stating that the content is AI-generated and has not been reviewed by a licensed clinician. It also requires clear instructions telling patients how to reach a human healthcare provider. The law applies any time a chatbot responds to questions about symptoms, treatment, mental health, or other clinical topics in a healthcare setting. Penalties of up to $25,000 per violation apply to licensed health facilities.

California AB 489 — Healthcare AI Impersonation Ban
This law prohibits AI developers and deployers from using any words, titles, phrases, credentials, or design elements that could lead a user to believe they are receiving care from a licensed healthcare professional when they are not. A mental health chatbot that responds to clinical disclosures without clearly identifying itself as AI — or that uses language implying professional clinical authority — is directly covered. Enforcement is handled by California's healthcare professional licensing boards, which can seek injunctive relief.

California CPRA / CCPA — Consumer Privacy Rights
California's comprehensive consumer privacy law gives California residents the right to know what personal data is being collected about them, the right to delete that data, and the right to opt out of its sale or use for profiling. When an AI chatbot continues to transmit a user's personal data — including Social Security numbers or medical details — to a third-party AI provider after the user has explicitly requested deletion or opted out, the deployer is in direct violation of this right. Intentional violations carry higher penalties than unintentional ones.

New York GBL Article 47 — AI Companion Models Act
New York was the first state to specifically regulate emotionally responsive AI chatbots. This law requires any AI system designed to simulate a sustained human relationship with a user to implement a protocol for detecting expressions of suicidal ideation or self-harm and immediately referring the user to crisis services. It also requires a clear disclosure at the start of every session — and at least every three hours during longer interactions — informing users they are speaking with an AI, not a human. The New York Attorney General enforces this law and fines accumulate daily, meaning a non-compliant platform can rack up significant penalties quickly. All fines collected go directly to funding suicide prevention programs in New York.
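
The disclosure cadence is straightforward to enforce deterministically. A minimal sketch with illustrative names; the same mechanism also covers EU AI Act Article 50's disclosure duty where the AI nature of the system is not obvious from context:

    import time

    DISCLOSURE = "You are chatting with an AI, not a human."
    INTERVAL = 3 * 60 * 60  # at session start and at least every three hours

    class DisclosureClock:
        def __init__(self):
            self.last_disclosed = None  # None forces disclosure on the first message

        def maybe_disclose(self):
            """Return the disclosure text when due, otherwise None."""
            now = time.monotonic()
            if self.last_disclosed is None or now - self.last_disclosed >= INTERVAL:
                self.last_disclosed = now
                return DISCLOSURE
            return None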

Illinois WOPRA — Wellness and Oversight for Psychological Resources Act
Illinois became the first US state to broadly prohibit AI from independently delivering therapy or psychotherapy services. Under WOPRA, only licensed human professionals — including psychologists, clinical social workers, counselors, and marriage and family therapists — may provide therapeutic services to Illinois residents. An AI chatbot that responds to a user's mental health disclosure with therapeutic guidance, treatment recommendations, or emotional interventions without licensed human oversight is in direct violation of this law, even if the chatbot is operated by an out-of-state company. The law applies based on where the user is located, not where the company is based.

Colorado SB 24-205 — Colorado AI Act
Colorado's AI Act is the most comprehensive state-level AI regulation in the United States. It requires any business that uses an AI system to make or significantly influence consequential decisions — including those affecting healthcare, education, housing, employment, lending, or legal services — to implement a risk management program, conduct impact assessments, notify consumers when AI is being used, and provide a path for human review of adverse decisions. The law applies to any business serving Colorado consumers, regardless of where the company is located. Penalties are calculated per consumer per transaction, meaning a single non-compliant AI deployment affecting a large user base creates substantial exposure. The Colorado Attorney General has exclusive enforcement authority with a 60-day cure period before penalties are assessed.

Colorado Privacy Act (CPA)
The Colorado Privacy Act gives Colorado residents the right to opt out of the processing of their personal data for targeted advertising, profiling, and sale. It also establishes the right to access and delete personal information. An AI chatbot that retains sensitive personal data in its active session context — and continues transmitting it to a third-party AI provider — after a user has explicitly invoked their opt-out rights is in direct violation of the CPA. The Colorado Attorney General enforces the CPA as an unfair trade practice.

EU AND INTERNATIONAL

The EU's regulations apply to any organization worldwide that offers services to EU residents or processes their data — regardless of where the company is based. A US company with EU users is fully subject to GDPR and the EU AI Act.

EU AI Act Article 5 — Prohibited AI Practices
The EU AI Act's most serious category of violations covers AI systems that exploit the psychological vulnerabilities of people in situations of health-related or social vulnerability in ways likely to cause significant harm. A mental health chatbot that responds to a user's expressions of distress with generic empathy rather than crisis referral — or that fails to protect sensitive personal data from unnecessary exposure — can implicate this prohibition. These are the highest penalties in the entire AI Act, exceeding even GDPR, reflecting the EU's position that certain AI behaviors represent an unacceptable risk to fundamental rights and cannot be permitted regardless of context or intent.

EU AI Act Article 50 — Chatbot Transparency Disclosure
Beginning August 2, 2026, any AI system designed to interact with natural persons must ensure that users are informed they are interacting with an AI — unless it is obvious from context. This applies to every AI chatbot operating in the EU market, including customer service bots, mental health support tools, and educational assistants. The obligation is on the deployer, not just the platform provider, meaning any business that embeds a third-party chatbot into its website to serve EU users must ensure the disclosure is present and compliant. Penalties apply to the deployer even if the underlying platform failed to build the disclosure in by default.

GDPR Article 32 — Security of Processing
The General Data Protection Regulation requires any organization processing the personal data of EU residents to implement appropriate technical and organizational security measures — including encryption, pseudonymization, and access controls — to protect that data during processing and transmission. When an AI chatbot transmits unredacted personal data including names, dates of birth, or identification numbers to a third-party AI provider without any protective measures, the deployer fails this requirement. GDPR applies to any company anywhere in the world that processes data belonging to EU residents, and penalties are calculated as a percentage of global revenue — meaning large companies face proportionally larger fines.

 

All penalty figures reflect current 2026 inflation-adjusted amounts where applicable. Figures represent maximum civil penalties unless otherwise noted. This page is maintained by Technical Visionaries PBC and is updated as new laws take effect. Nothing on this page constitutes legal advice. Consult qualified counsel for compliance assessment specific to your deployment context.