AI Governance & Transparency
This document describes how Inori uses artificial intelligence. It is provided for transparency and compliance with applicable AI governance regulations, including the Colorado AI Act (SB 24-205).
Status: DRAFT — Pending legal review before publication.
1. System Purpose
Inori uses AI to extract structured data from insurance Certificates of Insurance (COIs) and compare the extracted data against user-configured compliance requirements. The system assists compliance review — it does not make final compliance determinations.
2. AI Technology
- Model: Anthropic Claude (Haiku for standard extraction, Sonnet for complex documents requiring escalation)
- Method: Computer vision + structured extraction from PDF documents
- Scope: ACORD 25 certificate forms (89 fields extracted across 10 sections)
- Processing: Ephemeral — documents are sent to the AI provider for extraction only. Documents are not used for model training.
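The two-tier routing described above can be sketched as follows. This is an illustrative assumption, not Inori's actual implementation: the model identifiers, the `Document` shape, and the complexity heuristic (page count, form type, scan quality) are all hypothetical.

```python
# Hypothetical sketch of two-tier model routing: standard certificates go
# to the fast model; complex documents escalate to the stronger model.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

STANDARD_MODEL = "claude-haiku"     # assumed identifier for Haiku
ESCALATION_MODEL = "claude-sonnet"  # assumed identifier for Sonnet

@dataclass
class Document:
    page_count: int
    is_acord_25: bool
    scan_quality: float  # 0.0 (poor) to 1.0 (clean)

def select_model(doc: Document) -> str:
    """Escalate multi-page, non-standard, or low-quality documents."""
    is_complex = (
        doc.page_count > 1
        or not doc.is_acord_25
        or doc.scan_quality < 0.6
    )
    return ESCALATION_MODEL if is_complex else STANDARD_MODEL
```

A single-page, clean ACORD 25 form would route to the standard model; anything multi-page, non-standard, or poorly scanned would escalate.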
3. Data Inputs
The AI system processes:
- Certificate of Insurance PDF documents uploaded by users or vendors

The system does not process personal health information (PHI), financial account numbers, or Social Security numbers. Extracted structured data (coverage limits, dates, names, endorsements) is stored in the platform database.
4. Testing & Accuracy
- Calibration suite: 273 test scenarios across 19 categories, all currently passing (100%)
- Accuracy claim: greater than 95% field-level extraction accuracy on standard ACORD 25 certificate forms
- Known limitations:
- Low-quality scans and handwritten or faded documents may reduce extraction accuracy
- Non-standard certificate formats (carrier-specific forms) may have lower extraction rates
- Multi-page continuation certificates may miss fields on subsequent pages
- Endorsement verification relies on detecting form codes in the certificate text; it does not validate the attached endorsement documents themselves
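The text-based endorsement check noted in the last limitation can be illustrated with a minimal sketch. The regex pattern, the code prefixes, and the function name are assumptions for illustration only; they are not the platform's actual detection logic.

```python
import re

# Hypothetical illustration of text-based form-code detection: matching
# endorsement codes such as "CG 20 10" in certificate text. The pattern
# and prefixes (CG/CA/WC) are assumptions, not Inori's actual rules.
FORM_CODE_PATTERN = re.compile(r"\b(CG|CA|WC)\s?\d{2}\s?\d{2}\b")

def detect_form_codes(certificate_text: str) -> set[str]:
    """Return normalized form codes referenced in the text.

    Note the limitation: this confirms a code is *mentioned*, not that
    the endorsement document itself is attached or valid.
    """
    return {
        match.group(0).replace(" ", "")
        for match in FORM_CODE_PATTERN.finditer(certificate_text)
    }
```

For example, a certificate stating "Additional insured per CG 20 10" would register the code even if no CG 20 10 endorsement page is actually attached, which is exactly the limitation disclosed above.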
5. Human Oversight
- All AI compliance results are presented as recommendations, not final determinations
- Users can manually review and override any AI-extracted field via the review system
- Users can approve certificates "as-is" or edit individual fields with audit trail
- The consent gate (shown on first use) discloses the AI nature of the analysis
- Every result displays: "AI-analyzed · Review recommended for binding decisions"
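The field-level override with audit trail described above might look like the following sketch. The record shape, field names, and function are hypothetical assumptions, not the platform's actual schema.

```python
# Minimal sketch of a manual field override that preserves the original
# AI value for auditability. All names here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FieldOverride:
    field_name: str
    ai_value: str      # value the AI extracted (kept for the audit trail)
    user_value: str    # corrected value entered by the reviewer
    reviewer: str
    reviewed_at: str   # ISO 8601 UTC timestamp

def override_field(audit_log: list, field_name: str,
                   ai_value: str, user_value: str, reviewer: str) -> None:
    """Record a manual correction without discarding the AI's value."""
    audit_log.append(FieldOverride(
        field_name=field_name,
        ai_value=ai_value,
        user_value=user_value,
        reviewer=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    ))
```

The design point is that overrides append to a log rather than overwrite in place, so both the AI's output and the human correction remain inspectable.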
6. Consumer Notice & Rights
Notice
- A consent dialog is displayed before any AI analysis results are shown
- All compliance results include an inline AI disclosure marker
- Email notifications include: "Certificate analysis powered by AI. Not insurance advice."
Appeal & Correction
- Vendors who disagree with compliance results can upload a corrected certificate at any time via the vendor portal
- Certificate holders can request manual review by their property manager
- The manual review system allows field-by-field correction with audit trail
Opt-Out
- Users can choose not to use AI-powered analysis and instead perform manual compliance review
- The platform does not make automated decisions without human review capability
7. Record Retention
- AI extraction results are stored for the lifetime of the user's account
- Deleted data is soft-deleted and permanently purged 90 days after account closure
- Audit trails of AI analysis, manual reviews, and compliance decisions are retained for compliance purposes
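The 90-day purge window above reduces to a simple date calculation. This sketch is illustrative; the constant and function name are assumptions.

```python
# Illustrative sketch of the 90-day soft-delete window: data becomes
# eligible for permanent purge 90 days after account closure.
from datetime import date, timedelta

PURGE_WINDOW_DAYS = 90  # per the retention policy above

def purge_date(account_closed_on: date) -> date:
    """Return the date on which soft-deleted data may be purged."""
    return account_closed_on + timedelta(days=PURGE_WINDOW_DAYS)
```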
8. Risk Classification
Under the Colorado AI Act (SB 24-205), Inori's AI system may be classified as a "high-risk" system if compliance determinations result in consequential decisions affecting vendor access to projects. Inori mitigates this through:
- Human-in-the-loop design (all results require user action)
- Appeal mechanisms (vendor re-upload, manual review)
- Consumer notice (consent gate, inline markers)
- Documentation of testing methodology and accuracy
9. Contact
For questions about Inori's AI governance: ask@askinori.com