AI-Powered COI Verification: How It Works and Why Accuracy Matters
Inori Team
COI Compliance Experts
A trained compliance analyst can audit a Certificate of Insurance in 15 to 30 minutes. They read the document, cross-reference fields against requirements, check provision language, and make a compliance determination. They do this well — but they do it slowly, and they do it differently each time depending on fatigue, workload, and individual interpretation.
AI-powered COI verification performs the same audit in as little as 30 seconds. It reads the same document, extracts the same fields, checks the same requirements, and produces the same compliance determination. But it does so with measurable consistency and at a scale that manual processes cannot match.
This article explains how AI COI verification works, what it can and cannot do, and why the accuracy numbers hold up under scrutiny.
The 6-Stage Verification Pipeline
AI COI verification is not a single step. It is a pipeline of six stages, each performing a distinct function. Understanding the pipeline explains both the accuracy and the limitations.
Stage 1: Document Upload and Preprocessing
The process begins when a certificate is uploaded — typically as a PDF, though JPEG, PNG, and TIFF formats are also supported. The preprocessing stage:
- Normalizes the document: Corrects rotation, adjusts contrast, removes artifacts
- Identifies the document type: Confirms this is an ACORD 25 (or ACORD 27, 28, or other recognized form) rather than an unrelated document
- Determines quality: Assesses whether the scan quality is sufficient for accurate extraction
Poor-quality documents — heavily compressed JPEGs, faxed copies of copies, documents with handwritten overlays — are flagged at this stage. The system can still process them, but accuracy confidence scores are adjusted downward.
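The quality gate described above can be sketched as a confidence multiplier. Everything in this sketch is an illustrative assumption: the metric names, thresholds, and multiplier values are not production parameters, just a way to show how scan quality translates into a downgraded confidence score.

```python
from dataclasses import dataclass

@dataclass
class ScanMetrics:
    dpi: int               # effective resolution of the scan
    contrast: float        # 0.0 (flat) to 1.0 (full dynamic range)
    has_handwriting: bool  # handwritten overlays detected

def quality_confidence(m: ScanMetrics) -> float:
    """Return an extraction-confidence multiplier in [0, 1].

    Thresholds and multipliers are illustrative assumptions;
    a real system would derive them from validation data.
    """
    confidence = 1.0
    if m.dpi < 200:          # multi-generation faxes often land near 100-150 dpi
        confidence *= 0.7
    if m.contrast < 0.4:     # heavily compressed or washed-out scans
        confidence *= 0.8
    if m.has_handwriting:    # handwriting degrades OCR accuracy
        confidence *= 0.75
    return round(confidence, 3)

# A clean 300-dpi scan keeps full confidence; a low-resolution fax
# with handwriting gets a much lower score and is routed to review.
print(quality_confidence(ScanMetrics(300, 0.8, False)))  # 1.0
print(quality_confidence(ScanMetrics(120, 0.3, True)))   # 0.42
```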
Stage 2: Vision Extraction
This is the core technology. Computer vision models read the certificate the way a human does — but with the ability to locate and extract every field simultaneously.
The ACORD 25 form is semi-structured. Fields occupy defined locations on the form, but the content within those fields varies in format, length, and presentation. The vision model:
- Locates field boundaries: Identifies where each field starts and ends on the form
- Reads text within fields: Extracts the content using OCR (Optical Character Recognition) tuned for insurance documents
- Interprets checkboxes: Determines whether checkboxes are checked, unchecked, or ambiguous
- Parses the Description of Operations: Reads the free-text Description field, which contains the most variable and critical information on the certificate
Modern vision models go beyond traditional OCR. They understand the spatial relationships between fields, can disambiguate overlapping text, and handle the inconsistencies that plague insurance documents — misaligned printing, variable fonts, stamps, and handwritten annotations.
Stage 3: Field Parsing and Normalization
Raw extracted text is not usable data. "1,000,000" and "$1M" and "1000000" all represent the same limit, but they look different to a computer. The parsing stage:
- Normalizes currency values: Converts all limit representations to standardized numeric format
- Parses dates: Handles MM/DD/YYYY, MM/DD/YY, spelled-out dates, and other date formats
- Standardizes entity names: Identifies and normalizes carrier names, insured names, and producer names
- Classifies coverage types: Maps the certificate's coverage sections to standard coverage type categories
- Extracts provision language: Identifies Additional Insured, Waiver of Subrogation, Primary and Noncontributory, and Notice of Cancellation language from the Description of Operations
- Identifies endorsement forms: Recognizes ISO form numbers (CG 20 10, CG 20 37, WC 00 03 13, etc.) and their edition dates
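The normalization rules above can be sketched in a few lines. This is a minimal illustration, not Inori's actual parser; the helper names, the supported date formats, and the endorsement-number pattern are all assumptions for the sake of the example.

```python
import re
from datetime import datetime

def normalize_limit(raw: str) -> int:
    """Convert limit strings like '$1M', '1,000,000', or '1000000'
    to a standard integer dollar amount."""
    s = raw.strip().upper().replace("$", "").replace(",", "")
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    if s and s[-1] in multipliers:
        return int(float(s[:-1]) * multipliers[s[-1]])
    return int(float(s))

def parse_date(raw: str) -> datetime:
    """Try the date formats commonly seen on certificates."""
    for fmt in ("%m/%d/%Y", "%m/%d/%y", "%B %d, %Y"):
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {raw!r}")

# ISO endorsement form numbers: a prefix like CG/CA/WC followed by
# 2-4 two-digit groups (form number plus optional edition date).
FORM_PATTERN = re.compile(r"\b(?:CG|CA|WC)(?:\s?\d{2}){2,4}\b")

assert normalize_limit("$1M") == normalize_limit("1,000,000") == 1_000_000
assert parse_date("01/15/2025") == parse_date("January 15, 2025")
print(FORM_PATTERN.findall("AI status per CG 20 10 and CG 20 37."))
```

The point of the sketch is the variety problem: three strings that a human reads as the same limit are three different values until normalization collapses them.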
Stage 4: Compliance Check
With all 89 fields extracted and normalized, the system compares the certificate data against your defined compliance requirements:
- Coverage type matching: Are all required coverage types present on the certificate?
- Limit comparison: Does each coverage type meet or exceed the minimum required limit?
- Date validation: Are all policies currently in effect? Are any expiring within the warning threshold?
- Provision verification: Does the Description of Operations contain all required provision language?
- Endorsement form detection: Are endorsement form numbers (CG 20 10, CG 20 37, etc.) mentioned in the Description of Operations? Full form-level validation against our 2,490-form endorsement index is coming soon.
- Certificate holder verification: Does the certificate holder section show the correct legal entity name?
Each check produces a result: pass, fail, or needs review. The compliance check applies the same logic every time, eliminating the reviewer-to-reviewer variation that plagues manual processes.
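The limit and date checks above reduce to simple comparisons once the fields are normalized. The sketch below is a simplified illustration, not the production rule engine; the field names and the requirement schema are assumptions.

```python
from datetime import date

def check_coverage(cert: dict, req: dict, today: date) -> dict:
    """Compare one extracted coverage section against a requirement.

    Field names are illustrative assumptions. Each check yields
    "pass", "fail", or "needs_review".
    """
    results = {}
    # Limit comparison: meet or exceed the required minimum
    results["limit"] = ("pass" if cert["each_occurrence"] >= req["min_each_occurrence"]
                        else "fail")
    # Date validation: is the policy currently in effect?
    in_effect = cert["effective"] <= today <= cert["expiration"]
    results["dates"] = "pass" if in_effect else "fail"
    # Expiration warning threshold (default 30 days)
    days_left = (cert["expiration"] - today).days
    if in_effect and days_left <= req.get("warning_days", 30):
        results["expiration_warning"] = "needs_review"
    return results

cert = {"each_occurrence": 1_000_000,
        "effective": date(2025, 1, 1), "expiration": date(2025, 12, 31)}
req = {"min_each_occurrence": 2_000_000, "warning_days": 30}
print(check_coverage(cert, req, date(2025, 6, 15)))
# The limit check fails: a $1M certificate against a $2M requirement.
```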
Stage 5: Gap Detection and Severity Classification
When the compliance check identifies deficiencies, the gap detection stage classifies each one:
Critical gaps (non-compliant):
- Missing required coverage type
- Limits below minimums
- Expired policy
- Missing Additional Insured endorsement
- Insured name mismatch
Warning gaps (needs review):
- Missing Waiver of Subrogation
- Missing Primary and Noncontributory language
- Endorsement forms not referenced
- Policy expiring within 30 days
- Ambiguous provision language
Informational findings:
- Minor formatting issues
- Non-standard but acceptable carrier
- Certificate unsigned
The severity classification matches the framework that experienced compliance analysts use — but it applies consistently, without the judgment drift that occurs when a human reviewer is on their 30th certificate of the day.
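The severity tiers above amount to a fixed mapping from gap type to tier, with the overall status driven by the worst gap present. The gap identifiers below are illustrative assumptions, not Inori's internal names.

```python
# Illustrative gap-type to severity mapping, mirroring the three
# tiers described above. Identifiers are assumptions.
SEVERITY = {
    "missing_coverage":        "critical",
    "limit_below_minimum":     "critical",
    "expired_policy":          "critical",
    "missing_ai_endorsement":  "critical",
    "insured_name_mismatch":   "critical",
    "missing_waiver":          "warning",
    "missing_pnc_language":    "warning",
    "forms_not_referenced":    "warning",
    "expiring_within_30_days": "warning",
    "ambiguous_language":      "warning",
    "formatting_issue":        "info",
    "unsigned_certificate":    "info",
}

def worst_severity(gaps: list[str]) -> str:
    """Overall severity is the most severe gap found."""
    rank = {"critical": 0, "warning": 1, "info": 2}
    if not gaps:
        return "compliant"
    return min((SEVERITY[g] for g in gaps), key=rank.__getitem__)

print(worst_severity(["formatting_issue", "missing_waiver"]))  # warning
```

Because the mapping is a lookup table rather than a judgment call, the 30th certificate of the day is classified exactly like the first.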
Stage 6: Scoring and Determination
The final stage produces the compliance output:
- Overall compliance status: Compliant, Non-Compliant, or Needs Review
- Compliance score: A percentage reflecting how many requirements are met
- Gap list: Every identified deficiency with severity, description, and remediation guidance
- Confidence score: The system's confidence in its own extraction accuracy, reflecting document quality and any ambiguous fields
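The scoring and determination logic above can be sketched as a simple aggregation. The rule used here (any failure makes the certificate Non-Compliant, any open review item makes it Needs Review) is a simplified assumption about how the tiers combine.

```python
def score_and_determine(check_results: dict[str, str]) -> tuple[int, str]:
    """Aggregate per-requirement results into a score and status.

    check_results maps requirement name -> "pass" | "fail" | "needs_review".
    The aggregation rule is a simplified illustration.
    """
    total = len(check_results)
    passed = sum(1 for r in check_results.values() if r == "pass")
    score = round(100 * passed / total) if total else 0
    if any(r == "fail" for r in check_results.values()):
        status = "Non-Compliant"
    elif any(r == "needs_review" for r in check_results.values()):
        status = "Needs Review"
    else:
        status = "Compliant"
    return score, status

print(score_and_determine({
    "gl_limit": "pass", "auto_limit": "pass",
    "wc_present": "pass", "waiver": "needs_review",
}))  # (75, 'Needs Review')
```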
The entire pipeline — from upload to final determination — completes in approximately 30 seconds.
What "89 Fields" Means
The ACORD 25 Certificate of Liability Insurance contains 89 distinct data fields. These include:
Producer section (5 fields): Producer name, address, phone, fax, email
Insured section (4 fields): Named insured, address, contact information, insured type
Insurer section (10 fields): Up to 5 carriers with names and NAIC numbers
General Liability section (14 fields): Policy number, effective date, expiration date, occurrence/claims-made, each occurrence limit, damage to rented premises, medical expense, personal & advertising injury, general aggregate, products/completed operations aggregate, additional insured checkbox, subrogation waived checkbox, policy form type, retroactive date
Automobile Liability section (8 fields): Policy number, dates, combined single limit, bodily injury per person, bodily injury per accident, property damage, auto type checkboxes, additional insured and subrogation waived indicators
Umbrella/Excess section (7 fields): Policy number, dates, each occurrence, aggregate, retention/deductible, occurrence/claims-made, umbrella/excess type
Workers' Compensation section (7 fields): Policy number, dates, per statute checkbox, each accident, disease each employee, disease policy limit, subrogation waived checkbox
Description of Operations (1 field, variable content): Free text containing provision language, project references, and endorsement information
Certificate Holder section (3 fields): Name, address, additional insured notation
Other fields (30 fields): Signature, date, additional policy sections, revision number, certificate number, and form-specific fields
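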
AI extraction covers every one of these fields. Manual review typically focuses on 15-25 of the most critical fields. This difference means AI catches data points that manual reviewers routinely skip.
Accuracy Benchmarks
Accuracy claims require substantiation. Here is what the numbers mean and how they are measured.
Field Extraction Accuracy: over 95%
This measures how often the AI correctly extracts the content of a field. Across a test set of thousands of certificates:
- Over 95% of fields are extracted with perfect accuracy
- The remaining fraction includes partial extractions (correct value with minor formatting differences) and incorrect extractions
- Error rates are highest on poor-quality documents (faxed copies, low-resolution scans) and handwritten content
For comparison, manual data entry by trained operators typically achieves 95-97% accuracy. The AI matches that precision at orders-of-magnitude greater speed, and its accuracy does not degrade with fatigue at the end of a long shift.
Compliance Determination Accuracy: 98.7%
This measures how often the AI's compliance determination (compliant, non-compliant, needs review) matches the determination of an expert human reviewer.
- 98.7% agreement with expert reviewers
- False positive rate (flagging compliant certificates as non-compliant) is less than 1%
- False negative rate (missing genuine compliance gaps) is less than 0.5%
The false negative rate is the more important number. A false positive wastes time (you investigate a gap that does not exist). A false negative creates risk (you miss a gap that does exist). The sub-0.5% false negative rate means AI verification catches compliance gaps more reliably than most manual processes.
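How these three metrics relate can be made concrete with a small calculation. The data below is synthetic, constructed purely to illustrate the definitions; it is not a benchmark result.

```python
def error_rates(pairs: list[tuple[str, str]]) -> dict[str, float]:
    """Compute agreement, false-positive, and false-negative rates
    from (ai_determination, expert_determination) pairs.

    False positive: AI says non-compliant, expert says compliant.
    False negative: AI says compliant, expert says non-compliant.
    """
    n = len(pairs)
    agree = sum(ai == expert for ai, expert in pairs)
    fp = sum(ai == "non-compliant" and expert == "compliant"
             for ai, expert in pairs)
    fn = sum(ai == "compliant" and expert == "non-compliant"
             for ai, expert in pairs)
    return {"agreement": agree / n, "fp_rate": fp / n, "fn_rate": fn / n}

# Synthetic illustration: 100 certificates, 98 agreements,
# one false positive, one false negative.
pairs = ([("compliant", "compliant")] * 96
         + [("non-compliant", "non-compliant")] * 2
         + [("non-compliant", "compliant")]
         + [("compliant", "non-compliant")])
print(error_rates(pairs))
# {'agreement': 0.98, 'fp_rate': 0.01, 'fn_rate': 0.01}
```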
Why not 100%?
No verification system — human or AI — achieves 100% accuracy. The remaining 1-2% of cases involve genuinely ambiguous certificates: truncated Description of Operations, contradictory language, non-standard forms, and documents where even expert reviewers disagree. For these edge cases, human review is necessary.
Speed: 30 Seconds vs. 15-30 Minutes
The time comparison is straightforward:
| Metric | Manual | AI |
|---|---|---|
| Time per certificate | 15-30 minutes | ~30 seconds |
| Certificates per hour (1 reviewer) | 2-4 | 120 |
| Certificates per day (1 reviewer) | 16-32 | 960 |
| Time to process 500 certificates | 125-250 hours | ~4 hours |
The speed advantage compounds at scale. An organization processing 500 certificates per month needs 125 to 250 hours of manual review, the workload of one or more dedicated full-time reviewers. AI processes the same volume in a few hours, freeing human expertise for exception handling and vendor communication.
Limitations
AI COI verification is powerful, but it is not omniscient. Understanding its limitations is as important as understanding its capabilities.
Handwritten Forms
Handwritten certificates exist, particularly from smaller agencies. While modern OCR handles most handwriting, accuracy drops significantly for illegible handwriting, unusual characters, or handwriting that overlaps printed field boundaries. Handwritten certificates are flagged for human review.
Poor-Quality Scans
Heavily compressed images, multi-generation fax copies, and documents scanned at very low resolution degrade extraction accuracy. The system processes these documents but reduces its confidence score, signaling that human verification is recommended.
Non-Standard Forms
While the ACORD 25 is the industry standard, some certificates use carrier-specific or state-specific forms that deviate from the standard layout. The AI can process most common variations, but highly unusual formats may require manual review.
Ambiguous Language
The Description of Operations is a free-text field. Producers write provision language in countless variations, some of which are genuinely ambiguous. "Additional Insured status may apply subject to policy terms" — does this confirm or qualify the coverage? The AI flags ambiguous language rather than making a judgment call, routing these cases to human reviewers.
Policy vs. Certificate Disconnect
The certificate describes what the producer believes is on the policy. It is not the policy itself. No verification system — human or AI — can confirm that the underlying policy actually contains the stated endorsements without reviewing the policy documents. AI can identify inconsistencies and red flags, but the certificate-to-policy verification gap is inherent in the ACORD 25 system.
Human-in-the-Loop
The most effective COI verification programs combine AI processing with human oversight. The AI handles the high-volume, repetitive verification work. Humans handle:
- Exception review: Certificates flagged by the AI for ambiguous language, low confidence, or non-standard formats
- Judgment calls: Situations where the certificate is technically non-compliant but the gap is immaterial or the vendor has provided alternative documentation
- Vendor communication: Explaining deficiencies, negotiating corrections, and managing relationships
- Program design: Defining requirements, adjusting severity thresholds, and evolving the compliance program
This division of labor plays to each party's strengths. AI excels at consistent, high-speed data extraction and rule application. Humans excel at judgment, communication, and handling novel situations.
Cost Comparison
The economics of AI verification are compelling at scale.
| Cost Factor | Manual (per certificate) | AI (per certificate) |
|---|---|---|
| Labor cost | $8-15 (loaded) | $0 (automated) |
| Technology cost | $0-2 (basic tools) | $0.25-0.50 (AI processing) |
| Error correction cost | $3-5 (rework rate) | $0.10-0.25 (exception rate) |
| Total cost | $11-22 | $0.35-0.75 |
At 500 certificates per month, manual processing costs $5,500-$11,000 monthly. AI processing costs $175-$375 monthly. The difference is not marginal — it is an order of magnitude.
Beyond direct cost, consider the opportunity cost. A compliance analyst spending 30 minutes per certificate has no time to improve the compliance program, analyze trends, or manage vendor relationships strategically. AI frees that time.
How Inori Uses AI Verification
Inori's verification engine implements the 6-stage pipeline described above, built specifically for insurance compliance. The system is trained on hundreds of thousands of ACORD 25 certificates across industries — commercial real estate, construction, property management, and enterprise operations.
Key capabilities:
- Extracts all 89 fields from the ACORD 25 in a single pass
- Checks compliance against configurable requirements per vendor, project, or requirement set
- Identifies all four key provisions (AI, WoS, PNC, NoC) and detects endorsement form mentions
- Classifies gaps by severity with specific remediation guidance
- Routes exceptions to human reviewers with full context
- Tracks compliance status over time with automated expiration monitoring
The technology does not replace compliance expertise. It amplifies it — allowing a single compliance professional to manage the verification workload that would otherwise require a team.
See AI verification in action
Upload a certificate and watch Inori extract 89 fields, check compliance against your requirements, and deliver a complete audit report in as little as 30 seconds.
Ready to automate COI compliance?
Start with our free COI checker — no sign-up required. Or try the full platform free.