Compliance Scoring: How to Quantify Your COI Program's Health
Inori Team
COI Compliance Experts
Every compliance program eventually faces the same question from leadership: "How are we doing?" Without a scoring model, the answer is anecdotal — "I think we are in pretty good shape" or "we have some gaps we are working on." Anecdotal answers do not drive decisions, do not secure budget, and do not demonstrate improvement over time.
A compliance scoring model replaces intuition with measurement. It gives you a number — or a set of numbers — that quantifies the health of your program at any point in time, tracks trends over months and years, and identifies exactly where the program is strong and where it is failing.
This guide covers the four core metrics every COI compliance program should track, the formulas behind them, industry benchmarks for what good looks like, and a reporting framework that communicates program health to everyone from your compliance team to your board.
The Four Core Metrics
A useful scoring model is simple enough to calculate consistently and comprehensive enough to capture the dimensions that matter. These four metrics accomplish that.
1. Compliance Rate
Formula: (Number of Compliant Records / Total Active Records) x 100
This is the headline number — the single metric that answers "what percentage of our vendor relationships are fully insured to our standards right now?"
A record is compliant when every element of the applicable requirement set is met: all required coverages are in force, all limits meet or exceed minimums, all required provisions (additional insured, waiver of subrogation, primary and non-contributory) are evidenced, and the certificate is current (not expired).
A record is non-compliant if any single element fails. There is no partial compliance in this calculation — a vendor missing waiver of subrogation on Workers' Comp is non-compliant, just as a vendor missing Workers' Comp entirely is non-compliant. The severity of the gap matters for prioritization, but not for the compliance rate calculation.
Example: You manage 400 active vendor records. 340 are fully compliant. Your compliance rate is (340 / 400) x 100 = 85%.
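The binary compliant-or-not rule and the headline calculation can be sketched in a few lines of Python. The per-record check fields below are illustrative assumptions, not a prescribed schema; real requirement sets vary by program.

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Illustrative per-record checks; adapt to your own requirement set.
    coverages_in_force: bool      # all required coverages are active
    limits_met: bool              # every limit meets or exceeds the minimum
    provisions_evidenced: bool    # AI, waiver of subrogation, primary/non-contributory
    certificate_current: bool     # the COI is not expired

    @property
    def is_compliant(self) -> bool:
        # No partial compliance: a single failing element fails the record.
        return (self.coverages_in_force and self.limits_met
                and self.provisions_evidenced and self.certificate_current)

def compliance_rate(records) -> float:
    """(# compliant records / total active records) x 100."""
    if not records:
        return 0.0
    return sum(r.is_compliant for r in records) / len(records) * 100
```

With 340 fully compliant records out of 400, `compliance_rate` returns 85.0, matching the example above.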
Benchmarks:
| Compliance Rate | Assessment |
|---|---|
| Below 60% | Critical — the program is not functioning. Requirements may be unclear, enforcement may be absent, or the team lacks capacity. |
| 60% – 70% | Poor — systemic issues exist. Many vendors are operating without adequate insurance. |
| 70% – 80% | Fair — the program is operational but has material gaps. Focused effort on the non-compliant population can drive meaningful improvement. |
| 80% – 90% | Good — the program is solid. Most vendors meet requirements. Focus on closing the remaining gaps and preventing new ones. |
| 90% – 95% | Excellent — the program is performing at a high level. The remaining non-compliant records are likely edge cases (new vendors in onboarding, vendors in the process of renewing). |
| Above 95% | Best-in-class — extremely rare and typically only achievable in programs with fewer than 100 records or those using automated compliance platforms. |
2. Collection Rate
Formula: (Number of COIs Received / Number of COIs Required) x 100
The collection rate measures a more fundamental question than compliance: are you even receiving certificates from your vendors? You cannot audit what you do not have. A program with a high compliance rate but a low collection rate is an illusion — the compliance rate only measures the vendors who submitted certificates, not the ones who did not.
Example: You have 400 active vendor relationships that require COIs. You have certificates on file for 360 of them. Your collection rate is (360 / 400) x 100 = 90%. The other 40 vendors have no certificate at all — you have no idea whether they carry insurance.
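As a minimal sketch, assuming you can count required and received certificates directly:

```python
def collection_rate(cois_received: int, cois_required: int) -> float:
    """(# COIs received / # COIs required) x 100."""
    if cois_required == 0:
        return 100.0  # nothing is required, so nothing is missing
    return cois_received / cois_required * 100
```

A useful companion figure is the blind-spot count, `cois_required - cois_received`: the vendors for whom you have no evidence of insurance at all.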
Benchmarks:
| Collection Rate | Assessment |
|---|---|
| Below 70% | The program has a fundamental collection problem. More than 30% of your vendors may be uninsured, and you would not know. |
| 70% – 85% | Collection is inconsistent. Focus on the onboarding process — most collection failures happen at the start of the vendor relationship. |
| 85% – 95% | Collection is healthy. The missing certificates are likely from vendors in the onboarding pipeline or those who recently renewed. |
| Above 95% | Collection is strong. Maintain the process and focus on timely renewals. |
3. Average Resolution Time
Formula: Sum of (Resolution Date - Gap Detection Date) for all resolved gaps / Number of resolved gaps
Resolution time measures the operational efficiency of your gap management process. It answers: "When we find a problem, how quickly do we fix it?"
This metric is calculated only on gaps that have been resolved. Open gaps are tracked separately. Including open gaps in the average would require estimating a future resolution date, which introduces speculation.
Example: In the past month, you resolved 50 gaps. The total days from detection to resolution across all 50 was 600 days. Your average resolution time is 600 / 50 = 12 days.
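Given detection and resolution dates, the calculation can be sketched as follows (resolved gaps only, per the rule above):

```python
from datetime import date

def avg_resolution_days(resolved_gaps) -> float:
    """Mean of (resolution date - detection date), in days, over resolved gaps.

    resolved_gaps: iterable of (detected, resolved) date pairs.
    Open gaps are excluded on purpose: they have no resolution date yet.
    """
    gaps = list(resolved_gaps)
    if not gaps:
        return 0.0
    total_days = sum((resolved - detected).days for detected, resolved in gaps)
    return total_days / len(gaps)
```

Fifty gaps at 12 days each reproduces the 600 / 50 = 12-day example above.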
Benchmarks:
| Avg Resolution Time | Assessment |
|---|---|
| Under 7 days | Excellent — most gaps are resolved after initial notification. Your vendor communication is effective. |
| 7 – 14 days | Good — within the standard notification-to-deadline window. |
| 14 – 21 days | Fair — gaps are reaching the escalation stage. Review your notification process for clarity. |
| 21 – 30 days | Slow — vendors are not treating your notices as urgent. Review enforcement practices. |
| Above 30 days | The gap management process is not working. Gaps that take more than 30 days to resolve are typically gaps that are being tolerated, not managed. |
4. Gap Density
Formula: (Total Open Gaps / Total Active Records) x 100
Gap density measures the volume of open issues relative to the size of your portfolio. A program with 500 records and 25 open gaps (5% density) is in a fundamentally different position than one with 500 records and 200 open gaps (40% density), even if both have the same compliance rate (because gap density counts individual gaps, not records).
A single non-compliant record can have multiple gaps — missing additional insured, insufficient GL limits, and expired Workers' Comp is three gaps on one record. Gap density captures this nuance.
Example: You have 400 active records and 80 open gaps across those records. Gap density = (80 / 400) x 100 = 20%.
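Because one record can carry several gaps, it helps to track gaps as (record, gap type) pairs rather than as a per-record flag. A sketch, with hypothetical record IDs and gap labels:

```python
def gap_density(open_gaps, total_active_records: int) -> float:
    """(total open gaps / total active records) x 100. Can exceed 100%."""
    if total_active_records == 0:
        return 0.0
    return len(open_gaps) / total_active_records * 100

# One record, three gaps -- all three count toward density.
open_gaps = [
    ("vendor-017", "missing_additional_insured"),
    ("vendor-017", "gl_limit_below_minimum"),
    ("vendor-017", "workers_comp_expired"),
]
```

Storing gaps this way also makes the gap-type segmentation discussed below a one-line grouping operation.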
Benchmarks:
| Gap Density | Assessment |
|---|---|
| Under 10% | Healthy — open gaps are manageable and being resolved in normal course. |
| 10% – 25% | Moderate — the team has a meaningful backlog. Prioritize by gap severity. |
| 25% – 50% | High — the team is falling behind. Consider whether capacity, process, or vendor responsiveness is the bottleneck. |
| Above 50% | The program is overwhelmed. Gap volume exceeds the team's ability to manage. Triage by risk tier and address critical gaps first. |
Segmented Scoring
Aggregate metrics tell you the overall story, but they can mask problems in specific segments. A program with an 85% overall compliance rate might have 95% compliance for Tier 4 vendors and 60% compliance for Tier 1 vendors — a dangerous situation hidden by a reassuring headline number.
Break your metrics down by:
Risk tier: Compliance rate, collection rate, and gap density by Tier 1 / Tier 2 / Tier 3 / Tier 4. Your Tier 1 compliance rate is the most important number in your program.
Vendor category: Construction subs, technology vendors, janitorial services, tenants. Some categories are chronically non-compliant — the data will show you which ones.
Property or project: If you manage multiple properties or projects, score each one independently. A property manager who sees that Building A is at 92% and Building B is at 68% knows exactly where to focus.
Gap type: What percentage of open gaps are coverage gaps vs. limit gaps vs. provision gaps vs. documentation gaps? If 70% of your open gaps are expired certificates (documentation gaps), the solution is a better renewal tracking process, not stronger vendor requirements.
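All four segmentations reduce to the same grouping operation. A sketch for compliance rate by an arbitrary segment key (tier, category, property, or gap type), with made-up tier labels:

```python
from collections import defaultdict

def compliance_by_segment(records):
    """records: iterable of (segment, is_compliant) pairs.
    Returns {segment: compliance rate %} so weak segments stand out."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [compliant, total]
    for segment, ok in records:
        counts[segment][0] += int(ok)
        counts[segment][1] += 1
    return {seg: compliant / total * 100
            for seg, (compliant, total) in counts.items()}
```

Swapping the segment key from tier to vendor category or property reuses the same function unchanged.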
Monthly Reporting Template
Report compliance metrics monthly to your operational leadership and quarterly to senior leadership or the board. The monthly report should fit on a single page and include:
Program Summary
| Metric | Current Month | Prior Month | 3-Month Trend |
|---|---|---|---|
| Active Records | [count] | [count] | — |
| Compliance Rate | [%] | [%] | [arrow up/down/flat] |
| Collection Rate | [%] | [%] | [arrow] |
| Avg Resolution Time | [days] | [days] | [arrow] |
| Gap Density | [%] | [%] | [arrow] |
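The trend column in the summary table above can be produced mechanically rather than by eye. A small helper, with an assumed flat-band tolerance of half a percentage point (tune to taste):

```python
def trend_arrow(monthly_values, tolerance: float = 0.5) -> str:
    """'up', 'down', or 'flat' for the change across the reported window.

    monthly_values: oldest-to-newest metric values, e.g. three months
    of compliance rate. The tolerance keeps noise from reading as a trend.
    """
    if len(monthly_values) < 2:
        return "flat"
    delta = monthly_values[-1] - monthly_values[0]
    if delta > tolerance:
        return "up"
    if delta < -tolerance:
        return "down"
    return "flat"
```

Note that for Avg Resolution Time, "up" is a deterioration, so map the arrow to red/green per metric, not globally.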
Compliance Rate by Risk Tier
| Tier | Records | Compliant | Rate | Change |
|---|---|---|---|---|
| Tier 1 (Critical) | [n] | [n] | [%] | [+/- pp] |
| Tier 2 (High) | [n] | [n] | [%] | [+/- pp] |
| Tier 3 (Medium) | [n] | [n] | [%] | [+/- pp] |
| Tier 4 (Low) | [n] | [n] | [%] | [+/- pp] |
Top 5 Open Gaps by Severity
List the five highest-priority open gaps — the ones that represent the most significant uninsured exposure — with vendor name, gap description, days open, and current status.
Actions Taken / Actions Planned
A brief narrative (3 to 5 bullet points) summarizing what the team accomplished this month and what is planned for next month.
Trending Analysis
A single month's metrics are a snapshot. The value of scoring emerges over time, when you can see trends.
Compliance rate trending upward: The program is improving. New gaps are being resolved faster than they are created. Requirements are being met more consistently at onboarding.
Compliance rate trending downward: Something is degrading. Common causes: a surge of new vendors (who start as non-compliant until their certificates are reviewed), a wave of policy renewals that vendors are not addressing, or reduced compliance team capacity.
Resolution time trending upward: Vendors are taking longer to respond, or the compliance team is taking longer to review submitted documents. Investigate whether the bottleneck is on the vendor side (poor communication) or the internal side (backlog).
Gap density flat despite improving compliance rate: This means new gaps are being created at the same rate old ones are resolved. The program is running on a treadmill. Look at the source of new gaps — are they onboarding gaps (fixable with a better onboarding process) or renewal gaps (fixable with proactive renewal tracking)?
Board-Level Reporting
Quarterly or semi-annual reporting to the board or executive leadership should be even more condensed. Executives need three things:
- The headline number: Overall compliance rate, with context ("85% of our 400 vendor relationships are fully insured to our standards").
- The trend: Is it improving, stable, or declining? Show a 12-month trend line.
- The risk exposure: Translate non-compliance into business risk. "The 60 non-compliant records include 8 Tier 1 vendors performing high-risk work on active projects. We are actively managing these through our escalation process, with resolution expected within 30 days."
Do not present gap-level detail to the board. They need confidence that the program is managed, not a walkthrough of every open gap.
Building the Scoring Habit
The hardest part of compliance scoring is not the math — it is the discipline. Calculating metrics once is easy. Calculating them consistently every month, and acting on what they show, is what separates programs that improve from programs that stagnate.
Start with the four core metrics. Calculate them monthly. Report them to the people who can act on them. Set targets: "We will reach 90% compliance rate by end of Q3." Track progress against those targets. Celebrate improvement. Investigate decline.
Over time, the scoring model becomes the nervous system of your compliance program — it tells you where you are healthy, where you are injured, and where you need to act before a small problem becomes a large one. That feedback loop is what transforms compliance from a box-checking exercise into a genuine risk management function.
Ready to automate COI compliance?
Start with our free COI checker — no sign-up required. Or try the full platform free.