Right to Opt Out of Profiling
A consumer right recognized by 18 of the 20 comprehensive US state privacy laws to decline being subject to automated decision-making that produces legal or similarly significant effects — such as denial of credit, housing, insurance, or employment.
Overview
The Right to Opt Out of Profiling lets a consumer refuse to be subject to solely automated decisions that produce legal or similarly significant effects. "Profiling" in this context is narrower than the everyday meaning: it refers to automated processing that evaluates, analyzes, or predicts aspects of a natural person and feeds into a consequential decision.
The right is rooted in GDPR Article 22 but has been adopted, with broader carve-outs than the GDPR's, by nearly every US state privacy law. In California, the Privacy Protection Agency's ADMT (Automated Decision-Making Technology) regulations add a stricter layer on top of the CCPA/CPRA.
The right does not require the controller to abandon automation. It requires the controller to give the consumer an opt-out mechanism and, depending on the state, a human-review alternative when the decision is significant enough.
When It Applies
The opt-out right engages when all three conditions are met (a code sketch following the examples below encodes this test):
- Solely or largely automated processing: a human does not meaningfully review the decision before it takes effect
- Profiling: the system evaluates, analyzes, or predicts aspects of the consumer (creditworthiness, reliability, behavior, health, preferences, movements)
- Legal or similarly significant effects: the decision denies or grants access to financial services, housing, insurance, education, employment, healthcare, or essential goods, or determines a criminal-justice outcome
Examples that trigger the right:
- Algorithmic underwriting for insurance premiums or eligibility
- Automated tenant-screening decisions in commercial real estate
- Resume-filtering systems in recruitment
- Algorithmic credit scoring used for loan approval
- Dynamic pricing based on consumer attributes
Examples that typically do not trigger the right:
- Content recommendation on a streaming service (no legal or similarly significant effect)
- Fraud-detection scoring that flags a transaction for human review (not "solely" automated)
- Aggregate analytics used for business planning (no individual decision)
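Taken together, the three prongs reduce to a conjunction. A minimal TypeScript sketch of the test, with every type and field name invented for illustration; no statute defines these shapes, and real analyses turn on facts rather than booleans:

```typescript
// Illustrative sketch of the three-part applicability test. All names
// are hypothetical; each state statute phrases the prongs differently.
interface ProcessingActivity {
  humanReviewsBeforeEffect: boolean; // prong 1: meaningful human review?
  evaluatesPersonalAspects: boolean; // prong 2: profiling of the individual?
  decisionDomain:
    | "financial"
    | "housing"
    | "insurance"
    | "education"
    | "employment"
    | "healthcare"
    | "essential-goods"
    | "criminal-justice"
    | "other";
}

function profilingOptOutApplies(a: ProcessingActivity): boolean {
  const solelyAutomated = !a.humanReviewsBeforeEffect; // prong 1
  const significantEffect = a.decisionDomain !== "other"; // prong 3
  return solelyAutomated && a.evaluatesPersonalAspects && significantEffect;
}

// Fraud scoring that routes flagged transactions to a human fails prong 1:
const fraudFlagging = profilingOptOutApplies({
  humanReviewsBeforeEffect: true,
  evaluatesPersonalAspects: true,
  decisionDomain: "financial",
}); // => false
```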
Variations Across Jurisdictions
The summary below draws on docs/privacy-knowledge/consolidated/COMPLIANCE_MATRIX.md, Table 1:
| State | Right Available | Notable Nuance |
|---|---|---|
| California (CCPA/CPRA) | Yes | ADMT regulations impose additional pre-use notice + access + opt-out + human-review obligations |
| Virginia, Colorado, Connecticut | Yes | Standard framework |
| Colorado AI Act (SB205) | Yes | Parallel high-risk AI obligations; DPIAs interact with CPA DPIAs |
| Utah | No | UCPA omits profiling opt-out entirely |
| Iowa | No | Similarly minimal |
| Oregon, Texas, Florida, Montana, Delaware, Nebraska, New Hampshire, New Jersey, Tennessee, Minnesota, Maryland, Indiana, Kentucky, Rhode Island | Yes | Standard framework |
California's ADMT rules go further than any peer state: controllers must provide a pre-use notice before deploying ADMT, give consumers access to information about the system's logic, and offer a human-review alternative for consequential decisions.
Colorado layers its SB205 AI Act on top: for "high-risk AI systems", a category that covers most employment, housing, and insurance automation, controllers must publish a statement about the system, provide consumer notice, and conduct algorithmic impact assessments in addition to the CPA DPIA.
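Engineering teams often encode this variation as a configuration map consulted at DSAR intake. A hedged sketch of that shape; the entries transcribe the table above, while the type and constant names are invented for illustration:

```typescript
// Hypothetical per-jurisdiction lookup mirroring the table above; a real
// implementation would also carry effective dates and statutory citations.
type ProfilingOptOutRule =
  | { available: false }
  | { available: true; admtOverlay?: boolean; aiActOverlay?: boolean };

const PROFILING_OPT_OUT: Record<string, ProfilingOptOutRule> = {
  CA: { available: true, admtOverlay: true }, // CCPA/CPRA + CPPA ADMT regs
  CO: { available: true, aiActOverlay: true }, // CPA + SB205 high-risk AI duties
  VA: { available: true },
  CT: { available: true },
  UT: { available: false }, // UCPA omits the profiling opt-out
  IA: { available: false },
  // Remaining "standard framework" states (OR, TX, FL, MT, DE, NE, NH,
  // NJ, TN, MN, MD, IN, KY, RI) would be { available: true }.
};
```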
How Inori Handles This
Inori uses automated processing in two narrow contexts: (i) AI-assisted extraction of coverage data from certificates via Claude Haiku, and (ii) automated compliance scoring of vendor records.
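Rule-matching of the kind described in (ii) can be pictured as plain conditional logic: identical inputs always yield identical outputs, with no model weights or statistical inference. A toy TypeScript sketch under that framing, with rules invented for illustration rather than taken from Inori's codebase:

```typescript
// Toy illustration of deterministic rule-matching. The rules are
// invented; the point is the absence of any probabilistic step.
interface VendorCoverage {
  generalLiabilityLimitUsd: number;
  policyExpired: boolean;
}

interface CoverageRequirement {
  minGeneralLiabilityUsd: number;
}

function complianceScore(
  coverage: VendorCoverage,
  requirement: CoverageRequirement,
): "compliant" | "non-compliant" {
  if (coverage.policyExpired) return "non-compliant";
  return coverage.generalLiabilityLimitUsd >= requirement.minGeneralLiabilityUsd
    ? "compliant"
    : "non-compliant";
}
```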
Grounding in code:
- Scope: the extraction step is pre-decision (a human reviewer confirms before action), and the compliance score is deterministic rule-matching, not probabilistic profiling. Neither currently meets the "solely automated + consequential effect" threshold for profiling opt-out under any state law.
- Pre-use notice: `src/content/legal/privacy.mdx` v1.2 discloses the use of AI extraction in the "Automated Processing" section, per California ADMT's anticipated notice requirement.
- Human review: the `review_status` enum in `certificates` (migration `014_certificate_review.sql`) plus `/api/certificates/[id]/review` enforce a human checkpoint before extracted data drives any downstream compliance decision (see the sketch below).
- Opt-out channel: if a customer-controller delegates automated decisioning to Inori (a future capability), the DSAR flow at `src/app/api/dsar/` accepts profiling-opt-out requests and routes them on the 45-day statutory timeline.
- Heuristic versioning: `certificates.guard_version` (SP17) provides the auditable record of which version of the compliance heuristic was applied to each decision.
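A minimal sketch of how the human-review checkpoint can be enforced in application code. The `review_status` column, the migration, and the review route are real per this document; the status values and the gate function are illustrative assumptions:

```typescript
// Sketch of the human-review checkpoint. The `review_status` values
// below are assumed; the actual enum lives in 014_certificate_review.sql.
type ReviewStatus = "pending" | "approved" | "rejected";

interface CertificateRecord {
  id: string;
  review_status: ReviewStatus;
  guard_version: string; // heuristic version applied to this record (SP17)
}

function assertHumanReviewed(cert: CertificateRecord): void {
  // Extracted data may not drive a downstream compliance decision until
  // a human reviewer has approved it, keeping the pipeline outside the
  // "solely automated" prong of the profiling test.
  if (cert.review_status !== "approved") {
    throw new Error(
      `Certificate ${cert.id} is awaiting human review; automated ` +
        `extraction alone cannot trigger a compliance action.`,
    );
  }
}
```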
Related Concepts
Profiling opt-out is frequently bundled with Sensitive Personal Information analysis — automated decisions using SPI are the highest-risk category. A DPIA is effectively mandatory before deploying any profiling system. California's ADMT regime sits inside the broader CCPA/CPRA framework.