Starting October 1, 2025, California will have the strictest rules in the U.S. for using AI in hiring and promotions. Employers must tell job seekers when they use AI tools, audit those tools every year for bias against protected groups, and keep all related records for four years. All kinds of hiring software are covered, from résumé screeners to chatbots. Both employers and AI vendors face substantial penalties for violations, so preparing early matters.
What are California’s new AI hiring compliance requirements for employers?
Starting October 1, 2025, California employers using automated decision systems (ADS) for hiring or promotions must:
- Disclose ADS use to applicants
- Audit annually for bias against protected groups
- Retain algorithmic records for four years
Non-compliance risks significant legal penalties.
Starting October 1, 2025, California will enforce the most detailed AI-hiring rules yet enacted in the United States. Every company that processes a résumé, ranks a candidate, or predicts promotion readiness with software inside the state must:
- **disclose** that an automated decision system (ADS) is used
- **audit** it at least once a year for disparate impact on race, gender, disability, age, or any other FEHA-protected group
- **retain** algorithmic inputs, outputs, and scoring documentation for **four full years** (up from the previous two)
Failure to do so exposes both the employer and the vendor to direct liability under the Fair Employment and Housing Act.
What tools are caught by the new definition?
The statute casts an intentionally wide net. The following all qualify as ADS:
| Tool type | Typical use-case |
|---|---|
| Résumé screeners | Keyword or ML filters that reject low-ranking CVs |
| One-way video interview platforms | Tools that read facial expressions or voice tone |
| Chatbot recruiters | Conversational AI that pre-qualifies applicants |
| Work-style or personality games | Cognitive-style assessments that generate a fit score |
| Predictive promotion engines | Models that forecast leadership readiness from past performance data |
Any system that “facilitates” the decision – not only fully automated ones – is covered.
The new risk map for HR tech vendors
California law now treats an AI vendor as an “agent of the employer” whenever its product is used in hiring or promotion. This shift means:
- contractual **indemnification** clauses may be unenforceable
- vendors must expose their training data, feature weights, and validation studies to customers
- joint **liability** attaches if the tool produces disparate impact
Early market response: at least six major ATS providers have announced California Compliance Modules that export bias-audit and decision logs in the exact format requested by the Civil Rights Department.
Practical first steps for employers
A pre-implementation checklist compiled from the regulations and recent compliance guidance looks like this:
- Inventory. Map every algorithmic tool that touches candidates or employees (even Excel macros).
- Risk screening. Run a disparate-impact analysis on the most recent 12 months of data (a minimal sketch follows this checklist).
- Documentation package. Prepare a folder for each ADS that contains algorithm description, validation report, and annual audit results.
- Vendor addendum. Add compliance riders that require four-year record retention and quarterly re-validation.
- Notice language. Insert a simple sentence in job postings: “We use automated decision systems; applicants may request information on how they work.”
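For the risk-screening step, the standard starting point is the EEOC four-fifths (80%) rule: compare each group's selection rate with the highest-rate group and flag ratios below 0.8. Below is a minimal illustrative sketch in Python; the record layout (`group`, `selected` fields) and the per-group flagging logic are assumptions about how applicant data might be organized, not language from the regulation.

```python
from collections import defaultdict

def adverse_impact_ratios(records, reference_group=None):
    """Selection rate per group and its ratio to the benchmark group
    (the four-fifths rule); ratios under 0.8 are flagged for review."""
    counts = defaultdict(lambda: {"applied": 0, "selected": 0})
    for rec in records:
        counts[rec["group"]]["applied"] += 1
        counts[rec["group"]]["selected"] += int(rec["selected"])

    rates = {g: c["selected"] / c["applied"] for g, c in counts.items() if c["applied"]}
    benchmark = rates[reference_group] if reference_group else max(rates.values())

    report = {}
    for g, rate in rates.items():
        ratio = rate / benchmark if benchmark else float("nan")
        report[g] = {"selection_rate": round(rate, 3),
                     "impact_ratio": round(ratio, 3),
                     "flag": ratio < 0.8}  # four-fifths threshold
    return report

# Toy data only; a real audit would use the last 12 months of decisions.
sample = ([{"group": "A", "selected": s} for s in [1] * 40 + [0] * 60] +
          [{"group": "B", "selected": s} for s in [1] * 25 + [0] * 75])
print(adverse_impact_ratios(sample))  # group B flagged at an impact ratio of 0.625
```

A flagged ratio is a trigger for deeper statistical review and documentation, not proof of discrimination on its own.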
Enforcement outlook
The California Civil Rights Department (CRD) has doubled its FEHA investigation staff for 2025-2026. According to the CRD announcement, the agency will prioritize complaints involving:
- facial or voice analysis in video interviews
- scoring tools that rely on social-media data
- promotions filtered by predictive performance models
Penalties mirror standard FEHA exposure: up to $150,000 in administrative fines plus uncapped compensatory and punitive damages in civil court.
With 1.8 million open roles filled each year in California, even a 1% uptick in litigation could cost employers hundreds of millions of dollars. Early compliance, not reaction, is the safer path.
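To see where "hundreds of millions" comes from, here is a back-of-the-envelope sketch. The 1.8 million annual hires figure comes from the text above; reading the "1% uptick" as 1% of those decisions drawing a claim, and the average cost per claim, are purely hypothetical assumptions used only to illustrate the order of magnitude.

```python
# Hypothetical illustration only; both the rate and the per-claim cost are assumptions.
annual_hires = 1_800_000        # figure cited above
uptick_rate = 0.01              # "1% uptick" read as 1% of decisions drawing a claim (assumption)
avg_cost_per_claim = 30_000     # assumed blended defense/settlement cost per claim

extra_claims = annual_hires * uptick_rate            # 18,000 additional claims
aggregate_cost = extra_claims * avg_cost_per_claim   # $540,000,000

print(f"{extra_claims:,.0f} claims -> ${aggregate_cost:,.0f} in aggregate exposure")
```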
What counts as an AI hiring tool under California’s new rules?
Any “automated decision system” (ADS) that makes or supports employment decisions is now regulated.
This covers:
- Resume-screening algorithms
- Chatbot interviewers
- Personality or skills assessments
- Promotion-readiness predictors
- Job-ad targeting software
Even tools that only assist humans (for example, ranking candidates before a recruiter reviews them) fall under the definition.
When do the requirements take effect and what is the penalty window?
October 1, 2025.
Employers have a 14-month runway to audit systems, update policies, and train staff before the first compliance checks.
Do employers have to audit AI for bias themselves?
Yes.
California requires regular anti-bias testing and documentation of:
- Disparate-impact analysis
- Algorithm data inputs & outputs
- Corrective actions taken
Records must be kept for at least four years – double the previous two-year rule.
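One concrete way to approach the documentation requirement is to keep a structured record per automated decision that captures all three items above. The sketch below is a minimal, assumed schema; the field names and structure are illustrative assumptions, not something the statute prescribes.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ADSDecisionRecord:
    """One retained record per automated decision (illustrative schema only)."""
    decision_date: date
    tool_name: str                   # which ADS produced the output
    inputs: dict                     # data fed to the algorithm
    outputs: dict                    # score, rank, or recommendation returned
    disparate_impact_ratio: float    # latest audited impact ratio for the affected group
    corrective_actions: list = field(default_factory=list)  # remediation steps taken

    def to_json(self) -> str:
        d = asdict(self)
        d["decision_date"] = self.decision_date.isoformat()
        return json.dumps(d)

record = ADSDecisionRecord(
    decision_date=date(2025, 10, 15),
    tool_name="resume-screener-v3",                        # hypothetical tool name
    inputs={"years_experience": 6, "keyword_score": 0.82},
    outputs={"rank": 12, "advance_to_interview": True},
    disparate_impact_ratio=0.91,
    corrective_actions=["re-weighted education features after Q3 audit"],
)
print(record.to_json())
```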
Are third-party vendors liable too?
Absolutely.
The law treats AI vendors as agents of the employer.
If a vendor’s tool causes discrimination, both the vendor and the employer can be held responsible.
Contracts should now include:
- Compliance warranties
- Bias-audit sharing
- Indemnification clauses
What should HR teams do today?
- Inventory every AI or automated tool used in hiring, promotion, or firing.
- Contact vendors for bias-audit reports and updated compliance language.
- Schedule bias tests before October 2025 (many firms are booking external auditors now).
- Update privacy notices to tell applicants when AI is used and what data is collected.
- Retain records starting immediately; the four-year clock starts with each decision (a retention-date sketch follows).
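Because the four-year clock runs from each individual decision, purge eligibility has to be computed per record rather than per tool. A minimal sketch, assuming every record stores its own decision date:

```python
from datetime import date
from typing import Optional

RETENTION_YEARS = 4  # four-year retention period under the new rules

def retention_deadline(decision_date: date) -> date:
    """Earliest date a record tied to this decision may be purged."""
    try:
        return decision_date.replace(year=decision_date.year + RETENTION_YEARS)
    except ValueError:  # Feb 29 decision date landing in a non-leap year
        return decision_date.replace(year=decision_date.year + RETENTION_YEARS, day=28)

def purgeable(decision_date: date, today: Optional[date] = None) -> bool:
    """True only once the per-decision retention window has fully elapsed."""
    return (today or date.today()) >= retention_deadline(decision_date)

print(retention_deadline(date(2025, 10, 15)))                  # 2029-10-15
print(purgeable(date(2025, 10, 15), today=date(2028, 1, 1)))   # False: still inside the window
```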