New California Regulations on AI in Employment Go Into Effect

As artificial intelligence continues to reshape the modern workplace, California is taking proactive steps to ensure that its use in employment decisions remains fair and legally compliant. On June 27, 2025, the California Office of Administrative Law approved new regulations aimed at protecting workers from discrimination arising from the use of automated decision systems (ADS) in employment practices. These regulations, developed by the California Civil Rights Council (CRC), will officially go into effect on October 1, 2025.
Why the New Regulations?
AI technologies—particularly in the form of ADS—are becoming increasingly common in recruitment, hiring, performance evaluations, and promotions. While these tools promise greater efficiency and objectivity, the CRC has raised concerns that they may amplify existing biases and contribute to discriminatory outcomes, especially when used without adequate oversight.
The CRC’s amended regulations acknowledge this risk and are designed to ensure that employers and third-party vendors using these technologies are held accountable under California’s Fair Employment and Housing Act (FEHA).
Key Changes in the Regulations
1. Expansion of “Agent” Definition
Under the amended rules, the term “agent” covers any person or entity acting directly or indirectly on behalf of an employer, including through the use of automated decision systems. As a result, third-party vendors offering AI-driven employment tools can be held liable under FEHA if their tools contribute to discriminatory outcomes.
2. Recordkeeping Requirements
Employers are now required to retain all data related to the use of automated decision systems (ADS) for a period of four years. This includes:
- Inputs used in automated tools
- Decision-making outputs
- Logs of any human oversight or interaction
This change ensures that ADS-related employment decisions are auditable and transparent, aligning with broader efforts to prevent discriminatory outcomes.
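The regulations do not dictate a particular storage format, but the three categories above map naturally onto a structured audit record kept for each automated decision. The snippet below is a minimal Python sketch of what such a record and retention check might look like; the schema, field names, and helper functions are illustrative assumptions, not requirements drawn from the rule text.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timedelta, timezone
import json

# Four-year retention horizon for ADS-related records
RETENTION_PERIOD = timedelta(days=4 * 365)

@dataclass
class ADSDecisionRecord:
    """One auditable record per automated employment decision (hypothetical schema)."""
    candidate_id: str            # internal identifier for the applicant or employee
    tool_name: str               # which automated decision system produced the output
    inputs: dict                 # the criteria or features fed into the tool
    output: str                  # the tool's result, e.g. a score, ranking, or recommendation
    human_reviewer: str = ""     # who reviewed or overrode the output, if anyone
    human_decision: str = ""     # the final decision after human oversight
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def archive(record: ADSDecisionRecord, path: str) -> None:
    """Append the record to a JSON Lines retention log, one line per decision."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

def past_retention(record_timestamp: str) -> bool:
    """True once a record is older than the four-year retention window."""
    age = datetime.now(timezone.utc) - datetime.fromisoformat(record_timestamp)
    return age > RETENTION_PERIOD
```

Whatever the format, the point is the same: every input, output, and instance of human review should be traceable years after the decision was made.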
How Employers Can Stay Compliant
To ensure compliance with the new rules, employers must take the following steps before the regulations go into effect:
- Retain ADS Data for Four Years
Maintain all data related to the use of automated decision systems, just as you would with traditional employee records.
- Audit Existing AI Tools
Review any current AI or machine-learning tools used in screening, hiring, evaluating, or promoting employees to ensure they do not unintentionally perpetuate bias; a simple illustrative check is sketched after this list.
- Maintain Human Oversight
Ensure that human personnel review and oversee decisions made by AI tools. AI should assist decision-making, not replace it.
- Vet Your Vendors
If you’re using third-party services to provide AI tools for hiring or employee evaluation, ensure they have robust anti-bias protocols in place. These vendors can now be held directly liable under California law.
- Train HR and Management
Educate internal stakeholders—including HR, recruiters, and managers—on the new legal standards and best practices for responsible AI use in employment.
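The regulations do not prescribe an audit methodology, so the following is only an illustration of the kind of first-pass check an audit might include: comparing selection rates across groups for a given tool. The group labels and numbers are invented, and the four-fifths benchmark mentioned in the comments is a long-standing rule of thumb from federal employment-testing guidance, not a threshold set by the new California rules.

```python
from collections import Counter

def selection_rates(outcomes: list) -> dict:
    """Compute the selection rate per group from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate.
    Ratios well below 0.8 are commonly treated as a red flag (the 'four-fifths' benchmark)."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented screening outcomes from a hypothetical ADS resume screener
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 25 + [("group_b", False)] * 75)
ratios = impact_ratios(selection_rates(outcomes))
print(ratios)  # {'group_a': 1.0, 'group_b': 0.625} -> a disparity worth investigating
```

A check like this is only a starting point; flagged disparities still call for legal review and a closer look at how the tool weighs its inputs.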
Final Thoughts
As AI continues to evolve, regulators are making it clear that technological innovation does not exempt employers from compliance with anti-discrimination laws. California’s new regulations are a signal to businesses nationwide: If you’re using AI in employment, you must do so responsibly.
The October 1, 2025 effective date is fast approaching. Now is the time for employers to review their practices, audit their systems, and train their teams—because the future of fair employment is not just human, it’s also algorithmic.