California’s Move to Regulate AI: What Employers Need to Know



California is at the forefront of introducing stringent regulations on artificial intelligence (AI), a development that demands immediate attention from employers and business owners. The state legislature is reviewing over 30 AI-related proposals, alongside ambitious consumer-protection rulemaking by the California Privacy Protection Agency, which together have far-reaching implications for workplace practices.

These proposed laws and guidelines, focusing on everything from employee privacy to the ethical use of AI in hiring and performance assessment, could set a precedent affecting businesses nationwide. Understanding and preparing for these potential changes is crucial for companies aiming to stay compliant and competitive in an increasingly regulated digital landscape.

AB 2930: Curbing Algorithmic Bias

AB 2930 targets the elimination of algorithmic discrimination by employers. This legislation bars the use of automated decision-making systems in critical employment decisions—such as salary adjustments, promotions, hiring, firing, or assigning tasks—if these systems result in discriminatory treatment based on race, ethnicity, or other protected characteristics. Violations could lead to civil penalties of up to $25,000 per incident.

Companies must inform individuals when such automated systems will be deployed for significant decisions and, if possible, offer an alternative human-based decision process upon request. Following its failure to pass in 2023, the bill returns with substantial backing, indicating a stronger likelihood of success.

AB 3058: Responding to Job Losses Due to AI

AB 3058 reflects California’s proactive stance on addressing the displacement of jobs by artificial intelligence. The bill outlines an intention to establish a universal basic income for residents affected by AI-induced employment shifts.

While specifics on funding—whether through employer contributions or public funds—remain unclear, this initiative underscores the state’s commitment to mitigating the economic impacts of technological advancements on the workforce.

SB 1047: Setting the Bar for AI Safety

This legislation mandates that developers of significant AI models—referred to as “covered models”—conduct thorough safety evaluations prior to public release.

Specifically, they must verify the absence of any “hazardous capability,” such as generating biological or nuclear threats or enabling cyberattacks causing damages upwards of $500 million. It requires measures to block unauthorized access and mandates that these models can be rapidly shut down until they’re confirmed safe.

Importantly, the bill protects whistleblowers who report violations. Even if your business doesn’t develop AI technology directly, this bill could significantly affect the availability and functionality of the AI tools you use in the workplace. It signals the state’s legislative direction on technology regulation and underscores the growing emphasis on safety and security in AI development—making it essential reading for anyone relying on AI in their business operations.

AB 2602: Addressing Digital Replicas in Entertainment

Using digital replicas in the entertainment sector is a pressing concern, highlighted during last year’s actors’ strike. AB 2602 aims to regulate this practice: any contract provision allowing the creation and use of an individual’s digital likeness or voice, whether as a substitute for actual work or to train generative AI systems, would be unenforceable unless it transparently outlines the intended uses.

This bill particularly safeguards individuals at risk of job loss due to digital replication, especially those without legal or union representation. It mandates that entities capable of creating or using digital replicas must inform affected individuals by February 1, 2025, about the unenforceability of such provisions.

SB 896: Regulating AI in State Agencies

Titled the “Artificial Intelligence Accountability Act,” SB 896 takes a critical look at AI usage within state agencies. It compels these agencies to draft reports identifying optimal uses for generative AI tools and conduct a comprehensive risk analysis on how AI could threaten California’s essential energy infrastructure.

The Act obliges state agencies to disclose to individuals whenever generative AI facilitates their interactions. This legislation underscores the importance of transparency and accountability in the deployment of AI technologies by government entities.

Additional Regulatory Proposals to Know

California lawmakers are actively shaping the future of AI in the workplace and beyond. As an HR consultant, I make it my mission to keep you ahead of the curve on these developments. Here’s a brief overview of other items you should know.

Deepfake Accountability

Proposed laws target the unauthorized use of deepfakes. Entities creating deepfakes with someone’s personal identifiers without consent could face legal repercussions. A bill seeks to establish a group dedicated to studying deepfake impacts, aiming to safeguard individuals from unauthorized exploitation.

Election Security

In response to the misuse of deepfakes in elections, proposed measures would outlaw misleading digital content close to election periods. This initiative underscores the commitment to maintaining electoral integrity against the backdrop of advancing technology.

Legal Sector Transparency

With AI’s integration into legal document preparation, a bill proposes that legal professionals disclose their use of AI or machine learning to courts. This ensures adherence to ethical standards, promoting transparency and accountability.

Healthcare Decisions

The “Physicians Make Decisions Act” emphasizes the importance of human oversight in healthcare, particularly when algorithms influence patient care decisions. This proposal advocates for maintaining the essential human element in medical determinations.

Regulatory Developments

The California Privacy Protection Agency is proposing regulations to enhance consumer protections against automated decision-making technology (ADMT). If enacted, businesses would need to do the following.

Preemptively Notify

Inform consumers about ADMT usage, detailing the decision-making process and their rights to opt out and access information.

Opt-Out Provisions

Allow consumers to reject ADMT in life-altering decisions, including employment-related ones.

Access to Information

Enable consumers to understand how ADMT influences decisions affecting them.

Despite industry pushback, these proposed regulations signal a move towards greater transparency and consumer control over ADMT. It’s a critical development for businesses to monitor as it evolves.

Work With a Trusted HR & Small Business Consultant to Stay Compliant

California’s legislative and regulatory landscape regarding AI is rapidly evolving. For small business owners, startup HR teams, and solopreneurs, staying informed is not optional—it’s imperative. My consultant services are here to navigate these complexities, ensuring that your business remains compliant and ahead of the curve. Let’s embrace these changes together, fostering a safe and equitable AI-integrated workplace.

Contact me to get started!
