
5 U.S. States Now Regulate AI in Hiring — Is Yours Next?

There is no federal law governing AI in hiring. Congress has floated several bills; none have passed. So the states stepped in, and they didn't coordinate with each other.

The result is a patchwork. Five jurisdictions (four states and New York City) now have enforceable rules on automated employment decision tools, each with different definitions, different obligations, and different enforcement mechanisms. If you use AI anywhere in your hiring pipeline — resume screening, video interview analysis, skills assessment scoring, candidate ranking — you likely have compliance obligations you didn't have two years ago.

The Five Laws at a Glance

Here's what each jurisdiction requires. The differences matter more than the similarities.

| Jurisdiction | Law | Effective | Key Requirement | Penalty |
|---|---|---|---|---|
| New York City | Local Law 144 | July 2023 | Annual bias audit by independent auditor; public posting of audit results; candidate notice | Up to $1,500 per violation |
| Illinois | AI Video Interview Act (AIVITA) | January 2020 | Consent before AI analyzes video interviews; explanation of how the AI works; deletion upon request | Enforced under Consumer Fraud Act |
| Colorado | SB 24-205 | June 2026 (delayed) | Impact assessments; notice to applicants; opt-out rights for certain decisions; duty to avoid algorithmic discrimination | Enforced by AG; civil penalties |
| Maryland | HB 1202 | October 2020 | Applicant consent required before facial recognition analysis in interviews | Enforced under state employment law |
| New Jersey | S 1588 | March 2025 | Disclosure that an AEDT is being used; impact assessment; candidate can request an alternative process | Up to $10,000 per violation |

NYC Local Law 144: The One Everyone Knows

New York City's law gets the most attention because it was the first to require a bias audit — a statistical analysis of the tool's selection and scoring rates across race, ethnicity, and sex categories. The audit must be conducted by an independent third party, and the results must be published on the employer's website.

The scope is narrow in some ways: it covers "automated employment decision tools" that substantially assist or replace discretionary decision-making. A system that simply searches a database doesn't qualify. But the definition has teeth. If your AI tool scores, ranks, or filters candidates in a way that materially influences who gets an interview, it's probably covered.

The tricky part: the audit requirement resets annually. You can't do it once and forget about it. And the published summary must include the selection rate for each demographic category and the impact ratio — the selection rate of each group compared to the most-selected group. That data is public.
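The impact-ratio arithmetic described above is simple enough to sketch in a few lines. This is a minimal illustration with made-up category names and counts, not real audit data and not a substitute for an independent auditor's methodology:

```python
# Minimal sketch of an NYC Local Law 144-style impact ratio calculation.
# The category labels and counts below are hypothetical illustration data.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a category the tool selected."""
    return selected / total

def impact_ratios(counts: dict) -> dict:
    """counts maps category -> (selected, total).

    Returns each category's selection rate divided by the highest
    selection rate among all categories (the most-selected group).
    """
    rates = {cat: selection_rate(s, t) for cat, (s, t) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening results: (selected, total applicants)
counts = {
    "Group A": (120, 400),  # 30.0% selected
    "Group B": (90, 400),   # 22.5% selected
    "Group C": (60, 400),   # 15.0% selected
}

for cat, ratio in sorted(impact_ratios(counts).items()):
    print(f"{cat}: impact ratio {ratio:.2f}")
# Group A is the most-selected group, so its ratio is 1.00;
# the other groups are expressed relative to it (0.75 and 0.50).
```

An auditor would compute these figures per race/ethnicity and sex category (and intersections), which is exactly the data the published summary exposes.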

Colorado SB 24-205: The Most Comprehensive (and Delayed)

Colorado took the broadest approach. SB 24-205 doesn't just apply to hiring — it covers any "high-risk AI system" that makes consequential decisions about people, including employment, education, financial services, insurance, and housing. But the employment provisions are particularly detailed.

Important update: the original February 2026 effective date has been pushed back. The Colorado legislature held a special session in 2025 to try to amend the law, but lawmakers couldn't reach a compromise. Governor Polis, citing concerns about the high compliance costs the law would impose on businesses, signed SB 25B-004 on August 28, 2025, postponing implementation to June 30, 2026. The underlying requirements haven't changed — they just take effect later.

Deployers must conduct impact assessments before deploying a high-risk system and annually thereafter. The assessment must analyze the system's purpose, how it was evaluated for risks, the data it uses, its expected outputs, and any safeguards against algorithmic discrimination. This isn't a checkbox exercise — the statute expects a substantive analysis.
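One way to keep an assessment from becoming a checkbox exercise is to give each statutory element a named field that someone must actually fill in. The sketch below is a hypothetical organizational template paraphrasing the elements listed above; the field names and sample values are illustrative, not a legal form:

```python
# Hypothetical template for organizing a Colorado SB 24-205-style
# impact assessment. Field names paraphrase the statutory elements
# described in the article; values below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                 # what the high-risk system is for
    risk_evaluation: str         # how it was tested for algorithmic discrimination
    data_description: str        # categories of data the system consumes
    expected_outputs: str        # scores, rankings, recommendations, etc.
    safeguards: list = field(default_factory=list)  # mitigations in place
    completed_on: str = ""       # reassess at least annually

assessment = ImpactAssessment(
    system_name="resume-screener-v2",  # hypothetical tool name
    purpose="rank applicants for recruiter review",
    risk_evaluation="adverse-impact testing across protected categories",
    data_description="resume text; no biometric or demographic inputs",
    expected_outputs="0-100 relevance score per applicant",
    safeguards=["human review of all rejections", "quarterly drift checks"],
    completed_on="2026-06-01",
)
```

The annual-reassessment duty maps naturally onto `completed_on`: if the date is more than a year old, the assessment is stale.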

Applicants must receive notice that an AI system is being used, a description of what it does, and instructions for requesting a human alternative or appealing the outcome. That last part is significant. If a candidate asks for a human to review their application instead of the AI, you need a process for that.

The Others: Illinois, Maryland, New Jersey

Illinois was actually first. AIVITA, passed in 2019, requires employers to notify applicants before using AI to analyze video interviews, explain what characteristics the AI evaluates, and get consent. Applicants can request deletion of the video, and employers can't share it without consent. The original scope was narrow — video interviews only — but Illinois expanded it in 2025 when Governor Pritzker signed HB 3773, the AI Transparency in Employment Act. That law extends notice and consent requirements beyond video interviews to cover any AI analysis of applicants, including resume screening and other automated evaluations. If you hire in Illinois, the obligations now go well beyond video.

That broader scope signaled the direction things were heading.

Maryland's HB 1202 is even narrower: it prohibits employers from using facial recognition during interviews unless the applicant consents in writing. Simple, limited, and easy to comply with. But it shows that even states that don't pass sweeping legislation are carving out specific prohibitions.

New Jersey's law, which took effect in early 2025, borrowed from NYC's model but added the right to request an alternative assessment process. The penalties are also steeper — up to $10,000 per violation, which adds up fast at scale.

What to Do If You Use AI in Hiring

The compliance burden depends on where your candidates are located, not where your company is headquartered. If you're hiring remote workers across the U.S., you should assume these laws apply to at least some of your applicant pool.

  • Audit your pipeline. Identify every point where AI influences a hiring decision — screening, scoring, ranking, scheduling, assessment. Include vendor tools.
  • Map your candidate geography. Which states are your applicants in? That determines which laws apply to which candidates.
  • Get your bias audit done if you have NYC applicants. Use an independent auditor. Publish the results. Repeat annually.
  • Build a consent and disclosure workflow. Most of these laws require notice before the AI system is used, not after. Retrofit your application process.
  • Prepare for Colorado's impact assessment requirement. Even if you're not deploying there yet, the assessment framework is a useful model for any jurisdiction.
  • Create an opt-out or alternative process. New Jersey and Colorado both require it. Having a human-review path ready is good practice regardless.
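The geography-mapping step above amounts to a union: for each jurisdiction represented in your candidate pool, collect the obligations it triggers. A simplified, illustrative sketch (the jurisdiction keys are shorthand and the obligation strings condense the requirements discussed in this article; this is not legal advice):

```python
# Simplified, illustrative mapping from candidate jurisdiction to the
# compliance steps discussed above. Obligations are condensed summaries,
# not statutory language.
OBLIGATIONS = {
    "NYC": ["annual independent bias audit", "publish audit summary",
            "candidate notice"],
    "IL":  ["pre-use notice", "applicant consent", "deletion on request"],
    "CO":  ["impact assessment", "applicant notice",
            "human-review alternative"],
    "MD":  ["written consent before facial recognition"],
    "NJ":  ["AEDT disclosure", "impact assessment",
            "alternative process on request"],
}

def obligations_for(candidate_locations: list) -> list:
    """Union of compliance steps triggered by the candidate pool's locations."""
    steps = set()
    for loc in candidate_locations:
        steps.update(OBLIGATIONS.get(loc, []))
    return sorted(steps)

# A remote role drawing applicants from NYC and Colorado triggers both
# the bias-audit track and the impact-assessment track.
print(obligations_for(["NYC", "CO"]))
```

The practical point: obligations accumulate across the pool, which is why building to the most demanding standard is usually cheaper than tracking per-candidate subsets.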

More States Are Coming

At least a dozen states had active AI hiring bills in their 2025–2026 legislative sessions, including California, Massachusetts, Texas, and Washington. The trend is clear: more states, more requirements, more variation.

One wildcard: the federal government's December 2025 executive order on AI preemption has introduced real uncertainty about whether new state AI laws will survive federal preemption challenges. Some argue the order signals an intent to establish a unified federal framework that overrides the state patchwork. Others contend that executive orders can't preempt state legislation on their own. Until Congress acts or courts weigh in, the legal landscape is genuinely unsettled. Don't assume future state laws will stick — but don't assume they won't either.

The pragmatic move is still to build toward the most demanding standard — right now, that's Colorado — and treat everything else as a subset. If you can satisfy Colorado's impact assessment and applicant-rights requirements, you're well-positioned for whatever comes next.

Key Takeaways

  • Five U.S. jurisdictions now regulate AI in hiring: NYC, Illinois, Colorado, Maryland, and New Jersey. Each has different requirements.
  • Compliance depends on where your candidates are located, not where your company is based.
  • Colorado's SB 24-205 is the most comprehensive — impact assessments, applicant notice, opt-out rights, and annual reviews. The effective date was delayed to June 30, 2026 after Governor Polis signed a postponement bill.
  • Building to the most demanding standard (Colorado) is the most efficient long-term strategy.

Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.