Our AI, Data, and Analytics thought leaders have pulled together their top predictions for the new year so that employers can get a running start to 2026.

Top Predictions for Artificial Intelligence

Colorado and Virginia Will Take Different Regulatory Paths

Colorado’s landmark AI law will not take effect in 2026, as legislators, regulators, and stakeholders delay its effective date to renegotiate key provisions. Virginia, meanwhile, will take another crack at AI lawmaking, this time aiming to regulate transparency and liability exposure in healthcare.

California’s ADMT Regulations Will Make AI Governance a National Standard

As they gear up for CPPA regulations to take effect in 2027, businesses will need to grapple with risk assessments, annual cybersecurity audits, pre-use notices, and opt-out rights in 2026. Most multistate businesses will choose to comply with this stringent standard by adopting an AI governance program as table stakes.

Bias Audits Will Become a Must-Have for Employers

Even without a federal AI law, bias audits will become a baseline expectation for employers using AI tools: plaintiffs’ attorneys are already pointing to the absence of an audit as evidence of negligence or discriminatory design. Learn more about FP’s AI Bias Detection and Mitigation Program.

Congress Will Still Be Talking About AI, Not Passing a Comprehensive Law

Despite urging from the White House and an Executive Order pushing lawmakers to reach a deal, Congress will not pass a comprehensive federal AI statute in 2026. Instead, employers will face a growing patchwork of state and local laws, along with rising expectations that they align with recognized frameworks (like NIST’s AI Risk Management Framework).

States Will Zero In on Children’s Safety in AI Chatbots

In the absence of federal action, state regulators will increasingly focus on AI systems that interact with minors in 2026, particularly chatbots and companion-style tools. Expect heightened scrutiny around age verification, data collection, content moderation, emotional manipulation, and mental-health impacts. Even companies that don’t view themselves as “child-focused” may find their AI tools swept into these rules if minors can access them.
