State Attorneys General Step In to Regulate AI Amid Federal Gaps

May 20, 2025
[Image: A digital judge's gavel covered in binary code, symbolizing AI and legal systems.]

As artificial intelligence continues to evolve rapidly and embed itself into nearly every aspect of daily life—from hiring and healthcare to law enforcement and education—government oversight has struggled to keep pace. With Congress slow to pass comprehensive AI legislation and federal agencies still in the early stages of drafting meaningful regulatory frameworks, state attorneys general are stepping in to fill the gap.

This decentralized approach is not unprecedented, but the scope and speed of AI’s influence have made the absence of national standards particularly glaring. Now, state-level leaders are beginning to assert their authority, launching investigations, introducing regulations, and filing lawsuits aimed at holding tech companies accountable for how their AI systems are built and used.

The Growing Role of State Attorneys General

Historically, attorneys general have played a critical role in consumer protection, data privacy enforcement, and corporate accountability. In the digital age, their attention has turned toward how AI affects civil rights, consumer safety, and economic fairness. Over the last year, multiple state AGs have expressed concerns about algorithmic bias, deceptive AI-generated content, and the lack of transparency surrounding the training data behind generative AI tools.

For example, the Attorney General of New York recently opened an inquiry into how major AI companies acquire and use publicly available online content to train their models. This came on the heels of complaints from authors and artists who allege that their work has been used without consent or compensation to train language models and image generators.

Similarly, California’s AG has hinted at potential regulatory action aimed at requiring companies to disclose the risks associated with deploying AI technologies in sensitive sectors like healthcare and education. In Massachusetts, officials are considering consumer-protection rules that would mandate clearer disclosures when consumers are interacting with AI—whether in chatbots, recruitment tools, or customer service lines.

Why States Are Taking Action

The push at the state level is partly driven by a lack of clarity and urgency at the federal level. While President Biden issued an executive order on AI in 2023 calling for greater transparency, ethical safeguards, and security standards, meaningful legislation has yet to emerge from Congress. Proposed bills addressing algorithmic accountability or data privacy have stalled, in part due to partisan divisions and the complexity of the technology.

Without comprehensive federal guardrails, states have both the legal footing and the motivation to act. Many state constitutions empower attorneys general to take broad action in defense of public welfare. And in the absence of federal preemption, their regulations can stand—at least until challenged in court or superseded by national law.

State AGs also recognize that AI poses localized threats. Facial recognition systems that misidentify people of color, or automated tenant-screening tools that unfairly deny housing applications, can disproportionately harm particular communities. Because these harms often surface at the local level, state oversight is especially relevant.


Industry Pushback and Legal Gray Areas

Not surprisingly, the tech industry has pushed back on this patchwork approach. Executives argue that inconsistent state laws make compliance more difficult and stifle innovation. There’s also the risk that overly broad state enforcement actions could chill progress in fields like healthcare diagnostics, autonomous vehicles, or educational software.

Furthermore, there are genuine questions about the limits of state power in regulating technologies that operate across borders. For example, can a state attorney general compel a multinational tech company to alter the architecture of a machine learning model or provide visibility into proprietary algorithms?

Still, some state leaders believe the risk of doing nothing is greater than the risk of overstepping. As Massachusetts AG Andrea Campbell said in a recent panel, “We cannot wait for Washington to catch up. Our communities are already feeling the impact of AI—good and bad—and we must act now to ensure fairness and accountability.”


A Preview of the Future?

The increasing involvement of state attorneys general could signal a future in which AI oversight mirrors the existing landscape for data privacy or environmental protection—where states set their own standards, forcing companies to comply with the strictest jurisdictions.

This could lead to innovation, as companies adapt to meet higher standards, but also fragmentation, where businesses must navigate conflicting rules. Already, comparisons are being drawn to the California Consumer Privacy Act (CCPA), which prompted broader changes in how companies handle personal data—not just in California, but across the U.S.

In the meantime, attorneys general are calling for cooperation rather than confrontation. Several have urged federal lawmakers to move faster in crafting legislation that would give the U.S. a unified approach to regulating AI. But until that happens, it appears the front lines of AI regulation will remain at the state level.

Why This Matters for Legal Professionals and Businesses

Attorneys, compliance officers, and business owners—especially those in tech, healthcare, education, and marketing—should pay close attention to this evolving landscape. The rise in state-led AI oversight means that legal exposure could vary significantly based on where a company operates or serves customers.

Firms should consider conducting internal audits of their AI tools, updating compliance protocols, and monitoring state-level developments closely. Those who wait for federal clarity may find themselves unprepared for the regulatory challenges already emerging across state lines.

As the legal frameworks for AI continue to take shape, state attorneys general are making it clear: the age of unchecked algorithmic influence is coming to an end. Whether that transformation happens through thoughtful legislation or through the courtroom remains to be seen.
