U.S. Attorneys General Push for Stronger AI Safety Measures
42 state attorneys general urge major AI companies to implement stronger safety measures to protect users, especially children, from harmful AI outputs.
Coalition Calls for Immediate Action
A coalition of 42 state attorneys general has sent a letter to 13 leading AI firms, including Google, Microsoft, and OpenAI, urging them to adopt stronger safeguards against harmful outputs and highlighting concerns about the impact of generative AI on children and the public.
Letter Leadership and Concerns
The letter, led by the attorneys general of Pennsylvania, New Jersey, West Virginia, and Massachusetts, emphasizes the need for immediate action. It cites instances in which AI outputs have been linked to serious harms, including the grooming of minors, encouragement of suicide, and incitement to violence. The attorneys general argue that residents should not be used as “guinea pigs” while AI companies experiment with new applications.
Proposed Safety Measures
To address these concerns, the coalition has proposed 16 measures that companies should adopt by January 16, 2026. These include:
- Conducting safety tests to prevent harmful outputs.
- Establishing recall procedures for problematic models.
- Displaying permanent on‑screen warnings.
- Separating revenue optimization from model safety.
- Increasing transparency and independent third‑party testing.
- Providing age‑appropriate outputs.
Pennsylvania AG’s Statement
Pennsylvania Attorney General Dave Sunday stressed the importance of ensuring AI products are safe before they reach the market, noting that children already face significant stressors in the digital world and that harmful interactions with AI must stop immediately.
Balancing Innovation and Public Safety
While acknowledging the potential of AI innovation, the attorneys general insist that public safety must come first. The letter serves as a warning that companies failing to act could find themselves in violation of state law.
