Trump takes another shot at dismantling state AI regulation


The Trump administration on Friday unveiled its new legislative blueprint for AI regulation, and the seven-point plan includes a clear message: The federal government should avoid many AI regulations beyond a set of child safety rules, and it should bar states from messing with the “national strategy to achieve global AI dominance.”

The plan advises Congress to protect minors using AI services with more safeguards and to act to keep electricity costs from spiking due to AI infrastructure. It encourages “youth development and skills training” to boost familiarity with AI tools, without much further detail. But it suggests a wait-and-see approach to whether training AI models on copyrighted material without permission is legal, and it maintains a long-running Republican push to limit whether states can enact their own AI laws.

The document’s provisions, however, will only take effect if Congress adopts them and passes them into law.

The Trump administration blueprint encourages passing laws similar to the Take It Down Act — which was signed into law in May 2025 and bars nonconsensual AI-generated “intimate visual depictions,” requiring certain platforms to rapidly remove them. The document also endorses age verification, suggesting that Congress “establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age-gating is controversial from a privacy standpoint and carries significant surveillance implications. The blueprint proposes other child protection measures, such as limits on training AI models on minors’ data and on targeted advertising based on that data. (The document does not seek to prohibit those practices for children’s data, just limit them.) At the same time, it states that Congress “should avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation.”

In the age of deepfakes, when AI-generated videos are looking more real than ever and a fake video of a politician can instantly propagate global conspiracy theories, the new policy blueprint seeks to “consider establishing a federal framework protecting individuals from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes.” (That could mean finally creating a federal likeness law.) But it also says lawmakers should provide “clear exceptions” for parody, news reporting, satire, and other First Amendment-protected use cases.

The blueprint also discourages Congress from taking up AI copyright issues. “Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue,” it says. “Congress should not take any actions that would impact the judiciary’s resolution of whether training on copyrighted material constitutes fair use.”

In another section, the blueprint raises concerns about large-scale scams and fraud that are increasingly powered by AI, stating that Congress should “augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors,” although no extra details are provided.

The Trump administration continued leaning into the pro-federal, anti-state approach to AI regulation that it’s been promoting (so far unsuccessfully) for nearly a year. The blueprint says Congress should “preempt state AI laws that impose undue burdens” and avoid “fifty discordant” standards for companies, adding that states “should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.” Other legal protections for AI companies were baked in, too, such as the idea that states shouldn’t be allowed to “penalize AI developers for a third party’s unlawful conduct involving their models.” But in the child-privacy section, the document does allow states some limited wiggle room, stating that Congress shouldn’t preempt states from “enforcing their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.” The allowance comes after numerous figures from both parties expressed concern about overturning local child safety laws, including nearly 40 attorneys general for US states and territories.

The overall goal, as in earlier Trump administration proposals, is speeding AI development. “The United States must lead the world in AI by removing barriers to innovation [and] accelerating deployment of AI applications across sectors,” the document states, adding that Congress should find ways to make federal datasets available to AI companies and academics in “AI-ready formats for use in training AI models and systems.” It didn’t specify which types of federal datasets it sought to make publicly available for AI training. The plan also definitively answers a long-asked question in AI regulation — whether there should be one federal body responsible for AI regulation or whether AI regulation should be left to each sector — and says that Congress “should not create any new federal rulemaking body to regulate AI”; instead, it says, it will “support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise.”

President Trump signed an executive order last July seeking to prevent “woke AI” by banning government agencies from using models that “incorporated” topics like systemic racism. He recently ordered all agencies to blacklist the “Radical Left AI company” Anthropic for setting limits on military use of its models, something Anthropic alleges violates its First Amendment rights. At the same time, the blueprint states that the government “must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent.” It goes further to say that Congress should explicitly prevent the government from “coercing” AI providers “to ban, compel, or alter content based on partisan or ideological agendas” — and that in the event that government agencies censor expression on AI platforms or dictate the information they provide, then Congress should provide a way for Americans to “seek redress.”

Last month, we saw the first bipartisan effort to address higher utility bills in communities with data centers nearby, and the new AI policy framework seems to address those concerns on both sides of the aisle, saying that Congress should find ways to make sure that “residential ratepayers do not experience increased electricity costs as a result of new AI data center construction and operation.” But, it says, Congress should streamline federal permits for data center construction and operation, making it easier for AI companies to “develop or procure on-site and behind-the-meter power generation” — meaning that data center construction should still be full-speed-ahead, but community members shouldn’t have to literally pay the price on their monthly bills.
