AI executive order could deepen trust crisis, not solve it
After a failed attempt earlier this year, President Trump is poised to sign an executive order that would pre-empt states from enacting their own forms of AI regulation. While the exact text is unknown today, a previously leaked draft outlines how the Trump administration would leverage the Justice Department to challenge state AI laws, casting AI regulation both as a matter of interstate commerce (solidly within federal jurisdiction) and as a blocker to the innovation needed to maintain a strong national security posture. However, signing this EO will only inject further uncertainty into the regulatory landscape, spur prolonged legal battles, and undermine the Trump administration's goal of unleashing AI innovation.
The EO will make the AI trust gap worse
An EO that punishes states for attempting to introduce trust and oversight into the ecosystem will have the opposite of its intended effect. A KPMG study from spring 2025 found that 59% of Americans surveyed simply do not trust AI systems. Similar studies have registered the same sentiment: a majority of Americans distrust AI systems, or at the very least remain skeptical of them.
Implementing the EO would further exacerbate that distrust because there will be fewer barriers to prevent bad actors from entering the ecosystem. The threat of bad actors manipulating and harnessing AI with malicious intent can further degrade trust in AI systems, which can slow AI investment and innovation and dent consumer and business confidence in AI. Even when bad actors can be properly contained, the perception that AI is unregulated could erode trust, especially in highly regulated sectors like the public sector, healthcare or finance. Lower trust means lower AI adoption rates at a time when AI companies may need them most. There is an existing "adoption and revenue gap" in the AI space, which has fueled legitimate fears of an AI bubble. If that bubble bursts, it could be disastrous for AI investment and slow the growth of US AI companies.
In addition, regulation itself functions as a form of risk transference. Without clear regulatory guardrails, organizations, especially those in highly regulated sectors, must assume full responsibility for evaluating, validating and monitoring AI systems. This significantly increases the perceived liability of deploying AI. For example, the CIO of a large hospital system is far less likely to approve a new AI tool if their organization must shoulder the entire burden of due diligence and potential downstream harm.
We believe in a balanced, shared responsibility model on risk. Well-designed, evidence-backed regulations shift part of this burden to regulators, who establish baseline safety, transparency and accountability requirements. When some categories of risk are managed at the regulatory level, individual organizations face lower overhead, less uncertainty, and fewer barriers to adoption. This risk-sharing effect reduces friction in the market and can meaningfully accelerate AI procurement decisions.
Without such regulatory scaffolding, every enterprise must reinvent its own governance framework, often at great cost, which slows adoption and contributes to the current "adoption and revenue gap" facing AI companies. At a time when confidence in AI markets is fragile, the absence of regulation can amplify uncertainty, reduce investment appetite, and heighten the risk of an AI bubble contraction.
Inevitable lawsuits will further fragment AI ecosystem
According to the leaked draft, the EO's main order directs the DOJ to establish a task force to evaluate state AI laws for onerous impacts and to sue states for enacting "unconstitutional" AI laws. States have been preparing for this and will almost immediately sue the Trump administration once the EO is signed. While the states and the federal government fight it out in the courts, companies will be caught in the crossfire as they wait for judges to decide whether enforcement deadlines can take effect while litigation is pending. Compliance with existing state laws, some of which address AI in critical sectors such as insurance and healthcare, would also be thrown into disarray as the courts decide whether the executive branch can unilaterally pre-empt state legislatures.
If the regulatory ecosystem was already messy and difficult to navigate, a nationwide legal battle over AI and federalism will only intensify the problem. Companies are already struggling to understand their obligations under laws in places like California, Colorado, Utah and Texas. Pulling the rug out from under these companies means countless dollars will be spent trying to anticipate whether their compliance programs will still be relevant or necessary in a couple of years.
A solution in search of a problem
A key complaint about state laws is that they miss the mark on addressing critical issues across the AI supply chain. Issues such as allocating liability, workforce displacement and training data sources remain largely untouched by state laws. Instead, state lawmakers have prioritized high-level rules for frontier model providers, which has left companies struggling to answer these legal questions on their own. The EO would not offer much help and would actually hinder progress toward closing these gaps. Leaving the questions unanswered makes it difficult for companies to confidently invest in trustworthy AI products and services.
What needs to happen next?
Sidelining state legislatures is not the answer to accelerating AI innovation and improving AI adoption. In some respects, a federal AI law that sets the floor for how AI technologies can be developed with trust and security in mind would calm the uncertain regulatory seas. However, states also have a vital role to play in how AI can be used in certain contexts. For instance, states handle insurance regulation, and it would make sense to allow them to pass laws that address their unique markets. At the end of the day, the best outcome for AI innovation is when states and the federal government work in tandem to build a robust, common-sense regulatory safety net that provides companies with clear and simplified rules of the road.
Andrew Gamino-Cheong is co-founder and CTO of Trustible.