
Even Texas and California Agree: AI Needs Real Oversight Now

Artificial intelligence development faces a critical juncture, with growing bipartisan agreement on the need for independent oversight beyond industry self-regulation.

iPhone 13. Image Credit: Creative Commons.

The development of artificial intelligence is at an inflection point. Concerns exist not only over what the technology can do, but also about who is willing to take responsibility when it fails.

Across the country, from California to Washington, D.C., the debate over AI governance is heating up. The recent California AI Expert Advisory Council’s interim working report, commissioned by Gov. Gavin Newsom, and the 8,755 comments submitted to the White House’s AI Action Plan both signal a growing bipartisan consensus: It is time to stop treating AI oversight as an abstract future problem and start building real, independent capacity to evaluate risk now.

The California report, authored by luminaries including the so-called Godmother of AI, Stanford University’s Fei-Fei Li, is among the most sober, technically informed roadmaps to have emerged from any state. While it stops short of prescribing specific legislation, it pulls no punches about the fact that the AI ecosystem currently lacks independent third-party evaluations, standardized stress-testing before deployment, and structured pathways for public disclosure when AI models go awry.

Strikingly, even Texas’s 2024 AI Advisory Council interim report, the product of a politically conservative state known for resisting regulatory overreach, echoes similar themes. The report concludes that lawmakers must explore independent technical assessments and public-risk disclosures to ensure AI systems don’t compromise safety, civil liberties, or critical infrastructure. 

While the Texas report places greater emphasis on national security and state-level procurement, the overlap between California’s and Texas’ AI positioning is striking. Despite the two states’ political and economic rivalry, there is growing bipartisan recognition that self-regulation alone is not good enough for managing AI systems.

The California report’s lead authors are blunt in stating that transparency is a starting point, not an endpoint. Voluntary disclosures from model developers, no matter how well intentioned, cannot substitute for external, verified testing of potential real-world harms, especially as models grow more powerful and opaque.

At the federal level, commenters responding to the National Science Foundation and the Office of Science and Technology Policy’s request for input echoed this very concern. Industry leaders such as OpenAI and Palantir acknowledged the need for expanded federal capacity and stronger public-private partnerships to evaluate risk. The Business Roundtable urged action to avoid a fractured regulatory landscape, while organizations including the Open Source Initiative, the Center for Security and Emerging Technology, and Open Philanthropy called for common evaluation standards, independent audits, and pre-deployment testing protocols to ensure accountability.

Whether they be red or blue, public or private, academic or commercial, most serious voices agree that self-policing is not enough for AI governance. The stakes are too high, the risks to privacy too immediate, and the lessons from industries like oil and social media—as well as California’s own experience with data protection through the California Consumer Privacy Act—are too fresh to ignore.

This concern is also backed by the public. In a March 2025 YouGov poll, most Americans (58 percent) were very concerned about the possibility of AI spreading misleading video and audio deep fakes. Further, around “half of Americans are very concerned about each of the following: the erosion of personal privacy (53 percent), the spread of political propaganda (52 percent), the replacement of human jobs (48 percent), and the manipulation of human behavior (48 percent).” 

But what could independent oversight look like? While oversight could come from a newly formed federal agency, Congress could also build on existing partnerships, for example by expanding the National Institute of Standards and Technology’s AI Safety Institute to work more closely with trusted third-party evaluators. Policymakers could bolster public testing labs or fund universities to audit powerful models. State-level procurement agencies could also require independent safety benchmarks as a condition of doing business.

California’s AI report may not bind the Newsom administration to specific action, but it is already influencing bills under consideration in the legislature, including State Senator Scott Wiener’s revived AI safety legislation and Assembly Member Buffy Wicks’ transparency requirements. In fact, nearly three dozen AI-related bills are under consideration in the California legislature this session. Many of these bills could draw on the report’s recommendations around third-party evaluation, whistleblower protections, and public risk disclosure.

Concurrently, at the federal level, President Donald Trump’s Executive Order “Removing Barriers to American Leadership in Artificial Intelligence” emphasizes promoting AI development free from ideological bias. This reflects a broader U.S. pivot toward an innovation-first, market-driven approach to AI aimed at maintaining global superiority in the face of rising Chinese advancements, such as DeepSeek’s recent breakthroughs.

However, proposals to cut regulations have sparked debate about balancing innovation with necessary safeguards for consumer protection and national security. The forthcoming AI Action Plan, informed by public and industry feedback, presents an opportunity to address these concerns comprehensively and potentially thread the needle between global leadership and responsible governance. AI safety and security measures may represent a rare bipartisan opportunity in an otherwise divisive political environment. 

The risk, as always, is inertia. But the cost of doing nothing is clear. The California expert panel compared the current moment in AI to the early days of tobacco and fossil fuels, when industry knew the risks of its products but faced little accountability, and policymakers didn’t yet have the tools to respond.

We don’t need to wait for an AI model to fail catastrophically before we act. We already know what a reasonable baseline looks like: transparency, third-party testing, and shared responsibility. The only question left is whether we have the political will to get there before it’s too late.

About the Authors: 

Joseph Hoefer is the AI Practice Lead at Monument Advocacy.

Jeff Le is Managing Principal at 100 Mile Strategies LLC and a Visiting Fellow at George Mason University’s National Security Institute. From 2015 to 2019, Jeff was Deputy Cabinet Secretary to former California Governor Jerry Brown, responsible for the emerging technology and cybersecurity portfolio for the state.
