Scott Wiener on his fight to make Big Tech disclose AI’s risks

This isn’t California state Senator Scott Wiener’s first attempt at addressing the risks of AI.

In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw an “SB 1047 Veto Party.” One attendee told me, “Thank god, AI is still legal.”

Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom’s desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is far more popular, or at least, Silicon Valley doesn’t appear to be at war with it.

Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation and says “SB 53 is a step in that direction,” though there are areas for improvement.

Former White House AI policy adviser Dean Ball tells TechCrunch that SB 53 is a “victory for reasonable voices,” and thinks there’s a strong chance Governor Newsom signs it.

If signed, SB 53 would impose some of the nation’s first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google — companies that today face no obligation to disclose how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do so at will and are not always consistent.

The bill requires leading AI labs — specifically those making more than $500 million in revenue — to publish safety reports for their most capable AI models. Much like SB 1047, the bill focuses specifically on the worst kinds of AI risks: their ability to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other kinds of AI risks, such as engagement-optimization tactics in AI companions.

SB 53 also creates protected channels for employees working at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.

One reason SB 53 may be more popular than SB 1047 is that it’s less severe. SB 1047 would also have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also applies narrowly to the world’s largest tech companies, rather than startups.

But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards — which is a funny thing to say to a state governor. The venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California could violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly restricting interstate commerce.

Senator Wiener addresses those concerns: he lacks faith in the federal government to pass meaningful AI safety legislation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”

The Trump administration has made a notable shift away from the Biden administration’s focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”

Silicon Valley has applauded this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are frequently seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

Senator Wiener thinks it’s important for California to lead the nation on AI safety, but without choking off innovation.

I recently interviewed Senator Wiener to talk about his years at the negotiating table with Silicon Valley and why he’s so focused on AI safety bills. Our conversation has been lightly edited for clarity and brevity. My questions are in bold, and his answers are not.

Maxwell Zeff: Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk to me about the journey you’ve been on to regulate AI safety over the last few years.

Scott Wiener: It’s been a roller coaster, an incredible learning experience, and just really rewarding. We’ve been able to help elevate this issue [of AI safety], not just in California, but in the national and global discourse.

We have this incredibly powerful new technology that is changing the world. How do we make sure it benefits humanity in a way that reduces the risk? How do we promote innovation while also being very mindful of public health and public safety? It’s an important — and in some ways, existential — conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.

Over the last two decades of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?

I’m the guy who represents San Francisco, the beating heart of AI innovation. I’m immediately north of Silicon Valley itself, so we’re right here in the middle of it all. But we’ve also seen how the large tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation.

Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really smart people who have generated enormous wealth. A lot of people I represent work for them. It really pains me when I see the deals being struck with Saudi Arabia and the United Arab Emirates, and how that money gets funneled into Trump’s meme coin. It causes me deep concern.

I’m not someone who’s anti-tech. I want tech innovation to happen. It’s incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that’s not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity but also cause harm if there aren’t sensible regulations to protect the public interest. With AI safety, we’re trying to thread that needle.

SB 53 is focused on the worst harms that AI could imaginably cause — death, massive cyberattacks, and the creation of bioweapons. Why focus there?

The risks of AI are diverse. There’s algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk created by AI. We’re focused on one specific category of risk: catastrophic risk.

That issue came to me organically from people in the AI space in San Francisco — startup founders, frontline AI technologists, and people who are building these models. They came to me and said, ‘This is an issue that needs to be addressed in a thoughtful way.’

Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?

I don’t think they’re inherently safe. I know there are a lot of folks working in those labs who care very deeply about trying to mitigate risk. And again, it’s not about eliminating risk. Life is about risk. Unless you’re going to live in your basement and never leave, you’re going to have risk in your life. Even in your basement, the ceiling might collapse.

Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause those severe harms, and so should the people developing these models.

Anthropic issued its support for SB 53. What are your conversations like with other industry players?

We’ve talked to everybody: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported [SB 1047], but they had positive things to say about aspects of the bill. I don’t think [Anthropic] loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.

I’ve had conversations with large AI labs that are not supporting the bill, but are not at war with it in the way they were with SB 1047. It’s not surprising. SB 1047 was more of a liability bill; SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.

Do you feel pressure from the huge AI PACs that have formed in recent months?

This is another symptom of Citizens United. The wealthiest companies in the world can simply pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It has never really impacted how I approach policy. There have been groups trying to destroy me for as long as I’ve been in elected office. Various groups have spent millions trying to blow me up, and here I am. I’m in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.

What’s your message to Governor Newsom as he’s deciding whether to sign or veto this bill?

My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.
