
⚙️ Ok Dr. Li, it’s time to talk about California SB 1047

Good morning. Today, we’re finally tackling California SB 1047, a highly contentious bill that aims to establish AI safety guardrails.

Also, Disney is raising the subscription prices on Disney+, Hulu and ESPN+. So it’s a good time to advocate for DVDs. DVDs are awesome. And they don’t require a subscription …

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Outsmarting invasive species

  • OpenAI makes a $60 million hardware investment

  • Humane Pin returns are outpacing sales

  • Ok Dr. Li, it’s time to talk about California SB 1047

AI for Good: Outsmarting invasive species

Source: University of Florida

Florida has a bit of a lizard problem. Though the state is home to a long list of reptilian creatures, including plenty of snakes, iguanas and geckos, it has more recently been invaded by a species of lizard called the Argentine tegu.

Tegus aren’t native to Florida, and because of their impact on the local wildlife, they’ve been categorized as an invasive species and a threat to local biodiversity. 

Researchers are using artificial intelligence to get a handle on them. 

The details: Researchers at the University of Florida recently launched a project involving “smart” traps — loaded up with AI software — designed to target these invasive tegus. 

  • The AI incorporated into these traps was trained on thousands of pictures to recognize tegus.

  • The traps are also web-enabled and can be controlled remotely through a mobile app. That means researchers can save the time they would otherwise spend checking ordinary traps in person and apply it instead to fighting other invasions.

“If we can implement innovative solutions to remove invasive species that are effective and reduce costs, that is a win-win,” project lead Melissa Miller said in a statement.

Customize AI Automations with Your Own Data

MindStudio's intuitive no-code platform makes AI accessible to every company — even those with limited dev resources.

Learn to build simple apps + how to scale your learnings to fully customize AI automations powered by your data.

OpenAI makes a $60 million hardware investment 

Source: Opal

OpenAI’s startup fund recently led a $60 million funding round in Opal, according to The Information.

The details: Opal, previously known as Opal Camera, is a camera maker backed by YouTubers and TikTokers alike.

  • It is best known for selling professional webcams. 

  • The Information reported that the company plans to sell AI devices integrated with OpenAI’s software in addition to its line of webcams. 

The vision is to develop creative tools, rather than agents or assistants; Opal will be working closely with OpenAI to make that happen.

The context: Sam Altman — who invested in Humane and worked with former Apple designer Jony Ive — is super interested in AI devices. 

Gary Marcus said in response that “Altman wants to know — and monetize — everything about you. He might succeed.”

  • Warren Buffett now owns more T-bills than the Federal Reserve (CNBC).

  • How Intel spurned OpenAI and fell behind the times (Reuters).

  • ‘There’s no price’ Microsoft could pay Apple to use Bing: all the spiciest parts of the Google antitrust ruling (The Verge).

  • Inside the company that gathers ‘human data’ for every major AI company (Semafor).

  • AI Is Heating the Olympic Pool (Wired).

Humane Pin returns are outpacing sales

Source: Humane

Remember that universally hated Humane AI Pin? The genAI-powered device that users can clip to their shirt? The one that Marques Brownlee called the “worst product” he’s ever reviewed? 

Returns of the device, which launched in April, have been mounting steadily ever since.

The details: Between May and August, “more AI pins were returned than purchased,” according to The Verge’s Kylie Robison.

  • The Pin and its accessories have brought in around $9 million in total sales, according to The Verge. More than $1 million worth of product has been returned. 

  • Humane said that there were inaccuracies in The Verge’s report, but didn’t provide specifics. 

The company has raised more than $200 million in funding. The Pin costs $699 and requires a $24/month subscription. Only around 7,000 Pins are still in circulation.

Everyone tells you to learn AI but no one tells you where.

We have partnered with Growthschool to bring this ChatGPT & AI Workshop directly to you. (It normally costs $199, but readers of TDV get it free) 🎁

This workshop has been taken by 1 million people across the globe, who have been able to:

  • Build businesses that make $10,000 just by using AI tools

  • Make quick & smarter decisions using AI-led data insights

  • Write emails, content & more in seconds using AI

  • Solve complex problems, research 10x faster & save 16 hours every week

You’ll wish you knew about this FREE AI Training sooner (Btw, it’s rated at 9.8/10 ⭐)

Ok Dr. Li, it’s time to talk about California SB 1047

Source: Created with AI by The Deep View

Since California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — widely known as SB 1047 — was introduced, it has become the center of a fierce debate over the idea of over-regulation. Y Combinator and a16z have come out strongly against it, multiple times, and a number of industry figures, including Andrew Ng, have added their voices to the “nay” column.

Most recently, Stanford's Dr. Fei-Fei Li came out against the bill. It is first worth noting that Li recently built an AI startup that, in just a few months, has achieved a $1 billion valuation based on funding from Radical Ventures and a16z. As mentioned above, a16z is against the bill. 

Li’s arguments: Though she mentioned that the bill is “well-meaning” and aims to address “real problems,” she said it will harm the industry and — I hate this phrase, but — “stifle innovation.”

  • She said that, in the event of model misuse, the bill would hold model developers liable in addition to the parties actually responsible, which she argued is the wrong approach.

  • She also said that the bill mandates a “kill switch” that would stifle open-source research/innovation. 

And her last point was that the bill doesn’t address other harms of AI advancement, such as bias and deepfakes.

In her op-ed, Li does not make an alternative proposal, despite recognizing the importance of this regulation. 

Okay, let’s break it down: On Li’s last point, California has a separate bill in process — AB 2655 — that addresses issues of deepfakes. So. 

  • On her point about a kill switch … first, that language does not exist in the text of the bill. The bill’s language is instead “full shutdown,” an emergency provision to be used only in, well, emergencies. This applies to models only “within the control of the developer,” and does not apply to open-source models outside of a developer’s control. 

  • On her point about liability … current tort law — here’s a good breakdown for the uninitiated — applies broad liability to companies/developers. SB 1047 applies narrow liability: “only the Attorney General (can) file a suit if and only if a developer of a covered model fails to perform a safety evaluation or take steps to mitigate catastrophic risk and if a catastrophe then occurs,” according to the bill’s author. 

Read the full text of the bill here

Read the author’s defense here

While there are legitimate criticisms of the bill (such as this group of University of California students/professors worried about the unscientific process of AI risk forecasting), here’s the reality of SB 1047:

  • Developers of covered models — models using 10^26 FLOPs of compute OR models that cost $100 million or more in training — must conduct safety testing and share that testing with the state before releasing said model.

  • They must also implement a capacity for “full shutdown” in these covered models in the event of an emergency, which simply means being able to halt a model’s operation or its training.

If a developer doesn’t do this, and the model causes “critical harm” that leads to mass casualties OR causes $500 million in damages, then the state AG can sue. At that point, the state can mandate the deletion of said covered model. 
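
To make those thresholds concrete, here’s a toy Python sketch of the coverage and liability logic as described above. Every name and number field here is illustrative — the bill is legal text, not code — but the structure follows the newsletter’s summary:

# Illustrative sketch only: SB 1047's coverage and liability triggers,
# as summarized above. All names are hypothetical.

COMPUTE_THRESHOLD_FLOPS = 1e26        # compute threshold for a "covered model"
TRAINING_COST_THRESHOLD = 100_000_000   # $100 million training-cost threshold
DAMAGES_THRESHOLD = 500_000_000       # $500 million "critical harm" damages

def is_covered_model(training_flops: float, training_cost: float) -> bool:
    # A model is covered if it crosses the compute OR the cost threshold.
    return (training_flops >= COMPUTE_THRESHOLD_FLOPS
            or training_cost >= TRAINING_COST_THRESHOLD)

def attorney_general_can_sue(covered: bool, safety_obligations_met: bool,
                             mass_casualties: bool, damages: float) -> bool:
    # Narrow liability: the developer of a covered model skipped its
    # safety obligations AND a catastrophe actually occurred.
    catastrophe = mass_casualties or damages >= DAMAGES_THRESHOLD
    return covered and not safety_obligations_met and catastrophe

Note the conjunction in that last line: under this reading, skipping the safety obligations alone doesn’t trigger a suit, and neither does a catastrophe on its own.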

How people feel about it: While the bill is unpopular in Silicon Valley, multiple polls have shown public support for it. One found that 77% of California voters support the bill’s proposals. And polling from the Artificial Intelligence Policy Institute (AIPI) recently found that 65% of Californians support the bill in its current form.

A screenshot of the AIPI’s recent polling.

A few last thoughts here. First, many of these companies — OpenAI, Anthropic, etc. — have called for the regulation of their industry, and they are now balking at a piece of legislation that promises to call them on their BS. OpenAI has warned for months of the existential risks of AI; despite the lack of science supporting some of those risks, SB 1047 basically says: ‘okay, put your money where your mouth is. If something goes wrong — which you say is possible — you will be held responsible.’

Remember that letter — oozing with AI hype — that said we must mitigate the “extinction risk” of AI? That was signed by the CEOs of DeepMind, Anthropic and OpenAI, in addition to plenty of researchers and academics. 

  • Wouldn’t this bill do exactly what that statement called for? 

  • It seems that these people don’t believe their own hype about AI. And it seems very clear that, hype or not, they do not want to bear responsibility for anything that might go wrong. 

No piece of legislation is going to be perfect. But many criticisms of this bill are little more than inflammatory fabrications. And none of Dr. Li’s arguments hold up.


A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Your view on paying for an in-home robot assistant:

20% of you said you absolutely would; 18% of you said you never would.

The rest weren’t sure, with some saying they would consider it if it were under a certain price.

Something else:

  • “If the robot was covered by insurance to help care for the elderly, and if the quality of care was sufficient, and if a sufficient human was not available at the same cost, then yes, I would use a robot to provide this service.”

Never:

  • “I don't believe there will not be a malfunction that could make it dangerous for the elderly.”

What do you think about SB 1047?
