
⚙️ OpenAI's Superalignment team implodes; safety research wasn't a priority

Good Morning, and happy Monday. Last week, I chatted with Kian Katanforoosh, an award-winning computer scientist who has worked closely with Andrew Ng, a celebrated pioneer in the AI space. 

Katanforoosh’s great passion is teaching people about AI at a time when technological skill sets must evolve more rapidly than ever before. Read on for the full story. 

In today’s newsletter: 

  • ⛑️ Jan Leike says safety wasn’t a priority at OpenAI

  • 🛜 The FTC has a few questions about AI

  • 🏛️ Colorado signs first comprehensive U.S. AI legislation

  • 🏦 Exclusive Interview: Deeplearning.ai founding member on the importance of regulation for job creation

Jan Leike says AI safety wasn’t a priority at OpenAI

Image Source: OpenAI

In the wake of OpenAI’s recent exodus of safety researchers, former Superalignment co-lead Jan Leike posted a viral thread explaining why he left: a growing disagreement with company leadership over OpenAI’s core priorities.

  • “Over the past few months my team has been sailing against the wind,” he wrote. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

  • “Over the past years, safety culture and processes have taken a backseat to shiny products.”

OpenAI later dissolved its Superalignment team entirely. Meanwhile, reports of a culture of broken trust around safety efforts at OpenAI have proliferated, with former employees afraid to share their names due to a non-disparagement agreement (tied to their equity) that Sam Altman claimed he knew nothing about.

Now we know what Jan saw ... Do you trust OpenAI?


The FTC has a few questions about AI

Image source: Unsplash

The Federal Trade Commission’s Office of Technology (OT) last week highlighted eight “questions of interest” regarding AI. The OT’s goal at this time is to simply learn more about these specific areas:

  1. AI & Fraud

    • How does AI supercharge fraud (it does, through enhanced speed, scale and personalization)? What safeguards exist to protect against this, and are those safeguards at all effective?

  2. AI & Competition

    • How are firms with market power & vertically integrated tech stacks leveraging that power in the AI sector? 

  3. Algorithmic Pricing

    • How are companies using AI to fix prices? 

  4. Surveillance & Data Privacy

    • What is the scope and scale of first-party data collection? 

  5. Data Security

    • How common are security vulnerabilities within LLMs? 

  6. Open Models

    • How do models with open weights impact competition? 

  7. Platform Design

    • How are design choices by social media/gaming platforms exacerbating or mitigating problematic usage or other mental health harms?

  8. Digital Capacity

    • How can we cultivate a pipeline of tech skills that can be translatable to tech policy?

Together with Zaplify

Need more prospects? Try this new AI Sales Assistant

Spam filters. Irrelevant LinkedIn InMail templates. Obvious automated follow-ups that don’t work.

If you’re ready to ditch the {insert_first_name} tags for real results, try Zaplify’s new AI Sales Assistant.

Pairing human touch with a savvy AI, this tool helps find your next prospect, engage meaningfully, and retain relationships – all in one platform.

Start by searching from 450 million people, and the AI Sales Assistant will…

  • Suggest similar people worth pursuing

  • Draft hyper-personalized messages based on each prospect’s profile and background

  • Revive old leads and help you reach a new level of reply rates

Plus, it unifies your LinkedIn and email chats so you never lose a prospect again.

Join the waitlist and be the first to experience the future of authentic outreach.

Colorado enacts first piece of U.S. AI legislation

Colorado State Capitol

Image Source: Unsplash

Colorado Gov. Jared Polis on Friday signed the Consumer Protections for AI bill into law, making it the first piece of comprehensive U.S. AI legislation. It won’t enter into force until February 2026. 

The bill is specifically focused on preventing discrimination, and targets developers of high-risk systems, which it defines as any system involved in making “consequential decisions.” 

Key Points: 

  • Developers and deployers (doing business in Colorado) must use “reasonable care” to protect consumers from “known or reasonably foreseeable” risks of algorithmic discrimination. 

  • Developers of high-risk systems must provide deployers with information & disclosures regarding the models, including a summary of data used to train the model and potential biases.

  • Deployers must conduct regular impact assessments. 

The Colorado Tech Association is (not surprisingly) unhappy with the legislation. Polis isn’t thrilled with it either; he signed “with reservations,” and noted concern about how it might impact the tech industry. 

The Future of Privacy Forum, meanwhile, called it a “watershed moment.” 

💰AI Jobs Board:

  • Data Scientist: Compunnel · United States · Hybrid; Sacramento, CA · Full-time · (Apply here)

  • Quantum Research Scientist: IBM · United States · Yorktown Heights, NY · Full-time · (Apply here)

  • Staff AI Linguist: LinkedIn · United States · Hybrid; Mountain View, CA · Full-time · (Apply here)


🌎 The Broad View:

  • OpenAI employees can’t criticize the company after they leave (Vox).

  • BlackRock is in talks with multiple governments about ways to support investments in AI (Reuters).

  • Johns Hopkins students designed a silencer for leafblowers (Fortune).


Together with Sana

Work faster and smarter with Sana AI

Meet your new AI assistant for work.

On hand to answer all your questions, summarize your meetings, and support you with tasks big and small.

Try Sana AI for free today.

Deeplearning.ai founding member on the importance of regulation for job creation 

Created with AI by The Deep View.

Kian Katanforoosh – who co-created Stanford’s deep learning class before becoming a founding member of deeplearning.ai (all alongside renowned computer scientist Andrew Ng) – has one big concern when it comes to AI: that the workforce will get left behind. 

I asked him what his single greatest AI-related concern is; his answer, without hesitation, had nothing to do with unfounded fears of a singularity, or of active AI-related harms. Instead, his concern is “that the gap between the techies and the rest is going to keep growing.”

But he remains excited by the fact that companies outside of the tech space haven’t given up efforts to upskill and train their workers. This, he said, is the “right way to go.” 

  • “I don’t think we need everyone to be a data scientist,” he said, adding that instead of abandoning their areas of expertise, people should work to “learn AI and bring it to your space.” 

A key element of this sweeping upskilling effort that Katanforoosh believes is so important involves smart regulation, something that he said could act as an “incredible vehicle of job creation.” (Other experts in the field, including Dr. Suresh Venkatasubramanian, have told me similar things about regulation and job creation). 

Citing Europe’s GDPR, which is estimated to have created around 500,000 jobs, Katanforoosh said that regulation forces companies to think longer term, which is a huge win for the workforce.

  • He said that responsible AI regulation would create a whole new set of jobs in measuring, evaluating and red-teaming AI models. 

  • He said that such regulation would force “us to upskill the workforce to a point that it will help us in the future.” 

And when such regulation does (hopefully) come around, Katanforoosh hopes that it will additionally address growing monopolies in the sector, something he is already seeing (& something the FTC is curious about, as I mentioned above). 

Katanforoosh sees the AI market as consisting of three layers: infrastructure, foundation model and application. The first two layers, he said, are already dominated by a few companies; Nvidia and the cloud providers in the first and Big Tech in the second. 

  • “I just really hope that the type of monopolies we're seeing in the first two layers are not going to happen in the third layer, because I think it will create issues later on for this entire ecosystem,” he said. 

  • “I'm worried that without the right regulations, the app layer is going to just be deferred to Big Tech, and that's going to create a less innovative and less progressive ecosystem.”

Image 1

Which image is real?


Image 2

🔭 Tools: *

  • OneNode: An AI-powered note-taking app.

  • Breezemail: An AI-powered tool for organizing your email inbox.

  • aiPDF: An AI-powered tool for summarizing documents.

Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link

SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.

One last thing👇

That's a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today's email?


We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View