
⚙️ The strange story of OpenAI and Sam Altman

Good morning. Ever since reports first surfaced that OpenAI was planning to restructure into a for-profit, I’ve wanted to step back and examine the startup’s history.

So, we’re diving into it today.

OpenAI has been a core part of the current AI moment, and the startup is decidedly weird.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Tokyo’s disaster detection

  • Microsoft ships massive Copilot update

  • The strange story of OpenAI and Sam Altman

AI for Good: Tokyo’s disaster detection

Source: Created with AI by The Deep View

The Tokyo metropolitan government has deployed a high-tech answer to disaster mitigation: a network of cameras and artificial intelligence. 

The details: The system analyzes real-time footage from cameras to automatically identify fires and structural collapses. 

Once identified, it sends that information — including details on the incident location and the scope of the disaster — to the necessary emergency response teams. 

The system began a full-scale deployment in March. 
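Tokyo hasn’t published the system’s internals, but the reported flow — wait, the flow it describes (score live camera frames, then escalate confident detections to responders) maps onto a familiar pipeline. Below is a minimal, hypothetical Python sketch of that loop; detect_incident, notify_responders and the alert fields are stand-ins of our own invention, not details of the actual deployment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Incident:
    kind: str          # e.g. "fire" or "structural_collapse"
    confidence: float  # model score in [0, 1]

def detect_incident(frame: bytes) -> Optional[Incident]:
    # Hypothetical stand-in for the vision model that scores each frame;
    # a real deployment would run inference here.
    return None

def notify_responders(camera_id: str, incident: Incident) -> None:
    # Hypothetical dispatch hook; how Tokyo actually routes alerts is not public.
    print(f"ALERT: {incident.kind} at camera {camera_id} "
          f"(confidence {incident.confidence:.2f})")

def monitor(feeds: dict[str, bytes], threshold: float = 0.9) -> None:
    # Score the latest frame from each camera and escalate confident detections,
    # mirroring the detect-then-notify flow described above.
    for camera_id, frame in feeds.items():
        incident = detect_incident(frame)
        if incident is not None and incident.confidence >= threshold:
            notify_responders(camera_id, incident)
```

The threshold parameter reflects the usual design tradeoff in systems like this: set it too low and responders drown in false alarms; set it too high and real incidents slip through.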

Design and Make Anything with Autodesk software

Autodesk Flex gives you the freedom to access a variety of Autodesk products on an as-needed basis. 

By pre-purchasing tokens, you can unlock different tools for a 24-hour period, making it a great option for occasional users or teams who don't need full-time access. You only pay for what you use, giving you full control over costs.

Flex is simple to manage, letting you assign users and control product access based on your team’s needs. Explore a wide range of Autodesk tools without committing to long-term subscriptions.

Learn more about how Autodesk Flex works here.

Microsoft ships massive Copilot update

Source: Microsoft

Microsoft rolled out a huge overhaul of its generative AI-enabled Copilot feature Tuesday. 

The details: The refresh of the software boils down to three main features: Copilot ‘voice,’ ‘vision’ and ‘Think Deeper.’ 

  • The voice feature lets users speak with Copilot aloud; the vision feature lets Copilot see what a user is looking at on a given website. Microsoft said Copilot can then use its voice feature to “talk” with users about that content in real time.

  • Think Deeper, similar to OpenAI’s o1, has Copilot take more time before responding so it can reason through more complex prompts.

Microsoft said that safety and security are a “top priority.” It said that all Copilot sessions are opt-in, adding that none of the data viewed by Copilot will be stored, processed or used for training purposes. 

Microsoft’s goal is to create an “AI companion for everyone.” Mustafa Suleyman, the CEO of Microsoft AI, called the launch the “beginning of a fundamental shift” in how we interact with technology. 

Microsoft did not address issues of bias and hallucination in the underlying models, nor did it say what Copilot was trained on or how energy-intensive it is to operate at this scale.

The Biggest Disruption to IP Since Disney

History is being made in the $2T global entertainment & media industry and you can get a piece by investing in Elf Labs — but only for a limited time.

With over 100 historic trademark victories, Elf Labs owns rights to some of the highest-grossing characters in history, including Cinderella, Snow White, Little Mermaid, and more. These icons have generated tens of billions in merchandise revenue alone since their inception.

Now, Elf Labs is revolutionizing these characters with patented next-gen technology, including AR, VR and advanced compression algorithms for an unprecedented level of immersion. From virtual reality — without headsets — to AI-powered talking toys, this may be the biggest disruption to IP since Disney.

Become an Elf Labs shareholder now. But hurry, the round is almost full & closing this month!

  • Peak is approaching, but is your customer service team ready?

  • Boost your software development skills with generative AI. Learn to write code faster, improve quality, and join 77% of learners who have reported career benefits including new skills, increased pay, and new job opportunities. Perfect for developers at all levels. Enroll now.*

  • Malaysia plans national cloud policy, AI regulations (Reuters).

  • AI chipmaker Cerebras files for IPO to take on Nvidia (CNBC).

  • Iran launches missile attack against Israel (Politico).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

The strange story of OpenAI and Sam Altman

Source: Created with AI by The Deep View

OpenAI is reportedly on the brink of restructuring into a for-profit corporation, one in which its non-profit board would hold a minority stake but would no longer have any real power over the operations of the company.

As part of this, it has been reported that CEO Sam Altman, one of the few founders remaining at the company, will get a massive equity stake in OpenAI, a company that could soon be valued at $150 billion (despite reportedly being on track to lose $5 billion in 2024).

  • Former OpenAI researcher Carroll Wainwright said that OpenAI was structured as a non-profit to do the “right thing” when faced with high stakes; the company’s abandonment of the non-profit now indicates that the “promise was ultimately empty.”

  • “You should not believe OpenAI when it promises to do the right thing later,” he said.

In light of all this, I thought it would be important to take a step back to examine the history of OpenAI and its strange structure.

This story is not about scientific advancements. It’s not about utopias or dystopias or the betterment of mankind. 

It is about power. 

Sam Altman’s rise

Altman is a Stanford computer science dropout; in 2005, he started Loopt, a social networking app that attracted some $30 million in venture capital funding but failed to take off. He and his co-founders sold it in 2012 for $43 million, and two years later, Altman was selected to run Y Combinator, the venture capital firm, founded by Paul Graham in 2005, that had funded Loopt.

Graham said in 2016 that “Sam is extremely good at becoming powerful.”

Altman stepped down from Y Combinator in 2019 to helm OpenAI. 

OpenAI’s foundation

OpenAI was launched in 2015 as a non-profit research lab by a team that included Altman, Elon Musk and Peter Thiel. The idea, according to Musk, was to develop AI “in a way that is safe and beneficial to humanity.”

  • Musk donated an initial $100 million (of the $1 billion he had promised), though he said that the company would need billions per year to survive.

  • He wanted OpenAI to merge with Tesla, and he wanted full control of the company. When this didn’t pan out, he left OpenAI in 2018. With Musk’s millions off the table, OpenAI needed a source of funding; in 2019, it transitioned into a hybrid structure, a for-profit arm governed by the non-profit, which it said would allow it to pursue its mission of AI for the people while securing the funds needed to complete the project.

The non-profit board remained in control. 

Shortly after this, OpenAI secured its first big investor in Microsoft, which has sunk around $13 billion into the company. 

In a 2015 blog post, OpenAI said that “as a non-profit, our aim is to build value for everyone rather than shareholders.” Nine years later, securing investors has become a core focus for the company.

The saga 

Things got interesting in 2022, when OpenAI launched ChatGPT — reportedly without the knowledge of the non-profit board. OpenAI became the firm at the epicenter of the AI storm, and Altman became a celebrity. 

But in a whirlwind weekend in 2023, the board fired Altman for not being “consistently candid” with board members; just a few days later, Altman was rehired and the board replaced. As we’ve reported, OpenAI has since bled both founders and safety researchers at a high rate. 

  • Former board member Helen Toner wrote in May that “the board’s ability to uphold the company’s mission had become increasingly constrained due to long-standing patterns of behavior exhibited by Altman.” 

  • She said that current financial incentives make self-regulation an impossibility, and has repeatedly called for strong governance and oversight. 

Shortly before that whole saga played out, Altman said: “No one person should be trusted here. The board can fire me and that’s important.”

Surveillance capitalism & AI 

The rise of the internet was an exciting moment of democratization and transformation, and despite the dot-com bubble, it took hold. Every facet of modern society is built on the internet.

But it didn’t take long for internet companies — which morphed into social media and cloud companies — to become enamored with the idea of a surveillance-based business model. 

We can surf the web at no cost today because, for years, these companies have been gathering an enormous quantity of data about us, and then either selling that data to third parties or leveraging that data for, say, targeted advertising. 

  • As Shoshana Zuboff, professor emerita at Harvard Business School, has said, this effort was undertaken quietly and in the background. Now we know it goes on, but the internet is too vital a part of modern life to back out of now.

  • “I define surveillance capitalism as the unilateral claiming of private human experience as free raw material for translation into behavioral data,” Zuboff said in 2019. “These data are then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later.”

The AI boom, operated and controlled by the same Big Tech giants we were talking about in the ’00s and the ’10s, has further intensified the hunger for data. This is partly why these companies are in such a strong position for AI: they have the data already, and they have the means to tap into even more of it.

Powered by that data, AI is enabling a whole new scale of hyper-surveillance; algorithms can parse through cameras and satellite feeds, audio, biometric data and online activity to closely track — and predict behavior for — anyone, anywhere and at any time. This has been ongoing for years, according to the Carnegie Endowment for International Peace, giving governments “unprecedented capabilities to monitor their citizens and shape their choices.”

As Meredith Whittaker, the president of Signal, has said: “I see AI as born out of the surveillance business model . . . AI is basically a way of deriving more power, more revenue, more market reach.”

When OpenAI started, it promised to do good, similar to Google’s early “don’t be evil” days. But the fiscal incentives don’t jibe with that, and now, with the non-profit on the brink of being hamstrung, any wide-eyed illusion that it was positioned for good has vanished.

It is a business. Nothing more. Sidelining the non-profit removes the shield OpenAI has so often raised, the claim that it is acting for the greater good; and in that sense, this is a good thing. OpenAI is no longer trying to look like something it isn’t.

Examine the role of a simple for-profit corporation in developing and selling a technology as risky and ethically fraught as AI, and it becomes clear that incentives matter.

We need clear governance and regulation to bend those incentives back in favor of people, rather than corporate profits. What Big Tech and OpenAI have shown us, again and again, is that we cannot trust them to do this on their own (the non-profit has been beaten back and overruled; the for-profit has won out in the end).

AI has great potential. But the current environment is not structured to allow the masses to access it. The current incentives would need to shift dramatically for the idea of “AI for Good” to become a reality.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “I can't see a situation where AI would create dirty snow.”

Selected Image 2 (Right):

  • “Something about the clouds in the first photo doesn't sit right.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on transparency & trust in AI:

A third of you said trust and transparency are the missing elements in AI. The same share said that we are missing far more, while 23% said current systems seem trustworthy enough.

Missing way more:

  • “We are also missing a general understanding of how AI results are generated and the impact of biased datasets on responses that require applying judgment … Users should learn, or be made aware of, the potential pitfalls of an AI-generated response … to discount (or amplify) what they get from AI platforms.”

Something else:

  • “Transparency and trust are always going to be issues in the digital space. It follows on from life in general and always has an element of risk that needs to be acknowledged and accepted...or live under a rock.”

What do you think about OpenAI's transition to for-profit?
